Human-Machine Collaboration in a Changing World 2022 (HMC22)

On Explanations, Fairness, And Appropriate Reliance in Human-AI Decision-Making

December 01, 2022, 2:10 PM - 2:20 PM

Location: Online and Paris, France

Jakob Schöffer, Karlsruhe Institute of Technology

Explanations are often framed as an essential pathway towards improving fairness in human-AI decision-making. Empirical evidence on explanations’ ability to enhance distributive fairness is, however, inconclusive [1]. Prior work has found that humans’ perceptions of an AI system are influenced by the features the system considers in its decision-making process [2,3,4]. For instance, if explanations highlight the importance of sensitive features (e.g., gender or race), humans are likely to perceive such a system as unfair. However, researchers have challenged the assumption that an AI system’s “unawareness” of sensitive information generally leads to fairer outcomes [5,6,7]. Moreover, the relationship between humans’ perceptions and their ability to override wrong AI recommendations and adhere to correct ones (i.e., to appropriately rely on the AI) is not well understood.

In our work, we examine the interplay of explanations, perceptions, and appropriate reliance on AI recommendations. We argue that claims regarding explanations’ ability to improve distributive fairness should, first and foremost, be evaluated against their ability to foster appropriate reliance, i.e., to enable humans to override wrong AI recommendations and adhere to correct ones. To empirically support our conceptual arguments, we conducted a user study on the task of occupation prediction from short bios. In our experiment, we assess differences in perceptions and reliance behavior when humans do or do not see explanations, and when these explanations indicate the use of sensitive features in predictions vs. the use of task-relevant features. Ultimately, we test for differences in perceptions and reliance behavior across conditions and derive implications for how explanations’ role in human-AI decision-making should be characterized.
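To give a flavor of what such explanations can look like, the sketch below trains a toy bag-of-words occupation classifier and reports the words with the largest coefficients for a given bio. This is only an illustrative assumption, not the study’s actual pipeline or explanation method; the bios, labels, and the `explain` helper are hypothetical, but they show how an explanation can surface either sensitive tokens (e.g., pronouns) or task-relevant tokens (e.g., “surgery”).

```python
# Illustrative sketch only (not the authors' study setup): a bag-of-words
# occupation classifier whose per-word coefficients act as a simple explanation,
# exposing whether sensitive tokens (e.g., "she", "he") or task-relevant tokens
# (e.g., "surgery", "lawsuit") drive a prediction.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny hypothetical training data standing in for short bios.
bios = [
    "she performed surgery at the clinic",
    "he argued the case before the court",
    "she filed the lawsuit on behalf of her client",
    "he operated on the patient yesterday",
]
labels = ["surgeon", "attorney", "attorney", "surgeon"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(bios)
clf = LogisticRegression().fit(X, labels)

def explain(bio: str, top_k: int = 5):
    """Return the words in `bio` with the largest absolute model coefficients."""
    vocab = vectorizer.vocabulary_
    coefs = clf.coef_[0]
    words = [w for w in vectorizer.build_analyzer()(bio) if w in vocab]
    scored = {w: coefs[vocab[w]] for w in words}
    return sorted(scored.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]

# Example: which words does the model lean on for this (hypothetical) bio?
print(explain("she performed surgery at the hospital"))
```

Whether an explanation of this kind highlights a pronoun or a domain word is exactly the contrast between the “sensitive features” and “task-relevant features” conditions described above.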

Our findings show that explanations influence humans’ fairness perceptions, which, in turn, affect reliance on AI recommendations. However, we observe that low procedural fairness perceptions lead to more overrides of AI recommendations, regardless of whether those recommendations are correct or wrong (a phenomenon sometimes referred to as “algorithm aversion”). This (i) raises doubts about the usefulness of common explanation techniques for enhancing distributive fairness and, more generally, (ii) emphasizes that fairness perceptions must not be conflated with distributive fairness.