Human-Machine Collaboration in a Changing World 2022 (HMC22)

Overcoming Algorithm Aversion through Process Control: People Will Use Imperfect Algorithms if They Can (Even Slightly) Customize Them

December 01, 2022, 2:20 PM - 2:30 PM

Location:

Online and Paris, France

Lingwei Cheng, Carnegie Mellon University

Understanding the effects of providing greater control to intended users of algorithmic tools is central to advancing the responsible development and deployment of AI technologies in human-in-the-loop systems. While there is now an increasing emphasis on participatory design methods for AI development, algorithms mostly continue to be designed by third-party researchers and organizations that may not fully understand users’ needs and values. This can lead to algorithm aversion, wherein human decision-makers are reluctant to use algorithms even when those algorithms outperform expert human judgment [1–3]. Studies have found that users are more willing to use algorithms as long as they have some control over the outcomes [4], and are more likely to perceive the algorithms as fair in those settings [6]. This ability to appeal or modify the outcome of a decision once it has been made is termed “outcome control” [5]. Outcome control can be contrasted with “process control”, which entails control over the processes that produce the algorithmic tool (e.g., data curation, the training procedure). The effect of process control on algorithm aversion remains under-explored. We ask: Does process control mitigate algorithm aversion? Does providing both process control and outcome control mitigate algorithm aversion more than either form of control on its own?
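To make the first notion concrete, here is a minimal sketch of outcome control, assuming (in the spirit of the forecasting task of [4]) that the participant may adjust the algorithm’s prediction by a bounded amount before it is recorded; the function name, cap, and values are illustrative assumptions, not the study’s interface.

```python
# Minimal sketch of "outcome control": the participant sees the model's
# prediction and may adjust it by a bounded amount before the final
# decision is recorded. The cap of 10 units is an illustrative assumption.

def apply_outcome_control(model_prediction: float,
                          user_adjustment: float,
                          max_adjustment: float = 10.0) -> float:
    """Clamp the participant's adjustment to [-max_adjustment, +max_adjustment]
    and add it to the algorithm's prediction to form the final estimate."""
    bounded = max(-max_adjustment, min(max_adjustment, user_adjustment))
    return model_prediction + bounded

# Example: the model predicts 72.0; the participant tries to lower it by 15,
# which is clamped to the allowed -10, yielding 62.0.
print(apply_outcome_control(72.0, -15.0))
```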
We conduct a replication study of outcome control [4] and test novel process-control conditions on Amazon Mechanical Turk (MTurk) and Prolific, allowing users to customize which input factors or which model family (e.g., linear regression, decision trees) is used in the training process. Our results mostly confirm prior findings on the mitigating effects of outcome control. We find that process control in the form of choosing the training algorithm mitigates algorithm aversion, but changing the inputs does not. Choosing the training algorithm also mitigates algorithm aversion to the same extent as outcome control. Lastly, giving users both outcome and process control does not reduce algorithm aversion more than either form of control alone. Our study contributes design considerations for mitigating algorithm aversion and reflects on the challenges of replicating crowdworker studies of human-AI interaction.
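For concreteness, a companion sketch of process control as described above, assuming scikit-learn; the dataset, feature menu, and two-model menu are illustrative stand-ins for the study’s materials, not its actual code. The participant chooses the model family and/or the input features before training.

```python
# Minimal sketch of "process control": before training, the participant
# chooses the model family and/or the input features used to fit the
# predictor. Data and the model menu are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Hypothetical menu of model families a participant may choose from.
MODEL_FAMILIES = {
    "linear_regression": LinearRegression,
    "decision_tree": DecisionTreeRegressor,
}

def train_with_process_control(X, y, feature_idx, model_family):
    """Fit a model using only the participant-selected features and family."""
    model = MODEL_FAMILIES[model_family]()
    model.fit(X[:, feature_idx], y)
    return model

# Toy regression data standing in for the prediction task shown to users.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Example: the participant keeps features 0, 2, and 3 and picks a tree.
chosen = [0, 2, 3]
model = train_with_process_control(X_train, y_train, chosen, "decision_tree")
print("Held-out R^2:", model.score(X_test[:, chosen], y_test))
```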