Human-Machine Collaboration in a Changing World 2022 (HMC22)

Treating human uncertainty in human-machine teaming

December 01, 2022, 4:15 PM - 4:25 PM

Location:

Online and Paris, France

Katherine Collins, University of Cambridge

If a collaborator is unsure about a task that you are working on together, you expect them to communicate their uncertainty. We argue this practice should be followed when developing and deploying human-machine teams: if any team member is uncertain, efforts should be made to communicate and resolve or compensate for such uncertainty. Efforts have been made in the machine learning community to design frameworks which encourage a model to better incorporate uncertainty in its outputs [1], for instance, generating calibrated predictions [2, 3] or producing a set of plausible responses rather than a single estimate when unsure [4, 5]. However, while human probabilistic reasoning has been studied extensively within the cognitive science, psychology, and crowdsourcing communities [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], work on representing human uncertainty specifically in machine learning has been limited and, when considered, is often captured through a single scalar measure [17, 18, 19, 20, 21, 22]. We argue for further elicitation and incorporation of human uncertainty, not just model uncertainty.
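As a minimal illustration of the second idea, set-valued prediction when unsure, consider the sketch below. The simple threshold rule and all names are ours for exposition only; the cited works [4, 5] use more principled constructions such as conformal prediction.

```python
import numpy as np

def prediction_set(probs, threshold=0.10):
    """Return every class whose predicted probability clears `threshold`,
    ordered by decreasing confidence, instead of forcing a single guess."""
    probs = np.asarray(probs, dtype=float)
    candidates = np.where(probs >= threshold)[0]
    return candidates[np.argsort(-probs[candidates])].tolist()

# A confident model collapses to a single label ...
print(prediction_set([0.92, 0.05, 0.03]))   # -> [0]
# ... while an uncertain one surfaces all plausible answers.
print(prediction_set([0.45, 0.40, 0.15]))   # -> [0, 1, 2]
```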

Our research program aims to address this gap. In Collins et al. [23], we took a step in this direction by eliciting soft labels over multinomial label distributions representing annotators' uncertainty in challenging image classification tasks, finding that training models with these richer labels (sketched below) can improve model generalization, robustness, and calibration. But this work has so far focused only on improving the performance of the machine learning system itself; human uncertainty also has the potential to support the design of more effective and reliable collaborative systems. Recent work has already begun to show that humans who are uncertain are more likely to side with a model, even when the model is wrong [24]. How then can we guard against propagating human biases in decision-making, while still designing systems which complement humans [25], accounting for their uncertainty to empower safer decisions under ambiguity? This is crucial in high-stakes settings, such as forming a treatment plan for a patient with comorbidities, or deciding among a set of policies to enact in response to geopolitical or climate instability. We believe the next generation of human-machine collaborative systems will benefit from a careful treatment of both model and human uncertainty to adapt to an ever-uncertain world, and will do so in ways that engender trust through appropriate transparency [26, 27].
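To make the soft-label idea concrete, the sketch below contrasts training against a conventional one-hot label with training against an annotator-derived label distribution. This is a generic soft-label cross-entropy example with invented toy numbers, not the exact elicitation or training pipeline of Collins et al. [23].

```python
import torch
import torch.nn.functional as F

# Toy example: annotators are split on whether an image shows class 0 or 1,
# so their pooled judgement becomes a soft target rather than a one-hot label.
soft_target = torch.tensor([[0.6, 0.3, 0.1]])    # elicited label distribution
hard_target = torch.tensor([0])                   # conventional "hard" label

logits = torch.randn(1, 3, requires_grad=True)    # stand-in for a model's output

# Cross-entropy against the full distribution retains the annotators'
# uncertainty; the usual hard-label loss discards it.
soft_loss = F.cross_entropy(logits, soft_target)  # probabilistic targets (PyTorch >= 1.10)
hard_loss = F.cross_entropy(logits, hard_target)
soft_loss.backward()
```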