Human-Machine Collaboration in a Changing World 2022 (HMC22)

From Accuracy to Alignment: How Radiologists Work with and Build Trust in Machine Learning Algorithms

December 02, 2022, 1:10 PM - 1:20 PM

Location:

Online and Paris, France

Wanheng Hu, Cornell University

The increasing use of machine learning algorithms to support human decision-making has brought about the popular notion of "trustworthy AI." Accuracy and explainability, among other qualities, are deemed two key elements of the trustworthiness of machine learning systems. They have become not only essential terms for formulating ethical AI guidelines but also important goals for computer science research. The underlying assumption is that, if the output of AI systems is more "accurate" and "explainable," then the systems become more trustworthy and more trusted by users. Drawing on extensive participant observation and 36 semi-structured interviews with radiologists in China, this paper problematizes such assumptions and proposes an alternative framework centered on "human-machine alignment" to understand the issue of trustworthiness. I argue that radiologists develop their trust based on the degree of alignment between their own judgment and the algorithmic output, comprising what I call "direct alignment" and "adjusted alignment." Regardless of the claimed performance indicated by statistical parameters such as sensitivity and specificity, radiologists are still prompted to judge whether the algorithmic decisions directly align with their own for each case. Such direct alignment practices are motivated by two factors. First, the probabilistic nature of the metrics used to evaluate an algorithm's performance cannot guarantee its correctness in the specific case at hand, especially since a ready "ground truth" is unavailable in real-world clinical practice. Second, under current legal and regulatory regimes, radiologists are held accountable for medical imaging reports and are therefore motivated to double-check the AI's recommendations. Yet, even when direct alignment is low, radiologists may still develop trust in and make use of the algorithmic output if they can observe certain patterns in, and thus explain away, the misaligned algorithmic output. This leads to an "adjusted alignment" based on the radiologist's own interpretations. In conclusion, the paper suggests that notions of accuracy and explainability, rooted in algorithmic testing and design, are misplaced in conceptualizing users' trust in AI in real-world applications; instead, the trustworthiness of AI is a result of human-machine alignment and cannot be reduced to intrinsic features of the algorithms.