Human-Machine Collaboration in a Changing World 2022 (HMC22)

Contributed talk: Whose fault is it? Liability profiles in Surgical Systems

December 01, 2022, 4:00 PM - 4:15 PM

Location: Online and Paris, France

Maria-Camilla Fiazza, University of Verona

Robotic surgery has become the standard of care in a growing number of procedures. The surgical robots currently on the market are highly sophisticated tools, operated under the near-complete control of a surgeon, with whom they can interact in a variety of ways. Although some autonomous capabilities are already within technical reach, they have not yet been deployed. The limiting factors are regulatory and legal uncertainty and the lack of precise computational correlates for the notions of responsibility and liability.

Whereas the regulatory intent behind requirements of trustworthiness, correctness, and fairness is clear, systematically unpacking the meaning of basic terms is needed before one can move from principle to practice in cyber-physical human systems. This work presents a perspective on the concepts necessary to navigate dependencies between decisions made jointly by man and machine, as they cooperate via a range of interaction modalities known as the levels of autonomy. We examine the conceptual landscape (e.g., supervision, joint control) through the lens of liability and in relation to the requirements outlined in the European Union’s proposed regulation for AI systems, the 2021 AI Act [1].

The surgical domain presents unique challenges that can illuminate the terms of the general debate on human-machine collaboration. The notion of safety as the avoidance of harm cannot be applied directly, because surgery is in fact about causing controlled (local) harm in pursuit of a system-level benefit. Safety emerges as tightly tied to decisional correctness. Regulations that require human supervision and mandate that humans may intervene at any point in the decisional process improve the chances of catching machine errors, at least wherever humans still have the upper hand. Paradoxically, they also enormously enlarge the error surface from which system-level errors can originate.

Grounded in examples from the surgical field, we explore ways in which humans, through their own sensory limitations and incomplete knowledge, can bias machines, endeavoring to distinguish the errors we must strive to avoid through careful and ethical design from the errors we must learn to accept.