Trusting AI in medicine

© Colourbox

The use of artificial intelligence in medicine offers new ways of making more precise diagnoses and relieving doctors of routine tasks. But how well do doctors really have to understand this technology in order to develop the "right" measure of trust in such systems? And does the use of AI lead to ethically relevant changes in the doctor-patient relationship? These and similar questions will be addressed by a joint project of THI Ingolstadt and the KU. The project partners are Prof. Dr. Matthias Uhl, who holds the Professorship for Social Implications and Ethical Aspects of AI, and Prof. Dr.-Ing. Marc Aubreville, Professor of Image Understanding and Medical Application of AI, both at the THI, as well as Prof. Dr. Alexis Fritz, holder of the Chair of Moral Theology at the KU. The project "Responsibility Gaps in Human-Machine Interactions: The Ambivalence of Trust in AI" is being funded by the bidt, the Bavarian Research Institute for Digital Transformation.

Monotonous tasks are time-consuming and tiring for humans. Having experienced doctors assess dozens of mammograms can have the unwanted side effect that small but diagnostically relevant details are overlooked. Putting AI to good use in this field has the potential to relieve humans of this burden and free up their capacity for decision-making. "This is based on the assumption that the human experts must be able to trust the AI system. This trust, however, can lead to the doctor not critically reassessing the AI decision," says Prof. Dr. Marc Aubreville. Even systems routinely used in the medical field are not infallible. That is, after all, why in all procedures humans are meant to be the final authority in the decision-making chain.