Towards meaningful human control: Using artificial intelligence in clinical decision-making

By Matthias Braun, Patrik Hummel, Susanne Beck, Peter Dabrock

Clinical decision-making can be challenging. The subject matter is complex. Decisions can have profound, long-lasting consequences. Evidence is imperfect, and informational asymmetries exist between those involved. Time constraints and economic pressures complicate the process further.

In view of difficulties like these, it is tempting to deploy technology to improve decision-making. The idea of computerized clinical decision-support systems is not new; it has been pursued for decades. But in the age of ubiquitous datafication, increased computing power, and advances in machine learning, new possibilities have appeared on the horizon. Granted, commentators in medicine and beyond caution against overblown expectations, unreflective use of buzzwords, and the massive gap between close-to-ecstatic fictions about AI-driven potential on the one hand and sober reality on the other.

But even fictions deserve reflection. And AI-driven transformations might be nearer than we think; indeed, they have already begun.

In our article, we scrutinize some of the ethical and legal challenges that arise in this process. One intuitive option would be to keep one’s focus on ethical principles, such as the established principles of biomedical ethics, and to apply them to situations in which medical AI is used. In the recent past, a variety of expert groups and stakeholder organizations have published guidelines and reports that compile and spell out catalogues of principles for AI within and across several sectors. Others highlight that principles by themselves might not be enough, and that bridging the gap to practice will be the crucial step.

We pursue a different avenue. Our approach is based on the conviction that the situation is more complicated. By appearing on the scene of clinical decision-making, AI affects and transforms modes of interaction between different agents in the clinic. Specifically, we distinguish modes in which AI functions in a quite straightforward, conventional manner as an auxiliary tool of the clinician—akin, perhaps, to a stethoscope or an ultrasound device—from instances in which it works in a more integrated, quasi-autonomous way, and from extreme cases in which the system is fully automated.

Moreover, different ways of being embedded in decision-making processes induce shifts in the application conditions of key normative notions, such as those figuring in the catalogues of AI principles. We focus on four concepts and their entanglements: trustworthiness, transparency, agency, and responsibility.

For example, consider agency. Even without AI-based decision-support systems (AI-DSS), it is an oversimplification to regard the clinician as the sole decision-maker. The ideal of shared decision-making involves clinician and patient exchanging perspectives and arriving at decisions together. With AI on the scene and embedded in the different modes of interaction just sketched, we might ask: how does this intertwinement of clinician and patient agency change? Does, could, and should the system attain quasi-agential features and authority in its own right?

Suppose these observations are correct, and AI-DSS eventually transform clinical interaction modes and the conceptual building blocks of bioethical theory. What follows? How should we proceed with this kind of promising, yet puzzling technology?

We pick up on what we perceive to be an attractive but somewhat fuzzy concept, the notion of ‘meaningful human control’, and develop suggestions on what exactly it could mean in connection with AI-DSS.

As a first example, AI-DSS are data-driven, and for data subjects—in this context, patients—the ideal of meaningful control calls for concrete modes of individual control over their data. We suggest that this requires envisioning patients as co-managers of their data.

Second, continuous reflection is needed on the role and decisional authority of the clinician. AI-DSS confront clinicians with opacity as well as uncertainty about the systems’ validity and error-proneness. At the same time, heightened expectations and perceived potentials of AI-DSS raise the question of the conditions under which clinicians can actually refrain from deploying such systems or, once they are deployed, make decisions that contrast with the system’s outputs.

Could such pressures undercut ‘meaningful human control’? We argue: it depends. One factor will be the strength of the evidence that, in the particular context at hand, reliance on AI-DSS addresses the patient’s needs better than alternative courses of action. But even once such evidence is on the table, the case is not settled. We propose that, for ‘meaningful human control’, any remaining risks and uncertainties would need to be deliberated upon by humans, in particular clinicians and patients.

Even with the most sophisticated AI-DSS, complexity and uncertainty will most likely remain part of medical practice. AI-DSS might help navigate them, but will not resolve them. It remains a critical task of the medical profession to provide the competence and resources for assessing, avoiding, and taking risks responsibly, and to counsel the patient throughout this process.


Paper title: Primer on an ethics of AI-based decision support systems in the clinic [OPEN ACCESS ARTICLE]

Authors: Matthias Braun¹, Patrik Hummel¹, Susanne Beck², Peter Dabrock¹

Affiliations:

1 Institute for Systematic Theology, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany

2 Institute for Criminal Law and Criminology, Leibniz University Hannover, Hannover, Germany

Competing interests: None
