By Andreas Wabro.
Achieving transparency and interpretability of algorithmic predictions remains an important research goal for many AI experts around the world. The epistemic benefits of explainable AI (XAI) methods have been widely discussed, and in the context of healthcare in particular, international institutions and academic experts often call for measures to improve physicians’ understanding of algorithmic output as far as technically feasible. Many experts argue, with good reason, that such XAI methods hold considerable promise, particularly in the field of medicine.
But what can be expected from XAI methods incorporated into the system design of AI-based decision support systems when such methods are applied in time-sensitive clinical environments?
Drawing on expert interviews we conducted with software engineers working in computational cardiology, we sought to elaborate the most prevalent ethical implications arising from XAI use in such demanding situations. While this question seemed trivial to us at first, it soon proved to be of far-reaching importance and raised striking ethical implications once we decided to dig deeper.
After conducting a detailed ethical analysis, we found that time-sensitive clinical environments may indeed conflict with the use of XAI for a number of reasons, which we discuss in our recent JME paper. Importantly, because time-sensitive environments typically require immediate medical attention or urgent surgical resolution, they often do not allow the physicians or surgeons responsible for making the clinical decision at hand to conduct a detailed analysis of the AI explanations provided.
However, in-depth analysis is often necessary to rule out potential XAI fallacies or intrinsic errors, which are difficult to detect with limited exposure time. In extreme cases, where the clinical environment prevents doctors or surgeons from adequately considering the system’s explanations, XAI methods could even harm patients’ physical integrity.
In particular, as XAI methods still appear to be prone to various errors, they present an intrinsic set of challenges that is poorly suited to time-critical clinical situations. In such scenarios, a well-designed black-box AI might, after weighing the conflicting arguments, be more ethically desirable than a less well-designed XAI that fails to live up to its promises and the heightened expectations that flow from its specific nature and design.
Paper title: When time is of the essence: ethical reconsideration of XAI in time-sensitive environments
Authors: Andreas Wabro, Markus Herrmann, Eva C Winkler
Affiliations: National Center for Tumor Diseases (NCT) Heidelberg, a partnership between DKFZ and Heidelberg University Hospital, Germany; Medical Faculty Heidelberg, Heidelberg University; Department of Medical Oncology, Section Translational Medical Ethics, Heidelberg University Hospital
Competing interests: None declared.
Social media accounts of post author: @EthicsWinkler; @ethicswinkler.bsky.social