Meeting the challenges of using automated second opinions

By Hendrik Kempt and Saskia K. Nagel.

Diagnostics is a difficult inferential process requiring an immense amount of cognitive labor. Not only must physicians gather evidence and evaluate how that evidence fits a patient’s symptoms, they must usually do so with imperfect knowledge in an ever-changing field of research, and with limited resources in an ever-growing arsenal of tools and means of producing evidence. Viewed from this perspective, it is no surprise that misdiagnoses occur, and they may not be avoidable altogether. Nevertheless, the responsibility of physicians remains: as experts with expert knowledge, they can be held responsible for misdiagnosing their patients.

One strategy to reduce misdiagnoses is to minimize the opportunity for mistakes. This stands in contrast to misdiagnoses due to the limits of human knowledge: that the first patients with a new disease, e.g., Covid-19, were misdiagnosed is not a “mistake” on the physicians’ part; there simply was no knowledge about that specific disease that could have been “misapplied”.

One way of reducing mistakes is to have another physician assess the patient and propose their own diagnosis (or review the first physician’s process) as a so-called “second opinion”. In varying degrees of formalization, this is already being done: some patients request a second opinion, some physicians ask their peers, and for some diseases a reliable diagnosis is too difficult, or the consequences of false positives or negatives too severe, to leave the decision to one person alone. With false negatives, delayed treatment may lead to a more problematic course of the illness, while false positives may cause psychological harm or lead to further painful testing or treatment.

When a diagnosis is complemented by a second opinion, however, the distribution of responsibility becomes more complicated than these formalizations suggest. If the physician giving the second opinion provides a misdiagnosis, leading the physician in charge to change their mind in the wrong direction, the misdiagnosis cannot reasonably be blamed on the physician in charge alone. From the perspective of responsibility, then, a second opinion ought to be considered relevant to the overall assessment of the primary physician’s ultimate diagnosis.

Some have suggested that artificially intelligent clinical decision support systems (CDSS) should be used to provide second opinions in medical diagnostics (in what follows, we concern ourselves with AI-based CDSS only). The benefits of such a proposal are clear: the ability of CDSS to search huge numbers of similar cases and symptoms to arrive at a diagnosis expands access to medical knowledge manifold; in theory, this knowledge could be kept up to date within the CDSS so that new insights into diseases are applied in almost real time; the speed and efficacy of CDSS could make second opinions a standard procedure for most diagnoses; and the proposal does not require CDSS to function fully autonomously, seemingly avoiding hard cases of so-called responsibility gaps (gaps in the chain of responsibility-tracking that open up when an autonomous artificial system is part of that chain).

Yet, however beneficial these features may be, the nature of second opinions is one of distributed responsibility. A second opinion can weigh as much in an overall diagnosis as the initial one. Replacing physician-provided second opinions with those of an AI, then, seems to require a willingness to assign responsibility to these machines, which most of us do not want. These concerns are especially difficult to resolve if we cannot even explain how these machines reach their conclusions, which may leave the deciding physicians unable to explain their diagnoses to their patients.

One could ask, then: given the issues with explainability and responsibility distribution, should we abandon the idea of using AI to reduce the cognitive labor load and to have more information checked before a diagnostic decision is made? We propose a rule that preserves the beneficial effects of such technology, its speed and its relative accuracy, in the face of the substantial moral concerns, namely its inability to take responsibility and its lack of explainability. The “rule of disagreement” allows a CDSS to propose a second opinion, and the physician may proceed if the CDSS confirms their diagnosis. If, however, the CDSS conflicts with the original diagnosis, a human third opinion may be required to break the tie.

On the one hand, this guarantees that AI-provided diagnoses are still considered and their benefits reaped; on the other hand, it does so in a way that requires the CDSS neither to take any responsibility nor to be fully explainable. If the CDSS merely confirms the physician’s diagnosis, the physician can be assured that their diagnosis has been checked. If the machine disagrees, they can consult another physician.
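To make the rule concrete, here is a minimal sketch of the decision flow in Python. The function name, the returned labels, and the example diagnoses are ours and purely illustrative; we simply assume a CDSS that returns a single candidate diagnosis as its second opinion.

    def apply_rule_of_disagreement(physician_diagnosis, cdss_diagnosis):
        """Decide the next step once the physician has committed to a diagnosis
        and the CDSS has returned its second opinion."""
        if cdss_diagnosis == physician_diagnosis:
            # Agreement: the physician may proceed, assured that their
            # diagnosis has been checked.
            return "proceed"
        # Disagreement: a human third opinion is required to break the tie.
        return "consult a third, human opinion"

    # Example: a disagreeing CDSS triggers the human tie-breaker.
    print(apply_rule_of_disagreement("pneumonia", "pulmonary embolism"))
    # -> consult a third, human opinion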

These deliberations are not without pitfalls, as we may anticipate a certain willingness to reverse the roles and have the CDSS propose its diagnosis first. That way, a physician could reverse-engineer their own diagnosis to fit the machine’s. However, both technical solutions (e.g., a merely reactive CDSS, as sketched below) and an appeal to the ethical behavior of physicians can address these concerns and thus keep the proposed rule usable.
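On one possible reading, and purely as an illustration, a “merely reactive” CDSS could be realized as an interface that withholds its assessment until a diagnosis has been committed. The class and method names below are hypothetical.

    class ReactiveCDSS:
        """Illustrative wrapper that releases its second opinion only after
        the physician has committed to a diagnosis."""

        def __init__(self, model):
            self._model = model  # any callable mapping patient data to a diagnosis

        def second_opinion(self, patient_data, committed_diagnosis):
            if committed_diagnosis is None:
                # No committed diagnosis, no output: this blocks the role reversal
                # in which a physician fits their diagnosis to the machine's.
                raise ValueError("A committed physician diagnosis is required first.")
            return self._model(patient_data)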

Paper title: Responsibility, Second Opinions, and Peer-Disagreement – Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts

Authors: Hendrik Kempt, Saskia K. Nagel

Affiliations: Applied Ethics Group, RWTH Aachen, Germany

Competing interests: None declared

Social media accounts of post authors: Hendrik Kempt: Twitter @hkem
