How much credibility does my testimony deserve? This is not for an algorithm to decide!

By Giorgia Pozzi.

The hype about the promises of machine learning (ML) systems in medicine is real, even if not always justified. As ethicists have increasingly pointed out in recent years, considerable work still needs to be done to ensure their responsible use and to safeguard fundamental bioethical principles such as justice and non-discrimination. But what if the harm that ML systems can cause to patients also affected them as epistemic subjects (i.e., as knowing subjects), thus preventing them from being meaningfully involved in medical decision-making?

This is precisely what happened to Kathryn, a woman suffering from acute pain who was denied medication due to a high risk score produced by an algorithmic system that informs medical decisions regarding opioid prescriptions. As I point out in my paper, Kathryn is not only unjustly denied medical attention and support; she is also silenced by the ML system providing her risk score. Her credibility is assessed on the basis of the risk score attributed to her, and her ability to communicate that, contrary to the ML prediction, she has never misused drugs is considerably constrained. She thus clearly suffers what Miranda Fricker has labeled testimonial injustice. Since ML systems play an increasingly relevant role in medicine, I am convinced that analyzing how they can epistemically harm patients is paramount.

This is a challenging task. As Fricker teaches us, epistemic injustices are rarely obvious. Rather, they lurk in the background, deeply rooted in prejudices and absorbed into our social structures. They are everywhere, yet they are often hard to identify and even harder to counteract. Their consequences, however, are painfully tangible for their victims: they manifest in the feeling of not being heard, of being excluded, misunderstood, unacknowledged, silenced. These are highly concerning aspects that stand in the way of proper medical care and undermine a “good” patient-physician relationship. Epistemic injustices are, of course, also present in human-to-human interactions in medicine. With ML systems, however, they risk acquiring a whole new dimension, propagating rapidly and structurally affecting healthcare practices. As I argue in my paper, it is worrisome that ML systems can exacerbate these forms of injustice even further.

The main issue is that it is epistemically and morally unjustified to base assessments of patients’ credibility on the ML-produced risk scores attributed to them. This is the case for at least three reasons. First, we need to recognize and take seriously that automation bias may be at play, prompting physicians to attribute more credibility to an ML score than to a patient’s testimony without good epistemic reasons. Second, spurious correlations can lead to patients being misclassified, attributing to them a risk score that reflects neither their actual drug consumption nor their risk of developing opioid misuse. This is particularly likely for patients belonging to disadvantaged social groups, with the consequence of perpetuating discriminatory practices. Finally, the harm caused to patients in terms of testimonial injustice in ML-mediated medical procedures propagates on a scale that makes it harder for physicians to spot and amend. Particular caution is therefore needed: delicate decisions involving assessments of patients’ credibility require careful scrutiny.

These are only initial considerations that will hopefully pave the way for further discussion of how ML systems in medicine can undermine patients’ epistemic status. I am convinced that gaining a complete picture of the potentially harmful impact of these systems on medical care requires a more nuanced analysis of how the fundamental principle of justice can be endangered, one that takes its epistemic dimension into account.


Paper title: Testimonial injustice in medical machine learning

Author: Giorgia Pozzi

Affiliations: Delft University of Technology, The Netherlands

Competing interests: None.
