By Marco Annoni.
Artificial intelligence (AI) may soon be able to predict which treatments a patient would prefer to receive—or refuse. Among the many applications of AI in healthcare, one of the most promising is its potential to support substitute decision-making.
Substitute decisions are required when patients lack the capacity to make informed decisions for themselves, such as in the case of newborns or individuals incapacitated due to trauma or illness. However, substitute decision-making is notoriously complex and fraught with challenges.
The primary difficulty is determining what the patient would actually have wanted, as surrogates often lack sufficient knowledge of, or information about, the patient’s wishes. Surrogates may also face barriers of their own, such as limited cognitive abilities, low health literacy, or inadequate ethical preparation. Additionally, both surrogates and healthcare professionals frequently experience high levels of stress and moral distress when making these critical decisions.
This is where AI-powered tools, such as the proposed “Personalized Patient Preference Predictor” (P4), could make a significant difference. By analyzing a patient’s digital footprint (emails, social media posts, health records, and more), such a tool could infer the patient’s likely treatment preferences, potentially with greater accuracy than family members or caregivers.
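To make the underlying idea concrete, the sketch below shows, in broad strokes, how a preference predictor of this kind might work. It is a purely hypothetical illustration, not the actual P4 (whose architecture remains an open research question): a toy text classifier, with invented data and labels, that learns a treatment stance from snippets of a patient’s past writing.

```python
# Purely illustrative sketch: a toy "preference predictor" trained on a
# patient's past writings. This is NOT the P4; all data, labels, and
# design choices here are invented stand-ins for the general idea of
# learning treatment preferences from a digital footprint.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: snippets from a patient's emails and posts,
# each labeled with a stance on life-sustaining treatment that the
# patient is assumed to have expressed elsewhere.
documents = [
    "I never want to be kept alive on machines with no hope of recovery",
    "Quality of life matters more to me than simply living longer",
    "I would want doctors to try every available treatment for my family",
    "Modern medicine is amazing; aggressive care saved my father",
]
labels = ["refuse", "refuse", "accept", "accept"]

# A simple text-classification pipeline: TF-IDF features feeding a
# logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# Infer the stance suggested by a new, unseen piece of writing,
# together with the model's class probabilities.
new_text = ["I would not want machines keeping me alive if recovery were impossible"]
print(model.predict(new_text))        # e.g. ['refuse']
print(model.predict_proba(new_text))  # probability for each stance
```

Even this toy version makes the ethical stakes visible: the output is a statistical extrapolation from past expressions, not a decision the patient ever made.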
What, then, justifies developing these decisional aids? Beyond alleviating surrogate distress, could a P4-empowered decision also achieve other morally significant goals for incapacitated patients?
Traditional answers revolve around the value of patient autonomy. On this view, relying on P4-empowered substitute judgments honours patients’ self-determination by enabling surrogates to choose “as the patient would have chosen, if competent.”
In a recent article, however, I argue that the relationship between substitute decisions, patient preference predictors like the P4, and respect for patient autonomy is more nuanced than it initially seems. While these tools hold promise in easing the burden on surrogates, their ability to uphold patient autonomy deserves closer examination.
I question two dominant assumptions in the current ethical debate surrounding the use of tools like the P4. One is the belief that the autonomy of a patient who has lost decision-making capacity can still be meaningfully respected through a P4-enabled judgment. The other is that respecting autonomy is merely about satisfying a patient’s individual treatment preferences.
Both assumptions, I argue, are problematic. Respect for autonomy cannot be reduced to the act of delivering the “right” treatments, and broadening the scope of agency beyond first-person decisions raises challenges for standard clinical practices.
Instead, I propose that the development of such algorithmic tools can be justified by their ability to achieve other morally significant goals. These include honouring a patient’s unique identity and alleviating the emotional and cognitive burdens on surrogates. Though distinct from autonomy, these outcomes remain ethically and practically significant.
As tools like the P4 promise to transform surrogate decision-making, it is crucial to clarify the ethical foundations of their use. Understanding what these technologies can and cannot achieve will help guide their responsible development. By situating them within a broader ethical framework, we can unlock their transformative potential while staying rooted in the core values of healthcare.
Author: Marco Annoni
Affiliations: Interdepartmental Center for Research Ethics and Integrity (CID Ethics), National Research Council, Rome, Italy
Competing interests: None declared
Social media accounts of post author: @Marcoannoni