Up close and personal: Using AI to predict patient preferences?

By Nikola Biller-Andorno.

Have you ever tried to put together a ballpoint pen that has fallen apart? Or, more ambitiously, tried to repair your child’s programmable toy robot that continues to bump into walls? There is nothing like building, taking apart and rebuilding to understand a gadget or system’s flaws and weaknesses.

This is how I felt when I was nominated as a Fellow of the Collegium Helveticum, a Swiss Institute of Advanced Studies co-sponsored by the University of Zurich, the Federal Institute of Technology Zurich and the Zurich University of the Arts.

As the only ethicist in the 2016–2020 fellowship cohort, which focused on digital societies as its overall research theme, I quickly got the sense that I was expected, first, to be worried about digitalization in general and about artificial intelligence in particular and, second, to elaborate on my concerns by invoking grand theories.

I chose another route that seemed more appealing. Given the luxury of following a “no risk, no fun” strategy during the fellowship, I decided to explore the shallows and abysses of digital health care tools by thinking of ethically relevant potential use cases and then developing a concept or possibly even a prototype.

As I was interested in patient-centred medicine and in ways to strengthen patients’ voices in an era of digitalization, I wondered whether algorithmic patient preference predictions might help patients sort out what they want, or help caregivers make decisions when the patients themselves were unable to and their wishes were unknown.

After setting out in a New England Journal of Medicine paper to outline what such use cases might look like and what their ethical ramifications would be, I started looking around for colleagues in computer science who might be interested in teaming up to think more concretely about building a patient preference predictor. The Collegium proved to be fertile ground for such transdisciplinary explorations, and a working group of ethicists, clinicians and computer scientists was quickly established. The local University Hospital was available as an excellent potential training site.

Resuscitation decisions seemed like a good choice – incapacitated patients with often unknown preferences, a highly preference-sensitive yes/no decision that had to be made, and a less-than-perfect status quo. However, things quickly started to get tricky. What should we train our smart resuscitation decision assistant to predict – the code status a patient had? The code status a patient would have wanted to have? The code status that would likely be jointly agreed on after a process of shared decision-making between patient and physician? The likelihood that cardiopulmonary resuscitation would in fact be performed on a patient? Or, most boldly, the code status that would most likely be best for the patient, given outcome probabilities?

We worked out a model that would require the elicitation of outcome-specific patient preferences – something that is only starting to be implemented as part of advance care planning. But new questions arose: What would such a system mean for the interaction of physicians, patients and relatives? Would physicians rather rely on algorithmic predictions than have delicate conversations with overwhelmed families, thus undermining any intended assistive role of the tool? Or would they perhaps reject such a system altogether?
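To make the idea of outcome-specific preferences a little more concrete, here is a minimal, purely illustrative sketch of how elicited preferences might be combined with estimated outcome probabilities to suggest a code status. The outcome categories, numbers, weighting scheme and threshold are invented for illustration only; they do not describe our working group’s actual model.

```python
# Hypothetical sketch: combining elicited outcome-specific preferences with
# outcome probabilities to suggest a code status. All field names, values
# and the threshold below are illustrative assumptions.

# Outcome-specific preferences elicited during advance care planning,
# expressed as acceptability ratings from 0 (unacceptable) to 1 (fully acceptable).
preferences = {
    "survival_good_neurological_outcome": 1.0,
    "survival_severe_neurological_impairment": 0.1,
    "survival_long_term_ventilation": 0.2,
}

# Estimated probabilities of each outcome if CPR were attempted,
# e.g. drawn from clinical prediction models for this particular patient.
outcome_probabilities = {
    "survival_good_neurological_outcome": 0.15,
    "survival_severe_neurological_impairment": 0.10,
    "survival_long_term_ventilation": 0.05,
}

# A simple expected-acceptability score: weight each outcome's probability
# by how acceptable the patient said that outcome would be.
expected_acceptability = sum(
    outcome_probabilities[o] * preferences[o] for o in preferences
)

# A purely illustrative decision threshold for suggesting a code status.
suggested_code_status = "attempt CPR" if expected_acceptability > 0.1 else "DNR"
print(f"Expected acceptability: {expected_acceptability:.2f} -> {suggested_code_status}")
```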

A typical initial reaction to the project idea, from lay people as well as from academics including data scientists, was: a) AI should stay … out of life-or-death decisions, and b) how would an algorithm ever be able to predict the preferences of complex human individuals on such a highly personal issue? We probed the idea further in an exploratory qualitative study with health care professionals as potential users and were surprised to find that, while not uncritical, they were actually quite open to the idea. But very clearly, context matters for how such predictions would be used, and, just as clearly, there are many ways to get it wrong.

We hope to be able to proceed to the next step and try to build a pilot version of a smart patient preference predictor. Maybe we’ll find that things are not so complex after all and end up with a simple score based on a regression analysis. Maybe we’ll find exciting, unexpected patterns that only an AI could have revealed. Or maybe we’ll fail altogether, and talking to patients (where possible) and relatives will remain the gold standard. We shall see, I hope. Stay tuned.
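For the “simple score” scenario, a baseline could be as modest as the sketch below: a logistic regression over a handful of routinely documented features, predicting a documented code status. Everything here – feature choices, labels and data – is invented purely for illustration and says nothing about what a real pilot would use.

```python
# Hypothetical baseline: a logistic-regression score predicting a patient's
# documented code status from a few routinely available features.
# Features, labels and data are toy values for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [age, number of chronic conditions, prior ICU admission (0/1)]
X = np.array([
    [82, 4, 1],
    [45, 0, 0],
    [67, 2, 1],
    [90, 5, 1],
    [55, 1, 0],
    [73, 3, 0],
])
# Label: 1 = "full code" documented, 0 = "DNR" documented
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted probability of a "full code" preference for a new patient
new_patient = np.array([[78, 3, 1]])
print(model.predict_proba(new_patient)[0, 1])
```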

 

Paper title: AI support for ethical decision-making around resuscitation: proceed with care

Blog post author: Nikola Biller-Andorno

Affiliation: Institute of Biomedical Ethics and History of Medicine, University of Zurich

Competing interests: None
