When you go to the doctor, there’s no telling what kind of communicator you’ll get. Some doctors are on the paternalistic side, telling you what you should do without much discussion. Others just give you the facts and leave the decision entirely to you. Some try to help you explore your values and what matters to you. And others engage in a back-and-forth dialogue about what might be best for your health.
The way a doctor – or healthcare provider more generally – communicates can make all the difference. It can empower patients, confuse them, reassure them, or frustrate them. But for the most part, patients don’t get to choose – it’s basically a lottery. You get assigned a doctor, and at least to some degree they communicate in whatever style they’re comfortable with – whether or not it works for you.
Given the emphasis on patient autonomy in recent decades, one possible solution is to let patients choose how they want medical information delivered. Maybe you want a no-nonsense, just-the-facts approach. Or maybe you’d rather have a doctor walk you through your values, helping you figure out what truly matters to you before making a decision.
On the other hand, doctors don’t necessarily have the skills to adapt their approach to every patient. And some communication styles – especially those that require deep discussion – are simply too time-consuming for the reality of modern healthcare.
This is where large language models (LLMs) could play a role. These models are multilingual, have unlimited patience, and can adapt to individual patient health literacy levels. They can spend as much time as needed explaining things, answering questions, and helping you think through your options.
They also appear remarkably good at mimicking different medical communication styles. In a recent JME paper titled ‘Which AI doctor would you like to see?’ we tested this with GPT-4, instructing it to role-play as a doctor using four well-known models of doctor-patient communication:
- Paternalistic: The doctor decides what’s best for you and tells you what to do.
- Informative: The doctor gives you all the facts but makes no recommendations.
- Interpretive: The doctor helps you understand your own values before making a decision.
- Deliberative: The doctor discusses what values should matter most and nudges you toward the best choice.
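For a sense of what this role-play instruction amounts to in practice, here is a minimal sketch of how the four styles might be set up programmatically with the OpenAI Python client. The prompt wording, model name, and helper function are illustrative assumptions for this post, not the exact instructions used in the paper or behind the GPTs described below.

```python
# Minimal sketch: steering an LLM into one of four medical communication
# styles via a system prompt. Prompt wording and model name are illustrative
# assumptions, not the instructions used in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STYLE_PROMPTS = {
    "paternalistic": (
        "Role-play as a doctor. Decide what is best for the patient and tell "
        "them clearly what to do, with minimal discussion of alternatives."
    ),
    "informative": (
        "Role-play as a doctor. Present all relevant facts and options "
        "neutrally and make no recommendation; the decision is the patient's."
    ),
    "interpretive": (
        "Role-play as a doctor. Help the patient articulate their own values "
        "and clarify what matters most to them before any decision is made."
    ),
    "deliberative": (
        "Role-play as a doctor. Discuss which health-related values deserve "
        "the most weight and gently nudge the patient toward the best choice."
    ),
}

def ask_ai_doctor(style: str, patient_message: str) -> str:
    """Send a patient message to an 'AI doctor' using the chosen style."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": STYLE_PROMPTS[style]},
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

print(ask_ai_doctor("interpretive", "I've been offered statins. Should I take them?"))
```

The point of the sketch is simply that the entire communication style sits in the system prompt, so the same patient question can be routed through any of the four personas.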
Based on the characteristics of each of these approaches, we created four different AI “healthcare providers” that you can try out yourself (clicking on the screenshots will direct you to each GPT).
In the paper, we also had GPT-4 generate example conversations with a simulated patient to showcase its ability to maintain these different styles. While we haven’t formally assessed these outputs or tested them with real patients – this is just a proof of concept – the potential is clear (see table 1 in the paper).
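For illustration, a simulated conversation of this kind could be scripted by alternating two role-played calls: one playing the doctor in a chosen style and one playing the patient. The scenario, prompts, and turn count below are our own assumptions, not the procedure used in the paper.

```python
# Minimal sketch: generating an example doctor-patient exchange by alternating
# two role-played LLM calls. The scenario, prompts, model name, and turn count
# are illustrative assumptions, not the procedure reported in the paper.
from openai import OpenAI

client = OpenAI()

DOCTOR_PROMPT = (
    "Role-play as a doctor using a deliberative communication style: discuss "
    "which values should matter most and nudge the patient toward the best choice."
)
PATIENT_PROMPT = (
    "Role-play as a patient recently diagnosed with type 2 diabetes who is "
    "unsure whether to start medication or try lifestyle changes first."
)

def next_turn(system_prompt: str, transcript: list[str], speaker: str) -> str:
    """Ask the model for the next conversational turn, given the dialogue so far."""
    history = "\n".join(transcript) or "(conversation starts)"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": f"Conversation so far:\n{history}\n\n"
                           f"Reply as the {speaker} in one short turn.",
            },
        ],
    )
    return f"{speaker.capitalize()}: {response.choices[0].message.content.strip()}"

dialogue: list[str] = []
for _ in range(3):  # three patient-doctor exchanges
    dialogue.append(next_turn(PATIENT_PROMPT, dialogue, "patient"))
    dialogue.append(next_turn(DOCTOR_PROMPT, dialogue, "doctor"))

print("\n\n".join(dialogue))
```

Swapping DOCTOR_PROMPT for one of the other style prompts would yield a transcript in that style instead.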
But we also explored going beyond traditional medical communication styles. Language models open the door to entirely new approaches that might be inappropriate coming from a human doctor but potentially valuable coming from an impersonal AI. Imagine a patient-chosen language model playing “devil’s advocate,” challenging a patient’s assumptions about their health decisions. Or a model that uses storytelling and imagination to help a patient think through different treatment options.
(We also instructed a model to educate patients about the different styles of medical communication before they select their preferred one.)
Of course, the big questions remain unanswered: Will having these communication options actually improve patients’ understanding of their medical conditions? Will it help them make better decisions or stick to their treatment plans more effectively? We don’t know yet.
And this points to a broader issue in AI ethics: we’re often too quick to focus on potential ethical concerns before we have much data. It is true there are important questions about what the use and proliferation of these tools could mean for doctor-patient relationships, privacy, patient health, and medical decision-making more generally. But to understand the likely trade-offs and meaningfully address these ethical questions, we first need data on what actually happens when patients use these tools in controlled scenarios, rather than speculating about what might go wrong beyond the obvious concerns about privacy and reliability.
This did not stop us speculating, however, and in our paper we also wondered what the persuasive powers of language models mean for some of these medical communication strategies – it’s not clear we have a good idea of when these models might slip from rational persuasion into manipulation, or how to detect this (though see this paper).
Ultimately, perhaps what’s most intriguing isn’t how AI might mimic existing ways doctors communicate with patients, but how it could shift our understanding of medical communication itself. When patients can freely explore different approaches and take time to process information, we might discover that everything we thought we knew about “good” medical communication was based on the constraints of human healthcare providers, not the needs of patients. What if the best way to discuss medical decisions hasn’t even been invented yet?
Authors: Hazem Zohny, Jemima Winfried Allen, Dominic Wilkinson, and Julian Savulescu
Competing interests: JS is an advisor of AminoChain. JS is a Bioethics Advisor to the Hevolution Foundation. JS is a Bioethics Committee consultant for Bayer.