By Pranab Rudra.
The use of AI in healthcare is expanding rapidly, raising critical ethical questions about its role in informed consent, a process that relies not only on clear, accurate information but also on genuine human connection. The stakes are high because informed consent is both an informational and an emotional interaction, and any departure from authentic, human-centered communication can have profound ethical implications.
Why is this important? Informed consent is not just a formality; it’s the foundation of trust between patients and healthcare providers. When patients face major surgical decisions, they need not only clear, accurate information but also genuine emotional support to help them navigate their fears and uncertainties. The idea that a machine might provide this support, even if it appears empathetic, challenges our traditional understanding of healthcare. It raises pressing questions: Is it ethical to rely on simulated empathy, knowing that a chatbot, no matter how convincing, doesn’t actually feel? Could such an approach inadvertently mislead patients, giving them a false sense of comfort, or, worse, subtly manipulate their decision-making?
My journey into this topic began during my time as a research associate on the project “My doctor, the AI and I” at Hannover Medical School. There, I grew increasingly fascinated by AI, particularly chatbots, and their potential to transform how we interact with technology in healthcare settings. I started questioning the ethical significance of simulated empathy. Should we accept these emotional cues as real, or should we reject them because genuine empathy is inherently human? These burning questions motivated me to write the manuscript, aiming to shed light on both the opportunities and the ethical pitfalls of using Large Language Models (LLMs) in surgical informed consent.
What’s even more exciting is that prototypes of such chatbots are already being developed. Imagine a platform where a patient, facing a complex surgical decision, interacts with an AI-driven consent aid that not only personalizes the information based on their specific condition but also clarifies doubts and fills in informational gaps. In this setup, simulated empathy and distress recognition aren’t just buzzwords; they’re critical components that determine how effectively the system can support both patients and physicians.
In short, our paper explores whether LLMs can and should play a role in such sensitive healthcare interactions. It discusses how these systems might provide consistent, clear information and even help flag when a patient’s emotional state requires human intervention, without crossing the line into deceptive simulation of genuine empathy.
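To make the escalation idea concrete, here is a minimal, purely illustrative sketch of how a consent aid might flag patient distress for human follow-up. Everything in it, the function names, the keyword list, the wording of the reply, is an assumption made for illustration, not the prototype discussed in the paper; a real system would use validated distress-detection methods under clinical oversight rather than simple keyword matching.

```python
# Purely illustrative sketch: how a consent-aid chatbot might flag distress
# and hand the conversation to a human clinician. The names, keyword list and
# responses are hypothetical assumptions, not the authors' actual prototype.

DISTRESS_CUES = {"scared", "terrified", "can't cope", "hopeless", "panic"}

def needs_human_intervention(patient_message: str) -> bool:
    """Return True if the message suggests the patient should speak to a clinician."""
    text = patient_message.lower()
    return any(cue in text for cue in DISTRESS_CUES)

def respond(patient_message: str) -> str:
    if needs_human_intervention(patient_message):
        # Escalate instead of simulating reassurance the system cannot genuinely feel.
        return ("It sounds like this is weighing on you. "
                "I am flagging your questions for your surgical team so a clinician "
                "can talk them through with you.")
    # Otherwise, provide factual, procedure-specific information (stubbed here).
    return "Here is what the procedure involves, step by step: ..."

if __name__ == "__main__":
    print(respond("I'm terrified of the anaesthesia and can't stop worrying."))
```

The point of the sketch is the design choice rather than the heuristic: when a patient’s emotional state runs high, the system defers to human care instead of performing an empathy it does not have.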
This exploration is more than an academic exercise; it’s about ensuring that as we integrate AI into healthcare, we do so in a way that respects patient autonomy, builds trust, and ultimately improves outcomes in one of the most emotionally charged aspects of medicine: informed consent.
Paper title: Large Language Models for surgical informed consent: an ethical perspective on simulated empathy
Authors: Pranab Rudra, Wolf-Tilo Balke, Tim Kacprowski, Frank Ursin and Sabine Salloch
Affiliations: Hannover Medical School
Competing interests: None declared