Should ChatGPT be used to take consent from patients prior to surgery?

By Jemima Allen, Dominic Wilkinson, Brian Earp and Julian Koplin.

Next month, you are due to have surgery on your knee.  

You’ve been on the waiting list for a while now, but the date for surgery is finally coming up. Normally, you would expect to speak to a member of the surgical team on the morning of the operation. This is typically a rushed conversation where they discuss the procedure and ask you to sign a consent form. 

But instead of speaking to someone in person, your surgeon has sent you a link to an app called “Consent-GPT”. This app will allow you to ask as many questions as you like of an AI-generated virtual surgeon. The app can provide up-to-date details about the procedure: what it will involve, any risks and complications, and what to expect afterwards. It will also be tailored to your medical situation, so that the information is relevant and accurate for you.

At the end of the interaction, it will check that you are still happy to go ahead with the procedure. The app will then send a comprehensive transcript of what was discussed to your surgeon, who will include it in your medical records for reference.

Is this a good development, you wonder?

Though this may sound like science fiction, recent advances in generative artificial intelligence (AI), including large language models (LLMs) like ChatGPT and Bard, suggest that “Consent-GPT” or something like it may soon become a reality. In fact, the use of LLMs in the medical consent process already seems technically feasible.

While some are very positive about the use of LLMs in medicine, others have reservations about what they could mean for the future of medicine and the doctor-patient relationship. Our newly published paper examines the promise and perils of this sort of technology.

LLMs, as they currently stand, are far from perfect. But the potential for such tools to improve current standards of clinical practice should not be overlooked.

The consent process for medical procedures (like knee operations) is inconsistent and often suboptimal. Consent-seeking is frequently delegated to junior doctors who lack sufficient training and knowledge of the procedure to carry out the task effectively. Patients are left with a poor understanding of the procedure, which undermines informed decision-making and, ultimately, patient autonomy. High clinical workloads and the time pressures faced by junior doctors only aggravate these problems: consent is often sought on the day of, or indeed moments before, surgery.

When it comes to the surgical consent process, LLMs offer several key advantages over current practice.

Firstly, LLMs could improve patient understanding of medical procedures, given their access to extensive sources of medical information. Indeed, LLMs may be more reliable than junior doctors at providing patients with up-to-date information for clinical decision-making. They could also let patients access that information in advance and in their own time, with ample opportunity to ask questions.

As a generative AI technology, LLMs could be programmed to ensure that all patients receive the essential, standardised information for a procedure. Estimates of the procedure’s potential risks and benefits could then be tailored to the individual patient, adjusted for age, comorbidities and other relevant factors.

Clinical trials assessing initial public attitudes towards LLMs in medical consent suggest that people find such agents engaging, personalised and easy to use.

From a doctor’s perspective, such agents could ease clinical workloads and streamline the consent process. Transcripts of consent interactions between patients and LLMs could also serve as written documentation and a legal reference, verifying the information disclosed during the consent process. Given the rise in negligence claims over inadequate consent, such detailed evidence may provide clinicians with robust legal safeguards: unlike humans, an LLM is not prone to lapses such as forgetting to mention key information prior to surgery.

However, the potential benefits of delegating consent to LLMs should be tempered with a degree of caution. Before such tools can be safely implemented in clinical practice, it will be important to establish the accuracy of the information they deliver and to guard against the risk of misinformation.

There are also concerns about patient and community trust, and about how the use of LLMs in consent might affect the doctor-patient relationship. Likewise, robust clinical guidelines on patient privacy and data usage, and on clinical responsibility and safeguards for patients and clinicians, will be necessary to ensure the safe and effective implementation of such agents in the medical consent process.

On balance, you may find that you would actually quite like to speak to a virtual surgeon like Consent-GPT. It might be a convenient option, and a chance to ask questions you might otherwise not have time to ask.

Or, you may find that actually speaking to a person, even if it’s not your treating surgeon, is inherently important to you.

But while healthcare providers and computer programmers work towards answering the question ‘Can we turn Consent-GPT into a reality?’, we have yet to answer the far more pressing question: ‘Should we?’

 

Paper title: Consent-GPT: Is it ethical to delegate procedural consent to conversational AI?

Authors: Jemima W. Allen, Brian D. Earp, Julian J. Koplin, Dominic Wilkinson

Affiliations:

  1. Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, UK
  2. Faculty of Medicine, Nursing & Health Sciences, Monash University, Melbourne, Australia
  3. Monash Bioethics Centre, Monash University, Melbourne, Australia
  4. John Radcliffe Hospital, Oxford, UK
  5. Murdoch Children’s Research Institute, Melbourne, Australia
  6. Centre for Biomedical Ethics, National University of Singapore Yong Loo Lin School of Medicine, Singapore

Competing interests: None declared
