AI in healthcare – why start a conversation with the general public?

By Elizabeth Ford

Imagine the future.

Imagine you are 76 years old. You visit your doctor to have her examine your knee, which has been hurting since you tripped on your front step and fell yesterday. After tapping some information into her computer, your doctor turns to you and says, “Would you like to discuss any concerns about your memory?”

Where did that come from? Your doctor has an artificial intelligence (AI) algorithm running on all your patient notes, which flags to her when it looks like you might be developing dementia. It can read the diagnoses recorded, your test results, your social context, such as the fact that you live alone, and the letters from the specialist clinics you have visited. Maybe it combines these with information from your Apple Watch or your fitness or sleep tracker, and draws on a brain scan you had three years ago for headaches.

It can detect that you may be developing dementia, before you or your doctor have even thought about it.
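To make the scenario concrete, here is a minimal, hypothetical sketch of such a flag. Every field name, weight, and threshold below is invented for illustration; a real system would use a statistical model trained and validated on large clinical datasets, not hand-written rules.

```python
# Hypothetical sketch only: invented fields, weights, and cut-offs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    age: int
    lives_alone: bool
    memory_complaints_in_notes: int      # mentions found by text mining
    missed_appointments: int
    cognitive_test_score: Optional[float]  # past screening result, if any

def dementia_risk_flag(record: PatientRecord) -> bool:
    """Return True if the record suggests raising memory concerns.

    A toy weighted score over the signals the post mentions: notes,
    social context, and test results. The weights are arbitrary.
    """
    score = 0.0
    if record.age >= 75:
        score += 1.0
    if record.lives_alone:
        score += 0.5
    score += 0.5 * record.memory_complaints_in_notes
    score += 0.25 * record.missed_appointments
    if record.cognitive_test_score is not None and record.cognitive_test_score < 24:
        score += 2.0  # below a common screening cut-off, for example
    return score >= 2.0

# The fictional 76-year-old in the scenario above:
patient = PatientRecord(age=76, lives_alone=True,
                        memory_complaints_in_notes=2,
                        missed_appointments=1,
                        cognitive_test_score=None)
print(dementia_risk_flag(patient))  # True -> the doctor sees a prompt
```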

This kind of technology is in development right now, and other types of AI are already being used in healthcare in the UK and beyond. AI is currently good enough to analyse radiology images, run chatbots for remote consultations, and monitor and predict population-level infectious disease outbreaks.

If AI has the potential to become mainstream in primary health care, what kind of conversations should we be having in society? How should we bring up these issues with patients or the general public?

What are the issues that we might want to discuss?

There are a number of conversations we might want to have across society.

Firstly, patient privacy and confidentiality. Does everyone want their healthcare data to be used to develop these technologies? Should patients have a say in whether their data is used? If patients choose not to donate their data to these projects, we should try to understand the risks or harms the public are wary of.

Secondly, fairness and public trust. The patients who are most vulnerable to harm from data-sharing, and who might opt out of their data being used, are possibly those who have the greatest healthcare needs. If their data is no longer in the training set, will the AI cater to their needs? What about biases or unfairness in the way doctors currently treat patients, such as racism or sexism? Will these end up “baked in” to the AI output? How can we develop AI which treats all patients fairly? If the development of this kind of technology goes on effectively in secret, we risk losing public trust. How can we make AI and technology development as open as possible, so everybody knows what is happening? Many members of the public have said they fear these technologies will be developed just for a company’s profit, not for the good of society.

Thirdly, how will the doctor and the machine balance their respective roles in the consulting room? Who has priority if opinions conflict – does the doctor get to overrule the AI? If something goes wrong, can the AI be sued? Should AI technology be required to be transparent about how it reaches its decisions?

Fourthly, who gets to decide, and how, whether AI is good enough to roll out in the clinic? What should the threshold be, and who should set it? What if AI causes harm to some patients? One example is a decision-making algorithm for the treatment of patients with pneumonia, which recommended sending home patients with asthma – a highly risky decision. It had learned from its training data that patients with asthma were less, rather than more, at risk of dying, because they had previously received more intensive treatment and therefore had better outcomes.
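To see how this can happen, here is a toy simulation of the mechanism, with all numbers invented rather than taken from the real study: a genuine risk factor can look protective in historical records because it routinely triggered more aggressive treatment.

```python
# Toy simulation of confounding-by-treatment. All rates are invented.
import random

random.seed(0)

def simulate_patient():
    asthma = random.random() < 0.2
    # Asthma genuinely raises the risk of dying from pneumonia...
    base_mortality = 0.25 if asthma else 0.10
    # ...but in the historical data, asthma patients were routinely
    # given intensive care, which greatly reduced that risk.
    intensive_care = asthma
    mortality = base_mortality * (0.2 if intensive_care else 1.0)
    died = random.random() < mortality
    return asthma, died

patients = [simulate_patient() for _ in range(100_000)]

def rate(group):
    return sum(died for _, died in group) / len(group)

asthma_group = [p for p in patients if p[0]]
other_group = [p for p in patients if not p[0]]

print(f"observed mortality, asthma:    {rate(asthma_group):.3f}")  # ~0.05
print(f"observed mortality, no asthma: {rate(other_group):.3f}")   # ~0.10
# A model fitted to these outcomes, blind to the treatment variable,
# would conclude asthma is protective and rank those patients low-risk.
```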

Lastly, how do we train the healthcare workforce to deliver these technologies, interact with them, and integrate them into their workflow in a way which does not upset their usual relationships with patients?

How can we engage the public in these conversations?

Researchers have a range of methods for discussing these issues with the public and soliciting their views. Surveys, interviews and focus groups are very useful for gaining a snapshot of public opinion. But for complex ethical topics, where there is a lot of information and many opposing arguments to weigh, it is difficult for members of the public to give well-informed views in a survey.

Deliberative research methods such as citizens’ juries, citizens’ assemblies, and participatory research invite members of the public to spend several days or longer learning about, discussing, and developing collective solutions or recommendations on complex health, social and policy problems. These should arguably be the mainstay of our approach to engaging the public in conversations about the development and implementation of AI in healthcare. That’s why in our research we chose to use a citizens’ jury to explore public opinion on sharing patients’ clinic notes and letters with university researchers to carry out health research and develop new algorithms. We found this format stimulated deeper conversation among participants and allowed us to understand the nuances of people’s opinions as they became more informed.

In many areas of big data, AI, and technology development, ethical and policy discussions are difficult, and our understanding of public opinion lags behind the fast pace of technological progress. We cannot afford for this to happen in healthcare, which will surely one day affect us all.


Paper title: Should free text data in electronic medical records be shared for research? A citizens’ jury study in the United Kingdom

Authors: Elizabeth Ford (1), Malcolm Oswald (2), Lamiece Hassan (3), Kyle Bozentko (4), Goran Nenadic (5), Jackie Cassell (1)

Affiliations:

  1. Department of Primary Care and Public Health, Brighton and Sussex Medical School, Brighton, UK
  2. Citizens’ Juries CIC, Manchester, UK
  3. Division of Informatics, Imaging and Data Sciences, School of Health Sciences, The University of Manchester, Manchester, UK
  4. Jefferson Center, Saint Paul, Minnesota, USA
  5. Department of Computer Science, The University of Manchester, Manchester, UK

Competing interests: EF, LH, GN and JC are members of the UK Healthcare Text Analytics Network, which funded the published study.

Social media accounts of post author: @drelizabethford
