AI in healthcare: promise, peril, and professional responsibility

By Helen Smith, John Downer and Jonathan Ives.

Everyone is excited about the idea of AI being brought to the bedside, and who wouldn’t be? We are short-staffed across every group, and there are daily stories of how everyone is overloaded, overworked and struggling; all help is heartily welcomed, no?

But, at the risk of being called a killjoy, it is worth being cautious. Eventually, no matter how good a system is, something will go wrong. One day, an AI will give the wrong output, a clinician will mistakenly act on that output, and a patient will be hurt.

Everyone wants AI to be one of the many saviours that will help get us out of the global service provision crisis. But, given the millions of patient contacts in healthcare each year, any AI (or clinician, for that matter) could make a mistake in practice, and we must start planning for it now. We know what to do when a human clinical actor makes a mistake; we have processes and precedent for this. But should erroneous decisions made using AI be treated differently?

Currently, it is humans with specialist training who make decisions about, for example, the optimal drug choice/dose/route, whether an NG tube is correctly sited on an X-ray, or whether a rash is psoriasis or skin cancer. But times are changing.

Clinicians have long incorporated the outputs of machines into their decision-making. They take the advisories of medical devices – the ting-ting-ting of a monitor announcing an abnormal heart rate or a low SpO2 – and use them to plan and take action. The systems behind these advisories take information from patients, process it, and, where appropriate, output a signal (the alarm) prompting clinicians to look more closely at what is going on: is the patient laughing, fooling the machine into giving an unnecessary warning? Or are they deteriorating in ways that require action? Anyone who has ever attended a cardiac arrest knows how commanding “ANALYSING RHYTHM: SHOCK ADVISED” is when it is announced by the AED. Even then, the user must still weigh ethical, legal and other wide-ranging clinical considerations before pressing the ⚡ button.

Of late, AI has been developed to help clinicians with more complex issues: aiding radiographers in double-checking mammograms for cancer screening, for example, or in planning radiotherapy. Have no doubt, these are exciting developments, which should be encouraged and adopted when ready. In both of these uses, however, the AI is not left to make decisions on its own; it complements the decision-making of the clinician.

It might go without saying that, to have any effect, an AI’s direction needs to be accepted and followed – otherwise it is just an ornament. Where novel technology is introduced to improve clinician-delivered patient care, the AI system in question will (at least initially) need clinicians to use it before it can leave any impression on a patient’s journey through healthcare. Clinicians must therefore be willing to use it and feel comfortable doing so, and, at present, this means being comfortable accepting responsibility for either following or not following its direction. As it stands, they may very well not be inclined to.

The clinicians who will have to decide whether or not to act on an AI’s recommendations are subject to the requirements of their professional regulatory bodies in a way that AIs (and AI developers) are not. This means that clinicians carry responsibility not only for their own actions, but also for the effects of the AI that they use to inform their practice.

As things stand, clinicians using AI at the bedside will underwrite its safety. How do we feel about that? Is this a professional burden that clinicians are willing, or should be willing, to bear? Is now the time for the regulators to unite and decide what the new standard of practice will be where AI is used?

Now is the time to figure all this out.

 

Paper title: Clinicians and AI use – Where’s the Professional Guidance?

Authors and affiliations:

  • Helen Smith. Centre for Ethics in Medicine, Bristol Medical School, University of Bristol.
  • John Downer. School of Sociology, Politics and International Studies, University of Bristol.
  • Jonathan Ives. Centre for Ethics in Medicine, Bristol Medical School, University of Bristol.

Competing interests:

All authors are fully or part funded via the UKRI’s Trustworthy Autonomous Systems Node in Functionality under grant number EP/V026518/1.

Smith is additionally supported by the Elizabeth Blackwell Institute, University of Bristol via the Wellcome Trust Institutional Strategic Support Fund.

Ives is in part supported by the NIHR Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health and Social Care.
