Kieran Walsh: What happens when the robot makes a mistake?  

Clinical decision support (CDS) resources can reduce the risk of medical error. As a result, we encourage doctors and other healthcare professionals to use CDS when making decisions. But, like anything else, CDS resources can sometimes get things wrong. What happens then? Do doctors notice that the CDS system has made an error and override it? Or do they blindly follow it and make an error themselves?

Lyell et al have done a clever study to find out the answer to this question. [1] Their research was based on the concept of automation bias. This bias occurs when “users over-rely on CDS” and so become less vigilant about checking for errors that the CDS system might have missed. Errors can be those of omission or commission (a bit like the Catholic sins). An omission error is when you fail to notice a problem because the CDS system did not flag it. A commission error is when you follow a recommendation that is incorrect. Lyell et al found that users of a CDS system made both types of error. Their study was conducted with medical students in a simulation setting, so it is hard to tell whether you would get the same result with qualified doctors in a real-world setting. However, the simulation did mimic the real world quite closely: the researchers made some tasks more complex than others and staged interruptions to the users’ workflow, just as happens in real life.

This is an important study. There is a lot of research into the effectiveness of robots and whether they may be better than humans at certain tasks. There is also a lot of research into how to make humans better and safer clinicians. However, there is not as much research on the interface between robots and humans and how one might affect the other. This study adds a piece to the jigsaw, but it also raises further questions. The most important is this: what can we do to make sure that doctors do not follow CDS systems down blind alleys?

Thus far our efforts have focused on telling clinicians that they should be vigilant, check things, and not rely too much on CDS. The problem with this approach is that there is little evidence that it works. Clinicians might also understandably complain that they are receiving mixed messages: on the one hand, that CDS will help them a great deal; on the other, that they cannot rely on it completely. Maybe we need to be careful not to commit sins of omission ourselves when telling clinicians about the effectiveness of CDS systems.

Kieran Walsh is clinical director of BMJ Learning and BMJ Best Practice. He is responsible for the editorial quality of both products. He previously worked as a hospital doctor, specialising in care of the elderly medicine and neurology.

References:

  1. Lyell D, Magrabi F, Raban MZ, Pont LG, Baysari MT, Day RO, Coiera E. Automation bias in electronic prescribing. BMC Med Inform Decis Mak. 2017 Mar 16;17(1):28.

Competing interests:

Kieran Walsh works for The BMJ, which provides the clinical decision support tool BMJ Best Practice.