Clinical decision support can improve health outcomes and reduce the risk of medical error. But it isn’t always used. And a bit like a wondrously effective drug—when it’s not used, it doesn’t work. So why isn’t it used?
Khairat and colleagues have recently published an intelligent paper that attempts to explain why. They reviewed studies evaluating user acceptance of clinical decision support and found many reasons why it isn’t used. Sometimes it interfered with the clinician’s workflow; sometimes it took too much time, which meant less time with patients. Some clinicians felt they were receiving too many alerts from clinical decision support systems; others didn’t feel they could rely on the evidence underlying the content. Some found clinical decision support both difficult to use and not useful.
This is a helpful summary, but there isn’t a great deal new in it. Clinicians have been complaining about these shortcomings of clinical decision support for many years, and providers have been trying to improve clinical decision support in response for just as long. For example, they have tried to reduce the number of alerts, to shorten the content, or to make it more evidence based.
But these strategies haven’t always worked either. Fewer alerts mean that important alerts are sometimes missed. Shorter content means that some clinicians will feel they are not getting sufficient detail. It is difficult to get it right for all healthcare professionals in all circumstances.
But this paper suggests another strategy for dealing with the problems we have with clinical decision support. It states that a more fundamental problem is that doctors don’t know why they are getting alerts, whether the clinical content they are seeing is evidence based, or where they are on an algorithm—that, in other words, clinical decision support “is a black box to the physician.” The authors think that the fact that clinical decision support tools “do not reveal how output decisions are made may be a driving force behind the lack of users’ acceptance.” And they suggest that creators of clinical decision support should start being more transparent.
Our experience with BMJ Best Practice is that users certainly do appreciate being able to delve into the underlying evidence base if they want to—even though we realise that they don’t always have time. Access to the evidence helps them get an answer quickly when they are at the point of care, and to spend more time reading around the subject when they are at the point of going to the library.
The authors acknowledge that their ideas need to be validated. Similarly, we have not subjected all our experiences to independent trials. But evidence from other walks of life—from healthcare to information technology to retail—suggests that listening to users and explaining to them what you are doing is a good idea.
Kieran Walsh is clinical director of BMJ Learning and BMJ Best Practice. He is responsible for the editorial quality of both products. He has worked in the past as a hospital doctor—specialising in care of the elderly medicine and neurology.
Competing interests: Kieran Walsh works for BMJ Learning and BMJ Best Practice, which produce a range of resources on infectious and non-infectious diseases.