Janice Kwan: What I have learned about clinical decision support systems over the past decade

When I embarked upon my journey as a freshly minted physician in 2009, practicing clinical medicine involved a motley assortment of low- and high-fidelity technologies. I recall documenting notes fastidiously by hand while parsing through medical records situated concurrently on the computer screen, stuck in the fax machine, and hidden in bound stacks of paper. I dictated some discharge summaries through the telephone while typing out others, and memorized which tests and medications were entered on the electronic health record (EHR) and which ones were written out the old-fashioned way.

Fast forward just over a decade, and it is evident that our clinical milieu has become increasingly digitized. Widespread adoption of EHRs likely reflects multiple factors, including enhanced documentation, access to health records and test results, interoperability across health systems, computerized order entry, and administrative functions such as billing. A central feature of EHRs is clinical decision support systems (CDSSs)—from pop-up alerts about serious drug allergies to more sophisticated tools incorporating clinical prediction rules—that prompt clinicians to deliver evidence-based medicine at the point of care. In fact, health care organizations have spent billions of dollars implementing sophisticated clinical information systems with the expectation that EHRs and the CDSSs they contain will produce major improvements in guideline adherence and patient outcomes [1]. Despite optimism over the impacts of CDSSs [2,3], a systematic review published by our group in 2010 [4] found that CDSSs typically improved the proportion of patients who received target processes of care by less than 5%.

Given all that has changed over the past 10 years, we felt it was important to update our review. Although we expected to see significant growth in the number of studies evaluating the impact of CDSSs, we did not anticipate a near quadrupling over the ensuing decade. Even with a substantially larger set of included studies, which permitted more sophisticated and rigorous statistical methodology (i.e., meta-analysis and meta-regression), we were surprised to discover that our core findings remained essentially unchanged. Specifically, we found that in over 100 trials reporting data from over 1 million patients and 10 000 providers, CDSSs increased the average proportion of patients receiving desired care by 5.8% (95% confidence interval 4.0% to 7.6%). To put this into context, for the control groups in the included trials, a median of 39.4% of patients received care recommended by the CDSSs. Thus, in the typical intervention group, about 45% of patients would receive the recommended process of care, still leaving over half without it.

Furthermore, we showed that these trials exhibited substantial heterogeneity that meta-regression could not explain. This implies that the current literature, despite its considerable size, provides little guidance for identifying the circumstances under which CDSS interventions produce worthwhile improvements in care, suggesting that implementation of CDSSs largely remains a game of trial and error.

These findings are timely, given the emerging interest in applying artificial intelligence to clinical decision support. Even with the most effective machine learning algorithms, the CDSSs that notify physicians of their complex outputs will still largely depend on—and therefore be limited by—the small to moderate effect sizes described in our study. Moreover, these findings add to the mounting evidence supporting the need to mitigate alert fatigue.

For now, it will be important to consider the strength of the connection between the targeted process of care and clinical outcomes when implementing CDSSs. Future research must determine how CDSSs can be designed to confer larger improvements in care while balancing the threat of alert fatigue and physician burnout related to EHRs. Although the past decade did not produce the meaningful improvements in CDSS effectiveness that we had hoped for, we still witnessed remarkable progress in health care technology more broadly. This leaves me optimistic about what the next 10 years will bring.

Janice Kwan practices general internal medicine and is an assistant professor of medicine at the University of Toronto. Twitter: @KwanJanice.

Competing interests: None declared.

References:

  1. McCluskey PD. Partners’ $1.2b patient data system seen as key to future [accessed March 1, 2019]. Available from: https://www.bostonglobe.com/business/2015/05/31/partners-launches-billion-electronic-health-records-system/oo4nJJW2rQyfWUWQlvydkK/story.html.
  2. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005;330(7494):765. doi: 10.1136/bmj.38398.500764.8F [published Online First: 2005/03/16]
  3. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293(10):1223-38. doi: 10.1001/jama.293.10.1223 [published Online First: 2005/03/10]
  4. Shojania KG, Jennings A, Mayhew A, et al. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ 2010;182(5):E216-25. doi: 10.1503/cmaj.090578 [published Online First: 2010/03/10]