Enrico Coiera et al: First compute no harm

We will need new principles and regulations to govern medical artificial intelligence


Clinicians are guided by the Hippocratic oath to do no harm, and we must surely expect the same of medical artificial intelligence (AI). Once considered a technology of the future, AI is becoming increasingly commonplace. With a healthcare system struggling to maintain its workforce, manage costs, and cope with rising service complexity, there is a clear imperative to delegate routine tasks to AI.

Today’s AI comes in many forms. [1] Most common are the special-purpose analytics tools that find application everywhere, from automatically screening laboratory results or medical images through to providing early warning of epidemic outbreaks. These tools use algorithms created with machine learning, where the “intelligence” sits in the process of learning from past data. Conversational agents such as “chatbots” exhibit intelligent responses tailored to a broader set of circumstances and can, for example, be delegated the task of triaging patients based upon their symptoms, as the NHS is now trialling. [2] AI that is characterized by independent agency and a capacity to reason broadly is known as Artificial General Intelligence (AGI), and remains some years away.

Information technology (IT) profoundly shapes human decision processes, and poorly designed or used IT can lead to patient harm. [3] Asimov’s laws famously constrained robots to not injure or kill a human being or, through inaction, allow a human being to come to harm, and these laws were to be hardwired into their digital DNA. While Asimov’s laws have their origins in fiction, they have provided the foundational scaffold for thinking about AI safety. Yet notions of harm and benefit are culturally shaped and personally defined, and such simple rules unfortunately quickly become unhelpful in the complex ethical jungle that is healthcare. [4]

What, for example, if an AI were a participant in the end-of-life decision process? Clinicians are already considering algorithms that assess risk of death as a trigger for such discussions with patients and family. [5] Is it acceptable for algorithms today, or an AGI in a decade’s time, to suggest withdrawal of aggressive care and so hasten death? Or, alternatively, should it recommend persistence with futile care? The notion of “doing no harm” is stretched further when an AI must choose between patient and societal benefit. We thus need to develop broad principles to govern the design, creation, and use of AI in healthcare. These principles should encompass the three domains of the technology, its users, and the way in which both interact in the (socio-technical) health system. [6,7]

Firstly, like any technology, AI must be designed and built to meet safety standards that ensure it is fit for purpose and operates as intended. [8] AI must be designed for the needs of those who will work with it, and fit their workflows. [9] It is one thing to do no harm, quite another to waste time or make others work harder to suit the AI. The point at which humans are taken entirely out of the decision loop and tasks are delegated to the machine will vary by task and setting. Deciding when that point is reached will require methods that test safety, effectiveness, and acceptability. Such testing cannot fully guarantee the safety of the real-world behaviors of a complex AI, and so prospective monitoring, as well as fail-safe procedures, will be required. Future AGIs might also have an inbuilt awareness of safety principles to allow them to navigate unexpected situations. [10]

Next, we require principles to ensure that the conclusions or actions of an AI can be trusted. Perhaps AIs must be able to explain how they came to a conclusion, and provide evidence to support their reasoning. [11] A corollary might be that humans have the right to challenge an AI’s decision if they believe it to be in error. Explanation is straightforward when knowledge within an AI is explicit, such as a clinical rule—the explanation can point to the rule’s applicability to the current situation. However, explanation is challenging for AIs based on current-generation neural networks, because knowledge is no longer explicit, but rather is non-transparently encoded in the connections between “neurons.”

Other problems emerge when AIs are built using machine learning, extracting knowledge from large data sets. If data are inaccurate or missing, then they may not be fit for the purpose of training an AI, or for decision-making. If data are biased, so too will be the AI’s knowledge. Hidden biases can discriminate against some patients, whether on the basis of gender, ethnicity, or disease, when those groups are under-represented in the original data used to train the AI. [12]

We also need principles to govern how humans use AI. Humans should not direct AIs to perform beyond the bounds of their design or delegated authority. Humans should recognize that their own performance is altered when working with AI. Automation bias, for example, is the phenomenon of humans over-delegating to technology and reducing vigilance. [13] As a consequence, humans may fall “out of the loop,” miss critical events around them, or fail to understand the situation well enough to recover from a mishap. If humans are responsible for an outcome, they should be obliged to remain vigilant, even after they have delegated tasks to an AI.

These considerations are well beyond the scope of current regulatory processes for medical devices. They include discussions of values and ethics, and are likely to require frequent revision as the capabilities and roles of AI develop. [14] The EU is now developing a framework for the legal and ethical implications of robots. [15] Healthcare also needs to devote energy to developing the principles, regulations, and governance structures that will make the transition to an AI-enabled health system as safe and effective as possible.

Governance of individual AI systems is probably the remit of those already tasked with oversight or regulation of IT in healthcare, though it may require significant effort to accommodate the very different risk profiles of this technology class. Government, health services, consumer groups, and the clinical professions will need to focus on the operational implications: AI will, for example, require changes across the board, from education to clinical workflows.

AI, whether embodied in a robot or dispersed across a computer network, is thus going to challenge our conception of clinical work, professional duty, and the very nature and design of health services. If we get it right, we will create a world where we, as clinician or patient, work safely side by side with our valuable computational companions.

Enrico Coiera, Director, Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia.

Maureen Baker, Chair, Shadow Board of Faculty of Clinical Informatics.

Farah Magrabi, Associate Professor, Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia.

Competing Interests: We have read and understood BMJ policy on declaration of interests and declare that we have no competing interests.

Not commissioned, peer reviewed.

References:

  1. Coiera E. Guide to Health Informatics. 3rd ed. London: CRC Press, 2015.
  2. Burgess M. The NHS is trialling an AI chatbot to answer your medical questions. Wired, 5 January 2017. http://www.wired.co.uk/article/babylon-nhs-chatbot-app (accessed 13 June 2017).
  3. Kim MO, Coiera E, Magrabi F. Problems with health information technology and their effects on care delivery and patient outcomes: a systematic review. Journal of the American Medical Informatics Association 2017;24(2):246-50.
  4. Clarke R. Asimov’s laws of robotics: implications for information technology-Part I. Computer 1993;26(12):53-61.
  5. Cardona-Morrell M, Chapman A, Turner RM, et al. Pre-existing risk factors for in-hospital death among older patients could be used to initiate end-of-life discussions rather than Rapid Response System calls: A case-control study. Resuscitation 2016;109:76-80. doi: https://doi.org/10.1016/j.resuscitation.2016.09.031
  6. Coiera E. Four rules for the reinvention of healthcare. BMJ 2004;328:1197-99.
  7. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Quality and Safety in Health Care 2010;19(Suppl 3):i68-i74.
  8. Fox J, Das S. Safe and Sound: Artificial Intelligence in Hazardous Applications. Cambridge, MA: MIT Press, 2000.
  9. Coiera EW. Artificial intelligence in medicine: the challenges ahead. Journal of the American Medical Informatics Association 1996;3(6):363-66.
  10. Ong MS, Magrabi F, Coiera E. Syndromic surveillance for health information system failures: a feasibility study. Journal of the American Medical Informatics Association 2012;20(3):506-12. doi: 10.1136/amiajnl-2012-001144
  11. Shortliffe EH, Axline SG, Buchanan BG, et al. An artificial intelligence program to advise physicians regarding antimicrobial therapy. Computers and Biomedical Research 1973;6(6):544-60.
  12. Reynolds M. Bias test to prevent algorithms discriminating unfairly. New Scientist 2017 [updated 1 April 2017]. https://www.newscientist.com/article/mg23431195-300-bias-test-to-prevent-algorithms-discriminating-unfairly/
  13. Lyell D, Coiera E. Automation bias and verification complexity: a systematic review. Journal of the American Medical Informatics Association 2016:ocw105.
  14. Anderson M, Anderson SL. Machine Ethics. Cambridge University Press, 2011.
  15. Committee on Legal Affairs. Draft Report with recommendations to the Commission on Civil Law Rules on Robotics: European Parliament, 2016.