The robot needs a human heart—why AI in medicine brings moral choices into focus

In a crisp, white building deep in the heart of California’s Silicon Valley, teams of people make moral choices on your behalf. The development of self-driving cars may improve global road safety and efficiency, but it also transforms questions that were once purely philosophical into today’s harsh reality. When a self-driving car must choose between a head-on collision with a child and swerving into an adult, what should it do? And what will these dilemmas mean for artificial intelligence in medicine?

The modern version of the “Trolley problem” can be traced back to the British philosopher Philippa Foot.[1] She described a runaway trolley heading towards five people who will be killed by the collision. By pulling a lever, a bystander could divert the trolley onto a different track, where it would kill only one person. Intuitively, it seems permissible to divert the trolley so that one person dies rather than five. Yet in other cases, such as organ donation, it does not seem permissible to kill one person to save five.

Fast forward to 2018 and the first fatality involving a self-driving car: AI collision avoidance systems now need a steer on how they should react. Vehicles cannot escape the moral value judgements implicit in their pre-programmed decision rules. So what should the humans with hearts tell these inanimate machines to do? Perhaps these robots need a human heart.

One way to inform these decisions is simply to ask people. The Massachusetts Institute of Technology ran a global online experiment called the “Moral Machine,” in which millions of people from over 200 countries took a quiz, generating some 40 million ethical decisions. The study’s authors describe consistent global preferences in collision avoidance: sparing humans over animals, sparing more lives rather than fewer, and sparing children over adults.

While some variation is to be expected, the authors also described large shifts in the choices made across social, geographic, and demographic groups. In China, Japan, and Saudi Arabia, for example, the preference for sparing younger rather than older people was far less pronounced.

With AI in medicine consistently described as one of the most important advances in healthcare, the “Trolley problem” is coming soon to a hospital near you. AI models are increasingly promoted for use in diagnostic imaging, risk prediction, and even the treatment of sepsis. Up-front ethical decisions may therefore need to be an integral part of AI models in healthcare.

When caring for critically ill patients, predictive AI may help guide who should be admitted to the last critical care bed. This is a close analogue of the trolley problem: should healthcare professionals “pull the lever” to admit the sick child with leukaemia, or the elderly adult with pneumonia? What should we do?

The first step in managing this problem is appreciating that it exists. Although the hype around AI suggests it is a panacea for improving healthcare, equal focus now needs to be placed on the inherent challenges to humanity as well as the challenges in computing. Social scientists need to be let back into the room, sharing a table with computer scientists, healthcare professionals, politicians and, importantly, patients. Perhaps a medical version of the “Moral Machine” could help gauge the public’s attitudes to these ethical dilemmas. We should also consider whether healthcare decisions should echo the views of people from different geographical areas or instead follow a universal moral compass.

Finally, perhaps we should give the owners of self-driving cars the autonomy to make these difficult ethical choices themselves, in advance, as individuals. Some may choose to swerve; some may not. If so, AI in medicine could likewise be tuned by individuals to suit their personal choices and values around health and disease. These decisions could be made in advance, before mental capacity is lost, much as people opt in or out of organ donation. In this way, silicon-derived artificial intelligence could adjust to the needs of complex organic life.
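What might such individually tuned preferences look like in practice? The sketch below is a minimal illustration only, assuming a hypothetical “advance preferences” record consulted by a decision-support function; the names, fields, and threshold are our own illustrative inventions, not any existing standard or system.

```python
from dataclasses import dataclass

# A minimal, purely hypothetical sketch of an "advance preferences" record
# that a clinical decision-support system could consult before acting.
# Every name and field here is an illustrative assumption, not a real standard.

@dataclass
class AdvancePreferences:
    """Values recorded by a patient while they still have mental capacity."""
    allow_ai_assisted_triage: bool = True       # consent to AI-guided decisions
    accept_critical_care: bool = True           # opt in/out of ICU escalation
    prioritise_comfort_over_survival: bool = False

def triage_recommendation(predicted_benefit: float,
                          prefs: AdvancePreferences) -> str:
    """Combine a model's prediction with the patient's own recorded values.

    `predicted_benefit` stands in for the output of some predictive model,
    e.g. an estimated probability of benefiting from critical care admission.
    """
    if not prefs.allow_ai_assisted_triage:
        return "Refer to clinicians: patient opted out of AI-guided triage"
    if not prefs.accept_critical_care:
        return "Do not escalate: patient declined critical care in advance"
    if prefs.prioritise_comfort_over_survival and predicted_benefit < 0.5:
        return "Prioritise comfort-focused care"
    return "Consider admission to critical care"

# Example: a patient who recorded a preference for comfort over survival
prefs = AdvancePreferences(prioritise_comfort_over_survival=True)
print(triage_recommendation(predicted_benefit=0.3, prefs=prefs))
```

The point of such a design, whatever its final form, is that the value judgement sits in the patient’s own record, visible and editable, rather than buried implicitly within the model itself.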

Matt Morgan, Honorary Senior Research Fellow at Cardiff University, Consultant in Intensive Care Medicine, Research and Development lead for Critical Care at University Hospital of Wales, and an editor of BMJ OnExamination. He is on Twitter: @dr_mattmorgan. His first book, Critical, will be published in May 2019.

Paul Dark, Consultant in Critical Care Medicine, NIHR Clinical Research Network National Specialty Lead for Critical Care, and Chair in Critical Care Medicine, University of Manchester. He is on Twitter: @DarkNatter.

Competing interests: none declared.

References:

[1] Foot P. The problem of abortion and the doctrine of the double effect. In: Virtues and Vices. Oxford: Basil Blackwell, 1978. Originally published in the Oxford Review, No 5, 1967.