Can AI fulfil its medical promise?

AI technology has challenges to overcome, but it can be a force for good in medicine, say Luxia Zhang, Guilan Kong, Liwei Wang, and Qi-Min Zhan

The first medical decision support system based on artificial intelligence (AI) was developed in the early 1970s. Although it was never used in medical practice (largely because it was a “standalone” system that predated personal computers), this pioneering trial opened the door to a new world.

Over the past decade, with the increasing digitisation of medicine and advances in technology, applying AI in medicine has become a hot topic. It is now widely recognised that we can harness AI to solve medical problems. AI cannot replace doctors, of course, but it has great potential to help doctors and patients in many scenarios.

Applying AI in medical imaging and pathology is one of these promising scenarios. Studies have shown that the diagnostic accuracy of AI algorithms is comparable to that of experienced medical experts for diabetic retinopathy, heart disease, and certain cancers. An impressive example is the detection of pulmonary nodules in computed tomography scans of the lungs: it usually takes physicians several minutes, while AI based systems need only a few seconds. Machines can also work around the clock, without the fatigue or diminished judgment that humans experience when working long shifts.

AI can support and enhance decision making in more general situations for common diseases. When integrated into clinical decision support systems, AI can provide “evidence based” guidance on diagnosis and treatment, which is particularly important for countries or regions with substantial variation in healthcare quality. These scenarios usually involve less complex algorithms and are therefore relatively achievable. AI can also support doctors’ decision making in complex clinical conditions, providing relatively accurate risk prediction, as well as diagnostic clues and suggestions for further examinations. The caveat, of course, is that these systems are only useful when based on high quality medical literature and real world data.

In these scenarios, AI functions like a “physician assistant,” but its end users can also be patients. Several commercially available AI powered healthcare products claim to provide healthcare of comparable quality to that of human doctors, and they are attracting an increasing number of patient users. AI enhanced clinical decision support systems could also be used by health administrations and the health insurance sector, if integrated into processes for monitoring and improving medical quality.

All these scenarios show the benefits of using AI in medicine. But one crucial question cannot and should not be ignored: is the information generated by an AI system trustworthy? If we are to rely on an AI system to assist our decision making, we must care about its reliability and effectiveness.

Currently, AI algorithms based on deep learning act like “black boxes”: the inner logic of most machine learning models is hard to explain, and the doctors using them are given no explanation for the advice these systems offer. This is not an intuitive way for doctors to practise and raises uncertainty about using AI, since the principle of identifying causes and treating them is integral to medicine. Researchers have also expressed concern about the risk that AI powered diagnostic apps on smartphones will provide false reassurance to patients. As the author of a study on skin cancer apps pointed out, if an app misses red flag symptoms, “patients will not seek professional advice in the early stages” and will miss out on early diagnosis.

A century ago, Sir William Osler said: “Medicine is a science of uncertainty and an art of probability.” The concept of “evidence based medicine” seems the most apt way to deal with the uncertainty that AI’s use in medicine presents. Like any other new intervention in medicine, AI powered systems should have their efficacy and safety carefully evaluated before they are confidently applied in practice and on patients.

Several initiatives have already applied the principles of evidence based medicine to verify the medical information generated by AI systems, and hopefully others will follow suit. As technology advances, AI algorithms will probably become more robust and sophisticated. Yet more comparative effectiveness research is needed to evaluate how well AI algorithms perform in real world settings and how they affect the health outcomes that matter to patients. The performance of predictive AI algorithms also needs to be evaluated in epidemiological or medical research.

Finally, we must not forget that the part of medicine that relies on human care and mutual understanding remains crucial. The knowledge a doctor gathers in consultations, and the decision making that follows, is the product of reciprocal communication between physician and patient. Doctors adapt their questions and advice in response to patients’ verbal and non-verbal cues, so that the treatment path they decide on reflects the patient’s values and preferences. It is hard to imagine how AI could replicate or replace these kinds of interactions. Medicine contains certain “humanity tasks,” such as communication and empathy, that can only be accomplished by humans.

AI technology has challenges to overcome, but it can be a force for good in medicine. If we want to maximise the benefits of AI for patients and the public, then medical doctors, researchers, and AI scientists must work closely together. Using robust methodology, and following ethical norms and the principle of doing no harm, we can apply, evaluate, and improve AI technology in medical practice to help create a healthier world.

Luxia Zhang is a professor of medicine at Peking University First Hospital, and assistant dean at the National Institute of Data Science in Health and Medicine at Peking University.

Competing interests: Luxia Zhang received research funding from AstraZeneca.


Guilan Kong is an associate research professor at the National Institute of Data Science in Health and Medicine at Peking University.

Competing interests: None declared.


Liwei Wang is a professor in the department of machine intelligence, Peking University, with research interests in machine learning and its applications in medicine.

Competing interests: None declared.


Qi-Min Zhan is an academician of the Chinese Academy of Engineering, executive vice president of Peking University, president of the Peking University Health Science Center, and dean of the National Institute of Data Science in Health and Medicine at Peking University.

Competing interests: None declared.