Kieran Walsh: Artificial intelligence in healthcare: are systems and providers deliberately opaque?

Artificial intelligence isn’t inherently opaque; deliberate obfuscation is practised to maintain hierarchies of power, to the detriment of healthcare providers and patients, says Kieran Walsh


When I was a medical student, I was repeatedly told by senior staff not to say the word cancer in front of patients. I heard senior doctors say things like “if you tell patients that they are having tests for cancer, they will be really upset—and they might not even have cancer anyway so you are upsetting them for no reason—and they probably won’t understand that the cancer might be treatable—and so they will give up—so don’t say cancer”. I wondered what we should tell patients instead, and the seniors had some clever answers. They said, “don’t mention the word tumour or carcinoma or neoplasm; everyone knows what they mean these days. Use the phrase ‘potential mitotic lesion’; they definitely won’t understand that.” The seniors were right that patients didn’t understand, and by being deliberately opaque we were able to reassure ourselves that these were complex matters that patients simply could not grasp.

Today things are better. Doctors are more open with patients and tell them their diagnosis and their options for tests and treatments in as much detail as patients want. Our own clinical decision support tool, BMJ Best Practice, is designed to enable this. It provides evidence-based knowledge for doctors, and patient information leaflets based on the same evidence, written to be much more accessible. However, healthcare now has other problems caused by a lack of transparency, and artificial intelligence systems are likely to be one of them.

Thirty years ago, healthcare professionals said, “it might be a mitotic lesion”; today’s technology barons say, “it might be the algorithm” or “the system is a black box” or “it’s nobody’s fault—it’s probably related to the machine learning interrogating the taxonomy incorrectly.” Should we just accept this? In his paper The fallacy of inscrutability, Joshua Kroll argues that we should not. Kroll states that algorithms “are fundamentally understandable pieces of technology.” He says that people create algorithms for specific purposes and that people without expertise in technology can understand them. He thinks that the current culture of opaqueness around artificial intelligence systems is more about power structures in technology and business than about the difficulty of the subject matter.

Kroll gives the example of artificial intelligence systems that decide which online adverts should be served to particular individuals. Some people will inevitably receive an advert that they don’t want and might ask the supplier of the system why they received it. A likely outcome is that they will not get an answer at all. Another is that they will be told that no one really knows why they received the advert that they did. But this last answer is not credible: companies invest massively in building and improving advertising systems, and it is implausible that they do not know the mechanics of how these systems work. Moving from mechanics to a more strategic level, it is obvious that the advert was served because the company wanted to sell something to the recipient and to make a profit from the sale. That is why they received it.

It is not hard to think of analogous artificial intelligence systems in healthcare. Such a system might be developed to ensure more economical use of a hospital’s resources. If it goes wrong, then the technologists who created it should be able to discover how it went wrong and explain this to anyone. And the hospital leadership team that commissioned the tool should give some thought to why they commissioned it in the first place and whether they placed too much emphasis on cost saving.

Artificial intelligence systems will make rapid inroads into healthcare in the next ten years. The healthcare community should ask simple questions of the providers of these systems: what exactly are you doing? And why exactly are you doing this? We should insist on straight answers to these straight questions.    

Kieran Walsh is clinical director of BMJ Learning and BMJ Best Practice. He is responsible for the editorial quality of both products. He previously worked as a hospital doctor, specialising in care of the elderly medicine and neurology.

Competing interests: KW works for BMJ Best Practice, the clinical decision support tool of the BMJ.