Demystifying a part of the Wild West of healthcare AI

By Jordan Joseph Wadden.

The pace of technological development in healthcare is truly amazing. At times it can seem as though technology is moving too fast, which raises familiar concerns about patient safety, job security, and even what it means to be human.

One of the fastest-growing uses of technology in healthcare is the application of artificial intelligence. What sometimes isn’t fully grasped is the range of systems that can count as AI. While there is no uniform set of definitions, we typically see two or three levels or categories. I like to refer to these as “weak AI”, “general AI”, and “strong AI”. Weak AI covers systems built for specific purposes, like Siri or Alexa, while general AI covers systems that can learn and grow beyond their initial application, such as IBM’s Watson. Strong AI refers to systems at or beyond human level, which currently exist only in science fiction – think Terminators or HAL 9000.

Other terms such as “narrow AI”, “super AI”, or “artificial general intelligence” can equally be found in media and academic sources. Likewise, the terms I use are sometimes defined differently from how I have presented them here – for example, some consider general AI and strong AI to be synonymous. So, as you can see, defining AI is truly a Wild West kind of endeavour.

But why does all this matter for healthcare?

As we move to integrate more and more complex technology into our hospitals, clinics, and other patient-centred services, we need to contend with the fact that there are varying definitions of the systems we choose to employ. One area where this can be seen is the application of so-called black box AI systems. These belong to both the weak AI and general AI categories, which already complicates our ability to decide how best to evaluate them ethically.

What is especially tricky with these black box systems is that we don’t know how they reach their recommendations. We know their inputs and outputs, and we may even know their basic architecture, but we cannot examine or explain their inner workings. The problem, then, is deciding what counts as a black box system.
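To make the inputs-and-outputs point concrete, here is a minimal, purely illustrative sketch (not from the paper, and using entirely hypothetical feature names and synthetic data): a small neural network that we can freely query for predictions, yet whose learned internals are just numeric weights with no clinically meaningful explanation.

```python
# Illustrative sketch only: a toy "black box" classifier.
# We can inspect what goes in and what comes out, but the learned
# parameters do not explain *why* a given prediction was made.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical patient features: [age, blood pressure, lab value]
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic "diagnosis" label

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

new_patient = np.array([[0.3, -1.2, 0.8]])
print("Input:", new_patient)                  # we can see what goes in
print("Output:", model.predict(new_patient))  # and what comes out

# The "inner workings" are hundreds of numeric weights -- inspectable,
# but not explainable in clinical terms.
total_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("Total learned parameters:", total_params)
```

Even in this toy case, pointing to the weight matrices answers no clinician’s or patient’s “why” question, which is exactly where the definitional trouble begins.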

In my paper, I survey the existing literature and develop a three-part classification of definitions for black box systems. The first category covers definitions that say a black box is anything opaque to experts – so long as a programmer can understand the system, it does not matter whether anyone else can. The second category covers definitions that say a black box is anything opaque to non-experts, such as the clinicians and patients in a healthcare setting. The third category covers definitions that describe these systems as opaque to some unspecified person or people. These are the most general definitions and provide little guidance.

I argue that this vast spread of definitions can be detrimental to patient care, and thus we need to come to a uniform definition. I develop a definition that I believe fits this need – one centred on patient understanding and on the recognition that patients are a specific sub-group of the non-expert category. To do this, I work through several criteria that I believe any good definition of the black box should satisfy.

The speed of technological development is truly amazing, but we need to be careful that we don’t put patients at risk simply because something seems cool. Being more cautious, and working together with uniform definitions of these Wild West terms, can help us ensure we move forward in the best way possible.

 

Paper Title: Defining the Undefinable: The Black Box Problem in Healthcare Artificial Intelligence

Author: Jordan Joseph Wadden

Affiliation: Department of Philosophy, University of British Columbia

Competing Interests: None to disclose.

Social Media: https://twitter.com/BioethicsBeau

 
