By Soogeun S Lee.
In 2018, the UK government published a Code of Conduct (hereafter "the Code") for using artificial intelligence (AI) technologies in the NHS. The Code contains ten principles that outline a gold standard of ethical conduct for AI developers and implementers within the NHS. Considering the importance of trust in traditional medical practice, I examine how the Code conceptualises trust and whether its presuppositions are problematic. Although my essay pertains directly to trusting AI in the NHS, I believe my argument can be generalised to trusting AI more broadly. I will now briefly try to do so.
How do we trust an AI algorithm that potentially affects our entire livelihood? Cancer-diagnosing AI is one example, but a clearer one may be a self-driving car: how do we trust a potentially life-threatening AI technology? One answer is that we should assess the risk of a self-driving car based on crash reports and safety statistics: if a self-driving car has an acceptably low crash rate, we can trust it. But what if the algorithm was only trained on well-paved, low-traffic roads and you live in a densely populated, pothole-ridden city? Perhaps the voice-control function does not recognise certain BME accents due to a bias in the training data. It may then be reasonable to also evaluate the data used to train the AI. Or perhaps you are an AI engineer who knows that older training methods can lead to unexpected results, and you are therefore concerned about whether the self-driving car has been trained using newer, safer methods. Here, it would be reasonable to evaluate the algorithm's training methodology on top of everything else. We can see that conceptualising trust in this way, for something as complex as AI, requires time and varying levels of information and expertise.
Another way we could justify trust in AI is on an all-things-considered basis. By this I mean considering all immediately available information, including your subjective values and feelings. Continuing the example, you can still decide to trust a self-driving car without knowing the exact crash statistics by drawing on how you feel about the brand of the car, recent headlines in the news, personal anecdotes, and so on. This is arguably how we justify trust in the majority of cases: we rarely have all relevant information about risks at hand, and even when we do, we interpret it subjectively, relying on our values, beliefs, and previous experiences.
To support this claim, consider trusting taxis. We do not need to know the statistics on abductions and crashes to believe that a taxi will take us to our destination safely. Instead, we intuitively form an assessment of risk based on everything we perceive. We look at the condition of the car and we talk to the taxi driver; a sturdy-looking car with a local driver may fill us with confidence. We also draw upon past experiences. A female passenger who has previously experienced inappropriate or dangerous situations in taxis may decide to distrust them entirely. A BME individual who has experienced racial abuse may be less inclined to trust taxis. Trust conceptualised in this way thus accounts for the multifactorial, individual, and situation-dependent justifications of trust.
In my essay, I show that the Code explicitly recommends the first method (rationally justified trust) to foster public trust in AI. As I have shown, justifying trust in this way can be intellectually cumbersome and requires a degree of statistical and technical knowledge. I argue that the second method (value-based trust) is a more practical and fruitful way of fostering trust. I therefore recommend that the Code introduce further guidelines that support value-based trust in an authentic and trustworthy manner.
Author: Soogeun S Lee
Affiliations: Cardiff University
Competing interests: None
Social media accounts of post author: Twitter: @SoogeunS