By Lars Lindblom and Erik Gustavsson
AI is coming to health care. But what is AI? Ethem Alpaydin’s excellent Machine Learning: The New AI provides a helpful definition: programming computers to do things that, if done by humans, would be said to require “intelligence”.
This definition captures something important about how to conceive of AI going forward. Consider, for example, the following quote from a promising new approach to human-AI cooperation: “There are likely to be significant individual differences in humans’ willingness, or desire to engage sexually or romantically with an AI”.
The Alpaydin definition puts computer programming at center stage. If one drafts a revised version of the quote with this at its core, one gets the following: There are likely to be significant individual differences in humans’ willingness, or desire to engage sexually or romantically with a computer program.
The two versions of the quote refer to the same things, but their meanings differ. Intuitively, it may seem strange to fall in love with a computer program, yet it might make more sense to be romantically engaged with an AI. When we think about AI, we tend to picture an agent just like us, or better, whereas the concept of a computer program suggests more technical issues, such as word processing or hardware control.
These technical issues are important, and how we solve them determines what kind of AI agent we end up with. In a recent paper, we apply this perspective to an issue with potentially far-reaching impact on health care, namely AI as moral advisor. We take as our starting point a promising approach developed by Jana Schaich Borg, Walter Sinnott-Armstrong and Vincent Conitzer in their book Moral AI.
They propose a highly plausible approach to building morality into AI. Their approach focuses on the training data for machine learning models and consists of five steps: survey people’s moral views; use preference elicitation methods to ascertain the weights of different considerations; idealize those preferences to avoid problems of bias arising from ignorance; aggregate individual preferences into group judgements; and model moral decision-making.
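To make the aggregation and modelling steps concrete, consider a minimal sketch in Python. It is purely illustrative, not the authors’ implementation: the feature names, the numbers, and the choices of simple averaging and linear scoring are our assumptions, and steps one to three (surveying, elicitation and idealization) are taken as given.

```python
from statistics import mean

# Hypothetical idealized weights elicited from three respondents
# (steps 1-3 assumed already done). Feature names are illustrative only.
respondents = [
    {"expected_benefit": 0.6, "waiting_time": 0.3, "patient_age": 0.1},
    {"expected_benefit": 0.5, "waiting_time": 0.4, "patient_age": 0.1},
    {"expected_benefit": 0.7, "waiting_time": 0.2, "patient_age": 0.1},
]

def aggregate(preferences):
    # Step 4: average individual weights into a group judgement.
    return {k: mean(p[k] for p in preferences) for k in preferences[0]}

def moral_score(case, weights):
    # Step 5: a simple linear model of moral decision-making.
    return sum(weights[k] * case[k] for k in weights)

group_weights = aggregate(respondents)
case_a = {"expected_benefit": 0.9, "waiting_time": 0.2, "patient_age": 0.5}
case_b = {"expected_benefit": 0.4, "waiting_time": 0.9, "patient_age": 0.3}
print(moral_score(case_a, group_weights))  # higher score = recommended
print(moral_score(case_b, group_weights))
```

Even this toy version makes our worry visible: everything the model “knows” is a ranking distilled from preferences, with no trace of the reasons behind them.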
In our paper, Moral AI in Medical Decision Making, we draw on three distinctions from moral philosophy to suggest that there are better paths forward towards moral AI. First, we distinguish between preferences and reasons, and try to show that we would do well to conceive of the appropriate training data as information about reasons rather than preferences. Second, we distinguish between rankings and deliberation to illustrate that there are ways to incorporate a richer notion of deliberation and information in the development of AI than the one inherent in preference elicitation. Finally, we use the difference between prediction and judgement to show that there is a danger of misinterpreting the advice one gets from an AI if one conflates the two.
We take this to imply that taking reasons, deliberation and judgement as starting points for developing moral AI will open up new research avenues and bring us closer to the goal of AI that can provide moral advice. And if AI is coming to health care, we had better make sure that the computer programs we get can actually do what we need help with.
Article: Moral AI in Medical Decision Making
Author(s): Lars Lindblom [1] and Erik Gustavsson [1, 2]
Conflict of Interest: None to declare
Social Media: Lars Lindblom @lars-lindblom.bsky.social