Westworld, ethics and maltreating robots

By Colin Gavaghan and Mike King

This week saw the return, for a third season, of the critically acclaimed HBO series Westworld. WW’s central premise in its first two seasons was a theme park, sometime in the near future, populated by highly realistic robots or ‘hosts’. Human guests can pay exorbitant sums to interact with these robots in a huge range of ways. In the ‘western’ themed area – after which the show is named – guests can choose to be white-hatted heroes or black-hatted villains. The good guys get to be brave, chivalrous, honourable and generally decent. The bad guys, on the other hand, get to indulge in the darkest pits of human depravity, including murder, torture and rape.

Of course, they aren’t really murdering, torturing and raping, because their ‘victims’ are just machines. Sure, they look human, and they are highly convincing in their reactions of pain, fear and so on. But, the guests are assured, there’s no actual suffering going on, because the ‘hosts’ aren’t the sorts of things capable of suffering.

It hopefully isn’t too much of a spoiler to say that things turn out to be a bit different from that. But despite exciting developments in AI and robotics, we’re still some way from creating thinking, feeling machines. It’s also, surely, safe to say that, if we ever did create such beings, then we would owe them moral obligations not to treat them in the way that the more sadistic guests of Westworld treat the hosts. Insofar as they would be capable of suffering, we would have duties not to treat them cruelly. And at least insofar as they had inner lives like our own, we would have duties not to kill them.

Some academics, though, have recently started asking a different question. Might we have duties with regard to robots and AIs even if we know they can’t feel or think? And if we do, could there be a case for these to be enforced via legal duties? These arguments aren’t just about the super-sophisticated robots of Westworld, but about some here-and-now ‘social robots’, built to interact with us in various ways, including providing medical care and support, some specifically designed to resemble cute live animals.

Any such duties wouldn’t be owed to the robots themselves. There are different accounts of why we have duties, and to whom – or what – they can be owed, and we won’t get into those here. For the purposes of this argument, we will stipulate that all robots we refer to, lacking any sort of internal mental life, are not morally considerable for their own sake. We’ll also assume that these robots are very like us in form, and to some extent behaviour. They are humanlike, but still distinguishable from human persons.

Given this, are there good reasons for us to have duties regarding such robots, or other reasons not to ‘maltreat’ them; that is, to treat them in ways that would be morally and legally objectionable or impermissible?

Reason 1: Heightened risk

Behaving in such ways towards robots – particularly those which behave and react in humanlike ways – would heighten the likelihood that we would act in similar ways towards actual humans. This may be via a process of emotional hardening; we may become more callous towards human suffering by ‘practising’ on mock humans. Or it may be by strengthening our darker desires by indulging them in virtual settings.

A different reason for risk being heightened is that there’s a greater chance of erroneous bad conduct. If harming robots is permissible, we might think that we’re maltreating a robot when in fact we’re harming a human. In episode 2 of Westworld, Angela responds to a guest who is unsure whether she is human or robot: “Well if you can’t tell, does it matter?” If we can’t reliably distinguish human from robot, that might give us a pretty good reason not to maltreat something that we merely think is a robot.

Reason 2: Harm to observers

For Kate Darling, the likelihood that others will be distressed by seeing us mistreating a humanlike robot would be reason enough not to behave in that way. Maybe even reason enough to ban such behaviour: “societal desire for robot protection should be taken into account and translated into law as soon as the majority calls for it.”

Whether most people would really feel this way isn’t certain, but it’s not implausible. Boston Dynamics’ headless quadrupedal Spot doesn’t look anything like a human, and not even much like an animal, but many of us still feel an emotional response when we see it being kicked around in ‘stress tests’.

A harder question is whether the mere fact of majority disapproval should provide a persuasive reason for moral disapproval or legal prohibition. There’s a long tradition of moral and legal theory addressing that question, going all the way back to John Stuart Mill, via the Wolfenden Report.

Writers like Joel Feinberg have argued that causing offence can sometimes justify legal prohibition. Even if it falls short of causing actual harm – Mill’s famous threshold for justified criminalisation – offence is at least an unpleasant experience that we should (at least in non-trivial cases) be able to choose to avoid. Most legal systems have some sort of rules against very offensive conduct in public. But the protection is against being unwillingly exposed to the behaviour, not against it happening at all.

Maybe there would be a case, then, for a ban on mistreating humanlike robots – or even robots like Spot, if they evoke similar sorts of feelings – in public, or in places where people likely to be distressed by the sight (children, for example) would see it. It’s certainly not unknown for the law to impose those kinds of restrictions on various forms of behaviour. But being offended by the mere idea that someone, somewhere, was doing something that turned our collective stomachs wouldn’t be enough to justify a ban.

Reason 3: Legal moralism

Not all of the arguments for rules against such behaviour focus on its bad consequences for human beings. ‘Legal moralism’ is the idea that the law might sometimes be justified in prohibiting behaviours that cause no harm to anyone else, even indirectly, simply because they are morally reprehensible. In a JME blog post, John Danaher has addressed a related concern about robots programmed for particularly troubling forms of sexual activity: sex robots designed for fantasies of child abuse or rape. Danaher argues that a case could be made for banning such activities on the grounds that they are harmful to the “moral character” of those who carry them out.

But that would involve a decision that the conduct in question is inherently morally wrong. And where it doesn’t actually harm anyone, infringe their rights, or cause offence, what would be the basis of that conclusion? Guests in Westworld would presumably argue that the element that makes murder and cruelty immoral isn’t present in the absence of an actual victim. The claim that it’s immoral even to pretend to do those things needs some sort of justification – and raises a bunch of questions about those of us who enjoy shooting up opponents at paintball or Fortnite!

Our concern

We’ve looked at three possible arguments in favour of duties to avoid mistreating humanlike robots. Whether they’re persuasive will depend partly on evidence, but also on the competing weights of the different concerns involved.

But whether or not we find them persuasive, we want to suggest that there may be another reason to go cautiously when imposing rules around humanlike robots. It has to do with a point made by Kate Darling, about ‘lifelike’ and ‘alive’ becoming subconsciously muddled. That muddling, in fact, might turn out to be one of the biggest threats posed by AI and robots.

Depending on their design and capabilities, we can engage with robots emotionally, and ‘empathise’ with them when they behave as though they are experiencing emotions. In some situations, that might be a good thing. But it’s also likely that it could render us susceptible to emotional manipulation in our interactions with some robots, even when we know we’re dealing with a robot. As AI expert Joanna Bryson has warned, ‘If people think of robots as humans, a lot of things could go wrong. It could open them up for economic exploitation. They might think they need to protect the robot or feel badly for turning it off.’

If Bryson is right about that danger, then a skill we’re likely to need for the future is the capacity to distinguish between thinking, feeling human beings or other animals, and AIs that merely resemble them: that are programmed to mimic their responses, but which have no internal life. Not only to distinguish between them intellectually, but emotionally as well. The AI chatbot trying to sell us its wares doesn’t actually care if we buy them; the robot dog isn’t really sad if we don’t play with it (or refuse to buy it an expensive robot brother or sister!).

This might involve a fine-tuning of our empathetic and sympathetic reactions. And that might mean turning down the emotional volume when dealing with humanlike robots and AIs. Doing so may also weaken some of the reasons for prohibiting the maltreatment of robots: we may be less likely to mistake a human for a robot in ethically risky ways, and we may be less offended by some treatment of robots.

It doesn’t, of course, mean that we have to treat them the way the black-hatted guests in Westworld treat the hosts. But there might be a good reason to be wary of introducing moral taboos or legal rules that further blur the distinction between genuine claims on our moral concern, and false or manipulative claims on our insufficiently tuned instincts.


Authors: Colin Gavaghan1 and Mike King2

Affiliations:

1 Faculty of Law, University of Otago, Dunedin, New Zealand

2 Bioethics Centre, University of Otago, Dunedin, New Zealand

Competing interests: None.

Social media accounts of post authors: Colin Gavaghan Twitter Facebook
