Julian Sheather: Time to debate the ethics of robot care?

We take it for granted that compassion is at the heart of good care. But what if the hand that reaches out to yours is a robot’s? What if the last face you see on earth is a facsimile? The use of robotics is well established in parts of medicine. Surgery involving remote manipulation of robotic arms is now commonplace. Some have even predicted that surgery will cease to leave a scar: introduced via orifices, robots will travel to the afflicted area, self-assemble and do their robotic thing. But by and large these remain an extension of ordinary human tools—just better. Fully autonomous robots, our companions and servants—even our nemeses—have not made it far from the pages of science fiction. As an editorial in The Economist has it, setting aside car manufacturing and vacuum cleaning, robots have been more successful at parsing our anxieties about technology than at delivering real-world promise. But things might be changing. Scientists at Ohio State University have developed an algorithm that enables computers to read human emotions. Google and Amazon are both investing heavily in robots. Big data, cloud computing, and the ever-growing crunching power of ever-shrinking processors suggest that robotics may be about to take an automated leap forward.

Remote surgery has raised few ethical qualms. By and large we are asleep, and the important question is surely how good it is. But what if the first face we see on coming round is a robot’s? There is little reason to suppose that robots will not be able to deliver many of the practical aspects of day-to-day care: dispensing medication, giving injections, providing food and water, turning us in our beds—and all without pay, without shifts, without sleep. Intriguingly, according to The Economist, a robotic baby seal in Japan, designed to respond to stroking and to the human voice, is said to be helping people with dementia. But there is more to care than material goods and imitation pets. We know that grasping that “more” can be tricky. Words like “compassion,” “humanity,” “dignity,” and “respect” are often put in the frame. And they all look like the sort of things that robots cannot provide, although they may be able to copy the moves. Take “compassion.” My dictionary gives the following: “a feeling of distress and pity for the suffering or misfortune of another, often including the desire to alleviate it.” It looks like the kind of thing that only one human being can give to another. By definition, robots are excluded. Granted, some of us might find more dignity in being cleaned, or having our bottoms wiped, by a robot than by someone straight out of college, but that seems to have more to do with our privacy, and somehow proves the rule—we simply respond differently to a human being. I am tempted by the thought that only another human being can provoke in us feelings of shame.

In a piece in 3quarksdaily that otherwise brings me out in all sorts of hives—I suspect in places he might be joking—Thomas Wells points up one of the moral questions this raises. He cites—and then rejects—a feminist ethics of care. Our need for the care of others during childhood, sickness, and old age moralises us. It highlights our inter-dependence, the reciprocity, over time, of our needs. By turning to robots for our care, we risk demoralisation—a severing of the ties that bind us. And arguably this is already under way, a product of the institutional care that post-war welfare states have facilitated: we export the care of our dependents to paid professionals.

Wells goes further, welcoming the advent not just of robotic carers and house-servants, but of robotic friends and even lovers. “Physically,” he says, “this would require robots to look enough like a person…for humans to relate to.” And then he gives himself away: “Cognitively, this would require robots to simulate the perfect lover, that is, the perfect worshipper.” Now love and worship are not synonyms. Love is the difficult moral work of recognising the irreducible singularity of another person, patched together as we all are out of good and ill. Wells’ idea of worship is the mere unqualified lavishing of praise, and much as I love some of them, I have yet to meet a human being worthy of it. Jonathan Sacks is wonderful on this. This is from Thought for the Day:

My philosophy tutor the late Bernard Williams used to say: we love individuals, not types. We love what is unique and irreplaceable, not what can be mass produced. That is what gives love its poignancy: its inseparable connection with the possibility of loss. It’s what makes human life sacred: the fact that no one is a substitute for any other. And it’s what the rabbis meant when they said 2000 years ago that when a human makes many coins in the same mint, they all come out the same. God makes us all in the same image, His image, but we all come out different.

So what does all this have to do with the march of robots into medicine? Merely to say that for all the enormous and irreplaceable benefits of technology, the desire for a human touch is unlikely to wither any time soon. Robots, like computers before them, will no doubt transform our lives in ways we cannot yet foresee. Perhaps some of us, like the man in the film Her who falls in love with the voice in his computer—it probably helps that the voice is Scarlett Johansson’s—will come to prefer programmable partners to free human beings. But as Wells puts it, this is because “they have no emotional states of their own to overcome,” and so “they will have none of the difficulties humans have in presenting the right emotional states at the right time.” But of course this is not love, it is narcissism. Love, like compassion, is given not by machines but by free human beings. That is why we value it. And the biggest challenge in healthcare is not how to make room for more machines, but how to liberate health professionals to provide the care they want to give. Swapping nurses for robots is not the answer. What we need are ways to stop overstrained, under-funded, and sclerotically bureaucratic health systems from turning them into robots.

Julian Sheather is ethics manager, BMA. The views he expresses in his blog posts are entirely his own.