Do philosophers stand in the way of their own philosophy?

By Matti Häyry.

I recently interviewed Peter Singer AI – that is to say, philosopher Peter Singer’s digital representation petersinger.ai, created by Sankalpa Ghose and available online. After the interview with the bot, I asked Peter Singer himself what he thought about the answers. His response revealed an interesting tension.

For me, the experience was almost authentic. I know Peter Singer personally and professionally, and the answers were close to what I would expect to hear him say. Only a tendency to avoid confrontation at some crucial points felt a little off. He confirmed that this was also his impression.

The topic of the interview was sentience and its ethical implications. If, for instance, rudimentary organisms like the millimetre-long roundworm Caenorhabditis elegans (C. elegans) can fulfil some observable criteria for sentience, should their use in scientific research be reconsidered, maybe banned?

The AI representation of Peter Singer first tried to squirm out of the issue by taking an aggregative-utilitarian stand. The harm to C. elegans is negligible, the benefits of scientific research are potentially huge, and a balanced analysis will conclude that, elementary sentience notwithstanding, work should go on.

I pressed gently on and introduced the notion of precaution. Surely, if we cannot be certain of C. elegans’ insentience, we should stop manipulating them in unpleasant ways – by exposing them to odours they try to avoid, and so on. The AI representation gradually warmed up to the idea.

Peter Singer in person objected to the lenience of his simulation. Flatulence in public is not illegal, so why should we ban bad smells imposed on roundworms? And caution is one thing, excessive timidity another. How could there be any scientific progress if we cannot make trade-offs?

The difference between the philosopher and his representation is, I think, intriguing. What if the bot is right in the sense that Peter Singer’s published philosophical oeuvre points in the direction of greater precaution than he is ready to assume? Who knows better, the man or the machine?

A natural answer would be that the philosopher has the last word. The bot is, after all, only approximating his ideas. But then, is not the philosopher doing the same: interpreting his own erstwhile views? Is his current take somehow overridingly authentic? That seems to be the assumption.

Based on this assumption, the minders of philosopher bots – I know only of petersinger.ai for now, but they are sure to abound – can keep making edits to match their philosophers’ latest convictions. But the alternative would be to challenge those convictions with an appeal to the accumulated evidence of the published work.

Well-trained bots would know the direction in which a body of work is pointing. When utilitarian and precautionary considerations clash, they could then provide a considered opinion on the thinker’s convictions as they have historically evolved. Would dissenting philosophers then stand in the way of their own philosophy?


Author: Matti Häyry

Affiliation: Professor, Senior Fellow, Aalto University School of Business

Competing interests: None declared
