Does oxytocin come as a liquid? I can only assume that it does, and that it’s possible to drown in a vat of it. I’ve come to this conclusion after reading this interview with Patricia Churchland in The Chronicle of Higher Education. It ought to come as no surprise to those who’re familiar with Churchland’s reductionist approach to metaphysics that she thinks that the same kind of reductionism can be applied to ethics; but I’m going to have to get hold of a copy of her new book, Braintrust, just to make sure that she really is as reductionist as she appears here. Because, based on the evidence of the interview, her position is… um… odd.
Things start off well enough: her picture seems to have something in common with Aristotle’s, inasmuch as
morality is not about rule-making but instead about the cultivation of moral sentiment through experience, training, and the following of role models.
There’re plenty of people who’d disagree with this, but it’s not a wildly outré position, and there’re plenty of people who’ll accept it, too. But then…
This view stands in sharp contrast to those philosophers who argue that instinctual reactions must be scrutinized by reason
Really? It’s hard to tell whether this is an error made by Christopher Shea, the writer of the interview, or by Churchland; but it’s not hard to tell that it’s an error. Are we really supposed to accept that rational scrutiny is opposed to, and perhaps incompatible with, a vaguely Aristotelian position? That’d be a surprise to Aristotle; but it’d also be a surprise to just about everyone else. If it’s true, we might all just as well go home – and that “we” includes Churchland herself, since she seems to be scrutinising moral sentiments through the prism of reason. You don’t have to be an out-and-out rationalist to agree that reason helps us make sense of actions and values, and is thus a big part of moral enquiry. There’s no contrast to be had.
The villains of her books are philosophical system-builders—whether that means Jeremy Bentham, with his ideas about maximizing aggregate utility (“the greatest good for the greatest number”), or Immanuel Kant, with his categorical imperatives (never lie!), or John Rawls, erector of A Theory of Justice.
Fine: but the alternative to system-building is not reductionism. (And what about Hegel, who could easily be read as having an account in which moral “spirit” is cultivated in a staggeringly systematic way? Oh, yes – one other thing: imperatives? Imperatives? In the plural? That’d be news to Kant…)
Churchland seems to fall into a trap of what we might want to call “naive neurology”; she makes great play of the role of oxytocin in promoting empathetic behaviour, and the moral importance of that role. Hence
“It all changed when I learned about the prairie voles,” she says—surely not a phrase John Rawls ever uttered.
She told the story at the natural-history museum, in late March. Montane voles and prairie voles are so similar “that naifs like me can’t tell them apart,” she told a standing-room-only audience (younger and hipper than the museum’s usual patrons—the word “neuroscience” these days is like catnip). But prairie voles mate for life, and montane voles do not. Among prairie voles, the males not only share parenting duties, they will even lick and nurture pups that aren’t their own. By contrast, male montane voles do not actively parent even their own offspring. What accounts for the difference? Researchers have found that the prairie voles, the sociable ones, have greater numbers of oxytocin receptors in certain regions of the brain. (And prairie voles that have had their oxytocin receptors blocked will not pair-bond.)
“As a philosopher, I was stunned,” Churchland said, archly. “I thought that monogamous pair-bonding was something one determined for oneself, with a high level of consideration and maybe some Kantian reasoning thrown in. It turns out it is mediated by biology in a very real way.”
The biologist Sue Carter, now at the University of Illinois at Chicago, did some of the seminal work on voles, but oxytocin research on humans is now extensive as well. In a study of subjects playing a lab-based cooperative game in which the greatest benefits to two players would come if the first (the “investor”) gave a significant amount of money to the second (the “trustee”), subjects who had oxytocin sprayed into their noses donated more than twice as often as a control group, giving nearly one-fifth more each time.
Well, OK – but so what? Let’s agree that oxytocin makes people pro-social and loved-up. What’s been missed (by Churchland, by at least some people who think we should take seriously the prospect of “moral enhancement” by means of the manipulation of brain-chemistry, and by at least some people who think that it might be possible to “cure” certain kinds of propensities towards undesirable behaviour) is that none of this eliminates the need for moral philosophy. Rather, it ignores (or begs) all the interesting philosophical questions: questions like why it’s better to be pro-social, why altruism is good, and so on. Just about everyone will agree that they are, but that won’t tell us why, or whether just about everyone is correct. Nor will it tell us anything about unusual situations in which we might think that the time for fellow-feeling is over: think of Stanisław Lem’s “Altruizine”, or John Harris’ example of the passenger on the aeroplane who cannot bring himself to act aggressively to disarm the hijacker. It’s not good enough to get all gooey about oxytocin because it’s nice to be nice.
Owen Flanagan defends Churchland on the grounds that her approach “leads to a ‘more democratic’ morality” – which assumes, in turn, that morality is a democratic thing, or that “democratic” morality is better than “non-democratic” morality, whatever that might be (we wouldn’t complain about non-democratic maths: why is morality different?). But what’s the standard of evaluation and comparison here? And was that standard itself “democratically” decided, or does it come by stipulation? If the former, we’re heading into circularity; if the latter, why accept only one stipulation? If you can have one, why not more than one? Why appeal to “democracy” at all? (My hunch is that Flanagan is committing the increasingly common fallacy of linking democracy with virtue.)
Jesse Prinz is also quoted in the article, noting that
“If you look at a lot of the work that’s been done on scientific approaches to morality—books written for a lay audience—it’s been about evolutionary psychology. And what we get again and again is a story about the importance of evolved tendencies to be altruistic. That’s a report on a particular pattern of behavior, and an evolutionary story to explain the behavior. But it’s not an account of the underlying mechanism. […]”
Nevertheless, he says, how to move from the possibility of collective action to “the specific human institution of moral rules is a bit of connective tissue that she isn’t giving us.”
Exactly. Teleological reasoning on its own won’t tell us the important stuff. And, as Guy Kahane points out, collective action to bring about a particular moral aim is all well and good – but only if everyone agrees on that moral aim. The substantial moral questions – is this permissible? Should I do that? – won’t go away, and you can’t answer them by appealing to brain-chemicals that encourage pro-social behaviour. (“What should the time limit for elective abortions be?” “Oh, whatever’s nicest…” See? It doesn’t work.)
Damnit. I think I’m going to have to read Braintrust. And I think it’ll be a frustrating exercise.