A piece appeared in The Atlantic a few days ago that aims to prick the perceived bubble of professional ethicists. In fact, the headline is pretty hostile: THE HYPOCRISY OF PROFESSIONAL ETHICISTS. Blimey. The sub-headline doesn’t pull its punches either: “Even people who decide what’s right and wrong for a living don’t always behave well.”
I know that headlines are frequently not written by the person whose article they head, and so these won’t tell us much about the article – but, even so, I’m beginning to twitch. Do I decide what’s right and wrong for a living? I don’t think I do. I possibly thought that that’s what an ethicist does when I was a fresher, or at school – but I’m not certain I did even then. And even if I did, I discovered pretty quickly that it’s quite a bit more complicated than that. For sure, I think about what’s right and wrong, and about what “right” and “wrong” mean; and I might even aspire to make the occasional discovery about right and wrong (or at least about how best to think about right and wrong).* But as for deciding what is right and wrong? Naaaah.
Anyway: to the substance of the piece, which – to be fair – is more moderate in tone, pointing out that “those who ponder big questions for a living don’t necessarily behave better, or think more clearly, than regular people do”. That’s probably accurate enough, at least a good amount of the time. I’d like to think that I’m thinking better about a particular problem than most people when I’m working on it; but I’m also thinking better about it in that context than I would be at other times. (Ask me about – say – genetic privacy while I’m drafting a section of a paper on genetic privacy, and I’m your man. Ask me while I’m making pastry… not so much.) If we allow that I’m better at dealing with (a) specific moral question(s) while “on duty”, that won’t mean I’m not susceptible to the same intellectual shortcuts and fallacies as everyone else at least most of the rest of the time. I’m probably almost as susceptible to them even when I am on duty. I’d assume that the same applies to others in the profession, too.
The article does make great play of the apparent inconsistencies between what ethicists say and what they/we do. So there’s the finding about how many more say that eating meat is morally problematic than actually avoid it, and the chestnut about how ethics books are the ones most frequently stolen from libraries.** At least there are decent sources cited – peer-reviewed papers like this one that are philosophically informed, to boot.
So: ethicists aren’t reliably better behaved than others. I don’t think that should surprise us, though. But there are a couple of questions into which we might still want to dig more deeply. The first has to do with whether this amounts to hypocrisy. I don’t think it does – any more than a discovery that a fireman once left a chip-pan unattended while answering the phone makes him a hypocrite. To be a hypocrite is to be an actor – to use public statements as a cover for incompatible private behaviour. The intent to give a false impression is important. One can be inconsistent without being a hypocrite.
But this leads to more interesting questions that have to do with what it is to say that something is morally problematic, with what it takes to be “better behaved” than others, and with why ethicists should be expected to be better behaved. Here, I’ll move from the academic to the lay understanding of ethics – and, inevitably, it’ll mean touching on The Paper Of Which We Do Not Speak.
Take, for example, the finding that many people think that eating meat is morally bad, but do it anyway. For any reasonably sophisticated account of ethics, that shouldn’t be a problem: something can be morally bad without that meaning that there’s no good non-moral reason to do it. If you are very fond of meat “aesthetically”, then you can acknowledge that there is a reason not to eat it that is more-than-matched by a reason to eat it. Moral reasons aren’t trumps – or, at least, if they are, it’s going to take a lot more work to establish that they are.
Indeed, this is what makes something a moral problem in the first place. Bad behaviour is not, I think, about setting out to do something that one recognises one has an overarching reason not to do; rather, it’s about misidentifying the most pressing reasons to act or refrain from acting in a given way. Our ethicist has a good reason to return the book to the library; but she might have all kinds of reasons not to, and they might turn out to be more important. The question is frequently one of how one responds to competing reasons, and how one ought to evaluate them. Note that we generally expect “moral” reasons to come up trumps by default – but, again, that’d need further argument.
And that brings us to the direct question of ethicists’ behaviour. I suppose that it does make sense, in a way, to expect an ethicist to behave better than others; after all, if he is thinking about reasons to act, then he would – eventually – come up with a reason to do x rather than y. Unless you’re a particularly strict externalist about the relationship between moral beliefs and motivation, you might expect that it’d be difficult to acknowledge a reason for acting without its making some difference to your preferences. So even if, on a given occasion, a belief makes no difference to behaviour, we might still think that reflecting on morality is likely to make some difference to it, howsoever small.
BUT… (and here’s the important bit) who’s to say that considered judgements about behaviour would match unconsidered expectations all that well? Even if we think that being a moral philosopher is going to make a person better behaved, it won’t follow that we know anything about what better behaviour actually demands. That is: a period of reflection might lead an ethicist to decide that the best thing to do is something that commonsense morality would reject out of hand. (There’s a common assumption that morality can be equated with a particular kind of altruism, and a particular kind of liberal outlook. It might turn out to demand them. But it’s not the same.) At the very least, it might turn out that something that commonsense morality holds to be obligatory or abhorrent is neither, but is instead a matter of moral indifference. Hence The Paper Of Which We Do Not Speak: whether or not you agree with the conclusion, the fact that it’s counterintuitive won’t tell us that it’s mistaken.
If we don’t accept this possibility, we seem to be left with the idea that the role of the ethicist is to confirm, and perhaps provide elaborate accounts to back up, our intuitions. But why should we accept that? That’d be like saying that the role of the medic is to reassure us that we really can catch a cold simply from being cold. Taking that role, though, would make the ethicist’s job at best rather pointless – and maybe actually meretricious.
And that’s why I didn’t return the books to the library, your honour.
*All subject to teaching and near-endless administrative tasks being got out of the way: the idea that academics get much time to think about or discover anything much is rather quaint…
**Incidentally: do we know the books’re stolen? That implies intent. It might just be that they’re mislaid. Or – and I do like this option – that there’re guerrilla ethicists simply moving things around libraries, on the basis that management students really do need to be exposed to Sidgwick by any means necessary.