On Nailing One’s Colours to the Mast

“You’re a Kantian,” people in my department tell me.  At least, I think that’s what they say – I’m assuming that there’s no comma before the final syllable, and that I’ve got all the vowels right.  I don’t think I am, actually (a Kantian, that is – I couldn’t comment on the other option).  I’m fascinated by Kant, and think his moral system is magnificent, but I’m not sure it’s correct: I don’t buy the stuff about the nature of the will; I think he occludes the difference between reason and reasonability; and so on.  Rather, my hunch is that it’s like a magnificently crafted carriage clock in which some of the more important cogs have the wrong number of teeth.  The mechanism may work smoothly and be a wonder of engineering, but the piece itself is deeply unreliable.

I’m not sure what my theoretical starting point is – I suspect it’s a kind of weirdly mutated Aristotelianism, laced with a bit of Kant (who, I think, has more in common with Aristotle than he’d ever have admitted anyway).  Having said this, I’m reasonably sure that I’m a non-consequentialist, and quite possibly an anti-consequentialist to boot.

At least, that’s what I’d say if you asked me.  In practice, though? Well, I’m not so sure.  I suspect that many people are attracted to consequentialism because an appeal to outcomes is not only theoretically simple and metaphysically thrifty, but also makes sense in the light of everyday, unconsidered moral judgement.  Certainly, when it comes to certain big questions, the lure of consequentialism is difficult to resist.  So, for example, when they come across Kant’s murderer at the door, they’ll think that lying is so obviously the correct option that it’s barely worth articulating, and that this is because of the outcome.  Even Kantians like Korsgaard at least entertain the possibility that Kant simply blundered here.  And I’d place myself in the group of people who, in practical situations and in some debates, find it straightforward to choose the most optimific course of action.  There are situations when sticking to my non-consequentialist guns would be not only obtuse (which doesn’t bother me), but also really, really hard (which does).  Nuts to my intellectual non-consequentialism: put me on the spot, and it turns out that I’m at least more consequentialist than I might like to admit.

So I was prepared to admit that even avowed non-consequentialists sometimes (frequently?) betray themselves as having at least one consequentialist bone.  And then I came across this post by David Sobel, in which he – intellectually a consequentialist – suspects that he may not always be consequentialist in practice.

I am intellectually persuaded by the arguments for Consequentialism. However, like most people in that situation, by my own lights I fail to live up to the demands of that moral theory by a wide margin. And again, like most in my situation I suspect, this is a source of disquiet but not persistent hand-wringing. But there is another moral view one might attribute to me. It is more deontological in tone. And this other moral code is connected much more directly to emotional reactions such as guilt and moralized anger. If others cheat in a business deal or steal (except in desperation) and I am close enough to the situation, I will likely have an engaged moral reaction to such a person. I will speak badly of them, refuse to hang with them, and think poorly of them. Yet the decently well-off person who fails to contribute much money to an effective charity does not elicit such reactions in me to a similar degree. Similarly, while I myself regularly fail to be governed by consequentialist morality in my actions or my emotional reactions to my or others’ actions, I am quite effectively governed in both my actions and my emotions to this other moral view. My conscience, let’s call it, effectively keeps me from doing a wide range of things such as lying, cheating, stealing, hurting and so on. In most cases I simply would not dream of doing such things and if I did somehow do some such thing (or even fear that I did) I would likely feel really bad about it. Such governance in deed and action would, if I believed in commonsense (more deontological) morality, pass for tolerable moral motivation.

So where I’m saying that I think of myself as a non-consequentialist who’s sometimes tricked into consequentialism by reality, Sobel seems to be saying the opposite: that he’s a consequentialist who’s tricked into non-consequentialism by reality.  And, thinking about it, I wonder if the pattern may be repeated for other consequentialists.  They’ll argue for hours to defend all kinds of consequentialist claims for as long as they’re in the seminar room – but take them outside, and things aren’t so straightforward.

One possible explanation for this is that people simply aren’t very good at being moral agents.  Quite aside from whether consequentialists or non-consequentialists are correct in theory, the charge might be that we’re all likely to be inconsistent.  That’s possibly true – but it’s not the only explanation.  Another is that we’re mistaken to put too much faith in any one theoretical approach, because there will always be situations that don’t fit, and in which we don’t behave as our own ostensible theory demands; and maybe that’s not a failing on our part, but an indication of a perfectly acute kind of moral vision that gets obscured if we’re too devoted to one particular methodological approach.

Anti-theory is reasonably popular in bioethics: there’re whole forests of stuff taking a casuistical approach to a given problem.  Am I endorsing this?  Not at all.  Casuistry is, I think, utterly mistaken – basically because it seems to think you can be a utilitarian before lunch, and a deontologist after, and that the thing that determines your methodology is whether you like the answer it generates.  We wouldn’t accept that in any other field, so it’s not clear why we should accept it here.  Moreover, if it’s the liking of an answer that’s important, it’s not clear why we should bother with the method at all.  Finally, casuistical approaches can’t – as far as I can see – really deal with moral disagreement.

No: I think that there has to be some kind of theory – or meta-theory – to make sense of why something is a moral problem, of the basis on which there’s disagreement, and to set the ground rules for debate surrounding that problem.  But this would be a fairly minimal thing, to which different commentators could then append a given methodology, on the understanding that that method isn’t to be taken as complete.

So the question is, I suppose, this: do we need a “complete” moral theory?  If mathematicians can have incompleteness without giving up on methodological rigour, why can’t we?  And if it’s OK to be (say) a consequentialist without having to think that a given version of consequentialism is complete, does it matter that someone is a consequentialist in the seminar room but not always on the street?
