For my money, one of the best papers at the nonhuman animal ethics conference at Birmingham a couple of weeks ago was Steve Cooke’s.* He was looking at the justifications for direct action in the name of disrupting research on animals, and presented the case – reasonably convincingly – that the main arguments against the permissibility of such direct action simply don’t work. For him, there’s a decent analogy between rescuing animals from laboratories and rescuing drowning children from ponds: in both cases, if you can do so, you should, subject to the normal constraints about reasonable costs. The question then becomes what counts as a reasonable cost. He added that the mere illegality of such disruption mightn’t tip the balance away from action. After all, if a law is unjust (he claims), it’s hard to see how that alone would make an otherwise permissible action impermissible. What the law allows to be done to animals in labs is unjust, and so it doesn’t make much sense to say that breaking that law is, per se, wrong.
Now, I’m paraphrasing the argument, and ignoring a lot of background jurisprudential debate about obligations to follow the law. (There are those who think that there’s a prima facie obligation to obey the law qua law; but I think that any reasonable version of that account will have a cutoff somewhere should the law be sufficiently unjust.) But for my purposes, I don’t think that that matters.
It’s also worth noting that, at least formally, Cooke’s argument might be able to accommodate at least some animal research. If you can claim that a given piece of research is, all things considered, justifiable, then direct action to disrupt it might not have the same moral backing. Cooke thinks that little, if any, animal research is justified – but, again, that’s another, higher-order, argument.
One consideration in that further argument may be whether you think that there’s a duty to carry out (at least certain kinds of) research. For example, if you think that medical research is morally required, you might be more forgiving of particular pieces of research, on the basis that they’re a necessary means to that end. Against that, if you’re of a Kantian turn of mind, you might think that to will the end implies willing the means necessary thereto, and that, if the means are impermissible, you should drop the end. Much of the debate here will depend on how you think chains of justification work. Roughly, consequentialists think that the desirability of the outcome can cleanse dirty hands; Kantians will think that dirty hands besmirch the outcome. Finally, if – like me – you think that research may be a good thing but is unlikely to be obligatory, then you might be less willing to accept the permissibility of animal research, since the justification won’t be there to begin with – or, at least, it won’t be so straightforward. Whereas the Kantian claim would be that the research has to be ditched because the means are unacceptable, the argument here is that we can avoid being committed to the means, on the basis that the end doesn’t have nearly as much moral gravity as some would say it does in the first place.
Be all that as it may, I’m satisfied that, at least sometimes, it’s permissible to disrupt certain forms of research on nonhuman animals, for more or less the same reason that we’d think that it was permissible to disrupt some form of research on neonates. It doesn’t follow from that that violence against researchers is justified – but there might be times when damaging a lab would be justified, if that’s the only way to prevent a particularly egregious piece of research. Again, for the sake of clarity, I’m not going to say when such action is justified, because I don’t know – and, for the sake of this argument, it doesn’t matter. What does matter is the mere principle that there is some circumstance c in which disrupting research is justified, to the possible extent of damaging a lab.
There’s an economic rationale to the claim: in essence, the idea would be that, by making research expensive and difficult, an activist makes it less likely to happen at all. Slightly differently, we could even say that the research (like any activity) has a “moral cost” that is rarely reflected in the amount of money required to carry it out; by making that research more expensive and difficult, one simply ensures that that moral cost gets noted on the ledger in financial terms.
And this is where I wonder whether the argument is a bit naive.
The UK is a country in which a lot of animal research takes place. However, it’s also a country in which there are reasonably tight animal welfare laws. The research that happens is, therefore, reasonably well controlled.
Make it expensive to do that work in the UK, though, and that won’t mean the research doesn’t happen. It simply means that it’ll move to another country – possibly one with much less stringent welfare rules. Therefore, you might actually be ensuring that the world is a worse place than it could have been. So perhaps you ought not to disrupt research in the UK after all, since the big pharma companies know no national boundaries.
On the other hand, it’s obviously true that the UK’s welfare laws mightn’t satisfy the abolitionist. If you’re opposed to judicial execution, for example, you might be able to accept that a quick and painless method is preferable to a slow and painful one, without thereby being committed to the view that the former is in any way permissible. And the same might apply to animal research. Well-regulated animal research might be better in some way than unregulated research – but the problem is more fundamental than that: the problem is that it happens at all.
The question is, it might seem, one of how one compares badness with wrongness, and whether one can (or should) compare them at all. Some wrong actions may not be as bad as others; but they might still be bad, all things considered. And even if they aren’t bad at all, they might still be wrong. (As I’ve argued on this blog before, we could imagine a world in which some slaves are actually better off in material terms than they would otherwise have been, without having to sacrifice the idea that the system is still wrong.) But still: if something wrong is going to happen anyway, isn’t it desirable that it should be as minimally bad as possible? If so, then maybe people who care about lab animals should bite the bullet and try not to scare the researchers off to countries where regulation is weaker.
Again, though, that might not touch the abolitionist’s case. Asking how to compare badness and wrongness might be asking the wrong question to begin with, not least because it may turn out to be a distraction from what actually does matter. If we think that a procedure is wrong, then no balance of desirable or undesirable outcomes would matter. In other words, when it comes to animal research (the claim might go), it doesn’t matter what the outcome is, because that’s not a relevant moral consideration to begin with.
As to what that means in practice… I don’t know. To ignore the fact that better wrong stuff is better than worse wrong stuff seems to be absurdly idealistic; any practical ethical claim will have to have at least some purchase on the real world. I suspect that for the activist who’s opposed to animal research as a matter of principle rather than on (say) the utilitarian grounds that it’s simply not reliable, it’s going to be a horribly tricky dilemma.
*Oscar Horta’s paper was also very enjoyable… and reassuringly weird.