In a piece titled in a fashion to simultaneously win the internet and cause every male reader to wince, Michelle Meyer asks “Whose Business Is It If You Want a Bee To Sting Your Penis? Should IRBs Be Policing Self-Experimentation?”
In this piece she describes the case of a Cornell graduate student who carried out a piece of self-experimentation without IRB approval (based on the mistaken belief that it wasn’t required). The experiment aimed to assess which part of the body it is worst to be stung on by a bee, and involved: “five stings a day, always between 9 and 10am, and always starting and ending with “test stings” on his forearm to calibrate the ratings. He kept this up for 38 days, stinging himself three times each on 25 different body parts.”
While IRB approval was required and not sought in this case, Meyer argues that this isn’t problematic, essentially because in her view regulating researcher self-experimentation constitutes an unacceptable level of paternalism: “The question isn’t whether or not to try to deter unduly risky behavior by scientists who self-experiment; it’s whether this goal requires subjecting every instance of self-experimentation, no matter how risky, to mandatory, prospective review by a committee. It’s one thing to require a neutral third party to examine a protocol when there are information asymmetries between investigator and subject, and when the protocol’s risks are externalized onto subjects who may not share much or any of the expected benefits. Mandatory review of self-experimentation takes IRB paternalism to a whole other level.”
Perhaps this is just my inherent lack of distaste for relatively benign paternalism, but I don’t quite see this objection to regulating self-experimentation working, for three reasons.
Firstly, the distinction Meyer draws between self- and other-experimentation assumes a high level of understanding of the risks and benefits on the part of the researcher, in a way that negates the need for the normal consent process. This is probably right most of the time, and so we can assume consent is present. Does this negate the need for external review? I am not sure it does, since the researcher’s understanding is not perfect, and they may be self-deceiving about the magnitude and level of risk. Meyer notes, for example, that this project originally involved stings to the eye, until the student’s supervisor pointed out that this risked blindness. So review by external experts regarding the risks and benefits of research can and does reduce the level of risk in research. In Research Exceptionalism, James Wilson and I argue that this is a general justification for external research regulation – the ethics, risks and harms of research are complex and unpredictable, and hence external regulation helps clarify these risks and ethical issues, enabling researchers to fulfil their moral duties. This is of course paternalistic in the case of self-experimentation, but I presume that the student in this case is grateful to his supervisor for saving his vision, so I think it is the kind of paternalism we ought to endorse, since it concerns a risk the person himself would not want to run.
Secondly, valid consent doesn’t just consist of having information; it also requires competency and, particularly in these types of cases, an absence of coercion. This is a graduate student who is, to be frank, in a vulnerable institutional position (like many of us in academia…) – if they want to improve their standing and move to the next level, they need to keep their superiors happy. This makes them vulnerable to self-exploitation and risk-taking, which external regulation can reduce and remove.
Finally, I suspect that what is going on here is a kind of reverse research exceptionalism, where the regulation of research is seen as somehow more problematic than the regulation of other aspects of our lives. It is commonplace for health and safety rules to require us, in the course of our employment, to act and not act in particular ways. This is paternalistic insofar as it protects us, but it is also non-paternalistic insofar as it protects others and the institution we work at. In this case, the student is working in a lab in an institutional context, and if something had gone wrong for the student or others in the course of this research, the institution could well have been held liable for the resulting damages. As such, it seems to me perfectly within the institution’s rights to decide how to regulate these risks to itself, and to decide to regulate them via prospective review.
Now, as Meyer notes, this is an external requirement rather than a choice that Cornell has made, but I don’t think this changes the justification for the regulation – given that we know market competition tends to drive towards failures to protect workers and others, there is nothing inappropriate in the state correcting this market failure via legislation.