
Research Ethics

What should Investigators be Doing with Unexpected Findings in Brain Imaging Research?

22 Jun, 15 | by BMJ

Guest Post by Caitlin Cole

Incidental findings in brain imaging research are common. Investigators can discover these unexpected findings of potential medical significance in up to 70% of their research scans. However, there are no standards to guide investigators as to whether they should actively search for these findings or which, if any, they should return to research participants.

This complex ethical issue impacts many groups in brain imaging: participants, and parents of child participants, who may want relevant health information but may also suffer anxiety and financial burden; investigators, who must respect their participants’ autonomy but may lack the budget and personnel to manage the review and reporting of these findings; Institutional Review Board (IRB) members, who must provide ethical oversight of imaging research and help set institutional standards; and health providers, who must interface with their patients and assist with follow-up care when necessary.

Our research study shows these groups share some ideas on the ethics of returning incidental findings – “the researcher has an ethical responsibility or obligation to tell a subject that there’s something there, however they do it, but just inform the subject, even though it’s not part of the research” – yet they also acknowledge the inherent risk in reporting medical research information. As one of our IRB members commented, “I mean [in regards to withholding findings] one reason would be to protect the patient from doing something stupid about them.”

When participants are asked about incidental findings, they consistently state that they want to receive all information pertinent to their health. Research participants want to make their own medical decisions and feel investigators have a responsibility to keep them informed.

However, it is clear from our research that participants do not always understand the difference between a brain scan for research purposes and a clinical scan. The incidental finding reports that they receive look much like clinical results: personal health information, written in medical jargon, with potentially immediate or long-term medical significance. Because of this crossover between conducting research and sharing health information, participants may overestimate the clinical utility of the reported research information. This is a challenge for investigators, whose role is to conduct research, not to diagnose participants or offer findings with clinical certainty. Participant assumptions otherwise have the potential to cause downstream legal complications for the research institution.

It is necessary to understand the impact on all parties involved in the process of disclosing incidental findings in order to determine appropriate management policy. This challenging task should not be underestimated, as these groups think differently about the balance between risk and benefit depending on their role in the process, whether they be a research participant, a research investigator, an IRB member or a health provider. Overall, there is an ethical demand to manage and report unexpected findings discovered in brain imaging research; finding a way to do this while minimizing negative impact for all involved is important.

Read the full paper here.

Research Ethics: You’re Doing it Wrong!

1 Jun, 15 | by Iain Brassington

With any luck, the marking tsunami will have receded by the end of the week, and so I should be able to get back to blogging a bit more frequently soon.

In the meantime, I’ll fill some space by ripping off something from the “Feedback” page of the latest New Scientist:

The TV industry has […] yet another new mantra: “Not just more pixels, but better pixels”.  The marketeers’ problem is that few people can actually see the extra details in their newest, flashiest sets unless they sit very close or the screen is very, very bright.

A colleague found a demonstration unpleasant, especially when the image flashed, and wondered about the possible risk of this triggering photo-epilepsy or migraines.  One company said, yes, this was being looked into – but no, they could not identify the university doing the work.

Then in the tea break at a tech conference a senior engineer from a UK TV station confided the reason: “We are very aware of the risks and would love to do some real research.  But nobody dares to do it because it would involve tests that deliberately push subjects into epileptic fits, and might very possibly kill them.”

In other words: here’s an intuitively plausible risk associated with product p; we could test whether p is safe; but doing that test itself would be unsafe.  Were this a pharmaceutical trial, one would expect that things would stop there – or, at the very least, that things would move very slowly and carefully indeed.  (Maybe if the drug is highly beneficial, and can be used in highly controlled circumstances, it might be worth it.)

But with TVs… well, it looks like journalists have been invited to the product launch already.  My guess is that if the TV is found to be risky, it’d be quietly withdrawn ex post facto – which seems rather late in the day.

It is a bit strange that trials on a product aren’t being done, not so much because of what they might reveal as because even doing the test might be iffy.  Stranger yet that this is unlikely to make much of a dent in the marketing strategy.  Or, given the requirements of consumer capitalism, not all that strange after all: take your pick.

Sometimes, Big Pharma can seem like a model of probity.

Animal Liberation: Sacrificing the Good on the Altar of the Perfect?

24 Apr, 15 | by Iain Brassington

For my money, one of the best papers at the nonhuman animal ethics conference at Birmingham a couple of weeks ago was Steve Cooke’s.*  He was looking at the justifications for direct action in the name of disrupting research on animals, and presented the case – reasonably convincingly – that the main arguments against the permissibility of such direct action simply don’t work.  For him, there’s a decent analogy between rescuing animals from laboratories and rescuing drowning children from ponds: in both cases, if you can do so, you should, subject to the normal constraints about reasonable costs.  The question then becomes one of what is a reasonable cost.  He added to this that the mere illegality of such disruption mightn’t tip the balance away from action.  After all, if a law is unjust (he claims), it’s hard to see how that alone would make an all-else-being-equal permissible action impermissible.  What the law allows to be done to animals in labs is unjust, and so it doesn’t make much sense to say that breaking the law per se is wrong.

Now, I’m paraphrasing the argument, and ignoring a lot of background jurisprudential debate about obligations to follow the law.  (There are those who think that there’s a prima facie obligation to obey the law qua law; but I think that any reasonable version of that account will have a cutoff somewhere should the law be sufficiently unjust.)  But for my purposes, I don’t think that that matters.

It’s also worth noting that, at least formally, Cooke’s argument might be able to accommodate at least some animal research.  If you can claim that a given piece of research is, all things considered, justifiable, then direct action to disrupt it might not have the same moral backing.  Cooke thinks that little, if any, animal research is justified – but, again, that’s another, higher-order, argument.

One consideration in that further argument may be whether you think that there’s a duty to carry out (at least certain kinds of) research. more…

Animals in US Laboratories: Who Counts, Who Matters?

21 Mar, 15 | by BMJ

Guest post by Alka Chandna

How many animals are experimented on in laboratories? It’s a simple question, the answer to which provides a basic parameter to help us wrap our heads around the increasingly controversial and ethically harrowing practice of locking animals in cages and conducting harmful procedures on them that are often scary, painful, and deadly. Yet ascertaining the answer in the United States – the world’s largest user of animals in experiments – is surprisingly difficult.

In the eyes of the US Animal Welfare Act (AWA) – the single federal law that governs the treatment of animals used in experimentation – not all animals are created equal. Mice, rats, and birds bred for experimentation, and all cold-blooded animals – estimated by industry to comprise more than 95 percent of all animals used – are all unscientifically and dumbfoundingly excluded from the AWA’s definition of “animal”. Orwell cheers from his grave while Darwin rolls in his.

Leaving aside the question of whether mice and rats should be categorized as vegetable or mineral, the exclusion of these animals from the AWA also results in a dearth of data on the most widely used species, as the only figures on animal use in US laboratories that are systematically collected, organized, and published by the government are on AWA-regulated species. more…

Saatchi Bill – Update

28 Oct, 14 | by Iain Brassington

Damn. Damn, damn, damn.

It turns out that the version of the Medical Innovation Bill about which I wrote this morning isn’t the most recent: the most recent version is available here.  Naïvely, I’d assumed that the government would make sure the latest version was the easiest to find.  Silly me.

Here’s the updated version of §1(3): it says that the process of deciding whether to use an unorthodox treatment

must include—

(a) consultation with appropriately qualified colleagues, including any relevant multi-disciplinary team;

(b) notification in advance to the doctor’s responsible officer;

(c) consideration of any opinions or requests expressed by or on behalf of the patient;

(d) obtaining any consents required by law; and

(e) consideration of all matters that appear to the doctor to be reasonably necessary to be considered in order to reach a clinical judgment, including assessment and comparison of the actual or probable risks and consequences of different treatments.

So it is a bit better – it seems to take out the explicit “ask your mates” line.

However, it still doesn’t say how medics ought to weigh these criteria, or what counts as an appropriately qualified colleague.  So, on the face of it, our homeopath-oncologist could go to a “qualified” homeopath.  Or he could go to an oncologist, get told he’s a nutter, make a mental note of that, and decide that that’s quite enough consultation and that he’s still happy to try homeopathy anyway.

So it’s still a crappy piece of legislation.  And it still enjoys government support.  Which does, I suppose, give me an excuse to post this:

Many thanks to Sofia for the gentle correction about the law.

An Innovation Too Far?

28 Oct, 14 | by Iain Brassington

NB – Update/erratum here.  Ooops.

One of the things I’ve been doing since I last posted here has involved me looking at the Medical Innovation Bill – the so-called “Saatchi Bill”, after its titular sponsor.  Partly, I got interested out of necessity – Radio 4 invited me to go on to the Sunday programme to talk about it, and so I had to do some reading up pretty quickly.  (It wasn’t a classic performance, I admit; I wasn’t on top form, and it was live.  No one swore, and no one died, but that’s about the best that can be said.)

It’s easy to see the appeal of the Bill: drugs can take ages to come to market, and off-label use can take a hell of a long time to get approval, and all the rest of it – and all the while, people are suffering and/or dying.  It’s reasonable enough to want to do something to ameliorate the situation; and if there’s anecdotal evidence that something might work, or if a medic has a brainwave suggesting that drug D might prove useful for condition C – well, given all that, it’s perfectly understandable why we might want the law to provide some protection to said medic.  The sum of human knowledge will grow, people will get better, and it’s raindrops on roses and whiskers on kittens all the way; the Government seems satisfied that all’s well.  Accordingly, the Bill sets out to “encourage responsible innovation in medical treatment (and accordingly to deter innovation which is not responsible)” – that’s from §1(1) – and its main point is, according to §1(2), to ensure that

It is not negligent for a doctor to depart from the existing range of accepted medical treatments for a condition, in the circumstances set out in subsection (3), if the decision to do so is taken responsibly.

Accordingly, §1(3) outlines that

[t]hose circumstances are where, in the doctor’s opinion—

(a) it is unclear whether the medical treatment that the doctor proposes to carry out has or would have the support of a responsible body of medical opinion, or

(b) the proposed treatment does not or would not have such support.

So far so good.  Time to break out the bright copper kettles and warm woollen mittens*, then?  Not so fast. more…

Adrenaline, Information Provision and the Benefits of a Non-Randomised Methodology

17 Aug, 14 | by Iain Brassington

Guest Post by Ruth Stirton and Lindsay Stirton, University of Sheffield

One of us – Ruth – was on Newsnight on Wednesday the 13th August talking about the PARAMEDIC2 trial.  The trial is a double-blind, individually randomised, placebo-controlled trial of adrenaline v. normal saline injections in cardiac arrest patients treated outside hospital.  In simpler terms, if a person were to have a cardiac arrest and be treated by paramedics, they would usually get an injection of adrenaline prior to shocks to start the heart.  If that same person were enrolled in this study, they would still receive an injection, but neither the person nor the paramedic giving the injection would know whether it was adrenaline or normal saline.  The research team is proposing to consent only the survivors for the collection of additional information after recovery from the cardiac arrest.  This study responds to evidence from other jurisdictions indicating that adrenaline might cause significant long-term damage – specifically, that adrenaline saves the heart at the expense of the brain.  It seeks to challenge the accepted practice of giving adrenaline to cardiac arrest patients.
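For readers unfamiliar with the design, here is a minimal toy sketch (in Python, and purely illustrative – the function names and pack codes are invented, and this is not the trial’s actual allocation system) of what “individually randomised, double blind” amounts to mechanically: each patient is independently assigned an arm, while the person giving the treatment sees only a meaningless pack code.

import random

# Illustrative only: individually randomised, double-blind, 1:1 allocation.
# The unblinded allocation list is held by the trial office; the paramedic
# sees only a coded syringe pack, so neither they nor the patient knows
# which arm the injection belongs to.
def randomise(patient_ids, seed=2014):
    rng = random.Random(seed)   # seeded so the allocation is reproducible
    allocation = {}             # arm per patient: held by the trial office only
    packs = {}                  # what the paramedic actually sees
    for i, pid in enumerate(patient_ids):
        allocation[pid] = rng.choice(["adrenaline", "saline"])
        packs[pid] = "PACK-{:04d}".format(i)  # code reveals nothing about the arm
    return allocation, packs

allocation, packs = randomise(["patient-1", "patient-2", "patient-3"])
print(packs)  # no arm information; `allocation` is opened only at analysis

The ethically salient point survives even in this toy version: at the moment of treatment, nobody present knows which injection is being given – which is precisely what makes the consent arrangements so fraught.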

Our starting position is that we do not disagree with the research team.  These sorts of questions need to be asked and investigated.  The development of healthcare depends on building an evidence base for accepted interventions, and where that evidence base is not forthcoming from the research, the treatment protocols need changing.  This is going to be tricky in the context of emergency healthcare, but that must not be a barrier to research.

There are two major ethical concerns that could bring this project to a grinding halt.  One is the opt-out consent arrangements, and the other is the choice of methodology.

Consent, then. more…

Consigned to the Index

28 May, 14 | by Iain Brassington

There’re probably times when all of us have had a solution, and just had to find a problem for it.  It’s an easy trap; and it’s one into which I suspect Gretchen Goldman may have fallen in an article in Index on Censorship about scientific freedom and how it’s under threat from disputes about Federal funding in the US.  No: I’m not going to be arguing against scientific freedom here.  Only against a certain use of the appeal to scientific freedom in response to a particular problem.

First up, let’s note the points on which Goldman may well be correct.  She notes that the disputes in the US about federal funding that have led to big cuts and a short-but-total government shutdown are very bad for science.  She points out that political machinations even meant that researchers working in government-funded areas couldn’t access their emails.  This had direct and indirect consequences, all of which were pretty undesirable.  For example,

[m]any government scientists were not allowed to access email, much less their laboratories. One scientist noted that his “direct supervisor … confiscated all laptop computers on the day of the shutdown”.

Without access to work email accounts, federal scientists were also prevented from carrying out professional activities that went beyond their government job duties. Several scientists pointed out that their inability to access emails significantly slowed down the peer-review process and, therefore, journal publication.

In the wider sense, to have science and funding bodies that are vulnerable to political shenanigans isn’t good for science, and is probably not good for humanity.  You don’t have to think that research is obligatory to think that it’s often quite a good thing for science to happen all the same.  And shutdowns are particularly bad for students and junior researchers, whose future career might depend on the one project they’re doing at the moment; if a vital field trip or bit of analysis or experiment is liable to get pulled at almost any moment, they don’t have a reputation yet to tide them over.

So far, so good.  However, things are iffier elsewhere. more…

Resurrectionism at Easter

23 Apr, 14 | by Iain Brassington

There’s a provocative piece in a recent New Scientist about what happens to unclaimed bodies after death – about, specifically, the practice of co-opting them for research purposes.

Gareth Jones, who wrote it, points out that the practice has been going on for centuries – but that a consequence of the way it’s done is that it tends to be the poor and disenfranchised whose corpses are used:

[T]he probably unintended and unforeseen result [of most policies] was to make poverty the sole criterion for dissection. [… U]nclaimed bodies are still used in countries including South Africa, Nigeria, Bangladesh, Brazil and India. While their use is far less in North America, they continue to constitute the source of cadavers in around 20 per cent of medical schools in the US and Canada. In some states in the US, unclaimed bodies are passed to state anatomy boards.

For Jones, the practice of co-option ought to be stopped.  His main bone of contention is the lack of consent – it’s a problem that’s made more acute by the fact that the bodies of the disenfranchised are more likely to be unclaimed, but I take it that the basic concern would be there for all.

One question that we might want to ask right from the off is why informed consent is important. more…

This will hurt a bit

11 Apr, 14 | by David Hunter

In a piece titled in a fashion to simultaneously win the internet and cause every male reader to wince, Michelle Meyer asks “Whose Business Is It If You Want a Bee To Sting Your Penis? Should IRBs Be Policing Self-Experimentation?”

In this piece she describes the case of a Cornell graduate student who carried out a piece of self-experimentation without IRB approval (based on the mistaken belief that it wasn’t required), which aimed to assess which part of the body it is worst to be stung on by a bee, and which involved: “five stings a day, always between 9 and 10am, and always starting and ending with ‘test stings’ on his forearm to calibrate the ratings. He kept this up for 38 days, stinging himself three times each on 25 different body parts.”

While IRB approval was required and not sought in this case, Meyer argues that this isn’t problematic, effectively because in her view regulating researcher self-experimentation constitutes an unacceptable level of paternalism: “The question isn’t whether or not to try to deter unduly risky behavior by scientists who self-experiment; it’s whether this goal requires subjecting every instance of self-experimentation, no matter how risky, to mandatory, prospective review by a committee. It’s one thing to require a neutral third party to examine a protocol when there are information asymmetries between investigator and subject, and when the protocol’s risks are externalized onto subjects who may not share much or any of the expected benefits. Mandatory review of self-experimentation takes IRB paternalism to a whole other level.”

Perhaps this is just my inherent lack of distaste for relatively benign paternalism, but I don’t quite see this objection to regulating self-experimentation working, for three reasons.

Firstly, the distinction Meyer draws between self- and other-experimentation assumes a level of understanding of the risks and benefits on the part of the researcher that negates the need for the normal consent process. This is probably right most of the time, and so we can assume consent is present. Does this negate the need for external review? I am not sure it does, since the researcher’s understanding is not perfect and they may be self-deceiving about the magnitude and level of risk. Meyer notes, for example, that this project originally involved stings to the eye, until the student’s supervisor pointed out that this risked blindness. So review by external experts regarding the risks and benefits of research can and does reduce the level of risk in research. In Research Exceptionalism, James Wilson and I argue that this is a general justification for external research regulation – the ethics, risks and harms of research are complex and unpredictable, and hence external regulation helps clarify these risks and ethical issues to enable researchers to fulfil their moral duties. This is of course paternalistic in the case of self-experimentation, but I presume that the student in this case is grateful to his supervisor for saving his vision, so I think it is the kind of paternalism we ought to endorse, since it concerns a risk the person himself wouldn’t want to run.

Secondly, valid consent doesn’t just consist of having information; it also requires competency and, particularly in these types of cases, an absence of coercion. This is a graduate student who is, to be frank, in a vulnerable institutional position (like many of us in academia…) – if they want to improve their standing and move to the next level, they need to keep their superiors happy. This makes them vulnerable to self-exploitation and risk-taking, which external regulation can reduce or remove.

Finally, I suspect that what is going on here is a kind of reverse research exceptionalism, where the regulation of research is seen as somehow more problematic than the regulation of other aspects of our lives. It is commonplace for health and safety rules to require us, in the course of our employment, to act and not act in particular ways. This is paternalistic insofar as it protects us, but it is also not merely paternalistic, insofar as it protects others and the institution we work at. In this case, the student was working in a lab in an institutional context, and if something had gone wrong for the student or others in the course of this research, the institution could well have been held liable for damages. As such, it seems to me perfectly within the institution’s rights to decide how to regulate these risks, and to decide to regulate them via prospective review.

Now, as Meyer notes, this is an external requirement rather than a choice that Cornell has made, but I don’t think this changes the justification for the regulation – given that we know competition in markets tends to drive towards failures to protect workers and others, there is nothing inappropriate in the state correcting the market failure here via legislation.
