
Research Ethics

Circumcision and Sexual Function: Bad Science Reporting Misleads Parents

22 Apr, 16 | by bearp

by Brian D. Earp / (@briandavidearp)

Introduction

Another day, another round of uncritical media coverage of an empirical study about circumcision and sexual function. This time it includes the New York Times, whose Nicholas Bakalar has more or less recycled the content of a university press release without incorporating any skeptical analysis from other scientists. That’s par for the course for Bakalar.[1]

The new study is by Jennifer Bossio and her colleagues from Queen’s University in Ontario, Canada: it looked at penile sensitivity at various locations on the penis, comparing a sample of men who had been circumcised when they were infants (meaning they had their foreskins surgically removed), with a sample of men who remained genitally intact (meaning they kept their foreskins into adulthood).[2]

What did the researchers discover? According to a typical headline from the past few days:

“Circumcision does not reduce penis sensitivity.”

But that’s not what the study showed. Before we get into the details of the science, and looking just at this claim from the “headline” conclusion, it might be helpful to review some basic anatomy.


A Tool to Help Address Key Ethical Issues in Research

22 Feb, 16 | by BMJ

Guest post by Rebecca H. Li and Holly Fernandez Lynch

One of the most important responsibilities of a clinical project lead at a biotech company or an academic research team is to generate clinical trial protocols. The protocol dictates how a trial will be conducted and details background information on prior research, scientific objectives, study rationale, research methodology and design, participant eligibility criteria, anticipated risks and benefits, how adverse events will be handled, plans for statistical analysis, and other topics. Many protocol authors use as a starting point a “standardised” protocol template from their funder or institution. These templates often provide standard language, and sections for customisation, sometimes with various “pick-and-choose” options based on the nature of the research. They inevitably cover each of the key topics listed above, but often fail to include ethical principles and considerations beyond the regulatory requirement of informed consent. Indeed, the process of protocol writing has traditionally focused on scientific detail, with ethical analysis often left to institutional review boards (IRBs) and research ethics committees (RECs); unfortunately, robust discussion of specific ethical issues is often absent from clinical trial protocols.

When IRBs and RECs convene to review protocols, they are expected to evaluate whether the study will adequately protect enrolled participants. When a protocol fails to address potential ethical concerns explicitly, reviewers are left to speculate: did the investigator consider the concern but dismiss it as not relevant in this particular context? Did the investigator fail to understand the concern? Does the investigator have an appropriate plan in place to resolve the concern, but leave it unstated in the protocol? This uncertainty can contribute to delays as reviewers debate among themselves, and can require lengthy back-and-forth with researchers, including a series of protocol revisions and re-reviews until clarity is established. In some cases, reviewers with less experience or expertise may also fail to identify an ethical concern that has not been brought to their attention in a protocol.

The Unbearable Asymmetry of Bullshit

16 Feb, 16 | by bearp

By Brian D. Earp (@briandavidearp)

* Note: this article was first published online at Quillette magazine. The official version is forthcoming in the HealthWatch Newsletter; see http://www.healthwatch-uk.org/.

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.


What should Investigators be Doing with Unexpected Findings in Brain Imaging Research?

22 Jun, 15 | by BMJ

Guest Post by Caitlin Cole

Incidental findings in brain imaging research are common. Investigators can discover these unexpected findings of potential medical significance in up to 70% of their research scans. However, there are no standards to guide investigators as to whether they should actively search for these findings or which, if any, they should return to research participants.

This complex ethical issue impacts many groups in brain imaging: participants and parents of child participants who may desire relevant health information, but alternatively may suffer from anxiety and financial burden; investigators who must ethically grant their participants autonomy, but who also may suffer from budget and personnel restrictions to manage the review and report of these findings; Institutional Review Board (IRB) members who must provide ethical oversight to imaging research and help mandate institutional standards; and health providers who must interface with their patients and assist with follow up care when necessary.

Our research study shows these groups share some ideas on the ethics of returning incidental findings – “the researcher has an ethical responsibility or obligation to tell a subject that there’s something there, however they do it, but just inform the subject, even though it’s not part of the research” – yet also acknowledge the inherent risk in reporting medical research information. As one of our IRB members commented, “I mean [in regards to withholding findings] one reason would be to protect the patient from doing something stupid about them.”

When participants are asked about incidental findings, they consistently state that they want to receive all information pertinent to their health. Research participants want to make their own medical decisions and feel investigators have a responsibility to keep them informed.

However, it is clear from our research that participants do not always understand the difference between a brain scan for research purposes and a clinical scan. The incidental finding reports that they receive include personal health information, written in medical jargon, discovered during a clinical procedure that may have immediate or long term medical significance. Because of this crossover between conducting research and sharing health information, participants may overestimate the clinical utility of the reported research information. This is a challenge for investigators whose role is to conduct research, not to diagnose participants or offer findings with clinical certainty. Participant assumptions otherwise have the potential to cause downstream legal complications for the research institution.

It is necessary to understand the impact on all parties involved in the process of disclosing incidental findings to determine appropriate management policy. This challenging task should not be underestimated as these groups think differently about the balance between risk and benefit based on their role in this process, whether they be a research participant, a research investigator, an IRB member or a health provider. Overall there is an ethical demand to manage and report unexpected findings discovered in brain imaging research; finding a way to do this while minimizing negative impact for all involved is important.

Read the full paper here.

Research Ethics: You’re Doing it Wrong!

1 Jun, 15 | by Iain Brassington

With any luck, the marking tsunami will have receded by the end of the week, and so I should be able to get back to blogging a bit more frequently soon.

In the meantime, I’ll fill some space by ripping off something from the “Feedback” page of the latest New Scientist:

The TV industry has […] yet another new mantra: “Not just more pixels, but better pixels”.  The marketeers’ problem is that few people can actually see the extra details in their newest, flashiest sets unless they sit very close or the screen is very, very bright.

A colleague found a demonstration unpleasant, especially when the image flashed, and wondered about the possible risk of this triggering photo-epilepsy or migraines.  One company said, yes, this was being looked into – but no, they could not identify the university doing the work.

Then in the tea break at a tech conference a senior engineer from a UK TV station confided the reason: “We are very aware of the risks and would love to do some real research.  But nobody dares to do it because it would involve tests that deliberately push subjects into epileptic fits, and might very possibly kill them.”

In other words: here’s an intuitively plausible risk associated with product p; we could test whether p is safe; but doing that test itself would be unsafe.  Were this a pharmaceutical trial, one would expect that things would stop there – or, at the very least, that things would move very slowly and carefully indeed.  (Maybe if the drug is highly beneficial, and can be used in highly controlled circumstances, it might be worth it.)

But with TVs… well, it looks like journalists have been invited to the product launch already.  My guess is that if the TV is found to be risky, it’d be quietly withdrawn ex post facto – which seems rather late in the day.

It is a bit strange that trials on a product aren’t being done, not so much because of what they might reveal as because even running the test might be iffy.  Stranger yet that this is unlikely to make much of a dent in the marketing strategy.  Or, given the requirements of consumer capitalism, not all that strange after all: take your pick.

Sometimes, Big Pharma can seem like a model of probity.

Animal Liberation: Sacrificing the Good on the Altar of the Perfect?

24 Apr, 15 | by Iain Brassington

For my money, one of the best papers at the nonhuman animal ethics conference at Birmingham a couple of weeks ago was Steve Cooke’s.*  He was looking at the justifications for direct action in the name of disrupting research on animals, and presented the case – reasonably convincingly – that the main arguments against the permissibility of such direct action simply don’t work.  For him, there’s a decent analogy between rescuing animals from laboratories and rescuing drowning children from ponds: in both cases, if you can do so, you should, subject to the normal constraints about reasonable costs.  The question then becomes one of what is a reasonable cost.  He added to this that the mere illegality of such disruption mightn’t tip the balance away from action.  After all, if a law is unjust (he claims), it’s hard to see how that alone would make an all-else-being-equal permissible action impermissible.  What the law allows to be done to animals in labs is unjust, and so it doesn’t make much sense to say that breaking the law per se is wrong.

Now, I’m paraphrasing the argument, and ignoring a lot of background jurisprudential debate about obligations to follow the law.  (There are those who think that there’s a prima facie obligation to obey the law qua law; but I think that any reasonable version of that account will have a cutoff somewhere should the law be sufficiently unjust.)  But for my purposes, I don’t think that that matters.

It’s also worth noting that, at least formally, Cooke’s argument might be able to accommodate at least some animal research.  If you can claim that a given piece of research is, all things considered, justifiable, then direct action to disrupt it might not have the same moral backing.  Cooke thinks that little, if any, animal research is justified – but, again, that’s another, higher-order, argument.

One consideration in that further argument may be whether you think that there’s a duty to carry out (at least certain kinds of) research.

Animals in US Laboratories: Who Counts, Who Matters?

21 Mar, 15 | by BMJ

Guest post by Alka Chandna

How many animals are experimented on in laboratories? It’s a simple question, the answer to which provides a basic parameter to help us wrap our heads around the increasingly controversial and ethically harrowing practice of locking animals in cages and conducting harmful procedures on them that are often scary, painful, and deadly. Yet ascertaining the answer in the United States – the world’s largest user of animals in experiments – is surprisingly difficult.

In the eyes of the US Animal Welfare Act (AWA) – the single federal law that governs the treatment of animals used in experimentation – not all animals are created equal. Mice, rats, and birds bred for experimentation, and all cold-blooded animals – estimated by industry to comprise more than 95 percent of all animals used – are all unscientifically and dumbfoundingly excluded from the AWA’s definition of “animal”. Orwell cheers from his grave while Darwin rolls in his.

Leaving aside the question of whether mice and rats should be categorized as vegetable or mineral, the exclusion of these animals from the AWA also results in a dearth of data on the most widely used species, as the only figures on animal use in US laboratories that are systematically collected, organized, and published by the government are on AWA-regulated species.

Saatchi Bill – Update

28 Oct, 14 | by Iain Brassington

Damn. Damn, damn, damn.

It turns out that the version of the Medical Innovation Bill about which I wrote this morning isn’t the most recent: the most recent version is available here.  Naïvely, I’d assumed that the government would make sure the latest version was the easiest to find.  Silly me.

Here’s the updated version of §1(3): it says that the process of deciding whether to use an unorthodox treatment

must include—

(a) consultation with appropriately qualified colleagues, including any relevant multi-disciplinary team;

(b) notification in advance to the doctor’s responsible officer;

(c) consideration of any opinions or requests expressed by or on behalf of the patient;

(d) obtaining any consents required by law; and

(e) consideration of all matters that appear to the doctor to be reasonably necessary to be considered in order to reach a clinical judgment, including assessment and comparison of the actual or probable risks and consequences of different treatments.

So it is a bit better – it seems to take out the explicit “ask your mates” line.

However, it still doesn’t say how medics ought to weigh these criteria, or what counts as an appropriately qualified colleague.  So, on the face of it, our homeopath-oncologist could go to a “qualified” homeopath.  Or he could go to an oncologist, get told he’s a nutter, make a mental note of that, and decide that that’s quite enough consultation and that he’s still happy to try homeopathy anyway.

So it’s still a crappy piece of legislation.  And it still enjoys government support.  Which does, I suppose, give me an excuse to post this:

Many thanks to Sofia for the gentle correction about the law.

An Innovation Too Far?

28 Oct, 14 | by Iain Brassington

NB – Update/erratum here.  Oops.

One of the things I’ve been doing since I last posted here has involved me looking at the Medical Innovation Bill – the so-called “Saatchi Bill”, after its titular sponsor.  Partly, I got interested out of necessity – Radio 4 invited me to go on to the Sunday programme to talk about it, and so I had to do some reading up pretty quickly.  (It wasn’t a classic performance, I admit; I wasn’t on top form, and it was live.  No one swore, and no one died, but that’s about the best that can be said.)

It’s easy to see the appeal of the Bill: drugs can take ages to come to market, and off-label use can take a hell of a long time to get approval, and all the rest of it – and all the while, people are suffering and/ or dying.  It’s reasonable enough to want to do something to ameliorate the situation; and if there’s anecdotal evidence that something might work, or if a medic has a brainwave suggesting that drug D might prove useful for condition C – well, given all that, it’s perfectly understandable why we might want the law to provide some protection to said medic.  The sum of human knowledge will grow, people will get better, and it’s raindrops on roses and whiskers on kittens all the way; the Government seems satisfied that all’s well.  Accordingly, the Bill sets out to “encourage responsible innovation in medical treatment (and accordingly to deter innovation which is not responsible)” – that’s from §1(1) – and its main point is, according to §1(2), to ensure that

It is not negligent for a doctor to depart from the existing range of accepted medical treatments for a condition, in the circumstances set out in subsection (3), if the decision to do so is taken responsibly.

Accordingly, §1(3) outlines that

[t]hose circumstances are where, in the doctor’s opinion—

(a) it is unclear whether the medical treatment that the doctor proposes to carry out has or would have the support of a responsible body of medical opinion, or

(b) the proposed treatment does not or would not have such support.

So far so good.  Time to break out the bright copper kettles and warm woollen mittens*, then?  Not so fast.

Adrenaline, Information Provision and the Benefits of a Non-Randomised Methodology

17 Aug, 14 | by Iain Brassington

Guest Post by Ruth Stirton and Lindsay Stirton, University of Sheffield

One of us – Ruth – was on Newsnight on Wednesday the 13th August talking about the PARAMEDIC2 trial.  The trial is a double-blind, individually randomised, placebo-controlled trial of adrenaline v. normal saline injections in cardiac arrest patients treated outside hospital.  In simpler terms, if a person were to have a cardiac arrest and was treated by paramedics, they would usually get an injection of adrenaline prior to shocks to start the heart.  If that same person was enrolled in this study they would still receive an injection, but neither the person nor the paramedic giving the injection would know whether it was adrenaline or normal saline.  The research team is proposing to consent only the survivors for the collection of additional information after recovery from the cardiac arrest.  This study is responding to evidence from other jurisdictions indicating that there might be some significant long-term damage caused by adrenaline – specifically, that adrenaline saves the heart at the expense of the brain.  It is seeking to challenge the accepted practice of giving adrenaline to cardiac arrest patients.

Our starting position is that we do not disagree with the research team.  These sorts of questions need to be asked and investigated.  The development of healthcare depends on building an evidence base for accepted interventions, and where that evidence base is not forthcoming from the research, the treatment protocols need changing.  This is going to be tricky in the context of emergency healthcare, but that must not be a barrier to research.

There are two major ethical concerns that could bring this project to a grinding halt.  One is the opt-out consent arrangements, and the other is the choice of methodology.

Consent, then.

Journal of Medical Ethics: analysis and discussion of developments in the medical ethics field.