

Individually-Randomized Controlled Trials of Vaccines Against the Next Outbreak

11 Apr, 17 | by miriamwood

Guest Post: Nir Eyal, Marc Lipsitch

Paper: Vaccine testing for emerging infections: the case for individual randomisation 

The humbling experience of the international response to Ebola taught the world a thing or two about preparing for Zika and for other emerging infections.

Some of those lessons pertain to vaccine development against emerging infections. One lesson was that vigorous vaccine development should start long in advance of outbreaks. CEPI, the Coalition for Epidemic Preparedness Innovations, was recently launched with an initial investment of half a billion US dollars from the Gates Foundation, Britain’s Wellcome Trust and the governments of Japan, Norway and Germany. There is also growing recognition that best practices for vaccine testing should be developed before outbreaks occur, from the standpoint of study methodology.

By contrast, with Zika, ethical guidelines on the response in general, and on an aspect of vaccine testing, were created only once the epidemic had erupted. Shouldn’t ethical disputes, e.g. over trial design for vaccine candidates, be ironed out in advance of emerging infections?

One persistent ethical question in vaccine testing pertains to individually-randomized control in efficacy trials. At the height of the 2014-15 Ebola outbreak, individually-randomized controlled trials were much maligned. Our paper in the Journal of Medical Ethics sets out to defend that approach for vaccine efficacy testing in emerging infections, including highly fatal and untreatable ones in developing countries.


A Hot Take on a Cold Body

21 Nov, 16 | by Iain Brassington

It’s good to see Nils’ post about the recent UK cryonics ruling getting shared around quite a bit – so it should.  I thought I’d throw in my own voice, too.

About 18 months ago, Imogen Jones and I wrote a paper musing on some of the ethical and legal dimensions of Christopher Priest’s The Prestige.  One dimension of this was a look at the legal status of the bodies produced as a result of the “magic” trick – in particular, the haziness of whether they were alive or dead; the law doesn’t have any space for a third state.  The paper was something of a jeu d’esprit, written to serve a particular function in a Festschrift for Margot Brazier.  If I say so myself, I think it’s a pretty good paper – but it’s also meant to be fun, and is clearly rather less serious than most ethico-legal scholarship (or anything else in the book, for that matter).


Not quite “Cold Lazarus”, but close enough…

So it’s a bit of a surprise to see relevantly similar themes popping up in the news.  If we’re freezing people in the hope of curing terminal illness in the future, what’s the status of the bodies in the meantime (especially if the death certificate has been signed)?  There’s a load of questions that we might want to ask before we get too carried away with embracing cryonics.

Right from the start, there’s a question about plausibility.  For the sake of what follows, I’m going to treat “freezing” as including the process of defrosting people successfully as well, unless the context makes it clear that I mean something else.  Now, that said, the (moral) reasons to freeze people rely on the plausibility of the technology.  If the technology is not plausible, we have no reason to make use of it.  It wouldn’t follow from that that using it’d be wrong – but since the default is not to act in that way, it’s positive reasons that we need, rather than negative ones.  Neither could we really rely on the thought that we could cryopreserve someone in the hope that the freezing-and-thawing process becomes more plausible in future, because we’d have no reason to think that we’d chosen the right version of the technology.  We can only cryopreserve a person once: what if we’ve chosen the wrong technique?  How would we choose the best from an indefinitely large number of what we can at best treat as currently-implausible ones?

So how plausible is it to put a body on ice, then revive it many years later?  It’s been pointed out by some that we currently do preserve embryos without apparent ill-effect, with the implication that there’s no reason in principle why more developed humans couldn’t be frozen successfully.  However, whole humans are a wee bit more complex than embryos; it’s not at all clear that we can extrapolate from balls of a few cells to entire humans.  Even the admittedly limited experimental evidence that it’s possible to freeze whole organs won’t show us that, since we’re systems of organs.  One can accept that an organ is a system, too; but all that means is that we’re systems of systems – so we’ve squared the complexity.  And, of course, the timescales being considered here are tiny compared with the kind of timescales envisaged in cryonic fantasies.

The End is Not What it Seems – Feasibility of Conducting Prospective Research in Critically Ill, Dying Patients.

14 Oct, 16 | by miriamwood

Guest Post by Amanda Van Beinum

Re: Feasibility of conducting prospective observational research on critically ill, dying patients in the intensive care unit

Collecting information about how people die in the intensive care unit is important. Observations about what happens during the processes of withdrawal of life sustaining therapies (removal of breathing machines and drugs used to maintain blood pressure) can be used to improve the care of dying patients. This information can also be used to improve processes of organ donation. But when the Determination of Death Practices in Intensive Care Units (DDePICt) research group first proposed to start collecting prospective data on dying and recently dead patients, a common response from other clinical researchers was, “You’re going to do what?” The research community did not believe that prospective research using an informed consent model would be possible in patients dying after withdrawal of life sustaining therapies in the intensive care unit.

While the clinical research community supported the “big picture” idea behind conducting this research, they were skeptical about our prospective research design and our intent to obtain full informed consent from all families prior to the patient’s death. Some also felt that we would have a hard time obtaining institutional ethics board approval, or that we would encounter barriers from research coordinators uncomfortable with approaching families for consent at a difficult and emotional time in the patient’s care. However, the DDePICt group was persistent, and succeeded in its efforts to design the first prospective, observational pilot study in Canada of patients dying in the intensive care unit after withdrawal of life sustaining therapies. As part of the study design, the DDePICt pilot study collected data for an ethics sub-study to investigate how these anticipated challenges were overcome. The ethics sub-study sought an answer to the question: can we conduct ethical, prospective, observational research on a critically ill and imminently dying population in the intensive care unit?


We’re all Gonna Die… Eventually

6 Oct, 16 | by Iain Brassington

It might just be a product of the turnover of people with whom I have much professional contact, but I’ve not heard as much about human enhancement in the past couple of years as I had in, say, 2010.  In particular, there seems to be less being said about radical life extension.  Remember Aubrey de Grey and his “seven deadly things”?  The idea there was that senescence was attributable to seven basic processes; those basic processes are all perfectly scrutable and comprehensible biological mechanisms.  Therefore, the argument went, if we just put the time and effort into finding a way to slow, halt, or reverse them, we could slow, halt, or reverse aging.  Bingo.  Preventing senescence would also ensure maximum robustness, so accidents and illnesses would be less likely to kill us.  To all intents and purposes, we’d be immortal.  Some enterprising people of an actuarial mindset even had a go at predicting how long an immortal life would be.  Eventually, you’ll be hit by a bus.  But you might have centuries of life to live before that.

Dead easy.

I was always a bit suspicious of that.  The idea that death provides meaning to life is utterly unconvincing; but the idea that more life is always a good thing is unconvincing, too.  What are you going to do with it?  In essence, it’s one thing to feel miffed that one isn’t going to have the time and ability to do all the things that one wants to do: life is a necessary criterion for any good.  But that doesn’t mean that more life is worth having in its own right.  Centuries spent staring at a blank wall aren’t made any better by dint of being alive.

But a letter published this week in Nature suggests that there is an upper end to human lifespan after all.  In essence, the demographic data seem to suggest that there’s an upper limit to survivability.  That being the case, we should stop worrying about making people live longer and longer, and concentrate on what’s going on during the 125 years or so that Dong, Milholland and Vijg think is allotted to us.

The Challenge of Futile Treatment

29 Jul, 16 | by Iain Brassington

Guest Post by Lindy Willmott and Ben White

For decades, researchers from around the world have found evidence that doctors provide futile treatment to adult patients who are dying.  Some discussion of this topic has turned on matters of definition (see our recent contribution to this debate), with a broader concept of “perceived inappropriate treatment” being favoured by commentators more recently.  However, this debate skirts the fundamental issue: how can treatment that may prolong or increase patient suffering, waste scarce health care resources, and cause distress to health care workers still occur in hospitals around the world?  In other words, in these days of overworked doctors and underfunded healthcare systems, how is this still an issue?

Some research has tackled this question, although it has tended to focus on doctors working in intensive care units; very little research has examined the reasons doctors from a range of specialties give for providing futile treatment at the end of life.

Our study, undertaken by a team of interdisciplinary researchers, explored the perceptions on this topic of doctors, from a range of specialities, who are commonly involved with treatment at the end of life.  We interviewed 96 doctors at three hospitals in Queensland, Australia, from a range of specialities including intensive care, oncology, internal medicine, cardiology, geriatrics, surgery, and emergency.  Doctors reported that doctor-related and patient-related factors were the main drivers of futile treatment, although reasons relating to the institutional nature of hospitals were also important.

We found that doctor-related reasons were important in the provision of futile end-of-life care.  Many doctors reported attitudes of their colleagues that reflect a cultural aversion to death.  Doctors saw themselves as trained healers who viewed every death as a failure, and pursued a cure rather than appropriate palliative treatment for dying patients.  Doctors described wanting to help the patient and not give up hope that a treatment might provide some benefit.  They also said they wanted to satisfy patients, families, and medical professionals themselves that everything possible had been done, due to both emotional attachment to the patient and fear of the legal consequences of refusing demands for treatment.  They also admitted to providing families and patients with a smorgasbord of treatment options as a means of avoiding uncomfortable conversations about dying.  Doctors’ personalities, religious backgrounds, and their own experiences with death and dying were also said to contribute to the giving of futile treatment.

Where to Publish and Not to Publish in Bioethics

5 May, 16 | by bearp

Guest Post by Stefan Eriksson & Gert Helgesson, Uppsala University

* Note: this is a cross-posting from The Ethics Blog, hosted by the Centre for Research Ethics & Bioethics (CRB) at Uppsala University. The link to the original article is here. Re-posted with permission of the authors.


Allegedly, there are over 8,000 so-called predatory journals out there. Instead of supporting readers and science, these journals serve their own economic interests first and at best offer dubious merits for scholars. We believe that scholars working in any academic discipline have a professional interest and a responsibility to keep track of these journals. It is our job to warn the young or inexperienced of journals where a publication or editorship could be detrimental to their career. Even with the best of intentions, researchers who publish in these journals inadvertently subject themselves to criticism. We have seen “predatory” publishing take off in a big way and noticed how colleagues start to turn up in the pages of some of these journals. This trend, referred to by some as the dark side of publishing, needs to be reversed.


Circumcision and Sexual Function: Bad Science Reporting Misleads Parents

22 Apr, 16 | by bearp

by Brian D. Earp / (@briandavidearp)


Another day, another round of uncritical media coverage of an empirical study about circumcision and sexual function. That includes the New York Times, whose Nicholas Bakalar has more or less recycled the content of a university press release without incorporating any skeptical analysis from other scientists. That’s par for the course for Bakalar.[1]

The new study is by Jennifer Bossio and her colleagues from Queen’s University in Ontario, Canada: it looked at penile sensitivity at various locations on the penis, comparing a sample of men who had been circumcised when they were infants (meaning they had their foreskins surgically removed), with a sample of men who remained genitally intact (meaning they kept their foreskins into adulthood).[2]

What did the researchers discover? According to a typical headline from the past few days:

“Circumcision does not reduce penis sensitivity.”

But that’s not what the study showed. Before we get into the details of the science, and looking just at this claim from the “headline” conclusion, it might be helpful to review some basic anatomy.


A Tool to Help Address Key Ethical Issues in Research

22 Feb, 16 | by BMJ

Guest post by Rebecca H. Li and Holly Fernandez Lynch

One of the most important responsibilities of a clinical project lead at a biotech company or an academic research team is to generate clinical trial protocols. The protocol dictates how a trial will be conducted and details background information on prior research, scientific objectives, study rationale, research methodology and design, participant eligibility criteria, anticipated risks and benefits, how adverse events will be handled, plans for statistical analysis, and other topics. Many protocol authors use as a starting point a “standardised” protocol template from their funder or institution. These templates often provide standard language, and sections for customisation, sometimes with various “pick-and-choose” options based on the nature of the research. They inevitably cover each of the key topics listed above, but often fail to include ethical principles and considerations beyond the regulatory requirement of informed consent. Indeed, the process of protocol writing has traditionally focused on scientific detail, with ethical analysis often left to institutional review boards (IRBs) and research ethics committees (RECs); unfortunately, robust discussion of specific ethical issues is often absent from clinical trial protocols.

When IRBs and RECs convene to review protocols, they are expected to evaluate whether the study will adequately protect enrolled participants. When the protocol fails to address potential ethical concerns explicitly, reviewers are left to speculate: did the investigator consider the concern, but dismiss it as not relevant in this particular context? Did the investigator fail to understand the concern? Does the investigator have an appropriate plan in place to resolve the concern, but leave it unstated in the protocol? This uncertainty can contribute to delays as reviewers debate among themselves, and can require lengthy back-and-forth with researchers, including a series of protocol revisions and re-reviews until clarity is established. In some cases, reviewers with less experience or expertise may also fail to identify an ethical concern that has not been brought to their attention in a protocol.

The Unbearable Asymmetry of Bullshit

16 Feb, 16 | by bearp

By Brian D. Earp (@briandavidearp)

* Note: this article was first published online at Quillette magazine. The official version is forthcoming in the HealthWatch Newsletter.


Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings — concerns I share and have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.


The Legal and Moral Significance of Implantation

23 Jun, 15 | by BMJ

Guest post by Sally Sheldon

We tend to talk about contraception and abortion as if they were two separate and readily distinguishable practices, the former preventing pregnancy and the latter ending it. This understanding has a very important effect in current British law, where a relatively permissive approach to the availability of contraception stands in stark contrast to the morally grounded, onerous criminal sanctions against abortion. Yet is the distinction between abortion and contraception really so clear cut?  How and why do we make it? And is the line that we have drawn between the two morally defensible?

As a matter of biological fact, the development of human life is not characterised by bright lines. As the eminent lawyer Glanville Williams once put it, “abstract human life does not ‘begin’; it just keeps going.” A seamless biological continuum exists through the production of sperm and egg, their joining together in a process of fertilisation, the gradual development of the new entity thus created throughout pregnancy, birth, subsequent growth, eventual death and ensuing decay of the body. Defining what happens along the way as an ‘embryo’, ‘fetus’, ‘person’, ‘adult’, or ‘corpse’ requires an attempt to draw lines on the basis of criteria selected as holding significance for legal or other purposes. How and where we draw such lines is a tricky business, involving careful moral reflection informed by medical fact.

The “regulatory cliff edge” between the relatively permissive regulation of contraception and the criminal prohibition of abortion relies on a line drawn on the basis of the biological event of implantation, where the fertilised egg physically attaches itself to the wall of the womb some six to twelve days after ovulation. Yet while enormous legal weight has been placed upon it, little consideration seems to have been given as to why implantation matters morally. The voluminous philosophical literature on the ethical status of the human embryo and foetus offers little support for the view that implantation is an important marker.

Further, while it might once have been suggested that implantation offers a conveniently timed moment for a necessary gear change between the appropriate regulation of contraception and abortion, this argument is difficult to sustain in the light of modern medical science.
