

Randomised placebo-controlled trials of surgery: ethical analysis and guidelines

25 Oct, 16 | by miriamwood

Guest Post: Karolina Wartolowska

Paper [OPEN ACCESS]: Randomised placebo-controlled trials of surgery: ethical analysis and guidelines

Surgical placebo-controlled randomised controlled trials are, in many ways, like placebo-controlled drug trials. As with drug trials, a placebo-controlled design is sometimes necessary for the results to be valid and unbiased. Placebo control is usually needed when a surgical trial has only subjective outcomes, which is often the case because many surgeries are done to relieve pain and improve function. Validating the efficacy of a surgery in a well-designed trial helps to improve clinical practice. If the procedure is ineffective, it should be discontinued and a less risky treatment used instead; such a finding also demonstrates the need for new effective interventions. If the surgery is effective, resources should be allocated to the better intervention. If the efficacy of an intervention is never tested, many patients may be exposed to the risks associated with surgery without receiving any real benefit. They also miss out on other treatments, which may provide similar benefits without the risks and costs associated with surgery.

Surgical placebo-controlled randomised controlled trials may be undertaken in an ethical way. Firstly, there needs to be “equipoise”: uncertainty, a lack of strong evidence, and a lack of agreement among clinicians about whether the investigated surgery is effective, or whether it is better than conservative treatment. If there is equipoise, there is no true “best treatment” that can be recommended to the patient. Secondly, there should be some preliminary evidence that the surgery works (from animal studies or open-label trials); there is no point undertaking a surgical trial that is unlikely to show any improvement in the surgical arm. Thirdly, the risks associated with a surgical trial should not be disregarded. To be justified, such trials should have high scientific and clinical value and a potential to change clinical practice. Moreover, the risk of harm in both trial arms should be as small as possible. This is particularly important in the placebo/sham arm. The placebo mimics the active surgery but omits the surgical element that is the key part of the active surgery, so some procedures necessary in the surgical arm, for example anticoagulants or antibiotics, may be avoided in the placebo arm or replaced with a saline injection. Ideally, the placebo/sham procedure should benefit the patients, for example by serving as a diagnostic procedure. And last but not least, it is important that there is uncertainty about treatment allocation but no actual deception: patients should understand which procedures are or are not performed and what the associated risks are in each trial arm.

Surgery is inherently risky but it is important to know whether it is also effective and worth taking these risks.

The End is Not What it Seems – Feasibility of Conducting Prospective Research in Critically Ill, Dying Patients.

14 Oct, 16 | by miriamwood

Guest Post by Amanda Van Beinum

Re: Feasibility of Conducting Prospective Observational Research on Critically Ill, Dying Patients in the Intensive Care Unit

Collecting information about how people die in the intensive care unit is important. Observations about what happens during the withdrawal of life sustaining therapies (removal of breathing machines and drugs used to maintain blood pressure) can be used to improve the care of dying patients. This information can also be used to improve processes of organ donation. But when the Determination of Death Practices in Intensive Care Units (DDePICt) research group first proposed to start collecting prospective data on dying and recently dead patients, a common response from other clinical researchers was, “You’re going to do what?” The research community did not believe that prospective research using an informed consent model would be possible in patients dying after withdrawal of life sustaining therapies in the intensive care unit.

While the clinical research community supported the “big picture” idea behind conducting this research, they were skeptical about our prospective research design and our intent to obtain full informed consent from all families prior to the patient’s death. Some also felt that we would have a hard time obtaining institutional ethics board approval, or would encounter barriers from research coordinators uncomfortable with approaching families for consent at a difficult and emotional time in the patient’s care. However, the DDePICt group was persistent, and succeeded in its efforts to design the first prospective, observational pilot study in Canada of patients dying in the intensive care unit after withdrawal of life sustaining therapies. As part of the study design, the DDePICt pilot study collected data for an ethics sub-study to investigate how these anticipated challenges were overcome. The ethics sub-study sought an answer to the question: can we conduct ethical, prospective, observational research on a critically ill and imminently dying population in the intensive care unit?


A Eulogy for the UK Donation Ethics Committee

13 Oct, 16 | by miriamwood

Guest Post by David Shaw

Re: The Untimely Death of the UK Donation Ethics Committee

Most people I know want to donate their organs after they die. Why wouldn’t they? If you have to die, you might as well do your best to save several other lives once you’re gone. But organ donation is a more ethically complex topic than many people realise. From Spring 2014 until April this year I was a member of the UK Donation Ethics Committee (UKDEC), which advised NHS Blood and Transplant and the various UK health departments on the ethics of organ donation and transplantation. The committee included doctors, lawyers, nurses, ethicists like me, and ‘lay’ members – ordinary members of the public. In my JME article, I discuss the committee’s work and why it came to an end.

UKDEC dealt with a wide variety of topics. We advised the Welsh Government on the ethical implications of a switch to ‘deemed consent’ to organ donation in Wales, undertook an analysis of the role of the family in donation, and engaged with ethnic minorities and religious groups to facilitate discourse about donation. Most of all, our work was important because we provided practical ethical guidance to healthcare professionals who were often unsure about the ethics, and sometimes the legality, of new developments in organ donation. Every year new technologies emerge that can enable donation where it was previously impossible, or that can improve the viability of donated organs. Sometimes doctors would approach UKDEC for advice on protocols that made use of these new innovations. One of UKDEC’s final publications was a discussion paper concerning so-called “elective ventilation”, where a patient is placed on life support not because it will physically benefit him or her, but in order to facilitate organ donation.

But perhaps the most important contribution UKDEC made concerned organ donation after circulatory death (DCD). Nowadays, over 40% of UK donations involve DCD. But until around a decade ago, almost all organ donation in the UK took place after neurological determination of death – in other words, you had to be “brain-dead” before your organs could be donated and transplanted into recipients. In contrast, DCD involves organ donation after a patient’s heart has stopped beating. This might sound relatively straightforward, but in fact many doctors and nurses objected to DCD because of concerns about the potential reversibility of death, the burden on families, and perceived conflicts of interest. Indeed, with the use of new technologies, even heart donation after circulatory death is possible, which might seem paradoxical.


Victims, Vectors and Villains? Are Those Who Opt Out of Vaccination Morally Responsible for the Deaths of Others?

11 Oct, 16 | by miriamwood

Guest Post by Euzebiusz Jamrozik, Toby Handfield, Michael J Selgelid

Re: Victims, vectors and villains: are those who opt out of vaccination morally responsible for the deaths of others?

Who is responsible for the harms caused by an outbreak of vaccine-preventable disease?

Are those who opt out of vaccination and transmit disease responsible for the resultant harms to others?

Suppose that health care systems make vaccines widely available and easily affordable–but some choose not to be vaccinated, resulting in an outbreak. If the outbreak only affected those who could have been safely and effectively vaccinated, but nonetheless opted out, then we might say that those who become infected consented to the risks involved and are thus responsible for their own illness. What should we think, however, about scenarios where harm occurs to those who cannot be safely or effectively vaccinated – e.g. vulnerable groups such as infants and the immunosuppressed? These groups are often at the highest risk of severe harm, and depend upon herd immunity (resulting from high vaccination rates) to protect them from vaccine-preventable infections. Members of such groups bear the burden of others’ freedom to opt out of vaccination, and this can cost them their lives. In 2015, for example, an immunosuppressed woman died in the United States during a measles outbreak made possible by a lapse in local vaccination rates[1].

Our recent article in the Journal of Medical Ethics argues that imposing risks of infection on others without good justification is morally blameworthy–and that individuals who opt out of vaccination are thus morally responsible for resultant harms to others. In defence of this thesis we address numerous important questions, and our answers may have significant implications for public health policy.


Amoral Enhancement

10 Oct, 16 | by miriamwood

Guest Post by Saskia Verkiel

Re: Amoral Enhancement

A reply to Douglas’ reply to Harris’ reply to Douglas regarding the issue of freedom in cases of biomedical moral enhancement

Wouldn’t it be awesome if we could just swallow a pill and become better people?

In many aspects of life, growing numbers of people are embracing biomedical interventions to improve physical or cognitive performance and endurance, whether or not the interventions are indicated for those purposes. Think doping in sports. Think Ritalin in college. Think beta blockers in stage performers. Think modafinil in pilots and surgeons who have to be alert for long stretches of time.

The funny thing is that when it comes to moral enhancement, we tend to think more in terms of its application to others, who are ‘obviously’ not such good people. Swindlers. Rapists. Basically all kinds of performers of crime.

Thomas Douglas was the first to write an analysis specifying when certain kinds of biomedical moral enhancement would be permissible, in 2008, and he realised that it is important to distinguish whom we want the enhancement for. He focused on voluntarily enhancing the self. It’s a jolly nice read.

This paper triggered a cascade of replies.

To be fair, seeing the replies fly back and forth in this debate is not unlike watching a ballgame, albeit more enlightening (or so I think). Compare with Monty Python’s Philosophers’ Football. There’s team “Let’s put it in the drinking water!” (roughly: Oxford) and there’s team “Hold it, hold it…” (captained by John Harris and including yours truly).


Further Clarity on Co-operation and Morality

4 Oct, 16 | by miriamwood

Guest Post by David S. Oderberg, University of Reading

Re: Further Clarity on Co-operation and Morality

The 2014 US Supreme Court decision in Burwell v. Hobby Lobby was a landmark case on freedom of religion and conscience in the USA. The so-called ‘contraceptive mandate’ of the Affordable Care Act (aka Obamacare) requires employers to provide health insurance cover for contraception used by their employees. The Green family (Evangelical Christian), owners of the Hobby Lobby chain of arts and crafts stores, challenged the mandate as they objected to providing cover for at least those methods of contraception that are abortifacient. They were joined by the Hahn family (Mennonite Christian), owners of a furniture company.

The case wound up at the Supreme Court, where the majority, led by Alito J, agreed with the plaintiffs. Under the Religious Freedom Restoration Act 1993, the plaintiffs were ‘substantially burdened’ in their exercise of religious freedom. They sincerely believed that by providing insurance cover that violated their religious and moral beliefs, they would be complicit in sinful behaviour. Violation of the RFRA, the court decided, meant the plaintiffs were entitled to an ‘accommodation’ or ‘opt-out’ of the contraceptive mandate.

The case is remarkable for a number of reasons. Conscientious objection is not new to the courts, particularly as regards service in war. Nor is Hobby Lobby unusual for recognising that a legal person such as a corporation can have its freedom of religion violated in virtue of what its owners/executives are required to do by law. After all, the contraceptive mandate already exempted churches and other purely religious bodies. In the present case, however, the plaintiff corporations were not religious in nature: it was their owners/executives who claimed a corporate exemption based on their personal religious and ethical beliefs. The judgment thus radically extends the potential scope for religious freedom litigation under RFRA, something that will occupy the courts for many years to come.


Should Junior Doctors Still Strike?

20 Sep, 16 | by bearp

Guest Post by Adam James Roberts

In early July, the British Medical Association’s junior members voted by a 16-point margin to reject a new employment contract negotiated between the BMA’s leadership and the Government. The chair of the BMA’s junior doctors committee, Johann Malawana, stood down following the result, noting the “considerable anger and mistrust” doctors felt towards the Government and their concerns about what the contract would mean “for their working lives, their patients and the future delivery of care” in the National Health Service (the NHS).

The BMA pressed the Government to reopen negotiations and to reverse its decision to impose the contract unilaterally. Those appeals having been rebuffed, the BMA announced two months later a new programme of strikes, citing concerns about the impacts on part-time workers, “a majority of whom are women”; on those doctors who already work the greatest number of weekends, “typically in specialties where there is already a shortage” of staff; the contract’s implications for the ability of the NHS to “attract and keep enough doctors” into the future; and the lack of an answer as to how the Government would manage to staff and fund the extra weekend care which was so often drawn on to justify pushing that new contract through.

Earlier this year, Mark Toynbee and colleagues argued in the JME that the earlier rounds of strikes by British juniors were probably ethically permissible, noting that emergency care would continue to be available, that the maintenance of patient well-being was apparently a goal, and that the strikers felt they were treating industrial action as a last resort. In a later paper, I attempted to outline and apply an ethical framework drawing on Thomist ‘just war’ theories, reaching the same conclusion about the strikes as Toynbee did.

In this guest post, I attempt to update or supplement that literature, considering some of the more recent and popular arguments against the current rounds of strikes and whether any of them might be morally compelling. In particular, I look at the fact that the BMA’s junior leadership had described the rejected offer as “a good deal”; the argument that strikes are a disproportionate response to the remaining issues; the concerns voiced about the strikes by Britain’s General Medical Council; and the allegation that striking doctors are “playing politics”.


Is it Ethical to Pay Adolescents to Take HIV Treatment?

20 Sep, 16 | by miriamwood

Guest Post by Rebecca Hope, Nir Eyal, Justin Healy & Jacqueline Bhabha

Re: Paying for Antiretroviral Adherence: Is it Unethical When the Patient is an Adolescent?

With treatment, a child with HIV in sub-Saharan Africa can expect to live a healthy life. Better access to HIV treatment is contributing to a global decline in HIV deaths and new infections. Yet in adolescents, the mortality rate is rising – it increased by 30% between 2005 and 2012 – and HIV is now the leading cause of death among African adolescents. Globally, one in three adolescents with HIV does not take adequate therapy to suppress the virus.

When antiretroviral treatment is life-saving and free, why is adherence so hard for infected adolescents? YLabs, a non-profit that designs and tests solutions to improve the health of disadvantaged youth, began working with adolescents living with HIV in Rwanda and South Africa to understand what prevents them from taking their treatment. Some of our team were involved in that work. Adolescents with HIV are navigating important transitions in their relationships, sexuality, and socio-economic roles, whilst living with a highly stigmatised condition. Lack of social support, isolation, and low mood made it hard for teens to motivate themselves to take medicines regularly. Poverty also stood in the way of regular clinic attendance. Many interviewees were more concerned about their finances than their health: one sixteen-year-old Rwandan girl living with HIV said: “When I’m in class thinking about how to pay school fees, I think about stopping taking my medicine and starting to try to find money.”

In addition, adolescence is often a time of risk-taking and short-term thinking, contributing to unhealthy habits. Neurodevelopmental research suggests that areas of the brain stimulated by rewards reach peak activation in adolescence, and adolescents prefer immediate, small rewards over larger gains that come later. At the same time, the development of ‘self-control’ regions, which help us make wise, considered decisions, lags far behind–a perfect neurodevelopmental storm. For many adolescents, skipping tablet-taking today, when they feel well, might be favoured over staying healthy in five years’ time. We asked, could adolescents’ increased susceptibility to rewards make short-term financial rewards a useful tool to improve long-term healthy adherence habits?

Copyright: YLabs. Photographer: Majdi Osman.

The YBank membership card for adolescents participating in a pilot study of financial incentives

With Rwandan adolescents, YLabs designed YBank, a new approach to improve antiretroviral treatment adherence, currently being piloted in Rwanda with the Rwanda Biomedical Centre. The YBank program combines short- and long-term financial incentives with peer support, access to banking and financial literacy training. But is it ethical to pay adolescents to take their medications?

Researchers from YLabs and from the Harvard TH Chan School of Public Health’s Department of Global Health and Population investigated whether it is ethical to incentivize teens to take antiretroviral therapy. Payment for antiretroviral and other medication adherence is an accepted practice for adults. Our JME paper examines three ethical concerns about incentivizing adolescents with HIV to take antiretrovirals that might be more serious for adolescents than for adults.


Making Humans Morally Better Won’t Fix the Problems of Climate Change

25 Aug, 16 | by miriamwood

Guest Post by Bob Simpson, Monash University

Re: Climate Change, Cooperation and Moral Bioenhancement

The Intergovernmental Panel on Climate Change has repeatedly said that greenhouse gas emissions increase the likelihood of severe and irreversible harm for people and ecosystems. And in his State of the Union address in 2015, Barack Obama emphasised these problems, saying that climate change poses the greatest threat to humanity’s future. We’ve come to expect pronouncements like these. Political leaders and transnational policy institutions both have an important role to play in implementing the measures needed to address threats from climate change – measures like international economic agreements, energy sector reform, and technological research.

By contrast, we wouldn’t expect advocates of biotechnological human enhancement to be proposing solutions to climate change. What does human enhancement have to do with oceanic warming or greenhouse gas emissions? According to people like Ingmar Persson and Julian Savulescu, who advocate “moral bio-enhancement”, these things are in fact related. They say that we should be finding ways to use biotechnological interventions to make people more trusting and altruistic towards strangers, and hence more willing to make personal sacrifices – like, say, dramatically reducing their carbon footprint – in order to cooperate in global policies aimed at mitigating the impact of climate change.


What is a Moral Epigenetic Responsibility?

23 Aug, 16 | by miriamwood

Guest Post by Charles Dupras & Vardit Ravitsky

Re: The Ambiguous Nature of Epigenetic Responsibility

Epigenetics is a recent yet promising field of scientific research. It explores the influence of the biochemical environment (food, toxic pollutants) and the social environment (stress, child abuse, socio-economic status) on the expression of genes, i.e. on whether and how they will switch ‘on’ or ‘off’. Epigenetic modifications can have a significant impact on health and disease later in life. Most surprisingly, it was suggested that some epigenetic variants (or ‘epi-mutations’) acquired during one’s life could be transmitted to offspring, thus having long-term effects on the health of future generations.

Epigenetics is increasingly capturing the attention of social scientists and ethicists, because it brings attention to the importance of environmental exposure for the developing foetus and child as a risk factor for common conditions such as cardiovascular disease, diabetes, obesity, allergies and cancers. Scholars such as Hannah Landecker, Mark Rothstein and Maurizio Meloni have argued that epigenetics may be used to promote various arguments in ongoing debates on environmental and social justice, as well as intergenerational equity. Some have even suggested that epigenetics could lead to novel ways of thinking about moral responsibilities for health.

Is it fair that disadvantaged populations are exposed to an inequitable share of harmful environments – such as polluted areas – that are epigenetically detrimental to their health? Who should be held responsible for protecting children and future generations from epigenetic harm induced by their environments? Should we hold parents accountable for the detrimental epigenetic impact of their behaviour on their children? And how should we manage the possible risks of stigmatisation and discrimination against people considered blameworthy for inflicting epigenetic harm on others? These sensitive questions call for a nuanced investigation of the impact epigenetics can have on our understanding of moral responsibility.

