
How to Keep HIV Cure-Related Trials Ethical: The Benefit/Risk Ratio Challenge

20 Feb, 17 | by bearp

Guest Post by Nir Eyal

Re: Special Issue of the Journal of Medical Ethics on the ethics and challenges of an HIV cure

For most patients with HIV who have access to antiretroviral treatment and use it properly, that treatment works well. But the holy grail of HIV research remains finding a cure. Sometimes that means a literal, sterilizing cure that would remove HIV from the body. But increasingly the aim is to find a mere functional cure that would send HIV into sustained remission during which antiretrovirals would be unnecessary.

Early successes in cure-related research, most notably the apparent cure of ‘Berlin patient’ Timothy Brown, prompted the International AIDS Society and the US National Institutes of Health to declare cure-related research a high priority. Recent successes in animal models have rekindled hopes, and cure-related research is ongoing.

But there is a catch. Many of the early-phase cure-related studies that are currently planned or under way carry risks that are either very high or hard to quantify. These risks come from toxicity (e.g., of stem cell transplantation in an immunocompromised population), necessary interruptions to antiretroviral treatment (either short ‘pauses’ or intentionally longer breaks), or invasive physical exams. They affect study subjects and, sometimes, third parties like sexual partners or foetuses.

While high or unknown risks are a mainstay of early-phase trials in areas like cancer research, cure study participants typically have a safe and efficacious alternative to those risks: remaining on antiretrovirals. Can we justify asking patients who are doing well on antiretrovirals to accept the risk and uncertainty of many HIV cure-related trials? If we cannot, we might need to give up on the hope of curing HIV, or of achieving controlled remission.

These ethical questions about HIV cure-related trials were first raised by an activist, then asked again and again. They also arise in human subject research beyond HIV cure-related studies: what should we do when it is hard to keep a socially-important study beneficial in prospect to study participants? Are we ever permitted to compromise the individual’s objective interests in the pursuit of collective goals? What are legitimate ways of pre-empting this dilemma? The entire February 2017 issue of the Journal of Medical Ethics is dedicated to clarifying and trying to answer these questions.

After an introduction, the journal issue provides a background by leading HIV cure researchers Dan Kuritzkes, Kenneth Freedberg, and Paul Sax, as well as myself, a philosopher. Articles by legally-trained bioethicists Rebecca Dresser and Seema Shah and philosopher Caspar Hare suggest ways to quantify and mitigate risks to participants of cure-related studies. Contributions by philosopher Lara Buchak, bioethicist and lawyer Emily Largent, and AIDS activist David Evans assess how much the potential benefits to study participants, ranging from the remote hope of being cured through financial incentives to the satisfaction of having helped others, can legitimately offset any remaining risks. Legally-trained bioethicist George Annas and philosopher Danielle Bromwich explore how much participants’ fully informed consent can count as ample protection in cure-related studies, and when that consent counts as full. Philosophers Dan Wikler, Nick Evans (with first author public health expert Regina Brown), Rahul Kumar, and Frances Kamm assess when, if ever, the potential public health benefits of research – e.g., finding a cure for HIV – can warrant placing individual study participants at high net risk. An afterword asks how these investigations should affect future directions in research ethics.

Many contributions agree that myriad ways exist to justify studies that, at least on the face of it, run counter to the best medical interests of candidate participants. Furthermore, one need not be a utilitarian to argue as much. Even so-called contractualist ethicists such as Rahul Kumar can justify such studies, provocative though they may be for current culture in clinical study oversight. That culture, these articles suggest, is hard to defend from a wide spectrum of ethical theories.


NOTE: This post will be cross-published at BMJ Opinion.

Harm: Could It Sometimes Be a Good Thing?

9 Feb, 17 | by miriamwood

Guest Post: Patrick Sullivan

Response: Hanna Pickard and Steve Pearce, Balancing costs and benefits: a clinical perspective does not support a harm minimization approach for self-injury outside of community settings

BBC News recently reported on the approval of plans for supervised self-injection rooms in Glasgow, where drug users could inject safely under supervision. Needless to say, the initiative is controversial and is as yet approved only in principle. The plan would involve addicts consuming their own drugs and, in some cases, being provided with medical-grade heroin. The move aims to address the problems caused by an estimated 500 or so users who inject on Glasgow’s streets. This initiative again brings the issue of harm minimisation into the public eye.

The concept of harm minimisation has been widely applied in a number of areas, drug misuse being one, where needle exchange programmes are the obvious example. The basic idea is that where we are unable to stop people engaging in dangerous activities, we may sometimes have to settle for the best outcome possible: reducing the harm associated with the activity. Many day-to-day activities involve harm reduction: seat belts in cars, motorcycle helmets, safety measures to reduce risks in extreme sports, advice on safe drinking levels. People will drive, ride motorbikes, engage in dangerous sporting activity and drink alcohol. If they are going to do these things, it is important that they do them as safely as possible. That, essentially, is what harm minimisation is about.

A controversial application of these ideas has been in the area of self-injury. The fundamental idea is that people are allowed to harm themselves safely in the short term, whilst longer-term change is facilitated through access to psychological support. In my recent paper, ‘Should health care professionals be allowed to do harm? The case of self-injury’, I revisit the ethical issues associated with using harm minimisation to support people who self-injure. The idea is controversial and counterintuitive, given health care professionals’ obligation to do no harm. I challenge this perspective, suggesting that many clinical interventions do in fact involve harm; anyone who has experienced surgery or even dental treatment will acknowledge this quite readily.

Now, it is important to be clear that I am not supporting the routine use of this approach in clinical practice. There is a place, in my view, for paternalism, and the ethical case can be made in a number of scenarios, for example the prevention of suicide in people with a psychotic depression. Furthermore, I do not underestimate the risks associated with implementation in a mental health inpatient setting. I do, however, believe the approach provides an alternative perspective that could be adopted with some people who self-injure.


Balancing Costs and Benefits: A Clinical Perspective Does not Support a Harm Minimization Approach for Self-injury Outside of Community Settings

9 Feb, 17 | by miriamwood

Guest Post: Hanna Pickard and Steve Pearce

Responding to: Harm may sometimes be a good thing? Patrick Sullivan

Sullivan’s emphasis on the importance of supporting autonomy and independence among vulnerable people who self-injure is fundamental to good clinical practice. This is why some forms of harm minimization, such as encouraging reflection, responsibility, safe cutting and where appropriate self-aftercare, are uncontroversial and already widely practiced within community settings. The situation is different, however, with respect to both secure and non-secure inpatient settings. It is also different when we consider the other forms of harm minimization that Sullivan advocates, namely, the provision of self-harming instruments on wards alongside education about anatomy.

In secure (forensic) inpatient settings, it is neither practical nor ethical to provide implements that can be used as weapons to any patient, for any reason. This would be to severely compromise staff and patient safety.

In non-secure inpatient settings, patients are likely to be detained under the Mental Health Act. This raises the question of the grounds of detention. Typically, patients who self-injure are detained because they are judged to be currently at risk of life-endangering or life-changing injury. As Sullivan notes, it is not clinically or ethically appropriate to provide patients with the means to self-injure when they are in this state of mind. This means that the relevant inpatient population for which a harm minimization approach could even be considered is relatively small: those who have a standing pattern of self-injury and who are detained on non-secure units for reasons other than acute self-injury.

Sullivan suggests that the long-term benefits of facilitating self-injury for such patients may outweigh the costs. He notes that self-injury functions as a way of coping with psychological distress – which restrictions of liberty can heighten – and suggests that harm minimization may improve therapeutic relationships with staff and outcomes for patients over time. However, the potential benefits of a harm minimization approach to a particular patient must be weighed – in clinical and ethical decision-making in a non-secure inpatient setting – not only against the potential costs to that patient but also against the potential costs to staff and other patients. Consider these in reverse order.

With respect to costs to other patients, it is well-established that self-injury can be contagious. Patients admitted to a ward without a history of self-injury may learn to self-injure if they see other patients doing it – a risk that may be especially pronounced if self-injury is part of a therapeutic engagement with staff – and patients with a history of self-injury may learn new means. Specialist inpatient units that have employed a harm minimization approach in the past, including one at which SP worked in the 1990s, have had difficulties with patients adopting techniques from one another and with self-injury escalating. Put bluntly, witnessing or even just hearing about self-injury increases the chance that people will try it themselves. The impact on other patients of facilitated self-injury on wards needs to be factored into any assessment of costs and benefits.

With respect to the costs to staff, it is of course accepted that clinical work requires managing the psychological burden of treating challenging patients like those who self-injure. But facilitating self-injury through the provision of implements in non-secure inpatient settings would significantly increase this burden. Risk assessment is not an exact science and mistakes will occur – especially, perhaps, in the current NHS context where wards are both overpopulated and understaffed. If staff provide implements to people to self-injure in inpatient settings, they not only bear the psychological cost of knowing they have facilitated – and in that sense sanctioned – the process of self-injury. There will also be occasions where patients accidentally or deliberately kill themselves. Staff will then be in a position of having provided the means to this devastating outcome. Obviously by far the most important cost in such a situation is to patients. But the psychological burden of working with this risk – let alone dealing with its actual occurrence – and its potential impact on staff stress levels and burn-out will not be negligible, and again needs to be taken into account.

Finally, consider the potential costs to patients themselves. We do not deny that it is extremely difficult for patients who have a standing pattern of using self-injury as a way of coping with psychological distress to have it curtailed. No doubt, care would be improved by better awareness and attention to the impact this has on detained patients. But people self-injure not only to manage psychological distress. Self-injury is also a communication to others as well as linked to low self-esteem, negative core beliefs, and emotions like shame and self-hatred. It can both express and reinforce a person’s deeply held belief that they are bad, worthless, and deserving of punishment. This is part of its meaning. The impact of staff facilitating self-injury within a therapeutic relationship risks fuelling this mindset by implicitly sanctioning it. This risk might be mitigated in contexts where staff are highly trained and skilled in offering complex psychological interventions with vulnerable patients – as well as expertly supported and supervised – but, again, this is not a realistic expectation on today’s NHS wards. Long-term self-injury is correlated with suicide. This is one reason why so much effort is made to address it across all mental health settings. Correlation is not causation, and we must acknowledge that mechanisms are as yet unknown, but it is natural to speculate that one reason is that self-injury maintains a negative self-concept – a known risk factor for suicide.

Indeed, even something as seemingly innocuous as education about anatomy carries risks that Sullivan does not acknowledge. In this respect, it is noteworthy that the medically trained population has higher suicide completion rates than the general population. Sullivan seems to presume that teaching someone about, for example, the important structures in the wrist, will enable them to cut with less risk. But we cannot assume knowledge is benign: rather than being used to self-injure more safely, it can, instead, be used to enable people to cut more dangerously and effectively.

The abstract principles of harm minimization are laudable, but from a clinical and practical ethical perspective, the devil is in the details. Apart from uncontroversial measures already practiced in community settings, we do not believe that – for self-injuring patients themselves, let alone when we factor in the potential impact on other patients and staff – the balance between costs and benefits tips in its favour.

Combating Doping in Sports: More of the Same or What?

7 Feb, 17 | by miriamwood

Guest Post: Bengt Kayser and Jan Tolleneer

Paper: Ethics of a relaxed antidoping rule accompanied by harm-reduction measures

Doping in sports continues to be prominently present in the media. Regularly, ‘scandals’ surface that trigger flurries of articles, documentaries and reactions in the media. The general tone is one of moral opprobrium: dopers are considered deviant and bad. Frequently these episodes are accompanied by calls for more means to repress doping. These efforts, in principle coordinated by the World Anti-Doping Agency (WADA), aim at eradicating doping from sports.

Doping is considered cheating, and dopers are considered bad. But despite increasing means, doping remains rife, leading to what some call an arms race in a war on doping. Anti-doping still clings to its essentialist objective of getting rid of this behaviour, even though it appears increasingly clear that the objective cannot be reached. Already today, athletes have to comply with exceptional rules, such as the obligation to report their whereabouts 365 days a year to allow unannounced in- and out-of-competition urine and blood sampling for anti-doping controls. Yet calls for more means and more repression resound. Increasingly, countries, pressured by the International Olympic Committee and WADA, are introducing criminal law to repress doping – in several countries applicable to non-athletes as well.

But repression of human behaviour comes at a cost. The prohibition of alcohol in the USA in the first part of the last century is a good example, as is the so-called war on drugs. Like the latter, anti-doping also has unintended side-effects, and it is possible that the overall societal cost of anti-doping surpasses its positive effects. The question then arises whether there are alternative approaches to dealing with doping. So far, however, the debate on alternatives has been confined to two discourses: repression or liberalisation.

In our recently published paper in the Journal of Medical Ethics we argue that there is an ethically acceptable alternative somewhere in between. Our point of departure is a partial relaxation of the anti-doping rule, accompanied by harm-reduction measures, in a dynamic setting, i.e. one adaptable over time in response to observed effects. We develop our arguments on five levels: (1) What would it mean for the athlete (the self)? (2) How would it impact other athletes (the other)? (3) How would it affect the phenomenon of sport as a game and its fair-play basis (the play)? (4) What would be the consequences for the spectator and the role of sports in society (the display)? And (5) what would it mean for what is often considered essential to being human (humanity)? Our analysis suggests that a partial relaxation of the anti-doping rule accompanied by harm-reduction measures appears ethically defensible on all five levels. Our proposed framework thus potentially provides an escape from the present spiral towards criminalisation of doping and doping-like behaviour in society. It is time to start discussing the practical details of such a policy change and to start experimenting.

Bridging the Education-action Gap: A Near-peer Case-based Undergraduate Ethics Teaching Programme

6 Feb, 17 | by miriamwood

Guest Post: Dr Selena Knight and Dr Wing May Kong

Paper: Bridging the education-action gap – a near-peer case-based undergraduate ethics teaching programme

Medical ethics and law is a compulsory part of the UK undergraduate medical school curriculum. By the time they qualify, new junior doctors will have been exposed to ethics teaching in lectures and seminars, through assessments, and during clinical placements. However, does this really prepare them for the ethical minefield they will encounter as doctors?

Following my own graduation from medical school, I started as a foundation year doctor in a busy London teaching hospital. Despite having had more exposure to ethics and law teaching than most, having completed an intercalated BSc in the subject, I found that as a new doctor I was often encountering ethical dilemmas on the wards but felt surprisingly ill-equipped to deal with them. I was generally able to identify that I was facing an ethical dilemma, but frequently found myself stuck when trying to come up with a practical solution.

If I felt like this having had an additional year of studying ethics and law, how on earth were other new doctors coping? In fact, when I questioned my peers about their experiences, they described encountering similar dilemmas. Either they didn’t specifically identify them as ethical in nature (they described feeling uncomfortable or uneasy with a decision or a particular situation but couldn’t pinpoint why), or they felt unable to do anything to improve the situation, whether because they didn’t know what to do or because they didn’t feel confident enough to speak up and rock the boat, for example when they saw a consultant acting unprofessionally.

It became clear that even if ethics teaching at medical school was providing sufficient knowledge to enable junior doctors to identify ethical dilemmas, it was failing to prepare them to actually deal with such issues in practice. My own experiences, together with those I heard from my peers, formed the inspiration for the teaching programme that was subsequently designed.


Professional Codes and Diagnosis at a Distance

6 Feb, 17 | by Iain Brassington

This is the second part of my response to Trish Greenhalgh’s post on the propriety of medics, psychiatrists in particular, offering diagnoses of Donald Trump’s mental health.  In the last post, I concentrated on some of the problems associated with making such a diagnosis (or, on reflection, what might be better called a “quasi-diagnosis”).  In this, I’m going to concentrate on the professional regulation aspect.

Greenhalgh notes that, as a UK medic, she is bound by the GMC’s Duties of a Doctor guidance,

which – to my surprise – does not explicitly cover the question of a doctor’s duty towards a public figure who is not his or her patient.


My reading of the GMC guidance is that in extreme circumstances, even acknowledging the expectation of how doctors should normally behave, it may occasionally be justified to raise concerns about a public figure (for example, when the individual is relentlessly pursuing a course of action that places many lives at risk). Expressing clinical concern in such circumstances seems to involve a comparable ethical trade-off to the public interest disclosure advice (Duties of a Doctor paragraphs 53-56) that breach of patient confidentiality may be justified in order “to prevent a serious risk of harm to others.”

Well, to be honest, it’s not that much of a surprise to me that the GMC guidance doesn’t stretch to public figures – but that’s a minor point.

The more interesting thing for me is what the relationship is between the practitioner and the GMC.  Greenhalgh ends her post by saying that she “wrote this blog to promote further debate on the topic and invite the GMC to clarify its position on it”.  But why should the GMC’s position be all that important?

OK: I’m going to go off on a bit of a tangent here.  Stick with me.

Diagnosing Trump

5 Feb, 17 | by Iain Brassington

It doesn’t take too much time on the internet to find people talking with some measure of incredulity about Donald Trump.  Some of this talk takes the tone of horrified fascination; some of it is mocking (and is accompanied by correspondingly mocking images); and some people are wondering aloud about his mental health.  In this last category, there are a couple of sub-categories: sometimes, people are not really talking in earnest; sometimes, though, they are.  What if the forty-fifth President of the United States of America has some kind of mental illness, or some kind of personality disorder?  What if this affects his ability to make decisions, or increases the chance that he’ll make irrational, impulsive, and potentially dangerous decisions?

This does raise questions about the proper conduct of the medical profession – particularly, the psychiatric profession.  Would it be permissible for a professional to speak publicly about the putative mental health of the current holder of the most important political office in the world?  Or would such action simply be speculation, and unhelpful, and generally infra dig?  More particularly, while the plebs might say all kinds of things about Trump, is there something special about speaking, if not exactly ex cathedra, then at least with the authority of someone who has working knowledge of cathedrae and what it’s like to sit on one?

As far as the American Psychiatric Association is concerned, the answer is fairly clear.  §7.3 of its Code of Ethics, which you can get here, says that

[o]n occasion psychiatrists are asked for an opinion about an individual who is in the light of public attention or who has disclosed information about himself/herself through public media. In such circumstances, a psychiatrist may share with the public his or her expertise about psychiatric issues in general. However, it is unethical for a psychiatrist to offer a professional opinion unless he or she has conducted an examination and has been granted proper authorization for such a statement.

This rule is nicknamed the “Goldwater Rule”, after Barry Goldwater, the Senator who sued successfully for damages after a magazine polled psychiatrists on the question of whether or not he was fit to be President.  Following the rule would appear to rule out making any statement about whether a President has a mental illness, a personality disorder, or anything else that might appear within the pages of the DSM.

Over on the BMJ‘s blog, Trish Greenhalgh has been wondering about what a doctor may or may not do in cases like this:

I have retweeted cartoons that mock Trump, because I view satire and parody as legitimate weapons in the effort to call our leaders to account.

But as a doctor, should I go further? Should I point out the formal diagnostic criteria for a particular mental illness, cognitive condition, or particular personality disorder and select relevant examples from material available in the public domain to assess whether he appears to meet those criteria?

Her post is long, but it does generate an answer:

I believe that on rare occasions it may be ethically justified to offer clinically-informed speculation, so long as any such statement is clearly flagged as such. […] I believe that there is no absolute bar to a doctor suggesting that in his or her clinical opinion, it would be in the public interest for a particular public figure to undergo “occupational health” checks to assess their fitness to hold a particular office.

Her phrasing is such as to leave no bet unhedged – she’s careful not to say that she’s talking about anyone in particular; but, beneath that, the message is clear: it might be justifiable to depart from the Goldwater Rule to some extent in certain hypothetical circumstances.

My post in response will also be long – in fact, it’s going to spread out over two posts.  I think she’s plausibly correct; but the way she gets there is not persuasive.


Chappell on Midwives and Regulation

2 Feb, 17 | by Iain Brassington

Richard Yetter Chappell has drawn my attention to this – a blog post in which he bemoans the Nursing and Midwifery Council’s rules about indemnity insurance, and the effects that they’ll have on independent midwives.  (I’d never heard of independent midwives – but an IM – according to Independent Midwives UK – is “a fully qualified midwife who has chosen to work outside the NHS in a self-employed capacity”.)  In essence, what’s happened is that the NMC has ruled that the indemnity cover used by some IMs – around 80, nationwide, according to some reports – is inadequate; these 80 IMs (out of 41,000!) are therefore barred from working.

I’ve got to admit that this seems like a bit of a storm in a teacup to me.  For sure, there may have been infelicities about the way that the NMC handled its decision.  That may well be unfortunate, but it may not be all that much to get excited about.  However, Chappell makes two particularly striking points.  The first is his opening claim, in which he refers to this as “a new low for harmful government over-regulation”.  Well, it’s not really government overregulation, is it?  It’s the NMC.  Governing bodies are not government.  And whether it’s overregulation at all is a moot point: we need more information about what the standard is by which we should assess any regulation.  That leads us to the second striking thing that Chappell says, to which I’ll return in a moment.  Whether it’s harmful is also a moot point.  I mean, it may be true – as he points out – that the decision will have an undesirable impact on the relationship between some women and their chosen midwife.  But that won’t tell us anything about whether the policy is desirable all told.  It’s certainly not enough to warrant calling it “unethical” – and simply to dub something unethical is not to make a moral argument.

The second striking thing is this:

The Importance of Disambiguating Questions about Consent and Refusal

2 Feb, 17 | by miriamwood

Guest Post: Rob Lawlor

Re: Cake or death? Ending confusions about asymmetries between consent and refusal

Imagine you have an adolescent patient who is in need of life-saving treatment. You offer him the treatment, assuming that he will consent, but he refuses. As he is not yet a competent adult, you decide to treat him despite his wish to refuse treatment.

Now consider the question: does it make sense to say that there is an asymmetry between consent and refusal?

If you are familiar with the term “asymmetry between consent and refusal”, the chances are that you will believe that you know what the question means and you are likely to have an opinion regarding the answer. And if you are like John Harris, you may also think that the answer is obvious and that any other answer would be “palpable nonsense”. However, if you are not familiar with the term or with the relevant literature, you may be far less confident that you even understand the question.

Despite their lack of familiarity with the question, I believe the latter group may have a better understanding of the issue than the first group. Why? Because these people are wondering, “What does this question mean?” My claim is that we would make more progress if more people took the time to ask this question. The phrase “the asymmetry between consent and refusal” allows us to capture the topic of a particular debate in a fairly succinct way, but I suggest that it obscures the ethical issues, rather than illuminating them.


HIV Cure Research and The Dual Aims of the Informed Consent Process

25 Jan, 17 | by miriamwood

Guest Post: Danielle Bromwich and Joseph Millum

Paper: Informed Consent to HIV Research 

Special Issue: The benefit/risk ratio challenge in clinical research, and the case of HIV cure

A cure for HIV would be tremendously valuable. Approximately 37 million people worldwide are HIV-positive and 15 million are currently on antiretroviral therapy. Until recently it was assumed that this therapy would be the extent of HIV treatment and that those with access to it would need to take their drugs for life. But what once seemed impossible is now in early-phase clinical trials: interventions designed to completely eradicate HIV from the immune system.

Excitement surrounding these “HIV cure” studies is tempered by ethical concern. They require participants to come off their antiretroviral therapy and undergo highly risky interventions using gene transfers or stem cell therapy. These are currently proof-of-concept studies – no one expects the participants to be cured. Their purpose is to provide essential information about safety and pharmacokinetics, but in doing so they expose participants to high risks with little prospect of direct benefit.

If we could be confident that participants understood their trials’ true risk-benefit ratio, these high risks might be less troubling. But such confidence would be misplaced. Decades of data show poor comprehension of risk among participants in clinical trials. The fact that HIV is still a stigmatized condition amplifies this concern. Potential participants may be desperate to be rid of their disease and so downplay the risks and exaggerate the potential benefits. Understandably, HIV cure researchers and research ethics committees are worried. What should they do with a patient-participant who wants to come off his medication and receive a high-risk experimental intervention because he thinks that he’ll be “the one” who is cured?

Informed consent is generally thought to be one key protection for participants enrolled in risky studies. The standard view of informed consent says that valid consent requires the person giving consent to understand the risks and benefits of study participation. According to this view, someone who seriously misunderstands the study’s true risk-benefit ratio can be excluded on the grounds that he has not given valid consent to study participation.

In a recent paper, part of a special collection on HIV cure research in the Journal of Medical Ethics, we analyze a range of concerns about informed consent for HIV cure trials.

