
The Legal and Moral Significance of Implantation

23 Jun, 15 | by BMJ

Guest post by Sally Sheldon

We tend to talk about contraception and abortion as if they were two separate and readily distinguishable practices, the former preventing pregnancy and the latter ending it. This understanding has a very important effect in current British law, where a relatively permissive approach to the availability of contraception stands in stark contrast to the morally grounded, onerous criminal sanctions against abortion. Yet is the distinction between abortion and contraception really so clear cut?  How and why do we make it? And is the line that we have drawn between the two morally defensible?

As a matter of biological fact, the development of human life is not characterised by bright lines. As the eminent lawyer Glanville Williams once put it, “abstract human life does not ‘begin’; it just keeps going.” A seamless biological continuum exists through the production of sperm and egg, their joining together in a process of fertilisation, the gradual development of the new entity thus created throughout pregnancy, birth, subsequent growth, eventual death and ensuing decay of the body. Defining what happens along the way as an ‘embryo’, ‘fetus’, ‘person’, ‘adult’, or ‘corpse’ requires an attempt to draw lines on the basis of criteria selected as holding significance for legal or other purposes. How and where we draw such lines is a tricky business, involving careful moral reflection informed by medical fact.

The “regulatory cliff edge” between the relatively permissive regulation of contraception and the criminal prohibition of abortion relies on a line drawn on the basis of the biological event of implantation, where the fertilised egg physically attaches itself to the wall of the womb some six to twelve days after ovulation. Yet while enormous legal weight has been placed upon it, little consideration seems to have been given as to why implantation matters morally. The voluminous philosophical literature on the ethical status of the human embryo and foetus offers little support for the view that implantation is an important marker.

Further, while it might once have been suggested that implantation offers a conveniently timed moment for a necessary gear change between the appropriate regulation of contraception and abortion, this argument is difficult to sustain in the light of modern medical science.

What should Investigators be Doing with Unexpected Findings in Brain Imaging Research?

22 Jun, 15 | by BMJ

Guest Post by Caitlin Cole

Incidental findings in brain imaging research are common. Investigators can discover these unexpected findings of potential medical significance in up to 70% of their research scans. However, there are no standards to guide investigators as to whether they should actively search for these findings or which, if any, they should return to research participants.

This complex ethical issue impacts many groups in brain imaging: participants and parents of child participants who may desire relevant health information, but alternatively may suffer from anxiety and financial burden; investigators who must ethically grant their participants autonomy, but who also may suffer from budget and personnel restrictions to manage the review and report of these findings; Institutional Review Board (IRB) members who must provide ethical oversight to imaging research and help mandate institutional standards; and health providers who must interface with their patients and assist with follow up care when necessary.

Our research study shows these groups share some ideas on the ethics of returning incidental findings – “the researcher has an ethical responsibility or obligation to tell a subject that there’s something there, however they do it, but just inform the subject, even though it’s not part of the research” – yet also acknowledge the inherent risk in reporting medical research information. As one of our IRB members commented, “I mean [in regards to withholding findings] one reason would be to protect the patient from doing something stupid about them.”

When participants are asked about incidental findings, they consistently state that they want to receive all information pertinent to their health. Research participants want to make their own medical decisions and feel investigators have a responsibility to keep them informed.

However, it is clear from our research that participants do not always understand the difference between a brain scan for research purposes and a clinical scan. The incidental finding reports that they receive include personal health information, written in medical jargon, discovered during a clinical procedure that may have immediate or long term medical significance. Because of this crossover between conducting research and sharing health information, participants may overestimate the clinical utility of the reported research information. This is a challenge for investigators whose role is to conduct research, not to diagnose participants or offer findings with clinical certainty. Participant assumptions otherwise have the potential to cause downstream legal complications for the research institution.

It is necessary to understand the impact on all parties involved in the process of disclosing incidental findings to determine appropriate management policy. This challenging task should not be underestimated as these groups think differently about the balance between risk and benefit based on their role in this process, whether they be a research participant, a research investigator, an IRB member or a health provider. Overall there is an ethical demand to manage and report unexpected findings discovered in brain imaging research; finding a way to do this while minimizing negative impact for all involved is important.

Read the full paper here.

Is Age a Determinant Variable in Forgoing Treatment Decisions at the End of Life?

14 May, 15 | by BMJ

Guest post by Sandra Martins Pereira, Roeline Pasman and Bregje Onwuteaka-Philipsen

Decisions to forgo treatment are embedded in clinical, socio-cultural, philosophical, religious, legal and ethical contexts and beliefs, and they cannot be considered as representing good or poor quality care. Particularly for older people, it is sometimes argued that treatment is aggressive, and that there may be a tendency to continue or start treatments in situations where a shift to a focus on quality of life in light of a limited life expectancy might be preferred. Others argue that an attitude of ageism might prevent older people from receiving treatments and care from which they could benefit, thus resulting in some type of harm and compromising the ethical principles of beneficence and non-maleficence.

When the need to make a decision about treatment concerns an older person at the end of life, physicians need to reflect on the following questions: In this situation, for this person, what is the best course of action? Is this person capable of assessing the situation and making a decision about it adequately herself? What are the preferences of the person? Who needs to be involved in the decision-making process? What will be the consequences of starting or withholding this treatment?

Our study shows that decisions to forgo treatment preceded death in a substantial proportion of older people in the Netherlands, and more often than in younger groups. It also shows that, compared to the younger age groups, differences in the older age group were more pronounced for decisions to withhold a treatment than for decisions to withdraw one. This is interesting because it suggests that Dutch physicians, especially those caring for older people, adopt a palliative culture and approach, thus meeting the relatively more frequent preference of older people for receiving comfort care rather than aggressive treatments aimed at prolonging life. Moreover, it seems that decisions to forgo treatment among the ‘oldest old’ (i.e., older people aged 80 and above), when compared to the youngest age group, were more frequently made at the wish of the patient, indicating consideration and respect for the patient’s wishes.

However, with regard to patient participation in decision making, we also saw that most patients, regardless of their age, did not discuss the decision to forgo treatment with the attending physician. As our findings indicate, this occurred mostly because the patient was no longer able to assess the situation and make a decision about it in an adequate manner. This result highlights the need for further strategies to implement advance care planning in practice, and at an earlier stage of the disease trajectory.

Finally, based on our study, we cannot assume that any age-related differences in forgoing treatment decisions occur due to an attitude of ageism. On the contrary, our study suggests that care for older people in the Netherlands seems to be focused on providing palliative care, also suggesting a better acceptance that these patients are nearing death. This is particularly relevant for the discussion about the meaning of dying well in older ages, having an impact on older people’s experiences and end-of-life care.

Read the full paper here.

Animal Liberation: Sacrificing the Good on the Altar of the Perfect?

24 Apr, 15 | by Iain Brassington

For my money, one of the best papers at the nonhuman animal ethics conference at Birmingham a couple of weeks ago was Steve Cooke’s.*  He was looking at the justifications for direct action in the name of disrupting research on animals, and presented the case – reasonably convincingly – that the main arguments against the permissibility of such direct action simply don’t work.  For him, there’s a decent analogy between rescuing animals from laboratories and rescuing drowning children from ponds: in both cases, if you can do so, you should, subject to the normal constraints about reasonable costs.  The question then becomes one of what is a reasonable cost.  He added to this that the mere illegality of such disruption mightn’t tip the balance away from action.  After all, if a law is unjust (he claims), it’s hard to see how that alone would make an all-else-being-equal permissible action impermissible.  What the law allows to be done to animals in labs is unjust, and so it doesn’t make much sense to say that breaking the law per se is wrong.

Now, I’m paraphrasing the argument, and ignoring a lot of background jurisprudential debate about obligations to follow the law.  (There are those who think that there’s a prima facie obligation to obey the law qua law; but I think that any reasonable version of that account will have a cutoff somewhere should the law be sufficiently unjust.)  But for my purposes, I don’t think that that matters.

It’s also worth noting that, at least formally, Cooke’s argument might be able to accommodate at least some animal research.  If you can claim that a given piece of research is, all things considered, justifiable, then direct action to disrupt it might not have the same moral backing.  Cooke thinks that little, if any, animal research is justified – but, again, that’s another, higher-order, argument.

One consideration in that further argument may be whether you think that there’s a duty to carry out (at least certain kinds of) research.

Animals in US Laboratories: Who Counts, Who Matters?

21 Mar, 15 | by BMJ

Guest post by Alka Chandna

How many animals are experimented on in laboratories? It’s a simple question, the answer to which provides a basic parameter to help us wrap our heads around the increasingly controversial and ethically harrowing practice of locking animals in cages and conducting harmful procedures on them that are often scary, painful, and deadly. Yet ascertaining the answer in the United States – the world’s largest user of animals in experiments – is surprisingly difficult.

In the eyes of the US Animal Welfare Act (AWA) – the single federal law that governs the treatment of animals used in experimentation – not all animals are created equal. Mice, rats, and birds bred for experimentation, and all cold-blooded animals – estimated by industry to comprise more than 95 percent of all animals used – are all unscientifically and dumbfoundingly excluded from the AWA’s definition of “animal”. Orwell cheers from his grave while Darwin rolls in his.

Leaving aside the question of whether mice and rats should be categorized as vegetable or mineral, the exclusion of these animals from the AWA also results in a dearth of data on the most widely used species, as the only figures on animal use in US laboratories that are systematically collected, organized, and published by the government are on AWA-regulated species.

Growing a Kidney Inside a Pig Using your own DNA: The Ethics of ‘Chimera Organs’

6 Nov, 14 | by Iain Brassington

Guest post by David Shaw

Imagine that you’re in dire need of a new kidney. You’re near the top of the waiting list, but time is running out and you might not be lucky enough to receive a new organ from a deceased or living donor. But another option is now available: scientists could take some of your skin cells, and from them derive stem cells that can then be added to a pig embryo. Once that embryo is implanted and carried to term, the resulting pig will have a kidney that is a perfect genetic match to you, and the organ can be transplanted into your body within a few months without fear of immune rejection. Would you prefer to take the risk of waiting for an organ donated by a human, which would require you to take immunosuppressant drugs for the rest of your life? Or would you rather receive a “chimera organ”?

This scenario might seem far-fetched, but it is quite likely to be a clinical reality within a decade or so. Scientists have already used the same technique to grow rat organs inside mice, and it has also been shown to work in different types of pig. Although clinical trials in humans have not yet taken place, using these techniques to create human organs inside animals could solve the current organ scarcity problem by increasing the supply of organs, saving thousands of lives each year in Europe alone. As illustrated in the example, organs created in this way could be tailored to the individual patient’s DNA, allowing transplantation without the risk of immune rejection. However, the prospect of growing organs of human origin within (non-human) animals raises several ethical issues, which we explore in our paper.

Although chimera organs are ‘personalised’ and unlikely to be rejected, one of the major concerns about using organs transplanted from animals is the risk of ‘zoonosis’ – the possibility that an animal virus might be transmitted along with the organ, resulting in a new disease that could cause a pandemic.

Saatchi Bill – Update

28 Oct, 14 | by Iain Brassington

Damn. Damn, damn, damn.

It turns out that the version of the Medical Innovation Bill about which I wrote this morning isn’t the most recent: the most recent version is available here.  Naïvely, I’d assumed that the government would make sure the latest version was the easiest to find.  Silly me.

Here’s the updated version of §1(3): it says that the process of deciding whether to use an unorthodox treatment

must include—

(a) consultation with appropriately qualified colleagues, including any relevant multi-disciplinary team;

(b) notification in advance to the doctor’s responsible officer;

(c) consideration of any opinions or requests expressed by or on behalf of the patient;

(d) obtaining any consents required by law; and

(e) consideration of all matters that appear to the doctor to be reasonably necessary to be considered in order to reach a clinical judgment, including assessment and comparison of the actual or probable risks and consequences of different treatments.

So it is a bit better – it seems to take out the explicit “ask your mates” line.

However, it still doesn’t say how medics ought to weigh these criteria, or what counts as an appropriately qualified colleague.  So, on the face of it, our homeopath-oncologist could go to a “qualified” homeopath.  Or he could go to an oncologist, get told he’s a nutter, make a mental note of that, and decide that that’s quite enough consultation and that he’s still happy to try homeopathy anyway.

So it’s still a crappy piece of legislation.  And it still enjoys government support.  Which does, I suppose, give me an excuse to post this:

Many thanks to Sofia for the gentle correction about the law.

An Innovation Too Far?

28 Oct, 14 | by Iain Brassington

NB – Update/ erratum here.  Ooops.

One of the things I’ve been doing since I last posted here has involved me looking at the Medical Innovation Bill – the so-called “Saatchi Bill”, after its titular sponsor.  Partly, I got interested out of necessity – Radio 4 invited me to go on to the Sunday programme to talk about it, and so I had to do some reading up pretty quickly.  (It wasn’t a classic performance, I admit; I wasn’t on top form, and it was live.  Noone swore, and noone died, but that’s about the best that can be said.)

It’s easy to see the appeal of the Bill: drugs can take ages to come to market, and off-label use can take a hell of a long time to get approval, and all the rest of it – and all the while, people are suffering and/ or dying.  It’s reasonable enough to want to do something to ameliorate the situation; and if there’s anecdotal evidence that something might work, or if a medic has a brainwave suggesting that drug D might prove useful for condition C – well, given all that, it’s perfectly understandable why we might want the law to provide some protection to said medic.  The sum of human knowledge will grow, people will get better, and it’s raindrops on roses and whiskers on kittens all the way; the Government seems satisfied that all’s well.  Accordingly, the Bill sets out to “encourage responsible innovation in medical treatment (and accordingly to deter innovation which is not responsible)” – that’s from §1(1) – and it’s main point is, according to §1(2), to ensure that

It is not negligent for a doctor to depart from the existing range of accepted medical treatments for a condition, in the circumstances set out in subsection (3), if the decision to do so is taken responsibly.

Accordingly, §1(3) outlines that

[t]hose circumstances are where, in the doctor’s opinion—

(a) it is unclear whether the medical treatment that the doctor proposes to carry out has or would have the support of a responsible body of medical opinion, or

(b) the proposed treatment does not or would not have such support.

So far so good.  Time to break out the bright copper kettles and warm woollen mittens*, then?  Not so fast.

Advance Directives, Critical Interests, and Dementia Research

14 Aug, 14 | by BMJ

Guest post by Tom Buller, Illinois State University

In my paper, “Advance Directives, Critical Interests, and Dementia Research”, I investigate whether advance directives can be applied in the context of dementia research. Consider, for the sake of argument, the following fictional case. William is a 77-year-old man who has moderate to severe dementia. When he was first diagnosed, and while still competent, he declared on many occasions that he wished to do all he could to help future sufferers of the disease and to find a cure for Alzheimer’s, and he repeatedly said that he very much wanted to participate in any clinical trials, even those that might involve hardship and risk. With the full agreement of his family, William was enrolled in a five-year clinical trial testing a new treatment for Alzheimer’s.

I think it can be legitimately argued that William has the right to make a future-binding decision to participate in the above trial, for the reasons that justify the use of such a decision in the treatment context also apply in the present research context. First, William’s beneficent desire to help future sufferers of Alzheimer’s is part and parcel of his character and what gives his life value. Second, the principle of precedent autonomy is not invalidated by the fact that the person is requesting, rather than refusing, intervention, nor by the fact that the chosen course of action requires the assistance of others. Third, William’s decision is not invalidated by the fact that it is motivated by beneficence rather than self-interest.

If this analysis is correct, then it would seem that there are good reasons to think that a competent person has the right to decide to participate in future research once competence has been lost, even research that is (significantly) greater than minimal risk.


Read the full paper online first here.

Consigned to the Index

28 May, 14 | by Iain Brassington

There’re probably times when all of us have had a solution, and just had to find a problem for it.  It’s an easy trap; and it’s one into which I suspect Gretchen Goldman may have fallen in an article in Index on Censorship about scientific freedom and how it’s under threat from disputes about Federal funding in the US.  No: I’m not going to be arguing against scientific freedom here.  Only against a certain use of the appeal to scientific freedom in response to a particular problem. First up, let’s note the points on which Goldman may well be correct.  She notes that the disputes in the US about federal funding that have led to big cuts and a short-but-total government shutdown are very bad for science.  She points out that political machinations even meant that researchers working in government-funded areas couldn’t access their emails.  This had direct and indirect consequences, all of which were pretty undesirable.  For example,

[m]any government scientists were not allowed to access email, much less their laboratories. One scientist noted that his “direct supervisor … confiscated all laptop computers on the day of the shutdown”.

Without access to work email accounts, federal scientists were also prevented from carrying out professional activities that went beyond their government job duties. Several scientists pointed out that their inability to access emails significantly slowed down the peer-review process and, therefore, journal publication.

In the wider sense, to have science and funding bodies that are vulnerable to political shenanigans isn’t good for science, and is probably not good for humanity.  You don’t have to think that research is obligatory to think that it’s often quite a good thing for science to happen all the same.  And shutdowns are particularly bad for students and junior researchers, whose future career might depend on the one project they’re doing at the moment; if a vital field trip or bit of analysis or experiment is liable to get pulled at almost any moment, they don’t have a reputation yet to tide them over.

So far, so good.  However, things are iffier elsewhere.

Journal of Medical Ethics

Analysis and discussion of developments in the medical ethics field.