
Thinking Aloud

A Hot Take on a Cold Body

21 Nov, 16 | by Iain Brassington

It’s good to see Nils’ post about the recent UK cryonics ruling getting shared around quite a bit – so it should.  I thought I’d throw in my own voice, too.

About 18 months ago, Imogen Jones and I wrote a paper musing on some of the ethical and legal dimensions of Christopher Priest’s The Prestige.  One dimension of this was a look at the legal status of the bodies produced as a result of the “magic” trick – in particular, the haziness of whether they were alive or dead; the law doesn’t have any space for a third state.  The paper was something of a jeu d’esprit, written to serve a particular function in a Festschrift for Margot Brazier.  If I say so myself, I think it’s a pretty good paper – but it’s also meant to be fun, and is clearly rather less serious than most ethico-legal scholarship (or anything else in the book, for that matter).


Not quite “Cold Lazarus”, but close enough…

So it’s a bit of a surprise to see relevantly similar themes popping up in the news.  If we’re freezing people in the hope of curing terminal illness in the future, what’s the status of the bodies in the meantime (especially if the death certificate has been signed)?  There’s a load of questions that we might want to ask before we get too carried away with embracing cryonics.

Right from the start, there’s a question about plausibility.  For the sake of what follows, I’m going to treat “freezing” as including the process of defrosting people successfully as well, unless the context makes it clear that I mean something else.  Now, that said, the (moral) reasons to freeze people rely on the plausibility of the technology.  If the technology is not plausible, we have no reason to make use of it.  It wouldn’t follow from that that using it’d be wrong – but since the default is not to act in that way, it’s positive reasons that we need, rather than negative ones.  Neither could we really rely on the thought that we could cryopreserve someone in the hope that the freezing-and-thawing process becomes more plausible in future, because we’d have no reason to think that we’d chosen the right version of the technology.  We can only cryopreserve a person once: what if we’ve chosen the wrong technique?  How would we choose the best from an indefinitely large number of what we can at best treat as currently-implausible ones?

So how plausible is it to put a body on ice, then revive it many years later?  It’s been pointed out by some that we currently do preserve embryos without apparent ill-effect, with the implication that there’s no reason in principle why more developed humans couldn’t be frozen successfully.  However, whole humans are a wee bit more complex than embryos; it’s not at all clear that we can extrapolate from balls of a few cells to entire humans.  Even the admittedly limited experimental evidence that it’s possible to freeze whole organs won’t show us that, since we’re systems of organs.  One can accept that an organ is a system, too; but all that means is that we’re systems of systems – so we’ve squared the complexity.  And, of course, the timescales being considered here are tiny compared with the kind of timescales envisaged in cryonic fantasies.

We’re all Gonna Die… Eventually

6 Oct, 16 | by Iain Brassington

It might just be a product of the turnover of people with whom I have much professional contact, but I’ve not heard as much about human enhancement in the past couple of years as I had in, say, 2010.  In particular, there seems to be less being said about radical life extension.  Remember Aubrey de Grey and his “seven deadly things”?  The idea there was that senescence was attributable to seven basic processes; those basic processes are all perfectly scrutable and comprehensible biological mechanisms.  Therefore, the argument went, if we just put the time and effort into finding a way to slow, halt, or reverse them, we could slow, halt, or reverse aging.  Bingo.  Preventing senescence would also ensure maximum robustness, so accidents and illnesses would be less likely to kill us.  To all intents and purposes, we’d be immortal.  Some enterprising people of an actuarial mindset even had a go at predicting how long an immortal life would be.  Eventually, you’ll be hit by a bus.  But you might have centuries of life to live before that.

Dead easy.
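(For illustration – a back-of-envelope sketch of my own, not the enterprising actuaries’ actual model – suppose that senescence is abolished, and that each year carries a constant, age-independent probability p of death from extrinsic causes, buses included.  Survival is then geometric, and the expected lifespan is

\[ \mathbb{E}[\text{lifespan}] \;=\; \sum_{t=1}^{\infty} t \, p \, (1-p)^{t-1} \;=\; \frac{1}{p}. \]

With a purely illustrative p of 0.001 – a one-in-a-thousand chance of meeting that bus in any given year – the expectation works out at 1,000 years: a very long life, but a far cry from immortality.)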

I was always a bit suspicious of that.  The idea that death provides meaning to life is utterly unconvincing; but the idea that more life is always a good thing is unconvincing, too.  What are you going to do with it?  In essence, it’s one thing to feel miffed that one isn’t going to have the time and ability to do all the things that one wants to do: life is a necessary condition for any good.  But that doesn’t mean that more life is worth having in its own right.  Centuries spent staring at a blank wall aren’t made any better by dint of being alive.

But a letter published this week in Nature suggests that there is an upper end to human lifespan after all.  In essence, the demographic data seem to suggest that there’s an upper limit to survivability.  That being the case, we should stop worrying about making people live longer and longer, and concentrate on what’s going on during the 125 years or so that Dong, Milholland and Vijg think is allotted to us.

Free Labour and Quiet Doubts

1 Aug, 16 | by Iain Brassington

Those of us on the academic side of things will almost certainly recognise the situation: you’re sitting in your school’s Teaching & Learning committee, or a staff/student committee meeting, or something like that, and you hear the complaint from students that they should get more contact time.  Academics should spend more time teaching rather than simply doing their own research.  After all, they’re paying however-many thousand pounds for their education.

And you’ll’ve heard the standard rebuttals – and maybe even trotted them out yourself: that course fees cover not just teaching costs, but libraries, labs, buildings and so on; that university learning isn’t about hours in a classroom; that teaching and research are intertwined; that students benefit from being taught by the people who’re writing the papers they’re reading.  But I wonder if these standard responses miss something important.

Back in April, I was getting companionably smashed with some of my final-year students, and we were talking about what they were going to do when they’d graduated, and about possible careers.  One or two were interested in academia, and so a part of the conversation concerned what life’s like from my side of the fence.  Predictably, pay was one thing that interested them.  I mentioned that I’d made about £80 in total from the books I’ve written, spread over 10 years.
“And what do you get paid for a paper?”
I held back my bitter laughter, and explained how much you get paid for papers, and how much you get for peer-reviewing, and all the rest of it.  The students had had no idea that this stuff was expected of us, but not remunerated.  Why would they?  Indeed, isn’t it insane that we’re not paid?

I think that one gets an insight here into students’ complaints about academics’ priorities being wrong.  If they think that we get paid for publishing papers, then of course they’re going to think that we have an incentive to resist extra contact hours – and everything we tell them about extra contact hours being at best academically unnecessary, and likely as not counterproductive, will sound like so much bad faith.  After all, of course we’d tell them that a course only needs 30 hours of lectures rather than 60 if we could be earning extra money with those spare 30 hours.

What prompts all this is an article in the Chronicle of Higher Education.  It’s from 2012, but it’s started popping up in my social media timelines this morning, and Carl posted it on Fear and Loathing in Bioethics last night.  It makes a proposal.

There’s Argument, and there’s Disputation.

7 Jun, 16 | by Iain Brassington

Very well, then: let’s allow that the standard of argument in bioethics – and clinical ethics in particular – is not high.  What should be done about it?

That’s a hard question, though it’s predictable and wholly justifiable that it should be asked.  And, to be honest, I don’t know offhand.  I might have a few germs of ideas, but nothing that I’d be prepared to mention in public.  That doesn’t mean that I can’t look at other ideas, and test them out.  One such idea is mooted in this paper by Merrick et al: in essence, they propose a sort of debating competition.  They begin by explaining – with some plausibility – some of the factors that make it a bit hard to get full-blooded engagement with ethics in the medical curriculum:

As educators, we have observed additional challenges medical students face in their ethics education, which echo others’ experiences. First, because of the prodigious amount of information medical students are presented with during their first two years of training, they typically adopt a strategy of selectively reading assignments, attending large lectures, and participating in small group discussions.  In this context, ethics appears to be deprioritized, because, from the students’ perspective, it is both more demanding and less rewarding than other subjects.  Unlike other subjects, ethics requires students to reflect on their personal moral sensibilities in addition to understanding theory and becoming familiar with key topics and cases.  Yet, also unlike other courses, poor marks in ethics rarely cause academic failure, given the way performance in medical school curricula is typically evaluated.  Thus, ethics is both more demanding—because of the burdens of self-reflection—and less rewarding—because excellence in ethics does not contribute significantly to grades or test scores.

Second, medical students face challenges in how they individually conceptualize the value of ethics in the medical context.  Although many indicate that morality is important to them, they also suggest that it is a subject matter that relates to their personal, as opposed to professional, actions.  Instead, students often conflate the domains of institutional policy and health law (especially risk management and malpractice litigation) with medical ethics.  Although these domains are obviously also of essential concern for future physicians, they remain distinguishable from ethical issues likely to emerge in practice.  Consequently, rigorous and effective ethics education within the medical school context faces the challenge of distinguishing ethics from other aspects of professionalism.

Too often, ethics gets run alongside communication skills training (well, it’s all about getting informed consent, isn’t it?  Eh?  Eh?); and I’ve lost count of the number of times I’ve been asked to prepare multiple choice questions for ethics assessment.  (Standard answer: nope.  It’s got to be an essay of some sort, or it’s not worth doing.)

So what to do?  The paper, as I’ve already said, suggests a quasi-competitive debating exercise – a “medical ethics bowl” (MEB) – in which teams of students are given a problem, and a limited time to make a case in response to that problem.  An opposing team then has a limited amount of time to present a counterargument.  Then they swap roles, so the counterarguing team gets to make the argument, and the previous arguers become the counterarguers.  Judges can ask questions, and assign a score.  “The basic aim of the MEB curriculum,” the authors say,

is to help students learn how to produce and present an argument for an ethical position in response to a realistic clinical situation.

Hmmmmm.

Every now and again I get asked to help judge debating competitions – sometimes for academic institutions, sometimes for non-University bodies, sometimes for others (*cough* Instituteofideas *cough*).  I used to be happy to help out.  But I’m not so sure now.

Writers Whose Expertise is Deplorably Low

4 Jun, 16 | by Iain Brassington

Something popped up on my twitter feed the other day: this document from Oxford’s philosophy department.  (I’m not sure quite what it is.  Brochure?  In-house magazine?  Dunno.  It doesn’t really matter, though.)  In it, there’s a striking passage from Jeff McMahan’s piece on practical ethics:

Even though what is variously referred to as ‘practical ethics’ or ‘applied ethics’ is now universally recognized as a legitimate area of philosophy, it is still regarded by some philosophers as a ghetto within the broader area of moral philosophy.  This view is in one way warranted, as there is much work in such sub-domains of practical ethics as bioethics and business ethics that is done by writers whose expertise is in medicine, health policy, business, or some area other than moral philosophy, and whose standards of rigour in moral argument are deplorably low.  These writers also tend to have only a superficial understanding of normative ethics.  Yet reasoning in practical ethics cannot be competently done without sustained engagement with theoretical issues in normative ethics.  Indeed, Derek Parfit believes that normative and practical ethics are so closely interconnected that it is potentially misleading even to distinguish between them.  In his view, the only significant distinction is between ethics and metaethics, and even that distinction is not sharp.  [emphasis mine]

It’s a common complaint among medical ethicists who come from a philosophical background that non-philosophers are (a) not as good at philosophy, (b) doing medical ethics wrong, (c) taking over.  All right: there’s an element of hyperbole in my description of that complaint, but the general picture is probably recognisable.  And I don’t doubt that there’ll be philosophers grumbling along those lines at the IAB in Edinburgh in a couple of weeks.  There’s a good chance that I’ll be among them.

There’s a lot going on in McMahan’s piece, and his basic claim is, I suppose, open to the objection that, being a philosopher, he would say that, wouldn’t he?  But even if that objection is warranted, it doesn’t follow that the claim is false.  And it probably isn’t false.  There is some very low-quality argument throughout bioethics (and, from what I remember from my time teaching it, business ethics) – more particularly, in the medical ethics branch of bioethics, and more particularly still, in the clinical ethics sub-branch.  Obviously, I’m not going to pick out any examples here, but many of us could, without too much difficulty, point to papers that were simply not very good because the standard of philosophy was low.  Often, these are papers we’ve peer-reviewed, and that haven’t seen the light of day.  But sometimes they do get published, and sometimes they get given at conferences.  I’ve known people who make a point of trying to find the worst papers on offer at a given conference, just for the devilry.

It doesn’t take too much work to come up with the common problems: a tendency to leap to normative conclusions based on the findings of surveys, or empirical or sociological work; value-laden language allowing conclusions to be smuggled into the premises of arguments; appeals to vague and – at best – contentious terms like dignity or professionalism; appeals to nostrums about informed consent; cultural difference used as an ill-fitting mask for special pleading; moral theories being chosen according to whether they generate the desired conclusion; and so on.  Within our field, my guess is that appeals to professional or legal guidelines as the solutions to moral problems are a common fallacy.  Not so long ago, Julian noted that

[t]he moralists appear to be winning.  They slavishly appeal to codes, such as the Declaration of Helsinki.  Such documents are useful and represent the distillation of the views of reasonable people.  Still, they do not represent the final word and in many cases are philosophically naïve.

Bluntly: yes, the WMA or the BMA or the law or whatever might say that you ought to do x; and that gives a reason to do x, inasmuch as one has a reason to obey the law and so on.  But it’s unlikely that it’s a sufficient reason; it always remains open to us to ask what those institutions should say.  Suppose they changed their minds and insisted tomorrow that we should do the opposite of x: would we just shrug and get on with the business of undoing what we did today?

And yet…  The complaint about poor argument is not straightforward, for a couple of reasons.

Why Brits? Why India?

3 Apr, 16 | by Iain Brassington

Julie Bindel had a piece in The Guardian the other day about India’s surrogate mothers.  It makes for pretty grim reading.  Even if the surrogates are paid, and are paid more than they might otherwise have earned, there’s still a range of problems that the piece makes clear.

For one thing, the background of the surrogates is an important factor.  Bindel writes that

[s]urrogates are paid about £4,500 to rent their wombs at this particular clinic, a huge amount in a country where, in 2012, average monthly earnings stood at $215.
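A quick bit of arithmetic puts that figure in perspective (the conversion is mine, assuming 2012’s exchange rate of roughly $1.6 to the pound – the piece itself doesn’t do the sums):

\[ \pounds 4{,}500 \times 1.6 \approx \$7{,}200, \qquad \frac{\$7{,}200}{\$215 \text{ per month}} \approx 33 \text{ months}. \]

That is, a single surrogacy pays something like two and three-quarter years’ worth of average earnings.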

It’s tempting, at first glance, to look at the opportunity to be a surrogate as a good thing in this context: these women are earning, by comparative standards, good money.  But, of course, you have to keep in mind that the standard is comparative.  If your choice is between doing something you wouldn’t otherwise do and penury, doing the thing you wouldn’t otherwise do looks like the better option.  But “better option” doesn’t imply “good option”.  So there’s more to be said there; more questions to be asked.  Choosing x over y because y is more awful doesn’t mean that x isn’t.  It might be a good thing; but it might not be.  There might be economic – structural – coercion.  Choosing to become a surrogate might be a symptom of there being no better alternative.

A related question is this: are the women really making a free choice in offering their reproductive labour even assuming that the terms are economically just?  Possibly not:

I have heard several stories of women being forced or coerced into surrogacy by husbands or even pimps, and ask Mehta if she is aware of this happening.  “Without the husbands’ [of the surrogates] consent we don’t do surrogacy.”

Note (a) the non-denial, and (b) the tacit acceptance that it’s the husband’s decision anyway.  That’s not good.

(In a wholly different context, I’ve recently been reading David Luban’s Lawyers and Justice, and – in a discussion about lawyers cross-examining complainants in rape cases – he makes this point:

([H]ere we have two people who are confronted by powerful institutions from which protection is needed.  The defendant is confronted by the state [that is: in any criminal trial, the defendant does need protection from the power of the state – IB], but the victim is confronted by the millennia-long cultural tradition of patriarchy, which makes the cliché that the victim is on trial true.  From the point of view of classical liberalism, according to which the significant enemy is the state, this cannot matter. But from the point of view of the progressive correction of classical liberalism, any powerful social institution is a threat, including diffuse yet tangible institutions such as patriarchy. (p 151)

(The sentiment would seem to apply here.  A view of human agency that sees liberty as being mainly or only about avoiding state interference is likely to miss all kinds of much more subtle, insidious pressures that are liberty-limiting.  Economic factors are one such pressure.  The idea of the wife as property is another.)

I do wonder if readers of this blog might help out with answering one more question, though.

Thumbs Up for Privacy

30 Mar, 16 | by Iain Brassington

“Hey, Iain,” says Fran, a Manchester alumna, “What do you make of this?”  I won’t bother rehearsing the whole scenario described in the post, but the dilemma it describes – set out by one Simon Carley – is fairly easily summarised: you work in A&E; a patient is rolled in who’s unconscious; there’s no ID, no medic alert bracelet – in short, nothing to show who the patient is or what their medical history is; but the patient does have an iPhone that uses thumbprints as a security feature.  And it might be that there’s important information that’d be accessible by using the unconscious patient’s thumb to get at it – even if it’s only a family member who might be able to shed some light on the patient’s medical history.

It’s a potentially life-or-death call.  Would it be permissible to hold the phone to the patient’s thumb?

For those who think that privacy is a side-constraint – that is, a moral consideration that should not be violated – the answer will be obvious, and they’ll probably stop reading around about… NOW.  After all, if you’re committed to that kind of view, it’s entirely possible that the question itself won’t make a great deal of sense (tantamount to “Is it OK to do this thing that is plainly not OK?”), or at least not be worth asking.  But I don’t think that privacy is a side-constraint; I’m increasingly of the opinion that privacy is a bit of an iffy concept across the board, for reasons that needn’t detain us here, but that might be implied by at least some of what follows.  In short, I think that privacy is worth taking seriously as a consideration, but it’s almost certainly not trumps.  At the very least, that’s how I shall handle it here.  (Note here that the problem is one of privacy, not – as the OP has it – confidentiality; it’s a question about how to get information, rather than one of what you can do with information volunteered.  A minor quibble, perhaps, but one worth making.)  Even if I’m wrong about privacy in general, the question still seems to be worth asking, if only to confirm that – and why – it should not be violated.

Mature Content?

27 Feb, 16 | by Iain Brassington

There’s an aisle at the supermarket that has a sign above it that reads “ADULT CEREALS”.  Every time I see it, I snigger inwardly at the thought of sexually explicit cornflakes.  (Pornflakes.  You’re welcome.)  It’s not big, and it’s not clever: I know that.  But all these years living in south Manchester have taught me to grab whatever slivers of humour one can from life.

Anyway…  A friend’s FB feed this morning pointed me in the direction of this: a page on Boredpanda showing some of the best entries to the 2016 Birth Photography competition.  (Yeah: I know.  I had no idea, either.)

I guess that birth photography is a bit of a niche field.  The one that won “Best in Category: Labour” is, for my money, a brilliant picture.  Some of the compositions are astonishingly good – but then, come to think of it, childbirth isn’t exactly a surprise, so I suppose that if you’re going to invite someone to photograph it, they’re going to have plenty of time to make sure that the lighting is right.

A second thought that the pictures raise is this: no matter how much people bang on about the miracle of birth… well, nope.  Look at the labour picture again.  I can’t begin to express how glad I am that that’s never going to happen to me; and I’m even more convinced than I was that I don’t want to play any part in inflicting that on another person.

But my overriding response is something in the realm of astonishment that some of the pictures are blanked out as having “mature content”.

I mean… really?

R-E-S-P-E-C-T

24 Dec, 15 | by Iain Brassington

Here’s an intriguing letter from one John Doherty, published in the BMJ yesterday:

Medical titles may well reinforce a clinical hierarchy and inculcate deference in Florida, as Kennedy writes, but such constructs are culture bound.

When I worked in outback Australia the patients called me “Mate,” which is what I called them.

They still wanted me to be in charge.

Intriguing enough for me to go and have a look at what this Kennedy person had written.  It’s available here, and the headline goes like this:

The Title “Doctor” is an Anachronism that Disrespects Patients

Oooooo-kay.  A strong claim, and my hackles are immediately raised by the use of “disrespect” as a verb – or as a word at all.  (Don’t ask me why I detest that so; I don’t know.  It’s just one of those things that I will never be able to tolerate, a bit like quiche.)  But let’s see…  It’s not a long piece, but even so, I’ll settle for the edited highlights.

Assisted Dying’s Conscience Claws

11 Sep, 15 | by Iain Brassington

Aaaaaaaand so the latest attempt to get assisted dying of some sort onto the statute books in the UK has bitten the dust.  I can’t say I’m surprised.  Watching the debate in the Commons – I didn’t watch it all, but I did watch a fair chunk of it – it was striking just how familiar the arguments produced by both sides were.  It’s hard to shake the feeling that, just as is the case with the journals, the public debate on assisted dying has become a war of attrition: no-one has much new to say, and in the absence of that, it’s simply a matter of building up the numbers (or grinding down the opposition).  The Nos didn’t win today’s Parliamentary debate because of any dazzling insight; the Ayes didn’t lose it because their speakers were measurably less impressive than their opponents’.  If the law does change in the UK, I’d wager that it’ll be because of demographic brute force rather than intellectual fireworks.

(Every now and again I hear a rumour of someone having come up with a new approach to assisted dying debates… but every now and again I hear all kinds of rumours.  I live in hope/fear: delete as applicable.)

Still, I think it’s worth spending a little time on one of the objections that’s been raised over the last couple of days to this Bill in particular; it’s an objection that was raised by Canon Peter Holliday, the Chief Executive of a hospice in Lichfield:

In an interview with the Church of England, Canon Holliday said: “If there is no possibility within the final legislation for hospices to opt out of being a part of what is effectively assisted suicide, then there is nervousness about where our funding might be found in the future. Would the public continue to support us and indeed would the NHS continue to give us grants under contract?”

Canon Holliday said the Assisted Dying Bill also contains no opt out for organisations opposed to assisted suicide in spite of high levels of opposition to a change in the law amongst palliative care doctors. Where hospices did permit assisted suicide the potential frictions amongst staff could be ‘enormous’ with possible difficulties in recruiting doctors willing to participate, he said.

“The National Health Service requires us, in our contracts, to comply with the requirements of the NHS. Now if the NHS is going to be required to offer assisted dying there is of course the possibility that it would require us or an organisation contracting with the NHS also to offer assisted dying. If we as an organisation were able, and at the moment under the terms of the bill there is no indication we would be able, but if we were able to say that assisted dying was not something that would happen on our premises, would that prejudice our funding from the NHS?”

Is this worry well-founded?
