

Critical interventions

5 May, 15 | by Bob Phillips

There are a considerable number of interventions undertaken at points of emergency: severe head injury, severe septic shock, myocardial infarction, admission to intensive care units… In these situations it can be extremely tricky to get the critically ill, often unconscious, individual to agree to being randomised in a clinical trial. Yet without that, we won’t know what treatment to give. Or not give.

But surely we should just use common sense?

Like oxygen for myocardial infarction?

Or can we undertake “deferred consent” – a rather odd phrase which means seeking consent to use the data collected after a patient has, because of a critical care emergency, already been entered into a randomised trial?

Parents on NICU rounds

1 May, 15 | by Bob Phillips

Does your neonatal unit have parents present when you’re doing medical rounds? Would that be a good thing? (Or, if you already do it, is that a bad, limiting thing?) Could the presence of parents inhibit honest medical discussion? Could it compromise confidentiality? Might the opportunities for bedside teaching be severely reduced? Could the stress of hearing the discussions be excruciating for the parents? Will the inclusion of parents in a ward round discussion bring about greater trust, and make it truly inclusive? Will it allow a deeper understanding of the dilemmas faced on both sides? And how much will it vary between parents?

Thinking about all those possibilities makes the idea of trying to investigate the question “Should parents be present on neonatal ward rounds?” rather difficult to frame. For instance, what outcomes are important, and how could they be measured?


Basics: Rapid Reviews

28 Apr, 15 | by Bob Phillips

Systematic reviews in health care aim to answer a specific, highly structured, clinical question by extensive searching, careful sifting and appraisal of the studies, a considered synthesis and well tempered conclusions. They can take very many months – 18 or more – to complete.

If we insist on full systematic reviews to provide the very best estimates of effect, we’ll be waiting a long time to get there. What we might – practically – be better off doing is a ‘good enough’ review; still focussed, still systematic and still synthetic, but quicker.

This is the realm of the rapid review: a not-quite-defined type of systematic review that’s quicker, perhaps a little more focussed, sets clearer boundaries, and is well prepared to make every piece fall into place one after another. It turns around an answer quickly enough to matter, while still being good enough to make a difference.

Of course, you might recognise this type of description when you think about Archimedes reports…But Archimedes reports are a bit briefer in searching, and rarely undertake a formal synthesis, so not quite in this category.

Predicting IVIg Resistance

24 Apr, 15 | by Bob Phillips

It would be nice, wouldn’t it, if we could work out which patients would not benefit from an intervention, in order to a) not use it and b) use something (probably more toxic) instead? It’s a frequent thought of mine, as an oncologist, when I sign off another chemotherapy chart with multiple agents on it.

I know that others have the problem too – for instance, those deciding how to treat patients with Kawasaki disease. For some patients the usual treatment of high-dose immunoglobulin is ineffective at preventing coronary artery aneurysm formation. Clinical prediction rules have been developed for this, and in Japanese populations the Kobayashi score is reputed to be effective. The disease does appear to differ across the world, though, and it’s always worth confirming that prediction models do work in different areas.
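Checking a rule in a new population comes down to re-measuring its discrimination. A minimal sketch, with an entirely invented cohort and a made-up “score ≥ 5 predicts resistance” cut-off standing in for any real rule:

```python
def validate_rule(patients, threshold=5):
    """Return (sensitivity, specificity) of `score >= threshold`
    for predicting treatment resistance in this cohort."""
    tp = sum(1 for p in patients if p["score"] >= threshold and p["resistant"])
    fn = sum(1 for p in patients if p["score"] < threshold and p["resistant"])
    fp = sum(1 for p in patients if p["score"] >= threshold and not p["resistant"])
    tn = sum(1 for p in patients if p["score"] < threshold and not p["resistant"])
    return tp / (tp + fn), tn / (tn + fp)

# Invented validation cohort -- not real Kawasaki disease data.
cohort = [
    {"score": 7, "resistant": True},
    {"score": 6, "resistant": True},
    {"score": 3, "resistant": True},   # missed by the rule
    {"score": 2, "resistant": False},
    {"score": 4, "resistant": False},
    {"score": 6, "resistant": False},  # false alarm
]
sens, spec = validate_rule(cohort)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

If sensitivity or specificity falls well below the figures reported in the derivation population, the rule hasn’t travelled.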

Basics: Intention To Treat

21 Apr, 15 | by Bob Phillips

The principle of an ‘intention to treat’ analysis is that the participants in a randomised trial are analysed in the group to which they were randomised, regardless of what treatment they received. So in a hypothetical trial of salbutamol vs. aminophylline infusion for severe asthma, regardless of what the child got, they are placed in their ‘you should have’ group…

The concept comes from the core of RCT philosophy – that chance has settled all prognostic factors evenly between the two* arms – and so the only reasonable way of preserving this is to analyse the outcomes according to this sorting.

The consequence is that if some folk in the ‘intervention’ arm don’t actually get the intervention (e.g. allocated a salbutamol infusion, but their K+ was falling before it started), the observed effect of the drug is diluted. This can feel ‘unfair’.

But wait. Pragmatic RCTs – trials of treatments as we actually use them – test an intervention. They test not ‘salbutamol infusion’ but the approach, which might be characterised as ‘we should use salbutamol infusions for pts unless it’s clear they need something different … like PICU … now … can someone ring 2222 please …’

If there are lots of deviations, crossovers and non-receipts of the allocated intervention, it’s very important to look at why. The way we were proposing to deliver ‘the intervention’ clearly doesn’t work in practice, so it needs reassessing – not necessarily having the ‘treatment’ element thrown out.
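The contrast between the two analyses can be made concrete with a toy sketch (entirely invented data): the same trial records, analysed once by the arm each child was randomised to and once by the treatment actually received.

```python
# Each record is (randomised_arm, received_arm, improved) -- invented data.
records = [
    ("salbutamol",    "salbutamol",    True),
    ("salbutamol",    "salbutamol",    True),
    ("salbutamol",    "aminophylline", False),  # crossed over (e.g. K+ falling)
    ("salbutamol",    "salbutamol",    False),
    ("aminophylline", "aminophylline", True),
    ("aminophylline", "aminophylline", False),
    ("aminophylline", "aminophylline", False),
    ("aminophylline", "salbutamol",    True),   # crossed over
]

def response_rate(records, arm, key):
    """Proportion improved in `arm`, grouping by randomised
    treatment (key=0) or received treatment (key=1)."""
    in_arm = [r for r in records if r[key] == arm]
    return sum(r[2] for r in in_arm) / len(in_arm)

# Intention to treat: the 'you should have' group, whatever was received.
itt = response_rate(records, "salbutamol", key=0)
# 'As treated': grouped by what was actually given.
as_treated = response_rate(records, "salbutamol", key=1)
print(f"ITT salbutamol response: {itt:.2f}")
print(f"As-treated salbutamol response: {as_treated:.2f}")
```

In this made-up example the as-treated estimate looks rosier than the ITT one, precisely because the crossovers were the sicker children – the prognostic balance randomisation bought is broken as soon as you regroup by treatment received.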

– Archi

* OK – so it could be three, four etc arms. It’s just that two is easier to think about. And commonerer.

Bunging up the flow

17 Apr, 15 | by Bob Phillips

I was intrigued to see the meta-analysis of diosmectite in acute diarrhoea appear in the Arch Dis Child recently – partly ’cause I’d no idea what diosmectite was, and partly because I spend a lot of my time with folk who poo too little or too much.

When taking a look at a systematic review, it’s worth using a FAST appraisal schema, but starting by identifying the PICO question that the review seeks to answer.


How do you add up if there are no numbers: Qualitative Synthesis

10 Apr, 15 | by Bob Phillips

Regular readers of this blog will know of its penchant for systematic review techniques (evidenced in the recent I-squared blog). The process of qualitative synthesis uses many of those familiar methods – defining a clear question, systematic literature searching, selecting appropriate research and assessing the risk of bias. Following this, however, qualitative syntheses begin to look really quite different – mostly because there are no nice numbers to add up to give ‘the answer’, but also because they are just not written in a language we understand (read the qualitative research blog series to help with this)!

So how on earth do we go about reading a qualitative synthesis and deciding whether it’s any good?

Well, instead of reinventing the wheel, we can just modify our FAST assessment:

What about mixedupness?

27 Mar, 15 | by Bob Phillips

The subject of heterogeneity (mixed~up~ness) in systematic reviews is tricky. A bit like ‘significance’, you can think about it as both a clinical and a statistical concept, and, in the same way, you can get results that aren’t always concordant.

Many old lags will remember a blog post about a statistically significant association between platelets and renal involvement in HSP. In that case there was an association unlikely to be due to chance, but clinically irrelevant.

The same questions need to be asked of heterogeneity between the studies in a review: is it statistically detectable, and does it matter clinically?
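The statistical half of that question is usually summarised with Cochran’s Q and the I² statistic mentioned above. A minimal sketch of the arithmetic, using invented effect estimates from four hypothetical studies:

```python
def i_squared(effects, ses):
    """Cochran's Q and I-squared for study effect estimates
    and their standard errors, using fixed-effect weights."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: proportion of variability beyond what chance (df) would predict.
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Invented log-odds-ratio estimates from four small studies;
# the fourth is an outlier, driving the heterogeneity.
q, i2 = i_squared([0.2, 0.3, 0.25, 0.9], [0.1, 0.12, 0.15, 0.2])
print(f"Q = {q:.1f}, I^2 = {i2:.0f}%")
```

A high I² tells you the studies disagree more than chance allows; it doesn’t tell you whether the disagreement matters clinically – that judgment is yours.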


When is enough enough?

20 Mar, 15 | by Bob Phillips

I know that’s a tricky question, and may make you think of cream pouring on apple crumble, discussions about chemotherapy, or episodes of Octonauts depending on exactly what frame of mind you’re in and background you have.

Within a research setting, however, how do we decide when something has been researched so much, with folk repeatedly finding no or minimal effect, that we should just give it up? It doesn’t work (enough). This is a key decision, and it relies on a mixture of elements.
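One of those elements can be sketched numerically: cumulative meta-analysis, pooling the trials in chronological order and watching the confidence interval settle. The studies below are invented for illustration; the pooling uses simple fixed-effect weights.

```python
# (year, effect, standard error) for hypothetical trials, oldest first.
studies = [
    (2001, 0.30, 0.25),
    (2005, 0.05, 0.20),
    (2009, -0.02, 0.15),
    (2013, 0.01, 0.10),
]

def cumulative_pool(studies):
    """Fixed-effect pooled estimate and 95% CI after each successive study."""
    results = []
    for k in range(1, len(studies) + 1):
        subset = studies[:k]
        weights = [1 / se**2 for _, _, se in subset]
        pooled = sum(w * e for w, (_, e, _) in zip(weights, subset)) / sum(weights)
        se_pooled = (1 / sum(weights)) ** 0.5
        ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
        results.append((subset[-1][0], pooled, ci))
    return results

for year, pooled, (lo, hi) in cumulative_pool(studies):
    print(f"up to {year}: effect {pooled:+.2f} (95% CI {lo:+.2f} to {hi:+.2f})")
```

When the cumulative interval has narrowed around no meaningful effect, each further trial adds precision to an answer we already have – one argument, among the “mixture of elements”, for stopping.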


When did you last ask about the manufacturer?

9 Jan, 15 | by Bob Phillips

It’s been a week of finding out things I didn’t know I didn’t know about. iCarly, for one. Life expectancy in young people with deliberate self harm for another. And fake medicines.


Education, debate, and meandering thoughts on child health, using evidence and research.
