

The crumbling of the pyramid of evidence

3 Nov, 14 | by Bob Phillips

The ‘old way’ of thinking about the hierarchy of evidence was classically envisaged as a pyramid with the systematic review at the top, falling through RCTs, cohort and case-control studies to expert opinion (and below that, in some iterations, case law & legislative decisions).

There’s been a move against this, with the GRADE system, as explained recently in our popular guest blog: The Systematic Review Speaks The Truth – or does it?

Another example has been published in the tricky field of idiopathic scoliosis, where a group have undertaken an overview of systematic reviews. What they demonstrate, using the AMSTAR approach to assessing systematic reviews, is a huge swathe of low-quality reviews of non-surgical interventions. The conclusions of these reviews appear more likely to be ‘positive’ than those of the higher quality reviews, much as expected.

While this message is not startlingly new, it does reinforce the need to always, always appraise the evidence you are looking at. You can do it quickly. You can do it extensively. But you need to do it.

– Archi

Always question your question

30 Oct, 14 | by Bob Phillips

I was recently at a wonderful conference in Toronto, where 1900 folk interested in childhood cancer came together to learn, argue, network, present and be merry – #SIOP2014.

There was a particularly interesting debate between two very clever oncologists about whether or not we should use antifungal prophylaxis in children with AML and post-stem-cell-transplant. (Both groups are at high risk – around 10% – of developing fungal disease.) Now there are, as you probably know, two main classes of antifungals – the anti-yeast agents, and those with broader, anti-mould activity. Invasive yeast infections can be deadly, with about 25% mortality. But invasive mould infections are said to be worse – around 50% mortality.
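
To see why those numbers make the choice hard, here’s a back-of-the-envelope sketch. It only reuses the risks quoted above; the idea of pushing all the risk into one class or the other is my simplification, since the true yeast/mould split within that 10% isn’t given.

```python
# Rough expected-mortality arithmetic from the figures quoted above:
# ~10% risk of invasive fungal disease, ~25% case fatality for yeast,
# ~50% for mould. The split between yeast and mould within that 10%
# is unknown, so we look at the two extremes.
risk_of_fungal_disease = 0.10

def expected_deaths_per_1000(case_fatality, risk=risk_of_fungal_disease):
    """Expected fungal deaths per 1000 children, absent prophylaxis."""
    return 1000 * risk * case_fatality

all_yeast = expected_deaths_per_1000(0.25)  # if every infection were yeast
all_mould = expected_deaths_per_1000(0.50)  # if every infection were mould
print(all_yeast, all_mould)
```

Depending on which infections dominate, the stakes differ two-fold – which is exactly the territory the two debaters were fighting over.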

The debate centred on which class we should be prescribing. One group advised anti-mould, and one anti-yeast. They both had the same evidence to work from. Why the difference?


Guest post: The Systematic Review Speaks The Truth …… Or Does It?

27 Oct, 14 | by Bob Phillips

A good quality systematic review should identify and synthesise all the available evidence for a particular question through meta-analysis. Conclusions can then be drawn about the effect of the intervention on the outcome. As, in theory, all the available evidence is gathered and assessed, surely the conclusions from the meta-analysis must be the truth, and we can then apply this to practice?

Well… not quite. The journey from the conclusions of a systematic review to guideline development is not quite so simple. We need to assess the quality of the evidence presented and its applicability to practice.


Explosive

23 Oct, 14 | by Bob Phillips

Well, I thought that was a better title than ‘Volatility’ which, to be fair, is closer to what this meandering post is all about.

When we’re struggling our way through medicine, we have to face all sorts of uncertainties. Some of these are the frank face of ignorance (we just don’t know something); some are about the degree of chance that plays into our knowledge; some sit around the edges, where we decide which side of an imaginary line things fall; and on top of all these, we have situations where Stuff Changes. Not simply that we don’t yet know where things will end up – for example, that we haven’t worked out the diagnosis yet – but that the thing itself actually alters as we go through time.


Why not look at what you already know?

16 Oct, 14 | by Bob Phillips

A little while ago we blogged on the surprisingly varied methods folk use to pick how big an effect needs to be in order to be ‘clinically relevant’. A further paper on this theme has emerged that takes up a slightly different aspect of the challenge of getting the number right before doing a trial.

On the basics front, before you know how many people will be needed for a trial, you need to know:

  • How big an effect you might see
  • How varied the effect is between people
  • What size of effect is going to be ‘clinically relevant’ (i.e. above what level you want to prove the intervention will lie)
  • What chance of making the wrong call (“It works!” when it doesn’t, or vice versa) you are prepared to accept
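
Those four ingredients are exactly what a standard sample-size formula consumes. As a minimal sketch for comparing two means (the numbers plugged in at the bottom are illustrative, not from any real trial):

```python
import math

def n_per_arm(sigma, delta, z_alpha=1.96, z_beta=0.8416):
    """Approximate sample size per arm for comparing two means.

    sigma   - between-person standard deviation (how varied the effect is)
    delta   - smallest 'clinically relevant' difference worth detecting
    z_alpha - quantile for the accepted false-positive rate (1.96 ~ 5%, two-sided)
    z_beta  - quantile for the accepted false-negative rate (0.8416 ~ 80% power)
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative only: an SD of 10 units, wanting to detect a 5-unit difference
print(n_per_arm(sigma=10, delta=5))  # -> 63 per arm
```

Note that halving the clinically relevant difference quadruples the sample size, which is why getting that number right before the trial matters so much.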

It may be rather surprising to find that, until very recently, there hasn’t been a really well developed way of using systematic review / meta-analysis methodology to capture what we already know before moving on to find out more – for instance, when moving from phase II trials (how-toxic-is-this-and-does-it-make-markers/images-better?) to phase III (are-there-fewer-dead-people?). But now there is.


Publication bias.

2 Oct, 14 | by Bob Phillips

SO – you all know about publication bias? The fact that nasty, authoritarian journal editors, sat with their cigars, expensive brandy and well-roasted coffee, look upon trials that don’t give positive results and consign them to the pit of Rejection?

(That’s just how it happens.)

Well, there are other variants on this theme.

There’s the “we’ll only write up that outcome measure ’cause it says what we want it to show” bias (aka ‘selective outcome reporting’).

And then there’s the “can’t be arsed” bias, where studies just don’t get written up or presented at all, because their overwhelming failure to show anything leads their authors to torpor. I particularly hate systematic reviews of case reports for this trouble.

And it happens with normal people too. A really lovely piece of work shows that Amazon dieting reviews show massive publication bias, probably by self-selection, and that folk buy into believing them wholeheartedly. As PT Barnum said – “there’s one born every minute”.
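
If you want to see how brutal this filter is, here’s a toy simulation (entirely made up, purely to illustrate the mechanism): lots of small studies of a treatment with no real effect, where only the ‘positive and significant’ ones ever get written up.

```python
import random
import statistics

random.seed(0)

# 2000 small studies of a treatment whose true effect is zero.
# Each study's estimate is the true effect plus noise (standard error = 1).
true_effect = 0.0
estimates = [random.gauss(true_effect, 1.0) for _ in range(2000)]

# Publication bias: only 'positive and significant' results (z > 1.96)
# make it into print.
published = [e for e in estimates if e > 1.96]

print(round(statistics.mean(estimates), 2))   # close to 0 - the truth
print(round(statistics.mean(published), 2))   # well above 1.96 - what readers see
```

Anyone meta-analysing only the published studies would confidently ‘find’ a sizeable effect that simply isn’t there.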

– Archi

StatsMiniBlog: Spot on, time and again.

22 Sep, 14 | by Bob Phillips


“Spot on!” is a rather anachronistic and very Anglophile phrase, redolent of croquet lawns, tweeds and well-designed woven straw hats. It’s no wonder we tend to use – if we are being technical – the word “accurate” instead.

But should we be using the word “precise” to make ourselves sound all academic? And what’s the difference?


Accuracy – the closeness of a thing to its target

Precision – how close repeated attempts are to each other

Now those two things do not have to be connected – you may be accurate and imprecise, or inaccurate but very precise, or… oh, forget it.
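
The same distinction in code – a minimal sketch with made-up ‘dart throw’ numbers, where bias stands in for (in)accuracy and spread for (im)precision:

```python
import statistics

target = 0.0

# Made-up throws: set A lands all over the place but averages on the target;
# set B clusters tightly, but in the wrong place.
accurate_imprecise = [-4.0, 5.0, -3.0, 4.0, -2.0]
inaccurate_precise = [3.1, 3.0, 2.9, 3.0, 3.0]

def bias(shots):
    """Accuracy: how far the average lands from the target."""
    return abs(statistics.mean(shots) - target)

def spread(shots):
    """Precision: how close repeated attempts are to each other."""
    return statistics.stdev(shots)

print(bias(accurate_imprecise), spread(accurate_imprecise))  # no bias, wide spread
print(bias(inaccurate_precise), spread(inaccurate_precise))  # big bias, tiny spread
```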

Let me just show you a picture …

Routine data vs research expense

18 Sep, 14 | by Bob Phillips

Lots of debates could be had off this title. When is an ‘audit’ an audit, and when is it a cloaked piece of poor quality retrospective research? Why is ‘research’ considered better just because it’s ‘special’? What makes research study data forms nearly impossible to understand without spending 3 days in a steam hut wearing just a loincloth made of old patient information leaflets and drinking far too much Red Tea?

What I think it’s worth taking up, for just a bit though, is “What is routinely collected hospital data, and what is its relationship with the real world?”


A grain of sand.

15 Sep, 14 | by Bob Phillips

I am a glutton for podcasts, occasionally medical, but often way off that mark (sociology, philosophy & rugby league would fall into this category), yet they frequently play into each other. Some of you will recall this, as I note that when I can’t concentrate on a podcast, I know I’m becoming overloaded/over-worried and need to step away from stuff to regain my good mental health. Podcasts are my pants drawer.

However, my own state of mind is not the key in this entry, but an ancient philosophical problem.

The Sorites Paradox.


Top tips for detecting adverse events in paediatrics

11 Sep, 14 | by Bob Phillips

How can we determine the safety of anything we do in paediatric prescribing? For chronic conditions, we’re generally pretty sure that if we let the disease wind on, it will harm the child. If we treat it, we’ll be managing the disease but causing adverse effects. The balance lies in making this tip so that the good stuff overwhelms the poor stuff.

I think the commonest, extreme, example is chemotherapy. These agents are intended to treat a cancer to save a life. To do this, they may cause sufficient immunosuppression to produce a fatal infection, or mucosal erosions to give a fatal intestinal perforation, or a thrombotic event that produces a cerebral infarct and death. The carefully measured doses of these drugs are placed to make the tipping point fall in favour of benefit over harm; and we have improved survival in childhood cancer by this treatment approach.

