

Evidence free yet evidence based; guidelines again.

29 Sep, 15 | by Bob Phillips

In a paper that I’d have never seen if it wasn’t for Twitter, Loes Knaapen of the Université de Montréal Public Health Research Institute reports the scholarly musings on a bunch of conversations with ‘EBM’ guideline developers, attendance at conference events, and a lot of reading around the subject of Guideline Creation. At the heart of these musings is the dilemma

‘how to address the challenges of providing evidence-based advice to address questions for which the evidence is lacking, of poor quality, immature or incomplete’


Sleep tight

18 Sep, 15 | by Bob Phillips

Every so often you bump into something that you didn’t know you didn’t know. That might make a massive difference to your (or someone else’s) life.

Well, recently I was directed to this survival guide encouraging sleep as the way to survive shift working, and to do it safely and securely.

For us.

The key points are:


But what if you miss a malignancy?

15 Sep, 15 | by Bob Phillips

There’s a big push in the UK to make ‘early diagnosis’ of cancer happen more often. The assumption is that earlier diagnosis will mean the disease has not spread, is more treatable, and will lead to a better outcome.

For many conditions, the stage at presentation does indeed link to outcome. In some conditions, there’s a clear natural history that allows you to ‘catch it early’ (cervical neoplasia for example). In others, the biology doesn’t work like that, and early doesn’t mean anything (take the example of neuroblastoma screening).

But what about acute leukaemia?



8 Sep, 15 | by Bob Phillips

That was the repeated phrase of my middle child’s obsessive bedtime reading for a while. Pictures of police bikes, fire engines, ambulances, mountain rescue 4×4s and lifeboats.

In not one frame was the rescued individual entered into a clinical trial of therapy or diagnostics.

I guess that might have been asking a bit much, but is it also a bit much to ask for signed, informed consent with an appropriate time to reflect between information delivery and accession? If we worry about risk of bias in non-randomised trials, should the acuity of emergency studies make this even more important to get right?


Basics: Study Type

4 Sep, 15 | by Bob Phillips

So sometimes it’s obvious (the title says “: Randomised Controlled Trial” or “Systematic Review …”) but sometimes it’s just a bit tricky to work out what type of study you’re dealing with.

The very clever folks at the AHRQ spotted that problem too – and the inconsistency of how researchers name things – and so developed an exceptionally handy flow-chart to the Naming Of Things:


Hartling L, Bond K, Harvey K, Santaguida PL, Viswanathan M, Dryden DM. Developing and Testing a Tool for the Classification of Study Designs in Systematic Reviews of Interventions and Exposures [Internet]. AHRQ Methods for Effective Health Care. Rockville (MD): Agency for Healthcare Research and Quality (US); 2010 Dec. Report No.: 11-EHC007-EF.
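To give a flavour of what that kind of flow-chart does, here’s a minimal sketch in Python. It is an illustrative decision flow only – not the AHRQ tool itself – and the questions and design labels are my own assumptions.

# A minimal sketch (not the AHRQ tool) of how a design-classification
# flow-chart can be reduced to a handful of yes/no questions.
# The questions and labels are illustrative assumptions.

def classify_study(assigned_by_researcher: bool,
                   randomised: bool,
                   comparison_group: bool,
                   direction_backwards: bool) -> str:
    """Return a rough study-design label from four yes/no answers."""
    if assigned_by_researcher:
        if randomised:
            return "randomised controlled trial"
        return "non-randomised experimental study"
    if not comparison_group:
        return "case series / descriptive study"
    if direction_backwards:
        return "case-control study"
    return "cohort study"

if __name__ == "__main__":
    # Exposure chosen by nature, comparison group present, followed forwards:
    print(classify_study(assigned_by_researcher=False,
                         randomised=False,
                         comparison_group=True,
                         direction_backwards=False))  # -> cohort study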


Whose values?

1 Sep, 15 | by Bob Phillips

I was reading a really fascinating article about microarray-based comparative genomic hybridisation. The authors – experts in the exploration and understanding of data that looks worryingly like something from The Matrix – describe the way that such powerful genetic techniques can see what might be different about one child’s genes, and suggest groups in which the technique may be used.

When the aCGH comes back with a pathological variant that explains the diagnosis, I can see how this may alter treatment choices, make a difference to understanding prognosis (but maybe not – as we’re not sure that the ‘forme fruste’ versions always work the same as the face-slappingly-obvious ones, are we?) and give information for reproductive choices.

And if we find a ‘nothing’, then we’re also a further step into acknowledged uncertainty. But … more…

Basics: AVID

21 Aug, 15 | by Bob Phillips

The shortcut world of acronyms for critical appraisal was lacking one for diagnostic test accuracy – we have RAMbo for RCTs, FAST for systematic reviews, but what of the poor reader of studies evaluating a new test?

We know the basic idea – patients who are considered to potentially have the diagnosis in question get both the test-under-evaluation and the as-good-as-we-can-get reference standard, these are assessed without looking at the results from the other, and if there are cut-offs these are reproducible.

Wait! That’s it … more…
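For those who like to see the arithmetic behind all this, here’s a minimal sketch of the numbers a diagnostic accuracy study boils down to. The 2×2 counts are entirely made up for illustration.

# A minimal sketch of the summary numbers from a diagnostic accuracy study,
# using made-up counts for a 2x2 table (test result vs reference standard).

def accuracy_summary(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "LR+": sensitivity / (1 - specificity),
        "LR-": (1 - sensitivity) / specificity,
    }

if __name__ == "__main__":
    # Hypothetical counts: 90 true positives, 10 false negatives,
    # 20 false positives, 180 true negatives.
    for name, value in accuracy_summary(tp=90, fp=20, fn=10, tn=180).items():
        print(f"{name}: {value:.2f}")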

Basics: CASP checklists

7 Aug, 15 | by Bob Phillips

The basics of evidence based medicine are to ask a question, acquire a paper that might answer it, appraise the study, apply its results and assess performance.

The appraisal bit can be done a few different ways – but underneath nearly all of them sit the same key concepts – it’s just the gloss that varies.

But how something looks is VERY important (says my 12 year old). So you might like the look of the pictorial GATE approach, or the simplicity and rapidity of the tiny acronyms like RAMbo, FAST and AVID. Or a more leisurely series of questions, as promoted and made freely available by the CASP team, may be what you want to use to bring your appraisals into the light.

These are study-type-specific, annotated checklists of about 10 questions that step you through the key elements of evaluating bias in clinical research studies. There is a wealth of online tools and courses about the checklists, and loads of people like them.

As our recent guest blogger might say, “Have a play!”

– Archi

Predictive Factors

31 Jul, 15 | by Bob Phillips

Sometimes, we spot stuff that predicts how things will happen. Well, usually happen. These may be described as ‘risk’ factors – that is, factors which predict something will happen – or ‘prognostic’ factors – things that predict the outcome of a condition. There are a range of generalisations that are sometimes made from ‘predictive’ studies, and if you take an extremely non-medical example you may spot some of their weaknesses.

Say someone reports a study showing that a barking dog predicts a herd of small children in the kitchen. The study was done during daytime hours, in a family home on a suburban street. While the barking was a good predictor (85% of the time), it wasn’t perfect: sometimes there was a delivery driver at the door, though that was preceded by hearing the van drawing up. The authors conclude that those wishing to protect the biscuits in their kitchen should use barking dogs to warn them. more…
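If you want to see why the ‘85% of the time’ may not travel to a different setting, here’s a minimal sketch with entirely made-up numbers: the positive predictive value of the bark depends on how often there actually are children about.

# A minimal sketch, with made-up numbers, of why a predictor that performs
# well in one setting may disappoint in another: the positive predictive
# value depends on how common the predicted event is in the new setting.

def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """Probability the event is really happening, given the predictor fired."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    # Hypothetical dog: barks during 85% of child invasions, and stays
    # quiet during 90% of child-free afternoons.
    for prevalence in (0.5, 0.05):  # busy family daytime vs quiet night
        ppv = positive_predictive_value(0.85, 0.90, prevalence)
        print(f"prevalence {prevalence:.0%}: P(children | bark) = {ppv:.2f}")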

Stopping Rules

24 Jul, 15 | by Bob Phillips

If you were cycling or driving, you’d probably know what the stopping rules were. Traffic not moving, big red sign, large goose with malevolent glare (Lincolnshire speciality).

What if you’re doing a clinical trial?

There are a variety of things that have been described: some of them are qualitative (SUSARs – suspected unexpected serious adverse reactions) and some statistical. The latter come with a set of maths that leads to reasons to discontinue, either for proven benefit or for futility.
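To give a flavour of the statistical sort, here’s a minimal sketch of a Haybittle–Peto-style rule – one common approach, with made-up thresholds and interim z-values; real trials pre-specify their own boundaries in the protocol.

# A minimal sketch of a Haybittle-Peto-style statistical stopping rule.
# The thresholds and interim data below are illustrative assumptions.

INTERIM_Z_BOUNDARY = 3.0   # very stringent threshold at interim looks
FINAL_Z_BOUNDARY = 1.96    # conventional two-sided 5% level at the final look

def stop_for_benefit(z_statistic: float, is_final_look: bool) -> bool:
    """Should the data monitoring committee recommend stopping for benefit?"""
    boundary = FINAL_Z_BOUNDARY if is_final_look else INTERIM_Z_BOUNDARY
    return abs(z_statistic) >= boundary

if __name__ == "__main__":
    interim_z_values = [1.2, 2.4, 3.3]  # hypothetical interim analyses
    for look, z in enumerate(interim_z_values, start=1):
        print(f"look {look}: z = {z:.1f}, stop = {stop_for_benefit(z, False)}")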

