
critical appraisal note

Can our children’s trials work better than they do?

13 Jun, 17 | by Bob Phillips

We’re all well aware of the problems of doing randomised clinical trials in paediatrics – small numbers, uncertainty about sample size estimates, lack of funding to undertake the studies – but are we as aware of some alternative approaches that have been used [1]?

“Sequential design” studies compare a series of treatments against each other, switching to the ‘better’ arm and comparing it against the next candidate as time progresses… They need quickly and easily available outcomes and tend to be usable only for short-course treatments… but they’ve been estimated to reduce sample sizes by about 25%.
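The flavour of that sample-size saving can be shown with a toy simulation. The design below is a generic group-sequential trial with a single interim look, not any of the specific sequential designs in the paper; the effect size, boundaries, and all numbers are invented for illustration.

```python
import math
import random

random.seed(42)

def run_trial(n_per_arm, effect=0.3):
    """One toy trial on paired differences ~ N(effect, 1), with a single
    interim look at half the planned sample. It stops early for benefit if
    the z-statistic crosses a deliberately stricter interim threshold."""
    diffs = [random.gauss(effect, 1) for _ in range(n_per_arm)]
    for look, n in ((1, n_per_arm // 2), (2, n_per_arm)):
        z = (sum(diffs[:n]) / n) * math.sqrt(n)
        threshold = 2.8 if look == 1 else 1.97  # invented boundaries
        if z > threshold:
            return n  # stopped early: only n patients per arm were needed
    return n_per_arm

sizes = [run_trial(200) for _ in range(2000)]
avg = sum(sizes) / len(sizes)
print(f"average patients per arm: {avg:.0f}, vs 200 in a fixed design")
```

With these made-up numbers the average sample size comes out well below the fixed 200 per arm — the same general mechanism behind the roughly 25% saving quoted above.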


Cases and controls

18 Dec, 16 | by Bob Phillips

I’ve noticed that there are a fair few phrases in the world whose actual meaning can be unclear or uncertain, or interpreted differently by different folk. Take “maybe later” when used by parent to child – it clearly means “no” to the parent and “yes, but not now” to the child. Or “Brexit”.

But the world of science can’t be confused …  can it?

Just take a gander through the field of “case control” titled studies and you may be upset to discover that it can. Now, I am fairly clear that what I mean by case/control is a design where the participants are chosen because they have developed (cases) or haven’t got (controls) the OUTCOME of interest – they died, developed neuroblastoma, or were excluded from school. The analysis is then about finding out whether these groups had different levels of exposure to a proposed causative factor, such as blood transfusions, bacon, or X-factor viewing.

What is not a case control study is one where the groups are chosen for the exposure to a treatment or not. This is a comparative cohort study.
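Because cases and controls are sampled on outcome, the natural effect measure for this design is the odds ratio rather than a risk. A minimal sketch, with an entirely invented 2×2 table:

```python
def odds_ratio(exposed, unexposed):
    """Odds of exposure among cases divided by odds of exposure among
    controls: (a*d)/(b*c) for the standard 2x2 table."""
    a, b = exposed      # a = exposed cases,   b = exposed controls
    c, d = unexposed    # c = unexposed cases, d = unexposed controls
    return (a * d) / (b * c)

# Hypothetical numbers: 100 cases and 100 controls, asked about one exposure.
print(odds_ratio(exposed=(30, 10), unexposed=(70, 90)))  # ≈ 3.86
```

Note that you cannot read a risk or rate off this table, because the investigator fixed the ratio of cases to controls by design — only the odds ratio survives that sampling scheme.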

Now, as is so often the case when appraising papers, it sometimes doesn’t matter what the authors have written. It’s what they did that counts – so discount their title if the design doesn’t fit it.

– Archi


Basics: Blame it on me

11 Mar, 16 | by Bob Phillips

In my clinical role, it’s fairly easy to take the blame for most bad things that happen to my patients. I give them cytotoxic chemotherapy (for good reason, honest) and it’s a group of substances that we label with TERATOGENIC! HARMFUL! QUITE BAD FOR YOU! tags a lot of the time.

But how do we know, in most circumstances, if the drug/potion/puffer etc. is the cause of something adverse?

Get it straight from the start

24 Feb, 13 | by Bob Phillips

Over more than a decade, Archimedes has presented clinical queries and the appraisal of the evidence that emerges, leading on to a clinical conclusion to the dilemma. What is strikingly common is that many questions start in a muddle, and a failure to get an ‘evidence-based answer’ might be a failure to ask an accurate question.

A recent trans-disciplinary teaching session had one anaesthetist summarise the whole of EBM question formulation as “Does drug A, compared to drug B, make outcome X happen more or less in patient group Z?” … a brilliant rearrangement of the ‘PICO’ formula into Intervention, Comparator, Outcome, Patients. Now, if your question doesn’t fit this, or can only be fudged to fit it, then you need to unpack it.


Proof of equipoise

12 Nov, 12 | by Bob Phillips

In order to test a new treatment in a standard randomised controlled trial, we are ethically assumed to have ‘equipoise’: an honest uncertainty about which treatment is better, which justifies a patient having the same chance of being allocated to the new or the old treatment. But, I hear you scoff, how can any investigator put themselves through the hell of ethics administration forms, R&D offices and the potential of an infestation of drug safety investigators without being pretty convinced that the new way is better?

Well, in true evidence-based, self-analytical fashion, a highly respected gang of investigators determined to see if equipoise had been met [1]. They undertook a systematic review of cohorts of publicly funded studies (not pharma ones) and assessed whether the new treatment turned out better than the old one or placebo, whichever was the comparator. They found that in about half of trials the new treatment was no better than the comparator, and the new therapy was only very rarely a major advantage.

How can we use this information? Well, I think we can use it every time we face a patient and family with the option to enter a large, non-pharma, RCT. We can honestly say that, looking back, we’re right with the new treatment only half the time and that trials are truly the only accurate way of testing treatments fairly.


1. Djulbegovic B, et al. New treatments compared to established treatments in randomized trials. Cochrane Database of Systematic Reviews. DOI: 10.1002/14651858.MR000024.pub3


Cracking the mould

12 May, 12 | by Bob Phillips


While Archimedes does, not infrequently, get all concerned about invasive fungal infections, this post is not about beta-D-glucan testing, or the problems of azole interactions. Instead, it’s a swipe at the problem of how, given a transparent system of asking questions, acquiring information, and appraising the evidence, we can come to such different conclusions when we get to applying it. Why do we find it so tricky to break our clinical practice mould?

Tarnished gold

20 Jan, 12 | by Bob Phillips

What can you do when a ‘gold standard’ isn’t actually that good at diagnosing a condition? It can be terribly problematic in interpreting sensitivity and specificity – for example, comparing polymerase chain reaction diagnosis of microbiological infection with culture results. The ‘false positives’ may actually reflect real, and otherwise missed, diagnoses, and the ‘false negatives’ may reflect the old standard identifying as a case someone who isn’t really unwell.
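One way to see the problem is simulation: give a new test genuinely better accuracy than the old standard, then score it against that imperfect standard. All the accuracies and the prevalence below are invented purely for illustration.

```python
import random

random.seed(1)

# Invented accuracies: the reference ('culture') misses 30% of true cases,
# while the new test ('PCR') is genuinely better.
N = 100_000
prevalence = 0.10
culture_sens, culture_spec = 0.70, 0.99
pcr_sens, pcr_spec = 0.95, 0.99

tp = fp = fn = tn = 0  # PCR scored against culture, not against the truth
for _ in range(N):
    infected = random.random() < prevalence
    culture_pos = random.random() < (culture_sens if infected else 1 - culture_spec)
    pcr_pos = random.random() < (pcr_sens if infected else 1 - pcr_spec)
    if pcr_pos and culture_pos:
        tp += 1
    elif pcr_pos:
        fp += 1
    elif culture_pos:
        fn += 1
    else:
        tn += 1

apparent_sens = tp / (tp + fn)
apparent_spec = tn / (tn + fp)
print(f"true PCR sensitivity: {pcr_sens}; apparent sensitivity vs culture: {apparent_sens:.2f}")
print(f"apparent specificity vs culture: {apparent_spec:.2f}")
```

With these made-up numbers the ‘apparent’ sensitivity comes out well below the true 0.95, and most of PCR’s ‘false positives’ are in fact real infections that culture missed — exactly the distortion described above.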

Short-cuts to effectiveness information

15 Jan, 12 | by Bob Phillips

A while ago Archimedes reviewed the benefits of using ‘pre-appraised’ search resources – short-cuts to the best methodological-quality evidence to answer clinical questions. The favoured database of many, PubMed, has now received a new addition to the range of resources on offer.


Slice, DICE and eventually something will happen

17 Sep, 11 | by Bob Phillips

Did you know that aspirin following MI doesn’t work for those with Gemini and Libra star signs?

No, it’s true*. The ISIS-2 trial, which demonstrated the mortality benefits of anti-platelet agents after myocardial infarction with p<0.00001, only showed benefit for people born under ten of the twelve signs of the zodiac. So if you believe statistics, and randomised trials, then you could save 1/6th of the antiplatelet bill by not giving it to this lot.
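The zodiac result is pure multiplicity: test enough arbitrary subgroups of a null comparison and something will cross p<0.05 sooner or later. A sketch with made-up numbers (not ISIS-2 data), using a normal-approximation test for two proportions:

```python
import math
import random

random.seed(0)

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for a difference in proportions (normal approximation)."""
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(x1 / n1 - x2 / n2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# A null trial (treatment does nothing: 10% event rate in both arms),
# analysed in 12 arbitrary 'zodiac' subgroups of 300 patients per arm.
n_sims = 200
trials_with_false_hit = 0
for _ in range(n_sims):
    hit = False
    for _sign in range(12):
        n = 300
        x_t = sum(random.random() < 0.10 for _ in range(n))
        x_c = sum(random.random() < 0.10 for _ in range(n))
        if two_prop_p(x_t, n, x_c, n) < 0.05:
            hit = True
    trials_with_false_hit += hit
print(f"{trials_with_false_hit / n_sims:.0%} of null trials show a 'significant' subgroup")
```

With 12 independent looks at a 5% significance level, you would expect roughly 1 − 0.95¹² ≈ 46% of entirely null trials to throw up at least one ‘significant’ star sign.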

Secrets and lies. Truth and beauty.

30 Jun, 11 | by Bob Phillips

… and other Bohemian aphorisms …

There is a quite brilliant paper from the under-advertised PLoS One which shows how, in the area of incubation periods for respiratory disease, Truth By Citation is quite strikingly different from the reality of the evidence. The networks of citations demonstrate how repetition, sometimes but not always with a citation, leads to a ‘truth’ emerging which does not reflect the real picture of the evidence.

Truth, beauty, and absinthe

This paper joins a similar mass of information demonstrating how information about prognostic biomarkers is dominated by the few studies showing remarkably strong associations, which rarely reference the systematic reviews that place them in context.
And there is still the classic example of sudden infant death and sleeping position.
