
meta-analysis

Proof of equipoise

12 Nov, 12 | by Bob Phillips

In order to test a new treatment in a standard randomised controlled trial, we are ethically assumed to have ‘equipoise’: an honest uncertainty about which treatment is better, so that a patient can fairly be allocated to either the new or the old treatment. But, I hear you scoff, how can any investigator put themselves through the hell of ethical administration forms, R&D offices and the potential of an infestation of drug safety investigators without being pretty convinced that the new way is better?

Well, in true evidence-based, self-analytical fashion, a highly respected gang of investigators determined to see if equipoise had been met [1]. They undertook a systematic review of cohorts of publicly funded studies (not pharma ones) and assessed whether the new treatment was better than the old one or placebo, whichever was the comparator. They found that slightly less than half the time the new treatment was no better than the comparator, and that the new therapy very rarely offered a major advantage.

How can we use this information? Well, I think we can use it every time we face a patient and family with the option of entering a large, non-pharma RCT. We can honestly say that, looking back, we’re right about the new treatment only about half the time, and that trials are truly the only accurate way of testing treatments fairly.

Reference:

1. Djulbegovic B, et al. New treatments compared to established treatments in randomized trials. Cochrane Database of Systematic Reviews. DOI: 10.1002/14651858.MR000024.pub3

 

Secrets and lies. Truth and beauty.

30 Jun, 11 | by Bob Phillips

… and other Bohemian aphorisms …

There is a quite brilliant paper from the under-advertised PLoS One which shows how, in the area of incubation periods for respiratory disease, Truth By Citation is strikingly different from the reality of the evidence. The networks of citations demonstrate how repetition, sometimes but not always with a citation, leads to a ‘truth’ emerging which does not reflect the real picture of the evidence.

Truth, beauty, and absinthe

This paper joins a similar mass of information which demonstrates how information about prognostic biomarkers is dominated by the few studies which show remarkably strong associations, and rarely references the systematic reviews that place those studies in context.
And there is still the classic example of sudden infant death and sleeping position.

Confident in predicting? Meta analysis models step two.

27 Mar, 11 | by Bob Phillips

So, in a previous post I made a foray into the dangerous world of statistical models of meta-analysis.

Now, I’ll try hard to explain why we need to start doubting random effects meta-analysis more than we often have done.

It’s how mixed up? Meta analysis models step one.

27 Mar, 11 | by Bob Phillips

Well, I have to start with an apology. In one of these columns, I foolishly claimed that the difference between a Peto OR fixed effect meta-analysis and a DerSimonian-Laird random effects meta-analysis was pointlessly academic. It’s not.

Now, this might start getting all statistical, but there is a clear and important difference. Meta-analysis comes in two main flavours: fixed and random. It’s clinically important to understand what these things mean. Any other bits that are added, for example Peto, DerSimonian-Laird, or inverse-variance, are ways of describing exactly how the weighting of each study within the meta-analysis is done, and shouldn’t worry us too much.

Now, ‘fixed’ effects takes as an underlying truth that each of the studies in the meta-analysis gives us a glimpse of a single true ‘effect size’, and that any variation between them is through chance alone. Sometimes the results seem too mixed up – heterogeneous – for this to be true. In this setting, we could consider using ‘random’ effects.
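To see what that means in practice, here is a minimal sketch in Python – using made-up effect sizes and variances, not data from any real trials – of inverse-variance fixed-effect pooling, Cochran’s Q as a measure of how ‘mixed up’ the studies are, and the DerSimonian-Laird random-effects estimate:

```python
# A minimal sketch of fixed- vs random-effects pooling.
# All numbers are hypothetical, for illustration only.
import numpy as np

# Per-study effect sizes (e.g. log odds ratios) and their variances
y = np.array([-0.20, 0.60, 0.10, 0.90, 0.30])
v = np.array([0.02, 0.05, 0.03, 0.08, 0.04])

# Fixed effect: one true effect, weight each study by 1/variance
w = 1.0 / v
theta_fixed = np.sum(w * y) / np.sum(w)

# Heterogeneity: Cochran's Q, then the DerSimonian-Laird tau^2
Q = np.sum(w * (y - theta_fixed) ** 2)
df = len(y) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)          # between-study variance
I2 = max(0.0, (Q - df) / Q) * 100      # % of variation beyond chance

# Random effects: add tau^2 to each study's variance before weighting
w_re = 1.0 / (v + tau2)
theta_random = np.sum(w_re * y) / np.sum(w_re)

print(f"fixed: {theta_fixed:.3f}  random: {theta_random:.3f}  "
      f"tau2: {tau2:.3f}  I2: {I2:.0f}%")
```

Note how adding tau² to every study’s variance evens out the weights: big studies lose influence, small ones gain it, and the pooled estimate shifts accordingly. If tau² comes out at zero, the two answers coincide.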

Many outcomes give no answer?

14 Jul, 10 | by Bob Phillips

Some systematic reviews are confusing. Sometimes this is just poor writing style. Sometimes it’s because the techniques are difficult to grasp (meta-analytic item-response analysis, anyone?). And occasionally it’s because the data don’t seem to add up ‘right’.

FAST appraisals

7 Mar, 10 | by Bob Phillips

I’m fairly sure you’ll remember the RAMbo method of reviewing the validity of single randomised controlled trials. And so I think that many readers will have been having sleepless afternoons, struggling through the length of a ‘Users’ Guide’ checklist for systematic reviews, thinking “Which action hero can rescue me from this mire?”.

Or perhaps not.

But whichever, there is another rapid-review acronym you should all learn: FAST.

Finding the question

19 Dec, 09 | by Bob Phillips

It’s one of the tenets of the evidence-based practice process that questions are framed as ‘PICO’: patient, intervention, comparison and outcome. But what happens when the question is bigger than PICO?
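For those who like their tenets concrete, here is a minimal sketch of the PICO structure in Python – the clinical content is entirely hypothetical; the point is the shape of the question:

```python
# A minimal sketch of a PICO-structured question.
# The example content is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class PICO:
    patient: str
    intervention: str
    comparison: str
    outcome: str

    def as_question(self) -> str:
        return (f"In {self.patient}, does {self.intervention}, "
                f"compared with {self.comparison}, "
                f"change {self.outcome}?")

q = PICO(
    patient="children with condition X",   # hypothetical population
    intervention="new treatment Y",        # hypothetical intervention
    comparison="standard care",
    outcome="time to recovery",
)
print(q.as_question())
```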

New things in evidence synthesis

20 Sep, 07 | by Bob Phillips

The days of a meta-analysis being the simple adding up of lots of studies, pretending that they are all just tiny pieces of the One Big Trial that was performed before the world was made, are on their way out. Newer ways of using synthesised evidence – like meta-regression and individual patient data analysis – are coming up quickly.
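As a flavour of the first of those, here is a minimal sketch of meta-regression in Python – regressing hypothetical per-study effect sizes on a study-level covariate, with inverse-variance weights; all numbers are made up for illustration:

```python
# A minimal sketch of meta-regression via weighted least squares.
# All numbers are hypothetical, for illustration only.
import numpy as np

# Per-study effect sizes, variances, and a study-level covariate
# (e.g. mean participant age in each study)
y = np.array([0.15, 0.30, 0.45, 0.60, 0.25])
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])
x = np.array([2.0, 5.0, 8.0, 11.0, 4.0])

# Design matrix: intercept + covariate; weights = 1/variance
X = np.column_stack([np.ones_like(x), x])
W = np.diag(1.0 / v)

# Solve the weighted normal equations (X'WX) b = X'Wy
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"intercept: {b[0]:.3f}, slope per covariate unit: {b[1]:.3f}")
```

A slope that is convincingly non-zero suggests the treatment effect varies with the covariate – something the ‘One Big Trial’ view of meta-analysis can never show.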

