

How can we share treatment decisions?

3 Sep, 14 | by Bob Phillips

I guess part of me wants to start this blog with "Never knowingly topical", but in the UK an explosion of media interest, the details still unclear, has built up around decision making and a child with a brain tumour.

Those who want to can find out more via reputable news sites – as a staunch middle-class Northerner, I’ll just link to the BBC from the start of the very long story.

While much of this very difficult story seems to centre on consent and best interests, I'd like to take a more routine approach to the issue. How do we, in everyday care, make sure that our interactions with children, young people and their families share the decisions as much as possible?


Which O for PICO?

31 Aug, 14 | by Bob Phillips


We've mentioned the COMET initiative before: born from a great deal of work in rheumatology, it seeks to standardise a core set of outcomes collected in clinical trials so that each trial

  1. Measures things of importance to patients, clinicians and researchers and
  2. Provides a degree of homogeneity that makes systematic reviews more powerful

Well, those clever rheumatologists have done it again, conceptualising the whole endeavour into two major areas and breaking them up into manageable parts.


A picture paints a thousand words

20 Aug, 14 | by Bob Phillips


I'm pretty sure you've all hit something complicated and, after trying to explain it, grabbed pencil and paper and said something like "Look, you see, it's …"


And your picture may be completely unlike the thing you’re describing.


Well, hot on the heels of our Archi blog about the challenges of 'standard care' as a comparator comes a really nice way of thinking about complex variation in the studies included in systematic reviews. Admittedly, the title is a tad off-putting: "Evidence-based mapping of design heterogeneity prior to meta-analysis: a systematic review and evidence synthesis". But the idea, along with its beautiful execution in examples, is that we can use a rather neat tabular design to show where studies vary, how this might explain differences in results, and how those differences need to be understood as we translate or incorporate the outputs into clinical practice.


There’s a wealth of stuff written about visual display, and of course, an entire industry dedicated to it, but we docs do tend to ignore all that sort of stuff, don’t we?

What's your 'best' example of a great graphical representation illuminating something terribly complicated? Comment, FB us, or tweet it to @ADC_BMJ #NowIsee




What stops us getting more people into clinical trials?

1 Aug, 14 | by Bob Phillips

It may not have escaped your notice, as you travel between different areas of the hospitals in which you work, that some seem to have more clinical trial activity going on than others. Much has been written on why this might be, including a very persuasive paper* arguing that better integration of clinical care and clinical trials would reduce waste in healthcare, and a claim that trials are an ethical imperative.

Yet not an awful lot of folk are on-trial. Why?


Words, listening, and the art of applying the general to the specific

24 Jul, 14 | by Bob Phillips

A decade-old paper by @iona_heath, on the trouble with turning a patient's experience into something that might require medical fixing, caused a bit of a swirl when it was floated about Twitter recently.

The paper is densely written, has lots of lovely quotes from proper writers, and speaks of many aspects of doctoring. Its thesis is that the truth of the patient's condition is their living of it, and that as doctors we mould, warp and misrepresent it to fit into a diagnosis, reject a diagnosis, or hold it as an uncertainty. more…

StatsMiniBlog: Kappa

16 Jul, 14 | by Bob Phillips

After a short pause while brain cells were diverted elsewhere, we’re returning with the critically acclaimed (well, slightly positively tweeted) StatsMiniBlog series.

(As an aside – do let me know via comments, Facebook or Twitter if there’s an issue you’d like to see covered)

Kappa (κ) is a measure of agreement, usually between two observers of a dichotomous outcome, although there are variants for multiple observers. It gives you a measure of the agreement you see that is 'beyond chance'.
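To make the 'beyond chance' idea concrete, here's a minimal sketch of Cohen's kappa for two observers and a dichotomous outcome. The numbers in the example are invented for illustration; the formula is κ = (observed agreement − chance agreement) / (1 − chance agreement), where chance agreement comes from each rater's marginal proportions.

```python
def cohens_kappa(table):
    """Cohen's kappa from a 2x2 agreement table.

    table[i][j] = number of cases rater 1 scored i and rater 2 scored j
    (0 = negative, 1 = positive).
    """
    n = sum(sum(row) for row in table)
    # Observed agreement: both say no, or both say yes
    p_observed = (table[0][0] + table[1][1]) / n
    # Chance agreement: product of each rater's marginal proportions,
    # summed over the two categories
    p_rater1_pos = (table[1][0] + table[1][1]) / n
    p_rater2_pos = (table[0][1] + table[1][1]) / n
    p_chance = (p_rater1_pos * p_rater2_pos
                + (1 - p_rater1_pos) * (1 - p_rater2_pos))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical example: two observers rate 100 chest X-rays.
# Both say "no" in 30, both say "yes" in 40, and they disagree in 30.
table = [[30, 10],
         [20, 40]]
print(round(cohens_kappa(table), 2))  # 0.4
```

Note how 70% raw agreement shrinks to κ = 0.4 once the agreement expected by chance alone (here 50%) is stripped out.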


“Compared to standard care”

9 Jul, 14 | by Bob Phillips

There's a decent argument in the analysis of quantitative studies of therapies, particularly those using RCT designs, that says we should be looking at the totality of unbiased evidence (systematic reviews) rather than at individual, cherry-picked studies. The best estimate comes from pooling all the results: meta-analysis.

There's a challenge to this, though, when the comparisons are not quite the same. In the case of trials of drug A vs. B, C, D and E it can be quite easy to spot (and then perhaps undertake a network meta-analysis to address the issue). When the trials are A vs. standard care it's a greater challenge to see if and how "standard care" varies.  more…

The despair of the box-ticking paediatrician

1 Jul, 14 | by Bob Phillips

So, as the annual assessment of learning by paediatric trainees reached fever pitch in many areas of the UK, a question rang out across Twitter:

In (trainees approaching ARCP), does (shoehorning logbook to curriculum) compared to (reflecting on clinical experiences) improve outcomes?

And while this, I feel, is more an emotional outpouring to garner peer support, love and recognition of the need for coffee than an evidence request, there are some data supporting the use of work-based assessments and e-portfolios.


StatsMiniBlog: Rethinking meta-analysis

15 Jun, 14 | by Bob Phillips

The concept of meta-analysis was addressed previously: essentially, pulling together data from a range of different studies, assuming that they differ only by chance (or by real things as well as chance), and seeking an average effect across these differences. The maths underneath takes each study as an item and comes up with a weighted average of the effect sizes.
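As a sketch of that weighted average, here's the standard fixed-effect (inverse-variance) pooling: each study is weighted by 1/SE², so precise studies count for more. The three "trials" below are entirely hypothetical numbers for illustration.

```python
def fixed_effect_pooled(effects, std_errors):
    """Fixed-effect meta-analysis: inverse-variance weighted average.

    effects: per-study effect sizes (e.g. log odds ratios)
    std_errors: their standard errors
    Returns (pooled effect, standard error of the pooled effect).
    """
    weights = [1 / se**2 for se in std_errors]  # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical trials reporting log odds ratios:
effects = [-0.5, -0.3, -0.8]
std_errors = [0.2, 0.1, 0.4]
pooled, se = fixed_effect_pooled(effects, std_errors)
print(round(pooled, 3), round(se, 3))
```

The pooled estimate sits closest to the second study's −0.3, because its small standard error gives it by far the largest weight.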

There’s another way of looking at this: more…

When a test isn’t a test

8 Jun, 14 | by Bob Phillips

There are many reasons why we request tests, in medicine. One imaginary patient’s journey picks up a number of them.

Take a patient who presents with a painless lump on their arm, who's tired and a bit pale and washed out. You might send a series of blood tests, including a full blood count to diagnose anaemia. You may also request an ultrasound of the lump, which may show an ugly mass with features consistent with sarcoma. Your friendly local plastic sarcoma surgeon might do a biopsy for you after an MRI, and the histopathologists confirm it's a rhabdomyosarcoma.

All these tests are aimed at making a diagnosis: to clarify if the patient in front of us has, or does not have, the condition.

The oncologist who then takes up the patient’s care will move to undertake a series of further investigations; more…
