17 Apr, 15 | by Bob Phillips
I was intrigued to see the meta-analysis of diosmectite in acute diarrhoea appear in the Arch Dis Child recently – partly ’cause I’d no idea what diosmectite was, and partly because I spend a lot of my time with folk who poo too little or too much.
When taking a look at a systematic review, it’s worth using a FAST appraisal schema, but starting by identifying the PICO question that the review seeks to answer.
14 Apr, 15 | by Bob Phillips
The revised Royal College of Paediatrics and Child Health guidance on making decisions to limit treatment in life-limiting and life-threatening conditions in childhood has just been published. It provides an ethical and legal framework for practising clinicians, revised to reflect changes in the scope and availability of advanced technology, and in the emphasis and application of ethical and legal principles in decision making.
The document sets out the circumstances under which withholding or withdrawing life-sustaining treatment might be ethically permissible. In particular, it describes situations in which individual children should be spared inappropriate invasive procedures. The document sets out three sets of circumstances in which treatment limitation can be considered:
- because it is no longer in the child’s best interests to continue
- because it cannot provide overall benefit – firstly when life is limited in quantity, secondly when life is limited in quality
- because of an informed, competent refusal of treatment.
The document covers the ethical and legal framework, the process of decision making, and the practical aspects in detail. It is a very powerful document which will help professionals and families of children with complex medical disorders in their desire and responsibility to act in the best interests of the child.
We would like to hear your responses to the document: as considered comments on this blog, on our Facebook page, or (if you’re really concise) via Twitter.
10 Apr, 15 | by Bob Phillips
Regular readers of this blog will know of its penchant for systematic review techniques (evidenced in the recent I-squared blog). The process of qualitative synthesis uses many of those familiar methods – defining a clear question, systematic literature searching, selecting appropriate research and assessing the risk of bias. Following this, however, qualitative syntheses begin to look really quite different – mostly because there are no nice numbers to add up and give ‘the answer’, but also because they are just not written in language we understand (read the qualitative research blog series to help with this)!
So how on earth do we go about reading a qualitative synthesis and deciding whether it’s any good?
Well, instead of reinventing the wheel, we can just modify our FAST assessment:
7 Apr, 15 | by Bob Phillips
No, not −1, the self-multiplication of that fancy imaginary number that helps aircraft designers make wings work properly, but a (semi-)quantitative assessment of how much heterogeneity there is in a meta-analysis: I².
You’ll recall that the idea of heterogeneity (mixed-up-ness) comes in both statistical and clinical flavours. This measure – I² – assesses the statistical aspect. It’s often to be found at the bottom of a forest plot, near some other numbers (Tau² and Chi²).
The principle of I² is straightforward – it gives you an idea of the ‘percentage of variation which is beyond what you’d expect by chance alone’. It can be interpreted, approximately*, like this:
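The arithmetic behind I² can be sketched in a few lines of Python – a minimal illustration only, not the output of any meta-analysis package. It computes Cochran’s Q from inverse-variance weights, then expresses the variation beyond chance as a percentage:

```python
def i_squared(effects, variances):
    """Cochran's Q and I-squared for a set of study effect estimates
    (e.g. log odds ratios) and their variances (illustrative sketch)."""
    weights = [1.0 / v for v in variances]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I²: share of variability beyond chance alone, floored at zero
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

With two wildly discordant, tightly estimated studies I² comes out near 100%; with identical effect estimates it is 0% – which matches the ‘percentage of variation beyond chance’ reading above.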
3 Apr, 15 | by Ian Wacogne
“Hello? Is that the school? Yes, hi, it’s Ian here. I’m one of the dads. Anyway, I just thought I’d tell you that I’ve sent the kids to school with peanut brittle today. Yes, that’s right, peanut brittle. Yes, that does contain nuts. OK, thanks – bye!”
(Some minutes later a SWAT squad paid a visit to my house and my children’s classrooms, and removed the nut containing products in a container previously used for handling radioactive waste.)
31 Mar, 15 | by Bob Phillips
If you want to know who does, and who does not, need a bone marrow biopsy to detect malignant infiltration if the patient has rhabdomyosarcoma, you might want to start by taking a very large cohort of patients who had RMS and had a load of tests, including marrows. Then construct a decision tree that settles on identifying the group without marrow disease.
One (very good) stats process to do this is called ‘recursive partitioning’. It does what it says – splits the group up (partitions it) and then splits those groups up (recursively) until you’ve got a ‘good enough’ answer. Where the split is placed is decided by calculating and recalculating a threshold value, seeing how well that discriminates, and then moving on. For simple dichotomous measures this ‘threshold finding’ is incredibly easy as there are only two categories…
Now, how you decide what ‘good enough’ is is a matter of clinical judgement (e.g. what % of BM +ve patients would you miss – 2%?), and that’s worth a few arguments.
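A toy version of that partitioning loop might look like this in Python – a sketch only, with a hypothetical dichotomous feature name (`metastatic`) and a made-up purity threshold standing in for the clinical ‘good enough’ judgement (here, at least 98% marrow-negative in a node):

```python
def partition(patients, features, min_purity=0.98):
    """Recursively split patients on dichotomous features until a node is
    'pure enough' (share of marrow-negative patients >= min_purity).
    Each patient is a dict of boolean features plus 'bm_positive'."""
    neg = sum(1 for p in patients if not p["bm_positive"])
    if not features or not patients or neg / len(patients) >= min_purity:
        rate = neg / len(patients) if patients else 0.0
        return {"n": len(patients), "bm_negative_rate": rate}

    def weighted_impurity(feature):
        # How well does splitting on this feature separate BM+ from BM-?
        total = 0.0
        for value in (True, False):
            group = [p for p in patients if p[feature] == value]
            if group:
                pos = sum(p["bm_positive"] for p in group) / len(group)
                total += len(group) / len(patients) * 2 * pos * (1 - pos)  # Gini
        return total

    best = min(features, key=weighted_impurity)  # best discriminator
    remaining = [f for f in features if f != best]
    return {"split_on": best,
            "yes": partition([p for p in patients if p[best]], remaining, min_purity),
            "no": partition([p for p in patients if not p[best]], remaining, min_purity)}
```

The recursion stops either when a node is ‘good enough’ or when the features run out – and where you set `min_purity` is exactly the ‘what % of BM +ve patients would you miss’ argument above.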
27 Mar, 15 | by Bob Phillips
The subject of heterogeneity (mixed-up-ness) in systematic reviews is tricky. A bit like ‘significance’, you can think about it as both a clinical and statistical concept, and in the same way, you can get results that aren’t always concordant.
Many old lags will remember a blog post about a statistically significant association between platelets and renal involvement in HSP. In that case, there was a statistical association that was unlikely to be due to chance, but was clinically irrelevant.
The same queries need to be asked of heterogeneity within studies.
24 Mar, 15 | by Bob Phillips
So now to go back to one of the big questions from the first blog of this series – ‘How are you even supposed to tell if a qualitative paper is any good when there are no power calculations, blinding or difficult stats?’ Hopefully, if you’ve been reading through each blog, you might have begun to realise that there are different and valid ways to perform qualitative research. It therefore follows that there are different indicators of quality.
For this blog I’ll outline three main ones:
- a solid theoretical background
- scientific rigour
20 Mar, 15 | by Bob Phillips
I know that’s a tricky question, and may make you think of cream pouring on apple crumble, discussions about chemotherapy, or episodes of Octonauts, depending on exactly what frame of mind you’re in and what background you have.
Within a research setting, however, how do we decide that something has been researched so much, with folk repeatedly finding no or minimal effect, that we should just give it up? It doesn’t work (enough). This is a key decision to be made, and it relies on a mixture of elements.
17 Mar, 15 | by Ian Wacogne
I’ve been thinking about this for quite a long time now, and this seems like a good time. I’ve spoken about this any number of times with students in clinic, and with doctors in training. The thing is, as soon as they hear me mention Terry Pratchett, I get the judgement. Or, to be more fair, I get the social profiling. The disguise I wear as a moderately competent 46 year old paediatrician with greying (OK, grey) hair and comfort in my surroundings slips away and there he is – the awkward, spotty, 14 year old. Actually, I didn’t come to Terry Pratchett until my university years but that’s almost too trivial to mention. The main point is that the stereotype is probably fair. This was an author with tremendous reach but with a core readership of the awkward, the geeky, the clever but, well, those with limited life skills.