There’s a decent argument in the analysis of quantitative studies of therapies, particularly those using RCT designs, that we should be looking at the totality of unbiased evidence (systematic reviews) rather than at individual, cherry-picked studies. The best estimate then comes from pooling all the results: meta-analysis.
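If the mechanics of “pooling” are unfamiliar, here’s a minimal sketch of the core idea, inverse-variance weighting, where each study’s result counts in proportion to how precise it is. The trials and numbers below are entirely invented for illustration, not taken from any review mentioned here.

```python
# Minimal sketch of fixed-effect (inverse-variance) pooling:
# more precise studies (smaller variance) get more weight.

def pool_fixed_effect(effects, variances):
    """Return the inverse-variance pooled estimate and its variance."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical trials reporting a log risk ratio and its variance
effects = [-0.20, -0.05, -0.35]
variances = [0.04, 0.02, 0.09]
pooled, var = pool_fixed_effect(effects, variances)
print(f"pooled log RR = {pooled:.3f}, 95% CI half-width = {1.96 * var ** 0.5:.3f}")
```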
There’s a challenge to this, though, when the comparisons are not quite the same. In trials of drug A vs. B, C, D and E the mismatch can be quite easy to spot (and a network meta-analysis can then perhaps be undertaken to address it). When the trials are A vs. standard care, it’s a much greater challenge to see if and how “standard care” varies.
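For the A vs. B, C, D and E situation, the simplest building block of a network meta-analysis is an indirect comparison through a common comparator (a Bucher-style calculation): estimate A vs. B by differencing A vs. C and B vs. C. A rough sketch with invented numbers:

```python
# Minimal sketch of an indirect comparison through a common comparator C
# (e.g. standard care): the indirect effect is a difference of effects,
# and the variances add. Numbers are hypothetical.

def indirect_comparison(d_ac, var_ac, d_bc, var_bc):
    """Indirect A vs B effect and variance from A vs C and B vs C trials."""
    d_ab = d_ac - d_bc
    var_ab = var_ac + var_bc  # variance of a difference of independent estimates
    return d_ab, var_ab

d_ab, var_ab = indirect_comparison(d_ac=-0.40, var_ac=0.05, d_bc=-0.10, var_bc=0.04)
print(f"indirect A vs B log OR = {d_ab:.2f}, SE = {var_ab ** 0.5:.2f}")
```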
Take a recent systematic review of the use of procalcitonin to guide antibiotic decisions in children with lower respiratory tract infections. It looked at 14 trials covering 4000 episodes of infection across different clinical settings (including ICU, ED and primary care) and used the setting as a proxy for ‘standard care’: did the location in which guided treatment was delivered alter whether the approach was effective or not?
Where the standard care is extremely variable – for example, this review of non-surgical therapies for upper limb cerebral palsy – the challenge is massive. If there’s a well-founded belief that the variation is, functionally, minimal, then pooling is entirely reasonable. If not, then while you can still pool the results, the answer becomes very difficult to translate into clinical practice. If the ‘new treatment’ is, on average, 50% better than ‘the average sort of standard treatment that gets provided’, how do you take this into a ward, clinic or home and know it will be effective?
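One hedged way to sanity-check whether the variation really is functionally minimal is to look at how much the trial results themselves disagree, for example with Cochran’s Q and the I² statistic: a high I² is a warning that the ‘average’ answer may not belong to any single setting. A rough sketch, again with invented numbers:

```python
# Minimal sketch of quantifying between-trial disagreement (heterogeneity)
# with Cochran's Q and I-squared. All numbers are invented for illustration.

def heterogeneity(effects, variances):
    """Return Cochran's Q and I^2 (%) for a set of study effects and variances."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

q, i2 = heterogeneity([-0.50, 0.05, -0.80, 0.20], [0.04, 0.03, 0.06, 0.05])
print(f"Q = {q:.1f}, I^2 = {i2:.0f}%")
```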
As usual, the skill in applying research to clinical practice is to integrate a clear analysis of what the science says with a thorough grounding in what is actually being done on the ground, and to share this with patients so that, together, you can make a considered decision and step forward.
– Archi