19 Apr, 17 | by Bob Phillips
The field of systematic review (into which Archimedes, we believe, sneaks in under the ‘rapid review’ heading) has long had a solid foundation for what a systematic review needs to do. It needs a clear question, a comprehensive search, an assessment of the included studies’ bias/quality, a synthesis (which may be mathematical: meta-analysis), and a set of conclusions that draw these together.
What it’s long been struggling with is how to ‘best do’ each of these areas. ‘Best’ is itself problematic – take ‘best’ searches, for example. Do they find every single possible scraplette of information, taking 3 months of daily specialist work, when the qualitative bulk of the data, leading to the same practical conclusion, was found in the first week? (And how do you know – prospectively – when the tipping point into ‘enough’ has been reached?)
15 Feb, 17 | by Bob Phillips
It’s a thing we like to do in medicine – make decisions on the basis of numbers. The temperature is greater than 38°C in a neutropenic child? Start antibiotics. The CRP in your snuggly neonate has reduced? Stop antibiotics. The PEWS score is high? Review.
Lots of researchers want to help out with this too, and they produce prediction models that can help you know what’s the chance of something bad happening. (Or sometimes something good. But usually bad; for example, can you recall in adult medicine when you saw a tool to tell you your chance of surviving 10 years without a cardiovascular event?) But there is fundamentally a leap between predicting percentages and doing/not doing – it’s the difference between a “prediction” (such as “it is very likely to rain today”) and classification (“today is a day to take your umbrella”). The predictive goodness might be given to you as the AUC of a ROC curve; the classification accuracy as the sensitivity and specificity.
Using this information is where you blend the hard sciency stuff of critical appraisal with the arts and crafts of discussing risk with colleagues, parents and patients. Not confusing the two things in your appraisal of a study is a good place to start.
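To make the prediction/classification distinction concrete, here’s a toy sketch in Python. All the risk scores are invented for illustration: the same model output can be read two ways – as a ranking across children (summarised by the AUC of the ROC curve) or as a yes/no “umbrella” call at a chosen cutoff (summarised by sensitivity and specificity).

```python
# Toy sketch (all risk scores invented): one model output, two readings.

def auc(scores_pos, scores_neg):
    """Probability a randomly chosen 'event' child scores higher than a
    randomly chosen 'no event' child (ties count half) - equivalent to
    the area under the ROC curve."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def sens_spec(scores_pos, scores_neg, cutoff):
    """Turn the prediction into a classification at one threshold."""
    sens = sum(p >= cutoff for p in scores_pos) / len(scores_pos)
    spec = sum(n < cutoff for n in scores_neg) / len(scores_neg)
    return sens, spec

# Made-up risk scores for children who did / did not have the bad outcome.
had_event = [0.8, 0.6, 0.9, 0.4]
no_event = [0.2, 0.5, 0.3, 0.1, 0.4]

print(auc(had_event, no_event))             # how good the ranking is overall
print(sens_spec(had_event, no_event, 0.5))  # one umbrella-or-not decision
```

Note how changing the cutoff changes sensitivity and specificity, while the AUC – the prediction’s goodness – stays the same: that’s the leap between predicting and deciding.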
18 Dec, 16 | by Bob Phillips
I’ve noticed that there are a fair few phrases in the world whose actual meaning can be unclear or uncertain, or possibly interpreted differently by different folk. Take “maybe later” when used by parent to child – it clearly means “no” to the parent and “yes, but not now” to the child. Or “Brexit”.
But the world of science can’t be confused … can it?
Just take a gander through the field of “case control” titled studies and you may find yourself upset to discover it can. Now I am fairly clear that what I mean by case/control is a design where the participants are chosen because they have developed (cases) or haven’t developed (controls) the OUTCOME of interest – they died, developed neuroblastoma or were excluded from school. The analysis is then about finding out whether these groups had different levels of exposure to a proposed causative factor, such as blood transfusions, bacon, or X-Factor viewing.
What is not a case control study is one where the groups are chosen for the exposure to a treatment or not. This is a comparative cohort study.
Now, as is so often the case when appraising papers, it sometimes doesn’t matter what the authors have written. It’s what they did that counts – so discount their title if the design doesn’t fit it.
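A quick sketch of why the distinction matters, with an entirely invented 2×2 table: a case-control study (recruited on the outcome) only supports an odds ratio, while a comparative cohort (recruited on the exposure) lets you estimate the risk of the outcome directly.

```python
# Hypothetical counts: exposure = blood transfusion, outcome = bad thing.
#                  outcome+  outcome-
exposed = dict(yes=30, no=70)
unexposed = dict(yes=10, no=90)

def odds_ratio(e, u):
    """Odds of exposure in cases vs controls - the estimate a
    case-control design supports."""
    return (e["yes"] * u["no"]) / (e["no"] * u["yes"])

def risk_ratio(e, u):
    """Risk of the outcome in exposed vs unexposed - only meaningful
    when the groups were recruited on exposure (a cohort design)."""
    risk_e = e["yes"] / (e["yes"] + e["no"])
    risk_u = u["yes"] / (u["yes"] + u["no"])
    return risk_e / risk_u

print(odds_ratio(exposed, unexposed))  # fine from a case-control study
print(risk_ratio(exposed, unexposed))  # needs a cohort's sampling scheme
```

The two numbers differ (here roughly 3.9 vs 3.0), and only the design – not the title – tells you which one the data can honestly give you.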
25 Oct, 16 | by Bob Phillips
I’ve been struggling for some time to get across the concept behind random-effects meta-analysis – it’s the ‘average effectiveness’ in an ‘average population’ – with the prediction interval being the ‘actual width of where the truth might lie’.
But … yes … but what does that actually mean, and why does that matter?
Take a primary school. Get the average height of the children in each class. Now use that to tell me the average height of a child in that school.
If I make a tonne of jogging pants for them based on the average height, I might provide 15% with an OK fit, but mostly they’ll be too long or too short.
If the data are very very heterogeneous, the average is “true”, it’s just not very useful.
Does that make any more sense of how awkward a ‘random effects meta-analysis’ result can be in a highly heterogeneous meta-analysis?
(ps – The overall average height might be useful to order the right amount of material from the weavers shed though.)
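The jogging-pants problem can be sketched in a few lines of Python (all heights invented): the school-wide average is a perfectly “true” number, but the between-class spread – the same idea as a prediction interval in a random-effects meta-analysis – is what tells you whether that average is any use for an individual class.

```python
# Invented class-average heights (cm), one per year group of a primary school.
import statistics

class_means = [105, 112, 120, 128, 135, 142, 150]

overall = statistics.mean(class_means)   # the 'average effect'
spread = statistics.stdev(class_means)   # between-class heterogeneity

# A rough 95% range for where a given class's average might lie -
# analogous to a prediction interval around a random-effects summary.
low, high = overall - 2 * spread, overall + 2 * spread

print(overall)      # one number: fine for ordering cloth from the weavers
print((low, high))  # the honest answer to "how tall is a child here?"
```

With heights ranging from 105 cm to 150 cm, trousers cut to the ~127 cm average fit almost nobody – the average is true, just not useful, exactly as in a highly heterogeneous meta-analysis.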
20 Sep, 16 | by Bob Phillips
One thing that I keep coming across, from a huge range of folks involved in clinical practice, is the idea that if something is statistically significant, then it’s true. Some folk nuance that a bit, and say things like “true 95% of the time” for confidence intervals or p=0.05 …
Of course, there’s an ongoing argument about exactly how to understand p-values in common, understandable language. A simplish and we hope right-enough version can be found here. But underlying that is a different, more important truth.
The stats tests work to assess what might be the product of chance variation. When the data they are testing come from hugely biased studies, with enormous flaws, and the poor little stats machine says p=0.001, the researcher and reader may conclude “this is true”. This is wrong: the result is due to bias and poor research.
It may be better to think “this is unlikely to be due to chance” – in remembering that phrase, you’ll hopefully recollect the other reasons why something may not be due to chance too.
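A small simulation makes the point (all numbers invented): two samples drawn from the very same underlying distribution, but one then “measured” with a biased instrument that adds a fixed offset. The test duly reports “unlikely to be chance” – and it’s quite right, because the difference isn’t chance; it’s bias.

```python
# Toy sketch: a significant p-value from pure bias, not a real effect.
import math
import random
import statistics

random.seed(1)
true_values_a = [random.gauss(10, 1) for _ in range(200)]
true_values_b = [random.gauss(10, 1) for _ in range(200)]
measured_a = [x + 0.5 for x in true_values_a]  # same children, biased ruler

def z_test_p(a, b):
    """Two-sample z test (normal approximation); two-sided p-value."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

print(z_test_p(true_values_a, true_values_b))  # only chance at work here
print(z_test_p(true_values_a, measured_a))     # tiny p - but it's all bias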
5 Aug, 16 | by Bob Phillips
There will be a lull in blogging as a variety of people are off doing other things.
Have a lovely summer (N hemisphere folks) / enjoy the rigours of Winter (S hemisphere chaps).
2 Aug, 16 | by Bob Phillips
In our previous post we unpeeled the sticker a little bit on how the magic process of submission to … well, let’s just stick with ‘publication’ and be optimistic … happens.
Step 11 compresses the process of being offered a second chance into a few brief words. It’s probably a good idea to think a bit about how you respond to a reviewer’s comments to make life easier for all of us. more…
26 Jul, 16 | by Bob Phillips
It’s one of the delights of my professional clinical practice that (nearly) all the time, the diagnosis of a malignancy is hard & sound, reproducible, based on good data showing discrimination from other settings and with minimal interpersonal variation. Take a chunk of a particular renal tumour and show it to half a dozen paeds pathologists, and nearly all the time the Wilm’s tumour will be recognised and identified as such.
People who deal with the un-biopsiable, the disorders without a test and with a requirement for artistic flair, both overwhelm me with their skill and terrify me with the Emperors New Clothes potential. Things like interpreting chest radiographs, or diagnosing Kawasaki disease and ADHD spring to mind. The methods used to guard against the off-piste clinician: regular group review; diagnostic criteria, checklists and the like; analysis of yes/no cohorts to detect changes in proposed outcomes, are all essential and undertaken regularly.
One area which has been subject to huge scrutiny because of the challenging implications of making/unmaking a ‘diagnosis’ is child abuse and neglect. The CORE-INFO group in Cardiff who have collected and reviewed thousands of studies have piled up the evidence for us, and these have been used to help guidelines for reproducible practice become established, yet there remains some difficulty in the finding of unexplained bruising.
19 Jul, 16 | by Bob Phillips
It’s become fairly clear that most people don’t really know how articles get from the pen into the ‘accepted’ queue at a journal.
At the most wonderful paediatric / child health journal on the planet (*) the process works like this:
* ADC of course!
12 Jul, 16 | by Bob Phillips
Trials and tribulations we all have. The not-knowing of the future can create anxiety, distress and an unhealthy desire for chocolate. Some days, knowing what’s for tea can provide the only concrete grounding in an otherwise fluctuant universe.
And along with that, the naming of things can sometimes be enlightening.
So, for un-knowing, you could consider using the following sorts of words:
Imprecision. The sort of not-knowing that is generated by small sample sizes with wide confidence intervals.Uncertainty in a mathematical sense.
Vagueness. Uncertainty about something because it has been poorly defined or described, the ambiguity of the written “How’re you doing?” – which may differ if from the melting voice of Joey Tribbiani or the concerned and serious tones of your Aunt Ethel.
Volatility. Uncertainty through a potentiality of futures that may be sweepingly different depending on as yet undetermined features … if only we’d had a referendum recently in the UK that might provide an example …