25 Oct, 16 | by Bob Phillips
I’ve been struggling for some time to explain the concept behind random-effects meta-analysis – it gives the ‘average effectiveness’ in an ‘average population’, with the prediction interval describing the ‘actual width of where the truth might lie’ in any individual setting.
But … yes … but what does that actually mean, and why does it matter?
Take a primary school. Get the average height of the children in each class. Now use that to tell me the average height of a child in that school.
If I make a tonne of jogging pants for them based on the average height, I might provide 15% with an OK fit, but mostly they’ll be too long or too short.
If the data are very very heterogeneous, the average is “true”, it’s just not very useful.
Does that make any more sense of how awkward a ‘random effects meta-analysis’ result can be in a highly heterogeneous meta-analysis?
(ps – The overall average height might be useful to order the right amount of material from the weavers shed though.)
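The jogging-pants problem can be sketched in a few lines of Python (all the figures here are invented for illustration): with a realistic spread between classes, trousers cut to the school-wide average fit only a small minority of children, even though the average itself is perfectly “true”.

```python
import random
import statistics

random.seed(1)

# A hypothetical primary school: 7 classes of 30, mean height rising
# with age, plus within-class spread (every number here is made up).
class_means = [110, 116, 122, 128, 134, 140, 146]  # cm
heights = [random.gauss(mu, 5) for mu in class_means for _ in range(30)]

average = statistics.mean(heights)  # the "true" overall average
sd = statistics.stdev(heights)

# One size of jogging pants, cut for the average height: what
# fraction of children do they actually fit (say, within +/- 3 cm)?
fit_fraction = sum(abs(h - average) <= 3 for h in heights) / len(heights)

print(f"average height: {average:.1f} cm, SD {sd:.1f} cm")
print(f"fraction fitted by average-sized pants: {fit_fraction:.0%}")
```

The larger the between-class (between-study) spread, the wider the SD and the smaller the fraction the “average” serves – which is exactly the story a wide prediction interval is telling you.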
20 Sep, 16 | by Bob Phillips
One thing that I keep coming across, from a huge range of folks involved in clinical practice, is the idea that if something is statistically significant, then it’s true. Some folk nuance that a bit, and say things like “true 95% of the time” for confidence intervals or p=0.05 …
Of course, there’s an ongoing argument about exactly how to understand p-values in common, understandable language. A simplish and we hope right-enough version can be found here. But underlying that is a different, more important truth.
The stats tests work to assess what might be the product of chance variation. When the data being tested come from a hugely biased study with enormous flaws, and the poor little stats machine says p=0.001, the researcher and reader may conclude “this is true”. This is wrong: the result reflects bias and poor research, not truth.
It may be better to think “this is unlikely to be due to chance” – in remembering that phrase, you’ll hopefully recollect the other reasons why something may not be due to chance too.
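A toy simulation makes the point (with invented numbers): two groups with the same true mean, measured by a systematically biased process, will cheerfully deliver a tiny p-value. The test has correctly ruled out chance – it simply cannot rule out bias.

```python
import math
import random
import statistics

random.seed(0)

# Two groups with the SAME true mean -- any difference is pure noise...
n = 500
control = [random.gauss(0, 1) for _ in range(n)]
treated = [random.gauss(0, 1) for _ in range(n)]

# ...until a biased measurement process adds 0.3 to every treated
# reading (an uncalibrated instrument, unblinded assessors, etc.).
treated = [x + 0.3 for x in treated]

diff = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.variance(treated) / n
               + statistics.variance(control) / n)
z = diff / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approximation

print(f"difference {diff:.2f}, z = {z:.1f}, p = {p:.2g}")
```

The p-value here is vanishingly small and entirely honest: this difference really isn’t due to chance. It’s due to the broken measurement we built in.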
5 Aug, 16 | by Bob Phillips
There will be a lull in blogging as a variety of people are off doing other things.
Have a lovely summer (N hemisphere folks) / enjoy the rigours of Winter (S hemisphere chaps).
2 Aug, 16 | by Bob Phillips
In our previous post we unpeeled the sticker a little bit on how the magic process of submission to … well, let’s just stick with ‘publication’ and be optimistic … happens.
Step 11 compresses the process of being offered a second chance into a few brief words. It’s probably a good idea to think a bit about how you respond to a reviewer’s comments, to make life easier for all of us.
26 Jul, 16 | by Bob Phillips
It’s one of the delights of my professional clinical practice that (nearly) all the time, the diagnosis of a malignancy is hard & sound, reproducible, based on good data showing discrimination from other entities and with minimal interpersonal variation. Take a chunk of a particular renal tumour and show it to half a dozen paeds pathologists, and nearly all the time the Wilms’ tumour will be recognised and identified as such.
People who deal with the un-biopsiable, the disorders without a test and with a requirement for artistic flair, both overwhelm me with their skill and terrify me with their Emperor’s New Clothes potential. Things like interpreting chest radiographs, or diagnosing Kawasaki disease and ADHD, spring to mind. The methods used to guard against the off-piste clinician – regular group review; diagnostic criteria, checklists and the like; analysis of yes/no cohorts to detect changes in proposed outcomes – are all essential and undertaken regularly.
One area which has been subject to huge scrutiny, because of the challenging implications of making/unmaking a ‘diagnosis’, is child abuse and neglect. The CORE-INFO group in Cardiff, who have collected and reviewed thousands of studies, have piled up the evidence for us, and this has been used to help establish guidelines for reproducible practice – yet some difficulty remains in the finding of unexplained bruising.
19 Jul, 16 | by Bob Phillips
It’s become fairly clear that most people don’t really know how articles get from the pen into the ‘accepted’ queue at a journal.
At the most wonderful paediatric / child health journal on the planet (*) the process works like this:
* ADC of course!
12 Jul, 16 | by Bob Phillips
Trials and tribulations we all have. The not-knowing of the future can create anxiety, distress and an unhealthy desire for chocolate. Some days, knowing what’s for tea can provide the only concrete grounding in an otherwise fluctuant universe.
And along with that, the naming of things can sometimes be enlightening.
So, for un-knowing, you could consider using the following sorts of words:
Imprecision. The sort of not-knowing that is generated by small sample sizes with wide confidence intervals. Uncertainty in a mathematical sense.
Vagueness. Uncertainty about something because it has been poorly defined or described – the ambiguity of the written “How’re you doing?”, which means something different in the melting voice of Joey Tribbiani and the concerned, serious tones of your Aunt Ethel.
Volatility. Uncertainty through a potentiality of futures that may be sweepingly different depending on as yet undetermined features … if only we’d had a referendum recently in the UK that might provide an example …
5 Jul, 16 | by Bob Phillips
If there was a chance that you might miss a diagnosis you could intervene on in 2 of every 1,000 cases, and the test that could pick it up cost about £3 and was minimally invasive – wouldn’t you be daft not to use it?
Or would you think “what a waste of time!” and wonder about the £3,000 you had spent when only £6-worth of the tests actually made a diagnosis?
The dilemma is common – there is a whole list of things that are traditionally associated with a differential diagnosis which may be exceptionally rare, and sometimes coincidental rather than causative.
Take urinary tract infection in jaundiced babies.
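The back-of-envelope sums behind that dilemma, using the post’s illustrative figures (a £3 test, 2 actionable findings per 1,000 tested), look like this:

```python
# Illustrative arithmetic only -- figures taken from the post, not
# from any real costing study: a £3 test with an actionable
# finding in 2 of every 1,000 children tested.
cost_per_test = 3        # pounds
tests_done = 1000
diagnoses_found = 2

total_cost = cost_per_test * tests_done            # total outlay
cost_per_diagnosis = total_cost / diagnoses_found  # pounds per case found

print(f"£{total_cost} spent, £{cost_per_diagnosis:.0f} per diagnosis made")
```

Whether £1,500 per diagnosis is “daft not to” or “a waste of time” depends entirely on what the diagnosis lets you do – which is the real question the test itself can’t answer.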
28 Jun, 16 | by Bob Phillips
There are some times when it seems that no decision can be the right decision, but doing nothing is as much a decision as doing something.
Admittedly, it’s rare you’re faced with shooting a killer in cold blood to prevent him murdering a not-so-innocent man.
But sometimes the triadic nature of paediatric & adolescent medicine causes us trouble.
21 Jun, 16 | by Bob Phillips
My mum insists that we, at home, always cut off the green bit & slice the strawberry in case it had a slug in it.
For Ian Wacogne it’s sitting with his back against a radiator.
Well, in my case it’s so that you can’t eat a slug … one that’s managed to get into the strawberry without leaving a hole, or magic-taped it together afterwards … garbage, yup?
I asked my mum about it. She said that my grandma had told her she had eaten a slug in a strawberry when she was little, but that no, she didn’t remember eating the slug, and actually, on recollection, it was that she had nearly eaten a slug in a strawberry …
Such ‘strawberry stories’ are prevalent and problematic. They exist in clinical medicine, research and publishing. Some we’ve heard recently:
“You can’t publish fetal or animal papers in ADC F&N” … You can
“You need NHS Ethics to involve patients in developing research studies & protocols” … You don’t
“Multi-disciplinary research is never worth the effort” … Nope
Identifying these strawberry stories, and overcoming them should probably be one of the tasks we take on every day. I wish I could point you at high quality evidence of how to do it, but sadly, I can’t.