
Shared decision making

26 Jul, 17 | by Bob Phillips

So the model of EBM that we espouse is one grounded in the patient ‘dilemma’ as both the start and the end point of the process. You’ll recall it’s a patient’s situation that triggers the asking of a PICO question, and the selection of patient-oriented outcomes is vitally important. The acquisition and appraisal of studies that follow link back to this; applying the results of your deliberations then means bringing those conclusions back to the patient and discussing where to go from there.
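As a purely illustrative aside: one way to keep those four PICO elements explicit is to treat the question as a little structured record. The scenario below is entirely made up for the example.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    patient: str       # the clinical situation that triggered the question
    intervention: str
    comparison: str
    outcome: str       # ideally patient-oriented, not a surrogate marker

# A hypothetical dilemma, invented for illustration
q = PICOQuestion(
    patient="febrile neutropenic child on chemotherapy",
    intervention="early switch to oral antibiotics",
    comparison="continued intravenous antibiotics",
    outcome="recurrence of fever or readmission",
)
print(q)
```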

Can our children’s trials work better than they do?

13 Jun, 17 | by Bob Phillips

We’re all well aware of the problems of doing randomised clinical trials in paediatrics – small numbers, uncertainty about sample size estimates, lack of funding to undertake the studies – but are we as aware of some alternative approaches that have been used [1]?

“Sequential design” studies compare a series of treatments against each other, switching to the ‘better’ arm and comparing it against the next candidate as time progresses. They need quickly and easily available outcomes, and tend to be usable only for short-course treatments, but they’ve been estimated to reduce sample sizes by about 25%.
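To get a feel for why looking at the data as you go can save participants, here’s a toy simulation – not the multi-arm switching design described above, but a simplified two-arm trial with a single interim analysis. The event rates, sample size and stopping boundary are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(p_control=0.30, p_new=0.45, n_per_arm=100,
              interim_frac=0.5, boundary_z=2.18):
    """Simulate one two-arm trial with a single interim look.

    boundary_z is roughly a Pocock-style boundary for two looks;
    all the numbers here are made up for the sketch.
    """
    control = rng.random(n_per_arm) < p_control
    treated = rng.random(n_per_arm) < p_new

    def z_stat(a, b):
        p_pooled = (a.sum() + b.sum()) / (len(a) + len(b))
        se = np.sqrt(p_pooled * (1 - p_pooled) * (1 / len(a) + 1 / len(b)))
        return (b.mean() - a.mean()) / se if se > 0 else 0.0

    n_interim = int(n_per_arm * interim_frac)
    # Stop early if the halfway-point difference already crosses the boundary
    if abs(z_stat(control[:n_interim], treated[:n_interim])) > boundary_z:
        return 2 * n_interim
    return 2 * n_per_arm

sizes = [run_trial() for _ in range(2000)]
print(f"average total recruited: {np.mean(sizes):.0f} (fixed design: 200)")
```

When the new treatment genuinely is better, a fair share of the simulated trials stop at the halfway look, pulling the average number recruited below the fixed design’s 200.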


Damned if you do, damned if you don’t?

19 Apr, 17 | by Bob Phillips

The field of systematic review – into which, we believe, Archimedes sneaks under the ‘rapid review’ heading – has long had a solid foundation of what a systematic review needs to do. It needs a clear question, a comprehensive search, an assessment of the included studies’ bias/quality, a synthesis (which may be mathematical: meta-analysis), and a set of conclusions that draw all this together.

What it has long been struggling with is how ‘best’ to do each of these things. ‘Best’ is itself problematic – take ‘best’ searches, for example. Should they find every single possible scraplette of information, taking three months of daily specialist work, when the qualitative bulk of the data, leading to the same practical conclusion, was found in the first week? (And how do you know – prospectively – when the tip into ‘enough’ has been reached?)

MiniStatsBlog: Making decisions from numbers

15 Feb, 17 | by Bob Phillips

It’s a thing we like to do in medicine – make decisions on the basis of numbers. The temperature is greater than 38C in a neutropenic child? Start antibiotics. The CRP in your snuggly neonate has reduced? Stop antibiotics. The PEWS score is high – review.

Lots of researchers want to help out with this too, and they produce prediction models that can tell you the chance of something bad happening. (Or sometimes something good – but usually bad. The rare counter-example is the adult medicine tool that tells you your chance of surviving 10 years without a cardiovascular event.) But there is fundamentally a leap between predicting percentages and doing/not doing – it’s the difference between a “prediction” (such as “it is very likely to rain today”) and a classification (“today is a day to take your umbrella”). The predictive performance might be given to you as the area under a ROC curve (AUC); the classification accuracy as sensitivity and specificity.
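A minimal sketch of that distinction, with entirely invented data: discrimination (the AUC) is threshold-free, while sensitivity and specificity only appear once you commit to a cut-off.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predicted risks from a model, and outcomes consistent with them
risk = rng.random(1000)             # the model's predicted probabilities
event = rng.random(1000) < risk     # whether the bad thing actually happened

# Prediction: AUC = chance a random case is ranked above a random non-case
pos, neg = risk[event], risk[~event]
auc = (pos[:, None] > neg[None, :]).mean()

# Classification: choosing a threshold turns risks into do/don't decisions
threshold = 0.5
flagged = risk >= threshold
sensitivity = flagged[event].mean()        # cases correctly flagged
specificity = (~flagged)[~event].mean()    # non-cases correctly left alone

print(f"AUC {auc:.2f}; at cut-off {threshold}: "
      f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```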

Using this information is where you blend the hard, sciencey stuff of critical appraisal with the arts and crafts of discussing risk with colleagues, parents and patients. Not confusing the two things in your appraisal of a study is a good place to start.

  • Archi

Cases and controls

18 Dec, 16 | by Bob Phillips

I’ve noticed that there are a fair few phrases in the world whose actual meaning can be unclear or uncertain, or possibly interpreted differently by different folk. Take “maybe later” when used by a parent to a child – it clearly means “no” to the parent and “yes, but not now” to the child. Or “Brexit”.

But the world of science can’t be confused …  can it?

Just take a gander through the field of “case control” titled studies and you may find yourself upset to discover it can. Now, I am fairly clear that what I mean by case-control is a design where the participants are chosen because they have developed (cases) or haven’t got (controls) the OUTCOME of interest – they died, developed neuroblastoma, or were excluded from school. The analysis is then about finding out whether these groups had different levels of exposure to a proposed causative factor, such as blood transfusions, bacon, or X Factor viewing.

What is not a case-control study is one where the groups are chosen for their exposure (or not) to a treatment. This is a comparative cohort study.
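A toy 2×2 table makes the analytic consequence plain – the counts below are invented. When you sample on the outcome (case-control), incidence is meaningless and the odds ratio is the only valid effect measure; when you sample on the exposure (cohort), risks are estimable and a risk ratio makes sense.

```python
# Invented counts: exposure (e.g. blood transfusion) vs outcome of interest
#                outcome+   outcome-
# exposed            a          b
# unexposed          c          d
a, b, c, d = 30, 70, 10, 90

# Case-control analysis: the odds ratio survives outcome-based sampling
odds_ratio = (a * d) / (b * c)

# Comparative cohort analysis: exposure-based sampling lets you compute risks
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
risk_ratio = risk_exposed / risk_unexposed

print(f"odds ratio {odds_ratio:.1f}, risk ratio {risk_ratio:.1f}")
```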

Now, as is so often the case when appraising papers, what the authors have written sometimes doesn’t matter. It’s what they did that counts – so discount their title if the design doesn’t fit it.

– Archi

 

StatsMiniBlog: Pants and primary schools.

25 Oct, 16 | by Bob Phillips

I’ve been struggling to get the concept behind random-effects meta-analysis across for some time – it’s the ‘average effectiveness’ in an ‘average population’ – with the prediction interval being the ‘actual width of where the truth might lie’.

But… yes… but what does that actually mean, and why does it matter?

Well.

Take a primary school. Get the average height of the children in each class. Now use that to tell me the average height of a child in that school.

If I make a tonne of jogging pants for them based on the average height, I might provide 15% with an OK fit, but mostly they’ll be too long or too short.

If the data are very, very heterogeneous, the average is “true” – it’s just not very useful.

Does that make any more sense of how awkward a ‘random-effects meta-analysis’ result can be in a highly heterogeneous meta-analysis?

  • Archi

(ps – The overall average height might be useful for ordering the right amount of material from the weaver’s shed, though.)
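For the statistically curious, here’s a minimal sketch of the machinery – a DerSimonian-Laird random-effects pool with a Higgins-style prediction interval, using made-up study results. The confidence interval describes the average; the prediction interval describes where a new class’s – sorry, a new study’s – effect might plausibly land.

```python
import numpy as np
from scipy import stats

# Made-up study effects (e.g. log odds ratios) and their within-study variances
y = np.array([0.10, 0.55, -0.20, 0.80, 0.30])
v = np.array([0.04, 0.06, 0.05, 0.09, 0.03])
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1 / v
fixed_mean = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - fixed_mean) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects pooled mean: the 'average effect in an average population'
w_star = 1 / (v + tau2)
mu = np.sum(w_star * y) / np.sum(w_star)
se_mu = np.sqrt(1 / np.sum(w_star))

ci = mu + np.array([-1, 1]) * 1.96 * se_mu                   # CI for the average
t_crit = stats.t.ppf(0.975, k - 2)                           # Higgins et al.
pi = mu + np.array([-1, 1]) * t_crit * np.sqrt(tau2 + se_mu**2)  # new-study PI

print(f"pooled mean {mu:.2f}, 95% CI {np.round(ci, 2)}, "
      f"95% prediction interval {np.round(pi, 2)}")
```

The prediction interval stretches well beyond the confidence interval exactly when tau² is large – the jogging-pants problem in numbers.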

But if it’s significant it must be true?

20 Sep, 16 | by Bob Phillips

One thing that I keep coming across, from a huge range of folks involved in clinical practice, is the idea that if something is statistically significant, then it’s true. Some folk nuance that a bit, and say things like “true 95% of the time” for confidence intervals or p=0.05 …

Of course, there’s an ongoing argument about exactly how to understand p-values in common, understandable language. A simplish and we hope right-enough version can be found here. But underlying that is a different, more important truth.

The stats tests work to assess what might be the product of chance variation. When the data they are testing come from hugely biased studies with enormous flaws, and the poor little stats machine says p=0.001, the researcher and reader may conclude “this is true”. This is wrong: the result is due to bias and poor research.

It may be better to think “this is unlikely to be due to chance” – in remembering that phrase, you’ll hopefully recollect the other reasons why something may not be due to chance too.
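A quick simulation shows how a hopelessly confounded ‘study’ can hand you a tiny p-value with no causal effect in sight – the scenario below is entirely invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# A confounded 'study': sicker children are both more likely to receive the
# drug AND more likely to die. The drug itself does nothing at all.
n = 2000
severity = rng.random(n)
treated = rng.random(n) < severity    # treatment allocation tracks severity
died = rng.random(n) < severity       # the outcome tracks severity too

table = np.array([[np.sum(treated & died), np.sum(treated & ~died)],
                  [np.sum(~treated & died), np.sum(~treated & ~died)]])
chi2, p, *_ = stats.chi2_contingency(table)

print(f"p = {p:.1e} - 'significant', yet entirely an artefact of confounding")
```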

  • Archi

Happy holidays everyone

5 Aug, 16 | by Bob Phillips

There will be a lull in blogging as a variety of people are off doing other things.

Have a lovely summer (N hemisphere folks) / enjoy the rigours of winter (S hemisphere chaps).


“We thank the reviewer …”

2 Aug, 16 | by Bob Phillips

In our previous post we unpeeled the sticker a little bit on how the magic process of submission to … well, let’s just stick with ‘publication’ and be optimistic … happens.

Step 11 compresses the process of being offered a second chance into a few brief words. It’s probably a good idea to think a bit about how you respond to a reviewer’s comments, to make life easier for all of us.

Hard science in difficult areas

26 Jul, 16 | by Bob Phillips

It’s one of the delights of my professional clinical practice that (nearly) all the time, the diagnosis of a malignancy is hard and sound: reproducible, based on good data showing discrimination from other conditions, and with minimal interpersonal variation. Take a chunk of a particular renal tumour and show it to half a dozen paediatric pathologists, and nearly all the time the Wilms tumour will be recognised and identified as such.

People who deal with the un-biopsiable – the disorders without a test, which demand a degree of artistic flair – both overwhelm me with their skill and terrify me with their Emperor’s New Clothes potential. Interpreting chest radiographs, or diagnosing Kawasaki disease and ADHD, spring to mind. The methods used to guard against the off-piste clinician – regular group review; diagnostic criteria, checklists and the like; analysis of yes/no cohorts to detect changes in proposed outcomes – are all essential and regularly undertaken.
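One simple guard-rail from that toolbox is putting a number on inter-observer agreement. Here’s a minimal sketch of Cohen’s kappa, with two hypothetical raters whose calls are made up for the example.

```python
import numpy as np

# Two hypothetical raters' calls on the same 10 cases (1 = diagnosis present)
rater_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
rater_b = np.array([1, 1, 0, 1, 1, 1, 1, 0, 1, 0])

observed = np.mean(rater_a == rater_b)
# Agreement expected by chance alone, from each rater's marginal rates
p1, p2 = rater_a.mean(), rater_b.mean()
expected = p1 * p2 + (1 - p1) * (1 - p2)
kappa = (observed - expected) / (1 - expected)

print(f"observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")
```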

One area which has been subject to huge scrutiny, because of the challenging implications of making (or unmaking) a ‘diagnosis’, is child abuse and neglect. The CORE-INFO group in Cardiff, who have collected and reviewed thousands of studies, have piled up the evidence for us, and this has been used to establish guidelines for reproducible practice – yet there remains some difficulty in the finding of unexplained bruising.

