It’s a thing we like to do in medicine – make decisions on the basis of numbers. The temperature is greater than 38°C in a neutropenic child? Start antibiotics. The CRP in your snuggly neonate has reduced? Stop antibiotics. The PEWS score is high – review.
Lots of researchers want to help out with this too, producing prediction models that estimate the chance of something bad happening. (Or sometimes something good, but usually bad: when in adult medicine did you last see a tool that told you your chance of surviving 10 years without a cardiovascular event, rather than your risk of having one?) But there is a fundamental leap between predicting percentages and doing or not doing – it’s the difference between a “prediction” (“it is very likely to rain today”) and a “classification” (“today is a day to take your umbrella”). The quality of the prediction might be given to you as the area under a ROC curve (the AUC); the accuracy of the classification as sensitivity and specificity.
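To make the distinction concrete, here is a minimal sketch in Python, with made-up numbers rather than data from any real study: a set of predicted probabilities has one threshold-free AUC, but sensitivity and specificity only come into being once you choose a decision threshold – the umbrella-or-not moment.

```python
# Hypothetical model outputs: predicted probability of the bad outcome,
# paired with whether it actually happened (1 = yes, 0 = no).
predictions = [
    (0.05, 0), (0.10, 0), (0.20, 0), (0.30, 1),
    (0.40, 0), (0.60, 1), (0.80, 1), (0.90, 1),
]

def auc(pairs):
    """Threshold-free discrimination: the probability that a randomly
    chosen event case scores higher than a randomly chosen non-event
    case (the Mann-Whitney form of the area under the ROC curve)."""
    events = [p for p, y in pairs if y == 1]
    non_events = [p for p, y in pairs if y == 0]
    wins = sum((e > n) + 0.5 * (e == n) for e in events for n in non_events)
    return wins / (len(events) * len(non_events))

def sens_spec(pairs, threshold):
    """Classification at a chosen threshold: call 'positive' when the
    predicted probability meets it, then count hits and correct rejections."""
    tp = sum(1 for p, y in pairs if p >= threshold and y == 1)
    fn = sum(1 for p, y in pairs if p < threshold and y == 1)
    tn = sum(1 for p, y in pairs if p < threshold and y == 0)
    fp = sum(1 for p, y in pairs if p >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

print(f"AUC = {auc(predictions):.2f}")  # one number, no threshold needed
for t in (0.25, 0.50, 0.75):            # different act/don't-act cut-points
    sens, spec = sens_spec(predictions, t)
    print(f"threshold {t:.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Running it shows the trade-off: lowering the threshold catches more true events (higher sensitivity) at the cost of more false alarms (lower specificity), while the AUC sits unchanged above the fray.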
Using this information is where you blend the hard sciency stuff of critical appraisal with the arts and crafts of discussing risk with colleagues, parents and patients. Not confusing the two in your appraisal of a study is a good place to start.
- Archi