It might not be quite the right way of describing the challenges of communicating what we don’t know, but I find we can often tie ourselves up in knots when explaining our lack of knowledge about an effect or prognosis. Take a child with a new diagnosis of metastatic rhabdomyosarcoma. We know, from large trials covering over three-quarters of those with the condition, that the number of adverse features (age, extent of metastasis, site of the primary, histological type) predicts whether a patient falls into a group with roughly 50% survival, or one with far less hope, at roughly 10%. We cannot know, for any individual patient, which side of that line they will fall on; that’s one clear type of uncertainty.
We can also explain that, because of the numbers involved and the way chance works, there is a degree of imprecision around those estimates (the ~50% may truly be 40% or 60%). That’s a little harder to explain, but probably still communicable to most families.
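To make that imprecision concrete: the "~50% may truly be 40% or 60%" intuition is roughly what a 95% confidence interval for a proportion expresses. A minimal sketch, using entirely hypothetical numbers (a trial arm of 100 patients with 50 survivors, which the original does not specify) and the simple Wald approximation:

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Approximate 95% confidence interval for a proportion (Wald method).

    With z = 1.96 this gives the familiar p +/- 1.96 * sqrt(p(1-p)/n).
    """
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Hypothetical cohort: 50 of 100 children surviving.
lo, hi = wald_ci(50, 100)
print(f"Point estimate 50%, 95% CI roughly {lo:.0%} to {hi:.0%}")
```

With 100 hypothetical patients the interval runs from about 40% to 60%, matching the intuition above; a larger trial would narrow it, a smaller one widen it. (Real analyses often prefer the Wilson or exact intervals, which behave better near 0% or 100%.)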
If the data came instead from a handful of small studies that used slightly different techniques to gather the information and to assess the outcomes, we might also be dubious about how ‘believable’ the answer is. That sort of uncertainty can be really hard to explain with confidence: we think this is ‘sort of’ true, but the numbers might be a bit dodgy? And then if someone throws in a new factor – a new test, or a molecular finding – which hasn’t been examined often, how do we explain that these small pieces of knowledge might be wrong? Or that publication-bias-by-time is a ‘thing’?
I don’t know the answers to these problems in the clinical practice of evidence-based paediatrics. But I think that if we can parse the sources of our uncertainty, we may be in a better place to share our certainties, and so make better decisions together.