22 May, 15 | by Bob Phillips
I spend quite a lot of time fairly unsure that I really know enough about the stuff I should know about. Sometimes I think we could benefit from reflecting on this a bit:
Are there known uncertainties – things for which we have a good estimate, based on good research, but which gives us an answer that isn’t clearly one way or the other (like early discharge for low-risk febrile neutropenia)?
Are there clear ignorances – things where we know that we don’t know about that stuff (for me … acute ataxia that isn’t a cerebellar tumour or recent removal from a merry-go-round)?
Are there nibbling unconvincings – things where you know what you have always done, and probably will do again, but aren’t sure it’s really right (e.g. using 0.45% saline for IV maintenance fluid)?
Are there blasts of amazing newness – where someone slaps you with something you have had no idea was a thing or what to do about the thing (I heard about something called “Mauriac syndrome” the other month ?!??)?
These types of unsureness can be resolved with very different approaches. The first line – the known uncertainty – can only really get better if there are more studies, or if the decision is shared one-to-one with the patient and family to whom it applies. The clear ignorance is relatively straightforward to fill: find a good background article that can supply the necessary gap-filling. For this, I’d heartily recommend the “15 minute consultation” section of the (green) Education and Practice edition of the ADC. Nibbling unconvincingness is (obviously!) best addressed by you looking at the evidence, and summarising it if necessary … via an Archimedes perhaps?
The final lot – the unknown unknowns – are most dangerous and most unfillable. You just need to keep your ears and eyes open, and constantly taste the air for areas in which you are shockingly ignorant. An open mind – let’s call it #ThingsIdidntKnow – is the best way out of this. We need to support each other in letting this happen and celebrate the gaps in our knowing.
19 May, 15 | by Bob Phillips
Q: What do you get when you take 50+ paediatric trainees/trainers/medical students, 7 fantastic facilitators, 3 challenges and 6 flipcharts? A: Many solutions and a roomful of conversation… (oh, and not much space left on the flipcharts!)
PEdSIG was delighted to welcome a roomful of enthusiastic delegates to the symposium “Learning on the Job: Are service and training mutually exclusive?” The session was opened by Bob Klaber who talked passionately about the importance of “reflection, commitment and permission” to enable learning on the job. Following a “relatively” smooth (and unplanned) room swap, the workshop began tackling 3 important challenges faced by trainers/trainees on a daily basis. All participants rotated through each of the three workshops, World Café style, adding iteratively to a summary provided by the facilitator from the previous group. Here are our top solutions/ideas to take back to YOUR workplace; more can be found on our website www.pedsig.co.uk
15 May, 15 | by Ian Wacogne
This is another part of the series about writing.
This one is from @Ian_Wacogne, Editor of the wonderful green Education and Practice edition. It’s part of a resource which will grow, over the weeks and months, for people to access if they’re thinking about writing.
This post – as others may be – is particularly aimed at people who might like to write for Education and Practice, but I hope that there will be helpful information for other parts of the ADC suite, for the rest of BMJ publishing, and beyond. Of course, if you’re looking for tips on creative writing – if you’re one of these people who has always had a book in you – then this isn’t the place; the best advice I’ve heard about that is to read, read, read.
In this first post, I want to ask: Why do you want to write?
12 May, 15 | by Bob Phillips
Integrated Management of Childhood Illness (IMCI) is the leading protocol designed to decrease under-5 mortality globally (WHO) – although its potential impact is threatened by the quality of care delivered.
Magge and colleagues report the outcome of a nurse mentorship programme, Mentorship and Enhanced supervision at Health Centres (MESH), in two rural districts (21 rural health centres) in Rwanda. The detail of the intervention is described in the paper. This was a pre-post study, with outcome assessed through a validated index of key IMCI assessments recorded at baseline and 12 months. The index is effectively a checklist of good clinical assessment/care, including items like ability to drink, presence of severe vomiting, convulsions, difficulty breathing, weight, presence of oedema and suchlike. The index increased significantly in both districts. The impact was positive across multiple health care outcomes: the number of children seen by IMCI-trained nurses increased from 83.2% to 100%, use of IMCI case recording forms from 65.9% to 97.1%, correct classification from 56% to 91.5%, and correct treatment from 78.3% to 98.2%.
The data are impressive and go to the heart of what can be achieved through a well-thought-out quality improvement initiative: an initial analysis of the issues, a ‘package’ intervention, comprehensive assessment of the feasibility and success of the intervention, efforts to attain sustainability, and longer-term impact on health outcomes.
8 May, 15 | by Bob Phillips
Welcome to a new series. We’d love to have contributors, ideas and comments. The aim of this group of blogs is to address the question
How do you write, particularly a paper for a clinical, academic audience?
And we’ll start, not at the very beginning, because although it is a very good place to start, it would be better to start one step further back. With some idea of the tools you may want to use.
5 May, 15 | by Bob Phillips
There are considerable numbers of interventions which are undertaken at points of emergency: severe head injury, severe septic shock, myocardial infarction, admissions to intensive care units… In these situations it can be extremely tricky to get the critically ill, often unconscious, individual to agree to being randomised in a clinical trial. Yet without that, we won’t know what treatment to give. Or not give.
But surely we should just use common sense?
Like oxygen for myocardial infarction?
Or can we undertake “deferred consent” – a rather odd phrase which means seeking consent for the use of the data collected after a patient has, because of a critical care emergency, already been entered into a randomised trial?
1 May, 15 | by Bob Phillips
Does your neonatal unit have parents present when you’re doing medical rounds? Would that be a good thing? (Or if you already do it, is that a bad, limiting thing?) Could the presence of parents inhibit honest medical discussion? Could it compromise confidentiality? May the opportunities for bedside teaching be severely reduced? Could the stress of hearing the discussions be excruciating to the parents? Will the inclusion of parents into a ward round discussion bring about a greater trust, and make it truly inclusive? Will it allow for a deeper understanding of the dilemmas faced on both sides? And how much will it vary between parents?
Thinking about all those possibilities makes the idea of trying to investigate the question “Should parents be present on neonatal ward rounds?” rather difficult to frame. For instance, what outcomes are important, and how could they be measured?
28 Apr, 15 | by Bob Phillips
Systematic reviews in health care aim to answer a specific, highly structured, clinical question by extensive searching, careful sifting and appraisal of the studies, a considered synthesis and well-tempered conclusions. They can take very many months – 18 or more – to complete.
Where we undertake and use systematic reviews to provide the very best estimates of effect, we’ll also be waiting a long time to get there. What we might be – practically – better doing is a ‘good enough’ review; still focussed, still systematic and still synthetic, but quicker.
This is the realm of the rapid review: a not-quite-defined type of systematic review that’s quicker, perhaps a little more focussed, sets clearer boundaries, and is well prepared to make every piece fall into place one after another. It turns around an answer quickly enough to be useful, while still being good enough to make a difference.
Of course, you might recognise this type of description when you think about Archimedes reports… But Archimedes reports are a bit briefer in searching, and rarely undertake a formal synthesis, so they’re not quite in this category.
24 Apr, 15 | by Bob Phillips
It would be nice, wouldn’t it, if we could work out which patients would not benefit from an intervention, in order to a) not use it and b) use something (probably more toxic) instead? It’s a frequent thought of mine, as an oncologist, when I sign off another chemotherapy chart with multiple agents on it.
I know that others have the problem too – for instance, those deciding how to treat patients with Kawasaki disease. For some patients the usual treatment of high dose immunoglobulin is ineffective at preventing coronary artery aneurysm formation. There have been clinical prediction rules developed for this, and in Japanese populations, the Kobayashi score is reputed to be effective. The disease does appear to differ across the world though, and it’s always worth confirming that prediction models do work in different areas.
21 Apr, 15 | by Bob Phillips
The principle of an ‘intention to treat’ analysis is that the participants in a randomised trial are analysed in the group to which they were randomised, regardless of what treatment they received. So in a hypothetical trial of salbutamol vs. aminophylline infusion for severe asthma, regardless of what the child got, they are placed in their ‘you should have’ group…
The concept comes from the core of RCT philosophy – that chance has settled all prognostic factors evenly between the two* arms – and so the only reasonable way of preserving this is to analyse the outcomes according to this sorting.
The consequence is that if some folk in the ‘intervention’ arm don’t get the intervention (e.g. allocated a salbutamol infusion, but their K+ was falling prior to starting it), the observed effect of the drug is diluted. This is then ‘unfair’.
But wait. Pragmatic RCTs, trials of treatments as we actually use them, test an intervention. They test not ‘salbutamol infusion’ as such but the whole approach – which might be characterised as ‘what if we have an approach that says we should use salbutamol infusions for patients unless it’s clear they need something different … like PICU … now … can someone ring 2222 please …’
If there are lots of deviations, crossovers and non-receipts of the allocated intervention, it’s very important to look at why. The way we were proposing to do ‘the intervention’ clearly doesn’t work in practice, so it needs reassessing; that doesn’t necessarily mean throwing out the ‘treatment’ element.
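To make that dilution concrete, here’s a tiny worked sketch in Python. Every number in it is invented for illustration – the ‘true’ benefit of the infusion, the proportion of children who never receive it, and their recovery rate are assumptions, not data from any real trial:

```python
# Hypothetical trial, numbers invented for illustration only.
# 100 children per arm; suppose the infusion raises recovery from 60% to 80%.
n = 100
p_control = 0.60   # recovery proportion in the control arm
p_treated = 0.80   # recovery proportion among children who actually get the infusion

# Suppose 20 children allocated to the infusion never receive it
# (e.g. a falling K+ before starting) and, being sicker, recover at only 40%.
non_receipt = 20
p_nonreceipt = 0.40

# Intention to treat: analyse by allocated group, whatever was received.
itt_infusion_recoveries = (n - non_receipt) * p_treated + non_receipt * p_nonreceipt
itt_effect = itt_infusion_recoveries / n - p_control   # diluted estimate: 0.12

# Per protocol: drop the 20 non-receivers from the infusion arm entirely.
pp_effect = p_treated - p_control                      # looks like the 'true' 0.20

print(f"Intention-to-treat effect: {itt_effect:.2f}")
print(f"Per-protocol effect:       {pp_effect:.2f}")
```

The per-protocol figure looks tidy only because we built it that way: in real life the children left in the infusion arm after dropping the non-receivers are a selectively healthier group, so that comparison has lost the protection of randomisation. The intention-to-treat estimate is smaller, but it honestly reflects the effect of the *approach* as it would be used.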
* OK – so it could be three, four etc arms. It’s just that two is easier to think about. And commonerer.