As a junior doctor, I have had, and still have, some fantastic senior colleagues to work with, who generally give important and valuable advice. Over the placements and years, their advice is slowly turning me into the doctor that I aspire to be: an amalgamation of all the good bits from every doctor I have worked with so far along the way. I say doctor, but really I’m talking about all the other people that play a part in the hospital experience. Physiotherapists, pharmacists, health care assistants, porters, and so many more. Most of all, the many brilliant nurses I’ve had the pleasure of working alongside.
When I first started out as a doctor, the single biggest piece of advice I was given, one that still holds true today and that I now pass on to those unlucky enough to be my juniors, is to listen to the nurses. Make friends with the nurses. Don’t get on their bad side. Pay attention to what they say. That advice has saved me and saved my patients more times than I can count.
Because nurses are always right. Aren’t they?
It’s a brave team that would design a study to pit nurses against a scoring tool, but that’s exactly what Allan Cameron and team from Glasgow have been up to. The Glasgow Admission Prediction Score (GAPS) was developed to estimate the probability of a patient being admitted, based on data collected at triage such as the patient’s age, early warning score, and triage category. The tool has been validated with good results, and could be used to help to optimise flow within the ED through early identification of those more likely to need a hospital bed.
This study, published in the January EMJ, aimed to compare GAPS to the triage nurses’ gestalt on the likelihood of admission. To assess the latter, a visual analogue scale (VAS) was used, onto which triage nurses would mark how certain they were of the patient being admitted or discharged. Previous studies on the topic have shown that when nurses are confident of the outcome, they’re usually right, and this study was no different. As always, we’d recommend you take a look at the paper itself to draw your own conclusions from the results.
3844 attendances to a single emergency department were studied; however, a portion were allocated directly to a minors or resuscitation area, bypassing triage, and further patients were excluded for being under 16 or for leaving before treatment was complete. Only 9 of the 2091 patients who were triaged had insufficient data completion, which is a respectable figure. Of the 1829 attendances suitable for inclusion, 745 were admitted (40.7%), which seems high; however, as stated, this did not include a large number of minors patients, who were more likely to have been discharged.
Nurse gestalt was found to be more sensitive than GAPS (81.2% vs 71.8%) but less specific (77.4% vs 86.6%). There was no correlation between nurse seniority and accuracy of predictions. Whilst the GAPS results were more centrally distributed, results from the VAS showed peaks at 0–5% and 95–100% certainty of admission. Nurses recorded this level of certainty for 781 patients, and in these patients they performed significantly better than GAPS, correctly predicting the outcome in 92.4% (722). Excluding these patients, though, GAPS provided the more accurate assessment.
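For readers less familiar with these metrics, sensitivity and specificity fall straight out of a simple two-by-two table of predictions against outcomes. A minimal sketch in Python; the counts below are back-calculated assumptions from the nurses’ reported figures and the admission/discharge totals (the paper’s actual confusion matrix is not reproduced here), so treat them as illustrative only:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

    Here a "positive" is a prediction of admission, so TP counts patients
    predicted to be admitted who were in fact admitted, and TN counts
    patients predicted to be discharged who were in fact discharged.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only: 745 admissions and 1084 discharges, split to
# approximate the nurses' reported 81.2% sensitivity and 77.4% specificity.
sens, spec = sensitivity_specificity(tp=605, fn=140, tn=839, fp=245)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
# → sensitivity=81.2%, specificity=77.4%
```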
In practice, the team found that the most accurate way to predict likelihood of admission was GAPS, but with the triage nurses able to override the tool where they were confident (>95%) as to whether the patient would be admitted or discharged. The authors admit that more work is needed, but maybe we’ll see admission prediction scores in use in the future.
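The hybrid approach the team describes amounts to a simple decision rule: defer to the nurse’s gestalt when their VAS mark shows high certainty in either direction, and fall back on the GAPS prediction otherwise. A minimal sketch, assuming a VAS expressed as 0–100% certainty of admission, with the GAPS admission threshold applied upstream (the function name and cut-offs here are illustrative, not taken from the paper):

```python
def predict_admission(vas_percent, gaps_predicts_admission):
    """Combine triage-nurse gestalt with the GAPS tool.

    vas_percent: nurse's VAS mark, from 0 (certain of discharge)
                 to 100 (certain of admission).
    gaps_predicts_admission: bool, the GAPS tool's own prediction
                 (its score threshold is assumed to be applied upstream).
    """
    if vas_percent >= 95:   # nurse confident of admission: override GAPS
        return True
    if vas_percent <= 5:    # nurse confident of discharge: override GAPS
        return False
    return gaps_predicts_admission  # otherwise defer to the score

# A nurse 98% certain of admission overrides a GAPS "discharge" prediction:
predict_admission(98, gaps_predicts_admission=False)  # → True
# An uncertain nurse (40%) defers to GAPS:
predict_admission(40, gaps_predicts_admission=True)   # → True
```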
Interestingly, there is no mention of whether those patients discharged home were followed up to see if any were admitted in the following days. Maybe the nurses’ gut feeling wasn’t wrong after all…