Evidence does not speak for itself. We need to hold it up against what we know and do and make sense of the findings for ourselves. There are tools and systems to help assess research quality, but there is no substitute for discussion round the table to identify the research which really matters to frontline staff.
Every week, our small editorial team looks at around 200 research outputs from a range of sources to see which are worth sharing more widely. We pick only around four a week to work up as NIHR Signals. We are helped in this by a pool of 1,400 raters, largely clinicians from a range of backgrounds, who assess the relevance and importance of the research in relation to their practice and experience. If we think a study could change practice or make decision-makers think twice, we discuss the headline message carefully, sometimes correcting the conclusions of authors who overstate their findings. We also identify an opinion leader to provide context and commentary. Fewer than 2% of outputs make it as NIHR Signals, helping to cut through the "noise" of the large volumes of evidence now being generated.
So which make the cut? We have different views round the table, reflecting our different clinical and service experiences. Like other editorial processes, we consider research reliability and quality. Sometimes we are suspicious of large effect sizes and risk of bias ratings which seem too good to be true, as in a recent review of laser therapy for knee osteoarthritis. Other questions of relevance and external validity need more discussion. Some international comparative effectiveness reviews may not align with UK practice, guidelines, or available treatments. For instance, a urological surgeon rating a recent review of surgical treatment for enlarged prostate noted that two newer NICE-approved treatments in common use did not feature.
For complex workforce or service delivery interventions, we may question how other systems of care line up with our own. For instance, studies in a recent international review on managing common mental health disorders in primary care embraced different models of care, not all of which included family doctors or community mental health teams. How relevant or helpful would the general findings be to a GP in Hull or Honiton? Sometimes we make a judgement that a review is "good enough" in some fields, despite moderate-quality evidence or some heterogeneity of interventions or contexts. This was the case for a carefully conducted review of occupational therapy for people with dementia and their carers in home settings, which had the benefit of a range of meaningful outcomes beyond assessment of functional decline.
Many of the studies we select are funded by the NIHR and reflect a problem-driven approach in real-world settings, such as a recent large trial of adrenaline for out-of-hospital cardiac arrest. These include very complex public health and service delivery interventions, from a trial of an anti-bullying intervention in schools using principles of restorative justice to a mixed-methods evaluation of intentional nursing rounds in acute wards. It is exciting to see new research which does not shy away from wicked problems and complexity.
When we set up the NIHR Dissemination Centre five years ago, we thought of filtering and assessing research as a largely technical process. We have learned that making sense of evidence is a social and deliberative activity. We need systematic approaches to help us—but the value is in the animated, sometimes heated, discussion of what the research means to different people with different experience. It really is good to talk.
Tara Lamont is director of the NIHR Dissemination Centre. The views expressed here are her own.
With thanks to fellow NIHR Signals editorial team members: Rob Cook, Peter Davidson, Johnny Lyon-Maris, Rosie Martin, and Elaine Maxwell.