There is nothing more convincing than someone citing research, and yet we often don’t know if what’s being cited is any good. Research can be bad if it’s poorly conducted or if the wrong evaluative method was used to answer the question. The methods we use in healthcare are often quite limited, especially when it comes to community interventions. This is why I have been working with the Institute of Medicine (IOM) to open up what we mean by evaluation. We’re holding a potentially groundbreaking meeting on 27 August: “Designing Evaluations for What Communities Value.”
The alleged gold standard in evaluation is the randomised controlled trial, so much so that one often hears people trying to persuade others on the basis of “RCT evidence.” This lazy citing of a method is one sure way to detect that someone doesn’t know what he or she is talking about (venture capitalists, take note), because a randomised trial is not always appropriate for the question being asked.
My personal gripe with the RCT is with the “controlled” bit. By controlling for things that might influence the impact of an intervention, researchers create such sanitised environments that what is learnt cannot be applied to the real world, where everything is connected to everything else, often in unknown ways.
There have been some notable attempts to address this, such as “pragmatic randomised controlled trials,” “cluster randomised controlled trials,” and J-PAL’s “randomised evaluation,” but the truth is we don’t yet know the best approach to assessing the value of community interventions.
There are two parts to this problem. Firstly, for reasons that are unclear, healthcare tends to limit itself to a small suite of evaluative techniques, often organised as the so-called hierarchy of evidence. Secondly, although we know that all communities are different, and their problems unique to them, we continue to strive for a single “best approach,” seemingly so that interventions can be compared from one locality to the next. Although this aim is laudable, it repeats the fallacy of a “gold standard.”
There is no gold standard to assess a community intervention. Instead, there is a suite of evaluative techniques that can be used, depending on the community and the intervention being tried. The IOM meeting on 27 August will aim to illustrate this.
Given our desire to be anchored in the real world, we’ve been lucky enough to base the meeting on the needs of the communities entering the “Way to Wellville” competition, which is being run by Esther Dyson’s HICCup. These communities are being challenged to become healthier in whatever way works for them. Although HICCup may ultimately want a single evaluation framework for their “competition” (something I completely disagree with), we’ve built a programme around the needs of a single Wellville community, with representatives from other Wellville communities present as observers.
We’re in the final stages of planning the event, but the idea is that people of the community, aided by expert facilitators, will talk about what it’s like to live there and what they value when it comes to their health. Meanwhile, proponents of different techniques will jot down how they’d go about evaluating change in the community—whether it happened and whether it was positive or negative—based on what the communities value.
The key to evaluation in the real world, however, is adjusting to what’s seen. HICCup’s competition will last five years. In year one, a community may see something negative and hence want to change their intervention. Doing so would likely require them to also change their approach to evaluation. If this continues year on year, by the end of year five both the intervention and its evaluation may be significantly different, and community-specific, from what they started with. This is why it makes little sense to try to come up with a single evaluative framework for community interventions.
Healthcare’s limited approach to evaluation not only blinds it to the value that interventions create in communities, but also keeps it ignorant of what communities actually care about. Opening our minds to other forms of evaluation helps us to better understand and respect the communities we serve. What’s so groundbreaking about the IOM meeting is not the format, as such, but the simple notion that we should let communities define the interventions they want—some of which will have little direct link to the biomedical definition of health.
We hope you’ll join us for this meeting, but do hurry; spaces are limited.
Pritpal S Tamber is the director of Optimising Clinical Knowledge Ltd, a consultancy that helps organisations in and around healthcare design clinically credible strategies that have a measurable impact on care. Follow him on Twitter @pstamber.
Competing interests: The Institute of Medicine (IOM) is paying my travel and accommodation to speak and participate at the meeting described in this blog, something that they’re doing for everyone listed on the agenda. The IOM’s “Collaborative on Global Chronic Disease” is also collaborating with a group I am currently forming, called the “Creating Health Incubation Group.” The “Collaborative” and the “Incubation Group” cover similar topics, but there is no formal partnership as such.