Richard Smith: What is “implementation research” and whatever happened to GRIP?

I’m trying to organise a workshop on “implementation research,” and I find that the concept is as hard to pin down as poetry. Might you be able to help me?

The overall idea behind implementation research is not hard to identify. It’s about trying to make sure that the results of research are applied in the real world of health care, or in whatever activity the research is relevant to. Funders of research are naturally interested in seeing the research they fund make a difference, although it used to be that a few articles in prominent journals were enough to keep them happy.

But implementation research is by no means the only activity concerned with closing what is sometimes called “the know-do gap.” There is also knowledge translation, quality improvement, and scaling up. Management and education scientists too are interested in understanding how to make change happen. Are these all the same thing? And do you remember GRIP, “getting research into practice”? It was very fashionable some 15 to 20 years ago, but I see from a search that the phrase seemed to disappear around 1995. Was that because it was a flop?

Certainly turning research into real-world activity is hard, and maybe the constant renaming of the process and the uneasy search for new methods reflect failure and disappointment.

Memory is a way (possibly a hopeless one) of sorting the wheat from the chaff, and I remember a relevant article in the BMJ by Richard Grol, a Dutch researcher interested in quality improvement. (The article has been cited 545 times, so maybe sorting by memory is a good method.) Richard began his article by imagining a group of experts trying to reduce Caesarean section rates that research has shown to be too high. The epidemiologist advocates a meta-analysis and evidence based guidelines. The educational expert is against such a top down process and wants small groups of doctors to discuss the problem and make local changes. The health services researcher wants multicentre audits with feedback on variations among hospitals. The management expert proposes an analysis of decision making and a quality improvement team. The health authority thinks that financial incentives are the answer.

Richard points out that all these suggestions are based on theories of what makes people behave as they do, and he argues that the best results will probably be achieved by using different methods at different times. He proposes a general theory of developing a concrete proposal, selecting evidence based interventions, identifying obstacles to change, linking interventions to obstacles, developing a plan, and then carrying out and evaluating the plan.

About three years ago I was asked by Richard to give a talk on how we had done with improving the quality of health care (mostly, I suggest, a matter of incorporating research evidence into practice) and, if we hadn’t done well, as he suspected, why that was. I like to think that I started with an open mind, and the data I assembled seemed to show that we hadn’t done well. (Richard, Don Berwick, and Michel Wensing thought the same in a BMJ article of 2008.)

I couldn’t answer the question of why quality hadn’t improved, but I wasn’t short of hypotheses. Firstly, it’s hard for doctors and other clinicians to accept that they may not be providing optimal care. And yet, as with Alcoholics Anonymous, where the first of the 12 steps to recovery is admitting “I am powerless over alcohol,” it must be essential to recognise that care isn’t optimal in order to improve it. Secondly, quality improvement is a matter of thinking in systems, and clinicians are not only not systems thinkers, they are almost averse to that way of thinking, preferring to see themselves as heroic individuals. Thirdly, quality improvement is still largely the province of the few, a specialty, and yet to be effective it must belong to the many. Fourthly, we haven’t generated the emotion that is necessary to make change happen. Fifthly, we lack adequate evidence on what works in improving quality, and research like this is low status. (Lack of good research, and of incentives for such research, was the hypothesis favoured by Richard and his coauthors in their article.) My final hypothesis was that the problem of improving quality is hard and getting harder as the population ages, as people experience more comorbidity, take more drugs, and need more interventions, and as health systems become more complex.

My conclusion was: “If I were the king I would throttle back dramatically on discovery research and concentrate on research to implement what we already know—that could make a huge difference.”

This statement brings me full circle after perhaps deviating too long into quality improvement, the discipline for closing the “know-do gap” that I know best. So what, once more, is implementation research, and how can we organise our workshop? I think that I’ll select practical examples and forget theory, or is that a mistake?

Implementation research could cover two issues: research to understand the determinants of successful implementation, and research into the best study designs to facilitate implementation.

On the determinants of successful implementation: there is a solid body of empirical research suggesting that the following factors are critical: setting implementation objectives at the same time as the scientific ones; engaging probable implementers in developing those objectives and throughout the research process; and having at least one dedicated researcher constantly focused on how and when the work will shift from contributing to knowledge to implementation in scaled populations. I would recommend including these factors in the protocol requirements for GACD researchers.