Amanda Glassman and Rachel Silverman: Evaluating what works in global health

Around the world, people are benefitting from a global health revolution. More infants are surviving their first months of life; more children are growing and thriving; and more adults are living longer and healthier lives. This amazing worldwide transformation is cause for huge celebration, but it also raises several questions. What, specifically, are we doing right? What are the policies and programs driving the global health revolution from the ground up? Or put more simply: what works in global health, and how do we know?

These questions are not just academic. Every year, global health funders and low- and middle-income country governments must set funding priorities within a limited budget. While funders and implementers can hope their programs are making an impact, without rigorous and attributable evaluation at scale, they cannot know for sure. And every time their intuition gets it wrong—every time they unwittingly continue to fund a program that is not working or not cost-effective—they forgo the opportunity to fix design flaws or reallocate to more impactful interventions. In so doing, they also forgo opportunities to increase the pace of health improvement even further; to ensure the most vulnerable members of society equally share in health gains; and to improve the global evidence base on effective global health interventions and delivery strategies.

Earlier this month we released Millions Saved: New Cases of Proven Success in Global Health, a book that chronicles the global health revolution from the ground up, featuring 22 case studies from low- and middle-income countries around the world. Eighteen of these programs showed attributable health impact at scale, with just a subset saving a combined 18 million life years—an incredible achievement that has changed so many families' lives for the better.

How do we know? We know because these programs underwent rigorous impact evaluation, published in journals like The BMJ or through institutions like the World Bank. Recent years have seen a revolution in rigorous impact evaluation in low- and middle-income countries: in 2014, more than 300 such evaluations were published across health and other sectors, up from roughly 10 in 1995. As a result, we can attribute a portion of overall health improvements to specific programs and initiatives, such as the rollout of the MenAfriVac Meningitis A vaccine across Africa’s Meningitis Belt, a cash transfer program to protect Kenya’s poorest orphans and vulnerable children from deprivation and ill health, and Thailand’s comprehensive program to control tobacco consumption.

Knowing what works is important, but knowledge about effectiveness cuts both ways. In addition to success stories, Millions Saved also features four programs that did not work as intended—where efficacious interventions did not improve health when implemented at scale. In Peru, for example, a handwashing campaign convinced mothers to lather up but proved unable to drive a measurable reduction in diarrhea. Unlike so many other failures, however, this program and the others featured in Millions Saved had a silver lining: their rigorous evaluations exposed problems in time to change course, alter the design, and improve those programs for future beneficiaries.

Yet even today, huge gaps remain in the evaluation literature. And this means many such programmatic failures will remain undetected and unaltered, while successes remain uncelebrated and unexamined. We are often asked why the new Millions Saved omits a favored intervention, disease priority, or specialty. Where is mental health, for example? Or heart disease? Cancer? And what about tuberculosis or family planning? The questions are varied, but the answer is always the same: despite our best efforts, we could not find a suitable, rigorous evaluation of an at-scale program that demonstrated attributable health impact. That is not to say that interventions in these areas have not improved health at scale; it is quite likely that they have. But without rigorous at-scale evaluation, we simply cannot and do not know for sure.

Millions Saved is great news, but we still have work to do. Rigorous evaluation at scale is feasible and essential. We cannot afford to leave health impact on the table. So we’ll end with a plea: if you care about cancer, or heart disease, or tuberculosis, or family planning, please help us include it in the next Millions Saved. Evaluate your programs at scale, learn from your successes and failures, and help the whole world better understand what works in global health.

Amanda Glassman is director of Global Health Policy and a senior fellow at the Center for Global Development.

Rachel Silverman is a senior policy analyst at the Center for Global Development.

Competing interests: Millions Saved is partially funded by the Bill and Melinda Gates Foundation, which supported several of the case studies featured in the book.