Unnatural experiments

By Charles Weijer and Monica Taljaard.

When government health programmes are studied in cluster randomized trials, should the health programme itself be considered “routine government care” and not part of the research, or is it a research intervention to be scrutinized by the research ethics committee? Watson and colleagues argue for the former position; we disagree and argue for the latter. The issue can be traced to a series of blog posts by Richard Lilford (blog 1, blog 2, blog 3, blog 4). In this blog, we explore two examples Lilford considers and the conclusions he draws.

The first example is the “Oregon experiment”. The Oregon Health Plan Standard is public health insurance for low-income residents of Oregon between 19 and 64 years of age. In 2008, the state sought to expand health insurance access to as many as 90,000 adults with slightly higher incomes. As Oregon could not afford to provide insurance to all, it drew lots to pick 30,000 names from the waiting list. Two years later, researchers interviewed a random sample of adults who were eligible for the lottery about their health and healthcare utilization. Even though the roll-out of expanded Medicaid in Oregon was not research but driven by scarce resources, researchers analysed the collected data as a randomized controlled trial. The data collection was approved by a research ethics committee, and participants’ informed consent was obtained. According to Lilford, this is a fine example of “an entirely service-led intervention…where the service dog wags the research tail.”

The second example is the Mexican Universal Health Insurance trial. Seguro Popular is a government programme designed to extend health insurance and upgrade local health facilities for 50 million uninsured Mexicans. Researchers collaborated with policymakers to study the health programme in a parallel-arm cluster randomized trial. Seventy-four matched pairs of health facility catchment areas were randomized to receive either the health-insurance programme and medical facility upgrades or no intervention. Households in the catchment areas were surveyed at baseline and again 10 months after the intervention. Study outcomes included health spending, health outcomes, and healthcare utilization. The publication does not refer to research ethics committee review or informed consent. Unlike the Oregon experiment, the Mexican Universal Health Insurance trial was prospectively designed as a cluster randomized trial.

For better or worse, most programme evaluation occurs after the fact and without a prospective randomized design. This reflects the realities of programme evaluation and government policies: the programme may need to be implemented urgently; the policy may affect an entire region or health system, so there aren’t enough independent units to randomize; or there may be some other concern with randomization. Whatever the reason, the government implements the health programme in the entire population at once; researchers merely gather and analyse data after the fact. In these cases, Lilford correctly observes that “the researcher has no part in the intervention and cannot be held responsible for it in any way—the responsibilities of the researchers relate solely to research, such as data security and analysis.” The use of a lottery is an unusual feature of the Oregon experiment, but it doesn’t change the ethical analysis. The research component is limited to in-person interviews, and ethics review and informed consent are appropriately delimited to these activities. The fact that the data can be analysed as a randomized trial is a happy accident, not the product of a study design authored by researchers.

Lilford sees prospectively randomized evaluations of government policy and retrospective evaluations as ethically equivalent. Speaking of the Mexican Universal Health Insurance trial, he says: “But even when the study is prospective, for instance, involving a cluster RCT, it does not follow that the researcher is responsible for the intervention…The interventions are ‘owned’ by the health service, in the main.” This realization, he argues, “makes something of a nonsense of the Ottawa Statement on the ethics of cluster trials—for instance, it says that the researcher must ensure that the study intervention is ‘adequately justified.’” The idea that a researcher would be asked to justify a government health programme is, he says, “poppycock.” “Rigid adherence to the tenets of this declaration could do [harm] by limiting the value that society could reap from evaluations of the large number of natural experiments that are all around us,” says Lilford.

With the rare exception of government programmes allocated to citizens by lottery, we are not surrounded by natural experiments. We are inundated by government health programmes of uncertain effectiveness and value. Legitimate government priorities commonly preclude the use of prospective randomized designs to evaluate programmes. If the health need is urgent, the government will employ a “big-bang” implementation across the entire policy jurisdiction. In other cases, implementation may be phased, and the programme made available first to regions where the need is most urgent. While these facts preclude a cluster randomized trial, an interrupted time series methodology, perhaps with multiple baselines, may be used for rigorous evaluation. Such is the disconnect between government priorities and prospective randomized designs that we can count on one hand the number of completed cluster randomized trials of government programmes. Given these realities, it is misguided to chide the Ottawa Statement.

Unusual cases in which the prospective randomized evaluation of a government programme is possible differ from those allowing only for retrospective evaluation in one critical respect: the government is not implementing the health programme as it routinely would. Rather, the government collaborates with researchers to randomly allocate provinces, communities or hospitals to intervention or control (or to different schedules of implementation) so that the programme may be evaluated. As a result, some provinces, communities or hospitals may be prevented from receiving the health programme, or delayed in receiving it. These are most assuredly unnatural experiments. Even if the government is the author of the programme, researchers are the authors of the study design. And it is the design that triggers equipoise issues, including the justification of the study intervention and control conditions, that must be assessed by the research ethics committee.


Paper title: The Ottawa Statement does not impede randomized evaluation of government health programmes

Author(s): Charles Weijer and Monica Taljaard

Affiliations:

CW: Rotman Institute of Philosophy, Western University

MT: Clinical Epidemiology Program, Ottawa Hospital Research Institute; School of Epidemiology and Public Health, University of Ottawa

Competing interests: CW receives consulting income from Eli Lilly & Company Canada

Social media accounts of post author(s): https://twitter.com/charlesweijer
