Research highlights – 11 February 2011

“Research highlights” is a weekly round-up of research papers appearing in the print BMJ. We start off with this week’s research questions, before providing more detail on some individual research papers and accompanying articles.

Research questions

  • What was the impact of the first phase of the Safer Patients Initiative in UK hospitals?
  • Has the second phase of the Safer Patients Initiative had an additional impact on safety?
  • Does real-time audiovisual feedback during cardiopulmonary resuscitation improve outcomes?
  • Can office-induced hypertension be reduced by recording blood pressure with an automated device?
  • Can neonatal mortality be reduced in a country with low resources by training and equipping traditional birth attendants?

A safer NHS; but why?
Between 2004 and 2008 a British charity, the Health Foundation, ran the Safer Patients Initiative (SPI) to test ways of preventing harm from routine hospital care. The initiative’s aims and methods were similar to those of the US “Saving 100,000 lives” campaign and, indeed, the US Institute for Healthcare Improvement designed and oversaw both of these programmes.
The first phase of the Safer Patients Initiative took place in one hospital in each country of the UK before being rolled out to a further 20 hospitals in 2006. Amirta Benning and colleagues now report their evaluations of both the first and the second phases. As Peter Pronovost and colleagues say in a linked editorial, these studies bring mixed news.

The good news is that safety and quality of care improved in these NHS hospitals over the study period. Along with John Appleby’s data briefing on the UK’s relatively good health outcomes and an all-time high in public ratings of the NHS, these findings contradict the government’s assertion that the NHS needs urgent reform.
But Benning and colleagues also found that the improvement could not be attributed to the Safer Patients Initiative: overall, care improved equally in both treatment and comparison hospitals. That’s not surprising, say the editorialists, because the interventions were not well enough selected, designed, or piloted, and the programme failed to get enough buy-in and leadership from clinicians.

Feedback in CPR
Devices that deliver feedback “may be useful” in helping rescuers give high quality cardiopulmonary resuscitation (CPR), according to the European Resuscitation Council’s guidelines, but there’s been no clear evidence to support the use of such devices in any particular setting. David Hostler and colleagues did a multicentre, cluster-randomised trial assessing the effectiveness of real-time audiovisual feedback on CPR after out-of-hospital cardiac arrest in more than 1500 patients. Clusters of emergency medical service providers in North America were randomised to use a monitor-defibrillator with or without feedback, and the groups switched between the treatment arms throughout the study. Although the results showed improvements in CPR performance when feedback was used, these improvements failed to translate into better return of spontaneous circulation or other clinical outcomes. In an accompanying editorial, Peter Leman considers why this might be so.
Another recent BMJ paper looked at chest compression only CPR – which is increasingly used by lay rescuers – compared with conventional CPR. Toshio Ogawa and colleagues did an observational study including data from over 40 000 out-of-hospital resuscitation attempts by lay people in Japan. They found that conventional CPR had better outcomes than chest compression only CPR for some patients, such as those with arrests of non-cardiac origin, younger people, and people in whom the start of CPR was delayed.
Dodging the white coat effect

Measurements of blood pressure in the clinic are subject to many sources of inaccuracy, including variation in technique and the “white coat effect.” Ambulatory and home measurements are more reliable, but are not without their drawbacks, especially since the evidence base for hypertension treatment relies on clinical values. Martin Myers and colleagues did a cluster randomised controlled trial examining a third option: the use of an automated device by patients to measure their own blood pressure in the clinic.
In the intervention group, primary care patients with systolic hypertension sat alone in a quiet room while taking their own measurements. Readings taken with this method were closer to the patients’ 24 hour ambulatory blood pressures than were those taken manually in the control group – although the automated method did not completely eliminate the white coat effect.

In an editorial, Jonathan Mant and Richard McManus say that the precise role of this method still needs to be determined, and that the findings highlight the importance of measuring blood pressure properly. A reviewer remarked that they would be happy to see this paper about “the bread and butter of primary care” in a general journal where it could reach its target audience of GPs and practice nurses.

Research online: For other new research articles see www.bmj.com/research
Gerd Gigerenzer and colleagues recently challenged journals to report more absolute risks in the abstracts of research papers. That’s fair enough for randomised controlled trials, but for many observational studies and meta-analyses absolute results can’t always be reported easily or meaningfully as a single short summary statistic.

Debate in bmj.com’s Rapid Responses illustrates this point well. Sven Trelle and colleagues’ network meta-analysis on the cardiovascular safety of non-steroidal anti-inflammatory drugs found some risk for all such drugs, with naproxen looking the safest. GP Alex Thain responded: “it would really help me, today seeing my real patients, to have some idea of the numbers needed to harm. If the cardiovascular risk for my patients rises from 1% to 4%, for example, they are at perfect liberty to remember that they therefore have a 96% chance of not having a cardiovascular event.”

The authors replied: “Measures of absolute risk, including numbers needed to harm or numbers needed to treat, should not be directly pooled in meta-analysis. Rather, NNH and NNT need to be calculated from the rate ratio based on an assumed baseline risk, which will depend on characteristics of a specific population. Event rates in included trials were considerably lower than what is observed in routine clinical settings.” Helpfully, they added a table giving the NNH/NNTs for each drug comparison and for patients at low, medium, or high baseline risk. But they made the point well that you can’t always state an absolute risk in a paper’s abstract.
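The calculation the authors describe can be sketched in a few lines. This is an illustrative Python sketch only, not the meta-analysis’s actual method: it treats the rate ratio as an approximate risk ratio, and the baseline risks and rate ratio used below are hypothetical numbers, not results from the paper.

```python
def number_needed_to_harm(baseline_risk, rate_ratio):
    """Derive NNH from a relative effect and an assumed baseline risk.

    The risk in the exposed group is approximated as
    baseline_risk * rate_ratio; NNH is then the reciprocal of the
    absolute risk increase.
    """
    treated_risk = baseline_risk * rate_ratio
    absolute_risk_increase = treated_risk - baseline_risk
    if absolute_risk_increase <= 0:
        raise ValueError("no excess risk: NNH is undefined")
    return 1 / absolute_risk_increase

# The same rate ratio implies very different NNHs at different
# baseline risks (here: hypothetical low, medium, and high risks):
for p0 in (0.01, 0.05, 0.10):
    print(f"baseline risk {p0:.0%}: NNH = {number_needed_to_harm(p0, rate_ratio=1.5):.0f}")
```

This makes the authors’ point concrete: the pooled rate ratio is a single number, but the NNH it implies depends entirely on the assumed baseline risk of the population, which is why a single absolute figure can’t simply be stated in the abstract.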