“Research highlights” is a weekly round-up of research papers appearing in the print BMJ. We start with this week’s research questions, then provide more detail on some of the individual research papers and their accompanying articles.
- How effective are two risk stratification schemes at predicting thromboembolic events in patients with atrial fibrillation?
- Is CPR with chest compression only as good as, or better than, conventional CPR for people who have a cardiopulmonary arrest out of hospital?
- Has pay for performance in primary care improved the management and outcomes of hypertension in the UK?
- What was the effect of a quality improvement initiative in intensive care units on hospital mortality and length of stay for older adults?
- How good is the information provided about caesarean section in Brazilian women’s magazines?
Pay for performance
Do financial incentives to improve the quality of healthcare really provide value for money? Brian Serumaga and colleagues looked at the effect of the UK pay for performance (PFP) initiative—the Quality and Outcomes Framework—on the management of one target problem, hypertension, in primary care (doi:10.1136/bmj.d108). Almost 20% of available PFP funds were directed at this goal, and the scheme included specific targets for general practitioners.
A time series analysis of data for almost 500,000 patients between January 2000 and August 2007 showed no discernible effect of pay for performance on the processes of care or on hypertension related clinical outcomes, even though nearly all doctors participated in the scheme. However, there was some good news—a major reason for the lack of effect seemed to be that good quality care was already stable or improving before the implementation of PFP in 2004. Although the initiative’s payment incentives were substantial, the specific targets for hypertension might have been set too low, meaning that doctors did not need to change behaviour greatly to attain them.
The authors say their findings emphasise the need for governments to test PFP schemes before they invest in them, and to investigate other methods of improving care, such as education. “It seems policy is heading in one direction, while the evidence is heading in another direction,” said Serumaga, quoted in a New York Times blog about the paper.
In an editorial, John Reckless draws together lessons from this study and from this week’s two Analysis articles, which also criticise aspects of cardiovascular disease prevention in the UK (doi:10.1136/bmj.d201). His message: “The current model in the UK is not necessarily the right one or the only one.”
Predicting risk in patients with atrial fibrillation
Patients with atrial fibrillation have a substantial risk of stroke that is modified by the presence or absence of various risk factors. These factors have been used to develop risk stratification schemes that allow doctors to target those most in need of treatment. However, over the past 15-20 years, developments in risk schemes have not improved their predictive value for patients at high risk. For those deemed at “low risk,” aspirin is the recommended treatment, but recent data suggest that this approach doesn’t work for all patients in this stratum. Greater effort to identify “truly low risk” patients, and to consider all others for oral anticoagulation, is thought to be the way forward.
The most commonly used scheme for stratifying risk of stroke is the CHADS2 score (one point each for congestive heart failure, hypertension, age ≥75 years, and diabetes mellitus; two points for previous stroke/transient ischaemic attack). The CHA2DS2-VASc score has been developed to complement CHADS2 by taking additional risk factors into account: it adds one point each for vascular disease, age 65-74 years, and female sex, and gives age ≥75 years a doubled weight of two points, alongside the two points for previous stroke/transient ischaemic attack.
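As a minimal sketch of how the two schemes are tallied, the point weights above can be written as two small functions. This is illustrative only, using the standard published weights; the function names and the example patient are our own, not taken from the study.

```python
# Illustrative tally of the two stroke risk scores from standard point weights.

def chads2(chf, hypertension, age, diabetes, prior_stroke_tia):
    """CHADS2: 1 point each for congestive heart failure, hypertension,
    age >=75, and diabetes; 2 points for previous stroke/TIA. Range 0-6."""
    return (int(chf) + int(hypertension) + int(age >= 75)
            + int(diabetes) + 2 * int(prior_stroke_tia))

def cha2ds2_vasc(chf, hypertension, age, diabetes, prior_stroke_tia,
                 vascular_disease, female):
    """CHA2DS2-VASc: adds vascular disease, age 65-74, and sex category,
    and doubles the weight for age >=75. Range 0-9."""
    score = (int(chf) + int(hypertension) + int(diabetes)
             + int(vascular_disease) + int(female)
             + 2 * int(prior_stroke_tia))
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    return score

# Hypothetical example: a 70 year old woman with hypertension and no other
# risk factors scores 1 on CHADS2 but 3 on CHA2DS2-VASc.
print(chads2(False, True, 70, False, False))                      # 1
print(cha2ds2_vasc(False, True, 70, False, False, False, True))   # 3
```

The example shows why the newer score can reclassify patients: the same patient sits at the bottom of the CHADS2 range while CHA2DS2-VASc counts her age band and sex.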
Jonas Bjerring Olesen and colleagues sought to validate these schemes in a large, real world cohort of patients with atrial fibrillation who had not received anticoagulation, using the Danish national patient registry. They found that CHA2DS2-VASc was better at predicting stroke in patients categorised as being at low and intermediate risk by the CHADS2 scheme, so identifying patients at “truly low risk.” The authors also estimated the importance of each component of the scores. In an editorial, Margaret Fang observes that although better risk schemes may help to inform choice of treatment, ultimately the decision should be based on the best balance of risk and benefit for the individual.
Effectiveness of AS03 adjuvanted pandemic H1N1 vaccine
Approval of the vaccines produced to fight the influenza A/H1N1 pandemic in 2009 was accompanied in several countries by the expectation that their effectiveness would be assessed with post-marketing epidemiological methods. In a paper published on bmj.com this week, Danuta M Skowronski and colleagues report estimates of the effectiveness of the AS03 adjuvanted pandemic H1N1 vaccine most used in Canada during the autumn of 2009, based on Canada’s well established sentinel surveillance system and using a case-control design. They found that a single dose of the vaccine conferred excellent protection: 14 days or more after vaccination, its estimated effectiveness was 93% (95% confidence interval 69% to 98%) against medically attended, laboratory confirmed influenza A/H1N1 illness. This finding primarily reflected protection conferred to children and young adults. An intriguing and as yet unanswered question is how much the vaccine given at the end of 2009 or in early 2010 has continued to protect against the H1N1 virus in the 2010-11 season, say editorialists John Watson and Richard Pebody.
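In a case-control design of this kind, vaccine effectiveness is conventionally estimated as one minus the odds ratio of vaccination among cases versus controls, expressed as a percentage. The sketch below illustrates that arithmetic with invented counts; the numbers are not from the Skowronski study.

```python
# Illustrative only: vaccine effectiveness from case-control counts,
# using VE = (1 - odds ratio) * 100%. All counts below are invented.

def vaccine_effectiveness(vacc_cases, unvacc_cases, vacc_controls, unvacc_controls):
    """Estimate vaccine effectiveness (%) as 1 minus the odds ratio of
    vaccination in cases relative to controls."""
    odds_ratio = (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)
    return (1 - odds_ratio) * 100

# Hypothetical: 5 of 105 cases vaccinated vs 50 of 150 controls vaccinated.
# Odds ratio = (5/100) / (50/100) = 0.1, so VE = 90%.
print(round(vaccine_effectiveness(5, 100, 50, 100), 1))  # 90.0
```

A low odds of vaccination among those who fell ill, relative to those who did not, translates directly into a high effectiveness estimate, which is how a figure such as 93% arises from surveillance data.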