Philip Wilson: The dangers of science by press release

Imagine you’ve just completed a groundbreaking piece of research. Do you: a) go and tell your mates down the pub; b) publish in a peer reviewed journal; or c) rush out a press release?

According to legend, Crick and Watson stylishly chose “a” after they discovered the structure of DNA, strolling into The Eagle in Cambridge to announce they had discovered “the secret of life.”

A more typical choice would be a combination of “b” and “c.” Journals issue press releases about newsworthy studies, giving reporters early access to research on the understanding that no stories are published until after the research has appeared in the journal. Everyone benefits. Reporters get time to prepare their stories, and journals get their research into the public eye.

That’s how it usually works. But a couple of weeks ago I came across an interesting-looking press release from the University of Washington. A daily tablet containing two antiretroviral drugs, the release said, had been found to prevent HIV transmission in heterosexual couples where one partner was living with HIV.

I contacted the press office. Could I see the research? No, it hadn’t been published. Had it been presented at a conference then? No, the monitoring board for the study had recommended the results be publicised early. Could I talk to the monitoring board and ask why? Not possible. Could I talk to the researchers then? They were travelling internationally and not available for comment.

The only source for the story was a single, uncheckable press release. So much for nullius in verba (“take nobody’s word for it”).

Science reporting gets a fair amount of criticism, and Ben Goldacre’s Bad Science columns are a source of amusement and horror for me and thousands of others. Reporting falls down for plenty of reasons: under-resourced journalists doing their best against tight deadlines, and good old-fashioned sensationalism. But it’s no use complaining that the media fail to report research accurately if there’s no real research to report on.

I chose to run the story, with a caveat that the study had been terminated early and had yet to be published. I’m not sure this was the right thing to do. I could have waited – the study has since been presented at an International AIDS Society conference in Rome – but other news organisations were likely to cover the story in the meantime (which of course they did).

Is it really a problem when fragments of a study are released early to the press? I’d argue yes, and not just because it’s irritating to be scooped by less fastidious journalists. Reports based on incomplete information can create a misleading impression of a study’s findings, and false stories are difficult to correct once they’ve entered the public consciousness. Even direct retractions don’t always change people’s minds.

In 2009 the US Army released a few scraps of information about a supposedly successful trial of an HIV vaccine. Brilliantly, when the Guardian reported what few data were available, readers in the comments section tried to work out whether the study had a statistically significant result.

When more details emerged, it turned out that the trial’s result wasn’t significant in either an intention to treat or a per protocol analysis, but did attain significance once a subset of participants was excluded. A borderline result at best, then, but a careful analysis of these rather technical issues would be unlikely to get the same attention as the Guardian’s original, hopeful, “HIV breakthrough” headline.
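As an aside, the exercise those readers were attempting is not exotic: given only the number of infections in each arm, a simple test on the 2x2 table gives a rough answer. The sketch below is purely illustrative. The counts are made-up placeholders, not the trial’s actual figures, and it assumes SciPy’s fisher_exact function as a stand-in for whatever analysis the trial actually prespecified.

# Illustrative sketch only: placeholder counts, not the trial's real data.
from scipy.stats import fisher_exact

# 2x2 table: [infected, uninfected] in each arm
vaccine = [50, 7950]   # hypothetical vaccine arm
placebo = [70, 7930]   # hypothetical placebo arm

odds_ratio, p_value = fisher_exact([vaccine, placebo])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

The point is only that even this back-of-the-envelope calculation depends on having the full counts, which is exactly what a bare press release withholds.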

The risk of misleading people with incomplete information has led most journals to refuse to publish studies that have been discussed in the media prior to publication. The BMJ’s press policy says: “We do not want material that is published in the BMJ appearing beforehand, in detail, in the mass media. If this happens, doctors and patients may be presented with incomplete material that has not been peer reviewed, and this makes it hard for them to make up their own minds on the validity of the message.”

Apart from the risk of getting a story wrong, I think there’s also a deeper objection to doing science by press release. It creates the impression that science is about statements from authority figures, and not about sources that are open and available for checking.

Major public health programmes, such as vaccination campaigns, depend on people’s ability to make decisions about scientific issues. Researchers, universities, and journals need to recognise that the way in which they engage with the media plays a huge part in shaping the public understanding of science.

And as for journalists who don’t make sure their stories stand up? Well, it can get pretty embarrassing.

Philip Wilson is the editor, BMJ Evidence Centre