Deborah Cohen on improving health reporting

There are a few ways to improve health reporting. One is to do as some science commentators do: lump all journalists together in a totally “unscientific” way, sniping and sneering about them to their pals on the blogosphere.

Or you can explain how to cover medical research by offering advice on how to approach a story that your editor wants (even if you don’t), that other media outlets have featured, or that generally seems to be generating a bit of a furore.

This is the tack taken by Medicine in the Media, a workshop funded by the National Institutes of Health and held at Dartmouth College this month, mainly for US journalists.

I spoke about the BMJ’s investigations at this Ivy League college in New England that, with its fraternity dorms and white clapboard houses, looks like an American movie set.

Set up by the rather witty Barry Kramer of the National Cancer Institute, the workshop aims to help journalists unpick the constant slew of information released by journals, advertisers, medical societies, and other interest groups.

Not always an easy job, and one journalists don’t always get right, as Gary Schwitzer, a former CNN medical news reporter, documents on his website, Health News Review. The site judges and rates health stories against a predefined set of ten criteria, including how benefits and harms are approached; whether the story grasps the quality of the evidence; whether costs are mentioned at all; and whether any conflicts of interest are covered.

All very good in theory, but how do you report on research papers? And how do you interpret confidence intervals, p values, and relative and absolute risks? By giving examples of what’s gone wrong in the past and working through a series of research papers (of varying quality), the husband and wife duo Steve Woloshin and Lisa Schwartz, both professors of medicine, described ways of laying out statistics in a comprehensible fashion.
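For readers who want a concrete handle on those terms, here is a minimal sketch, in Python and with entirely invented trial counts (nothing from the workshop materials), of how a risk ratio, its 95% confidence interval, and a p value fall out of the raw numbers:

```python
# A minimal sketch, with made-up numbers, of the arithmetic behind
# "risk ratio 0.60 (95% CI 0.39 to 0.94, p = 0.02)" style reporting.
import math
from statistics import NormalDist

# Hypothetical two-arm trial: events / participants in each group
events_treat, n_treat = 30, 1000   # treated group
events_ctrl, n_ctrl = 50, 1000     # control group

risk_treat = events_treat / n_treat          # 0.03
risk_ctrl = events_ctrl / n_ctrl             # 0.05
risk_ratio = risk_treat / risk_ctrl          # 0.60

# 95% CI for the risk ratio via the usual log transformation
se_log_rr = math.sqrt(1/events_treat - 1/n_treat + 1/events_ctrl - 1/n_ctrl)
lo = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
hi = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

# Two-sided p value for the null hypothesis that the risk ratio is 1
z = math.log(risk_ratio) / se_log_rr
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"risk ratio {risk_ratio:.2f}, 95% CI {lo:.2f} to {hi:.2f}, p = {p:.3f}")
```

On these numbers the risk ratio is 0.60 with a confidence interval of roughly 0.39 to 0.94 and a p value of about 0.02. The interval, not the single headline figure, is what a careful story should convey.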

From past examples of news articles and broadcasts, it’s clear that academics, doctors, and medical organisations are culpable of promoting poor science. Overblown quotes from scientists pepper articles and broadcasts. Their exaggeration of research or an intervention might be because they have a particular intellectual or financial conflict of interest (or they simply like the sound of their own voice). So rather than constantly lambasting journalists, science commentators might like to turn their attention on their own. How about a “hyperbolic scientist watch”? Or a “misleading quote monitor”? And unless these quotes are science fiction, the scientific community should take some responsibility for the messages it broadcasts.

It’s also evident that journalists over-rely on the main medical journals for their stories because they are trusted resources. But journals too get it wrong by publishing some highly dubious papers, and journalists were taken through these step by step, with pitfalls to look out for and notes of caution. Journalists are urged to use absolute rather than relative risks, and reporters are castigated for highlighting the relative risk ratio to make a greater impact. But it seems that it might only be partially their fault. Woloshin and Schwartz pointed out that the absolute risk is often absent from the initial paper, which may also have failed to put the research into context or to detail any associated harms. They urged journalists to go back to the researchers and ask for further information, particularly when the absolute risks are missing.
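To see why this advice matters, here is a small hypothetical, using the same made-up trial as above rather than anything from the papers discussed: the one result framed as a relative and as an absolute risk reduction.

```python
# A hedged illustration (hypothetical numbers) of why the same trial
# result sounds very different as a relative versus an absolute risk.
events_treat, n_treat = 30, 1000   # e.g. deaths in the treated group
events_ctrl, n_ctrl = 50, 1000     # deaths in the control group

risk_treat = events_treat / n_treat   # 3%
risk_ctrl = events_ctrl / n_ctrl      # 5%

relative_reduction = 1 - risk_treat / risk_ctrl   # 0.40 -> "cuts deaths by 40%"
absolute_reduction = risk_ctrl - risk_treat       # 0.02 -> 2 percentage points
nnt = 1 / absolute_reduction                      # 50 treated per death averted

print(f"relative risk reduction: {relative_reduction:.0%}")
print(f"absolute risk reduction: {absolute_reduction:.1%}")
print(f"number needed to treat:  {nnt:.0f}")
```

A headline can truthfully say the treatment “cuts deaths by 40%,” but the absolute reduction is two percentage points, and 50 people must be treated to avert one death. Which framing a story leads with changes how big the effect feels.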

Some journalists at the workshop asked how, if journal editors with their armies of peer reviewers, statisticians, and experts on hand can’t spot dodgy science, they themselves are expected to do their job. A good point: simply castigating journalists for getting it wrong misses a bigger issue. Attention needs to be turned on those who really do hold the cards, and maybe reporters should hold journals to account.

The same applies to guidelines. Who should journalists, or indeed doctors, trust? Examples were given of contradictory advice offered up by different organisations on vascular screening, and journalists were told what to look out for: multidisciplinary committees, transparency about members (including conflicts of interest), and the selection of evidence (systematic or not).

But the issue that really caused heads to spin was screening. When one life might be saved by screening, how can you not implement the intervention? And how do you convince editors that it’s not irresponsible to cover screening tests with a critical eye? Journalists were introduced to concepts such as five year survival rates versus mortality rates, the population versus the individual, and benefits versus harms. They were told to consider who is advocating a screening test and what their conflicts of interest might be: are they a private company, a hospital cashing in, or a government organisation (which may well have its own political agenda)?
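One way to see why five year survival is a treacherous measure is lead-time bias. The toy example below, with invented figures, shows how a screening test that merely moves the date of diagnosis earlier turns a patient into a “five year survivor” without postponing death by a single day.

```python
# A toy illustration (invented figures) of lead-time bias: earlier
# diagnosis inflates five year survival even when mortality is unchanged.
age_at_diagnosis_without_screening = 67
age_at_diagnosis_with_screening = 60   # the test finds the cancer earlier
age_at_death = 70                      # ...but the date of death is unchanged

survives_5y_without = (age_at_death - age_at_diagnosis_without_screening) >= 5
survives_5y_with = (age_at_death - age_at_diagnosis_with_screening) >= 5

print("counted as a 5 year survivor without screening:", survives_5y_without)  # False
print("counted as a 5 year survivor with screening:   ", survives_5y_with)     # True
print("years of life gained:", 0)  # mortality is what actually matters
```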

Deborah Cohen, investigations editor, BMJ. I received NIH funding to speak at the event.