Liz Wager: Show us the data (part 2)

My last blog started with the observation that it’s impossible to investigate research fraud unless you have the raw data. While that may seem obvious, it leads logically on to another, subtly different, point which often seems to be missed: that it’s impossible to spot many types of research fraud unless you have seen the raw data. Some problems, such as plagiarism or blatant image manipulation, can be picked up by keen-eyed reviewers or editors, especially if they use screening tools such as CrossCheck. But fabricated or falsified data usually cannot be spotted from the aggregate data reported in journal articles.

For example, imagine I report that I have studied 100 patients (or rats), given half of them one treatment and half another, and then measured their blood pressure after one and three months. In the publication, these findings would probably be reported as an average with a measure of statistical spread, such as the standard deviation (SD). The report might also include the average age and weight (±SD) of the two groups and other key characteristics.

But what if, instead of measuring 50 patients (or rats) in each group, I had measured only five? Or, even worse, none at all? This deception almost certainly would not be apparent from the average figures. Patient (or animal) characteristics could easily be adapted convincingly from other publications. Even implausible data distributions are unlikely to be apparent in a single study—remember that Carlisle analysed 169 publications to show that Yoshitaka Fujii’s data were suspect.
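To make that concrete, here is a minimal sketch in Python, using entirely invented blood pressure figures, of how the same kind of published summary line (mean ± SD for a claimed 50 subjects) can be produced just as easily from five fabricated values as from 50 genuine measurements:

```python
import random
import statistics

# Hypothetical example: systolic blood pressure (mmHg) at three months.
# Every number here is invented purely for illustration.

def summary_line(values, claimed_n):
    """Produce the kind of line that appears in a published results table."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return f"n = {claimed_n}, mean = {mean:.1f} mmHg (SD {sd:.1f})"

# An honest report: 50 real measurements (simulated here as a stand-in).
random.seed(1)
genuine = [random.gauss(132, 12) for _ in range(50)]
print(summary_line(genuine, claimed_n=50))

# A fabricated report: only five values were ever 'measured' (or simply
# made up), yet the claimed n is still 50. The summary line looks just
# as plausible, and nothing in it reveals the deception.
fabricated = [128.0, 135.5, 130.2, 137.8, 126.4]
print(summary_line(fabricated, claimed_n=50))
```

The point is not the numbers themselves (they are made up) but that the aggregate figures are consistent with either scenario; only the raw data could tell them apart.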

Or suppose I measured 70 patients in each group (rather than 50) but discarded inconvenient results I regarded as outliers? Or what if I had planned to measure blood pressure at six months, but half the patients had disappeared (or the rats had escaped, or worse, died)? Or what if I thought the six-month data were less impressive than the three-month findings and therefore failed to mention them? None of these problems could possibly be apparent from the aggregate (i.e. analysed) data.
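The same goes for quietly discarded outliers. A minimal sketch, again with invented readings, shows how dropping a couple of "inconvenient" high values shifts the reported average and shrinks the SD, while the published summary gives no hint that anything was removed:

```python
import statistics

# Hypothetical three-month blood pressure readings (mmHg) for one group,
# invented purely for illustration.
readings = [118, 121, 125, 127, 129, 131, 134, 138, 152, 160]

def report(values):
    return (f"n = {len(values)}, mean = {statistics.mean(values):.1f} "
            f"(SD {statistics.stdev(values):.1f})")

print("All measurements kept:      ", report(readings))

# Quietly discard the two high readings as 'outliers'.
trimmed = [v for v in readings if v < 150]
print("Inconvenient values dropped:", report(trimmed))
```

Neither summary line is wrong in any way a reader could check from the published paper; only the raw data would show that values had been removed.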

So, while more stringent peer review may pick up arithmetical errors, while reporting guidelines and checklists can undoubtedly improve the reporting of research methods (an area ripe for improvement), and while the publication of study protocols or the recently proposed transparency declarations may reduce selective reporting (such as the missing six-month endpoint, or the missing rats), it’s unrealistic to expect any of these measures to detect or prevent deliberate data fabrication.

That’s one reason why The BMJ’s new policy of requiring raw data makes sense. If peer reviewers, editors, and readers can see the raw data, there’s more chance that both fraud and honest errors will be detected. That’s clearly a benefit, but its size depends on how often fraud and error occur, which, to be honest, we really don’t know at the moment. The other thing we don’t know yet is how much it costs to format, archive, and curate data, and therefore whether the benefits exceed the costs. We’re working on this, and we won’t know until we’ve tried.

Liz Wager PhD is a freelance medical writer, editor, and trainer. She was chair of the Committee on Publication Ethics (COPE) from 2009 to 2012.