Richard Smith: A woeful tale of the uselessness of peer review

Let me tell you a sad tale of wasted time and effort that illustrates clearly for me why it’s time to abandon prepublication peer review. It’s the tale of an important paper which argues that we can screen for risk of cardiovascular disease using age alone. (1) I’ve already posted a blog on the implications of the paper, but now I want to tell you about its tortured journey to publication.

A version of the paper was first submitted to a journal, the BMJ, in March 2009. It was finally published in PLoS ONE in May 2011, more than two years after it was first submitted. During that time the paper was rejected seven times by four journals, including PLoS ONE at first, and reviewed by 24 reviewers. At a conservative estimate of two hours per review, that is more than a week of academic time. If the academics are paid at a rate of £50 an hour, again conservative, the cost is over £2000. That figure does not include the editorial costs or the opportunity costs: the academics might have spent their time doing something much more valuable than reviewing a paper that 23 other reviewers had also reviewed.

This long delay and high cost might have been justified if what was eventually published had been much superior to what was initially submitted. It’s different, but the central message that age alone is as good as complex risk assessment scores is still the same and has not been seriously disputed. The comments of the reviewers could have been a useful discussion around the paper, part of the process of digesting it and deciding its true importance. As it is, their comments are lost in the memory stores of editorial computers. It’s not clear to me whether the journals rejected the paper because it was too unsurprising or too radical in its threat to established interests or, paradoxically, both.

What is clear is that nothing would have been lost and much gained if this paper had been published straight away and the debate over its value had been conducted in public rather than behind closed doors for over two years at considerable expense.

The evidence, as opposed to the opinion, on prepublication peer review shows that its effectiveness has not been demonstrated and that it is slow, expensive, largely a lottery, poor at spotting error, biased, anti-innovatory (as perhaps in this case), prone to abuse, and unable to detect fraud. (2) The global cost of peer review is $1.9 billion, (3) and it’s a faith-based rather than evidence-based process, which is hugely ironic when it’s at the heart of science.

My conclusion is that we should scrap prepublication peer review and concentrate on postpublication peer review, which has always been the “real” peer review in that it decides whether a study matters or not. By postpublication peer review I do not mean the few published comments made on papers but rather the whole “market of ideas,” which has many participants and processes and moves like an economic market to determine the value of a paper.

Prepublication peer review simply obstructs this process—as happened with this important paper showing that age alone is enough for screening for cardiovascular disease.

This is a slightly edited portion of an editorial that appears in the Journal of Medical Screening and can be accessed for free at
Competing interest: RS was the editor of the BMJ and the chief executive of the BMJ Publishing Group, which once owned the Journal of Medical Screening, and was until September a member of the board of the Public Library of Science.

1. Wald NJ, Simmonds M, Morris JK. Screening for future cardiovascular disease using age alone compared with multiple risk factors and age. PLoS ONE 2011;6(5):e18742. doi:10.1371/journal.pone.0018742

2. Smith R. Classical peer review: an empty gun. Breast Cancer Research 2010;12(Suppl 4):S13. doi:10.1186/bcr2742

3. Research Information Network. Activities, costs and funding flows in the scholarly communications system. 2008.