Richard Smith: A woeful tale of the uselessness of peer review

Let me tell you a sad tale of wasted time and effort that illustrates clearly for me why it’s time to abandon prepublication peer review. It’s the tale of an important paper that argues that we can screen for risk of cardiovascular disease using age alone. (1) I’ve already posted a blog on the implications of the paper, but now I want to tell you about its tortured journey to publication.

A version of the paper was first submitted to a journal, the BMJ, in March 2009. It was finally published in PLoS ONE in May 2011, more than two years after it was first submitted. During that time the paper was rejected seven times by four journals, including PLoS ONE at first, and reviewed by 24 reviewers. At a conservative estimate of two hours per review, that is more than a week of academic time. If the academics are paid at a rate of £50 an hour, again conservative, the cost is over £2,000. That figure does not include the editorial costs or the opportunity costs: the academics might have spent their time doing something much more valuable than reviewing a paper that 23 other reviewers had also reviewed.
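
The back-of-the-envelope arithmetic can be checked in a few lines. The 24 reviews, two hours per review, and £50 hourly rate are the conservative figures given above; the 40-hour working week is an added assumption for the "more than a week" claim.

```python
# Sketch of the reviewing-cost arithmetic, using the article's own
# conservative figures; the 40-hour working week is an added assumption.
reviews = 24          # total reviews across the four journals
hours_per_review = 2  # conservative estimate per review
rate_gbp = 50         # conservative academic hourly rate, in GBP

total_hours = reviews * hours_per_review   # academic time consumed
working_weeks = total_hours / 40           # assuming a 40-hour week
total_cost = total_hours * rate_gbp        # direct cost of reviewing

print(f"{total_hours} hours, about {working_weeks:.1f} working weeks, costing £{total_cost}")
# → 48 hours, about 1.2 working weeks, costing £2400
```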

This long delay and high cost might have been justified if what was eventually published had been much superior to what was initially submitted. It’s different, but the central message, that age alone is as good as complex risk assessment scores, is still the same and has not been seriously disputed. The comments of the reviewers could have been a useful discussion around the paper, part of the process of digesting it and deciding its true importance. As it is, their comments are lost in the memory stores of editorial computers. It’s not clear to me whether the journals rejected the paper because it was too unsurprising, too radical in its threat to established interests, or, paradoxically, both.

What is clear is that nothing would have been lost and much gained if this paper had been published straight away and the debate over its value had been conducted in public rather than behind closed doors for over two years at considerable expense.

The evidence, as opposed to the opinion, on prepublication peer review shows that its effectiveness has not been demonstrated and that it is slow, expensive, largely a lottery, poor at spotting error, biased, anti-innovatory (as perhaps in this case), prone to abuse, and unable to detect fraud. (2) The global cost of peer review is $1.9 billion, (3) and it is a faith-based rather than evidence-based process, which is hugely ironic when it sits at the heart of science.

My conclusion is that we should scrap prepublication peer review and concentrate on postpublication peer review, which has always been the “real” peer review in that it decides whether a study matters or not. By postpublication peer review I do not mean the few published comments made on papers but rather the whole “market of ideas,” which has many participants and processes and moves like an economic market to determine the value of a paper.

Prepublication peer review simply obstructs this process, as happened with this important paper showing that age alone is enough for screening for cardiovascular disease.

This is a slightly edited portion of an editorial that appears in the Journal of Medical Screening and can be accessed for free at
Competing interest: RS was the editor of the BMJ and the chief executive of the BMJ Publishing Group, which once owned the Journal of Medical Screening, and was until September a member of the board of the Public Library of Science.

1. Wald NJ, Simmonds M, Morris JK. Screening for Future Cardiovascular Disease Using Age Alone Compared with Multiple Risk Factors and Age. PLoS ONE 2011;6(5): e18742. doi:10.1371/journal.pone.0018742

2. Smith R. Classical peer review: an empty gun. Breast Cancer Research 2010; 12(Suppl 4):S13 doi:10.1186/bcr2742

3. Research Information Network. Activities, costs, and funding flows in the scholarly communications system. 2008.

  • Richard Smith

    Horrobin, the entrepreneur, medical researcher, author and editor, was a great critic of peer review, and his central criticism was that truly original work, the work that moves science forward, is very likely to be rejected by peer reviewers, because peer review is a “lower common denominator.” He collected many examples of important studies, several of which led to Nobel prizes, being rejected by peer reviewers, but he didn't include one of the best examples: that of Barry Marshall's studies showing that Helicobacter pylori was the cause of peptic ulcer, work that led to a Nobel prize for Marshall and his colleague Robin Warren.

    When I graduated in medicine in 1976 it was not known that peptic ulcer had an infectious cause; much of the surgery we learnt about was surgery for peptic ulcer, and H2 antagonists were some of the best selling drugs. The idea that a bacterium might cause peptic ulcer seemed ludicrous because everybody knew that the stomach is extremely acidic and that bacteria couldn't flourish there.

    I was talking about case reports at a meeting this week, and I referred to Marshall's famous paper in which he described infecting himself with H pylori and giving himself gastritis, so fulfilling Koch's postulates. Somebody in the audience told me how they had seen Marshall talk and show slides of the many rejection letters he'd received. I've tried to find out more, and clearly Marshall did have much of his work rejected. For example, in February 1983, according to Wikipedia, the Gastroenterological Society of Australia rejected Marshall's abstract to present his research at their yearly conference and deemed it in the bottom 10% of papers submitted.

    Perhaps somebody can give us more detail on Marshall's rejections.

  • Abhijit Bal


    Had it been published straight away, it is likely that it would have been taken less seriously and the paper would have been lost in the ocean of literature. The process has given the paper a “stamp of approval,” so it would be wrong to suggest that nothing would have been lost had it been published without review. You spotted the paper only because PLoS published it. If everything were published, it would be hard to spot papers. In the current process, you can choose to ignore poor quality journals. In fact, the peer review process itself deters many substandard papers, as it is not time effective for authors to pursue poor work all the way through.

    Also, only the authors can comment on whether they benefited from the review process, which in this case does seem a bit protracted to me.

  • Abhijit Bal


    It is not as easy as you make it sound. You could spot the paper only because the current process had filtered out a lot of rubbish. True, there is still rubbish out there, but you can choose not to read those journals. In the absence of this stratification and filtering of published papers, you wouldn't see the intended paper. True, somebody would see it, but there would also be time spent going through several low quality papers. That time also costs money.

  • Richard Smith

    As my predecessor as editor of the BMJ showed, you can get anything published if you persist long enough. Prepublication peer review does not weed out papers; rather it takes a long time, wastefully, to sort them into different journals.
    That sorting might be worthwhile if it meant that “the most important papers appeared in the highest profile journals.” Unfortunately it doesn't. What it does do is introduce a systematic bias into what's published, with the positive, sexy stuff appearing in the “top journals” and the papers spelling out their errors appearing in low profile journals.
    The system is bust.

  • Richard Smith

    Your assumptions are wrong. I spotted the paper because the authors told me about it soon after it was first written. Since publication the paper has received only two citations, but it deserves many more. I hope that postpublication peer review will now begin to work where prepublication peer review has failed.

  • Abhijit Bal

    The top journal in a given specialty does not necessarily publish the top papers of the specialty, but overall it would be fair to say that good papers do get published in good journals, as the history of the Wislar paper shows (PLoS also has a decent impact factor, which for all its drawbacks is some measure). When was a “Nature-like” paper published in the “Obscure Journal of Science”? Again, this may even have happened, but does it happen all the time?
    All of us adjust our scientific radars according to our needs. In the absence of the source signal, life would be more complicated. We would simply have to wait for someone to do a thorough post-publication peer review or else go through every paper ourselves, good and bad. Who pays for that time? Why must post-publication reviewers be made to review papers which would never have been submitted by the authors if a minimal filter had been available in the first place? Again, who pays for the time it would take to call out poor papers as poor papers?
    Science may have functioned differently 100 years ago, but there was also less access to free speech, so only seriously interested people would embark on the difficult journey of propagating their findings. Life is different now, and anyone can blog anything without spending too much effort, which tilts the balance in the other direction.
    But overall I think PLoS One has provided some answers. They publish papers with evidence-based conclusions, although the cost is very high. A cheaper PLoS One (unless one can justify publishing papers whose conclusions are not evidence based) followed by F1000 would be perfect.
    The current system is closer to ideal. The alternative that you propose has too many chaotic elements in it. I might be hopelessly wrong, but this is what I think.

    Sorry I posted twice earlier because I thought there were some problems posting.

  • Richard Smith

    I suggest that you read these four papers. Three show how “top journals” provide a biased, over-positive view of the world, and the fourth illustrates how we have been hoodwinked by distorted publishing practices into thinking that antidepressants are much more efficacious than they actually are.

    If you can't be bothered with the full articles, you can read my blog on why we should beware of top journals.

  • Abhijit Bal

    The NEJM paper (Turner et al): this paper describes how positive studies are published while negative studies are not. The authors conclude that there is bias, but the reason for the bias was not determined. As you have rightly pointed out, anything can eventually get published, so there is no reason why the negative studies would not get published. I therefore suspect that many negative studies were never submitted. Neither pre-publication review nor post-publication review would have any bearing on this.
    I agree with the conclusions of the PLoS paper (Young et al). Papers without flaws should be published when there is no restriction on space. Some pre-publication check would be necessary to prevent at least obvious flaws from getting published. Once again, some kind of stratification of papers would be useful, I think, for end-users like me.
    The JAMA paper (Ioannidis) is very interesting, but we don’t know which studies were correct: the original highly cited papers or the subsequent studies. Also, both could be correct, because they take place at different times with different variables. In fact, randomized controlled trials often have several exclusion criteria; the patients we see in day to day clinical practice are typically the ones who would be excluded from clinical trials. So in that sense nothing is right. But how all this relates to peer review, I am not so sure.
    I could not access the other JAMA paper (Ioannidis and Panagiotou).

  • Abhijit Bal

    I don't think the story of rejections is as fascinating as one would want it to be. From Marshall's interview, it appears that a local scientific society in Australia rejected the abstract, while the Lancet accepted the paper, first in June 1983 and then a more elaborate paper in June 1984. No journal rejected the paper, from what I gather from the interview. Correct me if I am wrong.

  • Yannis Guerra

    My main issue with post publication review is that it implicitly demands purity and virtue from authors. Because there is pre publication peer review, a lot of people who would publish garbage just to pad their CVs don't do it, as they have better things to do with their time. Make it post publication review only, and the literature will fill with hundreds of papers that are garbage but that take too long to filter out. In the long run we will have two groups of papers: the really good ones, and a much larger group of CV-inflating bad papers. Is that a better situation than what we have right now? I am not sure.
    One clear example of a similar phenomenon is happening right now with retracted or withdrawn papers. A large number of them continue to be cited as useful, because people either ignore the retraction, or the papers support the position of the new article and so the retraction is voluntarily “ignored.”
    Always be careful of the law of unintended consequences.
    A free market (even of ideas) will always be littered with cheap imitators and a few originals.

  • Andrew Farke

    Would things have been easier if reviews had been transferred between journals (rather than just the manuscript alone)? I suspect the editors of some journals, if given access to the reviews from another journal and the authors' response to these reviews, would have accepted the paper. This may be more of an indictment of the “impact” criterion used for acceptance/rejection by some journals than of the peer review system itself.

  • Kamal Mahawar

    Dear Richard,

    I am sure you know of this recent BMJ poll which showed that 26% of respondents (118) agreed that journals should only carry out peer review after publication. Did the number bring you hope or disappointment?

    Kamal Mahawar
    CEO, WebmedCentral


    We are deluding ourselves if we think we can detect scientific fraud & misconduct without access to the raw data produced by a study. Electronic posting of such data allows those so inclined to perform their own analysis. So many academics, institutions & oversight authorities claim to carefully review studies free of bias, when their conflict of interest is of such magnitude as to render their findings meaningless.

  • Jacob Barhak

    Richard is correct about waste and the inability to judge a paper in many cases. Reviewers are human and make mistakes. Therefore the post publication system has superior aspects.
    And to all those who claim that without a pre publication peer review process it would be hard to detect good work, I suggest you learn from the open source programming community. This community publishes code and text freely online in repositories such as GitHub, Bitbucket, SourceForge and others. This changed the software industry: programmers now use those open source codes and easily distinguish what is good for them. Quality assurance is continuous, and all code is essentially reviewed post publication. Moreover, these version control systems allow constant improvement of work, and the move in this community is towards reproducible science: if you share data and methods/code, this is possible. The review process becomes hard on reviewers, so the users become the post publication reviewers, just as Richard predicts will happen with publication. This is happening daily in the software community, so why not in the scientific publication community?
    Hopefully current trends will move the scientific publishing community in the same direction.