Richard Smith: More on the uselessness of peer review

I know I’m becoming a bore with all this raving against prepublication peer review, but like all true bores I’m charging on regardless. And I’m fired up by the experience I’ve had in the past few minutes.

Unsurprisingly, I’m a hypocrite as well as a bore, and despite my protestations I do a fair bit of reviewing. I’m never quite sure why, but it’s something to do with the hope of getting an early peek at something stupendous. This has yet to happen.

Anyway, I did recently agree to review for a journal of global repute and was given access to the paper. As is my custom, I downloaded the paper to read on a plane or train, and when I read it I thought that I must have downloaded only the summary. What I had was a sketchy piece with no argument, no data, no evidence, and, as far as I could see, no point.

Assuming that I had only the summary, I contacted the editors—and was told that I had the whole article. I wrote a review that was nearly as long as the paper pointing out its deficiencies politely, making suggestions on how it could be improved, and giving references to two much better papers that covered similar ground. But why, I wondered, had the editors sent it out for review? Perhaps it was because of the prestige of at least one of the authors.

Today I was sent an email telling me that the journal has made a decision and that I can access the other reviews. The paper has been rejected, but I see that the paper has had two other reviews—both saying in effect that what they had been sent was no paper at all.

What a waste of time and effort. The authors should never have submitted the paper, the journal shouldn’t have sent it out for review, and we reviewers should have declined to review it. Failure all round.

Recently a paper that I wrote with several others was reviewed by another journal of global repute. Again there were three reviewers (the Holy Trinity), and I’m not being very unkind when I paraphrase their reviews as: Reviewer A: “Please reference my work”; Reviewer B: “Pay more attention to my specialty”; and Reviewer C: “The authors should have written the paper in the gnomic language that I use.”

These episodes remind me of my most fatuous peer review story. Years ago something I’d written was quoted in the American Journal of Public Health, but in my piece I was quoting work done by others. The original authors wrote to me and asked me to let the journal know that they had originated the idea. I was happy to do so and wrote a short letter for publication saying little more than “Thank you for quoting me, but X and Y first produced this idea and here’s a reference.”

The journal wrote back saying: “Thank you for your letter, which will now be peer reviewed. If it passes peer review it will be published in nine months.” Barmy, I thought. This was peer review reduced to a mindless, bureaucratic, time and resource consuming charade—but maybe that’s what it is most of the time.

RS was the editor of the BMJ until 2004 and is director of the United Health Group’s chronic disease initiative.

Competing interest: I do quite a lot of reviewing for many journals. I am never paid, and my wife wants to know why I spend such a lot of time working for nothing. When I review for the BMJ I’m offered a year’s free access, but I have free access anyway – so I give away the access. Perhaps I should sell it.

  • If Richard Smith wants to see original work he should concentrate on the work that never gets to review. I can send work that is wholly original and innovative, with astonishing outcomes detailed, but which has been turned away because I am unknown and the conclusions of the work are outside 'mainstream' thinking.

  • charlesx

    At least in your case the bad paper was rejected, so your time was not completely wasted. What annoys me is when I get a paper to review by Distinguished Professor A, which is remarkably similar to previous papers by DPA and contains nothing new or interesting. I write a detailed review listing the weaknesses of the paper and explaining why it does not deserve publication. The journal editor (perhaps under pressure from DPA) then accepts the paper with virtually no changes. This has happened twice recently.

  • Abhijit Bal


    I think in the first example, what you show is the success of peer review. Three reviewers “wasted” their time, but they saved the time of thousands (and perhaps millions over time, because one of the authors is of some repute) of people who would otherwise have read the sorry piece had it been published. True, the editor should have rejected the paper if it was that bad, but nonetheless I think time has overall been saved, not spent.

    The examples that you cite from years ago are somewhat irrelevant. For example, it is possible that some good papers were never published because of “author-fatigue”. However, in this day and age, most of the work that produces fatigue (e.g. arranging references) is done by reference managers. If a few lines have to be added or deleted, it is easy to do that on the computer (compared to the typewriter). Nobody goes to the post office anymore (I remember posting my paper in a letterbox in London even in 2003). Everything is done on-line.

    I think there are several advantages of prepublication peer review that you choose to ignore. Most important, it improves the quality of papers. The fact that scrutiny takes place makes one work better. True, all this can take place post-publication, but that has its own problems: how would anyone know at what stage the paper was cited? Technically speaking, all versions would be citable. Would authors reply merely to tick the box? Would everyone reading everything that is published (such as the paper you quote in your first example) not actually waste significantly more research time?

    I propose: publish all papers which have evidence-based conclusions and are not unnecessarily repetitive, make that process affordable, and then open them for post-publication on-line reviews.

    What we must reduce is the emphasis on publications. I think job applications should not even ask people about the number of papers. Applicants should be asked only for a one-page summary of their own research work.


    Abhijit Bal

  • Richard Smith

    You make statements but never cite any data or evidence. 
    Let's turn the debate round. Imagine there was no prepublication peer review. Could you make a case for spending $1.7 billion and a huge amount of academic time, and living with a delay of months and sometimes years, in order to introduce prepublication peer review? I doubt it. And remember that it's easy for people to announce their results to the world without peer review right now. They do it all the time.

  • Richard Smith

    Iain Chalmers says that not publishing research you have completed is misconduct, and I agree with him. You can get anything published–including gobbledegook papers–if you persist.

  • Abhijit Bal

    There is no well-established “post-publication peer review only” journal, although several journals have tried this option in the last 5 years. Hence, there is a paucity of evidence. The problems of the current peer review system are very well known, but what should it be compared with?
    As and when you have drawn things to my attention, I have tried to research the available material.
    Thus, I found that the Barry Marshall rejection story that you wrote about in the previous blog was far less dramatic than you made it out to be. Some of the papers that you asked me to read were in no way connected to peer review, and I wrote about that in your previous blog.
    In this post, I found that the evidence you cite (rejection of a paper written by a famous author) actually proved that peer review is useful rather than useless. To support my conclusion, I described how you and the two other reviewers saved the time of thousands of people by rejecting the piece. I read your post thrice so as to confirm that I did not miss anything because I was surprised at the kind of example you cited. The editor of the journal did the same. Confronted by a famous name, s/he sought the advice of three other people and s/he was very right in doing so. In the end, the correct decision was made. If you talk about that example to 10 people, I am confident all of them would feel that it proved the usefulness of peer review. Had your blog been subject to pre-publication peer review, you would have cited some other example to strengthen your argument.
    The example that you cite from the American Journal of Public Health (and I found and read that paper too) is from 1997-8. Again, I think the situation is probably less dramatic. The reply of the journal back in 1997-8 seems like an automated reply to me. So while it makes funny reading after 15 years, it was just the way the system worked then: probably a secretary sending a reply meant for all “letters to the editor”.
    So let us test it. Would you still say that the first example shows how useless peer review is? If the answer is yes, it would prove what I have always suspected. That once people commit themselves openly, it is very hard for them to change their position. This situation does not arise in pre-publication peer review where one’s reputation is not at stake yet as the data has not been made public. But once made public, egos are strong. It is human nature.
    So let us now turn to some evidence. In a website that is based on post-publication peer review only, I selected a total of 8 specialties with 50 articles between them. There were 64 reviews. Forty-four reviews made significant observations. Eighteen of these were posted within the last 4 weeks of the submission of the papers. The remaining 26 reviews were included in my analysis (where enough time had passed for authors to respond). None of the manuscripts were revised and not one author cared to respond to the 26 comments at the time I analyzed the data. I also noted that these are early days and things might improve in the future.
    I still think that in an ideal world, “post-publication review only” would definitely work. Authors would be clean, they would present data accurately, reviewers would do their job without fail and fear, authors would take the criticism on board, papers would be precisely stratified into grades of 0 to 10, readers would take note of the latest version of the papers and refrain from citing earlier versions, and everything would go right. It can happen. I am not so sure it will, but I might be wrong.
    The way the system is currently, I know any rubbish gets published but I also know that none of it is thrust on me. If I wish to find rubbish I am able to do so.
    Once again, I think a minimal filter is needed so that end-users like me don’t suffer. Accept papers that have evidence-based conclusions and are not boringly repetitive.