
Richard Smith: Scrap peer review and beware of “top journals”

22 Mar, 10 | by julietwalker

The neurologist and epidemiologist Cathie Sudlow has written a highly readable and important piece in the BMJ exposing Science magazine’s poor reporting of a paper on chronic fatigue syndrome (1), but she reaches the wrong conclusions on how scientific publishing should change.

For those of you who have missed the story, Science published a case-control study in October that showed a strong link between chronic fatigue syndrome and xenotropic murine leukaemia virus-related virus (XMRV). (2) The study got wide publicity and was very encouraging to the many people who believe passionately that chronic fatigue syndrome has an infectious cause. Unfortunately, as Sudlow describes, the study lacked basic information on the selection of cases and controls, and, worse, Science has failed to publish e-letters from Sudlow and others asking for more information.

In the meantime, three other studies have not found an association between chronic fatigue syndrome and XMRV. (3-5)

To avoid such poor reporting in the future, Sudlow urges strengthening the status quo—more and better prepublication peer review. Not only is she trying to close the stable door after the horse has bolted, but she has also failed to recognise the possibilities of the new Web 2.0 world. The time has come to move from a world of “filter then publish” to one of “publish then filter”—and it’s happening.

Prepublication peer review is faith based, not evidence based, and Sudlow’s story shows how badly it failed at Science. Her anecdote joins a mountain of evidence of the failures of peer review: it is slow, expensive, largely a lottery, poor at detecting errors and fraud, anti-innovatory, biased, and prone to abuse. (6 7) As two Cochrane reviews have shown, the upside is hard to demonstrate. (8 9) Yet people like Sudlow, devotees of evidence, persist in their belief in peer review. Why?

The world also seems unaware that it is scientifically dangerous to read only the “top journals”. As Neal Young and others have argued, the “top journals” publish the sexy stuff. (10) The unglamorous is published elsewhere or not at all, and yet the evidence comprises both the glamorous and the unglamorous.

The naïve concept that the “top journals” publish the important stuff and the lesser journals the unimportant is simply false. People who do systematic reviews know this well. Anybody reading only the “top journals” receives a distorted view of the world—as this Science story illustrates. Unfortunately many people, including most journalists, do pay most attention to the “top journals.”

So rather than bolster traditional peer review at “top journals,” we should abandon prepublication review and stop paying excessive attention to “top journals.” Instead, let people publish and let the world decide. This is ultimately what happens anyway: what is published is digested, some of it absorbed into “what we know” and much of it never cited, simply disappearing.

Such a process would have worked better with the story that Sudlow tells. The initial study would have appeared—perhaps to a fanfare of publicity (as happened) or perhaps not. Critics would have immediately asked the questions that Sudlow asks. Instead of hiding behind Science’s skirts, as has happened, the authors would have been obliged to provide answers. If they couldn’t, then the wise would disregard their work. Then follow-up studies could be published rapidly.

Unfortunately, unlike physicists, astronomers, and mathematicians, all of whom have long published in this way, biomedical researchers seem reluctant to publish without traditional prepublication peer review. In reality this is probably because of innate conservatism and the grip of the “top journals,” which insist on prepublication review, but biomedical researchers often say: “But our stuff is different from that of physicists in that it may scare ordinary people. A false story, for example ‘porridge causes cancer’, can create havoc.”

My answer to this objection is that this happens now. Much of what is published in journals is scientifically poor—as the Science article shows. Moreover, many studies are presented at scientific meetings without peer review, and scientists and their employers are increasingly likely to report their results through the mass media.

In a world of “publish then filter” we would at least have the full paper to dissect, whereas reports in the media, even if derived from scientific meetings, include insufficient information for critical appraisal.

So I urge Sudlow, a thinking woman, to reflect further and begin to argue for something radical and new rather than more of the same.

1. Sudlow C. Science, chronic fatigue syndrome, and me. BMJ 2010;340:c1260.

2. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, et al. Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science 2009;326:585-9.

3. Van Kuppeveld FJM, de Jong AS, Lanke KH, Verhaegh GW, Melchers WJG, Swanink CMA, et al. Prevalence of xenotropic murine leukaemia virus-related virus in patients with chronic fatigue syndrome in the Netherlands: retrospective analysis of samples from an established cohort. BMJ 2010;340:c1018.

4. Erlwein O, Kaye S, McClure MO, Weber J, Willis G, Collier D, et al. Failure to detect the novel retrovirus XMRV in chronic fatigue syndrome. PLoS One 2010;5:e8519.

5. Groom HC, Boucherit VC, Makinson K, Randal E, Baptista S, Hagan S, et al. Absence of xenotropic murine leukaemia virus-related virus in UK patients with chronic fatigue syndrome. Retrovirology 2010;7:10.

6. Godlee F, Jefferson T. Peer Review in Health Sciences. 2nd ed. London: BMJ Books; 2003.

7. Smith R. Peer review: A flawed process at the heart of science and journals. J R Soc Med 2006;99:178-182.

8. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database of Systematic Reviews 2007, Issue 1. Art. No.: MR000016. DOI: 10.1002/14651858.MR000016.pub3 

9. Demicheli V, Di Pietrantonj C. Peer review for improving the quality of grant applications. Cochrane Database of Systematic Reviews 2007, Issue 1. Art. No.: MR000003. DOI: 10.1002/14651858.MR000003.pub2 

10. Young NS, Ioannidis JPA, Al-Ubaydli O. Why current publication practices may distort science. PLoS Med 2008;5(10):e201. doi:10.1371/journal.pmed.0050201.

Competing interest: RS is on the board of the Public Library of Science and an enthusiast for open access publishing, but he isn’t paid and doesn’t benefit financially from open access publishing.

  • http://www.jmir.org Gunther Eysenbach

    10 years ago I called the Web 2.0 publication model “type-1” publications (work-in-progress papers, downstream filtering, post-publication review), and prepublication-reviewed papers in journals “type-2” publications (“upstream” filtered) [1]. I agree that the “type-1” model is happening and that there is a role for it, but I don’t necessarily agree that this necessitates abandoning peer review at journals (and therefore journals themselves). Journals serve specific communities (be it only those dumb journalists or busy medical practitioners who cannot possibly screen tens of thousands of preprints with comments or ratings), and it is the role of the editor (advised by peer reviewers, i.e. experts in the subject) to serve these communities. I am all for Web 2.0, but the web is the web and a journal is a journal, and they play different roles in the knowledge translation cycle.

    1. Eysenbach G. The impact of preprint servers and electronic publishing on biomedical research. Curr Opin Immunol. 2000 Oct;12(5):499-503
    http://yi.com/home/EysenbachGunther/scans/Eysenbach2000e_CurrOpImmunol_preprint_servers.pdf

  • Tiago Villanueva

    There are a few nuances to this. Even virtually unknown, non-indexed journals and publications can have extremely demanding peer-review systems. So they suffer from both problems, potentially flawed peer review and anonymity, regardless of their quality (which may even be very good!). How do you tackle this? Which is more valuable, a well known and popular blog or an unknown journal?
    Do you honestly see CNN or the BBC quoting a report from the “Obscurity Journal of Unknownstan” in the near future? Even top open access journals don’t make it to the mainstream press as often as top traditional journals, let alone good quality less-known or “alternative” publications.
    Perhaps the training of future generations of journalists and media professionals should reflect this.

  • Simon Li

    Well known journals have a wider impact on the general public and are more likely to attract the attention of the mass media. However, people working in a specialist field might do a PubMed search every now and then to find the papers they’re interested in, and if a paper is relevant to your area the chances are you’ll browse through it regardless of the standing of the journal, because as a researcher you have to keep up to date. It’s debatable whether the science coverage of the major news outlets would be any worse with a pre-publication system.

    The question then is how the pre-publication filtering system would work, and how good articles would get the attention of front-line medical professionals. Would authors use the pre-pub system as a way of improving their work before submitting it to a journal, and would journals accept work that has previously been released for free, albeit in an incomplete form? Not a problem for open-access journals, but traditional subscription ones may be more reluctant. Would the selection process be turned around, with journal editors browsing through the repository and choosing interesting papers (perhaps with the help of a recommendations system)?

  • JerryM

    Researchers could just pre-publish their studies, allow comments and reply, and so get some peer-2-peer-review, then submit to the big journals, getting the best of both worlds.

    The publishers could even make it so only subscribers of the intended journal could comment, thereby keeping their revenue.

    Why let 3 peers review your paper when you can get 30 or 300 to do the work?

  • Gary A. Soucie

    I agree that prepublication peer review is not a panacea. Several years ago, when I was editing a peer-reviewed clinical surgery journal, I edited and published an article coauthored by six surgeons at an eminent cancer institute. The article was peer-reviewed by two of my board editors, both highly esteemed surgeons with knowledge of the subject matter. After my editing, the article was sent for review to the lead author. Having dotted all the i’s and crossed all the t’s, we published the article. A single reader (and our circulation included over 45,000 surgeons) wrote to point out that the vincristine dosage mentioned in the article was wrong. I leapt to the PDR to discover that the published dosage was potentially lethal. Now, while most surgeons reading the article might have recognized the error and ignored the dosage, why wasn’t the error picked up by any of the co-authors, the peer reviewers, or the lead author?

    And I also agree that the “top journals” should not be given the almost exclusive attention they receive. However, as the former editor of one of those “lesser journals,” I can assure you that we sometimes published some really good, even groundbreaking articles. But physicians and surgeons complain of having too much to read (and I have to agree), so I am not sure that postpublication peer review is THE answer. Perhaps some enterprising entrepreneur will develop a sort of reader’s digest of medical articles, so doctors will have a better idea of where to spend their precious reading time.

  • http://www.ohri.ca David Moher

    While you make a case for abandoning peer review as it is now practiced – filter then publish – my sense is that we do not appropriately practice or teach peer review. This issue is a topic in several upcoming rounds I’m giving. Having looked at four universities around the world (hardly a comprehensive sample) and the associated research institutes where I’m going to be speaking, there remains a large inequity. In these institutions (departments of epidemiology, etc.) there are full and half-semester courses on the design and conduct of clinical research. The assumption here is that these courses have the requisite depth to give students the appropriate knowledge and understanding of design and conduct issues. This may not be the case within and across many countries.

    What is clearer is that these same academic institutions do not have any courses on the reporting of research or the peer review of research (perhaps these topics are touched upon in a session within the design and conduct courses). This is very surprising to me. Globally, about $100 billion is spent on research annually. One way to gauge this large expenditure is to examine the quality of published research. As we all know, it is not optimal: a waste of a very large sum of money – a bad investment. How can reports of research be good when our academic institutions don’t value them enough to have core courses on peer review and the reporting of research? Asking colleagues on my corridor how they learned about peer review and reporting, I get a pretty uniform answer – word of mouth. None, including me, attended university courses on these topics during training. So before we can disband filter then publish, I think we need to teach it and give it equal and appropriate value within academic and other research institutions.

  • http://www.me-cvsvereniging.nl Guido den Broeder

    Some interesting thoughts, but not relevant to the issue at hand. Sudlow is simply wrong. The Science article has not been refuted. Instead, those other studies are methodologically flawed.

    The true issue here is that peer review does not work if, in the case of BMJ, the peers know as little about the topic as the authors. The peers, like the BMJ authors, missed – among other things – that ME is not the same as unexplained self-reported fatigue.

    True replication studies are underway; let’s wait for their results, shall we?

    Prof.drs. Guido den Broeder
    Chairman of the ME/CVS Vereniging, The Netherlands
    http://www.me-cvsvereniging.nl

  • http://www.bmjwa.com Joseph Ana

    That is the problem – Richard suggests ‘publish then filter’ rather than pre-publication peer review (or screening). Well, that may be okay for the more discerning western world (even though I doubt it even in that part of the world), but down where I am sending this response from, publish then filter is a recipe for disaster, quackery and charlatanism (if there is such a word). The difficulty of differentiating chaff from substance, the lack of critical appraisal skills, and widespread indulgent dependence on ‘expert view’ pose a real threat to ‘publish then filter’.

  • Carolyn Richards

    What isn’t in the Science paper is that Science did NOT want to publish the paper with CFS attached but wanted the XMRV paper. The authors refused, so Science made them jump through hoops in order to publish. They had to prove the validity of their conclusions, including using CFS blood and growing the virus in tissue. This is not just a paper by the WPI; it was done in conjunction with the National Cancer Institute, housed in the NIH, and the individuals who discovered the virus. Those who are truly interested in replication have asked for a control sample of the virus for testing their methods. The WPI would be more than happy to provide any additional information required, but there seem to be many who wish to further their own agenda rather than science.

  • http://www.sarcoidosis.com.au Dr Roger K.A. Allen

    If the science and mathematics that happened at Bletchley Park in WW2 had waited to get accepted in the equivalent of a medical journal, the Atlantic War against German U-boats would have been lost and we’d all be living in the Third Reich and eating bratwurst. This stuff was all due to trial and error, great ideas, lateral thinking, etc. The end product was the computers called “bombes”. The world has never been the same. Ideas occur at light speed and only the electron can keep up.

    The world of movable print has long since gone, and ideas are bypassing the constipated colon of the current model. There is also a huge latency period from the time someone has a bright idea or inspiration to the final appearance of the BMJ on the front lawn along with the morning paper. Some even give up and put good science in a drawer. I have seen this happen, only to discover the work decades later. There are few places in modern journals just for ideas and brainstorming.

    Even the abstract has a long gestation and meanwhile ideas are going backwards and forwards between those involved, often clandestinely lest the opposition publish it first. We are all the poorer.

    Many great advances in science occur long before a p value is found. Many novel ideas founder on the current rocks of peer-reviewed journals, so-called experts and myopic editors (figuratively speaking, Richard, as I see you wear glasses). I know one colleague whose work was so novel that it was rejected by Thorax, only to be published by another journal.

    Open access publication is also making inroads into the current monopoly by publishing firms.

    The impact factor (the two-year Anglophone citation factor), which was designed to maintain the publishers’ monopoly, is in need of revision. Indeed, I spoke to an entomologist recently who said it is killing off good science and young scientists. We need in its place a citation half-life, which would include non-English science, as God is not English.

    As a result of this, textbooks are becoming more and more obsolete and sadly less readable, which is another subject I will discuss elsewhere. By the time they are published, they are ready to be used as doorstops.

  • http://j-n-turner.co.uk joe

    I can’t see how this would help – you’d just get a range of opinions on the web with no way of measuring credibility. Non-scientists (and even scientists outside the field) would find it impossible to understand, and you’d get even less scientifically literate journalism.

    In mathematical science you can prepublish to the extent that the work is based on datasets – but in natural science those numbers have come from somewhere, so there is more to it than whether they’ve crunched the numbers sensibly. How have they generated the data? In most cases, non-specialists are not going to be in a position to replicate the experiments.

  • Richard Smith

    I’m glad that this provocative piece has attracted much more attention than longer, more reflective pieces I’ve written—but that’s often the way.

    Here are a few more points on peer review.

    1. I’m advocating the scrapping of prepublication peer review. Peer review in the sense of the community making up its mind on a piece of work is unstoppable – and, as I tried to argue, it is what finally determines the value of a piece of work anyway.

    2. This works for publication, but it’s more difficult to find an alternative for awarding grants.

    3. One of the most interesting points in the peer review debate is that people rarely refer to the now large evidence base, most of which shows the defects of peer review. Somehow people who would not advance opinions on cardiology, molecular biology, chronic disease, or health policy without referring to the evidence are quite content to advance opinions on peer review without knowing the evidence. That’s what I mean about peer review being “faith based.”

    4. I agree with David Moher that one problem is that peer reviewers are not trained. As he probably knows, we did a trial of training peer reviewers. It made little difference—perhaps because the “dose” was too small or because we were trying to teach old dogs new tricks. I’d rather expose the study to people who know how to appraise it critically than train people to do it behind closed doors.

  • Richard Smith

    I can’t resist adding some very personal paragraphs I’ve written on the miseries of peer review. It’s an interesting transition from being a noble editor to being a lowly author and reviewer.

    “Let me begin with my immediate frustrations. For 25 of the past 30 years I’ve been editor (of the BMJ), but now I’m a reviewer and an author. I have just completed a review for the BMJ–of a paper that had interesting new data on an important topic but doubtful methods. Despite my scepticism about peer review I usually accept requests to review, although I always wonder why. It’s time consuming and unpaid and usually my comments disappear into the void. The BMJ, as I expected, rejected the article, primarily on the grounds that the paper wasn’t right for its audience. Like most major journals the BMJ rejects over 90% of the studies it receives, many of them after hours of scrutiny and comment by reviewers. The reviewers’ time is largely wasted because many authors, recognising the arbitrariness of the “publishing game,” simply send the paper elsewhere without revision.
    I think that it would make much more sense simply to publish the paper—on a university website or in an electronic journal with a low threshold—with my comments and those of the other reviewer and let the world decide what it thinks. That is anyway what happens in that many peer reviewed papers disappear without trace after publication, some are torn to pieces, and a few flourish and are absorbed into the body of science. The paper rejected by the BMJ, which may well not now surface for a year, contained data that would fascinate some and inform a current and important debate. I can’t see that any harm would result from it being available to all.
    At the moment I’m also waiting for an opinion on a paper that tells a complicated story of what we see as scientific misconduct on the part of a publisher. Four of us on three continents wrote the paper rapidly because it suddenly became very topical after a major news story. We asked the BMJ to fast track the paper, and it was rapidly rejected with some thoughtful reviews. We did, unusually, revise the paper and submit it to another journal with a request for rapid review. That was about two months ago, and the only thing we’ve heard has been from a reviewer, who happens both to be a friend and to have written a review for the BMJ. He wanted to know if he could simply send the same review, but I told him that—perhaps unfortunately for him—we had revised the paper in the light of his opinion. So he’ll have to review it again. Our chances of getting published in the second journal are perhaps 30%. If the paper is rejected we’ll either get fed up and abandon it or continue our way down the food chain—because you can get virtually anything published if you persist long enough.
    Again I think that much would be gained and really nothing lost if our paper was simply posted on a website with the reviewers’ comments attached.”

    If you’re interested you can read the full article at:

    http://jopm.org/index.php/jpm/article/viewArticle/12/25

  • Bill

    As Richard has noted, the ‘top journals’ seem to occasionally let some very bad science through their peer review process. Science magazine has justifiably gotten a bad rap over this, but in my experience the BMJ is one of the worst offenders. Here are two examples:

    [1] http://www.bmj.com/cgi/content/abstract/329/7480/1450 – the conclusion is that unblinding did not occur, but if you look at table 3 you will see that it clearly did.

    [2] http://jcp.bmj.com/content/60/5/466.abstract – an anti-psychiatric rant with no real science masquerading as a review.

    Here is an idea: why not let members of the public participate in prepublication peer review? Post the paper on a website and invite comments from the public.

  • Kelly Latta

    Richard Smith makes some interesting points regarding peer review, but tying them to biomedical research into the neuroimmune disease CFS could be considered somewhat disingenuous.

    And yes, as fans of JP Ioannidis know, there are many forms of misconduct at peer-reviewed journals, and bias is inherent in the system. Nor is the BMJ immune. In this context the BMJ is perhaps best known for publishing only studies on CFS supporting the biopsychosocial viewpoint – minus the bio. And very few letters and no medical research studies opposing such research are published.

    Given the situation, it is perhaps ironic that Dr. Sudlow sought refuge at the BMJ because the editors of Science refused to give her opinion the consideration she obviously felt it should have been given.

    It should also be noted that editorials from biomedical scientists whose research does not support Dr. Sudlow’s viewpoint and the position of the BMJ were apparently not sought, although many have appeared elsewhere, including in the New York Times.

    Nor did Dr. Smith or BMJ editor Fiona Godlee point out that neither Goodwin et al nor van Kuppeveld et al selected or studied the same patient group as Lombardi et al.

    Given Dr. Sudlow’s complaints regarding patient selection, it would probably have been appropriate to do so, defining the study population being a critical component. However, the Science supplement to Lombardi et al clearly delineated the differences even if the BMJ did not.

    Lombardi et al used not only the non-operationalized 1994 Fukuda definition that Goodwin et al used but also the 2003 Canadian Consensus Definition. The small van Kuppeveld et al study used neither, and its specimens were frozen, not fresh.

    Drs. Wessely and McClure also incorrectly claimed in their editorial that all patients in Lombardi et al were from Reno, Nevada, when the Lombardi et al study supplement clearly stated that patients and controls came from multiple states throughout the country.

    You can’t have it both ways. You can’t complain about patient selection in one study and then not mention patient selection issues in studies reflecting your ideological viewpoint.

    Studies regarding the pathogenicity of XMRV are in their infancy and modern scientists know that many different pathogens can cause the same disease. XMRV is just one candidate regarding the brain disease ME/PVFS/CFS (ICD-10 G93.3).

    Several more XMRV studies in well defined CFS populations with top CFS biomedical experts and retrovirus experts are currently awaiting publication. These are in addition to studies into the already established associations of HERV-K18, EBV and HHV-6A in CFS.

    How a lack of peer review would have clarified a situation in which two entirely different groups (unwell people vs. patients with a severe neuroimmune disease) are being studied under the same vaguely defined term remains unclear.

    Perhaps both peer review and outdated and unproven psychosomatic theories regarding the brain disease CFS need to be overhauled.

    Patsopoulos NA, Ioannidis JP. The use of older studies in meta-analyses of medical interventions: a survey. Open Med 2009;3(2):e62-8.

    Goudsmit E, Stouten B. Chronic fatigue syndrome: editorial bias in the British Medical Journal. Journal of Chronic Fatigue Syndrome 2004;12(4).

    McClure M, Wessely S. Chronic fatigue syndrome and human retrovirus XMRV. BMJ 2010;340:c1099.

  • Robert Matthews

    Physicists have long had the genuine peer-review system of arXiv, where preprints are posted to the site after passing a very basic “nutter filter” – after which it’s a free-for-all. While comments aren’t published, even distinguished authors can expect a kicking if they publish dodgy nonsense, and the fact that it won’t just be a handful of referees who get to see a paper helps keep authors honest and competent.
    That said, there is still a problem with the sheer amount of stuff published on arXiv. One way to help us home in on the good stuff might be to, ahem, copy what they do in some newspaper comment sections and allow people both to rate articles and to rate the comments made. That way one can spot both good and bad papers AND helpful or stupid comments.

  • Garret McMahon

    One small irony here is that a debate centred on scholarly communication in the life sciences, and on how more access to case-control data, methods, and open comment could enhance the scientific process, cannot cite an open copy of Dr Sudlow’s BMJ Observations piece. A deposit of a post-review copy in http://www.era.lib.ed.ac.uk/ might help here.
