Richard Smith: Accelerating towards the future of publishing science

One of the conclusions of Ben Goldacre's important book Bad Pharma is that our current system of publishing science is broken. More and more people are reaching that conclusion, and pressure is building to blow apart the present system. Goldacre's prime concerns are not only that many drug trials are not published, but also that major journals are making large sums of money while being used to exaggerate the effectiveness and safety of drugs. But the system is also slow, wasteful, inefficient, largely a lottery, anti-innovatory, prone to bias and abuse, and unreliable at sorting studies and detecting errors.

We are entering the phase that the economist Joseph Schumpeter called "creative destruction," when new systems and ways of doing things sweep away the old. As William Gibson famously observed, "the future is already here; it's just not evenly distributed," and in the past few weeks I've seen glimpses of the future.

The first is F1000Research, which began in July and has the slogan "Open science, open data, open peer review." It's the latest invention of Vitek Tracz, whom I always describe as the Picasso of scientific publishing in that he has introduced important innovations not once but repeatedly.

F1000Research will post papers, including opinion pieces and case reports as well as clinical trials and other studies, within hours of submission. If a study includes data, the authors must make it available, as the BMJ now insists for clinical trials. With F1000Research the data will be available through the website. The data are citable, and the creators of the data will be credited when their data are used for new studies.

As the paper is posted it will be sent to reviewers who are asked to make one of three possible judgements: approved, approved with reservations, or not approved. Reviewers are encouraged to read papers quickly and to approve if the paper seems acceptable in that the conclusions are supported by the methods and data. (It seems to me that this can almost always be the case if the conclusions are suitably cautious.) If reviewers approve the paper then they don’t need to write anything but simply give it a green tick, which appears with their name on the website. Once a paper has two ticks it is indexed in Scopus and Embase and probably soon in PubMed and PubMed Central. Papers are being approved in days.

If the reviewers approve with reservations or do not approve, they must give reasons. A judgement of "approved with reservations" is indicated by a green question mark and "not approved" by a red cross. Papers and reviewers' comments will remain on the site even if the papers are not approved, but they will not be indexed.

Readers can comment on papers and spread them through social media (Twitter and Facebook) from the moment they are posted.

I know that this rapid publication and light review makes some, and not only those with vested interests, shudder, and previous attempts to encourage medical authors to post completed studies immediately have not succeeded—despite this having long been the case in high energy physics, astronomy, and mathematics. Perhaps because the timing is right or because F1000Research has been bolder than previous experiments, it does seem to be working. Papers are being submitted. Reviewers are reviewing rapidly and openly.

My second glimpse of the future has been to see an online system of peer review that encourages rapid, complete, and efficient review and pulls together multiple reviews in a comprehensive and visual way that exceeds anything currently available in science. The system has been developed for another area where sorting the wheat from the chaff is a slow, inefficient, messy, and wasteful process. I imagine such a system being used not for prepublication but for postpublication peer review, the “true peer review,” as I’ve long argued.

I can’t be more specific about this system because it’s in development by a start up, but my point is that new and better ways are being developed of doing things that have been done poorly within science publishing. The innovation that I’ve seen is no doubt one of many that are being produced in this era of creative destruction.

A third glimpse of the future is the appearance of the first articles of eLife, the new journal funded by three major research funders, including the Wellcome Trust, which I think of as trying both to shove open access to the top of the hill and to blow out of the water Nature, Science, and other high profile, closed access journals. It's unashamedly elitist and sees itself as top science published by top scientists. In most ways eLife looks like the old system except that it's open access, but part of its mission is "to fully utilize digital media in the presentation of new research," so perhaps something more revolutionary will follow.

As creative destruction proceeds the old have to adapt or die, and I see signs of the BMJ increasingly doing what I believe journals can do best—not publishing science, but campaigning on important issues like making data available, improving the regulation of medical devices, and fighting overtreatment. The great advantage of the old journals is their “brand,” something that may sound irritatingly vague and managerial to some but is immensely valuable. Brand can be exploited to make money—as with New England Journal of Medicine Fried Chicken, as one editor of that journal joked—or to advance important reforms, as the BMJ is doing.

I’ve been impatient for reform for 20 years, and I’ll probably be dead before the journal world I discovered in 1974 is wholly unrecognisable—but I feel the tectonic plates of scientific publishing shifting.

Richard Smith was the editor of the BMJ until 2004 and is director of the United Health Group’s chronic disease initiative.

RS worked briefly and very part-time for Vitek Tracz, was editor of the BMJ and chief executive of the BMJ Publishing Group, served on the board of the Public Library of Science, and was paid US$1000 to review Ben Goldacre's book for eLife (thank you very much, very generous).

  • Pardon my ignorance, but is there a pool of suitably qualified peer reviewers who will have the time and the inclination to drop everything to review a newly posted paper? And who will review the reviewers?

  • Mike Taylor

    Your question is precisely the same one that occupies editors now. Wherever the compulsion to donate time and expertise for reviewing currently comes from, those same motivations will have to work under new models, too. I don't see any reason why they shouldn't: after all, the rewards for providing good, detailed reviews now are negligible, but we do it anyway.

  • Mike Taylor

    BTW, the fact that all comments are held for moderation on this blog is a guaranteed way to kill conversation. Please see

  • Jacob Barhak

    Michael, the readers will review the reviewers. The quality assurance process does not stop with publication. It is a continuous process and therefore has advantages over current processes. Richard is correct here – technology now allows what was not possible before. I am reading this post a few years after publication. New publication systems are still appearing and people are using them. Yet people are still having a hard time detaching from the old peer review customs. Nevertheless, it seems we are moving towards the post-publication direction Richard plotted.