Recently I asked a leader of a major research funder what proportion of its grants led to a publication. “I’ve no idea,” he answered, “but it’s probably 20-30%. What bothers me the most is that it’s the positive stuff that gets published. You do an experiment day after day until it ‘works.’ You then publish what ‘works’ and not what doesn’t ‘work.’”
I was surprised that he wouldn’t know what proportion of grants led to a publication, that the proportion he guessed was so low, and that he knew there was a clear bias in what was published. Do research funders, especially those spending public money, not have an obligation to insist that all their grants result in some sort of publication (even if it’s simply an explanation of complete failure) and to avoid bias in what is published?
Some parts of the National Institute for Health Research have accepted this obligation, publishing the outcomes of all their grants at full length in their own journals. There is also an insistence that data be made available through managed release if requested.
The Wellcome Trust, a leader in promoting open science, has now taken an important step towards a future where all the research it funds will be made available quickly and in full, including supporting data. What’s more, the publishing process will be controlled not by editors with their baffling idiosyncrasies, but by scientists themselves. Another aim is to encourage a move away from the tyranny of impact factors, which distorts the dissemination of science.
The trust has set up a publishing site called Wellcome Open Research, which is supplied by F1000Research. Those researchers who have been funded by Wellcome are encouraged to publish there, although they are not obliged to. After routine checks of things like ethics committee approval and compliance with standard reporting guidelines, the study is posted together with the full data behind it. It can then be cited. The authors select peer reviewers who must meet certain criteria, including not having published with the authors. The reviewers post their reviews and these too are citable. Anybody can comment at any stage and authors can respond to reviewers. Peer review becomes more of a scientific debate and a process for improvement than an arbitrary judgment. The authors might revise their paper in light of the reviewers’ comments, in which case the study will be given a new citation. Once the study has two positive reviews, which might happen in days, then the study enters PubMed Central, Europe PubMed Central, and other databases.
The Wellcome Trust or any other research body that creates such a publishing platform can vary the “rules” as it wants, but the aim is quick, transparent publication supported by full data.
The ease of publishing should allow the quick publication of all studies, including those that are negative and inconclusive. Funders like Wellcome might insist on the publication of everything they fund, without being subject to the vagaries of the many thousands of journals.
The availability of full data will help with replication, and the trust might choose to encourage researchers to replicate the findings of others, something for which there is currently no incentive. The full availability of data will also help guard against research fraud and the torturing of data to get desired results.
I’m sceptical about the value of prepublication peer review or, indeed, any review short of full assessment in the “market of ideas,” but immediate posting of the study, the availability of full data, and the transparency of the peer review process should allow review that is superior to and faster than the usual long, drawn out, closed, and arbitrary review. The aim of peer review in Wellcome Open Research will not be to deliver a binary decision to publish or not publish, but rather to help improve the study and ensure that the conclusions do not go beyond those justified by the data and the methods.
Scientists and research funders are becoming tired of current methods of publishing science, which are slow, closed, biased, arbitrary, and even corrupt. The thousands of journals mean that science is broken up and scattered, and subject to the vagaries of editors and publishers on, for example, whether studies can be accessed by all, how long studies can be, and whether there’s a chance to respond with criticisms of published studies. The existing system is also inefficient, absorbing the time of scientists as studies work their way down the food chain of journals. It’s not surprising that much research is not published at all. Plus the whole process is enormously expensive, generating huge profits for commercial publishers like Elsevier.
The whole outdated enterprise is kept alive for one main reason: the fact that employers and funders of researchers assess researchers primarily by where they publish. It’s extraordinary to me and many others that the employers, mainly universities, outsource such an important function to an arbitrary and corrupt system.
Research funders, such as the Wellcome Trust, are perhaps the only players with the power to change the system. Individuals, unless they are Nobel prize winners or something equally established, need to comply by publishing in journals. Individual universities are unwilling to step out of line. But funders call the shots in research: if researchers think that publishing quickly and easily within Wellcome Open Research won’t reduce their chance of future funding (and might even increase it) then they will use the system.
Robert Kiley, Wellcome’s head of digital services, is open about wanting to change the system so as to bring benefits to researchers, funders, and broader society. “One of the long term aims of this approach is to start a shift in research and researcher assessment away from journal based measures and towards direct assessment of the output itself, whether it be an article, or in another form such as a dataset or software tool.”
This is another important step to the post-journal world, which will be bad news for current vested interests but good news for everybody else.
Richard Smith was the editor of The BMJ until 2004.
Competing interest: RS acts as a consultant to F1000Research, although he has worked for the company for only a handful of days.