A friend from a middle income country writes to me in despair about the way he and his colleagues have been treated by medical journals. His story made me angry at medical journals and the delay, waste, and inefficiency they cause for no obvious benefit.
My friend and his colleagues conducted a huge pragmatic cluster randomised trial of a complex intervention to improve the care of a common and serious condition in communities (not just individual patients). The result was no improvement in the primary endpoint, although there were substantial improvements in the number of patients treated.
The study asked an important question and provided a reliable answer. The paper is clearly written (some of the authors are native English speakers and highly published); the strengths and weaknesses of the study are well described; and the conclusions are not overblown.
The paper is clearly publishable, and none of the many reviewers who have seen it has said otherwise. It is important that it be published; indeed, it would be misconduct for it not to be published.
My friends wanted the study published in a high impact journal, and they were optimistic that such a large study from a middle income country, on an important question, with strong methods and a reliable answer, would make it into a major journal.
There are perhaps three main reasons for publishing in a major journal. Firstly, it might make it more likely that the study will lead to real improvement in the real world. This would be a good reason, but I don’t think it’s true: the route from a study to major scale up is long and complex, and publication is a necessary but small and mostly unimportant step on that path. Secondly, your study may be noticed more, which is probably true; but interviews on the BBC and articles in the New York Times, although flattering, are usually not consequential. The third reason is academic credit, and the sad truth is that where you publish continues to be the major form of academic assessment, even in countries, like Britain, where the assessing authorities warn against it.
For whatever reason my friends sent their study to major journal A. Both reviewers were impressed by the study but had quibbles and, it must be said, misunderstandings. Their comments were, however, cogent, and my friends responded to them clearly, politely, and effectively. Nevertheless, the journal rejected the paper as not a priority, meaning it wasn’t in the top 5% or so that the editors publish. This process took months and consumed many hours of the authors’ and reviewers’ time. It probably took far less of the editors’ time, even though they were the only party paid for their involvement in the process.
Essentially the same thing happened with major journal B and major journal C. Major journal D did at least reject the study quickly, simply saying it wasn’t a priority. (What, you wonder, is a priority if a huge, well done study on a major condition in a middle income country is not? It isn’t, you hope, a sexy study on some new, extremely expensive drug for treating a comparatively rare cancer.)
None of the journals rejected the study on the grounds that it was “negative” (the wrong word), but inevitably you wonder.
Years have now passed, some eight reviewers and multiple editors have assessed the paper, and the study is still not published.
My guess is that the study wasn’t ranked in the top 5-10% that the major journals publish, but was probably in the next 10%. As publishing is something of a lottery, and the top 10% of one journal is not the top 10% of another, there is a logic to continuing to try for a major journal if you care about publishing in one. Most authors, however, probably drop to the second tier or go for a megajournal after two rejections.
My advice to all authors is to consider their objectives. If their objective is real change in the real world, as I believe it should be, then they should publish as quickly as possible—in a megajournal or F1000Research—and put their energy into dissemination and implementation. Unfortunately, most authors are academics and have to worry about getting credit.
I think, however, that it’s tragic that this important study is taking so long to publish, that so much time of the authors and reviewers has been wasted, and that much legitimate scientific discussion (the points made by the reviewers and the authors’ responses) has been lost. This story, which is typical, shows a process hugely wasteful of time, intellectual effort, and money.
And for what benefit? The only logic could be that the process sorts science so that readers can find the most important work by concentrating on a few journals; but we know this is a fiction. Scientists need to see the totality of the evidence, which is scattered across thousands of journals (or not published at all); and we know that major journals have a bias towards the new and sexy, which is why, unsurprisingly, what they publish is more likely to be wrong or retracted than what appears in less grand journals.
We need to do better, and new models for publishing science are emerging.
Richard Smith was the editor of The BMJ until 2004.
Competing interest: RS spent 25 years being well paid for working as an editor on The BMJ, and while chief executive of the BMJ Publishing Group he was also responsible for the group’s specialty journals. He now receives a generous pension from the BMA, which owns The BMJ and its accompanying journals. RS has been paid for consulting on F1000Research https://f1000research.com/, one of the new models for publishing science, and is working on the Open Pharma https://openpharma.blog/ project, which encourages pharmaceutical companies to join funders such as Wellcome and Gates in supporting improvements in the publishing of science.