Increasing openness is a better route to eliminating biases than increasing anonymity
Scientists and advocates of evidence-based medicine rely on a system of publication that is itself guided primarily by beliefs, prejudices, and superstitions rather than good empirical evidence. As Lisa Bero noted in the opening keynote of the 8th Peer Review Congress in Chicago this week, we believe in peer review but are not actually sure what it is.
If you’re not familiar with the Peer Review Congress, every four years a bunch of editors, publishers, researchers, statisticians, librarians and interested others get together to discuss research and findings about how peer review and the other work of journals operate, and how they can be improved. (Disclosure: The BMJ is an organiser of the meeting and I am a member of this year’s organising committee.) If that sounds a bit like navel-gazing by an in crowd, consider the talk on a systematic review of conflicts of interest that began with a declaration of the researchers’ conflicts of interest. It can all get a bit meta.
A key theme of day one of the congress was the tension between, on the one hand, a growing drive for openness in all things to do with science and its publication, and, on the other, a continuing reliance on anonymity in peer review. The BMJ uses a flavour of open peer review, in which reviewers’ names are revealed to the authors, and on publication the reviewers’ names and reports are published alongside the article. But we are in a small minority. Most biomedical journals rely on a system, referred to as “single blind” peer review, in which editors and reviewers know who the authors are, but authors receive reports from anonymous reviewers (usually via a named editor). And this system has a lot of problems.
It has long been suspected that competitors can slow down a paper’s publication by raising obstructions during peer review. But more hidden types of bias and conflict of interest also dog the peer review process. For example, it is now evident that women are much less likely to be asked or suggested as peer reviewers by male editors and authors. Extensive and very compelling data on this issue from the American Geophysical Union were presented by Jody Lerback, who compared the numbers of women suggested and invited as reviewers with the proportion of women in the AGU’s membership. As well as highlighting problems, her talk made plain what steps the AGU has taken as a result, to try to eliminate this particular problem.
In the same session, Elisa Ranieri from Springer-Nature presented data on a trial done at Nature journals of double-blind review—in which authors’ names were hidden from reviewers, as well as reviewers’ names being hidden from authors. This approach is one suggested by people who believe that knowing who the authors are allows hostile reviewers to unfavourably critique papers from their rivals. So, remove the authors’ names and affiliations, the argument goes, and you remove this source of bias. Personally, I’m not persuaded by this argument. When papers are written in a style that includes “we previously showed…,” and include items like funding information and clinical trial registration, how can authors truly be anonymous? Indeed, in a published study on this issue, 65% of blinded reviewers guessed who the authors were, and 84% of those guesses were correct.
Nevertheless, Springer-Nature went ahead with their experiment, in which authors were allowed to opt in to double-blind review. One of the most striking results was how differently authors from different countries took up the offer. Authors in China and India, who experience higher than average rejection rates from journals (for a variety of reasons not necessarily linked to bias), were much more likely to ask for double-blind review than authors from, for example, the US. So, authors who believed they might get a rough deal from the traditional review system chose instead a system that they hoped would eliminate bias. But if that was their hope, they were disappointed: overall, the trial showed that authors who opted in to double-blind review were more likely to experience rejection, both before and after full peer review, than those who chose single-blind. Without a per-country breakdown of rejection rates, one cannot conclude definitively that the double-blind “experiment” was simply recapitulating the higher rejection rate these authors experience under the single-blind system, but that certainly looked like the most likely explanation. Nevertheless, Springer-Nature seem sufficiently persuaded by the results to plan to roll out the double-blind option more widely. Some at the congress also want to see the results of triple-blinding—in which editors don’t know who the authors are—although there might be logistical difficulties in choosing non-conflicted reviewers in this variant of the process.
The proposal for more and more “blinding” may seem reasonable from the perspective of clinical trials, in which it is well established that physicians, patients, or trial coordinators can introduce bias if they know who is in a treatment group and who receives a placebo. But peer review is a less physiological and more sociological phenomenon, and my prejudice—or null hypothesis to be disproved with data—is that increasing openness is a better route to eliminating biases than increasing anonymity. If it is clear who the reviewers are and what they have said, authors and readers can challenge unreasonable demands and behaviours that may arise from bias or conflicts of interest. At The BMJ we will continue pushing for more openness, not less, until the data tell us to do otherwise.
Theo Bloom is an executive editor, The BMJ.