“Hurrah, another group just published the research findings that we are about to report!”
These are words of comfort for authors who feel disappointed when they encounter ideas or findings in the literature similar to their own, when they are “beaten to publication” by the competition. I argue that on such occasions the appropriate feeling is joy. Admittedly, in a world in which publications are currency for obtaining academic degrees and positions, being the first, or the only one, to publish in a domain seems glamorous. However, set against the background of the replication crisis, the glamour quickly fades.
Why being the first is not always something to strive for
It is well documented that many published findings cannot be reproduced; this is referred to as the replication crisis in science. One illustration of this phenomenon comes from clinical intervention research. When claims made in 49 highly cited publications were followed up, only 20 (44%) were reproduced. [1] The findings of the remaining studies were contradicted by subsequent studies, never challenged, or the effects of the interventions turned out to be weaker than originally claimed. Another example comes from diagnostic research. Systematic reviews of diagnostic accuracy studies have shown that estimates of sensitivity and specificity typically decrease over time. [2] Indeed, when new diagnostics are launched and validated, authors tend to claim that these tests work well. As time goes on, the accuracy is usually corrected downwards, probably because later studies are more representative or rigorous.
Examples of replication problems abound in other scientific disciplines too, and this should not surprise us. Visionary academics saw the crisis coming and warned us decades ago. In the 1960s, Derek de Solla Price documented the exponential growth of scientific literature and predicted a stage of devaluation. [3] In the 1990s, Douglas G Altman, concerned about the pressure to publish for career reasons, wrote: “As the system encourages poor research, it is the system that should be changed. We need less research, better research, and research done for the right reasons. Abandoning using the number of publications as a measure of ability would be a start.” [4]
A probabilistic framework also illustrates the point: the chance that a research finding is true depends on the prior probability of it being true, just as the predictive value of a test result depends on the prevalence of a condition. In an environment keen on discovery, where true effect sizes are small and many hypotheses are tested, false-positive research findings are far more likely than true positives. The risk of false-positive findings can be modelled mathematically, and it increases when interests are high, scientific fields are hot, and study methods are flexible. [5] Therefore, studies with new and surprising findings should be regarded with suspicion rather than admiration until confirmatory studies separate the wheat from the chaff.
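The analogy with diagnostic testing can be made concrete with a back-of-the-envelope calculation: treating each tested hypothesis like a patient being screened, the probability that a “significant” finding is true is the positive predictive value. The sketch below is a minimal illustration of this reasoning, not taken from the cited paper, and the parameter values are hypothetical.

```python
def prob_finding_true(prior, power, alpha):
    """Probability that a statistically significant finding is true,
    given the prior probability that the hypothesis is true, the
    statistical power, and the significance threshold (alpha).
    Analogous to the positive predictive value of a diagnostic test."""
    true_positives = prior * power          # true hypotheses correctly detected
    false_positives = (1 - prior) * alpha   # null hypotheses wrongly "confirmed"
    return true_positives / (true_positives + false_positives)

# A hot field testing many long-shot hypotheses (prior = 10%), well powered:
print(round(prob_finding_true(prior=0.10, power=0.80, alpha=0.05), 2))  # 0.64
# The same field with small, underpowered studies:
print(round(prob_finding_true(prior=0.10, power=0.20, alpha=0.05), 2))  # 0.31
```

Even under these fairly generous assumptions, a substantial share of “discoveries” are false positives, and flexible methods or selective reporting (which inflate the effective alpha) only make the figure worse.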
Why many still want to be the first
Being first is often associated with attractive things such as innovation, creativity, and intelligence. Discovering new patterns in data is indeed a hallmark of scientific creativity, but methodological rigour and reproducibility are key to science as well. In fact, the art of science may well lie in the balance between the creative ability to see patterns and the critical ability to unmask cognitive biases (self-deception).  The current system of research incentives also pushes investigators to claim that their work is novel. Several initiatives have been proposed to counter this trend, such as rewarding only those publications that are replicated, encouraging preregistration of study protocols, diversifying peer review, and funding replication studies. [6, 7]
Reasons to feel proud of a research paper other than being the first
There are plenty of reasons to be proud of our publications. Our papers could, for example, inspire others to think (or not) along the same lines, use and refine our methods, or simply enjoy a well-written synthesis. Our papers could also be picked up by systematic reviews and, hopefully, classified as studies with low risk of bias. If this happens and our findings are similar to those of others, our work would reinforce the evidence needed for policy. If our findings differ from others’, our studies could shed light on the heterogeneity of a phenomenon and help to define subpopulations.
In the end, the real question is whether we choose to pursue the dream of a unique and spectacular, but isolated discovery or rather choose to belong to a movement that contributes to sound science.
Kristien Verdonck works at the Department of Public Health, Institute of Tropical Medicine, Antwerp, Belgium. As a postdoctoral researcher, she has several times encountered young investigators who are disheartened when they realise that other authors are publishing in what they consider to be their domain.
Competing interests: None declared
1. Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA 2005;294(2):218-28.
2. Cohen JF, Korevaar DA, Wang J, Leeflang MM, Bossuyt PM. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies. J Clin Epidemiol 2016;77:60-7.
3. Fernandez-Cano A, Torralbo M, Vallejo M. Reconsidering Price’s model of scientific growth: an overview. Scientometrics 2004;61(3):301-21.
4. Altman DG. The scandal of poor medical research. BMJ 1994;308(6924):283-4.
5. Ioannidis JP. Why most published research findings are false. PLoS Med 2005;2(8):e124.
6. Munafo MR, Nosek BA, Bishop DVM, et al. A manifesto for reproducible science. Nature Human Behaviour 2017;1:0021.
7. Ioannidis JP. How to make more published research true. PLoS Med 2014;11(10):e1001747.