By Olavo Amaral
Awareness of reproducibility issues in science has been on the rise in recent years, with systematic replication efforts arising in fields such as psychology, economics, cancer biology and the social sciences. The low reproducibility rates in some of these areas raise the question of whether irreproducible results can be predicted from particular features of the original publications. Whether reproducibility can be accurately estimated from published information has major implications not only for choosing what to believe or what is worth replicating, but also for how we assess and fund science.
The question of whether researchers can estimate the reproducibility of published findings has been studied in replication initiatives in psychology (see also this), economics and the social sciences, and the answer is that they are reasonably good at it. The pooled prediction accuracy across these four studies is around 66% for individual surveys and 73% for prediction markets, in which a large group of researchers interact in a stock market-like interface to generate predictions. Most striking of all was the study of 21 social science articles in Science and Nature, in which the market ranked all 14 replicated studies ahead of the 7 non-replicated ones (if you want to try your hand at it, there is an online quiz available). It has even been demonstrated that laypeople without a PhD in psychology show some accuracy in predicting reproducibility in these studies.
That said, evaluating the plausibility of findings in the social sciences can often be done with common sense alone – what is the probability, after all, that simply looking at a picture of Rodin’s The Thinker will lead you to express weaker religious beliefs (a finding so counterintuitive that not even the original author believes it is likely to be true anymore)? Using common sense in laboratory science, however, is likely to be harder – unless you are an expert in the field, you will hardly have a good intuition of whether overexpressing protein X is likely to increase the expression of gene Y.
Of course, there might be other ways to assess the probability that these studies are reproducible. These include indicators of methodological rigour such as randomisation and blinding, which are widely thought to reduce the risk of bias, as well as effect sizes and statistical results – which have been shown to correlate with reproducibility in psychology studies. Nevertheless, we have little data on whether this predictive ability holds for biomedical research, as studies in this direction have so far analysed only a handful of articles.
The Brazilian Reproducibility Initiative, which aims to replicate between 60 and 100 experiments from Brazilian biomedical science using common laboratory methods, provides a unique opportunity to study this question. Replications will take place within a countrywide network over the course of 2020, with results due in mid-2021. With this in mind, we are recruiting experimental researchers to predict the probability that the results will be replicated – as well as to explain the reasons for their predictions!
Our prediction project started last November, with 44 researchers making a total of 880 predictions on 60 experiments (with 1,259 trades in the prediction markets). However, we want to increase our sample size to achieve greater accuracy in the individual markets for each method – therefore, we are recruiting participants for a second round of predictions!
Registrations to participate in the project are open until January 24th to anyone over 18 with experience in experimental research in an academic environment. If you are interested, fill in the form at https://forms.gle/xz65Zd4caBJUzFpR7. Participation involves around 2 hours of work, distributed over a 4-week period, in which researchers will make predictions on experiments using one of the methods included in the Initiative (MTT assay, RT-PCR or elevated plus maze).
Participants will initially be asked to make predictions on the reproducibility of 20 experiments in a survey. Once they complete the survey, they will enter the markets, where they will receive credits to bet on the results of individual experiments in a stock market-like environment, along with other participants. Credits can later be redeemed as Amazon gift cards according to prediction success once the experimental results are in.
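For readers curious how a stock market-like environment turns individual bets into a collective probability, here is a minimal sketch of one common mechanism, Hanson's logarithmic market scoring rule (LMSR). The post does not specify which platform or scoring rule the Initiative actually uses, so the class, the liquidity parameter and the credit amounts below are purely illustrative assumptions rather than a description of the real system.

```python
import math

class LMSRMarket:
    """Toy prediction market for a binary outcome ("replicates" vs. "does not"),
    using the logarithmic market scoring rule. Illustrative only -- not the
    Initiative's actual platform."""

    def __init__(self, liquidity=10.0):
        self.b = liquidity          # higher b -> prices move more slowly per trade
        self.shares = [0.0, 0.0]    # shares sold for [replicates, does not replicate]

    def cost(self, shares):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome):
        # Current price of an outcome, interpretable as its implied probability
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, amount):
        # Credits a trader pays to buy `amount` shares of `outcome`
        new_shares = list(self.shares)
        new_shares[outcome] += amount
        fee = self.cost(new_shares) - self.cost(self.shares)
        self.shares = new_shares
        return fee

market = LMSRMarket()
print(f"Initial P(replicates) = {market.price(0):.2f}")    # 0.50 before any trades
spent = market.buy(0, 5.0)                                  # one trader bets on replication
print(f"Spent {spent:.2f} credits; P(replicates) now {market.price(0):.2f}")
```

In a market of this kind, the shares a participant holds pay off only if the corresponding outcome occurs once the replication result is known – which is roughly what redeeming credits "according to prediction success" amounts to.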
Time is running out for participation; thus, if you want to help us answer this vital question, register as soon as possible, and good luck with your predictions!
Olavo Amaral is an associate professor at the Institute of Medical Biochemistry Leopoldo de Meis of the Federal University of Rio de Janeiro, and project coordinator of the Brazilian Reproducibility Initiative.