I have taught classes on how to get published in scientific journals in many low and middle income countries, and just about every participant in every class has thought that science journals are biased against research from low and middle income countries. I think that they are as well, but strong evidence of the effect has been lacking. Now we have some strong evidence, which I’ve been digesting on a flight home to London from Bangladesh, where I’ve been attending the board meeting of icddr,b, probably the largest health research institution in a low income country.
We know from previous studies that acceptance rates are higher when first authors come from English-speaking high income countries, and that articles from high income countries have higher citation rates. Indeed, an author’s affiliation with the United States can increase his or her citations by 20% (probably because citations are derived from databases that favour American journals and because Americans cite Americans just as Brits cite Brits). But all this could be explained not by bias but simply by research from high income countries, particularly the US, being better. What has been needed is a study that controls for the quality of the research and even for the reviewer. Now we have such a study.
The study, which comes from Imperial College’s Institute of Global Health Innovation, is a double-blind randomised crossover trial in which 347 clinicians reviewed the same abstracts a month apart, with the apparent source of each abstract switched, without their knowledge, between a low and a high income country. Only three clinicians recognised that the abstracts came from a different source.
The four abstracts in the study were taken from Cochrane Reviews to ensure that they were high quality studies. The studies were randomised to come from the University of Freiburg (Germany), Harvard University (US), University of Addis Ababa (Ethiopia), or University of Mzuzu (Malawi). Thus clinicians might be sent, for example, an abstract from the University of Freiburg and a month later be sent the same abstract, only this time seeming to come from the University of Addis Ababa. Other clinicians would receive the same pair, only the abstract that seemed to come from the University of Addis Ababa would come first. The authors also randomised the journals from which the abstracts seemed to come: the high impact New England Journal of Medicine or the low impact Journal of Community Medicine and Health Education. Altogether there were some 16 pairings, plus four pairings in which the abstract always seemed to come from Oxford, two with the high impact journal and two with the low impact journal. These Oxford papers were a control arm.
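For readers who want to see where the 16 pairings come from, the design above can be sketched in a few lines of Python. This is my own reconstruction of the arithmetic, not the authors’ actual randomisation code, and the institution and journal names are simply those mentioned in the study.

```python
# Reconstruction of the crossover pairings: two high income sources,
# two low income sources, two journals, and two presentation orders.
from itertools import product

HIGH_INCOME = ["University of Freiburg", "Harvard University"]
LOW_INCOME = ["University of Addis Ababa", "University of Mzuzu"]
JOURNALS = ["New England Journal of Medicine",
            "Journal of Community Medicine and Health Education"]

# Each clinician sees the same abstract twice: once apparently from a
# high income source and once from a low income source, in either order.
pairings = []
for high, low, journal in product(HIGH_INCOME, LOW_INCOME, JOURNALS):
    pairings.append(((high, journal), (low, journal)))  # high income first
    pairings.append(((low, journal), (high, journal)))  # low income first

# 2 high x 2 low x 2 journals x 2 orders = 16 experimental pairings
print(len(pairings))  # 16

# Control arm: the abstract seems to come from Oxford both times,
# two pairings with each journal, giving the four control pairings.
controls = [(("University of Oxford", j), ("University of Oxford", j))
            for j in JOURNALS for _ in range(2)]
print(len(controls))  # 4
```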
The clinicians, who were living and practising in England, were told that they were participating in a speed-reading survey. They were asked to score the abstracts on a 100-point scale for strength of the evidence, relevance to them, and the likelihood that they would recommend the abstract to a colleague.
A total of 551 responded to the first abstract, and 347 (63%) responded to both abstracts. There was no significant difference in the score for the strength of the evidence between high and low income countries, but the clinicians scored the abstracts from the low income countries significantly lower (by around 25%) on relevance and likelihood of recommending to colleagues. Importantly, there was no difference in the scores of the abstracts from Oxford.
The authors of the study conclude that the results show clear unconscious bias against research from low income countries. They think that the score for the strength of evidence is not different because “there are well developed and well known criteria upon which this can be assessed”—in other words, there is less room for unconscious bias.
Interestingly the impact factor of the journal had no effect on how clinicians scored the abstract: they gave the same credence to research published in the Journal of Community Medicine and Health Education as that published in the New England Journal of Medicine. I must confess that this result surprised and pleased me; I wonder why authors go to such lengths to publish in a high impact journal if it doesn’t impress the readers.
Two of the authors of this first study explored possible bias against research from low income countries in another way, using the Implicit Association Test, a method from cognitive psychology. Subjects are shown pairs of words, pictures, sounds, or combinations and asked to “click one of two computer keys to categorise stimuli into associated categories.” If the respondents think the two stimuli are consistent then they respond more quickly than if they think them inconsistent.
In this study the authors paired rich countries (Canada, UK, Japan, Germany, France) or poor countries (Malawi, Ethiopia, Cambodia, Liberia, Bangladesh) with words associated with good research (objective, precise, transparent, credible, useful) or with words associated with bad research (biased, vague, dishonest, unreliable, worthless).
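To make the logic of the test concrete, here is a toy illustration of how an Implicit Association Test result is typically scored: the latency difference between congruent and incongruent pairings, scaled by the spread of response times (a simplified D-score). The response times below are invented for illustration, and the study’s exact scoring procedure is not given here.

```python
# Toy Implicit Association Test scoring: a simplified D-score computed
# from hypothetical response times (milliseconds). Illustration only.
from statistics import mean, stdev

congruent = [620, 650, 600, 640, 610, 630]    # e.g. rich+good / poor+bad
incongruent = [780, 820, 760, 800, 790, 810]  # e.g. rich+bad / poor+good

# D-score: mean latency difference scaled by the pooled standard deviation.
pooled_sd = stdev(congruent + incongruent)
d_score = (mean(incongruent) - mean(congruent)) / pooled_sd

# A positive D-score means respondents were faster when rich countries
# were paired with good-research words, i.e. an implicit association.
print(round(d_score, 2))
```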
A total of 321 health professionals and researchers, most of them academics, took the test, and the results showed “a moderately strong association between rich countries and good research and poor countries and bad research.” The strength of the association is of a similar order to that between male gender and science or male gender and careers. Four fifths of the respondents had a “slight to strong implicit association between rich countries and good research.” Older people were more likely to associate rich countries with good research, whereas the 173 (45%) who were peer reviewers were more likely to associate good research with poor countries.
Respondents were also asked whether they agreed or disagreed with the statement “Poor countries are as likely as rich countries to produce good research.” More than half (58%) disagreed, but interestingly there was no correlation between the results of the explicit survey and the scores on the Implicit Association Test. Both conscious and unconscious bias exist against research from low income countries, but they are not correlated; in other words, you might consciously have no bias but be biased unconsciously.
Together these two studies using very different methods provide strong evidence of bias against research from low income countries. The participants in my classes will not be surprised, but they will be pleased to have evidence for their beliefs.
Those reviewing grant proposals and manuscripts from low income countries should be aware of the bias and do all they can to avoid it—perhaps through blind reviewing or by using more reviewers from low income countries (although it is, of course, possible that they have the same bias against research from low income countries).
Richard Smith was the editor of The BMJ until 2004.
Competing interest: RS is an adjunct professor at Imperial College’s Institute of Global Health Innovation but had nothing to do with this research. He is also the chair of the board of icddr,b and is therefore concerned about bias against research from low income countries, but he will not benefit personally from this research (apart perhaps from reflected glory if funding bodies and journal editors take steps to abolish the bias).