Richard Smith: Beware journals, especially “top” ones

Dave Sackett, the father of evidence based medicine, used to warn people against reading journals. They took up time that could be better spent and gave you fragments of evidence, not the whole picture. This all felt uncomfortable to me when I was editor of the BMJ.

But ironically it was Dave who made the suggestion that led to Short Cuts, the section in the BMJ that is headed “All you need read in the other general journals.” The thinking is that if you know of the research in the major general journals you will know what is new, important, and really matters. Dave’s suggestion is ironic because growing evidence—some of it ultra-ironically in a recent Short Cuts—suggests that concentrating on such journals does give you a seriously and systematically biased view of the state of evidence.

The idea that reading only top journals will give you a distorted view of the world is perhaps almost obvious when you reflect on it, but I first began to understand it when I reviewed a paper for PLoS Medicine by Neal Young, John Ioannidis, and Omar Al-Ubaydli in which they argued that this was the case. When the article was published I blogged on it, and so that I can’t (or perhaps can) be accused of self-plagiarism, here is some of what I wrote:

“Unusually for a scientific publication, [the authors] use economic concepts to make their case [that top journals are distorting science], and by doing so they illustrate the value of crossing disciplinary boundaries. Their argument is built around “the winner’s curse.” Imagine many firms competing for a television franchise. Each will try to work out the value of the franchise, and inevitably there will be a range of bids. If the franchise is simply awarded to the highest bidder then there’s a high chance that that bid is too high, meaning that the winner will lose money — hence “the winner’s curse.” Those who run such bids often recognise the problem of the curse and discount the highest bid or go for a lower bid.

This phenomenon operates in science publishing because the elite journals that accept only a fraction of papers submitted to them go for the “best” and are thus likely to be publishing papers that are suffering from the winner’s curse — for example, in that they give dramatic results that are a considerable distance from the “true” results. They are exciting outliers — and so very attractive to the elite journals. The articles that the high impact journals publish are bound to be atypical and will present a distorted view of science, leading to false conclusions and “misallocation of resources.”
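The selection mechanism described above can be sketched in a few lines of Python. This is a toy simulation, not anything from the paper: the true effect size, noise level, and acceptance fraction are all invented numbers chosen only to make the bias visible.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.2    # the "true" effect size every study is estimating
NOISE_SD = 0.3       # sampling noise in each study's estimate
N_STUDIES = 1000     # studies competing for publication
TOP_FRACTION = 0.05  # an elite journal accepts only the most dramatic 5%

# Each study's estimate is the truth plus random sampling noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

# The elite journal selects the largest, most "exciting" estimates.
published = sorted(estimates, reverse=True)[: int(N_STUDIES * TOP_FRACTION)]

mean_all = sum(estimates) / len(estimates)
mean_published = sum(published) / len(published)

print(f"true effect:         {TRUE_EFFECT:.2f}")
print(f"mean of all studies: {mean_all:.2f}")   # close to the truth
print(f"mean of published:   {mean_published:.2f}")  # well above the truth
```

Averaged over all studies the estimates sit close to the truth; the published subset, chosen precisely because its results are outliers, overstates the effect considerably — which is the winner's curse in publishing form.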

The authors had some evidence to support their theory. A study from JAMA, again ironically, showed that of the 49 most highly cited papers on medical interventions published in high profile journals between 1990 and 2004, a quarter of the randomised trials and five of six non-randomised studies had been contradicted or found to be exaggerated by 2005. Now I read more supportive evidence in Short Cuts. Again it’s a study from JAMA looking at original studies of biomarkers with 400 citations or more from 24 highly cited journals, including the BMJ. These studies were compared with subsequent meta-analyses that evaluated the same biomarkers, and of the 35 highly cited original studies 29 showed an effect size larger than that in the meta-analyses.

It would be unwise, says an editorial in JAMA, to discount biomarker research, but the editorial doesn’t draw the conclusion that it would be wise to stay away from journals like JAMA and, sadly, from Short Cuts.

Competing interest: RS was the editor of the BMJ until 2004 and is on the board of the Public Library of Science. Plus he is a known curmudgeon, nihilist, and iconoclast.

  • Mangesh Thorat

    As they are published in a top journal, how distant are these studies likely to be from the truth?

  • Huw Llewelyn

    I suppose this illustrates the importance of each reader arriving at a personal opinion about published evidence. The question for the reader to ask is “If I were to repeat the study, what is the probability of getting a similar result within the same bounds?” For this probability to be high, the probability of non-replication (NR) due to a number of factors has to be low. These are a low probability of: (1) NR due to chance e.g. with a low P value, (2) NR suggested by contradictory results in other studies, (3) NR due to unreliable methods, (4) NR due to poor description of methods or results, (5) NR due to authors’ dishonesty, (6) NR due to difference in local subjects, etc. To these should be added: ‘NR due to biased selection of outlier result by a journal editor!’
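If the non-replication factors listed in the comment are treated as roughly independent, the overall probability of replication is the product of each factor's complement. A toy calculation — the individual probabilities below are entirely made up for illustration, not taken from the comment:

```python
# Hypothetical per-factor probabilities of non-replication (NR).
# All numbers are illustrative assumptions, not estimates from any study.
nr_probs = {
    "chance (despite a low P value)": 0.05,
    "contradictory results in other studies": 0.05,
    "unreliable methods": 0.10,
    "poorly described methods or results": 0.05,
    "authors' dishonesty": 0.01,
    "difference in local subjects": 0.10,
    "editor's biased selection of an outlier result": 0.15,
}

# Assuming independence, the study replicates only if no single
# factor causes non-replication.
p_replication = 1.0
for p_nr in nr_probs.values():
    p_replication *= 1.0 - p_nr

print(f"P(replication) = {p_replication:.2f}")
```

Even when every individual risk looks modest, the product shrinks quickly — and adding the editor's outlier-selection term lowers it further, which is the comment's point.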