Altmetrics is the latest buzzword in the vocabulary of bibliometricians. It attempts to measure the “impact” of a piece of research by counting the number of times that it’s mentioned in tweets, Facebook pages, blogs, on YouTube, and in news media. That sounds childish, and it is. Jeffrey Beall wrote an article in 2012 with the title “Article-Level Metrics: An Ill-Conceived and Meretricious Idea.” That title sounds a bit strong. On mature consideration, we think it was understated.
Access to unbiased information is important for researchers, and it’s vital for doctors. Their patients’ lives might depend on it. That’s why the AllTrials initiative is so important. Having access to only those trials that favour a treatment has the potential to do great harm to patients, because the literature is biased towards unrealistic success. Making more research public is only worthwhile if people take the trouble to read the papers. Ironically, the huge volume of work published today has driven some people to devise methods to assess papers without bothering to read them. Altmetrics is one such attempt. All one has to do is to look at a few examples to see that it is a menace to public health.
Take, for example, the paper with the second highest altmetric score in 2013, published in the New England Journal of Medicine: “Primary Prevention of Cardiovascular Disease with a Mediterranean Diet.” It was promoted (in a very misleading way) by the journal.
It’s obvious that part of the problem lies with hubristic press releases made by PR officers employed by universities and glamour journals, and hence with the authors who approve them. Presumably the reason for the hubris is to promote the journal, but such publicity misleads doctors and patients. Many of the 2092 tweets related to this article simply gave the title, but inevitably the theme appealed to diet faddists, with plenty of tweets like the following:
The interpretations of the paper promoted by these tweets were mostly desperately inaccurate. Diet studies are, in any case, notoriously unreliable. As John Ioannidis has said, “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome.”
This unfortunate situation comes about partly because most of the data come from non-randomised cohort studies that tell you nothing about causality, and also because the effects of diet on health seem to be quite small.
The study in question was a randomised controlled trial, so it should be free of the problems of cohort studies. But very few tweeters showed any sign of having read the paper. Unsurprisingly, the actual content of the paper is far more nuanced than any tweet could convey. We found no tweets that mentioned the finding from the paper that the diets had no detectable effect on myocardial infarction, no effect on death from cardiovascular causes, and no effect on death from any cause. The only difference was in the number of people who had strokes, and that showed a deeply unimpressive P = 0.04. Some problems were pointed out in the online comments that follow the paper. Post-publication peer review really can work, but you have to read the paper first. Neither did we see any tweets that mentioned the truly impressive list of conflicts of interest of the authors, which ran to an astonishing 419 words.
We conclude that altmetrics are numbers generated by people who don’t understand research, for people who don’t understand research. People who read papers and understand research just don’t need them and should shun them.
Part of the responsibility for this sad situation must lie with the “publish or perish” culture, which has resulted in far too many papers being written. Every paper, however bad, can be published in a journal that claims to be peer-reviewed. Laurie Taylor said it all when he referred to “The British Journal of Half-Baked Neuroscience Findings with Big Popular Impact.”
The irresponsible spin put on papers, especially by “glamour” journals, with the collusion of authors, puts patients at risk. Hyped press releases are rife. These journals hide their results behind paywalls, making it impossible for most people to read them. This sort of publishing is as outdated as the handloom weavers. All papers should be published openly on the web, with an open comments section in which pretensions can be demolished.
Above all, we suggest that you should ignore metrics, all of them, and read the paper.
This is a shortened version of this post.
David Colquhoun, University College London.
Andrew Plested, Leibniz-Institut für Molekulare Pharmakologie (FMP) & Cluster of Excellence, NeuroCure, Charité Universitätsmedizin.
Competing interests: All authors declare that we have read and understood the BMJ Group policy on declaration of interests and we have no relevant interests to declare.