Richard Smith: The beginning of the end for impact factors and journals

Something has just happened that will almost certainly end the tyranny of impact factors and may well mark another step towards the extinction of most scientific journals. Did you notice it? Probably not, and even if you did, you may not have understood what it was or what it may lead to.

It was the appearance of something called rather clunkily “article level metrics.” These are a variety of scores and other bits of information attached to each article in the publications of the Public Library of Science (where I’m on the board). They shift attention from journals to articles, particularly for the academic bean counters anxious to find a convenient and low cost way of ranking academics.

To illustrate the metrics let’s consider the article “Why most published research findings are false” by John Ioannidis, the most popular article ever published in PLoS Medicine, which has just celebrated its fifth birthday. (If you haven’t ever read the article, you should: it’s very important. As you read it you will add to its metrics.)

You can click on the tab at the top of the article entitled “metrics.” When you get to the metrics the first thing that you’ll see is that the article has been viewed 239 697 times since it was published in August 2005. The number of page views will actually be more by the time you access the article because the data are updated every 24 hours, and we know from a graph that shows the growth of page views over time that page views of this article are continuing to grow. The shape of the graph is clearly important. Many articles will cease to be viewed after a while—and so the graph will flatten. John’s article continues to command attention.

We can also see that there have been 48 680 downloads of the PDF of the article. This probably reflects the number of people printing out the article to read and keep it. A high ratio of PDF downloads to page views probably means that many people have found the article valuable.
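As a rough illustration of that ratio (using only the figures quoted above; the function name and the idea of flagging a "high" ratio are my own, not anything PLoS computes), the arithmetic looks like this:

```python
# Sketch: the fraction of page views that led to a PDF download.
# A high ratio arguably signals that readers found the article worth keeping.
def download_ratio(pdf_downloads: int, page_views: int) -> float:
    """Fraction of page views that resulted in a PDF download."""
    if page_views == 0:
        return 0.0
    return pdf_downloads / page_views

# Figures for the Ioannidis article quoted in the text above.
ratio = download_ratio(48_680, 239_697)
print(f"{ratio:.1%}")  # roughly one in five viewers downloaded the PDF
```

On these numbers, about 20% of viewers kept a copy, which is what makes the downloads-to-views ratio a plausible, if crude, signal of perceived value.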

Next you can see that the article has been cited 110 times in the Scopus database, 58 times in PubMed Central, and 98 times in CrossRef. Many of these citations will be the same, but different databases include different journals. Citations are used to calculate the impact factor, but these citations come from only one (expensive) database. It’s better to use more than one database. The number of citations for John’s article is very high, especially when we remember that many articles are never cited.
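Because the same citing paper can appear in Scopus, PubMed Central, and CrossRef, naively summing the three counts would overstate the total; the honest figure is the number of distinct citing articles. A minimal sketch of de-duplicating by identifier (the DOIs below are invented placeholders, not real citations of the article):

```python
# Sketch: merge citation lists from several overlapping databases,
# de-duplicating by DOI so each citing article is counted once.
def merged_citation_count(*citation_lists: list[str]) -> int:
    """Count distinct citing articles across overlapping databases."""
    seen: set[str] = set()
    for dois in citation_lists:
        # DOIs are case-insensitive, so normalise before comparing.
        seen.update(doi.lower() for doi in dois)
    return len(seen)

# Placeholder DOIs standing in for each database's citation list.
scopus = ["10.1000/a", "10.1000/b", "10.1000/c"]
pubmed_central = ["10.1000/b", "10.1000/d"]
crossref = ["10.1000/A", "10.1000/e"]
print(merged_citation_count(scopus, pubmed_central, crossref))  # 5 distinct citers
```

Here the three lists sum to 7 entries but contain only 5 distinct citing articles, which is why counts from different databases cannot simply be added together.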

Citing an article usually indicates that other authors have seen value in it, although they could be citing it to point out its many flaws. Citations are obviously driven by researchers and other authors, and many doctors publish little or nothing—so when assessing the value of a piece of medical research it makes a lot of sense to consider data on readers as well as citations.

But there is still more. You can see that the article has been mentioned 17 times in blogs on the Postgenomic blog site, which collects science blogs from many different sites. Academic bean counters may be snotty about blogs, but increasingly blogs are the way that scientists communicate with each other—avoiding the misery of peer review and the wild inaccuracies of journalists.

You can also see that 105 people have bookmarked the article in CiteULike, a site for collecting references, meaning probably that the article has been or will be cited in articles. Eighteen people have also bookmarked the reference in Connotea.

This is all a beginning. PLoS plans to add more metrics. What is crucial is that the metrics can be collected automatically. It may be possible, for example, to measure references in parliaments, official reports, Cochrane reviews, or any news media.

Slowly but surely these metrics will become much superior to using the impact factor of the journal in which an article is published as a surrogate for the impact of the article itself. Although a routine practice, this is wholly unscientific: there is very little correlation between the impact of a journal and the impact of the articles it publishes, because the impact factor of the journal is driven by a few articles that are very highly cited.

Plus the metrics give a real time and much broader measure of the influence of an article. Increasingly, governments and research funders are interested not just in the number of times an article is cited in other publications (an incestuous and self serving measure) but in the impact it has in the real world, the changes it leads to.

So that’s why article level metrics might doom the impact factor, but why might they signal an end to many journals? It’s because they lead to articles rather than journals being what matters, and the articles can then be published quickly on databases rather than in journals. PLoS One is already publishing around 500 papers a month, and other publishers are beginning to copy it.

The edifice of journals is beginning to crack—and not before time.

Competing interest: Richard Smith is on the board of the Public Library of Science and has been an enthusiast for open access publishing for 15 years.


  • Fascinating, and in my view, dead right. It’s delightful to see that Ioannidis’ article has been downloaded 48,680 times. It’s at least as impressive to see that Peter Lawrence’s article, Real Lives and White Lies in the Funding of Scientific Research, has been viewed 31,026 times and downloaded 3,711 times since it was published on September 15th, this year.
    Both papers deserve their popularity, but that doesn’t mean that citations and downloads are a good way to judge original research. One obvious problem concerns the number of people in the field. Citations give you the numerator but not the denominator. All one has to do is look at individuals who are eminent in your own field to see that the number of citations of individual papers can bear little relation to their value, as judged ten or twenty years later.

    It is a problem that bibliometry has become a job in its own right. If you make your living from bibliometry, you can no more afford to admit that it doesn’t work than a homeopath can.


  • Liz Wager

    Article-level metrics certainly make sense, but how long will it take the academic establishment to wean itself off its horrible dependence on impact factors for academic appointments and measuring research output? Over-reliance on impact factors leads to all sorts of distortions and problems, so the sooner we can end it, the better. So should I start wearing an ‘Impact factors, no thanks’ t-shirt, or do you have some better ideas for how we can lobby for this?


  • Alex

    Finally, Tim Berners-Lee’s revolution returns to where it started: scientific research.

  • Simon Chapman

    As a former long-time BMJ Group editor (Tobacco Control), I know that these stats have long been available behind the visible pages for editors to look at. They were incredibly useful in providing evidence at editorial planning meetings about the sorts of papers that attract lots of interest versus those that only the author, her partner and her mum & dad open. So I'm intrigued that more journals don't make them public.

    A thought: if you look at online newspapers these days, they almost all have “most viewed” lists. Many of these feature promise of raunch, nudity and all sorts of other fun stuff. Might we see more scientific articles submitted with keywords designed to entice?

  • Carl May

    Article-level metrics will mean that personal metrics will become more important. Reviewing a pile of CVs sent to me as an external assessor for a prestigious appointment a month or so ago, I was struck that almost all of the applicants made reference to their h-index and other personal citation data. This doesn't mean the end of journals – journal publishers are already exploring ways to monetise these metrics. What it might mean the end of is the highly restrictive model of citation measurement that companies like Thomson ISI use for their citation metrics. You cite Scopus in your article, but the real advance is free to the user: Harzing's Publish or Perish software, which uses Google Scholar to track citations.