
Twimpact factors: can tweets really predict citations?

6 Jan, 12 | by Claire Bower, Digital Comms Manager, @clairebower

A new paper is kicking up a storm in the world of altmetrics (a community that seeks to incorporate social coverage in the assessment of scholarly impact). Analysing the relationship between social metrics and more traditional measures, the study by Gunther Eysenbach in the Journal of Medical Internet Research (JMIR) concludes that highly tweeted papers are more likely to become highly cited.

Not surprisingly, the article, “Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact,” has been tweeted 575 times and, if Eysenbach’s findings hold, should receive a fair number of citations.

Eysenbach’s study looked at a total of 4,208 tweets citing 286 separate JMIR papers. Unsurprisingly, 60% of the tweets were sent within two days of an article’s publication. There was a correlation between the number of tweets about a JMIR article and the number of citations in Google Scholar or Scopus 17-29 months later. Highly tweeted papers were more likely to become highly cited, but the numbers are arguably too small for firm conclusions (only 12 of the 286 papers were highly tweeted).

The key finding in the paper is that “highly tweeted articles were 11 times more likely to be highly cited”, which creates an impressive headline but requires a good deal more context for a full understanding.
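To see what a “11 times more likely” figure means mechanically, it helps to remember it is a risk ratio computed from a 2×2 grid (highly tweeted vs. not, highly cited vs. not). The sketch below is purely illustrative: the cell counts are hypothetical, chosen only to be consistent with the totals reported above (286 papers, 12 of them highly tweeted); the paper’s actual cell counts are not given here.

```python
# Risk ratio from a 2x2 grid: rate of "highly cited" among highly
# tweeted papers divided by the rate among the rest.
# NOTE: the cell counts used below are hypothetical, for illustration only.
def risk_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Ratio of event rates between the exposed and unexposed groups."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Hypothetical split: 9 of the 12 highly tweeted papers become highly
# cited, versus 19 of the remaining 274.
rr = risk_ratio(9, 12, 19, 274)
print(round(rr, 1))  # prints 10.8
```

With such small exposed-group counts (12 papers), the ratio is very sensitive to one or two papers moving cells, which is exactly why the headline figure needs the extra context the paragraph above calls for.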


Phil Davis recently posted a critique of Eysenbach’s article on The Scholarly Kitchen blog, pointing to a number of weaknesses in the study:

  • There are far fewer barriers to microblogging than creating an article citation (“which requires an author to produce a new piece of research, have it vetted by peers, and published in a journal, which is indexed by a reputable source that tallies citations”).
  • Eysenbach has purchased several domain names, possibly with the goal of creating services to calculate and track twimpact and twindex metrics for publications and publishers. According to Davis, “it becomes hard to separate the paper’s contribution to science from its contribution to entrepreneurship”.
  • Eysenbach serves as editor and shareholder of JMIR. “His reference list includes a total of 69 articles citing JMIR, only three of which cite articles for their content — 55 serve to cite data points, and 11 are unaccountable. If listing each paper was important for understanding the paper, the author could have listed them in a data appendix…The practice of serial self-citation by an author simultaneously serving as editor and shareholder of his journal appears as suspicious behavior”.

However, in response to the last point of serial self-citation, JMIR has since issued a correction of the original article, removing the offending citations to dataset articles, and replacing them with a list of articles in an appendix. Jason Priem (co-founder of the altmetrics project) commented on Davis’s post, describing the change as “a lovely example of how grassroots post-publication peer review is continuing to grow into a vital part of the scholarly communication system”.

Whether or not you agree with the validity of Eysenbach’s study, the very fact that it has been published and discussed so widely is surely a testament to the increasing importance of social metrics in evaluating article impact.

Eysenbach has responded to each of Davis’s points in the comments:
  • a) “There are far fewer barriers to microblogging than creating an article citation” – this is not a critique of the study (in terms of questioning the validity of the correlation findings), but of the usefulness of the twimpact factor going forward (the potential to game the system, etc.). These points are discussed in the paper.

    b) “Eysenbach has purchased several domain names…”
    Yes, I disclosed this in the conflict of interest section. I created some neologisms in the paper and registered the domains to prevent them from being hijacked. These domains are inactive. We might use them in the future to display twimpact factors of JMIR articles and/or other articles (similar to what we currently display at…, but with a better interface). I am not sure how this can be a valid “critique” of the study. How many other research groups own domain names related to their research without even disclosing the fact? In hindsight, I should probably not have mentioned this at all. The comment that “it becomes hard to separate the paper’s contribution to science from its contribution to entrepreneurship” is quite cynical given that many great products (Google, etc.) are based on science. And there is nothing wrong with entrepreneurship as a means of turning ideas and research findings into useful products and services.

    c) “Serial self-citation” (at the journal level) – most editorials discuss and cite papers published in their own journal. This is an editorial exploring and discussing the impact of a series of JMIR papers – impact in terms of both social media response and citations – something that has not been done before. The 55 papers we analyzed are cited because we thought it important for the reader to be able to readily determine which article falls into which cell of the 2×2 grid (highly cited/highly tweeted, highly tweeted/low cited, etc.). Neither the authors, the two peer reviewers, nor the independent editor handling this manuscript regarded this as “offending” or “suspicious”.
    JMIR already has a high impact factor (4.7) and is top-ranked in its discipline – we have no need to engage in impact factor gaming, as the cynical remark about “serial self-citation” suggests. Having said that, as you mentioned, a correction was issued within three weeks of publication, before the paper was submitted to various bibliographic databases, moving the references into a separate file. To us, this seems a rather artificial move bowing to the tyranny of the impact factor and citation analysis (deliberately keeping citations out of the reference list solely to avoid skewing the impact factor). This only strengthens the paper’s point that additional metrics (in addition to, not instead of, citation analysis) might not be a bad idea.

    Unfortunately, these “critique” points are less a critique of the science than an attempt to throw dirt at us and see what sticks. Many commenters on the Scholarly Kitchen blog have their own agendas and conflicts of interest (not always disclosed on the blog), and readers should be careful to take these into account as well.

