24 Feb, 12 | by BMJ
“As the volume of academic literature explodes, scholars rely on filters to select the most relevant and significant sources from the rest,” the altmetrics manifesto argues. “Unfortunately, scholarship’s three main filters for importance are failing.” Peer review “has served scholarship well” but has become slow and unwieldy and rewards conventional thinking. Citation-counting measures such as the h-index take too long to accumulate. And the impact factor of journals gets misapplied as a way to assess an individual researcher’s performance, which it wasn’t designed to do.
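For readers unfamiliar with the h-index mentioned above, its standard definition is the largest number h such that a researcher has h papers each cited at least h times. A minimal sketch (the function name and sample citation counts are illustrative, not from any real dataset):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Five papers with citation counts 10, 8, 5, 4, 3: four papers have >= 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Because the index depends on citations accumulating paper by paper, it can take years to reflect a researcher's recent work, which is the slowness the manifesto objects to.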
Various tools provide an easy interface for retrieving readership metrics for a researcher. Until recently, however, none of them allowed users to choose what is included or enabled non-traditional artefacts to be combined with traditional ones. This is where Total-Impact, a new offering from the altmetrics community, comes in.
It is a tool aimed primarily at researchers who want to know how many times their work has been downloaded, bookmarked, or blogged, and at research groups who want to view the broad impact of their work and see what has provoked interest. It may also appeal to funders and repositories wishing to report on the impact of research beyond traditional methods. Metrics are computed from data sources including CrossRef, Mendeley, Slideshare, Dryad, PLoS ALM (PLoS article level metrics), Facebook, CiteULike, Wikipedia, Delicious, PubMed, and Research Blogging, to name but a few. (Full list available here)
“Total-Impact data can be highlighted as indications of the minimum impact a research artifact has made on the community or explored more deeply to see who is citing, bookmarking, and otherwise using specific research”. However, the website openly admits that Total-Impact is in early development and has many limitations. Potential users are warned not to use the tool in the following ways:
- as indication of comprehensive impact
Total-Impact is in early development. See limitations and take it all with a grain of salt.
- for serious comparison
Total-Impact is currently better at collecting comprehensive metrics for some artifacts than for others, in ways that are not clear in the report. Extreme care should be taken in comparisons, and numbers should be considered minimums. Even more care should be taken in comparing collections of artifacts, since Total-Impact is currently better at identifying artifacts specified in some ways than in others. Finally, some of these metrics can be easily gamed. This is one reason we believe having many metrics is valuable.
- as if we knew exactly what it all means
The meaning of these metrics is not yet well understood.
- as a substitute for personal judgement of quality
Metrics are only one part of the story. Look at the research artifact for yourself and talk about it with informed colleagues.
A major difficulty experienced by the developers is finding sources of open data. Another technical challenge for altmetrics is what to do about multiple digital “addresses” for a specific article online. Someone who tweets about a paper will probably link to a URL but not include the digital object identifier, or DOI, that makes the paper more permanently findable online, even if the URL changes. Despite these struggles, it will be interesting to see the significance of this tool in “uncovering the invisible impact of research”.
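The multiple-address problem above is typically attacked by normalising whatever link a tweet or blog post contains back to a DOI where one is present. A minimal sketch, assuming a simplified regular expression for modern DOIs (real-world DOI matching needs more care, e.g. trailing punctuation):

```python
import re

# Simplified pattern for modern DOIs (prefix "10." + registrant code + "/" + suffix).
# An assumption for illustration; production matchers handle more edge cases.
DOI_RE = re.compile(r'10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def extract_doi(text):
    """Return the first DOI-like string found in a URL or free text, or None."""
    match = DOI_RE.search(text)
    return match.group(0) if match else None

print(extract_doi("https://doi.org/10.1371/journal.pone.0000308"))
# → 10.1371/journal.pone.0000308
```

A tweet linking to a publisher's landing page rather than a doi.org URL often contains no DOI at all, which is exactly why such mentions are hard to tie back to the permanent record.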