The Journal Impact Factor revisited

On several occasions in the past I have moaned about the undue emphasis placed on the Journal Impact Factor (JIF), while accepting that, wisely or otherwise, many universities and faculties that should know better rely on it for judging a candidate’s suitability for promotion, tenure, etc. As a consequence, many authors on the cusp of such important academic milestones are determined to publish in journals with high impact factors even when other journals may be more appropriate. Gradually, it seems, the balance is swinging in a direction that puts the JIF in better perspective.

A group of journal editors issued a Declaration on Research Assessment (DORA) that was recently publicized in Science News. The so-called San Francisco group agreed that the JIF has become “an obsession in world science. Impact factors warp the way that research is conducted, reported, and funded.” The DORA statement “makes 18 recommendations for change to reduce the dominant role of the JIF in evaluating research and researchers and instead to focus on the content of primary research papers, regardless of publication venue.”

Today’s declaration is timed to coincide with editorials in many scientific journals. The signatories are an impressive group of top-ranking journal editors; importantly, they are not just editors of ‘also-ran’ journals. A complete list of signatories to date is available online. As the article notes, “There are a number of citation ranking systems today, but the oldest and most influential is the so-called ‘two-year JIF’ devised by Eugene Garfield in the early 1950s.” The article continues, “Even though the JIF is only a measure of a journal’s average citation frequency, it has become a powerful proxy for scientific value and is being widely misused to assess individual scientists and research institutions, say the DORA framers. The JIF has become even more powerful in China, India, and other nations emerging as global research powers.”

The San Francisco declaration cites studies that outline known defects in the JIF: distortions that skew results within journals, that gloss over differences between fields, and that lump primary research articles in with much more easily cited review articles. Further, the JIF can be “gamed” by editors and authors, while the data used to compute the JIF “are neither transparent nor openly available to the public,” according to DORA.

Because the JIF is based on the mean of the citations to papers in a given journal, rather than the median, a handful of highly cited papers can drive the overall JIF, explains Bernd Pulverer, Chief Editor of The EMBO Journal: “My favorite example is the first paper on the sequencing of the human genome. This paper, which has been cited just under 10,000 times to date, single-handedly increased Nature’s JIF for a couple of years.”
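Pulverer’s mean-versus-median point is easy to see with a little arithmetic. Here is a minimal sketch with hypothetical citation counts (the numbers are invented for illustration; one outlier paper plays the role of the genome-sequencing blockbuster):

```python
from statistics import mean, median

# Hypothetical citation counts for ten papers in one journal over the
# JIF counting window; the 9800 is a single blockbuster outlier.
citations = [2, 3, 1, 0, 4, 2, 5, 1, 3, 9800]

print(mean(citations))    # 982.1 — the mean is dominated by the one outlier
print(median(citations))  # 2.5  — the median reflects the typical paper
```

A JIF-style average would report roughly 982 citations per paper, even though the typical paper in this journal earned two or three — which is exactly why a mean-based metric says little about any individual article.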

“The Journal Impact Factor (JIF) was developed to help librarians make subscription decisions, but it’s become a proxy for the quality of research,” says Stefano Bertuzzi, ASCB Executive Director, one of more than 70 institutional leaders to sign the declaration on behalf of their organizations. “Researchers are now judged by where they publish, not by what they publish. This is no longer a question of selling subscriptions. The ‘high-impact’ obsession is warping our scientific judgment, damaging careers, and wasting time and valuable work.”

The SF declaration urges all stakeholders to focus on the content of papers rather than the JIF of the journals in which they were published, says Bertuzzi: “The connection is flawed, and the importance of a finding as reflected by the light of a high JIF number is often completely misleading, because it is always only a very small number of papers published in a journal that receive most of the citations, so it is flawed to measure the impact of a single article by this metric. Great papers appear in journals with low JIFs and vice versa.”

Michael Marks, one of the four editors of Traffic who signed DORA, acknowledges that the group realized that the scientific world has been using impact factors inappropriately. “Initially our gut reaction was to blame the JIF itself, but it’s not the JIF’s fault,” says Marks. “It’s our use of the JIF that’s the problem.”

DORA’s 18 recommendations call for sweeping changes in scientific assessment, says David Drubin. They will hopefully lead to “a change in the culture where people will choose the journals that they publish in not on the prestige but on the fit. Is the format correct? Is the audience correct? Does the editorial board have the appropriate expertise?” Editor’s note: This is long overdue. I hope the DORA insurrection succeeds.
