The Journal Impact Factor (JIF) – a measure reflecting the average number of citations to recent articles published in a journal – has been widely critiqued as a measure of individual academic performance. However, it is unclear whether these criticisms and high-profile declarations, such as DORA, have led to significant cultural change. In this post, Erin McKiernan, Juan Pablo Alperin and Alice Fleerackers present evidence from a recent study of review, promotion and tenure documents, showing the extent to which the JIF remains embedded as a measure of success in the academic job market.
The Journal Impact Factor: few scholars will be unfamiliar with this controversial metric. Appearing on journal websites, in academic CVs, and in hiring decisions, it is both the most widely used and the most criticized research metric in existence.
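For readers less familiar with the metric, here is a rough sketch of how the standard two-year JIF is calculated (a simplified version; the precise definition of which items count as "citable" rests with Clarivate, which publishes the metric):

\[
\mathrm{JIF}_{2023} = \frac{\text{citations received in 2023 to items the journal published in 2021 and 2022}}{\text{number of citable items the journal published in 2021 and 2022}}
\]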
Although the Journal Impact Factor was originally developed to help libraries make indexing and purchasing decisions for their journal collections, it has become a proxy for quality—not just of the journals academics publish in, but of the academics themselves. Many now believe that publishing work in high JIF journals is an essential step to achieving academic success.
And although many have raised concerns about the JIF’s use — and potential misuse — in current academic evaluation systems, little is known about the extent to which the metric features in tenure and promotion decisions.
To find out, we examined more than 860 review, promotion, and tenure (RPT) documents from universities across Canada and the US for mentions of the JIF. Here’s what we found:
How often is the Journal Impact Factor mentioned in tenure guidelines?
The JIF appeared in many of the RPT documents we collected, especially those from institutions with a strong focus on research output.
More than 40% of research-intensive institutions (R-type) mentioned the JIF, as did 18% of master’s institutions (M-type). We also identified over a dozen terms that alluded to the JIF without mentioning it explicitly, suggesting that our results may underestimate its use (Fig. 1).
Do tenure guidelines support the Journal Impact Factor’s use?
Of the institutions that mentioned the JIF, 87% were supportive of its use. Only 13% were cautionary about using the metric in tenure considerations. None heavily criticized the JIF or prohibited its use.
For example, some guidelines described the JIF as the “most reliable” and “most extensively used” research metric for evaluating faculty performance. Others emphasized that “Publication in respected, highly cited journals… counts for more than publication in unranked journals,” and encouraged candidates to include “information about the Impact Factors or related metrics” in their promotion files.
Fig. 1: A taxonomy of JIF-related language in RPT documents. 1: refers directly to the JIF; 2: refers in some way to the JIF; 3: indirect, but probable, reference to the JIF.
What do tenure guidelines assume the Journal Impact Factor measures?
Of the research institutions that mentioned the JIF in their tenure guidelines, most described it in one of three ways:
- Over 60% associated the JIF with quality, despite previous research questioning this association. We saw statements like, “Markers of quality of publications may include impact factors of journals, number of citations of published work, and audience of journal.”
- Some institutions—about 40% of our sample—associated it with impact, importance, or significance. Documents stated that “metrics such as citation figures, impact factors, or other such measures of the reach and impact of the candidate’s scholarship” would be considered in the tenure process.
- Finally, 20% of institutions associated the JIF with prestige, reputation, or status. For example, guidelines emphasized that faculty should “be aware of the prestige rankings of the field’s journals and … publish in the highest-ranked journals possible.”
A long way to go
Together, these findings suggest that concerns about the JIF’s misuse in academic evaluations are warranted. Not only do universities—especially research-intensive ones—continue to encourage the use of the JIF in RPT evaluations, but they also use it to measure aspects of research that it is ill-suited to measure. Until this changes, the potential “impact” of the Impact Factor on our professional futures should not be underestimated.
This post is based on the preprint “Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations”, published on PeerJ Preprints, and originally appeared on the PeerJ blog.
Image Credit: arielrobin via pixabay (licensed under CC0 licence)
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns about posting a comment below.
About the authors:
Alice Fleerackers is a freelance writer, a researcher and lab manager at the ScholCommLab, and an editor at Art the Science. With degrees in both psychology and publishing, she is fascinated by the confluence of science and story, and is passionate about bringing research into everyday life. She can be found on Twitter at @FleerackersA
Juan Pablo Alperin is an Assistant Professor in the School of Publishing at Simon Fraser University, an Associate Director of Research for the Public Knowledge Project, and the co-Director of the ScholCommLab. He is a multi-disciplinary scholar, with training in computer science, geography, and education, whose research focuses on the public’s use of research. He can be found on Twitter at @juancommander.
Erin McKiernan is a professor in the Department of Physics, Biomedical Physics program at the National Autonomous University of Mexico in Mexico City. She is a researcher in experimental and computational biophysics and neurophysiology, and an advocate for open access, open data, and open science. She can be found on Twitter at @emckiernan13
There is an essential problem. These universities are dealing with multiple claims – for promotion, for example – so some kind of metric is required. I cannot see any way of getting round this. You cannot expect evaluators to read the stacks of individual papers that lie behind applications for, say, promotion, grants, or employment. That being the case, rather than documenting what strikes me as a perfectly predictable area of academic management – namely, trying to use metrics to make an otherwise impossible task tractable and fair – why not look at the options? No metric is not an option. So, if the JIF is so bad, you need to come up with alternatives for valuing publications (their citation impact seems not a bad start, to my way of thinking), adding that such metrics probably need to be supplemented by, say, metrics for the applicant’s own work. In other words, evaluators need some sense of the standing of the journal (or book) in which the publication occurs, as well as the actual impact of the publication itself (i.e. how much it has been cited and used elsewhere). Rather than decrying the JIF, why not come up with alternatives so that we can move beyond this unproductive stalemate of a debate?
Yet the most reputable journals have the highest JIFs, rendering the argument rather superfluous. Most academics are knowledgeable about journals’ reputations within their specialty and confident that the JIF is not the only criterion for promotion/hiring. Sounds more like a ‘sour grapes’-type argument.
Nice blog post, as expected when Erin is involved. But one minor error. JIF was originally developed by Garfield to help him decide what further journals should be added as sources for Science Citation Index. A bit later it became used by librarians to decide what titles to subscribe to, and many years after that as a convenient proxy for article quality. And I don’t agree with Peter’s premise that a bad metric is better than no metric. Relying on simplistic metrics is worse than a considered evaluation of an applicant’s track record. So the latter is more time-consuming? Too bad.
I would say that the alternative for using bad metrics is not no metrics (as Charles argues), but better metrics. For the JIF there is a good alternative: the number of highly cited papers, e.g., the number of top 10% highly cited papers a researcher has produced. That may be a much better measure for a researcher’s contribution to science than focusing on papers in high impact journals which themselves may not be very important.
Many quality journals published in LMICs are not even captured by the citation indexes – how, then, can they ever have a JIF? Again, publishing in the so-called high impact factor journals costs so much – some charge as much as $5,000 or more. In LMICs like Nigeria, where studies are funded from personal incomes, it will be almost impossible to publish in such journals no matter how good the study is. For the JIF to make real sense, every journal published in all parts of the world must be captured in the database. Interestingly, some articles there are extensively cited locally and internationally, but there is no record of this. At the moment, the JIF is not equitable for use.
What are the best metrics among the JIF, CiteScore, SCImago Journal Rank, and SNIP?
It’s a big club, and you ain’t in it.