There is no evidence that journal rank has any predictive power for any measure of scientific quality. Every scientist who is not aware of the unscientific nature of the Impact Factor should ask themselves if they are in the right profession, writes Bjoern Brembs.
This article first appeared on the LSE Impact of Social Sciences blog.
I wasn’t planning to write anything on Stephen Curry’s latest piece on the negotiated, irreproducible and mathematically unsound Impact Factor sold by Thomson Reuters to gullible university administrators. I agree with most of what he writes there and, as he correctly cites a 20-year-old paper, all of it has been known for a decade or more. So why pick it up now anyway?
First, apparently a lot of people are recommending the article, raising the suspicion that there may be some last refuges of scientists out there who are isolated from common knowledge. Second, he mentions a smear campaign against the IF, or rather a campaign to shame those who use it. Of course, everybody who uses the IF immediately disqualifies themselves, and by now everybody knows that. In fact, every scientist who is not aware of the unscientific nature of the IF should ask themselves if they are in the right profession. Taking the IF seriously is like taking creationism, homeopathy, or divination seriously. Stephen writes:
If you publish a journal that trumpets its impact factor in adverts or emails, you are statistically illiterate. (If you trumpet that impact factor to three decimal places, there is little hope for you.)
I thought this was worth pointing out in light of Nature Publishing Group’s aggressive spam campaign earlier this year, touting their impact factors to the third decimal place. NPG really is beyond hope, it seems.
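To see why the third decimal place is absurd, recall that the IF is essentially an arithmetic mean of a journal’s citation counts, and citation distributions are heavily right-skewed: a few highly cited papers drag the mean far above what a typical article receives. Here is a minimal sketch in Python; the log-normal distribution and its parameters are assumptions chosen purely for illustration, not a model of any real journal:

```python
import numpy as np

# Citation counts within a journal are heavily right-skewed.
# A log-normal distribution is assumed here purely for illustration.
rng = np.random.default_rng(0)
citations = rng.lognormal(mean=0.5, sigma=1.2, size=2000).astype(int)

impact_factor = citations.mean()          # the IF is just an arithmetic mean
median_citations = np.median(citations)   # what a typical article looks like

print(f"'Impact factor' (mean):  {impact_factor:.3f}")  # false precision
print(f"Median citations:        {median_citations:.0f}")
print(f"Articles below the mean: {(citations < impact_factor).mean():.0%}")
```

In a distribution this skewed, the mean sits well above the median, and the large majority of articles receive fewer citations than the journal’s mean. A journal-level average, let alone one quoted to three decimal places, therefore tells you almost nothing about any individual paper in it.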
The other reason I thought I should comment was a post on DrugMonkey’s blog, where he writes:
This notion that we need help “sifting” through the vast literature and that that help is to be provided by professional editors at Science and Nature who tell us what we need to pay attention to is nonsense.
And acutely detrimental to the progress of science.
I mean really.
You are going to take a handful of journals and let them tell you (and your several hundred closest sub-field peers) what to work on? What is most important to pursue? Really?
That isn’t science.
That’s sheep herding.
And guess what scientists? You are the sheep in this scenario.
I couldn’t agree more. There is no evidence in the published literature that journal rank has any predictive power for any measure of scientific quality, be it expert review, citations, sound methodology, anything. On the contrary, there is solid evidence that journal rank predicts article unreliability. Thus, the point needs to be made that there is no need for a replacement for the IF: we need to get rid of journals altogether, as the concept of a journal is outdated and can be shown to be detrimental to science. This is an old idea, and new data have only supported it. What we do need is a scientific metrics system that assists us in discovering, sorting, and filtering the fraction of the roughly 1.5 million articles published every year that is relevant to our research. Many article-level metrics are currently being developed which, already today, with existing technology, can easily replace any journal-based metric for this task.
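To make that last point concrete, here is a purely hypothetical sketch of article-level filtering in Python. The fields, weights, and scoring function are invented for this example and do not correspond to any real metrics system; the point is only that ranking happens per article, with no journal name in sight:

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    citations: int   # article-level signals, not journal-level ones
    downloads: int
    bookmarks: int

def article_score(a: Article) -> float:
    # Toy composite score with made-up weights; a real system
    # would calibrate these against whatever it tries to predict.
    return a.citations + 0.1 * a.downloads + 0.5 * a.bookmarks

reading_list = [
    Article("Obscure-journal gem", citations=40, downloads=900, bookmarks=25),
    Article("Glamour-journal dud", citations=2, downloads=150, bookmarks=1),
]

# Sort the reading list by the articles' own merits.
for a in sorted(reading_list, key=article_score, reverse=True):
    print(f"{article_score(a):7.1f}  {a.title}")
```

Whatever signals such a system actually uses, the design choice is the same: score and sort the articles themselves, so that where a paper happened to be published plays no role in whether you find it.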
Note: This article gives the views of the author, and not the position of the British Politics and Policy blog, nor of the London School of Economics.