There is no evidence that journal rank has any persuasive predictive property for any measure of scientific quality. Every scientist who is not aware of the unscientific nature of the Impact Factor should ask themselves if they are in the right profession, writes Bjoern Brembs.
I wasn’t planning to write anything on Stephen Curry’s latest piece on the negotiated, irreproducible and mathematically unsound Impact Factor sold by Thomson Reuters to gullible university administrators. I agree with most of what he writes there and, as he correctly cites a 20-year-old paper, all of it has been known for a decade or more. So why pick it up now anyway? First, apparently a lot of people are recommending the article, raising the suspicion that there may be some last refuges of scientists out there who are isolated from common knowledge. Second, he mentions a smear campaign against the IF, or rather a shaming of its use. Of course, everybody who is using the IF immediately disqualifies themselves, and by now everybody knows that. In fact, every scientist who is not aware of the unscientific nature of the IF should ask themselves if they are in the right profession. Taking the IF seriously is similar to taking creationism, homeopathy, or divining seriously. Stephen writes:
If you publish a journal that trumpets its impact factor in adverts or emails, you are statistically illiterate. (If you trumpet that impact factor to three decimal places, there is little hope for you.)
Which I thought was worth pointing out in the light of Nature Publishing Group’s aggressive spam campaign earlier this year touting their impact factors to the third decimal. NPG really is beyond hope, it seems.
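Since the number itself rarely gets examined: the two-year IF is nothing more than a ratio of two counts, and the denominator (the “citable items”) is precisely the part that Stephen’s piece describes as negotiated. A minimal sketch of the calculation, using entirely made-up counts for a hypothetical journal, shows where those three decimals come from:

```python
# Minimal sketch of a two-year Impact Factor calculation.
# All counts are made up for a hypothetical journal.

cites_2011_to_2009_2010 = 12_345   # citations received in 2011 by items published in 2009-2010
citable_items_2009_2010 = 3_456    # "citable items" published in 2009-2010 (the negotiable denominator)

impact_factor = cites_2011_to_2009_2010 / citable_items_2009_2010
print(f"IF = {impact_factor:.3f}")  # -> IF = 3.572
```

Quoting a ratio of two such uncertain counts to three decimal places is exactly the spurious precision being mocked above: the underlying citation distribution is so skewed that the mean carries little information about any individual article in the first place.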
The other reason I thought I should comment was a post on DrugMonkey’s blog, where he writes:
This notion that we need help “sifting” through the vast literature and that that help is to be provided by professional editors at Science and Nature who tell us what we need to pay attention to is nonsense.
And acutely detrimental to the progress of science.
I mean really.
You are going to take a handful of journals and let them tell you (and your several hundred closest sub-field peers) what to work on? What is most important to pursue? Really?
That isn’t science.
That’s sheep herding.
And guess what scientists? You are the sheep in this scenario.
I couldn’t agree more. There is no evidence in the published literature that journal rank has any persuasive predictive property for any measure of scientific quality, be it expert review, citations, sound methodology or anything else. On the contrary, there is solid evidence that journal rank predicts article unreliability. Thus, the point needs to be made that there is no need for a replacement for the IF: we need to get rid of journals altogether, as the concept of a journal is outdated and can be shown to be detrimental to science. This is an old idea, and new data have only supported this insight. What we do need is a scientific metrics system that assists us in discovering, sorting and filtering the fraction of the roughly 1.5 million articles published every year that is relevant to our research. Many article-based metrics are currently being developed which, already today, with existing technology, can easily replace any journal-based metric for this task.
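To make “article-based” concrete, here is a minimal sketch of the kind of sorting and filtering such metrics could support. The Article fields, the score and the weights are illustrative assumptions for the sake of the example, not any particular existing metric or service:

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    keywords: frozenset[str]
    citations: int   # accrued by this article, not by its journal
    downloads: int   # hypothetical usage signal

def relevance(article: Article, my_topics: frozenset[str]) -> float:
    # Toy article-level score: topical overlap weighted by usage signals.
    # The 0.1 download weight is an arbitrary illustrative choice.
    overlap = len(article.keywords & my_topics)
    return overlap * (1 + article.citations + 0.1 * article.downloads)

def shortlist(articles: list[Article], my_topics: frozenset[str], n: int = 20) -> list[Article]:
    # Discard everything with no topical overlap, then rank by the toy score.
    candidates = [a for a in articles if a.keywords & my_topics]
    return sorted(candidates, key=lambda a: relevance(a, my_topics), reverse=True)[:n]
```

The point of the sketch is only that every quantity involved belongs to the article itself; nothing in it requires knowing which journal the article appeared in.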
Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.
This article was originally posted on Bjoern Brembs’ personal blog and can be found here.
Great stuff here. To add to the conversation, I recently penned a few words about journals requiring authors to cite other articles from the same journal as a way of raising the journal’s impact factor. My solution, I see now, is insufficiently radical, but the facts remain deeply disturbing:
How journals manipulate the importance of research and one way to fix it, at http://curt-rice.com/2012/04/06/how-journals-manipulate-the-importance-of-research-and-one-way-to-fix-it/
A few of the peer-reviewed journals in which research presented as evidence for homeopathy has been published:
1. Annals of the New York Academy of Sciences (Wiley; 2011 Impact Factor: 1.731)
“Thermodynamics of extremely diluted aqueous solutions” (1999)
http://www.homeopathy.org/uploads/downloads/2012/05/Elia.pdf
Claim: successive dilutions and succussions may permanently alter the physico-chemical properties of the solvent water.
2. Journal of Thermal Analysis and Calorimetry (5-year Impact Factor: 1.752)
“New Physico-Chemical Properties of Extremely-Diluted Aqueous Solutions” (2004)
http://www.nationalcenterforhomeopathy.org/files/physico-chemical-properties.pdf
Claim: the procedure of dilutions and succussions is capable of permanently modifying the physico-chemical features of water.
Dismissing the impact factor is not so simple. If people think it’s important, then in a sense it becomes so. Many early career researchers I work with are told by their supervisors/advisers/mentors only to publish in journals with high impact factors. That advice may not be soundly based – but researchers seeking to build their careers cannot afford to be cavalier about it.
That is so correct! Isn’t it embarrassing that the equivalent of homeopathy rules whom we hire and fire? How do you explain that to one of the taxpayers who fund this enterprise? How can you justify that we hire potential frauds (the IF correlates with retractions better than with any other measure ever taken!) but fire hard-working, honest scientists who work on a topic that might save the world in 50 years?
This is precisely why Stephen Curry is correct: everybody who uses the IF in the way you so correctly described needs to be prepared to be treated like a creationist, homeopath or dowser.
Excellent argument but for one thing. You assume that journal publishing is only about communication. The most common circumstances in which I see IF being a concern is in the evaluation of the quality of academics — grant adjudication, tenure, promotion, etc. IF is used as a proxy for actual impact, particularly where a piece has been published too recently to reasonably expect actual citations.
The primary question is not what measure scientists need to decide what research to read. It is how committees can best evaluate the quality of those they need to evaluate. Reading all the output and making a decision is not the answer, for two reasons: it takes too long, and it means each committee is open to idiosyncratic evaluations. If peers at a high-quality journal have judged your work to be worthy of inclusion, then other peers should accept that judgement.
My point was exactly that using the IF for that is like using homeopathy to treat someone.
Bjoern, some of your statements above are contradicted by the clear correlation between the two-year impact factor and the 5-year median of cites for research articles. See the two links below:
https://twitter.com/PepPamies/status/237284953589694465/photo/1/large
http://occamstypewriter.org/scurry/2012/08/19/sick-of-impact-factors-coda/#comment-12080
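A correlation like the one linked above is straightforward to check once per-journal data are in hand. A minimal sketch with made-up figures, using SciPy’s rank correlation since citation data are heavily skewed:

```python
import numpy as np
from scipy.stats import spearmanr

# Made-up per-journal figures, for illustration only.
two_year_if = np.array([32.1, 9.8, 4.2, 2.7, 1.3])
median_cites_5yr = np.array([18.0, 6.5, 3.1, 2.2, 1.0])

rho, p_value = spearmanr(two_year_if, median_cites_5yr)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

Note, though, that this correlates two journal-level aggregates; whether either of them predicts the quality of an individual article is the question the post above is actually about.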