
Having first documented the large-scale demise of the impact factor as a predictor of quality research, George Lozano and team examined whether this pattern also applies to the handful of elite journals. His recent study finds the proportion of top papers published by elite journals has in fact been in steady decline since the late-eighties. Journal hierarchies are breaking down and researchers will benefit from the many publishing venues now available to reach wider audiences.

The digital age has brought many changes to scholarly publishing. For instance, we now read papers, not journals. We used to read papers physically bound with other papers in an issue of a journal, but now we read papers downloaded individually, independently of the journal. In addition, journals have become easier to produce. A physical medium is no longer necessary, so the production, transportation, dissemination and availability of papers have drastically increased. The former weakened the connection between papers and their respective journals; papers now are more likely to stand on their own. The latter allowed the creation of a vast number of new journals that, in principle, could compete on par with long-established journals.

In a previous blog post, and paper, we documented that the most widely used index of journal quality, the impact factor, is becoming a poorer predictor of the quality of the papers therein. The IF already had many well documented and openly acknowledged problems, so that analysis simply added another argument against its continued use. The data set used for that analysis was as comprehensive as possible, and included thousands of journals. During subsequent discussions, the question came up of whether the patterns we documented at a large scale also applied to the handful of elite journals that have traditionally been deemed the best.

Hence, in a follow-up paper we examined Nature, Science, Cell, Lancet, NEJM, JAMA and PNAS (just in case, the last three are the New Engl. J. Med., J. Am. Med. Ass., and Proc. Natl. Acad. Sci.). We identified the 1% and 5% most cited papers in each of the past 40 years, and determined the percentage of these papers published by each of these elite journals. In all cases except JAMA and the Lancet, the proportion of top papers published by elite journals has been declining since the late eighties.

To account for the fact that more papers are now being published than 40 years ago, a normalized index was used. An index of 1 indicates that the number of highly cited papers published by a journal is what would be expected by chance; for example, 1% of that journal's papers that year would be among the 1% most cited papers. An index of 7 indicates that a journal published 7 times as many top papers as would be expected by mere chance.
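The calculation behind such an index can be sketched as follows. This is a minimal illustration with hypothetical numbers, not the authors' actual data or code; the function name and inputs are assumptions for the sake of the example:

```python
def normalized_index(journal_top_papers, journal_total_papers,
                     all_top_papers, all_total_papers):
    """Ratio of a journal's observed share of top-cited papers to the
    share expected by chance, given how many papers it publishes."""
    observed = journal_top_papers / journal_total_papers
    expected = all_top_papers / all_total_papers  # e.g. 0.01 for the top 1%
    return observed / expected

# Hypothetical example: a journal publishes 800 papers in a year,
# 56 of which fall in the top 1% most cited across all journals.
index = normalized_index(56, 800, 10_000, 1_000_000)
print(round(index, 1))  # 7.0: seven times as many top papers as chance predicts
```

An index near 1 therefore means a journal's papers are no more likely to be highly cited than a random sample of the literature of the same size.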

Using the normalized 1% index, a more complex pattern emerged. Some journals, like Cell and PNAS, had lost ground over the past 20 years. Nature and Science had remained at roughly the same level, and NEJM, Lancet and JAMA had increased their share up to about 2005, at which point the Lancet continued to rise but the other two noticeably dropped.

This normalized 1% index was also applied to several emerging journals. Most had increased their relative share of highly cited papers over the past 20 years. Chemical Reviews had a high but highly variable index. Finally, PLoS One, despite publishing a high number of highly cited papers, had a normalized 1% index barely above one. This means that, considering the number of papers it publishes, its proportion of top papers is roughly what would be expected by chance.


The digital age has brought new challenges and opportunities for journals, researchers and evaluators. Papers are now independent of the journal in which they appear, so researchers will benefit from knowing that many publishing venues are now available, and that these venues do not differ much in their ability to reach a wide audience. Journals have been affected in different and complex ways, which suggests that their relative position in the journal hierarchy is highly dependent on the effectiveness of their past and future editorial, advertising and marketing policies. Finally, advancement, recruitment and grant-evaluation committees, administrators and other evaluators should be aware that even if journal hierarchy and reputation were a valid measure of the papers therein, the hierarchy is highly variable, so identifying quality work is no longer as simple as checking the names of the journals in which the research is published.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics.  

About the Author

George Lozano received a B.Sc. from the University of Guelph, earned an M.Sc. from the University of Western Ontario, and holds a Ph.D. from McGill University. He was subsequently awarded an FCAR postdoctoral fellowship, which he took to UC Riverside, and an NSERC postdoctoral fellowship, which he took to Simon Fraser University. Since then he has held several teaching and research positions on three continents, in a concerted effort to add to his multi-cultural experiences. George's main research deals with the evolutionary, behavioural, and physiological ecology of animals, mostly birds. Along with this empirical research, some of his recent work deals with the evolution and maintenance of multiple sexual signals, the adaptive explanation behind anorexia, evolutionary medicine, research policy and bibliometrics.
