The bibliometric infrastructure of citations has become an inescapable organising feature of academic life. Drawing on a range of evidence of the use and misuse of citations data, Stuart Macdonald argues its ubiquity has rendered authorship a questionable concept in modern scholarship.
The role of the author in academic publishing is not quite what it might seem. Gone are the days when academics simply conducted research and published their findings. Now their papers are less valued for their content than for providing measures of academic performance. Citation is chief among these. For half a century now, Clarivate ISI has been calculating journal impact factors from the frequency with which a journal’s papers are cited. More citation means a higher JIF and a higher journal price. The JIF is critical to the fortunes of higher education and academic publishing. But what are the implications for the performer, the author?
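The arithmetic behind the JIF is simple, which is partly why it is so easy to manipulate. A minimal sketch of the standard two-year calculation, with invented numbers for illustration:

```python
def journal_impact_factor(citations_to_last_two_years, citable_items_last_two_years):
    """JIF for year Y: citations received in Y to items the journal
    published in Y-1 and Y-2, divided by the number of 'citable items'
    (articles and reviews) published in Y-1 and Y-2."""
    return citations_to_last_two_years / citable_items_last_two_years

# A hypothetical journal whose 200 citable items drew 500 citations:
jif = journal_impact_factor(500, 200)
print(round(jif, 2))  # 2.5
```

Everything that follows in this post turns on what gets counted in the numerator and the denominator of this one fraction.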
“We … [used] … to make our acceptance those articles that we felt would make a contribution to the international literature. Now our basis for rejection is often ‘I don’t think this paper is going to be cited’.”
(journal editor cited in Chew et al., 2007, p.146)
Authorship and citation
‘Publish or perish’ is misleading: academics perish if they are not cited. The academic paper is primarily a platform for citation. Wrong citations (inappropriate, irrelevant or simply non-existent) count just as much as right citations, and many citations are wrong – not really surprising when 80% of authors have never read the papers they cite. When Elsevier provided its authors with an example of good citation style, over 500 cited the completely fictional example.
The notion that the best papers are the most cited papers was concocted fifty years ago by Eugene Garfield. It was always questionable. The most citable, and therefore publishable, papers are mundane, ‘water is wet’ papers. No university using modern measures of performance would ever have employed Peter Higgs (of Higgs boson fame).
What was once the most cited paper of all is about cleaning test tubes, while the paper announcing the double helix, probably the most important discovery in biology for a century, was rarely cited for more than a decade.
Manipulating the metrics
Once citation was gamed (implying that players followed certain rules): now manipulation (no rules, no holds barred) is universal. The most ruthless players are often those with a standing to maintain – prestigious universities, reputable journals, distinguished academics, established publishers. For instance, coercive citation (editors making citation of their own journals a condition of publication) is particularly prevalent in top journals. Over 90% of their authors comply. Many journals expect something like 60% of a submission’s citations to be of the journal’s own papers, and author self-citation is rife: the new rector of Spain’s oldest university has rarely cited anyone but himself.
The most powerful publishers meet regularly to discuss adjustments to their JIF allocation. A tweak or two can make a huge difference to fortunes: re-classification of ‘meeting abstracts’ to ‘academic papers’ resulted in one Biology journal increasing its JIF from 0.24 to 18.3 in a year. The Covid-19 premium increased the Lancet’s JIF from 79.3 in 2021 to 202.7 the following year, a leap in measured quality of 156%. With such rewards from manipulating the metrics, who needs improved performance from authors? Actually, who needs authors at all?
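The ‘meeting abstracts’ example turns on which items count in the JIF denominator: citations to everything a journal publishes feed the numerator, but only items classified as ‘citable’ are divided into it. A minimal sketch with invented figures (not the actual journal’s numbers, and assuming the effective change was removing abstracts from the citable-item count) shows how re-classification alone, without a single new citation, can move a JIF by two orders of magnitude:

```python
# Hypothetical journal: citations are unchanged throughout.
citations = 900            # citations received in the JIF window
articles = 40              # research papers ('citable items')
abstracts = 3700           # meeting abstracts

# Abstracts counted among citable items:
jif_before = citations / (articles + abstracts)

# Abstracts re-classified out of the denominator:
jif_after = citations / articles

print(round(jif_before, 2))  # 0.24
print(jif_after)             # 22.5
```

The citations are identical in both calculations; only the labelling of the journal’s content has changed.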
Ike Antkare is one of the world’s most prolific authors. Antkare, protégé of Cyril Labbé, does not exist and never has. He continues to publish. Many on the Scopus list of prolific authors started publishing decades before they were born. One author of 12 academic papers with 144 citations and an h-index of 12 is Larry the cat.
Author lists can resemble mini CVs packed with institutional affiliations to impress editors. Affiliations of co-authors are frequently mythical, as are many co-authors. China has manipulated itself to world leader in highly-cited research, and Clarivate ISI has re-classified 1,000 authors from its annual list of 6,849 highly-cited authors as ‘fraudulent.’
Saudi Arabia pays prolific foreign academics to claim affiliation with Saudi universities to enhance the Kingdom’s intellectual standing. King Abdulaziz University in Jeddah – paying each foreign author $US76,000 annually – has already overtaken Cambridge University in the Mathematics ranking of US News & World Report. A Maths department is not a requirement.
More authors, more citations
The number of authors per paper has grown rapidly; co-authors will each self-cite, hugely increasing a paper’s citations, its JIF and all that hangs from these. Where co-authors flourish, so do impact factors. Papers in Physics journals can have hundreds of authors, sometimes thousands, with author lists dwarfing the paper (e.g., Khachatryan ad infinitum, 2010).
What do all these authors do? Not a lot. Of course (artificial intelligence notwithstanding) every paper must be written by someone – but not necessarily its listed authors. Management ethics have pervaded academic publishing and senior managers, including vice-chancellors, feel entitled to put their names to papers written by anonymous underlings. In Medicine, communications companies write academic papers for clients and arrange for publication in top journals. Illustrious academics are paid to front such papers, sometimes sight unseen.
Some years ago, 16% of papers published in the prestigious New England Journal of Medicine had ghost authors, and no fewer than 44% had honorary authors. Guest, gift and honorary authors (who write nothing) are everywhere. How else could some academics claim to have written hundreds of papers a year? One at Nottingham Trent University attaches his name to a couple of papers a week, amassing 50,000 citations in just five years. One Spanish academic publishes at the rate of a paper every 37 hours. The number of authors publishing more than 60 papers a year has almost quadrupled in a decade.
Desperate to maintain some propriety, some journals expect authors to identify their contribution to papers, as if a system that accommodates fictional authors would baulk at fabricating their function. When those listed as a paper’s authors have had nothing to do with creating the paper and have no idea who has, authorship has little meaning. Commendably diplomatic, Wilhite and Fong use the term ‘fungible’. Papers are produced to be cited and reading need not extend beyond title, abstract and keywords.
Getting what you pay for
Authorship has descended from a claim to have written a paper, to a sign of some involvement with the paper, to an entitlement in recognition of an individual’s authority and influence. Authorship allows a claim to the benefits of citation, and it is through citation, not scholarship, that the standing of authors is enhanced. Scholarship is irrelevant to ranking.
“From an instrumental position, [the paper] gave me my professorship. But as a scholarship piece, it’s disgusting. Yeah, it was really, really awful.”
(author quoted in Butler and Spoelstra, 2014, p.544)
The role of academic publishing was once to distribute knowledge from research to the public at large. Now the customer is the academic, paying the publisher directly for required performance measures. And while once journals carried a few dozen papers annually, written, edited and reviewed by free academic labour, a new business model provides profits unrelated to the publisher’s cost of production. Charging academics processing fees for open access and rapid citation makes publishing hundreds, even thousands, of papers in each journal issue irresistible to publishers. ‘Scholars’, then, pay to be published and look to papers that can be cited almost anywhere in support of almost anything for the greatest return. As authors, they resign themselves to a mediocrity that performance measures will acknowledge as scholarship.
This post draws on the author’s article, The gaming of citation and authorship in academic journals: a warning from medicine, published in Social Science Information.
The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.
Image Credit: Plateresca on Shutterstock.
Conclusion. People are evil, independent of their education level, and nothing can stop them.
I don’t think I agree with your conclusion.
Many of the individual actions that created this very undesirable situation are logical and rational actions for the individual involved. People need jobs. People want to get promoted. Individually little that is evil is done, just some things that are questionable.
However, it’s the combination and interaction of the priorities – and thus of the most logical actions – of all the individuals and roles involved that leads to the distortions.
It all blossoms from (or should we say the rot starts at?) just one foolish assumption – ‘well, we know this is an oversimplification, but anyway…’ – that leads to all of this: that citation count is a relevant measure of anything other than how many times a paper has been cited. In the absence of any valid thing to measure (‘quality’ is meaningless anyway; quality depends on context), certain types of people will still want to measure something, even if it’s meaningless. Once you ascribe that measurement some importance, the games begin.
The antidote: don’t pay attention to citation counts.
The problem: people stand to gain from continued value placed on citation counts.
The point of research is to find stuff out. Anything that rewards anything that isn’t finding new stuff out is a distorting external incentive. The only way you can get round this is to recognise that the value of research is in the practice of it, not the outcome of it.
I hear your point, but if we can’t trust the author to ethically cite their sources, isn’t it illogical to trust their content?
This is absolutely scandalous. Though it seems to reflect what Jacques Ellul long predicted: the subordination of all human activity to La Technique.
The golden moments of academic life occurred briefly in the 1980s, prior to institutional rankings, mass participation, student loans, and manager heads of department. The road from expert scholar to online content creator was long and slippery. At least the church had the good sense to resist political interference. Who would want an academic career now? Better to do carpentry (no disrespect intended).
Excellent piece. I fully share the sentiment of the author. Research papers have become a commodity whose sole purpose is to lengthen CVs and support career progression applications. Nowadays papers are published to be cited, not to be read.
It is indeed the case! Manipulation is everywhere in academia. The whole process of over-publishing does not actually encourage innovation, novelty and discovery, because worthy research takes time to produce discoveries. Most importantly, the process does not encourage the genuinely talented and smart researchers and scientists who are capable of doing science truthfully.
As a self-educated critic of modern cosmology, I find myself debating with individuals who refuse to think for themselves and cannot accept any idea unless it is peer-review published – and if it is peer-review published, it must be true. They are oblivious to the problems of peer-review publishing, not only those you mention, but also its role as gatekeeper of consensus beliefs. Not only that, but most journals sit behind onerous paywalls, keeping out the average person.
While Macdonald’s blog post examines the issue of meaningless research in general, there is an acute urgency to address and remediate the situation in the medical/clinical area. Here we propose a concrete roadmap to restore integrity in academic publishing with a clinical/medical focus:
1. Mandate CRediT Taxonomy – Require explicit author contributions to eliminate ghost/honorary authorship.
2. Standardize Patient-Centered Outcomes & Cost-Effectiveness – Ensure clinical trials report meaningful, comparable outcome/cost ratios.
3. Enforce Open Peer Review & Reproducibility – Publish reviews, mandate open data, and penalize irreproducible research.
4. End Citation Manipulation – Ban coercive citations, self-citation cartels, and fake authorship.
5. Tie Research Evaluation to Real-World Impact – Shift funding and rankings from citation counts to patient health, policy influence, and economic efficiency.
This tentative trajectory prioritizes meaningful, ethical, and cost-conscious research over citation-driven academic inflation and can be a solid starting point for further discussions and implementation. Cheers, what else?
Publishers have turned from reputable promoters of science into money-making machines. They take advantage of scientists, using them as unpaid labour for editorial work and reviews.
Previously, publication costs were symbolic, covering mostly the preparation of figures (especially colour), postage to referees, editing, preparation of reprints and other laborious expenses. Even so, they came to less than 1% of a research budget. Nowadays preparing a reprint is far less laborious, yet charges have skyrocketed: publishing 2–3 articles per year can cost an average laboratory 10–15% of its budget. Journals are abusing researchers’ need to present their work and have become money-making machines at the expense of mainly taxpayers’ money.
One way to stop this waste of taxpayers’ money is to stop publishing in “reputed” journals. Instead, funding agencies should manage sites presenting the results of the research they fund. That would immediately save scientists at least 5% of budgets that are already diminishing due to inflation.
Another, better option is simply to publish on free platforms like Research Square, bioRxiv, etc. These databases post unreviewed articles; however, it may be better to let the public estimate the value and quality of the work, since many reviewers have become highly biased and “politicized”.
Perhaps the role of publisher should be absorbed into the funding source. Part of a grant award would include obligations for the researcher to be a reviewer on other papers solicited by the funder, ending the free labor issue around modern peer review.
But leaving all review to the public is no longer science, and not a realistic alternative. However polarized you feel reviewers have become, the public is worse.
From the logic of the academic market for citations, AI is a welcome development, enhancing productive efficiency manifold. Probably we will see the production time of a manuscript shortened to two hours, with peer review performed by a differently trained AI. Soon, with automation, academics who think critically will no longer be needed. Sanction the critics of the system and the system will have succeeded in stifling independent thinking. Brave New World.