Reflecting on the debate around generative AI and its impact on scholarly communication, Danny Kingsley argues that, much like open access twenty years earlier, AI holds up a dark mirror to enduring flaws in research publishing and assessment.
It has been interesting to watch the explosion of discussion around ChatGPT and generative AI. The commentary oscillates between the great potential it offers (read: efficiencies) and concerns about the damage it will do to long-standing, trusted systems. One of those systems is scholarly publishing.
For all the challenges raised, ChatGPT is simply holding a mirror to issues already plaguing the current scholarly publishing system. Indeed, just as open access was made a scapegoat for the problems of scholarly communication a decade ago, generative AI is now a scapegoat for the problems of scholarly publishing. These concerns rest on an underlying assumption: that the current system is working. We need to ask: is it?
There is no room here for a comprehensive list of the many and varied problems with the current scholarly publishing ecosystem. As a taster, consider the findings that predatory journals are unfortunately here to stay, that there is a worrying amount of fraud in medical research, and that researchers who agree to manipulate citations are more likely to get their papers published.
Two recent studies, one European and one Australian, reveal the level of pressure PhD and early-career researchers are under to provide gift authorship. There have also been alarming revelations about payment being exchanged for authorship, with prices depending on where the work will be published and the research area. Investigations into this are leading to a spate of retractions. Even the ‘self-correcting’ nature of the system is not working: one analysis found a large number of citations to retracted articles, over a quarter of which occurred after the retraction.
Let’s consider some of the concerns raised about AI and scholarly publishing. Generative AI currently cannot document the provenance of its data sources through citation, and those sources lack persistent identifiers, which means there is no way to replicate the ‘findings’ it generates. This has prompted calls for the development of a formal specification or standard for AI documentation, backed by a robust data model. Our current publishing environment does not prioritise reproducibility either: code sharing remains optional, and requirements to share data have been slow to take hold. In this environment, the generation of fake data is of particular concern. However, ChatGPT “is not the creator of these issues; it instead enables this problem to exist at a much larger scale”.
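To make the provenance gap concrete: a documentation standard of the kind being called for would, at minimum, attach machine-readable records linking each generated claim to identifiable sources. The sketch below is purely illustrative and not any existing or proposed standard; the ProvenanceRecord class and its field names are hypothetical assumptions about what such a data model might capture.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SourceReference:
    """One data source behind a generated claim (hypothetical schema)."""
    identifier: str    # persistent identifier, e.g. a DOI
    accessed: str      # when the source was consulted
    excerpt: str = ""  # optional supporting passage

@dataclass
class ProvenanceRecord:
    """Illustrative record tying AI output back to its sources."""
    model: str           # model name and version that produced the text
    prompt: str          # the prompt used
    generated_text: str  # the claim or passage that was generated
    sources: list[SourceReference] = field(default_factory=list)
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_replicable(self) -> bool:
        # A claim can only be checked if every source carries an identifier.
        return bool(self.sources) and all(s.identifier for s in self.sources)

# A record with no documented sources fails the check, which is
# exactly the replication problem described above.
record = ProvenanceRecord(
    model="example-llm-1.0",
    prompt="Summarise the evidence on citation manipulation.",
    generated_text="Citation manipulation correlates with publication success.",
)
print(record.is_replicable())  # False
```

Whatever form a real standard takes, the point the sketch makes is that without persistent identifiers on every source, a generated claim cannot be traced back or replicated.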
For all the gloom, generative AI offers a way to address inequities in the current scholarly publishing system. For example, writing papers in English when it is not an author’s first language can be a significant barrier to participating in the research discourse. Generative AI offers a potential solution for these authors, as has been argued in the context of medical publishing. Another argument is that the ‘knee-jerk’ reactions of publishers to the use of ChatGPT in articles mean we are missing an opportunity to level the playing field for English as an additional language (EAL) authors.
After all, the practice of having assistance in the writing of papers is hardly new. A study of prolific authors in high-impact scientific journals who were not themselves researchers found a startling level of publication across multiple research areas. These authors are humans (mostly with journalism degrees), not AI.
Speaking of authorship, news broke recently of a radiologist using ChatGPT to write and successfully publish papers well outside his area of expertise, including agriculture, insurance, law and microbiology. This is an excellent illustration of the widely expressed concern about the excessive production of papers ‘written’ by generative AI. While the radiologist’s actions are shocking, this type of behaviour is not limited to the use of AI: a Spanish meat expert has admitted to publishing 176 papers in a single year across multiple domains through questionable authorship partnerships.
If an oversupply of journals and journal articles is already fuelling paper mills (whose output can itself be generated by AI), then the whole scholarly publishing ecosystem could be about to collapse in on itself. One commentary has asked whether using AI to assist in writing papers will further increase the pressure to publish, given that publication levels have already risen drastically over the past decade.
We are asking the wrong questions. A good example is this article, which asks whether publishers should be concerned that ChatGPT wrote a paper about itself. The article goes on to discuss ‘other ethical and moral concerns’, asking: “Is it right to use AI to write papers when publishing papers are used as a barometer of researcher competency, tenure, and promotion?”
I would rephrase the question: “Is it right that publishing papers is used as the primary tool for assessing researchers?” The singular driver of almost all questionable research practices is the current emphasis on the published article as the only output that counts. The tail is wagging the dog.
There is already a groundswell against the current research assessment system, with initiatives such as More than Our Rank from INORMS and the Coalition for Advancing Research Assessment (CoARA), both building on the San Francisco Declaration on Research Assessment (DORA). Whole countries are moving too: the Netherlands has launched the ‘Room for Everyone’s Talent’ programme, and research funders such as the Wellcome Trust are undertaking significant work on research culture. But this is a huge undertaking.
A few years ago, the world celebrated 350 years of the scientific periodical. In those 350 years we have experienced the industrial revolution and two world wars, and seen the internet change the world, yet the journal system has remained remarkably stable. Will generative AI finally be the disruptive force that moves the system into something fit for today’s world?
A longer version of this blogpost first appeared as “AI and Publishing: Moving forward requires looking backward” on the Digital Science TL;DR blog.
The content generated on this blog is for information purposes only. This article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.
Image Credit: Peter Neumann via Unsplash.