Following the resignation of Harvard president Claudine Gay over allegations of plagiarism, questions around research integrity and the integrity of the scholarly record have come to the fore. Till Bruckner argues that loose definitions of research integrity shouldn’t be used in attempts to ideologically purge the scholarly record, and that it is necessary to re-open debates on academic standards before existing conventions are weaponised for political ends.
In 2020 the founders of the highly regarded Retraction Watch initiative published an op-ed titled “Science Journals Are Purging Racist, Sexist Work. Finally.” The op-ed noted that several papers had recently been retracted by journals after being flagged on social media. The Retraction Watch team conceded that these moves were politically motivated:
“The critics are right: Journals do have a double-standard, and it is political. They move briskly to pull unworthy papers tinged by politics while ignoring hundreds, or likely thousands, of credible allegations of fraud or major error.”
They welcomed efforts to “pull all the racist, sexist papers built on faulty premises. But then, let the purge continue: keep digging deep into the archives to remove the science that’s bad for other reasons, too.”
Is this wise? Last year the LSE Impact Blog published a post making a similar argument:
“[There is] a confusion between academic freedom and freedom of speech… academic freedom is the freedom to research any question of (social) scientific interest using appropriate standards of methodological rigour. Work which violates such standards should not be given a platform in academia… Academics need to change these structures by prioritising research integrity in their own work, when reviewing and editing for journals, and when hiring, promoting and evaluating others.”
The scientific paper that the blog objected to claimed that people in poor countries score significantly lower than those in rich countries on IQ tests. But did it violate research integrity, a concept centred on honesty, fairness and accuracy?
First, the post charged that the author of the paper is politically motivated, and that the “primary goal of this dataset is to promote an ideology of a racial hierarchy”. Given the author’s track record, that seems plausible. However, most social scientists are in some respects politically motivated, and countless papers implicitly or explicitly promote an ideology. Having political motivations and promoting ideologies that many people find deeply hurtful, immoral or dangerous is not a violation of research integrity. If it were, every paper drawing on international law and legal precedent to argue that Israel either is, or is not, guilty of genocide in Gaza would be a violation of research integrity.
Second, it argued the paper has methodological weaknesses, pointing to a weak and possibly biased sampling methodology and raising doubts about the cross-cultural validity of the outcome measures used. However, countless papers have weak and biased sampling methodologies, including publications written by some of the world’s most widely cited researchers. Further, the validity of outcome measures is hotly disputed in many scientific disciplines. Disputes over the appropriateness of methodologies and metrics are part and parcel of normal scientific debate. They are not disputes about research integrity.
In short, the post claimed to be about “major gaps in research integrity”, but documented no clear research integrity violations. Nonetheless, it began by castigating journals for publishing this kind of research, and concluded with a call for academic hiring and promotion decisions to take “research integrity” into account. Before cheering on such attempts to purge racist academics, let’s take a deep breath and think about where this road may lead.
We might consider how these standards apply to another publication about group differences in IQ scores. In November 2023, the journal PLOS One published a study titled “Cognitive ability and voting behaviour in the 2016 UK referendum on European Union membership” that was widely covered by the media. The authors claimed that, on average, people who voted Leave in the Brexit referendum had lower IQ scores than those who voted Remain. The paper concluded that Leave voters’ lower IQs indicated their greater “vulnerability to (mis/dis) information,” and mused about the future possibility of “restricting voting based on cognitive function”.
Is this an “unworthy paper tinged by politics” that is “built on faulty premises”? Are its politics “eugenicist”? Are its standards of methodological rigour “appropriate” and its outcome measures uncontested? None of those criteria relate to research integrity. All of them are loose enough that they could be used to demand the retraction of many social science papers, and there is no rulebook governing how to interpret or enforce them. It’s the worst possible way of defining and policing the limits of academic freedom. At least outright bans on certain types of research would create clarity about what is forbidden and what is not.
Academics should, without exception, reject attempts to weaponise research integrity in pursuit of political ends. Otherwise, political and commercial actors can and will launch forensic audits of the records of scholars from the ‘enemy’ camp to detect imaginary or minuscule infractions of the academic rulebook, with the sole aim of intimidating, harassing and silencing scholars whose conclusions they dislike.
To safeguard research integrity’s function as a tool to improve science, rather than as a tool to harass political opponents, we urgently need an open discussion on when allegations of academic misconduct merit further investigation and possible sanctions, and when they should be shrugged off as attempts to discredit individuals or shut down academic debate.