
Lauranne Chaignon

May 13th, 2025

Is the list of Highly Cited Researchers losing credibility?


For over two decades, the Highly Cited Researchers list has spotlighted global scientific influence. But behind its annual release lies a shifting story, which includes evolving methods, changing ownership and growing misuse. Lauranne Chaignon traces the list’s transformation from a research tool to a high-stakes benchmark, raising questions about its continued role in academic evaluation.


For the past twenty years, each November has brought the publication of a list that attracts considerable attention from the scientific community. The list of Highly Cited Researchers (HCR) aims to identify the most influential researchers of the past decade across 22 major fields, on a global scale, based on their publications and the citations they receive. Each year, the list features between 6,000 and 7,000 researchers.


In a recent study, I retraced the trajectory of this list. This diachronic investigation shows that, far from being a static object, the list has evolved considerably over time: in who produces it, in how it is made, and in how it is used. Initially conceived as a descriptive monitoring tool, it has gradually become an object of manipulation, which calls into question, more than ever, the relevance of its use in evaluations or rankings.

A short history of Highly Cited Researchers

The history of the HCR list is partly linked to the Institute for Scientific Information (ISI). Founded by Eugene Garfield in 1960, this institute was behind the creation of the first citation indexes and, consequently, the bibliographic database Web of Science.

As early as the 1980s, Garfield published several works in which he sought to identify “the 1,000 contemporary scientists most-cited from 1965 to 1978.” When, twenty years later, his team launched the portal HighlyCited.com, now considered the first HCR list, the goal was to continue this work. The institute remained in charge of producing the list, despite being acquired by JPT Publishing in 1988 and then by the Thomson Corporation in 1992.

In 2008, Thomson merged with Reuters. The ISI was absorbed into Thomson Reuters’ “Intellectual Property & Science Business” department. At the same time, the portal HighlyCited.com was shut down. Data were no longer updated, and the list was archived and abandoned until its reappearance at the end of 2012, with a completely new methodology.

Finally, in 2016, this department became part of Clarivate Analytics, a company that decided to recreate the Institute for Scientific Information in 2018. Today, the list continues to be published, and its methodology continues to evolve.

Major methodological changes

Over the course of its history, the HCR list has therefore undergone a number of methodological changes. These concern the time window of publications and citations considered, the number of researchers selected, the way highly cited papers are counted, the treatment of publications from large research teams, and the handling of interdisciplinary profiles. These changes were made, at least in part, in response to feedback from the scientific community.

But two major changes profoundly altered the list. When the HighlyCited.com portal was created in 2001, it took the form of a database: the list was progressively and regularly enriched with new names and disciplines, adding to the available data without replacing it. The database also provided extensive biographical (studies, positions held, awards received) and bibliographical data for each researcher. When the list reappeared at the end of 2012, however, it took a different approach: it now provides only each researcher’s name, research field and affiliation, and it is published annually, turning each release into an event.

From 2021 onwards, a so-called “qualitative” complement has accompanied the quantitative approach. This involves closely examining the publication practices of pre-selected Highly Cited Researchers to ensure that they comply with normative standards of authorship and credit. The analysis looks at researchers’ productivity, their use of self-citations and their citation networks, in order to identify and exclude those whose impact has been artificially inflated or fabricated through practices commonly described as scientific misconduct, such as disproportionate self-citation or the purchase of citations from paper mills. This new methodological approach, which reflects ISI’s growing focus on threats to scientific integrity, is directly linked to changes in research evaluation, publication and citation practices, and to the way the list is used and misused.
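To give a concrete sense of what such a screening step might look like, here is a minimal, purely illustrative sketch in Python. None of this reflects Clarivate’s or ISI’s actual implementation: the profile fields, the thresholds and the `flags` function are assumptions invented for the example.

```python
# Illustrative sketch only: ISI's actual criteria and thresholds are not
# public in this form. All field names and cut-offs below are assumptions.
from dataclasses import dataclass

@dataclass
class ResearcherProfile:
    name: str
    papers_per_year: float     # output over the ten-year window
    self_citation_rate: float  # share of incoming citations that are self-citations
    top_citer_share: float     # share of citations from the single largest citing source

# Hypothetical thresholds, chosen only to illustrate the kind of filter described.
MAX_SELF_CITATION_RATE = 0.25
MAX_TOP_CITER_SHARE = 0.30
MAX_PAPERS_PER_YEAR = 70.0

def flags(profile: ResearcherProfile) -> list[str]:
    """Return the reasons, if any, a pre-selected profile would be set aside."""
    reasons = []
    if profile.self_citation_rate > MAX_SELF_CITATION_RATE:
        reasons.append("disproportionate self-citation")
    if profile.top_citer_share > MAX_TOP_CITER_SHARE:
        reasons.append("citations concentrated in a narrow network")
    if profile.papers_per_year > MAX_PAPERS_PER_YEAR:
        reasons.append("implausibly high productivity")
    return reasons

candidates = [
    ResearcherProfile("A", papers_per_year=12, self_citation_rate=0.08, top_citer_share=0.10),
    ResearcherProfile("B", papers_per_year=90, self_citation_rate=0.40, top_citer_share=0.55),
]

print("retained:", [c.name for c in candidates if not flags(c)])
for c in candidates:
    if reasons := flags(c):
        print(f"{c.name} set aside: {', '.join(reasons)}")
```

In practice, any such screening would rest on far richer signals drawn from the citation indexes themselves; the point here is simply that the “qualitative” complement works by flagging anomalous citation and productivity patterns rather than by rewarding raw citation counts.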

Changing uses

When the list was offered as a database in the early 2000s, it was first and foremost a monitoring tool. It was presented as a means of identifying potential collaborations, learning about the work of eminent colleagues, and keeping informed of major developments and discoveries across fields. At the time, the web was in its early days, and this free, accessible tool represented a mine of information that only the Institute for Scientific Information, with its citation indexes, could provide.


The list soon became much more than a monitoring tool. It was used in various bibliometric analyses, at both researcher and institutional levels, and as early as 2003 it was incorporated into the Shanghai ranking, where the number of Highly Cited Researchers accounts for 20 per cent of an institution’s score. The list thus became an indicator of scientific excellence, which undoubtedly transformed its role: for some, being part of it became a major goal, whatever it took.


Like many other tools before it, such as the journal impact factor, which was originally designed, among other things, to help librarians, once the list became a target, gaming practices emerged, particularly around citations. These manipulations have been so blatant and widespread that, in 2024, over 2,000 candidates for the list were filtered out for failing to meet ISI’s evaluation and selection criteria.

Such changes have weakened the very principle of the list: by relying on citations, its producers strove to highlight researchers who had been acclaimed by the scientific community itself. But this system may lose its meaning when citations are manipulated rather than earned through valuable research contributions.


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Andrii Yalanskyi on Shutterstock.


About the author

Lauranne Chaignon

Lauranne Chaignon is a bibliometrician at PSL University (France). She is also a PhD student at the Centre de Sociologie de l'Innovation (CSI), Mines Paris-PSL. Her research focuses on the links between research evaluation, bibliometrics and scientific integrity.

Posted In: Academic publishing | Citations
