
Giovanni Abramo

June 24th, 2024

Research evaluation should be pragmatic, not a choice between peer review and metrics



Responding to the growing momentum of movements such as DORA and CoARA, Giovanni Abramo argues for a more nuanced balance between the use of metrics and peer review in research assessment.


In academia, debates over how best to evaluate research are as common as coffee stains on a professor’s desk. The Coalition for Advancing Research Assessment (CoARA) is a new initiative endorsed by the European Research Area (ERA) and composed mainly of research institutions and funders. CoARA aims to revolutionise research assessment by promoting qualitative judgments over quantitative metrics. As their website states, research assessment should be based:

“primarily on qualitative judgement, for which peer review is central, supported by responsible use of quantitative indicators”.

But is this shift a wise move? Should quantitative indicators play only a supporting role to peer review, or do we risk throwing the baby out with the bathwater?

The disharmony of qualitative and quantitative research assessment

CoARA’s mission to reform research assessment is rooted in a desire to move away from the over-reliance on metrics. These metrics, they argue, fail to capture the full breadth of a researcher’s impact and contributions. Instead, CoARA champions a qualitative approach, where peer review plays a central role. Think of it as trying to appreciate a symphony by listening to the music, rather than just counting the notes.

Critics, including myself, argue that abandoning quantitative methods in favour of qualitative judgment tilts too far in the opposite direction. Metrics, when used responsibly, offer objectivity, consistency, and scalability—qualities that are invaluable in large-scale assessments. The issue in research assessment is not the metrics per se, but their irresponsible application. The solution is not to discard metrics, but to ensure they are used wisely and appropriately.

Metrics, when used responsibly, offer objectivity, consistency, and scalability

The paradox at the heart of CoARA’s initiative is that it sidelines an existing epistemic community of evaluative scientometricians, who have long been refining the science of research assessment. By ignoring these experts, CoARA forfeits decades of methodological expertise. To illustrate, imagine a world where robotic surgeons are dismissed simply because they use robots. These surgeons are highly skilled professionals who know when to use technology and when to rely on traditional methods. Similarly, scientometricians are adept at determining when to apply quantitative metrics and when to defer to peer review.

Counterpoint

One of the key arguments against a one-size-fits-all approach to research assessment is the diversity of research outputs and contexts. I contend that the choice between peer review and scientometric methods should be based on a thorough analysis of the specific goals, resources, and cultural contexts involved. For instance, if a government wants to boost interdisciplinary research or international collaboration, peer review might struggle to assess these objectives effectively. Scientometric techniques, with their ability to analyse patterns and trends across large datasets, could be more suitable.

Peer review, the cornerstone of CoARA’s vision, is also not without its challenges. It is labour-intensive and costly, particularly when scaled up to national or international levels. For example, the anticipated total cost for the U.K.’s implementation of the Research Excellence Framework (REF) 2021 was around £471 million. Reviewers are already overburdened, with estimates suggesting that in 2020 alone, they dedicated over 100 million hours to journal reviews. Adding more responsibilities could push them to the brink.

all national peer-review assessment exercises limit the number of works submitted, which introduces extra costs and distorts results.

For these reasons, all national peer-review assessment exercises limit the number of works submitted, which introduces extra costs and distorts results. For example, in the 2021 U.K. REF assessment, higher education institutions spent a staggering £281 million preparing their submissions. Performance rankings are particularly sensitive to two issues: the less-than-perfect selection of top-quality research and the total number of works evaluated.

Furthermore, why does CoARA claim that metrics cannot capture the full scope of researchers’ contributions, when peer-review assessments are already cherry-picking a fraction of the overall research? The position seems self-contradictory.

In contrast, scientometric assessments can be conducted by a relatively small team of experts, making them more accurate and cost-effective for large-scale evaluations. Moreover, scientometrics can provide continuous, fine-grained data, unlike the discrete and often subjective scores from peer reviews.

Solo or symphony?

I do not want simply to dismiss CoARA, but to call for a balanced approach. Both peer review and scientometrics have limitations, and this argues for a pragmatic blend of the two. The goal should be to harness the strengths of each method, using metrics to inform and complement qualitative judgments.

the debate over research assessment methods is not about choosing sides but about finding harmony.

For example, in fields where bibliographic repositories lack coverage, peer review remains indispensable. However, in STEMM disciplines, where the bulk of research is indexed, scientometrics can provide valuable insights into scholarly impact and more accurate research performance assessment.

In the end, the debate over research assessment methods is not about choosing sides but about finding harmony. Like a well-conducted orchestra, the best approach blends different instruments to create a richer, more nuanced sound. By combining the rigour of scientometrics with the depth of peer review, we can ensure that research assessment not only measures but also truly understands the impact of scholarly work. As we navigate the future of research assessment, let’s keep our minds open and our methods diverse. After all, in the grand symphony of science, every note counts.

 


This post draws on the author’s article, The forced battle between peer-review and scientometric research assessment: Why the CoARA initiative is unsound, published in Research Evaluation.

The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: InnaSakun on Shutterstock



About the author

Giovanni Abramo

After nearly four decades at the National Research Council of Italy (CNR), Giovanni Abramo transitioned in 2023 to Universitas Mercatorum as a Professor of Management of Innovation and Entrepreneurship. His research centres on research performance evaluation and evaluative scientometrics. Giovanni founded the CNR Laboratory for Studies in Research Evaluation in 2004 and Research Value, a spinoff company, in 2006. In 2023, he was elected President of the International Society for Scientometrics and Informetrics (ISSI).

Posted In: Featured | Measuring Research | Peer review | Research evaluation
