April 21st, 2016

What makes research excellent? Digging into the measures aimed at quantifying and promoting research excellence.

“Research excellence” is a tricky concept in theory and arguably trickier to capture in practice. Toni Pustovrh shares findings from a recent study which looks at how research is currently quantified and evaluated in Slovenia. In-depth interviews with scientists reveal a variety of views on the concept and the current mechanisms in place. The analysis suggests that neither a predominantly peer-review based evaluation system, nor one based mainly on quantitative metrics, will ever be the best solution, as both have their inherent problems. As one interviewee notes, “numerics do not reveal the content”.

This piece is part of a series on the Accelerated Academy.

In modern societies, scientific and technological (S&T) developments are generally regarded as the main drivers of socioeconomic progress and national competitiveness, as well as the key tools for addressing contemporary societal challenges. With the aim of furthering national prestige and development, most countries have implemented measures to promote the production of excellent research in their national science systems. In the current climate of reduced public funding for S&T, quantification and metrics are increasingly being used in the hope of producing the greatest number of S&T breakthroughs for the least amount of allocated funds.

While most stakeholders would agree that research excellence is something desirable, it is much less clear what research excellence actually is, how it is defined and what measures are used to promote it. Another important question is whether the measures to quantify excellence actually have the desired effects of producing excellent science and scientific breakthroughs or whether they instead provide an incentive for researchers to focus primarily on attaining the best results according to the excellence metrics themselves.

Image credit: Creativity103, CC BY (Flickr)

Currently there is no precise or widely adopted definition of what research excellence is or should be. We might say that we know excellence when we see it, or that excellence is whatever expert reviewers agree is excellent. Some have argued that over the last two decades we have seen a shift from a notion of scientific excellence to a notion of research excellence, at least at the level of the European Union’s science policy landscape. The former is a rather fuzzy, undefined notion, embedded in the scientific and research community and its processes, focused on specific content and mainly left to be recognized and promoted by expert reviewers who are themselves scientists. The latter is more precise, and focuses on specific research outputs which are supposed to indicate research excellence, such as publications in the top 10% of cited scientific papers, internationally recognized patents, cooperation of the public research sphere with industry, etc. Such metrics are selected, constructed and implemented not only by scientists but also, and sometimes mainly, by policymakers, political decision-makers and other experts, who are not necessarily familiar with the workings and processes of the scientific and research community.

And indeed, such a shift towards using quantifiable metrics focused on specific outputs in order to produce excellent research is increasingly reflected at the national level in many countries. While peer review remains a key element in the process of evaluating and selecting excellent research, we could argue that the process is becoming more automated, relying on checking whether the publications have been published in prestigious journals, and generally whether a specific research result falls into a specific excellence category, instead of going through a more costly and time-consuming process of checking the actual content of the publications themselves. Nevertheless, metrics can be used to provide comparability and transparency, and a spur for competitive production. Still, they can also become a black box, run by complex algorithms, which might not necessarily measure and promote what they were supposed to.

Slovenia is one of the EU countries that has adopted and implemented a number of quantitative metrics in an automated system (SICRIS – Slovenian Current Research Information System) that quantifies, scores and evaluates the research bibliographies (outputs) of practically all Slovenian scientists (about 15,000). It enables the quantification of the research work of individual researchers, but also of research groups and organizations. Metrics are thus becoming a crucial R&D policy instrument in evaluating the research excellence of Slovenian scientists. They also strongly determine the allocation of project and research funds, grants and awards, as well as the employment and promotion prospects of every researcher. As stated in the Resolution on the research and innovation strategy of Slovenia from 2011, the system is intended to “encourage internationally recognizable, excellent research.”

The SICRIS system produces its quantitative score for an individual researcher by combining the scores from the following categories:

  • A1 – research successfulness, which quantifies publication outputs,
  • A2 – citation successfulness, which quantifies the number of citations per author, and
  • A3 – project/funding successfulness at acquiring funds outside of the Slovenian Research Agency (from collaboration with the business sector, from international projects and from other projects).
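
To make the structure of such a composite score more concrete, the sketch below shows, in Python, how category scores of this kind might be combined into a single figure. It is purely illustrative: the class name, field names, weights and the simple weighted sum are assumptions made for the sake of the example, not the actual SICRIS formulas, which are considerably more detailed.

```python
from dataclasses import dataclass


@dataclass
class ResearcherScores:
    a1_publications: float  # A1: research successfulness (publication outputs)
    a2_citations: float     # A2: citation successfulness (citations per author)
    a3_funding: float       # A3: funds acquired outside the Slovenian Research Agency


def composite_score(scores: ResearcherScores,
                    w1: float = 1.0, w2: float = 1.0, w3: float = 1.0) -> float:
    """Combine the three category scores into a single figure.

    The equal default weights and the simple weighted sum are placeholders;
    the real SICRIS rules apply their own category-specific formulas,
    thresholds and normalizations, which are not reproduced here.
    """
    return (w1 * scores.a1_publications
            + w2 * scores.a2_citations
            + w3 * scores.a3_funding)


if __name__ == "__main__":
    # Illustrative, made-up numbers only.
    researcher = ResearcherScores(a1_publications=320.5,
                                  a2_citations=210.0,
                                  a3_funding=1.8)
    print(f"Composite score: {composite_score(researcher):.2f}")
```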

Among the various categories making up these metrics, two stand out as the highest scored and, presumably, as the clearest markers of research excellence. The first is part of the A1 metric and tracks research publications that appear in the top 5% of Scopus Citation Index journals (for the “hard” sciences) or the top 10% of Social Science Citation Index and Scopus journals (for the “soft” sciences); it is supposed to reflect internationally recognized scientific excellence. The second is part of the A3 score and tracks success in acquiring funds from cooperation with the business-enterprise sector; it is supposed to encourage knowledge and technology transfer.

As part of a research project on Cooperation networks in Slovene science, we conducted in-depth interviews with excellent Slovenian scientists representing all six science domains in Slovenia (Natural Sciences and Mathematics, Engineering sciences and technologies, Medical sciences, Biotechnical sciences, Social sciences, Humanities), asking about their views on research excellence and the excellence metrics in SICRIS.

Most of the interviewees see excellence in science and research as achieving new original results and making new discoveries. One noted that the results of excellent research activity are also publications in top journals, and another that scientific excellence can be seen “in whether you are known out there in the world …. which is reflected in being invited into European projects, to lectures and conferences, and in the willingness to exchange students with you”. One said that we are “trying to seek scientific excellence in the outputs of this excellence”, which is not necessarily excellence itself, but only its reflection. Another researcher said that the excellence criteria of the SICRIS system “have no connection with what some researcher or research group actually means in science” and that the problem of evaluation is one of increasing bureaucratization, and a lack of willingness by experts to give and stand behind a concrete review of someone’s work. Several researchers have pointed out that the journals that are considered as excellent in their scientific domain are nevertheless not necessarily among the top 5% or top 10% of the most cited journals.

Regarding the research-industry cooperation metric, one researcher said that “…only looking at the amount of funds is very unjust … some technical sciences serve as a service provider to the industry and the amounts are very high, although the work does not represent excellent research, but routine process work”. Several others have also noted that this criterion should not have as strong a value as it currently does, and that it would be necessary to look at and evaluate the actual work that was done to acquire the funds from industry and business.

Image credit: Hector Lazo, CC BY-SA

Generally, researchers from the “hard” sciences see the excellence metrics as useful, even though they can be subject to abuse and gaming of the system. As one noted, “we Slovenians are experts at circumventing systems… [SICRIS] criteria are an appropriate tool but are sadly abused…” Many said that the metrics are a useful indicator, but should not be confused with the real thing: excellent science and research breakthroughs. As one said, “numerics do not reveal the content”. Researchers from the “soft” sciences generally see excellence metrics as problematic, since the system and the metrics were originally developed to serve the “hard” sciences, although some adaptations have been made in recent years for evaluating researchers from the social sciences and humanities.

Finally, many of the interviewees also expressed criticism of the reviewers and the quality of their work in the Slovenian peer-review system used, alongside SICRIS scores, for allocating project and programme funds. As one said, “reviewers can be a catastrophe… not knowing how to evaluate the results they are given… [their results are] significantly below an acceptable level”. Here we might conclude that excellent reviewers are needed to recognize excellence when they see it. And excellent reviewers can be more costly than automated algorithms.

Overall, we are experiencing a general, possibly global shift towards increasing quantification and the use of metrics in trying to promote the production of excellent science and research. While the issues discussed here are taken from the Slovenian context, we think many of them might be relevant and transferable to other national evaluation systems. What we should keep in mind is that neither a predominantly peer-review based evaluation system, nor one based mainly on quantitative metrics, will ever be the best solution, as both have their inherent problems and both will be subject to gaming and circumvention by some. What would be prudent to strive for is a balanced combination of content-focused peer review, with reviewers who are excellent experts in what they are reviewing, and transparent, widely agreed-upon metrics. The agreed-upon part is also very important, as only metrics that have been discussed with the scientists themselves, not just selected by experts and policymakers, will actually promote what they are intended to: excellent science and breakthroughs.

The post is part of a series on the Accelerated Academy and is based on the author’s contribution presented at Power, Acceleration and Metrics in Academic Life (2 – 4 December 2015, Prague) which was supported by Strategy AV21 – The Czech Academy of Sciences. Videocasts of the conference can be found on the Sociological Review.

Note: This article gives the views of the author, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Dr. Toni Pustovrh is a researcher at the Faculty of Social Sciences, University of Ljubljana.

