Why has no other European country adopted the Research Excellence Framework?

Blog Team

January 19th, 2018

Most European countries have followed the UK’s lead in developing performance-based research funding systems (PRFS) for their universities. What they have not done, however, is adopt the UK’s own system, of which the Research Excellence Framework (REF) is the most recent iteration. Instead, many use indicators of institutional performance for funding decisions rather than panel evaluation and peer review. Gunnar Sivertsen has examined systems throughout Europe and finds the REF to be unique as a combination of performance-based institutional funding and research evaluation. While most countries do both, they do so in independent setups and with different, less expensive methodologies.

In 1986 the United Kingdom pioneered the development of performance-based research funding systems (PRFS) for universities with the introduction of the Research Assessment Exercise, now known as the Research Excellence Framework (REF). Most European countries have since introduced PRFS for their universities, but not by adopting the REF. A large group of countries use indicators of institutional performance (“metrics” in UK terminology) for funding decisions rather than panel evaluation and peer review. The few countries to have chosen the latter approach either do not use evaluation results for funding allocation or have at least partly replaced the assessment procedures with metrics.

Against best practice?

This situation should probably be understood from both sides. Starting with the UK perspective, the rest of Europe seems to disregard what is probably the most developed model of best practice when it comes to national research assessment exercises. The two major approaches used for PRFS in Europe, indicators of institutional performance versus panel evaluation and peer review of individual performances, were discussed in The Metric Tide report (Wilsdon et al. 2015), an independent review of the use of metrics in research evaluation. The review convincingly concludes that, within the REF, it is not currently feasible to assess research quality using quantitative indicators alone; peer review is needed. The review also warns that the use of indicators may lead to strategic behaviour and gaming. One of its main recommendations is:

“Metrics should support, not supplant, expert judgement. Peer review is not perfect, but it is the least worst form of academic governance we have, and should remain the primary basis for assessing research papers, proposals and individuals, and for national assessment exercises like the REF.”

This recommendation could be interpreted as a formulation of best practice for other countries as well, particularly since it is aligned with the first of the ten principles of the Leiden Manifesto for Research Metrics (Hicks et al. 2015): “Quantitative evaluation should support qualitative, expert assessment.” The implication would then be that most other countries ought to change their PRFS. But before reaching this conclusion, let us first consider the challenges from the other side.

Why not the REF?

Denmark, Finland, Norway, and Sweden belong to the majority of countries with indicator-based PRFS for their universities. The tradition in Scandinavia, however, is to look to the UK for inspiration. Sweden did so three years ago. FOKUS, a new model for research assessment and institutional funding, was designed as an adaptation of the REF. The government decided not to implement it, mostly for reasons of cost, but also because the universities were concerned about their institutional autonomy and preferred to organise research evaluations themselves. Sweden decided to continue with the approach it has used since 2009: a small part of the resource allocation for research is based on indicators of external funding and of productivity and citation impact within Web of Science.

Sweden’s choice can only be understood if we separate the two main purposes of a PRFS: research evaluation and funding allocation. The two can be difficult to distinguish. Hicks (2012) defines PRFS as related to both purposes; they are “national systems of research output evaluation used to distribute research funding to universities”. The understanding in Sweden is now that the purpose of research evaluation must be achieved by means other than the indicator-based PRFS. The emerging alternative is that each university runs a research assessment exercise by itself, with the help of international panels of experts. As an example, Uppsala University is presently running a research evaluation named “Quality and Renewal”, the overall purpose of which is to “analyze preconditions and processes for high-quality research and its strategic renewal”.

Sweden is thereby following the model of the Netherlands with regard to research evaluation. The national research assessment exercise in the Netherlands has no funding implications and is self-organised at certain intervals by each of the universities, coordinated at the national level by a Standard Evaluation Protocol (SEP). With this autonomous self-evaluation system in place, there is an agreement with the government that performance indicators representing research should not be part of the PRFS. Norway and Portugal also have national research assessment exercises that may look like the REF but in fact have a mainly formative and advisory function. Without the link to funding, flexibility is created: evaluations may have a thematic rather than institutional focus (e.g. climate research in Norway), and the units of assessment may be self-organised units representing collaboration across several universities in a certain field (as in Portugal).

Indicators rather than evaluation

Italy has so far come closest to adopting the REF as a model for PRFS, but since its first version in 2003, a semi-metric solution has been developed that differs considerably from the REF. Most other countries have chosen indicator-based models directly, not because they do not observe the scholarly standards and fundamental principles of research evaluation, but because they do not see direct institutional funding as the appropriate place for executing research evaluation. The indicators are not replacing peer review; they are used for purposes other than peer review. While direct institutional funding may be modified by performance indicators, proper use of peer review is instead embedded in procedures for competitive third-stream funding or in assessment exercises whose main purpose is supporting strategic development.

The method that became the purpose

In 1986, research assessment based on peer review was the chosen method for institutional funding allocation in the UK. Funding allocation was the main purpose. Growing constraints on public funding and the prevailing political ideology resulted in policies aimed at greater accountability and selectivity. Gradually, the method became the more important purpose. The REF is now officially the “UK’s system for assessing the quality of research in UK higher education institutions”. Seen from the inside, there seems to be no better solution. Seen from the outside, the REF is unique as a combination of performance-based institutional funding and research evaluation. Most countries do both, but in independent setups and with different, less expensive methodologies.


Note: This blog post is based on the author’s article, “Unique, but still best practice? The Research Excellence Framework (REF) from an international perspective”, published in Palgrave Communications (DOI: 10.1057/palcomms.2017.78). The article first appeared at our sister site, LSE Impact. It gives the views of the author, not the position of EUROPP – European Politics and Policy or the London School of Economics. Featured image credit: Bacon’s map of Europe by Norman B. Leventhal Map Center. This work is licensed under a CC BY 2.0 license.

 _________________________________

About the author

Gunnar Sivertsen – NIFU
Gunnar Sivertsen is Research Professor at the Nordic Institute for Studies in Innovation, Research and Education (NIFU) in Oslo. His expertise is in policy-oriented studies of research related to statistics, performance indicators, evaluation, funding, and science policy. He has advised the development of new systems for research assessment and monitoring in the Czech Republic, Denmark, Finland, Flanders (Belgium), Norway and Sweden.
