Since the UK decided to link research assessment to research funding, there have been critiques that the competitive nature of the REF assessment creates a "winner-takes-all" environment. Whilst this is difficult to assess, Banal-Estanol et al. use a novel method to evaluate UK research performance against a US-based counterfactual. They find that the UK's overall research output increased relative to the comparator, but that research productivity did not. The REF also enhanced the pre-existing concentration of research output in some institutions, largely at the expense of non-'elite' universities.
Public research policy in the UK over the last 35 years has revolved around nationwide assessment exercises, first the Research Assessment Exercise (RAE) and subsequently the Research Excellence Framework (REF). The ratings of research performance produced by these exercises determine university core research funding allocation. As such, they have major effects on the fortunes of UK universities, both directly financially, and indirectly through prestige.
This raises a simple question: how comparable are these ratings across different universities? After all, UK universities have historically had different missions and research trajectories. From the outset of these exercises, it has been argued that such comparisons risk creating competitive "rich get richer" dynamics that reinforce inequalities and work against ambitions of levelling up the system. Has the REF, in its current form, added value by enhancing the overall research capability and performance of UK universities, or has it merely incentivised efforts to move existing research capacity around?
Directly comparing universities against each other within a highly heterogeneous domestic Higher Education (HE) sector can be misleading. In our recent paper, we argue for a new and potentially more equitable approach: comparisons should be made against similar international benchmarks. Ideally, such benchmarks would have followed similar trends to the UK universities prior to assessment. Given that such comparable international units may not exist, we applied a synthetic control method to construct a suitable control group. In effect, we created a synthetic unit for each UK university as a weighted average of US universities. In this way, we find, for example, that in the area of Economics the best match for City University London is a combination of the University of Delaware, Florida Atlantic University and the University of Georgia. Once we have a comparable unit, we can compare the research performance of a UK university with its counterfactual, to identify the causal effect of the REF on that university's output.
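The core of the synthetic control idea can be sketched in a few lines: choose non-negative donor weights, summing to one, that make the weighted average of US departments track the UK department's pre-treatment outcomes as closely as possible. The sketch below uses simulated data and a generic constrained least-squares fit; it is an illustration of the technique, not the paper's actual estimation pipeline, and all variable names and numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical pre-treatment outcomes (e.g. yearly publication counts over
# five pre-REF years) for a donor pool of eight US departments.
n_years, n_donors = 5, 8
donors = rng.poisson(40, size=(n_years, n_donors)).astype(float)

# For illustration, the "UK department" is an exact convex combination of
# three donors, so a perfect synthetic control exists.
treated = donors[:, :3] @ np.array([0.5, 0.3, 0.2])

def loss(w):
    # Squared pre-treatment gap between the UK unit and its synthetic twin.
    return np.sum((treated - donors @ w) ** 2)

w0 = np.full(n_donors, 1.0 / n_donors)  # start from uniform weights
res = minimize(loss, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * n_donors,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])

weights = res.x
synthetic = donors @ weights  # counterfactual trajectory for the UK unit
```

Post-treatment, the estimated REF effect would be the gap between the UK department's observed outcomes and `synthetic` extended over the treatment years.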
We applied our approach to estimate the effect of REF 2014 on the research performance of UK universities in the areas of Economics and Business. We focused on REF 2014 because we anticipated that its incremental impact on research performance – relative to the previous RAE 2008 – would be larger than previous and subsequent rounds. Notably, REF 2014 introduced significant changes, such as providing higher rewards for world-leading research (4*), while eliminating payments for research recognised only internationally (2*) or nationally (1*).
The university performance measures we used include the number of publications, the number of publications in top journals, the number of publications per author, and the number of publications in top journals per author. As shown in Figure 1, the number of publications, as well as those in top journals, increased on average for UK universities throughout the whole REF 2014 assessment period. Still, the same measures also increased, on average, for universities elsewhere, such as in the US. The key is to determine whether UK universities' research performance increased further than it would have in the absence of the REF. Hence the need to find (or construct!) appropriate counterfactuals for each UK university.
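Computing these four measures from publication-level data is straightforward. The snippet below shows one way to do it on a toy department-year dataset; the record fields ("top_journal", "authors") and journal names are illustrative assumptions, not the paper's data schema.

```python
# Hypothetical publication records for one department in one year.
pubs = [
    {"journal": "AER", "top_journal": True,  "authors": ["A", "B"]},
    {"journal": "JET", "top_journal": False, "authors": ["A"]},
    {"journal": "QJE", "top_journal": True,  "authors": ["C"]},
]

n_pubs = len(pubs)                           # total publications
n_top = sum(p["top_journal"] for p in pubs)  # publications in top journals
n_authors = len({a for p in pubs for a in p["authors"]})  # distinct authors

pubs_per_author = n_pubs / n_authors  # productivity
top_per_author = n_top / n_authors    # productivity in research excellence
```

The distinction matters for the paper's headline result: total counts can rise simply because `n_authors` grows, while the per-author ratios stay flat.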
Fig1. Evolution of research outcome measures over time. This figure shows the evolution over time of the yearly averages of the outcomes of interest, separated by UK (solid line) and US (dotted line).
Our results indicate that REF 2014 significantly increased UK universities' research output, on average, relative to their counterfactuals. Indeed, the number of publications of UK departments grew relative to their US control groups across the whole 2009-2014 treatment period, and especially towards the end of that period (2012-2014). Research excellence, measured by the number of publications in top journals, also increased, albeit to a lesser extent, again especially towards the end of the assessment period. However, the number of publications per author, and the number of publications in top journals per author, did not change relative to the counterfactual, as the number of authors in UK universities also increased.
This suggests that the REF did not result in an overall increase in UK universities' research productivity, or in their productivity in research excellence. Rather than improving their research environments by investing in processes and support strategies that enable existing academics to become more productive, universities appear to have recruited more researchers, either from other UK universities or from the international market, in order to perform better. While this may not be a bad outcome, and beyond the question of the sustainability of such strategies, these behaviours may result in research excellence migrating from one university, or one region, to another, leading to a redistribution rather than an overall enhancement of the capacity of the UK research system.
Our results support concerns that the REF may have enhanced the concentration of research output in elite universities. REF 2014 did indeed reinforce the already strong position of Russell Group universities. When comparing Russell Group and non-Russell Group universities, our results indicate that, while the non-Russell Group displayed on average a greater increase in the proportion of publications in top journals in Economics, Russell Group universities collectively experienced a higher increase in the overall number of publications in Economics and Business, and in publications in top journals in Business. Nonetheless, results are not uniform within each group, as reported in Figure 2, and only a few universities in each group outperformed their counterfactuals, in a statistically significant way, on at least one performance measure. The rest of the universities were either unaffected or negatively affected by the REF (see Table A5 in the paper's Appendix for more details).
Fig 2. Distribution of the REF yearly treatment effects for all UK universities, and for the Russell and non-Russell Groups. Figure 2 reports the distribution of the number of publications of UK universities in excess of their counterfactuals' (left) and the number of publications in top journals of UK universities in excess of their counterfactuals' (right), for all universities and, separately, for the Russell and non-Russell groups of universities.
This points to the need for future assessment exercises (and other international performance-based research funding schemes) to encourage and reward a more inclusive culture of collaboration among domestic higher education institutions, rather than constructing a competitive environment, a situation that has widened the performance gap and induced 'losers' to stop submitting to the REF altogether (as happened in Economics).
Additionally, our approach provides a robust, data-driven methodology for assessing the research performance of universities against comparable international benchmarks. Allocating funding on the basis of performance against these benchmarks would secure fairer competition and could level the playing field between HE providers. Further, supporting the research efforts of non-elite universities would also provide access to research-informed and research-focused education for the less advantaged students attending them, offering a genuine mechanism to 'level up' the existing system.
The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.
Image Credit: Edward Howell via Unsplash.