
Moqi Groen-Xu

Peter Coveney

July 10th, 2023

The REF needs to trust academics


Discussing recent research into how the REF distorts research and academic publishing patterns in the UK, Moqi Groen-Xu and Peter Coveney argue for a researcher-centred, rather than a rules-centred, REF.


All governance systems – states, parents, or research evaluations – fall on a spectrum between authoritarian and democratic. Authoritarian systems can be better at tackling common vices such as incompetence and nepotism. Democratic systems are better when the subjects are more diverse, better informed than the assessor, and their actions are hard to measure.

The REF sits squarely at the authoritarian end of the spectrum: it imposes an exorbitantly complex system of rules on UK academics, at the eye-watering cost of £471M (on the order of 1,000 £500K research grants). Although academics are better informed about their own work than the REF panels, they are required to present their work within a restrictive framework. That framework is based on measurable outcomes, limiting novel, fundamental, and daring research. Finally, its one-size-fits-all design penalises not only research outside existing paradigms, but also standard research that fails on its narrow criteria (e.g., important but not original applications).

Yet the REF fails to reap the advantages of authoritarian systems. Rather than encouraging competence, it promotes mediocre work with rapid turnaround that appeals to “peers” faced with thousands of outputs to assess. Because it focuses on outcomes, it distributes more funding to already well-endowed institutions and thereby reinforces the status quo. And in its “research environment” category, it captures investments that correlate with funding, and encourages extortionate overheads.

In a recent co-authored paper, one of us quantified the effect of the REF on research productivity, using 3,597,272 publications by UK academics around its deadlines. In the year before REF deadlines, UK academics produce 4% more journal papers than in the year after (among REF submissions, 29% more papers and 60% more books).

These REF-led spikes are not the most ground-breaking (Fig.1). Papers published before the deadlines receive 5% fewer citations over the following five years than those published immediately afterwards (16% fewer among REF submissions). They are also 0.003 percentage points more likely to be retracted later than papers from post-deadline years, an increase amounting to 79% of the mean retraction rate. The lower variation in their citations (5% lower than post-deadline papers) and in their journal impact factors (6% lower than post-deadline journals) suggests that they are safe bets. The shorter half-life of their journals (9% shorter than post-deadline journals) suggests that they address more short-term topics.

The continuous discouragement of novel work shows in the long run. 37 years after the Assessment was introduced, the UK share of research in the priority areas of the UK Integrated Review – AI, Quantum Technology, and Engineering Biology – is merely 7%. Tony Blair and William Hague report that “today’s expertise in most cutting-edge AI is largely to be found within frontier tech companies, not the country’s existing institutions.”

The current REF consultation acknowledges these concerns, but it does not go far enough. It is time to step away from a failed authoritarian approach and trust the academics under evaluation.

Rules. The consultation seeks to solve problems by introducing yet more rules. We recommend replacing these rules with the freedom for academics to explain their work.

Narratives. Frontier research is difficult to assess without context. The proposed explanatory statement is a mere addition to the current rules. We propose to replace the output-based submission with narratives that can refer to one or multiple outputs, put them in context, and explain how they fulfil the REF criteria.

Assessment periods. Assessment just five years after publication risks overweighting metrics such as journal impact factors. To encourage projects with long-term potential, allow outputs from more than one period. Long-term assessment is also essential for impact case studies, which naturally tend to reward short-term applied work.

Output weights. The high take-up (and granting) of double-weighting highlights the misalignment between the rewards and costs of different types of research. Change the submission unit to bundles of one or more outputs, with self-assessments of their merits (e.g., as frontier, applied, or replication research) and of appropriate weights (possibly in units similar to the current outputs per FTE). That would allow panels to focus on the most important submissions and be more cost-effective.

Recognition for non-explorative work. As a country-wide assessment, the REF should encourage work that complements and builds on frontier research. Removing the minimum 2* rating for research underlying impact case studies is a start, but it is inconsistent without acknowledging such work in general. We propose a separate category to recognise capability-building and development work, with smaller rewards reflecting its lower cost compared to ground-breaking work.

People, Culture, and Environment. The metrics used for this category almost entirely (except the proposed EDI measures) reflect reputation and funding, and therefore reinforce existing inequalities. Assessments should focus on the capabilities actually built, rather than the amount invested.

  • This category should reward a culture of exploration, including tolerance for failure.
  • Provide steeper rewards for pockets of excellence in lower-ranked institutions.
  • Unless the concerns around measurability are addressed, the category’s weighting should focus on knowledge building.

Panels. The prevalence of senior scholars on panels invites entrenchment and the grade inflation typical of “marking your own homework”. The proposal to appoint advisors on EDI is insufficient.

  • Panels need to include early career researchers (ECRs), who are better informed about the most cutting-edge fields.
  • They should consult scientists outside the UK to a greater extent, to assess the most ground-breaking work and to avoid entrenchment and inequality.
  • Institutions should flag novel and controversial submissions to minimise the time burden on ECRs and external assessors.

The consultation is a well-intended attempt to address concerns while avoiding disruption. But the ends do not justify preserving the existing set-up. Instead of introducing yet more rules, the REF – if it really has to continue (a debatable proposition) – should give institutions the discretion needed to bring the UK back up to speed, so that it can not only participate in, but occasionally lead, cutting-edge research at a global level.

 


The content generated on this blog is for information purposes only. This article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns about posting a comment below.

Image Credit: Laura Heimann via Unsplash.



About the author

Moqi Groen-Xu

Moqi Groen-Xu is a Senior Lecturer in Finance at the School of Economics and Finance at Queen Mary University of London. She specialises in empirical work on incentives in firms and among researchers. She blogs at moqixu.com and tweets @moqixu.

Peter Coveney

Peter Coveney is a Professor of Physical Chemistry, Honorary Professor of Computer Science, Director of the Centre for Computational Science (CCS) and Associate Director of the Advanced Research Computing Centre at University College London (UCL). He is also Professor of Applied High Performance Computing at the University of Amsterdam (UvA) and Professor Adjunct at the Yale School of Medicine, Yale University.

Posted In: REF2029 | Research evaluation
