The review of metrics enjoins universities not to drift with the ‘metric tide’. To do this requires a united front of strategic leadership across the sector, argues HEFCE’s Steven Hill. Rather than the inevitable claims about league table positions on website front pages, universities could offer further explanation of how the rankings relate to the distinct mission of the institution.
This is part of a series of blog posts on the HEFCE-commissioned report investigating the role of metrics in research assessment. For the full report, supplementary materials, and further reading, visit our HEFCEmetrics section.
Last week saw the publication of the report of the Independent Review of Metrics in Research Assessment, ‘The Metric Tide’. The review was led by Professor James Wilsdon, supported by an excellent steering group, and the report represents the culmination of an 18-month-long project that aimed to be the definitive review of this important topic. Accompanied by a scholarly literature review, some new analysis, and a wealth of evidence and insight from both written contributions and numerous discussions at workshops and conferences, the report is a tour de force: a once-in-a-generation opportunity to take stock.
‘The metric tide’
As might be expected for a topic so complex and contested, the conclusions of the report are nuanced. But there is a very clear message for universities and their management. The review recommends:
‘At an institutional level, HEI leaders should develop a clear statement of principles on their approach to research management and assessment, including the role of quantitative indicators. On the basis of these principles, they should carefully select quantitative indicators that are appropriate to their institutional aims and context. Where institutions are making use of league tables and ranking measures, they should explain why they are using these as a means to achieve particular ends. Where possible, alternative indicators that support equality and diversity should be identified and included. Clear communication of the rationale for selecting particular indicators, and how they will be used as a management tool, is paramount. As part of this process, HEIs should consider signing up to DORA, or drawing on its principles and tailoring them to their institutional contexts.’
Within the review group, the thinking behind this recommendation stemmed largely from a concern that institutions are sometimes outsourcing their strategies. By tying a goal to league table or ranking positions, they allow another organisation, through its choice of indicators and weightings, to determine their strategy. Building on the metaphor of the report’s title, universities risk drifting on the metric tide, rather than setting and steering their own direction. As I have argued previously, this is a concern both in the UK and overseas.
Image credit: Santeri Viinamäki CC BY 4.0, via Wikimedia Commons
The dilemma for leadership
At the heart of this, though, there is a real dilemma for institutional leadership. If the institution does the right thing – sets its own goals, and monitors progress with a set of responsible indicators, both qualitative and quantitative – there is a risk that league table positions might suffer.
And while internally the institution should be relaxed about this, confident in its strategic choices, outside stakeholders will see a different picture. In particular, those in search of simple proxies for institutional quality and reputation will inevitably be drawn to league table comparisons. Not only are these comparisons disconnected from institutional strategies, they are also themselves potentially flawed, making too much of small differences that lack statistical robustness.
Steering in the right direction
What is the solution to this dilemma? Communications clearly play a role. Universities need to get smarter at talking about what they do, why it’s important, and the strategic choices they are making.
Rather than the inevitable claims about league table positions on website front pages, maybe there could be some explanation of how the different rankings relate to the distinct mission of the institution. Of course, to do so would require transparency from the providers of league tables, another recommendation from the review.
To make this work would require collective agreement across the sector. What if all UK institutions made a stand against global rankings, and stopped using them for promotional purposes? The reputation of the UK’s higher education sector would stand firm, and a really strong signal would be sent to the rest of the world. Not drifting, but steering purposefully through the metric tide.
This piece originally appeared on the HEFCE blog and is reposted with permission.
Note: This article gives the views of the authors, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
Steven Hill is Head of Research Policy at HEFCE.