The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management
This report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.
The review was chaired by Professor James Wilsdon, Professor of Science and Democracy at the Science Policy Research Unit (SPRU) and was supported by an independent steering group with the following members: Dr Liz Allen, Dr Eleonora Belfiore, Sir Philip Campbell, Professor Stephen Curry, Dr Steven Hill, Professor Richard Jones, Professor Roger Kain, Dr Simon Kerridge, Professor Mike Thelwall, Jane Tinkler, Dr Ian Viney, Professor Paul Wouters.
Source: Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: 10.13140/RG.2.1.4929.1363
- Follow the discussion of the report launch event on 9 July 2015 on Twitter at #HEFCEmetrics
- Responsible Metrics blog – A forum for the responsible use of metrics in research
- Supplementary Report I: A comprehensive review of the literature.
- Supplementary Report II: Detailed analysis of the correlation between REF2014 scores and a basket of metrics.
- HEFCE website – policy guide with papers from steering group meetings and more information on stakeholder workshops.
Blog posts on the report and review process (section to be continuously updated):
The metric tide is rising: HEFCEmetrics report argues metrics should support, not supplant, expert judgement.
James Wilsdon introduces the report which found that the production and consumption of metrics remains contested and open to misunderstanding. Wider use of quantitative indicators, and the emergence of alternative metrics for societal impact, could support the transition to a more open, accountable and outward-facing research system. But placing too much emphasis on poorly-designed indicators – such as journal impact factors – can have negative consequences.
The metrics dilemma: University leadership needs to get smart about its strategic choices over what counts.
The review of metrics enjoins universities not to drift with the ‘metric tide’. To do this requires a united front of strategic leadership across the sector, argues HEFCE’s Steven Hill. Rather than the inevitable claims about league table positions on website front pages, universities could offer further explanation of how the rankings relate to the distinct mission of the institution.
The Management of Metrics: Globally agreed, unique identifiers for academic staff are a step in the right direction.
The Metric Tide report calls for research managers and administrators to champion the use of responsible metrics within their institutions. Simon Kerridge looks in greater detail at specific institutional actions. Signing up to initiatives such as the San Francisco Declaration on Research Assessment (DORA) is a good start. Furthermore, mandating unique, disambiguated identifiers for academic staff, such as ORCID iDs, will make the links between researchers, projects and outputs more robust.
Do institutions and academics have a free choice in how they use metrics? Meera Sabaratnam argues that structural conditions in the present UK Higher Education system inhibit the responsible use of metrics. Funding volatility, rankings culture and time constraints are just some of the issues that make it highly improbable the sector can enact the approach the Metric Tide report calls for.
Impact is multi-dimensional, the routes by which impact occurs differ across disciplines and sectors, and impact changes over time. Jane Tinkler argues that if institutions like HEFCE specify a narrow set of impact metrics, more harm than good would come to universities forced to limit their understanding of how research is making a difference. Qualitative and quantitative indicators nevertheless remain a rich source of learning about how impact works in each of our disciplines, locations and sectors.
The Metric Tide report calls for the responsible use of metrics. As a supplier of data and metrics to the scholarly community, Elsevier supports this approach and agrees that metrics should support human judgement, not replace it, writes Peter Darroch. To be used effectively, metrics need to span a broad range, be produced by both academia and industry, and be generated automatically for any entity of interest.
Measuring research: what are the units of assessment? by Stephen Curry
A Basketful of Metrics? by Athene Donald
The Metric Tide: Rethinking Research on Research by Liz Allen
Skewering the impact factor by Stephen Curry
Research impact: A rising tide lifts all boats by Ian Viney
Can Metrics Be Used Responsibly? Why Structural Conditions Push Against This by Meera Sabaratnam
Science, values and the limits of measurement by Cameron Neylon
Riding the waves of the Metric Tide by Stacy Konkiel
imagine there’s new metrics (it’s easy if you try) by Jenny Martin
Metrics in the Arts and Humanities by Martin Eve
How to measure a scientist by Matthew Partridge