The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management

Download the Full PDF and Executive Summary

This report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

The review was chaired by Professor James Wilsdon, Professor of Science and Democracy at the Science Policy Research Unit (SPRU), and was supported by an independent steering group with the following members: Dr Liz Allen, Dr Eleonora Belfiore, Sir Philip Campbell, Professor Stephen Curry, Dr Steven Hill, Professor Richard Jones, Professor Roger Kain, Dr Simon Kerridge, Professor Mike Thelwall, Jane Tinkler, Dr Ian Viney and Professor Paul Wouters.
Figure 1: Impact metrics tracking.
Source: Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: 10.13140/RG.2.1.4929.1363

Resources

Blog posts on the report and review process (section to be continuously updated):

The metric tide is rising: the HEFCEmetrics report argues metrics should support, not supplant, expert judgement.
James Wilsdon introduces the report, which found that the production and consumption of metrics remain contested and open to misunderstanding. Wider use of quantitative indicators, and the emergence of alternative metrics for societal impact, could support the transition to a more open, accountable and outward-facing research system. But placing too much emphasis on poorly designed indicators – such as journal impact factors – can have negative consequences.

The metrics dilemma: University leadership needs to get smart about their strategic choices over what counts.
The review of metrics enjoins universities not to drift with the ‘metric tide’. To do this requires a united front of strategic leadership across the sector, argues HEFCE’s Steven Hill. Rather than the inevitable claims about league table positions on website front pages, universities could offer further explanation of how the rankings relate to the distinct mission of the institution.

The Management of Metrics: Globally agreed, unique identifiers for academic staff are a step in the right direction.
The Metric Tide report calls for research managers and administrators to champion the use of responsible metrics within their institutions. Simon Kerridge looks in greater detail at specific institutional actions. Signing up to initiatives such as the San Francisco Declaration on Research Assessment (DORA) is a good start. Furthermore, mandating unique, disambiguated identifiers for academic staff, such as ORCID iDs, will make the links between researchers, projects and outputs more robust, as the short sketch below illustrates.
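As a rough illustration of why such identifiers are machine-checkable, the following sketch validates the shape of an ORCID iD: sixteen characters whose final character is an ISO 7064 MOD 11-2 check digit derived from the first fifteen. This is an illustrative Python sketch; the function names are ours, not part of any ORCID library.

def orcid_check_character(base_digits):
    # Compute the ISO 7064 MOD 11-2 check character for the first 15 digits.
    total = 0
    for digit in base_digits:
        total = (total + int(digit)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_plausible_orcid(orcid):
    # True if the iD has the right shape and a consistent check character.
    compact = orcid.replace("-", "")
    if len(compact) != 16 or not compact[:15].isdigit():
        return False
    return orcid_check_character(compact[:15]) == compact[15]

# Example: the sample iD used in ORCID's own documentation.
print(is_plausible_orcid("0000-0002-1825-0097"))  # True

A check like this only confirms that an iD is well-formed; the real disambiguation benefit comes from resolving the iD against the ORCID registry.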

Can metrics be used responsibly? Structural conditions in Higher Ed push against expert-led, reflexive approach.

Do institutions and academics have a free choice in how they use metrics? Meera Sabaratnam argues that structural conditions in the present UK Higher Education system inhibit the responsible use of metrics. Funding volatility, rankings culture and time constraints are just some of the issues making it highly improbable that the sector can enact the approach the Metric Tide report has called for.

Rather than narrow our definition of impact, we should use metrics to explore richness and diversity of outcomes.

Impact is multi-dimensional, the routes by which impact occurs differ across disciplines and sectors, and impact changes over time. Jane Tinkler argues that if institutions like HEFCE specify a narrow set of impact metrics, more harm than good will come to universities forced to limit their understanding of how research is making a difference. But qualitative and quantitative indicators remain an invaluable source of learning about how impact works in each discipline, location or sector.

Pursuing a multidimensional path to research assessment – Elsevier’s approach to metrics

The Metric Tide report calls for the responsible use of metrics. As a supplier of data and metrics to the scholarly community, Elsevier supports this approach and agrees that metrics should support human judgement, not replace it, writes Peter Darroch. For metrics to be used effectively, academia and industry need to produce a broad range of them, generated automatically and for any entity of interest.
