
July 9th, 2015

The metric tide is rising: HEFCEmetrics report argues metrics should support, not supplant, expert judgement.


James Wilsdon introduces the Independent Review of the Role of Metrics in Research Assessment and Management. The review found that the production and consumption of metrics remains contested and open to misunderstanding. Wider use of quantitative indicators, and the emergence of alternative metrics for societal impact, could support the transition to a more open, accountable and outward-facing research system. But placing too much emphasis on poorly-designed indicators – such as journal impact factors – can have negative consequences.

For the full report, executive summary, further resources, and blogposts written on The Metric Tide report, please see our HEFCEmetrics page.

Over the past fifteen months, I’ve been chairing an independent review of the role of metrics in the research system. Our report, The Metric Tide, proposes a framework for responsible metrics and makes a series of targeted recommendations to university leaders, research funders, publishers and individual researchers. Together these are designed to ensure that indicators and the underlying data infrastructure develop in ways that support the diverse qualities and impacts of UK research.

Today the Independent Review of the Role of Metrics in Research Assessment and Management publishes its findings, available here. Our report, The Metric Tide, identifies 20 recommendations for stakeholders across the research system, covering the following areas: supporting the effective leadership, governance and management of research cultures; improving the data infrastructure that supports research information management; increasing the usefulness of existing data and information sources; using metrics in the next REF; and coordinating activity and building evidence.

Our review found that the production and consumption of metrics remains contested and open to misunderstanding. Wider use of quantitative indicators, and the emergence of alternative metrics for societal impact, could support the transition to a more open, accountable and outward-facing research system. But placing too much emphasis on poorly-designed indicators – such as journal impact factors – can have negative consequences, as reflected by the 2013 San Francisco Declaration on Research Assessment (DORA), which now has over 570 organisational and 12,300 individual signatories.

Metrics should support, not supplant, expert judgement. In our consultation with the research community, we found that peer review, despite its flaws and limitations, continues to command widespread support. We all know that peer review isn’t perfect, but it is still the least worst form of academic governance we have, and should remain the primary basis for assessing research papers, proposals and individuals, and for assessment exercises like the REF. At the same time, carefully selected and applied quantitative indicators can be a useful complement to other forms of evaluation and decision-making. A mature research system needs a variable geometry of expert judgement, quantitative and qualitative indicators. Academic quality is highly context-specific, and it is sensible to think in terms of research qualities, rather than striving for a single definition or measure of quality.

[Figure 6: REF 2014 quantitative data, from the HEFCE metrics review]

In most universities, you don’t have to look far to see how certain indicators can have negative consequences. These need to be identified, acknowledged and addressed. Linked to this, there is a need for greater transparency in the construction and use of indicators, particularly for university rankings and league tables. Those involved in research assessment and management should behave responsibly, considering the effects that indicators will have on incentive structures, behaviours, equality and diversity.

Indicators can only meet their potential if they are underpinned by an open and interoperable data infrastructure. How underlying data are collected is crucial. If we want agreed, standardised indicators, we need unique, unambiguous, persistent, verified, open, global identifiers; standard data formats; and standard data semantics. Without putting this holy trinity in place, we risk developing metrics that are not robust, trustworthy or properly understood.

[Image: ORCID unique identifier]

One positive aspect of our review has been the debate it has generated. To keep this going, we’re now setting up a blog – www.ResponsibleMetrics.org. We want to celebrate responsible metrics, but also name and shame bad practices when they occur. So please send us your examples of good or bad metrics in the research system. Adapting the approach taken by the Literary Review’s “Bad Sex in Fiction” award, next April we plan to award the first annual “Bad Metric” prize to the most egregious example of an inappropriate use of quantitative indicators in research management.

As a community, we can design the indicators we want to be measured by. The metric tide is certainly rising. But unlike King Canute, we have the opportunity – and now, a serious body of evidence – to influence how it washes through higher education and research.

This is an adapted extract of a piece which first appeared on the Guardian Science blog and is reposted with the author’s permission.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

James Wilsdon is professor of science and democracy in the Science Policy Research Unit (SPRU) at the University of Sussex (@jameswilsdon) and chair of the Independent Review of the Role of Metrics in Research Assessment and Management (@ResMetrics).

