The Metric Tide report calls for the responsible use of metrics. As a supplier of data and metrics to the scholarly community, Elsevier supports this approach and agrees that metrics should support human judgment, not replace it, writes Peter Darroch. To be used effectively, metrics need to span a broad range, be developed jointly by academia and industry, and be capable of automatic generation for any entity of interest.
This is part of a series of blog posts on the HEFCE-commissioned report investigating the role of metrics in research assessment. For the full report, supplementary materials, and further reading, visit our HEFCEmetrics section.
Following the publication of the report ‘The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management’, Elsevier would like to express its support for the review. We find it a balanced and sensible perspective on the value metrics can bring to merit systems. As highlighted in our response to HEFCE’s call for evidence, it has been our consistent position that quantitative data inform, but do not and should not ever replace, peer review judgments of research quality – whether in the REF, or for any other purpose. Metrics can support human judgment and contribute to a fully rounded view of the research question being asked.
We support using a ‘basket of metrics’ or indicators to measure the multiple qualities of the many different entities to be investigated, such as articles, journals, researchers or institutions. One data source or a single metric is never sufficient to answer questions around research assessment: each metric has its weaknesses, and any metric used in isolation creates a distorted incentive. The focus of a researcher’s work should be conducting and communicating outstanding research, and the role of metrics is to support, not limit, researchers in demonstrating what they consider to be outstanding.
Elsevier will continue to provide a basket of metrics containing multiple types of data and several points of measurement, and will continue to engage with the global research community and be guided by it. This will allow us to keep improving the range of metrics we offer, extending the metrics and data sources available, and ensuring that metrics can be generated for all entities of interest. Elsevier remains committed to the principles of transparency and interoperability between systems that the report highlights. We support: open, standardised methodologies for research metrics; open, persistent, verified, unique global identifiers; and agreed standards for data formats and semantics. For example, we were early adopters of ORCID, FundRef and DataCite, and we actively participate in organisations such as EuroCRIS, NISO and CASRAI.
It is for this reason that Elsevier strongly supports Snowball Metrics. Snowball Metrics has brought academia and industry together to define and standardise a manageable set of metrics that aim to enable benchmarking between institutions across the entire spectrum of research activities. Transparency has played a central role in this process, ensuring that each metric ‘recipe’ is open and builds on existing standards such as HESA definitions, CERIF and CASRAI.
Image credit: Walters Art Museum (Wikimedia, Public Domain)
Elsevier welcomes the proposal to establish a Forum for Responsible Metrics and would very much like to participate in it. The forum will provide an excellent way to continue our support of the scholarly and publishing communities in the development of useful standards. We would like to highlight some areas we feel should be considered to maximise the benefit of the Forum:
- Firstly, we believe that using metrics responsibly requires the ability to automate their production for any entity of interest. Research assessment is nuanced and complex, so it is important to have a variety of established, well-understood metrics that can be deployed flexibly and ‘on the fly’. This enables the most appropriate metrics to be selected from the range available to address the question at hand, whether in the social sciences or in chemistry.
- Secondly, we feel it is important to draw on international perspectives wherever possible. While rolling out the report’s recommendations is a daunting task even within the UK, we strongly encourage consulting and incorporating the experience and opportunities available internationally. This will not only benefit the UK, but also ensure that research globally gains from the leading position the UK has taken in this area. Elsevier would be happy to use our global networks to help facilitate this.
- Finally, we want to emphasise that showcasing excellence – ‘bottom-up’ – is as important as measuring it – ‘top-down’. It would be a great step forward if this forum could empower researchers to feel ownership over how those allocating resources view their outputs, rather than simply being on the receiving end of someone else’s view.
The UK is widely seen as a global leader in research and research assessment; with the release of this report and the intention to establish a Forum for Responsible Metrics, there is an opportunity to use this experience for the benefit of the global community. Elsevier would like to help the UK extend its position through continued collaboration with the sector.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
Dr Peter Darroch is Senior Product Manager, Research Metrics at Elsevier.