
Blog Admin

July 28th, 2015

Rather than narrow our definition of impact, we should use metrics to explore richness and diversity of outcomes.


Impact is multi-dimensional, the routes by which impact occurs differ across disciplines and sectors, and impact changes over time. Jane Tinkler argues that if institutions like HEFCE specify a narrow set of impact metrics, more harm than good will come to universities forced to limit their understanding of how research is making a difference. But qualitative and quantitative indicators remain an invaluable source of learning about how impact works in each of our disciplines, locations or sectors.

This is part of a series of blog posts on the HEFCE-commissioned report investigating the role of metrics in research assessment. For the full report, supplementary materials, and further reading, visit our HEFCEmetrics section.

If you consider that nearly 7,000 impact case studies were recently submitted to the REF and (I’m guessing) every single one of them contained some kind of indicator to evidence its impact claims, you might expect the academic community to be more enthusiastic about the use of impact metrics than about some other types of quantitative indicators. But during the course of our consultation for the HEFCE Metric Tide report, we found just as many concerns about the use of impact metrics as about these other types.

One of the most common concerns that colleagues discussed with us is that impact metrics focus on what is measurable at the expense of what is important. But, as the report highlights in relation to excellence, it’s more than this. When you design a metric for impact, you are explicitly constructing a definition of what impact is, and when you go on to use that metric, you are locking in that definition. What we know is that impact is multi-dimensional, the routes by which impact occurs differ across disciplines and sectors, and impact changes over time. We would need a really broad range of metrics to be able to usefully show this variety.

Figure 1: Impact metrics tracking. Source: Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: 10.13140/RG.2.1.4929.1363

And it seems that impact case study authors did use a wide variety of impact indicators. Evidence from the Digital Science and King’s College London evaluation of REF impact case studies found that, across the nearly 7,000 impact case studies, authors used almost as many different metrics to evidence the impact of their research. They did so because they were able to choose the ones that seemed most appropriate to evidence their research and the impact it had created. (A few examples are listed in Figure 1.) And that is another key concern with the use of impact metrics in the next REF. If HEFCE were to pick even a ‘basket’ of metrics for the next REF, would there be enough overlap between disciplines, sectors and so on for the chosen metrics to describe the outcomes of research impact effectively and allow comparison? The Digital Science and King’s College London report says not:

The quantitative evidence supporting claims for impact was diverse and inconsistent, suggesting that the development of robust impact metrics is unlikely … impact indicators are not sufficiently developed and tested to be used to make funding decisions.

So for the impact component of the REF, the Metric Tide report concluded that it is not currently feasible to use quantitative indicators in place of narrative impact case studies. The danger is that doing so would narrow the concept of impact, defining it too specifically by the easy availability of indicators for some types of impact and not for others. For an exercise like the REF, where HEIs are competing for funds, defining impact through quantitative indicators is likely to mean universities ‘play safe’ about which impact stories have the greatest currency and therefore should be submitted. This would mean showing less of the diversity and richness of the impacts that we create from our research.

Another reason not to encourage any funder to specify a set of impact metrics at a particular point in time is the growth in the number of tools that can provide some indication of impact. Individual academics are collecting more information about impact-relevant activities and their effects, and universities are making better use of the information they and others already hold to do the same. The recommendations in the Metric Tide report around the improvement of research infrastructure and the greater use of identifiers such as ORCID were made in the hope that this will become easier. It would be a shame if we were unable to make best use of any new tool just because it was not on some specified list.

But that is not to say impact metrics are not useful or needed. We are still in the fairly early days of understanding the ways in which impact happens, and both qualitative and quantitative indicators can be a source of learning about how impact works in each of our disciplines, locations or sectors. So we should use the dataset of impact case studies as a learning tool about the ways in which successful impact was created, with what methods and to what effect. Although as yet many impact metrics give only partial information, for me some information is always better than none.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.

About the Author

Jane Tinkler is Senior Social Science Advisor at the UK Parliamentary Office of Science and Technology (POST). She has been a social science researcher for nearly ten years, working on applied projects with government, civil society and academic partners. Prior to joining POST, she was based in the Public Policy Group at the London School of Economics, where most recently she worked on the impact of academic research in the social sciences. Her recent publications include: (with Simon Bastow and Patrick Dunleavy) (2014) The Impact of the Social Sciences: How Academics and their Research Make a Difference.



Posted In: Academic communication | Impact | LSE comment | Measuring Research | REF2014 | REF2021 | Research funding
