Funders and the wider research community must avoid the temptation to reduce impact to just the things that can be measured, says Liz Allen of the Wellcome Trust. Measurement should not be for measurement's sake; it must contribute to learning. Qualitative descriptors of progress and impact, alongside quantitative measurements, are essential to evaluate whether research is actually making a difference.
Learning about what works and what doesn’t is an important part of being a science funder. Take the Wellcome Trust, for example: we have multiple grant types spread over many different funding schemes and need to make sure our money is well spent. Evaluating what we fund helps us to learn about successes and failures and to identify the impacts of our funded research. Progress reporting while a grant is underway is a core component of the evaluation process. Funders are increasingly taking this reporting to online platforms, which come with the ability to easily quantify outputs, compare funded work and identify trends.
As useful as these systems are, they come with an inherent danger: oversimplifying research impact by reducing it to things we can count. At the Trust, our attitude recalls a quote attributed to Albert Einstein: “Everything that can be counted does not necessarily count and everything that counts cannot necessarily be counted.” Including qualitative descriptors of progress and impact, alongside quantitative data, is integral to our auditing and reporting.
Some quantifiable indicators, such as bibliometrics, do help to tell us whether research is moving in the right direction; the production of knowledge and its use in future research, policy and practice is important. However, it’s not about how many papers have been published or how many patents have been filed – it’s about what’s been discovered. Without narrative and context alongside ‘lists’ of outputs, how can you know whether research is making a difference?
While different funders place their emphasis on different things when deciding which research to fund, as a sector we need to be responsible and avoid the creation of perverse incentives that distort the system [1]. If funders send out the message that what’s important is a lot of papers or collaborations, then those seeking our funding will tell us that they’ve produced a lot of papers or collaborations.
The research community needs to be pragmatic in moving the field of impact tracking and evaluation forward. We need to develop better qualitative tools to complement more established indicators of impact: traditional bibliometric indicators, such as citations, can now be set alongside more qualitative tools such as those provided by F1000 Prime. The Trust is also exploring the value that altmetrics can bring. Other channels for the dissemination of research, such as Twitter, are becoming increasingly popular among researchers, and it’s important that we understand their role.
Most of all, we should not forget why we fund research: to make a difference. In gathering reports from those we fund, we should encourage openness and the sharing of successes and failures alongside the products and outputs of research. At the core of evaluation is learning: how and when does research lead to impact, and how might we use our funds more effectively? A research impact agenda that encourages funders to merely count is clearly at odds with that.
This article was first published on the Wellcome Trust blog.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics.
A social scientist by training, Liz Allen leads the Evaluation team at the Wellcome Trust. At Wellcome, Liz is responsible for developing methodologies and implementing approaches to support the monitoring and evaluation of the impact of research and funding initiatives. Liz is also a member of the Board of Directors of the ORCID initiative (Open Researcher and Contributor ID, www.orcid.org), a not-for-profit initiative that aims to create an international and open system of unique and persistent researcher IDs.
1. Allen L. The art of evaluating the impact of medical science. Bulletin of the World Health Organization 2010;88:4. doi: 10.2471/BLT.09.074823
Narratives. Numbers are important, but narratives illustrate the impact behind the numbers by describing the qualitative effects on the lives of citizens, the environment and so on. Whether it is REF 2014 Case Studies or videos of community testimonials, impact must be illustrated by narratives to get the complete picture. Trouble is, narratives can’t be quantified, and in an era of competition there is a tendency to count things and use them as proxies for impact.
I’m currently involved in writing REF Impact documents. I think those doing so will err on the side of quantified impacts because they fear that if they don’t, the impact will be rejected by the panels. That doesn’t mean narratives, anecdotes or quotations can’t be used, but the two must sit side by side. A small number of anecdotes would be poor evidence of impact, in my opinion, but so would quoting a simple, single value like “15,000 people viewed my research blog last year”, as that tells us nothing about the actual impact the research has had.
It also depends on the type of impact. Economic or financial impacts are easier to quantify; outreach or communication activities less so.