August 22nd, 2011

Impact is a strong weapon for making an evidence-based case for enhanced research support but a state-of-the-art approach to measurement is needed

Research impact may be a new feature of the Research Excellence Framework, but the research evaluation community has been assessing impact since the 1990s. Claire Donovan believes it is time to stop reinventing the wheel and to ask: after nearly two decades of development, what is the state of the art in assessing research impact?

In the UK there has been scant engagement between the wider academic community and the social scientists who specialise in research evaluation. Yet with the 2014 Research Excellence Framework (REF) looming, university departments will soon have to provide evidence of the impact of their research.

But what, precisely, is research ‘impact’? What is the best way to demonstrate it? And what does ‘impact’ entail for the social sciences? A special issue of Research Evaluation on ‘State of the Art in Assessing Research Impact’ hopes to offer some answers.

What is research ‘impact’?

There are competing definitions of what constitutes research impact. The REF draws a clear line between research ‘quality’ (i.e. the relative excellence of academic outputs intended for academic consumption, e.g. journal papers and books) and research ‘impact’ (i.e. the benefits that research outcomes produce for wider society).

Yet confusion can ensue when REF commentators do not employ this division, a prime example being when journal citation (‘quality’) metrics are incorrectly presented as measures of ‘impact’. In what follows, I refer to ‘impact’ purely in the form of broader societal returns.

How this broader ‘impact’ is defined determines how it is assessed. Early attempts to measure the impact of research relied on the idea that the purpose of science was to support a country’s international competitiveness and to generate wealth. Impact was therefore represented by metrics based on economic measures and a raft of science, technology and innovation (STI) indicators.

More recently, research impact has been interpreted as part of a social contract between science and society, one that requires research to address pressing social issues. This redefinition of ‘impact’ embraces broader social, cultural, environmental and economic returns, and a mixture of qualitative and quantitative methods has been employed to capture these outcomes.

The state-of-the-art

“Counting the countable because the countable can be easily counted renders impact illegitimate.”

This laconic statement by John Brewer (President of the British Sociological Association) neatly summarises what the research evaluation community has known for a long while: metrics-only attempts to measure research impact not only fail to do so, but are also behind the times. Advanced approaches to assessing research impact combine narratives with relevant qualitative and quantitative indicators, capturing broader societal benefits and delivering meaningful results.

Two examples: Payback and SIAMPI

In the special issue of Research Evaluation, several papers use the ‘Payback Framework’ and the ‘SIAMPI’ approach (Social Impact Assessment Methods through the study of Productive Interactions) to assess research impact.

The ‘Payback Framework’ was developed in the 1990s to assess the outcomes of health services research. It is a multidimensional gauge of the contribution of research to knowledge production, capacity building, informing policy or product development, health and health sector benefits, and broader social and economic benefits. It provides a conceptual framework to collate, order and weigh evidence (interviews with researchers and stakeholders, documentary evidence, qualitative and quantitative indicators) and to make cross-case comparisons.

Internationally, Payback is widely regarded by funders of health and medical research as ‘best practice’ for evaluating the impact of research. In the UK it has been modified to assess the ‘impact’ of the humanities, and in the special issue it is applied to an assessment of the impact of the Economic and Social Research Council’s (ESRC) Future of Work Programme.

The SIAMPI philosophy will appeal to the REF-weary: assessment should be an ‘enlightenment tool’ used for learning how to maximise the chances of ‘impact’, rather than for judging. SIAMPI identifies ‘productive interactions’ (or forms of reciprocal engagement) between researchers and stakeholders as a precondition for research achieving any kind of socially valuable outcomes.

Interviews combine with qualitative and quantitative indicators to provide evidence of meaningful reciprocal engagement between academics and the beneficiaries of their research, and these indicators are likely to be predictive of future impact. One special issue paper outlines the SIAMPI approach, while another applies the method to assess the impact of the ESRC’s Centre for Business Relationships, Accountability, Sustainability and Society (BRASS).

‘Impact’ and the social sciences

John Brewer perceives a gulf between the research evaluation community represented at the ‘State of the Art’ workshop, on the one hand, and, on the other, social science academics who experience ‘impact’ in the REF as an imposition of the audit culture within higher education. “The suggestion that impact is a sheep in wolf’s clothing – that it looks more hazardous than it really is – is widely accepted within the policy evaluation tradition, who are bemused by all the fuss, while to critics of the audit culture such a metaphor seriously underplays the dangers around impact.”

These dangers include a lack of consensus over what ‘impact’ means; the way simplistic metrics exclude broader notions of public benefit; the fact that impact is often serendipitous, yet “chance should play no role in allocating quality-related research funding”; and concerns about how to handle the fact that social science impact is often negative (i.e. critical of government policy) and disguised (i.e. often unrecognised or not credited by ‘end-users’).

While the Payback approach can be successfully transferred to evaluate the broader societal impacts of social science research, issues of timing and attribution are found to be even more complex. Evaluations can occur too late, as well as too soon, to capture the influence of social science research upon fast-moving policy environments.

For SIAMPI, a possible solution is to focus on interaction processes rather than attempting to measure the impact of research. This removes the headache of assigning attribution because it is often impossible to connect research and ‘impact’ outcomes in a maze of complex, numerous social interactions and serendipitous turns. We should think of “contribution not attribution”.

REF and the ‘state of the art’

Payback and SIAMPI are examples of ‘state of the art’ approaches to assessing research impact that could potentially inform the REF’s development. Yet for Ben Martin these sophisticated approaches are bespoke and labour-intensive: the REF would seek to “convert this craft activity of impact assessment into mass production almost overnight” and would thereby be “cruder”.

We can strike a balance. Most importantly:

  1. ‘Impact’ need not be conceived of in purely economic terms: it can embody broader public value in the form of social, cultural, environmental and economic benefits.
  2. Metrics-only approaches are behind the times: robust ‘state of the art’ evaluations combine narratives with relevant qualitative and quantitative indicators.
  3. Rather than being the height of philistinism, ‘impact’ is a strong weapon for making an evidence-based case to governments for enhanced research support.
  4. The ‘state of the art’ is suited to the characteristics of all research fields in their own terms, including the humanities, creative arts and social sciences.
  5. There should be no disincentive for conducting ‘basic’ or curiosity-driven research: within the REF, ‘impact’ assessment should be optional and rewarded from a sizeable tranche of money separate from quality-related funds.
  6. The REF should not stifle ‘impact in the making’ or the efforts of younger research groups.