Selling impact: How is impact peer reviewed and what does this mean for the future of impact in universities?

May 5th, 2016

Estimated reading time: 5 minutes


Despite a wealth of guidance from HEFCE, impact evaluation in the run-up to REF2014 was a relatively new experience for universities, and how it was undertaken remains largely opaque. Richard Watermeyer and Adam Hedgecoe share their findings from a small but intensive ethnographic study of impact peer review undertaken in one institution. Their observations palpably confirmed a sense of a voyage into the unknown. Amid the confusion and uncertainty, panellists tended to prioritise hard (or more immediately certain) impacts over those deemed softer (or more nebulous).

Back in 2012, before universities across the UK submitted evidence of their excellence to the Research Excellence Framework (REF), ‘impact’, a new aspect of performance review for research communities, was all the talk, never far from the lips of senior academics, administrators and institutional managers. HEFCE had announced, after a pilot exercise and considerable deliberation, and despite the vitriol of significant swathes of the academic family, that impact would constitute 20% of the overall REF award. Some devised heuristics calculating that the very best impact submissions – those brandishing a 4* narrative case study – would have a currency exchange of seven 4* research outputs, or, in more explicitly monetary terms, as much as £350,000 within the REF period.
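
As a rough illustration of how such a heuristic could be reached (this back-of-the-envelope reconstruction is ours, not a published HEFCE calculation, and the unit size is an assumed figure): REF2014 weighted research outputs at 65% and impact at 20% of the overall assessment, with each submitted researcher returning up to four outputs but a unit returning only a handful of case studies. For a hypothetical unit of $n$ staff submitting $c$ case studies and $4n$ outputs, the relative weight of one case study against one output is

% illustrative only: the 65%/20% weightings are REF2014's; n = 18, c = 3 are assumed
\[
\frac{0.20/c}{0.65/(4n)} = \frac{0.8\,n}{0.65\,c} \approx 7.4 \qquad (\text{e.g. } n = 18,\ c = 3),
\]

which is in the region of the ‘seven 4* outputs’ figure quoted above; dividing the quoted £350,000 by seven similarly implies a notional value of roughly £50,000 per 4* output over the REF period.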

Given the huge potential value to be gained by institutions able to evidence excellence in impact, all manner of strategies and resources were directed at ensuring the best possible outcomes of a REF impact submission. Significant sums were reportedly invested by universities across the sector in, for instance, copy-editors and science writers: those with the expertise and skill to craft the most cogent and convincing impact narratives, the better to inveigle the favour of the academic peer-reviewers and user assessors populating the REF’s four main panels and 36 disciplinary sub-panels.

Image credit: Freytag’s structure for plot narrative, Wikimedia, Public Domain

Our own experience of preparing for impact in the REF occurred at Cardiff University, where what amounted to a mock-REF exercise, specifically for impact, was organised as a process to help second-guess the key factors and influences that would lead REF panel members to assign high evaluation scores. This mock impact-REF was organised into four main panels, mimicking the core disciplinary areas to be reviewed within the main event of REF2014. It pulled together the most senior and experienced academics from across the university, many of whom had been panel members in previous Research Assessment Exercises (RAEs) and would be recruited for the same purpose in REF2014. These were joined by a smaller number of user-assessors: policy folk, industry players and the like. Over two days, these four panels examined a range of the university’s most promising impact case studies and made recommendations as to how they would likely be graded and how they could be improved.

We were fortunate to be granted permission to observe these two days; an opportunity not to be passed up, given that any observation of the actual REF panels would be strictly off-limits. From these observations we were able to determine, firstly, the process undertaken by panel members in making decisions about the worth of impact case studies, and the various challenges they encountered in finding consensus; and secondly, what they perceived to be the best recipe for outstanding case studies. As observers we were cognisant of how novel this process was, and that despite the wealth of experience panel members had of peer review, much of it was confined to evaluating research rather than its impact. How impact would be evaluated was, despite a wealth of inputs and guidance from HEFCE, entirely opaque, which made this particular exercise all the more pertinent. Our observations palpably confirmed this sense of a voyage into the unknown, with academics struggling, in part, to arrive at a ‘true’ and fair sense of impact.

Image credit: Daderot (Own work), CC0, via Wikimedia Commons

What really stood out was what panel members felt would be essential to convincing those within the REF of an impact’s veracity. Evidence, it was anticipated, would be relatively thin on the ground, since researchers simply had not been in the habit of harvesting their impact achievements. Furthermore, it was reckoned naïve to imagine, given the huge demands already made of REF panel members, that they would have either the capacity or the will to closely scrutinise accompanying evidence. Much more, then, it was felt, would rest on being able to spin the most convincing of narratives: one that would leave no doubt, and no need for recourse to supporting documentation, in adjudicating the excellence – the significance, reach and transformative potential – of a researcher’s economic and societal contribution. This meant a whole new writing style, one perhaps more easily exercised by those in the social sciences, arts and humanities than by those in the STEM disciplines.

Quite what would be viewed as the best kinds of impact, across what is ostensibly an endless continuum, also gave cause for prudence and conservatism among panel members, and a tendency to prioritise hard (or more immediately certain) impacts over those deemed softer (or more nebulous). What ensued, in other words, was a bifurcation between, and a biasing of, economic impacts over social impacts – particularly those manifest in the milieu of public engagement, where lines of causality and attribution were considered difficult to prove. Other issues emerged concerning the interpretation and application of the aforementioned qualifiers of impact, reach and significance, particularly where geography, or more specifically the devolved context of higher education, provoked concerns about impact appearing parochial.

Cardiff ultimately did very well in impact terms in the REF, and it is fair to claim that many of the predictions we observed faithfully reflect the prioritisations and pathways taken by REF panel members in their impact evaluations. What is even more abundantly clear, however, is how a prioritisation of the impact narrative – the impact sales-pitch – reflects and reinforces the market economy and logic of higher education, and how the dominance of competition and audit cultures therein is dramatically changing the nature of academics’ self-presentation and what some would call performativity.

The full article, ‘Selling “impact”: Peer-reviewer projections of what is needed and what counts in REF impact case studies. A retrospective analysis’, is published in the Journal of Education Policy.

Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.

About the Authors

Richard Watermeyer is a Senior Lecturer in Education at the University of Bath. He is a sociologist of education, knowledge, science and expertise with broad interests in higher education policy, practice and pedagogy.

Adam Hedgecoe is a Professor within the School of Social Sciences at Cardiff University. He is a sociologist of science and technology, whose work sits at the intersection of STS, medical sociology and bioethics.
