
Justyna Bandola-Gill

Kat Smith

November 9th, 2021

Impact Monoculture – Are all impact case studies the same old story?


The impact of quantitative research measures on academic behaviours has been widely discussed, but the impact of qualitative assessment regimes is more often thought of as benign. Drawing on an analysis of impact case studies submitted to REF 2014, Justyna Bandola-Gill and Katherine E. Smith explore how the narrative turn in research assessment has created four distinct narratives for impact case studies. Finding these narratives to diverge from more complex accounts of real-world impact, they assess the relative value of this ‘governance by narrative’.


There is a trend in research evaluation towards employing narratives as an alternative to quantified metrics. Whilst the critiques of quantitative measures are well-rehearsed, the governing effects of this qualitative turn, such as the UK’s REF impact case studies, have received comparatively little attention. This is precisely what we wanted to explore in our recent study.

The origin of REF impact case studies shows that the narrative format was introduced with the intention of avoiding some of the traps of quantification and in recognition of research impact as a complex phenomenon. We were therefore surprised to find that our analysis of REF2014 impact case studies suggests that this approach actually resulted in significant standardisation of the way impact was described. In contrast to the open and unrestricted format we usually associate with stories, narratives used for assessment take on a form of ‘restrictive storytelling’, in which the accounts produced for REF assessment differ markedly from academics’ accounts of ‘real world’ impact. This raises a question – how does this standardisation of impact narratives happen?

A plot, (a few) heroes and a good moral – the recipe for performing impact well for REF?

Narratives are inherently structured formats: they are characterised by a plot that organises a series of events into a coherent whole, by specified heroes, and by a clear ending, usually linked to a moral. We found all of these narrative elements were prominent in the case studies employed to construct impact as an auditable object for REF.

The plot of a case study plays an important role in defining what our participants believed good impact performance in REF looks like. A ‘good’ impact case study plot has to have a clear beginning (identifying the problem), middle (a set of activities or strategies) and end (outlining the level of change as compared to the beginning). The ‘moral’ of the story shapes the meaning of the final ‘impact’. It has to be concrete, quantified and – ideally – economic (read: monetised). As recalled by one of our interviewees, a good case study has a dollar sign at the end. Finally, the heroes of the story were important in individualising the account of impact – for a REF impact case study, the main heroes had to be the academics, while the research users needed to be portrayed as mostly passive recipients of knowledge, advice or innovation. This was in marked contrast with the descriptions of ‘real life’ impact that our participants gave, in which accounts were far more complex, with a wide range of actors often actively involved.

Four ‘narrative archetypes’

Reflecting this ‘restricted storytelling’, we identified four narrative archetypes of impact within the sample of REF2014 case studies we analysed: Problem-solving; Tool-building; Reframing; and Public engagement. Not all of these performed equally well in REF2014 within our sample.  In particular, ‘Public engagement’ scored lowest of the four – this was the narrative archetype which seemed most difficult to fit into the restricted storytelling format. For example, the ‘moral’ of impact stories that focused on public engagement was often expressed in terms of learning and enlightenment, which was difficult to translate into the kinds of specific, quantified, monetised impacts that participants believed assessors were looking for.

An Impact Monoculture?

How do we explain this emerging monoculture of impact? On the one hand, we can see that the impact case study templates are performative (much like quantitative tools) – they provide a restricted format in which institutions can describe impact activities and they guide authors in particular ways. At least to a degree, the tool constructs the idea of what a ‘good’ impact case study involves and how impact should be ‘performed’ for assessment. However, the REF case study form also contains several free text sections, so the tool still allows for some variety and diversity in impact stories.

Another key set of actors are higher education institutions, which provide the context in which REF impact case studies are drafted, reviewed, edited and selected. Our participants’ accounts suggest that, in response to the high economic rewards attached to high-performing REF impact case studies, universities have invested in processes that function to institutionalise restricted ideas about what ‘successful’ REF impact case studies look like, filtering out case studies that do not conform. In effect, certain ideas about what ‘counts’ as successful impact in REF have quickly gained traction in the UK and universities have institutionalised these perceptions, encouraging case study authors to employ ‘successful’ narrative tropes (using internal review processes to filter more complex accounts). These institutionalised processes have led to the filtering out of case studies that were perceived to be too complex – for example, ones that challenged existing political frameworks or that spanned different areas and institutions.

Governing by Narratives – Pros and Cons

We approached our analysis by revisiting the original rationale for taking a case study approach to the performance assessment of research impact. From this perspective, the extent to which a narrative case study format appears to have resulted in a monoculture of impact is surprising. The contrast between participant accounts of ‘real world impact’ and ‘REF impact’ suggests that whatever REF is assessing, it isn’t capturing realistic accounts of the complex ways in which research influences ideas and activities beyond the academy (reflecting earlier observations that the REF approach to impact sits uneasily with theoretical and empirical work in this area). However, even with the restrictions we identify, case studies remain more flexible than metrics. Moreover, if we consider these findings from the perspective of performance assessment, the restricted, simplified storytelling may well (as one of our paper’s reviewers observed) be useful – it is likely to make it easier for assessors to make the kinds of comparisons REF requires.

There are pros and cons in this kind of ‘governing by narratives’, and your perspective on our findings will likely be informed by your own starting point. Personally, we believe the limitations of this approach are great enough to warrant a re-think, something we have discussed at length elsewhere. Ultimately, qualitative and quantitative performance metrics are both entangled with degrees of performativity. As the purpose and effectiveness of the REF is again under review, perhaps it’s time to reconsider what alternative approaches to incentivising and assessing impact might involve.

 


This post draws on the authors’ article, Governing by narratives: REF impact case studies and restrictive storytelling in performance measurement, published in Studies in Higher Education. 

Note: This post gives the views of the authors, and not the position of the LSE Impact Blog, or of the London School of Economics.

Image Credit: No Longer Here via Pixabay. 



About the authors

Justyna Bandola-Gill

Justyna Bandola-Gill is an Assistant Professor in Sociology and Social Policy at the Department of Social Policy, Sociology and Criminology at the University of Birmingham. Her research explores the relationship between knowledge and governance, with a particular interest in data and quantification. You can follow her on Twitter @justynabandola

Kat Smith

Kat Smith is a Professor of Public Health Policy at the University of Strathclyde and an Honorary Professor at the University of Edinburgh. Her research centres on understanding, and working to improve, the use of evidence and expertise in public health policy. Kat has published 70 peer-reviewed journal articles and five books, including the co-authored book The Impact Agenda: Controversies, Consequences & Challenges. She is currently leading a work-stream analysing inclusive growth policy work in the UKPRP Consortium SIPHER, is contributing to a governance work-stream involving citizens’ juries on the UKPRP consortium SPECTRUM, and is collaborating with researchers in Canada and Australia to explore senior public health leadership during the COVID-19 pandemic. Kat is also currently Co-Editor-in-Chief of Evidence & Policy and Co-Edits a Palgrave Studies in Science, Knowledge and Policy book series.

Posted In: Impact | REF2014 | REF2021
