A common feature of evaluation mechanisms across many fields of activity is the influence they have on shaping perceptions and practices within them. In the UK, a key argument in favour of the inclusion of impact within the REF has been the way in which, for better or worse, it has sensitised UK academics to how research can impact society. In this post, Jorrit Smit and Laurens Hessels draw on a recent analysis of different impact evaluation tools to explore how they constitute and direct conceptions of research impact. Finding a common separation between evaluation focused on scientific and societal impact, they suggest bridging this divide may prove beneficial to producing research that has public value, rather than research that achieves particular metrics.
Impact assessments are widespread – from policy and business to scientific research. Increasingly, universities and other knowledge institutions feel the need (or pressure) to demonstrate their societal value. There is plenty of scientific and policy literature that proposes how to go about this. Researchers and knowledge transfer professionals publish evaluation ‘frameworks’, ‘methods’ and ‘approaches’, and discuss their use in specific areas, such as health, agriculture or the educational sciences. Taking stock of these, near-annual review publications sum up the state of the field.
What ‘impact’ is and how it works depends on many factors, ranging from disciplines and sectors to culture and geography. Specific methods have been developed in particular contexts: for example, a French agricultural public research institute (ASIRPA) or English research council funding programmes (Flows of Knowledge). This is desirable, as the criteria and methods of an evaluation should match the expectations and routines of particular scientific practices, as proposed, for example, in this recent blogpost on guiding principles for choosing between frameworks.
Besides context, there is another aspect of impact assessment that requires attention: the performativity of evaluation – in other words, the way in which specific evaluation methods assume, and thereby produce, different understandings of research and impact. In a recent paper (open access) we explore this very question, building on previous constructivist studies that have proposed that evaluation significantly shapes the contours of the object it evaluates. For ten impact assessment approaches, we assessed what evaluation object of ‘research with societal value’ they created. This is significant because it clarifies not only which method fits your situation best, but also what applying it does to that situation. Evaluations are not innocent: by rewarding particular activities or achievements, they influence the perceptions and behaviour of the people evaluated.
To guide thinking and choices in the theory and practice of research evaluation, we therefore created an analytical framework that allows comparison of different methods. It consists of four aspects:
- Types and roles of actors in the production, exchange and evaluation of research: what types of people, things and organisations does the method assume to participate in this process?
- Interaction mechanisms that are considered crucial to the production of societal value. Taking our cue from knowledge utilization and transfer studies, we distinguished between linear, cyclical and co-production models, with increasingly blurry boundaries between knowledge producers and users.
- Concepts of societal value that assessment methods imply. Whether they operationalize it as research results with potential use (product or output), adoption or implementation in practice (use or outcome) or benefits from this use, matters a great deal to what counts as impact.
- Lastly, we use these three aspects to highlight how a method shapes the relationships between the scientific value and the societal value of research. Whereas some methods treat these as distinct characteristics, others deal with them in an integrated way because the underlying processes strongly overlap.
Comparing ten methods along these lines led to some surprising insights. For example, it appeared that methods which look at scientific practice at higher levels of aggregation (e.g. entire institutes and funding programmes) take more dimensions of societal value into account than ones that stay closer to a particular research process. This points to a pertinent open question: can different concepts of societal value (product, use and/or benefit) apply at all scales, from individual researchers and research groups to institutions and funding programmes?
We also found that the epistemological assumptions about knowledge production – whether it takes place in isolation from or in interaction with societal parties, or if it is even co-creation between academics and non-academics – strongly shape the boundary between the scientific and societal value of research. We summarized this correlation in the figure below.
It is not surprising that many hold on to an analytic distinction between scientific and societal value. There is a strong scientometric tradition of mapping the scientific value of science based on citation data, and policy bodies have explicitly requested societal impact assessments of research units in addition to this. Nevertheless, this distinction is at odds with many empirical studies of scientific practice in the constructivist tradition of science and technology studies, which show a much less clear divide between science and society.
The heterogeneity of the actor-networks responsible for the production of new knowledge complicates the relationship between the scientific and societal value of scientific research. However, the practical accountability needs behind many evaluation methods colour views of the impact process: evaluation typically focuses first on research practices, and only very few methods put the process of societal change at the centre. An integrated concept of scientific and societal value could help encourage doing what we value most, instead of doing what counts.
The importance of the societal impact of publicly funded scientific research is likely to increase in the coming years. This calls for a shared discourse in policy, practice and science studies research that enables collective critical reflection. If researchers, research managers, policy makers and relevant stakeholders fail to discuss what societal value entails, and what effects it will have when we start evaluating it, then the potential for unintended consequences is magnified. Ideally, evaluation should not be separated from research and impact as an afterthought, but should serve as an important learning moment in knowledge (co)production. We hope that our work can help guide such discussions and create an environment that enables collective learning: not only about how impact works, but also about what evaluation objects and behaviour are produced by impact assessments.
This post draws on the authors’ article, The production of scientific and societal value in research evaluation: a review of societal impact assessment methods, published in Research Evaluation.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns about posting a comment below.
Image Credit: Adapted from Quino Al via Unsplash.