
Elena Louder

Carina Wyborn

Chris Cvitanovic

Angela Bednarek

January 15th, 2021

Four guiding principles for choosing frameworks and indicators to assess research impact


Estimated reading time: 7 minutes


Selecting a framework for assessing research impact can be difficult, especially for interdisciplinary studies and research in fields without established forms of impact assessment. In this post, Elena Louder, Carina Wyborn, Christopher Cvitanovic and Angela T. Bednarek outline four principles for researchers designing impact assessment criteria for their work, and suggest that a closer appreciation of how assessment frameworks depend on particular forms of knowledge production and dissemination is critical to making the right choice.


Evaluating the impacts of science on policy and practice is inherently challenging. Impacts can take a variety of forms, occur over protracted timeframes and often involve subtle and hard-to-track changes. As a result, these diverse impacts cannot be captured through traditional academic metrics such as publications and citations, nor by focusing solely on the end results of research projects, such as changes in policy or practice.

Despite these challenges, scientists and researchers across all disciplines are increasingly required to demonstrate the impact of their work, for example in funding applications or for career progression. As a result, academics and practitioners alike have stepped up efforts to develop approaches that guide the evaluation of impacts at the intersection of science, policy, and practice. These efforts have, in turn, led to numerous new evaluation frameworks which go beyond traditional academic metrics and instead attempt to capture various dimensions of impact, such as changes in attitudes, behaviours, and policy.

Despite recent advances, these frameworks have emerged from different domains and disciplines, are framed and described using complementary but often differing terminology, and approach evaluation from different founding assumptions. This is largely because different frameworks have sought to capture the non-linear and context-specific nature of impacts across diverse sectors and domains. In this rapidly developing field, however, it can be hard to make sense of the various frameworks (especially when applied to a common problem such as environmental change), and it is difficult for funders, researchers and practitioners alike to know what works in which contexts and why, limiting the effectiveness of initiatives aimed at supporting a more dynamic relationship between science and policy.

In our recent paper we sought to help overcome this challenge by undertaking a synthesis of the frameworks that are currently available for guiding the evaluation of impacts at the interface of environmental science and policy.  Specifically, we examined the epistemological foundations and assumptions of these frameworks and drew out their similarities and differences to help improve the evaluation of research impact.  In doing so we identified four key principles (referred to in the paper as ‘rules of thumb’) to help guide the selection of an evaluation framework for application within a specific context.

Four Guiding Principles

Based on our literature review and qualitative analysis, we recommend the following rules of thumb to guide the selection of frameworks and indicators for evaluating the impact of research at the interface of science, policy and practice. Whilst these have been derived from the literature relating to environmental science and policy, we posit that they can help guide the selection of frameworks across different contexts, disciplines and sectors.

Be clear about underlying assumptions of knowledge production and definitions of impact:

Clarifying from the start how research activities are intended to achieve impact is an important precursor to designing an evaluation. Furthermore, defining what you mean by impact is an important first step in selecting indicators that tell you whether you have achieved it. For example, a research organization should be clear up front whether changes in attitude, problem framing, or relationships count as impact. This involves outlining why certain activities are expected to contribute to impact, and what those impacts might look like. For example, if it is assumed that interactions between stakeholders lead to improved relationships, indicators can usefully be developed to evaluate the nature, frequency and quality of those interactions. This epistemological clarity helps define what counts as impact, and what counts as robust evidence of that impact.
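To make this idea concrete, the link between assumed pathways, definitions of impact, and candidate indicators can be sketched as a simple data structure. This is a hypothetical illustration only, not something taken from the paper; every activity, impact and indicator name below is invented.

```python
# Hypothetical sketch: making assumptions about impact pathways explicit
# before choosing indicators. All names below are invented examples.
theory_of_change = {
    "activity": "stakeholder workshops",
    "assumed_pathway": "repeated interaction builds trust and improves relationships",
    "counts_as_impact": ["improved relationships", "shifts in problem framing"],
    "candidate_indicators": [
        "frequency of interactions between researchers and stakeholders",
        "participant-reported quality of those interactions",
        "documented changes in how partners frame the problem",
    ],
}

# Spell out, for each claimed impact, the assumption it rests on.
for impact in theory_of_change["counts_as_impact"]:
    print(f"Impact claimed: {impact} (via: {theory_of_change['assumed_pathway']})")
```

Writing the assumptions down in this way forces the 'why we expect this activity to matter' step into the open, which is exactly the epistemological clarity the principle calls for.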

Attempt to measure intermediate and process-related impacts:

Whether this means expanding the definition of impact, evaluating quality, or assessing 'contribution to impact', select indicators that capture nuanced changes in problem framing, understanding, or mindsets. Our review shows that evaluations should at least partially attempt to capture the 'below the tip of the iceberg' knowledge co-production activities. This could be done by focusing at least part of an evaluation on measuring participants' perspectives (via interview or survey) regarding changes such as increased capacity, changes in expertise and knowledge, and shifts in how a problem is understood or framed. Attention to such intermediate impacts is important because they may serve as building blocks for end-of-process outcomes, and because they enable the evaluation of 'progress markers' along a theory of change to identify whether a project is tracking towards intended outcomes.

Balance emergent and expected outcomes:

While it is important to be clear on expectations and aspirations, evaluations should have at least some open-ended component that captures emergent (unexpected) outcomes, both positive and negative. This could be implemented by crafting at least part of the evaluation in an open-ended manner. For example, rather than rubrics with pre-determined criteria, ask instead: what changed? Who changed? How do you know? Such an open-ended approach allows unexpected outcomes to surface.

Balance indicators that capture nuance and those that simplify:

Evaluations which assign numerical scores to impact may be extremely useful for project managers and large research organizations. However, aggregated scores can sometimes overshadow conceptual changes in the way a problem is framed, or subtle changes resulting from knowledge co-production. Overemphasis on simple evaluations can also lead to 'gaming the indicators' and provide perverse incentives to tailor research to meet the indicators. While indicators that can be quantitatively scored (for a hypothetical example, assigning 1-10 scores on dimensions such as suitable context, legitimacy and relevance, or project outputs) may be easy to use, especially for comparing different research projects, such an approach might not register why or how changes occurred. The same is true of the number of indicators: fewer indicators may make evaluation simpler and more convenient, whereas more indicators may deliver more detailed information. This tension must be considered when designing an evaluation.
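The trade-off can be seen in a small sketch of the hypothetical 1-10 scoring approach mentioned above. The dimensions, scores and qualitative notes are all invented for illustration; they are not drawn from any of the reviewed frameworks.

```python
# Hypothetical illustration: aggregated scores are easy to compare,
# but say nothing about why or how change occurred.
project_scores = {
    "suitable context": 7,
    "legitimacy": 8,
    "relevance": 6,
    "project outputs": 9,
}

# Simple to aggregate and to compare across projects...
aggregate = sum(project_scores.values()) / len(project_scores)
print(f"Aggregate impact score: {aggregate:.1f}/10")

# ...but the single number drops the nuance, so an evaluation would still
# need open-ended evidence alongside it (invented examples below).
qualitative_evidence = [
    "Stakeholders now frame water scarcity as a governance problem, not a supply problem.",
    "A new working relationship formed between the agency and the research team.",
]
for note in qualitative_evidence:
    print("Unscored change:", note)
```

The point of the sketch is simply that the scored and the unscored records answer different questions, which is why the principle asks for a balance between the two rather than a choice of one.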

Through our analysis of the frameworks used to evaluate research impact at the intersection of environmental science and policy, we found that existing frameworks vary in their overall design, in scope and thoroughness, in the number of principles and indicators, and in their approach to timing and implementation. Importantly, our synthesis suggests that these differences between evaluation frameworks often reflect deeper variation in how knowledge is understood and what counts as impact. However, a common theme was that evaluation must capture the non-linear, less visible changes to things such as problem framing, mindsets, and relationships between researchers and stakeholders. The four rules of thumb presented above seek to provide a set of overarching principles to help researchers, funders and practitioners alike to choose the most appropriate framework (or combination of frameworks) for their specific purpose and context, irrespective of their field or discipline.

 


This post draws on the authors’ article, A synthesis of the frameworks available to guide evaluations of research impact at the interface of environmental science, policy and practice, published in Environmental Science and Policy. 

Note: This article gives the views of the authors, and not the position of the Impact of Social Sciences blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.

Image credit: published with permission of the authors and Visual Knowledge.



About the authors

Elena Louder

Elena Louder is a PhD student in the department of Geography, Development and Environment at the University of Arizona. Her research interests include political ecology, the politics of renewable energy development, knowledge co-production, and biodiversity conservation.

Carina Wyborn

Dr Carina Wyborn is a fellow at the Institute for Water Futures and the Fenner School of Environment and Society at the Australian National University. She is an interdisciplinary social scientist who works on the science-policy interface in complex sustainability challenges. Her research focuses on anticipatory governance and the capacities to make decisions in the context of uncertain and contested socio-environmental change. Find Carina on Twitter @rini_rants.

Chris Cvitanovic

Dr Chris Cvitanovic is a transdisciplinary marine scientist working to improve the relationship between science, policy and practice to enable evidence-informed decision-making for sustainable ocean futures. In doing so, Chris draws on almost ten years of experience working at the interface of science and policy for the Australian Government Department of Environment, and then as a Knowledge Broker in CSIRO's Climate Adaptation Flagship.

Angela Bednarek

Dr. Angela Bednarek directs the Evidence Project (at The Pew Charitable Trusts), a cross-cutting initiative aimed at increasing the use of evidence in policy and practice by marshalling funders, practitioners, scholars, and others to demonstrate effective practice and spur systemic changes in research and evidence use infrastructure.
