Five lessons to deal with uncertainty in research impact evaluation

Cordelia Lonsdale

Gloria Seruwagi

October 3rd, 2023


Reflecting on carrying out impact evaluation as part of the Research for Health in Humanitarian Crises programme, Cordelia Lonsdale and Gloria Seruwagi discuss common challenges faced in evaluating impact, particularly when outcomes don’t conform to expectations.


The Research for Health in Humanitarian Crises (R2HC) programme has an explicit impact mission: the research funded through the programme should improve health outcomes for people affected by humanitarian crises.

R2HC uses case studies not only to evaluate the outcomes and impacts of funded research, but also to understand the processes, activities and experiences that shape research impact. Our focus is on whether research is used by, and influences, humanitarian health policymakers and practitioners (more background here).

We are two evaluators who worked on the development of case studies between 2020 and 2022. Here are five things we’ve learned about research impact:

We need to be clear what we mean by research impact

We had R2HC’s theory of change and results framework to help identify what impact we were looking for. We also had an idea of what forms impact might take; we use a slightly amended version of the four types defined in the Research Excellence Framework (REF) Impact Toolkit:

  • Conceptual (changes to stakeholder knowledge, attitudes and understanding)
  • Capacity (changes in the ability of researchers to conduct similar work, or of stakeholders to use and apply research)
  • Instrumental (changes to policy or practice)
  • Enduring Connectivity (changes to the existence or strength of networks and relationships of stakeholders who can use or apply research)

But even when definitions and frameworks are as clear as we can get them, interviews sometimes generated information that felt hard to label. Is it impact if, for example, a junior data collector is inspired to start a PhD?

When this kind of information came up, we would discuss not only what happened, but where it happened and who was affected. Through this we could figure out together whether the outcome was linked to the R2HC theory of change or sat outside it, and whether it was therefore ‘meaningful’ impact for the case study. Some pieces of information ended up in a box marked ‘other’. These are now informing our thinking as we overhaul our approach, so that we can better understand the role of ‘unintended outcomes’ in overall programme impact.

Interviews can’t be rigid

Both researchers and their humanitarian collaborators may hold a lot of information about a project that never made it into a final report, especially if the project finished some time ago. Evaluators must be alert to new pathways in the conversation where impact can hide, and avoid going in with too many preconceptions.

Asking open questions such as ‘tell me more about how you organised this meeting with policymakers’ or ‘how was that publication received by x?’ can uncover impact that was not previously documented and help us better understand its nature.

It’s sometimes much easier to understand the ‘overall spirit’ of a project, and build rapport, by asking someone to explain it in their own words, instead of assuming you know how the research panned out. This is particularly important when documenting research uptake strategies (which are not always recorded in detail in R2HC’s files) or understanding research outside our own technical areas of expertise.

Image credit: Freshspectrum (CC BY-NC 4.0)

Research funders must find a way to fund and support impact data collection after research grants have closed

Data collection (interviews with partners, stakeholders, and documentary source research) relied heavily on goodwill and the quality of relationships that researchers had with their humanitarian partners. But it is a burden on most stakeholders – they must give up their time for free to contribute to research impact evaluation.

This is an issue – not least because if we only do impact case studies on the ‘cooperative’ grantees and partners, we are likely to end up with a skewed picture of programme impact. Even with the best will in the world, it is a big ask for someone working in a crisis setting, or with a huge academic workload, to take time out of their busy days to talk about a research project that has closed. We were grateful so many of them shared their insights with us – sometimes via email, if they couldn’t make time for a call.

People have different levels of comfort with talking about failure

When Cordelia did her case studies, we didn’t really have much space for talking about failure. Increasingly, R2HC felt we needed somewhere to put this, as some researchers were openly sharing such reflections with us, so we adapted the basic template to include it.

Researchers have different ways, and are at different stages, of defining or coming to terms with “failure” – for example, some differentiated between failure they attributed to their own efforts and failure due to factors outside their control.

Even when researchers were circumspect about failure, we felt there were obvious learning opportunities for R2HC and the wider community in asking these questions. Failed research happens, and failed uptake strategies happen too. Both are opportunities for learning, and we should normalise this. But the first step is finding a better way of capturing and understanding it.

Gloria’s evaluations, which included questions about failure, generated some useful insights – often shared by researchers themselves. For example, one co-lead researcher reflected that he wished he had engaged central government policymakers much earlier in his research process, so that they were more sensitised to the end results.

Another research team ended up with an underpowered RCT (due to various barriers common in humanitarian settings) but drew out learning that has built expertise and capacity they are now applying to new research. Such impacts would have gone unrecognised had we not asked about failure.

Researchers whose work has impact tend to know this already

Sometimes, a researcher would know very little about what happened after they published their work. We noticed a correlation: teams that had delivered significant research impact also had a good sense of the pathways their research had taken into policy and practice. They had maintained relationships and engaged policymakers and practitioners regularly, and could give a helpful steer on who had used their research and how (and connect us with them so we could check). From this experience, our conclusion is: if you ‘have no idea what’s happened’ after you published… the answer might well be ‘not much’. Remaining engaged with stakeholders in policy and practice, and building relationships (not only popping up on their radar when you have a new article out), is a key ‘research impact’ competency.

 


This post first appeared as ‘5 Things we Learned from Evaluating the Impact of Research’ on the From Poverty to Power blog.


Image Credit: Freshspectrum, 111 Evaluation Cartoons for Presentations and Blog Posts (CC BY-NC 4.0).



About the author

Cordelia Lonsdale

Cordelia is the Senior Research Impact Advisor for the R2HC programme and has responsibility for supporting and facilitating R2HC grantees in achieving impact through their research.

Gloria Seruwagi

Dr. Gloria Seruwagi is a public health specialist with background training in social and behavioural sciences. She has several years of work experience that straddles policy development and programming, research and university teaching.

