
Chris Roche

Alana Tomlin

Ujjwal Krishna

Will Pryor

May 25th, 2021

Proving and Improving – Evaluating policy engagement is an opportunity for researchers and institutions to learn as well as demonstrate impact.



The challenges of evaluating the contribution of research to policy making are well documented. In this post Chris Roche, Alana Tomlin, Ujjwal Krishna and Will Pryor outline seven principles for effective monitoring, evaluation and learning for policy engagement. They were developed through consultation with researchers, support staff and others in a range of science and humanities disciplines at the University of Oxford and beyond. The authors suggest that if research institutions want to better demonstrate policy impact, they should approach monitoring and evaluation by building on the intrinsic motivation of researchers to learn and improve, rather than as an exercise in simply recording and proving impact.


The importance of policy engagement for academic institutions should not be underestimated. In REF 2014, among the 6,679 impact case studies submitted across all subjects, the most common category of impact was public policy and parliamentary debate. In just under half of all case studies, the word ‘policy’ occurred in the description of impact.

The significance of policy impact is reflected in university investments in supporting policy engagement, as well as associated monitoring and evaluation, as a means to ‘prove’ impact. But these processes are often at odds with the other key benefit of monitoring and evaluation, which is to provide opportunities to better understand and ‘improve’ the ways research can contribute to policy.


Balancing this tension between ‘prove and improve’ is not easy. As one researcher involved in this project remarked, influencing policymaking is like ‘navigating a kind of very muddy, slow tidal river’. It requires the development of intangible relationships and trust with policy actors. Moreover, the meandering, non-linear, and inherently political process of policymaking precludes drawing clear lines of influence between discrete research projects and policy.

This journey along the tidal river of policy engagement is all the more difficult without accessible methodologies and guidelines for monitoring, evaluating and learning in this area. Whilst some research communities have developed guidelines (e.g., the international development sector), such guidelines often reflect specific disciplinary interests, or the reporting requirements of UK research councils.

Aware of these challenges, our recently published guidance notes, commissioned by the University of Oxford’s Policy Engagement Team, outline seven principles for monitoring, evaluating and learning from policy engagement. These principles emphasise the benefits of ongoing reflection, and are accompanied by a resource library of accessible tools and practical recommendations for universities and funders to encourage and support these activities. The guidelines were informed by a literature review and interviews with academic researchers across disciplines, UKRI staff and research support staff.

1. Start early

This is for anyone who has ever had to scramble to collect information for a report to funders, or wished they’d thought a little more about how and when to engage with policymakers at the beginning of their research. Many researchers leave monitoring and learning until the end of a project, but starting early can increase the chances of success and save time later.

2. Acknowledge complexity, context and luck

The policy process is disorderly and unpredictable. The ability of researchers to have an impact depends not only on experience and expertise, but also on the political environment and sheer chance. This underscores the importance of learning as you go, and makes monitoring and evaluation rather like understanding a hand of poker in which the outcomes are determined by a mix of skill, strategy and luck.

Image Credit: Copyright Tom Gauld.

3. Think about relationships, power and politics

Policies are forged in a crucible of vested interests. Research evidence is just one of the many contributions competing for the oxygen of attention. Researchers are among many other actors on the policymaking stage, who are often connected in complex ways that are not always visible. Understanding the key players, what makes them tick and how they relate, is important for effective engagement, as well as for assessing change over time.

4. Track contacts, networks and coalitions

Policy engagement is all about relationships – investing in them, maintaining them and, from time to time, ending them. This often requires building broader interest groups or coalitions. Keeping track of how these relationships are evolving is key to learning and adapting as you go, as well as being able to tell the story of how outcomes occur.

5. Track outcomes and impact

Changes in ideas, behaviours, policies or practices are usually the result of a combination of factors. The contribution of evidence and your policy engagement may be a small part of a larger jigsaw. Your engagement might be focused directly on policymakers or be more oblique or indirect. Harvesting outcomes and gathering intelligence about how changes come about is a critical step in the learning journey.

6. Make space for learning and reflection

Sense-making is a critical part of monitoring and evaluation, yet is often under-resourced or overlooked. Creating the time and space for learning about your engagement is important. Set up a regular time and space for reflection, and protect it.

7. Ensure evaluation is fit for purpose

Is an evaluation really necessary? What questions are most important? Who cares? Involving people who care about an evaluation is an important predictor of findings being used. Being clear about purpose and who is involved is the key to designing an appropriate evaluation which is robust but also useful.

Guidelines such as ours are only as enabling as the institutional contexts in which academics and research support staff operate. Many of our interviewees explained that time constraints, lack of discipline-specific guidance, and sometimes alienating monitoring and impact jargon are common reasons why researchers are put off engaging with public policy and accompanying evaluation. Appropriate incentives and support are required to facilitate policy engagement in the first instance, as well as to encourage associated monitoring, evaluation, and learning.

If monitoring and evaluation systems are designed in ways that are understood by, and useful for, researchers’ learning, they are much more likely to be effective. If they are seen to be simply about top-down control, or reporting and accountability, this can undermine intrinsic motivation and trust. Research support staff play an important role here in buffering and pushing back on demands, as well as assisting and supporting researchers to see the bigger picture.


How monitoring, evaluation and learning for policy engagement is best done is contested. For universities and funders, the crux of the challenge lies in nurturing the learning aspect of monitoring and evaluation as a way for researchers to improve, whilst at the same time ensuring that evidence gathered in the process is readily available to demonstrate success and help others to learn.

How individual universities behave is also a function of how the sector as a whole understands and rewards policy engagement. Given this, and the fact that there is no ‘right’ way of approaching the monitoring and evaluation challenge for policy engagement, relevant networks such as the Universities Policy Engagement Network might want to consider how best they could promote appropriate forms of strategic experimentation across the sector that might facilitate cross-organisational learning and adaptation. Improvement collaboratives supported by the National Health Service provide interesting examples of what this might look like. Being able to ‘prove and improve’ is likely to become increasingly important for individuals and institutions in an environment where research integrity, public policy, and their interaction are under growing scrutiny.

 


The authors are grateful to all those who contributed to this learning exercise and to Research England’s Higher Education Innovation Fund for its support.

Note: This article gives the views of the authors, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.



 


About the author

Chris Roche

Chris Roche is Professor of Development Practice and Director of the Institute for Human Security and Social Change at La Trobe University, Melbourne, Australia. He is also Deputy Director (Impact) of the Developmental Leadership Program, an international research collaboration between the Australian Department of Foreign Affairs and Trade, the University of Birmingham, and La Trobe University.

Alana Tomlin

Alana Tomlin is the Deputy Director (Operations) of the Developmental Leadership Program, an international development research collaboration between the Australian Department of Foreign Affairs and Trade, the University of Birmingham, and La Trobe University. She previously worked as a research facilitator for the social and medical sciences, and is a Director of the West Midlands Anti Slavery Network.

Ujjwal Krishna

Ujjwal Krishna is a Specialist Doctoral Research Scholar with the Developmental Leadership Program, based at the Institute for Human Security and Social Change, La Trobe University. He is interested in the political economy of development research and policy, and works with the Australian Government’s Department of Foreign Affairs and Trade on its aid investments in leadership and coalitions.

Will Pryor

Will Pryor is Head of Policy Engagement at the University of Oxford, and leads the development of the Oxford Policy Engagement Network, which connects researchers across the University with each other, and with opportunities and resources to engage more effectively with the policymaking community.
