Wary of the pressure that researchers are under to demonstrate impact, Louise Shaxson injects a dose of realism into the debate around what that impact ought to look like. Researchers must provide clear policy messages, carefully define the relevance of their research, be realistic about what can be achieved, and be clear about whether they’re practising research communication or advocacy.
Ensuring that development research has impact is a hot topic at the moment: Enrique Mendizabal has just blogged at length; and Ros Eyben and Chris Roche are slugging it out with Chris Whitty [DFID's Director of Research and Evidence] and Stefan Dercon [DFID's Chief Economist] on Duncan Green’s [Oxfam GB] blog. I’d like to add my voice to the debate by focusing on a specific concern I have: that pressure to demonstrate concrete impacts on public policy is encouraging researchers to make grand claims about what we/they are likely to achieve. There are four issues:
1. Clear research evidence doesn’t necessarily lead to clear policy messages.
It should be taken for granted that we need to collect evidence according to the best traditions of whatever disciplines we work in. But before we wade into a debate over which techniques give us the best data, there are two more basic points about how we translate evidence into messages for policymakers.
First, an excellent analysis of poor data gives us poor evidence for policy: “Bad data isn’t just less good than good data, it is positively mischievous,” as Professor Sir Roger Jowell once put it. And no matter how rigorous each individual study, the quality of macro-level data is often very poor. However good your analysis of a single component, unless it is set in context the policy messages may be unclear, or even wrong. It’s a bit like looking at a close-up of a bee on a flower: we can clearly see the bee but can’t identify the flower species and have no idea what the rest of the garden looks like. When presenting a high-quality but narrow message from research, we need to appreciate the implications of this fuzzy context and the issues it raises for the choices facing policymakers.
Second, the choice of which evidence to use in policymaking is exactly that – a choice. Among the many different types of evidence on offer, our individual values and beliefs condition what we accept as evidence in the first place, not just which evidence is the most rigorous (as pointed out by Mark Monaghan in this excellent study of drugs policy in the UK). Where the choice of what evidence to use is political, we will be better off focusing on the quality of the evidence produced and ensuring it makes it into the debate than on trying to demonstrate impact in terms of concrete outcomes on policy.
2. Be careful how you define ‘policy relevance’.
The phrase ‘policy relevance’ is bandied around a lot, but I think it has three distinct dimensions. The first is the type of research: theoretical, more policy-focused, or based around policy implementation. The second is whether the topic is a currently pertinent policy issue: as we all know, issues rise and fall in importance to rhythms that are difficult for researchers to determine. The third is whether it is well set in the institutional context: whether there is a coherent group of people and processes able to take up the results and use them. These dimensions interact with each other and give rise to three broad types of research project, all of which are relevant to policy in different ways:
- Research which seeks to evaluate ongoing projects or programmes, using impact evaluations (for example) to design the next phase.
- Research which seeks to transfer previous results from a different sector or country, looking to contextualise findings from elsewhere.
- Research which is essentially proof-of-concept: testing ideas, establishing causal relationships or understanding different voices.
Jeff Knezovich has talked more about some of the perverse incentives around policy relevance created by the way research is sometimes funded. Policy relevance isn’t an either/or situation; it’s a multi-dimensional and constantly shifting challenge. This leads to my next point:
3. Be realistic about what can be achieved – think breadth of impact rather than depth.
Getting references to pieces of research in policy documents doesn’t have to be the holy grail. The UK’s ESRC recognises three different types of research impact: instrumental (e.g. concrete changes to policies and programmes), conceptual (changes in understanding) and capacity building. A recent evaluation of the Rural Economy and Land Use Programme adds two more: building enduring connectivity and fostering attitudinal change. In RAPID we also consider five areas in which policies can be affected: discourse, attitudes, processes, content and behaviour.
Research that influences policy development may be citable and therefore measurable, but it is important to recognise that it also has real impacts on how policies are designed and implemented, and on the relationships research can help to build along the way. Power dynamics and choice have a significant effect on how research evidence is sourced, interpreted and ultimately used – as two recent ODI Background Notes demonstrate (here’s the first, which sets out a framework for analysing the knowledge-policy interface, and the second, on how to apply the framework to in-country programming).
And as Justin Parkhurst has blogged, it is probably less important to show that your research led to policy change than it is to support local systems for reviewing and evaluating evidence, which may be better placed to set that evidence in the context of what needs to be done.
4. Be clear whether you’re practising research communication or advocacy.
There can be an unwitting slide from good research communication into advocacy for particular policy positions. The diagram below is adapted from this report to 3ie, in which I looked at the policy relevance of impact evaluations; but I think this is a general issue, not one limited to a single type of research:
It’s up to individual researchers and organisations to choose how far to the right they want to operate, but it’s important to be clear about it beforehand or risk being accused of partiality.
None of this is to say we shouldn’t be showing that what we do makes an impact: research that spends public money must have something to show for it. Fundamentally, I think it is about the quality of the evidence and how we collectively choose to use it. But at a time when the evidence-informed policy debate is going mainstream, we do need to inject a dose of realism into debates around what that impact ought to look like.
Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.
Louise Shaxson is a Research Fellow in the Research and Policy in Development (RAPID) programme at ODI. She has a longstanding interest in evidence-informed policymaking, particularly in how policymakers inside line ministries can be encouraged to make better use of evidence and analysis. She has worked internationally and inside Whitehall departments looking at structures, functions and processes to improve the use of evidence by policymakers, implementing the sorts of organisational change programmes that are sometimes needed and developing new evidence-related policy tools where necessary. Louise is currently working on a couple of projects relating to research uptake and impact: developing a monitoring framework that assesses behaviour change rather than outputs, and working with political actors to improve their use of evidence.