Wary of the pressure that researchers are under to demonstrate impact, Louise Shaxson injects a dose of realism into the debate around what that impact ought to look like. Researchers must provide clear policy messages, carefully define the relevance of their research, be realistic about what can be achieved, and be clear about whether they’re practising research communication or advocacy.
Ensuring that development research has impact is a hot topic at the moment: Enrique Mendizabal has just blogged at length, and Ros Eyben and Chris Roche are slugging it out with Chris Whitty [DFID’s Director of Research and Evidence] and Stefan Dercon [DFID’s Chief Economist] on Duncan Green’s [Oxfam GB] blog. I’d like to add my voice to the debate by focusing on a specific concern: that pressure to demonstrate concrete impacts on public policy is encouraging researchers to make grand claims about what we (or they) are likely to achieve. There are four issues:
1. Clear research evidence doesn’t necessarily lead to clear policy messages.
It should be taken for granted that we need to collect evidence according to the best traditions of whatever disciplines we work in. But before we wade into a debate over which techniques give us the best data, there are two more basic points about how we translate evidence into messages for policymakers.
First, an excellent analysis of poor data gives us poor evidence for policy: “Bad data isn’t just less good than good data, it is positively mischievous,” as Professor Sir Roger Jowell said a while back. And no matter how good each individual study and how strong its evidence, the quality of macro-level data is often very poor. However good your analysis of a single component, unless it is set in context the policy messages may be unclear, or even wrong. It’s a bit like looking at a close-up of a bee on a flower: we can clearly see the bee but can’t identify the flower species and have no idea what the rest of the garden looks like. When presenting a high-quality but narrow message from research, we need to appreciate the implications of this fuzzy context and the issues it raises for the choices facing policymakers.
Second, the choice of which evidence to use in policymaking is exactly that – a choice. Among the many different types of evidence available, our individual values and beliefs condition what we accept as evidence in the first place, not just which evidence is the most rigorous (as Mark Monaghan points out in this excellent study of drugs policy in the UK). Where the choice of what evidence to use is political, we will be better off focusing on the quality of the evidence produced and ensuring it makes it into the debate than on trying to demonstrate impact in terms of concrete outcomes on policy.
2. Be careful how you define ‘policy relevance’.
The phrase ‘policy relevance’ is bandied around a lot, but I think it has three distinct dimensions. The first is the type of research: theoretical, more policy-focused, or based around policy implementation. The second is whether the topic is currently pertinent to a live policy issue: as we all know, issues rise and fall in importance to rhythms that are difficult for researchers to predict. The third is whether the research is well set in its institutional context: whether there is a coherent group of people and processes able to take up the results and use them. These dimensions interact with each other and give rise to three broad types of research project, all of which are relevant to policy in different ways:
- Research which seeks to evaluate ongoing projects or programmes, using impact evaluations (for example) to design the next phase;
- Research which seeks to transfer previous results from a different sector or country, looking to contextualise findings from elsewhere;
- Research which is essentially proof-of-concept: testing ideas, establishing causal relationships or understanding different voices.
Jeff Knezovich has talked more about some of the perverse incentives around policy relevance created by the way research is sometimes funded. Policy relevance isn’t an either/or situation; it’s a multi-dimensional and constantly shifting challenge. This leads to my next point:
3. Be realistic about what can be achieved – think breadth of impact rather than depth.
Getting references to pieces of research into policy documents doesn’t have to be the holy grail. The UK’s ESRC recognises three different types of research impact: instrumental (e.g. concrete changes to policies and programmes), conceptual (changes in understanding) and capacity building. A recent evaluation of the Rural Economy and Land Use Programme adds two more: building enduring connectivity and fostering attitudinal change. At RAPID we also consider five areas in which policies can be affected: discourse, attitudes, processes, content and behaviour.
Research that influences policy development may be citable and therefore measurable, but it is important to recognise that research also has real impacts on how policies are designed and implemented, and on the relationships it can help to build along the way. Power dynamics and choice have a significant effect on how research evidence is sourced, interpreted and ultimately used – as two recent ODI Background Notes demonstrate (here’s the first, which sets out a framework for analysing the knowledge-policy interface, and the second, on how to apply the framework to in-country programming).
And as Justin Parkhurst has blogged, it is probably less important to ensure your research leads to policy change than it is to support local systems for reviewing and evaluating evidence, which may be better able to set that evidence in the context of what needs to be done.
4. Be clear whether you’re practising research communication or advocacy.
There can be an unwitting slide from good research communication to advocacy for particular policy positions. The diagram below is adapted from this report to 3ie, in which I looked at the policy relevance of impact evaluations; but I think this is a general issue, not one limited to a single type of research:
It’s up to individual researchers and organisations to choose how far to the right they want to operate, but it’s important to be clear about it beforehand or risk being accused of partiality.
None of this is to say we shouldn’t be showing that what we do makes an impact: research that spends public money must have something to show for it. Fundamentally, I think it is about the quality of the evidence and how we collectively choose to use it. But at a time when the evidence-informed policy debate is going mainstream, we do need to inject a dose of realism into debates around what that impact ought to look like.
Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.
Louise Shaxson is a Research Fellow in the Research and Policy in Development (RAPID) programme at ODI. She has a longstanding interest in evidence-informed policymaking, particularly in how policymakers inside line Ministries can be encouraged to make better use of evidence and analysis. She has worked internationally and inside Whitehall Departments looking at structures, functions and processes to improve the use of evidence by policymakers, implementing the sorts of organisational change programmes that are sometimes needed, and developing new evidence-related policy tools where necessary. Louise is currently working on a couple of projects relating to research uptake and impact: developing a monitoring framework that assesses behaviour change rather than outputs, and working with political actors to improve their use of evidence.
Agree with the call for more realism, e.g. “we will be better off focusing on the quality of the evidence produced and just ensuring it makes it into the debate, than on trying to demonstrate impact in terms of any concrete outcomes on policy”.
This is just what Evidence Aid (www.evidenceaid) is looking to do in the humanitarian and disaster management setting. The quality of the evidence is paramount, as is getting the evidence to the right people so that they can make informed decisions about interventions.
Excellent synthesis of the discussion on the subject, Louise. I like your ‘sliding’ diagram in particular. I wrote something about the bottom half some time back: Never mind the gap: on how there is no gap between research and policy and on a new theory (http://wp.me/pYCOD-cd). And this relates to how funding to think tanks and to researchers in general is provided: often via projects with pre-defined influencing objectives.
As I have said many times before, the research method and evidence do not tell us what to do. Values do. I think we ought to be able to recognise both even if we cannot leave our values outside while we are conducting research.
But another point worth mentioning, especially for the audience of this blog, is that the biggest contribution research can make is likely to come via indirect and long-term paths, particularly the path of education.
Doing research is, in itself, an influencing activity. Anyone who has had the chance to work as a research assistant will know that the experience leaves an important mark on their personal and professional lives.
Doing research promotes critical thinking, a quality that is conducive to using (and demanding) research. Policymakers, businesspeople, journalists and others who have had that kind of education are more likely to be able to take advantage of different types and sources of information, and less likely to be fooled by low-quality research and ‘so-called experts’. This is why funding universities to be good at teaching as well as research (and influence) is, I think, the most important thing that international development funders can do (http://wp.me/pYCOD-Aq).
Are we in danger of overstating ourselves? Yes. But I am afraid it’s a bit late to ask that question: at least in the aid industry we have been overselling ourselves for years. The consultancies, NGOs and think tanks that bid for projects with policy-influencing targets are all, at the very least, guilty of being over-optimistic; but the cynic in me is more inclined to think that they are making promises they know they cannot keep.
There is a sense of collusion in the air. The funder has to aim for big wins (value for money) and the contractors need to promise big to win. Both have incentives to pretend all is well and to play down the kinds of concerns you have raised in this blog. And both assume that by the time anyone gets around to looking for evidence of success (or failure), people will have moved on (which is likely, as most postings at donors like DFID last only a few years) and the pressure to demonstrate impact will no longer be there.
Re “There is a sense of collusion in the air” – see the “Conspiracy of Optimism” paper by Weston et al. on a similar problem in the defence sector, at http://www.rusi.org/downloads/assets/Acquisition_Focus_0207.pdf: “A ‘conspiracy of optimism’ exists between MoD and industry, each having a propensity, in many cases knowingly, to strike agreements that are so optimistic as to be unsustainable in terms of cost, timescale or performance.”
It exists between DFID, the other foundations and donors, the NGOs, the think tanks, the consultancies, and the media.
The few voices that raise concerns or are sceptical (you’d expect the NGOs and the think tanks to be the sceptical ones) are treated almost as if they were climate change deniers.
At the heart of most of this are the sources and ways of funding. When everyone depends on the ‘aid budget’, everyone is keen to show value for money, impacts, etc.
The article by Louise Shaxson makes interesting reading on a very complex topic. It is also true that we are pushing ourselves hard, but this is necessary. I like the diagram, and in the case of Uganda it is not just two-way traffic but three-way traffic: one must communicate and advocate not only to policymakers but also to ordinary citizens. This is because change these days comes not from above but from the bottom (as in the Arab Spring), which means that those at the bottom must have evidence to work with on the issues they want to change. So, for me, research evidence must aim at invigorating the bottom, which will in the long run influence the top.