
November 6th, 2012

I disagree that I disagree! There is room for more than one method of evidence in policymaking

Estimated reading time: 5 minutes

Academics should not get ‘bogged down’ in their perceptions of which research methods government values. Kirsty Newman explains that, when it comes to decision making in government, there is no universal preference for one form of research evidence over another.

I have been meaning for some time to write a response to Andries du Toit’s paper Making Sense of Evidence: Notes on the Discursive Politics of Research and Pro-Poor Policy Making and to this blog post written by Enrique Mendizabal in support of the paper’s conclusions.

My initial reaction to both the paper and the blog is to agree with many of the underlying concerns but to protest that both authors weaken their case by presenting the other side of the argument as far more extreme than it actually is. They are reacting against a caricatured ‘randomista’ – an unthinking automaton who slavishly demands experimental evidence (and only experimental evidence) before making any decision. To be honest, I would also react against such a person – but I am not convinced that such people actually exist!

I work for the UK Department for International Development (DFID) in the Evidence into Action team. This team’s mission is to promote and support the use of quality research evidence in decision-making, both within DFID and more widely. Readers of du Toit’s and Mendizabal’s articles might imagine that this team is full of ‘randomistas’ – but this couldn’t be further from the truth. The team consists of 20 people with a wide variety of experience and expertise: social scientists, basic scientists, international development experts, evaluators and economists. We have people with experience in other government departments, academia, grass-roots charities, activist groups and think tanks. Almost every day within the team, I hear intelligent and nuanced discussions about research evidence and how it contributes to policy and practice.

There is general consensus that for some questions (e.g. whether intervention X leads to outcome Y in environment Z) experimental or quasi-experimental approaches offer the strongest evidence, and that for other questions (e.g. whether intervention X is culturally or politically acceptable, why X leads to Y, whether another environment is similar to Z, etc.) alternative approaches are more useful. However, beyond those general statements, I would not say that we have a universal preference for one form of evidence over another. It depends entirely on what question needs to be answered.

For example, this afternoon within DFID there was a seminar about male circumcision in Africa and its role in reducing HIV transmission. The reading for the seminar included a systematic review of the efficacy of circumcision as a preventative measure, but also papers examining the social and ethical questions the practice raises.

Sometimes, we might push for more experimental (or quasi-experimental) approaches. For example, I recently looked over a policy brief which discussed the outcomes of a particular type of governance intervention. I felt that the brief relied too heavily on case-study evidence to support a certain hypothesis. My suggestion was that the authors needed either to summarise more rigorous evidence or, if such evidence did not exist, to discuss why that was the case and to explain the limitations of the evidence presented.

However, on other occasions the team might suggest that more qualitative research is needed. For example, a colleague recently attended a meeting discussing the results of a randomised controlled trial of food handouts as a means of increasing vaccine uptake. He felt that the discussion focused too much on a single question (does the intervention ‘work’ in this setting?) and advised that other questions also needed to be considered (e.g. what does the intervention mean for people’s dignity? Is it ethical? Is it socially acceptable?), and that answering these questions would require different forms of evidence.

I am not in any way suggesting that DFID gets the balance right in every case; of course we make mistakes. But it is not true to suggest that the team in which I work, or DFID as a whole, is dominated by people who value only one form of research evidence as the basis for decisions.

Unfortunately I am not able to attend the PLAAS conference in November at which these issues will be discussed – but I would like to make a plea to those who are going. Please try not to get too bogged down discussing your perceptions of policy makers’ prejudices. I suspect that the discussions will be more useful if they focus on concrete examples of policy decisions which contributors feel were overly influenced by one type of research evidence. By analysing these examples, it may be possible to produce some genuinely useful case studies and some recommendations on how policy makers can weigh a range of factors (including citizens’ voices, local cultures and power dynamics, as well as research evidence) as they make their decisions.

This article was originally published on Kirsty’s own blog, KirstyEvidence, and is republished with permission.

Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.

About the author:
Kirsty Newman is interested in research and how it can contribute to international development. She works for DFID but these opinions are her personal views.

 


Posted In: Evidence-based policy | Government | Impact
