
Blog Admin

October 19th, 2012

Evidence alone is not enough: policymakers must be able to access relevant evidence if their policy is to work



It is not enough to look for evidence of a previous policy success. Jeremy Hardie and Nancy Cartwright argue that deciding exactly what evidence is needed, and evidence of what, is the key question to ask when making real evidence-based social policy interventions.

A great deal of impressive work has been done in evidence-based policymaking to help show what facts are true – most conspicuously nowadays the establishment of Randomised Controlled Trials (RCTs) as a way of proving the truth of the fact that this intervention worked there. But there is not much research available to help with deciding what facts are relevant to a particular case.

Our book is designed to provide a guide to thinking about what evidence is relevant to deciding what social policy intervention is most likely to work where you are. It is therefore practical: about what to do in these circumstances, about this problem, for these people.

Although practical, it is backed by theory. That theory starts by distinguishing what is true from what is relevant. If God provided an encyclopaedia of all the propositions about the world that are true, whatever that might mean, it would not help until you had decided which of these facts is relevant to your decision. The fact that it worked there is not straightforwardly relevant to the conclusion that it will work here. And that is not just to do with a fact having been established by an RCT.

Any method – whether RCTs or econometrics or, more generally, drawing on our shared stock of facts which we accept as true from science, or experience, or from just using our eyes – which establishes a fact does no more than that. It does not answer the next question: why do we believe that fact is relevant evidence for believing that it will work here?

This insight – that relevance matters as much as truth – is not merely theoretical. There are too many examples where doing here what worked there failed because of a failure to see what was relevant to success beyond the fact that the intervention had worked somewhere else.

Small class sizes worked to raise reading scores in Tennessee, but the same policy failed in California. As so often, with hindsight the explanation is simple. If, as they did in California, you introduce smaller classes on a large scale very quickly, you of course need many more teachers, and you end up with a lower average quality of teacher, because you have to recruit less experienced or less well-trained people. You also need many more classrooms, so you take space for reading classes from other activities which themselves contribute to the flourishing of pupils and hence to their reading ability. These and other changes mean that the potential of smaller classes never gets realised.

Even if you are confident that smaller class sizes played a determining causal role in Tennessee, you have to think about whether they can do so in California. And that means thinking about what helping factors are needed and whether they will be present. The true fact that it worked in Tennessee does not of itself tell you how to think about that. It does not tell you what helping factors have to be present to allow smaller classes to play their causal role, let alone how to find out whether those factors will be present.

Most of our text looks at how you think about causal roles and helping factors. The California example shows that you must identify the supply of adequate teachers and space. And you do that by reflecting on how the intervention worked in Tennessee and why. We call that process horizontal search.

Another example is the failure of the Intensive Nutrition Programme in Bangladesh, which was based on the success of such a programme in Tamil Nadu. Mothers of poor families with badly nourished children were given food or money for food, and education about how to use them. Nutritional standards rose. The same intervention was introduced in Bangladesh. Nothing happened. Why? In Tamil Nadu, the mother runs the family and decides who gets the food or money. In Bangladesh, it is the mother-in-law. So if you give the food or money to the mother, the mother-in-law gets hold of it, and because she has not gone through the other half of the programme, the education, she does not necessarily get food for the young children. The food or the money may go to her sons.

We call thinking about that vertical search. One way of characterising what was done in Tamil Nadu is to say that we identified who runs the family and gave the food and money and education to her. If we had done the same in Bangladesh, we would have given the resources to the mother-in-law. It is a vertical search because you have to climb up to a more generalised description if you are going to make the right disposition. If, in giving an account of the Tamil Nadu success, you had talked about giving the resources to the person in charge, you would then have identified the mother-in-law as the person in charge in Bangladesh.

We suggest a number of other ways you might embark on teasing out what you need to know – what evidence you need – to have some confidence that your intervention will work. Another is the premortem. Just carry out the thought experiment of asking yourself, ‘If this goes wrong, how will it have gone wrong?’ That may get you wondering about teacher quality. This and our other tips do not have much theoretical status: either they work for you or they don’t. The premortem does no more than protect against, not the familiar danger of only accepting confirmatory data, but the temptation not to recognise what evidence you need – will you have enough teachers? – in case it is blindingly and fatally obvious that you will not get the answer you want.

Here is another example. Suppose we think that CCTV in car parks reduces car theft. You have to think about how it does that if you are to use it properly. If it is by deterrence, you make the cameras visible. If it is by giving the police a chance to catch people red-handed, you make them invisible – and you have to have enough police nearby to make arrest likely. And you may not need to worry about whether CCTV evidence is valid in court if the causal role rests on deterrence. Similarly with speed cameras.

If this all sounds down to earth, so it is. Exactly what evidence you need, and of what, is the key question that has to be asked for making real evidence-based social policy interventions. And the answer, given the complexity of multidimensional social problems, will very likely be quite specific to the particular context. It is unlikely that you can limit yourself to establishing that it worked, on average, there.

Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.

About the author:
Jeremy Hardie is co-author, with Nancy Cartwright, of Evidence-Based Policy: A Practical Guide to Doing It Better.



Posted In: Evidence-based policy | Government | Impact | Knowledge transfer
