Evidence-based practice: why number-crunching tells only part of the story

March 20th, 2013
Rebecca Allen welcomes greater attention to, and government funding for, randomised controlled trials (RCTs) in education. But further conversations are still needed on how best to design and implement these trials, given diverse educational contexts and the difficulties of gathering accurate data. The social science model of research is not ‘what works?’ but rather ‘what works for whom and under what conditions?’

As a quantitative researcher in education, I am delighted that Ben Goldacre – whose report Building Evidence into Education was published last week – has lent his very public voice to the call for greater use of randomised controlled trials (RCTs) to inform educational policy-making and teaching practice. I admit that I am a direct beneficiary of this groundswell of support: I am part of an international team running a large RCT, funded by the Education Endowment Foundation, to study motivation and engagement in 16-year-old students, and we are at the design stage for a new RCT testing a programme to improve secondary school departmental practice. The research design in each of these studies will give us a high degree of confidence in the policy recommendations we are able to make.

Government funding for RCTs is very welcome, but with all this support, why is there a need for Goldacre to say anything at all about educational research? One hope is that teachers will hear and respond to his call for a culture shift, recognise that “we don’t know what’s best here” and embrace the idea of taking part in this research (and indeed suggest teaching programmes themselves).

It is very time-consuming and expensive to get schools to take part in RCTs (because most say no). Drop-out of schools during trials can be high, especially where a school has been randomised into an intervention it would rather not have, and it is difficult to get the data we need to measure the impact of the intervention on time.

However, RCTs cannot sit in a research vacuum. Ben Goldacre does recognise that different methods are useful for answering different questions, so a conversation needs to be started about where the balance of funding across different types of educational research best lies.

It is important that RCTs sit alongside a large and active body of qualitative and quantitative educational research. One reason is that those designing an RCT have to design a “treatment” – the policy or programme that is being tested to see whether it works. This design has to come from somewhere: without infinite time and money we cannot simply draw up a list of all possible interventions and then test them one by one. To produce our best guess at what works we may use surveys, interviews and observational visits that took place as part of a qualitative evaluation of a similar policy in the past. We may also use descriptions collected by ethnographers (researchers who are “people watchers”). And of course we draw on existing quantitative data, such as exam results.

All of this qualitative and quantitative research is expensive to carry out, but without it we would have a poorly designed treatment with little likelihood of any impact on teacher practice. Without the experience of other research, we might also carry out the programme we are testing poorly, for reasons we failed to anticipate. The social science model of research is not ‘what works?’ but rather ‘what works for whom and under what conditions?’

Education and medicine do indeed have some similarities, but the social context in which a child learns shapes outcomes far more than it does the response of a body to a new drug. RCTs may tell us something about what works for the schools involved in the experiment, but less about what might work in other social contexts with different types of teachers and children. Researchers call this the problem of external validity. Our current RCT will tell us something about appropriate pupil motivation and engagement interventions for 16-year-olds in relatively deprived schools, but little that is useful for understanding 10-year-old children, or indeed 16-year-olds in grammar schools or in Italian schools.

The challenge of external validity should not be underestimated in educational settings. RCTs cannot give us THE answer; they give us AN answer, and its validity declines as we try to implement the policy in different settings and over different time frames. This poses something of a challenge to the current model of recruiting schools to RCTs, where many have used “convenience” samples, such as a group of schools in an academy chain that are committed to carrying out educational research. This may provide valuable information to the chain about best practice for its own schools, but it cannot tell us how the same intervention would work across the whole country.

Social contexts change faster than evolution changes our bodies. Whilst I would guess that taking a paracetamol will still relieve a headache in 50 years’ time, I suspect that the best interventions to improve pupil motivation and engagement will look very different from those we are testing in an RCT today. This means that our knowledge base of “what works” in education will always decay, and we will constantly have to find new research money to watch how policies evolve as contexts change and to re-test old programmes in new social settings.

This was originally published on the IOE London Blog and is reposted with permission.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. 

About the author
Rebecca Allen is a Reader in Economics of Education at the Institute of Education, University of London. Her principal research interests lie within the economics of schooling and the effect of government policies on school behaviour and performance. She is currently working on a quantitative analysis of the teacher labour market, with a particular focus on the early career choices of teachers. She specialises in quantitative evaluation methods and tends to use very large-scale datasets, such as the National Pupil Database, the Longitudinal Survey of Young People in England and the School Workforce Census.


Posted In: Evidence-based policy | Government
