Experimental research methods have become mainstream across many disciplines in the social and behavioural sciences. Highlighting the application of new experimental methods that employ innovations in digital technology, machine learning and theory, Jonathan Breckon and Alex Sutherland argue that social scientists should be encouraged to add a wider variety of experimental techniques to their methodological repertoire.
The so-called “Randomistas” were given a boost in 2019 after three pioneers of randomised experiments in international development – Esther Duflo, Abhijit Banerjee and Michael Kremer – won the Nobel Memorial Prize in Economics. Duflo and Banerjee co-founded the Abdul Latif Jameel Poverty Action Lab (J-PAL), an organisation that has run over 1,000 randomised controlled trials to understand how to reduce poverty and has championed the use of the method internationally.
However, with some exceptions, other areas of empirical social science lag behind in accepting and adopting these methods. Some of this lag is down to a misunderstanding of experimental methods and their application. Best-selling social science research methods textbooks often present an outdated view of randomised controlled trials (RCTs) and overlook the latest technical and theoretical advances. This is understandable, as some of the methodological work can be highly complex and confined to specific contexts.
To get beyond the critique and the jargon, Nesta has produced The Experimenters’ Inventory, a plain-language catalogue of 11 different kinds of randomised experiments. It covers techniques such as hybrid trials, which test the effectiveness of a new idea at the same time as how it should be implemented in the real world; the use of machine learning algorithms, such as multi-armed bandit trials, to adapt how participants are allocated between trial arms; and faster, cheaper experiments, such as nimble RCTs, a lighter-touch trial advocated by Professor Dean Karlan of Yale University and the World Bank. Each represents a technique that could be added more broadly to the experimental social science toolbox.
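To give a flavour of how a multi-armed bandit trial differs from a fixed-allocation RCT, here is a minimal sketch of Thompson sampling, one common bandit algorithm. It is illustrative only, not drawn from The Experimenters’ Inventory: the three arms, their “true” success rates and the sample size are invented for the example, and it assumes a binary outcome observed immediately after each participant.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" success rates for three trial arms (unknown in practice)
true_rates = np.array([0.10, 0.12, 0.18])

successes = np.zeros(3)
failures = np.zeros(3)

for _ in range(1000):
    # Thompson sampling: draw a plausible success rate for each arm from
    # its Beta posterior, then assign the next participant to the best draw
    draws = rng.beta(successes + 1, failures + 1)
    arm = int(np.argmax(draws))

    # Observe the participant's (simulated) binary outcome
    outcome = rng.random() < true_rates[arm]
    successes[arm] += outcome
    failures[arm] += 1 - outcome

allocated = successes + failures
print("Participants per arm:", allocated)
print("Estimated success rates:", np.round(successes / allocated, 3))
```

The point of the design is visible in the output: as evidence accumulates, the algorithm routes more participants towards the better-performing arm while still exploring the others.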
Digital experiments: a new category
In the private sector, experiments are a standard means through which Silicon Valley improves online products. Companies like Google and Amazon run tens of thousands of experiments a year. Despite the hype that big data would rewrite the rules of causality, these companies still have an appetite for simple experiments.
As the Australian politician and former professor of economics Andrew Leigh notes in his book Randomistas, Google arguably has more data than any other organisation in the world – around 15 billion gigabytes and growing rapidly – but still conducts randomised experiments. Google and other businesses such as eBay, Chrysler, United Airlines and Uber run A/B tests, which have become a central part of the day-to-day operation of internet-based companies.
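Analysing a simple A/B test of this kind often comes down to comparing two proportions. The sketch below uses Python and the statsmodels library – our choice of tooling, not something named in the article – with made-up click counts, to test whether variant B outperforms variant A.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: clicks out of visitors for variants A and B
clicks = [310, 370]       # successes in each arm
visitors = [5000, 5000]   # sample size in each arm

# Two-proportion z-test: is the difference in click rates significant?
z_stat, p_value = proportions_ztest(clicks, visitors)

rates = [c / n for c, n in zip(clicks, visitors)]
print(f"Click rates: A={rates[0]:.3f}, B={rates[1]:.3f}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```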
Could these experiments in business and technology be an interesting area for social scientists to explore? Some of them might seem to cover only superficial issues – website clicks, or marketing reach. But experimental sociologists like Damon Centola at the University of Pennsylvania are studying ‘complex contagions’ such as social movements, political campaigns, and the adoption of healthy behaviours. Despite the backlash against the 2012 ‘emotional contagion’ experiment that Facebook ran on almost 700,000 users, published with Cornell University researchers, digital experiments are a rich area to explore, as long as we understand and address the ethical issues of ‘hiding’ material from social media users.
One advantage of digital experiments is that they can dramatically reduce the cost of running trials. In the UK, the Government Digital Service and the Behavioural Insights Team have experimented to improve everything from encouraging organ donations to filling in HMRC tax forms. By taking something that is going to happen anyway (e.g. sending letters) and tweaking it, such experiments are nearly costless to deliver and cheap to evaluate using administrative data.
Another advantage of going digital is the statistical power boost from larger samples online. Online research has grown exponentially in the last 20 years, and with the advent of easy-to-access resources such as MTurk (Amazon’s Mechanical Turk), a crowdsourcing marketplace for recruiting online participants, it shows no signs of slowing down.
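That power boost is easy to quantify. The sketch below, again using Python’s statsmodels (an assumption of ours, with illustrative numbers), shows how the power to detect a small effect (Cohen’s d = 0.1) climbs as the per-arm sample size grows from the scale of a lab study towards the scale of an online experiment.

```python
from statsmodels.stats.power import tt_ind_solve_power

# Power of a two-sample t-test for a small effect (Cohen's d = 0.1)
# at conventional alpha = 0.05, across increasing per-arm sample sizes
for n_per_arm in (100, 500, 2000, 10000):
    power = tt_ind_solve_power(effect_size=0.1, nobs1=n_per_arm,
                               alpha=0.05, ratio=1.0)
    print(f"n = {n_per_arm:>6} per arm -> power = {power:.2f}")
```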
These are valuable platforms for researchers from a range of fields, including social psychology, political science, sociology, human geography and economics. For the Princeton University social scientist Matthew Salganik, writing in his book Bit by Bit: Social Research in the Digital Age, the uniqueness of online experiments means we can now add another dimension to the textbook account of where and how experiments happen: ‘digital experiments’ merit their own category, in addition to the traditional division between ‘lab experiments’ and ‘field experiments’.
Rethinking the traditional randomised trial
As well as technical innovations in experimentation, there have also been attempts to rethink the traditional trial. For instance, ‘realist trials’ try to get beyond the ‘black box’ problem by telling us more about why effects occur, or how they differ between individuals or settings, and institutions like the London School of Hygiene and Tropical Medicine are using such theory-based approaches in public health.
And if randomisation really is not feasible, one answer is to use quasi-experimental methods. These can be quite technical, such as regression discontinuity designs or difference-in-differences. But they are worthy members of the evaluator’s repertoire and can be used in complex cases – such as the evaluation of the Troubled Families programme by the UK Ministry of Housing, Communities and Local Government, which created a dataset with over a million cases and over 3,000 variables.
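As a concrete illustration of one of these quasi-experimental designs, here is a minimal difference-in-differences sketch in Python with simulated data. The dataset, effect size and statsmodels tooling are all assumptions for the example, not details of the Troubled Families evaluation: the estimator compares the before-and-after change in a treated group against the same change in a comparison group, and the interaction coefficient recovers the treatment effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated panel: treated vs. comparison group, before vs. after a policy
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})

# Outcome with separate group and time effects, plus a true treatment
# effect of 2.0 that only applies to the treated group after the policy
df["outcome"] = (1.0 * df.treated + 0.5 * df.post
                 + 2.0 * df.treated * df.post
                 + rng.normal(0, 1, n))

# The coefficient on the interaction term is the DiD estimate
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # should be close to 2.0
```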
Embracing the critics
One of the strengths of experimental (and quasi-experimental) techniques is the vast body of literature critiquing them or attempting to improve them. It was easy to forget, amid the celebrations for the J-PAL Nobel prize-winning economists last year, that only a few years earlier an eminent sceptic of experiments, Professor Angus Deaton, had won the economics Nobel. He has written extensively on the limitations of RCTs.
Such critiques should be welcomed. The transparency of RCT methods makes it easier to dig into what is going on. There is much other literature that confronts the difficulties of running good experiments, finding ways to answer the perennial challenges of internal and external validity. The World Bank development impact blog, with its weekly links to interesting articles and its curated list of methods, is essential reading for methodologists both inside and outside development economics. But these aren’t the only challenges faced – and much can depend on important preparatory steps such as piloting.
Being open to such criticisms will enable experimentation to further refine its approaches, and to evolve and adapt beyond its origins in medicine, business and economics into the wider social sciences.
About the authors
Jonathan Breckon is Director of the Alliance for Useful Evidence at Nesta. Formerly Director of Policy at the Arts and Humanities Research Council, he has had policy roles at the Royal Geographical Society, the British Academy, and Universities UK. He is a Director of the What Works Centre for Children’s Social Care, a Visiting Professor at Strathclyde University, and a Visiting Senior Research Fellow at King’s College London’s Policy Institute.
Alex Sutherland is Chief Scientist and Director of Research and Evaluation at BIT, where he works across projects and policy areas. Some of his recent published work has been on police body-worn cameras, and he has led a number of large-scale randomised controlled trials in education. Before joining BIT, Alex was at RAND Europe for 5.5 years, and spent three years coordinating and teaching research design and quantitative methods at the University of Cambridge. Prior to Cambridge, he worked at the Centre for Criminology (University of Oxford) and has a DPhil in sociology, also from Oxford.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
Featured Image Credit: adapted from Katya Austin, via Unsplash (CC0 1.0)