
Gorgi Krlev

November 4th, 2019

If we’re serious about changing the world, we need to get our evidence right – A comment on the 2019 Nobel Prize in Economics.


The announcement of this year’s Nobel Prize in economics has highlighted divisions within the development economics community, particularly around the efficacy of randomised controlled trials (RCTs) as a tool for making social interventions. In this post, Gorgi Krlev discusses the pros and cons of experimental approaches in economics and suggests that rather than treating routes to social change as a binary choice between macro and micro approaches, social scientists should recognise the inherent complexity of social change and adopt realist approaches in assessing how best to make social interventions.


The citation for this year’s Nobel Prize in economics reads: “This year’s Laureates have introduced a new approach to obtaining reliable answers about the best ways to fight global poverty”. One might be forgiven for assuming that this success was universally welcomed. However, for at least some in the economics community the prize symbolised “the impoverishment of economics”.

What has unfolded over the past days and weeks has been a clash of world views over how economics can influence society. On one side is Duflo, Banerjee and Kremer’s microeconomic focus on small-scale technical improvements, and on the other, the pluralist community’s stress on macroeconomic structural inequalities. This has been paired with a sustained critique of the Nobel Laureates’ tool of choice, the randomized controlled trial (RCT), echoing the 2015 Nobel Laureate Angus Deaton’s critique of RCTs as lacking sensitivity to context and as shifting research away from the things that matter most towards the things that can be examined in a randomized research design.

Imported from medical research and becoming prominent in the 1980s, experimental methods proliferated arguably because of the desire for economics to be considered a hard science like physics, and not a soft science like all (other) social sciences. The success of RCTs in this respect is highlighted by the way that randomization has, in some fields, become a benchmark for the quality of evidence. As a colleague told me the other day: “With randomization your study has ‘5*’ journal potential; without it, it is hard to publish at all.” However, beyond being a rigorous methodology, what role do RCTs play in delivering social change?

To randomize, or not to randomize…

Randomization has its benefits. It is strong when interventions represent “easy fixes”, such as the cost-effective measures to achieve social progress identified by the Copenhagen Consensus. Kremer’s classic study on the effects of deworming on school attendance, while more complex and not free of criticism, is a prime example of just this type of intervention. In these cases, the desired outcome (less malnutrition or higher immunization) equals or is close to the outputs produced (amount of micronutrients processed or number of vaccinations performed). 

However, experimental methods are limited, if not unsuitable, when the desired outcome relates to the activities performed in multiple, indirect ways. For example, following Amartya Sen’s understanding of poverty as multiple deprivations, it is not enough to assess interventions such as microfinance against changes in household income, or to test against several simple factors collapsed into an index.

This is much like the way research impact in general relies not simply on publication, but on a wider range of communication activities. To understand these kinds of multifactorial changes, we need detailed accounts of whether interventions enable new social relations, empowerment, or self-worth. This requires contextual knowledge drawn from multiple data sources, including qualitative information.

In the absence of this information, we won’t know whether things have really changed for the better. In other words, we may have statistically significant results on variables that are easy to measure, such as more income or a shift in spending from “temptation goods” to more useful expenses, but little clue as to whether this has improved anybody’s lives. These shortcomings become even more pronounced when RCTs are applied to assess transformational effects, such as behaviour change. 
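
To make this concrete, here is a minimal sketch, loosely in the spirit of Alkire and Foster’s multidimensional poverty counting (the dimensions, cutoff and figures below are invented for illustration and not drawn from any real evaluation). A household can post a healthy income gain, the kind of result an evaluation keyed to income alone would register as success, and still be deprived on most other dimensions:

```python
# Illustrative only: invented dimensions, cutoff and numbers, loosely in the
# spirit of Alkire-Foster multidimensional poverty counting.
households = [
    # 1 = deprived on that dimension
    {"income_gain": 12, "schooling": 1, "sanitation": 1, "empowerment": 1},
    {"income_gain": 3,  "schooling": 0, "sanitation": 0, "empowerment": 0},
]

DIMENSIONS = ("schooling", "sanitation", "empowerment")
CUTOFF = 2  # deprived on >= 2 dimensions counts as multidimensionally poor

for i, hh in enumerate(households, start=1):
    deprived = sum(hh[d] for d in DIMENSIONS)
    print(f"Household {i}: income gain {hh['income_gain']:+d}, "
          f"deprived on {deprived}/{len(DIMENSIONS)} dimensions, "
          f"multidimensionally poor: {deprived >= CUTOFF}")
```

On an income-only metric the first household is the clear winner; on the multidimensional count it is the one still left behind.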

To apply randomization to a problem, we implicitly assume, as in evidence-based medicine, that the intervention (drug) should have the same effect on each individual (patient). However, for social interventions this is simply not always the case. For instance, in microfinance the default expectation is that loan recipients will set up a successful business. This impression is fuelled by some industry figureheads’ naïve projection that we are all entrepreneurs, especially the poor: a forceful effort of positive thinking. A more realistic assessment of microfinance would ask whether it makes entrepreneurs of more people who would otherwise have been deprived of that opportunity, subject to minimizing the social risk for those who fail and might slip into debt cycles as a result.
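
The homogeneity problem can be made visible in a small simulation (all numbers are hypothetical, not taken from any actual microfinance trial): a randomized design recovers a respectable positive average treatment effect even though only one in ten recipients benefits, while the rest slip slightly backwards. The average alone cannot distinguish this from a uniform modest gain.

```python
# Hypothetical simulation of heterogeneous treatment effects in an RCT.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Assume 10% of recipients can turn a loan into a business (large gain),
# while the remaining 90% see a small loss, e.g. from servicing the debt.
is_entrepreneur = rng.random(n) < 0.10
effect = np.where(is_entrepreneur, 50.0, -2.0)   # change in monthly income

treated = rng.random(n) < 0.5                    # randomized assignment
baseline = rng.normal(100, 10, n)                # pre-treatment income
income = baseline + treated * effect + rng.normal(0, 5, n)

ate = income[treated].mean() - income[~treated].mean()
print(f"Estimated average treatment effect: {ate:+.2f}")
# roughly 0.10 * 50 + 0.90 * (-2) = +3.2: positive and "significant",
# even though 90% of the treated are marginally worse off.
print(f"Share of treated who gained: {is_entrepreneur[treated].mean():.0%}")
```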

Taken as a whole, these issues point to the conclusion that more “demanding” interventions require more “contextualized” causal chains of impact and kinds of evidence.

Thinking, big and small

In line with this call for embracing complexity, social impact analysts and social policy scholars are increasingly moving away from impact as a “rational, ordered and linear process”, and from the input-output-outcome-impact model. Notably, evidence-based medicine, the field that inspired much RCT-based work in economics, has begun to take a more nuanced approach to assessing complex health interventions, often using realist reviews that embrace contextual complexity, rather than traditional systematic reviews or meta-analyses of (randomized) data.

The focus on small improvements and neat designs may indeed have pushed us too far (back) into the simplistic neoliberal world view that has been criticized as simply “bad economics”. But we should not fall prey to the illusion that we can fold power, politics and irrationality into our models indefinitely without sacrificing analytic value.

The heterodox community’s point that we need to think big, not small, when trying to fix broken systems is well taken. It echoes the voices demanding structural reforms of the tax regimes, social security provision, or wealth distribution that maintain structures of inequality and cultural dominance. However, the historic limitations of grand social designs may themselves have led to the current part-replacement of reforms by more experimental approaches, such as mission-oriented innovation. Nor does the need for reforms make individual small-scale contributions redundant.

Want change – get organised

Past experience has taught us that organizations, be it in renewable energy or in social care, are key actors of change. However, the amount of evidence organizations act on is limited. Evidence is either not gathered at all or, if gathered, not acted on. News of a charity that stopped its program for reassessment after a negative evaluation (by an RCT) is still “a big deal”. We need many more such instances, where impact analysis is used for organizational learning.

There are ways of combining organizational and structural data, and complex thinking with analytical precision. Configurational approaches, for example, have proved effective in showing how race, gender, family background and educational achievement matter for social inequality when analyzed in combination rather than in isolation. Unfortunately, we rarely find them in program evaluation. And while ‘randomistas’ have conducted thousands of studies, as we have seen, it would require millions of studies to improve the practice of organizations that aim to contribute to the “common good”.
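
As a toy illustration of the configurational intuition (the data below are invented): no single condition is decisive on its own, but one combination of conditions is perfectly associated with disadvantage, a pattern that one-variable-at-a-time analysis averages away.

```python
# Invented data, sketching the configurational idea behind approaches such as
# Qualitative Comparative Analysis: outcomes attach to combinations of
# conditions, not to single conditions in isolation.
import pandas as pd

df = pd.DataFrame({
    "minority":      [1, 1, 0, 0, 1, 0, 1, 0],
    "low_income":    [1, 0, 1, 0, 1, 1, 0, 0],
    "degree":        [0, 0, 0, 1, 1, 1, 1, 0],
    "disadvantaged": [1, 0, 1, 0, 0, 0, 0, 0],
})

# Marginal view: each condition in isolation is, at best, a 50/50 signal.
for col in ("minority", "low_income", "degree"):
    print(col, df.groupby(col)["disadvantaged"].mean().round(2).to_dict())

# Configurational view: only low_income combined with no degree is always
# associated with disadvantage in this toy data.
print(df.groupby(["minority", "low_income", "degree"])["disadvantaged"].mean())
```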

The repertoire we have as social scientists is broad and powerful. The current heated debates are a welcome occasion to make that repertoire more relevant to solving today’s grand challenges. This should be guided by the questions we want to answer, not by predefined toolkits or epistemological tradition. 

On a strategic level, we must connect the small-scale with the broad picture:

  • Equip organizations with the means to analyze the effects they create to promote continuous improvement;
  • Conduct larger scale model studies along the example of smaller scale interventions (whether organizational practices or policies) that signal potential for high impact;
  • Build or identify a portfolio of interventions to assess for broader and combined effects on systems.

Methodologically, we need more ‘realistas’, who improve our understanding of when we need which kind of evidence. They may be randomistas at times, and embrace causal complexity at others. The solution lies not in saying the world needs more of the one or the other, but in being able to choose or bring both together in meaningful ways. Because if we’re serious about changing the world, we need to get our evidence right.

 


Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

Featured Image Credit: Lachlan Donald, via Unsplash (licensed under a CC0 1.0 licence)



About the author

Gorgi Krlev

Gorgi Krlev is an Assistant Professor of Sustainability at ESCP Business School in Paris and a visiting fellow at Politecnico di Milano as well as the University of Oxford’s Kellogg College. He seeks to amplify “the social”, not only in studying new phenomena such as social innovation, but also in how academics produce knowledge. He can be found on Twitter @gorgikrlev.

Posted In: Evidence-based policy | Evidence-based research | Impact | Research methods
