MSc African Development student, Kayla de Fleuriot, reflects on a Cutting Edge in Development talk by visiting lecturer Claire Hutchings, head of programme quality for Oxfam Great Britain, about the crucial role M&E plays in International Development.
Claire Hutchings brought us into her realm of work as the head of programme quality for Oxfam Great Britain, introducing us to the pioneering methods and practices of measurement which her team at Oxfam have designed and implemented. In Hutchings’ lecture, we explored Monitoring and Evaluation (M&E), both in terms of its grounding in statistical theory and its real-world implications.
As the title implies, M&E is, at its most basic, about keeping a consistent account of development outcomes in order to accurately evaluate a development project’s impact. This may seem a dry topic within development, but it is actually (I promise!) a cutting-edge issue, in terms of how it has been adapted for 21st-century problems. Indeed, Hutchings found her way into M&E as someone who worked in social justice and found a ‘utilitarian’ passion in the work, despite not being a ‘numbers person’.
The great value of M&E is that it provides us with a logic which can help us measure, and therefore better understand, development impact. As Hutchings so aptly stated: “development doesn’t happen on a piece of paper, [and] it doesn’t happen in a linear way”. This theory-of-change view shows us that development project outcomes cannot be understood through isolated, aggregated outputs alone. We therefore need statistically rigorous ways to assess development projects. M&E, as Hutchings described it, provides an avenue for addressing this by designing appropriate impact testing and evaluation: observed outcomes are compared against a counterfactual (an estimate of what would have happened without the project) so that causal inferences about development impact rest on firm ground. But within M&E there is considerable variation in specific methodology. For more on the specifics of how this can work, watch Esther Duflo’s TED Talk.
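To make the counterfactual idea concrete, here is a minimal sketch of the simplest version of this logic: comparing a randomly assigned treatment group against a comparison group. The data and the effect size are invented purely for illustration, and this is a generic randomised-comparison example, not Oxfam’s actual methodology.

```python
# A minimal, hypothetical sketch of counterfactual impact estimation.
# Not Oxfam's method; the data and the +5 "true effect" are invented.
import random
import statistics

random.seed(42)

# Simulate an outcome score (e.g. 0-100) for two randomly assigned groups:
# a comparison group, and a treatment group that received the project.
comparison = [random.gauss(50, 10) for _ in range(200)]
treatment = [random.gauss(55, 10) for _ in range(200)]  # assumed +5 effect

# With random assignment, the comparison group approximates the
# counterfactual: what would have happened without the project.
impact_estimate = statistics.mean(treatment) - statistics.mean(comparison)
print(f"Estimated impact: {impact_estimate:+.2f} points")
```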
Hutchings illustrated the critical potential of M&E as a better strategy for development projects by showing how it helped shift an HIV project in Bolivia towards more meaningful impacts. This was done by reassessing what the project, which aimed at improving awareness about HIV/AIDS, had actually achieved; it was found that addressing the information gap alone was not sufficient to change behaviours. M&E in this case helped Oxfam test their real impact against their project’s outcomes, and provided the opportunity to learn and therefore manage the programme better.
M&E also provides an opportunity, through measurement design, to test biases within a project. Hutchings explained this by discussing the agenda of women’s empowerment at Oxfam, and how the organisation has adapted its definition to suit contexts, rather than imposing its own beliefs about what women’s empowerment should look like. Oxfam uses a model of broad themes and sub-measurements to make sure that women’s empowerment, and other difficult-to-measure impacts, are consistently measured in terms of theme, but contextually measured in terms of characteristics (see their Global Performance Framework for more). The characteristics are specific enough to be contextually relevant, and act as a sort of shopping list to choose from in measuring each theme.
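As a rough illustration of that structure, here is a hypothetical sketch of how a broad theme and its menu of sub-measurements might be encoded. The theme and characteristic names below are my own invention, not taken from Oxfam’s actual framework; see the Global Performance Framework for the real thing.

```python
# Hypothetical sketch of a theme/sub-measurement framework.
# The characteristics below are invented for illustration only.

# One broad theme, with a "shopping list" of candidate characteristics.
FRAMEWORK = {
    "women's empowerment": [
        "participation in household decisions",
        "control over income",
        "freedom of movement",
        "participation in community groups",
        "attitudes to gender roles",
    ],
}

def build_index(theme, chosen, responses):
    """Score a theme as the share of chosen characteristics met.

    `responses` maps each chosen characteristic to True/False for one
    respondent; characteristics not chosen for this context are ignored,
    so the theme stays comparable across differently measured projects.
    """
    valid = [c for c in chosen if c in FRAMEWORK[theme]]
    met = sum(bool(responses.get(c)) for c in valid)
    return met / len(valid)

# Example: a context where only three characteristics are relevant.
chosen = ["control over income", "freedom of movement",
          "participation in community groups"]
respondent = {"control over income": True, "freedom of movement": False,
              "participation in community groups": True}
print(build_index("women's empowerment", chosen, respondent))  # 2 of 3 met
```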
In conclusion, we need to understand the complex impact that development projects have: whether they are good or bad, and how much they have contributed to an outcome. M&E provides a practical and convincing methodology for answering these questions, based on statistical theory and adaptive design. What should be emphasised here, however, is that not all M&E programmes are as well executed and transparent as in the Oxfam case. We therefore need to be clear about what is good M&E practice and what is not.
One take-away from this lecture which I found particularly valuable is that, moving forward, two areas of M&E are crucial:
1) Evaluation design, in the sense that we need to lead with the question and not the design, i.e. “fit for purpose” design (see Elliot Stern for more on evaluation design), and
2) Measurement frameworks – define what you want to measure well before you try to measure it, otherwise it won’t be useful for impact assessment (see Oxfam’s Global Performance Framework for more).
If we focus on these two points we may be able to continue adapting and improving development projects, so that we are more accountable, more effective in our projects, and better able to contribute to the sector’s real knowledge base.
Kayla de Fleuriot, from South Africa, is a candidate for the MSc African Development at the LSE. She previously studied geography and economics at Rhodes University, and urban planning at Wits University.
The views expressed in this post are those of the author and in no way reflect those of the International Development LSE blog or the London School of Economics and Political Science.