On Friday 22 November, Claire Hutchings from Oxford Policy Management came to LSE to talk about the importance of monitoring and evaluation in development for the Cutting Edge Issues in Development lecture series. Students Liam Shah and Michelle Nazareth tell us what they took away from the guest lecture. 

Claire Hutchings speaking at LSE’s Cutting Edge Issues in Development lecture series. Photo credit: LSE ID

How do we measure what matters? Claire Hutchings from Oxford Policy Management delivered a lecture packed with powerful and transformational insights into how to better measure the impact of development programmes, through key conceptual and methodological reflections. Hutchings’ expertise and command of the subject surely left many inspired, with lessons not only for impact evaluation but for research in international development more broadly. This short article covers some of the key takeaways and hopefully does justice to the breadth and depth of what was presented.

It should seem logical by now that we cannot simply pick up models and apply them to new contexts. The challenges that programmes aim to tackle are centred on active agents who are subject to cognitive biases within a system of complex and dynamic structures, all interacting in their day-to-day lives. These influences also shape the way we interact and operate within different environments, and they should not be sidelined or ignored.

Yet, Hutchings highlighted the boundaried thinking often implicated in programmes’ accountability structures. There is an assumption that the outcomes and impacts observed can be fully attributed to the programme, which is unrealistic for most development efforts, and, tied to this, an inclination to focus on outcomes rather than the broader impacts generated. Trained to follow a single plan, with limited time and resources dedicated to design and evaluation, organisations and practitioners tend to oversimplify issues and the structures they interact with, and thus skip over the most crucial steps of analysis. However, this tendency is gradually being recognised and reversed, exemplified by multiple models embracing more robust theories of change. Importantly, the diversity of these models highlights that there is no single or ‘right’ way to generate impact.

To move forward, we must recognise that the earliest design decisions, and the continual process of evaluation, are among the most critical stages of a development project, the areas where we should spend most of our time, Hutchings argues. What we don’t do enough of is questioning what we already know, and what we may not know. By allowing evaluation to evolve as a process, we can constantly innovate and improve our development practices, exploring a wider set of options for engaging with different contexts and aspects of the programme, and identifying where and how we can add value. This also prompts us to ask more questions on topics of significance within our context, to expand our indicators, and to reprioritise information needs towards more substantial issues.

Measurement frameworks, and the ways we engage with them, are integral to these evaluation designs, helping us to overcome conceptual and methodological challenges. How do we unravel fuzzy concepts like dignity and empowerment, and how can we incorporate the dynamic, multidimensional realities experienced by people, and the environment the programme operates within, into our measurements?

Measurement agendas must allow us to examine our own assumptions about causal relationships so that we can continually test and refine our theories. Here, realist evaluation (what works, for whom, in which circumstances, and why) and mixed methods research (qualitative and quantitative) are crucial. These can produce complex indices that capture multiple domains and features of a concept, as well as measurements that are iterative, evolutionary, and sensitive to change, better suited to capturing fuzzy concepts and the dynamics of complex places. Underpinning much of this is what Hutchings calls developmental evaluation: positioning evaluators to feed real-time findings back into the design process, creating a continuous loop of information that is responsive to context dynamics.

The key to all of this, which Hutchings returned to throughout the lecture, is to act with intention, curiosity, and humility. The first two have been discussed above: developing a robust theory of change that evolves throughout the entirety of a programme, continually updating and testing our assumptions. Most important, however, is humility: facing our failures and learning from others is productive and keeps us moving forward. We will undoubtedly gather inconclusive results, and we must embrace this to develop better and more robust frameworks for the contexts in which we operate.

One point that Hutchings ended on is that not everything can be measured; we can only do our best to measure what matters. With all of the information presented, I have the utmost confidence that the more we act with intention, curiosity and humility, and reflect on our concepts and methods, the closer we will get to programmes that measure what matters, and create impact that matters, according to the people and circumstances of given contexts, transforming the livelihoods of millions.

Liam Shah is an MSc Development Studies student at LSE with a BASc in Economic Studies and Global Sustainable Development from the University of Warwick. His interests include gender empowerment, social policy, environmental management, monitoring and evaluation, and mixed methods research.

__________________________________________

In her lecture on Friday at LSE, Claire Hutchings of Oxford Policy Management made the seemingly complex world of measurement more accessible to students of development. Her lecture touched on the basic issues and challenges that professionals must overcome when evaluating, assessing, and designing development programmes. It also revealed her to be a vibrant person who had dabbled in geology and Greek mythology (!) in addition to her vast work in the development sector.

The lecture deconstructed the processes of monitoring and evaluation, randomized control trials, and other important methods of measurement used in policy and programme evaluation. The broad themes were diverse, including the role of the researcher, the human element, the fuzziness of concepts and methodologies, and the various challenges in the field. Hutchings stressed the importance of acting with intention, by which she meant focusing more on the analysis of the real issues instead of leaping to the objectives and strategies for implementing a programme. Understanding the ‘why’ and the ‘what’ in precise terms is essential to successful design and improvement.

The second key theme introduced was curiosity: the process of learning, adapting, and improving. Hutchings spoke about measurement as an evolving and iterative process, constantly refined by new ideas and methods in the field (illustrated by an amusing video in which kindergarten students outperformed business school graduates). She thus made a strong case for being open to change and contradiction, always in pursuit of the truth.

This pursuit of true answers was another important theme: acting with honesty. Hutchings pointed out that biases and assumptions exist in all forms of research, and that we must be comfortable acknowledging them. As humans, even those trying to precisely quantify and measure concepts are constrained by their own judgements and observations. We need to remain aware of these biases, understanding that not everything (or almost nothing) can be measured perfectly, but that it is still important to try! Another important dimension was the need to recognize and give importance to people: ultimately, the purpose of development is to improve the quality of life, and hence the human element is crucial.

Hutchings’ overview of the different methods of measurement (both quantitative and qualitative) was a useful and precise summary of approaches in the field today, drawing on actual examples from Oxfam projects to illustrate her points. She also made the case that neither quantitative nor qualitative studies should be treated as the ultimate goal, since generalizable data and individual narratives are equally important.

These ideas from the lecture are critical to the field of measurement, but also worth considering for the development sector as a whole. One of the key takeaways, for example, was that there might never be a precise answer, and this is something we all must get comfortable with: the idea that some things are just ‘unknowable’. In conclusion, Claire Hutchings gave a broad yet succinct overview of the focus, concepts, and challenges of measuring development today, as well as a few pathways for change in the future.

Michelle Nazareth is an MSc Development Management student from India. She has a Bachelor’s degree in Sociology, Psychology and Economics from Christ University, Bangalore. Her research interests include adaptation and climate change, urban planning and the informal sector.

Don’t forget to join us on Friday for the Cutting Edge Issues in Development Thinking and Practice lecture with guest speaker Kevin Watkins, who will give a talk on “Fighting for breath – why is the world failing to tackle the deadliest killer of children?”. Friday 29 November, 4-6pm in the Sheikh Zayed Theatre. External guests please register via: https://bit.ly/2kvbH69


The views expressed in this post are those of the author and in no way reflect those of the International Development LSE blog or the London School of Economics and Political Science.