
Erica Thompson

January 28th, 2023

Models and experts: urgent questions about how we inform decisions and public policy


Estimated reading time: 10 minutes


Mathematical models are simplifications: no model is capable of integrating all of the numerous political, economic, and social trade-offs involved in making public policy decisions. The solutions to this problem are both technical and social. Erica Thompson writes that we need more work on the social context and social content of mathematical models, and more critical thinking about how these models interface with decision and policy development.

Mathematical models are here to stay. Whether they are determining supply chain vulnerabilities, demonstrating regulatory compliance, or informing policies for a zero-carbon future, quantitative models are at the heart of modern societies. And as computers become more powerful and more readily accessible, artificial intelligence and machine learning models are also being applied in many new areas.

Given that, we urgently need to understand how best to use and work with models to make good and responsible decisions. Statistician George Box was quite right to point out that “all models are wrong”. They are necessarily simplifications of the messy reality we want to get to grips with. But many quantitative methods for working with models basically assume that the model is right, or at least that it can accurately estimate the range of plausible outcomes.

If the model is not quite perfect, we can expect some of its outputs to be wrong (not just inaccurate). In that case, the information that is offered as decision support could be misleading. We have two options here. We could remain in what I call model land and just expect to have to say “what a shame, we made the wrong decision” occasionally. In some circumstances that might be a reasonable answer, but if we are making decisions about critical infrastructure or selling a product that might be unsafe to millions of people, then we have both a legal and ethical responsibility to do better, to get out of model land and understand how relevant our model results are for the real world.
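One way to see this concretely is with a toy simulation (my own illustration, not from the book): a model whose assumed distribution looks only slightly wrong can still badly mis-state the range of plausible outcomes. Here the model assumes standard-normal outcomes and reports a 95% plausible range, while the simulated "reality" has fatter tails; all numbers and distributions are hypothetical choices for the sketch.

```python
import random

random.seed(0)

# The "model" assumes outcomes follow a standard normal distribution,
# so it reports a 95% plausible range of roughly [-1.96, 1.96].
model_low, model_high = -1.96, 1.96

def real_world_outcome():
    # Simulated "reality": a Student-t distribution with 3 degrees of
    # freedom -- the same centre the model assumes, but fatter tails.
    # Built from normals: T = Z / sqrt(V / df), where V ~ chi-squared(df).
    df = 3
    z = random.gauss(0, 1)
    v = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / ((v / df) ** 0.5)

n = 100_000
misses = sum(
    not (model_low <= real_world_outcome() <= model_high) for _ in range(n)
)
print(f"Model claims 5.0% of outcomes fall outside its range; "
      f"observed: {100 * misses / n:.1f}%")
```

In this sketch the model's stated 5% failure rate turns out to be closer to 15% in the simulated reality: decision support that looks precise in model land, but is misleading about real-world risk.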

So what’s the second option? You won’t be surprised to know that it isn’t easy. In my new book, I consider some of the implications of working with imperfect models and the kinds of strategies that we need to adopt to make best use of the information they contain. One theme that I explore is the need to understand the role of expert judgement in constructing, calibrating, evaluating, and using models, and the way that that expert judgement might be shaped by our social context.

Experts make models – and that’s a very good thing, because who would want to rely on a model created by a non-expert? But their expertise is often limited, and it comes from a particular background and set of experiences. Indeed, you can often find equally qualified experts who will disagree about the right assumptions to make when constructing a model and who give different advice about how to achieve the stated aims. Then the decision-maker – probably a non-expert – will be in the difficult position of trying to adjudicate between different models from different experts, weighing up their relative credibility.

This probably sounds familiar after the last few years of model-driven policy making around the COVID-19 pandemic. Some models predicted catastrophes; others, damp squibs. Politicians want to “follow the science”, but no model is capable of integrating all of the numerous political, economic and social trade-offs involved in making decisions about health policy, lockdowns, and mandates.

This complexity is another theme that I explore in the book. Decisions about which way to throw a ball to get it through a hoop can be made very well by models with a single scalar determinant of success: did the ball go through the hoop, or not? But decisions about unbounded social systems are much more complex, introducing dimensions and outcomes which cannot be collapsed into one “success metric”. How is the global economy doing? What should climate policy achieve by 2100? Metrics like Gross Domestic Product and global average temperature, while useful as an overview, do not capture the complexity of the future outcomes that we care about and hope to achieve.

And the models that we end up choosing to use can be incredibly powerful in shaping the way we think about a problem and the way we choose to respond. Integrated Assessment Models of economy and climate represent options in terms of the cost of different energy system technologies, making assumptions about the cost of nuclear energy or renewables and implementing policy in the form of interventions such as a carbon tax or commitments to achieve different climate targets. These models include assumed costs for as-yet-undeveloped technologies like carbon capture, but do not yet include costs for stimulating behaviour change, or indeed costs for only slightly more speculative technologies like solar geoengineering. Their overall framing also implies that the unregulated pathway will always be the lowest cost. That means the lowest financial cost to energy system participants, not the lowest absolute cost to society. You can begin to see that both high- and low-level choices made by modellers actually have huge implications for the way that these models interface with political and investment decisions.

My research programme is dedicated to analysing these kinds of factors, thinking through how we can use models more effectively to inform decisions and public policy, from financial regulation and public health to climate adaptation and anticipating humanitarian crises. The legal and ethical implications of taking responsibility for how and when we use models are becoming clearer and more urgent, especially with increasing use of machine-assisted model development and increasing automation in decision processes. The solutions are both technical and social. We need more research into how best to support mathematical inference from imperfect models, but we also need more work on the social context and social content of mathematical models, and critical thinking about how these models interface with decision and policy development. It’s an exciting area of immense interest and hugely broad application – join me on the journey as we Escape From Model Land!

About the author

Erica Thompson

Erica Thompson is a Senior Policy Fellow at the LSE Data Science Institute and author of the book "Escape From Model Land".

Posted In: Democracy and culture

This work by LSE USAPP blog is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported.