In Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do about It, Erica Thompson explores how mathematical models are used in contexts that affect our everyday lives – from finance to climate change to health policy – and what can happen when they are malformed or misinterpreted. Rather than abandoning these models, Thompson presents a compelling case for why we should revise how we understand and work with them, writes Connor Chung.
Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do about It. Erica Thompson. Basic Books. 2022 (Hardback; 2023 paperback).
“World is on track for 2.5°C of global warming by end of the century.” “US recession odds are falling fast.” “New wave of Covid predicted as UK’s return to school and social mixing hit.” Amidst the challenges of recent years, mathematical modelling has become an ever-more-important tool for understanding our world. Done right, this can empower us. Distilling complexity into bite-size pieces, after all, can be a key step towards changing things for the better.
Yet modernity’s faith in modelling has come with a dark side, suggests statistician Erica Thompson in Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do about It (Basic Books: 2022). Embedded within every model are certain assumptions about how the world works. Sometimes, they do the job. Yet, other times, our visits to model land go awry. Thompson fears that modern society never learned to tell the difference, and that as a result, we’re becoming trapped in a mirror-world of our own making.
The core problem? That it’s all too easy to approach models as sources of objective scientific fact. Yet “[s]uch naive Model Land realism,” Thompson warns, “can have catastrophic effects because it invariably results in an underestimation of uncertainties and exposure to greater-than-expected risk.” “Data, that is, measured quantities, do not speak for themselves,” and at nearly every stage of finding the story, the world finds ways of seeping in.
It’s all too easy to approach models as sources of objective scientific fact.
Let’s say, for example, you want to know how climate change will impact GDP. A preeminent tool for doing so is the DICE model family. As recently as 2018, its factory settings concluded that global warming of 4˚C by 2100 would reduce global economic output by only around 4%. The Intergovernmental Panel on Climate Change, meanwhile, has warned that such warming would bring about “high to very high” planetary risks “in all reasons for concern.” So how does one conclude that a world of cataclysmic weather, of cities swallowed up, of climate-driven refugee and food crises would barely register in the economic metrics?
First, there’s what’s fed into the model: since costs and benefits of building a solar farm or passing a clean energy regulation don’t play out all at once, one must instruct a model how much to value the present versus the future. This variable (one of many dials to which DICE is highly sensitive) is called a “discount rate,” and no amount of math can hide the fact that it’s ultimately a moral judgment. As its main creator, Yale economist and Nobel laureate William Nordhaus, has himself written, “[t]he choice of discount rates is central to the results” – DICE can be made to say just about anything depending on what inputs are chosen. Relatedly, there’s what’s not fed into a model: models are informed by pre-existing knowledge. As a consequence of history, less economic and climatic data are readily available from the developing world, for instance.
Models are informed by pre-existing knowledge. As a consequence of history, less economic and climatic data are readily available from the developing world
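The discount-rate point is easy to see with a little arithmetic. The sketch below (hypothetical figures, not DICE’s actual machinery) applies the standard constant-rate discounting formula to the same future damage under three different rates:

```python
# Illustrative only: how the choice of discount rate swings the present
# value of a future climate damage. Figures are hypothetical, not DICE output.

def present_value(damage: float, rate: float, years: int) -> float:
    """Discount a damage incurred `years` from now back to today
    at a constant annual discount rate."""
    return damage / (1 + rate) ** years

# The same $1 trillion of climate damage, incurred 100 years from now:
damage, years = 1_000_000_000_000, 100

for rate in (0.01, 0.03, 0.05):
    pv = present_value(damage, rate, years)
    print(f"discount rate {rate:.0%}: present value ${pv / 1e9:,.0f}bn")
```

At a 1 per cent rate, that future trillion-dollar harm still “costs” roughly $370 billion today; at 5 per cent, it shrinks to under $8 billion. The physical damage is identical; the policy signal differs by an order of magnitude, which is why the choice of rate is a moral judgment rather than a technical one.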
Then follows the construction of the model itself. As economist Nicholas Stern and co-authors point out in a recent paper, certain presumptions of rational actors, of market efficiency, and of exogenous technological progress are embedded into DICE’s fundamental wiring. More broadly, Thompson notes, DICE takes for granted that “the burden of allowing climate change can be quantitatively set against the costs of action to avoid it, even though they do not fall upon the same shoulders or with the same impact.”
Models are by nature parsimonious: their utility derives from reducing complex phenomena to a much smaller set of parameters. Yet the real world is full of higher-order impacts (good and bad) beyond what gets specified in the math
Then, there’s how results are generalised to the world at large. Models are by nature parsimonious: their utility derives from reducing complex phenomena to a much smaller set of parameters. Yet the real world is full of higher-order impacts (good and bad) beyond what gets specified in the math. And when models set the bounds of what’s possible, viable, or optimal (DICE, Thompson points out, is enshrined in policy analysis pipelines at some governmental and intergovernmental agencies), nuance risks being lost in translation: “The whole concept of predicting the future can sometimes end up reducing the possibility of actively creating a better one.”
None of this is to say that DICE is useless. Assumptions, even simplistic ones, are necessary for making decisions about complex phenomena. But at the same time, they indelibly embed the modeller in the modelled, and we get nowhere by ignoring this reality.
Thompson isn’t the first to point out that model-making is a deeply human endeavour. But it is in these case studies of present-day debates in the modelling community, as informed by first-hand expertise, that her work really shines. Alongside DICE, Thompson deftly pries open black box after black box in cases ranging from financial markets to public health to atmospheric dynamics, finding in each case that turning morality into a math problem doesn’t purge the human touch. It only buries it just below the surface.
Models emerge as ‘tools of social persuasion and vehicles for political debate’ as much as they are quantitative processes
Models emerge as “tools of social persuasion and vehicles for political debate” as much as they are quantitative processes. And since “we are all affected by the way mathematical modelling is done, by the way it informs decision-making and the way it shapes daily public campaigns about the world around us,” it becomes a real challenge for modern democratic society when models are insulated from understanding or accountability.
The easiest response at this point might be to surrender – to declare that the ineffability and complexity of the world makes mathematical modelling inadequate. And yet… there’s also the pragmatic reality that, amidst compounding crises, models have quite simply proven useful. The empirical record has largely vindicated scientists’ (and, for that matter, literal fossil fuel companies’) climate predictions. Energy system simulations from Princeton played a key role in passing the Inflation Reduction Act, one of the most globally significant pieces of climate legislation to date. And modelled pathways from the International Energy Agency are playing key roles in guiding a rapid buildout of clean energy – and in challenging fossil fuel expansion.
How does one ensure that, in grappling with the social nature of modelling, the baby isn’t thrown out with the bathwater?
History, after all, is full of seemingly progressive (and indeed radical) critiques of objectivity, scientific consensus, and expert practice that end up merely reinforcing the status quo: just take the long history of social constructivist scholarship being used by allies of the tobacco and fossil fuel industries to support and justify their misinformation campaigns. Meanwhile, the climate denialist camp has long had the reliability of climate modelling in their sights. So how does one ensure that, in grappling with the social nature of modelling, the baby isn’t thrown out with the bathwater? It’s a tough needle to thread, yet something Thompson manages to do with grace. Just as there is “a problem in trusting models too much,” she writes, “there is equally a problem in trusting models too little.” Although “failing to account for the gap between Model Land and the real world is a recipe for underestimating risk and suffering the consequences of hubris,” she counters that “throwing models away completely… lose us a lot of clearly valuable information.”
More transparency and intentionality about the role of expert judgement, Thompson suggests, might help close the ‘accountability gap’ between the models and the humans acting on them
This may be the book’s most valuable contribution: it’s ultimately a call not to abandon model land altogether but instead to become better travellers. This begins with seeing the social nature of models as a feature, not a bug. More transparency and intentionality about the role of expert judgement, Thompson suggests, might help close the “accountability gap” between the models and the humans acting on them. Similarly (echoing a rich literature in the philosophy of science), she notes that greater institutionalised diversity of methods and standpoints might result in fewer unseen biases and blind spots.
Ultimately, this book is a plea for humility. It’s wrong, Thompson tells readers, to presume that we’ve somehow created the capacity to transcend the limits of human rationality. Instead we must realise that “taking a model literally is not taking a model seriously,” as Peter Diamond noted in his Nobel acceptance speech – that only by cultivating an ethos of responsibility can we truly treat our creations with the care they deserve.
Such a conclusion may be uncomfortable, but it’s also deeply pragmatic advice for better modelling, better truth-seeking, and better public reason in an empirical age. Modellers, scientists, policymakers, and more would do well to take it to heart.
This post gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science. The LSE RB blog may receive a small commission if you choose to make a purchase through the above Amazon affiliate link. This is entirely independent of the coverage of the book on LSE Review of Books.