Models that make predictions about the COVID-19 pandemic are complicated by the fact that people change their behaviour in response to these predictions. Lucie White discusses how we can deal with this, based on a recent paper with Philippe van Basshuysen, Donal Khosrowi and Mathias Frisch.
The famous – or perhaps rather notorious – “Report 9”, published by a team led by epidemiologist Neil Ferguson on 16 March 2020, was pivotal in steering the UK’s response to the COVID-19 pandemic. In the days following the publication of this report, which predicted that the NHS would be overwhelmed within a matter of weeks unless drastic action was taken, the UK government announced a sudden about-face, implementing a raft of restrictions and abandoning its previous “mitigation”-based strategy in favour of attempting to suppress viral spread.
The seemingly overly pessimistic predictions of the mathematical model that formed the basis of this report have been subject to intense scrutiny. Even Ferguson admits that the most extreme projection – 500,000 deaths in the UK if nothing was done – was “never really going to happen”. So did something go seriously wrong with COVID modelling? The answer is more complicated than it might appear, because models can change the very things that they attempt to predict.
It is clear that the model in Report 9 was designed to form the basis for scientific advice, and to inform public policy. As a consequence of the results and the recommendations made by the modellers, the scientific advisory committee SAGE recommended policy measures to stop the spread of the virus, which were then implemented, in large part, by the UK government. Naively comparing the numbers of deaths and hospitalisations that the model projected for “unmitigated” viral spread with actual deaths and hospitalisations simply ignores the fact that mitigation measures were put in place. And models can incorporate this into their projections – in fact, Report 9 modelled a range of scenarios contingent on different (combinations of) policies being put in place.
The difficulty comes from the fact that the results of models can also change individual behaviour – in ways that go beyond compliance with any policies implemented. Take, for example, a rule that restricts gatherings to two people. You could comply with the rule in different ways – by meeting a different friend every night, or by only meeting with a couple of people. Whether you decide to go beyond what the rule suggests, and limit your contact to a small group of people overall, is likely to depend on how seriously you take the threat of the pandemic. This, in turn, is likely to depend on evidence including the results of mathematical modelling, especially if it is widely publicised, and accompanied by graphical representations of the seriousness of the situation.
Figure 1: A graph from Report 9 projecting the degree to which critical care bed capacity will be exceeded by demand under different (mitigation-based) policy scenarios.
These individual decisions can have a drastic impact on the spread of the virus and thus on the accuracy of predictions. The model in Report 9, for instance, is highly sensitive to a particular parameter representing the effectiveness of social distancing measures – which depends, in turn, on how individuals respond to those measures. And how individuals respond can itself be influenced by the outcomes that the model projects in the first place. It is extremely difficult to anticipate just how individuals will respond, and thus difficult to take account of these sorts of effects in modelling.
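To make the sensitivity point concrete, here is a toy discrete-time SIR simulation. This is an illustration only – it is not the Report 9 model, and every parameter value (transmission rate, recovery rate, population size) is an invented placeholder – but it shows how strongly a projection can hinge on a single parameter standing in for how far people actually cut their contacts.

```python
# Toy discrete-time SIR model (an illustrative sketch, NOT the Report 9
# model; all parameter values are invented for demonstration purposes).
# A single parameter, contact_reduction, stands in for how strongly
# individuals respond to distancing measures.

def peak_infected(contact_reduction, beta=0.3, gamma=0.1,
                  population=1_000_000, initial_infected=100, days=365):
    """Run a basic SIR simulation and return the peak number infected.

    contact_reduction: fraction (0-1) by which transmission is cut,
    representing the behavioural response the modeller must guess at.
    """
    s = population - initial_infected  # susceptible
    i = float(initial_infected)        # infected
    effective_beta = beta * (1 - contact_reduction)
    peak = i
    for _ in range(days):
        new_infections = effective_beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        peak = max(peak, i)
    return peak

# Modest changes in assumed behavioural response shift the projected
# peak drastically – the kind of sensitivity discussed above:
for reduction in (0.0, 0.3, 0.5):
    print(f"{reduction:.0%} contact reduction -> "
          f"peak ~ {peak_infected(reduction):,.0f} infected")
```

Because the projected peak falls steeply as the assumed behavioural response rises, and the actual response is itself shaped by the published projections, the modeller faces exactly the circularity described above.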
So what can we do about this? One answer could be to look to modelling efforts in the social sciences, particularly economics, where this is a familiar problem. Another could be to take a different approach to evaluating these models – in addition to looking at how well they can predict certain outcomes, perhaps they can also be assessed in light of how effective they have been in steering public and policy response. This idea might seem strange, but there is precedent for it elsewhere. Imagine, for example, a doctor predicting that her patient will die within ten years if he doesn’t stop drinking and smoking. This prediction might help the patient to change his habits and escape this unwanted outcome. We would not hold it against the doctor if she couldn’t tell us exactly when the patient would have died had he not changed his behaviour, or exactly what impact his lifestyle changes will have on his life expectancy. Similarly, the ability to steer us away from unwanted outcomes can be a virtue of models, even when their power to change aspects of the world diminishes their capacity to predict them accurately.
Of course, we need to be very careful with this – we don’t want to choose or design models on the basis of what will best steer outcomes in the direction we want, not least because we might disagree on which outcomes are desirable. Models are instruments that aim to tell us something about the world, and this crucial function should not be undermined. These judgements should not figure into model design, but there might be a place, after models have been designed and implemented, to evaluate them in part on the type of effects they have had on the world, and on whether these effects have been desirable or undesirable.
This post originally appeared on the LSE’s COVID-19 blog and draws on the authors’ published paper, Three Ways in Which Pandemic Models May Perform a Pandemic, published in the Erasmus Journal for Philosophy and Economics.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics.
Image Credit: Maxim Hopman via Unsplash.