
Sergio Scandizzo

Chitro Majumdar

June 16th, 2020

How do you manage a risk like COVID-19? Lessons from Camus, Beck and Dörner



When governments are dealing with an unprecedented crisis, they tend to stick to the playbook for fear of criticism later. Sergio Scandizzo (European Investment Bank) and Chitro Majumdar (RsRL) discuss what light Albert Camus, sociologist Ulrich Beck and psychologist Dietrich Dörner can shed on their handling of the pandemic.

Orders! … When what’s needed is imagination.
(Albert Camus, La Peste)

The scale and global synchronicity of COVID-19’s impact have been unprecedented. Yet the political statements and actions we see in response to it are neither unique nor new, but recur quite systematically in the wake of disastrous events.

One of these is the behaviour of government officials – judged by many as either irresolute and slow to react in the early stages of the contagion, or unnecessarily draconian once the pandemic character of the problem was clear. With some notable exceptions (for example Singapore, whose government has acted forcefully and consistently since the beginning), we can see the same pattern at play in both democratic and non-democratic countries.

Hospital staff disinfecting patients in a wooden tub during the outbreak of bubonic plague in Karachi, India. Photograph, 1897, probably by R Jalbhoy. Photo: Wellcome Collection via CC-BY 4.0 licence

While European democracies were worried about their citizens’ reaction, in non-democratic countries the initial concern of government officials seems to have been to protect themselves from external scrutiny and judgment. For instance, the Chinese government was initially reluctant to enact widespread containment measures because it was chiefly concerned with its international reputation, perhaps because it does not have to worry as much about internal criticism. Only once such governments have acknowledged the problem do they take action openly – possibly because they recognise there is no point in covering up or, better, because the reputational cost of covering up is higher than that of transparency.

At that point, they can be more effective than their democratic counterparts, because they are less concerned about internal consensus. Democratically elected governments, on the other hand, worry first and foremost about auditability – which is why they are slower to act, especially at the beginning – although they may have more opportunity to build a common response later on, as the population buys into the idea of behavioural changes.

In both cases, what we see at play is a concern on the part of government officials to ensure that their behaviour is fully justifiable and auditable ex post, so that any adverse outcome cannot be blamed on them: a prevalence of accountability over responsibility. The incentives in large systems are stacked in favour of going by what is perceived to be right – erring, even knowingly, provided it is by the book – rather than risking engagement with the real issues and pursuing plausible, appropriate action using common sense and imagination. Either way, the response is initially blunted by rulers’ concern for their own interest (survival in power), while the subsequent switch to a more drastic set of measures is used – by democratic and non-democratic governments alike – to strengthen their grip on power.

How did we get to this point? How did following the rulebook and due process become a hallmark of political leadership? How did a society with a level of technology so advanced that it can literally destroy the planet, and with a degree of self-reflection so sophisticated that it continually subjects every facet of its behaviour to scrutiny, come to regard the management of risk as an exercise in audit?

Perhaps the answer, as suggested by sociologist Ulrich Beck in his seminal book Risk Society, is that the social, economic and political side effects of risks in modern societies (what he calls modernisation risks) are as important as – if not more important than – the risks themselves. The management of risk is taken away from business and science, and becomes a central political issue, displacing more traditional priorities and driving an expansion of state authority and scope of intervention. Policymakers are aware that the perception that they have failed in their risk management role is worse than the danger itself. At the same time, the desire for more control over risks creates opportunities for system changes – towards more centralisation, bureaucracy and ultimately power – which are difficult even for the best-intentioned politician to resist.

So it should come as no surprise that, when caught unawares, those same politicians claim they are facing an “unprecedented” or “once in a lifetime” event: the infamous black swan.

As black swans go, it is difficult to find better examples than epidemics: they have low probability; they are potentially catastrophic; and they take us by surprise. In his novel The Plague, which should be required reading for anyone with public office responsibilities, Albert Camus articulates (half a century before Nassim Taleb reintroduced the idea to the world of finance) our unhealthy relationship with rare adverse events:

“Everybody knows that pestilences have a way of recurring in the world, yet somehow we find it hard to believe in ones that crash down on our heads from a blue sky. There have been as many plagues as wars in history, yet always plagues and wars take people equally by surprise.” (Albert Camus, La Peste, 1947).

However, dealing with uncertainty might call for different approaches depending on one’s objective. Statistics provides the basic paradigm of hypothesis testing, whereby an inference from empirical evidence may be wrong in two ways: Type I and Type II errors. The former happens when we accept a wrong thesis as valid, while the latter takes place when we reject a correct one. In a judicial context, a Type I error occurs when an innocent person is convicted, while a Type II error occurs when a guilty person is acquitted. In science, we may recall Karl Popper’s falsification criterion: a theory that cannot possibly be proven wrong is not a theory – it is a fantasy. If I postulate the existence of unicorns, I am not making a scientific claim. That is not because nobody has ever seen a unicorn, but precisely because it is impossible to design an experiment that would conclusively exclude the possibility that such a mythical animal exists. Scientists, therefore, tend to minimise Type I errors.

By contrast, risk managers should be prepared to be wrong about their theories and, hence, aim at minimising Type II errors. This is not only because they cannot afford to wait for enough evidence to be collected – otherwise they would never take risk-mitigating action in time – but also because the very object of risk management is protection from the unexpected, which is by definition the hardest kind of event to prove is about to happen. And in certain instances, Type II errors can have catastrophic consequences.
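
A minimal simulation makes this trade-off concrete. The Python sketch below uses invented numbers – the baseline, the size of the outbreak and the thresholds are all hypothetical, not drawn from any real surveillance system. It tests daily case counts against a baseline and shows that demanding stronger evidence before raising the alarm (a lower Type I rate) mechanically inflates the Type II rate of missed outbreaks:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000                     # simulated decision situations
baseline, outbreak, sigma, n_obs = 100.0, 110.0, 20.0, 10

def alarm_rate(true_mean: float, z_crit: float) -> float:
    """Share of trials in which a one-sided z-test flags an outbreak."""
    samples = rng.normal(true_mean, sigma, size=(n_trials, n_obs))
    z = (samples.mean(axis=1) - baseline) / (sigma / np.sqrt(n_obs))
    return float(np.mean(z > z_crit))

for z_crit, label in [(1.645, "alpha = 0.05"), (2.326, "alpha = 0.01")]:
    type_i = alarm_rate(baseline, z_crit)        # false alarm: nothing is wrong
    type_ii = 1 - alarm_rate(outbreak, z_crit)   # silence while the outbreak is real
    print(f"{label}: Type I ~ {type_i:.2f}, Type II ~ {type_ii:.2f}")
```

On these toy numbers, tightening the false-alarm rate from 5% to 1% raises the share of missed outbreaks from roughly a half to roughly three quarters: the scientist’s caution is the risk manager’s blind spot.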

In The Plague, there is a meeting of the town’s prefect with several doctors. One of them, the protagonist Dr Rieux, makes the point that it is essential to take urgent and drastic action in order to stop the epidemic killing half the population of the city. His colleague objects that it has not been proven that the illness is in fact the plague. On scientific grounds he is of course right, but Rieux counters that proving his theory, or any other for that matter, is not the point: “Let us just say that we should not act as if half of the city was not at risk of being killed, because in that case it would be.”

Rieux understands that, faced with a potential catastrophe, his priority is to prevent the worst-case scenario from happening, and that this cannot wait for scientific proof of what the disease actually is. Decisions about extreme risks cannot be driven by full scientific evidence, because such evidence is almost never entirely available. Yet acting on incomplete information remains a source of angst – not just because of the possibility of failure, but because the lack of full “scientific” foundations for our actions can later be construed against the decision maker. In areas of human endeavour where lab experimentation is not an option, the only excuse for failure seems to be having followed accepted procedure, be it scientific, bureaucratic, legal or otherwise. Being wrong after following orders seems preferable to risking being right after using one’s own imagination.

In 1996, German psychologist Dietrich Dörner published The Logic of Failure, in which he discussed what might seem a simple question: why do things go wrong? He concluded that we are especially bad at dealing with complex systems – “we are prehistoric minds in an industrial era” – and that as a consequence we are mired in a number of habits that undermine our decision-making. Dörner’s conclusions ring even truer in the digital era. Here are some of them:

• We tend to oversimplify our mental models of complex systems, focusing only on one or two “key” variables and underestimating the importance of other factors.
• We are especially poor at analysing and forecasting based on sequences of data in time. We tend to assume linear extrapolation of trends and do not cope well with accelerating or decelerating change – let alone with the possibility of a change in trend direction (as the sketch after this list illustrates).
• We tend to see new situations as simply extensions of old, established situations, and therefore apply old, established actions which may not be appropriate. This may be self-protective behaviour to allow us to feel that we can cope.
• We tend to ignore the possibility that actions we take now may have unintended consequences and cause problems that currently do not exist.
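
The second of these habits is easy to see in miniature. The toy Python sketch below (all numbers invented for illustration) fits a straight line to the first ten days of a hypothetical epidemic growing at 30% a day and then projects it forward:

```python
import numpy as np

days = np.arange(10)
cases = 100 * 1.3 ** days          # hypothetical outbreak growing 30% per day

# Fit a straight line to the early data, as a naive forecaster might.
slope, intercept = np.polyfit(days, cases, 1)

for horizon in (15, 20):
    linear = slope * horizon + intercept
    actual = 100 * 1.3 ** horizon
    print(f"day {horizon}: linear forecast ~{linear:,.0f}, "
          f"exponential reality ~{actual:,.0f}")
```

Within ten days of the fitting window, the straight line is wrong by almost an order of magnitude – precisely the failure mode Dörner describes.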

The same issues that Dörner emphasised have played out as countries respond to the pandemic. In India, the government announced a nationwide lockdown with almost no notice. It did not consider the daily-wage and migrant labourers living in cities. Consequently, hundreds of thousands of these workers were left without shelter, food and resources. The workers began the long journey home, trudging hundreds of kilometres on foot in harrowing circumstances, precipitating a humanitarian crisis.

Thousands thronged railways, bus stations and highways, terrified by the choice between the prospect of death by starvation and the risk of infection. Better planning, dispersed decision-making and a feedback-seeking system allowing for experience, intuition and tacit knowledge would probably have produced better outcomes than top-down, non-participative, ad hoc measures.

Even rich countries (apart from a few outliers such as Sweden and the Netherlands) have typically followed the trajectory of Dörner’s observations. There is discomfort with data, and a desire to retrofit it to the received “truths” with which policymakers seem comfortable. Other effects and costs of policy measures are neglected: these range from hunger, psychological distress and “deaths of despair” to broader consequences for public health, social solidarity, government surveillance and stability, with cost considerations that straddle not just the economic but also the social, political and ethical spheres.

The decision-making we are witnessing today under conditions of uncertainty privileges the playbook over imagination, and negative general goals over feedback-based, specific and interim goals. Policymakers have an incentive to favour actions that can withstand a later audit, rather than actions chosen on the probability of outcomes. Given that the dynamics of these systems are too complex to anticipate, stress-testing various scenarios could help mitigate the impact of the pandemic. A greater inclination towards risk-based policy responses, factoring in collateral damage, may lead to better and more humane outcomes than rigid models. In the ecology of uncertainty, how authorities add nuance to their responses – whether they focus on containing the single problem, or consider the costs and benefits of interventions – will be debated for a long time to come.
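
What might such stress-testing look like in its simplest form? The Python sketch below is deliberately stylised – the policies, scenarios and loss figures are all invented for illustration. Each candidate policy is scored across several plausible scenarios, with health impact and collateral damage folded into a single loss number, so that the comparison reflects the whole range of outcomes rather than one central forecast:

```python
import numpy as np

# Rows: candidate policies. Columns: scenarios (mild, central, severe).
# Entries are total losses combining health toll and collateral damage,
# in arbitrary units -- purely illustrative numbers.
policies = ["do nothing", "targeted measures", "full lockdown"]
losses = np.array([
    [ 5, 40, 100],   # cheap if mild, catastrophic if severe
    [10, 20,  60],   # moderate cost in most scenarios
    [30, 30,  40],   # high fixed collateral cost, but caps the worst case
])

for name, row in zip(policies, losses):
    print(f"{name:18s} worst case: {row.max():3d}   average: {row.mean():5.1f}")
```

On these invented numbers, the lockdown looks wasteful in the mild and central scenarios yet is the only policy that caps the worst case; whether that cap is worth its collateral cost is exactly the risk-based judgment argued for above.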

References

Camus, A., 1947, La Peste, Gallimard.
Beck, U., 1992, Risk Society: Towards a New Modernity, Sage.
Dörner, D., 1996, The Logic of Failure, Metropolitan Books.

This post represents the views of the authors and not those of the European Investment Bank, RsRL, the COVID-19 blog or the LSE.

About the author

Sergio Scandizzo

Sergio Scandizzo is Head of Regulation and Reporting at the European Investment Bank (EIB) in Luxembourg.

Chitro Majumdar

Chitro Majumdar is the Chief Strategic Advisor of Sovereign Institutions and a founder of R-square RiskLab (RsRL).

