
Jana Bacevic

April 27th, 2020

Science in inaction – The shifting priorities of the UK government’s response to COVID-19 highlights the need for publicly accountable expert advice.


Estimated reading time: 7 minutes


The phrase ‘following the science’ is repeated frequently in relation to government policies to address COVID-19. However, what this science might be, and how it is better than other ‘sciences’, is less frequently explained. In this post, Jana Bacevic reviews the UK government’s initial response to the COVID-19 outbreak and argues that a key factor shaping its approach was a closed advisory system that enabled particular scientific or epistemic communities to have disproportionate influence on policymaking. To address this deficiency, scientific advisory systems need both a greater variety of experts and greater transparency.


One of the consequences of the Coronavirus may be a re-evaluation of the role experts and expertise play in society and policymaking. Recent stories in the UK media have emphasised the role of the Scientific Advisory Group for Emergencies (SAGE), including Chief Scientific Adviser Sir Patrick Vallance and Chief Medical Officer Chris Whitty, in – depending on the account – shaping or failing to shape the UK Government’s COVID-19 strategy. What does this tell us about the role of experts?

Scientific advisory panels do not make policies. Rather, their role is to summarise and distil available research to help guide politicians in decision-making. In this sense, while the remit of experts is limited by the kind of questions politicians ask, they have relative freedom in selecting and using evidence. However, decisions about what, and whose perspective, counts as ‘evidence’, are not neutral. 

Reports in the media have overwhelmingly focused on SPI-M (Scientific Pandemic Influenza Group on Modelling), the modelling subsection of SAGE. Yet, in the early stages of the pandemic, the Government also took advice from another subsection: the Scientific Pandemic Influenza Group on Behaviours, or SPI-B. First convened during the H1N1 (‘swine flu’) pandemic in 2009/10, SPI-B was reconvened on 13 February 2020 to provide guidance on behavioural and social interventions. This raised questions, in particular concerning the association of behavioural science with the famous ‘Nudge’ unit. While the Government has argued that its Coronavirus strategy was uniquely driven by the “best evidence available”, the SAGE documents tell a somewhat different story.

This suggests one of the reasons the UK Government was slow to adopt stricter lockdown measures was not only, and possibly not primarily, the fear they would be ineffective: it was the fear they would be unpopular

The specific advice SPI-B was asked to provide included the risk of widespread public disorder; the use of behavioural and social interventions (including school closures and general social distancing); and how to communicate with the public, especially vulnerable groups. While the group affirmed support for stopping mass gatherings, it had a negative view of school closures, which were deemed “highly disruptive”; it also cited evidence from Japan suggesting growing discontent around this policy. Experts in SPI-B were split over the value of recommending general social distancing, arguing that household isolation together with school closures would lead to an unexpected displacement of activity, and thus increase the likelihood of discontent and non-compliance. The document “Combined behavioural and social interventions” from 4 March deems isolation of symptomatic cases, and isolation of at-risk members, as the combination of interventions most likely to be socially acceptable. This may explain why, in the early stages of the pandemic, the Government insisted there was no need for a stricter lockdown or social distancing.

What made SPI-B advise against measures adopted elsewhere? In the absence of reliable data – research on the success of different non-pharmaceutical interventions (including widespread testing and contact tracing) was only just beginning to emerge – SPI-B drew on two kinds of evidence. One was existing behavioural science, including assumptions about the degree to which certain policies could be applicable to the UK. The other was the results of public opinion surveys conducted between January and March 2020 by agencies such as YouGov, Ipsos, and the Cabinet Office-contracted BMG Research. These polls focused on Coronavirus awareness, risk perception, and – importantly – public approval of governmental interventions. This suggests one of the reasons the UK Government was slow to adopt stricter lockdown measures was not only, and possibly not primarily, the fear that they would be ineffective: it was the fear that they would be unpopular.

Of course, it is possible to claim that governments have a duty to protect the population, regardless of whether the necessary measures are popular. It is equally possible to claim that the emphasis on popularity had more to do with preventing public disorder than with the ratings of any particular politician or party. Either way, the Government’s early focus on the reactions of the public shows a remarkable degree of similarity with the behavioural approach that prefers to govern through ‘messaging’, rather than intervening.

This provides a different angle on the role of expertise in the COVID-19 pandemic. In crises, politicians tend to privilege the advice they believe will allow them to control the situation. Crisis situations, like pandemics, are characterised by multiple and overlapping forms of uncertainty: about the virus itself, its consequences, and its rate of transmission (R). It is not surprising that, in the initial confusion, the Government turned towards what it thought it could predict and manage: the behaviour of the population. After independent experts, the media, and the public repeatedly raised concerns about the UK’s divergence from the WHO’s approach, it deferred to a different expert opinion, in this case represented by the Imperial College study.

This does not mean that the UK Government was misled by a ‘cabal’ of behaviourists, any more than by a ‘cabal’ of modellers. All epistemic communities are limited by their epistemic assumptions, as well as theoretical and methodological traditions. Under conditions of competition for public funding, not to mention the prestige associated with being able to advise on public policy, it is not surprising that epistemic and disciplinary communities will strive to preserve privileged access to these resources, to the degree of excluding all others. 

The exclusivity and non-transparency of expert panels like SAGE make this sort of epistemic monopoly, in which specific epistemic communities exercise singular or disproportionate influence, easier to maintain. ‘Experts’ are not immune to epistemic biases. The evidence list of SPI-B refers to advice received from “academic specialists in Health Psychology, Social Psychology, Anthropology and History”; there is, however, a notable absence of, for instance, sociologists or specialists in public health.

Under conditions of competition for public funding, not to mention the prestige associated with being able to advise on public policy, it is not surprising that epistemic and disciplinary communities will strive to preserve privileged access to these resources

This does not imply that simply widening the pool of ‘experts’ would have resulted in a better strategy. Just as it is ridiculous to suggest that economists have the answer to the Coronavirus crisis, it is hubristic to assume that any single epistemic community has privileged access to solutions to complex social problems. What we need instead is a relationship between different kinds of ‘science’, policy, and society that is open, non-hierarchical, and aware of the inevitable limitations of any single epistemic position. This also requires trust in the capacity of the ‘public’ to learn about and evaluate different kinds of specialist knowledge.

What does this mean for the role of expertise in society? Neither the epistocratic ‘trust in science’ nor the populist mantra that ‘the people have had enough of experts’ is likely to offer an answer. Expert panels have to be open, transparent, and democratic; but, then again, so do politicians. Science (and knowledge more generally) that works for the public is incompatible with epistemic monopolies, regardless of whether they are created through market competition, artificial scarcity, inherited privilege, or the belief that people are ‘dumb’ or ‘irrational’. Challenges in the decades to come are going to require both more diverse knowledge and more specific forms of expertise. It is time to make them truly open, inclusive, and public.


Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

Image credit: Newton, by Richard Croft via Geograph (CC BY-SA 2.0)



About the author

Jana Bacevic

Dr Jana Bacevic is a research associate at the University of Cambridge. Her work spans the sociology of knowledge, epistemology and social theory, and political sociology. She has written on topics including higher education policies, ‘intellectuals’ and the role of expertise, and the political economy of knowledge production, and has also worked as an advisor to governments and international organizations. She tweets at @jana_bacevic

Posted In: COVID 19 | Experts and Expertise
