
Stephen John

July 13th, 2020

Speaking (Some) Truths to Power

0 comments | 16 shares

Estimated reading time: 10 minutes


Stephen John on the ethics of expert advice


The UK government claims that its policy response to COVID-19 has been science-led. Two recent revelations suggest a more complicated picture: that scientific advice may have been policy-led. First, according to credible reports, in January and February, scientists didn’t make any recommendations on lockdown policies, apparently because they felt that a government loudly committed to ‘taking back control’ would not countenance such a restriction on liberty. Second, it seems that the Prime Minister’s special advisor, Dominic Cummings, contributed to meetings of the Government’s scientific advice panel, SAGE.

Both of these cases raise a concern: that scientists were telling policy-makers what they wanted to hear. This seems an awful way of going about science communication; if there is one role we don’t want scientists to play in policy-making it is that of pandering courtiers. But to go beyond outrage to constructive advice we need to say what scientists ought to have done: we need to fill out the sentence ‘scientists told politicians what they wanted to hear, rather than telling them…’. Well, rather than telling them what? I’m going to suggest that the best way of finishing this thought is, ‘rather than what they ought to have wanted to hear’. Admittedly, the distinction between what policy-makers want to hear and what they ought to want to hear is a fine one. It’s entirely possible that our actual scientists are incompetent hacks, cowed by the government, communicating badly. What I suggest below is even more worrying: that even the wisest scientists might find it difficult to negotiate the subtleties of good science communication.

To think through this problem, let’s start by setting out two extreme options for science communication: first, that scientists should be priests; second, that they should be ascetics.

One way of understanding the notion of science-led policy is as a demand that scientists should be priests: they should just tell policy-makers what to do. While this position may appeal to many of us despairing at policy-makers’ apparent ineptitude, ultimately, it is deeply unappealing. As many commentators have noted, the science around COVID-19 is highly fragmentary and rudimentary. When faced with such massive uncertainty, it’s unclear how we can identify the best course of action. To make matters worse, even were the science completely certain, scientists shouldn’t simply tell policy-makers what to do. After all, ‘ought’ claims depend not only on factual or empirical claims, but also on ethical or value claims. Even if we were absolutely certain, say, that lockdown would save hundreds of thousands of lives, it doesn’t follow that we must implement lockdown unless we also endorse the value judgement that saving hundreds of thousands of lives is valuable. That might seem rather an obvious value judgement, but, of course, lockdown doesn’t only have benefits, but costs, say for those denied access to ‘non-essential’ care; deciding whether the benefits outweigh the costs is an essentially evaluative matter. Even if we think that there are objective answers to such ethical questions, it’s not obvious that, in virtue of their skills and training, scientists are the best people to answer them. Indeed, even if scientists are, somehow, ethical experts, it would still be inappropriate for them to answer those questions; rather, at least in democracies, this is the proper role of elected officials.

Faced with these concerns, it’s tempting to swing to the opposite end of the spectrum, and think that scientists should be ascetics: they should dispassionately report all of the facts, unswayed by any ethical or political considerations, leaving it to policy-makers to integrate those facts with their values. Unfortunately, while chiming with cultural notions of scientists as fearless speakers of truth unto power, this model is equally problematic. One set of concerns is that it may just be impossible: that various ethical and value judgements so deeply infiltrate scientific reasoning that scientists can’t state ‘just’ the facts. Regardless of these deep and tricky issues, the ideal of asceticism is plain unappealing; it’s a mandate for a completely pointless form of data dumping. If scientific advisors literally reported absolutely every true (or well-established) claim, every single possibility, every uncertainty and potential source of error, policy-makers would be drowning in a sea of claims. The scientists might keep their hands clean, but that’s cold comfort for anyone hoping for a sensible response to COVID-19.

To get a fix on these problems, consider a far simpler, everyday example. Imagine your bike is broken. You ask your keen cyclist colleague for her advice on what to ask for at the bike repair shop. Your colleague casts her expert eye and decides you have three options: a cheap repair, but where the same problem will recur again in six months, or a moderately expensive repair, which will keep you on the road for at least three years, or an extremely expensive repair, but your bike will have the same problem again in six months. Imagine that your colleague really thinks that maintaining bikes is the highest human calling, so she just tells you ‘get the moderately expensive repair’. This priest-like attitude seems problematic: it’s your choice how you trade-off your money and your bike’s longevity. This doesn’t imply, however, that your colleague should just list all three options, as the ascetic model suggests. That’s a waste of her time and yours: common-sense psychology tells her that the third option is just irrelevant.

Your colleague shouldn’t tell you too little—only the second option—nor too much—all three options—but just enough—the first and second options. Rather than priest or ascetic, she should be Goldilocks.

What makes this Goldilocks solution just right? Experts know lots of things. Typically, when we non-experts turn to experts, we don’t simply want to know lots of facts; rather, we want the experts to help us figure out what to do. The experts have to decide which facts to communicate. In turn, the basic ethical principle of respect for autonomy implies that when experts solve this problem, their key aim should be to enable non-experts to choose in accordance with their own values. This implies that the priestly model of advice-giving is wrong: experts shouldn’t impose their own values on others. However, it doesn’t imply that all evaluative considerations are irrelevant to advice-giving, as the ascetic holds. Rather, experts’ advice should be tailored to the non-expert’s interests and values. Sometimes, it’s easy to do this—no-one enjoys paying lots for shoddy repairs. However, often such predictions are hard—it is difficult to know how a colleague will trade off longevity and cost. The key trick in respectful, ethical advice-giving is to predict non-experts’ needs and interests well, and to recognise the limits of such predictions.

With these comments in mind, we may need to rethink our reaction to ‘policy-led science’. Consider the scientists who failed to advise ministers about possible lockdown strategies on the grounds that take-back-controllers would never countenance such liberty-restricting measures. If the Goldilocks picture is correct, these advisors weren’t wrong to try to tailor their advice to their audience’s interests and values; they were just wrong in their predictions and assumptions about what their audience valued. Similarly, in principle, at least, Dominic Cummings’ presence at SAGE meetings might be a sensible way in which to ensure that expert advisors are aware of the government’s values, rather than a dreadful breach of the political into the scientific. The problems come with Cummings’ auto-didactic tinkering, rather than his mere presence. In short, if we are clear-eyed on the proper role of experts, we should be wary of sweeping claims about politicization, and, instead, focus on the details of interaction. This focus may be less exciting than sloganizing—and far harder to politicize—but at least it recognizes that giving good advice is difficult; even well-intentioned experts and well-designed systems can get it wrong.

Still, you might be worried that something has gone wrong in my analysis. Haven’t I just ended up saying that scientists should just tell politicians what they want to hear? If I have, my argument is in deep trouble. One of the most important reasons to involve independent objective scientists in policy is precisely to ensure that politicians’ words and actions are guided by the way the world is, however politically uncomfortable that may be. Fortunately, then, my argument doesn’t demand that scientists reinforce politicians’ wishful thinking. My claim is that scientific advisors should not tell policy-makers what they want to hear, but, rather, what they ought to want to hear.

To explain this distinction, consider, again, an everyday example. As a parent, I want to be told if my children are too sick to be at school. At the same time, as a lazy person, I don’t want to be told if my children are too sick to be at school, because looking after them would be a hassle. Imagine that, one day, my laziness is at a peak: I really don’t want to know that my children are sick. Even if the school secretary knows I don’t want to know this, he should still call me up if my children are sick; his communication should be sensitive to what I ought to want, not what I actually want. Something similar is true, I suggest, in the case of expert advice. There are all sorts of sordid, unpleasant reasons why politicians might not want to be told certain sorts of facts; for example, acknowledging that COVID-19 will do massive harm to the economy might damage a fragile, carefully constructed political consensus that ‘leave means leave’. Still, when acting in their capacity as policy-makers, choosing for the sake of the public, they ought to want to be told it. It is this second kind of consideration which should govern scientists’ advice-giving to politicians.

The Goldilocks model can, then, escape the charge that it provides a licence for sycophancy. Unfortunately, the distinction I’ve drawn between what politicians actually want to be told and what they ought to want to be told is subtle. Figuring out what politicians ought to want to be told is no easy task, and it’s easy to fall back on the simpler criterion of what they do want. I already suggested above that communicating well involves complex judgements, and can easily go wrong. To make matters worse, then, there is a very fine line between the best way for experts to communicate—aligning with what policy-makers ought to want to be told—and the worst way—aligning with what they actually want to be told. It should be no mystery that communication between experts and non-experts often goes wrong; the real mystery is how it ever goes well.

 

Image credit: Swimmer, Zero-X

 


The Source Code

This essay is based on the article ‘Science, Truth, and Dictatorship: Wishful Thinking or Wishful Speaking?’ by Stephen John, published in Studies in History and Philosophy of Science Part A.


About the author

Stephen John

Stephen John is Hatton Lecturer in the Philosophy of Public Health at University of Cambridge. His research interests lie at the intersection of philosophy of science, applied ethics, social epistemology, and political philosophy.
