Uwe Peters

November 23rd, 2020

Estimated reading time: 10 minutes

Talking about Not Talking (Carefully) about Science

Uwe Peters on how science communication can bring out the worst in us


We are currently facing a range of social problems related to COVID-19, climate change, political polarization (for example, on voting fraud in the US election), ‘fake news’, and many other issues. With respect to each of them, we rely on science communicators (scientific experts, research institutions, science reporters), first, to tell us what the facts are and, second, to advise us on what we ought to do, given these facts.

But there are also limits to what we would want a science communicator to tell us, or even indirectly suggest to us. For instance, given what we now know about COVID-19, climate change, and political polarization, if a scientific expert said without evidence that people should be reluctant to wear COVID-19 masks, should reject the reality of climate change, or should become more hostile towards political opponents, they would face strong criticism. And it might seem obvious that we don't commonly see this sort of science communication.

I want to suggest otherwise. Well-meaning science communicators, including scientific experts and reporters, do in fact regularly (but unintentionally) produce messages whose effects resemble the effects of precisely the kind of claims just mentioned. Indeed, science communicators (unintentionally) convey particular kinds of social norms that incline us toward harmful behaviour—norms that are significantly more effective in influencing behaviour and thinking than even norms capturing what we should be doing (‘You should eat five portions of fruit and veg a day’). Let me explain (by retracing an argument I develop here and here).

Whether in the media or in academic journals, science communicators regularly inform people about socially problematic behaviours of groups of individuals. We are, for instance, told that:

Britons are uniquely reluctant to wear face masks (YouGov)

conservatives in the US are substantially less likely than liberals to accept that human-caused climate change is happening (Yale Climate Change Communication)

America remains deeply polarized, few voters are truly persuadable (Time)

people [on social media] are quicker to repeat something that’s wrong than something that’s true (NBC)

90% of people are biased against women (BBC)

And so on. These statements are perhaps (depressingly) accurate, and they seem to convey information that is vital for tackling the social problems concerned.

However, they also have a dark side that tends to be overlooked. To cast light on it, let’s turn to the psychology of social norms. Psychologists (for example, here and here) tend to distinguish between what we take to be prescriptive norms—what we think (rightly or wrongly) ought to be the case or ought to be done (for example, ‘people should protect the environment’)—and descriptive norms—what we think (rightly or wrongly) people commonly do, think, or feel (for example, ‘most students are on social media’). On the face of it, descriptive norms just seem to describe what as a matter of fact is common or typical among people, whether or not it should be. But interestingly, and this is the key point here, there is a wealth of evidence that suggests that people alter their behaviour, their thinking, and even their emotional responses to conform with mere descriptions of how people like them behave, think, or feel.

For instance, studies found that when told that most others do so, many people readily follow suit in energy saving (or wasting), recycling, tax paying, election voting, healthy (or unhealthy) eating, stereotyping, corruption, stealing, and so on. That is, many people tend to do what (they perceive) others commonly do, regardless of whether it amounts to what might be considered good or bad behaviour. Even when they are not themselves members of the particular group being described, many people still tend to expect members of that group to conform to these norms, even when those norms capture morally problematic behaviour. For instance, in one study, participants were told about two groups, one whose members tended to make babies cry and another whose members refrained from this morally problematic behaviour. When a member of the second group then acted against type and also made a baby cry, participants judged that individual more harshly than an individual who acted in exactly the same way but belonged to the first group (I discuss all of this in more detail here).

These kinds of findings matter for science communication. Statements of the type ‘Britons are reluctant to wear face masks’ and so on communicate to an audience that certain groups display certain behaviours or hold certain beliefs. They convey descriptive norms, norms that we know can influence the behaviour, beliefs, and emotional responses of the people under discussion. Given that it is often the point of science communication to reach a wide audience, that audience will often contain members of the group in question. These claims are thus likely to incline at least some members of the audience to align their behaviour or thinking with these norms, and so they will act or think in harmful ways (for example, by refusing to wear face masks, rejecting the reality of climate change, and so on).

In fact, meta-analyses (such as this and this) comparing the influence of prescriptive norms with the influence of descriptive norms on intention formation and behaviour found that descriptive norms are significantly stronger in shaping behaviour and cognition than prescriptive norms. An exhortation to ‘Eat healthy!’ is less likely to affect intention formation and change behaviour than ‘People in your age range eat healthily’. It seems that while we don’t usually like to be told what to do, we are nonetheless prone to conformity when we learn how people like us think and behave.

Taking all this together, it’s not unreasonable to fear that when, say, a BBC journalist writes (citing YouGov), ‘Britons are reluctant to wear face masks’, this is more likely to make at least some Britons abstain from mask wearing than if our journalist stated, ‘Britons should be reluctant to wear face masks’. This is worrying.

So what are science communicators supposed to do? How should they go about their job of truthfully explaining scientific research to the public? How should any of us go about discussing the harmful behavioural or cognitive features of particular groups? And are we to blame if we negatively influence others in the ways described here?

Well, it does seem blameworthy if, in light of the potential harm of descriptive norms, science communicators fail to consider these risks when communicating the relevant sort of research. That would be reckless. Unfortunately, the phenomenon is still largely unknown to science communicators.

Another way to be reckless is to generalize from smaller to larger groups, describing the behaviour of many more people than the evidence supports. For instance, there is evidence that social scientists often move from results pertaining only to specific samples (for example, subjects with a WEIRD background: Western, Educated, Industrialized, Rich, and Democratic) to public claims about much broader social groups ('Britons', 'men', 'people', and so on). When such generalizations happen and descriptions are applied to a much larger group, many more people may be inclined to adopt the harmful behaviour or belief ascribed to them.

So one tactic may be to restrict, without sacrificing accuracy, the scope of the generalizations commonly found in science communication, referring to 'many' rather than to 'most' or 'all' people in a group, or rather than to the category as a whole ('Britons', and so on). This helps because, in capturing only what some people do, think, and so on, such claims are less likely to convey descriptive norms (recall that descriptive norms typically concern what the majority thinks or does). There are also other, less harmful ways of communicating broad generalizations about negative behaviours or beliefs to the public. For instance, there is evidence that adding information about a countervailing trend among a minority (for example, 'Most people litter, but increasingly more people do not') can 'buffer' the impact of negative majority information (for example, 'Most people litter').

Having said that, science communicators might often lack the time or other resources to express themselves in these ways. There are also likely to be cases when the epistemic, ethical, or societal benefits of expressing broad social generalizations about negative features outweigh the costs related to the effects of descriptive norms. And the effects of descriptive norms are not all-powerful and can be undercut by other phenomena. For example, politically motivated cognition (‘wishful thinking’) may lead some to resist the descriptive norms conveyed in science communication, and so the negative effects at issue may not always arise.

Still, the data on the impact of descriptive norms across various domains are robust. Moreover, there is much at stake if broad claims about COVID-19, climate change denialism, political polarization, and so on have the effects outlined here. The potentially highly detrimental impact of descriptive norms in science communication should thus be taken into account by science communicators, and indeed by each of us, when we are making public claims about social ills. Otherwise, we risk inadvertently increasing the very problems we set out to reduce.

The Source Code

This essay is based on the articles ‘How (Many) Descriptive Claims about Political Polarization Exacerbate Polarization’ by Uwe Peters, published in Journal of Social and Political Psychology, and ‘Science Communication and the Problematic Impact of Descriptive Norms’ by Uwe Peters, published in the British Journal for the Philosophy of Science.

About the author

Uwe Peters

Uwe Peters is a postdoctoral researcher in philosophy at the University of Bonn and the University of Cambridge. He is also undertaking an MSc in Psychology and Neuroscience at King’s College London. His research focuses on the philosophy of science, cognitive science, and social as well as cognitive biases.
