The concept of misinformation has recently been subject to a range of theoretical and methodological critiques. Sander van der Linden, Ullrich Ecker and Stephan Lewandowsky argue that misinformation has clear and measurable effects and that downplaying them risks compromising fact-based public dialogue.
A flurry of recent opinion pieces downplay the problem of misinformation in society, whether by de-emphasising the role of misinformation in fuelling violence, dismissing the threat of digital technologies in spreading misinformation, or claiming that disinformation is largely a distraction from more important underlying issues. What these pieces have in common is a set of (mis)perceptions about misinformation's role in causing social ills, its prevalence, its impact on behaviour, and its epistemic status.
In our recent work we have tried to set the record straight and outline why these one-sided arguments lack nuance and are at times flat-out wrong. At worst, they may even unintentionally provide intellectual cover for the current witch hunt against misinformation researchers. Here we take up each of these critiques in turn.
Misinformation as a cause of societal ills
It's important to understand that misinformation can have both direct and indirect impacts, which together make up its total impact. Think of the 2024 UK riots, which were sparked directly by a false story on social media.
The people involved also likely already held strong anti-immigration and anti-Muslim views, making them susceptible to this kind of misinformation in the first place. The reasons why people hold such views are complex, but it is difficult to ignore that these too are most likely shaped by misinformation (see, for example, in a different context, the long history behind the recent false claim that Haitians were eating pets in Ohio). Misinformation can thus also contribute indirectly by shaping public opinion.
Studies of propaganda offer another analogy. For instance, evidence from the Rwandan genocide indicates that around 10% of the overall violence against the Tutsi can be directly attributed to propaganda radio, based on comparing areas that received the broadcasts with areas that did not. Moreover, beyond direct persuasion, the indirect effect of propaganda spreading through word of mouth and social interaction was even greater. Misinformation can therefore persuade directly, but also indirectly via different (social) channels.
It does not make sense to dismiss the role of misinformation simply because it is difficult to identify a linear causal impact on society, especially because misinformation is a meta-risk: it shapes our perceptions of other risks (such as climate change), which in turn feeds into how well those risks are managed. Misinformation also makes matters worse by lowering trust and increasing polarisation. A recent paper takes a more nuanced view by treating the problem as a dynamic feedback loop between trust and misinformation: misinformation can impact society by lowering institutional trust, and lower institutional trust can in turn lead people to believe misinformation.
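To make the feedback-loop idea concrete, here is a minimal simulation sketch in Python. The update rules and the parameters alpha and beta are illustrative assumptions on our part, not the cited paper's actual model:

```python
# Minimal sketch of a trust-misinformation feedback loop.
# Update rules and parameter values are illustrative assumptions,
# not the model from the cited paper.

def simulate(steps: int = 50, trust: float = 0.8, belief: float = 0.1,
             alpha: float = 0.15, beta: float = 0.2) -> list[tuple[float, float]]:
    """alpha: how strongly belief in misinformation erodes institutional trust;
    beta: how strongly low trust feeds further belief in misinformation."""
    history = []
    for _ in range(steps):
        trust = max(0.0, min(1.0, trust - alpha * belief))        # misinformation erodes trust
        belief = max(0.0, min(1.0, belief + beta * (1 - trust)))  # low trust breeds belief
        history.append((trust, belief))
    return history

for step, (t, b) in enumerate(simulate()[:5], start=1):
    print(f"step {step}: trust={t:.2f}, misinformation belief={b:.2f}")
```

Under these assumed parameters the loop is self-reinforcing: each loss of trust feeds further belief in misinformation, which erodes trust further.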
The prevalence of misinformation is significant
Critics' claim that people consume little misinformation can be misleading, as the answer depends on how "misinformation" is defined. One minimising strategy is to define "misinformation" narrowly as "fake news". For example, several landmark analyses included only between 21 and 490 websites in their analyses of "fake news" consumption. It is unsurprising that, overall, those sites received few visits, although the fact that 44.3% of Americans visited one of them at least once during the 2016 U.S. presidential election campaign is hardly insignificant.
Moreover, while it is true that estimates show this type of news comprises a small portion of people's news diets, the expert consensus is that "misinformation" covers both "false" and "misleading" information. Propaganda and other more subtle types of misinformation may contain half-truths, but they are layered with bias and manipulation, which makes them not only more prevalent but also more problematic. Under this consensus definition, the misinformation problem widens significantly.
Critics also tend to quote low-prevalence estimates without caveating that many studies estimate prevalence from limited public data, at a single point in time, for a single platform. Estimates of misinformation in social media posts vary widely, ranging from 0.2% to 28.8%, and visual misinformation comprises up to 20% of political media content on Facebook. Critics are correct that absolute social media impressions can be misleading: we need to divide an individual's exposure to misinformation by their total exposure to both true and false information. Yet averages, too, can be a misleading descriptor when variation is high: for some people misinformation exposure will be low, whereas others are drowning in misinformation on their feeds.
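A toy calculation, with hypothetical per-user exposure shares, shows how a skewed distribution can make the average exposure figure misleading:

```python
# Hypothetical per-user exposure shares: each user's misinformation items
# divided by their total news items. All numbers are invented for illustration.
from statistics import mean, median

exposure_share = [0.00, 0.00, 0.01, 0.01, 0.01, 0.02, 0.02, 0.03, 0.40, 0.50]

print(f"mean share:   {mean(exposure_share):.1%}")    # ~10%: looks moderate
print(f"median share: {median(exposure_share):.1%}")  # ~1.5%: the typical user sees little
# The two heaviest consumers account for most of the summed exposure shares:
heaviest_two = sum(sorted(exposure_share)[-2:]) / sum(exposure_share)
print(f"fraction of summed shares from the two heaviest consumers: {heaviest_two:.0%}")  # 90%
```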
A recent study showed that information about COVID-19 vaccines that was not flagged by fact-checkers but was nevertheless highly misleading reached over 20% of Facebook's US user base and was viewed six times as often as all fact-checked fake news combined. It was also 50 times more impactful than outright fake news in driving vaccine hesitancy. Broadly dismissing misinformation as low-prevalence on the basis of narrow definitions and selective estimates is therefore not particularly informative for risk assessment.
There is evidence that misinformation changes behaviour
Real-world examples of the harmful effects of misinformation on human behaviour abound: from mob lynchings in India, to the recent riots in the UK, to people (including children) ingesting toxic chemicals in the hope that they will cure COVID-19.
However, one could argue that there is no experimental evidence showing that misinformation is a direct cause of these acts—a critique reminiscent of the doubt-sowing tactic used by the tobacco and fossil fuel industries. After all, if we cannot be 100% sure something is harmful, maybe we should wait and see before doing something about it?
What then is a reasonable standard of evidence in these situations? It is simply not ethical to carry out randomised controlled trials using misinformation to see if it changes voting behaviour or vaccination choices, just as it would not be ethical to ask people to smoke cigarettes for a few decades to see if they cause cancer. Having said this, many of these questions can be answered in controlled laboratory settings, through computer simulations, or via naturally occurring field experiments.
The evidence is there: exposure to misinformation lowers vaccination intentions in the lab, and in the real world regions with higher susceptibility to misinformation show lower vaccine uptake. In experimental settings, propaganda affects attitudes and real-world behaviours such as protesting, and conspiracy theories about global warming make people less likely to sign real-world petitions. Areas with greater exposure to TV shows featuring more COVID-19 misinformation experienced more deaths; Russian troll activity predicted polling numbers for Trump; an instrumented difference-in-differences approach shows that misinformation exposure affected real-world voting for populist parties and contributed to genocide; and misinformation can even causally affect unconscious behaviours.
There is always a need for more and better research, but there is plenty of evidence that misinformation can impact real-world behaviour; pretending otherwise is disingenuous.
Misinformation has clearly identifiable fingerprints
One of the more abstract critiques holds that misinformation carries no empirical signature (psychological or linguistic markers) that would allow us to identify it, because what is true or false is always context-dependent. Attempts to identify such markers are, in this view, laden with subjectivity and bias, and ostensibly a fool's errand.
Yet mathematical modelling shows that most popular conspiracy theories are completely implausible, and philosophers have articulated good reasons why a priori scepticism of conspiracy theories is warranted. More generally, subjectivity and ideological bias are unlikely to play a major role in determining the truth value of claims, given that independent fact-checks of false claims correlate highly not only with one another but also with the ratings of regular bipartisan crowds. This makes sense, as the wisdom of diverse crowds often approximates expert judgment.
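As a simple illustration of that wisdom-of-crowds logic, the following sketch generates synthetic ratings (all numbers are invented for illustration) and shows that averaging many noisy lay judgments tracks low-noise expert judgments closely:

```python
# Illustrative sketch with synthetic data: averaged crowd ratings
# correlate strongly with expert ratings. All numbers are invented.
import random
from statistics import mean

random.seed(0)
truth = [random.uniform(0, 1) for _ in range(50)]  # latent accuracy of 50 claims

def rating(t: float, noise: float) -> float:
    """One rater's noisy judgment of a claim, clipped to [0, 1]."""
    return min(1.0, max(0.0, t + random.gauss(0, noise)))

experts = [rating(t, 0.1) for t in truth]                          # low-noise expert ratings
crowd = [mean(rating(t, 0.4) for _ in range(100)) for t in truth]  # 100 noisy laypeople per claim

def pearson(xs: list[float], ys: list[float]) -> float:
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"expert-crowd correlation: {pearson(experts, crowd):.2f}")  # typically > 0.9
```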
Research also overwhelmingly shows that misinformation makes use of common manipulation techniques and logical fallacies. Indeed, many studies and reviews show that misinformation and deception contain different psycholinguistic cues from true information. For example, whereas reliable news is often neutral and scientific in tone, misinformation is, on average, more (emotionally) manipulative.
Machine-learning models trained on large and diverse sets of news headlines rated as false by multiple independent sources show that, using known psychological markers, classification accuracies of over 80% can be achieved.
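To give a flavour of how such feature-based classification works, here is a deliberately simplified sketch. The features, lexicon, and headlines are hypothetical placeholders, not the models or datasets used in the cited research:

```python
# Simplified sketch of marker-based misinformation classification.
# The lexicon, features, and toy headlines are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

EMOTION_WORDS = {"shocking", "outrage", "terrifying", "miracle", "disaster"}  # toy lexicon

def features(headline: str) -> list[float]:
    words = headline.lower().split()
    return [
        sum(w.strip("!?.,") in EMOTION_WORDS for w in words) / max(len(words), 1),  # emotional-word ratio
        float(headline.count("!") + headline.count("?")),                           # sensational punctuation
        float(sum(w.isupper() and len(w) > 1 for w in headline.split())),           # ALL-CAPS words
        float(len(words)),                                                          # headline length
    ]

# Toy labelled headlines (1 = rated false/misleading, 0 = rated reliable)
headlines = [
    ("SHOCKING miracle cure they don't want you to know!!", 1),
    ("Terrifying disaster cover-up exposed?!", 1),
    ("Outrage as secret plot is revealed!", 1),
    ("You won't BELIEVE this shocking trick!", 1),
    ("Central bank holds interest rates steady", 0),
    ("New study examines regional vaccination rates", 0),
    ("Parliament debates proposed transport bill", 0),
    ("Researchers publish climate data for 2023", 0),
]
X = [features(h) for h, _ in headlines]
y = [label for _, label in headlines]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```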
Yet perhaps the strongest evidence comes from intervention work. Meta-analyses of interventions that teach people about misinformation techniques show that knowing about such tactics helps people accurately discern misinformation from credible information. Similarly, large-scale field experiments that train people to recognise, say, emotionally manipulative news or reasoning-based fallacies also find that such interventions lead people to share less misinformation. In plain language: misinformation has clear markers that people can be taught to identify.
This leaves a risk of false positives, particularly if, as critics argue, most information in our environments is true: teach people about falsehoods when most of what they encounter is accurate, and they might start identifying falsehoods where there are none. Fortunately, meta-analyses have shown that such interventions promote correct discernment and that varying the base rate of misinformation does not impact people's ability to discern false from true information.
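A toy simulation helps show why discernment, unlike raw accuracy, is insensitive to the base rate of misinformation. The hit and false-alarm rates below are invented for illustration:

```python
# Toy simulation (hypothetical numbers): a judge with fixed sensitivity shows
# the same discernment whether misinformation is rare or common.
import random

random.seed(1)
HIT_RATE = 0.80  # P(flag | item is false): assumed fixed judge sensitivity
FA_RATE = 0.15   # P(flag | item is true): assumed fixed false-alarm rate

def simulate(base_rate_false: float, n: int = 100_000) -> tuple[float, float]:
    hits = fas = n_false = n_true = 0
    for _ in range(n):
        if random.random() < base_rate_false:  # item is misinformation
            n_false += 1
            hits += random.random() < HIT_RATE
        else:                                  # item is true
            n_true += 1
            fas += random.random() < FA_RATE
    accuracy = (hits + (n_true - fas)) / n       # raw proportion correct
    discernment = hits / n_false - fas / n_true  # hit rate minus false-alarm rate
    return accuracy, discernment

for base_rate in (0.05, 0.50):
    acc, d = simulate(base_rate)
    print(f"base rate {base_rate:.0%}: accuracy={acc:.2f}, discernment={d:.2f}")
```

The raw proportion correct shifts as the environment's composition changes, while the discernment score stays constant, which is why discernment is the more informative measure here.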
Both jurisprudence and science emerged to deal with reality and to differentiate truth from falsehood. Science and the judiciary are elaborate institutions that have developed imperfect yet important, evidence-based ways of establishing truth under uncertain conditions. If critics claim that this cannot be done reliably, then they are really arguing that neither science nor justice can be achieved.
This post draws on the authors’ preprint, Why Misinformation Must Not Be Ignored, published on PsyArXiv.
Image Credit: Reporters with various forms of “fake news” from an 1894 illustration by Frederick Burr Opper (Public Domain) via Wikipedia.