
Robin Mansell

March 11th, 2025

We must tackle the “Western bias” of misinformation and disinformation research

Estimated reading time: 7 minutes

Policy responses to misinformation and disinformation in the Global South suffer from a biased evidence base – from research largely carried out in the Global North. Similar biases have been found regarding the outcomes of using AI tools to combat illegal and harmful information. Robin Mansell takes stock of the problem and considers what can be done.


The World Economic Forum singled out mis- and disinformation as its top-ranked global risk in its 2025 report – a greater threat than extreme weather events or state-based armed conflict.

What is less talked about is how this risk is framed and dealt with outside the Global North. Online misinformation (the inadvertent spreading of inaccurate information) and disinformation (information known to be false that is intentionally spread to cause social harm) are major issues for the Global South. Yet the vast majority of evidence on how to combat harms linked to mis- and disinformation, as well as hate speech, comes from the United States and Europe. All too often, the assumption is that the effects of mis- and disinformation on people’s trust in news media and on their political preferences will be the same in the Global South as in the Global North.

A new report from the Observatory on Information and Democracy – entitled Information Ecosystems and Troubled Democracy – investigates these biases and points to remedies for all those grappling with these issues.

How policies neglect conditions in the Global South

Following guidance developed globally, legislation is being put in place in countries throughout the Global South pushing tech companies to moderate online content to protect rights to freedom of expression and keep people safe from harms. An immediate point to note, however, is that mis- and disinformation in these contexts often circulates where many people are not connected to the internet, and where many still rely on legacy news media. Policies which focus solely on online mis- and disinformation neglect these conditions.

Policies also tend to neglect the multiple social, cultural and political factors that motivate people to generate harmful online content. When research does focus on the Global South, it reveals different patterns of trust in news media. For instance, in 2024, scores for overall trust in the news based on self-report surveys carried out by the Reuters Institute for the Study of Journalism were 30% in Argentina and 43% in Brazil (in both cases, declining); 32% in the United States and 69% in Finland (stable); and 64% in Kenya (increasing). There are also large differences in who generates and weaponises information and in people’s varying capacities to seek remedies for harms.

There is also much less research on the power asymmetries between Big Tech companies, governments and individuals in the Global South and how these play out in decisions about what is acceptable online information and what information should be suppressed. Big Tech companies operate as digital behemoths in the Global South, just as they do in the Global North. But there is a key difference: models for governing the use of social media algorithms to amplify/demote certain information are being exported from the Global North to the Global South, often in the context of aid, cooperation and trade. When these models are emulated by governments in the Global South they can have negative unintended consequences.

For example, legislation aimed at moderating online content can be used to suppress voices critical of the state, even when the stated goal is to protect rights to freedom of expression – as seen in Nigeria, in countries in Asia, and other regions in the Global South. Combined with the global drive by US and Chinese-owned tech companies and Global North governments to promote the use of AI tools in the Global South, the overall transfer of policies and practices is a recipe for deepening and exacerbating existing inequalities and injustices.

The path to resisting corporate and state power

If policymakers in the Global South are to succeed in combating the harms of mis- and disinformation, it is essential that their efforts reflect the diverse experiences of harms in their countries and regions, as well as the structural factors that are at play. This means resisting becoming passive recipients of Western ideas about how to govern complex information and communication spaces; that is, it requires an explicit effort to decolonise research in this area.

Citizens or their representatives need to be able to contest the design and function, and even the existence, of commercial social media platforms and their uses of AI tools. The space for alternatives to Big Tech business models might be diminishing, but it has not vanished. There are numerous examples of resistance to unjust and discriminatory Big Tech practices and to legislation that is implemented in ways that do not protect rights to freedom of expression and data privacy.

Besides raising awareness of things individual citizens can do as self-defence – such as using virtual private networks and encrypted messaging applications, for those who are connected to the internet – civil society organisations are supporting community and municipal collaborations to develop digital platforms that adhere to their own rules for moderating online content and for the way data is collected and used. They are working to attract funding for public interest alternative news media and promoting new national frameworks for public and decentralised digital infrastructures that put the public’s interest in the availability of accurate online information first, not Big Tech companies’ interests in data monetisation.

The common goal of resistance tactics enabled by civil society actors is to redistribute power away from Big Tech and governments that do not respect universally agreed human rights, and towards local political and social communities. To address the skewed evidence base that is currently informing policy and practice around combatting mis- and disinformation in the Global South, the biases of research urgently need to be acknowledged and addressed.


The report, “Information Ecosystems and Troubled Democracy”, is available to read here, courtesy of the Observatory on Information and Democracy.

Sign up here to receive a monthly summary of blog posts from LSE Inequalities delivered direct to your inbox.

All articles posted on this blog give the views of the author(s). They do not represent the position of LSE Inequalities, nor of the London School of Economics and Political Science or the International Observatory on Information and Democracy.

Image credits: JRdes and Primakov via Shutterstock.

About the author

Robin Mansell

Robin Mansell is Professor Emerita in the Department of Media and Communications at LSE. She was Scientific Director for the report of the Observatory on Information and Democracy in Paris. Her research focusses on digital platform governance including privacy and surveillance issues. She has received numerous awards for her research and is a Fellow of the British Academy.

Posted In: Global Inequalities | Media | Technology
