
Robin Mansell

February 11th, 2025

Will crackdowns on big tech work?


Robin Mansell, Professor Emerita at the LSE and Scientific Director of the International Observatory on Information and Democracy’s report Information Ecosystems and Troubled Democracy 2025, reflects on what can be done to tackle the power of big technology companies.

Whether it is Meta’s decision to stop using independent fact checkers of social media content or Elon Musk’s use of X to favour the alt-right in Germany, US-owned big tech companies are exercising their market power in ways that challenge democracy. Mis- and disinformation and hate speech on social media go viral, despite efforts to put ethical principles and legislation in place to foster the ‘inclusive, open, safe and secure digital space that respects, protects and promotes human rights’ called for by the United Nations.

In the US, measures to mitigate harms linked to online content are cast as suppressing innovation, threatening competitiveness, and undermining US First Amendment speech rights. The UK government is consulting on changes to copyright law to ensure that the big tech companies can continue to access data, especially data generated by the UK’s creative industries, to train and profit from AI models. Through its AI Opportunities Action Plan, the government aims to promote economic growth by encouraging trust in AI and pushing for more ways to ‘unlock’ data assets. Protecting citizens from critical risks is mentioned, but the key goal is to drive a profitable data economy. AI systems will be brought to market with some beneficial applications. They will also exacerbate existing harms and create new ones, while profits continue to accrue to the big tech companies.

New legislation in the UK

On 1 February 2025 the UK government announced new legislation to tackle the use of AI tools to produce child sexual abuse images that circulate online, requiring the companies hosting this content to remove such images. As part of a wider proposed crime and policing bill, this risk mitigation measure may help, but even if the UK succeeds in mitigating some risks through such measures – including the Online Safety Act – the big tech companies will resist. The UK government will be out of line with the ‘America First’ agenda. It might even find itself the target of US government-imposed sanctions. This partly explains why policy makers seek to mitigate risks rather than tackle the real problem: the powerful incentives to profit from commercial data monetisation.

Big tech and democracy

A common assumption behind promoting ‘AI opportunities’ is that what is happening to our online information and communication spaces is dictated by inevitable technological change. A new global report by the International Observatory on Information and Democracy, Information Ecosystems and Troubled Democracy, challenges this assumption. It explains how the big tech companies are shaping our information ecosystems – complex systems of people, practices, values, and technologies – in their commercial interests. Based on a review of research with more than 1,600 citations, the report looks at how big tech strategies and practices are contributing to declining trust in news media, at how AI systems are deployed in ways that do not protect human rights, and at how data collection and use are being governed principally in the interests of big tech companies.

The report shows that online mis- and disinformation and hate speech are symptoms of complex changes in societies, including disregard for the rule of law and the abrogation of human rights. Of course, the propagation of online mis- and disinformation, and of content used to target and harm children, is an important amplifier of these changes. Still, the report explains why this is not the sole cause. It shows why identifying technology, such as AI, as the principal cause of harm is misguided.

A big weakness of policy responses to online harms is a failure to acknowledge and tackle power asymmetries. The report examines the role of big tech and efforts to govern the uses of data. It discusses how it is possible to counter the power of big tech through strategies aimed at achieving data justice. The failure to tackle the power of big tech directly means there is little questioning of the legitimacy of big tech companies’ data extraction practices or of the monetisation of data for profit, even when there is accumulating evidence of unacceptable outcomes, including discrimination, inequalities and harms to children.

What can be done

The report highlights the need to reimagine and resist big tech company data monetisation by supporting civil society mobilisations. This means going beyond efforts to increase people’s media or AI literacy in the hope that individuals will protect themselves from harm: it means initiatives that aim to diffuse concentrations of big tech power. Such bottom-up measures include data governance frameworks that put control in the hands of local, community and municipal actors. They also include exposing how big tech business models make their digital platforms and social media attractive targets for mis- and disinformation campaigns and encourage investment in AI systems that are deployed in harmful ways. As the report shows, big tech companies’ claims to be developing ‘responsible AI’ are often simply product marketing, and there is a need for greater support for independent monitoring of how these developments contribute to human rights infringements.

Overall, much more attention needs to be given to the actors and institutions that generate mis- and disinformation or harmful imagery, and to their motivations. This is neglected when policy focuses mainly on information or on technology, instead of on the actors. The report presents evidence that no single content moderation technique can be effective as long as the underlying corporate incentives to monetise data remain in place. The economic incentives that drive the big tech business model need to be tackled. Democratic instability and harms to adults and children will continue until action is taken against the big tech business models – using risk mitigation strategies to crack down on big tech is likely to have only small effects.

This post gives the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science, nor of the International Observatory on Information and Democracy.

Featured image: Photo by Mathew Schwartz on Unsplash

About the author

Robin Mansell

Robin Mansell is Professor Emerita of New Media and the Internet in the Department of Media and Communications at LSE, and Scientific Director of the International Observatory on Information and Democracy. She has training in several social science disciplines including psychology, social psychology, politics and economics, and is a strong advocate of interdisciplinary research when it builds on the strengths of disciplinary inquiry.

Posted In: Internet Governance
