
Nick Couldry

Dipayan Ghosh

November 13th, 2020

Regulation of online platforms needs a complete reset

How to regulate social media companies and other large digital platforms is a pressing question for governments around the world. In this post, Nick Couldry, Professor of Media, Communications and Social Theory at the LSE, and Dipayan Ghosh, Co-director of the Digital Platforms & Democracy Project at the Harvard Kennedy School, argue that a much broader approach is needed to understand what they call the “consumer internet” business model of today’s large digital platforms, and make the case for a “new digital realignment”.

Two decades ago, the US, UK and many other societies, without exactly intending to, delegated to digital platforms the redesign of the spaces where human beings meet, ignoring the possible social consequences. The result today is a media ecosystem in which business models, like those of Facebook and Google, shape how our ideas and information circulate.

The results have often been disastrous. Big Tech has been forced to firefight, damping down the circulation of incendiary messages on WhatsApp, constraining the spread of false claims about vaccines, and confronting the plethora of misinformation about the global pandemic, particularly in the US. And yet, in the wake of one of the most divisive elections in US history, Google was found last week profiting from placing ads on sites such as Gateway Pundit that have spread false information about election turnout.

Something is deeply out of alignment here, but contemporary societies haven’t quite put their finger on what it is.

Yes, politicians are starting to take notice of the problem. October alone saw, in the US, a report from the Democratic-led House antitrust subcommittee on the excessive monopoly power of Google, Amazon, Facebook and Apple, as well as the Justice Department’s lawsuit against Google. Meanwhile in Europe, politicians and competition authorities signaled a tougher stance against Big Tech platforms.

But these interventions do not go nearly far enough. The reason is simple: they remain locked within a narrow antitrust model of how digital platforms should be regulated. But this essentially economic framework cannot deliver solutions to a problem it was not designed to solve: the negative social side-effects of platforms’ basic business model. We need a much broader approach.

No one intended things to work out this way. But combine the embedding of connected computer devices in daily life with a few hugely successful platforms and the internet’s early-1990s shift from a public to a commercial model, and you have the basic recipe for today’s problems. Only one further ingredient was needed – the business model of today’s large digital platforms – and bad consequences for public life predictably flowed.

In a new report, we call that business model the “consumer internet”: it is the outcome when the vast space of online interaction becomes managed principally for profit. The model has three sides: data collection on the user to generate behavioral profiles; sophisticated algorithms that curate the content targeted at each user; and the encouragement of engaging – even addictive – content on platforms to hold the user’s attention to the exclusion of rivals. The model is designed to do only one thing: maximize the profitable flow of content across platforms. And it applies in various forms across the industry – not just at Facebook, where one of us once worked.
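To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of an engagement-ranked feed. It is an illustration of the model just described, not any platform’s actual code: the UserProfile, the arousal score and the scoring function are all invented for the example. What it does capture is the model’s single objective – rank content by predicted engagement, with no term for accuracy or social harm.

```python
# A deliberately simplified, hypothetical sketch of the three-sided
# "consumer internet" model described above -- not any platform's real code.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """Side 1: a behavioral profile assembled from collected interaction data."""
    topic_affinity: dict[str, float] = field(default_factory=dict)

    def observe(self, topic: str, engaged: bool) -> None:
        # Every click, pause or share feeds back into the profile.
        delta = 0.1 if engaged else -0.05
        self.topic_affinity[topic] = self.topic_affinity.get(topic, 0.0) + delta


@dataclass
class Post:
    topic: str
    arousal: float  # invented proxy for how emotionally charged the content is (0..1)


def predicted_engagement(user: UserProfile, post: Post) -> float:
    # Sides 2 and 3: curation ranks purely by predicted engagement.
    # Nothing here rewards accuracy or penalizes falsehood, so emotionally
    # charged disinformation scores as well as, or better than, sober reporting.
    return user.topic_affinity.get(post.topic, 0.0) + post.arousal


def curate_feed(user: UserProfile, candidates: list[Post], k: int = 10) -> list[Post]:
    return sorted(candidates, key=lambda p: predicted_engagement(user, p), reverse=True)[:k]


# Example: a user who once engaged with election content is shown the most
# incendiary election post first.
user = UserProfile()
user.observe("election", engaged=True)
feed = curate_feed(user, [Post("election", 0.9), Post("election", 0.3), Post("gardening", 0.2)])
```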

The problem is not that platforms make a profit, but that they reconfigure the flows of social information to suit a business model that treats all content suppliers alike. When platform operators seek to maximize content traffic by whatever means, and disinformation merchants too just want to maximize traffic, their goals can easily interact in a dangerous spiral.

And here is the paradox: whatever corporations’ pro-social claims, the goals of dominant digital platforms and of bad social actors are in deep and largely hidden alignment. There is no social world without some bad actors; our misfortune is to inhabit a world that directly incentivizes their proliferation.

The risks of a computer-based infrastructure of social connection were predicted as long ago as 1948 by the founder of cybernetics, Norbert Wiener, who wrote: “It has long been clear to me that [with] the modern ultra-rapid computing machine . . . we were here in the presence of [a] social potentiality of unheard-of importance for good and for evil”. Wiener’s unease was ignored in the headlong rush to develop the internet commercially, but it is not too late, even now, to heed his warning.

Societies, through their regulators and lawmakers, must renegotiate the balance of power between the corporate platform and the consumer. A new digital realignment is needed. But how would this work?

First, we need radical reform of the market behind digital media platforms, enabling consumers to exercise genuine choice over how data that affects them is gathered, processed, and used – including the option to use platforms without any data being gathered at all. Locking in this last point would strike at the heart of the business model’s privacy-undermining impacts.

Second, much greater transparency must be imposed on platform corporations, uncovering not just their detailed operations but also the so-far-uncontrolled social harms from which they profit. Platforms should be required to disclose the full workings of their business models, revealing exactly where those models create advantages for bad social actors, and how the platforms gain from this. Platforms must be required to take urgent remedial action against the social harms that they discover or that are reported to them – for example, algorithmic discrimination, data-driven propaganda, or viral hate speech. And they should be compelled to stop forms of data collection that corrode broader social values.

Achieving this will involve reform of the legal frameworks that effectively exempt platforms from liability for what passes across them, whether via reform of Section 230 of the Communications Decency Act in the US or via the proposed Digital Services Act in the EU. Failing such remedial action in all key jurisdictions, more drastic measures against the social damage caused by the consumer internet’s business model, such as platform break-up, must be considered.

Without such radical reforms, societies will have no chance of salvaging a citizens’ internet from the wreckage of today’s consumer internet. These reforms are as relevant for Europe as for North America: regulation is more advanced in the former, yet in both the need for regulators to confront not only platforms’ economic harms but also their social harms remains unmet.

A lot is at stake. After a presidential election whose build-up was disfigured by toxic content on platforms large and small, the US has an incoming government potentially interested in platform reform, yet the danger of extreme right-wing politics spreading virally online is anything but resolved. Nor does most of Europe want its politics to go down the US’s path. Revisiting the apparently dry, technical details of platform regulation could hardly be more urgent.

This article represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Saulo Mohana on Unsplash

About the authors

Nick Couldry

Nick Couldry is Professor of Media, Communications and Social Theory in the Department of Media and Communications at LSE. As a sociologist of media and culture, he approaches media and communications from the perspective of the symbolic power that has been historically concentrated in media institutions. He is interested in how media and communications institutions and infrastructures contribute to various types of order (social, political, cultural, economic, ethical). His work has drawn on, and contributed to, social, spatial, democratic and cultural theory, anthropology, and media and communications ethics. His analysis of media as ‘practice’ has been widely influential. Over the past seven years, his work has increasingly focused on questions of data, and on the ethics, politics and deep social implications of Big Data and small data practices. He is the author or editor of 15 books and many journal articles and book chapters.

Dipayan Ghosh

Dipayan Ghosh is the co-director of the Digital Platforms and Democracy Project at the Harvard Kennedy School and a lecturer at Harvard Law School. He is the author of Terms of Disservice (Brookings). Ghosh previously served as a privacy and public policy advisor at Facebook, and before that as a technology and economic policy advisor in the Obama White House. His work on AI, privacy, disinformation, and internet economics has been cited and published widely, including in The New York Times, The Washington Post, HBR, CNN, MSNBC, NPR and BBC. Named to the Forbes 30 Under 30, he received a Ph.D. in electrical and computer engineering from Cornell and an MBA from MIT.

Posted In: Internet Governance
