
Nick Couldry

October 12th, 2021

It’s time to stop trusting Facebook to engineer our social world


As a recent US Senate hearing heard that Facebook prioritises its profits over online safety, Nick Couldry, Professor of Media, Communications and Social Theory at the London School of Economics and Faculty Associate at the Berkman Klein Center for Internet and Society, Harvard University, argues that public scrutiny and a tighter regulatory framework are required to keep the social media giant in check and to limit the social harms that its business model perpetuates.

The world’s, and in particular the USA’s, reckless experiment with its social and political fabric has reached a decision-point. Almost a year ago Dipayan Ghosh of the Harvard Kennedy School of Government and I argued that the business models of Facebook and other Big Tech corporations unwittingly aligned them with the goals of bad social actors, and needed an urgent reset. Why? Because they prioritize platform traffic and ad revenue over and above any social costs.

Yet, in spite of a damning report by the US House of Representatives Judiciary Committee last October and multiple lawsuits and regulatory challenges in the US and Europe, the world is no nearer a solution. In the case of Facebook, however, whistleblower Frances Haugen’s shocking Senate testimony last week confirmed exactly what we argued: that this large US-based corporation is “buying its profits with our safety”, because it consistently prioritizes its business model over correcting the significant social harms it knows it causes.

Featured image: Photo by Glen Carrie on Unsplash

As Robert Reich notes, it would be naïve to believe that accountability will follow the public outcry. That’s not how the US works anymore, nor indeed many other democracies. Meanwhile Mark Zuckerberg’s response to the new revelations rang hollow. Of course, he is right that levels and forms of political polarization vary across the countries where Facebook is used. But no one ever claimed that Facebook caused the forces of political polarization, which inevitably are variable, only that for its own benefit it recklessly amplified them.

Nor, as Zuckerberg rightly protests, does Facebook “set out to build products that make people angry or depressed”: why would they? But the charge is more specific: that Facebook configured its products to maximize the measurable “engagement” that drives its advertising profits. Facebook’s 2018 newsfeed algorithm adjustment, cited by Haugen, was a key example. Yet we know from independent research that falsehoods travel faster, more deeply and more widely than truths. In other words, falsehoods generate more “engagement”. So, optimizing for “engagement” automatically optimizes for falsehoods too.

It is not good enough for Facebook now, under huge pressure, to claim credit for the “reforms” and “research” it conducted in earlier attempts to mollify an increasingly hostile public. Facebook can say, as Mark Zuckerberg just did, that “when it comes to young people’s health or well-being, every negative experience matters”, but its business model says otherwise, and on a planetary scale. It is time for that business model to be examined in the harsh light of day.

The problem with the underlying business model

In a report published a year ago, Dipayan Ghosh and I called this model the “business internet”. Its core dynamics are by no means unique to Facebook, but let’s concentrate there. The business internet is what results when the vast space of online interaction becomes managed principally for profit. It has three sides: data collection on the user to generate behavioral profiles; sophisticated algorithms that curate the content targeted at each user; and the encouragement of engaging – even addictive – content on platforms that holds the user’s attention to the exclusion of rivals. A business model such as Facebook’s is designed to maximize the profitable flow of content across its platforms.

If this sounds fine on the face of it, remember that the model treats all content producers and content the same, regardless of their moral worth. So, as Facebook’s engineers focus on maximizing content traffic by whatever means, disinformation operators – wherever they are, provided they want to maximize their traffic – find their goals magically aligned with those of Facebook. All they have to do is circulate more falsehoods.

Facebook will no doubt say it is doing what it can to fix those falsehoods: many platforms have tried the same, even at the cost of damping down the traffic that is their lifeblood. But the problem is the underlying business model, not the remedial measures, even if they are well-intentioned (which many doubt). It is the business model that determines it will never be in Facebook’s interests to control adequately the toxic social and political content that flows across its platforms.


The scale of the problem is staggering. As recent Wall Street Journal articles detail, Facebook’s business model (and its obsession with controlling short-term PR costs) pushes it to connive at celebrities posting content that even Facebook’s rules normally ban, to discount the impact of Instagram’s image culture on teen girls’ self-esteem, to misunderstand the consequences for political information when it tweaks its newsfeed algorithm, and to fail in its own drive to encourage Covid vaccine take-up.

Some Facebook staff seem to believe that the Facebook information machine has become too large to control.

Yet even so, we can easily underestimate the scale of the problem. We may dub Instagram the ‘online equivalent of the high-school cafeteria’, as the Wall Street Journal does, but what school cafeteria ever came with a continuously updated and universally accessible archive of everything anyone said there? The problem is that societies have delegated to Facebook and other Big Tech companies the right to reengineer how social interaction operates – in accordance with their own economic interests and without restrictions on scale or depth. And now we are counting the cost.

A turning point?

But thanks to Frances Haugen, through her Senate testimony and her role in the Wall Street Journal revelations, society’s decision-point has become startlingly clear. Regulators and governments, civil society and individual citizens could consign the problem to the too-hard-to-solve pile, accept Facebook will never fully fix it, and allow the residual toxic waste (inevitable by-product of Facebook’s production process) to do whatever harm it can to society’s and democracy’s fabric. Or key actors in various nations could decide that the time for coordinated action has come.


Assuming things proceed down the latter, less passive path, three things require urgent action.

  1. Facebook should be compelled by regulators and governments to reveal the full workings of its business model, and everything it knows about their consequences for social and political life. Faced with clear evidence of major social pollution, the public cannot be expected to rely on the self-motivated revelations of Facebook’s management and their engineers working under the hood.
  2. Based on the results of that fuller information, regulators should consider the means they have to require fundamental change in that business model, on the basis that its toxicity is endemic and not merely accidental. If they currently lack adequate means to intervene, regulators should demand extended powers.
  3. Equally urgent action is needed to reduce the scale on which Facebook is able to engineer social life, and so wreak havoc according to its whim. At the very least, the demerger of WhatsApp and Instagram must be put on the table by the US FTC. But a wider debate is also needed about whether societies really need platforms on the scale of Facebook to provide the connections on which social life undoubtedly depends. The time has passed when citizens should accept being lectured by Mark Zuckerberg on why they need Facebook to “stay in touch”. More comprehensive breakup proposals may follow from that debate. Meanwhile, analogous versions of the “business internet”, in Google and elsewhere, also need to be examined closely for their social externalities.

 

Some fear that the medicine of regulatory reform will be worse than the disease. As if the poisoning of democratic debate, the corrupting of public health knowledge in a global pandemic, and the corrosion of young people’s self-esteem, to name just some of the harms, were minor issues that could be hedged.

Something like these risks was noted at the beginnings of the computer age, when in 1948 one of its founders, Norbert Wiener, argued that with “the modern ultra-rapid computing machine . . . we were in the presence of [a] social potentiality of unheard-of importance for good and for evil”.

Nearly 75 years later, Wiener’s predictions are starting to be realized in plain sight. Are we really prepared to go on turning a blind eye?

This article gives the views of the author and does not represent the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Nick Couldry

Nick Couldry is Professor of Media, Communications and Social Theory in the Department of Media and Communications at LSE. As a sociologist of media and culture, he approaches media and communications from the perspective of the symbolic power that has been historically concentrated in media institutions. He is interested in how media and communications institutions and infrastructures contribute to various types of order (social, political, cultural, economic, ethical). His work has drawn on, and contributed to, social, spatial, democratic and cultural theory, anthropology, and media and communications ethics. His analysis of media as ‘practice’ has been widely influential. In the past seven years, his work has increasingly focussed on questions of data, and on the ethics, politics and deep social implications of Big Data and small data practices. He is the author or editor of 15 books and many journal articles and book chapters.

