A new Online Harms White Paper, published jointly by the UK government's Department for Digital, Culture, Media and Sport and the Home Office, calls for a system of regulation for tech companies with the goal of preventing online harms, including disinformation, cyberbullying, extremist content, and advocacy of self-harm. Here, Canadian academics Blayne Haggart of Brock University and Natasha Tusikov of York University, Canada, offer their thoughts on the UK proposals.
Amidst a growing consensus that social media and online platforms must be subject to public regulation, the United Kingdom has stepped up to do just that. In doing so, it has made a lot of people very angry.
On April 8, the British government presented to Parliament an Online Harms white paper. Among other things, it proposes an arm’s-length regulator that would be responsible for setting and enforcing rules prohibiting speech that is illegal (think: child porn and hate crimes) or socially damaging (think: cyberbullying and intimidation). Overall, the proposed framework seems to echo ideas suggested by U.K. law professor Lorna Woods.
Response to calls for regulation
While some critiques have been hyperbolic, the more measured ones have focused on the paper's vagueness about definitions and how the regulator will be set up, and on concerns that the regulator will be making rules addressing harmful speech that isn't (currently) illegal.
Regulation is where ideals meet reality. It’s easy to talk about “free speech” or the need for public regulation in the abstract. No matter the issue, translating such concepts into practice is always a messy process, fraught with compromise. While the white paper only applies to the U.K., it offers the rest of us the opportunity to think realistically about what public regulation of social media should look like.
For all its flaws, the white paper is a responsible, if incomplete, attempt to address real social issues.
For example, some people might be uncomfortable allowing a government agency (arm’s length or otherwise) to set rules governing harmful, but not illegal, speech.
But here’s the thing: we’re already in a situation where regulators are setting rules censoring such speech, only in this case the regulators are Facebook and other giant platforms. These private, monopolistic, unaccountable companies already set arbitrary rules that erase otherwise-legal speech from our online lives based on nothing but their own whims and prudishness.
Such rules are no less significant because they’re set by private companies. As we’ve previously remarked, we are in the “worst of both worlds” when it comes to online speech regulation: private, unaccountable regulation and governments exerting extra-legal behind-the-scenes pressure to ban things they don’t like.
Somebody always sets the rules. We need to ask: who, how and for what purpose? We’d rather have these rules set by an accountable, reviewable public agency than by Mark Zuckerberg.
Rules create winners and losers
Concerns that the white paper rules would stifle “free speech” tend to ignore all the voices that are already stifled by the current de facto online rules. These include the women and people of colour driven offline by stalkers and trolls, and the reporters and public figures (especially women) for whom the price of entry into the digital public conversation is a ceaseless torrent of rape and death threats.
Although not intended as such, the argument that rules governing such behaviour would stifle legitimate speech is effectively an argument for continuing to stifle the speech of those currently affected by these behaviours.
The uncomfortable truth is that there are always rules. These rules will always exclude some people and ideas. The choice here isn’t between free speech and censorship; it’s between who will and won’t be heard.
That every system creates winners and losers makes it absolutely crucial that any rule-making process be both public and accountable. As far as we can tell, this describes the U.K. process: it involves open consultations, including with civil society actors and user groups. The white paper’s proposals remain general and open to interpretation, including how the actual rules will be set. Critics should see all this as an opportunity to shape the regulatory process, not as a burden to be resisted.
Pay more attention to structural issues
The white paper’s biggest flaw is that it almost completely ignores the systemic conditions that have made commercial online platforms so problematic. Their personalized-advertising, algorithm-fuelled, maximized-engagement-at-any-cost business model has played a large role in creating a poisonous online environment.
Banning personalized advertising, limiting data collection and usage and addressing market-concentration issues could go a long way toward cleaning up the online environment without resorting to heavy-handed speech regulation. Failure to do so would likely severely limit the regulator’s effectiveness, forcing it into ever-more-direct interventions and confirming its harshest critics’ fears. Canadian regulators, take note.
Online platforms aren’t responsible for all the world’s ills, but nor are they blameless. Greater public regulation is needed. We believe the U.K. white paper is a step in the right direction, arrived at through democratic processes and involving public consultations. It proposes rules that would be decided upon by a democratically elected government and implemented by an arm’s-length regulator, presumably isolated from direct government interference. This, for better or worse, is what democratic regulation looks like.
This article is republished from The Conversation under a Creative Commons license. Read the original article. This article represents the views of the authors, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science.