In this post Poppy Wood, Reset.tech’s Senior Advisor on UK policy and political strategy, argues that it is imperative to know the difference between freedom of speech – a democratic value – and freedom of reach – a false claim to the right to manipulate a broken business model. This distinction, she explains, will help the UK create strong regulatory policy for online content.
The UK Government’s ambitious plans to ‘make the UK the safest place in the world to go online, and the best place to grow and start a digital business’ have been a long time in the making.
The Online Harms proposals were first set out in the Online Harms White Paper published in April 2019, followed by an interim consultation response in February 2020 and the full consultation response in December 2020. The new regulatory framework will continue to progress, slightly rebadged, via the forthcoming Online Safety Bill expected in 2021. The core proposal is a new statutory duty of care, enforced by an independent regulator, now confirmed to be Ofcom.
The aim of the duty of care will be to improve safety for users of online services, primarily by taking action against activity that ‘caus[es] significant physical or psychological harm to individuals’. Companies will ‘complete an assessment of the risks associated with their services and take reasonable steps to reduce the risks of harms they have identified occurring’.
There has been much discussion about whether and how such proposals might interfere with free expression or with the functioning of a free and impartial press. At points, the government’s proposals appeared to veer too closely towards a content-focused notice-and-takedown regime, rather than an approach that looks at ways to minimise risk throughout the system. Reset supports systems-based regulation as the tech-savvy, human-rights-friendly approach to dealing with these thorny issues.
Such discussions about freedom of expression online often start with reference to the supposed “trade-off” between regulating digital media platforms whose products harm the public and preserving free speech. The argument suggests that it is a zero-sum game – that dialling up public safety means dialling down freedom of speech. This rhetoric is false and undermines legitimate attempts by governments to intervene in the information marketplace. It is a narrative promoted by the tech companies, whose power and resources have allowed it to dominate the debate.
The reality is that there have always been rules in the digital media marketplace. Some are set by governments – such as prohibitions on incitement to violence – while most are set by the companies themselves, policing content to comply with social norms, customer expectations, and democratic values. These rules are applied inconsistently and often ignored. And they are only rarely explained in terms of how they enable, rather than restrict, a greater range of public expression.
Architecture and business models
We argue that freedom of speech can be preserved while implementing a robust digital regulatory regime. The key lies in altering the architecture of the platforms and the logic of content curation — that is, how digital media platforms decide what content to serve to consumers.
The core of the problem we face today is not primarily a result of malign content appearing on social media. The problem is that the business model of platforms is designed to amplify and spread that content which best captures attention that can be sold to advertisers. That means that recommender or curation algorithms have a bias towards the sensational and outrageous, which is often synonymous with the malign. Content that once remained at the margins of the marketplace of ideas is systematically dragged into the mainstream and normalized through repetition before a broad audience. This is not about the right of an individual to post on social media; it is about the decision of the platforms to distort the public sphere by amplifying harmful-but-sensational views and sentiments far out of proportion to their actual prevalence in the real world.
What is clear is that rather than focusing on content, it is better to focus on (re)designing these curation systems so that harmful material is no longer algorithmically and automatically spread at scale. People should remain free to state their opinions online (provided they neither break the law nor violate the terms of service of a commercial contract), but those opinions do not have to reach the widest possible audience in ways that sacrifice the public interest for advertising revenue. This is the difference between freedom of speech and freedom of reach. The former is a democratic value. The latter is a false claim to the right to manipulate a broken business model.
The UK Government is aware of these issues and has committed itself to ensuring protections for freedom of expression within the regulatory framework, as well as strong protections for journalistic content.
The government’s final policy position, set out in the full response to the Online Harms consultation, states that “[t]he government is committed to defending the invaluable role of a free media and is clear that online safety measures must do this”. The response also provides clear exemptions for journalistic content, stating that:
[c]ontent and articles produced and published by news websites on their own sites, and below-the-line comments published on these sites, will not be in scope of legislation. In order to protect media freedom, legislation will include robust protections for journalistic content shared on in-scope services.
It is unclear how far this exemption will reach, which outlets will fall within its scope, and how they will qualify. No doubt there will be drafting challenges ahead for the government to meet that commitment, and there will need to be extensive dialogue with a range of stakeholders.
A further area of contention and confusion is the government’s proposal to bring “services where users expect a greater degree of privacy – for example, online instant messaging services and closed social media groups” within scope of the regulation. There is no clarity on how the government expects service providers to achieve this, nor on the implications for end-to-end encryption, which facilitates much of the privacy users experience online. While the government has put much thought into protecting journalistic content online, this approach appears at odds with preserving the full anonymity of journalistic sources. The hope is that the exemption would stretch to cover certain journalistic practices as well as sources, but the details remain to be seen.
The government is right to press ahead with this new regulatory agenda. With the consultation period now over, the next step will be the introduction of draft legislation, likely in the form of an Online Safety Bill, possibly with a period of pre-legislative scrutiny. This crucial process will need the active participation of all stakeholders to ensure an online environment that is safe for the public without clamping down on freedom of expression.
This article was originally commissioned as part of the UK component of the Media Influence Matrix Project, set up to investigate the influence of shifts in policy, funding sources and technology on contemporary journalism. It is due to report in summer 2021. This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.