
Lorna Woods

Alexandros Antoniou

September 3rd, 2024

Is the Online Safety Act “fit for purpose”?



How can the UK’s Online Safety Act be used to tackle problematic behaviour online in times of crisis? Lorna Woods and Alexandros Antoniou of the University of Essex look at whether the Act is fit for purpose in light of the recent riots and the online hate speech and misinformation that likely contributed to the unrest.

Media coverage of the riots across the UK has highlighted the part played by social media platforms in the widespread unrest. These platforms were used to organise public disorder, incite racial hatred, inflame tensions, and share “information”, much of it false, that exacerbated matters. With the UK’s Online Safety Act 2023 enacted to much fanfare last autumn, accompanied by the stated aim of making the UK the safest place in the world to go online, questions now arise about how the Act, and the regulator entrusted with online safety matters (Ofcom), will deal with such online activities, and whether there are any gaps in the new regime.

Delayed impact

The short answer is that the Online Safety Act will not have an immediate effect, because the duties it introduces do not yet bite on social media and search services.

The Act imposes two sorts of duties on regulated service providers: it requires companies both to understand the risks presented by the design and functionality of their service, and to mitigate those risks in relation to content that is criminal and content that is harmful to children. While the Act outlines compliance requirements in relation to these duties, it also specifies that the regulator, Ofcom, should provide further detail on the requirements in guidance and codes. Until these documents are finalised, the duties remain non-operational; they are expected to be in place by early 2025. (Given the extensive groundwork required of Ofcom in preparing the codes, this timescale is not surprising.)

Treatment of illegal content under the regime

Hate speech of the sort that formed the backdrop to the riots would likely trigger the duties under the Act, were they currently enforceable, as would inciting violence and organising and facilitating rioting. So, for example, social media services would be expected to have systems designed to limit the likelihood of people encountering this content and to ensure it is available for as short a time as possible. More generally, services should have a system ensuring that, on receiving a valid complaint, the content is dealt with appropriately in accordance with the service’s terms of service.

However, even if the duties were in force, it is uncertain whether the Act is suited to what is essentially crisis response. The duties require service providers to improve their services in general terms, focusing on the systems they have in place, rather than mandating the removal of specific items of content (e.g., individual posts). Ofcom is empowered to evaluate the overall effectiveness of a service’s content moderation and other systems, but neither Ofcom nor the Government can direct providers to deal with specific items of content in any particular way. Ofcom cannot tell service providers to take particular pieces of content down; this is very different from the broadcasting regime under the Communications Act, where Ofcom makes decisions about the acceptability of specific programmes.

Challenges with addressing mis- and disinformation

The Act’s ability to tackle mis- and disinformation remains uncertain. It depends on how we categorise these types of content by reference to existing UK law. Misinformation is relevant to the riots because the circulation of inaccurate statements about the attacker in Southport – claiming that he was a Muslim immigrant – apparently contributed to the febrile atmosphere.

The previous Government removed from the legislation, as it was going through Parliament, the Act’s provisions relating to content harmful to adults, which might have caught some misinformation. The result is that misinformation would only be caught if it crossed the criminal threshold or, in the case of large or risky social media services (e.g., X), if that platform’s terms of service prohibited that sort of content (which the Act does not require).

For criminal disinformation, the duties under the Act would be triggered if the content posted fell within either the foreign interference offence or the false communications offence. Although both are criminal in nature, the foreign interference offence is classified in the Act as a ‘priority offence’, setting in motion more stringent obligations (e.g., to prevent user exposure to such content) than the false communications offence (for which no such preventative duties apply). While there are growing indications of some foreign involvement in the dissemination of inaccurate information, establishing a link between the actor spreading the disinformation and the foreign power, or proving the required “interference effect”, remains challenging. It therefore seems uncertain that a service would have to take action in respect of this category of material.

As for the false communications offence, the person spreading the information must know the information to be false – and this might be difficult to show where information is spreading virally, and could be problematic even in relation to the original poster. It was reported that a woman was arrested on suspicion of public order offences in relation to a social media post that misidentified the attacker in the Southport murders, but she claims she was unaware the information was false. Of course, an arrest does not guarantee a prosecution or conviction, so we do not yet know how the courts will treat this requirement. More generally, the Act, which problematically focusses heavily on assessing whether content meets certain thresholds, does not directly address the role of platforms in amplifying harmful content and connecting groups.

Alternative legal mechanisms

The video-sharing platform rules are a pre-existing set of rules that may be relevant to a sub-set of social media providers. Found in the Communications Act 2003, these rules – like the Online Safety Act – do not impose direct content controls but require services to implement safeguards in relation to certain types of content. They mandate that in-scope services have procedures in place to deal with illegal content and hate speech, with Ofcom holding enforcement authority. Again, the regime does not give the regulator the power to direct a service provider to take content down or limit its availability. The question the rules ask is whether the service provides adequate safeguards in general, thereby reducing the likelihood that the platform is used to spread the impugned categories of content. However, the rules only apply to services established in the UK, excluding many platforms – though Snapchat and TikTok would be caught.

A further possibility would be to rely on the video on demand rules, under which UK-based on-demand programme services (e.g., Now TV) must ensure that they do not contain any material likely to incite hatred based on race, religion, nationality or sex. These rules would apply not to the platforms but to the person posting/sharing the content, so long as what they provide qualifies as an “on-demand programme service”. Ofcom has held that some YouTube channels meet these criteria, thus offering a potential means of addressing certain self-defined journalists and online DIY news anchors spreading false information under the guise of independent reporting. However, these rules remain minimal, are currently limited to UK-established providers, and Ofcom has yet to take an active enforcement stance in this area.

None of these frameworks, however, adequately deals with crisis situations. The Online Safety Act provides one potential mechanism, designed for “special circumstances”. Here, the initiative lies with the Secretary of State, who may direct Ofcom, where there is a threat to public safety, to take certain steps, such as requiring a company to explain publicly how it is handling the threat (e.g., rioting in the streets). This is more of a transparency tool than a mechanism for dealing with specific content. Importantly, Ofcom can only act once directed by the Secretary of State, and there is no indication that such a direction has been issued so far.

The new Government has signalled its intention to “look more broadly at social media” in the aftermath of the riots. Recent events, along with London Mayor Sadiq Khan’s criticism that the Act is “not fit for purpose”, highlight the need to reassess the Act’s scope and rectify current limitations that could hinder Ofcom’s ability to act in similar future incidents.

This post represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Ehimetalor Akhere Unuabona on Unsplash

About the author

Lorna Woods

Lorna Woods, OBE is Professor of Internet Law at the University of Essex and a member of the Human Rights Centre. She has worked in the field of media and communications regulation for more than thirty years, publishing on both domestic and European law. From 2018 to 2023 she worked with Carnegie UK Trust on a project on the reduction of harm in social media and is now a consultant to the Online Safety Act Network. She is an affiliate fellow at the Minderoo Centre for Technology and Democracy at Cambridge University as well as a member of the policy network at the Centre for Science and Policy there.

Alexandros Antoniou

Alexandros Antoniou is a Senior Lecturer in Media Law at the University of Essex. He researches communications law and intellectual property asset management. His work in these areas has been published in leading journals such as The Journal of Media Law, Communications Law, Entertainment Law Review and the Journal of Intellectual Property Law and Practice. He is a legal correspondent for the European Audiovisual Observatory.

