
Jakob Angeli

December 8th, 2021

Quo vadis, Online Safety Bill? Closing gaps in the new regulatory framework


In November, the LSE Department of Media and Communications, in cooperation with the Joint Parliamentary Committee on the Online Safety Bill, hosted two roundtables that brought together leading academics, policy experts, and Committee members to discuss the framing of key issues in the draft Bill (write-ups available here). Here, recent MSc Politics & Communication graduate Jakob Angeli highlights three central areas of the Bill that sparked controversy during the discussions and call for reconsideration.

As the UK Government’s Online Safety Bill inches closer to becoming law, questions remain about the implications of the new regulatory regime for businesses, individuals, and civil society. Previous contributions on this blog have looked at potential consequences for journalism, the way media literacy is addressed in the Bill (here and here), and how the Facebook Files lay bare shortcomings of the proposed legislation. However, concerns persist that further questions remain unaddressed under the current provisions. Three issues merit particular attention.

Are protections and exemptions adequately framed?

The draft Online Safety Bill grants special protection to journalistic content and ‘content of democratic importance.’ However, when it comes to exemptions, it might miss its target. The implicit understanding of journalism contained in the Bill seems to stem from an era when a handful of established outlets dominated the media landscape, as the wording ‘recognised news publisher’ in Clause 40 suggests. Not only does such a framing grant disproportionate privileges to institutional players, but it also leaves room for application to outlets known for spreading false and misleading information, such as Russia Today or CGTN. Many types of material not subject to editorial control (comments, discussion forums, etc.) also fall outside the scope of regulation. While institutional exemptions need to have a place in the Bill, significant loopholes and a lack of clarity remain.

In addition, ‘content of democratic importance’, defined as ‘that which appears or is specifically intended to contribute to democratic political debate,’ might not go far enough in capturing what should be protected. Roundtable participants repeatedly highlighted that the wording should be changed to include speech that enjoys heightened protection under Article 10 of the European Convention on Human Rights. ‘Content published in the public interest’ might constitute a more adequate framing, as this would broaden the scope of protection to also include content from actors such as NGOs.

Anonymity vs. identity verification – a false dichotomy?

Being able to stay anonymous is a cornerstone of free expression in the digital sphere. Anonymity allows vulnerable and marginalised individuals to make their voices heard without fear of repercussions. It also affords children and adolescents opportunities to explore their identity and build communities with like-minded peers. However, the shocking amount of racist abuse directed at English players during this year’s European football championship and the murder of MP David Amess have reinvigorated discussions about the dark side of online anonymity, with voices calling for comprehensive identity disclosure requirements to mitigate harm and prevent online radicalisation. While these examples attest to the severity of the problem, it bears mentioning that the link between anonymous accounts and hate speech might be less straightforward than commonly assumed.

Experts highlighted that anonymity is not an all-or-nothing concept, and that graded solutions are conceivable that take into account a platform’s size and audience. These range from granting users privileges only after a prolonged period without a breach of terms of service to safely storing user identities with trusted third parties, thereby making users effectively traceable but anonymous vis-à-vis one another. Similarly, several roundtable participants underlined the need for a robust and independent third-party age verification sector. In its present form, the Online Safety Bill insufficiently addresses this tension between anonymity and verifiability, and risks granting platforms too much leeway in deciding on user identification for themselves.

Ensuring that the new framework enjoys trust and legitimacy

While the Bill goes some way towards strengthening communication regulator Ofcom’s oversight powers, more could be done to make sure that service providers are kept from marking their own homework. Workshop participants suggested that the regulator should be obliged to regularly consult with the public on important questions such as service provider categories or changing definitions of harm, thereby ensuring that the regime keeps pace with societal and technological change.

Furthermore, independent researchers need a legal basis for accessing platform data. Currently, access to reliable data is strongly limited, and research is frequently paid for by service providers themselves. A new, independent scrutiny committee could exercise these additional control functions, monitor the regulatory regime on a continuous basis, and access platform data for research purposes.

Overall, it is vital to guarantee that the regime is legitimate in the eyes of the broader citizenry, especially because the new regulations will bring about increased intervention by platforms. Ultimately, transparency is sorely needed on all levels. This extends not only to decisions about data usage but also, most crucially, content moderation and algorithmic design. We still lack a clear understanding of the way that algorithms suppress or amplify certain types of content, although these mechanisms have far-reaching consequences for virtually every aspect of society: among many other examples, there is evidence that algorithmic recommendation and curation systems influence viewpoint diversity on social networks, impact people’s decisions to get vaccinated against COVID-19, and can entrench educational inequalities.

The Online Safety Bill’s success will be measured not least by the extent to which it contributes to opening algorithmic black boxes and provides a corrective to the underlying business model of the platform economy, which maximises engagement in order to harvest user data for profit.

This article gives the views of the author and does not represent the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Jakob Angeli

Jakob Angeli is a recent graduate of the MSc Politics and Communication programme at LSE. He holds BA degrees in Linguistics and Social Sciences from Humboldt University Berlin and an MA in Language and Communication from Technical University Berlin, and has previously worked as a research assistant at the WZB Berlin Social Science Center and as an educator for a German NGO. Jakob's research interests encompass media policy and regulation, European politics, and the application of corpus linguistic methods to ideational research.

Posted In: Internet Governance
