
Esme Fowler-Mason

February 8th, 2023

The Online Safety Bill needs more algorithmic accountability to make social media safe


Estimated reading time: 5 minutes


The UK’s Online Safety Bill, which has been slowly making its way through parliament for some time, has attracted its fair share of criticism as it attempts to make the UK the “safest place in the world to be online,” while defending free expression. In this piece, Esme Fowler-Mason emphasises the need for the Online Safety Bill to be adequately equipped to tackle algorithmically generated harm.

One of the key goals of the UK’s Online Safety Bill, which returned to the House of Commons in December 2022 after six months of delays, is to reduce harm on social media platforms.

The term ‘harm’ covers a broad range of issues, including the worsening mental health of young people, screen addiction, disinformation, polarisation and extremism. These issues are all exacerbated by machine-learning algorithms, which curate the content we see with the goal of keeping us online for longer.

However, throughout the drafting process, there has been a lack of attention afforded to tackling algorithmically generated harms. The Draft Bill has been criticised for prioritising content-based issues at the expense of more opaque problems such as platform design and algorithm use. The legislative discourse makes no express reference to the concept of algorithmic accountability, and Ofcom is given extremely broad discretion to determine how it will deal with the issue.

Given the extent of harm caused by machine learning algorithms, and the vast and easily accessible scholarship on the matter, this is surprising. The lack of engagement by Parliament with academics in the field of algorithmic accountability could lead to a legislative instrument unable to make social media safe.

How do algorithms create harm?

Machine-learning algorithms involve a computer system training itself to spot patterns and correlations in data sets and to make predictions based on these. They are programmed to increase user engagement so that technology companies can keep people online, gather their data, and make money from advertisements.

Algorithms curate the content we see based on our micro-behaviours, such as how long we watch a video, or what posts we do, or don’t, ‘like’. The more cat videos you watch, the more the algorithm will show you. The more fake news you consume, the more you will be shown.
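To make this feedback loop concrete, the toy sketch below (in Python, with invented names and data, not any platform’s real system) ranks content purely by predicted engagement and updates a user’s profile from their micro-behaviours, so that whatever they watch most is what they are shown most.

```python
# A deliberately simplified, hypothetical illustration of an engagement-driven
# recommender; the names and logic are invented for this post, not taken from
# any real platform.
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    topic: str

@dataclass
class UserProfile:
    # Accumulated engagement (e.g. seconds watched) per topic.
    topic_engagement: dict[str, float] = field(default_factory=dict)

def predicted_engagement(user: UserProfile, item: Item) -> float:
    # The more a user has engaged with a topic, the higher items on it score.
    return user.topic_engagement.get(item.topic, 0.0)

def rank_feed(user: UserProfile, candidates: list[Item]) -> list[Item]:
    # The feed is ordered purely by predicted engagement.
    return sorted(candidates, key=lambda i: predicted_engagement(user, i), reverse=True)

def record_view(user: UserProfile, item: Item, watch_seconds: float) -> None:
    # Micro-behaviours feed straight back into the profile, closing the loop:
    # watching more cat videos makes cat videos rank higher next time.
    user.topic_engagement[item.topic] = user.topic_engagement.get(item.topic, 0.0) + watch_seconds

user = UserProfile()
record_view(user, Item("v1", "cats"), watch_seconds=120)
feed = rank_feed(user, [Item("v2", "news"), Item("v3", "cats")])
print([i.item_id for i in feed])  # ['v3', 'v2']: past behaviour now dominates
```

Nothing in this loop weighs accuracy, wellbeing or harm; the only objective is keeping the user engaged.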

Algorithms also amplify extreme content because this is what keeps us engaged. Whilst this can be positive, it also fosters the growth of paedophile rings on YouTube, extreme right-wing groups on Facebook, and pro-eating disorder communities on TikTok.

The severity and pervasiveness of harms caused by algorithms has led to the development of the concept of algorithmic accountability. This intensely researched topic seeks to apportion responsibility for harm caused when algorithmic decision-making has discriminatory or unjust consequences.

Transparency Reporting

Whilst the Online Safety Bill places various requirements on social media companies to inform the public of the steps they are taking to keep users safe, transparency reports still need only be provided to Ofcom (s 37 Online Safety Bill). This means harmful decisions and actions of the private technology sector will remain hidden from the public.

The Joint Committee Report on the Draft Bill recommended that transparency reports produced by service providers should be published ‘in full in a publicly accessible place.’ However, algorithmic accountability literature posits that this is going too far, as it could create privacy risks and allow malicious actors to ‘game the system.’

Algorithmic accountability literature suggests neither the Joint Committee nor the Online Safety Bill has got it quite right. Rather, transparency reports must be made publicly accessible for meaningful accountability; however, they should be translated into simple language and anonymised to protect user information, sensitive code and so on.
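As a rough illustration of what such a middle ground could look like in practice, the hypothetical sketch below (with field names invented for this post, not drawn from the Bill) reduces a raw moderation log to aggregate, plain-language figures, publishing trends without exposing user data or proprietary code.

```python
# Hypothetical sketch only: the log structure and report fields are invented to
# illustrate aggregation and anonymisation, not taken from the Bill or Ofcom.
from collections import Counter

def build_public_report(moderation_log: list[dict]) -> dict:
    """Reduce a raw moderation log to aggregate, anonymised figures."""
    removals_by_category = Counter(entry["harm_category"] for entry in moderation_log)
    return {
        "total_items_actioned": len(moderation_log),
        "removals_by_category": dict(removals_by_category),
        # Deliberately excluded: user identifiers, post content, model internals.
    }

log = [{"harm_category": "terrorism"}, {"harm_category": "csea"}, {"harm_category": "terrorism"}]
print(build_public_report(log))
# {'total_items_actioned': 3, 'removals_by_category': {'terrorism': 2, 'csea': 1}}
```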

Transparency reports must be publicly accessible, as transparency is a prerequisite for accountability, but means little without a critical audience. If there is not a ‘robust regime of public access’, the decisions made by Big Tech oligopolies, like Meta and TikTok, will not be subject to scrutiny from broader publics and interest groups. Publicly accessible transparency reports would allow for a broader and more diverse uptake of knowledge, by media watchdogs, academics, NGOs and so on. These actors can spot trends which could serve as evidence and justification for requesting further information from platforms and push for change.

User awareness of such patterns, such as the persistent amplification of terrorist videos or child sexual exploitation and abuse material, could galvanise technology companies to combat them through investigation and design change, for fear of losing their user base and, ultimately, money. Similarly, making reports publicly available also allows society to see whether Ofcom itself is doing its job properly.

Parliament should at least consult academics in the field of algorithmic accountability to ensure the final instrument is equipped to effectuate meaningful transparency, and in doing so, reduce the risk of user harm from algorithms.

Ethical Design Requirements: Altering the algorithm

The Online Safety Bill falls below the standards posited in the field of algorithmic accountability because it does not create a statutory duty of care on social media companies to design algorithms in accordance with a safety-by-design framework. At present, it would only require Ofcom to issue non-binding guidance on safety-by-design features (s 37 OSB). As this guidance is advisory in nature, there will be no explicit legal requirement for service providers to input values other than user engagement and profit maximisation into their algorithms.

In line with algorithmic accountability literature, the legislation should establish minimum safety-by-design standards, such as having limits on auto-play and auto-recommendation, as suggested by the Joint Committee Report on the Draft Bill.
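One hypothetical way such minimum standards could be expressed as checkable defaults is sketched below; the specific limits and setting names are invented for illustration and are not figures proposed by the Joint Committee or the Bill.

```python
# Hypothetical illustration of minimum safety-by-design defaults; the values
# below are invented, not requirements taken from the Bill or Joint Committee.
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyByDesignBaseline:
    autoplay_off_by_default: bool = True            # auto-play requires an opt-in
    max_autorecommendations_per_session: int = 20   # cap on algorithmic recommendations
    chronological_feed_offered: bool = True         # a non-algorithmic feed must exist

def compliance_failures(platform: dict, baseline: SafetyByDesignBaseline) -> list[str]:
    """List the baseline requirements a platform's settings fail to meet."""
    failures = []
    if platform.get("autoplay_default_on", True) and baseline.autoplay_off_by_default:
        failures.append("auto-play must be off by default")
    if platform.get("recs_per_session", float("inf")) > baseline.max_autorecommendations_per_session:
        failures.append("auto-recommendations exceed the per-session cap")
    if baseline.chronological_feed_offered and not platform.get("offers_chronological_feed", False):
        failures.append("a chronological feed option must be offered")
    return failures

print(compliance_failures({"autoplay_default_on": True, "recs_per_session": 500}, SafetyByDesignBaseline()))
```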

Much of the literature on algorithmic accountability advocates re-designing algorithms and platforms with a view to reducing the harms they cause. This is a proactive response to the wealth of evidence demonstrating the harms caused by design, for two reasons: it accommodates the inevitable prevalence of machine-learning algorithms and social media use, and it envisions ways users can still benefit from algorithms whilst reducing the unintended harms they can cause.

The careful framing of an algorithm as ‘impartial’ is an obfuscation that makes it seem neutral, despite the millions of evaluations it makes each second. This distracts us from the reality that algorithms are value-laden, embedding the commercial logic of the attention economy. By drawing our attention to this, the literature allows us to see the possibility of inputting alternative values into an algorithm, such as diversity, democracy, or ‘no-harm.’

Algorithms can be programmed to broaden a user’s world view to combat the ‘filter bubble’ issue. This could manifest in the form of a ‘get me out of my filter bubble’ button, which would also have the concurrent, positive effect of empowering users with choice and control – an aim set out in the White Paper. This could benefit society by reducing problems such as misinformation, disinformation and radicalisation, as it helps people look beyond their assumed or known interests.
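The sketch below is one hypothetical way such a button could work: re-ranking a feed by a weighted mix of predicted engagement and topic novelty, with the weighting flipped when the user opts out of their bubble. All names, fields and numbers are invented for illustration.

```python
# Illustrative only: a 'get me out of my filter bubble' toggle as a re-ranking
# objective. Items are plain dicts with invented fields; no real platform API.
def rank_with_values(items: list[dict], topic_history: dict[str, float],
                     escape_filter_bubble: bool = False) -> list[dict]:
    """Rank by a weighted mix of predicted engagement and topic novelty."""
    diversity_weight = 0.8 if escape_filter_bubble else 0.1
    max_eng = max((item["predicted_engagement"] for item in items), default=0.0) or 1.0

    def score(item: dict) -> float:
        engagement = item["predicted_engagement"] / max_eng              # normalised to [0, 1]
        novelty = 1.0 / (1.0 + topic_history.get(item["topic"], 0.0))    # less-seen topics score higher
        return (1 - diversity_weight) * engagement + diversity_weight * novelty

    return sorted(items, key=score, reverse=True)

items = [
    {"id": "a", "topic": "cats", "predicted_engagement": 120.0},
    {"id": "b", "topic": "science", "predicted_engagement": 15.0},
]
history = {"cats": 300.0}
print([i["id"] for i in rank_with_values(items, history, escape_filter_bubble=True)])
# ['b', 'a']: with the toggle on, the unfamiliar topic outranks the familiar one.
```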

Tweaking an algorithm or designing a button to give people chronologically organised ‘feeds’ is not hard; however, it is not profitable, which is perhaps why Instagram offers this choice only through an obscure process, and not as a default setting.
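To underline how small that tweak is, a chronological feed is essentially a single sort on posting time rather than an engagement score; the minimal sketch below assumes a hypothetical timestamp field on each post.

```python
# A chronological 'feed' ignores engagement predictions entirely and simply
# sorts by a (hypothetical) posting time, newest first.
def chronological_feed(posts: list[dict]) -> list[dict]:
    return sorted(posts, key=lambda post: post["posted_at"], reverse=True)
```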

Algorithmic accountability is an invaluable reference point for ensuring the legislation adheres to, or at least considers, the high ethical standards posited in academia. This could lead to a final instrument that is better equipped to protect social media users from the pecuniary interests of the private technology sector.

Conclusion

At the heart of the concept of algorithmic accountability is a drive to hold technology companies accountable for harms caused by their products and mitigate such harms. The critical and expert discussion in this field, increased use of algorithms on social media and rising instances of harm mean Parliament should, at a minimum, consult algorithmic accountability scholars.

The decision to regulate is welcome; however, more could be done to ensure the forthcoming legislation can reduce algorithmic harm on social media. Whilst algorithmic accountability cannot assist the legislation in all aspects, engagement with concepts from the field will ensure the final legislation is evidence-based and effective.

This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Markus Spiske on Unsplash

About the author

Esme Fowler-Mason

Esme Fowler-Mason is a first-class LLM graduate with a background in European Studies and Law from the University of Amsterdam. She has a keen interest in AI governance, internet regulation and digital rights.

Posted In: Internet Governance
