
Lorna Woods

March 25th, 2022

The UK Online Safety Bill: an outline


Estimated reading time: 5 minutes


The UK government published its long-awaited Online Safety Bill in mid-March, following significant pre-legislative scrutiny. Here, Professor Lorna Woods of the University of Essex explains the key points of the bill.

The Online Safety Bill (OSB) is a gorilla of a thing, comprising 213 pages, plus explanatory notes, an impact assessment and a human rights statement. It will take a while for people to understand how the regime introduced by the OSB is likely to work. As a starting point, the following is an explainer, outlining what seems to be the intent but not taking a position on whether this is good, bad or ugly.

At its heart, the OSB is based on risk assessment (“risk assessment duties”) and mitigation (“safety duties”), albeit set in a complex framework focussed on different types of content – criminal, harmful to children and harmful to adults – and distinguishing between different types of service. The provisions in relation to content harmful to adults are much weaker than those in relation to the other two categories. Innovations since the draft Bill include two add-on chunks, fraudulent advertising and porn, as well as provisions on identity which are partially integrated into the safety duties. Companies in scope are providers of user-to-user services and/or search services, and – dealt with separately (see Part 5) – services which display “regulated provider pornographic content”. The OSB has extra-territorial effect, meaning that it will cover services not based in the UK but that serve UK users.

Regulated Services and Regulated Content

The user-to-user services are sub-divided into categories (Cat 1 and Cat 2B; detail as to the thresholds will be set out in secondary legislation). There is a sub-category for search services: Cat 2A. This categorisation means different services have different obligations. Significantly, only Cat 1 services have any obligations with regard to content that is harmful to adults, with extra obligations around content of democratic importance and journalism content. They must also provide user empowerment tools and give adult users the option to verify their identity. Cats 1 and 2A have duties relating to fraudulent ads. Cats 1, 2A and 2B all have transparency reporting obligations. It is implicit that there might be some services within the regime which do not fall within any of these categories. Only services “likely to be accessed by children” need to carry out the children’s risk assessment and safety duties; this does not depend on categorisation.

The bill does not require providers to deal with all content on their platforms, only regulated content. This is criminal content where the victim is an individual, or content that is psychologically or physically harmful. On top of this, each category of content has a sub-category of priority content (and for children there is also primary priority content). Priority criminal content comprises terrorism offences, child sexual abuse and exploitation offences, and other offences such as threats to kill, harassment, stalking or supplying illegal drugs or weapons (listed in Schedule 7). Priority content for the categories of content harmful to children and harmful to adults is to be added by statutory instrument.

Preliminary Steps by Ofcom

While the bill starts with the duties on regulated providers, the regime depends on the groundwork of the regulator, Ofcom. It has the general duty to ensure:

“the adequate protection of citizens from harm presented by content on regulated services through the appropriate use by providers of such services of systems and processes designed to reduce the risk of such harm” (added to its Communications Act 2003 duties).

Ofcom should also ensure a higher level of protection for children than adults.  Specifically, Ofcom is obliged to:

  • carry out market-level risk assessments and develop ‘risk profiles’, which it publishes;
  • maintain the register which allocates service providers into the different categories of service;
  • provide guidance on the Children’s Access Assessments (CAA), the process by which providers determine whether or not they are likely to be accessed by children;
  • provide guidance on risk assessments.

It is only once this information is available that companies will know which duties apply to them and have guidance on how to do their respective risk assessments – their obligations do not kick in until then. Ofcom also has to develop codes of practice on how to comply with the safety duties. The bill specifies some codes Ofcom must produce (on CSEA, terrorism and fraudulent ads) and identifies some issues those codes must cover, but otherwise the scope and nature of the codes for each of the safety duties is up to Ofcom, although the bill provides principles that Ofcom should take into account when developing each code. Ofcom is also to provide guidance on user verification.

Duties

This then is the next step: the providers carry out ‘suitable and sufficient’ risk assessments on areas relevant to their service – in doing so they must take into account not just the risk profiles that Ofcom has developed but also a range of factors, including the characteristics/functionalities of the service. They must then carry out the corresponding safety duty. This is not just about take-down but (with the exception of content harmful to adults) requires proportionate steps to mitigate and manage harm taking into account, inter alia, the design of functionalities, algorithms and other features, user tools and policies on user access (though the precise list varies depending on the type of content in issue). If a service complies with the relevant codes of practice, then it will be deemed to have complied with its safety duties. It can seek to comply with those duties in other ways but, in that instance, Ofcom will assess whether it has actually done so. As well as the safety duties, service providers have duties to take freedom of expression into account. They must also provide a complaints mechanism. Cat 1 providers must additionally offer (adult) users the opportunity to verify themselves and give them the option not to interact with those who do not verify themselves, as well as providing other user empowerment features.

The fraudulent advertising duties look very similar to the illegal content safety duties, but there is no risk assessment. The relevant fraud offences are listed in cl 36 but are not priority content for the purposes of the illegal content safety duty.

The provisions in relation to porn apply to sites where the provider chooses to display the content (that is, the service provides the porn, in contrast to porn that is user-generated). Those providers are under a duty to ensure that children are not normally able to encounter porn (as defined in the bill); again, Ofcom is to provide guidance to assist services in complying with their duties. No particular technical solution has been specified in the Bill.

Enforcement by Ofcom

Ofcom is responsible for enforcing the regime – cl 111 lists the enforceable requirements. In support of this, it has a number of information gathering powers and may also use external experts. In relation to information gathering, there is an obligation on companies to name a senior manager; and, where providers do not comply with Ofcom’s information notices, there is the possibility of directors being made criminally liable for that non-compliance. Separate provisions deal with CSEA and terrorism content, allowing Ofcom to specify the use of “accredited technology” for the identification and/or removal of that content. Where Ofcom is of the opinion that a provider has failed in a duty, it may give a “provisional notice of contravention”, which may be followed up by a “confirmation decision”. The confirmation decision may require the provider to take specific steps to comply or to remedy a failure to comply. Ofcom may, subject to certain constraints (cl 116), require the use of “proactive technology”, but it may not specify these technologies in relation to content harmful to adults. Ofcom may impose penalties (see Schedule 12). To support enforcement, Ofcom may – by applying to the court – deploy business disruption measures by imposing requirements on ancillary services (e.g. payment services) or through access restriction orders.

While there is no right of private action for failure to comply with the duties, specified entities (to be listed in a statutory instrument) may launch a super complaint where systemic risks are not being dealt with by Ofcom.

New criminal offences

The Bill also contains a number of new criminal offences, implementing some of the Law Commission recommendations from its ‘Modernising Communications Offences’ report (July 2021): a harmful communications offence, a false communications offence and a threatening communications offence. These are directed at the user, not the service provider, but in principle the offences will also fall within the criminal content category (they are not automatically priority content). A cyber-flashing offence has also been introduced. These offences form part of the general body of criminal law; they are not to be enforced by Ofcom.

The Way Forward

This is a long and complex bill that, despite the opportunity for pre-legislative scrutiny, will likely take most of those involved in the legislative process some time to understand fully. We can also expect to see much lobbying from different interests. From recent statements in Parliament, it looks like second reading will take place just after Easter, with the detailed work to be done after the Queen’s Speech in May. The bill will then be considered by the House of Lords, with most of that work looking likely to happen in the autumn. Even when the bill receives royal assent, given the amount of work that Ofcom has to do, the regime will not be brought into effect immediately.

This article gives the views of the author and does not represent the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Lorna Woods

Professor Lorna Woods, OBE is Professor of Internet Law at the University of Essex and a member of the Human Rights Centre. Professor Woods has received an OBE for her services to internet safety policy. Her most recent project, with Carnegie UK Trust, is on the regulation of social media, introducing and arguing for a systemic approach. This work underpinned the UK government’s approach to legislation; she has been invited to give evidence to numerous Parliamentary select committees both in the UK and abroad, and regularly presents on law and tech at policy conferences. Recent publications include "Obliging Platforms to Accept a Duty of Care" in Moore and Tambini (eds) Regulating Big Tech: Policy Responses to Digital Dominance (OUP, 2021) and a co-edited collection, Perspectives on Platform Regulation Concepts and Models of Social Media Governance Across the Globe (Nomos, 2021). Professor Woods also researches digital human rights, including a chapter on freedom of expression in Peers et al (eds) The Charter of Fundamental Rights: A Commentary (2nd ed) (Hart, 2021).

Posted In: Internet Governance
