Whether and how to regulate companies that provide social media or search services are key questions that governments around the world are grappling with, as both users and regulators worry about the extent of these tech giants’ influence. The UK government released a white paper on the regulation of ‘online harms’ for consultation in April 2019, and received more than 2,400 responses. Glasgow’s Philip Schlesinger and Martin Kretschmer write about their research on the complex landscape of platform regulation in the UK.
The UK government’s recent response to the Online Harms consultation names Ofcom as the new regulator for this area of online content, adding to its already extensive footprint.
While this assignment is not a great surprise to watchers of the regulatory scene, it highlights the growing complexity of UK platform regulation. Eight major official reports that have appeared since 2018 tell an interesting tale.
In new research based at CREATe and CCPR at the University of Glasgow, together with our colleague Ula Furgal, we have begun to map the platform regulation scene. We want to understand its dynamics and future prospects as the UK embarks on Brexit and the role of global platforms becomes ever more salient to the policy agenda.
Our research is part of the programme of work funded by the AHRC Creative Industries Policy & Evidence Centre (PEC). It will receive its first public airing at the British Institute of International and Comparative Law in Russell Square, London, on 26 February. Following the presentation of our mapping study, there will be a discussion featuring five regulators with a stake in this rapidly evolving field: Ofcom, the media regulator designated by the government as likely to become responsible for online harms; the Competition and Markets Authority; the Information Commissioner’s Office; the Intellectual Property Office; and the prospective regulator for artificial intelligence, the Centre for Data Ethics and Innovation, currently based in the Department for Digital, Culture, Media and Sport.
It is timely to take stock, as platforms have become an increasingly distinctive object of regulation. Since 2016, a series of new policy initiatives has targeted online platforms. The trendsetting German NetzDG law of 2017 requires social media platforms to remove manifestly unlawful content within 24 hours, enforced by heavy fines and backed by reporting obligations. In 2017, Australia launched a far-reaching digital platforms inquiry centred on competition issues raised by the US tech giants Facebook and Google. Article 17 of the Copyright in the Digital Single Market Directive (which, following Brexit, the UK will not implement) makes some platforms liable for content uploaded by their users. The EU is expected shortly to announce a new framework for content responsibility; a pan-European digital regulator may soon be in the offing.
Our UK mapping study is a pilot for more extensive work. First, we performed a content analysis of eight official UK reports published between 2018 and 2020. Second, we analysed the regulators mentioned in those reports to determine their statutory basis (including the extent to which they rely on EU law) and to identify their key duties and processes.
Our primary sample drew on the following:
- Ofcom discussion paper: Addressing harmful online content: A perspective from broadcasting and on-demand standards regulation (September 2018)
- Cairncross Review (DCMS): A sustainable future for journalism (February 2019)
- HoC DCMS Committee: Disinformation and ‘fake news’ (February 2019)
- HoL Communications Committee: Regulating in a Digital World (March 2019)
- Furman review (Treasury, BEIS): Unlocking digital competition (March 2019)
- Online Harms White Paper (DCMS & Home Office) (April 2019)
- CMA market study: Online platforms and digital advertising (interim report) (December 2019)
- Centre for Data Ethics and Innovation: Review of online targeting (February 2020)
Our preliminary findings show that the regulatory landscape in the UK has been largely shaped in response to the perceived social and economic harms caused by the activities of two companies, Google and Facebook. The official documents analysed show that 3,320 (roughly 77%) of the 4,325 references made are to just two US firms and their subsidiaries: Google (including YouTube) accounted for 1,585 references, while Facebook (including Instagram, WhatsApp and Messenger) racked up 1,735. Only two platforms headquartered in Europe are mentioned (Spotify and Ecosia), whereas Chinese firms are referenced 61 times. Not a single UK firm features. This way of constructing the regulatory field points to structural and jurisdictional challenges that any UK regulatory intervention cannot avoid.
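For readers curious how a tally of this kind can be produced, the following is a minimal, hypothetical Python sketch. The `reports/` directory, the plain-text filenames and the brand lists are our own assumptions for illustration; they are not the study’s actual coding scheme, which involved manual checking of each reference.

```python
# A minimal, hypothetical sketch of the kind of tally behind these figures.
# The reports/ directory, filenames and brand lists are illustrative
# assumptions, not the study's actual coding scheme.
import re
from pathlib import Path

PLATFORM_GROUPS = {
    "Google": ["Google", "YouTube"],
    "Facebook": ["Facebook", "Instagram", "WhatsApp", "Messenger"],
}

def count_references(text: str) -> dict:
    """Count whole-word mentions of each platform group in one report."""
    return {
        group: sum(len(re.findall(rf"\b{re.escape(b)}\b", text)) for b in brands)
        for group, brands in PLATFORM_GROUPS.items()
    }

totals = {group: 0 for group in PLATFORM_GROUPS}
for path in Path("reports").glob("*.txt"):  # assumed plain-text report dumps
    for group, n in count_references(path.read_text(encoding="utf-8")).items():
        totals[group] += n

print(totals)

# Reproducing the arithmetic from the study's reported figures:
# Google 1,585 + Facebook 1,735 = 3,320 of 4,325 total platform references.
print(f"{(1585 + 1735) / 4325:.1%}")  # -> 76.8%
```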
Close to 100 online harms mentioned in these reports are said to need addressing. Terrorism, fake news and child-related harms dominate the picture. We have categorised the harms according to the fields of law into which they fall; however, many have no clear legal basis for their alleged harmfulness. The ‘duty of care’ proposed in the UK government’s Online Harms white paper is designed to encompass a wide range of such harms. Designing a regulatory process that removes harmful content while at the same time holding private firms to account is far from simple.
Further information about the research and the conference on 26 February is available here. The event is free. Those primarily interested in platform regulation may wish to attend from 5.30–7pm. You can register directly by contacting Diane McGrattan at CREATe, Diane.McGrattan@glasgow.ac.uk
This article represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.