In this article, LSE Visiting Fellow Jonny Shipp explores the key learnings from the Internet Commission’s first accountability report and outlines how its work relates to the UN Sustainable Development Goals.
In January, the Internet Commission published the results of its first year-long independent review of how organisations take decisions about content, contact and conduct online. The Accountability Report 1.0 offers insight into the current state of affairs, shedding new light on how an organisation's everyday activities relate to its wider corporate purpose. It may be the very first example of procedural accountability for content moderation practices: an approach that can support global digital cooperation and emerging statutory regulation.
In early June, CEOs from many of the world’s leading technology companies joined the ‘Digital with Purpose Movement’ and signed a pledge to governments and policy makers to accelerate the realisation of the Paris Agreement and UN Sustainable Development Goals. They hope to catalyse collective action across industry to create a “race to the top” in digital responsibility and ethical business practices, so reversing the negative consequences of digitalisation.
A trusted Internet?
In late 2017, on the day the UK Government published its first proposals for what is now the Online Safety Bill, I led a roundtable discussion to explore the idea of an “Internet Commission”. A series of stakeholder workshops followed to explore the scope of “digital responsibility” and develop a new accountability process in support of an effective Internet regulation ecosystem. Amidst public anxiety about the operation and negative effects of social media platforms, most agreed that industry could no longer be allowed to “mark its own homework” and that a new wave of corporate accountability was required.
A promising approach is for regulators to focus on the organisational processes and procedures surrounding content moderation decisions. Inspired by this, the Internet Commission published its first Evaluation Framework in 2019, and in 2020 it had the opportunity to gather data with a first cohort of "Reporting Partners": the BBC (broadcasting), Sony PlayStation (online gaming), PopJam (social media), and Meetic and Tinder (online dating).
Each Reporting Partner submitted written answers to the questions in our evaluation framework, and written clarifications based on our first review of the data. We then conducted interviews and developed a detailed confidential case study. Each organisation commented on their draft case, and through discussion we arrived at a final confidential case study for each participant. We identified key practices and used an organisational maturity model to explore and test the congruence of these practices with the organisation’s stated goals and purpose. Next, we agreed on redactions to each of the five cases, then combined them as the basis for a private knowledge sharing workshop. Here, participants exchanged views about shared challenges, such as safety by design, rights of appeal, moderator welfare, understanding emerging issues, and the opportunities and limitations of content moderation and age assurance technologies. This shaped the published report, which was scrutinised by a group of nine Advisory Board members, balanced in number across civil society, academia and industry. To guarantee the report’s independence, they were given full access to the evidence and opportunities to discuss it in detail with the authors.
Learning about digital responsibility
Firstly, here are a few learnings about the Internet Commission's accountability reporting process. Reporting Partners told us that we were asking the right questions, but we also had many more of our own: these supplementary questions served as a useful starting point for iterating our evaluation framework. Our detailed, confidential case studies have helped organisations to better understand where they are now, and this part of the process has prompted some immediate changes. Participants appreciated and benefited from private knowledge sharing. And although it is challenging, we have shown that it is possible to produce a fair and independent public report whilst respecting the need for commercial confidentiality.
Secondly, here is what we learned about how things are being done in the organisations we studied. We identified 24 key practices and evaluated their congruence with a culture of digital responsibility. Despite the diversity of the cohort, we identified eight shared challenges: safety by design, moderator welfare, right of appeal, reporting, customer focus, understanding emerging issues, moderation technologies and age assurance technologies. Here are four short examples of the practices we evaluated:
- We saw how PopJam, a small company recently acquired by Epic Games, has addressed online child safety as part of its service design, drawing on its team's long experience with younger audiences to balance the risks and benefits of private messaging. It decided not to include this feature, since younger users do not see it as necessary and the risks around child exploitation are substantial.
- Sony PlayStation is rightly proud of its moderator wellness programme: content moderators have a tough job, reviewing challenging content and behaviours. An established programme supports the emotional needs of moderators, and Sony goes a step further, using its supply-chain influence to ensure that the same psychological support is available to everyone involved, including moderators employed by third-party vendors.
- The BBC is a unique organisation with a strong, if not always consistent, heritage of editorial responsibility. It aims to encourage healthy public debate, and this is demonstrated in the way its moderation team curates online interaction. The team applies a clear, principles-based policy to understand a contributor's intentions and to err in their favour. Where posts are removed, affected users are encouraged to use an appeals process.
- Tinder operates in 196 countries and is the most used dating app across much of Europe. It makes extensive use of automated tools to review public-facing profiles in near real-time. Its system is designed to overestimate the number of violations of its guidelines, creating a wide safety net around the automated moderation process. Importantly, this also helps the organisation to identify and stay ahead of new abuse and scam patterns.
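The "wide safety net" idea in the last example can be illustrated with a minimal sketch. This is not Tinder's system: the threshold values and function names below are hypothetical, and the point is only that setting a deliberately low review threshold over-flags borderline content, trading precision for recall so that humans, not the model, make the close calls.

```python
# Hypothetical sketch of "over-flagging" in automated moderation.
# REVIEW_THRESHOLD is an assumed value; real systems tune it empirically.
REVIEW_THRESHOLD = 0.3
REMOVE_THRESHOLD = 0.9

def route_profile(violation_score: float) -> str:
    """Route a profile based on a model's violation score (0.0 to 1.0)."""
    if violation_score >= REMOVE_THRESHOLD:
        return "remove"        # high confidence: act automatically
    if violation_score >= REVIEW_THRESHOLD:
        return "human_review"  # wide safety net: err toward review
    return "approve"

# Borderline content is routed to reviewers rather than silently
# approved, which also surfaces new abuse patterns to human analysts.
print(route_profile(0.95))  # remove
print(route_profile(0.50))  # human_review
print(route_profile(0.10))  # approve
```

A side effect of this design, noted in the report, is that the stream of human-reviewed borderline cases becomes an early-warning signal for emerging abuse and scam patterns.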
The full report, “Accountability Report 1.0. Online content, contact and conduct: advancing digital responsibility” is available here.
The wider context: driving “Digital with Purpose”
This month, the Portuguese Presidency of the EU kick-started a future Charter on Digital Rights with the Lisbon Declaration on Digital Rights. It was in this context that CEOs from many of the world's leading technology companies joined the Digital with Purpose Movement, signing a pledge to governments and policy makers to accelerate the realisation of the Paris Agreement and UN Sustainable Development Goals. The Internet Commission's work is relevant here because whilst digitalisation can support the delivery of these goals, the goals can also guide more trustworthy digital development. For example, can we achieve Goal 3, "Good health and well-being", whilst harmful content is proliferating? Is the achievement of Goal 16, "Peace, justice and strong institutions", consistent with the spread of misinformation, data privacy concerns or child exploitation? And Goal 9, on "Industry, innovation and infrastructure", surely requires stronger governance and more work to identify and reduce systemic risks in relation to the Internet. These themes are central to the Internet Commission's evaluation framework, an updated version of which was published in March.
The updated evaluation framework incorporates learning from the first reporting cycle and feedback from researchers, regulators, policy makers and participating organisations, and reflects relevant indicators from the 2020 Ranking Digital Rights framework, the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse, the ICO Age Appropriate Design Code, and the Council of Europe's Guidelines on the rights of the child in the digital environment. There are core questions about the organisation's scope and purpose, the people it touches and its governance, plus additional sections on:
- Content moderation: how is harmful and illegal contact, content, or conduct discovered and acted upon?
- Automation: how are intelligent systems used to promote and/or moderate online content?
- Safety: what measures are in place to protect people’s health and well-being?
The Digital with Purpose Movement has identified digital impact themes across five priority areas: Climate Action, the Circular Economy, Supply Chain, Digital Inclusion and Digital Trust. It aims for companies to be assessed against these themes and awarded a formal certification, with performance tracked annually. To accelerate the realisation of the UN Sustainable Development Goals, the Internet Commission is contributing on the theme of Digital Trust. By aligning our work, we hope to contribute to a race to the top in digital responsibility. But although transparency has a role to play, it should not become a goal in itself, and organisations answering their own questions is not a formula for rebuilding trust.
A second reporting cycle is now underway with participants including Twitch, Pearson, Meetic and Tinder. Through this work, the Internet Commission is identifying and independently evaluating how ethical behaviours are embedded within organisational culture through specific processes and practices, thereby advancing digital responsibility and contributing to the movement for Digital with Purpose.
This article represents the views of the author, and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.