
Blog Administrator

May 24th, 2018

A more transparent and accountable Internet? Here’s how.


This week the UK Culture Secretary announced that the UK Government will legislate on internet safety. This could include a statutory code of conduct to tackle bullying and harassment on social media, as well as mandatory transparency reporting. LSE Visiting Fellow Jonny Shipp is leading the development of an “Internet Commission”, an independent initiative for a more transparent and accountable internet. He explains how increased transparency about the way online content is managed might be achieved, and how it could help firms to improve their practice and legislators to sharpen their focus.

Policymakers, businesses and citizens are scrambling to understand the impact and challenges of digitalisation. Whilst technology can benefit people as individuals, its impact is not universally positive. Doteveryone’s 2018 Digital Attitudes Report finds that 50% of respondents say the internet has made life a lot better for people like themselves, but only 12% say it has had a very positive impact on society.

An accountability model for digital organisations

In February, with the support of the Centre for Information Policy Leadership and the Corsham Institute, I convened a series of workshops to develop a model of accountability for digital organisations. This was part of the development of the Internet Commission: an independent initiative for a more transparent and accountable internet. It aims to help reverse today’s negative spiral of the unintended consequences of digitalisation, ad hoc regulation and loss of public confidence in technology.

Drawing on the perspectives of civil society organisations including Age UK, NSPCC and Privacy International, and businesses including Apple, Microsoft, Sky and Telefónica, we mapped the top issues that digital organisations should be accountable for. We identified the unintended consequences of digitalisation, including changes in the labour market due to automation, addiction and mental health effects, fraud and organised crime, and the accessibility and take-up of internet access.

We also explored a group of issues connected with the ways in which online content is managed: the promotion of material that is offensive, inaccurate, “fake”, or otherwise harmful to individuals or society; and the use and misuse of data, analytics, human agents and artificial intelligence that determine and prioritise what is promoted, removed and not removed from the internet.

“Procedural accountability”

“Procedural accountability” was a focus of discussion at the March 2018 workshop on platform responsibility convened by LSE’s Truth, Trust and Technology Commission. The idea is that firms should be held to account for the effectiveness of their internal processes in tackling the negative social impact of their services.

Transparency reporting is a vital step towards achieving such accountability, enabling policymakers, academics and activists interested in the detail of these reports to understand and comment on the steps firms take and how effective they are. To be credible and trusted, information disclosed by online firms will need to be independently verified. The internet industry “marking its own homework” is no longer effective.

Policymakers, industry and civil society need a much clearer picture of the processes by which Internet content is currently managed. Equipped with a reliable and relevant evidence base, they will have a strong basis for discussion and deliberation about how things should improve and how new regulation might work.

In this way, online firms can be encouraged to share best practice and to adopt ethical business practices. Independent of industry and of government, the Internet Commission aims to enable this process of constructive dialogue, confidential disclosure and analysis, leading to insightful public reports and well-informed multi-stakeholder debate.

Earlier this year, as the storm over the role of social media in elections gathered pace, the UK Prime Minister insisted that social media companies must do more to ensure that they are a force for good. She confirmed her commitment to launching an annual internet safety transparency report, providing data on what offensive content is being reported, how social media companies are responding to complaints, and what material is being removed. In support of this, the Internet Commission has made the management of online content its initial focus.

Piloting a Transparency Reporting Framework

The processes by which online content is created, shared, promoted, moderated and removed are complex, and, like the wider internet ecosystem, they are evolving rapidly. In the West at least, interventions by governments and regulators have been largely ad hoc, fragmented and incomplete. People, especially parents, do not feel in control of the situation. As new services are launched, the picture becomes still harder to understand and so people are uneasy about the consequences for society.

Santa Clara University School of Law’s first conference on Content Moderation & Removal at Scale led to the recent publication of the “Santa Clara Principles on Transparency and Accountability in Content Moderation”. These principles are intended as a starting point for the development of transparency and accountability around moderation of user-generated content. They cover:

  • Numbers: companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines;
  • Notice: companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or account suspension; and
  • Appeal: companies should provide a meaningful opportunity for timely appeal of any content removal or suspension.
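
To make the three principles concrete, here is a minimal sketch, in Python, of the kind of record a company might keep and publish if it followed them. The class and field names are illustrative assumptions of mine; the Santa Clara Principles describe what should be disclosed, not any particular schema.

```python
from dataclasses import dataclass, field


@dataclass
class RemovalNotice:
    """Notice: the affected user is told what was removed and why (illustrative fields)."""
    user_id: str
    content_id: str
    guideline_violated: str        # which content rule was applied
    action: str                    # e.g. "post removed", "account suspended (temporary)"
    reason: str                    # human-readable explanation sent to the user
    appeal_available: bool = True  # Appeal: a timely, meaningful route to challenge the decision


@dataclass
class TransparencyNumbers:
    """Numbers: aggregate figures a company would publish for a reporting period."""
    period: str                    # e.g. "2018-Q1"
    posts_removed: int = 0
    accounts_suspended_temporarily: int = 0
    accounts_suspended_permanently: int = 0
    removals_by_guideline: dict = field(default_factory=dict)  # rule -> count

    def record_removal(self, notice: RemovalNotice) -> None:
        """Fold an individual moderation decision into the published aggregate numbers."""
        self.posts_removed += 1
        rule = notice.guideline_violated
        self.removals_by_guideline[rule] = self.removals_by_guideline.get(rule, 0) + 1


if __name__ == "__main__":
    report = TransparencyNumbers(period="2018-Q1")
    notice = RemovalNotice(
        user_id="u123", content_id="c456",
        guideline_violated="harassment", action="post removed",
        reason="The post breached our harassment guideline.",
    )
    report.record_removal(notice)
    print(report)
```

The point of the sketch is simply that individual notices and published aggregate numbers are linked: each moderation decision a user is notified about should be traceable into the figures a company reports.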

Taking account of the Santa Clara Principles, and with the engagement of a growing team and network of experts and supporters, the Internet Commission has created a Transparency Reporting Framework intended to apply to all forms of online content, whether user-generated, advertising, editorial or something in between. The current draft, which was recently discussed at a multi-stakeholder workshop hosted by the Oxford Internet Institute, proposes 45 qualitative and quantitative questions in six sections:

  1. Reporting: how is the platform alerted to potential breaches of its rules?
  2. Moderation: how are decisions made to take action about content?
  3. Notice: how are flaggers and content creators notified?
  4. Process of appeal: how can decisions be challenged and what happens when they are?
  5. Resources: what human and other resources are applied to managing content?
  6. Governance: how are content management processes, policies and strategies overseen?
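
As a rough illustration only, the sketch below arranges these six sections and their headline questions as data. The 45 detailed qualitative and quantitative questions are not reproduced here, and the structure is my own assumption rather than the Commission's actual template.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class FrameworkSection:
    name: str
    headline_question: str
    detailed_questions: List[str]  # the framework's 45 detailed items would sit here


# Section names and headline questions are taken from the draft framework described above;
# the container itself is only an illustrative way of holding them.
TRANSPARENCY_FRAMEWORK = [
    FrameworkSection("Reporting", "How is the platform alerted to potential breaches of its rules?", []),
    FrameworkSection("Moderation", "How are decisions made to take action about content?", []),
    FrameworkSection("Notice", "How are flaggers and content creators notified?", []),
    FrameworkSection("Process of appeal", "How can decisions be challenged and what happens when they are?", []),
    FrameworkSection("Resources", "What human and other resources are applied to managing content?", []),
    FrameworkSection("Governance", "How are content management processes, policies and strategies overseen?", []),
]

if __name__ == "__main__":
    for section in TRANSPARENCY_FRAMEWORK:
        print(f"{section.name}: {section.headline_question}")
```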

This week, the UK Government published its draft transparency reporting template as part of its response to its Internet Safety Strategy Green Paper. In the coming months the Internet Commission aims to convene online firms and consult with others in a dialogue on digital responsibility that is informed by these and other proposals. It will seek to build an approach that reflects the growing importance of Artificial Intelligence, drives improvement in processes and procedures, works internationally and includes independent audit and assurance, balancing concerns for safety, security, privacy and freedom of expression.

About the Internet Commission

Backed by an advisory board from industry, academia and civil society, I am leading the development of the Internet Commission together with Julian Coles, Jessica Sandin and Dr Ioanna Noula. We are working to turn our accountability model for digital organisations into action by leading a dialogue on digital responsibility. That’s why we have started to develop a detailed framework to enable accurate and trustworthy assessments of the processes that social network providers have in place to tackle Internet harms.

This article gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.

