Mark Bunting

March 9th, 2019

Something must be done about harmful online content, but what exactly?

Estimated reading time: 5 minutes

Does any issue, in these divided times, unite UK politicians and commentators more than the regulation of online platforms? MPs, Lords, ministers, opposition frontbenchers, charities, newspapers, broadcasters and a succession of think-tanks agree: something must be done about harmful online content.

The question is, what? The Government is finalising a white paper for publication before the end of March; it seems even Brexit can’t stop the policy juggernaut. What should it say?

A leading proposal is to introduce a ‘duty of care’, an idea developed most fully by Professor Lorna Woods at the University of Essex, and William Perrin, a trustee of the Carnegie UK Trust. It’s based on a suggestive metaphor of online platforms as public spaces, whose owners and operators have a degree of responsibility for what happens there. A duty of care is a legal wrapper that makes platforms liable for reasonably foreseeable harms that they fail to prevent.

What harms are ‘reasonably foreseeable’ in online environments is a tricky question, though. Services that deliberately encourage malicious gossip, or advertise themselves as places for children to meet ‘new friends’, are plainly dreadful ideas. But photo-sharing sites, messaging services, user-generated video platforms and collaboration tools can be put to a million benign uses as well as some terrible ones. So ‘reasonable foreseeability’ assumes platforms can tell good actors from bad, and reliably prevent the latter without unintentionally blocking the former. Sometimes that might be easy, and often it will not be; how will a regulator, or a court, decide how much error can be tolerated?

The Digital, Culture, Media and Sport Select Committee is looking enviously at Germany’s Network Enforcement Act (NetzDG), which threatens platforms with fines of up to €50m if they don’t take illegal content down within 24 hours of being notified of it. But the notice-and-takedown approach belongs to a previous technological era. These days the vast majority of content blocking is done by automated tools, often without the content in question being seen by anyone other than its creator. The big question, whether the concern is online harm, free speech or both, is how effective those tools are. How much do they miss that should be removed – and how much do they inadvertently block that should be allowed? We have no idea.

Neither the duty of care nor the German law directly addresses a major problem: we haven’t yet coherently described what we want from platforms. ‘Just take it down’ is easy to say, and few would argue that horrendous examples of violent or abusive content should not be blocked or removed. But the quest for perfection is impossible. Awful things will happen online, just as they do offline; promises from some platforms, extracted under duress, may prove hostages to fortune.

OK, but surely platforms can do better? Certainly they can. Consistent enforcement of existing community standards, and proper evaluation of their effectiveness, would be a start. However, as a policy, ‘do better’ assumes we know where we are now and that we will know when enough has been done. Does this apply to any of the myriad content and behavioural problems that we now observe online? And as platforms attempt to ‘do better’, how far will they get before they start running into grey areas? The German experience suggests not very far – it was only a few days after the law came into force that politicians started to complain about their own content being taken down.

So regulation will be complex and risky. But these are not good reasons for doing nothing. Better oversight of platforms is sorely needed. Governments’ relations with platforms need to be reset, with careful policy that engages with the totality of platforms’ activity to govern content and speech; recognises the need for balance between protection from harm and the benefits of an open Internet; defines what ‘harmful’ means and prioritises the most serious risks; and is grown-up enough to admit that striving for perfection is tilting at windmills.

A statutory framework to achieve this need not itself be complex. Here are four simple recommendations.

First, an oversight body should be established with the power to require platforms to assess the nature and extent of specified problems on their services. These assessments should be well-evidenced, independent or independently verified, and engage with civil society, industry and relevant public bodies. Their findings should be disclosed to the oversight body.

Second, where assessments identify harm or significant risk of harm, platforms should be required to tell the oversight body what they’re doing about it; how they assess the impacts of their actions (on free speech as well as harm); and how they measure success. The oversight body should give its view, and have the power to send platforms back to the drawing board if their responses are inadequate or not achieving the desired results. Expectations should be proportionate to the size of each platform and the magnitude of the risk. Risks to children should be treated particularly urgently.

Third, the oversight body should publish regular reports on the extent of online harm, the effectiveness of platforms’ responses, and priorities for further action.

Fourth, the oversight body should have ways to compel compliance, which could include enforcement notices, ‘verified provider’ kitemarks, beneficial rights (e.g. access to adjudication or arbitration services), fines or the ability to enforce restrictions on services.

Much of this already happens, but it’s inconsistent, and the piecemeal and voluntary efforts to date have not built trust in platforms’ self-governance. Legislation could put in place more systematic ways of identifying online harms, and oblige platforms to account for their efforts to address them. An independent platform overseer needs to be created now, with the powers, confidence and independence to hold platforms to account – and, occasionally, to resist the howls of outrage and enforce a proportionate, evidence-based approach to online harm.

♣♣♣

Notes:

  • This blog post appeared first on LSE Media Policy Project.
  • The post gives the views of its author, not the position of LSE Business Review or the London School of Economics.
  • Featured image by Pashi, under a Pixabay licence

Mark Bunting is a member of Communications Chambers and author of Keeping Consumers Safe Online: Legislating for platform accountability for online content. He tweets @buntms