
Blog Administrator

October 20th, 2016

Liability and responsibility: new challenges for Internet intermediaries

Monica Horten, a visiting fellow at the LSE, argues for clarification on proposals regarding Internet intermediaries’ liability for content, and for an appropriate balance to be struck between the different interests involved. This post is based on her paper for the Center for Democracy and Technology, Content ‘responsibility’: The looming cloud of uncertainty for internet intermediaries.

How might policy-makers address demands for removal or policing of Internet content? Starting in the spring of 2016, a suite of proposals has emerged from the European Commission regarding Internet intermediaries and liability for content. The Commission has formulated a new notion of ‘responsibility’, addressing content platforms and file-hosters in addition to Internet Service Providers. My paper discusses these new proposals and considers what they might mean from the perspective of the intermediary.

Liability vs responsibility

A key finding of my paper is that we are seeing an upscaling of liability to incorporate three new policy areas: counter-terrorism, hate speech and protection of minors. The political pressure to ‘do something’ in these three areas will create a huge amount of legal uncertainty for intermediaries, and pose a significant threat to freedom of expression.

It is important to consider the difference between the two notions of ‘liability’ and ‘responsibility’. Liability means that Internet intermediaries can be sanctioned, fined, or asked to remove content that is illegal; it is governed by the process of law.

Responsibility, on the other hand, is a new and somewhat intriguing policy notion. It implies that the intermediary takes action upon its own initiative and is self-motivated. Responsibility is therefore a proactive process: the intermediary will be making its own determinations, and will do things like scanning and monitoring, filtering, blocking and automated removals, under its own terms and conditions.
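To make the distinction concrete, here is a minimal sketch of what proactive, self-initiated moderation could look like in code. It is purely illustrative: the rules and names are invented, and no real platform’s system is being described.

```python
# Illustrative sketch only: a platform applying its own terms-and-conditions
# rules to every upload, with no court order or external legal determination.
# The rule list and the example posts are hypothetical.

TERMS_AND_CONDITIONS_RULES = [
    lambda text: "banned-phrase" in text.lower(),   # a platform-defined keyword rule
    lambda text: text.strip() == "",                # another arbitrary house rule
]

def proactive_moderation(text: str) -> str:
    """Scan a post against the platform's own rules and act on its own initiative."""
    if any(rule(text) for rule in TERMS_AND_CONDITIONS_RULES):
        return "removed"      # automated removal, decided entirely by the intermediary
    return "published"

print(proactive_moderation("an ordinary post"))             # -> published
print(proactive_moderation("this contains banned-phrase"))  # -> removed
```

The point of the sketch is that every determination happens inside the platform’s own code, under criteria the platform itself has chosen.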

This immediately raises concerns. How does the intermediary determine what content should be acted on? It is not a court, nor a judge, nor a policeman. How is it to know how to fulfil public policy objectives without any direction?

New technological advances raise new legal complexities, and even trained and experienced judges find it difficult to establish the correct legal basis for a determination. In copyright cases such as Paramount v BSkyB or Popcorn Time, for example, they have to spend considerable time unpacking the technology in order to establish the legal basis for an infringement. Some judges express concern that a ruling in favour of one industry will have the effect of harming another (see ABC v Aereo in the US), while others suggest that in making a strict ruling on copyright, wider benefits to society may be lost (consider the Stoererhaftung cases in Germany).

If judges have to make lengthy assessments of the technology in order to reach a decision, how should the intermediary do it, acting alone? This is a very pertinent question when we are looking at content ‘responsibility’ and a proposed self-regulatory environment.

Beyond copyright

My paper does not address the Copyright Directive because it was written before the leaked documents appeared, but it does pre-empt it, and I’d just like to comment briefly on Article 13 of the leaked Directive. This calls on Internet intermediaries to ‘take appropriate and proportionate measures’ to ‘prevent the availability’ of copyrighted material, ‘including through the use of effective content identification technologies’. ‘Prevent availability’ implies proactive monitoring and action. Content identification technologies means automated scanning systems such as ContentID or Audible Magic. This implies an onerous obligation on intermediaries, and militates in favour of the large content platforms, which are increasingly gaining monopoly status.
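To give a sense of what a content identification system does, here is a minimal sketch of fingerprint matching against a reference database of protected works. It is an assumption-laden illustration: real systems such as ContentID or Audible Magic use proprietary perceptual fingerprints rather than the plain cryptographic hash used here, and every name and data item below is invented.

```python
import hashlib
from typing import Optional

# Hypothetical reference database supplied by rights holders: fingerprints of
# protected works. A plain SHA-256 hash stands in for the perceptual/audio
# fingerprints real systems are understood to use.
REFERENCE_FINGERPRINTS = {
    hashlib.sha256(b"protected-work-sample").hexdigest(): "Example Film (Rights Holder X)",
}

def identify_upload(upload_bytes: bytes) -> Optional[str]:
    """Return the matched work if the upload's fingerprint is in the reference set."""
    fingerprint = hashlib.sha256(upload_bytes).hexdigest()
    return REFERENCE_FINGERPRINTS.get(fingerprint)

match = identify_upload(b"protected-work-sample")
if match:
    print(f"Blocked before publication: matches {match}")   # 'prevent availability'
else:
    print("No match found: publish")
```

Even in this toy form, the design choice is visible: the decision to block happens before publication and without any external review of whether the use might be lawful.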

Copyright takedowns exponentially outnumber all others, and until now, the big debate around intermediary liability has centred on copyright. But with the latest proposals from the European Commission there is what I would describe as an upscaling of liability going into counter-terrorism, hate speech and protection of minors, and the volume in these new areas is growing. Content take-downs for counter-terrorism have increased from a dozen requests to several thousand, and the targets are in the hundreds of thousands.

The Terrorism Directive stipulates that Member States take measures for the take-down of illegal content where the content is hosted in the EU, and calls for the blocking of access to content hosted outside the EU. The content targeted is defined as publicly inciting others to commit a terrorist offence. The directive encourages intermediaries to take voluntary preventative action against such content, and holds out a threat of sanctions against those who fail to comply with take-down or blocking orders. There is no judicial oversight for users whose content may be mistakenly removed, but there is the possibility of judicial review for service providers, although it is not clear what that means.

The directive also states that there should be transparent procedures and effective safeguards, but leaves unsaid what these should be. At the same time, it opens the way for preventative measures, which would entail scanning and monitoring of user content, and would be non-transparent and unaccountable. This is a worrisome combination of liability with responsibility.

The hate speech proposals also combine liability with responsibility in a way that gives cause for concern. Hate speech falls under two policy agendas: security on the one hand, and audio-visual media on the other. Under the EU Security Agenda, the Hate Speech Code of Conduct is a self-regulatory measure calling on the big four social media platforms to review requests for content removal and to take down offending content within 24 hours. The criteria for those reviews are not clear, and removals will be carried out with no judicial oversight.

The Audiovisual Media Services Directive

The measures proposed under the Audiovisual Media Services Directive are, I believe, the real blinder. This directive regulates old-style broadcast television; however, Article 28a brings video-sharing platforms such as YouTube under the umbrella of audio-visual media regulation, and hence imposes new content responsibilities relating to hate speech and protection of minors.

  • Hate speech is defined as ‘inciting violence or hatred with reference to race, colour or religion’.
  • Regarding protection of minors, children must be protected against any content that could impair their ‘physical, mental or moral development’.

Article 28a explicitly asks video-sharing platforms to embed these definitions in their terms and conditions – this is mandatory ‘responsibility’. Consider, however, that the Internet is not a broadcast service: it runs on code. Coding for these definitions would be impossible: they are over-broad, and prone to different interpretations in different cultures, countries, and religious communities. They will leave all users exposed to arbitrary actions by different intermediaries.

Furthermore, the obligations relate to the provider’s responsibility ‘in the organisational sphere’. On a video-sharing platform, ‘the organisational sphere’ is about indexing and search algorithms. Responsibility will mean keyword filtering, link suppression and manipulation of those search algorithms.
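As a hedged illustration of what ‘responsibility in the organisational sphere’ could mean in practice, the sketch below shows keyword filtering and link suppression applied at the search-index layer, so that flagged material is simply never surfaced. The keyword list, the index entries and the URLs are all invented for illustration.

```python
# Illustrative sketch: keyword filtering and link suppression applied to a
# platform's search index. Every keyword, title and URL here is hypothetical.

SUPPRESSED_KEYWORDS = {"flagged-term"}   # platform-chosen terms, no external oversight

SEARCH_INDEX = [
    {"title": "Ordinary video", "url": "https://example.org/v/1"},
    {"title": "Video about flagged-term", "url": "https://example.org/v/2"},
]

def search(query: str) -> list:
    """Return matching entries, silently dropping anything containing a suppressed term."""
    results = [entry for entry in SEARCH_INDEX
               if query.lower() in entry["title"].lower()]
    return [entry for entry in results
            if not any(term in entry["title"].lower() for term in SUPPRESSED_KEYWORDS)]

print(search("video"))   # the second entry is suppressed, with no notice to the uploader
```

Suppression of this kind is invisible to the user whose content disappears from search results, which is precisely why the lack of transparency and oversight matters.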

This unclear environment of mandatory and self-regulatory measures will present dilemmas for the intermediaries. They will be in trouble for taking down content, and for leaving it up. As they are not trained judges and arbitrators, they will err on the side of caution, which could signal a distinct threat to freedom of expression.

Finding a balance

With all that in mind, consider the responsibility of policy-makers to find the appropriate balance of interests. Article 10 of the European Convention on Human Rights guarantees the right to freedom of expression, including the right to receive and impart information without interference from a public authority. Interference includes filtering, blocking, and removal of content.

The second paragraph of Article 10 provides the over-arching framework for cases where policy-makers want to restrict content. That framework is set out in the case law of the European Court of Human Rights, and it is summarised in the Yildirim v Turkey ruling:

  • The rule of law is paramount
  • There should be no arbitrary justice
  • Any restriction on content must have a basis in law
  • The law must be clear and precise: it should consist not of general clauses but must be specific to the policy aim
  • It must enable users to foresee the consequences of their actions so that they can regulate their own conduct
  • The law must be accessible
  • Users must know the conditions under which restrictive powers will be exercised. They must know the level of discretion available to the authorities

To be compatible with Article 10, any restrictive measures must define the kinds of content targeted, the blocking technologies, the duration and the territorial ambit. They must be justified by a pressing social need and go only so far as is essential to address that need. They must be overseen by competent authorities, with the possibility of judicial appeal.

The responsibility for policy-makers is to find the appropriate balance within that framework, in order to safeguard the Internet as an open, innovative vibrant ecosystem that works for all.

This post gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.
