
Blog Administrator

July 16th, 2018

House of Lords Communications Committee Inquiry “The Internet: to regulate or not to regulate?”. An overview of the evidence, Part 1


Oscar Davies is a media lawyer who will start pupillage at One Brick Court in October 2019. In this blog, he provides a summary of some of the evidence that has been submitted to the House of Lords Communications Committee as part of its inquiry into online regulation. This post was first published on Inforrm, and is reproduced here with permission and thanks.

The House of Lords Communications Committee has launched an inquiry into how the regulation of the internet should be improved. Oral and written evidence has been provided to the Committee by a very wide range of companies, NGOs and individuals, representing a variety of perspectives.

The Committee will continue to receive evidence until September 2018 and intends to report towards the end of 2018. The members of the Committee are Lord Gilbert of Panteg (Chairman), Lord Allen of Kensington, Baroness Benjamin, Baroness Bertin, Baroness Bonham-Carter of Yarnbury, The Lord Bishop of Chelmsford, Baroness Chisholm of Owlpen, Viscount Colville of Culross, Lord Goodlad, Lord Gordon of Strathblane, Baroness Kidron, Baroness McIntosh of Hudnall, and Baroness Quin.

The evidence received by the Committee is a very good introduction to the vital contemporary debate on the regulation of the Internet and in this series of posts I have set out to summarise it and draw attention to some of the more interesting contributions.

The call for evidence explains the purpose of the inquiry: to explore how internet regulation should be improved. This includes better self-regulation and governance, and whether a new regulatory framework for the internet is necessary. The accountability, transparency and adequacy of online platforms’ governance of individuals’ content are all to be considered. The Government’s Digital Charter seeks to ‘make the UK the safest place to be online’ and to ensure that ‘the UK should lead the world in innovation-friendly regulation’.

The call for evidence lists nine set questions to be answered by those submitting evidence, as follows:

  1. Is there a need to introduce specific regulation for the internet? Is it desirable or possible?
  2. What should the legal liability of online platforms be for the content that they host?
  3. How effective, fair and transparent are online platforms in moderating content that they host? What processes should be implemented for individuals who wish to reverse decisions to moderate content? Who should be responsible for overseeing this?
  4. What role should users play in establishing and maintaining online community standards for content and behaviour?
  5. What measures should online platforms adopt to ensure online safety and protect the rights of freedom of expression and freedom of information?
  6. What information should online platforms provide to users about the use of their personal data?
  7. In what ways should online platforms be more transparent about their business practices—for example in their use of algorithms?
  8. What is the impact of the dominance of a small number of online platforms in certain online markets?
  9. What effect will the United Kingdom leaving the European Union have on the regulation of the internet?

This part of the post will deal with the responses to the first three questions. In the second and third posts I will deal with questions 4-6 and 7-9 respectively.

1. Is there a need to introduce specific regulation for the internet? Is it desirable or possible?

Several of the responses started by emphasising that a number of laws already regulate the internet; the question then becomes how, rather than whether, to regulate it. Oath, a house of media brands including Huff Post, Tumblr and Yahoo News, notes the difficulty of regulating the internet, which it describes as a ‘non-linear and complex ecosystem’. Google and the Internet Service Providers’ Association (ISPA UK) consider that the internet is far from a ‘wild west’, as the culture secretary, Matt Hancock, described it in May 2018.

Rather, the starting point for internet regulation for many submitting evidence is the E-Commerce Directive, which sets out harmonised rules for online businesses and gives platforms significant responsibilities to remove illegal content when notified. Also mentioned by the ISPA as regulators are the Internet Watch Foundation (IWF, a self-regulatory body founded by the Internet industry that tackles online child sexual abuse content), Counter-Terrorism Internet Referral Unit (CTIRU), and the Defamation Act 2013.

Her Majesty’s Government (HMG) noted that two landmark pieces of legislation came into force in May 2018 to keep up with changes in technology: the Data Protection Act 2018 and the Network and Information Systems Regulations 2018. Further, the Digital Charter’s purpose is ‘to make the internet work for everyone – for citizens, businesses and society as a whole’. The BBC welcomes the Charter as an important opportunity to:

  • ‘identify the guiding principles for Government, industry and civil society groups, that can act as a reference point for now and in the future, and
  • in so doing, provide certainty to digital businesses as a significant proportion of its regulatory context shifts from EU into UK law, including some principles fundamental to effective competition.’

The BBC noted how fake news has highlighted issues with platforms such as Facebook, Google and Twitter, which have tended to give such material unmediated access to large audiences. Channel 4 notes that whilst TV is a heavily regulated medium, the same cannot be said for the online world, ‘where legislation has failed to keep pace as digital online platforms have grown rapidly, unchecked, despite their increasing importance and influence in our everyday lives.’

In contrast, Facebook states that it has taken ‘significant steps’ to self-regulate and to ensure that harmful content is either removed or prevented from reaching the platform: ‘We are not waiting for legislators and regulators to devise new forms of regulation.’

Unsurprisingly, Google warns that ‘sweeping liability reform’ could ‘force platforms to pre-vet all the content that users upload, and would inevitably suffocate much of what is a vibrant digital world’. Google states that if liability for users’ actions and content online is shifted onto intermediaries, this would have a ‘severe chilling effect’ on the access to and hosting of legitimate speech and would narrow the information and content available via the open web. It may also make users less responsible for the content they produce, thereby undermining incentives towards good online citizenship and appropriate user behaviour. Trip Advisor appears to oppose new regulation, alleging that it would ‘stifle the dynamism and the innovation of the digital economy making [it] potentially more difficult for new players to emerge.’

It is perhaps unsurprising that current powerful players like Facebook and Google prefer self-regulation and keeping the status quo, rather than welcoming statutorily imposed regulation. It could be argued that it is in their interests to preserve their dominant position.

The ISPA takes a middle ground, suggesting that a combination of legislation and self-regulation is most appropriate for the future regulation of the internet. Oath similarly suggests that a ‘constructive engagement between government and companies is key’.

2. What should the legal liability of online platforms [i.e. intermediaries] be for the content that they host?

In considering the liability of online platforms, Global Partners Digital (GPD) notes that it is important to recall that states have an obligation under Article 19 of the International Covenant on Civil and Political Rights (ICCPR) to respect, protect and fulfil the right to freedom of expression. GPD notes that since online platforms are increasingly important to how people express themselves, governments should ensure that any legislation enacted does not inappropriately restrict freedom of expression or create a chilling effect.

Most of the parties submitting evidence for this question noted that the legal liability of online platforms is currently governed by Articles 12-15 of the E-Commerce Directive (ECD). This lays out three categories of providers: (i) mere conduits; (ii) caching providers; and (iii) hosts. Hosting providers are liable for content when they take an active role in presenting and publishing it, as the European courts have confirmed. Sky adds that under the Directive, hosting providers are liable if they fail to remove illegal content expeditiously once aware of it; a recent ruling in the Netherlands set this at 30 minutes in relation to copyright infringement of live sports. Whilst Sky sees an ‘urgent need’ to refresh or replace the E-Commerce Directive because it is almost 20 years old, Oath claims that the regime was purposefully ‘forward-looking’: to replace it would shift liability for online offences to intermediaries, ‘laden with practical consequences for the ecosystem at large’. Further, Oath contends that this would remove a ‘long-standing common law principle’ of having the law apply the same online as offline.

Trip Advisor similarly considers that the current liability regime for online platforms proves to be ‘both balanced and efficient’: ‘making online platforms automatically liable for illegal content they host will be detrimental to innovation’ in that ‘more content than necessary may likely be blocked by the online platforms’. It is argued that the relevance of review websites such as Trip Advisor would be at stake ‘if negative reviews that could be potentially perceived as defamatory are more systematically not published due to the high legal risk and possible fines’. Consequently, Trip Advisor concludes:

‘The rule should continue to be that, in case of doubt regarding the defamatory or the illegal nature of a content, the online platform should leave it until a court or an enforcement agency decides on the illegal nature of the content and notifies the platform to remove it. In such cases, only if the online platform fails to remove the notified content, should it be held liable.’

In contrast, Professor of Internet Law Lorna Woods likens social media platforms to public spaces and suggests that a statutory duty of care should be imposed on social media service providers with over 1,000,000 users in the UK. The duty would require providers to take steps to reduce the level of harm, with compliance monitored by an independent regulator.

Professor Christopher Marsden considers there to be three alternatives:

‘One is not to regulate, but of course that means that the world develops without regulation. The second is that we can regulate all the platforms that we might be concerned about. The third is to regulate only the dominant platforms.’

He suggests that the problem with the third option is that regulating a stable duopoly or oligopoly of companies (Facebook, Google, Twitter etc.) may perpetuate that duopoly or oligopoly, as new market entrants will be harder to regulate.

The government’s stance is that online platforms need to take responsibility for the content they host and must proactively tackle harmful behaviours and content on their platforms. In contrast, Mark Stephens, a partner at law firm Howard Kennedy, said in his oral evidence that ‘giving intermediary liability is not very helpful.’

Facebook states that ‘removing intermediary liability protections would invite abuse because intermediaries would have strong incentives to comply with all removal requests’. However, the social media platform also appreciates why policy makers are exploring how platforms should be regulated, and states that it will be working with the government to ensure more positive experiences online.

3. How effective, fair and transparent are online platforms in moderating content that they host? What processes should be implemented for individuals who wish to reverse decisions to moderate content? Who should be responsible for overseeing this?

The Competition and Markets Authority (CMA) states that online platforms should ‘at a minimum be transparent with consumers about what data about them is being collected, who it is shared with, and the use to which it is put.’ The CMA acknowledges that under the Unfair Trading Regulations 2008 ‘it is required to show that a failure of transparency has an actual or potential effect on the economic decision-making of the average consumer before it can take enforcement action.’

Whilst the CMA supports greater transparency where this promotes competition and innovation, it warns that transparency alone is not always sufficient to effectively protect consumers. For example, transparency will not be sufficient where the practice is unlawful in itself. It is also noted that in some cases, requiring disclosure of the algorithm itself could reduce business incentives to invest in developing their proprietary algorithms and thus risk stifling innovation.

Microsoft views the concept of accountability as more important than transparency (publishing billions of lines of code may be transparent in a sense but would not be helpful to the public). On a similar note, Mark Bunting of Communications Chambers, a firm specialising in media and telecoms policy, said that ‘the aspiration would be that we know a bit more about what the algorithms are trying to solve and what are the data on which they have trained those algorithms’. We should therefore be focusing, in his view, on the steps platforms have taken to ensure that the algorithms are working as intended, and on how they measure success and report it against those objectives:

‘For example, if there is an algorithm to detect extremist content on YouTube, my question would be, “YouTube, what have you done to assess whether that algorithm is working effectively both in capturing content that genuinely is extremist when qualified people look at it, and in not capturing all sorts of material that is legal content and has just inadvertently fallen foul of an algorithm?”’

The Government acknowledges that the largest social media companies have already brought forward transparency reports, providing a picture of the actions taken in the process of content moderation. Google published a transparency report earlier this year looking at how YouTube deals with content on its platform. Facebook published an expanded transparency report on 15 May showing that it took action on 1.9 million pieces of ISIS and al-Qaeda content, took down 837 million pieces of spam, and disabled about 583 million fake accounts. However, even Facebook admitted that it needs to increase transparency in a number of ways, such as publishing more information about how its algorithms work, increasing users’ control over their experience, and working on the intersection of AI and ethics.

Despite this apparent transparency, the government noted that the Internet Safety Strategy consultation highlighted that users are concerned about reporting of content on social media platforms:

‘Only 41% of respondents to the survey (66 individuals) thought that their reported concerns were taken seriously by social media companies, showing a lack of confidence that platforms are moderating content effectively.’

In order to tackle these problems, the government has put forward a draft code of principles that social media providers should adhere to in order to tackle harmful content and conduct online: ‘By establishing common standards, companies will understand how they should promote safety on their platforms, and users will know what to expect when things do go wrong.’ This code of practice aims to create a more positive user experience online, covering areas such as clear and transparent reporting practices, clear explanations in response to complaints, and clear and understandable terms and conditions. The government states that this code is ‘a means to an end, and not an end in [itself]’, and that it will only be effective if complied with – in practice – by companies.

This article gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science. This blog was first published on Inforrm.
