Blog Administrator

August 20th, 2018

House of Lords Communications Committee Inquiry “The Internet: to regulate or not to regulate?”

Oscar Davies is a media lawyer who will start pupillage at One Brick Court in October 2019. In this blog, he provides the second part of his overview of the evidence submitted for the House of Lords Communications Inquiry “The Internet: to regulate or not to regulate?” (Part 1 can be found here). Responses to questions 4 to 6 of the Call for Evidence will be considered here. This blog was first published on Inforrm and is reposted here with permission and thanks.

4. What role should users play in establishing and maintaining online community standards for content and behaviour?

There was a general consensus among those who submitted evidence that users should, at a bare minimum, be able to flag content that violates a platform’s community standards. Google mentions its ‘rigorous reporting processes’, and states that:

‘We want to act quickly when users inform us of content that might violate our policies, so we have pledged to continue the significant growth of our teams with the goal of bringing the total number of people across Google working on this to over 10,000 in 2018.’

On this point, however, TripAdvisor warns that whilst users should be able to flag any problem with content for the online platform to analyse internally, ‘this “right of flagging” cannot be understood as a “right of deletion”’. The review website continues:

‘The online platform must remain free to keep or take down the content concerned, where the alleged “illegality” of that content is not apparent. In such cases, only a court order or a decision by an enforcement authority should compel the platforms to take down the content.’

Her Majesty’s Government (HMG) looks at the role of users from a different angle, taking into account the safety of both young and old members of the online community. As set out in its green paper, HMG is ‘committed to equipping parents with the information to help prevent online harms’. Schools were also mentioned as playing ‘an important role in supporting children when they have suffered the impacts of online harms from cyberbullying and exposure to terrorist material, to online abuse or to sexting.’

To protect older people online, HMG is working to address financial abuse in a number of ways. For example, the ‘Take Five’ communications campaign, led by the Home Office and UK Finance, equips the public to more confidently challenge fraudulent approaches, including via email or online, with a focus on the over 65s.

5. What measures should online platforms adopt to ensure online safety and protect the rights of freedom of expression and freedom of information?

Through its Digital Charter and Internet Safety Strategy work, HMG aims to ‘develop a defined set of responsibilities for social media companies that provide clarity on the safety measures we expect within a well-functioning digital economy’.

Open Rights Group reminds its readers that ‘criminal law applies online as well as offline… Measures of disruption are problematic because they evade the prosecution of criminals, and the rights of redress and due process.’

The Internet Service Providers’ Association (ISPA) maintains that intermediaries should not be asked to be judge and jury and that notices should be filed by competent authorities, ‘ideally a court or other independent and impartial body qualified and with legitimacy to make these kinds of decisions’. It goes on to state:

‘Furthermore, content control mechanisms should always respect due process and be backed by some form of statute, with removal-at-source as the default content control measure, with access blocking to be used as a targeted and temporary resort in certain circumstances. If trusted flagging mechanisms are used, clear standards and rules should be provided by the Government in order to avoid the infringement of rights.’

Oath similarly describes the recent trend of companies adjudicating complex cases involving speech rights and other frameworks as a ‘notable expansion on the traditional understanding of “self-regulation”’. It urges the committee to ‘acknowledge that it remains important that the courts should step in to adjudicate on complex cases and develop case law which can inform future policy and practice’.

In contrast, techUK supports the current state of affairs, stating that ‘the current legal regime for platforms balances the rights of freedom of expression and information with the responsibility to ensure that illegal content is removed’. It goes on to say that the government’s initiative in the Internet Safety Strategy to provide small start-ups and app developers with more information to ensure that they can “think safety first” and build in safety measures is ‘a welcome measure’.

When Policy Exchange asked who was responsible for controlling, or removing, extremist content online, by far the most popular answer (72%) was ‘the companies that provide website content, such as Facebook, Google etc’. Respondents could give more than one answer, and other popular options were: ‘the government’ (53%); ‘the companies that provide access to the internet’ (49%); and ‘individual internet users’ (36%).

When asked for their views on different ways in which the internet might be regulated, only 15% of Policy Exchange respondents expressed support for self-regulation of the kind that currently exists.

6. What information should online platforms provide to users about the use of their personal data?

The starting point for many is the General Data Protection Regulation (GDPR), implemented in May of this year. This is supposed to give people ‘more control over use of their data’, while ‘providing them with new rights to move or delete personal data’ (HMG). Oath describes the GDPR as ‘the most comprehensive law on transparency in the world today’, though it also notes that it may be premature to opine on the Regulation’s success.

The BBC considers the currently proposed EU Regulation on the ‘promotion of fairness and transparency in online intermediated trade’ important, as it aims to create a fair balance on data transparency between platforms and businesses. The BBC, arguably a public body itself, adds that ‘access to data is vital to enable the BBC to deliver its public purposes’.

This question is particularly interesting in relation to Facebook in light of the Cambridge Analytica scandal, in which some 30 million Facebook users were not told that their data was being harvested and given to third parties, despite Facebook having found out about the data use in late 2015. Perhaps conscious of this, Facebook states that ‘In recent weeks we have introduced measures to make more clear the existing tools that users have to control their data and provided details of the further steps we are taking in this area.’ For example, its new Settings menu and Privacy Shortcuts allow users to see their data, delete it, and easily download and export it. Further, a new tool called ‘Clear History’ enables users to see the websites and apps that send Facebook information when they interact with them, to delete this information from their account, and to turn off Facebook’s ability to store it associated with the user’s account in future.

This is the end of Part 2. Part 3 of this series will consider the final three questions (7 to 9) from the Call for Evidence.

This post gives the views of the author, and is not the position of the Media Policy Project nor of the London School of Economics.
