
Emma Goodman

February 21st, 2019

The DCMS Select Committee’s proposals for social media regulation: would they work?




A Government white paper from DCMS and the Home Office on internet safety is anticipated in the coming weeks, and is expected to consider the need for a social media regulator of some kind. Various reports have weighed in on this issue, most recently the final report from the UK House of Commons DCMS Select Committee’s inquiry into Disinformation and ‘fake news.’ Here Emma Goodman, who contributed to the report of the LSE Commission on Truth, Trust and Technology, discusses the Committee’s recommendations on social media regulation.

The DCMS Select Committee’s new report is the result of an 18-month inquiry into Disinformation and ‘fake news’ which received 98 written evidence submissions (including one from me and my colleagues outlining the report of the LSE Commission on Truth, Trust and Technology) and oral evidence from more than 50 individuals. An interim report was published in July 2018, to which the Government responded in October 2018. At the time, Committee chair Damian Collins expressed disappointment with the Government’s response, which left several recommendations without clear answers.

Much of this week’s coverage of the report has focused on its somewhat incendiary approach to Facebook, which it describes as “digital gangsters.” The report reiterated the Committee’s fury at Mark Zuckerberg’s refusal to appear before it, either in person or by video link, complaining of the company’s lack of accountability. “The age of inadequate self-regulation must come to an end,” said Collins in a statement. But what does the report actually propose that is new in terms of social media regulation, and would it work?

Liability for ‘harmful’ content

Firstly, the report recommends that social media/tech companies be placed in a new category for the purposes of liability for content, one that is neither platform nor publisher. Currently, under the 2000 E-Commerce Directive, tech companies are viewed as conduits or hosts of information and are therefore exempt from liability for illegal content published on their platforms as long as they are unaware of it. Once aware, they must take action to remove it. The report calls for tech companies to “assume legal liability for content identified as harmful after it has been posted by users.” This doesn’t sound like a significant change in terms of when they become liable, but it does suggest that they are responsible for content that is harmful rather than simply illegal. Our Truth, Trust and Technology Commission (T3) report also suggested a hybrid or intermediate category for social media companies, based on the idea of introducing a statutory duty of care, but stressed that such concepts are in the early stages of development.

Index on Censorship is rightly concerned about the lack of clarity on what constitutes “harmful” content online, pointing out that nobody has yet been able to come up with a definition of harmful content that goes beyond definitions of speech and expression that are already illegal. The organisation’s statement stresses the need for the protection of freedom of speech when considering regulation that could be potentially restrictive.

According to Sky News, which cites “sources close to the legislative process,” the government intends to “define a list of so-called “online harms” – harms which are legal but harmful – then ask platforms to come up with ways of reducing them.” But despite 79% of UK adults having concerns about going online and 45% reporting that they have encountered online harm, it can be harder than you might think to find clear evidence of such harm, even where children are concerned.

The Committee’s report calls for the “urgent” establishment of a compulsory Code of Ethics for tech companies, similar to the broadcasting code issued by Ofcom, that would require the removal of harmful and illegal content that has been referred to the companies for removal by users, or that “should have been easy for tech companies themselves to identify.” Companies would be liable for such content and should have systems in place to highlight and remove harmful content, as well as to ensure that adequate security structures are in place. An independent regulator would oversee the process, with the capacity to demand relevant information and access from the tech companies. If a tech company is found to be failing to meet its obligations under the code, the regulator should be able to launch legal proceedings against it, with the threat of large fines.

As fact-checking organisation Full Fact stressed, any mandatory code developed should have input from the public, as well as from tech companies and government. The proposed Code of Ethics would be developed by “technical experts,” the report says.

It is unclear from the report whether this regulatory function would be given to an existing body or would require a new one, but Collins has suggested that powers could be given to Ofcom, which published a paper in September 2018 outlining how lessons from the regulation of standards in broadcasting might help to inform policymaking on regulating online harms. Regulation of broadcasting and regulation of online harms are very different, however, with the latter being arguably far more complex. Ofcom CEO Sharon White outlined some of these differences in a speech.

Our T3 Commission report called for tech companies in the information space to be overseen by an independent regulator, but suggested the establishment of a new body that could coordinate all efforts to improve the online information environment. At first, this body would take on an oversight and advisory role rather than a regulatory one.

Tech companies’ role in improving digital literacy

Improving the digital/media literacy of the public is frequently suggested as a solution to the problem of the spread of disinformation, although as my colleague Sonia Livingstone has emphasised, it is not a straightforward solution. One of the report’s less controversial recommendations repeats the inquiry’s interim report’s call for the government to instigate the creation of a “comprehensive educational framework,” available online, that would inform people of the implications of sharing their data willingly, their rights over their data, and ways in which they can constructively engage and interact with social media sites.

The report also calls for tech companies and their algorithms to play their part in enhancing digital literacy levels. One suggestion – interesting, though it could be seen as rather paternalistic – proposes the re-introduction of “friction” into the online experience of consuming, generating and sharing information, in the hope of giving people more time to think about what they are writing, and to “make the process of posting or sharing more thoughtful or slower.” In practice, this could mean requiring a user to read an article in full before being given the option to share it, for example, or requiring someone to write something about a post in order to share it.

Part of the compulsory Code of Ethics would require social media companies to develop or use tools that help users distinguish between ‘quality’ journalism and stories coming from organisations that have been linked to disinformation or are regarded as unreliable sources, the report says. This echoes the recent report of the Cairncross Review, which recommends that the government oblige larger online platforms to help users “identify what ‘good’ or ‘quality’ news looks like” by making clearer the origin of each article and the trustworthiness of its source. This process should be regulated, the Review suggests, and in time the regulator should develop a set of best practices.

Although involving the tech companies in facilitating the public’s understanding of where content comes from is a worthy aim, the concept of rating ‘quality’ journalism is beset with problems, immediately raising the questions of how quality could be defined and who would make such decisions. As Oxford’s Rasmus Kleis Nielsen said to the Economist, “I can’t think of another democracy in which there is a call for regulatory oversight of what constitutes ‘quality’ in the news.” The Select Committee’s report mentions NewsGuard as an example of a tool that provides green or red trust/reliability ratings to news outlets. NewsGuard does an admirable job of explaining both its rating system and why its work can be trusted, but it is not without controversy, and the idea that such a system would be legally mandated raises concerns about inadvertent censorship.

Funding

The Committee’s report recommends, as do our T3 report and others that make similar proposals, that the work of regulating social media and improving the information landscape be funded by a levy on tech companies. Part of the levy, the report suggests, would go towards supporting the “enhanced” work of the Information Commissioner’s Office (ICO) as a “sheriff in the Wild West of the internet” and towards anticipating future technologies.

Conclusion

The DCMS Committee’s report is clear that something urgently needs to be done to tackle the spread of disinformation, which is particularly pernicious when it plays a role in elections or in spreading inaccurate stories related to healthcare. The report contains a good deal of valuable information that I have not discussed here, both on the impact on elections, including the extent of foreign interference, and on serious data protection issues. Its emphasis on the need for more transparency in the way that tech companies participate in the information society is also welcome.

But it does focus rather heavily on how to get ‘harmful’ speech off the platforms, rather than on trying to improve the information experience overall. For example, the report cites the fact that one in six Facebook moderators now works in Germany following the implementation of the NetzDG law, which allows fines of up to €20 million if (illegal) hate speech is not removed within 24 hours, as “practical evidence that legislation can work.” Arguably, however, although this is indeed evidence that Facebook is taking the legislation seriously, it does not show that the law is actually working: that the ‘right’ content is being removed, and that harm is therefore being prevented. The law has attracted criticism from human rights and freedom of expression groups, and as Lubos Kuklis pointed out with reference to the European Commission’s efforts to tackle hate speech, we shouldn’t necessarily celebrate more takedowns as a marker of unequivocal success.

See our summary of policy responses to the information crisis here. This article gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.

