
Emma Goodman

April 10th, 2019

The Online Harms White Paper: its approach to disinformation, and the challenges of regulation


A new Online Harms White Paper, published jointly by the UK government’s Department for Digital, Culture, Media and Sport and the Home Office, calls for a new system of regulation for tech companies with the goal of preventing online harm. In brief, the paper (which outlines government proposals for consultation in advance of new legislation) calls for an independent regulator that will draw up codes of practice for tech companies, outlining their new statutory “duty of care” towards their users, with the threat of penalties for non-compliance including heavy fines, naming and shaming, the possibility of being blocked, and personal liability for managers. It describes its approach as risk-based and proportionate. Here, Emma Goodman looks at what the Online Harms White Paper proposes to do about disinformation.

Concerns about disinformation have been a significant driver behind the push for more regulation of social media companies and other online tech giants. One of the main fears is, of course, its role in manipulating voters during election campaigns, which has been the focus of a good deal of attention in advance of the European elections in May. Inaccurate information about health topics such as vaccination safety is also seen as an urgent issue, as outbreaks of measles occur across Europe and the US. In areas of the world with less robust democratic institutions, meanwhile, disinformation can have disastrous consequences: in Myanmar, for example, military personnel have been accused of using propaganda and disinformation on Facebook in a coordinated campaign to incite violence, murder and forced migration.

The spread of disinformation and the subsequent lack of public trust in information was a key motivation behind our LSE Commission on Truth, Trust and Technology, which identified confusion, cynicism, fragmentation, irresponsibility of tech companies, and apathy as the five ‘evils’ of the information crisis. We proposed the creation of an independent agency with an ‘observatory and policy advice’ function, thereby establishing a permanent institutional presence to encourage the various initiatives attempting to address problems of information reliability. The White Paper goes further than this.

 

What does the White Paper propose to do about disinformation?

Disinformation is one of the “harms with a less clear definition” outlined in the new white paper’s “initial” list of harms, along with cyberbullying and trolling, extremist content and activity, coercive behaviour, intimidation, violent content, advocacy of self-harm and promotion of female genital mutilation. These are listed alongside the “harms with a clear definition” which appear to comprise illegal content, such as child sexual exploitation and abuse, or terrorist content.

The white paper defines online disinformation as “spreading false information to deceive deliberately,” as opposed to misinformation, which it defines as “the inadvertent sharing of false information.” While noting that any inaccurate information can be harmful, the paper explains that the government is particularly worried about disinformation. In detailing the harm it causes, the paper cites a study from Oxford’s Computational Propaganda Project that found evidence of organized social media manipulation campaigns in 48 countries in 2018, and describes the Russian State as a major source of disinformation. It also expresses concern about the use of artificial intelligence both for generating fake content, and for targeting and manipulating voters.

The white paper outlines its high-level expectations of companies, which it expects the regulator to reflect in new codes of practice. To fulfill the “duty of care” with regard to disinformation, the paper states that companies will need to take “proportionate and proactive measures” to:

  • Help users understand the nature and reliability of the information they are receiving. This could include ensuring that it is clear to users when they are dealing with automated accounts, and improving the transparency of political advertising.
  • Minimize the spread of misleading and harmful disinformation. It suggests using fact-checking services and making disputed content less visible to users. Companies will be required to ensure that algorithms selecting content do not skew towards extreme and unreliable material in the pursuit of sustained user engagement.
  • Increase the accessibility of trustworthy and varied news content. This could be by promoting “authoritative” news sources, and by promoting diverse news content to counter the ‘echo chamber’ effect.

The paper says that the government is considering the ‘news quality obligation’ that the Cairncross Review suggests imposing on social media companies. This would require companies to improve how users understand the origin of a news article and the trustworthiness of its source.

Risks of regulation of online information and communication

The paper stresses that the code of practice will ensure that the focus is “on protecting users from harm, not judging what is true or not.” This is a commendable approach, but when tackling disinformation, it is hard to imagine that there won’t be frequent occasions when such judgments will have to be made.

The impact of regulation on freedom of speech is a major concern for freedom of expression and human rights groups, and although the paper commits to working with civil society and industry to ensure that action is effective but doesn’t “detract from freedom of speech online,” this will likely be a challenging balancing act. At the white paper launch event, Home Secretary Sajid Javid said that as well as removing harmful content, companies might be required to stop some content from being uploaded in the first place, which could be particularly worrying from a freedom of speech perspective as it would likely involve the use of monitoring and filtering and “a restrictive approach to content removal,” as Article 19 warns.

Index on Censorship is clear that the white paper’s proposals pose “serious risks,” stressing that the scope of the right to freedom of expression includes “speech which may be offensive, shocking or disturbing,” and that these proposals might lead to “disproportionate amounts of legal speech being curtailed.” The organisation also suggests that the duty of care model, which treats social media as a kind of public square, is not an exact fit “because it would regulate speech between individuals based on criteria that is far broader than current law.” It can also be argued that the proposals violate more than one EU-level law, as Lilian Edwards explains.

One of the key issues around addressing online harms is the evidence, or lack of it. The white paper says that it takes a risk-based approach, which is encouraging, but only if sufficient evidence is available to assess risk. Index is concerned by the lack of evidence of the likely scale of each individual harm, and of the measures’ likely effectiveness, emphasising that the wide range of different harms requires tailored responses. This concern is echoed by Open Rights Group’s director Jim Killock, who comments that establishing a relationship between content and harm is “in practice incredibly difficult,” and asks what the evidence threshold for establishing harm might be.

Another question is how to ensure that attempts to prevent harm are proportionate to the risks. As Secretary of State for DCMS Jeremy Wright said at the launch, the effects of online harms are most acute for the most vulnerable in our society, including children. While it is true that the most vulnerable need most protection, how to sufficiently protect them without over-protecting the less vulnerable is also a challenge.

There will also be a need for more clarity on how to address problematic content that might be spread with differing intentions. It is not clear, for example, what would happen to anti-vaccination content. “Vaccine hesitancy” is one of the World Health Organisation’s top ten threats to global health for 2019, and there has been a significant increase in measles cases, with outbreaks in several countries that had been close to eliminating the disease. This has coincided with growing awareness of the volume of anti-vaccination messages spread on social media. Exactly how the two phenomena are related is not yet clear, but the link is enough to cause concern, and social networks have been implementing their own initiatives. Would anti-vaccination material be classed as disinformation, even though those who share it might believe it to be true?

Further questions remain and many will not be answered until a regulator is in place. The White Paper is now subject to a 12-week consultation period: you can contribute here (closes 1 July).

This article represents the views of the author, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science. 


Posted In: Internet Freedom | LSE Media Policy Project | Truth, Trust and Technology Commission
