
Catherine Speller

May 10th, 2018

Reducing Harm In Social Media Through A Duty Of Care


This blog first appeared on the Carnegie UK Trust website on 8 May 2018. It is the fourth in a programme of work on a proposed new regulatory framework to reduce the harm occurring on and facilitated by social media services. The authors William Perrin and Lorna Woods have vast experience in regulation and free speech issues. William has worked on technology policy since the 1990s, was a driving force behind the creation of Ofcom and worked on regulatory regimes in many economic and social sectors while working in the UK government’s Cabinet Office. Lorna is Professor of Internet Law at University of Essex, an EU national expert on regulation in the TMT sector, and was a solicitor in private practice specialising in telecoms, media and technology law. The blog posts form part of a proposal to Carnegie UK Trust and will culminate in a report later in the Spring.

In our previous blog post we suggested placing a ‘duty of care’ on social media service providers as regards their users. We also set out a technical overview of the regulatory regimes we considered in arriving at a duty of care. In this blog we expand on the idea of a duty of care for non-specialist readers (and apologise to specialist readers for over-generalisations in a highly technical area).

Duty of care

The idea of a “duty of care” is straightforward in principle. A person (including a company) under a duty of care must take care in relation to a particular activity as it affects particular people or things. If that person does not take care and someone comes to harm as a result, then there are legal consequences. A duty of care does not require a perfect record – the question is whether sufficient care has been taken. A duty of care can arise in common law (in the courts) or, as our blog on regulatory models shows, in statute (set out in a law). It is this latter statutory duty of care we envisage. For statutory duties of care, as our blog post also set out, while the basic mechanism may be the same, the details in each statutory scheme may differ – for example, the level of care to be exhibited, the types of harm to be avoided and the defences available in case of breach of duty.

Social media services are like public spaces

Many commentators have sought an analogy for social media services as a guide for the best route to regulation. A common comparison is that social media services are ‘like a publisher’. In our view the main analogy for social networks lies outside the digital realm. When considering harm reduction, social media networks should be seen as public places – like an office, a bar or a theme park. Hundreds of millions of people go to social networks owned by companies to do a vast range of different things. In our view, they should be protected from harm when they do so.

The law has proven very good at this type of protection in the physical realm. In the UK, workspaces, public spaces, even houses owned or supplied by companies have to be safe for the people who use them. The law imposes a ‘duty of care’ on the owners of those spaces. The company must take reasonable measures to prevent harm. While the company has freedom to adopt its own approach, the issue of what is ‘reasonable’ is subject to the oversight of a regulator, with recourse to the courts in case of dispute. If harm does happen, the victim may have rights of redress in addition to any enforcement action that a regulator may take against the company. By making companies invest in safety, the market works better: the company bears the full costs of its actions, rather than getting an implicit subsidy when society bears the costs.

A broad, general, almost futureproof approach to safety

Duties of care are expressed in terms of what they want to achieve – a desired outcome (ie the prevention of harm) – rather than necessarily regulating the steps – the process – of how to get there. This means that duties of care work in circumstances where so many different things happen that you couldn’t write rules for each one. This generality works well in multifunctional places like houses, parks, grounds, pubs, clubs, cafes and offices, and has the added benefit of being to a large extent futureproof. Duties of care set out in law 40 years or more ago still work well – for instance, the duty of care owed by employers to employees under the Health and Safety at Work Act 1974 continues to perform well, despite today’s workplaces being profoundly different from those of 1974.

In our view the generality and simplicity of a duty of care works well for the breadth, complexity and rapid development of social media services, where writing detailed rules in law is impossible. By taking an approach similar to that already applied to corporate-owned public spaces, workplaces, products etc in the physical world, harm can be reduced on social networks. Making owners and operators of the largest social media services responsible for the costs and actions of harm reduction will also make markets work better.

Key harms to prevent

When Parliament sets out a duty of care, it often sets down in law a series of prominent harms, or areas that cause harm, that it feels need a particular focus, as a subset of the broad duty of care. This approach has the benefit of guiding companies on where to focus and makes sure that Parliament’s priorities are not lost.

We propose setting out the key harms that qualifying companies have to consider under the duty of care, based in part on the UK Government’s Internet Safety Green Paper. We list here some areas that are already criminal offences – the duty of care aims to prevent an offence happening and so requires social media service providers to take action before activity reaches the level at which it would become an offence.

Harmful threats – statements of an intention to cause pain, injury, damage or other hostile action such as intimidation; psychological harassment; threats of a sexual nature; threats to kill; and racial or religious threats, known as hate crime, ie hostility or prejudice based on a person’s race, religion, sexual orientation, disability or transgender identity. We would extend hate crime to include misogyny.

Economic harm – financial misconduct, intellectual property abuse

Harms to national security – violent extremism, terrorism, state-sponsored cyber warfare

Emotional harm – preventing emotional harm suffered by users such that it does not build up to the criminal threshold of a recognised psychiatric injury, for instance through aggregated abuse of one person by many others in a way that would not happen in the physical world (see Stannard 2010 on emotional harm below a criminal threshold). This includes harm to vulnerable people – in respect of suicide, anorexia, mental illness etc.

Harm to young people – bullying, aggression, hate, sexual harassment and communications, exposure to harmful or disturbing content, grooming, child abuse (See UKCCIS Literature Review)

Harms to justice and democracy – preventing intimidation of people taking part in the political process beyond robust debate, and protecting the criminal and trial process (see the Attorney General and the Committee on Standards in Public Life)

We would also require qualifying social media service providers to ensure that their service was designed in such a way as to be safe to use, including at a system design level. This represents a hedge against unforeseen developments, as well as aggregating the prevention of the harms above. We have borrowed the concept from risk-based regulation in the General Data Protection Regulation and the Health and Safety at Work Act, which both, in different ways, require activity to be safe or low risk by design.

People would have rights to sue eligible social media service providers under the duty of care; for the avoidance of doubt, a successful claim would have to show a systemic failing rather than rest on an isolated instance of content. But, given the huge power of most social media service companies relative to an individual, we would also appoint a regulator. The regulator would ensure that companies have measurable, transparent, effective processes in place to reduce harm, so as to help avoid the need for individuals to take action in the first place. The regulator would have powers of sanction if companies did not.

We shall set out more detail on issues such as the regulator, which services would qualify etc in forthcoming blog posts that may be more technical than this one, and ultimately in a publication. We are always happy to receive people’s views and suggestions at comms@carnegieuk.org, and again repeat our apologies to specialists for our over-generalisation here.

This article gives the views of the authors and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.

About the author

Catherine Speller

Posted In: Internet safety | LSE Media Policy Project
