
Guest author

September 14th, 2018

It’s time to end the wild west of the web


Rapidly developing technology and the ubiquity of social media mean that there is an increasing risk of child sexual exploitation or abuse online. A recent NSPCC FOI investigation found more than 3,000 police-recorded offences for sexual communication with a child in England and Wales in 2017/18 alone. In this post, Pooja Kumari introduces the NSPCC’s new Wild West Web campaign, which aims to ensure that the government introduces robust and effective regulation of social media platforms. Pooja is a Senior Policy Officer for Child Safety Online at the NSPCC.

Technology brings numerous opportunities to children, but it also opens up an array of potential harms. Rapidly developing technology creates new opportunities to initiate, maintain and escalate abuse, yet for too long social networks have been allowed to treat child safeguarding as an optional extra. The result is that children are exposed to unacceptable risks in the spaces where they play, learn and socialise. After a decade of inaction from social media companies, the challenge we face is huge, but not insurmountable. With the government’s forthcoming white paper on tackling online harms, now is the time to act.

What are the risks to children on social networks?

The ubiquity of social media carries multiple risks, from exposure to inappropriate and sexualised content, to the production and distribution of child abuse imagery, through to the growing scale of technology-facilitated grooming. Platforms provide new opportunities to initiate and facilitate abuse, and with so many children using social networks, gaming and messaging sites, children are increasingly exposed to the threat of abuse or exploitation from both adults and their peers.

Through the ease of access afforded by smartphones, groomers can target significant numbers of children, and escalate and maintain their abuse. They can readily move children into the shadows, shifting contact from well-known platforms to encrypted and hidden sites. Self-generated imagery is a considerable issue, accounting for around a third of recent images removed by the Internet Watch Foundation. Once a self-generated image has been taken, it opens the door to exploitation and blackmail.

Platforms have failed to build in adequate safeguarding protections, failed to take proactive steps against grooming, and failed to do enough to tackle child sexual abuse imagery at source. Successive governments have also repeatedly failed to intervene, placing disproportionate weight on the claims made by industry. As a result, for over a decade, social networks have repeatedly failed to protect their child users.

The extent of technology-facilitated abuse

There is much we still do not know about the scale and extent of online abuse. The indicators tracked in the How safe are our children? report are likely to significantly under-report the scale of technology-facilitated abuse. Nevertheless, these figures still paint a disturbing picture of online harms. As technology has provided new ways for offenders to commit abuse, the onus has been on social networks to do everything they can to make their channels safer. They have largely failed to do so, and we can now see the consequences of this decade of inaction.

  • An NSPCC FOI request found that there were over 3,000 police-recorded offences for sexual communication with a child in England and Wales in 2017/18 – 2,813 in England and 274 in Wales – and a further 82 in Northern Ireland. [1] In Scotland, there were 462 records of the equivalent offence of communicating indecently with a child in 2016/17, the most up-to-date figures available. In England and Wales, in more than half of cases where the data was recorded (53%), offences took place on Facebook and the apps it owns. [2]
  • In 2017, the Internet Watch Foundation identified 78,589 URLs containing child sexual abuse imagery, an increase of 37% from the previous year.
  • According to recent NSPCC research [3], more than one in seven children aged 11-18 (15%) have been asked to send sexual messages or images of themselves, and one in ten girls aged 13 or under has received such a request. Groomers are able to exploit the design of social networks, using friend and follower suggestions to infiltrate peer networks and establish contact with children that can escalate into requests for sexual messages. Seven per cent of 11-16-year-olds say they have shared a naked or semi-naked image of themselves.

Ending the decade of inaction

Since a voluntary Code of Practice was first proposed in the Byron Review 10 years ago, there have been 13 self-regulatory codes of practice, none of which have delivered any meaningful change for children.

The NSPCC supports the government’s commitment to introduce a white paper proposing legislation which tackles both legal and illegal online harms. Our Wild West Web campaign aims to ensure that the government introduces robust and effective regulation of social media platforms that will help to keep the next generation of children safe online.

That’s why the NSPCC is calling for the forthcoming legislation to:

  • Commit social media firms to follow a consistent set of minimum safeguarding standards. Legislation must require social networks to introduce dedicated child accounts, with default settings designed to protect children from online harms.
  • Make platforms report on how they keep children safe. Every firm should be legally required to produce an annual transparency report that sets out their complaint handling processes and outcomes.
  • Carry consequences for platforms that don’t follow safeguarding rules. We need a regulatory regime with robust investigatory and disclosure powers. The regulator must be able to issue financial sanctions where safeguarding measures aren’t followed.
  • Make platforms take proactive steps to prevent exposure to illegal content and behaviour. If firms can invest in algorithms to support their marketing, they can also develop them to proactively identify illegal behaviour on their sites, including grooming.

It cannot be right that there is the least regulation where children face the greatest risks. In the coming months, the government will decide whether its legislation will deliver meaningful, enforceable change, or whether it will continue to let platforms decide for themselves whether to protect child users. Now is the time to clean up the ‘Wild West’ of the Internet and finally hold tech firms accountable for the risks on their sites.

Sign the petition here.

Notes

[1] Police Service of Northern Ireland recorded crime statistics. (Data provided to NSPCC).

[2] NSPCC (2018) FOI request sent to all police forces in England and Wales. Police disclosed which methods were used in 2,028 instances.

[3] NSPCC (2018) Net Aware research, on file with the NSPCC.

This post gives the views of the authors and does not represent the position of the LSE Parenting for a Digital Future blog, nor of the London School of Economics and Political Science.
