
Sonia Livingstone

June 27th, 2019

Revenge pornography and online hate content: the evidence underpinning calls for regulating online harms in the UK


The consultation on the Online Harms White Paper, published jointly by the UK government’s Department for Digital, Culture, Media and Sport (DCMS) and the Home Office, closes on July 1. It calls for a new system of regulation for tech companies with the goal of preventing online harm. Julia Davidson and Sonia Livingstone were part of a group commissioned by DCMS to look at the evidence around online harms experienced by adults, and here they explain some of the findings of their review.

The UK Government’s Online Harms White Paper includes a much-discussed Table (p.31) on the online harms in scope of the proposed regulation. This distinguishes “Harms with a clear definition” from “Harms with a less clear definition” and “Underage exposure to legal content.” In proposing a new regulator, the White Paper explains that:

“The regulator will take a risk-based approach, prioritising action to tackle activity or content where there is the greatest evidence or threat of harm, or where children or other vulnerable users are at risk” (p.9)

It notes further that:

“There is growing evidence of the scale of harmful content and activity that people experience online” (p.12)

Given this situation, the White Paper proposes that a new regulatory framework be set up, under which industry should have a statutory duty of care and the new regulator would have powers of enforcement (possibly including fines and senior management liability). It calls for a culture of transparency, and for industry to demonstrate the steps taken to combat serious offending. None of this is uncontroversial, and discussion (including on this blog) has been lively.

So what is the nature of that “growing evidence”? And how strong is it? Our report for DCMS – “Adult Online Hate, Harassment and Abuse: A rapid evidence assessment” – has finally been published. This complements our earlier report for DCMS on “Children’s online activities, risks and safety: A literature review by the UKCCIS Evidence Group.” Both reports stem from the work of the Evidence Group of the UK Council for Internet Safety.

We note first that, compared to the volume of research on children, the UK research evidence base on harms to adults is patchy and incomplete. That said, our report critically evaluates the available evidence on online harassment, revenge pornography and image-based abuse, and online hate – all dubbed “Harms with a clear definition” – as well as cyberbullying and trolling. Based on a rapid evidence assessment of recent literature from the UK, EU and the US (with some international studies), a key finding was that these harms are far less clear in the research literature, in terms of both definition and measurement. Nonetheless, the findings do point to a societal problem in need of urgent attention. In this blog post, we focus on the findings for revenge pornography and online hate content.

Revenge pornography

Revenge pornography is a subset of image-based abuse, encompassing both the non-consensual sharing and the non-consensual creation of sexual images, for motives ranging from sexual gratification to harassment, control and extortion. Section 33 of the Criminal Justice and Courts Act 2015 creates an offence of disclosing private sexual photographs or films without the consent of an individual who appears in them, and with intent to cause that individual distress. The legislation has been criticised as inadequate, particularly in respect of anonymity for victims, and a Government review is about to get underway. Few studies have examined the prevalence of revenge pornography, and prevalence is hard to quantify given variations in methodology and definition.

Key findings include:

  • Prevalence estimates range from 1.1% to 23% of the population (noting that the latter figure comes from a self-selected sample answering a survey on revenge pornography)
  • Research has predominantly focused on victim accounts
  • Victims are typically female (60% to 95% of victims across studies)

Most importantly, there is wide agreement in the literature that the harm caused by revenge pornography can be devastating. Victims may:

  • Suffer numerous psychological and emotional harms
  • Be subject to online and offline harassment, stalking, and assault
  • Suffer from mental health problems such as anxiety, panic attacks, PTSD, depression and substance abuse

Indeed, there are numerous documented cases of victims of revenge pornography committing suicide or suffering damage to their employment, careers and professional reputations. Victims may also suffer more intangible harms, such as the violation of personal and bodily integrity, the infringement of dignity and privacy, and the inhibition of sexual autonomy and expression.

Online hate content

Usually classified as hate speech, online hate content can take the form of words, pictures, images, videos, games, symbols and songs. Research indicates that race or ethnicity is the protected characteristic that provokes the most online adult hate, followed by sexual orientation, religion, disability, and transgender status.

Most of the research on the experiences of victims of online hate focuses on those targeted because of their race or religion. Anti-Semitic online hate is prevalent and centres on themes of perceived Jewish influence and power, conspiracy theories of world domination, and Holocaust denial or trivialisation. However, the volume of Islamophobic online hate has risen sharply, often spiking after ‘trigger’ incidents such as terrorist attacks.

Online hate towards migrants, refugees and asylum seekers is increasingly being explored in research. Police statistics, which indicate that only around 2% of online crimes have a hate element, clearly underestimate the extent of online hate. For example, 4,123,705 tweets that could be considered anti-Islamic were sent across the world between 18 March and 30 June 2016.

There is only a small body of research exploring the impact of online hate crimes. The research that does exist demonstrates that:

  • Online hate crimes can have emotional, psychological, mental health, behavioural, and economic/financial effects
  • Psychological effects of online hate may include feelings of shock, fear, anger, paranoia, distress, low self-esteem, frustration, fright, sadness and loneliness
  • Hate crime may cause or worsen mental health problems, such as anxiety, depression, self-harm and suicide
  • Behavioural effects of online hate may include victims not leaving the house, or only doing so when accompanied, and changing the way they look
  • Victims of online hate may avoid using the internet, leading to feelings of isolation and disconnection, limiting their freedom of expression
  • There is some research on online disablist and LGBT hate crime leading to real-world hate crimes, and some evidence linking online victimisation with offline victimisation.

It is clear that experiencing some form of hate and/or harassment is a normal aspect of online life for many adults. For some, the experience of online hate and sexual abuse is particularly pervasive and damaging, sometimes having a lasting impact on their health, wellbeing, employment and lifestyle. UK legislation is piecemeal, and online aspects are often bolted onto existing legislation, particularly in the area of harassment, leading to difficulties in prosecuting cases. Victims believe that industry could do much more to prevent harms and respond to complaints, and state that they are often not taken seriously by the police; indeed, most would not bother to report online harms at all.

For more findings, see our full report. We hope it provides last-minute food for thought for those who have not yet submitted their White Paper consultation response. After all, whatever the difficulties of regulating online harms, we can surely all agree on the importance of evidence-based policy.

This article represents the views of the authors, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science. 

About the author

Sonia Livingstone

Sonia Livingstone OBE is Professor of Social Psychology in the Department of Media and Communications at LSE. Taking a comparative, critical and contextual approach, her research examines how the changing conditions of mediation are reshaping everyday practices and possibilities for action. She has published twenty books on media audiences, media literacy and media regulation, with a particular focus on the opportunities and risks of digital media use in the everyday lives of children and young people. Her most recent book is The class: living and learning in the digital age (2016, with Julian Sefton-Green). Sonia has advised the UK government, European Commission, European Parliament, Council of Europe and other national and international organisations on children’s rights, risks and safety in the digital age. She was awarded the title of Officer of the Order of the British Empire (OBE) in 2014 ‘for services to children and child internet safety.’ Sonia Livingstone is a fellow of the Academy of Social Sciences, the British Psychological Society, the Royal Society for the Arts and fellow and past President of the International Communication Association (ICA). She has been visiting professor at the Universities of Bergen, Copenhagen, Harvard, Illinois, Milan, Oslo, Paris II, Pennsylvania, and Stockholm, and is on the editorial board of several leading journals. She is on the Executive Board of the UK Council for Child Internet Safety, is a member of the Internet Watch Foundation’s Ethics Committee, is an Expert Advisor to the Council of Europe, and was recently Special Advisor to the House of Lords’ Select Committee on Communications, among other roles. Sonia has received many awards and honours, including honorary doctorates from the University of Montreal, Université Panthéon Assas, the Erasmus University of Rotterdam, the University of the Basque Country, and the University of Copenhagen. She is currently leading the project Global Kids Online (with UNICEF Office of Research-Innocenti and EU Kids Online), researching children’s understanding of digital privacy (funded by the Information Commissioner’s Office) and writing a book with Alicia Blum-Ross called ‘Parenting for a Digital Future’ (Oxford University Press), among other research, impact and writing projects. Sonia is chairing LSE’s Truth, Trust and Technology Commission in 2017-2018, and participates in the European Commission-funded research networks, DigiLitEY and MakEY. She runs a blog called www.parenting.digital and contributes to the LSE’s Media Policy Project blog. Follow her on Twitter @Livingstone_S

