
Policy Planner

March 11th, 2015

No need to panic: Let’s talk about coercion, consent, and collaboration for a safer Internet for kids



LSE Media Policy Project researcher Jessica Mason discusses recent research on “youth-produced sexual content online” from the Internet Watch Foundation (IWF) and Microsoft.

Yesterday the IWF released the results of a new qualitative study in which they proactively searched, over a three-month period, for sexually explicit video and image content depicting young people. They looked exclusively for content that appeared to have been produced without an adult present, indicating it may have been self-produced. They identified young people as those who appeared to be under the age of 20[i], estimating age using techniques taught by law enforcement. Of all the content collected, 82.5% featured young people between the ages of 16 and 20. The remaining 17.5% depicted young people under the age of 15; this included 667 images or videos.[ii] In both age categories the vast majority of content featured girls, and webcams were the most commonly used recording tool, in contrast to past research indicating that webcams are used less frequently than other technologies.

The assessed content was distributed on 230 different sites, but the IWF identified 17 specific sites where the content was generated. They have decided not to name the sites, but are working with them to find solutions.

Using correct, deliberate, and meaningful language

The IWF presented these findings and asked experts to respond at a private event called “Youth selfies: The real picture,” a title which is highly problematic and inappropriate given that this research encompasses a range of different issues that require distinct approaches from policymakers, parents, and teachers. So-called “sexting” among 17-year-olds in a relationship (and the nonconsensual and illegal distribution of those private images) requires a drastically different conversation from the one needed to adequately address a ten-year-old who is coerced into revealing naked genitalia on a webcam.

The IWF study focused most of its analysis on the under-15 age group. Especially for younger children, labeling some of this content “selfies,” “youth-produced,” or “self-generated” is a mistake. Those working in child safety are deliberate about using terms like child sexual abuse images (as opposed to pornography) to draw clear distinctions between serious criminal acts and legal content depicting consenting adults. A similarly careful labeling scheme is needed for content generated by youths. For all ages, we need to talk about this content and its distribution in terms of whether it was consensual or coercive/abusive, and we need to start educating young people about the differences at earlier ages.

In its press release, report, and at the event, the IWF was clear that while they used the terms “youth-produced” or “self-generated” in their findings, they were not commenting on whether or not there was a coercive element to the production of the explicit content evaluated. The report did not seek to evaluate whether coercion was present; it only identified content for inclusion where no adult was visible in the video or images.

Some of the videos evaluated by the IWF indicated clear distress and likely blackmail of the child involved (see case study of Girl C). To attribute any consent or even agency to children and young people who may have been pressured or blackmailed for sexually explicit images of themselves is victim-blaming. Though an adult was not seen in the identified content, having someone threatening or forceful who is virtually present in your private home can feel very real, as research on cyberbullying indicates.

The irresponsible press response

Unfortunately, the media has taken a sensationalist, victim-blaming angle with headlines like “Girls as young as seven posing in underwear in web photos and videos which end up in hands of sex offenders”. This ignores the person on the other side of the webcam soliciting these images from persons who are considered legally unable to give consent. And, of course, these media outlets fail to mention that such incidents are incredibly rare overall.

For those who were victims of these crimes as children, such headlines imply that they are at fault, rather than focusing attention on the perpetrators who groom, pressure, and blackmail to obtain these images and then illegally view, replicate, and distribute them for consumption.

Just as we should not indicate that young children are “agents” in this, we need to consider the same issues of consent and wrongdoing in prevention messages for the older subset of young people who may consensually share an image or video within a relationship of trust. Slogans like “think before you post” ignore those who are truly at fault: those in relationships who choose to violate another person’s trust, and those who use coercion and pressure to obtain this content.

It is evident that there is a need for deeper conversations about consent with young people, parents, teachers, and the media. It should be evident to everyone — even sensationalist media outlets — that a seven-year-old cannot give informed or enthusiastic consent. It is also unlikely that a seven-year-old would spontaneously pose for images online without having been targeted and having other risk factors present in their home life.

Moving forward: more research and the challenges abuse images pose to social scientists

The IWF report correctly concludes that more research is needed, including an annual study of this type to track changes over time.[iii] The foundation also suggests more research using image and video hashing to map the duplication and distribution of this illegal content to determine possible strategies for intervention and prevention.

For social scientists, researching child abuse images is fraught with challenges. Criminals are unlikely to be forthcoming about their activities, victims are often ashamed and coerced into silence, and the products of these crimes are illegal to view, even for researchers. Some have still produced research using image blocking software and only evaluating the text accompanying abuse images.

This means that entities like the IWF are often the sole bodies capable of producing some of this research, and social scientists are not able to replicate their methodologies. One of the basic tenets of the scientific method is replication: another scientist should be able to repeat a methodology to test the findings.

Globally, organisations like the IWF cannot be fully transparent in their methodology. For example, they may not want to give away specifics on what physical features they use to identify age because it may make it easier for criminals to make a victim’s age more ambiguous in content to prevent its removal. At the same time, many child safety agencies around the world depend on government and private funding. They must continue to demonstrate their value, but should they be solely responsible for producing the research that justifies their existence?

The IWF has an excellent record. As one IWF representative told attendees at yesterday’s event, illegal content in the UK remains up for an average of two hours before the IWF blocks it. For other agencies around the globe that average is usually a day or two. But despite the necessity of the IWF, we should make sure that its research, especially that which is likely to receive wide and sensationalist press coverage, reflects the highest standards of social science methods. One way to do this without compromising some of the necessary secrecy around methodology would be to vet and include more academics and child safety experts at every stage of the research: design, data collection, analysis, and reporting.

Contrary to the headlines, this study does not indicate that this is a time to panic about safety. It does, however, challenge some previous findings and suggest a need for more research, outside expertise, and accountability. It is also important to remember — as Andy Phippen eloquently stated on one of the panels — that if this is happening to even one child it is enough to know that we need to stop it.

 

[i] Though the IWF, in accordance with the Sexual Offences Act 2003, typically defines a child as anyone under the age of 18 years, they expanded the upper age limit for this study hoping it would mitigate many of the challenges they face in identifying the age of a person, especially after puberty. In many cases where the IWF is unable to verify age as under 18, they are unable to remove content.

[ii] This does not indicate that 667 children appeared in explicit content. One child may have appeared in more than one piece of content.

[iii] This would require the IWF to use consistent definitions and methodology. The IWF conducted a similar study in 2012, but that study differed significantly in its scope and definitions, making it difficult to directly compare the two sets of results.

This article gives the views of the authors and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.
