Artificial intelligence (AI) is revolutionising our digital world, but in the wrong hands it can be a dangerous weapon used against children. Elena Martellozzo and Paula Bradbury of Middlesex University look at the UK’s Online Safety Act and how AI is being used to create hyper-realistic abusive images and synthetic media that exploit children.
The UK government is taking action against the rising threat from AI-generated child sexual abuse material (CSAM), with new legislation that will make the country the first to criminalise the possession, creation, and distribution of AI tools designed to generate CSAM. The legislation will also outlaw AI-generated paedophile manuals, with offenders facing up to three years in prison.
These measures signal a strong commitment to tackling AI-enabled abuse, but they also raise crucial questions: How will enforcement work in practice? Will police forces have the resources and technology needed to track down offenders using AI to produce CSAM? And what role should tech companies play in ensuring their platforms do not facilitate this growing threat?
While this legislation is a step in the right direction, it is clear that tackling AI-driven CSAM will require ongoing collaboration between lawmakers, law enforcement agencies, and the tech industry. The conversation is far from over—if anything, it has just begun.
The emerging threat of AI-generated CSAM
AI-generated CSAM presents a uniquely insidious challenge. Firstly, its proliferation emboldens and encourages offenders, creating an environment in which the exploitation of children is further enabled and legitimised. The increasing accessibility of this content exacerbates sexual violence against children, normalising abuse and placing children at even greater risk, both online and offline. Secondly, unlike traditional CSAM, which involves the direct abuse of a real child, AI-generated material can be created without physical contact, yet its impact is still devastating.
The risks posed by CSAM are severe, with long-lasting effects on victims and broader societal consequences. Offenders can manipulate images, “nudify” real photographs, and use deepfake technology to generate hyper-realistic abusive content. Some of these AI-generated images even incorporate the voices of real children, effectively re-victimising survivors. Furthermore, offenders leverage AI tools to blackmail children, coercing them into further abuse. AI also helps refine the production of disturbingly lifelike images, reinforcing offenders’ harmful desires.
The emergence of AI-driven or virtual CSAM has exacerbated this issue, as AI enables the rapid and increasingly profitable exploitation of children. This type of abuse is becoming alarmingly common. In October 2023, the Internet Watch Foundation (IWF) conducted a focused investigation into a dark web forum and uncovered a significant volume of AI-generated CSAM. Over a one-month period, analysts identified 20,254 AI-generated images posted on the platform. Of these, 11,108 were selected for detailed assessment, and 2,978 were confirmed as depicting AI-generated child sexual abuse.
Implications for law enforcement and safeguarding
From a law enforcement perspective, the new legislation places additional responsibilities on the police, Border Force, and online regulators. Border Force officers will now have the power to compel individuals suspected of CSAM offences to unlock their digital devices at UK entry points. This measure aims to prevent offenders from accessing illegal content stored on personal devices while travelling. However, the effectiveness of this approach will depend on adequate training, technological capabilities, and legal safeguards to ensure it is not misused. An additional complexity lies in the cross-border nature of online offending: as with other forms of CSAM not created using AI, jurisdictional boundaries create further challenges for law enforcement agencies, since legal definitions of what constitutes CSAM vary between countries, quite apart from the difficulty of identifying and locating offenders.
For children and safeguarding professionals, these developments highlight the growing need for proactive intervention. Children are increasingly targeted through AI-enhanced grooming tactics, including AI-generated profile images and chatbots, placing pressure on safeguarding measures to adapt and keep pace with evolving threats. A recent report by Thorn (a US-based non-profit that tackles CSAM) found that:
1 in 10 minors report that they know of cases where their friends and classmates have created synthetic non-consensual intimate images (or “deepfake nudes”) of other kids using generative AI tools.
The rise in children using AI tools to create, possess, and share indecent images of their peers is a major concern. This behaviour carries serious consequences, including the risk of criminalisation, as many may not fully understand that their actions constitute a crime.
The role of tech companies and regulation
One of the most pressing concerns is the responsibility of tech companies in preventing the misuse of AI tools. Platforms must implement stricter controls to detect and remove AI-generated CSAM while ensuring their services are not exploited by offenders. Recent data from the Internet Watch Foundation reveals a staggering 380% increase in reports of AI-generated CSAM, with 245 confirmed cases in 2024 compared to 51 in 2023. Given that each report can contain thousands of images, this surge highlights the immense scale of the challenge.
The Online Safety Act grants Ofcom the authority to regulate digital platforms and enforce compliance with safety measures. However, the real challenge lies in whether tech companies will actively invest in AI detection tools, content moderation, and robust reporting mechanisms. The question remains—will crackdowns on big tech be effective?
Emerging technologies and legal gaps
The criminalisation of AI-generated CSAM is a necessary step in closing legal loopholes that have allowed offenders to exploit evolving technology. However, questions remain about the practical impact of these laws. Critics argue that while the legislation is a positive development, it does not go far enough in addressing the root causes of online child abuse and exploitation.
For instance, Professor Clare McGlynn highlights that the legislation does not tackle the proliferation of so-called “nudify” apps or the widespread availability of sexual content featuring adult actors portraying children. These materials, while not illegal under current UK law, contribute to the normalisation of child exploitation. Without addressing the broader culture of online abuse and pornography, these new measures may only address part of the problem.
A starting point, not a solution
The rapid advancement of artificial intelligence has brought both opportunities and challenges in the realm of online safety. One of the most pressing concerns is the misuse of AI to generate child sexual abuse material (CSAM), a threat that demands urgent legal and technological responses. The AI-focused CSAM provisions in the Crime and Policing Bill mark an important step toward addressing this issue, aligning with the broader objectives of the Online Safety Act (OSA) to create a safer digital environment.
However, legislation alone is not enough. A comprehensive approach, including robust enforcement, collaboration with technology companies, investment in victim support services, and active engagement with children and young people, is necessary. Without a multifaceted strategy, enforcement gaps and evolving technological risks could leave children vulnerable. As AI technology evolves, so must our strategies to protect children, or they will be the ones who suffer most.
This post gives the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.
Featured image: Photo by Igor Omilaev on Unsplash