
Sonia Livingstone

January 15th, 2025

Child online safety – next steps for regulation, policy and practice


Following the Online Safety Act, Ofcom has published its codes of practice and guidance for tackling harms and risks online. But critics argue that these guidelines don’t go far enough. Sonia Livingstone analyses the concerns raised and argues for more, and smarter, regulation that keeps pace with how children actually use the internet. The UK Government must also be ready to push back against the likes of Musk and Zuckerberg, who are promoting further deregulation of their sector.




This is an eventful moment for children’s online safety – a good time to take stock.

The Online Safety Act (OSA) gained Royal Assent in October 2023, and in December 2024 Ofcom published its codes of practice and guidance on tackling illegal harms. These will most likely be approved by parliament next month and come into force on 17 March. This means the 100K+ sites and apps in scope must complete their risk assessments before that date, and from then on implement safety measures to mitigate 130 priority offences.

But Ian Russell, of the Molly Rose Foundation, has called Ofcom’s actions a “disaster” – dither, delay, and wholly insufficient. The Children’s Commissioner for England is also critical, saying that Ofcom’s code “protects corporations, not children.” And 5Rights Foundation has expressed “regret that the first draft of Ofcom’s proposals falls short of what is needed to deliver on the promises of the Act.”

Interestingly, the Secretary of State agrees, calling the OSA “very uneven” and “unsatisfactory”. This follows his November Statement of Strategic Priorities insisting on safety by design, platform transparency, agile regulation and inclusive digital literacy. While his concern is that the last government watered down the Act, the children’s charities’ concern is that Ofcom has diluted it even further.


The concerns with the current Ofcom proposals

So, what are these concerns about the Ofcom proposals following the Online Safety Act?

(1) That the legal objective of safety by design isn’t met, as Ofcom’s approach responds to harm but doesn’t prevent it – for example, when it comes to livestreaming, or the algorithmic amplification of harmful content tailored to young people’s vulnerabilities.

(2) Small companies are left relatively unburdened by the regulation, though as we know, children are often pioneers when it comes to the latest under-the-radar app for risk taking (now that, as a teen told me last week, Instagram is for grownups). Are we paying attention to Reddit, MeowChat, Omegle or MeetMe?

(3) Measures to prevent strangers from searching for or contacting children are required only on high-risk platforms – a limitation many parents will find deeply worrying. For my current research I just spent a day with Year 9 children and was regaled with stories of being added by randomers on Snapchat.

Too little, too late?

Many of the children’s charities have combined to say that the plans are too little, too late. To those of us who participated in the 2006 Home Office Task Force on Child Protection on the Internet and its Good Practice Guidance for social media services, chaired by Annie Mullins, or in Tanya Byron’s review in 2008 under Gordon Brown, and a host of other initiatives, the plans are very late indeed!


For around two decades, there’s been debate about social media responsibility, privacy and safety measures, age-appropriate provision and the moderation of age-inappropriate content, and the prevention of illegal content and contact. That society is still waiting for decisive action is astonishing. And policymakers call academics slow!

Zuckerberg and Musk are overtaking policy (again)

Mark Zuckerberg has announced the end of independent fact-checkers on Facebook and Instagram, which many believe will leave harmful content unchecked, even amplified. The future of content moderation – central to Ofcom’s codes, but already being cancelled by Meta – is uncertain, to say the least. His overt rationale is the defence of free speech – an argument that has recently undone the Kids Online Safety Act in the USA, the California Age-Appropriate Design Code Act, the revision of COPPA and more. Huge effort went into each of these initiatives to improve child online safety – they may not be perfect, but they were surely steps in the right direction.

We should expect to hear more about free speech from Elon Musk and the tech bros, especially when the new US President takes office. Ofcom has responded that social media platforms operating here must comply with the OSA. I suggest, however, that in the UK we’ll need a strategy to address the challenge posed by such uses, perhaps misuses, of the US First Amendment.


So, eventful times, and I haven’t even mentioned AI – though the next AI Action Summit is in a month (notably no longer called a safety summit), and perhaps there’ll be some attention to children – or not. The Internet Watch Foundation, though, has given us good evidence for why children should be prioritised as society seeks to tackle the role of technology in facilitating multiple forms of child sexual exploitation and abuse.

Findings on shifting online dangers for children

I’m always being asked about the latest research, and this too has a 20-year history. When I conducted the UK Children Go Online survey in 2003, all but 3 per cent of 9-19 year olds were already online, experiencing widespread exposure to pornography, unwanted sexual messages and other risks as well as, importantly, a growing array of digital opportunities. When I led the EU Kids Online survey in the UK and across the EU in 2010, updated in 2020, we found substantial and increasing exposure to online risk. These and a host of projects before and since have repeatedly shown the same results – the more children use the internet, the more digital skills and literacies they generally gain, the more online opportunities they enjoy and – the tricky part for policymakers – the more risks they encounter. In short, the more, the more. EU Kids Online recently shared all its evidence with the European Commission to guide the Digital Services Act.

The Evidence Group of the UK Council for Child Internet Safety was commissioned by the government to review the evidence in 2017, and again by the NSPCC to support the Online Safety Bill in November 2023. Led by Dr Jo Bryce, our report estimated in late 2023 that around one in 12 children has encountered OSA “primary priority” or “priority” content risks such as pornography, content that encourages self-harm or suicide, or bullying and hateful content. The evidence also showed that any harm resulting from such exposure falls very unequally: some children are, for a host of reasons, much more vulnerable than others.

That report also found that recent “increases in the popularity of livestreaming, ephemeral media, and messaging platforms, as well as the development of virtual reality technology, have changed the dynamics of online sexual interaction” – and therefore the associated risks. And that “This has been accompanied by developments in Artificial Intelligence (AI) that have provided additional means for producing synthetic CSAM (e.g., deepfakes), but also greater efficiency in moderating and detecting material that may be harmful to children.” Finally, we explored the “increasing use of algorithms by platforms to ‘feed’ content and recommend contacts to children… [with] a direct impact on children’s potential experience of harm.”


Will 2025 be a year of better online protection?

So, when Ofcom says that, in the coming year or so, it expects to see “Children better protected online, with services having to introduce robust age-checks to prevent children seeing content such as suicide, self-harm material and pornography, and to filter or downrank harmful material in recommended content” – you know that expectations are high. And there is an evidence baseline against which to measure real improvement. Note, further, that relying on robust age-checks to keep children away from harm, rather than ensuring safety by design to reduce harm in the first place, is likely to be one of this year’s big controversies: are adults ready to undergo routine and robust age-checks, and which organisations can be trusted not to misuse our privacy in the process?

By the end of 2025, Ofcom says, “We expect the transparency duties will prove among the most effective in improving users’ safety by shining a light on services’ actions – and in particular by providing hard evidence, for the first time, about the effectiveness of their safety measures.” We might ask, what “hard evidence” is expected? And do we really expect the platforms to reveal it, especially if it shows the ineffectiveness of their safety measures? The researchers amongst us are watching closely. Second, since the same platforms are already required to report to the European Commission under the transparency requirements of the Digital Services Act, can we already form a view on whether transparency duties are the game changer hoped for?


Is regulation or education the answer?

Many policymakers around the world are watching the outcomes of the Online Safety Act. Some suggest that we’re placing too much reliance on regulation – that the task is hopeless, and our best bet is to educate children. But that’s not so straightforward either. Tanya Byron helped raise awareness of the risks among children, parents and educators. But things have moved on. Now that children have content “fed” to them by personalised algorithms closely tailored to their preferences, and also to their vulnerabilities, to sustain their undivided and unending attention, the task of media literacy must be complemented by effective regulation.

In other words, if we don’t effectively regulate the tech companies, awareness-raising becomes counterproductive – it burdens vulnerable individuals with the responsibility for coping with the opaque and often exploitative actions of the world’s most powerful companies. Increasingly, when I interview children, they are cautious rather than ambitious for their digital lives, and hypervigilant about what can go wrong, feeling alone and unsupported as they try to understand the inexplicable.

At the Digital Futures for Children centre at LSE, run jointly with 5Rights, our research led by Steve Wood shows that regulation is working – companies are taking action, and the benefits are real. But they are simply not taking enough action, and it’s for us to figure out what’s not being done, especially if transparency reporting does not live up to hopes. And to insist on better.


This article is based on Sonia Livingstone’s keynote speech at the Westminster eForum policy conference on January 14, 2025.

All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science.

Image credit: New Africa on Shutterstock




About the author


Sonia Livingstone

Sonia Livingstone OBE is a professor in the Department of Media and Communications at LSE. Much of her current work focuses on Children’s Rights in the Digital Age. She currently directs the Digital Futures Commission (with the 5Rights Foundation) and the Global Kids Online project (with UNICEF).
