Sonia Livingstone

July 7th, 2023

How can we make the internet safe for children in practice?

Estimated reading time: 6 minutes

Does regulating the internet to protect children mean checking the age of all users? Sonia Livingstone discusses the importance of adopting an approach that centres child rights and highlights the political and policy challenge of recognising who is a child online when seeking to protect children.

One in three children uses the internet, and one in three internet users is a child. Yet tech companies claim to be unable to determine who is a child online. This is a practical but also a political challenge – does society want to task companies with age-assessing all their users? Or is that too great a risk to the privacy and freedom of expression of both adults and children?

Why children need protection online

Evidence of problems abounds, with a plethora of studies making media headlines and being cited in government inquiries and third-sector advocacy. While some sources are contested and their implications too often overstated, the case for a robust, evidence-based approach to protecting children online is broadly accepted – hence the slew of new legislation and policy being introduced in the UK, the US and internationally.

The EU Kids Online research network conducted a pan-European survey on children’s online access, skills, opportunities, risks and safety mediation. It found, for example, that only a quarter of 9-16-year-olds always feel safe online, and 10 per cent never feel safe. Depending on the country, up to half of all children said something had upset them in the past year, double the figure from the previous survey. What upsets them? There is evidence that children encounter all four Cs of online risk: content, contact, conduct and contract. The most common risk is hate, and the biggest increase is in exposure to self-harm content. Crucially, too, the risks of online engagement fall unequally: the European ySKILLS project is finding that more vulnerable adolescents – including those who are discriminated against or in poorer health – are more at risk of encountering harm online.

A child-rights-based approach

An authoritative statement from the UN Committee on the Rights of the Child is its General Comment 25, which sets out how the UN Convention on the Rights of the Child should be implemented in relation to the digital environment. A child-rights framework requires a holistic approach that balances rights to protection, provision and participation, and centres children’s age and maturity (their “evolving capacity”) and best interests (a complex judgement that puts children’s rights ahead of profit, and that requires consulting children and making decisions transparent).

While General Comment 25 is addressed to states, the tech sector also has clear responsibilities for child protection online. So, alongside new legislation, “by design” approaches, including safety by design, are increasingly demanded of businesses whose digital products and services impact on children in one way or another. And there is plenty they could do – for example, EU Kids Online research shows that children don’t trust the platforms or can’t figure out how to get help: after a negative experience, only 14 per cent changed their privacy settings, and only 12 per cent reported the problem online. Meanwhile, fewer than a third of parents use parental controls – because they don’t know how the controls work or whether they are effective, and because they fear adverse effects on children’s privacy, autonomy and online opportunities.


But society cannot aim to protect children at the cost of restricting their civil rights and freedoms, nor through policies that, however inadvertently, incentivise businesses to age-gate children out of beneficial digital services. Getting the policy framework right means adopting a child-rights approach, as the UK and Europe have long committed to do but have yet to enact fully. While we wait, children – once the intrepid explorers of the digital age – are becoming over-cautious, worrying about the risks and, evidence shows, missing out on many online opportunities as a result.


Identifying who is a child

But, if businesses don’t know which users are children, how can they be tasked with age-appropriate provision and protection? In the European Commission-funded euCONSENT project, my role was to explore the child-rights implications of age assurance and age verification. The Information Commissioner’s Office is actively exploring these issues in the UK.

One option is that we expect tech companies to design their services as widely accessible, child-friendly, broadly civil spaces – like a public park – with exceptions that require users to prove they are an adult (as when buying alcohol in a shop). Another option is that we expect tech companies to treat all users in age-appropriate ways (i.e., find out everyone’s age and provide personalised services accordingly – though there is much to debate about what age-appropriate means and how to implement it, given that children vary hugely not only by age but according to many other factors). While there are challenges with both approaches, policy innovation is vital if we are to move beyond the status quo of treating children online as if they were adults.

To realise children’s rights in a digital age, should society require companies to redesign their service as needed, or to redesign the process of accessing their service? In both the UK’s Online Safety Bill and Europe’s Digital Services Act, success will depend on the effective and accountable conduct of risk assessments.


In the euCONSENT project, we argued that a child-rights approach to age assurance must protect not only children’s right to be protected from digital content and services that could harm them, but also their rights to privacy and freedom of expression (including to explore their identity or seek confidential help without parental consent), to prompt and effective child-friendly remedy, and to non-discrimination. This means they must be able to access digital services alongside everyone else even if they lack government ID, live in alternative care or have a disability, and whatever the colour of their face. At present, many systems of age assurance do not respect the full range of children’s rights.

Let’s be practical

Notwithstanding the important ongoing arguments about the potential costs to privacy, expression and inclusion associated with age assurance technologies to date, in practice users’ ages are already being assessed. Google says it has age-estimated all users signed in to its services, based on a host of data collected over time, including what their friends look like, the sites they visit, and everything else it knows about them. But Google’s strategy here is not very transparent. Meanwhile, Instagram is one of a growing number of platforms adopting age estimation technology for all its users, as is Roblox.

In the Digital Futures Commission, with the 5Rights Foundation, we have proposed a model of Child Rights by Design. It provides a toolkit for designers and developers of digital products and was co-developed with them – and with children. It draws on the UNCRC and General Comment 25. It centres on 11 principles, of which age-appropriate service is one, privacy another and safety, of course, a third. The other eight are equally important for a holistic approach: equity and diversity; best interests; consultation with children; business responsibility; child participation; wellbeing; fullest development; and agency in a commercial world.

If big tech embedded Child Rights by Design, the task of policymakers, educators and parents would be greatly eased.


This post is based on a speech delivered by the author as part of the European Commission’s stakeholder event on the Digital Services Act and the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) public hearing on the online safety of minors.

All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science.

Image credit: Shutterstock and Michael Jeffery via Unsplash


About the author

Sonia Livingstone

Sonia Livingstone OBE is a professor in the Department of Media and Communications at LSE. Much of her current work focuses on Children’s Rights in the Digital Age. She currently directs the Digital Futures Commission (with the 5Rights Foundation) and the Global Kids Online project (with UNICEF).

Posted In: LSE Comment | Media and Communications | Society and Culture