
Gazal Shekhawat

Sonia Livingstone

November 1st, 2023

AI and children’s rights: a guide to the transnational guidance


As the UK prepares for its much-trumpeted AI Safety Summit on 1 and 2 November, LSE’s Gazal Shekhawat and Sonia Livingstone call attention to the interests of children in the global AI arms race.

Strikingly, the programme of the UK’s AI Safety Summit, which starts today, includes little mention of children beyond a rhetorical nod to them as “the future.” But to realise children’s safety – and all our futures – it will be crucial to understand how the AI sector and regulatory policies impact all aspects of the lives of the world’s children and young people. Here we reflect on guidance from key transnational bodies.

Five key sources of AI guidance

  1. UNICEF’s ‘Policy guidance on AI for children’ (2021, Version 2.0) draws on stakeholder consultations around the globe, an overview of national AI strategies (2020), and a research summary with case studies of risks and opportunities. It recommends principles of inclusion, fairness, privacy and explainability, together with raising awareness among governments about children’s rights, and awareness of AI among children and carers. In the UK, The Alan Turing Institute has piloted this guidance with the Scottish AI Alliance and Children’s Parliament to showcase how children can contribute effectively to AI policy discussions. As the global agency for protecting children’s rights, UNICEF offers guidance that is multifaceted and carries weight across continents; its wide adoption is the next step to look forward to.
  2. The European Commission’s Joint Research Centre’s guidance, ‘Artificial Intelligence and the Rights of the Child’ (2022), highlights pressing requirements for AI, including ‘AI minimisation, valuable purpose and sustainability,’ explainability, transparency, non-discrimination, and the integration of children’s safety into AI technologies. It shows how these can illuminate specific cases concerning recommendation systems, conversational agents and robotic systems. Although produced within the EC, the guidance is notably global in scope.
  3. The UN Human Rights Council’s Special Rapporteur on the right to privacy’s report on artificial intelligence and privacy, and children’s privacy (2021) responds to the ways that the expansive use of AI may infringe children’s privacy, calling for respect for established UN conventions and policy frameworks, including the UN Convention on the Rights of the Child. Its recommendations shed new light on how legal and rights-based frameworks interact with AI.
  4. The World Economic Forum’s Artificial Intelligence for Children Toolkit (2022) advocates its ‘FIRST’ principles – Fair, Inclusive, Responsible, Safe, Transparent – and calls for an AI labelling system, akin to barcodes or QR codes on products and services, advising on possible harms, the data collected and the appropriate age of users. This guidance is especially relevant for the private sector and can support businesses and C-suite executives in navigating product development for children.
  5. The IEEE Standards Association’s Age-Appropriate Digital Services Framework (2021) builds on 5Rights principles to set a standard for age-appropriate digital products and services, including AI. It provides an ‘impact rating system’ with open-access criteria to support product designers, government and educational providers, complemented by the 4i’s (‘Intentions, Inputs, Instructions and Impact’) of 5Rights’ Guidance for Policymakers on AI.

What does good look like?

It will be apparent that the guidance focuses heavily on processes of AI deployment, to avoid or mitigate risks of harm, leaving largely unstated the question of desirable outcomes for society, including for children. Beyond the – very important – governance of potentially problematic processes, for which the above five resources offer valuable and highly actionable guidance grounded in evidence, stakeholder views and children’s voices, we must also ask: what does good look like for AI and children’s rights? This involves considering questions such as:

  • Can we really document the benefits of AI for good, for instance in relation to health diagnoses, personalised education, the future of work or support for children’s development and agency?
  • Can we find solutions to counter the potential harms to individual children, children collectively or society more generally – consider disinformation, privacy violations, or harms to protected communities? And are these best addressed by regulating AI?

Reviewing these key frameworks reveals a substantial emerging research agenda. As UNICEF, the Council of Europe and others note, we need more research to understand how AI models, especially chatbots and smart toys, may affect children’s cognitive functioning. Further, studies of how AI affects different age groups can support more precise policies. Indeed, AI’s impact on children in the UK will hardly be uniform, since it is shaped by their communities and life contexts. We also know that algorithmic bias affects minoritized children disproportionately: the Children’s Commissioner’s report finds that black children are six times as likely as the national population to be searched by the police. Given recent investigations into the use of AI in automated decision-making by public sector bodies, including the lack of transparency about AI use by the Met Police, we remain in difficult waters; elite conversations in high places may be insufficient to navigate them.

All guidance – whether concerning children or “the public” – would benefit from further clarity about how it was formulated, signposting which recommendations emerge from children, experts, researchers or businesses, so that readers can assess how stakeholders’ needs have been considered and balanced. AI provision and policy will likely have global implications, making it crucial to include perspectives from the Global South. While there is international representation from, notably, the African Union (AU) and the Association of South East Asian Nations (ASEAN), it is also important to hear the voices of Global South children and civil society.

Coming challenges

Yes, more research is needed. AI is hardly new, but since generative AI burst onto the scene, policymakers and researchers have been scrambling to keep up. In the world of children’s rights, dramatic technological innovations tend to stimulate doom-laden and panicky calls to protect children at all costs, in advance of the evidence. Of course, child protection is vital and urgent, but so too are children’s other rights – to privacy and data protection, to learn with and about AI, to creative expression in a digital world, and more.

It’s hard to determine children’s best interests in relation to AI when the harms are only now being documented and the benefits are mainly evidenced through sporadic case studies of uncertain rigour or scope of application. It’s also problematic how often discussions about data protection (or exploitation) are separated from discussions about AI outcomes: surely a holistic judgment considers how and where the data comes from, in conjunction with who the AI benefits? Even working out what we don’t know and need to find out is a task yet to be undertaken. In all these tasks, the voices of children themselves must be heard.

Also challenging is the question – too rarely addressed – of how far guidance should be specifically tailored to children’s rights (rather than human rights, which are of course also vital). Children have specific needs and vulnerabilities, and they are a sizeable segment of the population. Yet their personal data is harvested without their consent, or that of their parents, and used to drive educational, health and other AI services without consulting them or providing child-friendly explanations appropriate to their “AI literacy.” For example, the OECD and many others around the world, including the White House in guidance just announced, have produced valuable sources of guidance on ethics and AI, but these often say little or nothing about children.

At LSE, with the 5Rights Foundation, we are launching a new research centre, Digital Futures for Children, building on our recent Digital Futures Commission. Its remit is exactly this: to ensure an evidence base for policy and advocacy relating to the digital environment, guided by the UN Committee on the Rights of the Child’s General comment No. 25, and in collaboration with children and, we hope, readers of this blog and colleagues around the world. Watch this space!

This post represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Tim Gouw on Unsplash

About the authors

Gazal Shekhawat

Gazal Shekhawat is a PhD researcher at LSE, where she studies how media and communications shape the everyday lives of women in India’s Hindi heartland. Capturing their experiences at different stages of life and vocation, she seeks to understand how women navigate the push and shove of culture in the age of social media platforms. Before starting her PhD, she was a Research Fellow at Microsoft Research Lab, India where she worked on civic participation and misinformation on social media platforms.

Sonia Livingstone

Sonia Livingstone OBE is Professor of Social Psychology in the Department of Media and Communications at LSE. Taking a comparative, critical and contextual approach, her research examines how the changing conditions of mediation are reshaping everyday practices and possibilities for action. She has published twenty books on media audiences, media literacy and media regulation, with a particular focus on the opportunities and risks of digital media use in the everyday lives of children and young people.

Posted In: Artificial Intelligence | Children and the Media
