
Stacey Steinberg

April 28th, 2021

Ethical AI? Children’s Rights and Autonomy in Digital Spaces

Estimated reading time: 2 minutes

Childhood records, once kept safe in paper files and family memory books, now live in the digital cloud. This data has become increasingly connected over the past decade, giving rise to opportunities for public health agencies to collaborate, families to thrive, and artificial intelligence (AI) databases to grow. For www.parenting.digital, Stacey Steinberg discusses the dilemmas surrounding the ethical application of these databases, how they can be used to improve outcomes for children, and the potential dangers visible as we look out on the AI horizon.

Some non-profit organizations, like the 5Rights Foundation, have set out to tackle these issues, partnering with UNICEF to outline policy recommendations for companies and governments building data sets on children. Despite these great efforts, the technology industry is moving faster than our laws can keep up.

Artificial intelligence has the potential to help children thrive in some online spaces. While heavily debated, AI is being used to track down child predators, help eliminate bias in child welfare cases, and even predict which schoolchildren will need extra assistance in the classroom. But AI is a new and exponentially growing field, and if this technology is not used ethically and thoughtfully, it might hurt the next generation of children – a generation growing up shared in a way adults could never have dreamed of when they came of age, even just a decade ago.

During my early years as a parent raising children alongside social media, I was oblivious to the impact that my own online sharing – and online parenting – could have on my young children. Such online sharing often led to feelings of connectedness and even excitement. Despite the benefits I may have initially felt, the long-lasting implications of such public displays of information were absent from my mind.

Fifteen years have passed since I became a parent, and I am now beginning to watch my own children embark on creating their own digital identities. The disclosures I made during their early years have the potential to impact their ability to define themselves online on their own terms, something I have written about extensively. It has been a central theme in my research, and those realizations have led me to delete much of the information I shared during my children’s early years.

While I am responsible for the information I have shared online, there is also a significant amount of information others have shared online about my children. Most of this, thankfully, has been shared with both my permission and theirs, and all of it is positive in nature. In fact, my kids love when others share their accomplishments – like this week, when my oldest son’s school board shared a documentary he made about being the great-grandson of Holocaust survivors. But other bits of information have been collected from my children without my or their knowledge and stored in online databases. As I transition away from my work on sharenting, this other information, collected in bits and pieces, often by the most unlikely of sources, is becoming a primary source of my concern for our children’s privacy.

Using data to make predictions on childhood health outcomes, understanding where AI data is being collated from, and considering what sort of consent is necessary for companies and governments to collect data are key issues on future research agendas centred on children’s privacy and artificial intelligence.

For example, law enforcement agencies are increasingly turning to child welfare records and Adverse Childhood Experiences (ACEs) calculations to determine which children are most likely to get into trouble as young adults. This policy has come under heavy scrutiny due to the bias and harm that could result from such a system. Data theft, well known and commonly studied in the context of information collected from adults, raises unique concerns when the victims are children. Both of these examples came up in my home state of Florida but are not limited to any single corner of the globe.

Building algorithms on children based on limited sets of data seems to be the wave of the future, but it is not without risk. In the United Kingdom, children’s “red book” information is now being collated and made available online, in the expectation that this will help families thrive and make data easily accessible for all; yet it also raises many privacy concerns. Could this sort of initiative cause more harm than good?

The long-lasting impact of such a transition of information is not clear. As children’s toys become more digitized and connected to voice assistants like Alexa and Google Assistant, the depth of available information is astounding – especially considering the dearth of attention this growing field is getting outside a limited group of practitioners and scholars.

Policy solutions are needed now, and thankfully, documents like General Comment No. 25 on the United Nations Convention on the Rights of the Child are moving us in that direction. As the work put into this Comment shows, equally important to the answers are opportunities for an interdisciplinary group of scholars, practitioners, and children to come together to start asking the questions that will define this new frontier. How do children value online privacy? What do young people see as the greatest risks posed by technology? What do teens see as these evolving platforms’ greatest assets? But this is only a start – we need to spend more time issue-spotting and brainstorming the questions as we set out to find the answers.

The information artificial intelligence relies on to make assumptions about people, opportunities, and outcomes already exists and is in the hands of those who wish to help families thrive. But it is also in the hands of those who are willing to hurt families in the quest for power and profit. Future work at the intersection of AI and children’s privacy will certainly focus on how we can protect children, how we can empower young people to be part of the solutions, and how we can ensure that AI policy is made from a place of interdisciplinary research and thoughtful collaboration.

Future policy centred at the intersection of children and AI will likely focus on the ethics of data collection, storage, and usage. Children are not simply “little adults,” and they will need greater levels of protection to keep them safe as we transition towards systems that rely more on artificial intelligence than on human collaboration.

First published at www.parenting.digital, this post gives the views of the author and does not represent the position of the LSE Parenting for a Digital Future blog, nor of the London School of Economics and Political Science.

You are free to republish the text of this article under Creative Commons licence crediting www.parenting.digital and the author of the piece. Please note that images are not included in this blanket licence.

Featured image: photo by Jessica Lewis on Pexels

About the author

Stacey Steinberg

Professor Stacey Steinberg is the Director of the University of Florida Levin College of Law’s Center on Children and Families, where she also supervises the Gator TeamChild Juvenile Law Clinic. Her research explores the intersection of a parent’s right to share online and a child’s interest in privacy. Professor Steinberg is the author of Growing Up Shared: How Parents Can Share Smarter on Social Media and What You Can Do to Keep Your Family Safe in a No-Privacy World.

Posted In: Research shows...