
Pamela Ugwudike

November 1st, 2023

AI ethics and new digital cultures: the case of sharenting



Families sharent – they share sensitive and identifying information about their children online. Parents might share photos of their children’s school sports day revealing their location, discuss sensitive family issues online in divorce advice groups, or seek health information from online communities. Such practices are far from the exclusive preserve of celebrities or professional ‘influencers’ motivated by the prospect of lucrative promotional contracts. Anyone can be a sharenter, from relatives such as grandparents to schools. But such practices can be associated with online risks and can infringe children’s rights. For www.parenting.digital, Prof Pamela Ugwudike discusses her research and how best to develop AI-driven approaches to detecting and preventing such risks.

Sharenting increased during the COVID-19 pandemic as people resorted to digital communication technologies to maintain ties. A common practice, sharenting can block children’s opportunities to develop their own online identity, with implications for their freedom of expression and ability to shape their self-identity. And it can result in emotional distress among children.

Social risks also emerge from the data trail produced via sharenting, which can lead to bullying and contaminate the digital and online identities of affected children. Indeed, as societies around the world continue to move towards digitised modes of identification for access to key services, it has become vital to secure and maintain a clean digital identity. For example, passport, licence, and visa applications increasingly require verification via digital identity applications. Further, sharenting poses financial risks which emerge from the disclosure of sensitive and identifying information, exposing children to cybercrimes, from identity theft to identity fraud.

Yet new and emerging digital cultures, of which sharenting is one example, are not proscribed by law, nor are they being considered in the international race to develop ethical and legal frameworks as AI and cognate digital technologies continue to evolve and outpace regulatory and legislative efforts.

Whilst existing studies have highlighted the motives and risks of sharenting, we argue that attention should also be paid to new measures that can mitigate the ethical challenges of social media AI technologies. These include platform algorithms that spread sensitive child-centric content. The current self-regulation model, which ignores sharenting risks, also warrants attention. Indeed, studies exploring AI ethics and governance highlight the importance of regulating the emerging AI technologies that are inadvertently facilitating harmful practices whilst transforming key aspects of social life.

Remedial strategies

So, what is the solution? Ban sharenting and prosecute sharenters? Such an approach would overlook the tensions between parents’ self-interests and children’s rights, as well as the benefits of sharenting. Examples include the opportunity to forge social networks, connect with family and friends, access advice, and earn a living.

An alternative remedial strategy could involve raising sharenters’ awareness of associated risks. Several studies have explored this approach.

New AI-driven harm prevention measures could also be developed, and this much-ignored option is a key aim of our multidisciplinary study.

Using AI to counter AI challenges: NLP approaches informed by social science research

Funded by the UK Economic and Social Research Council (ESRC), ProTechThem: Building Awareness for Safer and Technology-savvy Sharing brings together researchers from the social and computer sciences in the UK and Italy to develop remedial measures, including AI approaches that mitigate the risks posed by social media AI technologies. We are designing Natural Language Processing (NLP) approaches that detect sensitive posts and flag them to moderators and other relevant users, to encourage cybersecurity behaviours and help prevent dissemination and harm.
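To give a sense of the general shape of such an approach (a simplified sketch, not the project’s actual models or data), a sensitive-post detector can be framed as a text classifier that scores each post and routes anything above a threshold to a moderator for review. All example posts, labels and the threshold below are illustrative placeholders.

```python
# Illustrative sketch of flagging potentially sensitive "sharenting" posts.
# The example posts, labels and the 0.7 threshold are hypothetical placeholders,
# not data or parameters from the ProTechThem project.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled training set: 1 = discloses sensitive child data, 0 = does not.
train_posts = [
    "My daughter was just diagnosed with asthma, she attends Hilltop Primary",
    "Here is our address so the playgroup mums can drop the presents off",
    "Our son's first sports day photos, he is in Year 2 at St Mary's",
    "Any tips for getting a toddler to eat vegetables?",
    "Looking for recommendations for a family-friendly cafe nearby",
    "What time does the online parenting webinar start tonight?",
]
train_labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: a deliberately simple baseline classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

def flag_post(text: str, threshold: float = 0.7) -> bool:
    """Return True if the post should be routed to a moderator for review."""
    prob_sensitive = model.predict_proba([text])[0][1]
    return prob_sensitive >= threshold

new_post = "Sharing my son's diagnosis letter, he goes to Oakwood Primary School"
if flag_post(new_post):
    print("Post flagged for moderator review")  # e.g. notify group admins
```

A real system would need far richer training data, careful evaluation and safeguards against false positives, but the detect-and-flag structure is the relevant point here.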

Social media AI technologies, such as the machine learning and cognate models that personalise content to enhance user engagement, can produce unintended ethical and other consequences. For example, though the claim is contested, they are said to polarise users, trapping them in exclusionary and insular networks where ideologies can become validated and entrenched.

Relevant here is the capacity of platforms’ recommender algorithms to channel child-centric information produced via sharenting towards potential perpetrators of cybercrimes and harms against children. This is likely to occur because the perpetrators’ browsing histories, for example, would denote an interest in such data.

But first, the project will explore risky sharenting practices and platform vulnerabilities, to inform the NLP design process. So far, it has examined whether adequate regulations are in place to protect children from sharenting risks.

Is current regulation sufficient?

Our recently published study of the regulatory models adopted by key social media platforms (Facebook, TikTok, Instagram, YouTube, Twitter) analysed their terms and conditions (Ts&Cs). It found that the current self-regulation model is limited in its ability to protect children affected by sharenting. In general, the Ts&Cs pay more attention to content moderation by online groups, shifting much of the responsibility for harm prevention to users. At the broader level of policy-making, the UK’s Online Safety Bill also ignores the risks and harms of sharenting.

Through a passive digital ethnography of open Facebook groups, we have also explored the activities of parents participating in online communities established to support parents, particularly mothers of newborn babies. Sharenting practices observed include disclosures of sensitive data about their children’s healthcare needs, including diagnostic information. Deeply personal and potentially identifying information about children was disclosed in an open group on a platform that is accessible to millions of strangers around the world. Worse still, parents do so unwittingly and generally view their actions as justifiable and socially acceptable.

Nevertheless, an accompanying risk is that the AI technologies that social media platforms deploy to personalise content will likely disseminate the data to those showing an interest in child-centric information. This raises the issue of AI ethics and governance as they relate to social media platforms.

The way forward for AI ethics and governance

As the ProTechThem project progresses, the computer scientists in the research team will use social science research methods and the insights generated from the various phases of the study to develop NLP approaches to identifying content bearing the signs of sharenting.

A key anticipated outcome is that social media group administrators will use the NLP approaches to review forum posts and inspect child-centric data that are in violation of group policy. The administrators can then trigger appropriate interventions such as activating the forum’s reporting features or sending educational information to the authors.
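Purely as an illustration of how that workflow might look in code (the project has not published an implementation, so the data fields, function names and escalation rule below are assumptions), a flagged post might first receive a lighter-touch educational message, with the forum’s reporting features reserved for repeat cases:

```python
# Hypothetical moderation workflow for posts flagged by an NLP detector.
# The data fields, function names and escalation rule are illustrative
# assumptions, not features of any specific platform or of the ProTechThem tools.
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    post_id: str
    author: str
    reason: str           # e.g. "discloses child health data"
    repeat_offence: bool  # has this author been flagged before?

def report_to_platform(post: FlaggedPost) -> None:
    print(f"Reporting post {post.post_id} via the forum's reporting features")

def send_educational_message(post: FlaggedPost) -> None:
    print(f"Sending sharenting-awareness guidance to {post.author}")

def handle_flagged_post(post: FlaggedPost) -> None:
    """Apply the lighter-touch intervention first; escalate repeat cases."""
    if post.repeat_offence:
        report_to_platform(post)
    else:
        send_educational_message(post)

handle_flagged_post(FlaggedPost("p123", "parent_42", "discloses child health data", False))
```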

We plan to release the NLP approaches and other digital awareness-raising tools on an open-source basis. We will share them with policy makers, social media companies, and child safety organisations for further dissemination. Ultimately, the NLP approaches will constitute a new harm prevention framework and will help protect both sharenters and children from online risks and harms.

First published at www.parenting.digital, this post represents the views of the authors and not the position of the Parenting for a Digital Future blog, nor of the London School of Economics and Political Science.

You are free to republish the text of this article under Creative Commons licence crediting www.parenting.digital and the author of the piece. Please note that images are not included in this blanket licence.

Featured image: Photo by cottonbro on Pexels

About the author

Pamela Ugwudike

Pamela Ugwudike is Professor of Criminology at the University of Southampton. Pamela’s research interests include studying interactions between AI technologies and criminal justice. She has led projects related to predictive AI and conduits of bias in policing and penal services.

Posted In: Research shows...