
Ranjana Das

October 11th, 2023

Algorithms in the public domain: parents’ fears and expectations about invisible and super-visible children


Algorithms and data-driven technologies are increasingly used in the public domain, despite unevenness in public trust, as many know from the UK’s A-levels algorithm fiasco. Research on the consequences of automated decision-making calls for a people-centred approach, critically querying datasets and data traces. Indeed, families and households are increasingly rendered as data. For www.parenting.digital, Prof Ranjana Das discusses her research on how parents feel about algorithms.

Based on interviews with 30 parents in England, I draw out their worries that some children might be rendered invisible, and others hyper-visible, by data-driven decisions, for instance in the public sector. Parenting is a future-oriented project, and my research on parents speaking about algorithms shows that parents’ ‘future talk’ about algorithms goes beyond initial discourses of inevitability to articulate clear expectations of institutions: to do more, and to do better, now.

Seen and unseen children

Parents worried about their children being seen by algorithmic systems in the public domain. Some worried that their children might be seen too soon, for too long, by the wrong people. Others worried that their children would remain unseen, invisible and misunderstood.

Nandini is the mother of a 7-year-old and a 5-year-old in the West of England. Both her children have special educational needs. Her son is at ease with his schoolwork, but Nandini fears that her daughter, who struggles with middle ear deafness, has strengths that go unrecognised by automated grading or testing:

“I think her personality and character is worth more than she is. So I feel that maybe she won’t get as much as a look in and they won’t get to know her as a whole person… for people who are neurodiverse… like me and maybe my daughter… It won’t work in our favour…” (Nandini)

Rijula, an Indian mum raising a 3-year-old and a 5-month-old in the West of England, spoke of her worries about algorithmic bias, racism and data-driven discrimination as a mother of colour. Incredulous, yet apparently hopeful, Rijula asked doubtful questions of “they/them” – the elusive individuals and institutions behind impenetrable technological systems:

“Like, especially if it’s a big decision, like what job or what house he buys, I hope… they do a bit more under the surface analysis… Coming from a person of colour… coming from a BAME background, a lot of the data search is not meant for us… Training is not based on our experiences… hopefully it would have changed by the time my son is a younger adult?” (Rijula)

Parents did not voice these concerns solely in relation to their own children. They thought about other parents’ children who might remain unseen and misunderstood. As Liam imagined, some children might be seen too often, too much, unjustly:

“I am slightly more worried about… maybe not my children. I’m…. middle class and white …the police algorithm .. it can only deal with the data that you feed it and if you’re feeding it all…. Where’s the most crime? Poor areas. Who lives in poor areas? Mainly ethnic minorities. So, who’s gonna get arrested? …If my son had a black friend or an Asian friend, I would definitely say, you know… You gotta be more careful what you say to police and how you appear to the police, because if you’re hanging out with my son and you’re doing something naughty, maybe my son won’t get arrested. But you will. And that’s not fair.” (Liam)

The “additional comments box”

What expectations of powerful institutions do parents articulate? Parents’ expectations centred on people – asking for human presence, including the warmth of human errors, in the face of overwhelmingly automated futures. Most, though, did not recognise that the decision to automate in the first place was itself made by people. Nandini introduced me to her notion of the “additional comments box” – a placeholder, almost, for careful, ethical human intervention.

“An additional comments box or something, but it needs it. It can’t just it can’t just be based on… a man, an Oxford educated man, a middle class white man, putting some things as setting an algorithm up and not allowing for maybe possible deviations and then getting what they get and not allowing for some.” (Nandini)

Bert, the father of a 1-year-old boy, and himself a software engineer, told me that he is horrified and taken aback at how critical he finds himself of automated decision-making. There is a binary in Bert’s, and indeed other parents’, discourse, where people making decisions and algorithms making decisions sit in watertight categories. Many seemed not to recognise how human decision-making, informed by algorithms, drives and shapes these systems.

“An algorithm can’t feel it, can’t understand how hard that child’s worked and what that child is really truly capable of… So I suppose that’s the frustration really is actually that you take out that kind of subjective nature that a human being can add and make it entirely objective which is… I think the point in which you completely remove some sort of human oversight.” (Bert)

The additional comments box, as Nandini put it, then, is more than a box for people to override machine errors. It represents, of course, incredulity about data-driven decisions not seeing or over scrutinising children. But it also represents parents’ hopes and expectations about decision making, transparency and accountability in the public domain.

Paying attention to parents talking algorithms

Where will Nandini’s musings about the additional comments box be heard, and by whom? What is Bert really asking for, when he asks for human subjectivity as opposed to an algorithmic ironing out of difference and nuance?

In my in-progress Parents Talking Algorithms book project, I suggest that parents’ feelings about algorithms matter. Their feelings about parenthood, and their own parenting practices, are in relationships of mutual shaping with algorithmic interfaces. Parents’ individual interpretations of algorithms in their children’s lives reveal their visions of the collective. In my project I hear parents speak as parents and as citizens, thinking about the children of other parents, beginning to imagine and articulate what they might expect of those behind technological systems.

First published at www.parenting.digital, this post represents the views of the author and not the position of the Parenting for a Digital Future blog, nor of the London School of Economics and Political Science.

You are free to republish the text of this article under Creative Commons licence crediting www.parenting.digital and the author of the piece. Please note that images are not included in this blanket licence.

Featured image: photo by Gencraft (AI-generated image)

About the author

Ranjana Das

Ranjana Das is Professor in Media and Communications at the Department of Sociology, at the University of Surrey. She is the PI of a Leverhulme Research Grant (2023-2025) on parents’ news use in times of crises, and a British Academy grant (2023-2025) on AI and emerging technologies in relation to children’s bilingualism. She writes this post from her current book project, Parents Talking Algorithms, from which a new paper has just been released.
