
Velislava Hillman

November 3rd, 2021

Algorithmic (in)justice in education: Why tech companies should require a license to operate in children’s education


In the wake of yet another storm of revelations about the merciless practices of powerful technology companies, Velislava Hillman, Visiting Fellow at LSE, looks at the growing power of education technology businesses in public education and argues that their products need licensing to operate.

Last month Frances Haugen told the world that Facebook knew its Instagram platform contributes to mental health problems in teenagers (while it was also developing a similar product for under-13-year-olds – now paused), prioritising business profit over their wellbeing. Recently, she gave evidence to a British parliamentary committee scrutinising the Online Safety Bill, as calls grow to tighten up the proposed legislation. Instead of stopping its abusive practices, Facebook seems to be seeking an easy way out of the scandal – by deciding to change its name (exactly what the child sex offender and teacher Ben Lewis did in order to continue teaching).

Tech companies’ arrogance doesn’t stop with social manipulation. They are now moving into public education.

Many tech companies – including those selling educational technologies (edtech) – show an appetite for it, Facebook no less. For example, its engineers developed Summit Learning, an adaptive platform that promises to transform education. While it faced an outcry from students (driving the company behind Summit to change its name), it is still going strong. Facebook CEO Mark Zuckerberg’s non-profit – the Chan Zuckerberg Initiative – is also funding a new platform for school-age children, Panorama Education, which harvests sensitive data from students through surveys about anything from “how they feel at school” to “how much potential they think they have”. The harvested data, the company reassures us, can integrate with all of a district’s existing data systems so that it can bring together a ‘panorama’ of information about children. Pearson, the media and education behemoth, contends for more market share with its Pearson+ app, which suggests (manipulates?) course material to students based on their app usage. Google, too, claims that 80 million educators and students globally use its applications. Microsoft boasts 100 million student users. Then there are the thousands of other edtechs – offering content, data management, tutoring, testing, monitoring and more.

While tech businesses’ appetite for involvement in public education shouldn’t go without critical inquiry, public scrutiny must urgently address what edtech products (can) do to students through the pre-emptive powers of their algorithms. Edtech adopters (policymakers and education leaders) must pay attention to these concerns.

1. Manipulation

Haugen testified that Facebook deploys algorithms that manipulate the news feed to maximise people’s engagement with the platform, in order to drive advertising profits. This is no surprise: back in 2014, Facebook experimented with people’s emotions in one of the largest – and secret – studies of its kind, involving 689,000 users, which produced “experimental evidence of massive-scale emotional contagion”! Google has also been deploying algorithms to manipulate content and maintain its monopoly over general search. In parallel, algorithm-powered edtech platforms manipulate content based on learner behaviour. As the CEO of Century Tech, an ‘adaptive’ learning platform, put it, her product observes “how students are behaving across the content” and nudges them to their next task. But how do we know that such products aren’t manipulating children in a harmful way?

2. (Re)production of inequalities: who will know the whole truth?

An educational system, sociologists Bourdieu and Passeron write, is built around accepted-as-legitimate pedagogic instruments, actions and authorities. The instruments comprise the work (curriculum, books, assessments, etc.) and the action (pedagogy) of inculcation. The authority prescribes, controls and monitors the work and action, and navigates the system and its stakeholders. The system has a certain power to (re)produce education, society and culture because it has power over who will see and learn what.

However, as edtech products increasingly mediate the educational system, they also establish themselves as a legitimate authority. So, if the Pearson+ app suggests type A content for student A and type B content for student B, it has control over who will access what kind of knowledge. Social media’s algorithms create ‘echo chambers’: people are manipulated into seeing different parts of the truth, or a different ‘truth’ altogether. At work, too, algorithms create information asymmetries. Uber, the ride-hailing platform, uses them to control what its drivers see and know in ways that benefit mainly Uber.

Do edtech businesses control knowledge in a fair and just manner, as they claim? How can we know for sure?

3. He’ll be the engineer, she the technician

Similarly, an edtech platform can determine the legitimate length of education for student A compared with student B. Panorama Education and Naviance are platforms that sell themselves as the legitimate power to decide which learner will be the “holder of the principles” (e.g. the engineer) and which the “mere practitioner” (e.g. the technician).

Algorithms can steer some students towards vocational education, which has historically been perceived as low-quality, while steering others towards university. Such influence has its own harmful consequences.

We see algorithmic injustice of this kind in hiring. Over-reliance on algorithmic decision-making systems creates an opaque power that can conceal disparities in hiring and destroy equal opportunity in employment. Google’s algorithm displayed prestigious job ads to men but not to women; Amazon’s hiring algorithm showed bias against women.

4. It starts with managing canteens; it can end in predictive policing

Facial recognition was recently launched in nine English school canteens. Recognition and surveillance cameras have been in schools since the 1990s, with the intention of preventing violence (even though research has shown them to be ineffective). However, the policing capacities of these technologies aren’t the only problem; algorithms have the pre-emptive power to predict who is likely to commit a crime. A school in Florida has already used such a system. Legitimating this power can destroy young lives.

5. Algorithm-powered edtech products: who is responsible if/when they fail?

Uber classifies its drivers as “driver-partners”, thus dissociating itself from an employer-employee relationship and therefore from employer responsibility. To draw a parallel in education: responsibility for learning outcomes still rests with teachers rather than with edtech companies – so robots aren’t going to replace teachers just yet. But then who is held accountable for the consequences of edtech failures?

License to operate

Algorithmic injustice needs to be addressed now. We shouldn’t wait until a whistleblower comes to testify that an edtech business, while fully aware that its products are failing, has prioritised profit over children’s wellbeing. Edtech products need thorough scrutiny and evaluation that goes beyond data privacy assessments. They need a license to operate in children’s education.

Notes


This text was originally published on the MediaLSE blog and has been re-posted with permission.

This post represents the views of the authors and not the position of the Parenting for a Digital Future blog, nor of the London School of Economics and Political Science.

Featured image: photo by Compare Fibre on Unsplash

About the author

Velislava Hillman

For the past ten years, Dr Hillman has researched at the intersection of learning, digital technologies, and children and young people, with a focus on their personal perspectives, uses and experiences. She has researched learning and creativity through the use of digital tools in formal and informal learning environments. During her fellowship at the Berkman Klein Center for Internet and Society at Harvard University between 2018 and 2019, Dr Hillman investigated the implications of data collection in K-12 education for children’s basic rights and freedoms of self-expression. More recently, Dr Hillman’s interests are in the integration of AI systems in schools, data-driven decision-making, and the role and participation of children and young people in increasingly digitised learning environments. As a Visiting Fellow she is focusing on identifying what kinds of AI systems are used in compulsory education; learners’ experiences, perceptions and attitudes towards AI systems; the role of such systems in instruction, learning and assessment; and the rights and freedoms of children and young people in relation to AI systems’ diagnostics and learner profiling.

Posted In: In the news