
Velislava Hillman

May 27th, 2021

EdTech in schools – a threat to data privacy?


Velislava Hillman, a visiting fellow at the LSE, argues that it is essential to consider the implications of the increasing use of technology within schools, along with the motivations of ‘edtech’ companies.

It is hard, perhaps impossible, to go to school and not be registered by a digital technology. Cameras line the premises; homework is completed using one business’s software application (e.g. Microsoft Word) that may be embedded in another business’s platform (shared via Google); emails, bathroom trips, assessments, parental backgrounds – all feed into digital systems that are owned, managed, used and repurposed by hundreds of thousands of invisible business hands.

The growth of edtech in schools has enabled constant data extraction and normalised an inadequate ‘privacy standard’, while at the same time policy makers and academics advise schools to teach data and privacy literacy. If constant data extraction is allowed in class (and, while pandemic-induced lockdowns lasted, from home), what sort of data and privacy literacy will schools teach children? With this article, I would like to invite academics, policy makers and school leaders to think about a few important questions before data and privacy literacy teachings enter the curriculum.

What do edtech companies want from and for schools?

Profit comes to mind: edtech companies are businesses like any other. From a business perspective, the education sector is, first and foremost, fertile ground for making money, and business decisions are driven by what makes business sense, not necessarily by improving learning outcomes or end-users’ wellbeing – be that the child’s or the teacher’s.

What is the pedagogical and curriculum proposition of edtech businesses?

Essentially, it doesn’t matter what edtech businesses want from and for schools. What matters is how their business decisions affect education, children and their futures. Edtech businesses create powerful narratives that their products will fix education’s problems. They generously reimagine education through the prism of their products. But edtech businesses offer a hidden pedagogy of oppression, whose ‘generosity’, as the great education critic Paulo Freire argues, “begins with the egoistic interests of the oppressors (an egoism cloaked in the false generosity of paternalism) and makes of the oppressed the objects of its humanitarianism”.

The interests are expressed in ‘what makes business sense’ and end with the endless opportunities provided by the collected data. For example, in California, Tennessee, Virginia and other US states, students’ Google Chromebooks came with pre-installed software called Gaggle (a ‘proactive approach to suicide prevention’), which scans student coursework and behaviour to detect signs of depression. Students cannot avoid the surveillance if they want to access their Google Classroom material. The benefits of having the devices and access to schooling therefore come at a cost that the student does not have the choice to refuse.

Edtech companies thrive on digital data. Data in education increasingly influence decision making and change not only how the curriculum is designed (through data) but also who designs it (the agents that extract the data and their algorithms). Data create a new structural power, eliciting granular information about individuals and putting this in the hands of those that control the data systems. Data therefore can be seen as a pedagogic agency, which can override existing pedagogies and the very expertise of educators.

The functioning and business models of this new regime live off data, and data are generated by and dependent on their main stakeholders – students. Students are the source of value in edtech’s business model. Moreover, they are no longer seen as ‘human beings’ but as data points. As Freire argues, an oppressor will see the oppressed only as ‘things’.

For example, education and information company Pearson Inc. earns millions of US dollars from public education budgets to develop, maintain and provide computer software. The development, maintenance and provision of edtech is a source of income for the business, which is already rethinking structures and products to generate more capital and new business, including moving towards a ‘Netflix’-style model for education.

Edtech businesses can monitor and eventually control student interactions through two powerful instruments. First, their products allow every process, place and person in schools to be mapped and known. To take a test, a student can’t opt out of the digital record. The second instrument is ‘ubiquitous computing’, whereby technologies effectively disappear from view as they become so ingrained in everyday life that we can no longer see or distinguish them. These two instruments not only strip privacy entirely but also establish a new oppressive order without opportunities for resistance. This brings me to the next important question.

Can a child opt out of edtech to preserve privacy?

So far, I can venture a short ‘no’, but I would invite policy makers, academics and school leaders to produce a convincing answer to the contrary. The evidence so far points towards the oppressive pedagogy becoming the new standard, with policy makers showing no sign of resistance, doubt or even bargaining power (e.g., why not charge big edtech businesses for all the data they scoop from children daily, and then increase teacher salaries and furnish schools with all the facilities children would benefit from, like swimming pools and other sports facilities, home economics facilities, science labs, arts studios, school trips and library books?).

Policy makers swiftly sign corporate deals, encourage the idea that data solve education’s ailments and let edtech businessmen tell them how to improve education. A colloquialism has even emerged that suggests uncritical, widespread acceptance without resistance: a school can be described as either a Microsoft or a Google one (and soon an Amazon one or a Pearson one).

Can education survive without data collection?

Data, the discourse goes, allow accountability and precision. The more we know about someone, the timelier the interventions and the better the decisions we make about them (whether children agreed to participate is a totally separate question to worry about!). But there is also a level of inevitabilism attached to this narrative. That is, if we don’t have the data, does it mean we don’t intervene in a timely manner and therefore fail our children? Point to one teacher or parent who would think so!

Failures may exist for other reasons, but not because there is no access to data. Google made great claims for the positive impact of its G Suite for Education on student attainment and achievement when it introduced its products in 2014 into the heavily disadvantaged Metropolitan School District of Wayne Township in Indianapolis, where graduation rates were historically as low as 67%. However, according to the district’s superintendent, graduation rates had already improved to 94% by 2014. How? Through teachers’ relentless work to help students graduate (like hiring after-school buses so students could receive extra tutoring, and teaching students ‘soft skills’ such as persistence and collaboration).

Are schools a private or a public domain?

If we are to teach children data privacy in school, we also need to question whether a school is a private domain that maintains the privacy of those who come to it. The way schools approach their own privacy standards sends the strongest message about how children will understand privacy.

Privacy principles and public discourse maintain that privacy rights are greatly diminished once one enters public spaces – when one leaves the privacy of one’s home and shares information with others. Individuals do share information with others within the school’s bounds. However, while a ‘stranger’ cannot simply walk into a school building without permission and reason (think ‘stranger danger’), in a digitised school such sharing goes beyond its walls to thousands of strangers known only as designers, programmers, agents, administrators, marketers, business strategists, data analysts, data brokers and many others who develop, manage, control, sell, repurpose and own these technologies and the data generated from their use.

We can then give two answers to this question. The first: if schools are a public space, then one’s privacy there can no longer be guaranteed. The second: schools are a private domain. Deciding whether schools are a private or a public space will shape what children learn about the meaning of ‘private’ and ‘public’. Once established as normative, these newly learned concepts become difficult to challenge.

Educational responses have framed data and privacy literacy as a sort of individual ability to understand, identify and engage in practical activities that demonstrate a level of skill acquisition. However, some argue that such efforts will fuse with the proliferation of other literacies, ranging from media literacy to coding, which can lead to inadequate and limited success.

The more recent introduction of online safety in British schools is limited in that it frames privacy mainly in terms of lessons about personal online safety. Proposals have therefore been made to introduce a critical pedagogy around the commercial aspects of data collection and privacy loss. But again, in a place that cannot guarantee children’s privacy because of its digitisation, how can children practise such skills? Do children really have the choice to, say, opt out of technologies that surveil them and put their data privacy at risk? When schools used proctoring software during the lockdowns, they did not offer children the choice to simply opt out; they had to take the exam.

Similarly, college and career readiness platforms such as Naviance, whose popularity is growing, have the capacity to restructure the curriculum without children’s, parents’ or teachers’ participation, and to draw students’ future pathways. Naviance, owned by Hobsons, is a multi-layered data-collecting platform which until February 2021 formed part of the Daily Mail and General Trust in the UK. The platform has access to a wide range of personal and sensitive information about students. It “tracks students as they move through elementary school, college and beyond”.

Naviance shows not only how a business can have access to a tremendous amount of data on children, but also the complex commercial exchanges that happen when one company acquires another. Schools become the ground on which such exchanges happen; even if students learn about this, what would be the end goal of such a curriculum and pedagogical effort?

Why is privacy important?

Much has been written about the importance of data privacy and the risks emanating from its loss. Privacy in education specifically should be seen as the condition and space for a child to learn and practise basic freedoms and rights, and to create, express and develop critical thought. Privacy loss can lead to a wide range of tangible and intangible harms, from embarrassment to a career pathway a child never wanted.

Do we question, resist and create alternatives before we agree to the new oppressive pedagogy and privacy conditions?

School leaders, teachers and students must see that any dominant technology has its vulnerabilities. The vulnerability of the edtech business must be acknowledged. Exporting educational processes to edtech products and relying on their promises of efficiency and precision leaves schools dependent on, and at the mercy of, their success.

In a climate where the private sector has ever more power to provide ‘emergency pedagogies’ or ‘deep learning’ seemingly without public resistance, it becomes difficult to see how data and privacy literacy measures in schools can have a meaningful impact. Students should be allowed to resist and to question the providers and their ‘generosity’. To begin to question them should be the point of departure for a meaningful data and privacy literacy pedagogy.

This article gives the views of the author and does not represent the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Jeswin Thomas on Unsplash

About the author

Velislava Hillman

For the past ten years Dr Hillman has researched at the intersection of learning, digital technologies, children and young people, with a focus on their personal perspectives, uses and experiences. She has researched in the area of learning and creativity through the use of digital tools in formal and informal learning environments. During her fellowship at the Berkman Klein Center for Internet & Society at Harvard University between 2018 and 2019, Dr Hillman investigated the implications of data collection in K-12 education for children’s basic rights and freedoms of self-expression. More recently, Dr Hillman’s interests are in the integration of AI systems in schools, data-driven decision-making and the role and participation of children and young people in increasingly digitised learning environments. As a Visiting Fellow she is focusing on identifying what kinds of AI systems are used in compulsory education; learners’ experiences, perceptions and attitudes towards AI systems; the role of such systems in instruction, learning and assessment processes; and the rights and freedoms of children and young people in relation to AI systems’ diagnostics and learner profiling.

