
Velislava Hillman

May 25th, 2022

New study confirms that many EdTech companies exploit children’s data and there is nothing to stop them



A new report by Human Rights Watch confirmed that children are exploited through many seemingly innocuous – and often government endorsed – educational technologies. LSE Visiting Fellow Dr Velislava Hillman highlights the key points and calls for action.

Thirteen media organisations around the world, including the UK’s Daily Telegraph and the Washington Post in the US, have brought to light a global investigation by the international NGO Human Rights Watch (HRW) into the practices of education technology (EdTech) companies that put children’s privacy rights and freedoms at risk or infringe upon them. The EdTech products investigated were endorsed by 49 governments worldwide and deployed in schools and colleges during the coronavirus lockdowns. While many countries have slowly emerged from the pandemic and normality is returning to schools, many of these products have remained in the education system, as have their exploitative data practices.

HRW reviewed 165 EdTech products, of which 89% engaged in data practices that put children’s rights at risk, undermined them, or actively violated them. Companies monitored children without their consent or knowledge, harvesting data on what they do, who they are, where they live or study, and who their family and friends are, to the extent that the only way for children to protect themselves from this invasion is to throw “the device away in the trash,” the report concluded.

The majority of the learning platforms sent children’s data, or allowed access to it, to advertising technology (AdTech) companies, many of which belong to entire supply chains owned by the most powerful companies, such as Amazon, Facebook, Google and Microsoft. From there, increasingly advanced algorithms analyse and profile children, piecing together more data from other public or private sources to create detailed profiles that are sold to advertisers, data brokers and anyone else interested in targeting groups of people with similar characteristics online. Such inferred profiles of children can then be used to enable behavioural manipulation over time.

According to the report, some EdTech products directly sent or granted access to children’s personal data to 199 AdTech companies. The number of AdTech receiving children’s data “was far greater than the EdTech companies sending this data to them, illustrating the financial incentives that place economic value on children’s data and fuel extraordinarily intrusive surveillance and deep erosions of their privacy.”

Most EdTech platforms, deliberately or unwittingly, installed tracking technologies that persistently surveilled children beyond their virtual classrooms, across the internet. Others fingerprinted and tagged children covertly, in ways that cannot be avoided even if the user becomes aware of them. Some EdTech products sent children’s data to AdTech companies whose algorithms manipulate children’s behaviour, views and what they can see online.

These data practices and the resulting risks are not new in education. Concerns have been expressed for years – from concerns about privacy risks to questions about EdTech’s pedagogical value – and the HRW report rings alarm bells that should no longer be ignored.

EdTech products are becoming a legitimised means of accessing and influencing content, instruction and assessment in schools, without any general understanding of what they do or how they do it. And their data practices have wider consequences for children’s basic rights and freedoms, now and over time, into their future.

Earlier this month in the US, the Federal Trade Commission (FTC) listened to similarly alarming evidence about powerful mergers and acquisitions in education and the amassing of data empires which hold children’s data. The FTC is taking steps in the right direction to protect the most vulnerable – children. It is time for other governments to act with due diligence and duty of care with regards to EdTech companies having access to millions of children every day.

Who is snooping on children while they use Microsoft Teams or Google Classroom, products of the same companies that the UK government partnered with to provide education during the coronavirus pandemic? The Markup’s Blacklight is a tool that can offer real-time diagnostics of a website’s privacy practices.
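Blacklight’s full analysis is far more sophisticated, but the basic idea behind such diagnostics – spotting the third-party hosts from which a page quietly loads scripts and tracking pixels – can be sketched with nothing but the Python standard library. This is a toy illustration, not Blacklight’s actual method, and every hostname below is made up:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class TrackerFinder(HTMLParser):
    """Collect hosts of third-party scripts, images and iframes in a page."""

    def __init__(self, first_party_host):
        super().__init__()
        self.first_party_host = first_party_host
        self.third_party_hosts = set()

    def handle_starttag(self, tag, attrs):
        # Only elements that trigger network requests interest us here.
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).netloc
        # Anything loaded from a different host is a third-party request.
        if host and host != self.first_party_host:
            self.third_party_hosts.add(host)

# Hypothetical page source for a fictional learning platform.
sample_html = """
<html><body>
  <script src="https://example-edtech.com/app.js"></script>
  <script src="https://ads.example-adtech.net/pixel.js"></script>
  <img src="https://tracker.example-analytics.io/1x1.gif">
</body></html>
"""

finder = TrackerFinder("example-edtech.com")
finder.feed(sample_html)
print(sorted(finder.third_party_hosts))
# → ['ads.example-adtech.net', 'tracker.example-analytics.io']
```

A real audit would also follow redirects, inspect cookies and detect canvas fingerprinting, which is precisely the heavier lifting tools like Blacklight automate.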

What can be done?

Ethical use of children’s data should be the baseline, not the aspiration, of EdTech deployment and regulation. When it comes to children, governments should draw the line and ensure independent and objective governance and regulation that observes and protects children’s rights and freedoms.

Other measures, proposed on previous occasions, which the HRW study emphasises include:

  • EdTech companies should be licensed to operate. They must show that their products provide value to education and actively work to prevent any negative impact on children’s rights and freedoms. To enjoy the privilege of selling products and services to children, companies should undergo continuous audit and oversight. The HRW report recommends: “governments should conduct data privacy audits of the EdTech endorsed for children’s learning…remove those that fail these audits, and immediately notify and guide affected schools, teachers, parents, and children to prevent further collection and misuse of children’s data.”
  • Schools deserve full transparency. The school principal/head teacher never sees what EdTech companies present to their investors. Total transparency from their investor deck to their algorithmic configurations should be shared with key stakeholders in education.
  • AdTech companies must play a role. As the HRW report recommends, AdTech companies must identify the apps and websites that install tracking technologies and transmit children’s data to them. Their responsibility towards EdTech should include identifying services that target children, and monitoring those services regularly and with due care. They must also prevent the use of AdTech tracking technologies to surveil children. They should audit their incoming data, deleting and disabling data that comes from children or child-directed services, respect and prioritise children’s rights and freedoms in their work, and ensure dedicated mechanisms to report and seek remedy for abuse of children’s data and infringements on their rights.
  • Stakeholders must be informed about any data practices relating to their data directly or indirectly. Children, parents and teachers must never be denied the right or opportunity to know what is being done to them and their data.
  • Provide remedy for what has already happened. Data has been collected for years; these data practices have been ongoing, and while the coronavirus pandemic exacerbated the problem, it was neither new, nor did it stop as the pandemic subsided.

The exploitative data practices of EdTech and AdTech companies can damage children’s experiences online and with digital technologies of any kind. Surveillance and tracking may not only have a chilling effect; they can drive children to distrust everyone around them. This is the complete opposite of the environment necessary for a child’s healthy development. It is time to set things right.

This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Velislava Hillman

For the past ten years Dr Hillman has researched at the intersection of learning, digital technologies, children and young people with focus on their personal perspectives, uses and experiences. She has researched in the area of learning and creativity through the use of digital tools in formal and informal learning environments. During her fellowship in Berkman Klein Centre for Internet and Society at Harvard University between 2018 and 2019, Dr Hillman investigated the implications of data collection in K12 education on children’s basic rights and freedoms to self-expression. More recently, Dr Hillman’s interests are in the integration of AI systems in schools, data-driven decision-making and the role and participation of children and young people in increasingly digitised learning environments. As a Visiting Fellow she is focusing on identifying what kind of AI systems are used in compulsory education; learners’ experiences, perceptions and attitudes towards AI systems; the role of such systems in the instruction, learning and assessment processes; and the rights and freedoms of children and young people in relation to AI systems’ diagnostics and learner profiling.

Posted In: Children and the Media
