
Taster

July 16th, 2019

As schools become suffused with ed-tech, is the only response to constant surveillance the right to remain silent?


Estimated reading time: 10 minutes


The growing prevalence of ed-tech in schools has prompted concerns over the ability of students (and parents) to make informed decisions about how, why, when and by whom school data is used. As technologies increasingly record students’ every word and move, Velislava Hillman asks whether constant monitoring, micromanagement and data collection can still guarantee students a safe environment in which to err without future negative consequences.

School is considered a safe space where children go to learn, discover new things and make friendships. But as we have seen in other seemingly safe spaces – from churches and aid agencies to scout groups and sports clubs – children can be at risk of harm. This is not to slander schools, or to reject education technologies (ed-tech) or data collection, but to ask: how much of what is recorded every day in class has the potential to diminish basic freedoms?

For example, automated essay scoring systems are commonly used in standardised testing. Using machine learning, such applications assess student work based on essays that were originally graded by human experts; the more data – in this case, graded essays – fed into them, the better their grading accuracy. However, while a student may entrust a teacher with a controversial essay, they may not feel the same way about an automated system that makes a permanent record of their writing, with unknown consequences as to how, and by whom, it can be used at any point in the future. Such technologies therefore risk having a ‘chilling effect’ on the right to free expression. Together with a range of other ed-tech tools designed for tasks such as monitoring student-generated content, collecting biometric data and face recognition, these technologies can leave students with a permanent record of any misstep or mistake they make at school, one that follows them for the rest of their lives.
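The learning step described above – fitting a scorer to essays already graded by humans – can be sketched in a few lines. This is a deliberately toy illustration, not any vendor’s actual system: it uses a single invented feature (vocabulary richness) and an ordinary least-squares fit, whereas real systems use many features and far more data. All function names are hypothetical.

```python
def vocab_richness(essay: str) -> float:
    """Fraction of distinct words -- a crude proxy for writing quality."""
    words = essay.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def fit_scorer(graded):
    """Fit score = a * richness + b by least squares from human-graded essays.

    `graded` is a list of (essay_text, human_score) pairs; the returned
    function grades a new essay with the learned coefficients.
    """
    xs = [vocab_richness(essay) for essay, _ in graded]
    ys = [score for _, score in graded]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    intercept = mean_y - slope * mean_x
    return lambda essay: slope * vocab_richness(essay) + intercept
```

Note that even this toy model is built directly from students’ graded essays – the training data is precisely the permanent record the article is concerned with.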

That said, the learning opportunities ed-tech tools provide are many: they help children with disabilities, provide global reach, tailor learning, and enable collaboration and creativity. There are also positive uses of school data: it can elicit useful information and improve work, advance education theory, and leverage resources. However, as ever more tools infiltrate school processes – from applications that generate classroom surveys (what are children’s political views?), to those that monitor student-generated content (did anyone type ‘kill’?) and control behaviour – the ‘right to remain silent’ may emerge as the best option for students under constant surveillance.

The wider implications of collecting fine-grained data over a long period of time lie in its permanence, impact and reach. While laws such as COPPA, FERPA and SHERPA (in the US) and the GDPR (in the EU) govern how children’s data is handled, many vendors lack transparency and consistent privacy and security practices. Schools normally organise data – academic (student-related) and operational (school-related) – into hundreds of elements. Each element contains more granular data, such as student names or unique identifiers, home address, gender, race, disability and discipline records; whether the student is on reduced lunch or enrolled in a special programme; grade reports, teacher observations, accidents. The list goes on. A school may use over 100 ed-tech products and services, usually supplied by for-profit vendors – and for the vendors, the more data their products run on, the better targeted those products become for the user. While legal frameworks address how children’s digital data should be handled, data breaches continue to happen, vendors’ use of student data still lacks transparency, and the laws themselves still have many flaws. Most of all, students (and parents) generally lack awareness and understanding of this complex data and the mechanisms of its collection, use and impact.

While various stakeholders are making efforts to shape policy on school data privacy, there is still an overall lack of personal, interactive initiatives designed specifically for children and young people, to help them learn and understand what data schools and ed-tech companies collect about them and how that data can affect their lives and futures. Efforts are being made towards creating a safer Internet for children: for example, the recently launched privacy toolkit aims to inform and prepare young people about the digital footprint they leave on the Internet. Similar efforts must be made with regard to their school data.

Becoming informed about, interacting with and understanding how their data is used to evaluate them as individuals and as learners is one way to ensure children and young people’s interactions with ed-tech are valued. It can also encourage them to challenge potentially harmful practices without fear that schools are generating permanent digital records of their every move. A number of measures can be taken to ensure data literacy, transparency and agency for children and young people with regard to their school data.

  • Create a school data vocabulary for children and young people, giving students visibility of their school data – what is collected about them, by whom, and how it is used to evaluate them as individuals and as learners;
  • Enable transparency: a data vocabulary will enable data transparency, but it must also be revised consistently – and, where possible, automatically – because ed-tech tools and school processes change constantly;
  • Prioritise student agency and participation in school processes. Under constant data collection, students have little incentive to offer personal, emotional or academic feedback, since every word can potentially be converted into data and processed by algorithms that draw inferences about them with unknown consequences. Student voice and perspectives should guide decisions about what data is collected and how it is used. Fine-grained data collection over a long period of time can destroy privacy, and various agencies and authorities can obtain access to longitudinal data that can affect life beyond school;
  • Engage different stakeholders: educators, policy makers and ed-tech providers should aim to promote meaningful engagement with ed-tech tools rather than engagement for the mere collection of data;
  • Apply data literacy: studies show that students interpret data in varying ways, and many young people do not think of the data generated about them as a ‘dossier’. A school dossier, however, can cover data from pre-kindergarten all the way to university and professional life – what data is collected, and how it may influence decision-making, should be made transparent and clear to students.
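As a rough illustration of the first measure above, a single entry in a school data vocabulary might record, in student-readable form, what a data element is, who holds it and what it feeds into. This is a hypothetical sketch only – every field name and value is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class DataVocabularyEntry:
    element: str          # the data element, described in plain language
    collected_by: list    # school systems / vendors that hold it
    used_for: list        # evaluations or decisions it feeds into
    retention: str        # how long it is kept
    student_visible: bool # can the student see and query it?

# One invented example entry for a sensitive element mentioned in the text
lunch_status = DataVocabularyEntry(
    element="Free or reduced-price lunch status",
    collected_by=["Student information system", "Cafeteria point-of-sale vendor"],
    used_for=["Programme eligibility", "School funding reports"],
    retention="Until one year after the student leaves the school",
    student_visible=True,
)
```

A vocabulary built from entries like this could be published to students and revised automatically as tools are added or dropped, supporting the transparency measure in the second bullet.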

While information from school data can help adapt pedagogy, evaluate school quality and assess students’ academic performance, it is crucial to strike a balance by taking a student-centred approach that enables their equitable participation in these processes. Such a balance can be achieved by providing students with critical data literacy – starting with their own school data and with transparency about how it is used – in order to allow room for mistakes and a natural learning process.

 

About the author

Dr Velislava Hillman is a fellow at the Berkman Klein Center at Harvard University, studying children and young people in digital learning environments. Together with Varunram Ganesh of the MIT Digital Currency Initiative, she is currently developing a prototype for a decentralised data-management platform that aims to provide student agency, data literacy, visibility and control. vhillman@cyber.harvard.edu

 

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns about posting a comment below.

Image credit: philm1310 via Pixabay (Licensed under a CC0 licence)



Posted In: AI Data and Society | Big data | Research ethics
