
Editor

November 29th, 2016

The limits of parental consent in an algorithmic world


Data protection reform is set to take place throughout the European Union when the General Data Protection Regulation takes effect in May 2018. Nathan Fisk, Assistant Professor of Cybersecurity Education at the University of South Florida, discusses questions around the age of consent for data collection and processing, and its implications for teenagers and their parents. [Header image credit: A. Worner, CC BY-SA 2.0]

Fundamentally, the General Data Protection Regulation (GDPR) is centered on informed individuals choosing to allow their data to be “processed” – whether that be the consent of an adult, the consent of teens aged 16 or over, or the consent of parents of children under 16. Regardless of which category an individual falls into, the assumption is that, given a transparent, simple explanation of how their data will be used, along with easy-to-use tools for managing those processes, users will be able to make informed decisions about the use of their information. To someone who has personally advocated for increased transparency and privacy controls, this all sounds like a good step forward, at least in theory. I am not, however, without reservations concerning the GDPR.

At the recent GDPR roundtable at the LSE, one of the many topics discussed was education, particularly the lack of any requirement for member states to educate parents or children about the issues of personal data processing. While I myself advocated further media literacy efforts, many of the concerns over the uses of personal data are grounded in the complexity of algorithmic systems, which education alone cannot resolve. Indeed, one of the best books on the subject is entitled “The Black Box Society”. Systems that collect and process youth data are often, in fact, proprietary black boxes. The power of algorithmic processing comes not from efficiencies, but rather from the emergent results that such systems provide. Not even the designers and data controllers themselves fully understand how or why a sufficiently complex system yields some forms of output, as recently exemplified by the Facebook fake news controversy during the U.S. election.

Academics, industry experts and policy makers are only just starting to come to terms with algorithmic processing and the often invisible problems such services and mechanisms produce. Why, then, should we expect parents and kids to be able to make informed decisions, even under the most ideal conditions? In this light, the move to raise the age of consent for personal data collection and processing to 16 makes even less sense. As discussed at the roundtable, children of the same age can have widely varying capacities to understand risks and make decisions, depending on their experiences – not all 13-year-olds are the same. A difference of three years, already premised on the flawed assumption that the competence of children can be so easily judged by age, is unlikely to ensure that teenagers truly grasp the potential consequences of giving controllers access to their personal data.

Transparency and education are not the only issues, however. If there is anything I’ve learned working in cybersecurity for nearly a decade, it is that the forces of convenience and social inclusion should never be underestimated, particularly when it comes to teenagers. Consent is always structured by local contexts, social forces and immediate needs, and while children and parents may understand the risks, those risks are highly unlikely to appear to outweigh the benefits offered by online services. As I’ve explained in my recent book, Framing Internet Safety, adolescents tend to value privacy from their parents and other local adults more than protection from the distant-seeming but ever-present threat of corporate surveillance. Application developers are well aware of this fact, and are more than willing to offer teens online spaces with tools to maintain their local contextual integrity. Take, for example, the limited persistence of content in apps like Snapchat and Yeti, or the forms of anonymity offered by Yik Yak and Kik.

Youth and parents should not be placed in a situation in which they must choose to “sell off” their privacy (and, increasingly, their agency) in order to take advantage of online services. While the GDPR does include provisions that seem to forbid data controllers from making access to a service conditional on users giving up their data, it also allows controllers to require data collection where the service needs that data to function properly. I have difficulty imagining that many controllers will not frame personal data collection as a necessity wherever possible, even given provisions requiring them to defend their decisions to do so. Beyond that, systems that collect and algorithmically process personal data hold genuinely positive potential, for both individuals and society, and it is increasingly rare to find an app or platform that does not use personal data to provide some form of benefit to users. Teens who choose to protect their privacy in these cases will miss out on those benefits, again, a difficult decision to make.

While the GDPR does address significant problems with profiling, interpretation, and data processing (see Recital 71) by giving users the ability to contest decisions based on algorithmic processes, it largely fails to address the ways in which the systems that increasingly undergird everyday social life also operate as mechanisms of subtle social control. Mentions of an emerging “Internet of Toys” were frequent at the roundtable, with concerns arising not only from the collection of data, but also from the shaping of youth behavior based on that data. While persistently connected toys and software present interesting entertainment and educational opportunities, they also hold the potential for invisible forms of social shaping over time. This kind of control has already been well noted by those who study or have otherwise criticized free-to-play games and their often predatory design, which has provoked the release of new rules from the EU. However, these common forms of manipulation are likely only initial experiments in subtly shaping user behavior, particularly given the increasingly blurred line between advertisement and content.

None of this is to suggest that parents and youth should not have a role in making decisions about personal data processing, of course, nor that data controllers should not be required to inform users and provide evidence that they have consented to participation. However, it is through an overreliance on consent that, as Sonia Livingstone noted in her previous post, the GDPR places an undue burden on youth and parents, ultimately pushing responsibility for a broader social policy problem down to individuals. Rather than bringing experts, parents, and youth together to collectively do the difficult work of figuring out what we as a society want from personal data collection and algorithmic processing technologies, relying on consent requires that youth and parents figure it out for themselves within an often predatory and changing marketplace. While teens and parents might well be savvy consumers, without significant educational efforts and stronger rules around the shaping of social behavior by data controllers, they cannot be the informed decision makers assumed by the GDPR.

Notes

This text was originally published on the LSE Media Policy Project blog and has been re-posted with permission.
