To discuss the issues arising from the General Data Protection Regulation's provision that under-16s will need parental consent before accessing social media or other online services, the LSE's Media Policy Project, the UK Council for Child Internet Safety's Evidence Group, the Centre for Digital Democracy and the School of Communication at American University held a round table on 14 October. Professor Sonia Livingstone sets out the key questions here. A meeting note, and a series of blog posts teasing out these complex issues, will follow in the coming weeks and months. Watch this space.
To meet the challenges of the era of “big data” and the expansion of sophisticated commercial tracking and profiling within digital environments, the European Parliament, the Council and the Commission agreed on 15 December 2015 on new harmonised data protection rules which, they hope, will benefit both European businesses and European citizens.
But will the new General Data Protection Regulation (GDPR) benefit children?
Many of the provisions strengthen consumer protections for all internet users, in a context where personal data collection is rapidly becoming ubiquitous, thereby supporting the digital single market and introducing long-overdue privacy and data security standards that consumers – including children – can trust. For example, the GDPR requires that providers of ‘information society services’ (as defined by Directive 2000/31/EC) must appoint a data protection officer; they must treat users’ data transparently and fairly, permitting the right to be forgotten, and they cannot transfer personal data to third parties without explicit consent from the user.
The GDPR also includes a series of specific protections for children (Recital 38), including a ban on data profiling (Recital 71) and special requirements on transparency (Article 12), although exactly what is meant, and whether and how these can be guaranteed remains unclear. All this should be welcome to many, including child advocates who have long been concerned at the growing commercialisation of childhood, now intensified in the digital age when ‘free’ isn’t and when geo-location tracking, data profiling, embedded marketing and cross-platform convergence of data are fast becoming the norm in a way that far outstrips parents’ and children’s digital literacy.
Yet mechanisms to protect children seem hard to come by, and the GDPR places considerable – probably undue – reliance on parents to manage their children’s access to online services. Although little noticed until the decision was reached, unless member states specifically legislate to introduce a different age, one effect of the GDPR will be to prevent those under 16 from using social media and other online platforms unless the social media site or platform obtains parental consent.
Thus 16 is about to become the new European norm below which parental permission is required before information services of any kind may be offered to a minor, a distinct increase on the age of 13 that is more commonly (though not universally) applied across Europe, especially by companies headquartered in the US, where the Children’s Online Privacy Protection Act (COPPA) governs both children’s access to the internet and companies’ access to children.
Lots of questions arise:
- At what age are children able to give consent for their data to be used for marketing purposes?
- Do parents understand the issues involved and should they be the sole arbiter of whether teens can access online services?
- Is there evidence available to support the answer to either question? Certainly note should be taken of the research showing that, in practice, many children will not ask for or obtain parental permission, for a host of reasons, some of them likely to exacerbate social and digital inequalities (parents who don’t understand, have no time, etc.). Note, too, the evidence on the limitations of children’s digital literacy.
Then, lots of regulatory questions arise also:
- Why does the GDPR place no burden on companies to deal with the degree to which children lie about their age to gain access to services? Or, is this covered in the risk-based impact assessment (Article 35)?
- How can children’s right to participate online be weighed against parents’ concerns to protect them from marketing, and who will adjudicate in each child’s best interests when conflicts arise?
- Given such controversy, one wonders why the drafters of the GDPR did not find a way for children to go online without their data being exploited for marketing purposes.
- And why did they not – or so it appears – consult children in the process, as they are required to do by the UN Convention on the Rights of the Child (Article 12) and the EU Charter of Fundamental Rights (Article 24)?
Some experts are clear that the new regulation supports teenagers who indeed deserve greater opt-in, transparency and individual control over their personal data online in the face of the expansion in commercial marketing practices. One might also note that, since recent years have seen considerable problems of cyberbullying, pornography and sexting among young teenagers, raising the age of digital consent may have the additional consequence of reducing such safety risks.
But, by the same token, it will have the unintended consequence of significantly limiting children’s rights to communicate with their peers or engage online with other valuable resources, or participate in the online civic and public sphere. Indeed, many judge that the horse has already bolted. As The Guardian put it, “Is Europe really going to ban teenagers from Facebook and the internet?” On the other hand, are we really going to allow 13-16 year olds to be profiled, tracked and targeted by marketers, including those (such as fast food companies) that may adversely influence their health and development?
By May 2018, the GDPR will be law across Europe, and if member states do not legislate to change the age, all those under 16 will be unable to use the internet unless their parents consent to their data being collected or unless they lie about their age (or unless they use those online services that are non-compliant and thus escape regulation). Is this what society wanted, when it enthusiastically embraced the internet in homes and schools for its exciting new opportunities to learn, create, communicate and participate?
So, to be 13 or 16, that is the question. As for the answer, it remains unclear how the decision will be made in each country, including in the UK. Can we identify solutions that enable children to benefit from the internet without subjecting them to commercial messaging? Or that place less reliance on parental consent tied to a particular age?
In short, it seems we face a classic case of conflicting rights, where children’s rights to participate appear in conflict with their rights to protection. Can no better solution be found which, on the one hand, protects young people from commercial exploitation online while also allowing them the freedom to access social networking and related sites?
It is urgent that key stakeholders – policymakers, educators, advocates, and industry organisations – engage in a full discussion of the range of issues surrounding these new rules, to ensure the optimal decision is taken in the best interests of UK teenagers.
This post gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.