In May 2018, the General Data Protection Regulation (GDPR) will take effect in the EU. After years of debate, the legislation was passed including provisions that will affect children’s personal data. Sonia Livingstone discusses the impact of the GDPR on children and young teens, especially in relation to social networking. Since the legislation was passed, many opinions have been expressed and many problems identified with the GDPR, which Sonia summarises here. Sonia is Professor of Social Psychology at LSE’s Department of Media and Communications and has more than 25 years of experience in media research with a particular focus on children and young people. She is the lead investigator of the Parenting for a Digital Future research project. [Header image credit: P. Walsh, CC BY-NC-SA 2.0]
On 10 December, John Carr, the UK children’s charities’ expert on internet matters, blogged about the implications for minors of the upcoming decision by the European Commission to strengthen online data protection and privacy. He and I had just co-authored a paper with Jasmina Byrne for the Global Commission on Internet Governance on how children are consistently overlooked in internet governance decisions. Or, at best, they are treated as vulnerable individuals – not as the independent rights-bearers of the UN Convention on the Rights of the Child.
This was a classic instance. The purpose of the new data protection rules is simultaneously to cut red tape for businesses in the interests of a digital single market and to introduce new privacy and data security standards that consumers – including children – can trust. This sounds good – social networking and other online companies that collect large amounts of personal data will have to appoint a data protection officer; they won’t be able to transfer personal data to third parties without explicit consent from the user when the data is being used for other purposes; and so on. And since the data practices currently targeting adults are the same as those used for teens – cross-platform and mobile location tracking, predictive analytics – they are unfair in the sense that they exceed reasonable expectations of young teens’ digital literacy. No wonder the US’s Center for Digital Democracy, among others, supports the new regulation, arguing that teenagers especially deserve greater opt-in, transparency and individual control. After all, child welfare experts have long been calling for data protection safeguards for teenagers to address the commercial online marketing practices of Facebook and others.
But strengthening data protection – surely a welcome development – turns out to introduce new restrictions on online freedoms, as Janice Richardson and others quickly pointed out in protest. This was because of a little-noticed and barely debated proposal in the draft European General Data Protection Regulation to raise the age at which teenagers can use the internet without parental permission from 13 (the age curiously set for European children by an informal extension of the US law COPPA) to 16 – the age at which, presumably, European regulators consider teenagers able to understand how their data is treated by companies when collected online (assuming anyone of any age actually understands this).
As those who pay attention to the implications of internet governance and regulation for children quickly grasped, the enhanced protections would have the major (though presumably unintended) consequence of significantly limiting children’s rights to communicate with peers, engage online with educational, health and other valuable resources, or participate in the online civic and public sphere, as argued forcefully by Larry Magid and danah boyd, among others. Or, as The Guardian put it, “Is Europe really going to ban teenagers from Facebook and the internet?” There are, surely, more satisfactory precedents for striking the right balance between participation and protection, bearing in mind children’s developmental needs and circumstances.
As those of us interested in evidence are acutely aware, while the proposal is not actually for a ban, in practice many children won’t ask for or obtain parental permission, for a host of reasons, some of them likely to exacerbate social and digital inequalities (parents who don’t understand, have no time, don’t care…). As a result, new sites will proliferate and many children will lie about their age even more than they do now, to gain access to services, pushing their internet use further under the parental radar and so making it harder for well-meaning parents to guide them.
As those of us interested in children’s rights in the digital age are acutely aware, the internet poses dilemmas regarding the clash of rights – most often, child protection versus participation. And, one might add, the protection issues potentially go beyond those of data and marketing – how much of the public concern over cyberbullying, sexting and the need for porn filters will fall away now that we have, it seems, banned teens from Snapchat, Facebook and Google? But how much fun, communication, creativity, participation and learning has also been banned? Can teenagers really return to pre-digital days?
Notwithstanding the complexities involved in such genuine clashes of rights, by 16 December the decision was pretty much made – bringing to a conclusion a four-year regulatory wrangle about data protection, ostensibly in the interests of European citizens. Reading the decision carefully reveals that, in principle, protections for children as well as adults have been enhanced.
Further, perhaps in recognition of the above argument, the decision now includes a proviso that while 16 becomes the new European norm below which parental permission is required before offering information services of any kind to a minor, member states can choose to pass legislation to lower that age. The problem is that children’s rights to protection and participation don’t exactly vary country by country but, rather, by child and by circumstance. Nor can parents be the sole arbiters of children’s rights (as famously established in the UK by the Gillick ruling). As the EU Charter of Fundamental Rights states, “Children shall have the right to such protection and care as is necessary for their well-being. They may express their views freely. Such views shall be taken into consideration on matters which concern them in accordance with their age and maturity.” (Article 24)
So here are the questions we’re facing.
- Why is parental permission seen as the key mechanism to protect children from online abuses? We have ample evidence that it doesn’t work. We have in-principle arguments that children have rights independent of their parents’ preferences. We have profitable companies that could instead be required to identify and treat children themselves in ways appropriate to their age, without passing the burden to parents and the risk of exclusion to children.
- Is there no way to separate data collection from data exploitation, in terms of process and regulation? In other words, couldn’t we enable information service providers (whether public, private or third sector) to collect data from children so as to enhance the service offered, without parental permission for those aged 13 or over, provided no commercial use is made of the data? (Yes, this would require a big change for Facebook and the others, but they would then be more likely to gain these young people as consumers when they are older.)
- Was there, and is there still, no chance that major decisions of this kind could be made with – rather than in the apparent absence of – both research evidence on teenagers’ online risks and opportunities and direct consultation with teenagers and those concerned for their welfare? This would surely allow for a better, and a more legitimate, outcome.
- Will the outcome of the new regulations in practice make companies treat under-16s better, or will they just cease to provide for them at all, given increased regulatory complexity and loss of revenue? What impact assessment has been conducted to demonstrate the benefits to children of this new regulation? And who will evaluate the subsequent effects of these changes for children and young people in particular? Without these, how can we be confident that the new regulation is, as Joseph Savirimuthu asks, in children’s best interests?
- What will governments decide about whether to reduce the age from 16 back to 13, and with what evidence and process of consultation and monitoring? Who will decide in the UK in particular, and how can those of us with something to contribute – including teenagers themselves – get involved?
I think these questions are both important and urgent. Europe’s teenagers await the answers, for this really matters to them.
Notes
This text was originally published on the LSE Media Policy Project blog and has been re-posted with permission.