The UK Secretary of State for Culture, Media and Sport Karen Bradley has just announced a new Internet Safety Strategy to crack down on risks to children such as cyber-bullying, sexting and online trolls. One way that tech companies claim to protect children is through setting age limits – usually 13 years old – for the use of their social media properties. LSE Professor Sonia Livingstone (who will be advising DCMS on the new strategy) and John Carr, member of the Executive Board of the UK Council for Child Internet Safety, decided to investigate whether and how age restrictions are enforced on photo-sharing app Instagram (owned by Facebook), which is one of the most widely used social networks with more than 600 million users worldwide.
You’d think two child internet safety experts would already have been on Instagram, checking out its offer. But this only occurred to us a couple of weeks ago, and unfortunately we were unimpressed with what we found.
Here’s what Sonia did:
- Visit the App Store, click Install, then Open.
- Ponder the automatic invitation to continue with your named Facebook login, and digest the fact that personal data is shared across Facebook and Instagram (sharing that has already proved problematic for WhatsApp).
- Sign up via mobile number (or email) rather than via Facebook, and receive a verification code sent by Facebook (not Instagram).
- Once in Instagram, provide a photo, name and password, followed by “Create username” (any will do), hit NEXT, noting (if attentive) that “By signing in you agree to our Terms and Privacy Policy.”
The “Terms and Privacy Policy” link was clickable, and presumably Instagram (and Facebook) knows if anyone actually clicks it. But since clicking isn’t required (there’s no “I agree” step), and since, as we learned recently, Instagram’s T&Cs are not comprehensible to kids or, arguably, to their parents, it seems likely that few do.
Nothing in the sign-up process mentions age at all. Even if it is assumed that all children lie about their age and that their parents encourage them to do so (assumptions the evidence does not support), since you’re not asked, you don’t even need to lie. Indeed, you may not even know there’s an age limit.
Nor, when you sign up, is there any suggestion that you review your privacy settings. (Sonia was a bit upset to be immediately followed by an Iranian news agency which posted gory photos of a person attacked by a dog, and quickly learned the hard way to make her profile private.)
John was determined to check whether Instagram complies with COPPA, the US Children’s Online Privacy Protection Act. Installing Instagram via Google Play, he noticed the little icon suggesting there would be some “Parental Guidance”, but could not get it to open before the app had been downloaded. He notes:
At no point in the signing-up process was I asked my age or asked to confirm my age. When I had joined, and because I was determined, I was able to click on something that said I had to be 13 to use the service. Then this appeared:
“In the event that we learn that we have collected personal information from a sub-13-year old without parental consent we will delete it.”
But hang on… if I am not asked my age and I am not asked if my parents exist, never mind whether or not they consent to anything, what exactly do you think Instagram’s owner, Facebook, is trying to say?
Promises versus compliance
Why are we telling you this? Because in meeting after meeting on child internet safety, we have been told how the “big players” take their responsibilities to child safety seriously; it’s all the little foreign companies that need bringing into the fold.
As a result, we’ve sat in meeting after meeting discussing child internet safety guidelines, guidance, codes of conduct and so on, with no plans for monitoring or compliance and no apparent sense that either is needed. Here, for sure, we see the limits of self-regulation.
For instance, in the UK, the key guide “for Providers of Social Media and Interactive Services” recently produced by the UK Council for Child Internet Safety enjoins providers to:
“Be clear on minimum age limits, and discourage those who are too young.”
“Consider different default protections for accounts that are opened by under 18s.”
“Consider using available age verification and identity authentication solutions.”
“Involve parents/guardians if you collect personal data from under-18s.”
Does this happen on Instagram? Not that we could see. The NSPCC’s Net Aware assessment of social media services by children and parents does not give Instagram good marks for risk, signing up, reporting or privacy. Should we now check all the other social media sites? Perhaps, as there’s surely a need for mystery shopper exercises. But then one must deal with the pushback from companies, as when the EC attempted an independent evaluation of whether industry promises are delivered.
Instagram – highly successful among big global brands and, therefore, increasingly profitable – says it provides a mechanism for adults to report on under-age users they may know of so that the account can be deleted “if the reported child’s age” can “reasonably” be verified as under 13.
So, parents must report on their kids? Hardly a recipe for domestic harmony or for learning about digital citizenship. Do parents actually do this? To the best of our knowledge, neither Instagram nor Facebook reports on this. But the blogosphere is full of parental accounts of difficulties with the company in precisely this respect.
Who’s affected?
Ofcom’s latest audit of UK children’s media use shows that Instagram is used by nearly half (43%) of the 26% of online 8-11 year olds who have a social media profile. This works out at about 11% of online 8-11 year olds. Since about 90% of 8-11s go online, that’s around 1 in 10 of all 8-11s who use Instagram. Ofcom’s data isn’t sufficiently granular to go further, but from their overall age trends in social media use, we can guess that’s fewer among the 8 year olds, more among the 11s, and even more among the 12 year olds. So, it’s a minority of kids who use Instagram under-age. But it’s still a lot of kids. And the risk of harm is real.
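For readers who want to check that arithmetic, here is a minimal sketch of the calculation. The 43%, 26% and roughly 90% figures come from the Ofcom findings cited above; the variable names and the script itself are ours, purely for illustration.

```python
# Back-of-the-envelope check of the Ofcom-derived figures quoted above.
# The three input shares come from Ofcom's audit; the rest is simple multiplication.

share_with_profile = 0.26   # online 8-11 year olds with a social media profile
share_on_instagram = 0.43   # of those, the share who use Instagram
share_online = 0.90         # 8-11 year olds who go online at all (approximate)

# Instagram users as a share of all online 8-11 year olds
online_instagram_share = share_with_profile * share_on_instagram
print(f"~{online_instagram_share:.0%} of online 8-11s use Instagram")  # ~11%

# Instagram users as a share of all 8-11 year olds
overall_instagram_share = online_instagram_share * share_online
print(f"~{overall_instagram_share:.0%} of all 8-11s use Instagram")    # ~10%, about 1 in 10
```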
Will the General Data Protection Regulation, when it comes in next year, perhaps raising the age to 16 and bringing a lot more teens into the category of “under-age” users, make matters better or worse for children? Would it help if the government agreed with the Children’s Commissioner of England’s call for an independent Children’s Digital Ombudsman? We have some hopes of both approaches.
But one must also wonder if companies clever enough to network millions of people couldn’t also figure out how – clearly and kindly – to prevent under-age users, along with meeting their other commitments made in self-regulatory codes at British and European levels. Or is this a key task now for the just-announced UK Government’s Internet Safety Strategy?
This post gives the views of the authors and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.