The outcome of the Government’s consultation on parental internet controls, published on the 14th of December, urges all ISPs “to actively encourage people to switch on parental controls if children are in the household and will be using the internet.” This is, more or less, taking forward the ‘active choice’ recommendation of the Bailey Review, namely that – initially as new users, then also as existing users – parents should be faced with the choice of whether or not to use parental controls.
This is welcome news to those concerned with improving children’s internet safety in the UK, and it is likely to set a precedent also for policy makers in other countries. Why are such measures needed?
– Because, as EU Kids Online has found, a sizeable minority (11%) of 9-16 year olds in the UK have seen pornography online (and 2% have seen violent sexual imagery). Among 11-16 year olds, 13% have seen or received hate messages, 8% pro-anorexia or pro-drug messages, and 2% have visited a suicide site in the past year.[1]
– Also because, while most parents believe it their responsibility to keep their children safe, many are unsure about the risks, confused about what protections to implement, or too embarrassed to deal with sexual or intimate matters in relation to their children.
– A recent survey by Ofcom, which has lots of interesting statistics on parental views and experiences, suggests that around half of parents don’t talk to their kids about online safety, even though this is often referred to as everyone’s preferred solution.
– Most worryingly, EU Kids Online found that 11% of UK parents do nothing at all to keep their child safe online (though, to be sure, it also found that around half already use filters [2], the highest proportion across Europe, and most keep an eye on their children’s internet use or check up on what they do online).
– Our research also found that parents of vulnerable children tend to be less confident about how to keep their child safe online: so relying on parents to ensure children’s safety risks promoting a strategy that helps the children of internet-savvy parents ahead of those whose parents are less savvy.
Although the consultation closed in September, the Government has only just responded, because it was overwhelmed by public and stakeholder input from some 3,500 individuals and organisations in all. People really care about this issue. For those keen for public policy in general and communications policy in particular to be conducted in a transparent, inclusive and deliberative manner, this is a healthy state of affairs.
Interestingly, most of the consultation responses did not support the government’s decision, contra some of the panicky media stories we’ve seen this week. This may be because the consultation was hard to follow – as an expert in this field, I was puzzled by a number of the consultation questions, wanting to know what type or level of pornography would be filtered, how filtering could apply differently for different household members, how easy it would be to change my mind, how effectively the filters work, and so on – before saying yes or no to the questions asked. As a respondent, I was ready to say ‘yes’ to tools that are precise, effective, flexible and independently audited. But is that what is on offer?
However, it seems likely that many responses came from campaigning groups rather than being representative of all parents, so that doesn’t necessarily tell us what the average parent wants. We do know that many parents are both worried and confused about online risks and safety provision. EU Kids Online found that one in three parents are very worried about what their child sees or who they are contacted by online (more than they worry about crime, alcohol or drugs, for instance, where most parents do expect government intervention).
What worries parents a lot about their child, by age and gender (%)

| | 9-12 Boys | 9-12 Girls | 13-16 Boys | 13-16 Girls | All |
|---|---|---|---|---|---|
| How they are doing at school | 46 | 44 | 42 | 35 | 41 |
| Being treated in a hurtful or nasty way by other children | 47 | 44 | 36 | 33 | 40 |
| Being injured on the roads | 48 | 40 | 38 | 32 | 39 |
| Being contacted by strangers on the internet | 35 | 37 | 35 | 43 | 38 |
| Being a victim of crime | 31 | 25 | 40 | 32 | 33 |
| Seeing inappropriate material on the internet | 35 | 29 | 27 | 32 | 31 |
| Getting into trouble with the police | 22 | 12 | 33 | 21 | 22 |
| Drinking too much alcohol/taking drugs | 12 | 8 | 26 | 28 | 19 |
| Their sexual activities | 11 | 8 | 16 | 28 | 16 |
| None of these | 17 | 26 | 22 | 24 | 22 |
Question: Thinking about your child, which of these things, if any, do you worry about a lot? (Multiple responses allowed)
Base: UK parents of children aged 9-16 who use the internet.
Rather than trying to guess what parents really want from the Government’s consultation, we should pay attention to a recent, representative survey conducted by YouGov for TalkTalk.[3] This found that:
– 37% of UK adults with children in the household think that active choice (where customers are asked when they sign up to broadband if they want their internet to be filtered or not) should be applied as standard to best protect children online.
– A further 30% said their internet service should only be filtered if they ask for it.
– Just 22% thought that default filtering of harmful content, such as pornography, is the best system, where the internet is filtered unless they ask for it not to be.
– 11% said none of these or that they weren’t sure.
We can add up these figures several ways. While only 37% want what the government has decided to do, broadly speaking, an additional 22% want a tougher system, making a majority in favour of intervention (though the 22% are clearly in the minority when it comes to the question of default filtering in the home).
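The arithmetic behind that claim can be set out explicitly (a simple sketch; the variable names are mine, the figures are those reported in the survey above):

```python
# Shares from the YouGov/TalkTalk survey of UK adults with children
# in the household (percentages as cited in the post).
active_choice = 37   # active choice should be applied as standard
opt_in_only = 30     # filtering only if the customer asks for it
default_filter = 22  # default (opt-out) filtering is the best system
unsure = 11          # none of these / not sure

# The four options are exhaustive, so the shares sum to 100.
assert active_choice + opt_in_only + default_filter + unsure == 100

# Those wanting active choice or something tougher form a majority.
favour_intervention = active_choice + default_filter
print(favour_intervention)  # 59
```

On this reading, 59% favour some form of intervention, even though no single option commands majority support on its own.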
Undermining support for the government’s solution is the fact that many parents have had poor experiences with filters, which may make them sceptical of filters’ capacity to solve internet safety problems[4]. They don’t work in many of the languages spoken in British homes today. They generally don’t help with user-generated content (so are no good for bullying or sexting, for instance). Many under-block and/or over-block, without reporting how often this occurs. And how do you tell which ones are better or worse?[5]
Also undermining such support is genuine distrust in Government among the UK public: if pornography is monitored or restricted today, what will be restricted next month? It’s time to address head on the consultation responses concerned with civil liberties, mission creep, etc.[6]
Clearly, industry has a job to do here in improving what it offers to parents and consumers. Additionally, government has a job to do in building trust. The best way to address both these challenges would be to establish a transparent and independent body: not necessarily to regulate the internet industry but, to borrow Lord Justice Leveson’s clever solution for press regulation, to regulate a self-regulating industry.
Surely there is now a pressing need to better understand the contexts in which such tools are used so as to identify the design requirements that could meet parental and children’s needs and concerns regarding children’s online safety. In order to achieve this, future tools should be user-friendly, flexible and easily customizable. Can they, even, cease to be parental ‘controls’ and become parental ‘mediation’ tools that guide, inform and enable as well as limiting children’s online experiences?
In the spirit of encouraging active and open communication regarding e-safety between parents (and teachers) and children, it would be great to see a new generation of parental tools that would allow for more customisation of the online environment so as to cater for diverse backgrounds, contexts of use, family interactions and parental styles. Such tools should also take into consideration children’s rights, especially those related to privacy and information access – and including privacy from their parents.
Filtering out pornographic or other inappropriate online content represents one useful part of what must be a multi-stakeholder approach – involving industry, government, parents, teachers, police and others. The government has taken a welcome step forward. But there is much more to do if this is to work.
[1] These figures were reported to us by children under conditions of privacy from their parents and the researcher; and they are confirmed by research from Ofcom and others. If anything, they are likely to underestimate the ‘real’ incidence as children can be embarrassed or worried about admitting what they have seen. Further, as internet use rises, so does the incidence of risk: we are now witnessing children using the internet more, at younger ages, and on more diverse platforms and devices.
[2] What exactly they mean by this is unclear, since we know very little about actual usage rates of filtering software, or assessments of its effectiveness, and we especially lack evidence based on in-home observation by independent research (rather than as self-reported by children or parents or, indeed, by companies).
[3] These figures are from the September 2012 YouGov survey of 2,010 UK adults online. All figures have been weighted and are representative of all UK adults (18+).
[4] Parents may be right, if this is what they think. EU Kids Online finds that, when we control statistically for the effects of age (and gender, online activities, access and country), any apparent benefit of parental controls in reducing risk seems to disappear. In short, using a filter or not does not seem to reduce the chance of a child encountering online risks. On the one hand, parents install filters for younger children, who encounter little risk anyhow (though that may change as they use the internet more). On the other hand, filters don’t deal with most risks parents worry about (meeting strangers, bullying, user-generated content). But even for pornography, where you’d expect using a filter to reduce exposure, the effect is minimal – perhaps such exposure these days is more deliberate than accidental, and if kids wish to find it, they’ll get around the filter?
[5] Actually, the EC’s SIP Bench project to test and compare filters includes a handy site that answers this question. But who knows about it?
[6] I’m puzzled as to why everyone accepts virus checking and blocking software. Everyone has it. It filters their spam (“censorship”?), it saves them a lot of trouble too. I don’t hear any protests over its inclusion with computer sales or internet provision. Is there a way to define content highly inappropriate to children so as to achieve similar public acceptance?
Interesting discussion, Sonia.
Isn’t your point in footnote 4 rather significant? If “using a filter or not does not seem to reduce the chance of a child encountering online risks”, then why is anyone pushing active choice or even mandating such filters?
It’s a fair question, Ian, but as I tried to explain in the piece, the point is that the evidence shows no reduction in risk because of filter use. It’s the classic problem of the ‘null hypothesis’. In other words, we don’t have evidence that filtering reduces risk, but that does not permit the conclusion that filtering does not work. Sorry to sound pedantic, but there could be other factors at work. What I wanted to emphasise is that at present, filters are not good enough, nor is it clear how they are being used in the home. So we really don’t know if good filters, well used, could keep kids safer, but this still seems a worthy goal. If the government’s plans go ahead, I hope there will be more impetus behind developing better filters and behind advising parents better. Then we’ll see.
Indeed. But to justify a serious invasion of human rights by the state, evidence is required that the invasion is necessary and proportionate – neither of which can be said to be true given the evidence you describe.
I cannot see that any invasion of human rights is at issue, let alone a serious one. The government decision is to encourage ISPs to *ask* parents (only) *if* they wish the filters switched on. The purpose is to protect children’s rights. Is there any evidence that asking this question of parents would infringe anyone’s rights at all?
You write above: “While only 37% want what the government has decided to do, broadly speaking, an additional 22% want a tougher system, making a majority in favour of intervention (though the 22% are clearly in the minority when it comes to the question of default filtering in the home).”
I don’t know that this is a helpful way to add up these various numbers, but there is clearly a sizeable minority (including the Daily Mail) still campaigning for mandated default/opt-out filtering.
Mandated default filters obviously have the potential to seriously impact freedom of expression – we know from the mobile networks that very few people are even aware they have an option to opt-out, and that all sorts of uncontroversial content is being blocked. I don’t think active choice is as problematic, although the implementation details are important.
I am not so bothered by spam filters because I have a free choice of ISP and e-mail service, including whether and how they do spam filtering.
I defer to your expertise on children’s access to extreme porn, but as you know any proposals that would require legislation and that could impact on freedom of expression and privacy (as detailed monitoring of browsing behaviour undoubtedly does) must meet tests of necessity and proportionality under European law. If evidence is not available that a given “solution” will contribute to the (vital) goal of protecting children, those tests are not met – it is not for the courts to look for evidence of the negative of your proposition (that well-designed filters might help).
Ian’s assertion that European Human Rights law might pose a threat to measures designed to improve child safety online looks a little thin.
First: could “Active Choice” be described as an action of the state? There is no legislation. It is all voluntary. I agree one could interpret what is going on as heavy pressure from the state but is that the same as state action? Probably not because companies could always say “no”. It is ridiculous to argue that the state cannot do everything in its power to persuade companies to take voluntary steps. That’s tantamount to saying politicians may not express political views or set out their intentions for the future. How companies choose to react to such non-binding, non-legal admonitions or exhortations is a matter for them.
Second: one of the justifications for the policy of Active Choice or similar would be that states were merely trying, as far as they can, to replicate in the online world policies which have long been established in the real world. I agree that would not necessarily be conclusive: if they chose to do it in a way that had close to zero effect, or a disproportionate impact on free speech, then it would raise questions. But I think the evidence of “zero effect” or of a negative effect would have to be pretty substantial, i.e. there would be a presumption in favour of what the state was attempting to do.
I suppose someone could mount an argument to say that those real world policies were all rubbish anyway so they don’t see why we should allow them to be replicated or extended into a new environment. That argument seems unlikely to succeed.
Third: I agree with Sonia’s point – what is the impact on free speech anyway? Nobody is being denied the right to anything. At worst you could say people are being asked to suffer a minor and temporary inconvenience (to prove they are over 18) then they are in exactly the same position they would have been beforehand i.e. the whole of the internet would be available to them. In the real world this sort of thing is accepted by everyone except the worst grumpy grouches because people understand and accept the underlying purpose of the policy, namely the protection of children.
Fourth: even if I accepted that the LSE-ORG report proved anything of substance (and I don’t) again it is easily remedied – a person can get the bar lifted by proving they are an adult and absolutely no legal sites are then blocked.
Fifth: if any site is inappropriately classified that is bad and wrong, a systems failure, but against the background and objectives of the policy the risk that a tiny number of sites might be incorrectly classified is as nought as compared to the potential advantages of the policy. Moreover let’s not forget any site which discovers or believes it has been incorrectly classified can appeal against the decision and the appeal should be heard swiftly. Any member of the public could also lodge an appeal, with or without the site’s knowledge or consent.
Sixth: even if the software could be shown to be of marginal effectiveness there is also a normative aspect to the policy. Society is entitled to say “We think sites such as these should not be easily accessible to children and we are going to do everything we can to reinforce that view.”
Seventh: would this policy necessitate the creation of any large new databases which would, if they were hacked, reveal sensitive or important information about you? No.