Oct 25 2012

Should children’s internet use be filtered? Multi-stakeholder prudishness impedes open deliberation

Around one in three European parents say that they use filtering tools on their child’s computer; one in three is worried about what, or whom, their child may encounter on the internet. But these are not necessarily the same people – parents are more likely to use filters if they are themselves regular or confident users, which suggests that some parents may be worried but lack the skills or knowledge to use parental tools. So, is there a problem? What should be done? Can there be a pan-European solution, given that levels of concern and use of filters vary hugely – from the UK and Ireland, where half of parents use them, to fewer than one in ten in Romania and Lithuania?

At the recently held Safer Internet Forum in Brussels, teenagers and their parents expressed strong views against filtering tools. It seems that no-one likes the language of control (as in ‘parental controls’). Kids wish to be trusted, guided, and have their rights respected. Parents wish to trust, though they also wish to be informed and supported. The result was legitimate and important advocacy for open, honest and unembarrassed communication between parents and children. So perhaps filtering tools are unnecessary?

Problematically in such discussions, no-one wants to talk about the content to be controlled. While in the early days of the internet, conference presenters showed pornography, race hate or violent content so that we knew what we were talking about, now it seems we’ve become rather prudish. So we don’t know whether people are (or are not) concerned about mild sexual material (such as Page 3), or explicit sexual content (that would not be shown on prime-time television or that is restricted to over-18s in the cinema), or extreme or violent pornography that society keeps away from children offline but that can be found online.

At the Safer Internet Forum, then, although many preferred parental responsibility to parental tools, there were dissenting voices. Is the task less to control the kids than to control those who put potentially harmful content online? Can this be done by self-regulation (as for social networking sites) or by self-managing online communities (as YouTube is attempting)? Does adherence to online services’ terms and conditions meet parental concerns, or is public/state intervention required? Or are parental tools still needed? And what is to be done about the parents who appear to pay no attention to their children’s online activities or experiences, especially if those are the already disadvantaged or vulnerable children?

Young people also bear some responsibility and, depending on maturity, many would prioritise their right to view sexual content – as part of their rights, their sexual development, and as a way to build digital literacy and resilience. Again the debate is impeded by our prudishness in asking how teenagers access pornography. In some Eastern European countries, screens are still full of the explicit pop-ups that seem to have disappeared in Western Europe (how did that occur, and was it censorship?). But though accidental exposure is declining, we do expect teenagers to go looking for pornography. Yet we find it hard to ask them about what they found, and we don’t really know the reaction of younger children (some would include teenagers too) when they witness, say, sexual violence against women – ethically, too, this is hard to find out.

Stakeholders in these debates suffer from a further prudishness, I discover – now that listening to the voices of youth is, rightly, in vogue, it is hard to counter or qualify their views. So those who think youth are too liberal, too confident in their own resilience, or too ignorant of dangers online have a hard time gaining credibility for their concerns. But as was pointed out, even these carefully selected youth were accompanied on their trip to Brussels – so they do need support from adults, and they are surely as influenced by adult society online as offline (aren’t we all?). Problematic also is the gap between what people know and what they do: as one young person said, ‘we want you to trust us, but we all post without thinking’. And a significant number report being upset on occasion by what’s online.

Thus I prefer to listen to the evidence. EU Kids Online surveyed 25,000 9-16 year olds across Europe, and found that a small percentage of children express real concerns about the internet. This doesn’t make life easier (how shall we meet the needs of the minority without restricting the freedoms of the majority?), but at least research provides a clearer grasp of the scale of the problem. In short, the problem is real, but it is not overwhelming. The evidence also shows, disappointingly to some, that using filtering tools is not associated with a reduction in children’s exposure to online risk. We should think about this. Maybe filtering is the wrong way to go. Maybe the children who need protection don’t get it. Or maybe the tools don’t work (indeed, the SIP Bench study reveals their continuing limitations – too much overblocking, for instance). And perhaps the kids can get around them (at the Forum, we heard some strategies for this from the youth, though most children respect their parents’ advice).

In the UK, the government’s recently closed consultation will soon report on whether parental filtering will be actively offered to parents (‘active choice’) or whether a more top-down solution will be judged necessary. In Europe, EC Vice President Neelie Kroes’ CEO Coalition is progressing its work on parental control tools as one of five priorities for a better internet for children. This is a multi-stakeholder issue, and no simple or single solution is likely to emerge. But having listened to arguments on all sides at the Forum, and having weighed the evidence, I suggest that we should try harder to develop parental filters as one among several important future directions (recall that it took us decades to produce a strategy for content regulation on other media that has widespread support). So here are four recommendations to companies providing online services used by children:

(1)   We need greater availability both of really simple filters that any parent can install, and of carefully tailored filters that parents can choose to fit the maturity of their child and the values of their family. User-centred design is key, and testing with real-life families, in the messy complexities of the home, is vital. Such tools should be offered, free, to all internet users – on first gaining a new device or internet service, and with periodic reminders thereafter.

(2)   Tools should not simply ban or restrict without explanation or transparency. The days of spying on, censoring and surveilling kids set a really bad starting point for today’s challenges, and should be firmly left behind. Children’s rights should be respected, and tools should be positively designed so as to promote parent-child discussion about guidelines, norms and values (just as dialogue boxes prompt users to run a range of safety checks).

(3)   Providers must recognise the legitimate concerns of those advocating freedom of expression, including child rights advocates. Thus they must not over-block content, and they must report their over- and under-blocking rates. What is filtered should be transparent, accountable and accurate. And providers must be quick to respond to mistakes or biases in their filtering processes.

(4)   Last, providers cannot offer the complete solution, and they should not promise ‘complete safety’ or ‘peace of mind’ to parents. Parents need to be aware that filters are fallible, that the internet is complex and changing, that user-generated content poses particular challenges, and that children have rights to freedom and privacy even from their parents. If something goes wrong, parents need a range of strategies other than just confiscating the smartphone or taking away the laptop. Rather, they must talk to their children, seek help if necessary, and know what they are talking about by becoming competent internet users themselves.


About Sonia Livingstone

Sonia Livingstone is a professor in the Department of Media and Communications at the London School of Economics and Political Science. She is author or editor of sixteen books and many academic articles and chapters. Her research examines children, young people and the internet; social and family contexts and uses of ICT; media and digital literacies; the mediated public sphere; audience reception for diverse television genres; internet use and policy; public understanding of communications regulation; and research methods in media and communications. Sonia Livingstone directs a 33-country network, EU Kids Online, for the EC's Safer Internet Programme. She serves on the Executive Board of the UK's Council for Child Internet Safety, and has, at various times, served on the Department of Education's Ministerial Taskforce for Home Access to Technology for Children, Ofcom's Media Literacy Research Forum, the Voice of the Listener and Viewer, and the Internet Watch Foundation. She has advised Ofcom, Department for Children, Schools and Families, Home Office, Economic and Social Research Council, BBC, The Byron Review on children's online risk, and Higher Education Funding Council for England. She was President of the International Communication Association (2007-8).
This entry was posted in Communications Review, Media Literacy. Bookmark the permalink.

8 Responses to Should children’s internet use be filtered? Multi-stakeholder prudishness impedes open deliberation

  1. a parent says:

    The fundamental flaw in this piece is that ALL parental control software on a PC can be very easily avoided, most simply by booting into “safe mode”. Teenagers find this out from their friends or from Google. Search “avoid parental filters” to find out how. No IT skills required.

    Not surprisingly the companies who market parental control software (including Microsoft who are responsible for the vulnerability in Windows) do not tell you this.

    • Sonia Livingstone says:

      Thanks for your comment and I take your point. However, some but not all teenagers know tricks like the one you mention to circumvent the controls (it’s important not to generalise from the experience of some children to all children). The producers of these filters (and many parents) might still be confident that the filters can work for younger children than the teenagers you mention. But my main point is that the filtering tools should indeed be much better designed so that they cannot be circumvented. Why this does not happen is a puzzle.

  2. Mark says:

    Yes, they should be monitored. For those who say that kids also need privacy, there is the case of the unfortunate Amanda Todd. I watch who my son is talking to on Facebook using an app called Qustodio that allows me to view the profile pictures of accounts that he engages with. Such monitoring is for their own good. Qustodio is a nice app. Just Google for it.

  3. Vicente says:

    Why should these tools necessarily be free?
    I do agree that most current parental control tools aren't good, especially because they provide a really bad user experience, especially for parents who aren't tech savvy. Therefore, I understand that parents don't spend money on them.
    However, as a parent, I spend a lot of money educating my little girl, and I would also spend money on these tools if they were able to help with my kids' online education.
    In addition, nothing is really free. There are free tools on the market, but of course their developers need to get revenue somehow to keep improving the tool. If you don't mind that these tools add ads or change your default search engine to get the needed revenue stream from advertising, that's OK. But paid versions of these tools that leave out these things and focus 100% on improving the tool should also exist.

    By the way, congratulations on your reports and insights on child safety. I think that they are a great source of knowledge.

  4. Pingback: Space invaders: creating child-centred spaces of public debate | Centre for Innovation & Research in Childhood and Youth

  5. As always, there are two sides to this story: blocking unwanted content and educating your children, and both of these need to be done.

    It is possible to secure most computers using widely available software and processes. Most corporate environments use a similar process, and these are readily available.

    There is also the need to ensure that we as parents do what we can to block/filter content, and then have a conversation about what is bad and what is good, in the same way we discuss other parts of our lives.

    But it is also about knowing what they could be searching for or whom they could be talking to, which is what the very best protection packages will do. Of course none is perfect, but you can get close with free controls and training.

    The safemode workaround is easily locked out by changing some system settings.

    Kids being kids, they will often find a way; you just need to make sure, as a parent, that you are one step ahead. Our work with two educators has started to provide the tools and the training, with more support to come. Really, it is possible to restrict information online and allow parents to sleep well at night with widely available tools.

    Richard Smith

  6. Pingback: Should children’s internet viewing be filtered? | Families, Children and Youth Blog

  7. Pingback: A Better Internet for UK Children? | LSE Media Policy Project
