Professor Sonia Livingstone proposes short, globally administered surveys of children, based on well-tested models, to gather evidence before policy makers set about changing policies aimed at protecting children online. She sets out seven key questions.
Since the birth of the Web, hopes and fears for the digital age have focused strongly on the education and protection of children.
Policy changes are underway that will alter the fundamental division of responsibilities among parents, children and internet companies, from the revision of the child protection aspects of the EU’s Audiovisual Media Services and E-Commerce Directives to the ongoing review of ICANN and the domain name system. Yet these changes could be made without attention to the most basic data: how many children access the internet, how, why and with what results. Undoubtedly the usual controversies over free speech and protection of the vulnerable will rage, but without knowing how many children are vulnerable, to what extent, and what proportionate, evidence-based mitigation strategies might look like, policy will remain stuck in the usual mire of ideological mudslinging.
Current gaps in evidence on children
Despite all the hype about digital natives, and myriad policies to promote children’s online opportunities and mitigate the risk of harm, there is as yet no systematic effort to measure even how many children are internet users around the world, let alone the quality of their online experiences. ITU data on internet usage is available for 15-24 year olds, showing that in many countries young internet users predominate over other age groups. But as the ITU also notes, fewer than half of the world’s countries even measure the proportion of under 15s online, including many in the “global North”.
It is time for this situation to change. To be sure, it is expensive to conduct surveys with children and a host of measurement and ethical challenges must be addressed. But there is a well-tested model.
Following extensive cognitive testing and piloting, the EU Kids Online network developed a lengthy survey questionnaire – delivered to some 25,000 children (aged 9–16) and their parents in 25 countries in 2010 – generating results widely used to inform European policy and practice to make ‘a better internet for children’. This was also administered in Australia and Brazil, and then adapted to the mobile environment by the Net Children Go Mobile project for a seven-country survey in 2014. The comparative results across time and country point to the value of administering such a survey both more widely and, given the rapid pace of social, policy and technological change, more frequently.
The EU Kids Online experience
Our seven main research questions begin with the most basic: “What is the frequency and location of children’s use of the internet?” EU Kids Online has developed an optimal set of response options and an efficient scale for measuring children’s answers to this question, as have some other surveys (e.g. Health Behaviour in School-aged Children). What matters to policy makers here is who has access (relevant to the digital divide) and whether children use the internet privately or in a place open to adult supervision (relevant to safety and support).
Second, drawing on the measurement design and testing developed by Ellen Helsper and Alexander van Deursen, EU Kids Online recommends a set of ten indicators to assess: “How digitally skilled are children?” Our research has shown that more skills mean more opportunities (though also more risk of harm).
Third, to capture how far children reach up the ladder of opportunities, we propose to ask: “What are the activities (or opportunities) children undertake online?” While standard response options are offered, this is also a question where adaptation to cultural contexts may be advisable to capture the activities important to children in particular places. The number of opportunities reveals both the costs of highly protective parental strategies and also differences between more and less advantaged users.
We balance questions of opportunity with those of risk of harm, importantly asking children an open question about ‘harm’ as they perceive it, before following up by listing particular risks they may have encountered. Given children’s suggestibility, this order is important, and the open question has also proved a useful indicator across time, country and user demographics. Thus we ask: “What is the frequency with which children encounter potentially harmful online experiences?” For this and the next question, it is important to provide conditions of confidentiality (e.g. via pen-and-paper with secure envelope) for self-completion by the child.
Clearly, there is also considerable interest in particular risks, hence our next question: “What are the risks that children encounter online?” For the response items we identify the range of risks that children have spontaneously reported bothering them in research, encompassing content, contact and conduct risks. For ethical reasons, mentioning some items might be restricted to older respondents aged 11+.
Filling the gaps with additional evidence
The EU Kids Online network has prioritised two further questions – one concerning the ways that children themselves respond to risk, as agents; the other building on the substantial research literature on parental actions. Therefore we propose to ask: “How do children cope with online risks?” Response items cover social support and active and passive coping strategies, drawing on pan-European questions asked in our earlier survey. For parental mediation, we ask: “How do parents mediate their children’s experiences on the internet?” Response items encompass five subscales: active mediation, active safety mediation, restrictive mediation, monitoring and use of technical tools, again tested across 25 countries and languages. For simplicity in guiding policy makers, answers can be summed to create a single indicator, although demographic and cultural differences in parental strategies can also be informative.
Of course it would be ideal to conduct diverse and complex, multi-method, context-sensitive, even longitudinal studies. After all, a short questionnaire cannot go into depth or follow up what children ‘really mean’ by certain answers. It relies on self-report, and it captures nothing of the context of internet use. But there are many practical advantages to a short survey questionnaire. It is easily administered, can be incorporated into ongoing household surveys, and can generate cross-nationally comparative results that track trends across time and context. At far lower cost, short survey results may stimulate policy makers to investigate further by offering ready insight into emerging trends, areas of specific need, or unexpected differences by demographics or country. And it can be conducted in a manner that is ethically responsible, reliable and robust, as elaborated in our research toolkit. As I argued to the OECD, if just these seven questions were asked in all countries where children are online in substantial numbers, policy and practice would be far better positioned to address their needs and rights.
On 23 June 2015 Sonia Livingstone proposed indicators for children’s online experiences to the OECD Working Party on Measurement and Analysis for the Digital Economy’s subgroup on children. This post gives the views of the author, and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.