There have been a number of recent initiatives that hold the internet industry to account for online bullying – but how effective have they been? We interview Tijana Milosevic about her new book Protecting Children Online? Cyberbullying Policies of Social Media Companies, a timely examination of how social media platforms are tackling the harassment of children online. Here Tijana recommends that children be placed at the centre of any evaluation of such initiatives and that dignity education could be part of the solution. Tijana is a postdoctoral researcher at the University of Oslo focusing on social media policies, internet governance, and digital media use among children and youth. [Header image credit: Nevit Dilmen CC-BY-SA-3]
Congratulations on the publication of your book. Can you tell us how you came to choose this topic, and why you think it is important?
The starting point for Protecting Children Online? Cyberbullying Policies of Social Media Companies was realising that there was insufficient information about the measures that social media companies were taking to address the problem of cyberbullying of children and teens. This was strange, given that researchers and stakeholders recognised the increasing importance of social media platforms in young people’s lives and the ongoing debates on whether social media companies should be regulated as online intermediaries or as publishers.
I wanted to find out what social media companies were doing to address cyberbullying; what could be known about the effectiveness of their efforts; how effective regulation had been; and how debates on intermediary liability evolved in the aftermath of high-profile cyberbullying cases – those that gathered significant public attention because they were said to have been, in some way, related to child suicides.
When I refer to “effectiveness” and “evidence of effectiveness” I mean available information on whether children and teens are using social media companies’ tools against bullying (e.g. blocking, reporting, information provided in Safety Centres etc.) and whether the tools are actually helping them resolve bullying incidents. For instance, if a child uses a reporting tool to flag an alleged bullying case to the company, does the company respond and does it respond in due time? If it responds, does the bullying content get taken down or is some other action taken? Or is the content allowed to remain on the platform? Do content take-down, and other mechanisms that companies have, actually help young people in resolving such conflicts? If the company is leveraging some form of what could be characterised as machine learning or artificial intelligence in addressing the problem, how is this being done? And what evidence does the company provide that these tools are effective in helping it identify bullying cases and respond to them in a timely manner?
The book is based on several years of research – can you tell us how you went about your research, and your main findings?
I conducted a qualitative analysis of 14 social media companies’ corporate documents and statements and carried out interviews with several of their representatives. I also interviewed members of non-governmental organisations or charities that work with social media companies, as well as non-affiliated e-safety experts; and finally I analysed public debates surrounding high-profile cyberbullying incidents. I also compared the different approaches taken in the US and European contexts. The main findings were:
- There was a limited amount of evidence about the effectiveness of companies’ anti-bullying enforcement mechanisms.
- There was even less evidence on whether these mechanisms were effective from the perspective of children, as few studies looked directly at children’s experiences of such mechanisms.
- More established social media companies put more effort into developing their anti-bullying enforcement mechanisms but also tended to ensure their efforts were more visible; this can sometimes obscure the fact that palpable evidence of effectiveness is limited. At one point, companies were moving towards tools for users to resolve conflicts amongst themselves rather than only reporting to companies’ moderators; this was said to allow for more effective resolutions but also appeared to shift responsibility away from the companies.
- Where laws aimed at addressing cyberbullying among youth had been proposed or passed, it was difficult to demonstrate that such laws were effective, while some had, or would have had, negative implications for the freedom of speech and privacy of adults.
- In Europe, the European Commission has played a significant role in bringing companies together in a coordinated effort to create various self-regulatory processes. In the US, such a systematic effort has largely been missing.
Why do you think bullying has become such a problem in the digital age? Does it merit a particular societal response or would it be more efficient to try and reduce risk through one big internet safety strategy?
I do not think that cyberbullying is primarily an e-safety problem but one of human dignity and as long as policy-makers and stakeholders see it as primarily an e-safety problem, we will have difficulties finding adequate solutions. I think there should be mechanisms in place that allow children (and adults) to control abusive content. But by focusing on content take-down or other forms of conflict resolution in any specific incident (which I do not deny are extremely important), we forget that cyberbullying tends to be a relational problem; one that is sustained through what I see as a pervasive culture of “one-upmanship”.
Yes, there are cases of trolling and harassment in which the victim is targeted by a number of individuals they may not know, and there, thinking about the problem in terms of safety could lead to adequate solutions. But many cases are about identity-related issues among children who know each other well – too well. When it comes to girls, for example, questions of who is prettier or better dressed are often at the heart of bullying conflicts and drama. Western countries, as well as the so-called transitional countries, are immersed in a neoliberal culture in which we teach children to draw self-worth from external insignia of success – looks, money, grades, etc.
Combating cyberbullying lies in finding effective ways to teach young people and adults alike that everyone has dignity regardless of these external insignia and that one’s inherent worth is not based on being better than the next person but on fulfilling one’s full, unique potential. Understanding this, in my view, would obviate humiliation. I think that dignity education can be taught under digital citizenship education but it will take a lot of thinking and planning as to how such education should be designed and implemented for it to be effective.
As I have elaborated previously here, citing some of my interviewees, it is important to have a clear understanding of what digital citizenship education is for it to be more than a re-branded term for e-safety education. Once it is clearly defined and operationalised, the effectiveness of such education should be evaluated (see here). In my view, the term runs the danger of becoming a catch-all phrase or a buzzword: a popular term in policy circles that is used to signal a positive change. We need to ensure that concrete steps are attached to it and that their effectiveness can be evaluated.
What are the lessons of your work for current regulatory debates? What do you think industry – and regulators – should do differently?
Shifting the focus away from regulation, and towards how we can evaluate companies' efforts from the perspective of children, would allow us to move beyond the usual mantras reinforced by insufficiently researched laws or hasty proposals for regulation that have negative effects on civil liberties – mantras such as "traditional regulation stifles innovation" or "laws are too slow to adapt to fast-paced technological change".
Therefore, rather than focusing on hasty proposals for new pieces of legislation, policy-makers should conduct regular independent evaluation of the effectiveness of various companies' anti-bullying enforcement mechanisms, so that we understand the gaps and, more importantly, where children see the gaps.
How could civil society (child welfare organisations, academic researchers, parent organisations etc.) be more effective in promoting children’s rights in digital environments?
Involving children in the evaluation process is the first step. It is also extremely important for companies that work with non-governmental organisations or e-safety experts to be more transparent and allow third parties to talk about their efforts and evaluate them more openly and critically. A common practice that I found in my research was that large companies would require these third parties to sign non-disclosure agreements, which could then limit what they could talk about with researchers. I think that transparency is important when it comes to allowing researchers who do not work "in-house" to have access to the relevant data. National and international governance bodies need to provide more funding for these third parties so that they do not have to be dependent on companies.
To conclude, I think that a lot of the debate around cyberbullying, including the discussion on regulating social media companies, has overly emphasised punitive measures. This blame-gaming discourse tends to be reductionist with respect to the complexity of cyberbullying as a phenomenon. I wonder if, when thinking about regulation, a more effective approach would be to focus on enabling rather than merely punitive measures, such as: requiring companies to assist in funding school-based educational measures; funding support for parents and helpline services; and funding psychological and counselling services for children and families, which are normally underfunded.
This post gives the views of the authors and does not represent the position of the LSE Parenting for a Digital Future blog, nor of the London School of Economics and Political Science.