
Guest author

January 16th, 2019

7 great developments in internet safety that happened in 2018


Amidst many negative headlines throughout 2018 about the safety and wellbeing of children online, there were some positive stories too. Here Anne Collier outlines developments in areas including cyberbullying, screen time, social-emotional literacy, content moderation and policy on fake news. Anne Collier is founder and executive director of The Net Safety Collaborative, home of the US's social media helpline for schools. She has been writing about youth and digital media at NetFamilyNews.org since before there were blogs, and has been advising tech companies since 2009.

You know that, by definition, the news reports airline crashes, not safe landings, right? So even if 2018 was truly unique, and bad tech news was actually the rule rather than the exception, positive developments in digital safety and wellbeing really are news. Here, then, is some "news" – the most interesting positive developments in internet safety of the past year.

1. An important book on cyberbullying vs. dignity

In Protecting Children Online?, author and researcher Tijana Milosevic for the first time places the subject of cyberbullying where it belongs: in the framework of (and slowly growing public discussion about) dignity. Why there and not in 'internet safety'? Because "dignity is what is violated when bullying and cyberbullying take place – when a child or a teen is ridiculed because of who or what they are," Dr Milosevic writes. "Dignity is sometimes conceptualised as the absence of humiliation," and – though, like bullying, it can be private or one-on-one – cyberbullying, because it is media-based, takes the form of public humiliation almost by definition. Dignity is particularly effective as an antidote to social aggression because it removes the differentiations and imbalances that fuel it, such as social comparison, social positioning and power imbalances. It puts the focus on our humanity, where it belongs, rather than on technology.

“Dignity is an inalienable right, which, unlike respect, does not have to be deserved or earned,” according to Milosevic, citing the work of scholars and practitioners from the fields of political science, education, conflict resolution and clinical psychology. Because the internet and the content and behaviour on it represent more and more of life on Earth, we need this kind of cross-discipline work to optimize the Net for our children and their children.

2. Real ‘screen time’ clarity, finally

Screen time is not a thing. It's many things, researchers tell us, which contrasts pretty significantly with lots of scary headlines and many parents' harsh inner (parenting) critic. Here's a headline actually drawn from academic research: "We've got the screen time debate all wrong. Let's fix it." As Wired reported under that headline, citing researchers at Oxford University, the University of California, Irvine, and the National Institutes of Health, "time spent playing Fortnite ≠ time spent socialising on Snapchat ≠ time spent responding to your colleague's Slack messages." See also "Why the very idea of screen time is muddled and misguided" and "The trouble with screen time rules" from researchers on the Parenting for a Digital Future blog. They say we can learn a lot more from watching (and listening to and playing with) our children than from watching the clock. I agree, but let's not hover too much, because those researchers also say that our kids can't develop resilience without some exposure to risk and opportunities to figure things out for themselves.

3. Safety innovation in social norms

Social norms – a powerful tool for social-emotional safety and civility that humans have shaped for thousands of years – are just beginning to be associated with safety in communities, from schools (see this from Prof. Sameer Hinduja) to online communities. And now this tool is being deployed by some social platforms for their users' safety. I wrote about a couple of examples in the massively popular live video sector of social media here; briefly, I mean giving users tools to set the tone, create a sense of belonging, establish norms, then resolve issues in their own communities online based on that work. It's about platforms giving users more control, not ceding responsibility. We can contribute to that trend's momentum by reporting online abuse ourselves and encouraging our children to report content that disturbs or hurts them – showing them they're part of the solution. We know they are not passive consumers online; they have agency and intelligence, and one way they can exercise their rights of participation is in protecting their own and their peers' safety in the apps they use. Equipping them for this is part of social-emotional learning (SEL) – another "tool" that has made real headway in adoption by schools in many states this past year. SEL teaches skills that support children's empathy development, good social decision-making and recognition of their own and their peers' dignity and perspectives (see this report for examples in six states and more on social norms for safety here).

4. Multi-perspective discussion – even in policymakers' hearings

I wrote about one historic example – the first-ever formal House of Commons committee hearing held outside the UK – here. It aimed to take a deep dive into the problem of "fake news" (a threat to the safety and wellbeing of people as well as societies). There was grandstanding, sure, but also truly substantive testimony from a rich range of views and expertise: those of scholars, news executives and reporters, as well as platform executives (note, toward the end of my post, what CBS chief White House correspondent Major Garrett said about our children's generation). I'm convinced we will not move the needle in making this new media environment truly work for us until we get all stakeholders at the table talking rationally and respectfully. Old-school shaming, fear-mongering and adversarial approaches will not serve us.

5. An important new book on content moderation

The ability to get harmful online content deleted has long been the main focus of 'online safety'. This was the year it became clear that content moderation is both less and more than our source of online safety – that we need it but certainly shouldn't rely on it completely. One person's 'free speech' is another's harm; it's highly contextual. "It is essential, constitutional, definitional," writes Tarleton Gillespie in his important new book Custodians of the Internet. "Moderation is in many ways the commodity that platforms offer." It defines a platform, our experience of it and even the nature of our media environment. And it defines even more: "We have handed over the power to set and enforce the boundaries of appropriate public speech to private companies," writes Dr Gillespie, a principal researcher at Microsoft Research New England, in the Georgetown Law Technology Review. And we're talking about "appropriate public speech" in every society on the planet.

It's not just platforms or internet companies we're talking about here. They're social institutions, a point made by scholar Claire Wardle in the parliamentary hearing I mentioned above and by journalist Anna Wiener in The New Yorker. That fact calls for new, not more – new forms of risk mitigation and regulation, TBD in my next installment.

6. Platforms discussing content moderation themselves – publicly

Another first was the rich, cross-sector discussion of content moderation on both US coasts this year. At two conferences called "CoMo at Scale" – one at Santa Clara University in California, the other in Washington (the latter recorded in full here) – social media platform executives gathered with scholars, user advocates and the news media and discussed their content moderation tools and operations publicly for the first time. "One of the great things about attending these events is that it demonstrated how each internet platform is experimenting in very different ways on how to tackle these problems," Techdirt reported. "And out of all that experimentation, even if mistakes are being made, we're finally starting to get some ideas on things that work for this community or that community."

7. Platforms’ improved transparency

There's a long way to go, but the platforms are investing in it. This year they put out increasingly granular numbers on what content is coming down, partly due to laws like Germany's just-enacted anti-online-hate law, NetzDG (though that too is not all good news, according to The Atlantic). What's different now is that Facebook includes numbers on proactive deletions vs. reactive ones, and Twitter includes deletions in response to users' requests, not just governments' (here are Facebook's and Twitter's transparency reports). Also for the first time this year, Facebook included data on bullying and harassment violations, saying that in the third quarter (the first time it provided numbers for this category) it took down 2.1 million pieces of such content, 85.1% of it reported by users – demonstrating the importance of users making use of abuse-reporting tools. This greater transparency is so important, but it can't be the ultimate goal, right? It's a diagnostic tool that gets us to a better treatment plan – where the treatment demands a range of skills and actions, both human and technological, behind the platforms and in society.

So that's it – not for this series, just for this year. This list is by no means comprehensive, but it's important, because these developments come with some really interesting ideas for developing solutions to the problems that got so much scrutiny and news coverage this year – ideas for risk mitigation and regulation and more. That's what's coming up next, first thing in 2019. Meanwhile…

Here’s wishing you a happy 2019!

Notes


This post originally appeared on Medium and has been reposted here with permission.

This post gives the views of the author and does not represent the position of the LSE Parenting for a Digital Future blog, nor of the London School of Economics and Political Science.
