
Yudhanjaya Wijeratne

July 23rd, 2020

Facebook, language and the difficulty of moderating hate speech


In March 2018, the Sri Lankan government blocked access to Facebook, citing the spread of hate speech on the platform and tying it to incidents of mob violence in Digana, Kandy. In this post, Yudhanjaya Wijeratne, a senior researcher at the Asia-Pacific think tank LIRNEasia, unpacks the difficulties of responding to hate speech, drawing on research his Data, Algorithms and Policy team recently completed.

It’s no secret that both hate speech and misinformation pose huge problems on Facebook. Much has been written about the tech giant’s attempts to tackle this content, including tales of thousands of moderators working on digital assembly lines, in jobs that even cause post-traumatic stress disorder (PTSD) for some.

It is clear that human-based moderation is never going to be enough: Facebook would have to hire half of its user population to review, on a randomly assigned basis, the posts of the other half – and that assumes a globally accepted framework of morals or standards even exists (it doesn’t). Any response to hate speech must involve some form of automation – sophisticated machine learning, fine-tuned for context and region – as well as human moderation. In other words, a Kasparov-style advanced-chess symphony, where the machine handles the banal majority and the human handles the exceptional outliers. And indeed, this is the direction Facebook seems to be taking. After many discussions and consultations around the world, global policies have been set up, and definitions thought through and teased out.
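To make that division of labour concrete, here is a minimal sketch of such a triage pipeline in Python. Everything in it – the blocklist ‘classifier’, the thresholds, the queue names – is a hypothetical illustration, not a description of Facebook’s actual system.

```python
# A minimal sketch of the machine/human division of labour described
# above. Everything here is hypothetical: the blocklist "classifier"
# and the thresholds are stand-ins, not Facebook's actual pipeline.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not real data

def classify(post):
    """Toy stand-in for a trained hate-speech model: returns a score in
    [0, 1], here just the fraction of tokens found on a blocklist."""
    tokens = post.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

def triage(post, remove_at=0.9, allow_at=0.1):
    """The machine decides the clear-cut cases; humans get the rest."""
    score = classify(post)
    if score >= remove_at:
        return "auto-remove"   # banal majority: clearly violating
    if score <= allow_at:
        return "auto-allow"    # banal majority: clearly benign
    return "human-review"      # exceptional outliers go to people

print(triage("a perfectly ordinary sentence"))  # -> auto-allow
```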

In reality, however, the efficacy of these policies rarely measures up to the ideals committed to in policy documents (or policy speak). Living and working in Sri Lanka, I see all too many examples of the hate speech that continues to proliferate on social media platforms. Despite the best PR and policy promises, the stuff is rather sticky: Sinhalese-Tamil antagonism as the leftover ashes of a thirty-year civil war; anti-Muslim hate stoked by political actors and mob-leading Buddhist monks; threats of death and rape against women. And Sri Lanka is certainly not alone in this.

Why is the governance of hate speech online so difficult? This was a question we engaged in on multiple levels in our research – ontologies, policy differences, legal structures, and the fundamental difficulties of trying to transfer laws built to tackle violence ‘offline’ to the digital world. But more importantly, a technical barrier exists that needs to be acknowledged and worked around: language.

What’s in a word?

I’m fond of pointing out a snippet that mathematician and philosopher René Descartes wrote in a letter to Marin Mersenne in 1629: “There are only two things to learn in any language: the meaning of the words and the grammar. As for the meaning of the words, your man does not promise anything extraordinary; because in his fourth proposition he says that the language is to be translated with a dictionary. Any linguist can do as much in all common languages without his aid. I am sure that if you gave M. Hardy a good dictionary of Chinese or any other language, and a book in the same language, he would guarantee to work out its meaning.”

Unfortunately, Descartes didn’t know as much about language as he thought he did. Languages differ greatly from each other in their sentence structure (syntax), word structure (morphology), sound structure (phonology), and vocabulary (lexicon). These differences allow us to classify languages into families: in fact, one may visualize languages as a tree, as the artist Minna Sundberg did. Languages on the same branch resemble each other. As they diverge, the differences compound. For example, English, which belongs on the West Germanic branch, has three tenses – past, present and future. Sinhala (the most-spoken language in Sri Lanka and the language in which most hate speech in the country manifests) belongs on the Indo-Aryan branch and is influenced by Pali and Sanskrit. Sinhala has only two tenses: the concept of past and not-past (atheetha and anatheetha).

These differences affect computational natural language processing in fundamental ways. For example, experiments on the EuroParl corpus (a set of documents comprising the proceedings of the European Parliament from 1996 to today) have shown that Danish, German, Greek, English, Spanish, Finnish, French, Italian, Dutch, Portuguese and Swedish translations of the same parliamentary summaries all returned different results in the extraction of topics. Thus, resources built for one language – and progress made in one language – do not mean the same progress is made for other languages, no matter what the policy documents say. [1]
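As a rough illustration of the kind of pipeline behind such experiments (not the actual experimental code), the sketch below fits a topic model to one language’s slice of a corpus; running it separately on each translation of the same proceedings will generally surface different topics.

```python
# Sketch of per-language topic extraction over a parallel corpus,
# illustrating the general technique (LDA over bags of words) rather
# than the actual EuroParl experimental setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_topics(documents, n_topics=5, n_words=8):
    """Fit LDA to one language's documents; return top words per topic."""
    vec = CountVectorizer(max_df=0.9, min_df=2)
    counts = vec.fit_transform(documents)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in topic.argsort()[-n_words:][::-1]]
            for topic in lda.components_]

# Run once per translation of the same proceedings; the English,
# French and Finnish topic lists will generally not line up.
```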

Languages and the Global South

Sinhala is a resource-poor language. So, too, are most languages in the Global South. Resource poverty is not a reference to GDP: in computational linguistics circles, it refers to the fundamental data and algorithms available that work on a given language. English is enormously resource-rich, meaning that downloading a corpus of text, removing noise (such as stop-words), performing named entity recognition, and modeling the topics therein is the stuff of beginner programming tutorials where English is concerned. In hundreds of languages in the Global South, however, these tasks range from ‘needs research funding and a year or two of work’ to ‘impossible’.
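To see how low the bar is for English, here is roughly the beginner-tutorial pipeline described above, built entirely from resources NLTK ships off the shelf (exact data-package names vary slightly across NLTK versions). Nothing comparable exists for most Global South languages.

```python
# The beginner-tutorial pipeline described above, as it exists for
# English: tokenize, strip stopwords, tag named entities, all from
# resources NLTK ships off the shelf.
import nltk

for pkg in ("punkt", "stopwords", "averaged_perceptron_tagger",
            "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)

from nltk.corpus import stopwords

text = "The government of Sri Lanka blocked access to Facebook in March 2018."
tokens = nltk.word_tokenize(text)

# Stopword removal: a curated English list is one import away.
stops = set(stopwords.words("english"))
content_words = [t for t in tokens if t.isalpha() and t.lower() not in stops]

# Named entity recognition: a pretrained English chunker, also built in.
entities = nltk.ne_chunk(nltk.pos_tag(tokens))

print(content_words)
print(entities)
```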

Well-meaning policymakers of today, who typically imagine the task of using machine learning to respond to challenges like hate speech online to be as straightforward as Descartes believed language-learning to be, are perhaps guilty of 17th-century thinking. Even large Silicon Valley entities may never fully understand the nuances required to do sophisticated work in Global South languages. Consider, for example, that models released from Facebook’s FAIR research lab appear to be trained using Wikipedia, which is largely written in formal grammar. Colloquial Sinhala, which is a diglossic language (i.e., one where two dialects or languages are used by a single language community), is a completely different beast. As such, when we tested these models out, they generated all sorts of interesting errors on colloquial Sinhala – the most absurd of which was classifying code-mixed Sinhala as Punjabi.
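One publicly released FAIR model of this kind is fastText’s language identifier, trained largely on Wikipedia. The sketch below shows how one might probe such a model with colloquial, code-mixed text; whether this is the exact model behind the errors we saw is an assumption, and the sample sentence is illustrative only.

```python
# Sketch: probing fastText's public language-identification model
# (lid.176.bin, downloadable from
# https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin)
# with code-mixed text. The sample sentence is illustrative only.
import fasttext

model = fasttext.load_model("lid.176.bin")  # assumes the file is local

# Colloquial, code-mixed Sinhala: Sinhala script interleaved with
# Latin-script English loanwords.
sample = "මචං meeting එක cancel කළා"
labels, probs = model.predict(sample, k=3)
for label, prob in zip(labels, probs):
    print(label.replace("__label__", ""), round(float(prob), 3))
```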

Going forward

Before tackling the extraordinary complexities of hate speech, we must arguably first get the foundations ready. There are possible ways forward. Sci-fi writer Douglas Adams’s books refer to the Babel fish, a simultaneous universal translator. This dream, more or less embodied in the 1954 Georgetown-IBM experiment, was the genesis of modern computational linguistics.

Today, Google is racing towards the goal of simultaneous translation with its Pixel earbuds. With enough language data and computing power, we may just be able to reach the state of ‘good enough’ (though this might be a way off). But aligning with digital giants again puts our societies even more in the hands of a small number of corporations with unreplicable amounts of computing capability, and even more data troves to draw upon – something straight out of a cyberpunk nightmare.

Fortunately, we can design a different future. In the case of Facebook, the platform itself is essentially a gigantic repository of text. We have published, for instance, two large corpora of Sinhala for open analysis by local researchers – one a 28-million-word trilingual monster spanning all three of the major languages spoken in Sri Lanka; the other a single-language corpus more readily usable by researchers – and documented the process in detail [2].
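As a flavour of what that documentation covers, the sketch below shows one standard way of deriving a candidate stopword list from a raw corpus by token frequency. The file path is a placeholder, and whitespace splitting is a crude stand-in for proper Sinhala tokenization.

```python
# Sketch: deriving a candidate stopword list from a raw corpus by
# token frequency, one of the basic resources discussed in [2].
# "corpus.txt" is a placeholder path; whitespace splitting is a crude
# stand-in for proper Sinhala tokenization.
from collections import Counter

def candidate_stopwords(path, top_n=100):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    # The highest-frequency tokens are the usual stopword candidates,
    # subject to review by a native speaker.
    return [token for token, _ in counts.most_common(top_n)]

print(candidate_stopwords("corpus.txt"))
```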

While it demands more work, this method decentralizes power: it not only brings local language expertise to the table, nuances and all, but also unlocks datasets for researchers to work with. The code required is neither brain surgery nor rocket science. With similar collaborations, we may not be able to sort out the hate speech issue overnight – but for hundreds of other languages, local linguists and computer scientists may be able to work in a mutually beneficial relationship with technology corporations. It isn’t a utopia, but it is certainly worth considering.

This article represents the views of the author, and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

[1] Wijeratne, Y., de Silva, N., & Shanmugarajah, Y. (2019). Natural Language Processing for Government: Problems and Potential. LIRNEasia.

[2] Wijeratne, Y., & de Silva, N. (2020). Sinhala Language Corpora and Stopwords from a Decade of Sri Lankan Facebook. LIRNEasia.

Featured image: Photo by Greg Bulla on Unsplash

 

About the author

Yudhanjaya Wijeratne

Yudhanjaya Wijeratne is a writer and researcher with the Data, Algorithms and Policy team at LIRNEasia, a nonprofit think tank working across the Global South. He co-founded and helps run Watchdog Sri Lanka, a factchecker. His work spans data science, linguistics, artificial intelligence, public policy, and futurism.

Posted In: Internet Governance
