The JournalismAI Fellowship began in June 2022 with 46 journalists and technologists from news organisations around the world collaborating on using artificial intelligence to enhance their journalism. At the halfway mark of the six-month programme, our Fellows describe their journey so far, the progress they’ve made, and what they’ve learned along the way. In this blog post, you’ll hear from team Attack Detector.
Abraji and Data Crítica came to the JournalismAI Fellowship with complementary ideas for using AI-powered tools to analyse online hate speech and uncover the narratives that circulate around journalism. Both organisations are deeply troubled by the increasing violence against journalists. Abraji regularly documents this type of digital attack, while at Data Crítica we have already been exploring applications of AI in investigative journalism.
That is why we decided to team up and combine the data and machine learning expertise of the two organisations to develop a tool that detects online attacks against journalists, environmental activists, and land defenders in Mexico and Brazil. The aim is to build a tool that can also be applied to other Latin American countries.
The motivation for this project is the rampant violence against journalists in our countries. Between January and April 2022, Abraji identified 151 episodes of physical and verbal aggression against journalists in Brazil, 62.9% of which originated on the internet.
In Mexico, from 2000 to date, the journalist protection organisation Artículo 19 has documented 156 murders of journalists: 47 happened during the administration of former President Enrique Peña Nieto and 36 under current President Andrés Manuel López Obrador.
In the 2021 World Press Freedom Index, Reporters Without Borders places Brazil and Mexico on the list of countries where journalism is very risky (111th and 143rd respectively out of 180 countries). They share the “bad” classification with India (142nd) and Russia (150th), among others.
Digital violence is known to have serious consequences for the psychological health of journalists and land defenders. In Mexico, the defender Samir Flores Soberanes was murdered after being vilified by the Mexican government for his opposition to the Morelos Integral Project. In Brazil, the defender Bruno Pereira was recently murdered along with journalist Dom Phillips during a reporting trip in the Yavari Valley.
It is essential to analyse online attacks to understand how they work, who initiates them, who shares these messages, and what forms the attacks take. For example, it is important to understand whether they are misogynistic, racist, or a combination of various forms of hate speech.
We are developing this tool with artificial intelligence so that we can handle and filter the large amounts of data generated on social networks. Our goal is to take a pre-trained language model and teach it to identify hate speech in Spanish and Portuguese. But this approach comes with two major challenges.
The first challenge is methodological. Hate speech is a complex social problem to address: in order to identify it, it first needs to be clearly defined and categorised. But individual examples carry many nuances and depend on context, making a well-defined categorisation hard to attain.
Furthermore, we recognise that investigating a social problem as relevant and complex as attacks on social networks implies an ethical responsibility, both in how the topic is approached and in how technology is applied to it.
Then there is a technical challenge: existing technologies are designed to work well for English but not as well for other languages. Since our object of study is in Spanish and Portuguese, we have to create our own databases and adapt pre-trained models in these languages to be able to detect hate speech in Mexico and Brazil.
To create the databases, we researched the profiles of journalists and land defenders who are usually the victims of these attacks, as well as the profiles of those who initiate them. As part of the process, we contacted organisations that work on this type of violence to get an overview of the situation in both countries.
With regard to environmental activists and land defenders, it was more difficult to identify the profiles that would be suitable for our analysis. We found that these attacks tend to take place more in the real world than online.
After this research, we started collecting tweets with examples of these attacks in Spanish and Portuguese. For that, we use the Twitter API for Academic Research, which allows downloading up to 10 million tweets per month. The purpose of this data collection is to create a hate speech database in both languages that we can subsequently use to train language models capable of identifying these attacks.
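To give a sense of how such a collection step can work, here is a minimal sketch of assembling a search query per language for Twitter's full-archive search. The keywords below are hypothetical placeholders, not the actual terms used by the project:

```python
# Sketch of building a full-archive search query for one language.
# The keywords are hypothetical illustrations, not the project's real filters.

def build_query(keywords, lang, exclude_retweets=True):
    """Combine keywords (OR-ed together) into a language-filtered query."""
    terms = " OR ".join(f'"{k}"' if " " in k else k for k in keywords)
    query = f"({terms}) lang:{lang}"
    if exclude_retweets:
        query += " -is:retweet"  # keep only original tweets
    return query

pt_query = build_query(["imprensa lixo", "jornalista mentiroso"], "pt")
es_query = build_query(["prensa vendida", "periodista corrupto"], "es")
print(pt_query)  # ("imprensa lixo" OR "jornalista mentiroso") lang:pt -is:retweet
```

A query like this would then be passed to the API's search endpoint; the filters chosen here are exactly the kind that can introduce the data gaps discussed below.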
At our current stage of work, we have faced many doubts regarding both classification and categorisation. Some tweets contain hate speech, but they are directed at the aggressors rather than at journalists and land defenders: for example, an attack against a profile that routinely attacks environmental activists. This is a type of attack we had not contemplated, and it has led to discussions about whether we should add these examples to our databases. For now, our decision has been to include them, since at the end of the day they are also attacks, even if they are directed at those who usually offend.
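One way to keep such edge cases visible in the data is to record them explicitly in the annotation schema. The sketch below is purely illustrative; the field names and categories are our hypothetical stand-ins, not the project's final labelling scheme:

```python
# Hypothetical annotation record for the hate speech database.
# Field names and categories are illustrative, not the project's real schema.
from dataclasses import dataclass, field, asdict

@dataclass
class LabeledTweet:
    tweet_id: str
    text: str
    lang: str                         # "es" or "pt"
    is_attack: bool
    categories: list = field(default_factory=list)  # e.g. ["misogynist"]
    targets_aggressor: bool = False   # attack aimed at a habitual offender

# A "counter-attack": hateful, but aimed at a profile that attacks activists.
example = LabeledTweet(
    tweet_id="123",
    text="(tweet text)",
    lang="pt",
    is_attack=True,
    categories=["insult"],
    targets_aggressor=True,
)
# It stays in the database (it is still an attack), but the flag lets us
# filter these cases in or out later without relabelling.
print(asdict(example)["targets_aggressor"])  # True
```

Keeping the flag instead of discarding the example preserves the open debate: the decision can be revisited later without redoing the annotation work.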
There are also cases that contain irony, which can cause confusion because they are highly contextual. In addition, there are situations where the speech contains no swear words, but common words are used in a pejorative way.
At this point, we continue to think about new methodologies to help us collect representative data on the various types of hate speech, as well as ways to scale this collection. We want to test alternatives that help us offset potential gaps in our data created by the filters we apply to download tweets.
We also remain aware that the labelling of a database is hard work that comes with an ethical responsibility, so we continue to question the categories in which we classify the type of attacks, and we maintain an open debate on these decisions.
After a few months working on this project, we are fine-tuning the language models, but we know that we will need to run several tests and expand our databases to reach a good result.
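The fine-tuning itself depends on the pre-trained models and is not shown here. As a standard-library-only illustration of the underlying classification task, here is a toy Naive Bayes baseline; the training examples are invented and the real project's models are far more capable:

```python
# Toy text classifier (Naive Bayes with Laplace smoothing) built only from
# the standard library. Purely illustrative: the project fine-tunes
# pre-trained language models, and these training examples are hypothetical.
import math
from collections import Counter

class NaiveBayes:
    def fit(self, texts, labels):
        self.word_counts = {label: Counter() for label in set(labels)}
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        scores = {}
        for label, counts in self.word_counts.items():
            total = sum(counts.values())
            score = math.log(self.class_counts[label])
            for word in text.lower().split():
                # Laplace smoothing so unseen words don't zero out a class
                score += math.log((counts[word] + 1) / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayes().fit(
    ["periodista corrupto vendido", "excelente reportaje hoy"],
    ["attack", "not_attack"],
)
print(clf.predict("reportaje excelente"))  # not_attack
```

A baseline like this is useful mainly as a sanity check on the labelled data: if a simple model cannot separate the classes at all, the categories themselves may need revisiting before fine-tuning larger models.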
In the coming weeks we intend to test unsupervised topic modelling algorithms and to visualise graph networks built from tweet metadata in search of new findings.
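As a sketch of the graph-network idea, tweet metadata such as retweets can be turned into a directed graph whose in-degrees reveal the most-amplified accounts. The edge list below is hypothetical; in practice it would come from the retweet and reply fields of the collected tweets:

```python
# Sketch: build a directed interaction graph from (retweeter, author) pairs
# and find the most-amplified account. The edges are hypothetical examples.
from collections import defaultdict

def build_graph(edges):
    """Return an adjacency map and in-degree counts for unique edges."""
    graph = defaultdict(set)
    in_degree = defaultdict(int)
    for src, dst in edges:
        if dst not in graph[src]:  # ignore duplicate edges
            graph[src].add(dst)
            in_degree[dst] += 1
    return graph, in_degree

edges = [("a", "hub"), ("b", "hub"), ("c", "hub"), ("b", "d")]
graph, in_degree = build_graph(edges)

# The account retweeted by the most distinct profiles is a candidate
# amplification "hub" worth inspecting by hand.
hub = max(in_degree, key=in_degree.get)
print(hub)  # hub
```

On real data, inspecting the highest in-degree nodes and their neighbourhoods is one simple way to see who initiates attacks and who spreads them.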
The Attack Detector team is made up of:
- Reinaldo Chaves, Project Coordinator, Abraji (Brazil)
- Schirlei Alves, Data Journalist, Abraji (Brazil)
- Fernanda Aguirre Ruiz, Data Analyst and Researcher, Data Crítica (México)
- Gibrán Mena, Research and Direction, Data Crítica (México)
Do you have skills and expertise that could help team Attack Detector? Get in touch by sending an email to Fellowship Manager Lakshmi Sivadas at email@example.com.