Welcome to another edition of the JournalismAI Collab Diary, your window on the developments of our collaborative experiment to prototype AI-powered journalistic solutions.
We often hear about algorithmic bias and about the risks of AI systems replicating and amplifying different types of bias – we have a section about it in our report, too. But what if the same technologies could also help us tackle existing biases in our newsrooms?
This is the question that one of the teams of our JournalismAI Collab decided to explore. Today, we talk with Team 1 about what made them decide to research opportunities in this area, what they have learned over the past three months and what they hope to achieve.
If you missed any of the previous editions of the Diary, you can find more information about the Collab in the archive.
The members of Team 1 are Agnes Stenbom (Schibsted), Jane Barrett (Reuters), Florencia Coelho and Delfina Arambillet (La Nación), Michaëla Cancela-Kieffer (AFP), Yosuke Suzuki and Issei Mori (Nikkei), Fabienne Meijer (VRT News), Ruth Kühn (Deutsche Welle), Paul Gallagher (Reach plc), and Aurore Malval (Nice Matin).
Can you tell us about the project you are working on?
Our team is looking into how we – news organisations – might leverage AI to understand, identify and mitigate newsroom biases. To break down this beast of a theme, we are approaching our challenge through three key tracks:
- We are looking into where and how bias might manifest itself in newsrooms, focusing primarily on gender, racial and age bias.
- We are exploring how AI might help us identify bias by assessing our own reporting, currently by experimenting with AI solutions that help generate insights about our binary gender representations (a minimal sketch of this kind of analysis follows this list).
- We are researching how “AI insights” – such as those generated through our own experiment – can practically contribute to newsroom diversity and inclusion goals.
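To make the second track more concrete, here is a minimal, illustrative sketch of what generating a gender-representation signal from article text could look like. This is not the team's actual solution, which has not yet been published; the word lists and the `gender_signal` function below are assumptions for illustration only, and a production tool would rely on named-entity recognition, coreference resolution and quote attribution rather than simple pronoun counts.

```python
import re
from collections import Counter

# Illustrative word lists only -- a real system would use NER and
# coreference resolution instead of bare pronoun/honorific counts.
FEMININE = {"she", "her", "hers", "herself", "ms", "mrs"}
MASCULINE = {"he", "him", "his", "himself", "mr"}

def gender_signal(text: str) -> dict:
    """Return rough counts of feminine- and masculine-coded words in an article."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    feminine = sum(counts[w] for w in FEMININE)
    masculine = sum(counts[w] for w in MASCULINE)
    total = feminine + masculine
    return {
        "feminine": feminine,
        "masculine": masculine,
        "feminine_share": round(feminine / total, 2) if total else None,
    }

if __name__ == "__main__":
    sample = (
        "The minister said she would review the policy. "
        "Her spokesman confirmed he had seen the draft."
    )
    print(gender_signal(sample))
    # {'feminine': 2, 'masculine': 1, 'feminine_share': 0.67}
```

Aggregated over a news site's daily output, even a crude share like this gives editors a trend line to interrogate – which is exactly the kind of "AI insight" the third track asks how to put to practical use.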
Why did you decide to focus on this area to explore the potential of AI technologies?
Our newsrooms are committed to diversity and inclusion. We want to do better – both in terms of who is part of our organisations and in what, and how, we report on the world. We were all eager to use this Collab experience to explore together how AI might be a resource for our organisations and the news industry as a whole.
Which resources, people and experiences have been most useful so far?
The Collab coaches have been great resources for our team. Notably, Alyssa Zeisler of The Wall Street Journal was very generous in sharing her practical experiences (and challenges!) of working with various solutions aimed at uncovering and mitigating newsroom biases. Nicholas Diakopoulos of Northwestern University also gave us a lot of food for thought, not least by highlighting that biases are an inherent part of being human and are not a negative thing per se.
Through different conversations with industry experts, we realised that newsroom management is still largely asking for the business case for improving diversity. This made us even more eager to explore our hypothesis that AI can be a great resource for newsrooms, but it also reminded us that such a hypothesis needs to be paired with critical business insights to actually move the needle.
What are the key things that you learned over these months by working together?
The key learning of our team is probably just how much of an onion this topic of bias and diversity in newsrooms is. And not just in newsrooms, but in society, too. The layers just keep coming, and the deeper into the challenge we get, the more we realise how cultural and “human” this challenge really is. Add to that the fact that we’re a team of many different nationalities and cultures, from Argentina to Europe to Japan…
Working together helped us realise that there are no “one-size-fits-all” tools and solutions, powered by AI or not. To make sure that we do a better job serving diverse communities with news and editorial content, AI alone will never be the solution. What we could strive for, however, is to make AI a tool for positive cultural change in our industry and organisations.
What do you hope to achieve?
By December, our team plans to share a public digital resource outlining our learnings from this Collab. We will also share the results of our practical AI experiments assessing gender representation on our respective news sites. This is data we are all eager to get our hands on, and we plan to open-source the solution we've used so that other newsrooms can use it too.
We are approaching AI as a potential force for good in the media industry. While there is so much talk about the downsides of using AI in the media context, we are trying to highlight how the same technologies people are scared of – e.g. face recognition – might be used for good. We hope that our collaboration, and the many conversations surrounding it, might contribute to uncovering what that potential for good can truly be!
The Collab is approaching the finish line and the teams have started preparing to showcase the results of their work in December. You can find out everything about what they've been up to, and the innovative ideas they have developed, by joining the JournalismAI Festival, an event we will host online on 7–11 December 2020.
Sign up for the newsletter if you want to receive all the details about the Festival programme and how to participate. And if you have any questions, get in touch with Mattia on Twitter or by email at M.Peretti@LSE.ac.uk.
JournalismAI is a project of POLIS, supported by the Google News Initiative.