
Lakshmi Sivadas

May 25th, 2021

Building better human-AI interfaces for journalism


Estimated reading time: 10 minutes


Christina Elmer is the Deputy Head of Editorial R&D at Der Spiegel in Germany. At the time of writing, Christina is on sabbatical exploring Explainable AI as Journalist-in-Residence at Cyber Valley. The idea of creating new user-centric journalistic products that combine journalism and technology inspires her.  

In this new episode of our interview series with women working on the intersection of AI and journalism, Christina speaks about the potential for AI to transform journalism and the importance of creating better user interfaces for human-AI interactions.


JournalismAI: Christina, could you describe your journey into journalism and how it led you to work with AI?

Christina: I started my career as a science journalist and I initially thought I’d be reporting on scientific topics like genetics, which was one of the fields I studied. Around the same time, data journalism started to take root in Germany. In 2007, I helped build a data journalism team at the German news agency, DPA. It was then that I started thinking that this could be a perfect combination of my interests in journalism and science.  

I value the scientific method and it’s nice to have the opportunity to apply it to my journalism and investigations. This evidence-based approach that data journalism makes possible has always been very interesting to me. 

I later started working at Der Spiegel as a science and data journalist and built up the first data journalism team there, too. Now I’m part of the editorial R&D team that fosters technological and editorial developments and projects.

Editorial R&D is a fascinating field, with the opportunity to shape the future of our journalistic work. We think hard about the needs of our users and how we can build journalistic products and design new journalistic formats that can help us meet those needs. It’s in this context that we started exploring how new technologies like AI can be applied to our work.

What potential does AI hold for journalism in your experience?

I’ve encountered many use cases in production and distribution workflows that have really struck me. The moment this scales up – when we are really able to implement AI features in our production workflows – it will be a truly transformational force for our journalism.

For example, I see great potential in the promise of structured journalism. Right now, we produce the same content for a broad range of readers. To be successful in the future, we need to be able to approach readers on a more personal level. You could use AI to decompose journalistic pieces into separate elements and automatically recompose them based on where they should be published and for whom. Along with that comes a huge potential for personalisation. With the help of AI and metadata, we can get to know our readers’ preferences and what they know or don’t know about a topic, and deliver our journalism in a more personalised way that meets their specific needs and the affordances of different publication contexts. But we need a more structured understanding of our products and workflows to achieve that.
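To make the idea of structured journalism a little more concrete, here is a minimal, hypothetical sketch in Python: an article is stored as tagged components rather than one text blob, and a recompose step selects and orders components for a given reader or publication context. All names here (ArticleComponent, ReaderProfile, recompose) are invented for illustration and do not describe Der Spiegel’s actual systems.

```python
# Hypothetical sketch of "structured journalism": store an article as tagged
# components, then recompose them per reader/context. Names are illustrative only.
from dataclasses import dataclass


@dataclass
class ArticleComponent:
    kind: str            # e.g. "headline", "background", "new_development"
    text: str


@dataclass
class ReaderProfile:
    knows_background: bool       # has this reader followed the story before?
    preferred_length: int = 3    # max number of components to show


def recompose(components: list[ArticleComponent], reader: ReaderProfile) -> str:
    """Select and order components for one reader or publication context."""
    selected = []
    for c in components:
        # Skip background explainers for readers who already know the story.
        if c.kind == "background" and reader.knows_background:
            continue
        selected.append(c)
        if len(selected) >= reader.preferred_length:
            break
    return "\n\n".join(c.text for c in selected)


# The same stored components yield different articles for different readers.
components = [
    ArticleComponent("headline", "Coalition agrees on new climate package"),
    ArticleComponent("background", "The dispute goes back to an earlier ruling ..."),
    ArticleComponent("new_development", "Today the parties agreed on ..."),
]
print(recompose(components, ReaderProfile(knows_background=True)))
```

In practice, an AI system could take over the decomposition step (classifying passages into component types) and learn which recompositions work for which audiences; the sketch only shows the underlying data-structure idea.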

See also: Check out Christina’s work as part of a team in the 2020 JournalismAI Collab that explored how to connect users to quality journalism with AI-powered summaries

How do you decide which problems might need an AI-based solution?

We decide that based on the use case, because every use case is different. It depends not only on the topic, for example, but also on the culture, workflows, and existing challenges within the editorial teams involved in a particular project. We ask questions like: How easily can an AI system be integrated into this process? Who needs to be involved in its implementation? Do tools already exist that we can use? We also look at the interplay of the technological systems involved.

There are many possible ways to use AI but, first, you need to foster data and algorithmic literacy in the newsroom if you want to be able to understand what solution you might need and how to implement it.


How do you successfully translate AI workflows to the rest of your news organisation and address fears about job replacement?

At Der Spiegel, we try to explain it everywhere possible and also make the process understandable. If we introduce certain AI systems into our workflows, we try to make it as easy as possible for our people to work with them on a day-to-day basis, even under the high pressure of the newsroom. For example, we introduced a support system for linking our articles to each other, which is also based on AI technology. We tested this with a small group of users and then closely supported it with communications when we rolled it out in the newsroom. It was important that the system worked transparently and only suggested links, so users could always decide what made sense from their point of view.
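The interview does not describe how Der Spiegel’s link-suggestion system works internally, so the following is only a generic sketch of the idea: rank archive articles by text similarity to a draft and surface the top hits as suggestions, leaving the decision to the journalist. The article IDs, texts, and the TF-IDF approach are assumptions for illustration.

```python
# Generic sketch of a link-suggestion assistant: rank archive articles by
# TF-IDF cosine similarity to a draft and return them as *suggestions* only.
# Not a description of Der Spiegel's actual AI-based system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = {
    "art-101": "Parliament debates new data protection rules ...",
    "art-102": "How newsrooms use machine learning for recommendations ...",
    "art-103": "Champions League final preview ...",
}


def suggest_links(draft_text: str, top_k: int = 2) -> list[tuple[str, float]]:
    ids = list(archive)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([draft_text] + [archive[i] for i in ids])
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    ranked = sorted(zip(ids, scores), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]  # the journalist still decides which links to insert


print(suggest_links("Draft on machine learning in the newsroom"))
```

The design point mirrors what Christina describes: the system stays transparent and only proposes links; it never inserts them automatically.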

With AI and journalism, it’s not about choosing between one or the other. You can automate some processes and even the creation of some content, but journalism is so much more than that. It is a profession based on empathy and creativity and on building a connection with your readers. It’s not a question of whether AI will replace us, but more about the connection and coordination of a human-AI hybrid system. 

See also: Read what newsroom leaders have to say about managing AI workflows in a news organisation

At Cyber Valley, the focus of your research is on “Explainable AI”. Why was this topic important to you?

*Editor’s note: Explainable AI refers to methods that make it possible to understand how AI-based systems arrive at their results, as opposed to “black box” AI.

My experiences with AI taught me that it’s important to be able to explain and understand what you are doing when you are working with complex learning systems. 

During these weeks at Cyber Valley, I’d love to explore new ways to design interfaces between human users and the AI systems they’re working with. I don’t know enough about the current state of scientific work on AI to develop something like this on my own, so I took this as an opportunity to learn and to put questions to the many high-profile scientists working here, including people working on fairness in machine learning, AI ethics, and algorithmic accountability, who tackle the challenges we have to address when technology meets society.

Why is explainable AI important for journalism and why now?

This is a crucial time when it comes to AI regulation, considering the speed of scientific development. In a few years, we will see AI systems capable of things we can’t even imagine right now, so we should really think about ethics and explainability before it’s too late.

For journalists, a deeper understanding of AI is important for two reasons. On the one hand, they need a certain degree of transparency to investigate socially relevant developments and uses of learning systems. On the other hand, we will integrate AI more and more into our own processes to leverage its potential in research, production, and distribution. For this to work well, we need to be able to understand how the system works, how a result came about, and what I, as a user, can do to influence it. Here, it would help a lot if the most important questions were answered in a generally understandable way at the moment of use.
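One common, generic way to answer “which inputs drove this system’s result?” is permutation importance: shuffle one input feature at a time and measure how much the model’s performance drops. The sketch below uses scikit-learn on synthetic data with made-up feature labels; it illustrates the kind of explanation an interface could surface at the moment of use, not any specific newsroom system.

```python
# Illustrative explainability example: permutation importance on synthetic data.
# Feature labels are invented; this is not a description of a real newsroom model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic data: only the first two features actually influence the label.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

feature_names = ["topic_score", "recency", "author_id", "random_noise"]  # made-up labels
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: importance ~ {score:.3f}")
```

An interface built on this kind of output could tell a user, in plain language, which inputs mattered most for a given result and which had essentially no effect.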

What would your advice be for smaller news organisations that might not have the capacity to develop AI tools but want to develop an AI strategy?

It’s helpful to reach out to institutions and groups working in this field and develop partnerships with them to exchange knowledge and conduct experiments together. We see broadcasters in Germany doing this, working with universities of applied sciences, for example. Reach out to other newsrooms and build up networks. What I saw in data journalism is that it is important for data teams to collaborate: take part in international initiatives and work with organisations in different markets that are not your direct competitors.

See also: Newsroom leaders spoke with us about their best tips to create an AI strategy for newsrooms

What do you think is the future of AI at Der Spiegel? 

It’s hard to imagine where the development of AI systems will be in two years. For example, if NLP (Natural Language Processing) reaches new dimensions – in languages other than English, like German, for example – then it will open up a number of new possibilities for a newsroom.

The most important thing will be to stay updated when it comes to AI developments and the speed at which they appear. It’s hard to imagine how many new AI tools we will have in five years in the newsroom, but I would think that we should try to be open in this direction – for example by having a dedicated AI development team in-house. It would be great to be able to develop these tools ourselves and also keep them updated and useful for the newsroom in the future.


The interview was conducted by Lakshmi Sivadas, JournalismAI Community Coordinator. It is part of a JournalismAI interview series with women working at the intersection of journalism and artificial intelligence.

Christina was part of the 2020 JournalismAI Collab, a global collaboration to experiment with AI. Learn about the work of her team in their Festival presentation.

If you want to stay informed about our activities, you can sign up for the monthly newsletter.

JournalismAI is a project of POLIS, supported by the Google News Initiative.
