
Bordow,E (pgt)

May 16th, 2022

Fairness, accountability, explainability. Creating guidelines to use AI in the newsroom


Estimated reading time: 5 minutes


How can you design guidelines for the responsible use of AI and automation in your news organisation?

In our second JournalismAI Community Workshop, industry experts told us how to design clear guidelines to mitigate the risks that come with AI and automation being used in journalism.

Uli Köppen, Head of the AI + Automation Lab of Bayerischer Rundfunk, and Claire Leibowicz, Head of AI and Media Integrity at Partnership on AI, shared their experiences.

The AI + Automation Lab of Bayerischer Rundfunk

Uli Köppen explained how her team decided to create guidelines to guide their use of AI and automation, and how these guidelines can add value to the irreplaceable work of journalists in the newsroom. 

To create these guidelines, they gathered an interdisciplinary group to write down their strategy, looking at their own AI use cases and discovering what their pitfalls were:

“In the beginning it was just a list of solutions for pitfalls we might meet in the future or have already met… [but] in the end we looked at the list and said ‘that might be a very helpful tool for our journalists and our newsrooms when they are confronted with automated workflows.’”

The AI + Automation Lab’s ethical guidelines were created after the team looked at the issues they were facing internally in their own use cases. Addressing these issues provided practical support for their journalists’ everyday tasks. Uli explained that the team recognised how these guidelines should not only be useful to their own organisation, but should be published externally as well, to facilitate discussion with other organisations using AI and to get feedback from journalists and data scientists.

By doing so, they turned their guidelines into a set of living standards for the use of AI – standards that are continually revised and rewritten as the team gathers input from others inside and outside the industry, so everyone can collectively learn more about using AI and automation responsibly.

To create your own ethical guidelines, Uli emphasised how beneficial it is to use your organisation’s own use cases: “Starting with your own use cases helps you to focus on what you really want to do.”

Uli also discussed the importance of referring to other published AI guidelines. With so many industries integrating AI into their workflows, looking at other guidelines for inspiration can be beneficial when creating your own organisation’s ethical guidelines, opening your mind to aspects you may not have considered otherwise.

Uli emphasised the importance of collaboration. When working on new guidelines for your organisation, organise working groups, write down what you find useful, discuss it with your colleagues, and rewrite and rework your guidelines when necessary – collaboration is key! 

Building a diverse team while creating your AI guidelines is also critical as it allows for more inclusive, effective outcomes, both internally and for your audiences.

Partnership on AI

When creating their ethical guidelines, the team at Partnership on AI (PAI) started by asking: How do we promote a healthy information ecosystem in the age of AI? 

Claire Leibowicz and her team convened a community of partners, bringing together people from all over the industry to develop their guidelines. They worked together to discover how they can use AI’s advantages in local newsrooms while addressing the unintended harms and ethical consequences of automation.

This collaboration resulted in a set of important recommendations for the implementation of AI in local news. Claire explained that newsrooms should use these recommendations to create their ethical guidelines “before the train leaves the station. This is the time to think about AI ethics”. Local newsrooms can significantly benefit from thinking about ethics before AI is adopted into their organisations. 

After surveying the PAI community, Claire and her team found that accountability, explainability, and fairness were considered the three most important principles for small newsrooms using AI transparently. Ensuring that AI can be held accountable, be explained, and remain fair is key to using it responsibly in your newsroom.

Newsrooms should articulate clear goals for adopting AI – ask yourself: does this offer a tangible benefit for your users and employees? Claire also recommends borrowing practices from other domains in the machine learning space that help organisations exercise accountability and transparency, and adapting them to your organisation’s own use cases.

Claire emphasised that technology must embody the standards and values of the news operation. There needs to be a dialogue about how AI is being deployed as well as how it interacts with journalistic standards and audiences throughout society. Newsroom staff should also actively work to supervise the use of AI – their direct involvement in the choices about the use of automation can be beneficial and help ensure transparency. 

When implementing AI into your newsroom, Claire highlighted again the importance of explainability. Both your staff and your audiences should know what AI tools are being used in the production of news, how they are used, and why they are helpful.

Lastly, distribution platforms should embed journalistic values into their systems, too. Claire reminded us that outside of your newsroom, there is an entire system distributing your content, and it’s essential to partner with them to uphold ethical journalistic standards. “Do not forget that there’s this whole set of actors of the field distributing your content and should be listening to you about journalistic values and how those might work.”

You can watch the recording of our latest JournalismAI Community Workshop on YouTube:

Our next community workshop is coming soon! Make sure to sign up for our newsletter and join our Telegram group to stay up-to-date with our upcoming events and to connect with fellow journalists and technologists around the globe.


JournalismAI is a global initiative by Polis – supported by the Google News Initiative – that empowers news organisations to use artificial intelligence responsibly.

About the author

Bordow,E (pgt)

