AI systems are coming fast and forcing existential questions across industries, not least in journalism. Easy-to-use apps are now making machine learning accessible for all businesses, but strategic efforts to match the vast possibilities to your team’s mission, in a coherent way, are fraught with complexities.
Small media organisations can find it especially challenging to deal with the technical and ethical questions involved in keeping abreast of this technological transformation. The implications for working processes, staff skill-sets, labour rights, and journalistic integrity can be daunting. In this article Matthew Linares, Technical and Publishing Manager at openDemocracy, explains how his team is approaching these challenges.
openDemocracy is an independent global media organisation that seeks to educate citizens to challenge power and encourage democratic debate across the world.
Bringing strategy to life
Strategy helps ensure that everyone, including staff and readers, understands what motivates decisions. Achieving alignment between stakeholders throughout the long journey of implementing complex technologies is necessary to ensure things don’t get derailed.
However, even if you can agree on a strategy, translating it into meaningful actions relevant to your mission can fall by the wayside as you get swamped by existing projects.
A strategic action plan can help you agree on a top-level strategy that also specifies key actions in one sweep – you can come away with a board or management strategy decision that simultaneously details concrete steps to get things done.
It can be hard to design such a technically informed strategy with limited resources. Hence the utility of an open-source strategy: anyone can draw from it a meaningful action plan, grounded in best practice, and customise it for their own situation.
In keeping with our commitment to open-source and Creative Commons publishing, we at openDemocracy have begun work on just such a document. This sets out principles to guide us as we integrate AI into our processes, alongside actions to help us do that, both with respect to management and implementation.
Management action: Regularly survey staff to discover which tasks may be suitable for automation.
Implementation action: Implement automated toxic comment moderation.
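To make the implementation action above concrete, here is a minimal sketch of what automated comment moderation might look like. It is illustrative only: a simple keyword-based heuristic stands in for a real toxicity model, and the word list, threshold, and function names are assumptions rather than anything from openDemocracy's actual systems.

```python
# Minimal moderation sketch: a keyword-based toxicity heuristic.
# TOXIC_TERMS and the threshold are illustrative placeholders; a real
# deployment would use a trained classifier or a moderation API.

TOXIC_TERMS = {"idiot", "stupid", "moron"}

def toxicity_score(comment: str) -> float:
    """Return the fraction of words in the comment that match the toxic word list."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TOXIC_TERMS)
    return hits / len(words)

def moderate(comment: str, threshold: float = 0.2) -> str:
    """Hold likely-toxic comments for human review; publish the rest."""
    return "hold" if toxicity_score(comment) >= threshold else "publish"
```

In practice the scoring function would be swapped for a proper model, but the surrounding shape – score, threshold, route to human review – is the part a strategy document can usefully standardise.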
Open-source strategy can build global norms
Working from an open-source document, strategy becomes like shared software code, collaboratively built by communities to achieve the best outcomes.
Users contribute to open-source code, sharing knowledge and best practice in a way that has proven to deliver. We stand on the shoulders of giants, but also on each other's.
Initiatives such as Eleuther contend that open-source development of AI systems is critical to ensure knowledge becomes a commons rather than being accrued by ever more powerful entities.
This mode of production is what enabled the development of the very AI capacities that have brought us here – shouldn’t we use similar collaborative methods to build the strategies by which we employ those technologies?
For example, a participating team may expand the strategy for themselves with specifics relating to their use of NLP to translate article summaries from a partner. They can then update the strategy document with this info and contribute that to the code repository for the benefit of others. Shaping policy just as we do code makes sense.
Another bonus of building strategy in an open-source fashion is that, by publicly endorsing the principles codified in the strategy, we uphold them as public standards.
The more of us who adopt a clear set of norms, the stronger the message about how AI should properly be done, in the realm of media and beyond.
Our focus on building a strategy was inspired by JournalismAI’s work to highlight recommended preparations for a successful AI transition. We hope that, by building on that in a collaborative, networked fashion, we can all ensure that media and civil society prosper as we move ahead.