The JournalismAI Fellowship began in June 2022 with 46 journalists and technologists from news organisations globally collaborating on using artificial intelligence to enhance their journalism. At the halfway mark of the 6-month long programme, our Fellows describe their journey so far, the progress they’ve made, and what they learned along the way. In this blog post, you’ll hear from team ClaimCheck.
Seeking a solution for a daily problem faced by newsrooms all over the world is not easy. During the last three months of the JournalismAI Fellowship, the ClaimCheck team – a collaboration between ABC Australia and Newtral in Spain – has been experimenting with the design and implementation of different AI models to identify repeated potential false claims made by politicians in Spain and Australia.
We embarked on the ClaimCheck project unsure of whether we would be able to achieve our final goal, and now that we have reached the midpoint of the Fellowship, we have three key learnings to share:
Firstly, keep to your goals, but stay flexible. We began the project with a clear idea of what we wanted to improve in our daily fact-checking work as journalists, but how we could achieve that wasn't clear.
Having the time and resources to test solutions within the Fellowship – in an experimental environment – is a unique opportunity for our teams to understand and determine the path we can follow to make AI and fact-checking work better together. The Fellowship has allowed us to discard theoretical ideas and proposals we previously held. For instance, we have realised that detecting paraphrased claims is much clearer in theory than in practice. Moving the project's goalposts was essential to keep moving forward.
Secondly, the way forward to discovery is iteration. After defining the scope of ClaimCheck, we set out to design the lines of work we were going to follow. To begin the trials, we drafted a first annotation exercise to collect the data needed to train the algorithm and to define the criteria annotators should follow. It felt clear to us, so we shared it with colleagues and experts to check and begin testing. However, the results of the first annotation test left us with more doubts than we started with. We are now on the fourth round of the annotation trial and, even after reducing the semantic matching categories from three to two, we still have doubts.
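To illustrate the kind of binary decision our annotators now face, here is a minimal sketch of a two-category (match / no-match) claim comparison. It uses simple lexical similarity from Python's standard library as a stand-in; the function name and threshold are our own illustrative choices, not the actual ClaimCheck model, which would rely on semantic rather than surface similarity.

```python
from difflib import SequenceMatcher

def match_label(claim_a: str, claim_b: str, threshold: float = 0.6) -> str:
    """Assign one of two labels, 'match' or 'no-match', based on
    lexical similarity. A real system would use a semantic model;
    this only compares the surface text of the two claims."""
    score = SequenceMatcher(None, claim_a.lower(), claim_b.lower()).ratio()
    return "match" if score >= threshold else "no-match"

# Example: a paraphrased claim pair an annotator might see
print(match_label("Unemployment has doubled since 2019",
                  "Since 2019, unemployment has doubled"))
```

Even this toy version shows why the task is hard: a threshold that works for lightly reworded claims fails for paraphrases that share no vocabulary, which is exactly the gap between theory and practice we keep running into in annotation.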
Lastly, the path can sometimes matter more than the goal. In preparing the first prototype of ClaimCheck, we gained a much better understanding of the direction the project should take over the following months, and we were able to determine the areas of development we needed to concentrate on. We are now at a point where we are unsure whether any of the proposed solutions will fully solve the problem we have posed, but the paths to this discovery have already benefited both our organisations.
We began by saying that finding a product solution to a problem like efficient daily fact-checking is not easy, but having a space like the JournalismAI Fellowship to try these different paths, test our process, and canvass solutions with a team working in the same domain is invaluable.
The ClaimCheck team is made up of:
- Gina McKeon, Innovation Editor, ABC
- Gareth Seneque, Technical Lead – AI/ML, ABC
- Irene Larraz, Fact Checking and Data Coordinator, Newtral
- Rubén Miguez, Product Leader, Newtral
Do you have skills and expertise that could help team ClaimCheck? Get in touch by sending an email to Fellowship Manager Lakshmi Sivadas at email@example.com.
Header image: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0
JournalismAI is a global initiative of Polis and is supported by the Google News Initiative. Our mission is to empower news organisations to use artificial intelligence responsibly.