
Charlie Beckett

April 23rd, 2020

How a global news innovator uses AI


Estimated reading time: 10 minutes


Viktoriia Samatova is the Managing Director of the Applied Innovations Team at Reuters Technology. She is based in Toronto, Canada. For our interview series with women working at the intersection of AI and journalism, we talked with Viktoriia about her role at Reuters, value neutrality when working with AI, and her thoughts on how the journalism industry is changing and what that means for aspiring journalists.

JournalismAI:  Could you tell us about the kind of daily decisions you face in your job?

Viktoriia:  We focus on solutions that employ artificial intelligence, machine learning and natural language processing techniques as part of our proof of concept (POC) work. Proof of concept is when you’re not sure if something can become a product yet. You believe that there might be the opportunity to meet a customer need, but you need to test it out with a small number of users to see how they react to it. If you see positive feedback, the POC may become part of an existing solution or an entirely new solution.

Day to day, I am talking to business stakeholders and clients, understanding their needs and then translating them into what can be done in terms of technology.

Over the course of the last year, what kind of products were implemented and what was their impact on how Reuters operates?

I joined Reuters just six months ago, so the projects that I’ve been working on are still in progress. One example we’ve been working on is content recommendation, similar to what you see on platforms such as Netflix and YouTube. After you watch a video, the next one is recommended to you because it’s supposed to be similar, either in terms of topic, perhaps related to some of the categories assigned to it, or based on your specific interests. This is the same concept that we’re using for our B2B platform, Reuters Connect. If it goes well, we will be able to recommend content to our users that might be relevant for them based on their preferences and past behaviour.
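
To make the idea concrete, here is a minimal, hypothetical sketch of content-based recommendation, not a description of the Reuters Connect system itself: each article gets a topic vector, a user profile is built from past behaviour, and unseen items are ranked by similarity. The article names and vectors are invented for illustration.

```python
# Minimal sketch of content-based recommendation (illustrative only).
# Each article is represented by a toy "topic vector"; in practice these
# would come from NLP models trained on the article text and metadata.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical catalogue: article id -> topic weights (politics, sport, business)
catalogue = {
    "election-explainer": (0.9, 0.0, 0.1),
    "transfer-window":    (0.0, 1.0, 0.0),
    "markets-roundup":    (0.1, 0.0, 0.9),
    "cup-final-report":   (0.0, 0.9, 0.1),
}

# A user profile built from past behaviour, e.g. an average of consumed items.
user_profile = (0.05, 0.95, 0.05)  # this user mostly consumes sport

# Rank items by similarity to the profile and recommend the top ones.
ranked = sorted(catalogue.items(),
                key=lambda kv: cosine(user_profile, kv[1]),
                reverse=True)
print([name for name, _ in ranked[:2]])  # -> sport-related items first
```

Real systems combine many more signals (recency, collaborative filtering, editorial rules), but ranking by similarity to past behaviour captures the core idea.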

When it comes to deciding whether or not to implement a certain technology, what are the considerations that go into the decision? Are there specific criteria that have to be met for a solution to be taken on board?

We think about two ways of measuring success. On the one hand we look at different metrics: Does the solution improve performance of our services? Does it improve engagement? On the other hand, we look at the technical aspects of the solution: Does it work and how so?

For the type of solution I mentioned earlier, recommending content to users, one common metric is click-through rate: When I recommend something, does the user actually click on it? These data points can then be aggregated and put into perspective: How many times did the users click on it?

Then there are other metrics for specific applications, as well as others around general engagement. For example, where and how do users search on the platform? Those are some of the ways to measure success based on user behaviour and data analytics.
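
For readers unfamiliar with the metric, click-through rate is simply clicks divided by impressions over whatever slice of traffic you care about. The event log below is invented for illustration.

```python
# Minimal sketch of computing click-through rate (CTR) from event logs.
# Each record says whether a recommended item, once shown, was clicked.
events = [
    {"item": "election-explainer", "shown": True, "clicked": True},
    {"item": "election-explainer", "shown": True, "clicked": False},
    {"item": "cup-final-report",   "shown": True, "clicked": True},
    {"item": "cup-final-report",   "shown": True, "clicked": True},
]

impressions = sum(1 for e in events if e["shown"])
clicks = sum(1 for e in events if e["shown"] and e["clicked"])
ctr = clicks / impressions if impressions else 0.0
print(f"CTR: {ctr:.0%}")  # 3 clicks over 4 impressions -> 75%
```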

Apart from how the content is delivered to the users, do you also use artificial intelligence in content creation?

Within Reuters there are a lot of different initiatives. One of them is using artificial intelligence to create a synthetic speaker that can talk about the data it is fed. Let’s say that there was a sports event, for example a soccer match. The user might want to know who won and who scored the goals. This data, once selected, will be ‘performed’ by a speaker on video. From a drop-down menu you can select which teams played, the score and who scored, and then the speaker will say it on video. Basically, it’s a synthetically generated video based on the key information about a soccer match.

Viktoriia and colleagues at Thomson Reuters Labs

 

One of the things that I find fascinating is that you have a value neutral approach to the content you produce at Reuters. Value neutrality is already a tough standard to reach for humans, so isn’t it even more difficult once artificial intelligence is deployed?

This is a very interesting challenge. Machine learning models are based on the data that you train them on. So if that data is biased, whatever comes out as a result will also be biased. To give you an example, we can look at recruitment processes, where you select from a pool of candidates who gets hired and who doesn’t. If your initial dataset includes a disproportionate number of examples of white men being hired for the higher-up positions, then the algorithm you’re training will reproduce the same bias. If we extrapolate that back to having an unbiased news source that is factual and objective, then it becomes critical that you consider potential biases in the data you train your models on.
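
A deliberately simplistic, invented example of the mechanism Viktoriia describes: a “model” that does nothing more than learn historical hiring rates per group will faithfully reproduce whatever skew is present in its training data.

```python
# Toy illustration: a "model" that learns hiring rates from biased history
# will reproduce that bias in its predictions. All data here is invented.
from collections import defaultdict

# Invented historical outcomes (group, hired?) with a built-in skew.
history = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
        + [("group_b", True)] * 20 + [("group_b", False)] * 80

# "Training": estimate the hiring rate per group from the historical data.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1
rates = {g: hired / total for g, (hired, total) in counts.items()}

# "Prediction": recommend candidates whose group's historical rate is high.
def recommend(group, threshold=0.5):
    return rates[group] > threshold

print(rates)                 # {'group_a': 0.8, 'group_b': 0.2}
print(recommend("group_a"))  # True  -- the skew carries straight through
print(recommend("group_b"))  # False
```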

I read that, for example, Reuters aims to exhibit value neutrality by refusing to label some events as terrorist attacks. However, the data that goes into writing an article on the matter might contain explicit references to terrorism. How can value neutrality be ensured in such a case?

First of all, it is important to keep in mind that articles on sensitive matters, such as reporting on incidents that may be labeled as acts of terrorism, are never produced by a machine but require editors who make final decisions on the content and the wording of each article.

However, what is important in such a case is to have a sufficient amount of data around the same incident, so that the information is balanced between sources that talk about terrorism and others that don’t. The same applies from a model-training perspective. That way, the model can use semantic analysis of the word “terrorist” to pick up relevant sources and instances going forward. It will, so to speak, learn which incidents belong in a comprehensive ‘terrorism’ category, whether they explicitly refer to terrorism or not. But ultimately the best applications of AI and machine learning tend to keep a human in the loop who validates the quality of the model’s suggestions and gives the final OK to publish. That would be the best approach in this case.

How is the implementation of AI technologies affecting jobs and roles at Reuters?

AI should be seen as a set of tools that can support journalistic work. I don’t think we’re anywhere near AI replacing journalists in the newsroom. In fact, AI creates some ground material that journalists can build upon. Even with regard to the content automatically produced with natural language generation, if the confidence in the output is not extremely high, you’ll have a person reviewing it. It’s more about asking how we can free up time for our journalists to work on the tasks that are less manual and repetitive. So that instead of copy-pasting information, they can actually use their human skills and expertise to focus, for example, on content that requires a judgement call.

Do you think that the skills that aspiring journalists should acquire have changed due to the introduction of AI in the newsroom?

I know that today many journalists are a little bit like aspiring computer scientists. One thing they are involved in is templating work, so that it can be used to create sentences via natural language generation. In practice, this means setting up a template where you indicate to the machine things like: “when you see this, create a sentence that looks like that.” The skills needed to effectively create the template are as much journalistic as they are computer science-y. That’s the combination of skills I would suggest an aspiring journalist aim for. And there’s also a lot of room for work in data analytics and visualisation: for example, the creation of meaningful and informative graphics to accompany articles. On the other hand, traditional reporting and opinion pieces are not going to disappear either.
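
A minimal, hypothetical sketch of what such templating can look like, picking up the soccer example from earlier; real newsroom tooling is more elaborate, and the team names and templates here are invented.

```python
# Minimal sketch of template-based natural language generation:
# journalist-written rules map structured data to sentence patterns.
match = {"home": "Arsenal", "away": "Chelsea", "home_goals": 2, "away_goals": 2}

def describe(m):
    # "When you see this, create a sentence that looks like that."
    if m["home_goals"] > m["away_goals"]:
        template = "{home} beat {away} {home_goals}-{away_goals}."
    elif m["home_goals"] < m["away_goals"]:
        template = "{away} won {away_goals}-{home_goals} away at {home}."
    else:
        template = "{home} and {away} drew {home_goals}-{away_goals}."
    return template.format(**m)

print(describe(match))  # -> "Arsenal and Chelsea drew 2-2."
```

The journalistic judgement sits in choosing the conditions and wording; the machine only fills in the slots.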

What about your own background? How do you think your education helped you land the role you now have at Reuters?

I did my degree in economics and finance and there was actually quite a lot of coding involved. The financial sector experienced some of the trends that journalism is facing now a little earlier. In the early days of my career in economics, being able to use Excel proficiently was a must. But today one also needs to be able to use more advanced statistical software, like Matlab and Python, and to interpret the output the models provide. This background really helped me develop the quantitative and analytical skills that put me in the position I am in at Reuters.

Do you see a lot of resistance within the company towards technological innovation?

Journalists are super curious people. There is a lot of interest in innovation and a lot of push for it too. But there’s also the desire to maintain control, so to speak, as well as a lack of trust in technology at times. The reality is that technology is limited. There is excitement about innovation, folks really want to try new and different things, but generally people still want to make sure they have the final say.

Data science, computer science and technology in general still attract only a relatively small proportion of women. As a woman working at the intersection of AI and journalism, do you perceive the gender imbalance?

Yes, definitely. There are multiple issues here, one of them being that women don’t see a lot of mentors and role models they can follow. I think that it’s very important to see that someone else has done it: they made it, this is what it looks like and how you can get there.

Many women that I observe and talk to get their education in engineering or computer science and are excited to work in these fields. It’s great at the beginning but then, for many reasons, many end up switching to other teams a couple of years down the road. That’s why I really try to bring women into my team and encourage them to grow in their roles. Maybe they just need a little bit more encouragement, a little bit more recognition. This also depends on the country you come from, the society and family you grew up in, and what you have been exposed to in your personal life and career.

How was your own experience? Did you have a role model?

I had an excellent mentor. My manager at my old job, where I started working as an intern, had an idea for a product that involved machine learning and AI. I basically started learning on the job and realised that it was actually quite similar to my training in economics in many ways. My manager encouraged me to step up and challenge myself. There were many things that I had never tried before. But he always gave me tasks that would be outside of my comfort zone. And I was willing to try. That’s one of the things that I found very helpful. And it’s also what I try to do with my team members. It gives you confidence in what you can do. It develops your skillset and ultimately, you can do anything you put your mind to. But you’ve got to be challenged and you’ve got to be willing to try and learn.

So, AI and journalism were not part of your educational background, right? 

No, I started working in the financial industry and that’s where I built my expertise in AI. Then I just wanted to continue working on interesting projects, no matter the field. And that’s how I ended up with AI and journalism, because obviously technology is translatable. The applications will differ from one industry to another, but the technological know-how is very similar across industries. Reuters has a mission I want to contribute to and I’ve always had a profound respect for journalism and unbiased and trusted news. So this seemed like a good cause to be part of.

It sounds like you have a very fascinating job!

It really is! I think that there’s a lot to discover in the field of artificial intelligence. And I think it’s a good area to be working in for the foreseeable future. I would say that if you can get into it or interact with it in any way, it’s going to be valuable for many years to come. There are incredible advancements being made, although it’s obviously not a silver bullet for everything. As general advice, try to do things outside of your comfort zone and you might be surprised where you end up and what you end up doing.


The interview was conducted by Valentina Gianera, Polis intern and LSE MSc student. It is part of a JournalismAI interview series with women working at the intersection of journalism and artificial intelligence.

If you want to follow the series and stay informed about JournalismAI activities, you can sign up for our monthly newsletter.

JournalismAI is a project of Polis, supported by the Google News Initiative.

About the author

Charlie Beckett

Posted In: JournalismAI | Women in AI-journalism