
Lakshmi Sivadas

October 11th, 2022

Lessons from prototyping an AI visual pipeline



The JournalismAI Fellowship began in June 2022 with 46 journalists and technologists from news organisations globally collaborating on using artificial intelligence to enhance their journalism. At the halfway mark of the 6-month long programme, our Fellows describe their journey so far, the progress they’ve made, and what they learned along the way. In this blog post, you’ll hear from team Automating Visuals.

Across our local news landscape, there's at least one thing most audiences seem to have in common: they love real estate stories.

Sure, there’s plenty of entertainment value in the local “Zillow Gone Wild” story. But people in our communities read about real estate because it’s a vital part of their lives.

A home is still a fundamental part of middle-class financial security in the U.S. Homeowners need information on how to make the most of that wealth. Renters want to know why their landlords are pricing them out, and what public officials are doing about it. The housing market in a city can be a key indicator of economic health.

Yet as compelling as these stories are, many of them are hard to illustrate with visuals. What if automation and artificial intelligence could help?

That was the original question driving our Automating Visuals group – four reporters, technologists and product managers from McClatchy and Gannett working as part of the JournalismAI Fellowship.

Real estate is a beat with high reader interest. It’s also rich in data and omnipresent in every market. So even though it’s not the only topic with hard-to-find visuals, we saw it as the perfect entry point to explore more complicated questions about how to practically harness machines to reduce the drudgery of generic image selection at scale.

We’re still building out our idea, but we’ve learned a few things worth sharing along the way.

Automation vs. Machine Learning? You might need both

AI discussions, especially in non-technical settings, often have a definition problem. Ask four different people, even within our industry, and you'll likely get four different explanations of AI.

As we talked about our project among our team members and stakeholders, we realized we needed to be more precise than simply saying “AI.”

For instance, we are exploring how we might automatically populate a social card with relevant text from an article. Social cards are used to promote articles on social media and other platforms.

We could use machine learning techniques, which use data to create statistical models that inform actions. Topic modeling tools like BERTopic could help us summarize an article and identify keywords.
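To make the distinction concrete, here is an illustrative stand-in for the keyword-extraction step, written with only the Python standard library; a real pipeline would use BERTopic's learned topic models rather than this simple frequency count, and the stopword list and sample article below are hypothetical.

```python
# Illustrative sketch only: a stdlib stand-in for the kind of keyword
# extraction a tool like BERTopic performs with learned topic models.
from collections import Counter
import re

# Hypothetical, abbreviated stopword list.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "on", "to",
             "for", "is", "are", "was", "were", "that", "this", "with",
             "have", "out"}

def extract_keywords(article_text: str, top_n: int = 5) -> list[str]:
    """Return the most frequent non-stopword terms in an article."""
    words = re.findall(r"[a-z']+", article_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

article = ("Home prices in the metro area rose again this quarter. "
           "Rising prices have priced many renters out of the metro market.")
print(extract_keywords(article, top_n=3))
```

The point of the sketch is the shape of the output, a short ranked list of terms, which is what a social-card generator downstream would consume.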

Or we could use automation to create a series of rules that would scrape headlines or first paragraphs to populate prebuilt social card templates.
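The rule-based alternative can be just as simple as it sounds. A minimal sketch, assuming articles arrive as plain dicts; the field names ("headline", "body"), the template text, and the 140-character limit are hypothetical examples, not taken from any real CMS:

```python
# A minimal sketch of the rule-based approach. Field names and the
# template are hypothetical, not from any real CMS.

CARD_TEMPLATE = "{headline}\n\n{teaser}\n\nRead more at example-news.com"

def build_social_card(article: dict) -> str:
    """Fill a prebuilt card template with the headline and first paragraph."""
    headline = article["headline"].strip()
    first_para = article["body"].strip().split("\n\n")[0]
    # Simple rule: truncate the teaser so it fits on the card.
    teaser = first_para if len(first_para) <= 140 else first_para[:137] + "..."
    return CARD_TEMPLATE.format(headline=headline, teaser=teaser)

article = {
    "headline": "Metro home prices climb for a third straight quarter",
    "body": "Prices rose 4% over the summer.\n\nAgents say inventory is tight.",
}
print(build_social_card(article))
```

No model, no training data, just deterministic rules; the trade-off is that the output is only ever as good as the headline and lede it scrapes.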

Both of these things are valuable, especially when combined.

Knowing how to talk about what we needed was a crucial part of our early planning.

Ethics are a foundation, not an add-on

We started on our automated visuals project at an interesting time for the field. In just the last few months, there’s been an explosion in the public’s interest in AI-generated art, through platforms and frameworks like DALL-E and Midjourney.

In the last few weeks, a complex debate over the implications of AI imagery caught fire after a Colorado man took home first prize in his state fair’s digital art contest. From The Washington Post’s Drew Harwell:

❝But AI-generated art has been criticized as automated plagiarism, because it relies on millions of ingested art pieces that are then parroted en masse. It has also fueled deeper fears: of decimating people’s creative work, blurring the boundaries of reality or smothering human art.❞

Beyond the purely artistic, we’re getting into tricky territory here – because a lot of these images look very, very real.

From a journalism perspective, as we heard from a visuals editor we spoke with, the ethics of this issue can first pose a practical question: How do we show our audiences what’s a photo and what’s AI-generated?

We’ve decided that transparency and human input are two keys here.

We need effective communications about how our system works and its limitations. We also need editorial oversight – a human in the loop – to make final decisions about publication.

Don’t let perfect be the enemy of good

Initially, we believed our hyperfocus on a specific “domain” – real estate – might make the problem of automated imagery a little more manageable.

Our rough sketch on paper had several parts: Graph creation, stock photo selection, social cards, AI-art generation. That’s too much to prototype.

The more folks we talked to, the more we were able to home in on the most "useful" use case and treat the rest of our sketches as potential future add-ons. For us, a useful image is compelling and relevant. And while a Midjourney illustration might be compelling, we found it hard to imagine the output would reliably be relevant to our local real estate coverage.

This winnowing down also applied to other lines of thinking. Do we really need to build everything, or are there off-the-shelf tools and APIs we can use and modify? Can simple automation provide a shortcut around the more complicated AI problems? Is a simple, more generic image actually more useful to photo editors and content managers than something that is too specific?

There can also be features we know we need for widespread adoption – namely, full integration into our content management systems – that just don’t have to be complete at this stage.

In the technology world, this is just iterative design: Fail early and often, and a better product will result.

But maybe we can take a more apt analogy from real estate: Build the framing first, and you’ll have an easier time getting your structure off the ground.

The Automating Visuals team is made up of:

  • Theresa Poulson, Senior Product Manager, McClatchy (US)
  • Tyler Dukes, Investigative Reporter, McClatchy (US)
  • Sean Smith, Product Manager, Gannett / USA TODAY Network (US)
  • Mike Stucka, National Data Solutions Editor, Gannett / USA TODAY Network (US)

Do you have skills and expertise that could help team Automating Visuals? Get in touch with Fellowship Manager Lakshmi Sivadas by email.

Header: David Man & Tristan Ferne / Better Images of AI / Trees / CC-BY 4.0

JournalismAI is a global initiative of Polis, supported by the Google News Initiative. Our mission is to empower news organisations to use artificial intelligence responsibly.
