Audiences doubt the benefits of AI-generated imagery in news are worth the risks, new study finds

Nov 7, 2025 - 08:00

What do readers really think of AI-generated images in their news? There has been substantial research on how audiences respond to AI-generated text, but far less dedicated research on image generators. That includes products like Midjourney, Adobe Firefly, and DALL-E, the last of which has been integrated into ChatGPT. The release of OpenAI’s Sora 2 in September has only brought renewed urgency to this question, as higher-quality AI-generated videos, including ones depicting real people, have flooded social media.

A new study in Digital Journalism, “Reality Re-Imag(in)ed. Mapping Publics’ Perceptions and Evaluation of AI-generated Images in News Context,” takes this question head-on. University of Amsterdam professors Edina Strikovic and Hannes Cools conducted four focus groups on the topic in October 2024. All 25 participants were Dutch residents between the ages of 19 and 50. While there are obvious limitations to generalizing the study’s findings beyond the Netherlands, these focus groups offer some helpful insights by going deep on several aspects of AI-generated imagery in the news.

For one, the researchers asked the participants about their prior experiences with AI-generated imagery. Most participants said that, to their knowledge, they had rarely seen AI-generated images published by news outlets; they most often encountered such images on social media platforms like Instagram and TikTok. For the most part, they said they didn’t know how to distinguish between real photographs and the AI-generated images they saw online.

Some participants said they examined images for irregularities in lighting or a lack of imperfections, like an image that is “too perfect.” But most said they relied on gut feeling and explicit AI labels to figure out when images were AI-generated. They considered it far more difficult to fact-check images than written material. “Even if they wanted to, participants stated, they would not know how to verify visual information,” write the researchers.

When it came to the potential use of image generators by news organizations, the participants often drew a distinction between AI-generated illustrations and photorealistic imagery. Many thought it was understandable to use image generators to create graphs and charts to depict data and trend lines, or to create satirical images. “If it has a guiding function, then I don’t have much objection to it. So illustrative, or a graph or something like that,” said one participant, to the agreement of others in the focus group. “AI images can be used just fine. Just cartoons or imitations…” said another participant.

Participants also considered a story’s topic when determining whether they thought AI-generated images were appropriate. Many in the focus groups were more accepting of AI-generated images to depict information about entertainment or “softer” news beats, but those same participants considered AI-generated visualizations of politics and conflict as completely out of bounds.

By and large, the focus groups voiced concerns about news organizations adopting image generators, and expressed more general anxieties about the technology’s downstream effects.

Participants spoke about how photographs had long been a form of “eyewitness” testimony in news coverage, evidence that something really happened at a certain time in a certain place. Despite how easy it is to manipulate photographs without AI, they feared news organizations leaning more on AI-generated images could erode a sense of “shared reality.”

Participants also openly discussed algorithmic bias, and how news organizations that use AI-generated imagery could reinforce stereotypes. “I recently read about… AI being quite racist and sexist. As in, if you ask [it] to give you a picture of a happy family, you’re much more likely to get a picture of an American dream family — a white family, a man and a woman with their children — than a mixed-race family,” one participant said. “Or if you ask it to show a doctor you get a man, and if you ask it to show a nurse you get a woman,” said another.

Finally, participants spoke to the way that images are experienced passively. As they put it, users can scroll past a photo and immediately internalize it without thinking twice. They drew distinctions between this casual consumption and reading the text of news articles. “I think the difference between GenAI texts and images is that pictures can be shared passively faster. Forwarding a photo has so much more effect than an article. All your friends are not going to read an article. But they will put that photo on their [Instagram] story or forward it to a group. And so the image starts to live its own life,” said one participant.

Across the focus groups, participants questioned whether the benefits outweigh the harms when it comes to employing AI-generated imagery in news. They felt the stakes were too high to justify the opportunities for news organizations. “Participants reported that these benefits were negligible when compared to the risks of an AI-generated reality,” write the researchers. Is using AI-generated images even a necessity, they asked.

If news organizations were to adopt image generators, one thing was nearly unanimous across the focus groups: participants wanted explicit disclosure and labeling of those images. One participant compared it to the need for “sponsored” labels on advertising content in news publications. Some even asked that the published label name the specific prompt or the image generator used. Here, Strikovic and Cools draw attention to a “transparency paradox” baked into the participants’ comments:

“While participants consistently expressed a desire for clear disclosure of AI-generated content, they simultaneously demonstrated uncertainty about what specific information they needed and how they would use such disclosures. This paradox manifests in audiences wanting transparency while lacking frameworks for meaningfully processing that information — a tension that complicates straightforward calls for mandatory AI labeling in news media.”

You can read the full study in Digital Journalism here.
