People are using ChatGPT twice as much as they were last year. They’re still just as skeptical of AI in news.

The generative AI wave isn’t coming — it’s already here, and it’s reshaping how the public finds information. In a new report, “Generative AI and News Report 2025: How People Think About AI’s Role in Journalism and Society,” my colleagues Richard Fletcher, Rasmus Kleis Nielsen, and I surveyed audiences across six countries, including the United States.
Our results show a public increasingly fluent in AI and happy to embrace this technology but at the same time deeply ambivalent about its role in the news, creating a critical challenge for newsrooms navigating a rapidly changing environment.
An explosion in use, led by information-seeking
It is undeniable that AI use is growing. Across the six countries we looked at (Argentina, Denmark, France, Japan, the United Kingdom, and the United States), the proportion of people who say they have ever used a generative AI tool jumped from 40% in 2024 to 61% in 2025. More significantly, weekly usage surged from 18% to 34%. In the U.S., where adoption was already higher, growth was more modest but still significant, rising from 31% to 36% weekly use. At the same time, it is worth remembering that a majority of people in all countries we looked at are not yet regular users of any AI tool or system.
While ChatGPT remains the dominant standalone product, we find that AI embedded in existing services, like Google’s Gemini or Microsoft’s Copilot, is driving broader exposure and use. Meanwhile, AI systems beloved by some professionals, like Claude and Perplexity, barely seem to cut through with the general population.
Crucially for the news industry, the primary reason people turn to AI has shifted. Last year, creating media, such as generating an image or a summary, was the top use case. This year, information-seeking has taken the lead, more than doubling from 11% to 24% weekly. People are using AI to research topics, answer factual questions, and ask for advice. They are, in essence, increasingly using it for tasks that were once the primary domain of search engines and, by extension, news publishers.
We also found that using AI for social interaction has increased, with 7% (8% in the U.S.) saying they had done so in the last week, a figure that is higher among younger people.
The search frontline: AI answers are now unavoidable
This shift is most visible in search. We find that seeing AI-generated search answers, like Google’s AI Overviews, has become commonplace across countries, including in the U.S. A majority of Americans (61%) report having seen an AI-generated answer in response to a search query in the last week — a finding that dovetails with recent research by Kirsten Eddy at the Pew Research Center. This passive exposure to AI-generated information in search is far higher than the active use of any single AI tool.
For publishers worried about declining referral traffic, our findings paint a worrying picture, in line with other recent findings in industry and academic research. Among those who say they have seen AI answers for their searches, only a third say they “always or often” click through to the source links, while 28% say they “rarely or never” do. This suggests a significant portion of user journeys may now end on the search results page.
Contrary to some vocal criticisms of these summaries, a good chunk of the population does seem to find them trustworthy. In the U.S., 49% of those who have seen them express trust in them, although it is worth pointing out that this trust is often conditional.
In many of the long-form responses we received when we asked people to explain their trust, people replied that they see AI as a “first pass,” especially for low-stakes queries, and pointed to the fact that in their view the AI “knows more” because it has been trained on large amounts of data. Others said they see these responses as “good enough” answers. But people also explained that they remain cautious on complex topics like health or politics, and say they seek to verify information with traditional sources.
Obviously, what people say they do might differ from what they actually do, but these findings should at least give us some pause before we assume that people encounter such summaries uncritically or will be easily fooled by misleading or false answers that these systems sometimes provide.
The “comfort gap” and the verdict on AI in news
When the conversation turns specifically to journalism, public sentiment cools considerably. Our report identifies a clear and persistent “comfort gap” between human- and AI-led news production. Across countries, only 12% of respondents are comfortable with news made entirely by AI, rising slightly to 21% if there is a “human in the loop.” Comfort, however, jumps to 43% for news led by a human with some AI assistance and 62% for news produced entirely by a human journalist.
Americans are not outliers here. Their comfort levels mirror these global averages, showing a deep-seated preference for human oversight and authorship.
The public also continues to draw a sharp line between back-end and front-facing uses of (generative) AI in the news. People are broadly comfortable with AI being used for tasks like editing spelling and grammar (55% comfortable across countries, 60% in the U.S.) or for translation (53%, 51% in the U.S.). But comfort plummets for uses that directly shape the final product, such as creating a realistic image when no photo exists (26% across countries and in the U.S.) or, most notably, creating an artificial presenter or author (19%, 20% in the U.S.).
And, just as when we did this research last year, the perception is that AI will primarily benefit publishers, not the public. While respondents believe AI will make news cheaper to produce and more up-to-date, they also firmly expect it to be less trustworthy (-19 net score) and less transparent (-8 net score).
A pessimistic outlook on AI’s societal impact
This skepticism is part of a broader pessimism in the United States about AI’s societal role. While people in four of the six countries we surveyed are optimistic about AI making their personal lives better, the U.S. is one of three where pessimism dominates when it comes to society as a whole.
A striking 42% of Americans believe generative AI will make society worse, compared to just 30% who think it will make it better. This likely reflects a deep distrust in how some powerful institutions — including the news media, but also governments and politicians — will wield the technology. This is reinforced by the fact that only 27% of Americans believe journalists “always or often” check AI outputs before publication, a figure lower than in Japan or Argentina.
For news organizations, our findings are in some ways bitter medicine. The public is already using AI to find information, and is increasingly encountering AI-generated content from a range of other actors, but it remains wary of newsrooms using the technology. Then again, this does not mean that everything is lost. At least when asked, people seem to place a premium on human judgment and reporting, and welcome a commitment to the responsible use of AI in news.
The path forward for the news media, then, may not be to hide AI usage, but to lean into transparency and original journalism that stands out not just from the “AI slop” increasingly permeating the internet but also from some of the old-fashioned “churnalism” that some outlets are pursuing with even greater vigor with the help of AI. None of this will necessarily save news organizations from the greater upheaval that the use of AI, especially by platforms, brings to digital infrastructures around the world. But it might just help news organizations make the case for why they should still matter in this brave new world.
The full report, “Generative AI and News Report 2025: How people think about AI’s role in journalism and society,” can be found on the website of the Reuters Institute for the Study of Journalism.
Felix M. Simon is the research fellow in AI and news at the Reuters Institute for the Study of Journalism and a research associate at the Oxford Internet Institute.