The newsroom’s AI has an agenda

Dec 8, 2025 - 17:00

In October 2023, OpenAI CEO Sam Altman warned that “AI will be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes.” Two years later, we’re watching those strange outcomes unfold in real time. And in 2026, they’re going to collide with journalism in ways most reporters won’t even notice.

Here’s what’s happening: The Trump administration has been systematically pushing to reshape AI systems according to its ideological preferences. The July executive order “Preventing Woke AI in the Federal Government” mandates that AI systems be “truth-seeking” and “ideologically neutral” — while simultaneously defining acknowledgment of concepts like systemic racism or climate science as ideological bias that must be eliminated. Companies that want federal contracts will need to comply. Companies that want to avoid regulatory headaches will preemptively comply. We’ve already seen Meta shift its Llama model rightward to curry favor with the administration, framing it as “correcting bias” when it was really just changing which direction the bias pointed.

Meanwhile, newsrooms keep shrinking. Business Insider laid off 21% of its staff while announcing it was going “all-in on AI.” Over 70% of its remaining employees now use ChatGPT regularly. This pattern is repeating across the industry: fewer reporters, more AI tools filling the gaps.

As AI tools become essential to how journalism gets produced — for research, for drafting, for summarization — the biases built into those tools will invisibly shape the output. A reporter using an AI assistant to research a story on immigration policy might not realize the tool has been calibrated to treat certain perspectives as more “neutral” than others. An editor using AI to summarize background documents might not notice which facts get emphasized and which get buried. The bias won’t announce itself. It’ll just be there, in the background, nudging coverage in directions that serve the interests of those who control the models.

We saw what this looks like in its clumsy form when Elon Musk’s Grok chatbot started inserting “white genocide” conspiracy theories into random conversations about baseball stats and cat videos. That was obvious. What’s coming will be subtle — the algorithmic equivalent of editors who’ve internalized which stories are “too political” and which framings are “balanced.” Except now those editors will be invisible, and most journalists won’t even know they’re there.

The administration has also been drafting executive orders to block states from regulating AI, threatening to withhold federal funding and deploy a DOJ “AI Litigation Task Force” against states that try. The goal is clear: eliminate any resistance to this project of capturing the tools that will increasingly mediate how Americans understand reality.

In my 2025 prediction, I wrote that media organizations were trading their watchdog role for a seat at the billionaire’s table. What I didn’t anticipate was how quickly the table itself would be automated. The surrender I described was at least visible — editorial decisions made by humans who could be identified and criticized. What’s coming is surrender by default, encoded into systems that journalists will use without thinking twice.

Parker Molloy writes The Present Age newsletter.
