Thank God Americans Aren't Getting Their News From Chatbots
Seventy-five percent of Americans never get news from AI chatbots like ChatGPT or Gemini. Less than 1% actually prefer chatbots as a news source.
4 min read | Writing Team | Dec 11, 2025
AI chatbots are remarkably effective at changing people's political opinions, according to a study published Thursday in the journal Science—and they're most persuasive when they share large amounts of inaccurate information.
Researchers from Oxford, Stanford, MIT, and the UK's AI Security Institute paid nearly 77,000 people to interact with various AI chatbots (using models from OpenAI, Meta, and xAI) about political topics like taxes and immigration. Regardless of whether participants leaned conservative or liberal, the chatbots attempted to argue them toward the opposing view.
The results are unsettling: AI chatbots frequently succeeded at persuasion, with effects lasting at least one month. More troubling, "the most persuasive models and prompting strategies tended to produce the least accurate information," researchers found. About 19% of all AI claims were rated "predominantly inaccurate."
Lead author Kobi Hackenburg emphasized the implications: "Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues."
The study found AI chatbots most persuasive when providing large amounts of in-depth information rather than deploying alternate tactics like moral appeals or personalized arguments. This creates a perverse dynamic: more information sounds more authoritative, even when much of it is false.
Claims made by GPT-4.5, OpenAI's February 2025 release, were significantly less accurate than claims from smaller, older models. Researchers observed "a concerning decline in the accuracy of persuasive claims generated by the most recent and largest frontier models."
Translation: as AI models get better at sounding authoritative, they get worse at accuracy. The optimization target is persuasiveness, not truthfulness. Models learn that confident assertions backed by extensive detail convince people, regardless of whether the details are correct.
"Taken together, these results suggest that optimizing persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem," researchers wrote.
AI chatbots proved substantially more persuasive than static AI-generated messages. When researchers compared participants who interacted with chatbots versus those who read 200-word persuasive messages written by AI, the conversational format was 41-52% more persuasive depending on the model.
The back-and-forth creates an illusion of dialogue—the feeling that you're engaging with an intelligence that listens, understands, and responds to your specific concerns. This perceived responsiveness makes arguments feel personalized even when they're algorithmic.
Humans evolved to find conversation persuasive because conversations with other humans generally involve good-faith exchange. We use conversational cues to assess credibility. But AI chatbots exploit these heuristics without the underlying constraints that make human conversation trustworthy. They can generate unlimited "evidence," cite nonexistent sources, and maintain internally contradictory positions across different conversations without consequence.
Between 36% and 42% of the persuasive effect remained evident one month later. This isn't momentary confusion cleared up by subsequent fact-checking. It's lasting opinion change from a single conversation with an AI system that provided inaccurate information.
This persistence is what makes the findings dangerous for democratic discourse. Misinformation that changes opinions temporarily might be concerning but manageable. Misinformation that durably shifts political views—especially when delivered through technology that can reach millions simultaneously—represents a fundamental threat to informed democratic decision-making.
The researchers acknowledged that controlled study conditions don't translate directly to real-world politics: "The extent to which people will voluntarily sustain cognitively demanding political discussions with AI systems outside of a survey context remains unclear." But as AI chatbot use grows—44% of U.S. adults reported using tools like ChatGPT, Gemini, or Copilot "sometimes" or "very often" in a June poll—voluntary engagement is clearly happening.
The paper warned that highly persuasive AI chatbots "could benefit unscrupulous actors wishing, for example, to promote radical political or religious ideologies or foment political unrest among geopolitical adversaries."
This isn't hypothetical. State actors from China and Russia already deploy AI-generated content in propaganda campaigns. Political campaigns use AI for fundraising emails and content creation. President Trump regularly posts AI-created videos and images. The infrastructure for scaled political persuasion using inaccurate AI-generated information already exists.
What's missing isn't capability—it's widespread deployment. And the barrier to deployment isn't technical complexity. It's simply that most political actors haven't yet realized how effective this approach could be. This study provides the proof of concept.
Foreign governments attempting to sow division could deploy chatbots on social media that engage users in "conversations" about contentious topics, providing volumes of inaccurate but persuasive information calibrated to shift opinions in destabilizing directions. Domestic political actors could deploy similar tools while claiming they're just providing "information" to voters.
David Broockman, a UC Berkeley political science professor studying persuasion, offered measured optimism: "If you've got both sides of an issue using this, I would guess it would cancel out and you're going to hear more persuasive arguments on both sides."
This assumes symmetric deployment and symmetric resources. But AI persuasion tools favor actors willing to use misinformation over those constrained by accuracy. If one side limits itself to truthful claims while opponents deploy persuasive falsehoods, the "cancel out" theory fails.
It also assumes recipients can distinguish persuasive arguments from accurate ones. The study's core finding contradicts this: people found inaccurate information most persuasive. Hearing more persuasive arguments on "both sides" doesn't help if persuasiveness correlates inversely with accuracy.
Shelby Grossman, an Arizona State journalism professor who studies AI persuasiveness, noted the escalation: "Now we have evidence showing that as models get better, they are becoming more persuasive."
The trajectory is clear. Current models already persuade effectively with inaccurate information. Next-generation models will be better at persuasion and potentially worse at accuracy. The gap between "sounds authoritative" and "is accurate" widens as optimization targets persuasiveness.
No obvious technical solution exists. You can't train models to be persuasive-but-only-with-accurate-information because persuasiveness and accuracy aren't naturally aligned. Humans find confident assertions backed by extensive detail persuasive regardless of accuracy. Models optimize for what works.
For organizations concerned about information integrity, the implications are stark: AI chatbots represent a persuasion technology that works precisely because it isn't constrained by truthfulness. At Winsome Marketing, we help teams understand these dynamics—not to exploit them, but to develop communication strategies that account for an information environment where persuasive falsehoods distributed through conversational AI pose existential challenges to informed decision-making. Sometimes the most important marketing question is how to stay credible when persuasion no longer requires accuracy.