4 min read
Writing Team : Oct 3, 2025 8:00:00 AM
Seventy-five percent of Americans never get news from AI chatbots like ChatGPT or Gemini. Less than 1% actually prefer chatbots as a news source. This is the most encouraging thing I've read about media literacy in years.
We spend so much time worrying about what AI will disrupt that we forget to celebrate when humans display basic pattern recognition. Americans are correctly identifying that systems prone to hallucination make terrible journalists. That's not technophobia—it's functioning critical thinking.
Pew Research Center surveyed 5,153 U.S. adults in August 2025 and found that only 9% get news from AI chatbots often (2%) or sometimes (7%). Another 16% do so rarely. The overwhelming majority—three out of four Americans—simply don't use chatbots for news at all.
Even more telling: among the minority who do use chatbots for news, the experience is mediocre at best. A third find it difficult to determine what's true versus false. About half say they encounter news they think is inaccurate at least sometimes, with 16% seeing inaccurate information extremely often or often.
Younger adults, who use chatbots more frequently in general, are also more likely to spot the problems. Among 18- to 29-year-olds who get news from chatbots, 59% say they at least sometimes see inaccurate news there. For 30- to 49-year-olds, it's 51%. They're not fooled—they're just more willing to experiment with tools they know are flawed.
This isn't a story about technological resistance. It's a story about appropriate tool selection. Americans have correctly identified that chatbots—systems designed to generate plausible-sounding text based on statistical patterns—are unsuitable for delivering factual information about current events.
Large language models don't "know" things. They predict probable next tokens based on training data. When you ask GPT-5 or Gemini for news, it's generating text that resembles news articles it was trained on, not retrieving verified information from trusted sources.
This architecture makes hallucination inevitable, not occasional. The system has no internal fact-checking mechanism because it has no concept of facts—only patterns of words that typically appear together. Ask it who won a recent election, and it might confidently cite a candidate who didn't run. Ask about a breaking story, and it might synthesize details from multiple unrelated events into a coherent-sounding but entirely fictional narrative.
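To make that concrete, here's a toy sketch (ours, not how any production model is actually built) of what "predicting the next token" amounts to: the system samples whatever continuation is statistically plausible, and nothing in the process consults a source of record.

```python
import random

# Toy "next-token" model: continuation probabilities learned from how often
# words follow one another in training text. There is no fact table here.
toy_model = {
    ("the", "election", "was", "won", "by"): {
        "Candidate A": 0.48,   # common in training data
        "Candidate B": 0.41,
        "Candidate C": 0.11,   # never ran, but the phrase still "sounds right"
    },
}

def next_token(context):
    """Sample the next word in proportion to its learned probability."""
    distribution = toy_model[tuple(context)]
    words = list(distribution)
    weights = list(distribution.values())
    return random.choices(words, weights=weights)[0]

prompt = ["the", "election", "was", "won", "by"]
print(" ".join(prompt), next_token(prompt))
# The output reads fluently no matter which name comes out; nothing in this
# code ever checks who actually won.
```

The point of the sketch is that fluency and accuracy come from entirely different places: fluency is built into the sampling, while accuracy has to be supplied from outside the model.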
A 2024 study from researchers at Oxford and Stanford found that even state-of-the-art language models hallucinated false information in 15-20% of factual queries, with the rate climbing to 35-40% for queries about recent events beyond their training cutoff. These aren't bugs to be patched—they're fundamental limitations of the architecture.
The fact that most Americans recognize this intuitively, without reading technical papers about transformer architecture, suggests our collective BS detectors are working better than tech industry evangelists would have us believe.
Here's where the good news stops. While the general public correctly avoids chatbots for news, a 2024 survey by the Information Technology and Innovation Foundation found that 41% of policymakers and government officials reported using AI assistants, including chatbots, to research policy issues and draft briefing documents.
That's significantly more concerning than the 9% of the general public who get news from chatbots, because policy decisions require accurate information at a level that casual news consumption doesn't. If a regular person believes a hallucinated detail about a celebrity scandal, the consequences are minimal. If a congressional staffer incorporates hallucinated statistics about healthcare outcomes into a policy brief, the consequences compound.
The same pattern shows up in corporate environments. Gartner's 2024 AI survey found that 38% of executives reported using AI tools to research market conditions and competitive intelligence, despite 67% of those same executives acknowledging concerns about accuracy. We're seeing appropriate skepticism from the general public and inappropriate adoption from people whose decisions affect millions.
The marketing industry's relationship with AI-generated content follows a similar but accelerated trajectory. We adopted AI writing tools faster than most sectors, often without comparable skepticism about accuracy.
A 2025 Content Marketing Institute study found that 73% of marketing teams now use AI to generate some form of content, with 45% using it for thought leadership and industry analysis pieces—exactly the contexts where hallucination poses the greatest reputational risk.
The difference is stakes and verification. When the general public encounters a hallucinated news story, they can cross-reference it against other sources or simply move on. When a brand publishes AI-generated thought leadership containing false statistics or fabricated case studies, the correction process involves public retractions, damaged credibility, and potential legal exposure.
We should be as skeptical as the 75% of Americans who won't get their news from chatbots. Instead, we're acting like the 41% of policymakers who've decided efficiency trumps accuracy.
Americans' reluctance to use chatbots for news reflects a sophisticated understanding of information quality, even if they can't articulate the technical reasons. They've correctly identified that:
Source matters. News organizations have editorial processes, fact-checkers, and institutional reputations at stake. Chatbots have training data and probability distributions.
Accountability matters. When The New York Times gets something wrong, there's a corrections process and editorial oversight. When ChatGPT hallucinates, there's no one responsible because the system has no concept of truth.
Transparency matters. Traditional news sources cite their reporting methods. Chatbots can't explain why they generated a particular claim because they don't "know" anything—they're just pattern-matching text.
These aren't nostalgic preferences for legacy media. They're rational assessments of which systems are designed to deliver accurate information versus which systems are designed to generate plausible text.
The general public's instinct to avoid chatbots for news stands in stark contrast to how quickly businesses and institutions have adopted AI for information-dependent work. We're outsourcing research, analysis, and content generation to systems that the public correctly identifies as unreliable for those exact tasks.
This creates a weird bifurcation: regular people won't trust AI for news about local elections, but companies trust it to generate financial analysis, legal research, and strategic recommendations. The latter decisions carry far higher stakes.
The smartest thing marketing leaders could do right now is adopt the same skepticism the general public already has. Use AI for ideation, drafting, and formatting. Don't use it for facts, statistics, or claims that could damage your credibility if they're wrong. That's not anti-technology bias—it's appropriate risk management.
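If it helps, here's one minimal sketch of that posture (the patterns and function names are illustrative, not a recommendation of any particular tool): keep AI-drafted prose, but automatically route anything that looks like a factual claim into a human verification queue before it ships.

```python
import re

# Illustrative patterns for claims that need a human with a primary source:
# percentages, years, dollar figures, and attribution phrases.
CLAIM_PATTERNS = [
    r"\b\d+(\.\d+)?%",          # percentages
    r"\b(19|20)\d{2}\b",        # years
    r"\$\d[\d,]*",              # dollar figures
    r"\b(according to|study|survey|found that)\b",
]

def split_draft(draft: str):
    """Separate an AI-generated draft into publishable prose and
    sentences that must be verified against a primary source."""
    publish, verify = [], []
    for sentence in re.split(r"(?<=[.!?])\s+", draft.strip()):
        if any(re.search(p, sentence, re.IGNORECASE) for p in CLAIM_PATTERNS):
            verify.append(sentence)
        else:
            publish.append(sentence)
    return publish, verify

draft = ("AI tools can speed up first drafts. "
         "A 2024 survey found that 38% of executives already use them.")
ok, needs_source = split_draft(draft)
print("Publish as-is:", ok)
print("Send to fact-checking:", needs_source)
```

A crude filter like this over-flags, and that's the point: when in doubt, a human checks the source before the claim goes out under your brand.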
Want help building AI workflows that respect the distinction between text generation and factual accuracy? Winsome Marketing's growth experts work with marketing leaders to deploy AI where it adds value without undermining the credibility you've spent years building. Let's talk about sustainable AI adoption that doesn't bet your reputation on probabilistic text prediction.