3 min read · Writing Team · Mar 18, 2026, 7:59:59 AM
The internet has a slop problem, and it's scaling faster than anyone is cleaning it up.
NewsGuard, the media credibility organization, announced this week the launch of a real-time AI Content Farm detection datastream built in partnership with Pangram Labs. The system has already identified 3,006 websites that churn out AI-generated, largely undisclosed, deliberately human-seeming news content — either to capture programmatic ad revenue or spread targeted disinformation. That number has more than doubled in the past year. New sites are appearing at a rate of 300 to 500 per month.
That's not a trend. That's an industrial operation.
NewsGuard classifies a site as an AI Content Farm based on three criteria: a substantial portion of its content is AI-generated, it doesn't disclose that fact, and it's presented in a way that leads an average reader to assume human journalists wrote it. Generic names — Times Business News, Business Post, CitizenWatchReport — are a common tell. So are publishing cadences of dozens of articles per day that no human staff could sustain.
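The three criteria can be imagined as a simple rule over measurable signals. To be clear, this is an illustrative toy only: NewsGuard's actual pipeline pairs Pangram Labs' AI-text detection with human analyst review, and every threshold below is invented for the sketch.

```python
# Toy heuristic loosely inspired by NewsGuard's three published criteria.
# All thresholds are invented; this is NOT NewsGuard's methodology.
from dataclasses import dataclass

@dataclass
class SiteSignals:
    ai_generated_share: float  # fraction of sampled articles flagged as AI-written
    discloses_ai_use: bool     # does the site state its content is AI-generated?
    articles_per_day: float    # average publishing cadence
    staff_listed: int          # named human journalists on the masthead

def looks_like_content_farm(s: SiteSignals) -> bool:
    """Flag a site when a substantial share of content is AI-generated,
    that fact is undisclosed, and the presentation implies human authorship."""
    substantial_ai = s.ai_generated_share >= 0.5  # invented threshold
    undisclosed = not s.discloses_ai_use
    # A cadence no plausible human staff could sustain suggests automation
    # masquerading as a newsroom.
    implies_humans = (
        s.staff_listed == 0
        or s.articles_per_day / s.staff_listed > 20  # invented threshold
    )
    return substantial_ai and undisclosed and implies_humans

print(looks_like_content_farm(
    SiteSignals(ai_generated_share=0.9, discloses_ai_use=False,
                articles_per_day=80, staff_listed=2)))  # True
```

The real system's value is that the detection layer runs continuously at web scale, with analysts confirming flags rather than hunting for candidates.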
The content isn't just low quality. It's actively harmful. In October 2025, a site called "News 24" published a false claim that Coca-Cola threatened to pull its Super Bowl sponsorship if Bad Bunny performed at halftime. The story was fabricated — Coca-Cola isn't even a Super Bowl sponsor — but it ran alongside ads from Expedia, AT&T, YouTube, Priceline, Hotels.com, Skechers, and GoDaddy. Blue-chip brands, funding fiction, at programmatic scale.
Another site spread the false claim that two U.S. senators spent $814,000 on hotels in Ukraine. Russian state media picked it up. It spread from there.
The business model here is not complicated. AI generates content cheaply. Programmatic advertising buys placements automatically, at volume, without human review of the specific sites receiving spend. The content farm collects the revenue. The advertiser's brand sits next to disinformation it didn't know existed.
NewsGuard previously reported that as many as 141 blue-chip brands had advertised on AI content farm sites within a two-month period. The new datastream integrates directly into pre-bid segments on platforms including The Trade Desk, giving advertisers the ability to exclude these sites before the impression is served. It can also be licensed directly by brands or their agencies.
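The core mechanic of a pre-bid exclusion is simple: before a DSP bids on an impression, it checks the page's domain against a frequently refreshed blocklist, so spend never reaches a flagged site. The sketch below is hypothetical — The Trade Desk's pre-bid segment integration is proprietary, and the domains shown are invented placeholders.

```python
# Hypothetical sketch of a pre-bid exclusion check. Real DSP integrations
# (e.g. The Trade Desk pre-bid segments) are proprietary; this only shows
# the underlying idea. Domains below are invented examples.
from urllib.parse import urlparse

# In practice, this set would be refreshed from the licensed datastream
# rather than hard-coded.
AI_CONTENT_FARM_DOMAINS = {
    "timesbusinessnews.example",
    "citizenwatchreport.example",
}

def should_bid(bid_request_page_url: str) -> bool:
    """Return False when the impression's page sits on a flagged domain,
    blocking the placement before the impression is served."""
    domain = urlparse(bid_request_page_url).netloc.lower().removeprefix("www.")
    return domain not in AI_CONTENT_FARM_DOMAINS

print(should_bid("https://www.citizenwatchreport.example/article/123"))  # False
print(should_bid("https://example-news-site.com/story"))                 # True
```

The design point is timing: filtering at the pre-bid stage is cheap and preventive, whereas post-bid detection only tells an advertiser where its money already went.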
Matt Skibinski, NewsGuard's Chief Operating Officer, put it plainly: advertisers are "being suckered into funding" sites that make baseless claims about celebrities, brands, and government officials — claims that then spread across social channels and compound the original damage.
The disinformation dimension is more serious than brand safety alone. NewsGuard's datastream has identified 358 AI content farms linked to Storm-1516, a pro-Russian influence operation that builds sites designed to mimic local newspapers in the United States and Europe. China and Iran are also represented in the dataset.
This is the convergence point nobody wanted to reach: the same cheap AI content infrastructure that generates clickbait about celebrities is being used by hostile foreign actors to manufacture political disinformation at scale, distributed through ad-supported websites that look credible enough to fool a casual reader.
The technology to produce this content costs almost nothing. Until now, the technology to detect it required significant human review. Pangram Labs' detection system automates the identification layer; NewsGuard's analysts review and confirm the results. The result is a real-time datastream that can flag new farms within days of launch rather than weeks.
If you're managing paid media at any meaningful scale, AI content farm contamination is not a hypothetical risk. It is a present, documented, and accelerating one. The programmatic supply chain was not built with this threat model in mind, and most brands are not running the exclusion lists needed to avoid it.
For content and brand strategy teams, there's a second implication. As AI-generated content floods the information environment, the value of verifiably human, editorially accountable content increases. Not because audiences are carefully auditing every source — most aren't — but because trust, once lost at a category level, redistributes to the sources that maintained it.
The brands and publishers investing in a genuine AI content strategy — transparent about AI use, rigorous about accuracy, clear about human oversight — are building differentiation that will matter more, not less, as the volume of slop rises.
"Today's flood of AI-generated content puts the internet as we know it at risk," said Max Spero, Pangram Labs' CEO. He's not being dramatic.
Three thousand sites, growing by as many as 500 a month, is not a content quality problem. It's an information infrastructure crisis, moving faster than the systems designed to contain it.
Source: NewsGuard Press Release — "NewsGuard Launches Real-time 'AI Content Farm' Detection Datastream to Counter Onslaught of AI Slop in News"
Winsome Marketing helps growth teams build content and paid media strategies that account for the realities of today's information environment. Talk to our experts at winsomemarketing.com.