Is AI Deliberately Inflating Word Counts to Charge You More?
There's a bunch of speculation on Reddit about this: the runaround of "are you sure," "approve this," and the endless outlines before you get an actual answer...
3 min read
Writing Team
Dec 29, 2025 8:00:00 AM
I've been having hallucination problems with Claude lately, at throw-my-laptop-out-the-window levels of frustration. It won't listen to me. It doesn't just make things up: it cites real sources and then invents information within them. The source exists, but the claims attributed to it don't.
That's been happening to me consistently over the last month. I've seen it get worse for me personally. So I asked the team: are you seeing this too, or is something configured wrong with my setup?
The answer was simple: which model are you using?
I was defaulting to Sonnet 4.5. That's Claude's default, and for a lot of tasks, it's great. It's fast. It's efficient. It handles most everyday requests well.
But when I switched to Opus, the hallucination problem mostly disappeared. Because Opus thinks harder. It takes more time to process. It's more careful about its outputs. It's less likely to confidently make things up.
That's not a bug in Sonnet. It's a feature trade-off. Sonnet prioritizes speed and efficiency. Opus prioritizes accuracy and depth. Different models for different use cases.
That's the key distinction: Opus spends more time and compute per response. Use it when accuracy matters more than speed.
Use Opus for: fact-checking, research-heavy content, complex analysis, situations where a hallucination would be costly, anything where you need citations to be accurate, strategic thinking that requires nuance, editing content that needs to maintain factual accuracy.
The trade-off is that Opus is slower and uses more tokens. It's more expensive to run. But if the alternative is spending an hour fact-checking AI-generated content that cited fake information from real sources, Opus is worth it.
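That cost trade-off is easy to put numbers on. Here's a minimal estimator; the per-million-token prices are illustrative assumptions for the example, not official rates, so check the current pricing page before relying on them:

```python
# Hypothetical per-million-token prices (input/output) -- assumptions, not official rates.
PRICING = {
    "sonnet": {"in": 3.00, "out": 15.00},
    "opus": {"in": 15.00, "out": 75.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough dollar cost of one request under the hypothetical prices above."""
    p = PRICING[model]
    return (input_tokens / 1_000_000) * p["in"] + (output_tokens / 1_000_000) * p["out"]

# A research-heavy job: 500k tokens in, 100k tokens out.
sonnet_cost = estimate_cost("sonnet", 500_000, 100_000)
opus_cost = estimate_cost("opus", 500_000, 100_000)
```

Under these example prices, the Opus run costs roughly five times the Sonnet run. Whether that premium beats an hour of manual fact-checking is the real calculation.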
Sonnet 4.5 is the default for a reason. For most tasks, it's the right balance of speed and capability.
Use Sonnet for: first drafts where you'll heavily edit anyway, brainstorming, reformatting content, generating variations, social media posts, meta descriptions, anything where speed matters more than perfect accuracy, tasks where you're providing the factual information and just need AI to structure it.
Sonnet is faster, cheaper, and perfectly adequate when you're using AI as a drafting tool rather than a research tool.
Here's something I do: whenever I doubt a claim Claude gives me, I go to ChatGPT and ask it to fact-check it against the internet and find the original source.
That usually works because ChatGPT can browse the web. Claude won't unless you explicitly tell it to, because of Anthropic's policies.
So if you're doing research or need current information, ChatGPT might be the better choice from the start. It can pull from the web in real-time. It can find sources. It can verify claims against current information.
Use ChatGPT for: current events, fact-checking against web sources, finding original sources for statistics, research that requires internet access, verifying claims in real-time.
The trade-off is that ChatGPT's web access can sometimes lead it to pull from low-quality sources. You still need to verify, but at least it's pulling from actual sources rather than hallucinating entirely.
Gemini 3 is killing everybody right now. Who would have thought? Google's AI was dead in the water for a while, and now it's the comeback story of 2025.
Gemini's advantage is integration with Google's ecosystem and data. If you're working with Google products, Google search data, or need that integration, Gemini might be your best bet.
The challenge is that Gemini is still evolving rapidly. Performance varies. But it's worth experimenting with, especially for tasks where Google's data advantage matters.
Here's what actually works: use the right model for the specific task, not the same model for everything.
Start with Sonnet for drafting. If you notice accuracy problems or need deeper thinking, switch to Opus. If you need current information or web sources, use ChatGPT. If you're deep in Google's ecosystem, try Gemini.
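That routing logic is simple enough to write down. A sketch of the workflow above, where the task categories and model labels are my own shorthand rather than official model identifiers:

```python
def pick_model(task: str) -> str:
    """Route a task to a model family. Categories and labels are
    illustrative shorthand, not official model names."""
    routes = {
        "draft": "sonnet",            # fast first drafts you'll heavily edit anyway
        "fact_check": "opus",         # accuracy-critical, citation-heavy work
        "web_research": "chatgpt",    # needs live internet access
        "google_ecosystem": "gemini", # leans on Google's data and products
    }
    # Default to the fast, cheap option and escalate when accuracy problems show up.
    return routes.get(task, "sonnet")
```

The point isn't the code, it's the habit: decide the model per task instead of letting the default decide for you.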
And always fact-check. It happens a lot when I'm verifying a fact Claude gave me: I'll follow the link to find the original, and either the page doesn't exist or the information on it is made up too. It's a full cycle of fabrication.
That's always been a problem, though. When you try to trace a statistic, you often end up in a circular trail of sites citing each other without ever reaching a reputable original source. AI makes it worse because it can cite confidently even when the source doesn't exist.
Claude's new model is coming in February or March—everyone's saying Claude 5. ChatGPT is working on their next one, Orion, coming in Q1. Gemini just released their latest version. The models are evolving fast.
That means the guidance changes. What's true about which model to use today might be different in three months. You have to stay current, test new releases, and adjust your workflow as capabilities change.
But the principle stays the same: different models have different strengths. Match the model to the task, and you'll get better outputs with less frustration.
Model choice matters more than most marketers realize. At Winsome Marketing, we help teams build AI workflows that match specific models to specific tasks—getting better outputs faster by understanding which tool actually fits the job.
Ready to optimize your AI workflow? Let's build a system that uses the right models for the right tasks.