3 min read
Writing Team · Oct 20, 2025
Let's talk about the most expensive math homework mistake in tech history.
OpenAI briefly announced that GPT-5 had cracked ten problems from the legendary Erdős collection—a database of 1,103 mathematical challenges that have stumped the world's best mathematicians for decades. Only 37% have ever been solved. For a moment, it looked like artificial general intelligence had arrived with a résumé that could get tenure at Princeton.
Then the claim collapsed faster than a poorly trained neural network. GPT-5 hadn't solved anything. It had just located existing solutions already published in academic papers. DeepMind CEO Demis Hassabis called the incident "embarrassing." OpenAI researcher Sébastien Bubeck clarified that the model had essentially performed very sophisticated search, not mathematical reasoning. The difference between finding an answer and deriving one is the difference between Google and Gauss.
Here's the thing: this would be a charming academic footnote if OpenAI weren't currently trying to convince investors it can grow revenue from $13 billion to $100 billion in three years. When you're asking for that kind of faith—and that kind of capital—you can't afford to confuse retrieval with reasoning. Especially not publicly. Especially not when your chief competitor is the one pointing out the error.
But let's pivot to the part of this story that actually matters for anyone not pursuing a Fields Medal: OpenAI's plan to cut up to 30% off its $35 billion accelerator spend through custom Broadcom chips.
On paper, this is smart business. Why pay Nvidia's markup when you can design silicon optimized specifically for your models? Custom chips could theoretically reduce inference costs while improving performance for specific workloads. Google did it with TPUs. Amazon built Trainium and Inferentia. Apple designs its M-series chips in-house. The precedent exists.
The problem is execution risk. Bespoke silicon requires massive upfront investment, multi-year design cycles, and extremely accurate predictions about future model architectures. If your models evolve faster than your chip design process—which they almost certainly will—you've just spent billions on hardware optimized for last year's paradigm. Semiconductor industry research shows that custom AI chip development typically takes 3-5 years from specification to deployment, with costs easily exceeding $500 million before fabrication even begins.
OpenAI is betting it can compress that timeline while simultaneously scaling revenue tenfold, advancing model capabilities, and keeping its own research from obsoleting its chips. The margin for error is approximately zero.
Here's where we get contrarian: maybe none of this matters as much as everyone thinks.
The Erdős fiasco is embarrassing, sure. But it's embarrassing the way any overeager press release is embarrassing—it reveals poor internal communication and excessive confidence, but it doesn't actually invalidate the underlying technology. GPT-5 can still do remarkable things with language, reasoning, and synthesis. It just can't do novel mathematics, which, let's be honest, wasn't the killer feature most enterprise customers were asking for anyway.
The hardware bet is risky, but it's also necessary. If OpenAI doesn't control more of its stack, it remains perpetually dependent on Nvidia's pricing and supply constraints. According to recent supply chain analyses, AI chip demand is projected to outpace supply through at least 2027, meaning anyone without vertically integrated hardware faces both cost and availability risks. Building custom chips isn't just about saving money—it's about survival.
And the revenue projections? They're either going to happen or they're not, and frankly, even hitting half those targets would make OpenAI one of the fastest-growing software companies in history. The bar for "success" here is absurdly high, which means "failure" could still mean building a $40-50 billion annual revenue business. Most companies would take that outcome.
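The arithmetic behind that framing is easy to check. A back-of-envelope sketch (the dollar figures come from this article; the compound-growth formula is standard):

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by growing from `start` to `end` over `years`."""
    return (end / start) ** (1 / years) - 1

# Figures from the article: $13B today, $100B target in three years.
full_target = implied_cagr(13, 100, 3)  # ~0.97, i.e. roughly 97% growth per year
half_target = implied_cagr(13, 50, 3)   # ~0.57, i.e. still ~57% per year

print(f"{full_target:.0%}, {half_target:.0%}")  # prints "97%, 57%"
```

Even the "failure" scenario implies sustaining roughly 57% annual growth for three straight years, which is why half-missing the target would still be a historically fast-growing business.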
What OpenAI actually needs isn't better math or cheaper chips. It's narrative discipline.
When you're operating at this scale, with this much scrutiny, with this many stakeholders parsing every announcement, you can't afford unforced errors. The Erdős claim was an unforced error. Claiming mathematical breakthroughs that turn out to be retrieval capabilities is the AI equivalent of a fintech startup claiming it "solved fraud" when it just implemented two-factor authentication.
The truth is, OpenAI doesn't need to solve Erdős problems to justify its valuation. It needs to solve enterprise workflow problems, advertising attribution problems, customer service scaling problems—the kind of messy, domain-specific challenges that actually generate revenue. Pure mathematics is beautiful and important, but it's not a business model.
For marketers and growth leaders, the lesson here is about claims verification in vendor relationships. When an AI company announces a capability breakthrough, ask: is this solving new problems or finding existing solutions? Is this reasoning or retrieval? Is this a technical advance or a search optimization? The difference determines whether you're investing in genuine capability or just sophisticated autocomplete.
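One crude way to operationalize the retrieval question (a minimal sketch; the corpus, threshold, and function name here are hypothetical illustrations, not any published tool): check whether a vendor's claimed "novel" output is a near-duplicate of already-published material.

```python
from difflib import SequenceMatcher

def looks_like_retrieval(claimed_answer: str, known_sources: list[str],
                         threshold: float = 0.8) -> bool:
    """Flag a 'novel' answer that closely matches already-published text.

    A high similarity ratio to a known source suggests the model located
    an existing solution rather than deriving a new one.
    """
    return any(
        SequenceMatcher(None, claimed_answer.lower(), source.lower()).ratio() >= threshold
        for source in known_sources
    )

# Hypothetical illustration: the "breakthrough" matches a published paper verbatim.
published = ["The bound follows from the Cauchy-Schwarz inequality applied to the sum."]
claim = "The bound follows from the Cauchy-Schwarz inequality applied to the sum."

print(looks_like_retrieval(claim, published))  # True: retrieval, not reasoning
```

Real verification would need a literature search rather than a string match, but the design point stands: a claimed breakthrough that scores high against the existing corpus deserves a harder look before it shapes a buying decision.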
OpenAI remains the most capable language model provider in the market. Its technology works for real use cases. Its API powers thousands of products. But watching the company navigate the gap between research achievements and business requirements is instructive. The path from $13 billion to $100 billion doesn't run through the Erdős problem database. It runs through boring, profitable applications that work reliably at scale.
The hardware strategy will either pay off spectacularly or become a very expensive learning experience. The revenue targets will either rewrite tech industry growth records or get quietly revised downward. But the technology itself? That's already proven. Now OpenAI just needs to prove it can execute on business fundamentals without stumbling over its own press releases.
Which, given the stakes, should be the easiest problem to solve. It just requires less math and more discipline.
Need AI strategies grounded in capability rather than hype? Winsome Marketing's growth experts help you separate signal from noise in vendor evaluations.