4 min read
Writing Team : Jun 12, 2025 8:00:01 AM
Sam Altman has once again graced us with his cosmic wisdom, declaring that humanity has crossed the "superintelligence event horizon" and entered a "gentle singularity." According to OpenAI's prophet-in-chief, we're now living in the early stages of digital superintelligence, with ChatGPT's 800 million weekly users serving as evidence of our glorious AI-powered future.
One small problem: the AI he's calling "superintelligent" still makes stuff up roughly 30% of the time and thinks Mahatma Gandhi used Gmail to organize resistance movements.
But sure, let's call it superintelligence.
The Event Horizon of Hype
"We are past the event horizon; the takeoff has started," Altman proclaimed in his latest blog post, which reads like a cross between a TED talk and a venture capital pitch deck. "Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be."
Translation: "Please ignore the fact that our AI still confidently tells users that Great Danes are larger than Mini Cooper cars because it doesn't understand physical reality. Focus on the 800 million users!"
Altman's definition of this supposed superintelligence threshold? ChatGPT "now outpaces any human who has ever lived." Which is a remarkable claim for a system that, in a Stanford study of legal queries, invented over 120 non-existent court cases, complete with convincingly realistic names like "Thompson v. Western Medical Center (2019)."
But hey, at least it invented them really fast.
While Altman is busy announcing our superintelligent future, AI hallucinations are getting worse, not better. According to OpenAI's own tests, ChatGPT's reasoning improvements have come with increased hallucination rates. The better it gets at reasoning, the more confident it becomes about completely fabricated information.
Research shows that $67.4 billion was lost globally in 2024 due to hallucinated AI output, with 47% of enterprise AI users making at least one major decision based on false information. Even Google's best model, Gemini-2.0-Flash-001, still hallucinates in 0.7% of responses, while models like TII's Falcon-7B-Instruct fabricate nearly 30%.
But sure, we've achieved superintelligence. The kind that confidently tells you Gandhi had a Gmail account.
Let's take a moment to appreciate Altman's previous prophecies. In January 2025, he confidently declared that "we may see the first AI agents 'join the workforce' and materially change the output of companies." By 2026, he predicted systems that can "figure out novel insights," and by 2027, "robots that can do tasks in the real world."
Meanwhile, companies like Klarna, which famously set out to be OpenAI's "favorite guinea pig" for replacing human workers with AI, quietly reversed course and hired additional support staff because customers actually wanted to talk to real people. Turns out superintelligence still can't handle "Can you help me with my refund?" without supervision.
Altman's 2021 essay "Moore's Law for Everything" gets praised for "accurately predicting" AI developments, but predicting that computer processing would get better and AI would improve is like predicting that smartphones will have better cameras next year. It's not prophetic—it's observing obvious technological trends.
Here's the beautiful thing about Altman's superintelligence claims: the definition keeps changing. AGI used to mean artificial general intelligence—systems that could match human cognitive abilities across domains. Now it's become what Altman himself calls "a very sloppy term" that apparently includes any AI system that gets popular enough.
OpenAI historically defined AGI as "a highly autonomous system that outperforms humans at most economically valuable work." But when your system still needs human oversight to avoid telling people that drinking bleach cures COVID, maybe we should pump the brakes on the superintelligence victory lap.
The real tell is in OpenAI's own contradictions. The company previously wrote that they "don't have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue." Yet now Altman claims we've already crossed into superintelligence territory. Which is it—are we superintelligent or do we still not know how to control what we've built?
Let's be real about what's happening here. OpenAI is reportedly closing on a $6 billion investment round valuing the company at $150 billion. Nothing sells like superintelligence dreams to venture capitalists who've never actually used AI for anything more complex than writing marketing copy.
Altman's essay, like Marc Andreessen's "Techno-Optimist Manifesto," promises a tide of technological prosperity so massive it will "sweep away all humanity's social and political problems." It's the same utopian pitch that's accompanied every major tech wave from the steam engine to the internet.
As Keach Hagey, Altman's biographer, notes: "I see the doomers and the boomers feeding off each other and being part of the same sort of hype universe." The real opposite of superintelligence apocalypse isn't superintelligence utopia—it's the boring reality that we've built "another way to waste time on the internet" that sometimes gets things right.
Altman describes our current moment as a "gentle singularity"—a gradual transition toward superintelligence rather than sudden change. But there's nothing gentle about losing $67 billion to AI hallucinations or making business decisions based on fabricated data.
The most honest thing Altman said was: "This is how the singularity goes: wonders become routine, and then table stakes." He's right—we've become so accustomed to AI making things up that we barely notice it anymore. Hallucinations are now just the price of doing business with "superintelligent" systems.
Meanwhile, AI legal expert Damien Charlotin tracks over 30 instances per month where lawyers have used AI-generated evidence that turned out to be completely fabricated. Air Canada was ordered to honor a bereavement fare policy that existed only in its chatbot's imagination.
But apparently, this is superintelligence in action.
Sam Altman has mastered the art of rebranding limitations as features. Can't control AI? Call it a "gentle singularity." Systems hallucinate constantly? That's just the price of superintelligence. Users adapt to AI's failures? That proves how transformative the technology is.
The real superintelligence move would be building AI that actually works reliably before declaring victory over human intelligence. But that's harder than writing blog posts about event horizons and cosmic significance.
We haven't passed any superintelligence threshold. We've just gotten really good at accepting that our "superintelligent" systems are confidently wrong about basic facts, and calling that progress.
Sam Altman's superintelligence isn't a technological breakthrough—it's a marketing campaign with an API. And apparently, that's enough to convince investors it's worth $150 billion.
The singularity isn't coming. It's just another day in Silicon Valley, where hype is the only technology that consistently exceeds expectations.
Ready to cut through AI hype and build marketing strategies that actually work? Our growth experts at Winsome Marketing know the difference between superintelligent tools and super-expensive toys. Let's talk about leveraging AI realistically—without the cosmic promises.