4 min read
Writing Team | Sep 2, 2025
Ethan Mollick wants us to celebrate. More than a billion people now use AI chatbots regularly, with ChatGPT alone boasting "over 700 million weekly users." He calls this "Mass Intelligence"—an era where "powerful AI is becoming as accessible as a Google search." But before we break out the champagne, we should ask a harder question: When has giving a billion people simultaneous access to any transformative technology ever ended well? The answer should give us pause about whether we're witnessing democratization or weaponization of intelligence.
Mollick's enthusiasm for "Mass Intelligence" rests on a fundamental assumption: that access equals benefit. He celebrates how "free users can now run prompts that would have cost dollars just two years ago" and marvels that "the marginal cost of serving each additional user has collapsed." But cost reduction doesn't automatically translate to value creation—sometimes it just makes destructive behavior cheaper to scale.
Consider his own examples. He proudly demonstrates creating fake historical photographs, showing Neil Armstrong and Buzz Aldrin "sitting in their seats in a modern airplane" with an otter using a laptop. Mollick calls this "impressive" but also admits it represents "a distortion of a famous moment in history made possible by AI" and "a potential warning about how weird things are going to get." That cognitive dissonance—celebrating the same capability you acknowledge as problematic—captures the entire Mass Intelligence paradox.
When MIT research shows that 95% of AI pilots fail to deliver measurable business impact, and over 80% of AI projects fail entirely, what makes us think giving these tools to a billion untrained users will produce better outcomes? The failure rates suggest that even organizations with dedicated AI teams, clear objectives, and substantial budgets struggle to extract value from AI. Mass deployment doesn't solve the fundamental competency problem—it amplifies it exponentially.
Mollick argues that AI is "getting easy to use" because "these techniques" (the elaborate prompt-engineering tricks users once relied on) "don't really help anymore" and "powerful AI models are just getting better at doing what you ask them to." This misses the deeper issue: most people don't know what to ask for. Technical barriers were never the primary obstacle to AI adoption; conceptual ones were.
The organizations succeeding with AI aren't those with the best prompting techniques; they're those with clear problems worth solving and the domain expertise to evaluate AI outputs. When Accenture CEO Julie Sweet warns against treating AI as a "collaboration opportunity" rather than assigning technical accountability, she's highlighting the expertise gap that Mass Intelligence ignores.
Giving a billion people access to AI image generation doesn't create a billion graphic designers—it creates a billion people who can produce graphics without understanding composition, brand consistency, or visual communication principles. The result isn't democratized creativity; it's democratized mediocrity at scale.
Mollick acknowledges that AI image generators have "guardrails to limit misuse" and "invisible watermarks to identify AI images," but immediately notes that "much less restrictive AI image generators will likely get close to nano banana in quality in the coming months." This highlights the fundamental flaw in the Mass Intelligence thesis: the race to the bottom always wins.
The same economic forces making AI accessible—collapsed marginal costs and simplified interfaces—also make abuse trivially easy. When anyone can generate photorealistic images with simple text prompts, the epistemological foundation of visual evidence crumbles. We're not just giving people creative tools; we're giving them reality-manipulation capabilities without the institutional frameworks to handle the consequences.
Mollick wonders "How do we rebuild trust when anyone can fabricate anything?" but seems unconcerned that his celebrated Mass Intelligence era is precisely what makes this question urgent. The technology isn't neutral—widespread access to fabrication tools inherently degrades information reliability at societal scale.
The most troubling aspect of Mollick's analysis is his casual acknowledgment that "every institution we have — schools, hospitals, courts, companies, governments — was built for a world where intelligence was scarce and expensive." He treats this as an exciting challenge rather than an existential threat to social order.
Institutions weren't built around scarce intelligence—they were built around verified expertise, accountable decision-making, and quality control mechanisms. When courts rely on evidence authentication, schools assess genuine learning, and hospitals depend on medical judgment, Mass Intelligence doesn't enhance these systems—it threatens their foundational assumptions.
The recent surge in AI-generated academic papers, legal briefs written by chatbots, and medical advice from untrained AI systems shows what happens when powerful tools outpace institutional adaptation. We don't get enhanced human capability; we get institutional breakdown disguised as technological progress.
Mollick's excitement about AI efficiency improvements—"energy efficiency per prompt has improved by 33x in the last year alone"—misses the larger resource allocation question. Making AI cheaper doesn't make human judgment cheaper, and human judgment is often the bottleneck that matters most.
When he celebrates that AI can "outperform humans at a range of intellectual tasks," he's conflating task completion with problem-solving. AI can generate text, create images, and manipulate data faster than humans, but it can't determine which problems are worth solving, evaluate solutions in context, or take responsibility for outcomes. Mass Intelligence gives a billion people access to powerful task-completion tools while leaving the judgment and accountability problems unsolved.
Perhaps most tellingly, Mollick describes the current state as already chaotic: "Some people have intense relationships with AI models while other people are saved from loneliness. AI models may be causing mental breakdowns and dangerous behavior for some while being used to diagnose diseases of others." His response is essentially to shrug and say this will "only multiply as AI systems get more powerful."
This isn't technological optimism—it's abdication of responsibility disguised as inevitability. When a technology simultaneously saves lives and causes "mental breakdowns and dangerous behavior," the appropriate response isn't to celebrate wider deployment but to develop better safeguards and deployment strategies.
The AI companies themselves, Mollick notes, "seem to be as unable to absorb all of this as the rest of us are." If the organizations building these systems can't manage their implications, what makes us think billion-person deployment will somehow self-regulate into positive outcomes?
The fundamental flaw in Mass Intelligence advocacy is the assumption that the choice is between AI access and AI restriction. But there's a third option: thoughtful, gradual deployment that matches capability expansion with institutional adaptation and user education.
We don't need to choose between AI elitism and AI chaos. We can build systems that expand access while maintaining quality controls, that democratize tools while preserving expertise evaluation, that embrace innovation while protecting institutional integrity.
Mass Intelligence isn't inevitable—it's a choice. And based on current evidence of widespread AI project failures, institutional stress, and the chaos Mollick himself documents, it may be the wrong choice.
The question isn't whether powerful AI should be accessible, but whether accessibility without accountability, capability without competence, and scale without safeguards represents progress or regress. Mollick's celebration of Mass Intelligence suggests we're about to find out the hard way.
Navigate AI implementation strategically, not chaotically. Winsome Marketing's growth experts help you deploy AI capabilities that enhance rather than replace human judgment. Let's build systems that scale competence, not just access.