The $200 Brain Tax: Why ChatGPT's "Juice 200" Is Intelligence Inequality in Action
Welcome to the age of cognitive castes. OpenAI just rolled out what they're calling "thinking effort" levels in ChatGPT—a euphemistic slider that...
4 min read
Writing Team · Nov 14, 2025 7:00:01 AM
Boaz Barak, a computer scientist at Harvard, just published an economic analysis that should terrify and excite in equal measure. His central finding: AI capabilities are doubling roughly every six months, and if that trend holds, the critical variable isn't whether AI will transform the economy but how fast we go from modest impact to explosive growth.
Barak's analysis of METR's benchmark data shows that the time horizon for tasks AI can complete successfully has been growing exponentially, with flagship models doubling their task complexity every 6-7 months. The implications are staggering: it could take just two years to go from AI automating 50% of tasks in an industry to automating 97%.
That's not incremental disruption—that's a phase change. And unlike previous automation waves that progressed linearly over decades, AI's exponential capability growth could compress 80 years of economic transformation into a single decade.
The most striking visualization in Barak's analysis comes from METR's research on AI task completion. The X-axis tracks model release dates. The Y-axis measures the time it takes humans to complete tasks that AI models can solve with 50% success rates—plotted on a logarithmic scale. The relationship is remarkably linear, meaning capabilities are growing exponentially.
GPT-5 can handle tasks that take humans over two hours. If the 6-month doubling time holds, models released in 2026 could tackle 8-hour tasks. By 2027, multi-day projects. By 2028, week-long workflows. The intercept—the absolute duration AI can handle—is uncertain and affected by reliability requirements, benchmark bias, and the "messiness tax" of real-world deployment.
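The projection arithmetic is simple compounding. A back-of-envelope sketch, where the base horizon and doubling time are the article's assumed figures rather than measurements:

```python
# Illustrative projection of AI task horizons under a fixed doubling time.
# BASE_HORIZON_HOURS and DOUBLING_MONTHS are assumptions from the article.
BASE_HORIZON_HOURS = 2.0   # ~GPT-5-class models, tasks at a 50% success rate
DOUBLING_MONTHS = 6        # METR-style doubling time, assumed constant

def horizon_hours(months_from_now: float) -> float:
    """Task horizon after `months_from_now`, doubling every DOUBLING_MONTHS."""
    return BASE_HORIZON_HOURS * 2 ** (months_from_now / DOUBLING_MONTHS)

for months in (0, 12, 24, 36):
    print(f"+{months:2d} months: ~{horizon_hours(months):6.1f} hours")
```

Twelve months out that gives 8-hour tasks; twenty-four months gives 32 hours (multi-day); thirty-six months gives 128 hours (week-long workflows). The uncertainty is in the intercept, not the slope.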
But the slope is robust across multiple benchmarks and reliability thresholds. That's the terrifying part: we have significant uncertainty about when specific milestones arrive, but much less uncertainty about the rate of progress.
Barak identifies factors that could slow or accelerate this trajectory. Exponential inputs like compute, training data, and capital investment can't grow exponentially forever—tautologically, sustaining exponential growth becomes exponentially harder. Physical robotics may face manufacturing bottlenecks even if capabilities improve at software-like rates.
But threshold effects could work the other way: once AI crosses certain capability levels tied to human work patterns—daily tasks, weekly sprints, quarterly projects—it might suddenly simulate arbitrary combinations of human labor for arbitrary durations. And if AI begins automating AI research itself (recursive self-improvement), the doubling time could collapse. We don't know which forces dominate. We just know the current trajectory is exponential, and exponentials always look manageable until suddenly they don't.
Barak's most provocative claim is that AI could break the 2% annual GDP growth rate that has held steady in the U.S. for 150 years—through electrification, automobiles, computers, and the internet. None of those technologies changed the growth trajectory. Why would AI be different? Because previous technologies automated specific tasks but left humans as the bottleneck.
Baumol's cost disease describes this perfectly: computers got exponentially faster, but economic growth stayed linear because human productivity in service sectors (education, healthcare, management) didn't scale. AI potentially removes that bottleneck by automating cognitive labor itself. If AI automates 30% of the economy (a conservative estimate for cognitive work), output multiplies by 1/(1 − 0.3) ≈ 1.43, a roughly 43% increase in GDP. Spread over a decade, that corresponds to about 3.6% annual growth, nearly double the historical rate.
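The arithmetic behind those numbers, using the article's assumed 30% automation share:

```python
# Back-of-envelope GDP arithmetic (assumed inputs from the article).
automated_share = 0.30                      # fraction of economic tasks automated
gdp_multiplier = 1 / (1 - automated_share)  # ≈ 1.43, i.e. ~43% more output
years = 10
annual_growth = gdp_multiplier ** (1 / years) - 1  # ≈ 3.6% per year, compounded

print(f"GDP multiplier: {gdp_multiplier:.2f}")
print(f"Implied annual growth over {years} years: {annual_growth:.1%}")
```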
But Barak's analysis goes further. Using Benjamin Jones' model of automation and substitution effects, he shows that transformative growth (10x productivity, comparable to the Industrial Revolution) requires both increasing AI productivity (λ) and shrinking the fraction of unautomated tasks (ρ).
The harmonic-mean structure of Jones' model means that even if AI becomes infinitely productive at automated tasks, aggregate gains are capped by the unautomated ones. If 25% of tasks remain unautomated, productivity can reach at most 4x, the reciprocal of the unautomated share.
The critical question is whether ρ shrinks exponentially. If AI automates 75% of remaining tasks each year (consistent with 6-month doubling times), we reach transformative growth in under two years. If it's a more conservative 9% annually, it takes one to two decades. Either scenario is unprecedented—historical automation has been linear, with single-digit percent annual increases. AI's exponential capability growth suggests we're heading for something categorically different.
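A minimal sketch of both dynamics, assuming a simplified harmonic-mean aggregate (humans at unit speed on unautomated tasks, AI at speed λ on the rest) and a fixed fraction of remaining tasks automated each year. Note the conservative-case timeline is sensitive to how much of the economy is already automated at the start:

```python
import math

# Simplified harmonic-mean aggregate in the spirit of Jones' task model
# (a sketch; the real model carries more structure).
def productivity(lam: float, rho: float) -> float:
    """Aggregate productivity: fraction rho of tasks at human speed 1, rest at speed lam."""
    return 1.0 / (rho + (1.0 - rho) / lam)

# Even near-infinitely productive AI is capped by the unautomated tasks:
print(productivity(lam=1e9, rho=0.25))   # ~4.0, the 1/rho ceiling

def years_to_10x(annual_automation_rate: float, rho0: float = 1.0) -> float:
    """Years until 1/rho >= 10, automating a fixed share of remaining tasks yearly."""
    keep = 1.0 - annual_automation_rate   # fraction left unautomated each year
    return math.log(0.1 / rho0) / math.log(keep)

print(years_to_10x(0.75))        # aggressive case: under two years
print(years_to_10x(0.09, 0.7))   # conservative case, 30% already automated: ~two decades
```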
One underappreciated variable in Barak's analysis is cost decay. While extending AI capabilities is expensive, replicating existing capabilities gets 10x cheaper annually. Once a job is automated, within a year the cost becomes negligible. This creates a ratchet effect: every capability threshold crossed becomes trivially cheap to deploy at scale. For marketing teams, this has immediate implications.
Tasks that AI struggles with today—complex multi-stakeholder campaigns, brand strategy synthesis, creative direction—will become automated eventually. When they do, the cost to deploy that automation will drop so fast that competitive advantage accrues to early adopters who build infrastructure to absorb and scale these capabilities. The teams that win won't be the ones with the best prompts—they'll be the ones with systems designed to integrate AI capabilities as they become available and economically viable.
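The ratchet is just compounding decay. A toy sketch with a hypothetical starting cost and the article's assumed 10x annual decline:

```python
# Cost ratchet: extending the frontier is expensive, but replicating an
# already-automated capability decays ~10x per year (assumed rate, illustrative).
def replication_cost(initial_cost: float, years_since_automated: float,
                     annual_decay: float = 10.0) -> float:
    """Cost to deploy an already-automated capability `years_since_automated` later."""
    return initial_cost / annual_decay ** years_since_automated

cost0 = 100.0  # hypothetical cost (arbitrary units) when a task is first automated
for yr in range(4):
    print(f"year {yr}: {replication_cost(cost0, yr):.3f}")
```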
Barak frames AI's economic contribution as injecting N(t) new "workers" with quality Q(t) into the labor force each year, where quality corresponds to the fraction of economically useful tasks they can perform. If both N and Q double annually (so their product quadruples each year), then once AI provides a non-trivial number of workers—say 100,000—it becomes the dominant labor source within a decade.
Under Cobb-Douglas production assumptions, doubling the workforce increases GDP by ~50%. But if AI "workers" scale to trillions (which is technically feasible if costs keep falling), traditional economic models break. We don't have frameworks for economies where labor is functionally infinite and capital becomes the binding constraint. Barak doesn't claim to predict outcomes in that regime—he's just pointing out that current trends lead there faster than we're prepared for.
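The ~50% figure follows from a standard Cobb-Douglas production function with a labor share around 0.6, a conventional but assumed value:

```python
# Cobb-Douglas sketch: Y = K^(1-alpha) * L^alpha, with labor share alpha.
ALPHA = 0.6   # assumed labor share of income (roughly the US value)

def output(K: float, L: float, alpha: float = ALPHA) -> float:
    """Output with capital K and effective labor L."""
    return K ** (1 - alpha) * L ** alpha

# Doubling effective labor (AI "workers") while holding capital fixed:
baseline = output(K=1.0, L=1.0)
doubled = output(K=1.0, L=2.0)
print(f"GDP gain from doubling labor: {doubled / baseline - 1:.0%}")
```

With alpha = 0.6 the gain is 2^0.6 − 1 ≈ 52%, in line with the ~50% figure above; the point of the trillions-of-workers regime is precisely that fixed-capital assumptions like this one stop applying.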
Barak deliberately avoids predicting when AI will cross specific capability thresholds, focusing instead on how fast things change once they do. That's the right analytical move. Economists like Daron Acemoglu predict AI-driven growth at 0.1% annually. Goldman Sachs estimates 1.5%. GDP doubling over a decade (7% annual growth, with AI contributing 5%) is 50x Acemoglu's estimate and triple Goldman's.
Who's right? Probably no one—the uncertainty is genuinely high. But Barak's analysis suggests the distribution of outcomes is wildly skewed. The median scenario might be modest growth. But the tail risk is explosive growth compressed into timelines much shorter than historical automation waves. For businesses and policymakers, preparing for the median is rational. Ignoring the tail is catastrophic.
Ready to build infrastructure that scales with AI capabilities instead of fighting them? Winsome Marketing's growth experts help teams architect systems that absorb exponential progress without constant rebuilds. Let's talk.