OpenAI, Oracle, and Vantage Data Centers announced this week they're building a new AI data center campus in Port Washington, Wisconsin—part of the Stargate initiative, a $500 billion infrastructure project designed to keep the U.S. competitive in the global AI race. The Wisconsin site joins another facility already under construction in Abilene, Texas, as part of a partnership that aims to deliver 10 gigawatts of data center capacity.
That's not incremental expansion. That's a fundamental thesis about the future of AI: whoever builds the biggest infrastructure wins.
The problem is that thesis might already be wrong.
Stargate is backed by OpenAI, Oracle, SoftBank, and Vantage Data Centers. The announced goal: $500 billion in investment to build 10 gigawatts of data center capacity over the next several years. For context, 10 gigawatts is roughly the output of 10 nuclear power plants, or enough electricity to power 7.5 million homes.
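Those comparisons hold up to a back-of-envelope check. Assuming a typical U.S. reactor produces about 1 gigawatt of electrical output and the average U.S. household draws roughly 1.2 kilowatts on average (about 10,500 kWh per year), both figures we've assumed here rather than taken from the announcement:

```python
# Sanity-check the 10 GW comparisons. Reactor output and household
# draw are rough assumptions, not figures from the Stargate announcement.
CAPACITY_GW = 10
REACTOR_OUTPUT_GW = 1.0     # typical large US reactor, electrical output
AVG_HOME_DRAW_KW = 1.2      # ~10,500 kWh/year / 8,760 hours

reactors = CAPACITY_GW / REACTOR_OUTPUT_GW
homes_millions = CAPACITY_GW * 1e6 / AVG_HOME_DRAW_KW / 1e6

print(f"~{reactors:.0f} reactor-equivalents")
print(f"~{homes_millions:.1f} million homes")
```

That lands at roughly 8 million homes, the same order of magnitude as the 7.5 million figure above. The point stands either way: this is utility-scale power consumption.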
The Port Washington campus is part of a 4.5-gigawatt build announced in July between OpenAI and Oracle. Texas is getting the first facility. Wisconsin is next. More sites are likely coming.
This is infrastructure at nation-state scale. And it's being built on the assumption that training bigger models on more data with more compute will continue to be the primary path to AI advancement. That assumption held true from GPT-2 to GPT-4. But the returns are flattening.
According to a 2024 analysis from Epoch AI, compute used in frontier AI training runs has been growing at 4-5x per year since 2010. But performance improvements—measured by benchmark accuracy gains per unit of compute—are decelerating. We're entering the zone where you need exponentially more infrastructure to get logarithmically better results.
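That dynamic falls straight out of power-law scaling. If benchmark loss falls as a power of training compute, as Chinchilla-style scaling laws suggest, then each fixed percentage improvement demands a fixed *multiple* of compute. A minimal sketch, using a hypothetical exponent of 0.05 (illustrative, not a measured constant):

```python
# Illustrative only: assumes loss follows a power law in compute,
# L(C) = a * C**(-alpha). alpha = 0.05 is a hypothetical small
# exponent chosen for the example, not an empirical fit.
alpha = 0.05

def compute_multiplier(loss_improvement_pct):
    """Compute multiple needed to cut loss by the given percentage."""
    target_ratio = 1 - loss_improvement_pct / 100   # L_new / L_old
    # L_new/L_old = (C_new/C_old)**(-alpha)
    # => C_new/C_old = target_ratio**(-1/alpha)
    return target_ratio ** (-1 / alpha)

for pct in (5, 10, 20):
    print(f"{pct}% lower loss needs ~{compute_multiplier(pct):,.0f}x the compute")
```

Under these assumptions, shaving 5% off the loss takes about 3x the compute, 10% takes about 8x, and 20% takes nearly 90x. The multipliers compound brutally, which is exactly why gigawatt-scale buildouts start to look like the only way to keep the curve moving.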
Stargate is a bet that brute-force scaling still works. The evidence says we're approaching the point where it doesn't.
While OpenAI and Oracle are building gigawatt-scale data centers, other AI labs are moving in the opposite direction. Anthropic, Mistral, and Meta have all published research showing that smaller, more efficient models—trained on higher-quality data with better architectures—can match or exceed the performance of larger models at a fraction of the cost.
Meta's Llama 3.1, for example, performs comparably to GPT-4 on many benchmarks despite being an openly released model that anyone can fine-tune and serve themselves. Anthropic's Claude models emphasize reasoning efficiency and safety over raw parameter count. Mistral has built a business around open-weight models small enough to run on consumer hardware.
The efficiency play isn't just academic—it's economic. AI inference costs (the cost of actually running models after training) are growing faster than training costs, and companies are increasingly prioritizing models that deliver performance per dollar rather than raw capability.
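A toy model shows why the economics tilt this way. All figures below are invented for the example (not real OpenAI or Meta numbers): a one-time $100M training run, served at $2 per thousand queries, at 500 million queries a day:

```python
# Hypothetical illustration of training vs. inference economics.
# Every figure here is invented for the example.
TRAINING_COST = 100e6          # one-time training run, $100M
COST_PER_1K_QUERIES = 2.0      # inference cost, $ per 1,000 queries
QUERIES_PER_DAY = 500e6        # 500M queries/day at consumer scale

daily_inference = QUERIES_PER_DAY / 1000 * COST_PER_1K_QUERIES
days_to_overtake = TRAINING_COST / daily_inference

print(f"Daily inference spend: ${daily_inference:,.0f}")
print(f"Inference spend passes training cost after {days_to_overtake:.0f} days")
```

On these numbers, cumulative inference spend overtakes the entire training budget in about three months. A model that's twice as efficient at inference halves that bill forever; a training run that's twice as big is a one-time cost. That asymmetry is the efficiency camp's whole argument.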
Stargate represents the old paradigm: build the biggest training infrastructure, assume scale solves everything, and hope the applications justify the investment. But if the bottleneck has shifted from compute to data quality, architectural innovation, and inference efficiency, then building 10 gigawatts of capacity is building toward yesterday's problem.
SoftBank's involvement in Stargate is worth scrutinizing. This is the same firm that bet $100 billion on WeWork, Uber, and a portfolio of startups that collapsed or underperformed. SoftBank's Vision Fund strategy has always been to flood capital into markets and hope scale creates monopolies.
AI infrastructure is the new target. But unlike ride-sharing or co-working, AI doesn't have natural monopoly dynamics. Model performance is converging across labs. Open-source alternatives are proliferating. Switching costs are low. And the competitive moat isn't infrastructure—it's the quality of the model, the speed of iteration, and the ability to deploy efficiently.
SoftBank's track record suggests they're better at identifying trends than picking winners. Stargate might be a correct read on AI's importance and a catastrophically expensive bet on the wrong bottleneck.
The geographic choices here matter. Port Washington, Wisconsin, and Abilene, Texas, aren't tech hubs. They're politically strategic locations in swing states with favorable energy policies and available land.
Wisconsin is a battleground state with a manufacturing economy looking for next-generation industries. Texas has deregulated energy markets and surplus renewable capacity. Both offer tax incentives for data center development. This is economic development theater as much as technical strategy.
The risk is that political considerations are driving infrastructure decisions that should be driven by proximity to talent, energy reliability, and network latency. Building data centers in the Midwest makes sense if the goal is job creation and political optics. It makes less sense if the goal is optimal AI development.
Stargate isn't just about building data centers. It's a geopolitical statement: the U.S. intends to dominate AI infrastructure the way it dominated oil refining in the 20th century. The project is designed to outpace China's AI investments, maintain U.S. leadership in model training, and ensure that the next generation of AI breakthroughs happens on American soil.
That's a defensible strategic goal. The question is whether gigawatt-scale training infrastructure is still the right lever. If the next breakthroughs come from algorithmic efficiency, synthetic data, or inference-time reasoning rather than bigger training runs, then Stargate is building a Maginot Line—impressive, expensive, and irrelevant.
To be fair, OpenAI has access to data the rest of us don't. If they're committing to $500 billion in infrastructure, they likely have internal results showing that GPT-6, GPT-7, and beyond still require massive compute scaling. The company's entire business model depends on maintaining a capability lead over competitors, and that lead has historically come from training larger models.
It's possible that reasoning models, multimodal systems, and long-context architectures still scale predictably with compute—and that Stargate is the only way to maintain that trajectory. If true, this is a correct and necessary investment.
But it's also possible that OpenAI is locked into a sunk-cost fallacy, doubling down on scaling because it's the only strategy they know how to execute, even as diminishing returns make the economics untenable.
The ultimate question isn't whether Stargate gets built—it will. The question is whether the applications enabled by that infrastructure generate enough economic value to justify the cost.
$500 billion is on the order of the annual GDP of Belgium. If AI doesn't create transformative productivity gains, automate massive labor categories, or unlock entirely new industries, then Stargate is the most expensive infrastructure bet in tech history, and it's going to lose.
We'll know the answer in about five years. By then, either Stargate will look like visionary infrastructure planning, or it'll look like the AI industry's equivalent of fiber optic cable overbuilding in the late 1990s—billions invested in capacity that no one needed.
The smart money says it's somewhere in between: useful, expensive, and not nearly as decisive as the backers hope.
If your team is trying to separate AI infrastructure decisions from AI theater, we can help. Winsome Marketing works with growth leaders to build strategies based on what actually works—not what sounds impressive in press releases. Let's talk.