
What If AI Could Run on Noise Instead of Power?


Every Google search you run uses enough energy to power a 6-watt LED for 3 minutes. Scale that by billions of daily queries across every AI system running globally, and the energy math becomes one of the more uncomfortable facts about the technology industry's current trajectory. Researchers at Lawrence Berkeley National Laboratory just published work in Nature Communications that points toward a genuinely different approach — one that doesn't fight the fundamental physics of computing but instead works with it.

The field is called thermodynamic computing, and the core idea inverts everything classical computers are built on.

The Problem With How Computers Currently Handle Noise

Every electronic device contends with thermal noise — the constant vibration of electrons within conductive materials. Classical computers solve this problem by brute force: they operate at energy scales thousands of times larger than the noise level, essentially drowning it out to produce consistent, reliable outputs. It works. It also requires enormous amounts of power, and it places a hard floor on how energy-efficient conventional computing can ever become.

Quantum computing takes a different approach but faces the same fundamental adversary — thermal noise disrupts quantum states, which is why most quantum systems require cooling to temperatures near absolute zero to function at all.

Thermodynamic computing does neither. Instead of suppressing thermal noise, it uses it as the power source. The idea is to build physical devices operating at energy scales comparable to thermal fluctuations, then program them so that the system's natural, noise-driven motion over time performs useful computation. As Berkeley Lab researcher Stephen Whitelam describes it: "Classical and quantum computing fight noise; thermodynamic computing is powered by it."


What the Berkeley Lab Research Actually Solved

Until this paper, thermodynamic computing faced two significant barriers that kept it theoretical rather than practical.

The first was an equilibrium problem. Existing thermodynamic computers could only perform calculations after reaching their lowest-energy state — and the time required to get there was unpredictable and often impractically long. The second was a scope problem: thermodynamic computing could only handle linear algebra, which ruled out the nonlinear calculations that neural networks, and therefore machine learning, depend on.

Whitelam and his colleague Corneel Casert of NERSC addressed both. Using digital simulations, they demonstrated that when the components of a thermodynamic computer are themselves nonlinear, the system can perform nonlinear computations at specified times without waiting for equilibrium. That makes it behave more like a classical computer in terms of timing, while still drawing power from thermal fluctuations rather than external energy sources.

The breakthrough in nonlinearity matters specifically because it brings thermodynamic computing into machine learning territory. Nonlinearity is what gives neural networks their expressive power — their ability to model complex, non-obvious relationships in data. A thermodynamic circuit that can replicate that property is, in principle, a thermodynamic neural network.
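To make that point concrete, here is a minimal sketch — our own illustration, not anything from the paper — of why nonlinearity is the source of that expressive power. A stack of purely linear layers collapses to a single linear map, and no linear map can compute XOR; one ReLU nonlinearity is enough to do it exactly.

```python
# Illustrative sketch only (not from the paper): why nonlinearity matters.
# Any stack of purely linear layers collapses to a single linear map, and no
# linear map can compute XOR. A single ReLU nonlinearity handles it exactly.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def xor_net(x1, x2):
    # Hidden layer: two ReLU units; output layer: a fixed linear combination.
    h1 = relu(x1 + x2)          # fires on (0,1), (1,0), (1,1)
    h2 = relu(x1 + x2 - 1.0)    # fires only on (1,1)
    return h1 - 2.0 * h2        # yields 0, 1, 1, 0 across the four inputs

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Drop the ReLU and the two hidden units add back up to a plain weighted sum of the inputs, which can never produce the XOR pattern no matter how the weights are chosen.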

How You Train a Computer That Runs on Randomness

A system this inherently stochastic, where no two runs look identical, can't be trained with the standard backpropagation methods used for conventional neural networks. Casert built a different approach entirely, running massively parallel evolutionary simulations across 96 GPUs on the Perlmutter supercomputer at NERSC.

The method is called a genetic algorithm. Start with a population of different thermodynamic neural network configurations. Evaluate each one's performance. Select the best performers, introduce random mutations to their parameters, and evaluate again. Repeat across billions of noisy trajectories per generation. The team ultimately simulated more than a trillion runs of a thermodynamic computer to find effective network parameters.
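As a rough illustration of that loop — a toy sketch in the same spirit, not the authors' actual code — here is a minimal genetic algorithm. A noisy quadratic fitness function stands in for the stochastic thermodynamic simulations: each candidate parameter set is scored by averaging over many noisy runs, the best performers are kept, mutated copies form the next generation, and the loop repeats.

```python
# Minimal genetic-algorithm sketch (our own toy version, not the authors' code).
# The real work scored thermodynamic neural networks over billions of noisy
# trajectories on 96 GPUs; here a noisy quadratic stands in for that evaluation.
import numpy as np

rng = np.random.default_rng(42)
TARGET = np.array([0.3, -1.2, 0.8])          # unknown "good" parameters to recover

def noisy_fitness(params, n_trajectories=64):
    # Average performance over many noisy runs, mimicking a stochastic device.
    noise = rng.normal(scale=0.5, size=(n_trajectories, params.size))
    errors = np.sum((params + noise - TARGET) ** 2, axis=1)
    return -errors.mean()                     # higher is better

def evolve(pop_size=100, n_keep=10, n_generations=200, mutation_scale=0.1):
    population = rng.normal(size=(pop_size, TARGET.size))
    for _ in range(n_generations):
        scores = np.array([noisy_fitness(p) for p in population])
        elite = population[np.argsort(scores)[-n_keep:]]          # keep the best
        parents = elite[rng.integers(0, n_keep, size=pop_size)]   # resample parents
        population = parents + rng.normal(scale=mutation_scale,   # mutate
                                          size=parents.shape)
    return elite[-1]

best = evolve()
print("recovered parameters:", np.round(best, 2))   # should land near TARGET
```

The structure is the same as the process described above; the difference is purely one of scale, with the fitness evaluation replaced by full simulations of a thermodynamic neural network.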

The training cost is significantly higher than with conventional methods. That's the honest tradeoff. But once a thermodynamic neural network is trained and built as physical hardware, the energy required to run inference — to actually use the model — drops dramatically. The expensive part moves from operation to construction, which is a meaningful shift for systems running continuously at scale.

Why This Matters Beyond the Lab

For anyone tracking AI infrastructure and sustainability, the energy consumption of machine learning is not a background issue. It is increasingly a first-order constraint on how AI scales, where data centers can be built, and what the technology's long-term environmental footprint looks like. Thermodynamic computing doesn't solve that problem today — the research is explicit that hardware realization and new algorithms are the necessary next steps, and the team is actively seeking experimental partners to make both real.

But the conceptual contribution is significant. The assumption baked into every current AI system is that computation requires fighting the physical properties of the materials on which it runs. This research demonstrates that the assumption isn't fixed — that a fundamentally different architecture is possible, one in which the physics works with the system rather than against it.

For marketing and growth teams whose operations increasingly depend on AI infrastructure, the practical implication is still years away. The relevant question to track isn't when thermodynamic computing arrives but whether the energy economics of AI become a constraint on your stack before it does. For large-scale operations running inference continuously, that constraint is arriving faster than most roadmaps account for.

The paper was published in Nature Communications and is available at DOI: 10.1038/s41467-025-67958-0.

If you want to build AI into your marketing and growth operations in ways that account for where the technology is actually heading — not just where it is today — Winsome Marketing's strategists can help you plan for both.
