Google's Willow Chip Just Made Quantum Computing Real
We've been hearing about quantum computing's "potential" for two decades. Google just turned potential into proof.
5 min read
Writing Team
Nov 28, 2025 8:00:00 AM
Quantum computing has a chicken-and-egg problem: You can't build better quantum chips without understanding how noise and crosstalk destroy qubit coherence. But you can't study those effects without simulating entire chips at microscale resolution—a computational challenge that was impossible until now.
Researchers at Lawrence Berkeley National Laboratory just cracked it. Using 6,724 NVIDIA A100 GPUs—95% of the entire Perlmutter supercomputer—they completed the world's first full-wave electromagnetic simulation of a complete, state-of-the-art quantum chip at micron-scale resolution.
The simulation tracked 1.5 million time steps representing just one nanosecond of physical time, requiring nearly eight hours on one of the world's most powerful supercomputers. But that single nanosecond revealed crosstalk patterns, spurious modes, and electromagnetic interference effects that no previous simulation could capture—the exact information needed to design quantum chips that actually work at scale.
Superconducting quantum chips present a brutal simulation problem. You need to model physics across multiple orders of magnitude simultaneously—from micrometer-scale qubit structures to centimeter-scale dimensions of the full chip.
Without GPU acceleration, researchers faced an impossible tradeoff: Simulate small regions at high resolution, missing chip-level interactions, or simulate full chips at coarse resolution, missing critical microscale effects. Neither approach captures the crosstalk and noise coupling that destroys qubit performance in real devices.
The problem compounds because accurate simulation requires time-domain approaches. You can't just calculate steady-state electromagnetic fields and call it done. You need to track how control pulses and microwave signals propagate, reflect, and interfere across the chip in real time, revealing transient effects like mode coupling and signal distortion that frequency-domain methods miss entirely.
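The distinction is easiest to see in code. Below is a minimal 1D finite-difference time-domain (FDTD) sketch, the same family of method as a full-wave time-domain solver, which marches the fields forward step by step so pulses visibly propagate and reflect. This is purely illustrative: it is not ARTEMIS code, and a real chip solver is 3D with materials, boundaries, and GPU parallelism.

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=900, src_pos=50):
    """Toy 1D FDTD (Yee-style leapfrog) with a Courant number of 0.5.

    A Gaussian pulse is injected as a soft source and then propagates,
    hits the domain edges, and reflects -- exactly the kind of transient
    behavior a steady-state (frequency-domain) calculation cannot show.
    """
    ez = np.zeros(n_cells)  # electric field samples
    hy = np.zeros(n_cells)  # magnetic field samples
    for t in range(n_steps):
        # update H from the spatial derivative of E
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # update E from the spatial derivative of H
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # short Gaussian pulse, standing in for a control signal
        ez[src_pos] += np.exp(-((t - 40) / 12.0) ** 2)
    return ez
```

Calling `fdtd_1d()` and plotting `ez` at intermediate steps would show the pulse splitting, traveling, and bouncing off the boundaries, the 1D analogue of the reflections and interference the article describes at chip scale.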
Traditional CPU-based simulations couldn't handle this. The cost climbs steeply with resolution: halving the grid spacing in 3D multiplies the cell count by eight, and the stability limit on time-domain solvers (the CFL condition) halves the allowable time step as well, so each refinement multiplies the total compute by roughly sixteen.
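The standard cost argument for refining a 3D time-domain grid can be written in two lines. This is my arithmetic for illustration, not a figure from the article:

```python
def fdtd_cost_factor(refinement: float) -> float:
    """Relative compute cost when the grid spacing shrinks by `refinement`
    (e.g. 2.0 = twice as fine). Cell count grows as refinement**3, and the
    CFL stability limit shrinks the time step by another factor of
    refinement, giving refinement**4 overall for a fixed physical time."""
    return refinement ** 4

# Going from 10-micron to 1-micron resolution (refinement = 10) costs
# about 10,000x more compute for the same chip and the same nanosecond.
```

That fourth-power growth is why full-chip simulation at micron resolution sat out of reach until thousands of GPUs could be applied to one run.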
Berkeley Lab developed ARTEMIS—an open-source simulation package optimized specifically for GPU parallelization on NVIDIA's CUDA platform. It's a full-wave, time-domain electromagnetic solver designed to exploit the massive parallelism GPUs provide.
The scaling numbers are remarkable. On a node-by-node basis, GPU simulations run 60x faster than CPU-only approaches. But the real breakthrough is scalability across thousands of GPUs simultaneously. ARTEMIS exhibits excellent weak scaling up to 2,048 GPUs, with the Berkeley Lab team pushing it to 6,724 GPUs for this demonstration.
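"Weak scaling" has a precise meaning worth pinning down: the problem size grows in proportion to the GPU count, so ideal behavior keeps the runtime flat. A tiny sketch of the metric, using hypothetical timings rather than ARTEMIS's published numbers:

```python
def weak_scaling_efficiency(t_base: float, t_scaled: float) -> float:
    """Weak-scaling efficiency: runtime of the baseline divided by the
    runtime of a proportionally larger problem on proportionally more
    hardware. 1.0 means perfect scaling (runtime unchanged)."""
    return t_base / t_scaled

# Hypothetical illustration: if one GPU takes 100 s on its share of the
# problem and 2,048 GPUs take 105 s on a 2,048x larger problem, the
# weak-scaling efficiency is about 95%.
```

"Excellent weak scaling up to 2,048 GPUs" means efficiencies stayed near 1.0 across that range, which is what justified pushing on to 6,724 GPUs.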
That extreme scalability enables what was previously impossible: Modeling large, chip-scale systems while preserving fine spatial and temporal details. The simulation resolved electromagnetic interactions from micrometer-scale qubit structures to centimeter-scale control lines, capturing multiscale physics in a single unified model.
The team discretized a 1-centimeter quantum chip into over 10 billion grid points using micron resolution. They injected a control pulse and tracked its propagation through the chip at femtosecond time resolution.
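The headline numbers check out on the back of an envelope. One assumption is needed that the article doesn't state: a chip stack is thin, so I take roughly 0.1 mm of vertical extent; the lateral size, resolution, and step count are from the article.

```python
# Back-of-envelope check of the simulation's scale.
cell_um = 1.0            # micron-scale grid spacing (from the article)
side_um = 10_000.0       # 1 cm lateral dimension (from the article)
thickness_um = 100.0     # ASSUMED vertical extent of the chip stack

# ~1e10 cells: the "over 10 billion grid points" figure
cells = (side_um / cell_um) ** 2 * (thickness_um / cell_um)

# 1 ns of physical time over 1.5 million steps: ~0.67 fs per step,
# i.e. the "femtosecond time resolution" in the text
dt_fs = 1e6 / 1.5e6
```

So "10 billion grid points" and "femtosecond resolution" are not loose superlatives; they fall directly out of the stated chip size, grid spacing, and step count.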
The results are stunning—and revealing. Video footage shows the electric field propagating along coplanar waveguide transmission lines, coupling from the control layer to the qubit layer, interfering and resonating as it travels. The simulation exposes spurious modes and crosstalk paths that steady-state models completely miss.
In the qubit region, clear resonances appear—the electric field continues oscillating long after the excitation pulse disappears. This is exactly the kind of unwanted electromagnetic coupling that destroys qubit coherence and creates errors in quantum computations.
For chip designers, this information is gold. They can now identify specific geometric features that create crosstalk, test modifications before expensive fabrication runs, and validate that design changes actually reduce noise sources rather than just shifting problems elsewhere.
Quantum computing is facing a hard truth: Building useful, large-scale quantum computers is primarily an engineering challenge, not a physics problem. The theory is largely understood. The barrier is fabricating qubits that maintain coherence long enough to perform meaningful computations.
Noise is the enemy. Environmental interference, unwanted interactions between circuit elements, electromagnetic crosstalk between control lines and qubits—these effects accumulate and destroy the quantum states you're trying to manipulate.
Traditional chip design solved similar problems decades ago through Electronic Design Automation (EDA) tools. The modern semiconductor industry doesn't fabricate chips and hope they work. They simulate exhaustively in software, identify problems, iterate designs, and only commit to expensive fabrication once simulations confirm the design will perform as intended.
Quantum chip design needs the same workflow. But until Berkeley Lab's demonstration, the simulation tools didn't exist for full-chip, high-resolution modeling. Designers were working partially blind, relying on simplified models, analytical approximations, and educated guesses.
Now they have validated, scalable simulation frameworks that model real chip behavior before entering fabrication cycles. This isn't just faster—it's qualitatively better design methodology.
There's delicious irony in using classical supercomputers to advance quantum computing. But it makes perfect sense strategically.
Electromagnetic simulations parallelize naturally: divide the chip into spatial regions, and each region's field update depends only on its own cells plus a thin layer of neighboring values, so thousands of regions can advance simultaneously with only nearest-neighbor communication. GPUs excel at exactly this kind of workload, with thousands of cores performing similar calculations in parallel.
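The decomposition can be sketched in one dimension: each chunk of the grid carries "ghost" cells that are refilled from its neighbors between update steps. A toy illustration of that exchange (not ARTEMIS's actual decomposition, which is 3D and block-structured):

```python
import numpy as np

def split_with_ghosts(field, n_parts):
    """Split a 1D field into equal chunks, each padded with one ghost
    cell per side to hold copies of neighbor data."""
    chunks = np.split(field, n_parts)
    return [np.pad(c, 1) for c in chunks]

def exchange_halos(parts):
    """Fill each chunk's ghost cells from its neighbors' edge values.
    This small nearest-neighbor exchange is the only communication each
    worker needs per step; the interior update is fully independent."""
    for i, p in enumerate(parts):
        p[0] = parts[i - 1][-2] if i > 0 else 0.0               # left ghost
        p[-1] = parts[i + 1][1] if i + 1 < len(parts) else 0.0  # right ghost
    return parts
```

Because the exchanged halo is tiny compared with each region's interior, communication stays cheap as the GPU count grows, which is what makes the weak scaling described earlier possible.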
NVIDIA's CUDA platform provides the validated software infrastructure to orchestrate these calculations across thousands of GPUs efficiently. The CUDA-Q platform specifically targets quantum computing workloads, providing out-of-the-box tools for the dynamical simulations needed in quantum chip design.
Berkeley Lab's collaboration with NVIDIA demonstrates the ecosystem forming around quantum computing development—not just quantum hardware startups, but classical computing infrastructure providers recognizing quantum simulation as a major workload and optimizing for it.
The practical impact extends beyond understanding crosstalk in existing designs. Validated simulation enables testing novel qubit architectures computationally before committing resources to fabrication.
Want to try a new qubit geometry that might reduce noise? Simulate it first and see if crosstalk actually decreases or just shifts to different frequency bands. Considering new materials or multi-layer designs? Model the full electromagnetic environment and verify coupling behavior meets requirements.
This iterative design process—simulate, analyze, modify, simulate again—is standard practice in classical chip design. Berkeley Lab's work brings it to quantum computing, potentially accelerating the path to useful quantum systems by years.
The eight-hour runtime for simulating one nanosecond might sound expensive, but consider the alternative: Fabricating a chip takes weeks or months and costs hundreds of thousands to millions of dollars. Running even dozens of simulations is vastly cheaper than a single failed fabrication run.
This work arrives as quantum computing faces mounting skepticism about timelines to practical systems. Companies have been promising "quantum advantage" for years while error rates remain stubbornly high and qubit counts scale more slowly than hoped.
The Berkeley Lab demonstration doesn't solve those problems directly, but it provides critical infrastructure for addressing them. Better simulation tools mean better chip designs. Better chips mean lower error rates. Lower error rates mean more qubits can work together reliably. More reliable qubits mean practical quantum computing moves from decades away to years away.
NVIDIA's involvement signals their bet that classical computing will remain essential to quantum development for the foreseeable future—not just for simulation, but for error correction, circuit optimization, and hybrid classical-quantum algorithms.
Google's recent work with the Willow chip, also leveraging CUDA-Q Dynamics for chip design, demonstrates this isn't isolated academic research. Industry leaders are adopting GPU-accelerated simulation as standard methodology.
Berkeley Lab's decision to make ARTEMIS open source matters strategically. Quantum computing is pre-competitive in the sense that everyone benefits from better design tools. Proprietary simulation platforms would fragment the ecosystem and slow progress.
By providing validated, GPU-accelerated tools freely, Berkeley Lab accelerates the entire field's ability to design better chips. Startups gain access to simulation capabilities that would otherwise require building entire software teams. Academic researchers can contribute improvements and extensions.
The open-source approach also enables reproducibility and validation—critical for a field where extraordinary claims require extraordinary evidence.
The 6,724-GPU demonstration establishes feasibility and validates the approach. The next phase is optimization and accessibility.
Can simulation time be reduced through algorithmic improvements or better GPU utilization? Can the framework extend to even larger chips or longer time sequences? Can cloud-based GPU clusters make these simulations accessible to organizations without supercomputer access?
All are tractable engineering challenges. The fundamental barrier—whether full quantum chip simulation is computationally feasible at all—has been definitively answered.
For quantum computing skeptics questioning whether we'll ever build useful systems, Berkeley Lab just provided a tool that materially improves the odds. Better simulation means better chips. Better chips mean quantum computing's promise gets a real shot at becoming reality.
The qubits are still noisy. But now we can see exactly why—and design our way out of it.
If your organization is exploring quantum computing applications or needs strategic guidance on emerging compute paradigms and their business implications, Winsome Marketing's team can help you separate realistic timelines from hype cycles.