Quantum Computing in 2026: Separating Engineering Progress from Cryptographic Threat
Abstract
Google's October 2025 announcement that its Willow quantum processor completed a specific computation 13,000 times faster than the best available classical supercomputer generated headlines suggesting that practical quantum computing had arrived. This paper contextualizes the Willow result within the broader quantum computing landscape, examines the state of error correction and logical qubit development across the major hardware platforms, distinguishes carefully between the quantum simulation capabilities demonstrated by Willow and the cryptographic threat posed by future fault-tolerant quantum computers, and assesses the practical implications for AI infrastructure planning. The central argument is that quantum computing achieved genuine engineering milestones in 2024-2025 while remaining decades from the scale required to threaten modern cryptography, and that conflating these two timelines leads to misallocated investment.
The Willow Result in Technical Context
Google's Willow processor, a 105-qubit superconducting chip fabricated at Google's Santa Barbara quantum AI lab, solved a specific quantum simulation problem (modeling how disturbances propagate through a complex many-body quantum system) in approximately two hours. The equivalent classical computation would require an estimated 3.2 years on the most powerful available supercomputer. Unlike Google's 2019 "quantum supremacy" demonstration with its 53-qubit Sycamore processor, which involved sampling from random quantum circuits with no practical verification method, the Willow experiment produces a deterministic, verifiable answer: researchers can validate the result at smaller scales, where classical computers can still compute the answer directly, and then extrapolate to confirm the full-scale result. This methodological improvement addresses the principal criticism of the 2019 claim, namely that it was impossible to independently verify whether the quantum computer had actually produced the correct output.
The 13,000x speedup figure requires careful interpretation. The computation is inherently quantum-native: the system naturally models quantum mechanical interactions that classical computers must simulate through exponentially expensive linear algebra operations. Simulating quantum systems on classical hardware requires representing an exponentially growing state space; for a 105-qubit system, the full state vector contains 2^105 complex amplitudes, a number that exceeds the storage capacity of any classical computer. The speedup is real but domain-specific. It does not imply that quantum computers are 13,000 times faster than classical computers in general. It demonstrates that quantum processors can simulate quantum physics faster than classical approximations of quantum physics, a result that is expected from first principles but whose practical demonstration at meaningful scale is nonetheless significant.
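The arithmetic behind both figures is easy to check. A minimal Python sketch using the runtimes and qubit count quoted above (the 16-bytes-per-amplitude figure assumes double-precision complex numbers; all inputs are the rounded values from the text):

```python
N_QUBITS = 105
BYTES_PER_AMPLITUDE = 16  # complex128: one 8-byte real + one 8-byte imaginary part

# Full state vector of a 105-qubit system: 2^105 complex amplitudes.
amplitudes = 2 ** N_QUBITS
memory_bytes = amplitudes * BYTES_PER_AMPLITUDE
print(f"Amplitudes: {amplitudes:.2e}")                  # ~4.06e+31
print(f"Memory: {memory_bytes / 1e21:.2e} zettabytes")  # far beyond all storage on Earth

# Speedup implied by the quoted runtimes: ~2 hours quantum vs ~3.2 years classical.
classical_hours = 3.2 * 365.25 * 24
quantum_hours = 2.0
speedup = classical_hours / quantum_hours
print(f"Implied speedup: ~{speedup:,.0f}x")  # ~14,000x with these rounded inputs
```

The implied ratio lands near the reported 13,000x; the residual difference reflects rounding in the publicly quoted runtimes.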
The more consequential technical achievement in Willow is its demonstration of below-threshold error correction. For the first time, a quantum processor demonstrated that enlarging a surface code error-correcting array reduced the logical error rate rather than increasing it. This is the fundamental prerequisite for fault-tolerant quantum computing: if adding more qubits makes the system noisier rather than more reliable, then scaling to useful problem sizes is impossible. Willow showed that each increase in surface code distance (from 3 to 5 to 7, roughly doubling the number of physical qubits in the array at each step) cut the logical error rate approximately in half. This suppression factor must improve substantially for practical fault-tolerant computation, because halving the error rate per doubling implies enormous qubit overheads at the error rates that long algorithms require, but the demonstration that the trend goes in the right direction at all represents a genuine inflection point.
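To see why the size of the suppression factor matters so much, consider a toy model (an illustration for exposition, not Google's published analysis): assume each doubling of the physical qubit count divides the logical error rate by a fixed factor, and count the doublings needed to reach the error rates long algorithms demand.

```python
import math

def doublings_needed(p_start: float, p_target: float, suppression: float) -> int:
    """Doublings of physical qubit count needed to push the logical error rate
    from p_start down to p_target, if each doubling divides the rate by
    `suppression`. The epsilon guards against floating-point rounding when the
    ratio is an exact power of the suppression factor."""
    return math.ceil(math.log(p_start / p_target) / math.log(suppression) - 1e-9)

P_START = 1e-3    # assumed starting logical error rate
P_TARGET = 1e-12  # rough scale often quoted for long fault-tolerant algorithms

for lam in (2, 10):
    k = doublings_needed(P_START, P_TARGET, lam)
    print(f"suppression factor {lam:>2}: {k} doublings -> {2**k:,}x more physical qubits")
```

With a suppression factor of 2, reaching a 10^-12 logical error rate from 10^-3 costs roughly a billionfold increase in physical qubits; a factor of 10 achieves the same improvement at about 512x, which is why improving the suppression factor matters more than raw qubit count.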
The Competitive Landscape: Hardware Platforms and Divergent Approaches
The quantum computing industry in 2026 comprises several competing hardware platforms, each with distinct physical implementations, error characteristics, and scaling trajectories. Understanding this landscape is essential for assessing realistic timelines.
Superconducting qubits, the approach used by Google, IBM, and Rigetti Computing, remain the most mature platform. IBM's Heron processor, introduced in late 2023, achieved 133 qubits with significantly improved two-qubit gate fidelities compared to its predecessor Eagle chip. IBM's roadmap targets 100,000 qubits by 2033 through a modular architecture that connects multiple quantum processors via quantum interconnects. Google's Willow represents a parallel approach focused on error correction quality over raw qubit count. The superconducting approach requires operation at approximately 15 millikelvin, maintained by dilution refrigerators that cost $1-2 million each, creating significant infrastructure overhead.
Trapped ion systems, pursued by Quantinuum (formed from the merger of Honeywell Quantum Solutions and Cambridge Quantum Computing) and IonQ, offer substantially higher gate fidelities than superconducting systems. Quantinuum's H2 processor, using ytterbium ions confined in a racetrack-shaped radio-frequency trap, demonstrated two-qubit gate fidelities exceeding 99.8% in 2024, the highest reported for any platform. The trade-off is speed: trapped ion gate operations are approximately 1,000 times slower than superconducting gates, making them less suitable for algorithms requiring deep circuits. However, for applications where fidelity matters more than clock speed, trapped ions currently lead.
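The fidelity-versus-speed trade-off compounds over a circuit: success probability falls roughly as fidelity raised to the gate count, while wall-clock time grows linearly with gate duration. A sketch with representative numbers (assumed for illustration, not vendor benchmarks):

```python
platforms = {
    # name: (two-qubit gate fidelity, gate duration in seconds) -- assumed values
    "superconducting": (0.995, 30e-9),  # tens of nanoseconds per gate
    "trapped ion":     (0.998, 30e-6),  # roughly 1,000x slower per gate
}

G = 1_000  # two-qubit gates in a hypothetical circuit

for name, (fidelity, gate_time) in platforms.items():
    success = fidelity ** G          # crude estimate: errors compound multiplicatively
    runtime_ms = G * gate_time * 1e3
    print(f"{name:>15}: success ~{success:.1%}, runtime ~{runtime_ms:.3f} ms")
```

Under these assumptions the trapped ion circuit is roughly twenty times more likely to succeed but takes a thousand times longer to run, which is the trade-off described above in miniature.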
Neutral atom arrays, developed by QuEra Computing, Pasqal, and Atom Computing, represent the newest competitive platform. QuEra's December 2023 demonstration of 48 logical qubits (error-corrected from 280 physical qubits) surprised the field by achieving a logical qubit count that superconducting and trapped ion systems had not yet matched. Neutral atom systems use optical tweezers to arrange individual rubidium or cesium atoms in configurable 2D and 3D arrays, enabling flexible qubit connectivity that other platforms cannot easily replicate. The platform's key advantage is scalability: atom arrays can in principle scale to thousands of physical qubits in a single system, as the atoms are individually addressable and do not require individual wiring.
Photonic quantum computing, pursued by PsiQuantum and Xanadu, takes a fundamentally different approach by encoding quantum information in photons rather than matter-based qubits. PsiQuantum's strategy, backed by over $700 million in venture funding, bets on manufacturing photonic quantum chips using existing CMOS fabrication infrastructure at GlobalFoundries, which would dramatically reduce the per-qubit manufacturing cost. Much of a photonic system operates at room temperature, though single-photon detectors still require cryogenic cooling, at a few kelvin rather than the millikelvin temperatures that dilution refrigerators must reach. The harder challenge is that two-photon entangling gates remain probabilistic rather than deterministic, which imposes substantial multiplexing and error correction overhead.
What Willow Does Not Demonstrate: The Cryptographic Threat Timeline
The Willow result does not advance the timeline for breaking modern encryption. This statement requires elaboration because the popular press routinely conflates quantum computing progress of any kind with imminent cryptographic vulnerability.
Breaking RSA-2048 encryption, the standard protecting the majority of internet communications, banking transactions, and government classified networks, requires running Shor's algorithm on a fault-tolerant quantum computer with approximately 4,000 error-corrected logical qubits. Each logical qubit, in turn, requires between 1,000 and 10,000 physical qubits for error correction, depending on the physical error rate of the underlying hardware. At current error rates, the total physical qubit requirement is estimated at 10 to 20 million qubits. Google's Willow has 105 physical qubits. The gap between current hardware and cryptographically relevant hardware spans approximately five orders of magnitude in qubit count and requires error rates to improve by at least two orders of magnitude.
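The scale of this gap can be made concrete from the estimates above (all inputs are the rounded figures quoted in the text, not precise resource counts):

```python
import math

LOGICAL_QUBITS_FOR_RSA2048 = 4_000      # approximate Shor's-algorithm requirement
PHYSICAL_PER_LOGICAL = (1_000, 10_000)  # range, depending on physical error rate
WILLOW_PHYSICAL_QUBITS = 105

low = LOGICAL_QUBITS_FOR_RSA2048 * PHYSICAL_PER_LOGICAL[0]
high = LOGICAL_QUBITS_FOR_RSA2048 * PHYSICAL_PER_LOGICAL[1]
print(f"Physical qubits implied: {low:,} to {high:,}")  # brackets the 10-20 million estimate

# Orders of magnitude between Willow and the low end of the cited 10-20 million range.
gap = math.log10(10_000_000 / WILLOW_PHYSICAL_QUBITS)
print(f"Gap vs Willow's 105 qubits: ~{gap:.1f} orders of magnitude")
```

The multiplication brackets the 10-to-20-million figure, and the logarithm confirms the roughly five-orders-of-magnitude gap in qubit count.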
The most credible estimates for cryptographically relevant quantum computing (CRQC) place the milestone between 2035 and 2045. The Global Risk Institute's 2024 survey of quantum computing experts found a median estimate of 2037 for a 50% probability of CRQC, with significant uncertainty spanning a 15-year range. NIST's internal assessment, which informed the urgency behind its post-quantum cryptography standardization program, assumes a planning horizon of 10 to 15 years from 2024. The National Security Agency's Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), published in 2022, mandates transition to quantum-resistant algorithms by 2035 for most national security systems, implicitly placing the threat window in the 2035-2040 range.
This distinction matters for investment prioritization. Organizations should not conflate quantum simulation progress (which is real and advancing) with cryptographic urgency (which is based on a fundamentally different hardware capability) when making infrastructure decisions. The appropriate response to the cryptographic timeline is adoption of post-quantum cryptographic standards, specifically NIST's ML-KEM (Kyber) for key encapsulation and ML-DSA (Dilithium) for digital signatures, finalized in August 2024. These algorithms are designed to resist both classical and quantum attacks and can be deployed on existing classical hardware with minimal performance overhead. Migration to post-quantum cryptography is warranted on its own merits, independent of the precise CRQC timeline, because the data being protected today may retain sensitivity for decades.
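These considerations are often condensed into a planning heuristic attributed to Michele Mosca: data encrypted today is at risk if its required secrecy lifetime plus the migration time to post-quantum cryptography exceeds the time remaining until a cryptographically relevant quantum computer exists. A minimal sketch (the example numbers are illustrative assumptions, not recommendations):

```python
def at_risk(shelf_life_years: float, migration_years: float, crqc_years: float) -> bool:
    """Mosca's inequality: data is exposed to 'harvest now, decrypt later' attacks
    if its secrecy lifetime plus migration time exceeds the time until a CRQC."""
    return shelf_life_years + migration_years > crqc_years

# Illustrative: 25-year data sensitivity, 8-year migration program, CRQC ~13 years
# out (a 2037 median estimate viewed from 2024).
print(at_risk(shelf_life_years=25, migration_years=8, crqc_years=13))  # True
```

Under these assumptions the inequality holds comfortably, which is the quantitative version of the point above: migration is warranted now even if the CRQC timeline slips by years.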
Implications for AI Infrastructure Planning
Quantum computing and classical AI infrastructure are complementary technologies operating on different timescales, not competing paradigms. The near-term practical applications of quantum computing (molecular simulation, combinatorial optimization, and certain classes of machine learning kernel computation) are valuable but distinct from the workloads that drive AI infrastructure investment decisions today.
The most plausible near-term intersection between quantum computing and AI infrastructure is indirect: quantum-assisted materials science that improves classical hardware. Molecular simulation of battery chemistry could accelerate development of higher-density energy storage for data centers. Quantum simulation of heat transfer in novel materials could yield more efficient cooling substrates. Optimization of semiconductor doping profiles could improve chip yields and performance. These are genuine applications where quantum simulation capabilities like those demonstrated by Willow could produce tangible benefits for the AI hardware ecosystem, but the timeline is measured in years to decades, not quarters.
Direct quantum execution of neural network inference or training, the scenario sometimes implied by breathless press coverage, is not on any credible technology roadmap. Neural network operations are dense linear algebra computations (matrix multiplications, convolutions, activation functions) that run efficiently on classical GPU architectures. Quantum computers offer no theoretical speedup for these operations; the quantum advantage applies to problems with specific mathematical structure (periodicity for Shor's, unstructured search for Grover's, Hamiltonian simulation for quantum chemistry) that neural network computation does not exhibit.
AI infrastructure investment decisions should be made on the basis of current and near-term classical compute requirements. The organizations that defer GPU infrastructure investment while waiting for quantum computing to "disrupt" classical compute will find themselves without the infrastructure needed to compete in the present while waiting for a future that, for AI workloads specifically, may never arrive in the form they imagined. Quantum computing is a transformative technology for specific domains. AI inference is not one of them.