DARPA bets on mixed-modality qubits with new HARQ program
DARPA's new HARQ program funds 19 teams to build a heterogeneous quantum computer that mixes trapped-ion, neutral-atom, superconducting, and diamond qubits over a shared photonic backbone, targeting a 1,000x cut in physical resource demand.

On April 14, DARPA's Microsystems Technology Office named the performer teams for HARQ, Heterogeneous Architectures for Quantum: a 24-month effort that treats "one qubit type to rule them all" as a dead end. Nineteen teams across fifteen organizations are splitting into two workstreams. One is a compiler stack called MOSAIC, which partitions a single circuit across different qubit modalities. The other is a hardware effort called QSB, Quantum Shared Backbone, that builds the interconnect fabric those modalities will talk over. Seventeen teams are on contract. Two are still in negotiation.
The premise is simple. Every qubit modality has a physical regime where it wins, and no single modality wins everywhere. HARQ is the first DARPA program to formalize that as a design constraint instead of a problem to defer.
What HARQ is actually trying to do
The HARQ program page frames the premise in resource-demand terms. Homogeneous qubit arrays force a single technology to handle every function in a computation (gate operations, memory, long-range communication), even when the physics of that technology is only optimal for one of those jobs. Trapped ions hold coherence for seconds but shuttle slowly. Superconducting qubits switch in nanoseconds but decohere in microseconds. Neutral-atom arrays scale well and carry their own gate-time and connectivity tradeoffs. Photons move. They do not store.
HARQ's bet is that partitioning a workload across heterogeneous qubit types (processing on one, memory on another, communication over a shared photonic backbone) can lower the total physical resource cost of running a useful algorithm. DARPA's stated program target is "potentially cutting resource demands by a factor of 1,000" versus a single-modality baseline. That figure is aspirational, not demonstrated. The closest peer-reviewed anchor comes from Stein et al., whose 2024 analysis of a heterogeneous surface-plus-gross-code architecture showed up to a 6.42x reduction in physical qubit count at a target logical error rate, at the cost of a 3.43x increase in execution time. That 6.42x result sits on one corner of a tradeoff surface. HARQ's 1,000x target describes what's possible if compilers learn to navigate the whole surface and the hardware keeps up. The gap between the two numbers is the program.
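To make that gap concrete, the Stein et al. numbers can be folded into a back-of-the-envelope comparison. The sketch below uses the spacetime-volume convention (physical qubits times execution time) that is common in error-correction accounting; the framing is ours, not DARPA's or the paper's:

```python
# Back-of-the-envelope accounting for the Stein et al. heterogeneous result.
# Spacetime volume (physical qubits x execution time) is a common single-axis
# metric for comparing error-corrected architectures.

baseline_qubits = 1.0   # normalized single-modality baseline
baseline_time = 1.0

qubit_reduction = 6.42  # heterogeneous design uses 6.42x fewer physical qubits
time_penalty = 3.43     # ...but takes 3.43x longer to execute

hetero_qubits = baseline_qubits / qubit_reduction
hetero_time = baseline_time * time_penalty

spacetime_ratio = (baseline_qubits * baseline_time) / (hetero_qubits * hetero_time)
print(f"Qubit-count reduction:      {qubit_reduction:.2f}x")
print(f"Spacetime-volume reduction: {spacetime_ratio:.2f}x")

# Net spacetime gain is only ~1.87x on this corner of the tradeoff surface;
# reaching anything like 1,000x requires compounding gains the compiler and
# interconnect workstreams would have to find elsewhere on that surface.
```

Read this way, the headline 6.42x shrinks to under 2x once the time penalty is charged, which is exactly why HARQ funds the compiler and the interconnect together rather than either alone.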
HARQ is structured around two task areas plus an integration phase. TA1, MOSAIC, is the software stack: logical-circuit partitioning, modality-aware scheduling, and the error-correction glue that makes a multi-technology computer behave like one machine. TA2, QSB, is the interconnect: high-fidelity, high-rate quantum links between dissimilar hardware. A separate scale-up period combines both into an end-to-end system model and a commercialization plan. The HARQ Q&A document also confirms that UARCs, FFRDCs, and government labs were explicitly barred from proposing as primes. HARQ is an industry-and-academia program by design.
Program manager Justin Cohen, a Caltech-trained physicist who came in from Booz Allen's integrated-photonics practice, is running it out of MTO.
The performer lineup
Named performers span trapped-ion, neutral-atom, superconducting, diamond-color-center, and photonic networking labs. Most of the modality assignments, though, are inferences from the companies' existing businesses. Only one performer release actually names specific qubit types in scope.
That release is IonQ's. IonQ is on the QSB side, building high-speed quantum interconnects anchored on quantum memories fabricated from synthetic diamond. The release describes the architecture as explicitly combining "trapped ions, neutral atoms, and/or superconducting qubits" into one networked system. "IonQ's pioneering quantum interconnect technology can enable modular scalability not only for ion traps, but for a wide range of quantum technologies," said Niccolo de Masi, IonQ's chairman and CEO. The technical underpinning is a wafer-scale thin-film diamond platform with silicon-vacancy color-center memories and photonic-crystal cavities reaching cooperativities near 100, described in an August 2025 arXiv preprint co-authored by members of the former Lukin group now at IonQ.
On the MOSAIC side, the lead sub-team is memQ, a University of Chicago spin-out focused on qubit-agnostic networking. memQ is building a hardware- and network-aware compiler that maps and partitions logical circuits across heterogeneous processors linked by quantum networking, with qBraid, MIT, Yale, and the University of Chicago's Liang Jiang on the team. The compiler effort builds on memQ's earlier xDQC work on NVIDIA's CUDA-Q stack. "The HARQ program will catalyze the modularity, scale, and resource optimization needed to realize the full potential of quantum computing," said Manish Singh, memQ's chief product officer. Jiang, whose group has published extensively on heterogeneous error-correction codes, added that "quantum error correction is central to making these interfaces practical."
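The partitioning problem memQ is attacking can be caricatured in a few lines. The toy sketch below is not MOSAIC or any real compiler; the modality costs, link penalty, and region structure are invented for illustration. It shows the shape of the objective: assign each circuit region to the modality cheapest for its dominant operation class, while charging a penalty for every entangling edge that crosses a modality boundary and must therefore run over the photonic backbone:

```python
# Toy modality-aware partitioner (illustrative only -- all numbers invented).
# Brute-forces the assignment of circuit regions to qubit modalities,
# minimizing per-modality operation cost plus cross-modality link cost.

from itertools import product

# Hypothetical relative costs (lower is better) per operation class.
MODALITY_COST = {
    "trapped_ion":     {"gate": 3.0, "memory": 1.0},  # slow gates, long coherence
    "superconducting": {"gate": 1.0, "memory": 4.0},  # fast gates, short coherence
}
LINK_COST = 2.0  # penalty per entangling edge routed over the photonic backbone

regions = ["memory", "gate", "gate"]  # dominant operation class per circuit region
edges = [(0, 1), (1, 2)]              # entangling edges between regions

def total_cost(assignment):
    op_cost = sum(MODALITY_COST[m][kind] for m, kind in zip(assignment, regions))
    link_cost = sum(LINK_COST for i, j in edges if assignment[i] != assignment[j])
    return op_cost + link_cost

best = min(product(MODALITY_COST, repeat=len(regions)), key=total_cost)
print(best, total_cost(best))
# Puts the memory region on trapped ions and the gate-heavy regions on
# superconducting qubits, paying for one photonic link at the boundary.
```

A production compiler replaces the brute-force search with graph-partitioning heuristics and calibration data from real hardware, but the objective keeps this shape: operation cost per modality plus interconnect cost per cut, which is why the TA1 software and TA2 interconnect budgets are coupled.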
The only publicly disclosed dollar figure belongs to the University of Illinois Urbana-Champaign's QSB contribution: a two-year, $2 million grant to Kejie Fang's lab in the Electrical and Computer Engineering department. Fang's team is developing a wavelength-agile entangled photon source (InGaP nanophotonic waveguides coupled to periodically poled thin-film lithium niobate) designed to bridge atomic quantum processors and quantum memories operating at different wavelengths. "A quantum interconnect like this is crucial for memory-enhanced quantum information processing and is a fundamental building block in many quantum repeater architectures," Fang said. No other HARQ performer funding amounts have been disclosed.
Everyone else is on the DARPA release but has said nothing on their own channels yet. Infleqtion and Q-CTRL are on the MOSAIC side. Infleqtion's neutral-atom processor roadmap and Superstaq compilation stack are the obvious seeds for their contribution; Q-CTRL's error-suppression and control layer is the equivalent for theirs. The University of Michigan and the University of Pennsylvania round out the MOSAIC academic participants, with Penn's QUIEST center as the relevant institutional hub. On the QSB side, Harvard, Stanford, UC Berkeley, Carnegie Mellon, EPFL, and the Australian National University are all named in the DARPA release. Stanford's LINQS and Harvard's Quantum Initiative represent each institution's broader quantum portfolio. Specific modality assignments for these performers are not public, and their core technology areas are the only hint at what each will contribute.
How HARQ fits DARPA's quantum portfolio
HARQ is a sibling to two other active DARPA programs and a successor to a third. The Quantum Benchmarking Initiative is asking whether industrially useful quantum computers are achievable by 2033. Eleven companies advanced to Stage B in November 2025 across superconducting, trapped-ion, photonic, and silicon-spin-qubit approaches. QBI is the economic question: is the machine worth building? HARQ is the architectural question that follows from a "yes": what's the cheapest way to build it? US2QC, QBI's predecessor, is now in final Stage C validation with Microsoft and PsiQuantum.
On the networking side, QuANET has been DARPA's vehicle for pushing quantum-augmented classical networking toward practical use. The program recently reported 0.7-millisecond bit transmission as a benchmark. HARQ's QSB workstream is likely to borrow methods from QuANET for its interconnect fabric. The physics of moving entangled photons between dissimilar hardware is the same whether the endpoints are computers or repeaters.
What to watch
The interesting benchmark over the next 24 months will not be IonQ's gate fidelity or memQ's compiler in isolation. Both are already strong on their own. The test is whether the compiler and the interconnect are co-designed tightly enough to turn Stein-style 6x reductions into something meaningfully closer to DARPA's 1,000x program ambition. That requires the software team to know the interconnect's loss budget and the hardware team to design to the compiler's partitioning assumptions. HARQ's explicit "scale-up period" exists for exactly that integration work.
Second, watch for the other performers to break their silence. IonQ, memQ, and UIUC have all published detailed statements on their roles. Infleqtion, Q-CTRL, Michigan, Penn, Harvard, Stanford, CMU, EPFL, Berkeley, and ANU have not. Their announcements, plus whatever emerges about the two unnamed teams still in contract negotiation, should surface over the coming weeks. Each one will clarify the modality map.
And finally, QBI Stage B. HARQ's premise only pays off if at least one modality actually crosses the utility-scale threshold. If Stage B thins the herd down to a small number of viable approaches, HARQ's compiler and interconnect targets get easier to define. If it doesn't, the 1,000x figure stays aspirational for a long time.
🤖 AI Disclosure
AI-assisted research and first draft. This article has been verified by a human editor.