A team at Delft and CSIC has cracked the readout problem for topological qubits, the same architecture Microsoft has bet its entire quantum future on.

For the HPC community, quantum computing has lived in a perpetual state of "promising but not yet." The hardware exists. The algorithms exist. What doesn't exist, at least not at useful scale, is the error correction needed to make quantum computation reliable enough to solve problems that classical supercomputers can't.
A breakthrough published in February by researchers at Delft University of Technology and the Madrid Institute of Materials Science (ICMM-CSIC) may have just removed one of the biggest roadblocks for the most ambitious approach to solving that problem: topological qubits built from Majorana zero modes.
The team demonstrated, for the first time, a method to read the quantum state of a Majorana qubit in real time with a single measurement, and measured parity coherence times exceeding one millisecond. If those numbers hold up and improve, they validate the core thesis behind Microsoft's decade-long, multibillion-dollar bet on topological quantum computing.
Most quantum computers today use superconducting qubits, the approach favored by IBM, Google, and a growing constellation of startups. These qubits are conceptually straightforward: they encode information in the quantum states of superconducting circuits. The problem is that they're incredibly fragile. Environmental noise (thermal fluctuations, electromagnetic interference, even stray cosmic rays) corrupts quantum information constantly. This is the decoherence problem, and it's why current quantum computers need massive amounts of error correction overhead, sometimes requiring thousands of physical qubits to produce a single reliable logical qubit.
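The overhead arithmetic can be made concrete with a rough sketch. The figures below assume the rotated surface code (the error-correction scheme most superconducting platforms target) at a few illustrative code distances; none of these numbers come from the article or from any vendor roadmap.

```python
# Hedged back-of-the-envelope: surface-code overhead. A rotated
# distance-d surface code uses d**2 data qubits plus d**2 - 1
# measurement qubits per logical qubit. The distances chosen below
# are illustrative only.
def physical_per_logical(d: int) -> int:
    return 2 * d**2 - 1

for d in (11, 17, 25):
    print(f"d={d}: {physical_per_logical(d)} physical qubits per logical qubit")
```

At distances in the mid-20s, the count already exceeds a thousand physical qubits per logical qubit, which is where the "thousands of physical qubits" figure comes from once routing and distillation overheads are folded in.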
Topological qubits take a different approach. Instead of storing information in a single physical location (where it's vulnerable to local noise), topological qubits distribute quantum information across a pair of linked quantum states called Majorana zero modes (MZMs). These MZMs exist at the ends of specially engineered superconducting nanowires.
CSIC researcher Ramón Aguado, co-author of the study, described it with an analogy that cuts through the physics: topological qubits are "like safe boxes for quantum information." The data isn't stored in one spot. It's spread across the system. To corrupt it, you'd need to affect the entire system globally, not just poke at one point.
This inherent noise resistance is why Microsoft has pursued topological qubits as its primary quantum computing strategy for nearly two decades. The logic is compelling: if your qubits are inherently more stable, you need dramatically less error correction, which means you can scale to useful qubit counts with far less overhead.
The catch? Until now, the same property that makes topological qubits robust also made them nearly impossible to read.
Here's the paradox Aguado articulated: "This same virtue had become their experimental Achilles' heel: how do you 'read' or 'detect' a property that doesn't reside at any specific point?"
In a conventional superconductor, electrons pair up into Cooper pairs and flow without resistance. Any unpaired electron sticks out because it requires extra energy and can be detected. In a topological superconductor hosting Majorana zero modes, an unpaired electron is shared between two MZMs, making it invisible to local measurement. The quantum information is encoded in the parity: whether the wire contains an even or odd number of electrons. Distinguishing between, say, 1,000,000,000 and 1,000,000,001 electrons in a superconducting wire is exactly as difficult as it sounds.
Previous experimental approaches used local charge measurements to try to detect this parity. They failed. The information simply isn't accessible locally. That's the whole point of topological protection.
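A minimal toy model makes the "invisible locally, visible globally" point concrete. This is a single fermionic mode, not the two-dot Kitaev chain from the paper, and it only illustrates the algebra: a probe at one Majorana end carries no parity information, while the global parity operator reads the bit perfectly.

```python
import numpy as np

# Hedged single-mode toy (not the actual device): one fermionic mode c
# splits into two Majorana operators gamma1, gamma2. The parity operator
# P = -i * gamma1 * gamma2 distinguishes even/odd occupation, while the
# expectation of a single "local" Majorana operator is zero in both
# parity eigenstates.
c = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation, basis |0>, |1>
gamma1 = c + c.conj().T                          # Majorana at one end of the wire
gamma2 = 1j * (c.conj().T - c)                   # Majorana at the other end
P = -1j * gamma1 @ gamma2                        # fermion parity, diag(+1, -1)

even = np.array([1, 0], dtype=complex)           # empty mode: even parity
odd = np.array([0, 1], dtype=complex)            # occupied mode: odd parity

for name, psi in (("even", even), ("odd", odd)):
    local = (psi.conj() @ gamma1 @ psi).real     # local probe: blind (always 0)
    parity = (psi.conj() @ P @ psi).real         # global probe: sees +1 or -1
    print(name, local, parity)
```

The local expectation comes out to zero for both states, while the parity expectation is +1 or -1, which is the algebraic version of the "safe box" picture.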
The Delft-CSIC team built a device called a Kitaev minimal chain: two semiconductor quantum dots connected through a superconductor. This modular "Lego block" approach, as the researchers described it, allowed them to create and control Majorana zero modes from the bottom up rather than relying on bulk material properties.
The key innovation was applying a technique called quantum capacitance as a readout mechanism. Rather than trying to measure local charge (which is blind to topological information), the team coupled both ends of the nanowire to a quantum dot and measured how the dot's ability to hold charge changed depending on the wire's parity state. They then used microwaves to detect this change, since the microwaves reflect differently depending on whether the parity is even or odd.
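The readout principle can be sketched as a toy circuit model: the wire's parity shifts the effective capacitance seen by an RF resonator by a small "quantum capacitance," which in turn shifts the phase of a reflected microwave tone. Every component value below is illustrative, and the sign convention for the parity-dependent capacitance is the toy's own choice; none of it comes from the experiment.

```python
import numpy as np

# Hedged toy model of dispersive parity readout. A series LC resonator
# is probed through a 50-ohm line at its bare resonance; the parity
# state adds a small quantum capacitance C_q of either sign, which
# shifts the phase of the reflected signal. All values are illustrative.
L = 400e-9                 # resonator inductance (H)
C0 = 0.5e-12               # bare capacitance (F)
Z0 = 50.0                  # transmission-line impedance (ohms)
w = 1 / np.sqrt(L * C0)    # drive at the bare resonance (rad/s)

def reflected_phase(C_q):
    """Phase of the reflection coefficient for the series LC load."""
    Z = 1j * w * L + 1 / (1j * w * (C0 + C_q))
    return np.angle((Z - Z0) / (Z + Z0))

even = reflected_phase(+1e-15)   # parity-dependent C_q, sign chosen for the toy
odd = reflected_phase(-1e-15)
print(even, odd)                 # two distinguishable phases
```

Because the two parity states reflect the tone with measurably different phases, a single microwave pulse can resolve the parity, which is what makes the measurement single-shot rather than a statistical average.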
As Aguado explained: "The experiment elegantly confirms the protection principle: while local charge measurements are blind to this information, the global probe reveals it clearly."
Critically, this works as a single-shot measurement in real time, not a statistical average over many repetitions. And the measured parity coherence time exceeded one millisecond, which the team considers highly promising for future qubit operations.
Microsoft announced the Majorana 1 processor in early 2025, the world's first quantum processing unit powered by a topological core. That announcement was built on a related but distinct approach: Microsoft engineered a novel class of materials called topoconductors (indium arsenide semiconductor combined with aluminum superconductor) that create topological superconductivity when cooled to near absolute zero and tuned with magnetic fields.
Microsoft's readout approach uses a similar principle, coupling MZMs to a quantum dot and using microwave reflectometry to detect parity. The company reported initial measurement error rates of 1% with clear paths to improvement, and demonstrated that external radiation flips the qubit state only once per millisecond on average.
The Delft-CSIC result validates the broader approach independently. Different device architecture (Kitaev minimal chain vs. Microsoft's topoconductor design), same physics, same conclusion: topological qubit readout is feasible, reliable, and fast enough to be useful.
Microsoft's roadmap now moves from single-qubit devices to a 4×2 tetron array, demonstrating entanglement and measurement-based braiding. The company is building toward a fault-tolerant prototype (FTP) as part of the DARPA US2QC program, on a timescale it claims will be years, not decades.
The quantum computing field is effectively running two parallel experiments at civilizational scale:
The superconducting path (IBM, Google, Rigetti, and others) has more qubits today, more operational experience, and a clearer near-term trajectory. IBM's roadmap targets thousands of qubits with improving error rates. Google demonstrated quantum error correction thresholds with its Willow processor. The ecosystem is deep: software tools, cloud access, real customer workloads (even if most are still exploratory).
The topological path (primarily Microsoft) has far fewer qubits (functionally zero at useful scale right now) but promises a dramatically better scaling trajectory. If topological qubits deliver on their theoretical advantage, they could leapfrog the superconducting approach entirely by requiring orders of magnitude less error correction overhead.
The Delft-CSIC result strengthens the topological case by resolving what was arguably the biggest open experimental question. But the gap between "we can read a single qubit" and "we have a useful quantum computer" remains vast. Microsoft needs to demonstrate multi-qubit entanglement, error correction on topological arrays, and eventually the kind of algorithm execution that IBM and Google are already doing (imperfectly) on superconducting hardware.
For organizations making long-term infrastructure bets (national labs, hyperscalers, enterprises with quantum strategies), the Majorana readout breakthrough shifts the probability distribution on timelines.
The topological approach was always the high-risk, high-reward play. The readout problem was a genuine existential risk: if you can't read the qubits, nothing else matters. That risk just got substantially reduced.
This doesn't mean topological quantum computers are arriving soon. But it means they're arriving, and the competition between quantum architectures is about to get more interesting. When Microsoft talks about building a fault-tolerant prototype in years rather than decades, the experimental evidence is starting to support that timeline rather than contradict it.
For the HPC community that will ultimately be the primary customer for quantum computing at scale, this is worth watching closely. The readout problem was the padlock on the safe box. It just got picked.