The Chiplet Summit marked the transition from "chiplets as concept" to "chiplets as product." The winners won't be the ones with the best silicon. They'll be the ones with the best software and the deepest packaging ecosystem.

For the better part of five years, the semiconductor industry has talked about chiplets as the inevitable successor to monolithic die scaling. The logic is sound: as Moore's Law slows, you can still increase system performance by disaggregating functions onto separate dies, optimizing each for its specific job, and connecting them through advanced packaging. It's a modular approach to computing that trades monolithic simplicity for architectural flexibility.
The 2026 Chiplet Summit, held in late February, marked the point where this narrative shifted from architecture theory to production execution. Companies showed shipping products, not roadmap slides. They discussed yield data, not hypothetical performance projections. They argued about software ecosystems, not just physical interfaces.
The transition is real, but the challenges ahead are less about silicon and more about everything around it.
If you want to understand where chiplet technology stands, look at Amkor Technology's financials. Amkor is one of the world's largest outsourced semiconductor assembly and test (OSAT) providers, and its advanced packaging revenue hit $1.58 billion in Q4 2025 alone. The company guided $2.5-3.0 billion in capital expenditure for fiscal year 2026, with the majority directed toward advanced packaging capacity.
This is money being spent to build production lines for technologies like 2.5D interposers, fan-out wafer-level packaging, and hybrid bonding: the physical infrastructure that connects chiplets into functioning systems.
Amkor's capital intensity reflects a broader industry truth: advanced packaging is becoming as strategically important as front-end lithography. You can design the most brilliant chiplet architecture in the world, but if you can't package it with the density, yield, and reliability that production demands, it stays on a slide deck.
TSMC, Intel, and Samsung are all expanding advanced packaging capacity. TSMC's CoWoS (Chip-on-Wafer-on-Substrate) packaging, used in NVIDIA's H100 and Blackwell GPUs, has been capacity-constrained since 2023. The supply bottleneck in advanced packaging, not in the GPUs themselves, was one of the primary constraints on NVIDIA's ability to ship Blackwell in 2025.
Previous chiplet summits were dominated by architects showing block diagrams and standards bodies debating interface specifications (UCIe, BoW, AIB). The 2026 event had a noticeably different character.
Several specific developments stood out, and they're covered below, starting with the most uncomfortable one.
Here's the uncomfortable truth about chiplets that hardware architects prefer to avoid: the silicon is the easy part.
A chiplet-based system composed of dies from multiple vendors, each with its own power delivery requirements, thermal profiles, and performance characteristics, needs software that understands the heterogeneous hardware and can schedule work accordingly. Current operating systems and runtime libraries mostly assume a homogeneous compute environment. They don't know, and don't care, that one tile is a compute chiplet made on TSMC 3nm and another is an I/O tile on GlobalFoundries 12nm.
The chiplet vision of mixing and matching dies from different vendors into custom systems requires operating systems and runtimes that can discover and schedule across heterogeneous tiles, design and validation tools that work across vendor boundaries, and standardized interfaces for firmware, power management, and telemetry between dies.
None of these software problems are unsolvable, but they're largely unsolved at the ecosystem level. Individual companies (NVIDIA, AMD, Intel) have proprietary solutions for their own chiplet products. An open, multi-vendor chiplet software stack doesn't exist yet. The Chiplet Summit acknowledged this gap more candidly than any previous edition.
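To make the heterogeneity-aware scheduling problem concrete, here is a toy sketch of what a chiplet-aware runtime might do: tag each tile with its function and thermal budget, then place work only on tiles that match. Everything here is hypothetical (the `Tile`/`Task` types, tile names, and the greedy least-loaded policy are illustrative inventions, not any real chiplet runtime API):

```python
from dataclasses import dataclass

@dataclass
class Tile:
    """One chiplet in the package, with the properties a scheduler cares about."""
    name: str
    kind: str          # "compute", "accelerator", or "io"
    process_node: str  # e.g. "TSMC N3", "GF 12LP" (informational only)
    tdp_watts: float   # per-tile thermal budget

@dataclass
class Task:
    name: str
    kind: str          # which tile kind this task needs
    est_watts: float   # estimated power draw while running

def schedule(tasks, tiles):
    """Greedy heterogeneity-aware placement: match task kind to tile kind
    and respect each tile's thermal budget. Returns {task name: tile name}."""
    load = {t.name: 0.0 for t in tiles}
    placement = {}
    for task in tasks:
        candidates = [t for t in tiles
                      if t.kind == task.kind
                      and load[t.name] + task.est_watts <= t.tdp_watts]
        if not candidates:
            raise RuntimeError(f"no tile can host {task.name}")
        best = min(candidates, key=lambda t: load[t.name])  # least-loaded match
        load[best.name] += task.est_watts
        placement[task.name] = best.name
    return placement

tiles = [
    Tile("cpu0", "compute", "TSMC N3", 60.0),
    Tile("cpu1", "compute", "TSMC N3", 60.0),
    Tile("io0",  "io",      "GF 12LP", 15.0),
]
tasks = [Task("infer", "compute", 40.0),
         Task("train", "compute", 50.0),
         Task("dma",   "io",      5.0)]

print(schedule(tasks, tiles))
# -> {'infer': 'cpu0', 'train': 'cpu1', 'dma': 'io0'}
```

The point of the sketch is how much the scheduler has to know: kind, budget, and current load per tile. A homogeneous-world scheduler carries none of that state, which is exactly the gap the ecosystem hasn't closed.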
Every advanced chiplet design requires an interposer or substrate that connects the dies: a piece of silicon, glass, or organic material etched with fine-pitch wiring. The manufacturing of these substrates is capacity-constrained and becoming more so.
TSMC's CoWoS technology uses silicon interposers, which are expensive to produce but provide the finest-pitch interconnects. As AI chips get larger and demand more chiplets per package, the interposer area grows. NVIDIA's Blackwell B200, for example, uses a CoWoS interposer that's among the largest ever produced. The next generation will be bigger still.
Glass substrates are emerging as an alternative. Intel has been developing glass core substrates that offer better electrical performance (lower signal loss) and can be manufactured in larger sizes than silicon interposers. Samsung and several startups are pursuing similar technology. Glass substrates could ease the capacity bottleneck, but the manufacturing process isn't mature enough for high-volume production yet.
CEA, the French national research institute, presented a photonic interposer called STARAC that enables all-to-all optical links for HPC chiplet systems; the talk was posted to YouTube on March 9. Using optical rather than electrical interconnects between chiplets could solve the bandwidth and power challenges that plague current die-to-die interfaces at high data rates. This is further out (more research than product), but it represents the direction of travel.
One of the more intriguing developments adjacent to the chiplet space: shape-shifting molecular complexes that combine memory and computation in the same material. Covered by ScienceDaily in January, the research describes metal coordination compounds that can switch between different electrical states, storing data and performing logic operations simultaneously.
This is neuromorphic computing at the materials level. The hardware mimics the brain's approach of integrating memory and processing rather than shuttling data between separate chips. As a chiplet, a neuromorphic die could be co-packaged with conventional compute and accelerator chiplets, with each handling the workload types it's best suited for.
Fraunhofer IIS in Germany is running an EU-funded project through May 2026 that generates spiking neural networks from conventional deep neural network models, targeting neuromorphic hardware like Intel's Loihi and IBM's TrueNorth. If successful, this could bridge the software gap: developers write conventional neural networks, and the toolchain automatically converts them to run on neuromorphic chiplets.
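The core idea behind such DNN-to-SNN conversion, at least in its simplest rate-coded form, fits in a few lines: a ReLU activation is approximated by the firing rate of an integrate-and-fire neuron. This is a generic textbook illustration of rate coding, not Fraunhofer's actual toolchain:

```python
def spike_count(activation: float, timesteps: int) -> int:
    """Deterministic integrate-and-fire neuron: accumulate the (clamped)
    activation each timestep and emit a spike whenever the membrane
    potential crosses the threshold of 1.0."""
    membrane, spikes = 0.0, 0
    drive = max(0.0, min(1.0, activation))  # ReLU, clamped to [0, 1]
    for _ in range(timesteps):
        membrane += drive
        if membrane >= 1.0:
            spikes += 1
            membrane -= 1.0
    return spikes

# The spike rate converges to the ReLU of the activation as timesteps grow.
T = 1000
for a in (-0.5, 0.25, 0.5, 1.0):
    print(f"activation={a:+.2f} -> spike rate={spike_count(a, T) / T:.3f}")
```

A conversion toolchain does this at network scale: it maps each trained activation into a firing rate (plus weight and threshold rescaling), so the spiking network approximates the original DNN's outputs while running on event-driven neuromorphic hardware.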
The integration of neuromorphic chiplets alongside conventional compute is exactly the kind of heterogeneous system that chiplet architectures were designed for. Whether the neuromorphic computing community can deliver chiplets with sufficient performance and programmability to earn their place in the package remains an open question. But the packaging technology to integrate them now exists.
In a corner of the chiplet world that few outside the space community are watching, Monarch Quantum's integrated-photonics Quantum Light Engines were selected by NASA for a space-based quantum gravity gradiometer. This is a chiplet-scale photonic quantum device that generates entangled photon pairs for precision measurement.
The relevance to the broader chiplet story: quantum photonic devices are small enough and self-contained enough to be packaged as chiplets. A future quantum-classical hybrid system could integrate quantum photonic chiplets alongside classical compute chiplets in a single advanced package, with the packaging technology serving as the integration layer between radically different computing technologies.
That future is years away. But the packaging technology being built today for AI chiplet systems is the same packaging technology that will enable quantum-classical integration tomorrow.
The Chiplet Summit's most important takeaway wasn't any single technical demonstration. It was the emerging consensus about what separates chiplet platform winners from losers.
Silicon design quality matters, but it's not the primary differentiator. Multiple companies can design good compute chiplets, good memory chiplets, good I/O chiplets. The manufacturing processes are becoming accessible through foundry partnerships.
The differentiators are: packaging ecosystem control (who owns the substrate technology and capacity), software maturity (whose tools let you design and validate heterogeneous chiplet systems most efficiently), and supply chain reliability (who can deliver known-good dies and substrates at the volumes and yields production requires).
This maps to a familiar pattern in computing: the platform owners win. Intel's dominance wasn't just about making good x86 processors. It was about controlling the socket, the chipset, and the ecosystem. NVIDIA's AI dominance isn't just CUDA. It's the full stack from GPU to NVLink to software libraries.
In the chiplet era, the platform winners will be the companies that control the packaging technology, the die-to-die interface standards, and the software stack that makes heterogeneous systems programmable. The chip design will be table stakes. Everything around it will be the moat.