Investments in Coherent and Lumentum, OFC 2026 timing, and the conspicuous absence of NVLink CPO. NVIDIA knows optical interconnects are the next bottleneck, and it's moving to own the solution.

In early March, NVIDIA invested $4 billion across two companies: Coherent Corp and Lumentum Holdings. Both are photonics companies. Both make components critical to optical interconnects in data centers. The timing, weeks before OFC 2026 (the optical networking industry's premier conference), was not coincidental.
NVIDIA doesn't write $4 billion checks casually. This tells you that optical interconnects have moved from "interesting technology" to "strategic priority on par with GPU silicon."
Photonics is replacing copper in AI data centers; the questions are how fast, in what form, and who controls the technology.
The physics is straightforward. Electrical signals in copper cables lose energy as heat, with losses increasing at higher data rates and longer distances. At 400 Gbps, copper works fine over a couple of meters. At 800 Gbps and 1.6 Tbps (the rates needed for next-generation AI cluster interconnects), copper's signal integrity degrades rapidly beyond about one meter.
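The scaling intuition can be made concrete with a first-order skin-effect model: copper attenuation (in dB) grows roughly with the square root of the signaling frequency times cable length, so under a fixed loss budget, each doubling of the data rate cuts reach by about 1/√2. The sketch below is illustrative only; the loss budget and coefficient are assumed placeholders, not datasheet values.

```python
import math

# Hedged sketch: first-order skin-effect model of passive copper reach.
# Assumption: attenuation (dB) ~ k * sqrt(f) * length, with a fixed
# end-to-end loss budget. Both constants below are illustrative, chosen
# only to show the scaling, not taken from any cable datasheet.
LOSS_BUDGET_DB = 20.0        # assumed total budget for a passive copper link
K_DB_PER_M_SQRT_GHZ = 1.0    # assumed loss coefficient

def nyquist_ghz(rate_gbps: float, pam4: bool = True) -> float:
    """Fundamental signaling frequency; PAM4 halves the baud rate vs NRZ."""
    baud = rate_gbps / (2 if pam4 else 1)
    return baud / 2

def max_reach_m(rate_gbps: float) -> float:
    """Reach at which sqrt(f)-scaled loss exhausts the budget."""
    f = nyquist_ghz(rate_gbps)
    return LOSS_BUDGET_DB / (K_DB_PER_M_SQRT_GHZ * math.sqrt(f))

for rate in (400, 800, 1600):
    print(f"{rate:5d} Gbps -> reach ~ {max_reach_m(rate):.2f} m")
```

With these (assumed) constants the model lands on roughly 2 m at 400 Gbps and 1 m at 1.6 Tbps, consistent with the ranges cited above: quadrupling the data rate halves copper's usable reach.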
AI training clusters are getting bigger. NVIDIA's NVL72 rack connects 72 GPUs with high-bandwidth NVLink. The next generation will connect multiple racks. The rack-to-rack distances involved, 10 to 100+ meters in a data center, are well beyond copper's viable range at the bandwidths required.
Optical interconnects solve this by converting electrical signals to light, transmitting over fiber, and converting back. Light doesn't have copper's distance-dependent loss problem. A 1.6 Tbps optical link works just as well at 100 meters as at 1 meter. The energy cost per bit is lower. The cables are lighter and easier to manage.
The silicon photonics market tracks accordingly: from $2.3 billion today to a projected $17.8 billion within the decade. The optical interconnect market specific to AI data centers is growing from $9.94 billion in 2025 to a projected $31.04 billion by 2033, a 15.3% CAGR. These growth rates are driven almost entirely by AI infrastructure buildout.
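The quoted growth rate checks out arithmetically; compounding the 2025 figure to the 2033 figure over eight years yields the stated ~15.3% CAGR:

```python
# Sanity check of the growth figures quoted above: $9.94B (2025)
# compounding to $31.04B (2033) over 8 years.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start/end values."""
    return (end / start) ** (1 / years) - 1

rate = cagr(9.94, 31.04, 2033 - 2025)
print(f"implied CAGR: {rate:.1%}")  # -> implied CAGR: 15.3%
```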
Coherent Corp is the product of II-VI Incorporated's 2022 acquisition of Coherent, Inc., after which the combined company took the Coherent name. It makes indium phosphide lasers, silicon photonics transceivers, and optical components used in data center interconnects. Coherent's silicon photonics platform is directly relevant to AI data center deployments: its components sit in the transceivers that connect switches, GPUs, and storage across the network.
Lumentum makes laser sources, optical amplifiers, and photonic integrated circuits. Its product line covers both telecom (long-haul fiber networks) and datacom (data center interconnects). Lumentum's expertise in high-power laser sources is particularly relevant as optical links move to higher data rates requiring more sophisticated light sources.
Together, these investments give NVIDIA influence over two of the most important photonics component suppliers in the data center ecosystem. NVIDIA isn't acquiring them outright; these are equity investments, not acquisitions. But $4 billion buys significant board influence, supply chain priority, and co-development access.
OFC 2026 was dominated by a technical and commercial debate that will shape data center architecture for the next decade: pluggable optical transceivers versus co-packaged optics (CPO).
Pluggable transceivers are the status quo. An optical transceiver module plugs into a faceplate port on a switch or NIC, converting electrical signals to optical and vice versa. When a transceiver fails, you pull it out and plug in a new one. The supply chain is mature, standardized, and competitive.
CPO takes a different approach: the optical engine is integrated directly into (or adjacent to) the switch or GPU package itself. Instead of running electrical signals from the chip to the faceplate and then converting to optical, you convert at the chip boundary. This eliminates the electrical loss in the trace between the chip and the faceplate, a loss that becomes significant at 1.6 Tbps and above.
The trade-off: CPO is more efficient but less serviceable. If a co-packaged optical engine fails, you can't just swap a module. You may need to replace the entire switch or GPU package. That's an expensive failure mode that network operators are understandably nervous about.
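The serviceability concern can be framed as an expected-cost comparison. The sketch below uses entirely hypothetical figures (failure rate, module cost, switch cost are placeholders, not vendor data); the point is the shape of the trade-off, not the numbers.

```python
# Hedged back-of-envelope on the CPO serviceability trade-off.
# All constants are hypothetical placeholders, not vendor figures.
LINKS_PER_SWITCH = 64
ANNUAL_FAIL_RATE_PER_LINK = 0.02  # assumed optical-engine failure rate

def annual_repair_cost(unit_cost: float, links: int = LINKS_PER_SWITCH) -> float:
    """Expected yearly repair spend for one switch's optical links."""
    return links * ANNUAL_FAIL_RATE_PER_LINK * unit_cost

pluggable = annual_repair_cost(unit_cost=1_000)   # swap one failed module
cpo = annual_repair_cost(unit_cost=30_000)        # replace the whole switch

print(f"pluggable: ${pluggable:,.0f}/yr  CPO: ${cpo:,.0f}/yr")
print(f"gap CPO must recoup in power/density savings: ${cpo - pluggable:,.0f}/yr")
```

On any such model, CPO pencils out only where its per-link efficiency and density savings, multiplied across hundreds or thousands of links, exceed the repair-cost gap, which is exactly why it lands first in the highest-bandwidth tiers of the network.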
NVIDIA's position is revealing. The company is adopting CPO for its Spectrum-X Ethernet switches and InfiniBand switch ASICs, the "scale-out" network that connects racks across a data center. For these applications, where hundreds or thousands of optical links connect the fabric, CPO's efficiency advantages outweigh the serviceability concerns.
But NVIDIA has NOT announced CPO for NVLink, the "scale-up" network that connects GPUs within a rack. NVLink runs over copper within the NVL72 rack. The distances are short enough (< 1 meter) that copper still works. The question is whether the next generation of NVLink, connecting GPUs across multiple racks at higher bandwidth, will require optical, and if so, whether NVIDIA will go CPO or pluggable.
This gap (CPO for scale-out but not for scale-up) is the most interesting technical frontier in AI interconnect design. Whoever solves rack-to-rack GPU interconnect at NVLink-class bandwidth using optics will have a significant architectural advantage.
OFC 2026 also saw the formal launch of the Open CPX Multi-Source Agreement, a consortium including Ciena, Coherent, Marvell, Molex, Samtec, and TeraHop. The MSA defines specifications for optical engine packaging, thermal management, and electrical interfaces for co-packaged optics.
Standardization efforts like this cut both ways. They suggest CPO is mature enough for the industry to agree on common specs. But MSAs can also be used by incumbents to lock in architectural choices that favor their existing designs while creating barriers for novel approaches.
The conspicuous absentees from the Open CPX MSA: NVIDIA and Broadcom, the two largest consumers of co-packaged optics in AI data centers. NVIDIA's photonics investments suggest it prefers a more proprietary approach, co-developing optical technology with its invested companies (Coherent, Lumentum) rather than adopting an industry standard it doesn't control.
This is a familiar NVIDIA playbook. NVLink is proprietary. CUDA is proprietary (functionally, if not technically). If NVIDIA develops its own CPO technology for next-generation NVLink, it would create yet another layer of vendor lock-in in the AI infrastructure stack.
TrendForce projects CPO penetration reaching approximately 35% of data center optical deployments by 2030. That implies 65% of deployments will still use pluggable transceivers at the end of the decade, a coexistence scenario where both technologies serve different parts of the network.
The 35% figure maps roughly to the highest-bandwidth applications: spine switches, GPU cluster interconnects, and storage network backbones where the efficiency advantage of CPO justifies the serviceability trade-off. Lower-bandwidth applications (top-of-rack switches, management networks, connectivity to legacy equipment) will remain pluggable for years.
For NVIDIA specifically, the timeline depends on when NVLink outgrows copper. The NVL72 rack's NVLink 6 runs at bandwidths that copper can handle over short distances. The next generation, connecting multiple NVL72 racks into "mega-racks," will almost certainly require optical interconnects. Whether that's 2027 or 2028 depends on when NVIDIA ships the multi-rack NVLink architecture that Jensen Huang has been hinting at.
NVIDIA's $4 billion investment reshapes the competitive dynamics of the silicon photonics market. Coherent and Lumentum, already the two largest pure-play photonics suppliers for data centers, now have a deep-pocketed strategic investor with a direct line to the largest GPU market on Earth.
Other photonics companies (Marvell, which makes PAM4 DSPs used in transceivers; Broadcom, which makes both ASICs and optical components; Intel, which has a silicon photonics division; and a constellation of startups) now face a market where two of their suppliers have a preferred relationship with their largest customer's largest competitor.
The implications run through the supply chain. An optical transceiver startup trying to sell into hyperscaler data centers now has to compete against suppliers that are partially owned by the company that makes the GPUs the data center exists to run. Not an impossible barrier, but it tilts the field.
For the AI infrastructure buildout broadly, NVIDIA's photonics investments are a positive signal. They accelerate the transition from copper to optical interconnects, which is necessary for AI clusters to continue scaling. The $4 billion also validates the silicon photonics thesis: that photonic components can be manufactured using semiconductor-like processes at semiconductor-like scale, bringing down costs enough for mass deployment.