Lightwave Logic's deal with Tower Semiconductor puts polymer modulators into production silicon photonics. Meanwhile, III-V compounds are pushing 400G per lane. The fight to overcome silicon's optical limitations is getting real.

On March 11, Lightwave Logic and Tower Semiconductor announced a development agreement to integrate electro-optic polymer modulators into Tower's PH18 silicon photonics platform. This deal matters for the entire AI infrastructure stack. A production-grade foundry is working to incorporate exotic optical materials alongside standard silicon for the first time.
Tower Semiconductor is a specialty foundry that also partners with NVIDIA. Its PH18 platform is a mature silicon photonics process used by multiple companies to build optical transceivers for data centers. Getting polymer modulators onto PH18 means the technology could reach volume production, not just lab demos.
Silicon modulators are hitting performance limits at the exact moment AI data centers need dramatically higher-bandwidth optical interconnects. New materials are necessary.
Silicon photonics has been a genuine success story. By building optical components (waveguides, modulators, photodetectors) using the same CMOS fabrication infrastructure that makes processors and memory chips, the industry has driven down the cost of optical transceivers to the point where they're standard equipment in every data center.
But silicon has a physical limitation as a modulator material. The electro-optic effect in silicon is weak. To modulate light (switch it on and off to encode data), silicon devices rely on injecting or depleting electrical carriers, which changes the refractive index enough to alter the light's phase. This works, but it's slow and power-hungry compared to materials with strong intrinsic electro-optic effects.
At 100 Gbps per lane, silicon modulators perform adequately. At 200 Gbps, they're strained. At 400 Gbps and above (the speeds needed for 1.6T and 3.2T optical modules), silicon modulators require complex signal processing (PAM4 or higher-order modulation) and consume significant power to maintain signal quality.
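The trade-off behind PAM4 and higher-order modulation is simple arithmetic: packing more bits into each symbol lowers the symbol rate the modulator must sustain, at the cost of tighter signal-quality requirements. A minimal sketch of that relationship:

```python
# Illustrative arithmetic: how higher-order modulation trades symbol rate
# (Gbaud) for bits per symbol. PAM-N carries log2(N) bits per symbol.
import math

def required_baud(bit_rate_gbps: float, pam_levels: int) -> float:
    """Symbol rate (Gbaud) needed to carry a given bit rate with PAM-N."""
    bits_per_symbol = math.log2(pam_levels)
    return bit_rate_gbps / bits_per_symbol

# A 400 Gbps lane:
print(required_baud(400, 2))  # NRZ: 1 bit/symbol -> 400.0 Gbaud
print(required_baud(400, 4))  # PAM4: 2 bits/symbol -> 200.0 Gbaud
```

Even with PAM4 halving the symbol rate, 200 Gbaud is demanding territory for a carrier-depletion silicon modulator, which is why the signal processing and power overhead grow so quickly at these lane rates.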
The power consumption issue is directly relevant to AI data center economics. Optical transceivers already account for a meaningful fraction of total system power in large GPU clusters. If modulator power consumption scales superlinearly with data rate (which it does for silicon modulators), the optical subsystem becomes a growing tax on the total power budget.
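The superlinear-scaling point can be made concrete with a toy model: if modulator power grows as a power law of data rate with exponent greater than one, then energy per bit (power divided by rate) rises as rates climb. The exponent below is an illustrative assumption, not a measured silicon value:

```python
# Toy model of the scaling claim: if power P ∝ R**alpha with alpha > 1,
# then energy per bit E = P / R ∝ R**(alpha - 1), so higher lane rates
# cost more energy per bit. alpha = 1.5 is purely illustrative.

def energy_per_bit(rate_gbps: float, alpha: float = 1.5, k: float = 1.0) -> float:
    """Relative energy per bit under a superlinear power-vs-rate model."""
    return k * rate_gbps ** (alpha - 1)

# Quadrupling the lane rate from 100G to 400G doubles energy per bit
# under this (assumed) exponent:
ratio = energy_per_bit(400) / energy_per_bit(100)
print(ratio)  # 2.0
```

Under a linear scaling (alpha = 1), energy per bit would stay flat; the whole case for stronger electro-optic materials rests on silicon's alpha being meaningfully above one at these rates.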
Electro-optic polymers are organic materials engineered to have very strong electro-optic coefficients, meaning a small applied voltage produces a large change in refractive index. Compared to silicon, polymer modulators can switch faster, at lower voltage, and with lower power consumption.
Lightwave Logic has been developing proprietary polymer formulations for over a decade. The company claims electro-optic coefficients 10-50x higher than silicon's, depending on the specific formulation and operating conditions. In practical terms, this translates to modulators that can operate at 400+ Gbps per lane with lower drive voltage and lower power consumption than silicon equivalents.
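Why a larger electro-optic coefficient means a lower drive voltage falls out of the standard Mach-Zehnder modulator model: the half-wave voltage scales inversely with the coefficient r33. The sketch below uses representative literature-class numbers (not Lightwave Logic or Tower data) and compares a high-r33 polymer against thin-film lithium niobate, a common reference electro-optic material:

```python
# Back-of-envelope sketch, illustrative numbers only. Half-wave voltage of
# a simple Mach-Zehnder phase shifter: V_pi = (lambda * g) / (n^3 * r33 * Gamma * L)
# where g is the electrode gap, Gamma the field-mode overlap, L the length.

def v_pi(wavelength_nm, gap_um, n, r33_pm_per_v, overlap, length_mm):
    """Half-wave voltage (volts) for a textbook Mach-Zehnder model."""
    lam = wavelength_nm * 1e-9
    g = gap_um * 1e-6
    r33 = r33_pm_per_v * 1e-12
    length = length_mm * 1e-3
    return (lam * g) / (n ** 3 * r33 * overlap * length)

# Same geometry, two material classes (representative, assumed values):
v_tfln = v_pi(1550, 5, 2.2, 31, 0.5, 10)      # thin-film lithium niobate class
v_polymer = v_pi(1550, 5, 1.7, 150, 0.5, 10)  # high-r33 EO polymer class
print(round(v_tfln, 2), round(v_polymer, 2))  # polymer needs less drive voltage
```

The ratio is the point, not the absolute numbers: with everything else held fixed, a several-times-larger r33 buys a several-times-smaller drive voltage, and drive voltage enters modulator power quadratically.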
The historical knock on polymers has been stability. Organic materials can degrade over time, especially at the elevated temperatures inside data center equipment. Lightwave Logic has spent years engineering polymer stability, and its more recent formulations show multi-year operational lifetimes at data center temperatures. But proving 10+ year reliability in a production environment (the standard the industry requires) takes time and extended testing.
The Tower deal addresses the other historical objection: manufacturability. Building polymer modulators in a lab is one thing. Building them at scale on a production silicon photonics line, with the yield and consistency that commercial deployment demands, is another. Tower's involvement signals that the foundry believes polymer integration is feasible on PH18, and, as one commenter on the r/LWLG subreddit noted, "Tower does not dedicate resources unless it is going to be profitable."
Electro-optic polymers aren't the only alternative to silicon modulators. III-V semiconductor compounds (indium phosphide, gallium arsenide, and their alloys) have been used in photonics for decades, primarily in telecom laser sources. The advantage of III-V materials: they can both generate and modulate light efficiently, unlike silicon, which can only modulate.
OpenLight, a company spun out of Intel's silicon photonics group, showed attention-grabbing III-V silicon photonics results at OFC 2026. Its demonstrations include 400G-per-lane modulators and 1.6T transceiver photonic integrated circuits (PICs) that combine III-V laser sources, modulators, and photodetectors on a single chip using heterogeneous integration, bonding III-V layers directly onto silicon wafers.
The appeal of III-V integration is performance: by combining the best optical properties of III-V materials with the manufacturing scale of silicon CMOS, you potentially get devices that are faster, more efficient, and more integrated than either technology alone. The challenge is that III-V materials require different processing conditions than silicon, and bonding them together reliably at wafer scale is a non-trivial engineering problem.
STMicroelectronics is pursuing a related path, using through-silicon via (TSV) technology to stack electronic and photonic components vertically, avoiding the need for monolithic integration of dissimilar materials. This 3D packaging approach adds manufacturing complexity but may be more practical for near-term production.
What's emerging is a bifurcation in the silicon photonics ecosystem:
Traditional silicon photonics remains the workhorse for current-generation 400G and 800G transceivers. It's proven, manufacturable, and cost-effective at today's data rates. Companies like Intel, Cisco, and Broadcom have billions invested in silicon photonics production lines and aren't abandoning them.
Exotic material integration is coming for the next generation: 1.6T modules shipping in volume in 2026-2027, and 3.2T modules on the horizon for 2028-2029. This is where polymers, III-V compounds, and potentially other materials (lithium niobate thin films, barium titanate) enter the picture.
The strategic question for every photonics company: when do you start integrating exotic materials, and which ones do you bet on? Too early, and you're fighting manufacturability problems while your competitors ship silicon-only products. Too late, and you're stuck with silicon modulators that can't keep up with the bandwidth demands of the next generation of AI clusters.
IQE, a UK-based manufacturer of III-V compound semiconductor wafers (InP substrates, specifically), is positioning itself for this transition. The company supplies epitaxial wafers used in photonic devices for AI data centers, and it's seeing growing demand as III-V integration gains traction.
Away from the commercial ecosystem, academic research is pushing photonics in a more radical direction. The University of Sydney recently demonstrated a photonic chip that performs AI calculations using light instead of electricity. Not just optical communication, but optical computation.
This is the long-theorized concept of photonic computing: using the properties of light (interference, diffraction, phase) to perform mathematical operations directly, without converting to electrical signals. If photonic computing matures, it could bypass the power consumption problem entirely for certain AI inference workloads.
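The core idea of photonic computing can be sketched without any optics hardware: coherent light naturally computes weighted sums, because field amplitudes add through interference and a photodetector reads out intensity. The toy below is a conceptual illustration (not a model of the Sydney or inverse-design chips), with complex arithmetic standing in for optical fields:

```python
# Conceptual sketch: interference as a weighted sum. Input signals modulate
# optical field amplitudes; superposing the fields performs the sum; a
# photodetector measures intensity = |sum of fields|^2.

def photonic_dot(inputs, weights):
    """Model a weighted sum as interference of complex field amplitudes."""
    field = sum(x * w for x, w in zip(inputs, weights))
    return abs(field) ** 2  # detected optical power

# Equal-phase paths interfere constructively:
print(photonic_dot([1.0, 1.0], [0.5, 0.5]))   # 1.0
# Opposite-phase paths cancel (destructive interference):
print(photonic_dot([1.0, 1.0], [0.5, -0.5]))  # 0.0
```

The linear algebra at the heart of neural network inference is exactly this kind of weighted sum, which is why optical interference is an attractive substrate for it; the hard parts the article notes (precision, programmability, electronic integration) live outside this idealized picture.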
A related demonstration, widely shared on X: inverse-designed nanophotonic chips that classify images "at the speed of photons." The chip uses a machine-learning-optimized optical structure to sort light patterns into categories, performing inference without transistors. The post by @bravo_abad got 773 likes and 112 retweets, high engagement for an academic photonics paper.
These are research results, however, not products. Photonic computing faces enormous challenges in precision, programmability, and integration with conventional electronic systems. But the direction of travel is clear: light isn't just the future of data movement. It might also be part of the future of data processing.
JPMorgan's recent optical networking deep-dive framed the economics bluntly: top five US hyperscaler capex is rising from approximately $450 billion in 2025 to roughly $850 billion in 2027. A meaningful and growing fraction of that spend goes to optical interconnects. Every dollar of GPU spending requires an associated investment in the optical network that connects GPUs to each other and to storage.
As GPU cluster sizes grow from racks to halls to buildings, the optical network becomes a larger percentage of total system cost. For a 100,000-GPU cluster spread across multiple buildings, the optical interconnect infrastructure can represent 15-25% of total system cost. At those percentages and those total spend numbers, even small improvements in optical component efficiency or cost translate to billions of dollars in aggregate savings.
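The "billions in aggregate savings" claim is easy to check with the article's own figures. The efficiency-gain number below is a hypothetical assumption for illustration, and capex is used as a rough proxy for cluster system spend:

```python
# Rough arithmetic using the article's figures: ~$850B top-five US
# hyperscaler capex by 2027, optical interconnect at 15-25% of
# large-cluster system cost. The 5% efficiency gain is an assumption,
# not a source claim, and capex is treated as a proxy for system spend.

capex_2027_bn = 850                  # article figure, $ billions
optical_share = (0.15, 0.25)         # article's interconnect cost range
efficiency_gain = 0.05               # hypothetical optical cost improvement

savings_bn = [capex_2027_bn * share * efficiency_gain for share in optical_share]
print(f"~${savings_bn[0]:.1f}B to ~${savings_bn[1]:.1f}B")
```

Even under these rough assumptions, a single-digit-percent improvement in optical cost or efficiency is worth several billion dollars a year, which is the scale of prize motivating the materials race the article describes.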
That's why NVIDIA invested $4 billion in photonics companies. That's why Tower Semiconductor is committing resources to polymer modulator integration. That's why OpenLight is pushing III-V heterogeneous integration to 400G per lane. The optical interconnect is no longer a commodity component in the data center. It's a strategic bottleneck where materials innovation directly impacts the economics of AI infrastructure.
The battle of materials (silicon vs. polymers vs. III-V compounds) isn't an academic exercise. It's a fight over who supplies the arteries of the AI economy.