TSMC used its 2026 Technology Forum in Hsinchu on Thursday to put a single acronym at the heart of its AI strategy: COUPE. Deputy Co-COO and Senior Vice President Kevin Zhang told attendees that "better AI days lie ahead," pointing to the Compact Universal Photonic Engine as the technology TSMC believes will define the next phase of AI infrastructure.
The message landed at a moment when hyperscalers are openly worried about the limits of copper interconnects inside increasingly dense AI clusters. With training runs spanning hundreds of thousands of accelerators and inference workloads spreading from cloud to edge, bandwidth, power, and thermal headroom have become the binding constraints on building bigger systems.
From transistors to system-level AI infrastructure
Ray Wan, TSMC's director of Asia-Pacific business, framed the forum's broader theme around the company's shift from pure transistor scaling toward system-level integration. He said TSMC will lean on advanced process technology and advanced packaging to help customers accelerate AI innovation, citing exponential growth in compute demand from larger generative models and agentic workflows that span cloud and edge.
That shift is most visible in TSMC's silicon photonics roadmap. COUPE uses TSMC's SoIC-X chip stacking to place an electrical die directly on top of a photonic die, an arrangement that yields ultra-low impedance at the die-to-die interface and tighter integration than conventional co-packaged optics designs.
Why COUPE matters for AI clusters
The performance gap that COUPE targets is large. TSMC says co-packaged optics built on COUPE can deliver roughly 4x better power efficiency and 90% lower latency than copper interconnects, along with 5–10x gains in power efficiency and 10–20x improvements in latency over conventional optical packaging approaches.
Those numbers matter because the cost of moving data between accelerators is increasingly dominating AI cluster economics. As model parallelism scales out, every joule and nanosecond saved on inter-chip communication translates directly into either higher throughput or lower operating cost — sometimes both.
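To make the joules-per-bit framing concrete, here is a rough sketch of how energy-per-bit translates into interconnect power at cluster scale. The pJ/bit and traffic figures below are illustrative assumptions, not TSMC or NVIDIA specifications; only the 4x efficiency ratio comes from TSMC's stated claim.

```python
# Illustrative only: how interconnect energy-per-bit scales into power draw.
# The pJ/bit and traffic values are hypothetical placeholders, not vendor data.

def interconnect_power_watts(pj_per_bit: float, tbps: float) -> float:
    """Power drawn by a link moving `tbps` terabits/s at `pj_per_bit`.

    The units cancel neatly: 1 pJ/bit at 1 Tb/s is exactly 1 W.
    """
    return pj_per_bit * tbps

# Hypothetical comparison: an electrical link vs. one that is 4x more
# power-efficient (the rough advantage TSMC claims for COUPE-based optics).
traffic_tbps = 100.0            # assumed sustained inter-accelerator traffic
electrical_pj = 10.0            # assumed electrical energy cost (illustrative)
optical_pj = electrical_pj / 4  # 4x better power efficiency

print(interconnect_power_watts(electrical_pj, traffic_tbps))  # 1000.0 W
print(interconnect_power_watts(optical_pj, traffic_tbps))     # 250.0 W
```

At these assumed numbers, the same traffic costs 750 W less per 100 Tb/s of sustained bandwidth, which is the kind of delta that compounds across thousands of links in a pod.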
COUPE is also a milestone for the broader optics-on-silicon ecosystem. The platform uses a metalens-based optical coupling element, marking one of the first mainstream production deployments of metasurface technology inside chip packaging.
NVIDIA's photonic switches ride on top
The most concrete commercial signal of COUPE's readiness is NVIDIA's roadmap. The company has previously detailed two silicon-photonics-based switching platforms — Spectrum-X Photonics for Ethernet and Quantum-X Photonics for InfiniBand. Spectrum-X Photonics scales up to 400 Tb/s of aggregate throughput in its top configuration (512 ports × 800 Gb/s), while Quantum-X Photonics delivers roughly 115 Tb/s per switch (144 ports × 800 Gb/s). Both are built on TSMC's COUPE technology. Quantum-X Photonics is slated for early 2026, with NVIDIA already shipping a co-packaged optics version, while Spectrum-X Photonics is scheduled for the second half of 2026.
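The aggregate figures above follow directly from ports × per-port rate, which is easy to check; note the raw math for the 512-port configuration comes out slightly above the rounded 400 Tb/s headline figure.

```python
# Sanity check of the per-switch aggregate throughput figures quoted for
# NVIDIA's photonic switch platforms: ports x per-port line rate.

def aggregate_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate switch throughput in Tb/s."""
    return ports * gbps_per_port / 1000

spectrum_x = aggregate_tbps(512, 800)  # Spectrum-X Photonics, top configuration
quantum_x = aggregate_tbps(144, 800)   # Quantum-X Photonics

print(f"Spectrum-X Photonics: {spectrum_x:.1f} Tb/s")  # 409.6 Tb/s
print(f"Quantum-X Photonics:  {quantum_x:.1f} Tb/s")   # 115.2 Tb/s
```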
That timeline aligns with TSMC's own guidance for COUPE entering volume production this year, and it underscores how tightly the two companies' AI roadmaps are now stitched together at the packaging layer.
Implications
If COUPE ramps on schedule, 2026 may be remembered as the year AI data centers began the long-promised pivot from copper to light at the rack and pod level. For TSMC, the forum keynote was also a reminder that its moat in advanced packaging — not just leading-edge logic — is becoming central to the AI buildout. For NVIDIA and the hyperscalers behind it, the message is simpler: the next generation of AI factories has a photonics layer baked in.