As AI model training and inference continue to scale, overall system performance is increasingly determined by the reach, bandwidth, latency, and energy efficiency of GPU-to-GPU interconnects.
Optical interconnects extend the reach of scale-up networks: long-reach, high-bandwidth, low-latency optical links allow far more GPUs to be joined into large AI supernodes, making optics a critical technology for high-performance AI computing.
To address system-level challenges in scale-up networks, PhotonicX AI uses a chiplet-based optical I/O architecture to significantly improve GPU-to-GPU interconnect efficiency.