🤖 AI Summary
The AI surge is refocusing investment and innovation on the networking that stitches chips and racks into giant, distributed compute systems. Large incumbents (Nvidia, via its Mellanox and Cumulus acquisitions; Broadcom; Marvell) are already building high‑speed fabrics for AI datacenters, while startups such as Lightmatter, Celestial AI and PsiQuantum are attracting billions to commercialize optical and silicon‑photonics interconnects. ARM's acquisition of chiplet specialist DreamBig and Broadcom's rumored "Thor Ultra" networking chip underscore that connecting GPUs and accelerators is now as strategic as the chips themselves. Venture funding and strategic deals reflect the belief that electron‑based interconnects may not scale fast enough for exploding AI bandwidth needs.
Technically, the focus spans every layer, from on‑chip links and chiplets to rack‑to‑rack fabrics, where light‑based links promise much higher throughput and lower latency than copper signaling. Startups tout 3D photonic stacks and optical engines for linking multiple chips, while PsiQuantum applies optics toward quantum processors. Challenges remain: photonics is costly, requires specialized fabrication, and must interoperate with existing electrical ecosystems, so hyperscalers and large silicon vendors retain an advantage in scaling. The result is a hybrid, multi‑year transition: heavy R&D and acquisitions now, with the potential for a photonics‑led networking future that reshapes AI system architectures.