OpenAI and Broadcom to deploy 10 GW of OpenAI-designed AI accelerators (openai.com)

🤖 AI Summary
OpenAI and Broadcom announced a multi‑year collaboration to design, build, and deploy 10 gigawatts of OpenAI‑designed AI accelerator racks, with Broadcom supplying the networking and systems integration. OpenAI will architect the custom accelerators and system designs, while Broadcom provides Ethernet, PCIe, and optical connectivity and will deploy the racks from the second half of 2026 through the end of 2029 across OpenAI facilities and partner data centers. The deal is formalized via a term sheet and is explicitly targeted at scale‑up and scale‑out AI clusters.

Technically and strategically, this is significant because it signals a shift toward vertically integrated, model‑informed hardware at hyperscale. By embedding lessons from frontier model development directly into silicon, OpenAI aims to boost performance and power efficiency for transformer workloads, and Broadcom's involvement frames Ethernet as the preferred interconnect for scaling AI clusters over alternatives such as InfiniBand. A 10 GW deployment implies massive capacity and operational impact: potentially lower per‑inference costs, higher training throughput, and new supply‑chain dynamics as custom accelerators and standardized networking are married at rack scale, accelerating the infrastructure available for next‑generation AI research and products.