🤖 AI Summary
OpenAI announced a partnership with chipmaker Broadcom to co-design and deploy custom “AI accelerators” and network systems, with new racks slated to enter service next year. Financial terms were not disclosed, but Broadcom CEO Hock Tan said the companies aim to roll out “10 gigawatts of next generation accelerators and network systems,” and Broadcom shares jumped more than 8% on the news. OpenAI framed the move as expanding the partner ecosystem needed to scale AI infrastructure.
Technically, this is a significant step toward hardware-software co-design: purpose-built accelerators and integrated networking at rack scale can improve performance per watt, latency, and overall throughput for large language models and other compute-intensive AI workloads. The deal signals deeper vertical integration by a major model developer and could reshape supply-chain dynamics, cost structures, and competitive relationships with incumbent GPU vendors. For researchers and operators, custom accelerators paired with optimized networking point to efficiency and scalability gains in next-generation model training and inference, while reinforcing the industry trend toward bespoke silicon tailored to AI computation patterns.