🤖 AI Summary
OpenAI and Broadcom announced a plan to deploy up to 10 gigawatts of custom AI chips, with rollouts beginning in the second half of 2026. The arrangement will pair OpenAI’s custom chip racks with Broadcom networking equipment, signaling a large-scale, integrated hardware push that follows OpenAI’s existing chip partnerships with Nvidia and AMD. The market reacted strongly: Broadcom shares jumped about 12% in premarket trading on the news.
The scale and integration matter for the AI/ML community because 10 GW is equivalent to the power footprint of multiple hyperscale data centers and implies thousands of racks optimized for model training and inference. Using Broadcom networking gear highlights the importance of high-bandwidth, low-latency interconnects for distributed training at this scale. Strategically, the move accelerates verticalization—OpenAI designing or specifying end-to-end compute stacks—and further diversifies suppliers beyond Nvidia, potentially shifting competitive dynamics and supply-chain resilience. Practically, it foreshadows significant investments in power, cooling, and datacenter networking architecture, and could encourage more custom silicon and co-designed systems optimized for large foundation models.
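To make the scale concrete, here is a back-of-envelope sketch of what 10 GW could imply. The per-datacenter and per-rack power figures below are illustrative assumptions, not numbers from the announcement; real hyperscale campuses and AI rack densities vary widely.

```python
# Back-of-envelope sizing for a 10 GW deployment.
# All per-unit figures are illustrative assumptions, not from the announcement.

TOTAL_GW = 10
total_mw = TOTAL_GW * 1000  # 10,000 MW

# Assumed power draw of one hyperscale data center campus (MW) -- assumption.
MW_PER_DATACENTER = 100

# Assumed power draw of one high-density AI rack (kW) -- assumption.
KW_PER_RACK = 120

datacenters = total_mw / MW_PER_DATACENTER
racks = total_mw * 1000 / KW_PER_RACK

print(f"~{datacenters:.0f} hyperscale-class data centers")
print(f"~{racks:,.0f} high-density racks")
```

Under these assumptions, 10 GW works out to on the order of a hundred hyperscale-class campuses and tens of thousands of racks, which is consistent with the "multiple hyperscale data centers" and "thousands of racks" characterization above.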