🤖 AI Summary
OpenAI is continuing an aggressive round of infrastructure and financing deals with major chipmakers and cloud partners, and CEO Sam Altman says more are coming. Recent moves include a reciprocal arrangement with AMD that grants OpenAI warrants for up to 10% of AMD's stock in exchange for co-developing and deploying next-generation AI GPUs (about 6 GW committed), and an Nvidia investment reported at up to $100B tied to at least 10 GW of capacity. OpenAI's Stargate program with Oracle, SoftBank and others targets another ~10 GW of U.S. facilities, alongside a separate ~$300B cloud contract with Oracle; observers estimate the combined value of these deals approaches $1 trillion. Nvidia's Jensen Huang described direct sales of systems, networking and GPUs as preparation for OpenAI eventually becoming a self-hosted hyperscaler, while acknowledging that each gigawatt of AI data-center capacity could cost tens of billions of dollars ($50-60B, per Huang).
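For a rough sense of scale, here is a back-of-envelope sketch in Python. It assumes the ~6 GW (AMD), ~10 GW (Nvidia) and ~10 GW (Stargate) commitments are additive and applies Huang's $50-60B-per-GW estimate; the breakdown and the additivity are illustrative assumptions, not reported figures.

```python
# Back-of-envelope: combine the capacity commitments cited above with Huang's
# per-GW cost estimate to see the order of magnitude of the implied build-out.
# The grouping below is an assumption for illustration, not a reported breakdown.

GW_COMMITMENTS = {
    "AMD co-development": 6,            # ~6 GW committed
    "Nvidia-backed capacity": 10,       # "at least 10 GW"
    "Stargate (Oracle/SoftBank)": 10,   # ~10 GW of U.S. facilities
}

COST_PER_GW_LOW = 50e9    # Huang's lower estimate, USD per GW
COST_PER_GW_HIGH = 60e9   # Huang's upper estimate, USD per GW

total_gw = sum(GW_COMMITMENTS.values())
low_total = total_gw * COST_PER_GW_LOW
high_total = total_gw * COST_PER_GW_HIGH

print(f"Total committed capacity: ~{total_gw} GW")
print(f"Implied build-out cost: ${low_total/1e12:.1f}T - ${high_total/1e12:.1f}T")
# ~26 GW at $50-60B/GW works out to roughly $1.3-1.6T of data-center capex,
# the same order of magnitude as the ~$1T observers attach to the announced deals.
```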
Technically and economically, these deals matter because they shift how models are built, funded and deployed: suppliers are becoming financial backers and co-design partners, enabling OpenAI to scale rapidly without upfront revenue to match (OpenAI reported roughly $4.5B in revenue for H1 2025). The arrangements accelerate hardware-software co-optimization across chips, systems and networking, but they raise governance and competition questions; critics call some structures "circular" financing, in which vendors effectively underwrite OpenAI's purchases in exchange for equity. If repeated, this model could reshape supplier incentives, supply chains and the economics of training and serving ever-larger AI models.