Did Nvidia Just Repeat Cisco's Mistake and Build a House of Cards? (www.fool.com)

🤖 AI Summary
Nvidia announced up to $100 billion in investment into OpenAI — money that will largely flow back into Nvidia hardware as OpenAI deploys systems across Oracle’s cloud buildout. OpenAI says it will need roughly 10 gigawatts of power (about 4–5 million GPUs), which is on the order of the total GPUs Nvidia expects to ship this year. The first $10 billion will be released once the first gigawatt of capacity is live, with the remainder staged as new data centers come online. That circular-financing structure secures near-term demand but also means Nvidia is effectively funding a major customer to keep buying its chips.

The move is strategically defensive: hyperscalers and cloud providers (Google TPUs, AWS Trainium/Inferentia, Microsoft, and even OpenAI’s own chip efforts and a $10B Broadcom order) are building custom silicon, and the market is shifting toward inference workloads, where Nvidia’s CUDA-led moat is weaker. Training still favors Nvidia’s GPUs and software stack, but inference’s continuous cost-per-inference dynamics favor cheaper, specialized accelerators. Nvidia has also taken a $5B stake in Intel and partnered on AI processors to guard the inference market.

The deal bolsters Nvidia’s near-term dominance but concentrates risk: if AI spending slows or OpenAI’s unproven business model falters, the arrangement could resemble Cisco’s circular-financing misstep and expose Nvidia to significant downside.