🤖 AI Summary
Nvidia CEO Jensen Huang told CNBC’s Jim Cramer that the company’s relationships with Oracle, Intel, CoreWeave and OpenAI are strategic partnerships shaping the AI infrastructure market. He pushed back on reports that Oracle’s cloud sees thin margins from Nvidia chips, calling the systems “supercomputers” that will be profitable over their lifetimes. On Intel, Huang framed a decades-long rivalry turned cooperation—Nvidia is buying $5 billion of Intel stock—saying he prefers a future where multiple players can win. He also highlighted CoreWeave, an early investment that turned lucrative after the cloud GPU provider’s IPO (priced at $40, trading around $130), underscoring demand for Nvidia-powered rental compute.
The biggest technical and commercial implication is Nvidia's direct, large-scale commitment to AI infrastructure: Nvidia agreed to help OpenAI build 10 gigawatts of AI data-center capacity, with up to $100 billion allocated as each gigawatt is deployed, and OpenAI will buy directly from Nvidia rather than only through cloud resellers. That deal, plus deeper ties to cloud and chip partners, signals accelerating GPU demand, enormous power and networking requirements, and shifting go-to-market channels for AI compute. For the AI/ML community, it points to faster access to large-scale training and inference capacity, intensified competition among cloud providers, and growing importance of energy and system-level integration.