🤖 AI Summary
Oracle co-CEO Clay Magouyrk told CNBC he’s confident OpenAI can shoulder massive cloud bills—saying “of course” the company could pay roughly $60 billion for a year’s worth of infrastructure—after the two firms agreed in July to a five‑year partnership worth more than $300 billion. His comments come as OpenAI reports explosive user growth (about 800 million weekly active users) but also a $5 billion net loss in 2024, highlighting a tension between rapid scale and hefty operating costs. Fellow Oracle co-CEO Mike Sicilia noted that Oracle is already embedding OpenAI models into vertical products, such as a patient portal for Cerner electronic health records, signaling deeper enterprise integration.
For the AI/ML community this underscores two trends: first, commercial cloud and enterprise deals are becoming a critical revenue mechanism to fund model scale; second, infrastructure is diversifying beyond public GPU rentals. OpenAI currently runs models on Nvidia GPUs via Oracle and other cloud partners (CoreWeave, Google, Microsoft) while also developing custom Broadcom-built AI chips, with Broadcom/OpenAI targeting a 10‑gigawatt deployment. That magnitude raises energy and supply-chain implications even as it accelerates specialized hardware, multi‑cloud consumption, and verticalized AI deployments—factors that will shape model economics, latency, and enterprise adoption going forward.