🤖 AI Summary
Nvidia has struck a roughly $1.5 billion deal with Lambda to lease back about 18,000 GPU servers that Lambda originally purchased from the company — reportedly a four‑year, $1.3B lease for 10,000 units plus a separate $200M arrangement for another 8,000 (likely lower‑end or older chips). The leased servers will be used in‑house by Nvidia researchers rather than resold, making Nvidia by far Lambda's largest customer. Lambda, a cloud provider whose customers include Microsoft and Amazon and which is reportedly courting Google, OpenAI, xAI and Anthropic, will continue to host and operate the hardware.
The deal is significant because it reflects how the industry is managing extreme demand and capital constraints for AI compute: instead of building more datacenters or hoarding inventory, Nvidia is tapping cloud partners to quickly scale internal research capacity while limiting capital outlay and operational overhead. Technically, these are full GPU server stacks (likely mixing different generations), giving Nvidia flexible, leased access to large-scale training and inference infrastructure without immediate capex. The arrangement echoes prior partner-financing patterns (e.g., CoreWeave) and could affect supply dynamics and pricing for third parties — both stabilizing access for Nvidia and shaping how GPU vendors, cloud providers, and AI labs coordinate capacity going forward.