🤖 AI Summary
NVIDIA has quietly become the fulcrum of the AI compute economy: its GPUs and the networking business it gained from Mellanox now drive the vast majority of its revenue (data-center sales grew from roughly $47B for all of FY2024 to $41.1B in a single recent quarter), and it is actively underwriting the market by buying unsold cloud compute (~$6.3B committed) and renting GPUs back to partners (~$1.5B over four years to Lambda). The company has invested in and given preferential access to so-called "neoclouds" (CoreWeave, Lambda, Nebius, Crusoe), which buy NVIDIA GPUs, largely as systems built by Dell and Supermicro (themselves about 39% of NVIDIA's recent revenue), and use multi-year compute contracts as collateral to raise large amounts of private debt from banks and investors.
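To make the scale of that underwriting concrete, here is a rough back-of-envelope sketch in Python using only the figures cited above; annualizing the four-year Lambda rental commitment and comparing it against a single quarter of data-center revenue are illustrative assumptions, not claims from the source.

```python
# Back-of-envelope sketch of the demand NVIDIA itself is underwriting,
# using only figures quoted in the summary. Annualizing the Lambda deal
# and framing it against one quarter of revenue are rough assumptions.

quarterly_dc_revenue_b = 41.1       # NVIDIA data-center revenue, one recent quarter ($B)
unsold_compute_commitment_b = 6.3   # committed purchases of unsold cloud compute ($B)
lambda_rental_total_b = 1.5         # GPUs rented back from Lambda over four years ($B)

lambda_rental_per_year_b = lambda_rental_total_b / 4
underwritten_b = unsold_compute_commitment_b + lambda_rental_per_year_b
share = underwritten_b / quarterly_dc_revenue_b

print(f"NVIDIA-underwritten demand (annualized Lambda slice): ~${underwritten_b:.2f}B")
print(f"As a share of one quarter's data-center revenue: {share:.1%}")
```

Even on these rough terms, the commitments amount to roughly a sixth of a single quarter's data-center revenue, which is why the structure looks partly circular.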
That structure matters because it can create apparent demand that is largely circular and concentrated among a handful of big-tech buyers (Microsoft, OpenAI, Meta, AWS, Google). Several neoclouds show thin customer diversification, heavy cash burn (CoreWeave lost ~$300M last quarter and plans ~$20B of capex in 2025), and reported multi-billion-dollar contracts (e.g., CoreWeave/OpenAI ~$11.9B; Nebius/Microsoft ~$17.4B) that may exceed the capacity they can actually deliver. For AI/ML, this means distorted pricing and supply signals, systemic concentration risk if private credit tightens or NVIDIA's growth slows, and the real possibility of a leveraged "compute bubble" that could disrupt model-training pipelines, cloud availability, and capital flows across the AI ecosystem.
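Customer concentration of the kind described here can be quantified with a standard Herfindahl-Hirschman Index (HHI). The sketch below uses an entirely hypothetical revenue mix (it is not CoreWeave's or Nebius's actual customer split) just to show how lopsided a neocloud anchored by one or two hyperscaler contracts looks next to a diversified cloud.

```python
# HHI sketch for customer concentration. The revenue mixes below are
# hypothetical examples, not actual neocloud financials.

def hhi(shares):
    """Herfindahl-Hirschman Index on fractional shares; 1.0 = one customer."""
    return sum(s * s for s in shares)

# Hypothetical neocloud: one anchor contract dominates revenue.
neocloud_mix = {"anchor_hyperscaler": 0.70, "second_customer": 0.20, "long_tail": 0.10}

# Hypothetical diversified cloud for comparison: ten equal customers.
diversified_mix = {f"customer_{i}": 0.10 for i in range(10)}

for name, mix in (("concentrated neocloud", neocloud_mix),
                  ("diversified cloud", diversified_mix)):
    print(f"{name}: HHI = {hhi(mix.values()):.2f}")
```

A reading above ~0.25 is usually treated as highly concentrated; the hypothetical neocloud lands around 0.54, which is the shape of risk the summary describes if one anchor buyer pulls back.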