AI Investment Is Starting to Look Like a Slush Fund (nymag.com)

🤖 AI Summary
OpenAI and Nvidia announced a landmark letter of intent: Nvidia will supply at least 10 gigawatts of systems to OpenAI for training and running next‑generation models and intends to invest up to $100 billion in OpenAI as those systems are deployed. The deal sits alongside a flurry of related arrangements — Nvidia’s prior investments in cloud renters like CoreWeave (which has expanded deals with OpenAI now valued at about $22.4 billion), and joint plans with Oracle and SoftBank to build new data centers — that collectively funnel huge sums into building massive GPU-backed compute capacity aimed at scaling toward ever‑larger LLMs and, in company messaging, “superintelligence.” For the AI/ML community this is a double‑edged signal: it funds the rapid expansion of compute infrastructure essential for frontier model training, but it also crystallizes a pattern of circular vendor financing, in which chipmakers invest in customers who then buy or lease the chips back. That intensifies concentration of supply, creates potential conflicts of interest, and raises macro‑financial and regulatory risks reminiscent of the 1990s vendor‑financing bubbles. Technically, committing multiple gigawatts of GPU capacity drives demand for enormous power, data‑center buildout, and bespoke systems engineering — accelerating research while tying the ecosystem tightly to a few commercial players and their financing models.