🤖 AI Summary
OpenAI told investors and the public that it has committed roughly $1.4 trillion in infrastructure deals to date—work Altman equated to about 30 gigawatts (GW) of data‑center capacity across agreements with chip and cloud partners including Nvidia, AMD, Broadcom and Oracle. He laid out a far more aggressive target: build the technical and financial apparatus to add about 1 GW of compute capacity per week (≈52 GW/year) at an estimated cost of ~$20 billion per GW, which would put OpenAI's annual infrastructure spend around $1 trillion if sustained. Altman cautioned there are significant technical, supply‑chain and financing hurdles to clear before that scale is achievable.
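To make the headline figures concrete, here is a minimal back‑of‑envelope sketch of that arithmetic. It assumes a flat 52‑week build cadence and a constant cost per gigawatt; the function name and structure are purely illustrative, not anything OpenAI has published.

```python
# Back-of-envelope check of the article's figures: ~1 GW/week at ~$20B per GW.
# Assumes a constant weekly rate and cost; helper name is hypothetical.

def annual_buildout(gw_per_week: float, cost_per_gw_usd: float) -> tuple[float, float]:
    """Return (GW added per year, estimated annual spend in USD)."""
    weeks_per_year = 52
    gw_per_year = gw_per_week * weeks_per_year
    spend_per_year_usd = gw_per_year * cost_per_gw_usd
    return gw_per_year, spend_per_year_usd

if __name__ == "__main__":
    gw, spend = annual_buildout(gw_per_week=1.0, cost_per_gw_usd=20e9)
    print(f"~{gw:.0f} GW/year, ~${spend / 1e12:.2f} trillion/year")
    # -> ~52 GW/year, ~$1.04 trillion/year, consistent with the ~$1 trillion figure above
```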
The announcement matters because it crystallizes how capital‑intensive generative AI at hyperscale is becoming — not just GPU purchases but power, cooling, custom silicon, interconnects and financing arrangements. Executing the plan would reshape chip demand, data‑center buildout and energy needs, and would require OpenAI to generate “hundreds of billions” in revenue (Altman sees enterprise customers and new consumer monetization paths as the drivers). He also said an IPO is the most likely path to raise the massive capital required, though no timetable was given.