🤖 AI Summary
A debate over how long AI GPUs remain useful has become central as hyperscalers prepare to spend roughly $1 trillion on AI data centers. Google, Oracle and Microsoft have publicly estimated useful lives up to six years for their AI servers, but critics — most prominently short seller Michael Burry — argue actual lives are closer to two to three years. That discrepancy matters for investors and lenders because longer depreciation periods spread costs over more years, boosting near-term profits and easing financing of large GPU purchases; shorter lives increase annual expense and heighten the risk of stranded capital if hardware becomes obsolete.
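The accounting effect described above is simple straight-line depreciation arithmetic; a minimal sketch makes the stakes concrete. The fleet cost below is a hypothetical figure for illustration, not a number from the article:

```python
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense each year of the asset's life."""
    return cost / useful_life_years

# Hypothetical $10B GPU purchase, depreciated over the two contested lifespans.
fleet_cost = 10_000_000_000

for life_years in (3, 6):
    expense = annual_depreciation(fleet_cost, life_years)
    print(f"{life_years}-year life: ${expense:,.0f} expensed per year")
```

Doubling the assumed life from three to six years halves the annual expense hitting the income statement, which is why the choice between Burry's two-to-three-year estimate and the hyperscalers' six-year figure moves reported profits so much.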
Technically, AI GPUs are a relatively new asset class (Nvidia’s data‑center AI chips date to ~2018 and the current boom began in 2022), and chip makers have accelerated their release cadence to roughly annual generations, increasing obsolescence risk. Companies send mixed signals: CoreWeave reports strong demand and secondary-market value for A100s and H100s, while Amazon trimmed some server lives from six to five years. Microsoft cites a 2–6 year range and is pacing purchases to avoid being stuck on a single generation. Auditors will require engineering and usage data to justify assumed lives, and organizations must factor faster replacement cycles into cost models, procurement strategy, and long‑term ML infrastructure planning.