Accounting for AI (cernocapital.com)

🤖 AI Summary
Since ChatGPT’s debut, the hyperscalers (AWS, Azure, GCP, Meta) have dramatically accelerated AI infrastructure spending: capex rose from roughly $150B to $230B between 2023 and 2024 and is forecast to top $300B in 2025, with GPUs estimated to represent 60–80% of data‑centre TCO and roughly 55% of that capex. Nvidia has become the dominant supplier as successive GPU architectures delivered step changes in performance and efficiency; Stanford figures cite ~7,000x GPU performance growth since 2003, and Nvidia‑driven LLM inference energy efficiency has improved ~45,000x over eight years.

In near‑lockstep, the hyperscalers have lengthened server and networking depreciation windows, moving typical server lives from ~3 years to 4–6 years and materially lowering recorded depreciation. Cerno/Bloomberg estimates suggest 2024 data‑centre depreciation falling from ~$39B to ~$21B, and 2025 from ~$51B to ~$28B (≈46% savings).

The change matters because it alters the optics and economics of AI investment. Executives argue that software and operations gains legitimately extend hardware life, but rapid GPU product cadence, supply shifts (A100 → H100 → Blackwell) and high utilisation make economic obsolescence likely to outpace physical wear. That creates a tension: longer depreciation smooths earnings and reduces short‑term ROI hurdles while the underlying assets may lose value far faster, deepening Nvidia lock‑in, masking underlying capital‑intensity risks, and complicating how the AI/ML community judges capacity, cost‑per‑token economics, and long‑term infrastructure sustainability.
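The mechanics behind those depreciation savings can be sketched with straight-line depreciation, the method the hyperscalers broadly use for servers. This is a minimal illustration, not the article's actual model: the `capex` and `server_share` figures are taken loosely from the summary above, and the 3-year vs 6-year lives are the endpoints of the range cited (a blended 4–6-year life is what produces the ~46% figure).

```python
# Illustrative sketch: extending an asset's assumed useful life lowers the
# annual straight-line depreciation charge proportionally.
# Figures are rough assumptions drawn from the summary, not fleet data.

def annual_depreciation(cost: float, life_years: float, salvage: float = 0.0) -> float:
    """Straight-line depreciation: (cost - salvage) / useful life."""
    return (cost - salvage) / life_years

capex = 230e9          # ~2024 hyperscaler AI capex per the summary
server_share = 0.55    # rough share of capex attributed to GPUs/servers
server_cost = capex * server_share

old = annual_depreciation(server_cost, 3)  # legacy ~3-year server life
new = annual_depreciation(server_cost, 6)  # extended ~6-year server life

print(f"3-yr life: ${old / 1e9:.0f}B/yr depreciation")
print(f"6-yr life: ${new / 1e9:.0f}B/yr depreciation")
print(f"reduction: {(old - new) / old:.0%}")
```

Doubling the assumed life halves the annual charge, so even though cash capex is unchanged, reported earnings rise while the book value of hardware declines more slowly than its economic value under a fast GPU product cadence.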