Are we dismissing AI spend before the 6x compute lands? (martinalderson.com)

🤖 AI Summary
Recent Morgan Stanley research projects a roughly sixfold increase in global AI chip capacity from 2024 to 2026, driven by TSMC's Chip-on-Wafer-on-Substrate (CoWoS) packaging, which is essential for leading-edge AI silicon. NVIDIA is expected to command about 60% of this capacity, with Broadcom and AMD rapidly scaling up their shares. Estimates put total AI compute at over 122 exaFLOPs by 2026.

The implications of this growth are significant: within a few years, the cumulative compute could rival transformative infrastructure projects in history. Deployment, however, faces real hurdles. Lags between chip production and installation mean the new capacity won't translate into usable compute immediately, and the rollout is further complicated by cooling-system requirements and overall datacenter power constraints. Current models like Opus 4.5 and Gemini 3 were built on earlier infrastructure rather than the coming wave of compute. If these projections hold, the anticipated 2026 compute explosion could markedly accelerate AI research and application.
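The summary's figures can be tied together with a back-of-envelope calculation. This sketch uses only the numbers quoted above (6x growth, 60% NVIDIA share, 122 exaFLOPs by 2026) and assumes, purely for illustration, that NVIDIA's share of packaging capacity is a rough proxy for its share of compute:

```python
# Back-of-envelope sketch from the article's projections.
# All inputs are forecasts quoted in the summary, not measurements.
total_2026_eflops = 122.0   # projected global AI compute by 2026
growth_factor = 6.0         # projected 2024 -> 2026 capacity multiple
nvidia_share = 0.60         # NVIDIA's projected slice of CoWoS capacity

# Implied 2024 baseline if 2026 is a 6x multiple of it
implied_2024_eflops = total_2026_eflops / growth_factor

# NVIDIA-attributable 2026 compute, assuming capacity share ~ compute share
nvidia_2026_eflops = total_2026_eflops * nvidia_share

print(f"Implied 2024 base: ~{implied_2024_eflops:.1f} exaFLOPs")
print(f"NVIDIA-backed 2026 compute: ~{nvidia_2026_eflops:.1f} exaFLOPs")
```

This yields an implied 2024 baseline of roughly 20 exaFLOPs and about 73 exaFLOPs attributable to NVIDIA in 2026, under those assumptions.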