🤖 AI Summary
Epoch AI has updated and published a comprehensive dataset covering 3,000+ AI models, most recently refreshed on Sept 29, 2025, and highlighted several high-level trends. Training compute for notable models has been doubling roughly every six months (about 4×/year), driven primarily by larger training clusters, longer runs, and better hardware. Power consumption for frontier training runs is doubling about once a year: hardware efficiency gains (≈12× over the last decade), adoption of lower-precision number formats (≈8×), and longer training schedules (≈4×) have reduced power per unit of compute enough to hold power growth to roughly half the rate of compute growth (~2×/year versus ~4×/year). Spending on training is growing ~2.4×/year, and top models now cost hundreds of millions of dollars to train, with GPUs accounting for roughly half of that cost. Epoch AI also reports that over 30 models had reached roughly GPT-4 training scale (~10^25 FLOP) as of mid-2025, the level that triggers additional oversight under the EU AI Act obligations that came into force in August 2025.
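To make the growth figures concrete, here is a minimal arithmetic sketch (assuming smooth exponential growth; it is not taken from Epoch AI's own code) that converts the reported doubling times into annual growth factors and checks how the cited decade-scale efficiency gains compare, if they are treated as independent multiplicative factors for illustration:

```python
# Convert a doubling time (in months) into an annual growth factor,
# assuming smooth exponential growth.
def annual_factor(doubling_time_months: float) -> float:
    return 2 ** (12 / doubling_time_months)

compute_growth = annual_factor(6)   # ~4x/year: compute doubles every ~6 months
power_growth = annual_factor(12)    # ~2x/year: power doubles every ~12 months

# The ratio is the implied annual efficiency gain (compute per watt) needed
# to explain why power grows more slowly than compute.
efficiency_gain_per_year = compute_growth / power_growth
print(f"Compute: {compute_growth:.1f}x/yr, power: {power_growth:.1f}x/yr, "
      f"implied efficiency gain: {efficiency_gain_per_year:.1f}x/yr")

# Decade-scale factors cited in the summary (hardware ~12x, low precision ~8x,
# longer runs ~4x), treated here as independent and multiplicative for
# illustration, compound to ~384x over ten years, i.e. roughly 1.8x/year.
decade_factor = 12 * 8 * 4
print(f"Compounded decade factor: {decade_factor}x "
      f"(~{decade_factor ** (1 / 10):.2f}x/yr)")
```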
For researchers, engineers, and policymakers, this dataset quantifies how scaling is actually happening (growth in cluster size matters more than single-GPU improvements), the economic and energy implications of continued rapid scaling, and the practical limits shaping access and governance. Epoch documents its estimation methods, labels each record by confidence (confident, likely, or speculative, with roughly 3×/10×/30× uncertainty bands), provides a CSV download, and licenses the data under CC-BY, making it a useful empirical baseline for infrastructure planning, emissions estimates, regulatory compliance, and model-risk analysis.
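As a rough illustration of how the CSV download might feed a compliance-style screen, the sketch below counts models at or above the ~10^25 FLOP threshold. The file path and column names (Training compute (FLOP), Model, Confidence) are placeholders assumed for this example and would need to be checked against the actual headers in Epoch's file:

```python
import csv

# Placeholder path: download Epoch AI's notable-models CSV and point here.
CSV_PATH = "epoch_notable_ai_models.csv"

# Assumed column names for illustration; the real headers may differ.
COMPUTE_COL = "Training compute (FLOP)"
MODEL_COL = "Model"
CONFIDENCE_COL = "Confidence"

THRESHOLD_FLOP = 1e25  # ~GPT-4 training scale / EU AI Act oversight threshold


def models_over_threshold(path: str) -> list[dict]:
    """Return rows whose estimated training compute meets the threshold."""
    hits = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            raw = (row.get(COMPUTE_COL) or "").strip()
            if not raw:
                continue  # skip models with no compute estimate
            try:
                compute = float(raw)
            except ValueError:
                continue  # skip unparseable entries
            if compute >= THRESHOLD_FLOP:
                hits.append(row)
    return hits


if __name__ == "__main__":
    for row in models_over_threshold(CSV_PATH):
        print(row.get(MODEL_COL), row.get(COMPUTE_COL), row.get(CONFIDENCE_COL))
```

Because each record carries a confidence label with a ~3×/10×/30× uncertainty band, a stricter screen could also include models whose upper-bound estimate crosses the threshold rather than only the point estimate.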