🤖 AI Summary
Microsoft announced a global buildout of purpose‑built AI datacenters anchored by Fairwater, a new 315‑acre, 1.2M sq ft AI facility in Wisconsin, with additional identical US sites plus planned hyperscale centers in Narvik (Norway) and Loughton (UK) via partnerships. These projects represent tens of billions of dollars in capital and "hundreds of thousands" of AI accelerators tied into Microsoft's 400+ datacenter cloud footprint. Together they enable a distributed "AI supercomputer" model that supports OpenAI, Microsoft Copilot, and frontier model training at much larger scale and wider availability.
The sites are engineered for tightly coupled, large‑scale training: each rack hosts 72 NVIDIA Blackwell (GB200) GPUs in a single NVLink domain with 1.8 TB/s of GPU‑to‑GPU bandwidth and 14 TB of pooled memory, delivering ~865k tokens/sec per rack. Multi‑rack pods use NVLink/NVSwitch internally and 800 Gbps InfiniBand/Ethernet full fat‑tree fabrics to avoid congestion across tens of thousands of GPUs; future GB300 deployments are planned for the Norway and UK sites. Supporting infrastructure includes closed‑loop liquid cooling (minimal water loss), a massively rearchitected Azure storage stack (exabyte scale, >2M read/write transactions per second per Blob account, BlobFuse2 for GPU‑local throughput), and an AI WAN to stitch the datacenters together. The result is higher training throughput, lower idle compute, and a scalable, more energy‑efficient platform for trillion‑parameter models and distributed AI workloads.
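To put the per‑rack figures above in perspective, here is a minimal back‑of‑envelope sketch. It uses only the numbers quoted in the summary (72 GPUs, ~865k tokens/sec, 14 TB pooled memory per rack); the pod size used for the scale‑out line is a hypothetical illustration, not a figure from the article.

```python
# Back-of-envelope arithmetic from the quoted per-rack figures.
# All inputs come from the summary; results are illustrative only.

GPUS_PER_RACK = 72             # NVLink domain size (GB200 NVL72 rack)
RACK_TOKENS_PER_SEC = 865_000  # ~865k tokens/sec per rack (quoted)
POOLED_MEM_TB = 14             # pooled memory per rack (quoted)

tokens_per_gpu = RACK_TOKENS_PER_SEC / GPUS_PER_RACK
mem_per_gpu_gb = POOLED_MEM_TB * 1000 / GPUS_PER_RACK

print(f"per-GPU throughput: ~{tokens_per_gpu:,.0f} tokens/sec")
print(f"per-GPU share of pooled memory: ~{mem_per_gpu_gb:.0f} GB")

# A non-blocking fat-tree keeps aggregate throughput roughly linear
# in rack count; N_RACKS is a hypothetical fleet size for illustration.
N_RACKS = 1_000
pod_tokens_per_sec = N_RACKS * RACK_TOKENS_PER_SEC
print(f"{N_RACKS} racks: ~{pod_tokens_per_sec / 1e9:.2f}B tokens/sec")
```

The point of the per‑GPU split is that the "tokens per rack" headline is a property of the whole 72‑GPU NVLink domain acting as one unit, which is why the fabric design (NVSwitch inside the rack, fat‑tree between racks) matters as much as the accelerator count.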