Google's rolling out its most powerful AI chip, taking aim at Nvidia with custom silicon (www.cnbc.com)

🤖 AI Summary
Google announced that Ironwood, its seventh‑generation Tensor Processing Unit (TPU), will become widely available in the coming weeks after an April preview. Built in‑house for both training large models and powering low‑latency inference (chatbots and agents), Ironwood is reportedly more than four times faster than the prior TPU generation and can be networked into pods of up to 9,216 chips to remove data bottlenecks for the largest, most data‑intensive models. Major customers are already lined up — Google says Anthropic plans to use up to a million Ironwood TPUs to run its Claude models.

The rollout matters because it accelerates the industry shift toward custom AI silicon that can compete with Nvidia GPUs on price, performance, and power efficiency. Google is pairing the chip with cloud pricing and performance upgrades and increased capital spending (raising the high end of its 2025 capex guidance to $93B) to capture AI infrastructure demand; cloud revenue rose 34% year over year to $15.15B in the last quarter. For the AI/ML community this means more high‑capacity, TPU‑optimized options for large‑scale training and inference, potential cost/performance tradeoffs relative to GPU stacks, and intensified competition among hyperscalers to supply the next generation of model training hardware.