After the AI boom: what might we be left with? (blog.robbowley.net)

🤖 AI Summary
The piece argues that unlike the dotcom era—where overinvestment produced durable, open infrastructure (fibre, TCP/IP/HTTP) that powered decades of innovation—the current AI buildout risks leaving behind highly specialised, short-lived assets. Massive investment is concentrated in purpose-built GPUs (often obsolete within 1–3 years) and bespoke AI data centres engineered for extreme power density, advanced cooling, and vendor-tied networking. Those stacks are optimised for training and serving large generative models and are tightly coupled to a handful of platform owners (Nvidia, Google, Amazon), making them hard to repurpose if demand falls—potentially resulting in “silent cathedrals” of stranded compute. There is an optimistic alternative: surplus capacity could drive down costs, broaden access to large-scale compute, and spur experimentation in simulation, science, and analytics; a second‑hand hardware market might emerge and power new entrants. Crucially, however, those gains depend on openness—shared standards and interoperability turned the internet’s capacity into a public platform. Without similar standards for compute, models, and APIs, cheaper chips alone won’t guarantee broad benefit. The lasting legacy of the boom may therefore hinge less on the silicon and more on whether industry chooses to open up stacks, preserve interoperability, and translate operational expertise into reusable public infrastructure.