Datacenters in space are a terrible, horrible, no good idea (taranis.ie)

🤖 AI Summary
A former NASA engineer and ex-Google hardware lead argues that putting AI datacenters in space is essentially infeasible: power, cooling, radiation, and cost make the idea economically and technically nonsensical.

Start with power. Solar arrays like the ISS's deliver ~200 kW at peak, enough for roughly 200 NVIDIA H200-class GPUs (0.7–1 kW each). That is tiny by terrestrial standards: a planned 100,000-GPU facility on Earth would require on the order of 500 ISS-sized satellites (the first sketch below works through this arithmetic). Nuclear RTGs produce only tens to a few hundred watts, so they can't close the gap. Launch mass, complexity, and the need to assemble huge arrays in orbit make the power case poor.

Thermal management and radiation are equally damning. Space has no convective cooling, so high-density chips need large radiators and active thermal loops: the ISS Active Thermal Control System dissipates ~16 kW (≈16 H200s) with ~42.5 m² of radiators, so scaling to 200 kW would demand ~531 m² of radiators and vastly larger structures. Radiation in LEO/MEO/deep space produces single-event upsets (transient bit flips; see the second sketch below), single-event latch-up (potentially destructive), and total ionizing dose that degrades transistor performance over time. Modern GPUs/TPUs, with their tiny feature sizes and huge die areas, are the worst case; space hardware instead uses older, larger-feature chips and radiation-hardening-by-design (RHBD) techniques, trading massive performance loss for survivability.

Bottom line: space datacenters would be astronomically expensive, heavy, fragile, and far lower performance per launch dollar than terrestrial sites, so the AI community should favor ground-infrastructure improvements over orbiting GPU farms.
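To make the scaling concrete, here is a quick back-of-the-envelope script using only the figures quoted above. The constants are the article's numbers; the calculation itself (and the assumption that radiator area scales linearly with rejected heat) is an illustration, not from the post:

```python
# Back-of-the-envelope check of the power and cooling figures quoted above.
# Constants come from the summary; the calculation itself is illustrative.

ISS_ARRAY_POWER_KW = 200.0   # peak output of ISS-class solar arrays
GPU_POWER_KW = 1.0           # H200-class draw (0.7-1 kW; take the high end)
TARGET_GPUS = 100_000        # size of a planned terrestrial GPU facility

ATCS_HEAT_KW = 16.0          # heat the ISS Active Thermal Control System rejects
ATCS_RADIATOR_M2 = 42.5      # radiator area that rejection requires

gpus_per_satellite = ISS_ARRAY_POWER_KW / GPU_POWER_KW
satellites_needed = TARGET_GPUS / gpus_per_satellite
# assumes radiator area scales linearly with rejected heat
radiators_for_200kw = ISS_ARRAY_POWER_KW * ATCS_RADIATOR_M2 / ATCS_HEAT_KW

print(f"GPUs per ISS-sized array:  {gpus_per_satellite:.0f}")       # -> 200
print(f"Satellites for 100k GPUs:  {satellites_needed:.0f}")        # -> 500
print(f"Radiator area for 200 kW:  {radiators_for_200kw:.0f} m^2")  # -> 531
```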
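And to illustrate why single-event upsets matter for numerical workloads, here is a minimal sketch (mine, not the author's) that flips one bit of a 32-bit float, the kind of corruption a charged particle can cause in unprotected GPU memory:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32, mimicking a single-event upset."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    return struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))[0]

weight = 0.125  # a well-behaved model weight
for bit in (0, 23, 30):  # mantissa LSB, exponent LSB, exponent MSB
    print(f"bit {bit:2d} flipped: {weight} -> {flip_bit(weight, bit)}")
# A mantissa flip is a rounding-level error; an exponent-MSB flip turns
# 0.125 into ~4e37, enough to blow up an entire training step.
```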