🤖 AI Summary
The piece argues that sensible AI safety regulations are unlikely to cause the U.S. to "lose" an AI race with China, because the contest is dominated by three layers (compute, models, and applications) and America's lead is largest at the compute and model layers. The U.S. currently enjoys roughly a 10x compute advantage, translating to about a 1–2 year model-quality lead, driven by superior chips and massive cloud capex; because models scale with compute, that advantage carries through to model quality. Typical near-term safety proposals (model-spec disclosure, safety policies, whistleblower protections, and evaluations for hacking and biothreat capabilities) would add only a small marginal cost to training (the author estimates ~0.1–1%, plausibly up to ~1–2%) against multi-billion-dollar training budgets, so they do not materially erode the compute gap.
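The compliance-cost claim is simple arithmetic; a minimal sketch below makes it concrete, assuming a hypothetical $3B frontier training budget (an illustrative figure, not from the article):

```python
# Back-of-the-envelope check of the compliance-cost argument.
# The $3B training budget is an assumed, illustrative number.
training_budget = 3_000_000_000  # hypothetical frontier training run, USD

# Author's estimated marginal safety-compliance overhead:
# ~0.1-1% of the training budget, plausibly up to ~2%.
for fraction in (0.001, 0.01, 0.02):
    cost = training_budget * fraction
    print(f"{fraction:.1%} compliance overhead -> ${cost:,.0f}")
# Even at the high end (2%), the added cost is tens of millions of
# dollars, small relative to a ~10x compute advantage.
```

Even the pessimistic 2% figure yields roughly $60M on a $3B run, which is why the author concludes such rules cannot meaningfully close a 10x compute gap.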
Where regulation can matter is the application layer. Laws like the Colorado AI Act, which demand recurring impact assessments and broad disclosure, can chill adoption by small businesses and startups, concentrating production in incumbents or slowing safe deployment. That creates an opening China could exploit by "fast following": integrating slightly less advanced models into manufacturing, robotics, or military systems. The recommended U.S. counterstrategy is to preserve compute superiority (export controls, chip security), protect model secrets, and prioritize defending application-layer deployment, because the strategic risk is not small extra costs on training runs but regulatory friction that prevents beneficial adoption.