California’s new AI safety law shows regulation and innovation don’t have to clash (techcrunch.com)

🤖 AI Summary
California Gov. Gavin Newsom signed SB 53, a first‑in‑the‑nation AI safety and transparency law that requires large AI labs to disclose and adhere to documented safety and security protocols — including model cards and safety testing — for preventing catastrophic misuse (e.g., cyberattacks on critical infrastructure or enabling biological threats). The Office of Emergency Services will enforce compliance, closing a gap where some firms might otherwise relax safeguards under competitive pressure. Advocates say the law proves regulation can protect the public without stifling innovation because it largely formalizes practices many labs already claim to follow.

The bill also sharpens a national debate about federal preemption and industry influence: tech giants and donors have pushed moratoria or federal limits that could override state rules, while lawmakers like Sen. Ted Cruz propose waiver-based sandboxing (the SANDBOX Act) that critics warn would let companies bypass safeguards. Proponents argue that state‑level rules target specific harms (deepfakes, discrimination, child safety) and don't undermine the U.S.–China AI race — instead, measures like export controls on advanced chips (e.g., the Chip Security Act, CHIPS Act) are more relevant levers. SB 53's practical requirements and enforcement model could become a template for balancing safety, accountability, and continued AI development amid broader federal policy fights.