🤖 AI Summary
California’s state senate has given final approval to SB 53, a high-profile AI safety package that would force large AI labs to disclose safety protocols, create whistleblower protections for employees, and establish a public compute resource called CalCompute to broaden access to training infrastructure. The bill, authored by Sen. Scott Wiener and shaped by recommendations from an expert state panel, now heads to Gov. Gavin Newsom, who previously vetoed a broader AI bill and could again decline to sign if he judges the rules too sweeping or poorly targeted.
Technically, SB 53 scales disclosure burdens with company size: developers of “frontier” models with under $500M in annual revenue would provide high-level safety information, while firms above that threshold must submit more detailed reports. The measure aims to improve transparency around model development, safety testing, and risk mitigation, and to protect insiders who raise concerns. Industry reaction is mixed: Anthropic backs the bill as a governance blueprint, while many Silicon Valley companies, VCs, and lobbying groups (including a16z) warn of duplicative rules and interstate-commerce legal risk, preferring federal or EU-aligned standards as the compliance baseline. The governor's decision will signal whether California establishes one of the first substantive state-level frameworks for large-model safety or defers to national and international rules.