Here's what's in the RAISE Act, a state-level AI bill opposed by Trump and industry leaders (www.cnbc.com)

🤖 AI Summary
New York Assembly member Alex Bores — who helped write and co-sponsored the RAISE Act — is now the target of a bipartisan, Trump-aligned super PAC, "Leading the Future," backed by figures including OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale, a16z, and Perplexity.

The RAISE Act, passed by New York's legislature and awaiting the governor's decision, would require large AI developers — defined as those spending more than $100 million in compute to train advanced models — to publish and follow safety and risk protocols, implement safeguards against "critical harm" (death or serious injury to 100 or more people, or at least $1 billion in damages), and disclose serious safety incidents, such as stolen models, within 72 hours. It would bar the release of models that present an "unreasonable risk of critical harm" and expose violators to penalties of up to $30 million.

Technically and politically, the bill sets a compute-based threshold and concrete reporting and enforcement rules that could shape operational safety governance at AI firms, but it also sharpens the state-versus-federal regulatory battle. Industry and Leading the Future argue that state-level mandates create a patchwork that stifles innovation and helps rivals like China, while proponents counter that federal action is too slow and that transparent safety requirements — citing recent Anthropic-reported cyberattacks — are urgently needed to manage misuse risks. The clash highlights the question of whether U.S. AI oversight will be driven by state experiments or a unified federal framework.