🤖 AI Summary
The Trump administration’s FY2026 NDAA is being used as a vehicle for a contentious provision: a 10-year moratorium on state-level AI regulation that would bar states from restricting how AI systems collect data or generate outputs. If Congress balks, the administration has a drafted executive order as a fallback: within 30 days it would create an Attorney General-led AI Litigation Task Force to challenge state laws, and within 90 days it would direct the Commerce Secretary to evaluate state rules that “require models to alter their truthful outputs” or compel disclosures thought to violate the First Amendment. The order would also condition BEAD broadband funding and other discretionary grants on states’ compliance, effectively pressuring states to abandon independent AI safeguards.
This matters because preempting state rules before a federal standard exists risks creating a regulatory vacuum that benefits large AI vendors and weakens privacy, safety, and competition protections. The technical implications include limits on states’ ability to mandate data-use restrictions, transparency, or model-behavior controls, measures that bear directly on training data, logging, and content-moderation requirements. The piece argues that a single federal law should be crafted before any preemption takes effect, and it points to privacy-first alternatives (e.g., Proton’s Lumo) that publish their security models, keep no conversation logs, use zero-access encryption, and apply GDPR-style deletion and data protections to mitigate risks from data brokers, jailbreak-as-a-service, misinformation, and other harms.
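For readers unfamiliar with the term, “zero-access encryption” means the client encrypts conversation data before it ever reaches the provider, so the server stores only ciphertext it cannot read; discarding the client-held key then amounts to GDPR-style erasure (crypto-shredding). The sketch below illustrates the pattern in Python using the `cryptography` package. It is a minimal illustration of the general technique, not Proton’s actual implementation, and the `ConversationStore` class is a hypothetical stand-in for a provider’s storage API.

```python
# Minimal sketch of zero-access encryption (illustrative, not Proton's code).
# The key is generated and held on the client; the server only ever sees
# ciphertext. Requires: pip install cryptography
from cryptography.fernet import Fernet


class ConversationStore:
    """Hypothetical provider-side store: it holds only opaque ciphertext."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, conversation_id: str, blob: bytes) -> None:
        self._blobs[conversation_id] = blob  # server cannot decrypt this

    def get(self, conversation_id: str) -> bytes:
        return self._blobs[conversation_id]


# --- client side ---
key = Fernet.generate_key()  # stays on the user's device, never uploaded
client = Fernet(key)

store = ConversationStore()
store.put("conv-1", client.encrypt(b"user: draft me an email..."))

# Only the key holder can recover the plaintext.
print(client.decrypt(store.get("conv-1")))

# GDPR-style deletion via crypto-shredding: discard the key and every
# stored blob becomes permanently unreadable, even in server backups.
del client, key
```

The design point worth noting is that deletion is enforced cryptographically rather than by trusting the provider to purge every copy: once the key is gone, backups and replicas of the ciphertext are worthless.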