Anthropic irks White House with limits on models’ use (www.semafor.com)

🤖 AI Summary
Anthropic’s public push in Washington has hit a snag after the company refused requests from contractors working with federal law enforcement to permit certain uses of its Claude models, chiefly anything the company classifies as “domestic surveillance.” The refusal has aggravated officials in the Trump administration, who see it as a politically driven limitation on tools they expect U.S. AI firms to make available. The dispute matters in practice because Claude instances running in AWS GovCloud are among the few high-performing models cleared for top-secret work, so contractors and agencies find themselves blocked from capabilities they believed they had bought or contracted for.

The friction stems from the vagueness of Anthropic’s usage policy, which bars surveillance without defining the term in a law-enforcement context, leaving broad room for interpretation. That contrasts with rivals such as OpenAI, which bans “unauthorized monitoring,” implying that lawful monitoring is allowed. Anthropic also restricts weapons-related uses while still doing some work with the Department of Defense. The episode spotlights a larger debate over how much control model providers should retain over downstream uses, especially in government contracts, where traditional software typically carries no such restrictions. The outcome could shape procurement practices, model adoption in national security settings, and the balance between AI-safety norms and government operational needs.