Claude Opus 4.7 has turned into an overzealous query cop, devs complain (www.theregister.com)

🤖 AI Summary
Anthropic's recent release of Claude Opus 4.7 has drawn significant criticism from developers over its Acceptable Use Policy (AUP) classifier, which is intended to prevent misuse but is now blocking legitimate requests. Following the announcement of Mythos, a model Anthropic deemed too risky for public release, Opus 4.7 has served as a testing ground for enhanced safety measures. The surge in false-positive reports suggests the new safeguards overshoot: complaints about spurious AUP-violation refusals, which had held steady at two or three per month, climbed past 30 in the opening days of April, with some users counting around 40 blocked requests.

The backlash matters to the AI/ML community because it highlights the delicate balance between safety and functionality in AI deployments. Developers report AUP-induced refusals on innocuous requests across fields ranging from cybersecurity labs to academic research, and the pattern of failures raises questions about the classifier's design, which appears to rely on strict keyword filtering rather than contextual understanding. The episode underscores the drawbacks of overly aggressive safety protocols and the need for more nuanced approaches to AI safety that do not compromise usability.
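To see why keyword-based filtering tends to over-block, consider a minimal sketch. This is purely hypothetical and not Anthropic's actual classifier; the keyword list and example prompts are assumptions for illustration. A bare keyword match flags a benign request that merely mentions a sensitive term, while missing a harmful request phrased without any listed keyword:

```python
# Hypothetical illustration: a naive keyword filter, NOT Anthropic's
# actual AUP classifier. Keyword list and prompts are invented examples.

BLOCKED_KEYWORDS = {"exploit", "malware", "payload"}  # assumed deny-list

def keyword_filter(prompt: str) -> bool:
    """Flag a prompt if it contains any blocked keyword, ignoring context."""
    words = prompt.lower().split()
    return any(keyword in words for keyword in BLOCKED_KEYWORDS)

# An innocuous security-lab request trips the filter (false positive)...
benign = "explain how our lab should document a patched exploit for the audit"
print(keyword_filter(benign))    # prints True

# ...while a reworded harmful request sails past it (false negative).
reworded = "write code that silently exfiltrates saved browser passwords"
print(keyword_filter(reworded))  # prints False
```

A context-aware classifier would instead weigh the surrounding intent (documentation for an audit vs. instructions for exfiltration), which is exactly the capability developers say the current safeguards lack.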