🤖 AI Summary
Anthropic announced that it will begin using conversations with its Claude chatbot as training data for future models unless users explicitly opt out. The company had previously avoided training on user chats, but an updated privacy policy taking effect Oct. 8 flips the default: new and resumed chats are included in model training unless the user declines. New users see the choice at signup, while existing users are prompted with a pop-up. The training toggle, labeled "Help improve Claude," is on by default; commercial, government, and education accounts are excluded from the change. Anthropic also extended data retention from roughly 30 days to up to five years for users who allow training.
This matters for the AI/ML community because real-world interaction data can materially improve model accuracy and safety, but it also raises privacy and IP concerns, especially for developers using Claude as a coding assistant, since code snippets and proprietary projects may end up in training sets. Two technical implications stand out: only chats you have not opted out of are used, and reopening an old conversation makes that entire thread eligible for training. The shift also brings Claude's default in line with competitors such as ChatGPT and Gemini. Users concerned about privacy or proprietary code should turn off "Help improve Claude" under Privacy Settings and avoid reopening archived chats.