Is my data used for model training? (privacy.claude.com)

🤖 AI Summary
Anthropic clarifies how consumer Claude accounts (Free, Pro, and Max, including Claude Code on those accounts) may contribute data to model training. Chats and coding sessions can be used to improve Claude only under specific conditions: the user explicitly allows it, a conversation is flagged for safety review (so it can be analyzed to improve policy enforcement and to train Safeguards tools), or the user explicitly opts into a training program such as Trusted Tester. Commercial products (Claude for Work, the Anthropic API) follow a separate policy.

Technically, the company may ingest the entire related conversation, including pasted content, custom styles, conversation preferences, and data from Claude for Chrome, but it excludes raw connector content (e.g., Google Drive or remote/local MCP servers) unless that content is copied directly into the chat. Thumbs-up/down feedback stores the full related conversation in a secure backend for up to five years; feedback is de-linked from the user's ID before use and is not merged with their other conversations.

Anthropic says this data is used for service improvement, research, behavior studies, and model/safeguards training, emphasizing user choice and targeted use cases for safety and product enhancement.