Fighting the New York Times' invasion of user privacy (openai.com)

🤖 AI Summary
OpenAI says The New York Times has demanded a court order forcing it to hand over 20 million private ChatGPT conversations — a randomly sampled set of user chats from December 2022 to November 2024 — as part of the Times' lawsuit alleging paywall circumvention. OpenAI calls the request an overreach and is fighting it in court, after previously pushing back on an earlier, even larger demand for 1.4 billion chats. If granted, the data would be accessible to the Times' outside counsel and hired technical consultants, raising major privacy concerns because it would include millions of conversations unrelated to the dispute. OpenAI says it offered privacy-preserving alternatives (targeted searches and high-level metadata) that were rejected, and it is appealing the order while keeping the material under legal hold in a secure environment.

Technically, OpenAI is de-identifying the data and scrubbing PII, restricting access to a small, audited legal and security team, and promising further mitigations. More consequentially, the company is accelerating a privacy roadmap that includes client-side encryption for messages (so data would be inaccessible even to OpenAI), automated safety detection with human reviewers used only for serious risks, and other short-term protections. The case could set a precedent balancing legal discovery against user privacy, influencing future product design, cryptographic protections, and how AI providers respond when courts seek broad user data.