🤖 AI Summary
OpenAI disclosed that “limited identifying information” for some ChatGPT API customers was exposed after a breach at third‑party analytics provider Mixpanel, which traced the incident to an SMS‑phishing (smishing) campaign first detected on Nov. 8. OpenAI said the leak involved analytics event data used to track API frontend interactions — potentially including account name, email, approximate location (city/state/country), OS and browser, referring websites, and organization or user IDs. Users of CoinTracker, another Mixpanel customer, may also have had device metadata and a limited transaction count exposed. OpenAI maintains that its own systems, chats, API requests, API keys, passwords, payment details and government IDs were not accessed. Mixpanel confirmed a limited customer impact, secured affected accounts, rotated credentials, blocked attacker IPs and reset employee passwords; OpenAI removed Mixpanel from production, notified users, and opened its own investigation.
For the AI/ML community, this is a salient supply‑chain and metadata risk: even nominally non‑sensitive analytics data can enable highly effective phishing, social engineering, or deanonymization of API users and projects. The incident underscores the need to vet third‑party telemetry providers, apply strict data minimization and isolation in analytics pipelines, enforce 2FA and credential hygiene, and monitor for suspicious account activity. Although core secrets reportedly weren’t exposed, developers and organizations should assume elevated phishing risk and validate any communications claiming to be from OpenAI or related services.
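The data‑minimization advice above can be sketched in code: before an analytics event leaves your systems, drop everything not on an explicit allow‑list and replace direct identifiers with a salted one‑way hash. This is a minimal illustrative sketch, not OpenAI's or Mixpanel's actual schema — the field names, allow‑list, and `minimize_event` helper are all hypothetical.

```python
import hashlib

# Illustrative allow-list of fields permitted to reach a third-party
# analytics provider; everything else in the raw event is dropped.
ALLOWED_FIELDS = {"event", "os", "browser", "country"}

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a truncated salted SHA-256 hash."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def minimize_event(raw: dict, salt: str = "per-deployment-secret") -> dict:
    """Strip an analytics event to the allow-list and pseudonymize the user ID.

    Direct identifiers (name, email, fine-grained location) never leave
    your infrastructure, so a breach at the provider exposes far less.
    """
    event = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    if "user_id" in raw:
        event["user_ref"] = pseudonymize(str(raw["user_id"]), salt)
    return event

raw = {
    "event": "api_dashboard_view",
    "name": "Ada Lovelace",      # dropped: direct identifier
    "email": "ada@example.com",  # dropped: direct identifier
    "city": "London",            # dropped: fine-grained location
    "country": "GB",             # kept: coarse location only
    "os": "macOS",
    "browser": "Firefox",
    "user_id": "user_12345",
}
print(minimize_event(raw))
```

The pseudonymous `user_ref` stays stable within one deployment (useful for funnel analysis) but cannot be reversed to an email or account name by anyone who breaches the analytics provider; rotating the salt severs even that linkage.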