🤖 AI Summary
Google banned roughly 158,000 independent developer accounts last year, and multiple indie developers say many of those removals weren’t for malware or fraud but for brittle, automated “violations”: logins from unfamiliar Wi‑Fi networks or VPNs flagged as “high risk,” email addresses marked for prior association with suspended accounts, or beta testers that Google had silently labeled risky. One developer building an on‑device NSFW detector downloaded a large academic dataset from Academic Torrents and was suspended for alleged CSAM; Google deleted some 130,000 files even though an independent review by the Canadian Centre for Child Protection found only about 680 CSAM images (<0.1%). Appeals went unanswered, accounts were briefly reinstated and then re‑suspended, and apps can remain live in the store while their developers lose console access and the ability to ship updates.
For the AI/ML community this highlights how opaque, automated enforcement systems can disrupt research and small‑team innovation: risk‑scoring algorithms and association heuristics inevitably produce false positives, and without intermediate mitigations (warnings, step‑up verification, human review) those false positives escalate straight to account termination, as the sketch below illustrates. The case also raises regulatory and antitrust concerns, with critics likening it to other instances of Google using platform control to favor larger partners, and it underscores the need for clearer appeal paths, proportionate automated responses, and better safeguards for legitimate research datasets and indie developers.
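Google’s actual enforcement pipeline is not public, so the following is a hypothetical sketch rather than a description of its system: a minimal Python risk scorer in which weak association signals (the kind developers reported, such as a VPN login or an email once linked to a suspended account) route to warnings, step‑up verification, or human review instead of directly to suspension. All names, weights, and thresholds here are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"                  # notify the developer; no access change
    STEP_UP = "step_up_verify"     # require re-authentication / ID check
    HUMAN_REVIEW = "human_review"  # queue for a reviewer before any ban
    SUSPEND = "suspend"            # last resort, reserved for strong evidence

@dataclass
class Signal:
    name: str
    weight: float  # contribution to the risk score, 0..1 (hypothetical values)

def risk_score(signals: list[Signal]) -> float:
    """Naive additive risk score, capped at 1.0 (illustrative only)."""
    return min(1.0, sum(s.weight for s in signals))

def decide(score: float) -> Action:
    """Graduated response: low-confidence scores never jump straight to
    suspension; they trigger warnings or verification steps instead."""
    if score < 0.2:
        return Action.ALLOW
    if score < 0.4:
        return Action.WARN
    if score < 0.6:
        return Action.STEP_UP
    if score < 0.85:
        return Action.HUMAN_REVIEW
    return Action.SUSPEND

# Example: the weak "association" signals described in the article.
signals = [
    Signal("login_from_vpn", 0.15),
    Signal("email_linked_to_suspended_account", 0.25),
]
print(decide(risk_score(signals)))  # Action.STEP_UP, not Action.SUSPEND
```

The design point is that a weak signal like a VPN login can never reach the suspension threshold on its own; only strong or corroborated evidence does, and anything ambiguous lands in a human review queue before any account is terminated.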