🤖 AI Summary
A Reuters investigation of internal Meta documents revealed that the company knowingly profited from scam ads across Facebook, Instagram and WhatsApp, projecting roughly $16 billion (about 10% of revenue) from scam-related ads in a recent year. The documents show that Meta's ad-personalization system amplifies scams by identifying the users most likely to click "high risk" ads and delivering those ads to them; internally, the company estimates users see about 15 billion high-risk scam ads per day, plus 22 billion organic scam attempts. Rather than promptly shutting down repeat offenders, Meta allowed some "high value" accounts to accrue more than 500 strikes, "penalizing" them with higher ad rates instead of removal, reportedly because cutting that revenue could reduce resources earmarked for AI growth.
For the AI/ML community, this case highlights a stark technical and ethical tradeoff: optimization systems that maximize engagement and ad revenue can create dangerous feedback loops that amplify harmful content, while platform governance mechanisms (strike systems, monetization policies) can be gamed or deliberately relaxed to serve business goals. Key implications include redesigning ad-targeting and ranking to include safety cost functions (a minimal sketch follows), improving cross-platform detection and de-duplication of malicious actors, adding human oversight for high-risk cohorts, and increasing transparency about the reward signals that shape model behavior; otherwise algorithmic optimizers will continue to prioritize short-term revenue over user safety.
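To make the first implication concrete, here is a minimal sketch of what folding a safety cost into an ad-ranking objective could look like. Everything here is hypothetical (the `AdCandidate` fields, `p_scam` estimator, `safety_weight`, and thresholds are illustrative assumptions, not Meta's actual system); the point is only that the ranking score subtracts an explicit expected-harm term instead of optimizing expected revenue alone.

```python
from dataclasses import dataclass


@dataclass
class AdCandidate:
    ad_id: str
    bid: float      # advertiser bid per click, in dollars (assumed)
    p_click: float  # model-predicted click probability (assumed)
    p_scam: float   # model-predicted probability the ad is a scam (assumed)


def rank_score(ad: AdCandidate, safety_weight: float = 50.0) -> float:
    """Expected revenue minus an explicit safety cost.

    Pure revenue ranking would be bid * p_click; the penalty term makes
    likely-scam ads unprofitable to serve, rather than merely pricier
    (the "higher ad rates" approach described in the reporting).
    """
    expected_revenue = ad.bid * ad.p_click
    expected_harm = ad.p_scam * ad.p_click  # expected scam exposures
    return expected_revenue - safety_weight * expected_harm


def select_ads(candidates: list[AdCandidate], slots: int) -> list[AdCandidate]:
    # Hard gate first: refuse to auction ads above a scam-risk threshold
    # at any price, then rank the remainder by the penalized score.
    eligible = [ad for ad in candidates if ad.p_scam < 0.5]
    return sorted(eligible, key=rank_score, reverse=True)[:slots]


if __name__ == "__main__":
    ads = [
        AdCandidate("legit", bid=2.0, p_click=0.03, p_scam=0.01),
        AdCandidate("risky", bid=5.0, p_click=0.08, p_scam=0.40),
    ]
    # "risky" has higher expected revenue, but the safety penalty
    # pushes its score negative, so "legit" wins the slot.
    for ad in select_ads(ads, slots=1):
        print(ad.ad_id, round(rank_score(ad), 4))
```

In a real system, the interesting design question is how `safety_weight` is set and who is accountable for it: it is exactly the kind of reward-signal parameter the transparency implication above argues should be auditable rather than tuned quietly against revenue targets.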