🤖 AI Summary
Don Marti argues that the crisis in the news business is a symptom of a broader, economy-wide collapse of trust driven by deliberate technical and policy choices by Big Tech. He cites a study showing AI assistants misrepresent news 45% of the time and traces the problem to platform strategies that commoditized publishers (beginning with Google’s 2007 DoubleClick acquisition), centralized ad targeting inside a few firms, and pushed personalized surveillance advertising that breaks the reputation–revenue feedback loop that sustained quality journalism. Personalized targeting and opaque ad systems make deception easier to deliver to specific victims, while platform designs (e.g., Meta’s restrictive Ad Library, blocked ad crawling, and opaque Custom Audiences) actively hinder detection and accountability. The result is lost ad revenue, weaker brand signals, proliferating scams, and even national-security risks as adversaries exploit the same techniques.
For the AI/ML community this matters technically and ethically: ML systems concentrated at a few firms enable fine-grained targeting that amplifies deception and makes harmful content harder to spot, and AI crawlers both train on and propagate disinformation. Marti frames privacy harms as deception and discrimination and recommends policy shifts (limiting cross-context tracking) and coalition-building with news organizations to rebuild trust. The piece positions publishers not just as victims but as essential partners in restoring more transparent, trustworthy data and ad ecosystems; Part 2 will outline remedies.