🤖 AI Summary
The Center for Humane Technology (CHT) and Project Liberty warn that AI is driving a new "intimacy economy"—a shift from platforms competing for attention to services engineered for attachment. Where social media optimized for engagement, modern chatbots, agents, and persistent companions are designed to be confidants that learn users' inner lives, promising convenience while demanding unprecedented access to thoughts, emails, finances, and ongoing context. That fusion of attention and intimacy (exemplified by moves like OpenAI's Sora and its browser) changes business models: firms now monetize trust and long-term entanglement, not just clicks.
Technically and legally, this raises fresh stakes. AI products that aggregate full-context personal data pose novel safety, privacy, and manipulation risks; classifying models as products rather than services would trigger stricter liability (defective design, failure to warn, warranty claims), a remedy CHT supports through advocacy, litigation, and backing of the AI LEAD Act. CHT's executive director Daniel Barcay says there is a 12–18 month window to shape incentives—through public awareness, policy, and data-governance frameworks like digital self-determination—before these systems become entrenched. The takeaway for the AI/ML community: design and business choices made now will define how agency, trust, and harm are encoded into ubiquitous AI companions.