🤖 AI Summary
Acemoglu et al. present a dynamic model showing how AI-enabled big data can let platforms manipulate user behavior by exploiting a product's "glossiness," i.e., transient attributes that make low-quality goods appear attractive. In the model a platform repeatedly offers one of n products (each high- or low-quality) and users receive noisy signals over time. Glossiness is a binary state that transitions from glossy to non-glossy at rate ρ (a continuous-time Markov process); once a low-quality product is no longer glossy, bad-news signals about it arrive at rate γ. In a pre-AI world the platform and users update beliefs symmetrically, but in a post-AI world the platform uses pooled data to estimate which items remain glossy for particular user types and can strategically present glossy, low-quality items while users fail to account for the platform's informational advantage.
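To make the learning dynamics concrete, here is a minimal Monte Carlo sketch of the bad-news structure described above. It assumes (beyond what the summary states) that high-quality products never generate bad news and that the user starts from a prior `prior_high` of 0.5; the function names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def no_bad_news_prob_low(rho, gamma, t, n_sims=100_000):
    """Monte Carlo estimate of P(no bad news by time t | low quality):
    the glossy spell lasts an Exp(rho) time, after which bad-news
    signals arrive as a Poisson process with rate gamma."""
    glossy_duration = rng.exponential(1.0 / rho, size=n_sims)
    exposed = np.maximum(t - glossy_duration, 0.0)            # time spent non-glossy
    survived = rng.random(n_sims) < np.exp(-gamma * exposed)  # no Poisson arrival
    return survived.mean()

def posterior_high(prior_high, rho, gamma, t):
    """User's posterior that a product is high quality after observing
    no bad news up to time t (assuming high quality emits no bad news)."""
    p_no_bad_low = no_bad_news_prob_low(rho, gamma, t)
    return prior_high / (prior_high + (1 - prior_high) * p_no_bad_low)

for rho in (0.1, 5.0):  # long-lived vs short-lived glossiness
    print(rho, posterior_high(prior_high=0.5, rho=rho, gamma=2.0, t=1.0))
```

With small ρ the glossy spell usually outlasts the observation window, so the absence of bad news is nearly uninformative and the posterior barely moves; with large ρ the low-quality product is quickly exposed and silence becomes strong evidence of high quality. That asymmetry is exactly what the data-rich platform can exploit.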
The technical payoff: when glossiness is short-lived (large ρ), better targeting improves matches and raises consumer welfare; when glossiness is long-lived (small ρ), the platform profitably steers users toward low-quality products whose glossiness persists, and consumer welfare falls. Enlarging the product set amplifies manipulation when ρ is small, because more items offer exploitable short-term appeal. The paper formalizes an "endogenous privacy cost" and highlights a policy-relevant trade-off: more granular user data and wider offerings can either improve matching or enable systematic behavioral manipulation, depending on how persistent superficial appeal is.
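The comparative static in ρ can be illustrated with a stylized simulation. This is an illustration of the mechanism, not the paper's equilibrium: it assumes (my additions) a uniform-random pre-AI baseline, user utilities of +1 for a high-quality and -1 for a low-quality purchase, that users always accept the recommendation, that each product has the same `age` when recommended, and that the platform earns a higher margin on low-quality items and therefore steers to a still-"exploitable" one when available.

```python
import numpy as np

rng = np.random.default_rng(1)

def stylized_consumer_welfare(rho, gamma, n_products, q_high=0.5,
                              age=1.0, n_sims=20_000):
    """Stylized Monte Carlo of per-recommendation consumer welfare.
    A low-quality product of a given age is 'exploitable' if its glossy
    spell (Exp(rho)) has not ended, or it ended but no bad-news signal
    (rate gamma) has arrived yet.  Post-AI platform: recommends an
    exploitable low-quality product when one exists, otherwise a
    high-quality one.  Pre-AI platform: recommends uniformly at random.
    Returns (pre_AI_welfare, post_AI_welfare)."""
    pre, post = [], []
    for _ in range(n_sims):
        high = rng.random(n_products) < q_high
        glossy_end = rng.exponential(1.0 / rho, n_products)
        exposed = np.maximum(age - glossy_end, 0.0)
        no_bad_news = rng.random(n_products) < np.exp(-gamma * exposed)
        exploitable_low = (~high) & no_bad_news
        # pre-AI: uniform recommendation among all products
        pre.append(1.0 if high[rng.integers(n_products)] else -1.0)
        # post-AI: steer to an exploitable low-quality product if any exists
        if exploitable_low.any():
            post.append(-1.0)
        elif high.any():
            post.append(1.0)
        else:
            post.append(-1.0)  # only revealed low-quality items remain
    return np.mean(pre), np.mean(post)

for rho in (0.2, 5.0):  # long-lived vs short-lived glossiness
    print(rho, stylized_consumer_welfare(rho, gamma=2.0, n_products=5))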