🤖 AI Summary
Researchers are showing that algorithmic pricing can produce high, cartel-like prices even without explicit collusion or “backroom” agreements. Building on prior work demonstrating that learning algorithms can tacitly collude by mutually punishing price cuts, a new game-theory study proves a subtler vulnerability. When a standard class of learning rules called no-swap-regret algorithms (which guarantee a player cannot benefit by systematically swapping one action for another) faces a simple “nonresponsive” opponent that randomizes according to a fixed distribution, the market can settle into an equilibrium with persistently high prices. Critically, neither side needs to send threats or coordinate: both players are best-responding, so neither has an incentive to change strategy, and buyers are left worse off.
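To make the mechanism concrete, here is a minimal simulation sketch (illustrative, not from the paper). A multiplicative-weights (Hedge) learner, which has no external regret and converges to a best response against any fixed opponent distribution, faces a nonresponsive opponent that usually posts a high price and occasionally undercuts. The price grid, probabilities, and unit-demand payoff model are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

prices = np.arange(1, 11)            # learner's candidate prices 1..10
K = len(prices)

# Nonresponsive opponent: a fixed distribution, not a strategy that
# reacts to play. Mass concentrated on a high price, occasional undercut.
P_HIGH, P_LOW, EPS = 10, 2, 0.10     # illustrative values

def opponent_price():
    return P_LOW if rng.random() < EPS else P_HIGH

def profit(p_mine, p_opp):
    # Unit demand, zero cost: the cheaper seller makes the sale; ties split it.
    if p_mine < p_opp:
        return p_mine
    if p_mine == p_opp:
        return p_mine / 2
    return 0.0

# Hedge (multiplicative weights). Against a fixed randomizing opponent,
# a no-regret learner's play concentrates on best responses, which is
# exactly the behavior the no-swap-regret guarantee pins down here.
eta, T = 0.05, 20_000
weights = np.ones(K)
for _ in range(T):
    p_opp = opponent_price()
    # Full-information feedback: counterfactual payoff of every price.
    payoffs = np.array([profit(p, p_opp) for p in prices])
    weights *= np.exp(eta * payoffs / P_HIGH)   # rescale payoffs to [0, 1]

probs = weights / weights.sum()
print({int(p): round(float(q), 3) for p, q in zip(prices, probs) if q > 0.01})
```

Under these assumed numbers, undercutting to the lowest prices earns at most about 2 per round, while pricing just under the opponent’s usual 10 earns roughly 9 × 0.9 = 8.1, so the learner piles its probability mass on a high price. No threats are exchanged; the learner is simply best-responding, and the market price stays high.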
The technical takeaway is sharp and unsettling for regulators: properties that prevent threat-based collusion (e.g., no-swap-regret) do not rule out exploitative equilibria created by seemingly benign strategies. The exploit relies on a price distribution that concentrates probability mass on high prices with occasional undercuts, coaxing the learning algorithm into raising its own prices; many such distributions work, which makes detection hard. Proposed policy remedies include mandating that pricing agents satisfy no-swap-regret and using black-box tests to certify them, though the authors warn these fixes will not close every gap. The work underscores that algorithmic pricing failures are nuanced, detectable neither through an obvious “agreement” nor through surface-level benignness, and that they demand new regulatory and technical tools.
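As one concrete piece of such a certification pipeline, a black-box auditor could estimate an agent’s swap regret from logged play. The sketch below uses a hypothetical interface, not anything specified in the paper: for each action the agent actually played, it measures how much the best fixed replacement action would have gained. A genuinely no-swap-regret agent should have time-averaged swap regret that vanishes as the log grows.

```python
import numpy as np

def empirical_swap_regret(actions, counterfactuals):
    """Empirical swap regret of a logged trajectory.

    actions[t]         -- index of the action the agent played at round t
    counterfactuals[t] -- payoff each action *would* have earned at round t
    """
    actions = np.asarray(actions)
    counterfactuals = np.asarray(counterfactuals)
    realized = counterfactuals[np.arange(len(actions)), actions]
    regret = 0.0
    for a in np.unique(actions):
        mask = actions == a
        # Gain from rerouting every play of `a` to each fixed alternative.
        gains = counterfactuals[mask].sum(axis=0) - realized[mask].sum()
        regret += gains.max()        # identity swap gives 0, so max >= 0
    return regret

# A certifier might flag an agent whose time-averaged swap regret fails
# to shrink, e.g. empirical_swap_regret(a, u) / T above some tolerance.
```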