🤖 AI Summary
At a University of Oxford book presentation for The Means of Prediction: How AI Really Works, economist Maximilian Kasy argued that the crucial policy question is not whether AI will “escape” or become sentient but who defines its objective function — i.e., what systems are optimized for and whose interests those objectives serve. Framing many AI harms (safety lapses, workplace displacement, biased decisions) as optimization failures highlights how disproportionate power accrues to actors controlling key inputs — data, compute, expertise and energy — and thus to the priorities embedded in models. Kasy and discussant Dani Rodrik urged shifting the debate beyond engineers and firms to include workers, consumers, regulators and courts, with reputational and legal incentives nudging companies toward broader social welfare outcomes.
Technically and politically, this reframing has concrete implications: regulation and oversight should cover more than headline-grabbing LLMs and extend to automated hiring, ad targeting, feed ranking, and admissions algorithms, where objective choices (maximize test scores, promote social mobility, remediate injustice) lead to very different system designs and trade-offs. Kasy warned that ideological and geopolitical narratives — e.g., the "race to outcompete China" — can make contingent design choices seem inevitable, foreclosing democratic contestation. The takeaway: AI governance is fundamentally about redistributing decision rights over objective functions and their inputs, a debate that becomes tractable once it is widened beyond the tech sector.