Humanity Needs Democratic Control of AI (jacobin.com)

🤖 AI Summary
In The Means of Prediction, economist Maximilian Kasy argues that biased and harmful AI outcomes are not accidental but the predictable result of who controls the “means of prediction” — data, compute, technical expertise and energy. Grounding his critique in machine‑learning mechanics, Kasy shows that models are optimized for objective functions chosen by their owners: social platforms maximize engagement (and thus outrage), lenders and risk tools optimize profit‑weighted accuracy (producing racially disparate mortgage denials), and judicial tools like COMPAS reproduce historical bias. He emphasizes that prediction is a technical mapping from past data to future decisions, so biased training data and profit‑driven objectives systematically encode inequality into algorithms.

For the AI/ML community this reframes fairness from a purely technical problem to a political one: changing model behavior requires changing who sets objectives and who controls resources. Kasy proposes complementary institutional remedies — taxes that internalize social costs, regulation to curb harmful data practices, and community‑run data trusts — alongside collective action (strikes, litigation, boycotts) to shift leverage away from tech owners.

The book’s key implication is that technical fixes alone are insufficient; ensuring AI serves public welfare demands democratic governance of datasets, compute, and the incentives that shape models.
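Kasy's point that the owner's objective function, not the algorithm itself, drives outcomes can be illustrated with a minimal sketch. The toy data, the weighting scheme, and the 5× cost ratio below are all hypothetical, not from the book: two logistic models are fit on identical loan data, one with an equal-weight (accuracy-style) loss and one with a profit-weighted loss that penalizes defaults more heavily, and the profit-weighted objective alone shifts who gets approved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loan data: one feature (say, income), binary repayment outcome.
X = rng.normal(size=(200, 1))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(float)
Xb = np.hstack([X, np.ones((len(X), 1))])  # add intercept column

def fit_logistic(Xb, y, sample_weight, lr=0.1, steps=2000):
    """Weighted logistic regression via plain gradient descent."""
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        grad = Xb.T @ (sample_weight * (p - y)) / len(y)
        w -= lr * grad
    return w

# Objective A: equal weights — every error counts the same.
w_equal = fit_logistic(Xb, y, np.ones(len(y)))

# Objective B: profit-weighted — a default (false approval) is assumed to
# cost the lender 5x a missed repayer, so negatives are up-weighted.
# The 5.0 ratio is an illustrative assumption, not an empirical figure.
w_profit = fit_logistic(Xb, y, np.where(y == 1, 1.0, 5.0))

def approval_rate(w):
    return float(((Xb @ w) > 0).mean())

# Same data, same algorithm — only the owner's objective differs.
print(approval_rate(w_equal), approval_rate(w_profit))
```

The profit-weighted model denies more applicants than the accuracy-trained one on the same data, which is the mechanism the summary describes: whoever sets the loss function sets the decision boundary.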