Why Tech Inevitability is Self-Defeating (deviantabstraction.com)

🤖 AI Summary
This piece challenges Silicon Valley’s favorite refrain that technological futures, especially AI and AGI, are “inevitable.” Citing public statements from figures like Kevin Kelly, Sam Altman, and Elon Musk, the author argues that inevitability is less a description of reality than a persuasive posture. Predictions about technology are communicative acts, not laws of physics: they cannot be verified ahead of time and often mix forecast with advocacy. That ambiguity makes declarations of inevitability both powerful and dangerous, because they can discourage dissent or alternative paths and absolve powerful actors of responsibility. The author traces the logic of the self-fulfilling prophecy (label X “inevitable,” stop resisting, X occurs) and contrasts it with a pragmatic epistemology that treats forecasts as contingent and open to contestation through agency.

For the AI/ML community this reframing matters practically and ethically. If inevitability rhetoric drives investment, regulation, hiring, and safety choices, it concentrates power and narrows the options for oversight, contestation, or alternative technical roadmaps. The author’s “Agency’s Wager” advises treating predictions as defeasible and acting as if you can change outcomes: push for governance, alternative architectures, safety work, and public debate rather than passively accepting prognostications. In short, rejecting fatalism restores responsibility, which matters all the more as AI systems scale and influential actors frame futures as foreordained.