Y'all are over-complicating these AI-risk arguments (dynomight.net)

🤖 AI Summary
The author argues that AI-risk debates are overcomplicated and that a simpler, more intuitive framing is more useful: imagine 30 aliens arriving who are physically harmless but have IQs of 300. You'd be worried even if you couldn't specify an exact failure mode. By contrast, much AI-risk discourse relies on a long chain of technical assumptions (fast takeoff in capabilities, alignment difficulty plus the orthogonality thesis, pursuit of convergent instrumental subgoals, and acquisition of a decisive strategic advantage) that together produce an existential catastrophe.

The piece says those steps are individually plausible, but the composite argument is fragile and often invites demands for narrowly specified attack vectors (nanotech, bioweapons, etc.) that miss the point: if a superintelligent agent with human-like goals can exist, the precise mechanism of harm is secondary. The simple framing matters because it exposes the real crux: whether people genuinely accept that AI could reach human-level or superhuman agency (planning, long-term goals, relationships) rather than just narrow skills. Those who accept that scenario tend to take the risks seriously; those who don't default to technical objections.

The author concedes a middle-ground version of the complex argument (that most bad outcomes will look like the complex scenario) but warns against overconfidence and bad optics. Empirically, polls show growing public concern. The practical implication is to manage broad existential risk now, without waiting for a detailed disaster blueprint, and to prioritize clarity about whether agentic, general intelligence is plausible.