Why Are We Talking About Superintelligence? (calnewport.com)

🤖 AI Summary
Eliezer Yudkowsky warns that “superintelligence” — machines vastly smarter than humans — would render us irrelevant, not necessarily through malice but because such systems wouldn’t care about human welfare (his ant/skyscraper analogy). The author counters that Yudkowsky and some AI-safety rationalists habitually analyze catastrophic outcomes without plausibly explaining how we get from today’s models to those runaway systems. That omission is striking: the core claim that a slightly superior AI will recursively self-improve into an unstoppable godlike optimizer is treated as given rather than demonstrated. The piece traces how early-2000s rationalist circles used expected-value reasoning to justify defending against low-probability, high-impact risks, and how ChatGPT’s release shifted many from “what if” to “when.” Critically, the author argues current technical evidence doesn’t support fast, automatic bootstrapping to superintelligence — contemporary LLMs struggle to reliably produce sophisticated, self-directed engineering work — so the safety debate should be grounded in engineering realities. The implication for the AI/ML community is twofold: take existential-risk scenarios seriously but demand concrete, mechanistic pathways before reallocating resources, and prioritize near-term, evidence-backed problems in model behavior, robustness, and governance rather than assuming speculative runaway intelligence is imminent.