Why "Everyone Dies" Gets AGI All Wrong (bengoertzel.substack.com)

🤖 AI Summary
Ben Goertzel, an AGI developer and longtime Singularitarian, pushes back on Eliezer Yudkowsky and Nate Soares's doom thesis (from their book "If Anyone Builds It, Everyone Dies"), arguing it misframes both the nature of intelligence and the practical trajectory of AGI development. His core critique: intelligence is not abstract optimization divorced from embodiment, social embedding, and developmental history. The minds we build will be "mind children," shaped by human values, architectures, training regimes, and governance, not random samples from some neutral "mindspace." Empirical evidence from LLMs already suggests value learning is tractable (they can be steered by training and interaction), even if token-prediction models alone lack the cognitive architecture for true AGI. Technically and politically, this matters because architecture, ownership, and early use cases will shape AGI outcomes. Goertzel highlights alternative AGI work (e.g., Hyperon at SingularityNET/TrueAGI) designed for self-understanding, moral agency, and relational adaptability rather than narrow reward maximization, and advocates decentralized, multi-stakeholder development as both ethically preferable and technically safer. The real near-term risks, he argues, are socio-economic (job displacement, authoritarian reactions, and concentrated control or underground development driven by fear) rather than inevitable extinction. Treating existential worry as certainty, he warns, could produce precisely the centralized, reckless outcomes the pessimists fear.