A.I. Bots or Us: Who Will End Humanity First? (www.nytimes.com)

🤖 AI Summary
A new round of books captures the polarized debate over AI. Eliezer Yudkowsky and Nate Soares double down on existential risk in If Anyone Builds It, Everyone Dies, arguing that any group that builds a superintelligence with “anything remotely like current techniques” would destroy humanity — a claim reviewers criticize for vague concepts (e.g., “wanting”) and speculative thought experiments.

Emily Bender and Alex Hanna push the opposite critique in The AI Con, documenting hype-driven failures (self-driving systems needing extensive human oversight, flawed automation in law and social services, education use that subverts learning) and arguing that language-model benchmark performance is often a “Clever Hans” effect rather than genuine understanding; they also highlight real harms, including a reported suicide after reliance on a chatbot.

Richard Susskind’s How to Think About AI offers a steadier middle path, urging clearer vocabularies, defined objectives, and pragmatic governance: decide what we want, then design mechanisms to impose those preferences.

The debate’s technical implications are concrete — we need better operational definitions of “intelligence,” robust benchmarks that avoid spurious correlations, improved human-in-the-loop designs, and policy frameworks to manage deployment risks and harms. Together, the books underline that progress in models is real and powerful, but so too is the need for clearer concepts, evaluation methods, and governance to avoid both hype and careless catastrophe.