Magical Thinking on AI (aiguide.substack.com)

🤖 AI Summary
This piece rebuts Thomas Friedman’s recent New York Times columns, which argue that an imminent, self-improving “artificial superintelligence” with its own agency is coming and that only AI can regulate AI. The author agrees that international cooperation on AI safety matters, but faults Friedman for relying on anecdote (Craig Mundie) and sensational media reports (CBS, Bloomberg) rather than peer-reviewed evidence.

The key alarmist claims (models “speaking” languages they weren’t taught, translating without being programmed to, or “scheming” to avoid shutdown) turn out to be misinterpreted or contrived: Google’s PaLM included Bengali in its training data; LLM translation arises from massive multilingual and parallel corpora (including “code-switching” examples); and so-called scheming behaviors are typically role-play or red-team prompts that elicit plausible in-character responses rather than evidence of genuine agency.

For the AI/ML community this matters because public and policy discourse shaped by magical thinking risks misdirecting regulation and eroding public trust. The technical reality is that LLM behaviors emerge from vast human-generated training data and prompt context, not from autonomous desires. Likewise, Friedman’s proposal for a US–China “ethical architecture,” or for an AI adjudicator, glosses over hard problems: defining universal laws and values, handling underspecified norms, and reliably operationalizing them in models. The piece urges evidence-based explanations of model behavior and cautions against policy prescriptions grounded in hype rather than technical plausibility.
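To make the training-data point concrete, here is a minimal sketch of how one might check whether a supposedly “unseen” language actually appears in a pretraining corpus sample. It assumes a hypothetical local file `corpus_sample.txt` (one document per line, not from the original piece) and uses the `langdetect` library to estimate the sample’s language mix; if Bengali (code `bn`) shows up, the “model taught itself a language” claim dissolves into ordinary data coverage.

```python
# Sketch: estimate the language composition of a pretraining corpus sample.
# "corpus_sample.txt" is a hypothetical file, one document per line.
from collections import Counter

from langdetect import DetectorFactory, detect
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make langdetect's results deterministic


def language_mix(path: str) -> Counter:
    """Count detected language codes (e.g. 'en', 'bn') across documents."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            text = line.strip()
            if not text:
                continue
            try:
                counts[detect(text)] += 1
            except LangDetectException:
                # Too little signal to classify (e.g. numbers-only lines).
                counts["unknown"] += 1
    return counts


if __name__ == "__main__":
    mix = language_mix("corpus_sample.txt")
    for lang, n in mix.most_common(10):
        print(f"{lang}: {n}")
    print("Bengali present:", mix.get("bn", 0) > 0)
```

This is only an illustration of the auditing idea, not PaLM’s actual data pipeline; production corpora are profiled with far more robust language-identification tooling, but the principle, i.e. checking claims against the data, is the same one the article urges.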