Silicon Valley Has a God Problem (www.webworm.co)

🤖 AI Summary
Joshua Drummond’s provocative essay “Silicon Valley Has a God Problem” argues that many tech leaders—Sam Altman, Mark Zuckerberg, Eric Schmidt, and others—treat “superintelligence” and AGI as quasi-religious goals. He traces these beliefs to an ideological cocktail summarized by the acronym TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism) and shows how it reframes existential-risk calculus: speculative, far-future scenarios are used to justify massive investment and moral priority over pressing real-world problems. Drummond calls out the rhetoric—Altman’s “takeoff has started,” Schmidt’s endorsement of endless data centers, Zuckerberg’s tolerance for expensive missteps—and argues there is no credible technical path from present-day large language models (LLMs), which are probabilistic text predictors rather than thinkers, to AGI or superintelligence.

For the AI/ML community this is a timely critique of hype, incentives, and governance. Technical takeaways: LLMs lack the causal reasoning, planning, and grounding that would plausibly scale to human-level general intelligence, and claims of imminent superintelligence conflate marketing with evidence. The essay highlights practical implications—misallocated capital, a growing carbon footprint from unchecked compute expansion, and research priorities distorted by faith-like certainty rather than empirical milestones. Its core warning: researchers, funders, and policymakers should temper speculative narratives with clear technical criteria, rigorous evaluation, and attention to near-term harms and public goods instead of surrendering resources to unlikely future fantasies.