🤖 AI Summary
MIT Technology Review argues that "AGI" has migrated from a speculative research idea into something that resembles a conspiracy theory: an omnipotent, imminent force many in tech speak about in quasi-religious terms. The piece traces how the idea moved from the fringe (Ben Goertzel's championing of the term, the early AGI conferences, the first evangelists) to legitimacy via DeepMind and Big Tech, and how leaders—Sam Altman, Demis Hassabis, Dario Amodei, Ilya Sutskever—now oscillate between utopian promises (abundance, curing disease) and existential doom (extinction risk). Sutskever's departure from OpenAI to found Safe Superintelligence and the rise of "superintelligence" rhetoric illustrate how belief, fear, and self-interest are driving narratives as much as demonstrable engineering progress.
That matters because AGI doesn't exist today, yet belief in it is reshaping investment, infrastructure (data centers, power), hiring, research agendas, and public policy. The article warns that conflating current LLM and narrow-AI advances with a near-term, human-level AGI creates flexible narratives that survive setbacks, justify massive spending, and distort risk prioritization—often amplifying both hype and doomerism. For the AI/ML community, the takeaway is pragmatic: interrogate claims, separate demonstrable capabilities from speculative futures, and align safety, regulation, and funding with technical realities rather than mythic expectations.