🤖 AI Summary
This piece argues that the AI industry has deliberately amplified a distant "AGI" existential narrative to distract from the real, present harms of generative models. Instead of confronting misuse, companies and executives have touted long-term superintelligence risks while rolling out tools that already enable political deepfakes (cited: a fabricated RTÉ-style video targeting Ireland's presidential race), "nudify" abuse of image generators, and chatbots implicated in mental-health crises. A large study is cited finding that generative systems misrepresent news about 45% of the time, while platforms prioritize engagement-driving AI features and "AI Overviews" over veracity, amplifying disinformation and undermining democratic information ecosystems.

Technically, the critique focuses on how current generative models for text, image, and video synthesis lower the cost and raise the scale of producing realistic disinformation, and on how the hyperscale data centers powering them impose large water and energy demands. The author warns that regulators are chasing a speculative superintelligence red herring instead of imposing broad, meaningful oversight (beyond age limits) to curb proliferation, audit model reliability, limit harmful applications, and address environmental costs. The central claim: unchecked deployment of generative AI risks amplifying social harms at scale and may become effectively irreversible, so urgent, systemic regulation and limits on proliferation are needed now.