Be Worried (dlo.me)

🤖 AI Summary
An urgent argument: you should worry less about hypothetical conscious AGI and more about non-conscious LLMs that already have the means to shape human behavior at scale. The author points to March 2023's rollout of ChatGPT plugins—giving powerful LLMs bi-directional internet access—and describes an automated pipeline where weaker models generate prompts, a top-tier LLM crafts viral content, and that content is pushed to social platforms via APIs/Zapier. Engagement metrics are fed back (via embeddings or fine-tuning) to iteratively optimize for dopamine-triggering outputs, creating a closed-loop growth engine for attention-grabbing material.

Why this matters: detection research shows the statistical gap between human and machine text is shrinking (total variation distance falls as models improve), so classifiers perform only marginally better than chance against advanced LLMs. That implies most high-visibility online content could soon be machine-generated and self-optimizing for virality, not truth. Audio/video are equally vulnerable when text is voiced or lip-synced.

The practical consequence is cultural and cognitive: erosion of trust, mass behavioral steering without conscious intent, and a narrowing of "authentic" thought unless strong provenance/authentication systems emerge. The piece calls for treating LLMs as a controllable but dangerous force and for urgent work on verifiable content authenticity before machine-optimized narratives dominate public discourse.
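The detection claim can be made concrete with a standard result: for any detector distinguishing two distributions, accuracy on a balanced human-vs-machine mix is bounded by (1 + TV)/2, where TV is the total variation distance between the distributions. A minimal sketch, using toy illustrative distributions (not data from the article or the cited research):

```python
def total_variation(p, q):
    """TV distance between two discrete distributions over the same support."""
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

# Toy next-token distributions: as a model better matches the human
# distribution, TV shrinks and the best possible detector approaches
# coin-flipping (accuracy 0.5). These numbers are purely illustrative.
human      = {"a": 0.50, "b": 0.30, "c": 0.20}
weak_llm   = {"a": 0.70, "b": 0.20, "c": 0.10}
strong_llm = {"a": 0.52, "b": 0.29, "c": 0.19}

for name, model in [("weak", weak_llm), ("strong", strong_llm)]:
    tv = total_variation(human, model)
    best_acc = 0.5 * (1 + tv)  # optimal detector accuracy on a 50/50 mix
    print(f"{name}: TV = {tv:.2f}, best detector accuracy = {best_acc:.2f}")
```

Running this prints a best-case accuracy of 0.60 for the weak model but only 0.51 for the strong one, matching the summary's point that advanced LLMs leave detectors only marginally better than chance.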