Rise of the Killer Chatbots (www.wired.com)

🤖 AI Summary
A defense startup, Anduril, recently demonstrated an unusual use of a large language model at a classified U.S. test site: an LLM-based system parsed a human order ("Mustang intercept"), communicated with a formation of prototype autonomous jets, and coordinated a simulated kill of a target. Anduril is developing a larger autonomous "wingman" fighter (project Fury) and is pitching LLMs as a way to streamline the command chain: relaying orders, surfacing situational data to pilots, and even generating explanations for actions. Other projects include an Anduril–Meta bid for a $159M AI-enabled augmented-reality helmet that delivers real-time mission data, along with wider deployments of tools descended from Project Maven for intelligence analysis.

The significance is twofold. First, LLMs amplify AI's strengths (handling large information streams, generating and analyzing code, and making human–machine interaction more natural), which makes autonomy more useful and attractive to militaries. Second, that attraction is fueling a rush of funding: a 1,200% jump in federal AI contracts from August 2022 to August 2023, $13.4B for AI and autonomy in the Pentagon's 2026 budget, and commercial contracts with major AI firms. But technical and ethical limits remain: current models are error-prone and opaque, and experts caution against granting them direct lethal control. The coming years will bring more automation on the battlefield, sharper geopolitical competition over AI, and urgent debates over safety, accountability, and how, or whether, to let "killer chatbots" make life-and-death decisions.