Anthropic’s Claude Takes Control of a Robot Dog (www.wired.com)

🤖 AI Summary
Anthropic’s “Project Fetch” tested whether its Claude coding model could help non‑roboticist researchers program and control a Unitree Go2 quadruped robot. Two teams without prior robotics experience were given a controller and a sequence of increasingly complex tasks; one team used Claude to generate code and interfaces, while the other wrote code unaided. The Claude-assisted group completed some tasks faster and got the Go2 to do things the human-only group couldn’t (most notably walking around and finding a beach ball), while the unaided team showed more confusion and negative sentiment. The Go2 costs about $16,900, can walk autonomously but typically requires high‑level software commands or human control, and was chosen as a realistic commercial platform. The experiment highlights two important trends: LLMs are moving beyond text into agentic coding that can bridge software and physical systems, and AI-assisted workflows can materially change developer productivity and team dynamics. It also surfaces safety and governance questions: today’s models still need access to sensing, navigation, and other programs to act physically, and researchers warn that as models gain embodied feedback they could both unlock powerful new capabilities and increase misuse risks. Proposed mitigations such as rule‑enforcing layers (e.g., RoboGuard) and careful interface design will be crucial as “self‑embodying” models become more plausible.