🤖 AI Summary
DeepMind is shifting focus from games and proteins to the hard problem of embodied intelligence: building robots that can learn, generalize and act reliably in messy, real-world environments. The company (led in robotics by Raia Hadsell) is tackling two intertwined bottlenecks that have kept robotics roughly a decade behind fields like computer vision: the scarcity and cost of physical training data, and the brittleness of current neural nets when they must learn new skills without erasing old ones. Success would enable genuinely versatile robots for driving, caregiving, agriculture, disaster response—and advance core AI questions about transfer learning and continual adaptation.
Technically, DeepMind and others rely on sim-to-real training (OpenAI’s Rubik’s-cube hand is a headline example) but acknowledge that simulations are “too perfect” and miss real-world complexity. A central ML challenge is catastrophic forgetting: gradient updates for a new task overwrite the weights that encoded earlier tasks, so the network loses old skills as it acquires new ones. DeepMind favors elastic weight consolidation (EWC), which estimates which parameters are critical to a learned task and partially “freezes” them (Hadsell cites small fractions, e.g., ~5%), leaving the remaining parameters plastic. EWC reduces forgetting and enables some transfer, but rigidity accumulates as tasks pile up, trading long-term plasticity for retention. These trade-offs—and the need for better sim realism, data-efficient learning, and scalable continual-learning schemes—define the technical frontier for robotics and for general, adaptable AI.
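The EWC idea described above can be sketched as a quadratic penalty that anchors each parameter to its post-task-A value, weighted by an estimate of that parameter's importance (in the EWC paper, the diagonal of the Fisher information). The function name, toy values, and λ below are illustrative assumptions, not DeepMind's actual implementation:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Elastic-weight-consolidation penalty: pulls parameters back
    toward their values after task A (theta_star), with each
    parameter weighted by its estimated importance (fisher)."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Hypothetical toy setup: 4 parameters; the first two were important
# for task A (high Fisher weight), the last two were not.
theta_star = np.array([1.0, -0.5, 0.2, 0.0])   # params after task A
fisher     = np.array([5.0,  4.0, 0.1, 0.05])  # importance estimates
theta      = np.array([1.1, -0.4, 1.2, 0.9])   # params drifting on task B

# Large moves of "unimportant" params are cheap; even small moves of
# "important" params are penalized, which is what preserves task A.
print(ewc_penalty(theta, theta_star, fisher))
```

In training, this penalty is added to the new task's loss, so gradient descent trades off task-B performance against disturbing parameters that task A depends on; as more tasks are consolidated, more parameters carry high importance weights, which is the accumulating rigidity noted above.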