🤖 AI Summary
The author argues that “full self‑driving” remains unrealized, and is probably unattainable in the near term, for three concrete reasons: sensor limitations, unverifiable stochastic software, and the lack of human‑level cognition. Practically, camera‑only approaches (championed by Tesla) sacrifice redundancy: lidar, radar, and other modalities are not merely complementary but essential fail‑safes when vision is occluded (e.g., by a smashed windshield, icing, or glare). Neural networks make perception and control inherently non‑deterministic (even a simple preprocessing step such as injecting Gaussian noise yields different outputs for the same input), which complicates formal software verification; meanwhile, shipping product software exhibits brittle bugs even in non‑safety UI subsystems, eroding confidence in the safety‑critical stack.
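The non‑determinism point is easy to illustrate. The sketch below is hypothetical and not code from the article: a fixed “model” (here just a linear scorer) is fed the same image repeatedly, yet its scores differ from run to run because a Gaussian‑noise preprocessing step is stochastic; the weights, image, and noise level are placeholders.

```python
import numpy as np

def preprocess(image, rng, sigma=0.05):
    """Stochastic preprocessing: add zero-mean Gaussian noise (a common
    augmentation/denoising-test step). This is the only source of randomness."""
    return image + rng.normal(0.0, sigma, size=image.shape)

def classify(image, weights):
    """Stand-in 'perception model': a fixed linear scorer over pixels."""
    scores = weights @ image.ravel()
    return int(np.argmax(scores)), scores

# Fixed input and fixed model weights: any variation below comes from
# the preprocessing noise, not from the model itself.
image = np.full((8, 8), 0.5)
weights = np.random.default_rng(0).normal(size=(3, image.size))

for run in range(3):
    rng = np.random.default_rng()   # unseeded: different noise every run
    label, scores = classify(preprocess(image, rng), weights)
    print(f"run {run}: label={label}, scores={np.round(scores, 3)}")
```

When the class scores happen to be close, the predicted label itself can flip between runs on an identical input, which is exactly the property that frustrates exhaustive, formal verification of such pipelines.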
For the AI/ML community, this underscores that autonomy is an integrated robotics problem, not a pure scale‑up of perception models. The proposed path favors sensor fusion, hierarchical/subsumption control (as in DARPA‑era systems), robust vehicle‑to‑vehicle (V2V) coordination, and incremental deployment of partial autonomy tied to incentives (e.g., CAN‑bus retrofit programs or insurance discounts), rather than chasing end‑state sentience. The piece calls for humility: human‑in‑the‑loop approaches, provable redundancy, and architectures that keep a single sensor or software fault from becoming a system failure are more likely to deliver real‑world safety than headline claims of “full” autonomy.
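To make the “hierarchical/subsumption control” reference concrete, here is a minimal sketch of the idea under assumed behaviors: lower layers handle safety reflexes and suppress higher‑level goals whenever they fire. The layer names, thresholds, and sensor fields are illustrative only, not taken from the article or from any DARPA system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensors:
    # Hypothetical fused sensor snapshot; field names are illustrative.
    obstacle_distance_m: float
    lane_offset_m: float

@dataclass
class Command:
    throttle: float  # 0 (stop) .. 1 (full)
    steer: float     # -1 (left) .. 1 (right)

def emergency_stop(s: Sensors) -> Optional[Command]:
    """Lowest layer: always wins when an obstacle is dangerously close."""
    if s.obstacle_distance_m < 2.0:
        return Command(throttle=0.0, steer=0.0)
    return None

def lane_keeping(s: Sensors) -> Optional[Command]:
    """Middle layer: correct lateral drift when it gets large."""
    if abs(s.lane_offset_m) > 0.3:
        return Command(throttle=0.3, steer=-0.5 * s.lane_offset_m)
    return None

def cruise(s: Sensors) -> Optional[Command]:
    """Highest layer: default route-following behavior."""
    return Command(throttle=0.5, steer=0.0)

# Subsumption-style arbitration: the first (lowest) layer that produces
# a command suppresses every layer above it.
LAYERS = [emergency_stop, lane_keeping, cruise]

def control(s: Sensors) -> Command:
    for layer in LAYERS:
        cmd = layer(s)
        if cmd is not None:
            return cmd
    return Command(0.0, 0.0)  # fail-safe default if no layer acts

print(control(Sensors(obstacle_distance_m=1.5, lane_offset_m=0.0)))   # stop
print(control(Sensors(obstacle_distance_m=30.0, lane_offset_m=0.6)))  # steer back
print(control(Sensors(obstacle_distance_m=30.0, lane_offset_m=0.0)))  # cruise
```

The appeal of this layering for the article's argument is that the safety behavior does not depend on the correctness of the higher, learned or route‑level layers: a fault above the reflex layer degrades capability rather than safety.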