US opens Tesla probe after more crashes involving its "full self-driving" (apnews.com)

🤖 AI Summary
U.S. auto safety regulators (NHTSA) have opened a new investigation into Tesla's "Full Self-Driving" (FSD) suite after 58 reported incidents in which FSD-equipped cars allegedly ran red lights or drove on the wrong side of the road, sometimes colliding with other vehicles and causing injuries. The probe covers roughly 2.88 million vehicles, essentially all Teslas fitted with FSD. Many driver reports say the cars gave no warning before the unexpected behavior. NHTSA's action adds to existing investigations into Tesla features including the "Smart Summon" remote parking feature, crashes in low-visibility conditions, and whether Tesla has been reporting crashes in a timely manner as required.

Technically, the inquiry underscores the risks in Tesla's mix of Level 2 supervised driver assistance ("Full Self-Driving (Supervised)") and ongoing testing of the unsupervised, no-intervention version Musk has long promised. Level 2 systems legally require driver attention, yet the incidents suggest failures in perception, decision-making, or human-machine interaction, issues that bear on liability, regulatory compliance, and validation of edge-case behaviors. For the AI/ML community, the probe highlights the need for robust validation, transparency around training and safety metrics, and better driver monitoring. The investigation could slow deployments and increase regulatory demands for explainability, logging, and reporting standards in production autonomous systems.