Tesla's Full Self-Driving software under investigation for safety violations (techcrunch.com)

🤖 AI Summary
The National Highway Traffic Safety Administration's Office of Defects Investigation (ODI) has opened a Preliminary Evaluation into Tesla's Full Self-Driving (FSD) software after receiving more than 50 reports, at least four of which involved injuries, alleging behaviors such as running red lights, failing to remain stopped, crossing double-yellow lines, entering opposing lanes, and turning the wrong way despite wrong-way signage. The probe cites specific clusters of incidents, including repeat events at an intersection in Joppa, Maryland, and draws on consumer complaints, media reports, and the crash submissions required under the Standing General Order.

The action comes the same week Tesla pushed a highly publicized FSD update that incorporates training data from its Austin robotaxi pilot.

For the AI/ML community this raises immediate safety, validation, and governance questions: these are among the first agency actions targeted specifically at driver-assist neural systems, and a recall could follow if systemic failures are confirmed. Technically, the incidents point to failures in perception, scene understanding, or decision-making around traffic signals, lane topology, and turn semantics, and they highlight the risk of rolling models trained on limited, deployment-biased robotaxi data into consumer driving stacks. The inquiry follows earlier NHTSA probes into Autopilot and comes amid staffing changes at the agency, both factors that may shape the investigation's timeline and the future regulatory scrutiny of ML-driven vehicle control systems.