🤖 AI Summary
Two Tesla shareholder-influencers attempted the coast-to-coast self-driving trip Elon Musk promised in 2016, but crashed about 60 miles into a San Diego-to-Jacksonville run (roughly 2.5% of the journey) while running Tesla FSD v13.9 on a Model Y. Video shows the driver with hands off the wheel as the passenger spots road debris well in advance; the driver grabbed the wheel only at the last second, and the car struck the debris, damaging a sway-bar bracket and other suspension components and triggering numerous warnings. The incident underscores that Tesla's consumer "Full Self-Driving" remains a Level 2 driver-assistance system requiring human supervision, and that Tesla's Austin Robotaxi fleet still relies on onboard supervisors.
For the AI/ML community, this is a reminder of where autonomous driving's hardest problems lie: rare, open-world edge cases (debris, unexpected obstacles) that break perception, tracking, and planning systems and cannot be fully solved by incremental software updates alone. Electrek's framing, which invokes the "march of the 9s" and Waymo's multi-year lead, highlights the need for far more validation, better simulation of long-tail events, and likely new sensing and hardware approaches before truly unsupervised driving is safe at scale. The crash weakens public confidence, raises regulatory and safety questions, and signals that claims of imminent driverless coast-to-coast autonomy remain premature.