When Tesla's FSD works well, it gets credit. When it doesn't, you get blamed (electrek.co)

🤖 AI Summary
Tesla's Full Self-Driving (FSD) remains a Level 2 driver-assist system that legally requires an attentive human in the loop, even as the company touts its safety gains and near "self-driving" capabilities (highway lane-keeping, navigation-guided interchanges, and recent surface-street operation). Tesla publishes metrics on miles between crashes and interventions to claim FSD is safer than human-only driving, but those figures are selectively presented, reflect combined human-plus-software performance, and lack independent, peer-reviewed analysis; third-party measurements show higher intervention rates.

In litigation, Tesla has typically blamed drivers, correctly under Level 2 rules, but CEO Elon Musk's public promises of autonomous capability and the company's withholding of internal data have already led courts to apportion liability to Tesla (e.g., a Florida crash in which Tesla was found 33% responsible and hit with a large judgment). That fragile legal shield may erode further after Musk suggested Tesla would relax or disable camera-based driver monitoring to permit "texting and driving" while FSD is active. Allowing eyes-off-road operation undermines the defensive premise that drivers remain responsible, and Musk's statements could be cited as evidence of negligence or misleading marketing in future suits.

Combined with Tesla's opaque datasets and its history of settling cases or contesting discovery, this shift carries meaningful technical and regulatory implications: increased liability exposure, renewed scrutiny of safety claims and telemetry, and stronger calls for independent testing before any move toward unsupervised driving.