🤖 AI Summary
A newly unsealed April 2025 sworn declaration from Tesla computer-vision engineer Christopher Payne, filed in a wrongful-death suit tied to a 2019 Model S crash, lays out how tightly Tesla controls its Autopilot technology. Payne describes strict, feature-level digital controls (password-protected systems, with multi-factor authentication sometimes required multiple times per day), hardware restrictions (company laptops with USB/USB‑C ports disabled), physical controls (special ID badges and building-level clearances), and procedural gates (access requests requiring both manager and Autopilot team approval). He also notes that only Autopilot engineers may enter certain development areas, that staff sign NDAs, and that the Autopilot org chart is intentionally opaque even internally.
For the AI/ML community, the filing is notable for illustrating the extreme measures a commercial autonomy program uses to protect IP and data, measures that affect reproducibility, third-party audits, regulatory oversight, and internal collaboration. While these controls defend proprietary models, datasets, and safety-critical pipelines, they can also create silos that complicate post-incident forensics, external validation, and cross-team error-checking. The declaration surfaced alongside a jury verdict that found Tesla partly liable for the Key Largo crash and awarded $329M in damages (Tesla is appealing), underscoring how corporate secrecy around ML systems can become central evidence in litigation and in public-policy debates about transparency and safety in deployed AI.
        