Two federal judges say use of AI led to errors in US court rulings (www.channelnewsasia.com)

🤖 AI Summary
Two federal judges — U.S. District Judge Henry Wingate (Mississippi) and U.S. District Judge Julien Xavier Neals (New Jersey) — told Senate Judiciary Committee Chair Chuck Grassley that staff members used generative AI to help draft recent court orders that contained errors. Neals said a law‑school intern used OpenAI's ChatGPT for research without authorization or disclosure, and that a securities‑case draft was released in error and quickly withdrawn. Wingate said a law clerk used Perplexity "as a foundational drafting assistant" to synthesize docket information, and that a draft with "clerical errors" was posted because normal chambers review was bypassed. Both judges said they have since tightened review procedures and adopted written AI guidance.

The episode matters for the AI/ML community and courts alike because it highlights real risks when large language models (LLMs) are used in high‑stakes legal work without provenance, disclosure, or robust human‑in‑the‑loop checks. LLMs can hallucinate facts, omit sources, or improperly synthesize filings — outcomes that can materially affect litigants' rights. Senators and judges are now pressing for formal policies, auditing, training, and stronger disclosure rules to ensure accuracy, accountability, and attribution when generative AI is used in legal drafting and research. The incidents also mirror a broader trend of sanctions against lawyers who failed to vet AI output, underscoring the need for reliable verification and governance around model use.