🤖 AI Summary
A growing network of lawyers, researchers and academics is cataloging a surge of AI-fueled errors in court filings as chatbots fabricate case law, quotes and citations. A striking example: a Texas bankruptcy filing cited "Brasher v. Stewart" (1985), a case that does not exist, among 31 invented citations; the judge admonished the attorney, referred him to disciplinary authorities and ordered six hours of AI training. Volunteers including Damien Charlotin and Robert Freund have now documented 509 such incidents, up from a few reports per month early last year to several a day, and courts are increasingly imposing fines, disciplinary referrals and other sanctions on attorneys who rely on unchecked generative tools.
The trend matters for AI/ML because it highlights hallucination risk in high-stakes domains and the need for technical and procedural safeguards. Legal trackers currently rely on LexisNexis keyword alerts and manual review to spot misuse; Princeton researchers are building automated detectors that find fabricated citations directly rather than waiting for judges' rebukes. The episode underscores a persistent gap between model capabilities and professional duties: bar associations say AI use is acceptable when outputs are verified, yet current penalties haven't deterred misuse. That creates demand for better verification tooling, disclosure standards and domain-specific guardrails to keep generative models from corrupting critical decision-making and legal precedent.
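To make the detection idea concrete, here is a minimal Python sketch of automated citation checking: extract reporter-style citations with a regular expression and flag any that cannot be matched against a verified index. The regex, the `KNOWN_CITATIONS` set, the `flag_suspect_citations` helper, and the reporter numbers in the demo are all illustrative assumptions; this is not the Princeton team's method, and a real detector would use robust citation parsing and an authoritative legal database rather than a hard-coded set.

```python
import re

# Minimal sketch of fabricated-citation detection. The pattern and the
# verified-citation index below are illustrative assumptions only.

# Matches reporter-style citations such as "Brown v. Board, 347 U.S. 483".
CITATION_RE = re.compile(
    r"(?P<case>[A-Z][\w.'-]+ v\. [A-Z][\w.'-]+),?\s+"
    r"(?P<cite>\d+ [A-Za-z0-9. ]+ \d+)"
)

# Hypothetical index of verified citations; a production detector would
# query an authoritative legal database instead of a hard-coded set.
KNOWN_CITATIONS = {
    "347 U.S. 483",  # Brown v. Board of Education (1954)
}

def flag_suspect_citations(filing_text: str) -> list[str]:
    """Return citations from the filing that cannot be verified."""
    suspects = []
    for match in CITATION_RE.finditer(filing_text):
        cite = " ".join(match.group("cite").split())  # normalize whitespace
        if cite not in KNOWN_CITATIONS:
            suspects.append(f"{match.group('case')}, {cite}")
    return suspects

if __name__ == "__main__":
    sample = (
        "Plaintiff cites Brown v. Board, 347 U.S. 483 (1954), and "
        "Brasher v. Stewart, 465 S.W.2d 999 (Tex. 1985)."  # invented cite
    )
    for suspect in flag_suspect_citations(sample):
        print("Unverified citation:", suspect)
```

Running the sketch flags only the "Brasher v. Stewart" citation, mirroring the workflow the trackers perform by hand: verify every cited case against a trusted source before it reaches a judge.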