🤖 AI Summary
A California Court of Appeal published a notable opinion after finding that generative AI had fabricated nearly all legal quotations in an appellant’s briefs—reportedly 21 of 23 citations were false or misattributed, and some cited cases did not exist. The panel determined the fabricated authorities came from AI tools used by plaintiff’s counsel, who failed to personally read or verify the sources. Because this conduct breached basic duties to client and court, the attorney was ordered to pay monetary sanctions and to notify the client of the opinion, and the clerk was directed to send a copy of the opinion to the State Bar. The court published the decision as a formal warning: no filing should contain citations an attorney has not personally read and verified.
For the AI/ML community this serves as a high-profile, real-world example of “hallucinations” from large language models causing legal and professional harm. The key technical implications: models can invent plausible-looking but false authorities, misquote real documents, and misattribute content, so human-in-the-loop verification is mandatory in high-stakes domains. Practical takeaways include the need for retrieval-augmented generation with verifiable provenance, robust citation grounding, automated citation-checkers (see the sketch below), calibrated uncertainty signals, and workflows that force source inspection. The case underscores that unreliability in current LLM outputs is not just an academic flaw but a liability risk: incorrect generation can trigger sanctions, ethical breaches, and regulatory scrutiny.
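To make the “automated citation-checker” idea concrete, here is a minimal sketch, not any real tool and not the court’s process. It assumes a hypothetical `VERIFIED_CORPUS` mapping case names to authoritative text (in practice this would come from an official reporter or court database), and it only flags problems; anything it flags still requires a human to read the underlying source.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    case_name: str    # e.g. "Example v. Example (2020)"
    quoted_text: str  # the passage the brief attributes to that case

# Hypothetical verified corpus: case name -> authoritative full text.
# In a real workflow this would be retrieved from an official source.
VERIFIED_CORPUS = {
    "Example v. Example (2020)": "The full, authoritative text of the opinion ...",
}

def check_citation(cite: Citation, corpus: dict[str, str]) -> str:
    """Classify a citation as 'verified', 'misquoted', or 'not found'."""
    source_text = corpus.get(cite.case_name)
    if source_text is None:
        return "not found"    # possibly a hallucinated authority
    if cite.quoted_text not in source_text:
        return "misquoted"    # real case, but the quote is not in it
    return "verified"

def audit_brief(citations: list[Citation], corpus: dict[str, str]) -> dict[str, str]:
    """Run every citation in a draft through the checker."""
    return {c.case_name: check_citation(c, corpus) for c in citations}

if __name__ == "__main__":
    draft = [
        Citation("Example v. Example (2020)", "authoritative text"),
        Citation("Nonexistent v. Imaginary (2023)", "any quote at all"),
    ]
    for case, status in audit_brief(draft, VERIFIED_CORPUS).items():
        print(f"{case}: {status}")
```

Even a crude substring check like this would have caught citations to cases that do not exist; the harder problems (paraphrased quotes, wrong pin cites, misattributed holdings) are exactly why the court insists that attorneys personally read every cited source.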