🤖 AI Summary
AI is already reshaping legal work — and courts are wrestling with whether parts of judging should follow. Law firms and many judicial operators are using generative models for document review, drafting and predictions (a 2024 UNESCO survey found 44% of judicial operators using generative AI). But widespread “hallucinations” — fake or misrepresented citations, outdated or invented case law — have produced at least 84 problematic instances in Australian courts, professional sanctions (including fines) and high‑profile courtroom embarrassments. Chief Justice Andrew Bell and other jurists warn that while AI might improve access, speed and consistency, it also threatens core judicial values: accuracy, accountability, public confidence and the integrity of evidence (including a “liar’s dividend”, where genuine material can be dismissed as faked).
Technically, the debate hinges on what LLMs can and can’t do: they’re strong at summarisation, search and even emotional‑intelligence tasks (one study found LLMs scored ~81% vs humans’ 56%), and tools like China’s DeepSeek can draft opinions in minutes — but they reliably produce plausible‑sounding errors and can amplify biases. Courts have begun imposing hard limits: Bell’s practice notes bar AI from being used for affidavits, witness statements and character references, and advise judges not to use AI for primary legal research. The immediate implication is a hybrid future: AI will augment legal work and online dispute resolution, but human judges must retain fact‑finding, discretionary and credibility assessments — and courts will need stronger verification, transparency and regulatory frameworks to preserve due process.