Courts don't know what to do about AI crimes (restofworld.org)

🤖 AI Summary
After a gun attack on Colombian presidential candidate Miguel Uribe Turbay in June, hundreds of videos circulated online, some later exposed as AI deepfakes, forcing investigators to spend hours verifying footage even as they charged a teenager with the shooting. The incident exemplifies a wider tension across Latin America: courts and police are rapidly adopting AI to clear backlogs and automate routine legal work (tools like Prometea, SAJ Digital, and PretorIA have cut processing times dramatically), yet they remain ill-equipped to handle AI-enabled harms such as deepfakes, biased facial recognition, and algorithmic predictive policing.

Deepfake videos rose 550% between 2019 and 2023, and though fewer than 1% originate in Latin America, the region shows some of the fastest growth. Meanwhile, 85% of Colombian judges use the free versions of ChatGPT or Copilot but receive minimal training. The legal response is fragmented: some countries (Brazil, South Korea, Australia, the U.S.) have passed laws targeting deepfake abuse, and Peru and Colombia now treat the use of deepfakes as an aggravating factor in existing crimes, but many local laws remain vague and enforcement lags.

Technically, many AI systems are trained on skewed datasets, producing false positives against Indigenous, Afro-descendant, and female faces, and predictive models often rely on “dirty” police records, amplifying bias. The upshot: AI can streamline justice, but it can also deepen harms unless courts build technical capacity, strengthen regulation, and insist on human oversight and robust evidentiary standards.