🤖 AI Summary
Dean Ball argues that the common law — judge-made, precedent-driven tort law centered on a “duty of reasonable care” — has historically adapted to disruptive technologies by reallocating who must bear new risks, and can play a similar role for AI. He traces examples from 19th‑century railroads (courts shifted fence-building and cargo-protection duties to adjacent landowners and passengers to enable rapid infrastructure growth) to mid‑20th‑century liability reforms that moved toward strict liability for mass-produced goods, showing how judicial decisions recalibrated incentives without comprehensive statutes. Ball points to contemporary suits (e.g., the Raine family’s case against OpenAI) as evidence that common law already reaches modern AI harms even absent explicit legislation.
For AI/ML, this means tort litigation could both incentivize safer design (by imposing negligence-based duties) and encourage adoption (when courts favor accommodating innovation), because, unlike one-way administrative rules, common law can adjust duties up or down. But there are limits: case law moves slowly relative to rapid model development, state-by-state variation risks a patchwork regime, and tort is poorly suited to catastrophic tail risks that can bankrupt defendants. The takeaway: legal culture and judicial posture will materially shape AI's path, either enabling progress through calibrated liability or chilling it if courts overreach or insurers withdraw coverage.