🤖 AI Summary
Major insurers including AIG, WR Berkley and Great American have sought regulatory clearance to add policy exclusions that would let them deny claims tied to the use or integration of AI systems — from chatbots to autonomous agents — citing a string of high‑cost, public failures and the difficulty of quantifying correlated exposures. The move follows incidents such as Google’s $110M defamation suit over an AI “Overview” error, Air Canada being forced to honor a discount its chatbot invented, and UK engineering firm Arup losing £20M to a deepfake executive scam. Insurers and underwriters describe large language models as “black boxes”; some firms (Mosaic) have refused to underwrite LLM risks, while proposed exclusions from WR Berkley could bar claims tied to “any actual or alleged use” of AI even when it’s a minor workflow component.
The shift matters because it reallocates AI deployment risk back onto companies, potentially slowing adoption or raising costs for buyers and developers. Carriers are experimenting with narrower endorsements — QBE’s EU AI Act fines coverage is capped at 2.5% of policy limits, and Chubb covers certain AI incidents while excluding events that could cause “widespread” simultaneous losses — highlighting a regulatory and contractual patchwork. Insurers warn that the real peril is systemic, correlated loss from a single upstream model or vendor: Aon estimates a single‑company hit of $400–500M is absorbable, but thousands of simultaneous claims are not — underscoring urgent needs for standards, transparency and new risk‑sharing models.