EA's Attempt to Use AI for Game Development Backfiring Horribly (futurism.com)

🤖 AI Summary
Electronic Arts’ push to embed generative AI across its studios—framed internally as a “thought partner” and rolled out with training courses—appears to be backfiring. Developers told Business Insider the tools regularly hallucinate, produce flawed code, and create extra remediation work rather than saving time. Industry surveys confirm the broader trend toward automation (Google Cloud found 87% of game developers using AI), but internal tension is rising: employees report being asked to train systems that may replace parts of their roles, and one former QA designer suspects AI-assisted summarization contributed to his layoff. The gap between executive enthusiasm and rank-and-file adoption (Dayforce found 87% of executives use AI daily versus 27% of workers) is producing visible resentment and workplace mockery.

The technical and business implications are serious: generative models can speed ideation but introduce correctness, safety, and UX risks when used without human oversight. EA itself warns in SEC filings about legal and reputational exposure, and early demos—like a derided AI prototype of a game protagonist—show player pushback against unnatural, “creepy” outputs. Practically, studios need stronger human-in-the-loop workflows, robust validation for generated code and art, clearer change management to protect jobs, and governance to manage the ethical and product-quality fallout as the industry tests AI at scale.