🤖 AI Summary
Researchers from King's College London and Cardiff University examined the growing "digital afterlife" industry—services that create "deathbots" by ingesting a person's digital traces (voice recordings, texts, emails, and social posts) to produce interactive avatars. As part of the Synthetic Pasts project (published in Memory, Mind & Media), the authors became test subjects, uploading videos, messages, and voice notes to build "digital doubles." They tried both archival tools (searchable, theme-organized memory stores) and generative chatbots that use machine learning to mimic a person's tone and evolve with continued use.
Their findings highlight significant technical and ethical limits: generative models can produce convincing, intimate-seeming replies, but they often echo phrasing verbatim, strike an incongruous tone, and flatten nuance, while archival systems impose rigid categories and privilege continuity over ambivalence. Behind these experiences lie commercial incentives—subscription models, freemium tiers, and data partnerships—that harvest emotional and biometric signals to drive engagement. The study argues that AI-enabled resurrection normalizes particular modes of remembering, conflates storage with memory, and risks erasing the uncertainty that makes mourning meaningful. In short, you can "talk" to the dead with AI, but what comes back is shaped more by algorithmic design and market logics than by authentic, living complexity.