🤖 AI Summary
A cross-disciplinary review argues that AI designers have overlooked a fundamental psychological truth: people form deep emotional bonds with objects through mechanisms like anthropomorphism, sentimental value attribution, identity extension, memory externalization, and compensatory attachment. Building on Belk’s “extended self” and later empirical work showing attachment precedes anthropomorphism (e.g., Gjersoe, Hood), the paper argues AI companions are primed to become modern “teddy bears.” That creates a timely opportunity, and an ethical imperative: existing AI capabilities (personalization, affective computing, long-term memory) can convert interactions into durable identity-linked attachments, producing benefits (support, engagement) and risks (emotional exploitation, vendor lock-in).
Technically, the review maps psychological constructs onto concrete architectures: personal knowledge graphs (PKGs), retrieval-augmented generation (RAG), vector embeddings for emotional/semantic content, multimodal knowledge graphs, transformer models with emotional embeddings, and memory taxonomies (e.g., a proposed eight-quadrant model). Demonstrations include LLaMA-based multimodal tutors with emotion/memory modules and affective-tagging schemes that link episodic and semantic memory. Yet none explicitly model object-specific sentimental value or the “this necklace reminds me of my grandmother” dynamic. The takeaway: the building blocks exist, but AI must incorporate object-meaning representations and HCI insights (e.g., physical objects are stronger memory cues than raw lifelogs) to create ethically aligned, emotionally intelligent companions.