Driving Generative Agents with Their Personality (arxiv.org)

🤖 AI Summary
Researchers demonstrate that large language models can be driven by explicit psychometric personality values to produce consistent, human-like NPC behavior in games. Using an Affective Computing setup to quantify an NPC's "psyche," the team encodes personality traits as prompt inputs and tests LLM outputs against a repurposed International Personality Item Pool (IPIP) questionnaire. The study shows that modern models — notably GPT-4 — reliably interpret and express the supplied personality profile, generating content and behaviors that align with the trait dimensions measured by the IPIP.

This work matters because it provides a practical pipeline for personality-conditioned generative agents: affective-system measurements → trait-conditioned prompts → behaviorally consistent LLM output. Technically, the paper ties psychometric standards to prompt engineering and offers an evaluation methodology (IPIP applied to model outputs) that treats LLMs like subjects in personality assessment.

Implications include more believable, customizable NPCs, improved tools for interactive storytelling and human-AI interaction, and a replicable way to quantify persona fidelity. It also raises follow-ups around robustness, long-term behavioral consistency, potential propagation of bias from psychometric priors, and ethical design when simulating personalities.
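As a rough illustration of what such a pipeline could look like, the sketch below encodes numeric Big Five trait values into a persona prompt and scores IPIP-style Likert answers back into trait estimates. This is an assumption-laden reconstruction, not the paper's code: the function names (`build_persona_prompt`, `score_ipip`), the 0–1 trait scale, and the verbal level labels are all hypothetical.

```python
# Hypothetical sketch of a personality-conditioned agent pipeline:
# trait values -> prompt text, and IPIP-style scoring of the model's
# 1-5 Likert answers. All names and scales here are illustrative.

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def build_persona_prompt(profile):
    """Encode numeric trait scores (0-1) as an NPC system prompt."""
    lines = ["You are an NPC with the following personality profile:"]
    for trait in TRAITS:
        level = profile[trait]
        label = ("very low" if level < 0.2 else "low" if level < 0.4 else
                 "moderate" if level < 0.6 else "high" if level < 0.8 else
                 "very high")
        lines.append(f"- {trait}: {label} ({level:.2f})")
    lines.append("Answer every question in character, "
                 "consistent with this profile.")
    return "\n".join(lines)

def score_ipip(answers, keyed):
    """Score 1-5 Likert answers per trait.

    `answers` maps item id -> rating; `keyed` maps item id ->
    (trait, +1/-1). Reverse-keyed items (-1) flip to (6 - rating).
    Returns each trait's mean rating rescaled to 0-1 so it is
    directly comparable to the input profile.
    """
    totals = {t: 0.0 for t in TRAITS}
    counts = {t: 0 for t in TRAITS}
    for item, rating in answers.items():
        trait, key = keyed[item]
        value = rating if key > 0 else 6 - rating
        totals[trait] += value
        counts[trait] += 1
    return {t: (totals[t] / counts[t] - 1) / 4
            for t in TRAITS if counts[t]}

profile = {"openness": 0.9, "conscientiousness": 0.3, "extraversion": 0.7,
           "agreeableness": 0.5, "neuroticism": 0.2}
print(build_persona_prompt(profile))

# Two hypothetical extraversion items, one reverse-keyed:
keyed = {"E1": ("extraversion", +1), "E2": ("extraversion", -1)}
answers = {"E1": 5, "E2": 2}
print(score_ipip(answers, keyed))  # extraversion -> 0.875
```

Comparing the scored output against the input profile is one way to quantify "persona fidelity" in the sense the summary describes; the actual questionnaire items and scoring keys would come from the IPIP itself.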