Literary character approach helps LLMs simulate more human-like personalities (techxplore.com)

🤖 AI Summary
Researchers from Hebei Petroleum University of Technology and Beijing Institute of Technology introduced a framework for assessing how well large language models (LLMs) simulate human-like personalities, and report a key empirical finding: a scaling law in which the level of persona detail strongly governs realism. They argue that applying human psychometric validity tests directly to LLMs is a categorical mismatch, and instead evaluate personality distributions at the population level.

Across experiments, simple persona prompts and prompt engineering produced systematic positive biases (models answering like résumé writers), whereas asking LLMs to generate novels, or supplying detailed, Wikipedia-style character profiles, dramatically reduced that bias and yielded simulated personality distributions that converge toward human data.

The result matters for AI-driven social simulation, virtual characters, and behavioral research because it identifies persona detail level as the primary lever for realistic simulation and suggests LLMs internally encode priors about human attributes. Technical implications include new evaluation paradigms, the potential to probe latent representations (the authors plan linear-regression-based probing to expose internal priors), and a roadmap to improve realism by training on richer persona datasets. The work also flags serious ethical and privacy risks: platforms with rich user profiles could create highly believable virtual agents, raising manipulation and autonomy concerns that will require detection and mitigation strategies.
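The population-level evaluation idea can be sketched as follows. Rather than scoring any single simulated respondent against human psychometric norms, one compares the *distribution* of trait scores across many simulated personas with a human reference distribution. This is a minimal illustration under assumed numbers, not the paper's actual data or pipeline; the positive shift for simple personas and the convergence for detailed profiles are modeled here with synthetic Gaussians.

```python
# Illustrative sketch of population-level personality evaluation.
# All distributions are synthetic stand-ins, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Human reference: trait scores (e.g., Likert-scale means) for a population.
human_scores = rng.normal(loc=3.2, scale=0.7, size=1000)

# Simple persona prompts tend to produce a positive bias (resume-like answers).
simple_persona_scores = rng.normal(loc=4.1, scale=0.4, size=1000)

# Detailed, Wikipedia-style profiles reportedly converge toward human data.
detailed_persona_scores = rng.normal(loc=3.3, scale=0.65, size=1000)

def distribution_gap(simulated, human):
    """Two-sample Kolmogorov-Smirnov statistic: 0 means identical distributions."""
    return stats.ks_2samp(simulated, human).statistic

print(distribution_gap(simple_persona_scores, human_scores))    # large gap
print(distribution_gap(detailed_persona_scores, human_scores))  # much smaller gap
```

The KS statistic is one convenient distance between distributions; Wasserstein distance or moment comparisons would serve the same role in this evaluation style.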
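The planned linear-regression probing can also be sketched in toy form: fit a linear map from a model's hidden states to a trait score and check whether the trait is decodable on held-out data. The hidden states below are synthetic with a planted trait direction, purely to show the probing mechanics; in practice they would come from an LLM's internal activations.

```python
# Toy linear probe: is a trait score linearly decodable from hidden states?
# The "hidden states" are synthetic, with a planted linear trait direction.
import numpy as np

rng = np.random.default_rng(1)
n_samples, hidden_dim = 500, 64

# Synthetic activations = trait signal along one direction, plus noise.
trait_direction = rng.normal(size=hidden_dim)
true_trait = rng.normal(size=n_samples)  # e.g., an extraversion score per persona
hidden_states = (np.outer(true_trait, trait_direction)
                 + rng.normal(scale=0.5, size=(n_samples, hidden_dim)))

# Fit a least-squares linear probe on a train split.
train, test = slice(0, 400), slice(400, 500)
w, *_ = np.linalg.lstsq(hidden_states[train], true_trait[train], rcond=None)

# High held-out correlation suggests the trait is linearly encoded.
pred = hidden_states[test] @ w
r = np.corrcoef(pred, true_trait[test])[0, 1]
print(f"held-out probe correlation: {r:.2f}")
```

A real probing study would additionally control for probe capacity and compare against shuffled-label baselines, so that decodability reflects structure in the representations rather than overfitting.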