LLM-empowered knowledge graph construction: A survey (arxiv.org)

🤖 AI Summary
Researchers released a comprehensive survey examining how large language models (LLMs) are reshaping knowledge graph (KG) construction, moving the field from rule-based and statistical pipelines to language-driven, generative workflows. The paper revisits the classical three-layer KG pipeline of ontology (schema) engineering, knowledge extraction, and knowledge fusion, and systematically catalogs LLM-powered methods that either preserve structured schemas or operate schema-free.

It synthesizes representative frameworks across each stage, contrasts design trade-offs, and highlights practical limitations in current work. This shift matters because LLMs enable more flexible entity/relation discovery, natural-language-driven schema induction, and generative fusion strategies that can scale to open domains, but they also introduce challenges around consistency, normalization, hallucination, and evaluation.

Key technical themes include prompt engineering, in-context learning and fine-tuning for extraction, schema-constrained versus generative (schema-free) construction paradigms, and mechanisms for integrating symbolic constraints with neural generation. The survey points to promising directions, including KG-augmented reasoning for LLMs, dynamic knowledge memories for agentic systems, and multimodal KG construction, framing a research agenda that combines neuro-symbolic rigor with LLM adaptability for more explainable, updatable, and multimodal knowledge systems.
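To make the schema-constrained pattern concrete, below is a minimal Python sketch of prompt-based triple extraction with a symbolic post-filter: the LLM is prompted to emit JSON triples, and any triple whose relation falls outside a fixed schema is discarded. All names here (SCHEMA, call_llm, extract_triples) are illustrative assumptions for this summary, not code from the paper.

import json

# Hypothetical schema for the schema-constrained setting: a closed relation set.
SCHEMA = {"relations": ["works_for", "founded", "located_in"]}

PROMPT_TEMPLATE = """Extract (head, relation, tail) triples from the text below.
Use only these relations: {relations}.
Return a JSON list of objects with keys "head", "relation", "tail".

Text: {text}
Triples:"""

def extract_triples(text: str, call_llm) -> list[dict]:
    """Prompt an LLM for triples, then filter them against the schema.

    `call_llm` is a placeholder for any completion API that maps a prompt
    string to a text response; it is not tied to a specific provider.
    """
    prompt = PROMPT_TEMPLATE.format(
        relations=", ".join(SCHEMA["relations"]), text=text
    )
    raw = call_llm(prompt)
    try:
        candidates = json.loads(raw)
    except json.JSONDecodeError:
        return []  # generative output may not parse; real pipelines retry or repair
    # Symbolic constraint: reject relations outside the schema, a simple guard
    # against hallucinated relation types.
    return [
        t for t in candidates
        if isinstance(t, dict) and t.get("relation") in SCHEMA["relations"]
    ]

The post-hoc filter is only one simple way to couple symbolic constraints with neural generation; the survey also covers tighter couplings such as constrained decoding, while the schema-free paradigm would drop the relation whitelist and normalize the open-ended output afterward.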