Stanford Graph Learning Workshop 2025 Video Recordings (snap.stanford.edu)

🤖 AI Summary
Stanford’s Graph Learning Workshop (Oct 14, 2025) has published video recordings of a full-day program that brought together leaders in graph ML, foundation models, agents, and systems for fast LLM inference. Key talks and posters (many linked as videos) cover RelBench V2 and the Relational Transformer for zero-shot relational learning, in-context learning on structured data, GraphRAG and RAG with PyG, Ember (an inference-time scaling graph system), Optimas (end-to-end optimization for compound AI systems), higher-order gradient techniques for language models, and numerous agentic applications spanning spatial biology, autonomous scientific discovery, and precision medicine.

Technically, the workshop highlights a clear shift toward relational foundation models and graph-aware LLMs: scalable relational transformers, schema-agnostic perceiver encoders, graph-augmented language models, and methods for size generalization and zero-shot transfer across attribute domains. Equally notable are the systems contributions, including architectures and optimization methods focused on inference speed, edge deployment, and composed AI systems, along with work on privacy, verifiable RL rewards for reasoning, and human-in-the-loop graph agents.

These recordings are a concentrated resource for researchers and engineers seeking practical advances in graph-LLM integration, scalable inference, and domain applications (biology, networks, finance, IoT), signaling that graph inductive biases and agentic behavior are becoming central to next-generation foundation models.