🤖 AI Summary
Anton Antonov released LLM::Graph, a Raku package that models multi-step LLM workflows as directed computation graphs in order to schedule and combine multiple LLM generation steps efficiently. It is installable via zef (`zef install LLM::Graph`) or directly from GitHub, and LLM nodes require the usual LLM-service authentication and internet access. `LLM::Graph` instances are callable objects (both `$g(input)` and `$g.eval(input)` work), expose a default `llm-evaluator`, and support asynchronous execution via a Boolean `async` option that uses `Promise` to submit concurrent LLM requests. The author demonstrates common patterns (e.g., three poet nodes feeding a judge node) and provides notebooks for visualization and interpretation.
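The poet/judge pattern above can be sketched roughly as follows. This is a minimal illustration based only on the summary's description of the API (callable graph objects, an `async` option backed by `Promise`); the node names, prompts, and exact rule shapes are assumptions, not the package's documented example.

```raku
use LLM::Graph;

# Hypothetical rules: two "poet" LLM nodes and a "judge" node whose
# templated-sub prompt consumes the poets' outputs by parameter name.
my %rules =
    poet1 => 'Write a short poem about summer.',
    poet2 => 'Write a haiku about winter.',
    judge => sub ($poet1, $poet2) {
        "Pick the better of these two poems:\n1) $poet1\n2) $poet2"
    };

# async => True submits the independent LLM requests concurrently via Promise.
my $g = LLM::Graph.new(%rules, async => True);

# Graph instances are callable; either form evaluates the whole pipeline.
$g.eval();
# or equivalently: $g();
```

Running this requires a configured LLM service (API key and internet access), so the output depends on the backing model.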
Technically, each node is specified as an `eval-function` (a plain Raku sub), an `llm-function` (an LLM submission), or a `listable-llm-function` (threaded LLM calls over list inputs); a node can also declare its inputs, conditional `test-function` logic, and templated prompts. `llm-function` prompts accept static strings, lists of strings, repository prompts, templated subs, or `LLM::Function` objects; `llm-function` calls now produce function objects (functors) by default. The graph is converted to a `Graph` object and evaluated by recursion (topological sorting proved unnecessary), with special wrapping for string-template subs (blocks are currently unsupported). Visual output includes richer plots with distinct node shapes and dashed edges for test dependencies, making complex LLM pipelines both performant and debuggable.
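The fuller node-spec form described above might look like the sketch below. The spec keys (`eval-function`, `llm-function`, `test-function`) come from the summary; the concrete node names, prompts, and the name-based input wiring are illustrative assumptions.

```raku
use LLM::Graph;

my %rules =
    # LLM node with a templated-sub prompt (the package wraps such
    # string-template subs specially; blocks are currently unsupported).
    summary => { llm-function => sub ($text) { "Summarize:\n$text" } },

    # Pure Raku computation node, no LLM call involved.
    word-count => { eval-function => sub ($text) { $text.words.elems } },

    # Conditional node: evaluated only when its test-function holds.
    # Test dependencies show up as dashed edges in the graph plots.
    long-note => {
        llm-function  => sub ($summary) { "Expand this summary:\n$summary" },
        test-function => sub ($word-count) { $word-count > 100 }
    };

my $g = LLM::Graph.new(%rules);
$g.eval(text => 'Some input document ...');
```

As with any LLM node, the `summary` and `long-note` steps need a configured LLM service; only `word-count` runs locally.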