🤖 AI Summary
Anthropic researchers Erik Schluntz and Barry Zhang published a clear, practical guide that standardizes what people mean by “agents” and how they differ from other multi-LLM systems. They propose “agentic systems” as the umbrella term, distinguish “workflows” (predefined orchestration of multiple LLM calls) from true “agents” (LLMs that dynamically plan, call tools, and direct their own processes), and introduce “augmented LLMs” for LLMs extended with augmentations such as retrieval, tools, and memory. Their operational definition of an agent: it starts from a human command or interactive discussion, plans and executes steps autonomously, obtains ground truth from the environment (tool outputs, code execution results), pauses at checkpoints for human feedback or judgment, and uses stopping conditions (such as a maximum number of iterations) to retain control.
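That operational definition maps naturally onto a plan/act loop. Below is a minimal sketch under that reading; `call_llm` and `run_tool` are hypothetical placeholders (not Anthropic SDK functions) standing in for whatever model client and tool backend a team actually uses, and the checkpoint cadence and step limit are illustrative choices, not values from the guide.

```python
# Minimal agent-loop sketch: plan, act via tools, checkpoint for human
# feedback, and stop on a bounded iteration count.
# `call_llm` and `run_tool` are hypothetical placeholders, not a specific SDK.
from dataclasses import dataclass, field


@dataclass
class AgentState:
    task: str                                     # human command that starts the run
    history: list = field(default_factory=list)   # transcript of decisions and tool results


def call_llm(prompt: str) -> dict:
    """Placeholder: ask the model for the next action, e.g.
    {"action": "tool", "tool": "search", "input": "..."} or
    {"action": "finish", "answer": "..."}."""
    raise NotImplementedError("wire up your model client here")


def run_tool(name: str, tool_input: str) -> str:
    """Placeholder: execute a tool and return its output as ground truth from the environment."""
    raise NotImplementedError("wire up your tools here")


def run_agent(task: str, max_steps: int = 10, checkpoint_every: int = 3) -> str:
    """Autonomous planning with environmental feedback, human checkpoints, and a stopping condition."""
    state = AgentState(task=task)
    for step in range(max_steps):                  # stopping condition: bounded iterations
        decision = call_llm(f"Task: {state.task}\nHistory: {state.history}\nWhat next?")
        if decision["action"] == "finish":
            return decision["answer"]
        observation = run_tool(decision["tool"], decision["input"])  # ground truth from the environment
        state.history.append((decision, observation))
        if (step + 1) % checkpoint_every == 0:     # checkpoint: pause for human feedback
            feedback = input("Feedback (blank to continue, 'stop' to halt): ")
            if feedback.strip().lower() == "stop":
                break
            if feedback:
                state.history.append(("human_feedback", feedback))
    return "Stopped before completion; review history for partial progress."
```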
The guide's most practical contribution is a taxonomy of five workflow patterns (prompt chaining, routing, parallelization, orchestrator-workers, and evaluator-optimizer), plus cookbook recipes illustrating each, notably an evaluator-optimizer loop that iterates between code generation and review. Key implications: prefer the simplest approach that meets requirements; use workflows before moving to full agents; reserve agents for open-ended problems where the number of steps is unpredictable and there is some trust in autonomous decision-making. They also flag higher cost, compounding error risks, and the need for sandboxed testing and guardrails, advising teams not to overengineer agent frameworks before exhausting direct API calls and simple orchestration.
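To make the highlighted evaluator-optimizer pattern concrete, here is a minimal sketch (not the cookbook recipe itself) that alternates a generator call and a reviewer call until the reviewer approves or a round limit is hit; `call_llm` is again a hypothetical stand-in for a real model client, and the "PASS" convention is an assumption chosen for this example.

```python
# Evaluator-optimizer sketch: one LLM call generates a solution, a second
# call critiques it, and the generator retries with that feedback.
# `call_llm` is a hypothetical placeholder, not a specific SDK function.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model client here")


def evaluator_optimizer(task: str, max_rounds: int = 3) -> str:
    """Iterate generation and review until the evaluator approves or rounds run out."""
    solution = call_llm(f"Write Python code for this task:\n{task}")
    for _ in range(max_rounds):
        review = call_llm(
            "Review the code below for correctness and style. "
            "Reply 'PASS' if acceptable, otherwise list concrete fixes.\n\n"
            f"Task: {task}\n\nCode:\n{solution}"
        )
        if review.strip().upper().startswith("PASS"):   # evaluator accepts: stop iterating
            return solution
        solution = call_llm(                             # optimizer retries with the feedback
            f"Task: {task}\n\nPrevious code:\n{solution}\n\n"
            f"Reviewer feedback:\n{review}\n\nReturn improved code only."
        )
    return solution  # best effort after the round limit
```

The fixed round limit is the same kind of stopping condition the guide recommends for retaining control, and it caps the compounding cost of repeated LLM calls.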