I think "agent" may have a widely enough agreed upon useful definition (simonwillison.net)

🤖 AI Summary
A growing consensus in AI engineering is converging on a practical definition of “agent”: an LLM that runs tools in a loop to achieve a goal. That concise formulation captures the common pattern in modern tool-enabled LLM systems: the model issues tool/function calls to a harness, the tool outputs are fed back into the conversation, and the cycle continues until a stopping condition (the goal) is reached. This view subsumes short-term memory (the conversation history and previous tool calls), allows long-term memory to be implemented as additional tools, and admits sub-agent patterns in which another model sets the goal.

That clarity matters: shared jargon improves technical communication, design trade-off discussions, and realistic expectations. It also helps push back on two persistent confusions — the myth of agents as human replacements (which ignores accountability and autonomy) and inconsistent public-facing uses of the term (notably differing descriptions from OpenAI, including ChatGPT’s browser-automation “agent”). For implementers, “agent = tools-in-a-loop to achieve a goal” is a useful, actionable baseline that highlights the role of tool interfaces, loop control/stopping conditions, and explicit memory tooling when persistence or accountability is needed.
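The tools-in-a-loop pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: `call_model` here is a stub standing in for a real LLM call, and the message/tool-call shapes are invented for the example.

```python
# Sketch of "an LLM running tools in a loop to achieve a goal".
# All names (call_model, TOOLS, message fields) are illustrative assumptions.

def call_model(messages):
    """Stub model: requests one tool call, then emits a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "content": "The sum is 5."}

TOOLS = {"add": lambda a, b: a + b}

def run_agent(goal, max_steps=10):
    # Short-term memory is just the conversation history.
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):          # loop control: bounded step budget
        reply = call_model(messages)
        if reply["type"] == "final":    # stopping condition: goal reached
            return reply["content"]
        result = TOOLS[reply["name"]](**reply["args"])  # run the tool
        # Feed the tool output back into the conversation and continue.
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("step budget exhausted")

print(run_agent("What is 2 + 3?"))  # → The sum is 5.
```

In a real harness, `call_model` would hit an LLM API and `TOOLS` would wrap real capabilities (search, file I/O, sub-agents); long-term memory would appear as just another entry in the tool table.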