🤖 AI Summary
Over the past three years, the way users extend LLMs has evolved from pasted prompts to fully autonomous agents with access to code, browsers, and infrastructure. The essay traces that history — ChatGPT Plugins (2023) offered OpenAPI-driven tool use but failed early due to model limitations and UX friction; Custom Instructions and Custom GPTs packaged personas and context; Memory added persistent personalization; Cursor introduced repo-level .cursorrules for native, versioned instructions; and Anthropic’s Model Context Protocol (MCP, 2024) provided a heavyweight client‑server protocol for exposing tools, resources, and prompts to agents. Claude Code (2025) aggregated many extension patterns (CLAUDE.md, hooks, sub-agents), while Agent Skills (2025) rediscovered plugin-style power with a lightweight design: simple skills/ folders, SKILL.md frontmatter, plus scripts and examples that agents call rather than ingest, avoiding context bloat.
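To make the skills/ folder design concrete, here is a minimal sketch of what a SKILL.md might look like. The skill name, description wording, and the `scripts/make_report.py` helper are illustrative assumptions, not taken from the essay; only the frontmatter-plus-instructions shape reflects the pattern described above.

```
---
name: pdf-report            # hypothetical skill name
description: Build a PDF report from a CSV file. Use when the user asks for PDF output.
---

# PDF report skill

Run `scripts/make_report.py <input.csv> <output.pdf>` to build the report.
Sample inputs live in `examples/`. The agent reads this file on demand and
executes the script itself, rather than loading the script into context.
```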
The significance is practical and conceptual: modern models are finally capable enough to prefer giving agents general‑purpose compute and trusting them to glue tools together, rather than defining every tiny API. Technically, Skills trade MCP’s persistent tool definitions for on‑demand indexing and script execution (bash/Playwright examples), reducing context overload and friction. The implication is a new agent model — “an LLM in a while loop with a computer strapped to it” — and a likely shift back toward natural‑language first extension mechanisms that hide protocol complexity from end users while empowering robust, autonomous workflows.
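The "LLM in a while loop with a computer strapped to it" model can be sketched in a few lines. This is a toy illustration, not anyone's actual agent runtime: `model` stands in for an LLM call (the stub below hard-codes two turns), and the agent's "computer" is just `subprocess` running shell commands and feeding output back into the transcript.

```python
import subprocess

def run_agent(model, task, max_steps=10):
    """Toy agent loop: ask the model for a shell command, run it,
    append the output to the transcript, repeat until DONE."""
    transcript = [f"TASK: {task}"]
    for _ in range(max_steps):
        action = model("\n".join(transcript))
        if action.startswith("DONE"):
            return action.removeprefix("DONE").strip()
        # The "computer strapped to it": execute the proposed command.
        result = subprocess.run(action, shell=True,
                                capture_output=True, text=True)
        transcript.append(f"$ {action}\n{result.stdout}{result.stderr}")
    return None  # gave up after max_steps

# Stub standing in for an LLM: issue one command, then finish.
def stub_model(transcript):
    if "$ echo hello" not in transcript:
        return "echo hello"
    return "DONE hello"

print(run_agent(stub_model, "say hello"))  # prints: hello
```

A real loop would swap `stub_model` for an API call and sandbox the execution, but the shape is the same: general-purpose compute plus trust in the model to glue tools together.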