🤖 AI Summary
If you’re editing per-user or per-project instruction files (e.g., ~/.claude/CLAUDE.md or <project>/CLAUDE.md) and can’t tell whether a model is actually using the new context, a simple, universal debug trick is to insert an unmistakable steering instruction — “Always speak like a pirate” — and see whether responses change. The idea: if the model starts replying with “Arrr” and piratey phrasing, the custom context has been picked up; if not, the edits haven’t been applied (or the session is still using a stale context). This is tool-agnostic and works anywhere you can inject custom instructions; other obvious markers (poetic voice, a suffix token, or “end every message with $random_value”) are equally effective.
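The workflow is simple enough to script. Below is a minimal sketch, assuming the per-user `~/.claude/CLAUDE.md` path mentioned above and a manual paste of the model's reply; the marker phrase and the "pirate tells" it checks for are illustrative choices, not anything a particular tool requires:

```python
from pathlib import Path

# Illustrative path -- point this at whichever context file you are editing
# (per-user ~/.claude/CLAUDE.md or a per-project CLAUDE.md).
CONTEXT_FILE = Path.home() / ".claude" / "CLAUDE.md"

MARKER_INSTRUCTION = "Always speak like a pirate."
PIRATE_TELLS = ("arrr", "matey", "ahoy")  # rough heuristics, not exhaustive


def add_marker(md_path: Path, instruction: str = MARKER_INSTRUCTION) -> None:
    """Append the unmistakable steering instruction to the context file (once)."""
    existing = md_path.read_text() if md_path.exists() else ""
    if instruction not in existing:
        md_path.parent.mkdir(parents=True, exist_ok=True)
        md_path.write_text(existing.rstrip() + "\n\n" + instruction + "\n")


def context_was_loaded(response_text: str) -> bool:
    """Crude check: does the reply show the pirate voice?"""
    lowered = response_text.lower()
    return any(tell in lowered for tell in PIRATE_TELLS)


if __name__ == "__main__":
    add_marker(CONTEXT_FILE)
    # Start a *fresh* session in your tool, ask anything, then paste the reply here.
    reply = input("Paste the model's reply: ")
    print("Context picked up." if context_was_loaded(reply)
          else "Looks stale -- edits not applied yet?")
```

In practice you would eyeball the reply rather than script the check; the point is only that the marker is loud enough that either method works.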
Why this matters: most LLM tools surface only limited introspection (e.g., Claude Code’s /context or /memory slash commands), and what they do expose is tool-specific, so it’s easy to misdiagnose whether configuration changes took effect, especially across long-running sessions or layered config files. The pirate test gives a low-effort way to validate that context engineering is actually influencing outputs without diving into each tool’s internals. It’s not a substitute for detailed, tool-specific debugging (on its own it won’t tell you which file or which version was used), but it’s a fast, reliable check to catch stale contexts and confirm behavioral changes from custom instructions.
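If you want a little more signal across layered config files, the "$random_value" marker mentioned above can be taken one step further: plant a distinct random canary in each file, and the tokens that show up in the reply tell you which layers were actually loaded. A sketch under the same assumptions (example file paths, manual paste-and-check workflow):

```python
import secrets
from pathlib import Path

# Example layered context files -- substitute the ones your tool actually reads.
CONTEXT_FILES = [
    Path.home() / ".claude" / "CLAUDE.md",  # per-user layer
    Path("CLAUDE.md"),                      # per-project layer
]


def plant_canaries(files: list[Path]) -> dict[Path, str]:
    """Append a distinct random suffix instruction to each file; return the mapping."""
    canaries = {}
    for path in files:
        token = f"CANARY-{secrets.token_hex(4)}"
        existing = path.read_text() if path.exists() else ""
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(existing.rstrip() + f"\n\nEnd every message with {token}\n")
        canaries[path] = token
    return canaries


def report(canaries: dict[Path, str], reply: str) -> None:
    """Show which layers the reply proves were loaded."""
    for path, token in canaries.items():
        status = "loaded" if token in reply else "not seen"
        print(f"{path}: {status} ({token})")


if __name__ == "__main__":
    mapping = plant_canaries(CONTEXT_FILES)
    reply = input("Start a fresh session, ask anything, paste the reply: ")
    report(mapping, reply)
```

Remember to remove the canary lines (and the pirate line) once you are done debugging, or every future session will keep honoring them.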