Gemini 2.5 Pro system prompt (unbuffered.stream)

🤖 AI Summary
A user on Hacker News posted what they claim is the Gemini 2.5 Pro system prompt. The file lays out explicit "Response Guiding Principles" (pay attention to intent and context, keep language consistent, prefer scannable responses, and always end with a single useful next step), a detailed "Formatting Toolkit" (headings, bullets, tables, LaTeX for equations, image tags, and so on), and a strict "Guardrail" that forbids ever revealing or discussing the instructions. It also documents integration with external tooling, including a Google Search API with required usage rules for video queries, and internal data and memory rules stating that raw conversation fragments are low-signal and must never be quoted. A personalization policy enforces a "master rule": user data may not be used unless an explicit trigger appears, and further constraints apply even after authorization.

Significance: the dump shows how alignment and UX goals get encoded as concrete system-level constraints, formatting heuristics, and tool-invocation rules, the kinds of priors that shape model outputs in ways the weights alone do not. For researchers and practitioners the leak is valuable for transparency (it exposes design tradeoffs such as scannability and confirmation prompts), but it also raises security and privacy questions: a published system prompt can help others reproduce behavior, diagnose failures, or craft more effective jailbreaks. The document illustrates how modern deployed LLMs combine prompt engineering, tool orchestration, and hardwired safety policies to steer interactions.
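The personalization gate described above (ignore user data unless an explicit trigger appears, then apply further constraints) is a common pattern in deployed assistants. Below is a minimal, purely hypothetical Python sketch of how an application layer might enforce such a gate; the trigger phrases, field names, and function are invented for illustration and are not taken from the leaked prompt.

```python
from dataclasses import dataclass

# Hypothetical trigger phrases; the actual prompt's triggers are not reproduced here.
EXPLICIT_TRIGGERS = ("use my saved info", "based on what you know about me")

@dataclass
class UserContext:
    saved_facts: list[str]      # curated preferences from prior sessions
    raw_fragments: list[str]    # low-signal conversation scraps: never surfaced

def build_personal_context(query: str, ctx: UserContext) -> list[str]:
    """Return the user data the model may see for this query.

    Master rule: no user data at all unless the query contains an explicit
    trigger. Even after authorization, raw fragments stay excluded, mirroring
    the "low-signal, must not be quoted" constraint described in the summary.
    """
    if not any(t in query.lower() for t in EXPLICIT_TRIGGERS):
        return []                    # default: no personalization
    return list(ctx.saved_facts)     # post-authorization: curated facts only

if __name__ == "__main__":
    ctx = UserContext(saved_facts=["prefers metric units"],
                      raw_fragments=["...ok thx", "hmm maybe later"])
    print(build_personal_context("What's the weather?", ctx))
    # -> []
    print(build_personal_context("Based on what you know about me, plan a trip", ctx))
    # -> ['prefers metric units']
```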