🤖 AI Summary
OpenAI quietly shipped GPT-5-Codex-Mini inside its Codex CLI/VS Code flow, and a developer reverse-engineered the open-source Codex Rust client to add a new "codex prompt" subcommand that sends arbitrary prompts to the same backend the CLI uses. Without writing Rust themselves, they iterated with Codex to implement a tool-free prompt mode, a --debug flag, and model selection, then tested SVG generation (a pelican on a bicycle) against gpt-5, gpt-5-codex, and gpt-5-codex-mini. The mini model worked but produced lower-quality SVGs, and the exercise shows that the model is currently reachable only through the official client, authenticated and billed via the user's ChatGPT account rather than a public API.
The reverse engineering is technically revealing: it surfaced the private endpoint (https://chatgpt.com/backend-api/codex/responses) and the JSON contract the CLI uses — fields like model, instructions (a required default instruction blob), an input array with role="developer" and role="user" messages, tools: [], stream: true, include: ["reasoning.encrypted_content"], and a prompt_cache_key. That shows Codex runs as a coding agent with distinct developer instructions and tool sandboxing, and that callers can inject their own developer message and disable tools entirely. For the AI/ML community this provides an early look at production prompt engineering and model plumbing (streaming, reasoning outputs, tool control), while raising questions about access controls, billing surface, and the ethics and security of using open-source clients to reach non-public APIs.
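The contract above can be sketched as a request-body builder. This is a minimal sketch based only on the field names reported in the summary: message content is simplified to plain strings, and the instruction blob, cache key, and default model name are placeholders, not the real values the CLI sends.

```python
import json


def build_codex_request(prompt: str, model: str = "gpt-5-codex-mini") -> dict:
    """Assemble a request body shaped like the one the Codex CLI
    reportedly POSTs to https://chatgpt.com/backend-api/codex/responses."""
    return {
        "model": model,
        # The CLI always includes a default instruction blob; the text
        # here is a placeholder, not the actual blob.
        "instructions": "<default Codex instruction blob>",
        "input": [
            # A "developer" message carries agent-level directions,
            # separate from the end user's prompt.
            {"role": "developer", "content": "Respond in plain text."},
            {"role": "user", "content": prompt},
        ],
        "tools": [],   # an empty tool list disables the coding-agent tools
        "stream": True,  # responses arrive as a stream
        "include": ["reasoning.encrypted_content"],
        "prompt_cache_key": "example-session-key",  # placeholder
    }


body = build_codex_request("Generate an SVG of a pelican riding a bicycle")
print(json.dumps(body, indent=2))
```

Actually sending this body would additionally require the ChatGPT-account auth headers the CLI attaches, which is exactly the "privileged client" billing surface the summary flags.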