OpenAI prepares GPT-5.1-Codex-MAX for large-scale projects (www.testingcatalog.com)

🤖 AI Summary
OpenAI appears to be preparing GPT-5.1-Codex-MAX, a coding model described in its codebase as "smarter and faster" and explicitly built for project-scale, long-running development tasks rather than isolated files or short edits. The reference suggests the model will retain or reconstruct repository-level knowledge so it does not repeatedly re-ingest entire code trees, a major pain point for current coding assistants, which hit hard limits once a project exceeds a single context window.

Technically, the leak hints at mechanisms beyond simply enlarging the context window (Anthropic already offers a 500k-token window on some Claude tiers), such as faster compute, different architectures, structured memory, or retrieval and indexing systems for navigating large repositories. If true, Codex-MAX could substantially change AI-assisted software engineering by enabling stable multi-file, multi-step workflows and lowering the overhead of keeping large codebases "in mind." Given competitive pressure from recent launches like Gemini 3 and the timing of the codebase entry, OpenAI may be close to announcing or rolling out this capability, which would address a gap most major systems still struggle to fill.
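The retrieval-and-indexing idea is the easiest of these mechanisms to make concrete. The sketch below is a minimal, purely illustrative Python example of the general technique: index a repository once into token-addressed chunks, then fetch only the chunks relevant to a given task instead of re-reading the whole tree. Every name here (build_index, retrieve, the chunking scheme) is hypothetical, and nothing is known about how Codex-MAX actually works; real systems typically use embeddings or AST-aware chunking rather than this keyword overlap.

```python
# Illustrative only: index a repo once, retrieve relevant chunks per query.
# This does NOT reflect OpenAI's actual design.
import os
import re
from collections import defaultdict

def chunk_file(path, lines_per_chunk=40):
    """Split a source file into fixed-size line chunks."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        lines = f.readlines()
    for start in range(0, len(lines), lines_per_chunk):
        yield start + 1, "".join(lines[start:start + lines_per_chunk])

def build_index(repo_root, extensions=(".py", ".ts", ".go", ".rs")):
    """One-time pass: map each identifier-like token to the chunks mentioning it."""
    index = defaultdict(set)   # token -> {(path, first_line), ...}
    chunks = {}                # (path, first_line) -> chunk text
    for dirpath, _, filenames in os.walk(repo_root):
        for name in filenames:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            for first_line, text in chunk_file(path):
                key = (path, first_line)
                chunks[key] = text
                for token in set(re.findall(r"[A-Za-z_]\w{2,}", text)):
                    index[token.lower()].add(key)
    return index, chunks

def retrieve(index, chunks, query, k=3):
    """Score chunks by how many query tokens they contain; return the top k."""
    scores = defaultdict(int)
    for token in re.findall(r"[A-Za-z_]\w{2,}", query.lower()):
        for key in index.get(token, ()):
            scores[key] += 1
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    return [(key, chunks[key]) for key in ranked]

if __name__ == "__main__":
    index, chunks = build_index(".")
    for (path, line), text in retrieve(index, chunks, "build_index retrieval"):
        print(f"{path}:{line}\n{text[:200]}\n")
```

The point of the pattern is that only the retrieved chunks, not the full repository, would need to enter the model's context for any single step of a long-running task.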