Thoughts on Cursor 2.0 and Cursor Compose (simonwillison.net)

🤖 AI Summary
Cursor released Cursor 2.0 and a new in-house model, Composer 1, positioning itself as a fast, agentic coding assistant tuned for running many agents in parallel. The update pairs a refreshed UI for agentic workflows with a model the company bills as “4x faster than similarly intelligent models.” Composer is a mixture-of-experts (MoE) LLM with long-context capabilities, specialized for software engineering through reinforcement learning in real development environments. There is no public Composer API yet; users interact via Cursor’s chat/“Ask” mode, and early tests (e.g., generating an SVG) returned results quickly, matching the speed focus.

Technically, Cursor emphasizes heavy systems engineering. Composer was trained with asynchronous RL at scale using PyTorch and Ray, with low-precision MXFP8 MoE kernels, expert parallelism, and hybrid sharded data parallelism to scale to thousands of NVIDIA GPUs at low communication cost. During RL the model is taught to call a broad Cursor Agent toolset (code edits, semantic search, grep, terminal commands), which required running hundreds of thousands of concurrent sandboxed coding environments.

One notable omission is training provenance: Cursor has not confirmed whether Composer was trained from scratch or fine-tuned from an open-weights base. Researcher Sasha Rush stresses RL post-training as the core focus and has denied rumors linking earlier previews to xAI’s Grok. The release signals serious infrastructure investment in faster, tool-aware coding agents, but raises reproducibility and openness questions for the community.
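The mixture-of-experts idea mentioned above can be sketched generically: a gate scores each expert per token, only the top-k experts run, and their outputs are mixed by renormalized gate probabilities. This is a minimal plain-Python illustration of top-k gating (scalar inputs, toy experts, and all names are assumptions for clarity, not Cursor's implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, gate_weights, experts, top_k=2):
    """Route one token through the top-k experts by gate score.

    token: a scalar feature (real MoE layers route vectors)
    gate_weights: one gating weight per expert
    experts: list of callables, each mapping a float to a float
    """
    scores = softmax([w * token for w in gate_weights])
    # Pick the k experts with the highest gate probability.
    ranked = sorted(range(len(experts)),
                    key=lambda i: scores[i], reverse=True)[:top_k]
    # Renormalize the selected probabilities and mix expert outputs;
    # only the selected experts ever execute (the source of MoE's
    # compute savings at large parameter counts).
    total = sum(scores[i] for i in ranked)
    return sum(scores[i] / total * experts[i](token) for i in ranked)
```

Because only k of N experts run per token, total parameters can grow far faster than per-token compute, which is consistent with the "similarly intelligent but faster" framing.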
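The MXFP8 kernels referenced above build on the OCP microscaling (MX) idea: elements are stored in a narrow FP8-like format, and each small block shares a single power-of-two scale. A simplified sketch of that block-scaling scheme (the mantissa rounding here is a stand-in for hardware FP8 E4M3; exponent-range clamping and specials are omitted):

```python
import math

FP8_E4M3_MAX = 448.0  # largest finite value of FP8 E4M3

def round_to_fp8_grid(x, mant_bits=3):
    """Round x to the nearest value representable with `mant_bits`
    mantissa bits (E4M3 has 3), ignoring exponent-range limits."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    step = 2.0 ** (e - mant_bits)
    return round(x / step) * step

def quantize_block(block):
    """One shared power-of-two scale per block (as in MX formats),
    then element-wise rounding to the FP8-like grid."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return 0, [0.0] * len(block)
    # Smallest power-of-two scale that fits the largest element.
    shared_exp = math.ceil(math.log2(amax / FP8_E4M3_MAX))
    scale = 2.0 ** shared_exp
    return shared_exp, [round_to_fp8_grid(x / scale) for x in block]

def dequantize_block(shared_exp, elems):
    scale = 2.0 ** shared_exp
    return [e * scale for e in elems]
```

Sharing one scale per block keeps per-element storage at 8 bits while bounding relative quantization error, which is why such formats cut memory traffic (and hence communication cost) in large MoE training.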
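The tool-calling loop described above (the model emits calls to tools like grep and file edits, and the environment executes them and returns observations) can be sketched as a simple dispatch table. Tool names, signatures, and the call format here are illustrative assumptions, not the actual Cursor Agent protocol:

```python
# Hypothetical tool registry loosely mirroring the toolset described
# above; real agent tools run inside sandboxed environments.
def grep(pattern, path):
    """Search a file for a substring; returns matching lines."""
    with open(path) as f:
        return [line.rstrip("\n") for line in f if pattern in line]

def edit_file(path, old, new):
    """Replace the first occurrence of `old` with `new` in a file."""
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(text.replace(old, new, 1))
    return "ok"

TOOLS = {"grep": grep, "edit_file": edit_file}

def run_agent_step(tool_calls):
    """Execute a batch of model-emitted tool calls and collect the
    observations to feed back into the next model turn."""
    observations = []
    for name, kwargs in tool_calls:
        result = TOOLS[name](**kwargs)
        observations.append({"tool": name, "result": result})
    return observations
```

Training such tool use with RL requires executing these loops against real codebases at scale, which is what drives the need for hundreds of thousands of concurrent sandboxes.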