🤖 AI Summary
The author ran a practical experiment using Admiral (a React back-office framework) to see how three AI coding assistants (Cursor, GitHub Copilot, and Windsurf) handle the same set of plain-text "rules" (essentially .md prompts) for generating admin CRUD code. They compared rule scopes (project-wide, local, global), storage locations (.cursor/rules, .windsurf/rules, .github/copilot-instructions.md), nested-rule support, activation modes (mention, always-on, glob patterns, or agent decision), and length guidance (Cursor recommends staying around ~500 lines; Windsurf notes a ~12,000-character cap; Copilot publishes no official limit). Windsurf can import .cursor rules, while Copilot lacks nested rules and mention-based activation, relying mainly on always-on and glob-triggered instructions.
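For readers unfamiliar with the format, a project rule in Cursor is just a markdown file (an .mdc file under .cursor/rules) whose frontmatter tells the editor when to apply it. Below is a minimal sketch of a glob-activated rule; the description, glob pattern, and rule body are illustrative placeholders, not content taken from the article.

```markdown
---
description: Scaffolding Admiral CRUD pages for a new resource
globs: src/crud/**/*.tsx
alwaysApply: false
---

- Generate list, create, and edit pages with Admiral's CRUD helpers rather than hand-rolled antd forms.
- Keep each resource in its own folder under src/crud/.
- Reuse the project's existing field components; do not introduce new UI primitives.
```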
Results were decisive for rule-driven workflows. Cursor was the most reliable, correctly executing the nested rule-in-rule examples and producing precise code. Windsurf succeeded after a couple of tries, initially misplacing folders but eventually generating a runnable app. Copilot struggled: it read the rule but produced an unusable "Frankenstein" project with incorrect imports (e.g., @pankod/refine-antd), irrelevant framework assumptions (Next.js), empty type files, and mismatched content. The implication: for reproducible, mention-activated, or monorepo-scoped rule workflows, Cursor (and secondarily Windsurf) is the far better choice; Copilot remains useful for global style and naming guidance but is ill-suited to fine-grained, rule-triggered codegen.