🤖 AI Summary
The author put Spec-Driven Development (SDD) into practice using GitHub’s Spec Kit to re-create a removed “circuits” feature in a SvelteKit/Firestore PWA (KartLog). Spec Kit enforced a strict spec-as-source workflow (constitution → specify → plan → tasks → implement → PR), driven via GitHub Copilot with Claude Sonnet 4.5. The tool generated polished, exhaustive artifacts—specs, module contracts, migration plans, and task lists—but at a heavy cost: for the first increment the agents ran ~33½ minutes and produced ~2,577 lines of markdown alongside 689 lines of code, while the author spent ~3.5 hours reviewing; a second GPS increment added ~2,262 lines of markdown, ~300 LOC, ~23½ minutes of agent time, and ~2 hours of review. The implementation step produced working code quickly (~13 minutes to generate ~700 LOC) but initially failed at runtime due to a trivial bug that felt awkward to address purely through spec updates.
This experiment highlights both the promise and the friction of SDD. Positives: automated generation of detailed plans, migrations, and test checklists that could speed coordination and support regulated or large-team workflows. Negatives: extreme verbosity, long agent runtimes, heavy human review, and unclear processes for handling small implementation bugs or iterative fixes—making SDD feel closer to a reinvented waterfall for lightweight projects. For the AI/ML community this suggests SDD may be useful where formal specs and traceability matter, but current tooling needs tighter feedback loops, reduced verbosity, and better code↔spec reconciliation before it’s practical for everyday “vibe engineering.”