Show HN: Kodaii generated a 20K-line FastAPI back end from one prompt (github.com)

🤖 AI Summary
Kodaii’s engine autonomously generated, tested, and deployed a complete Calendly-style booking backend from a single prompt in roughly eight hours. The output is a ~20,489-line async Python codebase (FastAPI) with a Postgres schema (6 tables), modular routers/services/models, background tasks for email notifications, and booking logic that enforces referential integrity and prevents double-booking. The project includes 40 unit tests and 22 integration tests aligned to five user stories, Docker Compose for local/dev deployment, a GitHub Actions CI/CD pipeline (build → test → deploy → rollback), an admin UI, and published OpenAPI docs with live endpoints; both the repo and the live API are publicly available.

The demo is notable because it goes beyond single-file snippets to show end-to-end coherence across planning, code generation, testing, infrastructure, and deployment, essentially turning BDD-style requirements into a running service. For the AI/ML community it marks a milestone in large-scale code generation and automation: it suggests prompt-driven systems can produce production-shaped backends and pipelines, accelerating prototyping and developer workflows. Key technical questions remain (code clarity, architectural choices, security, and maintainability), but the result provides a concrete artifact for evaluating how well generative engines maintain cross-file consistency, test coverage, and deployment-ready infrastructure.
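To make the double-booking claim concrete, here is a minimal sketch of how a FastAPI service layer might reject overlapping bookings inside a single transaction. It is not taken from the Kodaii-generated repo; the model, column, and endpoint names (Booking, host_id, /bookings) and the connection string are assumptions made for the example.

```python
# Hypothetical sketch of overlap-checked booking creation (FastAPI + async SQLAlchemy).
# Names and schema are illustrative, not from the generated codebase.
from datetime import datetime

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import Column, DateTime, Integer, and_, select
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase


class Base(DeclarativeBase):
    pass


class Booking(Base):
    __tablename__ = "bookings"
    id = Column(Integer, primary_key=True)
    host_id = Column(Integer, nullable=False, index=True)
    starts_at = Column(DateTime(timezone=True), nullable=False)
    ends_at = Column(DateTime(timezone=True), nullable=False)


class BookingIn(BaseModel):
    host_id: int
    starts_at: datetime
    ends_at: datetime


engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/booking")
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)
app = FastAPI()


@app.post("/bookings", status_code=201)
async def create_booking(payload: BookingIn) -> dict:
    async with SessionLocal() as session, session.begin():
        # Overlap check and insert share one transaction: two bookings for the
        # same host overlap when one starts before the other ends and vice versa.
        overlap = await session.scalar(
            select(Booking.id)
            .where(
                and_(
                    Booking.host_id == payload.host_id,
                    Booking.starts_at < payload.ends_at,
                    Booking.ends_at > payload.starts_at,
                )
            )
            .limit(1)
        )
        if overlap is not None:
            raise HTTPException(status_code=409, detail="Slot already booked")
        booking = Booking(**payload.model_dump())
        session.add(booking)
        await session.flush()  # populate booking.id before the commit on exit
        return {"id": booking.id}
```

A real deployment would likely also add a database-level guard, such as a Postgres EXCLUDE constraint over a time range, to close the race window between the overlap check and the insert under concurrent requests.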