Building on vibes: Lessons from three years with LLMs (world.hey.com)

🤖 AI Summary
João summarizes three years of hands-on work with LLMs (primarily ChatGPT, Cursor, and Claude Code), describing how these tools have moved him from occasional experiments to shipping multiple small products: RotaHog, La Porra, Support Hero, a ClickEdu API client, and several personal automations. He cites broader industry momentum, such as OpenAI and AMD's recent plan to deploy six gigawatts of GPUs starting in 2026, as context for growing scale and demand.

His typical workflow: start a project-level ChatGPT chat for deep clarifications, spin up a new chat per feature, use PRDs to turn conversations into implementable specs, then switch to Cursor or specialized agents for code generation and implementation.

The practical lessons are concrete and technical: LLMs speed up prototyping but don't replace software engineering. Write clear, repeated context, ask the LLM clarifying questions, and maintain tests, refactors, and architecture docs to avoid hallucinations and product debt. Tooling tips include using Cursor rules or AGENTS.md with glob patterns to scope context by stack (e.g., Python, SQL, React), creating agent rules that generate tests, and prioritizing minimal viable features to prevent scope bloat.

For the AI/ML community, this reinforces a hybrid pattern: LLMs dramatically lower iteration cost and enable highly personalized software, but robust engineering practices and careful prompt/agent design remain essential for reliable, maintainable systems.
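As a concrete illustration of the glob-scoped rules the summary mentions, here is a minimal sketch of what a project rule for a Python backend might look like. The file path, frontmatter fields, globs, and conventions below are illustrative assumptions, not taken from the article; check Cursor's current rules documentation (or use a plain AGENTS.md) for the exact format your setup expects.

```
---
# Hypothetical .cursor/rules/python-backend.mdc; field names assume
# frontmatter-style project rules and may differ in your Cursor version.
description: Conventions for the Python backend
globs: "api/**/*.py, db/**/*.sql"
alwaysApply: false
---

- Use type hints and small, single-purpose functions.
- For every new module under api/, also generate a matching pytest file under tests/.
- Do not add new dependencies without asking first.
```

Scoping rules by glob keeps stack-specific context (Python vs. SQL vs. React) out of unrelated conversations, which is the point the summary attributes to the author: smaller, targeted context reduces hallucinations and keeps the agent's output consistent with the codebase.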