🤖 AI Summary
A small team ran a two-week experiment building a simplified Facebook Ads app ("adbrew") with an AI-first workflow: Remix (React Router v7) as the stack and Claude Code as the LLM. Their daily loop was: define an issue, prompt the model to implement it, iterate, review the generated code, commit, and deploy. After two weeks they abandoned the AI-first approach and reverted to their classic workflow: the codebase had become messy, development felt frustrating, and cleaning up the results cost more time than the AI saved.
The experiment surfaces concrete technical limits relevant to the AI/ML community: the LLM lacked sufficient project-level context and didn't ask clarifying questions, leading to incorrect assumptions and hallucinated API calls. This was especially problematic with the large, poorly typed Facebook Ads SDK, whose surfaces are typed largely as Record<string, any>. Generated code suffered from duplication and poor abstractions (multiple redundant components), interrupted developer flow, and left many cross-cutting corner cases uncovered: the model delivered the easy 80% quickly, but the remaining 20% of robustness work consumed 80% of the human effort. Practical takeaways: LLMs are useful as search assistants, rubber ducks, snippet/test writers, and copy editors, but current models need better memory, tool integration, and mitigation of hallucination and weak typing before they can reliably own feature development. The team will keep using AI for adjunct tasks and favor local LLMs for data control, but not as primary implementers yet.
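To illustrate the typing problem the team hit: when an SDK's return values are typed as Record<string, any>, the compiler accepts any property name, so an LLM-invented field still type-checks and only fails (or silently yields undefined) at runtime. The sketch below is a hypothetical stand-in, not the real Facebook Ads SDK API; the field names are invented for illustration.

```typescript
// Hypothetical stand-in for a loosely typed SDK response.
type AdsApiResponse = Record<string, any>;

// Minimal fake "SDK call" returning an illustrative campaign object.
function getCampaign(): AdsApiResponse {
  return { id: "123", name: "Spring Sale", daily_budget: 5000 };
}

const campaign = getCampaign();

// A real field works as expected...
const budget = campaign.daily_budget; // 5000

// ...but a hallucinated field also compiles without any error or warning;
// it is simply undefined at runtime.
const roas = campaign.projected_roas;

console.log(budget, roas); // 5000 undefined
```

With a precise interface (e.g. `interface Campaign { id: string; name: string; daily_budget: number }`), the access to `projected_roas` would be a compile-time error, giving both humans and LLMs immediate feedback instead of a runtime surprise.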