My Time Vibe Coding (ikouchiha47.github.io)

🤖 AI Summary
An experienced engineer recounts breaking up with "vibe coding" and AI code editors after months of using high-end LLMs (Claude, GPT‑5, Kimi) via a Windsurf account to automate feature work. The author tried to offload core tasks (an Electron/WebView app to capture audio and call an LLM, video support via ffmpeg, and concurrency patterns in Go with a channels-first design) but found that the models could bootstrap prototypes, roughly the first 40–80%, while generating "hacky" or inconsistent implementations that failed to integrate with the existing code.

Iteration latency, context compression, poor understanding of project structure, missing implementations, and a lack of developer tooling in the editors (versus simple scripts, commit hooks, and templates) turned the initial gains into expensive debugging: roughly 30K INR over four months plus months of manual stabilization. The "last 10%" problem dominated: what the LLM claimed was 70–90% done required far more human effort to make production-ready.

For the AI/ML community this is a cautionary, practical takeaway: large models speed up prototyping but do not replace engineering judgment, design intent, or reliable toolchains. Key implications include better model context handling, deterministic developer affordances (codebase awareness, tests, linters, hooks), lower-latency or local inference for fast iteration and privacy, and tighter integration patterns for concurrency and media pipelines. The piece argues that until LLMs reliably handle final integration and correctness checks, traditional disciplined optimization and engineering practices remain essential.
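The summary mentions the author's channels-first concurrency design in Go without showing it. As a rough sketch of what that pattern typically looks like (this is not the author's actual code; `process`, `runPipeline`, and the worker count are hypothetical placeholders), a minimal fan-out/fan-in worker pool built from channels:

```go
package main

import (
	"fmt"
	"sync"
)

// process stands in for the real per-item work (hypothetical).
func process(n int) int { return n * n }

// runPipeline fans jobs out to a fixed pool of workers over one
// channel and fans results back in over another.
func runPipeline(jobs []int, workers int) []int {
	in := make(chan int)
	out := make(chan int)

	// Fan-out: each worker drains the shared input channel.
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range in {
				out <- process(n)
			}
		}()
	}

	// Feed the jobs, then close the input so workers exit.
	go func() {
		for _, n := range jobs {
			in <- n
		}
		close(in)
	}()

	// Close the output once every worker has finished.
	go func() {
		wg.Wait()
		close(out)
	}()

	// Fan-in: collect until the output channel closes.
	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	results := runPipeline([]int{1, 2, 3, 4}, 2)
	sum := 0
	for _, r := range results {
		sum += r
	}
	fmt.Println(sum) // 1+4+9+16 = 30
}
```

The point of a channels-first design is that ownership is explicit: only the feeder closes `in`, only the closer goroutine closes `out`, and the collector simply ranges until close, which is exactly the kind of invariant the post suggests LLMs tended to get subtly wrong.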