My experience with AI as a front end developer (www.frontendundefined.com)

🤖 AI Summary
A front-end developer recounts how LLMs evolved from creative text tools into practical coding assistants and even operating-system agents between 2022 and 2025. Early ChatGPT impressed with text generation but had multilingual and accuracy limits; GitHub Copilot sped up boilerplate and refactors in VS Code, and later Copilot Chat handled bulk Redux→Redux Toolkit migrations.

The real inflection came with Claude Sonnet's "computer use" (via the Cline client): the model could interact with the OS, run wget/unzip, launch apps, fill web forms, and tweak UI settings. It successfully installed Pocketbase and fixed collection permissions, demonstrating agent-like behavior. Terminal-first, open-source Aider (paired with Sonnet 3.5) enabled fast "vibe coding," producing full React screens while still requiring human oversight for bad patterns and API-version mismatches. Sonnet's agent features impressed but were costlier and slower than simpler coding assistants.

Technically and practically, the post highlights key tradeoffs and best practices: LLMs excel at boilerplate, pattern-based refactors, and spec-driven workflows, but they hallucinate and degrade with poor context or obscure libraries. Documentation-driven development (using CLAUDE.md/CONVENTIONS.md or spec files) is emerging as a coordination pattern for agents. Concrete tips: provide balanced, targeted context and representative code examples; prefer mainstream libraries for better model performance; use explain-then-execute and multi-step plans rather than one-shot requests; and expect to review and fix generated code, especially for API drift or inefficient constructs. The takeaway: agents are already productive and transformative, but control, cost, and correctness remain the central challenges.