🤖 AI Summary
A seasoned vim user describes switching from tag- and clang-based completion tools and plugins like YouCompleteMe to LLM-powered coding assistants, first Codeium as a vim/neovim plugin and then the GUI-focused Cursor, and reports a dramatic shift in day-to-day development. LLM completions produce longer, context-aware code, reduce keystrokes, and can trace usage across a messy codebase, localize paper-to-code changes, infer tensor sizes for ML models, generate tailored boilerplate, and even scaffold frontend pages or one-off scripts. Background-agent features can crawl sites and extract data, effectively acting like a junior developer for mundane tasks.
For the AI/ML community this is a practical case study in how LLMs are reshaping workflows: they boost productivity and make debugging and model inspection (e.g., tensor shape inference) easier, but they require careful prompting and human oversight. Problems persist: models sometimes overreach (unwanted refactors), emit non-runnable or unwieldy functions, or produce extraneous changes, so maintainability and correctness remain concerns. The author argues the role of human developers is shifting toward guiding, constraining, and aligning models rather than typing every line, offering both opportunity and unease as LLMs evolve.