🤖 AI Summary
A developer reflecting on using LLMs for coding pinpoints two persistent weaknesses: poor handling of code movement and a reluctance to ask clarifying questions. Rather than actually cutting and pasting, most LLM agents "remember" code blocks and emit write/delete commands to recreate them, which feels fragile compared with an exact copy-paste and risks subtle divergences. Some older models (e.g., Codex) attempted shell-style tricks (sed/awk) to mimic copying, but the results were inconsistent. Equally important, agents tend to make assumptions and brute-force solutions instead of pausing to ask simple clarifying questions; they iterate until they hit a wall rather than soliciting missing requirements or constraints.
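To make the contrast concrete, here is a minimal sketch of the kind of byte-exact "move" primitive the post argues agents lack. This is illustrative, not any agent's actual tooling: the `move_block` name and signature are invented, but the point stands regardless, since the block is sliced out and reinserted verbatim, never retyped, so it cannot silently diverge.

```python
from pathlib import Path

def move_block(path: str, start: int, end: int, dest: int) -> None:
    """Move lines start..end (1-indexed, inclusive) to just after line dest."""
    assert dest < start or dest >= end, "destination falls inside the block"
    lines = Path(path).read_text().splitlines(keepends=True)
    block = lines[start - 1:end]               # the exact original bytes
    rest = lines[:start - 1] + lines[end:]     # the file with the block removed
    if dest >= end:                            # re-aim dest after the removal
        dest -= len(block)
    Path(path).write_text("".join(rest[:dest] + block + rest[dest:]))
```

Compare that with the typical agent behavior of regenerating the block token by token: a provenance-preserving move is both auditable (the diff is a pure relocation) and immune to the model mis-remembering a line.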
These quirks matter because they shape how useful LLMs are in real-world dev workflows: refactors and migrations need precise, auditable moves, and collaborative development relies on clarification to avoid costly mistakes. Technically this points to gaps in tool integration (true clipboard semantics, stateful editor ops), uncertainty modeling, and prompting/behavioral objectives; companies may be optimizing for speed of output (RL/finetuning objectives) over cautious interaction. The result: LLMs behave more like overconfident interns than reliable teammates, suggesting the community should prioritize interactive questioning, better editor primitives, and provenance-aware code ops.
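On the questioning side, one lightweight fix is to expose asking as a first-class tool so the agent has a cheap, explicit alternative to guessing. The sketch below uses the OpenAI-style function-schema format; the `ask_clarification` name and its fields are hypothetical, invented here for illustration rather than taken from any shipped API.

```python
# Hypothetical tool definition: gives the agent an explicit path to ask
# instead of brute-forcing through an ambiguous requirement.
ask_clarification = {
    "type": "function",
    "function": {
        "name": "ask_clarification",
        "description": (
            "Ask the user one short question when a requirement is ambiguous. "
            "Prefer this over guessing when a wrong assumption would be costly."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "question": {"type": "string"},
                "blocking": {
                    "type": "boolean",
                    "description": "True if work cannot proceed until answered.",
                },
            },
            "required": ["question"],
        },
    },
}
```

Whether agents actually call such a tool still depends on training objectives that reward pausing over plowing ahead, which is the post's deeper point.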