Hey LLM, write production-ready code (wejn.org)

🤖 AI Summary
A developer asked LLMs to implement a runtime-reconfigurable sliding-window maximum (SlidingMax) and discovered that adding one sentence, "Write production-ready code.", dramatically improved results. Initial outputs from models like ChatGPT-5 and Claude were conceptually correct but cluttered with dead code, inconsistent APIs, and unclear state handling. After the single extra instruction, the models produced clean, well-documented Python classes that closely match the author's hand-written version and include sensible guards (positive window size), optional timestamps, and immediate pruning when window_size is changed.

Technically, the canonical solution uses a monotonic deque of (timestamp, value) pairs: evict entries older than cutoff = now - window_size from the left, and maintain decreasing values by popping smaller elements from the right before appending the new pair. The front of the deque then always holds the current maximum (see the sketch below). This yields amortized O(1) time per add and O(n) worst-case space, and it supports runtime reconfiguration by trimming stale entries in the window_size setter.

The story highlights two lessons for the AI/ML community: small prompt nudges can substantially change code quality (important for prompt engineering and deployment), but generated "production-ready" code still requires human review for edge cases (monotonic vs. system time, thread-safety, input validation) before shipping.
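To make the deque mechanics concrete, here is a minimal, illustrative Python sketch of the approach the summary describes. It is not the author's code: the class and method names (SlidingMax, add, max), the property-based window_size setter, and the use of time.monotonic() are assumptions chosen for illustration.

```python
import time
from collections import deque
from typing import Optional

class SlidingMax:
    """Time-based sliding-window maximum (illustrative sketch, not the author's code)."""

    def __init__(self, window_size: float):
        if window_size <= 0:
            raise ValueError("window_size must be positive")
        self._window_size = window_size
        # (timestamp, value) pairs; timestamps increase left to right,
        # values strictly decrease left to right, so [0] is the max.
        self._deque = deque()

    @property
    def window_size(self) -> float:
        return self._window_size

    @window_size.setter
    def window_size(self, value: float) -> None:
        if value <= 0:
            raise ValueError("window_size must be positive")
        self._window_size = value
        # Prune immediately so a shrunk window takes effect right away.
        # Assumes entries were timestamped with the same monotonic clock.
        self._evict(time.monotonic())

    def add(self, value, timestamp: Optional[float] = None) -> None:
        now = time.monotonic() if timestamp is None else timestamp
        self._evict(now)
        # Smaller-or-equal values to the right can never be the max again
        # while the new value is still inside the window, so drop them.
        while self._deque and self._deque[-1][1] <= value:
            self._deque.pop()
        self._deque.append((now, value))

    def max(self, timestamp: Optional[float] = None):
        now = time.monotonic() if timestamp is None else timestamp
        self._evict(now)
        return self._deque[0][1] if self._deque else None

    def _evict(self, now: float) -> None:
        # Drop entries older than the window cutoff from the left.
        cutoff = now - self._window_size
        while self._deque and self._deque[0][0] < cutoff:
            self._deque.popleft()

if __name__ == "__main__":
    sm = SlidingMax(window_size=5.0)
    sm.add(3)
    sm.add(7)
    sm.add(2)
    print(sm.max())  # 7, until the (t, 7) entry ages out of the window
```

The two invariants to notice: timestamps increase and values strictly decrease from left to right, which is why the leftmost entry is always the in-window maximum and each element is pushed and popped at most once (hence amortized O(1) per add).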