🤖 AI Summary
A developer at Mondoo built a VSCode extension to replace flaky Copilot completions for writing Mondoo Query Language (MQL), creating a custom inline autocomplete driven by domain-specific snippets and lightweight models. Frustrated with Copilot's intrusive and often incorrect inline suggestions (and the inability to swap out its underlying gpt-4.1-copilot model), they used the vscode.InlineCompletionItemProvider API to inject alternative completions. Instead of stuffing the full raw MQL context (~20k tokens) into the model, they implemented a dynamic context selector that picks relevant boilerplate patterns and snippets based on the open filename and keywords (e.g., platform checks like Linux). The snippet library feeds concise, focused context to the generator, reducing token usage and improving suggestion quality.
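Concretely, this maps onto VSCode's inline completion API. Here is a minimal sketch of the pattern, assuming a hypothetical snippet table, a placeholder `generateMql` model call, and an `mql` language id; the extension's actual snippet library, backend, and prompt are not shown in the summary.

```typescript
import * as vscode from 'vscode';

// Hypothetical snippet library: keywords mapped to MQL boilerplate patterns.
const SNIPPETS: Record<string, string> = {
  linux: 'platform.family.contains("linux")',
  ssh: 'sshd.config.params["PermitRootLogin"] == "no"',
};

// Dynamic context selector: pick only the snippets relevant to the open
// filename and the text before the cursor, instead of sending the full
// ~20k-token raw MQL context to the model.
function selectContext(doc: vscode.TextDocument, prefix: string): string {
  const haystack = `${doc.fileName} ${prefix}`.toLowerCase();
  return Object.entries(SNIPPETS)
    .filter(([keyword]) => haystack.includes(keyword))
    .map(([, snippet]) => snippet)
    .join('\n');
}

// Placeholder for the call to a lightweight completion model.
async function generateMql(context: string, prefix: string): Promise<string> {
  return '/* model-generated MQL */';
}

const provider: vscode.InlineCompletionItemProvider = {
  async provideInlineCompletionItems(document, position) {
    // Everything before the cursor is the completion prefix.
    const prefix = document.getText(
      new vscode.Range(new vscode.Position(0, 0), position)
    );
    const context = selectContext(document, prefix);
    const completion = await generateMql(context, prefix);
    return [new vscode.InlineCompletionItem(completion)];
  },
};

export function activate(ctx: vscode.ExtensionContext) {
  ctx.subscriptions.push(
    vscode.languages.registerInlineCompletionItemProvider(
      { language: 'mql' }, // assumed language id
      provider
    )
  );
}
```

Keeping the selector a cheap keyword match (rather than another model call) keeps latency low, which matters for inline completion where suggestions must appear as the user types.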
Technically, the author benchmarked many LLMs and tool configurations: Claude Sonnet 4 produced the best MQL, Gemini improved ~60% when tools and context were enabled, and Grok Code Fast 1 was favored for cheap, fast coding tasks. They observed that LLM-driven generation can help name new resource fields (useful when models lack training data for the domain) but hit a UX constraint: VSCode still surfaces Copilot's own completions first, so users must cycle through the suggestion list to reach the custom one. The project demonstrates a practical pattern for domain-specific autocompletion: combine inline providers, dynamic snippet context, and lightweight models to balance cost, latency, and accuracy, while recognizing the integration limits of proprietary editor assistants.