🤖 AI Summary
A Show HN project, Distil-localdoc.py, ships a locally runnable "SLM assistant" that automatically generates Google-style docstrings for Python code using a distilled Qwen3 0.6B model. The repo includes a small workflow: install Ollama, create a venv and install the required packages, download the tuned model from Hugging Face, register it with Ollama, then run localdoc.py --file your_script.py to produce a _documented version of your file. The tool parses files with the AST, finds functions and methods without docstrings (skipping dunder methods), preserves existing docstrings and code, and emits full Args/Returns/Raises sections and examples, with support for type hints and async functions.
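The AST-based detection step described above can be sketched with the standard library alone. This is a hypothetical reconstruction, not the project's actual code: it walks the parse tree, collects functions and methods that lack a docstring, and skips dunder names, mirroring the behavior the summary describes.

```python
import ast

# Hypothetical sample input, standing in for your_script.py.
SOURCE = '''
def add(a, b):
    return a + b

def documented():
    """Already documented; must be preserved untouched."""
    return None

class C:
    def __init__(self):
        pass

    def method(self):
        pass

async def fetch():
    pass
'''

def undocumented_functions(source: str) -> list[str]:
    """Return names of functions/methods lacking docstrings, skipping dunders."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Skip dunder methods such as __init__, as the tool does.
            if node.name.startswith("__") and node.name.endswith("__"):
                continue
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

print(undocumented_functions(SOURCE))  # add, fetch, and method need docstrings
```

In the real tool, each flagged function would then be sent to the local model for docstring generation, while already-documented code passes through unchanged.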
Technically, the student model was trained by distillation from a GPT‑OSS‑120B teacher using 28 seed functions plus 10,000 synthetic examples; evaluation on 250 held‑out examples with an LLM‑as‑judge shows the tuned 0.6B reaches ~0.76 accuracy vs the teacher's ~0.81 (base Qwen3 0.6B: ~0.55). The main selling point is privacy and cost: everything runs locally with no cloud API calls, reducing IP exposure, compliance risk, and per‑token charges. Limitations: it only adds missing docstrings (it won't rewrite existing ones), and Google style is currently the only supported format; planned work includes docstring updates, git integration, and Sphinx/MkDocs tooling.