Show HN: ShellAI – Local Terminal Assistance with SLM (github.com)

🤖 AI Summary
ShellAI is a new open-source CLI that brings local language-model assistance directly into your terminal: you invoke a short command (ai), describe the action you want, and it generates the shell commands for you without calling any external API. Unlike wrappers that require a separate inference server, ShellAI runs inference locally with small language models (SLMs), so data never leaves your machine. That makes it attractive for privacy-conscious DevOps, security-sensitive workflows, and offline development, while keeping the whole interaction inside the shell.

Technically, ShellAI is installed from the repo (git clone … && pip install -e .) and uses Hugging Face to download models (hf login). The project defaults to a finetuned Gemma model (micrictor/gemma-3-270m-it-ft-bash), but you can point it at other local SLMs such as google/gemma-3-270m-it. Local inference is built in, but it requires relaxing ptrace protections (echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope) so processes can inspect each other, which is a clear security tradeoff. Example runs show command generation in roughly 5–11 s on the author's machine.

The project is lightweight and practical, but users should weigh the convenience of local command synthesis against the ptrace security implications and the friction of obtaining Gemma model access due to licensing.
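For concreteness, here is a minimal setup sketch assembled from the steps quoted above. The repository URL is a placeholder (the summary elides it), the hf login invocation is reproduced as quoted, and the ptrace change weakens a system-wide hardening setting, so the usual default is restored at the end:

```bash
# Install ShellAI from source. The clone URL below is a placeholder;
# the summary does not spell out the full repository path.
git clone https://github.com/<author>/shellai
cd shellai
pip install -e .

# Authenticate with Hugging Face so model downloads can proceed.
# Gemma models are gated, so accept the license on the model page first.
hf login

# ShellAI's local inference requires relaxing the Yama ptrace scope
# so processes can inspect each other. This weakens a system-wide
# hardening setting; on most distros the default is 1.
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope

# To restore the common default afterwards:
echo 1 | sudo tee /proc/sys/kernel/yama/ptrace_scope
```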
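And a hedged usage sketch: only the ai entry point itself is taken from the post; the prompt text is invented for illustration, and the summary does not document how a non-default model is selected, so no model flag is shown:

```bash
# Ask ShellAI to synthesize a shell command from a natural-language
# request. Generation took ~5-11 s in the author's example runs.
ai "list the 10 largest files under the current directory"
```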