🤖 AI Summary
qqqa is a compact, stateless CLI that brings LLM assistance directly into your shell through two small binaries: `qq`, a single-shot "quick question" prompt, and `qa`, a "quick agent" that can safely read or write files, or propose and execute a single command with confirmation. It ships with provider profiles for Groq (openai/gpt-oss-20b is recommended for very fast, low-cost inference) and OpenAI (gpt-5-mini by default), supports both streaming and non-streaming calls, renders responses by mapping simple XML-like tags to ANSI colors, and is configured via ~/.qq/config.json or environment variables (GROQ_API_KEY / OPENAI_API_KEY). Prebuilt binaries are available for macOS and Linux targets, and an interactive init flow sets up the provider and API keys.
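The summary mentions configuration via ~/.qq/config.json but does not document its schema. The fragment below is therefore a hypothetical sketch: the field names (`provider`, `model`, `api_key`) are assumptions, not qqqa's actual format.

```json
{
  "provider": "groq",
  "model": "openai/gpt-oss-20b",
  "api_key": "gsk_..."
}
```

Per the summary, the API key can alternatively come from the GROQ_API_KEY or OPENAI_API_KEY environment variable, and the interactive init flow can generate the file for you.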
qqqa emphasizes Unix-style composability and safety. Every invocation is stateless and reproducible (no hidden chat memory), and both binaries work with pipes and files for contextual hints. qa enforces several safety rails:

- at most one tool step per run;
- a confirmation prompt before executing any command (or `-y` to auto-approve);
- a command allowlist that blocks destructive patterns and risky constructs;
- file reads capped at 1 MiB, with path-escape prevention;
- a 120-second command timeout.

For developers and SREs this means fast, scriptable LLM help with low friction and predictable behavior, suitable for CI, quick shell workflows, and embedding into scripts where reproducibility, speed, cost, and safety matter.
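The command-allowlist idea above can be illustrated with a minimal sketch. This is not qa's actual implementation, and the deny patterns below are invented for the example; it only shows the general shape of screening a proposed command against destructive patterns before execution.

```python
import re

# Hypothetical deny-patterns, illustrating the kind of checks a command
# allowlist might apply; qa's real rules are not reproduced here.
DENY_PATTERNS = [
    r"\brm\s+-rf\b",   # recursive force delete
    r"\bmkfs\b",       # filesystem formatting
    r">\s*/dev/sd",    # redirecting output onto raw block devices
    r"\bsudo\b",       # privilege escalation
]

def is_allowed(command: str) -> bool:
    """Return False if the command matches any destructive pattern."""
    return not any(re.search(p, command) for p in DENY_PATTERNS)

print(is_allowed("ls -la"))         # harmless listing passes
print(is_allowed("rm -rf /tmp/x"))  # destructive delete is rejected
```

A real tool would likely combine such pattern checks with the confirmation prompt and timeout described above, so that even an allowed command still requires user approval unless `-y` is given.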