Llms.py – Local ChatGPT-Like UI and OpenAI Chat Server (servicestack.net)

🤖 AI Summary
llms.py has added a lightweight, ChatGPT-like web UI that unifies access to any OpenAI-compatible LLM—local or cloud—while keeping a tiny dependency surface (a single aiohttp Python dependency). Install via pip install llms-py and launch a local /v1/chat/completions-compatible server + UI with llms --serve 8000 (or grab the single-file llms.py for client/server use). Config lives in ~/.llms/llms.json (providers/models) and ~/.llms/ui.json (system prompts/defaults).

All chat data is stored locally in the browser’s IndexedDB with import/export, multiple independent DBs by running on different ports, and no tracking or signups.

Technically notable: the UI supports multimodal inputs (image, audio, file/PDF) for vision/audio-capable models, rich Markdown and syntax highlighting with copy-code UX, search history, provider enable/disable and runtime fallback (providers invoked in defined order—free tiers, local, then premium), plus autocomplete for models/system prompts and a curated library of 200+ system prompts. Its async aiohttp server/client design makes it fast, developer-friendly, easy to drop into environments like ComfyUI without extra deps, and useful for privacy-first experimentation and hybrid workflows mixing local models and paid APIs.
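Because the server speaks the standard OpenAI chat-completions protocol, any generic client can talk to it. A minimal sketch using only the Python standard library—the model name and port here are assumptions, not taken from llms.py's docs; adjust them to match your ~/.llms/llms.json config:

```python
import json
import urllib.request


def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,  # assumed model name; use one enabled in llms.json
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str, model: str = "llama3",
         base_url: str = "http://localhost:8000") -> str:
    """Send a chat request to a local OpenAI-compatible server
    (e.g. one started with `llms --serve 8000`)."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the reply in choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

The same request shape works whether the configured provider behind the server is a free tier, a local model, or a premium API, which is what makes the runtime fallback chain transparent to callers.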