🤖 AI Summary
Llumen is a new lightweight, self-hostable LLM chat app (Rust backend + SvelteKit frontend) that aims to make running a full chat stack trivial: a single OpenRouter API key is all you need to get model-powered chat, without separate keys for OCR, search, embeddings, or image generation. The project emphasizes tiny footprint and speed — startup under 1 second and disk use under 100 MiB — and ships as a Windows executable, Linux binary, or multi-stage Docker image that serves static files and runs the server. Features include Markdown rendering with code and math, multiple chat modes (including web-search-enabled), and work-in-progress deep-research/agentic modes; a “reasoning-proxy” can bridge to standard OpenAI endpoints to unlock search/OCR features.
Technically the repo separates backend/ (Rust) and frontend/ (SvelteKit), and the Docker image is built in multiple stages to keep it small. To run it, supply an OpenRouter API key via API_KEY; by default DATABASE_URL points to a SQLite database, BLOB_URL to a redb blob store, and the container listens on port 80. Two operational notes: prebuilt binaries are produced per release and may be stale, and the default admin credentials ship as admin / P@88w0rd, so change them immediately. For the AI/ML community, Llumen offers a minimal, fast self-hosting option for experiments, demos, and edge deployments where simplicity, small footprint, and single-key integration matter.
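To make the single-key setup concrete, here is a minimal sketch of launching the Docker image. It relies only on the details in the summary (API_KEY required, SQLite and redb defaults, port 80 inside the container); the image name, tag, and volume path are placeholders, so check the project's README for the real invocation.

```sh
# Minimal sketch, not the project's documented command: the image reference,
# tag, and volume path below are placeholders/assumptions. Per the summary,
# only API_KEY is required; DATABASE_URL (SQLite) and BLOB_URL (redb blob
# store) have defaults, and the container listens on port 80.
docker run -d \
  --name llumen \
  -p 8080:80 \
  -e API_KEY="sk-or-your-openrouter-key" \
  -v llumen-data:/data \
  ghcr.io/example/llumen:latest
```

After the first login with the shipped admin / P@88w0rd credentials, change the password before exposing the instance anywhere.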