Loyca.ai – An open-source, local-first AI assistant with contextual awareness (github.com)

🤖 AI Summary
Loyca.ai is an open-source, local-first desktop AI assistant that "sees" your screen and decides when to help, rather than interrupting like Clippy. Built with a Tauri 2.0 + SvelteKit frontend and a Rust backend, it runs on Windows, macOS, and Arch GNOME (with Wayland caveats) and stores all data locally in SQLite (rusqlite + sqlite-vec). It supports local inference endpoints (LM Studio, Ollama) as well as OpenAI-API-compatible services, and recommends qwen/qwen2.5-vl-7b for vision and openai/gpt-oss-20b for chat. You can also run an MCP server that exposes tools (get-user-context, semantic-search-screenshots, ocr_screenshot) for integrations or chat use.

Technically notable is its pipeline: a Vision-Language Model analyzes focused windows via few-shot image examples and OCR and extracts intent, state, and keywords, then a Contextual Bandit (NeuralUCB-Diagonal, implemented with Hugging Face's candle) decides whether to prompt the user. Image-change heuristics (RMS similarity via image-compare) and Jaccard checks on keywords throttle analyses to avoid spurious interruptions. Rewards are computed from user reactions (accept/reject/ignore) and inferred user state (flowing, struggling, idle, etc.), enabling online, personalized interruption policies.

The project emphasizes privacy, adaptive behavior, and extensibility; planned features include memory, avatar customization, improved chat sessions, and standardized app-title handling.
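The MCP tools named above are invoked through the Model Context Protocol's standard `tools/call` JSON-RPC method. As a rough sketch (the tool names come from the summary; the argument payloads are hypothetical, since the project's schemas aren't documented here), a client request could be built like this:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as defined by the
    Model Context Protocol. Argument contents are hypothetical."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. ask the assistant's MCP server for the current user context
payload = mcp_tool_call(1, "get-user-context", {})
```

The same shape would apply to `semantic-search-screenshots` or `ocr_screenshot`, with tool-specific arguments.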
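The interruption decision can be pictured with a toy diagonal-UCB bandit. This is a deliberate simplification: the real NeuralUCB-Diagonal uses a neural network (via candle, in Rust) and its gradients in place of raw features, and all names and thresholds below are invented for illustration.

```python
import math

class DiagonalUCB:
    """Toy diagonal-UCB bandit: a linear reward estimate plus a
    per-feature confidence bonus. NeuralUCB-Diagonal replaces the
    linear model with a neural net and the features with its gradients."""

    def __init__(self, dim: int, alpha: float = 1.0, lr: float = 0.1):
        self.w = [0.0] * dim   # reward-model weights
        self.z = [1.0] * dim   # diagonal of the (approximate) design matrix
        self.alpha, self.lr = alpha, lr

    def ucb(self, x: list[float]) -> float:
        mean = sum(wi * xi for wi, xi in zip(self.w, x))
        bonus = self.alpha * math.sqrt(
            sum(xi * xi / zi for xi, zi in zip(x, self.z)))
        return mean + bonus  # optimism in the face of uncertainty

    def should_interrupt(self, x: list[float], threshold: float = 0.5) -> bool:
        return self.ucb(x) > threshold

    def update(self, x: list[float], reward: float) -> None:
        # One SGD step on the squared error, then grow the diagonal
        # so the confidence bonus shrinks for frequently seen contexts.
        err = reward - sum(wi * xi for wi, xi in zip(self.w, x))
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.z = [zi + xi * xi for zi, xi in zip(self.z, x)]
```

A fresh bandit is optimistic (high bonus) and interrupts; after repeated negative or zero rewards in a given context, the bonus and estimate both shrink and it stays quiet, which is the online-personalization behavior the summary describes.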
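The throttling idea (RMS image change plus Jaccard keyword overlap) can be sketched in a few lines. The thresholds and function names here are hypothetical stand-ins, not the project's actual values; the real screen comparison uses the Rust image-compare crate.

```python
def rms_difference(a: list[int], b: list[int]) -> float:
    """Root-mean-square difference between two grayscale frames
    (pixel values 0-255), flattened to lists of equal length."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity of two keyword sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def needs_reanalysis(prev_frame: list[int], frame: list[int],
                     prev_kw: set[str], kw: set[str],
                     rms_thresh: float = 10.0,
                     jac_thresh: float = 0.8) -> bool:
    """Re-run the expensive VLM analysis only when the screen changed
    enough or the extracted keywords drifted. Thresholds are illustrative."""
    return (rms_difference(prev_frame, frame) > rms_thresh
            or jaccard(prev_kw, kw) < jac_thresh)
```

If neither signal fires, the pipeline skips the analysis step entirely, which is what keeps the assistant from re-evaluating an unchanged screen.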
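Reward shaping from reaction and state might look roughly like the following. The specific numbers are invented for illustration; the summary only states that rewards depend on the reaction (accept/reject/ignore) and the inferred user state.

```python
# Hypothetical reward table; the project's actual values may differ.
REACTION_REWARD = {"accept": 1.0, "ignore": -0.2, "reject": -1.0}
STATE_WEIGHT = {"struggling": 1.5, "idle": 1.0, "flowing": 0.5}

def reward(reaction: str, state: str) -> float:
    """Scale the base reward by user state: gains are amplified when
    the user was struggling, and penalties are amplified when the
    interruption broke a flow state."""
    base = REACTION_REWARD[reaction]
    w = STATE_WEIGHT.get(state, 1.0)
    return base * w if base >= 0 else base / w
```

Feeding these rewards back into the bandit's update step is what lets the interruption policy adapt to an individual user over time.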