🤖 AI Summary
FileChat is a read-only AI coding assistant that indexes your local project and lets you "chat" with the codebase to ask questions, get explanations, and surface improvements — all without modifying files. It builds a searchable project index, maintains ChatGPT-like per-directory chat histories, and auto-detects file changes so responses stay grounded in the latest code. The tool is open-source (GitHub) and still early-stage, so expect bugs and rapid development.
Technically, FileChat runs on Python 3.12+, stores its configuration at ~/.config/filechat.json, and lets you point it at any OpenAI-compatible LLM provider or a self-hosted server via an API key. It supports local embedding acceleration on CUDA (NVIDIA) or XPU (Intel Arc), falling back to CPU if neither is configured, and exposes config options for maximum file size, ignored directories, allowed file suffixes, and index storage location. Installation is straightforward (pip install filechat or uv tool install filechat), and GPU support can be enabled via uv sync --extra cuda or uv sync --extra xpu. For the AI/ML community this matters because it offers a privacy-preserving, extensible way to apply LLMs to local codebases, enabling bespoke workflows, self-hosted models, and hardware-accelerated embeddings for faster, fully local code understanding.
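To make the configuration surface concrete, here is a sketch of what ~/.config/filechat.json might contain. The option categories (provider/API key, max file size, ignored directories, allowed suffixes, index storage) come from the summary above, but the exact key names and value formats are assumptions — check the project's README for the real schema:

```json
{
  "provider": {
    "base_url": "https://api.openai.com/v1",
    "api_key": "sk-...",
    "model": "gpt-4o-mini"
  },
  "max_file_size_kb": 512,
  "ignored_directories": [".git", "node_modules", ".venv"],
  "allowed_suffixes": [".py", ".md", ".toml"],
  "index_path": "~/.local/share/filechat/index",
  "embedding_device": "cuda"
}
```

The "base_url" field is what would let a self-hosted OpenAI-compatible server (e.g. a local inference endpoint) stand in for a hosted provider, and "embedding_device" illustrates the CUDA/XPU/CPU choice described above.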