🤖 AI Summary
Poe is a new desktop AI chat app (Show HN) built by "vibe coding" to feel like a CLI coding agent, but with a GUI. It targets ad-hoc inference inside a project: the working directory is visible in the window, read/write/find utilities are restricted to that directory, and you can fork sessions, edit or delete history, and hot-swap prompts. Key workflow primitives are supported: rolling or halting context windows to manage tokens and state, defaults that "Ask" before write operations, and optional local MCP server support. Poe currently talks to the Ollama and LM Studio APIs, and the author plans to expand provider support.
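The "rolling" context behavior described above can be sketched as dropping the oldest messages once a token budget is hit. This is an illustrative sketch, not Poe's actual implementation; the 4-characters-per-token heuristic and the `roll_context` helper are assumptions:

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token
    # (an assumption, not Poe's real tokenizer).
    return max(1, len(text) // 4)

def roll_context(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest messages until the history fits the token
    budget, always keeping at least the most recent message."""
    kept: list[dict] = []
    total = 0
    for msg in reversed(messages):          # walk newest-first
        cost = estimate_tokens(msg["content"])
        if kept and total + cost > budget:  # budget exceeded: stop rolling back
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))             # restore chronological order
```

A "halting" window would instead refuse new turns once the budget is reached, rather than silently discarding old ones.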
For AI/ML practitioners this matters because Poe prioritizes local, project-scoped experimentation and reproducibility: you can branch conversational sessions, control context windows for prompt engineering, and safely confine file access, which suits iterative coding and model-in-the-loop development. Practical implications include easier debugging of prompt and context effects, token-budget management, and the ability to hook into local model servers. The app is an Electron + Vite/React dev build (`npm run dev`) with packaging scripts for macOS/Windows/Linux; the repo is rough but open to contributions, and planned features include terminal command injection, popup editors for suggested edits, message queuing, and MCP pre/post hooks.
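Hooking into the local model servers mentioned above amounts to POSTing JSON to their HTTP endpoints. A minimal sketch of the two request shapes, using the documented defaults (Ollama's native `/api/chat` on port 11434; LM Studio's OpenAI-compatible `/v1/chat/completions` on port 1234) rather than Poe's actual client code:

```python
import json

def ollama_chat_request(model: str, messages: list[dict]) -> tuple[str, bytes]:
    # Ollama's native chat endpoint (default port 11434).
    url = "http://localhost:11434/api/chat"
    body = json.dumps({"model": model, "messages": messages, "stream": False})
    return url, body.encode()

def lmstudio_chat_request(model: str, messages: list[dict]) -> tuple[str, bytes]:
    # LM Studio serves an OpenAI-compatible API (default port 1234).
    url = "http://localhost:1234/v1/chat/completions"
    body = json.dumps({"model": model, "messages": messages, "stream": False})
    return url, body.encode()
```

Sending either request (e.g. with `urllib.request.urlopen`, or `fetch` from Electron's renderer) returns a JSON completion, provided the corresponding server is running locally.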