🤖 AI Summary
AI Counsel is an open-source MCP server that implements true multi-model deliberation — not just parallel aggregation — letting models actually debate, see each other’s responses, and iteratively refine positions across multiple rounds. Live examples show cloud models (Claude Sonnet, GPT‑5 Codex, Gemini) converging on a hybrid API architecture with 0.82–0.95 confidence, and local-only runs where models changed votes mid-debate. The system provides full transcripts, AI-generated summaries, structured voting (votes + 0.0–1.0 confidence), automatic early stopping when opinions stabilize, and a decision-graph memory that injects relevant past debates to speed convergence.
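A minimal sketch (Python, with hypothetical names) of the structured voting and early-stopping behavior described above: each model submits an option with a 0.0–1.0 confidence, and deliberation can halt once no model changes position between rounds. The schema and stability threshold here are illustrative assumptions, not the project's actual API.

```python
from dataclasses import dataclass

# Hypothetical shapes -- the real server's schema may differ.
@dataclass
class Vote:
    model: str          # e.g. "claude-sonnet", "gemini"
    option: str         # the position this model is voting for
    confidence: float   # 0.0-1.0, as described in the summary
    rationale: str = ""

def opinions_stabilized(prev: list[Vote], curr: list[Vote], eps: float = 0.05) -> bool:
    """Return True when no model changed its option and confidence barely
    moved between rounds -- the condition that triggers early stopping."""
    prev_by_model = {v.model: v for v in prev}
    for vote in curr:
        before = prev_by_model.get(vote.model)
        if before is None or before.option != vote.option:
            return False
        if abs(before.confidence - vote.confidence) > eps:
            return False
    return True
```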
Technically, AI Counsel supports mixed adapters (CLI for Claude/Codex/Droid, HTTP for Ollama/LM Studio/OpenRouter), evidence-based tooling (read_file, search_code, list_files, run_command) to ground decisions, semantic grouping of similar vote options (similarity ≥ 0.70), and convergence states (Converged ≥85%, Refining 40–85%, Diverging <40%, Impasse); a sketch of the state classification follows below. It's fault-tolerant, exports Markdown transcripts, and can run fully on-premises with local models (Ollama/llama.cpp/LM Studio) to avoid API costs and protect data. Models of roughly 7B–8B parameters or larger are recommended for reliable structured outputs. For teams making architecture, testing, or code-review decisions, this offers auditable, evidence-backed multi-agent consensus with configurable thresholds, a mixed cloud/local workflow, and cost-saving auto-convergence.
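The convergence thresholds quoted above translate directly into a simple classifier. This sketch assumes agreement is measured as the share of models behind the leading (semantically grouped) option, and that an impasse is declared after several unchanged rounds; both are illustrative assumptions rather than the project's actual metric.

```python
from collections import Counter

def convergence_state(votes: list[str], rounds_without_change: int = 0,
                      impasse_after: int = 3) -> str:
    """Classify a deliberation round using the thresholds from the summary:
    Converged >= 85% agreement, Refining 40-85%, Diverging < 40%, and
    Impasse when positions stop moving without reaching consensus."""
    if not votes:
        return "Diverging"
    top_share = Counter(votes).most_common(1)[0][1] / len(votes)
    if top_share >= 0.85:
        return "Converged"
    if rounds_without_change >= impasse_after:
        return "Impasse"  # stuck below consensus (assumed rule)
    return "Refining" if top_share >= 0.40 else "Diverging"
```

For example, `convergence_state(["hybrid API", "hybrid API", "hybrid API", "pure REST"])` returns "Refining" (75% agreement), while four identical votes would return "Converged".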