Show HN: An Open-Source, Local-First Agent Framework in Rust (github.com)

🤖 AI Summary
AutoAgents is an open-source, local-first multi-agent framework written in Rust that lets developers build high-performance autonomous agents powered by LLMs and the Ractor actor framework. It provides a modular, provider-agnostic stack (OpenAI, Anthropic, Ollama, ONNX, Mistral-rs, Burn, etc.), multiple executors including ReAct (reasoning + acting) with streaming, type-safe structured outputs with JSON Schema validation, configurable memory backends (sliding-window today, with persistent storage planned), and procedural macros that simplify tool and agent definitions. Notable features include native WASM compilation for running agent orchestration and sandboxed tool execution directly in browsers, edge deployment via local ONNX inference, tokio-based async concurrency, and type-safe pub/sub multi-agent orchestration, all accessible via a CLI that runs YAML-defined workflows or serves them over HTTP.

For the AI/ML community this matters because it combines Rust's performance and safety with flexible LLM provider support and local-first execution paths, enabling hybrid cloud/edge/browser deployments that improve privacy, latency, and reproducibility. Technical implications include safer, type-checked agent outputs, pluggable executors and memory layers for experimentation or production use, and a WASM runtime that reduces the attack surface of tool execution. The project is geared toward researchers and engineers who need scalable, composable agent architectures that can run across cloud, edge, and client environments.
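To make the "type-safe structured outputs with JSON Schema validation" idea concrete, here is a minimal sketch using the stock serde and schemars crates rather than AutoAgents' actual API (which may differ): the agent's output type derives a JSON Schema that can be sent to the provider to constrain generation, and the model's reply is deserialized into that type so malformed output fails at the boundary. The `WeatherReport` type and the sample reply are hypothetical.

```rust
// Sketch only: illustrates schema-constrained, type-checked agent output
// with serde + schemars; not AutoAgents' real interface.
use schemars::{schema_for, JsonSchema};
use serde::Deserialize;

#[derive(Debug, Deserialize, JsonSchema)]
struct WeatherReport {
    location: String,
    temperature_c: f64,
    summary: String,
}

fn main() {
    // The derived JSON Schema would be handed to the LLM provider to
    // constrain its generation to this shape.
    let schema = schema_for!(WeatherReport);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());

    // A hypothetical LLM reply, validated by parsing into the typed struct;
    // a reply that does not match the schema fails here instead of downstream.
    let reply = r#"{"location":"Berlin","temperature_c":21.5,"summary":"mild and sunny"}"#;
    let report: WeatherReport = serde_json::from_str(reply).expect("reply did not match schema");
    println!("{report:?}");
}
```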
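Similarly, the "type-safe pub/sub multi-agent orchestration" can be pictured as agents exchanging a shared, typed message enum over a topic. The sketch below uses a plain tokio broadcast channel to show the pattern under that assumption; the `AgentMessage` enum, the worker, and the orchestrator are illustrative stand-ins, not AutoAgents' actual orchestration primitives.

```rust
// Sketch only: typed pub/sub between two "agents" over a tokio broadcast
// topic; an LLM call is stubbed out with an echo.
use tokio::sync::broadcast;

#[derive(Clone, Debug)]
enum AgentMessage {
    Task { id: u32, prompt: String },
    Result { id: u32, answer: String },
}

#[tokio::main]
async fn main() {
    let (tx, _) = broadcast::channel::<AgentMessage>(16);

    // A worker agent subscribes to the topic and answers the first task it sees.
    let mut worker_rx = tx.subscribe();
    let worker_tx = tx.clone();
    let worker = tokio::spawn(async move {
        while let Ok(msg) = worker_rx.recv().await {
            if let AgentMessage::Task { id, prompt } = msg {
                // A real agent would run an executor / LLM call here.
                let _ = worker_tx.send(AgentMessage::Result {
                    id,
                    answer: format!("handled: {prompt}"),
                });
                break;
            }
        }
    });

    // An orchestrator publishes a task and waits for the matching result.
    let mut orch_rx = tx.subscribe();
    tx.send(AgentMessage::Task { id: 1, prompt: "summarize the README".into() })
        .unwrap();
    while let Ok(msg) = orch_rx.recv().await {
        if let AgentMessage::Result { id, answer } = msg {
            println!("agent {id} replied: {answer}");
            break;
        }
    }
    worker.await.unwrap();
}
```

Because every subscriber sees the same typed messages, adding more agents is just a matter of spawning more subscribers on the topic; the compiler, not string parsing, enforces the message shape.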