Pipecat alternative – distributed AI Agent orchestrator in Rust (github.com)

🤖 AI Summary
MeshAG is an open-source, Rust-built distributed orchestrator for real-time conversational AI that stitches STT, LLM, and TTS services together into an event-driven microservices pipeline. It uses NATS JetStream for low-latency event streaming, Valkey (Redis-compatible) for session/config storage with TTL cleanup, and Daily.co (WebRTC) for transport. Each service (STT on 8081, LLM on 8082, TTS on 8083, transport on 8084) runs independently with pluggable connector traits, so you can route calls to OpenAI, Anthropic, Deepgram, ElevenLabs, self-hosted models, and more. Sessions are created via HTTP (e.g., POST /sessions/with-config), services self-route based on the stored config, and the repo provides Docker Compose, health checks, metrics, and horizontal-scaling patterns.

For the AI/ML community this matters because it offers a production-oriented reference architecture for low-latency, multi-agent conversational systems that mixes providers and custom models cleanly. Technical implications: JetStream enables high-throughput, ordered event routing across agents; Valkey supports dynamic session configs and TTL garbage collection; and connector traits make vendor switching and custom integrations straightforward (implement SttConnector/LlmConnector/TtsConnector and register in main.rs). Operationally you need NATS and Valkey/Redis running, plus API keys for your chosen providers (OpenAI, Deepgram, ElevenLabs). MeshAG is useful for building scalable real-time voice/video assistants, researching multi-agent pipelines, or running hybrid provider deployments, though it still demands the standard production work of key management, monitoring, and resource orchestration.
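As a rough illustration of the connector-trait pattern the summary describes: the trait names (SttConnector/LlmConnector/TtsConnector) and the "register in main.rs" step come from the repo, but the method signature, the SelfHostedLlm struct, the endpoint URL, and the HashMap-based registration below are illustrative assumptions, not MeshAG's actual API.

```rust
// Sketch only. Assumed crates: async-trait, reqwest (with the "json" feature), serde_json.
use async_trait::async_trait;
use std::collections::HashMap;

#[derive(Debug)]
pub struct ConnectorError(pub String);

/// Assumed shape of the LLM connector trait: take a transcript chunk, return a reply.
#[async_trait]
pub trait LlmConnector: Send + Sync {
    async fn complete(&self, session_id: &str, prompt: &str) -> Result<String, ConnectorError>;
}

/// Hypothetical connector that forwards requests to a self-hosted model endpoint.
pub struct SelfHostedLlm {
    endpoint: String,
    client: reqwest::Client,
}

#[async_trait]
impl LlmConnector for SelfHostedLlm {
    async fn complete(&self, _session_id: &str, prompt: &str) -> Result<String, ConnectorError> {
        let resp = self
            .client
            .post(self.endpoint.as_str())
            .json(&serde_json::json!({ "prompt": prompt }))
            .send()
            .await
            .map_err(|e| ConnectorError(e.to_string()))?;
        resp.text().await.map_err(|e| ConnectorError(e.to_string()))
    }
}

// In the LLM service's main.rs you would register the connector so that sessions whose
// stored config selects it get routed here (this registration shape is an assumption).
fn register(connectors: &mut HashMap<String, Box<dyn LlmConnector>>) {
    connectors.insert(
        "self-hosted".to_string(),
        Box::new(SelfHostedLlm {
            endpoint: "http://localhost:9000/v1/generate".to_string(),
            client: reqwest::Client::new(),
        }),
    );
}
```

Creating a session would then be a plain HTTP call, e.g. POST /sessions/with-config with a JSON body that selects "self-hosted" as the LLM connector; the exact request schema is defined in the repo's docs.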