🤖 AI Summary
Pydantic today opened the beta for Pydantic AI Gateway (PAIG), a multi-provider LLM gateway designed to simplify governance, cost control, and observability for production LLM usage. Rather than normalizing every provider into a single “universal schema,” PAIG forwards requests in each provider’s native format—“one key, zero translation”—so new model features (tool calling, image input, etc.) are immediately usable. It supports OpenAI, Anthropic, Google Vertex, Groq, and AWS Bedrock today (Azure coming), and can be used either via Pydantic AI with one-line model switching or by pointing existing SDKs at PAIG’s proxy base_url. The core is open-source (AGPL-3.0) and self-hostable; the managed console, UI, and some enterprise features are closed-source.
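The “zero translation” proxy model can be pictured with a small sketch: the request body stays in the provider’s native format, and switching to the gateway changes only the base URL and the key. The URL paths, host names, and key values below are illustrative assumptions, not PAIG’s documented endpoints.

```python
from urllib.parse import urljoin

def build_request(base_url: str, path: str, api_key: str, payload: dict) -> dict:
    """Assemble an HTTP request description for a chat-completion call.

    The payload is passed through untouched ("zero translation"):
    moving from a provider's own endpoint to a gateway changes only
    `base_url` and the key carried in the Authorization header.
    """
    return {
        "url": urljoin(base_url.rstrip("/") + "/", path.lstrip("/")),
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": payload,  # provider-native JSON, forwarded as-is
    }

# Same OpenAI-style payload, two different destinations.
payload = {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]}

direct = build_request("https://api.openai.com/v1", "chat/completions",
                       "sk-provider-key", payload)
# Hypothetical gateway host; only base_url and api_key differ.
via_gateway = build_request("https://gateway.example.com/openai/v1",
                            "chat/completions", "paig-gateway-key", payload)

assert direct["body"] is via_gateway["body"]  # body is untouched either way
```

Because the body is forwarded verbatim, any new provider field (a new tool-calling parameter, say) works through the gateway the day the provider ships it, with no adapter update in between.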
PAIG’s technical focus is reliability and control: it runs on Cloudflare’s global edge to keep added latency in the single-digit milliseconds, and can even reduce network hops by routing through Cloudflare’s backbone. Features include spend caps (daily, weekly, monthly, or total) per project, user, or key; automatic failover across providers; file-based config; OIDC SSO; granular permissions; and built-in observability via Pydantic Logfire or any OpenTelemetry backend. The open beta (free) launched Nov 13, 2025 and runs until early December; pricing and GA details will be announced later. For teams struggling with API key sprawl, runaway costs, and brittle adapter layers, PAIG offers a minimal, high-performance control plane that keeps full provider feature fidelity.
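Automatic failover across providers amounts to an ordered try-next loop; the sketch below shows that shape under stated assumptions. The provider names, `ProviderError` type, and `send` callable are illustrative, not PAIG’s actual implementation, which does this per request at the edge.

```python
class ProviderError(Exception):
    """Raised when a provider call fails (rate limit, outage, etc.)."""

def call_with_failover(providers, send, request):
    """Try each provider in priority order; return the first success.

    `providers` is an ordered list of provider names and `send` is a
    callable that performs the actual request, raising ProviderError
    on failure. Failures are recorded and the loop falls through to
    the next provider; if every provider fails, all errors surface.
    """
    errors = {}
    for name in providers:
        try:
            return name, send(name, request)
        except ProviderError as exc:
            errors[name] = exc  # remember the failure, try the next one
    raise ProviderError(f"all providers failed: {errors}")

# Usage: simulate a primary outage so traffic falls over to the backup.
def fake_send(name, request):
    if name == "openai":
        raise ProviderError("rate limited")
    return {"provider": name, "echo": request}

winner, response = call_with_failover(["openai", "anthropic"], fake_send, "hi")
# winner is "anthropic": the first provider raised, the second answered.
```

The useful property of doing this in the gateway rather than in application code is that every client gets the same failover policy without each team re-implementing the loop.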