Show HN: Agent/LLM observability for tracing, cost, evals, and debugging (aback-handbell-1cd.notion.site)

🤖 AI Summary
Progress Observability Platform (Agent Observability) is an early-access product demo for tracing, analyzing, and evaluating agentic AI systems: multi-agent frameworks, LLM pipelines (RAG + tools), and autonomous or long-running agents. Its Agent Trace Explorer records end-to-end runs as spans/steps and exposes reasoning phases (plan/retrieve/decide/act), prompts and LLM parameters (model, temperature), responses and intermediate outputs, tool/API inputs/outputs and status codes, and control flow (branches, retries, parallelism). Outputs can be inspected per trace or aggregated by agent, model, or time window, enabling targeted debugging and provenance.

The platform also adds cost analytics (tokens, total cost, cost per 1M tokens, top models/apps, trends and spikes) and behavior/quality insights, including model-based "LLM-as-a-judge" evaluations that score quality, usefulness, and policy alignment, plus side-by-side comparisons for evaluating prompt or workflow changes. Integration is via code-level instrumentation and SDKs that emit spans and metadata from orchestration layers and agent frameworks.

The Early Access Program offers free usage for 1–3 months, onboarding support, and direct collaboration with the product team, aimed at teams who can instrument their agents and provide feedback. For practitioners, this addresses a growing need for observability, cost control, and empirical evaluation in complex agentic systems, speeding debugging and safer deployment.