Show HN: Open Line Protocol – a minimal wire for AI agents (MIT) (github.com)

🤖 AI Summary
Open Line Protocol (OLP) v0.1 is an MIT-licensed, minimal "wire" for AI agents that represents plans as small typed graphs (the shape) plus smooth, operation-based updates (the liquid). Instead of sending prose, agents exchange Frames composed of typed nodes (Claim, Evidence, Counter, etc.) and typed edges (supports, contradicts, depends_on…), validated against a frozen Pydantic schema.

The protocol is built for auditability and robust multi-model collaboration: every Frame yields a 5-number digest (b0, cycle_plus, x_frontier, s_over_c, depth) and a holonomy gap Δ_hol that quantifies order-debt across loops, while telemetry (phi_sem, phi_topo, delta_hol, kappa_eff, commutator, cost_tokens, da_drift) lets agents auto-throttle and detect coherence and curvature issues. A set of guards and morph operations (add_*, del_*, retype, reweight, merge, split, homotopy) forms an operation-based CRDT workflow (SYNC → MEASURE → STITCH) that prevents self-reinforcing myths, silent deletions, and "too-clean" rewrites.

Technically, OLP provides a FastAPI HTTP bus, Merkle witness marks and signatures, adapters for queues and stores, a determinism anchor, and a lightweight topological scorer (β₀/β₁). POST /frame returns {ok, digest, telem} or a guarded HTTP 422 error; the server recomputes digests and enforces invariants. The repo emphasizes small, well-tested core primitives (schema, digest, guards, telem), CI coverage, observability hooks (OTel), and one-command demos, making it a practical foundation for coordination-free, conflict-aware agent ecosystems that need explicit, auditable reasoning traces.
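For a concrete feel of the exchange, here is a minimal sketch of a Frame payload and a POST /frame round-trip, based only on the summary above. The node/edge field names, the NodeType/EdgeType values, the weight attribute, and the localhost URL are illustrative assumptions rather than the repo's actual wire format; the real frozen Pydantic schema lives in the repository.

```python
# Sketch only: field names and enum values below are assumptions for illustration,
# not the repo's frozen schema. Consult the OLP repository for the real models.
from enum import Enum
from typing import List

import requests  # assumed HTTP client; the repo ships a FastAPI bus on the server side
from pydantic import BaseModel


class NodeType(str, Enum):
    CLAIM = "Claim"
    EVIDENCE = "Evidence"
    COUNTER = "Counter"


class EdgeType(str, Enum):
    SUPPORTS = "supports"
    CONTRADICTS = "contradicts"
    DEPENDS_ON = "depends_on"


class Node(BaseModel):
    id: str
    type: NodeType
    text: str


class Edge(BaseModel):
    src: str
    dst: str
    type: EdgeType
    weight: float = 1.0  # hypothetical attribute, motivated by the reweight morph


class Frame(BaseModel):
    nodes: List[Node]
    edges: List[Edge]


frame = Frame(
    nodes=[
        Node(id="c1", type=NodeType.CLAIM, text="Cache invalidation causes the bug."),
        Node(id="e1", type=NodeType.EVIDENCE, text="Stale reads reproduce after TTL expiry."),
    ],
    edges=[Edge(src="e1", dst="c1", type=EdgeType.SUPPORTS)],
)

# POST the frame to a locally running bus (URL assumed). Per the summary, the server
# recomputes the digest (b0, cycle_plus, x_frontier, s_over_c, depth) and either
# returns {ok, digest, telem} or rejects the frame with a guarded HTTP 422.
resp = requests.post("http://localhost:8000/frame", json=frame.model_dump())
if resp.status_code == 422:
    print("guard rejected the frame:", resp.json())
else:
    body = resp.json()
    print("ok:", body.get("ok"))
    print("digest:", body.get("digest"))
    print("telem:", body.get("telem"))
```

The round-trip shape (client sends a typed graph, server validates, recomputes the digest, and returns telemetry or a 422) is what the summary describes; everything else in the sketch is placeholder detail.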