Show HN: AI-assert – Constraint verification for LLM outputs (278 lines, Python) (github.com)

🤖 AI Summary
A new tool named **ai_assert** aims to improve the reliability of large language model (LLM) outputs through runtime constraint verification. The Python library, just 278 lines of code with zero dependencies, lets developers enforce output criteria such as valid JSON, a maximum length, or required substrings. It follows a generate, check, retry-with-feedback loop: an output that fails any constraint is regenerated with the failure messages fed back into the prompt, until all constraints pass or the attempt budget is exhausted.

The tool addresses a common problem: LLMs frequently produce outputs that violate explicit instructions. Where traditional solutions rely on ad-hoc validation, ai_assert offers a structured, model-agnostic framework that works with any function implementing a string-to-string interface. It applies a multiplicative gate, so failure on any single constraint fails the whole output. In the author's tests, this approach improved prompt-level accuracy by 6.8 percentage points and constraint-level accuracy by 5 percentage points. With straightforward integration and an audit trail of every generated attempt, ai_assert helps developers use LLMs more reliably across a range of applications.
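The generate/check/retry loop described above can be sketched in plain Python. This is an illustrative reconstruction, not the library's actual API: the names `check_json`, `max_len`, `must_contain`, and `ai_assert` are assumptions, as is the convention that a constraint returns `None` on success and an error message on failure.

```python
import json

def check_json(text):
    """Constraint: output must parse as JSON. Returns None on success,
    an error message on failure (hypothetical convention)."""
    try:
        json.loads(text)
        return None
    except json.JSONDecodeError as e:
        return f"output is not valid JSON: {e}"

def max_len(limit):
    """Constraint factory: output must not exceed `limit` characters."""
    def check(text):
        return f"output exceeds {limit} characters" if len(text) > limit else None
    return check

def must_contain(substring):
    """Constraint factory: output must include a given substring."""
    def check(text):
        return f"output must contain {substring!r}" if substring not in text else None
    return check

def ai_assert(generate, prompt, constraints, max_attempts=3):
    """generate: any str -> str function (the model-agnostic interface).
    All constraints must pass (multiplicative gate). On failure, retry with
    the failure messages appended as feedback. Returns (output, audit_trail)."""
    trail = []
    current_prompt = prompt
    for attempt in range(1, max_attempts + 1):
        output = generate(current_prompt)
        failures = [msg for c in constraints if (msg := c(output)) is not None]
        trail.append({"attempt": attempt, "output": output, "failures": failures})
        if not failures:
            return output, trail
        current_prompt = prompt + "\nPrevious attempt failed: " + "; ".join(failures)
    raise ValueError(f"no valid output after {max_attempts} attempts")
```

A caller would wrap their model call, e.g. `ai_assert(my_model, "Return user data as JSON", [check_json, max_len(200)])`, and inspect the returned trail to audit each attempt.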