🤖 AI Summary
Hegelion is an open-source toolkit that runs any LLM through a deliberate three‑phase dialectic—Thesis → Antithesis → Synthesis—and returns the full reasoning trace as a structured JSON HegelionResult. Each run yields the three passages plus explicit contradictions, testable research_proposals, and rich metadata (per‑phase timings, backend/model, optional debug trace with an experimental internal_conflict_score). The package (v0.2.3) ships with a CLI, Python async/sync APIs, a benchmark tool, and an MCP server for Claude Desktop; it supports Anthropic Claude by default and is backend‑agnostic (OpenAI, Ollama, Google Gemini, custom HTTP endpoints). Parsing is engineered for robustness and graceful degradation, so partial results and structured logs are returned even on backend failures.
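As a rough illustration of the fields named above, a HegelionResult might deserialize to something like the sketch below. This is not the library's canonical schema (that lives in HEGELION_SPEC.md); the field names are taken from this summary, and keys such as phase_timings_ms and debug are placeholders chosen for illustration.

```python
import json

# Illustrative shape only: field names follow this summary, not the canonical
# schema in HEGELION_SPEC.md; nested keys like "phase_timings_ms" are guesses.
raw = """
{
  "thesis": "Initial position produced in phase 1.",
  "antithesis": "Strongest counter-argument produced in phase 2.",
  "synthesis": "Reconciled answer produced in phase 3.",
  "contradictions": [
    "The thesis assumes X, but the antithesis shows X fails under Y."
  ],
  "research_proposals": [
    "Testable hypothesis: measure whether Y holds on benchmark Z."
  ],
  "metadata": {
    "backend": "anthropic",
    "model": "example-model",
    "phase_timings_ms": {"thesis": 0, "antithesis": 0, "synthesis": 0},
    "debug": {"internal_conflict_score": 0.0}
  }
}
"""

result = json.loads(raw)
print(result["synthesis"])
print(len(result["contradictions"]), "contradiction(s) surfaced")
```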
For the AI/ML community, this is a practical way to push LLMs beyond single‑pass answers into structured self‑critique, surfacing assumptions, tensions, and concrete hypotheses useful for evaluation, safety analysis, RAG enrichment, and research ideation. The canonical schema (HEGELION_SPEC.md) enables direct integration into eval pipelines and observability stacks, while the contradictions/research_proposals arrays make outputs machine‑testable. Caveats: debug telemetry (e.g., internal_conflict_score) is experimental, proposals may be empty, and users must sanitize logs before sharing them. Overall, Hegelion offers a reproducible, auditable pattern for eliciting multi‑step reasoning and measurable failure modes from today’s LLMs.
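Because the output is plain JSON, a downstream eval or observability step can treat it like any other record. The hypothetical consumer below uses only the standard library rather than Hegelion's own Python API, assumes only the field names mentioned in this summary, and reflects the caveats above: research_proposals may be empty, and experimental debug telemetry is dropped before the record is logged or shared.

```python
import json
from pathlib import Path


def check_result(path: Path) -> dict:
    """Hypothetical eval-pipeline check over a HegelionResult JSON file."""
    result = json.loads(path.read_text())

    # Proposals may legitimately be empty (see caveats), so report rather than fail.
    proposals = result.get("research_proposals", [])
    contradictions = result.get("contradictions", [])

    # Strip experimental debug telemetry before the record leaves the pipeline.
    metadata = dict(result.get("metadata", {}))
    metadata.pop("debug", None)

    return {
        "has_contradictions": bool(contradictions),
        "proposal_count": len(proposals),
        "backend": metadata.get("backend"),
    }


if __name__ == "__main__":
    print(check_result(Path("hegelion_result.json")))
```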