Writing Silly LLM Agent in Haskell (xlii.space)

🤖 AI Summary
A developer wrote a compact LLM-driven "SedAgent" in Haskell to experiment with building an agent that asks a local Ollama model (llama3:latest) for GNU sed expressions, runs the model multiple times, selects the most common response, and executes it against a test file using gsed.

The repo shows a minimal, typed design (newtypes LLMPrompt/LLMInstruction, data types LLMCommand/LLMResult, an Agent typeclass, and a SedAgent instance) and a simple pipeline: construct a command for ollama via Shelly, call it N=50 times (runCommandNTimes), pick the best result by grouping and sorting identical responses (pickBest), then run gsed --sandbox with the chosen expression. The test prompt ("replace zeroes with unicode empty circle") produced the output s/0/⚫️/2p and a partially incorrect transformation, what the author calls a "successful failure."

The post is significant as a pragmatic Haskell example of building LLM agents, and it surfaces concrete engineering challenges: brittle token-level generation for syntax-heavy outputs (sed regexes), weak prompt and model selection (off-the-shelf llama3), naive result aggregation and validation, and safety concerns mitigated only by gsed's sandbox. The author notes alternative designs (making agents monads for contextual interactions) and highlights lessons for the AI/ML community: syntactic tasks require stronger validation or smarter reranking and execution-aware prompting, and an agent's output should be trusted cautiously unless you control its logic. The full runnable example is in the haskell-toys repo.
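The majority-vote selection described above (run the model N times, group identical responses, take the largest group) can be sketched in Haskell roughly as follows. The names runCommandNTimes and pickBest come from the summary, but the bodies here are a reconstruction under assumed types, not the author's actual code; the real post shells out to ollama via Shelly, which an arbitrary IO action stands in for here.

```haskell
import Control.Monad (replicateM)
import Data.List (group, sort, sortOn)
import Data.Ord (Down (..))

-- A sed expression returned by the model (assumed representation).
newtype LLMInstruction = LLMInstruction String
  deriving (Eq, Ord, Show)

-- Run one model call N times and collect the responses.
-- The real agent shells out to ollama here; any IO action works.
runCommandNTimes :: Int -> IO LLMInstruction -> IO [LLMInstruction]
runCommandNTimes n act = replicateM n act

-- Pick the most common response: sort so identical answers are
-- adjacent, group them, then take an element of the largest group.
pickBest :: [LLMInstruction] -> Maybe LLMInstruction
pickBest [] = Nothing
pickBest xs =
  Just . head . head . sortOn (Down . length) . group . sort $ xs
```

In ghci, `pickBest (map LLMInstruction ["s/0/x/g", "s/0/y/g", "s/0/x/g"])` evaluates to `Just (LLMInstruction "s/0/x/g")`. Majority voting like this only validates consistency, not correctness, which is one reason the post argues syntactic outputs need stronger, execution-aware checks.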