🤖 AI Summary
At the 2025 Data & AI Summit, Drew Breunig argued for using DSPy, a small, opinionated library that "lets the model write the prompt," to manage LLM-driven tasks in applications and pipelines. Using a geospatial conflation problem (deciding whether two place records refer to the same real-world POI) as a running example, he showed how long, brittle prompt strings embedded in code become unmaintainable and model-dependent. DSPy instead expresses tasks as typed signatures and modules, decoupling the task definition from any single prompt strategy or model, so teams can iterate, evaluate, and re-optimize prompts automatically as models change.
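To make the signature idea concrete, here is a minimal sketch of what a class-based DSPy signature for the conflation task could look like; the field names, types, and docstring are illustrative assumptions, not Breunig's actual code:

```python
import dspy

class PlaceMatch(dspy.Signature):
    """Decide whether two place records describe the same real-world POI."""
    # Hypothetical fields for illustration; the desc strings (like this
    # docstring) flow directly into the generated prompt.
    place_a: str = dspy.InputField(desc="first place record (name, address, category)")
    place_b: str = dspy.InputField(desc="second place record (name, address, category)")
    match: bool = dspy.OutputField(desc="True if both records refer to the same place")
```

Because the signature is typed, DSPy is responsible for serializing the inputs into a prompt and parsing the model's reply back into a `bool`, rather than the application doing string surgery on raw completions.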
Technically, DSPy turns a signature (a plain string or a Pydantic-typed class) into system and user prompts via composable modules (Predict, ChainOfThought, etc.), handles formatting and output parsing, and connects to models through LiteLLM (cloud, on-prem, or local runtimes). Docstrings and field descriptions flow into the generated prompts, and a library of optimizers uses eval data to tune prompting strategies. In the conflation pipeline Breunig described, DSPy simplified integration (no manual parsing), allowed falling back to the LLM only for ambiguous cases, and produced correct judgments (e.g., Qwen 3 0.6b returning a True match), illustrating a pragmatic path to more maintainable, model-agnostic LLM systems.
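A hedged sketch of that wiring, using DSPy's string-signature form; the model identifiers are LiteLLM-style strings chosen for illustration, not the talk's actual configuration:

```python
import dspy

# dspy.LM accepts a LiteLLM model string, so the same module can run
# against cloud, on-prem, or local runtimes by swapping this string.
lm = dspy.LM("ollama_chat/qwen3:0.6b")  # assumed local model name; e.g. "openai/gpt-4o-mini" for cloud
dspy.configure(lm=lm)

# The string form of a signature; ChainOfThought prompts for reasoning
# before the answer, while dspy.Predict would ask for the answer directly.
judge = dspy.ChainOfThought("place_a, place_b -> match: bool")

result = judge(
    place_a="Cafe Roma, 123 Main St, category: cafe",
    place_b="Caffè Roma, 123 Main Street, category: coffee shop",
)
print(result.match)  # DSPy parses the reply into a typed bool, e.g. True
```

Swapping `ChainOfThought` for `Predict`, or pointing `dspy.LM` at a different provider, changes the prompting strategy or model without touching the task definition, which is the decoupling the talk emphasized.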