Show HN: Un-LOCC – Reduce LLM API costs by compressing text into images (github.com)

🤖 AI Summary
Un-LOCC (Universal Lossy Optical Context Compression) is a new Python library that wraps the OpenAI SDK to compress large text contexts into images so they can be fed to vision-language models (VLMs). By rendering bulky text as images, Un-LOCC aims to reduce LLM API token usage and enable longer effective context windows, making it a practical tool for compressing prior chat history, long documents, or other verbose inputs while keeping critical instructions as plain text. It is presented as a drop-in replacement for the OpenAI client and supports both synchronous (UnLOCC) and asynchronous (AsyncUnLOCC) workflows.

Technically, Un-LOCC offers configurable, lossy optical compression (font, size, padding, max width/height; defaults include Atkinson Hyperlegible Regular, font_size 15, and an 864×864 target) and selects the fastest available renderer: ReportLab + pypdfium2 (recommended), ReportLab alone, or a PIL fallback. Usage is simple: mark large message parts with "compressed": True, or pass compression to responses.create; lists and mixed content are handled so that only text parts are rasterized.

Key implications: it can cut API costs and sidestep token limits for multimodal models, but the compression is lossy and results depend on the VLM's text-recognition fidelity and on how the provider prices image inputs. The library is MIT-licensed and available on GitHub (github.com/MaxDevv/UN-LOCC) for experimentation and contributions.
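To make the "optical compression" idea concrete, here is a minimal sketch of rasterizing text onto an 864×864 canvas with PIL (the fallback renderer the summary mentions). This is not Un-LOCC's own code: the wrapping width, margins, and default font here are assumptions, and the library defaults to Atkinson Hyperlegible Regular rather than PIL's built-in font.

```python
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_text_to_image(text: str, width: int = 864, height: int = 864,
                         font_size: int = 15) -> Image.Image:
    """Rasterize text onto a white canvas, roughly as an optical-compression
    fallback renderer might. Wrap width and margins are illustrative guesses."""
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # stand-in; not the library's default font
    margin, line_height = 8, font_size + 2
    y = margin
    # Naive character-count wrap; a real renderer measures glyph widths.
    for line in textwrap.wrap(text, width=110):
        if y + line_height > height - margin:
            break  # overflowing text is simply dropped: the scheme is lossy
        draw.text((margin, y), line, fill="black", font=font)
        y += line_height
    return img

img = render_text_to_image("lorem ipsum dolor sit amet " * 200)
```

The resulting image would then be sent as a vision input in place of the raw text, trading exact token counts for whatever the model charges per image.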