Fighting Email Spam on Your Mail Server with LLMs – Privately (cybercarnet.eu)

🤖 AI Summary
A self-hosted Mailcow user tackled rising spam by hooking local LLMs into Rspamd’s GPT plugin instead of sending mail to cloud APIs. They built a proxy (code: github.com/unixfox/mailcow-rspamd-ollama) that enriches LLM queries with live web-search results (via Mullvad Leta) and forwards them to a local Ollama instance running models such as gemma3:12b; the idea is sketched below. The Rspamd plugin is configured to call the Ollama API with a deterministic prompt that asks the model to output exactly three lines: a 0.00–1.00 spam probability, a one-line justification, and (if the score is above 0.5) a single concern category. The whole chain keeps evaluation local, preserving privacy, and lets Rspamd apply scores and filters as usual.

This matters because smaller, locally hosted LLMs are typically weak on current web knowledge; injecting search context closes that gap and significantly improves real-world classification without exposing mail content to third parties. Key technical constraints: Ollama needs a GPU for acceptable latency, the Rspamd plugin timeout must be raised (Rspamd expects plugins to respond quickly by default), and the proxy supports any OpenAI-compatible backend if a local GPU isn’t available.

After six months the author reports ~2,000 messages processed with only ~5 false positives, demonstrating a practical, private, and integrable alternative to cloud-based spam classification for mail server operators.
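As a rough illustration of the proxy idea, here is a minimal Python sketch of an OpenAI-compatible relay that fetches web-search context and prepends it to the conversation before forwarding to Ollama. This is not the actual mailcow-rspamd-ollama code: the Leta URL shape, the snippet extraction, and the listening port are assumptions made purely for illustration.

```python
# Minimal sketch of a search-enriched OpenAI-compatible proxy (NOT the real
# mailcow-rspamd-ollama implementation). Leta URL shape and snippet parsing
# are placeholders/assumptions.
import json
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

OLLAMA_URL = "http://127.0.0.1:11434/v1/chat/completions"  # Ollama's OpenAI-compatible endpoint
LETA_URL = "https://leta.mullvad.net/search?q="            # hypothetical query URL shape

def search_snippets(query: str) -> str:
    """Fetch some web-search context to ground the model (placeholder parsing)."""
    try:
        with urllib.request.urlopen(LETA_URL + urllib.parse.quote(query), timeout=5) as resp:
            page = resp.read().decode("utf-8", errors="replace")
        # A real proxy would parse structured results; this sketch just truncates raw text.
        return page[:1000]
    except OSError:
        return ""  # degrade gracefully: classify without search context

class Proxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Seed the search query from the last user message (the email text).
        user_msg = next((m for m in reversed(body["messages"]) if m["role"] == "user"), None)
        context = search_snippets(user_msg["content"][:200]) if user_msg else ""
        if context:
            body["messages"].insert(0, {"role": "system",
                                        "content": "Web search context:\n" + context})
        req = urllib.request.Request(OLLAMA_URL,
                                     data=json.dumps(body).encode(),
                                     headers={"Content-Type": "application/json"})
        # Generous timeout: local LLM inference can take many seconds per message.
        with urllib.request.urlopen(req, timeout=120) as upstream:
            data = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Proxy).serve_forever()
```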
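And a sketch of the three-line output contract described above, calling Ollama's OpenAI-compatible endpoint directly. The summary does not give the author's exact prompt wording, so the prompt here is an assumed paraphrase; only the three-line shape (probability, justification, optional category) comes from the source.

```python
# Sketch of the three-line classification contract (prompt wording assumed;
# only the output shape is taken from the article summary).
import json
import re
import urllib.request

PROMPT = (
    "You are an email spam classifier. Reply with exactly three lines and nothing else:\n"
    "line 1: spam probability as a number between 0.00 and 1.00\n"
    "line 2: a one-line justification\n"
    "line 3: if the probability is above 0.5, a single concern category; otherwise 'none'"
)

def classify(email_text: str, model: str = "gemma3:12b") -> tuple[float, str, str]:
    req = urllib.request.Request(
        "http://127.0.0.1:11434/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "temperature": 0,  # deterministic output, per the description above
            "messages": [
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": email_text},
            ],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    lines = [l.strip() for l in reply.splitlines() if l.strip()]
    # Tolerate models that prefix the number with "1)" or similar.
    m = re.search(r"\d+(?:\.\d+)?", lines[0]) if lines else None
    score = float(m.group()) if m else 0.0
    reason = lines[1] if len(lines) > 1 else ""
    category = lines[2] if len(lines) > 2 else "none"
    return score, reason, category
```

In a real deployment the equivalent call is made by Rspamd's GPT module, and as the summary notes, its timeout must be raised well above the default, since local inference can take several seconds per message.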