🤖 AI Summary
Audarma (alpha) is an LLM-powered translation layer for React/Next.js that targets dynamic content (product catalogs, user messages, listings, articles) rather than static UI labels. It uses external LLMs (OpenAI, Anthropic/Claude, Google Gemini, Cerebras, Nebius, local Llama models) to translate content on demand or in batch, paired with caching so each item is translated only once per target locale. Key UX/engineering features include view-level translation (translate whole views instead of individual strings), progressive loading (show the original text immediately, translate in the background), batched LLM calls, and a simple React API (ViewTranslationProvider, useViewTranslation). A CLI mode pre-fills the same cache for SEO-critical and high-traffic pages, letting teams combine lazy and pre-translation strategies for the best cost/performance balance.
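To make that API concrete, here is a minimal usage sketch. Only the two exported names (ViewTranslationProvider, useViewTranslation) come from the summary above; the import path, the provider's props, and the hook's argument and return shapes are assumptions for illustration, not the library's confirmed API.

```tsx
// Hypothetical usage sketch: the import path, props, and hook shape are
// assumed; only the two component/hook names come from the project.
import { ViewTranslationProvider, useViewTranslation } from "audarma";

type Product = { id: string; name: string; description: string };

function ProductList({ products }: { products: Product[] }) {
  // Assumed hook shape: pass the dynamic items for this view, get back a
  // lookup that returns the cached translation or the original English text.
  const { t, isTranslating } = useViewTranslation(
    products.map((p) => ({ id: p.id, text: p.description }))
  );

  return (
    <ul>
      {products.map((p) => (
        // Progressive loading: t() returns the original text immediately
        // and the list re-renders once the batched LLM call comes back.
        <li key={p.id}>
          {p.name}: {t(p.id)} {isTranslating && "(translating)"}
        </li>
      ))}
    </ul>
  );
}

export default function App({ products }: { products: Product[] }) {
  return (
    // Assumed props: the target locale plus a view identifier feeding the
    // SHA-256 view hash that keys the cache.
    <ViewTranslationProvider locale="de" viewName="product-list">
      <ProductList products={products} />
    </ViewTranslationProvider>
  );
}
```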
Technically it's adapter-driven and backend-agnostic: three interfaces (DatabaseAdapter, LLMProvider, I18nAdapter) let you plug in Supabase/Postgres, Prisma, or Redis on the storage side, any LLM API, and any i18n system. The cache is keyed by SHA-256 view and item hashes, and a content_translations table (content_type, content_id, locale, original_text, translated_text, source_hash, timestamps) stores results. The mount flow is: compute view hash → local metadata check → fetch cached rows → call LLM for missing items → save translations → re-render. Alpha limitations matter: English-only source text, client-only components, no cache-invalidation API, and no retries or streaming yet. For teams that need to translate dynamic, changing content at lower API cost and with easy integration, Audarma is a practical starting point; contributions for adapters, server-component support, retries/invalidation, and cost tracking are requested.
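A sketch of how the three adapters and that mount flow could fit together, written for a Node context such as the CLI pre-translation mode. The interface members, the row type, and the translateView function are illustrative reconstructions of the description above, not Audarma's actual signatures.

```ts
import { createHash } from "node:crypto";

// Assumed shape of a content_translations row, mirroring the columns above.
interface ContentTranslationRow {
  content_type: string;
  content_id: string;
  locale: string;
  original_text: string;
  translated_text: string;
  source_hash: string; // SHA-256 of original_text, used for staleness checks
  created_at: string;
  updated_at: string;
}

// Assumed adapter interfaces; only the three names come from the project.
interface DatabaseAdapter {
  fetchTranslations(viewHash: string, locale: string): Promise<ContentTranslationRow[]>;
  saveTranslations(rows: ContentTranslationRow[]): Promise<void>;
}

interface LLMProvider {
  // Batched call: one request covers all missing items in the view.
  translateBatch(texts: string[], targetLocale: string): Promise<string[]>;
}

interface I18nAdapter {
  getCurrentLocale(): string;
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Hypothetical mount flow: view hash → cache lookup → LLM for misses → save.
async function translateView(
  items: { type: string; id: string; text: string }[],
  db: DatabaseAdapter,
  llm: LLMProvider,
  i18n: I18nAdapter
): Promise<Map<string, string>> {
  const locale = i18n.getCurrentLocale();
  // View hash derived from the per-item hashes, as the summary describes.
  const viewHash = sha256(items.map((i) => sha256(i.text)).join("|"));

  // Serve whatever is already cached for this view and locale.
  const cached = await db.fetchTranslations(viewHash, locale);
  const byId = new Map(cached.map((r) => [r.content_id, r]));

  // Only items missing from the cache, or whose source text changed since
  // it was hashed, go to the LLM.
  const missing = items.filter(
    (i) => byId.get(i.id)?.source_hash !== sha256(i.text)
  );

  if (missing.length > 0) {
    const translated = await llm.translateBatch(
      missing.map((i) => i.text),
      locale
    );
    const now = new Date().toISOString();
    const rows: ContentTranslationRow[] = missing.map((i, idx) => ({
      content_type: i.type,
      content_id: i.id,
      locale,
      original_text: i.text,
      translated_text: translated[idx],
      source_hash: sha256(i.text),
      created_at: now,
      updated_at: now,
    }));
    await db.saveTranslations(rows);
    rows.forEach((r) => byId.set(r.content_id, r));
  }

  // Fall back to the original text for anything still untranslated.
  return new Map(items.map((i) => [i.id, byId.get(i.id)?.translated_text ?? i.text]));
}
```

Because the same cache table backs both paths, this one routine could serve the lazy client flow and the CLI pre-fill alike; only the trigger differs.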