🤖 AI Summary
Anki-LLM is a new CLI toolkit that connects your Anki collection to modern LLMs to bulk-clean, verify, and generate flashcards. It provides two main workflows: a file-based one (process-file) for safe offline processing with incremental saves and automatic resume, and direct in-place updates (process-deck) for faster batch edits. You can add structured fields (e.g., a "Key Vocabulary" field with readings and HTML-formatted context), bulk-verify translations, or interactively generate multiple contextual cards per term with a generate wizard that tailors prompt templates to your deck's style.
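To make the "structured field" idea concrete, here is a minimal sketch of what one such enriched field could look like; the VocabEntry interface, property names, and HTML layout are assumptions for illustration, not anki-llm's actual schema.

```ts
// Hypothetical shape of one "Key Vocabulary" entry; names are
// illustrative, not anki-llm's actual output format.
interface VocabEntry {
  term: string;    // surface form as it appears on the card
  reading: string; // e.g. a kana reading for a Japanese term
  context: string; // short example sentence
}

// Anki fields accept raw HTML, so structured entries can be rendered
// into a snippet and written straight into a note field.
function renderKeyVocabulary(entries: VocabEntry[]): string {
  return entries
    .map(
      (e) =>
        `<div class="vocab"><b>${e.term}</b> [${e.reading}]: ${e.context}</div>`
    )
    .join("\n");
}

console.log(
  renderKeyVocabulary([
    { term: "図書館", reading: "としょかん", context: "図書館で本を借りました。" },
  ])
);
```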
For technical users it's rich and scriptable: installable via npm, it talks to Anki through AnkiConnect (Anki Desktop must be running) and supports OpenAI and Google Gemini models, with multiple model choices and token pricing. Commands support JSON mode (the LLM returns structured JSON that is merged into multiple fields) or single-field mode, concurrent API requests, retries, dry runs, detailed logging, and error export. There's also a copy mode that lets you paste responses from browser UIs to avoid API keys. Key options include configurable prompts, batch size, and resume/force flags, so you can test on small samples before committing. The main tradeoffs are token costs and the need for careful prompt/template design, but the tool dramatically scales otherwise tedious manual note editing and generation workflows for language learners, educators, and AI-powered SRS pipelines.
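The summary doesn't show anki-llm's internals, but AnkiConnect itself is a documented JSON-over-HTTP API served on port 8765, so a minimal round trip in TypeScript looks roughly like this (the deck name, field name, and field value are placeholders, not output from the tool):

```ts
// Minimal AnkiConnect client: every request is a POST with
// { action, version: 6, params } and every response is { result, error }.
const ANKI_CONNECT_URL = "http://127.0.0.1:8765";

async function ankiRequest<T>(action: string, params: object = {}): Promise<T> {
  const res = await fetch(ANKI_CONNECT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ action, version: 6, params }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(`AnkiConnect error: ${error}`);
  return result as T;
}

async function main() {
  // Find notes in a deck, then write an LLM-produced value into one field.
  const noteIds = await ankiRequest<number[]>("findNotes", {
    query: 'deck:"Japanese::Vocab"', // placeholder deck name
  });
  if (noteIds.length === 0) return;

  await ankiRequest<null>("updateNoteFields", {
    note: {
      id: noteIds[0],
      fields: { "Key Vocabulary": "<b>図書館</b> [としょかん]" }, // placeholder content
    },
  });
}

main().catch(console.error);
```

This is the same protocol boundary the tool sits on: Anki Desktop must be running with the AnkiConnect add-on installed for any of the deck-reading or in-place update commands to work.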