What I learned building an AI-driven spaced repetition app (www.seangoedecke.com)

🤖 AI Summary
An engineer-built app called AutoDeck uses LLMs to generate an "infinite" spaced-repetition flashcard feed for any topic, automatically adjusting difficulty based on user responses. It's notable because it shows a high-value non-chat AI UI: users simply get a stream of cards rather than conversing with an agent. That makes spaced repetition much easier to apply to new subjects (no pre-made decks or heavy setup) and demonstrates a practical pattern for AI apps where the model produces narrow, repeatable units of content rather than freeform dialogue.

Technically, the project wrestles with two core problems: speed and consistency. Generating cards one by one is slow due to time-to-first-token latency, while naïve parallel generation produces duplicates. The solution: batch-generate multiple cards server-side, but stream and persist each `<card></card>` XML chunk to the client as it arrives, enabling immediate consumption without waiting for a full JSON payload (JSON can't be parsed mid-stream). That required a background producer plus client polling and careful de-duplication.

The author used OpenAI Codex to accelerate coding (with frequent human intervention) and found other agents like Claude Code less helpful. Practical takeaways: model choice and latency matter more than raw capability for UX, streaming structured formats are crucial today, and per-request inference costs are pushing makers toward paid models that incentivize higher polish.
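The streaming approach described above can be sketched in a few lines: buffer incoming text chunks, extract each complete `<card>...</card>` element as soon as its closing tag arrives, and skip duplicates by content. This is an illustrative sketch, not the author's actual implementation; the regex-based extraction and dedup-by-body-text are assumptions.

```python
import re
from typing import Iterator

# Matches one complete card element; DOTALL lets card bodies span lines.
CARD_RE = re.compile(r"<card>(.*?)</card>", re.DOTALL)

def stream_cards(chunks: Iterator[str]) -> Iterator[str]:
    """Yield each complete <card>...</card> body as soon as it arrives.

    Partial trailing cards stay in the buffer until the closing tag
    shows up in a later chunk; duplicate card bodies are dropped.
    """
    buffer = ""
    seen: set[str] = set()
    for chunk in chunks:
        buffer += chunk
        last_end = 0
        for match in CARD_RE.finditer(buffer):
            body = match.group(1).strip()
            if body not in seen:   # de-duplicate across the batch
                seen.add(body)
                yield body
            last_end = match.end()
        buffer = buffer[last_end:]  # keep any incomplete tail

# Simulated LLM token stream: cards arrive split across chunk boundaries,
# and the third card is a duplicate of the first.
chunks = [
    "<card>What is 2+2? | 4</ca",
    "rd><card>Capital of France?",
    " | Paris</card><card>What is 2+2? | 4</card>",
]
for card in stream_cards(iter(chunks)):
    print(card)
```

The key property is that the first card is usable before the batch finishes generating, which is exactly what a mid-stream JSON payload cannot give you.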