Who needs Git when you have 1M context windows? (www.alexmolas.com)

🤖 AI Summary
A RevenueCat engineer lost an uncommitted set of notebooks and scripts that had produced a +5% improvement in an LTV model, then recovered the original file by asking a long-context LLM to retrieve it. Using Cursor to query gemini-2.5-pro, a model with a 1 million token context window, the engineer asked for the exact ml_ltv_training.py they had first sent; the model returned the original script, restoring the lost uplift and saving days of rework.

The anecdote highlights a practical shift: ultra-long-context models can retain and retrieve large chunks of code and experiment artifacts across an entire project conversation, effectively acting as an informal backup and stateful collaborator. For ML engineers, that means easier recovery of exploratory work, debugging context, and experiment provenance when journaling happens in chat or in notebooks linked to an LLM.

It also comes with important caveats. Long-context recall is not a substitute for git, CI, and strict versioning, and relying on model memory risks reproducibility, auditability, and data governance problems (sensitive data retention, compliance). The takeaway: long-context LLMs are powerful productivity tools for experiment retrieval and collaboration, but teams must pair them with disciplined version control and security practices.
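As a rough sketch of the retrieval pattern the summary describes: replay a saved copy of the conversation to a long-context model and ask it to reproduce a file verbatim. This is a hypothetical reconstruction, not the engineer's actual workflow; the transcript filename, output path, and prompt wording are invented, and it assumes the google-genai Python SDK with a GEMINI_API_KEY set in the environment.

```python
# Hypothetical sketch of file recovery via a long-context model.
# Assumes the full chat history (including the lost script) was
# exported to "chat_transcript.txt"; that filename is invented here.
from pathlib import Path

from google import genai  # pip install google-genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Replay the saved conversation, then ask for the artifact verbatim.
transcript = Path("chat_transcript.txt").read_text()

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=[
        transcript,
        "Return the exact contents of the first ml_ltv_training.py "
        "I sent in this conversation, verbatim, with no commentary.",
    ],
)

# Write the recovered script to disk for inspection. Model recall is
# best-effort, so diff it against any surviving copies before trusting it.
Path("ml_ltv_training_recovered.py").write_text(response.text)
```

Even when this works, the sensible next step is to commit the recovered file immediately, since nothing guarantees the model reproduced it byte-for-byte.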