Should I Multi-Task? (maryrosecook.com)

🤖 AI Summary
Waiting on an LLM to generate code often tempts you to switch to other work, but that multitasking usually backfires. Context for the original task decays while you're away, forcing a costly reload; a second high-cognitive-load task seizes your attention, so you can't think coherently about either; and much of the "waiting" time is actually occupied by diagramming, reading, and composing prompts, so parallelizing captures only a small slice of your workflow.

That said, selective parallelism can be powerful. Good candidates are long-running or verifiable generations (e.g., auto-implementing a spec, or agents that can self-test and confirm correctness), background research related to the same task, and low-cost fire-and-forget experiments (such as sending an agent a bug report or a prototype idea). Practically, this suggests delegating asynchronous, automatable work to agents or queued pipelines while keeping high-focus design and review work uninterrupted.

The takeaway for AI/ML practitioners: prefer targeted background delegation (self-verifying agents, long jobs, low-risk probes) over juggling many active tasks, and preserve context and cognitive continuity to get more reliable, higher-quality outcomes.