🤖 AI Summary
An opinion piece argues that "boring technology" like LaTeX remains a sensible choice in the age of large language models: because LaTeX has decades of written material (tutorials, examples, Q&A), LLMs are exceptionally good at reasoning about it, which lowers the learning curve and automates tedious work. The author contends that LLMs make many of the touted advantages of newer tools such as Typst less compelling: LLMs can find packages and symbols, translate images of equations into LaTeX, diagnose compilation errors, and generate boilerplate (tables, charts, TikZ diagrams), so LaTeX's accumulated ecosystem and stability still win for many use cases.
Technically, this matters for tooling and workflow decisions: editor integrations (Overleaf AI helpers, VS Code plugins) let LLMs assist inline or as standalone aids; for complex or repetitive graphics the author often has an LLM write a Python generator that emits TikZ (see the sketch below); and for non-trivial automation the author prefers real programming languages over LaTeX macros. The piece also cautions that newer systems suffer from sparser documentation (so LLMs know them less well) and that design choices such as departing from LaTeX math syntax risk fragmenting tooling, so sticking with battle-tested technology plus LLM augmentation can be a pragmatic path for researchers and engineers.
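To make the Python-generates-TikZ workflow concrete, here is a minimal sketch of that kind of generator. The specific figure (a labeled grid of nodes), the file name, and the dimensions are illustrative assumptions, not taken from the article; the point is that repetitive geometry is expressed as an ordinary loop rather than as LaTeX macros.

```python
# Hypothetical example of a Python script that emits TikZ source.
# It writes a repetitive figure (a labeled grid of boxes) to a .tex
# file, which a LaTeX document can then pull in with \input.
# Grid size, spacing, and file name are all illustrative.

ROWS, COLS = 3, 4  # trivial to change here, tedious to edit by hand in TikZ

def emit_grid(rows: int, cols: int) -> str:
    lines = [r"\begin{tikzpicture}"]
    for r in range(rows):
        for c in range(cols):
            # Place each node on a 1.5cm lattice and label it by its index.
            lines.append(
                rf"  \node[draw, minimum size=1cm] at ({1.5 * c},{-1.5 * r}) "
                rf"{{${r},{c}$}};"
            )
    lines.append(r"\end{tikzpicture}")
    return "\n".join(lines)

if __name__ == "__main__":
    with open("grid.tex", "w") as f:
        f.write(emit_grid(ROWS, COLS))
    # In the LaTeX document: \input{grid.tex} inside a figure environment.
```

This illustrates the trade-off the author points to: changing the picture means editing two Python constants and rerunning the script, instead of rewriting TikZ coordinates or debugging a macro.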