🤖 AI Summary
Researchers and practitioners are arguing that large language models are making Language-Oriented Programming (LOP) and domain-specific languages (DSLs) practical again: rather than LLMs being useful only with existing languages, people like Simon Willison and Richard Feldman suggest LLMs lower the cost and friction of designing new ones, and projects such as Maxime Chevalier-Boisvert's Plush show that LLMs can port examples and scaffold implementations. The core idea is "middle-out" LOP: design a compact, domain-focused language, implement the system in that language, and build a compiler/interpreter, while using LLMs to generate the language runtime, docs, examples, and iterative tooling. That turns a DSL into a shared medium between domain experts, developers, and models.
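To make "middle-out" LOP concrete, here is a minimal sketch (not from the article) of the kind of compact, domain-focused language the approach targets: a hypothetical alerting DSL small enough that an LLM could plausibly generate the interpreter, docs, and examples in one pass. The rule syntax and the `parse_rule`/`evaluate` helpers are illustrative assumptions, not the article's design.

```python
# Hypothetical "alerts" DSL: each line is `metric comparator threshold -> action`.
import operator

OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq}

def parse_rule(line):
    """Parse one DSL line, e.g. 'cpu > 90 -> page-oncall'."""
    cond, action = (part.strip() for part in line.split("->"))
    metric, op, threshold = cond.split()
    return metric, OPS[op], float(threshold), action

def evaluate(program, metrics):
    """Interpret the whole program against a dict of current metric values."""
    fired = []
    for line in program.strip().splitlines():
        metric, op, threshold, action = parse_rule(line)
        if op(metrics[metric], threshold):
            fired.append(action)
    return fired

program = """
cpu > 90 -> page-oncall
disk > 80 -> open-ticket
errors > 5 -> page-oncall
"""

print(evaluate(program, {"cpu": 95.0, "disk": 40.0, "errors": 7.0}))
# -> ['page-oncall', 'page-oncall']
```

A language this small is a plausible shared medium: domain experts can read and write the rules directly, while the interpreter, a REPL, and documentation are cheap enough for an LLM to regenerate as the language evolves.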
Technically, this shift matters because LLM tokenizers treat source text differently from programming-language parsers: token counts depend on vocabulary, formatting, and symbol usage, so language design should optimize for token efficiency (e.g., Python's whitespace and words often tokenize better than symbol-heavy JavaScript; APL's Unicode glyphs can blow up token counts, while Q or token-oriented formats like TOON can be far more compact). The implications: tiny, token-efficient DSLs can fit larger semantic contexts, let LLMs reliably generate and evolve compilers/VMs, and reduce tooling overhead, since LLMs can auto-generate docs, REPLs, and syntax helpers. Caveats remain, notably training-set bias and maintenance trade-offs, but overall LLMs reshuffle the economics of language design, making rapid prototyping and custom DSLs far more attainable.
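A quick way to see why token efficiency matters is to count tokens for semantically equivalent snippets with an off-the-shelf tokenizer. The sketch below assumes the `tiktoken` package and the `cl100k_base` vocabulary; the specific snippets (and whatever counts you get) are illustrative, not figures from the article.

```python
# Compare token counts for equivalent logic written in different surface syntaxes.
# Assumes `pip install tiktoken`; cl100k_base is one common LLM vocabulary.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

snippets = {
    "python": "def total(xs):\n    return sum(x * x for x in xs)\n",
    "js":     "const total = (xs) => xs.reduce((a, x) => a + x * x, 0);\n",
    "apl":    "total ← +/⍵×⍵\n",  # Unicode glyphs often split into many tokens
}

for name, src in snippets.items():
    tokens = enc.encode(src)
    print(f"{name:>6}: {len(tokens):3d} tokens for {len(src):3d} chars")
```

Running this kind of comparison against a candidate DSL's syntax is one way to fold token efficiency into the language-design loop the article describes.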