LLMs are breaking 20 year old system design (zknill.io)

🤖 AI Summary
Large language models (LLMs) are challenging a stateful-architecture pattern that has dominated application design for two decades. Traditionally, state lives in databases and compute is stateless, so systems scale by growing the database vertically and the application servers horizontally. LLMs complicate this model: they introduce long-running asynchronous tasks, stateful compute that accumulates context, and bi-directional user interactions. Current solutions such as durable execution frameworks address some of these challenges but do not resolve the routing problem between processes, so clients often fall back on inefficient polling. Because LLM calls are non-deterministic and expensive, these architectural limitations become pronounced. The author suggests a shift toward pub/sub channels that enable bi-directional communication without losing state, improving the resilience and responsiveness of interactions. This points to a new architectural approach that combines durable execution with effective routing, moving beyond the one-size-fits-all models of the past to better support agentic applications.
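The polling-versus-pub/sub contrast can be sketched in a few lines. This is a minimal illustration, not code from the article: the names (`run_agent`, the in-process `asyncio.Queue` standing in for a real pub/sub channel such as Redis or NATS) are assumptions for demonstration. The agent publishes progress events as it works; the client reacts to each event as it arrives instead of repeatedly polling a task-status store.

```python
import asyncio

async def run_agent(events: asyncio.Queue) -> None:
    # Simulate a long-running LLM task that accumulates context and
    # emits intermediate results as it goes (hypothetical steps).
    for step in ("planning", "calling tool", "done"):
        await asyncio.sleep(0)   # stand-in for real async work
        await events.put(step)   # publish an event instead of writing a status row
    await events.put(None)       # sentinel: the stream is finished

async def client() -> list[str]:
    # The queue stands in for a per-conversation pub/sub channel.
    events: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(run_agent(events))
    received = []
    while (event := await events.get()) is not None:
        received.append(event)   # react immediately; no poll interval, no missed updates
    return received

print(asyncio.run(client()))     # prints ['planning', 'calling tool', 'done']
```

In a real deployment the queue would be a durable, addressable channel keyed by conversation or task ID, so any server replica can publish to or subscribe from it, which is what decouples routing from where the stateful compute happens to run.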