🤖 AI Summary
A growing critique argues that traditional semantic layers, designed to centralize definitions and enforce consistency for BI dashboards, become a liability when applied to AI reasoning. BI semantics excel at reporting “what happened” by exposing cleaned, aggregated metrics, but that rigidity constrains models that need to generalize, hypothesize, and perform multi-step inference. The piece illustrates this with a “perfectly wrong” AI: it can repeat that sales dropped 8%, but it cannot probe the causes because the semantic layer only exposes top-level aggregates and hardcoded schema joins. Over time, adding more rigid joins and filters makes the schema heavy and the model’s reasoning brittle.
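To make that failure mode concrete, here is a minimal sketch (hypothetical names, not taken from the article) of the kind of rigid metric definition such a layer exposes: the model can read the finished aggregate, but the fixed joins, grain, and filters leave no path to the row-level detail needed to explain why the number moved.

```python
# Hypothetical, hard-coded semantic-layer metric definition (illustrative only).
# The AI sees only the finished aggregate; the joins and grain are fixed,
# so "why did sales drop 8%?" has no answerable path through this schema.
SALES_METRIC = {
    "name": "total_sales",
    "aggregation": "SUM(order_amount)",
    "grain": "month",  # pre-aggregated: no row-level detail to drill into
    "joins": [
        "orders JOIN customers ON orders.customer_id = customers.id",
    ],
    "filters": ["orders.status = 'completed'"],  # baked in, not negotiable
}

# What the model can report: the top-level number.
# What it cannot do: regroup by region, product, or cohort to explain the drop.
```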
The recommended shift is not to discard semantic layers but to repurpose them as lightweight scaffolds that guide rather than dictate reasoning. Practically, that means pruning hardcoded rules and joins, using retrieval to surface the relevant metrics or models from BI systems, and restructuring those artifacts in the prompt alongside natural-language instructions so the model can explore hypotheses while staying grounded; a sketch of this pattern follows below. Because every token of context matters, this “right altitude” approach, echoed in Anthropic’s work on context engineering, seeks a balance between brittleness and drift, preserving shared definitions while enabling flexible, multi-step AI analysis.
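A minimal sketch of that scaffold pattern, under stated assumptions: the metric store, `retrieve_metrics`, and `build_prompt` are illustrative names, not an API from the article or from Anthropic. Relevant definitions are retrieved and placed in the prompt as shared vocabulary the model can reason over, rather than enforced as hard joins.

```python
from dataclasses import dataclass


@dataclass
class MetricDef:
    name: str
    description: str
    sql: str  # canonical definition kept from the BI layer


def lexical_overlap(a: str, b: str) -> float:
    """Crude lexical relevance score, standing in for a real embedding search."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)


def retrieve_metrics(question: str, store: list[MetricDef], k: int = 3) -> list[MetricDef]:
    """Hypothetical retrieval step: surface only the metric definitions
    relevant to the analyst's question instead of the whole schema."""
    ranked = sorted(store, key=lambda m: -lexical_overlap(question, m.description))
    return ranked[:k]


def build_prompt(question: str, metrics: list[MetricDef]) -> str:
    """Restructure the retrieved definitions as guidance, not constraints:
    the model may decompose, join, or re-aggregate beyond them."""
    context = "\n".join(
        f"- {m.name}: {m.description}\n  definition: {m.sql}" for m in metrics
    )
    return (
        "You are analyzing business data. The shared metric definitions below are\n"
        "ground truth for naming and calculation, but you may propose further\n"
        "breakdowns, joins, and hypotheses to explain changes.\n\n"
        f"Relevant metrics:\n{context}\n\nQuestion: {question}"
    )
```

A real implementation would swap the lexical scoring for embedding search against the BI catalog; the point of the sketch is that the semantic layer supplies shared definitions in context while the model stays free to plan its own multi-step analysis.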