"Causal" is like "error term" (statmodeling.stat.columbia.edu)

🤖 AI Summary
After a question at a talk about causal inference and spatial statistics, the author uses the classic agricultural “spillover” example—fertilizer spreading between neighboring plots arranged in rows and columns—to argue that once you can write a mechanistic model, the problem stops feeling like “causal inference” and becomes ordinary modeling. Practically, you would specify a parametric model for spillover as a function of distance that adjusts the effective dose each plot receives, then fit the model to estimate treatment effects.

The key point: when you have a model for individual-level mechanisms, you can directly infer individual effects instead of relying on black‑box causal estimands. This reframing matters for the AI/ML and stats communities because it clarifies when causal language is useful: “causal inference” typically denotes situations where we don’t model the underlying process (clinical trials, observational policy studies, survey experiments), whereas many applied problems that are fundamentally causal—dosing in pharmacology, climate reconstruction from proxies, item-response models—are treated as modeling tasks.

The author stresses that identification issues and selection bias still matter, and that causal inference is essentially about aggregating individual effects into averages; having a mechanistic model lets you avoid that aggregation step by estimating individual effects directly.
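The "effective dose" idea above can be sketched concretely. The following is a minimal illustration, not the author's code: plots on a grid get a binary fertilizer treatment, spillover decays exponentially with distance with a hypothetical range parameter `rho`, and the mechanistic model is fit by profiling `rho` over a grid with ordinary least squares for the remaining coefficients. All parameter names and values are invented for illustration.

```python
# Sketch (assumed, not from the post): spillover as an adjustment to the
# effective dose each plot receives, fit by grid search + least squares.
import numpy as np

rng = np.random.default_rng(0)

# 6x6 grid of plots, coordinates in plot units
side = 6
coords = np.array([(r, c) for r in range(side) for c in range(side)], float)
n = len(coords)
z = rng.integers(0, 2, n).astype(float)          # randomized treatment

# Pairwise distances between plot centers
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def effective_dose(z, d, rho):
    """Own dose plus exponentially decaying spillover from neighbors."""
    w = np.exp(-d / rho)
    np.fill_diagonal(w, 0.0)                     # exclude self-spillover
    return z + w @ z

# Simulate yields under a known mechanism (alpha=10, beta=2, rho=1.5)
true_rho = 1.5
y = 10.0 + 2.0 * effective_dose(z, d, true_rho) + rng.normal(0, 0.5, n)

# Fit: for each candidate rho, solve (alpha, beta) by least squares,
# then keep the rho with the smallest residual sum of squares.
best = None
for rho in np.linspace(0.5, 3.0, 26):
    X = np.column_stack([np.ones(n), effective_dose(z, d, rho)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = float(((y - X @ coef) ** 2).sum())
    if best is None or sse < best[0]:
        best = (sse, rho, coef)

sse, rho_hat, (alpha_hat, beta_hat) = best
print(f"rho_hat={rho_hat:.2f}  alpha={alpha_hat:.2f}  beta={beta_hat:.2f}")
```

Because the spillover mechanism is modeled explicitly, the fitted `beta_hat` is an individual-level dose effect rather than an average treatment effect under interference, which is the reframing the post argues for.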