Show HN: Ato – A thin ML layer. When results change, it tells you why (github.com)

🤖 AI Summary
Ato is a lightweight, installable layer for ML experiments that fingerprints config structure, function logic, and runtime outputs so you can answer "Why did this result change?" without adopting a full platform. It bundles three independent tools:

- ADict: structural hashing for configs. Hashes keys plus types, so it distinguishes architectural changes from mere value tweaks.
- Scope: priority-based config merging with explicit reasoning. Provides observable/trace/runtime_trace decorators to trace functions and runtime, manual visualization of merge order, lazy evaluation, and MultiScope namespace isolation to avoid key collisions.
- SQLTracker: local-first experiment tracking. Logs causality in SQLite for zero-setup auditability.

Together they reveal which config functions ran, how merge priorities resolved, and whether code or runtime behavior actually changed. Code fingerprinting uses SHA256 over function bytecode, so refactors and comments don't create false versions; runtime fingerprinting hashes actual outputs to catch silent failures or non-determinism. Ato intentionally complements existing stacks (Hydra/OmegaConf, argparse, MLflow/W&B, OpenMMLab) and requires no servers, dashboards, or migration. For practitioners this means faster root-cause debugging of divergent runs, clearer config causality across teams, and reliable detection of behavioral drift without locking into new orchestration, HPO, or dashboard tools.
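ADict's core idea as described, hashing a config's keys and types rather than its values, fits in a few lines. This is a minimal sketch of the mechanism, not Ato's actual API; `structure_hash` and `skeleton` are hypothetical names.

```python
import hashlib
import json

def structure_hash(config: dict) -> str:
    """Hash a config's structure (keys + value types), ignoring values.

    Configs with the same keys and types hash identically even if their
    values differ, so a changed hash signals an architectural change
    rather than a mere value tweak.
    """
    def skeleton(node):
        if isinstance(node, dict):
            # Recurse into dicts only; other containers collapse to a
            # type name for brevity in this sketch.
            return {k: skeleton(v) for k, v in sorted(node.items())}
        return type(node).__name__  # record the type, not the value
    blob = json.dumps(skeleton(config), sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

a = {"model": {"depth": 18, "width": 64}, "lr": 0.1}
b = {"model": {"depth": 50, "width": 128}, "lr": 0.3}      # value tweaks only
c = {"model": {"depth": 18, "blocks": [2, 2]}, "lr": 0.1}  # new key: structural change

assert structure_hash(a) == structure_hash(b)
assert structure_hash(a) != structure_hash(c)
```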
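Scope's priority-based merging with visible merge order can be imitated by recording which layer supplied each final key. A rough sketch under that interpretation; `merge_with_provenance` is a made-up helper, and Scope's real decorator-based API is richer (lazy evaluation, MultiScope isolation).

```python
def merge_with_provenance(*layers):
    """Merge config layers in priority order (last wins) while recording
    which layer supplied each final value, so the merge is explainable.
    """
    merged, provenance = {}, {}
    for name, layer in layers:
        for key, value in layer.items():
            merged[key] = value
            provenance[key] = name  # later (higher-priority) layers overwrite
    return merged, provenance

config, why = merge_with_provenance(
    ("defaults", {"lr": 0.1, "epochs": 90, "optimizer": "sgd"}),
    ("experiment", {"lr": 0.3}),
    ("cli", {"epochs": 10}),
)
for key, value in config.items():
    print(f"{key} = {value!r}  (set by {why[key]})")
# lr = 0.3  (set by experiment)
# epochs = 10  (set by cli)
# optimizer = 'sgd'  (set by defaults)
```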
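The two fingerprints described, SHA256 over function bytecode and a hash of actual outputs, can be demonstrated in plain Python. A minimal sketch assuming CPython's `__code__.co_code` as the bytecode source; Ato's implementation may normalize more than this.

```python
import hashlib
import pickle

def code_fingerprint(fn) -> str:
    """SHA256 over a function's compiled bytecode. Comments, docstrings,
    and whitespace are not compiled into co_code, so cosmetic edits keep
    the same fingerprint while logic changes produce a new one.
    """
    return hashlib.sha256(fn.__code__.co_code).hexdigest()[:12]

def output_fingerprint(value) -> str:
    """Hash the actual runtime output to catch silent behavioral drift
    (e.g. non-determinism) even when code and config are unchanged.
    """
    return hashlib.sha256(pickle.dumps(value)).hexdigest()[:12]

def scale(x):
    # Adding a comment here does NOT change code_fingerprint(scale).
    return x * 2

print(code_fingerprint(scale))
print(output_fingerprint(scale(21)))  # stable output -> stable hash
```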
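SQLTracker's local-first logging reduces, in spirit, to a single SQLite table linking the three fingerprints per run. The schema and table name below are illustrative assumptions, not Ato's actual layout.

```python
import sqlite3

# One row per run linking config, code, and output fingerprints; a plain
# local file means zero-setup auditability (no server, no dashboard).
conn = sqlite3.connect("experiments.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS runs (
        id INTEGER PRIMARY KEY,
        config_hash TEXT, code_hash TEXT, output_hash TEXT,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO runs (config_hash, code_hash, output_hash) VALUES (?, ?, ?)",
    ("a1b2c3", "d4e5f6", "0789ab"),  # placeholder fingerprints
)
conn.commit()

# Root-cause query: compare the last two runs to see which of config,
# code, or runtime output actually changed.
rows = conn.execute(
    "SELECT config_hash, code_hash, output_hash FROM runs ORDER BY id DESC LIMIT 2"
).fetchall()
print(rows)
```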