Things I Hate About SHAP as a Maintainer (mindfulmodeler.substack.com)

🤖 AI Summary
Tobias Pitters, a SHAP maintainer, published a candid breakdown of six maintenance pain points that are slowing development and hurting usability for one of the most popular model-explainability libraries:

- Performance: severe slowdowns for large datasets, much of it from implementation choices like Python loops and limited parallelism; a recent Cython change yielded only a ~5% KernelExplainer improvement.
- DeepExplainer breakage: TensorFlow's move to eager execution now forces per-layer backprop implementations, so support for LayerNorm, LSTM, and Attention has eroded.
- TreeExplainer fragility: the C-based implementation is very fast, but its age and memory bugs make changes risky; the team is considering a Rust rewrite and wants easier GPU installation/options.
- CI churn: frequent CI/test failures from upstream dependency changes; Pitters estimates ~30% of his time is spent fixing them.
- Plotting: brittle, legacy plotting code (including old JS) with poor test coverage.
- Other gaps: missing JAX support, weak type annotations, and no nightly builds or lazy-loading.

The post matters because SHAP underpins many XAI workflows: performance, layer support, cross-library compatibility, and packaging directly affect researchers and production users. Pitters' suggestions, among them rewriting risky C parts in Rust, adding GPU wheels/install extras, improving tests for plotting, and investing in CI resilience, form a practical roadmap for making SHAP more maintainable, performant, and extensible; he closes by inviting community contributions.
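One of the gaps Pitters lists is the lack of lazy-loading, which matters for a library whose import pulls in heavy optional dependencies. A minimal sketch of one common lazy-import pattern in Python is shown below; this is not SHAP's actual code, and `lazy_module` is a hypothetical helper name, demonstrated here with the stdlib `json` module standing in for a heavy dependency.

```python
import importlib
import types


def lazy_module(name: str) -> types.ModuleType:
    """Return a module proxy that defers the real import until
    the first attribute access (a common lazy-loading pattern)."""

    class _LazyModule(types.ModuleType):
        def __getattr__(self, attr):
            # Triggered only when `attr` is not yet in our namespace:
            # perform the real import now and cache its contents.
            module = importlib.import_module(name)
            self.__dict__.update(module.__dict__)
            return getattr(module, attr)

    return _LazyModule(name)


# Creating the proxy is cheap; nothing is imported yet.
json_lazy = lazy_module("json")

# First attribute access performs the actual import.
print(json_lazy.dumps({"a": 1}))
```

Libraries that adopt this pattern typically apply it in their top-level `__init__` (often via PEP 562 module-level `__getattr__`) so that importing the package stays fast while submodules load on demand.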