Show HN: Systems and algorithms for (machine-)learning Monopoly Deal (github.com)

🤖 AI Summary
A new open-source research platform implements a modified version of the card game Monopoly Deal as a testbed for sequential decision-making under imperfect information. The project bundles game logic, model training pipelines, experiment tracking, and a live web app (play at monopolydeal.ai) so researchers can run, reproduce, and evaluate game-theoretic and reinforcement-learning approaches in a single interactive system. It is designed for human–AI interaction, rapid prototyping, and benchmarking, making it useful both for algorithm development (CFR, RL) and for systems work (scaling, deployment, fault tolerance).

Technically, the stack is modular and production-minded: a FastAPI backend, a React/Next.js frontend, PostgreSQL for persistent game state (with DB-log-based recovery), containerized services (Docker) deployed to Google Cloud Run, and developer tooling for both local and Docker-based workflows.

The learning pipeline includes a counterfactual regret minimization (CFR) implementation, Ray for parallelized self-play rollouts, Weights & Biases for experiment tracking, Kubernetes jobs on GKE for training, GCS checkpointing, and configurable state abstractions to reduce game-tree complexity. The repo emphasizes reproducibility (seed and commit tracking), evaluation against diverse opponents, and extensibility, offering a practical platform for studying imperfect-information algorithms at scale.
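The summary does not show the project's CFR code, but the core step of any CFR variant is regret matching at each information set: actions with positive cumulative regret are played in proportion to that regret. A minimal, generic sketch of that step (not the repo's actual implementation; the function name and array shapes are illustrative):

```python
import numpy as np

def regret_matching(cumulative_regrets: np.ndarray) -> np.ndarray:
    """Turn cumulative regrets at one information set into a strategy.

    Actions with positive regret are weighted in proportion to that regret;
    if no action has positive regret, fall back to uniform play.
    """
    positive = np.maximum(cumulative_regrets, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full(len(cumulative_regrets), 1.0 / len(cumulative_regrets))

# Example: three actions with cumulative regrets [2.0, -1.0, 1.0]
# yield the strategy [2/3, 0, 1/3].
print(regret_matching(np.array([2.0, -1.0, 1.0])))
```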
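The Ray-based self-play mentioned above typically follows a simple fan-out/gather pattern: many rollout tasks run in parallel across a cluster and their results are aggregated for the learner. A hedged sketch of that pattern with a placeholder game in place of the project's Monopoly Deal engine (`self_play_rollout` and its return fields are illustrative, not the repo's API):

```python
import random
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
def self_play_rollout(seed: int) -> dict:
    """Play one seeded self-play game and return summary statistics.

    The game logic here is a stand-in; the real project would call into
    its game engine and current policy instead of a random placeholder.
    """
    rng = random.Random(seed)
    winner = rng.choice([0, 1])       # placeholder for an actual game outcome
    num_turns = rng.randint(20, 60)   # placeholder statistic
    return {"seed": seed, "winner": winner, "turns": num_turns}

# Fan out many rollouts across the cluster and gather the results.
futures = [self_play_rollout.remote(seed) for seed in range(128)]
results = ray.get(futures)
print(sum(r["turns"] for r in results) / len(results))
```

Seeding each rollout explicitly, as above, is one way to get the seed-level reproducibility the summary describes.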