🤖 AI Summary
Christopher B. Gregory shows how a combinatorial puzzle (the game Kami) can be framed as iterated vertex contractions on a colored graph and accelerated by both classical heuristics and a learned Graph Neural Network (GNN) value function. Regions in the puzzle map to vertices; recoloring a region corresponds to contracting that vertex with same-colored neighbors, and the goal is a sequence of contractions that reduces the graph to one vertex within a move limit. Naïve DFS/BFS explodes exponentially, so Gregory introduces greedy heuristics (vertex degree, neighbor-color frequency, and a centrality score based on summed BFS distances) and, importantly, a locality-based pruning rule: consider only candidates in the 2-hop “Markov blanket” of the last-contracted vertex. That locality constraint prunes equivalent search subtrees (since distant contractions commute) and empirically speeds search, especially on planar graphs.
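The following is a minimal sketch (not the author's code) of the two core ideas above: the recolor-and-contract move and the 2-hop locality pruning of candidate moves. It assumes a `networkx` graph with a per-node `color` attribute purely for illustration.

```python
import networkx as nx


def recolor_and_contract(G, v, new_color):
    """Recolor region v, then merge it with every same-colored neighbor."""
    H = G.copy()
    H.nodes[v]["color"] = new_color
    merged = True
    while merged:
        merged = False
        for u in list(H.neighbors(v)):
            if H.nodes[u]["color"] == new_color:
                # Contract u into v; v survives and keeps its (new) color.
                H = nx.contracted_nodes(H, v, u, self_loops=False)
                merged = True
    return H


def centrality_score(G, v):
    """Summed BFS distances from v; smaller means more central."""
    return sum(nx.single_source_shortest_path_length(G, v).values())


def candidate_moves(G, last_vertex=None):
    """Enumerate (vertex, color) moves, restricted to the 2-hop
    neighborhood of the previously contracted vertex when one exists."""
    if last_vertex is None or last_vertex not in G:
        vertices = list(G.nodes)
    else:
        # 2-hop "Markov blanket": BFS out to distance 2 from the last move.
        vertices = list(
            nx.single_source_shortest_path_length(G, last_vertex, cutoff=2)
        )
    moves = []
    for v in vertices:
        # Only recoloring to a neighbor's color can shrink the graph.
        neighbor_colors = {G.nodes[u]["color"] for u in G.neighbors(v)}
        for c in neighbor_colors - {G.nodes[v]["color"]}:
            moves.append((v, c))
    return moves
```

The restriction to the 2-hop neighborhood is what makes distant, commuting contractions collapse into a single search subtree instead of being explored in every order.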
To go beyond hand-crafted heuristics, Gregory trains a value-based model (GCNConv in PyTorch Geometric) that embeds graphs with one-hot color node features, ReLU-activated GCN layers, dropout, global max pooling, and a final linear head that predicts the minimal remaining contractions (trained with MSE). During tree search the learned value ranks candidate next states so the solver explores higher-value trajectories first. Empirically, GCNConv + global max pooling outperformed GATConv and SAGEConv in this setup. The work illustrates a practical hybrid: structural search + GNN-guided value estimates for combinatorial graph problems, offering a promising route to scale and generalize heuristics for contraction-style tasks.
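A minimal sketch of such a value network, assuming PyTorch Geometric; layer widths, depth, and the dropout rate here are illustrative assumptions, not the author's exact configuration.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_max_pool


class ContractionValueNet(torch.nn.Module):
    """Regresses the minimal number of contractions remaining for a graph."""

    def __init__(self, num_colors, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(num_colors, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        # x: [num_nodes, num_colors] one-hot color features
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        x = F.relu(self.conv2(x, edge_index))
        x = global_max_pool(x, batch)      # one embedding per graph
        return self.head(x).squeeze(-1)    # predicted contractions remaining


# Hypothetical training step with MSE against known optimal move counts:
# model = ContractionValueNet(num_colors=4)
# pred = model(batch.x, batch.edge_index, batch.batch)
# loss = F.mse_loss(pred, batch.y.float())
```

During search, each candidate next state would be encoded the same way and candidates ordered by ascending predicted remaining contractions, so the solver expands the most promising trajectories first.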