Markov Random Field (en.wikipedia.org)

🤖 AI Summary
A Markov random field (MRF), also called a Markov network or undirected graphical model, is a way to represent a joint distribution over many random variables using an undirected graph whose edges encode local conditional independencies. Originating in statistical physics (the Ising and Sherrington–Kirkpatrick models), MRFs are equivalent to Gibbs distributions when the joint density is positive: the probability of a configuration is proportional to exp(−energy), factorized into nonnegative clique potentials and normalized by a partition function Z. The model admits three Markov properties (pairwise, local, global) that coincide for positive distributions, and can be written as an exponential-family log-linear model where clique features and weights capture interactions.

MRFs matter to AI/ML because they provide a principled framework for structured prediction and spatial/relational modelling — widely used in image processing (segmentation, denoising, texture synthesis), computational biology, and more.

Key technical trade-offs: learning and likelihood evaluation require inference, but exact inference is #P-complete in general, so practitioners rely on approximations (MCMC, loopy belief propagation, variational methods) or restrict structure (trees, chordal graphs, associative networks) for tractability. Conditional Random Fields (CRFs) are the discriminative counterpart that condition on observed inputs. Understanding clique factorization, the role of the partition function, and the classes of models with efficient MAP or marginal algorithms is essential when choosing MRFs for real-world structured problems.
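The clique factorization, partition function, and Markov property mentioned above can be made concrete on a toy model. The sketch below (all names, the coupling `J`, and the 3-node chain are illustrative assumptions, not from the article) builds the Gibbs distribution of a tiny Ising chain x1 — x2 — x3 by brute-force enumeration, and checks that the global Markov property holds: x1 is independent of x3 given the separating node x2.

```python
import itertools
import math

# Toy 3-node Ising chain x1 - x2 - x3, spins in {-1, +1}.
# Energy is a sum over pairwise cliques: E(x) = -J * (x1*x2 + x2*x3).
J = 0.8  # illustrative coupling strength

def energy(x):
    return -J * (x[0] * x[1] + x[1] * x[2])

states = list(itertools.product([-1, 1], repeat=3))

# Gibbs distribution: P(x) = exp(-E(x)) / Z, with partition function Z.
Z = sum(math.exp(-energy(s)) for s in states)
prob = {s: math.exp(-energy(s)) / Z for s in states}
assert abs(sum(prob.values()) - 1.0) < 1e-12  # Z normalizes the distribution

def cond(event, given):
    """P(event | given) by summing the joint; both are lists of (index, value)."""
    num = sum(p for s, p in prob.items()
              if all(s[i] == v for i, v in event + given))
    den = sum(p for s, p in prob.items()
              if all(s[i] == v for i, v in given))
    return num / den

# Global Markov property on the chain: x1 ⟂ x3 | x2, because x2
# separates x1 from x3 in the graph.
p_both = cond([(0, 1)], [(1, 1), (2, -1)])  # P(x1=+1 | x2=+1, x3=-1)
p_mid = cond([(0, 1)], [(1, 1)])            # P(x1=+1 | x2=+1)
assert abs(p_both - p_mid) < 1e-12
```

Brute-force enumeration works here only because there are 2³ = 8 configurations; the exponential growth of this sum is exactly why exact inference is intractable in general and the approximations listed below are needed.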