# DIMACS Focus on Discrete Probability Seminar

## Title:

Exact Sampling with Markov Chains

## Speaker:

- Dave Wilson
- UC Berkeley

## Place:

- CoRE Building, Room 431
- Busch Campus, Rutgers University

## Time:

- 3:30 - 4:30 PM
- Thursday, February 27, 1997

## Abstract:

For many applications it is useful to sample from a finite set of
objects in accordance with some particular distribution. One approach
is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose
stationary distribution is the desired distribution on this set; after
the Markov chain has run for M steps, with M sufficiently large, the
distribution governing the state of the chain approximates the desired
distribution. Unfortunately, it can be difficult to determine how
large M needs to be, and the bounds that can be proven are frequently
unduly pessimistic. I will describe the recent algorithms that Jim
Propp and I developed to circumvent this problem. These algorithms
determine on their own when to stop, and then output samples in exact
accordance with the desired distribution. Our techniques,
coupling-from-the-past and cycle-popping, may be applied to any
(finite irreducible) Markov chain, and have particularly efficient
implementations when the Markov chain is well structured in a suitable
sense. Applications include sampling from the Gibbs distributions
associated with various statistical mechanics models (including Ising,
random-cluster, ice, and dimer), and an algorithm for sampling random
spanning trees of a graph that is just as simple as, but more efficient
and more general than, the well-known Broder/Aldous algorithm. I will
conclude by describing some recent work by others that extends the
class of Markov chains for which coupling-from-the-past is efficient,
and highlight some open problems.
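As an illustration of the coupling-from-the-past idea (a sketch only, not the speaker's implementation), the following Python fragment runs a grand coupling of a small Markov chain from times -1, -2, -4, ... into the past, reusing the same randomness for the most recent steps, until all starting states coalesce; the common value at time 0 is then an exact sample from the stationary distribution. The names `cftp_sample` and `walk` are invented for this example.

```python
import random

def cftp_sample(states, update, seed=0):
    """Coupling-from-the-past on a finite state space.

    `update(state, u)` applies one deterministic transition driven by a
    random number u in [0, 1); the SAME u drives every copy of the chain
    (a grand coupling).  We simulate from time -T for T = 1, 2, 4, ...,
    reusing the stored randomness for the steps nearest time 0, until all
    starting states coalesce; the common value is an exact sample.
    """
    rng = random.Random(seed)
    randomness = []     # randomness[k] drives the step from time -(k+1) to -k
    T = 1
    while True:
        while len(randomness) < T:
            randomness.append(rng.random())
        # Run every state forward from time -T to 0 with the same randomness.
        current = {s: s for s in states}
        for k in reversed(range(T)):
            u = randomness[k]
            current = {s: update(x, u) for s, x in current.items()}
        values = set(current.values())
        if len(values) == 1:    # all trajectories have coalesced
            return values.pop()
        T *= 2                  # not yet: look further into the past

# Example: lazy random walk on {0, 1, 2, 3} with reflecting boundaries,
# whose stationary distribution is uniform.
def walk(x, u):
    return min(x + 1, 3) if u < 0.5 else max(x - 1, 0)

sample = cftp_sample(range(4), walk)
```

Because `walk` is monotone (it preserves the order of states under a common u), tracking the extreme states would suffice; this is the "well structured" case the abstract alludes to, where coupling-from-the-past remains efficient even on huge state spaces.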

Joint work with Jim Propp
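The spanning-tree sampler mentioned in the abstract is based on loop-erased random walks; a minimal sketch of the cycle-popping idea (with invented names, not the speaker's code) is below. Each vertex walks randomly until it hits the growing tree, and recording only the most recent exit edge from each vertex erases the loops.

```python
import random

def wilson_spanning_tree(adj, root=0, seed=0):
    """Sample a uniformly random spanning tree via loop-erased random
    walks (the cycle-popping method).  `adj[v]` lists the neighbours of
    vertex v; returns parent pointers oriented toward `root`."""
    rng = random.Random(seed)
    n = len(adj)
    in_tree = [False] * n
    parent = [None] * n
    in_tree[root] = True
    for start in range(n):
        # Random walk from `start` until the tree is hit.  Overwriting
        # parent[v] with the latest exit edge pops every cycle the walk makes.
        v = start
        while not in_tree[v]:
            parent[v] = rng.choice(adj[v])
            v = parent[v]
        # Retrace the loop-erased path and attach it to the tree.
        v = start
        while not in_tree[v]:
            in_tree[v] = True
            v = parent[v]
    return parent

# Usage: a 4-cycle 0-1-2-3-0; the result is a random spanning tree of it.
tree = wilson_spanning_tree([[1, 3], [0, 2], [1, 3], [2, 0]])
```

Unlike the Broder/Aldous walk, which must cover the whole graph, this walk only runs until it reaches the current tree, and it applies to weighted and directed graphs as well.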

dimacs-www@dimacs.rutgers.edu
Document last modified on February 14, 1997