## Section: Research Program

### Perfect Simulation

Simulation approaches can be used to efficiently estimate the stationary behavior of Markov chains by providing independent samples distributed according to their stationary distribution, even when it is impossible to compute this distribution numerically.

Classical Markov chain Monte Carlo (MCMC) simulation techniques suffer from two main problems:

- The convergence to the stationary distribution can be very slow and is, in general, difficult to estimate;

- Even with an effective convergence criterion, the sample obtained after any finite number of iterations is biased.

To overcome these issues, Propp and Wilson [56] introduced a perfect sampling algorithm (PSA) that has since been extended and applied in various contexts, including statistical physics [47], stochastic geometry [52], theoretical computer science [33], and communication networks [30], [46] (see also the bibliography annotated by David B. Wilson at http://dimacs.rutgers.edu/~dbwilson/exact.html/).

Perfect sampling uses coupling arguments to produce an unbiased sample from the stationary distribution of an ergodic Markov chain on a finite state space $\mathcal{X}$. Assume the chain is given by an update function $\Phi$ and an i.i.d. sequence of innovations ${\left({U}_{n}\right)}_{n\in \mathbb{Z}}$, so that

$$X_{n+1} = \Phi(X_n, U_{n+1}). \tag{1}$$
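As a toy illustration of such an update-function representation, consider a hypothetical chain on $\{0,\dots,3\}$ (a lazy reflected random walk); the function `phi` below is an assumed example, not taken from the cited works:

```python
import random

# Hypothetical update function for a chain on {0, 1, 2, 3}:
# X_{n+1} = phi(X_n, U_{n+1}), with U_n i.i.d. uniform on [0, 1).
N = 3

def phi(x, u):
    if u < 0.4:
        return min(x + 1, N)   # step up (reflected at N)
    if u < 0.8:
        return max(x - 1, 0)   # step down (reflected at 0)
    return x                   # stay put

# Forward simulation of the chain from X_0 = 0.
rng = random.Random(42)
x = 0
for _ in range(10):
    x = phi(x, rng.random())
```

Forward simulation like this is exactly what suffers from the bias problem above: after any finite number of steps, `x` is not distributed according to the stationary distribution.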

The algorithm is based on a backward coupling scheme: it computes the trajectories from every $x\in \mathcal{X}$, starting at some time $t=-T$ in the past and running until time $t=0$, using the same innovations for all trajectories. If the final state is the same for all trajectories, i.e. $\left|\{\Phi (x,U_{-T+1},\dots ,U_{0}) : x\in \mathcal{X}\}\right|=1$, where $\Phi (x,U_{-T+1},\dots ,U_{0}):=\Phi (\Phi (x,U_{-T+1}),U_{-T+2},\dots ,U_{0})$ is defined by induction on $T$, then we say that the chain has globally coupled, and the final state is distributed according to the stationary distribution of the Markov chain. Otherwise, the simulations are restarted further in the past.

Any ergodic Markov chain on a finite state space admits a representation of type (1) that couples in finite time with probability 1, so Propp and Wilson's PSA is "perfect" in the sense that it provides an *unbiased* sample in *finite time*. Furthermore, the stopping criterion is given by the coupling-from-the-past scheme itself, and explicit bounds on the coupling time are not needed for the validity of the algorithm.

However, from the computational side, PSA is efficient only under monotonicity assumptions, which allow the coupling-from-the-past procedure to track trajectories from the extremal initial conditions only, rather than from every state. Our goal is to propose new algorithms that remove this restriction by exploiting semantic and geometric properties of the event space and the state space.
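The monotone variant alluded to above can be sketched as follows, again on an assumed toy chain (a lazy reflected random walk on $\{0,\dots,7\}$). Because the update function is monotone in $x$ for every fixed innovation $u$, every trajectory stays sandwiched between the ones started from the extremal states $0$ and $N$, so only those two need to be simulated:

```python
import random

N = 7  # toy totally ordered state space {0, ..., N}

def phi(x, u):
    """Hypothetical monotone update: x <= y implies phi(x,u) <= phi(y,u)."""
    if u < 0.4:
        return min(x + 1, N)
    if u < 0.8:
        return max(x - 1, 0)
    return x

def monotone_cftp(rng):
    """Coupling from the past tracking only the extremal trajectories.

    By monotonicity, once the trajectories from 0 and N meet, every
    trajectory in between has coalesced to the same state.
    """
    innovations = []  # innovations[k] holds U_{-k}, reused across attempts
    T = 1
    while True:
        while len(innovations) < T:
            innovations.append(rng.random())
        lo, hi = 0, N                      # extremal initial conditions
        for t in range(T, 0, -1):          # apply U_{-T+1}, ..., U_0 in order
            u = innovations[t - 1]
            lo, hi = phi(lo, u), phi(hi, u)
        if lo == hi:
            return lo                      # unbiased stationary sample
        T *= 2

rng = random.Random(1)
samples = [monotone_cftp(rng) for _ in range(5000)]
```

The cost per step drops from $|\mathcal{X}|$ trajectory updates to two, which is precisely why PSA is practical for monotone chains and why relaxing the monotonicity requirement is of interest.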