## Section: Scientific Foundations

### Interacting Monte Carlo methods and particle approximation of Feynman–Kac distributions

Monte Carlo methods are numerical methods that are widely used
in situations where
(i) a stochastic (usually Markovian) model is given for some underlying
process, and (ii) some quantity of interest should be evaluated, which
can be expressed as the expected value of a functional of the
process trajectory, including as an important special case the
probability that a given event has occurred.
Numerous examples can be found, e.g. in financial engineering (pricing of options and derivative
securities) [50],
in performance evaluation of communication networks (probability of buffer
overflow), in statistics of hidden Markov models (state estimation,
evaluation of contrast and score functions), etc.
Very often in practice, no analytical expression is available for
the quantity of interest, but it is possible to simulate trajectories
of the underlying process. The idea behind Monte Carlo methods is
to generate independent trajectories of this process
or of an alternate instrumental process,
and to build an approximation (estimator) of the quantity of interest
in terms of the weighted empirical probability distribution
associated with the resulting independent sample.
By the law of large numbers, this estimator converges
as the sample size $N$ goes to infinity, at rate $1/\sqrt{N}$,
and the asymptotic variance can be estimated using an appropriate
central limit theorem.
To reduce the variance of the estimator, many variance
reduction techniques have been proposed.
Still, running independent Monte Carlo simulations can lead to
very poor results, because trajectories are generated *blindly*,
and only afterwards are the corresponding weights evaluated.
Some of the weights can happen to be negligible, in which case the
corresponding trajectories are not going to contribute to the estimator,
i.e. computing power has been wasted.
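As a minimal illustration of this weight-degeneracy problem (not from the source; the Gaussian tail event and all parameter names are chosen here purely for the sketch), the following crude Monte Carlo estimator of a rare-event probability wastes almost all of its samples, since the indicator weight is zero for most trajectories:

```python
import math
import random


def crude_monte_carlo(n_samples, threshold=3.0, seed=0):
    """Estimate P(X > threshold) for X ~ N(0, 1) by crude Monte Carlo.

    Each 'trajectory' is a single draw; its weight is the indicator of
    the event, which is zero for most samples when the event is rare,
    so those samples contribute nothing to the estimator.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples) if rng.gauss(0.0, 1.0) > threshold)
    p_hat = hits / n_samples
    # Asymptotic 95% confidence half-width from the CLT:
    # 1.96 * sqrt(p (1 - p) / N), clipped away from zero for display.
    half_width = 1.96 * math.sqrt(max(p_hat * (1 - p_hat), 1e-12) / n_samples)
    return p_hat, half_width
```

For a threshold of 3.0 the true probability is about 0.00135, so well over 99% of the generated samples carry zero weight; this is precisely the waste of computing power that the selection mechanism of interacting Monte Carlo methods is designed to avoid.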

A recent and major breakthrough
has been the introduction of interacting Monte Carlo methods,
also known as sequential Monte Carlo (SMC) methods,
in which a whole (possibly weighted) sample,
called a *system of particles*, is propagated in time, where
the particles

- *explore* the state space under the effect of a *mutation* mechanism, which mimics the evolution of the underlying process, and are

- *replicated* or *terminated* under the effect of a *selection* mechanism, which automatically concentrates the particles, i.e. the available computing power, into regions of interest of the state space.

In full generality, the underlying process is a discrete–time Markov chain, whose state space can be finite, continuous, hybrid (continuous / discrete), graphical, constrained, time–varying, pathwise, etc., the only condition being that it can easily be *simulated*.

In the special case of particle filtering,
originally developed within the tracking community,
the algorithms yield a numerical approximation of the optimal Bayesian
filter, i.e. of the conditional probability distribution
of the hidden state given the past observations, as a (possibly
weighted) empirical probability distribution of the system of particles.
In its simplest version, introduced in several different scientific
communities under the name of
*bootstrap filter* [52],
*Monte Carlo filter* [57],
or *condensation* (conditional density propagation)
algorithm [54],
and which historically has been the first algorithm to include
a redistribution step,
the selection mechanism is governed by the likelihood function:
at each time step, a particle is more likely to survive
and to replicate at the next generation if it is consistent with
the current observation.
The algorithms also provide as a by–product a numerical approximation
of the likelihood function, and of many other contrast functions for
parameter estimation in hidden Markov models, such as the prediction
error or the conditional least–squares criterion.
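The mutation / weighting / selection cycle of the bootstrap filter described above can be sketched as follows. This is an illustrative implementation, not the source's own: the scalar linear–Gaussian state-space model and all parameter names (`a`, `sigma_x`, `sigma_y`) are assumptions chosen to keep the example self-contained. It also returns the log-likelihood approximation obtained as a by-product, as mentioned in the text.

```python
import math
import random


def bootstrap_filter(observations, n_particles=500, seed=0,
                     a=0.9, sigma_x=1.0, sigma_y=0.5):
    """Bootstrap particle filter for the illustrative (assumed) model

        X_k = a * X_{k-1} + sigma_x * V_k,    Y_k = X_k + sigma_y * W_k,

    with V_k, W_k i.i.d. standard Gaussian.  Returns the filtered means
    (approximations of E[X_k | Y_1..Y_k]) and the log-likelihood
    approximation accumulated along the way.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means, log_lik = [], 0.0
    for y in observations:
        # Mutation: propagate each particle through the state dynamics.
        particles = [a * x + sigma_x * rng.gauss(0.0, 1.0) for x in particles]
        # Weighting: likelihood of the current observation given each particle
        # (Gaussian density; the normalizing constant is restored in log_lik).
        weights = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) for x in particles]
        total = sum(weights)
        log_lik += math.log(total / n_particles) \
            - math.log(sigma_y * math.sqrt(2.0 * math.pi))
        # Filtered mean as a weighted empirical average of the particles.
        means.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # Selection: multinomial resampling -- particles consistent with the
        # observation are more likely to survive and replicate.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means, log_lik
```

Note that resampling at every step, as done here, is the simplest choice; many variants (residual or systematic resampling, resampling only when the effective sample size drops below a threshold) exist and affect the estimator's variance.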

Particle methods are currently being used in many scientific and engineering areas:
positioning, navigation, and tracking [53], [47], visual tracking [54], mobile robotics [48], [71], ubiquitous computing and ambient intelligence, sensor networks, risk evaluation and simulation of rare events [51], genetics, molecular simulation [49], etc.

Other examples of the many applications of particle filtering can be
found in the contributed volume [34] and in the special
issue of *IEEE Transactions on Signal Processing* devoted
to *Monte Carlo Methods for Statistical Signal Processing*
in February 2002,
where the tutorial paper [35] can be found,
and in the textbook [67] devoted
to applications in target tracking.
Applications of sequential Monte Carlo methods to other areas,
beyond signal and image processing, e.g. to genetics,
can be found in [66].
A recent overview can also be found in [37].

Particle methods are very easy to implement, since it is sufficient in principle to simulate independent trajectories of the underlying process. The whole field is multidisciplinary, not only because of the already mentioned diversity of the scientific and engineering areas in which particle methods are used, but also because of the diversity of the scientific communities which have contributed to establish its foundations:
target tracking, interacting particle systems, empirical processes, genetic algorithms (GA), hidden Markov models and nonlinear filtering, Bayesian statistics, Markov chain Monte Carlo (MCMC) methods.

These algorithms can be interpreted as numerical approximation schemes for Feynman–Kac distributions, a pathwise generalization of Gibbs–Boltzmann distributions, in terms of the weighted empirical probability distribution associated with a system of particles. This abstract point of view [42], [40] has proved to be extremely fruitful in providing a very general framework for the design and analysis of numerical approximation schemes, based on systems of branching and / or interacting particles, for nonlinear dynamical systems with values in the space of probability distributions, associated with Feynman–Kac distributions. Many asymptotic results have been proved as the number $N$ of particles (sample size) goes to infinity, using techniques coming from applied probability (interacting particle systems, empirical processes [74]), see e.g. the survey article [42] or the recent textbook [40], and references therein:
convergence in ${L}^{p}$, convergence as empirical processes indexed by classes of functions, uniform convergence in time, see also [63], [64], central limit theorem, see also [59], propagation of chaos, large deviations principle, etc.
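To fix ideas, the Feynman–Kac distributions mentioned above can be written, in the standard notation of the literature (the symbols below follow common usage and are not taken verbatim from the source), as

$$
\gamma_n(f) \;=\; \mathbb{E}\Bigl[\, f(X_n)\,\prod_{k=0}^{n-1} G_k(X_k) \Bigr],
\qquad
\eta_n(f) \;=\; \frac{\gamma_n(f)}{\gamma_n(1)},
$$

where $(X_k)_{k \ge 0}$ is the underlying Markov chain (mutation) and the $G_k \ge 0$ are potential functions (selection); in the filtering case, $G_k$ is the likelihood of the observation at time $k$. The particle methods approximate $\eta_n$ by the weighted empirical distribution $\eta_n^N = \frac{1}{N}\sum_{i=1}^{N} \delta_{\xi_n^i}$ of the particle system $(\xi_n^1,\dots,\xi_n^N)$, with ${L}^{p}$ errors of order $1/\sqrt{N}$, as in the convergence results listed above.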

The objective here is to systematically study the impact of the many algorithmic variants on the convergence results.