The scientific objectives of ASPI are the design, analysis and implementation of interacting Monte Carlo methods, also known as particle methods, with a focus on

statistical inference in hidden Markov models and particle filtering,

risk evaluation and simulation of rare events,

global optimization.

This field is multidisciplinary, not only because of the many scientific and engineering areas in which particle methods are used, but also because of the diversity of the scientific communities which have already contributed to establishing its foundations

target tracking, interacting particle systems, empirical processes, genetic algorithms (GA), hidden Markov models and nonlinear filtering, Bayesian statistics, Markov chain Monte Carlo (MCMC) methods, etc.

Intuitively speaking, interacting Monte Carlo methods are sequential simulation methods, in which particles

*explore* the state space by mimicking the evolution of an underlying random process,

*learn* the environment by evaluating a fitness function,

and
*interact* so that only the most successful particles (in view of the value of the fitness function) are allowed to survive and to produce offspring at the next generation.

The effect of this mutation / selection mechanism is to automatically concentrate particles (i.e. the available computing power) in regions of interest of the state space. In the special case of particle filtering, which has numerous applications under the generic heading of positioning, navigation and tracking, in

target tracking, computer vision, mobile robotics, wireless communications, ubiquitous computing and ambient intelligence, sensor networks, etc.,

each particle represents a possible hidden state, and is multiplied or terminated at the next generation on the basis of its consistency with the current observation, as quantified by the likelihood function. With these genetic–type algorithms, it becomes easy to efficiently combine a prior model of displacement with or without constraints, sensor–based measurements, and a base of reference measurements, for example in the form of a digital map (digital elevation map, attenuation map, etc.). In the most general case, particle methods provide approximations of Feynman–Kac distributions, a pathwise generalization of Gibbs–Boltzmann distributions, by means of the weighted empirical probability distribution associated with an interacting particle system, with applications that go far beyond filtering, in

simulation of rare events, simulation of conditioned or constrained random variables, interacting MCMC methods, molecular simulation, etc.

The main applications currently considered are geolocalisation and tracking of mobile terminals, terrain–aided navigation, data fusion for indoor localisation, optimization of sensor location and activation, risk assessment in air traffic management, and protection of digital documents.

Monte Carlo methods are numerical methods that are widely used in situations where (i) a stochastic (usually Markovian) model is given for some underlying process, and (ii) some
quantity of interest should be evaluated, that can be expressed in terms of the expected value of a functional of the process trajectory, which includes as an important special case the
probability that a given event has occurred. Numerous examples can be found, e.g. in financial engineering (pricing of options and derivative securities)
, in performance evaluation of communication networks (probability of
buffer overflow), in statistics of hidden Markov models (state estimation, evaluation of contrast and score functions), etc. Very often in practice, no analytical expression is available for
the quantity of interest, but it is possible to simulate trajectories of the underlying process. The idea behind Monte Carlo methods is to generate independent trajectories of this process or
of an alternate instrumental process, and to build an approximation (estimator) of the quantity of interest in terms of the weighted empirical probability distribution associated with the
resulting independent sample. By the law of large numbers, the above estimator converges as the sample size goes to infinity. A drawback of this approach is that the trajectories are generated
*blindly*, and only afterwards are the corresponding weights evaluated. Some of the weights can happen to be negligible, in which case the corresponding trajectories do not
contribute to the estimator, i.e. computing power has been wasted.
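As a minimal illustration of this discussion, here is a sketch in Python, contrasting a naive estimator, whose trajectories are generated blindly, with a weighted estimator built from an instrumental process. The toy quantity of interest (a Gaussian tail probability), the instrumental distribution, and all names are assumptions chosen only to keep the example self-contained:

```python
import math
import random

def naive_mc(n, threshold, rng):
    # blind simulation: draw independent samples from the nominal N(0, 1) model
    # and count how many exceed the threshold
    return sum(rng.gauss(0.0, 1.0) > threshold for _ in range(n)) / n

def weighted_mc(n, threshold, rng):
    # instrumental process: sample from a Gaussian centred at the threshold,
    # and weight each draw by the likelihood ratio (nominal over instrumental)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        weight = math.exp(-0.5 * x * x) / math.exp(-0.5 * (x - threshold) ** 2)
        if x > threshold:
            total += weight
    return total / n

rng = random.Random(0)
p_true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # exact P(X > 4) for X ~ N(0, 1)
p_hat = weighted_mc(100_000, 4.0, rng)
```

With the same budget, the naive estimator typically returns 0: almost all of its blindly generated samples carry a zero contribution, which is precisely the wasted computing power mentioned above.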

A recent and major breakthrough has been the introduction of interacting Monte Carlo methods, also known as sequential Monte Carlo (SMC) methods, in which a whole (possibly weighted)
sample, called
*system of particles*, is propagated in time, where the particles

*explore* the state space under the effect of a
*mutation* mechanism which mimics the evolution of the underlying process,

and are
*replicated* or
*terminated*, under the effect of a
*selection* mechanism which automatically concentrates the particles, i.e. the available computing power, into regions of interest of the state space.

In full generality, the underlying process is a discrete–time Markov chain, whose state space can be

finite, continuous, hybrid (continuous / discrete), graphical, constrained, time varying, pathwise, etc.,

the only condition being that it can easily be
*simulated*.

In the special case of particle filtering, originally developed within the tracking community, the algorithms yield a numerical approximation of the optimal Bayesian filter, i.e. of the
conditional probability distribution of the hidden state given the past observations, as a (possibly weighted) empirical probability distribution of the system of particles. In its simplest
version, introduced in several different scientific communities under the name of
*bootstrap filter*
,
*Monte Carlo filter*
or
*condensation* (conditional density propagation) algorithm
, and which historically has been the first algorithm to include a
redistribution step, the selection mechanism is governed by the likelihood function: at each time step, a particle is more likely to survive and to replicate at the next generation if it is
consistent with the current observation. The algorithms also provide as a by–product a numerical approximation of the likelihood function, and of many other contrast functions for parameter
estimation in hidden Markov models, such as the prediction error or the conditional least–squares criterion.
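The bootstrap filter described above can be sketched in a few lines. The scalar linear–Gaussian state space model below is a toy assumption, chosen only so that the mutation, weighting, and selection steps are visible in self-contained code:

```python
import math
import random

A, Q_STD, R_STD = 0.9, 1.0, 1.0   # toy model parameters (assumptions)

def bootstrap_filter(observations, n_particles, rng):
    """Bootstrap filter for X_k = A X_{k-1} + U_k, Y_k = X_k + V_k (Gaussian noises)."""
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    filtered_means = []
    for y in observations:
        # mutation: particles explore the state space by mimicking the state equation
        particles = [A * x + rng.gauss(0.0, Q_STD) for x in particles]
        # weighting: consistency with the current observation (likelihood function)
        weights = [math.exp(-0.5 * ((y - x) / R_STD) ** 2) for x in particles]
        total = sum(weights)
        filtered_means.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # selection (redistribution): likely particles survive and replicate
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return filtered_means

# simulate a hidden trajectory and noisy observations, then filter
rng = random.Random(42)
x, states, observations = 0.0, [], []
for _ in range(200):
    x = A * x + rng.gauss(0.0, Q_STD)
    states.append(x)
    observations.append(x + rng.gauss(0.0, R_STD))
means = bootstrap_filter(observations, 500, rng)
```

The filtered means approximate the conditional expectation of the hidden state given the observations, and are typically closer to the hidden trajectory than the raw observations themselves.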

Particle methods are currently being used in many scientific and engineering areas

positioning, navigation, and tracking, visual tracking, mobile robotics, ubiquitous computing and ambient intelligence, sensor networks, risk evaluation and simulation of rare events, genetics, molecular simulation, etc.

Other examples of the many applications of particle filtering can be found in the contributed volume and in the special issue of *IEEE Transactions on Signal Processing* devoted to *Monte Carlo Methods for Statistical Signal Processing* in February 2002, which also contains a tutorial paper, and in the textbook devoted to applications in target tracking. Applications of sequential Monte Carlo methods to other areas, beyond signal and image processing, e.g. to genetics, as well as recent overviews, are also available.

Particle methods are very easy to implement, since in principle it is sufficient to simulate independent trajectories of the underlying process. The field is multidisciplinary, not only because of the already mentioned diversity of the scientific and engineering areas in which particle methods are used, but also because of the diversity of the scientific communities which have contributed to establishing its foundations

target tracking, interacting particle systems, empirical processes, genetic algorithms (GA), hidden Markov models and nonlinear filtering, Bayesian statistics, Markov chain Monte Carlo (MCMC) methods.

These algorithms can be interpreted as numerical approximation schemes for Feynman–Kac distributions, a pathwise generalization of Gibbs–Boltzmann distributions, in terms of the weighted
empirical probability distribution associated with a system of particles. This abstract point of view has proved to be extremely fruitful in providing a very general framework for the design and analysis of numerical approximation schemes, based on systems of branching and / or interacting particles, for nonlinear dynamical systems with values in the space of probability distributions, associated with Feynman–Kac distributions. Many asymptotic results have been proved as the number of particles goes to infinity.

The objective here is to systematically study the impact of the many algorithmic variants on these convergence results.

Hidden Markov models (HMM) form a special case of partially observed stochastic dynamical systems, in which the state of a Markov process (in discrete or continuous time, with finite or continuous state space) should be estimated from noisy observations. The conditional probability distribution of the hidden state given past observations is a well–known example of a normalized (nonlinear) Feynman–Kac distribution. These models are very flexible, because the introduction of latent (unobserved) variables makes it possible to model complex time dependent structures, to take constraints into account, etc. In addition, the underlying Markovian structure makes it possible to use numerical algorithms (particle filtering, Markov chain Monte Carlo (MCMC) methods, etc.) which are computationally intensive but whose complexity remains rather small. Hidden Markov models are widely used in various applied areas, such as speech recognition, alignment of biological sequences, tracking in complex environments, modeling and control of networks, digital communications, etc.

Beyond the recursive estimation of a hidden state from noisy observations, the problem arises of statistical inference for HMMs with general state space, including estimation of model parameters, early monitoring and diagnosis of small changes in model parameters, etc.

**Large time asymptotics** A fruitful approach is the asymptotic study, when the observation time increases to infinity, of an extended Markov chain, whose state includes
(i) the hidden state, (ii) the observation, (iii) the prediction filter (i.e. the conditional probability distribution of the hidden state given observations at all previous time
instants), and possibly (iv) the derivative of the prediction filter with respect to the parameter. Indeed, it is easy to express the log–likelihood function, the conditional least–squares
criterion, and many other classical contrast processes, as well as their derivatives with respect to the parameter, as additive functionals of the extended Markov chain.

The following general approach has been proposed

first, prove an exponential stability property (i.e. an exponential forgetting property of the initial condition) of the prediction filter and its derivative, for a misspecified model,

from this, deduce a geometric ergodicity property and the existence of a unique invariant probability distribution for the extended Markov chain, hence a law of large numbers and a central limit theorem for a large class of contrast processes and their derivatives, and a local asymptotic normality property,

finally, obtain the consistency (i.e. the convergence to the set of minima of the associated contrast function), and the asymptotic normality of a large class of minimum contrast estimators.

This programme has been completed in the case of a finite state space, and has been generalized under a uniform minoration assumption for the Markov transition kernel, which typically holds only when the state space is compact. Clearly, the whole approach relies on the existence of an exponential stability property of the prediction filter, and the main challenge currently is to get rid of this uniform minoration assumption, so as to be able to consider more interesting situations, where the state space is noncompact.

**Small noise asymptotics** Another asymptotic approach can also be used, where it is rather easy to obtain interesting explicit results, in terms close to the language of
nonlinear deterministic control theory
. Taking the simple example where the hidden state is the solution to
an ordinary differential equation, or a nonlinear state model, and where the observations are subject to additive Gaussian white noise, this approach consists in assuming that the covariance matrices of the state noise and of the observation noise go simultaneously to zero. Although it is reasonable in many applications to consider that noise covariances are small, this asymptotic approach is less natural than the large time asymptotics, where it is enough (provided a suitable ergodicity assumption holds) to accumulate observations and to see the expected limit laws (law of large numbers, central limit theorem, etc.). On the other hand, the expressions obtained in the limit (Kullback–Leibler divergence, Fisher information matrix, asymptotic covariance matrix, etc.) take here a much more explicit form than in the large time asymptotics.

The following results have been obtained using this approach

the consistency of the maximum likelihood estimator (i.e. its convergence to the set of minima of the associated contrast function, here the Kullback–Leibler divergence), and, if this set reduces to the true value of the parameter, it has been shown that the parameter dependent probability distributions of the observations are locally asymptotically normal (LAN), from which the asymptotic normality of the maximum likelihood estimator follows, with an explicit expression for the asymptotic covariance matrix, i.e. for the Fisher information matrix.

The estimation of the small probability of a rare but critical event is a crucial issue in industrial areas such as

nuclear power plants, food industry, telecommunication networks, finance and insurance industry, air traffic management, etc.

In such complex systems, analytical methods cannot be used, and naive Monte Carlo methods are clearly inefficient for estimating very small probabilities accurately. Besides importance sampling, an alternate widespread technique consists in multilevel splitting, where trajectories going towards the critical set are given offspring, thus increasing the number of trajectories that eventually reach the critical set. The Feynman–Kac formalism is well suited for the design and analysis of splitting algorithms for rare event simulation.

**Propagation of uncertainty** Multilevel splitting can be used in static situations. Here, the objective is to learn the probability distribution of an output random variable, seen as a (possibly complex) function of an input random variable with known distribution.

The key issue is to learn as fast as possible regions of the input space which contribute most to the computation of the target quantity. The proposed splitting method consists in (i) introducing a sequence of intermediate regions in the input space, implicitly defined by exceeding an increasing sequence of thresholds or levels, (ii) counting the fraction of samples that reach a level given that the previous level has been reached already, and (iii) improving the diversity of the selected samples, usually using an artificial Markovian dynamics. In this way, the algorithm learns

the transition probability between successive levels, hence the probability of reaching each intermediate level,

and the probability distribution of the input random variable, conditioned on the output variable reaching each intermediate level.

A further remark is that this conditional probability distribution is precisely the optimal (zero variance) importance distribution needed to compute the probability of reaching the considered intermediate level.
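Steps (i)–(iii) above can be sketched on the toy static problem of estimating the small probability that a standard Gaussian input exceeds an extreme threshold. The levels, sample size, and Metropolis kernel below are assumptions chosen only for illustration:

```python
import math
import random

def multilevel_splitting(levels, n, n_mcmc, rng):
    # stage 0: i.i.d. sample from the nominal N(0, 1) input distribution
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for level in levels:
        survivors = [x for x in sample if x > level]
        prob *= len(survivors) / n          # (ii) fraction reaching the next level
        if not survivors:
            return 0.0                      # extinction: no sample reached the level
        # selection: clone the survivors back to a population of size n
        sample = rng.choices(survivors, k=n)
        # (iii) diversify with Metropolis moves targeting N(0, 1) conditioned
        # on exceeding the current level
        for _ in range(n_mcmc):
            proposals = [(x, x + rng.gauss(0.0, 0.5)) for x in sample]
            sample = [p if p > level and rng.random() < math.exp(0.5 * (x * x - p * p))
                      else x
                      for x, p in proposals]
    return prob

rng = random.Random(0)
p_hat = multilevel_splitting([1.0, 2.0, 3.0, 4.0], 2000, 10, rng)
p_true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # exact P(X > 4), about 3.2e-5
```

The estimate is the product of the conditional transition probabilities between successive levels, and at each stage the current population approximates the input distribution conditioned on exceeding the current level.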

**Rare event simulation** To be specific, consider a complex dynamical system modelled as a Markov process, whose state can possibly contain continuous components and finite components (mode, regime, etc.), and the objective is to compute the probability, hopefully very small, that a critical region of the state space is reached by the Markov process before a final time.

The proposed splitting method consists in (i) introducing a decreasing sequence of intermediate, more and more critical, regions in the state space, (ii) counting the fraction of trajectories that reach an intermediate region before the final time, given that the previous intermediate region has already been reached, and (iii) replicating the successful trajectories. Two main variants exist:

the branching rate (number of offsprings allocated to a successful trajectory) is fixed, which allows for depth–first exploration of the branching tree, but raises the issue of controlling the population size,

the population size is fixed, which requires a breadth–first exploration of the branching tree, with random (multinomial) or deterministic allocation of offsprings, etc.

Just as in the static case, the algorithm learns

the transition probability between successive levels, hence the probability of reaching each intermediate level,

and the entrance probability distribution of the Markov process in each intermediate region.
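The dynamical case can be sketched in the same spirit, here in the fixed population size variant with multinomial cloning of the successful entrance states. The random walk model, intermediate levels, and final time are toy assumptions for illustration:

```python
import random

DRIFT, T_FINAL = -0.3, 30   # toy random walk with negative drift (assumptions)

def run_until(x, t, level, rng):
    # simulate the walk from state x at time t until it reaches `level`
    # or until the final time, and report the entrance state and time
    while t < T_FINAL:
        x += DRIFT + rng.gauss(0.0, 1.0)
        t += 1
        if x >= level:
            return x, t, True
    return x, t, False

def dynamic_splitting(levels, n, rng):
    population = [(0.0, 0)] * n      # entrance states (state, time) at the start
    prob = 1.0
    for level in levels:
        hits = []
        for x, t in population:
            x_hit, t_hit, reached = run_until(x, t, level, rng)
            if reached:
                hits.append((x_hit, t_hit))
        prob *= len(hits) / n        # fraction reaching this intermediate region
        if not hits:
            return 0.0               # extinction: no trajectory reached the region
        # fixed population size: multinomial cloning of successful entrance states
        population = rng.choices(hits, k=n)
    return prob

rng = random.Random(2)
p_split = dynamic_splitting([2.0, 4.0, 6.0], 1000, rng)
```

Besides the probability estimate, the populations of entrance states approximate the entrance distribution of the process in each intermediate region, as described above.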

Contributions have been made to

minimizing the asymptotic variance, obtained through a central limit theorem, with respect to the shape of the intermediate regions (selection of the importance function), to the thresholds (levels), to the population size, etc.

controlling the probability of extinction (when not even one trajectory reaches the next intermediate level),

designing and studying variants suited for hybrid state space (resampling per mode, marginalization, mode aggregation),

and in the static case, to

minimizing the asymptotic variance, obtained through a central limit theorem, with respect to intermediate levels, to the Metropolis kernel introduced in the mutation step, etc.

A related issue is global optimization. Indeed, the difficult problem of finding the set of global minima (or maxima) of a given function can be replaced by the apparently simpler problem of sampling a population according to a probability distribution depending on a small parameter, which asymptotically concentrates on this set as the small parameter goes to zero.

This additional topic was not present in the initial list of objectives, and has emerged only recently.
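One concrete instance of this idea is the cross–entropy method, in which a parametric sampling distribution is iteratively refitted on the best-scoring candidates and thereby concentrates on the global maximum. The toy multimodal objective and all parameters below are assumptions for illustration:

```python
import math
import random

def objective(x):
    # toy multimodal objective (an assumption): global maximum at x = 3,
    # spurious local maximum at x = -3
    return math.exp(-(x - 3.0) ** 2) + 0.5 * math.exp(-(x + 3.0) ** 2)

def cross_entropy_maximize(n_sample, n_elite, n_iter, rng):
    mu, sigma = 0.0, 10.0                  # broad initial sampling distribution
    for _ in range(n_iter):
        sample = [rng.gauss(mu, sigma) for _ in range(n_sample)]
        sample.sort(key=objective, reverse=True)
        elite = sample[:n_elite]           # selection: best-scoring candidates
        mu = sum(elite) / n_elite          # refit the sampling distribution
        var = sum((x - mu) ** 2 for x in elite) / n_elite
        sigma = max(1e-3, math.sqrt(var))  # floor keeps the distribution proper
    return mu

rng = random.Random(4)
x_star = cross_entropy_maximize(300, 15, 30, rng)
```

As the sampling distribution narrows, the population concentrates around the global maximum rather than the lower spurious peak.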

In pattern recognition and statistical learning, also known as machine learning, nearest neighbor (NN) algorithms are amongst the simplest but also very powerful algorithms available.
Basically, given a training set of data, i.e. a sample of object–feature pairs, the problem is to predict the feature associated with a new object, on the basis of the features of the training objects that are closest to it.

In general, there is no way to guess exactly the value of the feature associated with the new object, and the minimal error that can be achieved is that of the Bayes estimator, which cannot be computed for lack of knowledge of the distribution of the object–feature pair, but which is useful to characterize the strength of the method. So the best that can be expected is that the NN estimator converges, say as the sample size grows to infinity, to the Bayes estimator.

The asymptotic behavior when the sample size grows is well understood in finite dimension, but the situation is radically different in general infinite dimensional spaces, when the objects to be classified are functions, images, etc.
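For concreteness, here is a minimal sketch of the k–nearest neighbor rule in finite dimension; the unit-disc classification problem and all names are an assumed toy example:

```python
import random

def knn_classify(train, x, k):
    # majority vote among the k nearest training objects (Euclidean distance)
    nearest = sorted(train,
                     key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)[:k]
    votes = sum(label for _, label in nearest)
    return 1 if 2 * votes > k else 0

# toy object-feature pairs (an assumed example): the object is a point of the
# plane and its feature is 1 if and only if it lies inside the unit disc
rng = random.Random(5)
def draw_pair():
    obj = (rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0))
    return obj, 1 if obj[0] ** 2 + obj[1] ** 2 < 1.0 else 0

train_set = [draw_pair() for _ in range(2000)]
test_set = [draw_pair() for _ in range(300)]
accuracy = sum(knn_classify(train_set, x, 5) == y
               for x, y in test_set) / len(test_set)
```

On this two-dimensional problem the empirical accuracy approaches the Bayes error (here zero, since the feature is a deterministic function of the object) as the training set grows; the questions studied below are what remains of this behavior in infinite dimension.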

**Nearest neighbor classification in infinite dimension** In finite dimension, the nearest neighbor classifier is universally consistent, i.e. it converges to the Bayes classifier whatever the distribution of the object–feature pair. In infinite dimension, this universal consistency can fail, and additional assumptions on the distribution are needed.

**Rates of convergence of the functional $k$–nearest neighbor estimator** Motivated by a broad range of potential applications, such as regression on curves, rates of convergence of the $k$–nearest neighbor estimator have been investigated when the objects take values in a general function space.

This emerging topic has produced several theoretical advances, in collaboration with Gérard Biau (université Pierre et Marie Curie, ENS Paris and EPI CLASSIC, INRIA Paris—Rocquencourt), and a possible target application domain has been identified in the statistical analysis of recommendation systems, which would be a source of interesting problems.

Among the many application domains of particle methods, or interacting Monte Carlo methods, ASPI has decided to focus on applications in localisation (or positioning), navigation and tracking, which already covers a very broad spectrum of application domains. The objective here is to estimate the position (and also velocity, attitude, etc.) of a mobile object, from the combination of different sources of information, including

a prior dynamical model of typical evolutions of the mobile, such as inertial estimates and prior model for inertial errors,

measurements provided by sensors,

and possibly a digital map providing some useful feature (terrain altitude, power attenuation, etc.) at each possible position.

In some applications, another useful source of information is provided by

a map of constrained admissible displacements, for instance in the form of an indoor building map,

which particle methods can easily handle (map-matching). This Bayesian dynamical estimation problem is also called filtering, and its numerical implementation using particle methods, known as particle filtering, has been introduced by the target tracking community, which has already contributed to many of the most interesting algorithmic improvements and is still very active, and has found applications in

target tracking, integrated navigation, points and / or objects tracking in video sequences, mobile robotics, wireless communications, ubiquitous computing and ambient intelligence, sensor networks, etc.

ASPI is contributing (or has contributed recently) to several applications of particle filtering in positioning, navigation and tracking, such as geolocalisation and tracking in a wireless network, terrain–aided navigation, and data fusion for indoor localisation.

Another application domain of particle methods, or interacting Monte Carlo methods, that ASPI has decided to focus on is the estimation of the small probability of a rare but critical event, in complex dynamical systems. This is a crucial issue in industrial areas such as

nuclear power plants, food industry, telecommunication networks, finance and insurance industry, air traffic management, etc.

In such complex systems, analytical methods cannot be used, and naive Monte Carlo methods are clearly inefficient for estimating very small probabilities accurately. Besides importance sampling, an alternate widespread technique consists in multilevel splitting, where trajectories going towards the critical set are given offspring, thus increasing the number of trajectories that eventually reach the critical set. This approach not only makes it possible to estimate the probability of the rare event, but also provides realizations of the random trajectory, given that it reaches the critical set, i.e. provides realizations of typical critical trajectories, an important feature that methods based on importance sampling usually miss.

ASPI is contributing (or has contributed recently) to several applications of multilevel splitting for rare event simulation, such as risk assessment in air traffic management, detection in sensor networks, and protection of digital documents.

This is a collaboration with Tony Lelièvre (CERMICS, Ecole des Ponts ParisTech).

Consider a one-dimensional Brownian motion in a double well potential. It is known that, as its variance goes to zero, the Brownian particle has to wait for a longer and longer time to jump from one well to the other. This metastable behavior is described by the Freidlin–Wentzell theory. We are investigating the length of the paths between the last passage time at the bottom of one well and the hitting time of the other (reactive trajectory). In the case of an Ornstein–Uhlenbeck process between the wells, we obtained the remarkable result that the time length of the reactive trajectories converges in distribution, as the noise intensity goes to zero, to a Gumbel random variable shifted by a deterministic term growing like minus the logarithm of the noise intensity. Our numerical simulations are also in good agreement with this theoretical result. We are also very close to extending this result to more general one-dimensional diffusion processes.

This is a collaboration with Jérôme Morio (ONERA Palaiseau).

As explained above, multilevel splitting ideas can be useful even for solving some static problems, such as evaluating the (small) probability that a random variable exceeds some (extreme) threshold. Incidentally, it can also be noticed that a population of particles is available at each stage of the algorithm, which is distributed according to the original distribution conditioned on exceeding the current level. Furthermore, this conditional distribution is known to be precisely the optimal importance distribution for evaluating the probability of exceeding the current level. In other words, the optimal importance distribution is learned automatically by the algorithm, as a by–product, and therefore can be used to produce an importance sampling estimate with very low variance. Building on this idea, several other iterative methods have been studied, that learn the optimal importance distribution at each stage of the algorithm, such as nonparametric adaptive importance sampling (NAIS) or the cross–entropy (CE) method. These methods have been applied to a practical example from the aerospace industry, the evaluation of collision probabilities between two satellites, or between a satellite and space debris.


This is a collaboration with Christian Musso (ONERA Palaiseau).

Particle filtering is a widely used Monte Carlo method to approximate the posterior probability distribution in non–linear filtering, with an error scaling as $1/\sqrt{N}$, where $N$ is the number of particles.

This is a collaboration with Pierre Ailliot (UBO) and Christophe Maisondieu (IFREMER).

Directional wave spectra generally exhibit several peaks, due to the coexistence of wind sea generated by local wind conditions and swell originating from distant weather systems. This work proposes a new algorithm for partitioning such spectra and retrieving the various systems which compose a complex sea–state. It is based on a sequential Monte Carlo algorithm which makes it possible to follow the time evolution of the various systems. A particularity of the algorithm is that the dimension of the hidden state can change over time, so a model selection step is included. The proposed methodology is validated on both synthetic and real spectra, and the results are compared with a method commonly used in the literature.


Surprisingly, very little was known about the asymptotic behaviour of the ensemble Kalman filter, whereas on the other hand, the asymptotic behaviour of many different classes of particle filters is well understood, as the number of particles goes to infinity. Interpreting the ensemble elements as a population of particles with mean–field interactions, and not only as an instrumental device producing an estimation of the hidden state as the ensemble mean value, it has been possible to prove the convergence of the ensemble Kalman filter, with a rate of order $1/\sqrt{N}$, as the number $N$ of ensemble elements goes to infinity.
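This convergence can be illustrated numerically. In the scalar linear–Gaussian sketch below (a toy assumption, with the classical perturbed-observations analysis step), the error of the ensemble mean after one analysis step shrinks as the ensemble grows, consistently with a $1/\sqrt{N}$ rate:

```python
import random
import statistics

def enkf_analysis(ensemble, y, r_var, rng):
    # one analysis step with perturbed observations, scalar linear model (H = 1)
    p = statistics.variance(ensemble)           # empirical forecast variance
    gain = p / (p + r_var)                      # ensemble Kalman gain
    return [x + gain * (y + rng.gauss(0.0, r_var ** 0.5) - x) for x in ensemble]

def mean_rmse(n_ensemble, n_repeat, rng):
    # prior N(0, 1) and observation noise variance 1: exact posterior mean is y / 2
    y, r_var, exact = 1.0, 1.0, 0.5
    total = 0.0
    for _ in range(n_repeat):
        forecast = [rng.gauss(0.0, 1.0) for _ in range(n_ensemble)]
        analysis = enkf_analysis(forecast, y, r_var, rng)
        total += (statistics.fmean(analysis) - exact) ** 2
    return (total / n_repeat) ** 0.5

rng = random.Random(6)
err_100 = mean_rmse(100, 50, rng)
err_10000 = mean_rmse(10000, 50, rng)
```

Multiplying the ensemble size by 100 should reduce the root mean square error of the ensemble mean by roughly a factor of 10, in line with the mean–field interpretation.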

INRIA contract ALLOC 2399 — May 2007 to August 2011.

This FP6 project is coordinated by National Aerospace Laboratory (NLR) (The Netherlands), and ASPI is also collaborating with University of Twente (The Netherlands) and Direction des Services de la Navigation Aérienne (DSNA).

The objective of iFLY is to develop both an advanced airborne self separation design and a highly automated air traffic management (ATM) design for en–route traffic, which takes advantage of autonomous aircraft operation capabilities and which is aimed at managing a three– to six–fold increase in current en–route traffic levels. The proposed research combines expertise in air transport human factors, safety and economics with analytical and Monte Carlo simulation methodologies. The contribution of ASPI to this project concerns the work package on accident risk assessment methods and their implementation using conditional Monte Carlo methods, especially for large scale stochastic hybrid systems: variants suited for hybrid state space (resampling per mode, marginalization) are currently being designed and studied.


INRIA contract ALLOC 4233 — April 2009 to September 2011.

This is a collaboration with Sébastien Paris (université Paul Cézanne), related to the supervision of the PhD thesis of Mathieu Chouchane.

The objective of this project is to optimize the position and activation times of a few sensors deployed by a platform over a search zone, so as to maximize the probability of detecting a moving target. The difficulty here is that the target can detect an activated sensor before it is detected itself, and it can then modify its own trajectory to escape from the sensor. Because of the many constraints, including timing constraints, involved in this optimization problem, a stochastic algorithm is preferred here over a deterministic algorithm. The underlying idea is to replace the problem of maximizing a cost function (the probability of detection) over the possible configurations (admissible position and activation times) by the apparently simpler problem of sampling a population according to a probability distribution depending on a small parameter, which asymptotically concentrates on the set of global maxima of the cost function, as the small parameter goes to zero. The usual approach here is to use the cross–entropy method.

The contribution of ASPI has been to propose a multilevel splitting algorithm, in order to evaluate the probability of detection for a given configuration. When this probability is small, these methods are known to provide a significant reduction in the variance of the relative error.


INRIA contract ALLOC 2856 — January 2008 to June 2011.

This ANR project is coordinated by Thalès Alenia Space.

The overall objective is to study and demonstrate information fusion algorithms for localisation of pedestrian users in an indoor environment, where a GPS solution cannot be used. The sought design combines

a pedestrian dead–reckoning (PDR) unit, providing noisy estimates of the linear displacement, angular turn, and possibly of the level change through an additional pressure sensor,

range and / or proximity measurements provided by beacons at fixed and known locations, and possibly indirect distance measurements to access points, through a measure of the signal power attenuation,

constraints provided by an indoor map of the building (map-matching),

collaborative localisation when two users meet and exchange their respective position estimates.

Besides particle methods, which are proposed as the basic information fusion algorithm for the centralized server–based implementation, simpler algorithms such as the extended Kalman filter (EKF) or the unscented Kalman filter (UKF) have been investigated, to be used for the local PDA–based implementation with a map of a smaller part of the building. Constraints could be taken care of automatically with the help of a Voronoi graph, but this approach implies heavy pre–computations. A more direct approach, taking care of constraints on the fly using a simple rejection method, has been preferred. Adapting the sample size using KLD–sampling has also been investigated, which could be useful in the case of poor initial information, or if the user walks in a poorly informative area (open zone, absence of beacons). Collaboration between users has been implemented, which allows a user with a poor localization to benefit from the more accurate localization of another user. In this implementation, the latter user is seen by the former as a ranging beacon with uncertain position. A description of the overall fusion algorithm and an illustration with simulation results are available.


INRIA contract ALLOC 3767 — January 2009 to December 2012.

This ANR project is coordinated by École Normale Supérieure, Paris. The other partner is Météo–France. This is a collaboration with Étienne Mémin and Anne Cuzol (INRIA Rennes Bretagne Atlantique, project–team FLUMINANCE).

The contribution of ASPI to this project is to continue the comparison of sequential data assimilation methods initiated previously, such as the ensemble Kalman filter (EnKF) and the weighted ensemble Kalman filter (WEnKF), with particle filters. This comparison will be made on the basis of asymptotic variances, as the ensemble or sample size goes to infinity, and also on the impact of dimension on small sample behavior.

INRIA contract ALLOC 4402 — November 2009 to October 2012.

This ANR project is also coordinated by Alcatel–Lucent Bell Labs France.

The focus of our project is to reduce the impact of nonlinear effects. The objective is twofold: specify, design, realize and evaluate fibres with reduced nonlinear effects, first by increasing the effective area to unprecedented values, and second by splitting the optical power along two modes, using bimodal propagation. While the first step is ambitious, it primarily relies on the evolution of current fibre technologies; the second is disruptive and requires not only deep changes in fibre technologies but also new advanced transmitter / receiver equipment, preferably based on coherent detection. Naturally, bimodal propagation also brings another key advantage, namely a twofold increase of system capacity.

Jointly with the team Processus Stochastiques of IRMAR, ASPI organizes a working group on the Freidlin–Wentzell theory and its applications. One of the main goals of these talks is to study the theory of large deviations, which describes how a metastable diffusion process evolves. Moreover, several talks are dedicated to simulation algorithms and applications (molecular dynamics, turbulence modelling).
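For reference, the basic objects of the Freidlin–Wentzell theory studied in this working group can be summarized as follows (standard statement, under the usual regularity assumptions on the coefficients):

```latex
% Small-noise diffusion
dX^\varepsilon_t = b(X^\varepsilon_t)\,dt
  + \sqrt{\varepsilon}\,\sigma(X^\varepsilon_t)\,dW_t .
% Action (rate) functional on absolutely continuous paths \varphi:
I_T(\varphi) = \frac{1}{2} \int_0^T
  \bigl|\, \sigma(\varphi_s)^{-1}\,(\dot\varphi_s - b(\varphi_s)) \,\bigr|^2 \, ds ,
% and the large deviation principle states, informally,
\mathbb{P}\bigl( X^\varepsilon \approx \varphi \bigr)
  \asymp \exp\bigl( -I_T(\varphi)/\varepsilon \bigr)
  \qquad \text{as } \varepsilon \to 0 ,
% so that exit times and exit points from a metastable basin are governed by
% the quasipotential
V(x,y) = \inf \{\, I_T(\varphi) : \varphi_0 = x,\ \varphi_T = y,\ T > 0 \,\} .
```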

François Le Gland has been a member of the committee for the PhD theses of Xuan–Binh Lam (université de Rennes 1, advisor: Laurent Mevel) and Christophe Avenel (université de Rennes 1, advisor: Étienne Mémin) and for the habilitation thesis of Frédéric Dambreville (université de Bretagne Occidentale).

François Le Gland is a member of the “conseil d'UFR” of the department of mathematics of université de Rennes 1.

Florent Malrieu is a member of the “conseil” of IRMAR (institut de recherche mathématiques de Rennes, UMR 6625).

Valérie Monbet is the director of the master on statistics and econometrics at université de Rennes 1.

Arnaud Guyader is a member of the committee of “oraux blancs d'agrégation de mathématiques” for ENS Cachan at Ker Lann.

François Le Gland gives a course on Kalman filtering and hidden Markov models at université de Rennes 1, within the master SISEA (signal, image, systèmes embarqués, automatique, école doctorale MATISSE); a 3rd year course on Bayesian filtering and particle approximation at ENSTA (école nationale supérieure de techniques avancées), Paris, within the systems and control module; a 3rd year course on linear and nonlinear filtering at ENSAI (école nationale de la statistique et de l'analyse de l'information), Ker Lann, within the statistical engineering track; and a 3rd year course on hidden Markov models at Télécom Bretagne, Brest.

Florent Malrieu gives a course on Markov models at université de Rennes 1, within the probability and statistics track of the master on mathematics and applications.

Valérie Monbet gives several courses on data analysis, on time series and hidden Markov models, and on mathematical statistics, all at université de Rennes 1 within the master on statistics and econometrics.

Arnaud Guyader has defended his habilitation thesis, *Contributions to nonparametric estimation and rare event simulation*, at université de Rennes 2 in December 2011.

François Le Gland is currently supervising four PhD students:

Rudy Pastel, title: *Estimation of rare event probabilities and extreme quantiles. Applications in the aerospace domain*, started in October 2008, expected defense in February 2012, funding: ONERA grant, co–direction: Jérôme Morio (ONERA, Palaiseau).

Paul Bui–Quang, provisional title: *The Laplace method for particle filtering*, started in October 2009, funding: ONERA grant, co–direction: Christian Musso (ONERA, Palaiseau).

Alexandre Lepoutre, provisional title: *Monte Carlo methods for dim target tracking*, started in October 2010, funding: ONERA grant, co–direction: Olivier Rabaste (ONERA, Palaiseau).

Damien Jacquemart, provisional title: *Rare event methods for the estimation of collision risk*, started in October 2011, funding: DGA / ONERA grant, co–direction: Jérôme Morio (ONERA, Palaiseau).

In addition to presentations with a publication in the proceedings, and which are listed at the end of the document, members of ASPI have also given the following presentations.

Arnaud Guyader has given a talk on multilevel Monte Carlo for extreme quantiles and probabilities at the SIAM Conference on Computational Science and Engineering, held in Reno in March 2011. He has also been invited by Nicolas Hengartner to visit Los Alamos National Laboratory in April 2011.

Frédéric Cérou has been invited to give a talk on smoothed splitting methods for counting at the 16th INFORMS Applied Probability Conference, held in Stockholm in July 2011.

Florent Malrieu has been invited to give seminar talks on non–uniqueness of equilibrium states for McKean–Vlasov equations in Toulouse in March 2011, on long time behavior of McKean–Vlasov equations in Pau in March 2011, on Markov modulated ordinary differential equations in Bordeaux in March 2011, on functional inequalities for mixtures in Grenoble in March 2011, on functional inequalities and concentration inequalities for mixtures in Vannes in June 2011, and on ergodicity of piecewise deterministic Markov processes in Paris in June 2011. He has also been invited to give a three–hour lecture on uniform (in time) propagation of chaos for a class of McKean–Vlasov equations, at the workshop on Mean Field Limit, held at IHP (institut Henri Poincaré), Paris, in March 2011.

François Le Gland has been invited to give a talk on marginalization for rare event simulation in switching diffusions, at the workshop on Numerical Methods for Filtering and for Parabolic PDE's, held at Imperial College in September 2011. He has also presented a poster on information fusion for indoor navigation, at the DGA seminar on Information Fusion and Planning for Surveillance and Intelligence, held at ENSTA ParisTech in June 2011.

Paul Bui–Quang has been invited to give seminar talks on importance sampling in high dimension via the Laplace method at the seminar of the BigMC ANR project in April 2011, and on the Laplace method for particle filtering at the ENSAI seminar for PhD students in May 2011.

Rudy Pastel has given talks on splitting methods for the evaluation of satellite vs. debris collision probabilities at the 3rd IEEE International Conference on Computer Modeling and Simulation (ICCMS), held in Mumbai in January 2011 and at the European Conference for Aero–Space Sciences (EUCASS), held in St. Petersburg in July 2011.