The scientific objectives of ASPI are the design, analysis and implementation of interacting Monte Carlo methods, also known as particle methods, with a focus on:

statistical inference in hidden Markov models, e.g. state or parameter estimation, including particle filtering,

risk evaluation, including simulation of rare events.

The field is inherently multidisciplinary, not only because of the many scientific and engineering areas in which particle methods are used, but also because of the diversity of the scientific communities which have already contributed to establishing the foundations of the field:

target tracking, interacting particle systems, empirical processes, genetic algorithms (GA), hidden Markov models and nonlinear filtering, Bayesian statistics, Markov chain Monte Carlo (MCMC) methods, etc.

Intuitively speaking, interacting Monte Carlo methods are sequential simulation methods, in which particles

*explore* the state space by mimicking the evolution of an underlying random process,

*learn* the environment by evaluating a fitness function,

and
*interact* so that only the most successful particles (in view of the value of the fitness function) are allowed to survive and to produce offspring at the next generation.

The effect of this mutation / selection mechanism is to automatically concentrate particles (i.e. the available computing power) in regions of interest of the state space. In the special case of particle filtering, which has numerous applications under the generic heading of positioning, navigation and tracking, in

target tracking, computer vision, mobile robotics, ubiquitous computing and ambient intelligence, sensor networks, etc.,

each particle represents a possible hidden state, and is multiplied or terminated at the next generation on the basis of its consistency with the current observation, as quantified by the likelihood function. These genetic–type algorithms are particularly adapted to situations which combine a prior model of the mobile displacement, sensor-based measurements, and a base of reference measurements, for example in the form of a digital map (digital elevation map, attenuation map, etc.). In the most general case, particle methods provide approximations of probability distributions associated with a Feynman–Kac flow, by means of the weighted empirical probability distribution associated with an interacting particle system, with applications that go far beyond filtering, in

simulation of rare events, simulation of conditioned or constrained random variables, interacting MCMC methods, molecular simulation, etc.

ASPI essentially carries out methodological research, rather than activities oriented towards a single application area, with the objective of obtaining generic results with high potential for applications, and of implementing up–to–date results on a few appropriate examples, through collaboration with industrial partners.

The main applications currently considered are geolocalisation and tracking of mobile terminals, terrain–aided navigation, data fusion for indoor localisation, and risk assessment for complex hybrid systems such as those used in air traffic management.

Monte Carlo methods are numerical methods that are widely used in situations where (i) a stochastic (usually Markovian) model is given for some underlying process, and (ii) some quantity of interest must be evaluated, which can be expressed in terms of the expected value of a functional of the process trajectory, including as an important special case the probability that a given event has occurred. Numerous examples can be found, e.g. in financial engineering (pricing of options and derivative securities), in performance evaluation of communication networks (probability of buffer overflow), in statistics of hidden Markov models (state estimation, evaluation of contrast and score functions), etc. Very often in practice, no analytical expression is available for the quantity of interest, but it is possible to simulate trajectories of the underlying process. The idea behind Monte Carlo methods is to generate independent trajectories of this process or of an alternate instrumental process, and to build an approximation (estimator) of the quantity of interest in terms of the weighted empirical probability distribution associated with the resulting independent sample. By the law of large
numbers, the above estimator converges as the size N of the sample goes to infinity, with rate N^{-1/2}, and the asymptotic variance can be estimated using an appropriate central limit theorem. To reduce the variance of the estimator, many variance reduction techniques have been proposed.
Still, running independent Monte Carlo simulations can lead to very poor results, because trajectories are generated *blindly*, and only afterwards are the corresponding weights evaluated. Some of the weights may turn out to be negligible, in which case the corresponding trajectories do not contribute to the estimator, i.e. computing power has been wasted.
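As a concrete illustration of the plain (non-interacting) Monte Carlo approach described above, the following sketch estimates an event probability together with its CLT-based error bar. The toy random-walk model and all function names are assumptions introduced for illustration, not taken from the text.

```python
import math
import random

def monte_carlo_probability(simulate, event, n_samples, seed=0):
    """Estimate P(event) from independent simulations, with a CLT error bar.

    `simulate` and `event` are hypothetical user-supplied callables: the first
    draws one trajectory, the second returns True if the event of interest
    occurred along that trajectory.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples) if event(simulate(rng)))
    p_hat = hits / n_samples
    # By the CLT, the estimator converges at rate N^{-1/2}; the asymptotic
    # standard error sqrt(p(1-p)/N) is estimated by plug-in.
    std_err = math.sqrt(p_hat * (1.0 - p_hat) / n_samples)
    return p_hat, std_err

# Toy example: a 10-step Gaussian random walk; event = final position >= 4.
def walk(rng):
    return sum(rng.gauss(0.0, 1.0) for _ in range(10))

p_hat, std_err = monte_carlo_probability(walk, lambda x: x >= 4.0, n_samples=20000)
```

For this toy model the true probability is P(Z >= 4 / sqrt(10)), roughly 0.10, so the estimate should land well within a few standard errors of that value.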

A recent and major breakthrough has been the introduction of interacting Monte Carlo methods, also known as sequential Monte Carlo (SMC) methods, in which a whole (possibly weighted) sample, called a *system of particles*, is propagated in time, where the particles

*explore* the state space under the effect of a *mutation* mechanism which mimics the evolution of the underlying process,

and are *replicated* or *terminated*, under the effect of a *selection* mechanism which automatically concentrates the particles, i.e. the available computing power, into regions of interest of the state space.

In full generality, the underlying process is a discrete–time Markov chain, whose state space can be

finite, continuous, hybrid (continuous / discrete), graphical, constrained, time varying, pathwise, etc.,

the only condition being that it can easily be
*simulated*. The very important case of a sampled continuous–time Markov process, e.g. the solution of a stochastic differential equation driven by a Wiener process or a more general Lévy
process, is also covered.

In the special case of particle filtering, originally developed within the tracking community, the algorithms yield a numerical approximation of the optimal filter, i.e. of the conditional
probability distribution of the hidden state given the past observations, as a (possibly weighted) empirical probability distribution of the system of particles. In its simplest version,
introduced in several different scientific communities under the names of *interacting particle filter*, *bootstrap filter*, *Monte Carlo filter* or *condensation* (conditional density propagation) algorithm, and which historically has been the first algorithm to include a redistribution step, the selection mechanism is governed by the likelihood function: at each time step, a particle is more likely to survive and to replicate at the next generation if it is consistent with the current observation. The
algorithms also provide as a by–product a numerical approximation of the likelihood function, and of many other contrast functions for parameter estimation in hidden Markov models, such as the
prediction error or the conditional least–squares criterion.
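The likelihood by-product mentioned above admits a one-line implementation once the unnormalised particle weights have been recorded at each time step: the incremental likelihood at each step is approximated by the mean weight, and the logs are summed over time. This is a minimal sketch with hypothetical inputs.

```python
import math

def log_likelihood_estimate(weights_per_step):
    """By-product estimate of the log-likelihood from a particle filter run:
    the incremental likelihood at step k is approximated by the mean of the
    unnormalised particle weights at that step, and the logs are summed."""
    return sum(math.log(sum(w) / len(w)) for w in weights_per_step)

# Sanity check: constant weights c at each of K steps give K * log(c).
ll = log_likelihood_estimate([[0.5, 0.5, 0.5]] * 4)
```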

Particle methods are currently being used in many scientific and engineering areas:

positioning, navigation, and tracking, visual tracking, mobile robotics, ubiquitous computing and ambient intelligence, sensor networks, risk evaluation and simulation of rare events, genetics, molecular simulation, etc.

Other examples of the many applications of particle filtering can be found in the contributed volume and in the special issue of *IEEE Transactions on Signal Processing* devoted to *Monte Carlo Methods for Statistical Signal Processing* in February 2002, which contains in particular a tutorial paper, and in the textbook devoted to applications in target tracking. Applications of sequential Monte Carlo methods to other areas, beyond signal and image processing, e.g. to genetics, can also be found in the literature.

Particle methods are very easy to implement, since it is sufficient in principle to simulate independent trajectories of the underlying process. The field is inherently multidisciplinary, not only because of the already mentioned diversity of the scientific and engineering areas in which particle methods are used, but also because of the diversity of the scientific communities which have contributed to establishing the foundations of the field:

target tracking, interacting particle systems, empirical processes, genetic algorithms (GA), hidden Markov models and nonlinear filtering, Bayesian statistics, Markov chain Monte Carlo (MCMC) methods.

The following abstract point of view, developed and extensively studied by Pierre Del Moral, has proved to be extremely fruitful in providing a very general framework for the design and analysis of numerical approximation schemes, based on systems of branching and / or interacting particles, for nonlinear dynamical systems with values in the space of probability distributions, associated with Feynman–Kac flows. Feynman–Kac distributions are characterized by a Markov chain with transition kernels Q_k(x, dx'), and by nonnegative potential functions g_k(x') that play the role of selection functions. They naturally arise whenever importance sampling is used: this applies for instance to simulation of rare events, to filtering,
i.e. to state estimation in hidden Markov models (HMM), etc. To solve
*numerically* the recurrent equation satisfied by the Feynman–Kac distributions, and in view of the basic assumption that it is easy to *simulate* r.v.'s according to the probability distributions Q_k(x, dx'), i.e. to mimic the evolution of the Markov chain, and that it is easy to *evaluate* the potential functions g_k(x'), the original idea behind particle methods consists of looking for an approximation in the form of a (possibly weighted) empirical probability distribution associated with
a system of particles. The approximation is completely characterized by the set of particle positions and weights, and the algorithm is completely described by the mechanism which builds this
set recursively. In practice, in the simplest version of the algorithm, known as the
*bootstrap* algorithm, particles

are selected according to their respective weights (selection step),

move according to the Markov kernel Q_k(x, dx') (mutation step),

are weighted by evaluating the fitness function g_k(x') (weighting step).

The algorithm yields a numerical approximation of the Feynman–Kac distribution as the weighted empirical probability distribution associated with a system of particles, and many asymptotic results have been proved as the number N of particles (sample size) goes to infinity, using techniques coming from applied probability (interacting particle systems, empirical processes), see e.g. the survey article or the recent textbook, and references therein:

convergence in L^p, convergence as empirical processes indexed by classes of functions, uniform convergence in time, central limit theorem, propagation of chaos, large deviations principle, moderate deviations principle, etc.

Beyond the simplest *bootstrap* version of the algorithm, many algorithmic variations have been proposed, and are commonly used in practice:

in the redistribution step, sampling with replacement can be replaced with other redistribution schemes so as to reduce the variance (an issue that has also been addressed for genetic algorithms),

to reduce the variance and to save computational effort, it is often a good idea not to redistribute the particles at each time step, but only when the weights become too uneven.
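The usual empirical criterion for deciding that the weights are "too uneven" is the effective sample size (ESS), also mentioned later in this report. A minimal sketch follows; the 50% threshold is an assumed, commonly used value, not one prescribed by the text.

```python
def effective_sample_size(weights):
    """Empirical unevenness criterion ESS = (sum w)^2 / (sum w^2): it equals
    N for uniform weights and drops towards 1 when one weight dominates."""
    s = sum(weights)
    s2 = sum(w * w for w in weights)
    return (s * s) / s2

def should_resample(weights, threshold=0.5):
    """Rule of thumb: redistribute only when ESS falls below a fraction
    (here 50%, an assumed value) of the sample size."""
    return effective_sample_size(weights) < threshold * len(weights)

ess_uniform = effective_sample_size([1.0] * 100)          # equals 100
ess_degenerate = effective_sample_size([1.0] + [1e-12] * 99)  # close to 1
```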

Most of the results proved in the literature assume that particles are redistributed (i) at each time step, and (ii) using sampling with replacement. A systematic study of the impact of these algorithmic variations on the convergence results remains to be done. Even with interacting Monte Carlo methods, it can happen that some particles generated in one time step have a negligible weight: if this happens for too many particles in the sample, then computing power has been wasted, and it has been suggested to use importance sampling again in the mutation step, i.e. to let particles explore the state space under the action of an alternate (wrong) mutation kernel, and to weight the particles according to their likelihood under the true model, so as to compensate for the modeling mismatch. More specifically, using an arbitrary importance decomposition

R_k(x, dx') = Q_k(x, dx') g_k(x') = W_k(x, x') P_k(x, dx'),

results in the following general algorithm, known as the *sampling with importance resampling* (SIR) algorithm, in which particles

are selected according to their respective weights (selection step),

move according to the importance Markov kernel P_k(x, dx') (mutation step),

are weighted by evaluating the importance weight function W_k(x, x') on the resulting transition (weighting step).
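A minimal sketch of one SIR generation follows, on a hypothetical scalar Gaussian model where the decomposition W_k = g_k Q_k / P_k can be written explicitly. All densities, parameters and names are assumptions chosen for illustration.

```python
import math
import random

def normal_pdf(z, mu, sigma):
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def sir_step(particles, weights, proposal, importance_weight, rng):
    """One generation of the SIR algorithm: selection, mutation with the
    importance kernel P_k, weighting with W_k evaluated on each transition."""
    n = len(particles)
    selected = rng.choices(particles, weights=weights, k=n)
    moved = [proposal(x, rng) for x in selected]
    new_weights = [importance_weight(x, xp) for x, xp in zip(selected, moved)]
    return moved, new_weights

# Hypothetical toy model: the true kernel Q_k is N(x, 1), the potential g_k
# comes from an observation y; the proposal P_k is a wider N(x, 2), corrected
# by W_k(x, x') = g_k(x') Q_k(x, x') / P_k(x, x').
y = 0.4

def proposal(x, rng):
    return x + rng.gauss(0.0, 2.0)

def importance_weight(x, xp):
    g = normal_pdf(y, xp, 0.5)                                  # potential g_k
    return g * normal_pdf(xp, x, 1.0) / normal_pdf(xp, x, 2.0)  # W_k

rng = random.Random(2)
particles = [rng.gauss(0.0, 1.0) for _ in range(400)]
particles, weights = sir_step(particles, [1.0] * 400, proposal, importance_weight, rng)
posterior_mean = sum(w * x for w, x in zip(weights, particles)) / sum(weights)
```

For this toy case the exact posterior mean is about 0.36, so the weighted particle mean should fall nearby despite the mismatched proposal.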

Hidden Markov models (HMM) form a special case of partially observed stochastic dynamical systems, in which the state of a Markov process (in discrete or continuous time, with finite or continuous state space) should be estimated from noisy observations. The conditional probability distribution of the hidden state given past observations is a well–known example of a normalized (nonlinear) Feynman–Kac distribution. These models are very flexible, because the introduction of latent (unobserved) variables makes it possible to model complex time–dependent structures, to take constraints into account, etc. In addition, the underlying Markovian structure makes it possible to use numerical algorithms (particle filtering, Markov chain Monte Carlo (MCMC) methods, etc.) which are computationally intensive but whose complexity remains rather small. Hidden Markov models are widely used in various applied areas, such as speech recognition, alignment of biological sequences, tracking in complex environments, modeling and control of networks, digital communications, etc.

Beyond the recursive estimation of a hidden state from noisy observations, the problem arises of statistical inference of HMM with general state space, including estimation of model parameters, early monitoring and diagnosis of small changes in model parameters, etc.

**Large time asymptotics** A fruitful approach is the asymptotic study, when the observation time increases to infinity, of an extended Markov chain, whose state includes
(i) the hidden state, (ii) the observation, (iii) the prediction filter (i.e. the conditional probability distribution of the hidden state given observations at all previous time
instants), and possibly (iv) the derivative of the prediction filter with respect to the parameter. Indeed, it is easy to express the log–likelihood function, the conditional least–squares
criterion, and many other classical contrast processes, as well as their derivatives with respect to the parameter, as additive functionals of the extended Markov chain.

The following general approach has been proposed:

first, prove an exponential stability property (i.e. an exponential forgetting property of the initial condition) of the prediction filter and its derivative, for a misspecified model,

from this, deduce a geometric ergodicity property and the existence of a unique invariant probability distribution for the extended Markov chain, hence a law of large numbers and a central limit theorem for a large class of contrast processes and their derivatives, and a local asymptotic normality property,

finally, obtain the consistency (i.e. the convergence to the set of minima of the associated contrast function), and the asymptotic normality of a large class of minimum contrast estimators.

This programme has been completed in the case of a finite state space, and has been generalized under a uniform minoration assumption for the Markov transition kernel, which typically holds only when the state space is compact. Clearly, the whole approach relies on the existence of an exponential stability property of the prediction filter, and the main challenge currently is to get rid of this uniform minoration assumption, so as to be able to consider more interesting situations, where the state space is noncompact.

**Small noise asymptotics** Another asymptotic approach can also be used, where it is rather easy to obtain interesting explicit results, in terms close to the language of nonlinear deterministic control theory. Taking the simple example where the hidden state is the solution to an ordinary differential equation, or a nonlinear state model, and where the observations are subject to additive Gaussian white noise, this approach consists in assuming that the covariance matrices of the state noise and of the observation noise go simultaneously to zero. Even if it is reasonable in many applications to consider that noise covariances are small, this asymptotic approach is less natural than the large time asymptotics, where it is enough (provided a suitable ergodicity assumption holds) to accumulate observations in order to obtain the expected limit laws (law of large numbers, central limit theorem, etc.). In contrast, the expressions obtained in the limit (Kullback–Leibler divergence, Fisher information matrix, asymptotic covariance matrix, etc.) take here a much more explicit form than in the large time asymptotics.

The following results have been obtained using this approach:

the consistency of the maximum likelihood estimator (i.e. the convergence to the set M of global minima of the Kullback–Leibler divergence) has been obtained using large deviations techniques, with an analytical approach,

if the abovementioned set M does not reduce to the true parameter value, i.e. if the model is not identifiable, it is still possible to describe precisely the asymptotic behavior of the estimators: in the simple case where the state equation is a noise–free ordinary differential equation, and using a Bayesian framework, it has been shown that (i) if the rank r of the Fisher information matrix I is constant in a neighborhood of the set M, then this set is a differentiable submanifold of codimension r, (ii) the posterior probability distribution of the parameter converges in the limit to a random probability distribution supported by the manifold M, absolutely continuous w.r.t. the Lebesgue measure on M, with an explicit expression for the density, and (iii) the posterior probability distribution of the suitably normalized difference between the parameter and its projection on the manifold M converges to a mixture of Gaussian probability distributions on the normal spaces to the manifold M, which generalizes the usual asymptotic normality property,

it has been shown that (i) the parameter dependent probability distributions of the observations are locally asymptotically normal (LAN), from which the asymptotic normality of the maximum likelihood estimator follows, with an explicit expression for the asymptotic covariance matrix, i.e. for the Fisher information matrix I, in terms of the Kalman filter associated with the tangent linear Gaussian model, and (ii) the score function (i.e. the derivative of the log–likelihood function w.r.t. the parameter), evaluated at the true value of the parameter and suitably normalized, converges to a Gaussian r.v. with zero mean and covariance matrix I.

Among the many application domains of particle methods, or interacting Monte Carlo methods, ASPI has decided to focus on applications in localisation (or positioning), navigation and tracking, which already covers a very broad spectrum of application domains. The objective here is to estimate the position (and also velocity, attitude, etc.) of a mobile object, from the combination of different sources of information, including

a prior dynamical model of typical evolutions of the mobile,

measurements provided by sensors,

and possibly a digital map providing some useful feature (altitude, gravity, power attenuation, etc.) at each possible position.

This Bayesian dynamical estimation problem is also called filtering, and its numerical implementation using particle methods, known as particle filtering, has found applications in

target tracking, integrated navigation, points and / or objects tracking in video sequences, mobile robotics, wireless communications, ubiquitous computing and ambient intelligence, sensor networks, etc.

Particle filtering was originally invented within the target tracking community, which has already contributed many of the most interesting algorithmic improvements and is still very active. Beyond target tracking, ASPI is also considering various other possible applications of particle filtering in positioning, navigation and tracking.

The estimation of rare event probabilities is a crucial issue in areas such as reliability, telecommunication networks, air traffic management, etc. In complex systems, analytical methods cannot be used, and naive Monte Carlo methods are clearly inefficient for accurately estimating probabilities of order 10^{-9} or smaller, say. A widespread technique is multilevel splitting, where trajectories going towards the critical set are given offspring, thus increasing the number of trajectories that eventually reach the critical set. The Feynman–Kac formalism is well suited for the design and analysis of splitting algorithms for rare event simulation.

Note that unlike importance sampling methods, this approach allows us not only to compute the probability of the rare event, but also provides realizations of the random trajectory, given that it reaches the critical set.
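A minimal fixed-level splitting sketch for a toy Gaussian random walk is given below. The levels, horizon and replication scheme are assumptions chosen for illustration, not the algorithms actually studied by the team.

```python
import random

def splitting_estimate(levels, n_per_stage, horizon, rng):
    """Fixed-level multilevel splitting for a toy Gaussian random walk:
    P(the walk exceeds levels[-1] within `horizon` steps) is estimated as the
    product of conditional stage probabilities, and trajectories that reach
    an intermediate level are replicated (given offspring) at the next stage."""
    states = [(0.0, 0)]          # surviving (position, elapsed steps) pairs
    estimate = 1.0
    for level in levels:
        reached = []
        for _ in range(n_per_stage):
            x, t = rng.choice(states)        # replicate a surviving trajectory
            while t < horizon and x < level:
                x += rng.gauss(0.0, 1.0)
                t += 1
            if x >= level:
                reached.append((x, t))
        if not reached:
            return 0.0                       # the whole population died out
        estimate *= len(reached) / n_per_stage
        states = reached
    return estimate

rng = random.Random(3)
p_hat = splitting_estimate([2.0, 4.0, 6.0], n_per_stage=2000, horizon=10, rng=rng)
```

Note that, as in the text, the surviving trajectories at the final stage are realizations of the walk conditioned on reaching the critical set, not just a probability estimate.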

Several demonstrations have been developed to illustrate that particle filtering algorithms are efficient, easy to implement, and extremely visual and intuitive by nature, for localisation, navigation and tracking problems in complex environments, with geometrical constraints, that would be very difficult to solve with usual Kalman filters. This material has proved very useful in training sessions and seminars organized in response to the demand from industrial partners (SAGEM, CNES and EDF), and also in teaching. At the moment, the following three demos are available:

Inertial position and velocity estimates are known to drift away from their true values, and need to be combined with some external source of information. In this demo, noisy measurements of the terrain height below an aircraft are obtained as the difference between (i) the aircraft altitude above sea level (provided by a pressure sensor) and (ii) the aircraft altitude above the terrain (provided by an altimetric radar), and are compared with the terrain height at any possible point (read from the elevation map). A cloud (swarm) of particles explores various possible trajectories generated from inertial navigation estimates and from a model of inertial navigation errors, and particles are replicated or discarded depending on whether or not the terrain height below the particle (i.e. at the same horizontal position) matches the available noisy measurement of the terrain height below the aircraft.

In this demo, several stations cooperate to locate and track a mobile from noisy angle measurements, in the presence of obstacles (walls, tunnels, etc.) which make the mobile temporarily invisible to one or several stations.

In this demo, a mobile robot is finding its way inside a building, a digital map of which (including walls, doorways, etc.) is provided. The initial position, velocity and orientation of the robot are unknown, and noisy measurements of its rotation and linear displacement are given by an odometer. In addition, a ring of laser sensors detects, with some error, the distance from the robot to obstacles in sixteen different directions. A cloud (swarm) of particles explores various possible trajectories generated from odometer navigation estimates and from a model of odometer navigation errors, and particles are replicated or discarded depending on whether or not the distances from the particle to obstacles match the available noisy measurements of the distances from the robot to obstacles, in all sixteen directions, and depending also on whether the generated trajectories are compatible with the presence of obstacles.
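The selection mechanism common to these demos (weighting particles by the agreement between predicted and measured map features) can be sketched in a minimal, hypothetical 1-D setting inspired by the terrain-aided navigation demo. The terrain profile, noise level and function names are all assumptions introduced for illustration.

```python
import math
import random

def terrain_weights(particles, measured_height, elevation_map, noise_std):
    """Weight each particle (a hypothetical horizontal position) by how well
    the map elevation at that position matches the noisy measured terrain
    height, under a Gaussian measurement-noise model."""
    return [
        math.exp(-0.5 * ((measured_height - elevation_map(x)) / noise_std) ** 2)
        for x in particles
    ]

# Toy 1-D monotone terrain profile (an assumed example, not a real map).
def elevation_map(x):
    return 100.0 + 10.0 * x

rng = random.Random(4)
true_position = 3.0
measured = elevation_map(true_position) + rng.gauss(0.0, 2.0)
particles = [rng.uniform(0.0, 6.0) for _ in range(1000)]
weights = terrain_weights(particles, measured, elevation_map, noise_std=2.0)
best = max(zip(weights, particles))[1]   # highest-weight particle
```

In the actual demos these weights drive the replicate-or-discard decision at each time step, concentrating the cloud around positions consistent with the map.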

A particle approximation has been studied that combines the SIS and SIR algorithms, in the sense that only a fraction of the importance weights is used for resampling, and two different approaches have been proposed to analyze its performance. The starting point is a factorization of the potential functions as the product of an importance weight function and a resampling weight function.

Even though the two different approaches used here to obtain a central limit theorem for the proposed particle approximation do actually provide the same explicit expression for the asymptotic variance, they differ in the following aspects.

In the first approach, based on a representation in path–space, the importance weight functions appear only in the test function, and it is therefore easy to analyze the joint particle approximation of unnormalized distributions (and normalizing constants and normalized distributions, as a by–product) for the reference model and for several alternate models at the same time, just by choosing the appropriate test function in the central limit theorem. In other words, this first approach is appropriate to analyze particle approximations in statistical models depending on some parameter, in sensitivity analysis, etc.

In the second approach, based on a representation in terms of a multiplicative functional, importance weights that are not used for resampling are interpreted as particles. This interpretation is of independent interest and could be used to analyze particle approximation with adaptive resampling schemes, where the decision to use resampling weights only vs. importance weights only is made dependent on an empirical criterion (effective number of particles, entropy of the sample, etc.) evaluated using the current particle approximation of the unnormalized distribution. The idea is simply to introduce a factorization of the potential functions that depends on the current unnormalized distribution: this results in a representation in terms of a McKean model, and the associated particle approximation could then be easily analyzed.

Surprisingly, very little was known about the asymptotic behaviour of the ensemble Kalman filter, whereas on the other hand, the asymptotic behaviour of many different classes of particle filters is well understood, as the number of particles goes to infinity. Interpreting the ensemble elements as a population of particles with mean–field interactions, and not only as an instrumental device producing an estimate of the hidden state as the ensemble mean value, it has been possible to prove the convergence of the ensemble Kalman filter, with a rate of order N^{-1/2}, as the number N of ensemble elements increases to infinity. In addition, the limit of the empirical distribution of the ensemble elements has been exhibited, which differs from the usual Bayesian filter. Several cases have been investigated, from the simple case where the drift coefficient is bounded and globally Lipschitz continuous, to the more realistic case where the drift coefficient is locally Lipschitz continuous, with polynomial growth. In all these cases, the observation coefficient was assumed linear, so that the analysis step for each ensemble element has exactly the same structure as the analysis step of the usual Kalman filter.
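The analysis step with a linear observation, as described above, can be sketched for a scalar state. The perturbed-observation variant shown here is one common EnKF formulation, and all numerical values are assumptions chosen for illustration.

```python
import random

def enkf_analysis(ensemble, y, obs_std, rng):
    """Ensemble Kalman filter analysis step for a scalar state with the
    linear observation h(x) = x: every ensemble element is updated with the
    same structure as the Kalman analysis step, using the ensemble covariance
    and a perturbed copy of the observation (a common EnKF variant)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    cov = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = cov / (cov + obs_std ** 2)
    return [x + gain * (y + rng.gauss(0.0, obs_std) - x) for x in ensemble]

rng = random.Random(5)
forecast = [rng.gauss(0.0, 1.0) for _ in range(5000)]   # forecast ensemble ~ N(0, 1)
analysis = enkf_analysis(forecast, y=1.0, obs_std=1.0, rng=rng)
analysis_mean = sum(analysis) / len(analysis)
# Exact Kalman analysis for this linear Gaussian toy case: posterior N(0.5, 0.5).
```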

The next step is to study the asymptotic normality of the estimation error, i.e. to prove a central limit theorem. It is somehow expected that the asymptotic variance for the ensemble Kalman filter would be smaller than the known asymptotic variance for the different brands of particle filters, just because the ensemble Kalman filter follows essentially a parametric approach, where only the first two empirical moments are propagated, whereas the particle filters follow a fully nonparametric approach.

This is a collaboration with Christophe Baehr, from the Centre National de Recherche Météorologique (CNRM) of Météo France.

The motivating application is the estimation of the Lagrangian velocity in a turbulent flow: to filter out observation noise, a Bayesian approach is used, with a simplified Langevin model as the prior for the Lagrangian velocity. This model involves local means of the Eulerian velocity field, which can be expressed in terms of the probability distribution of the Lagrangian velocity. Other nonlinear terms in the model, such as the mean pressure gradient, the turbulent kinetic energy k and its dissipation rate, are either considered as unknown random variables, with a somewhat arbitrary prior probability distribution, or are related with local Eulerian means and can then be expressed in terms of the probability distribution of the Lagrangian velocity. In other words, the proposed simplified Langevin model is a special example of a nonlinear McKean model with mean–field interactions, where the drift coefficient depends on the probability distribution of the solution.

The original estimation problem reduces to the estimation of the hidden state in a nonlinear Markov model, and numerical approximations have been studied, with two populations of particles: the first population of particles with mean–field interactions learns the unconditional probability distribution of the hidden state, whereas the second population of particles approximates the Bayesian filter, i.e. the conditional probability distribution of the hidden state given the observations.

Alternatively, since noisy observations are available, the local Eulerian means can be expressed in terms of the conditional probability distribution of the Lagrangian velocity given the observations. This results in a much simpler model, where a single population of particles is sufficient to approximate the Bayesian filter.
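A nonlinear McKean model of this kind, where the drift depends on the probability distribution of the solution, is precisely what a population of particles with mean-field interactions approximates: the distribution in the drift is replaced by the empirical distribution of the population. The pull-to-the-mean drift, the Euler scheme and all parameters below are toy assumptions, not the team's Langevin model.

```python
import random

def mckean_step(particles, dt, pull, noise_std, rng):
    """One Euler step of a toy McKean model with mean-field interaction: the
    drift of each particle depends on the probability distribution of the
    solution, here approximated by the empirical mean of the population."""
    m = sum(particles) / len(particles)
    return [
        x + pull * (m - x) * dt + noise_std * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        for x in particles
    ]

rng = random.Random(6)
particles = [rng.gauss(5.0, 3.0) for _ in range(2000)]
for _ in range(200):
    particles = mckean_step(particles, dt=0.05, pull=1.0, noise_std=0.5, rng=rng)

mean = sum(particles) / len(particles)
var = sum((x - mean) ** 2 for x in particles) / len(particles)
```

For this toy Ornstein-Uhlenbeck-like dynamics the population mean is nearly conserved while the spread relaxes towards a small stationary variance, illustrating how the empirical distribution of the population stands in for the law of the solution.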

INRIA contract ALLOC 2399 — May 2007 to August 2010

This FP6 project is coordinated by National Aerospace Laboratory (NLR) (The Netherlands). The academic partners are University of Cambridge and University of Leicester (United Kingdom), Politecnico di Milano and Universita dell'Aquila (Italy), University of Twente (The Netherlands), ETH Zürich (Switzerland), University of Tartu (Estonia), National Technical University of Athens (NTUA) and Athens University of Economics and Business (Greece), Direction des Services de la Navigation Aérienne (DSNA), École Nationale de l'Aviation Civile (ENAC), Eurocontrol Experimental Center (EEC) and INRIA Bretagne–Atlantique (France), and the industrial partners are Honeywell (Czech Republic), Isdefe (Spain), Dedale (France), NATS En Route Ltd. (United Kingdom).

The objective of iFLY is to develop both an advanced airborne self separation design and a highly automated air traffic management (ATM) design for en–route traffic, which takes advantage of autonomous aircraft operation capabilities and which aims to manage a three to six times increase in current en–route traffic levels. The proposed research combines expertise in air transport human factors, safety and economics with analytical and Monte Carlo simulation methodologies. The contribution of ASPI to this project concerns the work package on accident risk assessment methods and their implementation using conditional Monte Carlo methods, especially for large scale stochastic hybrid systems.

INRIA contract ALLOC ... — January 2007 to December 2009

This collaboration with Thalès Communications is supported by DGA (Délégation Générale à l'Armement) and is related to the supervision of the CIFRE thesis of Nordine El Baraka.

The overall objective is to study innovative algorithms for terrain–aided navigation, and to demonstrate these algorithms in four different situations involving different platforms, inertial navigation units, sensors and georeferenced databases. The thesis will focus mainly on the specific use of image sensors (optical, infra–red, radar, sonar, etc.) for navigation tasks, based on correlation between the observed image sequence and a reference image available on–board in the database, and on the integration of computer vision techniques into particle filter algorithms.
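As a rough one–dimensional analogue of this correlation principle (with a terrain elevation profile standing in for the reference image database; the model and all parameter values below are hypothetical), particles can be weighted by the match between the measured elevation and the map elevation at each candidate position:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical georeferenced database: a 1D terrain elevation profile.
# The platform measures the elevation under its unknown position while
# moving at a known speed.
terrain = np.cumsum(rng.normal(0.0, 0.2, 500))

def elevation(pos):
    """Map lookup at (possibly fractional) positions."""
    return np.interp(pos, np.arange(terrain.size), terrain)

def tan_filter(measurements, speed=1.0, n=5000, sigma_pos=0.5, sigma_meas=0.3):
    """Terrain-aided navigation sketch: particles follow the prior
    displacement model and are weighted by the agreement between the
    measured elevation and the map elevation at the particle position."""
    pos = rng.uniform(0.0, terrain.size - 1.0, n)   # no prior position information
    for z in measurements:
        pos = np.clip(pos + speed + rng.normal(0.0, sigma_pos, n),
                      0.0, terrain.size - 1.0)
        w = np.exp(-0.5 * ((z - elevation(pos)) / sigma_meas) ** 2)
        w /= w.sum()
        pos = pos[rng.choice(n, n, p=w)]            # selection / resampling
    return pos.mean(), pos.std()

true_path = 100.0 + np.arange(60)
meas = elevation(true_path) + rng.normal(0.0, 0.3, 60)
pos_mean, pos_spread = tan_filter(meas)
```

Over an informative terrain profile the particle cloud concentrates along the true trajectory; over an ambiguous (e.g. periodic) profile it remains multimodal, which is precisely the situation that richer image correlation measurements help resolve.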

INRIA contract ALLOC 2205 — December 2006 to November 2009.

This ANR project is coordinated by Alcatel–Lucent. The other partners are Alcatel Thales III–V Lab, INT Évry, INRIA Bretagne–Atlantique (project–teams ASPI and TEMICS), Kylia, Photline and XLIM (université de Limoges).

The project COHDEQ40 intends to demonstrate the potential of coherent detection associated with digital signal processing for the next generation of high density 40 Gb/s WDM systems, optimized for transparency and flexibility. Key integrated optoelectronic components and specific algorithms will be developed, and system evaluation performed. The INRIA task is to develop the signal processing algorithms needed to recover the message on the decoder side, making full use of our knowledge of the equalization and synchronization techniques involved in digital communications. First results have been obtained using a preliminary version of the processing algorithms.

January 2006 to December 2007.

This ARC is coordinated by the project–team ARMOR from IRISA / INRIA Bretagne–Atlantique. The other academic partners are the project–team MATHFI from INRIA Paris and CERMICS / ENPC, the project–team OMEGA from INRIA Méditerranée and INRIA Lorraine, and the project–team MESCAL from INRIA Rhône–Alpes; the industrial partners are Électricité de France R&D and Direction des Services de la Navigation Aérienne (DSNA), and the international partners are CWI (The Netherlands) and University of Bamberg (Germany).

The objective of RARE is to design and evaluate various Monte Carlo techniques (importance sampling, importance splitting, cross–entropy, etc.) for the simulation of rare but critical events, in several important domains of application (communication networks, financial risk management, air traffic management, etc.).
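As an example from the splitting family (a minimal sketch with illustrative parameters, not the implementation studied in the project), the adaptive multilevel splitting or “last particle” algorithm estimates the Gaussian tail probability p = P(X > 4) by repeatedly killing the lowest particle and regenerating it above the current level:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def adaptive_splitting(a=4.0, n=200, n_mcmc=20):
    """Estimate p = P(X > a) for X ~ N(0,1) by adaptive multilevel
    splitting ('last particle' variant): kill the lowest particle and
    regenerate it above the current level with a few Metropolis steps
    targeting N(0,1) restricted to (level, +inf)."""
    x = rng.normal(size=n)
    p_hat = 1.0
    while x.min() < a:
        j = int(np.argmin(x))
        level = x[j]
        p_hat *= 1.0 - 1.0 / n              # each kill costs a factor (1 - 1/n)
        i = int(rng.integers(n - 1))        # clone a random survivor (skip j)
        if i >= j:
            i += 1
        y = x[i]
        for _ in range(n_mcmc):
            prop = y + 0.5 * rng.normal()
            # Metropolis ratio for the standard normal, restricted above level
            if prop > level and rng.random() < math.exp(0.5 * (y * y - prop * prop)):
                y = prop
        x[j] = y
    return p_hat

est = adaptive_splitting()                  # true value is about 3.17e-5
```

Each kill multiplies the estimate by 1 − 1/n, so the algorithm reaches probabilities far below what crude Monte Carlo with the same simulation budget could ever observe.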

January 2007 to December 2008.

This ARC is coordinated by the project–team MERE from INRIA Méditerranée and INRA Montpellier. The other partners are the project–team Virtual Plants from INRIA Méditerranée, the fisheries ecology laboratory of Agrocampus Rennes, the CIRAD internal research unit on dynamics of natural forests in Montpellier, and the CERMICS / ENPC team on optimization and systems.

The activities of MICR intersect two scientific axes: assessment of renewable resources and statistical inference from probabilistic modeling.

INRIA contract ALLOC 2229 — January 2007 to December 2009.

This ANR project is coordinated by the project–team TEMICS from IRISA / INRIA Bretagne Atlantique. The other partners are LIS–INPG in Grenoble and université de Nice.

There are two main strategic axes in NEBBIANO: watermarking and independent component analysis, and watermarking and rare event simulation. To protect copyright owners, user identifiers are embedded in purchased content such as music or movies: this is basically what we mean by watermarking. The watermark should be “invisible” to the standard user, and as difficult to find as possible. When content is found in an illegal place (e.g. on a P2P network), the right holders decode the hidden message, find a serial number, and can thus trace the traitor, i.e. the client who has illegally broadcast his copy. However, the task is not that simple, as dishonest users might collude. For security reasons, anti–collusion codes have to be employed. Yet these solutions (also called weak traceability codes) have a non–zero probability of error, defined as the probability of accusing an innocent user. This probability should of course be extremely low, but it is also a very sensitive parameter: anti–collusion codes get longer (in terms of the number of bits to be hidden in the content) as the probability of error decreases. Fingerprint designers therefore have to strike a trade–off, which is hard to do when only a rough estimate of the probability of error is available.

The major issue for fingerprinting algorithms is that embedding large sequences also implies assessing reliability on a huge amount of data, which may be practically unachievable without rare event analysis. Our task within this project is to adapt our methods for estimating rare event probabilities to this framework, and to provide watermarking designers with much more accurate false detection probabilities than the bounds currently found in the literature. We have already applied these ideas to some randomized watermarking schemes and obtained much sharper estimates of the probability of accusing an innocent user.
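The role of rare event techniques here can be illustrated on a simplified spread–spectrum detector (all parameters are illustrative, not those of any actual fingerprinting code): an innocent user's signs are independent of the watermark code, so the false accusation probability is the probability that a sum of m fair ±1 terms exceeds the accusation threshold t. Crude Monte Carlo would essentially never observe such an event, whereas one rare event technique, a simple exponential tilting (importance sampling), estimates it directly:

```python
import numpy as np

rng = np.random.default_rng(3)

def false_accusation_prob(m=256, t=100, q=0.7, n=200_000):
    """Importance-sampling estimate of the probability that an innocent
    user's correlation score (a sum of m fair +/-1 terms) exceeds the
    accusation threshold t. Signs are drawn with tilted agreement
    probability q instead of 1/2, then reweighted by the likelihood ratio."""
    h = rng.binomial(m, q, size=n)          # number of sign agreements, tilted law
    score = 2.0 * h - m                     # correlation score under the fair law
    log_w = h * np.log(0.5 / q) + (m - h) * np.log(0.5 / (1.0 - q))
    return float(np.mean((score > t) * np.exp(log_w)))

p_est = false_accusation_prob()             # of order 1e-10
```

Tilting q toward t/(2m) + 1/2 makes the rare region typical under the sampling law, so a moderate number of samples suffices for a probability around 10⁻¹⁰.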

François Le Gland has reported on the PhD thesis of Kari Heine (Tampere University, advisor: Robert Piché). He was a member of the committee for the PhD thesis of Boujemaa Aït El Fquih (université de Technologie de Troyes and INT Évry, advisor: François Desbouvries).

Arnaud Guyader and François Le Gland are members of the “commission de spécialistes” in applied mathematics (section 26) of université de Rennes 2. François Le Gland is a member of the “commission de spécialistes” in applied mathematics (section 26) of INSA (institut national des sciences appliquées) Rennes.

François Le Gland gives a course on Kalman filtering and hidden Markov models, at université de Rennes 1, within the Master STI (école doctorale MATISSE), a 3rd year course on Bayesian filtering and particle approximation, at ENSTA, Paris, within the quantitative finance track, and a 3rd year course on hidden Markov models, at ENST Bretagne, Brest.

In addition to presentations with a publication in the proceedings, and which are listed at the end of the document, members of ASPI have also given the following presentations.

Frédéric Cérou has presented numerical results at the workshop on polymer models and related topics held in Nice in February 2007, within the programme of the Persi Diaconis visiting chair.

François Le Gland has given a talk on the multilevel splitting approach to rare event simulation at the ARC RARE (Monte Carlo methods for rare event analysis) meeting held at université de Nice in May 2007. He has given talks on Monte Carlo methods in sequential data assimilation at the Dagstuhl seminar on experimental fluid dynamics, computer vision and pattern recognition, held in Schloß Dagstuhl in March 2007, and at the meeting of the GDR Turbulence et Mélange, held at CEMAGREF in Rennes in November 2007. He has given talks on adaptive resampling at the workshop on new directions in Monte Carlo methods, held in Fleurance in July 2007, and on combining importance weights and resampling weights at the workshop on nonlinear filtering and control, held at Warwick university in August 2007.

Vu Duc Tran has presented a poster on the convergence of the ensemble Kalman filter at the workshop on empirical processes and asymptotic statistics, held in Rennes in June 2007.

François Le Gland was invited by Christophe Baehr to the Centre National de Recherches Météorologiques (CNRM) of Météo–France in Toulouse in March 2007 and September 2007, and conversely Christophe Baehr has visited ASPI for one week in May 2007.