The objectives of the sigma2 project–team are the design, analysis,
and implementation of statistical model–based algorithms
for identification, monitoring and diagnosis of complex industrial
systems.
The models considered are on one hand the stochastic state space models
of automatic control, with increasing emphasis on nonlinear models,
and on the other hand some partially stochastic models (HMM,
Petri nets, networks of automata, etc.) on discrete structures (trees,
graphs, etc.), e.g. to model distributed discrete event systems.
The major methodological contributions of the project–team, which form the
scientific background for the current research activities, are
the use of the local asymptotic approach for monitoring and diagnosis
of continuous systems,
the design of particle filters for the statistical inference of HMM
with general state space,
and the design of distributed Viterbi–like state reconstruction
algorithms for monitoring and diagnosis of distributed discrete event
systems.
The main applications considered are
monitoring and diagnosis of vibrating mechanical structures
(automobile, aerospace, civil engineering),
monitoring and diagnosis of automobile subsystems,
and fault diagnosis in telecommunication networks.

observers and filters for monitoring and diagnosis of nonlinear dynamical systems,

subspace methods for modal analysis and monitoring,

statistics of HMM with general state space, and associated particle methods,

monitoring and diagnosis of distributed discrete event systems,

approximate state estimation algorithms in graphical models and Bayesian networks, and application to turbo algorithms.

industrial projects : with Renault on monitoring and diagnosis of automobile subsystems, with Alcatel Space Industry on turbo–estimation for satellite communications, with EADS Launch Vehicles on modal analysis of a launch vehicle,

multi–partner projects : at national level on fault management in telecommunication networks (RNRT), on video transmission over IP channels (RNRT), and at European level on exploitation of flight test data under natural excitation conditions (Eurêka), on structural assessment, monitoring and control (Growth), on distributed control and stochastic analysis of hybrid systems (IST),

academic research networks : at national level on hidden Markov chains and particle filtering (Math–STIC), on particle methods (AS–STIC), and at European level on system identification (TMR), on statistical methods for dynamical stochastic models (IHP).

The sigma2 project–team is concerned with modeling issues, from
physical principles and from observation data. Hence the main
problems considered are parametric estimation and identification,
as well as model validation, hypotheses testing and diagnosis,
which allow one to detect and to explain a possible disagreement
between the assumed model and the observation data. These issues
are addressed for different classes of dynamical systems :
linear, nonlinear, and more recently, distributed discrete event
dynamical systems.
We have chosen to give a detailed presentation of three points,
where the project–team has had important contributions, and which form
the scientific and methodological foundations
for the current research activities.

See module .

- Glossary
Asymptotic local approach : statistical technique for comparing two different models of the same data sample, when the sample size $N$ grows towards infinity. In order to avoid singular situations, the deviation between the two models is normalized, proportional to $1/\sqrt{N}$. Results similar to the central limit theorem show that the test statistic for deciding between these models is asymptotically Gaussian distributed, with a different mean according to the model from which the data have been sampled.

Monitoring : as opposed to systematic inspection, the purpose is to continuously monitor the considered plant, structure or machine, based on data recorded by sensors, in order to prevent the occurrence of a malfunction or damage before it has too severe consequences.

Diagnostics : fault indicators, carrying information useful for diagnostics, in the form of the components most likely responsible for the detected fault. These indicators automatically perform the tradeoff between the magnitude of the detected changes and the identification accuracy of the reference model on the one hand, and the measurement noise level on the other hand. These indicators are computationally cheap, and can thus be embedded.

We have developed a general statistical method for confronting a model with data
measured on a process, and for the early detection of a slight mismatch
between the model and the data. Making such a decision requires comparing
the predicted effect of a slight deviation in the process with the
measurement uncertainty. The so–called « asymptotic local » approach
was introduced in the seventies by Le Cam.

This activity is in line with the earlier activity of the team on change detection. The major contribution is the development of a general and original approach to monitoring based on the statistical local approach. This monitoring approach is strengthened by the increasing interest in condition–based maintenance, fatigue prevention and aided control in a number of industrial applications. The proposed approach consists in the early detection of small deviations with respect to a reference behavior considered as normal, under usual working conditions, namely without artificial excitation, slowing down or stopping the monitored process or machine. The key principle is to design a « residual », ideally zero under normal functioning mode, with low sensitivity to noises and perturbations, and high sensitivity to small deviations (faults, damages, ...).

The behavior of the monitored system is assumed to be described by
a parametric model

where the « estimating function »

is such that

Does the new sample still correspond to the nominal model ${\mathbb{P}}_{{\theta}_{0}}$ ? If not, which components of the parameter vector $\theta $ are the most affected by this change ? In case the parameter $\theta $ has a physical meaning, answering this question allows one to perform a diagnosis of the origin of the change in behavior.

The asymptotic local approach allows a generic reduction of the validation
problem for a « dynamical system » into a universal and « static »
detection problem for the mean of a Gaussian random « vector ».
Given a

The identifiability condition:

where the asymptotic covariance matrix

where

With this approach, it is possible to decide, with a quantifiable error level,
if a residual value is significantly non zero for assessing whether a fault
has occurred.
It is important to note that the residual and the sensitivity and covariance
matrices

Assuming that the question « which fault type occurred » boils down to the
question « which component of vector

For deciding between

where

where

This approach also makes it possible to perform a physical diagnosis, namely
in terms
of a parameter

The above systematic approach to monitoring provides us with a general framework for monitoring and troubleshooting continuous industrial processes and machines. The key issue to be addressed within each parametric model class is the residual generation, or equivalently the choice of the estimating function. The present activities of the team address two main model classes :
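The reduction to a Gaussian mean detection problem can be illustrated on the simplest possible case, an i.i.d. sample with a small mean shift (all numerical values here are invented for illustration; real applications use the estimating functions and sensitivity matrices discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 10_000, 3
theta0 = np.zeros(d)                     # nominal parameter (known reference)
delta = np.array([0.0, 0.0, 0.1])        # small deviation, of order 1/sqrt(N)

def residual(samples, theta):
    """Normalized residual: asymptotically N(0, I) under the nominal model."""
    return np.sqrt(len(samples)) * (samples.mean(axis=0) - theta)

# data from the nominal model, and from a slightly deviated model
z_ok = rng.normal(theta0, 1.0, size=(N, d))
z_faulty = rng.normal(theta0 + delta, 1.0, size=(N, d))

zeta_ok = residual(z_ok, theta0)
zeta_faulty = residual(z_faulty, theta0)
t_ok = zeta_ok @ zeta_ok                 # chi-square statistic, d degrees of freedom
t_faulty = zeta_faulty @ zeta_faulty
threshold = 11.34                        # 1% quantile of the chi-square(3) law
```

Under the nominal model the test statistic follows a central chi-square law; under the deviated model it is non-central, so comparing it to a quantile threshold gives a detector with a quantifiable false-alarm level.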

- Glossary
Hidden Markov model : stochastic automaton (or Markov chain) whose internal state is not observed directly, and should be estimated from noisy observations. More generally, the model itself should be identified. These models are widely used in speech recognition, for the alignment of biological sequences, and more recently for the diagnosis of distributed discrete events dynamical systems, see module .

Numerical method for approximating the conditional probability distribution of a hidden state given the observations, by means of the empirical distribution of a system of particles, which explore the state space following independent realizations of the state equation (or state transition), and which are redistributed according to their consistency (quantified by the likelihood function) with the observations.

Statistical inference of hidden Markov models relies on studying the asymptotic behavior of the prediction filter (i.e. the probability distribution of the hidden state given observations at all previous time instants) and of its derivative with respect to the parameter. Two different types of asymptotics have been studied : (i) large time asymptotics, where we have proposed a systematic approach based on an exponential stability property of the filter, and (ii) small noise asymptotics, where it is easy to obtain interesting explicit results, in terms close to the language of nonlinear deterministic control theory.

We have also proposed numerical methods of Monte Carlo type,
very easy to implement and known under the generic name of
particle filters, for the approximate computation of the filter,
of the linear tangent filter, and other filters associated with
the recursive computation of additive functionals of the hidden
state (as for instance in the em algorithm).

Hidden Markov models (HMM) form a special case of partially
observed stochastic dynamical systems, in which the state of a Markov
process (in discrete or continuous time, with finite or continuous
state space) should be estimated from noisy observations.
These models are very flexible, because the introduction of latent
(unobserved) variables makes it possible to model complex time–dependent
structures, to take constraints into account, etc.
In addition, the underlying Markovian structure makes it possible
to use numerical algorithms (particle filtering, Markov chain Monte Carlo
methods (MCMC), etc.) which are computationally intensive
but whose implementation complexity is rather small.
Hidden Markov models are widely used in various applied areas, such as
speech recognition, alignment of biological sequences, tracking in
complex environment, modeling and control of networks, digital
communications, etc.

Beyond the recursive estimation of a hidden state from noisy
observations, the problem arises of statistical inference of HMM
with general state space, including estimation of model parameters,
early monitoring and diagnosis of small changes in model parameters,
see module , etc.

Our main contribution is the asymptotic study, when the observation time increases to infinity, of an extended Markov chain, whose state includes (i) the hidden state, (ii) the observation, (iii) the prediction filter (i.e. the conditional probability distribution of the hidden state given observations at all previous time instants), and possibly (iv) the derivative of the prediction filter with respect to the parameter. Indeed, it is easy to express the log–likelihood function, the conditional least–squares criterion, and many other classical contrast processes, as well as their derivatives with respect to the parameter, as additive functionals of the extended Markov chain.

We have proposed the following general approach :

first, we prove an exponential stability property (i.e. an exponential forgetting property of the initial condition) of the prediction filter and its derivative, for a misspecified model,

from this, we deduce a geometric ergodicity property and the existence of a unique invariant probability distribution for the extended Markov chain, hence a law of large numbers and a central limit theorem for a large class of contrast processes and their derivatives, and a local asymptotic normality property,

finally, we obtain the consistency (i.e. the convergence to the set of minima of the associated contrast function), and the asymptotic normality of a large class of minimum contrast estimators.
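The exponential forgetting property underlying the first step can be observed numerically. The sketch below (a toy two–state HMM with invented parameters) runs the prediction filter from two very different initial conditions and tracks the total–variation gap between them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-state HMM with Gaussian observations; all numbers are illustrative.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # strictly positive transition matrix
means = np.array([-1.0, 1.0])       # emission means, unit-variance Gaussian noise

def simulate(T):
    """Simulate T observations from the HMM."""
    x, ys = 0, []
    for _ in range(T):
        x = rng.choice(2, p=Q[x])
        ys.append(rng.normal(means[x], 1.0))
    return ys

def filter_step(pred, y):
    """One step of the prediction filter: correct by the likelihood, then predict."""
    lik = np.exp(-0.5 * (y - means) ** 2)
    post = pred * lik
    post /= post.sum()
    return post @ Q                 # predicted distribution for the next state

ys = simulate(60)
p, q = np.array([0.99, 0.01]), np.array([0.01, 0.99])   # two initial conditions
gaps = []
for y in ys:
    p, q = filter_step(p, y), filter_step(q, y)
    gaps.append(abs(p - q).sum())   # total-variation-type gap between the filters
```

Because the transition matrix mixes, the two filters merge exponentially fast regardless of their (here maximally disagreeing) initializations, which is exactly the forgetting property the theory requires.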

This programme has been completed in the case of a finite state
space

Clearly, the whole approach relies on the exponential
stability property of the prediction filter, and the main current
challenge is to get rid of the uniform minoration assumption for
the Markov transition kernel

Another asymptotic approach can also be used, where it is rather easy
to obtain interesting explicit results, in terms close to the language
of nonlinear deterministic control theory

Taking the simple example where the hidden state is the solution of an ordinary differential equation, or a nonlinear state model, and where the observations are subject to additive Gaussian white noise, this approach consists in assuming that the covariance matrices of the state noise and of the observation noise go simultaneously to zero. While it is reasonable in many applications to consider that the noise covariances are small, this asymptotic approach is less natural than the large time asymptotics, where it is enough (provided a suitable ergodicity assumption holds) to accumulate observations in order to see the expected limit laws (law of large numbers, central limit theorem, etc.). On the other hand, the expressions obtained in the limit (Kullback–Leibler divergence, Fisher information matrix, asymptotic covariance matrix, etc.) take here a much more explicit form than in the large time asymptotics.

The following results have been obtained during the last few years using this approach :

the consistency of the maximum likelihood estimator (i.e. the convergence to the set of global minima of the Kullback–Leibler divergence) has been obtained using large deviations techniques, with an analytical approach,

if the abovementioned set does not reduce to the true parameter value, i.e. if the model is not identifiable, it is still possible to describe precisely the asymptotic behavior of the estimators : in the simple case where the state equation is a noise–free ordinary differential equation, we have shown, in a Bayesian framework, that (i) if the rank of the Fisher information matrix $\mathcal{I}$ is constant in a neighborhood of this set, then this set is a differentiable submanifold, (ii) the posterior probability distribution of the parameter converges in the limit to a random probability distribution supported by this manifold, absolutely continuous w.r.t. the Lebesgue measure on the manifold, with an explicit expression for the density, and (iii) the posterior probability distribution of the suitably normalized difference between the parameter and its projection on the manifold converges to a mixture of Gaussian probability distributions on the normal spaces to the manifold, which generalizes the usual asymptotic normality property,

more recently, we have shown that (i) the parameter–dependent probability distributions of the observations are locally asymptotically normal (LAN), from which we deduce the asymptotic normality of the maximum likelihood estimator, with an explicit expression for the asymptotic covariance matrix, i.e. for the Fisher information matrix $\mathcal{I}$, in terms of the Kalman filter associated with the linear tangent linear Gaussian model, and (ii) the score function (i.e. the derivative of the log–likelihood function w.r.t. the parameter), evaluated at the true value of the parameter and suitably normalized, converges to a Gaussian r.v. with zero mean and covariance matrix $\mathcal{I}$.

Studying hmm with general state space immediately raises the
question of computing, even approximately, the optimal filter and
related quantities, such as the derivative of the optimal filter
w.r.t. an unknown parameter of the model (more generally, the
linear tangent optimal filter),
or other filters associated with additive functionals of the
hidden state (an example of such a functional is the auxiliary
function in the EM algorithm).

An attractive and promising answer has been proposed recently
under the generic name of particle filtering, which is the
subject of intense research activities, in the direction of practical
implementation, and in the direction of extension to more general
models and problems.
Most of the currently available mathematical results are presented
in the survey paper

In its simplest version, known as the bootstrap filter, the algorithm consists in approximating
the optimal filter by means of the empirical probability distribution
of a particle system.
Between two observations, particles explore the state space by
mimicking independent trajectories / transitions of the hidden state
sequence, and whenever a new observation becomes available,
a resampling step (with replacement) is performed, by which
particles are selected in terms of their consistency (quantified by
the likelihood function). As a result of this resampling mechanism,
which is the key step in the algorithm, particles automatically
concentrate in regions of interest of the state space.
The algorithm is extremely easy to implement, since it is enough
to simulate independent trajectories / transitions of the
hidden state sequence.
Interacting particle methods can also be used to solve various
statistical problems for hmm with general state space,
including recursive estimation of model parameters,
early monitoring and diagnosis of small changes in model parameters,
etc.
These two problems require computing, at least approximately,
the linear tangent optimal filter, in addition to the optimal filter.
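The mutation / weighting / selection steps described above can be sketched as follows (a scalar random walk observed in Gaussian noise; the model and all numerical values are illustrative, not taken from any application in the text):

```python
import numpy as np

rng = np.random.default_rng(2)

T, n_particles = 100, 2000
sig_x, sig_y = 0.1, 0.5                    # state / observation noise levels

# simulate a hidden random walk and its noisy observations
x = np.cumsum(rng.normal(0, sig_x, T))
y = x + rng.normal(0, sig_y, T)

particles = rng.normal(0, 1.0, n_particles)
estimates = []
for t in range(T):
    # mutation: propagate particles through the state transition
    particles = particles + rng.normal(0, sig_x, n_particles)
    # weighting: likelihood of the new observation for each particle
    w = np.exp(-0.5 * ((y[t] - particles) / sig_y) ** 2)
    w /= w.sum()
    estimates.append(w @ particles)        # weighted filter estimate
    # selection: resample with replacement according to the weights
    particles = rng.choice(particles, size=n_particles, p=w)

rmse_filter = np.sqrt(np.mean((np.array(estimates) - x) ** 2))
rmse_raw = np.sqrt(np.mean((y - x) ** 2))
```

Even this crude implementation tracks the hidden state much better than the raw observations do, and the resampling step keeps the particle cloud concentrated where the likelihood is high.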

- Glossary
Generically denotes systems that combine aspects which are usually treated separately. For example : discrete vs. continuous state space, discrete vs. continuous time, logical vs. real time, time driven vs. event driven, equations or constraints vs. random variables, etc.

see module .

In a DES, two events are called concurrent if they may occur in any order, or simultaneously, without changing the outcome.

A large part of the investigations in this area is carried out jointly with
the triskell project–team (Claude Jard), the common target application
being the supervision of telecommunication networks, in particular fault
diagnosis. For this, we extended techniques known from HMM to
distributed systems. Those systems are modeled by safe Petri nets,
under a dynamics (or, following computer science usage, a semantics)
that is adequate for the distributed nature of those systems, namely one
based on « partial orders ».

For this, we had to :

develop a state observer technique based on the unfolding of a Petri net or a product of automata,

introduce a new notion of « partially stochastic » system that fits into the partial order semantics, and

adapt the Viterbi algorithm, which computes a max–likelihood trajectory for HMM, to the partial order setting.
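For reference, the classical sequential Viterbi recursion that is adapted to the partial order setting can be sketched on a toy two–state HMM (all probabilities below are invented for illustration):

```python
import numpy as np

# Log-domain transition, emission, and initial probabilities of a toy HMM
# with two states and binary observations (illustrative numbers).
logQ = np.log(np.array([[0.7, 0.3],
                        [0.4, 0.6]]))        # transitions
logB = np.log(np.array([[0.9, 0.1],
                        [0.2, 0.8]]))        # emission probs for observations {0, 1}
log_init = np.log(np.array([0.5, 0.5]))

def viterbi(obs):
    """Max-likelihood state trajectory by dynamic programming."""
    T, S = len(obs), 2
    delta = log_init + logB[:, obs[0]]       # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)       # backpointers to the best predecessor
    for t in range(1, T):
        scores = delta[:, None] + logQ       # scores[i, j]: end in j coming from i
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]             # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

path = viterbi([0, 0, 1, 1, 1])              # → [0, 0, 1, 1, 1]
```

The partial order adaptation replaces the single time index by tiles of concurrent events, but the max-plus dynamic programming principle is the same.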

An important result in this area is the fact that the diagnosis / observation algorithms can be « distributed ». That is, only local states of the supervised system — which is often of considerable size — need to be treated. This result is based on a close analogy between Markov fields (a.k.a. networks of random variables) and distributed systems viewed as networks of elementary dynamical systems. Therefore, the existing inference algorithms for Markov fields carry over naturally to distributed systems, thus offering a wealth of techniques.

Our current work aims at

the further development of the probabilistic framework, towards identification of local transition probabilities,

a greater robustness of the supervision algorithms, especially w.r.t. incomplete knowledge of the system,

a better understanding of relations between distributed algorithms and known models for concurrent processes (specifically the wide variety of formalisms for event structures),

being able to handle distributed systems with changing structure.

We are dealing with distributed systems that are at the same time des,
or finite state machines : they are obtained as the combination of elementary
subsystems, and therefore distributed. Petri nets constitute typical
examples for this : one can easily connect two Petri nets via « shared »
places. The tokens on those places can thus pass from one net to the other,
and one can model in this way interactions such as resource transmission
or mutual exclusion. More generally, a distributed system can be viewed as
a network of elementary automata, each acting on some set of state variables.
Interaction is then brought about by shared variables, which play the role
of places in Petri nets. The graph of connections between the subsystems thus
defines an interaction network.

Modeling a complex system becomes much easier under a modular approach. However, the real advantage of distributed systems lies elsewhere : by identifying the interactions between subsystems, one also exhibits the domains of concurrent behaviour. In other words, as long as two subsystems do not interact via their shared variables, they evolve independently. Though this may sound like a tautology, it deserves to be noted : there exist « regions of concurrency » between subsystems, inside which the respective behaviours can be governed by independent clocks, without impact on the global behaviour. This means that events in a distributed system are only partially ordered in time, that is, there is no adequate notion of global time. The trajectories of distributed systems are hence to be described by a « partial order », or « true concurrency », semantics, in which partial orders of events are given rather than sequences. The space of trajectories is given by the « unfolding », rather than the marking graph, of the system : this leads to a considerable reduction in the size of the trajectory space.

Several concepts of stochastic Petri nets have been proposed in the literature : however, except in the case of free choice nets, none of these models correctly reflects the « concurrency » of transitions by « stochastic independence ». In fact, two concurrent transitions that share no place should behave, under randomized firing, like independent random variables.

It can be shown that this requirement contradicts any Petri net dynamics given by Markov chains, such as in all usual models of stochastic Petri nets. Indeed, the concept of Markov chain requires a global time to synchronize events : it follows that the probability of two concurrent events will depend on their order of occurrence.

A better framework is given by the css model in

The diagnosis problem consists in identifying the most likely behaviour
of the system, given the partial order of system events observed.
We have established that it suffices to consider « tiles » formed by
a transition and the set of variables (or places) on which the
transition acts. A « likelihood » is associated to each individual tile,
based on the probabilistic model and conditional on the observations.
The system trajectories can thus be viewed as puzzles formed by those
tiles, and their likelihood is obtained as the product of the tile
likelihoods. Max–likelihood diagnosis therefore amounts to constructing
inductively the optimal puzzles via dynamic
programming

This principle works if the observations from the entire system are
collected in a single site, the « supervisor ». This assumption is not
the most appropriate one in practice : rather, it seems natural to have
a distributed structure of observers for a distributed system. We have
shown that there is no need to centralize the observations, and that
distributed hmm algorithms are much more efficient. Such algorithms
rely on several agents, each of them handling observations
emanating from the subsystem under its supervision, by means of tiles local
for that subsystem. In this way, part of the global puzzle is constructed
locally. Communication among agents concerns only those tiles pertaining
to shared resources. One thus obtains a truly parallel and
asynchronous reconstruction of the global puzzles, while handling only local
states and exploiting fully the interaction graph, as well as concurrency
between subsystems. These distributed algorithms are being experimented with
in an industrial network management platform, for SDH
or GMPLS/WDM network models, see the Magda2 project,
module .

Applications involving an industrial partnership, with access to real data, cover a wide range of domains : vibration mechanics, onboard electronics for the automotive industry, diagnosis of large telecommunication networks. We have chosen to give a detailed presentation of our activity in vibration mechanics, which clearly represents our main cumulated investment, and we describe next our activity in telecommunications.

- Glossary
Identification of the « modes of vibration », namely 1) the eigenfrequencies and associated damping coefficients, and 2) the observed components of the associated eigenvectors or « modeshapes ».

Generic name for an identification algorithm for linear systems based on output covariance matrices, in which different subspaces of Gaussian random vectors play a key role.
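To make the idea concrete, here is a hedged sketch of a covariance–driven subspace method on a single simulated mode (the model, parameter values and matrix sizes are invented for illustration; this is not the team's implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

# One damped vibration mode: pole at rho * exp(+/- i*omega), scalar output.
rho, omega = 0.97, 0.5
A = rho * np.array([[np.cos(omega), -np.sin(omega)],
                    [np.sin(omega),  np.cos(omega)]])
C = np.array([1.0, 0.0])

T = 100_000
x = np.zeros(2)
y = np.empty(T)
for t in range(T):
    x = A @ x + rng.normal(0, 1.0, 2)       # unmeasured ambient excitation
    y[t] = C @ x + rng.normal(0, 0.5)       # noisy accelerometer-like output

# Empirical output covariances R_k = E[y_{t+k} y_t], stacked in a Hankel matrix
p = 10
R = [np.mean(y[k:] * y[:T - k]) for k in range(2 * p + 1)]
H = np.array([[R[i + j + 1] for j in range(p)] for i in range(p)])

# SVD -> observability matrix -> shift invariance -> system matrix estimate
U, s, _ = np.linalg.svd(H)
O = U[:, :2] * np.sqrt(s[:2])
A_hat = np.linalg.pinv(O[:-1]) @ O[1:]
omega_hat = abs(np.angle(np.linalg.eigvals(A_hat)))[0]   # estimated mode angle
```

The eigenvalues of the estimated transition matrix recover the pole, i.e. the eigenfrequency and damping of the mode, using output data only and without measuring the excitation.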

In a long series of investigations, the sigma2 project–team has developed
an original set of techniques performing the following services,
« for a structure under ambient operating conditions » :
1) modal analysis,
2) correlation between measurements and model,
3) detection and diagnosis of fatigue.
Performing in–operation data processing imposes the following constraints :
1) the excitation is natural, it results from the very functioning
conditions of the structure, and it is often nonstationary,
2) the excitation is not measured.

In–situ damage monitoring and predictive maintenance for complex mechanical structures and rotating machines is of key importance in many industrial areas : power engineering (rotating machines, core and pipes of nuclear power plants), civil engineering (large buildings subject to hurricanes or earthquakes, bridges, dams, offshore structures), aeronautics (wings and other structures subject to stress), automobile, rail transportation. These systems are subject to both fast and unmeasured variations in their environment and small slow variations in their vibrating characteristics. Of course, only the latter are to be detected, using the available data (accelerometers), and noting that the changes of interest (1% in eigenfrequencies) are not visible on spectra.

For example, offshore structures are subject to the turbulent action of
the swell which cannot be considered as measurable. Moreover,
the excitation is highly time–varying, according to wind and weather
conditions. Therefore the challenge is as follows.
The available measurements do not separate the effects of the
external forces from the effect of the structure, the external forces vary
more rapidly than the structure itself (fortunately !),
damages or fatigues on the
structure are of interest, while any change in the excitation is meaningless.
Expert systems based on a human–like exploitation of
recorded spectra can hardly work in such
a case; the FDI (fault detection and isolation) method must rather rely
on a model which helps discriminate between the two
mixed causes of the changes that are contained in the
accelerometer measurements.

A different example concerns rotating machines such as huge alternators in electricity power plants. Most of the available vibration monitoring techniques require stopping production, in order to vary the rotation speed and thereby obtain reliable spectra. When monitoring the machine without slowing it down, one is faced with the following challenge. The non–stationary and non–measured excitation is due, on the one hand, to load unbalancing, which creates forces with known frequency range (harmonics of the rotation speed) but unknown geometry, and, on the other hand, to turbulence caused by steam flowing through the alternator and friction at the bearings. The problem is again to discriminate between two mixed causes of changes in the vibrating characteristics.

Classical modal analysis and vibration monitoring methods basically process
data registered either on test beds or under specific excitation or rotation
speed conditions.
The object of Eureka project sinopsys « Model Based
Structural Monitoring Using in–Operation System Identification »
coordinated by lms (Leuven Measurements Systems, Leuven, Belgium)
has been to develop and integrate modal analysis and vibration
monitoring algorithms devoted to the processing of data recorded
in–operation,
namely during the actual functioning of the considered structure or machine,
without artificial excitation, speeding down or stopping.
The sigma2 project–team
and the metalau project–team of inria at Rocquencourt
have been jointly involved within sinopsys.

The main contribution of inria to sinopsys
is an original set of algorithms for multi–sensor signal processing
(e.g. accelerometer measurements) that produces intelligent warnings,
that is to say warnings that reveal the hidden causes of the defects
or damage undergone by the machine or structure. This software can be
embedded and work online. Among the actual data that Inria had to process
with this software are the Ariane 5 test flight data.

During the first stage of the sinopsys project, focused on modal
analysis, we have improved our interactive mode selection and validation
procedures, and developed a module for « model validation »,
performing a crossed validation of an identification result on a validation
data set. Within a second stage, we have developed a fatigue detection tool,
which evaluates for each mode the extent of the modification in the modal
behavior. This works on laboratory data sets involving a measured excitation,
as well as on in–situ data without measuring the excitation.
A third stage has been devoted to the development of a fatigue diagnostics
tool, where fatigue or damage is explained in terms of modifications
in mass density or Young's modulus, with a localization of the changes on
the structure.

The entire set of tools has been integrated
within the lms cada_x system on the one hand,
and within the modal / cosmad toolbox for the Scilab
« freeware » on the other hand,
see

Another cooperation in the aeronautics domain, still within the Eureka framework, has been devoted to improving the exploitation of flight test data, under natural excitation conditions (e.g. turbulence), enabling more direct exploration of the flight domain, with improved confidence and at reduced cost, see module .

A new project, launched by the Computer Security program of the French Ministry of Research, is devoted to statistical rejection of environmental effects in structural health monitoring, and related issues, see module .

These monitoring and diagnosis algorithms were generalized to models that are more complex than vibration models and can be used for monitoring in industrial process control (gas turbine, electric plant) or for on board diagnosis (catalytic exhaust and particulate filter of automobiles, see module ).

- Glossary
Denotes the top level of management of telecommunication, i.e. supervision, monitoring, maintenance, etc.

Comprises processing, filtering and interpretation of alarms sent across the network.

Interpretation of alarms, based on which reconfiguration and maintenance operations can be triggered.

see module .

Error correcting codes introduced by Berrou, Glavieux and Thitimajshima

, joining two convolution codes by means of an interleaver. The name is due to the iterative decoding algorithm, which uses « soft » probabilistic information.

Currently, important efforts in network management concern « fault
diagnosis ». Based on a suitably abstract network model, one strives to
automatically generate algorithms for behavior monitoring, and diagnosis
in particular.
This technique allows the diagnosis software to be updated more easily
as the network evolves. The originality of this approach lies in the fact
that we aim at a modular modeling from the start, so as to obtain
distributed diagnosis algorithms. This work is carried out jointly with
the triskell project-team, and is at the heart of
the exploratory rnrt project magda2, see
module .

Besides, we are also interested in turbo–codes and their extensions. Digital communications at the physical layer have evolved from a traditional approach where the different functions of modulation, coding, and equalization were considered separately, to an integrated systems approach. To achieve this goal, we propose joint design of different functions for receivers based on turbo–techniques.

In cooperation with Lennart Ljung of Linköping University and
Anatoli Juditsky of université Joseph Fourier,
we develop a Matlab toolbox for nonlinear system identification.
This toolbox is designed as an extension to nonlinear systems of the
existing System Identification Toolbox (sitb) authored by
Lennart Ljung and distributed by The Mathworks.
In addition to classical parametric estimation for grey–box
models, algorithms based on non–parametric estimation with
regression trees, wavelet networks and neural networks
are also implemented. The dynamic model structures include
nonlinear autoregression with exogenous inputs (nlarx)
and some output error models.

On the basis of a joint study with colleagues of Linköping University
in Sweden, the command line syntax and graphical user interface of the
new toolbox are designed closely following those of
the sitb. An experienced user of the sitb can thus easily
learn to work with the new toolbox.

The considered model structures cover both
parametric state space models for nonlinear system grey–box modeling
and non–parametric models for black–box modeling.
The black–box part includes nonlinear autoregressive with exogenous
inputs (nlarx) models
and block–oriented output error models (Wiener and Hammerstein models).
For the estimation of nonlinearities,
general estimators such as regression trees, wavelet networks and neural
networks are proposed, in addition to some specific nonlinearity
estimators (saturation and dead zone).
The main algorithmic originality is the use of non–iterative
algorithms for nonlinearity estimation with regression trees and wavelet
networks. These algorithms are faster than the classical gradient–based
iterative search and avoid the problem of bad local minima.

The implementation of the toolbox relies strongly on the object–oriented programming functionalities of Matlab. The algorithms have been implemented and tested. Current development efforts mainly concern improving the user interface.

In collaboration with Maurice Goursat (metalau project-team at inria
Rocquencourt), we have pursued the development of the Scilab toolbox devoted
to modal analysis and vibration monitoring of structures or machines subject
to known or ambient (unknown) excitation.

This toolbox performs the following tasks :

Output–only subspace–based identification, see modules and . The problem is to identify the eigenstructure (eigenvalues and observed components of the associated eigenvectors) of the state transition matrix of a linear dynamical system, using only the observation of some measured outputs, summarized into a sequence of covariance matrices corresponding to successive time shifts. For this method, see .

Input–output subspace–based identification, see modules and . The problem is again to identify the eigenstructure, but now using the observation of some measured inputs and outputs, summarized into a sequence of cross–covariance matrices. For this method, see .

Subspace–based identification through moving sensors data fusion, see module 6.2 of our 2001 activity report. The problem is to identify the eigenstructure based on a joint processing of signals recorded at different time periods and under different excitations. For this method, see .

Automated online identification package, see module . The problem is still to identify the modes of a mechanical structure. The main question is to react to non–stationarities and fluctuations in the evolution of the modes, especially the damping. The developed package extracts such modes through a graphical interface that allows the user to follow the evolution of all frequencies and dampings over time, and to analyse the stabilization diagram from which they were extracted. Automated modal extraction is performed based on the automated analysis and classification of the stabilization diagram. For this method, see .

Damage detection, see modules , and . Based on the processing of vibration measurements, the problem is to perform early detection of small deviations of the structure w.r.t. a reference behavior considered as normal. Such early detection of small deviations is mandatory for fatigue prevention. The algorithm confronts a new data record, summarized into covariance matrices, with a reference modal signature. For this method, see the articles .

Damage monitoring. This algorithm solves the damage detection problem above, based on on–line (and not block) processing. For this method, see .

Modal diagnosis, see modules , and . This algorithm finds the modes most affected by the detected deviation. For this method, see and .

Damage localization, see modules , and . The problem is to find the part of the structure which has been affected by the damage, and the associated structural parameters (masses, stiffness coefficients). For this method, see and .

Optimal sensor positioning for monitoring. A criterion is computed which quantifies the relevance of a given sensor number and positioning for the purpose of monitoring. For this criterion, see .
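
As a hedged illustration of the covariance–driven output–only step underlying the first task, the following numpy sketch simulates a single–mode system, stacks estimated output covariances into a block Hankel matrix, and recovers the mode by svd and shift invariance. The simulated system and all sizes are invented for the example; the toolbox itself does far more.

```python
import numpy as np

# invented single-mode system: x_{k+1} = F x_k + v_k, y_k = H x_k
rho, w = 0.98, 0.6                      # pole modulus and angle (the "mode")
F = rho * np.array([[np.cos(w), -np.sin(w)],
                    [np.sin(w),  np.cos(w)]])
H = np.array([1.0, 0.0])
rng = np.random.default_rng(1)
x = np.zeros(2)
ys = []
for _ in range(20000):
    x = F @ x + 0.1 * rng.standard_normal(2)
    ys.append(H @ x)
y = np.array(ys)

# covariance-driven subspace step: Hankel matrix of output covariances
p = q = 10
R = [float(np.mean(y[k:] * y[:len(y)-k])) for k in range(p + q + 1)]
Hank = np.array([[R[i + j + 1] for j in range(q)] for i in range(p)])
U, s, _ = np.linalg.svd(Hank)
O = U[:, :2] * np.sqrt(s[:2])           # observability matrix (up to basis)
A = np.linalg.pinv(O[:-1]) @ O[1:]      # shift invariance
eig = np.linalg.eigvals(A)
print(np.abs(eig), np.abs(np.angle(eig)))   # close to (0.98, 0.6)
```

The eigenvalues of the extracted matrix are invariant under the unknown change of basis, which is why the mode is recovered although the state coordinates are not.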

This software has been registered at the app.
Its modules have been tested by different partners, especially some
industrial partners within the flite project,
see module .

See module .

Our study on adaptive observers is mainly for the purpose of fault diagnosis by means of state and parameter estimation. When faults are modeled as parametric changes, adaptive observers provide direct and indirect methods for fault diagnosis.

Recently, a globally convergent adaptive observer has been developed for some nonlinear systems, based on a non trivial combination of a nonlinear high gain observer and a linear adaptive observer. This year's study has addressed several aspects of both theoretical and practical interest.

The first one is the generalization of the results to
a larger class of nonlinear systems, while still
preserving the global convergence of the algorithm.
Through joint work with
Gildas Besançon (lag, Laboratoire d'Automatique de Grenoble)
and Hassan Hammouri (lagep, Laboratoire d'Automatique et de Génie
des Procédés, Lyon),
the result is now extended to the case of nonlinear systems
with products between unknown parameters and nonlinear
functions of the states.

In order to deal with more general nonlinear systems, research
has been carried out on the local convergence of adaptive observers.
The resulting algorithm is applicable to a large class of
nonlinear systems of quite general structure.
It has been tested in a preliminary study on particulate filter fault
detection for cars equipped with diesel engines, in the framework of
the thesis of Olivier Perrin.
The result will be presented at the 1st ifac Symposium on Advances
in Automotive Control.

The role of the persistent excitation condition is better understood,
as explained in .
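
The principle of joint state/parameter estimation, and the role of the persistent excitation condition, can be illustrated on a toy discrete–time scalar plant. This is only a sketch of the adaptive–observer idea in its simplest equation–error form; it does not reproduce the high–gain nonlinear observer of the work described above, and all numerical values are invented.

```python
import numpy as np

# invented scalar plant x_{k+1} = a*x_k + theta*u_k with y_k = x_k
# measured; a is assumed known, theta is the unknown parameter
a, theta = 0.7, 2.5
rng = np.random.default_rng(2)

x, th = 0.0, 0.0                         # plant state, parameter estimate
gamma = 0.05                             # adaptation gain
for k in range(2000):
    u = rng.standard_normal()            # persistently exciting input
    x_next = a*x + theta*u               # plant step
    yh_next = a*x + th*u                 # one-step prediction with estimate
    e = x_next - yh_next                 # prediction error = (theta - th)*u
    th = th + gamma*u*e                  # gradient-type update (needs PE)
    x = x_next
print(round(th, 2))                      # converges to 2.5 under PE
```

Replacing the random input by u = 0 shows the role of persistent excitation : the prediction error then vanishes identically and the parameter estimate never moves.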

This research is conducted in cooperation with Maurice Goursat (metalau
project-team at inria Rocquencourt), in the framework of the
Eureka project flite which has followed sinopsys,
see modules
and .

The problem is the identification and the monitoring of
the eigenstructure (eigenvalues and observed components of the
associated eigenvectors) of the state transition matrix of a linear
dynamical system, where the dimension of the observation is smaller
than the dimension of the state.

The achievements of this year include conceptual investigations, experimental results and software developments.

On the conceptual side, our damage localization paper has been accepted
for publication.
During the visit of Ivan Goethals (kul / sista),
we have tried to speed up the algorithm by implementing recursive versions
of the svd.
For the input–output case, recursive subspace methods have been studied
to extend the applicability of those algorithms from off–line to on–line
frameworks.
In particular, fast approximate subspace tracking algorithms
such as projection approximation subspace tracking (past)
and its instrumental variable extension iv–past have been studied.
We have derived a recursive version of the classical subspace realization
algorithm using iv–past.
This algorithm has been shown to perform relatively well for tracking
vibration modes from in–flight test data sets.
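
For illustration, here is a hedged numpy sketch of the basic past recursion (without the instrumental variable extension actually used in our work), tracking a fixed two–dimensional signal subspace; the data model and dimensions are made up for the example.

```python
import numpy as np

def past_step(W, P, x, beta=0.99):
    """One PAST update (projection approximation subspace tracking):
    an RLS-style recursion tracking the dominant r-dim subspace."""
    y = W.T @ x
    h = P @ y
    g = h / (beta + y @ h)
    P = (P - np.outer(g, h)) / beta
    e = x - W @ y
    W = W + np.outer(e, g)
    return W, P

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 2))          # invented 2-D signal subspace in R^8
W = np.linalg.qr(rng.standard_normal((8, 2)))[0]
P = 100.0 * np.eye(2)
for _ in range(3000):
    x = A @ rng.standard_normal(2) + 0.01 * rng.standard_normal(8)
    W, P = past_step(W, P, x)

# the columns of A should now lie almost in span(W)
coef = np.linalg.lstsq(W, A, rcond=None)[0]
err = np.linalg.norm(A - W @ coef) / np.linalg.norm(A)
print(float(err))                        # small relative residual
```

The forgetting factor beta is what makes the recursion able to track slowly varying subspaces, the situation of interest for in–flight data.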

Based on the work on interacting particle systems,
see module ,
we have studied the performance of such algorithms for estimating
and tracking the parameters
of linear systems. The underlying application which motivated the study
is the tracking of vibration modes for in–flight data sets.
The classical subspace algorithm and its recursive implementation
require the computation of an svd for the extraction
of the state matrix.
These algorithms do not keep track of the previous state matrix as
the number of observations grows to infinity : the information from
previous data is not accumulated.
The idea underlying the study is to apply together stochastic algorithms
and particle filtering to perform such parameter estimation.
For more information on the computation of the conditional probability
distribution of hidden states given the observations,
and on the computation of its derivative with respect
to the parameter, see .

We have investigated a first theoretical solution to the problem of
monitoring a damping coefficient, which results from the flutter
monitoring problem,
see module .
The idea is to use the same subspace–based residual as we use for structural
health monitoring, and to design a unilateral test statistic for detecting
that a given damping coefficient decreases towards zero. Since this
hypothesis testing problem is no longer local,
we have used another asymptotic approximation for the residual,
and a cumulative sum test.
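
The cumulative sum idea can be sketched on synthetic data. In this hedged illustration, a Gaussian sequence stands in for the damping–sensitive residual, and a one–sided cusum statistic raises an alarm when the mean drops; the actual test is built on the subspace–based residual, not on this toy sequence.

```python
import numpy as np

def cusum_decrease(z, drift=0.5, threshold=10.0):
    """One-sided CUSUM: raise an alarm as soon as the accumulated
    evidence that the mean of z has decreased exceeds `threshold`."""
    g = 0.0
    for k, zk in enumerate(z):
        g = max(0.0, g - zk - drift)   # grows only under negative drift
        if g > threshold:
            return k                   # alarm time
    return None

rng = np.random.default_rng(4)
z = np.concatenate([rng.normal(0.0, 1.0, 300),    # healthy regime
                    rng.normal(-1.5, 1.0, 300)])  # damping-like drop at k=300
alarm = cusum_decrease(z)
print(alarm)
```

The statistic resets at zero while the mean is nominal, so the test runs on–line with constant memory, which is what the flutter monitoring setting requires.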

From an experimental point of view, the Eureka flite project has
provided lots of opportunities for testing and improving our identification
and detection techniques, including the new flutter monitoring algorithm;
see module .
Another opportunity for improving, tuning and testing our techniques
through our modal toolbox, has been provided by a contract
with eads Launch Vehicles,
see module .

On the software side, the development of the modal / cosmad toolbox
to be used with Scilab has been pursued, and the above recursive
identification and flutter monitoring algorithms have been incorporated,
see module .

Particle methods are sequential methods of Monte Carlo type,
in which particles explore the state space, by mimicking the dynamics
of some underlying stochastic process, and interact under the effect
of a selection mechanism which automatically concentrates particles,
i.e. the available computing power, into regions of interest of
the state space.
This methodology is currently applied within the sigma2 project-team
to several types of problems and applications : simulation of rare
events and evaluation of collision risk (ist project Hybridge,
see module ), tracking
of mobiles in a cellular network (thesis study of Natacha Caylus),
turbo–synchronization for satellite communications (research
contract with Alcatel Space Industry,
see module ),
identification and monitoring of particulate filters (PhD thesis
of Olivier Perrin,
see module ),
estimating and tracking vibration modes for in–flight data sets
(internship of Johan Fichou,
see module ).
Particle methods are also studied in the vista project-team, with
applications in target tracking, and in tracking of points or extended
objects in video sequences.

The numerical evaluation of extremely small probabilities, such as the probability of occurrence of a rare event (typically the probability that a set is reached by a continuous–time Markov process before a fixed or a random time), is a challenging numerical problem with numerous applications : analysis and performance evaluation of a telecommunication network, evaluation of conflict or collision risk in air traffic management, see module , etc. To deal with this class of problems, there are on one hand probabilistic methods, which provide asymptotic results and are based on large deviations theory, and on the other hand simulation methods, the most widely used of which is « importance sampling », where independent sample paths (i) are generated under a proposal probability distribution for which the considered event is not so rare, and (ii) are weighted by the Radon–Nikodym derivative of the proposal probability distribution w.r.t. the true probability distribution.

An alternative method is « importance splitting », in which a sequence
of increasingly rare events is defined and a selection mechanism is
introduced : sample paths for which an intermediate event holds
true split / branch into several offsprings, while sample paths
for which none of the intermediate events holds true are terminated.
A related method, the restart algorithm, is studied in the armor
project-team, with applications in telecommunication networks.
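
A hedged toy illustration of the splitting idea : for a downward–drifting random walk, the probability of reaching a high level before absorption at zero is estimated by splitting over a sequence of intermediate levels, and compared with the exact gambler's–ruin value. The walk, the levels and the ensemble size are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
p_up = 0.4                               # downward-drifting walk
levels = [3, 6, 9, 12, 15]               # sequence of increasingly rare events
n = 1000                                 # ensemble size per stage

def reaches(state, target):
    """Simulate until the walk hits `target` (success) or 0 (failure)."""
    while 0 < state < target:
        state += 1 if rng.random() < p_up else -1
    return state == target

ensemble = [1] * n                       # all paths start at state 1
estimate = 1.0
for lvl in levels:
    hits = sum(reaches(s, lvl) for s in ensemble)
    estimate *= hits / n
    if hits == 0:
        break
    ensemble = [lvl] * n                 # selection: clone the survivors
                                         # (the walk is Markov, so the level
                                         # is a sufficient path summary)

exact = (1.5 - 1.0) / (1.5**15 - 1.0)    # gambler's-ruin value, q/p = 1.5
print(estimate, exact)
```

Each stage only has to estimate a moderate conditional probability, which is why the product estimate remains accurate where direct Monte Carlo would need a much larger sample.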

In collaboration with Pierre Del Moral (ldp, laboratoire de
Statistique et Probabilités, université Paul Sabatier in Toulouse)
and with Pascal Lezaud (cena, Centre d'Études de la Navigation
Aérienne, in Toulouse), we have started a study of these methods.

Several extensions have been considered this year. We have studied a new algorithm which combines « importance splitting » and « importance sampling » : sample paths are generated under a proposal probability distribution for which the intermediate events are not so rare, and those sample paths for which an intermediate event holds true are given a number of offsprings related to their weight, i.e. to the Radon–Nikodym derivative of the proposal probability distribution w.r.t. the true probability distribution. In other words, among all the sample paths for which an intermediate event holds true, those which are closest to a typical sample path from the true probability distribution are given more offsprings than others.

We have studied particle filters for models where the hidden states
and the observations form jointly a Markov chain, which means that
the hidden states alone do not necessarily form a Markov chain.
This model includes as a special case hidden Markov models,
switching ar models, non–linear state–space models with
correlated Gaussian noise, etc.
The evolution of the optimal filter is defined in terms of an unnormalized
kernel defined on the state space, which combines the prediction step
and the correction step simultaneously, and we have studied particle
filters associated with an arbitrary decomposition of this unnormalized
kernel into a Markov mutation kernel and a selection weight function.
There is no universally optimal decomposition that would
minimize the approximation error for any test function, and we have
characterized the optimal decomposition for the approximation
of the likelihood function.
We have studied the propagation of approximation errors
and the stability properties of the filter, and we have obtained uniform
error estimates.
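
As a minimal illustration of the mutation / selection mechanism described above, the following hedged sketch runs a bootstrap particle filter (prior kernel as mutation, likelihood weights and multinomial resampling as selection) on an invented linear–Gaussian model, where the filter estimate can be compared with the raw observations.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 200, 500
a, sx, sy = 0.95, 0.5, 1.0               # invented AR(1) state, noisy output

x = np.zeros(T)
for t in range(1, T):
    x[t] = a*x[t-1] + sx*rng.standard_normal()
y = x + sy*rng.standard_normal(T)

# bootstrap filter: mutate with the prior kernel, select by likelihood
parts = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    parts = a*parts + sx*rng.standard_normal(N)       # mutation
    w = np.exp(-0.5*((y[t] - parts)/sy)**2)           # selection weights
    w /= w.sum()
    est[t] = w @ parts                                # filtered estimate
    parts = parts[rng.choice(N, N, p=w)]              # resampling

rmse_filter = float(np.sqrt(np.mean((est - x)**2)))
rmse_raw = float(np.sqrt(np.mean((y - x)**2)))
print(rmse_filter, rmse_raw)             # the filter beats the raw output
```

This corresponds to one particular decomposition of the unnormalized kernel into a mutation kernel and a weight function; as noted above, other decompositions are possible and none is universally optimal.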

We consider joint Markov chains depending on an unknown parameter. Under the assumption that a probabilistic representation exists for the derivative w.r.t. the parameter of the unnormalized kernel characterizing the evolution of the optimal filter, the linear tangent filter, which is the derivative of the optimal filter w.r.t. the parameter, is absolutely continuous w.r.t. the optimal filter. Exploiting this fact, several joint particle approximation schemes have been proposed, in the form of an empirical probability distribution and a weighted empirical signed measure associated with a single particle system.

The problem is very close to that of computing sensitivities
of mathematical expectations using Monte Carlo methods, which is
a very active area of research in mathematical finance. This motivates
our contacts with the mathfi project-team at inria Rocquencourt,
in particular within a project of the Lyapunov Institute.

Applications of particle methods to recursive maximum likelihood
estimation (mle) have been studied.

For the description of distributed systems, we dropped the Petri net framework and used instead the more general one of « Bayes networks of dynamic systems ». There, a system is considered as an automaton acting, via « tiles », on a set of state variables. The state variables play the role of places in Petri nets, and the tiles correspond to transitions. Systems can be composed via shared variables, i.e. variables on which several components can act. We thus obtain a distributed dynamic system as the product of elementary components. This construction is akin to the techniques for specifying Markov fields (or Bayesian networks) using potentials whose action is restricted to a subset of the field variables; the main difference (besides the stochastic framework) lies in the introduction of the dynamics.

This analogy between familiar models from signal and image processing on one hand, and distributed dynamic systems on the other, turns out to be very fruitful, both for the formal study of large systems and for developing algorithms. For instance, Bayesian networks reflect the factorization properties of a joint probability distribution; for a distributed system, the interaction graph reflects the factorization properties of the set of global system trajectories in a true concurrency semantics. This result makes it possible to rewrite directly, for distributed systems, an entire class of efficient inference algorithms developed for Bayesian networks and graphical models.
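
A tiny instance of this factorization point, in a hedged numpy sketch : on a chain–structured distribution, sum–product message passing computes all marginals from local potentials only, and the result can be checked against brute–force enumeration. The chain and potentials are invented for the example.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
K, n = 3, 5                              # K-state variables on a chain
psi = [rng.random((K, K)) for _ in range(n - 1)]   # pairwise potentials

# sum-product on the chain: forward and backward messages
fwd = [np.ones(K)]
for t in range(n - 1):
    fwd.append(psi[t].T @ fwd[-1])
bwd = [np.ones(K)]
for t in reversed(range(n - 1)):
    bwd.insert(0, psi[t] @ bwd[0])
marg = [fwd[t] * bwd[t] for t in range(n)]
marg = [m / m.sum() for m in marg]

# brute force over all K**n global configurations
bf = [np.zeros(K) for _ in range(n)]
for cfg in product(range(K), repeat=n):
    wgt = 1.0
    for t in range(n - 1):
        wgt *= psi[t][cfg[t], cfg[t + 1]]
    for t in range(n):
        bf[t][cfg[t]] += wgt
bf = [m / m.sum() for m in bf]
maxdiff = max(float(np.abs(marg[t] - bf[t]).max()) for t in range(n))
print(maxdiff)                           # agreement up to round-off
```

The message-passing side touches only local factors, never the K**n global configurations, which is the property exploited for distributed systems in the text.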

A natural application of this consists in recovering all system trajectories
that are compatible with a set of events that have been observed during its
evolution, and that are themselves of a distributed nature since they have
been sampled on different components. The main advantage of these methods
is that they handle only the factorized form of the set of those compatible
trajectories.
We therefore never need to deal with global states of the system, which
would be unfeasible : instead, we only handle local states of the
components, ensuring their consistency with the state of the neighbouring
components. More is true : these algorithms for reconstructing hidden
trajectories can themselves be distributed : a « local
supervisor » is associated with each component, and each supervisor
constructs the local trace of the global trajectories on its component,
based on its local observations together with information received from
the neighbouring component supervisors. One can therefore define a truly
distributed supervision architecture.

Our current research on this subject focuses on three points.

First, the extension of the algorithmic framework. A minimal axiomatics has been exhibited that allows the implementation of distributed algorithms. The theoretical framework thus obtained makes it possible to go beyond the case of tree–shaped interaction graphs : using turbo–code techniques, it can be shown that the inference algorithms proposed for the tree–shaped case behave relatively well on cyclic graphs. The axiomatic framework helps understand how « turbo »–type algorithms can be transposed to distributed dynamic systems, and supports the study of some of their convergence properties

. Current work concerns the robustness of the algorithms, in particular w.r.t. losses of alarms and incomplete knowledge / modelling of the supervised system. A second research direction aims at defining an adequate stochastic framework for distributed dynamic systems. The difficulty here is to randomize the set of all trajectories of a distributed system while respecting the concurrent nature of its interactions. More precisely, when two components have no interaction with one another, one should expect stochastically independent behaviours. This is not possible with usual Markovian dynamics, since those require a global time, and hence an interleaving of concurrent events. We have proposed several ways of constructing a stochastic framework for probabilizing partial orders of events. The first solution, partially stochastic models

, uses a finite time horizon, thus excluding any limit theorems. A second approach, recursive and therefore with an infinite horizon, consists in probabilizing sets of increasingly long trajectories, represented directly in the form of partial orders. One then defines a filtration on the « unfolding » of the Petri net describing the system

: this approach, however, works only for a limited class of models, there being no adequate filtration in the general case. Recent results of S. Abbes' thesis research nevertheless allow these restrictions to be lifted, and the development of a rich theory is under way. Finally, a third possibility is based on a different semantics, « cluster unfolding »

, which yields a different form of data structure, giving a Markovian model of concurrent runs of distributed systems in logical time : its extension to timed Petri nets has possible applications to performance evaluation. Our third research direction concerns the use of the unfolding of a distributed system to solve the diagnosis problem. This work was first carried out for centralized diagnosis, that is, the recursive identification of global trajectories that explain a given partial order pattern of observed events

. In particular, diagnosability issues have been studied for these partially ordered runs . Solving the diagnosis problem with unfoldings has led to interesting results in the case of

distributed systems. We have evidenced factorization properties of the unfolding of a distributed system : specifically, the unfolding of the global system decomposes as the product of local unfoldings, one for each component. The advantage of this representation is that each factor has a reasonable size, which reduces the combinatorial explosion due to large systems. We have proved that the centralized diagnosis problem can also be solved on a factorized representation of the unfolding, again making use of our distributed « turbo–like » algorithms. Distributed diagnosis using unfoldings has been studied and presented in .

See module .

- Glossary
Error correcting codes introduced by Berrou, Glavieux and Thitimajshima, joining two convolutional codes by means of an interleaver. The name is due to the iterative decoding algorithm, which uses « soft » probabilistic information.

The algorithms we propose for joint processing arise from the turbo (or iterative) algorithms in digital communications. The applications concern not only the communication community (coding, multiple access) but also signal processing (equalization). Our current research on this subject focuses on the analysis of such algorithms in order to propose new designs.

Our work stems from

We optimize irregular repeat–accumulate (ira) codes for
binary–input symmetric channels in the
large blocklength limit. Our optimization technique is based on
approximating the evolution of the densities of the messages
exchanged by the belief–propagation message–passing decoder by a
one–dimensional dynamical system. In this way, the code ensemble
optimization can be solved by linear programming. We propose four
such density evolution approximation methods, and compare the
performance of the obtained code ensembles over the binary
symmetric channel and the binary–antipodal input additive white
Gaussian channel. Our results clearly identify the best among the
proposed methods and show that the ira codes obtained by these
methods are competitive with respect to the best–known irregular
low–density parity–check (ldpc) codes. In view of this and the very
simple encoding structure of ira codes, they emerge as attractive
design choices.
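
As a hedged, simpler cousin of the density evolution approximations above, the following sketch runs exact density evolution for a regular (3,6) ldpc ensemble on the binary erasure channel, where the recursion is one–dimensional and the threshold is known to be approximately 0.4294; the ira case treated in our work is more involved.

```python
def de_threshold(dv=3, dc=6, iters=5000, tol=1e-10):
    """Largest erasure probability eps for which BP density evolution
    x -> eps * (1 - (1-x)**(dc-1))**(dv-1) drives x to zero (bisection)."""
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x)**(dc - 1))**(dv - 1)
            if x < tol:
                return True
        return False
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(de_threshold())                    # known (3,6) threshold ~ 0.4294
```

On the erasure channel the message densities reduce to a single erasure probability, which is exactly the kind of one–dimensional dynamical system the approximation methods above aim for on general channels.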

A very efficient multiplexing method is code division multiple
access (cdma). This technique has raised a lot of interest
in mobile communications due to its large spectral
efficiency (is-95, imt-2000, umts).
We propose here two approaches :
an ldpc–encoded cdma system, and then a competitor
to cdma.

For the first one, we investigated cdma
with qpsk modulation and binary error–control codes, in the large
system limit where the number of users, the spreading factor and
the code block length go to infinity. For given codes, we maximize
spectral efficiency assuming an mmse successive stripping decoder
for the cases of equal rate and equal power users. In both cases,
the maximization of spectral efficiency can be formulated as a
linear program and admits a simple closed–form solution that can
be readily interpreted in terms of power and rate control. We
provide examples of the proposed optimization methods based on
off–the–shelf ldpc codes and we investigate by simulation the
performance of practical systems with finite code block length.

Regarding the second approach, we started a new study on codes (and their related iterative decoding algorithms) that are able to get close to the boundary of the capacity region of the Gaussian multiple access channel, without the use of time sharing or rate splitting. The approach will be based on density evolution and/or its variant (mean, mutual information).

Here we propose to adapt standard design tools (e.g. density evolution)
to capture heterogeneous graphical structures, where correlations
between bits are introduced both at the bit level (error correcting code),
and at the symbol level (multiple description coding of a source).
The target application is related to image coding for transmission
over wireless ip, where both packet losses and bit errors may occur,
see the vip rnrt project .

This study, performed in a collaboration with Christian Roland
from lester in Lorient, is twofold.
First, we have studied the blind recognition of the standard
in use (ieee Communications Magazine).
We are also studying the possible sensors which could
be used in the cognitive radio context.

Under these terms we understand two different approaches. The first one
considers the common functions of many standards. The second one tries
to identify a common operator which could be used by the greatest number
of functions of many standards. This last approach is the one we considered,
and we found that the fft operator could be used as a common
basic operator.

We have proposed a variation of the Euclid
algorithm for zf criteria
in a simo context.
We also proposed a geometrical derivation of the emse for
the Bussgang algorithm, and an ml–like improvement of the output
sequence of the simo dfe.

Concerning the block equalization and frequency domain equalization
techniques, we continue our collaboration with the University of Patras.
This includes work on the qn dfe scheme, which
we first presented at Globecom 2001.
This collaboration will continue through a project that was
approved within the platon bilateral program between France
and Greece, see module .

This is merely an application of previous work, started in 1997,
to a quite simple problem proposed recently in the literature
which now attracts a lot of attention.
The question that is considered is the following : given a matrix

where

where

We consider the extension to arbitrary

which is a quadratic (convex) program. In this case provided
the parameter

The advantage of (QP) over (LP) lies precisely in the possibility
of having
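
The matrices of the formulation are elided in the text above; as a hedged illustration of the quadratic (convex) program referred to as (QP), the following sketch solves an l1–penalized least–squares problem, min ½||Ax-b||² + λ||x||₁, by iterative soft thresholding on an invented sparse–recovery instance.

```python
import numpy as np

def ista(A, b, lam, iters=3000):
    """Iterative soft thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1,
    a convex quadratic program of the kind discussed in the text."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L    # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

rng = np.random.default_rng(8)
A = rng.standard_normal((40, 100))
x0 = np.zeros(100)
x0[[5, 37, 80]] = [2.0, -1.5, 1.0]       # invented sparse ground truth
b = A @ x0 + 0.01 * rng.standard_normal(40)
xh = ista(A, b, lam=0.1)
support = np.flatnonzero(np.abs(xh) > 0.5)
print(support)                           # should recover {5, 37, 80}
```

The penalty parameter plays the role of the parameter mentioned in the text : it trades data fidelity against sparsity of the recovered vector.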

This corresponds to the prospective subject of the thesis that Sebastien Maria starts this fall. In a linear regression context, the aim of robust estimation schemes is to downweight the influence of potential outliers present in some components of the observation vector. Our objective is to analyse several extensions to this scheme.

If outliers, i.e. abnormal noise components, are also present in the
regression matrix and not just in the observation vector,
then one should take this into account, and for instance replace,
in the total least squares approach, the squares by a different convex
function that penalizes large errors less heavily. One possibility is
to replace the

If each outlier affects not just one component but all the observations, it is called an interference; one generally assumes that the interfering subspace is known, and performs the parameter estimation on its orthogonal complement. This is a quite restrictive model. We plan to investigate scenarios where the interferences belong to a known parametrized family that spans the whole space. The number and the range space of the interferences actually present are unknown, and the robust estimation scheme has to select the interferences that are present and downweight their influence.
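
The baseline scheme (downweighting outliers present only in the observation vector) can be sketched with a Huber–type iteratively reweighted least squares; the extensions discussed above, to outliers in the regression matrix and to parametrized interferences, go beyond this hedged example, whose data are invented.

```python
import numpy as np

def huber_irls(A, b, delta=1.0, iters=50):
    """Robust regression by iteratively reweighted least squares:
    observations with residual |r| > delta get weight delta/|r|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # start from plain OLS
    for _ in range(iters):
        r = b - A @ x
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
    return x

rng = np.random.default_rng(9)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.1 * rng.standard_normal(100)
b[:10] += 10.0                                    # gross outliers
x_ols = np.linalg.lstsq(A, b, rcond=None)[0]
x_rob = huber_irls(A, b)
err_ols = float(np.linalg.norm(x_ols - x_true))
err_rob = float(np.linalg.norm(x_rob - x_true))
print(err_ols, err_rob)                  # the robust fit is much closer
```

The reweighting is equivalent to replacing the squared loss by a convex function that penalizes large errors less heavily, the same idea invoked above for the total least squares extension.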

An application on real data is sought. Preliminary contacts have been taken in the area of synthetic aperture radars.

Contract inria 2 02 C 0040
— May 2001/October 2003.

This activity is conducted in collaboration with Maurice Goursat
(metalau project-team at inria Rocquencourt).

The Eureka project flite (« Flight Test Easy ») falls within the
aeronautical domain. It is coordinated by the industrial test laboratory
Sopemea. The partners are Dassault–Aviation and
eads (AeroMatra Airbus) (France), lms and kul (Belgium),
Cracow University and the company pzl–Swidnik (Poland),
and inria (sigma2 and metalau project–teams).

The development of new aircraft requires a careful exploration of the
dynamical behaviour of the structure subject to vibration and
aero–servo–elastic forces. This is achieved via a combination
of ground vibration tests and in–flight tests.
For both types of tests, data from various sensors are recorded,
and modal analyses are performed.
Important challenges of the in–flight modal analyses are the limited
choices for measured excitation inputs, and the presence of unmeasured
natural excitation input (turbulence).
The flite project aims at
a better exploitation of flight test data,
exploiting data recorded under natural excitation conditions
(e.g., turbulence), without resorting to artificial control surface
excitation and other types of excitation inputs. A second objective of
flite is an improvement of the flight test procedures themselves.
One of the intended application examples is the a3xx.

Our expertise in output–only system identification methods,
for modal analysis of vibrating structures under ambient
and nonstationary excitation, and thus under unknown inputs, is central
in the project. The inria project–teams are responsible for
the task « development of algorithms and associated methods », and
for the corresponding task reports. Moreover, Albert Benveniste
helps Sopemea in the scientific coordination of the project.

The achievements of this year include conceptual investigations, experimental results and software developments.

On the conceptual side, three investigations have been conducted,
on recursive subspace, particle filtering and flutter monitoring
algorithms, respectively,
see module .
The work on recursive subspace algorithms has been the topic of the visit
of Ivan Goethals (kul / sista),
see module .

The flutter monitoring problem has been addressed as the problem of
monitoring a damping coefficient, and a first theoretical solution
has been investigated. The idea is to use the same subspace–based
residual as we use for structural health monitoring, and to design
a unilateral test statistic for detecting that a given damping
coefficient decreases towards zero. Since this problem is no longer
a local hypothesis testing problem, we have used another
asymptotic approximation for the residual (different from the local
approximation in
module ),
and a cumulative sum test built on the residual. This algorithm works
on–line.

From an experimental point of view, this project has provided many opportunities for testing and improving our identification and detection techniques. On the identification front, large aircraft datasets have been successfully investigated using our online identification and monitoring toolbox, for both full and recursive subspace algorithms. On the detection side, massive progress has been achieved in flutter monitoring using our new detection scheme. It allows us to successfully track damping values (the only really fluctuating part of the modes) without re–identifying the modes. This feature speeds up the process tremendously. Notice that the identification and detection techniques have been tested on the same aircraft dataset, giving results that cross–validate the methods.

On the software side, the development of the modal / cosmad toolbox
to be used with Scilab has been pursued, and the above recursive
identification algorithms and flutter monitoring algorithms have been
incorporated,
see module .

See modules ,
,
and .
Contract cnrs 500232
— February 2002/September 2005.

This thematic network has been launched in October 2001 within the framework of the Growth program. It aims at becoming a focal point of reference in the field of assessment, monitoring and control of civil and industrial structures, in particular the transportation infrastructure (bridges, etc.).

Several partners of the network have proposed our participation, and we became a participating member, involved especially in the thematic group « Monitoring and Assessment ». This turns out to be a useful complement to the diffusion of our knowledge and expertise in vibration monitoring.

Within this framework, we have been involved in the fp6 ip e–moi
proposal submitted in March 2003 within the nmp framework.
We have offered Scilab as an open platform for the integration of the
modules for algorithms and methods covering the objectives of automatic
modal analysis, automatic modal and statistical damage detection methods.
We have also offered the Scilab modal analysis modules,
see module .

Contract inria 1 03 D 1046
— December 2002/June 2003.

This activity is also conducted in collaboration with Maurice Goursat
(metalau project–team at inria Rocquencourt).

The objective was to provide a methodology for off–line/on–line monitoring of the vibration modes of a launcher in launch conditions. The project is a follow–up of a previous contract, in which rough evaluations of the launcher datasets were conducted. In this year's project, we analysed those datasets in depth, using input and feedback from the industrial partner (they provided datasets, end–result requirements and comments). The intended result was both a fully automated identification procedure and an evaluation of the influence of the tuning parameters on the identification results. The objective was then to hide unnecessary parameters as much as possible, and to explain the important and required tuning knobs as clearly as possible.

This contract has thus provided us with another opportunity
for improving, tuning and testing our techniques
through our modal toolbox.
A massive rewriting of our on–line monitoring
toolbox has been performed, concerning both the features and the
graphical user interface.

Contract inria 1 01 C 0104
— January 2001/December 2003.

This activity is jointly performed with Michel Sorine
(projet sosso of inria Rocquencourt).

For the purpose of reducing fuel consumption and thus also exhaust emission, Renault is developing new car components based on the most advanced technologies in the fields of car engines and post–combustion processing of exhaust pollutants. Because of the high technological complexity of the new components, it is necessary to develop monitoring devices for the detection and diagnosis of their possible failures, in order to facilitate their maintenance and to fulfill the requirements of the legislation on automobile pollution.

In the framework of a contract Renault–inria on the PhD thesis
of Olivier Perrin (cifre grant, Convention Industrielle de Formation
par la Recherche), studies have been focused on exhaust–pipe particle
filters for vehicles equipped with diesel engines.
Special attention is paid to catalytic particle filters, which
continuously eliminate the filtered particles through chemical
reactions with the aid of a catalyst.
This new technology contrasts with the conventional one,
which needs regeneration phases to burn the filtered particles
at a high temperature.

After the work of the previous years on exhaust–pipe particle filter
modeling and on model simplification,
several approaches to fault diagnosis have been
tested this year with the simplified models.
A new nonlinear adaptive observer is applied to the
detection of cracks in a filter,
see module .
Experiments with real
data have shown satisfactory results for the method.
This work will be presented at the ifac Symposium on Advances
in Automotive Control, to be held in Salerno in April 2004.
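
The principle of adaptive–observer–based fault detection can be illustrated on a toy scalar system; the model, gains and fault scenario below are purely illustrative, not the actual exhaust–pipe filter model.

```python
import numpy as np

def adaptive_observer(y, u, dt=0.01, k=5.0, gamma=50.0):
    """Joint state/parameter estimation for the toy model
    x' = -theta * x + u,  y = x.  A drift of the estimated theta
    away from its nominal value signals a fault (e.g. a crack
    changing a physical coefficient)."""
    xh, th = 0.0, 1.0                 # initial guesses
    theta_hat = []
    for yk, uk in zip(y, u):
        e = yk - xh
        xh += dt * (-th * xh + uk + k * e)   # state observer
        th += dt * (-gamma * e * xh)         # gradient-type adaptation law
        theta_hat.append(th)
    return np.array(theta_hat)

# Simulated data: theta jumps from 1.0 to 2.0 halfway (a "fault").
dt, n = 0.01, 4000
u = np.ones(n)
x, y = 0.0, []
for i in range(n):
    theta = 1.0 if i < n // 2 else 2.0
    x += dt * (-theta * x + u[i])
    y.append(x)
th = adaptive_observer(np.array(y), u)
```

The adaptation law follows from a standard Lyapunov argument: with the output error driving the parameter update, the estimate converges to the true value under persistent excitation, so monitoring the estimate amounts to monitoring the physical parameter.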

For the detection of failures affecting the catalytic reaction, a method based on the particle filtering algorithm, see module , is first applied. A simple method based on the specific model structure is also designed. No real data is available for the moment. Numerical simulations have been satisfactory with both methods, but the second one needs much less numerical computation, and is thus more suitable for on–board diagnosis.
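
For reference, the bootstrap particle filter underlying the first method can be sketched on a generic scalar state–space model; the model and noise levels below are illustrative, not the simplified model of the application.

```python
import numpy as np

def bootstrap_filter(y, n_part=500, q=0.1, r=0.5, seed=1):
    """Bootstrap particle filter for the toy model
    x_k = 0.9 x_{k-1} + w_k,  y_k = x_k + v_k,
    with w ~ N(0, q^2) and v ~ N(0, r^2).
    Returns the sequence of posterior mean estimates."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_part)                  # initial particle cloud
    est = []
    for yk in y:
        x = 0.9 * x + rng.normal(0.0, q, n_part)      # propagate particles
        w = np.exp(-0.5 * ((yk - x) / r) ** 2)        # weight by likelihood
        w /= w.sum()
        est.append(np.sum(w * x))                     # posterior mean
        x = rng.choice(x, size=n_part, p=w)           # multinomial resampling
    return np.array(est)

# Simulated trajectory and noisy observations.
rng = np.random.default_rng(0)
xs = np.zeros(200)
for k in range(1, 200):
    xs[k] = 0.9 * xs[k - 1] + rng.normal(0.0, 0.1)
ys = xs + rng.normal(0.0, 0.5, 200)
xhat = bootstrap_filter(ys)
```

The resampling step is what makes the method expensive compared with a model–specific scheme, which is consistent with the computational argument made above in favor of the second method for on–board diagnosis.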

See module .

Contract inria 1 02 C 0037
— January 2002/December 2004

The ist project hybridge deals with stochastic analysis
and distributed control of hybrid systems, with conflict management
in air traffic as its target application area.
It is coordinated by National Aerospace Laboratory (nlr,
Netherlands) and its partners are
Cambridge University (United Kingdom),
Universita di Brescia and Universita dell'Aquila (Italy),
Twente University (Netherlands),
National Technical University of Athens (ntua, Greece),
Centre d'Études de la Navigation Aérienne (cena),
Eurocontrol Experimental Center (eec),
aea Technology and bae Systems (United Kingdom),
and inria (sigma2 project–team).

Our contribution to this project concerns the work package on modeling accident risks with hybrid stochastic systems, and the work package on risk decomposition and risk assessment methods, and their implementation using conditional Monte Carlo methods. This problem has motivated our work on the « importance splitting » algorithm for the simulation of rare events, see module .

See module .

Contract inria 4 03 C 1409
— June 2003/October 2003

F. Le Gland and F. Campillo have given a four–day training
course on particle filtering at sagem, Argenteuil.
Further cooperation should take the form of a PhD thesis with
a cifre grant (Convention Industrielle de Formation
par la Recherche) support.

Contract inria 1 03 C 1356
— November 2002/April 2003

The purpose of this work was to evaluate the particle filtering approach for the synchronization of digital signals, and to see whether stochastic filtering algorithms may improve the new turbo–synchronization techniques.

The problem under investigation is the following :
we want to transmit a binary sequence, which is coded and modulated
into symbols (qpsk). It is assumed that the measurements take
the form

where snr is assumed less than 3 dB.
We consider that the signal coding and modulation follow
the dvb (Digital Video Broadcasting) norm.
The phase dynamics is modelled as

where

We tried to apply particle filtering techniques to the recovery of the
hidden state and/or to partial decoding.
Particle methods give about the same precision, but at a much
higher computational cost.

Considering then other nonlinear stochastic filtering tools, we
developed a new and promising turbo–synchronization algorithm, where an
extended Kalman filter is embedded in the decoder. This method is still
under evaluation, but it already proved suitable when the
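
The basic data–aided extended Kalman phase tracker at the core of such a scheme can be sketched as follows; this is a toy sketch assuming a random–walk phase model and known (or decoder–provided) symbols, not the actual algorithm embedded in the decoder.

```python
import numpy as np

def ekf_phase_tracker(y, s, q=1e-4, r=0.5):
    """Data-aided extended Kalman phase tracker for the model
    theta_k = theta_{k-1} + w_k  (random walk, var q),
    y_k = s_k exp(j theta_k) + v_k  (complex noise, E|v|^2 = r),
    with known unit-modulus symbols s_k.  The innovation
    Im(y conj(s) e^{-j theta}) linearizes the observation around
    the predicted phase."""
    theta, p = 0.0, 1.0
    est = []
    for yk, sk in zip(y, s):
        p += q                                        # time update
        z = np.imag(yk * np.conj(sk) * np.exp(-1j * theta))
        gain = p / (p + r / 2.0)                      # Kalman gain
        theta += gain * z                             # measurement update
        p *= 1.0 - gain
        est.append(theta)
    return np.array(est)

# Synthetic run: slowly drifting phase, random qpsk symbols, snr around 3 dB.
rng = np.random.default_rng(0)
n, r = 2000, 0.5
theta_true = np.cumsum(rng.normal(0.0, 0.01, n))
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
v = (rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)) * np.sqrt(r / 2)
y = s * np.exp(1j * theta_true) + v
theta_hat = ekf_phase_tracker(y, s, q=1e-4, r=r)
```

Because the symbols are taken as known, there is no π/2 ambiguity here; in a turbo scheme, the decoder supplies (soft) symbol decisions at each iteration.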

Contract inria 2 01 C 0694 MPR 01 1
— November 2001/January 2004

- Glossary
Denotes the top layer of management for a telecommunication network, i.e. the supervision operations : monitoring, maintenance, etc.

Processing, filtering and interpretation of alarms sent across the network.

Interpretation of alarms, for possible reconfiguration and maintenance operations.

see module .

This is joint work with Claude Jard from the triskell project–team.
It started within cti–cnet 95 1b 151, and continued in the
exploratory rnrt project « Modélisation et Apprentissage
pour une Gestion Distribuée des Alarmes », or magda,
until November 2001, then in magda2.
Besides triskell and sigma2,
as well as the aïda project–team of irisa,
the participants of this project are
France Télécom r&d (leader of magda),
Alcatel (leader of magda2), ilog,
and université de Paris–Nord.

The goal is to develop a systematic approach to fault diagnosis in telecommunication networks, under the following principles :

account explicitly for the distributed nature of the networks,

follow a « model–based » approach, such that the diagnosis algorithms can be derived automatically from the model,

take hazards such as loss of alarms, confusions, etc. into account,

allow the diagnosis software to be distributed over the network.

An original technology for designing distributed algorithms has been developed. It is based on modelling distributed systems as networks of automata. Such structures can be viewed as generalizations, to dynamical systems, of Bayesian networks, or Markov fields. This analogy makes it possible to import an entire set of well–founded inference algorithms, which are easily distributable by nature. More details are given in module and in module .
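
The Bayesian–network analogy can be illustrated with a minimal sum–product (belief propagation) computation on a chain; the pairwise compatibility tables below are hypothetical, not the actual network model.

```python
import numpy as np

# Sum-product on a three-variable chain x1 - x2 - x3, mimicking how
# local supervisors could exchange messages about shared variables.
psi12 = np.array([[0.9, 0.1],
                  [0.2, 0.8]])       # psi12[x1, x2], compatibility table
psi23 = np.array([[0.7, 0.3],
                  [0.4, 0.6]])       # psi23[x2, x3], compatibility table
evidence2 = np.array([1.0, 0.5])     # local observation weight on x2

# Each message can be computed by a separate supervisor and sent to x2.
m1_to_2 = psi12.sum(axis=0)          # marginalize x1 out of psi12
m3_to_2 = psi23.sum(axis=1)          # marginalize x3 out of psi23
belief2 = evidence2 * m1_to_2 * m3_to_2
belief2 = belief2 / belief2.sum()    # posterior marginal of x2
```

The point of the construction is that each message depends only on local information, which is what makes such algorithms distributable by nature.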

In the magda2 project, joint work with triskell aims at
constructing automatically the model of the supervised
network, based on its uml description. The internal model thus
obtained is used directly as the inference base for distributed
supervision algorithms. This technology was originally developed
for sdh networks, and has been extended to heterogeneous networks,
using a gmpls communication model
allowing for circuit based, routed (e.g. ip),
or packet switch based networks of sdh or wdm type.

A prototype gathering the two aspects above is being developed in close
collaboration with triskell. This prototype is integrated into
a commercialized management platform by Alcatel, and experiments are
carried out on small models of sdh and gmpls/wdm networks.
In 2003, efforts on the prototype were oriented towards two objectives :

the automatic discovery of the underlying network, and deployment of the ad hoc distributed monitoring architecture (the so–called local supervisors),

the reformulation of the algorithm run by each supervisor above a commercialized rule engine (JRules, provided by Ilog), in order to minimize the proprietary part of the prototype.

Contract inria 2 01 A 0650 MC 01 1
— October 2001/June 2004

The overall goal of the vip project is video transmission over ip,
across wireline and wireless networks. It implies digital video
transmission and real–time services. The total duration of the
project is 33 months; it started in October 2001. Our role in vip
is limited; the main team involved at irisa is temics,
headed by Christine Guillemot.

The contribution of our group is to design an encoder / decoder pair
that deals with both packet losses and bit errors.
We propose to adapt standard design tools (e.g. density evolution)
to capture heterogeneous graphical structures, where
correlations between
bits are introduced both at the bit level (error correcting code),
and at the symbol level (multiple description coding of a source).

Contract inria 1 03 C 1559
— July 2003/June 2006

The strong emergence of safety and environmental norms, and the massive and widespread diffusion of various kinds of sensors, result in a thorough renewal of sensor information processing problems. Fatigue and aging prevention, and integrity monitoring for mechanical and civil structures (airplanes, bridges, dams), are the subject of intensive research efforts abroad, especially in the US, Japan, Great Britain and Italy, including within connected fields (such as automatic control). The French research effort on these issues has not yet been sufficiently developed. In particular, research on advanced information processing methods for structural analysis, non–destructive evaluation, monitoring for aging prevention, and damage diagnosis and localization, is mandatory. Because the associated scientific and economic challenges are of crucial importance, the synergy between information science, engineering science and scientific computing should be increasingly favored and stimulated.

The objectives of the present project supported by the French Ministry
of Research within the aci Sécurité Informatique programme,
are on one hand the intrinsic
coupling of statistical models of sensor data with fine models of the physical
phenomena governing the instrumented structures, and, on the other hand,
the mixing of statistical inference, data assimilation, finite element model
updating and optimization methods for structural dynamics.
The investigation of the potential mutual benefits of criteria used for
different purposes by various methods designed in different scientific
communities is the central axis of the project. The main object of the study
is the intrinsic handling
of the temperature effect, which is a generic issue for vibration monitoring
of civil engineering structures, and a major scientific challenge.

Contract université de Rennes 1 MW 33 — March 1998/February 2003.

The sigma2 project–team participates in the european network
si (System Identification), whose partners are
cwi (coordinator, Netherlands),
Technische Universität Wien (Austria),
université catholique de Louvain (Belgium),
inria Sophia–Antipolis and
irisa/université de Rennes 1 (France),
University of Cambridge (United Kingdom),
ladseb/cnr and Università degli Studi di Padova (Italy),
kth and Linköpings Universitet (Sweden),
within the tmr program.
The annual workshop has been held in Noordwijkerhout (Netherlands),
in October 2003.
The proposal of a follow–up Marie Curie research training
network ernsi+, coordinated by Bo Wahlberg (kth, Stockholm),
has been submitted to the April 2003 call of the FP6, with an additional
research group from sztaki (Hungary) : it has received a very
positive evaluation, but has finally not been selected in the last
stage. It has been updated and submitted again
to the subsequent November 2003 call.
Our contribution in the current network is focused on
identification of hybrid systems, particle filtering and
hidden Markov models (hmm),
system monitoring, and nonlinear observers.

Affiliation to the French partner of the network — September 2000/August 2004.

The sigma2 project–team participates in the european network
dynstoch, whose partners include
ladseb/cnr (Italy) and
université de Paris 6 (France),
within the ihp program.
The annual workshop has been held in Helsinki in May 2003.
Our contribution within the French team of the network
(pma, laboratoire de Probabilités et Modèles Aléatoires,
université de Paris 6/7),
is focused on asymptotic statistics of hmm with finite
or continuous state space, and their particle implementation.
The proposal of a follow–up Marie Curie research training
network dynstoch, coordinated by Peter Spreij (uvA,
Amsterdam), has been submitted to the November 2003 call of the FP6,
with inria Rennes as a research group on its own,
and with additional research groups from sztaki (Hungary),
Universiteit Gent (Belgium),
Ruprecht Karls Universität Heidelberg (Germany),
and Linköpings Universitet (Sweden).

Since September 2002,
F. Le Gland is coordinating with Olivier Cappé (enst Paris)
a project (action spécifique) « Méthodes particulaires » supported
by the stic department of cnrs,
and promoted by the rtp « Mathématiques de l'Information
et des Systèmes ».
This project follows another project « Chaînes de Markov cachées
et filtrage particulaire », which started in December 2001 within the
inter–departmental cnrs programme Math–stic,
and was coordinated by F. Le Gland and Éric Moulines (enst Paris).

A one–day workshop has been organized in December 2003 on applications
of particle filtering, with the joint support of the as67
and of the gdr-prc isis.

The newcom project (Network of excellence in wireless
communication) addresses the design of systems « beyond 3G ». This
requires to successfully solve problems such as : the
inter–technology mobility management between 3G and ad–hoc
wireless lans, the coexistence of a variety of traffic/services
with different and sometimes conflicting Quality of Service (QoS)
requirements, new multiple–access techniques in a hostile
environment like a channel severely affected by frequency
selective fading, the quest for higher data rates also in the
overlay cellular system, scaling with those feasible in a wireless
lan environment, permitting seamless handover with the same degree
of service to the user, the cross–layer optimisation of physical
coding/modulation schemes with the medium access control (mac)
protocols to conform with fully packetised transmission as well as
the tcp/ip rules of the core network, and the like.
The duration of the newcom project is 18 months
and our contribution is 6 man–months. It has started in fall 2003.

This is a collaboration between irisa and the Signal Processing
and Communications Lab headed by Kostas Berberidis, University of
Patras in Greece. The goal of
the proposed project is the development of an efficient
equalization scheme, suitable for wireless burst communication
systems. The basic characteristic of the proposed scheme will be
the iterative operation of a channel estimator and a decision
feedback equalizer on a data burst, in a way similar to turbo
techniques. This project has a duration of one year and can be
extended to one more year. The grant supports travel and living
expenses of investigators for short visits to partner institutions
abroad. It has started in summer 2003.

M. Basseville is a member of the leading committee of the gdr-prc isis
(Information, Signal, Images), and a member of the steering committee of the
rtp24 « Mathématiques de l'information et des systèmes ».
She is member of (the board of) the scientific committee of the Computer
Security program launched by the French Ministry of Research
(aci « Sécurité Informatique »).

She is co–chair of the ifac technical committee iaf–tc
« Fault Detection, Supervision and Safety of Technical Processes »,
within the coordinating committee ia–cc
« Industrial Applications »,
and member of the technical committee ssm–tc
« Modelling, Identification and Signal Processing »,
within the coordinating committee ssm–cc
« Systems and Signals ».

She is associate editor for the ifac journal « Automatica »,
and for the journal « Mechanical Systems and Signal Processing ».
She has been member of the international program committees
of Safeprocess'03, Sysid'03 and ecc'03.
She has been solicited for the evaluation of
two « Research Grant Projects » submitted to the Swedish Research Council,
in the « Signals and Systems » area.

A. Benveniste is member of the editorial board of the journals
« European Journal of Control »,
« Discrete Event Dynamic Systems »
and « Proceedings of the ieee ».

J.–J. Fuchs is a member of the program committee of the French 19th
symposium on Signal and Image processing (gretsi)
which was held in Paris in September 2003,
and of the international program committee
of the 13th ifac–ifors symposium on Identification
and System Parameter Estimation (sysid)
which was held in Rotterdam in August 2003.
He is a member of the ieee sam « Sensor Array and Multichannel »
technical committee
and a co–chair of the technical program committee
of the 3rd ieee Sensor Array and Multichannel Signal Processing
workshop (sam'04) to be held in July 2004 in Barcelona.

Since September 2002,
F. Le Gland is coordinating a project (action spécifique, as67)
« Méthodes particulaires » supported by the stic department
of cnrs, and promoted by the rtp 24 « Mathématiques
de l'Information et des Systèmes »,
see module .
He has coorganized with Olivier Cappé (enst Paris) a one–day
workshop in December 2003 on applications of particle filtering,
with the joint support of as67 and of the gdr-prc isis.

L. Mevel is coordinating a project (action concertée incitative
constructif) on monitoring of mechanical structures,
supported within the national « aci Sécurité Informatique »
programme,
see module .

Q. Zhang is co–organizer of « Sûreté, Supervision, Surveillance »
(s3),
a national research group on safety, supervision and monitoring.
He participates in the activities of the rtp 20 « Fiabilité,
Diagnostic et Tolérance aux Fautes des Systèmes Complexes ».
He is a member of the ifac technical committee on
Fault Detection, Supervision and Safety of Technical Processes.

In the dea–stir (Signal, Télécommunications, Images, Radar)
(école doctorale matisse, université de Rennes 1) :
J.–J. Fuchs gives a course on optimization, and a course
on spectral estimation.
F. Le Gland gives a course on Kalman filtering and hidden Markov models.

M. Basseville teaches « Statistical methods for in–situ monitoring » within the module « Tools for diagnostics », of the option « Automatic control and Industrial Computing », at École des Mines de Nantes.

É. Fabre and Q. Zhang teach numerical optimization techniques
at diic, ifsic, université de Rennes 1.

É. Fabre and A. Roumy teach information theory and coding theory at Ecole Normale Supérieure de Cachan, Ker Lann campus, in the computer science and telecommunications magistère program.

S. Haar teaches problem sessions for the applied probability
course of diic, ifsic, université de Rennes 1,
and for the course on mathematics in computer science and
telecommunications of the magistère program
at Ecole Normale Supérieure de Cachan, Ker Lann campus.

A. Roumy teaches « Modern coding theory » and « Multiuser detection »
in the dea tis (Traitement des Images et du Signal)
at ensea, université de Cergy Pontoise.

M. Basseville has been invited to deliver the opening plenary talk at
the ifac triennial symposium Safeprocess'03 in Washington dc,
in June 2003, on the subject « Model–based statistical signal processing
and decision theoretic approaches to monitoring ».

F. Le Gland has been invited to give a plenary talk on particle
filtering at the gretsi « Colloque
sur le Traitement du Signal et des Images », in September 2003 in Paris.

F. Le Gland and F. Campillo have been invited to give a four–day
training course on particle filtering at sagem, Argenteuil,
and to give a one–day seminar on particle filtering at cnes,
Toulouse.

J. Palicot has organized two training courses on software radio,
in May 2003 at enst Brest and in June 2003 at enst Paris.

In addition to presentations with a publication in the proceedings,
and which are listed at the end of the document, members of
the sigma2 project–team have also given the following presentations.

At the annual workshop of the tmr si network,
see module ,
which has been held in Noordwijkerhout in October 2003,
F. Le Gland has given a tutorial presentation on particle methods
for stochastic systems,
and Q. Zhang has given a talk on nonlinear systems fault diagnosis
based on adaptive estimation.

F. Le Gland has been invited in November 2003 by Hans–Rüdi Künsch in
the Seminar für Statistik of eth Zürich, and has given there
a seminar on recursive identification of state–space models, and its
implementation with particle techniques.

É. Fabre has given a seminar on turbo–codes at Ecole Normale Supérieure de Cachan, Ker Lann campus.

S. Haar has presented probabilistic net unfoldings at the meeting
of GdR stqrds in Paris in March 2003. He has also given a tutorial
on distributed diagnosis for asynchronous systems
at the forte Conference in Berlin in September 2003.

A. Roumy has presented her work on the design of ira codes
at the ieee international symposium on Information Theory
(isit) in Yokohama in June 2003.

Ivan Goethals, a PhD student of Bart De Moor at kul / sista
has visited us during two weeks in February 2003, in the framework
and with the support of the flite project.
We have jointly tried to speed up the output–only subspace–based
identification algorithm by implementing recursive versions
of the svd.

The French embassy in Antananarivo in Madagascar
has provided Rivo Rakotozafy, assistant professor at the university
of Fianarantsoa, with a grant to support three visits (one per year,
each stay of three months duration) to prepare a Madagascar habilitation
thesis (hdr) under the supervision of Fabien Campillo.
A related objective is to set up a collaboration between the university of
Fianarantsoa, ird Montpellier and inria.
This collaboration is focused on Bayesian inference applied to
engineering for renewable resources. Our first subject is the use
of Markov chain Monte Carlo (mcmc) algorithms for fisheries stock
assessment. The biomass of the stock of a fishery in a given year
can be modelled as a nonlinear function, first of the biomass
and catch for the two previous years, then of different parameters
(recruitment, growth rate, natural mortality rate). Given a time
series of annual catch (considered as given parameters) and effort
data (considered as the observation process), we would like to achieve the
best fit between the data and the model.
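
Such a fit can be sketched with a random–walk Metropolis sampler on a toy surplus–production model; the dynamics, the parameter names (r, K, q) and the data below are purely hypothetical, not the actual stock assessment model.

```python
import numpy as np

def log_post(params, catches, index, b0=1000.0, sigma=0.1):
    """Log-posterior (flat priors on log-parameters) of a toy
    surplus-production model
        B_{t+1} = B_t + r B_t (1 - B_t / K) - C_t,
    with a lognormal abundance index I_t = q B_t exp(eps_t)."""
    r, k, q = params
    b, ll = b0, 0.0
    for c, i in zip(catches, index):
        if b <= 0:
            return -np.inf            # parameters crashed the population
        ll += -0.5 * ((np.log(i) - np.log(q * b)) / sigma) ** 2
        b = b + r * b * (1 - b / k) - c
    return ll

def metropolis(catches, index, n_iter=4000, step=0.05, seed=0):
    """Random-walk Metropolis on the log-parameters (r, K, q)."""
    rng = np.random.default_rng(seed)
    theta = np.log([0.8, 2000.0, 0.001])   # feasible (hypothetical) start
    lp = log_post(np.exp(theta), catches, index)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=3)
        lp_prop = log_post(np.exp(prop), catches, index)
        if np.log(rng.uniform()) < lp_prop - lp:    # symmetric proposal
            theta, lp = prop, lp_prop
        chain.append(np.exp(theta))
    return np.array(chain)

# Synthetic data: constant annual catch, noisy abundance index.
rng = np.random.default_rng(1)
r_true, k_true, q_true, b = 0.8, 2000.0, 0.001, 1000.0
catches, index = [], []
for t in range(20):
    catches.append(300.0)
    index.append(q_true * b * np.exp(0.1 * rng.normal()))
    b = b + r_true * b * (1 - b / k_true) - catches[-1]
chain = metropolis(catches, index)
```

Sampling on the log scale keeps the parameters positive with a symmetric proposal, and the chain provides posterior samples from which stock status summaries could be derived.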