Section: Research Program
Vibration analysis
This section presents the main features of the key monitoring issues, namely identification, detection, and diagnostics, and describes a particular instantiation relevant to vibration monitoring.
It should be stressed that the foundations for identification, detection, and diagnostics are fairly general, if not generic. Handling high-order linear dynamical systems, in connection with finite element models, which calls for subspace-based methods, is specific to vibration-based SHM. Actually, one particular feature of model-based sensor information data processing as exercised in I4S is the combined use of black-box or semi-physical models together with physical ones. Black-box and semi-physical models are, for example, eigenstructure parameterizations of linear MIMO systems, of interest for modal analysis and vibration-based SHM. Such models are intended to be identifiable. However, due to the large model orders that need to be considered, model order selection is a real challenge. Traditional advanced techniques from statistics, such as the various forms of Akaike criteria (AIC, BIC, MDL, ...), do not work at all. This gives rise to new research activities specific to the handling of high-order models.
Our approach to monitoring assumes that a model of the monitored system is available. This is a reasonable assumption, especially within the SHM areas. The main feature of our monitoring method is its intrinsic ability to provide early warning of small deviations of a system with respect to a reference (safe) behavior under usual operating conditions, namely without any artificial excitation or other external action. Such a normal behavior is summarized in a reference parameter vector ${\theta}_{0}$, for example a collection of modes and mode shapes.
Identification
The behavior of the monitored continuous system is assumed to be described by a parametric model $\{{\mathbf{P}}_{\theta}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\theta \in \Theta \}$, where the distribution of the observations (${Z}_{0},...,{Z}_{N}$) is characterized by the parameter vector $\theta \in \Theta $.
For reasons closely related to the vibration monitoring applications, we have been investigating subspace-based methods, for both the identification and the monitoring of the eigenstructure $(\lambda ,{\phi}_{\lambda})$ of the state transition matrix $F$ of a linear dynamical state-space system:
$\left\{\begin{array}{ccc}\hfill {X}_{k+1}& =& F\phantom{\rule{3.33333pt}{0ex}}{X}_{k}+{V}_{k+1}\hfill \\ \hfill {Y}_{k}& =& H\phantom{\rule{3.33333pt}{0ex}}{X}_{k}+{W}_{k}\hfill \end{array}\right.,$  (4) 
namely the $(\lambda ,{\varphi}_{\lambda})$ defined by:
$\det\,(F-\lambda \,I)=0,\quad (F-\lambda \,I)\,{\phi}_{\lambda}=0,\quad {\varphi}_{\lambda}\stackrel{\Delta}{=}H\,{\phi}_{\lambda}$  (5) 
The (canonical) parameter vector in that case is:
$\theta \stackrel{\Delta}{=}\left(\begin{array}{c}\Lambda \\ \mathrm{vec}\Phi \end{array}\right)$  (6) 
where $\Lambda $ is the vector whose elements are the eigenvalues $\lambda $, $\Phi $ is the matrix whose columns are the ${\varphi}_{\lambda}$'s, and $\mathrm{vec}$ is the column stacking operator.
Subspace-based methods is the generic name for linear system identification algorithms based on either time-domain measurements or output covariance matrices, in which different subspaces of Gaussian random vectors play a key role [57].
Let ${R}_{i}\stackrel{\Delta}{=}\mathbf{E}\left({Y}_{k}\,{Y}_{k-i}^{T}\right)$ and:
${\mathscr{H}}_{p+1,q}\stackrel{\Delta}{=}\left(\begin{array}{cccc}{R}_{1}&{R}_{2}&\cdots &{R}_{q}\\ {R}_{2}&{R}_{3}&\cdots &{R}_{q+1}\\ \vdots &\vdots &\ddots &\vdots \\ {R}_{p+1}&{R}_{p+2}&\cdots &{R}_{p+q}\end{array}\right)\stackrel{\Delta}{=}\mathrm{Hank}\left({R}_{i}\right)$  (7) 
be the output covariance and Hankel matrices, respectively; and let $G\stackrel{\Delta}{=}\mathbf{E}\left({X}_{k}\,{Y}_{k-1}^{T}\right)$. Direct computation of the ${R}_{i}$'s from equations (4) leads to the well-known key factorizations:
$\begin{array}{rcl}{R}_{i}&=&H\,{F}^{i-1}\,G\\ {\mathscr{H}}_{p+1,q}&=&{\mathcal{O}}_{p+1}(H,F)\;{\mathcal{C}}_{q}(F,G)\end{array}$  (8) 
where:
${\mathcal{O}}_{p+1}(H,F)\stackrel{\Delta}{=}\left(\begin{array}{c}H\\ HF\\ \vdots \\ H{F}^{p}\end{array}\right)\quad \text{and}\quad {\mathcal{C}}_{q}(F,G)\stackrel{\Delta}{=}\left(G\;\;FG\;\;\cdots \;\;{F}^{q-1}G\right)$  (9) 
are the observability and controllability matrices, respectively. The observation matrix $H$ is then found in the first block row of the observability matrix $\mathcal{O}$. The state-transition matrix $F$ is obtained from the shift invariance property of $\mathcal{O}$. The eigenstructure $(\lambda ,{\phi}_{\lambda})$ then results from (5).
Since the actual model order is generally not known, this procedure is run with increasing model orders.
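As an illustration, the covariance-driven procedure of equations (7)-(9) can be sketched in a few lines of NumPy. This is a minimal sketch under simplifying assumptions: the model order is passed explicitly rather than selected by the incremental procedure just mentioned, a balanced SVD weighting is used, and the function name and interface are ours, purely for illustration.

```python
import numpy as np

def subspace_eigenstructure(Y, p, q, order):
    """Covariance-driven subspace identification sketch.

    Y: (r, N) array of output measurements.
    Estimates the eigenstructure (lambda, varphi) of the state
    transition matrix F via the factorizations (7)-(9).
    """
    r, N = Y.shape
    # Empirical output covariances R_i = E(Y_k Y_{k-i}^T), cf. (14)
    R = [Y[:, i:] @ Y[:, :N - i].T / (N - i) for i in range(p + q + 1)]
    # Block Hankel matrix H_{p+1,q} = Hank(R_i), cf. (7): block (i,j) is R_{i+j+1}
    H = np.block([[R[i + j + 1] for j in range(q)] for i in range(p + 1)])
    # SVD truncated at the chosen model order yields an observability matrix
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    O = U[:, :order] * np.sqrt(s[:order])
    # Observation matrix: first block row of O
    H_obs = O[:r, :]
    # State transition F from the shift invariance of O (least squares)
    F = np.linalg.lstsq(O[:-r, :], O[r:, :], rcond=None)[0]
    # Eigenstructure (5): eigenvalues and observed mode shapes varphi = H phi
    lam, phi = np.linalg.eig(F)
    return lam, H_obs @ phi
```

The SVD step also exposes the singular values `s`, whose decay is what the incremental runs over increasing model orders inspect in practice.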
Detection
Our approach to on-board detection is based on the so-called asymptotic statistical local approach. It is worth noticing that our investigations were initially motivated by a vibration monitoring application example. It should also be stressed that, as opposed to many monitoring approaches, our method does not require repeated identification for each newly collected data sample.
To achieve the early detection of small deviations with respect to the normal behavior, our approach generates, on the basis of the reference parameter vector ${\theta}_{0}$ and a new data record, indicators which automatically perform:

The early detection of a slight mismatch between the model and the data;

A preliminary diagnosis and localization of the deviation(s);

The tradeoff between the magnitude of the detected changes and the uncertainty resulting from the estimation error in the reference model and the measurement noise level.
These indicators are computationally cheap, and thus can be embedded. This is of particular interest in some applications, such as flutter monitoring.
Choosing the eigenvectors of matrix $F$ as a basis for the state space of model (4) yields the following representation of the observability matrix:
${\mathcal{O}}_{p+1}\left(\theta \right)=\left(\begin{array}{c}\Phi \hfill \\ \Phi \Delta \hfill \\ \vdots \hfill \\ \Phi {\Delta}^{p}\hfill \end{array}\right)$  (10) 
where $\Delta \stackrel{\Delta}{=}\mathrm{diag}\left(\Lambda \right)$, and $\Lambda $ and $\Phi $ are as in (6). Whether a nominal parameter ${\theta}_{0}$ fits a given output covariance sequence ${\left({R}_{j}\right)}_{j}$ is characterized by:
${\mathcal{O}}_{p+1}\left({\theta}_{0}\right)\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\text{and}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}{\mathscr{H}}_{p+1,q}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\text{have}\phantom{\rule{4.pt}{0ex}}\text{the}\phantom{\rule{4.pt}{0ex}}\text{same}\phantom{\rule{4.pt}{0ex}}\text{left}\phantom{\rule{4.pt}{0ex}}\text{kernel}\phantom{\rule{4.pt}{0ex}}\text{space.}$  (11) 
This property can be checked as follows. From the nominal ${\theta}_{0}$, compute ${\mathcal{O}}_{p+1}\left({\theta}_{0}\right)$ using (10), and perform e.g. a singular value decomposition (SVD) of ${\mathcal{O}}_{p+1}\left({\theta}_{0}\right)$ to extract a matrix $U$ such that:
${U}^{T}\phantom{\rule{0.277778em}{0ex}}U={I}_{s}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\text{and}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}{U}^{T}\phantom{\rule{0.277778em}{0ex}}{\mathcal{O}}_{p+1}\left({\theta}_{0}\right)=0$  (12) 
Matrix $U$ is not unique (two such matrices relate through post-multiplication by an orthonormal matrix), but can be regarded as a function of ${\theta}_{0}$. Then the characterization (11) writes:

$U{\left({\theta}_{0}\right)}^{T}\;{\mathscr{H}}_{p+1,q}=0$  (13) 

Residual associated with subspace identification.
Assume now that a reference ${\theta}_{0}$ and a new sample ${Y}_{1},\cdots ,{Y}_{N}$ are available. For checking whether the data agree with ${\theta}_{0}$, the idea is to compute the empirical Hankel matrix ${\widehat{\mathscr{H}}}_{p+1,q}$:
${\widehat{\mathscr{H}}}_{p+1,q}\stackrel{\Delta}{=}\mathrm{Hank}\left({\widehat{R}}_{i}\right),\qquad {\widehat{R}}_{i}\stackrel{\Delta}{=}\frac{1}{N-i}\,\sum _{k=i+1}^{N}{Y}_{k}\,{Y}_{k-i}^{T}$  (14) 
and to define the residual vector:
${\zeta}_{N}\left({\theta}_{0}\right)\stackrel{\Delta}{=}\sqrt{N}\phantom{\rule{3.33333pt}{0ex}}\mathrm{vec}\left(U{\left({\theta}_{0}\right)}^{T}\phantom{\rule{3.33333pt}{0ex}}{\widehat{\mathscr{H}}}_{p+1,q}\right)$  (15) 
Let $\theta $ be the actual parameter value for the system which generated the new data sample, and ${\mathbf{E}}_{\theta}$ be the expectation when the actual system parameter is $\theta $. From (13), we know that ${\zeta}_{N}\left({\theta}_{0}\right)$ has zero mean when no change occurs in $\theta $, and nonzero mean if a change occurs. Thus ${\zeta}_{N}\left({\theta}_{0}\right)$ plays the role of a residual.
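The construction of $U$ from (10)-(12) and of the residual (15) can be sketched as follows, assuming the reference eigenstructure $(\Lambda ,\Phi )$ is given. Since the eigenstructure is complex-valued, the sketch stacks real and imaginary parts to obtain a real-valued residual, a common practical convention; the function name and interface are illustrative, not part of any existing software.

```python
import numpy as np

def subspace_residual(Y, Lam, Phi, p, q):
    """Residual (15) for a reference eigenstructure (Lam, Phi).

    Y: (r, N) outputs; Lam: (n,) eigenvalues; Phi: (r, n) observed mode shapes.
    """
    r, N = Y.shape
    n = Lam.size
    # Observability matrix O_{p+1}(theta_0) in the eigenvector basis, cf. (10)
    O = np.vstack([Phi * Lam**i for i in range(p + 1)])
    # Left null space U: U^H O = 0, from a full SVD of O, cf. (12)
    u = np.linalg.svd(O, full_matrices=True)[0]
    U = u[:, n:]
    # Empirical covariances and Hankel matrix, cf. (14)
    R = [Y[:, i:] @ Y[:, :N - i].T / (N - i) for i in range(1, p + q + 1)]
    H = np.block([[R[i + j] for j in range(q)] for i in range(p + 1)])
    # Residual (15): sqrt(N) vec(U^H H_hat), column-stacked
    zeta = np.sqrt(N) * (U.conj().T @ H).reshape(-1, order="F")
    # Stack real and imaginary parts to get a real-valued residual
    return np.concatenate([zeta.real, zeta.imag])
```

On data generated by the reference system the residual fluctuates around zero; on data from a deviated system its mean grows like $\sqrt{N}$, which is what the tests below exploit.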
As in most fault detection approaches, the key issue is to design a residual which is ideally close to zero under normal operation, has low sensitivity to noises and other nuisance perturbations, but high sensitivity to small deviations, before they develop into events to be avoided (damages, faults, ...). The originality of our approach is to:

Design the residual basically as a parameter estimating function,

Evaluate the residual thanks to a kind of central limit theorem, stating that the residual is asymptotically Gaussian and reflects the presence of a deviation in the parameter vector through a change in its own mean vector, which switches from zero in the reference situation to a nonzero value.
The central limit theorem shows [51] that the residual is asymptotically Gaussian:
${\zeta}_{N}\;\xrightarrow[N\rightarrow \infty]{}\;\left\{\begin{array}{ll}\mathcal{N}(0,\Sigma ) & \text{under}\;{\mathbf{P}}_{{\theta}_{0}}\,,\\[4pt] \mathcal{N}(\mathcal{J}\,\eta ,\Sigma ) & \text{under}\;{\mathbf{P}}_{{\theta}_{0}+\eta /\sqrt{N}}\,,\end{array}\right.$  (16) 
where the asymptotic covariance matrix $\Sigma $ can be estimated; the residual thus manifests a deviation in the parameter vector through a change in its own mean value. Deciding between $\eta =0$ and $\eta \ne 0$ then amounts to computing the following ${\chi}^{2}$ test, provided that $\mathcal{J}$ is full rank and $\Sigma $ is invertible:
${\chi}^{2}={\overline{\zeta}}^{\,T}\,{\mathbf{F}}^{-1}\,\overline{\zeta}\gtrless \lambda \,.$  (17) 
where
$\overline{\zeta}\stackrel{\Delta}{=}{\mathcal{J}}^{T}\,{\Sigma}^{-1}\,{\zeta}_{N}\quad \text{and}\quad \mathbf{F}\stackrel{\Delta}{=}{\mathcal{J}}^{T}\,{\Sigma}^{-1}\,\mathcal{J}$  (18) 
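Given the residual, the sensitivity $\mathcal{J}$, and an estimate of $\Sigma $, the statistic (17)-(18) is a few lines of linear algebra. The sketch below takes these matrices as inputs and does not address how $\mathcal{J}$ and $\Sigma $ are estimated; it assumes, as stated above, that $\mathcal{J}$ is full rank and $\Sigma $ invertible.

```python
import numpy as np

def chi2_test(zeta, J, Sigma):
    """Global chi^2 test statistic (17)-(18).

    zeta: residual vector; J: sensitivity matrix; Sigma: residual covariance.
    The returned value is compared with a threshold taken from the chi^2
    distribution with as many degrees of freedom as dim(eta).
    """
    Si = np.linalg.inv(Sigma)
    zbar = J.T @ Si @ zeta    # (18): zeta_bar = J^T Sigma^{-1} zeta_N
    F = J.T @ Si @ J          # (18): F = J^T Sigma^{-1} J
    return zbar @ np.linalg.solve(F, zbar)   # (17): zeta_bar^T F^{-1} zeta_bar
```

Under ${\mathbf{P}}_{{\theta}_{0}}$ the statistic is asymptotically ${\chi}^{2}$-distributed with $\mathrm{dim}\,\eta $ degrees of freedom; under a small change its mean is shifted by the noncentrality ${\eta}^{T}\,\mathbf{F}\,\eta $.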
Diagnostics
A further monitoring step, often called fault isolation, consists in determining which (subsets of) components of the parameter vector $\theta $ have been affected by the change. Solutions for that are now described. How this relates to diagnostics is addressed afterwards.
The question of which (subsets of) components of $\theta $ have changed can be addressed using either nuisance parameter elimination methods or a multiple hypotheses testing approach [50].
In most SHM applications, a complex physical system, characterized by a generally non-identifiable parameter vector $\Phi $, has to be monitored using a simple (black-box) model characterized by an identifiable parameter vector $\theta $. A typical example is the vibration monitoring problem, for which complex finite element models are often available but not identifiable, whereas the small number of existing sensors calls for identifying only simplified input-output (black-box) representations. In such a situation, two different diagnosis problems may arise, namely diagnosis in terms of the black-box parameter $\theta $ and diagnosis in terms of the parameter vector $\Phi $ of the underlying physical model.
The isolation methods sketched above are possible solutions to the former. Our approach to the latter diagnosis problem is basically a detection approach again, and not a (generally ill-posed) inverse problem estimation approach.
The basic idea is to note that the physical sensitivity matrix writes $\mathcal{J}\,{\mathcal{J}}_{\Phi \theta}$, where ${\mathcal{J}}_{\Phi \theta}$ is the Jacobian matrix at ${\Phi}_{0}$ of the mapping $\Phi \mapsto \theta \left(\Phi \right)$, and to use the sensitivity test for the components of the parameter vector $\Phi $. Typically this results in the following type of directional test:
${\chi}_{\Phi}^{2}={\zeta}^{T}\,{\Sigma}^{-1}\,\mathcal{J}\,{\mathcal{J}}_{\Phi \theta}\,{\left({\mathcal{J}}_{\Phi \theta}^{T}\,{\mathcal{J}}^{T}\,{\Sigma}^{-1}\,\mathcal{J}\,{\mathcal{J}}_{\Phi \theta}\right)}^{-1}\,{\mathcal{J}}_{\Phi \theta}^{T}\,{\mathcal{J}}^{T}\,{\Sigma}^{-1}\,\zeta \gtrless \lambda \,.$  (19) 
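Computationally, test (19) has the same structure as (17)-(18), with the combined sensitivity $\mathcal{J}\,{\mathcal{J}}_{\Phi \theta}$ in place of $\mathcal{J}$. A minimal sketch, assuming the matrices are given and the names being ours:

```python
import numpy as np

def chi2_phi_test(zeta, J, J_phi, Sigma):
    """Directional test (19) in terms of the physical parameter Phi.

    J_phi stands for the Jacobian J_{Phi theta} of Phi -> theta(Phi) at Phi_0.
    """
    Si = np.linalg.inv(Sigma)
    M = J @ J_phi             # combined sensitivity J * J_{Phi theta}
    zbar = M.T @ Si @ zeta
    F = M.T @ Si @ M
    return zbar @ np.linalg.solve(F, zbar)   # compared with a threshold lambda
```

Running this test for one subset of components of $\Phi $ at a time is the sensitivity-based diagnosis described above.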
It should be clear that the selection of a particular parameterization $\Phi $ for the physical model may have a non-negligible influence on such tests, depending on the numerical conditioning of the Jacobian matrices ${\mathcal{J}}_{\Phi \theta}$.