Section: Research Program

Vibration analysis

In this section, the main features of the key monitoring issues, namely identification, detection, and diagnostics, are presented, and a particular instantiation relevant for vibration monitoring is described.

It should be stressed that the foundations for identification, detection, and diagnostics are fairly general, if not generic. What is specific to vibration-based SHM is the handling of high-order linear dynamical systems, in connection with finite element models, which calls for subspace-based methods. Actually, one particular feature of model-based sensor data processing as exercised in I4S is the combined use of black-box or semi-physical models together with physical ones. Black-box and semi-physical models are, for example, eigenstructure parameterizations of linear MIMO systems, of interest for modal analysis and vibration-based SHM. Such models are intended to be identifiable. However, due to the large model orders that need to be considered, model order selection is a real challenge. Traditional advanced techniques from statistics, such as the various forms of Akaike criteria (AIC, BIC, MDL, ...), do not work at all. This gives rise to new research activities specific to handling high-order models.

Our approach to monitoring assumes that a model of the monitored system is available. This is a reasonable assumption, especially within the SHM areas. The main feature of our monitoring method is its intrinsic ability to provide early warning of small deviations of a system with respect to a reference (safe) behavior under usual operating conditions, namely without any artificial excitation or other external action. Such a normal behavior is summarized in a reference parameter vector $\theta_0$, for example a collection of modes and mode-shapes.

Identification

The behavior of the monitored continuous system is assumed to be described by a parametric model $\{\mathbf{P}_\theta,\; \theta \in \Theta\}$, where the distribution of the observations $(Z_0, \ldots, Z_N)$ is characterized by the parameter vector $\theta \in \Theta$.

For reasons closely related to the vibration monitoring applications, we have been investigating subspace-based methods, for both the identification and the monitoring of the eigenstructure $(\lambda, \varphi_\lambda)$ of the state transition matrix $F$ of a linear dynamical state-space system:

$$\begin{cases} X_{k+1} = F\,X_k + V_{k+1} \\ Y_k = H\,X_k + W_k \end{cases} \qquad (4)$$

namely the $(\lambda, \phi_\lambda)$ defined by:

$$\det(F - \lambda I) = 0, \qquad (F - \lambda I)\,\varphi_\lambda = 0, \qquad \phi_\lambda \stackrel{\Delta}{=} H\,\varphi_\lambda \qquad (5)$$

The (canonical) parameter vector in that case is:

$$\theta \stackrel{\Delta}{=} \begin{pmatrix} \Lambda \\ \mathrm{vec}\,\Phi \end{pmatrix} \qquad (6)$$

where $\Lambda$ is the vector whose elements are the eigenvalues $\lambda$, $\Phi$ is the matrix whose columns are the $\phi_\lambda$'s, and vec is the column-stacking operator.

Subspace-based methods is the generic name for linear system identification algorithms based on either time-domain measurements or output covariance matrices, in which different subspaces of Gaussian random vectors play a key role [62].

Let $R_i \stackrel{\Delta}{=} \mathbf{E}\left(Y_k\,Y_{k-i}^T\right)$ and:

$$\mathcal{H}_{p+1,q} \stackrel{\Delta}{=} \begin{pmatrix} R_1 & R_2 & \cdots & R_q \\ R_2 & R_3 & \cdots & R_{q+1} \\ \vdots & \vdots & & \vdots \\ R_{p+1} & R_{p+2} & \cdots & R_{p+q} \end{pmatrix} \stackrel{\Delta}{=} \mathrm{Hank}\left(R_i\right) \qquad (7)$$

be the output covariance and Hankel matrices, respectively, and let $G \stackrel{\Delta}{=} \mathbf{E}\left(X_k\,Y_{k-1}^T\right)$. Direct computation of the $R_i$'s from equations (4) leads to the well-known key factorizations:

$$R_i = H\,F^{i-1}\,G, \qquad \mathcal{H}_{p+1,q} = \mathcal{O}_{p+1}(H,F)\;\mathcal{C}_q(F,G) \qquad (8)$$

where:

$$\mathcal{O}_{p+1}(H,F) \stackrel{\Delta}{=} \begin{pmatrix} H \\ HF \\ \vdots \\ HF^p \end{pmatrix} \qquad \text{and} \qquad \mathcal{C}_q(F,G) \stackrel{\Delta}{=} \begin{pmatrix} G & FG & \cdots & F^{q-1}G \end{pmatrix} \qquad (9)$$

are the observability and controllability matrices, respectively. The observation matrix $H$ is then found in the first block-row of the observability matrix $\mathcal{O}_{p+1}$. The state transition matrix $F$ is obtained from the shift invariance property of $\mathcal{O}_{p+1}$. The eigenstructure $(\lambda, \varphi_\lambda)$ then results from (5).

Since the actual model order is generally not known, this procedure is run with increasing model orders.
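
For illustration, the sketch below outlines this covariance-driven procedure in Python/numpy; the function names, the data record `Y` (an N x r array of output measurements), and the truncation to a candidate `order` are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def output_covariances(Y, max_lag):
    """Empirical output covariances R_i = E[Y_k Y_{k-i}^T], i = 0, ..., max_lag."""
    N = Y.shape[0]
    return [sum(np.outer(Y[k], Y[k - i]) for k in range(i, N)) / (N - i)
            for i in range(max_lag + 1)]

def block_hankel(R, p, q):
    """Block Hankel matrix Hank(R_i) of equation (7), built from the list R of covariances."""
    return np.block([[R[i + j + 1] for j in range(q)] for i in range(p + 1)])

def identify_eigenstructure(Y, p, q, order):
    """Estimate the eigenstructure (lambda, phi_lambda) of (5) for one candidate model order."""
    r = Y.shape[1]
    covs = output_covariances(Y, p + q)
    hank = block_hankel(covs, p, q)
    # Observability matrix from a truncated SVD of the Hankel matrix, cf. factorization (8).
    U, s, _ = np.linalg.svd(hank, full_matrices=False)
    obs = U[:, :order] * np.sqrt(s[:order])
    # H is the first block row of the observability matrix ...
    H = obs[:r, :]
    # ... and F follows from its shift invariance: obs[:-r] @ F = obs[r:].
    F = np.linalg.lstsq(obs[:-r, :], obs[r:, :], rcond=None)[0]
    lam, eigvec = np.linalg.eig(F)
    return lam, H @ eigvec  # eigenvalues and observed mode-shapes phi_lambda = H phi
```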

Detection

Our approach to on-board detection is based on the so-called asymptotic statistical local approach. It is worth noticing that these investigations of ours were initially motivated by a vibration monitoring application. It should also be stressed that, as opposed to many monitoring approaches, our method does not require a repeated identification for each newly collected data sample.

For achieving the early detection of small deviations with respect to the normal behavior, our approach generates, on the basis of the reference parameter vector $\theta_0$ and a new data record, indicators which automatically perform:

  • The early detection of a slight mismatch between the model and the data;

  • A preliminary diagnostics and localization of the deviation(s);

  • The tradeoff between the magnitude of the detected changes and the uncertainty resulting from the estimation error in the reference model and the measurement noise level.

These indicators are computationally cheap, and thus can be embedded. This is of particular interest in some applications, such as flutter monitoring.

Choosing the eigenvectors of matrix F as a basis for the state space of model (4) yields the following representation of the observability matrix:

$$\mathcal{O}_{p+1}(\theta) = \begin{pmatrix} \Phi \\ \Phi\Delta \\ \vdots \\ \Phi\Delta^p \end{pmatrix} \qquad (10)$$

where $\Delta \stackrel{\Delta}{=} \mathrm{diag}(\Lambda)$, and $\Lambda$ and $\Phi$ are as in (6). Whether a nominal parameter $\theta_0$ fits a given output covariance sequence $(R_j)_j$ is characterized by:

$$\mathcal{O}_{p+1}(\theta_0) \;\text{ and }\; \mathcal{H}_{p+1,q} \;\text{ have the same left kernel space.} \qquad (11)$$

This property can be checked as follows. From the nominal $\theta_0$, compute $\mathcal{O}_{p+1}(\theta_0)$ using (10), and perform e.g. a singular value decomposition (SVD) of $\mathcal{O}_{p+1}(\theta_0)$ for extracting a matrix $U$ such that:

$$U^T U = I_s \qquad \text{and} \qquad U^T\,\mathcal{O}_{p+1}(\theta_0) = 0 \qquad (12)$$

Matrix $U$ is not unique (two such matrices relate through a post-multiplication with an orthonormal matrix), but can be regarded as a function of $\theta_0$. Then the characterization writes:

$$U(\theta_0)^T\,\mathcal{H}_{p+1,q} = 0 \qquad (13)$$
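
As a minimal numerical sketch of (10)-(12) (assuming numpy; the helper names and the tolerance are illustrative), the left kernel basis $U(\theta_0)$ can be obtained as follows:

```python
import numpy as np

def observability_from_theta(Lam, Phi, p):
    """O_{p+1}(theta) of equation (10), with Delta = diag(Lambda)."""
    Delta = np.diag(Lam)
    blocks = [Phi]
    for _ in range(p):
        blocks.append(blocks[-1] @ Delta)   # next block row: previous one times Delta
    return np.vstack(blocks)

def left_null_space(obs, tol=1e-10):
    """Orthonormal basis U of the left kernel of obs, via an SVD, cf. equation (12).
    For a complex modal basis, the conjugate transpose plays the role of U^T."""
    U_full, s, _ = np.linalg.svd(obs, full_matrices=True)
    rank = int(np.sum(s > tol * s[0]))
    return U_full[:, rank:]
```
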
Residual associated with subspace identification.

Assume now that a reference $\theta_0$ and a new sample $Y_1, \ldots, Y_N$ are available. For checking whether the data agree with $\theta_0$, the idea is to compute the empirical Hankel matrix $\widehat{\mathcal{H}}_{p+1,q}$:

$$\widehat{\mathcal{H}}_{p+1,q} \stackrel{\Delta}{=} \mathrm{Hank}\left(\widehat{R}_i\right), \qquad \widehat{R}_i \stackrel{\Delta}{=} \frac{1}{N-i} \sum_{k=i+1}^{N} Y_k\,Y_{k-i}^T \qquad (14)$$

and to define the residual vector:

$$\zeta_N(\theta_0) \stackrel{\Delta}{=} \sqrt{N}\;\mathrm{vec}\left(U(\theta_0)^T\,\widehat{\mathcal{H}}_{p+1,q}\right) \qquad (15)$$

Let $\theta$ be the actual parameter value for the system which generated the new data sample, and $\mathbf{E}_\theta$ be the expectation when the actual system parameter is $\theta$. From (13), we know that $\zeta_N(\theta_0)$ has zero mean when no change occurs in $\theta$, and nonzero mean if a change occurs. Thus $\zeta_N(\theta_0)$ plays the role of a residual.
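
A sketch of the residual computation (14)-(15) is given below, reusing the illustrative helpers `output_covariances`, `block_hankel`, `observability_from_theta`, and `left_null_space` from the previous sketches (numpy assumed).

```python
import numpy as np

def residual(Y, Lam0, Phi0, p, q):
    """Subspace-based residual zeta_N(theta_0) of equation (15), for a new record Y of shape (N, r)."""
    N = Y.shape[0]
    R_hat = output_covariances(Y, p + q)      # empirical covariances of (14)
    hank_hat = block_hankel(R_hat, p, q)      # empirical Hankel matrix of (14)
    U0 = left_null_space(observability_from_theta(Lam0, Phi0, p))
    # vec(.) is the column-stacking operator, hence order='F' below.
    return np.sqrt(N) * (U0.conj().T @ hank_hat).ravel(order='F')
```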

As in most fault detection approaches, the key issue is to design a residual which is ideally close to zero under normal operation, has low sensitivity to noise and other nuisance perturbations, but high sensitivity to small deviations, before they develop into events to be avoided (damage, faults, ...). The originality of our approach is to:

  • Design the residual basically as a parameter estimating function,

  • Evaluate the residual thanks to a kind of central limit theorem, stating that the residual is asymptotically Gaussian and reflects the presence of a deviation in the parameter vector through a change in its own mean vector, which switches from zero in the reference situation to a non-zero value.

The central limit theorem shows [56] that the residual is asymptotically Gaussian:

$$\zeta_N \;\xrightarrow[N \to \infty]{}\; \begin{cases} \mathcal{N}\left(0,\,\Sigma\right) & \text{under } \mathbf{P}_{\theta_0}, \\ \mathcal{N}\left(\mathcal{J}\,\eta,\,\Sigma\right) & \text{under } \mathbf{P}_{\theta_0 + \eta/\sqrt{N}}, \end{cases} \qquad (16)$$

where the asymptotic covariance matrix $\Sigma$ can be estimated, and the deviation $\eta$ in the parameter vector is manifested by a change in the mean value of the residual. Then, deciding between $\eta = 0$ and $\eta \neq 0$ amounts to computing the following $\chi^2$-test, provided that $\mathcal{J}$ is full rank and $\Sigma$ is invertible:

$$\chi^2 = \overline{\zeta}^T\,\mathbf{F}^{-1}\,\overline{\zeta} \;\gtrless\; \lambda \qquad (17)$$

where

$$\overline{\zeta} \stackrel{\Delta}{=} \mathcal{J}^T\,\Sigma^{-1}\,\zeta_N \qquad \text{and} \qquad \mathbf{F} \stackrel{\Delta}{=} \mathcal{J}^T\,\Sigma^{-1}\,\mathcal{J} \qquad (18)$$
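
A minimal sketch of the test (17)-(18), assuming numpy and that the residual `zeta`, the Jacobian `J` of (16), and an estimate `Sigma` of the asymptotic covariance are available (their estimation is not shown here):

```python
import numpy as np

def chi2_global_test(zeta, J, Sigma):
    """Global chi^2 statistic of equation (17), to be compared against a threshold lambda."""
    Sinv = np.linalg.inv(Sigma)
    zeta_bar = J.T @ Sinv @ zeta               # equation (18)
    F = J.T @ Sinv @ J                         # equation (18)
    return zeta_bar @ np.linalg.solve(F, zeta_bar)
```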

Diagnostics

A further monitoring step, often called fault isolation, consists in determining which (subsets of) components of the parameter vector θ have been affected by the change. Solutions for that are now described. How this relates to diagnostics is addressed afterwards.

The question of which (subsets of) components of $\theta$ have changed can be addressed using either nuisance parameter elimination methods or a multiple hypotheses testing approach [55].

In most SHM applications, a complex physical system, characterized by a generally non-identifiable parameter vector $\Phi$, has to be monitored using a simple (black-box) model characterized by an identifiable parameter vector $\theta$. A typical example is the vibration monitoring problem, for which complex finite element models are often available but not identifiable, whereas the small number of existing sensors calls for identifying only simplified input-output (black-box) representations. In such a situation, two different diagnosis problems may arise, namely diagnosis in terms of the black-box parameter $\theta$ and diagnosis in terms of the parameter vector $\Phi$ of the underlying physical model.

The isolation methods sketched above are possible solutions to the former. Our approach to the latter diagnosis problem is basically a detection approach again, and not a (generally ill-posed) inverse problem estimation approach.

The basic idea is to note that the physical sensitivity matrix writes $\mathcal{J}\,\mathcal{J}_{\Phi\theta}$, where $\mathcal{J}_{\Phi\theta}$ is the Jacobian matrix at $\Phi_0$ of the application $\Phi \mapsto \theta(\Phi)$, and to use the sensitivity test for the components of the parameter vector $\Phi$. Typically this results in the following type of directional test:

$$\chi_\Phi^2 = \zeta^T\,\Sigma^{-1}\,\mathcal{J}\,\mathcal{J}_{\Phi\theta}\left(\mathcal{J}_{\Phi\theta}^T\,\mathcal{J}^T\,\Sigma^{-1}\,\mathcal{J}\,\mathcal{J}_{\Phi\theta}\right)^{-1}\mathcal{J}_{\Phi\theta}^T\,\mathcal{J}^T\,\Sigma^{-1}\,\zeta \;\gtrless\; \lambda \qquad (19)$$
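
Under the same illustrative assumptions (numpy; `zeta`, `J`, and `Sigma` as above, and `J_phi_theta` standing for the Jacobian matrix of the application $\Phi \mapsto \theta(\Phi)$ at $\Phi_0$), this directional test can be sketched as:

```python
import numpy as np

def chi2_physical_test(zeta, J, J_phi_theta, Sigma):
    """Directional chi^2 statistic of equation (19), for changes along the physical parameters Phi."""
    Sinv = np.linalg.inv(Sigma)
    A = J @ J_phi_theta                        # physical sensitivity J * J_{Phi theta}
    M = A.T @ Sinv @ A
    b = A.T @ Sinv @ zeta
    return b @ np.linalg.solve(M, b)           # to be compared against a threshold lambda
```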

It should be clear that the selection of a particular parameterization $\Phi$ for the physical model may have a non-negligible influence on such tests, depending on the numerical conditioning of the Jacobian matrices $\mathcal{J}_{\Phi\theta}$.