Section: Scientific Foundations

Subspace-based identification and detection

For reasons closely related to the vibration monitoring applications described in module 4.2, we have been investigating subspace-based methods, for both the identification and the monitoring of the eigenstructure $(\lambda, \varphi_\lambda)$ of the state transition matrix $F$ of a linear dynamical state-space system:

$$X_{k+1} = F\, X_k + V_{k+1}\,, \qquad Y_k = H\, X_k \qquad\qquad (9)$$

namely the $(\lambda, \phi_\lambda)$ defined by:

$$\det(F - \lambda I) = 0\,, \qquad (F - \lambda I)\, \varphi_\lambda = 0\,, \qquad \phi_\lambda \stackrel{\Delta}{=} H \varphi_\lambda \qquad\qquad (10)$$

The (canonical) parameter vector in that case is:

$$\theta \stackrel{\Delta}{=} \begin{pmatrix} \Lambda \\ \operatorname{vec} \Phi \end{pmatrix} \qquad\qquad (11)$$

where $\Lambda$ is the vector whose elements are the eigenvalues $\lambda$, $\Phi$ is the matrix whose columns are the $\phi_\lambda$'s, and $\operatorname{vec}$ is the column stacking operator.
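
For illustration only, a minimal Python/NumPy sketch of the eigenstructure (10) and of the parameter vector (11) could read as follows; the matrices $F$ and $H$ and the dimensions are arbitrary placeholders, not values from any application.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 4, 2                            # state and output dimensions (arbitrary)
F = 0.5 * rng.standard_normal((n, n))  # placeholder state transition matrix
H = rng.standard_normal((r, n))        # placeholder observation matrix

# Eigenstructure (10): eigenvalues lambda and eigenvectors varphi_lambda of F,
# and observed mode shapes phi_lambda = H varphi_lambda.
Lambda, varphi = np.linalg.eig(F)
Phi = H @ varphi

# Canonical parameter vector (11): theta = (Lambda, vec Phi),
# with vec the column-stacking operator.
theta = np.concatenate([Lambda, Phi.flatten(order="F")])
```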

Subspace-based methods is the generic name for linear system identification algorithms based on either time-domain measurements or output covariance matrices, in which different subspaces of Gaussian random vectors play a key role [39]. A contribution of ours, minor but extremely fruitful, has been to write the output-only covariance-driven subspace identification method under a form that involves a parameter estimating function, from which we define a residual adapted to vibration monitoring [1]. This is explained next.

Covariance-driven subspace identification.

Let $R_i \stackrel{\Delta}{=} \mathbf{E}\left[ Y_k\, Y_{k-i}^T \right]$ and:

$$\mathcal{H}_{p+1,q} \stackrel{\Delta}{=} \begin{pmatrix} R_0 & R_1 & \cdots & R_{q-1} \\ R_1 & R_2 & \cdots & R_q \\ \vdots & \vdots & & \vdots \\ R_p & R_{p+1} & \cdots & R_{p+q-1} \end{pmatrix} \stackrel{\Delta}{=} \operatorname{Hank}(R_i) \qquad\qquad (12)$$

be the output covariance and Hankel matrices, respectively; and let $G \stackrel{\Delta}{=} \mathbf{E}\left[ X_k\, Y_k^T \right]$. Direct computation of the $R_i$'s from equations (9) leads to the well-known key factorizations:

$$R_i = H F^i G\,, \qquad \mathcal{H}_{p+1,q} = \mathcal{O}_{p+1}(H,F)\; \mathcal{C}_q(F,G) \qquad\qquad (13)$$

where:

$$\mathcal{O}_{p+1}(H,F) \stackrel{\Delta}{=} \begin{pmatrix} H \\ HF \\ \vdots \\ HF^p \end{pmatrix} \quad\text{and}\quad \mathcal{C}_q(F,G) \stackrel{\Delta}{=} \begin{pmatrix} G & FG & \cdots & F^{q-1} G \end{pmatrix} \qquad\qquad (14)$$

are the observability and controllability matrices, respectively. The observation matrix $H$ is then found in the first block-row of the observability matrix $\mathcal{O}$. The state transition matrix $F$ is obtained from the shift invariance property of $\mathcal{O}$. The eigenstructure $(\lambda, \varphi_\lambda)$ then results from (10).
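
In explicit form, with $\mathcal{O}^{\uparrow}_{p+1}$ and $\mathcal{O}^{\downarrow}_{p+1}$ denoting (as a shorthand introduced here for clarity) the observability matrix deprived of its last and of its first block row, respectively, this shift invariance property writes:

$$\mathcal{O}^{\uparrow}_{p+1}(H,F)\; F = \mathcal{O}^{\downarrow}_{p+1}(H,F)\,,$$

from which $F$ is recovered, e.g. in the least squares sense.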

Since the actual model order is generally not known, this procedure is run with increasing model orders.
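
A minimal sketch of one such identification pass, at a fixed model order $n$, might look as follows (Python/NumPy); the function name, the SVD truncation rule and the parameters $p$, $q$, $n$ are illustrative assumptions, not the report's implementation:

```python
import numpy as np

def covariance_subspace_id(Y, p, q, n):
    """One pass of covariance-driven subspace identification at model order n.
    Y: (N, r) array of output measurements; returns estimates (H_hat, F_hat)."""
    N, r = Y.shape
    # Empirical output covariances R_i = E[Y_k Y_{k-i}^T], cf. (19)
    R = [Y[i:].T @ Y[:N - i] / (N - i) for i in range(p + q)]
    # Block Hankel matrix Hank(R_i) as in (12)
    hank = np.block([[R[i + j] for j in range(q)] for i in range(p + 1)])
    # Factorization (13): keep the n dominant singular directions as an
    # estimate of the observability matrix O_{p+1}(H, F)
    U, s, _ = np.linalg.svd(hank, full_matrices=False)
    obs = U[:, :n] * np.sqrt(s[:n])
    H_hat = obs[:r, :]                       # first block row of O
    # Shift invariance of O: obs_up @ F = obs_down, solved by least squares
    F_hat = np.linalg.lstsq(obs[:-r, :], obs[r:, :], rcond=None)[0]
    return H_hat, F_hat

# The eigenstructure then follows from (10): eigen-decompose F_hat and map the
# eigenvectors through H_hat; in practice this is repeated for increasing n.
```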

Model parameter characterization.

Choosing the eigenvectors of matrix $F$ as a basis for the state space of model (9) yields the following representation of the observability matrix:

$$\mathcal{O}_{p+1}(\theta) = \begin{pmatrix} \Phi \\ \Phi\Delta \\ \vdots \\ \Phi\Delta^p \end{pmatrix} \qquad\qquad (15)$$

where $\Delta \stackrel{\Delta}{=} \operatorname{diag}(\Lambda)$, and $\Lambda$ and $\Phi$ are as in (11). Whether a nominal parameter $\theta_0$ fits a given output covariance sequence $(R_j)_j$ is characterized by [1]:

$$\mathcal{O}_{p+1}(\theta_0) \ \text{and} \ \mathcal{H}_{p+1,q} \ \text{have the same left kernel space.} \qquad\qquad (16)$$

This property can be checked as follows. From the nominal $\theta_0$, compute $\mathcal{O}_{p+1}(\theta_0)$ using (15), and perform e.g. a singular value decomposition (SVD) of $\mathcal{O}_{p+1}(\theta_0)$ to extract a matrix $U$ such that:

$$U^T U = I_s \quad\text{and}\quad U^T\, \mathcal{O}_{p+1}(\theta_0) = 0 \qquad\qquad (17)$$

Matrix $U$ is not unique (two such matrices relate through a post-multiplication with an orthonormal matrix), but it can be regarded as a function of $\theta_0$. Then the characterization (16) writes:

$$U(\theta_0)^T\, \mathcal{H}_{p+1,q} = 0 \qquad\qquad (18)$$
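
A hedged Python/NumPy sketch of this check could be the following; the helper name, the full-column-rank assumption on $\mathcal{O}_{p+1}(\theta_0)$, and the use of the conjugate transpose to cover complex mode shapes are ours:

```python
import numpy as np

def left_kernel_basis(Lambda0, Phi0, p):
    """Left-kernel basis U(theta_0) of O_{p+1}(theta_0), cf. (15)-(17).
    Lambda0: (n,) eigenvalues; Phi0: (r, n) mode shapes (columns phi_lambda)."""
    r, n = Phi0.shape
    Delta = np.diag(Lambda0)
    # O_{p+1}(theta_0) = [Phi; Phi Delta; ...; Phi Delta^p], cf. (15)
    obs0 = np.vstack([Phi0 @ np.linalg.matrix_power(Delta, i)
                      for i in range(p + 1)])
    # Left singular vectors beyond the column rank span the left kernel;
    # U is defined up to a post-multiplication by an orthonormal matrix.
    U_full, _, _ = np.linalg.svd(obs0, full_matrices=True)
    return U_full[:, n:]   # assumes O_{p+1}(theta_0) has full column rank n

# The characterization (18) then amounts to checking that
# U.conj().T @ Hank is (close to) zero for the Hankel matrix at hand.
```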

Residual associated with subspace identification.

Assume now that a reference $\theta_0$ and a new data sample $Y_1, \ldots, Y_N$ are available. For checking whether the data agree with $\theta_0$, the idea is to compute the empirical Hankel matrix $\widehat{\mathcal{H}}_{p+1,q}$:

$$\widehat{\mathcal{H}}_{p+1,q} \stackrel{\Delta}{=} \operatorname{Hank}(\widehat{R}_i)\,, \qquad \widehat{R}_i \stackrel{\Delta}{=} \frac{1}{N-i} \sum_{k=i+1}^{N} Y_k\, Y_{k-i}^T \qquad\qquad (19)$$

and to define the residual vector:

$$\zeta_N(\theta_0) \stackrel{\Delta}{=} \sqrt{N}\, \operatorname{vec}\!\left( U(\theta_0)^T\, \widehat{\mathcal{H}}_{p+1,q} \right) \qquad\qquad (20)$$

Let $\theta$ be the actual parameter value for the system which generated the new data sample, and $\mathbf{E}_\theta$ be the expectation when the actual system parameter is $\theta$. From (18), we know that $\zeta_N(\theta_0)$ has zero mean when no change occurs in $\theta$, and nonzero mean if a change occurs. Thus $\zeta_N(\theta_0)$ plays the role of a residual.
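
Continuing with the same assumed conventions and helper names as in the previous sketches, a minimal computation of the empirical Hankel matrix (19) and of the residual (20) could read:

```python
import numpy as np

def subspace_residual(Y, U0, p, q):
    """Residual zeta_N(theta_0) of (20) for a new data sample.
    Y: (N, r) new outputs; U0: left-kernel basis U(theta_0), cf. (17)."""
    N, r = Y.shape
    # Empirical covariances and Hankel matrix, cf. (19)
    R_hat = [Y[i:].T @ Y[:N - i] / (N - i) for i in range(p + q)]
    hank_hat = np.block([[R_hat[i + j] for j in range(q)]
                         for i in range(p + 1)])
    # zeta_N(theta_0) = sqrt(N) vec(U(theta_0)^T Hank_hat), cf. (20)
    return np.sqrt(N) * (U0.conj().T @ hank_hat).flatten(order="F")

# Under theta = theta_0 the residual has zero mean; a significantly nonzero
# value signals a change, which is the basis of the monitoring applications.
```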

It is our experience that this residual has highly interesting properties, both for damage detection [1] and localization [3], and for flutter monitoring [8].

Other uses of the key factorizations.

Factorization (13) is the key for a characterization of the canonical parameter vector $\theta$ in (11), and for deriving the residual. It is also the key for:

  • Proving consistency and robustness results [6];

  • Designing an extension of the covariance-driven subspace identification algorithm adapted to the presence and fusion of non-simultaneously recorded multiple sensor setups [7];

  • Proving the consistency and robustness of this extension [9];

  • Designing various forms of input-output covariance-driven subspace identification algorithms adapted to the presence of both known inputs and unknown excitations [10].