

Section: Scientific Foundations

Identification

The behavior of the monitored continuous system is assumed to be described by a parametric model $\{\mathbf{P}_\theta, \theta \in \Theta\}$, where the distribution of the observations $(Z_0, \ldots, Z_N)$ is characterized by the parameter vector $\theta \in \Theta$. An estimating function, for example of the form:

$$\mathcal{K}_N(\theta) = \frac{1}{N} \sum_{k=0}^{N} K(\theta, Z_k)$$

is such that $\mathbf{E}_\theta[\mathcal{K}_N(\theta)] = 0$ for all $\theta \in \Theta$. In many situations, $\mathcal{K}$ is the gradient of a function to be minimized: the squared prediction error, the log-likelihood (up to a sign), and so on. To perform model identification on the basis of the observations $(Z_0, \ldots, Z_N)$, an estimate of the unknown parameter is then [34]:

$$\widehat{\theta}_N = \arg\{\theta \in \Theta : \mathcal{K}_N(\theta) = 0\}$$
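To make this concrete, here is a minimal sketch, assuming an AR(1) model $Z_k = \theta Z_{k-1} + \varepsilon_k$ and taking $K$ proportional to the gradient of the squared one-step prediction error, so that $\mathcal{K}_N(\theta) = 0$ is the least-squares normal equation. All names (simulate_ar1, estimate, the chosen parameter values) are illustrative, not part of the referenced method.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(theta_star, n):
    """Generate observations Z_0, ..., Z_n from an AR(1) model."""
    z = np.zeros(n + 1)
    for k in range(1, n + 1):
        z[k] = theta_star * z[k - 1] + rng.standard_normal()
    return z

def K(theta, z_prev, z_curr):
    """Estimating function: proportional to the gradient of the
    squared one-step prediction error (z_curr - theta * z_prev)^2."""
    return (z_curr - theta * z_prev) * z_prev

def estimate(z):
    """Solve K_N(theta) = 0; closed form in the scalar AR(1) case
    (the least-squares normal equation)."""
    z_prev, z_curr = z[:-1], z[1:]
    return np.sum(z_curr * z_prev) / np.sum(z_prev ** 2)

z = simulate_ar1(theta_star=0.8, n=5000)
theta_hat = estimate(z)
# The empirical estimating function vanishes at theta_hat, as required.
assert abs(np.mean(K(theta_hat, z[:-1], z[1:]))) < 1e-8
print(f"theta_hat = {theta_hat:.4f}")  # close to 0.8 for large N
```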

Assuming that $\theta^*$ is the true parameter value, and that $\mathbf{E}_{\theta^*}[\mathcal{K}_N(\theta)] = 0$ if and only if $\theta = \theta^*$ with $\theta^*$ fixed (identifiability condition), $\widehat{\theta}_N$ converges towards $\theta^*$. By the central limit theorem, the vector $\sqrt{N}\,\mathcal{K}_N(\theta^*)$ is asymptotically Gaussian with zero mean, with a covariance matrix $\Sigma$ which can be either computed or estimated. If, additionally, the matrix $\mathcal{J}_N = -\mathbf{E}_{\theta^*}[\mathcal{K}_N'(\theta^*)]$ is invertible, then a first-order Taylor expansion of $\mathcal{K}_N$ around $\theta^*$, together with the constraint $\mathcal{K}_N(\widehat{\theta}_N) = 0$, yields the asymptotic normality of the estimate:

$$\sqrt{N}\,(\widehat{\theta}_N - \theta^*) \approx \mathcal{J}_N^{-1}\, \sqrt{N}\,\mathcal{K}_N(\theta^*)$$
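Continuing the AR(1) sketch above, $\Sigma$ and $\mathcal{J}_N$ can be replaced by plug-in estimates evaluated at $\widehat{\theta}_N$ to obtain a confidence interval (the standard sandwich construction). In this scalar case $-\partial K/\partial\theta = Z_{k-1}^2$, so $\mathcal{J}_N$ reduces to an empirical second moment. This illustrates the formula only; it is not a fragment of the referenced implementation.

```python
# Plug-in estimates of Sigma (covariance of sqrt(N) K_N(theta*)) and of
# J_N = -E[K_N'(theta*)], both evaluated at theta_hat instead of theta*.
z_prev, z_curr = z[:-1], z[1:]
N = len(z_prev)

scores = K(theta_hat, z_prev, z_curr)   # K(theta_hat, Z_k), one per sample
sigma_hat = np.mean(scores ** 2)        # empirical variance (scores have zero mean)
j_hat = np.mean(z_prev ** 2)            # -dK/dtheta = Z_{k-1}^2 in this model

var_hat = sigma_hat / j_hat ** 2        # sandwich variance J^{-1} Sigma J^{-1}
stderr = np.sqrt(var_hat / N)
print(f"theta_hat = {theta_hat:.4f} +/- {1.96 * stderr:.4f} (95% CI)")
```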

In many applications, such an approach must be improved in the following directions (see the sketch after this list):

  • Recursive estimation: the ability to compute $\widehat{\theta}_{N+1}$ simply from $\widehat{\theta}_N$;

  • Adaptive estimation: the ability to track the true parameter $\theta^*$ when it is time-varying.
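As a rough illustration of both points, one classical route is a stochastic-gradient recursion driven by the same estimating function: a decreasing step size $\gamma_k$ gives a recursive estimate converging to a fixed $\theta^*$, while a small constant step keeps adapting and can track slow variations. The sketch below reuses simulate_ar1 and K from above; the step size is an arbitrary illustrative choice, not the algorithm used in the project.

```python
def track(z, gamma=0.01, theta0=0.0):
    """Adaptive estimation: theta_{k+1} = theta_k + gamma * K(theta_k, Z_k).
    A constant step gamma trades asymptotic accuracy for tracking ability."""
    theta, path = theta0, []
    for k in range(1, len(z)):
        theta += gamma * K(theta, z[k - 1], z[k])
        path.append(theta)
    return np.array(path)

# Time-varying truth: theta* drops from 0.8 to 0.3 halfway through the record.
z_varying = np.concatenate([simulate_ar1(0.8, 3000), simulate_ar1(0.3, 3000)])
path = track(z_varying)
print(f"estimate before the change: {path[2500]:.2f}, after: {path[-1]:.2f}")
```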