Section: Scientific Foundations

Identification

The behavior of the monitored continuous system is assumed to be described by a parametric model $\{\mathbf{P}_\theta,\ \theta \in \Theta\}$, where the distribution of the observations $(Z_0, \ldots, Z_N)$ is characterized by the parameter vector $\theta \in \Theta$. An estimating function, for example of the form:

$$\mathcal{K}_N(\theta) \;=\; \frac{1}{N}\sum_{k=0}^{N} K(\theta, Z_k)$$

is such that $\mathbf{E}_\theta[\mathcal{K}_N(\theta)] = 0$ for all $\theta \in \Theta$. In many situations, $\mathcal{K}_N$ is the gradient of a function to be minimized: the squared prediction error, the log-likelihood (up to a sign), and so on. To perform model identification on the basis of the observations $(Z_0, \ldots, Z_N)$, an estimate of the unknown parameter is then [32]:

$$\widehat{\theta}_N \;=\; \arg\{\theta \in \Theta : \mathcal{K}_N(\theta) = 0\}$$
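As a concrete illustration, the sketch below solves such an estimating equation numerically for a scalar AR(1) model $Z_k = \theta Z_{k-1} + \varepsilon_k$, with $K(\theta, Z_k)$ taken as the gradient of the squared one-step prediction error. The model, the true value $\theta^* = 0.6$, and the use of SciPy's bracketing root finder are illustrative choices, not prescribed by the text.

```python
import numpy as np
from scipy.optimize import root_scalar

rng = np.random.default_rng(0)

# Simulate the (hypothetical) AR(1) process Z_k = theta* Z_{k-1} + eps_k.
theta_star, N = 0.6, 2000
Z = np.zeros(N + 1)
for k in range(1, N + 1):
    Z[k] = theta_star * Z[k - 1] + rng.standard_normal()

# K(theta, Z_k) = -(Z_k - theta Z_{k-1}) Z_{k-1}: the gradient (up to a
# factor 2) of the squared one-step prediction error, so that
# E_theta[K_N(theta)] = 0 holds at the true parameter.
def K_N(theta):
    residuals = Z[1:] - theta * Z[:-1]
    return -np.mean(residuals * Z[:-1])

# theta_hat_N: the root of the estimating equation K_N(theta) = 0.
theta_hat = root_scalar(K_N, bracket=[-0.99, 0.99]).root
print(f"theta_hat_N = {theta_hat:.4f}  (true value: {theta_star})")
```

For this particular model the root is available in closed form ($\widehat{\theta}_N = \sum_k Z_k Z_{k-1} / \sum_k Z_{k-1}^2$); the numerical root finder stands in for the general case, where no closed form exists.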

Assuming that $\theta^*$ is the true parameter value, and that, for fixed $\theta^*$, $\mathbf{E}_{\theta^*}[\mathcal{K}_N(\theta)] = 0$ if and only if $\theta = \theta^*$ (identifiability condition), the estimate $\widehat{\theta}_N$ converges towards $\theta^*$. By the central limit theorem, the vector $\mathcal{K}_N(\theta^*)$ is asymptotically Gaussian with zero mean and a covariance matrix $\Sigma$ which can be either computed or estimated. If, in addition, the matrix $\mathcal{J}_N = -\mathbf{E}_{\theta^*}[\mathcal{K}_N'(\theta^*)]$ is invertible, then a Taylor expansion of $\mathcal{K}_N$ around $\theta^*$ combined with the constraint $\mathcal{K}_N(\widehat{\theta}_N) = 0$ yields the asymptotic normality of the estimate:

$$\sqrt{N}\,(\widehat{\theta}_N - \theta^*) \;\approx\; \mathcal{J}_N^{-1}\,\sqrt{N}\,\mathcal{K}_N(\theta^*)$$
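Spelling out the Taylor step behind this display (a standard argument, included here for completeness and not taken from the source):

$$0 \;=\; \mathcal{K}_N(\widehat{\theta}_N) \;\approx\; \mathcal{K}_N(\theta^*) \;-\; \mathcal{J}_N\,(\widehat{\theta}_N - \theta^*),$$

where $\mathcal{K}_N'(\theta^*)$ has been replaced by its expectation $-\mathcal{J}_N$. Solving for $\widehat{\theta}_N - \theta^*$ and scaling by $\sqrt{N}$ gives the display above; since $\mathcal{K}_N(\theta^*)$ is asymptotically Gaussian, the limiting covariance of the estimate inherits the usual sandwich form $\mathcal{J}_N^{-1}\,\Sigma\,\mathcal{J}_N^{-\mathsf{T}}$, with $\Sigma$ the covariance matrix introduced above.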

In many applications, such an approach must be improved in the following directions:

  • Recursive estimation: the ability to compute $\widehat{\theta}_{N+1}$ simply from $\widehat{\theta}_N$;

  • Adaptive estimation: the ability to track the true parameter $\theta^*$ when it is time-varying (see the sketch after this list).
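To illustrate the contrast between the two items above, here is a minimal stochastic-approximation sketch, again on the illustrative AR(1) model and with arbitrarily chosen gains: a decreasing gain $\gamma_k = 1/k$ yields a recursive estimate that converges for a fixed $\theta^*$, while a small constant gain sacrifices some asymptotic accuracy in order to track a time-varying $\theta^*$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same illustrative AR(1) model, but theta* jumps halfway through the run.
N = 4000
theta_true = np.where(np.arange(N) < N // 2, 0.6, -0.3)

theta_rec = theta_adp = 0.0   # recursive and adaptive estimates
Z_prev = 0.0
for k in range(N):
    Z = theta_true[k] * Z_prev + rng.standard_normal()
    # Robbins-Monro update theta <- theta - gamma_k * K(theta, Z_k), with
    # K the prediction-error gradient used above.
    theta_rec += (Z - theta_rec * Z_prev) * Z_prev / (k + 1)   # gain 1/k
    theta_adp += 0.01 * (Z - theta_adp * Z_prev) * Z_prev      # constant gain
    Z_prev = Z

print(f"true theta after the jump : {theta_true[-1]:+.3f}")
print(f"recursive (gain 1/k)      : {theta_rec:+.3f}")  # lags: still pulled
                                                        # by pre-jump data
print(f"adaptive (constant gain)  : {theta_adp:+.3f}")  # tracks the jump
```

The constant gain acts as a forgetting factor: larger values react faster to parameter changes but leave more residual fluctuation, so its choice trades tracking speed against estimation noise.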