##### I4S - 2011


## Section: Scientific Foundations

### Identification

The behavior of the monitored continuous system is assumed to be described by a parametric model $\{\mathbf{P}_\theta,\ \theta \in \Theta\}$, where the distribution of the observations $(Z_0, \ldots, Z_N)$ is characterized by the parameter vector $\theta \in \Theta$. An estimating function, for example of the form:

$$\mathcal{K}_N(\theta) = \frac{1}{N} \sum_{k=0}^{N} K(\theta, Z_k)$$

is such that $\mathbf{E}_\theta[\mathcal{K}_N(\theta)] = 0$ for all $\theta \in \Theta$. In many situations, $\mathcal{K}$ is the gradient of a function to be minimized: squared prediction error, log-likelihood (up to a sign), etc. For performing model identification on the basis of observations $(Z_0, \ldots, Z_N)$, an estimate of the unknown parameter is then [32]:

$$\hat{\theta}_N = \arg\{\theta \in \Theta \,:\, \mathcal{K}_N(\theta) = 0\}$$
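
To make the definition concrete, here is a minimal sketch (not the team's code) for the simplest possible case, estimating a scalar mean: with $K(\theta, Z) = Z - \theta$, the condition $\mathbf{E}_\theta[\mathcal{K}_N(\theta)] = 0$ holds, and the root of $\mathcal{K}_N$ is the sample mean.

```python
# Illustrative example: estimating function K(theta, Z) = Z - theta,
# whose root theta_hat is the sample mean. All names are hypothetical.

def K_N(theta, observations):
    """Empirical estimating function: (1/N) * sum_k K(theta, Z_k)."""
    return sum(z - theta for z in observations) / len(observations)

def estimate(observations, lo=-1e6, hi=1e6, tol=1e-9):
    """Solve K_N(theta) = 0 by bisection (K_N is decreasing in theta here)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if K_N(mid, observations) > 0:
            lo = mid  # root lies to the right of mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

data = [1.0, 2.0, 3.0, 4.0]
theta_hat = estimate(data)
print(theta_hat)  # the sample mean, 2.5
```

For this toy $K$ the root is available in closed form; the point of the formulation is that the same recipe applies when $\mathcal{K}_N$ is, e.g., the gradient of a prediction-error criterion and the root must be found numerically.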

Assuming that $\theta^*$ is the true parameter value, and that $\mathbf{E}_{\theta^*}[\mathcal{K}_N(\theta)] = 0$ if and only if $\theta = \theta^*$ (identifiability condition), then $\hat{\theta}_N$ converges towards $\theta^*$. By the central limit theorem, the vector $\mathcal{K}_N(\theta^*)$ is asymptotically Gaussian with zero mean and covariance matrix $\Sigma$, which can be either computed or estimated. If, additionally, the matrix $\mathcal{J}_N = -\mathbf{E}_{\theta^*}[\mathcal{K}_N'(\theta^*)]$ is invertible, then a Taylor expansion together with the constraint $\mathcal{K}_N(\hat{\theta}_N) = 0$ yields the asymptotic normality of the estimate:

$$\sqrt{N}\,(\hat{\theta}_N - \theta^*) \approx \mathcal{J}_N^{-1}\,\sqrt{N}\,\mathcal{K}_N(\theta^*)$$
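
A hedged numerical check of this statement, for the mean example above (illustrative only, not from the report): with $K(\theta, Z) = Z - \theta$ we have $\mathcal{J}_N = 1$ and $\Sigma = \mathrm{Var}(Z)$, so $\sqrt{N}\,(\hat{\theta}_N - \theta^*)$ should be approximately $\mathcal{N}(0, \Sigma)$.

```python
# Monte Carlo illustration of asymptotic normality for the mean example.
# Parameter values (N, reps, sigma) are arbitrary choices for the demo.
import math
import random

random.seed(0)
theta_star, sigma = 0.0, 1.0
N, reps = 500, 2000

scaled_errors = []
for _ in range(reps):
    sample = [random.gauss(theta_star, sigma) for _ in range(N)]
    theta_hat = sum(sample) / N          # root of K_N for K(theta, Z) = Z - theta
    scaled_errors.append(math.sqrt(N) * (theta_hat - theta_star))

# The empirical variance of the scaled errors should be close to Sigma = sigma^2.
var = sum(e * e for e in scaled_errors) / reps
print(var)
```

The empirical variance of the scaled errors comes out close to $\Sigma = 1$, as the asymptotic result predicts.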

In many applications, such an approach must be improved in the following directions:

• Recursive estimation: the ability to compute ${\stackrel{^}{\theta }}_{N+1}$ simply from ${\stackrel{^}{\theta }}_{N}$;

• Adaptive estimation: the ability to track the true parameter ${\theta }^{*}$ when it is time-varying.
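
Both extensions can be sketched on the running mean example (a standard stochastic-approximation illustration; the gains and update forms are textbook choices, not taken from the report):

```python
# Recursive vs. adaptive updates for K(theta, Z) = Z - theta.
# The decreasing gain 1/(k+1) reproduces the batch estimate exactly;
# a constant gain forgets old data and can track a time-varying theta^*.

def recursive_mean(observations):
    """Recursive estimation: theta <- theta + gamma_k * K(theta, Z_{k+1}),
    with gamma_k = 1/(k+1). Each step uses only the previous estimate."""
    theta = 0.0
    for k, z in enumerate(observations):
        theta += (z - theta) / (k + 1)
    return theta

def adaptive_mean(observations, gain=0.05):
    """Adaptive estimation: constant gain, so the estimate keeps moving
    and tracks slow drifts of the true parameter."""
    theta = 0.0
    for z in observations:
        theta += gain * (z - theta)
    return theta

data = [1.0, 2.0, 3.0, 4.0]
print(recursive_mean(data))  # equals the batch sample mean, 2.5
```

The recursive update with gain $1/(k+1)$ recovers the batch estimate $\hat{\theta}_N$ without reprocessing past data, while the constant-gain variant trades asymptotic accuracy for tracking ability.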