The Apics Team has been a Project Team since January 2005.
The Team develops constructive methods for the modeling, identification, and control of dynamical systems.
Meromorphic approximation in the complex domain, with applications to frequency-domain identification and the design of transfer functions, as well as singularity detection for the 2D Laplace operator. Development of software for filter identification and for the synthesis of microwave devices.
Inverse potential problems in 3D and analysis of harmonic fields with applications to source detection and electroencephalography.
Control and structure analysis of nonlinear systems: continuous stabilization, linearization, and near optimal control with applications to orbit transfer of satellites.
Industrial collaborations with Alcatel Alenia Space (centre de Toulouse), Temex (Sophia-Antipolis), CNES, IRCOM.
Exchanges with UST (Villeneuve d'Ascq), CMI-Université de Provence (Marseille), CWI (The Netherlands), CNR (Italy), SISSA (Italy), the Universities of Illinois (Urbana-Champaign), of California at San Diego and Santa Barbara (USA), of Minnesota at Minneapolis (USA), Vanderbilt University (USA), of Padova (Italy), of Beer-Sheva (Israel), of Leeds (GB), of Maastricht and Amsterdam (The Netherlands), TU Wien (Austria), TFH Berlin (Germany), of Kingston (Canada), of Szeged (Hungary), CINVESTAV (Mexico), ENIT (Tunis), VUB (Belgium).
The project is involved in a NATO Collaborative Linkage Grant (with Vanderbilt University and ENIT-LAMSIN), in an EMS21-RTG NSF program (with Vanderbilt University), in the ACI ``ObsCerv'' (with the Teams Caiman and Odyssée from INRIA Sophia Antipolis, among others), in a STIC convention between INRIA and Tunisian universities, in an EPSRC Grant with Leeds University (UK), in the ERCIM ``Working Group Control and Systems Theory'', in the ERNSI and TMR-NCN European research networks, and in a Marie-Curie EIF European program.
Let us first introduce the subject of Identification in some generality.
Modeling is the process of abstracting the behaviour of a phenomenon into mathematical equations. It typically serves two purposes: the first is to describe the phenomenon with minimal complexity for some specific purpose, the second is to predict its outcome. It is used in most applied sciences, be it for design, control or prediction. However, it is seldom considered as an issue per se, and today it is usually embedded in some global optimization loop.
As a general rule, the user devises the model to fit a parameterized form that reflects his own prejudice, his knowledge of the underlying physical system, and the algorithmic effort he is prepared to invest. Looking for such a trade-off usually raises the question of approximating the experimental data by the prediction of the model when the latter is subject to external excitations assumed to be the cause of the phenomenon under study. The ability to solve this approximation problem, which is often nontrivial and ill-posed, conditions the practical usefulness of a given method.
It is when assessing the predictive power of a model that one is led to postulate the existence of a true functional correspondence between data and observations, thereby entering the field of identification proper. The predictive power of a model can be expressed in various manners, all of which attempt to measure the difference between the true model and the observations. The necessity of taking into account the difference between the observed behavior and the computed behavior naturally induces the notion of noise as a corrupting factor of the identification process. This noise is incorporated into the model, and can be handled in a deterministic mode, where the quality of an identification algorithm is its robustness to small errors; this notion is that of well-posedness in numerical analysis, or of stability of motion in mechanics. The noise, however, is often considered to be random, and then the true model is estimated by averaging the data. This notion allows for approximate but otherwise reasonably simple descriptions of complex systems whose mechanisms are not well known but plausibly antagonistic. Note that, in any case, some assumptions on the noise are required in order to justify the approach (it has to be small in the deterministic case, and must satisfy some independence and ergodicity properties in the stochastic case). These assumptions can hardly be checked in practice, so that the satisfaction of the end-user is the final criterion.
Hypothesizing an exact model also results in the possibility of choosing the data in a manner suited for identifying a specific phenomenon. This often interacts in a complex manner with the local character of the model with respect to the data (for instance, a linear model is only valid in a neighborhood of a point).
Let us turn to the activity of the team proper concerning identification. Although the subject, on the academic level, has been the realm of the stochastic paradigm for more than twenty-five years, it is in a deterministic approach to the identification of linear dynamical systems (i.e., 1D convolution processes), based on approximation in the complex domain, that the Team made perhaps its most original contributions. Naturally, the deep links stressed by the spectral theorem between time and frequency domains induce well-known parallels between function theory and probability, and the work of the Apics Team can often be read in both settings.
The data are considered without postulating an exact model: we simply look for a convenient approximation to the data in a range of frequencies representing the working conditions of the underlying system. A prototypical example that illustrates our approach is the harmonic identification of dynamical systems, widely used in engineering practice, where the data are the responses of the system to periodic excitations in its bandwidth. We look for a stable linear model that correctly describes the behavior in this bandwidth, although the model may be inaccurate at high frequencies (which can seldom be measured). In most cases, we also want this model to be rational of suitable degree, either because this is imposed by the physical significance of the parameters or because complexity must remain reasonably low to allow the efficient use of the model for control, estimation or simulation. Other structural constraints, arising from the physics of the phenomenon to be modeled, are often imposed on the model as well. Note that, in this approach, no statistics are used for the errors, which can originate from corrupted measurements or from the limited validity of the linear hypothesis.
We distinguish between an identification step (called nonparametric in a certain terminology), associated with an infinite-dimensional model, and an approximation step in which the order is reduced under specific constraints on the considered system. The first step typically consists, mathematically speaking, in reconstructing a function, analytic in the right half-plane, knowing its pointwise values on a portion of the imaginary axis. In other terms, the problem is to make the principle of analytic continuation effective on the boundary of the analyticity domain. This is a classical ill-posed issue (the inverse Cauchy problem for the Laplace equation) that we embed into a family of well-posed extremal problems, which may be viewed as a Tikhonov-like regularization scheme related to the spectral theory of analytic operators. This first step could in fact be carried out in higher dimensions, with analytic functions being replaced by harmonic fields. The second step is typically a rational or meromorphic approximation procedure (although other approximating families may be considered as well) in some class of analytic functions in a simply connected domain, say the right half-plane in the case of harmonic identification. To make the best possible use of the allowable number of parameters, or to privilege some specific physical parameters of the system, it is generally important, in the second step, to compute optimal or nearly optimal approximants. Rational approximation in the complex plane is a classical and difficult problem, for which only few effective methods exist. In relation to system theory, two main difficulties arise: the necessity of controlling the poles of the approximants (to ensure the stability of the model), and the need to handle matrix-valued functions in the case where the system has several inputs and outputs.
Moreover, in connection with inverse problems, the behaviour of the poles of best approximants to certain functions constructed from the observations can be viewed as an estimator of singularities to be detected, and therefore receives a great deal of attention within the team.
Rational approximation in the L^p sense to a transfer function on the imaginary axis (i.e., the boundary of the right half-plane) acquires a particular significance in this context for p = 2 and p = ∞. If p = 2, it corresponds to parametric identification of minimum variance when the system is fed with a white noise input (the case of a colored noise corresponds to a weighted approximation), and it also corresponds to the minimization of the error in operator norm, from L^2 inputs to L^∞ outputs, in the time domain. If p = ∞, the approximation consists in minimizing the power transfer L^2 → L^2 of the error (both in the time and frequency domains, since the Fourier transform is an isometry). These problems contribute to a generalization (both rational and matrix-valued) of the Szegő theory of orthogonal polynomials, which seems the most natural framework for setting out many optimization problems related to linear system identification. Concerning this second step, it is worth pointing out that the analogs of rational functions in higher dimensions are the gradients of Newtonian potentials of discrete measures. Very little is known at present on the approximation-theoretic properties of such objects, and a recent endeavour of the project is to study them in the prototypical, though somewhat particular, case of a spherical geometry.
We shall explain in more detail the above steps in the subparagraphs to come. For convenience, we shall approach them on the circle rather than the line, which is the framework for discrete-time rather than continuous-time systems. The two frameworks are mathematically equivalent via a Möbius transform.
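To illustrate this equivalence concretely, the sketch below (our own toy example; the map z = (s − 1)/(s + 1) is one standard choice of Möbius transform) carries a continuous-time transfer function, analytic in the right half-plane, to a discrete-time one, analytic in the unit disk:

```python
import numpy as np

def cayley(s):
    """Moebius map sending the right half-plane Re s > 0 onto the unit disk |z| < 1."""
    return (s - 1) / (s + 1)

def cayley_inv(z):
    """Inverse map, sending the unit disk back onto the right half-plane."""
    return (1 + z) / (1 - z)

def to_disk(H):
    """Carry a transfer function analytic in the half-plane to one analytic in the disk."""
    return lambda z: H(cayley_inv(z))

# Example: a stable continuous-time transfer function (our own choice).
H = lambda s: 1.0 / (s + 1.0)
G = to_disk(H)

# Points of the right half-plane land inside the unit disk...
s = 0.3 + 2.0j
z = cayley(s)
print(abs(z) < 1)               # True
# ...and the two representations agree through the change of variable.
print(np.isclose(G(z), H(s)))   # True
```

The same change of variable transports the Hardy spaces of the half-plane to those of the disk, which is why the two settings can be used interchangeably in what follows.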
The title refers to the construction of a convolution model of infinite dimension from frequency data in some bandwidth, together with some reference gauge outside that bandwidth. The class of models consists of stable transfer functions (i.e., functions analytic in the domain of stability, be it the half-plane, the disk, etc.), and possibly also of transfer functions with finitely many poles in the domain of stability, i.e., convolution operators corresponding to linear differential or difference equations with finitely many unstable modes. This issue arises in particular in the design and identification of linear dynamical systems, and in some inverse problems for the Laplacian in dimension two.
Since the question under study may occur on the boundary of planar domains of various shapes when it comes to inverse problems, it is common practice to normalize this boundary once and for all, and to apply in each particular case a conformal transformation to recover the normalized situation. The normalized contour chosen here is the unit circle. We denote by D the unit disk, by H^p the Hardy space of exponent p (i.e., the closure of the polynomials in the L^p norm on the circle if 1 ≤ p < ∞, and the space of bounded holomorphic functions if p = ∞), by R_N the set of all rational functions having at most N poles in D, and by C(X) the set of continuous functions on a space X. We are looking for a function in H^p + R_N taking, on an arc K of the unit circle, values that are close to some experimental data, and satisfying some gauge constraints on the complementary part T \ K of the unit circle T, so that a prototypical problem is:
(P) Let p ≥ 1, N ≥ 0, K be an arc of the unit circle T, f ∈ L^p(K), and M > 0; find a function g ∈ H^p + R_N whose norm in L^p(T \ K) is at most M, and such that g − f is of minimal norm in L^p(K) under this constraint.
In order to impose pointwise constraints in the frequency domain (for instance if the considered models are transfer functions of lossless systems, see section ), one may wish to express the gauge constraint on T \ K in a more subtle manner, depending on the frequency:
Let p ≥ 1, N ≥ 0, K be an arc of the unit circle T, f ∈ L^p(K), and M a positive function on T \ K; find a function g ∈ H^p + R_N such that |g| ≤ M a.e. on T \ K, and such that g − f is of minimal norm in L^p(K) under this constraint.
Problem (P) is an extension, to the meromorphic case and to incomplete data, of classical analytic extremal problems (obtained by setting K = T and N = 0) that generically go under the name of bounded extremal problems. These have been introduced and intensively studied by the Team, distinguishing the case p = ∞ from the cases 1 ≤ p < ∞, among which the case p = 2 presents an unexpected link with the Carleman reconstruction formulas.
Deeply linked with Problem (P), and meaningful for assessing the validity of the linear approximation in the considered passband, is the following completion problem: Let p ≥ 1, N ≥ 0, K an arc of the unit circle T, f ∈ L^p(K), and M > 0; find a function h of norm at most M in L^p(T \ K), such that the distance in L^p(T) from the concatenated function f ∨ h (equal to f on K and to h on T \ K) to H^p + R_N is minimal under this constraint.
A version of this problem where the constraint depends on the frequency is: Let p ≥ 1, N ≥ 0, K an arc of the unit circle T, f ∈ L^p(K), and M a positive function on T \ K; find a function h such that |h| ≤ M a.e. on T \ K, and such that the distance in L^p(T) from the concatenated function f ∨ h to H^p + R_N is minimal under this constraint.
Let us mention that the completion problems above reduce to Problem (P), which in turn reduces, although implicitly, to an extremal problem without constraint (i.e., a problem of type (P) where K = T), denoted conventionally by (P_0). In the case where p = ∞, the completion problems can be viewed as special cases of (P) and of its pointwise variant respectively, but if p < ∞ the situation is different. One can also choose different exponents p on K and on T \ K (the problem is then said to be of mixed type). This comes up naturally when identifying lossless systems, for which the constraint |h| ≤ 1 must hold at each point, while the data, whose signal-to-noise ratio is small at the endpoints of the bandwidth, are better approximated in the L^2 sense. Mixed problems are currently under study. It is perhaps nonintuitive that these problems have, in general, no solution when no constraint is provided on T \ K (that is, if M = +∞). For instance, considering the completion problem, a function given by its trace on a subset K of positive measure on the unit circle can always be extended in such a manner that it is arbitrarily close, on K, to a function analytic in the disk; however, it goes to infinity in norm on T \ K as the approximation error goes to zero, unless we are in the ideal case where the initial data are exactly the trace on K of an analytic function. This phenomenon illustrates the ill-posedness of analytic continuation from the boundary of the analyticity domain, which is germane to the well-known instability of the Cauchy problem for the Laplace equation.
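This blow-up is easy to observe numerically. In the toy sketch below (the setup is ours: non-analytic data f(e^{iθ}) = e^{−iθ} given on the upper semicircle K, least-squares fitted by polynomials, viewed as truncated analytic functions), the approximation error on K decreases with the degree while the norm on the complementary arc grows:

```python
import numpy as np

# Non-analytic data: f(z) = conj(z) = 1/z on the unit circle is not the trace of
# any function analytic in the disk.
theta_K = np.linspace(0.05, np.pi - 0.05, 400)              # the arc K (upper semicircle)
theta_c = np.linspace(np.pi + 0.05, 2 * np.pi - 0.05, 400)  # the complementary arc
zK, zc = np.exp(1j * theta_K), np.exp(1j * theta_c)
f = np.conj(zK)

def fit(deg):
    """Least-squares fit of f on K by a polynomial of given degree (a truncated
    analytic function); returns the L2 error on K and the sup of the fit off K."""
    VK = np.vander(zK, deg + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(VK, f, rcond=None)
    Vc = np.vander(zc, deg + 1, increasing=True)
    errK = np.sqrt(np.mean(np.abs(VK @ coef - f) ** 2))
    sup_off = np.max(np.abs(Vc @ coef))
    return errK, sup_off

e5, n5 = fit(5)
e15, n15 = fit(15)
print(e15 < e5, n15 > n5)   # better fit on K, but blowing up off K
```

This is precisely the trade-off that the gauge constraint M is designed to control.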
The solution to (P_0) is classical if p = ∞: it is given by the Adamjan–Arov–Krein (in short: AAK) theory. If p = 2 and N = 0, then (P_0) reduces to an orthogonal projection. AAK theory plays an important role in showing the existence and uniqueness of the solution to the completion problem when p = ∞, under a smoothness assumption on the concatenated function, and in the computation of this solution by iteratively solving a spectral problem relative to a family of Hankel operators whose symbols depend implicitly on the data. The robust convergence of this algorithm in separable Hölder–Zygmund classes has been established.
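The spectral nature of AAK theory lends itself to a quick numerical illustration (the rational symbol f(z) = 1/(z − a) and the truncation order are our own choices): the distance in the uniform norm from f to functions with at most n poles in the disk equals the n-th singular value, counting from zero, of the Hankel operator with symbol f.

```python
import numpy as np
from scipy.linalg import hankel, svdvals

a = 0.5                       # pole of f(z) = 1/(z - a), inside the disk
M = 60                        # truncation order of the Hankel matrix

# Negative Fourier coefficients of f: f(z) = sum_{k >= 0} a^k z^{-k-1},
# so the (truncated) Hankel matrix has entries H[j, k] = a^(j + k).
c = a ** np.arange(2 * M - 1)
H = hankel(c[:M], c[M - 1:])

s = svdvals(H)
# Nehari's theorem: dist_{L^inf}(f, H^inf) = sigma_0 = 1 / (1 - a^2).
print(np.isclose(s[0], 1 / (1 - a ** 2), atol=1e-8))   # True
# f itself has one pole in the disk, so the error with n = 1 pole vanishes.
print(s[1] < 1e-12)                                    # True
```

For non-rational symbols the matrix is no longer of finite rank, and it is precisely the iterative spectral problems mentioned above that take over.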
In the Hilbertian case p = 2, again for N = 0, the solution of (P) is obtained by solving a spectral equation, this time for a Toeplitz operator, depending linearly on a parameter that plays the role of a Lagrange multiplier and makes the dependence of the solution on M implicit. The ill-posed character of the analytic continuation described above has the effect that, if the data are not exactly analytic, the approximation error on K tends to 0 if, and only if, the constraint M on T \ K goes to infinity. This phenomenon can be quantified in Sobolev or meromorphic classes of functions f, and asymptotic estimates of the behavior of M and of the error can be obtained, based on a constructive diagonalization scheme for Toeplitz operators due to Rosenblum and Rovnyak that makes the spectral theorem effective. These results indicate that the error decreases much faster, as M increases, if the data have a holomorphic extension to a neighborhood of the unit disk, which is conceptually interesting for discriminating between nearly analytic data and data that are not close to a linear stable model. From the constructive viewpoint, we face the problem of representing functions through expansions that are specifically adapted to the underlying geometry, for instance rational bases whose poles cluster at the endpoints of K. Research in this direction is in its infancy.
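A finite-dimensional sketch of this Toeplitz-based resolution for p = 2 and N = 0 (the discretization, the choice of K as the upper semicircle, and the data are ours; the actual problem is infinite-dimensional): the critical point equation (T_{χ_K} + λ T_{χ_{T\K}}) g = P_+(χ_K f) is solved while sweeping the Lagrange parameter λ until the constraint ‖g‖_{L^2(T\K)} = M is saturated.

```python
import numpy as np

n = 24                                           # truncation order of H^2
theta = np.linspace(1e-3, np.pi - 1e-3, 2000)    # quadrature nodes on K (upper semicircle)
dth = theta[1] - theta[0]
z = np.exp(1j * theta)
f = np.conj(z)                                   # data on K, not the trace of an H^2 function

def w_hat(m):
    """Fourier coefficients of the indicator of K = {e^{it}, 0 < t < pi}."""
    if m == 0:
        return 0.5
    return (1 - (-1) ** m) / (2j * np.pi * m)

A = np.array([[w_hat(j - k) for k in range(n)] for j in range(n)])  # Toeplitz T_{chi_K}
B = np.eye(n) - A                                                   # T_{chi_{T\K}}
# Right-hand side P_+(chi_K f): Fourier coefficients of f restricted to K (quadrature).
b = np.array([np.sum(f * np.exp(-1j * j * theta)) * dth for j in range(n)]) / (2 * np.pi)

def solve(lam):
    """Critical point of ||g - f||^2 on K plus lam times ||g||^2 off K, over truncated H^2."""
    g = np.linalg.solve(A + lam * B, b)
    off = np.sqrt(np.real(g.conj() @ B @ g))          # norm of g on T \ K
    G = np.polyval(g[::-1], z)
    err = np.sqrt(np.sum(np.abs(G - f) ** 2) * dth / (2 * np.pi))
    return off, err

M = 1.0
lo, hi = 1e-8, 1e4       # bracket for lam: the off-norm is decreasing in lam
for _ in range(80):
    lam = np.sqrt(lo * hi)
    off, err = solve(lam)
    if off > M:
        lo = lam
    else:
        hi = lam

print(round(off, 3), err < 0.707)   # constraint saturated; err beats the trivial g = 0
```

Sweeping λ trades approximation quality on K against the norm off K, which is exactly the regularizing role of M described above.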
The study of the pointwise-constrained version of (P) has recently been carried out in the case where p = 2 (with N = 0), which encompasses all mixed problems where the exponent on T \ K is greater than 2. It turns out that the solution exists and is unique, and that the constraint is saturated pointwise, that is, |g| = M a.e. on T \ K, unless f is the trace on K of an H^2 function satisfying the constraint; the latter fact is perhaps counterintuitive. Although nonsmooth, this infinite-dimensional convex problem has a critical point equation and solves a min–max equation where the multiplier is a function on T \ K. The solution can be expressed in terms of the multiplier through a Toeplitz spectral equation as well as through a Cauchy-type representation. More details on an algorithmic approach can be found in section .
Smoothness issues in Problem (P) and its variants are both delicate and important in practice. In fact, the solution to such problems is bound to be very irregular at the endpoints of K unless M is adjusted to f; sufficient conditions for smoothness are only emerging.
Let us also emphasize that (P) has many analogs, equally interesting, that occur in different contexts connected to conjugate functions. For instance, one may consider the following extremal problem, where the constraint on the approximant is expressed in terms of its real and imaginary parts while the criterion takes only its real part into account: Let p ≥ 1, K be an arc of the unit circle T, f ∈ L^p(K), and M > 0; find a function g ∈ H^p whose real and imaginary parts satisfy prescribed gauge constraints on T \ K, and such that the real part of g − f is of minimal norm in L^p(K) under this constraint.
This is a natural formulation for issues concerning the Dirichlet–Neumann problem for the Laplace operator (see sections and ), where data and physical prior information concern the real (or imaginary) parts of analytic functions.
For p = 2, the existence and uniqueness of a solution have been established, as well as a constructive procedure which, in addition to the Toeplitz operator that characterizes the solution of (P) in the case p = 2 and N = 0, also involves a Hankel operator (this extends earlier results).
In the non-Hilbertian case, where p ≠ 2 but still N = 0, the solution of (P) can be deduced from that of (P_0) in a manner analogous to the case p = 2, though the situation is a bit more tricky concerning duality, because one remains in a convex setup (infinite-dimensional, of course) for which local optimization methods can be applied.
Up to now, if p < ∞ and N > 0, no demonstrably convergent solution to Problem (P_0) is available. However, a coherent picture has emerged and rather efficient numerical schemes have been devised, although their convergence has only been established for prototypical classes of functions. The essential features of the approach are summarized below.
First of all, the case p = 2 and N > 0 of Problem (P_0), which is of particular importance, reduces to rational approximation as described in more detail in section . Here, the link with classical interpolation theory, orthogonal polynomials, and logarithmic potentials is strong and fruitful. Second, a general AAK theory in L^p has been proposed which is relatively complete for p ≥ 2. Although it does not have, for p ≠ ∞, the computational power of the classical theory, it has better continuity properties and stresses a continuous link between rational approximation in H^2 (see section ) and meromorphic approximation in the uniform norm, allowing one to use, in either context, the techniques available from the other. Hence, similarly to the case p = ∞, the best meromorphic approximation with at most n poles in the disk of a function f ∈ L^p(T) is obtained from the singular vectors of the Hankel operator of symbol f between the spaces H^s and H^2, with 1/s + 1/p = 1/2, the error being here again equal to the (n+1)-st singular number of the operator. This generalization has a strong topological flavour and relies on the critical point theory of Ljusternik–Schnirelman as well as on the particular geometry of the Blaschke products of given degree. A matrix-valued version has recently been obtained along the same lines. A noticeable feature common to all these problems is the following: the critical point equations express non-Hermitian orthogonality of the denominator (i.e., the polynomial whose zeroes are the poles of the approximant) against polynomials of lower degree, for a complex measure that depends, however, on this denominator (because the problem is nonlinear). This allows one to (i) extend the index theorem to the case 2 ≤ p ≤ ∞ and tackle the uniqueness problem, (ii) study asymptotic errors with classical techniques of potential theory, and (iii) characterize the asymptotic behavior of the poles of the approximants for functions with connected singularities, which are of particular interest for inverse problems (cf. section ).
In connection with the second and third items above, there are two types of asymptotics, namely weak and strong ones. Weak asymptotics begin to be reasonably understood for functions with branched singularities. Strong asymptotics for non-Hermitian orthogonality relations have only been obtained recently, in some particular cases, see section .
In light of these results, and despite the fact that many questions remain open, algorithmic progress is expected concerning (P_0) for N > 0 and p ≥ 2 in the forthcoming years. Subsequently, it is conceivable that the transition from (P_0) to (P) would follow the same lines as in the analytic case.
The case where 1 ≤ p < 2 remains largely open, especially from the constructive point of view because, even if the approximation error can still be interpreted in terms of singular values, the Hankel operator takes an abstract form which does not lead to a functional identification of its singular vectors. This is unfortunate, as this range of values of p is quite interesting: for instance, the L^1 criterion induces a natural operator norm in the frequency domain, which is interesting for damping perturbations. It is possible that some appropriate duality relates the case p < 2 to the case p > 2, but this has not been established yet.
A valuable endeavor is to extend to higher dimensions (in particular to 3D) parts of the above analysis, where harmonic fields replace analytic functions. On the ball or the half-space, it seems that many of the necessary ingredients are available thanks to the development of real Hardy space theory in harmonic analysis, with the notable exception of multiplicative techniques, which are unfortunately essential to define Hankel operators. Any progress on these multiplicative aspects would yield corresponding progress in harmonic identification and its use in elliptic inverse problems. Some recent research developments within the team aim in this direction, see section .
Rational approximation is the second step mentioned in section , and we first consider it in the scalar case, for complex-valued functions (as opposed to matrix-valued ones). The problem can be stated as: Let 1 ≤ p ≤ ∞, f ∈ H^p, and n an integer; find a rational function without poles in the unit disk, of degree at most n, that is as close as possible to f in H^p.
The most important values of p, as indicated in the introduction, are p = ∞ and p = 2. In the latter case, the orthogonality between the Hardy spaces of the disk and of the complement of the disk (the latter being restricted to functions that vanish at infinity, so as to exclude the constants) makes rational approximation equivalent to meromorphic approximation, i.e., we are back to Problem (P) of section with p = 2 and K = T. Although no demonstrably convergent algorithm is known for a single value of p, the former Miaou project (the predecessor of Apics) designed a steepest-descent algorithm for the case p = 2 whose convergence to a local minimum is guaranteed in theory, the first algorithm satisfying this property. Roughly speaking, it is a gradient algorithm, proceeding recursively with respect to the order n of the approximant, that uses the particular geometry of the problem to restrict the search to a compact region of the parameter space. This algorithm can generate the local minima if several exist, thus allowing one to discriminate between them. If there is no local maximum, a property which is satisfied when the degree is large enough, every local minimum can be obtained from an initial condition of lower order. It is not proved, however, that the absolute minimum can always be obtained using the strategy of the hyperion or RARL2 software (see section ), which consists in choosing the collection of initial points corresponding to critical points of lower degree; note that we do not know of a counterexample either, still assuming that there is no maximum, so there is room for a conjecture at this point.
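As a much-simplified analogue of such a descent (this is not the RARL2/hyperion parameterization; we merely fit one stable pole by nonlinear least squares, and the target and all names are ours), one can parameterize the pole through a bounded variable so that the search stays in the stability region:

```python
import numpy as np
from scipy.optimize import least_squares

# Samples of the target on the unit circle (L^2(T) approximated by a uniform grid).
theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
z = np.exp(1j * theta)
f = 0.7 / (1 - 0.4 * z)          # target: already of degree 1, analytic in the disk

def model(params, z):
    t, c = params
    a = np.tanh(t)               # |a| < 1 keeps the pole 1/a outside the closed disk
    return c / (1 - a * z)

def residuals(params):
    r = model(params, z) - f
    return np.concatenate([r.real, r.imag])   # real residual vector for least_squares

sol = least_squares(residuals, x0=[0.0, 1.0])
a_hat = np.tanh(sol.x[0])
err = np.sqrt(np.mean(np.abs(model(sol.x, z) - f) ** 2))
print(round(float(a_hat), 3), err < 1e-6)
```

The tanh reparameterization plays the role, in miniature, of the compact search region exploited by the actual algorithm; unlike the latter, this sketch offers no guarantee about which critical point is reached.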
It is only fair to say that the design of a numerically efficient algorithm whose convergence to the best approximant can be proved is the most important problem from a practical perspective. The algorithms developed by the team seem rather effective, although their global convergence has not been established.
A contrario, it is possible to consider an elimination algorithm when the function to approximate is rational, in order to find all critical points, since the problem is algebraic in this case. This method is surely convergent, since it is exhaustive, but one has to compute the roots of an algebraic system in n variables of degree N, where N is the degree of the function to approximate; there can be as many as N^n solutions, among which it is necessary to distinguish those that are coefficients of polynomials having all their roots in the unit disk, the latter indeed being the only ones that generate critical points. Despite the increase in computational power, such a procedure is still unfeasible, given that realistic values for n and N are around ten and a couple of hundred respectively (see section ).
To prove or disprove the convergence of the above-described algorithms, and to check them against practical situations, the team has undertaken a long-haul study of the number and nature of the critical points, depending on the class of functions to be approximated, in which tools from differential topology and operator theory team up with classical approximation theory. The study of transfer functions of relaxation systems (i.e., Markov functions) was initiated and more or less completed, as were the case of e^z (the prototype of an entire function with convex Taylor coefficients) and the case of meromorphic functions (à la Montessus de Ballore). From these studies, a general principle has emerged that links the nature of the critical points in rational approximation to the regularity of the decrease of the interpolation errors with the degree, and a methodology has been developed to analyze the uniqueness issue in the case where the function to be approximated is a Cauchy integral on an open arc (roughly speaking, these functions cover the case of singularities of dimension one that are sufficiently regular, see section ). This methodology relies on the localization of the singularities via the analysis of families of non-Hermitian orthogonal polynomials, in order to obtain strong estimates of the error that allow one to evaluate its relative decay. Note in this context an analogue of the Gonchar conjecture, namely that uniqueness ought to hold at least for infinitely many values of the degree, corresponding to a subsequence generating the liminf of the errors. This conjecture actually suggests that uniqueness should be linked to the ratio of the to-be-approximated function and its derivative on the circle. When this ratio is pointwise greater than 1 (i.e., the logarithmic variation is small), it has recently been proved, using Morse theory and the Schwarz lemma, that uniqueness holds in degree 1. The generalization to higher degree is an exciting open question.
Another uniqueness criterion has been obtained for rational functions, inspired by the spectral techniques of AAK theory. This result is interesting in that it is not asymptotic and does not require pointwise estimates of the error; however, it assumes a rapid decrease of the errors, and its current formulation calls for further investigation.
The introduction of a weight in the optimization criterion is an interesting issue, induced by the necessity of balancing the information one has at the various frequencies. For instance, in the stochastic theory, minimum-variance identification leads to weighting the error by the inverse of the spectral density of the noise. It is worth noting that most approaches to frequency identification in engineering practice consist in posing a least-squares minimization problem and in weighting the terms so as to obtain a suitable result using a generic optimization toolbox. In this way we are led to consider minimizing a criterion of the form ‖f − p_m/q_n‖ in L^2(μ), where μ is a positive finite measure on T, p_m is a polynomial of degree at most m, and q_n a monic polynomial of degree at most n. Such a problem is nicely posed when μ is absolutely continuous with respect to the Lebesgue measure with invertible derivative. For instance, when this derivative is the squared modulus of an invertible analytic function, introducing orthogonal polynomials instead of the Fourier basis makes the situation similar to the non-weighted case, at least if m ≥ n − 1. The corresponding algorithm was implemented in the hyperion software. The analysis of the critical point equations in the weighted case gives various counterexamples to unimodality in maximum likelihood identification.
It is worth pointing out that meromorphic approximation is better behaved (i.e., essentially invariant) with respect to the introduction of a weight, see . Another kind of rational approximation, which arises in several design problems where only constraints on the modulus matter, consists in approximating the modulus of a function by the modulus of a rational function. This problem is strongly related to the previous ones; in fact, it can be reduced to a convergent series of standard rational approximation problems. Note also that if p = ∞, and if the moduli are squared, one can use the Féjér–Riesz characterization of the positive trigonometric polynomials on the unit circle as squared moduli of algebraic polynomials to approach this issue as a convex problem in infinite dimension. This constitutes another fundamental direction for dealing with rational approximation in modulus, which arises naturally in filter design problems.
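The Féjér–Riesz step can be carried out numerically by root splitting (a standard construction; the example polynomial is our own): a strictly positive trigonometric polynomial t(θ) = Σ_{|k|≤n} c_k e^{ikθ} factors as t(θ) = |γ q(e^{iθ})|² with q having all its roots in the unit disk.

```python
import numpy as np

# A polynomial p with all roots in the open unit disk (our example), so that
# t(theta) = |p(e^{i theta})|^2 is a strictly positive trigonometric polynomial.
p = np.poly([0.3, -0.2 + 0.4j, -0.2 - 0.4j])      # descending coefficients, monic
n = len(p) - 1

# Ascending coefficients of z^n * t(z) = p(z) * z^n * conj(p)(1/z).
pa = p[::-1]
L = np.convolve(pa, np.conj(pa)[::-1])

# Its 2n roots split into reciprocal-conjugate pairs; keep those inside the disk.
r = np.roots(L[::-1])
inside = r[np.abs(r) < 1]
q = np.poly(inside)                                # monic spectral factor candidate

# Fix the positive scale factor by matching t at z = 1.
zg = np.exp(1j * np.linspace(0, 2 * np.pi, 256, endpoint=False))
t = np.abs(np.polyval(p, zg)) ** 2
gamma = np.sqrt(t[0]) / np.abs(np.polyval(q, 1.0))

recovered = gamma ** 2 * np.abs(np.polyval(q, zg)) ** 2
print(np.max(np.abs(recovered - t)) < 1e-8)        # True: t = |gamma * q|^2 on the circle
```

The spectral factor is unique up to a unimodular constant, which is why the root-splitting recipe recovers the original polynomial here.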
We refer here to the behavior of the poles of best meromorphic approximants, in the L^p sense on a closed curve, to functions defined as Cauchy integrals of complex measures whose support lies inside the curve. If one normalizes the contour to be the unit circle (which is no restriction in principle, thanks to conformal mapping, but of course raises difficult questions from the constructive point of view), we find ourselves again in the framework of sections and , and the invariance of the problem under such transformations has been established. The research so far has focused on functions that are analytic on and outside the contour and have singularities on an open arc inside the contour.
Generally speaking, the behavior of the poles is particularly important in meromorphic approximation, both to obtain error rates as the degree grows large and to tackle more constructive issues like uniqueness. However, the original motivation of APICS is to consider this issue in connection with the approximation of the solution to a Dirichlet-Neumann problem, so as to extract information on the singularities. This way of tackling a free boundary problem, classical in every respect but still quite open, illustrates the team's approach to certain inverse problems, and gives rise to an active direction of research at the crossroads of function theory, potential theory and orthogonal polynomials.
As a general rule, the critical point equations for these problems express that the polynomial whose roots are the poles of the approximant is a non-Hermitian orthogonal polynomial with respect to some complex measure (which depends on the polynomial itself and therefore varies with the degree) on the singular set of the function to be approximated. New results were obtained in recent years concerning the location of such zeroes. The approach to the inverse problem for the Laplacian that we outline in this section appears to be attractive when the singularities are one-dimensional, for instance in the case of a cracked domain (see section ). When the crack is sufficiently smooth, the approach in question is in fact equivalent to the meromorphic approximation of a function with two branch points, and we were able to prove that the poles of the approximants accumulate in a neighborhood of the hyperbolic geodesic arc that links the endpoints of the crack . Moreover, the asymptotic density of the poles turns out to be the equilibrium distribution of the Green potential on the geodesic arc, and it charges the endpoints, which are de facto well localized if one is able to compute sufficiently many zeros (this is where the method could fail). It is interesting to note that these results also apply, and even more easily, to the detection of monopolar and dipolar sources, a case where poles as well as logarithmic singularities exist. The case of more general cracks (for instance formed by a finite union of analytic arcs) requires the analysis of the situation where the number of branch points is finite but arbitrary. It is conjectured that the poles tend to the contour that links the end points of these analytic arcs while minimizing the capacity of the condenser , where T is the exterior boundary of the domain (see section ). The conjecture is confirmed numerically and has actually been proved (paper in preparation) in the case where the locus of minimal capacity is connected; this covers a large number of interesting cases, including general polynomial cracks and cracks consisting of sufficiently smooth arcs. This breakthrough, we hope, will constitute substantial progress towards a proof of the general case. It would of course be very interesting to know what happens when the crack is ``absolutely non-analytic'', a limiting case that can be interpreted as that of an infinite number of branch points, and about which very little is known, although there are grounds to conjecture that at least the endpoints are still accumulation points of the poles. This is an outstanding open question for applications to inverse problems . Concerning the problem of a general singularity, which may be two-dimensional, one can formulate the following conjecture: if f is analytic outside and on the exterior boundary of a domain D, and if K is the minimal compact set included in D that minimizes the capacity of the condenser (T, K) under the constraint that f be analytic and single-valued outside K (it exists, it is unique, and we assume it is of positive capacity in order to avoid degenerate cases), then every limit point (in the weak-star sense) of the sequence _{n} of probability measures having equal mass at each pole of an optimal meromorphic approximant (with at most n poles) of f in L^{p}(T) has its support in K and sweeps out to the boundary of K as the equilibrium measure on K of the condenser (T, K). Yet this conjecture is far from being solved.
We conclude by mentioning that the problem of approximating, by a rational or meromorphic function in the L^{p} sense on the boundary of a domain, the Cauchy transform of a real measure localized inside the domain, can be viewed as an optimal discretization problem for a logarithmic potential, according to a criterion involving a Sobolev norm. This formulation can be generalized to higher dimensions, even though the computational power of complex analysis is then no longer available, and this makes for a long-term research project with a wide range of applications. It is interesting to mention that the case of sources in dimension three in a spherical geometry can be attacked with the above 2D techniques as applied to planar sections (see section ).
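The discretization point of view can be made concrete in a few lines: the Cauchy transform of a discrete measure supported at finitely many points (the would-be poles) approximates that of a continuous one. The example below is our own illustration, with a uniform measure on a segment chosen purely for simplicity, since its Cauchy transform is known in closed form.

```python
import numpy as np

def cauchy_discrete(z, points, weights):
    """Cauchy transform of the discrete measure sum_k w_k delta_{x_k},
    C(z) = sum_k w_k / (z - x_k), evaluated off the support."""
    return (weights[None, :] / (z[:, None] - points[None, :])).sum(axis=1)
```

For the uniform probability measure on [-1, 1], the exact transform is (1/2) log((z+1)/(z-1)), so a midpoint-rule discretization should match it closely away from the segment.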
Matrix-valued approximation is necessary for handling systems with several inputs and outputs, and generates substantial additional difficulties with respect to scalar approximation, theoretically as well as algorithmically. In the matrix case, the McMillan degree (i.e., the degree of a minimal realization in the system-theoretic sense) generalizes the degree. Hence the problem reads:
Let 1 ≤ p ≤ ∞ and n an integer; find a rational matrix of size m × l without poles in the unit disk and of McMillan degree at most n nearest possible to in (H^{p})^{m×l}. To fix ideas, we may define the L^{p} norm of a matrix as the p-th root of the sum of the p-th powers of the norms of its entries.
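Taking the definition above literally, the norm of a sampled matrix-valued function on the circle can be computed as follows (an illustrative sketch; the sampling convention and array shapes are our own assumptions):

```python
import numpy as np

def lp_norm_matrix(F, p):
    """Norm of a sampled matrix-valued function on the circle, following the
    definition above: F has shape (n_grid, m, l); the L^p norm of each entry
    is computed on the circle, and the entry norms are combined as the p-th
    root of the sum of their p-th powers."""
    entry_norms_p = np.mean(np.abs(F)**p, axis=0)   # ||F_ij||_p ** p
    return np.sum(entry_norms_p) ** (1.0/p)
```

For p = 2 and the constant identity matrix of size 2, this gives sqrt(2), the Frobenius-type value one expects.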
The main interest of the Apics Team lies in the case p = 2. Then, the approximation algorithm designed in the scalar case generalizes to the matrix-valued situation . The first difficulty here consists in the parametrization of transfer matrices of given McMillan degree n, and the inner matrices (i.e., matrix-valued functions that are analytic in the unit disk and unitary on the circle) of degree n enter the picture in an essential manner: they play the role of the denominator in a fractional representation of transfer matrices using the so-called Douglas-Shapiro-Shields factorization. The set of inner matrices of given degree has the structure of a smooth manifold that allows one to use differential tools as in the scalar case. In practice, one has to produce an atlas of charts (parameterizations valid in a neighborhood of a point) and to handle changes of chart in the course of the algorithm. The tangential Schur algorithm provides us with such a parameterization and allowed the team to develop two rational approximation codes. The first one is integrated in the hyperion software and deals with transfer matrices, while the other, developed under the Matlab interpreter, goes by the name of RARL2 and works with realizations. Both have been tested on measurements provided by the CNES (branch of Toulouse), IRCOM, and Alcatel Space, and they give high quality results in all cases encountered so far. These codes are now in daily use at Alcatel Space and IRCOM, coupled with simulation software like EMXD, to design physical coupling parameters for the synthesis of hyperfrequency filters made of resonant cavities, see .
In the above application, obtaining physical couplings requires the computation of realizations, also called internal representation in system theory. Among the parameterizations obtained via the Schur algorithm, some have a particular interest from this viewpoint . They lead to a simple and robust computation of balanced realizations and form the basis of the RARL2 algorithm.
Problems relative to multiple local minima naturally arise in the matrix-valued case as well, but deriving criteria that guarantee uniqueness is much more difficult than in the scalar case. The case of rational functions of the right degree already uses rather heavy machinery , and that of matrix-valued Markov functions, which are the first example beyond rational functions, has made progress only recently (see section ).
In practice, a method similar to the one used in the scalar case has been developed to generate local minima of a given order from those at lower orders. In short, one sets out a matrix of degree n by perturbation of a matrix of degree n-1, where the drop in degree is due to a pole-zero cancellation. There is an important difference between polynomial representations of transfer matrices and their realizations: the former lead to an embedding in an ambient space of rational matrices that allows a differentiable extension of the criterion on a neighborhood of the initial manifold, but not the latter (the boundary is strongly singular). Generating initial conditions in a recursive manner is more delicate in terms of realizations, and some basic questions on the boundary behavior of the gradient vector field are still open.
Let us stress that the algorithms mentioned above are the first to handle rational approximation in the matrix case in a way that converges to local minima while meeting stability constraints on the approximant.
The asymptotic study of likelihood estimators is a natural companion to the research on rational approximation described above. The context is ultra-classical. Given a discrete process y(t) with values in R^{p}, and another process u with values in R^{m}, we seek an explanation of y in terms of u as a finite order linear model:
where e is a white noise with p components, uncorrelated to u, assumed to represent the uncertainty in y(t), and where the transfer matrix [L H] that links (e u)^{t} to y is rational and stable of McMillan degree n, the matrix L being also of stable inverse (among all noises with the same covariance and given innovation, we choose those whose spectral factor has minimum phase). The number n is, by definition, the order of the model. If we only suppose that [H L] belongs to the Hardy space H^{2} and that L is outer (this means stably invertible in some sense), such a representation is in fact general for regular (i.e., purely nondeterministic) stationary processes. Identification in this context then appears as a rational approximation problem for which the classical theory makes a trade-off between two antagonistic factors, namely the bias error on the one hand, which decreases when n increases, and the variance error on the other hand, which increases with n since the dispersion is amplified with the number of parameters. This is the stochastic version of the complexity versus precision alternative which is all-pervasive in modeling.
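The bias/variance trade-off in the choice of the order n is classically arbitrated by penalized likelihood criteria. The sketch below is our illustration, not the team's software: it fits autoregressive models of increasing order by least squares and selects the order with the BIC penalty, a standard consistent criterion.

```python
import numpy as np

def ar_order_bic(y, max_order):
    """Least-squares fit of AR(n) models for n = 1..max_order; returns the
    order minimizing the BIC criterion N*log(sigma2) + n*log(N), balancing
    residual variance (bias) against the number of parameters (variance)."""
    best_n, best_crit = 1, np.inf
    for n in range(1, max_order + 1):
        # regressor matrix: column k holds y[t-1-k] for t = n..len(y)-1
        X = np.column_stack([y[n-1-k : len(y)-1-k] for k in range(n)])
        target = y[n:]
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        sigma2 = np.mean((target - X @ coef)**2)
        crit = len(target)*np.log(sigma2) + n*np.log(len(target))
        if crit < best_crit:
            best_n, best_crit = n, crit
    return best_n
```

On data generated by a genuine AR(2) recursion, the selected order should match the true one (up to a slight chance of over-selection on finite samples).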
If one introduces now as a new variable the rational matrix R defined by
and if T stands for the first block-row, normalizing the variance of the noise to be the identity matrix, the maximum likelihood estimator is asymptotically equivalent, as the sample size increases, to the minimization of
where is the spectral measure of the process (y u)^{t} (which is positive and matrix-valued) and where Tr indicates the trace. If we further restrict the class of models by assuming that we deal with white noise, that is, if L = I_{m}, one obtains a weighted rational approximation problem corresponding to the minimization of the variance of the output error. If moreover u itself is (observed) white noise, the situation becomes that of .
The consistency problem arises from the fact that the measure is not available, so that one has to estimate ( ) from time averages of the observed samples, assuming that the process is ergodic. The question is then to decide whether the argument of the minimum of the estimated functional tends to that of ( ) as the sample size increases, and what the speed of convergence is. The most significant result here is perhaps the one asserting that if there exists a functional model linking u to y (i.e., u is indeed the cause of the phenomenon), and without assuming compactness of the class of models , then consistency holds under weak ergodicity conditions and persistent excitation assumptions. An analogue of the law of large numbers indicates, in this context, that convergence is of the order of 1/√N, where N is the sample size.
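The convergence rate in question (of the order of 1/√N for the empirical mean) can be illustrated by a small Monte Carlo experiment on the simplest estimator; this is purely illustrative and unrelated to the team's codes.

```python
import numpy as np

def rms_error_of_mean(N, n_trials=400, seed=1):
    """Root-mean-square error of the empirical mean of N i.i.d. standard
    normal samples; by the central limit theorem this behaves like
    1/sqrt(N)."""
    rng = np.random.default_rng(seed)
    means = rng.standard_normal((n_trials, N)).mean(axis=1)
    return np.sqrt(np.mean(means**2))
```

Multiplying the sample size by 100 should divide the RMS error by roughly 10.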
In the preceding result, consistency holds in the sense of pointwise convergence of the estimates on the manifold of transfer functions of given size and order. One contribution of the former Miaou project has been to show that the result holds even if one does not postulate a causal dependency between inputs and outputs, the measure being simply defined as the weak limit of the covariances. A second contribution is that this convergence holds uniformly, with all its derivatives, on each compact subset of the manifold of models, thereby drawing a path between the algorithmic behavior of the rational approximation problem (number and nature of critical points, decrease of the error, behavior of the poles) and that of the minimization of empirical means. This allows one to translate, in terms of asymptotic behavior of the estimators, virtually all properties that are uniform with respect to the order of the approximants, without having to assume that the ``true'' system belongs to the class of models. Let us mention for instance that uniqueness of a critical point in H^{2} rational approximation, in the case where the system to approximate is nearly rational of degree n, implies uniqueness of a local minimizer for the output error when the input is white noise, asymptotically almost surely on every compact, when the density of y with respect to u is nearly rational of degree n. In the case of relaxation systems with one input and one output, that is, if the transfer function is a Markov function, we obtain, in the light of the results exposed in module , the same conclusion when the order of approximation is large enough. This is the first known case of unimodality where the ``true'' system does not belong to the class of models. An extension to the case of matrix-valued Markov functions was obtained recently, see section . An application of this to the localization of the poles of rational estimates of the output error of a long memory system can be found in . Here we face again the question, already mentioned in the introduction, of how to expand functions in bases that are adapted to the singularities of the spectral density of long memory processes. We believe this research direction is worth exploring.
In order to control a system, one generally relies on a model, obtained from a priori knowledge, like physical laws, or from experimental observations. In many applications, one is satisfied with a linear approximation around a nominal point or trajectory. However, certain control problems, such as path planning, are not of a local nature and cannot be answered via a linear approximation; it is also often the case that linear control does not apply, either because the magnitude of the control is limited or because the linear approximation is not controllable.
Module describes a problem of this nature, where the controllability of the linear approximation is of little help. The structural study described in module aims at exhibiting invariants that can be used either to reduce the study to that of simpler systems or to lay the grounds for a nonlinear identification theory. The latter would give information on the model classes to be used when no a priori reliable information is available and black-box linear identification is still not satisfactory.
Stabilization by continuous state feedback (or output feedback in the partial information case) consists in designing a control law which is a smooth (at least continuous) function of the state, making a given point (or trajectory) asymptotically stable for the closed-loop system. One can consider this as a weak version of the optimal control problem: computing a control that optimizes a given criterion (for instance, reaching a prescribed state in minimal time) leads in general to a very irregular dependence on this state; stabilization is a qualitative objective (i.e., to reach that state asymptotically) which is more flexible and allows one to impose much more regularity.
Lyapunov functions are a well-known tool for studying the stability of dynamical systems without control. For a control system, a Control Lyapunov Function is a Lyapunov function for the closed-loop system in which the feedback is chosen appropriately. It can be expressed by a differential inequality called the ``Artstein (in)equation'', which looks like the Hamilton-Jacobi-Bellman equation but is largely underdetermined. From the knowledge of a control Lyapunov function, one can easily deduce a continuous stabilizing feedback.
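One classical way of passing from a control Lyapunov function to a continuous stabilizing feedback is Sontag's universal formula; the sketch below applies it to a toy scalar system of our own choosing (not one of the team's applications).

```python
import numpy as np

def sontag_feedback(a, b):
    """Sontag's universal formula: given a control Lyapunov function V for
    the control-affine system x' = f(x) + g(x) u, with a = <grad V, f> and
    b = <grad V, g>, this feedback makes V decrease along trajectories."""
    if b == 0.0:
        return 0.0
    return -(a + np.sqrt(a*a + b**4)) / b

# toy illustration (ours): x' = x^3 + u, with CLF V = x^2 / 2, so that
# a = x * x^3 and b = x; explicit Euler simulation of the closed loop
x, dt = 1.0, 1e-3
for _ in range(20000):
    x += dt * (x**3 + sontag_feedback(x**4, x))
```

Along the closed loop, V' = -sqrt(a^2 + b^4) < 0 away from the origin, so the state is driven to zero even though the uncontrolled dynamics x' = x^3 blows up.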
The team is engaged in obtaining control Lyapunov functions for certain classes of systems. This should be the first step in synthesizing a stabilizing control, but even when such a control is known beforehand, obtaining a control Lyapunov function can still be very useful to study the robustness of the stabilization, or to modify the initial control law into a more robust one. Moreover, if one has to deal with a problem where it is important to optimize a criterion, and if the optimal solution is hard to compute, one can look for a control Lyapunov function which comes ``close'' (in the sense of the criterion) to the solution of the optimization problem but leads to a control which is easier to work with.
These constructions are exploited in the joint collaborative research conducted with Alcatel Space (see module ), where minimizing a certain cost (fuel consumption / transfer time) is very important, while at the same time a feedback law is preferred for its robustness and ease of implementation.
Here we study certain transformations of models of control systems, or more accurately equivalence classes modulo such transformations. The interest is twofold:
From the point of view of control, the interest is that a control law satisfying specific objectives on the transformed system can be used to control the original system, by including the transformation in the controller. Of course, the favorable case is when the transformed system has a structure that can easily be exploited, for instance when it is a linear controllable system.
From the point of view of identification and modeling, in the nonlinear case, the interest is either to derive qualitative invariants supporting the choice of a nonlinear model given the observations, or to contribute to a classification of nonlinear systems which is sorely missing today. Indeed, the success of the linear model, in control as in identification, is due to the deep understanding one has of it; in the same fashion, a refined knowledge of the invariants of nonlinear systems under basic transformations is a prerequisite for a theory of nonlinear identification and control.
Concerning the classes of transformations, a static feedback transformation of a dynamical control system is a (nonsingular) reparametrization of the control depending on the state, together with a change of coordinates in the state space. A dynamic feedback transformation of a control system consists of a dynamic extension (adding new states and assigning them a new dynamics) followed by a state feedback on the augmented system. Let us now stress two specific problems that we are tackling.
The problem of dynamic linearization, still unsolved, is that of finding explicit conditions on a system for the existence of a dynamical feedback that would make it linear.
Over the last years, the following property of control systems has been emphasized: for some systems (in particular linear ones), there exists a finite number of functions of the state and of the derivatives of the control up to a certain order, which are differentially independent (i.e., coupled by no differential equation) and ``parameterize all the trajectories''. This property and its importance in control were brought to light in , where it is called differential flatness, the above-mentioned functions being called flat or linearizing functions, and it was shown, roughly speaking, that a system is differentially flat if, and only if, it can be converted to a linear system by dynamic feedback. On the one hand, this interesting property of the set of trajectories is at least as important in control as equivalence to a linear system; on the other hand, it gives a handle for tackling the problem of dynamic linearization, namely finding linearizing functions.
An important question remains open: how can one algorithmically decide that a given system has this property or not, i.e., is dynamically linearizable or not? This problem is both difficult and important for nonlinear control. For systems with four states and two controls, whose dynamics is affine in the control (these are the lowest dimensions for which the problem is really nontrivial), necessary and sufficient conditions for the existence of linearizing functions depending on the state and the control (but not the derivatives of the control) can be given explicitly, but they do point to the complexity of the issue.
From the algebraic-differential point of view, the module of differentials of a controllable system is free and of finite dimension over the ring of differential polynomials in d/dt with coefficients in the space of functions of the system, and a basis can be explicitly constructed . The question is to find out whether it has a basis made of closed forms, that is, locally exact forms. Expressed in this way, it is an extension of the classical integrability theorem of Frobenius to the case where the coefficients are differential operators. Together with stability under exterior differentiation (the classical condition), further conditions are required here to ensure that the degree of the solutions is finite, the mid-term goal being to obtain a formal and implementable algorithm able to decide whether or not a given system is flat around a regular point. One can further consider subproblems having their own interest, like deciding flatness with a given precompensator, or characterizing ``formal'' flatness that would correspond to a weak interpretation of the differential equation. Such questions can also be raised locally, in the neighborhood of an equilibrium point.
In what precedes, we have not taken into account the degree of smoothness of the transformations under consideration.
In the case of dynamical systems without control, it is well known that, away from degenerate (non-hyperbolic) points, if one requires the transformations to be merely continuous, every system is locally equivalent to a linear system in a neighborhood of an equilibrium (the Hartman-Grobman theorem). It is thus tempting, when classifying control systems, to look for such equivalence modulo non-differentiable transformations, in the hope of bringing out some robust ``qualitative'' invariants and perhaps stable normal forms. A Hartman-Grobman theorem for control systems would say, for instance, that outside a ``meager'' class of models (for instance, those whose linear approximation is non-controllable), and locally around nominal values of the state and the control, no qualitative phenomenon can distinguish a nonlinear system from a linear one, all nonlinear phenomena being thus either of global nature or singularities. Such a statement is wrong: if a system is locally equivalent to a controllable linear system via a bicontinuous transformation (a local homeomorphism in the state-control space), it is also equivalent to this same controllable linear system via a transformation that is as smooth as the system itself, at least in the neighborhood of a regular point (in the sense that the rank of the control system is locally constant), see for details; a contrario, under weak regularity conditions, linearization can be done by non-causal transformations (see the same report), whose structure remains unclear but which acquire a concrete meaning when the inputs are themselves generated by a finite dimensional dynamics.
The above considerations call for the following question, which is important for modeling control systems: are there local ``qualitative'' differences between the behavior of a nonlinear system and its linear approximation when the latter is controllable?
The bottom line of the team's activity is twofold: optimization in the frequency domain on the one hand, and the control of systems governed by differential equations on the other hand. One can therefore distinguish between two main families of applications: one dealing with the design and identification of diffusive and resonant systems (these are inverse problems), and one dealing with the control of certain mechanical or optical systems. For applications of the first type, the approximation techniques described in module allow one to deconvolve linear equations, analyticity being the result of either the use of Fourier transforms or the harmonic character of the equation itself. Applications of the second type mostly concern the control of systems that are ``poorly'' controllable, for instance low thrust satellites or optical regenerators. We describe all these below in more detail.
Localizing cracks, pointwise sources or occlusions in a two-dimensional material, using thermal, electrical, or magnetic measurements on its boundary, is a classical inverse problem. It arises when studying the fatigue of structures, the behavior of conductors, or magnetoencephalography, as well as the detection of buried objects (mines, etc.). However, no really efficient algorithm has emerged so far when no initial information on the location or on the geometry is known, because numerical integration of the inverse problem is very unstable. The presence of cracks in a plane conductor, for instance, or of sources in a cortex (modulo a reduction from 3D to 2D, see later on), can be expressed as a lack of analyticity of the (complexified) solution of the associated Dirichlet-Neumann problem, which may in principle be approached using techniques of best rational or meromorphic approximation on the boundary of the object (see sections to and ). In this connection, the realistic case where data are available on part of the boundary only is a typical opportunity to apply the analytic and meromorphic extension techniques developed earlier.
The 2D approach proposed here consists in constructing, from measured data on a subset K of the boundary of a plane domain D, the trace on the boundary of a function F which is analytic in D except for a possible singularity across some subset (typically a crack). One can then use the approximation techniques described above in order to:
extend F to the whole boundary if the data are incomplete (it may happen that K is a proper subset of the boundary when the latter is not fully accessible to measurements), for instance to identify an unknown Robin coefficient, see where stability properties of the procedure are established;
detect the presence of a defect in a computationally efficient manner, ;
Thus, inverse problems of geometric type, which consist in finding an unknown boundary from incomplete data, can be approached this way , usually in combination with other techniques . Preliminary numerical experiments have yielded excellent results, and it is now important to process real experimental data, which the team is currently busy analysing. In particular, contacts with the Odyssée Team of Inria Sophia Antipolis (within the ACI ``ObsCerv'') have provided us with 3D magnetoencephalographic data from which 2D information was extracted, see section . The team is also in contact with other laboratories (e.g., the Vanderbilt Univ. Physics Dept.) in order to work out 2D or 3D data from physical experiments.
This year, the team has begun to study this type of method for problems with variable conductivity governed by a 2D Beltrami equation (these appear, for example, in plasma confinement for thermonuclear fusion, a subject on which a collaboration has started with the Laboratoire J. Dieudonné of the University of Nice); it is the object of the postdoctoral stay of E. Sincich. In the longer term, we also envisage applying such techniques to the Helmholtz equation. Using convergence properties of approximation algorithms to establish stability results for these inverse problems is an appealing direction for future research.
One of the best training grounds for the team's research in function theory is the identification and design of physical systems for which the linearity assumption is well satisfied in the working range of frequencies, and whose specifications are given in the frequency domain. Resonant systems, acoustic or electromagnetic, are prototypical examples in common use in telecommunications. We shall be more specific on two examples below.
Surface acoustic wave filters are widely used in modern telecommunications, especially in cellular phones, mainly because of their small size and low cost. Unidirectional filters, formed of Single Phase UniDirectional Transducers (SPUDT for short) that contain inner reflectors (cf. Figure ), are increasingly used in this technological area. The design of such filters is more complex than that of traditional ones.
We are interested here in a filter formed of two SPUDT transducers (Figure ). Each transducer is composed of cells of the same length, each of which contains a reflector, and all but the last one contain a source (Figure ). These sources are all connected to an electrical circuit and cause electro-acoustic interactions inside the piezoelectric medium. In the transducer SPUDT2 represented in Figure , the reflectors are positioned with respect to the sources in such a way that, near the central frequency, almost no wave can emanate from the transducer to the left ( ), a property called unidirectionality. In the right transducer SPUDT1, the reflectors are positioned in a symmetric fashion so as to obtain unidirectionality to the left.
Specifications are given in the frequency domain on the amplitude and phase of the electrical transfer function. This function expresses the power transfer and can be written as
where Y is the admittance of the coupling:
The design problem consists in finding the reflection coefficients r and the source efficiencies in both transducers so as to meet the specifications.
The transducers are described by analytic transfer functions called mixed matrices, which link input waves and currents to output waves and potentials. Physical properties of reciprocity and energy conservation endow these matrices with a rich mathematical structure that allows one to use approximation techniques in the complex domain (see module ) according to the following steps:
describe the set of electrical transfer functions obtainable from the model,
set out the design problem as a rational approximation problem in a normed space of analytic functions:
where D is the desired electrical transfer function,
use a rational approximation software (see module ) to identify the design parameters.
The first item is the subject of ongoing research. It connects the geometry of the zeroes of a rational matrix to the existence of an inner symmetric extension without increase of the degree (reciprocal Darlington synthesis), see . A collaboration with TEMEX (Sophia-Antipolis) was initiated this year on the subject.
In the domain of space telecommunications (satellite transmissions), constraints specific to on-board technology lead to the use of filters with resonant cavities in the hyperfrequency range. These filters serve multiplexing purposes (before or after amplification) and consist of a sequence of cylindrical hollow bodies, magnetically coupled by irises (orthogonal double slits). The electromagnetic wave that traverses the cavities satisfies the Maxwell equations, which force the tangential electrical field along the body of the cavity to be zero. A deeper study (of the Helmholtz equation) shows that essentially only a discrete set of wave vectors is selected. In the considered range of frequencies, the electrical field in each cavity can be seen as decomposed along two orthogonal modes, perpendicular to the axis of the cavity (other modes are far away and their influence can be neglected).
Near the resonance frequency, a good approximation of the Maxwell equations is given by the solution of a second-order differential equation. One thus obtains an electrical model of the filter as a sequence of electrically coupled resonant circuits, where each circuit is modeled by two resonators, one per mode, whose resonance frequency represents the frequency of the mode and whose resistance represents the electric losses (currents on the surface).
In this way, the filter can be seen as a quadripole, with two ports, when plugged into a resistor at one end and fed with a potential at the other. We are then interested in the power transmitted and reflected. This leads to defining a scattering matrix S, which can be considered as the transfer function of a stable causal linear dynamical system with two inputs and two outputs. Its diagonal terms S_{11}, S_{22} correspond to reflections at each port, while S_{12}, S_{21} correspond to transmission. These functions can be measured at certain frequencies (on the imaginary axis). The filter is rational of order 4 times the number of cavities (that is, 16 in the example), and the key step consists in expressing the components of the equivalent electrical circuit as functions of the S_{ij} (since there are no formulas expressing the lengths of the screws in terms of the parameters of this electrical model). On the other hand, this is also useful for the design of the filter, for analyzing numerical simulations of the Maxwell equations, and for checking the design, particularly the absence of higher resonant modes.
In reality, resonance is not studied via the electrical model, but via a low pass equivalent obtained upon linearizing near the central frequency, which is no longer conjugate symmetric (i.e., the underlying system may not have real coefficients) but whose degree is divided by 2 (8 in the example).
In short, the identification strategy is as follows:
measuring the scattering matrix of the filter near the central frequency over twice the pass band (80 MHz in the example);
solving bounded extremal problems, in H^{2} norm for the transmission and in Sobolev norm for the reflection (the modulus of the response being respectively close to 0 and 1 outside the measurement interval), cf. module . This gives a scattering matrix of order roughly 1/4 of the number of data points.
Then one computes a rational approximant of fixed degree (8 in this example) via the hyperion software, cf. module .
A realization of the transfer function is thus obtained, and some additional symmetry constraints are imposed.
Finally one builds a realization of the approximant and looks for a change of variables that eliminates non-physical couplings. This is obtained by using algebraic solvers and continuation algorithms on the group of orthogonal complex matrices (symmetry forces this kind of change of basis).
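The last step, finding an orthogonal change of basis that cancels the non-physical couplings, can be sketched in a simplified real setting as follows: sweep over Givens rotations and greedily keep, for each rotation plane, the angle that most reduces the off-pattern energy. The actual procedure works over complex orthogonal matrices with algebraic solvers and continuation; the coarse grid-search heuristic and the function names here are our own.

```python
import numpy as np

def givens(n, i, j, t):
    """Plane rotation by angle t in coordinates (i, j)."""
    G = np.eye(n)
    c, s = np.cos(t), np.sin(t)
    G[i, i] = c; G[j, j] = c
    G[i, j] = -s; G[j, i] = s
    return G

def reduce_couplings(A, mask, sweeps=40):
    """Search a real orthogonal O reducing the energy of the entries of
    O A O^T outside the admissible coupling pattern `mask` (boolean)."""
    n = A.shape[0]
    O, M = np.eye(n), A.copy()
    angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
    off = lambda M: float(np.sum(M[~mask] ** 2))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                cand = [(off(givens(n, i, j, t) @ M @ givens(n, i, j, t).T), t)
                        for t in angles]
                best_c, best_t = min(cand)
                if best_c < off(M):          # accept only improvements
                    G = givens(n, i, j, best_t)
                    M, O = G @ M @ G.T, G @ O
    return O, off(M)
```

Because only improving rotations are accepted, the off-pattern energy decreases monotonically, but, as in the discussion above, nothing prevents the sweep from stopping in a local minimum.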
The final approximation is of high quality. This can be interpreted as a validation of the linearity hypothesis for the system: the relative L^{2} error is less than 10^{-3}. This is illustrated by a reflection diagram (Figure ). Non-physical couplings are less than 10^{-2}.
The above considerations are valid for a large class of filters. These developments have also been used for the design of non-symmetric filters, useful for the synthesis of repeating devices.
The team now investigates the design of output multiplexers (OMUX), where several filters of the previous type are coupled on a common guide. In fact, it has undertaken a rather general analysis of the question "How does an OMUX work?" With the help of numerical simulations and Schur analysis, general principles are being worked out to take into account:
within each channel the coupling between the filter and the "Tee" that connects it to the manifold,
the coupling between two consecutive channels.
The model is obtained by chaining the corresponding scattering matrices, and mixes rational elements and complex exponentials (because of the delays); it therefore constitutes an extension of the previous framework. Its study is being conducted under contract with the CNES in collaboration with Alcatel Alenia Space (Toulouse), see .
The use of satellites in telecommunication networks motivates a lot of research in the area of signal and image processing; see for instance section for an illustration.
Of course, this requires that satellites be adequately located and positioned (correct orientation). This problem and similar ones continue to motivate research in control on the part of the team. Aerospace engineering in general is a domain that requires sophisticated control techniques, and where optimization is often crucial due to the extreme conditions.
The team has been working for two years on control problems in orbital transfer with low-thrust engines, under contract with Alcatel Space Cannes, see module . Technically, the reason for using these (ionic) low-thrust engines, rather than chemical engines that deliver a much higher thrust, is that they require much less ``fuel''; this is decisive because the total mass is limited by the capacity of the launchers: less fuel means more payload, and fuel represents an impressive part of the total mass.
From the control point of view, the low thrust makes the transfer problem delicate. In principle, of course, the control law leading to the right orbit in minimum time exists, but it is computationally heavy to obtain and the computation is non-robust against many unmodelled phenomena. Progress on the approximation of such a law by a feedback was made this year, see section .
The increased capacity of digital channels in information technology is a major industrial challenge. Nowadays, the most efficient means of transporting signals from a server to the user and back is via optical fibers. Using this medium at the limit of its response capacity raises new control problems for maintaining a safe signal, both in the fibers and in the routing and regeneration devices.
In the recent past, the team has worked in collaboration with Alcatel R&I (Marcoussis) on the control of ``all-optic'' regenerators. Although no collaboration is presently active, we consider this a potentially rich domain of applications.
The works presented in module lie upstream with respect to applications. However, beyond the fact that deciding whether a given system is linear modulo an adequate compensator is clearly conceptually important, it is fair to say that ``flat outputs'' are of considerable interest for path planning . Moreover, as indicated in section , a better understanding of the invariants of nonlinear systems under feedback would result in significant progress in identification.
RARL2 (Réalisation interne et Approximation Rationnelle L2) is a software for rational approximation (see module ). Its web page is http://www-sop.inria.fr/miaou/RARL2/rarl2.html (the `miaou' should be replaced by `apics' here). This software takes as input a stable transfer function of a discrete-time system represented by
either its internal realization,
or its first N Fourier coefficients,
or discretized values on the circle.
It computes a local best approximant which is stable, of prescribed McMillan degree, in the L^{2} norm.
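When the input is the sequence of the first N Fourier coefficients, a standard way to produce a stable state-space model of prescribed order, for instance to initialize an L^{2} optimization like the one performed by RARL2, is Kung's SVD-based realization method. The sketch below is that textbook method, not the algorithm actually implemented in RARL2; the function name is ours.

```python
import numpy as np

def kung_realization(h, deg):
    """Given Fourier (impulse response) coefficients h, with h[k-1] the
    coefficient of z^{-k}, build a state-space model (A, B, C) of order
    `deg` from the SVD of the Hankel matrix (Kung's method)."""
    r = len(h) // 2
    H = np.array([[h[i + j] for j in range(r)] for i in range(r)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(H)
    sq = np.sqrt(s[:deg])
    O = U[:, :deg] * sq                   # extended observability factor
    Ctrl = (Vt[:deg].T * sq).T            # extended controllability factor
    A = np.linalg.pinv(O[:-1]) @ O[1:]    # shift-invariance of observability
    B = Ctrl[:, :1]
    C = O[:1, :]
    return A, B, C
```

On data generated by a system of the requested order, the realization reproduces the Markov parameters h[k-1] = C A^{k-1} B; on generic data it is only a heuristic (but usually stable) low-order model.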
It is germane to the arl2 function of hyperion, from which it differs mainly in the way systems are represented: a polynomial representation is used in hyperion, while RARL2 uses realizations, which is advantageous in certain cases. It is implemented in MATLAB. This software handles multivariable systems (with several inputs and several outputs), and uses a parameterization that has the following advantages:
it incorporates the stability requirement in a built-in manner,
it allows the use of differential tools,
it is wellconditioned, and computationally cheap.
An iterative search strategy on the degree of the local minima, similar in principle to that of arl2, increases the chance of obtaining the absolute minimum (see module ) by generating, in a structured manner, several initial conditions. Contrary to the polynomial case, we are in a singular geometry on the boundary of the manifold on which minimization takes place, which forbids extending the criterion to the ambient space. We thus have to take into account a singularity on the boundary of the approximation domain, and it is not possible to compute a descent direction as the gradient of a function defined on a larger domain, although the initial conditions obtained from minima of lower order lie on this boundary. Determining a descent direction is therefore, to a large extent, a heuristic step. While this step performs satisfactorily in the cases handled so far, it is still unknown how to make it truly algorithmic.
The identification of filters modeled by an electrical circuit, developed within the team (see module ), requires computing the electrical parameters of the filter. This means finding a particular realization (A, B, C, D) of the model given by the rational approximation step. This 4-tuple must satisfy constraints that come from the geometry of the equivalent electrical network and translate into some of the coefficients of (A, B, C, D) being zero. Among the different coupling geometries, one called ``the arrow form'' is of particular interest, since it is unique for a given transfer function and also easily computed. The computation of this realization is the first step of RGC. However, if the desired realization is not in arrow form, one can show that it can be deduced from the latter by an orthogonal change of basis (complex, in general). In this case, RGC starts a local optimization procedure that reduces the distance between the arrow form and the target, using successive orthogonal transformations. This optimization problem on the group of orthogonal matrices is non-convex and has many local and global minima. In fact, the realization of the filter in the given geometry is not always unique. Moreover, it is often interesting to know all the solutions of the problem, because the designer cannot be sure, in many cases, which one is being handled, and also because the assumptions on the reciprocal influence of the resonant modes may not be equally well satisfied for all such solutions, hence some of them should be preferred for the design. Today, apart from the particular case where the arrow form is the desired form (which happens frequently up to degree 6), the RGC software gives no guarantee of obtaining a realization that satisfies the prescribed constraints. In the short-to-mid term, the methodology underlying the RGC software should be replaced by a heavier but systematic approach based on continuation methods and symbolic computation, on which decisive progress was made this year, see section .
Presto-HF: a toolbox dedicated to low-pass parameter identification for hyperfrequency filters, http://www-sop.inria.fr/miaou/Fabien.Seyfert/Presto_web_page/presto_pres.html (the `miaou' should be replaced by `apics' here). In order to allow the industrial transfer of our methods, a Matlab-based toolbox has been developed, dedicated to the problem of identification of low-pass hyperfrequency filter parameters. It allows one to run the following algorithmic steps, one after the other or all together in a single sweep:
determination of delay components, that are caused by the access devices (automatic reference plane adjustment);
automatic determination of an analytic completion, bounded in modulus, for each channel (see module );
rational approximation, of fixed McMillan degree;
determination of a constrained realization.
For the matrix-valued rational approximation stage, Presto-HF relies either on hyperion (Unix or Linux only) or on RARL2 (platform independent); both rational approximation engines were developed within the team. Constrained realizations are computed by the RGC software. As a toolbox, Presto-HF has a modular structure, which allows one, for example, to include some of its building blocks in an already existing software.
The delay compensation algorithm is based on the following strong assumption: far off the passband, one can reasonably expect a good approximation of the rational components of S_{11} and S_{22} by the first few terms of their Taylor expansion at infinity, that is, a small-degree polynomial in 1/s. Using this idea, a sequence of convex quadratic optimization problems is solved in order to obtain appropriate compensations. In order to check this assumption, one has to measure the filter on a larger band, typically three times the pass band.
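A crude version of the delay-compensation idea can be sketched as follows: far off the passband the reflection behaves like exp(-2 tau s) times an almost constant factor, so the delay tau can be read off the slope of the unwrapped phase by linear least squares. This is only an illustration; Presto-HF solves convex quadratic problems instead, and the function below is our own.

```python
import numpy as np

def estimate_delay(w, s11):
    """Estimate the access delay tau from off-band samples of S11, assuming
    S11(jw) ~ exp(-2j*w*tau) * (nearly constant factor), by fitting a line
    to the unwrapped phase; return the compensated reflection and tau."""
    phase = np.unwrap(np.angle(s11))
    slope = np.polyfit(w, phase, 1)[0]
    tau = -slope / 2.0          # the wave traverses the access line twice
    return s11 * np.exp(2j * w * tau), tau
```

The factor 2 reflects the round trip through the access device seen by the reflection coefficient.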
This toolbox is currently used by Alcatel Space in Toulouse. In the near future it should incorporate new algorithms for delay compensation and data completion (see section ).
We have started the development of Endymion, a software licensed under the CeCILL license version 2, see http://www.cecill.info. This software will offer most of the functionalities of hyperion (whose development was abandoned in 2001), such as the arl2 and peb2 procedures. It will be much more portable, since it no longer depends on an external garbage collector or on a plotter like agat.
The great novelty in the RAWEB2002 (Scientific Annex to the Annual Activity Report of Inria), was the use of XML as intermediate language, and the possibility of bypassing
The construction of the raweb is explained schematically on figure
. The input is either a
In the original version, one could instruct Tralics to produce the XML output, or to convert it also to HTML or Pdf. One could also ask for a direct PostScript version (bypassing the XML phase). This is now governed by a Perl script, called rahandler.pl. One can modify this script (for instance, change the name or the pathname of the XSLT processor, or the location of the SGML catalog file); this is now the recommended procedure (of course, it is still possible to specify these names in the Tralics configuration file, from which they are transmitted to the script). The raweb package uses a Makefile to call Tralics without options, and then all the other tools (in this case rahandler.pl is unused).
As a byproduct, all bibliographical references of years 2000 to 2003 have been translated to XML, sorted by authors, type, year, and put on the web (currently the internal server http://www.inria.fr/interne/disc/).
One important issue was the choice of the DTD (document type definition). On the one hand, it should follow the pseudo-DTD defined for the RAWEB six years ago (the Activity Report is a set of modules, with contributors, keywords, etc.); on the other hand, it must be as close as possible to standard DTDs. We decided to use a variant of the TEI (text encoding initiative, see http://www.tei-c.org/) for the text, MathML for the mathematics, and an ad-hoc DTD for the bibliography. This DTD was modified in 2004, independently of Tralics. In other words, on Figure , a new arrow has to be added: it goes from the old DTD to the new one.
The main difficulty comes from the mathematics: consider a formula like . This is translated by Tralics into a formula that contains a script X, coded as an entity inside an <mi> element. After conversion to the new DTD, entities are replaced by Unicode characters, so that the X becomes the character U+1D4B3. This character seems to be unknown to browsers like Amaya or Mozilla, and is rendered as a question mark or a little box containing the Unicode value (here 01D4B3). This is one of the reasons why math formulas are still replaced by images; in the case of $x+\alpha$, only the \alpha is converted; this has the advantage of reducing the number of images, but in some cases is not very elegant.
Conversion is done by a dedicated Perl script that extracts all formulas from the XML file and converts them to a set of pages in a dvi file (we use here the same algorithm as for converting the XML to PostScript). Each page is converted to an image via pstoimg, a Perl code that is part of latex2html. We try to associate with each image an Alt field that describes the formula, but this is difficult: for the example we get ${\#119987 _y=lim_{x\#8594 0}sin^2{(x)}}$.
The Tralics software is written in C++.
A second application is the following: when researchers wish to publish an Inria Research Report, they send their PostScript or Pdf document, together with the start of the
The main philosophy of Tralics is to have the same parser as TeX.
This year we added constructions like \long and \outer (if a command is not \long, \par tokens are forbidden in its arguments).
Full first names are retained when translating the bibliography; of course, this works only if they appear in the database files.
There are still some unsolved problems: for instance, a figure environment should contain only graphics together with a single caption; commands defined by the picture environment are translated (but refused by the style sheet); non-math material in a math formula is rejected (unless it consists of characters only).
For more information, see the Tralics web page. It contains a description of each command. We have written a technical report in two parts: , ; the first part explains Tralics, and the second part its applications to the Raweb.
The fact that 2D harmonic functions are real parts of analytic functions allows one to tackle issues of singularity detection and geometric reconstruction from boundary data of solutions to Laplace equations, using the meromorphic and rational approximation tools developed by the team. Some electrical conductivity defects can be modeled by pointwise sources inside the considered domain. In dimension 2, significant progress was made on this question in recent years: the singularities of the function (of the complex variable) which is to be reconstructed from boundary measurements are poles (case of dipolar sources) or logarithmic singularities (case of monopolar sources). Hence, the behavior of the poles of the rational or meromorphic approximants, described in module , allows one to efficiently locate their position. This is the topic of the article , where the related situation of small inhomogeneities connected to mine detection is also considered.
In 3D, epileptic regions in the cortex are often represented by pointwise sources that have to be localized from measurements, on the scalp, of a potential satisfying a Laplace equation (EEG, electroencephalography). Note that the patient's head is here modeled as a nested sequence of spherical layers. This inverse EEG problem is the object of a collaboration between the Apics and Odyssée Teams through the ACI ``ObsCerv''. A breakthrough was made last year which now makes it possible to proceed via best rational approximation on a sequence of 2D disks along the inner sphere , . The point here is that, up to an additive function harmonic in the 3D ball, the trace of the potential on each boundary circle coincides with a function having branched singularities in the corresponding disk. The behavior, along the family of disks, of the poles of the best rational approximants on each circle is strongly linked to the location of the sources, using properties discussed in sections and (in the particular case of a unique source, we end up with a rational function); this is under study, as well as a number of important related issues.
Solving Cauchy problems on an annulus or on a spherical layer in order to treat incomplete experimental data is also a necessary ingredient of the above methodology, since it is involved in the propagation of initial conditions from the boundary to the center of the domain, where singularities are sought, when this domain is formed of several homogeneous layers of different conductivities. On a spherical layer, this was the aim of the postdoctoral work of B. Atfeh . Constructive and numerical aspects of the expected procedures (harmonic 3D projection, Kelvin and Riesz transformations, spherical harmonics) are under study, and encouraging results are already available on numerically computed data. This offers an opportunity to state and solve extremal problems for harmonic fields, for which an analog of the Toeplitz operator approach to bounded extremal problems has been obtained. More specifically, the density of traces of harmonic gradients in L^{2} of a subset of the 3D sphere was established, and a Toeplitz operator whose symbol is the characteristic function of such a subset was defined. Then, a best approximation on the subset of a general vector field by a harmonic gradient, under an L^{2} norm constraint on the complementary subset, can be computed by solving an inverse spectral equation for the above-mentioned Toeplitz operator.
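A discretized analog of such a bounded extremal problem on the unit circle can be illustrated as follows: approximate given data on an arc K by the trace of an analytic polynomial, subject to an L^{2} bound on the complementary arc J, the Lagrange parameter of the constraint being adjusted by bisection. This toy version replaces the Toeplitz-operator machinery by finite least squares and is entirely our own construction.

```python
import numpy as np

def bep_disk(f_vals, K_mask, theta, deg, M, lam_lo=1e-8, lam_hi=1e8):
    """Minimize ||g - f||_{L2(K)} over analytic polynomials g of degree
    `deg`, subject to ||g||_{L2(J)} <= M on the complementary arc J,
    with all norms discretized on the grid `theta`."""
    V = np.exp(1j * np.outer(theta, np.arange(deg + 1)))  # basis 1, e^{it}, ...
    VK, VJ, fK = V[K_mask], V[~K_mask], f_vals[K_mask]

    def solve(lam):
        G = VK.conj().T @ VK + lam * VJ.conj().T @ VJ     # normal equations
        c = np.linalg.solve(G, VK.conj().T @ fK)
        return c, np.linalg.norm(VJ @ c)

    c = None
    for _ in range(200):                # bisection on log(lambda)
        lam = np.sqrt(lam_lo * lam_hi)
        c, nJ = solve(lam)
        if nJ > M:
            lam_lo = lam                # constraint violated: penalize more
        else:
            lam_hi = lam
    return c
```

When the constraint is inactive, the bisection drives lambda to its lower bound and the result is essentially the unconstrained least-squares fit on K.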
As to multiply connected domains, solving Cauchy problems on an annulus is the main theme of the PhD thesis of M. Mahjoub. This arises when identifying a crack in a tube, or a Robin coefficient on its inner skin. It can be formulated as a best approximation problem on part of the boundary of a doubly connected domain, in which framework both numerical algorithms and stability results were obtained , , , , thereby generalizing the simply connected situation , .
In the 2D case again, with incomplete data, the geometric problem of finding, in a stable and constructive way, an unknown (insulating) part of the boundary of a domain is considered in the Ph.D. thesis of I. Fellah. Approximation and analytic extension techniques described in section , together with numerical conformal transformations of the disk, provide here also interesting algorithms for the inverse problem under consideration. A related result was recently obtained, namely the L^{p} existence and uniqueness of the solution to the Neumann problem on a piecewise smooth domain with inward-pointing cusps (note that the endpoints of a crack are such cusps) when 1 < p < 2. Although it is reminiscent of the classical L^{p} theorem on Lipschitz domains , it seems to be a new result, and the first where a cusp is permitted while still controlling the conjugate function; the proof uses weighted norm inequalities . Moreover, a Cauchy-type representation of the solution was obtained using representation properties of Smirnov classes, and the technique generalizes to mixed boundary conditions that occur when the crack is no longer assumed to be a perfect insulator. Describing higher-dimensional geometries with cusps to which the result can be extended is an interesting issue.
We also started to consider more realistic geometries for the 3D domain under consideration. A possibility is to parametrize it in such a way that its planar cross-sections are quadrature domains or R-domains. In this framework, best rational approximation can still be performed in order to recover the singularities of solutions to Laplace equations, but complexity issues have to be examined carefully.
The case of an ellipsoid was the topic of the summer internship of C. Paduret, and is that of . Finally, we have begun to consider actual 3D approximation for such inverse problems. Quaternionic analysis seems to be a relevant tool, but the multiplicative side of the theory remains to be developed.
Within the postdoctoral stay of E. Sincich, we began the study of more general elliptic equations arising in situations with variable conductivity, in particular the 2D Beltrami equation. There, generalized harmonic conjugation allows us to state Cauchy problems as bounded extremal issues, up to the recovery (or approximation) of a quasiconformal mapping. We have in mind a 2D (doubly connected) application to plasma confinement for thermonuclear fusion in a Tokamak; this is joint work started with J. Blum from the Laboratoire J. Dieudonné of the University of Nice.
The magnetic field produced at a point x by a magnetic dipole of moment m_{k} located at a point x_{k} is given, up to the factor mu_{0}/(4 pi), by the classical dipole formula
B(x) = 3 ((x - x_{k}) . m_{k}) (x - x_{k}) / |x - x_{k}|^{5} - m_{k} / |x - x_{k}|^{3}.
The problem is to identify the locations x_{k} and the moments m_{k} of a sequence of N dipoles indexed by k = 1, ..., N, given measurements from a SQUID (superconducting quantum interference device). The assumption that z_{k} is independent of k (i.e., all dipoles lie in a plane) is made, and we assume also that m_{k} is parallel to the z-axis for all k. In this case the previous formula simplifies: the vertical component becomes
B_{z}(x) = m_{k} (3 (z - z_{k})^{2} - |x - x_{k}|^{2}) / |x - x_{k}|^{5}.
The effect of the pickup coil needed by the SQUID can be modeled by averaging over a small disk of radius a; thus we measure the average of B_{z} over such a disk centered at the measurement point.
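As a numerical illustration (with mu_{0}/(4 pi) normalized to 1, and all function names ours), the vertical field of planar z-oriented dipoles and its average over the pickup coil can be computed as follows; the coil average is obtained by a simple midpoint quadrature in polar coordinates.

```python
import numpy as np

def bz_dipole(x, y, z, xk, yk, mk, zk=0.0):
    """Vertical field B_z at (x, y, z) of a dipole at (xk, yk, zk) with
    moment mk along the z-axis (mu_0/(4 pi) set to 1)."""
    dx, dy, dz = x - xk, y - yk, z - zk
    r2 = dx * dx + dy * dy + dz * dz
    return mk * (3.0 * dz * dz - r2) / r2 ** 2.5

def coil_average(x0, y0, z, a, dipoles, n=60):
    """Average of B_z over a pickup coil, modeled as a disk of radius a
    centered at (x0, y0, z), by midpoint quadrature in polar coordinates;
    `dipoles` is a list of (xk, yk, mk) triples in the plane z = 0."""
    r = (np.arange(n) + 0.5) * a / n
    t = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    R, T = np.meshgrid(r, t)
    X, Y = x0 + R * np.cos(T), y0 + R * np.sin(T)
    vals = sum(bz_dipole(X, Y, z, xk, yk, mk) for xk, yk, mk in dipoles)
    return float(np.sum(vals * R) / np.sum(R))   # area-weighted average
```

As the coil radius a shrinks, the averaged measurement tends to the pointwise field at the coil center.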
The objective of these studies is to have at one's disposal a panel of parametrizations that can be used for our approximation problems and can take into account particular properties coming from the physics. These could be symmetry or some other constraint on the realization matrix, like for example the structure imposed by the couplings of a hyperfrequency filter .
Tangential Schur algorithms provide interesting tools to parameterize conservative functions by means of interpolation values. Several parametrizations have been derived in the past from this approach, in which a function is represented by a balanced realization computed as a product of unitary matrices from the Schur parameters . Such parametrizations present a number of advantages in view of the approximation problems we have in mind: they ensure identifiability, take into account the stability constraint, preserve the order, and exhibit good numerical behavior. They have been used in the software RARL2 .
This year, we paid particular attention to the symmetry constraint. Symmetric inner rational functions naturally arise in the description of physical systems which satisfy conservation and reciprocity laws. In particular, they occur in the description of hyperfrequency and SAW filters (see sections , ). In , a Schur-type algorithm based on a two-sided Nudelman interpolation problem has been presented, which provides parameters for these functions.
Surface Acoustic Wave (in short: SAW) filters consist of a series of transducers which transmit electrical power by means of surface acoustic waves propagating on a piezoelectric medium. They are usually described by a mixed scattering matrix which relates acoustic waves, currents and voltages. By reciprocity and energy conservation, these transfers must be lossless, contractive or positive real, and symmetric. In the design of SAW filters, the desired electrical power transmission is specified. An important issue is to characterize the functions that can actually be realized for a given type of filter. In any case, these functions are Schur functions and can be completed into a conservative matrix with an increase of at most 2 of the McMillan degree, this matrix describing the global behavior of the filter. Such a completion problem is known as Darlington synthesis, and always has a solution in any higher McMillan degree in the rational case if the symmetry condition is not imposed. However, in our case, additional constraints arise from the geometry of the filter, such as symmetry and certain interpolation conditions. In , a complete mathematical description of such devices is given, including realizations of the relevant transfer functions, as well as a necessary and sufficient condition for symmetric Darlington synthesis preserving the McMillan degree. More generally, in collaboration with P. Enqvist from KTH (Stockholm, Sweden), we characterized in the existence of a symmetric Darlington synthesis with specified increase of the McMillan degree: a symmetric extension of a symmetric contractive matrix S of degree n exists in degree n + k if, and only if, I - SS^{*} has at most k zeros with odd multiplicity. In the language of circuit theory, this result tells us about the minimal number of gyrators to be used in circuit synthesis; an article is currently being written to report on these results.
The results of and have been exploited this year to produce a proof of the convergence in capacity of L^{p} best meromorphic approximants on the circle (p ≥ 2) to Cauchy transforms of complex measures on a hyperbolic geodesic, plus a rational function. Some mild conditions (bounded variation of the argument and power-thickness of the total variation) are required on the measure, and the argument makes use of classical logarithmic potential theory together with the asymptotic convergence of the counting measure of the poles. Recall that a sequence of functions converges in capacity if, for each fixed ε > 0, the capacity of the set where the distance to the limit is greater than ε goes to 0 along the sequence. Actually, we proved a slightly more precise result, namely that the convergence is geometric and that the poles of the approximated function attract a number of poles of the approximant which is at least the multiplicity and not much more (the two numbers differ at most by a fixed constant). This result is important for inverse problems of mixed type, like those mentioned in section , where monopolar and dipolar sources are present simultaneously. Quantifying the convergence further is the next step in such a study. An article is being written on these results.
The study of matrix-valued rational approximation to matrix Markov functions (i.e., Cauchy transforms of a positive matrix-valued measure) has also been pursued, although less actively. Essentially, we proved that for matrix-valued Markov functions, best meromorphic approximants have a Markov-type singular part.
It is known after that the denominators of best rational or meromorphic approximants in the L^{p} norm on a closed curve (say the unit circle T, to fix ideas) satisfy, for p ≥ 2, a non-Hermitian orthogonality relation for functions described as Cauchy transforms of complex measures on a curve γ (the locus of singularities) contained in the unit disk D. This has been used to assess the asymptotic behavior of the poles of such an approximant when γ is a hyperbolic geodesic arc: under weak conditions on the measure, the counting measure of these poles converges weak-star to the equilibrium distribution of the condenser (T, γ), where T is the unit circle. Non-asymptotic bounds were also obtained for the sum of the complements to π of the hyperbolic angles under which the poles ``see'' γ: the sum of these complements over all the poles (there are n of them in total if the approximant has degree n) is bounded by the aperture of γ plus twice the variation of the argument of the measure (which is independent of n). This produces ``hard'' testable inequalities for the location of the poles, which should prove particularly valuable in inverse source problems (because they are not asymptotic in nature), see .
The more general situation where the locus of singularities is a so-called ``minimal contour'' for the Green potential (of which a geodesic arc is an example) has been essentially settled, with the same conclusion concerning the convergence of the counting measure of the poles. The writing up of this (rather technical) result is underway; it is of particular significance for the determination of 2D sources or piecewise analytic cracks from overdetermined boundary data, see modules and .
This year, strong asymptotics, which deal not with the counting measure of the poles (this yields only results in proportion) but with the behaviour of all of them, were obtained for Cauchy transforms of smooth non-vanishing complex measures on a hyperbolic arc in the disk, provided the density blows up at the endpoints of the arc. This is a new and very interesting result, which paves the way for further study of the uniqueness of local best approximants and of the inverse source problem. The technical problem facing us is to get rid of the blow-up assumption at the endpoints, which is induced by the technique (going over to the circle in order to use Fourier analysis and the compactness of Hankel operators with continuous symbol) but is not very satisfactory. A numerical illustration of the results is shown in figures for approximants to the function F given below.
The function F:
The team also obtained this year strong convergence results for zeros of orthogonal polynomials on polynomial lemniscates. Specifically, all the zeros go to the surrounding lemniscate in the case of a meromorphic and branched positive weight for the measure. Beyond their own interest for inverse 2D boundary problems via conformal mapping, these results have uncovered a new methodology, based on the expansion of reproducing kernels and a generalized Hadamard formula, to pass in many situations from exterior asymptotics of Szegö type to interior asymptotics (which are traditionally much harder to obtain). This rather unexpected approach is currently being explored in detail by E. Mina.
To carry out identification and design of filters under passivity constraints (such constraints are common since passive devices, in particular hyperfrequency filters, are ubiquitous), it is natural to consider the mixed bounded extremal problem introduced earlier. An algorithm to asymptotically solve this problem when p = 2 in nested spaces of polynomials has been developed, and its connection to certain affine Riemann-Hilbert problems has been worked out. This connection provides a handle to analyze regularity properties of the solution, and gives us an alternative procedure based on the solution of a min-max problem with saddle-point conditions. It also provides sufficient conditions for Hölder smoothness of the solution, linked to the regularity of the Cauchy transform. Such regularity conditions should greatly influence the numerical treatment of the problem, and should be valuable to estimate delays in waveguides, thereby complementing the existing procedures dealing with this issue in PRESTO-HF. An article reporting on these results is currently being written.
We studied in some generality the case of parameterized linear systems characterized by the classical state-space equations

dx/dt = A(p) x + B(p) u,    y = C(p) x,

where p = (p_{1}, ..., p_{r}) is a finite set of r parameters and (A(p), B(p), C(p)) are matrices whose entries are polynomials (over the ground field) in these variables. For such a parameterized system we call

T_{p}(s) = C(p) (sI - A(p))^{-1} B(p)

the transfer function of the system (A(p), B(p), C(p)). Some important questions in filter synthesis concern the determination of parameterized sets of equivalent realizations of a given transfer function.
General results were obtained about these sets, in particular a necessary and sufficient condition ensuring that their cardinality is finite. In the special case of coupled resonators, an efficient algebraic formulation was derived, which allowed us to compute these sets for nearly all common filter geometries. However, for a new class of high-order filters, the latter procedure breaks down because of the computational complexity of the Gröbner basis computation. This led us to consider homotopic methods based on continuation techniques in order to solve the algebraic system defining these sets. The usual framework of such methods, based on the Bezout bound or on mixed-volume computations, appeared to be intractable in our case, mainly because of the degeneracy of our algebraic systems: for example, for a 10^{th}-order filter, the Bezout bound is about 10^{44}, whereas the number of solutions over the ground field is known to be only 384. To overcome this difficulty we are currently developing a continuation method that explores the monodromy group of an algebraic variety by following a family of paths separating the branch points. This method is still under study, but preliminary numerical results, which yielded the exhaustive computation of the solution set in the latter 10^{th}-order case, are quite convincing. Using this method, we envisage building up a precomputed filter database that would allow fast computation of high-order filters for every specific filtering characteristic.
Results were also obtained about the existence of a ``real solution'' in this set in the case of lossless characteristics. Note that realness is essential in order to build the filter. For the 5^{th}-order coupling topology of the figure, it was shown that one can find an open set U of parameter values such that, for all p in U, the set contains no ``real'' element. Conversely it was shown, by an argument based on the Borsuk-Ulam antipodal theorem, that for lossless characteristics and the 6^{th}-order coupling topology of the figure there generically exists at least one ``real'' element.
More recently we considered the application of the latter results to identification procedures for tuning purposes. The aim of these ``de-embedding'' procedures is to gain information about the electromagnetic couplings implemented in a filter, starting from measurements in the frequency domain. After a rational approximation step yielding a rational model of the filter's response, one faces a constrained realization problem. In particular, when dealing with filters implementing coupling topologies with ``multiple solutions'' (the fiber of equivalent realizations contains several elements), some additional experiments have to be performed on the system in order to select the ``correct'' coupling matrix, i.e. the one that is physically realized by the filter. These experiments are typically of a differential nature and consist in studying the influence of the variation of a single physical parameter (iris length, for example) on the fiber of equivalent realizations. The discriminating power of such experiments is currently under study in connection with the practical problem of tuning a dual-band filter with a coupling topology whose ``realization fiber'' has, up to sign symmetries, cardinality 15.
The design of multiband responses for high-frequency filters amounts to solving an optimization problem of Zolotarev type: the response is optimized over a finite union of compact intervals I_{i} of the real line corresponding to the passbands, against a finite union of such intervals corresponding to the stopbands, the competitors ranging over P_{m}(K), the set of polynomials of degree less than m with coefficients in the field K. Depending on the physical symmetries of the filter, it is of interest to solve this problem in a ``real'', a ``mixed'', or a ``complex'' setting, according to the field of coefficients allowed. We have shown that the ``real'' Zolotarev problem can be decomposed into a sequence of concave maximization problems, the best solution of which yields the optimal solution to the original problem. A characterization in terms of an alternation property has also been given for the solution to each of these subproblems. Based on this alternation, a Remez-type algorithm has been derived. It computes the solutions to these problems in the polynomial case when the denominator q is fixed, and allows for the computation of a dual-band response according to frequency specifications (for instance, those of the spacecraft SPOT5 (CNES)). The design of an algorithm for the rational case that, unlike methods based on linear programming, avoids sampling in frequency is currently under study. This raises the question of the ``generic normality'' of the approximant with respect to the intervals' boundary values, which has not received a definite answer yet. Finally, the design of efficient numerical procedures to tackle the ``mixed'' and ``complex'' cases remains a challenging task. These matters will be pursued in V. Lunot's doctoral work.
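The alternation-driven computation can be illustrated on the simplest instance (a sketch for ordinary minimax polynomial approximation on a single interval, not the multi-band Zolotarev setting above): solve for a levelled error on m + 2 reference points, then exchange the references against extrema of the current error, exactly the equioscillation mechanism a Remez-type algorithm exploits.

```python
import numpy as np

def remez(f, m, a=-1.0, b=1.0, iters=20, gridsize=2000):
    """Basic Remez exchange: degree-m minimax polynomial approximation
    of f on [a, b], driven by the equioscillation (alternation)
    characterization.  Returns the polynomial and the levelled error h."""
    # initial reference: Chebyshev extremal points mapped to [a, b]
    x = (a + b) / 2 + (b - a) / 2 * np.cos(np.pi * np.arange(m + 2) / (m + 1))
    x = np.sort(x)
    grid = np.linspace(a, b, gridsize)
    for _ in range(iters):
        # solve  sum_j c_j x_i^j + (-1)^i h = f(x_i)  for c_0..c_m and h
        A = np.hstack([np.vander(x, m + 1, increasing=True),
                       ((-1.0) ** np.arange(m + 2))[:, None]])
        sol = np.linalg.solve(A, f(x))
        p, h = np.polynomial.Polynomial(sol[:m + 1]), sol[m + 1]
        # exchange: one extremum of the error per segment between references
        err = f(grid) - p(grid)
        bounds = np.concatenate(([a], (x[:-1] + x[1:]) / 2, [b]))
        new_x = []
        for i in range(m + 2):
            mask = (grid >= bounds[i]) & (grid <= bounds[i + 1])
            new_x.append(grid[mask][np.argmax(np.abs(err[mask]))])
        x = np.array(new_x)
    return p, h

# best quadratic approximation of exp on [-1, 1]
p, h = remez(np.exp, 2)
```

At convergence the error equioscillates: its maximum over the interval equals the levelled value |h| attained, with alternating signs, at the m + 2 reference points.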
An OMUX (Output MUltipleXor) can be modeled in the frequency domain by the scattering matrices of filters, like those described above, connected in parallel to a common waveguide. The problem of designing an OMUX that satisfies given gauge constraints translates naturally into a set of constraints on the values of the scattering matrices and on the phase shift introduced by the guide in the considered bandwidth.
An OMUX simulator on a Matlab platform was designed in recent years. This year it has been used to test some assumptions on the way an OMUX functions. Among them are that each right section of the OMUX acts as a short-circuit in the bandwidth of ``upstream'' channels, and that each channel must reject a little in its own bandwidth in order to trap energy otherwise reflected by this short-circuit. Under the terms of a contract with Alcatel Alenia Space, these assumptions will be used to design dedicated software to optimize OMUXes, by first optimizing a channel while the others are fixed and then looking for a fixed point over all channels.
The direct approach, currently used by the manufacturer, consists in coupling a simulator with a general-purpose optimizer in order to reduce transmission and reflection wherever they are too large. This yields unsatisfactory results in cases of high degree and narrow bandwidth, in particular because convergence often fails and multiple initial points must be used, resulting in a very lengthy and sometimes unsuccessful design. Besides, manifold peaks arising from the dilation of the cavities caused by increased temperature (when the satellite is exposed to the sun) can ruin the design in operational conditions.
As a result, we expect to be able to produce a multi-phase tuning procedure: first relaxed, channel after channel, then global, using a quasi-Newton method. Note that the discretization in frequency of the integral criterion and the near-periodicity of the exponentials (which express the delays) interact in a complex manner and generate numerous local minima. This is one reason for analysing the optimization problem further.
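The relaxed, channel-after-channel phase can be pictured as a block-relaxation scheme (a generic sketch on a coupled quadratic criterion, not the actual OMUX cost function): each channel variable is minimized exactly while the others are frozen, and sweeps are repeated until a fixed point is reached. On a quadratic this is precisely Gauss-Seidel relaxation.

```python
import numpy as np

def relax_channels(Q, b, x0, sweeps=100):
    """Channel-after-channel relaxation for the coupled criterion
    J(x) = 0.5 x^T Q x - b^T x: minimize exactly in one 'channel'
    variable at a time with the others frozen, then sweep until a
    fixed point.  For symmetric positive definite Q this is the
    Gauss-Seidel iteration and converges to the global optimum Qx = b."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(b)):
            # stationarity of J in coordinate i, others held fixed
            x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
    return x
```

The fixed point of the sweeps solves the coupled problem, which is the rationale for the relaxed phase preceding the global quasi-Newton phase.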
The results on the local regularity of trajectories in optimal control obtained previously have now been published. Further work in that direction, in particular on the behavior of switches in time-optimal control for three-dimensional systems, is in progress.
The study concerns the control of a satellite equipped with a low-thrust engine (such as plasma thrusters, which are efficient with respect to fuel consumption but deliver a thrust much smaller than conventional ``chemical'' engines, the ratio between the delivered acceleration and gravity being of the order of 10^{-3}, sometimes less). This problem was raised by Alcatel Alenia Space, and Alex Bombrun's PhD is supported under contract by this company.
For the transfer between two orbits (say GTO to GEO), we have pursued a detailed investigation of ad hoc Lyapunov functions, based on the five first integrals of the uncontrolled problem, and sought how to choose them so as to stay close to the time-optimal trajectories. In particular, we have shown that any open-loop trajectory can be approximated arbitrarily well (i.e., until we reach a prescribed neighborhood of the target) using a feedback control law of this kind.
This theoretical result is backed up by numerical practice. In fact, simulations illustrate results that are even less conservative: a rather restricted family of control Lyapunov functions (depending on a few parameters only) yields a nearly time-optimal transfer when the parameters are fitted adequately. Such numerical simulations also show that Lyapunov-based feedbacks give very satisfactory trajectories from many different initial conditions. These practical results are of great interest for satellite guidance because of the natural robustness properties of feedback control. Moreover, the easy implementation of such control laws makes them attractive compared to genuine optimal control. The deep reasons for this unexpectedly nice behaviour are still under investigation; this work will be reported in a publication.
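The flavour of such Lyapunov-based feedback can be conveyed by a toy planar analogue (a sketch only; the actual study works with the orbital elements and the five first integrals of the Kepler problem): a drift that preserves the ``orbit radius'' plays the role of the free dynamics, and a thrust of magnitude at most umax is chosen so that a Lyapunov function of the distance to the target orbit decreases.

```python
import numpy as np

def lyapunov_transfer(r0=1.0, r_target=2.0, umax=1e-2, dt=1e-2, tmax=500.0):
    """Toy low-thrust 'transfer': the uncontrolled drift is a rotation
    (it preserves |x|, mimicking the first integrals of the free
    problem); the feedback thrust, bounded by umax, makes the Lyapunov
    function V(x) = (|x|^2 - r_target^2)^2 / 2 decrease, steering the
    state from the circle of radius r0 to that of radius r_target."""
    c, s = np.cos(dt), np.sin(dt)
    R = np.array([[c, -s], [s, c]])     # exact one-step drift (rotation)
    x = np.array([r0, 0.0])
    t = 0.0
    while t < tmax:
        r = np.linalg.norm(x)
        if abs(r - r_target) < 1e-3:
            break
        # radial thrust with the sign that makes dV/dt <= 0
        u = -umax * np.sign(r**2 - r_target**2) * x / r
        x = R @ x + dt * u
        t += dt
    return x, t
```

As in the satellite problem, the transfer time scales like the orbit gap divided by the thrust bound, and the feedback form makes the scheme insensitive to the initial condition.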
Another line of research in progress concerns the use of feedback for the controlled three-body problem. Here again we investigate feedback rather than open-loop control. This is the topic of Jonathan Chetboun's internship within APICS. This activity is also supported under contract by Alcatel Alenia Space (Cannes), where the above-mentioned internship will partly take place.
Below we describe the achievements of D. Avanessoff's PhD, defended in June. First, the links and differences between ``Monge parameterization'' and ``flatness'' have been considerably clarified in this work. Second, tools have been designed for analyzing some overdetermined systems of PDEs where neither the number of independent variables nor the order is fixed a priori. They are based on a valuation adapted to the control system. The equations arising when characterizing the system's flatness involve a number of variables which is finite but not known a priori, so it is tempting to take formal power series in infinitely many variables as solutions. The above tools allow us to give a meaning to solutions in such formal power series. A notion of ``very'' formal integrability was introduced, meaning existence of solutions in this class. Obtaining a full characterization of flatness in this form is still in progress. However, some results for small dimensions were obtained. For systems with three states and two controls, a sufficient condition for flatness had been given previously. We have proved that this condition is also necessary for ``(x, u)-flatness'' (in the language of the above paragraph, a version of flatness where the number of variables to consider is decided in advance). Moreover, the previous proof using computer algebra was very intricate, so that going beyond ``(x, u)-flatness'' with the same method was out of reach. These systems have since been studied from the point of view of parameterization, and the new results supersede the old ones using much more natural arguments. The conjecture is that systems that do not satisfy the above-mentioned sufficient condition are not flat at all. This is not proved yet, but a workable formulation of the question is now available.
Controllability results for systems with drift are usually obtained by a combination of local and global properties of the system under study. Local controllability properties basically follow from the knowledge of the Lie bracket configuration of the system, while global ones require particular symmetries or some sort of ergodicity. A typical example is that of a left-invariant control system on a Lie group: classically, the homogeneity of the manifold given by the group structure is used to obtain global properties out of local ones.
The aim of this research line is to obtain controllability/non-controllability results for special but inhomogeneous drifted systems.
The main object of our research is given by Dubins-like systems on Riemannian surfaces. The goal is to answer, using control techniques, the following natural question arising from the works of Dubins: given a complete, connected, two-dimensional Riemannian manifold M and two points (p_{1}, v_{1}), (p_{2}, v_{2}) in the tangent bundle TM, does there exist a curve in M, with arbitrarily small geodesic curvature, that connects p_{1} to p_{2} and whose velocity, for i = 1, 2, is equal to v_{i} at p_{i}? The answer clearly depends on the geometric properties of M, and gives a meaning to such properties from a control viewpoint. We proved that the small-curvature connectedness introduced above holds for compact surfaces, for unbounded surfaces whose Gaussian curvature tends to zero at infinity, and for surfaces which are non-negatively curved outside a compact set. The case of non-positively curved surfaces was also addressed, and necessary and sufficient conditions ensuring such connectedness have been established and presented.
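The planar prototype behind this question is the classical Dubins car, whose curvature bound translates into a minimal turning radius 1/kappa (a sketch of the flat case only, unrelated to the Riemannian results themselves): the smaller the admissible curvature, the larger the region needed to change heading.

```python
import numpy as np

def turn(kappa, dt=1e-3):
    """Dubins car at unit speed: x' = cos(th), y' = sin(th), th' = u,
    with |u| <= kappa.  Turn left at maximal curvature until the heading
    has changed by pi/2; the path is an arc of radius 1/kappa, so the
    displacement scales like 1/kappa."""
    state = np.array([0.0, 0.0, 0.0])          # (x, y, heading)
    n_steps = int(round((np.pi / 2) / (kappa * dt)))
    for _ in range(n_steps):
        x, y, th = state
        state = np.array([x + dt * np.cos(th),
                          y + dt * np.sin(th),
                          th + dt * kappa])
    return state

# a quarter turn with kappa = 0.1 ends near (1/kappa, 1/kappa) = (10, 10)
final = turn(0.1)
```

On a curved surface the geodesic curvature bound plays the role of kappa, and whether such large-radius turns ``fit'' in the geometry is exactly what the connectedness results above decide.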
A different field of application for the analysis of controllability of inhomogeneous drifted systems is given by nonlinear switched systems. More precisely, given a switched system of the type

dq/dt = X(q) + u Y(q),    u in [-1, 1],    q in R^{2},

where the fields X + Y and X - Y are globally asymptotically stable, we study its stability properties (global uniform asymptotic stability, uniform stability, boundedness, ...) in terms of the topology of the set where X and Y are parallel. All such stability properties can be reinterpreted in terms of the behavior of attainable sets.
Contract n^{o} 04/CNES/1728/00DCT094
In the framework of a contract involving CNES, IRCOM and Inria, and whose objective is to work out a software package for identification and design of hyperfrequency devices, the work of Inria included:
the modeling of delays,
the exhaustive determination of the coupling coefficients on some case studies,
the OMUX simulator with exact computation of derivatives.
This contract has been renewed for 16 months starting November 2004, in order to develop a generic code for coupling determination and to carry out the optimization of OMUX.
Contract n^{o} 1 01 E 0726.
This contract started in 2001 and ended in February 2005. The topic is the design of control laws for satellites with low-thrust engines. It finances Alex Bombrun's PhD. It should be extended in 2006 with a new contract including the transfer of some prototype software.
L. Baratchart is a member of the editorial board of Computational Methods and Function Theory.
Together with the project-teams Caiman and Odyssée (INRIA Sophia Antipolis, ENPC), the University of Nice (J.A. Dieudonné lab.), CEA, CNRS-LENA (Paris), and a few French hospitals, we participate in the national action ACI Masse de données OBSCERV, 2003-2006 (inverse problems, EEG). C. Paduret received financial support from this ACI.
We were awarded a grant from the region PACA for exchanges with SISSA Trieste (Italy), 2003-2004.
The postdoctoral training of B. Atfeh and E. Sincich is funded by Inria.
The team is the recipient of a Marie Curie EIF (Intra European Fellowship), FP6-2002-Mobility 5502062, for 24 months (2003-2005). This has financed Mario Sigalotti's postdoc.
The Team is a member of the Marie Curie multi-partner training site Control Training Site, number HPMT-CT-2001-00278, 2001-2005. See http://www.supelec.fr/lss/CTS/.
The project is a member of the Working Group Control and System Theory of the ERCIM consortium, see http://www.ladseb.pd.cnr.it/control/ercim/control.html.
NATO CLG (Collaborative Linkage Grant), PST.CLG.979703, ``Constructive approximation and inverse diffusion problems'', with Vanderbilt Univ. (Nashville, USA) and LAMSIN-ENIT (Tunis, Tu.), 2003-2005.
EPSRC grant (EP/C004418) ``Constrained approximation in function spaces, with applications'', with Leeds Univ. (UK) and Univ. Lyon I, 2005-2006.
STIC-INRIA and Aire Développement grants with LAMSIN-ENIT (Tunis, Tu.), ``Problèmes inverses du Laplacien et approximation constructive des fonctions''.
NSF EMS21RTG student exchange program (with Vanderbilt University).
The following scientists gave a talk at the seminar:
David Avanessoff, Apics Team, Paramétrisation de l'ensemble des solutions d'un système de contrôle.
Yuliya Babenko, Vanderbilt University, USA, Kolmogorov type inequalities for some special classes of functions.
Vladimir Chetverikov, Baumann University, Moscow, Flat control systems and deformations of structures on diffieties.
José Grimm, Apics Team, SSH et X/Skey.
Ekaterina Iakovleva, LSS (Laboratoire des signaux et systèmes), Supelec, Diffraction inverse par des petites inclusions.
Philippe Lenoir, Apics Team, Développement de méthodes pour la synthèse de filtres complexes, dits ``à solutions multiples''.
Erwin MinaDiaz, Vanderbilt University, Orthogonal polynomials on the unit circle with respect to weights having polar singularities.
Peter J. Olver, University of Minnesota, USA, New algorithms for symmetry groups and pseudogroups.
Mihaly Petreczky, CWI (Amsterdam), Realization theory for hybrid systems.
Witold Respondek, Laboratoire de Mathématiques, INSA de Rouen, Canonical form, strict feedforward form and symmetries of nonlinear control systems.
Maxim Yattselev, Vanderbilt University, Meromorphic and multipoint Padé approximants for complex Cauchy transforms with polar singularities.
El Hassan Youssfi, Université de Provence, Fonctions holomorphes de type positif.
Igor Zelenko, SISSA, Trieste, Italy, A canonical frame for nonholonomic rank two distributions of maximal class.
Jonathan Partington, School of Mathematics, Leeds Univ., U.K.,
Karim Kellay, Stanislas Kupin, Stéphane Rigat, Hassan Youssfi, and the Analysis and Geometry team, LATP-CMI, Université de Provence,
Moncef Mahjoub, Lamsin-ENIT, Tunisia,
Pierre Rouchon, Centre Automatique et Systèmes, Ecole des Mines de Paris,
Edward B. Saff, Dept. of Mathematics, Vanderbilt University, USA,
Abdellatif El Badia, UTC Compiègne,
Yuliya Babenko, PhD student, Dept. of Mathematics, Vanderbilt University.
Ugo Boscain, SISSA, Italy,
Grégoire Charlot, University of Montpellier II,
L. Baratchart, DEA Géométrie et Analyse, LATP-CMI, Univ. de Provence (Marseille), and graduate program of the University of Cyprus in Nicosia (April).
M. Olivi, Mathématiques pour l'ingénieur (Fourier analysis and integration), section Mathématiques Appliquées et Modélisation, 1ère année, Ecole Polytechnique de l'Université de Nice.
Jonathan Chetboun (ENPC)
Cristina Paduret (Mémoire de Master 3 de Mathématiques, Université de Provence, Aix-Marseille I), Résolution de problèmes inverses de source dans des domaines paramétrés en dimension 3 par approximation méromorphe.
Alex Bombrun, « Commande optimale, feedback, et transfert orbital de satellites » (optimal control, feedback, and orbit transfer for low-thrust satellites).
Imen Fellah, ``Data completion in Hardy classes and applications to inverse problems'', cotutelle with LamsinENIT (Tunis).
Vincent Lunot, « Problèmes d'approximation fréquentielle et application à la synthèse d'OMUX ».
Moncef Mahjoub, ``Complétion de données et ses applications à la détermination de défauts géométriques'', cotutelle with Lamsin-ENIT (Tunis).
Erwin Mina Diaz, ``Asymptotic properties of orthogonal polynomials over regions and curves.''
Maxim Yattselev, ``Meromorphic approximation and nonhermitian orthogonality.''
David Avanessoff, « Linéarisation dynamique des systèmes non linéaires et paramétrage de l'ensemble des solutions » (dynamic linearization of non linear control systems, and parameterization of all trajectories). June 8, 2005.
L. Baratchart sat on the PhD defence committees of Florence Scalas (Univ. de Provence, December) and D. Baranov (Univ. de Bordeaux, June), and on the committee for the Habilitation of A. Borishev (Univ. de Bordeaux, June).
J. Leblond sat on the committee for the habilitation of Slim Chaabane (LAMSIN-ENIT, Univ. Tunis II, April), and on those for the PhD theses of Bénédicte Dujardin and Jean-Gabriel Ramspacher (UNSA, December).
F. Seyfert sat on the PhD defence committee of Philippe Lenoir (IRCOM).
J.B. Pomet sat on the PhD defence committee of Vincent Andrieu (Ecole des Mines de Paris, December).
M. Sigalotti was in charge of organizing the seminar on control and identification.
L. Baratchart was a member of the ``bureau'' of the CP (Comité des Projets) of INRIA Sophia Antipolis until July. He is a member of the ``commission de spécialistes'' (section 25) of the Université de Provence.
J. Grimm is a representative at the ``comité de centre''. He is a member of the organising committee of PICOF 2006 (``Inverse Problems, Control, and Shape Optimization'').
J. Leblond has been a member (suppléant) of the ``Commission d'évaluation'' of INRIA since September. She has been involved in the working group in charge, for the ``Comité des Projets'', of examining the proposal for the creation of the team Asclepios. She participates in the working group ``Doc''. She has been a member of the admissibility jury for CR2 researchers at INRIA Lorraine. She is a member of the scientific committee of PICOF 2006.
J. Leblond and J. Grimm are co-editors of the proceedings (to appear in 2006) of the CNRS-INRIA summer school ``Harmonic analysis and rational approximation: their rôles in signals, control and dynamical systems theory'' (Porquerolles, 2003), http://wwwsop.inria.fr/apics/anap03/index.en.html .
M. Olivi is a member of the CSD (Comité de Suivi Doctoral) of the Research Unit of Sophia Antipolis.
F. Seyfert is a member of the CDL (Comité de développement logiciel) of the Research Unit of Sophia Antipolis.
A. Bombrun, B. Atfeh and L. Baratchart presented communications at CMFT2005 (Computational Methods and Function Theory), Joensuu, Finland (June).
L. Baratchart presented a communication at the 22nd IFIP TC 7 Conference on System Modeling and Optimization, Turin, Italy (July).
L. Baratchart was an invited speaker at the ``Journées d'Analyse Fonctionnelle'' of the University of Bordeaux (June).
J. Grimm gave a talk about SSH and security at Inria.
J. Leblond was invited to give a plenary talk at WIP2005 (Workshop on Inverse Problems, Marseille, Luminy, December) and a communication at the 22nd IFIP TC 7 Conference on System Modeling and Optimization, Turin, Italy (July). She gave communications at the annual ERNSI workshop, Louvain-la-Neuve (September), at the annual workshop of the ACI ``ObsCerv'', Orsay (October), and at seminars (APICS-LATP, ONDES-Poems).
V. Lunot gave a talk at the analysis seminar of Vanderbilt University.
M. Mahjoub gave a communication at TamTam'05 (Tendances dans les Applications Mathématiques en Tunisie, Algérie, Maroc; Tunis, April) and at the 5th International Conference on Inverse Problems in Engineering: Theory and Practice (Cambridge, UK, July).
I. Fellah gave a communication at TamTam'05.
M. Clerc presented a poster at ISBET2005 (Bern, Switzerland, October).
B. Atfeh gave a talk at the EEG minisymposium of the Workshop on Optimization in Medicine, Coimbra, Portugal (July).
Mario Sigalotti gave a talk at the 22nd IFIP TC 7 Conference on System Modeling and Optimization (Turin, Italy, July).
M. Olivi gave a talk at CDC 2005, Seville, Spain, 12-15 December.
F. Seyfert gave a talk at IMS 2005 in Los Angeles (``Coupling Matrix Synthesis for a New Class of Microwave Filter Configuration'') and at the ERNSI workshop in Brussels (``Design and identification of algebraically parametrized linear dynamical systems'').