The Apics Team has been a Project Team since January 2005.
The Team develops constructive methods for modeling, identification and control of dynamical systems.
Function theory and approximation theory in the complex domain, with applications to frequency identification and design of transfer functions, as well as 2D inverse boundary problems for the Laplace and Beltrami operators. Development of software for filter identification and the synthesis of microwave devices.
Inverse potential problems in 3D and analysis of harmonic fields with applications to source detection and electroencephalography.
Control and structure analysis of nonlinear systems: continuous stabilization, linearization, and near optimal control with applications to orbit transfer of satellites.
Industrial collaborations with Alcatel Alenia Space (Toulouse and Cannes), Temex (Sophia-Antipolis), Thales AS (Paris), CNES (Toulouse), XLim (Limoges).
Exchanges with UST (Villeneuve d'Asq), University Bordeaux I (Talence), University Marseille I (CMI), CWI (the Netherlands), CNR (Italy), SISSA (Italy), the Universities of Illinois (Urbana-Champaign, USA), California at San Diego and Santa Barbara (USA), Michigan at East Lansing (USA), Vanderbilt University (Nashville, USA), Texas A&M (College Station, USA), ISIB (Padova, Italy), Beer Sheva (Israel), Leeds (UK), Maastricht and Amsterdam (The Netherlands), TU Wien (Austria), TFH Berlin (Germany), Kingston (Canada), Szeged (Hungary), CINVESTAV (Mexico), ENIT (Tunis), VUB (Belgium), KTH (Stockholm).
The project is involved in an EMS21-RTG NSF program (with Vanderbilt University), in the ACI ``ObsCerv'' (with the Teams Caiman and Odyssée from INRIA Sophia Antipolis, among others), in the ARC ``Sila'' (with XLim and the SALSA project at INRIA Rocquencourt), in a STIC Convention between INRIA and Tunisian Universities, in an EPSRC Grant with Leeds University (UK), in the ERCIM Working Group ``Control and Systems Theory'', in the ERNSI and TMR-NCN European networks, and in a Marie Curie EIF European program.
Let us first introduce the subject of Identification in some generality.
Modeling is the process of abstracting the behavior of a phenomenon in terms of mathematical equations. It typically serves two purposes: the first is to describe the phenomenon with minimal complexity for some specific purpose, the second is to predict its outcome. It is used in most applied sciences, be it for design, control or prediction. However, it is seldom considered as an issue per se, and today it is usually embedded in some global ``optimization'' loop.
As a general rule, the user devises a model to fit a parameterized form that reflects his own prejudice, his knowledge of the underlying physical system, and the algorithmic effort he is willing to pay. Such a trade-off usually leads to approximating the experimental data by the prediction of the model when subject to the external excitations assumed to cause the phenomenon under study. The ability to solve this approximation problem, which is often nontrivial and ill-posed, impinges on the practical use of a given method.
It is when assessing the predictive power of a model that one is led to postulate the existence of a true functional correspondence between data and observations, thereby entering the field of identification proper. The predictive power of a model can be expressed in various manners, all of which attempt to measure the difference between the ``true system'' and the observations. The necessity of taking into account the discrepancy between the observed behavior and the computed behavior naturally induces the notion of noise as a corrupting agent of the identification process. This way the noise is incorporated into the model, and can subsequently be handled either in a deterministic or a stochastic fashion. In deterministic mode, the quality of an identification algorithm rests with its robustness to small errors. This leads to the notion of well-posedness in numerical analysis and of stability of motion in mechanics. However, the noise is most often considered to be random, and then the ``true'' model is estimated by averaging the data. This notion allows for a simplified description of complex systems whose underlying mechanisms are not precisely known but plausibly antagonistic. Note that, in any case, some assumptions on the noise are required in order to justify the approach (it has to be small in the deterministic case, and must satisfy some independence and ergodicity properties in the stochastic case). These assumptions can hardly be checked in practice, so that the satisfaction of the end-user is the final criterion.
Hypothesizing an exact model also results in the possibility of choosing the data in a manner suited for identifying a specific phenomenon. This often interacts in a complex manner with the local character of the model with respect to the data (for instance, a linear model is only valid in a neighborhood of a point).
Although identification, from a theoretical perspective, has mostly been the realm of the stochastic paradigm for more than twenty-five years, the Apics team rather develops a deterministic approach to one-dimensional deconvolution (i.e. the identification of linear dynamical systems) which is based on approximating the Fourier-Laplace transform in the complex domain. Of course, the deep links stressed by the spectral theorem between time and frequency domains allow one to partly recast such a framework in a stochastic context. However, the present approach translates the problem of identification into an inverse boundary-value problem, namely the reconstruction from (usually partial) boundary data of an analytic function in a prescribed domain of the complex plane. One feature of this point of view is that it extends to other elliptic partial differential equations, most naturally to the Laplace and complex Beltrami equations. Beyond these primary examples, some known properties of analytic functions used in the approach still need to be suitably generalized, and a fair portion of the team's research in inverse problems is currently devoted to such issues for the real Beltrami equation in dimension 2 and the Laplace equation in dimension 3 (see section ).
A prototypical example that illustrates the approach is the harmonic identification of dynamical systems, which is widely used in engineering practice. Here, the data are the responses of the system to periodic excitations in its bandwidth. We look for a stable linear model that accounts for these data in the bandwidth, while no data are available at high frequencies (which can seldom be measured). In most cases, we want the model to be rational of suitable degree, either because this is imposed by the significance of the parameters or because complexity must remain reasonably low. Other structural constraints, arising from the physics of the phenomenon under study, are often superimposed on the model. Note that, in this approach, no statistics are used for the errors, which can be due to corrupted measurements and to the limited validity of the linearity assumption.
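As a toy illustration of harmonic identification (an illustrative sketch only, not the team's actual algorithms; the system, bandwidth, and perturbation below are invented for the example), one can fit a stable first-order model to frequency-response samples that are available on a bandwidth only:

```python
import numpy as np

# Samples of a frequency response on the measured bandwidth, plus a small
# deterministic perturbation standing in for measurement error.
w = np.linspace(0.0, 2.0, 40)
data = 1.0 / (1j * w + 1.0) + 0.01 * np.cos(7 * w)

# Fit g(s) = c / (s + a), a > 0 (stability), by scanning stable poles and
# computing the optimal complex gain c in closed form for each candidate.
best = None
for a in np.linspace(0.05, 3.0, 600):
    basis = 1.0 / (1j * w + a)
    c = (basis.conj() @ data) / (basis.conj() @ basis)
    res = np.linalg.norm(data - c * basis)
    if best is None or res < best[0]:
        best = (res, a, c)
res, a, c = best
print("pole at", -a, "gain", c, "residual", res)
```

The recovered pole lands close to the true one despite the perturbation, while nothing at all is asked of the model outside the measured band; this is precisely the gap that the bounded extremal problems below address.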
We distinguish between an identification step (called non-parametric in a certain terminology) associated with an infinite-dimensional model, and an approximation step in which the order is reduced according to specific constraints on the considered system. The first step typically consists, mathematically speaking, in reconstructing a function, analytic in the right half-plane, knowing its pointwise values on a portion of the imaginary axis. In other terms, the problem is to make the principle of analytic continuation effective on the boundary of the analyticity domain. This is a classical ill-posed issue (the inverse Cauchy problem for the Laplace equation) that we embed into a family of well-posed extremal problems, which may be viewed as a Tikhonov-like regularization scheme related to the spectral theory of analytic operators.
The second step is typically a rational or meromorphic approximation procedure in certain classes of analytic functions on a simply connected domain, say the right half-plane in the case of harmonic identification. To make the best possible use of the allotted parameters, it is generally important in this second step to compute optimal or nearly optimal approximants. Rational approximation in the complex plane is a classical and difficult problem, for which only few effective methods exist. In relation to system theory, two main difficulties arise: the necessity of controlling the poles of the approximants (to ensure the stability of the model), and the need to handle matrix-valued functions when the system has several inputs and outputs. Moreover, for some inverse problems, the behavior of the poles of best approximants to certain functions constructed from the observations becomes an estimator of the singularities to be detected. This point receives much attention within the team's research.
Concerning this second step, it is worth pointing out that the analogs of rational functions in higher dimensions are the gradients of Newtonian potentials of discrete measures. Very little is known at present on the approximation-theoretic properties of such objects, and a recent endeavor of the project is to study them in the prototypical (though somewhat particular) case of a spherical geometry.
We deal with the above steps in more detail in the subparagraphs to come. For convenience, we explain them on the circle rather than the line, which is the framework for discrete-time rather than continuous-time systems.
The title refers to the construction of a convolution model of infinite dimension from frequency data in some bandwidth and some reference gauge outside it. The class of models consists of stable transfer functions (i.e., analytic in the domain of stability, be it the half-plane, the disk, etc.), and also of transfer functions with finitely many poles in the domain of stability, i.e., convolution operators corresponding to linear differential or difference equations with finitely many unstable modes. This issue arises in particular for the design and identification of linear dynamical systems, and in some inverse problems for the Laplacian in dimension two.
Since the question under study may occur on the boundary of planar domains with various shapes when it comes to inverse problems, it is common practice to normalize this boundary once and for all, and to apply in each particular case a conformal transformation to recover the normalized situation. The normalized contour chosen here is the unit circle. We denote by $D$ the unit disk, by $H^p$ the Hardy space of exponent $p$ (i.e. the closure of polynomials in the $L^p$ norm on the circle if $1 \le p < \infty$, and the space of bounded holomorphic functions if $p = \infty$), by $R_N$ the set of all rational functions having at most $N$ poles in $D$, and by $C(X)$ the set of continuous functions on a space $X$. We are looking for a function in $H^p + R_N$, taking on an arc $K$ of the unit circle values that are close to some experimental data, and satisfying on the complementary arc $T \setminus K$ some gauge constraints, so that a prototypical Problem is:
$(P)$ Let $p \ge 1$, $N \ge 0$, $K$ be an arc of the unit circle $T$, $f \in L^p(K)$, and $M > 0$; find a function $g \in H^p + R_N$ such that $\|g\|_{L^p(T \setminus K)} \le M$ and such that $g - f$ is of minimal norm in $L^p(K)$ under this constraint.
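In the Hilbertian case $p = 2$, $N = 0$, a discretized version of Problem $(P)$ can be sketched as a constrained least-squares problem over truncated Taylor coefficients, the gauge constraint being enforced through a Lagrange parameter tuned by bisection (a minimal sketch under invented discretization choices, not the team's software):

```python
import numpy as np

def solve_bep(f_on_K, theta, K_mask, degree, M):
    """Discretized bounded extremal problem (p = 2, N = 0): minimize
    ||g - f||_{L2(K)} over analytic polynomials g of the given degree
    (a crude stand-in for H^2), subject to ||g||_{L2(T\\K)} <= M.
    The Lagrange parameter lam is tuned by bisection on the gauge."""
    z = np.exp(1j * theta)
    w = np.sqrt(2 * np.pi / len(theta))            # L2 quadrature weight
    V = w * np.vander(z, degree + 1, increasing=True)
    A, B = V[K_mask], V[~K_mask]
    b = w * f_on_K

    def coeffs(lam):
        G = A.conj().T @ A + lam * (B.conj().T @ B)
        return np.linalg.solve(G, A.conj().T @ b)

    lam_lo, lam_hi = 1e-9, 1e9
    if np.linalg.norm(B @ coeffs(lam_lo)) <= M:    # constraint inactive
        c = coeffs(lam_lo)
    else:
        for _ in range(80):                        # bisection on log(lam)
            lam = np.sqrt(lam_lo * lam_hi)
            if np.linalg.norm(B @ coeffs(lam)) > M:
                lam_lo = lam
            else:
                lam_hi = lam
        c = coeffs(lam_hi)
    return c, np.linalg.norm(A @ c - b), np.linalg.norm(B @ c)

theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
K_mask = np.cos(theta) > 0                         # K = right half-circle
f = np.exp(-1j * theta[K_mask])                    # not the trace of an H^2 function
for M in (0.5, 2.0):
    _, err, gauge = solve_bep(f, theta, K_mask, degree=25, M=M)
    print("M =", M, "error on K:", err, "norm on T \\ K:", gauge)
```

Relaxing the gauge level $M$ enlarges the feasible set, so the approximation error on $K$ can only decrease; this trade-off is quantified later in the text.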
In order to impose pointwise constraints in the frequency domain (for instance if the considered models are transfer functions of lossless systems, see section ), one may wish to express the gauge constraint on $T \setminus K$ in a more subtle manner, depending on the frequency, which gives a pointwise-constrained version of the Problem:

Let $p \ge 1$, $N \ge 0$, $K$ be an arc of the unit circle $T$, $f \in L^p(K)$, and $M$ a nonnegative function on $T \setminus K$; find a function $g \in H^p + R_N$ such that $|g| \le M$ a.e. on $T \setminus K$ and such that $g - f$ is of minimal norm in $L^p(K)$ under this constraint.
Problem $(P)$ is an extension to the meromorphic case, and to incomplete data, of classical analytic extremal problems (obtained by setting $K = T$ and $N = 0$), which generically go under the name of bounded extremal problems. These have been introduced and intensively studied by the Team, distinguishing the case $p = \infty$ from the cases $1 \le p < \infty$, among which the case $p = 2$ presents an unexpected link with the Carleman reconstruction formulas.
Deeply linked with Problem $(P)$, and meaningful for assessing the validity of the linear approximation in the considered passband, is the following completion Problem:

Let $p \ge 1$, $N \ge 0$, $K$ an arc of the unit circle $T$, $f \in L^p(K)$, and $M > 0$; find a function $h \in L^p(T \setminus K)$ such that $\|h\|_{L^p(T \setminus K)} \le M$, and such that the distance to $H^p + R_N$ of the concatenated function $f \vee h$ (equal to $f$ on $K$ and to $h$ on $T \setminus K$) is minimal in $L^p(T)$ under this constraint.
A version of this problem where the constraint depends on the frequency is:

Let $p \ge 1$, $N \ge 0$, $K$ an arc of the unit circle $T$, $f \in L^p(K)$, and $M$ a nonnegative function on $T \setminus K$; find a function $h$ such that $|h| \le M$ a.e. on $T \setminus K$, and such that the distance to $H^p + R_N$ of the concatenated function $f \vee h$ is minimal in $L^p(T)$ under this constraint.
Let us mention that the completion Problem reduces to Problem $(P)$, which in turn reduces, although implicitly, to an extremal Problem without constraint (i.e., a Problem of type $(P)$ where $K = T$) that is denoted conventionally by $(P_0)$. In the case where $p = \infty$, the completion Problems can be viewed as special cases of $(P)$ and of its pointwise-constrained version respectively, but if $p < \infty$ the situation is different. One can also choose different exponents $p$ on $K$ and $T \setminus K$ (the Problem is then said to be of mixed type). This comes up naturally when identifying lossless systems, for which the constraint $|h| \le 1$ must hold at each point while the data, whose signal-to-noise ratio is small at the endpoints of the bandwidth, are better approximated in the $L^2$ sense. It is perhaps non-intuitive that all these problems have in general no solution when no constraint is imposed on $T \setminus K$ (that is, if $M = +\infty$). For instance, considering the completion Problem, a function given by its trace on a subset $K$ of positive measure on the unit circle can always be extended in such a manner that it is arbitrarily close, on $K$, to a function analytic in the disk; however, it goes to infinity in norm on $T \setminus K$ when the approximation error goes to zero, unless we are in the ideal case where the initial data are exactly the trace on $K$ of an analytic function. This phenomenon illustrates the ill-posedness of analytic continuation from the boundary of the analyticity domain, which is germane to the well-known instability of the Cauchy problem for the Laplace equation.
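This blow-up is easy to observe numerically. In the sketch below (an illustration with invented data), the trace on $K$ of $1/z$, which is analytic outside the disk but not inside, is fitted in the least-squares sense by analytic polynomials of increasing degree: the error on $K$ decreases while the norm on $T \setminus K$ grows.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
K = np.cos(theta) > 0                      # K = right half of the unit circle
z = np.exp(1j * theta)
f = 1.0 / z[K]                             # trace on K of 1/z (not in H^2)

errs, blowups = [], []
for d in (4, 8, 16):
    V = np.vander(z, d + 1, increasing=True)      # analytic polynomials
    c, *_ = np.linalg.lstsq(V[K], f, rcond=None)  # unconstrained fit on K
    errs.append(np.linalg.norm(V[K] @ c - f))     # fit error on K
    blowups.append(np.linalg.norm(V[~K] @ c))     # size on the complement
print("errors on K:", errs)
print("norms on T \\ K:", blowups)
```

No gauge is imposed here, which is exactly the situation where the problems above admit no solution in the limit; the constraint $M$ is what restores well-posedness.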
The solution to $(P_0)$ is classical if $p = \infty$: it is given by the Adamjan-Arov-Krein (in short: AAK) theory. If $p = 2$ and $N = 0$, then $(P_0)$ reduces to an orthogonal projection. AAK theory plays an important role in showing the existence and uniqueness of the solution to the completion Problem when $p = \infty$, under the assumption that the concatenated function $f \vee h$ belongs to $C(T)$, and in the computation of this solution by iteratively solving a spectral problem relative to a family of Hankel operators whose symbols depend implicitly on the data. The robust convergence of this algorithm in separable Hölder-Zygmund classes has been established. In the Hilbertian case $p = 2$, again for $N = 0$, the solution of $(P)$ is obtained by solving a spectral equation, this time for a Toeplitz operator, depending linearly on a parameter that plays the role of a Lagrange multiplier and makes the dependence of the solution on $M$ implicit. The ill-posed character of analytic continuation is to the effect that, if the data are not exactly analytic, the approximation error on $K$ tends to 0 if, and only if, the constraint $M$ on $T \setminus K$ goes to infinity. This phenomenon can be quantified in Sobolev or meromorphic classes of functions $f$, and asymptotic estimates of the behavior of $M$ and of the error can be obtained, based on a constructive diagonalization scheme for Toeplitz operators due to Rosenblum and Rovnyak that makes the spectral theorem effective. These results indicate that the error decreases much faster, as $M$ increases, if the data have a holomorphic extension to a neighborhood of the unit disk, which is conceptually interesting for discriminating between nearly analytic data and data that are not close to a linear stable model. From the constructive viewpoint, we face the problem of representing functions through expansions that are specifically adapted to the underlying geometry, for instance rational bases whose poles cluster at the endpoints of $K$. Research in this direction is in its infancy.
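The AAK connection between best approximation errors and Hankel singular values can be checked numerically on a toy symbol (a hand-picked rational example; the truncation size and the function are invented for the demo). For $f(z) = \sum_{k \ge 1} 2^{-k} z^{-k} = 1/(2z - 1)$, which has a single pole at $z = 1/2$ inside the disk, the Hankel matrix has rank one: the distance from $f$ to $H^\infty$ is the top singular value $2/3$, and the distance to $H^\infty + R_1$ is the second one, namely $0$.

```python
import numpy as np

n = 40                                   # truncation size for the Hankel matrix
a = 0.5 ** np.arange(1, 2 * n)           # a_m = 2^{-m}: Fourier coefficients of
                                         # the anti-analytic part of f
H = np.array([[a[j + k] for k in range(n)] for j in range(n)])
s = np.linalg.svd(H, compute_uv=False)
print("first singular values:", s[:3])   # close to 2/3, 0, 0
```

Here the truncation is exact to machine precision because the coefficients decay geometrically; for general symbols, the same singular values govern the errors in meromorphic approximation.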
The pointwise-constrained Problem has recently been solved in the case where $p = 2$ (with $N = 0$), which encompasses all mixed problems where the exponent on $T \setminus K$ is greater than 2. It turns out that the solution exists and is unique, and that the constraint is saturated pointwise, that is, $|g| = M$ a.e. on $T \setminus K$, unless $f$ is the trace on $K$ of an $H^2$ function satisfying the constraint; the latter fact is perhaps counter-intuitive. Although non-smooth, this infinite-dimensional convex problem has a critical point equation and solves a min-max equation where the multiplier is a function on $T \setminus K$. The solution can be expressed in terms of the multiplier through a normalized Cauchy transform of Carleman type. The case when $N \neq 0$ is more delicate in that conditions have to be put jointly on $f$ and $M$ for a solution to exist. More details on an algorithmic approach and pending questions can be found in section .
Smoothness issues in Problem $(P)$ and its pointwise-constrained version are both delicate and important in practice. In fact, the solution to such problems is bound to be rather irregular at the endpoints of $K$ unless $M$ is adjusted to $f$; sufficient conditions for smoothness form a topic of current research.
Let us also emphasize that $(P)$ has many analogs, equally interesting, that occur in different contexts connected to conjugate functions. For instance, one may consider the following extremal problem, where the constraint on the approximant is expressed in terms of the real and imaginary parts while the criterion takes only its real part into account:

Let $p \ge 1$, $K$ be an arc of the unit circle $T$, $f \in L^p(K)$, and $M > 0$; find a function $g \in H^p$ whose real and imaginary parts satisfy a gauge constraint of level $M$ on $T \setminus K$, and such that $\mathrm{Re}\, g - f$ is of minimal norm in $L^p(K)$ under this constraint.
This yields a natural formulation of issues concerning the Dirichlet-Neumann problem for the Laplace operator (see sections and ), where the data and the physical prior information bear on the real and imaginary parts of the analytic function to be recovered. For $p = 2$, existence and uniqueness of a solution have been established, as well as a constructive procedure which, in addition to the Toeplitz operator that characterizes the solution of $(P)$ in the case $p = 2$ and $N = 0$, also involves a Hankel operator (this extends earlier results).
In the non-Hilbertian case, where $p \neq 2$ but still $N = 0$, the solution of $(P)$ can be deduced from that of $(P_0)$ in a manner analogous to the case $p = 2$, though the situation is more involved as regards duality, because one remains in a convex setup (infinite-dimensional, of course), for which local optimization methods can be applied.

Up to now, if $p < \infty$ and $N > 0$, no demonstrably convergent solution to Problem $(P_0)$ is available. This is due to the fact that the problem may display several local minima. However, a coherent picture has emerged and rather efficient numerical schemes have been devised, although their convergence has only been established for prototypical classes of functions. The essential features of the approach are summarized below.
First of all, the case $p = 2$ and $N > 0$ of Problem $(P_0)$, which is of particular importance, reduces to rational approximation, as described in more detail in section . Here, the link with classical interpolation theory, orthogonal polynomials, and logarithmic potentials is strong and fruitful. Second, a general AAK theory in $L^p$ has been proposed which is relatively complete for $p \ge 2$. Although it does not have, for $p \neq \infty$, the computational power of the classical theory, it has better continuity properties and stresses a continuous link between rational approximation in $H^2$ (see section ) and meromorphic approximation in the uniform norm, allowing one to use, in either context, the techniques available from the other. Hence, similarly to the case $p = \infty$, the best meromorphic approximation with at most $n$ poles in the disk of a function $f \in L^p(T)$ is obtained from the singular vectors of the Hankel operator of symbol $f$ between the spaces $H^s$ and $H^2$ with $1/s + 1/p = 1/2$, the error being here again equal to the $(n+1)$st singular number of the operator. This generalization has a strong topological flavor and relies on the critical point theory of Ljusternik-Schnirelman as well as on the particular geometry of the Blaschke products of given degree. A matrix-valued version is currently being studied along the same lines. A noticeable feature common to all these problems is the following: the critical point equations express non-Hermitian orthogonality of the denominator (i.e., the polynomial whose zeroes are the poles of the approximant) against polynomials of lower degree, for a complex measure that depends, however, on this denominator (because the problem is nonlinear). This allows one to:

1. extend the index theorem to the case $2 \le p \le \infty$ in order to approach the problem of uniqueness of a local minimum,
2. characterize the asymptotic behavior of the poles of the approximants for functions with connected singularities that are of particular interest for inverse problems (cf. section ),
3. study asymptotic errors with classical techniques of potential theory, which yield estimates to be used in item 1.
In connection with the second and third items above, there are two types of asymptotics, namely weak and strong ones. Weak asymptotics are beginning to be reasonably understood for functions with branched singularities. Strong asymptotics for non-Hermitian orthogonality relations have only been obtained recently in some particular cases, see section .
In light of these results, and despite the fact that many questions remain open, algorithmic progress is expected concerning $(P_0)$ for $N > 0$ and $p \ge 2$ in the forthcoming years. Subsequently, it is conceivable that the transition from $(P_0)$ to $(P)$ would follow the same lines as in the analytic case.
The case where $1 \le p < 2$ remains largely open, especially from the constructive point of view because, although the approximation error can still be interpreted in terms of singular values, the Hankel operator takes an abstract form which does not lead to a functional identification of its singular vectors. This is unfortunate, as this range of values of $p$ is quite interesting: for instance, the $L^1$ criterion induces the operator norm in the frequency domain, which is interesting for damping perturbations. It is plausible that appropriate dualities relate the range $1 \le p < 2$ to the range $2 < p \le \infty$, although this has not yet been established.
A valuable endeavor is to extend to higher dimensions (in particular to 3D) parts of the above analysis, where harmonic fields replace analytic functions. On the ball or the half-space, it seems that many of the necessary ingredients are available after the development of real Hardy space theory from harmonic analysis, with the notable exception of multiplicative techniques, which are unfortunately essential to define Hankel operators. Any progress on these multiplicative aspects would yield corresponding progress in harmonic identification and its use in elliptic inverse problems. Some recent research developments within the team aim in this direction (see section ). Similarly, generalizing what precedes to the real Beltrami operator in 2D is a natural issue with potentially important applications (see section ). There, the basic characterization and density properties of traces of solutions on the boundary have only recently been established.
Rational approximation is the second step mentioned in section , and we first consider it in the scalar case, that is, for complex-valued functions (as opposed to matrix-valued ones). The Problem can be stated as:

Let $1 \le p \le \infty$, $f \in H^p$ and $n$ an integer; find a rational function, without poles in the unit disk and of degree at most $n$, that is nearest possible to $f$ in $H^p$.
The most important values of $p$, as indicated in the introduction, are $p = \infty$ and $p = 2$. In the latter case, the orthogonality between the Hardy spaces of the disk and of the complement of the disk (the latter being restricted to functions that vanish at infinity, to exclude the constants) makes rational approximation equivalent to meromorphic approximation, i.e., we are back to Problem $(P)$ of section with $p = 2$ and $K = T$. Although no demonstrably convergent algorithm is known for a single value of $p$, the former Miaou project (the predecessor of Apics) has designed a steepest-descent algorithm for the case $p = 2$ whose convergence to a local minimum is guaranteed in theory, and it is the first procedure satisfying this property. Roughly speaking, it is a gradient algorithm, proceeding recursively with respect to the order $n$ of the approximant, that uses the particular geometry of the problem in order to restrict the search to a compact region of the parameter space. This algorithm can generate several local minima if several exist, thus allowing one to discriminate between them. If there is no local maximum, a property which is satisfied when the degree is large enough, every local minimum can be obtained from an initial condition of lower order. It is not proved, however, that the absolute minimum can always be obtained using the strategy of the hyperion or RARL2 software (see section ), which consists in choosing the collection of initial points corresponding to critical points of lower degree; note that we do not know of a counterexample either, still assuming that there is no local maximum, so there is room for a conjecture at this point.
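The flavor of such descent methods can be conveyed by a much simplified sketch (invented toy data; degree 1, real coefficients; a concentrated criterion where the linear gain is eliminated by least squares, followed by a one-dimensional search over the stable pole parameter; this is only an analog, not the RARL2 algorithm):

```python
import numpy as np

kk = np.arange(400)
f = 1.0 / (kk + 1)                    # Taylor coefficients of an H^2 function

def err(a):
    """H^2 error for the degree-1 model c/(1 - a z): the optimal gain c
    is computed in closed form for each stable pole parameter |a| < 1."""
    basis = a ** kk                    # Taylor coefficients of 1/(1 - a z)
    c = (f @ basis) / (basis @ basis)
    return np.sqrt(np.sum((f - c * basis) ** 2))

# coarse global scan, then golden-section refinement near the best grid point
grid = np.linspace(-0.99, 0.99, 199)
a0 = grid[np.argmin([err(a) for a in grid])]
lo, hi = a0 - 0.01, a0 + 0.01
phi = (np.sqrt(5.0) - 1) / 2
for _ in range(60):
    m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if err(m1) < err(m2):
        hi = m2
    else:
        lo = m1
a_best = 0.5 * (lo + hi)
print("pole parameter:", a_best, "error:", err(a_best))
```

The key structural idea retained here is that the search is confined to a compact set of stable pole configurations, with the linear parameters eliminated at each step.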
It is only fair to say that the design of a numerically efficient algorithm whose convergence to the best approximant is proved is the most important problem from a practical perspective. In the meantime, the algorithms developed by the team seem rather effective, although their global convergence has not been established. A contrario, it is possible to consider an elimination algorithm when the function to approximate is rational, in order to find all critical points, since the problem is algebraic in this case. This method is surely convergent, since it is exhaustive, but one has to compute the roots of an algebraic system in $n$ variables of degree $N$, where $N$ is the degree of the function to approximate; there can be as many as $N^n$ solutions, among which it is necessary to distinguish those that are coefficients of polynomials having all their roots in the unit disk, the latter indeed being the only ones that generate critical points. Despite the increase of computational power, such a procedure is still unfeasible, granted that realistic values for $n$ and $N$ are around ten and a couple of hundred respectively (see section ).
To prove or disprove the convergence of the above-described algorithms, and to check them against practical situations, the team has undertaken a long-haul study of the number and nature of critical points, depending on the class of functions to be approximated, in which tools from differential topology and operator theory team up with classical approximation theory. The study of transfer functions of relaxation systems (i.e., Markov functions) was initiated and more or less completed by the team, as were the case of $e^z$ (the prototype of an entire function with convex Taylor coefficients) and the case of meromorphic functions (à la Montessus de Ballore). After these studies, a general principle has emerged that links the nature of the critical points in rational approximation to the regularity of the decrease of the interpolation errors with the degree, and a methodology has been developed to analyze the uniqueness issue in the case where the function to be approximated is a Cauchy integral on an open arc (roughly speaking, these functions cover the case of singularities of dimension one that are sufficiently regular; see section ). This methodology relies on the localization of the singularities via the analysis of families of non-Hermitian orthogonal polynomials, to obtain strong estimates of the error that allow one to evaluate its relative decay. Note in this context an analog of the Gonchar conjecture, namely that uniqueness ought to hold at least for infinitely many values of the degree, corresponding to a subsequence generating the lim inf of the errors. This conjecture actually suggests that uniqueness should be linked to the ratio of the to-be-approximated function and its derivative on the circle. When this ratio is pointwise greater than 1 (i.e., the logarithmic variation is small), it has recently been proved, using Morse theory and the Schwarz lemma, that uniqueness holds in degree 1. The generalization to higher degrees is an exciting open question.
Another uniqueness criterion has been obtained for rational functions, inspired by the spectral techniques of AAK theory. This result is interesting in that it is not asymptotic and does not require pointwise estimates of the error; however, it assumes a rapid decrease of the errors and the current formulation calls for further investigation.
The introduction of a weight in the optimization criterion is another interesting issue, induced by the necessity to balance the information one has at various frequencies against the noise. For instance, in the stochastic theory, minimum variance identification leads to assigning weights to the errors like the inverse of the spectral density of the noise. It should be noted that most approaches to frequency identification in engineering practice consist in solving a weighted least-squares minimization problem, where the weight has to be designed so as to obtain satisfactory results using a generic ``optimization'' toolbox. This leads to considering the minimization of a criterion of the form
$$\int_T \left| f - \frac{p_m}{q_n} \right|^2 d\mu,$$
where $\mu$ is a positive finite measure on $T$, $p_m$ is a polynomial of degree less than or equal to $m$, and $q_n$ a monic polynomial of degree less than or equal to $n$. Such a problem is well-posed when $\mu$ is absolutely continuous with respect to the Lebesgue measure and has invertible derivative. For instance, when this derivative is the squared modulus of an invertible analytic function, introducing orthogonal polynomials instead of the Fourier basis makes the situation similar to the non-weighted case, at least if $m \ge n - 1$. The corresponding algorithm was implemented in the hyperion software (see section ). The analysis of the critical point equations in the weighted case gives various counterexamples to unimodality in maximum likelihood identification.
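With the denominator $q_n$ frozen, so that only the numerator is optimized, the weighted criterion above becomes a linear least-squares problem, which makes the role of the measure $\mu$ easy to visualize (a minimal sketch with an invented function and weight; the actual problem also optimizes $q_n$ and is nonlinear):

```python
import numpy as np

N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
z = np.exp(1j * theta)
g = 0.5 ** np.arange(6)
f = np.polyval(g[::-1], z)                 # f = sum_k g_k z^k, degree 5

V = np.vander(z, 4, increasing=True)       # numerator basis 1, z, z^2, z^3

def weighted_fit(w):
    """Normal equations (V* W V) c = V* W f of the weighted criterion."""
    WV = w[:, None] * V
    return np.linalg.solve(V.conj().T @ WV, V.conj().T @ (w * f))

c_uni = weighted_fit(np.ones(N))           # Lebesgue measure: Fourier projection
c_wei = weighted_fit(1 + 0.9 * np.cos(theta))
print("unweighted:", np.round(c_uni, 4))   # recovers g_0..g_3 exactly
print("weighted:  ", np.round(c_wei, 4))   # the weight shifts the optimum
```

With the uniform weight the fit is just the Fourier projection, while a non-constant weight couples the unmodeled high-order components back into the low-order coefficients, which is precisely the effect the weight is designed to control.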
It is worth pointing out that meromorphic approximation is better behaved (i.e., essentially invariant) with respect to the introduction of a weight (see section ).

Another kind of rational approximation problem arises in design problems, which became over the years an increasingly significant part of the team's activity (see sections , , and ). These are problems where constraints on the modulus of a rational function are sought, and they occur mainly in filter design, where the response is a rational function of fixed degree (the complexity of the filter), analytic and bounded by 1 in modulus on the right half-plane (passivity), whose modulus must be as close as possible to 1 on some subset of the imaginary axis (the passband) and as close as possible to 0 on the complementary subset (the stopband).
When translated over to the circle, a prototypical formulation consists in approximating the modulus of a given function by the modulus of a rational function of degree n, that is, to solve for
When p = 2 this problem can be reduced to a series of standard rational approximation problems, but usually one needs to solve it for p = ∞. For this, we observe upon squaring the moduli that the feasibility of
can be analysed using the Fejér–Riesz characterization of positive trigonometric polynomials on the unit circle as squared moduli of algebraic polynomials. This reduces the issue to a convex problem in infinite dimension (because the criterion has to be evaluated at infinitely many points on the unit circle) that constitutes a fundamental tool to deal with rational approximation in modulus. Note that the case where f is a piecewise constant function with values 0 and 1 can also be approached via classical Zolotarev problems , which can be solved more or less explicitly when the passband consists of a single arc. A constructive solution in the case of several arcs (multiband filters) is one recent achievement of the team (see section
). Of course, though the modulus of the response is the first concern in filter design, the variation of the phase must nevertheless remain under control to avoid unacceptable distortion of the signal. As a matter of fact, trading off abrupt changes in modulus against a moderate derivative of the phase, which are antagonistic effects , is an exciting but fairly open issue that needs to be investigated more deeply for the design of high order filters.
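The Fejér–Riesz step mentioned above can be made concrete numerically. The sketch below is a minimal illustration of ours, not the team's implementation: the function name and the root-splitting strategy are our own choices, and the input is assumed strictly positive on the circle (no roots on the circle itself).

```python
import numpy as np

def fejer_riesz(c):
    """Fejér–Riesz factorization: given coefficients c[0..n] of a trigonometric
    polynomial t(theta) = sum_{k=-n}^{n} c_k e^{ik theta} (with c_{-k} = conj(c_k))
    that is positive on the circle, return q with t(theta) = |q(e^{i theta})|^2."""
    c = np.asarray(c, dtype=complex)
    # Laurent coefficients c_{-n}, ..., c_n; multiplying by z^n gives an
    # ordinary polynomial whose roots come in pairs (r, 1/conj(r))
    laurent = np.concatenate([np.conj(c[:0:-1]), c])
    roots = np.roots(laurent[::-1])
    inside = roots[np.abs(roots) < 1]      # keep the roots inside the unit disk
    q = np.poly(inside)                    # monic polynomial with those roots
    # fix the positive scalar factor by matching t and |q|^2 at z = 1
    t1 = laurent.sum().real
    return q * np.sqrt(t1) / abs(np.polyval(q, 1.0))
```

For instance, t(θ) = 1.25 + cos θ is the squared modulus of q(z) = z + 0.5 on the circle, and `fejer_riesz([1.25, 0.5])` recovers exactly that factor.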
From the point of view of design, rational approximants are indeed useful only if they can be translated into physical parameter values for the device to be built. While such problems do not pertain to rational approximation proper, they are of utmost importance in practice. Actually, the fact that a device's response is shaped in the frequency domain whereas the device itself must be specified in the time domain is a major difficulty of the area, reflecting the fundamental problem of harmonic analysis. This is where System Theory enters the scene, as the passage from the frequency response (i.e., the transfer function) to the linear differential or difference equations that generate this response (i.e., the state-space representation) is the object of the so-called realization process. Algebraically speaking, a realization of a rational matrix H of the variable z is a 4-tuple (A, B, C, D) of real or complex matrices of appropriate sizes such that H(z) = C(zI − A)^{−1}B + D.
Since filters have to be considered as multipoles, the issue must indeed be tackled in a matrix-valued context, which adds to the complexity. A fair share of the team's research in this direction is concerned with finding realizations meeting certain constraints (imposed by the technology in use) for a transfer function that was obtained with the above-described techniques. The current approach is to solve algebraic equations in many variables using homotopy methods, which seems to be a path-breaking methodology in the area of filter design (see section ).
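As a toy numerical illustration of the realization formula (with made-up numbers, not a filter from the applications discussed here), one can check that a state-space quadruple reproduces its transfer function:

```python
import numpy as np

# A hypothetical single-input single-output example: with A = [[0.5]], B = [[1]],
# C = [[1]], D = [[0]], the formula H(z) = C (zI - A)^{-1} B + D gives 1/(z - 0.5).
A = np.array([[0.5]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

def transfer(z):
    """Evaluate H(z) = C (zI - A)^{-1} B + D."""
    n = A.shape[0]
    return C @ np.linalg.inv(z * np.eye(n) - A) @ B + D

z = 2.0
assert np.isclose(transfer(z)[0, 0], 1.0 / (z - 0.5))
```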
We refer here to the behavior of the poles of best meromorphic approximants, in the L^p sense on a closed curve, to functions defined as Cauchy integrals of complex measures whose support lies inside the curve. If one normalizes the contour to be the unit circle (which is no restriction in principle thanks to conformal mapping, but of course raises difficult questions from the constructive point of view), we find ourselves again in the framework of sections and , and the invariance of the problem under such a transformation was established in . The research so far has focused on functions that are analytic on and outside the contour and have singularities on an open arc inside the contour.
Generally speaking, the behavior of poles is particularly important in meromorphic approximation, both to obtain error rates as the degree grows large and to tackle more constructive issues like uniqueness. However, the original motivation of Apics is to consider this issue in connection with the approximation of the solution to a Dirichlet–Neumann problem, so as to extract information on the singularities. This approach to free boundary problems, which are classical in every respect but still quite open, illustrates the point of view of the team and gives rise to an active direction of research at the crossroads of function theory, potential theory and orthogonal polynomials.
As a general rule, critical point equations for these problems express that the polynomial whose roots are the poles of the approximant is a non-Hermitian orthogonal polynomial with respect to some complex measure (that depends on the polynomial itself and therefore varies with the degree) on the singular set of the function to be approximated. New results were obtained in recent years concerning the location of such zeroes. The approach to inverse problems for the Laplacian that we outline in this section appears to be attractive when the singularities are one-dimensional, for instance in the case of a cracked domain (see section ). It can be used as a computationally cheap preliminary step to obtain the initial guess of a heavier but more precise numerical optimization. It is rather complementary to the recently popularized MUSIC-type algorithms, as it can in principle be used on a single stationary pair of Dirichlet–Neumann data.
When the crack is sufficiently smooth, the approach in question is in fact equivalent to the meromorphic approximation of a function with two branch points, and we were able to prove that the poles of the approximants accumulate in a neighborhood of the hyperbolic geodesic arc that links the endpoints of the crack . Moreover, the asymptotic density of the poles turns out to be the Green equilibrium distribution on the geodesic arc, and it charges the endpoints, which are de facto well localized if one is able to compute sufficiently many zeros (this is where the method could fail). It is interesting to note that these results also apply, and even more easily, to the detection of monopolar and dipolar sources, a case where poles as well as logarithmic singularities exist. The case of more general cracks (for instance formed by a finite union of analytic arcs) requires the analysis of the situation where the number of branch points is finite but arbitrary. We proved very recently that the poles tend to the contour outside of which the function is analytic and single-valued that minimizes the capacity of the condenser , where T is the exterior boundary of the domain (paper in preparation, see section ). For the definition of a condenser and other basic facts from potential theory, see .
It would of course be very interesting to know what happens when the crack is ``absolutely non-analytic'', a limiting case that can be interpreted as that of an infinite number of branch points, and on which very little is known, although there are grounds to conjecture that the endpoints at least are still accumulation points of the poles. This is an outstanding open question for applications to inverse problems (see section ). Concerning the problem of a general singularity, which may be two-dimensional, one can formulate the following conjecture: if f is analytic outside and on the exterior boundary of a domain D, and if K is the minimal compact set included in D that minimizes the capacity of the condenser (T, K) under the constraint that f is analytic and single-valued outside K (it exists, it is unique, and we assume it is of positive capacity in order to avoid degenerate cases), then every limit point (in the weak star sense) of the sequence of probability measures having equal mass at each pole of an optimal meromorphic approximant (with at most n poles) of f in L^p(T) has its support in K and sweeps out on the boundary of K to the equilibrium distribution of the condenser (T, K). This conjecture, which generalizes the above-mentioned results on 1D singular sets, is far from being solved in general.
We conclude by mentioning that the problem of approximating, by a rational or meromorphic function in the L^p sense on the boundary of a domain, the Cauchy transform of a real measure localized inside the domain can be viewed as an optimal discretization problem for a logarithmic potential according to a criterion involving a Sobolev norm. This formulation can be generalized to higher dimensions, even if the computational power of complex analysis is then no longer available, and this makes for a long-term research project with a wide range of applications. It is interesting to mention that the case of sources in dimension three in a spherical geometry can be attacked with the above 2D techniques as applied to planar sections (see section ).
Matrix-valued approximation is necessary for handling systems with several inputs and outputs, and it generates substantial additional difficulties with respect to scalar approximation, theoretically as well as algorithmically. In the matrix case, the McMillan degree (i.e., the degree of a minimal realization in the System-Theoretic sense) generalizes the degree. Hence the problem reads:
Let 1 ≤ p ≤ ∞ and n an integer; find a rational matrix of size m × l without poles in the unit disk and of McMillan degree at most n which is nearest possible to the given function in (H^p)^{m×l}.
To fix ideas, we may define the L^p norm of a matrix as the p-th root of the sum of the p-th powers of the norms of its entries.
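The norm just defined can be sketched numerically as follows (an illustrative helper of our own, not a team code; the entrywise L^p norms on the circle are approximated on a uniform grid):

```python
import numpy as np

def matrix_lp_norm(H, p, n_grid=1024):
    """p-th root of the sum of p-th powers of the entrywise L^p norms on the circle,
    for a callable H mapping a point z of the unit circle to an m-by-l matrix."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    vals = np.array([H(np.exp(1j * t)) for t in theta])   # shape (n_grid, m, l)
    entry_norms = np.mean(np.abs(vals) ** p, axis=0) ** (1.0 / p)
    return np.sum(entry_norms ** p) ** (1.0 / p)

# diagonal example: the entries z and 2z have L^p norms 1 and 2 on the circle,
# so the total norm is (1 + 2^p)^{1/p}; for p = 2 this is sqrt(5)
H = lambda z: np.array([[z, 0.0], [0.0, 2.0 * z]])
assert np.isclose(matrix_lp_norm(H, 2), np.sqrt(5.0))
```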
The main interest of the Apics Team so far lies with the case p = 2. Then, the approximation algorithm designed in the scalar case generalizes to the matrix-valued situation . The first difficulty here consists in the parametrization of transfer matrices of given McMillan degree n, and the inner matrices (i.e., matrix-valued functions that are analytic in the unit disk and unitary on the circle) of degree n enter the picture in an essential manner: they play the role of the denominator in a fractional representation of transfer matrices using the so-called Douglas–Shapiro–Shields factorization. The set of inner matrices of given degree has the structure of a smooth manifold that allows one to use differential tools as in the scalar case. In practice, one has to produce an atlas of charts (parameterizations valid in a neighborhood of a point), and we must handle changes of charts in the course of the algorithm. The tangential Schur algorithm provides us with such a parameterization and allowed the team to develop two rational approximation codes (see sections and ). The first one is integrated in the endymion software and deals with transfer matrices, while the other, developed under the Matlab interpreter, goes by the name of RARL2 and works with realizations. Both have been experimented on measurements by the CNES (branch of Toulouse), XLim, and Alcatel Alenia Space (Toulouse), on which they gave high quality results in all cases encountered so far. These codes are now in daily use by Alcatel Alenia Space and XLim, coupled with simulation software like EMXD, to design physical coupling parameters for the synthesis of microwave filters made of resonant cavities (see section ).
In the above application, obtaining physical couplings requires the computation of realizations, also called internal representations in System Theory. Among the parameterizations obtained via the Schur algorithm, some have a particular interest from this viewpoint , . They lead to a simple and robust computation of balanced realizations and form the basis of the RARL2 algorithm (see section ).
Problems relative to multiple local minima naturally arise in the matrix-valued case as well, but deriving criteria that guarantee uniqueness is even more difficult than in the scalar case. The already investigated case of rational functions of the sought degree (the consistency problem) was solved using rather heavy machinery , and that of matrix-valued Markov functions, which are the first example beyond rational functions, has made progress only recently .
In practice, a method similar to the one used in the scalar case has been developed to generate local minima of a given order from those at lower orders. In short, one sets out a matrix of degree n by perturbation of a matrix of degree n − 1, where the drop in degree is due to a pole-zero cancellation. There is an important difference between polynomial representations of transfer matrices and their realizations: the former lead to an embedding in an ambient space of rational matrices that allows a differentiable extension of the criterion on a neighborhood of the initial manifold, but not the latter (the boundary is strongly singular). Generating initial conditions in a recursive manner is more delicate in terms of realizations, and some basic questions on the boundary behavior of the gradient vector field are still open.
Let us stress that the algorithms mentioned above are the first to handle rational approximation in the matrix case in a way that converges to local minima while meeting stability constraints on the approximant.
In order to control a system, one generally relies on a model, obtained from a priori knowledge, like physical laws, or from experimental observations. In many applications, it is enough to deal with a linear approximation around a nominal point or trajectory. However, there are important instances where linear control does not apply, either because the magnitude of the control is limited or because the linear approximation is not controllable. Moreover, certain control problems, such as path planning, are not local in nature and cannot be solved via linear approximations.
Section describes a problem of this nature, where the controllability of the linear approximation is of little help. Besides, the structural study described in section aims at exhibiting invariants that can be used either to bring the study back to that of simpler systems or to lay grounds for a nonlinear identification theory. The latter would give information on model classes to be used when no reliable a priori information is available and yet black-box linear identification is not satisfactory.
Stabilization by continuous state feedback (or output feedback, which is a partial-information case) consists in designing a control law which is a smooth (at least continuous) function of the state making a given point (or trajectory) asymptotically stable for the closed-loop system. One can consider this as a weak version of the optimal control problem, which is to find a control that minimizes a given criterion (for instance the time to reach a prescribed state). Optimal control generally leads to a rather irregular dependence on the initial state; in contrast, stabilization is a qualitative objective (i.e., to reach a given state asymptotically) which is more flexible and allows one to impose a lot more regularity.
Lyapunov functions are a well-known tool to study the stability of non-controlled dynamical systems. For a control system, a Control Lyapunov Function is a Lyapunov function for the closed-loop system where the feedback is chosen appropriately. It can be expressed by a differential inequality called the ``Artstein (in)equation'' , which looks like the Hamilton–Jacobi–Bellman equation but is largely underdetermined. From the knowledge of a control Lyapunov function, one easily deduces a continuous stabilizing feedback.
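To make the last point concrete, here is a minimal sketch of our own, using Sontag's universal formula (a classical construction, not one specific to the team), on the scalar system x' = x^3 + u with the control Lyapunov function V(x) = x^2/2:

```python
import numpy as np

def sontag_feedback(x):
    """Sontag's formula for x' = f(x) + g(x) u with f(x) = x^3, g(x) = 1, V = x^2/2:
    with a = V'(x) f(x) and b = V'(x) g(x), the closed loop gives
    dV/dt = a + b u = -sqrt(a^2 + b^4) <= 0."""
    a = x * x**3
    b = x
    if b == 0.0:
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# forward-Euler simulation of the closed loop: the state is driven to the origin
# even though the uncontrolled dynamics x' = x^3 blows up in finite time
x, dt = 1.5, 1e-3
for _ in range(20000):
    x += dt * (x**3 + sontag_feedback(x))
assert abs(x) < 1e-2
```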
The team is engaged in obtaining control Lyapunov functions for certain classes of systems. This can be the first step in synthesizing a stabilizing control but, even when such a control is known beforehand, obtaining a control Lyapunov function can still be very useful to study the robustness of the stabilization, or to modify the initial control law into a more robust one. Moreover, if one has to deal with a problem where it is important to optimize a criterion, and if the optimal solution is hard to compute, one can look for a control Lyapunov function which comes ``close'' (in the sense of the criterion) to the solution of the optimization problem but leads to a control which is easier to work with.
These constructions are exploited in joint collaborative research conducted with Alcatel Alenia Space (Cannes), where minimizing a certain cost (fuel consumption / transfer time) is very important, while at the same time a feedback law is preferred for robustness and ease of implementation (see section ).
Here we study certain transformations of models of control systems, or more accurately of equivalence classes modulo such transformations. The interest is twofold:
From the point of view of control, a command satisfying specific objectives on the transformed system can be used to control the original system including the transformation in the controller. Of course the favorable case is when the transformed system has a structure that can easily be exploited, for instance when it is a linear controllable system.
From the point of view of identification and modeling, the interest is either to derive qualitative invariants to support the choice of a nonlinear model given the observations, or to contribute to a classification of nonlinear systems, which is sorely missing today. Indeed, the success of the linear model in control and identification is due to the deep understanding one has of it. In the same manner, a more complete knowledge of invariants of nonlinear systems under basic transformations is a prerequisite for a more general theory of nonlinear identification.
Concerning the classes of transformations, a static feedback transformation of a dynamical control system is a (nonsingular) reparametrization of the control depending on the state, together with a change of coordinates in the state space. A dynamic feedback transformation of a control system consists of a dynamic extension (adding new states and assigning them a new dynamics) followed by a state feedback on the augmented system. Let us now stress two specific problems that we are tackling.
The problem of dynamic linearization, still unsolved, is that of finding explicit conditions on a system for the existence of a dynamic feedback that would make it linear.
Over the last years , the following property of control systems has been emphasized: for some systems (in particular linear ones), there exists a finite number of functions of the state and of the derivatives of the control up to a certain order that are differentially independent (i.e., coupled by no differential equation) and ``parameterize all the trajectories''. This property, and its importance in control, was brought to light in , where it is called differential flatness, the above-mentioned functions being called flat or linearizing functions; it was shown, roughly speaking, that a system is differentially flat if, and only if, it can be converted to a linear system by dynamic feedback. On the one hand, this interesting property of the set of trajectories is at least as important in control as the equivalence to a linear system; on the other hand, it gives a handle for tackling the problem of dynamic linearization, namely to find linearizing functions.
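A minimal numerical illustration of flatness (our toy example, not one from the text): for the double integrator x1' = x2, x2' = u, the flat output y = x1 parameterizes every trajectory through its derivatives, with no differential equation left on y.

```python
import numpy as np

# pick an arbitrary smooth flat-output trajectory y(t) and reconstruct the
# state and control from y and its derivatives: x1 = y, x2 = y', u = y''
t = np.linspace(0.0, 2.0 * np.pi, 2001)
x1 = np.sin(t)            # y
x2 = np.cos(t)            # y'
u = -np.sin(t)            # y''

# the reconstructed triple satisfies the dynamics (checked by finite differences,
# away from the interval endpoints where np.gradient is only first-order accurate)
dx1 = np.gradient(x1, t)
dx2 = np.gradient(x2, t)
assert np.allclose(dx1[1:-1], x2[1:-1], atol=1e-4)
assert np.allclose(dx2[1:-1], u[1:-1], atol=1e-4)
```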
An important question remains open: how can one algorithmically decide whether a given system has this property or not, i.e., is dynamically linearizable or not? This problem is both difficult and important for nonlinear control. For systems with four states and two controls, whose dynamics is affine in the control (these are the lowest dimensions for which the problem is really nontrivial), necessary and sufficient conditions for the existence of linearizing functions depending on the state and the control (but not on the derivatives of the control) can be given explicitly, but they do point at the complexity of the issue.
From the algebraic-differential point of view, the module of differentials of a controllable system is free and finitely generated over the ring of differential polynomials in d/dt with coefficients in the space of functions of the system, and a basis can be explicitly constructed . The question is to find out whether it has a basis made of closed forms, that is, locally exact forms. Expressed in this way, it is an extension of the classical integrability theorem of Frobenius to the case where the coefficients are differential operators. Together with stability by exterior differentiation (the classical condition), further conditions are required here to ascertain that the degree of the solutions is finite, a mid-term goal being to obtain a formal and implementable algorithm to decide whether or not a given system is flat around a regular point. One can further consider subproblems having their own interest, like deciding flatness with a given precompensator, or characterizing ``formal'' flatness that would correspond to a weak interpretation of the differential equation. Such questions can also be raised locally, in the neighborhood of an equilibrium point.
In what precedes, we have not taken into account the degree of smoothness of the transformations under consideration.
In the case of dynamical systems without control, it is well known that, away from degenerate (non-hyperbolic) points, if one requires the transformations to be merely continuous, every system is locally equivalent to a linear system in a neighborhood of an equilibrium (the Hartman–Grobman theorem). It is thus tempting, when classifying control systems, to look for such equivalence modulo non-differentiable transformations, in the hope of bringing out robust ``qualitative'' invariants and perhaps stable normal forms. A Hartman–Grobman theorem for control systems would say, for instance, that outside a ``meager'' class of models (for instance, those whose linear approximation is non-controllable), and locally around nominal values of the state and the control, no qualitative phenomenon can distinguish a nonlinear system from a linear one, all nonlinear phenomena being thus either of global nature or singularities. Such a statement is wrong: if a system is locally equivalent to a controllable linear system via a bicontinuous transformation (a local homeomorphism in the state-control space), it is also equivalent to this same controllable linear system via a transformation that is as smooth as the system itself, at least in the neighborhood of a regular point (in the sense that the rank of the control system is locally constant), see for details; a contrario, under weak regularity conditions, linearization can be done by non-causal transformations (see ) whose structure remains unclear, but acquires a concrete meaning when the entries are themselves generated by a finite-dimensional dynamics.
The above considerations call for the following question, which is important for modeling control systems: are there local ``qualitative'' differences between the behavior of a nonlinear system and that of its linear approximation when the latter is controllable?
The bottom line of the team's activity is twofold: function theory and optimization in the frequency domain on the one hand, and the control of systems governed by differential equations on the other. Therefore one can distinguish between two main families of applications: one dealing with the design and identification of diffusive and resonant systems (these are inverse problems), and one dealing with the control of certain mechanical systems. For applications of the first type, approximation techniques as described in section allow one to deconvolve linear equations, analyticity being the result of either the use of Fourier transforms or the harmonic character of the equation itself. Applications of the second type mostly concern the control of systems that are ``poorly'' controllable, for instance low-thrust satellites or optical regenerators. We describe all these below in more detail.
Localizing cracks, pointwise sources or occlusions in a two-dimensional material, using thermal, electrical, or magnetic measurements on its boundary, is a classical inverse problem. It arises when studying fatigue of structures, behavior of conductors, or else electro- and magneto-encephalography, as well as the detection of buried objects (mines, etc.). However, no completely satisfactory algorithm has emerged so far when no initial information on the location or the geometry is known, because numerical integration of the inverse problem is very unstable. A technique that evolved from the singular value decomposition of a parametrix-like correlation matrix has recently become popular in the field under the name of MUSIC-type algorithms . The methods we describe are of a different nature, and they are especially valuable when no mutually independent time-varying measurements are available, either because the measurements are stationary, or because only few measurements can be made, or else because the superposed phenomena to be analyzed (e.g., several sources) are mutually correlated both in time and space. These methods can also be used to approach inverse free boundary problems of Bernoulli type (see section ).
The presence of cracks in a plane conductor, for instance, or of sources in a cortex (modulo a reduction from 3D to 2D, see below), can be expressed as a lack of analyticity of the (complexified) solution of the associated Dirichlet–Neumann problem, which may in principle be approached using techniques of best rational or meromorphic approximation on the boundary of the object (see sections , and ). In this connection, the realistic case where data are available on part of the boundary only offers a typical opportunity to apply the analytic and meromorphic extension techniques developed earlier.
The 2D approach proposed here consists in constructing, from measured data on a subset K of the boundary of a plane domain D, the trace on the whole boundary of a function F which is analytic in D except for a possible singularity across some subset (typically: a crack). One can then use the approximation techniques described above in order to:
extend F to the whole boundary if the data are incomplete (it may happen that K is a proper subset when the boundary is not fully accessible to measurements), for instance to identify an unknown Robin coefficient (see , where stability properties of the procedure are established);
detect the presence of a defect in a computationally efficient manner ;
Thus, inverse problems of geometric type, which consist in finding an unknown boundary from incomplete data, can be approached in this way , often in combination with other techniques . Preliminary numerical experiments have yielded excellent results, and it is now important to process real experimental data, which the team is currently busy analyzing. In particular, contacts with the Odyssée Team of Inria Sophia Antipolis (within the ACI ``ObsCerv'') have provided us with 3D magnetoencephalographic data from which 2D information was extracted (see section ). The team has also made contact with other laboratories (e.g., the Vanderbilt University Physics Dept.) in order to work out 2D or 3D data from physical experiments.
We began last year to apply such methods to problems with variable conductivity governed by a 2D Beltrami equation. The application we have in mind is plasma confinement for thermonuclear fusion in a tokamak, more precisely the extrapolation of magnetic data on the boundary of the chamber from the outer boundary of the plasma, which is a level curve for the poloidal flux solving the original div-grad equation. Solving this inverse problem of Bernoulli type is important to determine the appropriate boundary conditions to be applied to the chamber in order to shape the plasma . A joint collaboration on this topic recently started with the Laboratoire J. Dieudonné at the University of Nice and the CMI-LATP at the University of Marseille I. It has been the object of the postdoctoral stay of E. Sincich and is one of the collaborative research topics with S. Rigat (on leave of absence from the University of Provence), as described in section .
The goal is first to determine the shape of the surface of the plasma in the chamber from the outer boundary measurements, and in a second step to shape this boundary by choosing some appropriate magnetic flux on this outer boundary (see section ).
One of the best training grounds for the team's research in function theory is the identification and design of physical systems for which the linearity assumption works well in the considered range of frequency, and whose specifications are given in the frequency domain. Resonant systems, whether acoustic or electromagnetic, are prototypical devices of common use in telecommunications. We shall be more specific on two examples below.
Surface acoustic wave filters are widely used in modern telecommunications, especially in cellular phones, mainly because of their small size and low cost. Unidirectional filters, formed of Single Phase UniDirectional Transducers (in short: SPUDT) that contain inner reflectors (cf. Figure ), are increasingly used in this technological area. The design of such filters is more complex than that of traditional ones.
We are interested here in a filter formed of two SPUDT transducers (Figure ). Each transducer is composed of cells of the same length, each of which contains a reflector, and all but the last one contain a source (Figure ). These sources are all connected to an electrical circuit and cause electroacoustic interactions inside the piezoelectric medium. In the transducer SPUDT2 represented on Figure , the reflectors are positioned with respect to the sources in such a way that, near the central frequency, almost no wave can emanate from the transducer to the left ( ), this being called unidirectionality. In the right transducer SPUDT1, reflectors are positioned in a symmetric fashion so as to obtain unidirectionality to the left.
Specifications are given in the frequency domain on the amplitude and phase of the electrical transfer function. This function expresses the power transfer and can be written as
where Y is the admittance of the coupling:
The design problem consists in finding the reflection coefficients r and the source efficiency in both transducers so as to meet the specifications.
The transducers are described by analytic transfer functions called mixed matrices, which link input waves and currents to output waves and potentials. Physical properties of reciprocity and energy conservation endow these matrices with a rich mathematical structure that allows one to use approximation techniques in the complex domain according to the following steps:
describe the set of electrical transfer functions obtainable from the model,
set out the design problem as a rational approximation problem in a normed space of analytic functions:
where D is the desired electrical transfer,
use a rational approximation software (see section ) to identify the design parameters.
The first item is the subject of ongoing research. It connects the geometry of the zeroes of a rational matrix to the existence of an inner symmetric extension without increase of the degree (the reciprocal Darlington synthesis, see section ). A collaboration with TEMEX (Sophia Antipolis) is currently being conducted on the subject.
In the domain of space telecommunications (satellite transmissions), constraints specific to on-board technology lead to the use of filters with resonant cavities in the microwave range. These filters serve multiplexing purposes (before or after amplification) and consist of a sequence of cylindrical hollow bodies, magnetically coupled by irises (orthogonal double slits). The electromagnetic wave that traverses the cavities satisfies the Maxwell equations, which force the tangential electrical field along the body of the cavity to be zero. A deeper study (of the Helmholtz equation) shows that essentially only a discrete set of wave vectors is selected. In the considered range of frequency, the electrical field in each cavity can be decomposed along two orthogonal modes, perpendicular to the axis of the cavity (other modes are far off in the frequency domain, and their influence can be neglected).
Near the resonance frequency, a good approximation of the Maxwell equations is given by the solution of a second-order differential equation. One thus obtains an electrical model of the filter as a sequence of electrically coupled resonant circuits, where each circuit is modeled by two resonators, one per mode, whose resonance frequency represents the frequency of a mode and whose resistance represents the electric losses (current on the surface).
In this way, the filter can be seen as a quadripole, with two ports, when plugged into a resistor at one end and fed with some potential at the other end. We are then interested in the power which is transmitted and reflected. This leads to defining a scattering matrix S, which can be considered as the transfer function of a stable causal linear dynamical system with two inputs and two outputs. Its diagonal terms S_{11}, S_{22} correspond to reflections at each port, while S_{12}, S_{21} correspond to transmission. These functions can be measured at certain frequencies (on the imaginary axis). The filter is rational of order 4 times the number of cavities (that is, 16 in the example), and the key step consists in expressing the components of the equivalent electrical circuit as functions of the S_{ij} (since there are no formulas expressing the length of the screws in terms of the parameters of this electrical model). This representation is also useful to analyze numerical simulations of the Maxwell equations, and to check the design, particularly the absence of higher resonant modes.
In fact, resonance is not studied via the electrical model, but via a low-pass equivalent obtained upon linearizing near the central frequency, which is no longer conjugate symmetric (i.e., the underlying system may not have real coefficients) but whose degree is divided by 2 (8 in the example).
In short, the identification strategy is as follows:
measuring the scattering matrix of the filter near the optimal frequency over twice the pass band (which is 80 MHz in the example);
solving bounded extremal problems for the transmission and the reflection (the modulus of the response being respectively close to 0 and 1 outside the measurement interval, cf. section ). This provides us with a scattering matrix of order roughly 1/4 of the number of data points;
approximating this scattering matrix by a rational transfer function of fixed degree (8 in this example) via the Endymion or RARL2 software (cf. section ).
A realization of the transfer function is thus obtained, on which some additional symmetry constraints are imposed.
Finally, one builds a realization of the approximant and looks for a change of variables that eliminates non-physical couplings. This is obtained by using algebraic solvers and continuation algorithms on the group of orthogonal complex matrices (symmetry forces this type of transformation).
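To fix ideas on the rational approximation step, here is a minimal sketch that fits a fixed-degree rational model to frequency samples of a scalar transfer function by linearized least squares (a Levy-type linearization with hypothetical helper names; it is not the RARL2 or Endymion algorithm, which optimize the true L^2 criterion):

```python
import numpy as np

def fit_rational(freqs, data, n):
    """Fit data ~ p(s)/q(s) on s = j*freqs, with deg p <= n and q monic of
    degree n, via the linearized least-squares problem p(s) - data*q(s) ~ 0."""
    s = 1j * np.asarray(freqs)
    cols = [s**k for k in range(n + 1)]            # unknown p coefficients
    cols += [-data * s**k for k in range(n)]       # unknown q coefficients
    A = np.column_stack(cols)
    b = data * s**n                                # monic term moved to the right
    # stack real and imaginary parts to obtain a real least-squares problem
    Ar = np.vstack([A.real, A.imag])
    br = np.concatenate([b.real, b.imag])
    x, *_ = np.linalg.lstsq(Ar, br, rcond=None)
    p, q = x[:n + 1], np.append(x[n + 1:], 1.0)
    return p, q                                    # ascending coefficients

def eval_rational(p, q, s):
    return np.polyval(p[::-1], s) / np.polyval(q[::-1], s)
```

On exact data from a degree-n model the fit is exact; on noisy measurements this linearization is only a crude first step compared with the L^2-optimal software described in the sequel.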
The final approximation is of high quality. This can be interpreted as a validation of the linearity hypothesis for the system: the relative L^{2} error is less than 10^{-3}. This is illustrated by a reflection diagram (Figure ). Non-physical couplings are less than 10^{-2}.
The above considerations are valid for a large class of filters. These developments have also been used for the design of non-symmetric filters, useful for the synthesis of repeater devices.
The team is currently investigating the design of output multiplexers (OMUX), in which several filters of the previous type are coupled on a common guide. In fact, it has undertaken a rather general analysis of the question ``How does an OMUX work?'' With the help of numerical simulations and Schur analysis, general principles are being worked out to take into account:
the coupling between each channel and the ``Tee'' that connects it to the manifold,
the coupling between two consecutive channels.
The model is obtained upon chaining the corresponding scattering matrices, and combines rational elements with complex exponentials (because of the delays); it hence constitutes an extension of the previous framework. Its study is being conducted under contract with Alcatel Alenia Space (Toulouse) (see sections and ).
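The chaining of scattering matrices can be sketched with the Redheffer star product; below is a toy per-frequency version for two 2-ports (a generic textbook formula with hypothetical names, not the contracted OMUX model itself):

```python
import numpy as np

def cascade(SA, SB):
    """Cascade two 2-port scattering matrices (one frequency point) via the
    Redheffer star product; d accounts for the multiple reflections
    between the two devices."""
    d = 1.0 - SA[1, 1] * SB[0, 0]
    return np.array([
        [SA[0, 0] + SA[0, 1] * SB[0, 0] * SA[1, 0] / d,
         SA[0, 1] * SB[0, 1] / d],
        [SB[1, 0] * SA[1, 0] / d,
         SB[1, 1] + SB[1, 0] * SA[1, 1] * SB[0, 1] / d],
    ])
```

For two matched delay lines (S_{11} = S_{22} = 0, S_{12} = S_{21} = e^{-j theta}) the cascade is again a matched line whose delays add, which is exactly the mixed rational/exponential structure mentioned above.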
The use of satellites in telecommunication networks motivates a lot of research in the area of signal and image processing; see for instance section for an illustration.
Of course, this requires that satellites be adequately located and positioned (correct orientation). This problem and similar ones continue to motivate research in control within the team. Generally speaking, aerospace engineering requires sophisticated control techniques for which optimization is often crucial, due to the extreme functioning conditions.
The team has been working for two years on control problems in orbital transfer with low-thrust engines, under contract with Alcatel Space Cannes (see section ). Technically, the reason for using these (ionic) low-thrust engines, rather than chemical engines that deliver a much higher thrust, is that they require much less ``fuel''; this is decisive because the total mass is limited by the capacity of the launchers: less fuel means more payload, while fuel represents today an impressive part of the total mass.
From the control point of view, the low thrust makes the transfer problem delicate. In principle, of course, the control law leading to the right orbit in minimum time exists, but it is computationally heavy to obtain and the computation is not robust against many unmodelled phenomena. Considerable progress on the approximation of such a law by a feedback has been made, and numerical experiments have been conducted (see section ).
For more than 10 years, physicists have been working on the realization of elementary quantum gates, with the goal of building, in the future, a quantum computer (cf. the cavity quantum electrodynamics experiments with circular Rydberg atoms at the Ecole Normale Supérieure in Paris, as well as the handling of trapped ions with lasers at Innsbruck University). The main difficulty to overcome for the effective construction of a quantum computer is the decoherence that results from the coupling of qubits with their environment: entangled states are difficult to achieve and to maintain over a significant period of time. The goal is to adapt existing control techniques and, if necessary, to propose new ones for modeling and controlling open quantum systems. In particular, for qubits coupled with the environment, controllability and disturbance rejection issues arise when trying to design a control that drives the system from one pure quantum state to another (quantum gate) while compensating for the decoherence induced by the environment.
In order to take decoherence into account, one has to use the Heisenberg point of view, where the density matrix is used instead of the probability amplitude (Schrödinger point of view); this framework takes into account the coupling with a large environment (reservoir) and its irreversible effects. Under weak coupling and short environment autocorrelation time, the evolution can be described by a differential equation, called the master equation, which has a well-defined structure in terms of the so-called Lindblad operators ; it yields a finite-dimensional bilinear control system, which has not been thoroughly studied up to now. This is the subject of ongoing research. A more sophisticated model is the Bloch-Redfield formalism ; it does not have a finite-dimensional state (in the control-theoretic sense of the word), but it seems more realistic when the control undergoes fast variations. There is numerical evidence (see ) that, in this model, the control can effectively act against dissipation.
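As a minimal numerical sketch of the Lindblad-form master equation, consider a single qubit with spontaneous decay (generic textbook operators, not a model studied under the contract):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli z
sm = np.array([[0, 0], [1, 0]], dtype=complex)    # lowering operator |g><e|

def lindblad_rhs(rho, H, L, gamma):
    """drho/dt = -i[H, rho] + gamma (L rho L+ - (1/2){L+L, rho})."""
    LdL = L.conj().T @ L
    return (-1j * (H @ rho - rho @ H)
            + gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)))

def evolve(rho, H, L, gamma, dt, steps):
    """Fixed-step 4th-order Runge-Kutta integration of the master equation."""
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, L, gamma)
        k2 = lindblad_rhs(rho + dt / 2 * k1, H, L, gamma)
        k3 = lindblad_rhs(rho + dt / 2 * k2, H, L, gamma)
        k4 = lindblad_rhs(rho + dt * k3, H, L, gamma)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho
```

Starting from the excited state, the excited population decays as e^{-gamma t} while the trace stays equal to 1, illustrating the irreversible (dissipative) part of the dynamics that a control has to fight against.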
This is a very new research topic for the team. We report in section on investigations that started during the postdoctoral stay of Hamza Jirari.
The increased capacity of numerical channels in information technology is a major industrial challenge. The most effective means nowadays for transporting signals from a server to the user and back is via optical fibers. The use of this medium at the limit of its response capacity raises new control problems, namely to maintain a safe signal both in the fibers and in the routing and regeneration devices.
In the recent past, the team has worked in collaboration with Alcatel R&I (Marcoussis) on the control of ``all-optic'' regenerators. Although no collaboration is presently active, we consider this a potentially rich domain of applications.
The work presented in section lies upstream with respect to applications. However, beyond the fact that deciding whether a given system is linear modulo an adequate compensator is clearly conceptually important, it is fair to say that ``flat outputs'' are of considerable interest for path planning . Moreover, as indicated in section , a better understanding of the invariants of nonlinear systems under feedback would result in significant progress in identification.
RARL2 (Réalisation interne et Approximation Rationnelle L2) is a software for rational approximation (see section ).
This software takes as input a stable transfer function of a discrete-time system, represented by
either its internal realization,
or its first N Fourier coefficients,
or discretized values on the circle.
It computes a local best approximant which is stable, of prescribed McMillan degree, in the L^{2} norm.
It is germane to the arl2 function of Endymion, from which it differs mainly in the way systems are represented: a polynomial representation is used in Endymion, while RARL2 uses realizations, which is very useful in certain cases. It is implemented in Matlab. This software handles multivariable systems (with several inputs and several outputs), and uses a parameterization that has the following advantages:
it incorporates the stability requirement in a built-in manner,
it allows the use of differential tools,
it is well-conditioned and computationally cheap.
An iterative search strategy on the degree of the local minima, similar in principle to that of arl2, increases the chance of obtaining the absolute minimum (see section ) by generating, in a structured manner, several initial conditions. Contrary to the polynomial case, the geometry is singular on the boundary of the manifold over which minimization takes place, which forbids extending the criterion to the ambient space. We thus have to deal with a singularity on the boundary of the approximation domain: it is not possible to compute a descent direction as the gradient of a function defined on a larger domain, although the initial conditions obtained from minima of lower order lie on this boundary. Determining a descent direction is therefore, to a large extent, still a heuristic step. While this step performs satisfactorily in the cases handled so far, it is still unknown how to make it truly algorithmic.
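The role of the pole location in the L^2 criterion can be illustrated on a toy scalar case: the best stable degree-1 approximant of an impulse response, where the optimal gain for a fixed pole is an orthogonal projection, so only the pole needs to be searched. The brute-force scan below is only an illustration, not RARL2's parameterized descent:

```python
import numpy as np

def best_degree_one(h, grid=None):
    """Best stable degree-1 model c/(1 - a z^{-1}) of an impulse response h
    in the l2 sense: for each candidate pole a in (-1, 1), the optimal gain
    c is given by projection, and the pole is found by a scan."""
    if grid is None:
        grid = np.linspace(-0.95, 0.95, 381)
    k = np.arange(len(h))
    best = None
    for a in grid:
        v = a**k                    # impulse response of 1/(1 - a z^{-1})
        c = (h @ v) / (v @ v)       # optimal gain by orthogonal projection
        err = np.linalg.norm(h - c * v)
        if best is None or err < best[0]:
            best = (err, a, c)
    return best                     # (error, pole, gain)
```

Such low-order minima are exactly the kind of structured initial conditions used to initialize higher-order searches.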
The identification of filters modeled by an electrical circuit that was developed by the team (see section ) leads to computing the electrical parameters of the underlying filter. This means finding a particular realization (A, B, C, D) of the model given by the rational approximation step. This 4-tuple must satisfy constraints that come from the geometry of the equivalent electrical network, and which translate into some of the coefficients in (A, B, C, D) being zero. Among the different coupling geometries, one called ``the arrow form'' is of particular interest, since it is unique for a given transfer function and also easily computed. The computation of this realization is the first step of RGC. Subsequently, if the target realization is not in arrow form, one can nevertheless show that it can be deduced from the arrow form by a complex orthogonal change of basis. In this case, RGC starts a local optimization procedure that reduces the distance between the arrow form and the target, using successive orthogonal transformations. This optimization problem on the group of orthogonal matrices is non-convex and has many local and global minima. In fact, the realization of the filter in the given geometry is not always unique. Moreover, it is often interesting to know all the solutions of the problem, because in many cases the designer cannot be sure which one is being handled, and also because the assumptions on the reciprocal influence of the resonant modes may not be equally well satisfied for all such solutions, so that some of them should be preferred for the design. Today, apart from the particular case where the arrow form is the desired form (which happens frequently up to degree 6), the RGC software gives no guarantee of obtaining a single realization that satisfies the prescribed constraints. The software Dedale-HF (see ), which is the successor of RGC, solves this constrained realization problem in a guaranteed manner.
Presto-HF: a toolbox dedicated to low-pass parameter identification for microwave filters, http://wwwsop.inria.fr/apics/personnel/Fabien.Seyfert/Presto_web_page/presto_pres.html. In order to allow the industrial transfer of our methods, a Matlab-based toolbox has been developed, dedicated to the problem of identification of low-pass microwave filter parameters. It allows one to run the following algorithmic steps, either individually or in a single shot:
determination of delay components, that are caused by the access devices (automatic reference plane adjustment);
automatic determination of an analytic completion, bounded in modulus for each channel (see section );
rational approximation of fixed McMillan degree;
determination of a constrained realization.
For the matrix-valued rational approximation step, Presto-HF relies either on hyperion (Unix or Linux only) or on RARL2 (platform independent); both rational approximation engines were developed within the team. Constrained realizations are computed by the RGC software. As a toolbox, Presto-HF has a modular structure, which allows one, for example, to include some of its building blocks in an already existing software.
The delay compensation algorithm is based on the following strong assumption: far off the passband, one can reasonably expect a good approximation of the rational components of S_{11} and S_{22} by the first few terms of their Taylor expansion at infinity, that is, by a small-degree polynomial in 1/s. Using this idea, a sequence of quadratic convex optimization problems is solved in order to obtain appropriate compensations. In order to check the above assumption, one has to measure the filter on a larger band, typically three times the pass band.
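A toy version of such a delay compensation can be sketched as follows (a grid search over the delay rather than the toolbox's convex formulation; all names are hypothetical):

```python
import numpy as np

def estimate_delay(omega, s11, taus, deg=3):
    """For each candidate delay tau, remove the access phase e^{-j omega tau}
    and fit a small-degree polynomial in 1/s (s = j omega) by linear least
    squares; return the tau giving the smallest residual."""
    s = 1j * omega
    best = None
    for tau in taus:
        y = s11 * np.exp(1j * omega * tau)           # compensated data
        A = np.column_stack([s ** (-k) for k in range(deg + 1)])
        Ar = np.vstack([A.real, A.imag])
        yr = np.concatenate([y.real, y.imag])
        _, res, rank, _ = np.linalg.lstsq(Ar, yr, rcond=None)
        r = res[0] if res.size else 0.0
        if best is None or r < best[0]:
            best = (r, tau)
    return best[1]
```

When the data far off the passband really behave as a small polynomial in 1/s modulated by a pure delay, the residual drops to zero at the true delay.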
This toolbox is currently used by Alcatel Space in Toulouse, and a license agreement has recently been negotiated with Thales Airborne Systems. XLim (University of Limoges) is a heavy user of Presto-HF within the academic filtering community, and free license agreements are currently being considered with the microwave department of the University of Erlangen (Germany) and the Royal Military College (Kingston, Canada).
We started the development of Endymion, http://wwwsop.inria.fr/apics/endymion/index.html, a software licensed under the CeCILL license, version two; see http://www.cecill.info. This software will offer most of the functionalities of hyperion (whose development was abandoned in 2001), like the arl2 and peb2 procedures. It will be much more portable, since it no longer depends on an external garbage collector or on a plotter like agat. The symbolic evaluation, based on the Lisp reader, has been tested, debugged and documented.
Dedale-HF is a software meant to solve exhaustively, in reasonable time, the coupling matrix synthesis problem for the users of the filtering community. For a given coupling topology, the coupling matrix synthesis problem (C.M. problem for short) consists in finding all possible electromagnetic coupling values between resonators that yield a realization of given filter characteristics (see section ). Solving the latter problem is crucial during the design step of a filter, in order to derive its physical dimensions, as well as during the tuning process, where coupling values need to be extracted from frequency measurements (see Figure ).
Dedale-HF consists of two parts: a database of coupling topologies, and a dedicated predictor-corrector code. Roughly speaking, each reference file of the database contains, for a given coupling topology, the complete solution to the C.M. problem associated with particular filtering characteristics. The latter is then used as a starting point for a predictor-corrector integration method that computes the solution to the user's C.M. problem, i.e. the one corresponding to user-specified filter characteristics. The reference files are computed off-line using Groebner basis techniques or numerical techniques based on the exploration of a monodromy group. The use of such a continuation technique, combined with an efficient implementation of the integrator, produces a drastic reduction of the computational time, say by a factor of 20.
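The predictor-corrector idea can be sketched as follows (a bare-bones path tracker with a trivial predictor and a dense Newton corrector; Dedale-HF's dedicated integrator is far more elaborate):

```python
import numpy as np

def track(F, J, x0, steps=100, newton_iters=6):
    """Follow a solution of F(x, t) = 0 from a known root x0 at t = 0 to a
    root at t = 1: at each step, t is advanced and the previous solution
    is corrected by a few Newton iterations (J is the Jacobian in x)."""
    x = np.array(x0, dtype=float)
    for i in range(1, steps + 1):
        t = i / steps
        for _ in range(newton_iters):
            x = x - np.linalg.solve(J(x, t), F(x, t))
    return x
```

For instance, tracking the root of x^2 - (1 + 3t) from x = 1 at t = 0 yields x = 2 at t = 1; in the C.M. problem, the role of t is played by the deformation from the reference filtering characteristics to the user-specified one.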
Access to the database and to the integrator code is provided via the web at http://wwwsop.inria.fr/apics/Dedale. The software is free of charge for academic research purposes; a registration is however needed in order to have access to the tool's full functionality. Up to now, 50 users have registered around the world (mainly Europe, U.S.A., Canada and China) and 1500 reference files have been downloaded.
The great novelty in the RAWEB2002 (Scientific Annex to the Annual Activity Report of Inria) was the use of XML as an intermediate language, and the possibility of bypassing
The construction of the Raweb is explained schematically in figure . The input is either a
A second application of Tralics is the following: when researchers wish to publish an Inria Research Report, they send their PostScript or Pdf document, together with the start of the
The main philosophy of Tralics is to have the same parser as
Three major versions were released this year, namely 2.7, 2.8 and 2.9. The activity report uses version 2.8; the two research reports, and , were updated for version 2.9. The Tralics web page contains the documentation and a link to the sources (the software is licensed under CeCILL; it is open-source free software).
We changed the implementation of the \pers command, according to the semantics of the RA2006. Depending on the context, two or three additional arguments are required; their values must belong to a list defined by the Raweb team, given in the ra.tcf file. The command \refercite was added as a consequence of another change in the semantics of the Raweb.
The concept of tcf files was added in version 2.7. For a target type like ra (activity report) or rr (research report), this file describes how certain commands should be translated. For instance, in the case of ra, it contains the list of research themes; in the case of rr, it contains the list of metadata (author names, abstract, etc.) mentioned above. Before version 2.7, a single file contained everything. In the current distribution there are 14 tcf files, plus a model of ra.tcf, the actual file being distributed in the Raweb package.
The software can be further parameterised: if you translate a file A, that loads a class B and a package C, then Tralics reads files A.ult, B.clt and C.plt instead of the
The BibTeX support was enhanced. It is now possible to define, in a configuration file, additional entry types as well as additional fields. The field list of a non-builtin entry type is formed of all standard fields (in some order) plus user-defined ones.
We mentioned above an application where mathematical formulas were evaluated by redefining commands and category codes. There is an application of the same type (the goal being to put paper abstracts on the Web) where the same formula has to be typeset twice, once producing a MathML formula, and once a
We implemented all extensions defined by eTeX (these are enabled by default in every modern implementation of
We changed the internal encoding: it is now UTF-8. As a consequence, every 16-bit character is a valid character, and can have a category code, an lc code, etc. Both input and output can be UTF-8 or latin1. Entity names like ` ' are no longer created (character entities are used instead), except for math symbols, like ` α'.
The research report describing Tralics has been converted to XML, then to HTML, and put on the web. The second part of this report describes how we did this (the DTD, the style sheets, the ult files, etc.), as well as another example (a thesis) that was fully converted to XML, then HTML. In this case, some parts of the XML file had to be converted to images via
Solving overdetermined Cauchy problems for the Laplace equation on an annulus (in 2D) or a spherical layer (in 3D), in order to treat incomplete experimental data, is a necessary ingredient of the team's approach to inverse source problems, in particular for applications to EEG, since the latter involves propagating the initial conditions from the boundary to the center of the domain where the singularities (i.e. the sources) are sought. Here, the domain is typically made of several homogeneous layers of different conductivities.
Solving Cauchy problems on a 2D annulus is the main topic of M. Mahjoub's PhD thesis. This issue arises when identifying a crack in a tube or a Robin coefficient on the inner skull thereof. It can be formulated as a best approximation problem on part of the boundary of a doubly connected domain, and both numerical algorithms and stability results were obtained in this framework , . They generalize those previously obtained in simply connected situations , .
Still in the 2D case with incomplete data, the geometric problem of finding, in a stable and constructive manner, some unknown (insulating) part of the boundary of a domain was considered in the PhD thesis of I. Fellah. Approximation and analytic extension techniques described in section , together with numerical conformal transformations of the disk, here also provide us with interesting algorithms for the inverse problem under consideration. A related result was recently obtained, namely the L^p existence and uniqueness of the solution to the Neumann problem on a piecewise smooth domain with inward-pointing cusps (note that the endpoints of a crack are such cusps) when 1 < p < 2. Although it is reminiscent of classical L^p theorems on Lipschitz domains , it seems to be a new result, and the first one dealing with a cusp while still controlling the conjugate function; the proof uses weighted norm inequalities . Moreover, a Cauchy-type representation for the solution was obtained using properties of Smirnov classes, and the technique generalizes to mixed boundary conditions that occur when the crack is no longer assumed to be a perfect insulator. Describing higher-dimensional geometries with cusps to which the result can be extended is an interesting issue.
Cauchy problems on 3D spherical layers offer an opportunity to state and solve extremal problems for harmonic fields, for which an analog of the Toeplitz operator approach to bounded extremal problems has been obtained. More specifically, the density of traces of harmonic gradients in L^2 of an open subset of the 3D sphere was established, and a Toeplitz operator whose symbol is the characteristic function of such a subset was defined. Then, a best approximation on the subset of a general vector field by a harmonic gradient, under an L^2 norm constraint on the complementary subset, can be computed by an inverse spectral equation for the above-mentioned Toeplitz operator. Constructive and numerical aspects of the procedure (harmonic 3D projection, Kelvin and Riesz transformations, spherical harmonics) are under study, and encouraging results have been obtained on numerically simulated data , .
With the postdoctoral stay of E. Sincich, a collaboration with the CMI-LATP (University Marseille I) began on elliptic equations corresponding to diffusion processes with variable conductivity. In particular, the 2D div-grad equation, which leads to the so-called real Beltrami equation, was investigated. In the case of a smooth simply connected domain, we started analyzing this year the existence of solutions in Sobolev W^{1,p} classes for p > 2, and the characterization of their traces on the boundary using generalized Cauchy-Riemann equations. We also introduced less regular solutions of Hardy type (i.e. having bounded integral L^p means on 1D contours tending to the boundary). This should allow us to state Cauchy problems as bounded extremal issues in classes of generalized analytic functions, whose properties are currently under study , together with the behavior of the associated Cauchy and Beurling operators, as well as W^{1,p} estimates for generalized Riesz transforms.
The application that initially motivated this study is described in the next section.
Let us briefly describe a potential application of inverse boundary problems for the Beltrami equation, on a 2D doubly connected domain, to plasma confinement for thermonuclear fusion in a tokamak; this work was started in collaboration with the Laboratoire J. Dieudonné (University of Nice). In the particular case at hand, it seems possible to explicitly compute a basis of solutions (Bessel functions) that should greatly help the computations (see ), but the techniques should be valuable more generally.
In the most recent tokamaks, like JET or ITER, an interesting feature of the level curves of the poloidal flux is the occurrence of a cusp (a saddle point of the poloidal flux, called an X point), and it is desirable to shape the plasma according to a level line passing through this X point, for physical reasons relating to the efficiency of the energy transfer.
The problem we have in mind here is of dual Bernoulli type. Classically, the interior Bernoulli problem on a domain Ω (see ) is to find a closed subset A and a harmonic function u in Ω \ A such that u = 0 on ∂Ω, u = 1 on ∂A, and ∂u/∂n = Q on ∂A, where Q is a given positive constant and n indicates the outer normal. A natural generalization is obtained on letting u satisfy a more general diffusion equation div(σ ∇u) = 0 in Ω \ A, for some nonconstant conductivity σ > 0.
The dual problem arises when both u and ∂u/∂n are given on the known boundary ∂Ω, while u = Q is constant on ∂A. Note that this issue is overdetermined, that is, the boundary data on ∂Ω have to satisfy some compatibility conditions (of generalized Cauchy-Riemann type). One motivation for the dual problem is the observation that, in the transversal section of a tokamak (which is a disk if the vessel is idealized into a torus), the so-called poloidal flux is subject to ( ) outside the plasma volume for some simple explicit real analytic function σ, while the boundary of the plasma is a level line of this flux . Actually, when looking for an X point, the main interest is attached to the smallest connected so-called ``elliptic'' solution, which makes for a definite object of study among all other solutions.
When σ is constant and ∂u/∂n has zero mean on ∂Ω, it is well known that u has a conjugate function v such that u + iv is holomorphic in Ω. More generally, as soon as σ is bounded away from zero and ∂u/∂n has zero mean on ∂Ω, a generalized conjugate v exists such that f = u + iv satisfies the so-called real Beltrami equation:
∂f/∂z̄ = ν conj(∂f/∂z),
where ν = (1 − σ)/(1 + σ). Moreover, the Dirichlet-Neumann data for u determine the boundary values of f on ∂Ω (up to an additive imaginary constant). For fixed A and Q, we intend to study the extremal problem of best approximating these values by (the trace on ∂Ω of) a solution to ( ) under the constraint that its real part is nonnegative and at most Q on ∂A. This is an infinite-dimensional convex problem, whose Lagrange parameter will indicate both how to deform and how to modify A locally in order to improve the criterion.
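In the constant-conductivity case, the passage from u to its conjugate v on a boundary circle is a classical Fourier multiplier, which can be sketched numerically as follows (the formula is standard; the helper name is ours):

```python
import numpy as np

def conjugate_on_circle(u):
    """Boundary values of the harmonic conjugate v (normalized to zero mean)
    from samples of u on the unit circle, via the Fourier multiplier
    v_hat(k) = -i * sign(k) * u_hat(k)."""
    n = len(u)
    uh = np.fft.fft(u)
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer frequencies
    return np.fft.ifft(-1j * np.sign(k) * uh).real
```

For u = cos(theta) this returns v = sin(theta), i.e. u + iv = e^{i theta}, the trace of the holomorphic function z; the generalized conjugate for nonconstant σ is what the real Beltrami equation above replaces this construction with.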
The fact that 2D harmonic functions are real parts of analytic functions allows one to tackle issues in singularity detection and geometric reconstruction from boundary data of solutions to the Laplace equation, using the meromorphic and rational approximation tools developed by the team. Some electrical conductivity defaults can be modeled by pointwise sources inside the considered domain. In dimension 2, the question made significant progress in recent years: the singularities of the function (of the complex variable) which is to be reconstructed from boundary measurements are poles (case of dipolar sources) or logarithmic singularities (case of monopolar sources). Hence, the behavior of the poles of the rational or meromorphic approximants, described in section , allows one to efficiently locate their position. This is the topic of the article , where the related situation of small inhomogeneities connected to mine detection is also considered.
The problem of source recovery can be handled in 3D balls by using best rational approximation on 2D cross-sections (disks), from traces of the boundary data on the corresponding circles. It turns out that each of these traces coincides with a 2D analytic function in the slicing plane that has branched singularities inside the disk . These singularities are related to the actual location of the sources (namely, they in turn reach a maximum in modulus when the plane contains one of the sources). Hence, we are back in the 2D framework, where approximate recovery of these singularities can be performed using best rational approximation.
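The mechanism can be seen on the simplest toy case of a polar singularity: for f(z) = a/(z - z0) with |z0| < 1, the negative Fourier coefficients of the boundary values are a*z0^k, so the pole of a degree-1 approximant, here read off as a ratio of coefficients, recovers z0. This is a caricature only: the singularities arising in the applications above are branched, not polar.

```python
import numpy as np

def locate_pole(samples):
    """Recover z0 from samples of f(z) = a/(z - z0), |z0| < 1, taken at the
    n-th roots of unity: the DFT bins -1 and -2 carry a and a*z0 (up to
    exponentially small aliasing), so their ratio estimates z0."""
    C = np.fft.fft(samples)
    return C[-2] / C[-1]
```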
We also started to consider more realistic geometries for the 3D domain under consideration. A possibility is to parametrize it in such a way that its planar cross-sections are quadrature domains or R-domains. In this framework, best rational approximation can still be performed in order to recover the singularities of solutions to the Laplace equation, but complexity issues have to be examined carefully. The preliminary case of an ellipsoid is the topic of the work in progress . Note that it requires the computation of an explicit basis of ellipsoidal harmonics.
Finally, we have begun to consider actual 3D approximation for such inverse problems. Quaternionic analysis seems to be a relevant tool, but the multiplicative side of the theory remains to be developed.
In 3D, epileptic regions in the cortex are often represented by pointwise sources that have to be localized from measurements, on the scalp, of a potential satisfying a Laplace equation (EEG, electroencephalography). Note that the patient's head is here modeled as a nested sequence of spherical layers. This inverse EEG problem is the object of a collaboration between the Apics and Odyssée teams through the ACI ``ObsCerv''. A breakthrough was made last year which now makes it possible to proceed via best rational approximation on a sequence of 2D disks along the inner sphere , , . The point here is that, up to an additive function harmonic in the 3D ball, the trace of the potential on each boundary circle coincides with a function having branched singularities in the corresponding disk. The behavior, along the family of disks, of the poles of the best rational approximants on each circle is strongly linked to the location of the sources, using properties discussed in sections and . In the particular case of a unique source, we end up with a rational function, which makes for easy detection; when there are several sources, their localisation requires slightly more sophisticated machinery to make the convergence of the poles of meromorphic approximants effective (see section ). This and other related issues, including some preprocessing of the function, are still under study.
The goal of this work is to implement a stock of parameterizations that could be used for approximation purposes (see section ), while taking into account specific properties induced by the physics, such as symmetry, passivity, or some other constraint on the realization matrix, like the structure of the coupling in a microwave filter (see section ).
Tangential Schur algorithms provide interesting tools to parameterize conservative (lossless) functions by means of interpolation data. An atlas of charts has been derived from Nevanlinna-Pick interpolation values, in which a function can be represented by a balanced realization computed as a product of unitary matrices built from the Schur parameters . Such an atlas presents a number of advantages in view of the approximation problems we have in mind: it ensures identifiability, takes the stability constraint into account, and exhibits nice numerical behavior. It has been used in the software RARL2 (see section ). More general interpolation values can be used, associated with a Nudelman (contour integral) interpolation problem. New atlases can also be built, which allow us to deal with systems having real coefficients. We paid special attention to an atlas which uses a nice mutual encoding property of lossless functions, and which has been implemented in a new version of the software RARL2. A paper reporting on these results has been accepted for publication . All these atlases offer a lot of flexibility to design and adapt charts when necessary, for example when one wants to change charts while running an optimization process. For instance, using a realization in Schur form, a chart can always be found in which all the Schur vectors are zero. Now, the balanced realizations obtained in a given chart possess no particular structure. However, upon choosing the interpolation points at zero and the directions in a particular manner among standard basis vectors, a subatlas can be specified in which the balanced canonical forms have a staircase structure, with the property that the corresponding controllability matrix is positive upper-triangular. Such canonical forms are of interest because of their nice behavior under truncation. The corresponding atlas is minimal, in that no chart can be left out without losing the covering property on the manifold.
These results will be published in the LAA special issue in honor of Paul Fuhrmann . Up to now, they only concern discrete-time transfer functions. Continuous-time, which is relevant in many applications, is under study, and preliminary results have been obtained in the SISO case that relate the well-known Schwarz form to a boundary interpolation problem on the imaginary axis.
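For intuition, the inverse Schur step underlying such parameterizations can be illustrated in the scalar discrete-time case: starting from a unimodular constant, each step with a Schur parameter of modulus less than one raises the degree by one and preserves losslessness. The sketch below (a plain scalar recursion, not the matricial tangential algorithm implemented in RARL2; the function name and parameter values are our own illustration) checks that the resulting function has modulus one on the unit circle and is contractive inside:

```python
import cmath

def lossless_from_schur(gammas, z):
    """Evaluate at z the lossless (inner) scalar function of degree
    len(gammas) built by inverse Schur steps from parameters |gamma| < 1."""
    f = complex(1.0)                       # degree-0 lossless seed
    for g in reversed(gammas):
        # one inverse Schur step: raises the degree by 1, keeps |f| <= 1
        f = (g + z * f) / (1 + g.conjugate() * z * f)
    return f

# losslessness: modulus exactly 1 on the unit circle, < 1 inside the disk
gammas = [0.5 + 0.1j, -0.3j, 0.2]
for theta in (0.0, 0.7, 2.1):
    z = cmath.exp(1j * theta)
    assert abs(abs(lossless_from_schur(gammas, z)) - 1.0) < 1e-12
assert abs(lossless_from_schur(gammas, 0.0)) < 1.0
```

Each step is a Moebius transform of Blaschke type, which maps the unit circle to itself whenever |gamma| < 1; this is what makes the Schur parameters a free, identifiable parameterization of lossless functions.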
Surface Acoustic Wave (in short: SAW) filters consist of a series of transducers which transmit electrical power by means of surface acoustic waves propagating on a piezoelectric medium. They are usually described by a mixed scattering matrix which relates acoustic waves, currents and voltages. By reciprocity and energy conservation, these transfers must be either lossless, contractive, or positive real, and symmetric. In the design of SAW filters, the desired electrical power transmission is specified. An important issue is to characterize analytically the functions that can actually be realized for a given type of filter.
In any case these functions lie in the Schur class, and if they have degree n they can be embedded into a conservative matrix of McMillan degree at most n+2. This conservative matrix describes the global behavior of the filter. Such a completion problem is known as the Darlington synthesis, and in the rational case it always has a solution without increase of the McMillan degree. However, in our case additional constraints arise from the geometry of the filter, like the symmetry and certain interpolation conditions, and these are responsible for the increase of the degree by 2. In , a complete mathematical description of such devices is given, including realizations for the relevant transfer functions, as well as a necessary and sufficient condition for a symmetric Darlington synthesis preserving the McMillan degree.
More generally, in collaboration with P. Enqvist from KTH (Stockholm, Sweden), we characterized the existence of a symmetric Darlington synthesis with specified increase of the McMillan degree: a symmetric extension of a symmetric contractive matrix S of degree n exists in degree n+k if, and only if, I − SS^{*} has at most k zeros with odd multiplicity , . In the language of circuit theory, this result tells us about the minimal number of gyrators to be used in circuit synthesis. These results have been extended to the case of real-valued functions using a frequency-domain approach . In these studies, only extensions of twice the initial size have been considered. In view of multiport synthesis applications, it is highly desirable to generalize these results to other types of extensions. Of particular interest is the extension of a scalar Schur function, from a first-row extension of any size, to a square symmetric conservative matrix. The techniques developed so far for the symmetric Darlington synthesis should enable us to carry out such a generalization.
Regular contact has been maintained with TEMEX (Sophia Antipolis), leading to a new approach to designing an ``ideal SAW filter'' . This filter has a ``symmetric'' geometry, the left transducer being the mirror image of the right one, and its scattering matrix S is not only in the Schur class but in fact conservative. The electrical power transmission, on which the specifications are given, is the transmission part of S. This scattering matrix is related through a Cayley transform to the admittance matrix, and in the case of an ideal filter it is completely determined by the transmission part of the admittance. We then use Zolotarev optimization methods (see section ) to design the admittance. This stage is not yet final, in that only the poles of the transmission function are optimized, whereas the specific links between its denominator and numerator have not yet been taken into account. The behavior of the electrical power transmission is very similar to that of the admittance and can easily be tuned using the latter (see Figure ).
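In the scalar case, the Cayley transform relating admittance and scattering descriptions is easy to illustrate: with a unit normalization, s = (1 − y)/(1 + y) maps positive real admittance values (Re y ≥ 0) into the closed unit disk, and purely imaginary (lossless) ones onto the unit circle. The snippet below is a toy check of this fact (the normalization and sample values are ours, not TEMEX data):

```python
def scattering_from_admittance(y):
    """Scalar Cayley transform with unit normalization: s = (1 - y)/(1 + y)."""
    return (1 - y) / (1 + y)

# positive real admittance values (Re y >= 0) map into the closed unit disk,
# purely imaginary ones onto the unit circle (lossless case):
for y in (1.0, 0.5 + 2j, 3.0 - 1j, 5j):
    assert abs(scattering_from_admittance(y)) <= 1 + 1e-12
assert abs(scattering_from_admittance(1.0)) < 1e-12   # matched load: s = 0
```

The inequality follows from |1 − y|^2 ≤ |1 + y|^2 being equivalent to Re y ≥ 0, which is why contractivity of S and positive realness of the admittance are two faces of the same constraint.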
The results of and were exploited last year to prove the convergence in capacity of L^{p} best meromorphic approximants on the circle (p ≥ 2) to Cauchy transforms of complex measures supported on a hyperbolic geodesic, plus a rational function. Some mild conditions (bounded variation of the argument and power-thickness of the total variation) were required on the measure. Recall that a sequence of functions is said to converge in capacity if, for each fixed ε > 0, the (logarithmic) capacity of the set where the distance to the limit exceeds ε goes to 0 along the sequence. An article reporting on these results has been submitted . This year, we proved strong asymptotics for the aforementioned functions when the density of the measure does not vanish and its growth at the endpoints is like a fractional power, yielding strong geometric convergence in this case, with a reproduction of the polar singularities of the function with their multiplicities. This result is important for inverse problems of mixed type, like those mentioned in section , where monopolar and dipolar sources are handled simultaneously. Convergence was even obtained on the support of the measure if the latter has analytic density. An article is currently being written on this topic.
It is known after that the denominators of best rational or meromorphic approximants in the L^{p} norm on a closed curve (say the unit circle T to fix ideas) satisfy, for p ≥ 2, a non-Hermitian orthogonality relation when the approximated functions are Cauchy transforms of complex measures on a curve (the locus of singularities) contained in the unit disk D. This has been used to assess the asymptotic behavior of the poles of such an approximant when this curve is a hyperbolic geodesic arc. More precisely, under weak regularity conditions on the measure, the counting measure of these poles converges weak-star to the equilibrium distribution of the condenser whose plates are the unit circle T and the arc. Non-asymptotic bounds were also obtained for the sum of the complements of the hyperbolic angles under which the poles ``see'' the arc: the sum of these complements over all the poles (there are n in total if the approximant has degree n) is bounded by the aperture of the arc plus twice the variation of the argument of the measure (which is independent of n). This produces ``hard'' testable inequalities for the location of the poles, which should prove particularly valuable in inverse source problems (because they are not asymptotic in nature), see and . We proved this year strong asymptotics that deal not with the counting measure of the poles (which only yields results in proportion) but with the behavior of all of them. They were obtained for Cauchy transforms of smooth non-vanishing complex measures on a hyperbolic geodesic arc in the disk, provided the density increases at least like a fractional power at the endpoints of the arc. This new and interesting result generalizes most of the previous works on a segment , , and paves the way for further study of the uniqueness of local best approximants and of inverse source problems. The technical problem facing us is to get rid of the growth assumptions at the endpoints, which are induced by the technique (going over to the circle in order to use Fourier analysis and compactness of Hankel operators with continuous symbol). The addition to the approximated function of a rational term which is not singular on the arc has also been handled, generalizing results of Gonchar and Suetin . A numerical illustration is shown in Figures for various approximants to the functions F and G given below.
The more general situation where the locus of singularities is a so-called ``minimal contour'' for the Green potential (of which a geodesic arc is the simplest example) has been settled this year with the same conclusion concerning the weak-star convergence of the counting measure of the poles. The technique uses a potential-theoretic analysis of the nth root of the error and a refinement of Parfenov's estimates on the asymptotics of singular values of Hankel operators. The writing up of this result is underway. It is of particular significance with respect to the determination of several 2D sources or piecewise analytic cracks from overdetermined boundary data (see sections and ).
To carry out the identification and design of filters under passivity constraints (such constraints are common since passive devices are ubiquitous, including in particular microwave filters), it is natural to consider the mixed bounded extremal problem stated in section . An algorithm to asymptotically solve this problem when p = 2 in nested spaces of polynomials was developed last year, and this year a dual approach along the lines of convex optimization theory (although in an infinite-dimensional context) has been investigated. Specifically, the gradient of the dual functional was computed when f lies in the L log L Zygmund class, and this paves the way for an algorithm with stronger convergence properties than the polynomial one. A connection with normalized Cauchy transforms has also been carried out, providing a handle to analyze the regularity properties of the solution. More precisely, a condition on the constraint M at the endpoints of the bandwidth K has been given that ensures the continuity of g for smooth f. Such regularity conditions should greatly impinge on the numerical practice of the problem, and should be valuable to estimate delays in waveguides, thereby complementing the existing procedures dealing with this issue in PrestoHF. An article reporting on these results is currently being written.
We studied in some generality the case of parameterized linear systems characterized by the classical state-space equations dx/dt = A(p)x + B(p)u, y = C(p)x, where p = (p_{1}, ..., p_{r}) is a finite set of r parameters and (A(p), B(p), C(p)) are matrices whose entries are polynomials (over the field ) in the variables p_{1}, ..., p_{r}. For such a parameterized system we call C(p)(sI − A(p))^{-1}B(p) the transfer function (or transfer matrix) of the system at the parameter value p. Some important questions in filter synthesis concern the determination of the following parameterized sets.
General results were obtained about these sets, in particular a necessary and sufficient condition ensuring that their cardinality is finite. In the special case of coupled resonators, an efficient algebraic formulation has been derived which allowed us to compute these sets for nearly all filter geometries in common use, by means of the Gröbner engine FGb developed by the SALSA project at INRIA-Rocquencourt. However, for a new class of high-order filters first presented in , the procedure breaks down because of the complexity of the Gröbner basis computation. This led us to consider instead homotopy methods based on continuation techniques, in order to solve the defining algebraic system. The usual complexity of such methods, based on the Bezout bound or on mixed-volume computations, appeared to be extremely pessimistic in our case because of the degeneracy of our system: for a 10^{th}-order filter the Bezout bound is about 10^{44}, whereas the number of actual solutions over the ground field is only 384. To overcome this difficulty we are currently developing a continuation method which consists in exploring the monodromy group of an algebraic variety by following a family of paths that separate the branch points. More precisely, let h(x, a) be an irreducible polynomial system depending on a scalar complex parameter a (x may be multivariate) and generically of dimension zero for a given value of a. Suppose now that we are given a particular solution h(x_{0}, a_{0}) = 0 and we want to compute the complete solution set for the value a_{1}. It can be shown that lifting a family of paths (in the complex plane) from a_{0} to a_{1} that separate the branch points (i.e. those values of a for which the root functions a → x cannot be locally defined) will yield a complete solution set of h(x, a_{1}) = 0 (see Figure ). At the moment, the family of paths is constructed in a brute-force manner and leads to heuristics with no formal guarantee on the exhaustivity of the solution set thus obtained, but an asymptotic one: provided some real spacing parameter between paths is small enough, the algorithm yields a complete solution set. Improvements of these methods (including a systematic way of ``chasing'' branch points) are now under study.
For applications of these techniques to microwave filter synthesis, improving the computational time was necessary, which led to the design of the software and filter-topology database Dedale-HF (see section ). It is based on a continuation method where some particular realization fibers are computed offline, using a Gröbner basis or else the preceding homotopy method, which allows us to speed up online computations considerably.
Using our software, it remained to show that filters whose topologies admit multiple solutions are actually realizable in practice. In particular, some ambiguities occurring during the tuning step needed to be removed. This was done by introducing the notion of ``discriminant experiment'', which amounts to tracing the fiber of possible realizations while varying a single physical parameter (tuning a screw, for example). Techniques to choose this parameter so as to identify, among all possible coupling matrices, the one implemented by the device have been developed. They allowed us to perform the practical realization of two filters based on the topologies of Figures and , which admit fibers of cardinality 15 and 48 respectively. This work was carried out in collaboration with XLim. It was supported by the collaborative action ARC ``Sila'' funded by INRIA. The application of this work to the synthesis of high-order multiband filters was published in and presented in .
On introducing the ratio of the numerators of the transmission and reflection parts of the scattering matrix, the design of multiband responses for high-frequency filters (see section ) reduces to the following normalized optimization problem of Zolotarev type :
where (resp. ) is a finite union of compact intervals I_{i} of the real line corresponding to the passbands (resp. stopbands), and P_{m}(K) stands for the set of polynomials of degree less than m with coefficients in the field K. Depending on physical symmetries of the filter, it is of interest to solve problem ( ) for the ``real'' problem, the ``mixed'' problem, or the ``complex'' problem, according to the field over which the coefficients are sought. We have shown that the ``real'' Zolotarev problem can be decomposed into a sequence of concave maximization problems, the best solution of which yields the optimal solution to the original problem. A characterization in terms of an alternation property has also been given for the solution to each of these subproblems. Based on this alternation, a Remez-type algorithm has been derived. It computes the solutions to these problems in the polynomial case (i.e. when the denominator q is fixed), and allows for the computation of a dual-band response (see Figure ) according to frequency specifications (see Figure for an example from the spacecraft SPOT5 (CNES)). Further, we designed an algorithm in the rational case which, unlike methods based on linear programming, avoids sampling over all frequencies; it is currently under study. This raises the question of the ``generic normality'' of the approximant with respect to the location of the intervals, which has not yet received a definite answer. Finally, the design of efficient numerical procedures to tackle the ``mixed'' and ``complex'' cases remains a challenging task. These matters will be pursued in V. Lunot's doctoral work. The Remez algorithm and its application to filter synthesis are described in , .
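The alternation property invoked above reduces, in the simplest textbook situation (one band, polynomial approximation of a convex function), to classical Chebyshev equioscillation. The sketch below checks it for the best uniform linear approximation of exp on [0, 1]; this single-interval example is ours, while the team's algorithm addresses the much harder multi-band rational case:

```python
import math

# best uniform linear approximation p(x) = c1*x + c0 to f(x) = exp(x) on [0, 1]:
# for convex f the optimal slope is the secant slope, the interior
# equioscillation point x1 satisfies f'(x1) = c1, and c0 levels the error.
f = math.exp
c1 = math.e - 1.0                       # secant slope of exp over [0, 1]
x1 = math.log(c1)                       # where f'(x1) = c1
c0 = (f(0.0) + f(x1) - c1 * x1) / 2.0   # averages endpoint and interior errors

def err(x):
    return f(x) - (c1 * x + c0)

# Chebyshev alternation: equal magnitudes, alternating signs at 0, x1, 1
E = err(0.0)
assert E > 0 and abs(err(1.0) - E) < 1e-12 and abs(err(x1) + E) < 1e-12
```

A Remez-type algorithm iterates exactly this structure on a reference set of points, exchanging the reference with the extrema of the current error until the equioscillating configuration is reached.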
An OMUX (Output MUltipleXor) can be modeled in the frequency domain through scattering matrices of filters, like those described in section , connected in parallel onto a common guide (see Figure ). The problem of designing an OMUX with specified performance in a given frequency range naturally translates into a set of constraints on the values of the scattering matrices and of the phase shift introduced by the guide in the considered bandwidth.
An OMUX simulator on a Matlab platform was designed in recent years and used to test some assumptions on the way the OMUX operates. One assumption is that each right section of the OMUX, when the guide is oriented from left to right with its common access lying at the extreme left, acts as a short-circuit in the bandwidth of those channels lying ``upstream'' (i.e. those channels lying to the left of the considered section, so that they are reached first by a wave emanating from the access of the guide). Another assumption is that each channel must reject a little bit in its own bandwidth in order to trap energy otherwise reflected by the above-mentioned short-circuit. Under the terms of a recently signed contract with Alcatel Alenia Space (see section ), these assumptions will be used to design a dedicated software to optimize OMUXes, by first optimizing one channel while the others are fixed and then looking for a fixed point over all channels.
The direct approach, currently used by the manufacturer, consists in coupling a simulator with a general-purpose ``optimizer'' in order to reduce transmission and reflection wherever they are too large. This yields unsatisfactory results in cases of high degree and narrow bandwidth, in particular because convergence often fails and multiple initial points must be used, resulting in a very lengthy and sometimes unsuccessful search. Besides, manifold peaks arising from the dilation of the cavities caused by increased temperature (when the satellite is exposed to the sun) can ruin the design in operational conditions. We expect to be able to produce a multi-phased tuning procedure, first relaxed, channel after channel, then global, using a quasi-Newton method.
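The channel-after-channel relaxation envisaged above is, in essence, a Gauss-Seidel sweep: each channel is tuned with the others frozen, and the sweeps are iterated until a fixed point of the overall cost is reached. The toy sketch below illustrates the scheme only; the function names and the quadratic stand-in for the OMUX mismatch measure are hypothetical and unrelated to the contracted software:

```python
def minimize_1d(g, lo=-10.0, hi=10.0, iters=100):
    """Ternary search for the minimizer of a unimodal 1-D function g."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def relax(x, cost, sweeps=50, tol=1e-10):
    """Gauss-Seidel relaxation: optimize one channel at a time, with the
    others frozen, and sweep until a fixed point of the cost is reached."""
    prev = cost(x)
    for _ in range(sweeps):
        for k in range(len(x)):
            x[k] = minimize_1d(lambda v: cost(x[:k] + [v] + x[k + 1:]))
        cur = cost(x)
        if prev - cur < tol:          # fixed point over all channels
            return x
        prev = cur
    return x

# toy coupled quadratic cost standing in for the OMUX mismatch measure:
cost = lambda x: sum((xi - i) ** 2 for i, xi in enumerate(x)) \
                 + 0.1 * sum((x[i] - x[i + 1]) ** 2 for i in range(len(x) - 1))
x = relax([0.0, 0.0, 0.0], cost)
```

For this weakly coupled cost the sweeps converge quickly to the global minimizer (1/11, 1, 21/11); the hope expressed above is that the physical coupling between OMUX channels is likewise weak enough for such a relaxed phase to provide a good starting point for the global quasi-Newton phase.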
This is part of the doctoral work of M. Petreczky on hybrid systems; he visited the team in 2005-2006 under the CTS program (see section ).
Motivated by orbit transfer with low-thrust engines (see section ), we developed, for conservative systems with small control, a notion of average control system . Using averaging techniques in this context is rather natural, since the free system produces a fast periodic motion and the small control a slow one. In this vein, some recent literature proceeds as follows: the control is assigned, for instance optimal control via Pontryagin's Maximum Principle or some feedback designed beforehand, and then averaging is performed on the resulting ordinary differential equation to analyze its behavior, or rather its limit behavior as the control magnitude tends to zero.
The novelty of the work in is to introduce averaging before assigning the control, hence getting a control system that better describes the limit behavior. This concept of average control system is convenient when comparing different control strategies. For instance, it allowed us in to give a partial answer to an open question stated in on estimating the minimum transfer time between two elliptic orbits as the thrust magnitude tends to zero.
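The phenomenon can be seen on a toy scalar system (our own illustration, unrelated to the orbit-transfer equations): for the closed loop dx/dt = −εx(1 + cos t), the average system is dx/dt = −εx, and over horizons of order 1/ε the two trajectories stay within O(ε) of each other:

```python
import math

def simulate(eps, T, dt=1e-3):
    """Euler integration of the oscillating closed loop dx/dt = -eps*x*(1+cos t)."""
    x, t = 1.0, 0.0
    while t < T:
        x += dt * (-eps * x * (1.0 + math.cos(t)))
        t += dt
    return x

eps, T = 0.01, 100.0                 # horizon of order 1/eps
x_full = simulate(eps, T)            # full fast-oscillating system
x_avg = math.exp(-eps * T)           # average system dx/dt = -eps*x, x(0) = 1
# exact solution of the full system: exp(-eps*(T + sin T)), so the gap is O(eps)
assert abs(x_full - x_avg) < 5 * eps
```

Here the averaging is done on the closed loop, as in the earlier literature; the point of the average control system is precisely to perform this limit once and for all at the level of the system, before any particular control is chosen.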
The study concerns the control (orbit transfer) of satellites equipped with low-thrust engines, such as plasma thrusters, which are efficient with respect to fuel consumption but deliver a thrust much smaller than conventional ``chemical'' engines (the ratio between the delivered acceleration and the gravity being of the order of 10^{-3}, sometimes less). This problem was raised by Alcatel Alenia Space, and Alex Bombrun's PhD thesis is supported by this company under contract (see section ).
The first results (and maybe the most interesting ones in practice) dealt with the computation of feedback controls, using ad hoc Lyapunov functions, that approximate time-optimal trajectories very well . The easy implementation of such control laws makes them attractive as compared to genuine optimal control.
Results obtained this year concern the asymptotic study of this problem as the thrust tends to zero, which captures the ``low thrust'' effect. We used the average control system described in section . The result in on discontinuous feedback was motivated by the use of feedback control on the average system.
Another piece of work concerned the use of feedback for the controlled three-body problem. This was the topic of Jonathan Chetboun's internship. This activity is also supported under contract by Alcatel Alenia Space (Cannes), where the above-mentioned internship partly took place. Within Apics, this research allowed us to numerically simulate a mission like SMART-1 (Earth-Moon) with feedback controls . A more conceptual understanding of the method still needs to be developed.
Collision avoidance is important, for instance during the formation configuration of a cluster of space vehicles. Motivated by the practical relevance of this problem, we conducted a preliminary study aiming at building ``artificial potentials'' that depend on position and velocity. The report mainly contains computations of the boundary set on which this potential should be large in order to avoid collision. Inside this boundary lies the set in which collision avoidance can be guaranteed.
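As a position-only caricature of such artificial potentials (the study's potentials also involve velocity, and the radius and coefficients below are hypothetical), one can combine an attractive term toward the goal with a barrier that blows up on the safety boundary around an obstacle:

```python
import math

R_SAFE = 1.0   # hypothetical safety radius around the obstacle

def potential(pos, goal, obstacle):
    """Attractive term toward the goal plus a barrier that blows up
    on the safety boundary around the obstacle (position-only toy)."""
    d_goal = math.dist(pos, goal)
    d_obs = math.dist(pos, obstacle)
    if d_obs <= R_SAFE:
        return math.inf                    # forbidden region
    return 0.5 * d_goal ** 2 + 1.0 / (d_obs ** 2 - R_SAFE ** 2)

goal, obs = (4.0, 0.0), (2.0, 0.0)
# the barrier dominates as the safety boundary is approached:
assert potential((2.0, 1.001), goal, obs) > potential((2.0, 2.0), goal, obs)
assert potential((2.0, 0.5), goal, obs) == math.inf
```

A gradient-descent control on such a potential can never cross the boundary where the barrier diverges, which is the mechanism by which the boundary set computed in the report guarantees collision avoidance inside it.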
This research was initiated with the postdoctoral stay of H. Jirari, and its goal was described in section . The systems under study here are quantum systems of low dimension (1 or 2 qubits) whose autonomous dynamics is given by the Schrödinger equation and whose interaction with the environment is described by some dissipative term. According to the latter, one distinguishes between the Bloch-Redfield model and the (more heavily simplified) Lindblad model. In , numerical evidence was obtained that in the Bloch-Redfield model, at least in low-dimensional ``simple'' cases, there exists a control that drives the system from some pure initial state to a target one, while compensating the dissipative effect of the environment. We have pursued this numerical effort on more strongly coupled environments, computing controls via optimal control techniques. At the same time, we began the investigation of the control-theoretic structure of the above-mentioned models. In contrast with the Lindblad model, where decoherence is unavoidable in certain cases, it seems that the Bloch-Redfield model exhibits a more robust behaviour which calls for further understanding.
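For a single qubit, the dissipative competition described above can be caricatured by the phenomenological Bloch equations with T1/T2 relaxation, a standard Lindblad-type model (our own toy illustration, not the Bloch-Redfield setting studied here): without control, the environment drags the Bloch vector to its thermal equilibrium, which is precisely what a compensating control must fight.

```python
import math

def bloch_step(b, omega, T1, T2, bz_eq, dt):
    """One Euler step of the Bloch equations: precession about omega
    plus T1/T2 relaxation toward the equilibrium (0, 0, bz_eq)."""
    bx, by, bz = b
    ox, oy, oz = omega
    dbx = oy * bz - oz * by - bx / T2
    dby = oz * bx - ox * bz - by / T2
    dbz = ox * by - oy * bx - (bz - bz_eq) / T1
    return (bx + dt * dbx, by + dt * dby, bz + dt * dbz)

# uncontrolled evolution from a pure superposition state:
b = (1.0, 0.0, 0.0)
omega, T1, T2, bz_eq = (0.0, 0.0, 1.0), 1.0, 0.5, -1.0
for _ in range(10000):                      # integrate to t = 10 >> T1, T2
    b = bloch_step(b, omega, T1, T2, bz_eq, dt=1e-3)
# the environment has dragged the state to thermal equilibrium (0, 0, -1)
```

In the full control problem a time-dependent term is added to omega, and the optimization seeks a pulse that reaches a target state before the relaxation terms wipe the coherence out.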
Contract n° 04/CNES/1728/00DCT094 involving CNES, XLIM and INRIA, whose objective is to work out a software package for identification and design of microwave devices. The work at INRIA includes:
the modeling of delays (see section ),
the exhaustive determination of the coupling coefficients on some case studies (see section ),
the OMUX simulator with exact computation of derivatives.
This contract has been renewed for 16 months starting November 2004, in order to develop a generic code for coupling determination and to carry out the optimization of OMUX.
A contract, reference B00375, has been signed between INRIA and Alcatel Alenia Space (Toulouse branch), in which INRIA will design and provide software for OMUX simulation with efficient initial conditions for an optimization algorithm based on recursive tuning of the channels.
A contract is in the final stage of approval between the two partners. It bears on a Lyapunov-function-based design methodology to set up a feedback law achieving prescribed orbital transfer for a low-thrust satellite. A numerical Matlab code demonstrating the validity of the method will be included.
L. Baratchart is a member of the editorial boards of Computational Methods and Function Theory and Complex Analysis and Operator Theory.
Together with the project-teams Caiman and Odyssée (INRIA-Sophia Antipolis, ENPC), the University of Nice (J.A. Dieudonné lab.), CEA, CNRS-LENA (Paris), and some French hospitals (Pitié-Salpêtrière in Paris, Timone in Marseille), we participate in the national action ACI Masse de données OBSCERV, 2003-2006 (inverse problems, EEG).
The postdoctoral training of E. Sincich was funded by INRIA.
The Team was a member of the Marie Curie multi-partner training site Control Training Site, number HPMT-CT-2001-00278, 2001-2006. See http://www.supelec.fr/lss/CTS/. This network ended in April 2006.
The project is a member of the Working Group Control and System Theory of the ERCIMconsortium, see http://www.ladseb.pd.cnr.it/control/ercim/control.html.
EPSRC grant (EP/C004418) ``Constrained approximation in function spaces, with applications'', with Leeds University (UK) and the University Lyon I, 2005-2006.
STIC-INRIA and Aire Développement grants with LAMSIN-ENIT (Tunis), « Problèmes inverses du Laplacien et approximation constructive des fonctions » (from which M. Zghal and M. El Guenechi received financial support for their internships).
NSF EMS21RTG students exchange program (with Vanderbilt University).
The following scientists gave a talk at the seminar:
Christophe Prieur, LAAS CNRS Toulouse, Robust stabilization of nonlinear control systems by means of hybrid feedbacks.
Ed B. Saff, Vanderbilt University, Asymptotics of polynomial zeros: beware of plots!
H. Jirari, Institut für Physik, Universität Graz, Austria, Contrôle optimal d'un qubit.
Luca Rondi, Département de Mathématiques et Informatique, Univ. Trieste, Italy, A variational approach to the reconstruction of cracks.
Mazyar Mirrahimi, INRIA, SOSSO2, Identification de paramètres pour un système quantique.
Sacha Borichev, CMI-LATP, University Marseille I, Un théorème d'unicité pour l'espace de Korenblum.
Maxim Yattselev, On Baxter's Theorem with Meromorphic Approximation in Mind.
Jonathan Partington, School of Mathematics, Leeds University, U.K.
Karim Kellay, Stanislas Kupin, Stéphane Rigat, Hassan Youssfi, and the Analysis and Geometry team, LATP-CMI, Université Marseille I.
Fehmi Ben Hassen and Moncef Mahjoub, Lamsin-ENIT, Tunisia.
Pierre Rouchon, Centre Automatique et Systèmes, Ecole des Mines de Paris.
Edward B. Saff, Dept. of Mathematics, Vanderbilt University, USA.
Ugo Boscain, SISSA, Italy.
Grégoire Charlot, University of Montpellier II.
Vladimir Peller, University of Michigan at East Lansing, USA.
Alexei Poltoratski, Texas A&M University, College Station, USA.
Maxim Yattselev, Vanderbilt University, Nashville, USA.
Mihaly Petreczky, CWI, Amsterdam, The Netherlands.
L. Baratchart, DEA Géométrie et Analyse, LATPCMI, University Marseille I.
M. Olivi, Mathématiques pour l'ingénieur (Fourier analysis and integration), section Mathématiques Appliquées et Modélisation, 1ère année, Ecole Polytechnique de l'Université de Nice.
Moufida El Guenichi, « Problème inverse d'identification d'un coefficient de Robin nonlinéaire : stabilité », cotutelle with Lamsin-ENIT (Tunis).
Meriem Zghal, « Problème inverse d'identification d'un coefficient de Robin nonlinéaire : algorithmes de résolution », cotutelle with LamsinENIT (Tunis).
Jonathan Chetboun (ENPC), « Feedback et poussée faible pour le problème à plus d'un corps central »
Youssef El Fassy Fihry (École des Mines), « Étude d'ensembles accessibles et dispositifs anticollision ».
Alex Bombrun, « Commande optimale, feedback, et transfert orbital de satellites » (optimal control, feedback, and orbital transfer for low-thrust satellites).
Imen Fellah, ``Data completion in Hardy classes and applications to inverse problems'', cotutelle with LamsinENIT (Tunis).
Vincent Lunot, « Problèmes fréquentiels extrémaux, approximation rationnelle sous contrainte Schur et application à la synthèse de filtres ».
Moncef Mahjoub, « Complétion de données et ses applications à la détermination de défauts géométriques », cotutelle with Lamsin-ENIT (Tunis).
Meriem Zghal, ``Meromorphic approximation and inverse problems related to EEGMEG'', cotutelle with LamsinENIT (Tunis).
L. Baratchart was on the PhD reading committee of Mihaly Petreczky (CWI Amsterdam).
J.B. Pomet was on the PhD defense committee of Mihaly Petreczky (CWI Amsterdam).
J. Leblond was on the master committee of Hichem Bouraoui, LamsinEnit, Tunis.
L. Baratchart is a member of the « commission de spécialistes » (section 25) of the Université de Provence. He was a member of the scientific committee of the conference PICOF 2006.
J. Grimm is in charge of organizing the seminar on control and identification.
J. Grimm is a representative at the « comité de centre ». He is a member of the organising committee of PICOF 2006 (``Inverse Problems, Control, and Shape Optimization'').
J. Leblond has been a substitute member of the « Commission d'évaluation » of INRIA since September; she took part in several evaluation seminars and sat on hiring committees for CR and DR recruitment and promotion. She was a member of the scientific committee of the conference PICOF 2006. She participates in the working group « Doc » and is in charge of the Séminaires Croisés of the Research Unit.
J. Leblond and J. Grimm were co-editors of the proceedings of the CNRS-INRIA summer school ``Harmonic analysis and rational approximation: their role in signals, control and dynamical systems theory'' (Porquerolles, 2003), http://wwwsop.inria.fr/apics/anap03/index.en.html .
M. Olivi is a member of the CSD (Comité de Suivi Doctoral) of the Research Unit of Sophia Antipolis.
F. Seyfert is a member of the CDL (Comité de Développement Logiciel) of the Research Unit of Sophia Antipolis.
J.B. Pomet is a representative at the « comité technique paritaire » (CTP).
A. Bombrun presented communications at the ``6th AIMS Conference on Dynamical Systems, Diff. Equations and Applications'' in June (Poitiers), and at the ``Joint CTS-HYCON Workshop on nonlinear and hybrid systems'' in July (Paris).
L. Baratchart gave a conference at IWOTA 2006 (International workshop on operator theory and its applications), Seoul, Korea, in July.
L. Baratchart and M. Yattselev gave talks at the HSFO conference (Holomorphic Spaces of Functions and their Operators) at the CIRM, Luminy, France, in July.
L. Baratchart, A. Bombrun and J. Leblond gave communications at MTNS'06 (Mathematical Theory of Networks and Systems), Kyoto, Japan, in July.
J.B. Pomet was an invited speaker at ``Workshop on Geometry of vector distributions, differential equations, and variational problems'' (Trieste, Italy, December).
V. Lunot and P. Lenoir gave a talk at the International Microwave Symposium (San Francisco).
F. Seyfert was an invited speaker at the ``Workshop on Efficient Computation of Gröbner Bases'' (Linz, Austria, February).
F. Seyfert gave a talk at the ``International Workshop on Microwave Filters'' (Toulouse, France, October).
Concerning their joint work with Apics, our collaborators took the following actions.
M. Clerc presented a poster at HBM (Human Brain Mapping), Florence, Italy, in June;
F. Ben Hassen gave a communication at MAFELAP (Mathematics of Finite Elements and Applications), London, UK, in June;
M. Jaoua gave a communication at CARI'06, Bénin, and an invited plenary talk at the conference « Equations différentielles et Applications » at Annaba, Algeria, both in November.