The project was terminated on June 30, 2003. A proposal for a new project named APICS has been submitted to the steering committee of Inria Sophia Antipolis.

The Team develops effective methods for modelling, identification and control of dynamical systems.

Meromorphic and rational approximation in the complex domain, application to identification of transfer functions and matrices as well as singularity detection for 2-D Laplace operators. Development of software for frequency domain identification and synthesis of transfer matrices.

Control and structure of non-linear systems: continuous stabilization, non-linear transformations (linearization, classification).

Industrial collaborations with Alcatel-Space, Alcatel-R&I, CNES, IRCOM, Thomson-MX.

Exchanges with CWI (the Netherlands), CNR (Italy), the Universities of Illinois (Urbana-Champaign), South Florida (Tampa), California (San Diego), Alabama (Mobile), Minnesota (Minneapolis), Padova (Italy), Beer-Sheva (Israel), Leeds (UK), Maastricht and Amsterdam (the Netherlands), Kingston (Canada), and Szeged (Hungary), as well as Vanderbilt University (Nashville), TU Wien (Austria), TFH Berlin (Germany), the Colorado School of Mines, CINVESTAV (Mexico), ENIT (Tunis), and VUB (Belgium).

The project is involved in a NATO Collaborative Linkage Grant (with Vanderbilt University and ENIT-LAMSIN), in the ACI ``Obs-Crev'' (with the Teams Caiman and Odyssée from Inria-Sophia Antipolis, among others), in the ERCIM ``Working Group Control and Systems Theory'', in the TMR-ERNSI and TMR-NCN European research networks.

Let us first introduce the subject of Identification in some generality.

Abstracting the behavior of a phenomenon into mathematical equations is a step called modeling. It typically serves two purposes: the first is to describe the phenomenon with minimal complexity for some specific purpose, the second is to predict its outcome. This is general practice in most applied sciences, be it for design, control or prediction, although it is generally thought of as yet another optimization problem.

As a general rule, the user imposes on the model a parameterized form that reflects one's prejudices, knowledge of the underlying physical system, and the algorithmic effort one is prepared to invest. Looking for such a trade-off usually raises the question of approximating the experimental data by the prediction of the model when the latter is subject to external excitations assumed to be the cause of the phenomenon under study. The ability to solve this approximation problem, which is often non-trivial and ill-posed, frequently conditions the practical usefulness of a given method.

It is when the predictive potential of a model is to be assessed that one is
led to postulate the existence of a true functional
correspondence between data and observations, thereby entering
the field of
identification itself. The predictive power of a model can be expressed in various manners, all of which are attempts to measure the difference between the true model and the observations. The necessity of taking into account the difference between the observed and the computed behavior naturally induces the notion of noise as a corrupting factor of the identification process. This noise is incorporated into the model and can be handled in a deterministic mode, where the quality of an identification algorithm is its robustness to small errors. This notion is that of well-posedness in numerical analysis, or of stability of motion in mechanics. Often, however, the noise is considered to be random, and the true model is then estimated by averaging the data. This viewpoint allows approximate but reasonably simple descriptions of complex systems whose mechanisms are not well known but plausibly antagonistic.
Note that,
in any case, some assumptions on the noise are required in order to
justify the approach (it has to be small in the deterministic case, and must
satisfy some independence and ergodicity properties in the stochastic
case). These assumptions can hardly be checked in practice, so that the
satisfaction of the end-user is the final criterion.
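
The variance-reduction effect of averaging invoked above can be made concrete with a toy computation (all numbers are illustrative choices of ours, not taken from the text):

```python
import numpy as np

# Toy illustration of the stochastic viewpoint: a "true" constant c is
# observed through additive zero-mean noise, and averaging N samples
# estimates c with standard deviation sigma / sqrt(N).
rng = np.random.default_rng(0)
c, sigma = 2.0, 0.5

def estimate(n_samples: int) -> float:
    # Average n_samples noisy observations of c
    return float(np.mean(c + sigma * rng.standard_normal(n_samples)))

err_10 = abs(estimate(10) - c)           # typically ~ sigma / sqrt(10)
err_100000 = abs(estimate(100_000) - c)  # typically ~ sigma / sqrt(100000)
```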

Hypothesizing an exact model also results in the
possibility of choosing the data in a manner suited for identifying a specific
phenomenon. This often interacts in a complex manner with the
local character of the model with respect to the data (for instance a
linear model is only valid in a neighborhood of a point).

We now turn to the activity of the team proper in identification. Although the subject, on the academic level, has been the realm of the stochastic paradigm for more than twenty years, it is in a deterministic approach to identification of linear dynamical systems (i.e. 1-D convolution processes), based on approximation in the complex domain, that the Team made perhaps its most original contributions. Naturally, the deep links stressed by the spectral theorem between time and frequency domains induce well-known parallels between function theory and probability, and the work of the Miaou project can be partly recast from the stochastic viewpoint. However, the issue was rather tackled by translating the problem of identification into an inverse problem, namely the reconstruction, from boundary data, of an analytic function in a domain of the plane. For convolution equations in dimension one, that is, ordinary differential equations possibly in infinite-dimensional spaces, such a translation is provided by the Fourier transform. For certain elliptic partial differential equations in dimension two, identification is also connected to analytic continuation, but this time it is the form of the fundamental solution that introduces holomorphy, especially in the case of the Laplacian, whose solutions are logarithmic potentials.

The data are considered without postulating an exact model; we simply look for a convenient approximation to the data in a range of frequencies representing the working conditions of the underlying system. A prototypical example illustrating our approach is the harmonic identification of dynamical systems, widely used in engineering practice, where the data are the responses of the system to periodic excitations in its bandwidth. We look for a stable linear model that correctly describes the behavior in this bandwidth, although the model may be inaccurate at high frequencies (which can seldom be measured). In most cases, we also want this model to be rational, of suitable degree, either because this degree is imposed by the physical significance of the parameters, or because it must remain of reasonably low order to allow the efficient use of the model for control, estimation or simulation. Other structural constraints, arising from the physics of the phenomenon to be modeled, are often superimposed on the model. Note that, in this approach, no statistics are used for the errors, which can originate from corrupted measurements or from the limited validity of the linear hypothesis.

We distinguish between an identification step (called non-parametric in a certain terminology) that produces an infinite-dimensional model, and an approximation step in which the order is reduced subject to certain constraints specific to the considered system. The first step typically consists, mathematically speaking, in reconstructing a function, analytic in the right half-plane, from its pointwise values on a portion of the imaginary axis; in other words, in making the principle of analytic continuation effective on the boundary of the analyticity domain. This is a classical ill-posed question (an inverse Cauchy problem for the Laplace equation) that we embed into a family of well-posed extremal problems. The second step is typically a rational or meromorphic approximation procedure (but approximating families other than rational functions may be considered) in a space of functions analytic in a simply connected open subset, say the right half-plane in the case of harmonic identification. To make the best possible use of the allowable number of parameters, or to privilege some specific physical parameters of the system, it is generally important, in the second step, to compute optimal or nearly optimal approximants. Rational approximation in the complex plane is a classical and difficult problem, for which only few effective methods exist. In relation to system theory, two main difficulties arise: the necessity of controlling the poles of the approximants (to ensure the stability of the model), and the need to handle matrix-valued functions when the system has several inputs and outputs.
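
A toy computation illustrating why the first step is ill-posed: recovering a truncated power series of an analytic function from boundary values restricted to a sub-arc is exponentially ill-conditioned in the degree, whereas the same problem on the full circle is perfectly conditioned. Degree, grid, and arc below are illustrative choices of ours:

```python
import numpy as np

# Monomial basis sampled on a sub-arc K of the unit circle versus on the
# whole circle. The condition number of the sub-arc Vandermonde matrix
# blows up with the degree -- the ill-posedness of analytic continuation
# from partial boundary data -- while on the full circle the columns are
# orthogonal and the problem is perfectly conditioned.
deg = 50
theta_K = np.linspace(0.0, np.pi, 300)                        # accessible arc K
theta_full = np.linspace(0.0, 2 * np.pi, 300, endpoint=False)

V_K = np.vander(np.exp(1j * theta_K), deg + 1, increasing=True)
V_full = np.vander(np.exp(1j * theta_full), deg + 1, increasing=True)

s_K = np.linalg.svd(V_K, compute_uv=False)
s_full = np.linalg.svd(V_full, compute_uv=False)
cond_K = s_K[0] / s_K[-1]          # huge: continuation from K is unstable
cond_full = s_full[0] / s_full[-1] # ~1: full boundary data is harmless
```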


We shall explain in more detail the above two steps in the sub-paragraphs
to come. For convenience, we shall approach them on the circle rather than
the line, which is the framework for discrete-time rather than continuous-time
systems. The two frameworks are mathematically equivalent via a Möbius
transform.
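
The equivalence of the two frameworks can be checked numerically; a minimal sketch, where the specific transform z = (s - 1)/(s + 1) is the standard Cayley-type choice (an assumption of ours, not necessarily the normalization used by the team):

```python
# The Moebius transform z = (s - 1)/(s + 1) maps the open right half-plane
# (continuous time) onto the open unit disk (discrete time), and the
# imaginary axis onto the unit circle.
def moebius(s: complex) -> complex:
    return (s - 1) / (s + 1)

# Points on the imaginary axis land on the unit circle (modulus 1)
axis_moduli = [abs(moebius(1j * w)) for w in (-10.0, -0.5, 0.0, 3.0)]
# Points in the right half-plane land strictly inside the disk
rhp_moduli = [abs(moebius(s)) for s in (0.5 + 2.0j, 4.0 - 1.0j)]
```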

The title refers to the construction of a convolution model of infinite dimension from frequency data in some bandwidth, i.e. of a transfer function analytic in the domain of stability (be it the half-plane, the disk, etc.), and possibly also of transfer functions with finitely many poles in the domain of stability, i.e. of convolution operators corresponding to linear differential or difference equations with finitely many unstable modes. This issue arises in particular for the design and identification of linear dynamical systems, and in certain inverse problems for the Laplacian in dimension two.

Since the question under study may occur on the boundary of planar domains of various shapes when it comes to inverse problems, it is common practice to normalize this boundary once and for all, and to apply in each particular case a conformal transformation to bring the problem back to the normalized situation. The normalized contour chosen here is the unit circle.

In order to impose pointwise constraints in the frequency domain
(for instance if the considered models are transfer functions
of lossless systems, see section ), one may wish to express
the gauge constraint on

( ${P}^{\prime}$) Let $p\ge 1$, $N\ge 0$,

Problems of this type are known as bounded extremal problems. These have been
intensively studied by the Team, which introduced them, distinguishing the case

Deeply linked with Problem (, and meaningful for assessing
the validity of the linear approximation in the considered pass-band, is the
following completion problem:

( ${P}^{\prime \prime}$) Let
$p\ge 1$, $N\ge 0$,

A version of this problem where the constraint depends on the frequency is:

( ${P}^{\prime \prime \prime}$) Let $p\ge 1$,
$N\ge 0$,

Let us mention that Problem ( ${P}^{\prime \prime}$) reduces to Problem

The solution to ( ${P}_{0}$) is classical if

We emphasize that ( has many analogs, equally
interesting, that occur in different contexts
connected to conjugate functions.
For instance one may consider the following extremal Problem,
germane to Problem

Let $f\in {L}^{2}\left(K\right)$, $\psi \in {L}^{2}(T\setminus K)$ and $M>0$;
find a function
$g\in {H}^{2}$ such that $\Vert \text{Im}g-\psi {\Vert}_{{L}^{2}(T\setminus K)}\le M$
and such that $g-f$ has minimal norm in ${L}^{2}\left(K\right)$.
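
Numerically, bounded extremal problems of this family can be sketched by discretization. The following toy version treats the prototypical constraint $\Vert g-\psi\Vert_{L^2(T\setminus K)}\le M$ (rather than the imaginary-part variant just stated): it solves the penalized problem by weighted least squares and tunes the Lagrange-type parameter by bisection. Degree, grid, data and M are illustrative choices, not the team's algorithm:

```python
import numpy as np

# Discretized sketch of a bounded extremal problem:
#   minimize ||g - f|| on the arc K over polynomial g (truncated H^2),
#   subject to ||g - psi|| <= M on the complementary arc T \ K.
# The penalized criterion ||g-f||_K^2 + lam ||g-psi||_{T\K}^2 is a weighted
# least-squares problem; lam is adjusted by bisection on the constraint.
N, deg, M = 400, 20, 1.0
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
K = theta < np.pi                              # accessible arc K
z = np.exp(1j * theta)
V = np.vander(z, deg + 1, increasing=True)

f_data = 1.0 / (z[K] - 2.0) + 0.01 * np.cos(5 * theta[K])  # perturbed data on K
psi = np.zeros(np.count_nonzero(~K))           # reference behaviour off K

def solve(lam):
    A = np.vstack([V[K], np.sqrt(lam) * V[~K]])
    b = np.concatenate([f_data, np.sqrt(lam) * psi])
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    g = V @ c
    off = np.linalg.norm(g[~K] - psi) / np.sqrt(N)   # discrete norm off K
    err = np.linalg.norm(g[K] - f_data) / np.sqrt(N)
    return off, err

lam_lo, lam_hi = 1e-8, 1e4
for _ in range(60):                            # off-K norm decreases as lam grows
    lam = np.sqrt(lam_lo * lam_hi)
    off, _err = solve(lam)
    if off < M:
        lam_hi = lam                           # constraint met: try a smaller lam
    else:
        lam_lo = lam
constraint_norm, fit_error = solve(lam_hi)
```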

Existence and uniqueness of the solution have been established in ( when

In the non-Hilbertian case, where ( can be deduced from that of

If ( ${P}_{0}$) which is proved convergent. However, the
progress that was made allows us to form a coherent picture of
the main issues and to develop rather efficient
numerical schemes whose global convergence has been proved for prototypical
classes of functions in approximation theory.
The essential features of the approach are summarized below.

First of all, in the case ( ${P}_{0}$) can be reduced to that of rational approximation
which is described in more detail in
section . Here, the
link with classical interpolation theory, orthogonal polynomials, and
logarithmic potentials is strong and fruitful. Second, a general AAK theory
in

The case where

Rational approximation is the second step mentioned in section , and we first approach it in the scalar case, for complex-valued functions (as opposed to matrix-valued ones). The problem can be stated as:

Let $1\le p\le \infty $, $f\in {H}^{p}$ and

The most important values of ( of section with

It is only fair to say that, from a practical perspective, the most important
problem is the design of a numerically efficient algorithm whose convergence
to the best approximant is proved.
The algorithms developed by the team seem rather effective,
although their global convergence has not been established.
By contrast, when the function to approximate is rational, the problem is
algebraic, and one can consider an elimination algorithm in order to find all
critical points. This method is surely
convergent, since it is exhaustive, but one has to compute the roots of an
algebraic system with

To prove or disprove the convergence of the above-described algorithms, and
to check them against practical situations, the team has undertaken a
long-haul study of the number and nature of critical points, depending on the
class of functions to be approximated, in which
tools from differential topology and
operator theory team up with classical approximation theory.
The study of transfer functions of relaxation systems (i.e.
Markov functions) was initiated in (cf. section ), and a
methodology has been developed that
relies on localizing the singularities via
the analysis of
families of non-Hermitian orthogonal polynomials, so as to obtain
strong error estimates that allow one to evaluate the relative decay of the
error. Note
in this context an analogue of the Gonchar conjecture: uniqueness ought
to hold at least for infinitely many values of the degree.
Another uniqueness criterion has
been obtained

The introduction of a weight in the optimization criterion is an interesting issue, induced by the necessity to balance the information one has at the various frequencies. For instance, in the stochastic theory, minimum-variance identification leads to weighting the error by the inverse of the spectral density of the noise. It is worth noting that most approaches to frequency-domain identification in engineering practice consist of posing a least-squares minimization problem and weighting the terms so as to obtain a suitable result using a generic optimization toolbox. In this way we are led to minimize a weighted quadratic criterion.

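The weighted least-squares practice just described can be sketched as follows. We use Levy's classical linearization (an assumption of ours; the text does not name a specific scheme), which makes the weighted criterion linear in the unknown coefficients; model structure, weights and data are illustrative:

```python
import numpy as np

# Weighted least-squares frequency-domain fit of a rational model
#   H(s) = (b0 + b1 s + b2 s^2) / (s^2 + a1 s + a0)
# to frequency-response samples H_k, via Levy's linearization: minimize
#   sum_k w_k^2 |Q(i omega_k) H_k - P(i omega_k)|^2,
# which is linear in (b0, b1, b2, a1, a0).
omega = np.linspace(0.1, 10.0, 100)
s = 1j * omega
H_meas = 1.0 / (s**2 + 0.4 * s + 1.0)        # "measured" frequency response
w = 1.0 / (1.0 + omega)                      # down-weight high frequencies

# Linearized equation: (b0 + b1 s + b2 s^2) - H (a1 s + a0) = H s^2
A = np.column_stack([np.ones_like(s), s, s**2, -H_meas * s, -H_meas])
rhs = H_meas * s**2
# Stack real and imaginary parts to keep the coefficients real
Ar = np.vstack([(A * w[:, None]).real, (A * w[:, None]).imag])
rr = np.concatenate([(rhs * w).real, (rhs * w).imag])
b0, b1, b2, a1, a0 = np.linalg.lstsq(Ar, rr, rcond=None)[0]

H_fit = (b0 + b1 * s + b2 * s**2) / (s**2 + a1 * s + a0)
max_err = np.max(np.abs(H_fit - H_meas))
```

Since the sampled response is exactly of the modeled form here, the fit recovers the true coefficients; on real data, Levy's linearization is only a (weighted) surrogate for the true criterion.
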
Another kind of rational approximation, arising in several design problems where only constraints on the modulus are imposed, consists of approximating the modulus of a function by the modulus of a rational function, that is, solving for

This problem is strongly related to the previous ones;
in fact, it can be reduced to a convergent
series of standard rational approximation
problems. Note also that if the feasibility of

is required, one can use the Fejér-Riesz characterization of positive trigonometric polynomials on the unit circle as squared moduli of algebraic polynomials, in order to approach this issue as a convex problem in infinite dimension. This constitutes another fundamental direction for dealing with rational approximation in modulus, which arises naturally in filter design problems.
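
The Fejér-Riesz factorization invoked here is easy to carry out numerically for a given trigonometric polynomial: root the associated Laurent polynomial and keep the roots inside the disk. The example data below are an illustrative choice of ours:

```python
import numpy as np

# Fejer-Riesz in action: a trigonometric polynomial t(theta) >= 0 on the
# circle equals |p(e^{i theta})|^2 for an algebraic polynomial p whose
# roots lie in the closed unit disk. We build t from a known factor, then
# recover the spectral factor by rooting the Laurent polynomial.
q = np.array([1.0, -0.8, 0.3])                 # q(z) = z^2 - 0.8 z + 0.3
t_laurent = np.convolve(q, q[::-1])            # coefficients of z^n q(z) q(1/z)
n = len(q) - 1

theta = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
z = np.exp(1j * theta)
t_vals = (np.polyval(t_laurent, z) * z ** (-n)).real   # t on the circle, >= 0

roots = np.roots(t_laurent)                    # roots come in pairs r, 1/conj(r)
inside = roots[np.abs(roots) < 1.0]
p = np.poly(inside)                            # monic spectral factor candidate
scale = np.sqrt(t_vals[0] / np.abs(np.polyval(p, z[0])) ** 2)
p = scale * p                                  # pointwise ratio is constant
max_dev = np.max(np.abs(np.abs(np.polyval(p, z)) ** 2 - t_vals))
```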

We want here to study the behavior of poles of optimal
meromorphic approximants in

Generally speaking, the behavior of poles is particularly important in meromorphic approximation for the analysis of the error decrease with the degree and for most constructive aspects like uniqueness, so that everything here could take place in section . However, it is the original motivation of the team to consider this issue in connection with the approximation of the solution to a Dirichlet-Neumann problem, so as to extract information on the singularities of that solution. This way to tackle a free boundary problem, classical in every respect but still widely open, illustrates the approach of the team to certain inverse problems, and gives rise to an active direction of research at the crossroads of function theory, potential theory and orthogonal polynomials.

As a general rule, critical point equations for these problems express that the
polynomial whose roots are the poles of the approximant
is a non-Hermitian orthogonal polynomial with
respect to some complex measure on the singular set
of the function to be approximated.
New results were obtained over the last three years
concerning the location of such zeroes,
and the approach to inverse problem for the Laplacian that we outline
in this section appears to be attractive when the singularities
are one-dimensional, for instance in
the case of a cracked domain (see section
). In case the crack is
sufficiently smooth, the approach in question
is in fact equivalent to meromorphic
approximation of a function with two branch points, and it has been possible
to prove that the poles accumulate near the crack, which is then de facto
well localized if one is able to compute sufficiently many
zeros (this is where the method is not fully constructive).
It is interesting to note that these results
apply also, and even more easily, to the detection of monopolar and dipolar
sources, a case where poles as well as
logarithmic singularities exist. The case of more general cracks (for instance
formed by a finite union of analytic arcs) requires the analysis of the
situation
where the number of branch points is finite but arbitrary. It is
conjectured that the poles tend to the contour; this covers a large number
of interesting cases, including
the case of general polynomial cracks, or of cracks consisting of
sufficiently smooth arcs.
This breakthrough, we hope, will constitute a
substantial progress towards a proof of the general case.
It would of course be very interesting to know what happens when
the crack is ``absolutely non analytic'', a limiting case
that can be interpreted as that of an infinite number of branch points,
and on which very little is known. Concerning the problem of a general
singularity, in the light of what precedes, one can formulate the following
conjecture: if has its support in .
Yet this conjecture is far from being solved.

We conclude by mentioning that the problem of approximating,
by a rational or meromorphic function, in the

Matrix-valued approximation is necessary for handling systems with several
inputs and outputs, and generates substantial additional difficulties
with respect to scalar approximation,
theoretically as well as algorithmically. In the matrix case,
the McMillan degree (i.e. the degree of a minimal realization in
the system-theoretic sense) generalizes the notion of degree. Hence the problem reads:
Let $1\le p\le \infty $, $\mathcal{F}\in ({H}^{p}{)}^{m\times l}$ and
To fix ideas, we may define
the

The main interest of Miaou so far lies in the case of matrix-valued functions
that are analytic in the unit disk and unitary on the circle (lossless, or inner, functions) of degree

In this application, obtaining physical couplings requires the computation
of realizations, also called internal representations in system
theory. Among the parameterizations obtained via the Schur algorithm, some
are of particular interest from this viewpoint

Problems related to multiple local minima are naturally present here as well,
but deriving criteria that guarantee uniqueness is much
more difficult than in the scalar case. The case of rational functions
of the proper degree already requires rather heavy machinery
(cf. section ).

In practice, a method similar to the one used in the scalar case has
been developed to generate local minima at a given order from those at
lower order. In short, one sets out a matrix of degree

Let us stress that the algorithms mentioned above are the first to handle rational approximation in the matrix case in a way that converges to local minima, while meeting stability constraints on the approximant.

The asymptotic study of likelihood estimators is a natural companion
to the research on rational approximation described above. The context
is ultra-classical. Given a discrete process

where regular
(i.e. purely non-deterministic) stationary processes.
Identification in this context then appears as a rational approximation
problem for which the classical theory makes a trade-off between two
antagonistic factors, namely the bias error on the one hand, which decreases when

If one introduces now as a new variable the rational matrix

and if

where

The consistency problem arises from the fact that the measure i.e.

In the preceding result, consistency holds in the sense of pointwise
convergence of the estimates on the manifold of transfer functions
of given size
and order. One contribution of the Miaou team has been to show that
the result holds
even if we do not postulate a causal dependency between inputs and outputs, the
measure

In order to control a system, one generally relies on a model, obtained from
a priori knowledge like physical laws or experimental observations. In many
applications, one is satisfied with a linear approximation around a design
point or a trajectory. It is however very important to study non-linear
systems (or models) and their control, for the following reasons. First, some
systems have, near interesting working points, a linear approximation that is
non-controllable, so that linearization is ineffective, even
locally. Secondly, even if the linearized model is controllable, one may wish to
extend the working domain beyond the validity domain of the linear
approximation. Work described in module proceeds from these
concerns. Finally, some control problems, such as path planning,
are not of a local nature and cannot be addressed through
a linear approximation. The structural study described in module
aims at exhibiting invariants that can be used
either to reduce the study to simpler systems, or to serve as a
foundation for a non-linear identification theory that would give information
on the model classes to be used when no reliable a priori
information is available and black-box linear identification is not satisfactory. The
success of the linear model, in control as in identification, stems
from the fine understanding one has of it; in the same fashion, a better
mastery of the invariants of non-linear models under certain transformations is a
prerequisite to a true theory of non-linear identification and control. In
what follows, all non-linear systems are supposed to have a
finite-dimensional state space.

Stabilization by continuous state feedback (or output feedback, in the
partial-information case) consists of designing a control that is a
smooth (at least continuous) function of the state, such that a design
point (or a trajectory) is asymptotically stable for the closed-loop system. One
can consider this as a weakened version of the optimal control problem:
computing a control that exactly optimizes a given criterion (for instance, to go
somewhere in minimal time) leads in general to a very irregular dependence on
the state; stabilization is a qualitative objective (to go somewhere
asymptotically) that is less constraining than the minimization of a criterion, and
it of course leaves more latitude, allowing one to impose, for instance, a lot of
regularity. Stabilization problems are often solved, at least near a regular
design point, by well-mastered control-theory methods; the methods studied
here deal with the behavior near points where linear methods are ineffective
(non-controllable linear approximation), or aim at mastering the behavior on a
larger region of the state space. A very important question is the robustness of
the stability: control laws depend heavily on the structure of the
model, and the preservation of asymptotic stability for nearby structures or parameter
values is not guaranteed. We shall explain hereafter two research directions
followed by the Team.

It is known that a certain number of non-linear systems, although
controllable, cannot be stabilized by a control that is a continuous function
of the state alone; stabilization then requires an explicit time
dependency, for instance a periodic one. Research in the Team, in collaboration
with the Icare Team, has played an important role in establishing these results
Lyapunov functions are a well-known tool for studying the stability of
dynamical systems without control. For a control system, a
control Lyapunov function is a Lyapunov function for the system
in closed loop with a given control. This can be expressed by a differential
inequality called the ``Artstein equation''
In the Team, we are interested in obtaining control Lyapunov functions. This can be the first step in synthesizing a stabilizing control; but even when a stabilizing control is already known, obtaining a control Lyapunov function can be very useful for studying the robustness of the stabilization, or for modifying the initial control law into a more robust one. Also, if one has to deal with a problem where it is important to optimize a criterion, and the optimal solution is hard to compute, one can look for a control Lyapunov function that is ``near'' the solution of the optimization problem and leads to a stabilizing control that is easier to work with, with a cost (in the sense of the criterion) not far from the optimum.
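
One classical way to pass from a control Lyapunov function to a stabilizing feedback is Sontag's universal formula; the text does not single it out, so the following is only an illustrative sketch on a scalar system of our choosing:

```python
import math

# Sontag's universal formula: given a control Lyapunov function V for
# xdot = f(x) + g(x) u, the feedback
#   u = -(a + sqrt(a^2 + b^4)) / b   (u = 0 where b = 0),
# with a = V'(x) f(x) and b = V'(x) g(x), makes Vdot = -sqrt(a^2 + b^4) < 0.
# Illustrative system: xdot = x^3 + u, with the CLF V(x) = x^2 / 2.
f = lambda x: x ** 3
g = lambda x: 1.0
dV = lambda x: x                     # derivative of V

def sontag(x: float) -> float:
    a, b = dV(x) * f(x), dV(x) * g(x)
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a * a + b ** 4)) / b

# Forward-Euler simulation from x(0) = 1: the state decays to the origin
x, dt = 1.0, 1e-3
for _ in range(20000):
    x += dt * (f(x) + g(x) * sontag(x))
final_state = abs(x)
```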

Recent work in the Team has consisted in taking objects that are
``nearly'' control Lyapunov functions, and that are explicitly
constructible or at least easily described, and distorting them, constructively, into
control Lyapunov functions, or, on the contrary, depending on the case,
showing that such a construction is impossible.

Note that these constructions are exploited in the study requested by Alcatel Space (see module ), where the choice is left between the use of optimal control techniques and stabilization.

A static feedback transformation of a dynamical control system is
a (non-singular) reparametrization of the control, depending on the state,
possibly together with a change of coordinates in the state space. A dynamic
feedback transformation of a control system consists of a
dynamic extension (adding new states and assigning them new dynamics)
followed by a state feedback on the augmented system.

From the point of view of control, the interest of these transformations is that a control meeting some objectives for the transformed system can be used to control the original system, by including the possibly extended dynamics in the controller. Of course, the favorable case is when the transformed system has a structure that can be exploited more easily than the original one, for instance that of a controllable linear system.
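
The simplest instance of a static feedback transformation can be checked numerically; the pendulum-like system below is an illustrative example of ours, not one from the text:

```python
import math

# Static feedback transformation: the system
#   x1' = x2,  x2' = sin(x1) + u
# becomes the linear controllable double integrator x1' = x2, x2' = v
# under the control reparametrization u = -sin(x1) + v. We check that the
# two closed loops produce identical trajectories for the same new input v.
def step_nonlinear(state, v, dt):
    x1, x2 = state
    u = -math.sin(x1) + v            # the feedback transformation
    return (x1 + dt * x2, x2 + dt * (math.sin(x1) + u))

def step_linear(state, v, dt):
    x1, x2 = state
    return (x1 + dt * x2, x2 + dt * v)

dt, sn, sl = 1e-3, (0.3, -0.1), (0.3, -0.1)
for k in range(5000):
    v = math.cos(0.01 * k)           # arbitrary test input
    sn, sl = step_nonlinear(sn, v, dt), step_linear(sl, v, dt)
gap = max(abs(sn[0] - sl[0]), abs(sn[1] - sl[1]))
```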

From the point of view of identification and modeling in the non-linear case, the interest is, as mentioned above, either to derive qualitative invariants that can support the choice of a non-linear model given some observations, or to contribute to a classification of non-linear systems, which is sorely missing today for elaborating genuine methods of non-linear identification.

We now develop these two problems studied in the Team.

The problem of dynamic linearization, still unsolved, is that of finding explicit conditions on a system for the existence of a dynamical feedback that would make it linear.

These last years have seen the emergence of the notion of differential
flatness, the above-mentioned functions being
called flat or linearizing functions, and it was shown, roughly
speaking, that a system is differentially flat if, and only if, it can be
converted to a linear system by dynamic feedback. On the one hand, this property
of the set of trajectories is in itself at least as important for
control as the equivalence to a linear system, and on the other hand it
gives a handle for tackling the problem of dynamic linearization, namely to
find linearizing functions.
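
A minimal illustration of the flatness viewpoint on the simplest possible system, a chain of two integrators, where the flat output is the first state and the whole trajectory is read off from it. The polynomial and the horizon are illustrative choices of ours:

```python
# Differential flatness used for path planning: for x1' = x2, x2' = u, the
# state and input are recovered from the flat output y = x1 and its
# derivatives (x2 = y', u = y''). To steer from rest at 0 to rest at 1 in
# time T = 1 we pick y(t) = 3 t^2 - 2 t^3, which meets the four endpoint
# conditions y(0) = y'(0) = 0, y(1) = 1, y'(1) = 0.
y = lambda t: 3 * t**2 - 2 * t**3
dy = lambda t: 6 * t - 6 * t**2
ddy = lambda t: 6 - 12 * t

start = (y(0.0), dy(0.0))            # planned initial state
end = (y(1.0), dy(1.0))              # planned final state

# Sanity check: integrate the system with the open-loop input u(t) = y''(t)
x1, x2, t, dt = 0.0, 0.0, 0.0, 1e-4
for _ in range(10000):
    u = ddy(t)
    x1, x2 = x1 + dt * x2, x2 + dt * u
    t += dt
plan_error = abs(x1 - 1.0) + abs(x2)  # small Euler integration error only
```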

An important question remains open: how can one algorithmically decide
whether a given system admits such functions, i.e. is dynamically
linearizable, or not? This problem is both difficult and important for
non-linear control. For systems with four states and two controls, whose
dynamics are affine in the control (these are the lowest dimensions for which the
problem is really non-trivial), necessary and sufficient
conditions

From the algebraic-differential point of view, the module of differentials of
a controllable system is free and of finite dimension over the ring of
differential polynomials in

In what precedes, we have not taken into account the degree of
smoothness of the transformations under consideration.

In the case of dynamical systems without control, it is well known
(Hartman-Grobman theorem) that, away from degenerate (non hyperbolic) points,
if one requires the transformations to be merely continuous,
every system is locally equivalent to a
linear system in a neighborhood of an equilibrium. It is thus tempting, in the
framework of a classification of control systems, to look for such an
equivalence modulo non-differentiable transformations, hoping to bring about some
robust ``qualitative'' invariants and perhaps stable normal forms. An equivalent of the
Hartman-Grobman theorem for control systems would say for instance, that
outside a ``rare'' class of models (for instance, those whose linear
approximation is non-controllable), and locally near fixed values of the state
and the control, no qualitative phenomenon can distinguish a non-linear system
from a linear one, all non-linear phenomena being hence either of global
nature or singularities. Such a statement is wrong: if a system is locally
equivalent to a controllable linear system via a bi-continuous
transformation (a local homeomorphism in the state-control space), it is
also equivalent to this same controllable linear system via a
transformation that is as smooth as the system itself, at least in the neighborhood of a
regular point (in the sense that the rank of the control system is locally
constant), see . By contrast, under weak
regularity conditions, linearization can be performed by non-causal
transformations (see the same report), whose structure remains unclear but which
take on a concrete meaning when the inputs are generated by a finite-dimensional
dynamics.

The above considerations call for the following question, important for modeling control systems: are there local ``qualitative'' differences between the behavior of a non-linear system and its linear approximation in the case the latter is controllable?

The activity of the team focuses on two main lines, namely optimization in the frequency domain on the one hand, and the control of systems governed by differential equations on the other hand. One can therefore distinguish two main families of applications: one dealing with design and inverse problems for diffusive and resonant systems, and one dealing with the control of certain mechanical or optical systems. For applications of the first type, the approximation techniques described in module allow one to deconvolve linear equations, analyticity resulting either from the use of Fourier transforms or from the harmonic character of the equation itself. Applications of the second type mostly concern the control of systems that are ``poorly'' controllable, for instance low-thrust satellites or optical regenerators. We describe all these applications below in more detail.

Localizing cracks, pointwise sources or occlusions in a two-dimensional material, using thermal, electrical, or magnetic measurements on its boundary, is a classical inverse problem. It arises when studying the fatigue of structures, the behavior of conductors, or magneto-encephalography, as well as in the detection of buried objects (mines, etc.). However, no really efficient algorithm has emerged so far when no initial information on the location or on the geometry is available, because numerical integration of the inverse problem is very unstable. The presence of cracks in a plane conductor, for instance, or of sources in a cortex (modulo a conversion of 3D data to 2D, see below), can be expressed as an analyticity defect of the solution of the associated Dirichlet-Neumann problem, and may in principle be approached using techniques of best rational or meromorphic approximation on the boundary of the object (see sections to and ). The realistic case where data are available only on a part of the boundary is a typical example of application of the analytic and meromorphic extension techniques developed earlier.
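
In the simplest situation, the principle of localizing a singularity from boundary data reduces to elementary linear algebra; a toy sketch, where the position and strength of the source are illustrative choices, and real measurements would of course be noisy and incomplete:

```python
import numpy as np

# Toy version of source detection: a field behaving like c/(z - a), with a
# single pole a inside the unit disk, is sampled on the boundary circle;
# the pole of a degree-1 rational fit localizes the source.
a_true, c_true = 0.3 + 0.2j, 2.0
theta = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
z = np.exp(1j * theta)
f = c_true / (z - a_true)                 # boundary measurements

# From f (z - a) = c we get f z = c + a f: linear least squares in (c, a)
A = np.column_stack([np.ones_like(z), f])
coef, *_ = np.linalg.lstsq(A, f * z, rcond=None)
c_est, a_est = coef
loc_error = abs(a_est - a_true)           # error on the source location
```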

The 2D approach proposed here consists in constructing, from measured data on a subset $K$ of the boundary, an analytic extension to all of $\Gamma $ if the data are incomplete (it may happen that $K\ne \Gamma $ if the boundary is not fully accessible to measurements), in order to:

identify for instance an unknown Robin coefficient, see , where stability properties of the procedure are established;

detect the presence of a defect $\gamma $ in a computationally efficient manner.

Thus, inverse problems of geometric type, which consist in finding an unknown boundary from incomplete data, can be approached this way.

Among the research perspectives opened by these applications lies a non-classical approximation problem in which the residues would be constrained so as to incorporate into the structure of the approximant some features inherited from the fact that we have to estimate a logarithmic potential with a boundary condition, see module . Experiments have been carried out with real residues for a straight crack, which indeed indicate a critical configuration on the crack. However, parametrizing through poles and residues produces undesirable global singularities, hence we need to adopt another parametrization based on the coefficients of the polynomials; this requires further study.

In the long term, we envisage generalizing this type of methods to the case of problems with variable conductivity coefficients, as well as to the Helmholtz equation. Using convergence properties of approximation algorithms in order to establish stability results for some of these inverse problems is also an appealing direction for future research.
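To make the pole-localization principle above concrete, here is a small numerical sketch of ours (a plain Padé/Hankel computation, not the team's algorithms, assuming the simplest model where the singular set is a segment carrying a positive measure):

```python
import numpy as np

# Illustrative sketch (not the team's hyperion code): for a Markov function
# f(z) = integral of dmu(t)/(z - t), with mu the uniform probability measure
# on a segment [a, b] (a caricature of a "crack"), the denominators of
# diagonal Pade approximants at infinity are the orthogonal polynomials of
# mu, so the poles of the approximants lie on [a, b] -- the set to detect.
a, b, n = 0.3, 0.8, 5

# moments m_k = integral of t^k dmu(t) for the uniform measure on [a, b]
m = np.array([(b**(k + 1) - a**(k + 1)) / ((k + 1) * (b - a))
              for k in range(2 * n)])

# Hankel system for the monic Pade denominator q of degree n:
# sum_{j<n} q_j m_{k+j} = -m_{k+n},  k = 0, ..., n-1
H = np.array([[m[k + j] for j in range(n)] for k in range(n)])
q = np.linalg.solve(H, -m[n:2 * n])

# poles = roots of q(z) = z^n + q_{n-1} z^{n-1} + ... + q_0
poles = np.roots(np.concatenate(([1.0], q[::-1])))
print(np.sort(poles.real))
```

The poles indeed fall inside the segment [0.3, 0.8], mirroring the pole behavior exploited for crack localization.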

One of the best training grounds for the research of the team in function theory is the identification and design of physical systems for which the linearity assumption is well satisfied in the working range of frequency, and whose specifications are given in the frequency domain. Resonant systems, acoustic or electromagnetic, are prototypical examples in common use in telecommunications. We shall be more specific on two examples below.

Surface acoustic wave filters are widely used in modern telecommunications, especially in cellular phones, mainly owing to their small size and low cost. Unidirectional filters, formed of SPUDT transducers that contain inner reflectors (cf. Figure ), are increasingly used in this technological area. The design of such filters is more complex than that of traditional ones.

We are interested here in a filter formed of two SPUDT transducers
(Figure ). Each transducer is composed of cells of the same length

Specifications are given in the frequency domain on the amplitude and phase of the electrical transfer function. This function expresses the power transfer and can be written as

where

The design problem consists in finding the reflection coefficients

The transducers are described by analytic transfer functions called mixed matrices, that link input waves and currents to output waves and potentials. Physical properties of reciprocity and energy conservation endow these matrices with a rich mathematical structure that allows one to use approximation techniques in the complex domain (see module ) according to the following steps:

describe the set $\mathcal{E}$ of electrical transfer functions obtainable from the model;

set out the design problem as a rational approximation problem in a normed space of analytic functions: ${min}_{E\in \mathcal{E}}\Vert D-E\Vert ,$ where $D$ is the desired electrical transfer;

use a rational approximation software (see modules and ) to identify the design parameters.

The first item is the subject of ongoing research. It connects the geometry of the zeroes of a rational matrix to the existence of an inner symmetric extension without increase of the degree (reciprocal Darlington synthesis). Let us mention that the interest of the team in this application started through a collaboration with Thomson Microsonics in 1999.

In the domain of space telecommunications (satellite transmissions), constraints specific to onboard technology lead to the use of filters with resonant cavities in the hyperfrequency range. These filters are used for multiplexing (before or after amplification), and consist of a sequence of cylindrical hollow bodies, magnetically coupled by irises (orthogonal double slits). The electromagnetic wave that traverses the cavities satisfies the Maxwell equations, which force the tangential electrical field along the body of the cavity to be zero. A deeper study (of the Helmholtz equation) shows that essentially only a discrete set of wave vectors is selected. In the frequency range considered, the electrical field in each cavity can be seen as decomposed along two orthogonal modes, perpendicular to the axis of the cavity (other modes are far away, and their influence can be neglected).

Near the resonance frequency, a good approximation of the Maxwell equations is given by the solution of a second-order differential equation. One thus obtains an electrical model of the filter as a sequence of electrically coupled resonant circuits, where each circuit is modeled by two resonators, one per mode, whose resonance frequency represents the frequency of the mode, and whose resistance represents the electric losses (current on the surface).

In this way, the filter can be seen as a quadripole, with two ports, when plugged onto a resistor at one end and fed with some potential at the other. We are then interested in the power transmitted and reflected. This leads to defining a scattering matrix

In reality, the resonance is not studied via the electrical model, but via a low-pass equivalent obtained upon linearizing near the central frequency, which is no longer conjugate symmetric (i.e. the underlying system may not have real coefficients) but whose degree is divided by 2 (8 in the example).

In short, the identification strategy is as follows:

measuring the scattering matrix of the filter near the optimal frequency over twice the pass band (which is 80MHz in the example).

solving bounded extremal problems, in ${H}^{2}$ norm for the transmission and in Sobolev norm for the reflection (the modulus of the response being respectively close to 0 and 1 outside the measurement interval), cf. module . This gives a scattering matrix of order roughly 1/4 of the number of data points.

rationally approximating with fixed degree (8 in this instance) via the hyperion software, cf. modules and .

A realization of the transfer function is thus obtained, and some symmetry constraints are added here.

Finally one builds a realization of the approximant and looks for a change of variables that kills non-physical couplings. This is obtained by using algebraic solvers and continuation algorithms on the group of complex orthogonal matrices (the symmetry forces this kind of change of basis).
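A toy analogue of such a change of basis can be sketched as follows (plain real linear algebra of ours, not the team's algebraic solvers and continuation method on complex orthogonal matrices): an orthogonal similarity transformation annihilates couplings while leaving the spectrum, hence the response, unchanged.

```python
import numpy as np

# Illustration only: Householder reduction of a symmetric "coupling matrix"
# to tridiagonal form by an orthogonal change of basis; all off-tridiagonal
# couplings are killed while the spectrum is preserved.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = (A + A.T) / 2                       # symmetric matrix of couplings

T = M.copy()
Q = np.eye(5)
for k in range(3):                      # Householder tridiagonalization
    x = T[k + 1:, k]
    v = x.copy()
    v[0] += np.copysign(np.linalg.norm(x), x[0])
    v /= np.linalg.norm(v)
    H = np.eye(5)
    H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
    T = H @ T @ H                       # similarity: spectrum unchanged
    Q = Q @ H
print(np.round(T, 10))
```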

The final approximation is of high quality. This can be interpreted as
a validation of the linearity hypothesis for the system:
the relative

The above considerations are valid for a large class of filters. These developments have also been used for the design of asymmetric filters, useful for the synthesis of repeater devices.

The team is now extending its investigations to the design of output multiplexers (OMUX) that couple several filters of the previous type on a manifold. The objective is to establish a global behavioral model that takes into account

within each channel the coupling between the filter and the Tee that connects it to the manifold,

the coupling between two consecutive channels.

The model is obtained by chaining the transfer matrices associated to the scattering matrices. It mixes rational elements and complex exponentials (because of the delays) and constitutes an extension of the previous framework. Under contract with the CNES (see ), the team has started a study of the design problem with gauge constraints, based on function-theoretic tools.

The use of satellites in telecommunication networks motivates a great deal of research in the area of signal and image processing. Problems of space mechanics and satellite control are also vital to these new technologies. For instance, fuel represents half of the total mass of a satellite, which is an obstacle to the payload (devices for telecommunications, image processing, surveillance, etc.), since the total mass is limited by the capacity of the launchers.

Hence it is natural to seek more efficient propulsion means. Progress in physics today permits effective ``electrical'' propulsion modes (ionic engines, plasma, etc.) that have a better efficiency, but a much smaller instantaneous thrust than traditional chemical rockets. This raises difficult control problems, whose study by the team is carried out in collaboration with Alcatel-Space Cannes, see module .

Note that space mechanics is a domain that poses a great many delicate control problems, due to the extreme conditions and long lifespan of satellites.

The increased capacity of digital channels in information technology is a major industrial challenge. The most efficient means nowadays for transporting signals from a server to the user and back is via optic fibers. The use of this medium at the limit of its response time raises new control problems for maintaining a safe signal, both in the fibers and in the routing and regeneration devices.

The team has been associated, under contract with Alcatel R&I (see module ), with the control of ``all-optical'' regenerators.

The works presented in module are upstream from applications. However, beyond the fact that deciding whether a given system is linear modulo an adequate compensator is clearly conceptually useful, the use of ``flat outputs'' for path planning is of great interest, see for instance the European Control Conference
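The idea of flatness-based path planning can be illustrated by a textbook example (a generic sketch of ours, not one of the systems studied by the team): for a flat system, state and input trajectories are recovered from a flat output and its derivatives, so steering between rest points reduces to interpolating the flat output, with no differential equation to integrate.

```python
import numpy as np

# Double integrator x'' = u: the position x is a flat output.  Steering
# from rest at x=0 to rest at x=1 in unit time reduces to choosing a
# polynomial y(t) matching the endpoint conditions; u is read off as y''.
t = np.linspace(0.0, 1.0, 101)
y = 3 * t**2 - 2 * t**3        # y(0)=0, y(1)=1, y'(0)=y'(1)=0
v = 6 * t - 6 * t**2           # velocity  y'
u = 6 - 12 * t                 # open-loop control  u = y''
print(y[0], y[-1], v[0], v[-1])
```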

There was no major development concerning the hyperion software this year. It was used in research contracts with CNES and Alcatel Space, as well as for numerical tests in crack detections.

On the other hand, we have started to create a library named bibapics, a set of matlab-callable routines, which offers the same functionalities as hyperion and is compatible with its system of batch files.
It uses XML as language for descriptions of tasks.

A development version of Tralics was sent to the APP in December 2002. Its IDDN number is
InterDepositDigitalNumber = IDDN.FR.001.510030.000.S.P.2002.000.31235.
Binary versions are available for Linux, Solaris, Windows and Mac-OS X.
Its web page is

RARL2 (Réalisation interne et Approximation Rationnelle L2) is a software for
rational approximation (see module ). Its web
page is

The transfer function to be approximated can be given either by its internal realization, or by its first $N$ Fourier coefficients, or by discretized values on the circle.

It computes a best stable approximant (local minimum) of given McMillan degree, in the ${L}^{2}$ norm.

It is somewhat related to the arl2 function of hyperion (see module ) and differs in the way it represents the systems: a polynomial representation is used in hyperion, while RARL2 uses a realization, which is very interesting in some cases. It is implemented in MATLAB. This software handles multi-variable systems (with several inputs and several outputs), and uses a parameterization that has the following advantages:

it handles only stable systems, so that the result is necessarily stable;

it allows the use of differential tools, and identifies a system uniquely;

it is well-conditioned, and computations are cheap.

An iterative search strategy on the degree of the local minima, similar in principle to that of arl2, increases the chance of obtaining the absolute minimum (see module ) by generating, in a structured manner, several initial conditions. Contrary to the polynomial case, we face a singular geometry on the boundary of the manifold on which minimization takes place, which forbids extending the criterion to the ambient space. We thus have to take into account a singularity on the boundary of the approximation domain, and it is not possible to compute a descent direction as the gradient of a function defined on a larger domain, although the initial conditions obtained from minima of lower order lie on this boundary. Determining a descent direction therefore remains, to a large extent, a heuristic step. This step works well in the cases handled up to now, but research is under way in order to make it truly algorithmic.

The RGC software (Réalisation interne à géométrie contrainte) has no web page.

The identification of filters modeled by an electrical circuit, which was developed inside the team (see module ), leads to computing the electrical parameters of the filter. This means finding a particular realization

PRESTO-HF: a toolbox dedicated to lowpass parameter identification for
hyperfrequency filters

In order to allow the industrial transfer of our methods, a Matlab-based toolbox has been developed, dedicated to the identification of low-pass hyperfrequency filter parameters. It allows one to run the following algorithmic steps, either one after the other or all together in a single sweep:

determination of the delay components caused by the access devices (automatic reference plane adjustment);

automatic determination of an analytic completion, bounded in modulus, for each channel (see module );

rational approximation, of fixed McMillan degree;

determination of a constrained realization.

For the matrix-valued rational approximation stage, Presto-HF relies either on hyperion (Unix or Linux only) or on RARL2 (platform independent); both rational approximation engines are developed within the team. Constrained realizations are computed by the RGC software. As a toolbox, Presto-HF has a modular structure, which allows one, for example, to include some of its building blocks in an already existing software.

The delay compensation algorithm is based on the following strong assumption:
far off the passband, one can reasonably expect a good approximation of the
rational components of

This toolbox is currently used by Alcatel Space in Toulouse.
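A minimal sketch of what such a delay compensation can look like (our illustration, assuming the simplest case of a pure access delay in front of a nearly constant out-of-band reflection; Presto-HF's actual algorithm is more elaborate):

```python
import numpy as np

# Far from the passband, S11 is close to r * exp(-2j*pi*f*tau) with r
# nearly constant, so the access delay tau can be recovered from the
# slope of the unwrapped phase over out-of-band frequencies.
f = np.linspace(2.0e9, 2.1e9, 401)             # out-of-band grid (Hz)
tau = 3e-9                                     # "unknown" access delay (s)
S11 = 0.98 * np.exp(-2j * np.pi * f * tau)     # synthetic measurements
phase = np.unwrap(np.angle(S11))               # remove the 2*pi jumps
slope = np.polyfit(f - f.mean(), phase, 1)[0]  # linear phase fit
tau_hat = -slope / (2 * np.pi)                 # recovered delay
print(tau_hat)
```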

The great novelty in the RAWEB2002 (Scientific Annex to the Annual Activity
Report of Inria), was the use of XML as intermediate language, and the
possibility of bypassing

The first step of this new writing scheme has been to put on the Web, for
the year 2001, together
with the HTML version (obtained by Latex2HTML) and the PostScript version
(obtained by

This XML version of 2001 was obtained via a Perl script

One important issue was the choice of the DTD (document type definition). On the one hand, it should follow the pseudo-DTD defined for the RAWEB for the last five years (the Activity Report is a set of modules, with contributors, key-words, etc.), and on the other hand, it must be as close as possible to standard DTDs. We have decided to use a variant of the TEI (text encoding initiative, see

The translation from XML to HTML is done via an XSLT style sheet and the Gnome tools (xsltproc being an efficient processor). The main difficulty comes from the mathematics: we have decided to translate all formulas into images (in the case of $x+\alpha$, only the

The translation of the XML text to a Pdf or PostScript document is a two-phase
process: first a style sheet is used, that converts the XML
into an XSL-FO document, by adding some formatting instructions (in this
phase, we explain for instance that the text font should be Times). This file
is formatted by

The Tralics software is written in C++. It emits warnings for bibliographic entries that are not of the current year; it can also generate a draft version of the PostScript output that does not require the XML tools to be installed. On the other hand, Tralics knows over one thousand commands (including those forbidden by the raweb), and is linked to the preview-latex package of David Kastrup.

The main philosophy of Tralics is to have the same parser as TeX. It handles commands like \chardef, \catcode, \ifx, \expandafter, \csname, etc., as well as \endlinechar, \read, \uppercase, \endinput, which are less used and a bit tricky. Note that a construction like \ifdim\wd0>0pt\fi is recognised by the parser, but there is no way to change the size of the box number zero, so that the test is always false.

Some commands (like \dump or \patterns) are not implemented, because they neither affect parsing nor produce an output. All commands that produce a dvi output are translated: sectioning and font commands (\chapter, \it, etc.), environments (figure, table, notes), mathematics, and of course all commands needed by the Raweb (for instance, ``topics'' management). There are some unresolved problems: for instance, Tralics understands only basic array specifications (r, l, c, and bar, not p or @), non-math material in a math formula is rejected (unless it consists of characters only), a figure environment should contain only graphics together with a single caption, and commands defined by the picture environment are translated (but refused by the style sheet). Finally, because it is too complicated to parse the result of Bibtex, we decided to use our own bibtex-to-latex translator (this is not the best solution).

For more information, see the

These parametrization issues have been studied for several years in
the project.
Atlases of charts have been derived from a matrix Schur algorithm associated
with Nevanlinna-Pick interpolation data. In a chart, a lossless function
can be represented by a balanced realization computed as a product of unitary
matrices. Moreover, an adapted chart for a given lossless
function can be built from a realization in Schur form.
Such a parametrization presents many advantages: it ensures identifiability, takes into account the stability constraint, preserves the order, and exhibits a nice numerical behavior.
This parametrization has been used in the software RARL2 which deals with
rational approximation in

The natural framework for these studies is that of complex functions, while in most applications systems are real-valued and their transfer functions have real coefficients. We may of course restrict our parametrization by imposing real interpolation data, but in this case our strategy for finding an adapted chart from the Schur form no longer works. In order to preserve all the nice properties of the previous parametrization, it appears that we must consider a more general interpolation problem, namely the contour integral interpolation problem of Nudelman. Doing this, we can follow the previous approach and build an atlas of charts for real lossless functions (of fixed degree and size), which allows for a recursive construction of balanced realizations and such that the real Schur form provides an adapted chart. These new results have been presented at the CDC03

Surface Acoustic Wave (in short: SAW) filters consist of a series of transducers which transmit electrical power by means of surface acoustic waves propagating on a piezoelectric medium. They are usually described by a mixed scattering matrix which relates acoustic waves, currents and voltages. By reciprocity and energy conservation, these transfers must be either lossless, contractive or positive real, and symmetric. In the design of SAW filters, the desired electrical power transmission is specified. An important issue is to characterize the functions that can actually be realized for a given type of filter. In any case, these functions are Schur and can be completed into a conservative matrix with an increase of at most 2 of the McMillan degree, this matrix describing the global behavior of the filter. Such a completion problem is known as Darlington synthesis, and in the rational case it always has a solution for any higher McMillan degree if the symmetry condition is of no concern. However, in our case, additional constraints arise from the geometry of the filter, such as symmetry and certain interpolation conditions. In

Meromorphic approximation of Markov functions in the

The matrix version of a Markov function is the Cauchy transform of a
positive matrix valued measure. For those, it has been proved this year
that a best

Back to scalar-valued functions, a natural
generalization of Markov functions is the class of Cauchy integrals
with respect to some complex measure supported on symmetric contours
for the Green potential in the unit disk, i.e. the
Green potential has equal normal derivative on either side of
the contour. Thus the generalization is twofold:
symmetric contours generalize the segment and complex
measures generalize positive ones. Such Cauchy integrals were
studied by H. Stahl who showed the convergence in capacity of Padé approximants
for them

It has been shown in

According to what precedes, the poles of the best rational or meromorphic approximants of the ``complex solution'' of the Laplacian on a cracked domain converge, if the crack is ``analytic enough'', to the geodesic arc that joins its endpoints, with a density that charges these endpoints (since it is a property of the equilibrium measure). This gives substantial information on the location of the crack. The case of considerably more general cracks, for instance piecewise polynomial ones, can, under suitable regularity conditions on the data to analyze, be reduced to the case of a function with a finite number, possibly larger than two, of branch points.

After having conjectured that, for a finite but arbitrary number of branch
points, the asymptotic pole distribution is the equilibrium distribution on
the continuum

The study of the Problem ( ${P}^{\prime}$) defined in section has
been carried out in the case where

An algorithm that consists in discretizing the modulus constraint and using Lagrange duality-based optimization techniques as in section has already been implemented and performs satisfactorily.

Another generalization of problem ( in the analytic
framework where

Let $f\in {L}^{2}\left(K\right)$, $\psi \in {L}^{2}(T\setminus K)$ and $M>0$;
find a function
$g\in {H}^{2}$ such that $\Vert \text{Im}g-\psi {\Vert}_{{L}^{2}(T\setminus K)}\le M$
and such that $g-f$ has minimal norm in ${L}^{2}\left(K\right)$.

Let $p\ge 1$,

This is a natural formulation for issues concerning the Dirichlet-Neumann problem for the Laplace operator, see sections and , where data and physical prior information concern real (or imaginary) parts of analytic functions.
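The mechanism behind such bounded extremal problems can be illustrated by a naive discretization (our sketch, using a truncated power basis and bisection on the Lagrange parameter rather than the solvers actually used by the team): a multiplier weights the constraint on the complementary arc and is tuned until the constraint is saturated.

```python
import numpy as np

# Data f is known on the arc K (upper half-circle); on the complementary
# arc the approximant g must stay within M of a reference psi in L2.
N, M = 15, 0.4
tK = np.linspace(0.01, np.pi - 0.01, 200)              # measurement arc K
tJ = np.linspace(np.pi + 0.01, 2 * np.pi - 0.01, 200)  # unobserved arc
f = np.exp(1j * tK) / (np.exp(1j * tK) - 1.5)          # synthetic data, analytic in the disk
psi = np.zeros_like(tJ)                                # reference behavior off K

def vander(t):
    # basis 1, z, ..., z^N at z = exp(it): a crude model of H^2 functions
    return np.exp(1j * np.outer(t, np.arange(N + 1)))

AK, AJ = vander(tK), vander(tJ)

def solve(lam):
    # penalized least squares: min ||AK c - f||^2 + lam ||AJ c - psi||^2
    A = np.vstack([AK, np.sqrt(lam) * AJ])
    rhs = np.concatenate([f, np.sqrt(lam) * psi])
    c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return c, np.linalg.norm(AJ @ c - psi) / np.sqrt(len(tJ))

lo, hi = 1e-8, 1e8
for _ in range(60):        # bisection: the constraint norm decreases in lam
    lam = np.sqrt(lo * hi)
    c, nrm = solve(lam)
    lo, hi = (lo, lam) if nrm < M else (lam, hi)
print(nrm)
```

At convergence the constraint is active (the norm on the unobserved arc equals M), as expected when the unconstrained extension violates the bound.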

For ( in the case

Situations with other values of

The fact that 2D harmonic functions are real parts of analytic functions allows one to tackle issues in singularity detection and geometric reconstruction from boundary data of solutions to Laplace equations using the meromorphic and rational approximation tools developed by the team. Some electrical conductivity defects can be modeled by pointwise sources inside the considered domain. In dimension 2, the question made significant progress last year. In this situation, the singularities of the function (of the complex variable) which is to be reconstructed from boundary measurements are poles (case of dipolar sources) or logarithmic singularities (case of monopolar sources). Hence, the behavior of the poles of the rational or meromorphic approximants, described in modules to , allows one to locate their position efficiently. This, together with the corresponding software implementation, is part of the subject of the Ph.D. thesis of F. Ben Hassen, and a paper is in preparation

In 3D, epileptic regions in the cortex are often represented by pointwise sources that have to be localized from measurements, on the scalp, of a potential difference that is the solution of a Laplace equation (EEG, electroencephalography). Note that the head is here modeled as a sequence of spherical layers. This inverse EEG problem is the object of a collaboration between the Miaou and Odyssée Teams through the ACI ``Obs-Cerv''. A nice breakthrough has been made this year, which now makes it possible to process the data via best rational approximation on a sequence of 2D disks along the sphere

In the 2D case again, but with incomplete data, the geometric problem of finding, in a constructive way, an unknown (insulating) part of the boundary of a domain is considered in the Ph.D. thesis of I. Fellah. Approximation and analytic extension techniques described in section , together with numerical conformal transformations of the disk, also provide interesting algorithms here, as well as stability properties for the inverse problem under consideration.

Finally, solving Cauchy problems for analytic functions on an annulus or on a spherical layer is also a necessary ingredient of the methodology, since it is involved in the propagation of initial conditions from the boundary to the center of the domain, where singularities are sought, when this domain is formed of several homogeneous layers of different conductivities (as in the EEG problem above). On a 2D annulus, this issue, which is the main theme of the PhD thesis of M. Mahjoub, arises when identifying a crack in a tube or a Robin coefficient on its inner skin. It can be formulated as a completion problem on the boundary of a doubly connected domain, which allows us to get both numerical algorithms and stability results in this framework
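A schematic version of this propagation step (our illustration via Laurent series and the FFT, not the team's actual algorithms): data on the outer circle of an annulus are propagated to an inner circle through a homogeneous layer, the spectral truncation reflecting the instability of the underlying Cauchy problem.

```python
import numpy as np

# Data on |z| = 1 are propagated to |z| = r through the Laurent
# coefficients.  The truncation at |k| <= K is needed because r**k
# amplifies the negative-index coefficients: a first glimpse of the
# ill-posedness of the Cauchy problem.
n, r, K = 256, 0.6, 20
th = 2 * np.pi * np.arange(n) / n
f = lambda z: 1.0 / (z - 1.5) + 1.0 / z   # analytic on the annulus 0 < |z| < 1.5
c = np.fft.fft(f(np.exp(1j * th)))        # (scaled) Laurent coefficients
k = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies, negatives aliased
c[np.abs(k) > K] = 0.0                    # truncate the unstable part of the spectrum
inner = np.fft.ifft(c * r**k)             # values of f on the circle |z| = r
err = np.max(np.abs(inner - f(r * np.exp(1j * th))))
print(err)
```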

In a priori fixed.
A notion of ``very'' formal integrability was introduced, and the equations
arising when characterizing flatness are proved to have this property.

Also, the final version of our results on topological linearization (see
``topological equivalence'' in section )
became available

We study here the problem of analytic extension of pointwise frequency
measurements of a dissipative linear system, where the strong assumption is
that the unknown part is well-modeled by a polynomial in

We want to solve the following problem:

where

and such that

Two problems arise however, in order to make the previous construction effective:

localization of the ${x}_{i}$;

tuning of the multipliers ${\lambda}_{i}$.

In order to obtain an estimation of the

Concerning the tuning of the Lagrange multipliers, we decided to solve the dual problem of concave maximization associated to the discretized version of (). The constraints in this maximization problem are linear positivity constraints on the multipliers. The computation of the gradient and the Hessian associated to this problem allowed the implementation of an efficient algorithm for solving () inside the PRESTO-HF software. Note also that techniques similar to those proposed here are under study, in order to merge them with the solution of the problem explained in section .

We studied in some generality the case of parameterized linear systems characterized by the following classical state space equation,

where

General results were obtained about these sets, in particular a necessary and sufficient condition ensuring that their cardinality is finite. In the special case of coupled resonators, an efficient algebraic formulation has been derived which allowed us to compute

Our next goal is to build a software package implementing our ideas for users in the filtering community (the package already exists, but in prototype form). Theoretically, there remains a striking question concerning the generic existence of a ``real solution'' to the filter realization problem when starting from a lossless transfer. Results have already been obtained in this direction for particular coupling geometries, but we conjecture a much more general property to hold.

An OMUX (Output MUltipleXor) can be modeled in the frequency domain by chaining the scattering matrices of filters such as those described in section , connected in parallel to a common access via a wave guide, see figure . The problem of designing the OMUX so as to satisfy gauge constraints is then naturally translated into a set of constraints on the values of the scattering matrices and the phase shifts introduced by the guides in the considered bandwidth.
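The chaining of scattering matrices can be sketched as follows (a generic two-port bookkeeping example of ours, not the team's OMUX simulator): each two-port is converted to its chain (transfer) matrix, cascading becomes matrix multiplication, and the global scattering matrix is recovered at the end.

```python
import numpy as np

def s_to_chain(S):
    # chain matrix of a two-port from its scattering matrix (needs S21 != 0)
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    d = S12 * S21 - S11 * S22
    return np.array([[d, S11], [-S22, 1.0]]) / S21

def chain_to_s(T):
    # inverse conversion
    A, B, C, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return np.array([[B / D, A - B * C / D], [1.0 / D, -C / D]])

def guide(theta):
    # scattering matrix of a lossless wave guide: a pure phase shift theta
    return np.array([[0.0, np.exp(-1j * theta)],
                     [np.exp(-1j * theta), 0.0]])

# toy lossless junction cascaded with a guide: chain matrices multiply
junction = np.array([[0.6, 0.8], [0.8, -0.6]]) + 0j
S_total = chain_to_s(s_to_chain(junction) @ s_to_chain(guide(0.7)))
print(np.round(S_total, 4))
```

Cascading two guides indeed yields a single guide with the summed phase, and the cascade of lossless two-ports remains lossless (S_total is unitary), mirroring the energy-conservation structure mentioned above.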

In a first step, in order to be able to test our methods and to
compare them with the tuning done by Alcatel Space, we have designed an OMUX
simulator on a matlab platform. The direct approach, as used by the
manufacturer, is of course to couple this simulator with an optimizer, in
order to reduce transmission and reflection wherever they are too large. This
is what we have done, using the matlab optimizer, choosing an integral

We have thus observed that, for each frequency, the constraints can be interpreted as a sequence of conditions concerning each channel one after the other, expressing that the reflection, evaluated at this frequency, belongs to a disk whose center and radius depend on the other channels and on the lengths of the guides that are not adjacent to the considered one. Hyperbolic geometry comes into play naturally, via the chaining formulas, and it produces a relative decoupling between the different parameters (channel length and filter). In particular, this shows that the tuning of each filter and each length should be possible in a diagonal manner, if we had an efficient rational approximation algorithm with pointwise constraints (the approximant should be Schur). This is an interesting question, both for applications and in itself, that will be studied in the future.

As a result, we should be able to construct a multi-phase tuning procedure, first relaxed, channel after channel, then global, using a quasi-Newton method. Note that the discretizations in frequency of the integral criterion and the near periodicity of the exponentials (that express the delays) interact in a complex manner, and generate numerous local minima.

Contract №1 03 E 1034

In the framework of a contract linking CNES, IRCOM and Inria, whose objective is to realize a software package for the identification and design of hyperfrequency devices, the work of Inria has covered:

modeling and analysis of an IMUX, see module ,

study of the structure, and computation of the coupling parameters associated to physical parameters for a given geometry (see module ),

turbo-engine for hyperion,

modeling and algorithmic analysis of an OMUX see module .

In this contract, we promised version 0.57 of hyperion to both partners. This contract has been renewed in 2003.

Sale of a license for hyperion, RARL2 and RGC.

Contract №1 01 E 0726.

This contract started in 2001, for three years. The objective is to find control laws for the orbital positioning of spacecraft (satellites) with new-generation engines, which have an excellent efficiency but a very low thrust.

Contract №1 02 E 0517. This was a one-year contract, which formally ended in February 2003.

Digital signals in optic fiber networks need some ``regeneration'' as well as conversion from one wavelength to another. The most powerful way is to avoid decoding the signal and to regenerate it in a purely analog way using nonlinear optical components. The device under consideration was based on a SOA-MZI (Mach-Zehnder Interferometer using Semiconductor Optical Amplifiers). Its tuning is very delicate and very sensitive to variations of the input signal (and these variations do occur). The goal was to set up a control procedure for such a device, to compensate for variations in the input signal. A regulation for a simpler device was already available, and a multivariable control was needed here.

We have contributed to developing a control law that performed well in the laboratory experiments. Alcatel decided to file a patent concerning this control procedure. The main reason for this success was a modeling effort.

L. Baratchart is member of the editorial board of Computational
Methods in Function Theory.

Together with project-teams Caiman and Odyssée
(INRIA-Sophia Antipolis, ENPC), the University of Nice (J.A. Dieudonné lab.),
CEA, CNRS-LENA (Paris), and a few French hospitals, we are part of the
national action ACI Masse de données « OBS-CERV », 2003-2006 (inverse
problems, EEG).

The region PACA (Provence Alpes Côte d'Azur) is partially supporting the post-doctoral stay of Per Enquist until May, 2004. We also obtained a (modest) grant from the region for exchanges with SISSA Trieste (Italy), 2003-2004.

The Team is member of the TMR network
European Research Network on System Identification (ERNSI), see

The team obtained a Marie Curie EIF (Intra European Fellowship)
FP6-2002-Mobility-5-502062, for 24 months (2003-2005). This finances Mario
Sigalotti's post-doc.

The Team is a member of the Marie Curie multi-partner training site
Control Training Site, number HPMT-CT-2001-00278, 2001-2005. See

The project is member of Working Group Control and System Theory
of the ERCIM consortium, see

NATO CLG (Collaborative Linkage Grant), PST.CLG.979703,
« Constructive approximation and inverse diffusion problems », with
Vanderbilt Univ. (Nashville, USA) and LAMSIN-ENIT (Tunis, Tunisia), 2003-2005.

In addition to the ``Scientific advisors'' and ``Visiting scientists'' listed in section , the following scientists visited us in 2003.

Mohamed Jaoua (Lamsin-ENIT, Tunis).

Herbert Stahl (TU Berlin).

Nejat Olgac, Univ. of Connecticut (Mechanical Engineering), ``On Linear Time Invariant, Time Delayed Systems (LTI-TDS)''.

Emmanuelle Crepeau, Université Paris Sud (CR2 candidate, 2003).

Bronislaw Jakubczyk, Polish Academy of Sciences, Warsaw, ``Classification of control systems on the plane and their bifurcations''.

Pascale Vitse, Université Laval, Québec, ``A tensor approach to the operatorial corona problem''.

Pascale Vitse, Université de Besançon, ``Free interpolation by polynomials of fixed degree''.

Grégoire Charlot, SISSA, Trieste (Italy) (CR2 candidate, 2003), ``Optimal control for quantum systems with n energy levels''.

Maureen Clerc, INRIA, Team Odyssée, ``Electroencephalography: direct and inverse problems''.

Tarek Hamel, Laboratoire LSC FRE-CNRS 2494, Univ. d'Evry Val d'Essonne, ``Modeling and stabilization of a four-rotor drone''.

Vladimir Peller, Michigan State University, mini-course on the analytic theory of vector-valued operators (matrix AAK theory).

Benedicte Dujardin, Observatoire de la Côte d'Azur, ``Szegő orthogonal polynomials and rational approximation''.

D. Avanessoff gave lectures in general mathematics at University of Nice - Sophia Antipolis.

L. Baratchart, DEA Géométrie et Analyse, LATP-CMI, Univ. de Provence (Marseille).

J. Leblond teaches mathematics in the 12-15 cycle of Montessori les Pouces Verts.

Antoine Chaillet, « Fonction de Lyapunov contrôlée pour le transfert d'orbite avec rendez-vous en faible poussée » (control Lyapunov functions for low-thrust orbit transfer with rendezvous). DEA, Université Paris-Sud (Orsay).

David Avanessoff, « Linéarisation dynamique des systèmes non linéaires et paramétrage de l'ensemble des solutions » (dynamic linearization of nonlinear control systems, and parameterization of the set of all trajectories).

Fehmi Ben Hassen, « Localisation de sources ponctuelles par approximation rationnelle et méromorphe » (localization of point sources by rational and meromorphic approximation), co-tutelle with Lamsin-ENIT (Tunis).

Alex Bombrun, « Commande optimale, feedback, et transfert orbital de satellites » (optimal control, feedback, and low-thrust satellite orbit transfer).

Imen Fellah, ``Data completion in Hardy classes and applications to inverse problems'', co-tutelle with Lamsin-ENIT (Tunis).

Reinhold Küstner, ``Asymptotic Zero Distribution of Orthogonal Polynomials with respect to Complex Measures having Argument of Bounded Variation'', May 27, 2003.

F. Wielonsky is on leave at the University of Lille.

J.-B. Pomet is in charge of organizing a seminar on control and identification.

L. Baratchart is a member of the ``bureau'' of the CP (Comité des Projets) of INRIA-Sophia Antipolis.

J. Grimm is a member of the CUMI (Comité des utilisateurs des moyens informatiques) of the Research Unit of Sophia Antipolis.

J. Leblond is part of the Colors Committee of INRIA-Sophia Antipolis.

J.-B. Pomet is a representative at the ``comité de centre''.

Several members of the team have participated in the Direction (co-director:
J. Leblond), Scientific (L. Baratchart), and Organization (J. Grimm,
F. Limouzis) Committees of the
CNRS-INRIA summer school ``Harmonic analysis and rational approximation: their
rôles in signals, control and dynamical systems theory'',
Porquerolles, September.

The whole team has also been deeply involved in establishing and drafting the proposal for a new project-team, named Apics.

Talks, courses, sessions, software demonstrations at the
CNRS-INRIA summer school ``Harmonic analysis and rational approximation: their
rôles in signals, control and dynamical systems theory'',
Porquerolles, September.

J. Grimm gave a talk about Tralics at EuroTeX 2003 (Brest).

David Avanessoff and Mario Sigalotti gave talks at the ``2nd Junior European Meeting on Control Theory and Stabilization'', Torino, Italy.

J. Leblond gave invited talks at the Applied Analysis Seminar, LATP, Univ. Provence (Aix-Marseille I); at the MEEG Workshop at UTC, Compiègne (``Inverse problems in medical imaging: source localization for EEG/MEG''); and at the Infinite Dimensional Dynamical Systems (IDDS) workshop, Exeter, UK.

M. Olivi gave a talk at CDC 2003, Maui, Hawaii (USA), December 9-12.

F. Seyfert gave talks at the ``Journées nationales du calcul formel
2003'', on the use of computer-algebra-based methods for the
exhaustive computation of coupling parameters, and at ``Advances in
Constructive Approximation'' (Nashville), about the mixed

L. Baratchart was an invited speaker at the ``Advances in Constructive Approximation'' conference, May 2003, Vanderbilt University (Tennessee), and at the colloquium of Michigan State University (East Lansing) in March 2003.

F. Wielonsky delivered a talk at the workshop ``Complex Analysis and Inverse Problems'', December 15-19, I.H.P. (Paris).