The team aims to design and develop constructive methods for the modeling, identification, and control of dynamical, resonant, and diffusive systems.

Function theory and approximation theory in the complex domain, with applications to frequency identification of linear systems and inverse boundary problems for the Laplace and conjugate-Beltrami operators.

System and circuit theory with applications to the modeling of analog microwave devices. Development of dedicated software for the synthesis of such devices.

Inverse potential problems in 2-D and 3-D and harmonic analysis with applications to non-destructive control (from magneto/electro-encephalography in medical engineering or plasma confinement in tokamaks for nuclear fusion to inverse magnetization problems in paleomagnetism).

Control and structure analysis of non-linear systems with applications to orbit transfer of satellites.

Collaboration under contract with Thales Alenia Space (Toulouse, Cannes, and Paris), CNES (Toulouse), XLim (Limoges), CEA-IRFM (Cadarache).

Exchanges with UST (Villeneuve d'Ascq), University Bordeaux-I (Talence), University of Orléans (MAPMO), University of Pau (EPI Inria Magique-3D), University Marseille-I (CMI), CWI (the Netherlands), SISSA (Italy), the Universities of Illinois (Urbana-Champaign USA), California at San Diego and Santa Barbara (USA), Michigan at East-Lansing (USA), Vanderbilt University (Nashville USA), Texas A&M (College Station USA), ISIB (CNR Padova, Italy), Politecnico di Milano (Milan, Italy), Beer Sheva (Israel), RMC (Kingston, Canada), University of Erlangen (Germany), Leeds (UK), Maastricht University (The Netherlands), Cork University (Ireland), Vrije Universiteit Brussel (Belgium), TU-Wien (Austria), TFH-Berlin (Germany), CINVESTAV (Mexico), ENIT (Tunis), KTH (Stockholm), University of Bilbao (Universidad del País Vasco / Euskal Herriko Unibertsitatea, Spain).

The project is involved in the ANR projects AHPI (Math., coordinator) and Filipix (Telecom.), in an EMS21-RTG NSF program (with Vanderbilt University, Nashville, USA), in an NSF grant with Vanderbilt University and MIT, and in an EPSRC grant with Leeds University (UK).

Identification typically consists in approximating experimental data by the predictions of a model belonging to some model class. It therefore comprises two steps, namely the choice of a suitable model class and the determination of the model in that class which best fits the data. The ability to solve this approximation problem, which is often non-trivial and ill-posed, conditions the effectiveness of the method.

Particular attention is paid within the team to the class of stable linear time-invariant systems, in particular resonant ones, and to isotropically diffusive systems, with techniques that draw on functional and harmonic analysis. In fact one often restricts to a smaller class—*e.g.*, rational models of suitable degree (resonant systems, see section ) or other structural constraints—and this leads us to split the identification problem into two consecutive steps:

Seek a stable but infinite (numerically: high) dimensional model to fit the data. Mathematically speaking, this step consists in reconstructing a function analytic in the right half-plane or in the unit disk (the transfer function) from its values on an interval of the imaginary axis or of the unit circle (the bandwidth). We embed this classical ill-posed issue (*i.e.*, the inverse Cauchy problem for the Laplace equation) into a family of well-posed extremal problems that may be viewed as a regularization scheme of Tikhonov type. These problems are infinite-dimensional but convex (see section ).
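To fix ideas, this first step can be sketched numerically. Assuming the standard form of the bounded extremal problem (best approximation of the data on a sub-arc $I$ of the circle, under a norm constraint of level $M$ on the complementary arc $J$), a crude polynomial discretization with a bisection on the Lagrange parameter reads as follows; all numerical choices below are illustrative and do not reflect the team's actual solvers:

```python
import numpy as np

def solve_bep(f_vals, zI, zJ, deg, M):
    """Fit a polynomial (a crude stand-in for an H^2 function) to data on
    the arc zI, subject to an RMS bound M on the complementary arc zJ.
    The Lagrange parameter lam is tuned by bisection until the constraint
    saturates, as in Tikhonov-type regularization."""
    AI = np.vander(zI, deg + 1, increasing=True)
    AJ = np.vander(zJ, deg + 1, increasing=True)
    def coeffs(lam):
        G = AI.conj().T @ AI + lam * AJ.conj().T @ AJ
        return np.linalg.solve(G, AI.conj().T @ f_vals)
    lam_lo, lam_hi = 1e-12, 1e6
    for _ in range(200):                      # geometric bisection on lam
        lam = np.sqrt(lam_lo * lam_hi)
        normJ = np.linalg.norm(AJ @ coeffs(lam)) / np.sqrt(len(zJ))
        if normJ > M:
            lam_lo = lam                      # constraint violated: penalize more
        else:
            lam_hi = lam
    return coeffs(np.sqrt(lam_lo * lam_hi))

# Data on the right half-circle, norm bound imposed on the left half-circle.
thI = np.linspace(-np.pi / 2, np.pi / 2, 200)
thJ = np.linspace(np.pi / 2, 3 * np.pi / 2, 200)
zI, zJ = np.exp(1j * thI), np.exp(1j * thJ)
f = 1.0 / (zI - 1.5)          # trace of a function analytic in the disk
c = solve_bep(f, zI, zJ, deg=10, M=0.2)
```

At the solution the constraint saturates, a hallmark of these extremal problems: tightening or relaxing $M$ trades data misfit on $I$ against the norm on $J$.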

Approximate the above model by a lower order one reflecting further known properties of the physical system. This step aims at reducing the complexity while bringing physical significance to the design parameters. It typically consists of a rational or meromorphic approximation procedure with a prescribed number of poles in certain classes of analytic functions. Rational approximation in the complex domain is a classical but difficult non-convex problem, for which few effective methods exist. In relation to system theory, two specific difficulties come on top of the classical situation: one must control the region where the poles of the approximants lie in order to ensure the stability of the model, and one has to handle matrix-valued functions when the system has several inputs and outputs, in which case the number of poles must be replaced by the McMillan degree (see section ).
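A crude sketch of this second step in the scalar case, assuming a variable-projection strategy that is not the team's actual algorithm: poles are parametrized so as to stay outside the closed unit disk (stability of the approximant in $H^2$ of the disk), and for fixed poles the optimal residues come from a linear least-squares problem:

```python
import numpy as np
from scipy.optimize import minimize

z = np.exp(2j * np.pi * np.arange(256) / 256)   # unit-circle grid
f = 1.0 / (z - 1.5)                             # target, analytic in the disk

def residual(params, n):
    # Unconstrained parameters -> poles with |p| > 1 (stable approximant).
    w = params[:n] + 1j * params[n:]
    p = w + w / np.abs(w)
    A = 1.0 / (1.0 - z[:, None] / p[None, :])   # partial-fraction basis
    a, *_ = np.linalg.lstsq(A, f, rcond=None)   # optimal residues: linear step
    return np.linalg.norm(A @ a - f) / np.sqrt(z.size), p

n = 1
out = minimize(lambda q: residual(q, n)[0], x0=[1.0, 0.0],
               method="Nelder-Mead")
err, poles = residual(out.x, n)
```

On this toy target the degree-1 approximant recovers the pole at $1.5$ almost exactly, since the target is itself rational of degree 1.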

When identifying elliptic (Laplace, conjugate-Beltrami) partial differential equations from boundary data, point 1. above can be recast as an inverse boundary-value problem with (overdetermined Dirichlet-Neumann) data on part of the boundary of a plane domain (recover a function, analytic in a domain, from incomplete boundary data). As such, it arises naturally in higher dimensions when analytic functions get replaced by gradients of harmonic functions (see section ). Initial motivations of the team include:

free boundary problems in plasma control;

the recovery of sources, that arises for instance in magneto/electro-encephalography;

the detection of cracks and occlusions in non-destructive control.

We aim at generalizing this approach to the conjugate-Beltrami equation in dimension 2 (section ) and to the Laplace equation in dimension 3 (section ).

Step 2 above, *i.e.*, meromorphic approximation with a prescribed number of poles, is used to approach other inverse problems beyond harmonic identification. In fact, the way the singularities of the approximant (*i.e.*, its poles) relate to the singularities of the approximated function is an all-pervasive theme in approximation theory: for appropriate classes of functions, the location of the poles of the approximant can be used as an estimator of the singularities of the approximated function (see section ).

We provide further details on the two steps mentioned above in the sub-paragraphs to come.

Given a planar domain

A standard extremal problem on the disk is:

(

When seeking an analytic function in

(

Here

To fix terminology, we generically refer to ( ) as a *bounded extremal problem*. The solution to this convex infinite-dimensional optimization problem can be obtained upon iteratively solving spectral equations for appropriate Hankel and Toeplitz operators, which involve a Lagrange parameter and whose right-hand side is given by the solution to (

Various modifications of

The above-mentioned problems can be stated on an annular geometry rather than on a disk. One may for instance *seek the inner boundary*, knowing it is a level curve of the flux (see section ). Here, the Lagrange parameter indicates which deformation should be applied to the inner contour in order to improve data fitting.

A continuing effort is made by the team to carry over bounded extremal problems and their solutions to more general settings.

Such generalizations are twofold: on the one hand Apics considers 2-D diffusion equations with variable (but for now isotropic) conductivity, on the other hand it investigates the ordinary
Laplacian in

An isotropic diffusion equation in dimension 2 can be recast as a so-called conjugate or real Beltrami equation. This way, analytic functions get replaced by “generalized” ones in problems (

At present, bounded extremal problems for the

Let as before

A natural generalization of problem (

(

Problem (

Only for

The case
*complement* of
*stable* rational approximant to
*not* be unique.

The former Miaou project (predecessor of Apics) designed an adapted steepest-descent algorithm for the case
*local minimum* is guaranteed; until now it seems to be the only procedure meeting this property. Roughly speaking, it is a gradient algorithm that proceeds recursively with respect to the order
*critical points* of lower degree (as done by the Endymion software, section , and the RARL2 software, section ).
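The recursion over the order can be sketched with a toy variable-projection scheme of our own (for fixed poles outside the closed unit disk, the optimal residues solve a linear least-squares problem); the point being illustrated is that the degree-$n$ search is initialized from the degree-$(n-1)$ minimizer, augmented with one extra pole:

```python
import numpy as np
from scipy.optimize import minimize

z = np.exp(2j * np.pi * np.arange(256) / 256)      # unit-circle grid
f = 1.0 / (z - 1.2) + 0.5 / (z - 2.0)              # target with two poles

def err(params):
    n = len(params) // 2
    w = params[:n] + 1j * params[n:]
    p = w + w / np.abs(w)                          # forces |p| > 1: stability
    A = 1.0 / (1.0 - z[:, None] / p[None, :])
    a, *_ = np.linalg.lstsq(A, f, rcond=None)      # optimal residues
    return np.linalg.norm(A @ a - f) / np.sqrt(z.size)

errs, params = [], np.array([2.0, 0.0])
for n in (1, 2):
    out = minimize(err, params, method="Nelder-Mead")
    errs.append(out.fun)
    # Next degree: keep the optimal poles, append one extra far-away pole.
    params = np.concatenate([out.x[:n], [3.0], out.x[n:], [0.0]])
```

The approximation error decreases strictly with the degree, and the degree-2 approximant essentially reproduces the target, which is itself rational of degree 2.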

In order to establish convergence of the algorithm to the global minimum, Apics has undertaken a long-haul study of the number and nature of critical points, in which tools from differential topology and operator theory team up with classical approximation theory. The main discovery is that the nature of the critical points (*e.g.*, local minima, saddles...) depends on the decrease of the interpolation error to
(*i.e.*, Markov functions), the exponential function, and meromorphic functions. The case where

A common feature of all these problems is that the critical point equations express non-Hermitian orthogonality relations for the denominator of the approximant. This is used in an essential manner to assess the behavior of the poles of the approximants to functions with branched singularities, which is of particular interest for inverse source problems (*cf.* section ).

In higher dimensions, the analog of problem (

Certain constrained rational approximation problems, of special interest in the identification and design of passive systems, arise when putting additional requirements on the approximant, for instance that it should be smaller than 1 in modulus. Such questions have become over the years an increasingly significant part of the team's activity (see section ). When translated over to the circle, a prototypical formulation consists in approximating the modulus of a given function by the modulus of a rational function of degree
*i.e.*, the transfer function) and the linear differential equations that generate this response (*i.e.*, the state-space representation), which is the object of the so-called *realization* process. Since filters have to be considered as dual-mode cavities, the realization issue must indeed be tackled in a

We refer here to the behavior of the poles of best meromorphic approximants, in the

Generally speaking, the behavior of the poles is particularly important in meromorphic approximation, both to obtain error rates as the degree grows large and to tackle constructive issues like uniqueness. However, the original motivation of Apics is to consider this issue in connection with the approximation of the solution to a Dirichlet-Neumann problem, so as to extract information on the singularities. The general theme is thus: *how do the singularities of the approximant reflect those of the approximated function?* The approach to inverse problems for the 2-D Laplacian that we outline here is attractive when the singularities are zero- or one-dimensional (see section ). It can be used as a computationally cheap preliminary step to obtain the initial guess of a more precise but heavier numerical optimization.

As regards crack detection or source recovery, the approach in question boils down to analyzing the behavior of best meromorphic approximants of a function with branch points. We were able to prove ( , ) that the poles of the approximants accumulate in a neighborhood of the hyperbolic geodesic arc that links the endpoints of the crack, or the sources. Moreover, the asymptotic density of the poles turns out to be the equilibrium distribution on the geodesic arc of the Green potential, and this distribution puts heavy charge at the endpoints, which are thus well localized if one is able to compute sufficiently many zeros (this is where the method could fail). The case of more general cracks, as well as situations where three or more sources are present, requires handling a finite but arbitrary number of branch points. These are outstanding open questions for applications to inverse problems (see section ), as is the problem of a general singularity, which may be two-dimensional.
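A classical one-dimensional instance of this theme is Markov's theorem: the poles of diagonal Padé approximants to the Cauchy transform of a positive measure on a segment all lie on that segment, where they discretize the limit density. This is easy to check numerically; the measure below (Lebesgue on $[-1/2,1/2]$) is our own toy choice, much simpler than the branch-point situation above:

```python
import numpy as np

a, b, n = -0.5, 0.5, 5
# Moments c_k = int_a^b t^k dt of the measure defining the Markov
# function f(z) = int dmu(t) / (z - t).
c = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(2 * n)])
# The denominator q of the diagonal Pade approximant at infinity obeys
# the orthogonality relations  int t^k q(t) dmu(t) = 0,  k = 0..n-1,
# i.e. a Hankel linear system for its coefficients (q is monic).
H = np.array([[c[i + j] for j in range(n)] for i in range(n)])
q = np.linalg.solve(H, -c[n:2 * n])
poles = np.roots(np.concatenate([[1.0], q[::-1]]))
```

The computed poles are real and interior to $(-1/2, 1/2)$; they are in fact the Gauss quadrature nodes of the measure, which is the discrete analogue of the equilibrium-distribution statement above.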

Results of this type open new perspectives in non-destructive control, in that they link issues of current interest in approximation theory (the behavior of zeroes of non-Hermitian orthogonal polynomials) to some classical inverse problems for which a dual approach is thereby proposed: to approximate the boundary conditions by true solutions of the equations, rather than approximating (by discretization) the equation itself.

We wish to point out that rational or meromorphic approximation to the Cauchy transform of a measure can be viewed as a discretization of the logarithmic potential of that measure. If approximation takes place in the

Matrix-valued approximation is necessary for handling systems with several inputs and outputs, and it generates substantial additional difficulties with respect to scalar approximation,
theoretically as well as algorithmically. In the matrix case, the McMillan degree (
*i.e.*, the degree of a minimal realization in the System-Theoretic sense) generalizes the degree.

The problem we want to consider reads:
*Let $\mathcal{F}\in {\left({H}^{2}\right)}^{m\times l}$ and $n$ an integer; find a rational matrix of size $m\times l$ without poles in the unit disk and of McMillan degree at most $n$ which is nearest possible to $\mathcal{F}$ in ${\left({H}^{2}\right)}^{m\times l}$.* Here the

The approximation algorithm designed in the scalar case generalizes to the matrix-valued situation. The first difficulty here consists in the parametrization of transfer matrices of given McMillan degree
(*i.e.*, matrix-valued functions that are analytic in the unit disk and unitary on the circle) of degree

The set of inner matrices of given degree has the structure of a smooth manifold, which allows one to use differential tools as in the scalar case. In practice, one has to produce an atlas of charts (parametrizations valid in a neighborhood of a point), and changes of charts must be handled in the course of the algorithm. Such parametrizations can be obtained from interpolation theory and Schur-type algorithms, the parameters being interpolation vectors or matrices ( , , ). Some of these parametrizations are of particular interest for the computation of realizations ( , ), involved in the estimation of physical quantities for the synthesis of resonant filters. Two rational approximation software codes (see sections and ) have been developed in the team.

The problem of multiple local minima naturally arises in the matrix-valued case as well, but deriving criteria that guarantee uniqueness is even more difficult than in the scalar case. The already investigated case of rational functions of the sought degree (the consistency problem) was solved using rather heavy machinery. The case of matrix-valued Markov functions, the first example beyond rational functions, has undergone progress only recently.

Let us stress that the algorithms mentioned above are the first to handle rational approximation in the matrix case in a way that converges to local minima while meeting stability constraints on the approximant.

Using the terminology defined in the beginning of section , the class of models considered here is the one of finite dimensional nonlinear control systems. In many cases, a linear control based on the linear approximation around a nominal point or trajectory is sufficient. However, there are important instances where it is not, either because the magnitude of the control is limited or because the linear approximation is not controllable, or else in control problems like path planning, that are not local in nature.

State feedback stabilization consists in designing a control law which is a function of the state and makes a given point (or trajectory) asymptotically stable for the closed-loop system.
That function of the state must bear some regularity, at least enough regularity to allow the closed-loop system to make sense; continuous or smooth feedback would be ideal, but one may also
be content with discontinuous feedback if robustness properties are not defeated. One can consider this as a weak version of the optimal control problem which is to find a control that
minimizes a given criterion (for instance the time to reach a prescribed state). Optimal control generally leads to a rather irregular dependence on the initial state; in contrast,
stabilization is a *qualitative* objective (*i.e.*, to reach a given state asymptotically) which is more flexible and allows one to impose much more regularity.

Lyapunov functions are a well-known tool for studying the stability of uncontrolled dynamical systems. For a control system, a *Control Lyapunov Function* is a Lyapunov function for the closed-loop system where the feedback is chosen appropriately. It can be expressed by a differential inequality called the “Artstein (in)equation”, reminiscent of the Hamilton-Jacobi-Bellman equation but largely under-determined. One can easily deduce a continuous stabilizing feedback control from the knowledge of a control Lyapunov function; moreover, even when such a control is known beforehand, obtaining a control Lyapunov function can still be very useful to deal with robustness issues.

Moreover, if one has to deal with a problem where it is important to optimize a criterion, and if the optimal solution is hard to compute, one can look for a control Lyapunov function which comes “close” (in the sense of the criterion) to the solution of the optimization problem but leads to a control easier to deal with.
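As an illustration of the first point, Sontag's universal formula produces an explicit continuous feedback from any control Lyapunov function. A toy sketch on the scalar system $\dot{x} = x^2 + u$ with the candidate CLF $V(x) = x^2/2$, both chosen by us purely for illustration:

```python
import numpy as np

def sontag(x):
    """Sontag's universal formula for xdot = f(x) + g(x) u, from a CLF V:
    u = -(a + sqrt(a^2 + b^4)) / b with a = V'f and b = V'g (u = 0 if b = 0).
    Here f(x) = x**2, g(x) = 1 and V(x) = x**2 / 2."""
    a = x * x**2                # V'(x) f(x)
    b = x                       # V'(x) g(x)
    if b == 0.0:
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# Euler simulation of the closed loop from x(0) = 1.5.
x, dt = 1.5, 1e-3
for _ in range(10000):          # integrate over 10 time units
    x += dt * (x**2 + sontag(x))
```

The closed loop reads $\dot{x} = -x\sqrt{x^2+1}$ here, so $V$ decreases along trajectories and the state converges to the origin, as the simulation confirms.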

A class of systems of interest to us is that of systems with a conservative drift and a small control (whose effect is small in magnitude compared to the drift). A prototype is the control of a satellite with low-thrust propellers: the conservative drift is the classical Kepler problem and the control is small compared to the Earth's attraction. We developed, starting with Alex Bombrun's PhD, original averaging methods that differ from classical ones in that the average is itself a *control system*, *i.e.*, the averaging process does not depend on the control strategy. A reference paper has been submitted.
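The averaging idea itself can be demonstrated on a toy slow-fast system (a scalar example of our own, illustrating the $O(\epsilon)$ closeness of the averaged flow; the team's averaging of control systems is considerably more involved):

```python
import numpy as np

# Slow-fast toy system:  rdot = eps * cos(theta)**2 * r,  thetadot = 1.
# The averaged system replaces cos(theta)**2 by its mean over the fast
# angle, namely 1/2, and stays O(eps)-close on time scales of order 1/eps.
eps, dt, T = 0.05, 1e-3, 40.0
r, theta = 1.0, 0.0
for _ in range(int(T / dt)):            # Euler integration of the full system
    r += dt * eps * np.cos(theta)**2 * r
    theta += dt
r_avg = np.exp(eps * T / 2)             # closed-form averaged solution
```

Over the horizon $T = 2/\epsilon$ the full and averaged slow variables agree to within a few percent, which is the quantitative content of the averaging theorem in this simple setting.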

These constructions were exploited in a joint collaborative research conducted with Thales Alenia Space (Cannes), where minimizing a certain cost is very important (fuel consumption / transfer time) while at the same time a feedback law is preferred because of robustness and ease of implementation (see section ).

Optimal transport is the problem of finding the cheapest transformation that moves a given initial measure to a given final one, where the cost of the transformation is obtained by integrating against the measure a point-to-point cost; the latter may be a squared Euclidean distance, a Riemannian distance on a manifold, or a more exotic cost in which some directions are privileged, which leans naturally towards optimal control.

The problem has a long history which goes back to the pioneering works and , and was more recently revised and revitalized by and . At the same time, applications to many domains ranging from image processing to shape reconstruction or urban planning were developed, see a survey in .
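When both measures are finitely supported, the problem reduces to a finite linear program over transport plans, which makes for a compact numerical sketch (squared Euclidean cost on the line and toy data of our own; this is the classical setting, not the control-cost one discussed next):

```python
import numpy as np
from scipy.optimize import linprog

# Move mass mu at points x onto mass nu at points y, squared-distance cost.
x, mu = np.array([0.0, 1.0]), np.array([0.5, 0.5])
y, nu = np.array([0.0, 2.0]), np.array([0.5, 0.5])
C = (x[:, None] - y[None, :])**2            # cost matrix
m, n = C.shape

# Marginal constraints on the transport plan: row sums = mu, column sums = nu.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0        # mass leaving x[i]
for j in range(n):
    A_eq[m + j, j::n] = 1.0                 # mass arriving at y[j]

res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
              bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)
```

On the line with a convex cost the optimal plan is the monotone rearrangement (here $0\mapsto 0$, $1\mapsto 2$, total cost $1/2$), which the linear program recovers.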

We are interested in optimal transport problems with a cost coming from optimal control, *i.e.*, coming from minimizing an integral quadratic cost among trajectories that are subject to differential constraints coming from a control system, and also in using geometric control methods in general transport problems. The case of controllable affine control systems without drift (in which case the cost is the sub-Riemannian distance) is studied in , , . Systems with drift are the topic of A. Hindawi's PhD.

The optimal transport problem in this setting borrows methods from control; at the same time, it may help in understanding optimal control: the problem of moving optimally from one point to another is a singular limit of the problem of moving optimally a measure with a smooth density to another one, as the measures tend to Dirac masses.

See new results in section .

The motivations for a detailed study of equivalence classes and invariance of models of control systems under various classes of transformations are two-fold:

From the point of view of control, a command satisfying specific objectives on the transformed system can be used to control the original system including the transformation in the controller.

From the point of view of identification and modeling, the interest is either to derive qualitative invariants to support the choice of a non-linear model given the observations, or to contribute to a classification of non-linear models, which is sorely missing today. This is a prerequisite for a general theory of non-linear identification; indeed, the success of the linear model in control and identification owes more to the deep understanding one has of it than to some universal character of linearity.

The interested reader can find a richer overview (in French) in the first chapter of .

A *static feedback* transformation is a (non-singular) re-parametrization of the control depending on the state, together with a change of coordinates in the state space. Static equivalence has motivated a very wide literature; in the differentiable case, classification has been performed in relatively low dimensions; it gives insight on models and also points out that this equivalence is “too fine”, *i.e.*, very few systems are equivalent and normal forms are far from being stable. This motivates the search for a coarser equivalence that would account for more qualitative phenomena.
The Hartman-Grobman theorem states that every ordinary differential equation (*i.e.*, dynamical system without control) is locally equivalent, in a neighborhood of a non-degenerate equilibrium, to a linear system via a transformation that is solely bi-continuous, whereas smoothness requires many more invariants. This was a motivation to study *topological* (not necessarily smooth) equivalence. A “Hartman-Grobman theorem for control systems” is stated in under weak regularity conditions; it is too abstract to be relevant to the above considerations on qualitative phenomena: linearization is performed by functional non-causal transformations rather than feedback transformations *stricto sensu*; it acquires, however, a concrete meaning when the inputs are themselves generated by finite-dimensional dynamics. A stronger Hartman-Grobman theorem for control systems (where transformations are homeomorphisms in the state-control space) in fact cannot hold: almost all topologically linearizable control systems are differentiably (in the same class of regularity as the system itself) linearizable. In general (equivalence between nonlinear systems), topological invariants are still a subject of interest to us.

A *dynamic feedback* transformation consists of a dynamic extension (adding new states, and assigning them new dynamics) followed by a state feedback on the augmented system; dynamic equivalence is another attempt to enlarge the classes of equivalence. It is indeed strictly more general than static equivalence: it is known that many systems are dynamic-feedback equivalent, but not static-feedback equivalent, to a linear controllable system. The classes containing a linear controllable system are those of *differentially flat systems*; it turns out (see ) that many practical systems are in this class, and that being “flat” also means that all the solutions of the system are given by a *(Monge) parametrization* that describes the solutions without any integration.

An important question remains open: how can one algorithmically decide whether a given system has this property or not, *i.e.*, is dynamic linearizable or not? The mathematical difficulty is that no a priori bound is known on the order of the differential operator giving the parametrization. Within the team, results on low-dimensional systems have been obtained ; the above-mentioned difficulty is not solved for these systems, but results are given with *a priori* prescribed bounds on this order.

For general dynamic equivalence as well as flatness, very few invariants are known. In particular, the fact that the size of the extra dynamics contained in the dynamic transformation (or the order of the above-mentioned differential operator, for flatness) is not a priori bounded makes it very difficult to prove that two systems are *not* dynamic-feedback equivalent, or that a system is *not* flat. Many simple systems pointed out in are conjectured not to be flat, but no proof is available. The only known general necessary condition for flatness is the so-called ruled surface criterion; it was generalized by the team to dynamic equivalence between arbitrary nonlinear systems in .

Another attempt towards conditions for flatness used the differential algebraic point of view: the module of differentials of a controllable system is, generically, free and finitely
generated over the ring of differential polynomials in

This domain is mostly connected to the techniques described in section .

We are mainly concerned with classical inverse problems such as localizing defects (cracks, pointwise sources, or occlusions) in a two- or three-dimensional domain from boundary data (which may correspond to thermal, electrical, or magnetic measurements) of a solution to the Laplace equation or to some conductivity equation in the domain. These defects show up as a lack of analyticity of the solution of the associated Dirichlet-Neumann problem, which may be approached, in balls, using techniques of best rational or meromorphic approximation on the boundary of the object (see section ).

Actually, it turns out that traces of the boundary data on 2-D cross sections (disks) coincide with analytic functions in the slicing plane that have branched singularities inside the disk. These singularities are related to the actual location of the sources (namely, they in turn reach a maximum in modulus when the plane contains one of the sources). Hence, we are back to the 2-D framework, where the approximate recovery of these singularities can be performed using best rational approximation.

In this connection, the realistic case where data are available on part of the boundary only offers a typical opportunity to apply the analytic extension techniques (see section ) to Cauchy type issues, a somewhat different kind of inverse problems in which the team is strongly interested.

The approach proposed here consists in recovering, from measured data on a subset

The analytic approximation techniques (section
) first allow us to extend

From these extended data on the whole boundary, one can obtain information on the presence and location of

This is the case in dimension 2, using classical links between analyticity and harmonicity , but also in dimension 3, at least in spherical or ellipsoidal domains, working on 2-D plane sections, , .

The previous two steps were shown to provide a robust way of locating sources from incomplete boundary data in a 2-D situation with several annular layers. Numerical experiments have already yielded excellent results in 3-D situations, and we are now on the way to processing real experimental magneto-encephalographic data; see also sections and . The PhD thesis of A.-M. Nicu is concerned with these applications, see , in collaboration with the Athena team of Inria Sophia Antipolis, and with neuroscience teams in partner hospitals (hosp. Timone, Marseille).

Such methods are currently being generalized to problems with variable conductivity governed by a 2-D conjugate-Beltrami equation, see , . The application we have in mind is to plasma confinement for thermonuclear fusion in a Tokamak, more precisely with the extrapolation of magnetic data on the boundary of the chamber from the outer boundary of the plasma, which is a level curve for the poloidal flux solving the original div-grad equation. Solving this inverse problem of Bernoulli type is of importance to determine the appropriate boundary conditions to be applied to the chamber in order to shape the plasma . These issues are the topics of the PhD theses of S. Chaabi and Y. Fischer , and of a joint collaboration with the CEA-IRFM (Cadarache), the Laboratoire J.-A. Dieudonné at the Univ. of Nice-SA, and the CMI-LATP at the Univ. of Marseille I (see section ).

Inverse potential problems are also naturally encountered in magnetization issues that arise in nondestructive control. A particular application, which is the object of a joint NSF-supported project with Vanderbilt University and MIT, is to geophysics, where the remanent magnetization of a rock is analyzed using a SQUID magnetometer in order to recover the rock's history; specifically, the analysis of Martian rocks is conducted at MIT, for instance to understand whether inversions of the magnetic field took place there. Mathematically speaking, the problem is to recover the (3-D valued) magnetization

outside the volume

It turns out that discretization issues in geophysics can also be approached by these approximation techniques. Namely, in geodesy or for GPS computations, one may need a best discrete approximation of the gravitational potential on the Earth's surface, from partial data collected there. This is also the topic of the PhD thesis of A.-M. Nicu and of a nascent collaboration with a physicist colleague (IGN, LAREG, geodesy). Related geometrical issues (finding the geoid, the level surface of the gravitational potential) are worthy of consideration as well.

This domain is mostly connected to the techniques described in section .

One of the best training grounds for the research of the team in function theory is the identification and design of physical systems for which the linearity assumption works well in the considered frequency range, and whose specifications are given in the frequency domain. Resonant systems, whether acoustic or electromagnetic, are prototypical devices of common use in telecommunications.

In the domain of space telecommunications (satellite transmissions), constraints specific to onboard technology lead to the use of filters with resonant cavities in the microwave range. These filters serve multiplexing purposes (before or after amplification) and consist of a sequence of cylindrical hollow bodies, magnetically coupled by irises (orthogonal double slits). The electromagnetic wave that traverses the cavities satisfies the Maxwell equations, which force the tangential electrical field along the body of the cavity to be zero. A deeper study (of the Helmholtz equation) shows that essentially only a discrete set of wave vectors is selected. In the considered frequency range, the electrical field in each cavity can be decomposed along two orthogonal modes, perpendicular to the axis of the cavity (other modes are far off in the frequency domain, and their influence can be neglected).

Near the resonance frequency, a good approximation of the Maxwell equations is given by the solution of a second-order differential equation. One thus obtains an electrical model of the filter as a sequence of electrically coupled resonant circuits, each circuit being modeled by two resonators, one per mode, whose resonance frequencies represent the frequencies of the modes and whose resistances represent the electrical losses (current on the surface).

In this way, the filter can be seen as a two-port network, when plugged into a resistor at one end and fed with some potential at the other end. We are then interested in the power which is transmitted and reflected. This leads to defining a scattering matrix

In fact, resonance is not studied via the electrical model, but via a low-pass equivalent circuit obtained upon linearizing near the central frequency, which is no longer conjugate symmetric
(
*i.e.*, the underlying system may not have real coefficients) but whose degree is divided by 2 (8 in the example).
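The low-pass equivalent of such a chain of coupled resonators is conveniently described by a coupling matrix; the following lossless sketch uses this standard formalism, with couplings and loads that are purely illustrative (not those of an actual 8th-degree device):

```python
import numpy as np

# Low-pass coupling-matrix model of a coupled-resonator two-port:
# A(w) = w*I - j*R + M, with M the real symmetric coupling matrix and R
# carrying the input/output loads at the first and last resonators.
N = 4
M = np.zeros((N, N))
for i, m in enumerate([0.9, 0.7, 0.9]):     # ladder couplings (assumed)
    M[i, i + 1] = M[i + 1, i] = m
R = np.zeros((N, N))
R[0, 0] = R[N - 1, N - 1] = 1.0             # port loads (assumed)

def S(w):
    """Reflection and transmission at normalized frequency w."""
    Ainv = np.linalg.inv(w * np.eye(N) - 1j * R + M)
    s11 = 1 + 2j * R[0, 0] * Ainv[0, 0]
    s21 = -2j * np.sqrt(R[0, 0] * R[N - 1, N - 1]) * Ainv[N - 1, 0]
    return s11, s21

ws = np.linspace(-3, 3, 301)
mags = np.array([[abs(s) for s in S(w)] for w in ws])
```

Since the coupling matrix is real (lossless network), the computed scattering matrix is unitary at every frequency: $|S_{11}|^2 + |S_{21}|^2 = 1$ across the sweep, which is a useful consistency check on any such model.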

In short, the identification strategy is as follows:

measuring the scattering matrix of the filter near the central frequency over twice the pass-band (which is 80 MHz in the example).

solving bounded extremal problems for the transmission and the reflection (the modulus of the response being respectively close to 0 and 1 outside the measurement interval, cf. section ). This provides us with a scattering matrix of order roughly 1/4 of the number of data points.

Approximating this scattering matrix by a rational transfer function of fixed degree (8 in this example) via the Endymion or RARL2 software (cf. section ).

A realization of the transfer function is thus obtained, and some additional symmetry constraints are imposed.

Finally one builds a realization of the approximant and looks for a change of variables that eliminates non-physical couplings. This is obtained by using algebraic solvers and continuation algorithms on the group of orthogonal complex matrices (symmetry forces this type of transformation).
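The rational approximation step can be illustrated with a much cruder scheme than Endymion or RARL2: a linearized least-squares fit in the style of Levy's classical method. The sampled transfer function and the degrees below are made up; on noisy band-limited data one would use the team's dedicated software instead.

```python
import numpy as np

def levy_fit(x, h, deg_num, deg_den):
    """Linearized least-squares rational fit (Levy's method):
    p(x_k) - h_k * (q(x_k) - 1) ~= h_k, with q normalized so q(0) = 1."""
    cols = [x**k for k in range(deg_num + 1)]            # numerator coeffs a_k
    cols += [-h * x**k for k in range(1, deg_den + 1)]   # denominator coeffs b_k
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    a = coef[:deg_num + 1]
    b = np.concatenate([[1.0], coef[deg_num + 1:]])
    return a, b

def rat_eval(a, b, x):
    return np.polyval(a[::-1], x) / np.polyval(b[::-1], x)

# Synthetic "measurements" of a known degree-(1,2) transfer function,
# sampled on the imaginary axis (frequency response).
w = np.linspace(0.1, 2.0, 20)
x = 1j * w
h = (1 + 2 * x) / (1 + 0.5 * x + 0.25 * x**2)
a, b = levy_fit(x, h, 1, 2)
xt = 1j * np.linspace(0.15, 1.9, 15)
ht = (1 + 2 * xt) / (1 + 0.5 * xt + 0.25 * xt**2)
assert np.max(np.abs(rat_eval(a, b, xt) - ht)) < 1e-8
```

Unlike this sketch, RARL2 guarantees stability and prescribed McMillan degree by construction.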

The final approximation is of high quality. This can be interpreted as a validation of the linearity hypothesis for the system: the relative

The above considerations are valid for a large class of filters. These developments have also been used for the design of non-symmetric filters, useful for the synthesis of repeating devices.

The team currently investigates the design of output multiplexers (OMUX), where several filters of the previous type are coupled on a common guide. In fact, it has undertaken a rather general analysis of the question “How does an OMUX work?” With the help of numerical simulations and Schur analysis, general principles are being worked out to take into account:

the coupling between each channel and the “Tee” that connects it to the manifold,

the coupling between two consecutive channels.

The model is obtained by chaining the corresponding scattering matrices; it intermingles rational elements and complex exponentials (because of the delays) and hence constitutes an extension of the previous framework. Its study is being conducted under contract with Thales Alenia Space (Toulouse) (see sections ).

This domain is mostly related to the techniques described in section .

Generally speaking, aerospace engineering requires sophisticated control techniques for which optimization is often crucial, due to the extreme functioning conditions. The use of satellites
in telecommunication networks motivates a lot of research in the area of signal and image processing; see for instance section
. Of course, this requires that satellites be adequately controlled, both in
position and orientation (the so-called
*attitude* of the satellite). This problem and similar ones continue to motivate research in control. The team has been working for six years on control problems in orbital transfer with
low-thrust engines, including four years under contract with Thales Alenia Space (formerly Alcatel Space) in Cannes.

Technically, the reason for using these (ionic) low thrust engines, rather than chemical engines that deliver a much higher thrust, is that they require much less “fuel”; this is decisive because the total mass is limited by the capacity of the launchers: less fuel means more payload, while fuel represents today an impressive part of the total mass.

From the control point of view, the low thrust makes the transfer problem delicate. In principle of course, the control law leading to the right orbit in minimum time exists, but it is
numerically involved, ill-conditioned, and the computation is not robust against many unmodelled phenomena. Considerable progress on the approximation of such a law by a feedback has been
carried out using
*ad hoc* Lyapunov functions. These approximate time-optimal trajectories surprisingly well. The easy implementation of such control laws makes them attractive as compared to genuine optimal
control. Here the
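A toy analogue of the Lyapunov-feedback idea, under stated assumptions: a double integrator (not an orbital model) is steered to the target by a saturated feedback derived from a quadratic Lyapunov-type function. The gains and the thrust bound `u_max` are illustrative; the point is that a simple feedback with bounded ("low") thrust reaches the target, trading time optimality for robustness and ease of implementation.

```python
import numpy as np

# Saturated Lyapunov-type feedback on a double integrator (toy analogue of
# a low-thrust transfer).  Gains and the thrust bound u_max are made up.
u_max, dt, steps = 0.2, 0.005, 40000
x, v = 1.0, 0.0                                  # initial state
for _ in range(steps):
    u = -u_max * np.tanh((x + 2.0 * v) / u_max)  # bounded "thrust"
    x += dt * v                                  # Euler integration
    v += dt * u
    assert abs(u) <= u_max                       # thrust constraint holds
assert abs(x) < 1e-2 and abs(v) < 1e-2           # state driven to the target
```

Near the origin the saturation is inactive and the loop behaves like the linear feedback u = -(x + 2v), so convergence is exponential once the transient is over.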

Tralics is a

RARL2 (Réalisation interne et Approximation Rationnelle L2) is a software tool for rational approximation (see section
)
http://

This software takes as input a stable transfer function of a discrete-time system, represented by one of:

its internal realization,

its first

discretized values on the circle.

It computes a local best approximant which is
*stable, of prescribed McMillan degree*, in the

It is akin to the arl2 function of Endymion (see section
) from which it differs mainly in the way systems are represented: a polynomial
representation is used in Endymion, while RARL2 uses realizations. It is implemented in Matlab. This software handles
*multi-variable* systems (with several inputs and several outputs), and uses a parametrization with the following advantages:

it incorporates the stability requirement in a built-in manner,

it allows the use of differential tools,

it is well-conditioned, and computationally efficient.

An iterative search strategy over the degree of the local minima, similar in principle to that of arl2, increases the chance of obtaining the absolute minimum by generating, in a structured manner, several initial conditions.
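The role of state-space realizations can be illustrated with the classical Hankel-SVD (Ho-Kalman) construction, a simpler relative of the parametrized optimization RARL2 actually performs; the degree-2 target system below is made up.

```python
import numpy as np

def ho_kalman(markov, n):
    """Hankel-SVD (Ho-Kalman) realization of McMillan degree n from the
    Markov parameters h_k = C A^k B, k = 0, 1, ... of a discrete SISO system."""
    m = (len(markov) - 1) // 2
    H  = np.array([[markov[i + j]     for j in range(m)] for i in range(m)])
    H2 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H)
    sq = np.sqrt(s[:n])
    A = (U[:, :n].T @ H2 @ Vt[:n].T) / np.outer(sq, sq)   # shift-invariance
    B = sq * Vt[:n, 0]                                    # first column of R
    C = U[0, :n] * sq                                     # first row of O
    return A, B, C

# Target system of degree 2 (poles 0.5 and -0.3); recover a realization
# from 11 Markov parameters and check that it reproduces them.
h = [0.5**k - (-0.3)**k for k in range(11)]
A, B, C = ho_kalman(h, 2)
for k in range(9):
    assert abs(C @ np.linalg.matrix_power(A, k) @ B - h[k]) < 1e-8
assert np.all(np.abs(np.linalg.eigvals(A)) < 1)   # stability preserved
```

RARL2 goes further: it optimizes over a parametrization of stable realizations of fixed degree rather than truncating a Hankel matrix.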

RARL2 performs the rational approximation step in our applications to filter identification (section ) as well as source or crack recovery (section ). It was released to the universities of Delft, Maastricht, Cork and Brussels. The parametrization embodied in RARL2 was recently used for a multi-objective control synthesis problem provided by ESTEC-ESA, The Netherlands. An extension of the software to approximants with triple poles is now available. It provides satisfactory results in the source recovery problem and is used by FindSources3D (see ).

The identification of filters modeled by an electrical circuit that was developed by the team (see section
) led us to compute the electrical parameters of the underlying filter. This
means finding a particular realization

PRESTO-HF: a toolbox dedicated to low-pass parameter identification for microwave filters, http://www-sop.inria.fr/apics/personnel/Fabien.Seyfert/Presto_web_page/presto_pres.html. In order to allow the industrial transfer of our methods, a Matlab-based toolbox has been developed, dedicated to the identification of low-pass microwave filter parameters. It allows one to run the following algorithmic steps, either individually or in a single shot:

determination of delay components caused by the access devices (automatic reference plane adjustment),

automatic determination of an analytic completion, bounded in modulus for each channel,

rational approximation of fixed McMillan degree,

determination of a constrained realization.

For the matrix-valued rational approximation step, Presto-HF relies either on hyperion (see ) (Unix or Linux only) or on RARL2 (platform independent), two rational approximation engines developed within the team. Constrained realizations are computed by the RGC software. As a toolbox, Presto-HF has a modular structure, which allows one, for example, to include some of its building blocks in already existing software.

The delay compensation algorithm is based on the following strong assumption: far from the passband, one can reasonably expect a good approximation of the rational components of
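The reference-plane (delay) adjustment step can be illustrated on synthetic data: when the rational part of a channel is constant (here, exactly so by construction), the access delay is the negative slope of the unwrapped phase. All values below are made up.

```python
import numpy as np

# Sketch of automatic reference-plane adjustment: out of band, the rational
# part of the channel is nearly constant, so the access delay tau shows up
# as a linear trend in the phase.  Synthetic data, constant rational part.
tau = 0.5
w = np.linspace(10.0, 20.0, 100)                 # out-of-band frequencies
s = 0.9 * np.exp(0.3j) * np.exp(-1j * w * tau)   # delay times a constant
slope = np.polyfit(w, np.unwrap(np.angle(s)), 1)[0]
tau_est = -slope
assert abs(tau_est - tau) < 1e-6
# Compensation: strip the estimated delay before the rational fit.
s_comp = s * np.exp(1j * w * tau_est)
assert np.max(np.abs(np.angle(s_comp / s_comp[0]))) < 1e-6
```

In practice the rational part only approaches a constant asymptotically, so the fit is performed on frequency bands chosen far from the passband.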

This toolbox is currently used by Thales Alenia Space in Toulouse, and a license agreement has recently been negotiated with Thales Airborne Systems. XLim (University of Limoges) is a heavy user of Presto-HF within the academic filtering community, and free license agreements are currently being considered with the microwave department of the University of Erlangen (Germany) and the Royal Military College (Kingston, Canada).

The core of the
*Endymion* system (a follow-up to hyperion) is formed by a library that handles numbers (short integers, arbitrary size rational numbers, floating point numbers, quadruple and octuple
precision floating point numbers, arbitrary precision real numbers, complex numbers), polynomials, matrices, etc. Specific data structures for the rational approximation algorithm
*arl2* and the bounded extremal problem
*bep* are also available. One can mention for instance splines, Fourier series, Schur matrices, etc. These data structures are manipulated by dedicated algorithms (matrix inversion, roots
of polynomials, a gradient-based algorithm for minimizing

The development of
*Endymion*,
http://www-sop.inria.fr/apics/endymion/index.html, has come to an end. The software is still maintained and its sources are available on the ftp server.

Dedale-HF is a software tool dedicated to solving the coupling-matrix synthesis problem exhaustively, and in reasonable time, for users in the filtering community. For a given coupling topology, the coupling-matrix synthesis problem (C.M. problem for short) consists in finding all possible electromagnetic coupling values between resonators that yield a realization of given filter characteristics (see section ). Solving the latter problem is crucial during the design step of a filter, in order to derive its physical dimensions, as well as during the tuning process, where coupling values need to be extracted from frequency measurements (see Figure ).

Dedale-HF consists of two parts: a database of coupling topologies and a dedicated predictor-corrector code. Roughly speaking, each reference file of the database contains, for a given
coupling topology, the complete solution to the C.M. problem associated to particular filtering characteristics. The latter is then used as a starting point for a predictor-corrector
integration method that computes the solution to the C.M. problem of the user,
*i.e.*, the one corresponding to user-specified filter characteristics. The reference files are computed off line using Groebner basis techniques or numerical techniques based on the
exploration of a monodromy group. The use of such a continuation technique combined with an efficient implementation of the integrator produces a drastic reduction, by a factor of 20, of the
computational time.
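The predictor-corrector idea can be sketched on a scalar equation standing in for the coupling-matrix system: a solved reference problem f(x, 0) = 0 is deformed into the "user's" problem f(x, 1) = 0 by Euler prediction along the path followed by Newton correction at each step. The function f and the step size are illustrative.

```python
# Scalar sketch of a predictor-corrector continuation, in the spirit of
# Dedale-HF's homotopy from a reference characteristic (t = 0) to the
# user's characteristic (t = 1).  f is an illustrative stand-in.
def f(x, t):  return x**3 + x - 2.0 * t
def fx(x, t): return 3.0 * x**2 + 1.0   # df/dx
def ft(x, t): return -2.0               # df/dt

x, t, dt = 0.0, 0.0, 0.1           # known solution of the reference (t = 0)
while t < 1.0 - 1e-12:
    x += dt * (-ft(x, t) / fx(x, t))   # Euler predictor along the path
    t += dt
    for _ in range(5):                 # Newton corrector at fixed t
        x -= f(x, t) / fx(x, t)
assert abs(x - 1.0) < 1e-10            # x^3 + x = 2 has the root x = 1
assert abs(f(x, 1.0)) < 1e-10
```

In Dedale-HF the same loop runs on a multivariate polynomial system, with the reference files providing the starting points at t = 0.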

Access to the database and to the integrator code is provided via the web at http://www-sop.inria.fr/apics/Dedale/WebPages. The software is free of charge for academic research purposes; registration is however needed in order to access full functionality. Up to now, 90 users have registered worldwide (mainly in Europe, the USA, Canada and China) and 4000 reference files have been downloaded.

As mentioned in , an extension of this software that handles symmetrical networks is under construction.

FindSources3D is a software tool dedicated to source recovery for the inverse EEG problem, in 3-layer spherical settings, from pointwise data (see
http://

Sollya is an interactive tool where the developers of mathematical floating-point libraries (libm) can experiment before actually developing code. The environment is safe with respect to
floating-point errors,
*i.e.*, the user precisely knows when rounding errors or approximation errors happen, and rigorous bounds are always provided for these errors.

Amongst other features, it offers a fast Remez algorithm for computing polynomial approximations of real functions and also an algorithm for finding good polynomial approximants with floating-point coefficients to any real function. It also provides algorithms for the certification of numerical codes, such as Taylor Models, interval arithmetic or certified supremum norms.
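As a lightweight, non-rigorous stand-in for Sollya's Remez algorithm, interpolation at Chebyshev nodes produces a near-minimax polynomial whose uniform error is within a small factor of the true minimax error. The degree and target function below are illustrative; Sollya additionally certifies the error bounds, which this sketch does not.

```python
import numpy as np

def cheb_interp(f, deg, a=-1.0, b=1.0):
    """Interpolate f at the deg+1 Chebyshev nodes of [a, b]; the result is a
    near-minimax polynomial (coefficients in np.polyval order)."""
    k = np.arange(deg + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))   # nodes on [-1, 1]
    x = 0.5 * (a + b) + 0.5 * (b - a) * nodes               # mapped to [a, b]
    return np.polyfit(x, f(x), deg)

p = cheb_interp(np.exp, 8)
grid = np.linspace(-1, 1, 2001)
err = np.max(np.abs(np.exp(grid) - np.polyval(p, grid)))
assert err < 1e-6        # degree-8 near-minimax error for exp on [-1, 1]
```

A true Remez exchange would iterate on the equioscillation points of the error; Sollya also handles floating-point coefficient constraints, which are outside the scope of this sketch.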

It is available as a free software under the CeCILL-C license at
http://

The major use of Tralics remains the production of the RaWeb (Scientific Annex to the Annual Activity Report of Inria). The software is described in
,
,
,
. Other applications of Tralics consist in putting scientific papers
on the Web; for instance Cedram (
http://
*Journal de théorie des nombres de Bordeaux*, uses Tralics for the abstracts and plans to translate full papers. Tralics is also used by Zentralblatt for converting comments, reviews and
abstracts. The software was presented at the DML2010 conference
. Tralics has been used for the HTML+MathML documentation of the open
TURNS software
http://

Solving overdetermined Cauchy problems for the Laplace equation on a spherical layer (in 3-D) in order to process incomplete experimental data is a necessary ingredient of the team's
approach to inverse source problems, in particular for applications to EEG since the latter involves propagating the initial conditions from the boundary to the center of the domain where the
singularities (
*i.e.*, the sources) are sought after. Here, the domain is typically made of several homogeneous layers of different conductivities.

Such problems offer an opportunity to state and solve extremal problems for harmonic fields for which an analog of the Toeplitz operator approach to bounded extremal problems
has been obtained in
. Still, a best approximation on the subset of a general vector
field generated by a harmonic gradient under a

Issues of robust interpolation on the sphere from incomplete pointwise data are also under study in order to improve numerical accuracy of our reconstruction schemes. Spherical harmonics, Slepian bases and related special functions are of special interest (thesis of A.-M. Nicu), while splines, spherical wavelets, cubature techniques should be considered as well.

It turns out that Slepian functions are eigenfunctions of truncated Toeplitz operators in the complex plane (the framework of 2-D problems). These properties will be used in order to
quantify the robustness properties of our resolution schemes for

The analogous problem in

The above issue is also interesting in

The problem of sources recovery can be handled in 3-D balls by using best rational approximation on 2-D cross sections (disks) from traces of the boundary data on the corresponding circles (see section ).

In 3-D, functional or clinically active regions in the cortex are often represented by pointwise sources that have to be localized from measurements, on the scalp, of a potential satisfying a Laplace equation (EEG, electroencephalography). In the work it was shown how to proceed via best rational approximation on a sequence of 2-D disks cut along the inner sphere, for the case where there are at most 2 sources. Long-haul research on the behaviour of poles of best rational approximants of fixed degree to functions with branch points was completed this year , showing that the technique carries over to finitely many sources.
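The pole behaviour near branch points can be observed on a textbook example: diagonal Padé approximants (a special case of rational approximation, not the team's scheme) to 1/sqrt(1-z), which has a branch point at z = 1, place all their poles on the cut [1, +inf). This is the mechanism exploited when branched singularities signal source locations on 2-D slices.

```python
import math
import numpy as np

# Diagonal Padé approximant [n/n] to f(z) = 1/sqrt(1 - z): its poles line
# up on the branch cut [1, +inf) of f.
n = 5
c = [math.comb(2 * k, k) / 4.0**k for k in range(2 * n + 1)]  # Taylor coeffs

# Denominator q(z) = 1 + b_1 z + ... + b_n z^n from the Padé linear system:
# the coefficients of z^{n+1}, ..., z^{2n} in f*q must vanish.
A = np.array([[c[n + 1 + i - j] for j in range(1, n + 1)] for i in range(n)])
rhs = -np.array([c[n + 1 + i] for i in range(n)])
b = np.linalg.solve(A, rhs)
poles = np.roots(np.concatenate([b[::-1], [1.0]]))  # roots of q
assert np.max(np.abs(poles.imag)) < 1e-6   # poles are real ...
assert np.min(poles.real) > 1.0            # ... and sit on the cut [1, +inf)
```

For several branch points the poles cluster instead on a system of cuts of minimal capacity, which is what the asymptotic analysis cited above makes precise.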

In this connection, a dedicated software “FindSources3D” (see section ) has been developed, in collaboration with the team Athena.

Further, it appears that in the rational approximation step of these schemes,
*multiple* poles behave nicely with respect to the branched singularities (see figure
). This is due to the very basic physical assumptions on the model (for EEG
data, one should consider
*triple* poles). Though numerically observed, there is so far no mathematical justification for the strong accumulation properties of these multiple poles, which remains an intriguing
observation. This is the topic of
.

Also, magnetic data from MEG (magneto-encephalography) will soon become available, which should enhance the accuracy of source recovery algorithms.

This approach also appears to be interesting for geophysical issues, namely discretizing the gravitational potential by means of pointwise masses. This is one topic of A.-M. Nicu's PhD thesis.

Magnetic source localization from observations of the field away from the support of the magnetization is an issue under investigation in a joint effort with the Mathematics department of Vanderbilt University and the Earth Sciences department at MIT. The goal is to recover the magnetic properties of rock samples (meteorites) from fine measurements extremely close to the sample, which can nowadays be obtained using SQUIDs (superconducting quantum interference devices).

The magnetization operator is the Riesz potential of the divergence of the magnetization. When the latter has bounded variation, we already described the kernel of this operator (the so-called silent magnetizations, or silent source distributions) in terms of measures whose balayage on the boundary of the sample vanishes. This, however, is not very effective computationally.

The case of a thin slab (the magnetization is then modelled as a vector field on a portion of the plane) has proved more amenable. We have shown that silent sources from above or
below can be characterized via the Hardy-Hodge decomposition mentioned in section
. The smoothness assumptions have been weakened considerably, to accommodate
magnetizations that may be any distribution with compact support, more generally any finite sum of partial derivatives of any order of

Meanwhile, the severe ill-posedness of the reconstruction challenges discrete Fourier methods, one of the main problems being the truncation of the observations outside the range of the SQUID measurements. A next step will be to develop the extrapolation techniques initiated by the project team, using bounded extremal problems, in an attempt to overcome this issue.

In collaboration with the CMI-LATP (University Marseille I) and in the framework of the ANR AHPI, the team considers 2-D diffusion processes with variable conductivity. In particular its
complexified version, the so-called
*conjugate* or
*real Beltrami equation*, was investigated. In the case of a smooth domain and for Lipschitz conductivity, we analyzed the Dirichlet problem for solutions in Sobolev and then in Hardy
classes
.

Their traces merely lie in

This year we generalized the construction to finitely connected Dini-smooth domains and

In the transversal section of a tokamak (which is a disk if the vessel is idealized into a torus), the so-called poloidal flux is subject to some conductivity equation outside the plasma volume for some simple explicit smooth conductivity function, while the boundary of the plasma (in the Tore Supra tokamak) is a level line of this flux . Related magnetic measurements are available on the chamber, which furnish incomplete boundary data from which one wants to recover the inner (plasma) boundary. This free boundary problem (of Bernoulli type) can be handled through the solutions of a family of bounded extremal problems in generalized Hardy classes of solutions to real Beltrami equations, in the annular framework. Such approximation problems also allow us to approach a somewhat dual extrapolation issue, raised by colleagues from the CEA for the purpose of numerical simulation. It consists in recovering magnetic quantities on the outer boundary (the chamber) from an initial guess of what the inner boundary (plasma) is.

In the particular case at hand, the conductivity is

On the half-plane, the conductivity

This year we explored a new application field for our rational approximation methods: fitting a probability density function to a large set of financial data. The class of density functions that we considered is that of nonnegative EPT (Exponential-Polynomial-Trigonometric) functions, which seems to provide a very relevant framework for probabilistic calculations. Parseval's theorem implies that approximating the rational transform is equivalent to approximating the density itself. During his visit, Conor Sexton (a PhD student of Bernard Hanzon) adapted and ran the RARL2 software on this problem. The results were encouraging, although the major problem of imposing positivity is still under study.

The theory of orthogonal polynomials on the unit circle is a most classical piece of analysis which is still the object of intensive studies. The asymptotic behaviour of orthogonal
polynomials is of special interest for many issues pertaining to approximation theory and to spectral theory of differential operators. Its connection with prediction theory of stationary
stochastic processes has long been known
. Namely, the

As compared to orthogonal polynomials, orthogonal rational functions have not been much considered up to now. They were apparently introduced by Dzrbasjan, but the first systematic exposition seems to be the monograph by Bultheel et al., where the emphasis is more on the algebraic side of the theory. In fact, the asymptotic analysis of orthogonal rational functions is still in its infancy.

We recently developed an analog of the Kolmogorov-Krein-Szegö theorem
for orthogonal rational functions, which is the first of its kind in that
it allows the poles of these functions to approach the unit circle, generalizing previously known results for compactly supported singular sets. Dwelling on this asymptotic analysis of
orthogonal rational functions, we developed a prediction theory for certain, possibly
*nonstationary* stochastic processes that we call
*Blaschke varying* processes. These are characterized by a spectral calculus where the time shift
corresponds to multiplication by an elementary Blaschke product (that may depend on the time instant considered). This class of processes contains the familiar Gaussian stationary processes,
but it contains many more that exhibit a much more varied behaviour. For instance, the process may be asymptotically deterministic along certain subsequences and nondeterministic along others.
The optimal predictor is constructed from the spectral measure
*via* orthogonal rational functions, and its asymptotic behaviour is characterized by the above-mentioned generalization of the Kolmogorov-Krein-Szegö theorem. In the same vein, we also
developed prediction theory for another class of nonstationary processes, the so-called
*Cauchy* processes, which may be characterized as stationary processes feeding in turn a sequence of varying filters of degree 1. Their covariance matrices can be characterized via
Nevanlinna-Pick interpolation. The issue of characterizing covariance sequences of Blaschke processes is still open. Their identification raises the problem of constructing optimal Schur
rational approximants to a given Schur function.
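The spectral calculus underlying this prediction theory rests on the classical Szegő recurrence, which builds orthogonal polynomials on the unit circle from Schur (Verblunsky) parameters, the same parameters that encode one-step prediction errors of a stationary process. The parameter values below are made up; the assertion checks a classical theorem: all zeros lie in the open unit disk.

```python
import numpy as np

# Szegő recurrence: Phi_{k+1}(z) = z*Phi_k(z) - conj(alpha_k)*Phi_k^*(z),
# with Phi_k^* the reversed (conjugate-reciprocal) polynomial.
# Schur/Verblunsky parameters (|alpha_k| < 1, values made up):
alphas = [0.5, -0.3, 0.2, 0.1, -0.4, 0.25]
phi = np.array([1.0 + 0j])   # Phi_0; coefficients by increasing power
for a in alphas:
    z_phi = np.concatenate([[0.0], phi])                    # z * Phi_k
    phi_star = np.concatenate([np.conj(phi[::-1]), [0.0]])  # Phi_k^*, padded
    phi = z_phi - np.conj(a) * phi_star                     # Szegő recurrence
zeros = np.roots(phi[::-1])
assert np.max(np.abs(zeros)) < 1.0   # zeros of OPUC lie in the open unit disk
```

In the Blaschke-varying setting, multiplication by z is replaced by multiplication by an elementary Blaschke product, turning the polynomials above into orthogonal rational functions.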

In the recent past we demonstrated, under mild smoothness assumptions, the possibility of convergent rational interpolation to Cauchy integrals of complex measures on analytic Jordan arcs, and
their strong asymptotics
,
. Subsequently, we started investigating the case of Cauchy integrals
on so-called symmetric contours for the logarithmic potential. These correspond to functions with more than two branched singularities, like those arising in the slicing method for source
recovery in a sphere when there is more than one source (see section
). Recently we obtained weak asymptotics in this case, and dwelled on them to
elucidate the asymptotics of the poles of best

We presently study strong asymptotics, limiting ourselves at present to a threefold geometry, and to the case of Padé approximants (interpolation at a single point with high order). The result is that uniform convergence can only take place if the weights of the branches of the threefold with respect to the equilibrium distribution are rational. If they are rationally dependent, a spurious pole clusters to certain curves within the domain of analyticity, and if they are rationally independent, exactly one pole exhibits chaotic behaviour in the complex plane. Moreover, we have shown that the chaotic situation is generic, in a measure theoretic sense, with respect to the location of branchpoints. This generalizes and sharpens results of Suetin for Cauchy integrals on disconnected pieces of a smooth symmetric contour. It is the first time that a branched contour is analyzed with respect to general densities. A paper is being written to report these results.

We pursued our work on circuit realizations of filter responses with mixed-type (inductive or capacitive) coupling elements and constrained topologies . We now focus on the use of resonating couplings to design asymmetric filter characteristics without cross-couplings, in order to simplify the practical implementation. In parallel, efforts are being devoted to improving the synthesis method for higher-order filters, with a view to applications to diplexers with a high number of symmetrically located transmission zeros.

Our work on the synthesis problem for diplexers continued this year. Based on the polynomial structure highlighted last year , a synthesis algorithm was devised that allowed the effective synthesis of multiplexer characteristics (see figure ). As opposed to other synthesis algorithms, the latter only involves polynomial computations on the sub-filters of the overall multiplexer, allowing its application to synthesis problems for devices with a large number of ports. It is based on the recursive solution of an extended Nevanlinna-Pick problem, first introduced in . The convergence of this fixed-point procedure is under study, as well as its extension to the synthesis of general multiplexers.

While we presented our work on the de-embedding of diplexers at , we extended the latter to general multiplexers. The multiplexer
de-embedding problem we study is the following. Let

The de-embedding question is the following: given

When stating the de-embedding problem in terms of chain rather than scattering matrices, the latter becomes linear,

For a generic junction the de-embedding problem has a unique solution for

Degeneracy depends on the junction's response, and can be explained by the occurrence of filtering responses that hide behind the junction. These are responses

In the light of these theoretical results the tractability of the practical problem is currently under investigation. For example, it appears that in practice the

A reference paper on the construction and properties of an “average control system” has been submitted ; it is based on Alex Bombrun's doctoral work (defended in 2007). It connects the convergence properties of solutions of highly oscillating control systems to those of an average control system as the frequency of oscillation becomes large. Likewise, it details (on a time interval that goes to infinity) the properties of solutions of a conservative system with small controls in relation to those of an average system as the magnitude of the control goes to zero. It also gives many properties of this average control system, which has “more controls” than the original system and yields, when the number of new controls is maximal, a Finsler metric on the state manifold. The latter is however difficult to compute explicitly and is never twice differentiable.

More exploration on this average system and the corresponding Finsler metric is planned.

A new collaboration with CNES and the University of Bilbao began this year. The goal is to help with the development of amplifiers, in particular to detect instability at an early stage of the design.

Currently, electrical engineers from the University of Bilbao, under contract with CNES (the French space agency), use heuristics to diagnose instability before the circuit is physically implemented. We intend to set up a rigorously founded algorithm instead, based on properties of the transfer functions of such amplifiers, which belong to particular classes of analytic functions.

We completed the first stage of this collaboration, in that we now have a formal definition of stability within these classes: a stable function is one which, when connected in parallel with
a large resistor, yields a
*i.e.*, resistors, inductors, capacitors, LEDs, and transmission lines): they are the rational functions in the variable and the exponentials thereof.

This work has been performed in collaboration with members of the teams Arénaire in Lyon and Caramel in Nancy. The overall and long-term goal is to enhance the quality of numerical computations. Several aspects are studied:

A first topic is the development of software code for the multiprecision evaluation of elementary and special functions. Developing such codes is a long and error-prone
task. It is hence relevant to automatically generate such codes whenever possible. A first step has been to design an algorithm that automatically generates multiprecision code for the
evaluation of constant expressions with an a priori guaranteed error
. This is usually necessary for the evaluation,
*e.g.*, of the first terms of a Taylor series.
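The idea of evaluation with an a priori guaranteed error can be caricatured in a few lines with Python's `decimal` module, in the spirit of Ziv's onion-peeling strategy: recompute at growing working precision until the result, rounded to the target precision, stabilizes. This is a toy, not the generated code the paragraph refers to.

```python
from decimal import Decimal, getcontext

def eval_const(expr, digits):
    """Ziv-style evaluation of a constant expression: recompute at growing
    working precision until the value rounded to `digits` digits stabilizes."""
    prec, last = digits + 10, None
    while True:
        getcontext().prec = prec
        val = expr()                 # evaluate at the working precision
        getcontext().prec = digits
        rounded = +val               # unary + rounds to the target precision
        if rounded == last:
            return rounded
        last, prec = rounded, 2 * prec

e = eval_const(lambda: Decimal(1).exp(), 30)
assert str(e).startswith("2.71828182845904523536")
```

The stabilization test is a heuristic stand-in for the rigorous error bounds that the generated code carries.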

Another topic consists in the design of algorithms that allow developers of double precision mathematical libraries (so-called libm) to certify their library. In the
process of developing a libm, one usually replaces the function

Finally, a more general endeavor is to develop a tool that helps developers of libms in their task. This is performed by the software Sollya, which was originally developed in the Arénaire team, in collaboration with C. Lauter and M. Joldeş. A new release was issued this year .

Contract (reference Inria: 2470, CNES: 60465/00) involving CNES, XLim and Inria, whose objective is to work out a software package for identification and design of microwave devices. The work at Inria concerns the design of multiband filters with constraints on the group delay. The problem is to control the logarithmic derivative of the modulus of a rational function, while meeting specifications on its modulus.

Contract (reference CNES: RS10/TG-0001-019) involving CNES, University of Bilbao (UPV/EHU) and Inria whose objective is to set up a methodology for testing the stability of amplifying devices. The work at Inria concerns the design of frequency optimization techniques to identify the linearized response and analyze the linear periodic components.

Apics collaborates with the CEA-IRFM (Cadarache), through a grant with the
**Région PACA**, for the thesis of Y. Fischer.

Apics is part of the regional working group SBPI (Signal, Noise, Inverse Problems), with teams from Observatoire de la Côte d'Azur and Géoazur (CNRS) http://www-sop.inria.fr/apics/sbpi.

AHPI (Analyse Harmonique et Problèmes Inverses), is a “Projet blanc” in Mathematics involving Inria-Sophia (L. Baratchart coordinator), the Université de Provence (LATP, Aix-Marseille), the Université Bordeaux I (LATN), the Université d'Orléans (MAPMO), Inria-Bordeaux and the Université de Pau (Magique 3D). It aims at developing Harmonic Analysis techniques to approach inverse problems in seismology, electro-encephalography, tomography and nondestructive control.

Filipix (FILtering for Innovative Payload with Improved fleXibility) is a “Projet Thématique en Télécommunications”, involving Inria-Sophia (Apics), XLim, Thales Alenia Space (Centre de Toulouse, coordinator).

APICS is part of the European Research Network on System Identification (ERNSI).

Subject: System identification concerns the construction, estimation and validation of mathematical models of dynamical physical or engineering phenomena from experimental data.

**NSF CMG** collaborative research grant DMS/0934630, “Imaging magnetization distributions in geological samples”, with Vanderbilt University and the MIT (USA).

**Cyprus NF grant**“Orthogonal polynomials in the complex plane: distribution of zeros, strong asymptotics and shape reconstruction.”

Yannick Privat (CNRS, ENS Cachan, antenne Bretagne).

Tao Qian (University of Macau)

Nikos Stylianopoulos (University of Cyprus).

Maxim Yattselev (University of Oregon at Eugene).

Bernard Hanzon and Conor Sexton (University of Cork).

Ralf Peeters (University of Maastricht).

Smain Amari (Royal Military College of Canada).

Matteo Oldoni (Polytech Milan, Italy)

Maher Moakher and Moncef Mahjoub (LAMSIN-ENIT, Tunis).

Alex L. Castro (PUC, Rio de Janeiro)

Olivier Cots (ENSIEEHT, Toulouse)

Natalya Shcherbakova (ENSIEEHT, Toulouse)

Edward Saff (Vanderbilt University)

L. Baratchart is Inria's representative at the « conseil scientifique » of the Univ. Provence (Aix-Marseille). He was a member of the “Comité de sélection” of the Univ. of Bordeaux I (section 25). He is a member of the program committee of SYSID 2012 and MTNS 2012.

S. Chevillard and J. Grimm are representatives at the « comité de centre » (Research Center INRIA-Sophia).

S. Chevillard is representative of the « comité de centre » at the « comité des projets » (Research Center INRIA-Sophia).

J. Leblond was a member of the « Commission d'Évaluation » (CE) of INRIA, until June, and since then, of the “Conseil Scientifique” (CS). Within the Research Center, she is a member of the « Commission d'Animation Scientifique » (CAS-MASTIC) and holds (since March) a supporting task for the researchers.

M. Olivi is a member of the CSD (Comité de Suivi Doctoral) of the Research Center. She was responsible for scientific mediation until July.

J.-B. Pomet is the president of the « Commission de Suivi Doctoral » of INRIA Sophia Antipolis. He was a representative at the « comité technique paritaire » (CTP) of INRIA until September.

F. Seyfert is a member of the CUMIR.

L. Baratchart is a member of the editorial boards of
*Computational Methods and Function Theory* and
*Complex Analysis and Operator Theory*.

He was an invited speaker at the Conference on Blaschke products and their Applications (Fields Institute, Toronto), at the Conference “Computational Complex Analysis and Approximation Theory” (Protaras, Cyprus) and at the conference “Recent Trends in Analysis” (Bordeaux). He was a speaker at the ERNSI Workshop on System Identification (Nice) and a colloquium speaker at the University of Wuhan (China).

He was co-organizer (with A. Borichev and N. Nikolskii) of the summer school
*Bellman Functions in Harmonic Analysis*, held at INRIA Sophia Antipolis - Méditerranée.

B. Bonnard gave a plenary talk at the “Workshop on Weak KAM theory” in Bordeaux. He was invited to give a talk at LAGEP (Lyon) on optimal contrast in NMR.

Y. Fischer was an invited speaker at the conference WIS&E (Mexico, Nov.). He gave talks at the SMAI congress (Guidel, May), at the seminars of the teams MAC (LAAS, Toulouse, Jan.) and LMAP (Univ. Pau et Pays de l'Adour, Mar.), and at the working groups ITER (lab. JLL, Univ. Paris 6) and Plasma (JAD lab., Univ. Nice Sophia-Antipolis).

Y. Fischer and M. Olivi gave a talk at the Journées d'Identification et de Modélisation Expérimentale (JIME, Douai, Apr.).

J. Leblond was an invited speaker at the Journées d'Analyse Mathématique et Applications (JAMA 2011, Hammamet, March) and at the Second International Conference on the Mathematical Sciences (Buea, Cameroon, May). She gave a communication at the seminar of the team Analyse et Géométrie, LATP-CMI, Univ. Aix-Marseille I (Feb.), and at the meeting of the ANR project AHPI, Bordeaux (Nov.).

A.-M. Nicu gave a communication at the Journées d'Analyse Mathématique et Applications (JAMA 2011, Hammamet, March).

M. Olivi organized the ERNSI workshop in Nice (Sept.).

J.-B. Pomet gave a talk at the conference “New Trends in Astrodynamics and Applications”, Courant Institute, New York.

E. Pozzi gave a communication at the meeting of the GDR AFHA (Clermont-Ferrand, Oct.).

S. Chevillard organized the “Computer arithmetic” session at the “Rencontres arithmétiques de l'informatique mathématique” (RAIM'11, Perpignan, France).

S. Chevillard gave a talk at the conference “ARITH 20”, Tübingen, Germany.

F. Seyfert gave two talks at the “European Microwave Conference (EuMC 2011)” in Manchester, on filter synthesis and on the de-embedding of multiplexers. He initiated a European working group, hosted by ESA, on compact multiplexer design; its first meeting took place in Noordwijkerhout at the end of November.

Licence (or equivalent): Martine Olivi and Sylvain Chevillard (with Maureen Clerc), Mathématiques pour l'ingénieur (Fourier analysis and integration), section Mathématiques Appliquées et Modélisation, 3rd year, École Polytechnique Univ. Nice-Sophia Antipolis (EPU), France. 13 hours of lectures and 26 hours of practical sessions (divided into two groups of students, hence 52 hours of teaching in total).

Doctorat (or equivalent): Juliette Leblond, Real and complex analysis with applications to other sciences (10h), CIMPA-UNESCO-MICINN Research School, Univ. Buea, Cameroon.

PhD & HdR:

PhD: Yannick Fischer, « Approximation des classes de fonctions analytiques généralisées et résolution de problèmes inverses pour les tokamaks », Univ. Nice Sophia-Antipolis, Nov. 2011; advisors: J. Leblond and L. Baratchart.

PhD in progress: Ana-Maria Nicu, « Approximation et représentation des fonctions sur la sphère. Applications aux problèmes inverses de la géodésie et l'imagerie médicale », Univ. Nice Sophia-Antipolis, since Nov. 2008; advisor: J. Leblond.

PhD in progress: Slah Chaabi, « Problèmes extrémaux pour l'équation de Beltrami réelle 2-dimensionnelle et application à la détermination de frontières libres »; advisor: L. Baratchart (co-advised with Univ. Aix-Marseille I).

PhD in progress: Ahed Hindawi, « Transport optimal en contrôle », Univ. Nice-Sophia Antipolis, started October 2008; co-advisors: J.-B. Pomet and L. Rifford (PR, Univ. Nice Sophia-Antipolis).

B. Bonnard was President of the PhD defense committees of B. Daoud (Université de Bourgogne, Dijon) and K. Zhang (Univ. C. Bernard, Lyon 1).

Juliette Leblond was a member of the PhD defense committees of P. Eyimi (Univ. Poitiers), D. Duc Thang (UTC), and of the HdR defense committee of F. Delvare (Univ. Orléans-Bourges).

Jean-Baptiste Pomet was a member of the PhD defense committee of B. Daoud (Université de Bourgogne, Dijon).

Martine Olivi was a member of the PhD defense committee of Christian Fischer (École des Mines, Sophia-Antipolis).

Fabien Seyfert was a member of the PhD defense committees of Monica Mendoza (Technical University of Cartagena, Spain) and Hussein Ezzeddine (XLim, Limoges, France).