The team develops constructive, function-theoretic approaches to inverse problems arising in modeling and design, in particular for electromagnetic systems, as well as in the analysis of certain classes of signals.

Data typically consist of measurements or desired behaviors. The general thread is to approximate them by families of solutions to the equations governing the underlying system. This leads us to consider various interpolation and approximation problems in classes of rational and meromorphic functions, harmonic gradients, or solutions to more general elliptic partial differential equations (PDE), in connection with inverse potential problems. A recurring difficulty is to control the singularities of the approximants.

The mathematical tools pertain to complex and harmonic analysis, approximation theory, potential theory, system theory, differential topology, optimization and computer algebra. Targeted applications include:

identification and synthesis of analog microwave devices (filters, amplifiers),

non-destructive control from field measurements in medical engineering (source recovery in magneto/electro-encephalography), paleomagnetism (determining the magnetization of rock samples), and nuclear engineering (plasma shaping in tokamaks).

In each case, the endeavor is to develop algorithms resulting in dedicated software.

Within the extensive field of inverse problems, much of the research by Apics
deals with reconstructing solutions of classical elliptic PDEs from their
boundary behavior. Perhaps the simplest example lies with
harmonic identification of a stable linear dynamical system:
the transfer function can be evaluated pointwise on the imaginary axis from the response to periodic inputs, and then recovered in the right half-plane, *e.g.* via the Cauchy formula.
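For illustration only, the sketch below (with a hypothetical first-order transfer function, not one of the team's models) recovers a value of a stable transfer function in the right half-plane from samples of its frequency response, via a discretized Cauchy formula:

```python
import numpy as np

# Hypothetical stable transfer function, holomorphic in Re(s) > 0
H = lambda s: 1.0 / (s + 1.0)

# Samples of the frequency response on the imaginary axis
w = np.linspace(-2000.0, 2000.0, 400001)
data = H(1j * w)

# Cauchy formula for the right half-plane:
#   H(s0) = (1/2pi) * integral over w of H(iw) / (s0 - iw),  Re(s0) > 0
s0 = 1.0
dw = w[1] - w[0]
val = np.sum(data / (s0 - 1j * w)) * dw / (2 * np.pi)

print(abs(val - H(s0)))  # small truncation/discretization error
```

In practice only finitely many noisy band-limited samples are available, which is precisely why the extremal-problem machinery described below is needed.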

Practice is not nearly as simple, for measurements are finite in number and corrupted by noise; one proceeds in two steps, first extrapolating the data to a stable model (step 1), then approximating this model by a rational function (step 2), *i.e.* to locate the singularities.

Step 1 relates to extremal
problems and analytic operator theory, see Section .
Step 2 involves optimization, and some Schur analysis
to parametrize transfer matrices of given McMillan degree
when dealing with systems having several inputs and outputs,
see Section .
It also makes contact with the topology of rational functions, in particular
to count
critical points and to derive bounds, see Section . Step 2 raises
further issues in approximation theory regarding the rate of convergence and
the extent to which singularities of the
approximant (*i.e.* its poles) tend to singularities of the
approximated function; this is where logarithmic potential theory
becomes instrumental, see Section .

Applying a realization procedure to the result of step 2 yields an identification procedure from incomplete frequency data which was first demonstrated in to tune resonant microwave filters. Harmonic identification of nonlinear systems around a stable equilibrium can also be envisaged by combining the previous steps with exact linearization techniques from .

A similar path can be taken to approach design problems in the frequency domain, replacing the measured behavior by some desired behavior. However, describing achievable responses in terms of the design parameters is often cumbersome, and most constructive techniques rely on specific criteria adapted to the physics of the problem. This is especially true of filters, the design of which traditionally appeals to polynomial extremal problems , . Apics contributed to this area the use of Zolotarev-like problems for multi-band synthesis, although we presently favor interpolation techniques in which parameters arise in a more transparent manner, see Section .

The previous example of harmonic identification
quickly suggests a generalization
of itself: identifying a measure from knowledge of its potential (*i.e.*, the field) on part of a hypersurface (a curve in 2-D)
encompassing the support of the measure.

Inverse potential problems are severely indeterminate because infinitely many
measures within an open set produce the same field outside this set; this phenomenon is called
*balayage*. In the two-step approach
previously described,
we implicitly removed this indeterminacy by requiring in step 1
that the measure
be supported on the boundary (because we seek a function holomorphic
throughout the right half space), and
by requiring in step 2
that the measure be discrete in the left half-plane. The discreteness
assumption also prevails in 3-D inverse source problems, see
Section . Conditions
that ensure uniqueness of the solution to the inverse potential
problem are part of the so-called regularizing assumptions which are needed
in each case to derive efficient algorithms.
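A classical illustration of balayage is Newton's theorem: the normalized uniform measure on the sphere of radius $r$ and the unit Dirac mass at the origin generate the same potential outside the ball,

```latex
\frac{1}{4\pi r^{2}}\int_{|y|=r}\frac{dS(y)}{|x-y|}
\;=\;\frac{1}{|x|}\,,\qquad |x|>r,
```

so that no field measurement taken outside the ball can distinguish between the two measures; the spherical measure is the balayage of the Dirac mass onto the sphere.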

To recap, the gist of our approach is to approximate boundary data by (boundary traces of) fields arising from potentials of measures with specific support. Note that it is different from standard approaches to inverse problems, where descent algorithms are applied to integration schemes of the direct problem; in such methods, it is the equation which gets approximated (in fact: discretized).

Along these lines, Apics advocates the use of steps 1 and 2 above, along with some singularity analysis, to approach issues of nondestructive control in 2-D and 3-D , . The team is currently engaged in two kinds of generalizations, to be described further in Section . The first deals with non-constant conductivities in 2-D, where Cauchy-Riemann equations characterizing holomorphic functions are replaced by conjugate Beltrami equations characterizing pseudo-holomorphic functions; next in line are 3-D situations that we begin to consider, see Sections and . There, we seek applications to inverse free boundary problems such as plasma confinement in the vessel of a tokamak, or inverse conductivity problems like those arising in impedance tomography. The second generalization lies with inverse source problems for the Laplace equation in 3-D, where holomorphic functions are replaced by harmonic gradients; applications are to EEG/MEG and inverse magnetization problems in paleomagnetism, see Section .

The approximation-theoretic tools developed by Apics to handle the issues mentioned so far are outlined in Section . In the sections to come, we describe in more detail which problems are considered and which applications are targeted.

By standard properties of conjugate differentials, reconstructing Dirichlet-Neumann boundary conditions
for a function harmonic in a plane domain,
when these boundary conditions are known already on a subset of the boundary, amounts to recovering a holomorphic function from partial boundary data, as in step 1 above.

Another application by the team deals with non-constant conductivity
over a doubly connected domain, in connection with plasma shaping in the tokamak *Tore Supra*
. The procedure is fast because no numerical integration of
the underlying PDE is needed, as an explicit basis of solutions to the
conjugate Beltrami equation in terms of Bessel functions
was found in this case. Generalizing this approach in a more systematic
manner to free boundary problems of Bernoulli type,
using descent
algorithms based on shape gradients for such approximation-theoretic
criteria, is an interesting prospect still to be pursued.

The piece of work we just mentioned requires defining and studying Hardy spaces of the conjugate-Beltrami equation, which is an interesting topic by itself. For Sobolev-smooth coefficients of exponent greater than 2, this was done in references and . The case of the critical exponent 2 is treated in , which apparently provides the first example of well-posedness for the Dirichlet problem in the non-strictly elliptic case: the conductivity may be unbounded or zero on sets of zero capacity and, accordingly, solutions need not be locally bounded.

The 3-D version of step 1 in Section is another
subject investigated by Apics: to recover a harmonic function
(up to a constant) in a ball or a half-space from partial knowledge of its
gradient on the boundary. This prototypical inverse problem
(*i.e.* inverse to the Cauchy problem for the Laplace equation)
often recurs in electromagnetism. At present, Apics is involved with
solving instances of this inverse problem arising
in two fields, namely medical imaging
*e.g.* for electroencephalography (EEG)
or magneto-encephalography (MEG), and
paleomagnetism (recovery of rock magnetization)
, , see Section .
In this connection, we collaborate with two groups of partners:
the Athena Inria project-team,
CHU La Timone, and the BESA company on the one hand;
the Geosciences Lab. at MIT and the Cerege CNRS Lab. on the other hand.
The question is considerably more difficult than its 2-D
counterpart, due mainly to the lack of multiplicative structure for harmonic
gradients. Still,
considerable progress has been made over recent years
using methods of harmonic analysis and operator theory.

The team is further concerned with 3-D generalizations and applications to
non-destructive control of step 2 in Section .
A typical problem here is to localize inhomogeneities or defects such as
cracks, sources or occlusions in a planar or 3-dimensional object,
from thermal, electrical, or
magnetic measurements on the boundary.
These defects show up as a lack of harmonicity
of the solution to the associated Dirichlet-Neumann problem,
so that recovering them poses an inverse potential problem.
In 2-D, finding an optimal discretization of the
potential in Sobolev norm amounts to solving a best rational approximation
problem, and the question arises as to how the location of the
singularities of the approximant (*i.e.* its poles)
reflects the location of the singularities of the potential
(*i.e.* the defects we seek). This is a fairly deep issue
in approximation theory, to which Apics contributed convergence results
for certain classes of fields
expressed as Cauchy integrals over extremal contours for
the logarithmic potential
, .
Initial schemes to locate cracks or sources
*via* rational approximation on
planar domains were obtained this way , , . It is remarkable that inverse source problems with finitely many sources
in 3-D balls, or more general algebraic surfaces,
can be approached with these 2-D techniques, upon slicing the
domain into planar sections
, .
This line of research generates a steady activity
within Apics, and again applications are sought to medical imaging and
geosciences, see Sections ,
and .

Conjectures can be raised on the behavior of optimal potential discretization in 3-D, but answering them is an ambitious program still in its infancy.

Through contacts with CNES (French space agency),
members of the team became involved in identification and tuning
of microwave electromagnetic filters used in space telecommunications,
see Section . The initial problem was
to recover, from band-limited frequency measurements, physical
parameters of the device under examination.
The latter consists of interconnected dual-mode resonant cavities with
negligible loss, hence its scattering matrix is modeled by a rational matrix of appropriate size and degree.
This is where system theory comes into play, through the
so-called *realization* process mapping
a rational transfer function in the frequency domain
to a state-space representation of the underlying system
of linear differential equations in the time domain.
Specifically, realizing the scattering matrix
allows one to construct
a virtual electrical network, equivalent to the filter,
whose parameters mediate between the frequency response
and the
geometric characteristics of the cavities (*i.e.* the tuning parameters).
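In the scalar case, the realization step can be sketched with a controllable canonical form; the transfer function below is an arbitrary example, not data from an actual filter.

```python
import numpy as np

def realize(num, den):
    """Controllable canonical realization of H(s) = num(s)/den(s),
    den monic of degree n, deg(num) < n, coefficients in descending powers.
    Returns state-space matrices (A, B, C, D)."""
    n = len(den) - 1
    num = np.concatenate([np.zeros(n - len(num)), num])  # pad numerator
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # companion (shift) structure
    A[-1, :] = -np.array(den[1:])[::-1]   # last row from denominator
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = num[::-1].reshape(1, n)           # numerator coefficients, ascending
    D = np.zeros((1, 1))
    return A, B, C, D

# Example: H(s) = 1 / (s^2 + 3s + 2), stable with poles at -1 and -2
A, B, C, D = realize(num=[1.0], den=[1.0, 3.0, 2.0])

s = 1j  # check C (sI - A)^{-1} B + D against H(s) at a sample point
Hval = (C @ np.linalg.solve(s * np.eye(2) - A, B) + D)[0, 0]
print(abs(Hval - 1.0 / (s**2 + 3 * s + 2)))
```

The eigenvalues of $A$ are the poles of the transfer function, which is how a realization exposes the resonances of the underlying network.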

Hardy spaces provide a framework to transform this ill-posed issue into a series of regularized analytic and meromorphic approximation problems. More precisely, the procedure sketched in Section goes as follows:

infer from the pointwise boundary data in the bandwidth
a stable transfer function (*i.e.* one which is holomorphic
in the right half-plane), which may be infinite-dimensional
(numerically: of high degree). This is done by solving
a bounded extremal problem, see Section .
A stable rational approximation of appropriate degree to the model obtained in the previous step is performed. For this, a descent method on the compact manifold of inner matrices of given size and degree is used, based on an original parametrization of stable transfer functions developed within the team .

Realizations of this rational approximant are computed. To be useful, they must satisfy certain constraints imposed by the geometry of the device. These constraints typically come from the coupling topology of the equivalent electrical network used to model the filter. This network is composed of resonators, coupled according to some specific graph. This realization step can be recast, under appropriate compatibility conditions , as solving a zero-dimensional multivariate polynomial system. To tackle this problem in practice, we use Gröbner basis techniques and continuation methods which team up in the Dedale-HF software (see Section ).
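As a minimal sketch of the continuation idea (on a toy zero-dimensional system, not the actual coupling equations handled by Dedale-HF), one may track the roots of an easy start system to those of the target system with a predictor-corrector homotopy:

```python
import numpy as np

# Target system: x^2 + y^2 = 5, x*y = 2 (four regular real solutions)
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 5, x * y - 2])

def JF(v):
    x, y = v
    return np.array([[2 * x, 2 * y], [y, x]])

# Start system with known roots: x^2 = a, y^2 = b (generic complex a, b)
a, b = 2 + 1j, 1 - 2j
def G(v):
    return np.array([v[0]**2 - a, v[1]**2 - b])

def JG(v):
    return np.array([[2 * v[0], 0], [0, 2 * v[1]]])

gamma = np.exp(0.7j)   # random phase: the "gamma trick" avoids singular paths

def H(v, t):  return (1 - t) * gamma * G(v) + t * F(v)
def JH(v, t): return (1 - t) * gamma * JG(v) + t * JF(v)

starts = [np.array([sx * np.sqrt(a), sy * np.sqrt(b)], dtype=complex)
          for sx in (1, -1) for sy in (1, -1)]

sols = []
for v in starts:
    for t in np.linspace(0.0, 1.0, 201)[1:]:   # predictor: previous point
        for _ in range(5):                      # corrector: Newton iterations
            v = v - np.linalg.solve(JH(v, t), H(v, t))
    sols.append(v)

print(sorted((round(v[0].real, 6), round(v[1].real, 6)) for v in sols))
```

Each of the four start roots is continued along its own path; for a generic phase, the paths stay regular and the endpoints enumerate all solutions of the target system, which is the property exploited when exhausting coupling-matrix solutions.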

Let us mention that extensions of classical coupling-matrix theory to frequency-dependent (reactive) couplings have lately been carried out for wide-band design applications, although further study is needed to make them computationally effective.

Subsequently Apics started to investigate issues pertaining to
design rather than identification.
Given the topology of the filter,
a basic problem in this connection is to find the optimal response
subject to specifications
that bear on rejection, transmission and group delay of the
scattering parameters.
Generalizing the classical approach based on Chebyshev polynomials
for single band
filters, we recast the problem of multi-band response synthesis
as a generalization of the classical Zolotarev min-max problem
for rational functions .
Thanks to quasi-convexity, the latter
can be solved efficiently using iterative methods relying on linear
programming. These were implemented in the software
easy-FF (see Section ). Currently, the team is engaged
in synthesis of more complex microwave devices
like multiplexers and routers, which connect several
filters through wave guides.
Schur analysis plays an important role here, because
scattering matrices of passive systems are of Schur type
(*i.e.* contractive in the stability region).
The theory originates with the work of I. Schur ,
who devised a recursive test to
check for contractivity of a holomorphic function in the disk.
The so-called Schur parameters of a function
may be viewed as Taylor coefficients for the hyperbolic metric of the disk, and
the fact that Schur functions are contractions for that metric lies at the
root of Schur's test.
Generalizations thereof turn out to be efficient to parametrize
solutions to contractive interpolation problems .
Dwelling on this, Apics contributed
differential parametrizations (atlases of charts) of lossless
matrix functions , which
are fundamental to our rational approximation
software RARL2 (see Section ).
Schur analysis is also instrumental to approach de-embedding issues,
and provides one with considerable
insight into the so-called matching problem. The latter consists in
maximizing the power a multiport can pass to a given load, and for
reasons of efficiency it
is all-pervasive in microwave and electric network design, *e.g.* of
antennas, multiplexers, wifi cards and more. It can be viewed as a
rational approximation problem in the hyperbolic metric, and the team
is presently getting to grips with this hot topic using multipoint
contractive interpolation in
the framework of the (defense-funded) ANR COCORAM,
see Sections and .
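Schur's recursive test can be sketched on truncated Taylor series: each step extracts a parameter $\gamma_k=f_k(0)$ and applies the hyperbolic shift $f_{k+1}(z)=\frac{f_k(z)-\gamma_k}{z\,\bigl(1-\overline{\gamma_k}\,f_k(z)\bigr)}$; contractivity fails as soon as some $|\gamma_k|\ge 1$. The coefficient-level version below is a toy illustration, not the team's parametrization of lossless matrices.

```python
import numpy as np

def series_div(a, b, N):
    """First N Taylor coefficients of a(z)/b(z), with b[0] != 0."""
    q = np.zeros(N, dtype=complex)
    for k in range(N):
        acc = a[k] if k < len(a) else 0.0
        for j in range(1, min(k, len(b) - 1) + 1):
            acc -= b[j] * q[k - j]
        q[k] = acc / b[0]
    return q

def schur_parameters(c, nsteps):
    """Run Schur's algorithm on Taylor coefficients c of f.
    Stops early if some |gamma_k| >= 1 (f is then not a Schur function)."""
    c = np.asarray(c, dtype=complex)
    params = []
    for _ in range(nsteps):
        gamma = c[0]
        params.append(gamma)
        if abs(gamma) >= 1:
            break
        num = c.copy(); num[0] = 0.0
        num = num[1:]                              # divide (f - gamma) by z
        den = -np.conj(gamma) * c; den[0] += 1.0   # 1 - conj(gamma) * f
        c = series_div(num, den, len(num))
    return params

# f(z) = 0.5 + 0.25 z is a Schur function (|f| <= 0.75 on the closed disk):
p1 = schur_parameters([0.5, 0.25, 0, 0, 0, 0, 0, 0], 4)
# f(z) = 2z is not contractive: the test flags |gamma_1| = 2 >= 1
p2 = schur_parameters([0.0, 2.0, 0, 0], 3)
print([abs(g) for g in p1], [abs(g) for g in p2])
```

The parameters play the role of hyperbolic Taylor coefficients: for a genuine Schur function they all lie in the open unit disk, which is the root of Schur's test.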

In recent years,
our attention was drawn by CNES and UPV (Bilbao)
to questions about stability of high-frequency amplifiers,
see Section .
Contrary to previously discussed devices, these are *active* components.
The response of an amplifier can be linearized around a
set of primary currents and voltages,
and then admittances of the corresponding electrical network
can be computed at various frequencies, using the so-called harmonic
balance method.
The initial goal is to check for stability of the linearized model,
so as to ascertain existence of a well-defined working state.
The network is composed of lumped electrical elements, namely
inductors, capacitors, negative *and* positive resistors,
transmission lines, and controlled current sources.
Our research so far focuses on describing the algebraic structure
of admittance functions, so as to set up a function-theoretic framework
where the two-step approach outlined in Section
can be put to work. The main discovery so far is that
the unstable part of each partial transfer function is rational,
see Section .

To find an analytic function

Here *a priori*
assumptions on
the behavior of the model off

To fix terminology, we refer to this as a *bounded extremal problem*.
As shown in , ,
, the solution to this convex
infinite-dimensional optimization problem can be obtained
when

(

The case

Various modifications of

The analog of Problem is to *seek the inner
boundary*, knowing it is a level curve of the solution.
In this case, the Lagrange parameter indicates
how to deform the inner contour in order to improve
data fitting.
Similar topics are discussed in Sections and
for more general equations than the Laplacian, namely
isotropic conductivity equations of the form

Though originally considered in dimension 2,
Problem

When

On the ball, the analog
of Problem

A so-called *Hardy-Hodge* decomposition was introduced,
allowing us to single out the silent magnetizations (*i.e.* those generating no field
in the upper half space) .

Just like solving problem

Problem

Companion to problem

Note that

The techniques set forth in this section are used to solve
step 2 in Section and instrumental to
approach inverse boundary value problems
for the Poisson equation

We put

A natural generalization of problem

(

Only for

The case where a *stable* rational
approximant is sought is more delicate, for the best approximant may *not* be unique.

The former Miaou project (predecessor of Apics) designed a dedicated
steepest-descent algorithm
for this case, for which convergence to a *local minimum* is
guaranteed; until now it seems to be the only procedure meeting this
property. This gradient algorithm proceeds
recursively with respect to the degree, using *critical points* of lower degree as initialization
(as is done by the RARL2 software, Section ).

In order to establish global convergence results, Apics has undertaken a
deeper study of the number and nature of critical points
(local minima, saddle points...), in which
tools from differential topology and
operator theory team up with classical interpolation theory
, .
Based on this work,
uniqueness or asymptotic uniqueness of the approximant
was proved for certain classes of functions like
transfer functions of relaxation
systems (*i.e.*
Markov functions) and more
generally Cauchy integrals over hyperbolic geodesic arcs .
These are the only results of this kind. Research by Apics on this topic
remained dormant for a while for reasons of opportunity,
but revisiting the work in higher dimensions is still
a worthy endeavor. Meanwhile,
an analog to AAK theory
was carried out for

A common
feature to the above-mentioned problems
is that critical point equations
yield non-Hermitian orthogonality relations for the denominator
of the approximant. This stresses connections with interpolation,
which is a standard way to build approximants,
and in many respects best or near-best rational approximation
may be regarded as a clever manner to pick interpolation points.
This was exploited in , ,
and is used in an essential manner to assess the
behavior of poles of best approximants to functions with branched
singularities,
which is of particular interest for inverse source problems
(*cf.* Sections and ).

In higher dimensions, the analog of Problem

Besides,
certain constrained rational approximation problems, of special interest
in identification
and design of passive systems, arise when putting additional
requirements on the approximant, for instance that it should be smaller than 1
in modulus (*i.e.* a Schur function). In particular, Schur interpolation
lately received renewed attention
from the team, in connection with matching problems.
There, interpolation data are subject to
a well-known compatibility condition (positive definiteness of the so-called
Pick matrix), and the main difficulty is to put interpolation
points on the boundary of

Matrix-valued approximation is necessary to handle systems with several
inputs and outputs but it generates additional difficulties
as compared to scalar-valued approximation,
both theoretically and algorithmically. In the matrix case,
the McMillan degree (*i.e.* the degree of a minimal realization in
the System-Theoretic sense) generalizes the usual notion of degree
for rational functions.

The basic problem that we consider now goes as follows:
*let $\mathcal{F}\in {\left({H}^{2}\right)}^{m\times l}$ and $n$ an
integer; find a rational matrix of size $m\times l$ without
poles in the unit disk and of McMillan degree at most $n$ which is nearest possible
to $\mathcal{F}$ in ${\left({H}^{2}\right)}^{m\times l}$.*
Here the

The scalar approximation algorithm derived in
and mentioned in
Section
generalizes to
the matrix-valued situation . The
first difficulty here is to parametrize
inner matrices (*i.e.* matrix-valued functions
analytic in the unit disk and unitary on the unit circle) of
given McMillan degree degree

Difficulties relative to multiple local minima of course arise in
the matrix-valued case as well, and deriving criteria that
guarantee uniqueness is even
more difficult than in the scalar case. The case of rational functions
of degree

Let us stress that RARL2 seems the only algorithm handling rational approximation in the matrix case that demonstrably converges to a local minimum while meeting stability constraints on the approximant.

We refer here to the behavior of poles of best
meromorphic approximants, in the

Generally speaking in approximation theory, assessing the
behavior of poles of rational approximants is essential
to obtain error rates as the degree goes large, and to tackle
constructive issues like
uniqueness. However, as explained in Section ,
Apics considers this issue foremost as a means
to extract information on
singularities of the solution to a
Dirichlet-Neumann problem.
The general theme is thus: *how do the singularities
of the approximant reflect those of the approximated function?*
This approach to inverse problems for the 2-D Laplacian turns out
to be attractive when singularities
are zero- or one-dimensional (see Section ). It can be used
as a computationally cheap
initial condition for more precise but much heavier
numerical optimizations which often do not even converge
unless properly initialized.
As regards crack detection or source recovery, this approach
boils down to
analyzing the behavior of best meromorphic
approximants of a function with branch points.
For piecewise analytic cracks, or in the case of sources, we were able to
prove (, , )
that the poles of the
approximants accumulate, as the degree goes large,
on some extremal cut of minimum weighted
logarithmic capacity connecting
the singular points of the crack, or the sources
.
Moreover, the asymptotic density
of the poles turns out to be the Green equilibrium distribution
of this cut.
The case of two-dimensional singularities is still an outstanding open problem.
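The pole behavior can be observed numerically on a Markov function with a simple linearized least-squares fit (a crude substitute for the best meromorphic approximants studied by the team); the assumed example is $f(z)=\int_{-1}^{1}dt/(z-t)$, whose cut is $[-1,1]$:

```python
import numpy as np

# Markov function f(z) = \int_{-1}^{1} dt/(z - t), singular on the cut [-1, 1]
t, wq = np.polynomial.legendre.leggauss(80)          # quadrature on [-1, 1]
f = lambda z: np.sum(wq / (z[:, None] - t[None, :]), axis=1)

# Samples on a circle enclosing the cut
z = 2.0 * np.exp(2j * np.pi * np.arange(200) / 200)
fz = f(z)

# Linearized fit: q(z) f(z) ~ p(z), q monic of degree n, deg p < n
n = 6
V = np.vander(z, n, increasing=True)                 # 1, z, ..., z^{n-1}
A = np.hstack([V * fz[:, None], -V])                 # unknowns: q_0..q_{n-1}, p_0..p_{n-1}
rhs = -z**n * fz
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

q = np.concatenate([[1.0], coef[:n][::-1]])          # monic, descending powers
poles = np.roots(q)
print(np.sort_complex(poles))
```

The computed poles come out (near-)real and inside $[-1,1]$, mirroring the clustering of poles on the extremal cut described above.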

It is remarkable that inverse source problems inside a sphere or an ellipsoid in 3-D can be approached with such 2-D techniques, as applied to planar sections (see Section ). The technique is implemented in the software FindSources3D, see Section .

Sylvain Chevillard joined the team in November 2010. His arrival
resulted in Apics hosting a research activity in certified computing,
centered on the software *Sollya*, of which S. Chevillard is a
co-author, see Section . On the one hand, Sollya is an
Inria software which still requires some tuning for a growing community of
users. On the other hand, the approximation-theoretic methods
at work in Sollya are potentially useful for certified solutions to
the constrained analytic problems described in Section .
However, developing Sollya is not a long-term objective of Apics.

Application domains are naturally linked to the problems described in Sections and . By and large, they split into a systems-and-circuits part and an inverse-source-and-boundary-problems part, united under a common umbrella of function-theoretic techniques as described in Section .

This work is performed in collaboration with Maureen Clerc and Théo Papadopoulo from the Athena Project-Team, and Jean-Paul Marmorat (Centre de mathématiques appliquées - CMA, École des Mines de Paris).

Solving overdetermined Cauchy problems for the Laplace equation on a
spherical layer (in 3-D) in order to extrapolate
incomplete data (see Section ) is
a necessary
ingredient of the team's approach to inverse source problems, in particular
for applications to EEG. Indeed, the latter involves propagating the
initial conditions through several layers of different conductivities,
from the boundary shell
down to the center of the domain where the
singularities (*i.e.* the sources) lie.
Once propagated
to the innermost sphere, it turns out that traces of the
boundary data on 2-D cross sections coincide
with analytic functions with branched singularities
in the slicing plane
. The singularities are
related to the actual locations of the sources: their moduli reach a
maximum precisely when the slicing plane contains one of the sources. Hence we are
back to the 2-D framework of Section ,
and recovering these singularities
can be performed *via* best rational approximation.
The goal is to produce a fast and sufficiently accurate
initial guess on the number
and location of the sources in order to run heavier
descent algorithms on the direct problem, which are more precise but
computationally costly and often
fail to converge if not properly initialized.

Numerical experiments give very good results on simulated data and we are now engaged in the process of handling real experimental data (see Sections and ), in collaboration with the Athena team at Inria Sophia Antipolis, neuroscience teams in partner-hospitals (la Timone, Marseille), and the BESA company (Munich).

Generally speaking, inverse potential problems, similar to the one appearing in Section , occur naturally in connection with systems governed by Maxwell's equation in the quasi-static approximation regime. In particular, they arise in magnetic reconstruction issues. A specific application is to geophysics, which led us to form the Inria Associate Team “IMPINGE” (Inverse Magnetization Problems IN GEosciences) together with MIT and Vanderbilt University. A recent collaboration with Cerege (CNRS, Aix-en-Provence), in the framework of the ANR-project MagLune, completes this picture, see Section .

To set up the context, recall that the Earth's geomagnetic field is generated by convection of the liquid metallic core (geodynamo) and that rocks become magnetized by the ambient field as they are formed or after subsequent alteration. Their remanent magnetization provides records of past variations of the geodynamo, which is used to study important processes in Earth sciences like motion of tectonic plates and geomagnetic reversals. Rocks from Mars, the Moon, and asteroids also contain remanent magnetization which indicates the past presence of core dynamos. Magnetization in meteorites may even record fields produced by the young sun and the protoplanetary disk which may have played a key role in solar system formation.

For a long time, paleomagnetic techniques were only capable of analyzing bulk samples and of computing their net magnetic moment. The development of SQUID microscopes has recently extended the spatial resolution to sub-millimeter scales, raising new physical and algorithmic challenges. This associate team aims at tackling them, experimenting with the SQUID microscope set up in the Paleomagnetism Laboratory of the department of Earth, Atmospheric and Planetary Sciences at MIT. Typically, pieces of rock are sanded down to a thin slab, and the magnetization has to be recovered from the field measured on a parallel plane at small distance above the slab.
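A simplified numerical sketch of such an inversion (assuming a purely $z$-directed magnetization on a periodized slab and the standard Fourier-domain upward-continuation kernel $\tfrac{\mu_0}{2}|k|e^{-h|k|}$, with $\mu_0/2$ set to 1; all quantities synthetic) illustrates a Tikhonov-regularized deconvolution:

```python
import numpy as np

N, h, lam = 64, 0.05, 1e-6            # grid size, sensor height, regularization
x = 2 * np.pi * np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")

# Zero-mean synthetic magnetization (z-component only, arbitrary units)
m = np.sin(3 * X) * np.cos(2 * Y) + 0.5 * np.sin(X + 4 * Y)

# Fourier multiplier of the assumed forward map m -> Bz at height h
kx = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers on the 2pi-torus
K = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
factor = 0.5 * K * np.exp(-h * K)

Bz = np.fft.ifft2(factor * np.fft.fft2(m)).real       # simulated measurement

# Tikhonov-regularized inversion in the Fourier domain
m_rec = np.fft.ifft2(factor * np.fft.fft2(Bz) / (factor**2 + lam)).real

print(np.max(np.abs(m_rec - m)))      # small reconstruction error
```

Note that the $k=0$ component is annihilated by this forward map, so the mean of the magnetization is silent in this model, a toy instance of the non-uniqueness discussed above.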

Mathematically
speaking, both the inverse source problems for EEG from Section and the inverse magnetization problems described presently
amount to recovering the (3-D valued) quantity

outside the volume

This work is conducted in part with Yannick Privat, CNRS, Lab. J.-L. Lions, Paris.

The team has engaged in the study of
problems with variable conductivity,
*cf.* in particular
Section .

This is joint work with Stéphane Bila (XLIM, Limoges) and Jean-Paul Marmorat (Centre de mathématiques appliquées (CMA), École des Mines de Paris).

One of the best training grounds for function-theoretic applications by the team is the identification and design of physical systems whose performance is assessed frequency-wise. This is the case of electromagnetic resonant systems which are of common use in telecommunications.

In space telecommunications (satellite transmissions), constraints specific to on-board technology lead to the use of filters with resonant cavities in the microwave range. These filters serve multiplexing purposes (before or after amplification), and consist of a sequence of cylindrical hollow bodies, magnetically coupled by irises (orthogonal double slits). The electromagnetic wave that traverses the cavities satisfies the Maxwell equations, forcing the tangent electrical field along the body of the cavity to be zero. A deeper study of the Helmholtz equation shows that an essentially discrete set of wave vectors is selected. In the considered range of frequency, the electrical field in each cavity can be decomposed along two orthogonal modes, perpendicular to the axis of the cavity (other modes are far off in the frequency domain, and their influence can be neglected).

Near the resonance frequency, a good approximation of Maxwell's equations is given by the solution of a second order differential equation. Thus, one obtains an electrical model of the filter as a sequence of electrically-coupled resonant circuits, each circuit being modeled by two resonators, one per mode, the resonance frequency of which represents the frequency of a mode, and whose resistance accounts for electric losses (current on the surface) of the cavities.

This way, the filter can be seen as a two-port network
when loaded with a resistor at one end and fed with some potential at the other end.
One is then
interested in the power which is transmitted and reflected. This leads
one to define a
scattering matrix.

In fact, resonance is not studied via the electrical model,
but via a low-pass
equivalent circuit obtained upon linearizing near the central frequency, which is no
longer
conjugate symmetric (*i.e.* the underlying system may no longer
have real
coefficients) but whose degree is divided by 2 (8 in the example).

In short, the strategy for identification is as follows:

Measuring the scattering matrix of the filter near the optimal frequency over twice the pass band (which is 80 MHz in the example).

Solving bounded extremal problems for the transmission and the reflection (the modulus of the response being respectively close to 0 and 1 outside the measurement interval, cf. Section ). This provides us with a scattering matrix of order roughly 1/4 of the number of data points.

Approximating this scattering matrix by a rational transfer-function of fixed degree (8 in this example) via the Endymion or RARL2 software (cf. Section ).

A realization of the transfer function is thus obtained, and some additional symmetry constraints are imposed.

Finally one builds a realization of the approximant and looks for a change of variables that eliminates non-physical couplings. This is obtained by using algebraic-solvers and continuation algorithms on the group of orthogonal complex matrices (symmetry forces this type of transformation).

The final approximation is of high quality. This can be interpreted as
a validation of the linearity hypothesis for the system:
the relative

The above considerations are valid for a large class of filters. These developments have also been used for the design of non-symmetric filters, which are useful for the synthesis of repeating devices.

The team also investigates problems relative to the design of optimal responses for microwave devices. The resolution of quasi-convex Zolotarev problems was proposed, in order to derive guaranteed optimal multi-band filter responses subject to modulus constraints . This generalizes the classical single-band design techniques based on Chebyshev polynomials and elliptic functions. The approach relies on the fact that the modulus of the scattering parameter

The filtering function appears to be the ratio of two polynomials
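For concreteness, the classical single-band Chebyshev characteristic that the multi-band approach generalizes can be evaluated with the standard equi-ripple formula (a textbook illustration, not the team's Zolotarev solver):

```python
import numpy as np

def chebyshev_response(w, n, eps):
    """|S21|^2 of the degree-n Chebyshev (equi-ripple) lowpass prototype:
    1 / (1 + eps^2 * T_n(w)^2), with passband |w| <= 1.  T_n is evaluated
    by its trigonometric form inside the band, hyperbolic form outside."""
    Tn = np.cos(n * np.arccos(np.clip(w, -1.0, 1.0)))             # |w| <= 1
    Tn = np.where(np.abs(w) > 1.0,
                  np.cosh(n * np.arccosh(np.maximum(np.abs(w), 1.0))), Tn)
    return 1.0 / (1.0 + eps ** 2 * Tn ** 2)
```

In the multi-band setting no such closed form is available, which is what motivates the min-max (Zolotarev) formulation.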

The relative simplicity of the derivation of a filter's response, under
modulus constraints, owes much to the possibility of
forgetting about Feldtkeller's equation and of expressing all design constraints
in terms of the filtering function. This is no longer the case when
considering the synthesis

Through contacts with CNES (Toulouse) and UPV (Bilbao),
Apics got further involved three years ago
with the design of amplifiers which, unlike filters, are active devices.
A prominent issue here is stability. Some twenty years back, it was not
possible to simulate unstable responses, and only after building a device
could one detect instability. The advent of so-called *harmonic balance*
techniques, which compute steady state responses of linear elements in
the frequency domain and look for a periodic state in the time domain of
a network connecting these linear elements *via*
static nonlinearities made it possible to compute the harmonic response
of a (possibly nonlinear and unstable) device.
This has had tremendous impact on
design, and there is a growing demand for software analyzers.
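The alternating frequency/time principle behind harmonic balance can be sketched on a toy circuit equation of our own choosing: the linear part is solved harmonic-wise in the frequency domain, the static nonlinearity is evaluated in the time domain, and the two are iterated until a periodic state is reached.

```python
import numpy as np

def harmonic_balance(n=64, iters=100):
    """Periodic steady state of the toy equation x' + x + 0.1*x^3 = cos(t)
    (our illustrative example, not an actual amplifier model).  Each
    harmonic of the linear part is inverted in the frequency domain; the
    static nonlinearity is applied pointwise in the time domain."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer harmonic indices
    F = np.fft.fft(np.cos(t))                     # forcing spectrum
    x = np.zeros(n)
    for _ in range(iters):
        rhs = F - np.fft.fft(0.1 * x ** 3)        # nonlinearity to the RHS
        x = np.fft.ifft(rhs / (1j * k + 1.0)).real   # (ik + 1) X_k = rhs_k
    return t, x
```

The weak nonlinearity (coefficient 0.1) makes the fixed-point iteration contractive; industrial harmonic-balance engines use Newton-type solvers instead, for the same balance equations.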

There are two types of stability involved. The first is stability of a fixed
point around which the linearized transfer function
accounts for small signal amplification. The second is stability of a
limit cycle which is reached when the input signal is no longer small
and truly nonlinear amplification is attained
(*e.g.* because of saturation).
Work by the team so far is concerned with the first type of stability,
and emphasis is put on defining and extracting the “unstable part” of the
response, see Section .

Status: Currently under development. A stable version is maintained.

This software is developed in collaboration with Jean-Paul Marmorat (Centre de mathématiques appliquées (CMA), École des Mines de Paris).

RARL2 (Réalisation interne et Approximation Rationnelle L2) is a software for
rational approximation (see Section )
http://

The software RARL2 computes, from a given matrix-valued function, a rational approximant that is *stable and of prescribed McMillan degree*
(see Section ). It was initially developed in the context of linear (discrete-time) system theory and makes heavy use of the classical concepts of this field. The matrix-valued function to be approximated can be viewed as the transfer function of a multivariable discrete-time stable system. RARL2 takes as input either:

its internal realization,

its first

discretized (uniformly distributed) values on the circle. In this case, a least-squares criterion is used instead of
the

It thus performs model reduction in cases 1) and 2), and frequency-data identification in case 3). In the case of band-limited frequency data, it may be necessary to infer the behavior of the system outside the bandwidth before performing rational approximation (see Section ). An appropriate Möbius transformation allows one to use the software for continuous-time systems as well.
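The Möbius change of variable can be sketched as follows; the bilinear map s = (z-1)/(z+1) below is one standard choice, and RARL2's actual convention may differ:

```python
import numpy as np

def to_discrete(Hc):
    """Möbius (bilinear) change of variable: given a continuous-time
    transfer function Hc(s), return z -> Hc((z - 1)/(z + 1)).  This map
    sends the unit circle onto the imaginary axis, so continuous-time
    frequency data become circle data on which the discrete-time
    machinery applies.  (One standard choice of transformation.)"""
    return lambda z: Hc((z - 1) / (z + 1))
```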

The method is a steepest-descent algorithm. A parametrization of MIMO systems is used, which ensures that the stability constraint on the approximant is met. The implementation, in Matlab, is based on state-space representations.

The number of local minima can be large, so that the choice of an initial point for the optimization may play a crucial role. In this connection, two methods can be used: 1) an initialization with a best Hankel approximant; 2) an iterative search strategy on the degree of the local minima, similar in principle to that of RARL2, which increases the chance of obtaining the absolute minimum by generating, in a structured manner, several initial conditions.
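The Hankel-based initialization can be illustrated by Kung's classical SVD method on a toy SISO example; this is a plausible way to produce such a starting point, not the RARL2 internals:

```python
import numpy as np

def kung_reduction(h, r):
    """Kung-style model reduction from Markov parameters h[k] = C A^k B of
    a stable SISO discrete-time system: SVD of the Hankel matrix, balanced
    truncation to order r, then recovery of (A, B, C) by shift-invariance
    of the observability factor."""
    m = len(h) // 2
    H = np.array([[h[i + j] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H)
    sq = np.sqrt(s[:r])
    Obs = U[:, :r] * sq                      # extended observability factor
    Ctr = (Vt[:r, :].T * sq).T               # extended controllability factor
    A = np.linalg.pinv(Obs[:-1]) @ Obs[1:]   # shift-invariance of Obs
    return A, Ctr[:, :1], Obs[:1, :]         # (A, B, C) of the reduced model
```

The singular values of the Hankel matrix also quantify how well the system can be approximated at each order, which makes this a natural structured initializer for a descent method.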

RARL2 performs the rational approximation step in our applications to filter identification (see Section ) as well as source or crack recovery (see Section ). It was released to the universities of Delft, Maastricht, Cork, Brussels and Macao. The parametrization embodied in RARL2 was also used for a multi-objective control synthesis problem provided by ESTEC-ESA, The Netherlands. An extension of the software to the case of triple-pole approximants is now available. It is used by FindSources3D (see Section ).

Status: A stable version is maintained.

This software is developed in collaboration with Jean-Paul Marmorat (Centre de mathématiques appliquées (CMA), École des Mines de Paris).

The identification of filters modeled by an electrical
circuit, developed by the team (see Section ),
leads us to compute the electrical parameters of the underlying
filter. This means finding a particular realization

Status: Currently under development. A stable version is maintained.

PRESTO-HF: a toolbox dedicated to lowpass parameter identification for microwave filters http://www-sop.inria.fr/apics/Presto-HF. In order to allow the industrial transfer of our methods, a Matlab-based toolbox has been developed, dedicated to the problem of identification of low-pass microwave filter parameters. It allows one to run the following algorithmic steps, either individually or in a single shot:

determination of delay components caused by the access devices (automatic reference plane adjustment),

automatic determination of an analytic completion, bounded in modulus for each channel,

rational approximation of fixed McMillan degree,

determination of a constrained realization.

For the matrix-valued rational approximation step, Presto-HF relies on RARL2 (see Section ). Constrained realizations are computed by the RGC software. As a toolbox, Presto-HF has a modular structure, which allows one for example to include some building blocks in an already existing software.
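The reference-plane adjustment of the first step can be illustrated on a toy model in which the access delay appears as a linear phase factor on the reflection data. The phase-slope fit below is our simplification, not the Presto-HF algorithm:

```python
import numpy as np

def estimate_delay(w, s11):
    """Estimate an access-line delay tau from reflection data modeled as
    s11(jw) ~ exp(-2j*w*tau) * r(jw), assuming the residual term r has
    nearly constant phase over the measured band: fit the unwrapped phase
    of the data by a straight line and read tau off the slope."""
    phase = np.unwrap(np.angle(s11))
    slope = np.polyfit(w, phase, 1)[0]
    return -slope / 2.0

def compensate(w, s11, tau):
    """Chain the inverse delay back onto the data (reference-plane shift)."""
    return s11 * np.exp(2j * w * tau)
```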

The delay compensation algorithm is based on the following assumption:
far off the passband, one can reasonably expect a good approximation of the
rational components of

This toolbox is currently used by Thales Alenia Space in Toulouse and by Thales Airborne Systems, and a license agreement has recently been negotiated with TAS-Espagna. XLIM (University of Limoges) is a heavy user of Presto-HF within the academic filtering community, and free license agreements are currently being considered with the microwave department of the University of Erlangen (Germany) and the Royal Military College (Kingston, Canada). A time-limited license has been bought by Flextronics for testing purposes.

Status: Currently under development. A stable version is maintained.

Dedale-HF is a software dedicated to solve exhaustively the coupling matrix synthesis problem in reasonable time for the filtering community. Given a coupling topology, the coupling matrix synthesis problem (C.M. problem for short) consists in finding all possible electromagnetic coupling values between resonators that yield a realization of given filter characteristics. Solving the latter problem is crucial during the design step of a filter in order to derive its physical dimensions as well as during the tuning process where coupling values need to be extracted from frequency measurements (see Figure ).

Dedale-HF consists of two parts: a database of coupling topologies and a dedicated predictor-corrector code. Roughly speaking, each reference file of the database contains, for a given coupling topology, the complete solution to the C.M. problem associated with particular filtering characteristics. The latter is then used as a starting point for a predictor-corrector integration method that computes the solution to the C.M. problem corresponding to the user-specified filter characteristics. The reference files are computed off-line using Gröbner basis techniques or numerical techniques based on the exploration of a monodromy group. The use of such continuation techniques, combined with an efficient implementation of the integrator, drastically reduces the computational time.
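The predictor-corrector continuation principle can be sketched on a scalar toy problem; the actual code transports coupling-matrix solutions, not this equation:

```python
import numpy as np

def continue_solution(f, fx, flam, x0, lam0, lam1, steps=50):
    """Generic predictor-corrector continuation: follow a solution branch
    of f(x, lam) = 0 from a known solution (x0, lam0) to lam = lam1.
    Euler predictor along dx/dlam = -f_lam / f_x, then a few Newton
    corrector steps at the new parameter value."""
    x, lam = x0, lam0
    dlam = (lam1 - lam0) / steps
    for _ in range(steps):
        x += dlam * (-flam(x, lam) / fx(x, lam))   # predictor
        lam += dlam
        for _ in range(5):                          # corrector (Newton)
            x -= f(x, lam) / fx(x, lam)
    return x
```

Starting from a pre-computed reference solution and integrating toward the user's specification is exactly what makes the on-line part of such a scheme fast: all the hard algebraic work is confined to the off-line computation of the reference.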

Access to the database and the integrator code is provided via the web at http://www-sop.inria.fr/apics/Dedale/WebPages. The software is free of charge for academic research purposes; registration is however needed in order to access full functionality. Up to now, 90 users have registered worldwide (mainly in Europe, the U.S.A., Canada and China) and 4000 reference files have been downloaded.

A license for this software was sold at the end of 2011 to TAS-Espagna, in order to tune filters with topologies having multiple solutions. For this, Dedale-HF teams up with Presto-HF.

Status: A stable version is maintained.

This software was developed by Vincent Lunot (Taiwan Univ.) during his PhD. He continues to maintain it.

EasyFF is a software dedicated to the computation of complex, in particular multi-band, filtering functions. The software takes as input specifications on the modulus of the scattering matrix (transmission and rejection), the filter's order, and the number of transmission zeros. The output is an "optimal" filtering characteristic, in the sense that it solves an associated min-max Zolotarev problem. Computations are based on a Remez-type algorithm (if transmission zeros are fixed) or on linear programming techniques if transmission zeros are part of the optimization.
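The core of a Remez-type algorithm is the computation, on a reference point set, of the polynomial whose error equioscillates; here is that single step on a toy function (EasyFF's actual filtering-function setup is of course different):

```python
import numpy as np

def leveled_fit(f, x, n):
    """One Remez step: on a reference x of n+2 points, solve the linear
    system  sum_j c_j x_i^j + (-1)^i E = f(x_i)  for the degree-n
    polynomial coefficients c and the leveled error E.  A full Remez-type
    algorithm alternates this step with an exchange of reference points."""
    i = np.arange(len(x))
    A = np.hstack([np.vander(x, n + 1, increasing=True),
                   ((-1.0) ** i)[:, None]])
    sol = np.linalg.solve(A, f(x))
    return sol[:n + 1], sol[n + 1]          # coefficients, leveled error E
```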

Status: Currently under development. A stable version is maintained.

This software is developed in collaboration with Maureen Clerc and Théo Papadopoulo from the Athena Project-Team, and with Jean-Paul Marmorat (Centre de mathématiques appliquées - CMA, École des Mines de Paris).

FindSources3D

A new release of FindSources3D is now available, which will be demonstrated and distributed, in particular to the medical team we maintain contact with (hosp. la Timone, Marseille). The preliminary step (“cortical mapping”) is now solved using expansion in spherical harmonics, along with a constrained approximation scheme.

Another release is being prepared, due to strong interest by the German
company BESA GmbH

Status: Currently under development. A stable version is maintained.

This software is developed in collaboration with Christoph Lauter (LIP6) and Mioara Joldeş (LAAS).

Sollya is an interactive tool where the developers of mathematical floating-point libraries (libm) can experiment before actually developing code. The environment is safe with respect to floating-point errors, *i.e.* the user precisely knows when rounding errors or approximation errors happen, and rigorous bounds are always provided for these errors.

Among other features, it offers a fast Remez algorithm for computing polynomial approximations of real functions, and also an algorithm for finding good polynomial approximants with floating-point coefficients to any real function. It also provides algorithms for the certification of numerical codes, such as Taylor models, interval arithmetic, and certified supremum norms.
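The flavor of such rigorous computation can be conveyed by a toy interval-arithmetic class with exact rational endpoints; this is a sketch of the principle, not Sollya's implementation:

```python
from fractions import Fraction

class Interval:
    """Toy interval arithmetic with exact rational endpoints: every
    operation returns an interval guaranteed to contain the exact result,
    which is the kind of enclosure a rigorous tool must propagate."""
    def __init__(self, lo, hi=None):
        self.lo = Fraction(lo)
        self.hi = Fraction(hi if hi is not None else lo)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __contains__(self, x):
        return self.lo <= Fraction(x) <= self.hi
```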

It is available as a free software under the CeCILL-C license at http://

The research in this section is partly joint work with Qian Tao (Univ. Macao).

It was proved in
that a vector field with *Hardy-Hodge* decomposition,
is valid not only for

The Hardy-Hodge decomposition was used in
to find the kernel of the planar magnetization operator, namely a
potential of
the form () with *e.g.* a sphere)
is silent in the unbounded component of
the complement of that surface if, and only if, there is no harmonic
gradient from inside in its Hardy-Hodge decomposition.
An article is being written on this topic.

We also considered the case where

These results shed light on the indeterminacy of inverse source problems.

This work is conducted in collaboration with Maureen Clerc and Théo Papadopoulo from the Athena EPI, and with Jean-Paul Marmorat (Centre de mathématiques appliquées - CMA, École des Mines de Paris).

In 3-D, functional or clinically active regions in the cortex are often modeled by point-wise sources that must be localized from measurements of a potential on the scalp. Inside the cortex, identified with a ball after the cortical mapping step, the potential satisfies a Poisson equation whose right-hand side is a linear combination of gradients of Dirac masses (the sources in EEG). In the work it was shown how best rational approximation on a family of circles, cut along parallel planes on the sphere, can be used to recover the sources when there are at most 2 of them. Later, results on the behavior of poles in best rational approximation of fixed degree to functions with branch points helped justify the technique for finitely many sources (see section ).
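The idea of recovering point singularities from boundary data via rational approximation can be sketched, in a much simplified setting, by Prony-type recovery of simple poles from values on a circle. This is only a toy analogue of the planar sections technique; FindSources3D relies on best rational approximation, including multiple poles:

```python
import numpy as np

def recover_poles(z, f, m):
    """Recover the m simple poles of f(z) = sum_k a_k / (z - p_k), |p_k|<1,
    from samples on the unit circle: the moments c_n = sum_k a_k p_k^n are
    obtained by quadrature of (1/2*pi*i) * contour integral of z^n f(z) dz,
    and the p_k are the roots of the associated Prony polynomial."""
    c = np.array([np.mean(z ** (n + 1) * f) for n in range(2 * m)])
    H = np.array([[c[i + j] for j in range(m)] for i in range(m)])
    q = np.linalg.solve(H, -c[m:2 * m])          # Prony coefficients
    return np.roots(np.concatenate([[1.0], q[::-1]]))
```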

The dedicated software FindSources3D (see section ),
developed, in collaboration with the team Athena and the CMA,
dwells on these ideas. Functions to be approximated in 2-D slices
turn out to have additional *multiple* poles at their branch points
so that, in the rational approximation step,
it is beneficial to consider approximants with
multiple poles as well
(for EEG data, one should consider *triple* poles). Though numerically
observed in , there is no mathematical
justification so far for why these multiple poles are attracted more strongly than
simple poles to the singularities of the approximated function.
This intriguing property, however,
definitely helps source recovery . This year we used it to automatically estimate the “most plausible”
number of sources (numerically: up to 3 at the moment).
Such enhancements were prompted by
a developing collaboration with the BESA company,
which is interested in automatic detection of the number of sources
(which was left to the user until recently).

Soon, magnetic data from MEG (magneto-encephalography) will become available together with EEG data; indeed, it is now possible to use simultaneously the corresponding measurement devices. We expect this to improve the accuracy of our algorithms.

In relation to other brain exploration modalities like electrical impedance tomography (EIT, see ), we also consider identifying electrical conductivity in the head. This is the topic of the PhD of C. Papageorgakis, co-advised with the Athena project-team and BESA GmbH. Specifically, in layered models, we are concerned with estimating conductivity of the skull (intermediate layer). Indeed, the skull consists of a hard bone part, the conductivity of which is more or less known, and spongy bone compartments whose conductivities may vary considerably with individuals.

A preliminary question in this connection is:
can one uniquely recover a homogeneous skull conductivity from
a single EEG recording when the sources and the
conductivities of other layers are known? And if sources are not known,
which additional information do we need?
These are issues currently under investigation.
To put them into perspective, recall the famous Calderón problem of deducing
a bounded (nonconstant) conductivity from the knowledge of all
possible pairs consisting of a potential and its current flux
at the boundary. In dimension 3, when the conductivity is not smooth
(less than *i.e.* if two conductivities can have the
same pairs of boundary potential and flux).
A weaker, discrete version of this problem is:
if the conductivity takes on finitely many values and the geometry
of the level sets is known, does a finite set of
pairs of boundary potential and flux allow one to recover it?
This is a significant question to be tackled for
the purpose of source recovery in EEG
with known geometry but unknown conductivities inside the head.

This work is carried out in the framework of the “équipe associée Inria” IMPINGE, comprising Eduardo Andrade Lima and Benjamin Weiss from the Earth Sciences department at MIT (Boston, USA) and Douglas Hardin and Edward Saff from the Mathematics department at Vanderbilt University (Nashville, USA).

Localizing magnetic sources from measurements of the magnetic field
away from the support of the magnetization is the fundamental
issue under investigation by IMPINGE. The goal is to determine
magnetic properties of rock
samples (*e.g.* meteorites or stalactites), from fine field measurements
close to the sample that
can nowadays be obtained using SQUIDs (superconducting coil devices).
Currently, rock samples are cut into thin slabs and the magnetization
distribution is considered to lie in a plane, which makes for a
somewhat less indeterminate framework than EEG
because “fewer” magnetizations can produce the same field
(for the slab has no inner volume). Note however that EEG data
consist of both potential and current values at the boundary,
whereas in the present setting only
values of the normal magnetic field are provided to us.

Last year, we set up a heuristic procedure to recover regularly spaced dipolar magnetizations, *i.e.* magnetizations composed of dipoles placed at the points of a regular rectangular grid.
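A bare-bones analogue of the grid-based dipolar inversion: assuming (for brevity) purely vertical dipole moments on a known grid, the field map is linear in the moments and can be inverted by least squares. This is our simplification; the actual procedure handles full planar magnetizations and the regularization issues discussed below:

```python
import numpy as np

def bz_kernel(obs, src):
    """Vertical field at point obs of a unit vertical dipole at src
    (the constant mu0/(4*pi) is dropped)."""
    r = np.asarray(obs, float) - np.asarray(src, float)
    d = np.sqrt(r @ r)
    return 3.0 * r[2] ** 2 / d ** 5 - 1.0 / d ** 3

def invert_grid(obs_pts, bz, src_pts):
    """Least-squares recovery of vertical dipole moments on a fixed grid
    from field values measured above the sample: solve A m ~ bz with
    A[i, k] = bz_kernel(obs_i, src_k)."""
    A = np.array([[bz_kernel(o, s) for s in src_pts] for o in obs_pts])
    return np.linalg.lstsq(A, bz, rcond=None)[0]
```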

This year, we performed more systematic experiments on real data (namely Allende chondrules and Hawaiian basalt) provided by the SQUID scanning microscope at the MIT lab. Cropping the support of *e.g.*, chondrules). On the other hand, when the support of the sample is spread out (*e.g.*, Hawaiian basalt), the reduction of active components of

When the support can be significantly shrunk while keeping the residue small (*i.e.*, explaining the data satisfactorily), estimates of the net moment based on the dipolar model obtained by inversion seem to be good. They apparently supersede the measurements by magnetometers as well as by dipole fitting procedures set up at MIT. It is interesting to notice that the magnetization obtained by our inversion procedure, either before or after shrinking the support, often does not resemble the true magnetization, even when it yields correct moment and field. This can be seen on synthetic examples and may be surmised on real data, thereby confirming that recovering the net moment and recovering the magnetization are rather different problems, the latter being
considerably more ill-posed than the former.

One specific difficulty with chondrule-type examples has been
to account for their thickness: they are indeed small spheres and their
3-D character cannot be completely ignored. In order to use the inversion procedure set up in the plane, we investigated the following question.
Assume that the sample has some thickness, but small enough that the
magnetization at a point *i.e.* that it is of the form

The case where the magnetization is flat but spread out on the sample is more difficult. First of all, the computational effort becomes significant, which led us to use the cluster at Inria Sophia Antipolis. We succeeded in obtaining full inversions for the Hawaiian basalt. The residue (approximation error) is moderate but not impressively small, which indicates that we reach the limit of modeling magnetizations by a regular grid of dipoles. However, the computation of the moment compares favorably with estimates previously obtained by a different technique at the MIT lab. Still, using a cluster and two days of computation to obtain a coarse estimate of the net moment of a sample is rather inefficient and calls for new investigations.

We also experimented with an alternative regularization procedure, based on

We now develop new methods in order to estimate the net moment of the magnetization, based on improvements of previously used Fourier techniques, and recently we reformulated the problem with the help of Kelvin transforms. It has been realized that the success of net moment recovery hinges on the ability to extrapolate the measurements. In particular, we managed to considerably improve previous estimates by means of data extension based on dipolar field asymptotics.

In the course of inverting the field map, we singled out magnetizations which are numerically (almost) silent from above though not from below. This illustrates how ill-posed (unstable) the problem is, as theory predicts that no compactly supported magnetization can be exactly silent from above without also being exactly silent from below. Although such magnetizations seem to have small moment and therefore do not endanger the possibility of recovering the net moment,
their existence is certainly an obstacle to inversion of the field map without extra measurements or hypotheses (*e.g.*, measuring from below or
assuming unidirectionality).

In the course of the doctoral work by D. Ponomarev,
the study of the 2D spectral decomposition of the truncated Poisson operator
has been undertaken. It is a simplified version of the relation between the magnetization and the magnetic potential.
We considered several formulations in terms of singular integral equations and matrix Riemann-Hilbert problems, and focused on finding closed-form solutions for various approximations of the Poisson operator in terms of
the ratio between the distance

Lately, Apics became a partner of the ANR project MagLune, dealing with Lunar magnetism, in collaboration with the Geophysics and Planetology Department of Cerege, CNRS, Aix-en-Provence; see section . The research is just starting, and will focus on computing net moments of lunar rock samples collected by NASA.

Collaboration with Laurent Bourgeois (ENSTA ParisTech, Lab. Poems), Elodie Pozzi (Univ. Bordeaux, IMB), Emmanuel Russ (Univ. Grenoble, IJF).

**Generalized Hardy classes**

As we mentioned in section
,
2-D diffusion equations of the form

The study of such Hardy spaces for Lipschitz

**Best constrained analytic approximation**

Several questions about the behavior of solutions to the
bounded extremal problem

This work has been done in collaboration with Stéphane Bila (Xlim, Limoges, France), Hussein Ezzedin (Xlim, Limoges, France), Damien Pacaud (Thales Alenia Space, Toulouse, France), Giuseppe Macchiarella (Politecnico di Milano, Milan, Italy), and Matteo Oldoni (Siae Microelettronica, Milan, Italy).

Filter synthesis is usually performed under the hypothesis that both ports of the filter are loaded on a constant resistive load (usually 50 Ohm). In complex systems, filters are however cascaded with other devices, and end up being loaded, at least at one port, on a frequency-varying load that is not purely resistive. This is for example the case when synthesizing a multiplexer: each filter is then loaded at one of its ports on a common junction. Thus, the load is by construction neither constant with frequency nor purely resistive. Likewise, in an emitter-receiver, the antenna is followed by a filter. Whereas the antenna can usually be regarded as a resistive load at some frequencies, this is far from true on the whole working band. A mismatch between the antenna and the filter, however, causes irremediable power losses, both in emission and transmission. Our goal is therefore to develop a filter synthesis method that allows one to match varying loads on specific frequency bands.

The matching problem of minimizing

where

which accounts for the losslessness of the filter. This problem can be seen as an extended Nevanlinna-Pick interpolation problem, that was considered in
when the interpolation points *open* left half-plane. The method in the last reference does
not extend to imaginary interpolation points, and we
used rather different, differential-topological techniques
to prove that this problem has a unique solution,
which can be computed by continuation. In the setting of multiplexer synthesis, where this result must be applied recursively to each filter,
we showed the existence of a fixed point for the tuning procedure,
based on Brouwer's fixed point theorem. These results were presented at the MTNS , at the plenary session of the Ernsi workshop 2014, and they lie at the heart of the ANR Cocoram project on co-integration of filters and antennas (). Implementation of the continuation algorithm has been done under contract with CNES and yields encouraging results.
Generalizations of the interpolation problem where the monic condition
is relaxed are under study in the framework of co-integration
of filters and antennas.
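The classical (non-extended) Nevanlinna-Pick problem referred to above admits a concrete feasibility test: the data are interpolable by an analytic function bounded by one iff the Pick matrix is positive semidefinite. The sketch below is the classical disk version only; the matching problem requires the extended, boundary-point version:

```python
import numpy as np

def pick_feasible(z, w):
    """Classical Nevanlinna-Pick test on the unit disk: the data
    z_i -> w_i (|z_i| < 1, |w_i| <= 1) admit an analytic interpolant of
    modulus at most one if and only if the Pick matrix
        P_ij = (1 - w_i conj(w_j)) / (1 - z_i conj(z_j))
    is positive semidefinite."""
    z = np.asarray(z, dtype=complex)
    w = np.asarray(w, dtype=complex)
    P = (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))
    return bool(np.min(np.linalg.eigvalsh((P + P.conj().T) / 2)) >= -1e-12)
```

The second assertion below fails because it would violate the Schwarz lemma: an analytic self-map of the disk fixing 0 cannot reach modulus 0.99 at a point of modulus 0.1.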

This work is pursued in collaboration with Thales Alenia Space, Siae Microelettronica, Xlim and under contract with CNES-Toulouse (see section ).

Let *via* assembly of T-junctions.
This makes the problem extremely sensitive to measurement noise. It was also noticed that in practical applications, scattering measurements of the junction are hardly available.

It is therefore natural to consider the following de-embedding problem. Given

the

the coupling geometry of their circuital realization is known,

what can be said about the filter's response? Note that the above assumptions do not bear on the junction. Nevertheless, we showed that the filters' responses are identifiable up to a constant matrix chained at their port nearest to the junction. It was also proved that the uncertainty induced by the chain matrix bears only on the resonant frequency of the last cavity of each filter, as well as on their output coupling. Most of the filters' parameters can therefore be recovered in principle. The approach is constructive and relies on rational approximation of certain scattering parameters, as well as on an extraction procedure similar to Darlington's synthesis. Software development is under way, and experimental studies have started on data provided to us by Thales Alenia Space and by Siae Microelettronica. A mid-term objective is to extend Presto-HF (see Section ) so as to handle de-embedding problems for multiplexers and, more generally, for multi-ports.

This work is performed under contract with CNES-Toulouse and the University of Bilbao. The goal is to help designing amplifiers, in particular to detect instability at an early stage of the design.

Currently, electrical engineers from the University of Bilbao, under contract with CNES (the French Space Agency), use heuristics to detect instability before an amplifying circuit is physically built. Our goal is to set up a rigorously founded algorithm, based on properties of transfer functions of such amplifiers, which belong to particular classes of analytic functions.

In non-degenerate cases, non-linear electrical components can be replaced by their first order approximation when studying stability in the small signal regime. Using this approximation, diodes appear as negative resistors and transistors as current sources controlled by the voltage at certain nodes of the circuit.

Over the last three years, we studied several features of transfer functions of amplifying electronic circuits:

We characterized the class of transfer functions which can be realized with ideal linearized active components, together with standard passive components (resistors, inductors, capacitors and transmission lines). It is exactly the field of rational functions in the complex variable and in the hyperbolic cosines and identity-times-hyperbolic sines of polynomials of degree 2 with real negative roots.

We introduced a realistic notion of stability,
by terming stable a circuit whose transfer function belongs to

We constructed unstable circuits having no pole in the right half-plane, which came as a surprise to our partners.

In order to circumvent these pathological examples, we introduced the realistic hypothesis that there are small inductive and capacitive effects in active components. Our main result is that a realistic circuit without poles on the imaginary axis is unstable if and only if it has poles in the right half-plane. Moreover, there can only be finitely many of them.

This year, we were led to modify our definition of stability,
taking a hint from scattering theory. We say that a transfer function

This is joint work with Nikos Stylianopoulos (Univ. of Cyprus).

We study the asymptotic behavior of weighted orthogonal polynomials on
a bounded simply connected plane domain

where

locally uniformly outside the convex hull of
*on*

When

This is joint work with Maxim Yattselev (Purdue Univ. at Indianapolis, USA).

We proved in
that the normalized counting measure of
poles of best

This contract (reference Inria: 7066, CNES: 127 197/00)
involving CNES, XLIM and Inria, focuses on the development
of synthesis algorithms for

This contract (reference CNES: RS14/TG-0001-019) involving CNES, University of Bilbao (UPV/EHU) and Inria aims at setting up a methodology for testing the stability of amplifying devices. The work at Inria is concerned with the design of frequency optimization techniques to identify the unstable part of the linearized response and analyze the linear periodic components.

This is a research agreement between Inria (Apics and Athena teams) and the German company BESA

Contract (no. 2014-05764) funding the research grant of C. Papageorgakis, see Sections , .

The ANR (Astrid) project COCORAM (Co-design et co-intégration de réseaux d’antennes actives multi-bandes pour systèmes de radionavigation par satellite) started in January 2014. We are associated with three other teams from XLIM (Limoges University), respectively specialized in filter, antenna and amplifier design. The core idea of the project is to work on the co-integration of various microwave devices in the context of GPS satellite systems; in particular, it provides us with an opportunity to work on matching problems (see Section ).

The ANR project MagLune (Magnétisme de la Lune) was approved in July 2014. It involves the Cerege (Centre de Recherche et d’Enseignement de Géosciences de l’Environnement, joint laboratory between Université Aix-Marseille, CNRS and IRD), the IPGP (Institut de Physique du Globe de Paris) and ISTerre (Institut des Sciences de la Terre). Associated with Cerege are Inria (Apics team) and Irphe (Institut de Recherche sur les Phénomènes Hors Équilibre, joint laboratory between Université Aix-Marseille, CNRS and École Centrale de Marseille). The goal of this project (led by geologists) is to understand the past magnetic activity of the Moon, especially to answer the question whether it had a dynamo in the past and which mechanisms were at work to generate it. Apics will participate in the project by providing mathematical tools and algorithms to recover the remanent magnetization of rock samples from the Moon on the basis of measurements of the magnetic field it generates. The techniques described in Section are instrumental for this purpose.

Apics is part of the European Research Network on System Identification (ERNSI) since 1992.

System identification deals with the derivation, estimation and validation of mathematical models of dynamical phenomena from experimental data.

Title: Inverse Magnetization Problems IN GEosciences.

Inria principal investigator: Laurent Baratchart

International Partner (Institution - Laboratory - Researcher):

MIT - Department of Earth, Atmospheric and Planetary Sciences (United States) - Benjamin Weiss

Duration: 2013 - 2015

See details at : http://

The purpose of the associate team IMPINGE is to develop efficient algorithms to recover the magnetization distribution of rock slabs from measurements of the magnetic field above the slab using a SQUID microscope (developed at MIT). The US team also involves a group at Vanderbilt Univ.

**MIT-France seed funding** is a competitive collaborative research
program run
by the Massachusetts Institute of Technology (Cambridge, Ma, USA). Together with
E. Lima and B. Weiss from the Earth and Planetary Sciences dept. at MIT,
Apics obtained two-years support from the above-mentioned program to run a project entitled:
“Development of Ultra-high Sensitivity Magnetometry for Analyzing Ancient Rock Magnetism”

**Cyprus NF grant** was obtained by N. Stylianopoulos (Univ. Cyprus)
to conduct joint research with L. Baratchart, E.B. Saff (Vanderbilt Univ.)
and V. Totik (Univ. Szeged, Hungary). The title of the grant is:
“Orthogonal polynomials in the complex plane: distribution of zeros, strong asymptotics and shape reconstruction”.

Doug Hardin (Vanderbilt Univ., Nashville, USA, Aug 2014)

Benjamin Lanfer (BESA, Munich, Germany, Oct 2014)

Eduardo A. Lima (MIT, Cambridge, USA, Mar 2014)

Moncef Mahjoub (ENIT LAMSIN, Tunis, Tunisia, Jun 2014)

Michael Northington (Vanderbilt Univ., Nashville, USA, Aug 2014)

Yves Rolain (Vrije Universiteit Brussel, Belgium, June 2014)

Maxim Yattselev (Indiana University–Purdue University, Indianapolis, USA, May 2014)

Olga Permiakova, Master 2 Computational Biology - UNSA (5 months), Inverse source problem for electromagnetic fields, with physical applications.

Collaboration under contract with Thales Alenia Space (Toulouse, Cannes, and Paris), CNES (Toulouse), XLIM (Limoges), University of Bilbao (Universidad del País Vasco / Euskal Herriko Unibertsitatea, Spain), BESA company (Munich), Flextronics.

Regular contacts with research groups at UST (Villeneuve d'Ascq), the Universities of Bordeaux-I (Talence), Orléans (MAPMO), Aix-Marseille (CMI-LATP), Nice Sophia Antipolis (Lab. JAD), Grenoble (IJF and LJK), Paris 6 (P. et M. Curie, Lab. JLL), Inria Saclay (Lab. Poems), Cerege-CNRS (Aix-en-Provence), CWI (The Netherlands), MIT (Boston, USA), Vanderbilt University (Nashville, USA), the Steklov Institute (Moscow), Michigan State University (East Lansing, USA), Texas A&M University (College Station, USA), Indiana University–Purdue University (Indianapolis, USA), Politecnico di Milano (Milan, Italy), the University of Trieste (Italy), RMC (Kingston, Canada), the Universities of Leeds (UK), Maastricht (The Netherlands), and Cork (Ireland), Vrije Universiteit Brussel (Belgium), TU-Wien (Austria), TFH-Berlin (Germany), ENIT (Tunis), KTH (Stockholm), the University of Cyprus (Nicosia, Cyprus), the University of Macau (Macau, China), and SIAE Microelettronica (Milan).

The project is involved in the GDR-project AFHP (CNRS), in the ANR (Astrid program) project COCORAM (with XLIM, Limoges, and DGA), in the ANR (Défis de tous les savoirs program) project MagLune (with Cerege, IPGP, ISTerre, Irphe), in a MIT-France collaborative seed funding, in the Associate Inria Team IMPINGE (with MIT, Boston), and in a CSF program (with University of Cyprus).

L. Baratchart was a plenary speaker at Constructive Functions 2014 (June 2014) in Nashville, USA (TN). He was an invited speaker at the Complex Analysis Meeting of the Russian Academy of Sciences (April 2014) in Saint Petersburg, Russia, at the International Conference on Orthogonal Polynomials, Integrable Systems and their Applications (October 2014) in Shanghai, China, and at the conference Foundations of Constructive Mathematics (December 2014) in Montevideo. He visited Vanderbilt University, MIT, the University of Macau and the University of Cyprus. He gave a talk at the seminar of the Université de Bordeaux.

M. Caenepeel gave a talk at the 33rd Benelux Meeting on Systems and Control (The Netherlands) and at the 18th IEEE Workshop on Signal and Power Integrity in Ghent (Belgium), and he presented a poster at the ERNSI meeting in Ostend (Belgium).

S. Chevillard gave a talk at PICOF 2014 (May 2014) in Hammamet, Tunisia, and at Constructive Functions 2014 (June 2014) in Nashville, USA (TN). He was an invited speaker at the “Journée scientifique SMAI-SIGMA 2014” (November 2014) in Paris.

J. Leblond organized an invited session at PICOF 2014.

S. Lefteriu was an invited speaker at the Max Planck Institute and presented a poster at the meeting of the working group GT Identification.

M. Olivi gave a talk at the MTNS 2014 conference in Groningen (The Netherlands).

D. Ponomarev gave a talk at the 10th AIMS Conference on Dynamical Systems, Differential Equations and Applications (July 2014) in Madrid, Spain, at the seminar of the team Analyse, Géométrie, Topologie (AGT), Institut de Mathématiques de Marseille, Aix-Marseille Université (May 2014), and at the seminar of the team Defi, Inria Saclay - École Polytechnique (Nov. 2014).

F. Seyfert gave a talk at MTNS 2014 in Groningen and at IMS 2014 in Tampa, and was invited to give a plenary lecture at the ERNSI meeting in Ostend.

L. Baratchart was a member of the program committee of MTNS (Mathematical Theory of Networks and Systems) 2014, Groningen, The Netherlands.

L. Baratchart is a member of the Editorial Boards of *Constructive Methods and Function Theory* and *Complex Analysis and Operator Theory*.

**Colles**: S. Chevillard gives “Colles” (oral examination sessions) at the Centre International de Valbonne (CIV) (2 hours per week).

PhD in progress: D. Ponomarev, Inverse problems for planar conductivity and Schrödinger PDEs, since Nov. 2012 (advisors: J. Leblond, L. Baratchart).

PhD in progress: M. Caenepeel, The development of models for the design of RF/microwave filters, since Feb. 2013 (advisors: Y. Rolain, M. Olivi, F. Seyfert).

PhD in progress: C. Papageorgakis, Conductivity model estimation, since Oct 2014 (advisors: J. Leblond, M. Clerc, B. Lanfer).

M. Olivi was a referee of the PhD manuscript of P. Vuillemin (Univ. Toulouse) and of the PhD manuscript of F. Cheng (Univ. Lorraine).

J. Leblond was a member of the PhD defense committee of L. Jassionnesse (Univ. Dijon, Nov 2014).

F. Seyfert was a member of the PhD defense committee of Le Ha Vy Nguyen (Univ. Paris Sud, Inria project DISCO).

L. Baratchart was a speaker at “Café in” (Oct. 2014, Inria Sophia-Antipolis-Méditerranée).

J. Leblond is a member of the Committee MASTIC. She was an invited speaker at the seminar associated with the lecture by G. Berry at the Collège de France (Jan. 2014).

M. Olivi is president of the Committee MASTIC (Commission d'Animation et de Médiation Scientifique) https://

S. Chevillard is representative at the “comité de centre” and at the “comité des projets” (Research Center Inria-Sophia).

J. Leblond is an elected member of the “Conseil Scientifique” and of the “Commission Administrative Paritaire” of Inria. She is one of the two researchers in charge of the mission “Conseil et soutien aux chercheurs” within the Research Center.

M. Olivi is responsible for scientific mediation and co-president of the committee MASTIC.