The team develops constructive, function-theoretic approaches to inverse problems arising in modeling and design, in particular for electro-magnetic systems as well as in the analysis of certain classes of signals.

Data typically consist of measurements or desired behaviors. The general thread is to approximate them by families of solutions to the equations governing the underlying system. This leads us to consider various interpolation and approximation problems in classes of rational and meromorphic functions, harmonic gradients, or solutions to more general elliptic partial differential equations (PDE), in connection with inverse potential problems. A recurring difficulty is to control the singularities of the approximants.
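For illustration only, the simplest computational caricature of such data fitting is a linearized least-squares fit of samples by a rational function of prescribed degrees. The helper `fit_rational` below is a hypothetical toy of our own naming, not one of the team's algorithms (which solve genuinely nonlinear, constrained extremal problems):

```python
import numpy as np

def fit_rational(z, f, m, n):
    # Fit p(z)/q(z), deg p <= m, deg q <= n, to samples f at points z by the
    # linearized criterion p(z_k) - f_k q(z_k) = 0, with q normalized so that
    # its constant coefficient is 1.
    A = np.zeros((len(z), m + 1 + n), dtype=complex)
    for j in range(m + 1):
        A[:, j] = z ** j                 # coefficients of p
    for j in range(1, n + 1):
        A[:, m + j] = -f * z ** j        # coefficients of q (minus constant)
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    p = coef[: m + 1]
    q = np.concatenate(([1.0], coef[m + 1:]))
    return p, q
```

On noiseless samples of 1/(1 - 0.5 z) taken on the unit circle, the fit with m = 0, n = 1 recovers the exact coefficients; controlling where the poles of such approximants lie is precisely the recurring difficulty mentioned above.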

The mathematical tools pertain to complex and harmonic analysis, approximation theory, potential theory, system theory, differential topology, optimization and computer algebra. Targeted applications include:

identification and synthesis of analog microwave devices (filters, amplifiers),

non-destructive control from field measurements in medical engineering (source recovery in magneto/electro-encephalography), and paleomagnetism (determining the magnetization of rock samples).

In each case, the endeavor is to develop algorithms resulting in dedicated software.

Within the extensive field of inverse problems, much of the research by Apics
deals with reconstructing solutions of classical elliptic PDEs from their
boundary behavior. Perhaps the simplest example lies with
harmonic identification of a stable linear dynamical system:
the transfer function, which is holomorphic in the right half-plane for a stable system and can thus, in principle, be recovered from its boundary values on the imaginary axis, *e.g.* via the Cauchy formula.

Practice is not nearly as simple, for measurements are pointwise, noisy, and confined to a bandwidth; one is thus led to a two-step approach: first to complete the data into a stable model (step 1), and then to approximate that model by a rational function, *i.e.* to locate the poles of a meromorphic approximant (step 2).

Step 1 relates to extremal
problems and analytic operator theory, see Section .
Step 2 involves optimization, and some Schur analysis
to parametrize transfer matrices of given McMillan degree
when dealing with systems having several inputs and outputs,
see Section .
It also makes contact with the topology of rational functions, in particular
to count
critical points and to derive bounds, see Section . Step 2 raises
further issues in approximation theory regarding the rate of convergence and
the extent to which singularities of the
approximant (*i.e.* its poles) tend to singularities of the
approximated function; this is where logarithmic potential theory
becomes instrumental, see Section .

Applying a realization procedure to the result of step 2 yields an identification procedure from incomplete frequency data which was first demonstrated in to tune resonant microwave filters. Harmonic identification of nonlinear systems around a stable equilibrium can also be envisaged by combining the previous steps with exact linearization techniques from .
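As a hedged sketch of what a realization procedure does, the snippet below builds a diagonal ("modal") state-space realization of a scalar transfer function with simple poles given in partial-fraction form; the function names are ours, and the constrained, matrix-valued realizations actually needed for filters are far more involved.

```python
import numpy as np

def realize_partial_fractions(poles, residues):
    # Modal realization of H(s) = sum_k r_k / (s - p_k):
    # A = diag(poles), B = column of ones, C = row of residues,
    # so that C (sI - A)^{-1} B = H(s).
    A = np.diag(poles)
    B = np.ones((len(poles), 1))
    C = np.array(residues).reshape(1, -1)
    return A, B, C

def transfer(A, B, C, s):
    # Evaluate C (sI - A)^{-1} B at a complex frequency s.
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B))[0, 0]
```

For instance, realizing H(s) = 3/(s+1) + 5/(s+2) this way and evaluating the state-space model on the imaginary axis reproduces H exactly.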

A similar path can be taken to approach design problems in the frequency domain, replacing the measured behavior by some desired behavior. However, describing achievable responses in terms of the design parameters is often cumbersome, and most constructive techniques rely on specific criteria adapted to the physics of the problem. This is especially true of filters, the design of which traditionally appeals to polynomial extremal problems , . Apics contributed to this area the use of Zolotarev-like problems for multi-band synthesis, although we presently favor interpolation techniques in which parameters arise in a more transparent manner, see Section .

The previous example of harmonic identification
quickly suggests a generalization
of itself. Indeed, on identifying *i.e.*, the field) on part of a hypersurface (a curve in 2-D)
encompassing the support of

Inverse potential problems are severely indeterminate because infinitely many
measures within an open set produce the same field outside this set; this phenomenon is called
*balayage* . In the two-step approach
previously described,
we implicitly removed this indeterminacy by requiring in step 1
that the measure
be supported on the boundary (because we seek a function holomorphic
throughout the right half-space), and
by requiring in step 2
that the measure be discrete in the left half-plane (in fact: a sum of
point masses

To recap, the gist of our approach is to approximate boundary data by (boundary traces of) fields arising from potentials of measures with specific support. This differs from standard approaches to inverse problems, where descent algorithms are applied to integration schemes of the direct problem; in such methods, it is the equation which gets approximated (in fact: discretized).

Along these lines, Apics advocates the use of steps 1 and 2 above, along with some singularity analysis, to approach issues of nondestructive control in 2-D and 3-D , , . The team is currently engaged in the generalization to inverse source problems for the Laplace equation in 3-D, to be described further in Section . There, holomorphic functions are replaced by harmonic gradients; applications are to EEG/MEG and inverse magnetization problems in geosciences, see Section .

The approximation-theoretic tools developed by Apics to handle the issues mentioned so far are outlined in Section . In the sections to come, we describe in more detail which problems are considered and which applications are targeted.

By standard properties of conjugate differentials, reconstructing Dirichlet-Neumann boundary conditions
for a function harmonic in a plane domain,
when these conditions are already known on a subset

Another application by the team deals with non-constant conductivity
over a doubly connected domain, in connection with plasma confinement in the tokamak *Tore Supra*
.
The procedure is fast because no numerical integration of
the underlying PDE is needed, as an explicit basis of solutions to the
conjugate Beltrami equation in terms of Bessel functions
was found in this case. Generalizing this approach in a more systematic
manner to free boundary problems of Bernoulli type,
using descent
algorithms based on shape-gradient for such approximation-theoretic
criteria, is an interesting prospect now under study in the team.

The piece of work we just mentioned requires defining and studying Hardy spaces of the conjugate-Beltrami equation, which is an interesting topic by itself. For Sobolev-smooth coefficients of exponent greater than 2, they were investigated in , . The case of the critical exponent 2 is treated in , which apparently provides the first example of well-posedness for the Dirichlet problem in the non-strictly elliptic case: the conductivity may be unbounded or zero on sets of zero capacity and, accordingly, solutions need not be locally bounded. Exponent 2 seems also to be the key to a similar theory on general (rectifiable) domains in the plane, for exponent 2 is all one is left with in general after a conformal transformation of the domain.

The 3-D version of step 1 in Section is another
subject investigated by Apics: to recover a harmonic function
(up to an additive constant) in a ball or a half-space from partial knowledge of its
gradient. This prototypical inverse problem
(*i.e.* inverse to the Cauchy problem for the Laplace equation)
often recurs in electromagnetism. At present, Apics is involved with
solving instances of this inverse problem arising
in two fields, namely medical imaging
*e.g.* for electroencephalography (EEG)
or magneto-encephalography (MEG), and
paleomagnetism (recovery of rock magnetization)
, , see Section . In this connection, we collaborate with two groups of partners:
Athena Inria project-team,
CHU La Timone, and BESA company on the one hand,
Geosciences Lab. at MIT and Cerege CNRS Lab. on the other hand.
The question is considerably more difficult than its 2-D
counterpart, due mainly to the lack of multiplicative structure for harmonic
gradients. Still,
substantial progress has been made over the last years
using methods of harmonic analysis and operator theory.

The team is further concerned with 3-D generalizations and applications to
non-destructive control of step 2 in Section .
A typical problem here is to localize inhomogeneities or defects such as
cracks, sources or occlusions in a planar or 3-dimensional object,
knowing thermal, electrical, or
magnetic measurements on the boundary.
These defects can be expressed as a lack of harmonicity
of the solution to the associated Dirichlet-Neumann problem,
thereby posing an inverse potential problem in order to recover them.
In 2-D, finding an optimal discretization of the
potential in Sobolev norm amounts to solving a best rational approximation
problem, and the question arises as to how the location of the
singularities of the approximant (*i.e.* its poles)
reflects the location of the singularities of the potential
(*i.e.* the defects we seek). This is a fairly deep issue
in approximation theory, to which Apics contributed convergence results
for certain classes of fields
expressed as Cauchy integrals over extremal contours for
the logarithmic potential
, , .
Initial schemes to locate cracks or sources
*via* rational approximation on
planar domains were obtained this way , , . It is remarkable that inverse source problems with finitely many sources
in 3-D balls, or more general algebraic surfaces,
can be approached using these 2-D techniques upon slicing the
domain into planar sections
, .
More precisely, each section cuts out a planar domain, the boundary of which
carries data which can be proved to match an algebraic function. The
singularities of this algebraic function are not located at the 3-D sources,
but are related to them: the section contains a source if and only if some
function of the singularities in that section meets a relative extremum. Using
bisection it is thus possible to determine an extremal place along all sections
parallel to a given plane direction, up to some threshold which has to be
chosen small enough that one does not miss a source. This way, we reduce the
original source problem in 3-D to a sequence of inverse pole and branch-point
problems in 2-D.
This program generates a steady research activity
within Apics, and again applications are sought to medical imaging and
geosciences, see Sections ,
and .
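The scan for an extremal section can be caricatured as a one-dimensional search for the maximizer of a section-wise criterion. The sketch below uses ternary search on a synthetic unimodal criterion peaked at a fictitious source height `h0`; in the actual method the criterion is computed from the 2-D singularities of each planar section.

```python
def ternary_max(f, a, b, tol=1e-8):
    # Ternary search for the maximizer of a unimodal criterion on [a, b].
    while b - a > tol:
        m1 = a + (b - a) / 3.0
        m2 = b - (b - a) / 3.0
        if f(m1) < f(m2):
            a = m1
        else:
            b = m2
    return 0.5 * (a + b)

def crit(h, h0=0.37):
    # Synthetic section-wise criterion, sharply peaked at the source height h0.
    return 1.0 / (1e-3 + (h - h0) ** 2)
```

Running `ternary_max(crit, 0.0, 1.0)` recovers the peak location `h0` to high accuracy; in practice the tolerance must be chosen small enough that no source is missed, as noted above.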

Conjectures may be raised on the behavior of optimal potential discretization in 3-D, but answering them is an ambitious program still in its infancy.

Through contacts with CNES (French space agency),
members of the team became involved in identification and tuning
of microwave electromagnetic filters used in space telecommunications,
see Section . The initial problem was
to recover, from band-limited frequency measurements, physical
parameters of the device under examination.
The latter consists of interconnected dual-mode resonant cavities with
negligible loss, hence its scattering matrix is modeled by a

This is where system theory comes into play, through the
so-called *realization* process mapping
a rational transfer function in the frequency domain
to a state-space representation of the underlying system
of linear differential equations in the time domain.
Specifically, realizing the scattering matrix
allows one to construct
a virtual electrical network, equivalent to the filter,
the parameters of which mediate between the frequency response
and the
geometric characteristics of the cavities (*i.e.* the tuning parameters).

Hardy spaces provide a framework to transform this ill-posed issue into a series of regularized analytic and meromorphic approximation problems. More precisely, the procedure sketched in Section goes as follows:

infer from the pointwise boundary data in the bandwidth
a stable transfer function (*i.e.* one which is holomorphic
in the right half-plane), which may be infinite-dimensional
(numerically: of high degree). This is done by solving
a problem analogous to

A stable rational approximation of appropriate degree to the model obtained in the previous step is performed. For this, a descent method on the compact manifold of inner matrices of given size and degree is used, based on an original parametrization of stable transfer functions developed within the team , .

Realizations of this rational approximant are computed. To be useful, they must satisfy certain constraints imposed by the geometry of the device. These constraints typically come from the coupling topology of the equivalent electrical network used to model the filter. This network is composed of resonators, coupled according to some specific graph. This realization step can be recast, under appropriate compatibility conditions , as solving a zero-dimensional multivariate polynomial system. To tackle this problem in practice, we use Gröbner basis techniques and continuation methods which team up in the Dedale-HF software (see Section ).
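A toy instance of such a zero-dimensional system, solved via a Gröbner basis with SymPy, may fix ideas; the system below is hypothetical and stands in for the (much larger) compatibility conditions arising from an actual coupling topology, which Dedale-HF handles.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Hypothetical zero-dimensional polynomial system: two equations, two unknowns,
# finitely many solutions.
system = [x**2 + y**2 - 2, x - y]
# A lex Groebner basis triangularizes the system ...
G = sp.groebner(system, x, y, order='lex')
# ... after which all (finitely many) solutions can be enumerated.
solutions = sp.solve(list(G), [x, y], dict=True)
```

Here the lex basis reduces to a triangular set, from which the two real solutions (1, 1) and (-1, -1) are read off; continuation methods then track such solutions as the filter characteristics vary.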

Let us mention that extensions of classical coupling matrix theory to frequency-dependent (reactive) couplings have been carried out in recent years for wide-band design applications.

Apics also investigates issues pertaining to
design rather than identification.
Given the topology of the filter,
a basic problem in this connection is to find the optimal response
subject to specifications
that bear on rejection, transmission and group delay of the
scattering parameters.
Generalizing the classical approach based on Chebyshev polynomials
for single band
filters, we recast the problem of multi-band response synthesis
as a generalization of the classical Zolotarev min-max problem
for rational functions .
Thanks to quasi-convexity, the latter
can be solved efficiently using iterative methods relying on linear
programming. These were implemented in the software
easy-FF. Currently, the team is engaged
in the synthesis of more complex microwave devices
like multiplexers and routers, which connect several
filters through wave guides.
Schur analysis plays an important role here, because
scattering matrices of passive systems are of Schur type
(*i.e.* contractive in the stability region).
The theory originates with the work of I. Schur ,
who devised a recursive test to
check for contractivity of a holomorphic function in the disk.
The so-called Schur parameters of a function
may be viewed as Taylor coefficients for the hyperbolic metric of the disk, and
the fact that Schur functions are contractions for that metric lies at the
root of Schur's test.
Generalizations thereof turn out to be efficient to parametrize
solutions to contractive interpolation problems .
Dwelling on this, Apics contributed
differential parametrizations (atlases of charts) of lossless
matrix functions , , which
are fundamental to our rational approximation
software RARL2 (see Section ).
Schur analysis is also instrumental to approach de-embedding issues,
and provides one with considerable
insight into the so-called matching problem. The latter consists in
maximizing the power a multiport can pass to a given load, and for
reasons of efficiency it
is all-pervasive in microwave and electric network design, *e.g.* of
antennas, multiplexers, wifi cards and more. It can be viewed as a
rational approximation problem in the hyperbolic metric, and the team
presently deals with this hot topic using
contractive interpolation with constraints on boundary peak points,
within the framework of the (defense funded) ANR COCORAM,
see Sections and .
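Schur's recursive test can be sketched on truncated Taylor coefficients: at each step one reads off a Schur parameter as the constant term and applies the hyperbolic shift, declaring failure as soon as a parameter exceeds one in modulus. The implementation below is a simplified illustration on finite series; all names are of our choosing.

```python
def ser_mul(a, b):
    # Product of truncated power series (truncated to the longer input).
    n = max(len(a), len(b))
    out = [0j] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

def ser_inv(a):
    # Reciprocal of a truncated power series with a[0] != 0.
    b = [1 / a[0]]
    for n in range(1, len(a)):
        s = sum(a[k] * b[n - k] for k in range(1, n + 1))
        b.append(-s / a[0])
    return b

def schur_test(c, tol=1e-9):
    # Schur recursion on Taylor coefficients c of f: parameter g = f(0),
    # next function (f - g) / (z (1 - conj(g) f)).  Returns the parameters
    # and whether none of them exceeded 1 in modulus (up to truncation).
    c = [complex(v) for v in c]
    params = []
    while c:
        g = c[0]
        params.append(g)
        if abs(g) > 1 + tol:
            return params, False
        if abs(abs(g) - 1) <= tol:
            return params, True          # unimodular constant: recursion stops
        num = [c[0] - g] + c[1:]
        den = [1 - g.conjugate() * c[0]] + [-g.conjugate() * v for v in c[1:]]
        c = ser_mul(num, ser_inv(den))[1:]   # divide by z: drop leading zero
    return params, True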

In recent years,
our attention was driven by CNES and UPV (Bilbao)
to questions about stability of high-frequency amplifiers,
see Section .
Contrary to previously discussed devices, these are *active* components.
The response of an amplifier can be linearized around a
set of primary currents and voltages,
and then admittances of the corresponding electrical network
can be computed at various frequencies, using the so-called harmonic
balance method.
The initial goal is to check for stability of the linearized model,
so as to ascertain existence of a well-defined working state.
The network is composed of lumped electrical elements, namely
inductors, capacitors, negative *and* positive resistors,
transmission lines, and controlled current sources.
Our research so far has focused on describing the algebraic structure
of admittance functions, so as to set up a function-theoretic framework
where the two-steps approach outlined in Section
can be put to work. The main discovery is that
the unstable part of each partial transfer function is rational and can
be computed by analytic projection,
see Section . We now start investigating the
linearized
harmonic transfer-function around a periodic cycle, to check for stability
under inputs that are not necessarily small. This generalization
generates both doctoral and postdoctoral work by new students in the team.
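A toy analog of such analytic projection, transposed to the unit disk for simplicity: sampling a function on the circle and keeping only its negative-frequency Fourier coefficients isolates the part analytic outside the disk, i.e. the component carrying the pole inside. The FFT sketch below is only illustrative; the team's actual setting is the imaginary axis and the half-plane.

```python
import numpy as np

N = 256
z = np.exp(2j * np.pi * np.arange(N) / N)      # samples on the unit circle

# One pole inside the disk (z = 0.5) and one outside (z = 2).
f = 1.0 / (z - 0.5) + 1.0 / (z - 2.0)

c = np.fft.fft(f) / N                          # Fourier coefficients of f
k = np.fft.fftfreq(N, d=1.0 / N).astype(int)   # integer frequency indices

c_neg = np.where(k < 0, c, 0)                  # keep negative frequencies only
f_inner = np.fft.ifft(c_neg * N)               # part analytic outside the disk
```

Up to exponentially small aliasing, `f_inner` coincides with 1/(z - 0.5) on the sample points: the projection has extracted exactly the component whose pole lies inside the disk.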

To find an analytic function

Here *a priori*
assumptions on
the behavior of the model off

To fix terminology, we refer to *bounded extremal problem*.
As shown in , ,
,
the solution to this convex
infinite-dimensional optimization problem can be obtained
when

(

The case

Various modifications of

The analog of Problem *seek the inner
boundary*, knowing it is a level curve of the solution.
In this case, the Lagrange parameter indicates
how to deform the inner contour in order to improve
data fitting.
Similar topics are discussed in Section for more general equations than the Laplacian, namely
isotropic conductivity equations of the form

Another instance of problem

Though originally considered in dimension 2,
Problem

When

On the ball, the analog
of Problem

When *Hardy-Hodge* decomposition,
allowing us to express a *i.e.* those generating no field
in the upper half space) .

Just like solving problem

Problem

Companion to problem

Note that

The techniques set forth in this section are used to solve
step 2 in Section and they are instrumental to
approach inverse boundary value problems
for the Poisson equation

We put

A natural generalization of problem

(

Only for

The case where *stable* rational
approximant to *not* be unique.

The former Miaou project (predecessor of Apics) designed a dedicated
steepest-descent algorithm
for the case *local minimum* is
guaranteed; until now it seems to be the only procedure meeting this
property. This gradient algorithm proceeds
recursively with respect to *critical points* of lower degree
(as is done by the RARL2 software, Section ).

In order to establish global convergence results, Apics has undertaken a
deeper study of the number and nature of critical points
(local minima, saddle points...), in which
tools from differential topology and
operator theory team up with classical interpolation theory
, .
Based on this work,
uniqueness or asymptotic uniqueness of the approximant
was proved for certain classes of functions like
transfer functions of relaxation
systems (*i.e.*
Markov functions) and more
generally Cauchy integrals over hyperbolic geodesic arcs .
These are the only results of this kind. Research by Apics on this topic
remained dormant for a while for reasons of opportunity,
but revisiting the work in higher dimension is
a worthy and timely endeavor today. Meanwhile,
an analog to AAK theory
was carried out for

A common
feature to the above-mentioned problems
is that critical point equations
yield non-Hermitian orthogonality relations for the denominator
of the approximant. This stresses connections with interpolation,
which is a standard way to build approximants,
and in many respects best or near-best rational approximation
may be regarded as a clever manner to pick interpolation points.
This was exploited in , ,
and is used in an essential manner to assess the
behavior of poles of best approximants to functions with branched
singularities,
which is of particular interest for inverse source problems
(*cf.* Sections
and ).

In higher dimensions, the analog of Problem

Besides,
certain constrained rational approximation problems, of special interest
in identification
and design of passive systems, arise when putting additional
requirements on the approximant, for instance that it should be smaller than 1
in modulus (*i.e.* a Schur function). In particular, Schur interpolation
lately received renewed attention
from the team, in connection with matching problems.
There, interpolation data are subject to
a well-known compatibility condition (positive definiteness of the so-called
Pick matrix), and the main difficulty is to put interpolation
points on the boundary of

Matrix-valued approximation is necessary to handle systems with several
inputs and outputs but it generates additional difficulties
as compared to scalar-valued approximation,
both theoretically and algorithmically. In the matrix case,
the McMillan degree (*i.e.* the degree of a minimal realization in
the System-Theoretic sense) generalizes the usual notion of degree
for rational functions. For instance when poles are simple, the McMillan degree is the sum of the ranks of the residues.
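This rank characterization is easy to check numerically. In the hypothetical example below, a 2x2 transfer matrix with two simple poles, whose residues have ranks 1 and 2, has McMillan degree 3:

```python
import numpy as np

# G(s) = R1/(s - p1) + R2/(s - p2), simple poles p1 != p2, matrix residues:
R1 = np.array([[1.0, 2.0],
               [2.0, 4.0]])     # rank 1 (second row is twice the first)
R2 = np.eye(2)                  # rank 2
mcmillan_degree = np.linalg.matrix_rank(R1) + np.linalg.matrix_rank(R2)
```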

The basic problem that we consider now goes as follows:
*let $\mathcal{F}\in {\left({H}^{2}\right)}^{m\times l}$ and $n$ an
integer; find a rational matrix of size $m\times l$ without
poles in the unit disk and of McMillan degree at most $n$ which is nearest possible
to $\mathcal{F}$ in ${\left({H}^{2}\right)}^{m\times l}$.*
Here the

The scalar approximation algorithm derived in
and mentioned in
Section
generalizes to
the matrix-valued situation . The
first difficulty here is to parametrize
inner matrices (*i.e.* matrix-valued functions
analytic in the unit disk and unitary on the unit circle) of
given McMillan degree

Difficulties relative to multiple local minima of course arise in
the matrix-valued case as well, and deriving criteria that
guarantee uniqueness is even
more difficult than in the scalar case. The case of rational functions
of degree

Let us stress that RARL2 seems to be the only algorithm handling rational approximation in the matrix case that demonstrably converges to a local minimum while meeting stability constraints on the approximant. It remains a linchpin of many developments by Apics on frequency optimization and design.

We refer here to the behavior of poles of best
meromorphic approximants, in the

Generally speaking in approximation theory, assessing the
behavior of poles of rational approximants is essential
to obtain error rates as the degree goes large, and to tackle
constructive issues like
uniqueness. However, as explained in Section ,
the original twist by Apics is to consider this issue also as a means
to extract information on
singularities of the solution to a
Dirichlet-Neumann problem.
The general theme is thus: *how do the singularities
of the approximant reflect those of the approximated function?*
This approach to inverse problem for the 2-D Laplacian turns out
to be attractive when singularities
are zero- or one-dimensional (see Section ). It can be used
as a computationally cheap
initial condition for more precise but much heavier
numerical optimizations which often do not even converge
unless properly initialized.
As regards crack detection or source recovery, this approach
boils down to
analyzing the behavior of best meromorphic
approximants of given pole cardinality to a function with branch points, which is the prototype of
a polar singular set.
For piecewise analytic cracks, or in the case of sources, we were able to
prove (, , ),
that the poles of the
approximants accumulate, when the degree goes large,
to some extremal cut of minimum weighted
logarithmic capacity connecting
the singular points of the crack, or the sources
.
Moreover, the asymptotic density
of the poles turns out to be the Green equilibrium distribution
on this cut in

The case of two-dimensional singularities is still an outstanding open problem.

It is remarkable that inverse source problems inside a sphere or an ellipsoid in 3-D can be approached with such 2-D techniques, as applied to planar sections, see Section . The technique is implemented in the software FindSources3D, see Section .

In addition to the above-mentioned research activities, Apics develops and maintains a number of long-term software tools that either implement and illustrate the effectiveness of the algorithms theoretically developed by the team, or serve as tools for further research by team members. We briefly present the most important of them.

Scientific Description

Dedale-HF consists of two parts: a database of coupling topologies and a dedicated predictor-corrector code. Roughly speaking, each reference file of the database contains, for a given coupling topology, the complete solution to the coupling matrix synthesis problem (C.M. problem for short) associated with particular filtering characteristics. The latter is then used as a starting point for a predictor-corrector integration method that computes the solution to the C.M. problem corresponding to the user-specified filter characteristics. The reference files are computed off-line using Gröbner basis techniques or numerical techniques based on the exploration of a monodromy group. The use of such continuation techniques, combined with an efficient implementation of the integrator, drastically reduces the computational time.
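The predictor-corrector principle can be illustrated on a one-dimensional caricature: tracking a root of F(x, t) = x^2 - t as the parameter t moves, with an Euler predictor followed by Newton corrections. (Dedale-HF of course works on much larger polynomial systems, with monodromy considerations; this sketch only conveys the continuation idea.)

```python
def continuation(F, dFdx, dFdt, x, t0, t1, steps=100, newton_iters=5):
    # Track the solution path of F(x, t) = 0 from t0 to t1.
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        # Euler predictor: along the path, dx/dt = -F_t / F_x.
        x = x - dFdt(x, t) / dFdx(x, t) * dt
        t += dt
        # Newton corrector at the new parameter value.
        for _ in range(newton_iters):
            x -= F(x, t) / dFdx(x, t)
    return x

F = lambda x, t: x * x - t
x1 = continuation(F, lambda x, t: 2 * x, lambda x, t: -1.0, 0.5, 0.25, 1.0)
```

Starting from the known solution x = 0.5 at t = 0.25, the tracked root at t = 1 is x = 1, as expected from x(t) = sqrt(t).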

Dedale-HF has been licensed to, and is currently used by, TAS-Espana.

Functional Description

Dedale-HF is a software tool dedicated to solving the coupling matrix synthesis problem exhaustively, in reasonable time, for the filtering community. Given a coupling topology, the coupling matrix synthesis problem consists in finding all possible electromagnetic coupling values between resonators that yield a realization of given filter characteristics. Solving it is crucial during the design step of a filter, in order to derive its physical dimensions, as well as during the tuning process, where coupling values need to be extracted from frequency measurements.

Participant: Fabien Seyfert

Contact: Fabien Seyfert

FindSources3D-bolis

Keywords: Health - Neuroimaging - Visualization - Compilers - Medical Image Processing

Functional Description

FindSources3D is a software program dedicated to the resolution of inverse source problems in electroencephalography (EEG). From pointwise measurements of the electrical potential taken by electrodes on the scalp, FindSources3D estimates pointwise dipolar current sources within the brain in a spherical model.

After a first “cortical mapping” data-transmission step, it makes use of best rational approximation on 2-D planar cross-sections and of the software RARL2 in order to locate singularities. From those planar singularities, the 3-D sources are estimated in a last step.

This version of FindSources3D provides a modular, ergonomic, accessible and interactive platform with a convenient graphical interface: a tool that can be distributed and used for EEG medical imaging. Modularity is now granted (using the tools dtk and Qt, with compiled Matlab libraries). It offers a detailed visualization of the data and tuning parameters, of the processing steps, and of the computed results (using VTK).

Participants: Juliette Leblond, Maureen Clerc Gallagher, Théodore Papadopoulo, Jean-Paul Marmorat and Nicolas Schnitzler

Contact: Juliette Leblond

URL: http://

Scientific Description

For the matrix-valued rational approximation step, Presto-HF relies on RARL2. Constrained realizations are computed using the Dedale-HF software. As a toolbox, Presto-HF has a modular structure, which allows one, for example, to include some of its building blocks in existing software.

The delay compensation algorithm is based on the following assumption: far off the pass-band, one can reasonably expect a good approximation of the rational components of S11 and S22 by the first few terms of their Taylor expansion at infinity, that is, by a low-degree polynomial in 1/s. Using this idea, a sequence of quadratic convex optimization problems is solved in order to obtain appropriate compensations. To check the validity of the assumption, one has to measure the filter on a larger band, typically three times the pass-band.
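A hypothetical numerical caricature of this delay-compensation idea: synthetic off-band data with a known delay tau0 are generated, and the delay is estimated by scanning candidate values of tau and fitting, for each one, a degree-1 polynomial in 1/s by linear least squares. (Presto-HF itself solves quadratic convex problems rather than scanning; all names and numbers below are illustrative.)

```python
import numpy as np

# Synthetic off-band data: true delay tau0 times a rational tail a + b/s.
tau0 = 0.7
w = np.linspace(10.0, 30.0, 200)        # frequencies far off the pass-band
s = 1j * w
S11 = np.exp(-2 * tau0 * s) * (0.9 + 0.3 / s)

def residual(tau):
    # Remove a candidate delay, then fit a + b/s by linear least squares.
    g = S11 * np.exp(2 * tau * s)
    A = np.stack([np.ones_like(s), 1 / s], axis=1)
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return np.linalg.norm(A @ coef - g)

taus = np.linspace(0.0, 1.5, 301)
tau_hat = taus[np.argmin([residual(t) for t in taus])]
```

The fit residual essentially vanishes only at the true delay, so the scan recovers tau0 up to the grid resolution.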

This toolbox has been licensed to, and is currently used by Thales Alenia Space in Toulouse and Madrid, Thales airborne systems and Flextronics (two licenses). XLIM (University of Limoges) is a heavy user of Presto-HF among the academic filtering community and some free license agreements have been granted to the microwave department of the University of Erlangen (Germany) and the Royal Military College (Kingston, Canada).

Functional Description

Presto-HF is a Matlab-based toolbox dedicated to the identification of low-pass parameters of microwave filters, developed to enable the industrial transfer of our methods. It allows one to run the following algorithmic steps, either individually or in a single stroke:

• Determination of delay components caused by the access devices (automatic reference plane adjustment),

• Automatic determination of an analytic completion, bounded in modulus for each channel,

• Rational approximation of fixed McMillan degree,

• Determination of a constrained realization.

Participants: Fabien Seyfert, Jean-Paul Marmorat and Martine Olivi

Contact: Fabien Seyfert

Réalisation interne et Approximation Rationnelle L2

Scientific Description

The method is a steepest-descent algorithm. A parametrization of MIMO systems is used, which ensures that the stability constraint on the approximant is met. The implementation, in Matlab, is based on state-space representations.

RARL2 performs the rational approximation step in the software tools PRESTO-HF and FindSources3D. It is distributed under a particular license, allowing unlimited usage for academic research purposes. It was released to the universities of Delft and Maastricht (the Netherlands), Cork (Ireland), Brussels (Belgium), Macao (China) and BITS-Pilani Hyderabad Campus (India).

Functional Description

RARL2 is a software for rational approximation. It computes a stable rational L2-approximation of specified order to a given L2-stable (L2 on the unit circle, analytic in the complement of the unit disk) matrix-valued function. This can be the transfer function of a multivariable discrete-time stable system. RARL2 takes as input either:

• its internal realization,

• its first N Fourier coefficients,

• discretized (uniformly distributed) values on the circle. In this case, a least-square criterion is used instead of the L2 norm.

It thus performs model reduction in the first and second cases, and frequency-data identification in the third. For band-limited frequency data, it may be necessary to infer the behavior of the system outside the bandwidth before performing rational approximation.

An appropriate Möbius transformation allows one to use the software for continuous-time systems as well.

Participants: Jean-Paul Marmorat and Martine Olivi

Contact: Martine Olivi

Keywords: Numerical algorithm - Supremum norm - Curve plotting - Remez algorithm - Code generator - Proof synthesis

Functional Description

Sollya is an interactive tool where the developers of mathematical floating-point libraries (libm) can experiment before actually developing code. The environment is safe with respect to floating-point errors, i.e. the user precisely knows when rounding errors or approximation errors happen, and rigorous bounds are always provided for these errors.

Among other features, it offers a fast Remez algorithm for computing polynomial approximations of real functions and also an algorithm for finding good polynomial approximants with floating-point coefficients to any real function. As well, it provides algorithms for the certification of numerical codes, such as Taylor Models, interval arithmetic or certified supremum norms.
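To fix ideas on the Remez exchange (in ordinary floating point, with none of Sollya's rigor or certified bounds), a bare-bones single-point-exchange sketch for polynomial minimax approximation might look as follows; all names are ours, and this is not Sollya's implementation.

```python
import numpy as np

def remez(f, n, a, b, iters=30, gridsize=4000):
    # Initial reference: n+2 Chebyshev points mapped to [a, b].
    k = np.arange(n + 2)
    x = np.sort(0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * k / (n + 1)))
    xs = np.linspace(a, b, gridsize)
    for _ in range(iters):
        # Solve  sum_j c_j x_i^j + (-1)^i E = f(x_i)  for coefficients c
        # and the levelled (equioscillating) error E.
        A = np.column_stack([np.vander(x, n + 1, increasing=True),
                             (-1.0) ** np.arange(n + 2)])
        sol = np.linalg.solve(A, f(x))
        c, E = sol[:-1], sol[-1]
        err = f(xs) - np.polyval(c[::-1], xs)
        i = np.argmax(np.abs(err))
        x_new, e_new = xs[i], err[i]
        # Single-point exchange, keeping the error signs alternating.
        signs = np.sign(f(x) - np.polyval(c[::-1], x))
        j = np.searchsorted(x, x_new)
        if j == 0:
            r = 0
        elif j >= n + 2:
            r = n + 1
        else:
            r = j - 1 if signs[j - 1] == np.sign(e_new) else j
        if signs[r] == np.sign(e_new):
            x[r] = x_new
    return c, E, np.max(np.abs(err))

c, E, max_err = remez(np.exp, 2, 0.0, 1.0)
```

At convergence the maximal error on a fine grid agrees with the levelled error |E|, the hallmark of equioscillation.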

It is available as a free software under the CeCILL-C license.

Participants: Sylvain Chevillard, Christoph Lauter, Mioara Joldes and Nicolas Jourdan

Partners: CNRS - ENS Lyon - UCBL Lyon 1

Contact: Sylvain Chevillard

Application domains are naturally linked to the problems described in Sections and . By and large, they split into a systems-and-circuits part and an inverse-source-and-boundary-problems part, united under a common umbrella of function-theoretic techniques as described in Section .

Generally speaking, inverse potential problems, similar to the one appearing in Section , occur naturally in connection with systems governed by Maxwell's equations in the quasi-static approximation regime. In particular, they arise in magnetic reconstruction issues. A specific application is to geophysics, which led us to form the Inria Associate Team “Impinge” (Inverse Magnetization Problems IN GEosciences) together with MIT and Vanderbilt University. A recent collaboration with Cerege (CNRS, Aix-en-Provence), in the framework of the ANR-project MagLune, completes this picture, see Section .

To set up the context, recall that the Earth's geomagnetic field is generated by convection of the liquid metallic core (geodynamo) and that rocks become magnetized by the ambient field as they are formed or after subsequent alteration. Their remanent magnetization provides records of past variations of the geodynamo, which are used to study important processes in Earth sciences such as the motion of tectonic plates and geomagnetic reversals. Rocks from Mars, the Moon, and asteroids also contain remanent magnetization, which indicates the past presence of core dynamos. Magnetization in meteorites may even record fields produced by the young Sun and the protoplanetary disk, which may have played a key role in solar system formation.

For a long time, paleomagnetic techniques were only capable of analyzing bulk samples and of computing their net magnetic moment. The development of SQUID microscopes has recently extended the spatial resolution to sub-millimeter scales, raising new physical and algorithmic challenges. The associate team Impinge aims at tackling them, experimenting with the SQUID microscope set up in the Paleomagnetism Laboratory of the department of Earth, Atmospheric and Planetary Sciences at MIT. Typically, pieces of rock are sanded down to a thin slab, and the magnetization has to be recovered from the field measured on a planar region at small distance from the slab.

Mathematically speaking, both the inverse source problems for EEG from Section and the inverse magnetization problems described presently amount to recovering the (3-D valued) quantity

outside the volume

Another timely instance of inverse magnetization problems lies with geomagnetism. Satellites orbiting the Earth measure the magnetic field at many points, and nowadays it is a challenge to extract global information from those measurements. In collaboration with C. Gerhards from the University of Vienna, Apics has started to work on the problem of separating the magnetic field due to the magnetization of the globe's crust from the magnetic field due to convection in the liquid metallic core. The techniques involved are variants, in a spherical context, of those developed within the Impinge associate team for paleomagnetism, see Section .

This work is conducted in collaboration with Maureen Clerc and Théo Papadopoulo from the Athena EPI.

Solving overdetermined Cauchy problems for the Laplace equation on a
spherical layer (in 3-D) in order to extrapolate
incomplete data (see Section ) is
a necessary
ingredient of the team's approach to inverse source problems, in particular
for applications to EEG, see . Indeed, the latter involves propagating the
initial conditions through several layers of different conductivities,
from the boundary shell
down to the center of the domain where the
singularities (*i.e.* the sources) lie.
Once propagated
to the innermost sphere, it turns out that traces of the
boundary data on 2-D cross sections coincide
with analytic functions with branched singularities
in the slicing plane.
The singularities are
related to the actual location of the sources: their moduli
reach a maximum precisely when the slicing plane contains one of the sources. Hence we are
back to the 2-D framework of Section ,
and recovering these singularities
can be performed *via* best rational approximation.
The goal is to produce a fast and sufficiently accurate
initial guess on the number
and location of the sources in order to run heavier
descent algorithms on the direct problem, which are more precise but
computationally costly and often
fail to converge if not properly initialized. Our belief
is that such a localization process can add a geometric, valuable piece of
information to the standard temporal analysis of EEG signal records.

Numerical experiments obtained with our software FindSources3D give very good results on simulated data and we are now engaged in the process of handling real experimental data (see Sections and ), in collaboration with the Athena team at Inria Sophia Antipolis, neuroscience teams in partner-hospitals (la Timone, Marseille), and the BESA company (Munich).

This is joint work with Stéphane Bila (XLIM, Limoges).

One of the best training grounds for function-theoretic applications by the team is the identification and design of physical systems whose performance is assessed frequency-wise. This is the case of electromagnetic resonant systems which are of common use in telecommunications.

In space telecommunications (satellite transmissions), constraints specific to on-board technology lead to the use of filters with resonant cavities in the microwave range. These filters serve multiplexing purposes (before or after amplification), and consist of a sequence of cylindrical hollow bodies, magnetically coupled by irises (orthogonal double slits). The electromagnetic wave that traverses the cavities satisfies the Maxwell equations, forcing the tangent electrical field along the body of the cavity to be zero. A deeper study of the Helmholtz equation shows that an essentially discrete set of wave vectors is selected. In the considered range of frequency, the electrical field in each cavity can be decomposed along two orthogonal modes, perpendicular to the axis of the cavity (other modes are far off in the frequency domain, and their influence can be neglected).

Near the resonance frequency, a good approximation to the Helmholtz equations is given by a second order differential equation. Thus, one obtains an electrical model of the filter as a sequence of electrically-coupled resonant circuits, each circuit being modeled by two resonators, one per mode, the resonance frequency of which represents the frequency of a mode, and whose resistance accounts for electric losses (surface currents) in the cavities.

This way, the filter can be seen as a two-port network, when
plugged onto a resistor at one end and fed with some potential at the other end.
One is now
interested in the power which is transmitted and reflected. This leads
one to define a
scattering matrix

In fact, resonance is not studied via the electrical model,
but via a low-pass
equivalent circuit obtained upon linearizing near the central frequency, which is no
longer
conjugate symmetric (*i.e.* the underlying system may no longer
have real
coefficients) but whose degree is divided by 2 (8 in the example).

In short, the strategy for identification is as follows:

measuring the scattering matrix of the filter near the optimal frequency over twice the pass-band (which is 80 MHz in the example).

Solving bounded extremal problems for the transmission and the reflection (the modulus of the response being respectively close to 0 and 1 outside the measurement interval, cf. Section ) in order to get a model for the scattering matrix as an analytic matrix-valued function. This provides us with a scattering matrix known to be close to a rational matrix of order roughly 1/4 of the number of data points.

Approximating this scattering matrix by a true rational transfer-function of appropriate degree (8 in this example) via the Endymion or RARL2 software (cf. Section ).

A state space realization of

Finally one builds a realization of the approximant and looks for a change of variables that eliminates non-physical couplings. This is obtained by using algebraic solvers and continuation algorithms on the group of orthogonal complex matrices (symmetry forces this type of transformation).

The final approximation is of high quality. This can be interpreted as
a confirmation of the linearity assumption on the system:
the relative

The above considerations are valid for a large class of filters. These developments have also been used for the design of non-symmetric filters, which are useful for the synthesis of repeating devices.

The team further investigates problems relative to the design of optimal responses for microwave devices. The resolution of a quasi-convex Zolotarev problem was proposed, in order to derive guaranteed optimal multi-band filter responses subject to modulus constraints . This generalizes the classical single-band design techniques based on Chebyshev polynomials and elliptic functions. The approach relies on the fact that the modulus of the scattering parameter
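For context, the classical single-band Chebyshev prototype that this work generalizes has the textbook equiripple modulus response sketched below (degree and ripple parameter are arbitrary illustration choices):

```python
import numpy as np

def chebyshev_response(w, n=4, eps=0.1):
    """|S21(w)|^2 = 1 / (1 + eps^2 T_n(w)^2): equiripple in the pass-band [-1, 1]."""
    w = np.asarray(w, dtype=float)
    inside = np.cos(n * np.arccos(np.clip(w, -1, 1)))          # T_n for |w| <= 1
    outside = np.cosh(n * np.arccosh(np.maximum(np.abs(w), 1)))  # T_n for |w| > 1
    Tn = np.where(np.abs(w) <= 1, inside, outside)
    return 1.0 / (1.0 + eps**2 * Tn**2)
```

The Zolotarev approach replaces the single interval [-1, 1] by several pass- and stop-bands while keeping guaranteed optimality of the modulus.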

The filtering function appears to be the ratio of two polynomials

The relative simplicity of the derivation of a filter's response, under
modulus constraints, owes much to the possibility of
forgetting about Feldtkeller's equation and expressing all design constraints
in terms of the filtering function. This is no longer the case when
considering the synthesis

Through contacts with CNES (Toulouse) and UPV (Bilbao),
Apics got additionally involved
in the design of amplifiers which, unlike filters, are active devices.
A prominent issue here is stability. Some twenty years back, it was not
possible to simulate unstable responses, and only after building a device
could one detect instability. The advent of so-called *harmonic balance*
techniques, which compute steady state responses of linear elements in
the frequency domain and look for a periodic state in the time domain of
a network connecting these linear elements *via*
static non-linearities, made it possible to compute the harmonic response
of a (possibly nonlinear and unstable) device .
This has had tremendous impact on
design, and there is a growing demand for software analyzers.
The team is also becoming active in this area.

In this connection, there are two types of stability involved. The first is stability of a fixed
point around which the linearized transfer function
accounts for small signal amplification. The second is stability of a
limit cycle which is reached when the input signal is no longer small
and truly nonlinear amplification is attained
(*e.g.* because of saturation).
Work by the team so far has been concerned with the first type of stability,
and emphasis is put on defining and extracting the “unstable part” of the response, see Section . The stability check for
limit cycles is now under investigation.

This section is concerned with inverse problems for 3-D Poisson-Laplace equations, among which are source recovery issues. Though the geometrical settings differ in Sections and , the characterization of silent sources (those giving rise to a vanishing field) is a problem common to both cases. The latter has been resolved in the magnetization setup for thin slabs . The case of volumetric distributions is currently being investigated, starting with magnetization distributions on closed surfaces, to which the general volumetric case can be reduced by balayage.

This work is carried out in the framework of the Inria Associate Team Impinge, comprising Eduardo Andrade Lima and Benjamin Weiss from the Earth Sciences department at MIT (Boston, USA) and Douglas Hardin, Michael Northington, Edward Saff and Cristobal Villalobos from the Mathematics department at Vanderbilt University (Nashville, USA).

The overall goal of Impinge is to determine
magnetic properties of rock
samples (*e.g.* meteorites or stalactites) from weak field measurements
close to the sample that
can nowadays be obtained using SQUIDs (superconducting quantum interference
devices). During previous years, we considered the case where the rock is cut into slabs so thin that the magnetization distribution can be assumed to lie in a plane. This year, we started considering the situation where the thickness

We focused on net moment recovery, the net moment of a magnetization being given by its mean value over the sample. The net moment is a valuable piece of information to physicists and has the advantage of being well-defined: whereas two different magnetizations can generate the same field, the net moment depends only on the field and not on the magnetization itself. Hence the goal may be described as building a numerical magnetometer, capable of analyzing data close to the sample. This is in contrast to classical magnetometers, which regard the sample as a single dipole, an approximation which is only valid away from the sample and is not suitable for handling weak fields, which get quickly blurred by ambient magnetic sources. This research effort proceeded in two complementary directions.

The first approach consists in computing asymptotic expansions of the integrals

The second approach attempts to generalize the previous expansions. The initial question is: given measurements of

We also performed preliminary numerical experiments which are very encouraging, but still need to be pushed further in connection with the delicate issue of how dense the grid of data points should be in order to reach a prescribed level of precision. An article on this topic is in preparation.

The team Apics is a partner of the ANR project MagLune on Lunar magnetism, headed by the Geophysics and Planetology Department of Cerege, CNRS, Aix-en-Provence (see Section ). Recent studies lead geoscientists to think that the Moon used to have a magnetic dynamo for a while, yet the exact process that triggered and fed this dynamo is still not understood, much less why it stopped. The overall goal of the project is to devise models explaining how this dynamo phenomenon was possible on the Moon.

The geophysicists from Cerege went this year to NASA to perform measurements on a few hundred samples brought back from the Moon by the Apollo missions. The samples are kept inside bags with a protective atmosphere, and the geophysicists are allowed neither to open the bags nor to take the samples out of NASA facilities. Moreover, the process must be carried out efficiently, as a fee is due to NASA for the time spent handling these Moon samples. Measurements were therefore performed with a specific magnetometer designed by our colleagues from Cerege. This device measures the components of the magnetic field produced by the sample at a discrete set of points located on circles belonging to three cylinders (see Figure ). The objective of Apics is to enhance the numerical efficiency of post-processing the data obtained with this magnetometer.

This year, we continued the approach initiated in 2015 during K. Mavreas' internship: under the hypothesis that the field can be well explained by a single magnetic dipole, and using ideas similar to those underlying the FindSources3D tool (see Sections and ), we try to recover the position and moment of the dipole. The rational approximation technique that we are using gives, for each circle of measurements, partial information about the position of the dipole. The partial information obtained on all nine circles must then be combined in order to recover the exact position. Theoretically, this information is redundant and the position could be obtained by several equivalent techniques. In practice, however, because the field is not truly generated by a single dipole, and also because of noise in the measurements and numerical errors in the rational approximation step, the methods do not all show the same reliability when combining the partial results. We studied several approaches, testing them on synthetic examples with varying levels of noise, in order to propose a good heuristic for the reconstruction of the position. This is still ongoing work.
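A heavily simplified numpy sketch of the single-dipole idea (toy geometry and data, not the actual measurement setup nor the rational-approximation step): once a candidate position is fixed, the field is linear in the moment, so candidate positions can be ranked by a linear least-squares residual.

```python
import numpy as np

def dipole_field(pos, m, pts):
    # dipole field with physical constants dropped:
    # B(r) = (3 (m.u) u - m) / |r|^3, with u = r / |r|
    r = pts - pos
    d = np.linalg.norm(r, axis=1, keepdims=True)
    u = r / d
    return (3 * (u @ m)[:, None] * u - m) / d**3

# hypothetical measurement points on one circle around the sample
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.column_stack([2 * np.cos(t), 2 * np.sin(t), np.full_like(t, 1.0)])

true_pos = np.array([0.1, -0.2, 0.0])
true_m = np.array([1.0, 0.5, -0.3])
B = dipole_field(true_pos, true_m, pts)          # synthetic data

def residual(pos, pts, B):
    # with the position fixed, the moment is linear in the data:
    # columns of A are the fields of the unit moments e_x, e_y, e_z
    A = np.column_stack([dipole_field(pos, e, pts).ravel() for e in np.eye(3)])
    m, *_ = np.linalg.lstsq(A, B.ravel(), rcond=None)
    return np.linalg.norm(A @ m - B.ravel())
```

The residual vanishes (up to rounding) at the true position and is strictly positive elsewhere, which is what makes position ranking possible.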

This is a joint work with Pei Dang and Tao Qian from the University of Macao.

In the case where the curvature is constant
(*i.e.* for spheres and planes), one recovers from the previous result
that silent distributions
have no inner harmonic gradient component, whereas in the case of more general surfaces one finds they have to satisfy a spectral equation for the double layer potential. This also furnishes a characterization of volumetric silent distributions: their balayage to the boundary of the volume (which is a closed surface) must be silent. An article is being written on this topic.

This is a joint work with Christian Gerhards from the University of Vienna.

The techniques based on solving bounded extremal problems, set forth in Section to estimate the net moment of a planar magnetization, may be used to approach the problem of decomposing the magnetic field of the Earth into its crustal and core components, when adapted to a spherical geometry.

Indeed, in geomagnetism it is of interest to separate the Earth's core magnetic field from the crustal magnetic field. However, satellite measurements can only sense the superposition of the two contributions. In practice, the measured magnetic field is expanded in terms of spherical harmonics and a separation into crust and core contribution is done empirically by a sharp cutoff in the spectral domain. Under the assumption that the crustal magnetic field is supported on a strict subset of the Earth's surface, which is nearly verified as some regions on the globe are only very weakly magnetic, one can state an extremal problem to find a linear form yielding an arbitrary coefficient of the expansion in spherical harmonics on the crustal field, while being nearly zero on the core contribution. An article is being prepared to report on this research.
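The empirical sharp-cutoff practice mentioned above can be sketched as follows (synthetic coefficients; the cutoff near degree 13, below which the core field is conventionally considered dominant, is an assumption of this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
degrees = np.arange(1, 61)
# toy spherical-harmonic coefficients: 2l + 1 coefficients per degree l
coeffs = {int(l): rng.standard_normal(2 * l + 1) for l in degrees}

cutoff = 13   # conventional empirical cutoff degree
core = {l: c for l, c in coeffs.items() if l <= cutoff}
crust = {l: c for l, c in coeffs.items() if l > cutoff}
```

The extremal-problem approach described above aims to replace this sharp spectral cutoff by a linear form exploiting the spatial support of the crustal field.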

This work is conducted in collaboration with Jean-Paul Marmorat and Nicolas Schnitzler, together with Maureen Clerc and Théo Papadopoulo from the Athena EPI.

In 3-D, functional or clinically active regions in the cortex are often modeled by pointwise sources that have to be localized from measurements, taken by electrodes on the scalp, of an electrical potential satisfying a Laplace equation (EEG, electroencephalography). In the works , on the behavior of poles in best rational approximants of fixed degree to functions with branch points, it was shown how to proceed via best rational approximation on a sequence of 2-D disks cut along the inner sphere, for the case where there are finitely many sources (see Section ).

In this connection, a dedicated software FindSources3D (see Section ) is being developed, in collaboration with the team Athena and the CMA. In addition to the modular and ergonomic platform version of FindSources3D,
a new (Matlab) version of the software that automatically performs the estimation of the quantity of sources is being built.
It uses an alignment criterion in addition to other clustering tests for the selection.
It appears that, in the rational approximation step,
*multiple* poles possess a nice behavior with respect to branched
singularities. This is due to the very physical assumptions on the model
(for EEG data, one should consider *triple* poles). Though numerically
observed in , there is no mathematical
justification so far for why multiple poles generate such strong accumulation
of the poles of the approximants. This intriguing property, however,
is definitely helping source recovery. It is used in order to automatically estimate the “most plausible”
number of sources (numerically: up to 3, at the moment).
Last but not least, this new version may take actual EEG measurements (time signals) as inputs, and performs a suitable singular value decomposition in order to separate independent sources.
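The SVD-based separation step can be sketched as follows (synthetic channels-by-time data and a crude rank threshold, both hypothetical stand-ins for the actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 32, 500
# toy EEG-like data: two spatial patterns modulated by two time courses, plus noise
A = rng.standard_normal((n_ch, 2))      # spatial patterns
S = rng.standard_normal((2, n_t))       # time courses
X = A @ S + 0.01 * rng.standard_normal((n_ch, n_t))

U, sv, Vt = np.linalg.svd(X, full_matrices=False)
n_src = int(np.sum(sv > 10 * sv[-1]))   # crude estimate of the signal rank
```

The leading left singular vectors then serve as separated spatial components on which source localization can be run independently.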

In connection with these and other brain exploration modalities like electrical impedance tomography (EIT), we are now studying conductivity estimation problems. This is the topic of the PhD research work of C. Papageorgakis (co-advised with the Athena project-team and BESA GmbH). In layered models, it concerns the estimation of the conductivity of the skull (intermediate layer). Indeed, the skull was assumed until now to have a given isotropic constant conductivity, whose value can differ from one individual to another. A preliminary issue in this direction is: can we uniquely recover and estimate a single-valued skull conductivity from one EEG recording? This has been established in the spherical setting when the sources are known, see . Situations where sources are only partially known and the geometry is more realistic than a sphere are currently under study. When the sources are unknown, we should look for more data (additional clinical and/or functional EEG, EIT, ...) that could be incorporated in order to recover both the sources locations and the skull conductivity. Furthermore, while the skull essentially consists of a hard bone part, which may be assumed to have constant electrical conductivity, it also contains spongy bone compartments. These two distinct components of the skull possess quite different conductivities. The influence of the second on the overall model is currently being studied.

This is collaborative work with Stéphane Bila (XLIM, Limoges, France), Yohann Sence (XLIM, Limoges, France), Thierry Monediere (XLIM, Limoges, France), Francois Torrès (XLIM, Limoges, France) in the context of the ANR Cocoram (see Section ).

Filter synthesis is usually performed under the hypothesis that both ports of the filter are loaded on a constant resistive load (usually 50 Ohm). In complex systems, however, filters are cascaded with other devices, and end up being loaded, at least at one port, on a non-purely-resistive, frequency-varying load. This is for example the case when synthesizing a multiplexer: each filter is then loaded, at one of its ports, on a common junction. Thus, the load varies with frequency by construction, and is not purely resistive either. Likewise, in an emitter-receiver, the antenna is followed by a filter. Whereas the antenna can usually be regarded as a resistive load at some frequencies, this is far from being true over the whole pass-band. A mismatch between the antenna and the filter, however, causes irremediable power losses, both in emission and transmission. Our goal is therefore to develop a method for filter synthesis that allows us to match varying loads on specific frequency bands, while enforcing some rejection properties away from the pass-band.

The matching problem of minimizing

When the degree

where

which accounts for the losslessness of the filter. The frequencies

The previous interpolation procedure provides us with matching/rejecting filtering characteristics at a discrete set of frequencies. This may serve as a
starting point for heavier optimization procedures, where the matching and rejection specifications are expressed uniformly over the bandwidth. Although the practical results thus obtained have proven quite convincing, we have no proof of their global optimality. This led us to seek alternative approaches able to assess, at least in simple cases, global optimality of the derived response. By optimality we mean, as in classical filtering, the ability to derive the uniformly best matching response in a given pass-band, while ensuring some rejection constraints on a stop-band. Following the approach of Fano and Youla, we considered the problem of designing a

This work was conducted in collaboration with Yves Rolain (VUB, Brussels, Belgium). The goal is to automate and improve our computer-aided tuning (CAT) method for coupled-resonator microwave synthesis, which is based on rational approximation and circuit extraction as explained before. The novelty here lies with estimating the Jacobian of the function that relates the physical filter design parameters to the extracted coupling parameters. Lately, commercial full-wave electromagnetic (EM) simulators have come to provide the adjoint sensitivities of the S-parameters with respect to the geometrical parameters. This information allows for an efficient estimation of the Jacobian, since it no longer requires finite-difference-based evaluation. Our tuning method first extracts the physically implemented coupling matrix, and then estimates the corresponding Jacobian. Next it compares the extracted coupling matrix to the target coupling matrix (golden goal). Using the difference between the coupling matrices and the pseudo-inverse of the estimated Jacobian, a correction that brings the design parameters closer to the golden goal is obtained. This process is repeated iteratively until the correction becomes sufficiently small with respect to a user-specified goal. In the case of coupling structures with multiple solutions, the Jacobian is calculated for each admissible solution, and a criterion is used to identify the physical solution among the different possibilities. The CAT method has been applied to the design of a cascaded triplet (CT) filter implemented in microstrip technology, a well-known example of a non-canonical coupling structure. See for details.
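The correction loop can be sketched with a toy surrogate in place of the EM simulator (the map, the finite-difference Jacobian standing in for adjoint sensitivities, and all numbers below are hypothetical):

```python
import numpy as np

def coupling(x):
    # hypothetical smooth surrogate for "design parameters -> coupling parameters"
    return np.array([x[0] + 0.1 * x[1]**2, np.sin(x[1]) + 0.2 * x[0]])

def jacobian(x, h=1e-6):
    # finite-difference stand-in for the simulator's adjoint sensitivities
    return np.column_stack([(coupling(x + h * e) - coupling(x)) / h
                            for e in np.eye(2)])

target = coupling(np.array([0.3, -0.4]))     # "golden goal" coupling parameters
x = np.zeros(2)                              # initial design parameters
for _ in range(20):
    # pseudo-inverse of the Jacobian maps the coupling mismatch to a correction
    step = np.linalg.pinv(jacobian(x)) @ (target - coupling(x))
    x = x + step
    if np.linalg.norm(step) < 1e-10:         # stop when the correction is small
        break
```

This is a Gauss-Newton-type iteration; the pseudo-inverse also handles the over- or under-determined cases that occur with real coupling structures.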

This work is performed under contract with CNES-Toulouse and the University of Bilbao, as well as in collaboration with Adam Cooman (VUB, Brussels, Belgium). The goal is to help design amplifiers, in particular to detect instability at an early stage of the design. Activity in this area is gaining importance with the arrival of a doctoral and a postdoctoral student, along with planned software developments.

Performing a stability analysis during the design of any electronic circuit is critical to guarantee its correct operation. A closed-loop stability analysis can be performed by analyzing the impedance presented by the circuit at a well-chosen node, without internal access to the simulator. If any of the poles of this impedance lie in the complex right half-plane, the circuit is unstable. The classic way to detect unstable poles is to fit a rational model on the impedance. This rational approximation has to deal with model-order selection, which is difficult in circuits with transmission lines. The practical approach we develop in collaboration with Adam Cooman is a projection-based method which splits the impedance into a stable and an unstable part by projecting onto an orthogonal basis of stable and unstable functions. Working with a projection instead of a rational approximation greatly simplifies the stability analysis. When the projection is mapped from the complex plane to the unit disc, it boils down to calculating a Fourier series. If a significant part of the impedance is projected on the unstable part, a low-order rational approximation is fitted on this unstable part to find the location of the unstable poles. See for details. Adapting such tools to check the stability of a trajectory, by linearizing around the latter, is tantamount to developing a similar theory for time-varying periodic systems. This is the subject of S. Fueyo's PhD work.
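A stripped-down numpy illustration of the idea (toy one-pole impedances, not the actual method or its basis functions): after a Möbius map sending the right half-plane to the unit disc, the negative-index Fourier coefficients on the circle capture the unstable part.

```python
import numpy as np

N = 1024
theta = (np.arange(N) + 0.5) * 2 * np.pi / N   # offset grid avoids z = 1
z = np.exp(1j * theta)
s = (1 + z) / (1 - z)                           # unit circle -> imaginary axis

# toy impedance: one stable pole (s = -2) and one unstable pole (s = 0.5)
Z = 1.0 / (s + 2) + 1.0 / (s - 0.5)

# Fourier coefficients on the circle: k >= 0 comes from the stable part
# (analytic in the disc), k < 0 from the unstable part
ks = np.arange(-8, 9)
c = np.array([(Z * z**(-k)).mean() for k in ks])
unstable_energy = np.sum(np.abs(c[ks < 0])**2)
stable_energy = np.sum(np.abs(c[ks >= 0])**2)
```

If the unstable energy is significant, a low-order rational fit of the negative-index part would then localize the unstable poles.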

The overall and long-term goal is to enhance the quality of numerical computations. The software tool Sollya (see Section ), developed together with C. Lauter (Université Pierre et Marie Curie), intends to provide an interactive environment for performing numerically rigorous computations. During 2016, we released version 5.0 (in June) and version 6.0 (in October) of Sollya. Among other things, these releases have heavily improved the internal handling of polynomial expressions and the speed of the faithful evaluation of functions. They also make the library API more complete and fix most of the reported bugs. Another important novelty of 2016 is that Sollya is now officially included in the Debian Linux distribution.

This year, we extended exterior asymptotics for orthonormal polynomials
with respect to a weight on a planar region

locally uniformly outside the convex hull of

The result goes much beyond those previously known, which either assume
analyticity of

This contract (reference Inria: 7066, CNES: 127 197/00)
involving CNES, XLIM and Inria, focuses on the development
of synthesis algorithms for

This contract (reference CNES: RS14/TG-0001-019) involving CNES, University of Bilbao (UPV/EHU) and Inria aims at setting up a methodology for testing the stability of amplifying devices. The work at Inria is concerned with the design of frequency optimization techniques to identify the unstable part of the linearized response and analyze the linear periodic components.

This is a research agreement between Inria (Apics and Athena teams) and the German company BESA

Flextronics, active in the manufacturing of communication devices all over the world, bought two sets of licenses for Presto-HF and Dedale-HF. Deployment of our tools in their production facilities for wireless communication units is being studied.

Contract Provence Alpes Côte d'Azur (PACA) Region - Inria, BDO (no. 2014-05764) funding the research grant of C. Papageorgakis, see Sections , .

The team participates in the project WIMAG (Wave IMAGing) funded by the IDEX UCA-Jedi. It aims at identifying and gathering the research and development by partners of UCA involved in wave imaging systems. Other partners are UNS and CNRS (GéoAzur, I3S, LEAT, LJAD), together with Orange Labs.

The team participates in the transversal action C4PO funded by the IDEX UCA-Jedi. This “Center for Planetary Origin” brings together scientists from various fields to advance and organize Planetary Science at the University of Nice, and supports research and teaching initiatives within its framework.

The ANR (Astrid) project COCORAM (Co-design et co-intégration de réseaux d'antennes actives multi-bandes pour systèmes de radionavigation par satellite) started in January 2014. We are associated with three other teams from XLIM (Limoges University), geared respectively towards filter, antenna and amplifier design. The core idea of the project is to realize dual-band reception and emission chains by co-designing the antenna, the filters, and the amplifier. We are specifically in charge of the theoretical design of the filters, matching the impedance of a bi-polarized dual-band antenna. This is a perfect training ground to test, apply and adapt our work on matching problems (see Section ).

The ANR project MagLune (Magnétisme de la Lune) was approved in July 2014. It involves the Cerege (Centre de Recherche et d’Enseignement de Géosciences de l’Environnement, joint laboratory between Université Aix-Marseille, CNRS and IRD), the IPGP (Institut de Physique du Globe de Paris) and ISTerre (Institut des Sciences de la Terre). Associated with Cerege are Inria (Apics team) and Irphe (Institut de Recherche sur les Phénomènes Hors Équilibre, joint laboratory between Université Aix-Marseille, CNRS and École Centrale de Marseille). The goal of this project (led by geologists) is to understand the past magnetic activity of the Moon, especially to answer the question whether it had a dynamo in the past and which mechanisms were at work to generate it. Apics participates in the project by providing mathematical tools and algorithms to recover the remanent magnetization of rock samples from the Moon on the basis of measurements of the magnetic field it generates. The techniques described in Section are instrumental for this purpose.

Apics is part of the European Research Network on System Identification (ERNSI) since 1992.

System identification deals with the derivation, estimation and validation of mathematical models of dynamical phenomena from experimental data.

Title: Inverse Magnetization Problems IN GEosciences.

International Partner (Institution - Laboratory - Researcher):

Massachusetts Institute of Technology (United States) - Department of Earth, Atmospheric and Planetary Sciences - Benjamin P. Weiss

Start year: 2016

See also: http://

The associate team Impinge is concerned with the inverse problem of recovering a magnetization distribution from measurements of the magnetic field above rock slabs, using a SQUID microscope (developed at MIT). The application domain is Earth and planetary sciences: indeed, the remanent magnetization of rocks provides valuable information on their history. This is a renewal of the previous Associate Team Impinge that ended in 2015. The US team also involves a group of mathematicians (D. Hardin, M. Northington, E.B. Saff) at Vanderbilt University.

**MIT-France seed funding** is a competitive collaborative research
program run by the Massachusetts Institute of Technology (Cambridge, MA, USA). Together with
E. Lima and B. Weiss from the Earth, Atmospheric and Planetary Sciences department at MIT,
Apics obtained two-year support from this program to run a project entitled
“Development of Ultra-high Sensitivity Magnetometry for Analyzing Ancient Rock Magnetism”.

**NSF Grant** L. Baratchart, S. Chevillard and J. Leblond are
external investigators in the NSF Grant 2015-2018,
"Collaborative Research: Computational
methods for ultra-high sensitivity magnetometry of geological samples"
led by E.B. Saff (Vanderbilt Univ.) and B. Weiss (MIT).

Christian Gerhards (Universität Wien, Vienna, Austria, September 5-9).

Douglas Hardin (Vanderbilt University, Nashville, Tennessee, USA, June 11-21).

Nuutti Hyvönen (Aalto University, Finland, June 13-14).

Benjamin Lanfer (BESA, Munich, Germany, February 4-5).

Eduardo Lima (MIT, Boston, Massachusetts, USA, June 13-17).

Michael Northington (Vanderbilt University, Nashville, Tennessee, USA, June 11-22).

Vladimir Peller (Michigan State University, East Lansing, USA, June 10-24).

Cristobal Villalobos (Vanderbilt University, Nashville, Tennessee, USA, June 8-21).

Collaboration under contract with Thales Alenia Space (Toulouse, Cannes, and Paris), CNES (Toulouse), XLIM (Limoges), University of Bilbao (Universidad del País Vasco / Euskal Herriko Unibertsitatea, Spain), BESA company (Munich), Flextronics.

Regular contacts with research groups at UST (Villeneuve d'Ascq), Universities of Bordeaux-I (Talence), Orléans (MAPMO), Aix-Marseille (CMI-LATP), Nice Sophia Antipolis (Lab. JAD), Grenoble (IJF and LJK), Paris 6 (P. et M. Curie, Lab. JLL), Inria Saclay (Lab. Poems), Cerege-CNRS (Aix-en-Provence), CWI (the Netherlands), MIT (Boston, USA), Vanderbilt University (Nashville, USA), Steklov Institute (Moscow), Michigan State University (East Lansing, USA), Texas A&M University (College Station, USA), Indiana University-Purdue University at Indianapolis, Politecnico di Milano (Milan, Italy), University of Trieste (Italy), RMC (Kingston, Canada), University of Leeds (UK), of Maastricht (the Netherlands), of Cork (Ireland), Vrije Universiteit Brussel (Belgium), TU-Wien and Universität Wien (Austria), TFH-Berlin (Germany), ENIT (Tunis), KTH (Stockholm), University of Cyprus (Nicosia, Cyprus), University of Macau (Macau, China), SIAE Microelettronica (Milano).

The project is involved in the GDR-project AFHP (CNRS), in the ANR (Astrid program) project COCORAM (with XLIM, Limoges, and DGA), in the ANR (Défis de tous les savoirs program) project MagLune (with Cerege, IPGP, ISTerre, Irphe), in an MIT-France collaborative seed funding, in the Associate Inria Team Impinge (with MIT, Boston), and in an NSF grant (with Vanderbilt University and MIT).

L. Baratchart gave a talk at the Shanks workshop “Mathematical methods for inverse magnetization problems arising in geosciences”, organized at Vanderbilt University (Nashville, USA), March 2016; a talk at “SEAM”, organized by the AMS at
USF (Tampa, USA); and a talk at “AppOpt”, organized by ICIMAF in Havana
(Cuba) http://

S. Chevillard gave a talk at the Shanks workshop “Mathematical methods for inverse magnetization problems arising in geosciences”, organized at Vanderbilt University (Nashville, USA), March 2016.

B. Hanzon gave a presentation at the
CDC 2016 pre-workshop on "realization theory and its role in system identification" (joint work with M. Olivi and R. Peeters) https://

J. Leblond presented a communication at the above-mentioned Shanks Workshop and at the conference PICOF 2016 (Problèmes Inverses, Contrôle, Optimisation de Formes, Autrans, France, June 1-3, 2016, http://

M. Olivi gave a talk at the conference SIGMA'2016 (Signal-Image-Géométrie-Modélisation-Approximation). http://

F. Seyfert presented a communication at the 22nd International Symposium on Mathematical Theory of Networks and Systems https://

K. Mavreas presented a communication at the Conference Advances in Lunar Magnetism: from Paleomagnetism to Dynamos, Cargèse, France, June 1-3, 2016, http://

C. Papageorgakis presented a communication at the Conference PICOF 2016 and at the Science Day in BESA company, Munich, Germany, December 15, 2016.

D. Ponomarev presented a communication at the above-mentioned Shanks Workshop and a poster at the Conference PICOF 2016.

D. Martinez Martinez gave a seminar at the ELEC department of the Vrije Universiteit Brussel (September 18) and at the Universidad Politécnica de Cartagena, ETSI (December 14). He gave a talk at the 2016 IEEE International Conference on Antenna Measurements & Applications, Syracuse (NY), USA, October 23-27.

K. Mavreas and C. Papageorgakis were among the PhD students in charge of the PhD students' seminar within the Research Center.

J. Leblond was one of the co-organizers of the 3rd “Journée Mathématiques et Parité”, IHP, Paris, July 8, 2016, http://

L. Baratchart was a member of the program committee of “Mathematical Theory of Networks and Systems” (MTNS) 2016, Minneapolis, Minnesota, USA.

J. Leblond was a member of the Scientific Committee of the Conference PICOF 2016.

L. Baratchart sits on the Editorial Board of the journals *Computational Methods and Function Theory* and *Complex Analysis and Operator Theory*.

L. Baratchart served as a reviewer for several journals (Annales Inst. Fourier, SIMA, Numerical Algorithms, Journal of Approx. Theory, Complex Variables and Elliptic Equations, ...).

J. Leblond was a reviewer for the journals *Multidimensional Systems and Signal Processing* and *Czechoslovak Mathematical Journal*.

M. Olivi was a reviewer for the journals *Automatica* and *IEEE Transactions on Automatic Control*
and for the IEEE Conference on Decision and Control.

F. Seyfert was a reviewer for the journal *IEEE Transactions on Microwave Theory and Techniques*.

L. Baratchart was an invited speaker at the “25th Summer Meeting in
Mathematical Analysis”, organized by the Russian Academy of Sciences
at the Euler Institute (St-Petersburg, Russia)
http://

S. Chevillard was invited to give a talk at the Fifth Approximation Days, International conference on constructive complex approximation, http://

J. Leblond was a plenary speaker at the Conference WiS&E 2016 (Waves in Sciences and Engineering), http://

F. Seyfert was invited to give a talk at the Workshop on Mathematical Aspects of Network Synthesis http://

L. Baratchart is a member of the Mathematical panel of experts of ANR.

S. Chevillard was a representative at the “comité de centre” and at the “comité des projets” (Research Center Inria-Sophia) until September 2016.

J. Leblond is an elected member of the “Conseil Scientifique” and of the “Commission Administrative Paritaire” of Inria. Until May, she was in charge of the mission “Conseil et soutien aux chercheurs” within the Research Center. She is also a member of the “Conseil Académique” of the Univ. Côte d'Azur (UCA).

**Colles**: S. Chevillard gives “colles” (oral examination practice sessions) at the Centre International de Valbonne (CIV), 2 hours per week.

PhD: D. Ponomarev, *Some inverse problems with partial data*, Université Nice Sophia Antipolis, defended on June 14, 2016 (advisors: J. Leblond, L. Baratchart).

PhD: M. Caenepeel, *The development of models for the design of RF/microwave filters*, Vrije Universiteit Brussel (VUB), defended on October 19, 2016 (advisors: Y. Rolain, M. Olivi, F. Seyfert).

PhD in progress: C. Papageorgakis, *Conductivity model estimation*, since October 2014 (advisors: J. Leblond, M. Clerc, B. Lanfer).

PhD in progress: K. Mavreas, *Inverse source problems in planetary sciences: dipole localization in Moon rocks from sparse magnetic data*, since October 2015 (advisors: S. Chevillard, J. Leblond).

PhD in progress: D. Martinez Martinez, *Méthodologie et Outils de Synthèse pour des Fonctions de Filtrage Chargées par des Impédances complexes*, since October 2015, advisors: L. Baratchart and F. Seyfert.

PhD in progress: G. Bose, *Filter Design to Match Antennas*, since December 2016 (advisors: F. Ferrero, F. Seyfert).

PhD in progress: S. Fueyo, *Cycles limites et stabilité dans les circuits*, since October 2016 (advisors: L. Baratchart, J.-B. Pomet).

L. Baratchart sat on the PhD defense committee of
Alexey Agaltsov (École Polytechnique, http://) and on the jury for the *Habilitation à diriger des recherches* of E. Abakumov (Université Paris-Est,
Marne-la-Vallée, http://

J. Leblond was a member of the “Jury d'admissibilité du concours CR” of the Inria Research Center and of the “Comités de Sélection” for professors at UNSA (Polytech Nice) and at the University Paris-Sud Orsay (March-May 2016). She was a reviewer for the PhD thesis of Silviu Ioan Filip, Univ. Lyon, December 2016.

F. Seyfert was a member of the PhD jury of Adam Cooman at the ELEC department of the VUB (Brussels, Belgium). The PhD's title is “Distortion Analysis of Analog Electronic Circuits Using Modulated Signals”.

M. Olivi is responsible for Scientific Mediation and president of the Committee MASTIC (Commission d’Animation et de Médiation Scientifique) https://

K. Mavreas and C. Papageorgakis actively participated in events organized by the Committee MASTIC (Fête de la Science, ...).