The team develops constructive, function-theoretic approaches to inverse problems arising in modeling and design, in particular for electromagnetic systems, as well as in the analysis of certain classes of signals.

Data typically consist of measurements or desired behaviors. The general thread is to approximate them by families of solutions to the equations governing the underlying system. This leads us to consider various interpolation and approximation problems in classes of rational and meromorphic functions, harmonic gradients, or solutions to more general elliptic partial differential equations (PDE), in connection with inverse potential problems. A recurring difficulty is to control the singularities of the approximants.

The mathematical tools pertain to complex and harmonic analysis, approximation theory, potential theory, system theory, differential topology, optimization and computer algebra. Targeted applications mostly concern non-destructive control from field measurements in medical engineering (source recovery in magneto/electro-encephalography), paleomagnetism (determining the magnetization of rock samples) and, more recently, obstacle identification (finding electrical characteristics of an object), for which an endeavor of the team is to develop algorithms resulting in dedicated software.

Within the extensive field of inverse problems, much of the research by Factas deals with reconstructing solutions of classical elliptic PDEs from their boundary behavior. Perhaps the simplest example lies with harmonic identification of a stable linear dynamical system: the transfer function is holomorphic in the right half-plane, where it can in principle be recovered from its values on the imaginary axis, e.g., via the Cauchy formula.

Practice is not nearly as simple, for measurements are finitely many, noisy, and confined to a frequency band. A classical remedy proceeds in two steps: first, complete the data into the boundary trace of a function holomorphic in the right half-plane (step 1); second, approximate the completed data by a rational function whose poles lie in the left half-plane (step 2), these poles serving, e.g., to locate the resonances of the system.

Step 1 relates to extremal problems and analytic operator theory, see Section 3.3.1. Step 2 involves optimization, and some Schur analysis to parametrize transfer matrices of given Mc-Millan degree when dealing with systems having several inputs and outputs, see Section 3.3.2. It also makes contact with the topology of rational functions, in particular to count critical points and to derive bounds, see Section 3.3.2. Step 2 raises further issues in approximation theory regarding the rate of convergence and the extent to which singularities of the approximant (i.e., its poles) tend to singularities of the approximated function; this is where logarithmic potential theory becomes instrumental, see Section 3.3.3.

Applying a realization procedure to the result of step 2 yields an identification procedure from incomplete frequency data which was first demonstrated in 85 to tune resonant microwave filters. Harmonic identification of nonlinear systems around a stable equilibrium can also be envisaged by combining the previous steps with exact linearization techniques from 36.

The previous example of harmonic identification quickly suggests a generalization of itself. Indeed, a rational function is the Cauchy potential of a discrete measure carried by its poles, and the problem generalizes to recovering a measure from knowledge of its potential (i.e., the field) on part of a hypersurface (a curve in 2-D) encompassing the support of the measure.

Inverse potential problems are severely indeterminate, because infinitely many measures within an open set generate the same field outside that set, a phenomenon known as balayage 77. In the two-step approach previously described, we implicitly removed this indeterminacy by requiring in step 1 that the measure be supported on the boundary (because we seek a function holomorphic throughout the right half-plane), and by requiring in step 2 that the measure be discrete in the left half-plane (in fact: a finite sum of point masses).

To recap, the gist of our approach is to approximate boundary data by (boundary traces of) fields arising from potentials of measures with specific support. This differs from standard approaches to inverse problems, where descent algorithms are applied to integration schemes of the direct problem; in such methods, it is the equation which gets approximated (in fact: discretized).

Along these lines, Factas advocates the use of steps 1 and 2 above, along with some singularity analysis, to approach issues of nondestructive control in 2-D and 3-D 1, 45, 49. The team is currently engaged in the generalization to inverse source problems for the Laplace equation in 3-D, to be described further in Section 3.2.1. There, holomorphic functions are replaced by harmonic gradients; applications are to inverse source problems in neurosciences (in particular in EEG/MEG) and inverse problems in geosciences.

The approximation-theoretic tools developed by Factas to handle issues mentioned so far are outlined in Section 3.3. In Section 3.2 to come, we describe in more detail which problems are considered and which applications are targeted.

We also began to investigate inverse scattering problems of plane waves by obstacles (playing here the role of a source term), with partners at LEAT. Such problems are again governed by Maxwell's equations and, in the time-harmonic regime, these reduce to Helmholtz equations depending on the frequency of the plane wave. Such issues have applications to detection and identification of metal objects, and this is part of LEAT's research program, but at this early stage our study has remained academic (see Section 6.4).

By standard properties of conjugate differentials, reconstructing Dirichlet-Neumann boundary conditions for a function harmonic in a plane domain, when these conditions are already known on a subset of the boundary, reduces to recovering a holomorphic function in the domain from its boundary values on that subset.

Such issues naturally arise in nondestructive testing of 2-D (or 3-D cylindrical) materials from partial electrical measurements on the boundary. For instance, the ratio between the tangential and the normal currents (the so-called Robin coefficient) tells one about corrosion of the material. Thus, solving Problem

Studying Hardy spaces of conjugate Beltrami equations is another interesting topic. For Sobolev-smooth coefficients of exponent greater than 2, they were investigated in 5, 37. The case of the critical exponent 2 is treated in 33, which apparently provides the first example of a well-posed Dirichlet problem in the non-strictly elliptic case: the conductivity may be unbounded or zero on sets of zero capacity and, accordingly, solutions need not be locally bounded. More importantly perhaps, the exponent 2 is also the key to a corresponding theory on very general (still rectifiable) domains in the plane, as coefficients of pseudo-holomorphic functions obtained by conformal transformation onto a disk are merely of

Generalized Hardy classes as above are used in 34 where we address the uniqueness issue in the classical Robin inverse problem on a Lipschitz domain of

The 3-D version of step 1 in Section 3.1 is another subject investigated by Factas: to recover a harmonic function (up to an additive constant) in a ball or a half-space from partial knowledge of its gradient. This prototypical inverse problem (i.e., inverse to the Cauchy problem for the Laplace equation) often recurs in electromagnetism. At present, Factas is involved with solving instances of this inverse problem arising in two fields, namely medical imaging, e.g., for electroencephalography (EEG) or magneto-encephalography (MEG), and paleomagnetism (recovery of the magnetization of rocks) 1, 41, see Section 6.1. The question is considerably more difficult than its 2-D counterpart, due mainly to the lack of multiplicative structure for harmonic gradients. Still, substantial progress has been made over the last years using methods of harmonic analysis and operator theory.

The team is further concerned with 3-D generalizations and applications to non-destructive control of step 2 in Section 3.1. A typical problem here is to localize inhomogeneities or defects such as cracks, sources or occlusions in a planar or 3-dimensional object, knowing thermal, electrical, or magnetic measurements on the boundary. These defects can be expressed as a lack of harmonicity of the solution to the associated Dirichlet-Neumann problem, thereby posing an inverse potential problem in order to recover them. In 2-D, finding an optimal discretization of the potential in Sobolev norm amounts to solving a best rational approximation problem, and the question arises as to how the location of the singularities of the approximant (i.e., its poles) reflects the location of the singularities of the potential (i.e., the defects we seek). This is a fairly deep issue in approximation theory, to which the project Apics (predecessor of Factas) contributed convergence results for certain classes of fields (expressed as Cauchy integrals over extremal contours for the logarithmic potential 7, 42, 60). Initial schemes to locate cracks or sources via rational approximation on planar domains were obtained this way 45, 49, 61. It is remarkable that inverse problems with finitely many sources in 3-D balls, or more general algebraic surfaces, can be approached using these 2-D techniques upon slicing the domain into planar sections 9, 46. More precisely, each section cuts out a planar domain, the boundary of which carries data which can be proved to match an algebraic function. The singularities of this algebraic function are not located at the 3-D sources, but are related to them: the section contains a source if and only if some function of the singularities in that section meets a relative extremum.
Using bisection, it is thus possible to determine an extremal place along all sections parallel to a given plane direction, up to some threshold which has to be chosen small enough that one does not miss a source. This way, we reduce the original source problem in 3-D to a sequence of inverse poles and branch-points problems in 2-D. This line of research generates a steady activity within Factas, and again applications are sought to medical imaging and geosciences, see Sections 4.2, 4.3 and 6.1.

Conjectures may be raised on the behavior of optimal potential discretization in 3-D, but answering them is an ambitious program still in its infancy.

Through contacts with CNES (French space agency), members of the team became involved in identification and tuning of microwave electromagnetic filters used in space telecommunications.
The initial problem was to recover, from band-limited frequency measurements, physical parameters of the device under examination. The latter consists of interconnected dual-mode resonant cavities with negligible loss, hence its scattering matrix is modeled by a

This is where system theory comes into play, through the so-called realization process mapping a rational transfer function in the frequency domain to a state-space representation of the underlying system of linear differential equations in the time domain. Specifically, realizing the scattering matrix allows one to construct a virtual electrical network, equivalent to the filter, the parameters of which mediate in between the frequency response and the geometric characteristics of the cavities (i.e., the tuning parameters).
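The realization step can be illustrated numerically with off-the-shelf tools; the sketch below (a toy example using SciPy, not the team's own software) realizes a simple scalar transfer function in state space and checks that the realization reproduces the frequency response.

```python
import numpy as np
from scipy import signal

# Transfer function H(s) = (s + 3) / (s^2 + 2 s + 5)
num = [1.0, 3.0]
den = [1.0, 2.0, 5.0]

# Realization: H(s) -> (A, B, C, D) with H(s) = C (sI - A)^{-1} B + D
A, B, C, D = signal.tf2ss(num, den)

def response(s):
    """Evaluate C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B) + D).item()

# The realization must reproduce the transfer function at any frequency
s0 = 1j * 2.0
h_tf = np.polyval(num, s0) / np.polyval(den, s0)
assert abs(response(s0) - h_tf) < 1e-10
```

The state-space matrices are exactly the data on which the virtual electrical network equivalent to the filter is built.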

Hardy spaces provide a framework to transform this ill-posed issue into a series of regularized analytic and meromorphic approximation problems. More precisely, the procedure sketched in Section 3.1 goes as follows:

Factas also investigates issues pertaining to design rather than identification. Given the topology of the filter, a basic problem in this connection is to find the optimal response subject to specifications that bear on rejection, transmission and group delay of the scattering parameters. Generalizing the classical approach based on Chebyshev polynomials for single band filters, we recast the problem of multi-band response synthesis as a generalization of the classical Zolotarev min-max problem for rational functions 28, 80. Thanks to quasi-convexity, the latter can be solved efficiently using iterative methods relying on linear programming. These were implemented in the software easy-FF. Currently, the team is engaged in the synthesis of more complex microwave devices like multiplexers and routers, which connect several filters through wave guides. Schur analysis plays an important role here, because scattering matrices of passive systems are of Schur type (i.e., contractive in the stability region). The theory originates with the work of I. Schur 84, who devised a recursive test to check for contractivity of a holomorphic function in the disk. The so-called Schur parameters of a function may be viewed as Taylor coefficients for the hyperbolic metric of the disk, and the fact that Schur functions are contractions for that metric lies at the root of Schur's test. Generalizations thereof turn out to be efficient to parametrize solutions to contractive interpolation problems 31. Dwelling on this, Factas contributed differential parametrizations (atlases of charts) of lossless matrix functions 29, 81, 72 which are fundamental to our rational approximation software RARL2 (see Section 3.4.5). Schur analysis is also instrumental to approach de-embedding issues, and provides one with considerable insight into the so-called matching problem. 
The latter consists in maximizing the power a multiport can pass to a given load, and for reasons of efficiency it is all-pervasive in microwave and electric network design, e.g., of antennas, multiplexers, wifi cards and more. It can be viewed as a rational approximation problem in the hyperbolic metric. Factas made significant contributions to this subject 6, in particular within the framework of the (defense funded) ANR Cocoram.
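Schur's recursive test mentioned above can be sketched on truncated Taylor series: one extracts the parameter g = f(0) and forms the series of f1(z) = (f(z) - g)/(z(1 - g f(z))), repeating until a parameter of modulus at least 1 appears. The toy implementation below (our own sketch for real coefficients, unrelated to the team's software) illustrates the idea.

```python
import numpy as np

def series_div(num, den, n):
    """First n coefficients of the power series num/den (den[0] != 0)."""
    out = np.zeros(n)
    for k in range(n):
        s = num[k] if k < len(num) else 0.0
        if k:
            s -= np.dot(out[:k], den[1:k + 1][::-1])
        out[k] = s / den[0]
    return out

def schur_parameters(coeffs, max_params):
    """Schur parameters of the function with real Taylor coefficients
    coeffs; stops early when a parameter has modulus >= 1, in which
    case the function is not a strict contraction in the disk."""
    c = np.asarray(coeffs, dtype=float)
    gammas = []
    for _ in range(max_params):
        g = c[0]
        gammas.append(g)
        if abs(g) >= 1.0 or len(c) <= 1:
            break
        # f1 = (f - g) / (z (1 - g f)); dividing by z shifts the series
        num = c.copy()
        num[0] -= g
        num = num[1:]
        den = -g * c
        den[0] += 1.0
        c = series_div(num, den, len(num))
    return gammas

# f(z) = 0.5 z is a Schur function: all parameters stay inside the disk
assert max(abs(g) for g in schur_parameters([0, 0.5, 0, 0, 0, 0], 4)) < 1
# f(z) = 2 z is not contractive: the test detects a parameter of modulus >= 1
assert abs(schur_parameters([0, 2.0, 0, 0], 4)[-1]) >= 1
```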

In recent years, our attention was drawn by CNES and UPV (Bilbao) to questions about stability of high-frequency amplifiers. Contrary to previously discussed devices, these are active components. The response of an amplifier can be linearized around a set of primary currents and voltages, and then admittances of the corresponding electrical network can be computed at various frequencies, using the so-called harmonic balance method. The initial goal is to check for stability of the linearized model, so as to ascertain existence of a well-defined working state. The network is composed of lumped electrical elements, namely inductors, capacitors, negative and positive resistors, transmission lines, and controlled current sources. Our research so far has focused on describing the algebraic structure of admittance functions, so as to set up a function-theoretic framework where the two-step approach outlined in Section 3.1 can be put to work. The main discovery is that the unstable part of each partial transfer function is rational and can be computed by analytic projection, see 10. We are now starting to investigate the linearized harmonic transfer function around a periodic cycle, to check for stability under inputs that are not necessarily small.
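The splitting by analytic projection can be illustrated in a simplified discrete-time setting: sampling a function on the unit circle, its Fourier coefficients of nonnegative index give the part analytic in the disk, the others the part analytic outside the disk and vanishing at infinity. The sketch below (a toy illustration of ours, not the method of 10) splits f(z) = 1/(z - 2) + 1/(z - 0.5) into the two parts.

```python
import numpy as np

N = 512
theta = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * theta)                 # samples on the unit circle

f = 1.0 / (z - 2.0) + 1.0 / (z - 0.5)  # one pole outside, one inside

# Fourier coefficients: f(z_j) = sum_k c_k z_j^k, hence c = fft(f) / N
c = np.fft.fft(f) / N
k = np.arange(N)
c_in = np.where(k < N // 2, c, 0)      # nonnegative powers: analytic in the disk
c_out = c - c_in                       # negative powers: analytic outside, 0 at infinity

def eval_series(coef, z0):
    """Evaluate sum_k coef_k z0^k, reading indices k >= N/2 as k - N."""
    kk = np.where(k < N // 2, k, k - N)
    return np.sum(coef * z0 ** kk)

# Each projection recovers the corresponding partial fraction
z_in = 0.7 * np.exp(0.3j)              # a point inside the unit disk
z_out = 1.5 * np.exp(0.3j)             # a point outside the unit disk
assert abs(eval_series(c_in, z_in) - 1.0 / (z_in - 2.0)) < 1e-10
assert abs(eval_series(c_out, z_out) - 1.0 / (z_out - 0.5)) < 1e-10
```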

In dimension 2, the prototypical problem to be solved in step 1 of Section 3.1 may be described as: given a domain

To find an analytic function

There, a priori assumptions on the behavior of the model off

To fix terminology, we refer to such issues as bounded extremal problems. As shown in 11, 44, 47, 55, the solution to this convex infinite-dimensional optimization problem can be obtained when

(

In the case

Various modifications of

In view of our current research on stability of active devices via analyticity of the harmonic transfer function, on inverse magnetization issues, and on inverse scattering via identification of the frequency response, bounded extremal problems for analytic functions are receiving renewed interest by the team. In such issues, a function on an interval of the real line (or an arc of the circle) must be approximated by the trace of a function holomorphic in the half-plane (or the disk), that meets suitable size constraints.

The analog of Problem in this setting is to seek the inner boundary, knowing it is a level curve of the solution. In this case, the Lagrange parameter indicates how to deform the inner contour in order to improve data fitting. Similar topics are discussed in Section 3.2.1 for more general equations than the Laplacian, namely isotropic conductivity equations whose coefficient (i.e., the conductivity) varies in the space. Then, the Hardy spaces in Problem

Though originally considered in dimension 2, Problem

When

On the ball, the analog of Problem

When the domain is a half-space, a useful tool is the Hardy-Hodge decomposition, allowing us to express a vector field as the sum of a harmonic gradient from above, a harmonic gradient from below, and a divergence-free tangential field; it can be used to characterize silent magnetizations (i.e., those generating no field in the upper half space) 41.

Just like solving problem

Problem

Companion to problem

Note that

The techniques set forth in this section are used to solve step 2 in Section 3.2 and they are instrumental to approach inverse boundary value problems for the Poisson equation

We put

A natural generalization of problem

(

Only for

In the case at hand, the best stable rational approximant to a given function need not be unique.

The Miaou project (predecessor of Apics) already designed a dedicated steepest-descent algorithm for this problem, for which convergence to a local minimum is guaranteed; the algorithm has evolved over the years and, even now, it seems to be the only procedure meeting this property. This gradient algorithm proceeds recursively with respect to critical points of lower degree (as can be done with the RARL2 software, Section 3.4.5).

In order to establish global convergence results, the team has undertaken a deeper study of the number and nature of critical points (local minima, saddle points, ...), in which tools from differential topology and operator theory team up with classical interpolation theory 51, 54. Based on this work, uniqueness or asymptotic uniqueness of the approximant was proved for certain classes of functions like transfer functions of relaxation systems (i.e., Markov functions) 56 and more generally Cauchy integrals over hyperbolic geodesic arcs 57. These are the only results of this kind. Research on this topic remained dormant for a while for reasons of opportunity, but revisiting the work 32 in higher dimension is a worthy and timely endeavor today. Meanwhile, an analog to AAK theory was carried out for

A common feature to the above-mentioned problems is that critical point equations yield non-Hermitian orthogonality relations for the denominator of the approximant. This stresses connections with interpolation, which is a standard way to build approximants, and in many respects best or near-best rational approximation may be regarded as a clever manner to pick interpolation points. This was exploited in 58, 59, and is used in an essential manner to assess the behavior of poles of best approximants to functions with branched singularities, which is of particular interest for inverse source problems (cf. Sections 3.4.3 and 6.1).

In higher dimensions, the analog of Problem

Besides, certain constrained rational approximation problems, of special interest in identification and design of passive systems, arise when putting additional requirements on the approximant, for instance that it should be smaller than 1 in modulus (i.e., a Schur function). In particular, Schur interpolation lately received renewed attention from the team, in connection with matching problems. There, interpolation data are subject to a well-known compatibility condition (positive definiteness of the so-called Pick matrix), and the main difficulty is to put interpolation points on the boundary of

Matrix-valued approximation is necessary to handle systems with several inputs and outputs, but it generates additional difficulties as compared to scalar-valued approximation, both theoretically and algorithmically. In the matrix case, the McMillan degree (i.e., the degree of a minimal realization in the system-theoretic sense) generalizes the usual notion of degree for rational functions. For instance, when poles are simple, the McMillan degree is the sum of the ranks of the residues.
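The residue-rank count can be checked on a toy example: the sketch below (our own illustration) compares two 2x2 rational matrices with a single simple pole, one whose residue has rank one (McMillan degree 1) and one whose residue has full rank (McMillan degree 2).

```python
import numpy as np

a = 0.5  # simple pole shared by all entries

def R(z):
    """Rational 2x2 matrix with a single simple pole at z = a."""
    return np.array([[1.0, 1.0],
                     [1.0, 1.0]]) / (z - a)

# Residue at a simple pole: lim_{z->a} (z - a) R(z)
res = 1e-6 * R(a + 1e-6)
mcmillan_degree = np.linalg.matrix_rank(res, tol=1e-8)

# The residue has rank one, so the McMillan degree is 1, even though
# each of the four scalar entries has degree 1.
assert mcmillan_degree == 1

# By contrast, I/(z - a) has a full-rank residue: McMillan degree 2
res2 = 1e-6 * (np.eye(2) / 1e-6)
assert np.linalg.matrix_rank(res2) == 2
```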

The basic problem that we consider now goes as follows: let $\mathcal{F}\in (H^2)^{m\times l}$ and $n$ an integer; find a rational matrix of size $m\times l$ without poles in the unit disk and of McMillan degree at most $n$ which is nearest possible to $\mathcal{F}$ in $(H^2)^{m\times l}$. Here the

The scalar approximation algorithm derived in 35 and mentioned in Section 3.3.2 generalizes to the matrix-valued situation 69. The first difficulty here is to parametrize inner matrices (i.e., matrix-valued functions analytic in the unit disk and unitary on the unit circle) of given McMillan degree

Difficulties relative to multiple local minima of course arise in the matrix-valued case as well, and deriving criteria that guarantee uniqueness is even more difficult than in the scalar case. The case of rational functions of degree

Let us stress that RARL2 seems to be the only algorithm handling rational approximation in the matrix case that demonstrably converges to a local minimum while meeting stability constraints on the approximant. It remains a workhorse of many developments by Factas on frequency optimization and design.

We refer here to the behavior of poles of best meromorphic approximants, in the

Generally speaking, in approximation theory, assessing the behavior of poles of rational approximants is essential to obtain error rates as the degree grows large, and to tackle constructive issues like uniqueness. However, as explained in Section 3.2.1, the original twist by Apics, now Factas, is to consider this issue also as a means to extract information on singularities of the solution to a Dirichlet-Neumann problem. The general theme is thus: how do the singularities of the approximant reflect those of the approximated function? This approach to inverse problems for the 2-D Laplacian turns out to be attractive when singularities are zero- or one-dimensional (see Section 4.2). It can be used as a computationally cheap initial condition for more precise but much heavier numerical optimizations which often do not even converge unless properly initialized. As regards crack detection or source recovery, this approach boils down to analyzing the behavior of best meromorphic approximants of given pole cardinality to a function with branch points, which is the prototype of a polar singular set. For piecewise analytic cracks, or in the case of sources, we were able to prove (7, 49, 42) that the poles of the approximants accumulate, as the degree grows large, to some extremal cut of minimum weighted logarithmic capacity connecting the singular points of the crack, or the sources 45. Moreover, the asymptotic density of the poles turns out to be the Green equilibrium distribution on this cut in

The case of two-dimensional singularities is still an outstanding open problem.

It is remarkable that inverse source problems inside a sphere or an ellipsoid in 3-D can be approached with such 2-D techniques, as applied to planar sections, see Section 6.1. The technique is implemented in the software FindSources3D, see Section 3.4.3.

Another, extremely classical technique to approximate –more accurately: extrapolate– a function given pointwise values is to compute a rational interpolant of minimal degree to match the values. This method, known as Padé (or multipoint Padé) approximation, has been intensively studied for decades 30
but fails to produce
pointwise convergence, even if the data are analytic. The best it can give in
general is convergence in capacity, at least to functions whose singular set
has capacity zero, and this does not prevent spurious poles of the approximant
from wandering about the domain of analyticity of the approximated function
79. This phenomenon is standard in numerical practice, and gives rise
in physics and engineering circles to a distinction between “mathematical” and “physical” poles; note that this distinction ignores the possibility that
the function has other singularities than poles (for example branch-points
or essential singularities).
A modification of the multipoint Padé technique, where the degree is kept much smaller than the number of data and only approximate interpolation is performed
in the least-squares sense, has become especially popular over the last decade
under the name vector fitting;
this is in trend with the soaring development of computational methods in
the frequency domain. Although their behavior looks similar
to the one of multipoint Padé approximants from a numerical viewpoint,
there seems to be no convergence result available for such approximate interpolants so far. Motivated by the behavior of numerical schemes developed at LEAT
to recover resonance frequencies of conductors under
electromagnetic inverse scattering (see Section 4.5),
we started investigating the behavior of such least-squares rational approximants
to functions with a polar singular set, see Section 6.4.
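A minimal linearized variant of such least-squares rational fitting (a crude sketch in the spirit of, though much simpler than, vector fitting) solves p(x_j) - f_j q(x_j) ≈ 0 in the least-squares sense for polynomials p and q with q monic, then returns p/q as the approximant.

```python
import numpy as np

def rational_lstsq(x, f, deg_num, deg_den):
    """Linearized least-squares fit f ~ p/q with deg p <= deg_num and
    q monic of degree deg_den: minimize || p(x) - f * q(x) ||_2."""
    # Unknowns: coefficients of p, and of q minus its monic leading term
    cols = [x ** k for k in range(deg_num + 1)]
    cols += [-f * x ** k for k in range(deg_den)]
    A = np.stack(cols, axis=1)
    b = f * x ** deg_den
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = coef[:deg_num + 1]
    q = np.concatenate([coef[deg_num + 1:], [1.0]])
    return p, q   # coefficients in increasing powers

# Recover f(x) = 1/(x - 2) from samples on [-1, 1]
x = np.linspace(-1.0, 1.0, 50)
f = 1.0 / (x - 2.0)
p, q = rational_lstsq(x, f, 0, 1)
# exact answer: p = [1] and q(x) = x - 2, i.e. q = [-2, 1]
assert abs(p[0] - 1.0) < 1e-8 and abs(q[0] + 2.0) < 1e-8
```

When the data come from a function with branch points rather than poles, the behavior of the computed poles is precisely the open question discussed above.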

In addition to the above-mentioned research activities, Factas develops and maintains a number of long-term software tools that either implement and illustrate effectiveness of the algorithms theoretically developed by the team or serve as tools to help further research by team members. We present briefly the most important of them.

To minimize prototyping costs, the design of analog circuits is performed using computer-aided design tools which simulate the circuit's response as accurately as possible.

Some commonly used simulation tools do not impose stability, which can result in costly errors when the prototype turns out to be unstable. A thorough stability analysis is therefore a very important step in circuit design. This is where pisa is used.

pisa is a Matlab toolbox that allows designers of analog electronic circuits to determine the stability of their circuits in the simulator. It analyzes the impedance presented by a circuit to determine the circuit's stability. When an instability is detected, pisa can estimate the location of the unstable poles to help designers fix their stability issue.

Dedale-HF consists of two parts: a database of coupling topologies and a dedicated predictor-corrector code. Roughly speaking, each reference file of the database contains, for a given coupling topology, the complete solution to the coupling matrix synthesis problem associated to particular filtering characteristics. The latter is then used as a starting point for a predictor-corrector integration method that computes the solution to the coupling matrix synthesis problem corresponding to the user-specified filter characteristics. The reference files are computed off-line using Gröbner basis techniques or numerical techniques based on the exploration of a monodromy group. The use of such continuation techniques, combined with an efficient implementation of the integrator, drastically reduces the computational time.
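The predictor-corrector continuation idea can be sketched generically: to solve F(x; t1) = 0 from a known solution at t0, march t in small steps, predict from the previous solution and correct by a few Newton iterations. The toy scalar example below (ours, unrelated to the actual Dedale-HF code) tracks a root of x^3 + x - t as t goes from 0 to 2.

```python
def continuation(f, fx, x0, t0, t1, steps=100, newton_iters=5):
    """Predictor-corrector continuation: track a root of f(x, t) = 0
    from (x0, t0) up to t = t1."""
    x, t = x0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        t += dt
        # predictor: reuse the previous x (zeroth-order prediction);
        # corrector: a few Newton steps at the new parameter value
        for _ in range(newton_iters):
            x -= f(x, t) / fx(x, t)
    return x

f = lambda x, t: x ** 3 + x - t
fx = lambda x, t: 3 * x ** 2 + 1
root = continuation(f, fx, 0.0, 0.0, 2.0)
assert abs(root - 1.0) < 1e-10   # since 1^3 + 1 = 2
```

In Dedale-HF the role of the parameter t is played by the filtering characteristics, and the starting solution comes from the reference files.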

Dedale-HF has been licensed to, and is currently used by TAS-Espana.

For the matrix-valued rational approximation step, Presto-HF relies on RARL2. Constrained realizations are computed using the Dedale-HF software. As a toolbox, Presto-HF has a modular structure, which allows one for example to include some building blocks in an already existing software.

The delay compensation algorithm is based on the following assumption: far off the pass-band, one can reasonably expect a good approximation of the rational components of S11 and S22 by the first few terms of their Taylor expansion at infinity, i.e., a low-degree polynomial in 1/s. Using this idea, a sequence of quadratic convex optimization problems is solved in order to obtain appropriate compensations. In order to check the previous assumption, one has to measure the filter on a larger band, typically three times the pass-band.
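The far-off-band assumption can be checked on a toy rational function: away from its poles, S(s) is well approximated by a low-degree polynomial in 1/s, which a least-squares fit recovers. The sketch below (our own illustration, not the Presto-HF code, which solves constrained quadratic programs) fits such a truncated expansion.

```python
import numpy as np

# Toy scattering entry: S(s) = 1/(s + 1), measured far off the pass-band
w = np.linspace(50.0, 150.0, 200)
s = 1j * w
S = 1.0 / (s + 1.0)

# Fit a degree-3 polynomial in the normalized variable v = s_ref / s
s_ref = 100.0
v = s_ref / s
V = np.stack([v ** k for k in range(4)], axis=1)
a, *_ = np.linalg.lstsq(V, S, rcond=None)

# The fit reproduces the data to the accuracy of the truncated expansion
assert np.max(np.abs(V @ a - S)) < 1e-6

# Taylor expansion at infinity: 1/(s+1) = 1/s - 1/s^2 + 1/s^3 - ...
c1 = a[1] * s_ref        # recovered coefficient of 1/s
c2 = a[2] * s_ref ** 2   # recovered coefficient of 1/s^2
assert abs(c1 - 1.0) < 1e-2 and abs(c2 + 1.0) < 0.5
```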

This toolbox has been licensed to (and is currently used by) Thales Alenia Space in Toulouse and Madrid, Thales Airborne Systems and Flextronics (two licenses). Xlim (University of Limoges) is a heavy user of Presto-HF within the academic filtering community, and some free license agreements have been granted to the microwave department of the University of Erlangen (Germany) and the Royal Military College (Kingston, Canada).

The method is a steepest-descent algorithm. A parametrization of MIMO systems is used, which ensures that the stability constraint on the approximant is met. The implementation, in Matlab, is based on state-space representations.

RARL2 performs the rational approximation step in the software tools PRESTO-HF and FindSources3D. It is distributed under a particular license, allowing unlimited usage for academic research purposes. It was released to the universities of Delft and Maastricht (the Netherlands), Cork (Ireland), Brussels (Belgium), Macao (China) and BITS-Pilani Hyderabad Campus (India).

RARL2 is a software for rational approximation. It computes a stable rational L2-approximation of specified order to a given L2-stable (L2 on the unit circle, analytic in the complement of the unit disk) matrix-valued function. This can be the transfer function of a multivariable discrete-time stable system. RARL2 takes as input either a realization of the function, its first Fourier coefficients, or discretized values on the circle.

It thus performs model reduction in the first or the second case, and leans on frequency data identification in the third. For band-limited frequency data, it could be necessary to infer the behavior of the system outside the bandwidth before performing rational approximation.

An appropriate Möbius transformation makes it possible to use the software for continuous-time systems as well.
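One common such Möbius map is the bilinear transform s = 2 fs (z - 1)/(z + 1), which exchanges a half-plane and the unit disk; SciPy exposes one version of it, illustrated below (a generic example, not tied to RARL2) on a first-order low-pass filter.

```python
import numpy as np
from scipy import signal

# Continuous-time H(s) = 1/(s + 1): stable, with a pole at s = -1
b, a = [1.0], [1.0, 1.0]

# Bilinear (Mobius) substitution s = 2 fs (z - 1)/(z + 1)
bz, az = signal.bilinear(b, a, fs=1.0)

# Stability is preserved: the discrete poles lie inside the unit circle
poles = np.roots(az)
assert np.all(np.abs(poles) < 1.0)
```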

Sollya is an interactive tool where the developers of mathematical floating-point libraries (libm) can experiment before actually developing code. The environment is safe with respect to floating-point errors, i.e., the user precisely knows when rounding errors or approximation errors happen, and rigorous bounds are always provided for these errors.

Among other features, it offers a fast Remez algorithm for computing polynomial approximations of real functions, as well as an algorithm for finding good polynomial approximants with floating-point coefficients to any real function. It also provides algorithms for the certification of numerical codes, such as Taylor models, interval arithmetic, or certified supremum norms.
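For flavor, a near-minimax polynomial approximant (Chebyshev interpolation, which a Remez iteration such as Sollya's would then refine to true minimax) can be computed in a few lines of NumPy:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Degree-10 Chebyshev interpolant of exp on [-1, 1]: close to minimax
p = C.chebinterpolate(np.exp, 10)

x = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(C.chebval(x, p) - np.exp(x)))
assert err < 1e-9   # near-minimax error for this degree
```

Sollya additionally controls the floating-point representation of the coefficients and certifies the resulting error bounds, which plain floating-point interpolation does not.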

It is available as free software under the CeCILL-C license.

Application domains are naturally linked to the problems described in Sections 3.2.1 and 3.2.2, under a common umbrella of function-theoretic techniques as described in Section 3.3.

Solving over-determined Cauchy problems for the Laplace equation on a spherical layer (in 3-D) in order to extrapolate incomplete data (see Section 3.2.1) is a necessary ingredient of the team's approach to inverse source problems, in particular for applications to EEG, see 9. Indeed, the latter involves propagating the initial conditions through several layers of different conductivities, from the boundary shell down to the center of the domain where the singularities (i.e., the sources) lie. Once propagated to the innermost sphere, it turns out that traces of the boundary data on 2-D cross sections coincide with analytic functions with branched singularities in the slicing plane 7, 46. The singularities are related to the actual location of the sources, in that their moduli reach a maximum precisely when the slicing plane contains one of the sources. Hence we are back to the 2-D framework of Section 3.3.3, and recovering these singularities can be performed via best rational approximation. The goal is to produce a fast and sufficiently accurate initial guess on the number and location of the sources in order to run heavier descent algorithms on the direct problem, which are more precise but computationally costly and often fail to converge if not properly initialized. Our belief is that such a localization process can add a geometric, valuable piece of information to the standard temporal analysis of EEG signal records.

In this connection, a dedicated software FindSources3D (FS3D, see Section 3.4.3) has been developed, in collaboration with the Inria team Athena (now Cronos) and the CMA - Mines ParisTech. Its Matlab version now incorporates the treatment of MEG data, the aim being to handle simultaneous EEG-MEG recordings available from our partners at INS, La Timone hospital, Marseille. Indeed, it is now possible to use EEG and MEG measurement devices simultaneously, in order to measure both the electrical potential and a component of the magnetic field (its normal component on the MEG helmet, which can be assumed to be spherical). Solving the inverse source problem from joint EEG and MEG data actually improves the accuracy of the source estimation. Note that FS3D takes as inputs actual EEG and MEG measurements in the form of time signals, and performs a suitable singular value decomposition in order to separate independent sources.

It appears that, in the rational approximation step, multiple poles behave favorably with respect to branched singularities. This is due to the physical assumptions of the dipolar current source model: for EEG data, which correspond to measurements of the electrical potential, one should consider triple poles; this is also the case for MEG (magneto-encephalography) data. However, for (magnetic) field data produced by magnetic dipolar sources within rocks, one should consider poles of order five. Though numerically observed in 9, there is so far no mathematical justification of why multiple poles generate such strong accumulation of the poles of the approximants. This intriguing property, however, definitely helps source recovery and will be the topic of further study. It is used to automatically estimate the “most plausible” number of sources (numerically: up to 3, at the moment).

Furthermore, other approaches are being studied for EEG and other brain imaging modalities. They consist in regularizing the inverse source problem by a total variation constraint on the source term (a measure), added to the quadratic data approximation criterion (see Section 6.1.2), and presently focus on surface-distributed models. This parallels the path taken for inverse magnetization problems (see Sections 4.3 and 6.1.1).
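Schematically, with hypothetical notation ($A$ the forward operator, $b$ the data, $\mu$ the unknown source measure, $\lambda>0$ a regularization parameter), such a regularized criterion reads:

```latex
\min_{\mu}\; \frac{1}{2}\,\|A\mu - b\|_{L^2}^2 \;+\; \lambda\,\|\mu\|_{TV},
```

where $\|\mu\|_{TV}$ denotes the total variation norm of the measure $\mu$, whose minimization favors sparse (singular) solutions.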

Generally speaking, inverse potential problems, similar to the one appearing in Section 4.2, occur naturally in connection with systems governed by Maxwell's equations in the quasi-static approximation regime. In particular, they arise in magnetic reconstruction issues. A specific application is to geophysics, which led us to form the Inria Associate Team Impinge (Inverse Magnetization Problems IN GEosciences), together with MIT and Vanderbilt University, which reached the end of its term in 2018.

To set up the context, recall that the Earth's geomagnetic field is generated by convection of the liquid metallic core (geodynamo) and that rocks become magnetized by the ambient field as they are formed, or after subsequent alteration. Their remanent magnetization provides records of past variations of the geodynamo, which are used to study important processes in Earth sciences such as the motion of tectonic plates and geomagnetic reversals. Rocks from Mars, the Moon, and asteroids also carry remanent magnetization, which indicates the past presence of core dynamos. Magnetization in meteorites may even record fields produced by the young Sun and the protoplanetary disk, which may have played a key role in solar system formation.

For a long time, paleomagnetic techniques were only capable of analyzing bulk samples and computing their net magnetic moment. The development of SQUID microscopes has recently extended the spatial resolution to sub-millimeter scales, raising new physical and algorithmic challenges. The associate team Impinge aimed at tackling them, experimenting with the SQUID microscope set up in the Paleomagnetism Laboratory of the department of Earth, Atmospheric and Planetary Sciences at MIT. Typically, pieces of rock are sanded down to a thin slab, and the magnetization has to be recovered from the field measured on a planar region at small distance from the slab.

Mathematically speaking, both the inverse source problems for EEG from Section 4.2 and the inverse magnetization problems described presently amount to recovering the (3-D valued) quantity

outside the volume

Another timely instance of inverse magnetization problems lies with geomagnetism. Satellites orbiting around the Earth measure the magnetic field at many points, and nowadays it is a challenge to extract global information from those measurements. In collaboration with C. Gerhards (Geomathematics and Geoinformatics Group, Technische Universität Bergakademie Freiberg, Germany), we started to work on the problem of separating the magnetic field due to the magnetization of the globe's crust from the magnetic field due to convection in the liquid metallic core. The techniques involved are variants, in a spherical context, of those developed within the Impinge associate team for paleomagnetism, see Section 6.1.1.

Through contacts with CNES (Toulouse) and UPV (Bilbao), the team got involved in the design of amplifiers which, unlike filters, are active devices. A prominent issue here is stability. Twenty years ago, it was not possible to simulate unstable responses, and only after building a device could one detect instability. The advent of so-called harmonic balance techniques, which compute steady-state responses of linear elements in the frequency domain and look for a periodic state, in the time domain, of a network connecting these linear elements via static non-linearities, made it possible to compute the harmonic response of a (possibly nonlinear and unstable) device 87. This has had tremendous impact on design, and there is a growing demand for software analyzers. The team is also becoming active in this area.
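The flavor of harmonic balance can be conveyed by a toy single-harmonic computation on a scalar Duffing-type equation; the equation, coefficients and routine below are hypothetical illustrations, not the industrial solvers referenced above:

```python
# Toy harmonic balance: Duffing oscillator x'' + x + eps*x^3 = F*cos(w t).
# With the one-harmonic ansatz x(t) = A*cos(w t), balancing the fundamental
# frequency (using cos^3 = (3*cos + cos(3.))/4) gives the algebraic equation
#   (1 - w**2)*A + (3*eps/4)*A**3 = F,
# which replaces time-domain simulation by a nonlinear system in A.
def harmonic_balance_amplitude(eps, w, F, a0=1.0, iters=50):
    """Solve the fundamental-balance equation for A by Newton's method."""
    a = a0
    for _ in range(iters):
        g = (1 - w**2) * a + 0.75 * eps * a**3 - F
        dg = (1 - w**2) + 2.25 * eps * a**2
        a -= g / dg
    return a

A = harmonic_balance_amplitude(eps=0.1, w=0.5, F=1.0)
residual = (1 - 0.5**2) * A + 0.075 * A**3 - 1.0
```

Real harmonic balance solvers keep many harmonics and many network unknowns, but the principle (balancing frequency-domain responses of linear parts against static non-linearities) is the same.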

In this connection, there are two types of stability involved. The first is stability of a fixed point, around which the linearized transfer function accounts for small-signal amplification. The second is stability of a limit cycle, which is reached when the input signal is no longer small and truly nonlinear amplification is attained (e.g., because of saturation). Initial applications by the team were concerned with the first type of stability, and emphasis was put on defining and extracting the “unstable part” of the response, see Section 6.2. Since then, the stability check for limit cycles has seen important theoretical advances. Specifically, the exponential stability of the high frequency limit of a circuit was established last year in 4, implying that there are at most finitely many unstable poles, and no other unstable singularity, for the monodromy operator around the cycle. Furthermore, the links between the monodromy operator and the (operator-valued) harmonic transfer function of the linearized system along the trajectory were brought to light in 23. Numerical algorithms are now under investigation, while important pending issues involve: (i) whether poles of the harmonic transfer function must be poles of each entry thereof, at least generically, and (ii) describing the stable spectrum of the harmonic transfer function of the linearized system, in particular understanding its continuous and essential parts.

One of the best training grounds for function-theoretic applications by the team is the identification and design of physical systems whose performance is assessed frequency-wise. This is the case of electromagnetic resonant systems which are of common use in telecommunications.

In space telecommunications (satellite transmissions), constraints specific to on-board technology lead to the use of filters with resonant cavities in the microwave range. These filters serve multiplexing purposes (before or after amplification), and consist of a sequence of cylindrical hollow bodies, magnetically coupled by irises (orthogonal double slits). The electromagnetic wave that traverses the cavities satisfies the Maxwell equations, forcing the tangent electrical field along the body of the cavity to be zero. A deeper study of the Helmholtz equation shows that an essentially discrete set of wave vectors is selected.
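Schematically, with hypothetical notation ($E$ the electric field, $k$ the wavenumber, $\Omega$ the cavity), the boundary-value problem behind this selection reads:

```latex
\Delta E + k^{2} E = 0 \quad \text{in } \Omega, \qquad E_{\tan} = 0 \quad \text{on } \partial\Omega,
```

and this eigenvalue problem admits nontrivial solutions only for a discrete set of wavenumbers $k$, the resonant modes of the cavity.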

The study of resonances also led us to another inverse problem. There, the singularity expansion method featuring the above-mentioned discrete set of wave vectors is instrumental in object detection. In this respect, we started an academic collaboration with LEAT (Univ. Côte d'Azur, France; J.-Y. Dauvignac, N. Fortino, Y. Zaki) on the topic of inverse scattering using frequency-dependent measurements. As opposed to classical electromagnetic imaging, where several spatially distributed sensors are used to identify the shape of an object by means of scattering data at a single frequency, a discrimination process between different metallic objects is sought here by means of a single sensor, or a reduced number of sensors, operating on a whole frequency band. In short, the spatial multiplicity and complexity of antenna sensors is traded against a simpler architecture performing a frequency sweep.

The setting is shown in Figure 1. The total field

The subscripts

In order to gain some insight, we started a full study of the particular case when the scatterer is a spherical PEC (Perfectly Electric Conductor). In this case the Maxwell equations can be solved “explicitly” by means of expansions in series of vector spherical harmonics. We showed in particular that in this case

where

In order to perform the rational approximation of the function

where

where the coefficients

Numerical simulations showed that even though the creeping wave part is negligible compared to the optic part at high frequencies, it needs to be taken into account around the band of measured frequencies for the rational approximation.

Furthermore, the physical interpretation of these two terms leads us to consider that the creeping wave part should carry more information about the scatterer, and we want to investigate the conjecture that the poles of

We plan in the future to investigate a generalization of this form for other PEC scatterers. See Section 6.4.

The goal is to invert magnetizations carried by a rock sample from measurements of the magnetic field nearby. A typical application is when the sample is shaped into thin slabs, with measurements taken by a superconducting quantum interference device (SQUID). Figure 2 sketches the corresponding experimental setup, brought to our attention by collaborators from the Earth and Planetary Sciences Laboratory at MIT, in the case when the sample is modeled as a parallelepiped.

When the measurement area is a disk (or even a cylinder, if its thickness is not negligible), the issue of asymptotic estimates has undergone new developments. As part of A. Yousfi's thesis work, we derived a much more straightforward way of establishing the asymptotic estimates of the moments of

Instability of high-order asymptotic formulas for the net moment and the incompleteness of the measurements naturally give rise to two issues: noise filtering and field prolongation. One way to address both issues at once is so-called spectral extrapolation. Such an approach consists in finding a suitable choice of basis functions which are adapted to the structure and geometry of the problem, but at the same time are naturally defined outside of the original region. Using an eigenbasis of a suitable integral operator related to the forward problem, we attempt to construct a field extrapolant which extends the available measurements in a way that respects the structure of the problem (such as harmonicity, support of the source, and qualitative behavior at infinity). Restricting the number of terms in the expansion over this basis also provides a regularization strategy, achieving a desired degree of field denoising.
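The idea can be sketched on a 1-D toy analog; the kernel, geometry and truncation level below are hypothetical stand-ins for the SQUID setting, and a truncated SVD of the discretized forward operator is used as a simplified variant of the eigenbasis approach:

```python
import numpy as np

# The "measured" field is a Poisson-type convolution of a compactly
# supported source; we regularize by truncating the SVD of the forward
# operator and use the model to evaluate the field outside the window.
h = 0.1
kernel = lambda x, y: (h / np.pi) / ((x - y) ** 2 + h ** 2)

xs = np.linspace(-0.3, 0.3, 120)                  # source support grid
ds = xs[1] - xs[0]
m_true = np.exp(-50 * xs ** 2)                    # hypothetical source
x_meas = np.linspace(-0.5, 0.5, 101)              # measurement window

A = kernel(x_meas[:, None], xs[None, :]) * ds     # forward operator (window)
rng = np.random.default_rng(1)
data = A @ m_true + 1e-7 * rng.standard_normal(x_meas.size)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
n = 10                                            # truncation = regularization
m_rec = Vt[:n].T @ ((U[:, :n].T @ data) / s[:n])  # truncated-SVD inversion

x_out = np.linspace(0.6, 0.9, 4)                  # extrapolation points
A_out = kernel(x_out[:, None], xs[None, :]) * ds
rel_err = (np.max(np.abs(A_out @ m_rec - A_out @ m_true))
           / np.max(np.abs(A_out @ m_true)))
```

Truncating at `n` terms both filters the noise and fixes the smoothness class of the extrapolant, mirroring the regularization role of the eigenbasis truncation described above.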

The work on field extrapolation initiated at the end of 2022 has been advanced further. In particular, several directions were taken to develop a different methodology.

First, after realizing that the previously developed spectral extrapolation procedure, based on the forward operator featuring only one kernel function (the vertical derivative of the Poisson kernel), was somewhat naive, we demonstrated that the extrapolation results could be improved by introducing several local extrapolation steps. Namely, by a procedure reminiscent of the chain method for local analytic continuation, we gradually increase the extrapolation area up to some intermediate size, after which the previous naive procedure can be applied with a better outcome. The local stepping is computationally expensive, and the details of its implementation should be investigated more thoroughly for further improvement.

We have also delved into another approach in a somewhat similar fashion. Namely, it was observed that the spectral extrapolation approach could be made exact if the basis functions are chosen as vector-eigenfunctions of a

A third approach to field extrapolation is much more in the spirit of previous works of the team. Here, we use techniques of constrained approximation in Hilbert spaces. Namely, given a tolerance level for measurement errors, we search for an extrapolant which agrees in the best possible manner with the original field data while respecting the harmonicity of the field. Currently, we have done this only for scalar potential simulated data (a toy problem for the internship work of M. Khalid Omer) rather than for vertical field component data (which would correspond to the physical SQUID setup at EAPS, MIT). The extension of this methodology to the latter case should be straightforward.
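Schematically, with hypothetical notation ($H$ a Hilbert space of fields harmonic in the extended region, $K$ the measurement area, $f$ the data, $\epsilon$ the tolerance level), such a constrained (bounded extremal) problem reads:

```latex
\min \;\bigl\{\, \|u\|_{H} \;:\; \|\,u|_{K} - f\,\|_{L^{2}(K)} \le \epsilon \,\bigr\},
```

and the minimizer serves as the extrapolant: it is as small as possible in the harmonic class while fitting the data up to the tolerance.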

Finally, we have also started exploring the possibility of obtaining an extrapolant directly from the solution of some integral equation. Even though we have succeeded in obtaining such an equation, no constructive results are available here yet. Progress in this direction would allow dealing with volumetric magnetization sources.

Besides, we pursued a joint research effort with E. A. Lima and B. P. Weiss (EAPS dept., MIT, USA), C. S. Borlina (EAPS dept., Johns Hopkins Univ., USA) and D. Hardin (Vanderbilt Univ., USA), investigating whether it is possible to estimate the moment of an unknown magnetization by that of a magnetization producing a field close enough to the observations. In other words, is full inversion a demonstrably good (though admittedly costly) method to estimate the moment? We showed in 2022 that the answer is essentially no. More precisely, the field-to-moment map (which exists because silent magnetizations have zero moment) is discontinuous, even when magnetizations are endowed with a rather weak topology (like the weak-star topology on distributions of fixed order and given support), and even if models for the magnetization are restricted to a very small class. In 2023, we started studying the speed of approximation of a given field by the field of a multipole as the degree of the latter grows large. This speed must be compared with the norm of the operator mapping the field (or the observed portion thereof) to the moment within the model class of multipoles, in order to decide whether full inversion allows one to efficiently estimate the moment this way. This line of investigation is still in progress, and an article describing the method has been published 15.

In collaboration with D. Hardin at Vanderbilt University (Nashville, USA), we also made theoretical progress in describing the outcome of regularization techniques that penalize the total variation of an unknown measure modeling the magnetization, for inverse magnetization problems on compact samples of zero 3-D Lebesgue measure with connected complement. Namely, we proved that natural discretizations of the regularized criterion have a unique minimizer in this case, which allows us to establish the consistency of such schemes. Also, when the sample has volume (like a ball), we showed that these schemes select a solution on the boundary of the sample (which is the balayage of the “true” magnetization, whatever the latter is).

In another connection, the development of QDM sensors has heightened the need for 3-D inversion of field maps. Indeed, while the QDM can be put very close to the surface of the sample, which makes for improved resolution, the thickness of the sample becomes appreciable compared to the distance to the sensor and can no longer be neglected. In his thesis 20, Masimba Nemaire has taken up the study of silent magnetizations of

Our software FindSources3D (FS3D, see Sections 3.4.3, 4.2), dedicated to pointwise source estimation in EEG–MEG, is now used by some of our collaborators. Together with M. Darbas (LAGA, Univ. Sorbonne Paris Nord) and P.-H. Tournier (lab. JLL, Sorbonne Univ.), we considered the EEG inverse problem with a variable conductivity in the intermediate skull layer, in order to model hard/spongy bones, especially for neonates. Coupled with FS3D, the related transmission step is performed using a mixed variational regularization and finite elements (FreeFem++) on tetrahedral meshes, and furnishes very promising results 14.

We studied the uniqueness of a critical point of the quadratic criterion in the electroencephalography (EEG) problem for a single-dipole situation (PhD of P. Asensio 19; see also 22). This issue is essential for the use of descent algorithms. It leads to the study of the following criterion:

i.e., a unique location and moment pair

where

We considered a different class of source models, not necessarily dipolar, and related estimation algorithms. Such models may be supported on the surface of the cortex or in the volume of the encephalon. We represent sources by vector-valued measures, and in order to favor sparsity in this infinite-dimensional setting we use a TV (total variation) regularization term, as in Section 6.1.1. The approach follows that of 8 and is implemented through two different algorithms, whose convergence properties were studied in the PhD thesis of M. Nemaire 20. Tests on synthetic data provided good quality results, though they are numerically quite costly to obtain.

Progress was made on the inverse problem of “Stereo” EEG (SEEG), where the potential is measured by deep electrodes and sensors within the brain, as in the scheme of Figure 3. Assuming that the
current source term

The associated forward and inverse problems were solved for both an infinite medium conductor and a more realistic single model of the brain

The numerical implementation was done by approximating the density

The inverse problem for SEEG is ill-posed, and a Tikhonov regularization is used in order to solve it: find a distribution
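In a minimal sketch (generic operators and hypothetical sizes, not the actual SEEG discretization), Tikhonov regularization of a discretized ill-posed problem looks as follows:

```python
import numpy as np

# Tikhonov-regularized inversion: given a discretized forward operator A
# mapping a source distribution to electrode potentials, minimize
#   ||A q - b||^2 + alpha * ||q||^2,
# whose minimizer has the closed form (A^T A + alpha I)^{-1} A^T b.
def tikhonov_solve(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Tiny synthetic example (all sizes hypothetical)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))           # under-determined: ill-posed
q_true = np.zeros(60); q_true[[5, 30]] = 1.0
b = A @ q_true + 1e-3 * rng.standard_normal(40)
q_hat = tikhonov_solve(A, b, alpha=1e-2)
residual = np.linalg.norm(A @ q_hat - b)
```

The parameter `alpha` trades data fidelity against the size of the solution, which stabilizes the inversion against noise.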

We now consider

We are now able to handle the MEG, EEG and SEEG modalities, simultaneously or not. The simultaneous handling of the different modalities is made more straightforward by coupling the source localization problem with the inverse transmission problem.

The data transmission needs to be considered in the source localization problem so that the problem remains faithful to the data (electric or magnetic) when realistic head geometries are used in the computations. This also allows one to exploit properties of the problem that bring more regularity to the solutions. These considerations warrant that the transmission problem be solved simultaneously with the source localization problem, but in this connection the vector spaces used to model the sources and electrical properties need to be handled with care. The mathematical formulation allows the source to be a distribution, whereas the electric potential is always a function. This suggests that the optimal methods to recover the source and the electric potential may not be the same. We proposed an alternating minimization procedure to solve the problem, and managed to show that it converges to the desired solution, with linear convergence of the Tikhonov functional.

For more general source terms (vector valued measures,

This year we continued the study, initiated in 23 in collaboration with J.-B. Pomet from the McTao team (Inria Sophia Antipolis), of the harmonic transfer function (HTF) of a linear difference delay control system having finitely many (possibly non-commensurable) delays and periodic coefficients (having the same period). That is, an input-output system of the form:

where

In a joint work with T. Qian and P. Dang from the University of Macao 53, we proved that, on a compact hypersurface, magnetizations that are silent from outside (i.e., that produce no magnetic field in the unbounded component of the complement of the surface) form the orthogonal space to harmonic gradients inside the surface. This result teams up with the Hardy-Hodge decomposition to produce a description of silent magnetization distributions of

Prompted by these works, we developed a version of the Hardy-Hodge decomposition for the Helmholtz equation, in collaboration with H. Haddar from the team IDEFIX (Inria Saclay & ENSTA-Paris Tech) 25. Our motivation to study such issues stems from a linearized model for an acoustic scattering problem with an anisotropic layer, in the limit as the thickness of the layer goes to 0. From the mathematical point of view, the problem generalizes the one of recovering magnetizations mentioned above. Here, we work with sources

where

Like in the Poisson case, it can be shown that

We also pursued our work on a direct characterization of silent regular vector fields for the zero extension; in this case one can describe the norm-minimizing vector field equivalent to a given vector field

When i.e., vector fields representing magnetizations that produce the zero magnetic field outside

Some fundamental aspects of the temporal behavior of solutions of the time-domain wave equation (in spatial dimensions 1, 2 and 3) were studied in a recent work 21 (previously a preprint, now completed and submitted for journal publication), in connection with designing an efficient time-domain solver for the Helmholtz equation in the high-frequency regime with smooth coefficients. In physical 3-D settings, the rate of stabilization of the solution is linked to the location and strength of the resonances (poles of the meromorphic continuation of the resolvent operator). Since information on the resonances is useful for inverse problems (object identification), some already developed numerical methods could be employed for their efficient computation. Those methods often rely on the complex-scaling (PML) approach to open-domain problems 75 and have yet to be compared with the rational-approximation viewpoint adopted by the team.

Another connection between time-domain methods for elliptic PDE problems and inverse problems is the so-called controllability approach, which has recently received renewed interest 65. In such an approach, one seeks to accelerate temporal convergence to the stationary solution of the time-domain wave equation with a periodic source term by minimizing the difference between the initial solution and the solution after one time period. Once properly set up, this yields the solution to the original Helmholtz equation without the large computational cost of simulating the time-domain problem for long times. An extension of such methods could be a topic for future joint work with the Atlantis team.

Related to our collaboration with LEAT, the rational approximation of the transfer function

where i.e., that the capacity of the set where the pointwise error is
bigger than some spurious poles that
wander about the domain of analyticity.

This year we were able to obtain an analog of the Nuttall-Pommerenke theorem on convergence in capacity, not for problem (3), as we seek in the framework of our collaboration with LEAT, but for a least-squares analog of Padé approximation that goes as follows:

where
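A generic least-squares Padé construction conveys the flavor of such schemes; the normalization and coefficient ranges below are illustrative choices, not necessarily the exact problem studied:

```python
import numpy as np

# Least-squares Padé: given Taylor coefficients c[0..N] of f, seek p/q
# with deg p = n_p, deg q = n_q and q(0) = 1, such that the coefficients
# of f*q - p of orders n_p+1 .. N vanish in the least-squares sense.
def ls_pade(c, n_p, n_q):
    N = len(c) - 1
    rows, rhs = [], []
    for k in range(n_p + 1, N + 1):
        # linear condition sum_{j=0}^{n_q} q_j * c_{k-j} = 0, with q_0 = 1
        rows.append([c[k - j] if 0 <= k - j <= N else 0.0
                     for j in range(1, n_q + 1)])
        rhs.append(-c[k])
    q_tail, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    q = np.concatenate([[1.0], q_tail])
    # p = truncation of the product c * q to degree n_p
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n_q) + 1))
                  for k in range(n_p + 1)])
    return p, q

# Example: f(z) = 1/(1 - 2z) has Taylor coefficients 2^k, and its [0/1]
# approximant recovers f exactly, with q = (1, -2).
c = 2.0 ** np.arange(12)
p, q = ls_pade(c, n_p=0, n_q=1)
```

Using more Taylor coefficients than unknowns, and solving the linearized conditions in the least-squares sense, is what distinguishes this from classical Padé interpolation.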

Some direct problems have been considered from an analytical viewpoint that could shed light on their inverse counterparts.

The work 18 (previously a preprint, now published) deals with the extension of a classical model of a contact problem with a sliding punch in the presence of material wear. The semi-explicit form of the obtained results and the generality of the model call for looking into relevant inverse problems. These could include, for example, determining an optimal initial shape of the punch profile that would minimize the worn volume.

In joint work 17 with E. Pozzi (Saint Louis Univ., Missouri, USA), we present an operator-theoretic approach to some inverse problems. It gives an overview of the solution of linear forward and inverse problems for a class of elliptic PDEs in two-dimensional domains, in the framework of Banach spaces and operators. We focused on the conductivity equation, with smooth enough conductivity (see Section 3.2.1), and with Dirichlet, Neumann or mixed boundary conditions. For domains with Dini-smooth boundary, the approach is based on the properties of generalized Hardy spaces.

When solving fixed-point problems using iterative methods, the rate of convergence is determined by the spectral radius of the operator, and how quickly the iterative methods settle to this rate is determined by the growth of the resolvent of the operator. It is therefore of interest to study the spectra of operators. Here we studied the spectra of Toeplitz operators, together with S. Kupin (IMB, Univ. Bordeaux) and L. Golinskii (B. Verkin Institute for Low Temperature Physics and Engineering, Kharkiv, Ukraine).
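The dependence of the convergence rate on the spectral radius can be illustrated with a small numerical sketch (a random finite-dimensional contraction with arbitrary sizes; purely illustrative):

```python
import numpy as np

# The error of the fixed-point iteration x_{k+1} = T x_k + g contracts
# asymptotically like rho(T)^k, where rho(T) is the spectral radius.
rng = np.random.default_rng(2)
M = rng.standard_normal((30, 30))
T = 0.5 * M / np.max(np.abs(np.linalg.eigvals(M)))   # rho(T) = 0.5
g = rng.standard_normal(30)
x_star = np.linalg.solve(np.eye(30) - T, g)          # exact fixed point

x = np.zeros(30)
errs = []
for _ in range(40):
    x = T @ x + g
    errs.append(np.linalg.norm(x - x_star))

# empirical per-step contraction factor, averaged over the last 20 steps,
# approaches rho(T) = 0.5
rate = (errs[-1] / errs[-21]) ** (1 / 20)
```

The transient before the iteration settles to this rate is governed by the non-normality of `T`, i.e., by the growth of its resolvent, which is what motivates the spectral studies above.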

Let

The overall and long-term goal is to enhance the quality of numerical computations.

This year, we worked on improving the glibc implementation for the ARM architecture. Indeed, a vectorized implementation was proposed by J. Ramsay during the summer on the glibc mailing list, based on two steps: first, a range reduction to reduce the problem to the evaluation of

We did a full error analysis of the proposed implementation, including an exhaustive search on selected sub-intervals to find the actual worst cases for the range reduction, the evaluation error and the combined error. We also proposed new values for the coefficients of the polynomial, in order to make it practically optimal among all polynomials in terms of the maximal total error (i.e., the error combining the range reduction error, the approximation error and the evaluation error). This work was done in collaboration with G. Melquiond (Toccata, Inria) and P. Zimmermann (Caramba, Inria). S. Chevillard presented it at the “Rencontres nationales du groupe de travail Arith du GDR IM” (RAIM 2023), and a report on the subject is currently being written.
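As an illustration of the general two-step scheme (range reduction, then polynomial evaluation), here is a self-contained sketch for the exponential in double precision; it is a generic stand-in, not the actual glibc code, and uses a plain Taylor polynomial rather than a practically optimal one:

```python
import math

# Step 1: write x = k*ln2 + r with |r| <= ln2/2, so exp(x) = 2^k * exp(r).
# Step 2: evaluate exp(r) by a short polynomial in Horner form.
LN2 = math.log(2.0)
P = [1/720, 1/120, 1/24, 1/6, 0.5, 1.0, 1.0]   # Taylor coeffs, high -> low

def exp_reduced(x):
    k = round(x / LN2)                 # integer part of the reduction
    r = x - k * LN2                    # reduced argument, |r| <= 0.3466
    acc = 0.0
    for c in P:                        # Horner evaluation of the polynomial
        acc = acc * r + c
    return math.ldexp(acc, k)          # exact multiplication by 2^k

max_err = max(abs(exp_reduced(x) - math.exp(x)) / math.exp(x)
              for x in [i / 7 for i in range(-35, 36)])
```

The total error combines exactly the three contributions mentioned above: the rounding error of the reduction `x - k*LN2`, the approximation error of the polynomial on the reduced interval, and the rounding errors of its Horner evaluation.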

Factas is part of the European Research Network on System Identification (ERNSI) since 1992. System identification deals with the derivation, estimation and validation of mathematical models of dynamical phenomena from experimental data.

ANR-18-CE40-0035, “REProducing Kernels in Analysis and beyond” (2019–2023).

Led by Aix-Marseille Univ. (IMM), involving Factas team, together with Bordeaux (IMB), Paris-Est, Toulouse Universities.

The project consists of several interrelated tasks dealing with topical problems in modern complex analysis, operator theory and their important applications to other fields of mathematics including approximation theory, probability, and control theory. The project is centered around the notion of the so-called reproducing kernel of a Hilbert space of holomorphic functions. Reproducing kernels are very powerful objects playing an important role in numerous domains such as determinantal point processes, signal theory, Sturm-Liouville and Schrödinger equations.

This project supported the PhD of M. Nemaire within Factas, co-advised by IMB partners.

GDR “Analyse Fonctionnelle, Harmonique et Probabilités”.

Led by Gustave Eiffel Univ. (LAMA), involving Factas team, together with several universities.

The GDR is concerned with five main axes: linear dynamics, Banach spaces and their operators, holomorphic dynamics, harmonic analysis, analysis and probability, and with the interactions between them.

L. Baratchart is a member of the program committee of the conference
Inverse problems: modeling and simulation (IPMS).

L. Baratchart is a member of the editorial board of
the journals Computational Methods and Function Theory (CMFT) and
Complex Analysis and Operator Theory (CAOT).

J. Leblond and M. Olivi are members of Terra Numerica.