
2024 Activity Report - Project-Team FACTAS

RNSR: 201822627W
  • Research center Inria Centre at Université Côte d'Azur
  • Team name: Functional Analysis for ConcepTion and Assessment of Systems
  • Domain: Applied Mathematics, Computation and Simulation
  • Theme: Optimization and control of dynamic systems

Keywords

Computer Science and Digital Science

  • A6.1.1. Continuous Modeling (PDE, ODE)
  • A6.2.1. Numerical analysis of PDE and ODE
  • A6.2.5. Numerical Linear Algebra
  • A6.2.6. Optimization
  • A6.3.1. Inverse problems
  • A6.3.3. Data processing
  • A6.3.4. Model reduction
  • A6.3.5. Uncertainty Quantification
  • A6.4.4. Stability and Stabilization
  • A6.5.4. Waves
  • A8.2. Optimization
  • A8.3. Geometry, Topology
  • A8.4. Computer Algebra
  • A8.10. Computer arithmetic

Other Research Topics and Application Domains

  • B2.6.1. Brain imaging
  • B2.8. Sports, performance, motor skills
  • B3.1. Sustainable development
  • B3.3. Geosciences
  • B5.4. Microelectronics
  • B8.4. Security and personal assistance
  • B9.1. Education
  • B9.5.5. Mechanics

1 Team members, visitors, external collaborators

Research Scientists

  • Juliette Leblond [Team leader, Inria, Senior Researcher]
  • Laurent Baratchart [Inria, Emeritus]
  • Sylvain Chevillard [Inria, Researcher]
  • Martine Olivi [Inria, Researcher]
  • Dmitry Ponomarev [Inria, ISFP]

PhD Students

  • Mubasharah Khalid Omer [Université Côte d’Azur]
  • Anass Yousfi [Université Côte d’Azur]

Interns and Apprentices

  • Dmytro Dmytrenko [Inria, Intern, from Apr 2024 until Aug 2024]

Administrative Assistants

  • Florence Barbara [Inria, until Jun 2024, part-time]
  • Vanessa Wallet [Inria, from Jul 2024, part-time]

External Collaborators

  • Jean-Paul Marmorat [CMA, Mines ParisTech, Sophia Antipolis]
  • Fabien Seyfert [HighFSolutions, Nice]

2 Overall objectives

The team develops constructive function-theoretic approaches to inverse problems arising in modeling and design, in particular for electro-magnetic systems as well as in the analysis of certain classes of signals.

Data typically consist of measurements or desired behaviors. The general thread is to approximate them by families of solutions to the equations governing the underlying system. This leads us to consider various interpolation, extrapolation, and approximation problems in classes of rational and meromorphic functions, harmonic gradients, or solutions to more general elliptic partial differential equations (PDE), in connection with inverse potential problems. A recurring difficulty is to control the singularities of the approximants.

The mathematical tools pertain to complex analysis, functional analysis, harmonic analysis, approximation theory, operator theory, potential theory, system theory, differential topology, optimization and computer algebra.

Targeted applications mostly concern non-destructive control from potential or field measurements in medical engineering (source recovery in magneto/electro-encephalography), paleo-magnetism (determining the magnetization of rock samples), and more recently obstacle identification (finding electrical characteristics of an object) as well as inverse problems in orthopedic surgery. For all of these, an endeavor of the team is to develop algorithms resulting in dedicated software.

3 Research program

Within the extensive field of inverse problems, much of the research by Factas deals with reconstructing solutions to classical PDE in dimension 2 or 3 along with their singularities, granted some knowledge of their behavior on part of the domain or of its boundary.

Such problems are severely ill-posed (in the sense of Hadamard): they may have no solution (whenever data are corrupted, since the underlying forward operator may only have dense range), several solutions (non-uniqueness, as the forward operator could be non-injective), and even in situations where there exists a unique solution (whence the forward operator is invertible), they suffer from instability (lack of continuity of the inverse operator). Their resolution thus requires regularizing assumptions or regularization processes, in order to set up well-posed problems and to derive efficient algorithms that furnish suitable approximated solutions.

The considered linear elliptic PDE are related to the Maxwell and wave equations, particularly in the quasi-static or time harmonic regime. This involves in particular Laplace, Poisson and conductivity equations, in which the source term often appears in divergence form. However, the Helmholtz equation also comes up as a formulation of the wave equation in the monochromatic regime.

The gist of our approach is to approximate the data by actual solutions of these PDE, assumed to lie in appropriate function spaces. This differs from standard approaches to inverse problems, where descent algorithms are applied to integration schemes of the direct problem; in such methods, it is the equation which gets approximated (in fact: discretized). This also naturally leads us to study convergent algorithms to approximate solutions of such infinite-dimensional optimization problems by solutions to finite-dimensional ones.

3.1 Elliptic PDEs and operators

Inverse problems studied by Factas involve systems governed by an equation of the form Lϕ = Ψ(m), where L is an elliptic partial differential operator and Ψ(m) a source term depending on some unknown quantity m. The data consist of incomplete measurements of the potential ϕ or of its gradient (the field) in a portion of space, away from the support of the source.

3.1.1 Inverse problems of Cauchy type

Laplace equation in dimension 2.

Here, as in the next section, we are concerned with the simplest case where Lϕ = Δϕ = 0 (without any source term, actually with m = 0) in some planar domain Ω ⊂ ℝ², with Δ the Euclidean Laplacian. The given data consist of measurements of ϕ and of its normal derivative on a subset E ⊂ ∂Ω of the domain's boundary, assuming they are somewhat regular; say, they should at least belong to Lp(E) for some p ≥ 1. The aim is to recover the harmonic function ϕ from partial knowledge of the Dirichlet-Neumann data, which is a classical boundary value problem. Identifying ℝ² with ℂ, conjugate-gradients of harmonic functions become holomorphic functions. More precisely, whenever ϕ is harmonic in a domain Ω, it admits a conjugate harmonic function ϕ˜ such that, thanks to the Cauchy-Riemann equations, the function f = ϕ + iϕ˜ is holomorphic in Ω; that is, ∂̄f = 0. This framework was first advocated in 59 and subsequently received considerable attention. Therefore, reconstructing a function harmonic in a plane domain Ω when Dirichlet-Neumann boundary conditions are known on a subset E ⊂ ∂Ω is equivalent to recovering a holomorphic function in Ω from its boundary values on E. This makes good sense in holomorphic Hardy spaces, where functions are entirely determined by their values on boundary subsets of positive linear measure; this is the framework for Problem (P) below, for simply connected, smooth enough domains Ω conformally equivalent to the unit disk 𝔻.

Let 𝕋 = ∂𝔻 be the unit circle. We denote by Hp the Hardy space of 𝔻 with exponent p, which is the closure of the polynomials in the Lp(𝕋)-norm if 1 ≤ p < ∞, and the space of bounded holomorphic functions in 𝔻 if p = ∞. Functions in Hp have well-defined boundary values in Lp(𝕋), which makes it possible to speak of (traces of) analytic functions on the boundary. To find an analytic function g in 𝔻 matching some measured values f approximately on a subset K of 𝕋, with K and 𝕋∖K of positive Lebesgue measure, we formulate the best constrained approximation problem (or bounded extremal problem “BEP”).

(P) Let 1 ≤ p ≤ ∞, f ∈ Lp(K), w ∈ Lp(𝕋∖K) and M > 0; find a function g ∈ Hp such that ‖g − w‖_{Lp(𝕋∖K)} ≤ M and g − f is of minimal norm in Lp(K) under this constraint.

There, w is a reference behavior capturing a priori assumptions on the solution off K (if known; otherwise, w can be set to 0), while M is some admissible deviation thereof. The value of p reflects the assumptions made on the given data. As shown in 41, 45, 48, the solution to this well-posed convex infinite-dimensional optimization problem can be obtained, when p ≥ 1, upon iterating with respect to a Lagrange parameter the solution to spectral equations for appropriate Hankel and Toeplitz operators. These spectral equations involve the solution to the special case K = 𝕋 of (P), which is a standard extremal problem 61.

In the Hilbertian framework p = 2, whenever f does not belong to the approximant set, Problem (P) rephrases as the following Tikhonov-regularized problem.

Let f ∈ L2(K), w ∈ L2(𝕋∖K) and λ > 0; find a function g ∈ H2 that minimizes ‖g − f‖_{L2(K)}² + λ ‖g − w‖_{L2(𝕋∖K)}².

Note that the Lagrange parameter λ is uniquely determined by the constraint ‖g − w‖_{Lp(𝕋∖K)} = M. The numerical resolution of (P) is performed by expanding the functions under study in the Fourier basis.
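
For the Hilbertian case p = 2, a minimal Python illustration of this Fourier-based resolution is sketched below. It is not the team's implementation (which iterates on the Lagrange parameter through Toeplitz/Hankel spectral equations): the approximant is truncated to an analytic trigonometric polynomial of some degree N, a fixed value of λ is assumed instead of adjusting it to saturate the constraint, and the data f, the reference w and the subset K are invented for the example.

```python
import numpy as np

# Sketch only: solve  min_g ||g - f||^2_{L2(K)} + lam * ||g - w||^2_{L2(T\K)}
# over truncated analytic polynomials g(e^{it}) = sum_{n=0}^{N} c_n e^{int},
# by discretizing the unit circle and solving a weighted linear least-squares
# problem for the Fourier coefficients c_n.

N = 30                        # truncation degree (assumed)
n_grid = 2000                 # number of grid points on the circle (assumed)
lam = 1e-2                    # fixed Tikhonov parameter (assumed)

t = 2 * np.pi * np.arange(n_grid) / n_grid
on_K = t < np.pi              # K = upper half of the circle (illustrative choice)

z = np.exp(1j * t)
f = 1.0 / (z - 0.5)           # data on K: trace of a function with a pole inside the disk
w = np.zeros(n_grid, dtype=complex)   # reference behavior off K

# Columns of V are the analytic Fourier modes e^{int}, n = 0..N.
V = np.exp(1j * np.outer(t, np.arange(N + 1)))

# Weighted least squares: weight 1 on K, weight sqrt(lam) off K.
wgt = np.where(on_K, 1.0, np.sqrt(lam))
rhs = np.where(on_K, f, w)
c, *_ = np.linalg.lstsq(V * wgt[:, None], rhs * wgt, rcond=None)

g = V @ c
print("approximation error on K :", np.linalg.norm((g - f)[on_K]) / np.sqrt(n_grid))
print("deviation from w off K   :", np.linalg.norm((g - w)[~on_K]) / np.sqrt(n_grid))
```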

Problem (P) also allows one to formulate the recovery of the so-called Robin coefficient on part of the boundary, see 34, 52. Various modifications of (P) can be tailored to meet specific needs. For instance, one can also impose bounds on the real or imaginary part of g − w on 𝕋∖K, together with prescribed pointwise values in 𝔻 11, which is useful when considering Dirichlet-Neumann problems. The analog of Problem (P) on an annulus, K being now a subset of the outer boundary, can be seen as a means to recover a harmonic function on the inner boundary from Dirichlet-Neumann data on K 68.

These considerations make it clear how to state similar problems in higher dimensions and for more general operators than the Laplacian, provided solutions are essentially determined by the trace of their gradient on part of the boundary; this is the case for sufficiently smooth elliptic equations provided that K has some interior in ∂Ω 34, 90.

Laplace equation in dimension 3.

Though originally considered in dimension 2, Problem (P) carries over naturally to higher dimensions, where analytic functions get replaced by gradients of harmonic functions. Namely, for n > 2, given some open set Ω ⊂ ℝⁿ and some ℝⁿ-valued vector field F on an open subset O of the boundary of Ω, we seek a harmonic function ϕ in Ω (where Δϕ = 0) whose gradient is close to F on O. This is another subject investigated by Factas, for the recovery of a harmonic function (up to an additive constant) in a ball or a half-space Ω from partial knowledge of its gradient, with ∇ϕ = F known on O. The question is significantly more difficult than its 2-D counterpart considered in the above paragraph, due mainly to the lack of multiplicative structure for harmonic gradients. Still, substantial progress has been made over the last years using methods of harmonic analysis and operator theory.

When Ω is a ball or a half-space, a substitute for holomorphic Hardy spaces is provided by the Stein-Weiss Hardy spaces of harmonic gradients 85.

On the unit ball 𝔹n, the analog of Problem (P) is Problem (Pn):

(Pn) Let 1 ≤ p ≤ ∞. Fix Γ an open subset of the unit sphere 𝕊 ⊂ ℝⁿ. Let further F ∈ Lp(Γ) and W ∈ Lp(𝕊∖Γ) be ℝⁿ-valued vector fields. Given M > 0, find a harmonic gradient G ∈ Hp(𝔹) such that ‖G − W‖_{Lp(𝕊∖Γ)} ≤ M and G − F is of minimal norm in Lp(Γ) under this constraint.

When p = 2, the BEP (Pn) was solved in 1, as well as its analog on a shell, when the tangent component of F is a gradient (when Γ is Lipschitz-smooth, the general case follows easily from this). The solution extends the work in 41 to the 3-D case, using a generalization of Toeplitz operators and expansions on the spherical harmonic basis. An important ingredient is a refinement of the Hodge decomposition, that we call the Hardy-Hodge decomposition, see Section 3.2.1.

Just like solving Problem (P) appeals to the solution of a standard extremal problem when K = 𝕋, our ability to solve Problem (Pn) will depend on the possibility to tackle the special case where Γ = 𝕊. This is a simple problem when p = 2, by virtue of the Hardy-Hodge decomposition together with the orthogonality of H2(𝔹) and H2(ℝⁿ∖𝔹), which is the reason why we were able to solve (Pn) in this case. Other values of p cannot be treated as easily and are still under investigation, especially the case p = ∞, which is of particular interest and presents itself as a 3-D analog to the Nehari problem 78. A companion to this problem is the one below, where the space of tangent divergence-free fields on 𝕊 is denoted by D(𝕊).

Let 1 ≤ p ≤ ∞ and F ∈ Lp(𝕊) be an ℝⁿ-valued vector field. Find G ∈ Hp(𝔹) and D ∈ D(𝕊) such that ‖G + D − F‖_{Lp(𝕊)} is minimum.

This question is especially relevant to electro-encephalography (EEG) and inverse magnetization issues, see Sections 4.1, 4.2. The latter problem can be reduced to the former in 2-D, since divergence-free vector fields on ℝ² supported on 𝕋 are real multiples of ie^{iθ}, but it is no longer so in higher dimension. Both problems arise in connection with inverse potential problems in divergence form, see Sections 3.2.2 and 3.2.3.

Conductivity equation.

Similar approaches can be considered for more general equations than the Laplace equation, for instance isotropic conductivity equations of the form Lϕ = div(σ∇ϕ) = 0, where σ is no longer a constant but a function taking positive values. Then, the relevant Hardy spaces in Problem (P) are those associated to a so-called conjugate Beltrami equation, ∂̄f = ν \overline{∂f} 66, with ν = (1 − σ)/(1 + σ); they are studied for 1 < p < ∞ in 6, 33, 37, 53. Expansions of solutions needed to constructively handle such issues in the specific case of linear fractional conductivities have been expounded in 58. Studying Hardy spaces of conjugate Beltrami equations is of interest in its own right. For Sobolev-smooth coefficients of exponent greater than 2, they were investigated in 6, 37. The case of the critical exponent 2 is treated in 33, which provides an initial example of a well-posed Dirichlet problem in the non-strictly elliptic case: the conductivity may be unbounded or vanish on sets of zero capacity and, accordingly, solutions need not be locally bounded. More importantly perhaps, the exponent 2 is also the key to a corresponding theory on general rectifiable domains in the plane, as coefficients of pseudo-holomorphic functions obtained by conformal transformation onto a disk are no better than L2-integrable in general, even if the initial problem has higher summability. Such generalizations are under study, in collaboration with E. Pozzi (Saint Louis University, Missouri, USA) and E. Russ (Université J. Fourier, Grenoble), and nontrivial connections between the regularity of the conformal parameterization of the domain and the range of exponents p for which the Dirichlet problem is solvable in Lp have been brought to light. In fact, for Lipschitz domains at least, this range of exponents coincides with the interval of p for which the modulus of the derivative of a conformal map from the unit disk onto Ω satisfies the so-called Ap property of Hunt-Muckenhoupt-Wheeden.

Such generalized Hardy classes were also used in 34 to address the uniqueness issue in the classical Robin inverse problem on a Lipschitz domain Ω ⊂ ℝⁿ, n ≥ 2, with uniformly bounded Robin coefficient, L2 Neumann data and conductivity of Sobolev class W1,r(Ω), r > n. We showed that uniqueness of the Robin coefficient on a subset of the boundary, given Cauchy data on the complementary part, does hold in dimension n = 2, thanks to a unique continuation result, but need not hold in higher dimension. This raises an open issue on harmonic gradients for n ≥ 3, namely whether positivity of the Robin coefficient is compatible with identical vanishing of the boundary gradient on a subset of positive measure.

3.1.2 Data extension problems

Closely related to the inverse problems are problems of data extension type. They differ from the Cauchy-type problems of Section 3.1.1 in that the given data are located in the interior of the domain Ω of validity of the equation rather than on its boundary ∂Ω. More precisely, these problems arise when the solution of an elliptic PDE with unknown coefficients or boundary data is given in some sub-region of Ω, and one wishes to extend this knowledge to a bigger region. Determining the solution in the bigger region may be the primary goal, but the motivation to consider such extension problems may also come from an attempt at improving the solution of an inverse problem (e.g., an inverse source estimation problem, see Section 3.2.3), or at generating additional information that makes the problem amenable to asymptotic methods in which an asymptotic parameter is related to the size of the data region (see Section 3.4).

While extension problems are generally easier than inverse problems, since one may avoid the non-uniqueness issue, the extension process is usually still unstable, and appropriate regularization must be used as long as the data are not exact.

Due to the high regularity of solutions to elliptic PDEs away from the support of the source term, many extension problems can be addressed using certain types of analytic continuation.

A relevant example for the class of applied problems considered by Factas (see Section 4.2) is given by the Poisson equation Δϕ = div m (where Lϕ = Δϕ and Ψ(m) = div m) in ℝ³, with an unknown square-summable ℝ³-valued function m supported in S × {0} for some bounded set S ⊂ ℝ². It is assumed that ϕ is measured on S × {h} ⊂ Ω for some h > 0, Ω being the upper half-space. The primary goal is to estimate ϕ on S̃ × {h̃} for some set S̃ such that S ⊂ S̃ ⊂ ℝ² and h̃ ≥ h. This issue can be solved through an extension problem which seeks to determine the scalar-valued function ϕ on ℝ² × {h}. The final goal is then achieved (if h̃ > h; otherwise this step is not even needed) by the so-called upward continuation process, i.e., computing the Poisson transform of ϕ on ℝ² × {h} with the “height” parameter h̃ − h. Note that, by formulating such an extension problem, we have by-passed the full inverse source problem of finding the function m, which admits non-unique solutions. An illustration of such an approach for data extension in the context of the inverse magnetization problem can be found in 18, 19, 23, 27.
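
To make the upward continuation step concrete, here is a minimal Python sketch (the grid, the heights and the synthetic point source are invented, and the edge effects of the FFT are ignored): in the Fourier domain, the Poisson transform of the upper half-space amounts to multiplying the transformed data by exp(−(h̃ − h)|k|), with k the angular wavenumber.

```python
import numpy as np

# Minimal sketch of upward continuation (not the team's implementation):
# a potential sampled on a regular grid at height h is continued to height
# h_tilde > h by multiplying its 2-D FFT with exp(-(h_tilde - h)*|k|).
# Grid parameters and the synthetic point source below are made up.

n, dx = 256, 0.01            # grid size and spacing (assumed units)
h, h_tilde = 0.02, 0.05      # measurement and target heights

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

def potential(height):
    # Potential of a unit point mass at the origin, observed at the given height.
    return 1.0 / (4 * np.pi * np.sqrt(X**2 + Y**2 + height**2))

phi_h = potential(h)

# Upward continuation filter in the Fourier domain.
kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
filt = np.exp(-(h_tilde - h) * np.sqrt(KX**2 + KY**2))

phi_ht = np.real(np.fft.ifft2(np.fft.fft2(phi_h) * filt))

# Compare with the exact potential at the new height (edge effects aside).
err = np.max(np.abs(phi_ht - potential(h_tilde))[n//4:3*n//4, n//4:3*n//4])
print("max error on the central part of the grid:", err)
```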

3.1.3 Spectral issues

Solving inverse problems by a linear least-squares approach leads to an equation featuring the operator A*A, with A the forward operator governing the direct problem, mapping m to the measurements (potential ϕ or field ∇ϕ), and A* its adjoint (in appropriate function spaces). The range of the operator A is usually strictly smaller than the model space for measurements, hence the solution of the inverse problem is unstable. When A is a compact integral operator, stability analysis and regularization can be achieved through singular-value decomposition. When additionally A is self-adjoint and of convolution type (convolution operators naturally arise as inverses of differential ones), the situation reduces to the eigenvalue problem for a convolution operator, typically on a bounded region. Here, not only the study of the rates of convergence of the eigenvalues to zero is important (to quantify and mitigate the ill-posedness of the original inverse problem), but also the determination of the corresponding eigenfunctions, as they lead to efficient practical implementations (indeed, by the spectral theorem, computing A*A = A² is no more difficult than computing A). Finding an explicit form of the eigenfunctions is a virtually impossible task, for solutions of convolution integral equations on a finite region are explicitly solvable only on extremely rare occasions, even in a one-dimensional setting. Consequently, one naturally resorts to numerical methods, and to asymptotic constructions when the region is large. However, the latter are still challenging for integral equations with kernel functions related to the Poisson kernel. This motivated the Factas members' work 46 which, together with its further generalization 79, extends explicit asymptotic constructions beyond the classes of integral equations where previous results were applicable 65, 67, 69. Finally, we note that since convolution operators are closely related to Toeplitz operators, this makes contact with the numerical inversion of large Toeplitz matrices 51.
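
The following small experiment conveys the flavor of these spectral issues (it is purely illustrative and does not reproduce the constructions of 46, 79; the kernel width, grid and region sizes are invented): a convolution operator with a Poisson-type kernel on a segment [−a, a] is discretized by the Nyström method, and the behavior of its eigenvalues is observed as the size a of the region grows.

```python
import numpy as np

# Sketch (illustrative only): Nystrom discretization of the convolution operator
#   (A u)(x) = int_{-a}^{a} K(x - y) u(y) dy,   K(t) = (1/pi) * h / (t**2 + h**2),
# a Poisson-type kernel, and inspection of the decay of its eigenvalues.

def eigenvalues(a, h=0.1, n=800):
    x, dx = np.linspace(-a, a, n, retstep=True)
    T = x[:, None] - x[None, :]
    K = (1.0 / np.pi) * h / (T**2 + h**2)
    A = K * dx                          # Nystrom matrix (symmetric, even kernel)
    return np.sort(np.linalg.eigvalsh((A + A.T) / 2))[::-1]

for a in (1.0, 5.0, 20.0):
    ev = eigenvalues(a)
    print(f"a = {a:5.1f}: three largest eigenvalues {ev[:3]}, 20th eigenvalue {ev[19]:.3e}")
```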

3.2 Inverse source problems

Given an elliptic PDE of the form Lϕ = Ψ(m) as in Section 3.1, the corresponding inverse source problem consists in recovering the quantity m from the data, which typically consist of measurements of the potential ϕ or of the field ∇ϕ away from the support of the source. Usually the support of m is assumed to be compact, and a super-set thereof is specified. These kinds of issues may thus be seen as parameterized inverse potential problems (parameterized by m, that is). They arise naturally in non-destructive testing, medical imaging, paleo-magnetism, gravimetry and geosciences, as well as in inverse scattering, see Section 4. The second and third application domains pertain to static electromagnetism, a framework in which source terms typically occur in divergence form; that is, Ψ(m) = div m where m is an ℝ³-valued field or distribution. As a rule, the operator L is then of the form Lϕ = div(Σ∇ϕ), where Σ is valued in positive definite matrices satisfying fixed ellipticity bounds and relates to the electromagnetic characteristics of the medium. In this case, the problem amounts to recovering a vector field (namely: m/det(Σ)) knowing a super-set S of its support and the gradient summand of its Helmholtz decomposition outside of S, for the Riemannian metric with tensor (det Σ)Σ⁻¹. The simplest case, of course, occurs in the Euclidean setting, where L is the ordinary Laplacian, which corresponds to homogeneous media.

3.2.1 Hardy-Hodge decomposition

In its original form, the Hardy-Hodge decomposition allows one to express an ℝⁿ-valued vector field in Lp(𝕊n-1), 1 < p < ∞, as the sum of a vector field in Hp(𝔹n), a vector field in Hp(ℝⁿ∖𝔹n), and a tangential divergence-free vector field on 𝕊n-1. Here, 𝔹n and 𝕊n-1 are, respectively, the open unit ball of ℝⁿ and its boundary sphere. If p = 1 or p = ∞, Lp must be replaced, as is classical, by the real Hardy space H1(𝕊n-1) or the space BMO(𝕊n-1) of functions with bounded mean oscillation on 𝕊n-1. This decomposition is valid more generally on any C1-smooth surface, and even on Lipschitz surfaces if one restricts the range of p to an interval around 2 47. It appears to play a fundamental role in inverse potential problems, and was first introduced on the plane to describe silent magnetizations supported in ℝ² 40 (see Sections 3.2.2 and 4.2). It has been a forerunner to similar decompositions where Hardy spaces in a domain get replaced by silent sources in that domain 38, and there are currently attempts at generalizing it to more general elliptic operators than the Laplacian. In fact, the Hardy-Hodge decomposition can be viewed as a Hodge decomposition in degree 1 of currents of Lp-class supported on a hyper-surface in ambient Euclidean space, and generalizing the former to more general elliptic operators amounts to generalizing the latter to more general Riemannian metrics.

3.2.2 Silent sources

A salient feature of inverse source problems is that the forward operator A is often not injective. The nature of its null space depends on the function Ψ making Ψ(m) the source, on the geometry of the set S containing the support of m, on the smoothness of the model class, as well as on the type of measurements. Abusing terminology slightly, those m belonging to that null space are called “silent sources”, even though, strictly speaking, the source term is Ψ(m) rather than m.

The occurrence of nontrivial silent sources hinders most approaches to inverse source problems, and their study appears to be necessary in order to derive consistent regularization schemes. Indeed, discretizing beforehand will typically turn an inverse problem with non-injective forward operator A into a full rank but ill-conditioned finite-dimensional one, whereas the very structure of the null space could yield an ansatz that may restore uniqueness, for example normalized representatives or suitable notions of sparsity for m, which in turn suggest appropriate regularization terms when minimizing the distance between the outcome of the model and the data.

This point of view leads one to state and approximately solve continuous optimization problems depending on some chosen regularization method, and is similar in spirit to an “off-the-grid” approach as in 57. The fact that inverse source problems for elliptic PDE can be recast in terms of integral forward operators, using Green functions, only adds to the comparison with the reference just mentioned. However, a major difference with the approach developed there is that the so-called “source condition” will almost never hold in our case, which prevents analogous consistency estimates to apply.

When the source term is in divergence form; i.e., when Ψ(m)= div m, and if we assume that the measurements are faithful, there are roughly speaking two possibilities: either S has Lebesgue measure zero and does not separate the space, in which case silent sources are divergence free ( div m=0), or S fails to meet one of these conditions and the class of silent sources becomes considerably larger. In the former case one says that S is slender, and the distinction between the slender and non-slender cases is apparent from the works 38, 40, 49.

Silent sources in the slender case can be described rather completely when m is modeled by ℝ³-valued functions or measures. Notions of sparsity have been drawn from this characterization 40, and several types of constructive approaches to reconstruction and net moment estimation, with different assumptions and algorithms, are currently under study by Factas. In contrast, silent sources in the non-slender case were understood only recently for Lp-functions on domains which are not too wild, see 39, 44 together with the PhD thesis 75 of M. Nemaire. Reconstruction algorithms in this case are still in their infancy. Besides, understanding silent sources when m is a vector measure is tantamount to characterizing when the Helmholtz-Hodge decomposition of such a measure again consists of measures. This is an open issue in harmonic analysis.

Silent sources of L2-class on a closed Lipschitz surface were also analyzed in the Factas team for the Helmholtz equation with (possibly complex) wave number k, namely Δϕ + k²ϕ = div m (where Lϕ = Δϕ + k²ϕ and Ψ(m) = div m), in collaboration with H. Haddar from the Idefix project team, with applications to the modeling of non-isotropic scattering. The situation is a bit more involved than with the Laplacian (that is, when k = 0) and depends on whether k is a Neumann eigenvalue or not 17.

3.2.3 Source estimation

A classical approach to inverse problems is to minimize, with respect to the unknown m belonging to a model class E1 (usually a Banach space), a criterion of the form d(f, Am) + F(λ, m), where A : E1 → E2 is the forward operator, assumed to be compact and mapping E1 into a measurement space E2 endowed with a metric d, while f is the data and F(λ, m) is a smooth positive penalty term, depending in an increasing manner on a non-negative regularizing parameter λ, that satisfies F(0, x) > 0 for all x > 0. The identification scheme then consists in estimating the unknown m0 by a minimizer of the criterion, for some appropriate small positive value of λ designed to offset the measurement error involved with f (in standard applications, A has dense range). A minimal requirement, then, is that this identification scheme should be consistent in the limit, when the measurement error e goes to zero and the regularization parameter λ also goes to zero, in a manner that may depend on e. When the forward operator is injective, such consistency is more or less automatic, at least in a weak sense; but when it is not injective, consistency can only be achieved upon making additional assumptions on m0 that ensure it is a solution of minimum norm to f = Am0. This is why the penalty term F should be chosen in relation to the null space of A.

Let us now specialize to inverse source problems for the Laplacian with right-hand side in divergence form: Δϕ = div m (i.e., Lϕ = Δϕ, Ψ(m) = div m), assuming that the forward operator A is faithful, compact and, say, valued in a Hilbert space (it could be a measurement of the field in a region of space away from the source). A common, yet simplifying, hypothesis in EEG source problems (Section 4.1) is to assume that the support of m is contained in a closed surface S homeomorphic to a sphere (an idealized model of gray matter), in such a way that m is normal to S and of L2-class there. Then, standard properties of layer potentials imply that m is uniquely determined by the field, so the forward operator is injective in this case and consistency is guaranteed (under the previous hypothesis). This example contrasts with the next one, which is important for inverse magnetization problems (Section 4.2): assume that m ranges over ℝ³-valued measures supported on a set of zero Lebesgue measure that does not disconnect the Euclidean space ℝ³ (the so-called slender case). Then, the null space of A consists of divergence-free measures; see Section 3.2.2. Now, by a result of S. Smirnov 83, such measures are superpositions of line integrals; therefore, measures whose support contains no arc (a so-called “purely 1-unrectifiable set”) are mutually singular with respect to the null space of A. Consequently, for ℝ³-valued measures whose support is sparse in that it contains no arc, consistent identification schemes can be obtained upon minimizing ‖f − Am‖² + λ‖m‖_TV with respect to m, with ‖·‖_TV the total variation norm. This was expounded in 50, and ongoing research recently showed that the same holds for more general elliptic operators than the Laplacian. This is an example of how the null space of the forward operator can suggest an ansatz on the solution (here a measure-theoretic notion of sparsity) and impinge on the choice of the regularization.
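
A discrete caricature of this regularization scheme is sketched below (the potential-like kernel, the grid and all parameters are invented, and the ℓ1 norm of a coefficient vector stands in for the total variation norm of a sparse measure): the criterion (1/2)‖f − Am‖² + λ‖m‖₁ is minimized by the classical iterative soft-thresholding algorithm (ISTA).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch (illustrative, not the team's algorithm): minimize
#   (1/2) * ||f - A m||_2^2 + lam * ||m||_1
# by iterative soft-thresholding (ISTA); the l1 norm of the coefficient
# vector plays the role of the total-variation norm of a sparse measure
# supported on a discretized grid.

n_src, n_obs = 200, 60
xs = np.linspace(0.0, 1.0, n_src)          # candidate source locations
xo = np.linspace(0.0, 1.0, n_obs)          # observation points, at height z
z = 0.05
A = 1.0 / np.sqrt((xo[:, None] - xs[None, :])**2 + z**2)   # potential-like kernel

m_true = np.zeros(n_src)
m_true[[30, 120, 170]] = [1.0, -0.7, 0.5]  # sparse ground truth
f = A @ m_true + 0.01 * rng.standard_normal(n_obs)

lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2     # step size ensuring convergence of ISTA
m = np.zeros(n_src)
for _ in range(5000):
    g = m - step * (A.T @ (A @ m - f))     # gradient step on the data-fit term
    m = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft thresholding

print("support of the estimate:", np.nonzero(np.abs(m) > 0.05)[0])
```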

The non-slender case, which involves important frameworks for S like closed surfaces or volumes, is less understood. Deriving interesting ansatzes for ℝ³-valued measures in connection with the kernel of the forward operator in such situations is the subject of ongoing research within Factas.

Of course, this approach to inverse source problems requires solving infinite-dimensional optimization problems, which in turn calls for discretization techniques. A classical idea, pervading numerical analysis, is to approximate the solution of such an infinite-dimensional problem by a sequence of solutions to finite-dimensional ones. In the slender case, a suitable sequence of finite-dimensional optimization problems can be obtained by replacing the space of ℝ³-valued measures supported on S by a k-dimensional subspace thereof, arranging things so that, as k grows, these nested subspaces are weakly dense in the space of such measures. Then, one is left to solve a sequence of finite-dimensional least-squares problems with ℓ1 constraints. Convergence issues are currently being addressed within Factas, in collaboration with colleagues at Vanderbilt University and the University of Vienna. The absence of a source condition makes such developments a novel piece of research.

When the space E1 where the unknown m is sought is a Hilbert space, replacing a test space of functions by a sequence of nested finite-dimensional subspaces, as outlined above in the case of ℝ³-valued measures, is also suggestive of L-curve methods based on the singular value decomposition to approximately solve infinite-dimensional linear equations like Am = f, for A a compact operator. A possible regularization parameter is then the number of terms retained in the singular vector expansion, and the main difficulty is, of course, to numerically estimate a sufficiently large number of such singular vectors in a precise enough manner (see Sections 3.1.3 and 7.4).
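
A minimal sketch of this truncation strategy on synthetic data (the operator, the noise level and the truncation indices are invented): the number k of retained singular vectors acts as the regularization parameter, and tracking the residual ‖Am_k − f‖ against the solution norm ‖m_k‖ for increasing k traces out the L-curve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch (illustrative only): regularize A m = f by truncating the singular
# value expansion of a discretized smoothing operator; the number k of
# retained singular vectors plays the role of the regularization parameter.

n = 120
t = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) / n    # compact-like smoothing operator
m_true = np.sin(3 * np.pi * t)
f = A @ m_true + 1e-4 * rng.standard_normal(n)            # noisy data

U, sig, Vt = np.linalg.svd(A)
coeffs = U.T @ f
for k in (5, 10, 20, 40):
    m_k = Vt[:k].T @ (coeffs[:k] / sig[:k])               # truncated SVD solution
    print(f"k = {k:3d}: residual {np.linalg.norm(A @ m_k - f):.2e},"
          f" solution norm {np.linalg.norm(m_k):.2e}")
```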

We also mention that solving less ambitious inverse problems than source reconstruction is often regarded as a more attainable, but still valuable endeavor. In particular, for inverse magnetization problems (see Section 4.2), this can be said of net moment recovery. Unlike the magnetization m, its net moment, the integral of m, is simply a vector which is entirely determined by the field, because silent sources have zero moment. Hence, it should be considerably easier to estimate. Nevertheless, this task is far from trivial in practice, mainly because field measurements are performed in a limited region of space and are thus incomplete. The design of net moment estimators is another avenue explored by Factas, after initial work in this direction reported in 36, 42 and, more recently, in 81.

3.3 Rational approximation, behavior of poles

Rational approximation to holomorphic functions of one complex variable is a long-standing chapter of classical analysis, with notable applications to number theory, spectral theory and numerical analysis. Over the last decades, it has become a cornerstone of modeling in Systems Engineering, and it can also be construed as a technique to regularize inverse source problems in the plane, where the degree is the regularizing parameter. Indeed, by partial fraction expansion, a rational function can be viewed as the complex derivative of a discrete logarithmic potential with as many masses as the degree (assuming that the poles are simple); that is, if f is rational of degree N, then ∂̄f = ∑_{j=1}^{N} a_j δ_{z_j}, where the z_j are its poles and δ_{z_j} is a Dirac unit mass at z_j. Moreover, a holomorphic function is the complex derivative of the logarithmic potential of its own values on a curve encompassing the domain of analyticity (this is the Cauchy formula); hence, rational approximation aims at representing as well as possible a logarithmic potential by a discrete potential with a prescribed number of masses, in the sense that their derivatives should be close (a Sobolev-type approximation).

Predecessors of Factas (the Apics and Miaou project teams) have designed a dedicated steepest-descent algorithm for quadratic approximation criteria whose convergence to a local minimum is guaranteed. This gradient algorithm may either be initialized by a preliminary approximation method, or recursively proceed with respect to the degree N of the approximant, on a compactification of the parameter space 35, as can be done with the RARL2 software (see Section 3.5.3). It has proved to be effective in applications carried out by the team (see for instance 12 for the identification of micro-wave filters, and Sections 4.1, 4.3).

However, finding best rational approximants of prescribed degree to a specific function, say in the uniform norm on a given set, seems out of reach except in rare, particular cases. Instead, constructive rational approximation has focused on estimating optimal convergence rates and deriving approximation schemes coming close to meet them, or studying computationally appealing approximants like Padé interpolants and their variants. Two main issues are then the effective computation of optimal or near optimal approximants of given degree, and the connection between the singularities of the approximant (the poles) and those of the approximated function. Factas has contributed to both.

As regards near-optimal approximants, their design requires knowledge of optimal rates in the situation at hand. In recent years, we have been active in determining lower bounds on that rate, a piece of information which is crucial but difficult to obtain. Our methods are topological in nature (Ljusternik-Schnirelmann theory, genus of compact symmetric sets), like most techniques in the area, and in collaboration with Q. Tao from the University of Macao, we devised algorithms to compute lower bounds in best L2 approximation by stable rational functions of given degree on the unit circle, which are the first of their kind and sometimes quite precise, see 4. Research in this direction is still going on, in particular on best L2 approximation of uni-modular functions, which cannot be approximated in sup norm except when they are rational of admissible degree, in which case they are, of course, their own best approximant.

We now turn to the behavior of best rational approximants of given degree, in the L∞ sense on a compact subset of the domain of analyticity, to complex analytic functions f that can be continued analytically (possibly in a multi-valued manner) except perhaps over a set of logarithmic capacity zero in the plane. When the continuation of f has finitely many branches, that is, if the Riemann surface to which this continuation extends analytically in a single-valued manner (except perhaps on a polar subset thereof) is compact, then the behavior, as N grows large, of rational approximants of degree N whose N-th root error is asymptotically smallest possible (in particular the asymptotic behavior of best approximants) has recently been elucidated rather completely in a joint research effort involving Factas.

Another classical technique to approximate (more accurately: extrapolate) a function, given a set of pointwise values, is to compute a rational interpolant of minimal degree matching the values. This method, known as Padé (or multi-point Padé) approximation, has been intensively studied for decades 32 but fails to produce pointwise convergence, even if the data are analytic. The best it can offer in general, at least for functions whose singular set has capacity zero, is convergence in capacity, which does not prevent poles of the approximant from wandering about the domain of analyticity of the approximated function, but does imply that each pole of the approximated function attracts a pole of the approximant 71. This phenomenon is well known in numerical analysis, and leads physicists and engineers to distinguish between “mathematical” and “physical” poles. A modification of the multi-point Padé technique, in which the degree is kept much smaller than the number of data points and approximate interpolation is performed in the least-squares sense, has become especially popular over the last decade under the name of vector fitting; it teams up with a barycentric representation of rational functions satisfying prescribed interpolation conditions, known as the AAA (Adaptive Antoulas-Anderson) scheme. Though the behavior of this least-squares substitute for Padé approximation, defined by Equation (1) in Section 4.4, resembles that of multi-point Padé approximants from a numerical viewpoint, there has apparently been no convergence result for such approximate interpolants so far. Motivated by the outcome of numerical schemes developed by our partners to recover resonance frequencies of conductors in electromagnetic inverse scattering, the PhD thesis 31 of P. Asensio started investigating the behavior of such least-squares rational approximants to functions with polar singular set, and building on this work, we were recently able to show their convergence in capacity.
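
To fix ideas, the sketch below implements a naive linearized least-squares rational fit in Python. It is neither the AAA barycentric algorithm nor the precise criterion of Equation (1); the target function and the degrees are invented. Given samples f(x_k), it seeks polynomials p and q of degree at most N, with q(0) = 1, minimizing Σ_k |q(x_k) f(x_k) − p(x_k)|².

```python
import numpy as np

# Naive linearized least-squares rational fit (a sketch only): given samples
# f(x_k), find polynomials p, q of degree <= N with q(0) = 1 minimizing
#   sum_k | q(x_k) f(x_k) - p(x_k) |^2 .

def rational_lsq(x, fx, N):
    Vp = np.vander(x, N + 1, increasing=True)             # basis for p
    Vq = np.vander(x, N + 1, increasing=True)[:, 1:]      # basis for q - 1
    # Unknowns: coefficients of p (N+1 of them) and of q - 1 (N of them);
    # the residual is (1 + (q - 1)(x_k)) * f(x_k) - p(x_k).
    Amat = np.hstack([-Vp, Vq * fx[:, None]])
    b = -fx
    sol, *_ = np.linalg.lstsq(Amat, b, rcond=None)
    p = sol[:N + 1]
    q = np.concatenate([[1.0], sol[N + 1:]])
    return p, q

x = np.linspace(-1, 1, 200)
fx = np.tan(x)                                            # nearest singularities at +-pi/2
p, q = rational_lsq(x, fx, N=4)
approx = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
print("max error on the sample set:", np.max(np.abs(approx - fx)))
print("poles of the approximant   :", np.roots(q[::-1]))
```

On this example one typically observes some poles of the fit settling near ±π/2, the nearest singularities of the tangent function, while the remaining poles may wander, echoing the distinction between “physical” and “mathematical” poles mentioned above.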

Regarding complex rational approximation as a means to tackle inverse source problems in the plane makes for a unifying point of view on various deconvolution techniques, from system identification and time series analysis to frequency-wise inverse scattering and non-destructive testing. But still more interestingly perhaps, it is suggestive of similar approaches to problems in higher dimension, where holomorphic functions generalize to harmonic gradients and rational functions to finite linear combinations of dipoles, see Section 3.1.2. This line of research is only starting, but seems to offer new avenues in connection with applications.

3.4 Asymptotic analysis

Asymptotic analysis deals with understanding behavior or explicit construction of the solution when a parameter entering a problem is either small or large. Factas has been involved in applications of asymptotic analysis in different contexts including both formal constructions and their rigorous justifications.

One type of asymptotic analysis for dynamical problems is large-time behavior analysis. A rather classical issue here is the limiting amplitude principle for the wave equation. This principle states that the solution of the time-dependent wave equation with a periodic-in-time monochromatic source term f e^{iωt} necessarily stabilizes, for large times t, to the solution of the corresponding Helmholtz equation: div(α∇ϕ) + ω²βϕ = −f, for suitable known functions α, β and f depending on the space variables. Revisiting this topic is motivated by the recent popularity of numerical time-domain approaches (e.g., 28, 29, 64, 86) to elliptic PDE problems through the efficient solution of auxiliary time-dependent equations where, for example, the computational effort needs to be concentrated only on a neighborhood of the wave front, which is small at high frequencies, a notoriously difficult regime for the numerical solution of Helmholtz problems. In this respect, not only the fact of convergence in time in the limiting amplitude principle is important, but also the quantification of its rate. Dmitry Ponomarev was involved in the work 13 that deals with the limiting amplitude principle for the wave equation in a non-homogeneous medium in different dimensions (in dimension 1, a slight modification of the classical limiting amplitude principle was proposed). The analysis relies on proving that the solution decays to zero for a source-free problem with localized initial data and a radiation boundary condition, an interesting problem of asymptotic analysis in its own right, even in dimension 1 30. In 80, convergence to a periodic motion was also shown, in the totally different context of a model of mechanical sliding contact with wear 20.

Previously, in a nonlinear context, rigorous asymptotic analysis 77 was instrumental to justify a parabolic model of pulse propagation in photo-polymers by comparing solutions of that model with those of the original Maxwell's system.

In the context of inverse problems, asymptotic analysis is useful when applied to the magnitude of the regularization parameter. When the latter tends to its limiting value, a solution of the regularized problem with ideal (noiseless) input data should tend to the exact solution. In presence of noise, it is important to relate this convergence rate to the value of the problem's constraint in the asymptotic regime of the regularization parameter.

The works 46, 79 on convolution integral equations on large domains, mentioned in Section 3.1.3, are an example of constructive asymptotic analysis. Here, one of the difficulties comes from a singular perturbation. Indeed, in the asymptotic limit of infinite size of the region, the solution of the spectral problem ceases to exist, since the integral operator loses compactness.

In the context of net magnetization reconstruction in the inverse magnetization problem (Sections 3.2 and 4.2), the situation where the measurement area is large leads to a different kind of application of constructive asymptotic analysis 36, 42, 81. Here, explicit constructions of solution estimates are performed to different asymptotic orders with respect to the size of the measurement region. The higher-order estimates can give good accuracy already for relatively small measurement regions, but they are much more unstable with respect to perturbations of the measured data. This problem also exhibits another interesting asymptotic phenomenon, somewhat similar to the “boundary layers” common in boundary-value problems for differential equations with small or large parameters. In particular, the solution (for the tangential components of the net moment) is composed of a global leading-order quantity (where formal passage to the asymptotic limit can be performed) and a correction term which is localized in a region that shrinks in the asymptotic limit.

3.5 Software tools of the team

In addition to the above-mentioned research activities, Factas develops and maintains a number of long-term software tools that either implement and illustrate the effectiveness of the algorithms theoretically developed by the team, or serve as tools to support further research by team members. We briefly present the most important of them, which have been developed over the past few years.

3.5.1 pisa

  • Name:
    pisa
  • Keywords:
    Electrical circuit, Stability
  • Functional Description:

    To minimize prototyping costs, the design of analog circuits is performed using computer-aided design tools which simulate the circuit's response as accurately as possible.

    Some commonly used simulation tools do not impose stability, which can result in costly errors when the prototype turns out to be unstable. A thorough stability analysis is therefore a very important step in circuit design. This is where pisa is used.

    pisa is a Matlab toolbox that allows designers of analog electronic circuits to determine the stability of their circuits in the simulator. It analyzes the impedance presented by a circuit to determine the circuit's stability. When an instability is detected, pisa can estimate the location of the unstable poles to help designers fix their stability issue.

  • URL:
  • Publications:
  • Authors:
    Adam Cooman, David Martinez Martinez, Fabien Seyfert, Martine Olivi
  • Contact:
    Fabien Seyfert

3.5.2 DEDALE-HF

  • Keyword:
    Microwave filter
  • Scientific Description:

    Dedale-HF consists of two parts: a database of coupling topologies and a dedicated predictor-corrector code. Roughly speaking, each reference file of the database contains, for a given coupling topology, the complete solution to the coupling matrix synthesis problem associated with particular filtering characteristics. The latter is then used as a starting point for a predictor-corrector integration method that computes the solution to the coupling matrix synthesis problem corresponding to the user-specified filter characteristics. The reference files are computed off-line using Gröbner basis techniques or numerical techniques based on the exploration of a monodromy group. The use of such continuation techniques, combined with an efficient implementation of the integrator, drastically reduces the computational time.

    Dedale-HF has been licensed to, and is currently used by TAS-Espana.

  • Functional Description:
    Dedale-HF is a software tool dedicated to solving exhaustively the coupling matrix synthesis problem in reasonable time for the filtering community. Given a coupling topology, the coupling matrix synthesis problem consists in finding all possible electromagnetic coupling values between resonators that yield a realization of the given filter characteristics. Solving the latter problem is crucial during the design step of a filter, in order to derive its physical dimensions, as well as during the tuning process, where coupling values need to be extracted from frequency measurements.
  • URL:
  • Contact:
    Fabien Seyfert
  • Participant:
    Fabien Seyfert

3.5.3 RARL2

  • Name:
    Réalisation interne et Approximation Rationnelle L2
  • Keyword:
    Approximation
  • Scientific Description:

    The method is a steepest-descent algorithm. A parameterization of MIMO systems is used, which ensures that the stability constraint on the approximant is met. The implementation, in Matlab, is based on state-space representations.

    RARL2 performs the rational approximation step in the software tools PRESTO-HF and FindSources3D. It is distributed under a particular license, allowing unlimited usage for academic research purposes. It was released to the universities of Delft and Maastricht (the Netherlands), Cork (Ireland), Brussels (Belgium), Macao (China) and BITS-Pilani Hyderabad Campus (India).

  • Functional Description:

    RARL2 is a software for rational approximation. It computes a stable rational L2-approximation of specified order to a given L2-stable (L2 on the unit circle, analytic in the complement of the unit disk) matrix-valued function. This can be the transfer function of a multivariable discrete-time stable system. RARL2 takes as input either:

    • its internal realization,
    • its first N Fourier coefficients,
    • discretized (uniformly distributed) values on the circle. In this case, a least-square criterion is used instead of the L2 norm.

    It thus performs model reduction in the first or the second case, and identification from frequency data in the third. For band-limited frequency data, it may be necessary to infer the behavior of the system outside the bandwidth before performing rational approximation.

    An appropriate Möbius transformation allows one to use the software for continuous-time systems as well.

  • URL:
  • Contact:
    Martine Olivi
  • Participants:
    Jean-Paul Marmorat, Martine Olivi

3.5.4 FindSources3D

  • Keywords:
    Health, Neuroimaging, Visualization, Compilers, Medical, Image, Processing
  • Scientific Description:
    Though synthetic data could be static, actual signal recordings are dynamical. The time dependency is either neglected and the data processed instant by instant, or separated from the space behavior using a singular value decomposition (SVD). This preliminary step allows one to estimate the number of independent activities (uncorrelated sources) and to select the corresponding number of principal static components (a small illustrative sketch of this step is given after this subsection). After a first data transmission (“cortical mapping”) step applied to the static data, using the harmonicity of the potential in the outermost layers (solving BEP problems on spherical harmonics bases), FS3D makes use of best rational approximation on families of 2-D planar cross-sections and of the software RARL2 in order to locate singularities and to determine the expected number of sources. From those planar singularities, the 3-D sources are finally estimated, together with their moments, in a last clustering step. Through this process, FS3D is able to recover time-correlated sources, which is an important advantage. When simultaneously available, EEG and MEG data can now be processed together, which also improves the recovery performance. In the case of dynamical data, a recent additional step is to find the linear combination of the preliminarily selected static components (a change of basis) that produces source estimates minimizing the error with respect to the data, an original criterion which improves the recovery quality.
  • Functional Description:
    FindSources3D (FS3D) is a software program written in Matlab dedicated to the resolution of inverse source problems in brain imaging, electroencephalography (EEG) and magnetoencephalography (MEG). From data consisting in pointwise measurements of the electrical potential taken by electrodes on the scalp (EEG), or of a component of the magnetic field taken on a helmet (MEG), FS3D estimates pointwise dipolar current sources within the brain in a spherical layered model. Each layer (brain, skull, scalp) is assumed to have a constant conductivity.
  • URL:
  • Contact:
    Juliette Leblond
  • Participants:
    Jean-Paul Marmorat, Juliette Leblond, Maureen Clerc, Nicolas Schnitzler, Théodore Papadopoulo
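
As mentioned in the scientific description above, a minimal Python illustration of the SVD pre-processing step on synthetic data could read as follows (the dimensions, noise level and selection threshold are invented; FS3D's actual criterion may differ).

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the SVD pre-processing step (synthetic data, not FS3D code):
# separate space from time in a sensors-by-time data matrix and retain the
# principal static (spatial) components.

n_sensors, n_times, n_sources = 64, 500, 3
topographies = rng.standard_normal((n_sensors, n_sources))   # static spatial maps
time_courses = rng.standard_normal((n_sources, n_times))     # source activities
data = topographies @ time_courses + 0.05 * rng.standard_normal((n_sensors, n_times))

U, sing, Vt = np.linalg.svd(data, full_matrices=False)
n_kept = int(np.sum(sing > 0.1 * sing[0]))    # crude threshold (assumed criterion)
static_components = U[:, :n_kept]             # spatial components kept for the static solver
print("number of retained static components:", n_kept)
```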

3.5.5 PRESTO-HF

  • Keywords:
    CAO, Telecommunications, Microwave filter
  • Scientific Description:

    For the matrix-valued rational approximation step, Presto-HF relies on RARL2. Constrained realizations are computed using the Dedale-HF software. As a toolbox, Presto-HF has a modular structure, which allows one, for example, to include some of its building blocks in an already existing software environment.

    The delay compensation algorithm is based on the following assumption: far off the pass-band, one can reasonably expect a good approximation of the rational components of S11 and S22 by the first few terms of their Taylor expansion at infinity, i.e., a small-degree polynomial in 1/s. Using this idea, a sequence of quadratic convex optimization problems is solved in order to obtain appropriate compensations (a minimal illustrative sketch of this idea is given after this subsection). In order to check the previous assumption, one has to measure the filter on a larger band, typically three times the pass-band.

    This toolbox has been licensed to (and is currently used by) Thales Alenia Space in Toulouse and Madrid, Thales airborne systems and Flextronics (two licenses). Xlim (University of Limoges) is a heavy user of Presto-HF among the academic filtering community and some free license agreements have been granted to the microwave department of the University of Erlangen (Germany) and the Royal Military College (Kingston, Canada).

  • Functional Description:
    Presto-HF is a Matlab-based toolbox dedicated to the identification of low-pass parameters of microwave filters, developed so as to allow the industrial transfer of our methods. It allows one to run the following algorithmic steps, either individually or in a single stroke:
    • Determination of delay components caused by the access devices (automatic reference plane adjustment),
    • Automatic determination of an analytic completion, bounded in modulus for each channel,
    • Rational approximation of fixed McMillan degree,
    • Determination of a constrained realization.
  • URL:
  • Contact:
    Fabien Seyfert
  • Participants:
    Fabien Seyfert, Jean-Paul Marmorat, Martine Olivi
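
As announced in the scientific description above, here is a hedged Python sketch of the delay-compensation idea on synthetic data (the rational tail, frequency band, degrees and delay grid are invented; the actual Presto-HF formulation is more elaborate): for each candidate delay τ, the off-band data are compensated by exp(jωτ), and a small-degree polynomial in 1/s is fitted by linear least squares, i.e., by solving a quadratic convex problem; the retained delay is the one yielding the smallest residual.

```python
import numpy as np

# Hedged sketch (synthetic data; not the actual Presto-HF procedure): off the
# pass-band, the rational part of S11 is close to a low-degree polynomial in
# 1/s, so for each candidate delay tau we compensate the measurements by
# exp(+s*tau) and fit such a polynomial by linear least squares; the delay
# giving the smallest residual is retained.

deg = 3                                    # degree of the polynomial in 1/s (assumed)
tau_true = 0.8e-9                          # synthetic access delay, in seconds
omega = 2 * np.pi * np.linspace(3e9, 5e9, 400)   # off-band angular frequencies (assumed)
s = 1j * omega
w0 = omega[0]                              # normalization keeping the basis well scaled

r = 1.0 - 0.3 * (w0 / s) + 0.05 * (w0 / s) ** 2  # made-up "rational tail" of S11
s11_meas = np.exp(-s * tau_true) * r             # measurements including the delay

basis = np.column_stack([(w0 / s) ** k for k in range(deg + 1)])  # 1, 1/s, 1/s^2, ...

def residual(tau):
    compensated = np.exp(s * tau) * s11_meas
    coef, *_ = np.linalg.lstsq(basis, compensated, rcond=None)
    return np.linalg.norm(basis @ coef - compensated)

taus = np.linspace(0.0, 2e-9, 201)
best = taus[np.argmin([residual(t) for t in taus])]
print(f"estimated delay {best:.2e} s  (true value {tau_true:.2e} s)")
```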

3.5.6 Sollya

  • Keywords:
    Computer algebra system (CAS), Supremum norm, Proof synthesis, Code generator, Remez algorithm, Curve plotting, Numerical algorithm
  • Functional Description:

    Sollya is an interactive tool where the developers of mathematical floating-point libraries (libm) can experiment before actually developing code. The environment is safe with respect to floating-point errors, i.e., the user precisely knows when rounding errors or approximation errors happen, and rigorous bounds are always provided for these errors.

    Among other features, it offers a fast Remez algorithm for computing polynomial approximations of real functions and also an algorithm for finding good polynomial approximants with floating-point coefficients to any real function. As well, it provides algorithms for the certification of numerical codes, such as Taylor Models, interval arithmetic or certified supremum norms.

    It is available as a free software under the CeCILL-C license.

  • URL:
  • Contact:
    Sylvain Chevillard
  • Participants:
    Christoph Lauter, Jérôme Benoit, Marc Mezzarobba, Mioara Joldes, Nicolas Jourdan, Sylvain Chevillard
  • Partners:
    CNRS, UPMC, ENS Lyon, LIP6, UCBL Lyon 1, Loria

3.5.7 FootprintTools

  • Name:
    FootprintTools
  • Keywords:
    CO2, Carbon footprint
  • Functional Description:

    The tool contains Python scripts that extract, from the Inria information system, information about the missions paid by the team during a given year and the electronic equipment bought by the team in past years. Convenient functions are provided to explore, filter and format the obtained data, so as to use them to compute the carbon footprint as accurately as possible.

    A HTML survey is also provided to easily query the members of a team about their habits in terms of meals and commute travels.

  • URL:
  • Contact:
    Sylvain Chevillard

4 Application domains

Most of the applications targeted by the team pertain to the context of Maxwell's equations, under various specific assumptions.

4.1 Some inverse problems for cerebral imaging

Solving over-determined Cauchy problems for the Laplace equation on a spherical layer (in 3-D) in order to extrapolate incomplete data (see Section 3.1.1) is an ingredient of the team's approach to inverse source problems in electro-encephalography (EEG), see 9. The model comes from Maxwell's equations in the quasi-static regime, whence div(σ∇ϕ) = div m in a ball Ω which is the union of nested layers Ωi (brain, skull, scalp for i = 0, 1, 2), where the singularities lie in Ω0 (i.e., the current sources m are supported in S ⊂ Ω0, with the notation of Section 3), see Figure 1. The inverse EEG source problem consists in recovering m from pointwise values of the electric potential ϕ measured on the scalp Γ2 (together with the assumption that the normal current vanishes there). It first involves transmitting the data from the scalp Γ2 down to the cortex Γ0. Whenever the Ωi have different constant conductivities, ϕ satisfies the Laplace equation in the outermost shells Ω2 and Ω1, and this “cortical mapping” step is performed using integral representations of ϕ on the spheres Γi (obtained through convolution with the Poisson kernels of the balls and using the Green formula, a specific use of boundary element methods), followed by expansions on spherical harmonic bases and Tikhonov regularization.

Assuming m to be a linear combination of Dirac masses (dipolar sources), it turns out (by convolution with the fundamental solution of the 3-D Laplace equation) that traces of ϕ on 2-D cross sections of Γ0 coincide with functions with branched singularities in the slicing plane 7, 43. These singularities are related to the actual location of the sources. Hence we are back to the 2-D framework of Section 3.3, and recovering these singularities can be performed via best rational approximation. The goal is to produce a fast and sufficiently accurate initial guess on the number and location of the sources in order to run heavier descent algorithms on the direct problem, which are more precise but computationally costly and often fail to converge if not properly initialized. Such a localization process can add a valuable geometric piece of information to the standard temporal analysis of EEG signal records. It appears that, in the rational approximation step, multiple poles possess a nice behavior with respect to branched singularities. This is due to the very physical assumptions of the model with dipolar current sources, for both EEG data and MEG (magneto-encephalography) data, which correspond to measurements of the magnetic field, as well as for (magnetic) field data produced by magnetic dipolar sources within rocks (see Section 4.2). Though numerically observed in 9, there is no mathematical justification so far of why branched singularities generate such a strong accumulation of the poles of the approximants. This intriguing property, however, definitely helps source recovery and will be the topic of further study. It is used in order to automatically estimate the “most plausible” number of sources (numerically: up to 3, at the moment). In this connection, a software tool, FindSources3D (FS3D, see Section 3.5.4), dedicated to pointwise source estimation in EEG-MEG, has been developed. We also studied the uniqueness of a critical point of the quadratic criterion in the EEG source problem in Ω0 for a single-dipole situation (see 15), an important issue for the use of descent algorithms.

Together with M. Darbas (LAGA, Université Sorbonne Paris Nord) and P.-H. Tournier (JLL laboratory, Sorbonne Université), we recently considered the EEG inverse problem with a variable conductivity in the intermediate skull layer Ω1, in order to model hard / spongy bones, especially for neonates. The related transmission step is then performed using a mixed variational regularization and finite elements on tetrahedral meshes, and the coupling with FS3D for dipolar source estimation furnishes promising results 56.

Other approaches have been studied for EEG, MEG and “Stereo” EEG (SEEG), where the potential is measured by deep electrodes and sensors within the brain as in the scheme of Figure 1, and for more realistic geometries of the head.

Figure 1: Schematic view of a 3-layered head model: nested layers Ωi (i=0,1,2, with 0 the innermost), of conductivity σi and boundary Γi. Deep SEEG electrodes with sensors along them (red) lie in Ω0; the current source term m is distributed on the surface S ⊂ Ω0 with normally oriented dipoles.

Assuming that the current source term m is an ℝ3-valued vector field (measure, or distribution) supported on a surface S ⊂ Ω0 (the gray / white matter interface within the brain) and normally oriented to S, they consist in regularizing the inverse source problem by a total variation (TV) constraint - to favor sparsity - on m, added to the quadratic data approximation criterion (see Section 3.2). This is similar to the path taken for inverse magnetization problems (see Section 4.2). The approach follows that of 8 and is implemented through algorithms whose convergence properties were studied in 14, 75. We are now able to handle MEG, EEG, SEEG modalities, simultaneously or not. The simultaneous handling of the different modalities is made more straightforward by coupling the source localization problem and the inverse transmission problem. Tests on synthetic data provided good quality results, though they are numerically quite costly to obtain. This opens up the possibility of considering sources that may exhibit properties usually associated with distributions rather than functions.
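To illustrate the role of the TV term in such a regularized criterion, the following 1-D toy sketch (not the algorithms of 14, 75; the forward operator, weights and optimizer are placeholders) minimizes a quadratic data-fit plus a total-variation penalty by rewriting the unknown as the cumulative sum of its increments and running ISTA:

```python
# Toy 1-D sketch of TV-regularized inversion (not the team's solver): minimize
# ||A m - b||^2 + lam * TV(m) by writing m as the cumulative sum of its
# increments w, so that TV(m) becomes the l1 norm of w (the offset w[0] is
# penalized too in this simplified version), and running ISTA.
import numpy as np

rng = np.random.default_rng(1)
n = 100
m_true = np.zeros(n); m_true[30:60] = 1.0          # piecewise-constant "source"
A = rng.standard_normal((40, n)) / np.sqrt(n)      # placeholder forward operator
b = A @ m_true + 0.01 * rng.standard_normal(40)

C = np.tril(np.ones((n, n)))                        # m = C @ w (cumulative sum)
M = A @ C
lam, step = 2e-3, 1.0 / np.linalg.norm(M, 2) ** 2   # ISTA step from the Lipschitz constant

w = np.zeros(n)
for _ in range(5000):                               # ISTA: gradient step + soft-thresholding
    z = w - step * (M.T @ (M @ w - b))
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
m_rec = C @ w
print("data misfit of the reconstruction:", np.linalg.norm(A @ m_rec - b))
```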

4.2 Inverse magnetization issues for planar and volumetric samples

Among other things, geoscientists are interested in understanding the magnetic characteristics of ancient rocks. Indeed, ferro-magnetic particles in a rock carry a magnetization that was acquired when the rock was hot, under the influence of the magnetic field that was ambient at that time. For an igneous rock, for instance, and if no subsequent event has heated it up, this corresponds to the time when the rock was formed. If the rock can be dated, recovering its magnetization hence provides valuable information about the history of the magnetic field. This provides elements for better understanding key questions such as: what was the magnetic field of the Sun when the solar system was in the proto-planetary phase? when did the magnetic dynamo of Mars stop? when did the magnetic dynamo of Earth start?

The magnetization of a rock is not directly measurable. However, it produces a tiny magnetic field, which can be measured if the sample is isolated from other sources of magnetic field. A category of instruments of particular importance in that respect is that of magnetic microscopes. They are used to measure the field produced by a fairly small sample: either a single grain, or a wider sample that has first been prepared by gluing it on some support and polishing it until only a thin slab remains. The microscope operates at some distance above the sample and measures the magnetic field. The typical experimental setup is represented in Figure 2.

Figure 2: Schematic view of the experimental setup of a magnetic microscope. The sample lies on a horizontal plane at height 0; its support is included in a parallelepiped (in red) of height r with a square basis of half-size s centered at the origin. The field produced by the sample is measured at points of a horizontal region, say a square of half-size R, at height z (in gray).

The magnetization m is modeled as an ℝ3-valued vector field defined on the rock sample, which is assumed to be a subset of S. One advantage of magnetic microscopes is that they operate close to the sample, i.e., the measurement height z is small compared to the horizontal characteristic lengths s and R. The thickness r of the sample is also small compared to s. However, the ratio r/z is not necessarily negligible. When it is indeed negligible, the sample can, from a practical point of view, be supposed planar (i.e., r=0) instead of volumetric; in this case, t becomes a planar variable (t1,t2) ∈ S (a square) and m is actually a magnetization density.

The measurement surface QR is, most of the time, supposed to be a centered square of half-size R, but in some situations it might be convenient to consider only the data available on a centered disk DR of radius R. The microscope provides a map of the field on the whole surface: measurements are taken at many points of the surface and, from a practical point of view, one may assume that the field is known everywhere on QR. In addition to the presence of noise in the measurements, an important limitation is that, depending on the microscope technology, frequently only one component of the field is measured.

The team has a long-standing collaboration with the Earth and Planetary Sciences Laboratory at MIT, which has a superconducting quantum interference device (SQUID). The sensor is a tiny vertical coil maintained at a temperature close to 0 Kelvin, which provides it with superconducting characteristics. In order to keep the sensor at very low temperature while the microscope operates in a room at normal temperature, the sensor is isolated behind a sapphire window. Though thin, this window enforces a measurement height z such that r ≪ z ≪ s, and the sample can usually be supposed planar. Also, because the coil only allows one to measure the field along its axis, the SQUID only provides measurements of B3 and not of the whole field.

Another type of microscope is the quantum diamond microscope (QDM). QDMs use properties of special diamonds which, when properly excited with a laser and a microwave field, become luminescent in the presence of a magnetic field. From the difference of brightness of this luminescence under slightly different frequencies of the microwave field, one can recover the amplitude of a given component of the magnetic field. This mechanism is already in use in a magnetic microscope at Harvard University (Massachusetts, USA). We are collaborating with geoscientists of the Geophysics and Planetology Department of Cerege (CNRS, Aix-en-Provence) and physicists from ENS Paris Saclay to help them design their own QDM. The promises of the QDM are manifold, see also Section 7.3. First, it should allow measurements of the field at a height z above the sample that is much smaller than what is permitted with the SQUID. This improves the spatial resolution, together with the conditioning of the inverse magnetization problem. However, it also requires modeling the sample as a 3-D object, as its thickness r then usually becomes comparable to z. Second, the technology of the QDM could make it possible, in principle, to measure the three components of the field instead of only one (see Section 7.4), which opens the way to new regularization techniques for the inverse problems. However, the measurement of a field with a QDM is more indirect than with a SQUID and this raises specific issues: in particular, it might be necessary to impose an external field on the sample, called a bias field, in order to resolve ambiguities when recovering the field from the brightness of the luminescence, and such a bias field can actually perturb the magnetic properties of the rock sample under study.

The issues raised by the inverse magnetization problem in the framework of magnetic microscopes such as SQUIDs or QDMs are numerous and we have made several contributions on the subject over time. Particularly important for the full recovery of the magnetization are the silent sources, i.e., the magnetizations that belong to the kernel of the direct operator, or in other terms, those magnetizations that produce no field on the measurement area, see Section 3.2.2. We fully characterized such magnetizations under the thin-plate hypothesis (i.e., when r is assumed to be 0) 40. Contrary to the full magnetization, the total net moment (i.e., the integral of the magnetization over the sample) is in principle a piece of information that can be retrieved from the measurements, since silent sources have a null net moment. In order to recover the total net moment, we proposed to use linear estimators and defined a bounded extremal problem to find good such linear estimators 3. However, additional hypotheses must be added in order to ensure the uniqueness of a solution to the full inversion problem. Such a hypothesis is provided when the support of the magnetization inside the sample is supposed to be sparse, e.g., composed of isolated points or 1-D curves. In this case, we proposed to use total variation regularization to solve the inverse problem 8, 50, see Section 3.2.3.

4.3 Inverse magnetization issues with the lunometer

Measurements of the remanent magnetic field of the Moon lead geoscientists to think that the Moon used to have a magnetic dynamo for some time, but the exact process that triggered and fed this dynamo is not yet understood, much less why it stopped. In particular, the Moon is too small to have a convecting dynamo like the Earth's. In order to address this question, our geoscientist colleagues at Cerege decided to systematically analyze the rock samples brought back from the Moon by the Apollo missions.

The samples are kept inside bags with a protective atmosphere, and geophysicists are not allowed to open the bags, nor to take the samples out of NASA facilities. Moreover, the measurements must be performed with a passive device in order to ensure that the samples are not altered by the measurements: in particular no cooling or heating is allowed, and neither is the use of anything producing a magnetic field like, e.g., motors. Finally, since the measurements must be performed directly at NASA, the instrument must be easy to take apart and to assemble once on site. The overall time devoted to measuring all samples is limited and each sample must be analyzed quickly (typically within a few minutes). For all these reasons, our colleagues from Cerege designed a specific magnetometer called the “lunometer”: this device provides measurements of the components of the magnetic field produced by the sample, at a discrete set of points located on circles belonging to three cylinders (see Figure 3). The goal was not to get a deep understanding of the magnetic properties of the studied samples with such a rudimentary instrument, but rather to help select a few of them that seem really interesting to study in more detail: this would be used to file a request to NASA to buy sub-samples of a few grams on which more instructive (though possibly destructive) experiments could be performed.

Figure 3: Typical measurements obtained with the lunometer of Cerege. Measurements of the field are performed on nine circles (regularly spaced points on three sections of each of three mutually orthogonal cylinders). On each circle, only one component of the field is measured: the component Bh along the axis of the corresponding cylinder (blue points), the component Bn radial with respect to the circle (black points), or the component Bτ tangential to the circle (red points).

The collaboration with Cerege on this topic started in the framework of the MagLune ANR project, whose overall objective was to devise models explaining how a dynamo phenomenon was possible on the Moon. Our contribution is to design methods to tell, from the measurements provided by the lunometer, whether the remanent magnetization of the sample under study can be well modeled by a single magnetic dipole, and if so, what the position and magnetic moment of this dipole would be. To this end, we use ideas similar to those underlying the FindSources3D tool (see Sections 3.3, 3.5.4): we use rational approximation techniques to recover the position of the dipole; recovering the moment is then a rather simple linear problem. The rational approximation solver gives, for each circle of measurements, partial information about the position of the dipole. The partial information obtained on the nine circles must then be combined in order to recover the exact position. Theoretically speaking, these nine pieces of information are redundant and the position could be obtained by several equivalent techniques. But in practice, because the field is not truly generated by a single dipole, and also because of noise in the measurements and numerical errors in the rational approximation step, not all methods show the same reliability when combining the partial results. We studied several approaches, testing them on synthetic examples with more or less noise, in order to propose a good heuristic for the reconstruction of the position 73.

4.4 Shape identification of metallic objects

We started an academic collaboration with partners at LEAT (Laboratoire d'Électronique, Antennes, Télécommunications, Université Côte d'Azur – CNRS) on the topic of inverse scattering using frequency-dependent measurements. As opposed to classical electromagnetic imaging, where several spatially distributed sensors are used to identify the shape of an object by means of scattering data at a single frequency, a discrimination process between different metallic objects is sought here by means of a single sensor, or a reduced number of sensors, operating on a whole frequency band. The spatial multiplicity and complexity of antenna sensors are here traded against a simpler architecture performing a frequency sweep.

The setting is shown in Figure 4. The total field Et is the sum of the incident field Ein (here a plane wave) and the scattered field Es: at every point X=(r,θ,φ) in space we have Et=Ein+Es. A harmonic time dependency (e^{iωt}) is assumed for the incident wave, so that by linearity of Maxwell's equations and after a transient state, the scattered field at the observation point Xo is related to the emitted plane-wave field at the emission point Xe via the transfer function H: Es(Xo)=H(ω,Xo)Ein(Xe) (the dependency on Xe is omitted in H since the emission point is fixed). Under regularity conditions on the scatterer's boundary, the function H can be shown to admit an analytic continuation into the complex left half-plane in the variable s=iω, away from a discrete set (with a possible accumulation point at infinity) where it admits poles. Thus, H is a meromorphic function of the variable s. Its poles are called the resonating frequencies (resonances) of the scattering object. Recovering these resonating frequencies from frequency scattering measurements, that is, measurements of H at particular ωj's (actually iωj's), is the primary objective of this project.

Figure 4: Sphere of radius a illuminated by an electromagnetic plane wave emitted by an antenna; the reception antenna, located at a distance r from the center of the sphere and making an angle θ with the horizontal, measures the scattered wave.

We started a study of the particular case when the scatterer is a spherical PEC (Perfect Electric Conductor). In this case, Maxwell's equations can be solved by means of expansions in series of vectorial spherical harmonics. We showed in particular that H then admits a simple structure involving a meromorphic function with poles at the zeroes of the spherical Hankel functions and their derivatives. Identification procedures, surprisingly close to the ones we developed in connection with amplifier stability analysis (Section 4.6), were studied to gain information about the resonating frequencies by means of a rational approximation of this function 91.

In order to perform the rational approximation (see Section 3.3), the behavior of H outside the range of measured frequencies, specifically at high frequencies, has been studied for the particular case of a spherical PEC scatterer. In this case, H can be written as H=HO+HC, where HO and HC are respectively the optic and creeping-wave parts. Their high-frequency behaviors are given by series expansions whose coefficients are identified when Xo=Xe. The asymptotics of HO is called the Luneberg-Kline expansion; its first terms were analytically computed (by solving the eikonal and transport equations) 31.

Numerical simulations showed that even though HC is negligible with respect to HO at high frequencies, it needs to be taken into account around the band of measured frequencies for the rational approximation. Furthermore, the physical interpretation of these two terms leads one to consider that HC should carry more information about the scatterer, and we want to investigate the conjecture that the poles of H are those of HC, hence that HO is analytic.

The rational approximation of the transfer function H is performed with a least-squares substitute to multi-point Padé approximation: the approximant Rk,n[H]=pk,n/qk,n is given by:

\[
(p_{k,n},\, q_{k,n}) \;=\; \operatorname*{argmin}_{p \in \mathcal{P}_n,\ q \in \mathcal{P}_k^1} \;\sum_{j=1}^{N} \big| p(z_j) - H(z_j)\, q(z_j) \big|^2 \tag{1}
\]

where n, k and N are integers such that k+n+1 ≤ N and (zj)j=1,…,N is a collection of points at which H is analytic. Here, 𝒫n is the set of polynomials of degree at most n and 𝒫k1 is the set of monic polynomials of degree k. An analog of the Nuttall-Pommerenke theorem for a least-squares version of classical Padé approximants was obtained in 31, where the values (p−Hq)(zj) in (1) get replaced by the j-th coefficients of the Taylor expansion of (p−Hq) at a given point in the domain of analyticity of H. It says that if H is holomorphic and single-valued on ℂ, except perhaps on a polar set, then p/q converges to H in capacity as k, n and N go to infinity, in such a way that n/k remains bounded and N ≤ C(k+n) for some constant C>0; convergence in capacity means that for each ε>0, the capacity of the set where the pointwise error is bigger than ε goes to zero.
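Since q is constrained to be monic, criterion (1) is linear in the unknown coefficients and reduces to a single complex linear least-squares problem. A minimal Python sketch of this computation (illustrative only, with synthetic data; not the solver used in 31, 91) is:

```python
# Minimal sketch of the least-squares substitute (1) to multi-point Pade
# approximation: with q monic, the criterion is linear in the coefficients of p
# and of q - z^k, so it reduces to one complex linear least-squares problem.
import numpy as np

def ls_rational(z, H, n, k):
    """Return coefficients (low -> high degree) of p (deg <= n) and monic q (deg k)."""
    Vp = np.vander(z, n + 1, increasing=True)          # columns 1, z, ..., z^n
    Vq = np.vander(z, k, increasing=True)              # columns 1, z, ..., z^(k-1)
    M = np.hstack([Vp, -(H[:, None]) * Vq])            # residual = M @ x - rhs
    rhs = H * z**k
    x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    p = x[:n + 1]
    q = np.concatenate([x[n + 1:], [1.0]])             # monic: q = z^k + lower-order terms
    return p, q

# Toy data: H has a simple pole at z = 2 - 1j, sampled at points on the imaginary axis.
z = 1j * np.linspace(-5, 5, 60)
H = 1.0 / (z - (2 - 1j)) + 0.3
p, q = ls_rational(z, H, n=3, k=3)
print("poles of the approximant:", np.roots(q[::-1]))  # one root should lie near 2 - 1j
```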

Dwelling on this work, we recently showed, under similar conditions on k, n, N, and provided that the interpolation points zj remain in a bounded set, that convergence in capacity still holds for the solutions to (1); this is a least-squares analog of Wallin's theorem on multi-point Padé approximation.

This result is interesting as it entails that poles of H can indeed be retrieved as limit points of certain poles of pk,n/qk,n, while explaining the chaotic behavior of other, so-called spurious poles that wander about the domain of analyticity. We plan to investigate other PEC scatterers.

4.5 Inverse problems in orthopedic surgery

Apart from more classical medical imaging domains, inverse problems find a rather surprising application in the field of orthopedics, see 60, 70, 74 and Section 8.2.

We are concerned, in particular, with hip prosthetic surgery, where an acetabular cup (AC) implant is inserted into the bone by press-fit with the use of an instrumented hammer. Such a hammer is equipped with a sensor capable of measuring the impact momentum (force) and hence of yielding important information about the bone quality and the stability of the AC implant. These are, indeed, crucial pieces of information to have in real time during surgery. On the one hand, if the achieved bone-implant contact area is not sufficiently large, osseointegration may fail, eventually leading to aseptic loosening of the implant and the necessity of a revision surgery. On the other hand, if the insertion of the implant is too deep, the generated stresses may induce fractures or bone tissue necrosis.

The mathematical side of the process is far from trivial. First of all, contact mechanics is a highly nonlinear problem due to the geometrical constraints on the solution. Already the basic problem of an elastic body on a rigid foundation is a free-boundary problem with an unknown effective contact surface, characterized by the so-called Signorini conditions, nonlinear constraints of Karush-Kuhn-Tucker type involving stresses and displacements. In the present case, several coupled problems have to be solved, since the regions corresponding to the bone, the implant and the hammer all possess different material properties. Moreover, the deformations cannot be considered small; consequently, a hypo-elastic description is more appropriate than linear elasticity. In such a formulation, a rate relation between the Cauchy stress and strain tensors replaces a linear stress-strain constitutive law, and its integration hence induces additional non-linearity. Finally, the bone is a porous multi-scale medium, and an appropriate homogenization model should be derived, with adequate parameter fitting.

For tackling inverse problems (e.g., that of determining material parameters), the direct formulation has to be solved efficiently enough that iterative approaches are not prohibitively expensive. This motivates the exploration of model-order reduction strategies that would, in particular, allow an efficient integration of the system in time.

4.6 Stability and design of active devices

Through contacts with CNES (Toulouse) and UPV (Bilbao), the team got involved in the design of amplifiers which, unlike filters, are active devices. A prominent issue here is stability. Twenty years ago, it was not possible to simulate unstable responses, and only after building a device could one detect instability. The advent of so-called harmonic balance techniques, which compute steady-state responses of linear elements in the frequency domain and look for a periodic state in the time domain of a network connecting these linear elements via static non-linearities, made it possible to compute the harmonic response of a (possibly nonlinear and unstable) device 87. This has had a tremendous impact on design, and there is a growing demand for software analyzers. In this connection, there are two types of stability involved. The first is the stability of a fixed point around which the linearized transfer function accounts for small-signal amplification. The second is the stability of a limit cycle, which is reached when the input signal is no longer small and truly nonlinear amplification is attained (e.g., because of saturation).

Initial applications by the team have been concerned with the first type of stability, and emphasis was put on defining and extracting the “unstable part” of the response. We showed that, under realistic dissipativity assumptions at high frequency for the building blocks of the circuit, the linearized transfer functions are meromorphic in the complex frequency variable s, with at most finitely many unstable poles in the right half-plane 2. Dwelling on the unstable/stable decomposition in Hardy spaces, we developed a procedure to assess the stability or instability of the transfer functions at hand from their evaluation on a finite frequency grid 10, which was further improved in 55 to address the design of oscillators. This has resulted in the development of a software library called Pisa (see Section 3.5.1), aiming at making these techniques available to practitioners. Extending this methodology to the strong-signal case, where linearization is considered around a periodic trajectory, is considerably more difficult and has received much attention from the team in recent years. The exponential stability of the high-frequency limit of a circuit was established in 5, implying that there are at most finitely many unstable poles and no other unstable singularity for the monodromy operator around the cycle. Furthermore, the links between the monodromy operator and the (operator-valued) “harmonic transfer function” (HTF) of the linearized system along the trajectory were brought to light in 24: the system is exponentially stable if and only if its HTF is bounded and analytic in a right half-plane of the form {Re s > −ε} for some ε>0. We deal here with input-output systems of the form:

\[
y(t) \;=\; \sum_{j=1}^{N} D_j(t)\, y(t - \tau_j) \;+\; u(t), \qquad t > t_0,
\]

where τ1 < … < τN are positive delays and D1(t),…,DN(t) are real d×d matrices depending periodically on time t, while u is the ℝd-valued input and y the ℝd-valued output (complex coefficients can of course be handled in the same way). The HTF, which generalizes the usual transfer function of linear time-invariant systems, is an analytic function of the Laplace-Fourier variable, valued in the space of operators on L2([0,T), ℂd) (T is the period of the system). It can be defined as follows: if a periodic linear control system at rest is fed from the initial instant ti = −∞ with an input signal v(t) e^{xt} e^{iνt}, where v is T-periodic and x>0 is large enough, then the output is of the form (𝐇(x+iν)v)(t) e^{xt} e^{iνt}, where 𝐇(x+iν) is the harmonic transfer function evaluated at x+iν (an operator) and 𝐇(x+iν)v is the T-periodic function which is the image of the T-periodic function v under 𝐇(x+iν). That is to say, an exponentially modulated input wave of frequency ν carried by a T-periodic signal is mapped to an output of the same form, and the HTF maps the input carrier to the output carrier. For stable systems, this is asymptotically true if the initial time is a finite instant; otherwise, the unstable transient will prevent this mathematical solution from being physical. Other definitions are given in 24.
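A toy scalar simulation (d=1, one delay, illustrative coefficients; not the setting of 24) of this definition is sketched below: the delay-difference system is stepped on a grid containing the delay, fed with an exponentially modulated T-periodic input, and the output divided by the exponential factor is seen to settle on a T-periodic carrier.

```python
# Toy illustration (scalar, one delay) of the harmonic transfer function idea:
# simulate y(t) = D1(t) * y(t - tau1) + u(t) on a grid containing the delay,
# feed u(t) = v(t) * exp((x + i*nu)*t) with v T-periodic, and observe that
# y(t) * exp(-(x + i*nu)*t) settles on a T-periodic carrier after the transient.
import numpy as np

T = 1.0                                         # period of the system
dt = T / 200
tau1 = 0.25                                     # delay, chosen as a multiple of dt
n1 = round(tau1 / dt)
x, nu = 0.5, 2 * np.pi * 3                      # exponential rate and modulation frequency (illustrative)

t = np.arange(0, 40 * T, dt)
D1 = 0.5 + 0.3 * np.cos(2 * np.pi * t / T)      # T-periodic coefficient, |D1| < 1 (stable regime)
v = np.exp(2j * np.pi * t / T)                  # T-periodic input carrier
u = v * np.exp((x + 1j * nu) * t)

y = np.zeros_like(u)
for k in range(len(t)):                         # system at rest before t = 0
    y[k] = (D1[k] * y[k - n1] if k >= n1 else 0.0) + u[k]

carrier = y * np.exp(-(x + 1j * nu) * t)        # should become T-periodic after the transient
last, prev = carrier[-200:], carrier[-400:-200] # two consecutive periods near the end
print("relative difference between consecutive periods of the carrier:",
      np.linalg.norm(last - prev) / np.linalg.norm(last))
```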

We were able to construct a simple nonlinear circuit whose linearization around a periodic trajectory has a spectrum containing a whole circle; we currently investigate whether the singularities of the Fourier coefficients of the HTF (that are themselves analytic functions) also contain that circle. Indeed, just like a series of functions may diverge even though the summands are smooth, it is a priori possible that the HTF has a singularity at a point whereas its Fourier coefficients do not. Note that these coefficients are all one can estimate by harmonic balance techniques, and therefore the above question is of great practical relevance. We also investigate the relation between our stability criterion (that the HTF should be bounded and analytic on a “vertical” half-plane containing the origin), and the weaker requirement that the HTF exists pointwise in a half-plane. Connections with the representation of Volterra equations with jumps in the kernel are also a motivation for such a study, see 16.

4.7 Tools for numerically guaranteed computations

The overall and long-term goal is to enhance the quality of numerical computations. This includes developing algorithms whose convergence is proved not only under the assumption that the numerical computations are performed in exact real or complex arithmetic, but also when truly accounting for the fact that the computations are performed with an inexact arithmetic (usually floating-point arithmetic). A numerical result alone is of little interest if no rigorous bound comes with it, ensuring that the true theoretical result is provably not too far from the computed one.
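A tiny Python illustration of this principle (exact rational arithmetic used a posteriori to bound the rounding error of a floating-point computation; this is only a sketch, not Sollya):

```python
# Minimal illustration of "a numerical result together with a rigorous bound":
# compute a floating-point sum and bound its rounding error by redoing the
# computation in exact rational arithmetic (a sketch only, not Sollya).
from fractions import Fraction

xs = [0.1, 0.2, 0.3, 1e16, -1e16]      # the float literals are taken as exact binary values

float_sum = sum(xs, 0.0)               # usual floating-point evaluation, left to right
exact_sum = sum((Fraction(x) for x in xs), Fraction(0))   # exact value of the same expression

error = Fraction(float_sum) - exact_sum                    # exact rounding error
print("float result :", float_sum)
print("exact result :", float(exact_sum))
print("rounding error (exact value, shown as a float):", float(abs(error)))
```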

A specific way of contributing to this objective is to develop efficient numerical implementations of mathematical functions with rigorous bounds. We do sometimes provide such implementations. The software tool Sollya (see Section 3.5.6), developed together with C. Lauter (University of Texas at El Paso, UTEP), is also an achievement of the team in that respect. This tool provides an interactive environment for performing numerically rigorous computations. Sollya comes as a standalone tool and also as a C library that allows one to benefit from all the features of the tool in C programs.

4.8 Imaging and modeling ancient materials

This is an additional activity of the team, linked to image classification in archaeology, pursued in collaboration with the partners listed in Section 8.3.

The pottery style is classically used as the main cultural marker within Neolithic studies. Archaeological analyses focus on pottery technology, and particularly on the first stages of pottery manufacturing processes. These stages are the most demonstrative for identifying the technical traditions, as they are considered crucial in apprenticeship processes. Until now, the identification of pottery manufacturing methods has been based on macro-trace analysis, i.e., surface topography, breaks and discontinuities indicating the type of elements (coils, slabs, ...) and the way they were put together for building the pots. Overcoming the limitations inherent to macroscopic pottery examination requires complete access to the internal structure of the pots. Micro-computed tomography can be used for exploring the micro-structure of ancient materials. This non-invasive method provides quantitative data for a large set of proxies and is perfectly adapted to the analysis of cultural heritage materials.

The main challenge of our current analyses is to overcome the lack of existing protocols for quantifying observations. In order to characterize the manufacturing sequences, the mapping of the paste variability (distribution and composition of temper) and of the discontinuities linked to different classes of pores, fabrics and/or organic inclusions appears promising. The acquired images form a set of 2-D and 3-D data at different resolutions and with specific physical characteristics related to each acquisition modality. Specific shape recognition methods need to be developed by applying robust imaging techniques and 3-D shape recognition algorithms.

We devised in 54 a method to isolate pores from the 3-D data volumes in binary 3-D images, to which we apply the Hough transform (derived from the Radon transform). This method, whose generalization from 2-D to 3-D is quite recent, allows us to evaluate the presence of parallel lines going through the pores. The quantity of such lines and their parallelism furnish good indicators of the “coiling” manufacturing technique, which they allow one to distinguish, in particular, from the “spiral patchwork” technique. Other avenues of investigation, such as deep learning techniques, will be explored.

5 Social and environmental responsibility

  • Martine Olivi is a member of the CLDD (Commission Locale de Développement Durable). She is a member of the GDS EcoInfo (CNRS) and of Labos1point5.
  • Sylvain Chevillard and Martine Olivi are members of the organizing committee of the RESET seminar (Redirection Écologique et Sociale : Échanges Transdisciplinaires), an inter-lab and interdisciplinary seminar in Sophia, dedicated to the themes of ecological transition and sustainable development.
  • Sylvain Chevillard and Martine Olivi were trained to lead “Ma Terre en 180 minutes” workshops (a three-hour workshop where participants take part in a role play of a research laboratory committed to halving its carbon footprint and looking for practical solutions to reach this goal), and Sylvain Chevillard co-led such a workshop with two other people as part of events hosted at Université Côte d'Azur, mirroring COP29.
  • Sylvain Chevillard is involved in teaching environmental issues at Polytech Sophia Antipolis (see Section 9.2).
  • Martine Olivi gave a lecture, Digital systems and sustainability, at Doctoriales 2024, a day for scientific cross-fertilization between the PhD students from DS4H. She organized a round table, Croissance économique ou habitabilité de la planète : faut-il choisir ? (Economic growth or planetary habitability: must we choose?), as part of events hosted at Université Côte d'Azur, mirroring COP29.

6 Highlights of the year

6.1 Awards

Dmitry Ponomarev was awarded a runner-up poster prize at the 4th IMA Conference on Inverse Problems in Bath (September), see Section 9.1.4.

6.2 Contract of Objectives, Means and Performance

Note: Readers are advised that the Institute does not endorse the text in the “Highlights of the year” section, which is the sole responsibility of the team leader.

Inria management has imposed on the institute a new Contract of Objectives, Means and Performance (COMP) with the French government, for the period 2024–2028. It presages major changes for Inria, regarding both its missions and the way it operates. These changes, whose precise nature and impact on the staff are unclear, should become effective as soon as 2025 but have not been the subject of any consultation. Inasmuch as the collaboration of Inria's staff is necessary to turn this disruption into a successful change, we are concerned that top management has remained deaf to several votes and petitions opposing these policies.

The multiplication of new missions and priorities, particularly those related to the “program agency” or oriented towards defense applications, pushes the research carried out at Inria into the background. The constraints induced by this COMP will restrict the independence of scientists and teams, as well as their freedom to choose research topics and collaborators.

Our main concern with the COMP lies with the following items:

  • Placement of Inria in a “zone à régime restrictif” (ZRR).
  • Restriction of international and industrial collaborations to partners chosen by the institute's management, with no clear indication of the rules.
  • Individual financial incentives for researchers involved in strategic partnerships, whose topics are steered by the program agency.
  • Priority given to “dual” research with both military and civilian applications, materialized by tighter links with the Ministry of Defense.

7 New results

7.1 Field extrapolation for inverse magnetization problem

Participants: Dmytro Dmytrenko, Juliette Leblond, Mubasharah Khalid Omer, Dmitry Ponomarev.

The simple fact that more measured data yield better results manifests itself as a slower decay rate of the singular values of the forward operator A (introduced in Section 3.1.3) for a larger measurement region. The same phenomenon happens when the abundance of data is not only quantitative but also qualitative. Namely, for the inverse magnetization problem (see Section 4.2), reconstruction results demonstrate obvious improvements when measurements of all three components of the magnetic field are used instead of only one. Because of the harmonicity of the field, the knowledge of only one (normal) component actually determines the two other (tangential) components. However, this operation (a Riesz transformation) is only stable when the normal component of the field is known on the entire horizontal plane. Since in practice the actual measurements of the field are limited to a portion of the plane, we are thus led to consider an extrapolation problem. This problem is well-defined in the ideal case where exact data are known, i.e., one can uniquely construct the field on the whole plane which agrees with the given data on that portion. In the case of perturbed data, the problem lacks stability but can nevertheless be effectively addressed. To this effect, we have pursued the development of two approaches.

The first approach is fairly straightforward and is based on a regularized deconvolution of the field in order to find an auxiliary magnetization-dependent function on ℝ2 (see Section 3.1.2). Re-applying the Poisson transform to this function allows us to generate the field on the whole plane at height x3 = h, thus obtaining an extrapolant. This approach can be efficiently implemented when an appropriate finite-dimensional subspace is chosen. A particular choice that was investigated is the span of spherical harmonics mapped to the horizontal plane by a suitable Kelvin transform. Since the Kelvin transform preserves harmonicity and orthogonality, these functions are naturally restrictions of harmonic functions and the application of the Poisson operator to them is computationally cheap; the harmonic extension of spherical harmonics can be obtained without any convolution, as these are simply solid harmonics. Other choices of basis functions can be efficiently arranged as products of orthogonal systems in L2, such as Laguerre or Malmquist-Takenaka functions. This extrapolation approach is developed for planar L2 magnetization distributions (see Section 3.1.2), and it does not exploit the compact support of the magnetization, which is usually known. On the other hand, since a compact support is not a requirement, this can also be an advantage, allowing one to use the approach for three-dimensional magnetizations. Indeed, a volumetric sample is easily shown to be equivalent to a planar magnetization distribution (supported on the entire plane) at the height corresponding to the distance between the measurement plane and the original sample. This is the topic of the PhD thesis of Mubasharah Khalid Omer.
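The following Python sketch illustrates the Poisson transform and its Tikhonov-regularized deconvolution on a full periodic grid (illustrative parameters and synthetic data; the actual difficulty, namely that the data are known only on part of the plane, is not addressed here):

```python
# Sketch of the Poisson transform / regularized deconvolution pair used in the
# first approach (illustrative only).  Upward continuation by a height h acts in
# the Fourier domain as multiplication by exp(-2*pi*h*|xi|), with xi in cycles
# per unit length.
import numpy as np

n, L, h, eps = 256, 10.0, 0.3, 1e-3            # grid size, box size, height, regularization
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Synthetic auxiliary magnetization-dependent function on the plane.
aux = np.exp(-4 * ((X - 1) ** 2 + Y ** 2)) - 0.5 * np.exp(-6 * ((X + 1) ** 2 + (Y - 1) ** 2))

kx = np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
poisson = np.exp(-2 * np.pi * h * np.sqrt(KX ** 2 + KY ** 2))   # Poisson (upward continuation) multiplier

field_h = np.real(np.fft.ifft2(np.fft.fft2(aux) * poisson))     # synthetic field data at height h

# Tikhonov-regularized deconvolution of the field, then re-application of the
# Poisson transform to regenerate the field at height h on the whole plane.
aux_hat_reg = np.fft.fft2(field_h) * poisson / (poisson ** 2 + eps)
field_h_again = np.real(np.fft.ifft2(aux_hat_reg * poisson))
print("max discrepancy on the regenerated field:", np.max(np.abs(field_h_again - field_h)))
```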

The second extrapolation approach 18, 19, 23, 27 assumes a planar magnetization with square-integrable divergence of its tangential components and relies on the knowledge of its support (which is available in practice). Namely, it is assumed that the planar measurement region covers the magnetization support in the underlying plane. This method relies on the representation of the field as the sum of two convolution operators acting, respectively, on a combination of the tangential components of the magnetization (actually, the divergence thereof) and on the normal one. The kernel functions involved essentially coincide with the Poisson kernel and its vertical derivative. The extrapolated field is generated from the same relation once the two mentioned magnetization-dependent quantities are found. To this effect, we have developed a so-called double-spectral approach (in the spirit of Section 3.1.3). The approach consists of two stages. First, an additional (data-independent) problem is solved to generate an auxiliary kernel function whose convolution swaps (to a certain degree) the action of two other convolution operators. This can be achieved by the spectral decomposition of a truncated Poisson operator, i.e., the explicit computation of its eigenvalues and eigenfunctions. At the second stage, the problem is immersed in a higher dimension and the self-adjoint system of integral equations is solved, again by a spectral expansion. The use of the spectral decomposition for the solution of the auxiliary problem at the first stage allows one to efficiently characterize the null-space of the matrix operator occurring at the second stage. The use of another spectral method at the second stage (i.e., using eigen-elements of the matrix operator as a solution basis) is convenient for de-noising possibly perturbed field measurements by means of a truncation of higher-order modes. The numbers of basis elements at both stages thus serve as regularization parameters, which have to be adjusted according to the level of noise in the measurements or to the accuracy of the numerical computation of auxiliary quantities. In this respect, it should be mentioned that, since the field extrapolation problem is subject to more constraints than the reconstruction of the full magnetization, additional relations (such as the vanishing of the total integral of B3 over ℝ2) can be employed for choosing the regularization parameters.
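A 1-D Nyström sketch of the first, data-independent stage (spectral decomposition of a truncated Poisson operator, followed by a truncated spectral solve whose truncation level plays the role of a regularization parameter) is given below; the parameters are illustrative and the team's construction is two-dimensional and more involved.

```python
# 1-D Nystrom sketch of the spectral decomposition of a truncated Poisson
# operator:  (P_h f)(x) = \int_{-a}^{a} h / (pi * (h^2 + (x - y)^2)) f(y) dy.
import numpy as np

a, h, n = 1.0, 0.2, 400
y, w = np.linspace(-a, a, n), 2 * a / n                 # nodes and uniform quadrature weight
K = h / (np.pi * (h ** 2 + (y[:, None] - y[None, :]) ** 2)) * w

eigvals, eigvecs = np.linalg.eigh(K)                    # K is symmetric (constant weights)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print("largest eigenvalues of the truncated Poisson operator:", eigvals[:5])

# Truncated spectral solve of P_h f = g for a smooth right-hand side g; the
# truncation level m acts as a regularization parameter.
g = np.exp(-4 * y ** 2)
m = 40
coeffs = eigvecs[:, :m].T @ g / eigvals[:m]
f = eigvecs[:, :m] @ coeffs
print("residual of the truncated solve:", np.linalg.norm(K @ f - g))
```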

7.2 Net moment estimation

Participants: Sylvain Chevillard, Juliette Leblond, Jean-Paul Marmorat, Anass Yousfi.

We continued the work started in the past years to establish formulas for the integrals of the form

\[
\int_{\mathrm{domain}} P(x)\, B_3(x)\, dx
\]

where P is a polynomial and the domain of integration is either the square QR (and actually, more generally, any rectangle) or the disk DR of radius R (the notations are those of Section 4.2; see especially Figure 2). This is the topic of the PhD thesis of Anass Yousfi.

On the one hand, in the case of the disk, exact formulas are not available, but we had obtained last year asymptotic expansions with respect to the variable R as it grows large. Due to unexpected personal difficulties, the report we were working on has been delayed and is still ongoing work. However, progress was achieved by simplifying some proofs, and the formulas have been implemented and successfully compute the sought asymptotic expansion.

On the other hand, in the case of the square, we obtained exact formulas, not only for B3 but also for the other components of the field. More specifically, denoting by abuse of notation Bj the (forward) operator from L2(S) to L2(QR) that maps a magnetization to the corresponding component of the field (j ∈ {1,2,3}), its adjoint operator Bj⋆ plays an important role for the estimation of the net moment from measured data. Recursion formulas allow one to compute an explicit form of Bj⋆(P) for any polynomial P 22. Our goal is now to see how this can be used for the evaluation of the solution to the extremal problem presented in 3, by expanding it on the basis of the eigenfunctions of the Laplace operator as suggested in the above-cited article. We did some preliminary experiments in that direction by comparing the efficiency of two methods for the evaluation of integrals of the form ∫QR sin(k1x1) sin(k2x2) B3(x) dx (when k1 and k2 become large): either applying classical quadrature rules, or approximating the factor sin(k1x1)sin(k2x2) by a polynomial P(x1,x2) and applying the formulas described above for the operator B3⋆. Another kind of quadrature formula, specifically tuned for oscillatory integrals, also retained our attention: Filon quadrature rules 72. It is not clear yet which combination of these three strategies is optimal, depending on the values of k1 and k2. Progress in that respect is expected.
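To illustrate why Filon-type rules are attractive for such oscillatory integrals, the following 1-D Python sketch (illustrative only, not the team's 2-D setting) interpolates the smooth factor linearly on each panel, integrates its product with sin(kx) exactly, and compares the result with a plain composite trapezoidal rule at large k:

```python
# Sketch of a Filon-type rule for I = \int_0^1 f(x) sin(k x) dx: the smooth
# factor f is interpolated linearly on each panel and the product with sin(kx)
# is integrated in closed form.
import numpy as np

def filon_linear(f, k, a, b, n):
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    x0, x1, f0, f1 = x[:-1], x[1:], fx[:-1], fx[1:]
    c1 = (f1 - f0) / (x1 - x0)                                    # per-panel linear interpolant c0 + c1*x
    c0 = f0 - c1 * x0
    i_sin = (np.cos(k * x0) - np.cos(k * x1)) / k                 # \int sin(kx) dx over each panel
    i_xsin = (np.sin(k * x1) - np.sin(k * x0)) / k**2 \
             - (x1 * np.cos(k * x1) - x0 * np.cos(k * x0)) / k    # \int x sin(kx) dx over each panel
    return np.sum(c0 * i_sin + c1 * i_xsin)

def trapezoid(f, k, a, b, n):
    x = np.linspace(a, b, n + 1)
    g = f(x) * np.sin(k * x)
    hstep = (b - a) / n
    return hstep * (np.sum(g) - 0.5 * (g[0] + g[-1]))

f = lambda x: np.exp(-x) * (1 + x)              # placeholder smooth factor
k, n = 200.0, 64                                # high frequency, modest number of panels
ref = trapezoid(f, k, 0.0, 1.0, 1_000_000)      # brute-force reference value
print("trapezoid error :", abs(trapezoid(f, k, 0.0, 1.0, n) - ref))
print("Filon-type error:", abs(filon_linear(f, k, 0.0, 1.0, n) - ref))
```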

7.3 QDM magnetometry

Participants: Sylvain Chevillard, Juliette Leblond, Dmitry Ponomarev, Anass Yousfi.

Compared to SQUID microscopy (which provided the team with a concrete context for developing methodology for the inverse magnetization problem), Quantum-Diamond Magnetometric (QDM) devices are capable of producing high-resolution field maps, potentially of more than one component, and extremely close to a magnetic sample 62, 82, 88. One such device is currently being assembled at Cerege, with which the team already collaborates (see Sections 4.2, 4.3).

The measurement principle of the QDM can be roughly described as follows. A point defect in the crystalline lattice of the diamond is introduced, consisting of a nitrogen-vacancy (NV) pair. The quantum structure of this NV center gives it crucial magneto-optical properties. The NV center is a two-spin system (2 out of its 6 electrons with spin 1/2 are unpaired) whose triplet state (i.e., that with total spin equal to 1) is of particular interest. Denoting by 𝚖s the projection of the total spin of the NV center onto its axis, the following is observed. When excited by a laser, NV states with 𝚖s=±1 and 𝚖s=0 relax back to the ground state differently, causing a drop in signal when the diamond's luminescence intensity is measured at different microwave frequencies. Furthermore, when an exterior magnetic field is added, it lifts the degeneracy of the states 𝚖s=±1 according to the Zeeman effect and thus causes the splitting of this intensity peak (drop) in two. The difference of frequencies of these two peaks is proportional to the magnitude of the projection of the field onto the NV axis, which makes a magnetic measurement possible. Since an NV center can, in general, be oriented along 4 different directions (according to the symmetry of the diamond crystalline lattice), 8 peaks instead of 2 are observed experimentally, and reconstructing the magnetic field through the absolute values of its projections onto those axes is not a trivial task.
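The following toy Python sketch illustrates this ambiguity: given only the absolute values of the projections of a field B onto the four NV axes (taken here along the standard (111)-type directions of diamond, purely for illustration), one must try sign patterns and solve small least-squares problems, and B is recovered only up to a global sign.

```python
# Toy sketch of reconstructing a field B from the absolute values of its
# projections onto the four NV axes (the sign of each projection is unknown,
# hence the combinatorial ambiguity).
import itertools
import numpy as np

axes = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

B_true = np.array([12.0, -7.0, 3.0])          # arbitrary field (arbitrary unit)
magnitudes = np.abs(axes @ B_true)            # what the peak splittings give access to

best = None
for signs in itertools.product([1, -1], repeat=4):
    target = np.array(signs) * magnitudes     # candidate signed projections
    B, *_ = np.linalg.lstsq(axes, target, rcond=None)
    misfit = np.linalg.norm(axes @ B - target)
    if best is None or misfit < best[0]:
        best = (misfit, B)

print("recovered field (up to a global sign):", best[1])
```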

Mathematical contributions here can lead to a better design of experimental set-ups. As a first step, we have taken up the issue of improving the quantitative estimate of the Zeeman effect relating the distance between peaks to the field projection magnitude beyond the linear order, and of estimating the error. This is important because the transverse (with respect to the NV axis) component of the field may not necessarily be small.

7.4 Inverse magnetization problem with 3-component field measurements

Participants: Laurent Baratchart, Sylvain Chevillard, Juliette Leblond, Dmitry Ponomarev.

Since missing components of the magnetic field can be obtained either by extrapolation (see Section 7.1) or by QDM measurements (see Section 7.3), it is of major interest to extend the methodology of previously developed approaches (dealing with different aspects of inverse magnetization problems) to take into account the availability of this new type of information. This concerns, for example, the reconstruction of net moments (see Sections 3.2.2, 4.2, 7.2) by means of asymptotic estimates 36, 42, 81 and through the solution of an auxiliary bounded extremal problem 3. Preliminary work in this direction has already started. As already mentioned in Section 7.1, the results for the magnetization reconstruction (its minimal L2-norm representative, among equivalent magnetizations producing the same field data) with L2 constraints are shown to be significantly better when all three components of the field are used. The numerical implementation, however, depends on how well the magnetization is represented in a chosen finite-dimensional basis. In particular, we have used suitably ordered tensor products of one-dimensional modified Fourier basis functions (eigenfunctions of the Laplace operator with zero Neumann boundary data), which are well suited to smooth magnetizations, but less so to “grainy” ones. For a universal choice, one should perhaps proceed with a wavelet-type basis, which can represent multi-scale features, and consider its adaptation taking into account the exact form of “silent” magnetic sources.
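As a small illustration of such a tensor-product basis of Neumann-Laplacian eigenfunctions, the Python sketch below uses plain cosines on the unit square (which may differ in detail from the modified Fourier basis actually used) and expands a smooth synthetic magnetization component in it by least squares:

```python
# Sketch of expanding one magnetization component in a tensor-product basis of
# Neumann-Laplacian eigenfunctions on the unit square (plain cosines here; only
# an illustration of the kind of finite-dimensional representation discussed).
import numpy as np

n_grid, n_modes = 64, 8
x = (np.arange(n_grid) + 0.5) / n_grid
X, Y = np.meshgrid(x, x, indexing="ij")

def basis(p, q, X, Y):
    # cos(p*pi*x) * cos(q*pi*y): eigenfunction of the Neumann Laplacian on [0,1]^2
    return np.cos(p * np.pi * X) * np.cos(q * np.pi * Y)

m = np.exp(-30 * ((X - 0.4) ** 2 + (Y - 0.6) ** 2))      # smooth synthetic component

A = np.column_stack([basis(p, q, X, Y).ravel()
                     for p in range(n_modes) for q in range(n_modes)])
coeffs, *_ = np.linalg.lstsq(A, m.ravel(), rcond=None)
m_approx = (A @ coeffs).reshape(n_grid, n_grid)
print("relative L2 error of the truncated expansion:",
      np.linalg.norm(m_approx - m) / np.linalg.norm(m))
```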

7.5 Rational and meromorphic approximation

Participants: Laurent Baratchart.

In a joint research effort with M. Yattselev from Indiana University at Indianapolis, we completed this year a long-term project that started years ago as a collaboration with the late H. Stahl. It deals with the behavior of rational approximants of given degree N to a holomorphic function f, on a compact subset O of the domain of analyticity, bounded by a Jordan curve γ, under the assumption that f can be continued analytically to the exterior of γ (possibly in a multiple-valued manner) except perhaps over a set of logarithmic capacity zero of the extended plane.

If we let RN denote the rational functions of degree N with no pole in O, a best rational approximant is a member rN of RN minimizing ‖f − rN‖_{L∞(γ)}; we say that a sequence rN of rational approximants, with rN ∈ RN, is asymptotically optimal in the N-th root sense if lim sup_N ‖f − rN‖_{L∞(γ)}^{1/N} is as small as possible (see also Section 3.3).

The behavior of rational approximants is companion to that of meromorphic approximants to f on γ. To define the latter, recall that RN denotes the rational functions of degree N with no pole in O, and let 𝒜γ be the space of bounded holomorphic functions in the exterior of γ that extend continuously up to γ. For our present purpose, a meromorphic function is the trace on γ of a member of MN := 𝒜γ + RN, and a best meromorphic approximant solves the following extremal problem:

(EN)  Let N ≥ 0 be an integer and f as above; find a function gN ∈ MN such that gN − f has minimal norm in L∞(γ).

That (EN) is well posed depends on the fact that f extends analytically across γ by assumption. When γ is rectifiable, the unique solution is given by AAK theory (named after Adamjan, Arov and Krein) in ℂ̄∖O, which connects the spectral decomposition of Hankel operators with best approximation 78. Again, we say that a sequence gN of meromorphic approximants, with gN ∈ MN, is asymptotically optimal in the N-th root sense if lim sup_N ‖f − gN‖_{L∞(γ)}^{1/N} is as small as possible. It follows from the work in 63, 89 that, for f as above, lim sup_N ‖f − rN‖_{L∞(γ)}^{1/N} is a true limit whenever rN is asymptotically optimal in the N-th root sense, and this limit is equal to exp{−2/cap(O,𝒦f)}, where 𝒦f is the compact set minimizing the capacity of the condenser (O,𝒦f) outside of which the extension of f is single-valued. Moreover, the same holds for meromorphic approximants that are asymptotically optimal in the N-th root sense, by 76. The existence and uniqueness of 𝒦f, up to a polar set, is a consequence of 84. If the extension of f has no branch-point, then 𝒦f = ∅ and the convergence of rational or meromorphic approximants that are asymptotically optimal in the N-th root sense is faster than geometric; otherwise cap(O,𝒦f)>0 and the convergence is geometric.
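The connection with Hankel operators can be illustrated numerically in the classical unit-circle setting: for φ(z) = Σ_{n≥1} c_n z^{−n}, the L∞ distance from φ to the sum of the bounded analytic functions in the disk and the rational functions with at most k poles there equals the k-th singular value of the Hankel operator with matrix [c_{i+j+1}]. The Python sketch below uses a finite section of this (infinite) matrix, so the printed values are approximations; the circle setting is of course simpler than the curve γ above.

```python
# Numerical sketch of the AAK connection between meromorphic approximation and
# Hankel singular values, in the classical unit-circle setting (finite section,
# hence approximate values).
import numpy as np
from math import factorial

N = 30
c = np.array([1.0 / factorial(n) for n in range(1, 2 * N)])   # phi(z) = exp(1/z) - 1
H = np.array([[c[i + j] for j in range(N)] for i in range(N)])  # Hankel matrix [c_{i+j+1}]
sigma = np.linalg.svd(H, compute_uv=False)

for k in range(5):
    print(f"approx. distance from phi to meromorphic functions with {k} poles: {sigma[k]:.3e}")
```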

Now, if the extension of f outside γ has finitely many branches, that is, if it is analytic and single-valued on a compact Riemann surface 𝒮 (except possibly on a polar subset of 𝒮), then we were able to show that rational or meromorphic approximants that are asymptotically optimal in the N-th root sense converge in capacity in the complement of 𝒦f, meaning that for any ε>0 the set of points where the error is larger than ε has a logarithmic capacity that tends to zero when N goes to infinity. Moreover, if 𝒮 is a sphere (equivalently: if the continuation of f has no branch-point), then this convergence is faster than geometric and there is a sequence of approximants, converging at least as fast, whose poles coalesce to the polar singular set of f. And if 𝒮 has positive genus (equivalently: if the continuation of f has branch-points), then the normalized counting measures of the poles of the approximants converge weak-star, as N tends to infinity, to the equilibrium distribution on the plate 𝒦f of the condenser (O,𝒦f); see 25. Interpolation techniques and complex orthogonal polynomials, which form the technical core of the work in 63, are of no help in our case, as it is not even known whether interpolation takes place for best approximants in the sup norm. The proof proceeds from completely different ideas, reducing to AAK-type approximants and analyzing sub-level sets of the logarithmic error function on the underlying Riemann surface; we refer the reader to 25 for details.

Let us also mention that convergence in capacity of least-squares substitutes (1) to multi-point Padé approximants, for functions with polar singular set on ℂ (see Section 4.4 for a precise description), was established by elaborating on the techniques developed in 31 and reworking normalization issues. An article is being written to report on this result.

7.6 Schur functions minimization under Nevanlinna-Pick constraints

Participants: Martine Olivi, Fabien Seyfert.

In the past, Factas has devoted a great deal of effort to the problem of impedance matching in filter synthesis. Filter synthesis is usually performed under the hypothesis that both ports of the filter are loaded on a constant, resistive load (usually 50 Ω). In complex systems, however, filters are cascaded with other devices and end up being loaded on a load which, by construction, varies with frequency and is not purely resistive. Likewise, in an emitter-receiver, the antenna is followed by a filter; however, the antenna is far from being purely resistive on the whole passband, and a mismatch between the antenna and the filter then causes irremediable power losses, both for emission and transmission. Our goal has therefore been to develop methods for filter synthesis that allow us to match varying loads on specific frequency bands, while enforcing rejection properties away from the passband. The problem is highly nontrivial because matching requires the response of the filter, which is holomorphic in the right half-plane (since the filter is a stable device), to approximate a function which is holomorphic in the left half-plane.

An approach to this issue amounts to considering the following optimization problem: find a Schur function with minimal supremum norm on a frequency band, satisfying a set of Nevanlinna-Pick interpolation constraints. A convex approach to this problem was developed by D. Martínez Martínez and G. Bose during their PhD theses. The preprint 26 synthesizes their work and generalizes it to the case of a mixed Nevanlinna-Pick problem, with interpolation points both inside and on the boundary of the analyticity domain.
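For the classical disk setting with interior interpolation points (simpler than the mixed interior/boundary problem studied in 26), the solvability of a Nevanlinna-Pick problem by a Schur function is characterized by positive semi-definiteness of the Pick matrix; a small Python check of this criterion reads:

```python
# Classical Nevanlinna-Pick feasibility check (unit-disk setting, interior
# interpolation points only): a Schur function f with f(z_i) = w_i exists iff
# the Pick matrix P_ij = (1 - w_i * conj(w_j)) / (1 - z_i * conj(z_j)) is
# positive semi-definite.
import numpy as np

def pick_feasible(z, w, tol=1e-12):
    z, w = np.asarray(z, dtype=complex), np.asarray(w, dtype=complex)
    P = (1 - np.outer(w, np.conj(w))) / (1 - np.outer(z, np.conj(z)))
    return np.min(np.linalg.eigvalsh(P)) >= -tol        # P is Hermitian

# Illustrative interpolation data inside the unit disk.
z = [0.0, 0.5, -0.3 + 0.2j]
print(pick_feasible(z, [0.1, 0.2, 0.15]))       # small target values: prints True
print(pick_feasible(z, [0.1, 0.99, -0.95]))     # large swings: prints False
```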

8 Partnerships and cooperations

8.1 International research visitors

8.1.1 Visits to international teams

Research stays abroad

Participants: Laurent Baratchart.

  • Visited institution:
    Vanderbilt University
  • Country:
    USA
  • Dates:
    January 1 to May 15, 2024
  • Context of the visit:
    collaboration with Profs. D. Hardin and E. B. Saff on inverse source problems
  • Mobility program/type of mobility:
    visiting professor

8.2 National initiatives

ANR R2D2

Participants: Dmytro Dmytrenko, Dmitry Ponomarev.

ANR-21-IDES-0004, “Welcome package” (2022–2027). It allowed us to support the internship of Dmytro Dmytrenko. A premise for its topic was the known instability of the inverse magnetization problem and, in particular, of the high-order asymptotic formulas for reconstructing the net moment from magnetic field measurements (see Sections 3.4, 4.2). This had been tested before for additive Gaussian white noise. In practice, however, the noise of electronic devices such as a SQUID magnetometer is far from being white; it is closer to the so-called 1/f (or flicker) noise. The idea of this short internship was, therefore, to numerically test the sensitivity of the previous asymptotic results for the net moment to other types of noise, namely colored noise.
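A short Python sketch of how such colored (1/f, flicker) noise can be synthesized for these sensitivity tests, by shaping the spectrum of white Gaussian noise (illustrative normalization, not the code used during the internship):

```python
# Sketch of generating 1/f ("flicker") noise for sensitivity tests, by shaping
# the spectrum of white Gaussian noise in the Fourier domain.
import numpy as np

def colored_noise(n, alpha=1.0, rng=None):
    """Real noise of length n with power spectral density roughly ~ 1/f^alpha."""
    rng = rng or np.random.default_rng()
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = 1.0                          # placeholder to avoid dividing by zero at DC
    shaped = spectrum / f ** (alpha / 2)   # amplitude shaping gives PSD ~ 1/f^alpha
    shaped[0] = 0.0                     # drop the (undefined) DC component
    noise = np.fft.irfft(shaped, n)
    return noise / np.std(noise)        # normalize to unit standard deviation

x = colored_noise(4096, alpha=1.0, rng=np.random.default_rng(0))
print("std of the generated flicker noise:", np.std(x))
```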

ANR MoDyBe

Participants: Juliette Leblond, Dmitry Ponomarev.

ANR-23-CE45-0011-03, “Modeling the dynamic behavior of implants used in total hip arthroplasty” (2023–2027). Led by the laboratory Modélisation et Simulation Multi Échelle, UMR CNRS 8208 and the Université Paris-Est Créteil (UPEC), involving the Factas team together with the department of Orthopedic surgery of the Institut Mondor de Recherche Biomédicale, U955 Inserm and UPEC. Cement-less implants are increasingly used in the clinical practice of arthroplasty. They are inserted into the host bone using impacts performed with an orthopedic hammer (press-fit procedure, see Section 4.5). However, the rate of revision surgery remains high, which is a public health issue of major importance. The press-fit phenomenon occurring at implant insertion induces bio-mechanical effects in the bone tissues, which should ensure the stability of the implant during the surgery (“primary stability”). Despite routine clinical use, implant failures, which may have dramatic consequences, still occur and are difficult to anticipate. Just after surgery, the implant fixation relies on the pre-stressed state of the bone tissue around the implant. In order to avoid aseptic loosening, a compromise must be found by the surgeon. On the one hand, sufficient primary stability can be ensured by minimizing micro-motion at the bone-implant interface in order to promote osteo-integration phenomena. On the other hand, excessive stresses in the bone tissue around the implant must be avoided, as they may lead to bone necrosis or fractures. This raises the following mathematical issues. What is an appropriate mechanical model of the implant insertion process into the bone? What are suitable high-performance computing methods to accurately solve the resulting modeling equations for the bone-implant interaction subject to dynamic excitations? Which robust inversion approaches can be employed to retrieve the quantities of interest of the bone-implant interaction, such as the bone-implant contact area? The ANR project MoDyBe aims to address these issues.
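As a purely illustrative toy model, and not the model developed within MoDyBe, the dynamic response of an implant to a hammer impact is sometimes caricatured as a single-degree-of-freedom mass-spring-damper system driven by a short force pulse; the sketch below integrates such a system with a semi-implicit Euler scheme, all parameter values being invented for the example.

    import numpy as np

    # Toy 1-DOF model: m*x'' + c*x' + k*x = F(t), with F a short hammer-like pulse.
    # All values below are invented for illustration (not MoDyBe parameters).
    m, c, k = 0.5, 50.0, 5.0e6                      # mass [kg], damping [N.s/m], stiffness [N/m]
    dt, T = 1e-6, 5e-3                              # time step and duration [s]
    t = np.arange(0.0, T, dt)
    F = 1.0e3 * np.exp(-((t - 5e-4) / 1e-4) ** 2)   # Gaussian force pulse [N]

    x, v = 0.0, 0.0
    displacement = np.empty_like(t)
    for i in range(len(t)):
        a = (F[i] - c * v - k * x) / m              # Newton's second law
        v += a * dt                                 # semi-implicit (symplectic) Euler step
        x += v * dt
        displacement[i] = x
    # 'displacement' now holds the simulated motion of this toy system.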

Réseau Thématique.

Participants: Laurent Baratchart, Sylvain Chevillard, Juliette Leblond, Martine Olivi, Dmitry Ponomarev.

Factas is part of the “réseau thématique” ANAlyse et InteractionS (ANAIS), which gathers researchers doing fundamental and applied work on function spaces and operators, dynamical systems, self-similarity, probability, and signal and image processing.

8.3 Regional initiatives

DIM PAMIR CÉRAJUM

Participants: Juliette Leblond.

“Ancient ceramic production sequences revisited: Contributions from deep learning and digital twins”, Domaine de recherche et d'Innovation Majeur, PAtrimoines Matériels - Innovation, expérimentation et Résilience, Région Île de France (2024–2027). Led by the laboratory Trajectoires (CNRS / Univ. Paris 1), with partners at the Ipanema institute (CNRS), IFPEN, CEPAM and Inria (CNRS / Université Côte d'Azur). The project develops Artificial Intelligence approaches to exploit images from micro-tomographic acquisitions of Early Neolithic ceramics from Western Europe, see Section 4.8. The first objective is to classify these ceramics according to their material composition and the techniques used in their manufacture. Deep learning techniques will be used to develop a model capable of performing this classification quickly and reliably. The scope of this initial model will then be expanded by creating a “digital twin” of the ceramics, in a reverse-engineering perspective. This is the topic of the PhD thesis of Thaïs Wuillemin.

9 Dissemination

9.1 Promoting scientific activities

9.1.1 Scientific events: organisation

Member of the organizing committees
  • Mubasharah Khalid Omer joined the organizing team for the weekly PhD seminars at Inria Université Côte d'Azur (November).

9.1.2 Journal

Member of the editorial boards

Laurent Baratchart sits on the editorial boards of Computational Methods and Function Theory and of Complex Analysis and Operator Theory.

Reviewer - reviewing activities
  • Sylvain Chevillard was a reviewer for International Journal of Approximate Reasoning and for Complex Analysis and Operator Theory.
  • Dmitry Ponomarev was a reviewer for SIAM Journal on Imaging Sciences and for Mathematical Reviews.

9.1.3 Invited talks

  • Mubasharah Khalid Omer was invited to give a presentation at the PhD seminar at Inria Université Côte d'Azur (November).
  • Dmitry Ponomarev gave invited talks at:
    1. Mini-symposium “Recent Advances in Inverse Scattering Theory and Applications” of the International Conference on Inverse Problems: Modeling and Simulation (IPMS) 2024, Malta (May – June).
    2. “Inverse Problems” section of the 2nd International AMS-UMI Joint Meeting, Palermo (July).
    3. “Several Complex Variables” section of the same meeting.
  • Laurent Baratchart gave invited talks at:

    1. Sixth international meeting on approximation theory, numerical linear algebra and its applications, University of Lille (May),
    2. SIGMA-2024, CIRM, Marseille (October–November),
    3. Advances in Operator Theory with Applications to Mathematical Physics, Chapman University, Orange, CA (November).

    He was also a speaker at the Complex Analysis seminar of Aix-Marseille Université, Marseille (October).

9.1.4 Contributed talks

  • Anass Yousfi delivered a presentation at the 10th European Conference on Numerical Methods in Electromagnetism (NUMELEC 2024), Toulouse (July).
  • Dmitry Ponomarev gave contributed talks at:

    1. Complex Days 2024, Nice (February).
    2. 16th International Conference on Mathematical and Numerical Aspects of Wave Propagation (Waves 2024), Berlin (June – July).

    He also presented a poster at the 4th IMA Conference on Inverse Problems, Bath (September), see Section 6.1.

9.1.5 Research administration

  • Juliette Leblond is a member of the “Commission Administrative Paritaire” of Inria; she was a member of the “Conseil Scientifique” of Inria (until June).

9.2 Teaching - Supervision - Juries

9.2.1 Teaching

  • Sylvain Chevillard gives “Colles” (oral examinations preparing undergraduate students for the competitive entrance examinations of French engineering schools) at the Centre International de Valbonne (CIV) (2 hours per week). In 2024, he contributed to the course Environmental Issues at Polytech Sophia Antipolis, addressed to all Polytech students regardless of their pathway (level L3): the students had to attend several 1-hour lectures chosen from a proposed list, and to attend practical sessions (TP) where they worked hands-on on environmental questions (computation of their personal carbon footprint, introduction to the OpenLCA software for life-cycle analysis, participation in the “Fresque du climat” workshop). Sylvain Chevillard gave a lecture on carbon footprints twice and ran 4 hours of practical sessions with the OpenLCA software.
  • Dmitry Ponomarev conducted tutorials (“Travaux dirigés”) for the Analysis part of the course “Fundamentals of Mathematics 2” at Université Côte d'Azur in the Spring semester.

9.2.2 Supervision

  • PhD in progress: Anass Yousfi, Methods to estimate the net magnetic moment of rocks, since October 2022, advisors: Sylvain Chevillard, Juliette Leblond.
  • PhD in progress: Mubasharah Khalid Omer, Field preprocessing and treatment of complex samples in the paleo-magnetic context, since October 2023, advisors: Juliette Leblond, Dmitry Ponomarev.
  • Internship: Dmytro Dmytrenko, Field preprocessing and treatment of complex samples in the paleo-magnetic context, April-August, advisors: Juliette Leblond, Dmitry Ponomarev.

9.2.3 Juries

  • Juliette Leblond was a member of the “Comité de sélection MCF 26”, Université de Lille (May). She was also a member of the examining committee for the defense of the PhD thesis of Sahar Borzooei, ED STIC, Université Côte d'Azur (September).
  • Laurent Baratchart reported on the PhD thesis of Xinpeng Huang, titled Basis functions meet spatiospectral localization: studies in spherical coordinates, defended at the Bergakademie Freiberg (Germany, September).

9.3 Popularization

9.3.1 Specific official responsibilities in science outreach structures

9.3.2 Productions (articles, videos, podcasts, serious games, ...)

  • Martine Olivi, with Laurence Farhi (Inria Learning Lab) and Benjamin Ninassi (Inria-DGDS), designed the game “Mines de rien, mon smartphone pollue”, which helps people discover the diversity of metals used to manufacture a smartphone and the environmental impact of their extraction. This wooden game was manufactured by SNJL.

9.3.3 Participation in Live events

  • Dmitry Ponomarev gave an “In'Tro” presentation at Inria Université Côte d'Azur describing one of his research topics.
  • “Faites des sciences à Mouans-Sartoux”: Martine Olivi demonstrated the game “Mines de rien, mon smartphone pollue”.

10 Scientific production

10.1 Major publications

10.2 Publications of the year

International journals

International peer-reviewed conferences

  • [18] D. Ponomarev. A Method to Extrapolate the Data for the Inverse Magnetisation Problem with a Planar Sample. IPMS 2024 - Inverse Problems: Modeling and Simulation, Cirkewwa, Malta, May 2024. HAL.
  • [19] D. Ponomarev. Inverse problem in paleomagnetism: Making the most of the measured data. Proceedings of the Complex Systems Academy of Excellence, Fourth Complex Days (Complex Days, vol. 4), Nice, France, 2024. HAL.

National peer-reviewed Conferences

  • [20] D. Ponomarev. A generalised time-evolution model for the sliding punch problem with wear. 16ème Colloque National en Calcul de Structures (CSMA 2024), Hyères, France, May 2024. HAL.

Conferences without proceedings

  • [21] F. Berthoud, M. Olivi and L. Lefèvre. Beaucoup de verrous, peu de leviers ! Sobriété numérique : le cas est grave mais pas désespéré. JRES (Journées réseaux de l'enseignement et de la recherche) 2024, Rennes, France, December 2024. HAL.
  • [22] S. Chevillard, J. Leblond and A. Yousfi. On analytic formulas useful to solve the inverse magnetization problem. 10ème Conférence Européenne sur les Méthodes Numériques en Electromagnétisme (NUMELEC 2024), Toulouse, France, 2024. HAL.
  • [23] D. Ponomarev. Inverse magnetisation problem for paleomagnetism: reconstruction of net magnetisation through asymptotic analysis and field extrapolation. Waves 2024 - 16th International Conference on Mathematical and Numerical Aspects of Wave Propagation, Berlin, Germany, June 2024. HAL.

Reports & preprints

Other scientific publications

  • [27] D. Ponomarev. On some constructive aspects of an inverse problem in paleomagnetism. 4th IMA Conference on Inverse Problems from Theory to Application, Bath, United Kingdom, September 2024. HAL.

10.3 Cited publications

  • [28] D. Appelo, F. Garcia and O. Runborg. WaveHoltz: Iterative solution of the Helmholtz equation via the wave equation. SIAM Journal on Scientific Computing, 42(4), 2020, A1950--A1983.
  • [29] A. Arnold, S. Geevers, I. Perugia and D. Ponomarev. An adaptive finite element method for high-frequency scattering problems with smoothly varying coefficients. Computers & Mathematics with Applications, 109, 2022, 1--14.
  • [30] A. Arnold, S. Geevers, I. Perugia and D. Ponomarev. On the exponential time-decay for the one-dimensional wave equation with variable coefficients. Communications on Pure and Applied Analysis, 21(10), 2022, 3389.
  • [31] P. Asensio. Inverse problems of source localization with applications to EEG and MEG. PhD thesis, Université Côte d'Azur, September 2023. HAL.
  • [32] G. A. Baker and P. Graves-Morris. Padé approximants. Cambridge University Press, 2010.
  • [33] L. Baratchart, A. Borichev and S. Chaabi. Pseudo-holomorphic functions at the critical exponent. Journal of the European Mathematical Society, 18(9), 2016. HAL.
  • [34] L. Baratchart, L. Bourgeois and J. Leblond. Uniqueness results for inverse Robin problems with bounded coefficient. Journal of Functional Analysis, 2016. HAL.
  • [35] L. Baratchart, M. Cardelli and M. Olivi. Identification and rational L^2 approximation: a gradient algorithm. Automatica, 27, 1991, 413--418.
  • [36] L. Baratchart, S. Chevillard, J. Leblond, E. A. Lima and D. Ponomarev. Asymptotic method for estimating magnetic moments from field measurements on a planar grid. HAL preprint: hal-01421157, 2018.
  • [37] L. Baratchart, Y. Fischer and J. Leblond. Dirichlet/Neumann problems and Hardy classes for the planar conductivity equation. Complex Variables and Elliptic Equations, 2014, 41 pages. HAL.
  • [38] L. Baratchart, C. Gerhards and A. Kegeles. Decomposition of L^2-vector fields on Lipschitz surfaces: characterization via null-spaces of the scalar potential. SIAM Journal on Mathematical Analysis, 53(4), 2021, 4096--4117. HAL.
  • [39] L. Baratchart, C. Gerhards, A. Kegeles and P. Menzel. Unique reconstruction of simple magnetizations from their magnetic potential. Inverse Problems, 37(10), September 2021, 105006. HAL.
  • [40] L. Baratchart, D. P. Hardin, E. A. Lima, E. B. Saff and B. Weiss. Characterizing kernels of operators related to thin-plate magnetizations via generalizations of Hodge decompositions. Inverse Problems, 29(1), 2013. URL: https://inria.hal.science/hal-00919261.
  • [41] L. Baratchart and J. Leblond. Hardy approximation to L^p functions on subsets of the circle with 1 ≤ p < ∞. Constructive Approximation, 14, 1998, 41--56.
  • [42] L. Baratchart, J. Leblond, E. A. Lima and D. Ponomarev. Magnetization moment recovery using Kelvin transformation and Fourier analysis. Journal of Physics: Conference Series, 904(1), IOP Publishing, 2017, 012011.
  • [43] L. Baratchart, J. Leblond and J.-P. Marmorat. Sources identification in 3D balls using meromorphic approximation in 2D disks. Electronic Transactions on Numerical Analysis (ETNA), 25, 2006, 41--53.
  • [44] L. Baratchart, J. Leblond and M. Nemaire. Silent sources in L^p and Helmholtz-type decompositions. Unpublished; URL: https://inria.hal.science/hal-03915548v2.
  • [45] L. Baratchart, J. Leblond and J. R. Partington. Hardy approximation to L^∞ functions on subsets of the circle. Constructive Approximation, 12, 1996, 423--435.
  • [46] L. Baratchart, J. Leblond and D. Ponomarev. Solution of a homogeneous version of Love type integral equation in different asymptotic regimes. International Conference on Integral Methods in Science and Engineering, Springer, 2019, 67--79.
  • [47] L. Baratchart, D. Pei and T. Qian. Hardy-Hodge decomposition of vector fields on compact Lipschitz hypersurfaces. In preparation.
  • [48] L. Baratchart and F. Seyfert. An L^p analog to AAK theory for p ≥ 2. Journal of Functional Analysis, 191(1), 2002, 52--122.
  • [49] L. Baratchart, C. Villalobos-Guillén and D. P. Hardin. Inverse potential problems in divergence form for measures in the plane. ESAIM: COCV, 27, 2021.
  • [50] L. Baratchart, C. Villalobos-Guillén, D. Hardin, M. Northington and E. Saff. Inverse Potential Problems for Divergence of Measures with Total Variation Regularization. FoCM, 2020.
  • [51] A. Böttcher and S. M. Grudsky. Toeplitz matrices, asymptotic linear algebra and functional analysis. Vol. 67, Springer, 2000.
  • [52] S. Chaabane, I. Fellah, M. Jaoua and J. Leblond. Logarithmic stability estimates for a Robin coefficient in 2D Laplace inverse problems. Inverse Problems, 20(1), 2004, 49--57. URL: https://dx.doi.org/10.1088/0266-5611/20/1/003.
  • [53] S. Chaabi. Analyse complexe et problèmes de Dirichlet dans le plan : équation de Weinstein et autres conductivités non bornées. PhD thesis, Mathématiques et Informatique de Marseille, 2013.
  • [54] V. L. Coli, L. Gomart, D. F. Pisani, S. X. Cohen, L. Blanc-Féraud, J. Leblond and D. Binder. Micro-computed tomography for discriminating between different forming techniques in ancient pottery: new segmentation method and pore distribution recognition. Archaeometry, 2021. HAL.
  • [55] A. Cooman, F. Seyfert and S. Amari. Estimating unstable poles in simulations of microwave circuits. IMS 2018, Philadelphia, United States, June 2018. HAL.
  • [56] M. Darbas, J. Leblond, J.-P. Marmorat and P.-H. Tournier. Numerical resolution of the inverse source problem for EEG using the quasi-reversibility method. Inverse Problems, 39(11), 2023, 115003. HAL.
  • [57] V. Duval and G. Peyré. Exact support recovery for sparse spikes deconvolution. FoCM, 2015.
  • [58] Y. Fischer. Approximation des classes de fonctions analytiques généralisées et résolution de problèmes inverses pour les tokamaks. PhD thesis, Université Nice Sophia Antipolis, 2011. URL: https://tel.archives-ouvertes.fr/tel-00643239/.
  • [59] A. Friedman and M. Vogelius. Determining cracks by boundary measurements. Indiana Univ. Math. J., 38(3), 1989, 527--556.
  • [60] X. Gao, M. Fraulob and G. Haïat. Biomechanical behaviours of the bone-implant interface: a review. Journal of The Royal Society Interface, 16(156), 2019, 20190259.
  • [61] J. B. Garnett. Bounded analytic functions. Academic Press, 1981.
  • [62] D. R. Glenn, R. R. Fu, P. Kehayias, D. Le Sage, E. A. Lima, B. P. Weiss and R. L. Walsworth. Micrometer-scale magnetic imaging of geological samples using a quantum diamond microscope. Geochemistry, Geophysics, Geosystems, 18(8), 2017, 3254--3267.
  • [63] A. A. Gonchar and E. A. Rakhmanov. Equilibrium distributions and degree of rational approximation of analytic functions. Mat. Sb. (N.S.), 134(176)(3), 1987, 306--352, 447.
  • [64] M. J. Grote and J. H. Tang. On controllability methods for the Helmholtz equation. Journal of Computational and Applied Mathematics, 358, 2019, 306--326.
  • [65] V. Hutson. Asymptotic solutions of integral equations with convolution kernels. Proceedings of the Edinburgh Mathematical Society, 14(1), 1964, 5--19.
  • [66] T. Iwaniec and G. Martin. Geometric function theory and non-linear analysis. Oxford Univ. Press, 2001.
  • [67] C. Knessl and J. B. Keller. Asymptotic properties of eigenvalues of integral equations. SIAM Journal on Applied Mathematics, 51(1), 1991, 214--232.
  • [68] J. Leblond, M. Mahjoub and J. R. Partington. Analytic extensions and Cauchy-type inverse problems on annular domains: stability results. J. Inv. Ill-Posed Problems, 14(2), 2006, 189--204.
  • [69] A. Leonard and T. W. Mullikin. Integral equations with difference kernels on finite intervals. Transactions of the American Mathematical Society, 116, 1965, 465--473.
  • [70] H. A. Lomami, C. Damour, G. Rosi, A.-S. Poudrel, A. Dubory, C.-H. Flouzat-Lachaniette and G. Haiat. Ex vivo estimation of cementless femoral stem stability using an instrumented hammer. Clinical Biomechanics, 76, 2020, 105006.
  • [71] D. S. Lubinsky. The Spurious Side of Diagonal Multipoint Padé Approximants. In: Topics in Classical and Modern Analysis (Applied and Numerical Harmonic Analysis), M. Abell, E. Iacob, A. Stokolos, S. Taylor, S. Tikhonov and J. Zhu, eds., Birkhäuser, 2019.
  • [72] H. Majidian. A comparative study of Filon-type rules for oscillatory integrals. Journal of Numerical Analysis and Approximation Theory, 53(1), March 2024, 130--143.
  • [73] K. Mavreas. An inverse source problem in planetary sciences. Dipole localization in Moon rocks from sparse magnetic data. PhD thesis, Université Côte d'Azur, January 2020. HAL.
  • [74] A. Michel, V.-H. Nguyen, R. Bosc, R. Vayron, P. Hernigou, S. Naili and G. Haiat. Finite element model of the impaction of a press-fitted acetabular cup. Medical & Biological Engineering & Computing, 55, 2017, 781--791.
  • [75] M. Nemaire. Inverse potential problems, with applications to quasi-static electromagnetics. PhD thesis, Université de Bordeaux, March 2023. HAL.
  • [76] O. G. Parfënov. Estimates for singular numbers of the Carleson embedding operator. Mat. Sb. (N.S.), 4, 501--518.
  • [77] D. Pelinovsky and D. Ponomarev. Justification of a nonlinear Schrödinger model for laser beams in photopolymers. Zeitschrift für angewandte Mathematik und Physik, 65, 2014, 405--433.
  • [78] V. Peller. Hankel Operators and their Applications. Springer, 2003.
  • [79] D. Ponomarev. Asymptotic solution to convolution integral equations on large and small intervals. Proceedings of the Royal Society A, 477(2248), 2021, 20210025.
  • [80] D. Ponomarev. Generalised model of wear in contact problems: the case of oscillatory load. International Conference on Integral Methods in Science and Engineering, Springer, 2023, 255--267.
  • [81] D. Ponomarev. Magnetisation Moment of a Bounded 3D Sample: Asymptotic Recovery from Planar Measurements on a Large Disk Using Fourier Analysis. HAL preprint: hal-03813559, 2023.
  • [82] L. Rondin. Réalisation d'un magnétomètre à centre coloré NV du diamant. PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012.
  • [83] S. K. Smirnov. Decomposition of solenoidal vector charges into elementary solenoids and the structure of normal one-dimensional currents. Algebra i Analiz, (4), 1993, 206--238; English transl.: St. Petersburg Math. Journal, 5(4), 1994, 841--867.
  • [84] H. Stahl. Extremal domains associated with an analytic function I. Complex Variables Theory Appl., 4(4), 1985, 311--324. URL: https://doi.org/10.1080/17476938508814117.
  • [85] E. M. Stein. Harmonic Analysis. Princeton University Press, 1993.
  • [86] C. C. Stolk. A time-domain preconditioner for the Helmholtz equation. SIAM Journal on Scientific Computing, 43(5), 2021, A3469--A3502.
  • [87] A. Suàrez and R. Quéré. Stability analysis of nonlinear microwave circuits. Artech House, 2003.
  • [88] L. Toraille. Utilisation de centres NV comme capteurs de champs magnétiques à haute pression dans des cellules à enclumes de diamant. PhD thesis, Université Paris Saclay (COmUE), 2019.
  • [89] J. L. Walsh. Interpolation and Approximation by Rational Functions in the Complex Domain. American Mathematical Society Colloquium Publications, Vol. XX, American Mathematical Society, Providence, R.I., 1965, x+405 pp.
  • [90] T. Wolff. Counterexamples with harmonic gradients in 𝐑^3. Essays on Fourier analysis in honor of Elias M. Stein, Math. Ser. 42, Princeton Univ. Press, 1995, 321--384.
  • [91] Y. Zaky, N. Fortino, J.-Y. Dauvignac, F. Seyfert, M. Olivi and L. Baratchart. Comparison of SEM Methods for Poles Estimation from Scattered Field by Canonical Objects. Renaissance meets advancing technology, Florence, Italy, September 2020, 6 pages. URL: https://hal.science/hal-02941637.
  1. Under the assumption f ∈ H^∞ + C(𝕋) ⊂ L^∞(𝕋) for p = ∞. In the case p = 1, partial results are known but computational issues remain open.
  2. There is a subtle difference here between dimension 2 and higher. Indeed, a function holomorphic on a plane domain is defined by its non-tangential limit on a boundary subset of positive linear measure, but there are non-constant harmonic functions in the 3-D ball, C^1 up to the boundary sphere, yet having vanishing gradient on a subset of positive measure of the sphere. Such a “bad” subset, however, cannot have interior points on the sphere.
  3. Though harmonic function theories on half-spaces and balls are equivalent through Kelvin transforms, conformal maps are severely restricted when n > 2, so that general domains Ω can no longer be normalized; related Hardy spaces have not been much studied so far.
  4. Note that for constant conductivities σ, we are back to the above case of the Laplacian, and that for smooth enough σ, the conductivity PDE can be reformulated as a stationary Schrödinger equation.
  5. Observe that Figure 1 actually describes a setup related to SEEG, see below, and distributed source terms m, in a more general geometrical setting.