The research activity of our team is dedicated to the design, analysis and implementation of efficient numerical methods for solving inverse and shape/topological optimization problems, possibly including system uncertainties, in connection with acoustics, electromagnetism, elastodynamics, diffusion, and fluid mechanics.

Targeted practical applications include radar and sonar, bio-medical imaging, non-destructive testing, structural design, composite materials, diffusion magnetic resonance imaging, and fluid-driven applications in the aerospace and energy fields.

Roughly speaking, the model problem consists in determining information on, or optimizing, the geometry (topology) and the physical properties of unknown targets from given constraints or measurements, for instance measurements of diffracted waves or induced magnetic fields. Moreover, system uncertainties can be systematically taken into account to provide a measure of confidence in the numerical prediction.

In general, these problems are non-linear. The inverse problems are moreover severely ill-posed and therefore require special attention from the regularization point of view, as well as non-trivial adaptations of classical optimization methods.

Our scientific research interests revolve around the development of the following themes.

The research activity of our team is dedicated to the design, analysis and implementation of efficient numerical methods for solving inverse and shape/topological optimization problems, possibly including system uncertainties, in connection with wave imaging, structural design, non-destructive testing and medical imaging modalities. We are particularly interested in the development of fast methods that are suited for real-time applications and/or large-scale problems. These goals require working on both the physical and the mathematical models involved, as well as solid expertise in the related numerical algorithms. A part of the research activity is also devoted to taking system uncertainties into account when solving inverse/optimization problems.

At the interface of physics, mathematics, and computer science, Uncertainty Quantification (UQ) focuses on the development of frameworks and methods to characterize uncertainties in predictive computations. Uncertainties and errors arise at different stages of the numerical simulation. First, errors are introduced by the physical simplifications made in the mathematical modeling of the system under investigation; other errors come from the numerical resolution of the mathematical model, due in particular to finite discretization and computations with finite accuracy and tolerance; finally, errors are due to the limited knowledge of the input quantities (parameters) appearing in the definition of the numerical model being solved.

This section intends to give a general overview of our research interests and themes. We choose to present them through the specific academic example of inverse scattering problems (from inhomogeneities), which is representative of the foreseen developments in both inversion and (topological) optimization methods. The practical problem is to identify an inclusion from measurements of the diffracted waves that result from the interaction of the sought inclusion with some (incident) waves sent into the probed medium. Typical applications include biomedical imaging, where micro-waves are used to probe the presence of pathological cells, and imaging of urban infrastructures, where ground penetrating radars (GPR) are used to locate buried facilities such as pipelines or waste deposits. Such applications require, in particular, fast and reliable algorithms.

By “imaging” we refer to the inverse problem where the concern is only the location and the shape of the inclusion, while “identification” may also indicate retrieving information on the inclusion's physical parameters.

Both problems (imaging and identification) are non-linear and ill-posed (they lack stability with respect to measurement errors unless careful constraints are added). Moreover, the unique determination of the geometry or the coefficients is not guaranteed in general if sufficient measurements are not available. As an example, in the case of anisotropic inclusions, one can show that an appropriate set of data uniquely determines the geometry but not the material properties.

These theoretical considerations (uniqueness, stability) are not only important for understanding the mathematical properties of the inverse problem; they also guide the choice of appropriate numerical strategies (which information can be stably reconstructed) and the design of appropriate regularization techniques. Moreover, uniqueness proofs are in general constructive, i.e. they implicitly contain a numerical algorithm to solve the inverse problem, hence their importance for practical applications. The sampling methods introduced below are one example of such algorithms.

A large part of our research activity is dedicated to numerical methods applied to the first type of inverse problems, where only the geometrical information is sought. In its general setting the inverse problem is very challenging, and no method can provide a universally satisfactory solution (respecting the balance between cost, precision and stability). This is why, in the majority of practically employed algorithms, some simplification of the underlying mathematical model is used, according to the specific configuration of the imaging experiment. The most popular simplifications are geometric optics (the Kirchhoff approximation) for high frequencies and weak scattering (the Born approximation) for small contrasts or small obstacles. They give full satisfaction for a wide range of applications, as attested by the large success of existing imaging devices (radar, sonar, ultrasound, X-ray tomography, etc.) that rely on one of these approximations.

In most cases, the simplification used results in a linearization of the inverse problem and is therefore valid only if the latter is weakly non-linear. The development of simplified models and the improvement of their efficiency is still a very active research area. With that perspective, we are particularly interested in deriving and studying higher-order asymptotic models associated with small geometrical parameters such as small obstacles, thin coatings, wires, and periodic media.
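As a toy illustration of such a linearization, the sketch below (a hypothetical 1-D Helmholtz setting; the grid, wavenumber and receiver layout are made up for illustration) assembles the Born-linearized forward operator mapping a small contrast to scattered data, and inverts it with standard Tikhonov regularization:

```python
import numpy as np

# 1-D Born approximation sketch (illustrative only): scattered field at
# receivers x_m from a compactly supported contrast q on a grid y_j,
#   u_s(x_m) ~ k^2 * sum_j G(x_m, y_j) q(y_j) u_inc(y_j) * dy,
# with the 1-D Helmholtz Green function G(x, y) = exp(i k |x - y|) / (2 i k).
k = 10.0                                   # wavenumber
y = np.linspace(0.0, 1.0, 200)             # discretized scatterer support
dy = y[1] - y[0]
xm = np.linspace(-2.0, -1.0, 60)           # receiver line, left of the scatterer

G = np.exp(1j * k * np.abs(xm[:, None] - y[None, :])) / (2j * k)
u_inc = np.exp(1j * k * y)                 # incident plane wave
A = k**2 * G * u_inc[None, :] * dy         # linearized (Born) forward operator

q_true = np.where((y > 0.4) & (y < 0.6), 0.05, 0.0)   # small contrast bump
data = A @ q_true                          # synthetic data, Born model itself

# Tikhonov-regularized normal equations: (A^* A + alpha I) q = A^* data
alpha = 1e-10
q_rec = np.linalg.solve(A.conj().T @ A + alpha * np.eye(len(y)),
                        A.conj().T @ data)

rel_res = np.linalg.norm(A @ q_rec - data) / np.linalg.norm(data)
```

In this noiseless toy the regularized solution reproduces the data almost exactly; in practice the linearization error and the measurement noise are what limit the attainable resolution.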

A larger part of our research activity is dedicated to algorithms that avoid the use of such approximations and that are efficient where classical approaches fail, i.e., roughly speaking, when the non-linearity of the inverse problem is sufficiently strong. This type of configuration is motivated by the applications mentioned below, and occurs as soon as the geometry of the unknown medium generates non-negligible multiple scattering effects (multiply-connected and closely spaced obstacles) or when the frequency used lies in the so-called resonance region (wavelength comparable to the size of the sought medium). It is therefore much more difficult to deal with and requires new approaches. Our ideas for tackling these problems are mainly motivated and inspired by recent advances in shape and topological optimization methods and in so-called sampling methods.

Sampling methods are fast imaging solvers adapted to multi-static data (multiple receiver-transmitter pairs) at a fixed frequency. Even though they do not use any linearization of the forward model, they rely on computing the solutions of a set of linear problems of small size, which can be performed in a completely parallel procedure. Our team already has solid expertise in these methods applied to electromagnetic 3-D problems. The success of such approaches lies in their ability to provide a relatively fast algorithm for solving 3-D problems without any need for a priori knowledge of the physical parameters of the targets. These algorithms solve only the imaging problem, in the sense that only the geometrical information is provided.
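To convey the flavor of these solvers, here is a small self-contained sketch (a hypothetical 2-D far-field setting; the weak-scattering synthetic data, wavenumber and geometry are all made up for illustration) of a linear sampling-type indicator: for each sampling point z one solves a small regularized linear system involving the measured multi-static matrix, and the reciprocal norm of the solution flags points inside the scatterer:

```python
import numpy as np

# Directions of incidence/observation on the unit circle.
N = 32
theta = 2 * np.pi * np.arange(N) / N
d = np.stack([np.cos(theta), np.sin(theta)], axis=1)
k = 10.0

# Synthetic multi-static far-field matrix F: an extended scatterer is modeled
# (weak-scattering toy model) as a cloud of point scatterers filling a disc.
center, radius = np.array([0.3, 0.2]), 0.3
g = np.linspace(-radius, radius, 9)
pts = np.array([center + [u, v] for u in g for v in g
                if u * u + v * v <= radius**2])
F = np.zeros((N, N), dtype=complex)
for xj in pts:
    F += np.outer(np.exp(-1j * k * d @ xj), np.exp(1j * k * d @ xj))

def indicator(z, alpha=1e-8):
    """Solve the Tikhonov-regularized far-field equation F g = phi_z and
    return 1/||g||: large inside the scatterer, small outside."""
    phi_z = np.exp(-1j * k * d @ z)
    gz = np.linalg.solve(F.conj().T @ F + alpha * np.eye(N),
                         F.conj().T @ phi_z)
    return 1.0 / np.linalg.norm(gz)

I_in = indicator(center)                  # sampling point inside the scatterer
I_out = indicator(np.array([1.5, 1.5]))   # sampling point far outside
```

Each sampling point requires only one small linear solve, independent of all the others, which is what makes the method fast and embarrassingly parallel; note also that no physical parameter of the scatterer enters the indicator.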

Despite the large efforts already spent on the development of this type of methods, from both the algorithmic and the theoretical points of view, numerous questions remain open. These attractive new algorithms also suffer from a lack of experimental validation, due to their relatively recent introduction. We would like to invest in this direction by developing collaborations with engineering research groups that have experimental facilities. From the practical point of view, the most serious potential limitation of sampling methods is the need for a large amount of data to achieve reasonable accuracy. On the other hand, optimization methods do not suffer from this constraint, but they require a good initial guess to ensure convergence and reduce the number of iterations. It therefore seems natural to combine the two classes of methods in order to calibrate the balance between cost and precision.

Among the various shape optimization methods, the Level Set method seems particularly suited for such a coupling. First, because it shares a similar mechanism with sampling methods: the geometry is captured as a level set of an “indicator function” computed on a Cartesian grid. Second, because neither method requires any a priori knowledge of the topology of the sought geometry. Beyond the choice of a particular method, the main question is how the coupling can be achieved. Obvious strategies consist in using one method to pre-process (initialization) or post-process (find the level set) the other. One can also think of more elaborate strategies, where for instance a sampling method is used to optimize the choice of the incident wave at each iteration step. The latter point is closely related to the design of so-called “focusing incident waves” (which are, for instance, the basis of applications of the time-reversal principle). In the frequency regime, these incident waves can be constructed from the eigenvalue decomposition of the data operator used by sampling methods. The theoretical and numerical investigation of these aspects is still not complete for electromagnetic or elastodynamic problems.
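A minimal sketch of the underlying mechanism (purely illustrative, not our actual optimization loop): the shape is the region where a level-set function is negative on a Cartesian grid, and moving it with normal speed V amounts to one explicit step of the Hamilton-Jacobi equation phi_t + V |grad phi| = 0. A proper implementation would use upwind differencing and reinitialization; central differences suffice here for one small step on a signed-distance function:

```python
import numpy as np

# Level-set sketch: the geometry is {phi < 0} on a Cartesian grid; one
# explicit time step of  phi_t + V |grad phi| = 0  moves the boundary with
# normal speed V (V > 0 pushes the front outward).
n = 101
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
h = xs[1] - xs[0]

phi = np.sqrt(X**2 + Y**2) - 0.5      # signed distance to a circle, radius 0.5

def advance(phi, V, dt):
    """One explicit step of the level-set advection equation."""
    gx, gy = np.gradient(phi, h)
    return phi - dt * V * np.sqrt(gx**2 + gy**2)

area_before = np.sum(phi < 0) * h**2   # ~ pi * 0.5^2
phi = advance(phi, V=1.0, dt=0.05)     # front moves out by ~ V*dt = 0.05
area_after = np.sum(phi < 0) * h**2    # ~ pi * 0.55^2
```

The point relevant to the coupling is that phi lives on the same kind of Cartesian grid as a sampling indicator function, so a thresholded sampling indicator can directly initialize phi, and conversely the final phi can be read as a geometry without any parametrization of the topology.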

Other topological optimization methods, like the homogenization method or the topological gradient method, can also be used, each providing particular advantages in specific configurations. The development of these methods is well suited to inverse problems and provides substantial advantages compared with classical shape optimization methods based on boundary variation. Their application to inverse problems has not been fully investigated. The efficiency of these optimization methods can also be increased in adequate asymptotic configurations. For instance, the small-amplitude homogenization method can be used as an efficient relaxation method for the inverse problem in the presence of small contrasts. On the other hand, the topological gradient method has been shown to perform well in localizing small inclusions with only one iteration.

A broader perspective is the extension of the above-mentioned techniques to time-dependent cases. Taking into account time-domain data is important for many practical applications, such as imaging in cluttered media, the design of absorbing coatings, or crashworthiness in the case of structural design.

For the identification problem, one would also like to obtain information on the physical properties of the targets. Optimization methods are of course a tool of choice for these problems. However, in some applications only qualitative information is needed, and obtaining it more cheaply can be done using asymptotic theories combined with sampling methods. We also refer here to the use of so-called transmission eigenvalues as qualitative indicators for the non-destructive testing of dielectrics.

We are also interested in parameter identification problems arising in diffusion-type problems. Our research here is mostly motivated by applications to the imaging of biological tissues with the technique of Diffusion Magnetic Resonance Imaging (DMRI). Roughly speaking, DMRI gives a measure of the average distance travelled by water molecules in a given medium, and can provide useful information on the cellular structure and on structural changes when the medium is a biological tissue. In particular, we would like to infer from DMRI measurements the changes in the cellular volume fraction occurring under various physiological or pathological conditions, as well as the average cell size in the case of tumor imaging. The main challenges here are 1) to correctly model the measured signals using diffusive-type time-dependent PDEs, 2) to numerically handle the complexity of the tissues, and 3) to use the first two to identify physically relevant parameters from the measurements. For the last point we are particularly interested in constructing reduced models of the multiple-compartment Bloch-Torrey partial differential equation using homogenization methods.

The Team devotes a large effort to the formulation, implementation and validation of numerical methods for using scientific computing to drive experiments and exploit the available data (coming from models, simulations and experiments) while taking system uncertainty into account. The Team is also invested in exploiting the intimate relationship between optimization and UQ to make Optimization Under Uncertainty (OUU) tractable. A part of these activities is dedicated to the simulation of high-fidelity models for fluids, in three main fields: aerospace, energy and environment.

The Team is working on developing original UQ representations and algorithms to deal with complex and large-scale models having high-dimensional input parameters with complex influences. We organize our core research activities along different methodological UQ developments related to the challenges discussed above. Obviously, some efforts are shared by different initiatives or projects, and some of them include the continuous improvement of the non-intrusive methods constituting our software libraries. These actions are not detailed in the following, in order to focus the presentation on more innovative aspects, but we mention nonetheless the continuous development and incorporation into our libraries of advanced sparse grid methods, sparsity-promoting strategies and low-rank methods.

An effort is dedicated to the efficient construction of surrogate models that are central in both forward and backward UQ problems, aiming at large-scale simulations relevant to engineering applications, with high dimensional input parameters.

Sensitivity analyses and other forward UQ problems (e.g., the estimation of failure probabilities or rare events) depend on the input uncertainty model. Most often, for convenience or because of a lack of data, the uncertain inputs are assumed to be independent. In the Team, we are investigating approaches dedicated to the construction of uncertainty models that integrate the available information and expert knowledge in a consistent and objective fashion. To this end, several mathematical frameworks are already available, e.g., the maximum entropy principle, likelihood maximization and moment matching methods, but their application to real engineering problems remains scarce, and their systematic use raises multiple challenges, both to construct the uncertainty model and to solve the related UQ problems (forward and backward). Because of the importance of the available data and expertise in building the model, the contributions of the Team in these areas depend on the needs and demands of end-users and industrial partners.

To mitigate computational complexity, the Team is exploring multi-fidelity approaches in the context of expensive simulations. We combine predictions of models with different levels of discretization and physical simplification to construct, at a controlled cost, reliable surrogate models of simulation outputs, or directly of objective functions and possibly constraints, to enable the resolution of robust optimization and stochastic inverse problems. Again, one difficulty to be addressed by the Team is the design of the computer experiments so as to obtain the best multi-fidelity model at the lowest cost (or for a prescribed computational budget), with respect to the end use of the model. The last point is particularly challenging, as it calls for accuracy for output values that are usually unknown a priori but must be estimated as the model construction proceeds.
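The additive-correction idea behind such surrogates can be sketched in a few lines (illustrative only, on the classical one-dimensional Forrester benchmark pair; real applications involve high-dimensional inputs and Gaussian-process or sparse-grid surrogates rather than piecewise-linear interpolation): a densely sampled cheap model is corrected by interpolating its discrepancy with a handful of expensive high-fidelity evaluations:

```python
import numpy as np

def f_hi(x):
    """High-fidelity model: expensive, only a few evaluations affordable."""
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def f_lo(x):
    """Low-fidelity model: biased, but cheap enough to sample densely."""
    return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

x_hi = np.array([0.0, 0.4, 0.6, 1.0])    # high-fidelity design points
delta = f_hi(x_hi) - f_lo(x_hi)          # discrepancy at those points

x = np.linspace(0.0, 1.0, 401)
# Multi-fidelity surrogate: cheap model everywhere + interpolated correction.
pred_mf = f_lo(x) + np.interp(x, x_hi, delta)
# Same high-fidelity budget, but without the cheap model's help.
pred_hi_only = np.interp(x, x_hi, f_hi(x_hi))

err_mf = np.sqrt(np.mean((pred_mf - f_hi(x)) ** 2))
err_hi_only = np.sqrt(np.mean((pred_hi_only - f_hi(x)) ** 2))
```

The correction term is smoother than the high-fidelity response itself, so it is the easier function to learn from the few expensive samples; this is the basic reason multi-fidelity surrogates beat single-fidelity ones at a fixed budget.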

Conventional radar imaging techniques (ISAR, GPR, etc.) use backscattering data to image targets. The commonly used inversion algorithms are mainly based on the use of weak scattering approximations such as the Born or Kirchhoff approximation leading to very simple linear models, but at the expense of ignoring multiple scattering and polarization effects. The success of such an approach is evident in the wide use of synthetic aperture radar techniques.

However, the use of backscattering data makes 3-D imaging a very challenging problem (it is not even well understood theoretically) and as pointed out by Brett Borden in the context of airborne radar: “In recent years it has become quite apparent that the problems associated with radar target identification efforts will not vanish with the development of more sensitive radar receivers or increased signal-to-noise levels. In addition it has (slowly) been realized that greater amounts of data - or even additional “kinds” of radar data, such as added polarization or greatly extended bandwidth - will all suffer from the same basic limitations affiliated with incorrect model assumptions. Moreover, in the face of these problems it is important to ask how (and if) the complications associated with radar based automatic target recognition can be surmounted.” This comment also applies to the more complex GPR problem.

Our research themes will incorporate the development, analysis and testing of several novel methods, such as sampling methods, level set methods or topological gradient methods, for ground penetrating radar applications (imaging of urban infrastructures, landmine detection, underground waste-deposit monitoring, etc.) using multistatic data.

Among emerging medical imaging techniques, we are particularly interested in those using low to moderate frequency regimes. These include Microwave Tomography, Electrical Impedance Tomography and the closely related Optical Tomography technique. They all have the advantage of being potentially safe and relatively cheap modalities, and they can also be used in complement to well-established techniques such as X-ray computed tomography or Magnetic Resonance Imaging.

With these modalities, tissues are differentiated and, consequently, can be imaged based on differences in dielectric properties (some recent studies have shown that the dielectric properties of biological tissues can be a strong indicator of the tissue's functional and pathological conditions, for instance tissue blood content, ischemia, infarction, hypoxia, malignancies, edema and others). The main challenge for these modalities is to build a 3-D imaging algorithm capable of treating multi-static measurements to provide real-time images with the highest reasonably expected resolution, and in a sufficiently robust way.

Another important biomedical application is brain imaging. We are for instance interested in the use of EEG and MEG techniques as complementary tools to MRI. They are applied, for instance, to localize epileptic centers or active zones (functional imaging). Here the problem is different and consists in performing passive imaging: the epileptic centers act as electrical sources, and imaging is performed from measurements of the induced currents. Incorporating the structure of the skull is essential for improving the resolution of the imaging procedure. Doing this in a reasonably fast manner is still an active research area, and the use of asymptotic models offers a promising way to address this issue.

One challenging problem in this vast area is the identification and imaging of defects in anisotropic media. For instance, this problem is of great importance in aeronautic construction due to the growing use of composite materials. It also arises in applications linked with the evaluation of wood quality, such as locating knots in timber in order to optimize timber-cutting in sawmills, or evaluating wood integrity before cutting trees. The anisotropy of the propagation medium renders the analysis of diffracted waves more complex, since one cannot rely solely on the use of backscattered waves. Another difficulty comes from the fact that the micro-structure of the medium is generally not well known a priori.

Our concern will focus on the determination of qualitative information on the size of defects and their physical properties, rather than a complete imaging, which for anisotropic media is in general impossible. For instance, in the case of a homogeneous background, one can link the size of the inclusion and the index of refraction to the first eigenvalue of the so-called interior transmission problem. These eigenvalues can be determined from the measured data and a rough localization of the defect. Our goal is to extend this kind of idea to cases where both the propagation medium and the inclusion are anisotropic. The generalization to the case of cracks or screens also has to be investigated.

In the context of nuclear waste management, many studies are conducted on the possibility of storing waste in a deep geological clay layer. To assess the reliability of such a storage without leakage, it is necessary to have precise knowledge of the porous medium parameters (porosity, tortuosity, permeability, etc.). The large range of space and time scales involved in this process requires a high degree of precision as well as tight bounds on the uncertainties. Many physical experiments are conducted in situ, designed to provide data for parameter identification. For example, the determination of the damaged zone (caused by excavation) around the repository area is of paramount importance, since microcracks yield drastic changes in the permeability. Level set methods are a tool of choice for characterizing this damaged zone.

In biological tissues, water is abundant, and magnetic resonance imaging (MRI) exploits the magnetic property of the nucleus of the water proton. The imaging contrast (the variations in the grayscale of an image) in standard MRI can come from either the proton density, the T1 (spin-lattice) relaxation, or the T2 (spin-spin) relaxation, and the contrast in the image gives some information on the physiological properties of the biological tissue at different physical locations of the sample. The resolution of MRI is on the order of millimeters: the grayscale value shown in an imaging pixel represents the volume-averaged value taken over all the physical locations contained in that pixel.

In diffusion MRI, the image contrast comes from a measure of the average distance the water molecules have moved (diffused) during a certain amount of time. The Pulsed Gradient Spin Echo (PGSE) sequence is a commonly used sequence of applied magnetic fields to encode the diffusion of water protons. The term 'pulsed' means that the magnetic fields are short in duration, and the term 'gradient' means that the magnetic fields vary linearly in space along a particular direction. First, the water protons in the tissue are labelled with a nuclear spin precession frequency that varies as a function of the physical positions of the water molecules, via the application of a pulsed (short in duration, lasting on the order of ten milliseconds) magnetic field. Because the precession frequencies of the water molecules vary, the signal, which measures the aggregate phase of the water molecules, is reduced due to phase cancellations. Some time (usually tens of milliseconds) after the first pulsed magnetic field, another pulsed magnetic field is applied to reverse the spins of the water molecules. The time between the applications of the two pulsed magnetic fields is called the 'diffusion time'. If the water molecules have not moved during the diffusion time, the phase dispersion is reversed, hence the signal loss is also reversed; the signal is said to be refocused. However, if the molecules have moved during the diffusion time, the refocusing is incomplete and the signal detected by the MRI scanner is weaker than if the water molecules had not moved. This lack of complete refocusing is called the signal attenuation and is the basis of the image contrast in DMRI. Pixels showing more signal attenuation are associated with larger water displacements during the diffusion time, which may be linked to physiological factors such as higher cell membrane permeability, larger cell sizes, or a higher extra-cellular volume fraction.

We model the nuclear magnetization of the water protons in a sample subject to diffusion-encoding magnetic fields by a multiple-compartment Bloch-Torrey partial differential equation, which is a diffusive-type time-dependent PDE. The DMRI signal is the integral of the solution of the Bloch-Torrey PDE. In a homogeneous medium, the intrinsic diffusion coefficient D appears as the slope of the semi-log plot of the signal (in appropriate units). However, because during typical scanning times (50-100 ms) water molecules have had time to travel a diffusion distance that is long compared to the average size of the cells, the slope of the semi-log plot of the signal is in fact a measure of an 'effective' diffusion coefficient. In DMRI applications, this measured quantity is called the 'apparent diffusion coefficient' (ADC) and provides the most commonly used form of image contrast for DMRI. This ADC is closely related to the effective diffusion coefficient obtainable from mathematical homogenization theory.
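As a minimal numerical illustration of this semi-log relationship (synthetic, noiseless values with a typical order of magnitude for water in tissue; not data from our experiments): for free diffusion the signal obeys S(b) = S0 exp(-b D), so fitting a line to ln S versus the b-value recovers D as minus the slope, which is exactly how an ADC map is computed pixel-wise:

```python
import numpy as np

# ADC estimation sketch: for free (Gaussian) diffusion the DMRI signal is
#   S(b) = S0 * exp(-b * D),
# so ln S is linear in the b-value and D is minus the slope of the
# semi-log plot.  Values below are typical orders of magnitude only.
D_true = 2.0e-3                            # diffusion coefficient, mm^2/s
S0 = 1.0                                   # signal without diffusion weighting
b = np.linspace(0.0, 1000.0, 11)           # b-values, s/mm^2
S = S0 * np.exp(-b * D_true)               # noiseless synthetic signal

slope, intercept = np.polyfit(b, np.log(S), 1)
adc = -slope                               # apparent diffusion coefficient
```

In a heterogeneous tissue the same fit still produces a number, but that number is the 'apparent' coefficient described above rather than the intrinsic D, which is precisely what makes its interpretation a homogenization question.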

Specific actions are devoted to the problem of atmospheric reentry simulations. We focus on several aspects: i) the development of innovative algorithms improving the prediction of hypersonic flows and including system uncertainties; ii) the application of these methods to the atmospheric reentry of space vehicles for the control and optimization of the trajectory; iii) debris reentry, which is of fundamental importance for NASA, CNES and ESA. Several works have already been initiated with funding from CNES, Thales, and ASL. An ongoing activity concerns the design of the Thermal Protection System (TPS) that shields the spacecraft from the aerothermal heating generated by friction at the surface of the vehicle. The TPS is usually composed of different classes of materials, depending on the mission and the planned trajectory. One major issue is to model the material response accurately in order to ensure a safe design. High-fidelity material modeling for ablative materials has been developed by NASA, but a lot of work is still needed concerning the assessment of physical and modeling uncertainties during the design process. Our objective is to set up a predictive numerical tool to reliably estimate the response of ablative materials under different aerothermal conditions.

An important effort is dedicated to the simulation of fluids featuring complex thermodynamic behavior, in the context of two distinct projects: the VIPER project, funded by the Aquitaine Region, and a project with CWI (Scientific Computing Group). Dense gases (DGs) are single-phase vapors operating at temperature and pressure conditions close to the saturation curve. The interest in studying the complex dynamics of compressible dense gas flows comes from the potential technological advantages of using these fluids in energy conversion cycles, such as Organic Rankine Cycles (ORCs), which use dense gases as energy converters for biomass fuels and for low-grade heat from geothermal or industrial waste heat sources. Since these fluids feature large uncertainties in their estimated thermodynamic properties (critical properties, acentric factor, etc.), a meaningful numerical prediction of the performance must necessarily take these uncertainties into account. Other sources of uncertainty include, but are not limited to, the inlet boundary conditions, which are often unknown in dense gas applications. Moreover, a robust optimization must also include the more generic uncertainty introduced by the machining tolerance in the construction of the turbine blades.

L. Audibert, M. Bonazzoli, H. Haddar, T. A. Vu

The aim of this work is to analyze the convergence of the gradient descent method for solving inverse problems in which the corresponding forward problem is solved iteratively, by fixed-point or Krylov subspace methods, preconditioned with domain decomposition techniques. In particular, we are interested in the case where the inner iterations are incomplete, i.e. stopped before convergence is achieved. One-shot methods, which iterate at the same time on the forward problem solution and on the inverse problem unknown, have been studied in the optimization literature, where strong convexity is usually assumed to ensure convergence. In the framework of inverse problems, such assumptions do not hold in general. As a first step, the analysis is being performed in the case of linear inverse problems.
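A toy sketch of this setting (our analysis is more general; here a hypothetical small linear problem, with plain Jacobi as the inexact inner solver and warm starts playing the role of the one-shot coupling): the forward and adjoint states are updated by only a few incomplete inner iterations per gradient step, yet the outer iteration still converges to the true unknown:

```python
import numpy as np

# One-shot flavor on a linear inverse problem: recover p from y = A^{-1} p,
# i.e. minimize 0.5 * ||A^{-1} p - y||^2, where every application of A^{-1}
# (forward and adjoint solves) is replaced by a few Jacobi iterations that
# are warm-started from the previous outer iterate.
n = 15
A = 2.0 * np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))  # SPD, diag-dominant
d = np.diag(A)

p_true = np.sin(np.linspace(0.0, np.pi, n))
y = np.linalg.solve(A, p_true)           # exact synthetic data

def jacobi_steps(x, b, k):
    """k plain Jacobi iterations for A x = b, warm-started at x (incomplete)."""
    for _ in range(k):
        x = x + (b - A @ x) / d
    return x

p = np.zeros(n)          # inverse-problem unknown
u = np.zeros(n)          # forward state, kept across outer iterations
w = np.zeros(n)          # adjoint state, kept across outer iterations
step = 0.5
for _ in range(500):
    u = jacobi_steps(u, p, 5)            # incomplete forward solve:  A u ~ p
    w = jacobi_steps(w, u - y, 5)        # incomplete adjoint solve (A symmetric)
    p = p - step * w                     # gradient step on the unknown

rel_err = np.linalg.norm(p - p_true) / np.linalg.norm(p_true)
```

Because the inner solves are warm-started, their accuracy improves automatically as the outer iterates settle, so the coupled iteration converges to the exact solution even though no individual forward solve is ever completed; quantifying when this holds without strong convexity is the object of the analysis.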

L. Audibert, L. Chesnel, H. Haddar, K. Napal

We consider the problem of imaging a crack network embedded in a homogeneous background from measured multi-static far field data generated by acoustic plane waves. We propose two novel approaches that can be seen as extensions of linear sampling-type methods and that provide indicator functions sensitive to local crack densities. The first approach uses multiple-frequency data to compute spectral signatures associated with artificially embedded localized obstacles. The second approach also exploits the idea of incorporating an artificial background, but uses data at a single frequency. The indicator function is built using a concept similar to that of differential sampling methods: compare the solution of the interior transmission problem for the healthy inclusion with the one for the inclusion with embedded cracks. The performance of the methods is tested and discussed on synthetic examples, and the numerical results are compared with those obtained using the classical factorization method [1].

H. Haddar and F. Pourahmadian

Differential evolution indicators are introduced for 3D spatiotemporal imaging of micromechanical processes in complex materials, where progressive variations due to manufacturing and/or aging are housed in a highly scattering background of a priori unknown or uncertain structure. In this vein, a three-tier imaging platform is established where: (1) the domain is periodically (or continuously) subject to illumination and sensing in an arbitrary configuration; (2) sequential sets of measured data are deployed to distill segment-wise scattering signatures of the domain's internal structure through carefully constructed, non-iterative solutions to the scattering equation; and (3) the resulting solution sequence is then used to rigorously construct an imaging functional carrying appropriate invariance with respect to the unknown stationary components of the background, e.g., pre-existing interstitial boundaries and bubbles. This gives birth to differential indicators that specifically recover the 3D support of micromechanical evolution within a network of unknown scatterers. The direct scattering problem is formulated in the frequency domain, where the background is comprised of a random distribution of monolithic fragments. The constituents are connected via highly heterogeneous interfaces of unknown elasticity and dissipation, which are subject to spatiotemporal evolution. The support of the internal boundaries is sequentially illuminated by a set of incident waves, and the thus-induced scattered fields are captured over a generic observation surface. The performance of the proposed imaging indicator is illustrated through a set of numerical experiments for the spatiotemporal reconstruction of progressive damage zones featuring randomly distributed cracks and bubbles [19].

H. Haddar and X. Liu

We develop a factorization method to obtain explicit characterization of a (possibly non-convex) impedance scattering object from measurements of time-dependent causal scattered waves in the far field regime. In particular, we prove that far fields of solutions to the wave equation due to particularly modified incident waves characterize the obstacle by a range criterion involving the square root of the time derivative of the corresponding far field operator. Our analysis makes essential use of a coercivity property of the solution of the initial boundary value problem for the wave equation in the Laplace domain. This forces us to consider this particular modification of the far field operator. The latter, in fact, can be chosen arbitrarily close to the true far field operator given in terms of physical measurements. We provide validating numerical examples in 2D on synthetic data, generated using an FDTD solver with PML 15.
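To fix ideas, a generic frequency-domain version of the factorization method's range criterion can be sketched as below (the time-domain operator and its square-root-of-time-derivative modification from the paper are not reproduced): a point z lies in the scatterer if and only if a Picard-type series built from the auxiliary operator F# = |Re F| + |Im F| stays finite.

```python
import numpy as np

# Generic factorization-method range test (illustrative sketch): the
# indicator is the reciprocal of a Picard series built from the
# eigendecomposition of F# = |Re F| + |Im F|.

def factorization_indicator(F, obs_dirs, points, k, cutoff=1e-12):
    def abs_h(B):                       # |B| for a Hermitian matrix B
        w, V = np.linalg.eigh(B)
        return (V * np.abs(w)) @ V.conj().T
    F_sharp = abs_h(0.5 * (F + F.conj().T)) + abs_h((F - F.conj().T) / 2j)
    lam, V = np.linalg.eigh(F_sharp)    # nonnegative eigenvalues
    keep = lam > cutoff                 # discard numerically zero modes
    ind = np.empty(len(points))
    for m, z in enumerate(points):
        phi_z = np.exp(-1j * k * obs_dirs @ z)
        c = np.abs(V.conj().T @ phi_z) ** 2
        ind[m] = 1.0 / np.sum(c[keep] / lam[keep])  # small outside the scatterer
    return ind
```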

H. Haddar and A. Konschin

We analyze the Factorization method to reconstruct the geometry of a local defect in a periodic absorbing layer using almost only incident plane waves at a fixed frequency. A crucial part of our analysis relies on the consideration of the range of a carefully designed far field operator, which characterizes the geometry of the defect. We further provide some validating numerical results in a two dimensional setting 14.

L. Audibert, H. Girardon and H. Haddar

Non-destructive testing is an essential tool to assess the safety of the facilities within nuclear plants. In particular, conductive deposits on U-tubes in steam generators constitute a major danger as they may block the cooling loop. To detect these deposits, eddy-current probes are introduced inside the U-tubes to generate currents and measure back an impedance signal. Based on earlier work on this subject, we develop a shape optimization technique with regularized gradient descent to invert these measurements and recover the deposit shape. To deal with the unknown, and possibly complex, topological nature of the latter, we propose to model it using a level set function. The methodology is first validated on synthetic axisymmetric configurations and fast convergence is ensured by careful adaptation of the gradient steps and regularization parameters. We then consider a more realistic modeling that incorporates the support plate and the presence of imperfections on the tube interior section. We employ in particular an asymptotic model to take into account these imperfections and treat them as additional unknowns in our inverse problem. A multi-objective optimization strategy, based on the use of different operating frequencies, is then developed to solve this problem. Various numerical experiments with synthetic data demonstrate the viability of our approach, which is also successfully validated against experimental data 25. The approach is then extended to the full 3D eddy current model, where various HPC techniques have been incorporated to keep the inversion time acceptable for industrial test cases 24.
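The level-set description makes the topology of the deposit an unknown of the optimization: the shape is the region where a function phi is negative, and each descent step transports phi with the descent velocity. The sketch below shows one explicit upwind update of phi on a 2D grid under an assumed normal speed V; in the actual method V comes from an adjoint eddy-current computation and the descent is regularized.

```python
import numpy as np

# One explicit Godunov upwind update of the level-set function phi on a 2D
# periodic grid, transporting the shape {phi < 0} with normal speed V.
# Illustrative only; V is a placeholder for the adjoint-based velocity.

def level_set_step(phi, V, dx, dt):
    """Upwind step for the transport equation phi_t + V |grad phi| = 0."""
    dpx = (np.roll(phi, -1, 0) - phi) / dx   # forward difference in x
    dmx = (phi - np.roll(phi, 1, 0)) / dx    # backward difference in x
    dpy = (np.roll(phi, -1, 1) - phi) / dx
    dmy = (phi - np.roll(phi, 1, 1)) / dx
    gp = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2
                 + np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    gm = np.sqrt(np.minimum(dmx, 0)**2 + np.maximum(dpx, 0)**2
                 + np.minimum(dmy, 0)**2 + np.maximum(dpy, 0)**2)
    return phi - dt * (np.maximum(V, 0) * gp + np.minimum(V, 0) * gm)
```

Topological changes of the deposit (merging or splitting) are then handled automatically by the zero level set, which is precisely the motivation for this representation.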

L. Bourgeois, L. Chesnel

We are interested in the classical ill-posed Cauchy problem for the Laplace equation. One method to approximate the solution associated with compatible data consists in considering a family of regularized well-posed problems depending on a small parameter. In this setting, classical approaches à la Grisvard do not work; instead, we apply the Kondratiev approach. We describe the procedure in detail to keep track of the dependence on the small parameter.

M. Aussal, Y. Boukari and H. Haddar

We propose and study a data completion algorithm for recovering missing data from the knowledge of Cauchy data on parts of the same boundary. The algorithm is based on a surface representation of the solution and is presented for the Helmholtz equation. This work is an extension of the data completion algorithm proposed by the last two authors, where the case of data available on a closed boundary was studied. The proposed method is a direct inversion method robust with respect to noisy incompatible data. Classical regularization methods with discrepancy selection principles can be employed and automatically lead to convergent schemes as the noise level goes to zero. We conduct 3D numerical investigations to validate our method on various synthetic examples 2.
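The discrepancy-based parameter selection mentioned above can be sketched for a generic noisy linear system A x = b with noise level delta: among Tikhonov-regularized solutions, keep the largest regularization parameter whose residual stays below tau * delta. All names are placeholders; the paper works with boundary operators rather than a generic matrix.

```python
import numpy as np

# Tikhonov regularization with Morozov's discrepancy principle for a
# generic linear system A x = b with ||noise|| <= delta (illustrative).

def tikhonov_morozov(A, b, delta, tau=1.1):
    """Keep the largest alpha whose residual passes the discrepancy test."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    c = U.conj().T @ b
    best_alpha, best_x = None, None
    for alpha in np.logspace(-8, 2, 60):       # residual grows with alpha
        x = Vh.conj().T @ (s / (s**2 + alpha) * c)
        if np.linalg.norm(A @ x - b) <= tau * delta:
            best_alpha, best_x = alpha, x      # last hit = largest feasible
    return best_alpha, best_x
```

As delta goes to zero, this selection rule yields a convergent scheme, which is the behavior quoted above.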

F. Cakoni, D. Colton, H. Haddar

We develop a conceptually unified approach for characterizing and determining scattering poles and interior eigenvalues for a given scattering problem. Our approach explores a duality stemming from interchanging the roles of incident and scattered fields in our analysis. Both sets are related to the kernel of the relative scattering operator mapping incident fields to scattered fields, corresponding to the exterior scattering problem for the interior eigenvalues and the interior scattering problem for scattering poles. Our discussion includes the scattering problem for a Dirichlet obstacle where duality is between scattering poles and Dirichlet eigenvalues, and the inhomogeneous scattering problem where the duality is between scattering poles and transmission eigenvalues. Our new characterization of the scattering poles suggests a numerical method for their computation in terms of scattering data for the corresponding interior scattering problem 6.

A. Bera, A.-S. Bonnet-Ben Dhia, L. Chesnel

We consider the propagation of acoustic waves at a given wavenumber in a waveguide which is unbounded in one direction. We explain how to construct penetrable obstacles characterized by a physical coefficient

L. Chesnel, S.A. Nazarov

We investigate a time-harmonic wave problem in a waveguide. We work at low frequency so that only one mode can propagate. It is known that the scattering matrix exhibits a rapid variation for real frequencies in a vicinity of a complex resonance located close to the real axis. This is the so-called Fano resonance phenomenon. And when the geometry presents certain properties of symmetry, there are two different real frequencies such that we have either

L. Chesnel, S.A. Nazarov, J. Taskinen

We consider the propagation of surface water waves in a straight planar channel perturbed at the bottom by several thin curved tunnels and wells. We propose a method to construct non-reflecting underwater topographies of this type at an arbitrary prescribed wave number. To proceed, we compute asymptotic expansions of the diffraction solutions with respect to the small parameter of the geometry, taking into account the existence of boundary layer phenomena. We establish error estimates to validate the expansions using advanced techniques of weighted spaces with detached asymptotics. In the process, we show the absence of trapped surface waves for perturbations small enough. This analysis furnishes asymptotic formulas for the scattering matrix and we use them to determine underwater topographies which are non-reflecting. Theoretical and numerical examples are given 8.

M. Bonazzoli, X. Claeys, F. Nataf, P.-H. Tournier

In this work we analyze the convergence of the one-level overlapping domain decomposition preconditioner SORAS (Symmetrized Optimized Restricted Additive Schwarz) applied to a generic linear system whose matrix is not necessarily symmetric/self-adjoint nor positive definite. By generalizing the theory for the Helmholtz equation developed in [I.G. Graham, E.A. Spence, and J. Zou, SIAM J. Numer. Anal., 2020], we identify a list of assumptions and estimates that are sufficient to obtain an upper bound on the norm of the preconditioned matrix, and a lower bound on the distance of its field of values from the origin. We stress that our theory is general in the sense that it is not specific to one particular boundary value problem. Moreover, it does not rely on a coarse mesh whose elements are sufficiently small. As an illustration of this framework, we prove new estimates for overlapping domain decomposition methods with Robin-type transmission conditions for the heterogeneous reaction-convection-diffusion equation. An article on this topic has been submitted 26.
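To fix ideas, a one-level overlapping Schwarz preconditioner can be sketched in dense linear algebra as below. This is a plain restricted additive Schwarz (RAS) variant; the SORAS preconditioner analyzed here additionally symmetrizes with the partition of unity and replaces the local Dirichlet solves by optimized Robin-type local problems, which the sketch omits.

```python
import numpy as np

# Dense-algebra sketch of a one-level overlapping Schwarz (RAS)
# preconditioner: M_inv(x) = sum_i R_i^T D_i A_i^{-1} R_i x.

def ras_preconditioner(A, subdomains, masks):
    """subdomains: overlapping index arrays; masks: partition-of-unity
    weights per subdomain, summing to 1 at every global index."""
    local_inv = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in subdomains]
    def M_inv(x):
        y = np.zeros_like(x)
        for idx, Ai, d in zip(subdomains, local_inv, masks):
            y[idx] += d * (Ai @ x[idx])   # weighted local solve, scattered back
        return y
    return M_inv
```

Used inside GMRES or a stationary iteration, each application costs one local solve per subdomain, and the local solves are independent, hence parallel.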

M. Bonazzoli, X. Claeys

This work is about the scattering of an acoustic wave by an object composed of piecewise homogeneous parts and an arbitrarily heterogeneous part. We propose and analyze a formulation that couples, adopting a Costabel-type approach, boundary integral equations for the homogeneous subdomains with domain variational formulations for the heterogeneous subdomain. This is an extension of the Costabel FEM-BEM coupling to a multi-domain configuration, with junction points allowed, i.e., points where three or more subdomains abut. Usually just the exterior unbounded subdomain is treated with the BEM; here we wish to exploit the BEM whenever it is applicable, that is for all the homogeneous parts of the scattering object, since it yields a reduction in the number of unknowns compared to the FEM. Our formulation is based on the multi-trace formalism for acoustic scattering by piecewise homogeneous objects; here we allow the wavenumber to vary arbitrarily in a part of the domain. We prove that the bilinear form associated with the proposed formulation satisfies a Gårding coercivity inequality, which ensures stability of the variational problem if it is uniquely solvable. We identify conditions for injectivity and construct modified versions immune to spurious resonances. An article on this topic will be submitted soon.

L. Chesnel, S.A. Nazarov

We consider the propagation of time harmonic acoustic waves in a device with three channels. The wave number is chosen such that only the piston mode can propagate. The main goal of this work is to present a geometry which can serve as an energy distributor. More precisely, the geometry is first designed so that, for an incident wave coming from one channel, the energy is almost completely transmitted into the two other channels. Additionally, by slightly tuning two geometrical parameters, we can control the ratio of energy transmitted into the two channels. The approach is based on asymptotic analysis for thin slits around resonance lengths. We also provide numerical results to illustrate the theory 30.

L. Chesnel, S.A. Nazarov

We consider the propagation of the piston mode in an acoustic waveguide obstructed by two screens with small holes. In general, due to the features of the geometry, almost no energy of the incident wave is transmitted through the structure. The goal of this article is to show that by carefully tuning the distance between the two screens, which form a resonator, one can achieve almost complete transmission. We obtain an explicit criterion, not so obvious to intuit, for this phenomenon to happen. Numerical experiments illustrate the analysis 29.
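A toy one-dimensional analogue reproduces the phenomenon: two identical point scatterers of strength q on a line, composed via transfer matrices, transmit almost nothing individually, yet scanning their distance L reveals resonant spacings with near-total transmission. All quantities below are illustrative, not the paper's asymptotic model.

```python
import numpy as np

# Toy 1D analogue of the two-screen resonator: transmission |t|^2 of a wave
# of wavenumber k through two delta scatterers of strength q at distance L.

def transmission(k, q, L):
    beta = q / (2 * k)
    Md = np.array([[1 - 1j * beta, -1j * beta],
                   [1j * beta, 1 + 1j * beta]])             # one scatterer
    P = np.diag([np.exp(1j * k * L), np.exp(-1j * k * L)])  # free propagation
    M = Md @ P @ Md
    r = -M[1, 0] / M[1, 1]          # reflection coefficient
    t = M[0, 0] + M[0, 1] * r       # transmission coefficient
    return abs(t) ** 2
```

For q/k large, a single screen has |t|^2 = 1/(1 + (q/2k)^2), which is small, while scanning L reveals sharp Fabry-Perot-type peaks where |t|^2 gets arbitrarily close to 1, mirroring the criterion obtained in the paper.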

A.-S. Bonnet-Ben Dhia, L. Chesnel, M. Rihani

In this work, we are interested in the analysis of time-harmonic Maxwell's equations in presence of a conical tip of a material with negative dielectric constants. When these constants belong to some critical range, the electromagnetic field exhibits strongly oscillating singularities at the tip which have infinite energy. Consequently Maxwell's equations are not well-posed in the classical

R. Bunoiu, L. Chesnel, K. Ramdani, M. Rihani

In this work, we are interested in the homogenization of time-harmonic Maxwell's equations in a composite medium with periodically distributed small inclusions of a negative material. Here a negative material is a material modelled by negative permittivity and permeability. Due to the sign-changing coefficients in the equations, it is not straightforward to obtain uniform energy estimates to apply the usual homogenization techniques. The goal of this work is to explain how to proceed in this context. The analysis of Maxwell's equations is based on a precise study of two associated scalar problems: one involving the sign-changing permittivity with Dirichlet boundary conditions, another involving the sign-changing permeability with Neumann boundary conditions. For both problems, we obtain a criterion on the physical parameters ensuring uniform invertibility of the corresponding operators as the size of the inclusions tends to zero. In the process, we explain the link existing with the so-called Neumann-Poincaré operator, complementing the existing literature on this topic. Then we use the results obtained for the scalar problems to derive uniform energy estimates for Maxwell's system. At this stage, an additional difficulty comes from the fact that Maxwell's equations are also sign-indefinite due to the term involving the frequency. To cope with it, we establish some sort of uniform compactness result 5.

J.-R. Li and C. Fang

Diffusion Magnetic Resonance Imaging (DMRI) is a promising tool to obtain useful information on microscopic structure and has been extensively applied to biological tissues.

We obtained the following results.

The modeling of the diffusion MRI signal from moving and deforming organs such as the heart is challenging due to significant motion and deformation of the imaged medium during the signal acquisition. Recently, a mathematical formulation of the Bloch-Torrey equation, describing the complex transverse magnetization due to diffusion-encoding magnetic field gradients, was developed to account for the motion and deformation. In that work, the motivation was to cancel the effect of the motion and deformation in the MRI image and the space scale of interest spans multiple voxels. In the present work, we adapt the mathematical equation to study the diffusion MRI signal at the much smaller scale of biological cells.

We start with the Bloch-Torrey equation defined on a cell that is moving and deforming and linearize the equation around the magnitude of the diffusion-encoding gradient. The result is a second-order signal model in which the linear term gives the imaginary part of the diffusion MRI signal and the quadratic term gives the apparent diffusion coefficient (ADC) attributable to the biological cell. We numerically validate this model for a variety of motions and deformations 17.

The diffusion MRI signal arising from neurons can be numerically simulated by solving the Bloch-Torrey partial differential equation. In this paper we present the Neuron Module that we implemented within the Matlab-based diffusion MRI simulation toolbox SpinDoctor. SpinDoctor uses finite element discretization and adaptive time integration to solve the Bloch-Torrey partial differential equation for general diffusion-encoding sequences, at multiple b-values and in multiple diffusion directions. In order to facilitate the diffusion MRI simulation of realistic neurons by the research community, we constructed finite element meshes for a group of 36 pyramidal neurons and a group of 29 spindle neurons whose morphological descriptions were found in the publicly available neuron repository NeuroMorpho.Org. These finite element meshes range from 15,163 to 622,553 nodes. We also broke the neurons into the soma and dendrite branches and created finite element meshes for these cell components. Through the Neuron Module, these neuron and cell-component finite element meshes can be seamlessly coupled with the functionalities of SpinDoctor to provide the diffusion MRI signal attributable to spins inside neurons. We make these meshes and the source code of the Neuron Module available to the public as an open-source package.

To illustrate some potential uses of the Neuron Module, we show numerical examples of the simulated diffusion MRI signals in multiple diffusion directions from whole neurons as well as from the soma and dendrite branches, and include a comparison of the high b-value behavior between dendrite branches and whole neurons. In addition, we demonstrate that the neuron meshes can be used to perform Monte-Carlo diffusion MRI simulations as well. We show that at equivalent accuracy, if only one gradient direction needs to be simulated, SpinDoctor is faster than a GPU implementation of Monte-Carlo, but if many gradient directions need to be simulated, there is a break-even point when the GPU implementation of Monte-Carlo becomes faster than SpinDoctor. Furthermore, we numerically compute the eigenfunctions and the eigenvalues of the Bloch-Torrey and the Laplace operators on the neuron geometries using a finite element discretization, in order to give guidance in the choice of the space and time discretization parameters for both finite element and Monte-Carlo approaches. Finally, we perform a statistical study on the set of 65 neurons to test some candidate biomarkers that can potentially indicate the soma size. This preliminary study exemplifies the possible research that can be conducted using the Neuron Module 10.
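As a minimal illustration of the equation that SpinDoctor discretizes by finite elements in 3D, the sketch below solves the Bloch-Torrey equation on a 1D interval with a constant diffusion-encoding gradient and reflecting boundaries, using implicit Euler in time. All parameter values are merely plausible placeholders, not SpinDoctor's API.

```python
import numpy as np

# 1D finite-difference sketch of the Bloch-Torrey equation
#   dm/dt = -1j * gamma_g * x * m + D * d2m/dx2,   no-flux boundaries,
# for the transverse magnetization m; the signal is the integral of m.

def bloch_torrey_1d(L=10e-6, n=200, D=2e-9, gamma_g=2.675e8 * 0.03,
                    T=10e-3, nt=2000):
    # gamma_g: gyromagnetic ratio times an assumed 30 mT/m gradient
    dx, dt = L / n, T / nt
    x = (np.arange(n) + 0.5) * dx - L / 2       # cell centers
    lap = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    lap[0, 0] = lap[-1, -1] = -1                # reflecting boundaries
    A = D * lap / dx**2 - 1j * gamma_g * np.diag(x)
    step = np.linalg.inv(np.eye(n) - dt * A)    # implicit Euler step matrix
    m = np.ones(n, dtype=complex)               # initial magnetization
    for _ in range(nt):
        m = step @ m
    return np.sum(m) * dx                       # diffusion MRI signal at time T
```

The signal magnitude decays below its initial value through the interplay of dephasing and diffusion, which is what the attenuation models above quantify.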

Giulio Gori, Olivier P. Le Maître, Pietro M. Congedo

The present work concerns the inference of the coefficients of fluid-dependent thermodynamic models, applicable to complex molecular compounds with non-ideal effects. The main objective is to numerically assess the potential of using experimental measurements of some expansion flows to infer the model parameters. The Bayesian formulation incorporates uncertainties in the flow conditions and measurement errors and compares the measurements with the predictions of Computational Fluid Dynamics (CFD) simulations which depend on the parameter values. The resulting posterior distribution of the parameters is sampled using a Markov-Chain Monte-Carlo method. Polynomial-Chaos (PC) surrogates substitute the CFD predictions in the definition of the Bayesian posterior, in order to alleviate the computational burden of solving multiple CFD problems. We rely on synthetic data, i.e., data generated numerically, to assess the potential of expansion flow experiments. Using synthetic data prevents experimental bias, enables the control of model errors (thermodynamic and flow models) and permits the measurement of quantities in conditions that would be hardly achievable in practice. We test three expansion flows with increasing non-ideal effects. Our analyses reveal that the considered experiments have limited potential for the inference of the thermodynamic coefficients. Measuring the temperature, in addition to pressure, improves the posterior knowledge of the specific heat ratio, but other parameters remain highly uncertain. Also, the selection of an expansion condition yielding higher non-ideal effects somewhat improves the inference, but the trend is limited, and experimenting with these conditions may be challenging. Our work also supports the use of Bayesian analysis with synthetic data to investigate, analyze, and design new experiments in the future.
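Schematically, the inference loop replaces the expensive CFD solver by a cheap surrogate inside a random-walk Metropolis sampler. The sketch below uses a toy one-dimensional parameter, a made-up polynomial surrogate and a Gaussian likelihood; none of the numbers come from the study.

```python
import numpy as np

# Toy Bayesian inference loop: a surrogate of the forward model replaces
# the CFD solver inside a random-walk Metropolis sampler.

def metropolis(log_post, theta0, n_samples=5000, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    chain, theta, lp = [], theta0, log_post(theta0)
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

# stand-in for a fitted polynomial-chaos surrogate of the CFD prediction
surrogate = np.polynomial.Polynomial([1.0, 0.5, -0.1])
data, sigma = 1.4, 0.05          # synthetic measurement and its noise level

def log_post(theta):             # Gaussian likelihood, flat prior on [-3, 3]
    if abs(theta) > 3:
        return -np.inf
    return -0.5 * ((surrogate(theta) - data) / sigma) ** 2

chain = metropolis(log_post, theta0=0.0)
```

Because the surrogate is cheap to evaluate, the sampler can afford the many thousands of posterior evaluations that direct CFD calls would make prohibitive.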

A. Del Val, O.P. Le Maitre, P.M. Congedo, T. Magin, O. Chazot

This work deals with the inference of catalytic recombination parameters from plasma wind tunnel experiments for reusable thermal protection materials. One of the critical factors affecting the performance of such materials is the contribution to the heat flux of the exothermic recombination reactions at the vehicle surface. The main objective of this work is to develop a dedicated Bayesian framework that allows us to compare uncertain measurements with model predictions which depend on the catalytic parameter values. Our framework accounts for all the uncertainties involved in the model definition and incorporates all measured variables with their respective uncertainties. The physical model used for the estimation consists of a 1D boundary layer solver along the stagnation line. The chemical production term included in the surface mass balance depends on the catalytic recombination efficiency. As not all the different quantities needed to simulate a reacting boundary layer can be measured or known (such as the flow enthalpy at the inlet boundary), we propose an optimization procedure built on the construction of the likelihood function to determine their most likely values based on the available experimental data. This procedure avoids the need to introduce any a priori estimates on the nuisance quantities, namely, the boundary layer edge enthalpy, static and dynamic pressures, and wall temperatures, which would entail the use of very wide priors. Furthermore, we substitute the optimal likelihood of the experimental measurements with a surrogate model to make the inference procedure both faster and more robust. We show that the resulting Bayesian formulation yields meaningful and accurate posterior probability distributions of the catalytic parameters, with a reduction of more than twenty percent of the standard deviation with respect to previous works. We also study the implications of an extension of the experimental procedure on the improvement of the quality of the inference.

G. Gori, P.M. Congedo, O. Le Maitre

We propose a confidence-based design approach robust to turbulence closure model-form uncertainty in Reynolds-Averaged Navier-Stokes (RANS) computational models. The Eigenspace Perturbation Method is employed to compute turbulence closure uncertainty estimates of the performance targeted by the optimizer. The magnitude of the uncertainty estimates is exploited to establish an indicator parameter associated with the credibility of the numerical prediction. The proposed approach restricts the optimum search only to design space regions for which the credibility indicator suggests trustworthy RANS model predictions. In this way, we improve the efficiency of the design process, potentially avoiding designs for which the computational model is unreliable. The reference test case consists of a two-dimensional single-element airfoil resembling a morphing wing section in a high-lift configuration. Results show that the prediction credibility constraint has a non-negligible impact on the definition of the optimal design.
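The credibility constraint can be caricatured as follows: the optimizer only searches where a toy uncertainty estimate, standing in for the eigenspace-perturbation band, stays below a tolerance, possibly sacrificing nominal performance for trustworthiness. Both functions below are invented for illustration.

```python
import numpy as np

# Caricature of a credibility-constrained design search: designs whose
# model-form uncertainty estimate exceeds the tolerance are excluded.

def objective(x):      # toy predicted performance of design x (to maximize)
    return x * (2.5 - x)

def uncertainty(x):    # toy stand-in for the eigenspace-perturbation spread
    return 0.8 * np.exp(-25 * (x - 1.25) ** 2)   # untrustworthy near x ~ 1.25

def credible_optimum(xs, tol=0.3):
    feasible = [x for x in xs if uncertainty(x) <= tol]   # trusted designs
    return max(feasible, key=objective)

xs = np.linspace(0.0, 2.0, 401)
x_naive = max(xs, key=objective)       # best nominal design, but untrusted
x_credible = credible_optimum(xs)      # best design the model can be trusted on
```

Here the nominal optimum sits exactly where the prediction is least credible, so the constrained search settles for a nearby, slightly worse but trustworthy design, which is the trade-off discussed above.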

A. Fracassi, P.M. Congedo, A. Ghidoni

Centrifugal pumps, being used nowadays for many applications, must be suited for a wide range of pressure ratios and flow rates. To overcome difficulties arising from the design and performance prediction of this class of turbomachinery, many researchers have proposed coupling CFD codes and optimization algorithms for fast and robust design. However, uncertainties are present in most engineering applications such as turbomachines, and their influence on turbomachinery performance should be considered. In this work we apply some advanced optimization techniques to the blade optimization of an ERCOFTAC-like pump, and we assess the robustness of the optimal profiles through an uncertainty propagation study. The main source of uncertainty is the uncertainty in the operating conditions, primarily the rotational speed of the pump shaft, which also affects the flow rate.

F. Fusi, P.M. Congedo, A. Guardone, G. Quaranta

This work compares the deterministic and robust approaches to improve the aerodynamic design of helicopter airfoils. The two formulations differ due to the characteristics of each approach. In the deterministic case, the objective of the optimization is the minimization of drag while maintaining a level of lift that guarantees satisfaction of the trimming condition. In the robust case, a range of angles of attack, rather than a single trim condition, is considered. Thus, the robust optimization takes the lift-to-drag ratio as a measure of the performance of the airfoil, imposing at the same time an inequality constraint on the lift coefficient to guarantee a sufficient level of lift, and then checking after optimization that the trimming condition can be satisfied. The two approaches are compared, showing the pros and cons of the robust framework. In general, the robust approach is capable of reaching the same mean performance as the deterministic one, but with a lower degradation of performance in the off-design situations considered through the uncertainty. On the other hand, the difficulty in imposing the lift trim condition in the robust formulation may lead to results of limited use.
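The contrast between the two formulations can be reduced to a toy example: with a made-up lift-to-drag surrogate depending on a design variable d and the angle of attack a, the deterministic design optimizes a single trim point while the robust design optimizes the mean over a range of angles. All functions and numbers below are illustrative.

```python
import numpy as np

# Toy deterministic vs robust design: performance peaks when the design
# variable d matches the angle of attack a (invented surrogate).

def lift_to_drag(d, a):
    return 10 - (a - d) ** 2

a_trim = 2.0                              # single trim condition
angles = np.linspace(1.0, 3.5, 25)        # off-design range of angles

ds = np.linspace(0.0, 4.0, 401)
d_det = max(ds, key=lambda d: lift_to_drag(d, a_trim))            # trim-point
d_rob = max(ds, key=lambda d: np.mean(lift_to_drag(d, angles)))   # mean over range
```

The deterministic design is best at the trim point, while the robust design trades a little performance there for a better average over the off-design range, which is exactly the trade-off reported above.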

P.M. Congedo is the Inria Coordinator of the CWI-Inria International Lab (IIL CWI-Inria).

PhD funding from the regional program Dim Math Innov (2020-2022): 3D audio for connecting children with classrooms (PhD of D. Lerévérend).

P.M. Congedo is the coordinator of the Sustainable Development Committee of Inria Saclay Île-de-France.

J.R. Li is a member of Inria's Commission d'Evaluation.