The research activity of our team is dedicated to the design, analysis and implementation of efficient numerical methods for solving inverse and shape/topological optimization problems, possibly including system uncertainties, in connection with acoustics, electromagnetism, elastodynamics, diffusion, and fluid mechanics.
Targeted practical applications include radar and sonar, bio-medical imaging techniques, non-destructive testing, structural design, composite materials, diffusion magnetic resonance imaging, and fluid-driven applications in the aerospace and energy fields.
Roughly speaking, the model problem consists in determining information on, or optimizing, the geometry (topology) and the physical properties of unknown targets from given constraints or measurements, for instance measurements of diffracted waves or induced magnetic fields. Moreover, system uncertainties can be systematically taken into account to provide a measure of confidence in the numerical predictions.
In general, these problems are non-linear. The inverse problems are also severely ill-posed and therefore require special attention from the regularization point of view, as well as non-trivial adaptations of classical optimization methods.
Our scientific research interests are the following:
Theoretical understanding and analysis of the forward and inverse mathematical models, including in particular the development of simplified models for adequate asymptotic configurations.
The design of efficient numerical optimization/inversion methods which are quick and robust with respect to noise. Special attention will be paid to algorithms capable of treating large scale problems (e.g. 3-D problems) and/or suited for real-time imaging.
Propose new methods and develop advanced tools to perform uncertainty quantification for optimization/inversion.
Development of prototype software for specific applications, as well as tutorial toolboxes.
We are particularly interested in the development of the following themes:
Qualitative and quantitative methods for inverse scattering problems
Topological optimization methods
Forward and inverse models for Diffusion MRI
Forward/Backward uncertainty quantification methods for optimization/inversion problems in the context of expensive computer codes.
The research activity of our team is dedicated to the design, analysis and implementation of efficient numerical methods for solving inverse and shape/topological optimization problems, possibly including system uncertainties, in connection with wave imaging, structural design, non-destructive testing and medical imaging modalities. We are particularly interested in the development of fast methods that are suited for real-time applications and/or large-scale problems. These goals require working on both the physical and the mathematical models involved, and hence a solid expertise in the related numerical algorithms. A part of the research activity is also devoted to taking system uncertainties into account when solving inverse/optimization problems.

At the interface of physics, mathematics, and computer science, Uncertainty Quantification (UQ) focuses on the development of frameworks and methods to characterize uncertainties in predictive computations. Uncertainties and errors arise at different stages of the numerical simulation. First, errors are introduced by the physical simplifications made in the mathematical modeling of the system under investigation; other errors come from the numerical resolution of the mathematical model, due in particular to finite discretization and to computations carried out with finite accuracy and tolerance; finally, errors are due to a limited knowledge of the input quantities (parameters) appearing in the definition of the numerical model being solved.
This section gives a general overview of our research interests and themes. We choose to present them through the specific academic example of inverse scattering problems (from inhomogeneities), which is representative of foreseen developments in both inversion and (topological) optimization methods. The practical problem is to identify an inclusion from measurements of the diffracted waves that result from the interaction of the sought inclusion with (incident) waves sent into the probed medium. Typical applications include biomedical imaging, where one would like to use micro-waves to probe for the presence of pathological cells, and the imaging of urban infrastructure, where ground penetrating radars (GPR) are used to find the location of buried facilities such as pipelines or waste deposits. These applications in particular require fast and reliable algorithms.
By “imaging” we refer to the inverse problem where the concern is only the location and the shape of the inclusion, while “identification” may also indicate recovering information on the physical parameters of the inclusion.
Both problems (imaging and identification) are non-linear and ill-posed (they lack stability with respect to measurement errors unless careful constraints are added). Moreover, the unique determination of the geometry or of the coefficients is not guaranteed in general if sufficient measurements are not available. As an example, in the case of anisotropic inclusions, one can show that an appropriate set of data uniquely determines the geometry but not the material properties.
These theoretical considerations (uniqueness, stability) are not only important in understanding the mathematical properties of the inverse problem, but also guide the choice of appropriate numerical strategies (which information can be stably reconstructed) and also the design of appropriate regularization techniques. Moreover, uniqueness proofs are in general constructive proofs, i.e. they implicitly contain a numerical algorithm to solve the inverse problem, hence their importance for practical applications. The sampling methods introduced below are one example of such algorithms.
A large part of our research activity is dedicated to numerical methods for the first type of inverse problem, where only the geometrical information is sought. In its general setting the inverse problem is very challenging, and no method can provide a universally satisfactory solution (with respect to the cost-precision-stability balance). This is why, in the majority of practically employed algorithms, some simplification of the underlying mathematical model is used, according to the specific configuration of the imaging experiment. The most popular simplifications are geometric optics (the Kirchhoff approximation) for high frequencies and weak scattering (the Born approximation) for small contrasts or small obstacles. They give full satisfaction for a wide range of applications, as attested by the large success of existing imaging devices (radar, sonar, ultrasound, X-ray tomography, etc.) that rely on one of these approximations.
In most cases, the simplification used results in a linearization of the inverse problem and is therefore usually valid only if the latter is weakly non-linear. The development of simplified models and the improvement of their efficiency is still a very active research area. With that perspective, we are particularly interested in deriving and studying higher-order asymptotic models associated with small geometrical parameters such as small obstacles, thin coatings, wires, and periodic media.
A larger part of our research activity is dedicated to algorithms that avoid such approximations and that are efficient where classical approaches fail, i.e., roughly speaking, when the non-linearity of the inverse problem is sufficiently strong. This type of configuration is motivated by the applications mentioned below, and occurs as soon as the geometry of the unknown medium generates non-negligible multiple-scattering effects (multiply-connected and closely spaced obstacles) or when the frequency used lies in the so-called resonant region (wavelength comparable to the size of the sought medium). It is therefore much more difficult to deal with and requires new approaches. Our ideas for tackling this problem are mainly motivated and inspired by recent advances in shape and topological optimization methods and in so-called sampling methods.
Sampling methods are fast imaging solvers adapted to multi-static data (multiple transmitter-receiver pairs) at a fixed frequency. Even though they do not use any linearization of the forward model, they rely on computing solutions to a set of linear problems of small size, which can be done in a completely parallel fashion. Our team already has a solid expertise in these methods applied to electromagnetic 3-D problems. The success of such approaches lies in their ability to provide a relatively fast algorithm for solving 3-D problems without any need for a priori knowledge of the physical parameters of the targets. These algorithms solve only the imaging problem, in the sense that only the geometrical information is provided.
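The computational kernel behind this parallel structure can be sketched as follows. This is a schematic illustration only, not the team's implementation: a generic matrix `F` stands in for the measured far-field operator, and the test vectors `phi_z` are placeholders whose actual form depends on the physical model. For each sampling point one solves a small Tikhonov-regularized linear system and records the norm of the solution.

```python
import numpy as np

def tikhonov_solve(F, rhs, alpha):
    """Regularized solve: minimize ||F g - rhs||^2 + alpha ||g||^2
    via the normal equations (F* F + alpha I) g = F* rhs."""
    FH = F.conj().T
    return np.linalg.solve(FH @ F + alpha * np.eye(F.shape[1]), FH @ rhs)

def sampling_indicator(F, test_vectors, alpha=1e-3):
    """One small independent solve per sampling point z: the solves are
    embarrassingly parallel; the map z -> 1/||g_z|| gives the image."""
    return [1.0 / np.linalg.norm(tikhonov_solve(F, phi, alpha))
            for phi in test_vectors]
```

Because each sampling point is treated by an independent solve against the same (pre-factorizable) matrix, the cost per image point is small and the loop distributes trivially over processors.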
Despite the large efforts already spent on the development of this type of methods, both from the algorithmic point of view and from the theoretical one, numerous questions are still open. These attractive new algorithms also suffer from a lack of experimental validation, due to their relatively recent introduction. We would also like to invest in this direction by developing collaborations with engineering research groups that have experimental facilities. From the practical point of view, the main potential limitation of sampling methods is the need for a large amount of data to achieve a reasonable accuracy. On the other hand, optimization methods do not suffer from this constraint, but they require a good initial guess to ensure convergence and to reduce the number of iterations. Therefore it seems natural to try to combine the two classes of methods in order to calibrate the balance between cost and precision.
Among the various shape optimization methods, the Level Set method seems particularly suited for such a coupling. First, because it shares a similar mechanism with sampling methods: the geometry is captured as a level set of an “indicator function” computed on a Cartesian grid. Second, because neither method requires a priori knowledge of the topology of the sought geometry. Beyond the choice of a particular method, the main question is how the coupling can be achieved. Obvious strategies consist in using one method to pre-process (initialization) or post-process (find the level set) the other. But one can also think of more elaborate ones, where for instance a sampling method is used to optimize the choice of the incident wave at each iteration step. The latter point is closely related to the design of so-called “focusing incident waves” (which are for instance the basis of applications of the time-reversal principle). In the frequency regime, these incident waves can be constructed from the eigenvalue decomposition of the data operator used by sampling methods. The theoretical and numerical investigation of these aspects is still incomplete for electromagnetic and elastodynamic problems.
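A minimal sketch of the level set representation on a Cartesian grid may help fix ideas. This is illustrative only: real implementations evolve the level set function with a Hamilton-Jacobi equation driven by shape-derivative information, whereas here the interface simply expands at unit normal speed.

```python
import numpy as np

# Cartesian grid on [-1, 1]^2 (a stand-in for the sampling grid).
n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)

# Signed-distance level set function of a disk of radius 0.4:
# negative inside the shape, positive outside.
phi = np.sqrt(X**2 + Y**2) - 0.4

inside = phi < 0  # the shape is recovered as the sub-zero level set

# Advecting the interface outward at unit normal speed for a time dt
# amounts, for a signed-distance function, to subtracting dt from phi.
dt = 0.05
phi_new = phi - dt

# No boundary parametrization is involved, so merging or splitting of
# components (topology changes) is handled for free.
```

The key point for the coupling with sampling methods is that both approaches evaluate an indicator function on the same kind of grid, so one can serve to initialize or post-process the other without any remeshing.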
Other topological optimization methods, such as the homogenization method or the topological gradient method, can also be used, each providing particular advantages in specific configurations. The development of these methods is well suited to inverse problems and provides a substantial advantage over classical shape optimization methods based on boundary variations. Their application to inverse problems has not been fully investigated. The efficiency of these optimization methods can also be increased in adequate asymptotic configurations. For instance, the small-amplitude homogenization method can be used as an efficient relaxation method for the inverse problem in the presence of small contrasts. On the other hand, the topological gradient method has been shown to perform well in localizing small inclusions with only one iteration.
A broader perspective would be the extension of the above-mentioned techniques to time-dependent cases. Taking time-domain data into account is important for many practical applications, such as imaging in cluttered media, the design of absorbing coatings, or crashworthiness in the case of structural design.
For the identification problem, one would like to also obtain information on the physical properties of the targets. Optimization methods are of course a tool of choice for these problems. However, in some applications only qualitative information is needed, and it can be obtained more cheaply using asymptotic theories combined with sampling methods. We also refer here to the use of so-called transmission eigenvalues as qualitative indicators for the non-destructive testing of dielectrics.
We are also interested in parameter identification problems arising in diffusion-type problems. Our research here is mostly motivated by applications to the imaging of biological tissues with the technique of Diffusion Magnetic Resonance Imaging (DMRI). Roughly speaking, DMRI gives a measure of the average distance travelled by water molecules in a certain medium and can give useful information on the cellular structure and on structural changes when the medium is a biological tissue. In particular, we would like to infer from DMRI measurements the changes in the cellular volume fraction occurring under various physiological or pathological conditions, as well as the average cell size in the case of tumor imaging. The main challenges here are 1) correctly modeling the measured signals using diffusive-type time-dependent PDEs, 2) numerically handling the complexity of the tissues, and 3) using the first two to identify physically relevant parameters from the measurements. For the last point we are particularly interested in constructing reduced models of the multiple-compartment Bloch-Torrey partial differential equation using homogenization methods.
The Team devotes a large effort to the formulation, implementation and validation of numerical methods for using scientific computing to drive experiments and exploit available data (coming from models, simulations and experiments) while taking the system uncertainty into account. The Team is also invested in exploiting the intimate relationship between optimization and UQ to make Optimization Under Uncertainty (OUU) tractable. A part of these activities is devoted to the simulation of high-fidelity models for fluids, in three main fields: aerospace, energy and environment.
The Team works on developing original UQ representations and algorithms to deal with complex and large-scale models having high-dimensional input parameters with complex influences. We organize our core research activities along different methodological UQ developments related to the challenges discussed above. Obviously, some efforts are shared by different initiatives or projects, and some of them include the continuous improvement of the non-intrusive methods constituting our software libraries. These actions are not detailed in the following, so as to focus the presentation on more innovative aspects, but we nonetheless mention the continuous development and incorporation into our libraries of advanced sparse-grid methods, sparsity-promoting strategies and low-rank methods.
An effort is dedicated to the efficient construction of surrogate models, which are central in both forward and backward UQ problems, aiming at large-scale simulations relevant to engineering applications, with high-dimensional input parameters.
Sensitivity analyses and other forward UQ problems (e.g., estimation of failure probabilities, rare events, etc.) depend on the input uncertainty model. Most often, for convenience or because of a lack of data, the uncertain inputs are assumed independent. In the Team, we investigate approaches dedicated to the construction of uncertainty models that integrate the available information and expert knowledge in a consistent and objective fashion. To this end, several mathematical frameworks are already available, e.g. the maximum entropy principle, likelihood maximization and moment-matching methods, but their application to real engineering problems remains scarce, and their systematic use raises multiple challenges, both to construct the uncertainty model and to solve the related UQ problems (forward and backward). Because the construction of the model relies heavily on the available data and expertise, the contributions of the Team in these areas depend on the needs and demands of end-users and industrial partners.
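As a toy illustration of the moment-matching idea (a hedged sketch, not the Team's tooling): for a positive uncertain input known only through its sample mean and variance, one can fit a lognormal model whose first two moments reproduce those statistics exactly, using the closed-form relations between (mean, variance) and the lognormal parameters (mu, sigma).

```python
import math

def lognormal_from_moments(mean, var):
    """Moment matching: parameters (mu, sigma) of a lognormal whose
    first two moments equal the given sample mean and variance."""
    if mean <= 0 or var <= 0:
        raise ValueError("mean and variance must be positive")
    sigma2 = math.log(1.0 + var / mean**2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

def lognormal_moments(mu, sigma):
    """Recover mean and variance from (mu, sigma), to check the match."""
    m = math.exp(mu + 0.5 * sigma**2)
    v = (math.exp(sigma**2) - 1.0) * math.exp(2.0 * mu + sigma**2)
    return m, v
```

A lognormal is only one possible choice of model; maximum entropy or likelihood maximization would generally lead to different distributions, which is precisely why the construction of the uncertainty model deserves care.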
To mitigate computational complexity, the Team is exploring multi-fidelity approaches in the context of expensive simulations. We combine predictions of models with different levels of discretization and physical simplification to construct, at a controlled cost, reliable surrogate models of simulation outputs, or directly of objective functions and possibly constraints, to enable the resolution of robust optimization and stochastic inverse problems. Again, one difficulty to be addressed by the Team is the design of the computer experiments needed to obtain the best multi-fidelity model at the lowest cost (or for a prescribed computational budget), with respect to the end use of the model. This last point is particularly challenging, as it calls for accuracy on output values that are usually unknown a priori and must be estimated as the model construction proceeds.
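One of the simplest forms this combination can take is an additive correction. The sketch below is a minimal illustration with hypothetical models, not one of the Team's simulators: a cheap low-fidelity model is corrected by a discrepancy term fit on a handful of expensive high-fidelity runs.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y ~ a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def multifidelity_surrogate(lo, hi_samples):
    """Additive correction: surrogate(x) = lo(x) + delta(x), where the
    discrepancy delta is fit on a few expensive high-fidelity runs."""
    xs = [x for x, _ in hi_samples]
    ds = [y - lo(x) for x, y in hi_samples]  # observed discrepancies
    a, b = fit_linear(xs, ds)
    return lambda x: lo(x) + a * x + b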
Conventional radar imaging techniques (ISAR, GPR, etc.) use backscattering data to image targets. The commonly used inversion algorithms are mainly based on the use of weak scattering approximations such as the Born or Kirchhoff approximation leading to very simple linear models, but at the expense of ignoring multiple scattering and polarization effects. The success of such an approach is evident in the wide use of synthetic aperture radar techniques.
However, the use of backscattering data makes 3-D imaging a very challenging problem (it is not even well understood theoretically) and as pointed out by Brett Borden in the context of airborne radar: “In recent years it has become quite apparent that the problems associated with radar target identification efforts will not vanish with the development of more sensitive radar receivers or increased signal-to-noise levels. In addition it has (slowly) been realized that greater amounts of data - or even additional “kinds” of radar data, such as added polarization or greatly extended bandwidth - will all suffer from the same basic limitations affiliated with incorrect model assumptions. Moreover, in the face of these problems it is important to ask how (and if) the complications associated with radar based automatic target recognition can be surmounted.” This comment also applies to the more complex GPR problem.
Our research themes will incorporate the development, analysis and testing of several novel methods, such as sampling methods, level set methods or topological gradient methods, for ground penetrating radar applications (imaging of urban infrastructures, landmine detection, monitoring of underground waste deposits, etc.) using multi-static data.
Among emerging medical imaging techniques, we are particularly interested in those using low to moderate frequency regimes. These include Microwave Tomography, Electrical Impedance Tomography and the closely related Optical Tomography technique. They all have the advantage of being potentially safe and relatively cheap modalities, and they can be used in complementarity with well-established techniques such as X-ray computed tomography or Magnetic Resonance Imaging.
With these modalities, tissues are differentiated and, consequently, can be imaged based on differences in their dielectric properties (recent studies have shown that the dielectric properties of biological tissues can be a strong indicator of the tissues' functional and pathological condition, for instance tissue blood content, ischemia, infarction, hypoxia, malignancies, edema, and others). The main challenge for these modalities is to build a 3-D imaging algorithm capable of treating multi-static measurements so as to provide real-time images with the highest reasonably expected resolution, in a sufficiently robust way.
Another important biomedical application is brain imaging. We are for instance interested in the use of EEG and MEG techniques as complementary tools to MRI. They are applied for instance to localize epileptic centers or active zones (functional imaging). Here the problem is different and consists in performing passive imaging: the epileptic centers act as electrical sources, and imaging is performed from measurements of the induced currents. Incorporating the structure of the skull is essential to improving the resolution of the imaging procedure. Doing so in a reasonably fast manner is still an active research area, and the use of asymptotic models offers a promising way to address this issue.
One challenging problem in this vast area is the identification and imaging of defects in anisotropic media. For instance, this problem is of great importance in aeronautic construction, due to the growing use of composite materials. It also arises in applications linked to the evaluation of wood quality, such as locating knots in timber in order to optimize timber-cutting in sawmills, or evaluating wood integrity before cutting trees. The anisotropy of the propagation medium makes the analysis of diffracted waves more complex, since one cannot rely only on the use of backscattered waves. Another difficulty comes from the fact that the micro-structure of the medium is generally not well known a priori.
Our concern is focused on the determination of qualitative information on the size of defects and their physical properties, rather than on a complete imaging, which is in general impossible for anisotropic media. For instance, in the case of a homogeneous background, one can link the size of the inclusion and the index of refraction to the first eigenvalue of the so-called interior transmission problem. These eigenvalues can be determined from the measured data and a rough localization of the defect. Our goal is to extend this kind of idea to the cases where both the propagation medium and the inclusion are anisotropic. The generalization to the case of cracks or screens also has to be investigated.
In the context of nuclear waste management many studies are conducted on the possibility of storing waste in a deep geological clay layer. To assess the reliability of such a storage without leakage it is necessary to have a precise knowledge of the porous media parameters (porosity, tortuosity, permeability, etc.). The large range of space and time scales involved in this process requires a high degree of precision as well as tight bounds on the uncertainties. Many physical experiments are conducted in situ which are designed for providing data for parameters identification. For example, the determination of the damaged zone (caused by excavation) around the repository area is of paramount importance since microcracks yield drastic changes in the permeability. Level set methods are a tool of choice for characterizing this damaged zone.
In biological tissues, water is abundant, and magnetic resonance imaging (MRI) exploits the magnetic property of the nucleus of the water proton. The imaging contrast (the variations in the grayscale in an image) in standard MRI can come from proton density, T1 (spin-lattice) relaxation, or T2 (spin-spin) relaxation, and the contrast in the image gives some information on the physiological properties of the biological tissue at different physical locations of the sample. The resolution of MRI is on the order of millimeters: the greyscale value shown in an imaging pixel represents the volume-averaged value taken over all the physical locations contained in that pixel.
In diffusion MRI, the image contrast comes from a measure of the average distance the water molecules have moved (diffused) during a certain amount of time. The Pulsed Gradient Spin Echo (PGSE) sequence is a commonly used sequence of applied magnetic fields to encode the diffusion of water protons. The term 'pulsed' means that the magnetic fields are short in duration, and the term 'gradient' means that the magnetic fields vary linearly in space along a particular direction. First, the water protons in the tissue are labelled with a nuclear spin precession frequency that varies as a function of the physical position of the water molecules, via the application of a pulsed (short in duration, on the order of ten milliseconds) magnetic field. Because the precession frequencies of the water molecules vary, the signal, which measures the aggregate phase of the water molecules, is reduced due to phase cancellations. Some time (usually tens of milliseconds) after the first pulsed magnetic field, another pulsed magnetic field is applied to reverse the spins of the water molecules. The time between the applications of the two pulsed magnetic fields is called the 'diffusion time'. If the water molecules have not moved during the diffusion time, the phase dispersion is reversed, hence the signal loss is also reversed: the signal is said to be refocused. However, if the molecules have moved during the diffusion time, the refocusing is incomplete and the signal detected by the MRI scanner is weaker than if the water molecules had not moved. This lack of complete refocusing is called the signal attenuation and is the basis of the image contrast in DMRI. Pixels showing more signal attenuation are associated with larger water displacements during the diffusion time, which may be linked to physiological factors such as higher cell membrane permeability, larger cell sizes, or a higher extra-cellular volume fraction.
We model the nuclear magnetization of water protons in a sample due to diffusion-encoding magnetic fields by a multiple-compartment Bloch-Torrey partial differential equation, which is a diffusive-type time-dependent PDE. The DMRI signal is the integral of the solution of the Bloch-Torrey PDE. In a homogeneous medium, the intrinsic diffusion coefficient D appears as the slope of the semi-log plot of the signal (in appropriate units). However, because during typical scanning times, 50-100 ms, water molecules have had time to travel a diffusion distance that is long compared to the average size of the cells, the slope of the semi-log plot of the signal is in fact a measure of an 'effective' diffusion coefficient. In DMRI applications, this measured quantity is called the 'apparent diffusion coefficient' (ADC) and provides the most commonly used form of image contrast for DMRI. This ADC is closely related to the effective diffusion coefficient obtainable from mathematical homogenization theory.
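Concretely, under the mono-exponential model S(b) = S0 exp(-b * ADC), the ADC is minus the slope of log S plotted against the b-value. A minimal sketch of this fit follows; the b-values and diffusion coefficient below are synthetic placeholders, not experimental data.

```python
import math

def fit_adc(b_values, signals):
    """Least-squares slope of log(S) versus b; the ADC is minus that slope.
    Units: if b is in s/mm^2, the ADC comes out in mm^2/s."""
    logs = [math.log(s) for s in signals]
    n = len(b_values)
    mb = sum(b_values) / n
    ml = sum(logs) / n
    slope = (sum((b - mb) * (l - ml) for b, l in zip(b_values, logs))
             / sum((b - mb) ** 2 for b in b_values))
    return -slope

# Synthetic mono-exponential signal with a known diffusion coefficient.
D_true = 2.0e-3                       # mm^2/s, a plausible tissue-like value
b_vals = [0.0, 250.0, 500.0, 1000.0]  # s/mm^2
signal = [math.exp(-b * D_true) for b in b_vals]
adc = fit_adc(b_vals, signal)         # recovers D_true on noise-free data
```

On real data the semi-log plot is not exactly linear, which is precisely the point made above: the fitted slope is an 'effective' quantity that reflects the tissue micro-structure rather than the intrinsic D.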
Specific actions are devoted to the problem of atmospheric reentry simulations. We focus on several aspects: i) the development of innovative algorithms improving the prediction of hypersonic flows and including system uncertainties, ii) the application of these methods to the atmospheric reentry of space vehicles for the control and optimization of the trajectory, iii) debris reentry, which is of fundamental importance to NASA, CNES and ESA. Several works have already been initiated with funding from CNES, Thales, and ASL. An ongoing activity concerns the design of the Thermal Protection System (TPS) that shields the spacecraft from the aerothermal heating generated by friction at the surface of the vehicle. The TPS is usually composed of different classes of materials, depending on the mission and the planned trajectory. One major issue is to model the material response accurately to ensure a safe design. High-fidelity material modeling for ablative materials has been developed by NASA, but a lot of work is still needed concerning the assessment of physical and modeling uncertainties during the design process. Our objective is to set up a predictive numerical tool to reliably estimate the response of ablative materials under different aerothermal conditions.
An important effort is dedicated to the simulation of fluids featuring complex thermodynamic behavior, in the context of two distinct projects: the VIPER project, funded by the Aquitaine Region, and a project with CWI (Scientific Computing Group). Dense gases (DGs) are single-phase vapors operating at temperature and pressure conditions close to the saturation curve. The interest in studying the complex dynamics of compressible dense-gas flows comes from the potential technological advantages of using these fluids in energy conversion cycles, such as Organic Rankine Cycles (ORCs), which use dense gases as energy converters for biomass fuels and for low-grade heat from geothermal or industrial waste-heat sources. Since these fluids feature large uncertainties in their estimated thermodynamic properties (critical properties, acentric factor, etc.), a meaningful numerical prediction of the performance must take these uncertainties into account. Other sources of uncertainties include, but are not limited to, the inlet boundary conditions, which are often unknown in dense-gas applications. Moreover, a robust optimization must also include the more generic uncertainty introduced by the machining tolerances in the construction of the turbine blades.
Fellowship for participation in the Center for Turbulence Research Summer Program at Stanford University, June-July 2018 (P.M. Congedo, G. Gori).
This software solves forward and inverse problems for the Helmholtz equation in 2-D.
Functional Description: This software is written in Fortran 90 and addresses forward and inverse problems for the Helmholtz equation in 2-D. It includes three independent components.
* The first one solves the scattering problem using an integral-equation approach; it supports piecewise-constant dielectrics and obstacles with impedance boundary conditions.
* The second one contains various sampling methods to solve the inverse scattering problem (LSM, RGLSM(s), Factorization, MuSiC) in near-field or far-field settings.
* The third component is a set of post-processing functionalities to visualize the results.
Participant: Houssem Haddar
Contact: Houssem Haddar
SAXS inversion using LMA and HSPY models
Keyword: SAXS measurements
Functional Description: This software determines nanoparticles size distribution from SAXS measurements (Small Angle X-ray Scattering). It contains two different approaches. The first one is based on a linear LMA model with automatic search for model parameters. The second approach uses a non-linear inversion of the HSPY model.
Authors: Marc Bakry and Houssem Haddar
Contact: Marc Bakry
Keywords: Simulation - PDE - Diffusion imaging - MRI
Functional Description: We developed a Matlab toolbox for solving the multiple-compartment Bloch-Torrey partial differential equation in 3D, to simulate the water proton magnetization of a sample under the influence of diffusion-encoding magnetic field gradient pulses. We coupled the finite element spatial discretization with several ODE solvers in time that are available in Matlab.
Result: the code will be made available on GitHub in 2019.
Participant: Jing Rebecca Li
Contact: Jing Rebecca Li
F. Cakoni, H. Haddar and A. Lechleiter
We develop a factorization method to obtain an explicit characterization of a (possibly non-convex) Dirichlet scattering object from measurements of time-dependent causal scattered waves in the far-field regime. In particular, we prove that the far fields of solutions to the wave equation due to suitably modified incident waves characterize the obstacle by a range criterion involving the square root of the time derivative of the corresponding far-field operator. Our analysis makes essential use of a coercivity property of the solution of the Dirichlet initial boundary value problem for the wave equation in the Laplace domain. This forces us to consider this particular modification of the far-field operator. The latter, in fact, can be chosen arbitrarily close to the true far-field operator given in terms of physical measurements.
F. Cakoni, H. Haddar and T.P Nguyen
We consider the imaging of local perturbations of an infinite penetrable periodic layer. A cell of this periodic layer consists of several bounded inhomogeneities situated in a known homogeneous medium. We use a differential linear sampling method to reconstruct the support of perturbations without using the Green's function of the periodic layer or reconstructing the periodic background inhomogeneities. The justification of this imaging method relies on the well-posedness of a nonstandard interior transmission problem, which until now was an open problem except for the special case when the local perturbation does not intersect the background inhomogeneities. The analysis of this new interior transmission problem is the main focus of this paper. We then complete the justification of our inversion method and present some numerical examples that confirm the theoretical behavior of the differential indicator function determining the reconstructable regions in the periodic layer.
M. Bakry, H. Haddar and O. Bunau
The Local Monodisperse Approximation (LMA) is a two-parameter model commonly employed for the retrieval of size distributions from the small angle scattering (SAS) patterns obtained on dense nanoparticle samples (e.g. dry powders and concentrated solutions). This work features an original, beyond state-of-the-art implementation of the LMA model resolution for the inverse scattering problem. Our method is based on the Expectation-Maximization iterative algorithm and is free from any fine tuning of model parameters. The application of our method to SAS data acquired in laboratory conditions on dense nanoparticle samples is shown to provide very good results.
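The flavor of such multiplicative EM updates can be sketched as follows (a toy illustration with a Guinier-like kernel and a coarse radius grid, not the LMA kernel or the actual implementation; the Richardson-Lucy-type update shown is the standard EM iteration for nonnegative linear mixtures):

```python
import math

def kernel(q, r):
    # Toy Guinier-like form factor; the real LMA kernel is more involved.
    return math.exp(-(q * r) ** 2 / 3.0)

def em_inversion(q_vals, y, radii, n_iter=500):
    """EM / Richardson-Lucy multiplicative updates for I(q) = sum_j K(q, r_j) w_j.

    The multiplicative form keeps the size-distribution weights nonnegative
    without any explicit constraint handling or parameter tuning.
    """
    w = [1.0 / len(radii)] * len(radii)
    col_sum = [sum(kernel(q, r) for q in q_vals) for r in radii]
    for _ in range(n_iter):
        y_hat = [sum(kernel(q, r) * wj for r, wj in zip(radii, w)) for q in q_vals]
        for j, r in enumerate(radii):
            back = sum(kernel(q, r) * yi / max(yh, 1e-300)
                       for q, yi, yh in zip(q_vals, y, y_hat))
            w[j] *= back / col_sum[j]
    return w
```

On noiseless synthetic data generated from the same kernel, the iteration drives the data misfit down rapidly while all weights stay nonnegative.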
L. Audibert, L. Chesnel, H. Haddar and Kevish Napal
We consider the problem of detecting the presence of sound-hard cracks in a non-homogeneous reference medium from measurements of multi-static far field data. First, we provide a factorization of the far field operator in order to implement the Generalized Linear Sampling Method (GLSM). The justification of the analysis is also based on the study of a special interior transmission problem. This technique allows us to recover the support of the inhomogeneity of the medium but fails to locate cracks. In a second step, we consider a medium with a multiply connected inhomogeneity, assuming that we know the far field data at one given frequency both before and after the appearance of cracks. Using the Differential Linear Sampling Method (DLSM), we explain how to identify the component(s) of the inhomogeneity where cracks have emerged. The theoretical justification of the procedure relies on the comparison of the solutions of the corresponding interior transmission problems without and with cracks. Finally, we illustrate the GLSM and the DLSM with numerical results in 2D. In particular, we show that our method is reliable for different scenarios simulating the appearance of cracks between two measurement campaigns.
P.M. Congedo, F. Sanson, T. Magin, F. Panerai
Quantifying the catalytic properties of reusable thermal protection system materials is essential for the design of atmospheric entry vehicles. These properties quantify the recombination of oxygen and nitrogen atoms into molecules and allow for accurate computation of the heat flux to the spacecraft. Rebuilding them from ground test data, however, is not straightforward and is subject to uncertainties. We propose a fully Bayesian approach to reconstruct the catalytic properties of ceramic matrix composites from sparse high-enthalpy facility experimental data, with uncertainty estimates. The results are compared to those obtained by means of an alternative reconstruction procedure, where the experimental measurements are also treated as random variables but propagated through a deterministic solver. For the testing conditions presented in this work, the contribution of molecular recombination to the measured heat flux is negligible; therefore, the material catalytic property cannot be estimated precisely. Moreover, epistemic uncertainties, such as the unknown reference calorimeter catalytic property, are rigorously included.
P.M. Congedo, G. Gori, O. Le Maitre, A. Guardone
The present work develops a Bayesian framework for the inference of complex fluid thermodynamic model parameters. The objective is to numerically assess the potential of using experimental measurements to reduce the aleatoric and epistemic uncertainties inherent in the Peng-Robinson thermodynamic fluid model for flows of fluids in non-ideal regimes. Our Bayesian framework is tailored to the design of the TROVA (Test-Rig for Organic VApors) experimental facility at Politecnico di Milano. Computational Fluid Dynamics (CFD) simulations are used to predict the flow field within the designed test section, whereas surrogate models (Polynomial-Chaos expansions) are constructed to account for the dependence of the predictions on the thermodynamic model parameters. First, synthetic data are generated in an attempt to reproduce a real test case, considered as the reference experiment, actually carried out in the TROVA facility. We investigate the resulting posterior uncertainties and assess the knowledge brought by using diverse types of measurements obtained for a flow in the non-ideal regime. Results reveal that exploiting pressure measurements alone does not allow one to infer the thermodynamic coefficients; indeed, the material-dependent parameters remain highly uncertain.
H. Girardon, H. Haddar and L. Audibert
Non-destructive testing is an essential tool to assess the safety of the facilities within nuclear plants. In particular, conductive deposits on U-tubes in steam generators constitute a major danger as they may block the cooling loop. To detect these deposits, eddy-current probes are introduced inside the U-tubes to generate currents and measure back an impedance signal. Based on earlier work on this subject, we develop a shape optimization technique with regularized gradient descent to invert these measurements and recover the deposit shape. To deal with the unknown, and possibly complex, topological nature of the latter, we propose to model it using a level set function. The methodology is first validated on synthetic axisymmetric configurations, and fast convergence is ensured by careful adaptation of the gradient steps and regularization parameters. We then consider a more realistic modeling that incorporates the support plate and the presence of imperfections on the tube interior section. We employ in particular an asymptotic model to take these imperfections into account and treat them as additional unknowns in our inverse problem. A multi-objective optimization strategy, based on the use of different operating frequencies, is then developed to solve this problem. Various numerical experiments with synthetic data demonstrate the viability of our approach.
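The level set representation used here can be illustrated with a minimal sketch (Python, a 2D circle rather than a deposit geometry; the grid resolution, radius and erosion speed are illustrative assumptions). The shape is the region where phi < 0; for a signed distance function, adding a constant V*t to phi erodes the boundary at speed V, the simplest instance of the descent update phi <- phi + tau * V * |grad phi|:

```python
import math

def area(phi, h):
    # Crude area estimate: count grid cells whose center lies inside
    # the zero level set (phi < 0).
    return sum(h * h for row in phi for v in row if v < 0.0)

N, half = 200, 1.0
h = 2 * half / N
xs = [-half + (i + 0.5) * h for i in range(N)]
R = 0.6
# Signed distance to a circle of radius R: the shape is { phi < 0 }.
phi = [[math.hypot(x, y) - R for y in xs] for x in xs]

V, t = 0.1, 1.0
# For a signed distance (|grad phi| = 1), phi <- phi + V*t shrinks the
# circle to radius R - V*t without any explicit boundary tracking.
phi_t = [[v + V * t for v in row] for row in phi]
```

In the actual inversion, V would be the shape gradient of the impedance misfit, and phi would periodically be re-initialized to a signed distance function; the implicit representation is what lets the deposit change topology freely.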
A.-S. Bonnet-Ben Dhia, L. Chesnel and V. Pagneux
We consider the reflection-transmission problem in a waveguide with obstacle. At certain frequencies, for some incident waves, intensity is perfectly transmitted and the reflected field decays exponentially at infinity. We show that such reflectionless modes can be characterized as eigenfunctions of an original non-selfadjoint spectral problem. In order to select ingoing waves on one side of the obstacle and outgoing waves on the other side, we use complex scalings (or Perfectly Matched Layers) with imaginary parts of different signs. We prove that the real eigenvalues of the obtained spectrum correspond either to trapped modes (or bound states in the continuum) or to reflectionless modes. Interestingly, complex eigenvalues also contain useful information on weak reflection cases. When the geometry has certain symmetries, the new spectral problem enters the class of
L. Audibert, L. Chesnel and H. Haddar
We are interested in the problem of retrieving information on the refractive index
L. Chesnel and V. Pagneux
We consider the propagation of waves in a waveguide with Neumann boundary conditions. We work at low wavenumber, focusing our attention on the monomode regime. We assume that the waveguide is symmetric with respect to an axis orthogonal to the longitudinal direction and is endowed with a branch of height L whose width coincides with the wavelength of the propagating modes. In this setting, tuning the parameter L, we prove the existence of simple geometries where the transmission coefficient is equal to one (perfect invisibility). We also show that these geometries, for possibly different values of L, support so-called trapped modes (nonzero finite-energy solutions of the homogeneous problem) associated with eigenvalues embedded in the continuous spectrum.
F. Cakoni, H. Haddar and L. Audibert
We developed a general mathematical framework to determine interior eigenvalues from a knowledge of the modified far field operator associated with an unknown (anisotropic) inhomogeneity. The modified far field operator is obtained by subtracting from the measured far field operator the computed far field operator corresponding to a well-posed scattering problem depending on one (possibly complex) parameter. Injectivity of this modified far field operator is related to an appropriate eigenvalue problem whose eigenvalues can be determined from the scattering data, and thus can be used to obtain information about the material properties of the unknown inhomogeneity. We discuss here two examples of such modifications, leading to a Steklov eigenvalue problem and a new type of transmission eigenvalue problem. We present some numerical examples demonstrating the viability of our method for determining the interior eigenvalues from far field data.
H. Boujlida, H. Haddar and M. Khenissi
We consider the transmission eigenvalue problem for a medium surrounded by a thin layer of inhomogeneous material with a different refractive index. We derive an explicit asymptotic expansion for the transmission eigenvalues with respect to the thickness of the thin layer. We prove an error estimate for the asymptotic expansion up to order 1 for simple eigenvalues. This expansion can be used to obtain explicit expressions in the case of a constant index of refraction.
H. Haddar and S. Meng
We consider the transmission eigenvalue problem for Maxwell's equations corresponding to non-magnetic inhomogeneities with contrast in electric permittivity that has fixed sign (only) in a neighborhood of the boundary. Following the analysis made by Robbiano in the scalar case, we study this problem in the framework of semiclassical analysis and relate the transmission eigenvalues to the spectrum of a Hilbert-Schmidt operator. Under the additional assumption that the contrast is constant in a neighborhood of the boundary, we prove that the set of transmission eigenvalues is discrete, infinite and without finite accumulation points. A notion of generalized eigenfunctions is introduced and a denseness result is obtained in an appropriate solution space.
L. Chesnel, S.A. Nazarov
We investigate a time-harmonic wave problem in a waveguide. By means of asymptotic analysis techniques, we justify the so-called Fano resonance phenomenon. More precisely, we show that the scattering matrix considered as a function of a geometrical parameter
L. Chesnel, V. Pagneux
We consider a time-harmonic scattering wave problem in a 2D waveguide at wavenumber k such that one mode is propagating in the far field. For a given k, playing with one scattering branch of finite length, we demonstrate how to construct geometries with zero transmission. The main novelty in this result is that the symmetry of the geometry is not needed: the proof relies on the unitary structure of the scattering matrix. Then, from a waveguide with zero transmission, we show how to build geometries supporting trapped modes associated with eigenvalues embedded in the continuous spectrum. For this second construction, using the augmented scattering matrix and its unitarity, we play both with the geometry and the wavenumber. The mathematical analysis is supplemented by numerical illustrations of the results.
G. Allaire and L. Jakabcin.
We introduce a model and several constraints for shape and topology optimization of structures built by additive manufacturing techniques. The goal of these constraints is to take into account the thermal residual stresses or the thermal deformations generated by processes like Selective Laser Melting, right from the beginning of the structural design optimization. In other words, the structure is optimized concurrently for its final use and for its behavior during the layer-by-layer production process. It is well known that metallic additive manufacturing generates very high temperatures and heat fluxes, which in turn yield thermal deformations that may prevent the coating of a new powder layer, or thermal residual stresses that may degrade the mechanical properties of the final design. Our proposed constraints are targeted to avoid these undesired effects. Shape derivatives are computed by an adjoint method and are incorporated into a level set numerical optimization algorithm. Several 2-d and 3-d numerical examples demonstrate the interest and effectiveness of our approach.
G. Allaire, P. Geoffroy-Donders and O. Pantz
This work is concerned with the topology optimization of structures made of periodically perforated material, where the microscopic periodic cell can be macroscopically modulated and oriented. The main idea is to optimize the homogenized formulation of this problem, which is an easy task of parametric optimization, then to project the optimal microstructure at a desired lengthscale, which is a delicate issue, albeit computationally cheap. The main novelty of our work is, in a plane setting, the conformal treatment of the optimal orientation of the microstructure. In other words, although the periodicity cell has varying parameters and orientation throughout the computational domain, the angles between its members or bars are conserved. The main application of our work is the optimization of so-called lattice materials, which are becoming increasingly popular in the context of additive manufacturing. Several numerical examples are presented for single- and multiple-load problems, as well as for compliance or more general objective functions.
G. Allaire, F. Feppon, F. Bordeu, J. Cortial and C. Dapogny
Hadamard's method of shape differentiation is applied to the topology optimization of a weakly coupled three-physics problem. The coupling is weak because the equations involved are solved consecutively: first the steady-state Navier-Stokes equations in the fluid domain, second the convection-diffusion equation in the whole domain, and third the linear thermo-elasticity system in the solid domain. Shape sensitivities are derived in a fully Lagrangian setting, which allows us to obtain shape derivatives of general objective functions. Emphasis is given to the derivation of the adjoint interface condition dual to the one of equality of the normal stresses at the fluid-solid interface. The arguments leading to this surprising condition are detailed on a simplified scalar problem. Numerical test cases are presented using a level set mesh evolution method. It is demonstrated how the implementation enables one to treat a variety of shape optimization problems.
G. Allaire and B. Bogosel
In additive manufacturing processes, support structures are often required to ensure the quality of the final built part. In this article we present mathematical models and their numerical implementations in an optimization loop, which allow us to design optimal support structures. Our models are derived with the requirement that they should be as simple as possible, computationally cheap, and yet based on realistic physical modeling. Supports are optimized with respect to two different physical properties. First, they must support overhanging regions of the structure to improve the stiffness of the supported structure during the building process. Second, supports can help in channeling the heat flux produced by the source term (typically a laser beam), thus improving the cooling down of the structure during the fabrication process. Of course, more involved constraints or manufacturability conditions could be taken into account, most notably removal of supports. Our work is just a first step, proposing a general framework for support optimization. Our optimization algorithm is based on the level set method and on the computation of shape derivatives by the Hadamard method. In a first approach, only the shape and topology of the supports are optimized, for a given and fixed structure. In a second and more elaborate strategy, both the supports and the structure are optimized, which amounts to a specific multiphase optimization problem. Numerical examples are given in 2-d and 3-d.
G. Allaire, J.Martinez-Frutos, C. Dapogny, F. Periago
Porosity is a well-known phenomenon occurring during various manufacturing processes (casting, welding, additive manufacturing) of solid structures, which undermines their reliability and mechanical performance. The main purpose of this article is to introduce a new constraint functional of the domain which controls the negative impact of porosity on elastic structures in the framework of shape and topology optimization. The main ingredient of our modelling is the notion of topological derivative, which is used in a slightly unusual way: instead of being an indicator of where to nucleate holes in the course of the optimization process, it is a component of a new constraint functional which assesses the influence of pores on the mechanical performance of structures. The shape derivative of this constraint is calculated and incorporated into a level set based shape optimization algorithm. Our approach is illustrated by several two- and three-dimensional numerical experiments of topology optimization problems constrained by a control on the porosity effect.
L. Bourgeois, L. Chesnel, S. Fliss
We study the propagation of elastic waves in the time-harmonic regime in a waveguide which is unbounded in one direction and bounded in the two other (transverse) directions. We assume that the waveguide is thin in one of these transverse directions, which leads us to consider a Kirchhoff-Love plate model in a locally perturbed 2D strip. For time-harmonic scattering problems in unbounded domains, well-posedness does not hold in a classical setting and it is necessary to prescribe the behaviour of the solution at infinity. This is challenging for the model that we consider and constitutes our main contribution. Two types of boundary conditions are considered: either the strip is simply supported or the strip is clamped. The two boundary conditions are treated with two different methods. For the simply supported problem, the analysis is based on a Hilbert basis property in the transverse section. For the clamped problem, this property does not hold. Instead we adopt Kondratiev's approach, based on the use of the Fourier transform in the unbounded direction, together with techniques of weighted Sobolev spaces with detached asymptotics. After introducing radiation conditions, the corresponding scattering problems are shown to be well-posed in the Fredholm sense. We also show that the solutions are the physical (outgoing) solutions in the sense of the limiting absorption principle.
G. Allaire, A. Lamacz and J. Rauch
This work examines the accuracy for large times of asymptotic expansions from periodic homogenization of wave equations.
As usual,
D. V. Nguyen, J. Jansson, J. Hoffman and J.-R. Li.
The Bloch-Torrey equation describes the evolution of the spin (usually water proton) magnetization under the influence of applied magnetic field gradients and is commonly used in numerical simulations for diffusion MRI and NMR. Microscopic heterogeneity inside the imaging voxel is modeled by interfaces inside the simulation domain, where a discontinuity in the magnetization across the interfaces is produced via a permeability coefficient on the interfaces. To avoid having to simulate on a computational domain that is the size of an entire imaging voxel, which is often much larger than the scale of the microscopic heterogeneity as well as the mean spin diffusion displacement, smaller representative volumes of the imaging medium can be used as the simulation domain. In this case, the exterior boundaries of a representative volume either must be far away from the initial positions of the spins or suitable boundary conditions must be found to allow the movement of spins across these exterior boundaries.
Many approaches have been taken to solve the Bloch-Torrey equation, but an efficient high performance computing framework is still missing. In this paper, we present formulations of the interface as well as the exterior boundary conditions that are computationally efficient and suitable for arbitrary order finite elements and parallelization. In particular, the formulations are based on the partition of unity concept, which allows for a discontinuous solution across interfaces conforming with the mesh with weak enforcement of real (in the case of interior interfaces) and artificial (in the case of exterior boundaries) permeability conditions, as well as an operator splitting for the exterior boundary conditions. The method is straightforward to implement and is available in FEniCS for moderate-scale simulations and in FEniCS-HPC for large-scale simulations. The order of accuracy of the resulting method is validated in numerical tests and good scalability is shown for the parallel implementation. We show that the simulated dMRI signals offer good approximations to reference signals in cases where the latter are available, and we performed simulations for a realistic model of a neuron to show that the method can be used for complex geometries.
D. V. Nguyen, J. Jansson, H. T. A. Tran, J. Hoffman and J.-R. Li.
The Bloch-Torrey partial differential equation describes the evolution of the transverse magnetization of the imaged sample under the influence of diffusion-encoding magnetic field gradients inside the MRI scanner. The integral of the magnetization inside a voxel gives the simulated diffusion MRI signal. This paper proposes a finite element discretization on manifolds in order to simulate the diffusion MRI signal in domains that have a thin layer or a thin tube geometrical structure. Suppose that the three-dimensional domain has a thin layer structure: points in the domain can be obtained by starting on the two-dimensional manifold and moving along a depth (thickness) function. For this type of domains, we propose a finite element discretization formulated on a surface triangulation of the manifold. The variable thickness of the domain is included in the weak formulation on the surface triangular elements. A simple modification extends the approach to `thin tube' domains where a manifold in one dimension and a two-dimensional variable cross-section describe the points in the domain. We conducted a numerical study of the proposed approach by simulating the diffusion MRI signals from the extracellular space (a thin layer medium) and from neurons (a thin tube medium), comparing the results with the reference signals obtained using a standard three-dimensional finite element discretization. We show good agreement between the simulated signals using our proposed method and the reference signals. The approximation becomes better as the diffusion time increases. The method helps to significantly reduce the required simulation time, computational memory, and difficulties associated with mesh generation, thus opening the possibilities to simulating complicated structures at low cost for a better understanding of diffusion MRI in the brain.
K. V. Nguyen, D. Le Bihan, L. Ciobanu and J.-R. Li
The nerve cells of the Aplysia are much larger than mammalian neurons. Using the Aplysia ganglia to study the relationship between the cellular structure and the diffusion MRI signal can potentially shed light on this relationship for more complex organisms. We measured the dMRI signal of chemically-fixed abdominal ganglia of the Aplysia at several diffusion times. At the diffusion times measured, and at low b-values, the dMRI signal is mono-exponential and can be accurately represented by the Apparent Diffusion Coefficient (ADC).
We performed numerical simulations of water diffusion for three types of cells in the abdominal ganglia: the large cell neurons, the bag cells, and the nerve cells. For the bag cells and nerve cells, we created spherical and cylindrical geometrical configurations that are consistent with known information about the cellular structures from the literature. We used the simulation results to obtain information about the intrinsic diffusion coefficient in these cells.
For the large cell neurons, we created geometrical configurations by segmenting high resolution T
Then, using the analytical short time approximation (STA) formula for the ADC, we showed that in order to explain the experimentally observed behavior in the large cell neurons, it is necessary to consider the nucleus and the cytoplasm as two separate diffusion compartments. By using a two-compartment STA model, we were able to illustrate the effect of the highly irregular shape of the cell nucleus on the ADC.
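The role of the STA formula can be sketched as follows (a hedged illustration with made-up parameter values, not the Aplysia measurements; the Mitra short-time expansion ADC(t) = D0 * [1 - 4/(3*sqrt(pi)) * (S/V) * sqrt(D0*t)] is standard, and the two-compartment version is simply a volume-fraction-weighted sum):

```python
import math

def sta_adc(D0, s_over_v, t):
    """Mitra short-time approximation of the ADC in a restricted compartment.

    D0 is the intrinsic diffusivity (e.g. um^2/ms), s_over_v the
    surface-to-volume ratio (1/um), t the diffusion time (ms).
    """
    return D0 * (1.0 - 4.0 / (3.0 * math.sqrt(math.pi))
                 * s_over_v * math.sqrt(D0 * t))

def two_compartment_adc(D_nuc, D_cyt, f_nuc, sv_nuc, sv_cyt, t):
    """Volume-fraction-weighted ADC of nucleus + cytoplasm compartments."""
    return (f_nuc * sta_adc(D_nuc, sv_nuc, t)
            + (1.0 - f_nuc) * sta_adc(D_cyt, sv_cyt, t))

# Illustrative (assumed) values: nucleus D=1.3, cytoplasm D=0.7 um^2/ms,
# nucleus volume fraction 0.4, S/V ratios 0.1 and 0.06 1/um, t = 20 ms.
adc = two_compartment_adc(1.3, 0.7, 0.4, 0.1, 0.06, 20.0)
```

The key qualitative point of the paper is visible here: with a single compartment the ADC decays with sqrt(t) at one rate, whereas two compartments with different D0 and S/V produce the richer time dependence observed experimentally.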
H. Haddar, M. Kchaou and M. Moakher
We use homogenization theory to establish a new macroscopic model for the complex transverse water proton magnetization in a voxel due to diffusion-encoding magnetic field gradient pulses in the case of biological tissue with impermeable membranes. In this model, new higher-order diffusion tensors emerge and offer more information about the structure of the biological tissues. We explicitly solve the macroscopic model to obtain an ordinary differential equation for the diffusion MRI signal that has a similar structure to diffusional kurtosis imaging models. We finally present some validating numerical results on synthetic examples showing the accuracy of the model with respect to signals obtained by solving the Bloch-Torrey equation.
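The "kurtosis-like" structure of the signal model can be made concrete with a small sketch (Python; the cumulant form ln S(b) = -b*D + K*D^2*b^2/6 is the standard DKI parameterization, and the b-values and tissue parameters below are illustrative assumptions, not outputs of the homogenized model):

```python
import math

def signal(b, D, K):
    """Kurtosis-type signal model: ln S(b) = -b*D + (K*D^2/6)*b^2."""
    return math.exp(-b * D + K * D * D * b * b / 6.0)

def fit_dki(b1, S1, b2, S2):
    """Recover (D, K) from two noiseless b-values.

    With c = K*D^2/6 the model is linear in (D, c):
        [ -b1  b1^2 ] [D]   [ln S1]
        [ -b2  b2^2 ] [c] = [ln S2]
    solved here by Cramer's rule.
    """
    y1, y2 = math.log(S1), math.log(S2)
    det = -b1 * b2 * b2 + b2 * b1 * b1   # = b1*b2*(b1 - b2)
    D = (y1 * b2 * b2 - y2 * b1 * b1) / det
    c = (b2 * y1 - b1 * y2) / det
    K = 6.0 * c / (D * D)
    return D, K
```

Two noiseless measurements determine both parameters exactly, which is what makes the ODE signal model convenient for comparison with standard DKI fits.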
J.R. Li, H. Haddar and I. Mekkaoui
We performed simulations for a collaborative project with Demian Wassermann of the Parietal team on distinguishing between Spindle and pyramidal neurons with Multi-shell Diffusion MRI.
We continue in the simulation and modeling of heart diffusion MRI with the post-doc project of Imen Mekkaoui, funded by Inria-EPFL lab. The project is co-supervised with Jan Hesthaven, Chair of Computational Mathematics and Simulation Science (MCSS), EPFL.
J. R. Li and J. Hao
This is the start of a collaborative effort between the Defi team and Dr. Hassan Rahioui at the Centre Hospitalier Sainte-Anne and the Université Paris Diderot.
We started a new research direction in algorithm and software development for the analysis and classification of EEG measurements during the administration of neuropsychological tests for AD/HD, with the PhD project of Jingjing Hao, co-supervised with Dr. Hassan Rahioui, head of the psychiatric division of the 7th arrondissement of Paris attached to the Centre Hospitalier Sainte-Anne.
Result: unfortunately Jingjing Hao will not be able to continue with this PhD project as of Jan 2019. We will modify the project in consultation with Dr. Rahioui and continue it in another format.
P.M. Congedo, M. Rivier
This work is devoted to tackling constrained multi-objective optimisation problems under uncertainty. In particular, the SABBa (Surrogate-Assisted Bounding-Box approach) framework is applied and extended to handle both robust and reliability-based constrained optimisation problems. This approach aims at efficiently dealing with uncertainty-based optimisation problems with approximated robustness and reliability measures. A Bounding-Box (or conservative box) is defined as a multi-dimensional product of intervals centred on the approximated objectives and constraints and containing the underlying true values. In SABBa, this approach is supplemented with a Surrogate-Assisting strategy, which is very effective in reducing the overall computational cost, notably during the last iterations of the optimisation. The efficiency of the method is further increased by using the concept of Pareto Optimal Probability (POP) computed for each box, and by proposing estimates for conservative error computation and box refinement using a Gaussian Process (GP).
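The Bounding-Box and POP ideas can be sketched as follows (a toy Monte Carlo illustration under the assumption that the true objective values are uniformly distributed inside their conservative boxes; minimization of all objectives is assumed, and this is a simplified stand-in, not the SABBa estimator):

```python
import random

# A box is a pair (lo, hi) of corner tuples in objective space (minimization).
def certainly_dominates(box_a, box_b):
    """Worst corner of A dominates best corner of B => A surely dominates B."""
    return (all(a_hi <= b_lo for a_hi, b_lo in zip(box_a[1], box_b[0]))
            and box_a[1] != box_b[0])

def pareto_optimal_probability(boxes, idx, n_samples=2000, seed=0):
    """Monte Carlo POP: sample one point per box and count how often
    the chosen box's sample is non-dominated among all samples."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_samples):
        pts = [tuple(rng.uniform(lo, hi) for lo, hi in zip(b[0], b[1]))
               for b in boxes]
        p = pts[idx]
        dominated = any(all(q_k <= p_k for q_k, p_k in zip(q, p)) and q != p
                        for j, q in enumerate(pts) if j != idx)
        wins += not dominated
    return wins / n_samples
```

A box that is certainly dominated gets POP 0 and can be discarded; overlapping boxes get intermediate POP values, which is the quantity SABBa uses to decide which boxes deserve refinement.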
P.M. Congedo, N. Razaaly
This study presents an original and fast robust shape optimization approach to overcome the limitations of a deterministic optimization that neglects operating-condition variability, applied to a typical 2D ORC turbine cascade (Biere). The flow around the blade is solved by means of an inviscid simulation using the open-source SU2 code, considering non-ideal gas effects modeled through the Peng-Robinson-Stryjek-Vera equation of state, from which a Quantity of Interest (QoI) is recovered. We propose here a mono-objective formulation consisting in minimizing the
P.M. Congedo, A. Cortesi, G. El Jannoun
In this work, an algorithm for the construction of a low-cost and accurate metamodel is proposed, having in mind computationally expensive applications. It has two main features. First, Universal Kriging is coupled with sparse Polynomial Dimensional Decomposition (PDD) to build a metamodel with improved accuracy. The polynomials selected by the adaptive PDD representation are used as a sparse basis to build a Universal Kriging surrogate model. Second, a numerical method, derived from anisotropic mesh adaptation, is formulated in order to adaptively insert a fixed number of new training points into an existing Design of Experiments. The convergence of the proposed algorithm is analyzed and assessed on different test functions with an increasing size of the input space. Finally, the algorithm is used to propagate uncertainties in two high-dimensional real problems related to atmospheric reentry.
P.M. Congedo, N. Razaaly
The calculation of tail probabilities is of fundamental importance in several domains, such as risk assessment. One major challenge is the computation of low failure probabilities in cases characterized by multiple failure regions, especially when an unbiased estimation of the error is required. Methods developed in the literature rely mostly on the construction of an adaptive surrogate, tackling some problems such as the metamodel building criterion and the global computational cost, at the price of a generally biased estimation of the failure probability. In this work, we propose a novel algorithm suitable for low failure probabilities and multiple failure regions, permitting both the construction of an accurate metamodel and a statistically consistent error estimate. Indeed, an importance sampling technique is used which is quasi-optimal since, by exploiting the knowledge of the metamodel, it provides two unbiased estimators of the failure probability. Additionally, a Gaussian mixture-based importance sampling technique is proposed, which drastically reduces the computational cost when estimating some reference values, or the failure probability directly from the metamodel. Several numerical examples are carried out, showing the very good performance of the proposed method with respect to the state of the art in terms of accuracy and computational cost. A physical test case, focused on the numerical simulation of non-ideal gas turbine cascades, is also investigated to illustrate the capabilities of the method on an industrial case.
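The unbiased importance sampling idea can be sketched in a scalar setting (a minimal illustration with a single Gaussian proposal centred on the failure threshold, rather than the paper's metamodel-driven Gaussian mixture; the failure event X > 3 for a standard normal X is an assumed stand-in limit state):

```python
import math, random

def failure_prob_is(limit=3.0, n=20000, seed=1):
    """Importance sampling estimate of P[X > limit] for X ~ N(0, 1),
    using the shifted proposal N(limit, 1).

    Each sample contributes the likelihood ratio phi(x)/phi(x - limit)
    when it falls in the failure region, which keeps the estimator unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(limit, 1.0)            # draw from the proposal
        if x > limit:                        # failure indicator
            w = math.exp(-0.5 * x * x + 0.5 * (x - limit) ** 2)
            total += w
    return total / n

# Exact reference: P[X > 3] = 0.5 * erfc(3 / sqrt(2)), about 1.35e-3.
```

Crude Monte Carlo would need millions of samples to resolve a probability of this order; the shifted proposal places roughly half the samples in the failure region, which is the variance-reduction mechanism the Gaussian mixture generalizes to multiple failure regions.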
P.M. Congedo, F. Sanson, O. Le Maitre
The simulation of complex multi-physics phenomena often requires the use of coupled solvers, modelling different physics (fluids, structures, chemistry, etc.) with largely differing computational complexities. We call a System of Solvers (SoS) a set of interdependent solvers where the output of an upstream solver can be the input of a downstream solver. In this work we restrict ourselves to weakly coupled problems. A system of solvers typically encapsulates a large number of uncertain input parameters, challenging classical Uncertainty Quantification (UQ) methods, such as spectral expansions and Gaussian process models, which are affected by the curse of dimensionality. In this work, we develop an original mathematical framework, based on Gaussian Processes (GP), to construct a global metamodel of the uncertain SoS that can be used to solve forward and backward UQ problems. The key idea of the proposed approach is to determine a local GP model for each solver of the SoS. These local GP models are built adaptively to satisfy criteria based on the global output error estimation, which can be decomposed (following an ANOVA-like decomposition) into contributions from individual GP models. This decomposition enables one to select the local GP models that need to be refined to efficiently reduce the global error, using computer experiment design methods or Bayesian optimization. This framework is then applied to a space object reentry problem.
P.M. Congedo, G. Gori, A. Guardone, M. Zocca
The first-ever experimental validation of flow simulation software for Non-Ideal Compressible-Fluid Dynamics (NICFD) is presented.
Numerical results from the open-source suite SU2 are compared against pressure and Mach number measurements of supersonic flows of the siloxane fluid MDM (Octamethyltrisiloxane).
P.M. Congedo, N. Razaaly, G. Persico
Typical energy sources for Organic Rankine Cycle (ORC) power systems feature variable heat loads and turbine inlet/outlet thermodynamic conditions. The use of organic compounds with heavy molecular weight introduces uncertainties in the fluid thermodynamic modeling. In addition, the peculiarities of organic fluids typically lead to turbine configurations featuring supersonic flows and shocks, which grow in relevance in the aforementioned off-design conditions; these features also depend strongly on the local blade shape, which can be influenced by the geometric tolerances of the blade manufacturing. This study presents an Uncertainty Quantification (UQ) analysis of a typical supersonic nozzle cascade for ORC applications, by considering a two-dimensional high-fidelity turbulent Computational Fluid Dynamics (CFD) model. Kriging-based techniques are used in order to take into account, at a low computational cost, the combined effect of the uncertainties associated with operating conditions, fluid parameters, and geometric tolerances. The geometric variability is described by a finite Karhunen-Loève expansion representing a non-stationary Gaussian random field, entirely defined by a null mean and its autocorrelation function. Several results are presented on the ANOVA decomposition of several quantities of interest for different operating conditions, showing the importance of geometric uncertainties for the turbine performance.
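A truncated Karhunen-Loève expansion of a zero-mean Gaussian field can be sampled by diagonalizing the discretized autocovariance; the grid, amplitude, and correlation length below are arbitrary illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized autocovariance of a zero-mean Gaussian perturbation field along
# a blade profile (amplitude 0.01, correlation length 0.2 -- assumed values).
s = np.linspace(0.0, 1.0, 200)                  # curvilinear abscissa
C = 0.01**2 * np.exp(-0.5 * ((s[:, None] - s[None, :]) / 0.2) ** 2)

lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]              # eigenpairs in decreasing order
m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1  # 99% of the variance

xi = rng.standard_normal(m)                     # independent standard normals
perturb = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)  # one field realization
```

Only the m retained modes enter the UQ study, so the geometric variability is parameterized by a small vector xi instead of a full random field.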
P.M. Congedo, F. Sanson, O. Le Maitre, J.-M. Bouilly, C. Bertorello
The prediction of the risk associated with the reentry of a man-made space object is critical but subject to input parameter uncertainties. To compute the risk, one needs to determine whether the object survives reentry and, if it does, where it falls on Earth. Expensive numerical models can be used to answer both questions, but they can only be evaluated a limited number of times to propagate the uncertainties. In this work, we present an original approach to construct an accurate surrogate model of the numerical models using a limited number of solver evaluations. Using Gaussian Processes, the constructed surrogate model is able to answer both questions (survivability and impact location) in order to provide an accurate description of the risk. The surrogate model can achieve a high level of accuracy in terms of risk estimation using dedicated active learning strategies. The efficiency of the method is illustrated on analytical test cases and an actual space object reentry case.
A CIFRE PhD thesis started in December 2015 with Safran Tech. The student is Mrs Perle Geoffroy who is working on "topology optimization by the homogenization method in the context of additive manufacturing".
A CIFRE PhD thesis started in April 2017 with Safran Tech. The student is M. Florian Feppon who is working on "topology optimization for a coupled thermal-fluid-structure system".
A CIFRE PhD thesis started in October 2017 with Renault. The student is Mrs Lalaina Rakotondrainibe who is working on "topology optimization of connections between mechanical parts".
A CIFRE PhD thesis started in November 2017 with EDF. The student is H. Girardon who is working on "level set methods for eddy current non-destructive testing".
A CIFRE PhD thesis started in May 2017 with ArianeGroup. The student is M. Mickael Rivier who is working on "optimization under uncertainty methods for expensive computer codes".
A CIFRE PhD thesis started in November 2018 with CEA CESTA. The student is M. Paul Novello who is working on "deep learning for atmospheric reentry".
The SOFIA project (SOlutions pour la Fabrication Industrielle Additive métallique) started in the summer of 2016. Its purpose is to conduct research in the field of metallic additive manufacturing. The industrial partners include Michelin, FMAS, ESI, Safran and others. The academic partners are different laboratories of CNRS, including CMAP at Ecole Polytechnique. The project is funded for 6 years by BPI (Banque Publique d'Investissement).
G. Allaire is participating in the TOP project at IRT SystemX, which started in February 2017. It is concerned with the development of a topology optimization platform with industrial partners (Renault, Safran, Airbus, ESI).
FUI project Saxsize. This three-year project started in October 2015 and was extended until April 2019; it involves Xenocs (coordinator), Inria (DEFI), Pyxalis, LNE, Cordouan and CEA. It is a follow-up of Nanolytix, with a focus on SAXS quantification of dense nanoparticle solutions.
Contract with THALES, Activity around the numerical certification of debris codes, Coordinator: P.M. Congedo.
Contract with ArianeGroup, Activity around techniques for Uncertainty Quantification, Coordinator: P.M. Congedo.
Title : Virtual prototyping of EVE engines
Type : Co-funded from Region Aquitaine and Inria
Duration : 36 months
Starting : October 2018
Coordinator : P.M. Congedo
Abstract : The main objective of this thesis is the construction of a numerical platform permitting efficient virtual prototyping of the EVE expander. This will provide EXOES with a numerical tool that is much more predictive than the tools currently available and used at EXOES, while respecting an optimal trade-off in terms of the complexity/cost allowed during an industrial design process. Two research axes will be developed. First, the objective is to perform highly predictive numerical simulations in order to reduce the amount of experiments, thanks to a specific development of RANS (Reynolds-Averaged Navier-Stokes) tools for the fluids of interest to EXOES. These tools rely on complex thermodynamic models and a turbulence model that should be modified. The second axis is focused on the integration of solvers of different fidelities in a multi-fidelity platform for performing optimization under uncertainties. The idea is to evaluate the system performance by massively using the low-fidelity models, and to correct these estimations via only a few calculations with the high-fidelity code.
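The multi-fidelity correction described in the second axis can be sketched as a simple control-variate (additive-correction) estimator of a mean output; the model functions and sample sizes below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

f_hi = lambda x: np.sin(3 * x) + 0.05 * x ** 2  # stand-in for the expensive code
f_lo = lambda x: np.sin(3 * x)                  # cheap low-fidelity model

# Massive low-fidelity sampling, corrected by a handful of high-fidelity runs:
# E[f_hi] ≈ mean(f_lo over many points) + mean(f_hi - f_lo over few points).
x_many = rng.uniform(0.0, 1.0, 100_000)
x_few = rng.uniform(0.0, 1.0, 50)
estimate = f_lo(x_many).mean() + (f_hi(x_few) - f_lo(x_few)).mean()
```

The correction term has low variance whenever the low-fidelity model tracks the high-fidelity one, which is exactly what makes the few expensive evaluations sufficient.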
Program: H2020 MSCA-ITN
Project acronym: UTOPIAE
Project title: Handling the unknown at the edge of tomorrow
Duration: January 2017- December 2020
Coordinator: M. Vasile (Strathclyde University)
Other partners: see http://
UTOPIAE is a European research and training network looking at cutting-edge methods bridging optimisation and uncertainty quantification applied to aerospace systems. The network runs from 2017 to 2021 and is funded by the European Commission through the Marie Skłodowska-Curie Actions of H2020. The network is made up of 15 partners across 6 European countries, including the UK, and one international partner in the USA, bringing together mathematicians, engineers and computer scientists from academia, industry, and the public and private sectors.
Mission statement : To train, by research and by example, 15 Early Stage Researchers in the field of uncertainty quantification and optimisation to become leading independent researchers and entrepreneurs who will increase the innovation capacity of the EU. To equip the researchers with the skills they will need for successful careers in academia and industry. To develop fundamental mathematical methods and algorithms to bridge the gap between Uncertainty Quantification and Optimisation, and between Probability Theory and Imprecise Probability Theory, in order to efficiently solve high-dimensional, expensive and complex engineering problems.
P.M. Congedo is the Inria Coordinator of the CWI-Inria International Lab.
IIL CWI-Inria
Associate Team involved in the International Lab:
Title: Computational Methods for Uncertainties in Fluids and Energy Systems
International Partner (Institution - Laboratory - Researcher):
CWI (Netherlands) - Scientific Computing Group - Daan Crommelin
Start year: 2017
See also: https://project.inria.fr/inriacwi/projects/communes/
This project aims to develop numerical methods capable of efficiently taking into account unsteady experimental data, synthetic data coming from numerical simulations, and the global amount of uncertainty associated with measurements and physical-model parameters. We aim to propose novel algorithms combining data-inferred stochastic modeling, uncertainty propagation through computer codes, and data assimilation techniques. The applications of interest are both related to the exploitation of renewable energy sources: wind farms and solar Organic Rankine Cycles (ORCs).
University of Zurich : R. Abgrall. Collaboration on high order adaptive methods for CFD and uncertainty quantification.
Politecnico di Milano, Aerospace Department (Italy) : Pr. A. Guardone. Collaboration on ALE for complex flows (compressible flows with complex equations of state).
von Karman Institute for Fluid Dynamics (Belgium). With Pr. T. Magin we work on Uncertainty Quantification problems for the identification of inflow condition of hypersonic nozzle flows.
Rutgers University. Collaboration with Pr. F. Cakoni on transmission eigenvalues.
University of Delaware. Collaboration with Pr. D. Colton on inverse scattering theory.
Ecole Nationale des Ingénieurs de Tunis. Collaboration with Pr. M. Moakher on diffusion MRI.
Fioralba Cakoni, one month, July 15-August 14, 2018
P. Congedo is General Chair of the CWI-Inria workshop at the Inria Paris research centre on September 25-26, 2018.
L. Chesnel co-organized the Journée de rentrée (2018) of the Centre de Mathématiques Appliquées of École Polytechnique.
L. Chesnel co-organizes the seminar of the Centre de Mathématiques Appliquées of École Polytechnique.
L. Chesnel co-organizes the joint seminar of the Inria teams Defi-M3DISIM-Poems.
H. Haddar co-organized a minisymposium at the Waves conference, Karlsruhe, July 2018.
H. Haddar co-organized a minisymposium at the conference Inverse Problems: Modeling and Simulation, Malta, May 2018.
H. Haddar co-organized a minisymposium at the Faculty of DSIT of Ecole Polytechnique, May 2018.
J.R. Li is co-organizer of the summer school "Ecole d'Été France Excellence: Data science for document analysis and understanding", sponsored by the French Embassy in China, July 2018 (4 weeks).
G. Allaire is member of the editorial boards of
Book series "Mathématiques et Applications" of SMAI and Springer,
ESAIM/COCV, Structural and Multidisciplinary Optimization,
Discrete and Continuous Dynamical Systems Series B,
Computational and Applied Mathematics,
Mathematical Models and Methods in Applied Sciences (M3AS),
Annali dell'Universita di Ferrara,
OGST (Oil and Gas Science and Technology),
Journal de l'Ecole Polytechnique - Mathématiques,
Journal of Optimization Theory and Applications.
P.M. Congedo is Editor of Mathematics and Computers in Simulation, MATCOM (Elsevier).
H. Haddar is
Member of the editorial board of Inverse Problems
Associate Editor of the SIAM Journal on Scientific Computing
Guest editor for a special issue of the journal Inverse Problems
We reviewed papers for top international journals in the main scientific themes of the team.
G. Allaire
Seminar at BCAM, Bilbao (January 2018).
JOFA seminar, Pau (June 2018).
ECCM-ECFD, ECCOMAS, Glasgow (June 2018).
Summer school, Sendai, Japan (August 2018).
Fifth workshop on thin structures, Naples, September 13-15, 2018.
Current trends and open problems in computational solid mechanics, Hannover, October 8-9, 2018.
L. Chesnel
Conference on Mathematics of Wave Phenomena, July 2018.
SIAM conference on imaging science, Bologna, June 2018.
Inverse problems: modeling and simulation, Malta, May 2018.
Workshop du GDR ondes, Jussieu, March 2018.
P.M. Congedo
von Karman Institute Symposium, Brussels, April 2018.
CERFACS seminar, Toulouse, October 2018.
VKI Lecture Series on Uncertainty Quantification, Brussels, September 2018.
Seminar at CEA-CESTA, Le Barp, September 2018.
H. Haddar
ICAV conference, Hammamet, March 2018.
SIAM conference on imaging science, Bologna, June 2018.
Waves conference, Karlsruhe, July 2018.
Seminar at LJK, Grenoble, October 2018.
Kickoff workshop of Mecawave, November 2018.
Workshop "Inverse Problems: Theory and Applications", Reims, 2018.
G. Allaire is a board member of Institut Henri Poincaré (IHP). He is the chairman of the scientific council of IFPEN (French Petroleum Institute and New Energies). He is the chairman of the scientific council of AMIES (Agency for Interaction in Mathematics with Business and Society).
G. Allaire is a member of the "comité national" of CNRS, section 41 (mathematics).
G. Allaire is a member of the scientific board of the Gaspard Monge program on optimization (PGMO) at the Jacques Hadamard Mathematical Foundation.
J.R. Li is Member of the SIAM Committee on Programs and Conferences 2017-2019.
J.R. Li is an elected member of the Inria Commission d'Evaluation, 2015-present.
J.R. Li is the international correspondent for the Centre de Mathématiques Appliquées, Ecole Polytechnique, 2018-present.
J.R. Li is responsible for the Ecole Polytechnique part of the French-Vietnam Master Program in Applied Mathematics, 2016-present.
Master: Grégoire Allaire, Approximation Numérique et Optimisation, for students in the second year of Ecole Polytechnique curriculum: 8 lessons of 1h30.
Master: Grégoire Allaire, Transport and diffusion, for students in the third year of Ecole Polytechnique curriculum. 9 lessons of 2h jointly with F. Golse.
Master: Houssem Haddar, Approximation Numérique et Optimisation, for students in the second year of Ecole Polytechnique curriculum: 8 TDs of 4h.
Master: Houssem Haddar, Waves and imaging: Concepts, Theory and Applications, Master M2 "mathematical modeling": 9 lessons of 3h.
Master: Lucas Chesnel, Variational analysis for partial differential equations, for students in the second year of Ecole Polytechnique curriculum: 8 TDs of 4h.
Master: Lucas Chesnel, Numerical approximation and optimisation, for students in the second year of Ecole Polytechnique curriculum: 2 TDs of 4h + one project.
Master: Lucas Chesnel, Modal Modélisation mathématique par la démarche expérimentale, for students in the second year of Ecole Polytechnique curriculum: 5 TDs of 2h.
Master: Grégoire Allaire, Optimal design of structures, for students in the third year of Ecole Polytechnique curriculum. 9 lessons of 1h30.
Master: Grégoire Allaire, Theoretical and numerical analysis of hyperbolic systems of conservation laws, Master M2 "mathematical modeling", 8 lessons of 3h.
Master: Jing Rebecca Li, lecturer of the course Mathematical and numerical foundations of modeling and simulation using partial differential equations, French-Vietnam Master in Applied Mathematics, University of Science, Ho Chi Minh City, October 2018 (2 weeks).
Master: P.M. Congedo, Numerical methods in Fluid Mechanics, ENSTA ParisTech, 12 h.
Doctorat: Houssem Haddar, Lecturer at the kickoff workshop of GDR mecawave. Introduction to Inverse problems (2x1h30).
Doctorat: P.M. Congedo, Introduction to Uncertainty Quantification, 12h, Doctorate School of University of Bordeaux, France.
Ph.D. A. Bissuel, Linearized Navier-Stokes equations for optimization, flutter and aeroacoustics (Dassault Aviation, defended in January 2018). G. Allaire.
Ph.D. P. Geoffroy, Topology optimization by the homogenization method in the context of additive manufacturing (defended in December 2018). G. Allaire.
Ph.D. in progress: S. Houbar, Cavitation in the coolant induced by the motion of the assemblies of a reactor (CEA, to be defended in 2020). G. Allaire and G. Campioni.
Ph.D. in progress: M. Boissier, Coupled optimization of shape topology and laser path in additive manufacturing (to be defended in 2020). G. Allaire and Ch. Tournier.
Ph.D. in progress: L. Rakotondrainibe, Optimization of connections between parts in mechanical systems (Renault, to be defended in 2020). G. Allaire.
Ph.D. in progress: F. Feppon, Topology optimization of coupled fluid-solid-thermal systems (Safran, to be defended in 2020). G. Allaire and Ch. Dapogny.
Ph.D. in progress: Q. Feng, Multiscale finite elements for incompressible Navier-Stokes flows in congested media (CEA, to be defended in 2019). G. Allaire and P. Omnes.
Ph.D. in progress: J. Desai, Topology optimization of structures with nonlinear behavior using mesh deformation methods (IRT SystemX, to be defended in 2021). G. Allaire and F. Jouve.
Ph.D. in progress: B. Charfi, Identification of the singular support of a GIBC (to be defended in 2019). H. Haddar and S. Chaabane.
PhD in progress: K. Napal, Transmission eigenvalues and non-destructive testing of concrete-like materials (to be defended in 2019). L. Chesnel, H. Haddar and L. Audibert.
PhD in progress: M. Kchaou, Higher-order homogenization tensors for DMRI modeling (to be defended in 2019). H. Haddar and M. Moakher.
PhD in progress: H. Girardon, Non-destructive testing of PWR tubes using eddy current rotating coils (to be defended in 2021). H. Haddar and L. Audibert.
PhD in progress: J. Hao, Algorithm and software development for the analysis and classification of EEG measurements during the administration of neuropsychological tests for AD/HD, started 2017. J.R. Li and H. Rahioui. PhD stopped in January 2019.
PhD in progress: M. Rihani, Maxwell's equations in the presence of metamaterials (to be defended in 2021), A.-S. Bonnet-BenDhia and L. Chesnel.
PhD in progress: F. Sanson, UQ in systems of solvers for atmospheric reentry (to be defended in July 2019), P.M. Congedo, O. Le Maitre.
PhD in progress: N. Razaaly, Optimization under uncertainties of ORC turbine cascades (to be defended in July 2019), P.M. Congedo.
PhD in progress: M. Rivier, Optimization under uncertainty through a Bounding-Box concept (to be defended in May 2020), P.M. Congedo.
PhD in progress: Joao Reis, Advanced methods for stochastic elliptic PDEs (to be defended in October 2020), P.M. Congedo, O. Le Maitre.
PhD in progress: G. Gori, Bayesian calibration of complex thermodynamic flows (to be defended in January 2019), P.M. Congedo, O. Le Maitre, A. Guardone.
PhD in progress: Anabel Del Val, Advanced bayesian methods for aerospace applications (to be defended in October 2020), P.M. Congedo, O. Le Maitre, O. Chazot, T. Magin.
PhD in progress: J. Carlier, Residual distribution schemes for cavitating two-phase flows (to be defended in October 2019), P.M. Congedo, M. Pelanti, R. Abgrall.
PhD in progress: P. Novello, Deep learning for atmospheric reentry flows (to be defended in November 2021), P.M. Congedo, D. Lugato, G. Poette.
PhD in progress: E. Solai, Virtual Prototyping of the EVE expander (to be defended in October 2021), P.M. Congedo, H. Beaugendre.
P.M. Congedo is Deputy Coordinator of "Maths/Engineering" Program of the Labex Mathématiques Hadamard.
L. Chesnel provided numerical experiments used in the exhibition "Rencontres diffractantes : quand les mathématiques inspirent l'art...". This exhibition was presented at Ensta ParisTech and at Inria Saclay.
P.M. Congedo presented some research activities in aerospace in the context of the "Unithé ou Café" event at the Inria Saclay Île-de-France center.
P.M. Congedo gave a presentation in the context of the Fête de la Science 2018 to several groups of young students (around 10-12 years old).
H. Haddar gave a joint presentation with O. Bunau from Xenocs on nanoparticle imaging using small-angle X-ray diffraction technology in the context of the "Unithé ou Café" event at Inria Saclay Île-de-France.