The focus of our research is the development of novel parallel numerical algorithms and tools for state-of-the-art mathematical models used in complex scientific applications, in particular numerical simulations. The proposed research program is by nature multi-disciplinary, interweaving aspects of applied mathematics and computer science with those of several specific applications, such as porous media flows, elasticity, wave propagation in multi-scale media, molecular simulations, and inverse problems.

Our first objective is to develop numerical methods and tools for complex scientific and industrial applications that enhance their scalable execution on the emerging heterogeneous, hierarchical architectures of massively parallel machines. Our second objective is to integrate the novel numerical algorithms into a middle layer that hides the complexity of massively parallel machines from their users as much as possible.

The research described here is directly relevant to several steps of the numerical simulation chain. Given a numerical simulation expressed as a set of differential equations, our research focuses on mesh generation methods for parallel computation, novel numerical algorithms for linear algebra, and algorithms and tools for their efficient and scalable implementation on high performance computers. The validation and the exploitation of the results are performed with collaborators from the application fields and rely on existing tools. In summary, the topics studied in our group are the following:

Solvers for numerical linear algebra:

domain decomposition methods, preconditioning for iterative methods

In the engineering, research, and teaching communities, there is a
strong demand for simulation frameworks that are simple to install and
use, efficient, and sustainable, and that solve complex problems
efficiently and accurately when no dedicated tools or codes are
available. In our group we develop FreeFem++ (see https://

The current users of FreeFem++ are mathematicians, engineers, university professors, and students. For these users, the installation of public libraries such as MPI, MUMPS, Ipopt, BLAS, LAPACK, OpenGL, FFTW, SCOTCH, PETSc, and SLEPc is in general very difficult. For this reason, the authors of FreeFem++ have created a user-friendly language, and over the years have enriched its capabilities and provided tools for compiling FreeFem++ so that users do not need special knowledge of computer science. This entails substantial work on porting the software to different emerging architectures.

Today, the main components of parallel FreeFem++ are:

All these components are parallel, except for the last point, which is not in the focus of our research. However, for the moment the parallel mesh generation algorithm is very simple and not sufficient; for example, it handles only polygonal geometries. Developing a better parallel mesh generation algorithm is one of the goals of our project. In addition, in the current version of FreeFem++ the parallelism is not hidden from the user: it is expressed through direct calls to MPI. Our goal is also to hide all the MPI calls inside the domain-specific language of FreeFem++. In addition to these in-house domain decomposition methods, FreeFem++ is also linked to the PETSc solvers, which enables easy use of third-party parallel multigrid methods.

Iterative methods are widely used in industrial applications, and preconditioning is the most important research subject here. Our research considers domain decomposition methods and iterative methods; its goal is to develop solvers that are suitable for parallelism and that exploit the fact that the matrices arise from the discretization of a system of PDEs on unstructured grids.

One of the main challenges that we address is the lack of robustness and scalability of existing methods, such as incomplete LU factorizations or Schwarz-based approaches, for which the number of iterations increases significantly with the problem size or with the number of processors. This is often due to the presence of several low frequency modes that hinder the convergence of the iterative method. To address this problem, we study different approaches for dealing with the low frequency modes, such as coarse space corrections in domain decomposition or deflation techniques.

We also focus on developing boundary integral equation methods adapted to the simulation of wave propagation in complex physical situations and suited to parallel architectures. The final objective is to bring the state of the art on boundary integral equations closer to contemporary industrial needs. From this perspective, we investigate domain decomposition strategies in conjunction with the boundary element method, as well as acceleration techniques (H-matrices, FMM, and the like) relevant in multi-material and/or multi-domain configurations. Our work on this topic also includes numerical implementation for large scale problems, which is challenging due to the peculiarities of boundary integral equations.

The design of new numerical methods that are robust and that have well proven convergence properties is one of the challenges addressed in Alpines. Another important challenge is the design of parallel algorithms for the novel numerical methods and the underlying building blocks from numerical linear algebra. The goal is to enable their efficient execution on a diverse set of node architectures and their scaling to emerging high-performance clusters with an increasing number of nodes.

Increased communication cost is one of the main challenges in high performance computing. We address it in our research by investigating algorithms that minimize communication, such as communication avoiding algorithms. Communication avoiding algorithmic design is an approach our group has been developing for more than ten years (initially in collaboration with researchers from UC Berkeley and CU Denver). While our first results concerned direct methods of factorization, such as LU or QR factorizations, our recent work focuses on designing robust algorithms for computing the low rank approximation of a matrix or a tensor. We focus on both deterministic and randomized approaches.
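As a concrete illustration of the randomized approach, here is a minimal numpy sketch of a randomized rangefinder-based low rank approximation (in the spirit of Halko, Martinsson, and Tropp; not the communication avoiding algorithms of the project). The single product A @ Omega that dominates the cost is one large matrix multiplication, which parallelizes well and requires few passes over the data:

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=None):
    """Rank-k approximation via a Gaussian range sketch."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis of the sketched range
    B = Q.T @ A                           # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Quick check on an exactly rank-60 matrix
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 60)) @ rng.standard_normal((60, 200))
U, s, Vt = randomized_low_rank(X, 60, seed=1)
err = np.linalg.norm(X - (U * s) @ Vt) / np.linalg.norm(X)
print(err)  # essentially machine precision here, since the sketch size exceeds the rank
```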

Our research also focuses on solving problems of large size that feature high dimensions, as in molecular simulations. The data in this case is represented by objects called tensors, or multilinear arrays. The goal is to design novel tensor techniques that allow their effective compression, i.e. their representation by simpler objects in small dimensions, while controlling the loss of information. The algorithms aim to be highly parallel, in order to deal with the large number of dimensions and large data sets while preserving the information required to obtain the solution of the problem.

We study the simulation of compositional multiphase flow in porous media for different types of applications, focusing in particular on reservoir/basin modeling and geological CO2 underground storage. All these simulations are linearized using Newton's method, and at each time step and each Newton step a linear system needs to be solved, which is the most expensive part of the simulation. This application leads to some of the most difficult problems to be solved by iterative methods, because the linear systems arising in multiphase porous media flow simulations accumulate many difficulties: they are non-symmetric, involve several unknowns of different nature per grid cell, display strong or very strong heterogeneities and anisotropies, and change during the simulation. Many researchers focus on these simulations, and many innovative techniques for solving linear systems have been introduced while studying them, such as the nested factorization [Appleyard and Cheshire, 1983, SPE Symposium on Reservoir Simulation].
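The outer loop described above can be sketched on a hypothetical toy problem (a 1D nonlinear diffusion equation, not the reservoir equations): at each Newton step, the dominant cost is one sparse linear solve with the Jacobian.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Toy nonlinear problem: -u'' + u^3 = 1 on a 1D grid (illustrative only).
n = 100
h = 1.0 / (n + 1)
L = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsc() / h**2

def F(u):
    return L @ u + u**3 - 1.0

u = np.zeros(n)
for _ in range(20):                       # Newton iteration
    J = (L + diags(3 * u**2)).tocsc()     # Jacobian of F at u
    du = spsolve(J, -F(u))                # the expensive sparse linear solve
    u += du
    if np.linalg.norm(du) < 1e-10:
        break
print(np.linalg.norm(F(u)))  # residual near zero after convergence
```

In a reservoir simulation this solve is repeated at every Newton step of every time step, which is why the quality of the linear solver dominates the total cost.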

We focus on methods that blend time reversal techniques with absorbing boundary conditions (ABC) used in a non-standard way. Since the seminal paper [M. Fink et al., Imaging through inhomogeneous media using time reversal mirrors. Ultrasonic Imaging, 13(2):199, 1991], time reversal has been a subject of very active research. The principle is to back-propagate signals to the sources that emitted them. The initial experiment refocused, very precisely, a recorded signal after it passed through a barrier consisting of randomly distributed metal rods. In [de Rosny and Fink. Overcoming the diffraction limit in wave physics using a time-reversal mirror and a novel acoustic sink. Phys. Rev. Lett., 89 (12), 2002], the source that created the signal is time reversed in order to have a perfect time reversal experiment. In 36, we improve this result from a numerical point of view by showing that it can be done numerically without knowing the source, at the expense of not being able to recover the signal in the vicinity of the source. In 37, time dependent wave splitting is performed using ABC and time reversal techniques. We are now working on extending these methods to non-uniform media.

All our numerical simulations are performed in FreeFem++, which is very flexible. As a byproduct, this gives us an end-user point of view on FreeFem++, which is very useful for improving it.

We are interested in the development of fast numerical methods for the simulation of electromagnetic waves in multi-scale situations where the geometry of the medium of propagation may be described through characteristic lengths that are, in some places, much smaller than the average wavelength. In this context, we propose to develop numerical algorithms that rely on simplified models obtained by means of asymptotic analysis applied to the problem under consideration.

Here we focus on situations involving boundary layers and localized singular
perturbation problems where wave propagation takes place in media whose geometry or material
characteristics are subject to a small scale perturbation localized around a point, a surface,
or a line, but not distributed over a volumic sub-region of the propagation medium. Although a huge
literature is already available on localized singular perturbations and boundary layer
phenomena, very few works have proposed efficient numerical methods that rely on asymptotic
modeling. This is due to their functional framework, which naturally involves singular functions
that are difficult to handle numerically. The aim of this part of our research is to develop and analyze
numerical methods for singular perturbation problems that are amenable to high order numerical approximation,
and robust with respect to the small parameter characterizing the singular perturbation.

We focus on computationally intensive numerical algorithms arising in the data analysis of current and forthcoming Cosmic Microwave Background (CMB) experiments in astrophysics. This application is studied in collaboration with researchers from University Paris Diderot, and the objective is to make available the algorithms to the astrophysics community, so that they can be used in large experiments.

In CMB data analysis, astrophysicists produce and analyze
multi-frequency 2D images of the universe as it was at 5% of its
current age. The new generation of CMB experiments observes the
sky with thousands of detectors over many years, producing
overwhelmingly large and complex data sets, which nearly double every
year, roughly following Moore's law. Planck
(http://

Molecular simulation is one of the most dynamic areas of scientific computing. Its field of application is very broad, ranging from theoretical chemistry and drug design to materials science and nanotechnology. It provides many challenging problems to mathematicians and computer scientists.

In the context of the ERC Synergy Grant EMC2, we address several important limitations of state-of-the-art molecular simulation. In particular, the simulation of very large molecular systems, or of smaller systems in which electrons interact strongly with each other, remains out of reach today. In an interdisciplinary collaboration between chemists, mathematicians, and computer scientists, we focus on developing a new generation of reliable molecular simulation algorithms and software.

FreeFem++ is a partial differential equation solver with its own scripting language. FreeFem++ scripts can solve multiphysics nonlinear systems in 2D and 3D.

Problems involving PDEs (2D, 3D) from several branches of physics, such as fluid-structure interactions, require interpolations of data on several meshes and their manipulation within one program. FreeFem++ includes a fast 2^d-tree-based interpolation algorithm and a language for the manipulation of data on multiple meshes (as a follow-up of bamg, now a part of FreeFem++).

FreeFem++ is written in C++, and the FreeFem++ language is a C++ idiom. It runs on macOS, Windows, and Unix machines. FreeFem++ replaces the older freefem and freefem+.

Part of our research focuses on computing accurate low rank matrix approximations through randomized or deterministic approaches.

In 29 we introduce a parallel algorithm for computing the low rank approximation of a matrix.

In 16 we propose linear-time CUR approximation algorithms for admissible matrices obtained from the hierarchical form of Boundary Element matrices. We propose a new approach called geometric sampling to obtain the indices of the most significant rows and columns using information from the domains where the problem is posed. Our strategy is tailored to Boundary Element Methods (BEM), since it uses directly and explicitly the cluster tree containing information from the problem geometry.
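To fix ideas, the CUR skeleton can be sketched in numpy/scipy. Here column and row indices are chosen by pivoted QR, a simple generic stand-in for the geometric sampling of the paper (which exploits the cluster tree instead):

```python
import numpy as np
from scipy.linalg import qr

def cur_approximation(A, k):
    """CUR sketch: pick k columns/rows by pivoted QR, core U = pinv(A[I, J])."""
    _, _, col_piv = qr(A, pivoting=True)
    _, _, row_piv = qr(A.T, pivoting=True)
    J, I = col_piv[:k], row_piv[:k]
    C, R = A[:, J], A[I, :]            # actual columns and rows of A
    U = np.linalg.pinv(A[np.ix_(I, J)])
    return C, U, R

rng = np.random.default_rng(1)
A = rng.standard_normal((120, 20)) @ rng.standard_normal((20, 80))  # rank 20
C, U, R = cur_approximation(A, 20)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
print(err)  # exact (up to roundoff) for an exactly rank-20 matrix
```

Unlike an SVD, the factors C and R are genuine columns and rows of A, which preserves interpretability and, for BEM matrices, avoids forming the dense matrix at all.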

To address problems arising in computational chemistry, we focus on computing low rank tensor approximation. We present in 33 a parallel algorithm that generates a low-rank approximation of a distributed tensor using QR decomposition with tournament pivoting (QRTP). The algorithm generates factor matrices for a Tucker decomposition by applying QRTP to the unfolding matrices of a tensor distributed block-wise (by sub-tensor) on a set of processors. For each unfolding mode the algorithm logically reorganizes (unfolds) the processors so that the associated unfolding matrix has a suitable logical distribution. We also establish error bounds between a tensor and the compressed version of the tensor generated by the algorithm.
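The structure of the algorithm can be illustrated with a small sequential numpy sketch: a plain SVD of each unfolding stands in for the parallel QR with tournament pivoting, and the Tucker core is obtained by contracting the tensor with the factor matrices.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """HOSVD sketch: one truncated factor per unfolding, then the core."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):  # contract each mode with U^T
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

rng = np.random.default_rng(2)
# Build a tensor with exact multilinear rank (4, 4, 4)
G = rng.standard_normal((4, 4, 4))
Us = [np.linalg.qr(rng.standard_normal((12, 4)))[0] for _ in range(3)]
T = np.einsum('abc,ia,jb,kc->ijk', G, *Us)
core, factors = hosvd(T, (4, 4, 4))
That = np.einsum('abc,ia,jb,kc->ijk', core, *factors)
err = np.linalg.norm(T - That) / np.linalg.norm(T)
print(err)
```

In the parallel algorithm, the SVD of each unfolding is replaced by QRTP on the logically reorganized distributed data, but the output has the same Tucker structure.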

In 35 we consider the problem of developing parallel decomposition and approximation algorithms for high dimensional tensors. We focus on a tensor representation named Tensor Train (TT). It stores a d-dimensional tensor as a train of third-order cores, so that the storage grows only linearly with the number of dimensions.
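A minimal numpy sketch of the classical sequential TT-SVD construction illustrates the format; the parallel algorithm of the paper is different, but produces the same kind of cores:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """TT-SVD sketch: sequential SVDs turn a d-way tensor into a train of
    3-way cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))  # numerical rank cutoff
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        r = rank
        M = (s[:rank, None] * Vt[:rank]).reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    x = cores[0]
    for G in cores[1:]:
        x = np.tensordot(x, G, axes=(-1, 0))
    return np.squeeze(x, axis=(0, -1))

# Round-trip check on a tensor built with TT ranks (3, 3)
rng = np.random.default_rng(3)
Gs = [rng.standard_normal((1, 8, 3)), rng.standard_normal((3, 8, 3)),
      rng.standard_normal((3, 8, 1))]
T = tt_reconstruct(Gs)
cores = tt_svd(T)
err = np.linalg.norm(T - tt_reconstruct(cores)) / np.linalg.norm(T)
print(err)
```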

In 19 a numerical method is proposed to compress a tensor by constructing a piece-wise tensor approximation. This is defined by partitioning a tensor into sub-tensors and computing a low-rank tensor approximation (in a given format) in each sub-tensor. Neither the partition nor the ranks are fixed a priori; instead, they are obtained in order to fulfill a prescribed accuracy and to optimize, to some extent, the storage. The different steps of the method are detailed, and numerical experiments are proposed to assess its performance.

In 28 a randomized Gram-Schmidt algorithm is developed for the orthonormalization of high-dimensional vectors or QR factorization. The proposed process can be less computationally expensive than the classical Gram-Schmidt process while being at least as numerically stable as the modified Gram-Schmidt process. Our approach is based on random sketching, a dimension reduction technique that estimates inner products of high-dimensional vectors by inner products of their small, efficiently computable random projections, so-called sketches. This makes it possible to perform the projection step of the Gram-Schmidt process on sketches rather than on high-dimensional vectors, with a minor computational cost, and it also provides the ability to efficiently certify the output. The proposed Gram-Schmidt algorithm can reduce the computational cost on any architecture, and the benefit of random sketching can be amplified by exploiting multi-precision arithmetic. We provide a stability analysis for a multi-precision model with coarse unit roundoff for standard high-dimensional operations; numerical stability is proven for a unit roundoff independent of the (high) dimension of the problem. The proposed Gram-Schmidt process can be applied to the Arnoldi iteration, resulting in new Krylov subspace methods for solving high-dimensional systems of equations or eigenvalue problems. Among them, we chose the randomized GMRES method as a practical application of the methodology.
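The core idea — performing the projection step on sketches instead of full vectors — can be sketched in a few lines of numpy (a dense, single-precision-free toy version, far from the optimized implementation of the paper):

```python
import numpy as np

def randomized_gram_schmidt(W, k, seed=None):
    """Sketched Gram-Schmidt: inner products are computed on k-dimensional
    sketches Theta @ q; the output basis Q has exactly orthonormal sketches
    and is well-conditioned with high probability."""
    rng = np.random.default_rng(seed)
    n, m = W.shape
    Theta = rng.standard_normal((k, n)) / np.sqrt(k)  # random sketching matrix
    Q = np.zeros((n, m))
    S = np.zeros((k, m))                 # sketches of the basis vectors
    for j in range(m):
        q = W[:, j]
        coeffs = S[:, :j].T @ (Theta @ q)   # projections on sketches only
        q = q - Q[:, :j] @ coeffs
        s = Theta @ q                       # re-sketch the updated vector
        nrm = np.linalg.norm(s)             # sketched norm
        Q[:, j], S[:, j] = q / nrm, s / nrm
    return Q

rng = np.random.default_rng(4)
W = rng.standard_normal((2000, 20))
Q = randomized_gram_schmidt(W, 200, seed=5)
# Q is only approximately orthonormal, but well-conditioned w.h.p.
print(np.linalg.cond(Q))
```

Each iteration touches the full vectors only for the single update of q; all inner products act on 200-dimensional sketches instead of 2000-dimensional vectors.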

In 22 we consider component separation, which is one of the key stages of any modern cosmic microwave background data analysis pipeline. It is an inherently nonlinear procedure and typically involves a series of sequential solutions of linear systems with similar but not identical system matrices, derived for different data models of the same data set. Sequences of this type arise, for instance, in the maximization of the data likelihood with respect to foreground parameters or in the sampling of their posterior distribution. However, they are also common in many other contexts. In this work we consider solving the component separation problem directly in the measurement (time) domain. This can have a number of important benefits over the more standard pixel-based methods, in particular if non-negligible time-domain noise correlations are present, as is commonly the case. The time-domain approach, however, implies significant computational effort, because the full volume of the time-domain data set needs to be manipulated. To address this challenge, we propose and study efficient solvers adapted to solving time-domain-based component separation systems and their sequences, capable of capitalizing on information derived from the previous solutions. This is achieved either by adapting the initial guess of the subsequent system or through so-called subspace recycling, which allows constructing progressively more efficient two-level preconditioners. We report an overall speed-up of a factor of nearly 7, or 5, over solving the systems independently in our numerical experiments, which are inspired by the likelihood maximization and likelihood sampling procedures, respectively.
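The simpler of the two strategies — reusing the previous solution as the initial guess — can be illustrated on a hypothetical toy sequence of slowly drifting SPD systems (not the actual component separation matrices):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

# Well-conditioned base system that drifts slightly from step to step.
n = 400
A0 = (diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) + identity(n)).tocsr()
b = np.ones(n)

def solve(A, x0):
    it = [0]
    x, _ = cg(A, b, x0=x0, callback=lambda xk: it.__setitem__(0, it[0] + 1))
    return x, it[0]

cold_total = warm_total = 0
x_prev = None
for step in range(5):
    A = A0 + (1e-3 * step) * identity(n)   # the system drifts slowly
    _, k_cold = solve(A, None)             # cold start from zero
    x_prev, k_warm = solve(A, x_prev)      # warm start from previous solution
    cold_total += k_cold
    warm_total += k_warm
print(cold_total, warm_total)  # warm restarts need fewer total iterations
```

Subspace recycling goes further: instead of only reusing the previous solution, it reuses approximate spectral information from previous Krylov spaces to build a two-level preconditioner.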

In 14, 13, and 15 we explore adaptive preconditioners that integrate a posteriori error estimators. In particular, we introduce an adaptive preconditioner for the iterative solution of sparse linear systems arising from partial differential equations with self-adjoint operators. This preconditioner makes it possible to control the growth rate of the dominant part of the algebraic error within a fixed point iteration scheme. Several numerical results illustrating the efficiency of this adaptive preconditioner with a PCG solver are presented, and the preconditioner is also compared with a previous variant from the literature.

In 11, we introduce an adaptive domain decomposition (DD) method for solving saddle point problems defined by a two-by-two block matrix. The algorithm does not require any knowledge of the constrained space. We assume that all submatrices are sparse and that the diagonal blocks are sums of positive semi-definite matrices. The latter assumption enables the design of adaptive coarse spaces for DD methods. Numerical results are shown for three dimensional elasticity problems on steel-rubber structures discretized with 1 billion degrees of freedom.

In 30, we analyze the convergence of the one-level overlapping domain decomposition preconditioner SORAS (Symmetrized Optimized Restricted Additive Schwarz) applied to a generic linear system whose matrix is not necessarily symmetric/self-adjoint nor positive definite. By generalizing the theory for the Helmholtz equation developed in [I.G. Graham, E.A. Spence, and J. Zou, SIAM J. Numer. Anal., 2020], we identify a list of assumptions and estimates that are sufficient to obtain an upper bound on the norm of the preconditioned matrix, and a lower bound on the distance of its field of values from the origin. We stress that our theory is general in the sense that it is not specific to one particular boundary value problem.
As an illustration of this framework, we prove new estimates for overlapping domain decomposition methods with Robin-type transmission conditions for the heterogeneous reaction-convection-diffusion equation.

In 31, we consider the domain decomposition approach to solve the Helmholtz equation. Double-sweep approaches for overlapping decompositions are presented. In particular, we introduce an overlapping splitting double sweep (OSDS) method valid for any type of interface boundary conditions. Despite the fact that first-order interface boundary conditions are used, the OSDS method demonstrates good stability properties with respect to the number of subdomains and the frequency, even for heterogeneous media. In this context, convergence is improved compared to the double sweep methods in [Nataf, 1997] and [Vion et al., 2014 and 2016] for all of our test cases: waveguide, open cavity, and wedge problems.

In the context of seismic imaging, frequency-domain full-waveform inversion (FWI) is suitable for long-offset stationary-recording acquisition, since reliable subsurface models can be reconstructed with a few frequencies and attenuation is easily implemented without computational overhead. In the frequency domain, wave modeling is a Helmholtz-type boundary-value problem which requires solving a large and sparse system of linear equations per frequency with multiple right-hand sides (sources). This system can be solved with direct or iterative methods. While the former are suitable for FWI applications on 3D dense OBC acquisitions covering spatial domains of moderate size, the latter should be the approach of choice for sparse node acquisitions covering large domains (more than 50 million unknowns). Fast convergence of iterative solvers for Helmholtz problems remains however challenging in the high frequency regime, due to the indefiniteness of the Helmholtz operator and to the discretization constraints needed to minimize the dispersion error for a given frequency, hence requiring efficient preconditioners.

For such wave propagation problems, we continued the development of two-level ORAS Schwarz domain decomposition preconditioners, where the second level consists of inexact solves of a coarse problem with added absorption, discretized on a coarse mesh. In particular, we developed and tested a single precision implementation, together with an incomplete Cholesky factorization of the local fine level matrices, allowing a significant gain in computing time and memory consumption without loss of robustness, as shown in Table 1 for a 3D acoustic simulation on the Overthrust benchmark. For this same benchmark, we were thus able to solve the problem at 20 Hz frequency with P3 finite elements on an unstructured mesh adapted to the local wavelength, a problem comprising 2.3 billion unknowns solved in 37 seconds on 16960 cores. Table 2 gathers results at different frequencies, for a regular mesh and for an adapted unstructured mesh. This work was the subject of conference papers and was presented this year at the EAGE and SEG geophysics conferences.

In 26, we consider the problem of redatuming. In inverse problems, redatuming consists in virtually moving the sensors from the original acquisition location to an arbitrary position. This is an essential tool for target oriented inversion. We propose an exact redatuming method whose distinctive feature is robustness with respect to noise. Our iterative method is based on the Time Reversal Absorbing Conditions (TRAC) approach and avoids the need for a regularization strategy. Numerical results and comparisons with other redatuming approaches illustrate the robustness of our method.

In 34 we propose a new strategy for solving, with the parareal algorithm, highly oscillatory ordinary differential equations which are characteristic of a six-dimensional Vlasov equation. For the coarse solvers we use reduced models obtained from two-scale asymptotic expansions. Such reduced models have a low computational cost since they are free of high oscillations. The parareal method makes it possible to improve their accuracy in a few iterations through corrections by fine solvers of the full model. We demonstrate the accuracy and the efficiency of the strategy in numerical experiments of short-time and long-time simulations of charged particles subject to a strong magnetic field. In addition, the convergence of the parareal method is obtained uniformly with respect to the vanishing stiff parameter.
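The parareal skeleton itself (a sequential coarse sweep, then corrections with fine solvers that can run in parallel across time slices) can be sketched on a hypothetical dissipative toy ODE. For highly oscillatory problems a naive coarse solver converges poorly, which is precisely why reduced models are used as coarse solvers above; this sketch only illustrates the algorithmic structure.

```python
import numpy as np

f = lambda t, y: -5.0 * y + np.sin(10 * t)   # toy ODE (illustrative only)

def fine(t, y, dt, substeps=100):
    """Fine propagator: many RK4 steps (expensive, accurate)."""
    h = dt / substeps
    for _ in range(substeps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return y

def coarse(t, y, dt):
    """Coarse propagator: a single implicit Euler step (cheap, inaccurate)."""
    return (y + dt * np.sin(10 * (t + dt))) / (1 + 5.0 * dt)

N, T = 10, 1.0
dt = T / N
ts = np.linspace(0.0, T, N + 1)
U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):                       # iteration 0: pure coarse sweep
    U[n + 1] = coarse(ts[n], U[n], dt)
for k in range(5):                       # parareal corrections
    Fk = [fine(ts[n], U[n], dt) for n in range(N)]    # parallel in principle
    Gk = [coarse(ts[n], U[n], dt) for n in range(N)]
    for n in range(N):                   # sequential coarse update
        U[n + 1] = coarse(ts[n], U[n], dt) + Fk[n] - Gk[n]
ref = fine(0.0, 1.0, T, substeps=N * 100)  # serial fine reference solution
print(abs(U[-1] - ref))
```

After k corrections the parareal trajectory is exact on the first k time slices, and the remaining error contracts at a rate governed by the accuracy of the coarse propagator.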

In 20, we consider hypersingular symmetric positive definite operators stemming from boundary integral equations (BIE), and we adapt the two-level domain decomposition strategy based on the GenEO coarse space to this context of non-local operators. We derive an estimate of the condition number, showing the scalability of this method, and provide numerical results based on the BemTool and HPDDM libraries that confirm the theoretically predicted efficiency of this approach.

In 17 we consider scalar wave propagation in the harmonic regime, modelled by the Helmholtz equation with heterogeneous coefficients. Using the Multi-Trace Formalism (MTF), we propose a new variant of the Optimized Schwarz Method (OSM) that can accommodate the presence of cross-points in the subdomain partition. This leads to the derivation of a strongly coercive formulation of our Helmholtz problem posed on the union of all interfaces. The corresponding operator takes the form "identity + contraction".

In the field of Domain Decomposition (DD), Optimized Schwarz Method (OSM) appears to be one of the prominent techniques to solve large scale time-harmonic wave propagation problems. It is based on appropriate transmission conditions using carefully designed impedance operators to exchange information between sub-domains. The efficiency of such methods is however hindered by the presence of cross-points, where more than two sub-domains abut, if no appropriate treatment is provided. In 32, we propose a new treatment of the cross-point issue for the Helmholtz equation that remains valid in any geometrical interface configuration. We exploit the multi-trace formalism to define a new exchange operator with suitable continuity and isometry properties. We then develop a complete theoretical framework that generalizes classical OSM to partitions with cross points and contains a rigorous proof of geometric convergence, uniform with respect to the mesh discretization, for appropriate positive impedance operators. Extensive numerical results in 2D and 3D are provided as an illustration of the performance of the proposed method.

In 27 we consider the time-harmonic electromagnetic transmission problem for the unit sphere. Appealing to a vector spherical harmonics analysis, we prove the first stability result of the local multiple trace formulation (MTF) for electromagnetics, originally introduced by Hiptmair and Jerez-Hanckes [Adv. Comp. Math. 37 (2012), 37-91] for the acoustic case, paving the way towards an extension to general piecewise homogeneous scatterers. Moreover, we investigate preconditioning techniques for the local MTF scheme and study the accumulation points of induced operators. In particular, we propose a novel second-order inverse approximation of the operator. Numerical experiments validate our claims and confirm the relevance of the preconditioning strategies.

ANR, December 2017 - March 2022. This project is in the area of the analysis of cosmological data sets collected by contemporary and forthcoming observatories, one of the most dynamic areas of modern cosmology. Our specific target is data sets of Cosmic Microwave Background (CMB) anisotropies, measurements of which have been among the most fruitful cosmological probes. CMB photons are remnants of the very early evolution of the Universe and carry information about its physical state at a time when the Universe was much younger, hotter, denser, and simpler to model mathematically. The CMB has been, and continues to be, a unique source of information for modern cosmology and fundamental physics. The main objective of this project is to empower CMB data analysis with novel high performance tools and algorithms superior to those available today, capable of overcoming the existing performance gap. Partners: AstroParticules et Cosmologie Paris 7 (PI R. Stompor), ENSAE Paris Saclay.

October 2015 - March 2021. Laura Grigori is Principal Coordinator for Inria Paris. The funding for Inria Paris is 145 Keuros and supports work on combining Krylov subspace methods with parallel-in-time methods. Partners: University Pierre and Marie Curie, J. L. Lions Laboratory (PI Y. Maday), CEA, Paris Dauphine University, Paris 13 University.

ANR generic call for proposals (appel à projet générique), October 2015 - September 2020.

This project in scientific computing aims at developing new domain decomposition methods for the massively parallel simulation of electromagnetic waves in the harmonic regime. The specificity of the proposed approach lies in the use of integral operators not only for the solutions local to each subdomain but also for coupling the subdomains. The novelty of this project consists, on the one hand, in exploiting the multi-trace formalism for domain decomposition and, on the other hand, in considering optimized Schwarz methods relying on Robin type transmission conditions involving quasi-local integral operators. Partners: Inria Alpines (PI X. Claeys), Inria Poems, Inria Magique 3D.

ANR generic call for proposals (appel à projet générique), 2019.

S. Hirstoaga and P.-H. Tournier are members of the project MUFFIN, whose objective is to explore and optimize original computational scenarios for multi-scale and high dimensional transport codes, with priority applications in plasma physics. Several approximation methods are planned to be developed. The project is at the frontier of computing and numerical analysis and intends to reduce the computational burden in the context of intensive calculation. Principal Investigator: B. Després (Sorbonne University).