The MATHERIALS project-team was created jointly by the École des Ponts ParisTech (ENPC) and Inria in 2015. It is the follow-up and an extension of the former project-team MICMAC, originally created in October 2002. It is hosted by the CERMICS laboratory (Centre d'Enseignement et de Recherches en Mathématiques et Calcul Scientifique) at École des Ponts. The permanent research scientists of the project-team have positions at CERMICS and at two other laboratories of École des Ponts: Institut Navier and Laboratoire Saint-Venant. The scientific focus of the project-team is to analyze and improve the numerical schemes used in computational chemistry for the simulation of matter at the microscopic level, and to create simulations coupling this microscopic scale with meso- or macroscopic scales (possibly using parallel algorithms). Over the years, the project-team has accumulated an increasingly solid expertise on such topics, which are traditionally not well known by the community in applied mathematics and scientific computing. One of the major achievements of the project-team is to have created a corpus of literature, authoring books and research monographs on the subject 1, 2, 3, 4, 5, 6, 7, that other scientists may consult in order to enter the field.

Our group, originally only involved in electronic structure computations, continues to focus on many numerical issues in quantum chemistry, but now expands its expertise to cover several related problems at larger scales, such as molecular dynamics problems and multiscale problems. The mathematical derivation of continuum energies from quantum chemistry models is one instance of a long-term theoretical endeavour.

Quantum Chemistry aims at understanding the properties of matter through
the modelling of its behavior at a subatomic scale, where matter is
described as an assembly of nuclei and electrons.
At this scale, the equation that rules the interactions between these
constitutive elements is the Schrödinger equation. It can be
considered (except in a few special cases, notably those involving
relativistic phenomena or nuclear reactions)
as a universal model for at least three reasons. First, it contains all
the physical
information of the system under consideration so that any of the
properties of this system can in theory be deduced from the
Schrödinger
equation associated to it. Second, the Schrödinger equation does not
involve any
empirical parameters, except some fundamental constants of Physics (the
Planck constant, the mass and charge of the electron, ...); it
can thus be written for any kind of molecular system provided its
chemical composition, in terms of the nature of the nuclei and the
number of electrons, is known. Third, this model enjoys remarkable predictive
capabilities, as confirmed by comparisons with a large amount of
experimental data of various types.
On the other hand, using this high quality model requires working with
space and time scales which are both very
tiny: the typical size of the electronic cloud of an isolated atom is
the Angström ($10^{-10}$ meters), and the typical time scales of the
underlying phenomena are of the order of the femtosecond ($10^{-15}$
seconds) or below. In addition, not all macroscopic properties can be
simply upscaled from the consideration of the short-time behavior of a
tiny sample of matter. Many of them derive from ensemble or bulk
effects, which are far from being easy to understand and to model.
Striking examples are found in solid state materials or biological
systems. Cleavage, the ability of minerals to naturally split along
crystal surfaces (e.g. mica yields thin flakes), is an ensemble
effect. Protein folding is
also an ensemble effect that originates from the presence of the
surrounding medium; it is responsible for peculiar properties
(e.g. unexpected acidity of some reactive site enhanced by special
interactions) upon which vital processes are based.
However, it is undoubtedly true that many macroscopic phenomena originate from
elementary processes which take place at the atomic scale. Let us
mention for instance the fact that
the elastic constants of a perfect crystal or the color of a chemical
compound (which is related to the wavelengths
absorbed or emitted during optical transitions between electronic
levels) can be evaluated by atomic-scale calculations. In the same
fashion, the lubricating properties of graphite are essentially due to a
phenomenon which can be entirely modeled at the atomic scale.
It is therefore reasonable to simulate the behavior of matter at the
atomic scale in order to understand what is going on at the
macroscopic one.
The journey is however a long one. Starting from the basic
principles of Quantum Mechanics to model matter at the subatomic
scale,
one finally uses statistical mechanics to reach the macroscopic
scale. It is often necessary to rely on intermediate steps to deal with
phenomena which take place on various mesoscales.
It may then be possible to couple one description of the system with some
others within the so-called multiscale models.
The sequel indicates how this journey can be completed,
focusing on the smallest scale (the subatomic one) rather than on the
larger ones.
It has already been mentioned that at the subatomic scale,
the behavior of nuclei and electrons is governed by the Schrödinger
equation, either in its time-dependent form
or in its time-independent form. Let us only mention at this point that
the time-dependent form reads
$$ i\hbar\, \partial_t \psi = H \psi, $$
while the time-independent form is the eigenvalue problem
$$ H \psi = E \psi, $$
where $H$ denotes the Hamiltonian of the molecular system.
The time-dependent equation is a first-order linear evolution
equation, whereas the time-independent equation is a linear eigenvalue
equation.
For the reader more familiar with numerical analysis
than with quantum mechanics, the linear nature of the problems stated
above may look auspicious. What makes the
numerical simulation of these equations
extremely difficult is essentially the huge size of the Hilbert
space: indeed, this space is roughly some symmetry-constrained subspace
of $L^2(\mathbb{R}^{3N})$, with $N$ the number of particles in the
system. In practice, the Schrödinger equation is therefore replaced by
approximate models, which give rise to systems of nonlinear partial
differential equations, each of these equations being posed on
$\mathbb{R}^3$.

As the size of the systems one wants to study increases, more efficient
numerical techniques need to be employed. In computational chemistry,
the complexity of computations typically grows much faster than linearly
with the size of the system under study (cubically in the number of
electrons for standard models), which motivates the development of
reduced-scaling techniques.

An alternative strategy to reduce the complexity of ab initio
computations is to try to couple different models at different
scales. Such a mixed strategy can be either a sequential one or a
parallel one, in the sense that the fine-scale model is either used
beforehand to compute quantities that are then fed into the coarse-scale
model, or solved concurrently with it during the simulation.

The coupling of different scales can even go up to the macroscopic scale, with methods that couple a microscopic representation of matter, or at least a mesoscopic one, with the equations of continuum mechanics at the macroscopic level.

The orders of magnitude used in the microscopic representation of
matter are far from the orders of magnitude of the macroscopic
quantities we are used to: the number of particles under
consideration in a macroscopic sample of material is of the order of
the Avogadro number, $\mathcal{N}_A \simeq 6.02 \times 10^{23}$.

To give some insight into such a large number of particles contained in
a macroscopic sample, it is helpful to
compute the number of moles of water on earth. Recall that one mole of water
corresponds to 18 mL, so that a standard glass of water contains roughly
10 moles, and a typical bathtub contains about $10^4$ moles. The total
amount of water on earth, by comparison, is of the order of $10^{22}$ moles.
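Spelling out the arithmetic (with a ballpark 180 L bathtub and roughly $1.4 \times 10^{21}$ kg of water on earth, both order-of-magnitude figures):
$$ \frac{1.8 \times 10^{5}\ \mathrm{mL}}{18\ \mathrm{mL/mol}} = 10^{4}\ \mathrm{mol}, \qquad \frac{1.4 \times 10^{24}\ \mathrm{g}}{18\ \mathrm{g/mol}} \approx 8 \times 10^{22}\ \mathrm{mol}. $$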

For practical numerical computations
of matter at the microscopic level, following the dynamics of every atom
in a macroscopic sample would require simulating on the order of
$10^{23}$ degrees of freedom, which is far beyond the capabilities of
current and foreseeable computers.

Describing the macroscopic behavior of matter knowing its microscopic
description
therefore seems out of reach. Statistical physics allows us to bridge the gap
between microscopic and macroscopic descriptions of matter, at least on a
conceptual
level. The question is whether the quantities estimated for a system of
$N$ particles, with $N$ much smaller than the Avogadro number, correctly
approximate the macroscopic quantities of interest, formally obtained in
the thermodynamic limit $N \to \infty$.

Despite its intrinsic limitations on spatial and timescales, molecular simulation has been used and developed over the past 50 years, and its number of users keeps increasing. As we understand it, it has two major aims nowadays.

First, it can be used as a numerical microscope, which allows us to
perform “computer” experiments.
This was the initial motivation for simulations at the microscopic level:
physical theories were tested on computers.
This use of molecular simulation is particularly clear in its historic
development, which was triggered and sustained by the physics of simple
liquids. Indeed, there was no good analytical theory for these systems,
and the observation of computer trajectories was very helpful to guide the
physicists'
intuition about what was happening in the system, for instance the mechanisms
leading to molecular diffusion. In particular,
the pioneering works on Monte Carlo methods by Metropolis et al., and the first
molecular dynamics
simulation of Alder and Wainwright were performed because of such motivations.
Today, understanding the behavior of matter at the
microscopic level can still be difficult from an experimental viewpoint
(because of the high resolution required, both in time and in
space), or because we simply do not know what to look for!
Numerical simulations are then a valuable tool to test ideas
or to obtain data to process and analyze in order
to help assess experimental setups. This is
particularly true for current nanoscale systems.
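As a minimal illustration of such a “numerical microscope”, here is a generic sketch, in the spirit of the Metropolis algorithm mentioned above, sampling a Boltzmann distribution for a toy double-well potential (the potential and all parameters are ours, not taken from any cited work):

```python
import numpy as np

def metropolis(V, x0, n_steps, beta=1.0, step=0.5, seed=0):
    """Random-walk Metropolis sampling of the measure ~ exp(-beta * V)."""
    rng = np.random.default_rng(seed)
    x, v = x0, V(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.standard_normal()         # symmetric proposal
        vy = V(y)
        if rng.random() < np.exp(-beta * (vy - v)):  # accept/reject
            x, v = y, vy
        samples[i] = x                               # store current state
    return samples

# Example: a double-well potential, the prototype of a metastable system
V = lambda x: (x**2 - 1.0)**2
traj = metropolis(V, x0=0.0, n_steps=100_000)
```

Observing such a trajectory hopping between the two wells is exactly the kind of qualitative insight (here, into molecular diffusion mechanisms) that early simulations provided.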

Another major aim of molecular simulation, maybe even more important than the
previous one,
is to compute macroscopic
quantities or thermodynamic properties,
typically through averages of some functionals of the system.
In this case, molecular simulation is a
way to obtain quantitative information on a system,
instead of resorting to approximate theories, constructed for simplified models,
and giving only qualitative answers.
Sometimes, these properties are accessible through experiments,
but in some cases only numerical computations are possible
since experiments may be unfeasible or too costly
(for instance, when high pressure or large temperature regimes are considered,
or when studying materials not yet synthesized).
More generally, molecular simulation is a tool to explore the links between
the microscopic and macroscopic properties of a material, allowing
one to address modelling questions such as “Which microscopic ingredients are
necessary
(and which are not) to observe a given macroscopic behavior?”
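In the canonical ensemble, for instance, these macroscopic quantities are phase-space averages of an observable $\varphi$ with respect to the Boltzmann–Gibbs measure,
$$ \langle \varphi \rangle = \frac{1}{Z} \int \varphi(q)\, e^{-\beta V(q)}\, dq, \qquad Z = \int e^{-\beta V(q)}\, dq, $$
which are estimated in practice by time averages $\frac{1}{T}\int_0^T \varphi(q_t)\, dt$ along trajectories of an ergodic dynamics.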

Over the years, the project-team has developed an increasing expertise on multiscale modeling for materials science at the continuum scale. The presence of numerous length scales in materials science problems indeed represents a challenge for numerical simulation, especially when some randomness is assumed on the materials. This randomness can take various forms, including defects in crystals, thermal fluctuations, and impurities or heterogeneities in continuous media. Standard methods available in the literature to handle such problems often lead to very costly computations. Our goal is to develop numerical methods that are more affordable. Because we cannot embrace all difficulties at once, we focus on a simple case, where the fine-scale and coarse-scale models can be written similarly, in the form of a simple elliptic partial differential equation in divergence form. The fine-scale model includes heterogeneities at a small scale, a situation which is formalized by the fact that its coefficients vary on a small length scale. After homogenization, this model yields an effective, macroscopic model, which includes no small scale (the coefficients of the coarse-scale equation are thus simply constant, or vary on a coarse length scale). In many cases, a sound theoretical groundwork exists for such homogenization results. The difficulty stems from the fact that the models generally lead to prohibitively costly computations (this is for instance the case in the random stationary setting). Our aim is to focus on different settings, all relevant from an applied viewpoint, and leading to practically affordable computational approaches. The case of ordered (that is, in this context, periodic) systems is now well understood, both from a theoretical and a numerical standpoint. Our aim is to turn to cases, more relevant in practice, where some disorder is present in the microstructure of the material, in order to take into account defects in crystals, impurities in continuous media, etc. This disorder may be mathematically modeled in various ways.
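In its simplest form, the setting described above thus consists in passing from the oscillatory problem to its homogenized limit:
$$ -\operatorname{div}\!\left( A\!\left( \frac{x}{\varepsilon} \right) \nabla u^\varepsilon \right) = f \quad \xrightarrow{\ \varepsilon \to 0\ } \quad -\operatorname{div}\!\left( A^\star \nabla u^\star \right) = f, $$
where $\varepsilon$ is the small length scale of the heterogeneities and the homogenized matrix $A^\star$ contains no small scale.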

Such endeavors raise several questions. The first one, theoretical in nature, is to extend the classical theory of homogenization (well developed e.g. in the periodic setting) to such disordered settings; after homogenization, we then expect to obtain an effective, macroscopic model which includes no small scale. A second question is to introduce affordable numerical methods to compute the homogenized coefficients. An alternative approach, more numerical in nature, is to directly attack the oscillatory problem using discretization approaches tailored to the multiscale nature of the problem (the construction of which is often inspired by theoretical homogenization results).

The team members have continued their study of algorithms for solving the ground-state problem in Kohn-Sham density functional theory, the long-term goal being the construction of robust and efficient numerical methods with guaranteed error bounds. M. Herbst and A. Levitt have investigated numerically the convergence of self-consistent iterations, and designed a new linesearch strategy. This makes computations more robust and less reliant on user-chosen parameters, without slowing down the algorithms 44. E. Cancès, G. Dusson, G. Kemlin and A. Levitt have designed and implemented computable error estimates for properties of realistic models of solids. This work demonstrates the inaccuracy of standard bounds, and improves them using a two-level splitting of the error into high and low frequencies 31.
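For orientation, here is a minimal sketch of a generic damped self-consistent field (SCF) fixed-point iteration; this is not the algorithm of 44 (where, in particular, the damping would be chosen adaptively by the linesearch), and the function names are ours:

```python
import numpy as np

def scf(F, rho0, damping=0.5, tol=1e-8, max_iter=200):
    """Generic damped SCF iteration.

    F maps a density rho to a new density (the SCF map);
    the fixed point F(rho) = rho is the sought ground state.
    """
    rho = rho0
    for k in range(max_iter):
        rho_new = F(rho)
        if np.linalg.norm(rho_new - rho) < tol:
            return rho, k
        # Damped update: plain fixed-point iteration (damping = 1) often
        # diverges; an adaptive linesearch would tune `damping` per step.
        rho = rho + damping * (rho_new - rho)
    raise RuntimeError("SCF did not converge")
```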

The team members have also focused on justifying rigorously and computing efficiently the response properties of molecules and materials. In 38, M-S. Dupuy (Sorbonne Université) and A. Levitt investigate dynamical response properties of a simple model of molecules, the difficulty being that the truncation of the computational domain discretizes the continuous spectrum of the Schrödinger operator, and therefore changes the nature of the response function from a continuous function to a sum of discrete poles. The convergence is proven in an appropriate weak sense, and sharp error estimates for a regularization method are given.

In 41, L. Garrigue studied the inverse problem of density functional theory. This consists in finding the potential which produces some targeted one-body density, in ground- or excited-state configurations, and can be seen as a control problem. The theoretical results of the article show that, up to some regularization, the problem is well-posed, even for excited states. An algorithm is proposed, and numerical simulations show that, in all the cases tested, the inverse potential is obtained by a well-posed maximization of a dual functional. This provides a solution to theoretical and numerical problems which were previously solved only in dimension one.

In his post-doctoral work co-supervised by Claude Le Bris (MATHERIALS) and Pierre Rouchon (Inria QUANTIC), Masaaki Tokieda addresses various issues related to the numerical simulation and the fundamental understanding of several models of physical systems that are likely candidates to play a crucial role in quantum computing. More specifically, he studies several pathways to efficiently account for adiabatic elimination in the simulation of interacting composite quantum systems, modeled by Lindblad-type master equations. The specific question currently under study is the expansion up to high orders and the compatibility of such an expansion with the formal requirements of consistency of quantum mechanical evolutions. He also plans to address various other connected issues, all aiming at a better fundamental understanding and a more effective simulation of open quantum systems.

The Strictly Correlated Electrons (SCE) limit of the Levy-Lieb functional in Density Functional Theory (DFT) gives rise to a symmetric multi-marginal optimal transport problem with Coulomb cost, where the number of marginal laws is equal to the number of electrons in the system, which can be very large in relevant applications. In 8, we design a numerical method, built upon constrained overdamped Langevin processes, to solve previously introduced Moment Constrained Optimal Transport (MCOT) relaxations of symmetric multi-marginal optimal transport problems with Coulomb cost. Some minimizers of such relaxations can be written as discrete measures charging a low number of points belonging to a space whose dimension, in the symmetric case, scales linearly with the number of marginal laws. We leverage the sparsity of those minimizers in the design of the numerical method and prove that any local minimizer of the resulting problem is actually a global one. We illustrate the performance of the proposed method by numerical examples which solve MCOT relaxations of 3D systems with up to 100 electrons.

Tensor methods have proved to be powerful tools to represent high-dimensional objects with low complexity. Such methods have a wide range of applications in quantum chemistry, for instance for the approximation of the ground-state wavefunction of a molecular system when the number of electrons is large. The DMRG method is one example of such a numerical scheme. Research efforts are conducted in the team to propose new methodological developments improving on the current state-of-the-art tensor methods.

In 19, V. Ehrlacher, M. Fuente-Ruiz and D. Lombardi (Inria COMMEDIA) introduce a method to compute an approximation of a given tensor as a sum of Tensor Trains (TTs), where the order of the variates and the values of the ranks can vary from one term to the other in an adaptive way. The numerical scheme is based on a greedy algorithm and an adaptation of the TT-SVD method. The proposed approach can also be used in order to compute an approximation of a tensor in a Canonical Polyadic format (CP), as an alternative to standard algorithms like Alternating Linear Squares (ALS) or Alternating Singular Value Decomposition (ASVD) methods. Some numerical experiments are presented, in which the proposed method is compared to ALS and ASVD methods for the construction of a CP approximation of a given tensor and performs particularly well for high-order tensors. The interest of approximating a tensor as a sum of Tensor Trains is illustrated in several numerical test cases.
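To fix ideas, here is a textbook TT-SVD in NumPy, the classical building block which the adaptive greedy method of 19 extends (function and variable names are ours):

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Classical TT-SVD: decompose a full tensor T into Tensor-Train cores.

    Returns cores G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1,
    such that contracting the cores recovers T up to relative accuracy eps.
    """
    dims, d = T.shape, T.ndim
    delta = eps * np.linalg.norm(T) / np.sqrt(max(d - 1, 1))
    cores, r_prev = [], 1
    C = np.asarray(T, dtype=float)
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # tail[r] = ||s[r:]||
        r = int(np.argmax(tail <= delta)) if np.any(tail <= delta) else len(s)
        r = max(r, 1)                                  # truncation rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]                       # carry the remainder
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

# Reconstruction check on a small random tensor
T = np.random.rand(4, 5, 6)
cores = tt_svd(T, eps=1e-12)
full = cores[0]
for G in cores[1:]:
    full = np.tensordot(full, G, axes=([full.ndim - 1], [0]))
print(np.linalg.norm(full.reshape(T.shape) - T))
```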

In 20, V. Ehrlacher, L. Grigori (Inria ALPINES), Damiano Lombardi (Inria COMMEDIA) and Hao Song propose a numerical method to compress a tensor by constructing a piecewise tensor approximation. The approximation is constructed by partitioning the tensor into sub-tensors and by computing a low-rank tensor approximation (in a given format) in each sub-tensor. Neither the partition nor the ranks are fixed a priori; instead, they are obtained so as to fulfill a prescribed accuracy and optimize the storage. Some numerical experiments are proposed to illustrate the method.

The aim of computational statistical physics is to compute macroscopic properties of materials starting from a microscopic description, using concepts of statistical physics (thermodynamic ensembles and molecular dynamics). The contributions of the team can be divided into five main topics: (i) the development of methods for sampling the configuration space; (ii) the efficient computation of dynamical properties, which requires sampling metastable trajectories; (iii) the simulation of nonequilibrium systems and the computation of transport coefficients; (iv) coarse-graining techniques to reduce the computational cost of molecular dynamics simulations and gain some insight into the models; (v) the use of particle methods for sampling and optimization.

Various dynamics are used in computational statistical physics to sample the configurations of the system according to the target probability measure at hand and to approximate expectations as time averages along one realization. It is important to have a good theoretical understanding of the performance of sampling methods, in order to choose optimal parameters in actual numerical simulations, for instance to make the variance of time averages along one realization as small as possible. A common choice is the so-called Langevin dynamics, which corresponds to a Hamiltonian dynamics perturbed by an Ornstein–Uhlenbeck process on the momenta. The generator associated with this stochastic differential equation is hypoelliptic (at best). Proving the longtime convergence of the semigroup requires dedicated tools, under a general strategy known as hypocoercivity, and usually involves prefactors which are difficult to estimate quantitatively; see the review 51 by G. Stoltz. Bounds on the asymptotic variance, on the other hand, only require bounds on the resolvent of the generator. In the work 13, in collaboration with E. Camrud, D. Herzog (Iowa State, USA) and M. Gordina (University of Connecticut, USA), G. Stoltz derives such estimates for singular potentials such as the ones arising from Lennard–Jones interactions.
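In the standard notation (positions $q$, momenta $p$, potential $V$, mass matrix $M$, friction $\gamma > 0$ and inverse temperature $\beta$), the Langevin dynamics referred to above reads
$$ dq_t = M^{-1} p_t \, dt, \qquad dp_t = -\nabla V(q_t)\, dt - \gamma M^{-1} p_t \, dt + \sqrt{2\gamma\beta^{-1}}\, dW_t, $$
where $W_t$ is a standard Brownian motion; the Ornstein–Uhlenbeck part acts on the momenta only, which is the source of the degeneracy of the generator.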

For discretizations of Langevin dynamics, hypocoercive techniques cannot be used as such, and other approaches are needed to obtain quantitative longtime convergence estimates. In a work with A. Durmus, A. Enfroy (ENS Paris-Saclay, France) and Eric Moulines (École polytechnique, France), G. Stoltz obtained such estimates by Lyapunov techniques combined with minorization estimates, the key point being that these estimates are uniform in the timestep 39. From a technical viewpoint, the minorization condition is derived by considering the dynamics over small physical times, in which case it can be seen as a perturbation of a (degenerate) Gaussian process.
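For illustration, here is a minimal sketch of a standard discretization of Langevin dynamics, the classical BAOAB splitting with unit masses; this is a generic scheme, not necessarily the specific discretizations analyzed in 39:

```python
import numpy as np

def baoab(grad_V, q0, p0, n_steps, dt, gamma=1.0, beta=1.0, seed=0):
    """BAOAB splitting scheme for Langevin dynamics (unit masses)."""
    rng = np.random.default_rng(seed)
    q, p = np.array(q0, float), np.array(p0, float)
    alpha = np.exp(-gamma * dt)                # exact OU damping over dt
    sigma = np.sqrt((1.0 - alpha**2) / beta)   # fluctuation-dissipation
    for _ in range(n_steps):
        p -= 0.5 * dt * grad_V(q)                              # B: half kick
        q += 0.5 * dt * p                                      # A: half drift
        p = alpha * p + sigma * rng.standard_normal(p.shape)   # O: exact OU
        q += 0.5 * dt * p                                      # A
        p -= 0.5 * dt * grad_V(q)                              # B
    return q, p
```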

Another work which falls into the category of sampling methods is 50, where G. Stoltz and I. Sekkat, one of his PhD students, show how to correct Langevin dynamics in which the force appearing in the update of the momenta (the opposite of the gradient of the potential energy function) is not computed exactly but approximated by some unbiased estimator. This approximation induces a bias on the probability measure which is sampled. The bias can however be corrected with a method known as Adaptive Langevin dynamics, where the friction coefficient is a variable adjusted by some Nosé–Hoover feedback mechanism. They show how to extend this method to the case where the covariance of the gradient noise is not constant, and also derive error estimates on the bias incurred on the invariant measure 50.

Multiplicative noise can also be introduced in the case of (overdamped) Langevin dynamics by considering non-separable Hamiltonians. G. Stoltz and T. Lelièvre, along with R. Santet, one of their PhD students, are actively working on unbiased numerical sampling in those cases, where properties such as the time reversibility of the Hamiltonian flow have to be exactly reproduced numerically.

One important aim of sampling methods is to compute free energy differences. In 46, T. Lelièvre, in collaboration with L. Maurin and P. Monmarché (Sorbonne Université), studies the robustness of the Adaptive Biasing Force method under non-conservative forces. In particular, exponential convergence to equilibrium is proven using classical entropy techniques.
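Recall that, given a reaction coordinate $\xi$, the associated free energy is defined (up to an additive constant) by
$$ A(z) = -\beta^{-1} \ln \int_{\Sigma(z)} e^{-\beta V(q)}\, \delta_{\xi(q)-z}(dq), \qquad \Sigma(z) = \{ q : \xi(q) = z \}, $$
and adaptive biasing techniques estimate the mean force $A'(z)$ on the fly in order to remove the metastability along $\xi$.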

Let us also mention the work 29 where R. Blel, V. Ehrlacher, and T. Lelièvre analyse the convergence of a variance reduction technique for the computation of parameter-dependent expectations using a reduced basis paradigm.

One research direction which has been developed over the past ten years in the team is to analyse metastable trajectories using the notion of quasi-stationary distribution. A stochastic process, when trapped in a metastable state, reaches a local equilibrium (the quasi-stationary distribution) before leaving the state. This viewpoint is useful to rigorously justify the use of jump Markov processes to model the exit event, which is interesting both theoretically, to provide a sound justification of Markov State models, and numerically, to justify a class of algorithms called accelerated molecular dynamics. In 26, T. Lelièvre, M. Ramil and J. Reygner (École des Ponts) prove the existence of a quasi-stationary distribution for the Langevin process trapped in a domain bounded in position, as well as the exponential convergence towards this distribution for the process conditioned to stay in the domain. Compared to the case of the overdamped Langevin dynamics, this result requires circumventing two main difficulties: the Langevin process is a degenerate diffusion process (noise only acts on the velocities) and the domain is unbounded in velocity. The high friction limit of this model was further studied by M. Ramil in 49.
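In mathematical terms, a probability measure $\nu$ on a domain $\mathcal{O}$ is a quasi-stationary distribution for the process $(X_t)_{t \ge 0}$ if, when $X_0 \sim \nu$,
$$ \mathbb{P}_\nu\left( X_t \in A \mid \tau_{\mathcal{O}} > t \right) = \nu(A) \quad \text{for all } t > 0 \text{ and all measurable } A \subset \mathcal{O}, $$
where $\tau_{\mathcal{O}}$ denotes the first exit time from $\mathcal{O}$.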

Many systems in computational statistical physics are not at equilibrium. This is in particular the case when one wants to compute transport coefficients, which determine the response of the system to some external perturbation. For instance, the thermal conductivity relates an applied temperature difference to an energy current through Fourier's law, while the mobility coefficient relates an applied external constant force to the average velocity of the particles in the system.

Petr Plechac (University of Delaware, USA), Ting Wang (Army Research Lab, USA) and G. Stoltz considered in 48 numerical schemes for computing the linear response of steady-state averages of Langevin dynamics with respect to a perturbation of the drift part of the stochastic differential equation. The schemes are based on Girsanov's change-of-measure theory to reweight trajectories with factors derived from a linearization of the Girsanov weights. They investigate both the discretization error and the finite-time approximation error. The designed numerical schemes are shown to be of bounded variance with respect to the integration time, which is a desirable feature for long time simulation. They also show how the discretization error can be improved to second order accuracy in the time step by modifying the weight process in an appropriate way, based on a formulation using a product of discrete martingales.

Another method to compute transport coefficients is based on a direct evaluation of the linear response. From a numerical viewpoint, this method works better when the regime of linear response holds true for forcings as large as possible. Renato Spacek started his PhD work on this topic, the aim being to find (possibly non-physical) forcings which have the desired linear response, while minimizing the nonlinear part of the response.

Coarse-grained descriptions of physical systems can be derived once one has a set of so-called reaction coordinates, namely a low-dimensional (nonlinear) function of the system configuration which describes the main features of the system and in particular allows the computation of free energy profiles. Reaction coordinates are often based on an intuitive understanding of the system, and one would like to complement this intuition or even replace it with automated tools. With Z. Belkacemi and E. Gkeka (Sanofi, France), T. Lelièvre and G. Stoltz studied a machine learning technique to find reaction coordinates to be used in conjunction with free energy biasing techniques such as the adaptive biasing force method. This allows for instance to improve the sampling of configurations of complex proteins. The approach is based on autoencoders, for which the bottleneck layer provides a low-dimensional representation of high-dimensional atomistic systems. Applications to alanine dipeptide and chignolin are considered.
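A minimal sketch of the autoencoder idea, in generic PyTorch (the dimensions, architecture and training loop are illustrative assumptions of ours, not those of the cited work):

```python
import torch
import torch.nn as nn

# Hypothetical sizes: d-dimensional configurations, k-dimensional bottleneck
d, k = 30, 2

encoder = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, k))
decoder = nn.Sequential(nn.Linear(k, 64), nn.Tanh(), nn.Linear(64, d))
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(configs, n_epochs=200):
    """configs: (n_samples, d) tensor of sampled configurations."""
    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(configs), configs)  # reconstruction error
        loss.backward()
        optimizer.step()
    # The trained encoder is the candidate reaction coordinate xi(q)
    return encoder
```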

Thomas Pigeon, a PhD student funded by Inria in the framework of the joint laboratory with IFPEN, is currently working on interpretations of autoencoders based on conditional expectations, and more generally on the use of techniques from machine learning to find reaction coordinates which can be used to sample dynamical properties and rare events, and not only free energy profiles.

In a different direction, it is well known that molecular dynamics simulations often require extremely long trajectories to be computed, either to reach equilibrium or to see meaningful events such as transitions from one metastable well to another. It is thus tempting to use the parareal algorithm, which was proposed two decades ago as a method for performing parallel-in-time computations, in order to improve the efficiency of such simulations. In collaboration with U. Sharma (Freie Universität Berlin, Germany), F. Legoll and T. Lelièvre have introduced in 25 an adaptive variant of the parareal algorithm, specifically tailored to molecular dynamics simulations. The algorithm is adaptive in the sense that the time horizon of the simulation is iteratively adjusted, in order to prevent the algorithm from simulating parts of the trajectory where the accuracy is deemed insufficient. A significant gain in efficiency is obtained in comparison to the standard parareal algorithm.
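For reference, here is a sketch of the plain (non-adaptive) parareal iteration which serves as the baseline; `F` and `G` are placeholder fine and coarse propagators, and the adaptive variant of 25 additionally adjusts the time horizon:

```python
def parareal(F, G, u0, n_intervals, n_iter):
    """Plain parareal iteration over n_intervals time windows.

    F: accurate but expensive propagator over one window,
    G: cheap coarse propagator over one window.
    The F evaluations in each iteration are independent and can run
    in parallel; the G sweep is sequential but cheap.
    """
    U = [u0]
    for _ in range(n_intervals):                        # initial coarse sweep
        U.append(G(U[-1]))
    for _ in range(n_iter):
        Fu = [F(U[n]) for n in range(n_intervals)]      # parallel stage
        Gu = [G(U[n]) for n in range(n_intervals)]
        U_new = [u0]
        for n in range(n_intervals):                    # sequential correction
            U_new.append(G(U_new[-1]) + Fu[n] - Gu[n])
        U = U_new
    return U

# Toy usage: du/dt = -u on [0, 1]; fine = 100 Euler substeps, coarse = 1
dT = 0.1
F = lambda u: u * (1 - dT / 100) ** 100
G = lambda u: u * (1 - dT)
print(parareal(F, G, 1.0, n_intervals=10, n_iter=3)[-1])
```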

A natural follow-up of this work is to implement this algorithm in LAMMPS (a software widely distributed in the applied communities), which will open the way to the simulation of realistic physical systems. This work is currently performed by O. Gorynina, F. Legoll and T. Lelièvre, in collaboration with D. Perez (Los Alamos Nat. Lab., USA).

This research area was recently developed with the arrival of Urbain Vaes in the project-team. In a first preprint, drawing inspiration from consensus-based methods in the optimization community, U. Vaes together with J. A. Carrillo (University of Oxford, United Kingdom), F. Hoffmann (Hausdorff Center for Mathematics, Germany) and A. M. Stuart (Caltech, USA) proposed a sampling method based on a system of particles interacting via the mean and covariance of the empirical probability distribution, appropriately reweighted by a power of the target probability distribution 34. They studied the long-time behavior of the mean-field equation describing the evolution of the particle system in the limit of infinitely many particles. A reasoning based on Laplace's method makes it possible to prove, in simple settings, the exponential convergence of the solution to a Gaussian approximation of the target probability distribution. In a second project, U. Vaes together with G. A. Pavliotis (Imperial College London, United Kingdom) and A. M. Stuart studied a Langevin-type sampling method in which the gradient of the potential is approximated using a multiscale particle system 47. The goal was to develop a method relying on a gradient approximation similar in structure to that of ensemble Kalman-based methods but which, unlike these methods, can be systematically refined in order to reduce the sampling error. Techniques from multiscale analysis were used to prove the convergence of the numerical solution to a preconditioned overdamped Langevin dynamics, in an appropriate limit for the parameters of the method. In another project, U. Vaes together with J. A. Carrillo and C. Totzeck (University of Wuppertal, Germany) investigated the viability of a simple penalization approach for incorporating constraints in consensus-based and ensemble Kalman methods for optimization 35. The efficiency of the proposed method was studied by means of careful numerical experiments.
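A minimal sketch of a consensus-based iteration in the spirit of these works (a generic anisotropic-noise variant with illustrative parameters of ours; the sampling method of 34 additionally reweights by a power of the target density and uses covariance information):

```python
import numpy as np

def cbo(f, X0, n_steps, dt=0.01, lam=1.0, sigma=0.7, beta=30.0, seed=0):
    """Consensus-based optimization of f: particles (rows of X) drift
    toward a Gibbs-weighted empirical mean and diffuse around it."""
    rng = np.random.default_rng(seed)
    X = np.array(X0, dtype=float)
    for _ in range(n_steps):
        fx = f(X)
        w = np.exp(-beta * (fx - fx.min()))    # stabilized Gibbs weights
        m = w @ X / w.sum()                    # weighted consensus point
        noise = rng.standard_normal(X.shape)
        X = X - lam * (X - m) * dt + sigma * np.abs(X - m) * np.sqrt(dt) * noise
    return m

# Toy usage: minimize a shifted quadratic with 100 particles in 2D
f = lambda X: np.sum((X - np.array([1.0, -2.0])) ** 2, axis=1)
print(cbo(f, np.random.randn(100, 2), n_steps=2000))
```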

From the theoretical viewpoint, the project-team has pursued the development of a general theory for the homogenization of deterministic materials modeled as periodic structures with defects. This work, performed in collaboration with X. Blanc, P.-L. Lions and P. Souganidis, is also the topic of the ongoing PhD thesis of R. Goudey. We recall that the periodic setting is the oldest traditional setting for homogenization. Alternative settings include the quasi- and almost-periodic settings, and the random stationary setting. From a modeling viewpoint, assuming that multiscale materials are periodic is however an idealistic assumption: natural media (such as the subsoil) have no reason to be periodic, and manufactured materials, even though sometimes indeed designed to be periodic, are often not periodic in practice, e.g. because of imperfect manufacturing processes or small geometric details that break the periodicity and can be critical in terms of industrial performance. Quasi- and almost-periodic settings are not appropriate answers to this difficulty. Using a random stationary setting may be tempting from a modeling viewpoint (in the sense that everything that is not known about the microstructure can be “hidden” in a probabilistic description), but this often leads to prohibitively expensive computations, since the model is very general. The direction explored by the project-team consists in considering periodic structures with defects, a setting which is rich enough to fit reality while still leading to affordable computations.

Considering defects in the structure raises many mathematical questions. From an overall perspective, homogenization is based upon the determination of corrector functions, which are used to compute the homogenized properties of the materials as well as to provide a fine-scale description of the oscillatory solution. In general, corrector problems are posed on the whole space. In the periodic and random stationary settings, it turns out that the corrector problems can actually be posed on a bounded domain. Powerful tools (e.g. Rellich compactness theorems) can then be used to establish well-posedness and qualitative properties of the correctors. The presence of defects breaks this property, making the corrector problem non-compact. Additional tools (such as the concentration-compactness method or the theory of Calderón-Zygmund operators) are required to circumvent this difficulty.
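For the divergence-form setting introduced above, the corrector $w_p$ associated with a direction $p \in \mathbb{R}^d$ solves
$$ -\operatorname{div}\left[ A(y) \left( p + \nabla w_p(y) \right) \right] = 0 \quad \text{in } \mathbb{R}^d, $$
and the homogenized matrix is recovered from averages of the corrected fluxes; in the periodic case, for instance, $A^\star p = \fint_Q A(y)\left(p + \nabla w_p(y)\right) dy$, where $Q$ is the periodicity cell.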

Starting from the simplest case (localized defects in a purely diffusive equation, a setting for which we were able to show two-scale expansion results), we have followed two directions: (i) considering more complex equations (advection-diffusion equations, Hamilton-Jacobi equations, ...) for which the defects, although localized, may have an impact on a larger and larger neighborhood, and (ii) considering more complex (i.e. less localized) defects, as in 43 (where defects become increasingly rare but do not decay at infinity); defects in the form of interfaces between two perfectly periodic materials also fall within this research direction. A monograph that summarizes the contributions of the project-team on this topic, along with a general perspective on the field, has recently been submitted for publication.

Also in the context of homogenization theory, O. Gorynina, C. Le Bris and F. Legoll have concluded their work on the question of how to determine the homogenized coefficient of a heterogeneous medium without explicitly performing a homogenization procedure. This work is a follow-up on earlier works over the years by C. Le Bris and F. Legoll, in collaboration with K. Li and later S. Lemaire. More precisely, they have completed the mathematical study and the numerical improvement of a computational approach initially introduced by R. Cottereau (CNRS Marseille). This approach combines, in the Arlequin framework, the original fine-scale description of the medium (modeled by an oscillatory coefficient) with an effective description (modeled by a constant coefficient), and optimizes the coefficient of the effective medium to best fit the response of a purely homogeneous medium. In the limit of asymptotically infinitely fine structures, the approach yields the value of the homogenized coefficient. Various computational improvements have been suggested in 22, while the theoretical study of the approach has been performed in 42.

From a numerical perspective, the Multiscale Finite Element Method (MsFEM) is a classical strategy to address the situation when the homogenized problem is not known (e.g. in difficult nonlinear cases), or when the scale of the heterogeneities, although small, is not considered to be zero (and hence the homogenized problem cannot be considered as a sufficiently accurate approximation).

During the year, several research tracks have been pursued in this general direction.

The MsFEM approach uses a Galerkin approximation of the problem on a pre-computed basis, obtained by solving local problems mimicking the problem at hand at the scale of mesh elements, with carefully chosen right-hand sides and boundary conditions. The initially proposed version of MsFEM uses as basis functions the solutions to these local problems, posed on each mesh element, with null right-hand sides and with the coarse P1 elements as Dirichlet boundary conditions. Various improvements have since been proposed, such as the oversampling variant, which solves local problems on larger domains and restricts their solutions to the considered element. In collaboration with U. Hetmaniuk (at the time at the University of Washington in Seattle, USA), C. Le Bris, F. Legoll and P.-L. Rothé have completed in 45 the study of yet another improved MsFEM variant, with enrichments based on Legendre polynomials, both in the bulk of the mesh elements and on their interfaces. A convergence analysis of this new variant has been performed. In addition, residual-type a posteriori error estimators have been proposed and certified, leading to a numerical strategy where the degree of enrichment is locally adapted in order to reach, at the smallest computational cost, a given error. Promising numerical results have been obtained.
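In the classical variant recalled at the beginning of the previous paragraph, the multiscale basis function $\phi_i^\varepsilon$ attached to a coarse node is defined element-wise by
$$ -\operatorname{div}\left( A_\varepsilon \nabla \phi_i^\varepsilon \right) = 0 \ \text{ in each element } K, \qquad \phi_i^\varepsilon = \phi_i^{P_1} \ \text{ on } \partial K, $$
where $\phi_i^{P_1}$ is the corresponding coarse $P_1$ basis function.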

Within the ongoing PhD of R. Biezemans, several directions of research have been explored; the results obtained are currently being collected in manuscripts in preparation. First, although MsFEM approaches were proposed many years ago, it turns out that not all specific settings are covered by the numerical analyses existing in the literature. Together with A. Lozinski (currently on delegation in the team), R. Biezemans, C. Le Bris and F. Legoll have extended the analysis of MsFEM to the case of rectangular meshes and to coefficients that are not necessarily Hölder continuous. An ongoing research effort is devoted to further improving the error estimates. Second, many (if not all) MsFEM approaches proposed to date are intrusive: they require changing the finite element basis set and adjusting it to the problem at hand. Obviously, not every software developer will allow such a modification. Motivated by this observation, R. Biezemans, C. Le Bris, F. Legoll and A. Lozinski have investigated how MsFEM approaches can be adapted (at the possible price of a marginal loss in their efficiency) so that they become as little intrusive as possible, and can thus be easily used in the context of single-scale legacy software. Promising results have been obtained in that vein for several variants of the MsFEM approach. Third, they have studied how to extend MsFEM approaches to various types of equations beyond the purely diffusive case, and in particular to multiscale advection-diffusion problems in the advection-dominated regime. Thin boundary layers are present in the exact solution, and numerical approaches should be carefully adapted to this situation, e.g. using stabilization. How stabilization and the multiscale nature of the problem interplay with one another is a challenging question, and several MsFEM variants have been compared.

In a different direction, we note that many numerical analysis studies of the MsFEM focus on obtaining a priori error bounds. In collaboration with L. Chamoin, who was on delegation in the project-team a few years ago from ENS Cachan, F. Legoll has been working on a posteriori error analysis for MsFEM approaches, with the aim of developing error estimation and adaptation tools. L. Chamoin and F. Legoll have extended to the MsFEM case approaches that are classical in the computational mechanics community for single-scale problems (for the control of the error in energy norm and of the error on quantities of interest 18), and which are based on the so-called Constitutive Relation Error (CRE). While performing these works for multiscale problems, they have reviewed the literature on a posteriori error estimation for single-scale problems. This work has led to the recent pedagogical review article 36.

In 2021, S. Boyaval has pursued his research to improve mathematical models of non-Newtonian fluids for applications to large time-space domains, where waves propagate at finite speed. The symmetric-hyperbolic system of conservation laws introduced in 2020 to unify the elastodynamics of hyperelastic materials with viscous fluid dynamics (through viscoelasticity of Maxwell type) has been extended 30, in particular to cover non-isothermal flows (highly relevant for applications to polymer melts).

Many research activities of the project-team are conducted in close collaboration with private or public companies: CEA, EDF, IFPEN, Sanofi, OSMOS Group. The project-team is also supported by the Office of Naval Research and the European Office of Aerospace Research and Development, for multiscale simulations of random materials. All these contracts are operated and administered by the École des Ponts, except the contracts with IFPEN, which are administered by Inria.

T. Lelièvre, G. Stoltz and F. Legoll participate in the Laboratoire International Associé (LIA) CNRS / University of Illinois at Urbana-Champaign on complex biological systems and their simulation by high performance computers. This LIA involves French research teams from Université de Nancy, Institut de Biologie Structurale (Grenoble) and Institut de Biologie Physico-Chimique (Paris). The LIA has been renewed for 4 years, starting January 1st, 2018.

Eric Cancès has been awarded a Simons Targeted Grant “Moiré materials magic” (September 2021 - August 2026). His co-PIs are Allan MacDonald (UT Austin, coordinating PI), Svetlana Jitomirskaya (UC Irvine), Efthimios Kaxiras (Harvard), Lin Lin (UC Berkeley), Mitchell Luskin (University of Minnesota), Angel Rubio (Max-Planck Institut), Maciej Zworski (UC Berkeley).

The ERC Synergy Grant EMC2 (ERC Grant Agreement number 810367, PIs E. Cancès, L. Grigori, Y. Maday, J.-P. Piquemal) has started in September 2019.

The Euro HPC grant TIME-X (PIs: Y. Maday and G. Samaey), focusing on parallel-in-time computations and in which F. Legoll and T. Lelièvre participate, has started on 1st April 2021.

The project-team is involved in several ANR projects:

Members of the project-team are participating in the following GdR:

The project-team is involved in two Labex: the Labex Bezout (2011-) and the Labex MMCD (2012-).

C. Le Bris participates in the Inria Challenge EQIP (Engineering for Quantum Information Processors), in particular in collaboration with P. Rouchon (QUANTIC project-team).

R. Benda co-organizes the PhD students and postdocs seminar of CERMICS.

S. Boyaval

E. Cancès is a member of the editorial boards of Mathematical Modelling and Numerical Analysis.

V. Ehrlacher

C. Le Bris

F. Legoll

T. Lelièvre

A. Levitt co-organizes the applied mathematics seminar of the CERMICS lab, and the internal seminar of the EMC2 project (Sorbonne Université).

G. Stoltz

The members of the project-team have taught the following courses.

At École des Ponts 1st year (equivalent to L3):

At École des Ponts 2nd year (equivalent to M1):

At the M2 “Mathématiques de la modélisation” of Sorbonne Université:

At other institutions:

The following PhD theses supervised by members of the project-team have been defended:

The following PhD theses supervised by members of the project-team are ongoing:

Project-team members have participated in the following PhD juries:

Project-team members have participated in the following habilitation juries:

Project-team members have participated in the following selection committees:

Members of the project-team have delivered lectures in the following seminars, workshops and conferences:

Members of the project-team have delivered the following series of lectures:

Members of the project-team have presented posters in the following seminars, workshops and international conferences:

Members of the project-team have participated (without giving talks nor presenting posters) in the following seminars, workshops and international conferences: