The MATHERIALS project-team was created jointly by the École des Ponts ParisTech (ENPC) and Inria in 2015. It is the follow-up and an extension of the former project-team MICMAC, originally created in October 2002. It is hosted by the CERMICS laboratory (Centre d'Enseignement et de Recherches en Mathématiques et Calcul Scientifique) at École des Ponts. The permanent research scientists of the project-team have positions at CERMICS and at two other laboratories of École des Ponts: Institut Navier and Laboratoire Saint-Venant. The scientific focus of the project-team is to analyze and improve the numerical schemes used in simulations of computational chemistry at the microscopic level, and to create simulations coupling this microscopic scale with meso- or macroscopic scales (possibly using parallel algorithms). Over the years, the project-team has accumulated an increasingly solid expertise on such topics, which are traditionally not well known by the community in applied mathematics and scientific computing. One of the major achievements of the project-team is to have created a corpus of literature, authoring books and research monographs on the subject 1, 2, 3, 4, 5, 6, 7 that other scientists may consult in order to enter the field.

Our group, originally only involved in electronic structure computations, continues to focus on many numerical issues in quantum chemistry, but now expands its expertise to cover several related problems at larger scales, such as molecular dynamics problems and multiscale problems. The mathematical derivation of continuum energies from quantum chemistry models is one instance of a long-term theoretical endeavour.

Quantum Chemistry aims at understanding the properties of matter through
the modelling of its behavior at a subatomic scale, where matter is
described as an assembly of nuclei and electrons.
At this scale, the equation that rules the interactions between these
constitutive elements is the Schrödinger equation. It can be
considered (except in a few special cases, notably those involving
relativistic phenomena or nuclear reactions)
as a universal model for at least three reasons. First, it contains all
the physical
information of the system under consideration so that any of the
properties of this system can in theory be deduced from the
Schrödinger
equation associated to it. Second, the Schrödinger equation does not
involve any
empirical parameters, except some fundamental constants of Physics (the
Planck constant, the mass and charge of the electron, ...); it
can thus be written for any kind of molecular system provided its
chemical
composition, in terms of the nature of its nuclei and its number of
electrons, is known. Third, this model enjoys remarkable predictive
capabilities, as confirmed by comparisons with a large amount of
experimental data of various types.
On the other hand, using this high quality model requires working with
space and time scales which are both very
tiny: the typical size of the electronic cloud of an isolated atom is
the Angström (10⁻¹⁰ m). Note also that not all macroscopic properties
can be simply upscaled from the short-time behavior of a
tiny sample of matter. Many of them derive from ensemble or bulk
effects, which are far from easy to understand and to model.
Striking examples are found in solid state materials or biological
systems. Cleavage, the ability of minerals to naturally split along
crystal surfaces (e.g. mica yields thin flakes), is an ensemble
effect. Protein folding is
also an ensemble effect that originates from the presence of the
surrounding medium; it is responsible for peculiar properties
(e.g. unexpected acidity of some reactive site enhanced by special
interactions) upon which vital processes are based.
However, it is undoubtedly true that many macroscopic phenomena originate from
elementary processes which take place at the atomic scale. Let us
mention for instance the fact that
the elastic constants of a perfect crystal or the color of a chemical
compound (which is related to the wavelengths
absorbed or emitted during optic transitions between electronic
levels) can be evaluated by atomic scale calculations. In the same
fashion, the lubricating properties of graphite are essentially due to a
phenomenon which can be entirely modeled at the atomic scale.
It is therefore reasonable to simulate the behavior of matter at the
atomic scale in order to understand what is going on at the
macroscopic one.
The journey is however a long one. Starting from the basic
principles of Quantum Mechanics to model the matter at the subatomic
scale,
one finally uses statistical mechanics to reach the macroscopic
scale. It is often necessary to rely on intermediate steps to deal with
phenomena which take place on various mesoscales.
It may then be possible to couple one description of the system with some
others within the so-called multiscale models.
The sequel indicates how this journey can be completed,
focusing on the smallest scale (the subatomic one) rather than on the
larger ones.
It has already been mentioned that at the subatomic scale,
the behavior of nuclei and electrons is governed by the Schrödinger
equation, either in its time-dependent form
or in its time-independent form. Let us only mention at this point that
the time-dependent equation is a first-order linear evolution
equation, whereas the time-independent equation is a linear eigenvalue
equation.
For the reader more familiar with numerical analysis
than with quantum mechanics, the linear nature of the problems stated
above may look auspicious. What makes the
numerical simulation of these equations
extremely difficult is essentially the huge size of the Hilbert
space: indeed, this space is roughly some
symmetry-constrained subspace of L²(ℝ^{3(M+N)}), where M and N
respectively denote the number of nuclei and the number of electrons,
so that its dimension grows exponentially with the system size. The
standard approximations of quantum chemistry (such as the Hartree–Fock
or Kohn–Sham models) reduce the problem to systems of nonlinear partial
differential equations, each of these equations being posed on the much
smaller space L²(ℝ³).
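
To make the nature of the time-independent problem concrete, the following sketch (not code from the project-team; the one-dimensional harmonic potential, the grid size and the reduced units ħ = m = ω = 1 are illustrative assumptions) discretizes a Schrödinger operator by finite differences and solves the resulting linear eigenvalue problem:

```python
import numpy as np

# Illustrative sketch: the time-independent Schrodinger equation as a
# linear eigenvalue problem, for one particle in 1D with a harmonic
# potential V(x) = x^2/2, in reduced units hbar = m = omega = 1.
n, L = 400, 8.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# Finite-difference Laplacian (Dirichlet boundary) and diagonal potential
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2
H = -0.5 * lap + np.diag(0.5 * x**2)

E = np.linalg.eigvalsh(H)[:3]
print(E)  # close to the exact levels 0.5, 1.5, 2.5
```

Even in this toy setting the matrix size grows with the spatial resolution; for N interacting electrons the same construction would live on a grid in dimension 3N, which is precisely the curse of dimensionality alluded to above.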

As the size of the systems one wants to study increases, one needs to
resort to more efficient numerical techniques. In computational chemistry,
the typical scaling law for the complexity of computations with respect
to the size of the system under study is N³, N being for instance the number of electrons.

An alternative strategy to reduce the complexity of ab initio
computations is to try to couple different models at different
scales. Such a mixed strategy can be either a sequential one or a
parallel one, in the sense that the output of the model at one scale can either be used as input data for the model at the larger scale, or the models at the two scales can be coupled and solved concurrently.

The coupling of different scales can even go up to the macroscopic scale, with methods that couple a microscopic representation of matter, or at least a mesoscopic one, with the equations of continuum mechanics at the macroscopic level.

The orders of magnitude used in the microscopic representation of
matter are far from the orders of magnitude of the macroscopic
quantities we are used to: the number of particles under
consideration in a macroscopic sample of material is of the order of
the Avogadro number, roughly 6.02 × 10²³.

To give some insight into such a large number of particles contained in
a macroscopic sample, it is helpful to
compute the number of moles of water on earth. Recall that one mole of water
corresponds to 18 mL, so that a standard glass of water contains roughly
10 moles, and a typical bathtub contains about 10⁴ moles. On the other
hand, there are approximately 10¹⁸ m³ of water on earth, i.e. close to
10²³ moles: the number of moles of water on earth is thus itself of the
order of the number of molecules in a single mole.
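
These orders of magnitude can be checked by elementary arithmetic; the volumes used in the sketch below are rough illustrative assumptions (a 0.18 L glass, a 200 L bathtub, about 1.4 × 10²¹ L of water on earth), not figures from the report:

```python
# Back-of-the-envelope orders of magnitude for amounts of water.
mole_volume_mL = 18.0      # one mole of liquid water occupies ~18 mL

def moles(volume_mL):
    return volume_mL / mole_volume_mL

glass_mL = 180.0           # a standard glass (illustrative)
bathtub_mL = 2.0e5         # a typical bathtub, ~200 L (illustrative)
earth_water_mL = 1.4e24    # total water on earth, ~1.4e21 L (illustrative)

print(moles(glass_mL))        # ~10 moles
print(moles(bathtub_mL))      # ~1e4 moles
print(moles(earth_water_mL))  # ~1e23 moles, comparable to Avogadro's number
```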

For practical numerical computations
of matter at the microscopic level, following the dynamics of every atom would
require simulating on the order of 10²³ atoms, which is clearly out of reach.

Describing the macroscopic behavior of matter knowing its microscopic
description
therefore seems out of reach. Statistical physics allows us to bridge the gap
between microscopic and macroscopic descriptions of matter, at least on a
conceptual
level. The question is whether the quantities estimated on a system of a large but finite number N of particles correctly approximate the corresponding macroscopic quantities, which are formally obtained in the thermodynamic limit N → ∞.

Despite its intrinsic limitations on spatial and timescales, molecular simulation has been used and developed over the past 50 years, and its number of users keeps increasing. As we understand it, it has two major aims nowadays.

First, it can be
used as a numerical microscope, which allows us to perform
“computer” experiments.
This was the initial motivation for simulations at the microscopic level:
physical theories were tested on computers.
This use of molecular simulation is particularly clear in its historic
development, which was triggered and sustained by the physics of simple
liquids. Indeed, there was no good analytical theory for these systems,
and the observation of computer trajectories was very helpful to guide the
physicists'
intuition about what was happening in the system, for instance the mechanisms
leading to molecular diffusion. In particular,
the pioneering works on Monte Carlo methods by Metropolis et al., and the first
molecular dynamics
simulation of Alder and Wainwright were performed because of such motivations.
Today, understanding the behavior of matter at the
microscopic level can still be difficult from an experimental viewpoint
(because of the high resolution required, both in time and in
space), or because we simply do not know what to look for!
Numerical simulations are then a valuable tool to test some
ideas or obtain some data to process and analyze in order
to help assess experimental setups. This is
particularly true for current nanoscale systems.

Another major aim of molecular simulation, maybe even more important than the
previous one,
is to compute macroscopic
quantities or thermodynamic properties,
typically through averages of some functionals of the system.
In this case, molecular simulation is a
way to obtain quantitative information on a system,
instead of resorting to approximate theories, constructed for simplified models,
and giving only qualitative answers.
Sometimes, these properties are accessible through experiments,
but in some cases only numerical computations are possible
since experiments may be unfeasible or too costly
(for instance, when high pressure or large temperature regimes are considered,
or when studying materials not yet synthesized).
More generally, molecular simulation is a tool to explore the links between
the microscopic and macroscopic properties of a material, allowing
one to address modelling questions such as “Which microscopic ingredients are
necessary
(and which are not) to observe a given macroscopic behavior?”
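
As a minimal illustration of such ensemble averages, the following sketch (the potential and all parameters are illustrative assumptions, not one of the team's production methods) uses the Metropolis algorithm mentioned above to sample a one-dimensional Boltzmann measure and estimate the average of an observable:

```python
import numpy as np

# Minimal Metropolis sampler of the 1D Boltzmann measure proportional
# to exp(-beta V(x)), used to estimate the ensemble average of x^2.
# Potential and parameters are illustrative only.
rng = np.random.default_rng(0)
beta = 1.0

def V(x):
    return 0.5 * x**2  # harmonic potential: exact average of x^2 is 1/beta

x, samples = 0.0, []
for _ in range(200_000):
    y = x + rng.normal()                              # random proposal
    if rng.random() < np.exp(-beta * (V(y) - V(x))):  # accept/reject
        x = y
    samples.append(x * x)

avg = np.mean(samples)
print(avg)  # close to the exact value 1/beta = 1
```

The estimated average converges as the number of samples grows; for metastable systems, however, successive samples are strongly correlated, which is the difficulty addressed by the enhanced sampling methods discussed below.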

Over the years, the project-team has developed an increasing expertise on multiscale modeling for materials science at the continuum scale. The presence of numerous length scales in materials science problems indeed represents a challenge for numerical simulation, especially when some randomness is assumed on the materials. This randomness can take various forms, including defects in crystals, thermal fluctuations, and impurities or heterogeneities in continuous media. Standard methods available in the literature to handle such problems often lead to very costly computations. Our goal is to develop numerical methods that are more affordable.

Because we cannot embrace all difficulties at once, we focus on a simple case, where the fine-scale and the coarse-scale models can be written similarly, in the form of a simple elliptic partial differential equation in divergence form. The fine-scale model includes heterogeneities at a small scale, a situation which is formalized by the fact that the coefficients in the fine-scale model vary on a small length scale. After homogenization, this model yields an effective, macroscopic model, which includes no small scale (the coefficients of the coarse-scale equations are thus simply constant, or vary on a coarse length scale). In many cases, a sound theoretical groundwork exists for such homogenization results. The difficulty stems from the fact that the models generally lead to prohibitively costly computations (this is for instance the case in the random stationary setting).

Our aim is to focus on different settings, all relevant from an applied viewpoint, and leading to practically affordable computational approaches. The case of ordered (that is, in this context, periodic) systems is now well understood, both from a theoretical and a numerical standpoint.
Our aim is to turn to cases, more relevant in practice, where some disorder is present in the microstructure of the material, to take into account defects in crystals, impurities in continuous media, etc. This disorder may be mathematically modeled in various ways.

Such endeavors raise several questions. The first one, theoretical in nature, is to extend the classical theory of homogenization (well developed e.g. in the periodic setting) to such disordered settings. Next, after homogenization, we expect to obtain an effective, macroscopic model, which includes no small scale. A second question is to introduce affordable numerical methods to compute the homogenized coefficients. An alternative approach, more numerical in nature, is to directly attack the oscillatory problem by using discretization approaches tailored to the multiscale nature of the problem (the construction of which is often inspired by theoretical homogenization results).
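
In the simplest one-dimensional periodic case, the homogenized coefficient is known in closed form: for −(a(x/ε) u′)′ = f with a periodic coefficient a, it is the harmonic mean of a, not its arithmetic mean. A quick numerical check, with the illustrative coefficient a(y) = 2 + sin(2πy):

```python
import numpy as np

# 1D periodic homogenization check: for -(a(x/eps) u')' = f with
# a(y) = 2 + sin(2 pi y), the homogenized coefficient a* is the
# harmonic mean of a (here exactly sqrt(3)), not its arithmetic mean.
N = 100_000
y = np.arange(N) / N                 # uniform grid on one period
a = 2.0 + np.sin(2.0 * np.pi * y)

a_arith = a.mean()                   # arithmetic mean: 2
a_star = 1.0 / (1.0 / a).mean()      # harmonic mean: sqrt(3) ~ 1.732
print(a_arith, a_star)
```

In dimension larger than one no such closed formula exists in general, and the homogenized coefficient must be obtained from corrector problems, which is precisely where the computational difficulties discussed here arise.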

A track of the project-team's activity this year has been the investigation of continuum eigenstates, as opposed to the bound states that form much of the project-team's usual focus. Such states are relevant to the study of processes where electrons propagate away from the system under consideration, such as ionization. They are delocalized, complicating their discretization. In 35 and 36, together with colleagues from the Laboratoire de Chimie Théorique at Sorbonne Université, É. Cancès and A. Levitt have proposed a method to compute the photoionization spectrum for atoms in time-dependent density functional theory (TDDFT) in the Sternheimer formalism. This method, inspired by similar schemes in numerical wave propagation, employs an analytic Dirichlet-to-Neumann map to impose the correct boundary conditions on the Sternheimer equations, which mathematically take the form of a perturbed Helmholtz equation with a Coulomb potential. In 56, E. Letournel and A. Levitt, together with physicist colleagues from CEA Grenoble, have proposed a method to compute electronic resonances in crystals with defects. The method involves the computation of analytic continuations of Green functions of periodic operators, which is accomplished by a complex deformation of the Brillouin zone.

E. Cancès, G. Kemlin and A. Levitt have studied the numerical properties of response computations in density functional theory at finite temperature. They have proposed a method based on a Schur complement to increase the stability and efficiency of iterative solvers for the Sternheimer equations 49.

Together with D. Gontier (U. Paris Dauphine and ENS Paris), E. Cancès and L. Garrigue provided a formal derivation of a reduced model for twisted bilayer graphene (TBG) from Density Functional Theory. This derivation is based on a variational approximation of the TBG Kohn-Sham Hamiltonian and asymptotic limit techniques. In contrast with other approaches, it does not require the introduction of an intermediate tight-binding model. The model thus obtained is similar to the Bistritzer-MacDonald (BM) model but contains additional terms. Its parameters can be easily computed from Kohn-Sham calculations on single-layer graphene and untwisted bilayer graphene with different stackings. It allows one in particular to estimate the parameters of the BM model.

With G. Dusson (CNRS and U. of Franche-Comté), E. Cancès, G. Kemlin and L. Vidal proposed in 45 general criteria to construct optimal atom-centered basis sets in quantum chemistry. They focused in particular on two criteria, one based on the ground-state one-body density matrix of the system and the other based on the ground-state energy. The performance of these two criteria was numerically tested and compared on a parametrized eigenvalue problem, which corresponds to a one-dimensional toy version of the ground-state dissociation of a diatomic molecule.

In solid state physics, electronic properties of crystalline materials are often inferred from the spectrum of periodic Schrödinger operators. As a consequence of Bloch's theorem, the numerical computation of electronic quantities of interest involves computing derivatives or integrals over the Brillouin zone of so-called energy bands, which are piecewise smooth, Lipschitz continuous periodic functions obtained by solving a parametrized elliptic eigenvalue problem on a Hilbert space of periodic functions. Classical discretization strategies for resolving these eigenvalue problems produce approximate energy bands that are either non-periodic or discontinuous, both of which cause difficulty when computing numerical derivatives or employing numerical quadrature. In a paper with M. Hassan (Sorbonne Université) 48, E. Cancès and L. Vidal studied an alternative discretization strategy based on an ad hoc operator modification approach. While specific instances of this approach have been proposed in the physics literature, they introduced a systematic formulation of this operator modification approach. They derived a priori error estimates for the resulting energy bands and showed that these bands are periodic and can be made arbitrarily smooth (away from band crossings) by adjusting suitable parameters in the operator modification approach.
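
The following sketch illustrates such energy bands for a one-dimensional periodic Schrödinger operator in a truncated plane-wave basis (the potential 2 cos x, the basis size and the k-grid are illustrative assumptions; this is a standard textbook discretization, not the operator modification scheme of the paper):

```python
import numpy as np

# Sketch: energy bands of the 1D periodic Schrodinger operator
# -1/2 d^2/dx^2 + 2 cos(x), lattice constant 2 pi, in a truncated
# plane-wave basis e^{iGx} with integer G. Illustrative parameters only.
G = np.arange(-8, 9)
off = np.ones(len(G) - 1)  # Fourier coefficients of 2 cos(x): V_{+-1} = 1

def bands(k, nbands=3):
    # Bloch fiber H(k): kinetic term on the diagonal, potential coupling
    H = np.diag(0.5 * (k + G) ** 2) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:nbands]

ks = np.linspace(-0.5, 0.5, 51)       # the Brillouin zone
E = np.array([bands(k) for k in ks])  # piecewise smooth energy bands
gap = E[0, 1] - E[0, 0]               # band gap at the zone boundary
print(gap)
```

Quantities of interest are then integrals or derivatives of these bands over the Brillouin zone, which is where the smoothness and periodicity issues studied in the paper matter.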

In his post-doctoral work co-supervised by Claude Le Bris (MATHERIALS) and Pierre Rouchon (Inria QUANTIC), Masaaki Tokieda addresses various issues related to the numerical simulation and the fundamental understanding of several models of physical systems likely candidates to play a crucial role in quantum computing. More specifically, he studies several pathways to efficiently account for adiabatic elimination in the simulation of composite quantum systems in interactions, modeled by Lindblad type master equations. The specific question currently under study is the expansion up to high orders and the compatibility of such an expansion with the formal requirements of consistency of quantum mechanical evolutions. He is also planning to address various other connected issues, all aiming at better fundamental understanding and a more effective simulation of open quantum systems.

Tensor methods have proved to be very powerful tools in order to represent high-dimensional objects with low complexity. Such methods prove to have a wide range of applications in quantum chemistry, for instance for the approximation of the ground state wavefunction of a molecular system when the number of electrons is large. The DMRG method is one example of such a numerical scheme. Research efforts are led in the team so as to propose new methodological developments in order to improve on the current state-of-the-art tensor methods.

In 20, V. Ehrlacher, M. Fuente-Ruiz and D. Lombardi (Inria COMMEDIA) introduce a method to compute an approximation of a given tensor as a sum of Tensor Trains (TTs), where the order of the variates and the values of the ranks can vary from one term to the other in an adaptive way. The numerical scheme is based on a greedy algorithm and an adaptation of the TT-SVD method. The proposed approach can also be used to compute an approximation of a tensor in the Canonical Polyadic (CP) format, as an alternative to standard algorithms like the Alternating Least Squares (ALS) or Alternating Singular Value Decomposition (ASVD) methods. Some numerical experiments are presented, in which the proposed method is compared to the ALS and ASVD methods for the construction of a CP approximation of a given tensor, and performs particularly well for high-order tensors. The interest of approximating a tensor as a sum of Tensor Trains is illustrated in several numerical test cases.
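
For reference, the classical TT-SVD decomposition on which such schemes build can be sketched in a few lines: successive truncated SVDs of matrix unfoldings produce the tensor-train cores (this is the standard algorithm, not the adaptive sum-of-TTs method of the paper; the low-rank test tensor is illustrative):

```python
import numpy as np

# Classical TT-SVD sketch: successive truncated SVDs of unfoldings.
def tt_svd(T, tol=1e-10):
    cores, r = [], 1
    dims = T.shape
    M = T.reshape(dims[0], -1)
    for n in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))      # truncation rank
        cores.append(U[:, :rank].reshape(r, dims[n], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[n + 1], -1)
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=1)  # contract the rank index
    return out.reshape([core.shape[1] for core in cores])

# A rank-2 test tensor: sum of two outer products
rng = np.random.default_rng(1)
u1, v1, w1 = rng.normal(size=4), rng.normal(size=5), rng.normal(size=6)
u2, v2, w2 = rng.normal(size=4), rng.normal(size=5), rng.normal(size=6)
T = np.einsum('i,j,k->ijk', u1, v1, w1) + np.einsum('i,j,k->ijk', u2, v2, w2)

cores = tt_svd(T)
print([c.shape for c in cores])  # TT ranks are at most 2 here
```

The number of entries stored in the cores grows linearly in the tensor order for fixed ranks, instead of exponentially, which is the point of such low-complexity representations.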

The aim of computational statistical physics is to compute macroscopic properties of materials starting from a microscopic description, using concepts of statistical physics (thermodynamic ensembles and molecular dynamics). The contributions of the team can be divided into five main topics: (i) the improvement of techniques to sample the configuration space; (ii) the study of simulation methods to efficiently simulate nonequilibrium systems; (iii) the sampling of dynamical properties and rare events; (iv) the use and development of machine learning tools in molecular dynamics and sampling; (v) the use of particle methods for sampling and optimization.

There is still a need to improve techniques to sample the configuration space. In 25, Tony Lelièvre, together with Lucie Delemotte (KTH, Sweden), J. Hénin (IBPC, France), Michael Shirts (University of Colorado, USA) and Omar Valsson (MPI Mainz, Germany), provides an overview of enhanced sampling algorithms. These algorithms have emerged as powerful methods to extend the potential of molecular dynamics simulations and allow the sampling of larger portions of the configuration space of complex systems. This "living" review is intended to be updated to continue to reflect the wealth of sampling methods as they emerge in the literature.

Many systems in computational statistical physics are not at equilibrium. This is in particular the case when one wants to compute transport coefficients, which determine the response of the system to some external perturbation. For instance, the thermal conductivity relates an applied temperature difference to an energy current through Fourier's law, while the mobility coefficient relates an applied external constant force to the average velocity of the particles in the system. G. Stoltz reviewed in 66 the motivations and mathematical framework involved in the computation of transport coefficients, with a particular emphasis on the numerical analysis of the estimators at hand.

The main limitation of the usual methods to compute transport coefficients is the large variance of the estimators, which motivates the search for dedicated variance reduction strategies. Such a method was proposed by G. Pavliotis (Imperial College London, United Kingdom), G. Stoltz and U. Vaes in the context of the estimation of the mobility via Einstein's method in 65, although the method can be adapted to other transport coefficients. The fundamental idea is to approximate the solution to the Poisson equation determining the transport coefficient, and to rely on Itô calculus to construct a random variable strongly correlated with the squared displacement from the initial position. The motivation of this work was to estimate the mobility of the underdamped Langevin dynamics of two-dimensional systems for low values of the friction, in an attempt to (in)validate physical conjectures about the divergence of the mobility as the friction goes to zero.
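
The structure of Einstein's estimator, and its statistical variance, can be seen on a toy example: for a free overdamped Langevin dynamics the diffusion coefficient is 1/β exactly, and it is estimated below from the mean squared displacement over many independent replicas (all parameters are illustrative assumptions):

```python
import numpy as np

# Einstein's method on a toy case: free overdamped Langevin dynamics
# dX = sqrt(2/beta) dW, whose diffusion coefficient is exactly 1/beta.
# All parameters below are illustrative.
rng = np.random.default_rng(0)
beta, dt, nsteps, nreplicas = 1.0, 1e-3, 2000, 20_000

X = np.zeros(nreplicas)
for _ in range(nsteps):  # Euler-Maruyama steps for each replica
    X += np.sqrt(2.0 * dt / beta) * rng.normal(size=nreplicas)

T = nsteps * dt
D = np.mean(X**2) / (2.0 * T)  # Einstein estimator of the diffusion coefficient
print(D)                       # close to 1/beta = 1
```

Each replica contributes one squared displacement, and the relative variance of this estimator does not decrease as the simulation time grows, which is the difficulty that variance reduction strategies address.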

Sampling trajectories which link metastable states of the target probability measure, and estimating the associated transition rates from one metastable state to another, are difficult tasks which require dedicated numerical methods. Various works along these lines appeared as preprints this year.

In 62, Tony Lelièvre, together with Mouad Ramil (Seoul National University, South Korea) and Julien Reygner (CERMICS, France), propose and analyze a simple and complete numerical method to estimate statistics of transitions between metastable states for the Langevin dynamics, based on the so-called Hill relation. More precisely, they prove the Hill relation in the fairly general context of positive Harris recurrent chains, and show that this formula applies to the Langevin dynamics. Moreover, they provide an explicit expression of the invariant measure involved in the Hill relation for the Langevin dynamics, and describe an elementary exact simulation procedure.

In 61, Tony Lelièvre, together with Boris Nectoux (Laboratoire de Mathématiques Blaise Pascal, France) and Dorian Le Peutrec (Institut Denis Poisson, France), conclude a series of papers which aim at providing firm mathematical grounds to jump Markov models which are used to model the evolution of molecular systems, as well as to some numerical methods which use these underlying jump Markov models to efficiently sample metastable trajectories of the overdamped Langevin dynamics. More precisely, using the quasi-stationary distribution approach to analyze the metastable exit from the basin of attraction of a local minimum of the potential energy function, they prove that the exit event (exit position and exit time) of the overdamped Langevin dynamics in the small temperature regime can be accurately modeled by a jump Markov model parameterized by the Eyring–Kramers rates. From a mathematical viewpoint, the proof relies on tools from the semiclassical analysis of Witten Laplacians on bounded domains. The main difficulty is that, since they consider as metastable states the basins of attraction of the local minima of the energy, the exit regions are neighborhoods of saddle points of the energy, and many standard techniques (such as WKB approximations) cannot handle critical points on the boundary.

In 34, Tony Lelièvre, together with Mouad Ramil (Seoul National University, South Korea) and Julien Reygner (CERMICS, France), give an overview of some of the results obtained during the PhD work of Mouad Ramil. More precisely, the paper provides a self-contained analysis of the Parallel Replica algorithm applied to the Langevin dynamics. This algorithm was designed to efficiently sample metastable trajectories relying on a parallelization in time technique. The analysis relies on results on the existence of quasi-stationary distributions of the Langevin dynamics in domains bounded in positions. The article also contains some discussions about the overdamped limit of the quasi-stationary distribution.

Another approach to sampling reactive trajectories is to allow for longer integration times, thanks to dedicated algorithmic developments. In 58, Frédéric Legoll and Tony Lelièvre, together with Olga Gorynina (WSL-SLF, Switzerland) and Danny Perez (Los Alamos National Laboratory, USA), numerically investigate an adaptive version of the parareal algorithm in the context of molecular dynamics. This method makes it possible to integrate the dynamics under consideration more efficiently in time. The parareal algorithm uses a family of machine-learning spectral neighbor analysis potentials (SNAP) as fine, reference potentials and embedded-atom method potentials (EAM) as coarse potentials. The numerical results (obtained using LAMMPS, a very broadly used software within the materials science community) demonstrate significant computational gains when using the adaptive parareal algorithm in comparison to a sequential integration of the Langevin dynamics.
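
The parareal iteration itself can be sketched on a scalar ODE, with a cheap coarse solver corrected by accurate fine solves that could be run in parallel over the subintervals (the equation x′ = −x, the Euler solvers and all parameters are illustrative; the actual study uses Langevin dynamics with SNAP and EAM potentials in LAMMPS):

```python
import numpy as np

# Parareal sketch on the scalar ODE x' = -x over [0, 1]. The fine
# solves over the subintervals are independent between iterations and
# could be run in parallel; illustrative parameters only.
T, N = 1.0, 10
dT = T / N

def coarse(x, dt):               # one cheap explicit Euler step
    return x * (1.0 - dt)

def fine(x, dt, substeps=100):   # many small Euler steps (reference solver)
    for _ in range(substeps):
        x = x * (1.0 - dt / substeps)
    return x

U = np.empty(N + 1); U[0] = 1.0
for n in range(N):               # initial coarse sweep
    U[n + 1] = coarse(U[n], dT)

for k in range(5):               # parareal correction iterations
    F = np.array([fine(U[n], dT) for n in range(N)])      # parallelizable
    Gold = np.array([coarse(U[n], dT) for n in range(N)])
    V = np.empty(N + 1); V[0] = U[0]
    for n in range(N):
        V[n + 1] = coarse(V[n], dT) + F[n] - Gold[n]
    U = V

print(abs(U[-1] - np.exp(-1.0)))  # small: close to the exact solution
```

After k iterations the scheme is exact on the first k subintervals, so it converges in at most N iterations; the computational gain comes from needing far fewer iterations than subintervals in practice.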

Together with G. Robin (CNRS & Université d'Evry), I. Sekkat (CERMICS) and G. Victorino Cardoso (CMAP, Ecole polytechnique & IHU LIRYC), T. Lelièvre and G. Stoltz considered in 63 how to generate reactive trajectories linking two metastable states. More precisely, they investigated the capabilities and limitations of supervised and unsupervised methods based on variational autoencoders to generate such paths. Bottleneck autoencoders are however somewhat limited in describing reactive paths, which is why alternative approaches based on an importance sampling function determined by a reinforcement learning strategy were also studied. The potential of the approach was demonstrated on simple low dimensional examples.

A. Levitt and G. Stoltz studied in 17 with F. Bottin (CEA/DAM), A. Castellano (CEA/DAM), J. Bouchet (CEA Cadarache) how to train simple empirical force fields on ab-initio data, in order to reproduce thermodynamic properties at finite temperature. The method iterates between exploration phases where new configurations are efficiently sampled and generated, using the current version of the simple empirical potential at hand, and a training phase where the empirical potential is updated with new ab-initio data. Thermodynamic consistency is ensured via some nonlinear reweighting procedure.

In some situations, stochastic numerical methods can be made more efficient by using various replicas of the system. The ensemble Kalman filter is a methodology for incorporating noisy data into complex dynamical models to enhance predictive capability. It is widely adopted in the geophysical sciences, underpinning weather forecasting for example, and is starting to be used throughout the sciences and engineering. For high dimensional filtering problems, the ensemble Kalman filter has a robustness that is not shared by the particle filter; in particular it does not suffer from weight collapse. However, there is no theory which quantifies its accuracy as an approximation of the true filtering distribution, except in the Gaussian setting. In order to address this issue, U. Vaes together with J. A. Carrillo (University of Oxford, United Kingdom), F. Hoffmann (Hausdorff Center for Mathematics, Germany) and A. M. Stuart (Caltech, USA) provided in 51 an analysis of the accuracy of the ensemble Kalman filter beyond the Gaussian setting. The analysis is developed for the mean field ensemble Kalman filter, which can be rewritten in terms of maps on probability measures. These maps are proved to be locally Lipschitz in an appropriate weighted total variation metric, which makes it possible to demonstrate that, if the true filtering distribution is close to Gaussian after appropriate lifting to the joint space of state and data, then it is well approximated by the ensemble Kalman filter.
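
In the linear-Gaussian scalar case, the ensemble Kalman analysis step can be sketched as follows (a textbook perturbed-observation update with illustrative numbers, not the mean-field analysis of the paper); the empirical ensemble statistics reproduce the exact Kalman posterior:

```python
import numpy as np

# Minimal perturbed-observation ensemble Kalman analysis step for a
# scalar state with prior N(0, 1) and observation y = x + noise.
# Illustrative numbers; in this Gaussian setting the empirical update
# reproduces the exact Kalman posterior N(0.8, 0.2).
rng = np.random.default_rng(0)
J = 100_000                  # ensemble size
x = rng.normal(0.0, 1.0, J)  # prior ensemble
sigma = 0.5                  # observation noise standard deviation
y = 1.0                      # observed datum

C = np.var(x)                # empirical prior covariance
K = C / (C + sigma**2)       # Kalman gain from ensemble statistics
x_post = x + K * (y + rng.normal(0.0, sigma, J) - x)

print(np.mean(x_post), np.var(x_post))  # close to 0.8 and 0.2
```

Beyond the Gaussian setting the empirical covariance no longer characterizes the filtering distribution, which is precisely the regime analyzed in the paper.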

In 60, Tony Lelièvre and Panos Parpas (Imperial College London, United Kingdom) introduce a new stochastic algorithm to locate the index-1 saddle points of a potential energy function. Finding index-1 saddle points is crucial to build kinetic Monte Carlo models. These models describe the evolution of the molecular system by a jump Markov model with values in the local minima of the energy function, the jumps between these states being parameterized by the Eyring–Kramers laws. This parameterization thus requires identifying the index-1 saddle points which connect local minima. The proposed algorithm can be seen as an equivalent of the stochastic gradient descent which is a natural stochastic process to locate local minima. It relies on two ingredients: (i) the concentration properties on index-1 saddle points of the first eigenmodes of the Witten Laplacian on 1-forms and (ii) a probabilistic representation of the solution to a partial differential equation involving this differential operator. The resulting algorithm is an interacting particle system, where the particles populate neighborhoods of the index-1 saddle points. Numerical examples on simple molecular systems illustrate the efficacy of the proposed approach.
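
For contrast with the stochastic algorithm above, a classical deterministic counterpart is Newton's method applied to the gradient, which converges to the nearest critical point rather than only to minima; on the illustrative double-well potential V(x, y) = (x² − 1)² + y² it locates the index-1 saddle at the origin:

```python
import numpy as np

# Deterministic toy illustration: Newton's method on the gradient of
# V(x, y) = (x^2 - 1)^2 + y^2, whose minima are (+-1, 0) and whose
# index-1 saddle point is the origin. (The paper's algorithm is a
# stochastic interacting particle system; this is only an illustration.)
def grad(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def hess(p):
    x, _ = p
    return np.array([[12.0 * x**2 - 4.0, 0.0], [0.0, 2.0]])

p = np.array([0.3, 0.2])  # start between the two minima
for _ in range(20):
    p = p - np.linalg.solve(hess(p), grad(p))  # Newton step on grad V

eigs = np.linalg.eigvalsh(hess(p))
print(p, eigs)  # p ~ (0, 0); one negative Hessian eigenvalue: index 1
```

Such Newton iterations require Hessian information and a good initial guess in the basin of the saddle, limitations that stochastic particle-based approaches aim to overcome.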

From the theoretical viewpoint, the project-team has pursued the development of a general theory for homogenization of deterministic materials modeled as periodic structures with defects. This work, performed in collaboration with X. Blanc, P.-L. Lions and P. Souganidis, has also been the topic of the PhD thesis of R. Goudey, defended this year. We recall that the periodic setting is the oldest traditional setting for homogenization. Alternative settings include the quasi- and almost-periodic settings, and the random stationary setting. From a modeling viewpoint, assuming that multiscale materials are periodic is however an idealistic assumption: natural media (such as the subsoil) have no reason to be periodic, and manufactured materials, even though indeed sometimes designed to be periodic, are often not periodic in practice, e.g. because of imperfect manufacturing processes, or of small geometric details that break the periodicity and can be critical in terms of industrial performance... Quasi- and almost-periodic settings are not appropriate answers to this difficulty. Using a random stationary setting may be tempting from a modeling viewpoint (in the sense that all that is not known about the microstructure can be “hidden” in a probabilistic description), but this often leads to prohibitively expensive computations, since the model is very general. The direction explored by the project-team consists in considering periodic structures with defects, a setting which is rich enough to fit reality while still leading to affordable computations.

Considering defects in the structure raises many mathematical questions. From an overall perspective, homogenization is based upon the determination of corrector functions, useful to compute the homogenized properties of the materials as well as to provide a fine-scale description of the oscillatory solution. In general, corrector problems are posed on the whole space. In the periodic and random stationary settings, it turns out that the corrector problems can actually be posed on a bounded domain. Powerful tools (e.g. Rellich compactness theorems) can then be used, for instance to establish the well-posedness and qualitative properties of the correctors. The presence of defects breaks this property, making the corrector problem non-compact. Additional tools (such as the concentration-compactness method or the theory of Calderón-Zygmund operators) are required to circumvent this difficulty.
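For the reader's convenience, we recall the standard form taken by the corrector problem in the simplest, purely periodic diffusive setting (the defect settings discussed above perturb this baseline problem):

```latex
% Corrector problem in the periodic cell Q, for a fixed direction p:
-\operatorname{div}\big( a(y)\,\big(p + \nabla w_p(y)\big) \big) = 0
\quad \text{in } Q, \qquad w_p \ Q\text{-periodic},
% from which the homogenized coefficient is computed:
a^\star\, p = \int_Q a(y)\,\big(p + \nabla w_p(y)\big)\,dy .
```

In the periodic case this problem is posed on the bounded cell $Q$; with defects, the analogous problem must be posed on the whole space, which is the source of the loss of compactness discussed above.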

Starting from the simplest case (localized defects in a purely diffusive equation, a setting for which we were able to show two-scale expansion results), we have followed two directions: (i) considering more complex equations (advection-diffusion equations, Hamilton-Jacobi equations, ...) for which the defects, although localized, may have an impact on increasingly large neighborhoods, and (ii) considering more complex (i.e. less localized) defects.

A monograph that summarizes the contributions of the project-team on this topic, along with a general perspective on the field, has been written by C. Le Bris, in collaboration with X. Blanc. The French and English versions of this textbook are in press, respectively in the series "Maths & Applications" and "MS&A, Modeling, Simulation and Applications", both published by Springer. In addition, C. Le Bris has written a short text that summarizes the major results obtained, to be published in the "Séminaire Laurent Schwartz" 2022-2023 volume.

In the context of the PhD of S. Ruget, which started this year, C. Le Bris and F. Legoll have pursued their work on the question of how to determine the homogenized coefficient of a multiscale problem without explicitly performing a homogenization approach.
This work follows up on earlier works performed over the years in collaboration with K. Li, S. Lemaire and O. Gorynina, in the case of a diffusion equation with highly oscillatory diffusion coefficients. Here, the question is revisited in the setting of Schrödinger equations with rapidly oscillating potentials.
The motivation for this work is that Schrödinger equations, besides their own interest, exhibit some specific features (in comparison e.g. to diffusion equations) which bring hope that further progress can be achieved. Addressing these questions is the objective of the PhD of S. Ruget.

From a numerical perspective, the Multiscale Finite Element Method (MsFEM) is a classical strategy to address the situation when the homogenized problem is not known (e.g. in difficult nonlinear cases), or when the scale of the heterogeneities, although small, is not considered to be zero (and hence the homogenized problem cannot be considered as a sufficiently accurate approximation).

The MsFEM approach uses a Galerkin approximation of the problem on a pre-computed basis, obtained by solving local problems mimicking the problem at hand at the scale of mesh elements. This basis differs from standard (e.g. polynomial) bases that are generally used in existing legacy codes in industry. As a result, the MsFEM approach is intrusive, which hinders its adoption in industrial (and, more generally, non-academic) environments.

To overcome this obstacle, R. Biezemans, C. Le Bris and F. Legoll, together with A. Lozinski (in delegation in the team for the first half of the year), have designed modified MsFEM approaches that allow for a non-intrusive implementation, i.e., using any existing legacy code for a Galerkin approximation on a piecewise affine basis. The key principles of the approach are presented in 12. The technique is reminiscent of the classical approach to homogenization: "corrector" functions are computed in each element of the mesh, from which slowly varying effective coefficients are computed. This leads to an effective PDE that can indeed be solved by standard finite element approaches.
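The principle just described (per-element correctors yielding slowly varying effective coefficients, then fed to a standard solver) can be illustrated in a drastically simplified one-dimensional setting, where the local corrector problem can be solved in closed form and the effective coefficient reduces to the harmonic mean of the oscillatory coefficient over each element. This toy sketch is only an illustration of the general idea, not the method of 12.

```python
import numpy as np

def effective_coefficients(a, nodes, n_quad=200):
    """Per-element effective coefficient for -(a u')' = f in 1D.
    In 1D the local corrector problem can be solved exactly, and the
    effective coefficient is the harmonic mean of a over each element."""
    a_eff = []
    for xl, xr in zip(nodes[:-1], nodes[1:]):
        x = np.linspace(xl, xr, n_quad)
        a_eff.append(1.0 / np.mean(1.0 / a(x)))   # harmonic mean
    return np.array(a_eff)

# Hypothetical oscillatory coefficient with period eps, bounded away from zero
eps = 1e-2
a = lambda x: 2.0 + np.sin(2 * np.pi * x / eps)

nodes = np.linspace(0.0, 1.0, 11)   # coarse mesh of 10 elements, H = 0.1
a_eff = effective_coefficients(a, nodes)
# a_eff is piecewise constant and slowly varying: it can now be passed to any
# legacy P1 finite element code, which is the non-intrusive feature.
```

For this particular coefficient, each effective value is close to the periodic homogenized coefficient sqrt(3), since each coarse element contains many full periods.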

A more comprehensive study of the non-intrusive MsFEM technique has subsequently been finalized in 41, where R. Biezemans, C. Le Bris, F. Legoll and A. Lozinski show that the non-intrusive approach can be extended to a wide variety of problems and to more advanced, state-of-the-art MsFEM variants. Indeed, this work provides non-intrusive MsFEMs for general linear second-order PDEs, and with various choices for the local problems, in particular the use of an oversampling technique (where the local problems are solved on domains larger than a single mesh element) and a Crouzeix-Raviart MsFEM. This work is also the first to propose and test a new MsFEM variant, namely an oversampling technique for the Crouzeix-Raviart MsFEM. Further, it is proved there that the non-intrusive approach is equivalent to the original MsFEM for a Petrov-Galerkin variant of the MsFEM with affine test functions; for the Galerkin MsFEM, numerical experiments show that the non-intrusive approach preserves the accuracy of the original MsFEM.

A second research direction pursued in the PhD of R. Biezemans is the question of how to design accurate MsFEM approaches for various types of equations, beyond the purely diffusive case, and in particular for the case of multiscale advection-diffusion problems, in the advection-dominated regime. Thin boundary layers are present in the exact solution, and numerical approaches should be carefully adapted to this situation, e.g. using stabilization. How stabilization and the multiscale nature of the problem interplay with one another is a challenging question, and several MsFEM variants have been compared by R. Biezemans, C. Le Bris, F. Legoll and A. Lozinski. The main results are being prepared for publication, showing in particular the stabilization of an MsFEM with weak continuity conditions of Crouzeix-Raviart type by adding specific bubble functions, satisfying the same type of weak boundary conditions, to the approximation space.

Finally, R. Biezemans, C. Le Bris, F. Legoll and A. Lozinski have continued their study of the convergence analysis of MsFEMs. Indeed, although MsFEM approaches were proposed more than two decades ago, it turns out that not all specific settings are covered by the numerical analyses existing in the literature. The research team has previously extended the analysis of MsFEM to the case of rectangular meshes and to that of periodic diffusion coefficients that are not necessarily Hölder continuous. An ongoing research effort is devoted to further generalizing the analysis to non-periodic settings and to providing a fully rigorous convergence proof for various MsFEMs with the oversampling technique.

In the context of the PhD of A. Lefort, which started this year, C. Le Bris and F. Legoll have undertaken the study of a multiscale, time-dependent, reaction-diffusion equation. This problem differs from the equations previously studied by the team in that it is time-dependent and includes a reaction term (in addition to the diffusive term). From a numerical perspective, the problem presents two difficulties. First, the coefficients of the equation (and therefore the solution) oscillate at a small spatial scale. In addition, the problem is stiff in time: a standard marching scheme such as the backward Euler scheme would need a small time-step to provide an accurate solution. Several directions of research have been identified, such as establishing the homogenized limit of the problem and designing efficient numerical approaches.
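The two difficulties mentioned above (oscillatory coefficients and stiffness in time) can be made concrete on a minimal one-dimensional model discretized by backward Euler. The coefficient, the reaction rate and all discretization parameters below are illustrative assumptions, not those of the problem studied in the PhD.

```python
import numpy as np

# Backward Euler for u_t = (a(x) u_x)_x - r u + f on (0,1), u = 0 at the
# boundary, with a hypothetical coefficient a oscillating at scale eps and a
# large reaction rate r making the problem stiff in time.
eps, r = 1e-2, 50.0
a = lambda x: 1.5 + np.sin(2 * np.pi * x / eps)

N = 200
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
ai = a(x[:-1] + h / 2)                  # coefficient sampled at cell midpoints

# Finite-difference operator (diffusion + reaction) on the interior nodes
A = np.zeros((N - 1, N - 1))
for i in range(N - 1):
    A[i, i] = (ai[i] + ai[i + 1]) / h**2 + r
    if i > 0:
        A[i, i - 1] = -ai[i] / h**2
    if i < N - 2:
        A[i, i + 1] = -ai[i + 1] / h**2

f = np.ones(N - 1)                      # constant source term
dt, u = 1e-3, np.zeros(N - 1)
for _ in range(100):                    # backward Euler: (I + dt A) u_new = u + dt f
    u = np.linalg.solve(np.eye(N - 1) + dt * A, u + dt * f)
```

Each implicit step is unconditionally stable, but resolving the oscillations of a requires a fine spatial mesh, and accuracy still constrains dt: this is precisely the combination of difficulties that efficient multiscale-in-space, stable-in-time schemes aim to address.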

In 2022, S. Boyaval has improved the mathematical understanding of the symmetric-hyperbolic system of conservation laws introduced in 2020 to model non-Newtonian fluids 43. Precisely, he has rigorously established the structural stability of the model: Newtonian fluids are recovered in one asymptotic limit of the PDE parameters, while the elastodynamics of hyperelastic materials is recovered in another asymptotic limit of the parameters 44. Research is being pursued toward efficient numerical simulations.

The objective of a model reduction method is the following: it may sometimes be very expensive from a computational point of view to simulate the properties of a complex system described by a complicated model, typically a set of PDEs. This cost may become prohibitive in situations where the solution of the model has to be computed for a very large number of values of the parameters involved in the model. Such a parametric study is nevertheless necessary in several contexts, for instance when the value of these parameters has to be calibrated so that numerical simulations give approximations of the solution that are as close as possible to some measured data. A model reduction method then consists in constructing, from a few complex simulations performed for a small number of well-chosen values of the parameters, a so-called reduced model, much cheaper and quicker to solve numerically, which yields an accurate approximation of the solution of the model for any other values of the parameters.
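The offline/online structure described above can be sketched with a standard proper orthogonal decomposition (POD) reduced-basis approach on a toy parametrized linear system. The model, the snapshot parameters and the basis size are all illustrative choices; this is the generic textbook workflow, not any specific method of the project-team.

```python
import numpy as np

def full_model(mu, n=100):
    """Hypothetical expensive 'full' model: solve A(mu) u = b (toy example)."""
    main = 2.0 + mu * np.linspace(1.0, 2.0, n)
    A = np.diag(main) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
    b = np.ones(n)
    return np.linalg.solve(A, b), A, b

# Offline stage: a few full solves at well-chosen parameter values (snapshots)
snapshots = np.column_stack([full_model(mu)[0] for mu in (0.1, 0.5, 1.0, 2.0)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :3]                           # reduced basis: 3 POD modes

# Online stage: for a new parameter, only a 3x3 Galerkin-projected system
u_full, A_new, b_new = full_model(1.3)  # full solve kept here only for comparison
u_red = V @ np.linalg.solve(V.T @ A_new @ V, V.T @ b_new)

rel_err = np.linalg.norm(u_red - u_full) / np.linalg.norm(u_full)
```

The point of the construction is that the online stage involves only small dense systems, so sweeping over many parameter values becomes cheap once the offline snapshots are paid for.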

In 64, a new stable model reduction method for linear variational inequalities with parameter-dependent constraints was introduced together with Idrissa Niakh, Guillaume Drouet (EDF) and Alexandre Ern (SERENA). The method was applied to the reduction of various parametrized contact mechanics problems.

In 39, a new model-order reduction method based on optimal transport theory was investigated by Virginie Ehrlacher together with B. Battisti, T. Blickhan (Garching, Germany), G. Enchéry (IFPEN), D. Lombardi (INRIA COMMEDIA) and O. Mula (Eindhoven University). This approach, based on the use of Wasserstein barycenters, was successfully applied to the reduction of parametrized porous medium flow problems.
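To give a flavor of the Wasserstein-barycenter idea, recall that in one dimension the Wasserstein-2 barycenter of probability densities is obtained by averaging their quantile functions (inverse CDFs). The toy sketch below, with two hypothetical Gaussian bumps, only illustrates this elementary fact; it is not the algorithm of 39.

```python
import numpy as np

def quantile_function(x, density, q):
    """Inverse CDF of an (unnormalized) density sampled on the grid x,
    evaluated at the quantile levels q."""
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    return np.interp(q, cdf, x)

x = np.linspace(-6.0, 6.0, 2001)
g = lambda m: np.exp(-0.5 * (x - m) ** 2)   # unnormalized Gaussian bump at m
q = np.linspace(0.005, 0.995, 500)          # quantile levels (tails avoided)

# 1D Wasserstein barycenter with equal weights: average the quantile functions
q_bar = 0.5 * quantile_function(x, g(-2.0), q) + 0.5 * quantile_function(x, g(2.0), q)

# For bumps centered at -2 and +2, the barycenter is centered near 0
median = np.interp(0.5, q, q_bar)
```

In contrast to a plain average of the densities (which would be bimodal), the barycenter keeps the unimodal shape and translates it, which is the feature exploited when reducing transport-dominated solutions such as porous medium flows.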

Cross-diffusion systems are nonlinear degenerate parabolic systems which naturally arise in diffusion models of multi-species mixtures in a wide variety of applications: tumor growth, population dynamics, materials science, etc. In materials science, they typically model the evolution of the local densities or volume fractions of the chemical species within a mixture.

In 55, Jad Dabaghi and Virginie Ehrlacher investigated a new structure-preserving model reduction method for parametrized cross-diffusion systems. The proposed numerical method is analyzed from a mathematical point of view and proved to satisfy the same mathematical properties as the high-fidelity model (preserving, for instance, the non-negativity of the solutions). The method consists in introducing an appropriate nonlinear transformation of the set of solutions, based on the use of the entropy variables associated to the system, before applying reduced-basis techniques to the parametrized problem.
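The structure-preserving mechanism can be illustrated as follows: if the reduced approximation is built in entropy variables and then mapped back through the inverse of the entropy gradient, the physical constraints hold by construction. The sketch below assumes, for illustration only, a volume-filling mixture for which this inverse map is a softmax; this is a generic example of the principle, not the specific entropy of 55.

```python
import numpy as np

def to_fractions(w):
    """Map entropy variables (one row per species, minus a reference species)
    back to volume fractions via a softmax: the output is nonnegative and
    sums to one at every point, whatever the values of w."""
    e = np.exp(np.vstack([w, np.zeros(w.shape[1])]))  # last species = reference
    return e / e.sum(axis=0)

# Any output of a reduced model in entropy variables, however inaccurate,
# yields admissible volume fractions. Arbitrary values for illustration:
rng = np.random.default_rng(0)
w_reduced = 5.0 * rng.normal(size=(2, 50))            # 3 species, 50 grid points
u = to_fractions(w_reduced)
```

A reduced basis applied directly to the volume fractions would offer no such guarantee: truncating the basis can produce negative values, which the nonlinear change of variables rules out.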

In 52, Jean Cauvin-Vila, Virginie Ehrlacher and Amaury Hayat analyzed from a mathematical point of view the boundary stabilization of a one-dimensional (linearized) cross-diffusion system in a moving domain. They proved that the system can be stabilized in any arbitrary finite time by using the so-called backstepping stabilization technique.

Many research activities of the project-team are conducted in close collaboration with private or public companies: CEA, EDF, IFPEN, Sanofi, OSMOS Group, SAFRANTech. The project-team is also supported by the Office of Naval Research and the European Office of Aerospace Research and Development, for multiscale simulations of random materials. All these contracts are operated at and administrated by the École des Ponts, except the contracts with IFPEN, which are administrated by Inria.

T. Lelièvre, G. Stoltz and F. Legoll participate in the Laboratoire International Associé (LIA) CNRS / University of Illinois at Urbana-Champaign on complex biological systems and their simulation by high performance computers. This LIA involves French research teams from Université de Nancy, Institut de Biologie Structurale (Grenoble) and Institut de Biologie Physico-Chimique (Paris). The LIA has been renewed for 4 years, starting January 1st, 2018.

Eric Cancès is one of the PIs of the Simons Targeted Grant “Moiré materials magic” (September 2021 - August 2026). His co-PIs are Allan MacDonald (UT Austin, coordinating PI), Svetlana Jitomirskaya (UC Irvine), Efthimios Kaxiras (Harvard), Lin Lin (UC Berkeley), Mitchell Luskin (University of Minnesota), Angel Rubio (Max-Planck Institut), Maciej Zworski (UC Berkeley).

Danny Perez (Los Alamos National Laboratory, USA) visited the team in September and October. This was the opportunity to discuss new research directions for accelerated molecular dynamics algorithms.

EMC2 project on cordis.europa.eu

Molecular simulation has become an instrumental tool in chemistry, condensed matter physics, molecular biology, materials science, and nanosciences. It will make it possible to propose the de novo design of, e.g., new drugs or materials, provided that the efficiency of the underlying software is accelerated by several orders of magnitude.

The ambition of the EMC2 project is to achieve scientific breakthroughs in this field by gathering the expertise of a multidisciplinary community at the interfaces of four disciplines: mathematics, chemistry, physics, and computer science. It is motivated by the twofold observation that, i) building upon our collaborative work, we have recently been able to gain efficiency factors of up to 3 orders of magnitude for polarizable molecular dynamics in solution of multi-million atom systems, but this is not enough since ii) even larger or more complex systems of major practical interest (such as solvated biosystems or molecules with strongly-correlated electrons) are currently mostly intractable in reasonable clock time. The only way to further improve the efficiency of the solvers, while preserving accuracy, is to develop physically and chemically sound models, mathematically certified and numerically efficient algorithms, and implement them in a robust and scalable way on various architectures (from standard academic or industrial clusters to emerging heterogeneous and exascale architectures).

EMC2 has no equivalent in the world: nowhere else is there such a critical mass of interdisciplinary researchers, already collaborating and with the required track records, to address this challenge. Under the leadership of the 4 PIs, supported by highly recognized teams from three major institutions in the Paris area, EMC2 will develop disruptive methodological approaches and publicly available simulation tools, and apply them to challenging molecular systems. The project will considerably strengthen the local teams and their synergy, enabling decisive progress in the field.

Recent successes have established the potential of parallel-in-time integration as a powerful algorithmic paradigm to unlock the performance of Exascale systems. However, these successes have mainly been achieved in a rather academic setting, without an overarching understanding. TIME-X will take the next leap in the development and deployment of this promising new approach for massively parallel HPC simulation, enabling efficient parallel-in-time integration for real-life applications. We will:

(i) provide software for parallel-in-time integration on current and future Exascale HPC architectures, delivering substantial improvements in parallel scaling;

(ii) develop novel algorithmic concepts for parallel-in-time integration, deepening our mathematical understanding of their convergence behaviour and including advances in multi-scale methodology;

(iii) demonstrate the impact of parallel-in-time integration, showcasing the potential on problems that, to date, cannot be tackled with full parallel efficiency in three diverse and challenging application fields with high societal impact: weather and climate, medicine and fusion.

To realise these ambitious, yet achievable goals, the inherently inter-disciplinary TIME-X Consortium unites top researchers from numerical analysis and applied mathematics, computer science and the selected application domains. Europe is leading research in parallel-in-time integration. TIME-X unites all relevant actors at the European level for the first time in a joint strategic research effort. A strategic investment from the European Commission would enable taking the necessary next step: advancing parallel-in-time integration from an academic/mathematical methodology into a widely available technology with a convincing proof of concept, maintaining European leadership in this rapidly advancing field and paving the way for industrial adoption.
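The parallel-in-time paradigm at the heart of TIME-X can be illustrated by a minimal Parareal iteration on the scalar test equation u' = -u: a cheap sequential coarse propagator is corrected by fine propagations that are independent across time windows, hence parallelizable. All parameters below are illustrative; this is the generic Parareal scheme, not the TIME-X software.

```python
import numpy as np

# Parareal for u' = lam * u on [0, T], split into time windows of length dt.
T, n_windows, lam = 1.0, 10, -1.0
dt = T / n_windows

def G(u, dt):                       # coarse propagator: one implicit Euler step
    return u / (1.0 - lam * dt)

def F(u, dt, substeps=100):         # fine propagator: many implicit Euler steps
    h = dt / substeps
    for _ in range(substeps):
        u = u / (1.0 - lam * h)
    return u

u = np.zeros(n_windows + 1)
u[0] = 1.0
for n in range(n_windows):          # initial sequential coarse sweep
    u[n + 1] = G(u[n], dt)

for _ in range(5):                  # Parareal iterations
    f = [F(u[n], dt) for n in range(n_windows)]   # independent: parallel in time
    u_new = u.copy()
    for n in range(n_windows):      # cheap sequential correction sweep
        u_new[n + 1] = G(u_new[n], dt) + f[n] - G(u[n], dt)
    u = u_new

err = abs(u[-1] - np.exp(lam * T))  # compare with the exact solution e^{-1}
```

The expensive fine solves in each iteration are embarrassingly parallel across windows; only the coarse sweeps remain sequential, which is what allows additional parallelism beyond spatial domain decomposition on Exascale systems.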

The project-team is involved in several ANR projects:

Members of the project-team are participating in the following GdR:

The project-team is involved in two Labex: the Labex Bezout (2011-) and the Labex MMCD (2012-).

C. Le Bris is a participant to the Inria Challenge EQIP (Engineering for Quantum Information Processors), in particular in collaboration with P. Rouchon (QUANTIC project-team).

S. Boyaval

E. Cancès

V. Ehrlacher

C. Le Bris

F. Legoll

T. Lelièvre

A. Levitt co-organizes the applied mathematics seminar of the CERMICS lab, and the internal seminar of the EMC2 project (Sorbonne Université).

G. Stoltz

The members of the project-team have taught the following courses.

At École des Ponts 1st year (equivalent to L3):

At École des Ponts 2nd year (equivalent to M1):

At the M2 “Mathématiques de la modélisation” of Sorbonne Université:

At other institutions:

The following PhD theses supervised by members of the project-team have been defended:

The following PhD theses supervised by members of the project-team are ongoing:

Project-team members have participated in the following PhD juries:

Project-team members have participated in the following habilitation juries:

Project-team members have participated in the following selection committees:

Members of the project-team have delivered lectures in the following seminars, workshops and conferences:

Members of the project-team have delivered the following series of lectures:

Members of the project-team have presented posters in the following seminars, workshops and international conferences:

Members of the project-team have participated (without giving talks nor presenting posters) in the following seminars, workshops and international conferences: