The MATHERIALS project-team was created jointly by the École des Ponts ParisTech (ENPC) and Inria in 2015. It is the follow-up and an extension of the former project-team MICMAC, originally created in October 2002. It is hosted by the CERMICS laboratory (Centre d'Enseignement et de Recherches en Mathématiques et Calcul Scientifique) at École des Ponts. The permanent research scientists of the project-team have positions at CERMICS and at two other laboratories of École des Ponts: Institut Navier and Laboratoire Saint-Venant. The scientific focus of the project-team is to analyze and improve the numerical schemes used in computational chemistry simulations at the microscopic level, and to create simulations coupling this microscopic scale with meso- or macroscopic scales (possibly using parallel algorithms). Over the years, the project-team has accumulated an increasingly solid expertise on such topics, which are traditionally not well known by the applied mathematics and scientific computing community. One of the major achievements of the project-team is to have created a corpus of literature, authoring books and research monographs on the subject 3, 4, 5, 6, 7, 8, 9, that other scientists may consult in order to enter the field.

Our group, originally only involved in electronic structure computations, continues to focus on many numerical issues in quantum chemistry, but now expands its expertise to cover several related problems at larger scales, such as molecular dynamics problems and multiscale problems. The mathematical derivation of continuum energies from quantum chemistry models is one instance of a long-term theoretical endeavour.

Quantum Chemistry aims at understanding the properties of matter through
the modelling of its behavior at a subatomic scale, where matter is
described as an assembly of nuclei and electrons.
At this scale, the equation that rules the interactions between these
constitutive elements is the Schrödinger equation. It can be
considered (except in a few special cases, notably those involving
relativistic phenomena or nuclear reactions)
as a universal model for at least three reasons. First it contains all
the physical
information of the system under consideration so that any of the
properties of this system can in theory be deduced from the
Schrödinger
equation associated to it. Second, the Schrödinger equation does not
involve any
empirical parameters, except some fundamental constants of Physics (the
Planck constant, the mass and charge of the electron, ...); it
can thus be written for any kind of molecular system provided its
chemical composition, in terms of the nature of its nuclei and the
number of its electrons, is known. Third, this model enjoys remarkable predictive
capabilities, as confirmed by comparisons with a large amount of
experimental data of various types.
On the other hand, using this high quality model requires working with
space and time scales which are both very
tiny: the typical size of the electronic cloud of an isolated atom is
the Angström (10^-10 meters), and the typical vibration period of a
molecular bond is the femtosecond (10^-15 seconds). It is however not
true that all macroscopic properties can be
simply upscaled from the consideration of the short time behavior of a
tiny sample of matter. Many of them derive from ensemble or bulk
effects, that are far from being easy to understand and to model.
Striking examples are found in solid state materials or biological
systems. Cleavage, the ability of minerals to naturally split along
crystal surfaces (e.g. mica yields thin flakes), is an ensemble
effect. Protein folding is
also an ensemble effect that originates from the presence of the
surrounding medium; it is responsible for peculiar properties
(e.g. unexpected acidity of some reactive site enhanced by special
interactions) upon which vital processes are based.
However, it is undoubtedly true that many macroscopic phenomena originate from
elementary processes which take place at the atomic scale. Let us
mention for instance the fact that
the elastic constants of a perfect crystal or the color of a chemical
compound (which is related to the wavelengths
absorbed or emitted during optical transitions between electronic
levels) can be evaluated by atomic scale calculations. In the same
fashion, the lubricating properties of graphite are essentially due to a
phenomenon which can be entirely modeled at the atomic scale.
It is therefore reasonable to simulate the behavior of matter at the
atomic scale in order to understand what is going on at the
macroscopic one.
The journey is however a long one. Starting from the basic
principles of Quantum Mechanics to model the matter at the subatomic
scale,
one finally uses statistical mechanics to reach the macroscopic
scale. It is often necessary to rely on intermediate steps to deal with
phenomena which take place on various mesoscales.
It may then be possible to couple one description of the system with some
others within the so-called multiscale models.
The sequel indicates how this journey can be completed, focusing on
the smallest (subatomic) scale rather than on the larger ones.
It has already been mentioned that at the subatomic scale,
the behavior of nuclei and electrons is governed by the Schrödinger
equation, either in its time-dependent form
or in its time-independent form. Let us only mention at this point that
the time-dependent equation is a first-order linear evolution
equation, whereas the time-independent equation is a linear eigenvalue
equation.
For the reader more familiar with numerical analysis
than with quantum mechanics, the linear nature of the problems stated
above may look auspicious. What makes the
numerical simulation of these equations
extremely difficult is essentially the huge size of the Hilbert
space: indeed, this space is roughly some symmetry-constrained subspace
of the square-integrable functions of 3N variables, N being the total
number of particles in the system. The mainstream methods of
computational chemistry bypass this curse of dimensionality by
replacing the linear high-dimensional problem with a coupled system of
nonlinear partial differential equations, each of these equations
being posed on the three-dimensional physical space.

As the size of the systems one wants to study increases, more efficient
numerical techniques need to be resorted to. In computational chemistry,
the typical scaling law for the complexity of computations with respect
to the size of the system under study is polynomial, often at best
cubic in the number of electrons (as is the case, for instance, for
Kohn–Sham models), which quickly becomes prohibitive for large systems.

An alternative strategy to reduce the complexity of ab initio
computations is to try to couple different models at different
scales. Such a mixed strategy can be either a sequential one or a
parallel one, in the sense that the information provided by the
fine-scale model is either computed beforehand, once and for all, and
then injected into the coarser model, or computed on the fly whenever
the coarse-scale simulation requires it.

The coupling of different scales can even go up to the macroscopic scale, with methods that couple a microscopic representation of matter, or at least a mesoscopic one, with the equations of continuum mechanics at the macroscopic level.

The orders of magnitude used in the microscopic representation of
matter are far from the orders of magnitude of the macroscopic
quantities we are used to: The number of particles under
consideration in a macroscopic sample of material is of the order of
the Avogadro number

To give some insight into such a large number of particles contained in
a macroscopic sample, it is helpful to
compute the number of moles of water on earth. Recall that one mole of water
corresponds to 18 mL, so that a standard glass of water contains roughly
10 moles, and a typical bathtub (about 200 L) contains roughly 10^4 moles.
In comparison, the water on earth amounts to roughly 10^23 moles.
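
These orders of magnitude are readily checked in a few lines (the glass and bathtub volumes below are assumed round figures):

```python
# Back-of-the-envelope check of the orders of magnitude above
# (glass and bathtub volumes are assumed round figures).
MOLAR_VOLUME_ML = 18.0                        # one mole of liquid water: ~18 mL

glass_ml = 180.0                              # a standard glass of water
bathtub_ml = 200e3                            # a 200 L bathtub

moles_glass = glass_ml / MOLAR_VOLUME_ML      # -> 10 moles
moles_bathtub = bathtub_ml / MOLAR_VOLUME_ML  # -> about 1.1e4 moles

AVOGADRO = 6.022e23
molecules_in_glass = moles_glass * AVOGADRO   # -> about 6e24 molecules

print(moles_glass, moles_bathtub, molecules_in_glass)
```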

For practical numerical computations
of matter at the microscopic level, following the dynamics of every atom would
require simulating on the order of 10^23 degrees of freedom over
macroscopic time scales, which is far beyond present computational
capabilities: state-of-the-art molecular dynamics simulations handle at
most billions of atoms, over times of the order of a microsecond.

Describing the macroscopic behavior of matter knowing its microscopic
description
therefore seems out of reach. Statistical physics allows us to bridge the gap
between microscopic and macroscopic descriptions of matter, at least on a
conceptual
level. The question is whether the quantities estimated on a system of
necessarily limited size, over a necessarily limited simulation time,
correctly approximate the macroscopic properties of interest in the
thermodynamic limit.

Despite its intrinsic limitations on spatial and timescales, molecular simulation has been used and developed over the past 50 years, and its number of users keeps increasing. As we understand it, it has two major aims nowadays.

First, it can be
used as a numerical microscope, which allows us to perform
“computer” experiments.
This was the initial motivation for simulations at the microscopic level:
physical theories were tested on computers.
This use of molecular simulation is particularly clear in its historic
development, which was triggered and sustained by the physics of simple
liquids. Indeed, there was no good analytical theory for these systems,
and the observation of computer trajectories was very helpful to guide the
physicists'
intuition about what was happening in the system, for instance the mechanisms
leading to molecular diffusion. In particular,
the pioneering works on Monte Carlo methods by Metropolis et al., and the first
molecular dynamics
simulation of Alder and Wainwright were performed because of such motivations.
Today, understanding the behavior of matter at the
microscopic level can still be difficult from an experimental viewpoint
(because of the high resolution required, both in time and in
space), or because we simply do not know what to look for!
Numerical simulations are then a valuable tool to test some
ideas or obtain some data to process and analyze in order
to help assess experimental setups. This is
particularly true for current nanoscale systems.
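
As an illustration of this "numerical microscope" use, the Metropolis algorithm mentioned above can be sketched in a few lines; the double-well potential and all parameters below are illustrative choices, not tied to any specific system:

```python
import math, random

random.seed(0)

def V(x):                        # illustrative double-well potential
    return (x * x - 1.0) ** 2

beta = 3.0                       # inverse temperature (illustrative)
x, samples = 1.0, []
for _ in range(200_000):
    y = x + random.uniform(-0.5, 0.5)            # symmetric proposal
    # Metropolis acceptance rule for the Boltzmann measure ~ exp(-beta V)
    if V(y) <= V(x) or random.random() < math.exp(-beta * (V(y) - V(x))):
        x = y
    samples.append(x)

# the "computer experiment": the trajectory visits both wells (near -1 and +1)
left = sum(1 for s in samples if s < 0.0) / len(samples)
print(f"fraction of time spent in the left well: {left:.2f}")
```

Watching such a trajectory hop between wells is precisely the kind of qualitative observation that guided the physicists' intuition mentioned above.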

Another major aim of molecular simulation, maybe even more important than the
previous one,
is to compute macroscopic
quantities or thermodynamic properties,
typically through averages of some functionals of the system.
In this case, molecular simulation is a
way to obtain quantitative information on a system,
instead of resorting to approximate theories, constructed for simplified models,
and giving only qualitative answers.
Sometimes, these properties are accessible through experiments,
but in some cases only numerical computations are possible
since experiments may be unfeasible or too costly
(for instance, when high pressure or high temperature regimes are considered,
or when studying materials not yet synthesized).
More generally, molecular simulation is a tool to explore the links between
the microscopic and macroscopic properties of a material, allowing
one to address modelling questions such as “Which microscopic ingredients are
necessary
(and which are not) to observe a given macroscopic behavior?”
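
A minimal sketch of this second use, computing a thermodynamic average as a time average along a sampling dynamics (here an overdamped Langevin dynamics with an illustrative harmonic potential, so that the exact answer is known):

```python
import math, random

random.seed(1)

# Overdamped Langevin dynamics dX = -V'(X) dt + sqrt(2/beta) dW samples the
# Boltzmann-Gibbs measure ~ exp(-beta V); macroscopic quantities are then
# obtained as time averages of functionals along the trajectory.
beta, dt, nsteps, burn = 1.0, 1e-3, 400_000, 50_000

def gradV(x):                     # V(x) = x^2 / 2 (illustrative potential)
    return x

x, acc, n = 0.0, 0.0, 0
for step in range(nsteps):
    x += -gradV(x) * dt + math.sqrt(2.0 * dt / beta) * random.gauss(0.0, 1.0)
    if step >= burn:
        acc += x * x              # observable: f(x) = x^2
        n += 1

estimate = acc / n                # exact ensemble average: 1 / beta = 1
print(f"<x^2> ≈ {estimate:.3f}")
```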

Over the years, the project-team has developed an increasing expertise on multiscale modeling for materials science at the continuum scale. The presence of numerous length scales in materials science problems indeed represents a challenge for numerical simulation, especially when some randomness is assumed on the materials. It can take various forms, and includes defects in crystals, thermal fluctuations, and impurities or heterogeneities in continuous media. Standard methods available in the literature to handle such problems often lead to very costly computations. Our goal is to develop numerical methods that are more affordable. Because we cannot embrace all difficulties at once, we focus on a simple case, where the fine-scale and the coarse-scale models can be written similarly, in the form of a simple elliptic partial differential equation in divergence form. The fine-scale model includes heterogeneities at a small scale, a situation which is formalized by the fact that the coefficients in the fine-scale model vary on a small length scale. After homogenization, this model yields an effective, macroscopic model, which includes no small scale (the coefficients of the coarse-scale equation are thus simply constant, or vary on a coarse length scale). In many cases, a sound theoretical groundwork exists for such homogenization results. The difficulty stems from the fact that the models generally lead to prohibitively costly computations (this is for instance the case for random stationary settings). Our aim is to focus on different settings, all relevant from an applied viewpoint, and leading to practically affordable computational approaches. The case of ordered (that is, in this context, periodic) systems is now well understood, both from a theoretical and a numerical standpoint.
Our aim is to turn to cases, more relevant in practice, where some disorder is present in the microstructure of the material, to take into account defects in crystals, impurities in continuous media... This disorder may be mathematically modeled in various ways.

Such endeavors raise several questions. The first one, theoretical in nature, is to extend the classical theory of homogenization (well developed e.g. in the periodic setting) to such disordered settings. Next, after homogenization, we expect to obtain an effective, macroscopic model, which includes no small scale. A second question is to introduce affordable numerical methods to compute the homogenized coefficients. An alternative approach, more numerical in nature, is to directly attack the oscillatory problem by using discretization approaches tailored to the multiscale nature of the problem (the construction of which is often inspired by theoretical homogenization results). For a comprehensive account of many of the research efforts of the team on these topics, we refer to 1, 2.
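
In the one-dimensional periodic case, the homogenized coefficient is explicit (it is the harmonic mean of the coefficient over the unit cell), which provides a cheap sanity check for such computations; the coefficient below is an illustrative choice with a known closed-form answer:

```python
import math

# 1D periodic homogenization of -(a(x/eps) u')' = f: the homogenized
# coefficient is the harmonic mean of a over one period,
#   a* = ( \int_0^1 1/a(y) dy )^{-1}.

def a(y):                                  # illustrative periodic coefficient
    return 2.0 + math.sin(2.0 * math.pi * y)

n = 100_000                                # quadrature points on the unit cell
inv_mean = sum(1.0 / a((k + 0.5) / n) for k in range(n)) / n
a_star = 1.0 / inv_mean

# For a(y) = c + sin(2 pi y), the harmonic mean is known in closed form:
#   a* = sqrt(c^2 - 1); here c = 2, so a* = sqrt(3).
print(a_star, math.sqrt(3.0))
```

In higher dimensions no such closed-form expression exists, and the homogenized coefficient must be obtained by solving the so-called cell problems, which is where the computational difficulties mentioned above arise.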

The team continued its long-standing project to study density functional theory from an applied mathematics perspective.

E. Cancès co-edited with G. Friesecke (TU Munich, Germany) a book on Density Functional Theory reviewing modeling aspects, mathematical and numerical analysis results, computational methods, and state-of-the-art applications (Springer 2023, 40). In Chapter 7 of this book 41, E. Cancès, A. Levitt, Y. Maday (Sorbonne University), and C. Yang (LBNL, Berkeley, USA) discuss the main algorithms used to solve the Kohn–Sham models, as well as the recent numerical analysis of these models and algorithms.

Together with G. Kemlin (former PhD student in the project-team and now at University of Amiens) and A. Levitt (University Paris-Saclay), E. Cancès has studied the numerical analysis of linear and
nonlinear Schrödinger equations with periodic analytic potentials 21.
They prove that, for linear equations, when the potential is
analytic in a strip around the real axis, the eigenfunctions are
analytic in a strip of the same width, which yields exponential
convergence rates for planewave discretizations; they also investigate
how this picture extends to the nonlinear setting.

Together with G. Dusson (CNRS and University of Besançon), B. Stamm (University of Stuttgart), and F. Lipparini, P. Mazzeo, and F. Pes (quantum chemists from the University of Pisa), E. Polack proposed a scheme based on Grassmann extrapolation of density matrices for an accurate calculation of initial guesses in Born–Oppenheimer Molecular Dynamics simulations 32. The method shows excellent results on large quantum mechanics/molecular mechanics systems simulated with Kohn–Sham density functional theory.

Etienne Polack and Laurent Vidal developed new functionalities in DFTK, a Julia library implementing plane-wave density functional theory for the simulation of the electronic structure of molecules and materials, whose development was launched in 2019 within the Project-Team MATHERIALS (main developers: M. Herbst, now at EPFL, and A. Levitt, now at Université Paris-Saclay).

The treatment of strongly correlated quantum systems is a long-standing challenge in computational chemistry and physics. The application of high-accuracy first-principle methods that are able to capture the electronic correlation effects at chemical accuracy is commonly stymied by a steep computational scaling with respect to system size. A potential remedy is provided by quantum embedding theories, which can somehow be interpreted as domain decomposition methods for the quantum many-body problem in the Fock space. Such approaches include the dynamical mean-field theory (DMFT) and the density matrix embedding theory (DMET). Together with F. Faulstich (UC Berkeley, USA) and A. Levitt, E. Cancès, A. Kirsch and E. Letournel provided the first mathematical analysis of DMET 47. They prove that, under certain assumptions, (i) the exact ground-state density matrix is a fixed-point of the DMET map for non-interacting systems, (ii) there exists a unique physical solution in the weakly-interacting regime, and (iii) DMET is exact at first order in the coupling parameter. They provide numerical simulations to support their results and comment on the physical meaning of the assumptions under which they hold true. They show that the violation of these assumptions may yield multiple solutions of the DMET equations.

Twisted bilayer graphene (TBG) is obtained by stacking two identical graphene sheets on top of one another and rotating them in opposite directions around the transverse direction by a relative, typically small, twist angle.

In his post-doctoral work co-supervised by Claude Le Bris (MATHERIALS) and Pierre Rouchon (Inria QUANTIC), Masaaki Tokieda has addressed various issues related to the numerical simulation and the fundamental understanding of several models of physical systems likely candidates to play a crucial role in quantum computing.

Recent research efforts have been carried out in the team on the development of efficient numerical methods for quantum chemistry using optimal transport theory.

On the one hand, appropriate modified Wasserstein barycenters have been used in order to design new model-order reduction methods to accelerate electronic structure calculations of molecules. First, together with Geneviève Dusson and Nathalie Nouaime, V. Ehrlacher developed in 55 new types of mixture Wasserstein distances and barycenters adapted to electronic densities that can be written as squares of linear combinations of Slater determinants of Gaussian functions.
Second, in 53, V. Ehrlacher (together with M. Dalery, G. Dusson and A. Lozinski) worked on the design of new greedy algorithms using the latter mixture Wasserstein barycenters, and on the proof of estimates on the decay of the associated Kolmogorov widths.
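
The mixture barycenters developed in these works are tailored to electronic densities; as a baseline illustration of the underlying notion, the classical Wasserstein-2 barycenter of one-dimensional samples can be computed by simply averaging quantile functions, i.e. sorted samples:

```python
import random

random.seed(2)

# In one dimension, the Wasserstein-2 barycenter of two measures has a
# quantile function equal to the weighted average of their quantile
# functions; for two samples of equal size, this amounts to averaging
# the sorted samples.
n = 10_000
mu = sorted(random.gauss(0.0, 1.0) for _ in range(n))   # sample of N(0, 1)
nu = sorted(random.gauss(4.0, 1.0) for _ in range(n))   # sample of N(4, 1)

w = 0.5                                                  # barycenter weight
bary = [(1 - w) * x + w * y for x, y in zip(mu, nu)]

mean = sum(bary) / n
var = sum((z - mean) ** 2 for z in bary) / n
print(f"barycenter mean ≈ {mean:.2f}, variance ≈ {var:.2f}")
```

Note how the barycenter of two unit-variance Gaussians translates the mass (mean close to 2) without spreading it, unlike the plain mixture of the two densities; this mass-transport behavior is what makes Wasserstein barycenters attractive for interpolating electronic densities.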

On the other hand, V. Ehrlacher and L. Nenna proved in 57 that moment-constrained approximations of the Lieb functional (which may be seen as a particular instance of quantum optimal transport problems) enjoy sparsity properties similar to those of moment-constrained approximations of classical optimal transport problems.

The aim of computational statistical physics is to compute macroscopic properties of materials starting from a microscopic description, using concepts of statistical physics (thermodynamic ensembles and molecular dynamics). The contributions of the team can be divided into four main topics: (i) the improvement of techniques to sample the configuration space; (ii) the development of simulation methods to efficiently simulate nonequilibrium systems; (iii) the study of dynamical properties and rare event sampling; (iv) the use of particle methods for sampling and optimization.

Before presenting the contributions in more detail, let us first mention two recent reviews written by members of the team, in a book entitled Comprehensive Computational Chemistry, namely 43 by T. Lelièvre with D. Perez (LANL, USA), and 42 by T. Lelièvre and G. Stoltz with C. Chipot (CNRS and Université de Lorraine, France) and P. Gkeka (Sanofi, France). They review, respectively, recent advances in Accelerated Molecular Dynamics methods, and (non)equilibrium methods for free-energy calculations with molecular dynamics.

There is still a need to improve techniques to sample the configuration space, and to understand their performance. In 61, T. Lelièvre, R. Santet and G. Stoltz develop and study a new variant of the Hamiltonian Monte Carlo algorithm which can be used for nonseparable Hamiltonians. Such Hamiltonian functions naturally appear in many contexts, for numerical or physical reasons, for example when a position-dependent mass is considered. To obtain an unbiased sampling method, a reversibility check has to be enforced, because of the implicitness of the Störmer–Verlet integrator.
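
For reference, a minimal Hamiltonian Monte Carlo sketch for a separable Hamiltonian, where the leapfrog (Störmer–Verlet) update is explicit, reads as follows; in the nonseparable case considered in this work, the update becomes implicit, which is what makes the reversibility check necessary. The Gaussian target and parameters below are illustrative:

```python
import math, random

random.seed(3)

# Minimal Hamiltonian Monte Carlo for a separable Hamiltonian
# H(q, p) = V(q) + p^2/2, targeting exp(-V) with V(q) = q^2/2 (illustrative).
def V(q):      return 0.5 * q * q
def gradV(q):  return q

def leapfrog(q, p, dt, nsteps):
    """Explicit Stormer-Verlet integration of Hamilton's equations."""
    p -= 0.5 * dt * gradV(q)
    for _ in range(nsteps - 1):
        q += dt * p
        p -= dt * gradV(q)
    q += dt * p
    p -= 0.5 * dt * gradV(q)
    return q, p

q, samples = 0.0, []
for _ in range(20_000):
    p = random.gauss(0.0, 1.0)                  # momentum refreshment
    q_new, p_new = leapfrog(q, p, 0.2, 10)
    dH = (V(q_new) + 0.5 * p_new ** 2) - (V(q) + 0.5 * p ** 2)
    if random.random() < math.exp(-dH):         # Metropolis acceptance rule
        q = q_new
    samples.append(q)

var = sum(s * s for s in samples) / len(samples)
print(f"sample variance ≈ {var:.2f}")           # target variance is 1
```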

In order to improve the efficiency of sampling, variance reduction techniques should be used. In 50, T. Lelièvre, G. Stoltz and U. Vaes, together with M. Chak (Sorbonne Université, France) analyze an importance sampling approach for Markov chain Monte Carlo methods that relies on the overdamped Langevin dynamics. More precisely, they study an estimator based on an ergodic average along a realization of an overdamped Langevin process for a modified potential. An explicit expression for the biasing potential that minimizes the asymptotic variance of the estimator is obtained in dimension 1, and a general numerical approach for approximating the optimal potential in the multi-dimensional setting is proposed. The capabilities of the proposed method are demonstrated by means of numerical experiments.
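
The principle of such an estimator can be sketched as follows, with a self-normalized reweighting of a time average computed under a modified potential; the target, the bias and all parameters below are illustrative choices, not the optimal bias derived in the paper:

```python
import math, random

random.seed(4)

# Target pi ~ exp(-V) with V(x) = x^2/2. We sample the broader measure
# ~ exp(-(V + U)) with an illustrative bias U(x) = -x^2/4 using overdamped
# Langevin dynamics, and reweight by d(pi)/d(pi_biased) ~ exp(U):
#   E_pi[f] = E_biased[f e^U] / E_biased[e^U].
def gradVU(x): return 0.5 * x     # (V + U)'(x) with V = x^2/2, U = -x^2/4
def U(x):      return -0.25 * x * x

dt, nsteps, burn = 1e-3, 1_000_000, 100_000
x, num, den = 0.0, 0.0, 0.0
for step in range(nsteps):
    x += -gradVU(x) * dt + math.sqrt(2.0 * dt) * random.gauss(0.0, 1.0)
    if step >= burn:
        w = math.exp(U(x))        # importance weight (bounded here)
        num += (x * x) * w
        den += w

estimate = num / den              # exact value under pi: E[x^2] = 1
print(f"reweighted <x^2> ≈ {estimate:.2f}")
```

The choice of the biasing potential governs the asymptotic variance of such estimators, which is precisely the quantity optimized in the work described above.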

To quantify the performance of sampling methods based on ergodic stochastic differential equations, bounds on the resolvent of the generator of the dynamics under consideration are useful, as one can derive from them upper bounds on the asymptotic variance of time averages. Such bounds can in turn be deduced from decay estimates on the semigroup. Together with G. Brigati (Université Paris Dauphine, France), G. Stoltz studied in 46 how to obtain constructive decay estimates for the semigroup corresponding to hypoelliptic generators associated with Langevin dynamics, using hypocoercive techniques based on space-time averages and Lions' lemma.

Finally, let us mention that the stochastic dynamics used to sample probability measures in statistical physics can also be used to minimize functions when the temperature is sent to 0. This idea is used in the work 27 by G. Stoltz together with K. Karoni and B. Leimkuhler (University of Edinburgh, United Kingdom), where variations of the adaptive Langevin dynamics (an underdamped Langevin dynamics where the friction is adjusted through some feedback term à la Nosé–Hoover) are considered to minimize high-dimensional objectives.

Many systems in computational statistical physics are not at equilibrium. This is in particular the case when one wants to compute transport coefficients, which determine the response of the system to some external perturbation. For instance, the thermal conductivity relates an applied temperature difference to an energy current through Fourier's law, while the mobility coefficient relates an applied external constant force to the average velocity of the particles in the system. The main limitation of the usual methods to compute transport coefficients is the large variance of the estimators, which motivates searching for dedicated variance reduction strategies. R. Gastaldello is starting his PhD work on this topic. Let us next describe the efforts of the team over the past year.

In 35, R. Spacek and G. Stoltz proposed to add an additional perturbation to the system (a so-called synthetic forcing), which preserves the invariant measure and hence does not change the linear response, but which limits the nonlinearity of the response. This makes it possible to resort to larger forcings in practice, and hence to more easily determine the response of the nonequilibrium system. Several classes of admissible synthetic forcings are systematically studied, and their performance is assessed on toy systems.

Another way to possibly reduce the variance of estimators of transport coefficients based on nonequilibrium molecular dynamics is to resort to a dual strategy. Whereas standard non-equilibrium approaches fix the forcing and measure the average flux induced in the system driven out of equilibrium, a dual philosophy consists in fixing the value of the flux, and measuring the average magnitude of the forcing needed to induce it. A deterministic version of this approach, named Norton dynamics, was studied in the 1980s by Evans and Morriss. In 45, N. Blassel and G. Stoltz introduce a stochastic version of this method, first developing a general formal theory for a broad class of diffusion processes, and then specializing it to underdamped Langevin dynamics, which are commonly used for molecular dynamics simulations. Numerical evidence demonstrates that the stochastic Norton method provides an equivalent measure of the linear response, and in fact that this equivalence extends well beyond the linear response regime. This work raises many intriguing questions, both from the theoretical and the numerical perspectives.

On the applicative side, G. Stoltz studied in 26 with T. Hoang Ngoc Minh and B. Rotenberg (Sorbonne Université, France) the effect of confinement, adsorption on surfaces, and ion-ion interactions on the response of confined electrolyte solutions to oscillating electric fields in the direction perpendicular to the confining walls. Nonequilibrium simulations allow one to characterize the transitions between linear and nonlinear regimes when varying the magnitude and frequency of the applied field, but the linear response, characterized by the frequency-dependent conductivity, is more efficiently predicted from the equilibrium current fluctuations. To that end, a Green–Kubo relation appropriate for overdamped dynamics is rederived for time-periodic forcings. The expression highlights the contributions of the underlying Brownian fluctuations and of the interactions of the particles with one another and with external potentials. The frequency-dependent conductivity always decays from a bulk-like behavior at high frequency to a vanishing conductivity at low frequency, due to the confinement of the charge carriers by the walls.

Sampling transitions from one metastable state to another is a difficult task. In 33, T. Lelièvre, T. Pigeon and G. Stoltz, together with A. Anciaux-Sedrakian, M. Corral-Valero and M. Moreaud (collaborators at IFPEN, France), apply for the first time the Adaptive Multilevel Splitting (AMS) method to study catalytic reactions. Computing accurate rate constants for catalytic events occurring at the surface of a given material represents a challenging task with multiple potential applications in chemistry. The AMS method requires a one-dimensional reaction coordinate to index the progress of the transition. To build such a reaction coordinate, various approaches are tested, including Support Vector Machine and path collective variables. The calculated rate constants and transition mechanisms are discussed and compared to those obtained by a conventional static approach based on the Eyring–Polanyi equation with harmonic approximation. The AMS method is able to better take into account entropic effects as well as complex transition mechanisms, e.g. those involving multiple pathways.
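
A minimal one-dimensional AMS sketch, with an illustrative double-well potential and the position itself as reaction coordinate (rather than the problem-specific coordinates built in this work), reads:

```python
import math, random

random.seed(5)

# Minimal Adaptive Multilevel Splitting (AMS) sketch: estimate the probability
# that an overdamped Langevin trajectory in the double well V(x) = (x^2 - 1)^2,
# started near the left well, reaches b = 0.9 before falling below a = -1.1.
# The reaction coordinate is xi(x) = x (illustrative; the committor is optimal).
beta, dt, a, b, x0, nrep = 4.0, 1e-3, -1.1, 0.9, -1.0, 20

def gradV(x): return 4.0 * x * (x * x - 1.0)

def trajectory(x):
    """Simulate until x < a (failure) or x > b (success); return the path."""
    path = [x]
    while a <= x <= b:
        x += -gradV(x) * dt + math.sqrt(2.0 * dt / beta) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

paths = [trajectory(x0) for _ in range(nrep)]
p_hat = 1.0
while True:
    levels = [max(p) for p in paths]
    if min(levels) > b:
        break                                   # all replicas have succeeded
    lvl = min(levels)
    killed = [i for i, l in enumerate(levels) if l <= lvl]
    survivors = [i for i in range(nrep) if i not in killed]
    if not survivors:                           # extinction (very unlikely)
        p_hat = 0.0
        break
    p_hat *= 1.0 - len(killed) / nrep           # unbiasedness correction
    for k in killed:                            # branch from a random survivor
        j = random.choice(survivors)
        cut = next(i for i, y in enumerate(paths[j]) if y > lvl)
        paths[k] = paths[j][:cut + 1] + trajectory(paths[j][cut])[1:]

print(f"estimated transition probability ≈ {p_hat:.2e}")
```

The replicas are thus gradually pushed toward the rare transition, which is what makes AMS far more efficient than brute-force simulation for small transition probabilities.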

On the methodological side, A. Guyader and T. Lelièvre are currently exploring how AMS and Importance Sampling (IS) are affected by the use of an importance function which is not exactly the committor function.

More generally, finding collective variables to describe some important coarse-grained information on physical systems, in particular metastable states, remains a key issue in molecular dynamics. Collective variables allow one to compute free energy differences and/or bias the dynamics to observe transitions. In 59, T. Lelièvre, T. Pigeon and G. Stoltz, together with W. Zhang (FU Berlin, Germany), analyze the performance of autoencoders to construct collective variables. They study some relevant mathematical properties of the loss function considered for training autoencoders, and provide physical interpretations based on conditional variances and minimum energy paths. They also consider various extensions in order to better describe physical systems, by incorporating more information on transition states at saddle points, and/or allowing for multiple decoders in order to describe several transition paths. On the application side, T. Lelièvre and G. Stoltz, together with Z. Belkacemi, M. Bianciotto, P. Gkeka and H. Minoux (collaborators at Sanofi, France), characterize in 12 the dynamics of the N-terminal domain of the heat shock protein 90 (Hsp90) using an autoencoder-learned collective variable in conjunction with adaptive biasing force Langevin dynamics. Using this machine-learnt collective variable is a crucial ingredient to generate transitions between native states.

In some situations, stochastic numerical methods can be made more efficient by using various replicas of the system. For algorithms based on interacting particle systems that admit a mean-field description, convergence analysis is often more accessible at the mean-field level. In order to transpose convergence results obtained at the mean-field level to the finite ensemble size setting, it is desirable to show that the particle dynamics converge in an appropriate sense to the corresponding mean-field dynamics. In 58, U. Vaes, together with N. J. Gerber (Hausdorff Center for Mathematics, Germany) and F. Hoffmann (Caltech, USA), proved quantitative mean-field limit results for two related interacting particle systems: Consensus-Based Optimization and Consensus-Based Sampling. The approach employed to this end is based on a generalization of Sznitman's classical argument: in order to circumvent issues related to the lack of global Lipschitz continuity of the coefficients, an event of small probability is discarded, the contribution of which is controlled using moment estimates for the particle systems. In addition, their work presents new results on the well-posedness of the particle systems and their mean-field limit, and provides novel stability estimates for the weighted mean and the weighted covariance.
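
A minimal Consensus-Based Optimization sketch in one dimension (illustrative objective and parameters; the sampling variant and the quantitative mean-field analysis of the paper are beyond this toy example):

```python
import math, random

random.seed(6)

# Minimal Consensus-Based Optimization (CBO) sketch: interacting particles
# drift toward a Gibbs-weighted average of their positions and fluctuate;
# the weighted consensus point concentrates near the global minimizer of f.
def f(x):                       # illustrative objective, minimizer at x = 1.7
    return (x - 1.7) ** 2

npart, beta, lam, sigma, dt, nsteps = 100, 30.0, 1.0, 0.5, 0.05, 400
xs = [random.uniform(-3.0, 3.0) for _ in range(npart)]

for _ in range(nsteps):
    fmin = min(f(x) for x in xs)                       # stabilize exponentials
    ws = [math.exp(-beta * (f(x) - fmin)) for x in xs]
    m = sum(w * x for w, x in zip(ws, xs)) / sum(ws)   # weighted consensus point
    xs = [
        x - lam * (x - m) * dt
        + sigma * abs(x - m) * math.sqrt(dt) * random.gauss(0.0, 1.0)
        for x in xs
    ]

print(f"consensus point ≈ {m:.2f}")
```

Note that the noise is proportional to the distance to the consensus point, so the fluctuations vanish as the particles concentrate; the mean-field limit of exactly this type of dynamics is the object of the analysis described above.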

From the theoretical viewpoint, the project-team has pursued the development of a general theory for homogenization of deterministic materials modeled as periodic structures with defects. This series of works is performed in collaboration with X. Blanc (Université Paris-Cité), P.-L. Lions (Collège de France) and P. Souganidis (Chicago, USA). The periodic setting is the oldest traditional setting for homogenization. Alternative settings include the quasi- and almost-periodic settings, and the random stationary setting. From a modeling viewpoint, assuming that multiscale materials are periodic is however an idealistic assumption: natural media (such as the subsoil) have no reason to be periodic, and manufactured materials, even though indeed sometimes designed to be periodic, are often not periodic in practice, e.g. because of imperfect manufacturing processes, or of small geometric details that break the periodicity and can be critical in terms of industrial performance. Quasi- and almost-periodic settings are not appropriate answers to this difficulty. Using a random stationary setting may be tempting from a modeling viewpoint (in the sense that all that is not known about the microstructure can be “hidden” in a probabilistic description), but this often leads to prohibitively expensive computations, since the model is very general. The direction explored by the project-team consists in considering periodic structures with defects, a setting which is rich enough to fit reality while still leading to affordable computations.

In that direction, Y. Achdou (partially on leave at INRIA from Université Paris-Cité in 2022-2023) and C. Le Bris have studied in 10 the homogenization of a class of stationary Hamilton-Jacobi equations in which the Hamiltonian is obtained by perturbing near the origin an otherwise periodic Hamiltonian. Homogenization leads to an effective Hamilton-Jacobi equation supplemented with an effective Dirichlet boundary condition at the origin: this boundary value problem has to be understood in the sense of the stratified problems introduced by Bressan et al. and later studied by Barles and Chasseigne. The effective Dirichlet data is obtained as the limit of a sequence of ergodic constants associated to truncated cell problems posed in balls with diameter tending to infinity. Several research directions stem from this work. In particular, ongoing research deals with situations in which homogenization leads to stratified problems with more complex geometries, for example a stratification of the plane composed of three manifolds: an open half-line, the end-point of the latter and finally the complement of the closed half-line. Other open questions concern the rate of convergence to the solution of the effective problem and the behavior of the above-mentioned ergodic constants as the diameter of the truncated cell tends to infinity. These questions seem difficult and a sensible way to tackle them would be to carry out numerical simulations first.

Also in that direction of research, C. Le Bris has co-authored with X. Blanc two textbooks on Homogenization Theory, one in French 1 and one in English 2. The two books mostly present the same material. The French version however dwells a bit more on the theoretical aspects while the English version is slightly more focused on computational issues. This action testifies to the wish of the team to reach the largest possible audience and introduce them to the challenging field of multiscale science. In addition, a short text that summarizes the major results obtained has been written by C. Le Bris and has been published in the "Séminaire Laurent Schwartz 2022-2023 volume" 28.

In the context of the PhD of S. Ruget, C. Le Bris and F. Legoll have pursued their work on the question of how to determine the homogenized coefficient of a multiscale problem without explicitly performing a homogenization procedure. This work is a follow-up on earlier works over the years in collaboration with K. Li, S. Lemaire and O. Gorynina. They have first extended their approach to the case of Schrödinger equations with rapidly oscillating potentials. Current efforts are focused on investigating the robustness of the approach with respect to the available data, which, in practice, may be noisy, blurred, incomplete, or only available in the form of averages over domains of large size (compared to the size of the microstructure).

From a numerical perspective, the Multiscale Finite Element Method (MsFEM) is a classical strategy to address the situation when the homogenized problem is not known (e.g. in difficult nonlinear cases), or when the scale of the heterogeneities, although small, is not considered to be zero (and hence the homogenized problem cannot be considered as a sufficiently accurate approximation). The MsFEM approach uses a Galerkin approximation of the problem on a pre-computed basis, obtained by solving local problems mimicking the problem at hand at the scale of mesh elements.
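For orientation, the construction of the local basis can be sketched in its simplest (so-called linear) variant for a diffusion problem with a highly oscillatory coefficient; the notation below is generic and not tied to any specific variant developed by the team:

```latex
% Linear MsFEM variant for  -div(A_eps grad u_eps) = f :
% on each mesh element K, the P1 shape functions \phi_i^0 are replaced by
% multiscale counterparts \phi_i^\varepsilon solving the local problems
\begin{equation*}
  -\operatorname{div}\!\big( A_\varepsilon(x)\,\nabla \phi_i^\varepsilon \big) = 0
  \quad \text{in } K,
  \qquad
  \phi_i^\varepsilon = \phi_i^0 \quad \text{on } \partial K .
\end{equation*}
% The global problem is then solved by a Galerkin approximation on
% V_H = \mathrm{span}\{\phi_i^\varepsilon\}; the local problems are
% independent of one another and can be precomputed in parallel.
```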

In the context of the PhD of R. Biezemans, C. Le Bris and F. Legoll have addressed the question of how to design accurate MsFEM approaches for various types of equations, beyond the purely diffusive case, and in particular for the case of multiscale advection-diffusion problems, in the advection-dominated regime. Thin boundary layers are present in the exact solution, and numerical approaches should be carefully adapted to this situation, e.g. using stabilization. How stabilization and the multiscale nature of the problem interplay with one another is a challenging question, and several MsFEM variants have been compared by R. Biezemans, C. Le Bris, F. Legoll and A. Lozinski. The main results have been collected in Part II of the PhD manuscript of R. Biezemans (who defended his PhD thesis in September 2023) and a manuscript is about to be submitted for publication. It is shown there how MsFEM with weak continuity conditions of Crouzeix-Raviart type can be stabilized by adding specific bubble functions, satisfying the same type of weak boundary conditions.

In the context of the PhD of A. Lefort, C. Le Bris and F. Legoll have undertaken the study of a multiscale reaction-diffusion equation. This problem differs from the equations previously studied by the team in that it includes a large reaction term which competes with the diffusive term. From a numerical perspective, two difficulties are present in the time-dependent version of the problem. First, the coefficients of the equation (and therefore the solution) oscillate at a small spatial scale. In addition, the problem is stiff in time. In order not to address all difficulties at the same time, the associated eigenvalue problem has been considered, for which a promising MsFEM-type approach was introduced. The robustness of the method is currently being investigated.

In parallel to the exploration of advection-diffusion and reaction-diffusion equations, another direction of research focuses on hyperbolic multiscale conservation laws. The homogenized limit of a large class of such conservation laws has recently been established in the literature. As a preliminary work on this topic, A. Boucart (currently a post-doc in the project-team), C. Le Bris and F. Legoll have put into action the classical homogenization approach, and have numerically demonstrated that the two-scale approximation provided by homogenization theory is indeed an accurate approximation of the reference solution, albeit for a ratio between the macroscopic and microscopic scales that needs to be larger than for elliptic problems. A manuscript collecting their conclusions (on this topic as well as on related questions) is being prepared for publication.

Using standard homogenization theory, one knows that the homogenized tensor, which is a deterministic matrix, depends on the solution of a stochastic equation, the so-called corrector problem, which is posed on the whole space.
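For orientation, the standard formulas of stochastic homogenization for a linear diffusion equation read as follows (the notation is generic):

```latex
% Prototypical setting:  -div( A(x/\varepsilon,\omega) \nabla u_\varepsilon ) = f.
% For each direction p, the corrector w_p solves, on the whole space,
\begin{equation*}
  -\operatorname{div}\!\big( A(y,\omega)\,( p + \nabla w_p(y,\omega) ) \big) = 0
  \quad \text{in } \mathbb{R}^d ,
\end{equation*}
% with \nabla w_p stationary, and the (deterministic) homogenized tensor is
\begin{equation*}
  A^\star p \;=\; \mathbb{E}\!\left[ \int_Q A(y,\cdot)\,\big( p + \nabla w_p(y,\cdot) \big)\,dy \right],
\end{equation*}
% where Q is the unit cell. In practice the corrector problem is truncated
% to a large but finite domain, which is the source of the computational cost
% (and of the random fluctuations) mentioned in the text.
```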

In collaboration with B. Stamm and S. Xiang (both at Aachen University, Germany), V. Ehrlacher and F. Legoll have studied in 56 new alternatives for the approximation of the homogenized matrix in the case of the (vector-valued) linear elasticity equation. This work extends previous works dedicated to scalar-valued equations. These alternative definitions rely on the use of an embedded corrector problem, where a finite-size domain made of the highly oscillatory material is embedded in a homogeneous infinite medium whose diffusion coefficients have to be appropriately determined.

In collaboration with M. Bertin and S. Brisard (ENPC), F. Legoll has introduced in 16 a variance reduction approach for the computation of the homogenized coefficients. The method is based on concurrently using (in a control variate fashion) the reference computations (solving the corrector problem on a large but finite domain) together with some inexpensive mean-field approximations often used in the computational mechanics community, such as the ones stemming from the Hashin-Shtrikman principle. The numerical efficiency of the approach has been demonstrated on several examples, including cases with large contrasts.
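The control-variate mechanism underlying this approach can be illustrated on a generic pair of correlated estimators. The variables below are synthetic stand-ins, not the actual corrector and Hashin-Shtrikman computations: `X` plays the role of the expensive reference quantity and `Y` that of a cheap, correlated mean-field surrogate whose mean is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two estimators (hypothetical toy data).
n = 10_000
common = rng.normal(size=n)            # shared source of randomness
X = 2.0 + common + 0.3 * rng.normal(size=n)   # "expensive" reference samples
Y = 1.0 + common                       # cheap surrogate, correlated with X
EY = 1.0                               # mean of the surrogate, known exactly

# Near-optimal control-variate coefficient: rho = Cov(X, Y) / Var(Y).
rho = np.cov(X, Y)[0, 1] / np.var(Y)

plain_estimates = X
cv_estimates = X - rho * (Y - EY)      # same mean as X, but smaller variance

print(np.var(plain_estimates), np.var(cv_estimates))
```

Since subtracting `rho * (Y - EY)` does not change the mean, the control-variate estimator remains unbiased while the shared randomness cancels, which is exactly the role played by the inexpensive mean-field approximations in the approach described above.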

In 2023, S. Boyaval has generalized the symmetric-hyperbolic system of conservation laws introduced in 2020 to a whole class of models for non-Newtonian fluids 15. Precisely, a family of quasilinear PDEs is proved to be symmetric-hyperbolic and endowed with an additional dissipation inequality (a formulation of the second principle), whatever the choice of a "stored energy functional"–"relaxation functional" couple within that class. This defines a framework for numerical methods.

The objective of a model reduction method is the following: it may sometimes be very expensive from a computational point of view to simulate the properties of a complex system described by a complicated model, typically a set of PDEs. This cost may become prohibitive in situations where the solution of the model has to be computed for a very large number of values of the parameters involved in the model. Such a parametric study is nevertheless necessary in several contexts, for instance when the value of these parameters has to be calibrated so that numerical simulations give approximations of the solutions that are as close as possible to some measured data. A model reduction method then consists in constructing, from a few complex simulations performed for a small number of well-chosen values of the parameters, a so-called reduced model, much cheaper and quicker to solve from a numerical point of view, which makes it possible to obtain an accurate approximation of the solution of the model for any other values of the parameters.
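The offline/online decomposition described above can be sketched on a toy parametrized linear system; all matrices below are synthetic stand-ins for a discretized PDE, not any model actually studied by the team.

```python
import numpy as np

# Toy parametrized system  A(mu) u = b  with affine dependence
# A(mu) = A0 + mu * A1  (synthetic stand-in for a discretized PDE).
rng = np.random.default_rng(1)
N = 200                                    # "full" model size
M = rng.normal(size=(N, N))
A0 = M @ M.T + N * np.eye(N)               # symmetric positive definite
A1 = np.diag(1.0 + rng.random(N))
b = rng.normal(size=N)

def full_solve(mu):
    """Expensive reference solve (N x N system)."""
    return np.linalg.solve(A0 + mu * A1, b)

# Offline stage: a few expensive solves at well-chosen parameter values,
# compressed by an SVD (POD) into a small reduced basis V.
snapshots = np.column_stack([full_solve(mu) for mu in (0.0, 0.5, 1.0, 2.0)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)   # N x 4 basis

def reduced_solve(mu):
    """Online stage: solve only a 4 x 4 system.  In a genuine reduced-basis
    method the matrices V.T @ A0 @ V and V.T @ A1 @ V would be precomputed
    offline, so that the online cost is independent of N."""
    Ar = V.T @ (A0 + mu * A1) @ V
    return V @ np.linalg.solve(Ar, V.T @ b)

mu_new = 1.3
u_full = full_solve(mu_new)
err = np.linalg.norm(u_full - reduced_solve(mu_new)) / np.linalg.norm(u_full)
print(f"relative error of the reduced model: {err:.2e}")
```

The reduced solution at a new parameter value is accurate because the solution manifold is smooth in the parameter, which is the basic premise of reduced-basis methods.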

In 62, V. Ehrlacher together with I. Niakh, G. Drouet (EDF) and A. Ern (SERENA) introduced a new model reduction method for parametrized linear variational inequalities of the first and second kind for mechanical contact and friction problems. The method relies on the use of a so-called Nitsche formulation of the problem. This avoids the use of dual variables, which are well-known not to be easily reducible for these types of problems.

In 52 and 51, V. Ehrlacher and T. Lelièvre, together with G. Dusson (Université de Besançon), Y. Conjungo Taumhas and F. Madiot (CEA), developed a reduced basis method for parametrized non-symmetric eigenvalue problems arising in the loading pattern optimization of a nuclear core in neutronics. This requires deriving a posteriori error estimates for the eigenvalue and the left and right eigenvectors. A first implementation of the method in APOLLO3, the CEA/DES deterministic multi-purpose code for reactor physics analysis, is presented.

Cross-diffusion systems are nonlinear degenerate parabolic systems that naturally arise in diffusion models of multi-species mixtures in a wide variety of applications: tumor growth, population dynamics, materials science, etc. In materials science, they typically model the evolution of local densities or volume fractions of chemical species within a mixture.
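A generic form of such systems, recalled here for orientation, is the following:

```latex
% Generic cross-diffusion system for the densities u_1, ..., u_n of n species:
\begin{equation*}
  \partial_t u_i \;=\; \operatorname{div}\!\left( \sum_{j=1}^{n} A_{ij}(u)\,\nabla u_j \right),
  \qquad i = 1, \dots, n ,
\end{equation*}
% where the diffusion matrix (A_{ij}(u)) is in general neither symmetric nor
% positive definite, so that well-posedness typically relies on an entropy
% (free energy) structure rather than on standard parabolic theory; this is
% the structure that the numerical schemes below aim to preserve.
```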

In 49, J. Cauvin-Vila, V. Ehrlacher, G. Marino (Augsburg) and J.-F. Pietschmann (Augsburg) studied mathematically a model of a multicomponent mixture where cross-diffusion effects occur between the different species but only one species separates from the other. The evolution of the system is modeled by a gradient flow, in an appropriate metric, of a degenerate Ginzburg–Landau energy, which yields a system of coupled partial differential equations of Cahn–Hilliard type. Local minimizers of the energy functional are shown to exist and to qualify as classical stationary solutions of the system. Finally, the authors prove exponential convergence to a constant stationary solution in a particular parameter regime, and they introduce a novel structure-preserving finite volume scheme to approximate the evolution numerically.

In 37, C. Cancès, J. Cauvin-Vila, C. Chainais-Hillairet and V. Ehrlacher developed a new structure-preserving numerical scheme for a model of a physical vapor deposition process used for the fabrication of thin film layers. The model involves two different types of cross-diffusion systems coupled by an evolving interface. The moving interface is addressed with a cut-cell approach, where the mesh is locally deformed around the interface. The scheme is shown to preserve the structure of the continuous system, namely: mass conservation, nonnegativity, volume-filling constraints and decay of the free energy.

Many research activities of the project-team are conducted in close collaboration with private or public companies: CEA, EDF, IFPEN, Sanofi, OSMOS Group, SAFRANTech. The project-team is also supported by the Office of Naval Research and the European Office of Aerospace Research and Development, for multiscale simulations of random materials. All these contracts are operated at and administrated by the École des Ponts, except the contracts with IFPEN, which are administrated by Inria.

T. Lelièvre, G. Stoltz and F. Legoll participate in the Laboratoire International Associé (LIA) CNRS / University of Illinois at Urbana-Champaign on complex biological systems and their simulation by high performance computers. This LIA involves French research teams from Université de Nancy, Institut de Biologie Structurale (Grenoble) and Institut de Biologie Physico-Chimique (Paris). The LIA was last renewed on January 1st, 2018.

Eric Cancès is one of the PIs of the Simons Targeted Grant “Moiré materials magic” (September 2021 - August 2026). His co-PIs are Allan MacDonald (UT Austin, coordinating PI), Svetlana Jitomirskaya (UC Berkeley), Efthimios Kaxiras (Harvard), Lin Lin (UC Berkeley), Mitchell Luskin (University of Minnesota), Angel Rubio (Max-Planck Institut), Maciej Zworski (UC Berkeley).

EMC2 project on cordis.europa.eu

Molecular simulation has become an instrumental tool in chemistry, condensed matter physics, molecular biology, materials science, and nanosciences. It will make it possible to propose the de novo design of, e.g., new drugs or materials, provided that the efficiency of the underlying software is improved by several orders of magnitude.

The ambition of the EMC2 project is to achieve scientific breakthroughs in this field by gathering the expertise of a multidisciplinary community at the interfaces of four disciplines: mathematics, chemistry, physics, and computer science. It is motivated by the twofold observation that, i) building upon our collaborative work, we have recently been able to gain efficiency factors of up to 3 orders of magnitude for polarizable molecular dynamics in solution of multi-million-atom systems, but this is not enough since ii) even larger or more complex systems of major practical interest (such as solvated biosystems or molecules with strongly-correlated electrons) are currently mostly intractable in reasonable wall-clock time. The only way to further improve the efficiency of the solvers, while preserving accuracy, is to develop physically and chemically sound models, mathematically certified and numerically efficient algorithms, and to implement them in a robust and scalable way on various architectures (from standard academic or industrial clusters to emerging heterogeneous and exascale architectures).

EMC2 has no equivalent in the world: nowhere else is there such a critical mass of interdisciplinary researchers, already collaborating and with the required track records, to address this challenge. Under the leadership of the 4 PIs, supported by highly recognized teams from three major institutions in the Paris area, EMC2 will develop disruptive methodological approaches and publicly available simulation tools, and apply them to challenging molecular systems. The project will considerably strengthen the local teams and their synergy, enabling decisive progress in the field.

Recent successes have established the potential of parallel-in-time integration as a powerful algorithmic paradigm to unlock the performance of Exascale systems. However, these successes have mainly been achieved in a rather academic setting, without an overarching understanding. TIME-X will take the next leap in the development and deployment of this promising new approach for massively parallel HPC simulation, enabling efficient parallel-in-time integration for real-life applications. We will:

(i) provide software for parallel-in-time integration on current and future Exascale HPC architectures, delivering substantial improvements in parallel scaling;

(ii) develop novel algorithmic concepts for parallel-in-time integration, deepening our mathematical understanding of their convergence behaviour and including advances in multi-scale methodology;

(iii) demonstrate the impact of parallel-in-time integration, showcasing the potential on problems that, to date, cannot be tackled with full parallel efficiency in three diverse and challenging application fields with high societal impact: weather and climate, medicine and fusion.

To realise these ambitious, yet achievable goals, the inherently inter-disciplinary TIME-X Consortium unites top researchers from numerical analysis and applied mathematics, computer science and the selected application domains. Europe is leading research in parallel-in-time integration. TIME-X unites all relevant actors at the European level for the first time in a joint strategic research effort. A strategic investment from the European Commission would enable taking the necessary next step: advancing parallel-in-time integration from an academic/mathematical methodology into a widely available technology with a convincing proof of concept, maintaining European leadership in this rapidly advancing field and paving the way for industrial adoption.
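The canonical example of a parallel-in-time method is the parareal algorithm, sketched below on a scalar ODE. This is a toy illustration of the general paradigm, not TIME-X code: a cheap coarse propagator G is iteratively corrected by accurate fine propagations F, which are independent across time slices and can therefore run in parallel.

```python
import numpy as np

# Parareal sketch on the scalar ODE  u' = lam * u,  u(0) = 1,  t in [0, T].
lam, T, n_slices = -1.0, 2.0, 10
dT = T / n_slices

def G(u, dt):
    """Coarse propagator: a single backward-Euler step."""
    return u / (1.0 - lam * dt)

def F(u, dt, substeps=100):
    """Fine propagator: many small backward-Euler steps."""
    for _ in range(substeps):
        u = u / (1.0 - lam * dt / substeps)
    return u

# Initial coarse sweep (sequential, but cheap).
U = np.empty(n_slices + 1)
U[0] = 1.0
for n in range(n_slices):
    U[n + 1] = G(U[n], dT)

# Parareal iterations: fine solves are independent per slice
# (they would run concurrently on a parallel machine).
for k in range(5):
    F_vals = [F(U[n], dT) for n in range(n_slices)]
    U_new = np.empty_like(U)
    U_new[0] = 1.0
    for n in range(n_slices):   # sequential correction sweep
        U_new[n + 1] = G(U_new[n], dT) + F_vals[n] - G(U[n], dT)
    U = U_new

exact = np.exp(lam * T)
print(abs(U[-1] - exact))       # close to the accuracy of the fine propagator
```

After a few iterations the parareal solution converges to the sequential fine solution, so the wall-clock time is dominated by the cheap coarse sweeps plus a handful of parallel fine solves.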


Interacting particle or agent-based systems are ubiquitous in science. They arise in an extremely wide variety of applications including materials science, biology, economics and social sciences. Several mathematical models exist to account for the evolution of such systems at different scales, among which stand optimal transport problems, Fokker-Planck equations, mean-field games systems and stochastic differential equations. However, all of them suffer from severe limitations when it comes to the simulation of high-dimensional problems, the high dimensionality arising either from the large number of particles or agents in the system, from the large number of features of each agent or particle, or from the huge number of parameters entering the model. The objective of this project is to provide a new mathematical framework for the development and analysis of efficient and accurate numerical methods for the simulation of high-dimensional particle or agent systems, stemming from applications in materials science and stochastic game theory.

The main challenges which will be addressed in this project are:

The potential impacts of the project are huge: making such extreme-scale simulations possible will provide precious insights into the predictive power of agent- or particle-based models, with applications in various fields, such as quantum chemistry, molecular dynamics, crowd motion and urban traffic.

The project-team is involved in several ANR projects:

The project-team is also involved in PEPR projects:

Members of the project-team are participating in the following GdR or RT:

The project-team is involved in the Labex Bezout (2011-).

C. Le Bris is a participant to the Inria Challenge EQIP (Engineering for Quantum Information Processors), in particular in collaboration with P. Rouchon (QUANTIC project-team).

S. Boyaval

E. Cancès

V. Ehrlacher

C. Le Bris

F. Legoll

T. Lelièvre

G. Stoltz

The members of the project-team have taught the following courses.

At École des Ponts 1st year (equivalent to L3):

At École des Ponts 2nd year (equivalent to M1):

At the M2 “Mathématiques de la modélisation” of Sorbonne Université:

At other institutions:

The following PhD theses supervised by members of the project-team have been defended:

The following PhD theses supervised by members of the project-team are ongoing:

Project-team members have participated in the following PhD juries:

Project-team members have participated in the following habilitation juries:

Project-team members have participated in the following selection committees:

Members of the project-team have delivered lectures in the following seminars, workshops and conferences:

Members of the project-team have delivered the following series of lectures:

Members of the project-team have presented posters in the following seminars, workshops and international conferences:

Members of the project-team have participated (without giving talks nor presenting posters) in the following seminars, workshops and international conferences: