The MATHERIALS project-team was created jointly by the École des Ponts ParisTech (ENPC) and Inria in 2015. It is the follow-up and an extension of the former project-team MICMAC, originally created in October 2002. It is hosted by the CERMICS laboratory (Centre d'Enseignement et de Recherches en Mathématiques et Calcul Scientifique) at École des Ponts. The permanent research scientists of the project-team have positions at CERMICS and at two other laboratories of École des Ponts: Institut Navier and Laboratoire Saint-Venant. The scientific focus of the project-team is to analyze and improve the numerical schemes used in computational chemistry simulations at the microscopic level, and to devise simulations coupling this microscopic scale with meso- or macroscopic scales (possibly using parallel algorithms). Over the years, the project-team has accumulated an increasingly solid expertise on such topics, which are traditionally not well known by the community in applied mathematics and scientific computing. One of the major achievements of the project-team is to have created a corpus of literature, authoring books and research monographs on the subject 1, 2, 3, 4, 6, 5, 7 that other scientists may consult in order to enter the field.

Our group, originally only involved in electronic structure computations, continues to focus on many numerical issues in quantum chemistry, but now expands its expertise to cover several related problems at larger scales, such as molecular dynamics problems and multiscale problems. The mathematical derivation of continuum energies from quantum chemistry models is one instance of a long-term theoretical endeavour.

Quantum Chemistry aims at understanding the properties of matter through
the modelling of its behavior at a subatomic scale, where matter is
described as an assembly of nuclei and electrons.
At this scale, the equation that rules the interactions between these
constitutive elements is the Schrödinger equation. It can be
considered (except in a few special cases, notably those involving
relativistic phenomena or nuclear reactions)
as a universal model for at least three reasons. First, it contains all
the physical
information of the system under consideration so that any of the
properties of this system can in theory be deduced from the
Schrödinger
equation associated to it. Second, the Schrödinger equation does not
involve any
empirical parameters, except some fundamental constants of Physics (the
Planck constant, the mass and charge of the electron, ...); it
can thus be written for any kind of molecular system provided its
chemical
composition, in terms of the nature of the nuclei and the number of electrons,
is known. Third, this model enjoys remarkable predictive
capabilities, as confirmed by comparisons with a large amount of
experimental data of various types.
On the other hand, using this high quality model requires working with
space and time scales which are both very
tiny: the typical size of the electronic cloud of an isolated atom is
the Angström (10⁻¹⁰ meters), and the typical period of vibration of a
chemical bond is the femtosecond (10⁻¹⁵ seconds). Note, however, that not
all macroscopic properties can be
simply upscaled from the consideration of the short time behavior of a
tiny sample of matter. Many of them derive from ensemble or bulk
effects, that are far from being easy to understand and to model.
Striking examples are found in solid state materials or biological
systems. Cleavage, the ability of minerals to naturally split along
crystal surfaces (e.g. mica yields thin flakes), is an ensemble
effect. Protein folding is
also an ensemble effect that originates from the presence of the
surrounding medium; it is responsible for peculiar properties
(e.g. unexpected acidity of some reactive site enhanced by special
interactions) upon which vital processes are based.
However, it is undoubtedly true that many macroscopic phenomena originate from
elementary processes which take place at the atomic scale. Let us
mention for instance the fact that
the elastic constants of a perfect crystal or the color of a chemical
compound (which is related to the wavelengths
absorbed or emitted during optical transitions between electronic
levels) can be evaluated by atomic scale calculations. In the same
fashion, the lubricating properties of graphite are essentially due to a
phenomenon which can be entirely modeled at the atomic scale.
It is therefore reasonable to simulate the behavior of matter at the
atomic scale in order to understand what is going on at the
macroscopic one.
The journey is however a long one. Starting from the basic
principles of Quantum Mechanics to model the matter at the subatomic
scale,
one finally uses statistical mechanics to reach the macroscopic
scale. It is often necessary to rely on intermediate steps to deal with
phenomena which take place on various mesoscales.
It may then be possible to couple one description of the system with some
others within the so-called multiscale models.
The sequel indicates how this journey can be completed,
focusing on the smallest scale (the subatomic one) rather than on the
larger ones.
It has already been mentioned that at the subatomic scale,
the behavior of nuclei and electrons is governed by the Schrödinger
equation, either in its time-dependent form
or in its time-independent form. Let us only mention at this point that
the time-dependent equation is a first-order linear evolution
equation, whereas the time-independent equation is a linear eigenvalue
equation.
For the reader more familiar with numerical analysis
than with quantum mechanics, the linear nature of the problems stated
above may look auspicious. What makes the
numerical simulation of these equations
extremely difficult is essentially the huge size of the Hilbert
space: indeed, this space is roughly some
symmetry-constrained subspace of L²(ℝ^{3(M+N)}), where M and N
respectively denote the numbers of nuclei and of electrons. In practice,
the Schrödinger equation is therefore approximated by systems of coupled
nonlinear partial differential equations, each
of these equations being posed on ℝ³.

As the size of the systems one wants to study increases, more efficient
numerical techniques need to be resorted to. In computational chemistry,
the typical scaling law for the complexity of computations with respect
to the size of the system under study is N³, N being, say, the number of
electrons.

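The cubic scaling law can be illustrated with a toy cost model (the prefactor and the reading of N as the number of electrons are assumptions for this sketch, not a statement about any particular code):

```python
# Toy cost model for the typical cubic scaling of electronic-structure
# computations; the prefactor c is arbitrary and purely illustrative.

def cost(n_electrons, c=1.0):
    """Estimated cost of one computation scaling as O(N^3)."""
    return c * n_electrons ** 3

# Doubling the system size multiplies the cost by 2^3 = 8.
ratio = cost(200) / cost(100)
```

This is precisely why more efficient (ideally linear-scaling) techniques become necessary as system sizes grow.
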
An alternative strategy to reduce the complexity of ab initio
computations is to try to couple different models at different
scales. Such a mixed strategy can be either a sequential one or a
parallel one, in the sense that the models at the different scales are
solved either successively (the outputs of the finer model feeding the
parameters of the coarser one) or simultaneously (the two models being
coupled within a single simulation).

The coupling of different scales can even go up to the macroscopic scale, with methods that couple a microscopic representation of matter, or at least a mesoscopic one, with the equations of continuum mechanics at the macroscopic level.

The orders of magnitude used in the microscopic representation of
matter are far from the orders of magnitude of the macroscopic
quantities we are used to: the number of particles under
consideration in a macroscopic sample of material is of the order of
the Avogadro number, N_A ≈ 6.02 × 10²³.

To give some insight into such a large number of particles contained in
a macroscopic sample, it is helpful to
compute the number of moles of water on earth. Recall that one mole of water
corresponds to 18 mL, so that a standard glass of water contains roughly
10 moles, and a typical bathtub contains about 10⁴ moles. By comparison, the
earth holds about 1.4 × 10²¹ liters of water, i.e. on the order of 10²³ moles.

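This back-of-the-envelope arithmetic can be checked in a few lines; the glass, bathtub and earth-water volumes below are rough assumed values:

```python
# Orders of magnitude for the moles-of-water comparison.
# Volumes are rough assumptions: a ~200 mL glass, a ~200 L bathtub,
# and ~1.4e9 km^3 (= 1.4e24 mL) of water on earth.
MOLE_ML = 18.0            # one mole of water occupies about 18 mL

glass_ml = 200.0
bathtub_ml = 200e3
earth_water_ml = 1.4e24

glass_moles = glass_ml / MOLE_ML          # ~11 moles
bathtub_moles = bathtub_ml / MOLE_ML      # ~1.1e4 moles
earth_moles = earth_water_ml / MOLE_ML    # ~7.8e22 moles
```
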
For practical numerical computations
of matter at the microscopic level, following the dynamics of every atom would
require simulating on the order of 10²³ particles, which is far beyond the
capabilities of present-day computers.

Describing the macroscopic behavior of matter knowing its microscopic
description
therefore seems out of reach. Statistical physics allows us to bridge the gap
between microscopic and macroscopic descriptions of matter, at least on a
conceptual
level. The question is whether the quantities estimated for a system of
tractable size (far smaller than the Avogadro number) correctly approximate
the corresponding macroscopic properties.

Despite its intrinsic limitations on spatial and timescales, molecular simulation has been used and developed over the past 50 years, and its number of users keeps increasing. As we understand it, it has two major aims nowadays.

First, it can be
used as a numerical microscope, which allows us to perform
“computer” experiments.
This was the initial motivation for simulations at the microscopic level:
physical theories were tested on computers.
This use of molecular simulation is particularly clear in its historic
development, which was triggered and sustained by the physics of simple
liquids. Indeed, there was no good analytical theory for these systems,
and the observation of computer trajectories was very helpful to guide the
physicists'
intuition about what was happening in the system, for instance the mechanisms
leading to molecular diffusion. In particular,
the pioneering works on Monte-Carlo methods by Metropolis et al., and the first
molecular dynamics
simulation of Alder and Wainwright were performed because of such motivations.
Today, understanding the behavior of matter at the
microscopic level can still be difficult from an experimental viewpoint
(because of the high resolution required, both in time and in
space), or because we simply do not know what to look for!
Numerical simulations are then a valuable tool to test some
ideas or obtain some data to process and analyze in order
to help assess experimental setups. This is
particularly true for current nanoscale systems.

Another major aim of molecular simulation, maybe even more important than the
previous one,
is to compute macroscopic
quantities or thermodynamic properties,
typically through averages of some functionals of the system.
In this case, molecular simulation is a
way to obtain quantitative information on a system,
instead of resorting to approximate theories, constructed for simplified models,
and giving only qualitative answers.
Sometimes, these properties are accessible through experiments,
but in some cases only numerical computations are possible
since experiments may be unfeasible or too costly
(for instance, when high pressure or large temperature regimes are considered,
or when studying materials not yet synthesized).
More generally, molecular simulation is a tool to explore the links between
the microscopic and macroscopic properties of a material, allowing
one to address modelling questions such as “Which microscopic ingredients are
necessary
(and which are not) to observe a given macroscopic behavior?”

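As a minimal sketch of how such averages are computed in practice, the following random-walk Metropolis chain estimates a canonical average for a 1D toy potential (the potential, temperature and step size are illustrative assumptions):

```python
import math, random

def metropolis_average(V, observable, beta=1.0, n_steps=200_000,
                       step=0.5, seed=0):
    """Estimate <observable> under the Boltzmann measure exp(-beta*V(x))
    with a 1D random-walk Metropolis chain."""
    rng = random.Random(seed)
    x, acc = 0.0, 0.0
    for _ in range(n_steps):
        y = x + step * rng.uniform(-1.0, 1.0)   # symmetric proposal
        # accept with probability min(1, exp(-beta*(V(y)-V(x))))
        if rng.random() < math.exp(-beta * (V(y) - V(x))):
            x = y
        acc += observable(x)
    return acc / n_steps

# Harmonic potential V(x) = x^2/2: the exact average <x^2> equals 1/beta.
est = metropolis_average(lambda x: 0.5 * x * x, lambda x: x * x, beta=1.0)
```
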
Over the years, the project-team has developed an increasing expertise on how to couple models written at the atomistic scale with more macroscopic models, and, more generally, an expertise in multiscale modelling for materials science.

The following observation motivates the idea of coupling atomistic and
continuum representations of materials. In many situations of interest
(crack propagation, presence of defects in the atomistic lattice, ...),
using a model based on continuum mechanics is difficult. Indeed, such a
model is based on a macroscopic constitutive law, the derivation of
which requires a deep qualitative and quantitative understanding of the
physical and mechanical properties of the solid under consideration.
For many solids, reaching such an understanding is a challenge, as the loads
they are subjected to become larger and more diverse, and as
the experimental observations that would help design such models are not always
possible (think of materials used in the nuclear industry).
Using an atomistic model in the whole domain is not possible either, due
to its prohibitive computational cost. Recall indeed that a
macroscopic sample of matter contains a number of atoms on the order of
the Avogadro number; an atomistic model can therefore only describe a small
part of the solid. So, a natural idea is to try to take advantage of
both models, the continuum mechanics one and the atomistic one, and to
couple them, in a domain decomposition spirit. In most of the domain,
the deformation is expected to be smooth, and reliable continuum
mechanics models are then available. In the rest of the
domain, the expected deformation is singular, so that one needs an atomistic
model to describe it properly, the cost of which remains however limited
as this region is small.

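The consistency between the two descriptions in the smooth regime can be sketched in one dimension, comparing the energy of a chain of harmonic bonds with its Cauchy-Born continuum counterpart (the pair potential and deformation below are illustrative assumptions):

```python
import math

def phi(r):
    """Pair energy of a bond of length r (harmonic around r = 1)."""
    return 0.5 * (r - 1.0) ** 2

def atomistic_energy(f, a, b, n):
    """Energy of a chain of n+1 atoms at deformed positions f(a + i*h)."""
    h = (b - a) / n
    return sum(h * phi((f(a + (i + 1) * h) - f(a + i * h)) / h)
               for i in range(n))

def continuum_energy(fprime, a, b, n=10_000):
    """Cauchy-Born energy: integral of phi(f'(x)) over [a, b] (midpoint rule)."""
    h = (b - a) / n
    return sum(h * phi(fprime(a + (i + 0.5) * h)) for i in range(n))

f = lambda x: x + 0.1 * math.sin(x)     # smooth deformation
fp = lambda x: 1.0 + 0.1 * math.cos(x)  # its derivative
E_atom = atomistic_energy(f, 0.0, 2.0 * math.pi, 1000)
E_cont = continuum_energy(fp, 0.0, 2.0 * math.pi)
```

For a smooth deformation the two energies agree up to O(h²); it is exactly in the singular regions that this agreement breaks down and the atomistic model must be kept.
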
From a mathematical viewpoint, the question is to couple a discrete model with a model described by PDEs. This raises many questions, both from the theoretical and the numerical viewpoints.

More generally, the presence of numerous length scales in material
science problems represents a challenge for numerical simulation,
especially when some randomness is assumed on the
materials. It can take various forms, and includes defects in
crystals, thermal fluctuations, and impurities or heterogeneities in
continuous media. Standard methods available in the literature to
handle such problems often lead to very costly computations. Our
goal is to develop numerical methods that are more
affordable. Because we cannot embrace all difficulties at once, we
focus on a simple case, where the fine scale and the coarse-scale
models can be written similarly, in the form of a simple elliptic
partial differential equation in divergence form. The fine scale
model includes heterogeneities at a small scale, a situation which
is formalized by the fact that the coefficients in the fine scale
model vary on a small length scale. After homogenization, this model
yields an effective, macroscopic model, which includes no small
scale. In many cases, a sound theoretical groundwork exists for such
homogenization results. The difficulty stems from the fact that the models
generally lead to prohibitively costly computations. For such a
case, simple from the theoretical viewpoint, our aim is to focus on
different practical computational approaches to speed-up the
computations. One possibility, among others, is to look for specific
random materials, relevant from the practical viewpoint, and for
which a dedicated approach can be proposed, that is less expensive
than the general approach.

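In the one-dimensional version of this elliptic problem, the homogenized coefficient is known in closed form: it is the harmonic mean of the oscillatory coefficient, not its arithmetic mean. A minimal check, with an assumed sinusoidal coefficient:

```python
import math

def harmonic_mean(a, n=100_000):
    """Harmonic mean of the 1-periodic coefficient a over one period,
    i.e. the effective coefficient of -d/dx( a(x/eps) du/dx ) = f in 1D."""
    h = 1.0 / n
    return 1.0 / (h * sum(1.0 / a((i + 0.5) * h) for i in range(n)))

a = lambda y: 2.0 + math.sin(2.0 * math.pi * y)  # oscillatory coefficient
a_star = harmonic_mean(a)
# Here the integral of 1/a over one period equals 1/sqrt(3),
# so a_star = sqrt(3) ≈ 1.732, strictly below the arithmetic mean 2.
```
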
T. Lelièvre was awarded a visiting professorship from the Leverhulme Trust, to cover his sabbatical leave at Imperial College from January 2020 to June 2020.

Grégoire Ferré, who did his PhD in the MATHERIALS team from Oct. 2016 to Sept. 2019, was awarded the PhD prize of the doctoral school MSTIC of University Paris-Est. His work focused on large deviation theory and its applications in statistical physics, from both theoretical and numerical viewpoints. He is now a research associate at Capital Fund Management.

gen.parRep is the first publicly available implementation of the Generalized Parallel Replica method (BSD 3-Clause license), targeting frequently encountered metastable biochemical systems, such as conformational equilibria or dissociation of protein-ligand complexes.

It was shown (hal-01832823) that the resulting C++/MPI implementation exhibits strong linear scalability, providing up to 70% of the maximum possible speedup on several hundred CPUs.

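The speedup mechanism behind the Parallel Replica family of methods can be sketched with a toy model (this is not the gen.parRep implementation; the rate and replica count are arbitrary): if the escape time from a metastable state is exponentially distributed, the first escape among N independent replicas is exponential with N times the rate, so the wall-clock time is divided by up to N.

```python
import random

def first_escape(n_replicas, lam, rng):
    """Wall-clock time of the first escape among n_replicas independent
    replicas, each with an exponential escape time of rate lam."""
    return min(rng.expovariate(lam) for _ in range(n_replicas))

rng = random.Random(0)
n, lam, n_samples = 32, 1.0, 20_000
mean_escape = sum(first_escape(n, lam, rng) for _ in range(n_samples)) / n_samples
# The minimum of n exponentials of rate lam is exponential of rate n*lam,
# so the expected first-escape time is 1/(n*lam) = 1/32.
```
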
The team members have continued their study of algorithms for solving the ground-state problem in Kohn-Sham density functional theory, the long-term goal being the construction of robust and efficient numerical methods with guaranteed error bounds.

In 46, E. Cancès, G. Kemlin and A. Levitt have studied the algebraic structure of the self-consistent problems arising in mean-field models such as Hartree-Fock and Kohn-Sham density functional theory. They have shown the local convergence of the damped self-consistent iteration and of gradient descent, and have explicitly compared their convergence rates, providing insight into the strengths and weaknesses of these methods.

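A scalar caricature of a damped self-consistent iteration may help fix ideas (the map F below is a toy stand-in for the actual self-consistent field map of mean-field models; the damping parameter beta controls the convergence rate):

```python
import math

def damped_scf(F, x0, beta, n_iter=200):
    """Damped fixed-point iteration x_{k+1} = x_k + beta*(F(x_k) - x_k).
    Converges locally when the effective map has contraction factor
    |1 - beta + beta*F'(x*)| < 1; beta tunes the convergence rate."""
    x = x0
    for _ in range(n_iter):
        x = x + beta * (F(x) - x)
    return x

F = lambda x: 0.5 * math.cos(x)       # toy self-consistent map
x_star = damped_scf(F, 1.0, beta=0.6)
residual = abs(F(x_star) - x_star)    # self-consistency residual
```
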
E. Cancès, M. Herbst and A. Levitt have implemented a posteriori error estimators into the DFTK code, and have demonstrated their use in 26. They have provided fully guaranteed error estimators (including discretization, algebraic and machine arithmetic errors), albeit under simplifying assumptions (analytic pseudopotentials, and neglecting the explicit electron-electron interaction). Work is underway to extend this formalism to the more realistic case of nonlinear mean-field models.

M. Herbst and A. Levitt have investigated numerically the convergence of self-consistent iterations in extended systems, and the impact of various preconditioners. As is well-known, homogeneous preconditioners are unable to reflect various chemical environments such as interfaces or surfaces. Using the density of states as a proxy for the local dielectric properties of the medium, they have proposed in 53 a simple yet cheap and efficient preconditioner that does not need to be tuned manually for each system. Numerical results have shown a consistent improvement over state-of-the-art methods in a variety of systems, with particularly good performance on metallic surfaces.

In 14, E. Cancès, G. Dusson (CNRS and
University of Besançon), Y. Maday (Sorbonne University), B. Stamm
(University of Aachen, Germany), and M. Vohralík (Inria Paris,
project-team Serena) have proven a priori error estimates for the
perturbation-based post-processing of the plane-wave approximation of
Schrödinger equations introduced and tested numerically in previous
works of the same authors, for a class of Schrödinger operators with
suitable potentials.
The team members have focused on justifying rigorously and computing efficiently the response properties of molecules and materials.

In 45, E. Cancès, A. Levitt and S. Siraj-Dine have studied the time-dependent response properties of crystals to a uniform electric field. Their results present in a unified framework various regimes described in the physical and mathematical literature (quantum Hall effect in insulators, ballistic conduction and Bloch oscillations in metals, residual conductivity of graphene), and shed light on their range of validity.

M. Herbst and T. Fransson (Heidelberg University, Germany) have used the recently developed adcc code to study the error of the core-valence separation (CVS) approximation in the context of simulating X-ray absorption spectra 25. Using an iterative post-processing procedure, they have been able to undo this approximation in the computed results, and thereby to study, for the first time, its error within the regime of computational parameters employed in practice. Their results show that in the K-edge region of X-ray spectra the errors are negligible and overall amount only to a scalar shift of the simulated spectrum. The CVS approximation is therefore safe to use for the calculation of core-excited states, and does not lower the quality of results required for comparison with experimental X-ray absorption spectra.

E. Cancès, R. Coyaud and R. Scott (University of Chicago, USA) have
pursued their study of van der Waals interactions. In
43, they have extended a method previously
introduced by the first and third authors, to compute more terms in
the asymptotic expansion of the van der Waals attraction between two
hydrogen atoms. These terms are obtained by solving a set of modified
Slater–Kirkwood partial differential equations. The accuracy of the
method is demonstrated by numerical simulations and comparison with
other methods from the literature. It is also shown that the
scattering states of the hydrogen atom, i.e. the states associated
with the continuous spectrum of the Hamiltonian, have a major
contribution to the C₆ coefficient of the van der Waals expansion.

In 10, R. Benda and E. Cancès, in collaboration
with B. Lebental and G. Zucchi (Ecole Polytechnique) have investigated the
interaction of polyfluorene and fluorene/carbazole copolymers, bearing
various functional groups and side chains, with small to large
diameter carbon nanotubes (CNTs) in vacuo, using variable-charge
molecular dynamics simulations based on the reactive force field
ReaxFF. It is shown that the non-covalent functionalization of the
nanotubes is driven by π-stacking interactions.

C. Le Bris has pursued his long term collaboration with P. Rouchon (Ecole des Mines de Paris and Inria/QUANTIC) on the study of high dimensional Lindblad type equations at play in the modelling of open quantum systems. They have co-supervised the M2 internship of L-A. Sellem, which focused on the simulation of some simple quantum gates and investigated several discretization strategies based on the choice of suitable basis sets. Following up on the topic of his internship, L-A. Sellem started his PhD work in September 2020, under the co-supervision of P. Rouchon and C. Le Bris, at the intersection between the QUANTIC and MATHERIALS project-teams.

The aim of computational statistical physics is to compute macroscopic properties of materials starting from a microscopic description, using concepts of statistical physics (thermodynamic ensembles and molecular dynamics). The contributions of the project-team can be divided into four main topics: (i) the development of methods for sampling the configuration space; (ii) the efficient computation of dynamical properties, which requires sampling metastable trajectories; (iii) the simulation of nonequilibrium systems and the computation of transport coefficients; (iv) coarse-graining techniques, to reduce the computational cost of molecular dynamics simulations and gain insight into the models.

Various dynamics are used in computational statistical physics to sample the configurations of the system according to the target probability measure at hand and approximate averages as time averages along one realization. It is important to have a good theoretical understanding of the performance of sampling methods in order to choose the optimal parameters in actual numerical simulations, for instance to have a variance as small as possible for time averages along realizations of certain dynamics. A common dynamics is the so-called Langevin dynamics, which corresponds to a Hamiltonian dynamics perturbed by an Ornstein–Uhlenbeck process on the momenta. The generator associated with this stochastic differential equation is hypoelliptic (at best). Proving the longtime convergence of the semigroup requires dedicated tools, under a general strategy known as hypocoercivity, and usually involves prefactors which are difficult to quantitatively estimate. Bounds on the asymptotic variance on the other hand only require bounds on the resolvent of the generator. An approach to directly obtaining estimates on the resolvent of hypocoercive operators is proposed in 40, using Schur complements, rather than obtaining bounds on the resolvent from an exponential decay of the evolution semigroup. The results can be extended, besides Langevin-like dynamics, to the linear Boltzmann equation (which is also the generator of randomized Hybrid Monte Carlo in molecular dynamics). In particular, the dependence of the resolvent bounds on the parameters of the dynamics and on the dimension is made precise, and the relationships with other hypocoercive approaches are highlighted. This work has been performed while Max Fathi (now at Université de Paris, France) was visiting the MATHERIALS project-team.

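A minimal sketch of the Langevin dynamics mentioned above, discretized with an Euler-Maruyama scheme, illustrates how time averages along one realization approximate ensemble averages (the potential, friction and step size are illustrative; the scheme carries an O(dt) bias):

```python
import math, random

def momentum_second_moment(grad_V, gamma=1.0, T=1.0, dt=1e-2,
                           n_steps=200_000, seed=0):
    """Euler-Maruyama discretization of the Langevin dynamics
    dq = p dt,  dp = (-grad_V(q) - gamma*p) dt + sqrt(2*gamma*T) dW,
    returning the time average of p^2 along one trajectory."""
    rng = random.Random(seed)
    q, p = 0.0, 0.0
    sigma = math.sqrt(2.0 * gamma * T * dt)
    second_moment = 0.0
    for _ in range(n_steps):
        p += (-grad_V(q) - gamma * p) * dt + sigma * rng.gauss(0.0, 1.0)
        q += p * dt
        second_moment += p * p
    return second_moment / n_steps

# For any confining V, the momentum marginal is Gaussian: <p^2> = T.
var_p = momentum_second_moment(lambda q: q, T=1.0)
```
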
Other numerical sampling algorithms were also studied. In 58, T. Lelièvre and G. Stoltz, together with W. Zhang (Zuse Institute Berlin, Germany) have proposed new Markov Chain Monte Carlo (MCMC) algorithms to sample probability distributions on submanifolds. Such target distributions are typically encountered when studying systems with molecular constraints (fixed bond angles or bond lengths), or for free energy computations. The newly proposed method generalizes previous algorithms by allowing the use of set-valued maps in the proposal step of the MCMC algorithms. The motivation for this generalization is that the numerical solvers used to project proposed moves to the submanifold of interest may find several solutions. The unbiasedness of these new algorithms is proven thanks to some carefully enforced reversibility property. The interest of the new MCMC algorithms is illustrated on various numerical examples.

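The projection step and its set-valued nature can be illustrated on the simplest submanifold, the unit circle: projecting a tangent move back to the constraint along the constraint gradient amounts to solving a quadratic equation with zero, one or two solutions. The sketch below is not the algorithm of 58; for this particular manifold the Metropolis correction turns out to be trivial, whereas in general a reverse-projection check and an acceptance step are required:

```python
import math, random

def project(q, t_mag, sign):
    """Project q + t_mag*tau(q) back onto the unit circle along the
    constraint gradient (normal direction): the quadratic constraint has
    two solutions, selected by `sign` (a set-valued map), and none when
    |t_mag| > 1. The tangent vector at q is tau(q) = (-q[1], q[0])."""
    if abs(t_mag) > 1.0:
        return None
    root = sign * math.sqrt(1.0 - t_mag * t_mag)
    return (root * q[0] - t_mag * q[1], root * q[1] + t_mag * q[0])

def sample_circle(n_steps=50_000, step=0.4, seed=0):
    """Random walk on the unit circle: Gaussian tangent move, then
    projection, with the branch of the set-valued map chosen at random."""
    rng = random.Random(seed)
    q = (1.0, 0.0)
    samples = []
    for _ in range(n_steps):
        t_mag = step * rng.gauss(0.0, 1.0)
        y = project(q, t_mag, rng.choice((-1.0, 1.0)))
        if y is not None:       # no solution: the move is rejected
            q = y
        samples.append(q)
    return samples

pts = sample_circle()
mean_x = sum(p[0] for p in pts) / len(pts)
mean_y = sum(p[1] for p in pts) / len(pts)
```
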
For sampling multimodal and anisotropic distributions in Bayesian inference or statistical physics, T. Lelièvre, G. Robin and G. Stoltz, together with G. Pavliotis (Imperial College, United Kingdom) are studying a new algorithm based on Langevin dynamics, in the framework of G. Robin's post-doc funded by Inria through the "Programme de Recherche Exploratoire" on "Molecular Dynamics and Learning". They have proposed a new method extending the Metropolis Adjusted Langevin Algorithm (MALA), by introducing a diffusion function allowing a more efficient exploration of the parameter space. This diffusion function is obtained by optimizing the convergence rate of the sampling algorithm, which boils down to solving a spectral problem. The properties of this optimization problem are studied, and a numerical method is developed to compute the corresponding optimal diffusion. Numerical experiments are also performed to evaluate the gain in terms of convergence speed, in comparison to the original MALA with constant diffusion.

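For reference, the baseline MALA with constant diffusion, which the work above generalizes, can be sketched as follows (1D standard Gaussian target; step size and chain length are illustrative assumptions):

```python
import math, random

def mala(log_pi, grad_log_pi, delta=0.1, n_steps=100_000, seed=0):
    """Metropolis Adjusted Langevin Algorithm with constant diffusion (1D):
    proposal y = x + delta*grad_log_pi(x) + sqrt(2*delta)*G, corrected by
    a Metropolis-Hastings acceptance step."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        y = x + delta * grad_log_pi(x) + math.sqrt(2.0 * delta) * rng.gauss(0.0, 1.0)
        # log densities of the Gaussian proposals q(y|x) and q(x|y)
        lq_fwd = -(y - x - delta * grad_log_pi(x)) ** 2 / (4.0 * delta)
        lq_rev = -(x - y - delta * grad_log_pi(y)) ** 2 / (4.0 * delta)
        log_alpha = log_pi(y) - log_pi(x) + lq_rev - lq_fwd
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = y
        samples.append(x)
    return samples

s = mala(lambda x: -0.5 * x * x, lambda x: -x)  # standard Gaussian target
mean = sum(s) / len(s)
var = sum(v * v for v in s) / len(s) - mean * mean
```
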
Finally, the team has also pursued its endeavour to study and improve free energy biasing techniques, such as the adaptive biasing force method or metadynamics. The gist of these techniques is to bias the original metastable dynamics used to sample the target probability measure in the configuration space by an approximation of the free energy along well-chosen reaction coordinates. This approximation is built on the fly, using empirical measures over replicas, or occupation measures over the trajectories of the replicas. In 49, V. Ehrlacher and T. Lelièvre, together with P. Monmarché (Sorbonne Université, France), have introduced a modified version of the Adaptive Biasing Force method, in which the free energy is approximated by a sum of tensor products of one-dimensional functions. This makes it possible to handle a larger number of reaction coordinates than the classical algorithm. It is proven that the algorithm is well-defined, and the long-time convergence of the algorithm is studied. Numerical experiments demonstrate that the method is able to capture correlations between reaction coordinates.

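The gist of adaptive biasing can be sketched in one dimension, where the approximation of the mean force reduces to a running average per histogram bin (the double-well potential, bin count and crude confinement below are illustrative assumptions, not the algorithm of 49):

```python
import math, random

def abf_1d(dV, xmin=-2.0, xmax=2.0, n_bins=20, T=1.0,
           dt=1e-3, n_steps=400_000, seed=0):
    """Toy 1D Adaptive Biasing Force on overdamped Langevin dynamics:
    the running average of the force in each bin of the reaction
    coordinate (here x itself) is subtracted from the drift, which
    progressively flattens the free-energy barrier. Returns the set of
    visited bins."""
    rng = random.Random(seed)
    force_sum = [0.0] * n_bins
    count = [0] * n_bins
    x, sigma = -1.0, math.sqrt(2.0 * T * dt)
    visited = set()
    for _ in range(n_steps):
        b = min(n_bins - 1, max(0, int((x - xmin) / (xmax - xmin) * n_bins)))
        f = dV(x)
        force_sum[b] += f
        count[b] += 1
        bias = force_sum[b] / count[b]   # current mean-force estimate in bin b
        x += (-f + bias) * dt + sigma * rng.gauss(0.0, 1.0)
        x = min(max(x, xmin), xmax)      # crude confinement to [xmin, xmax]
        visited.add(b)
    return visited

# Double well V(x) = (x^2 - 1)^2, with wells at x = -1 (bin 5) and x = +1
# (bin 15); the biased dynamics visits both wells.
visited = abf_1d(lambda x: 4.0 * x * (x * x - 1.0))
```
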
The theory of quasi-stationary distributions has been used by T. Lelièvre and collaborators over the past years in order to rigorously model the exit event from a metastable state by a jump Markov process, and to study this exit event in the small temperature regime. In 39, T. Lelièvre together with M. Baudel and A. Guyader (Sorbonne Université, France) have illustrated how the Hill relation and the notion of quasi-stationary distribution can be used to analyse the error introduced by many algorithms that are used to compute mean reaction times between metastable states for Markov processes. The theoretical findings are illustrated on various examples demonstrating the sharpness of the error analysis as well as the applicability of the study to elliptic diffusions.

In 55, T. Lelièvre, together with D. Le Peutrec (Université d'Orléans, France) and B. Nectoux (Université Clermont Auvergne, France), have considered the first exit point distribution from a bounded domain of the overdamped Langevin dynamics in the small temperature regime, under rather general assumptions on the potential function. This work is a continuation of a previous paper (part 1, see 19) where the exit point distribution is studied starting from the quasi-stationary distribution. The proofs are based on analytical results on the dependence of the exit point distribution on the initial condition, large deviation techniques, and results on the genericity of Morse functions.

Quasi-stationary distributions can be seen as the first eigenvector associated with the generator of the stochastic differential equation at hand, on a domain with Dirichlet boundary conditions (which correspond to absorbing boundary conditions at the level of the underlying stochastic processes). Many results on the quasi-stationary distribution hold for non-degenerate stochastic dynamics, whose associated generator is elliptic. The case of degenerate dynamics is less clear. In 57, T. Lelièvre and M. Ramil, together with J. Reygner (Ecole des Ponts, France), have generalized well-known results on the probabilistic representation of solutions to parabolic equations on bounded domains to the so-called kinetic Fokker-Planck equation on domains that are bounded in position, with absorbing boundary conditions. Furthermore, a Harnack inequality and a maximum principle are provided for solutions to this kinetic Fokker-Planck equation, as well as the existence of a smooth transition density for the associated absorbed Langevin dynamics. The continuity of this transition density at the boundary is studied, as well as the compactness, in various functional spaces, of the associated semigroup. This work is a cornerstone to prove the consistency of various algorithms used to simulate metastable trajectories of the Langevin dynamics, in particular the Parallel Replica algorithm and the Adaptive Multilevel Splitting method.

Many systems in computational statistical physics are not at equilibrium, but rather in a stationary state. This is in particular the case when one wants to compute transport coefficients, which determine the response of the system to some external perturbation. For instance, the thermal conductivity relates an applied temperature difference to an energy current through Fourier's law; while the mobility coefficient relates an applied external constant force to the average velocity of the particles in the system.

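As a toy illustration of a transport coefficient, the mobility of a free Langevin particle can be estimated by applying a constant force and measuring the average velocity in the steady state (all parameters below are arbitrary illustrative choices):

```python
import math, random

def average_velocity(F, gamma=1.0, T=1.0, dt=1e-2, n_steps=500_000, seed=0):
    """Time-averaged velocity of a free Langevin particle under a constant
    force F:  dp = (F - gamma*p) dt + sqrt(2*gamma*T) dW.
    Linear response gives <p> = F/gamma, i.e. a mobility of 1/gamma."""
    rng = random.Random(seed)
    p, total = 0.0, 0.0
    sigma = math.sqrt(2.0 * gamma * T * dt)
    for _ in range(n_steps):
        p += (F - gamma * p) * dt + sigma * rng.gauss(0.0, 1.0)
        total += p
    return total / n_steps

F = 0.5
mobility = average_velocity(F) / F   # should be close to 1/gamma = 1
```
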
G. Stoltz and U. Vaes, together with G. Pavliotis (Imperial College
London, United Kingdom) have studied in 33 the properties of the mobility for generalized Langevin dynamics. These dynamics can be seen as Langevin dynamics with some colored noise, and recast in the quasi-Markovian setting as a stochastic differential equation with additional degrees of freedom modeling the structure of the noise. A hypocoercive analysis makes it possible to characterize the behavior of the mobility in various limiting regimes of the dynamics.

Since joining the project-team in November, U. Vaes has also been working with G. Stoltz on variance reduction methods for the computation of mobility coefficients for Langevin dynamics in the underdamped (or Hamiltonian) limit, for two dimensional systems. There is currently no precise numerical or theoretical understanding of the divergent behavior of the mobility as the friction coefficient vanishes. Strong limitations are due to the large computational cost of numerically converging the estimated value of the diffusion coefficient as the friction decreases. U. Vaes and G. Stoltz are studying the use of variance reduction methods to more efficiently compute the mobility.

Finally, G. Stoltz, together with A. Iacobucci and S. Olla (Université
Paris-Dauphine, France) have studied in 54
the macroscopic profiles of temperature and angular momentum in the
stationary state of chains of rotors under a thermo-mechanical forcing
applied at the boundaries. These profiles are solutions of a system of
diffusive partial differential equations with boundary conditions
determined by the thermo-mechanical forcing. Instead of expensive
Monte Carlo simulations of the underlying microscopic physical system,
they have performed extensive numerical simulations based on a finite
difference method for the system of partial differential equations
describing the macroscopic steady state. They have presented a formal
derivation of these stationary equations based on a linear response
argument and local equilibrium assumptions. This makes it possible to
characterize the regime of parameters leading to uphill diffusion, a
situation where the energy flows in the direction of the gradient of
temperature; and to identify regions of parameters corresponding to a
negative thermal conductivity (i.e. a positive linear response
to a gradient of temperature). The agreement with previous results
obtained by numerical simulation of the microscopic dynamics confirms
the validity of the macroscopic equations which were derived.

In 24, T. Lelièvre, G. Stoltz and Z. Belkacemi have reviewed how machine learning techniques are used in molecular dynamics to extract valuable information from the enormous amounts of data generated by the simulation of complex systems. They provide a review of the goals, benefits, and limitations of machine learning techniques for computational studies of atomistic systems, focusing on the construction of empirical force fields from ab initio databases and the determination of reaction coordinates for free energy computation and enhanced sampling. This work is co-authored with P. Gkeka (Sanofi, France), A. Farimani (Carnegie Mellon University, USA), M. Ceriotti (EPFL, Switzerland), J. Chodera (Memorial Sloan Kettering Cancer Center, USA), A. Dinner (University of Chicago, USA), A. Ferguson (University of Chicago, USA), J.B. Maillet (CEA, France), H. Minoux (Sanofi, France), C. Peter (University of Konstanz, Germany), F. Pietrucci (Sorbonne Université, France), A. Silveira (Memorial Sloan Kettering Cancer Center, USA), A. Tkatchenko (University of Luxembourg, Luxembourg), Z. Trstanova (University of Edinburgh, United Kingdom) and R. Wiewiora (Memorial Sloan Kettering Cancer Center, USA).

In homogenization theory, members of the project-team have pursued their ongoing systematic study of perturbations of periodic problems (by local and nonlocal defects). This has been done in the context of the PhD work of R. Goudey, who has studied perturbations of periodic systems that do not decay at infinity but become increasingly rare. Another interesting direction studied is that of general algebras for homogenization, introduced by X. Blanc, C. Le Bris and P.-L. Lions two decades ago. Some very illustrative one-dimensional cases have been explored, as well as the case of equations simpler than the diffusion equation, namely elliptic equations with highly oscillatory potentials. A monograph that summarizes the developments completed in the past years, and that combines them with the basic elements of homogenization theory, is currently being written by C. Le Bris and X. Blanc (on half-time leave from Université de Paris at Inria from September 2019 until August 2021).

Also in the context of homogenization theory, O. Gorynina, C. Le Bris and F. Legoll have pursued their work on the question of how to determine the homogenized coefficient of a heterogeneous medium without explicitly performing a homogenization procedure. This work is a follow-up on earlier works over the years by C. Le Bris and F. Legoll, in collaboration with K. Li and later S. Lemaire. The 18-month postdoc (January 2019-July 2020) of O. Gorynina has made it possible to complete the mathematical study and the numerical improvement of a computational approach initially introduced by R. Cottereau (CNRS Marseille). This approach combines, in the Arlequin framework, the original fine-scale description of the medium (modelled by an oscillatory coefficient) with an effective description (modelled by a constant coefficient), and optimizes the coefficient of the effective medium to best fit the response of a purely homogeneous medium. In the limit of infinitely fine structures, the approach yields the value of the homogenized coefficient. Various computational improvements have been suggested; they are collected in a publication currently submitted 52. The theoretical study of the approach is carried out in a contribution soon to be submitted.
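As a purely illustrative one-dimensional analogue of this fitting strategy (a minimal sketch under simplified assumptions, not the Arlequin-based coupling of the actual approach), one can solve the oscillatory problem -(a(x/ε) u')' = f by finite differences and fit a constant effective coefficient to its response; in 1D the result can be checked against the known homogenized coefficient, the harmonic mean of a:

```python
import numpy as np

def solve_dirichlet(a_mid, f):
    """Finite differences for -(a u')' = f on (0,1) with u(0)=u(1)=0.
    a_mid holds the coefficient at the n cell midpoints; the n-1 interior
    nodal values of u are returned."""
    n = len(a_mid)
    h = 1.0 / n
    main = (a_mid[:-1] + a_mid[1:]) / h**2
    off = -a_mid[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

eps, n = 0.02, 1000
x_mid = (np.arange(n) + 0.5) / n
a_eps = 2.0 + np.sin(2 * np.pi * x_mid / eps)   # oscillatory coefficient
f = np.ones(n - 1)                              # constant load f = 1

u_eps = solve_dirichlet(a_eps, f)               # fine-scale response
u_one = solve_dirichlet(np.ones(n), f)          # response for coefficient 1

# For a constant trial coefficient A the response is u_one / A, so the
# least-squares fit to the fine-scale response is explicit:
A_best = np.dot(u_one, u_one) / np.dot(u_one, u_eps)

# In 1d the homogenized coefficient is the harmonic mean of a.
A_harm = 1.0 / np.mean(1.0 / a_eps)
print(A_best, A_harm)   # both close to sqrt(3) ≈ 1.732 for this coefficient
```

As ε decreases, the best-fit constant coefficient approaches the homogenized one, which is the behaviour the actual approach exploits.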

From a numerical perspective, the Multiscale Finite Element Method (MsFEM) is a classical strategy to address situations where the homogenized problem is not known (e.g. in difficult nonlinear cases), or where the scale of the heterogeneities, although small, is not considered to be zero (so that the homogenized problem cannot be considered a sufficiently accurate approximation). During the year, several research tracks have been pursued in this general direction.

The MsFEM approach uses a Galerkin approximation of the problem on a
pre-computed basis, obtained by solving local problems mimicking the
problem at hand at the scale of mesh elements, with carefully chosen
right-hand sides and boundary conditions. The initially proposed
version of MsFEM uses as basis functions the solutions to these local
problems, posed on each mesh element, with null right-hand sides and
with the coarse P1 functions as Dirichlet boundary conditions. Various
improvements have since been proposed, such as the oversampling
variant, which solves local problems on larger domains and restricts
their solutions to the element under consideration. Although the
approach was proposed many years ago, it turns out that not all
specific settings are covered by the numerical analyses available in
the literature. Together with C. Le Bris, F. Legoll and A. Lozinski,
R. Biezemans (who joined the project-team this year during his M2
internship and is now starting his PhD) has extended the analysis of
MsFEM to the case of rectangular meshes and that of coefficients that
are not necessarily Hölder continuous. An ongoing research effort is
devoted to further improving the error estimates. In addition, R.
Biezemans, C. Le Bris, F. Legoll and A. Lozinski have undertaken the
precise mathematical study of an existing variant of MsFEM, where the
standard basis set is enriched using a singular value decomposition of
an operator defined for each edge of the coarse mesh, and the range of
which is essentially given by the trace on that edge of the
oscillatory solutions (for, say, various right-hand sides). This
variant has been recently proposed in the literature, and its
appealing numerical performance (locally supported basis
functions, lack of any resonance error in the regime
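In one space dimension the basic MsFEM construction described above admits a closed form: on each coarse element K, the a-harmonic basis function is χ(x) = ∫ a⁻¹ / ∫_K a⁻¹. The following sketch (a minimal illustration, not one of the variants analyzed by the team) builds this basis numerically, assembles the coarse Galerkin system, and checks that the coarse solution matches a fine-scale reference at the coarse nodes:

```python
import numpy as np

# fine-scale problem: -(a(x/eps) u')' = 1 on (0,1), u(0)=u(1)=0
eps, n_fine, n_coarse = 0.02, 2000, 10
h = 1.0 / n_fine
x_mid = (np.arange(n_fine) + 0.5) * h
a = 2.0 + np.sin(2 * np.pi * x_mid / eps)

# reference fine-grid solve (standard conservative finite differences)
main = (a[:-1] + a[1:]) / h**2
off = -a[1:-1] / h**2
A_fine = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
u_fine = np.linalg.solve(A_fine, np.ones(n_fine - 1))

# MsFEM basis on each coarse element [X_k, X_{k+1}]: the a-harmonic function
# chi with chi(X_k)=0, chi(X_{k+1})=1, i.e. chi proportional to the integral
# of a^{-1}.  Its element "conductance" is q_k = 1 / int_K a^{-1}, and the
# local load (for f = 1) involves int_K chi and int_K (1 - chi).
m = n_fine // n_coarse            # fine cells per coarse element
q = np.empty(n_coarse)
load = np.zeros(n_coarse + 1)     # coarse nodal load vector
for k in range(n_coarse):
    inv_a = 1.0 / a[k * m:(k + 1) * m]
    q[k] = 1.0 / (h * inv_a.sum())
    chi = np.cumsum(inv_a) / inv_a.sum()      # basis trace on the fine cells
    load[k + 1] += h * chi.sum()              # int_K chi * f
    load[k] += h * (1.0 - chi).sum()          # int_K (1 - chi) * f

# assemble and solve the coarse (tridiagonal) Galerkin system
K = np.diag(q[:-1] + q[1:]) + np.diag(-q[1:-1], 1) + np.diag(-q[1:-1], -1)
U = np.linalg.solve(K, load[1:-1])

# compare with the reference solution at the interior coarse nodes: in 1d,
# the MsFEM solution is (up to quadrature error) nodally exact
nodes = np.arange(1, n_coarse) * m - 1        # fine indices of coarse nodes
print(np.max(np.abs(U - u_fine[nodes])))
```

The nodal exactness observed here is specific to one dimension; in higher dimensions the choice of boundary conditions for the local problems (the issue addressed by oversampling and by the enriched variants above) becomes essential.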

Second, within the PhD thesis of A. Lesage, Multiscale Finite Element Methods have been developed for thin heterogeneous plates. The fact that one of the dimensions of the domain of interest scales as the typical size of the heterogeneities within the material induces theoretical and practical difficulties that have to be carefully taken into account. In a first step, V. Ehrlacher, F. Legoll and A. Lesage, in collaboration with A. Lebée (ENPC, France), have established strong convergence results (between the oscillatory solution and an adequate two-scale expansion) for problems posed on thin heterogeneous plates. After the case of a diffusion equation, the case of linear elasticity has been studied, both in the so-called membrane case (that is, when the loading lies in the in-plane directions) and in the much more challenging case of bending (that is, when the loading is perpendicular to the in-plane directions). In a second step, several MsFEM variants have been proposed and compared numerically. A priori error bounds have been established, assessing the performance of the approaches, and extensive numerical tests have been performed. All these results are presented in two manuscripts in preparation.

In 2020, S. Boyaval has pursued his research to improve mathematical models of non-Newtonian fluids for applications on large time-space domains, where waves propagate at finite speed. Regarding fluid viscoelasticity (of Maxwell type), it turns out that this can be modeled within the standard framework of classical continuum mechanics, with a symmetric-hyperbolic system of conservation laws unifying the elastodynamics of hyperelastic materials and Newtonian fluid dynamics 42. To that aim, the viscoelastic system is an extension of elastodynamics using a time-dependent structure tensor that produces friction through relaxation. The new framework is versatile and should be consolidated for extension to other material imperfections beyond viscoelasticity.

New results were obtained in 2020 by V. Ehrlacher on the analysis and numerical analysis of cross-diffusion systems, in the context of the ANR project COMODO.

In a joint work 44 with C. Cancès (Inria RAPSODI) and L. Monasse (Inria COFFEE), a finite volume numerical scheme for the so-called Maxwell-Stefan model has been developed. The Maxwell-Stefan model is a cross-diffusion system used to describe the evolution of a mixture of gases, and is used in particular in biomedical applications. The numerical method is provably convergent and preserves at the discrete level the main mathematical properties of the continuous system, including mass conservation, non-negativity of the solutions and an entropy-entropy dissipation inequality which accounts for the long-time behaviour of the solutions of the system.
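The structure-preserving properties mentioned above (discrete mass conservation, non-negativity) can be illustrated on a much simpler scalar example. The following sketch is a generic explicit finite-volume scheme for the heat equation, not the Maxwell-Stefan scheme of 44:

```python
import numpy as np

# Explicit finite-volume scheme for u_t = u_xx on (0,1) with no-flux
# boundaries: a scalar illustration of discrete mass conservation (the
# face fluxes telescope) and of non-negativity (the update is a convex
# combination of neighbouring values under the CFL condition dt <= h^2/2).
n, dt, steps = 100, 2e-5, 2000
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
u = np.maximum(0.0, np.sign(x - 0.5))   # nonnegative initial data (a step)
mass0 = h * u.sum()

for _ in range(steps):
    flux = np.zeros(n + 1)              # flux[i] = flux across face i
    flux[1:-1] = -(u[1:] - u[:-1]) / h  # interior faces; flux[0]=flux[n]=0
    u = u - (dt / h) * (flux[1:] - flux[:-1])

print(h * u.sum() - mass0, u.min())     # mass drift ~ round-off, u >= 0
```

For cross-diffusion systems such as Maxwell-Stefan, obtaining these same discrete properties (plus the entropy-entropy dissipation inequality) is precisely what requires a carefully designed scheme.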

In a joint work with J-F. Pietschmann and G. Marino (Chemnitz, Germany) 50, V. Ehrlacher has proved the existence of weak solutions to a multi-species cross-diffusion degenerate Cahn-Hilliard system, including phase separation effects of one species with respect to the other species composing the mixture. The proof of the existence of weak solutions to such a model requires the development of new analysis tools, in particular due to the degenerate nature of the resulting model.

In a joint work with D. Lombardi and M. Fuente-Ruiz (Inria COMMEDIA), V. Ehrlacher has developed a new algorithm for the computation of a canonical polyadic approximation of a high-order tensor 48. The new method is much more efficient and robust than the standard Alternating Least Squares method and gives very encouraging results, especially for high-order tensors.
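For reference, the standard Alternating Least Squares baseline against which such a method is typically compared can be sketched as follows (a minimal numpy implementation for third-order tensors; the algorithm of 48 itself is not reproduced here):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R) -> (J*K x R)."""
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def cp_als(T, rank, n_iter=500, seed=0):
    """Plain alternating least squares for a rank-`rank` CP approximation
    of a third-order tensor T ~ sum_r A[:,r] (x) B[:,r] (x) C[:,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T0 = T.reshape(I, J * K)                     # mode-1 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        # each factor solves a linear least-squares problem in turn
        A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# exact rank-2 tensor: ALS should recover it (up to scaling) from random init
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
print(err)
```

The well-known weaknesses of this baseline (slow "swamp" phases, sensitivity to initialization, cost growth with the tensor order) are what improved CP algorithms aim to overcome.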

Many research activities of the project-team are conducted in close collaboration with private or public companies: CEA, SANOFI, EDF. The project-team is also supported by the Office of Naval Research and the European Office of Aerospace Research and Development for multiscale simulations of random materials. All these contracts are operated and administered at École des Ponts.

The project-team is involved in several ANR projects:

Members of the project-team are participating in the following GdR:

The project-team is involved in two Labex: the Labex Bezout (2011-) and the Labex MMCD (2012-).

C. Le Bris is a participant to the Inria Challenge EQIP (Engineering for Quantum Information Processors), in particular in collaboration with P. Rouchon (QUANTIC project-team).

The ERC Synergy Grant EMC2 (ERC Grant Agreement number 810367, PIs: E. Cancès, L. Grigori, Y. Maday, J-P. Piquemal) started in September 2019.

The Euro HPC grant TIME-X (PIs: Y. Maday and G. Samaey), focusing on parallel-in-time computations and in which F. Legoll and T. Lelièvre participate, has been selected for funding.

T. Lelièvre, G. Stoltz and F. Legoll participate in the Laboratoire International Associé (LIA) CNRS / University of Illinois at Urbana-Champaign on complex biological systems and their simulation by high performance computers. This LIA involves French research teams from Université de Nancy, Institut de Biologie Structurale (Grenoble) and Institut de Biologie Physico-Chimique (Paris). The LIA has been renewed for 4 years, starting January 1st, 2018.

R. Benda co-organizes the PhD students and postdocs seminar of CERMICS.

E. Cancès
is a member of the editorial boards of Mathematical Modelling and Numerical Analysis

V. Ehrlacher

C. Le Bris

F. Legoll is a member of the editorial board of SIAM MMS (2012-) and of ESAIM: Proceedings and Surveys (2012-).

T. Lelièvre

A. Levitt co-organizes the applied mathematics seminar of the CERMICS lab, and the internal seminar of the EMC2 project (Sorbonne Université).

G. Robin

G. Stoltz

The members of the project-team have taught the following courses.

At École des Ponts 1st year (equivalent to L3):

At École des Ponts 2nd year (equivalent to M1):

At École des Ponts 3rd year (equivalent to M2):

At the M2 “Mathématiques de la modélisation” of Sorbonne Université:

At other institutions:

The following HDR theses have been defended by members of the project-team:

The following PhD theses supervised by members of the project-team have been defended:

The following PhD theses supervised by members of the project-team are ongoing:

Project-team members have participated in the following PhD juries:

Project-team members have participated in the following habilitation juries:

Project-team members have participated in the following selection committees:

Members of the project-team have delivered lectures in the following seminars, workshops and conferences:

Members of the project-team have delivered the following series of lectures:

Members of the project-team have presented posters in the following seminars, workshops and international conferences:

Members of the project-team have participated (without giving a talk or presenting a poster) in the following seminars, workshops and international conferences: