Section: New Results

Computational Statistical Physics

Participants : Manon Baudel, Qiming Du, Grégoire Ferré, Frédéric Legoll, Tony Lelièvre, Mouad Ramil, Geneviève Robin, Laura Silva Lopes, Gabriel Stoltz.

The objective of computational statistical physics is to compute macroscopic properties of materials starting from a microscopic description, using concepts of statistical physics (thermodynamic ensembles and molecular dynamics). The contributions of the team can be divided into four main topics: (i) the development of methods for sampling the configuration space; (ii) the efficient computation of dynamical properties, which requires sampling metastable trajectories; (iii) the simulation of nonequilibrium systems and the computation of transport coefficients; (iv) coarse-graining techniques to reduce the computational cost of molecular dynamics simulations and gain insight into the models.

Sampling of the configuration space: new algorithms and applications

The work [52] by G. Ferré and G. Stoltz considers fluctuations of empirical averages for stochastic differential equations. Such averages are commonly used in statistical physics to estimate macroscopic quantities through ergodic averages, but they are subject to fluctuations. While small deviations are described by the central limit theorem, large fluctuations fall within the framework of large deviations theory. This theory is well understood for bounded observables of a stochastic differential equation, but the quantities of interest are generally unbounded. The authors identify the class of unbounded functions which enter the "usual" regime of large deviations. The answer is not trivial, and suggests that many physical observables satisfy another type of large deviations principle, which motivates further work. Additionally, the authors studied the influence of irreversibility on the fluctuations, providing a mathematical illustration of the second law of thermodynamics: irreversible dynamics generate more entropy, or more disorder, than reversible ones.
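As a minimal numerical illustration of such empirical averages (an illustrative sketch only, unrelated to the specific estimators analyzed in [52]; the potential, observable and parameters below are hypothetical choices), one can average an observable along a discretized overdamped Langevin trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_average(grad_V, obs, x0, dt, n_steps, beta=1.0):
    """Time average of an observable along overdamped Langevin dynamics,
    discretized with the Euler-Maruyama scheme."""
    x, acc = x0, 0.0
    noise_scale = np.sqrt(2.0 * dt / beta)
    for _ in range(n_steps):
        x += -grad_V(x) * dt + noise_scale * rng.standard_normal()
        acc += obs(x)
    return acc / n_steps

# Quadratic potential V(x) = x^2/2: the invariant measure is N(0, 1/beta),
# so the ergodic average of x^2 should approach 1 at beta = 1, up to
# CLT-scale fluctuations -- precisely what large deviations theory refines.
est = ergodic_average(lambda x: x, lambda x: x * x, 0.0, 1e-2, 200_000)
```

The fluctuations of `est` around its limit over independent runs are the object studied in [52].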

The team also pursued its endeavour to study and improve free energy biasing techniques, such as the adaptive biasing force method or metadynamics. The gist of these techniques is to bias the original metastable dynamics used to sample the target probability measure in configuration space with an approximation of the free energy along well-chosen reaction coordinates. This approximation is built on the fly, using empirical measures over replicas, or occupation measures over the trajectories of the replicas. Two works have been carried out on such methods:

  • First, in [50], V. Ehrlacher, T. Lelièvre and P. Monmarché (Sorbonne Université) have developed a new numerical method to compute the free energy and the biased potential given by the Adaptive Biasing Force (ABF) method in the case where the number of reaction coordinates in the system is too large to apply standard grid-based approximation techniques. The method combines a greedy algorithm with a tensor-product approximation. Convergence proofs of both the underlying ABF technique (which uses an unbiased occupation measure) and the greedy tensor-product approximation are provided.

  • Second, in [55], T. Lelièvre, together with B. Jourdain (Ecole des Ponts) and P.-A. Zitt (Université Paris Est), has used a parallel between metadynamics and self-interacting models for polymers to study the long-time convergence of the original metadynamics algorithm in the adiabatic setting, namely when the dynamics along the collective variables decouples from the dynamics along the other degrees of freedom. The bias introduced when the adiabatic assumption does not hold is also discussed.
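The flavour of adaptive biasing can be conveyed by a toy one-dimensional sketch (all names and parameters below are illustrative, not the schemes of [50] or [55]): the reaction coordinate is the position itself, and a binned running average of the mean force is subtracted from the drift, progressively flattening the double-well landscape so that both wells are visited:

```python
import numpy as np

rng = np.random.default_rng(1)

def abf_double_well(beta=3.0, dt=1e-2, n_steps=200_000,
                    xmin=-2.0, xmax=2.0, n_bins=40):
    """Toy 1D adaptive biasing force for V(x) = (x^2 - 1)^2, with the
    reaction coordinate taken as x itself.  A binned on-the-fly estimate
    of the mean force is subtracted from the drift."""
    grad_V = lambda x: 4.0 * x * (x * x - 1.0)
    x = -1.0                      # start in the left well
    force_sum = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    noise_scale = np.sqrt(2.0 * dt / beta)
    traj = np.empty(n_steps)
    for i in range(n_steps):
        b = min(max(int((x - xmin) / (xmax - xmin) * n_bins), 0), n_bins - 1)
        f = -grad_V(x)
        force_sum[b] += f
        counts[b] += 1
        bias = -force_sum[b] / counts[b]   # current mean-force estimate
        x += (f + bias) * dt + noise_scale * rng.standard_normal()
        x = min(max(x, xmin), xmax)        # confine to the sampling window
        traj[i] = x
    return traj

traj = abf_double_well()
frac_right = np.mean(traj > 0.0)   # both wells should be visited
```

Once the bias has converged, the biased drift nearly vanishes and the dynamics diffuses freely along the reaction coordinate.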

The team has also considered new applications in terms of sampling, and the analysis of related sampling methods:

  • For large-scale Bayesian inference, B. Leimkuhler (Edinburgh, United Kingdom), M. Sachs (Duke, USA) and G. Stoltz have studied in [57] the convergence of Adaptive Langevin dynamics, a method for sampling the Boltzmann-Gibbs distribution at a prescribed temperature in cases where the potential gradient is subject to a stochastic perturbation of unknown magnitude. The method replaces the friction in underdamped Langevin dynamics with a dynamical variable, updated according to a negative feedback loop control law as in the Nosé-Hoover thermostat. Hypocoercive techniques show that the law of Adaptive Langevin dynamics converges exponentially fast to the stationary distribution, with a rate that can be quantified in terms of the key parameters of the dynamics. This implies in particular that a central limit theorem holds for time averages computed along a stochastic path.

  • For the simulation of log-gases, G. Ferré and G. Stoltz have studied in [46], with D. Chafaï (Université Paris Dauphine), a follow-up to a former project on the efficient simulation of Coulomb and logarithmic gases. A previous work had demonstrated the usefulness of Hybrid Monte Carlo techniques for sampling the invariant measure of such gases. Gases under constraint are now considered. First, the algorithm proposed in [31] was used to explore the situation numerically. Then, large deviations techniques were employed to study the limiting behaviour of the conditioned gas as the number of particles gets large. For a class of constraints, the equation solved by the limiting empirical density exhibits a striking behaviour in particular cases. This work suggests further research directions, such as the limiting distribution for large constraints.
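The negative feedback mechanism of Adaptive Langevin dynamics can be sketched in a toy discretization (hypothetical parameters, not the scheme analyzed in [57]): the gradient noise is modeled as additive white noise of amplitude sigma, and the dynamical friction xi adjusts itself so that the average kinetic energy matches the target temperature:

```python
import numpy as np

rng = np.random.default_rng(2)

def adaptive_langevin_kinetic(beta=1.0, sigma=1.0, nu=1.0,
                              dt=5e-3, n_steps=400_000):
    """Sketch of Adaptive Langevin for a harmonic potential V(q) = q^2/2.
    The force is corrupted by white noise of (in practice unknown)
    amplitude sigma; the friction xi is a dynamical variable driven by a
    negative feedback on the instantaneous kinetic energy, as in the
    Nose-Hoover thermostat.  Returns the time-averaged p^2."""
    q, p, xi = 0.0, 0.0, 1.0
    kin = 0.0
    for _ in range(n_steps):
        noisy_force = -q + sigma * rng.standard_normal() / np.sqrt(dt)
        p += (noisy_force - xi * p) * dt
        q += p * dt
        xi += (p * p - 1.0 / beta) * dt / nu   # feedback control law
        kin += p * p
    return kin / n_steps

avg_kinetic = adaptive_langevin_kinetic()   # should approach 1/beta
```

At stationarity the friction settles near the value dictated by fluctuation-dissipation, so the correct temperature is recovered without knowing sigma.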

Sampling of dynamical properties and rare events

In the preprint [48], T. Lelièvre uses the quasi-stationary distribution approach to study the first exit point distribution from a bounded domain for the overdamped Langevin dynamics, in collaboration with G. Di Gesù (TU Wien, Austria), B. Nectoux (Université Blaise Pascal) and D. Le Peutrec (Université Paris-Sud). The quasi-stationary distribution approach has been developed by T. Lelièvre and collaborators over the past years in order to rigorously model the exit event from a metastable state by a jump Markov process, and to study this exit event in the small temperature regime. In [48], the authors prove that, in the small temperature regime and under rather general assumptions on the initial conditions and on the potential function, the distribution of the first exit point concentrates on points realizing the minimum of the potential on the boundary. The proof relies on tools from semi-classical analysis used to study tunnelling effects. This preprint has been divided into two separate articles for publication: the first one [22] has been accepted for publication; the second one is currently under review.
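The concentration of the first exit point on boundary minima of the potential can be observed in a simple Monte Carlo experiment (an illustrative one-dimensional sketch with hypothetical parameters; the results of [48] concern the small temperature asymptotics, not this discretization):

```python
import numpy as np

rng = np.random.default_rng(3)

def first_exit_point(grad_V, x0, dt, beta, a=-1.0, b=1.0):
    """Run overdamped Langevin (Euler-Maruyama) until the first exit
    from the interval (a, b) and report which endpoint is hit."""
    x = x0
    noise_scale = np.sqrt(2.0 * dt / beta)
    while a < x < b:
        x += -grad_V(x) * dt + noise_scale * rng.standard_normal()
    return a if x <= a else b

# Tilted well V(x) = x^2/2 + 0.3 x: on the boundary, V(-1) = 0.2 < V(1) = 0.8,
# so at low temperature the exit point should concentrate on x = -1.
grad_V = lambda x: x + 0.3
exits = [first_exit_point(grad_V, 0.0, 1e-3, beta=6.0) for _ in range(200)]
frac_left = np.mean([e == -1.0 for e in exits])
```

As beta increases, the fraction of exits through the higher-potential endpoint decays roughly like exp(-beta (V(1) - V(-1))).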

Nonequilibrium systems and computation of transport coefficients

Stemming from the IHP trimester "Stochastic Dynamics Out of Equilibrium" held at the Institut Henri Poincaré from April to July 2017, a collection of contributions has been gathered in a volume of proceedings [38], focusing on aspects of nonequilibrium dynamics and its ongoing developments. This volume has been edited by G. Giacomin (Université Paris Diderot), S. Olla (Université Paris Dauphine), E. Saada (CNRS and Université Paris Descartes), H. Spohn (TU Munich, Germany) and G. Stoltz. It includes contributions from various events relating to three domains: (i) transport in non-equilibrium statistical mechanics; (ii) the design of more efficient simulation methods; (iii) life sciences.

In addition, P. Plechac (University of Delaware, USA), T. Wang (Army Research Lab, USA) and G. Stoltz have considered in [61] numerical schemes for computing the linear response of steady-state averages of stochastic dynamics with respect to a perturbation of the drift part of the stochastic differential equation. The schemes rely on Girsanov's change-of-measure theory to reweight trajectories, with factors derived from a linearization of the Girsanov weights. Both the discretization error and the finite-time approximation error have been investigated. The designed numerical schemes have been shown to have a variance bounded with respect to the integration time, which is a desirable feature for long-time simulation. The discretization error can be improved to second-order accuracy in the time step by modifying the weight process appropriately.
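The reweighting idea can be illustrated on a case where the linear response is known in closed form (a sketch with hypothetical parameters, not the schemes of [61]): for an Ornstein-Uhlenbeck process perturbed by a constant drift, the finite-time response of the mean is exactly 1 - exp(-T):

```python
import numpy as np

rng = np.random.default_rng(4)

def linear_response_estimate(n_rep=5000, T=5.0, dt=1e-2, beta=1.0):
    """Linearized-Girsanov estimator of d/d(eps) E[X_T] at eps = 0 for
    the Ornstein-Uhlenbeck process dX = -X dt + sigma dW perturbed by a
    constant drift eps.  Trajectories of the unperturbed dynamics are
    reweighted by the derivative of the Girsanov weight at eps = 0; the
    exact continuous-time answer is 1 - exp(-T)."""
    sigma = np.sqrt(2.0 / beta)
    x = rng.standard_normal(n_rep) / np.sqrt(beta)   # start at equilibrium
    m = np.zeros(n_rep)                              # linearized log-weight
    for _ in range(int(T / dt)):
        dw = np.sqrt(dt) * rng.standard_normal(n_rep)
        x += -x * dt + sigma * dw
        m += dw / sigma    # d/d(eps) of the Girsanov exponent at eps = 0
    return float(np.mean(x * m))

response = linear_response_estimate()   # target: 1 - exp(-5)
```

The variance of the linearized weight grows with T, which is why [61] studies modified weight processes with variance bounded in the integration time.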


Coarse-graining techniques

Two works have explored new methods to define "good" reaction coordinates:

  • The estimation of the Poincaré constant of a given probability measure quantifies the typical convergence rate of reversible diffusions to their equilibrium measure. L. Pillaud-Vivien, F. Bach and A. Rudi (Inria SIERRA), together with T. Lelièvre and G. Stoltz, have shown in [58], both theoretically and experimentally, how to estimate the Poincaré constant given sufficiently many samples of the probability measure under consideration, using reproducing kernel Hilbert spaces. As a by-product of this estimation, they have also derived an algorithm that captures a low-dimensional representation of the data by finding directions which are difficult to sample – reaction coordinates in the language of molecular dynamics. This amounts to finding the marginal of the high-dimensional sampled measure for which the Poincaré constant is the largest possible.

  • In [60], T. Lelièvre, together with B. Leimkuhler and Z. Trstanova (University of Edinburgh, Scotland), has numerically explored the use of diffusion maps to define reaction coordinates or metastable states. Diffusion maps approximate the generator of the Langevin dynamics from simulation data; the idea is then to use its eigenvalues and eigenvectors to build coarse-grained variables. They have also discussed the use of diffusion maps to define local reaction coordinates within the metastable sets, formalising the locality via the concept of quasi-stationary distribution and justifying the convergence of diffusion maps applied to samples within a metastable set.
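A basic diffusion-map construction on synthetic clustered data illustrates how eigenvectors yield coarse-grained variables (a standard textbook sketch with hypothetical parameters, not the precise construction of [60]):

```python
import numpy as np

rng = np.random.default_rng(5)

def diffusion_map(points, eps):
    """Basic diffusion map: Gaussian kernel, density renormalization
    (alpha = 1), then eigenvectors of the resulting row-stochastic
    Markov matrix serve as coarse-grained coordinates."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    k = np.exp(-d2 / eps)
    q = k.sum(axis=1)
    k = k / np.outer(q, q)                 # remove sampling-density effects
    p = k / k.sum(axis=1, keepdims=True)   # Markov matrix
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)
    return vals.real[order], vecs.real[:, order]

# Two metastable "states": well-separated Gaussian clusters.  The first
# non-trivial eigenvector should separate the clusters by its sign.
pts = np.concatenate([rng.normal(-2.5, 0.3, (50, 2)),
                      rng.normal(2.5, 0.3, (50, 2))])
vals, vecs = diffusion_map(pts, eps=2.0)
psi1 = vecs[:, 1]
```

The top eigenvector is nearly constant; the second one is approximately piecewise constant over the metastable sets, which is the mechanism exploited to define reaction coordinates.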

Another coarse-graining procedure was considered to justify the approximation of an infinite system of ordinary differential equations (the Becker-Döring equations, describing coagulation/fragmentation processes of species of integer sizes) by a partial differential equation. Formal Taylor expansions suggest that the dynamics at large sizes should be dictated by an advection-diffusion equation, called the Fokker-Planck equation. P. Terrier and G. Stoltz rigorously proved in [59] the link between these two descriptions for evolutions on finite times rather than in some hydrodynamic limit, motivated by the results of numerical simulations and the construction of dedicated algorithms based on splitting strategies. In fact, the Becker-Döring equations and the Fokker-Planck equation are related through some pure diffusion with unbounded diffusion coefficient. The crucial point in the analysis is to obtain decay estimates for the solution of this pure diffusion and its derivatives in order to control the remainders in the Taylor expansions. The small parameter in this analysis is the inverse of the minimal size of the species.
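In schematic notation (generic rates a_n, b_n, concentrations c_n and monomer concentration c_1; this is the standard formal computation, not the precise setting of [59]), the Taylor expansion behind this link reads:

```latex
% Becker-Doring dynamics: fluxes between sizes n and n+1
\frac{\mathrm{d}c_n}{\mathrm{d}t} = J_{n-1} - J_n,
\qquad J_n = a_n c_1 c_n - b_{n+1} c_{n+1}.
% Treating the size n as a continuous variable x and expanding
% c_{n \pm 1} \approx c \pm \partial_x c + \tfrac12 \partial_x^2 c
% (and similarly for the rates) yields, to second order,
\partial_t c = -\partial_x \bigl[ (a c_1 - b)\, c \bigr]
  + \tfrac12\, \partial_x^2 \bigl[ (a c_1 + b)\, c \bigr].
```

This is an advection-diffusion equation of the announced Fokker-Planck type, whose diffusion coefficient (a c_1 + b)/2 is indeed unbounded when the rates grow with the size variable.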