The MICMAC project-team was created jointly by the École Nationale des Ponts et Chaussées (ENPC) and INRIA in October 2002. It is hosted by the CERMICS laboratory (Centre d'Enseignement et de Recherches en Mathématiques, Informatique et Calcul Scientifique) at ENPC. The scientific focus of the project-team is to analyze and improve the numerical schemes used in computational chemistry simulations at the microscopic level, and in simulations coupling this microscopic scale with larger, meso or macroscopic, scales.

Over the years, the project-team has accumulated an increasingly solid expertise on such topics, which are traditionally not well known by the applied mathematics and scientific computing community. One of the major achievements of the project-team is to have created a corpus of literature, authoring books and research monographs on the subject that other scientists may consult in order to enter the field.

Quantum Chemistry aims at understanding the properties of matter through the modeling of its behavior at a subatomic scale, where matter is described as an assembly of nuclei and electrons.

At this scale, the equation that governs the interactions between these constitutive elements is the Schrödinger equation. It can be considered (except in a few special cases, notably those involving relativistic phenomena or nuclear reactions) as a universal model, for at least three reasons. First, it contains all the physical information of the system under consideration, so that any of the properties of this system can in theory be deduced from the Schrödinger equation associated to it. Second, the Schrödinger equation does not involve any empirical parameter, except some fundamental constants of Physics (the Planck constant, the mass and charge of the electron, ...); it can thus be written for any kind of molecular system, provided its chemical composition, in terms of the nature of the nuclei and the number of electrons, is known. Third, this model enjoys remarkable predictive capabilities, as confirmed by comparisons with a large amount of experimental data of various types.

On the other hand, using this high quality model requires working with space and time scales which are both very tiny: the typical size of the electronic cloud of an isolated atom is the Angström (10^{-10} meter), and the size of the nucleus embedded in it is 10^{-15} meter; the typical vibration period of a molecular bond is the femtosecond (10^{-15} second), and the characteristic relaxation time for an electron is 10^{-18} second. Consequently, Quantum Chemistry calculations concern very short time (say 10^{-12} second) behaviors of very small size (say 10^{-27} m^{3}) systems. The underlying question is therefore whether information on phenomena at these scales is of any help to understand, or better predict, macroscopic properties of matter.

It is certainly not true that *all* macroscopic properties can be simply upscaled from the short time behavior of a tiny sample of matter. Many of them proceed (also) from ensemble or bulk effects, which are far from easy to understand and to model. Striking examples are found in solid state materials and biological systems. Cleavage, the ability of minerals to split naturally along crystal surfaces (e.g. mica yields thin flakes), is an ensemble effect. Protein folding is also an ensemble effect that originates from the presence of the surrounding medium; it is responsible for peculiar properties (e.g. the unexpected acidity of some reactive site, enhanced by special interactions) upon which vital processes are based.

However, it is undoubtedly true that, on the other hand, *many* macroscopic phenomena originate from elementary processes which take place at the atomic scale. Let us mention for instance that the elastic constants of a perfect crystal, or the color of a chemical compound (which is related to the wavelengths absorbed or emitted during optical transitions between electronic levels), can be evaluated by atomic scale calculations. In the same fashion, the lubricating properties of graphite are essentially due to a phenomenon which can be entirely modeled at the atomic scale.

It is therefore reasonable to simulate the behavior of matter at the atomic scale in order to understand what is going on at the macroscopic one. The journey is however a long one. Starting from the basic principles of Quantum Mechanics to model matter at the subatomic scale, one finally uses statistical mechanics to reach the macroscopic scale. It is often necessary to rely on intermediate steps to deal with phenomena which take place at various *mesoscales*, possibly coupling one approach to the others within so-called *multiscale* models. The sequel indicates how this journey can be completed, focusing on the first scale (the subatomic one) rather than on the latter ones.

It has already been mentioned that at the subatomic scale, the behavior of nuclei and electrons is governed by the Schrödinger equation, either in its time-dependent form or in its time-independent form. Let us only mention at this point that:

- both equations involve the quantum Hamiltonian of the molecular system under consideration; from a mathematical viewpoint, it is a self-adjoint operator on some Hilbert space; *both* the Hilbert space and the Hamiltonian operator depend on the nature of the system;

- also present in these equations is the wavefunction of the system, which completely describes its state; its L^{2} norm is set to one.

The time dependent equation is a first order linear evolution equation, whereas the time-independent equation is a linear eigenvalue equation.
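For concreteness, the two formulations can be written schematically as follows (standard textbook notation, in atomic units; our own rendering, not taken from the report):

```latex
% Time-dependent form: a first order linear evolution equation
i\,\frac{\partial \Psi}{\partial t} = H\,\Psi ,
% Time-independent form: a linear eigenvalue problem
H\,\Psi = E\,\Psi , \qquad \|\Psi\|_{L^2} = 1 ,
```

where $H$ is the self-adjoint quantum Hamiltonian and $\Psi$ the wavefunction of the system.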

For the reader more familiar with numerical analysis than with quantum mechanics, the linear nature of the problems stated above may look auspicious. What makes the numerical simulation of these equations extremely difficult is essentially the huge size of the Hilbert space: indeed, this space is roughly some symmetry-constrained subspace of L^{2}(R^{d}), with d = 3(M+N), M and N respectively denoting the number of nuclei and the number of electrons the system is made of. The parameter d is already 39 for a single water molecule and rapidly reaches 10^{6} for polymers or biological molecules. In addition, a consequence of the universality of the model is that one has to deal at the same time with several energy scales. In molecular systems, the basic elementary interaction between nuclei and electrons (the two-body Coulomb interaction) appears in various complex physical and chemical phenomena whose characteristic energies cover several orders of magnitude: the binding energy of core electrons in heavy atoms is 10^{4} times as large as a typical covalent bond energy, which is itself around 20 times as large as the energy of a hydrogen bond. High precision, or at least controlled error cancellations, are thus required to reach chemical accuracy when starting from the Schrödinger equation.
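To make the dimensionality issue concrete, here is a back-of-the-envelope sketch (ours, not from the report) of the cost of a naive grid discretization of the wavefunction:

```python
# Hedged illustration (our own, not the team's code): why direct discretization
# of the Schrodinger equation is intractable. A naive real-space grid with K
# points per dimension needs K**d values, with d = 3*(M + N).

def wavefunction_dimension(M, N):
    """Dimension d = 3(M+N) of configuration space for M nuclei and N electrons."""
    return 3 * (M + N)

def naive_grid_size(M, N, K=10):
    """Number of stored values for a (very coarse!) K-point-per-axis grid."""
    return K ** wavefunction_dimension(M, N)

# A single water molecule: M = 3 nuclei, N = 10 electrons -> d = 39.
d_water = wavefunction_dimension(3, 10)
print(d_water)                  # 39, the figure quoted in the text
print(naive_grid_size(3, 10))   # 10**39 grid values: hopeless
```

Even ten points per axis, far too coarse for chemical accuracy, already yields an astronomically large storage requirement for the smallest molecules.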

Clever approximations of the Schrödinger problems are therefore needed. The two main approximation strategies, namely the Born-Oppenheimer-Hartree-Fock and the Born-Oppenheimer-Kohn-Sham strategies, end up with large systems of coupled *nonlinear* partial differential equations, each of these equations being posed on R^{3}. The size of the underlying functional space is thus reduced at the cost of a dramatic increase of the mathematical complexity of the problem: nonlinearity. The mathematical and numerical analysis of the resulting models is one of the major concerns of the project-team.
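A minimal sketch of the structure of such self-consistent field (SCF) iterations, on a toy discrete model of our own (the matrix H0 and the density coupling are illustrative assumptions, not the Hartree-Fock or Kohn-Sham equations themselves):

```python
import numpy as np

# Hedged toy model: a self-consistent field (SCF) loop for a small discrete
# nonlinear eigenvalue problem
#   H(rho) psi = lam psi,  rho = |psi|^2,
# mimicking the fixed-point structure of Hartree-Fock / Kohn-Sham iterations.

def scf(H0, coupling=0.5, tol=1e-10, max_iter=200, mixing=0.5):
    n = H0.shape[0]
    rho = np.full(n, 1.0 / n)             # initial guess for the density
    for _ in range(max_iter):
        H = H0 + coupling * np.diag(rho)  # "frozen nonlinearity": a linear problem
        lam, vecs = np.linalg.eigh(H)     # inner loop: linear eigenproblem
        psi = vecs[:, 0]                  # ground state of the linearized problem
        rho_new = psi ** 2
        if np.linalg.norm(rho_new - rho) < tol:
            return lam[0], psi
        rho = (1 - mixing) * rho + mixing * rho_new  # damped fixed-point update
    return lam[0], psi

H0 = np.diag([0.0, 1.0, 2.0]) + 0.1 * (np.ones((3, 3)) - np.eye(3))
energy, psi = scf(H0)
print(energy)
```

The damped (mixed) update is the simplest convergence aid; production codes use far more sophisticated acceleration, but the two-level structure (nonlinear outer loop, linear inner eigenproblem) is the same.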

As the size of the systems one wants to study increases, more efficient numerical techniques need to be resorted to. In computational chemistry, the typical scaling law for the complexity of computations with respect to the size of the system under study is N^{3}, N being for instance the number of electrons. The Holy Grail in this respect is to reach linear scaling, so as to make possible simulations of systems of practical interest in biology or materials science. Efforts in this direction must address a large variety of questions, such as:

- how to improve the nonlinear iterations that are the basis of any *ab initio* model for computational chemistry?

- how to solve more efficiently the inner loop, which most often consists in the solution procedure for the linear problem (with frozen nonlinearity)?

- how to design a sufficiently small variational space, whose dimension is kept limited while the size of the system increases?

An alternative strategy to reduce the complexity of *ab initio* computations is to try to couple different models at different scales. Such a mixed strategy can be either a sequential one or a parallel one, in the sense that

in the former, the results of the model at the lower scale are simply used to evaluate some parameters that are inserted in the model for the larger scale: one example is the parameterized classical molecular dynamics, which makes use of force fields that are fitted to calculations at the quantum level;

while in the latter, the model at the lower scale is concurrently coupled to the model at the larger scale: an instance of such a strategy is the so-called QM/MM coupling (standing for Quantum Mechanics/Molecular Mechanics coupling), where some part of the system (typically the reactive site of a protein) is described by a quantum model, which therefore accounts for changes in the electronic structure and for the modification of chemical bonds, while the rest of the system (typically the inert part of the protein) is coarse-grained and more crudely modeled by classical mechanics.
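The parallel (QM/MM) strategy can be summarized by an additive energy partition; the following sketch is purely illustrative (all function names and toy energy models are our own assumptions, not an actual QM/MM package):

```python
# Hedged sketch of the additive energy partition behind parallel QM/MM coupling:
# the reactive region is treated by an expensive "quantum" model, the rest by
# cheap classical terms, plus an explicit coupling term between the two.

def qmmm_energy(positions, qm_indices, e_qm, e_mm, e_coupling):
    """Total energy E = E_QM(QM region) + E_MM(MM region) + E_coupling."""
    qm = [positions[i] for i in qm_indices]
    mm = [p for i, p in enumerate(positions) if i not in qm_indices]
    return e_qm(qm) + e_mm(mm) + e_coupling(qm, mm)

# Toy 1D models: harmonic "QM" energy, pairwise "MM" energy, screened coupling.
e_qm = lambda qm: 0.5 * sum(x * x for x in qm)
e_mm = lambda mm: sum(abs(a - b) for a in mm for b in mm) / 2
e_coupling = lambda qm, mm: sum(1.0 / (1.0 + abs(a - b)) for a in qm for b in mm)

E = qmmm_energy([0.0, 1.0, 3.0], qm_indices=[0],
                e_qm=e_qm, e_mm=e_mm, e_coupling=e_coupling)
print(E)
```

The computational gain comes from calling the expensive model only on the small QM region; the modeling difficulty, glossed over here, lies entirely in the coupling term and in the treatment of the QM/MM boundary.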

The coupling of different scales can even go up to the macroscopic scale, with methods that couple a microscopic description of matter, or at least a mesoscopic one, with the equations of continuum mechanics at the macroscopic level.

The laser control of chemical reactions is today an experimental reality. Experiments carried out by many groups of researchers, in many different contexts and settings, have demonstrated the feasibility of controlling the evolution of a quantum system using a laser field. All these experiments exploit the remarkable properties of quantum interactions (interferences) between one, or more, external interactions (e.g. lasers) and the sample of matter under study. In order to create the ad hoc interferences that will drive the system to the desired goal, one can play with the dephasing between two beams, conveniently choose the frequencies of the beams, or mix the two aspects together, which amounts to allowing for “all” possible laser fields, as in optimal control schemes.

Whatever the strategy, the success of these numerous experiments not only validates the idea of manipulating and controlling quantum systems with lasers, but also motivates the need for further theoretical studies in this direction, in order to further improve the results and the range of their applicability. Interest in this research area has also been increasing in more applied communities. The standard modeling for the problem of the laser control of a molecular system involves the time-dependent Schrödinger equation which rules the evolution of the wavefunction describing the state of the system. On the basis of the Schrödinger equation, one then states a control problem, either in the framework of exact control or in the framework of optimal control.

The first crucial feature of the laser control problem to underline is the orders of magnitude in time and space that are typically encountered here. The space scale is indeed that of an atom, say 10^{-10} m, but more importantly, the time scale is of the order of the femtosecond (10^{-15} s) and can even go down to the attosecond (10^{-18} s). As surprising as it may seem, laser fields can literally be “tailored” on these tiny timescales. They can involve huge intensities (10^{12} W/cm^{2} and above), and their shots can be cycled at 1 kHz: one can perform several thousand experiments in a minute. This ability changes the whole landscape of the control problem, for making an experiment is here far cheaper than running a numerical simulation. This has motivated the paradigm of closed-loop optimization, where the criterion to be optimized is evaluated on-the-fly on an experimental device. One of the current challenging issues for the mathematicians taking part in the field is to understand how to take advantage of a combined experimental/numerical strategy. In this respect, it is to be noted that the experimental side can come from on-the-fly experiments (how to decide what to do?), but may also come from the tremendous amount of data that can be (and actually is) stored from the billions of experiments done to this day (how to dig into this database?).

A second point to remark on is the way in which the control enters the problem: the control multiplies the state. Theoretically and numerically, this bilinear structure causes difficulties. Finally, one deals with open-loop control, for at least two reasons: first, the timescale on which the phenomenon takes place is too short for the current capabilities of electronic devices, which prevents closing the loop within one experiment; secondly, feedback control requires measuring something, which in a quantum framework means interacting with, and thus perturbing, the system itself. These two bottlenecks might be overcome in the future, but this will undoubtedly require a lot of theoretical and technical work.
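Schematically, in the standard dipole-coupling form (our notation, not specific to the report), the controlled dynamics reads:

```latex
i\,\frac{\partial \psi}{\partial t} = \bigl( H_0 - \mu\,E(t) \bigr)\,\psi ,
```

where $E(t)$ is the laser field (the control) and $\mu$ the dipole moment operator: the control $E(t)$ multiplies the state $\psi$, which is precisely the bilinear structure that makes the problem difficult.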

A third peculiarity regards the choice of admissible laser fields as controls: what types of fields E(t) should be allowed when setting up the control problem? This question leads to a dichotomy: one can choose either to restrict oneself to experimentally feasible fields, or to basically let the field be free, therefore allowing for very general laser fields, even those out of reach for contemporary technology. Both tracks may be followed. In particular, the second track, the most “imaginative” one (rather unusual in comparison to other contexts), can serve as a useful technical guide for building the lasers of tomorrow.

A final key issue is robustness. It is of course a standard fact in every control problem that the control obtained needs to be robust, for obvious practical reasons. The somewhat unusual feature in the present setting is that experiments turn out to be surprisingly robust with respect to all kinds of perturbations (noise, uncertainties in the measurements, ...). Clearly, there is something to be understood here at the theoretical level, e.g. by envisioning new modeling strategies that incorporate undesirable perturbations.

In computational quantum chemistry, as in most of our scientific endeavours, we pursue a twofold goal: giving the models a sound mathematical grounding, and improving the numerical approaches.

Existence results for the extended Kohn-Sham LDA (local density approximation) model, as well as for the two-electron Kohn-Sham GGA (generalized gradient approximation) model, have been obtained by A. Anantharaman and E. Cancès using the concentration-compactness method.

E. Cancès has addressed issues related to the modeling and simulation of local defects in periodic crystals. Computing the energies of local defects in crystals is a major issue in quantum chemistry, materials science and nano-electronics. In collaboration with M. Lewin (CNRS, Cergy), E. Cancès and A. Deleurence proposed in 2008 a new model for describing the electronic structure of a crystal in the presence of a local defect. This model is based on formal analogies between the Fermi sea of a perturbed crystal and the Dirac sea in Quantum Electrodynamics (QED) in the presence of an external electrostatic field. The justification of this model is obtained via a thermodynamic limit on the so-called supercell model. They have also introduced a variational method for computing the perturbation in a basis of precomputed maximally localized Wannier functions of the reference perfect crystal. E. Cancès and M. Lewin have pursued the analysis of this model and have used it to construct a rigorous mathematical derivation of the Adler-Wiser formula for the dielectric permittivity of crystals.

E. Cancès has also initiated a collaboration with Y. Maday and R. Chakir (University of Paris 6) on the numerical analysis of variational approximations of nonlinear elliptic eigenvalue problems. They provide *a priori* error estimates for variational approximations of the ground state energy, eigenvalue and eigenvector of such problems, focusing in particular on the Fourier spectral and pseudospectral approximations (for periodic problems) and on finite-element discretizations. Work in progress is concerned with the planewave approximation of the Thomas-Fermi-von Weizsäcker and Kohn-Sham LDA models.

In collaboration with F. Mauri and N. Mingo, physicists respectively from IMPMC (Paris 6 and 7) and CEA Grenoble, G. Stoltz has continued to study the reduction in thermal conductivity of carbon nanotubes in the presence of isotope disorder, using methods from quantum statistical physics. More precisely, the influence of the disorder structure (alloy material versus homogeneous slices) has been investigated.

In collaboration with C. Brouder (IMPMC, Paris 6 and 7) and G. Panati (University La Sapienza, Roma), G. Stoltz has also proved the Gell-Mann and Low formula for systems with degenerate ground states. This formula relates an eigenstate of an initial reference Hamiltonian to a perturbed one, using some adiabatic switching. The key point of the work has been to identify the directions within the initial degenerate space along which the switching can be performed. Physical implications of this work in the domain of quantum field theory have also been discussed.

E. Cancès and G. Stoltz have studied, in collaboration with several researchers in chemistry, the mathematical foundations of the so-called Optimized Effective Potential approach, in the context of the Kohn-Sham equations in Density Functional Theory. This method replaces the (exact) nonlocal exchange operator by some approximate local operator, optimal in some sense. They made precise the necessary optimality condition, and also studied the existence and uniqueness of the solution to the corresponding system of nonlinear partial differential equations in a simplified case (for the so-called Slater potential).

The domain decomposition method proposed by M. Barrault (now at EDF), E. Cancès, W. Hager (University of Florida), and C. Le Bris, originally designed to solve the linear subproblem in electronic structure calculations, has been successfully coupled with the nonlinear loop of the Hartree-Fock problem (self-consistent iterations). This work has been completed by H. Amor, under the supervision of G. Bencteux (EDF) and E. Cancès. Besides, test cases with large basis sets including polarization and diffuse atomic orbitals have confirmed the robustness of this approach to compute the ground state of extended linear molecules (polymers and nanotubes).

The PhD thesis of H. Galicher has been defended.

The extremely broad field of molecular dynamics is a domain in which the MICMAC project-team, originally more involved on the quantum chemistry side, has invested a lot of effort in recent years. Molecular dynamics may also be termed 'computational statistical physics', since the main aim is to numerically estimate average properties of materials as given by the laws of statistical physics. The project-team studies both the deterministic and the probabilistic techniques used in the field. On these topics, we benefited from funding from the ARC Hybrid and the ANR MEGAS (“MEthodes Géométriques et échantillonnage : Application à la Simulation moléculaire”). The habilitation thesis of T. Lelièvre has been defended.

Constant energy averages are often computed as long time limits of time averages along a typical trajectory of the Hamiltonian dynamics. One difficulty of such a computation is the presence of several time scales in the dynamics: the frequencies of some motions are very high (e.g. for the atomistic bond vibrations), while those of other motions are much smaller. Actually, fast phenomena are only relevant through their mean effect on the slow phenomena, and their precise description is not needed. Consequently, there is a need for time integration algorithms that take into account these fast phenomena only in an averaged way, and for which the time step is not restricted by the highest frequencies.
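The time-step restriction mentioned above can be seen on the simplest possible model; the sketch below (our own illustration, not the team's algorithms) integrates a harmonic oscillator with velocity Verlet and shows that the scheme is stable only when the product of the highest frequency and the time step is small enough (omega*dt < 2 for this scheme):

```python
# Hedged toy illustration: the time step of a standard integrator is restricted
# by the highest frequency present. Velocity Verlet applied to the harmonic
# oscillator x'' = -omega**2 * x is stable if and only if omega*dt < 2.

def verlet_max_amplitude(omega, dt, steps=200):
    x, v = 1.0, 0.0
    a = -omega**2 * x
    amp = abs(x)
    for _ in range(steps):
        x += dt * v + 0.5 * dt * dt * a   # position update
        a_new = -omega**2 * x
        v += 0.5 * dt * (a + a_new)       # velocity update (averaged forces)
        a = a_new
        amp = max(amp, abs(x))
    return amp

omega = 10.0                       # a "fast" bond-vibration frequency
stable = verlet_max_amplitude(omega, dt=0.5 / omega)    # omega*dt = 0.5: bounded
unstable = verlet_max_amplitude(omega, dt=2.1 / omega)  # omega*dt = 2.1: blows up
print(stable, unstable)
```

This is exactly why algorithms that treat the fast motions only through their averaged effect are attractive: they allow time steps dictated by the slow motions of interest rather than by the fastest vibration.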

We collaborate on this subject with F. Castella, P. Chartier and E. Faou from INRIA Rennes, with the funding of ANR MEGAS and ARC Hybrid.

In many cases, the dynamics of the system under study is restricted to some submanifold of the whole accessible space. A famous instance is Hamiltonian dynamics, for which the energy of the system is constant. Hamiltonian dynamics is useful for computing average properties assuming ergodicity, but this dynamics is sometimes not ergodic due to spurious invariants of the motion. It is then appealing to resort to projected stochastic dynamics, which are more likely to destroy these spurious invariants. Such a scheme has been analyzed by E. Faou (INRIA Rennes) and T. Lelièvre, and rates of convergence have been provided.

Despite the success of stochastic techniques, deterministic sampling methods, such as the Nosé-Hoover dynamics, are still often used in the applied community to compute canonical averages. In collaboration with M. Luskin and R. Moeckel (University of Minnesota), F. Legoll has studied the Nosé-Hoover dynamics when applied to completely integrable systems. Using averaging and KAM techniques, it has been shown that the dynamics is actually not ergodic with respect to the Gibbs measure. This extends a previous work that addressed the simple case of the one-dimensional harmonic oscillator.

For large molecular systems, the information of the whole configuration space may be summarized in a few coordinates of interest, called reaction coordinates. An important problem in chemistry or biology is to compute the effective energy felt by those reaction coordinates, called the free energy. T. Lelièvre and G. Stoltz, together with M. Rousset, are currently finishing a review book on free energy computations.

Besides this review work, we continued our systematic studies on the new class of adaptive methods:

The Adaptive Biasing Force (ABF) method is a stochastic algorithm used to compute this free energy. It is based upon a nonlinear dynamics, which uses the mean force as a biasing force to prevent the system from being trapped in metastable regions. The nonlinearity in the dynamics comes from a conditional expectation computed with respect to the solution. The convergence of the algorithm can be accelerated by using multiple walkers, each following similar dynamics but driven by independent Brownian motions. The use of multiple walkers allows for further improvement via a selection mechanism, whereby the walkers are weighted according to a given fitness function and resampled at fixed time intervals. T. Lelièvre and K. Minoukadeh, together with C. Chipot (University of Illinois, on leave from Université Henri Poincaré - Nancy 1), have shown the applicability of the method to realistic biological systems. Their work has in particular highlighted cases in which the standard ABF method, using a single walker, fails to converge within reasonable time scales;

in addition, work in progress by B. Dickson, T. Lelièvre, G. Stoltz and F. Legoll, in collaboration with P. Fleurat-Lessard (Département de chimie, ENS Lyon), aims to improve adaptive methods in which the biasing potential itself is updated (in contrast to the update of the derivative of this potential, as in ABF). This study is supported by the ANR Sire;

On a more applicative side, M. Hamdi currently studies the biological relevance of functionalized carbon nanotubes for targeted drug delivery, using the ABF methodology to compute the free energy profile and the transition mechanism associated with the penetration of the nanotube into the cell lipid membrane.

The thermodynamic integration method, and nonequilibrium methods in the Jarzynski fashion for Langevin dynamics, have also been extended.
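As a toy illustration of the ABF mechanism described above, the following sketch (entirely ours: the double-well potential, bin discretization and all parameters are illustrative assumptions, not the team's code) runs an overdamped Langevin walker whose reaction coordinate is the position itself; the running average of the force in each bin is used as a bias, so the walker ends up diffusing freely across the barrier:

```python
import numpy as np

# Hedged 1D toy of the Adaptive Biasing Force idea: accumulate the running
# average of the mean force along the reaction coordinate and subtract it,
# so the dynamics is no longer trapped in metastable wells.

rng = np.random.default_rng(0)
V_prime = lambda x: 4.0 * x * (x * x - 1.0)   # double well V(x) = (x^2 - 1)^2

nbins, lo, hi = 30, -1.5, 1.5
force_sum = np.zeros(nbins)                   # accumulated mean-force samples
count = np.zeros(nbins)

def bin_of(x):
    return min(nbins - 1, max(0, int((x - lo) / (hi - lo) * nbins)))

beta, dt, x = 10.0, 0.005, -1.0               # start in the left well
traj = []
for _ in range(40000):
    b = bin_of(x)
    force_sum[b] += V_prime(x)                # sample of the mean force in this bin
    count[b] += 1
    bias = force_sum[b] / count[b]            # current mean-force estimate
    drift = -V_prime(x) + bias                # biased (asymptotically flat) force
    x += drift * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
    x = min(max(x, lo), hi)                   # keep the walker in the box
    traj.append(x)

print(min(traj), max(traj))                   # both wells visited
```

Without the bias (drift equal to -V_prime(x) only), the walker would stay trapped in the left well for a time exponentially large in the barrier height; with the adaptive bias, both wells are visited within the simulated time.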

As a by-product of these free-energy studies, T. Lelièvre has obtained a new result for proving a logarithmic Sobolev inequality for a measure defined on R^{n}, assuming that a logarithmic Sobolev inequality holds for the marginal and the conditional measures associated to some function with values in R^{m} (with m < n). This theoretical result has practical interest in molecular dynamics, where this function is the reaction coordinate and where the above assumptions are often met in practice.

The free energy completely describes the statistics of the reaction coordinates. F. Legoll and T. Lelièvre have been working on the definition of a dynamics closed in these reaction coordinates. The problem hence amounts to reducing the dimension of a set of SDEs, from the full set of degrees of freedom to only a small subset of them. Encouraging numerical results have been obtained, along with estimates of the accuracy of the proposed effective dynamics (again using entropy techniques). Extensions of this work to more realistic molecular systems, as well as to solid materials, are currently under study.

The microscopic dynamics used to sample the configurations of the system are often trapped in metastable states. A major numerical issue is therefore the search for transition paths connecting metastable states. E. Cancès, F. Legoll, K. Minoukadeh and two of their collaborators at CEA Saclay have proposed an improvement to an existing eigenvector-following method, the Activation-Relaxation Technique nouveau (ARTn), for searching saddle points and transition pathways on a given potential energy surface. Local convergence and robustness of the algorithm have been established, and the new method has been successfully tested on point defects in body-centered cubic iron. The possible application of this algorithm to other large scale problems is currently being studied, in collaboration with D. Wales (Cambridge).

In addition, D. Pommier and T. Lelièvre are currently studying various numerical methods to compute reactive paths. Mathematically, this amounts to sampling trajectories which start in one metastable region and end in another.

The study of nonequilibrium systems in the case of mechanical shock waves has been pursued. A first work dealt with the computation of the so-called Hugoniot curve, which is the locus of all the thermodynamic states that can be attained from a given initial state through shocks of increasing strengths. The numerical method employed is an original adaptive scheme which improves on previous iterative techniques, and which can be used more generally to sample systems with a constraint fixed in average. We additionally investigated whether release waves behind shock fronts are isentropic; this was assessed by comparing the numerical results with those of equilibrium methods based on free-energy computation techniques.
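For reference, the Hugoniot curve is classically characterized by the Rankine-Hugoniot energy jump condition (textbook form, our notation; not the team's specific formulation):

```latex
e - e_0 = \tfrac{1}{2}\,(p + p_0)\,(v_0 - v) ,
```

linking the internal energy $e$, pressure $p$ and specific volume $v$ of the shocked state to the initial state $(e_0, p_0, v_0)$; sampling states satisfying such a relation on average is the role of the adaptive scheme mentioned above.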

This year, work has also started on the thermal conductivity of one-dimensional chains, benefiting from the presence of S. Olla (on leave from Université Paris Dauphine). In this classical setting, we can use nonlinear interaction potentials and perturb the dynamics of the system by some energy and momentum preserving noise. The aim is to determine whether this is enough to achieve a finite conductivity. This study is in some sense the classical counterpart of the quantum computations mentioned above.

Finally, we mention a PhD thesis starting under the cosupervision of T. Lelièvre and A. Ern, on the modelling of clays (funding from ANDRA, Agence Nationale pour la gestion des Déchets RadioActifs). The PhD student is R. Joubaud. One part of the project consists in modelling, at the molecular level, the diffusion of ions in clays, and we plan to use nonequilibrium methods to understand this process. This work involves F. Legoll, T. Lelièvre and G. Stoltz, within a collaboration with B. Rotenberg and P. Turq (Paris 6, Laboratoire PECSA).

Modeling the electrical response of fuel cells at the molecular level remains a challenge, characterized by length scales that are orders of magnitude greater than the sizes accessible to quantum chemistry simulations. In order to overcome these limitations, a comprehensive atom-continuum model has been developed, combining a quantum molecular description of the interfacial region with a polarizable-continuum representation of the electrolyte. This approach has been implemented in the Quantum-Espresso simulation package. The atom-continuum model has been applied to interpret electrochemical spectroscopy experiments, and other technologically relevant applications will follow, in collaboration with N. Marzari (Massachusetts Institute of Technology).

The project-team's interest closely follows the recent prospects opened by laboratory implementations of closed-loop optimal control. This is done in collaboration with the group of H. Rabitz (Princeton University).

The development of pulse-shaping techniques opens new ways to control atomic or molecular processes by laser fields. Many promising results have been obtained with a setup made of a pulse shaper controlled by an algorithm which, from the results of the preceding experiments, builds an improved new control field. Such algorithms lead to very efficient solutions but have some drawbacks. In particular, no insight into the control mechanism is gained from this approach, since little to no knowledge about the system is needed and the control field is not optimal by construction. On the theoretical side, optimal control theory (OCT) is a powerful tool to design electric fields that control quantum dynamics. Monotonically convergent algorithms are other efficient approaches to solve the optimality equations. They have been applied with success to a large number of controlled quantum systems in atomic or molecular physics and in quantum computing. These methods are flexible and can be adapted to different nonstandard situations encountered in the control of molecular processes. Among recent developments, we can cite the question of nonlinear interaction with the control field and the question of spectral constraints on the field. The latter problem is particularly important in view of experimental applications, since not every control field can be produced by pulse-shaping techniques. For instance, liquid crystal pulse shapers can only tailor a piecewise constant Fourier transform of the control field, in phase and in amplitude. Experimentally, the spectral amplitude and phase are discretized, e.g. into 640 points, which is the number of pixels in a currently used standard mask.
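The pixel-wise shaping described above can be mimicked numerically; in the sketch below (our own, with illustrative envelope and phase; only the 640-pixel figure comes from the text), a piecewise-constant spectral amplitude and phase are turned into a time-domain field by an inverse Fourier transform:

```python
import numpy as np

# Hedged sketch of a liquid-crystal pulse shaper: the control is specified as
# one complex value (amplitude and phase) per spectral pixel, and the resulting
# time-domain field is the inverse Fourier transform of that spectrum.

n_pixels = 640                                    # pixel count quoted in the text
freqs = np.linspace(0.0, 1.0, n_pixels)           # normalized spectral grid
amplitude = np.exp(-((freqs - 0.5) ** 2) / 0.01)  # Gaussian spectral envelope
phase = np.zeros(n_pixels)                        # flat phase: shortest pulse
spectrum = amplitude * np.exp(1j * phase)         # one complex value per pixel

field = np.fft.irfft(spectrum)                    # time-domain control field E(t)

# A quadratic spectral phase (a chirp) spreads the pulse in time:
chirped = np.fft.irfft(amplitude * np.exp(1j * 2000.0 * (freqs - 0.5) ** 2))
print(np.max(np.abs(field)) > np.max(np.abs(chirped)))
```

Adding the chirp stretches the pulse and lowers its peak amplitude at constant pulse energy, which is the kind of trade-off a shaper-in-the-loop optimization explores pixel by pixel.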

The control of quantum dynamics induced by an intense laser field continues to be a challenge for both experiment and theory. In this context, optimal control theory is an efficient tool for designing laser pulses able to control quantum processes. Different methods have been developed to solve the optimality equations, such as Lyapounov-like approaches.

A vast majority of works have considered a linear interaction between the quantum system and the electromagnetic field. This linear interaction corresponds, for molecular systems, to the first-order dipolar approximation of the permanent dipole moment. Due to the intensity of the field or to the particular structure of the problem, some systems require going beyond this approximation, e.g. for the control of molecular orientation and alignment of a linear molecule by non-resonant laser pulses. The natural question arises of whether one can apply these approaches to systems interacting nonlinearly with the field. This question has been answered in a situation where a quadratic term in the control is present. The theory works as expected for systems with controllable linearization; however, for systems that are globally but not locally controllable, it is proven that no continuous feedback exists. For these situations, we proposed two solutions: either a discontinuous feedback, or an averaging procedure that weakens the monotonic property of Lyapounov approaches.

Traditionally, numerical simulations consider some description of the interaction between the laser and the system, of which the most used is the *dipole approximation*, and perform optimizations with the laser intensity as the main variable. A different view of the problem has been taken: the evolution semigroup of unitary propagators is considered, and the resulting Hamiltonian is required to be consistent with the chosen approximation type. The specific optimization algorithm is now implemented and the first encouraging numerical results are presented in a submitted work .

The project-team has continued its theoretical and numerical efforts on the general topic of "passage from the atomistic to the continuum". This concerns theoretical issues arising in this passage, but also the development and improvement of numerical simulations coupling the two scales. The lecture notes review some of the numerical methods used in this context, along with some numerical analysis results. It also turns out that this topic shares many common features with the modelling of complex fluids (another domain in which the project-team has been strongly involved for many years), as explained in the review article .

In collaboration with X. Blanc (Paris 6) and C. Patz (WIAS, Berlin), C. Le Bris and F. Legoll addressed questions related to the finite temperature modeling of atomistic systems, and the derivation of coarse-grained descriptions. The starting observation is that, for atomistic systems at constant temperature, the relevant quantities are statistical averages of some functions (called observables in that context) with respect to the Gibbs measure. One particular case of interest is when the observable at hand does not depend on all the variables, but only on some of them (gathered in a region of interest, where some defects appear, for instance). In that case, a relevant quantity to compute is the free energy associated with these few degrees of freedom. In the one-dimensional setting, an efficient strategy that bypasses the simulation of the whole system has been proposed to compute this free energy, as well as averages of such observables. This strategy is based on a rigorous thermodynamic procedure. Encouraging results have been reported in . Recent efforts in the project-team aimed at extending the strategy to more complex cases. Promising results have been obtained in the 2D scalar case .
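
The notion of a free energy associated with a few degrees of freedom can be illustrated on a toy two-variable example (hypothetical potential, unrelated to the team's atomistic models): the free energy of the coarse variable is obtained by marginalizing the Gibbs measure over the remaining variable.

```python
import numpy as np

# Toy sketch: free energy of a coarse variable z in a two-particle Gibbs
# measure, obtained by marginalizing the other degree of freedom y by
# quadrature (hypothetical potential, unrelated to the team's models).
beta = 1.0
V = lambda z, y: 0.25 * z**4 - 0.5 * z**2 + 0.5 * (y - z) ** 2

y = np.linspace(-10.0, 10.0, 2001)       # quadrature grid for y
z = np.linspace(-2.0, 2.0, 201)          # grid for the coarse variable
Z, Y = np.meshgrid(z, y, indexing="ij")

# F(z) = -(1/beta) log  int exp(-beta V(z, y)) dy   (up to a constant)
F = -np.log(np.trapz(np.exp(-beta * V(Z, Y)), y, axis=1)) / beta
F -= F.min()                             # fix the additive constant

# Here the y-integral is Gaussian, so F(z) = z^4/4 - z^2/2 + const:
# a double well with minima at z = +/-1 and a barrier of height 1/4.
```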

Another situation of major interest, beyond the static setting, is the dynamical case. Some preliminary work, on some simple models, has been conducted by C. Le Bris, in collaboration with X. Blanc (Paris 6) and P.-L. Lions (Collège de France).

In collaboration with X. Blanc (Paris 6), C. Le Bris has studied the applicability of ideas based on filtering to the homogenization of elliptic partial differential equations. The bottom line is to modify the corrector problem by introducing a filtering function, in order to improve the efficiency of the method. Some popular methods, such as the oversampling method, can indeed be considered as special instances of such a general strategy. Encouraging numerical results, supported by a rigorous theoretical analysis, have been reported in the work , in periodic and quasi-periodic settings.

The project-team has also pursued its efforts in the field of stochastic homogenization of elliptic equations. An interesting case in that context is when the randomness comes as a *small* perturbation of the deterministic case. This situation can indeed be handled with a dedicated approach, which turns out to be far more efficient than the standard approach of stochastic homogenization, as explained in .

This case has been studied by C. Le Bris, in collaboration with P.-L. Lions (Collège de France) and X. Blanc (Paris 6). The analysis naturally gives rise to a numerical strategy, which has been studied and implemented by R. Costaouec, C. Le Bris and F. Legoll .

In the work mentioned above, the perturbation to the deterministic case is supposed to be small in a strong norm (that is, it is almost surely small). In , A. Anantharaman and C. Le Bris have extended this study to the case when the perturbation is small in a weaker norm (the case when only the *expectation* of the perturbation is assumed to be small, rather than the perturbation itself, is covered by that framework). The approach proves to be very efficient from a computational viewpoint. It is rigorously founded in a certain class of settings, and has been successfully tested numerically in more general settings. A. Anantharaman and C. Le Bris, in collaboration with E. Cancès, have started to address the theoretical issues related to these general settings.

The team has also addressed, from a numerical viewpoint, the case when the randomness is not small. In that case, using the standard homogenization theory, one knows that the homogenized tensor, which is a deterministic matrix, depends on the solution of a stochastic equation, the so-called corrector problem, which is posed on the *whole* space. This equation is therefore delicate and expensive to solve. In practice, the space is truncated to some bounded domain, on which the corrector problem is numerically solved. In turn, this yields a converging approximation of the homogenized tensor, which happens to be a *random* matrix. For a given truncation, R. Costaouec, C. Le Bris and F. Legoll, in collaboration with X. Blanc (Paris 6), have studied how to reduce the variance. Several strategies have been tested. Definite conclusions on the efficiency of the methods, as well as their range of applicability, are yet to be obtained. Nonetheless, very encouraging numerical results have already been obtained.
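
In the one-dimensional toy case, where the homogenized coefficient is explicitly the harmonic mean of the random coefficient, the truncation and one classical variance reduction device (antithetic variables; the text does not specify which strategies were actually tested) can be sketched as follows:

```python
import numpy as np

# Toy 1D sketch: in 1D, the homogenized coefficient is the harmonic mean
# of a(x, w).  It is approximated on a truncated domain of N i.i.d. cells,
# and antithetic variables reduce the variance of the Monte-Carlo
# estimator of this (random) approximation.
rng = np.random.default_rng(0)
N, M = 100, 2000                     # cells per realization, realizations

def A_N(u):
    """Truncated-domain approximation with cell coefficients a_i = 1 + u_i."""
    return 1.0 / np.mean(1.0 / (1.0 + u), axis=-1)

U = rng.random((M, N))               # U_i ~ Uniform(0, 1)
plain = A_N(U)                       # standard Monte-Carlo samples
anti = 0.5 * (A_N(U) + A_N(1.0 - U)) # antithetic pairing, same expectation

# Both estimators approach 1/E[1/a] = 1/log(2) ~ 1.4427 as N grows,
# but the antithetic samples have a much smaller empirical variance.
```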

From a numerical perspective, the Multiscale Finite Element Method is a classical strategy to address the situation when the homogenized problem is not known (e.g. in nonlinear cases), or when the scale of the heterogeneities, although small, is not considered to be zero (and hence the homogenized problem cannot be considered as an accurate enough model). The extension of this strategy to the stochastic case is currently studied by F. Thomines, as the first stage of his PhD thesis, with X. Blanc (Paris 6), C. Le Bris and F. Legoll.

Furthermore, still in the context of elliptic homogenization, S. Boyaval and C. Le Bris have studied the applicability of reduced-basis ideas to variational problems with stochastic parameters, in collaboration with Y. Maday (CNRS/UPMC/Brown), N.C. Nguyen and A.T. Patera (MIT). The motivation stems from the need to take into account many different random microstructures in the context of stochastic homogenization. One of the bottlenecks is that the solutions to a given partial differential equation for different stochastic parameters form a high-dimensional space. To address this difficulty, different approaches have been recently suggested in the literature on uncertainty quantification for stochastic partial differential equations. The combination of these approaches with the reduced-basis method has been tested and analyzed for a scalar (linear) elliptic problem with stochastic boundary conditions . A state-of-the-art review article on reduced basis techniques applied to stochastic problems has also been written.
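
The rationale of reduced-basis methods can be illustrated on a toy parametrized elliptic problem (an illustrative setup, unrelated to the cited works): an SVD (POD) of solution snapshots shows that the whole family of solutions is well approximated by a low-dimensional space.

```python
import numpy as np

# Toy sketch of the reduced-basis rationale: snapshots of the parametrized
# problem -( (1 + m x) u' )' = 1 on (0,1), u(0) = u(1) = 0, are compressed
# by an SVD; the fast singular-value decay shows that a handful of basis
# functions capture the solutions for all parameter values m.
n = 200
x = np.linspace(0.0, 1.0, n + 2)[1:-1]        # interior nodes
h = x[1] - x[0]

def solve(m):
    a = 1.0 + m * np.linspace(0.0, 1.0, n + 1)    # coefficient at interfaces
    # Standard 3-point finite-difference matrix for -(a u')'
    A = (np.diag(a[:-1] + a[1:])
         - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1)) / h**2
    return np.linalg.solve(A, np.ones(n))

snapshots = np.column_stack([solve(m) for m in np.linspace(0.1, 10.0, 50)])
s = np.linalg.svd(snapshots, compute_uv=False)    # rapidly decaying
```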

In the context of parabolic homogenization, A. Anantharaman has pursued the study of boundary layers in time (close to the initial time t = 0) and space (close to the domain boundaries), in collaboration with G. Allaire (CMAP) and E. Cancès. The idea is to add space, time and space-time boundary layer terms to the usual approximate solution (which is computed by solving the homogenized problem and the corrector problems), so that the difference between the exact solution and the approximate solution can be estimated, and more precisely controlled in interesting functional spaces. The main difficulty, which has yet to be overcome, is that the classical space boundary layers (usually defined in the stationary case) and the classical time boundary layers (usually defined in an infinite domain) are not compatible with one another. Nonetheless, progress has been made in the understanding of this issue through the use of Bloch waves.

In collaboration with J.-F. Gerbeau, T. Lelièvre has worked on adequate boundary conditions for the moving contact line in free surface flows, see . In a continuation of this work, A. Suzuki and T. Lelièvre are now considering applications of this free surface flow model to microfluidics.

In parallel, T. Lelièvre is working in collaboration with R. Joubaud and A. Ern, within the ANR METHODE, on applying the code developed over the past ten years for free surface flows to flows over bumps. The new difficulty is to find proper boundary conditions for free surface flows with inflow and outflow. Two approaches are followed: either finding "transparent boundary conditions", or using periodic boundary conditions.

In this field, the numerical analysis of discretizations of different models for non-Newtonian fluids has been pursued. We have considered:

models using *constitutive relations*, like the Oldroyd-B model, in which the non-Newtonian behaviour of the fluid is accounted for at the same macroscopic scale as the flow, by constitutive equations coupled to the Navier-Stokes equations, and

*micro-macro* models, like dumbbell models, in which a kinetic description of the particles diluted in the fluid explains the non-Newtonian behaviour of the suspension, as a coupling between the Navier-Stokes equations and averages of stochastic quantities computed from the solution to a Fokker-Planck equation.

An equivalence between the two models is only possible under very stringent assumptions. The former typically involve nonlinear equations in a low-dimensional space, whereas the latter involve a linear Fokker-Planck equation for a probability density functional in a high-dimensional space.

(i) Models using constitutive relations

The issue we have dealt with concerns free-energy-dissipative schemes for the Oldroyd-B model.

In , the previous analysis has been complemented. It is shown that free-energy-dissipative schemes admit solutions regardless of the formulation (with or without logarithm), and for any time step in the backward Euler time discretization. However, the uniqueness of the solutions is no longer ensured for too large time steps. In , only the schemes based on the logarithm formulation are shown to dissipate the free energy for all solutions.

The stability of finite element discretizations for the constitutive equations is still being investigated:

a collaboration involves T. Lelièvre and S. Boyaval with R. Kupferman (The Hebrew University of Jerusalem, IL) and M. Hulsen (Technische Universiteit Eindhoven, NL) regarding numerical simulations using new schemes for benchmark flows,

a collaboration involves S. Boyaval with J.W. Barrett (Imperial College, London, UK) regarding the extension of the work to other constitutive equations (the FENE-P model).

(ii) Micro-macro models

The work addresses the numerical simulation of micro-macro models by evolving a large number of stochastic processes describing the time evolution of the particles diluted in the fluid, rather than by computing the probability density functional solution to a Fokker-Planck equation. Two numerical methods are proposed in this work to reduce the variance in the Monte-Carlo evaluation of the corresponding expected values. The approaches are based on the Reduced-Basis method, which speeds up the computation of solutions to a parametrized partial differential equation at many parameter values. It also has many possible extensions to other applications, such as the calibration of the volatility in finance (see ) or Bayesian statistics (work in progress).
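
The Monte-Carlo setting can be sketched on a 1D Hookean-dumbbell-type toy model, with a classical control variate (an equilibrium process driven by the same Brownian increments, of known second moment) standing in for the variance reduction idea; this is a simpler textbook device, not the reduced-basis approach of the cited work:

```python
import numpy as np

# Toy 1D sketch of the micro-macro Monte-Carlo setting: many realizations
# of a Hookean-dumbbell-type SDE, with a control variate driven by the
# SAME Brownian increments (all parameter values are arbitrary).
rng = np.random.default_rng(0)
M, dt, n_steps, eps = 5000, 0.01, 100, 0.2

X = rng.standard_normal(M)     # start at equilibrium, N(0, 1)
Xeq = X.copy()                 # control variate: same start, no forcing
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(M)
    X = X + (eps - 0.5) * X * dt + dW    # forced dynamics (elongation eps)
    Xeq = Xeq - 0.5 * Xeq * dt + dW      # equilibrium: E[Xeq^2] stays ~1

plain = X**2                      # naive samples of the "stress" E[X_t^2]
controlled = X**2 - Xeq**2 + 1.0  # same expectation, much smaller variance
```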

Another work in progress, by T. Lelièvre, G. Samaey and V. Legat (Université catholique de Louvain, BE), aims at testing a numerical method based on coarse time-stepping schemes for the FENE model. The objective is twofold: discuss the existence and accuracy of closure approximations for the FENE model (a widely used micro-macro model for dumbbells), and compare this numerical approach with standard ones in terms of computational efficiency.

On a general level, a pedagogical mathematical introduction to the subject of complex fluid modelling and simulation has been written by C. Le Bris and T. Lelièvre. Also, as a follow-up to theoretical contributions on the theory of ordinary and stochastic differential equations originally motivated by the complex fluids context, C. Le Bris and M. Hauray (Paris 6) have pursued the analysis of differential equations with irregular coefficient fields in .

With D. Pommier (within the PhD thesis of Jose Infante Acevedo), we propose a method for pricing multi-asset options. The option pricing problem can be reduced to the computation of multi-dimensional integrals. Deterministic methods (as opposed to Monte Carlo methods) suffer from the so-called *curse of dimensionality*, *i.e.*, the exponential increase in the number of unknowns as the dimension *d* increases. In order to reduce the complexity, we consider a nonlinear approximation approach, presented in , to compute numerical integrals in the Fourier domain. This method can readily be applied to problems under various asset price dynamics for which the characteristic function (*i.e.* the Fourier transform of the probability density function) is available. This is the case for models from the class of regular affine processes, which also includes some jump-diffusion models. We compare our method to the sparse combination technique proposed by Leentvaar and Oosterlee.
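
To fix ideas, here is a one-dimensional sketch of Fourier-domain pricing from a characteristic function (the classical Carr-Madan damped-transform formula, not the nonlinear approximation method discussed above), under Black-Scholes so that the closed-form price is available for comparison:

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

# Carr-Madan pricing of a European call from the characteristic function
# of the log-price, checked against the Black-Scholes closed form.
S0, K, r, sigma, T, alpha = 100.0, 100.0, 0.05, 0.2, 1.0, 1.5

def cf(u):
    """Characteristic function of ln S_T under Black-Scholes."""
    m = log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * m - 0.5 * sigma**2 * T * u**2)

k = log(K)
v = np.linspace(0.0, 100.0, 4000)
psi = exp(-r * T) * cf(v - 1j * (alpha + 1)) / (
    alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
price = exp(-alpha * k) / pi * np.trapz(np.real(np.exp(-1j * v * k) * psi), v)

# Closed-form Black-Scholes reference
N = lambda u: 0.5 * (1.0 + erf(u / sqrt(2.0)))
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
bs = S0 * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))
# price and bs agree to about 1e-3
```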

With A. Suzuki, we are working on the convergence of the method, and on its application to molecular dynamics, more precisely to the evaluation of the so-called committor function, which is very important to understand various properties of "reactive trajectories", namely trajectories which start in a metastable state and end in another one.

With E. Cancès, within the PhD thesis of V. Ehrlacher, we are working on extensions of the method to nonlinear problems, and in particular to contact problems, with applications to uncertainty propagation in solid mechanics.

S. Olla is currently developing his work in several research directions:

Heat Transport in a Weakly Interacting Anharmonic Crystal, in collaboration with C. Liverani. The program is to derive the heat equation and Fourier's law of conduction from the microscopic Hamiltonian dynamics of chains of anharmonic oscillators, in a diffusive rescaling of space and time. A preliminary result proves an autonomous dynamics for the energies of the particles in the appropriate long-time scaling limit.

Work and entropy for an anharmonic chain in the hyperbolic macroscopic space-time scale, with N. Even. We study the macroscopic effect, in a hyperbolic space-time scaling limit, of the application of a force at the boundary of a chain of anharmonic oscillators.

Einstein relation and linear response for a random walk in a random Galton-Watson tree, with G. Ben Arous and O. Zeitouni. The program is to compute effective macroscopic velocities for particles in random environments driven by an external force (gravity, electric field, etc.).

Many research activities of the project-team are conducted in close collaboration with private or public companies: EADS, Electricité de France and the Commissariat à l'Energie Atomique for computational chemistry, molecular dynamics and the multiscale simulation of solids. The project-team is also supported by the US Navy and the US Air Force for multiscale simulations of random materials.

The project-team is shared between INRIA, Ecole Nationale des Ponts et Chaussées and Paris Dauphine.

The project-team is involved in seven ANR projects.

The first one, the ANR MEGAS, has been accepted in 2009. Its aim is to study several methods for numerical simulation, with an emphasis on sampling methods. It includes four research teams: the INRIA project IPSO in Rennes, the INRIA project SIMPAF in Lille, the eDAM team in Nancy (chemistry), and our team. The scientist in charge is Tony Lelièvre.

The second one (ANR “Calcul intensif et grilles de calcul” LN3M, scientist in charge F. Jollet, CEA-DAM) aims at developing new numerical methods and software for the multiscale modeling of materials.

The third one (ANR “non thématique” ACCQUAREL, scientist in charge G. Turinici), which includes teams from the Dauphine, Paris VI and Cergy universities, focuses on relativistic quantum theory.

The fourth one is the C-QUID ANR project (coordinated by J.-M. Coron), in which G. Turinici leads the Paris Dauphine team.

The fifth one is the ANR Parmat (scientist in charge Guy Bencteux (EDF)).

The sixth one (ANR “Calcul intensif et grilles de calcul” SIRE, scientist in charge Ph. Sautet, ENS Lyon) focuses on the simulation of chemical reactivity at interfaces.

The seventh one is the ANR METHODE on Hydrological modeling (scientist in charge S. Cordier (Université d'Orléans)).

In addition, the team is participating in

the ARC Hybrid: this collaborative research action involves INRIA teams from Rennes (IPSO), Lille (SIMPAF), Sophia-Antipolis (TOSCA) and our project-team. The purpose of the action is to study theoretical models and numerical methods mixing deterministic and stochastic aspects in the context of molecular simulation.

the Big MC project: the project is focused on the study of Monte-Carlo methods for high-dimensional problems, with typical applications in financial mathematics, Bayesian statistics, and computational statistical physics. Three nodes participate in this project: one research team at Institut TELECOM, another at CEREMADE, University Paris Dauphine, and a third at University Paris-Est (including two members of our team). The coordinator is Gersende Fort (TELECOM).

the GdR Quantum dynamics: this interdisciplinary research network is focused on physical and mathematical problems related to the time evolution of quantum systems (transport problems, nonequilibrium systems, etc.).

G. Turinici is associate member of the "Optimization with PDE Constraints (OPTPDE)" project of the ESF.

We continue our long-standing collaboration with Herschel Rabitz at Princeton University on questions related to laser control. The project-team has been awarded a PICS CNRS-NSF grant for a collaboration between Princeton University and the Laboratoire CEREMADE (Paris Dauphine). The collaboration has been formalized through the "Associated Teams" INRIA program, MicMac and Princeton University being part of the "OMQP" joint initiative.

E. Cancès is

co-Editor-in-chief (with P. Del Moral and J.-F. Gerbeau) (2005-) of ESAIM Proc.

a member of the editorial boards of Mathematical Modelling and Numerical Analysis (2006-) and of the SIAM Journal on Scientific Computing (2008-).

E. Cancès has co-organized the following events in 2009:

the Workshop on continuum modelling of biomolecules (Beijing, Sept. 14-17, 2009), and

with F. Legoll, a mini-symposium on Numerical methods and their applications in molecular simulation, ICNAAM 2009 conference, Greece (Rethymno, Crete), September 18-22, 2009.

C. Le Bris is co-Editor-in-chief (with A.T. Patera, MIT) (2005-) of Mathematical Modelling and Numerical Analysis. He is editor-in-chief of Applied Mathematics Research Express (2003-). He is a member of the editorial boards of Archive for Rational Mechanics and Analysis (2004-), COCV (Control, Optimization and Calculus of Variations) (2003-), Mathematics Applied in Science and Technology (2006-), Networks and Heterogeneous Media (2005-), Nonlinearity (2005-), Review of Mathematical Science (2006-), and Journal de Mathématiques Pures et Appliquées (2009-). He is a member of the editorial boards of the monograph series Mathématiques et Applications (Springer, 2008-) and Modeling, Simulation and Applications (Springer, 2009-).

C. Le Bris has been

a member of the organizing committee of the IMA thematic year
*Mathematics and Chemistry*, Minneapolis 2008-2009,

a Distinguished Ordway Visitor 2008-2009, School of Mathematics, University of Minnesota.

awarded the **Aisenstadt Chair 2009-2010** from the Université de Montréal, and has delivered a series of lectures there in October 2009.

C. Le Bris is a member of

the Scientific Program Committee of ICIAM 2011, Vancouver, Canada,

the scientific board of ENPC, 2008- (nominated as representative of the research scholars),

the “Comité d'experts” for the “Fondation de Recherche pour l'Aéronautique et l'Espace”,

the “Comité d'animation du domaine thématique Mathématiques appliquées, calcul et simulation” at INRIA,

the organizing committee of the international conference "Dynamical Analysis of Molecular Systems", Edinburgh, June 28-July 2, 2010,

(as Co-chair) the ESF-EMS international conference on "Highly Oscillatory Problems", 13–17 September 2010, Isaac Newton Institute for Mathematical Sciences, Cambridge.

C. Le Bris and T. Lelièvre have co-organized a workshop on numerical methods in rheology, Ecole des Ponts, January 2009.

F. Legoll has co-organized, with H. Ben Dhia, a mini-symposium on "Multimodel and Multiscale Approaches in Solid Mechanics: Algorithms and Applications Advances", 10th US National Congress on Computational Mechanics, Columbus, July 16-19, 2009.

S. Olla is

a member of the editorial board of The Annals of Probability (2009-)

the scientist in charge of the agreement USP-COFECUB UC103/06: Modèles Stochastiques, Génomique et Phonétiques (2005-2009).

a member of the Scientific Board of the agreement GREFI-MEFI between CNRS and INDAM (Istituto Nazionale di Alta Matematica, Italy).

the local coordinator of the project ANR LHMSHE (programme blanc 2007), *Limites hydrodynamiques et mécanique statistique hors équilibre*, at the University of Paris Dauphine.

a member of the steering committee of the Paris Graduate School, a project sponsored by the Mathematical Foundation of Paris.

G. Turinici has been a member of

the organizing committee of the workshop "Quantum control and coherence" held in March 2009 within the thematic year *Mathematics and Chemistry*, Minneapolis 2008-2009,

the organizing committee of the workshop "Mathematical approaches in optimization, modellisation and control", Iasi, Romania, May 7, 2009.

Analyse, cours à l'Ecole Nationale des Ponts et Chaussées (A. Anantharaman, S. Boyaval, E. Cancès, F. Legoll),

Analyse numérique et Optimisation (L3), cours et TD ESIEE (S. Boyaval)

Simulation moléculaire: aspects théoriques et numériques, cours de M2, Université Paris 6 (E. Cancès, M. Lewin),

Analyse numérique et optimisation, PC du cours de G. Allaire et P.-L. Lions, Ecole Polytechnique (E. Cancès, C. Le Bris),

Analyse 1, cours à l'Université Paris Dauphine (A. Grigoriu) (TD 40H, L1)

Mathématiques des modèles multi-échelles, cours à l'Ecole Nationale des Ponts et Chaussées (F. Legoll, M. Lewin)

Systèmes multiéchelles, cours de M2, Université Paris 6 (F. Legoll)

Probabilités et applications, (42h), Cours Ecole Nationale des Ponts et Chaussées (T. Lelièvre),

Méthodes déterministes en mathématiques financières, (42h), Cours Ecole Nationale des Ponts et Chaussées (T. Lelièvre),

Modéliser Programmer Simuler, (27h), Cours Ecole Nationale des Ponts et Chaussées (T. Lelièvre),

Object Oriented Programming: C++, cours et cours d'été à l'Université Paris 1 - Panthéon-Sorbonne (K. Minoukadeh),

Analyse Numérique, TP de MATLAB à l'ISBS (K. Minoukadeh),

Micro-informatique: logiciels scientifiques, cours à l'Ecole Nationale des Ponts et Chaussées (S. Boyaval, K. Minoukadeh, R. Costaouec),

Algèbre, TD à l'université Paris-Est Marne-La-Vallée (R. Roux) (36h, L1),

Analyse, TD à l'université Paris-Est Marne-La-Vallée (R. Roux) (36h, L1),

Outils Mathématiques, TD à l'IUP-GSI de l'Université Paris-Est Marne-La-Vallée (R. Roux) (68h, L3),

Analyse spectrale, cours à l'Ecole Nationale des Ponts et Chaussées (G. Stoltz),

Calcul scientifique, cours à l'Ecole Nationale des Ponts et Chaussées (G. Stoltz),

Simulation numérique et méthodes de changement d'échelles, Master SMCD, Ecole des Ponts ParisTech (G. Stoltz),

Algèbre linéaire 3, cours à l'Université Paris Dauphine (G. Turinici) (21H, L2),

Analyse numérique: évolution, cours à l'Université Paris Dauphine (G. Turinici) (20H cours, 40H TD, M1)

Introduction à l'analyse numérique des EDP, cours à l'Université Paris Dauphine (G. Turinici) (30H, M2),

Méthodes numériques en finance, cours à l'Université Paris Dauphine (G. Turinici) (9H, M2),

Approches quantitatives et numériques des stratégies financières, cours à l'Université Paris Dauphine (G. Turinici) (21H, M2).

Members of the project-team have delivered lectures in the following seminars, workshops and international conferences:

A. Anantharaman, PhD students seminar at Laboratoire Jacques-Louis Lions, Université Paris VI, January 2009.

A. Anantharaman, CIMPA-UNESCO-Egypt School "Recent developments in the theory of elliptic partial differential equations", AAST, Alexandria, January 26th - February 3rd, 2009.

A. Anantharaman, 12è Rencontre du Nonlinéaire, IHP, March 11th-13th, 2009.

A. Anantharaman, Journées du GdR MASCOT-NUM, Université Paris XIII & IHP, March 18th-20th, 2009.

A. Anantharaman, 33rd Conference on Stochastic Processes and Their Applications, TU Berlin, July 27th-31st, 2009.

S. Boyaval, weekly seminar, LIMSI, February 2009,

S. Boyaval, GDR MASCOT, IHP, Paris, March 2009,

S. Boyaval, Minisymposium "Reduced-Basis methods" at ICOSAHOM 09 in Trondheim, Norway, June 2009,

S. Boyaval, Minisymposium "Advances in numerical methods for non-Newtonian flows" at 8th Enumath 2009 in Uppsala, Sweden, June 2009,

S. Boyaval, MoRePaS 09 in Muenster, Germany, September 2009,

E. Cancès, EPSRC symposium workshop on computational PDEs, Warwick, January 11-17, 2009.

E. Cancès, IPAM workshop on computational kinetic transport and hybrid methods, Los Angeles, March 30-April 3, 2009

E. Cancès, MAFELAP conference, London, June 9-12, 2009

E. Cancès, LN3M workshop, Lyon, September 28-30, 2009

E. Cancès, weekly seminar of the mathematics department, Freie Universität Berlin, April 2009

E. Cancès, weekly seminar of the mathematics department, Oxford University, February 2009

E. Cancès, weekly seminar of the mathematics department, University of California Santa Barbara, April 2009

E. Cancès, weekly seminar of the mathematics department, University of California San Diego, April 2009

E. Cancès, weekly seminar of the mathematics department, Technische Universität München, April 2009

E. Cancès, Ecole Normale Supérieure de Cachan, September 2009

E. Cancès, weekly seminar of the Institute of computational mathematics, Chinese Academy of Sciences, September 2009

E. Cancès, weekly seminar of the chemistry department, University of Marne-la-Vallée, November 2009

R. Costaouec, “Stochastic Processes and their Applications”, Technische Universität, Berlin, July 27-31, 2009.

R. Costaouec, SIAM Conference on Analysis of Partial Differential Equations, Miami, December 7-10, 2009.

I. Dabo, 14th Total Energy Workshop, ICTP, Trieste, January 2009

I. Dabo, IMPMC seminar series, IMPMC, Paris, April 2009

I. Dabo, CEA seminar series, CEA, Bruyères-le-Chatel, June 2009

A. Grigoriu, "Mathematical approaches in optimization, modellisation and control", Iasi, Romania, May 2009

A. Grigoriu, Workshop "Coherence, Control, and Dissipation", IMA, University of Minnesota, Minneapolis, USA, March 2-6, 2009

A. Grigoriu, "The 28th IASTED International Conference on Modelling, Identification and Control", February 16 - 18, 2009, Innsbruck, Austria

C. Le Bris, Boeing Distinguished Colloquium, University of Washington at Seattle, January 2009

C. Le Bris, Carnegie Mellon University, April 2009

C. Le Bris, Solid Mechanics Seminar, University of Minnesota, April 2009

C. Le Bris, “Frontiers of Scientific computing” Seminar, Louisiana State University, April 2009,

C. Le Bris, “ICES Seminar Series”, University of Texas at Austin, April 2009

C. Le Bris, Colloquium Jacques Morgenstern, Sophia-Antipolis, June 2009,

C. Le Bris, Series of lectures, 'Multiscale problems', University of Santiago de Compostela, June 2009.

C. Le Bris, plenary lecture, ENUMATH, Uppsala, Sweden, June 29- July 3, 2009.

C. Le Bris, Workshop “Mathematical Theory and Computational Methods in Materials Sciences”, August 10 – 14, 2009, Singapore.

C. Le Bris, Workshop “New Trends in Model Coupling. Theory, Numerics and Applications,” 2-4 September, 2009, Paris.

C. Le Bris, Workshop “Modeling and Simulation of Multi-Scale and Multi-Physics Systems”, Leuven, Belgium, September 8-9, 2009.

C. Le Bris, plenary lecture, Congress XXI CEDYA (Congreso de Ecuaciones Diferenciales Y Aplicaciones) - XI CMA (Congreso de Matematica Aplicada), September 21-25, 2009, Ciudad Real (Spain),

C. Le Bris, 'Aisenstadt Chair', Université de Montréal, October 2009.

C. Le Bris, Séminaire IRSAMC Toulouse, November 2009,

C. Le Bris, Bath Colloquium Series “Landscapes in Mathematical Science”, Bath, UK, November 2009

C. Le Bris, Workshop “Numerical Analysis of Multiscale Computations”, Banff International Research Station for Mathematical Innovation and Discovery (BIRS), December 6-11, 2009.

F. Legoll, IMA seminar on mathematics and chemistry, Minneapolis, 25th February 2009

F. Legoll, Séminaire d'analyse numérique, EPFL, Lausanne, 22nd April 2009

F. Legoll, IMA seminar on mathematics and chemistry, Minneapolis, 13th May 2009

F. Legoll, IMA tutorial Methods of molecular simulation, Minneapolis, May 15-16, 2009

F. Legoll, IMA workshop on Molecular simulations: algorithms, analysis and applications, Minneapolis, May 18-22, 2009

F. Legoll, OxMOS workshop on Multiscale models in solid mechanics, Oxford, June 3rd, 2009

F. Legoll, workshop on Computational multiscale methods, Oberwolfach, June 15-19, 2009

F. Legoll, Capstone conference, Warwick, June 28 - July 2, 2009

F. Legoll, 2nd annual conference of EPSRC network Mathematical challenges of molecular dynamics, Bath, July 13-15, 2009

F. Legoll, 10th US National Congress of Computational Mechanics, Columbus, July 16-19, 2009

F. Legoll, Cecam workshop on deterministic thermostats, Lausanne, July 27-29, 2009

F. Legoll, workshop on PDE and Materials, Oberwolfach, September 14-18, 2009

F. Legoll, ICNAAM 2009 conference, Rethymno, Greece, September 17-22, 2009

F. Legoll, Workshop LN3M, Lyon, September 28-29, 2009

F. Legoll, Séminaire de Mathématiques Appliquées, Collège de France, December 4th, 2009

T. Lelièvre, Mathematics seminar at the Imperial College, London, March 2009

T. Lelièvre, Workshop Adaptivity, robustness and complexity of multiscale algorithms, ICMS, Edinburgh, April 2009.

T. Lelièvre, Third Conference on Numerical Methods in Finance, Ecole des Ponts, Paris, April 2009.

T. Lelièvre, IMA Tutorial: Methods of Molecular Simulation, Minneapolis, May 2009.

T. Lelièvre, IMA seminar on Mathematics and Chemistry, Minneapolis, May 2009.

T. Lelièvre, Meeting on PDEs, Stochastic Analysis and Simulation of Processes, Sophia-Antipolis, June 2009.

T. Lelièvre, Plenary speaker at the EPSRC Symposium Capstone Conference, Warwick, 5 June 2009.

T. Lelièvre, Warwick seminar on Applied Mathematics and Statistics, October 2009.

T. Lelièvre, Workshop Theory and Numerics for Kinetic Equations, Saarbrücken, November 2009.

T. Lelièvre, Workshop BIRS on Numerical Analysis of Multiscale Computations, Banff, December 2009.

K. Minoukadeh, Neuvième forum des jeunes mathématiciennes, IHP, Paris, November 6-7, 2009.

K. Minoukadeh, Workshop LN3M 2009, ENS Lyon, September 28-30, 2009.

S. Olla, France-Brazil Conference, Rio de Janeiro, Brazil, September 10, 2009

S. Olla, Probability seminar, University of Minnesota, Minneapolis, USA, October 9, 2009.

S. Olla, Probability seminar, University of Budapest, Hungary, November 19, 2009.

S. Olla, Workshop on Atomistic Models of Solids, University of Oxford, December 7, 2009.

S. Olla, Material Theories, Oberwolfach, December 14-19, 2009.

G. Stoltz, Mathematical methods for ab-initio quantum chemistry, Nice, France, October 15-16, 2009

G. Stoltz, ICNAAM 2009, Rethymno, Greece, September 17-22, 2009

G. Stoltz, weekly seminar of the Université de Lille, June 2009

G. Stoltz, seminar of the ANR BigMC, Paris, June 2009

G. Stoltz, IMA seminar on Mathematics and Chemistry, Minneapolis, May 2009

G. Stoltz, ICMS workshop “Adaptivity, robustness and complexity of multiscale algorithms”, Edinburgh, United Kingdom, March 2009

G. Turinici, "The 28th IASTED International Conference on Modelling, Identification and Control", Innsbruck, Austria, February 16-18, 2009

G. Turinici, "Mathematical approaches in optimization, modellisation and control", Iasi, Romania, May 7, 2009

G. Turinici, Workshop "Coherence, Control, and Dissipation", IMA, University of Minnesota, March 2-6, 2009

G. Turinici, seminar, Politecnico di Milano, January 2009

Members of the project-team have presented posters in the following seminars, workshops and international conferences:

K. Minoukadeh, FOMMS 2009, Blaine, WA, July 12-16, 2009.

K. Minoukadeh, Molecular Simulations: Algorithms, Analysis and Applications, IMA, University of Minnesota, Minneapolis, MN, May 18-22, 2009.

K. Minoukadeh, Rare Events in High-Dimensional Systems, IPAM, UCLA, Los Angeles, CA, February 23-27, 2009.

G. Stoltz, IPAM workshop “Rare Events”, Los Angeles, February 2009

Members of the project-team have participated (without giving talks or presenting posters) in the following seminars, workshops and international conferences:

A. Anantharaman, SIAM Conference on Analysis of Partial Differential Equations, Miami, December 7-10, 2009.

R. Roux, 33rd Conference on Stochastic Processes and Their Applications, Berlin, Germany, July 2009.

In addition to the above, some members of the team have been invited for stays at institutions abroad:

E. Cancès, Brown University, Providence, USA, July-August 2009.

C. Le Bris, Institute for Mathematics and its Applications (IMA), University of Minnesota, Minneapolis, USA, during the academic year 2008-2009.

F. Legoll, Institute for Mathematics and its Applications, University of Minnesota, May 2009.

S. Olla, Technical University of Budapest, Hungary, within the French-Hungarian collaboration project “Balaton”, November 2009.