2022
Activity report
Project-Team
ASCII
RNSR: 201923478S
In partnership with:
CNRS, Institut Polytechnique de Paris
Team name:
Analysis of Stochastic Cooperative Intelligent Interactions
In collaboration with:
Centre de Mathématiques Appliquées (CMAP)
Domain
Applied Mathematics, Computation and Simulation
Theme
Stochastic approaches
Creation of the Project-Team: 2019 November 01

Keywords

Computer Science and Digital Science

  • A6.1.2. Stochastic Modeling
  • A6.1.3. Discrete Modeling (multi-agent, people centered)
  • A6.2.2. Numerical probability
  • A6.2.3. Probabilistic methods
  • A6.2.6. Optimization
  • A6.2.7. High performance computing
  • A6.3.4. Model reduction
  • A6.3.5. Uncertainty Quantification

1 Team members, visitors, external collaborators

Research Scientists

  • Denis Talay [Team leader, INRIA, Senior Researcher, HDR]
  • Quentin Cormier [Inria, Researcher, from Oct 2022]
  • Carl Graham [CNRS, Researcher]

Faculty Members

  • Josselin Garnier [LIX, Professor]
  • Nizar Touzi [LIX, Professor]

PhD Students

  • Leila Bassou [LIX]
  • Guillaume Chennetier [EDF]
  • Paul Lartaud [Ecole Polytechnique]
  • Nathan Sauldubois [Ecole Polytechnique]

Technical Staff

  • Maxime Colomb [INRIA, Engineer]
  • Nicolas Gilet [INRIA, Engineer]

Administrative Assistants

  • Marie Enee [INRIA]
  • Julienne Moukalou [INRIA, from May 2022]

2 Overall objectives

The stochastic particles studied in Ascii's research behave like agents whose interactions are stochastic and intelligent: these agents seek to cooperate optimally towards a common objective by solving strategic optimisation problems. Ascii's overall objective is to develop the theoretical and numerical analysis of such systems, with target applications in economics, neuroscience, physics, biology, and stochastic numerics. To the best of our knowledge, this challenging objective is quite innovative.

In addition to the modelling challenges raised by our innovative approaches to handle intelligent multi-agent interactions, we develop new mathematical theories and numerical methods to deal with interactions which, in most of the interesting cases we have in mind, are irregular, non-Markovian, and evolving. In particular, original and non-standard stochastic control and stochastic optimization methodologies are being developed, combined with original specific calibration methodologies.

To reach our objectives, we combine various mathematical techniques coming from stochastic analysis, partial differential equation analysis, numerical probability, optimization theory, and stochastic control theory.

3 Research program

Concerning particle systems with singular interactions, in addition to the convergence to mean-field limits and the analysis of convergence rates of relevant discretizations, one of our main challenges will concern the simulation of complex, singular and large scale McKean-Vlasov particle systems and stochastic partial differential equations, with a strong emphasis on the detection of numerical instabilities and potentially large approximation errors.

The determination of blow-up times is also a major issue for spectrum approximation and criticality problems in neutron transport theory, Keller-Segel models for chemotaxis, financial bubble models, etc.

Reliability assessment for power generation systems or subsystems is another target application of our research. For such complex systems, standard Monte Carlo methods are inefficient because of the difficulty of appropriately simulating rare events. We thus develop algorithms based on particle filter methods combined with suitable variance reduction methods.

Exhibiting optimal regulation procedures in a stochastic environment is an important challenge in many fields. As emphasized above, in the situations we are interested in, the agents do not compete but jointly contribute to their regulation. We here give three examples: the control of cancer therapies, regulation and mechanism design by optimal contracting, and distributed control for the planning problem.

Optimal contracting is widely used in economics in order to model agents' interactions subject to the so-called moral hazard problem. This is best illustrated by the works of Tirole (2014 Nobel Prize in economics) in industrial economics. The standard situation is described by the interaction of two parties. The principal (e.g. a land owner) hires the agent (e.g. a farmer) in order to delegate the management of an output of interest (e.g. the production of the land). The agent receives a salary as compensation for the costly effort devoted to the management of the output. The principal only observes the output value and has no access to the agent's effort. Due to the cost of effort, the agent may divert his effort from the direction desired by the principal. The contract is proposed by the principal and chosen according to a Stackelberg game: anticipating the agent's optimal response to any contract, she searches for the optimal contract by optimizing her utility criterion.
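
Schematically, in continuous time, this Stackelberg problem can be sketched as follows (a minimal formulation with illustrative notation: the utilities U_A and U_P, the effort cost c, and the reservation utility R are generic placeholders, not the specific model studied):

```latex
% Output process, controlled through the agent's unobserved effort a:
dX_t = a_t\,dt + \sigma\,dW_t .
% Agent's problem, given a contract \xi = \xi(X_\cdot) over [0,T]:
V^A(\xi) = \sup_{a}\; \mathbb{E}\Big[\, U_A(\xi) - \int_0^T c(a_t)\,dt \,\Big] .
% Principal's problem, anticipating the agent's best response and subject to
% the participation constraint V^A(\xi) \ge R:
V^P = \sup_{\xi \,:\, V^A(\xi) \ge R}\; \mathbb{E}\big[\, U_P(X_T - \xi) \big] .
```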

We are developing the continuous-time formulation of this problem, allowing for diffusion control and for possibly competing multiple agents and principals. This is achieved by crucially using the recently developed second order backward stochastic differential equations, which act as HJB equations in the present non-Markovian framework.

The current environmental transition requires governments to incite firms to introduce green technologies as a substitute for the outdated polluting ones. This transition requires appropriate incentive schemes so as to reach the overall transition objective. This problem can be formulated in the framework of the above Principal-Agent problem as follows. Governments act as principals by setting the terms of an incentive regulation based on subsidies and tax reductions. Firms, acting as agents, optimize their production strategies given the regulation imposed by the governments. Such incentive schemes are also provided by the refinancing channel through private investors, as best witnessed by the remarkable growth of green bond markets.

Another motivation comes from mechanism design. Modern decentralized facilities are present throughout our digitally connected economies. With the fragmentation of financial markets, exchanges are nowadays in competition. As the traditional international exchanges are now challenged by alternative trading venues, markets have to find innovative ways to attract liquidity. One solution is to use a maker-taker fees system, that is, a rule enabling them to charge liquidity provision and liquidity consumption asymmetrically. The most classical setting, used by many exchanges (such as Nasdaq, Euronext, BATS Chi-X, ...), consists in subsidizing the former while taxing the latter. In practice, this results in associating a fee rebate with executed limit orders and applying a transaction cost to market orders.

A platform aims at attracting two types of agents: market makers post bid and ask prices for some underlying asset or commodity, and brokers fulfill their trading needs if the posted prices are convenient. The platform derives its benefits from the fees gained on each transaction. As transactions only occur when the market makers take on riskier behavior by posting attractive bid and ask prices, the platform (acting as the principal) sets the terms of an incentive compensation paid to the market makers (acting as agents) for each realized transaction. Consequently, this optimal contracting problem serves as an optimization tool for the mechanism design of the platform.

Inspired by optimal transport theory, we formulate the above regulation problem as the interaction between a principal and a “crowd” of symmetric agents. Given the large number of agents, we model the limiting case of a continuum of agents whose state is then described by their distribution. The mean field game formulates the interacting agents' optimal decisions as a Nash equilibrium. The optimal planning problem, introduced by Pierre-Louis Lions, seeks an incentivizing scheme for the regulator, acting as a principal, aiming at pushing the crowd to some target distribution. Such a problem may be formulated for instance as a model for the design of smart cities. One may then use the same techniques as for the Principal-Agent problem in order to convert this problem into a more standard optimal transport problem.
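
For orientation, in the simplest Markovian setting (our work treats a path-dependent version with controlled diffusion), the planning system couples a Hamilton-Jacobi-Bellman equation and a Fokker-Planck equation with both marginals prescribed; this is a standard textbook sketch, not the general formulation studied here:

```latex
-\partial_t u - \tfrac{\sigma^2}{2}\,\Delta u + H(x,\nabla u) = f(x,m_t),
\qquad
\partial_t m - \tfrac{\sigma^2}{2}\,\Delta m - \mathrm{div}\big(m\,\partial_p H(x,\nabla u)\big) = 0,
% with the terminal condition on u dropped and replaced by the two constraints
m(0,\cdot) = m_0, \qquad m(T,\cdot) = m_1,
% so that u(T,\cdot) is free and plays the role of the regulator's incentive.
```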

In a situation where a Principal faces many interacting agents, distributed control may serve as an important tool to preserve the aggregate production of the agents, while distributing differently the contributions amongst agents.

The above approach needs now to be extended in order to accommodate more realistic situations. Let us list the following important extensions:

  • The case of noisy observation of the output leads to control problems under partial observation for both types of agents, which are significantly more involved, as they lead to infinite dimensional control problems after the filtering stage;
  • Another important extension accounts for so-called adverse selection: the principal has no access to the optimization criterion of the agent and instead only has a prior on its distribution; in the economic literature, this is addressed in one-period static models by allowing the principal to offer a menu of incentivizing contracts, chosen so that each agent picks the one designed for him (incentive compatibility constraint).

Our research program on networks with interacting agents concerns various types of networks: electronic networks, biological networks, social networks, etc. The numerous mathematical tools necessary to analyse them depend on the network type and the analysis objectives. They include propagation of chaos theory, queueing process theory, large deviation theory, ergodic theory, population dynamics, and partial differential equation analysis, in order to respectively determine mean-field limits, congestion rates or spike train distributions, failure probabilities, equilibrium measures, evolution dynamics, macroscopic regimes, etc.

For example, recently proposed neuron models consist in considering different populations of neurons and setting up stochastic time evolutions of the membrane potentials depending on the population. When the number of populations is fixed, interaction intensities between individuals in different populations have similar orders of magnitude, and the total number of neurons tends to infinity, mean-field limits have been identified and fluctuation theorems have been proven.

However, to the best of our knowledge, no theoretical analysis is available on interconnected networks of networks with different populations of interacting individuals which naturally arise in biology and in economics.

We aim to study the effects of interconnections between sub-networks resulting from individual and local connections. Of course, the problem needs to be posed in terms of the geometry of the big network and of the scales between connectivity intensities and network sizes.

A related research topic concerns stochastic, continuous state and time opinion models where each agent's opinion locally interacts with the other agents' opinions in the system. Due to some exogenous randomness, the interaction tends to create clusters of common opinion. By using a linear stability analysis of the associated nonlinear Fokker-Planck equation that governs the empirical density of opinions in the limit of infinitely many agents, we can estimate the number of clusters, the time to cluster formation, the critical strength of randomness for cluster formation, the cluster dynamics after formation, and the width and effective diffusivity of the clusters.
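
A minimal particle-level sketch of a model of this type, namely a noisy bounded-confidence dynamics (the interaction kernel, the parameter values and the crude cluster diagnostic below are illustrative assumptions, not the model actually analyzed):

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, dt = 500, 30.0, 0.01     # agents, horizon, time step
sigma, R = 0.25, 0.5           # noise strength, interaction range

x = rng.uniform(-2.0, 2.0, N)  # initial opinions

def drift(x):
    # Bounded-confidence attraction: each agent is pulled towards the
    # agents whose opinions lie within distance R of its own opinion.
    d = x[None, :] - x[:, None]            # d[i, j] = x_j - x_i
    w = (np.abs(d) <= R).astype(float)     # interaction kernel (illustrative)
    return (w * d).sum(axis=1) / N

for _ in range(int(T / dt)):
    x += drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

# Crude cluster count: local maxima of the empirical opinion histogram.
hist, _ = np.histogram(x, bins=60)
peaks = sum(1 for i in range(1, 59)
            if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1] and hist[i] > N / 60)
print(f"approximate number of opinion clusters: {peaks}")
```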

Another type of network systems we are interested in derives from financial systemic risk modeling. We consider evolving systems with a large number of inter-connected components, each of which can be in a normal state or in a failed state. These components also have mean field interactions and a cooperative behavior. We will also include diversity as well as other more complex interactions such as hierarchical ones. In such an inter-connected system, individual components can be operating closer to their margin of failure, as they can benefit from the stability of the rest of the system. This, however, reduces the overall margin of uncertainty, that is, increases the systemic risk: our research thus addresses QMU (Quantification of Margins of Uncertainty) problems.

We aim to study the probability of overall failure of the system, that is, its systemic risk. We therefore have to model the intrinsic stability of each component, the strength of external random perturbations to the system, and the degree of inter-connectedness or cooperation between the components.
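
A minimal sketch in this spirit: bistable components coupled through their empirical mean, where stronger cooperation stabilizes individual components yet allows the whole system to migrate to the failed state (all parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

N, T, dt = 30, 20.0, 0.01        # components, horizon, time step
h, theta, sigma = 0.5, 1.0, 1.0  # intrinsic stability, cooperation, noise

def simulate():
    x = -np.ones(N)              # all components start in the "normal" basin -1
    for _ in range(int(T / dt)):
        m = x.mean()             # mean-field coupling (load sharing / cooperation)
        drift = -h * (x**3 - x) + theta * (m - x)  # double well + attraction to mean
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    return x.mean()

# Systemic failure: the empirical mean has crossed into the "failed" basin +1.
M = 200
p_hat = np.mean([simulate() > 0.0 for _ in range(M)])
print(f"estimated systemic failure probability: {p_hat:.3f}")
```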

Our target applications are the following ones:

  • Engineering systems with a large number of interacting parts: Components can fail but the system fails only when a large number of components fail simultaneously.
  • Power distribution systems: Individual components of the system are calibrated to withstand fluctuations in demand by sharing loads, but sharing also increases the probability of an overall failure.
  • Banking systems: Banks cooperate and by spreading the risk of credit shocks between them can operate with less restrictive individual risk policies. However, this increases the risk that they may all fail, that is, the systemic risk.

One of our objectives is to explain why, in some circumstances, one simultaneously observes a decrease of the individual risks and an increase of the systemic risk.

4 Application domains

Our short and mid-term potential industrial impact concerns e.g. energy market regulations, financial market regulations, power distribution companies, nuclear plant maintenance. It also concerns all the industrial sectors where massive stochastic simulations at nano scales are becoming unavoidable and certified results are necessary.

We also plan to have impact in cell biology, macro-economy, and applied mathematics at the crossroads of stochastic integration theory, optimization and control, PDE analysis, and stochastic numerical analysis.

5 New software and platforms

5.1 New platforms

The ICI epidemic propagation simulation platform

Participants: Maxime Colomb, Josselin Garnier, Nicolas Gilet, Carl Graham, Denis Talay.

In 2020, D. Talay launched the ICI (INRIA-Collaboration-IGN) project, of which he is the coordinator. This project aims to simulate the dynamic spreading of an epidemic at the individual scale, in a very precise geographic environment.

Nicolas Gilet (INRIA research engineer) and Maxime Colomb (INRIA-IGN research engineer) are the main developers of the code. The permanent members of the team Ascii jointly work on the modeling and algorithmic issues. The following Inria and IGN researchers contribute to specific tasks:

  • Aline Carneiro Viana (Inria Project-team TriBE)
  • Laura Grigori (Inria Project-team Alpines)
  • Julien Perret (IGN)
  • Razvan Stanica (Inria Project-team Agora)
  • Milica Tomasevic (CMAP, Ecole Polytechnique)

Infection between inhabitants is computed from the density of persons they stay with during the day, their epidemiological status, and probability laws. Statistical studies on the simulation results should allow one to better understand the propagation of an epidemic and to compare the performance of various public stop-and-go strategies to control person-to-person contamination.

This year, Maxime Colomb (INRIA-IGN engineer) and Nicolas Gilet (INRIA engineer) have developed a prototype of the modeling with the support of the ASCII permanent members and the ICI contributors. The modeling is based on the coupling of two different models: on the one hand, the modeling of the urban geographical area where the population lives and moves; on the other hand, the modeling of the random choices of daily travels and of the contaminations due to interactions between individuals. The simulation is currently applied to a sub-part of Paris's fifth arrondissement (Jussieu/St-Victor) and is intended to run on a single arrondissement or small cities.

The geographic model is built from multiple geographic sources (IGN, INSEE, OpenStreetMap, local authority open data portals, etc.). A three-layered synthetic population is generated in order to represent housing, populated by households, themselves composed of individuals. The multiple characteristics added allow representing the living conditions and inner-household interactions of the population. Shops and activities are generated by matching multi-sourced data, which enriches the information about each amenity (opening hours, whether it remains open during a lockdown, etc.). We simulate the socio-professional structures and daily trips of the population by taking into account probability laws related to the urban space (probability of going out, going to work, shopping, etc.) and to social characteristics (age, job, etc.). Currently, the modeling is based on intuitive and simple laws of trips according to individual groups (pupils, students, working people, retirees). The calibration of these probability laws is being improved by using data provided by precise surveys and mobile operators.

In addition, person-to-person contamination has been modeled between individuals located in the same space at the same time, using transmission probability laws specific to each individual, parameterized by the distance between a healthy and a contaminated individual as well as by the contact duration. Since the model is stochastic, in order to obtain accurate and robust statistics on the evolution of the epidemic, we must be able to simulate a large number of independent socio-professional structures within a given urban area, and then, for each population, a large number of realizations of daily trips and contaminations.
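
A minimal sketch of a transmission law of this kind (the exponential dose-response form, the distance kernel and the parameter values are illustrative assumptions, not the calibrated laws of the ICI model):

```python
import math
import random

def transmission_probability(distance_m: float, duration_min: float,
                             beta: float = 0.01, d0: float = 1.5) -> float:
    """Probability that a contact of the given duration (minutes) at the given
    distance (meters) between a contaminated and a healthy individual results
    in a transmission, under an illustrative exponential dose-response model."""
    rate = beta * math.exp(-distance_m / d0)   # infectiousness decays with distance
    return 1.0 - math.exp(-rate * duration_min)

def contact_outcome(distance_m: float, duration_min: float) -> bool:
    """Draw the random outcome of a single contact."""
    return random.random() < transmission_probability(distance_m, duration_min)

# Example: a 30-minute contact at 1 meter.
print(f"p = {transmission_probability(1.0, 30.0):.3f}")
```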

Therefore, to carry out a very large number of simulations covering all parameters of the model, very high performance computing is required. The code is written in the Julia language and is currently parallelized using MPI. At this time, the model runs on the internal cluster of INRIA Saclay called Margaret (200 CPU cores corresponding to 10 CPU nodes), which allows checking the code for a few different epidemiological parameters. We have also obtained the support of AMD to launch our model on a cluster equipped with AMD EPYC™ processors and AMD Instinct™ accelerators, within the national GRID5000/SILECS infrastructure. Moreover, in September 2021, the ICI project obtained 6 million CPU hours from DARI/GENCI, which can be used on the CEA cluster called Irene-Rome (up to 300 000 CPU cores) in order to launch simulations for a large panel of epidemiological parameters. These hours can be used until October 2022.

Finally, Maxime Colomb and Nicolas Gilet have developed a website that describes the ICI project and the characteristics of the ICI model. They have also developed a user interface from which it is possible to study the effect of health policies on the epidemic propagation by displaying the main epidemic indicators computed by the model.

Our next step is to calibrate the model with epidemiologic data and compare the predictive capacities of ICI with simpler models (SIR/SEIR).

Multiple Markov chains are constructed and calibrated for various geographical and socio-demographic profiles using the precise values of a global survey. The micro-spatialization of travel objectives must be carried out using mobile phone data and the list of available places, weighted by their capacity to host the public. The synthetic population generation has to be improved in order to assign an occupation to each individual and to come closer to existing statistics. These improvements are being made jointly with the writing of a scientific article.

Finally, Maxime Colomb and Nicolas Gilet have completed the user interface by integrating the back-end of the application. More precisely, they have deployed the interface on an INRIA web server and built an automatic pipeline between the interface and the server in order to display all of the simulations to the user.

Web sites:

http://ici.gitlabpages.inria.fr/website

https://gitlab.inria.fr/ici/website

https://ici.saclay.inria.fr

Our contribution to the PyCATSHOO toolbox

Participants: Josselin Garnier.

Our second topical activity concerns the PyCATSHOO toolbox developed by EDF, which allows the modeling of dynamical hybrid systems such as nuclear power plants or dams. Hybrid systems mix two kinds of behaviour: first, the discrete and stochastic behaviour which is in general due to failures and repairs of the system's constituents; second, the continuous and deterministic physical phenomena which evolve inside the system.

PyCATSHOO is based on the theoretical framework of Piecewise Deterministic Markov Processes (PDMPs). It implements this framework thanks to distributed hybrid stochastic automata and object-oriented modeling. It is written in C++. Both Python and C++ APIs are available. These APIs can be used either to model specific systems or for generic modelling i.e. for the creation of libraries of component models. Within PyCATSHOO special methods can be developed.

J. Garnier is contributing, and will continue to contribute, to this toolbox within joint Cifre programs with EDF. The PhD theses are aimed at adding new functionalities to the platform, for instance an importance sampling method based on a cross-entropy approach.
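
As a minimal illustration of the cross-entropy idea on a toy rare event, estimating P(X > q) for a standard Gaussian X (the PDMP setting addressed in the PhD work is far more involved; all parameters here are illustrative):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
q = 5.0                     # rare-event threshold for X ~ N(0, 1)
n, rho = 10_000, 0.1        # sample size per iteration, elite fraction

# Cross-entropy iterations: shift the sampling mean towards the rare event.
mu = 0.0
for _ in range(10):
    x = mu + rng.standard_normal(n)
    gamma = min(q, np.quantile(x, 1.0 - rho))   # intermediate level
    elite = x[x >= gamma]
    w = np.exp(-elite * mu + 0.5 * mu**2)       # likelihood ratios dN(0,1)/dN(mu,1)
    mu = float(np.sum(w * elite) / np.sum(w))   # CE update of the tilted mean
    if gamma >= q:
        break

# Final importance-sampling estimate under the optimized proposal N(mu, 1).
x = mu + rng.standard_normal(n)
w = np.exp(-x * mu + 0.5 * mu**2)
p_hat = float(np.mean(w * (x >= q)))
exact = 0.5 * (1.0 - erf(q / sqrt(2.0)))
print(f"IS estimate {p_hat:.3e} vs exact {exact:.3e}")
```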

6 New results

6.1 Mean field games, mean field optimal stopping problems

Participants: Leila Bassou, Quentin Cormier, Mehdi Talbi, Nizar Touzi.

Optimal stopping of dynamic interacting stochastic systems.

In a series of three papers, Mehdi Talbi, Nizar Touzi, and Jianfeng Zhang (University of Southern California) study the mean field limit of the multiple optimal stopping problem. They develop the usual dynamic programming approach in the present context and provide a complete characterization of the problem in terms of a new obstacle equation on the Wasserstein space: a verification argument in the smooth case, and an appropriate notion of viscosity solutions in the general case.

From finite population optimal stopping to mean field optimal stopping.

Mehdi Talbi, Nizar Touzi, and Jianfeng Zhang analyze the convergence of the finite population optimal stopping problem towards the corresponding mean field limit by adapting the Barles-Souganidis monotone scheme method to this context. As a by-product of their analysis, they obtain an extension of the standard propagation of chaos to the context of stopped McKean-Vlasov diffusions.

Entropic optimal planning for path-dependent mean field games.

In the context of mean field games, with possible control of the diffusion coefficient, Zhenjie Ren, Xiaolu Tan, Nizar Touzi, and Junjian Yang have considered a path-dependent version of the planning problem introduced by P.L. Lions: given a pair of marginal distributions (m0, m1), find a specification of the game problem starting from the initial distribution m0, and inducing the target distribution m1 at the mean field game equilibrium. The main result reduces the path-dependent planning problem into an embedding problem, that is, constructing a McKean-Vlasov dynamics with given marginals (m0,m1). Up to integrability, the minimum entropy solution of the planning problem is also characterized.

Mean Field Game of Mutual Holding.

Mao Fabrice Djete (CMAP, Ecole Polytechnique) and Nizar Touzi have introduced a mean field model for optimal holding of a representative agent of her peers as a natural expected scaling limit from the corresponding N-agent model. The induced mean field dynamics appear naturally in a form which is not covered by standard McKean-Vlasov stochastic differential equations. An explicit solution of the corresponding mean field game of mutual holding is obtained, and is defined by a bang-bang control consisting in holding those competitors with positive drift coefficient of their dynamic value. They next use this mean field game equilibrium to construct (approximate) Nash equilibria for the corresponding N-player game.

Is there a Golden Parachute in Sannikov's principal-agent problem?

Dylan Possamaï and Nizar Touzi have provided a complete review of the continuous-time optimal contracting problem introduced by Sannikov, in the extended context allowing for possibly different discount rates for both parties. The agent's problem is to seek the optimal effort, given the compensation scheme proposed by the principal over a random horizon. Then, given the agent's optimal response, the principal determines the best compensation scheme in terms of running payment, retirement, and lump-sum payment at retirement. A Golden Parachute is a situation where the agent ceases any effort at some positive stopping time and receives a payment afterwards, possibly under the form of a lump-sum payment or of a continuous stream of payments. The authors have shown that a Golden Parachute only exists in certain specific circumstances. This is in contrast with the results claimed by Sannikov, where the only requirement is a positive agent's marginal cost of effort at zero. Namely, the authors show that there is no Golden Parachute if this parameter is too large. Similarly, in the context of a concave marginal utility, there is no Golden Parachute if the agent's utility function has a too negative curvature at zero. In the general case, the authors prove that an agent with positive reservation utility is either never retired by the principal, or retired above some given threshold (as in Sannikov's solution). They also show that different discount factors induce a face-lifted utility function, which allows the analysis to be reduced to a setting similar to the equal-discount-rates one. Finally, the authors confirm that an agent with small reservation utility does have an informational rent, meaning that the principal optimally offers him a contract with strictly higher utility than his participation value.

Synchronization in a Kuramoto Mean Field Game

In collaboration with René Carmona and Mete Soner, Quentin Cormier has studied the classical Kuramoto model in the setting of an infinite horizon mean field game. The system is shown to exhibit both synchronization and phase transition. Incoherence below a critical value of the interaction parameter is demonstrated by the stability of the uniform distribution. Above this value, the game bifurcates and develops self-organizing time homogeneous Nash equilibria. As interactions become stronger, these stationary solutions become fully synchronized. Results are proved by an amalgam of techniques from nonlinear partial differential equations, viscosity solutions, stochastic optimal control and stochastic processes.
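
A minimal sketch of the underlying noisy Kuramoto particle system with identical oscillators (the interaction strength and noise level below are illustrative; the order parameter r distinguishes incoherence from synchronization):

```python
import numpy as np

rng = np.random.default_rng(3)

N, T, dt = 1000, 20.0, 0.01
K, sigma = 2.0, 0.7        # interaction strength, noise level

theta = rng.uniform(0.0, 2.0 * np.pi, N)   # initial phases, fully incoherent

for _ in range(int(T / dt)):
    # Mean-field Kuramoto drift via the complex order parameter r e^{i psi}:
    # (K/N) sum_j sin(theta_j - theta_i) = K r sin(psi - theta_i).
    z = np.exp(1j * theta).mean()
    r, psi = np.abs(z), np.angle(z)
    theta += K * r * np.sin(psi - theta) * dt \
             + sigma * np.sqrt(dt) * rng.standard_normal(N)

r_final = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r_final:.3f} (near 0: incoherence, near 1: synchronization)")
```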

6.2 Stochastic cooperative numerical methods for complex industrial systems

Participants: Guillaume Chennetier, Josselin Garnier, Clément Gauchy, Baptiste Kerleguer, Paul Lartaud, Angèle Niclas.

Seismic probabilistic risk assessment.

The key elements of seismic probabilistic risk assessment studies are the fragility curves, which express the probability of failure of a structure conditionally on a seismic intensity measure. A multitude of procedures is currently available to estimate these curves. For modeling-based approaches, which may involve complex and expensive numerical models, the main challenge is to optimize the calls to the numerical codes in order to reduce the estimation costs. Adaptive techniques can be used for this purpose, but then taking into account the uncertainties of the estimates (via confidence intervals or ellipsoids related to the size of the samples used) is an arduous task, because the samples are no longer independent and possibly not identically distributed. The main contribution of this work is to deal with this question in a mathematically rigorous way. To this end, C. Gauchy, C. Feau, and J. Garnier have proposed and implemented an active learning methodology based on adaptive importance sampling for parametric estimation of fragility curves. They have proven theoretical properties (consistency and asymptotic normality) for the estimator of interest. Moreover, they have given a convergence criterion in order to use asymptotic confidence ellipsoids. Finally, the performance of the methodology has been evaluated on analytical and industrial test cases of increasing complexity.
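
A minimal sketch of the parametric object being estimated, the classical lognormal fragility curve, fitted here by plain maximum likelihood on synthetic i.i.d. data (the work above replaces plain sampling by adaptive importance sampling and provides the corresponding asymptotic theory; all names and values below are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)

# Synthetic data: intensity measures a_i and failure indicators y_i drawn from
# a "true" lognormal fragility curve P(failure | a) = Phi(log(a / alpha) / beta).
a = rng.lognormal(mean=0.0, sigma=0.5, size=400)
alpha_true, beta_true = 1.5, 0.4
y = rng.random(400) < norm.cdf(np.log(a / alpha_true) / beta_true)

def neg_log_likelihood(params):
    log_alpha, log_beta = params          # log-parameterization keeps alpha, beta > 0
    p = norm.cdf((np.log(a) - log_alpha) / np.exp(log_beta))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)    # numerical safety in the Bernoulli likelihood
    return -np.sum(y * np.log(p) + (~y) * np.log(1.0 - p))

res = minimize(neg_log_likelihood, x0=[0.0, -1.0], method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"fitted curve: P(failure | a) = Phi(log(a/{alpha_hat:.2f}) / {beta_hat:.2f})")
```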

Surrogate modeling of a complex numerical code.

B. Kerleguer has considered the surrogate modeling of a complex numerical code in a multifidelity framework when the code output is a time series. Using an experimental design for the low- and high-fidelity code levels, an original Gaussian process regression method has been proposed. The code output is expanded on a basis built from the experimental design. The first coefficients of the expansion are processed by a co-kriging approach; the remaining coefficients are processed collectively by a kriging approach with covariance tensorization. The resulting surrogate model, which takes into account the uncertainty in the basis construction, has been shown to perform better in terms of prediction errors and uncertainty quantification than standard dimension reduction techniques.
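
A minimal sketch of the kriging building block underlying such surrogates: single-output Gaussian process regression with a squared-exponential kernel (the actual method combines co-kriging across fidelity levels with covariance tensorization; this standalone sketch and its parameters are illustrative):

```python
import numpy as np

def sq_exp_kernel(x, y, ell=0.2):
    # Squared-exponential covariance with unit prior variance (illustrative choice).
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell**2)

def krige(x_train, y_train, x_test, nugget=1e-8):
    """Gaussian process posterior mean and variance at the test points."""
    K = sq_exp_kernel(x_train, x_train) + nugget * np.eye(len(x_train))
    k_star = sq_exp_kernel(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = k_star @ alpha
    v = np.linalg.solve(L, k_star.T)
    var = 1.0 - np.sum(v**2, axis=0)       # prior variance is 1 here
    return mean, var

x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2.0 * np.pi * x_train)    # stand-in for an expensive code output
x_test = np.linspace(0.0, 1.0, 101)
mean, var = krige(x_train, y_train, x_test)
print(f"posterior mean at x=0.5: {mean[50]:.3f}; "
      f"max posterior std between design points: {np.sqrt(var.max()):.3f}")
```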

Inverse problems.

L. Borcea, J. Garnier, A. V. Mamonov, and J. Zimmerling have introduced a novel, computationally inexpensive approach for imaging with an active array of sensors, which probe an unknown medium with a pulse and measure the resulting waves. The imaging function is based on the principle of time reversal in non-attenuating media and uses a data-driven estimate of the 'internal wave' originating from the vicinity of the imaging point and propagating to the sensors through the unknown medium. The authors have explained how this estimate can be obtained using a reduced order model (ROM) for the wave propagation. They have analyzed the imaging function, connected it to the time reversal process, and described how its resolution depends on the aperture of the array, the bandwidth of the probing pulse, and the medium through which the waves propagate. They have also shown how the internal wave can be used for selective focusing of waves at points in the imaging region. They have assessed the performance of the imaging methods with numerical simulations and compared them to the conventional reverse-time migration method and the 'backprojection' method introduced recently as an application of the same ROM. Other active research directions concern imaging in randomly perturbed media in the stochastic homogenization regime by Q. Goepfert (with applications to medical imaging and non-destructive testing) and inverse problems in randomly perturbed waveguides (with applications to underwater acoustics) by A. Niclas.

Variance reduction.

J. Garnier and L. Mertz have examined a control variate estimator for a quantity that can be expressed as the expectation of a function of a random process which is itself the solution of a differential equation driven by fast mean-reverting ergodic forces. The control variate is the same function evaluated on the limit diffusion process that approximates the original process when the mean-reversion time goes to zero. To obtain an efficient control variate estimator, a coupling method for the original process and the limit diffusion process has been proposed. It has been shown that the correlation between the two processes indeed goes to one when the mean-reversion time goes to zero, and the convergence rate has been quantified, which makes it possible to characterize the variance reduction achieved by the proposed control variate method. The efficiency of the method has been illustrated on a few examples.
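
A minimal sketch of the coupling idea on a toy example: a slow process driven by a fast mean-reverting Ornstein-Uhlenbeck force, coupled to its Brownian diffusion limit built from the same Gaussian increments (the model, the test function and all parameters are illustrative, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
eps, T, n_paths = 0.01, 1.0, 2000   # mean-reversion time, horizon, sample size
n_steps = 1000
dt = T / n_steps

def coupled_pair():
    """One path of X_eps (driven by a fast OU force) together with its
    coupled diffusion limit sqrt(2) * W, built from the same increments."""
    dB = np.sqrt(dt / eps) * rng.standard_normal(n_steps)  # increments of B_{t/eps}
    y, x, w = 0.0, 0.0, 0.0
    for k in range(n_steps):
        x += y / np.sqrt(eps) * dt                  # slow process: dX = Y/sqrt(eps) dt
        y += -y / eps * dt + np.sqrt(2.0) * dB[k]   # fast OU force
        w += np.sqrt(eps) * dB[k]                   # coupled standard Brownian motion
    return x, np.sqrt(2.0) * w

f = lambda z: z**2
ef_limit = 2.0 * T                   # E[f(sqrt(2) W_T)] = 2T, known in closed form
samples = np.array([coupled_pair() for _ in range(n_paths)])
plain = f(samples[:, 0])
cv = plain - f(samples[:, 1]) + ef_limit   # control variate estimator
print(f"plain MC: {plain.mean():.3f} (sd {plain.std():.3f})")
print(f"with CV:  {cv.mean():.3f} (sd {cv.std():.3f})")
```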

On time-dependent reliability-based design optimization problems with constraints.

A. Cousin, J. Garnier, M. Guiton, and M. Munoz-Zuniga have considered a time-dependent reliability-based design optimization (RBDO) problem with constraints involving the maximum and/or the integral of a random process over a time interval, focusing especially on problems where the process is a stationary or piecewise stationary Gaussian process. A two-step procedure is proposed to solve the problem. First, ergodic theory and extreme value theory are used to reformulate the original constraints into time-independent ones; this yields an equivalent RBDO problem for which classical algorithms perform poorly. The second step is to solve the reformulated problem with a new method, based on an adaptive kriging strategy well suited to the reformulated constraints, called AK-ECO (adaptive kriging for expectation constraints optimization). The procedure is applied to two toy examples involving a harmonic oscillator subjected to random forces, and then to an optimal design problem for a floating offshore wind turbine.

6.3 A stochastic numerical method for the parabolic-parabolic Keller-Segel system

Participants: Denis Talay.

The parabolic-parabolic Keller-Segel model is a set of equations that model the process of cell movement. It takes into account the evolution of different chemical components that can aid, hinder or change the direction of movement, a process called chemotaxis.

In collaboration with Radu Maftei (a former Inria post-doc), Milica Tomasevic (CMAP, Ecole Polytechnique) and Denis Talay have continued to analyse the numerical performance of a stochastic particle method for the parabolic-parabolic Keller-Segel model. They also propose and test various algorithmic improvements to the method in order to substantially decrease its execution time without altering its global accuracy.

6.4 Stochastic cooperative communication networks; modelling and numerical issues for interacting stochastic particle systems

Participants: Quentin Cormier, Carl Graham, Denis Talay.

Communication networks and their algorithms.

An important current research topic of Carl Graham is the modeling, analysis, simulation, and performance evaluation of communication networks and their algorithms. Most of these algorithms must function in real time, in a distributed fashion, and using sparse information. In particular, load balancing algorithms aim to provide better utilization of the network resources, and hence better quality of service for clients, by striving to avoid the starving of some servers and the build-up of queues at others: they route the clients so as to have them well spread out throughout the system. Carl Graham's recent focus on these networks is perfect simulation in equilibrium, and a paper on this is in the final phase of writing. Perfect simulation methods allow estimating quality of service (QoS) indicators in the stationary regime by Monte Carlo methods.

Regenerative properties of Hawkes processes.

Establishing regenerative properties of Hawkes processes allows one to systematically derive long-time asymptotic results in view of statistical applications. A paper of Carl Graham with M. Costa, L. Marsalle, and V.C. Tran proves regeneration for specific nonlinear Hawkes processes with transfer functions which may take negative values, in order to model self-inhibition phenomena. An essential assumption is that the transfer function has bounded support; this allows introducing an auxiliary Markov process, with values in point processes whose support includes that of the transfer function, which is then proven to be positive recurrent. This regenerative result then allows proving, in particular, a non-asymptotic exponential concentration inequality by carefully adapting the Bernstein inequality. Another paper of Carl Graham proves regenerative properties for the linear Hawkes process under minimal assumptions on the transfer function, which may have unbounded support. The proof exploits the deep independence properties of the Poisson cluster point process decomposition of Hawkes and Oakes, and the regeneration times are not stopping times for the Hawkes process. The regeneration time is interpreted as the renewal time at zero of an M/G/∞ queue, which yields a formula for its Laplace transform. When the transfer function admits some exponential moments, the cluster length can be stochastically dominated by exponential random variables with parameters expressed in terms of these moments. This yields explicit bounds on the Laplace transform of the regeneration time in terms of simple integrals or of special functions, and an explicit negative upper bound on its abscissa of convergence. This is illustrated on the exponential concentration inequality previously obtained by Carl Graham and his coauthors.
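
A minimal sketch of a linear Hawkes process with exponential transfer function, simulated by Ogata's thinning algorithm (the parameter values are illustrative; stability requires alpha/beta < 1):

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, T=100.0):
    """Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    events, t, lam_excess = [], 0.0, 0.0   # lam_excess: self-excitation at time t
    while True:
        lam_bar = mu + lam_excess            # valid upper bound: intensity only decays
        w = rng.exponential(1.0 / lam_bar)   # candidate waiting time
        t += w
        if t > T:
            break
        lam_excess *= np.exp(-beta * w)      # decay over the waiting time
        if rng.random() * lam_bar <= mu + lam_excess:   # thinning acceptance
            events.append(t)
            lam_excess += alpha              # intensity jumps at each event
    return np.array(events)

ev = simulate_hawkes()
print(f"{len(ev)} events on [0, 100]; empirical rate {len(ev) / 100:.2f} "
      f"vs stationary rate mu / (1 - alpha/beta) = {1.0 / (1.0 - 0.25):.2f}")
```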

Long time behavior of particle systems and their mean-field limit.

Quentin Cormier has studied the long time behavior of a family of McKean-Vlasov stochastic differential equations. He has given conditions ensuring the local stability of an invariant probability measure. The criterion involves the location of the roots of an explicit holomorphic function associated with the dynamics. When all the roots lie in the left half-plane, local stability holds and convergence is proven in Wasserstein norms. The optimal rate of convergence is provided. The probabilistic proof makes use of Lions derivatives and relies on a new `integrated sensitivity' formula.
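
Schematically, the equations under study are of the McKean-Vlasov type (a generic one-dimensional sketch; the class actually considered allows more general interaction terms):

```latex
% The drift depends on the law of the solution itself:
dX_t = b(X_t, \mu_t)\,dt + \sigma\,dW_t, \qquad \mu_t = \mathrm{Law}(X_t),
% and an invariant probability measure \bar{\mu} solves the stationary
% nonlinear Fokker-Planck equation
0 = \tfrac{\sigma^2}{2}\,\partial_{xx}\bar{\mu} - \partial_x\big( b(\cdot,\bar{\mu})\,\bar{\mu} \big).
```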

A hypothesis test for complex stochastic simulations.

In a joint work with Héctor Olivero, D. Talay has proposed and analyzed an asymptotic hypothesis test for independent copies of a given random variable X which is supposed to belong to the unknown domain of attraction of a stable law. The null hypothesis H0 is: `X is in the domain of attraction of the Normal law', and the alternative hypothesis H1 is: `X is in the domain of attraction of a stable law with index smaller than 2'.

Surprisingly, the proposed hypothesis test is based on a statistic inspired by methodologies for determining whether a semimartingale has jumps from the observation of a single path at discrete times. The authors have justified their test by proving asymptotic properties of discrete-time functionals of Brownian bridges. They have also discussed many numerical experiments which illustrate the satisfying properties of the proposed test.
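
For intuition only, a classical diagnostic in the same spirit, which is not the authors' test statistic: for i.i.d. copies of X, the ratio max_i X_i^2 / sum_i X_i^2 tends to 0 when X has finite variance (Normal domain of attraction), while it admits a non-degenerate limit under stable attraction with index smaller than 2:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

def max_over_sum(sample):
    s = sample**2
    return s.max() / s.sum()

gaussian = rng.standard_normal(n)   # domain of attraction of the Normal law
cauchy = rng.standard_cauchy(n)     # stable domain of attraction with index 1
print(f"Gaussian sample: max/sum = {max_over_sum(gaussian):.4f}")
print(f"Cauchy sample:   max/sum = {max_over_sum(cauchy):.4f}")
```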

7 Bilateral contracts and grants with industry

7.1 Bilateral contracts with industry

Several INRIA teams, including ASCII, are involved in the CIROQUO Research & Industry Consortium – Consortium Industrie Recherche pour l'Optimisation et la QUantification d'incertitude pour les données Onéreuses – (Industry Research Consortium for the Optimization and QUantification of Uncertainty for Onerous Data). Josselin Garnier is the INRIA Saclay representative on the steering committee.

The principle of the CIROQUO Research & Industry Consortium is to bring together academic and technological research partners to solve problems related to the exploitation of numerical simulators, such as code transposition (how to go from small to large scale when only small-scale simulations are possible), taking into account the uncertainties that affect the result of a simulation, and validation and calibration (how to validate and calibrate a computer code from collected experimental data). This project is the result of a simple observation: industries using computer codes are often confronted with similar problems during the exploitation of these codes, even if their fields of application are very varied. Indeed, the increase in the availability of computing cores is counterbalanced by the growing complexity of the simulations, whose computational times are usually of the order of an hour or a day; in practice, this limits the number of simulations. This is why the development of mathematical methods to make the best use of simulators and the data they produce is a source of progress. The experience acquired over the last thirteen years in the DICE and ReDICE projects and the OQUAIDO Chair shows that the formalization of real industrial problems often gives rise to first-rate theoretical problems that can feed scientific and technical advances. The creation of the CIROQUO Research & Industry Consortium, led by the Ecole Centrale de Lyon and co-animated with IFPEN, follows from these observations and responds to a desire for collaboration between technological research partners and academics in order to meet the challenges of exploiting large computing codes.

Scientific approach. The limitation of the number of calls to simulators implies that some information – even the most basic, such as the mean value, the influence of a variable, or the minimum value of a criterion – cannot be obtained directly by the usual methods. The international scientific community, structured around computer experiments and uncertainty quantification, took up this problem more than twenty years ago, but a large number of problems remain open. On the academic level, this is a dynamic field, notably the subject of the French CNRS research group MascotNum since 2006, renewed in 2020.

Composition. The CIROQUO Research & Industry Consortium aims to bring together a limited number of participants in order to make joint progress on test cases from the industrial world and on the upstream research that their treatment requires. The overall approach on which the Consortium will focus is metamodeling and related areas such as design of experiments, optimization, inversion and calibration. IRSN, STORENGY, CEA, IFPEN and BRGM are the technological research partners. Mines Saint-Etienne, Centrale Lyon, CNRS, UCA, UPS, UT3 and Inria are the academic partners of the consortium.

Scientific objectives. On the practical level, the expected impact of the project is to make the progress of numerical simulation concrete through a better use of computational time, allowing the determination of better solutions and of the associated uncertainties. On the theoretical level, this project will stimulate research on the major scientific challenges of the discipline, such as code transposition/calibration/validation, modeling of complex environments, or stochastic codes. In each of these scientific axes, particular attention will be paid to high dimension: real problems sometimes involve several tens or hundreds of inputs, and methodological advances will be proposed to take this additional difficulty into account. The work expected from the consortium differs from the dominant research in machine learning by specificities linked to the exploration of expensive numerical simulations. However, it seems important to build bridges between the many recent developments in machine learning and the field of numerical simulation.

Philosophy. The CIROQUO Research & Industry Consortium is a scientific collaboration project aiming to mobilize resources to achieve methodological advances. The project promotes cross-fertilization between partners coming from different backgrounds but confronted with problems requiring a common methodology. It has three objectives: - the development of exchanges between technological research partners and academic partners on issues, practices and solutions through periodic scientific meetings and collaborative work, particularly through the co-supervision of students;

- the contribution of common scientific skills thanks to regular training in mathematics and computer science;

- the recognition of the Consortium at the highest level thanks to publications in international journals and the diffusion of free reference software.

7.2 Collaboration with EdF on industrial risks

This collaboration, for which Josselin Garnier is the Ascii leader, has been underway for several years. It concerns the assessment of the reliability of hydraulic and nuclear power plants built and operated by EDF (Electricité de France). The failure of a power plant is associated with major consequences (flood, dam failure, or core meltdown); for regulatory and safety reasons, EDF must ensure that the probability of failure of a power plant is sufficiently low.

The failure of such systems occurs when physical variables (temperature, pressure, water level) exceed a certain critical threshold. Typically, these variables enter this critical region only when several components of the system are deteriorated. Therefore, in order to estimate the probability of system failure, it is necessary to model jointly the behavior of the components and of the physical variables. For this purpose, a model based on a Piecewise Deterministic Markov Process (PDMP) is used. The platform called PYCATSHOO has been developed by EDF to simulate this type of process. This platform allows estimating the probability of failure of the system by Monte Carlo simulation as long as this probability is not too low. When the probability becomes too low, the classical Monte Carlo estimation method, which requires a large number of simulations to estimate the probabilities of rare events, is much too slow to execute in our context. It is then necessary to use methods requiring fewer simulations to estimate the probability of system failure: variance reduction methods. Among these are importance sampling and splitting methods, but they present difficulties when used with PDMPs.

Work has been undertaken on this subject, leading to the defense of a first CIFRE thesis (Thomas Galtier, defended in 2019) and the preparation of a new CIFRE thesis (Guillaume Chennetier, started in 2021). Theoretical articles have been written and submitted to journals. New theoretical work on sensitivity analysis in rare event regimes is the subject of the new thesis. The integration of the methods into the PYCATSHOO platform is being carried out progressively.

8 Partnerships and cooperations

8.1 International initiatives

8.1.1 Inria associate team not involved in an IIL or an international program

CIRCUS
  • Title:
    Columbia Inria Research on Collaborative Ultracritical Systems
  • Duration:
    2020 ->
  • Coordinator:
    Philip Protter (pep2117@columbia.edu)
  • Partners:
    • Columbia University (United States)
  • Inria contact:
    Denis Talay
  • Summary:

    CIRCUS will focus on collaborative stochastic agent and particle systems. In standard models, the agents and particles have `blind' interactions generated by an external interaction kernel or interaction mechanism which their empirical distribution does not affect. A contrario, the agent and particle systems which will be developed, analysed, and simulated by CIRCUS have the key property that the agents and particles dynamically optimize their interactions.

    Two main directions of research will be investigated: optimal regulation in stochastic environments, and optimized simulations of particle systems with singular interactions. In both cases, the interactions (between the agents or the numerical particles) are optimized, non Markovian, and the singularities reflect ultracritical phenomena such as aggregations or finite-time blow-ups.

8.2 International research visitors

P. Protter (Columbia University) visited the team in December.

9 Dissemination

9.1 Promoting scientific activities

Participants: Josselin Garnier, Denis Talay, Nizar Touzi.

J. Garnier is Vice-head of the Foundation Mathématique Hadamard, in charge of Labex Mathématique Hadamard.

J. Garnier and Véronique Maume-Deschamps (president of the French agency AMIES) chaired the working group `Développement économique, de la compétitivité et de l'innovation' at the `Assises des Mathématiques' conference.

D. Talay is Vice-President of the Natixis Foundation which promotes academic research on financial risks. He also serves as a member of the scientific committee of the French agency AMIES in charge of promoting Mathematics to industry.

N. Touzi is Scientific Director of the Louis Bachelier Institute which hosts two public foundations promoting research in finance and economics for sustainable growth.

N. Touzi is Vice-head of the Doctoral School Jacques Hadamard (EDMH) in charge of the Polytechnique Pole.

Member of the editorial boards

J. Garnier is a member of the editorial boards of the journals Asymptotic Analysis, Discrete and Continuous Dynamical Systems – Series S, ESAIM P&S, Forum Mathematicum, SIAM Journal on Applied Mathematics, and SIAM ASA Journal on Uncertainty Quantification (JUQ).

D. Talay serves as an Area Editor of Stochastic Processes and their Applications, and as an Associate Editor of the Journal of the European Mathematical Society and of Monte Carlo Methods and Applications. He also serves as Co-editor in chief of MathematicS in Action. During the evaluation period his other Associate Editor terms ended: Probability, Uncertainty and Quantitative Risk; ESAIM Probability and Statistics; Stochastics and Dynamics; Journal of Scientific Computing; SIAM Journal on Scientific Computing.

N. Touzi serves as an Area Editor of Stochastic Processes and their Applications, and as an Associate Editor of Advances in Calculus of Variations; Stochastics: an International Journal of Probability and Stochastic Processes; Journal of Optimization Theory and Applications; Mathematical Control and Related Fields; Tunisian Journal of Mathematics; and Springer Briefs in Mathematical Finance. He is also Co-Editor of the Paris-Princeton Lectures in Mathematical Finance.

9.1.1 Invited talks


9.2 Teaching - Supervision - Juries

9.2.1 Teaching

Josselin Garnier is a professor at the Ecole Polytechnique and he has a full teaching service. He also teaches the class "Inverse problems and imaging" at the Master Mathématiques, Vision, Apprentissage (M2 MVA).

D. Talay teaches the master course Equations différentielles stochastiques de McKean–Vlasov et limites champ moyen de systèmes de particules stochastiques en interaction, 24h, M2 Probabilité et Applications, LPSM, Sorbonne Université, France.

Nizar Touzi is a professor at the Ecole Polytechnique and he has a full teaching service.

10 Scientific production

10.1 Publications of the year

International journals

  • 1. N. Acharki, A. Bertoncello and J. Garnier. Robust prediction interval estimation for Gaussian processes by cross-validation method. Computational Statistics and Data Analysis 178, February 2023, 107597.
  • 2. K. Baudin, J. Garnier, A. Fusaro, N. Berti, G. Millot and A. Picozzi. Weak Langmuir turbulence in disordered multimode optical fibers. Physical Review A 105(1), January 2022, 013528.
  • 3. N. Berti, K. Baudin, A. Fusaro, G. Millot, A. Picozzi and J. Garnier. Interplay of Thermalization and Strong Disorder: Wave Turbulence Theory, Numerical Simulations, and Experiments in Multimode Optical Fibers. Physical Review Letters 129(6), August 2022, 063901.
  • 4. A. Cousin, J. Garnier, M. Guiton and M. Munoz Zuniga. A two-step procedure for time-dependent reliability-based design optimization involving piece-wise stationary Gaussian processes. Structural and Multidisciplinary Optimization 65(4), April 2022, 120.
  • 5. J. Garnier and L. Mertz. A Control Variate Method Driven by Diffusion Approximation. Communications on Pure and Applied Mathematics 75(3), March 2022, 455-492.
  • 6. J. Garnier and P. Roux. Modal formulation and paraxial approximation for acoustic wave propagation in waveguides with surface perturbations. Journal of the Acoustical Society of America 151(5), May 2022, 3239-3254.
  • 7. J. Garnier and K. Sølna. Scintillation of partially coherent light in time-varying complex media. Journal of the Optical Society of America A: Optics, Image Science, and Vision 39(8), 2022, 1309.
  • 8. M. de Hoop, J. Garnier and K. Sølna. System of radiative transfer equations for coupled surface and body waves. Zeitschrift für Angewandte Mathematik und Physik 73(5), October 2022, 177.
  • 9. G. Xu, J. Garnier, A. Fusaro and A. Picozzi. Background-enhanced collapse instability of optical speckle beams in nonlocal nonlinear media. Physica D: Nonlinear Phenomena 434, June 2022, 133230.

Reports & preprints

  • 10. R. Carmona, Q. Cormier and H. M. Soner. Synchronization in a Kuramoto Mean Field Game. Preprint, December 2022.
  • 11. G. Chennetier, H. Chraibi, A. Dutfoy and J. Garnier. Adaptive importance sampling based on fault tree analysis for piecewise deterministic Markov process. Preprint, October 2022.
  • 12. Q. Cormier. A bifurcation analysis of some McKean-Vlasov equations. Preprint, December 2022.
  • 13. J. Garnier, H. Haddar and H. Montanelli. The linear sampling method for random sources. Preprint, October 2022.
  • 14. C. Gauchy, C. Feau and J. Garnier. Importance sampling based active learning for parametric seismic fragility curve estimation. Preprint, January 2022.
  • 15. H. Olivero and D. Talay. An hypothesis test for the domain of attraction of a random variable. Preprint, October 2022.
  • 16. Z. Ren, X. Tan, N. Touzi and J. Yang. Entropic optimal planning for path-dependent mean field games. Preprint, December 2022.
  • 17. M. Talbi, N. Touzi and J. Zhang. Dynamic programming equation for the mean field optimal stopping problem. Preprint, December 2022.
  • 18. M. Talbi, N. Touzi and J. Zhang. From finite population optimal stopping to mean field optimal stopping. Preprint, December 2022.
  • 19. M. Talbi, N. Touzi and J. Zhang. Viscosity solutions for obstacle problems on Wasserstein space. Preprint, December 2022.