2023 Activity report - Project-Team ASCII
RNSR: 201923478S
- Research center: Inria Saclay Centre at Institut Polytechnique de Paris
- In partnership with: CNRS, Institut Polytechnique de Paris
- Team name: Analysis of Stochastic Cooperative Intelligent Interactions
- In collaboration with: Centre de Mathématiques Appliquées (CMAP)
- Domain: Applied Mathematics, Computation and Simulation
- Theme: Stochastic approaches
Keywords
Computer Science and Digital Science
- A3.1. Data
- A3.2. Knowledge
- A3.3. Data and knowledge analysis
- A6. Modeling, simulation and control
Other Research Topics and Application Domains
- B1. Life sciences
- B1.1. Biology
- B1.2. Neuroscience and cognitive science
- B2.3. Epidemiology
- B4. Energy
- B6. IT and telecom
1 Team members, visitors, external collaborators
Research Scientists
- Carl Graham [Team leader, CNRS, Researcher, HDR]
- Quentin Cormier [INRIA, Researcher]
- Denis Talay [INRIA, Emeritus, HDR]
Faculty Members
- Josselin Garnier [ECOLE POLY PALAISEAU, Professor, HDR]
- Nizar Touzi [ECOLE POLY PALAISEAU, Professor, until Sep 2023, HDR]
Post-Doctoral Fellows
- Angèle Niclas [ECOLE POLY PALAISEAU, until Sep 2023]
- Charlie Sire [INRIA, Post-Doctoral Fellow, from Dec 2023]
PhD Students
- Leila Bassou [ECOLE POLY PALAISEAU]
- Guillaume Chennetier [EDF]
- Assil Fadle [ECOLE POLY PALAISEAU]
- Paul Lartaud [ECOLE POLY PALAISEAU]
- Songbo Wang [ECOLE POLY PALAISEAU]
Technical Staff
- Maxime Colomb [INRIA-IGN, Engineer]
- Nicolas Gilet [INRIA, Engineer, until Jan 2023]
Administrative Assistant
- Julienne Moukalou [INRIA]
External Collaborators
- Guillaume Perrin [UNIV GUSTAVE EIFFEL, until Mar 2023]
- Milica Tomasevic [CNRS, Researcher]
2 Overall objectives
The ASCII team investigates stochastic interacting particle systems that behave as collections of agents striving to cooperate intelligently in order to achieve a common objective by solving complex optimisation problems. Its goal is the stochastic modelling of the relevant phenomena and the mathematical and numerical analysis of the resulting models. The target application fields include economics, energy production, neuroscience, physics, biology, and stochastic numerics.
Our innovative approaches for handling intelligent multi-agent interaction raise many modelling challenges. The resulting models are complex, often have non-Markovian evolutions, and exhibit singularities. This requires us to develop new mathematical tools and numerical methods, both stochastic and deterministic, for instance non-standard stochastic control and stochastic optimization methodologies coupled with original calibration methodologies. We combine various mathematical techniques coming notably from stochastic analysis, partial differential equation analysis, numerical probability, optimization theory, and stochastic control theory.
3 Research program
Concerning particle systems with singular interactions, in addition to the convergence to mean-field limits and the analysis of convergence rates of relevant discretizations, one of our main challenges concerns the simulation of complex, singular and large scale McKean-Vlasov particle systems and stochastic partial differential equations, with a strong emphasis on the detection of numerical instabilities and potentially large approximation errors.
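For reference, in generic notation of our own choosing, a McKean-Vlasov dynamics and its interacting particle approximation read

$$dX_t = b(X_t, \mu_t)\,dt + \sigma(X_t, \mu_t)\,dW_t, \qquad \mu_t = \operatorname{Law}(X_t),$$

$$dX_t^{i,N} = b\big(X_t^{i,N}, \mu_t^N\big)\,dt + \sigma\big(X_t^{i,N}, \mu_t^N\big)\,dW_t^i, \qquad \mu_t^N = \frac{1}{N}\sum_{j=1}^N \delta_{X_t^{j,N}}.$$

Singular interactions correspond to coefficients that blow up, for instance on the diagonal of the interaction kernel, which makes both the mean-field limit theory and the numerical simulation delicate.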
The determination of blow-up times is also a major issue for spectrum approximation and criticality problems in neutron transport theory, Keller-Segel models for chemotaxis, financial bubble models, etc.
Reliability assessment for power generation systems or subsystems is another target application of our research. For such complex systems, standard Monte Carlo methods are inefficient because of the difficulty of simulating rare events appropriately. We thus develop algorithms based on particle filter methods combined with suitable variance reduction methods.
Exhibiting optimal regulation procedures in a stochastic environment is an important challenge in many fields. As emphasized above, in the situations we are interested in, the agents do not compete but rather contribute to their common regulation. We give three examples here: the control of cancer therapies, regulation and mechanism design by optimal contracting, and distributed control for the planning problem.
Optimal contracting is widely used in economics in order to model interactions between agents subject to the so-called moral hazard problem. This is best illustrated by the works of Tirole (2014 Nobel Prize in economics) in industrial economics. The standard situation is described by the interaction of two parties. The principal (e.g. land owner) hires the agent (e.g. farmer) in order to delegate the management of an output of interest (e.g. production of the land). The agent receives a salary as a compensation for the costly effort devoted to the management of the output. The principal only observes the output value, and has no access to the agent's effort. Due to the cost of effort, the agent may divert his effort from the direction desired by the principal. The contract is proposed by the principal, and chosen according to a Stackelberg game: anticipating the agent's optimal response to any contract, she searches for the optimal contract by optimizing her utility criterion.
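Schematically, and in notation of our own choosing, the Stackelberg structure reads

$$V_A(\xi) = \sup_{\nu}\ \mathbb{E}\Big[U_A\Big(\xi(X^{\nu}) - \int_0^T c(\nu_t)\,dt\Big)\Big], \qquad V_P = \sup_{\xi}\ \mathbb{E}\Big[U_P\big(X_T^{\hat\nu(\xi)} - \xi(X^{\hat\nu(\xi)})\big)\Big],$$

where $\nu$ is the agent's hidden effort affecting the output $X$, $\xi$ is the contract (a functional of the observed output path only), $\hat\nu(\xi)$ is the agent's best response, and the principal optimizes subject to the participation constraint $V_A(\xi) \ge R$ for some reservation utility $R$.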
We are developing the continuous-time formulation of this problem, allowing for diffusion control and for possibly competing multiple agents and principals. This crucially relies on the recently developed second order backward stochastic differential equations, which act as HJB equations in the present non-Markovian framework.
The current environmental transition requires governments to give firms incentives to introduce green technologies as substitutes for the outdated polluting ones. This transition requires appropriate incentive schemes so as to reach the overall transition objective. This problem can be formulated in the framework of the above Principal-Agent problem as follows. Governments act as principals by setting the terms of an incentive regulation based on subsidies and tax reductions. Firms, acting as agents, optimize their production strategies given the regulation imposed by the governments. Such incentive schemes are also provided by the refinancing channel through private investors, as witnessed by the remarkable growth of green bond markets.
Another motivation comes from mechanism design. Modern decentralized facilities are present throughout our digitally connected economies. With the fragmentation of financial markets, exchanges are nowadays in competition. As the traditional international exchanges are now challenged by alternative trading venues, markets have to find innovative ways to attract liquidity. One solution is to use a maker-taker fees system, that is, a rule enabling them to charge liquidity provision and liquidity consumption asymmetrically. The most classical setting, used by many exchanges (such as Nasdaq, Euronext, BATS Chi-X, ...), consists in subsidizing the former while taxing the latter. In practice, this results in associating a fee rebate with executed limit orders and applying a transaction cost to market orders.
A platform aims at attracting two types of agents: market makers post bid and ask prices for some underlying asset or commodity, and brokers fulfill their trading needs if the posted prices are convenient. The platform takes its benefits from the fees charged on each transaction. As transactions only occur when the market makers take on riskier behavior by posting attractive bid and ask prices, the platform (acting as the principal) sets the terms of an incentive compensation to the market makers (acting as agents) for each realized transaction. Consequently, this optimal contracting problem serves as an optimization tool for the mechanism design of the platform.
Inspired by optimal transport theory, we formulate the above regulation problem as the interaction between a principal and a “crowd” of symmetric agents. Given the large number of agents, we model the limiting case of a continuum of agents whose state is then described by their distribution. The mean field game formulates the interacting agents' optimal decisions according to a Nash equilibrium competition. The optimal planning problem, introduced by Pierre-Louis Lions, seeks an incentivizing scheme by which the regulator, acting as a principal, pushes the crowd to some target distribution. Such a problem may be formulated for instance as a model for the design of smart cities. Then, one may use the same techniques as for the Principal-Agent problem in order to convert this problem into a more standard optimal transport problem.
In a situation where a Principal faces many interacting agents, distributed control may serve as an important tool to preserve the aggregate production of the agents, while distributing differently the contributions amongst agents.
The above approach needs now to be extended in order to accommodate more realistic situations. Let us list the following important extensions:
- The case of noisy observation of the output leads to control problems under partial observation for both types of agents, which are significantly more involved as they lead to infinite dimensional control problems after the filtering stage;
- Another important extension is accounting for so-called adverse selection: the principal has no access to the optimization criterion of the agent and instead only has a prior on its distribution; in the economic literature, this is addressed in one-period static models by allowing the principal to offer a menu of incentivizing contracts chosen so that each agent picks the one designed for him (incentive compatibility constraint).
Our research program on networks with interacting agents concerns various types of networks: electronic networks, biological networks, social networks, etc. The numerous mathematical tools necessary to analyse them depend on the network type and the analysis objectives. They include propagation of chaos theory, queueing process theory, large deviation theory, ergodic theory, population dynamics, and partial differential equation analysis, in order to respectively determine mean-field limits, congestion rates or spike train distributions, failure probabilities, equilibrium measures, evolution dynamics, macroscopic regimes, etc.
For example, recently proposed neuron models consist in considering different populations of neurons and setting up stochastic time evolutions of the membrane potentials depending on the population. When the number of populations is fixed, interaction intensities between individuals in different populations have similar orders of magnitude, and the total number of neurons tends to infinity, mean-field limits have been identified and fluctuation theorems have been proven.
However, to the best of our knowledge, no theoretical analysis is available on interconnected networks of networks with different populations of interacting individuals which naturally arise in biology and in economics.
We aim to study the effects of interconnections between sub-networks resulting from individual and local connections. Of course, the problem needs to be posed in terms of the geometry of the large network and of the scales between connectivity intensities and network sizes.
A related research topic concerns stochastic, continuous state and time opinion models where each agent's opinion locally interacts with the other agents' opinions in the system. Due to some exogenous randomness, the interaction tends to create clusters of common opinion. By using linear stability analysis of the associated nonlinear Fokker-Planck equation that governs the empirical density of opinions in the limit of infinitely many agents, we can estimate the number of clusters, the time to cluster formation, the critical strength of randomness for cluster formation to occur, the cluster dynamics after formation, and the width and effective diffusivity of the clusters.
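A representative instance, stated here for concreteness in the form of a standard noisy bounded-confidence model, is

$$dX_t^i = -\frac{1}{N}\sum_{j=1}^N \phi\big(|X_t^i - X_t^j|\big)\big(X_t^i - X_t^j\big)\,dt + \sigma\,dW_t^i, \qquad i = 1, \dots, N,$$

with a compactly supported interaction kernel $\phi$. In the limit of infinitely many agents, the density $\rho$ of opinions solves the nonlinear Fokker-Planck equation

$$\partial_t \rho(t,x) = \partial_x\Big(\rho(t,x)\int \phi(|x-y|)(x-y)\,\rho(t,y)\,dy\Big) + \frac{\sigma^2}{2}\,\partial_{xx}\rho(t,x),$$

and the linear stability analysis of its uniform steady state yields the critical noise strength below which clusters form.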
Another type of network systems we are interested in derives from financial systemic risk modeling. We consider evolving systems with a large number of inter-connected components, each of which can be in a normal state or in a failed state. These components also have mean field interactions and a cooperative behavior. We will also include diversity as well as other more complex interactions such as hierarchical ones. In such an inter-connected system, individual components can be operating closer to their margin of failure, as they can benefit from the stability of the rest of the system. This, however, reduces the overall margin of uncertainty, that is, increases the systemic risk: our research thus addresses QMU (Quantification of Margins of Uncertainty) problems.
We aim to study the probability of overall failure of the system, that is, its systemic risk. We therefore have to model the intrinsic stability of each component, the strength of external random perturbations to the system, and the degree of inter-connectedness or cooperation between the components.
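As a minimal illustration, the following Julia sketch simulates one assumed instance of such a model (a bistable intrinsic potential with wells at -1, "normal", and +1, "failed", and a linear mean-field attraction of strength theta; these modeling choices are ours, for illustration only) and estimates the systemic failure probability by plain Monte Carlo:

```julia
using Random

# Minimal sketch under assumed modeling choices: each of the N components
# has a risk state x_i with intrinsic bistable drift -V'(x), where
# V(x) = x^4/4 - x^2/2 has wells at -1 (normal) and +1 (failed), plus a
# mean-field attraction theta*(mean - x_i) and noise of strength sigma.
# "Systemic failure" is declared when the empirical mean leaves the normal well.
function systemic_failure_prob(; N=100, theta=2.0, sigma=0.6, T=50.0,
                               dt=1e-2, trials=200)
    rng = MersenneTwister(1)
    nsteps = round(Int, T / dt)
    failures = 0
    for _ in 1:trials
        x = fill(-1.0, N)                      # all components start normal
        for _ in 1:nsteps
            m = sum(x) / N                     # mean-field coupling term
            for i in 1:N
                drift = -(x[i]^3 - x[i]) + theta * (m - x[i])
                x[i] += drift * dt + sigma * sqrt(dt) * randn(rng)
            end
            if sum(x) / N > 0.0                # mean crossed to the failed well
                failures += 1
                break
            end
        end
    end
    return failures / trials                   # Monte Carlo estimate of systemic risk
end
```

In such models, a stronger coupling theta tends to decrease individual deviations while making collective transitions of the empirical mean more likely, which is the trade-off described above.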
Our target applications are the following ones:
- Engineering systems with a large number of interacting parts: Components can fail but the system fails only when a large number of components fail simultaneously.
- Power distribution systems: Individual components of the system are calibrated to withstand fluctuations in demand by sharing loads, but sharing also increases the probability of an overall failure.
- Banking systems: Banks cooperate and by spreading the risk of credit shocks between them can operate with less restrictive individual risk policies. However, this increases the risk that they may all fail, that is, the systemic risk.
One of our objectives is to explain why, in some circumstances, one simultaneously observes a decrease in individual risks and an increase in systemic risk.
4 Application domains
Our short and mid-term potential industrial impact notably concerns energy market regulation, financial market regulation, power distribution companies, and nuclear plant maintenance. It also concerns all the industrial sectors where massive stochastic simulations at nano scales are becoming unavoidable and certified results are necessary. We also plan to have an impact in epidemiology, cell biology, neuroscience, macro-economics, and applied mathematics at the crossroads of stochastic integration theory, optimization and control, PDE analysis, and stochastic numerical analysis.
5 Social and environmental responsibility
5.1 Footprint of research activities
Classic footprint for researchers: massive computer runs and travel for international conferences and collaborations.
5.2 Impact of research results
The research is useful for the risk management of globally interlocked economic institutions and actors, of energy production in complex power plants, and of epidemics and pandemics.
6 Highlights of the year
6.1 Awards
Josselin Garnier was elected to the Académie des Sciences in December 2023.
6.2 Scientific results
Josselin Garnier has obtained important results with Liliana Borcea on inverse problem methods based on reduced order models. These results yield new formulations of waveform inversion methods (for seismic imaging, for instance). The solutions can be obtained by optimisation methods in a much more tractable way than with traditional methods, since the cost function to be optimised enjoys nice convexity properties.
7 New software, platforms, open data
7.1 New platforms
7.1.1 The ICI epidemic propagation simulation platform
Participants: Maxime Colomb, Quentin Cormier, Josselin Garnier, Nicolas Gilet, Carl Graham, Denis Talay.
In 2020, D. Talay launched the ICI project, a collaboration between INRIA and IGN, of which he is the coordinator. This project aims to provide a platform to simulate an individual-based model of epidemic propagation on a finely represented large geographic environment. Statistical studies of the simulation results should allow a better understanding of epidemic propagation and an in silico comparison of the performances of various public health strategies to control it.
Nicolas Gilet (INRIA research engineer) and Maxime Colomb (INRIA-IGN research engineer) have been the main developers of the code of a prototype, but Nicolas left us at the beginning of the year for a permanent position at CEA. Quentin Cormier has intervened directly on the code to help optimize it for a further extension to a much larger scale. The permanent members of ASCII jointly work on the modeling and algorithmic issues. The following Inria and IGN researchers have contributed to this project:
- Aline Carneiro Viana (Inria Project-team TriBE)
- Laura Grigori (Inria Project-team Alpines)
- Julien Perret (IGN)
- Razvan Stanica (Inria Project-team Agora)
- Milica Tomasevic (Inria Project-team Merge)
The simulation was first applied to a sector of the fifth arrondissement of Paris, Jussieu–Saint-Victor. It is now being extended to the whole of Paris, and the next step will be to consider the Île-de-France region. Tests on Lyon are also in the works.
The prototype is based on a coupling of diverse models.
- A precise model of the geographical area where the population lives and moves is built, using fine scale databases.
- Synthetic populations are generated in accordance with socio-economic databases. These populations are representative of the actual populations.
- The spatial movements of the individuals are modeled in accordance with databases furnished by transport and mobile communication companies.
- Contamination events due to individuals interacting during this evolution at specific loci are then modeled.
The geographic model is built from multiple sources such as IGN, INSEE, OpenStreetMap, and local authority open data portals. A three-layered synthetic population is generated in order to represent housing and to populate it with households composed of individuals. Multiple characteristics are added in order to represent the living conditions and intra-household interactions of the population. Shops and activities are generated by matching multi-sourced data, which makes it possible to enrich the information about each amenity, such as opening hours and surface area.
We simulate the socio-professional structures and daily trips of the population by taking into account probability laws related to the urban space (probability of going out, going to work, shopping, etc.) and to social characteristics (age, job, etc.). Currently, the modeling is based on intuitive and simple laws of trips according to groups of individuals (pupils, students, working people, retirees). The calibration of these probability laws is being improved by using data provided by precise surveys and mobile operators.
In addition, person-to-person contamination is modeled between individuals located in the same place at the same time, using transmission probability laws specific to each individual's characteristics, parameterized by the distance between a healthy and a contagious individual as well as by the contact duration. Since the model is stochastic, obtaining accurate and robust statistics on the evolution of the epidemic requires simulating first a large number of independent socio-professional structures within a given urban area and then, for each population, a large number of realizations of daily trips and contaminations.
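As an illustration only, a transmission law of the kind described could take the following form in Julia; the functional form, the parameter values and the function name are hypothetical, not the calibrated ICI laws:

```julia
# Hypothetical functional form for illustration only; the form
# p(d, tau) = 1 - exp(-beta * tau * exp(-d / d0)) and the parameters are
# ours, not the calibrated ICI laws. The infection probability grows with
# the contact duration tau (minutes) and decays with the distance d
# (meters); beta aggregates the individual characteristics.
transmission_prob(d, tau; beta=0.01, d0=1.5) = 1 - exp(-beta * tau * exp(-d / d0))

p = transmission_prob(1.0, 30.0)   # a 30 minute contact at 1 meter
contaminated = rand() < p          # Bernoulli draw deciding the contamination event
```

Each realization of the daily trips draws a very large number of such Bernoulli events, which is one reason why many independent runs are needed.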
Therefore, to carry out a very large number of simulations covering all parameters of the model, high performance computing is required. The code is written in the Julia language and is currently parallelized using MPI. The model has been run on the internal cluster of INRIA Saclay called Margaret (200 CPU cores corresponding to 10 CPU nodes), which allows the code to be checked for a few different epidemiological parameters. We have also obtained the support of AMD to run our model on a cluster, equipped with AMD EPYC™ processors and AMD Instinct™ accelerators, within the national GRID5000/SILECS infrastructure. The ICI project has obtained 4 million CPU hours from DARI/GENCI, which can be used on the CEA cluster called Irene-Rome (up to 300 000 CPU cores) in order to run simulations for a large panel of epidemiological parameters. These hours can be used until May 2024.
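Since the independent realizations are embarrassingly parallel, the parallelization pattern can be sketched as follows with the MPI.jl package (a sketch under our own simplifications, not the actual ICI code; run_one_realization is a placeholder):

```julia
using MPI

# Sketch: each MPI rank runs its share of independent epidemic
# realizations, and the results are aggregated on rank 0.
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

total_runs = 10_000
my_runs = total_runs ÷ nprocs + (rank < total_runs % nprocs ? 1 : 0)

# Placeholder for one full epidemic realization returning, say, a final
# infection indicator; the seed keeps realizations distinct across ranks.
run_one_realization(seed) = (seed % 7 == 0) ? 1 : 0

local_count = sum((run_one_realization(rank * total_runs + k) for k in 1:my_runs); init=0)
global_count = MPI.Reduce(local_count, +, comm; root=0)

rank == 0 && println("aggregated indicator: ", global_count / total_runs)
MPI.Finalize()
```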
Multiple Markov chains are constructed and calibrated for various geographical and socio-demographic profiles with the precise values of a global survey. The micro-spatialization of travel objectives must be realized using mobile phone data and the list of available places, weighted by their capacity to receive the public. The synthetic population generation has to be improved in order to assign an occupation to each individual and to get closer to existing statistics. These improvements are made jointly with the writing of scientific papers.
Maxime Colomb and Nicolas Gilet have developed a website describing the ICI project and model. They have developed a user interface by hosting the back-end of the application on an INRIA web server and building an automatic pipeline between the interface and the server in order to display all the results of the simulations to the user. From this, it is possible to study the effect of health policies on the epidemic propagation by displaying the main epidemic indicators computed by the model.
Different parts of this project have been presented at various scientific events (Journées de la recherche de l'IGN 2023, SocSimFest 2023, GT Échelle).
Our next step is to calibrate the model with epidemiological data and to compare the predictive capacities of ICI with those of simpler models (SIR/SEIR).
We are initiating multiple collaborations with thematic partners. We are part of the Mobidec PEPR, which aims to create toolboxes for various transportation simulations. The creation of spatialized schedules for well-described individuals should benefit from the knowledge of the various research programs involved in this PEPR. This part of the ICI project should then become available for multiple usages through the Mobidec toolbox. We are collaborating with the CRESS laboratory of INSERM, which works on interventional epidemiology and is very interested in the ICI project. Multiple internships and PhDs should result from these collaborations.
More generally, the tools developed by ICI should benefit the project “Jumeau numérique du territoire”, a collaboration between INRIA and IGN.
Web sites
http://ici.gitlabpages.inria.fr/website
7.1.2 Our contribution to the PyCATSHOO toolbox
Participants: Josselin Garnier.
Our second topical activity concerns the PyCATSHOO toolbox developed by EDF, which allows the modeling of dynamical hybrid systems such as nuclear power plants or dams. Hybrid systems mix two kinds of behaviour: first, the discrete and stochastic behaviour, which is in general due to failures and repairs of the system's constituents; second, the continuous and deterministic physical phenomena which evolve inside the system.
PyCATSHOO is based on the theoretical framework of Piecewise Deterministic Markov Processes (PDMPs). It implements this framework by means of distributed hybrid stochastic automata and object-oriented modeling. It is written in C++, and both Python and C++ APIs are available. These APIs can be used either to model specific systems or for generic modelling, i.e. for the creation of libraries of component models. Special methods can be developed within PyCATSHOO.
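PyCATSHOO's actual API is not reproduced here; the following generic Julia sketch only illustrates the PDMP structure that such tools implement, namely a deterministic flow between random jumps whose intensity depends on the current hybrid state (the jump time is obtained by integrating the intensity against an exponential clock):

```julia
# Generic PDMP skeleton (an illustration of the mathematical structure,
# not PyCATSHOO's API): a hybrid state (mode, x) follows the deterministic
# flow dx/dt = f(mode, x) between jumps; jumps occur with state-dependent
# intensity lam(mode, x) and are resolved by a kernel jump_kernel.
function simulate_pdmp(f, lam, jump_kernel, mode, x, T; dt=1e-3)
    t = 0.0
    clock = -log(rand())                     # Exp(1) alarm: jump occurs when the
    while t < T                              # integrated intensity reaches it
        x += f(mode, x) * dt                 # Euler step of the flow
        clock -= lam(mode, x) * dt           # consume the exponential clock
        t += dt
        if clock <= 0.0
            mode, x = jump_kernel(mode, x)   # discrete transition (e.g. failure)
            clock = -log(rand())             # re-arm the clock
        end
    end
    return mode, x
end

# Toy usage: a component that heats up while "on" and may fail with a
# temperature-dependent intensity (all numbers are arbitrary).
flow(mode, x) = mode == :on ? 1.0 - 0.1 * x : -0.2 * x
intensity(mode, x) = mode == :on ? 0.01 * max(x, 0.0) : 0.0
failure(mode, x) = (:failed, x)
simulate_pdmp(flow, intensity, failure, :on, 20.0, 100.0)
```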
J. Garnier is contributing, and will continue to contribute, to this toolbox within joint CIFRE programs with EDF. The PhD theses aim to add new functionalities to the platform, for instance an adaptive importance sampling method based on a cross-entropy procedure (see the new results below).
8 New results
8.1 Mean field games, mean field optimal stopping problems
Participants: Leila Bassou, Quentin Cormier, Nizar Touzi.
From finite population optimal stopping to mean field optimal stopping
In collaboration with Mehdi Talbi and Jianfeng Zhang, Nizar Touzi analyzed the convergence of the finite population optimal stopping problem towards the corresponding mean field limit. Building on the viscosity solution characterization of the mean field optimal stopping problem of their previous papers [Talbi, Touzi, Zhang 2021, 2022], they proved the convergence of the value functions by adapting the Barles-Souganidis [1991] monotone scheme method to this context. They next characterized the optimal stopping policies of the mean field problem by the accumulation points of the finite population optimal stopping strategies. In particular, if the limiting problem has a unique optimal stopping policy, then the finite population optimal stopping strategies do converge towards this solution. As a by-product of their analysis, they provided an extension of the standard propagation of chaos to the context of stopped McKean-Vlasov diffusions.
A Principal-Agent Framework for Optimal Incentives in Renewable Investments
René Aïd, Annika Kemper and Nizar Touzi investigated the optimal regulation of energy production reflecting the long-term goals of the Paris Climate Agreement. They analyzed the optimal regulatory incentives to foster the development of non-emissive electricity generation when the demand for power is served either by a monopoly or by two competing agents. The regulator wishes to encourage green investments to limit carbon emissions, while simultaneously reducing the intermittency of the total energy production. They found that the regulation of a competitive market is more efficient than that of the monopoly, as measured by the certainty equivalent of the Principal's value function. This higher efficiency is achieved thanks to a higher degree of freedom of the incentive mechanisms, which involve cross-subsidies between firms. A numerical study quantifies the impact of the designed second-best contract in both market structures compared to the business-as-usual scenario. In addition, they extended the monopolistic and competitive setups to a more general class of tractable Principal-Multi-Agent incentive problems where both the drift and the volatility of a multi-dimensional diffusion process can be controlled by the Agents. They followed the resolution methodology of Cvitanić et al. (2018) in an extended linear quadratic setting with exponential utilities and a multi-dimensional state process of Ornstein-Uhlenbeck type. They provided closed-form expressions of the second-best contracts. In particular, they showed that these are in rebate form, involving time-dependent prices of each state variable.
Mean field game of mutual holding with defaultable agents, and systemic risk
Mao Fabrice Djete, Gaoyue Guo and Nizar Touzi introduced the possibility of default in the mean field game of mutual holding. This is modeled by introducing absorption at the origin of the equity process. They provided an explicit solution of this mean field game. Moreover, they provided a particle system approximation and derived an autonomous equation for the time evolution of the default probability, or equivalently the law of the hitting time of the origin by the equity process. The systemic risk is thus described by the evolution of the default probability.
Impact of carbon market on production emissions
Arash Fahim and Nizar Touzi addressed the effect of the carbon emission allowance market on the production policy of a large polluting firm. They investigated this effect in two cases: when the large polluter cannot affect the risk premium of the allowance market, and when it can change the risk premium through its production. In this simple model, they ignored any possible investment of the firm in pollution reducing technologies. They formulated the problem of optimal production as a stochastic optimization problem. Then, they showed that, as expected, the market reduces the optimal production policy in the first case if the firm is not given a generous initial cheap allowance package. However, when the large producer's activities can change the market risk premium, the cut in production, and consequently in pollution, cannot be guaranteed. In fact, there are cases in this model where the optimal production is always larger than expected, and an increase in production, and thus in pollution, can increase the profit of the firm. They concluded that some of the parameters of the market which contribute to this effect can be wisely controlled by the regulators in order to curb this manipulative behavior of the firm.
Viscosity Solutions for HJB Equations on the Process Space: Application to Mean Field Control with Common Noise
Jianjun Zhou, Nizar Touzi and Jianfeng Zhang investigated a path-dependent optimal control problem on the process space with both drift and volatility controls, with possibly degenerate volatility. The dynamic value function is characterized by a fully nonlinear second order path-dependent HJB equation on the process space, which is by nature infinite-dimensional. In particular, their model covers mean field control problems with common noise as a special case. They introduced a new notion of viscosity solutions and established both existence and a comparison principle, under merely Lipschitz continuity assumptions. The main feature of their notion is that, besides the standard smooth part, the test function contains an extra singular component which allows one to handle the second order derivatives of the smooth test functions without invoking Ishii's lemma. They used doubling of variables arguments, combined with the Ekeland-Borwein-Preiss variational principle, in order to overcome the noncompactness of the state space. A smooth gauge-type function on the path space is crucial for their estimates.
Entropic optimal planning for path-dependent mean field games
In 5, in the context of mean field games with possible control of the diffusion coefficient, Zhenjie Ren, Xiaolu Tan, Nizar Touzi and Junjian Yang considered a path-dependent version of the planning problem introduced by P.-L. Lions: given a pair of marginal distributions (μ₀, μ₁), find a specification of the game problem starting from the initial distribution μ₀ and inducing the target distribution μ₁ at the mean field game equilibrium. Their main result reduces the path-dependent planning problem to an embedding problem, that is, constructing a McKean-Vlasov dynamics with the given marginals (μ₀, μ₁). Some sufficient conditions on (μ₀, μ₁) are provided to guarantee the existence of solutions. They also characterized, up to integrability, the minimum entropy solution of the planning problem. In particular, as uniqueness no longer holds in their path-dependent setting, one can naturally introduce an optimal planning problem, which reduces to an optimal transport problem along with controlled McKean-Vlasov dynamics.
Synchronization in a Kuramoto Mean Field Game
In collaboration with René Carmona and Mete Soner, Quentin Cormier has studied the classical Kuramoto model in the setting of an infinite horizon mean field game 11. The system is shown to exhibit both synchronization and phase transition. Incoherence below a critical value of the interaction parameter is demonstrated by the stability of the uniform distribution. Above this value, the game bifurcates and develops self-organizing time homogeneous Nash equilibria. As interactions become stronger, these stationary solutions become fully synchronized. Results are proved by an amalgam of techniques from nonlinear partial differential equations, viscosity solutions, stochastic optimal control and stochastic processes.
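For reference, the classical Kuramoto dynamics with identical oscillators and interaction strength $\kappa$ is

$$d\theta_t^i = \frac{\kappa}{N}\sum_{j=1}^N \sin\big(\theta_t^j - \theta_t^i\big)\,dt + \sigma\,dW_t^i.$$

In the mean field game version, schematically and in our own notation, each player instead controls its own drift and minimizes a discounted cost penalizing both the control effort and the de-synchronization with the population distribution $\mu$, of the form

$$\inf_{\alpha}\ \mathbb{E}\int_0^\infty e^{-\beta t}\Big(\frac{\alpha_t^2}{2} + \kappa \int \sin^2\Big(\frac{\theta_t - \theta'}{2}\Big)\,\mu_t(d\theta')\Big)\,dt, \qquad d\theta_t = \alpha_t\,dt + \sigma\,dW_t,$$

a Nash equilibrium requiring consistency of $\mu$ with the law of the optimally controlled state.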
8.2 Stochastic cooperative numerical methods for complex industrial systems
Participants: Guillaume Chennetier, Josselin Garnier, Clément Cauchy, Baptiste Kerleguer, Paul Lartaud, Angèle Niclas.
Waveform inversion via reduced order modeling
Liliana Borcea, Josselin Garnier, Alexander Mamonov and Jörn Zimmerling introduced a novel approach to waveform inversion based on a data-driven reduced order model (ROM) of the wave operator 9. The presentation is for the acoustic wave equation, but the approach can be extended to elastic or electromagnetic waves. The data are time resolved measurements of the pressure wave gathered by an acquisition system that probes the unknown medium with pulses and measures the generated waves. They proposed to solve the inverse problem of velocity estimation by minimizing the square misfit between the ROM computed from the recorded data and the ROM computed from the modeled data, at the current guess of the velocity. They gave a step by step computation of the ROM, which depends nonlinearly on the data and yet can be obtained from them in a noniterative fashion, using efficient methods from linear algebra. They also explained how to make the ROM robust to data inaccuracy. The ROM computation requires the full array response matrix gathered with colocated sources and receivers. However, they found that the computation can deal with an approximation of this matrix, obtained from towed-streamer data using interpolation and reciprocity on the fly. Although the full-waveform inversion approach of nonlinear least-squares data fitting is challenging without low-frequency information, due to multiple minima of the data fit objective function, they found that the ROM misfit objective function behaves better, even for a poor initial guess. They also found, by explicit computation of the objective functions in a simple setting, that the ROM misfit objective function has convexity properties, whereas the least-squares data fit objective function displays multiple local minima.
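Schematically, writing $F$ for the forward map from the velocity $v$ to the array data, classical full-waveform inversion and the proposed approach minimize, respectively (notation ours),

$$J_{\mathrm{LS}}(v) = \big\|d_{\mathrm{obs}} - F(v)\big\|^2, \qquad J_{\mathrm{ROM}}(v) = \big\|\mathrm{ROM}(d_{\mathrm{obs}}) - \mathrm{ROM}(F(v))\big\|_F^2,$$

the observation motivating the approach being that the mapping from $v$ to the ROM behaves much more favorably in the optimization than the mapping from $v$ to the raw data.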
Waveform Inversion with a Data Driven Estimate of the Internal Wave
In 8, Liliana Borcea, Josselin Garnier, Alexander Mamonov and Jörn Zimmerling studied an inverse problem for the wave equation, concerned with estimating the wave speed from data gathered by an array of sources and receivers that emit probing signals and measure the resulting waves. The typical approach to solving this problem is a nonlinear least squares minimization of the data misfit over a search space. There are two main impediments to this approach, which manifest as multiple local minima of the objective function: the nonlinearity of the mapping from the wave speed to the data, which accounts for multiple scattering effects, and poor knowledge of the kinematics (the smooth part of the wave speed), which causes cycle skipping. They showed that the nonlinearity can be mitigated using a data driven estimate of the wave field at points inside the medium, also known as the “internal wave field”. This leads to improved performance of the inversion for a reasonable initial guess of the kinematics.
Paraxial Wave Propagation in Random Media with Long-Range Correlations
In 10, Liliana Borcea, Josselin Garnier and Knut Sølna studied the paraxial wave equation with a randomly perturbed index of refraction, which can model the propagation of a wave beam in a turbulent medium. The random perturbation is a stationary and isotropic process with a general form of the covariance that may be integrable or not. They focused attention mostly on the non-integrable case, which corresponds to a random perturbation with long-range correlations and is relevant for propagation through a cloudy turbulent atmosphere. The analysis is carried out in a high-frequency regime where the forward scattering approximation holds. It reveals that the randomization of the wave field is multiscale: the travel time of the wave front is randomized at short distances of propagation, and it can be described by a fractional Brownian motion. The wave field observed in the random travel time frame is affected by the random perturbations at long distances, and it is described by a Schrödinger-type equation driven by a standard Brownian field. They used these results to quantify how scattering leads to decorrelation of the spatial and spectral components of the wave field and to a deformation of the pulse emitted by the source. These are important questions for applications like imaging and free space communications with pulsed laser beams through a turbulent atmosphere. They also compared the results with those used in the optics literature, which are based on the Kolmogorov model of turbulence.
Well-posedness of wave scattering in perturbed elastic waveguides and plates: application to an inverse problem of shape defect detection
Eric Bonnetier, Angèle Niclas and L. Seppecher worked on theoretical tools to study wave propagation in elastic waveguides and performed multi-frequency scattering inversion to reconstruct small shape defects in elastic waveguides and plates 7. Given surface multi-frequency wavefield measurements, they used a Born approximation to reconstruct localized defects in the geometry of the plate. To justify this approximation, they introduced a rigorous framework to study the propagation of elastic wavefields generated by arbitrary sources. By studying the decay rate of the series of inhomogeneous Lamb modes, they proved the well-posedness of the PDE that models elastic wave propagation in two- and three-dimensional planar waveguides. They also characterized the critical frequencies for which the Lamb decomposition is not valid. Using these results, they generalized the shape reconstruction method already developed for acoustic waveguides to two-dimensional elastic waveguides and provided a stable reconstruction method based on a mode-by-mode spatial Fourier inversion given by the scattered field.
Reconstruction of smooth shape defects in waveguides using locally resonant frequencies.
In 17, Angèle Niclas and Laurent Seppecher worked on a new method to reconstruct slowly varying width defects in 2D waveguides using locally resonant frequencies. At these frequencies, locally resonant modes propagate in the waveguide in the form of Airy functions depending on a parameter called the locally resonant point. At this particular point, the local width of the waveguide is known and its location can be recovered from boundary measurements of the wavefield. Using the same process for different frequencies, they produce a good approximation of the width in the whole waveguide. Given multi-frequency measurements taken at the surface of the waveguide, they provide a stable explicit method to reconstruct the width of the waveguide. They finally validated their method on numerical data and discussed its applications and limits.
Adaptive importance sampling based on fault tree analysis for piecewise deterministic Markov process
Piecewise deterministic Markov processes (PDMPs) can be used to model complex dynamical industrial systems. The counterpart of this modeling capability is their simulation cost, which makes reliability assessment intractable with standard Monte Carlo methods. A significant variance reduction can be obtained with an adaptive importance sampling (AIS) method based on a cross-entropy (CE) procedure. The success of this method relies on the selection of a good family of approximations of the committor function of the PDMP. In collaboration with Hassane Chraibi and Anne Dutfoy, Guillaume Chennetier and Josselin Garnier proposed original families in 20. They are well adapted to high-dimensional industrial systems. Their forms are based on reliability concepts related to fault tree analysis: minimal path sets and minimal cut sets. The proposed method is discussed in detail and applied to academic systems and to a realistic system from the nuclear industry.
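The cross-entropy procedure itself can be illustrated on a toy rare event; the following Julia sketch (our own toy example, not the PDMP setting of 20) iteratively re-fits a parametric sampling law by a weighted maximum likelihood update computed from the current elite samples:

```julia
using Random, Statistics

# Toy cross-entropy adaptive importance sampling: estimate p = P(T > s)
# where T is a sum of n Exp(1) variables, by tilting the exponential rate
# to theta < 1. The likelihood ratio of Exp(1)^n with respect to
# Exp(theta)^n at a sample of sum t is w(t) = exp(-(1 - theta) * t) / theta^n.
function ce_rare_event(; n=10, s=60.0, N=100_000, rho=0.1, max_iter=20)
    rng = MersenneTwister(0)
    theta = 1.0                                   # start from the nominal law
    local t, w
    for _ in 1:max_iter
        t = [sum(-log(rand(rng)) / theta for _ in 1:n) for _ in 1:N]
        w = @. exp(-(1 - theta) * t) / theta^n    # importance weights
        level = quantile(t, 1 - rho)              # adaptive intermediate level
        level >= s && break                       # target level reached
        elite = t .>= level                       # elite samples drive the update
        theta = n * sum(w[elite]) / sum(w[elite] .* t[elite])  # weighted MLE of the rate
    end
    return mean(w .* (t .> s))                    # importance sampling estimate of p
end
```

The families proposed in 20 play, for PDMP trajectories, the role that this exponential family plays in the toy example, with the committor function approximations guiding the tilting.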
8.3 A stochastic numerical method for the parabolic-parabolic Keller-Segel system
Participants: Denis Talay.
The parabolic-parabolic Keller-Segel model is a set of equations that model the process of cell movement. It takes into account the evolution of different chemical components that can aid, hinder or change the direction of movement, a process called chemotaxis.
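In a common normalization, the parabolic-parabolic Keller-Segel system for the cell density $\rho$ and the chemoattractant concentration $c$ reads

$$\partial_t \rho = \Delta\rho - \chi\,\nabla\cdot(\rho\,\nabla c), \qquad \partial_t c = \Delta c - \lambda c + \rho,$$

where $\chi$ is the chemotactic sensitivity. In the probabilistic interpretation underlying the particle method, $c$ is expressed through a Duhamel formula, so that each particle's drift involves the whole past of the particle system, a singular and non-Markovian interaction.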
In collaboration with Radu Maftei (a former Inria post-doc), Milica Tomasevic (CMAP, École Polytechnique) and Denis Talay have continued to analyse the numerical performance of a stochastic particle method for the parabolic-parabolic Keller-Segel model. They also propose and test various algorithmic improvements to the method in order to substantially decrease its execution time without altering its global accuracy.
8.4 Stochastic cooperative communication networks; modelling and numerical issues for interacting stochastic particle systems
Participants: Quentin Cormier, Carl Graham, Denis Talay.
Communication networks and their algorithms.
An important current research topic of Carl Graham is the modeling, analysis, simulation, and performance evaluation of communication networks and their algorithms. Most of these algorithms must function in real time, in a distributed fashion, and using sparse information. In particular, load balancing algorithms aim to provide better utilization of the network resources, and hence better quality of service for clients, by striving to avoid the starving of some servers and the build-up of queues at others: clients are routed so as to be well spread out throughout the system. Carl Graham's recent focus of work on these networks is perfect simulation in equilibrium, and a paper on this is in the final stages of writing. Perfect simulation methods allow quality of service (QoS) indicators in the stationary regime to be estimated by Monte Carlo methods.
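For illustration, here is a sketch of one classical such policy, the power-of-two-choices rule (perfect simulation itself relies on a coupling-from-the-past construction not shown here):

```julia
using Random

# Sketch of the classical power-of-two-choices load balancing rule: each
# arriving client samples two servers uniformly at random and joins the
# shorter of the two queues. Even this small amount of information spreads
# the load dramatically better than a single uniform random choice.
function route_arrivals(num_servers, num_clients; rng=MersenneTwister(0))
    queues = zeros(Int, num_servers)
    for _ in 1:num_clients
        a = rand(rng, 1:num_servers)
        b = rand(rng, 1:num_servers)
        queues[queues[a] <= queues[b] ? a : b] += 1
    end
    return queues      # queue lengths after arrivals only (service omitted)
end
```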
Long time behavior of particle systems and their mean-field limit, application to spiking neural networks
Quentin Cormier has studied the long time behavior of a family of McKean-Vlasov stochastic differential equations. He has given conditions ensuring the local stability of an invariant probability measure. The criterion involves the location of the roots of an explicit holomorphic function associated with the dynamics. When all the roots lie in the left half-plane, local stability holds and convergence is proven in Wasserstein norms, with the optimal rate of convergence. This method is then applied to study a large class of models of interacting particles on the torus, see 22. In addition, he has adapted the method to study the long time behavior of a model of interacting Integrate-and-Fire neurons 21. This last result provides new insights concerning the bistability of such networks. In particular, a conjecture formulated by P. Robert and J. Touboul regarding the number of stationary solutions is proved.
A hypothesis test for complex stochastic simulations.
In a joint work with Héctor Olivero, D. Talay has proposed and analyzed an asymptotic hypothesis test for independent copies of a given random variable which is supposed to belong to an unknown domain of attraction of a stable law. The null hypothesis is that the random variable is in the domain of attraction of the Normal law, and the alternative hypothesis is that it is in the domain of attraction of a stable law with index smaller than 2.
Surprisingly, the proposed hypothesis test is based on a statistic inspired by methodologies for determining whether a semimartingale has jumps from the observation of a single path at discrete times. The authors justified their test by proving asymptotic properties of discrete-time functionals of Brownian bridges. They also discussed many numerical experiments which illustrate the satisfactory properties of the proposed test.
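A toy Julia illustration of the dichotomy targeted by the test (this is not the authors' statistic): under the null hypothesis, sums normalized by the square root of the sample size stabilize, while for heavy-tailed increments attracted to an alpha-stable law with alpha < 2 the same normalization blows up, the correct one being n^(1/alpha).

```julia
using Random

# Compare sqrt(n)-normalized sums for light- vs heavy-tailed increments.
# Pareto(1.5) lies in the domain of attraction of a 1.5-stable law.
rng = MersenneTwister(0)
pareto(alpha) = rand(rng)^(-1 / alpha)   # Pareto(alpha) sample, mean alpha/(alpha-1)

for n in (10^3, 10^5, 10^7)
    g = sum(randn(rng) for _ in 1:n) / sqrt(n)           # Gaussian increments: O(1)
    h = sum(pareto(1.5) - 3.0 for _ in 1:n) / sqrt(n)    # centered Pareto(1.5): diverges
    println("n = $n: gaussian -> ", round(g, digits=2), ", pareto -> ", round(h, digits=2))
end
```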
9 Bilateral contracts and grants with industry
Participants: Josselin Garnier.
9.1 CIROQUO Research & Industry Consortium
Several INRIA teams, including ASCII, are involved in the CIROQUO Research & Industry Consortium – Consortium Industrie Recherche pour l'Optimisation et la QUantification d'incertitude pour les données Onéreuses – (Industry Research Consortium for the Optimization and QUantification of Uncertainty for Onerous Data). Josselin Garnier is the INRIA Saclay representative on the steering committee.
The principle of the CIROQUO Research & Industry Consortium is to bring together academic and technological research partners to solve problems related to the exploitation of numerical simulators, such as code transposition (how to go from small to large scale when only small-scale simulations are possible), taking into account the uncertainties that affect the results of simulations, and validation and calibration (how to validate and calibrate a computer code from collected experimental data). This project is the result of a simple observation: industries using computer codes are often confronted with similar problems when exploiting these codes, even if their fields of application are very varied. Indeed, the increase in the availability of computing cores is counterbalanced by the growing complexity of the simulations, whose computational times are usually of the order of an hour or a day. In practice, this limits the number of simulations. This is why the development of mathematical methods to make the best use of simulators and the data they produce is a source of progress. The experience acquired over the last thirteen years in the DICE and ReDICE projects and the OQUAIDO Chair shows that the formalization of real industrial problems often gives rise to first-rate theoretical problems that can feed scientific and technical advances. The creation of the CIROQUO Research & Industry Consortium, led by the École Centrale de Lyon and co-led with IFPEN, follows these observations and responds to a desire for collaboration between technological research partners and academics in order to meet the challenges of exploiting large computing codes.
Scientific approach. The limitation of the number of calls to simulators implies that some information, even the most basic such as the mean value, the influence of a variable or the minimum value of a criterion, cannot be obtained directly by the usual methods. The international scientific community, structured around computer experiments and uncertainty quantification, took up this problem more than twenty years ago, but a large number of problems remain open. On the academic level, this is a dynamic field which has notably been the subject of the French CNRS Research Group MascotNum since 2006, renewed in 2020.
Composition. The CIROQUO Research & Industry Consortium aims to bring together a limited number of participants in order to make joint progress on test cases from the industrial world and on the upstream research that their treatment requires. The overall approach that the CIROQUO Research & Industry Consortium will focus on is metamodeling and related areas such as experiment planning, optimization, inversion and calibration. IRSN, STORENGY, CEA, IFPEN, BRGM are the Technological Research Partners. Mines Saint-Etienne, Centrale Lyon, CNRS, UCA, UPS, UT3 and Inria are the Academic Partners of the consortium.
Scientific objectives. On the practical level, the expected impact of the project is to make the progress of numerical simulation concrete through a better use of computational time, which allows the determination of better solutions and associated uncertainties. On the theoretical level, this project will stimulate research on the major scientific challenges of the discipline, such as code transposition/calibration/validation, modeling for complex environments, and stochastic codes. In each of these scientific axes, particular attention will be paid to high dimension: real problems sometimes involve several tens or hundreds of inputs, and methodological advances will be proposed to take this additional difficulty into account. The work expected from the consortium differs from the dominant research in machine learning by specificities linked to the exploration of expensive numerical simulations. However, it seems important to build bridges between the many recent developments in machine learning and the field of numerical simulation.
Philosophy. The CIROQUO Research & Industry Consortium is a scientific collaboration project aiming to mobilize means to achieve methodological advances. The project promotes cross-fertilization between partners coming from different backgrounds but confronted with problems relying on a common methodology. It has three objectives:
- the development of exchanges between technological research partners and academic partners on issues, practices and solutions through periodic scientific meetings and collaborative work, particularly through the co-supervision of students;
- the contribution of common scientific skills thanks to regular training in mathematics and computer science;
- the recognition of the Consortium at the highest level thanks to publications in international journals and the diffusion of free reference software.
9.2 Collaboration with EdF on industrial risks
This collaboration, for which Josselin Garnier is the ASCII leader, has been underway for several years. It concerns the assessment of the reliability of hydraulic and nuclear power plants built and operated by EDF (Électricité de France). Since the failure of a power plant is associated with major consequences (flood, dam failure, or core meltdown), for regulatory and safety reasons EDF must ensure that the probability of failure of a power plant is sufficiently low.
The failure of such systems occurs when physical variables (temperature, pressure, water level) exceed a certain critical threshold. Typically, these variables enter this critical region only when several components of the system have deteriorated. Therefore, in order to estimate the probability of system failure, it is necessary to model jointly the behavior of the components and of the physical variables. For this purpose, a model based on a piecewise deterministic Markov process (PDMP) is used. The platform called PYCATSHOO has been developed by EDF to simulate this type of process. This platform allows the probability of failure of the system to be estimated by Monte Carlo simulation as long as the probability of failure is not too low. When the probability becomes too low, the classical Monte Carlo estimation method, which requires a large number of simulations to estimate the probabilities of rare events, is much too slow to execute in our context. It is then necessary to use variance reduction methods, which require fewer simulations to estimate the probability of system failure. Among the variance reduction methods are importance sampling and splitting methods, but these methods present difficulties when used with PDMPs.
Work has been undertaken on the subject, leading to the defense of a first CIFRE thesis (Thomas Galtier, defended in 2019) and the preparation of a new CIFRE thesis (Guillaume Chennetier, from 2021). Theoretical articles have been written and submitted to journals. New theoretical work on sensitivity analysis in rare event regimes is the subject of the new thesis. The integration of the methods into the PYCATSHOO platform is progressing.
10 Partnerships and cooperations
Participants: Quentin Cormier, Josselin Garnier, Carl Graham, Denis Talay.
10.1 International initiatives
10.1.1 Inria associate team not involved in an Inria International Lab or an Inria international program
CIRCUS
-
Title:
Columbia Inria Research on Collaborative Ultracritical Systems
-
Duration:
2020 ->
-
Coordinator:
Philip Protter (pep2117@columbia.edu)
-
Partners:
- Columbia University (United States)
-
Inria contact:
Denis Talay
-
Summary:
CIRCUS will focus on collaborative stochastic agent and particle systems. In standard models, the agents and particles have `blind' interactions generated by an external interaction kernel or interaction mechanism which their empirical distribution does not affect. A contrario, agent and particle systems which will be developed, analysed, simulated by CIRCUS will have the key property that the agents and particles dynamically optimize their interactions.
Two main directions of research will be investigated: optimal regulation in stochastic environments, and optimized simulations of particle systems with singular interactions. In both cases, the interactions (between the agents or the numerical particles) are optimized, non Markovian, and the singularities reflect ultracritical phenomena such as aggregations or finite-time blow-ups.
10.2 International research visitors
10.2.1 Visits to international teams
Research stays abroad
Quentin Cormier
- Visited institution: Princeton University
- United States of America
- Dates: October 28 – November 04, 2023
- Context of the visit: René Carmona and Mete Soner, CIRCUS
From October 28th to November 4th, Quentin Cormier visited René Carmona and Mete Soner at Princeton in the CIRCUS framework to continue their collaboration on the study of synchronisation phenomena for mean field games. They have studied a mean field game version of Kuramoto's model in the case where all the agents (oscillators) are identical. They show the existence of a phase transition: when the noise intensity is weak enough, the agents synchronize in spite of the fact that they are selfish. See 11. They now aim to study the case where the oscillators are heterogeneous, so that they do not naturally oscillate at the same frequency: one supposes that only the probability distribution of the natural frequencies of the oscillators is known, and one would like to determine the Nash equilibria of the system and their long time stability in terms of this probability distribution. Quentin Cormier gave a seminar at the ORFE Department of Princeton during his visit.
Josselin Garnier will continue his long term collaboration with Liliana Borcea on inverse wave scattering models and numerical methods.
Denis Talay will continue his long term collaboration with Philip Protter on strict local martingales.
Nizar Touzi moved from École Polytechnique to NYU Tandon School of Engineering in the autumn. We will of course continue to collaborate with him.
Philip Protter gave a lecture at the conference in Denis Talay's honor last September at CIRM, Marseille.
11 Dissemination
Participants: Maxime Colomb, Quentin Cormier, Josselin Garnier, Carl Graham, Denis Talay.
11.1 Promoting scientific activities
11.1.1 Scientific events: organisation
Member of the organizing committees
Josselin Garnier was an organizer of the CIROQUO (Consortium Industrie Recherche pour l'Optimisation et la QUantification d'incertitude pour les données Onéreuses) workshop at INRIA Saclay, May 23-25, 2023, 60 participants.
11.1.2 Journal
Member of the editorial boards
Josselin Garnier is on the editorial boards of the following journals: Asymptotic Analysis, Discrete and Continuous Dynamical Systems – Series S, ESAIM P&S, SIAM Journal on Applied Mathematics and SIAM/ASA Journal on Uncertainty Quantification (JUQ).
D. Talay is Area Editor of Stochastic Processes and their Applications, Editor of Journal of the European Mathematical Society (JEMS), and Associate Editor of Monte Carlo Methods and Applications.
Reviewer - reviewing activities
Carl Graham is a regular reviewer for probability journals such as Stochastic Processes and their Applications, Annals of Applied Probability, and Queueing Systems, as well as for books and for grants.
Quentin Cormier is also a regular reviewer for probability journals.
11.1.3 Invited talks
In 2023, Josselin Garnier was an invited speaker at the following conferences: IAS Workshop on Inverse Problems, Imaging and PDEs, Hong Kong, December 11-15, 2023; New trends in homogenization, Roscoff, November 27-December 1, 2023; Math+X Symposium: Dynamos, planetary exploration and general relativity, inverse problems and machine learning, Iceland, May 29-June 1, 2023; Theory of wave scattering in complex and random media, Cambridge (Newton Institute), March 20-24, 2023.
Quentin Cormier was an invited speaker at the following conferences: Stochastic Processes and their Applications, Lisbon, July 24 – July 28, 2023; “Order and Randomness in Partial Differential Equations”, Institut Mittag-Leffler, Stockholm, November 26 – December 1, 2023; “A Random Walk in the Land of Stochastic Analysis and Numerical Probability”, Centre International de Rencontres Mathématiques in Luminy, September 4 – September 9, 2023. He was also an invited speaker at the seminar “Groupe de Travail Modélisation Stochastique” of LPSM, Paris, October 11, 2023.
Carl Graham was an invited speaker at the conference “A Random Walk in the Land of Stochastic Analysis and Numerical Probability”, Centre International de Rencontres Mathématiques in Luminy, 2023-09-04 – 2023-09-08.
D. Talay gave an invited talk at the conference “Mean-Field Interaction, Singular Kernels, and Approximation” in Paris in December and at the North-East and Midlands Stochastic Analysis Seminars in Durham University in July.
Different parts of the ICI project have been presented by Maxime Colomb at various scientific events (Journées de la recherche de l'IGN 2023, SocSimFest 2023, GT Échelle).
11.1.4 Scientific expertise
D. Talay served as a member of the External Supervisory Committee of the Ph.D. program in Mathematics of the Coimbra and Porto universities. The committee visited these two universities in April.
D. Talay reported on proposals submitted to the Research Grants Council (RGC) of Hong Kong.
D. Talay is a member of the Comité National Français de Mathématiciens.
D. Talay is a member of the scientific committee of the national agency AMIES, aimed at promoting collaborations between mathematicians and industry. He is also Vice-president of the Scientific and Administration Councils of the Natixis Foundation, which supports academic research in financial mathematics, notably on risk analyses and on machine learning applied to mathematical problems in economics, finance and insurance.
11.1.5 Research administration
Josselin Garnier has been the Chairman of the Applied Mathematics Department at École Polytechnique since September 2023. He is the head of a work package of the PEPR NumPEX.
11.2 Teaching - Supervision - Juries
11.2.1 Teaching
Josselin Garnier is a professor at the Ecole Polytechnique with a full teaching service. He also teaches the class "Inverse problems and imaging" at the Master Mathématiques, Vision, Apprentissage (M2 MVA).
Nizar Touzi is a professor at the Ecole Polytechnique with a full teaching service.
Quentin Cormier teaches tutorials for the course MAP361 on Randomness at Ecole Polytechnique.
Maxime Colomb is in charge of the module “Données massives spatialisées” for the MEDAS Master of the CNAM.
11.2.2 Juries
D. Talay was a member of Alexandre Richard's HDR committee at Centrale-Supelec in January, Yoan Tardy’s Ph.D. committee at Sorbonne Université in June, and Zoé Agathe-Nerine's Ph.D. committee at Paris Cité University in November.
11.2.3 Supervision
Josselin Garnier has supervised the following PhD students in 2023: Ali Abboud, Raphaël Carpintero Perez, Guillaume Chennetier, Amin Dhaou (with E. Le Pennec), Celia Escribe (with E. Gobet and P. Quirion), Quentin Goepfert (with L. Giovangigli and P. Millien), Paul Lartaud, Antoine Van Biesbroeck.
11.3 Popularization
Josselin Garnier and Valérie Maume-Deschamps led the working group “Développement économique, de la compétitivité et de l'innovation” of the “Assises des mathématiques”. They have written a synthesis included in the proceedings of the Assises.
12 Scientific production
12.1 Major publications
- 1 article: Waveform Inversion with a Data Driven Estimate of the Internal Wave. SIAM Journal on Imaging Sciences 16(1), March 2023, 280-312. HAL, DOI.
- 2 article: Waveform inversion via reduced order modeling. Geophysics 88(2), March 2023, R175-R191. HAL, DOI.
- 3 article: Paraxial Wave Propagation in Random Media with Long-Range Correlations. SIAM Journal on Applied Mathematics 83(1), February 2023, 25-51. HAL, DOI.
- 4 article: Synchronization in a Kuramoto Mean Field Game. Communications in Partial Differential Equations 48(9), September 2023, 1214-1244. HAL, DOI.
- 5 article: Entropic Optimal Planning for Path-Dependent Mean Field Games. SIAM Journal on Control and Optimization 61(3), June 2023, 1415-1437. HAL, DOI.
12.2 Publications of the year
International journals
International peer-reviewed conferences
Reports & preprints