2020
Activity report
Project-Team
ASCII
RNSR: 201923478S
Research center
In partnership with:
CNRS, Ecole Polytechnique
Team name:
Analysis of Stochastic Cooperative Intelligent Interactions
In collaboration with:
Centre de Mathématiques Appliquées (CMAP)
Domain
Applied Mathematics, Computation and Simulation
Theme
Stochastic approaches
Creation of the Project-Team: 2019 November 01

Keywords

Computer Science and Digital Science

  • A6.1.2. Stochastic Modeling
  • A6.1.3. Discrete Modeling (multi-agent, people centered)
  • A6.2.2. Numerical probability
  • A6.2.3. Probabilistic methods
  • A6.2.6. Optimization
  • A6.2.7. High performance computing
  • A6.3.4. Model reduction
  • A6.3.5. Uncertainty Quantification

1 Team members, visitors, external collaborators

Research Scientists

  • Jocelyne Bion-Nadal [CNRS, Researcher, until Jun 2020]
  • Carl Graham [CNRS, Researcher]

Faculty Members

  • Denis Talay [Team leader, Inria, HDR]
  • Josselin Garnier [École polytechnique, Professor]
  • Nizar Touzi [École polytechnique, Professor]

PhD Students

  • Naoufal Acharki [TOTAL-Pau]
  • Guillaume Chennetier [École polytechnique, from Dec 2020]
  • Fabrice Djete [Ecole Polytechnique]
  • Clément Gauchy [CEA]
  • Corentin Houpert [CEA]

Technical Staff

  • Maxime Colomb [Inria, Engineer, from Oct 2020]
  • Radu Maftei [Inria, Engineer, from Mar 2020 until Jun 2020, HDR]

Administrative Assistant

  • Marie Enee [Inria, from Oct 2020]

2 Overall objectives

The stochastic particles studied by Ascii behave like agents whose interactions are stochastic and intelligent: these agents seek to cooperate optimally towards a common objective by solving strategic optimisation problems. Ascii's overall objective is to develop the theoretical and numerical analysis of such systems, with target applications in economics, neuroscience, physics, biology, and stochastic numerics. To the best of our knowledge, this challenging objective is quite innovative.

In addition to the modelling challenges raised by our innovative approaches to intelligent multi-agent interactions, we develop new mathematical theories and numerical methods to deal with interactions which, in most of the interesting cases we have in mind, are irregular, non-Markovian, and evolving. In particular, original and non-standard stochastic control and stochastic optimization methodologies are being developed, combined with original, problem-specific calibration methodologies.

To reach our objectives, we combine various mathematical techniques coming from stochastic analysis, partial differential equation analysis, numerical probability, optimization theory, and stochastic control theory.

3 Research program

Concerning particle systems with singular interactions, in addition to the convergence to mean-field limits and the analysis of convergence rates of relevant discretizations, one of our main challenges concerns the simulation of complex, singular and large-scale McKean-Vlasov particle systems and stochastic partial differential equations, with a strong emphasis on the detection of numerical instabilities and potentially large approximation errors.

The determination of blow-up times is also a major issue for spectrum approximation and criticality problems in neutron transport theory, Keller-Segel models for chemotaxis, financial bubble models, etc.

Reliability assessment for power generation systems or subsystems is another target application of our research. For such complex systems, standard Monte Carlo methods are inefficient because of the difficulty of appropriately simulating rare events. We thus develop algorithms based on particle filter methods combined with suitable variance reduction methods.

Exhibiting optimal regulation procedures in a stochastic environment is an important challenge in many fields. As emphasized above, in the situations we are interested in, the agents do not compete but contribute to a common regulation. We give three examples here: the control of cancer therapies, regulation and mechanism design by optimal contracting, and distributed control for the planning problem.

Optimal contracting is widely used in economics to model agent interactions subject to the so-called moral hazard problem. This is best illustrated by the works of Tirole (2014 Nobel Prize in economics) in industrial economics. The standard situation is described by the interaction of two parties. The principal (e.g. a land owner) hires the agent (e.g. a farmer) in order to delegate the management of an output of interest (e.g. the production of the land). The agent receives a salary as compensation for the costly effort devoted to the management of the output. The principal only observes the output value and has no access to the agent's effort. Due to the cost of effort, the agent may divert his effort from the direction desired by the principal. The contract is proposed by the principal and chosen according to a Stackelberg game: anticipating the agent's optimal response to any contract, she searches for the optimal contract by optimizing her utility criterion.

We are developing the continuous-time formulation of this problem, allowing for diffusion control and for possibly competing multiple agents and principals. This crucially relies on the recently developed second-order backward stochastic differential equations, which act as HJB equations in the present non-Markovian framework.
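
In schematic form (a sketch of the standard continuous-time moral hazard setting with simplified, illustrative notation: \(\xi\) denotes the contract, \(c\) the agent's cost of effort, \(R\) the agent's reservation utility), the output process and the Stackelberg problem read

    \[ dX_t = a_t\,dt + \sigma\,dW_t, \qquad V_A(\xi) = \sup_a \mathbb{E}\Big[ U_A\Big( \xi(X_\cdot) - \int_0^T c(a_t)\,dt \Big) \Big], \]
    \[ V_P = \sup_{\xi \,:\, V_A(\xi) \ge R} \mathbb{E}\big[ U_P\big( X_T - \xi(X_\cdot) \big) \big], \]

where the principal optimizes over contracts \(\xi\), anticipating that the agent responds with an optimal effort \(a\) that she cannot observe.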

The current environmental transition requires governments to provide firms with incentives to introduce green technologies as substitutes for the outdated polluting ones. This transition requires appropriate incentive schemes so as to reach the overall transition objective. The problem can be formulated in the framework of the above Principal-Agent problem as follows. Governments act as principals by setting the terms of an incentive regulation based on subsidies and tax reductions. Firms, acting as agents, optimize their production strategies given the regulation imposed by the governments. Such incentive schemes are also provided by the refinancing channel through private investors, as best witnessed by the remarkable growth of green bonds markets.

Another motivation comes from mechanism design. Modern decentralized facilities are present throughout our digitally connected economies. With the fragmentation of financial markets, exchanges are nowadays in competition. As the traditional international exchanges are now challenged by alternative trading venues, markets have to find innovative ways to attract liquidity. One solution is to use a maker-taker fees system, that is, a rule enabling them to charge liquidity provision and liquidity consumption asymmetrically. The most classical setting, used by many exchanges (such as Nasdaq, Euronext, BATS Chi-X, ...), consists in subsidizing the former while taxing the latter. In practice, this results in associating a fee rebate with executed limit orders and applying a transaction cost to market orders.

A platform aims at attracting two types of agents: market makers post bid and ask prices for some underlying asset or commodity, and brokers fulfill their trading needs if the posted prices are convenient. The platform derives its profits from the fees collected on each transaction. As transactions only occur when the market makers take on riskier behavior by posting attractive bid and ask prices, the platform (acting as the principal) sets the terms of an incentive compensation paid to the market makers (acting as agents) for each realized transaction. Consequently, this optimal contracting problem serves as an optimization tool for the mechanism design of the platform.

Inspired by optimal transport theory, we formulate the above regulation problem as the interaction between a principal and a "crowd" of symmetric agents. Given the large number of agents, we model the limiting case of a continuum of agents, whose state is then described by their distribution. The mean field game formulates the interacting agents' optimal decisions according to a Nash equilibrium. The optimal planning problem, introduced by Pierre-Louis Lions, seeks an incentivizing scheme by which the regulator, acting as a principal, pushes the crowd to some target distribution. Such a problem may be formulated, for instance, as a model for the design of smart cities. One may then use the same techniques as for the Principal-Agent problem in order to convert this problem into a more standard optimal transport problem.
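
Schematically (a sketch in standard mean field game notation, not the team's specific formulation), the planning problem couples a Hamilton-Jacobi-Bellman equation for the value function \(u\) of the representative agent with a Fokker-Planck equation for the distribution \(m\) of the crowd, the usual terminal condition on \(u\) being replaced by the prescription of the target distribution:

    \[ -\partial_t u - \tfrac{\sigma^2}{2}\Delta u + H(x, \nabla u) = f(x, m), \qquad \partial_t m - \tfrac{\sigma^2}{2}\Delta m - \mathrm{div}\big( m\,\partial_p H(x, \nabla u) \big) = 0, \]
    \[ m(0, \cdot) = m_0, \qquad m(T, \cdot) = m_T. \]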

In a situation where a principal faces many interacting agents, distributed control may serve as an important tool to preserve the aggregate production of the agents while distributing the contributions differently amongst them.

The above approach needs now to be extended in order to accommodate more realistic situations. Let us list the following important extensions:

  • The case of noisy observation of the output leads to control problems under partial observation for both types of agents, which are significantly more involved as they lead to infinite dimensional control problems after the filtering stage;
  • Another important extension is to account for so-called adverse selection: the principal has no access to the optimization criterion of the agent and only has a prior on its distribution; in the economic literature, this is addressed in one-period static models by allowing the principal to offer a menu of incentivizing contracts chosen so that each agent picks the one designed for him (incentive compatibility constraint).

Our research program on networks with interacting agents concerns various types of networks: electronic networks, biological networks, social networks, etc. The numerous mathematical tools necessary to analyse them depend on the network type and the analysis objectives. They include propagation of chaos theory, queueing theory, large deviation theory, ergodic theory, population dynamics, and partial differential equation analysis, in order to respectively determine mean-field limits, congestion rates or spike train distributions, failure probabilities, equilibrium measures, evolution dynamics, macroscopic regimes, etc.

For example, recently proposed neuron models consist in considering different populations of neurons and setting up stochastic time evolutions of the membrane potentials depending on the population. When the number of populations is fixed, interaction intensities between individuals in different populations have similar orders of magnitude, and the total number of neurons tends to infinity, mean-field limits have been identified and fluctuation theorems have been proven.
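
One schematic instance of such models (the drift coefficients and interaction kernel below are illustrative placeholders): for populations \(k = 1, \dots, K\) with \(N_k\) neurons each, the membrane potentials evolve as

    \[ dV_t^{i,k} = b_k\big(V_t^{i,k}\big)\,dt + \sum_{l=1}^{K} \frac{J_{kl}}{N_l} \sum_{j=1}^{N_l} \phi\big(V_t^{i,k}, V_t^{j,l}\big)\,dt + \sigma_k\,dW_t^{i,k}, \]

and, as the \(N_l\) tend to infinity, the empirical measure of each population converges to the law of a McKean-Vlasov limit in which the pairwise sums are replaced by expectations against the limiting laws.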

However, to the best of our knowledge, no theoretical analysis is available on interconnected networks of networks with different populations of interacting individuals which naturally arise in biology and in economics.

We aim to study the effects of interconnections between sub-networks resulting from individual and local connections. Of course, the problem needs to be posed in terms of the geometry of the big network and of the scales between connectivity intensities and network sizes.

A related research topic concerns stochastic, continuous state and time opinion models where each agent's opinion locally interacts with the other agents' opinions in the system. Due to some exogenous randomness, the interaction tends to create clusters of common opinion. By using a linear stability analysis of the associated nonlinear Fokker-Planck equation that governs the empirical density of opinions in the limit of infinitely many agents, we can estimate the number of clusters, the time to cluster formation, the critical strength of randomness for cluster formation, the cluster dynamics after their formation, and the width and effective diffusivity of the clusters.
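
A schematic version of such a model (a standard form in the literature; the influence kernel \(\phi\) and the parameters are illustrative) is

    \[ dX_t^i = -\frac{1}{N} \sum_{j=1}^{N} \phi\big(X_t^i - X_t^j\big)\big(X_t^i - X_t^j\big)\,dt + \sigma\,dW_t^i, \]

with \(\phi\) a compactly supported influence kernel; in the limit \(N \to \infty\), the density \(\rho\) of opinions solves the nonlinear Fokker-Planck equation

    \[ \partial_t \rho(t,x) = \partial_x \Big( \rho(t,x) \int \phi(x-y)(x-y)\,\rho(t,y)\,dy \Big) + \frac{\sigma^2}{2}\,\partial_{xx} \rho(t,x), \]

whose linear stability analysis around the uniform state yields the quantities listed above.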

Another type of network systems we are interested in derives from financial systemic risk modeling. We consider evolving systems with a large number of inter-connected components, each of which can be in a normal state or in a failed state. These components also have mean field interactions and a cooperative behavior. We will also include diversity as well as other more complex interactions such as hierarchical ones. In such an inter-connected system, individual components can be operating closer to their margin of failure, as they can benefit from the stability of the rest of the system. This, however, reduces the overall margin of uncertainty, that is, increases the systemic risk: our research thus addresses QMU (Quantification of Margins of Uncertainty) problems.

We aim to study the probability of overall failure of the system, that is, its systemic risk. We therefore have to model the intrinsic stability of each component, the strength of external random perturbations to the system, and the degree of inter-connectedness or cooperation between the components.
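
A schematic version of such a model (a sketch; the dynamics actually studied may differ) describes the risk variable \(x^i\) of component \(i\) by

    \[ dx_t^i = -h\,V'\big(x_t^i\big)\,dt + \theta\big(\bar{x}_t - x_t^i\big)\,dt + \sigma\,dW_t^i, \qquad \bar{x}_t = \frac{1}{N} \sum_{j=1}^{N} x_t^j, \]

where \(V\) is a two-well potential whose wells represent the normal and failed states, \(h\) measures the intrinsic stability of each component, \(\theta\) the strength of cooperation, and \(\sigma\) the external random perturbations; the systemic risk is then the probability that the empirical mean \(\bar{x}\) migrates from the normal well to the failed one.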

Our target applications are the following:

  • Engineering systems with a large number of interacting parts: Components can fail but the system fails only when a large number of components fail simultaneously.
  • Power distribution systems: Individual components of the system are calibrated to withstand fluctuations in demand by sharing loads, but sharing also increases the probability of an overall failure.
  • Banking systems: Banks cooperate and by spreading the risk of credit shocks between them can operate with less restrictive individual risk policies. However, this increases the risk that they may all fail, that is, the systemic risk.

One of our objectives is to explain why, in some circumstances, one simultaneously observes a decrease of the individual risks and an increase of the systemic risk.

4 Application domains

Our short and mid-term potential industrial impact concerns e.g. energy market regulations, financial market regulations, power distribution companies, nuclear plant maintenance. It also concerns all the industrial sectors where massive stochastic simulations at nano scales are becoming unavoidable and certified results are necessary.

We also plan to have impact in cell biology, macro-economy, and applied mathematics at the crossroads of stochastic integration theory, optimization and control, PDE analysis, and stochastic numerical analysis.

5 New software and platforms

5.1 New platforms

Our most topical activity concerns the PyCATSHOO toolbox developed by EDF, which allows the modeling of dynamical hybrid systems such as nuclear power plants or dams. Hybrid systems mix two kinds of behaviour: first, the discrete and stochastic behaviour which is in general due to failures and repairs of the system's constituents; second, the continuous and deterministic physical phenomena which evolve inside the system.

PyCATSHOO is based on the theoretical framework of Piecewise Deterministic Markov Processes (PDMPs). It implements this framework thanks to distributed hybrid stochastic automata and object-oriented modeling. It is written in C++, and both Python and C++ APIs are available. These APIs can be used either to model specific systems or for generic modelling, i.e. for the creation of libraries of component models. Dedicated numerical methods can also be developed within PyCATSHOO.

J. Garnier is contributing, and will continue to contribute, to this toolbox within joint Cifre programs with EdF. The PhD theses aim to add new functionalities to the platform, for instance an importance sampling method based on the cross-entropy method (a toy illustration of this idea is sketched below).
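
As an illustration of the underlying idea, here is a self-contained Python sketch of cross-entropy importance sampling on a caricatural failure-time model (this does not use the PyCATSHOO API; the model, the parameters, and the exponential-family update below are purely illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    # Caricatural rare event (illustrative): the total duration S of n
    # exponential stages exceeds a high threshold gamma.
    n, gamma, u = 5, 40.0, 1.0          # u = nominal rate of each stage

    def sample(v, size):
        # Stage durations drawn under the importance-sampling rate v.
        return rng.exponential(scale=1.0 / v, size=(size, n))

    def likelihood_ratio(x, v):
        # dP_u / dP_v for i.i.d. exponential stages with rates u and v.
        s = x.sum(axis=1)
        return (u / v) ** n * np.exp(-(u - v) * s)

    # Cross-entropy iterations: adapt v so that {S > gamma} becomes frequent.
    v, rho, N = u, 0.1, 10_000
    for _ in range(10):
        x = sample(v, N)
        s = x.sum(axis=1)
        level = min(np.quantile(s, 1 - rho), gamma)
        elite = s >= level
        w = likelihood_ratio(x[elite], v)
        # Closed-form CE update of the exponential rate: n * sum(w) / sum(w * s).
        v = n * w.sum() / (w @ x[elite].sum(axis=1))
        if level >= gamma:
            break

    # Final importance-sampling estimate of P(S > gamma) under the nominal law.
    x = sample(v, N)
    s = x.sum(axis=1)
    p_hat = np.mean((s > gamma) * likelihood_ratio(x, v))
    print(f"tilted rate v = {v:.3f}, estimated probability = {p_hat:.3e}")

In the PDMP setting targeted by these theses, the tilting typically has to act on the jump rates and transition kernels of the process rather than on a fixed-dimensional parameter, which is where the real difficulties lie.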

6 New results

6.1 Optimal stopping of dynamic stochastic interacting systems

This project is the central topic of the thesis of Mehdi Talbi (second-year PhD, supervised by Nizar Touzi). Consider the problem of choosing the optimal time to stop using a polluting technology so as to switch to a new one (for instance, the passage to electric vehicles). Our main objective is to solve the problem of optimal stopping for each agent belonging to a population from the viewpoint of a central planner optimizing some criterion depending on the distribution of the population.

In the case of a finite population, this multiple optimal stopping problem has been considered by Kobylanski, Quenez & Rouy-Mironescu (Annals of Applied Probability, 2011). The value function is naturally expressed as a standard single stopping problem for one of the continuing particles.

  • This observation reduces the multiple stopping problem to a classical single optimal stopping one, with stopping value given by the maximum of the value functions of all sub-multiple stopping problems formulated on the possible subsets of continuing particles. Direct iteration of this argument leads to a characterization of the multiple stopping problem by means of a cascade system of standard single optimal stopping problems.
  • Optimal stopping times are obtained as the first hitting times of the corresponding value functions at their stopping values, with a characterization of the particle to be stopped at this time as one among those attaining the maximum of the value functions of the multiple optimal stopping problems of the subpopulations obtained by removing this particle.
  • In the case where the particle dynamics are defined by a stochastic differential equation, it is well known that the solution of the optimal stopping problem is characterized by an obstacle partial differential equation (PDE). The above argument then allows one to characterize the solution of the multiple stopping problem by means of a cascade system of obstacle PDEs (see the schematic system below).
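
Schematically, and with simplified notation: writing \(V_I\) for the value function of the problem in which the particles of the set \(I\) are still running (and \(V_\emptyset\) for the terminal criterion), the cascade reads

    \[ V_I(t, x) = \sup_{\tau \ge t} \mathbb{E}\Big[ \max_{i \in I} V_{I \setminus \{i\}}\big(\tau, X_\tau\big) \Big], \]

each \(V_I\) being characterized, for diffusive particle dynamics with generator \(\mathcal{L}_I\), by the obstacle PDE

    \[ \min\Big\{ -\partial_t V_I - \mathcal{L}_I V_I,\; V_I - \max_{i \in I} V_{I \setminus \{i\}} \Big\} = 0. \]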

Our objective is to study the mean field limit of these results. We first introduce an appropriate symmetry so as to guarantee the anonymity of the particles. The limiting multiple optimal stopping problem is then an optimal stopping problem for a stochastic differential equation (SDE) with mean field interaction à la McKean-Vlasov, with an optimization criterion depending on the law of the stopped state of the continuum of particles. We derive an appropriate version of the dynamic programming principle on the space of marginal laws of the state process of the particles. We prove an Itô formula in this context, which induces the corresponding dynamic programming equation as an infinitesimal version of the dynamic programming principle. This identifies the obstacle equation on the Wasserstein space of probability measures on the state of the process, which in turn allows one to characterize the optimal stopping rule for the population.

6.2 Modeling systemic risk by stochastic differential games

This project is the central topic of the thesis of Leila Bassou (first-year PhD, supervised by Nizar Touzi). It is jointly conducted with Fabrice Djete (post-doc at CMAP). The analysis of the systemic risk of the financial sector is of major importance, especially since the last financial crisis, which highlighted the complex effects induced by the multiple connections within the network of financial actors.

The existing literature on this topic is essentially empirical and suffers from a serious lack of theoretical foundations. Our contributions constitute a first step towards understanding the interdependence structures constructed by the economic actors, and help to analyze their nature and to examine their consequences during periods of stress.

6.3 Optimal interdependence in a model with finite number of actors

Each financial actor is characterized by its own dynamics. For instance, the proper dynamics of a bank are those of the difference between the total deposits and the total loans. For an insurance company, the proper dynamics are defined by the expected claims.

Each actor has the possibility to purchase a proportion of each of the other actors. This might be desirable from a risk management viewpoint as it allows the risk to be diversified. An immediate consequence is that the evolution of the balance sheet of each actor depends on the values of the others. Each individual actor optimizes its risk management by appropriately choosing its investment in the other actors.

Actors are only allowed to choose their investments in the others, and have no control over the others' investment strategies, including those concerning themselves.

Nizar Touzi and his collaborators aim at characterizing the Nash equilibria of this game, if they exist:

  • Each actor chooses the optimal investment strategy in the others, given the strategies of all of the competitors,
  • A Nash equilibrium is a situation of interdependence such that no actor optimally chooses to deviate from it, given that the others' strategies are fixed to this equilibrium value.

We provide a general characterization of such Nash equilibria in the context of a special stochastic differential game. The actors' preferences are defined by exponential utility functions, so as to address the problem by means of backward stochastic differential equations, a most useful tool for the underlying path-dependent stochastic control problems.
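
In schematic form (a sketch with simplified notation; the actual model is richer): writing \(P^i\) for the proper dynamics of actor \(i\) and \(\pi^{ij}\) for the proportion of actor \(j\) purchased by actor \(i\), the values solve the interdependent system

    \[ dX_t^i = dP_t^i + \sum_{j \ne i} \pi_t^{ij}\,dX_t^j, \qquad i = 1, \dots, N, \]

and each actor solves \( \sup_{\pi^{i\cdot}} \mathbb{E}\big[ -\exp( -\eta_i X_T^i ) \big] \) given the strategies of the others; the exponential criterion reduces each best response to a backward stochastic differential equation.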

Many questions remain to be explored during the next months.

  • In general, we prove that Nash equilibria are not unique (when they exist). How should one select among these equilibria so as to analyze the equilibrium dynamics of the system?
  • What happens if one of the actors goes bankrupt, i.e. if its value hits zero? This situation is totally excluded in our present study; we shall include it so as to better understand the possible propagation through the system leading to a systemic crisis.
  • How can such situations be simulated, and is it possible to foresee signals which allow one to predict such a crisis risk?
  • What happens in the limit of a continuum of actors? By introducing an appropriate symmetry in the system, one may hope to obtain a scaling limit offering a simple macroscopic description.

6.4 Mean field optimal interdependence

Rather than taking the scaling limit of the microscopic equilibria with a finite number of agents to deduce the macroscopic limit, we may instead derive the Nash equilibria directly from the macroscopic model.

By introducing an appropriate symmetry in the system, we obtain a scaling limit which represents the macroscopic description as a mean field stochastic differential equation à la McKean-Vlasov, of a new type: the interaction term appears as the expectation of some stochastic integral with respect to a copy of the equilibrium system.

When there is no common noise, we obtain a unique Nash equilibrium with a quasi-explicit characterization. At this equilibrium, the representative actor possesses a unit proportion of each of the competitors whose drift coefficient is above some constant characterized as the unique solution of an integral equation.

We shall continue our study of this model in the coming months, in the following directions.

  • Can we justify this result by a limiting argument from the corresponding finite-agent Nash equilibria?
  • What happens in the presence of common noise? This case is of course more realistic as it encodes possible correlations between the different actors. However, major difficulties appear, as this setting typically leads to the analysis of a class of stochastic HJB equations.

6.5 Distributed control on the electricity network

This project is conducted with René Aïd, Professor of economics at Dauphine, and Assil Fadle, a third year student at ENS Paris.

Many electric devices do not need to be permanently powered for their normal functioning. This is the case, for instance, for water heaters or pool engines. Such flexibility offers a precious opportunity for the electricity provider in charge of powering a large number of such devices, with independent usages, or at least usages which are independent conditionally on some common factor.

The electricity provider may restrict the powering while still guaranteeing a good quality of service. For instance, consumers would not complain about the quality of service of their water heater as long as the water temperature is between 40 and 60 degrees Celsius. Consequently, there is no need to power such a device as long as its temperature remains within this range.

This flexibility allows the operator to adapt the consumers' demand to the supply, which needs to be programmed in advance due to the industrial complexity of the production facilities. With this flexibility, the operator avoids resorting to the expensive spot electricity market in order to guarantee the supply-demand equilibrium at each point in time.
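
The following Python sketch illustrates this demand-adaptation mechanism on a caricatural fleet of water heaters (a toy model; the dynamics, the handling of the comfort band, and all parameters are illustrative assumptions, not the model studied in the project):

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy fleet of N water heaters (all parameters illustrative): temperatures
    # relax towards the ambient level and rise when the device is powered.
    N, T_low, T_high = 1000, 40.0, 60.0
    theta, heat, sigma = 0.02, 1.5, 0.3   # cooling rate, heating power, noise
    T_amb, dt, n_steps = 20.0, 1.0, 500
    target = 0.45 * N * heat              # supply level programmed in advance

    temp = rng.uniform(T_low, T_high, N)
    consumption = np.zeros(n_steps)

    for k in range(n_steps):
        must_heat = temp < T_low                      # quality-of-service constraint
        may_heat = (temp >= T_low) & (temp < T_high)  # flexible devices
        # Power as many flexible devices as the residual budget allows,
        # coldest first, so aggregate consumption tracks the supply target.
        budget = max(int(target / heat) - int(must_heat.sum()), 0)
        idx = np.flatnonzero(may_heat)
        idx = idx[np.argsort(temp[idx])][:budget]
        powered = must_heat.copy()
        powered[idx] = True
        temp += (-theta * (temp - T_amb) + heat * powered) * dt
        temp += sigma * np.sqrt(dt) * rng.standard_normal(N)
        consumption[k] = powered.sum() * heat

    print(f"mean consumption {consumption.mean():.0f} vs target {target:.0f}")
    print(f"fraction below {T_low} C at the end: {(temp < T_low).mean():.1%}")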

This problem has been formulated by Busic & Meyn using tools from automatic control. We reformulate the model within the paradigm of stochastic control of a mean field interacting system.

However, such flexibility cannot be used by the electricity provider without the consumer's consent. Under a fixed tariff for retail electricity, the consumer has no reason to give up the right to control his/her own device. This is a typical situation of moral hazard, which economic theory typically addresses with appropriate incentives.

We develop a model of optimal incentives in the context of the previous distributed control problem. From the mathematical viewpoint, such a model is a Stackelberg non-zero-sum stochastic differential game.

6.6 Communication networks

Carl Graham has worked on load balancing in communication networks. The idea is to model, analyze, simulate, and evaluate protocols which aim to better utilize the network resources by avoiding resource starvation. This is usually done under the constraints of distributed protocols with sparse information transfers, by ensuring that clients are "well" spread throughout the system. The main focus of his present work on these networks is perfect simulation in equilibrium, which will in particular allow one to estimate by Monte Carlo methods a number of quality of service (QoS) indicators directly in the stationary regime.
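
As a toy illustration of the load-balancing mechanism itself (not of the perfect simulation algorithm), the following Python sketch simulates the classical "power of two choices" policy, in which each arriving client joins the shortest among d randomly sampled queues; all parameters are illustrative:

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy "power of two choices": each arriving client samples d servers
    # uniformly at random and joins the shortest of the d queues.
    n_servers, d = 100, 2
    lam, mu = 0.9, 1.0            # arrival rate per server, service rate
    n_events = 200_000            # uniformized discrete-event scheme

    queues = np.zeros(n_servers, dtype=int)
    p_arrival = lam / (lam + mu)

    for _ in range(n_events):
        if rng.random() < p_arrival:
            # Arrival: join the least loaded of d randomly sampled servers.
            c = rng.choice(n_servers, size=d, replace=False)
            queues[c[np.argmin(queues[c])]] += 1
        else:
            # Potential service completion at a uniformly chosen server.
            s = rng.integers(n_servers)
            if queues[s] > 0:
                queues[s] -= 1

    # With d = 2 the tail of the queue-length distribution decays doubly
    # exponentially, versus geometrically for d = 1.
    for k in range(6):
        print(f"fraction of queues with length >= {k}: {(queues >= k).mean():.3f}")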

6.7 Sensitivity of diffusion models to their noise Hurst parameter

In collaboration with Alexandre Richard (Ecole Centrale - Supelec, Saclay, France), Denis Talay developed a sensitivity analysis with respect to the long-range memory parameter of the noise for probability distributions of functionals of solutions to stochastic differential equations.

This is an important stochastic modeling issue in many applications, since Markov models may sometimes be seen as questionable idealizations of reality. Empirical studies actually tend to show memory effects in biological, financial, and physical data. Such empirical results justify considering non-Markov models driven by noises with long-range memory, such as fractional Brownian motions, rather than by Lévy noises. But choosing a noise with long-range memory does not close the modeling problem, since the parametric estimation of the model may be difficult and crude.

Therefore, one often needs to balance tractable Markov models against more realistic but complex non-Markov models. A natural question then arises: Is it really worth introducing complex models?

A. Richard and D. Talay have thus considered solutions {X^H_t} to stochastic differential equations driven by fractional Brownian motions. They have developed sensitivity analyses when the Hurst parameter H of the noise tends to the critical Brownian parameter H = 1/2 from above or from below. Two classes of sensitivities have been studied: on the one hand, the probability distributions of smooth functionals in the Malliavin calculus sense; on the other hand, the probability distributions of irregular functionals, namely, Laplace transforms of first passage times of X^H at given thresholds.
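
A minimal numerical illustration of this sensitivity question (a sketch under simplifying assumptions: exact simulation of fBm on a grid via a Cholesky factorization, and a crude discrete proxy for the first passage time; the threshold and all parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)

    def fbm_paths(hurst, n_steps, n_paths, horizon=1.0):
        # Exact simulation of fBm on a grid via the Cholesky factor of its
        # covariance C(s, t) = (s^2H + t^2H - |t - s|^2H) / 2.
        t = np.linspace(horizon / n_steps, horizon, n_steps)
        s, u = np.meshgrid(t, t)
        h2 = 2 * hurst
        cov = 0.5 * (s ** h2 + u ** h2 - np.abs(s - u) ** h2)
        chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
        return (chol @ rng.standard_normal((n_steps, n_paths))).T

    # Crude discrete proxy for E[exp(-lam * tau_b)], tau_b the first passage
    # time of the path at the threshold b, for H around the Brownian case 1/2.
    b, lam, n_steps, n_paths = 1.0, 1.0, 500, 2000
    for hurst in (0.45, 0.50, 0.55):
        x = fbm_paths(hurst, n_steps, n_paths)
        reached = (x >= b).any(axis=1)
        first = (x >= b).argmax(axis=1)          # first index with x >= b
        tau = np.where(reached, (first + 1) / n_steps, np.inf)
        print(f"H = {hurst:.2f}: E[exp(-lam * tau_b)] ~ {np.exp(-lam * tau).mean():.3f}")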

Our results show that the Markov Brownian model is a good proxy model as long as the Hurst parameter remains close to 1/2.

6.8 The ICI epidemic propagation simulator

The launch of the ICI joint Inria-IGN project was important for the team. In addition to the permanent members of the team and Maxime Colomb (Inria-IGN engineer), this project, initiated by Denis Talay in July 2020, gathered the following contributors:

  • Aline Carneiro Viana (project-team TriBE)
  • Laura Grigori (project-team Alpines)
  • Julien Perret (IGN)
  • Razvan Stanica (project-team Agora)
  • Milica Tomasevic (CMAP, Ecole Polytechnique)

The project aims to simulate the individual motions of contaminated or healthy people within a real small city and to simulate the contaminations between the inhabitants. Statistical studies on the simulation results should allow one to better understand the propagation of an epidemic and to compare the performances of various public stop-and-go strategies to control it.

In addition, the simulator does not rely on macroscopic laws of SIR type to describe the evolution of the epidemic or the contamination rates, and it does not use global models for population movements. It relies on models at the individual scale which are calibrated from real historical data, notably data provided by mobile phone operators.

Finally, the simulator is based on a precise topographic and sociological description of the city under consideration which is developed by the IGN experts involved in the project.

6.9 Stochastic models in Biology

Carl Graham has continued to investigate the regenerative properties of Hawkes processes, with a view to obtaining a better understanding of neural networks. Part of the mathematical interest and difficulty lies in the fact that these processes are in general not Markovian.

He has also studied metabolic heterogeneity in bacteria, most notably Escherichia coli, in collaboration with biologists. This work started at the initiative of Sylvie Méléard, who is the co-advisor of Josué Tchouanti's PhD thesis "Stochastic Models for Metabolic Heterogeneity in Certain Bacteria", funded by the "Chaire Modélisation Mathématique et Biodiversité" (École polytechnique, Muséum National d'Histoire Naturelle, Fondation de l'École polytechnique, VEOLIA Environnement). Carl Graham is currently a member of the ANR project Janus, which will explore this topic over the next few years.

7 Bilateral contracts and grants with industry

7.1 Bilateral contracts with industry

The year 2020 was dedicated to the launch of the CIROQUO Research & Industry Consortium – Consortium Industrie Recherche pour l'Optimisation et la QUantification d'incertitude pour les données Onéreuses – (Industry Research Consortium for the Optimization and QUantification of Uncertainty for Onerous Data) in which several INRIA teams, including ASCII, are involved. Josselin Garnier is the INRIA Saclay representative on the steering committee. The agreement has been validated by legal experts from the academic and technological members of the consortium and is in the process of being signed.

The principle of the CIROQUO Research & Industry Consortium is to bring together academic and technological research partners to solve problems related to the exploitation of numerical simulators, such as code transposition (how to go from small to large scale when only small-scale simulations are possible), taking into account the uncertainties that affect the results of simulations, and validation and calibration (how to validate and calibrate a computer code from collected experimental data). This project is the result of a simple observation: industries using computer codes are often confronted with similar problems during the exploitation of these codes, even if their fields of application are very varied. Indeed, the increase in the availability of computing cores is counterbalanced by the growing complexity of the simulations, whose computational times are usually of the order of an hour or a day. In practice, this limits the number of simulations. This is why the development of mathematical methods to make the best use of simulators and the data they produce is a source of progress. The experience acquired over the last thirteen years in the DICE and ReDICE projects and the OQUAIDO Chair shows that the formalization of real industrial problems often gives rise to first-rate theoretical problems that can feed scientific and technical advances. The creation of the CIROQUO Research & Industry Consortium, led by the Ecole Centrale de Lyon and co-led with IFPEN, follows these observations and responds to a desire for collaboration between technological research partners and academics in order to meet the challenges of exploiting large computing codes.

Scientific approach. The limitation of the number of calls to simulators implies that some information – even the most basic, such as the mean value, the influence of a variable, or the minimum value of a criterion – cannot be obtained directly by the usual methods. The international scientific community, structured around computer experiments and uncertainty quantification, took up this problem more than twenty years ago, but a large number of problems remain open. On the academic level, this is a dynamic field, notably structured since 2006 around the French CNRS Research Group MascotNum, which was renewed in 2020.

Composition. The CIROQUO Research & Industry Consortium aims to bring together a limited number of participants in order to make joint progress on test cases from the industrial world and on the upstream research that their treatment requires. The overall approach of the CIROQUO Research & Industry Consortium is metamodeling and related areas such as the design of experiments, optimization, inversion and calibration. IRSN, STORENGY, CEA, IFPEN and BRGM are the Technological Research Partners. Mines Saint-Etienne, Centrale Lyon, CNRS, UCA, UPS, UT3 and Inria are the Academic Partners of the consortium.

Scientific objectives. On the practical level, the expected impacts of the project are a concretization of the progress of numerical simulation through a better use of computational time, which allows the determination of better solutions and of the associated uncertainties. On the theoretical level, this project will stimulate research on the major scientific challenges of the discipline, such as code transposition/calibration/validation, modeling for complex environments, and stochastic codes. In each of these scientific axes, particular attention will be paid to high dimension: real problems sometimes involve several tens or hundreds of inputs, and methodological advances will be proposed to take this additional difficulty into account. The work expected from the consortium differs from the dominant research in machine learning by specificities linked to the exploration of expensive numerical simulations. However, it seems important to build bridges between the many recent developments in machine learning and the field of numerical simulation.

Philosophy. The CIROQUO Research & Industry Consortium is a scientific collaboration project aiming to mobilize means to achieve methodological advances. The project promotes cross-fertilization between partners coming from different backgrounds but confronted with problems related to a common methodology. It has three objectives:

  • the development of exchanges between technological research partners and academic partners on issues, practices and solutions through periodic scientific meetings and collaborative work, particularly through the co-supervision of students;
  • the contribution of common scientific skills thanks to regular training in mathematics and computer science;
  • the recognition of the Consortium at the highest level thanks to publications in international journals and the diffusion of free reference software.

7.2 Collaboration with EdF on industrial risks

This collaboration, for which Josselin Garnier is the Ascii leader, has been underway for several years. It concerns the assessment of the reliability of the hydraulic and nuclear power plants built and operated by EDF (Électricité de France). Since the failure of a power plant is associated with major consequences (flood, dam failure, or core meltdown), EDF must ensure, for regulatory and safety reasons, that the probability of failure of a power plant is sufficiently low.

The failure of such systems occurs when physical variables (temperature, pressure, water level) exceed a certain critical threshold. Typically, these variables enter this critical region only when several components of the system have deteriorated. Therefore, in order to estimate the probability of system failure, it is necessary to model jointly the behavior of the components and of the physical variables. For this purpose, a model based on a Piecewise Deterministic Markov Process (PDMP) is used. The PYCATSHOO platform has been developed by EDF to simulate this type of process. It allows one to estimate the probability of failure of the system by Monte Carlo simulation as long as this probability is not too low. When the probability becomes too low, the classical Monte Carlo estimation method, which requires many simulations to estimate the probabilities of rare events, is much too slow in our context. It is then necessary to use methods requiring fewer simulations: variance reduction methods. Among these are "importance sampling" and "splitting" methods, but they present difficulties when used with PDMPs (a toy illustration of the splitting idea is given below).
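
The following self-contained Python sketch illustrates the fixed-effort splitting idea on a caricatural model, a Gaussian random walk standing in for a monitored physical variable (this is independent of PYCATSHOO; all parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy fixed-effort splitting: estimate P(max_k S_k >= threshold) for a
    # Gaussian random walk S, by restarting trajectories from the states at
    # which they crossed the successive intermediate levels.
    n_steps, step_std = 100, 0.5
    levels = [2.0, 4.0, 6.0, 8.0]   # last level = critical threshold
    n_rep = 5000                    # trajectories launched per stage

    def advance(t0, x0, level):
        # Run the walk from (t0, x0); return the crossing state, or None.
        x = x0
        for t in range(t0, n_steps):
            x += step_std * rng.standard_normal()
            if x >= level:
                return t + 1, x
        return None

    p_hat, states = 1.0, [(0, 0.0)]
    for level in levels:
        hits = []
        for _ in range(n_rep):
            t0, x0 = states[rng.integers(len(states))]  # resample a parent
            out = advance(t0, x0, level)
            if out is not None:
                hits.append(out)
        if not hits:
            p_hat = 0.0
            break
        p_hat *= len(hits) / n_rep   # conditional crossing probability
        states = hits

    print(f"splitting estimate of P(max S >= {levels[-1]}): {p_hat:.3e}")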

Work has been undertaken on the subject, leading to the defense of a first CIFRE thesis (Thomas Galtier, defended in 2019) and the preparation of a new CIFRE thesis (Guillaume Chennetier, from 2021). Theoretical articles have been written and submitted to journals. New theoretical work on sensitivity analysis in rare event regimes is the subject of the new thesis. The integration of the methods into the PYCATSHOO platform is being carried out progressively.

8 Partnerships and cooperations

8.1 International initiatives

8.1.1 Inria associate team not involved in an IIL

CIRCUS
  • Title: Columbia Inria Research on Collaborative Ultracritical Systems
  • Coordinator: Denis Talay
  • Partners:
    • Statistics Department, Columbia University (United States)
  • Inria contact: Denis Talay
  • Summary:

    CIRCUS will focus on collaborative stochastic agent and particle systems. In standard models, the agents and particles have `blind' interactions generated by an external interaction kernel or interaction mechanism which their empirical distribution does not affect. A contrario, agent and particle systems which will be developed, analysed, simulated by CIRCUS will have the key property that the agents and particles dynamically optimize their interactions.

    Two main directions of research will be investigated: optimal regulation in stochastic environments, and optimized simulations of particle systems with singular interactions. In both cases, the interactions (between the agents or the numerical particles) are optimized, non Markovian, and the singularities reflect ultracritical phenomena such as aggregations or finite-time blow-ups.

9 Dissemination

9.1 Promoting scientific activities

9.1.1 Journal

Member of the editorial boards

Josselin Garnier is an associate editor of the journals Asymptotic Analysis, Discrete and Continuous Dynamical Systems – Series S, ESAIM P&S, Forum Mathematicum, SIAM Journal on Applied Mathematics, and SIAM/ASA Journal on Uncertainty Quantification (JUQ).

Carl Graham is an associate editor of the journal Markov Processes and Related Fields.

Denis Talay serves as an Area Editor of Stochastic Processes and their Applications, and as an Associate Editor of Journal of the European Mathematical Society, Probability, Uncertainty and Quantitative Risk, ESAIM Probability and Statistics, Stochastics and Dynamics, Journal of Scientific Computing, Monte Carlo Methods and Applications, SIAM Journal on Scientific Computing, Communications in Applied Mathematics and Computational Science, Éditions de l'École Polytechnique. He also served as Co-editor in chief of MathematicS in Action.

Nizar Touzi has been a co-editor of Finance and Stochastics since January 2007, and an Associate Editor of Mathematical Finance (November 2003–June 2020), Journal of Optimization Theory and Applications (since January 2014), Stochastic Processes and their Applications (since January 2016), Stochastics: An International Journal of Probability and Stochastic Processes (since January 2016), Control and Calculus of Variations (since June 2019), and the Tunisian Journal of Mathematics (since July 2019).

9.1.2 Leadership within the scientific community

Josselin Garnier is vice-chairman of the Applied Mathematics Department at Ecole Polytechnique.

Josselin Garnier is vice-chairman of the Fondation Mathematique Jacques Hadamard (FMJH), in charge of the Labex Hadamard (LMH).

D. Talay continued to chair the Scientific Council of the French Applied Math. Society SMAI.

D. Talay is a member of the scientific committee of the `Institut Mathématiques de la Planète Terre' project supported by INSMI-CNRS.

D. Talay served as a member of the scientific council of the Complex System academy of the Université Côte d'Azur Idex.

D. Talay is serving as a member of the CMUP Advisory Commission (University of Porto).

D. Talay is a member of the Comité National Français de Mathématiciens.

9.1.3 Scientific expertise

D. Talay served as a member of the committee for positions in Applied Mathematics at the Ecole Polytechnique.

D. Talay chaired the HCERES evaluation committee for the Toulouse Mathematics Institute (IMT).

D. Talay served as a member of the evaluation committee of the Charles University (Prague, Czech Republic).

9.2 Teaching - Supervision - Juries

9.2.1 Teaching

Josselin Garnier is a professor at the Ecole Polytechnique with a full teaching load. He also teaches the class "Inverse problems and imaging" in the Master Mathématiques, Vision, Apprentissage (M2 MVA).

D. Talay teaches the master course Equations différentielles stochastiques de McKean–Vlasov et limites champ moyen de systèmes de particules stochastiques en interaction, 24h, M2 Probabilité et Applications, LPSM, Sorbonne Université, France.

10 Scientific production

10.1 Publications of the year

International journals

  • 1. Manon Barthe, Josué Tchouanti, Pedro Henrique Gomes, Carine Bideaux, Delphine Lestrade, Carl Graham, Jean-Philippe Steyer, Sylvie Méléard, Jérôme Harmand, Nathalie Gorret, Muriel Cocaign-Bousquet and Brice Enjalbert. Availability of the Molecular Switch XylR Controls Phenotypic Heterogeneity and Lag Duration during Escherichia coli Adaptation from Glucose to Xylose. mBio 11(6), December 2020.
  • 2. Kilian Baudin, Adrian Fusaro, Katarzyna Krupa, Josselin Garnier, Sergio Rica, Guy Millot and A. Picozzi. Classical Rayleigh-Jeans Condensation of Light Waves: Observation and Thermodynamic Characterization. Physical Review Letters 125(24), December 2020.
  • 3. Liliana Borcea and Josselin Garnier. High-Resolution Interferometric Synthetic Aperture Imaging in Scattering Media. SIAM Journal on Imaging Sciences 13(1), January 2020, 291-316.
  • 4. Liliana Borcea, Josselin Garnier and Knut Sølna. Multimode communication through the turbulent atmosphere. Journal of the Optical Society of America A: Optics, Image Science, and Vision 37(5), 2020, 720.
  • 5. Liliana Borcea, Josselin Garnier and Knut Sølna. Onset of energy equipartition among surface and body waves. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 477(2246), February 2021, 20200775.
  • 6. Liliana Borcea and Josselin Garnier. Wave Propagation in Randomly Perturbed Weakly Coupled Waveguides. Multiscale Modeling and Simulation: A SIAM Interdisciplinary Journal 18(1), January 2020, 44-78.
  • 7. Manon Costa, Carl Graham, Laurence Marsalle and Viet-Chi Tran. Renewal in Hawkes processes with self-excitation and inhibition. Advances in Applied Probability 52(3), September 2020, 879-915.
  • 8. Gregory Kyriakos Delipei, Josselin Garnier, Jean-Charles Le Pallec and Benoit Normand. High to Low pellet cladding gap heat transfer modeling methodology in an uncertainty quantification framework for a PWR Rod Ejection Accident with best estimate coupling. EPJ N - Nuclear Sciences & Technologies 6, 2020, 56.
  • 9. Mathias Fink and Josselin Garnier. How a moving passive observer can perceive its environment? The Unruh effect revisited. Wave Motion 93, January 2020, 102462.
  • 10. Josselin Garnier. Intensity fluctuations in random waveguides. Communications in Mathematical Sciences 18(4), 2020, 947-971.
  • 11. Josselin Garnier. Low-frequency source imaging in an acoustic waveguide. Inverse Problems 36(11), November 2020, 115004.
  • 12. Josselin Garnier and Knut Sølna. Implied Volatility Structure in Turbulent and Long-Memory Markets. Frontiers in Applied Mathematics and Statistics 6, April 2020.
  • 13. Josselin Garnier and Knut Sølna. Optimal Hedging Under Fast-Varying Stochastic Volatility. SIAM Journal on Financial Mathematics 11(1), January 2020, 274-325.
  • 14. Carl Graham, Jérôme Harmand, Sylvie Méléard and Josué Tchouanti. Bacterial Metabolic Heterogeneity: from Stochastic to Deterministic Models. Mathematical Biosciences and Engineering 17(5), July 2020, 5120-5133.
  • 15. Pierre Laperdrix, Nataliia Bielova, Benoit Baudry and Gildas Avoine. Browser Fingerprinting: A Survey. ACM Transactions on the Web 14(2), April 2020, 1-33.
  • 16. Sophie Marque-Pucheu, Guillaume Perrin and Josselin Garnier. An efficient dimension reduction for the Gaussian process emulation of two nested codes with functional outputs. Computational Statistics 35(3), September 2020, 1059-1099.
  • 17. Rémi Sainct, Cyril Feau, Jean-Marc Martinez and Josselin Garnier. Efficient methodology for seismic fragility curves estimation by active learning on Support Vector Machines. Structural Safety 86, September 2020, 101972.
  • 18. Milica Tomasevic and Denis Talay. A new McKean-Vlasov stochastic interpretation of the parabolic-parabolic Keller-Segel model: The one-dimensional case. Bernoulli 26(2), 2020, 1323-1353.