The stochastic particles studied in the Ascii research
behave like agents whose interactions are
stochastic and intelligent; these agents seek to
cooperate optimally towards a common objective by solving strategic
optimisation problems.
Ascii's overall objective is to develop the corresponding
theoretical and numerical analyses, with target applications in
economics, neuroscience, physics, biology, and stochastic numerics. To the
best of our knowledge, this challenging objective is quite innovative.

In addition to the modelling challenges raised by our innovative approaches to handling intelligent multi-agent interactions, we develop new mathematical theories and numerical methods to deal with interactions which, in most of the interesting cases we have in mind, are irregular, non-Markovian, and evolving. In particular, original and non-standard stochastic control and stochastic optimization methodologies are being developed, combined with original, problem-specific calibration methodologies.

To reach our objectives, we combine various mathematical techniques coming from stochastic analysis, partial differential equation analysis, numerical probability, optimization theory, and stochastic control theory.

Concerning particle systems with singular interactions, in
addition to the convergence to mean-field limits and the
analysis of convergence rates of relevant discretizations,
one of our main challenges concerns the simulation of
complex, singular and large-scale McKean-Vlasov particle
systems and stochastic partial differential equations, with a strong
emphasis on the detection of numerical instabilities and of potentially
large approximation errors.
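As a point of reference, the basic particle approximation we build upon can be sketched in a few lines. The example below is illustrative only: it uses a smooth linear interaction kernel (attraction to the empirical mean) rather than the singular kernels discussed above, and runs a plain Euler scheme.

```python
import numpy as np

def mckean_vlasov_euler(n_particles=500, n_steps=200, dt=0.01, sigma=0.5, seed=0):
    """Euler scheme for the toy McKean-Vlasov system
    dX_t = -(X_t - E[X_t]) dt + sigma dW_t,
    with the expectation replaced by the empirical mean of the particles."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)
    for _ in range(n_steps):
        drift = -(x - x.mean())          # empirical mean-field interaction
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return x
```

For singular kernels the drift term above must be replaced by a regularized or carefully cut-off interaction, which is precisely where the numerical instabilities mentioned above arise.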

The determination of blow-up times is also a major issue for spectrum approximation and criticality problems in neutron transport theory, Keller-Segel models for chemotaxis, financial bubble models, etc.

Reliability assessment for power generation systems or subsystems is another target application of our research. For such complex systems, standard Monte Carlo methods are inefficient because of the difficulty of simulating rare events appropriately. We thus develop algorithms based on particle filter methods combined with suitable variance reduction methods.
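The difficulty with naive Monte Carlo can be seen on the simplest possible rare event, the tail of a Gaussian. The sketch below is a hypothetical toy, using exponential tilting (importance sampling) as a simple stand-in for the particle filter and variance reduction methods mentioned above: the naive estimator typically returns zero, while the reweighted one recovers the correct order of magnitude.

```python
import numpy as np

def rare_event_probability(a=5.0, n=100_000, seed=2):
    """Estimate p = P(X > a) for X ~ N(0,1); p is about 2.9e-7 for a = 5,
    so naive Monte Carlo with n = 1e5 samples sees almost no hits."""
    rng = np.random.default_rng(seed)
    naive = (rng.normal(size=n) > a).mean()
    # Importance sampling: draw from N(a, 1) and reweight by the
    # likelihood ratio phi(y) / phi_a(y) = exp(-a*y + a^2/2).
    y = rng.normal(loc=a, size=n)
    weights = np.exp(-a * y + 0.5 * a**2)
    tilted = np.where(y > a, weights, 0.0).mean()
    return naive, tilted
```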

Exhibiting optimal regulation procedures in a stochastic environment is an important challenge in many fields. As emphasized above, in the situations we are interested in, the agents do not compete but contribute to their own regulation. We give three examples here: the control of cancer therapies, regulation and mechanism design by optimal contracting, and distributed control for the planning problem.

Optimal contracting is widely used in economics to model agents' interactions subject to the so-called moral hazard problem. This is best illustrated by the works of Tirole (2014 Nobel Prize in economics) in industrial economics. The standard situation is described by the interaction of two parties. The principal (e.g. a land owner) hires the agent (e.g. a farmer) in order to delegate the management of an output of interest (e.g. the production of the land). The agent receives a salary as compensation for the costly effort devoted to the management of the output. The principal only observes the output value and has no access to the agent's effort. Due to the cost of effort, the agent may divert his effort from the direction desired by the principal. The contract is proposed by the principal and chosen according to a Stackelberg game: anticipating the agent's optimal response to any contract, she searches for the optimal contract by optimizing her utility criterion.
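The Stackelberg logic just described (anticipate the agent's best response, then optimize the principal's criterion) can be illustrated on a deliberately simplified static toy model. All functional forms below (a linear compensation share s and a quadratic effort cost) are our own illustrative assumptions, not the continuous-time formulation studied by the team.

```python
import numpy as np

def best_response(s, c=1.0):
    """Agent's problem: choose effort e maximizing s*e - c*e^2/2
    (compensation minus effort cost), solved here by grid search."""
    efforts = np.linspace(0.0, 2.0, 2001)
    payoffs = s * efforts - 0.5 * c * efforts**2
    return efforts[np.argmax(payoffs)]

def optimal_contract(c=1.0):
    """Principal's problem: anticipating e*(s) = s/c, choose the share s
    maximizing the retained output (1 - s) * e*(s)."""
    shares = np.linspace(0.0, 1.0, 1001)
    principal = [(1.0 - s) * best_response(s, c) for s in shares]
    i = int(np.argmax(principal))
    return shares[i], principal[i]
```

With these choices the optimal share is s = 1/2, giving the principal a payoff of 1/4: the contract trades off incentivizing effort against the cost of compensation.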

We are developing the continuous-time formulation of this problem, allowing for diffusion control and for possibly competing multiple agents and principals. This is achieved by crucially using the recently developed second-order backward stochastic differential equations, which act as HJB equations in the present non-Markovian framework.

The current environmental transition requires governments to give firms incentives to introduce green technologies as substitutes for the outdated polluting ones. This transition requires appropriate incentive schemes so as to reach the overall transition objective. This problem can be formulated in the framework of the above Principal-Agent problem as follows. Governments act as principals by setting the terms of an incentive regulation based on subsidies and tax reductions. Firms, acting as agents, optimize their production strategies given the regulation imposed by the governments. Such incentive schemes are also provided through the refinancing channel by private investors, as best witnessed by the remarkable growth of green bond markets.

Another motivation comes from mechanism design. Modern decentralized facilities are present throughout our digitally connected economies. With the fragmentation of financial markets, exchanges are nowadays in competition. As the traditional international exchanges are now challenged by alternative trading venues, markets have to find innovative ways to attract liquidity. One solution is to use a maker-taker fees system, that is, a rule enabling them to charge liquidity provision and liquidity consumption asymmetrically. The most classical setting, used by many exchanges (such as Nasdaq, Euronext, BATS Chi-X, etc.), consists in subsidizing the former while taxing the latter. In practice, this results in associating a fee rebate with executed limit orders and applying a transaction cost to market orders.

A platform aims at attracting two types of agents: market makers post bid and ask prices for some underlying asset or commodity, and brokers fulfill their trading needs if the posted prices are convenient. The platform takes its benefits from the fees collected on each transaction. As transactions only occur when the market makers take on riskier behavior by posting attractive bid and ask prices, the platform (acting as the principal) sets the terms of an incentive compensation to the market makers (acting as agents) for each realized transaction. Consequently, this optimal contracting problem serves as an optimization tool for the mechanism design of the platform.

Inspired by optimal transport theory, we formulate the above regulation problem as the interaction between a principal and a “crowd” of symmetric agents. Given the large number of agents, we model the limiting case of a continuum of agents whose state is then described by their distribution. The mean field game formulates the interacting agents' optimal decisions according to a Nash equilibrium competition. The optimal planning problem, introduced by Pierre-Louis Lions, seeks an incentive scheme for the regulator, acting as a principal, that pushes the crowd to some target distribution. Such a problem may be formulated, for instance, as a model for the design of smart cities. One may then use the same techniques as for the Principal-Agent problem in order to convert this problem into a more standard optimal transport problem.

In a situation where a Principal faces many interacting agents, distributed control may serve as an important tool to preserve the aggregate production of the agents, while distributing differently the contributions amongst agents.

The above approach now needs to be extended in order to accommodate more realistic situations. Let us list the following important extensions:

Our research program on networks with interacting agents concerns various types of networks: electronic networks, biological networks, social networks, etc. The numerous mathematical tools necessary to analyse them depend on the network type and the analysis objectives. They include propagation of chaos theory, queueing process theory, large deviation theory, ergodic theory, population dynamics, and partial differential equation analysis, in order to respectively determine mean-field limits, congestion rates or spike train distributions, failure probabilities, equilibrium measures, evolution dynamics, macroscopic regimes, etc.

For example, recently proposed neuron models consist in considering different populations of neurons and setting up stochastic time evolutions of the membrane potentials depending on the population. When the number of populations is fixed, interaction intensities between individuals in different populations have similar orders of magnitude, and the total number of neurons tends to infinity, mean-field limits have been identified and fluctuation theorems have been proven.

However, to the best of our knowledge, no theoretical analysis is available on interconnected networks of networks with different populations of interacting individuals which naturally arise in biology and in economics.

We aim to study the effects of interconnections between sub-networks resulting from individual and local connections. Of course, the problem needs to be posed in terms of the geometry of the large network and of the scales between connectivity intensities and network sizes.

A related research direction concerns stochastic, continuous state and time opinion models where each agent's opinion locally interacts with the other agents' opinions in the system. Due to some exogenous randomness, the interaction tends to create clusters of common opinion. By using a linear stability analysis of the associated nonlinear Fokker-Planck equation that governs the empirical density of opinions in the limit of infinitely many agents, we can estimate the number of clusters, the time to cluster formation, the critical noise strength for cluster formation, the cluster dynamics after their formation, and the width and effective diffusivity of the clusters.
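A minimal simulation conveys the clustering phenomenon. The model below is our own illustrative discretization (a bounded-confidence interaction kernel with additive noise, not the specific model analysed via the Fokker-Planck equation): initially spread-out opinions coalesce into a few well-separated clusters.

```python
import numpy as np

def simulate_opinions(n=200, steps=400, dt=0.05, radius=0.5, noise=0.05, seed=1):
    """Each opinion drifts towards the mean of the opinions within a
    confidence radius, plus a small exogenous Brownian noise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 4.0, n)
    for _ in range(steps):
        diff = x[None, :] - x[:, None]     # diff[i, j] = x_j - x_i
        w = np.abs(diff) < radius          # local interaction window
        drift = (diff * w).sum(axis=1) / w.sum(axis=1)
        x = x + drift * dt + noise * np.sqrt(dt) * rng.normal(size=n)
    return x

def count_clusters(x, gap=0.3):
    """Count clusters as maximal groups of sorted opinions separated by `gap`."""
    xs = np.sort(x)
    return 1 + int((np.diff(xs) > gap).sum())
```

With opinions initially uniform on [0, 4] and a confidence radius of 0.5, the dynamics typically produce a handful of clusters spaced roughly twice the radius apart, in line with the heuristics recalled above.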

Another type of network systems we are interested in derives from financial systemic risk modeling. We consider evolving systems with a large number of inter-connected components, each of which can be in a normal state or in a failed state. These components also have mean field interactions and a cooperative behavior. We will also include diversity as well as other more complex interactions such as hierarchical ones. In such an inter-connected system, individual components can be operating closer to their margin of failure, as they can benefit from the stability of the rest of the system. This, however, reduces the overall margin of uncertainty, that is, increases the systemic risk: our research thus addresses QMU (Quantification of Margins of Uncertainty) problems.

We aim to study the probability of overall failure of the system, that is, its systemic risk. We therefore have to model the intrinsic stability of each component, the strength of external random perturbations to the system, and the degree of inter-connectedness or cooperation between the components.

Our target applications are the following ones:

One of our objectives is to explain why, in some circumstances, one simultaneously observes decreases in individual risks and an increase in systemic risk.

Our short and mid-term potential industrial impact concerns e.g. energy market regulations, financial market regulations, power distribution companies, nuclear plant maintenance. It also concerns all the industrial sectors where massive stochastic simulations at nano scales are becoming unavoidable and certified results are necessary.

We also plan to have impact in cell biology, macro-economics, and applied mathematics at the crossroads of stochastic integration theory, optimization and control, PDE analysis, and stochastic numerical analysis.

The ICI epidemic propagation simulation platform

In 2020, D. Talay launched the ICI (INRIA-Collaboration-IGN) project, of which he is the coordinator. This project aims to simulate the dynamic spreading of an epidemic at the individual scale, in a very precise geographic environment.

Nicolas Gilet (INRIA research engineer) and Maxime Colomb (INRIA-IGN research engineer) are the main developers of the code. The permanent members of the team Ascii jointly work on the modeling and algorithmic issues. The following Inria and IGN researchers contribute to specific tasks:

Infection between inhabitants is computed from the density of persons they stay with during the day, their epidemiological status, and probability laws. Statistical studies on the simulation results should allow one to better understand the propagation of an epidemic and to compare the performances of various public stop-and-go strategies to control person-to-person contamination.

This year, Maxime Colomb (INRIA-IGN engineer) and Nicolas Gilet (INRIA engineer) have developed a prototype of the model with the support of the ASCII permanent members and the ICI contributors. The model is based on the coupling of two different components: on the one hand, the modeling of the urban geographical area where the population lives and moves; on the other hand, the modeling of the random choices of daily travels and of the contaminations due to interactions between individuals. The simulation is for now applied to a sub-part of Paris's fifth arrondissement (Jussieu/St-Victor) and is intended to run on a single arrondissement or on small cities.

The geographic model is built from multiple geographic sources (IGN, INSEE, OpenStreetMap, local authority open data portals, etc.). A three-layered synthetic population is generated in order to represent housing, populated by households, composed of individuals. The multiple characteristics added allow us to represent the living conditions and the inner-household interactions of the population. Shops and activities are generated by matching multi-sourced data, allowing us to enrich the information about each amenity (opening hours, whether it remains open during a lockdown, etc.). We simulate the socio-professional structures and daily trips of the population by taking into account probability laws related to the urban space (probability of going out, going to work, shopping, etc.) and to social characteristics (age, job, etc.). Currently, the model is based on intuitive and simple trip laws for groups of individuals (pupils, students, working people, retirees). The calibration of these probability laws is being improved by using data provided by precise surveys and mobile operators.

In addition, person-to-person contamination has been modeled between individuals located in the same space at the same time, using transmission probability laws specific to each individual, parameterized by the distance between a healthy and a contaminated individual, as well as by the contact duration. Since the model is stochastic, in order to obtain accurate and robust statistics on the evolution of the epidemic, we must be able to simulate a large number of independent socio-professional structures within a given urban area, and then, for each population, a large number of realizations of daily trips and contaminations.
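To fix ideas, a person-to-person transmission law of the kind described above might be sketched as follows. The functional forms and parameter values below are purely illustrative placeholders, not the calibrated laws of the ICI model.

```python
import numpy as np

def transmission_probability(distance_m, duration_min, p0=0.1, d_scale=2.0, t_scale=15.0):
    """Toy transmission law: probability decays exponentially with the
    distance between the two individuals and saturates with contact duration.
    p0, d_scale, t_scale are illustrative (hypothetical) parameters."""
    spatial = np.exp(-distance_m / d_scale)
    temporal = 1.0 - np.exp(-duration_min / t_scale)
    return p0 * spatial * temporal

def simulate_contacts(contacts, rng=None):
    """contacts: list of (distance_m, duration_min) pairs for one day.
    Returns a boolean array of independently drawn contamination events."""
    if rng is None:
        rng = np.random.default_rng(0)
    probs = np.array([transmission_probability(d, t) for d, t in contacts])
    return rng.random(len(contacts)) < probs
```

Averaging such Bernoulli draws over many independent realizations of the daily trips is exactly why the large simulation campaigns described below are needed.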

Therefore, to carry out a very large number of simulations covering all parameters of the model, very high performance computing is required. The code is written in the Julia language and is currently parallelized using MPI. At this time, the model runs on the internal cluster of INRIA Saclay called Margaret (200 CPU cores corresponding to 10 CPU nodes), which allows us to check the code for a few different epidemiological parameters. We have also obtained the support of AMD to launch our model on a cluster, equipped with AMD EPYC™ processors and AMD Instinct™ accelerators, within the national GRID5000/SILECS infrastructure. Moreover, in September 2021, the ICI project obtained 6 million CPU hours from DARI/GENCI which can be used on the CEA cluster called Irene-Rome (up to 300,000 CPU cores) in order to launch simulations for a large panel of epidemiological parameters. These hours can be used until October 2022.

Finally, Maxime Colomb and Nicolas Gilet have developed a website that describes the ICI project and the characteristics of the ICI model. They have also developed a user interface from which it is possible to study the effect of health policies on the epidemic propagation by displaying the main epidemic indicators computed by the model.

Our next step is to calibrate the model with epidemiologic data and compare the predictive capacities of ICI with simpler models (SIR/SEIR).

Multiple Markov chains are constructed and calibrated for various geographical and socio-demographic profiles using the precise values of a global survey. The micro-spatialization of travel objectives must be realized using mobile phone data and the list of available places, weighted by their capacity to receive the public. The synthetic population generation has to be improved in order to assign an occupation to each individual and to come closer to existing statistics. These improvements are being made jointly with the writing of a scientific article.

Finally, Maxime Colomb and Nicolas Gilet have completed the user interface by integrating the back-end of the application. More precisely, they have deployed the interface on an INRIA web server and built an automatic pipeline between the interface and the server in order to display all of the simulations to the user.

Web sites:

http://ici.gitlabpages.inria.fr/website

https://gitlab.inria.fr/ici/website

https://ici.saclay.inria.fr

Our contribution to the PyCATSHOO toolbox

Our second topical activity concerns the PyCATSHOO
toolbox developed by EDF
which allows the modeling of dynamical hybrid
systems such as nuclear power plants or dams.
Hybrid systems mix two kinds of behaviour.
First, the discrete and stochastic behaviour which is in general due to
failures and repairs of the system's constituents. Second,
the continuous and deterministic physical phenomena which evolve
inside the system.

PyCATSHOO is based on the theoretical framework of Piecewise Deterministic Markov Processes (PDMPs). It implements this framework thanks to distributed hybrid stochastic automata and object-oriented modeling. It is written in C++, and both Python and C++ APIs are available. These APIs can be used either to model specific systems or for generic modelling, i.e., for the creation of libraries of component models. Special-purpose methods can also be developed within PyCATSHOO.
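To illustrate the PDMP framework underlying PyCATSHOO (independently of its actual API), the toy model below alternates deterministic flows with random failure/repair jumps occurring at exponential rates; all names, rates, and the physical variable are our own illustrative choices.

```python
import numpy as np

def simulate_pdmp(T=10.0, lam_fail=0.5, lam_repair=2.0, heat=1.0, cool=-2.0, seed=6):
    """Toy PDMP in the spirit of reliability models: a temperature x(t)
    rises at rate `heat` while the component is failed and falls at rate
    `cool` (floored at 0) while it is nominal; failures and repairs occur
    at exponential rates lam_fail and lam_repair."""
    rng = np.random.default_rng(seed)
    t, x, failed = 0.0, 0.0, False
    path = [(t, x, failed)]
    while t < T:
        rate = lam_repair if failed else lam_fail
        dt = min(rng.exponential(1.0 / rate), T - t)   # next jump or horizon
        slope = heat if failed else cool
        x = max(0.0, x + slope * dt)                   # deterministic flow
        t += dt
        failed = (not failed) if t < T else failed     # jump, unless at horizon
        path.append((t, x, failed))
    return path
```

Monte Carlo statistics over many such trajectories (e.g. the time the physical variable spends above a safety threshold) are the kind of quantities the PyCATSHOO toolbox computes for far richer industrial models.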

J. Garnier is contributing, and will continue to contribute, to this toolbox within joint Cifre programs with EDF. The PhD theses aim to add new functionalities to the platform, for instance an importance sampling method based on cross-entropy.

Optimal stopping of dynamic interacting stochastic systems.

In a series of three papers, Mehdi Talbi, Nizar Touzi, and Jianfeng Zhang (University of Southern California) study the mean field limit of the multiple optimal stopping problem. They develop the usual dynamic programming approach in the present context and provide a complete characterization of the problem in terms of a new obstacle equation on the Wasserstein space: a verification argument in the smooth case and an appropriate notion of viscosity solutions in the general case.

From finite population optimal stopping to mean field optimal
stopping.

Mehdi Talbi, Nizar Touzi, and Jianfeng Zhang analyze the convergence of the finite population optimal stopping problem towards the corresponding mean field limit by adapting the Barles-Souganidis monotone scheme method to this context. As a by-product of their analysis, they obtain an extension of the standard propagation of chaos to the context of stopped McKean-Vlasov diffusions.

Entropic optimal planning for path-dependent mean field games.

In the context of mean field games, with possible control of
the diffusion coefficient,
Zhenjie Ren, Xiaolu Tan, Nizar Touzi, and Junjian Yang
have considered a path-dependent version of the
planning problem introduced by P.L. Lions: given a pair of marginal
distributions (m0, m1), find a specification of the game problem
starting from the initial distribution m0, and inducing the target
distribution m1 at the mean field game equilibrium. The main result
reduces the path-dependent planning problem to an embedding problem,
that is, constructing a McKean-Vlasov dynamics with given marginals.

Mean Field Game of Mutual Holding.

Mao Fabrice Djete (CMAP, Ecole Polytechnique) and Nizar Touzi have introduced a mean field model for the optimal holding by a representative agent of her peers, as a natural expected scaling limit of the corresponding N-agent model. The induced mean field dynamics appear naturally in a form which is not covered by standard McKean-Vlasov stochastic differential equations. An explicit solution of the corresponding mean field game of mutual holding is obtained; it is defined by a bang-bang control consisting in holding those competitors whose dynamic value has a positive drift coefficient. They next use this mean field game equilibrium to construct (approximate) Nash equilibria for the corresponding N-player game.

Is there a Golden Parachute in Sannikov's Principal-Agent problem?

Dylan Possamaï and Nizar Touzi have provided a complete review of the continuous-time optimal contracting problem introduced by Sannikov, in the extended context allowing for possibly different discount rates for the two parties. The agent's problem is to seek the optimal effort, given the compensation scheme proposed by the principal over a random horizon. Then, given the agent's optimal response, the principal determines the best compensation scheme in terms of running payment, retirement, and lump-sum payment at retirement. A Golden Parachute is a situation where the agent ceases any effort at some positive stopping time and receives a payment afterwards, possibly in the form of a lump-sum payment or of a continuous stream of payments. The authors have shown that a Golden Parachute only exists in certain specific circumstances. This is in contrast with the results claimed by Sannikov, where the only requirement is a positive marginal cost of the agent's effort at zero. Namely, the authors show that there is no Golden Parachute if this parameter is too large. Similarly, in the context of a concave marginal utility, there is no Golden Parachute if the agent's utility function has too negative a curvature at zero. In the general case, the authors prove that an agent with positive reservation utility is either never retired by the principal, or retired above some given threshold (as in Sannikov's solution). They also show that different discount factors induce a face-lifted utility function, which allows one to reduce the analysis to a setting similar to that with equal discount rates. Finally, the authors confirm that an agent with small reservation utility does have an informational rent, meaning that the principal optimally offers him a contract with a strictly higher utility than his participation value.

Synchronization in a Kuramoto Mean Field Game

In collaboration with René Carmona and Mete Soner, Quentin Cormier has studied the classical Kuramoto model in the setting of an infinite horizon mean field game. The system is shown to exhibit both synchronization and phase transition. Incoherence below a critical value of the interaction parameter is demonstrated by the stability of the uniform distribution. Above this value, the game bifurcates and develops self-organizing time homogeneous Nash equilibria. As interactions become stronger, these stationary solutions become fully synchronized. Results are proved by an amalgam of techniques from nonlinear partial differential equations, viscosity solutions, stochastic optimal control and stochastic processes.
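The synchronization/incoherence dichotomy can already be observed in the noisy Kuramoto model itself (the uncontrolled dynamics underlying the mean field game, not the game). The sketch below simulates oscillators with identical natural frequencies, a simplification of the heterogeneous classical model: a weak coupling leaves the phases incoherent, while a strong coupling drives the order parameter close to one.

```python
import numpy as np

def kuramoto_order(K, n=500, steps=2000, dt=0.01, sigma=0.2, seed=3):
    """Euler scheme for the noisy mean-field Kuramoto model with identical
    natural frequencies; returns the final order parameter
    r = |mean(exp(i*theta))| (r ~ 0: incoherence, r ~ 1: synchronization)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        drift = K * r * np.sin(psi - theta)   # mean-field coupling
        theta = theta + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    return float(np.abs(np.exp(1j * theta).mean()))
```

In this setting the bifurcation occurs at a critical coupling proportional to the noise intensity, mirroring the phase transition established for the mean field game.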

Seismic probabilistic risk assessment.

The key elements of seismic probabilistic risk assessment studies are the fragility curves, which express the probability of failure of structures conditional on a seismic intensity measure. A multitude of procedures is currently available to estimate these curves. For modeling-based approaches, which may involve complex and expensive numerical models, the main challenge is to optimize the calls to the numerical codes in order to reduce the estimation costs. Adaptive techniques can be used for this purpose, but in doing so, taking into account the uncertainties of the estimates (via confidence intervals or ellipsoids related to the size of the samples used) is an arduous task because the samples are no longer independent and possibly not identically distributed. The main contribution of this work is to deal with this question in a mathematically rigorous way. To this end, C. Gauchy, C. Feau, and J. Garnier have proposed and implemented an active learning methodology based on adaptive importance sampling for the parametric estimation of fragility curves. They have proven some theoretical properties (consistency and asymptotic normality) for the estimator of interest. Moreover, they have given a convergence criterion in order to use asymptotic confidence ellipsoids. Finally, the performance of the methodology has been evaluated on analytical and industrial test cases of increasing complexity.

Surrogate modeling of a complex numerical code.

B. Kerleguer has considered the surrogate modeling of a complex numerical code in a multifidelity framework when the code output is a time series. Using an experimental design of the low- and high-fidelity code levels, an original Gaussian process regression method has been proposed. The code output is expanded on a basis built from the experimental design. The first coefficients of the expansion are processed by a co-kriging approach, while the remaining coefficients are collectively processed by a kriging approach with covariance tensorization. The resulting surrogate model, which takes into account the uncertainty in the basis construction, has been shown to have better performance in terms of prediction errors and uncertainty quantification than standard dimension reduction techniques.

Inverse problems.

L. Borcea, J. Garnier, A. V. Mamonov, and J. Zimmerling have introduced a novel, computationally inexpensive approach for imaging with an active array of sensors, which probe an unknown medium with a pulse and measure the resulting waves. The imaging function is based on the principle of time reversal in non-attenuating media and uses a data-driven estimate of the ‘internal wave’ originating from the vicinity of the imaging point and propagating to the sensors through the unknown medium. The authors have explained how this estimate can be obtained using a reduced order model (ROM) for the wave propagation. They have analyzed the imaging function, connected it to the time reversal process, and described how its resolution depends on the aperture of the array, the bandwidth of the probing pulse, and the medium through which the waves propagate. They have also shown how the internal wave can be used for selective focusing of waves at points in the imaging region. They have assessed the performance of the imaging methods with numerical simulations and compared them to the conventional reverse-time migration method and to the ‘backprojection’ method introduced recently as an application of the same ROM. Other active research directions concern imaging in randomly perturbed media in the stochastic homogenization regime by Q. Goepfert (with applications to medical imaging and non-destructive testing) and inverse problems in randomly perturbed waveguides (with applications to underwater acoustics) by A. Niclas.

Variance reduction.

J. Garnier and L. Mertz have examined a control variate estimator for a quantity that can be expressed as the expectation of a function of a random process, that is itself the solution of a differential equation driven by fast mean-reverting ergodic forces. The control variate is the same function for the limit diffusion process that approximates the original process when the mean-reversion time goes to zero. To get an efficient control variate estimator, a coupling method for the original process and the limit diffusion process has been proposed. It has been shown that the correlation between the two processes indeed goes to one when the mean reversion time goes to zero and the convergence rate has been quantified, which makes it possible to characterize the variance reduction of the proposed control variate method. The efficiency of the method has been illustrated on a few examples.
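The mechanism of a control variate estimator can be sketched on a static toy problem. Here the coupled limit process is replaced by a correlated Gaussian proxy with known mean, which is our own illustrative stand-in for the diffusion-limit coupling used in the actual method.

```python
import numpy as np

def control_variate_estimate(n=100_000, rho=0.95, seed=4):
    """Estimate E[f(X)] for f(x) = x^2 with X ~ N(0,1) (true value 1),
    using the control g(Y) = Y^2 where Y = rho*X + sqrt(1-rho^2)*Z is a
    correlated proxy with known mean E[g(Y)] = 1 (stand-in for the
    coupled limit process). Returns (naive, controlled, variance ratio)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    z = rng.normal(size=n)
    y = rho * x + np.sqrt(1.0 - rho**2) * z
    f, g = x**2, y**2
    beta = np.cov(f, g)[0, 1] / np.var(g)   # near-optimal linear coefficient
    controlled = (f - beta * (g - 1.0)).mean()
    return f.mean(), controlled, float(np.var(f - beta * (g - 1.0)) / np.var(f))
```

The variance ratio is roughly 1 minus the squared correlation between f and its control; this is exactly why proving that the correlation between the original and limit processes tends to one quantifies the achievable variance reduction.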

On time-dependent reliability-based
design optimization problems with constraints.

A. Cousin, J. Garnier, M. Guiton, and M. Munoz-Zuniga have considered a time-dependent reliability-based design optimization (RBDO) problem with constraints involving the maximum and/or the integral of a random process over a time interval. They focus especially on problems where the process is a stationary or a piecewise stationary Gaussian process. A two-step procedure is proposed to solve the problem. First, ergodic theory and extreme value theory are used to reformulate the original constraints into time-independent ones. This yields an equivalent RBDO problem for which classical algorithms perform poorly. The second step of the procedure is to solve the reformulated problem with a new method based on an adaptive kriging strategy well suited to the reformulated constraints, called AK-ECO (adaptive kriging for expectation constraints optimization). The procedure is applied to two toy examples involving a harmonic oscillator subjected to random forces. It is then applied to an optimal design problem for a floating offshore wind turbine.

The parabolic-parabolic Keller-Segel model is a set of equations that model the process of cell movement. It takes into account the evolution of different chemical components that can aid, hinder or change the direction of movement, a process called chemotaxis.

In collaboration with Radu Maftei (a former Inria post-doc), Milica Tomasevic (CMAP, Ecole Polytechnique) and Denis Talay have continued to analyse the numerical performance of a stochastic particle method for the parabolic-parabolic Keller-Segel model. They also propose and test various algorithmic improvements to the method in order to substantially decrease its execution time without altering its global accuracy.

Communication networks and their algorithms.

An important current research topic of Carl Graham is the modeling, analysis, simulation, and performance evaluation of communication networks and their algorithms. Most of these algorithms must function in real time, in a distributed fashion, and using sparse information. In particular, load balancing algorithms aim to provide better utilization of the network resources, and hence better quality of service for clients, by striving to avoid the starving of some servers and the build-up of queues at others, routing the clients so as to have them well spread out throughout the system. Carl Graham's recent focus on these networks is perfect simulation in equilibrium, and a paper on this is in the final stages of writing. Perfect simulation methods allow one to estimate quality of service (QoS) indicators in the stationary regime by Monte Carlo methods.

Regenerative properties of Hawkes processes.

Establishing regenerative properties of Hawkes processes allows one to derive systematically long-time asymptotic results in view of statistical applications. A paper of Carl Graham with M. Costa, L. Marsalle, and V.C. Tran proves regeneration for specific nonlinear Hawkes processes with transfer functions which may take negative values, in order to model self-inhibition phenomena. An essential assumption is that the transfer function has bounded support; this allows one to introduce an auxiliary Markov process with values in point processes whose support includes that of the transfer function, and then to prove that it is positive recurrent. This regenerative result then allows one to prove, in particular, a non-asymptotic exponential concentration inequality by carefully adapting the Bernstein inequality. Another paper of Carl Graham proves regenerative properties for the linear Hawkes process under minimal assumptions on the transfer function, which may have unbounded support. The proof exploits the deep independence properties of the Poisson cluster point process decomposition of Hawkes and Oakes, and the regeneration times are not stopping times for the Hawkes process. The regeneration time is interpreted as the renewal time at zero of an M/G/infinity queue, which yields a formula for its Laplace transform. When the transfer function admits some exponential moments, the cluster length can be stochastically dominated by exponential random variables with parameters expressed in terms of these moments. This yields explicit bounds on the Laplace transform of the regeneration time in terms of simple integrals or of special functions, yielding an explicit negative upper bound on its abscissa of convergence. This is illustrated on the exponential concentration inequality previously obtained by Carl Graham and his coauthors.
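The Hawkes-Oakes cluster representation invoked above lends itself to a short simulation sketch. The code below, with illustrative subcritical parameters and an exponential transfer function, generates a linear Hawkes process as a branching cascade of Poisson immigrants and their offspring.

```python
import numpy as np

def simulate_hawkes_cluster(T=100.0, mu=1.0, alpha=0.5, beta=1.0, seed=5):
    """Linear Hawkes process on [0, T] via the Hawkes-Oakes cluster
    decomposition: immigrants arrive as a Poisson process of rate mu, and
    each event spawns Poisson(alpha/beta) children (subcritical branching
    since alpha/beta < 1) at Exp(beta) delays, matching the exponential
    transfer function alpha * exp(-beta * t)."""
    rng = np.random.default_rng(seed)
    immigrants = rng.uniform(0.0, T, rng.poisson(mu * T))
    events, queue = [], list(immigrants)
    while queue:
        t = queue.pop()
        if t <= T:
            events.append(t)
            n_children = rng.poisson(alpha / beta)
            queue.extend(t + rng.exponential(1.0 / beta, n_children))
    return np.sort(np.array(events))
```

In the subcritical regime the mean intensity is mu / (1 - alpha/beta), and the cluster lengths are exactly the objects whose exponential moments control the regeneration-time bounds discussed above.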

Long time behavior of particle systems and their mean-field limit.

Quentin Cormier has studied the long-time behavior of a family of McKean-Vlasov stochastic differential equations. He has given conditions ensuring the local stability of an invariant probability measure. The criterion involves the location of the roots of an explicit holomorphic function associated with the dynamics: when all the roots lie in the left half-plane, local stability holds and convergence is proven in Wasserstein norms, with the optimal rate of convergence. The probabilistic proof makes use of Lions derivatives and relies on a new `integrated sensitivity' formula.
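The particle approximation underlying such McKean-Vlasov equations can be sketched on a simple illustrative example (not the family of equations studied here): the law-dependent term E[X_t] is replaced by the empirical mean of N interacting particles, discretized by an Euler scheme.

```python
import math
import random

def mckean_vlasov_particles(n_particles=1000, n_steps=200, dt=0.05,
                            sigma=1.0, seed=0):
    """Euler scheme for the particle approximation of the illustrative
    McKean-Vlasov SDE  dX_t = -(X_t - E[X_t]) dt + sigma dW_t:
    the expectation E[X_t] is replaced by the empirical mean of the cloud."""
    rng = random.Random(seed)
    x = [rng.gauss(5.0, 1.0) for _ in range(n_particles)]  # initial law N(5,1)
    for _ in range(n_steps):
        m = sum(x) / n_particles                           # empirical mean
        x = [xi - (xi - m) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for xi in x]
    return x
```

For this drift the empirical mean is almost conserved (its fluctuations vanish as the number of particles grows), and the cloud relaxes to a Gaussian profile around it, a toy instance of convergence to an invariant measure.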

A hypothesis test for complex stochastic simulations.

In a joint work with Héctor Olivero, D. Talay has proposed and analyzed an asymptotic hypothesis test for independent copies of a given random variable which is supposed to belong to an unknown domain of attraction of a stable law.

Surprisingly, the proposed hypothesis test is based on a statistic inspired by methodologies used to determine, from the observation of a single path at discrete times, whether a semimartingale has jumps. The authors justified their test by proving asymptotic properties of discrete-time functionals of Brownian bridges. They also discussed numerous numerical experiments illustrating the satisfactory properties of the proposed test.

Several INRIA teams, including ASCII, are involved in the CIROQUO Research & Industry Consortium – Consortium Industrie Recherche pour l'Optimisation et la QUantification d'incertitude pour les données Onéreuses – (Industry Research Consortium for the Optimization and QUantification of Uncertainty for Onerous Data). Josselin Garnier is the INRIA Saclay representative on the steering committee.

The principle of the CIROQUO Research & Industry Consortium is to bring together academic and technological research partners to solve problems related to the exploitation of numerical simulators, such as code transposition (how to go from small to large scale when only small-scale simulations are possible), taking into account the uncertainties that affect the result of a simulation, and validation and calibration (how to validate and calibrate a computer code from collected experimental data).

This project is the result of a simple observation: industries using computer codes are often confronted with similar problems during the exploitation of these codes, even if their fields of application are very varied. Indeed, the increase in the availability of computing cores is counterbalanced by the growing complexity of the simulations, whose computational times are usually of the order of an hour or a day. In practice, this limits the number of simulations. This is why the development of mathematical methods to make the best use of simulators and the data they produce is a source of progress.

The experience acquired over the last thirteen years in the DICE and ReDICE projects and the OQUAIDO Chair shows that the formalization of real industrial problems often gives rise to first-rate theoretical problems that can feed scientific and technical advances. The creation of the CIROQUO Research & Industry Consortium, led by the Ecole Centrale de Lyon and co-led with IFPEN, follows from these observations and responds to a desire for collaboration between technological research partners and academics in order to meet the challenges of exploiting large computing codes.

Scientific approach.
The limitation of the number of calls to simulators implies that some information – even the most basic information such as the mean value, the influence of a variable or the minimum value of a criterion – cannot be obtained directly by the usual methods.
The international scientific community, structured around computer experiments and uncertainty quantification, took up this problem more than twenty years ago, but a large number of problems remain open.
On the academic level, this is a dynamic field, notably structured since 2006 around the French CNRS Research Group MascotNum, which was renewed in 2020.

Composition.
The CIROQUO Research & Industry Consortium aims to bring together a limited number of participants in order to make joint progress on test cases from the industrial world and on the upstream research that their treatment requires.
The overall approach on which the CIROQUO Research & Industry Consortium will focus is metamodeling and related areas such as design of experiments, optimization, inversion, and calibration.
IRSN, STORENGY, CEA, IFPEN, BRGM are the Technological Research Partners.
Mines Saint-Etienne, Centrale Lyon, CNRS, UCA, UPS, UT3 and Inria are the Academic Partners of the consortium.
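Metamodeling in this sense typically means fitting a cheap statistical emulator, often a Gaussian process (kriging), to a few expensive simulator runs, and then querying the emulator instead of the simulator. A minimal sketch, with illustrative kernel and parameter choices:

```python
import numpy as np

def gp_fit_predict(x_train, y_train, x_test, length_scale=0.15, noise=1e-8):
    """Minimal Gaussian-process (kriging) metamodel with a squared-exponential
    kernel: fit on a few expensive simulator runs, then predict the mean and
    variance anywhere else. A sketch, not a production emulator."""
    def kernel(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)
    K = kernel(x_train, x_train) + noise * np.eye(len(x_train))  # jittered Gram
    Ks = kernel(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)                      # posterior mean
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

The predictive variance is what drives the sequential strategies (optimization, inversion, calibration) mentioned above: new simulator runs are placed where the emulator is most uncertain or most promising.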

Scientific objectives.
On the practical level, the expected impact of the project is to make the progress of numerical simulation concrete through a better use of computational time, allowing the determination of better solutions and of the associated uncertainties.
On the theoretical level, the project will stimulate work on the major scientific challenges of the discipline, such as code transposition/calibration/validation, modeling for complex environments, and stochastic codes.
In each of these scientific axes, particular attention will be paid to high dimension: real problems sometimes involve several tens or hundreds of inputs.
Methodological advances will be proposed to take this additional difficulty into account.
The work expected from the consortium differs from the dominant research in machine learning by specificities linked to the exploration of expensive numerical simulations. However,
it seems important to build bridges between the many recent developments in machine learning and the field of numerical simulation.

Philosophy.
The CIROQUO Research & Industry Consortium is a scientific collaboration project aiming to pool resources in order to achieve methodological advances.
The project promotes cross-fertilization between partners coming from different backgrounds but confronted with problems relying on a common methodology. It has three objectives:

- the development of exchanges between technological research partners and academic partners on issues, practices, and solutions, through periodic scientific meetings and collaborative work, particularly the co-supervision of students;

- the contribution of common scientific skills thanks to regular training in mathematics and computer science;

- the recognition of the Consortium at the highest level thanks to publications in international journals and the diffusion of free reference software.

This collaboration, for which Josselin Garnier is the Ascii leader, has been underway for several years. It concerns the assessment of the reliability of hydraulic and nuclear power plants built and operated by EDF (Electricité de France). Since the failure of a power plant is associated with major consequences (flood, dam failure, or core meltdown), for regulatory and safety reasons EDF must ensure that the probability of failure of a power plant is sufficiently low.

The failure of such systems occurs when physical variables (temperature, pressure, water level) exceed a certain critical threshold. Typically, these variables enter the critical region only when several components of the system have deteriorated. Therefore, in order to estimate the probability of system failure, it is necessary to model jointly the behavior of the components and of the physical variables. For this purpose, a model based on a Piecewise Deterministic Markov Process (PDMP) is used. The PYCATSHOO platform has been developed by EDF to simulate this type of process; it makes it possible to estimate the probability of failure of the system by Monte Carlo simulation as long as this probability is not too low. When the probability becomes too low, the classical Monte Carlo method, which requires very many simulations to estimate the probabilities of rare events, is much too slow to execute in our context. It is then necessary to use methods requiring fewer simulations to estimate the probability of system failure: variance reduction methods. Among these are importance sampling and splitting methods, but both present difficulties when used with PDMPs.
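As a minimal illustration of the importance-sampling idea, here is a sketch on a toy Gaussian problem (not on PDMPs, where applying the method is considerably harder): sampling is shifted towards the rare-event region and each hit is reweighted by the likelihood ratio.

```python
import math
import random

def rare_event_is(threshold=4.0, n_samples=100000, shift=4.0, seed=0):
    """Importance-sampling estimate of p = P(X > threshold) for X ~ N(0,1):
    sample from the shifted proposal N(shift, 1), which hits the rare-event
    region often, and reweight each hit by the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        y = rng.gauss(shift, 1.0)                   # proposal sample
        if y > threshold:
            # likelihood ratio phi(y) / phi_shifted(y)
            total += math.exp(-shift * y + 0.5 * shift ** 2)
    return total / n_samples
```

The target probability here is about 3.2e-5: a crude Monte Carlo estimator would need hundreds of millions of samples for comparable accuracy, whereas the shifted proposal reaches it with 10^5 samples.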

Work has been undertaken on the subject, leading to the defense of a first CIFRE thesis (Thomas Galtier, defended in 2019) and the preparation of a new CIFRE thesis (Guillaume Chennetier, from 2021). Theoretical articles have been written and submitted to journals. New theoretical work on sensitivity analysis in rare-event regimes is the subject of the new thesis. The integration of the methods into the PYCATSHOO platform is progressively being carried out.

CIRCUS will focus on collaborative stochastic agent and particle systems. In standard models, the agents and particles have `blind' interactions generated by an external interaction kernel or interaction mechanism which their empirical distribution does not affect. By contrast, the agent and particle systems that CIRCUS will develop, analyse, and simulate have the key property that the agents and particles dynamically optimize their interactions.

Two main directions of research will be investigated: optimal regulation in stochastic environments, and optimized simulation of particle systems with singular interactions. In both cases, the interactions (between the agents or the numerical particles) are optimized and non-Markovian, and the singularities reflect ultracritical phenomena such as aggregation or finite-time blow-up.

P. Protter (Columbia University) visited the team in December.

J. Garnier is Vice-head of the Fondation Mathématique Hadamard, in charge of the Labex Mathématique Hadamard.

J. Garnier and Véronique Maume-Deschamps (president of the French agency AMIES) chaired the working group `Développement économique, de la compétitivité et de l'innovation' (economic development, competitiveness, and innovation) at the `Assises des Mathématiques' conference.

D. Talay is Vice-President of the Natixis Foundation which promotes academic research on financial risks. He also serves as a member of the scientific committee of the French agency AMIES in charge of promoting Mathematics to industry.

N. Touzi is Scientific Director of the Louis Bachelier Institute which hosts two public foundations promoting research in finance and economics for sustainable growth.

N. Touzi is Vice-head of the Doctoral School Jacques Hadamard (EDMH) in charge of the Polytechnique Pole.

J. Garnier is a member of the editorial boards of the journals Asymptotic Analysis, Discrete and Continuous Dynamical Systems – Series S, ESAIM P&S, Forum Mathematicum, SIAM Journal on Applied Mathematics, and SIAM ASA Journal on Uncertainty Quantification (JUQ).

D. Talay serves as an Area Editor of Stochastic Processes and their Applications, and as an Associate Editor of the Journal of the European Mathematical Society and of Monte Carlo Methods and Applications. He also serves as Co-Editor-in-Chief of MathematicS in Action. During the evaluation period his other Associate Editor terms ended: Probability, Uncertainty and Quantitative Risk; ESAIM Probability and Statistics; Stochastics and Dynamics; Journal of Scientific Computing; SIAM Journal on Scientific Computing.

N. Touzi serves as an Area Editor of Stochastic Processes and their Applications, and as an Associate Editor of Advances in Calculus of Variations, Stochastics: an International Journal of Probability and Stochastic Processes, Journal of Optimization Theory and Applications, Mathematical Control and Related Fields, Tunisian Journal of Mathematics, and Springer Briefs in Mathematical Finance. He is also Co-Editor of the Paris-Princeton Lectures in Mathematical Finance.


Josselin Garnier is a professor at the Ecole Polytechnique with a full teaching load. He also teaches the class "Inverse problems and imaging" in the Master Mathématiques, Vision, Apprentissage (M2 MVA).

D. Talay teaches the master course `Equations différentielles stochastiques de McKean-Vlasov et limites champ moyen de systèmes de particules stochastiques en interaction' (McKean-Vlasov stochastic differential equations and mean-field limits of interacting stochastic particle systems), 24h, M2 Probabilité et Applications, LPSM, Sorbonne Université, France.

Nizar Touzi is a professor at the Ecole Polytechnique with a full teaching load.