The Inria project-team
MathRisk was created in 2013. It is the follow-up of the
MathFi project-team, founded in 2000, which focused on financial
mathematics, in particular on computational methods for pricing and hedging
increasingly complex financial products.
The 2007 global financial crisis and its “aftermath
crisis” abruptly highlighted the critical importance of a better
understanding and management of risk.

The project-team MathRisk addresses broad research topics embracing risk management in quantitative finance and insurance and in related domains such as economics and sustainable development. In these contexts, the management of risk appears at different time scales, from high-frequency data to long-term life insurance management, raising challenging new modeling and numerical issues. We aim both at producing advanced mathematical tools, models, algorithms, and software in these domains, and at developing collaborations with various institutions involved in risk control. The scientific issues we consider include:

Option pricing and hedging, and risk-management of portfolios in finance and insurance.
These remain crucial issues in finance and insurance, with the development of increasingly complex products and evolving regulatory requirements.
Models must take into account multidimensional features, incompleteness issues, model uncertainty, and various market imperfections and defaults.
It is also important to understand and capture the joint dynamics of the underlying assets and their volatilities.
The insurance business faces a large class of risks, including financial risk, and is subject to strict regulatory requirements. We aim at proposing
modeling frameworks that capture the main specificities of life insurance contracts.

Systemic risk and contagion modeling.
Recent years have been shaped by ever greater interconnectedness among all aspects of human life. Globalization and economic growth, as well as technological progress, have led to more complex dependencies worldwide. While these complex networks facilitate physical, capital, and informational transmission, they have an inherent potential to create and propagate distress and risk. The 2007-2009 financial crisis illustrated the role of network structure in amplifying initial shocks in the banking system up to the level of the global financial system, leading to an economic recession.
We contribute to research on systemic risk and financial networks, aiming to develop
adequate tools for monitoring financial stability that accurately capture the risks due to the variety of interconnections in the financial system.
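Distress propagation of the kind described above can be illustrated with a minimal threshold-contagion sketch, assuming a toy exposure network (all node names, exposures, and capital levels below are hypothetical, not data from the team's models):

```python
# Hypothetical sketch of threshold default contagion on a small directed
# exposure network: a node defaults once its accumulated losses from
# failed neighbors exhaust its capital buffer.
def default_cascade(exposures, capital, initially_failed):
    """exposures[i][j] = loss j suffers if i fails; returns the final failed set."""
    failed = set(initially_failed)
    losses = {j: 0.0 for j in capital}
    frontier = list(failed)
    while frontier:
        i = frontier.pop()
        for j, loss in exposures.get(i, {}).items():
            if j in failed:
                continue
            losses[j] += loss
            if losses[j] >= capital[j]:   # capital exhausted -> default
                failed.add(j)
                frontier.append(j)
    return failed

# Toy 4-bank system: bank 0's failure wipes out bank 1, which then topples
# bank 2; bank 3 survives the residual shock.
exposures = {0: {1: 5.0}, 1: {2: 3.0}, 2: {3: 1.0}}
capital   = {0: 2.0, 1: 4.0, 2: 3.0, 3: 2.0}
print(sorted(default_cascade(exposures, capital, {0})))  # → [0, 1, 2]
```

Even in this tiny example the final cascade size depends jointly on the network topology and the capital buffers, which is the kind of interaction the team's limit theorems quantify on large random graphs.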

(Martingale) Optimal transport. Optimal transport problems arise in a wide range of topics, from economics to physics. In mathematical finance, an additional martingale constraint is considered to take the absence of arbitrage opportunities into account. The minimal and maximal costs provide price bounds robust to model risk, i.e. the risk of using an inadequate model. On the other hand, optimal transport is also useful to analyse mean-field interactions. We are in particular interested in particle approximations of McKean-Vlasov stochastic differential equations (SDEs) and the study of mean-field backward SDEs with applications to systemic risk quantification.

Advanced numerical probability methods and computational finance. Our project team is deeply involved in numerical probability,
aiming at pushing numerical methods
toward effective implementation. This numerical orientation is supported by a mathematical expertise
which permits a rigorous analysis of the algorithms, provides theoretical support for the study of rates of convergence,
and underpins the introduction of new tools for the improvement of numerical methods.
Financial institutions and insurance companies, subject to increasingly stringent regulatory
requirements, such as FRTB or XVA computations,
face numerical
implementation challenges, and research focused on numerical efficiency is strongly needed. Overcoming the curse of dimensionality in computational finance is a crucial issue that we address by developing advanced stochastic algorithms and deep learning techniques.

The MathRisk project is strongly devoted to the development of new
mathematical methods and numerical algorithms. Mathematical tools
include stochastic modeling, stochastic analysis, in particular
various aspects of
stochastic control and optimal stopping with nonlinear expectations, Malliavin calculus, stochastic optimization,
random graphs, (martingale) optimal transport, mean-field systems,
numerical probability
and, more generally, advanced numerical methods for effective solutions.
The numerical platform Premia, which MathRisk develops in collaboration
with a consortium of financial institutions,
focuses on the computational challenges raised by recent developments in financial mathematics, in particular risk control in large dimensions.

After the recent financial crisis, systemic risk has emerged as one of the major research topics in mathematical finance. Interconnected systems are subject to contagion in times of distress. The aim is to understand and model how the bankruptcy of a bank (or a large company) may or may not induce other bankruptcies. By contrast with the traditional approach in risk management, the focus is no longer on modeling the risks faced by a single financial institution, but on modeling the complex interrelations between financial institutions and the mechanisms of distress propagation among them.

The mathematical modeling of default contagion, by which an economic shock causing initial losses and default of a few institutions is amplified due to complex linkages, leading to large scale defaults, can be addressed by various techniques, such as network approaches or mean field interaction models.

The goal of our project is to develop a model that captures the dynamics of a complex financial network and to provide methods for the control of default contagion, both by a regulator and by the institutions themselves.

In recent years, we have contributed to research on the control of contagion in financial systems in the framework of random graph models (see the PhD theses of R. Chen 75 and Z. Cao 31).

In 60, 103, 8, we consider a financial network described as a weighted directed graph, in which nodes represent financial institutions and edges the exposures between them. The distress propagation is modeled as an epidemic on this graph. We study the optimal intervention of a lender of last resort who seeks to make equity infusions in a banking system prone to insolvency and to bank runs, under complete and incomplete information on the failure cluster, in order to minimize the contagion effects. The paper 8 provides in particular important insight into the relation between the value of a financial system, connectivity and optimal intervention.

The results show that up to a certain connectivity, the value of the financial system increases with connectivity. However, this is no longer the case if connectivity becomes too large. The natural question remains how to create incentives for the banks to attain an optimal level of connectivity. This is studied in 76, where network formation for a large set of financial institutions represented as nodes is investigated. Linkages are a source of income, and at the same time they bear the risk of contagion, which is endogenous and depends on the strategies of all nodes in the system. The optimal connectivity of the nodes results from a game. Existence of an equilibrium in the system and its stability properties are studied. The results suggest that financial stability is better described in terms of the mechanism of network formation than in terms of simple statistics of the network topology such as the average connectivity.

In 7, H. Amini (University of Florida), A. Minca (Cornell University) and A. Sulem study a dynamic contagion risk model with recovery features. We introduce threshold growth in the classical threshold contagion model, in which nodes have downward jumps when there is a failure of a neighboring node. We are motivated by the application to financial and insurance-reinsurance networks, in which thresholds represent either capital or liquidity. An initial set of nodes fail exogenously and affect the nodes connected to them as they default on financial obligations. If those nodes’ capital or liquidity is insufficient to absorb the losses, they will fail in turn. In other words, if the number of failed neighbors reaches a node’s threshold, then this node will fail as well, and so on. Since contagion takes time, there is the potential for the capital to recover before the next failure. It is therefore important to introduce a notion of growth. Choosing the configuration model as underlying graph, we prove fluid limits for the baseline model, as well as extensions to the directed case, state-dependent inter-arrival times and the case of growth driven by upward jumps. We then allow nodes to choose their connectivity by trading off link benefits and contagion risk. Existence of an asymptotic equilibrium is shown, as well as convergence of the sequence of equilibria on the finite networks. In particular, these results show that systems with higher overall growth may have a higher failure probability in equilibrium.

A. Sulem, with M.C. Quenez and M. Grigorova, has studied option pricing and hedging in a nonlinear incomplete financial market model with default. The underlying market model consists of a risk-free asset and a risky asset driven by a Brownian motion and a compensated default martingale. The portfolio processes follow nonlinear dynamics with a nonlinear driver, and the market is incomplete, in the sense that not every contingent claim can be replicated by a portfolio.
In this framework, we address in 13 the problem of pricing and (super)hedging of European options. By using a dynamic programming approach, we provide a dual formulation of the seller’s superhedging price as the supremum, over a suitable set of equivalent probability measures, of the solutions of constrained BSDEs with default. In 87, we study the superhedging problem for American options with irregular payoffs. We establish a dual formulation of the seller’s price in terms of the value of a non-linear mixed optimal control/stopping problem. We also characterize the seller's price process as the minimal supersolution of a reflected BSDE with constraints. We then prove a duality result for the buyer's price in terms of the value of a non-linear optimal control/stopping game problem.
A crucial step in the proofs is to establish a non-linear optional and a non-linear predictable decomposition for the processes involved. The complete market model with default was previously studied in 79.
A complete analysis of BSDEs driven by a Brownian motion and a compensated default jump process with intensity process

The theory of optimal stopping in connection with American option pricing has been extensively studied in recent years. Our contributions in this area concern:

(i) The analysis of the binomial approximation of the American put price in the Black-Scholes model. We proved that the rate of convergence is,
up to a logarithmic factor, of the order
(ii) The American put in the Heston stochastic volatility model.
We obtained existence and uniqueness results for the associated variational inequality,
in suitable weighted Sobolev spaces, following up on the work of P. Feehan et al. (2011, 2015, 2016) (cf. 101). We also established some qualitative properties of the value function (monotonicity, strict convexity, smoothness) 100.
(iii) A probabilistic approach to the smoothness of the free boundary in the optimal stopping of a one-dimensional diffusion (work in progress with T. De Angelis, University of Torino).
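The binomial (CRR) approximation whose convergence to the Black-Scholes American put price is analyzed in (i) can be sketched as follows (parameters are illustrative only):

```python
import math

# Illustrative Cox-Ross-Rubinstein binomial pricing of an American put.
# At each node we take the maximum of the immediate exercise value and
# the discounted risk-neutral expectation of the two child nodes.
def american_put_crr(S0, K, r, sigma, T, n):
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Option values at maturity: node j has price S0 * u**j * d**(n - j).
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for step in range(n - 1, -1, -1):
        values = [max(max(K - S0 * u**j * d**(step - j), 0.0),
                      disc * (p * values[j + 1] + (1 - p) * values[j]))
                  for j in range(step + 1)]
    return values[0]

price = american_put_crr(100.0, 100.0, 0.05, 0.2, 1.0, 500)
# price is close to the American put value (about 6.09 for these parameters)
```

The analysis in (i) concerns precisely how fast `american_put_crr(..., n)` approaches the continuous-time price as the number of steps n grows.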

The 3rd edition of the book Applied Stochastic Control of Jump
Diffusions (Springer, 2019) by B. Øksendal and A. Sulem 15 contains recent developments within stochastic control and its applications. In particular, there is a new chapter devoted to a
comprehensive presentation of financial markets modelled by jump diffusions,
one on backward stochastic differential equations and risk measures, and an advanced stochastic control chapter including optimal control of mean-field systems, stochastic differential games and stochastic Hamilton-Jacobi-Bellman equations.

J. Guyon and co-authors have investigated the modeling of the volatility of financial markets 25, 24, 26. In particular, the (mostly) path-dependent nature of volatility has been shown in 25, an article that has been downloaded 7,000+ times on SSRN. Path-dependent volatility (PDV) provides a new paradigm of volatility modeling, which can be mixed with stochastic volatility (PDSV) to account for the exogenous part of volatility. In 88, J. Guyon has uncovered a remarkable property of the S&P 500 and VIX markets, which he called inversion of convex ordering. In 24, M. El Amrani and J. Guyon have shown that, contrary to a common belief in the mathematical finance community, the term-structure of the at-the-money skew does not follow a power law. In 26, J. Guyon and S. Mustapha have calibrated neural stochastic differential equations jointly to S&P 500 smiles, VIX futures, and VIX smiles.

Life insurance contracts are popular and involve very large portfolios, for a total amount of trillions of euros in Europe. To manage them over the long run, insurance companies perform Asset and Liability Management (ALM): it consists in investing the deposits of policyholders in different asset classes such as equity, sovereign bonds, corporate bonds and real estate, while respecting a performance warranty with a profit sharing mechanism for the policyholders. A typical question is how to determine an allocation strategy which maximizes the rewards and satisfies the regulatory constraints. The management of these portfolios is quite involved: the different cash reserves imposed by the regulator, the profit sharing mechanisms, and the way the insurance company determines the crediting rate to its policyholders make the whole dynamics path-dependent and rather intricate. A. Alfonsi et al. have developed in 49 a synthetic model that takes into account the main features of the life insurance business. This model is then used to determine the allocation that minimizes the Solvency Capital Requirement (SCR). In 50, numerical methods based on Multilevel Monte-Carlo algorithms are proposed to calculate the SCR at future dates, which is of practical importance for insurance companies. The standard formula prescribed by the regulator is basically obtained from conditional expected losses given standard shocks that occur in the future.

Optimal transport problems arise in a wide range of topics, from economics to physics.
There exist different numerical methods to solve optimal transport problems.
A popular one is the Sinkhorn algorithm, which uses an entropic regularization of the cost function and then iterative Bregman projections. Alfonsi et al. 52 have proposed an alternative relaxation that consists in replacing the constraint of matching exactly the marginal laws by constraints of matching some moments. Using Tchakaloff's theorem, it is shown that the optimum is reached by a discrete measure, and the optimal transport is found by using a (stochastic) gradient descent that determines the weights and the points of the discrete measure. The number of points depends only on the number of moments considered, and therefore does not depend on the dimension of the problem. The method has then been developed in 51 in the case of symmetric multimarginal optimal transport problems. These problems arise in quantum chemistry with the Coulomb interaction cost. The problem is in dimension
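The Sinkhorn iteration mentioned above can be sketched in a few lines; this is a minimal illustration on toy discrete marginals with a quadratic cost, not the moment-constrained relaxation of 52:

```python
import math

# Minimal Sinkhorn sketch for entropy-regularized discrete optimal transport.
# The alternating scalings u, v are the iterative Bregman projections onto
# the two marginal constraints; eps is the entropic regularization parameter.
def sinkhorn(cost, mu, nu, eps=0.1, n_iter=500):
    n, m = len(mu), len(nu)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(n_iter):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Approximate optimal transport plan.
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

cost = [[abs(i - j) ** 2 for j in range(3)] for i in range(3)]
plan = sinkhorn(cost, [1 / 3] * 3, [1 / 3] * 3)
row_sums = [sum(row) for row in plan]   # both marginals are matched up to tolerance
```

With a small eps the plan concentrates near the (here diagonal) unregularized optimal coupling; the regularization trades a small bias for a fast, dimension-friendly iteration.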

In 73, O.Bencheikh and B. Jourdain prove that the weak error between a stochastic differential equation with nonlinearity in the sense of McKean given by moments and its approximation by the Euler discretization with time-step

In 97, B. Jourdain and A. Tse propose a generalized version of the central limit theorem for nonlinear functionals of the empirical measure of i.i.d. random variables, provided that the functional satisfies some regularity assumptions for the associated linear functional derivatives of various orders. Using this result to deal with the contribution of the initialization, they check the convergence of fluctuations between the empirical measure of particles in an interacting particle system and its mean-field limiting measure. In 82, R. Flenghi and B. Jourdain pursue their study of the central limit theorem for nonlinear functionals of the empirical measure of random variables by relaxing the i.i.d. assumption to deal with the successive values of an ergodic Markov chain. In 53, A. Alfonsi and B. Jourdain show that any optimal coupling for the quadratic Wasserstein distance

In mathematical finance, optimal transport problems with an additional martingale constraint are considered to handle the model risk, i.e. the risk of using an inadequate model.
The Martingale Optimal Transport (MOT) problem introduced in 71 provides model-free hedges and bounds on the prices of exotic options. The market prices of liquid call and put options give the marginal distributions of the underlying asset at each traded maturity. Under the simplifying assumption that the risk-free rate is zero, these probability measures are in increasing convex order, since by Strassen's theorem this property is equivalent to the existence of a martingale measure with the right marginal distributions. For an exotic payoff function of the values of the underlying on the time-grid given by these maturities, the model-free upper-bound (resp. lower-bound) for the price consistent with these marginal distributions is given by the following martingale optimal transport problem: maximize (resp. minimize) the integral of the payoff with respect to the martingale measure over all martingale measures with the right marginal distributions. Super-hedging (resp. sub-hedging) strategies are obtained by solving the dual problem. With J. Corbetta, A. Alfonsi and B. Jourdain 5 have studied sampling methods preserving the convex order for two probability measures
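The increasing convex order that underlies Strassen's theorem can be checked numerically on call prices, as in this toy illustration (the measures and strikes are made up, and this is not the sampling method of 5):

```python
# Illustrative check of the convex order via call prices E[(X-k)+]
# for discrete measures with equal means.
def call(points, weights, k):
    return sum(w * max(x - k, 0.0) for x, w in zip(points, weights))

# mu: unit mass at 1; nu: fair coin on {0, 2} -- a mean-preserving spread of mu.
mu_pts, mu_w = [1.0], [1.0]
nu_pts, nu_w = [0.0, 2.0], [0.5, 0.5]

strikes = [x / 10 for x in range(-10, 31)]   # grid covering both supports
dominated = all(call(mu_pts, mu_w, k) <= call(nu_pts, nu_w, k) + 1e-12
                for k in strikes)
# dominated is True: nu dominates mu in the convex order, so by Strassen's
# theorem a martingale coupling exists (here: X = 1 jumps to 0 or 2 with
# probability 1/2 each).
```

In market terms: call prices increasing in maturity at every strike (with zero rates) is exactly the arbitrage-free condition that makes the MOT problem feasible.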

Martingale Optimal Transport thus provides bounds for the prices of exotic options that take into account the risk-neutral marginal distributions of the underlying assets deduced from the market prices of vanilla options. For these bounds to be robust, the stability of the optimal value with respect to these marginal distributions is needed. Because of the global martingale constraint, stability is far less obvious than in optimal transport (it even fails in multiple dimensions). B. Jourdain supervised the PhD thesis of W. Margheriti, devoted to this issue and related problems. He also initiated a collaboration on this topic with M. Beiglböck, one of the founders of MOT theory.
In 91, B. Jourdain and W. Margheriti exhibit a new family of martingale couplings between two one-dimensional probability measures

In order to exploit the natural links between quantization and convex order in view of numerical methods for (Weak) Martingale Optimal Transport, B. Jourdain has initiated a fruitful collaboration with G. Pagès, one of the leading experts in quantization. For two compactly supported probability measures in the convex order, any stationary quadratic primal quantization of the smaller remains dominated by any dual quantization of the larger. B. Jourdain and G. Pagès prove in 96 that any martingale coupling between the original probability measures can be approximated by a martingale coupling between their quantizations in Wasserstein distance, with a rate given by the quantization errors, but also in the much finer adapted Wasserstein distance. In 94, in order to approximate a sequence of more than two probability measures in the convex order by finitely supported probability measures still in the convex order, they propose to alternate transitions according to a martingale Markov kernel mapping a probability measure in the sequence to the next and dual quantization steps. In the case of ARCH models, the noise has to be truncated to enable the dual quantization steps. They exhibit conditions under which the ARCH model with truncated noise is dominated by the original ARCH model in the convex order and also analyse the error of the scheme combining truncation of the noise according to primal quantization with the dual quantization steps. In 95, they prove that for compactly supported one dimensional probability distributions having a log-concave density,

Calibration problems in finance can be cast as Schrödinger problems. Due to the no-arbitrage condition, martingale Schrödinger problems must be considered. To jointly calibrate S&P 500 (SPX) and VIX options, J. Guyon has introduced dispersion-constrained martingale Schrödinger problems. In 24, he solved for the first time this longstanding puzzle of quantitative finance that has often been described as the Holy Grail of volatility modeling: build a model that jointly and exactly calibrates to the prices of SPX options, VIX futures, and VIX options. He did so using a nonparametric, discrete-time, minimum-entropy approach. He established a strong duality theorem and characterized the absence of joint SPX/VIX arbitrage. The minimum-entropy jointly calibrating model is explicit in terms of the dual Schrödinger portfolio, i.e., the maximizer of the dual problem, should it exist, and is numerically computed using an extension of the Sinkhorn algorithm. Numerical experiments show that the algorithm performs very well in both low and high volatility regimes.

The pricing of American options, or of their Bermudan approximations, amounts to solving a backward dynamic programming equation, in which the main difficulty comes from the conditional expectation involved in the computation of the continuation value.

In 102, B. Lapeyre and J. Lelong study neural network approximations of conditional expectations. They prove the convergence of the well-known Longstaff and Schwartz algorithm when the standard least-squares regression on a finite-dimensional vector space is replaced by a neural network approximation, and illustrate the numerical efficiency of the method on several numerical examples. Its stability with respect to a change of parameters such as interest rate and volatility is shown. The numerical study shows that training the neural network with only a few chosen points in the parameter grid allows efficient pricing for a whole range of parameters.
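The backbone of the Longstaff-Schwartz algorithm can be sketched as follows; for readability this toy version uses a plain linear least-squares regression for the continuation value (102 replaces this regression step with a neural network), and all parameters are illustrative:

```python
import math
import random

random.seed(0)

# Toy Longstaff-Schwartz sketch for a Bermudan put under Black-Scholes.
S0, K, r, sigma, T, steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 10, 5000
dt = T / steps
disc = math.exp(-r * dt)

# Simulate geometric Brownian motion paths on the exercise grid.
S = [[S0] * n_paths]
for _ in range(steps):
    S.append([s * math.exp((r - 0.5 * sigma ** 2) * dt
                           + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
              for s in S[-1]])

def linfit(xs, ys):
    """Least-squares line a + b*x: the regression step of the algorithm."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs) or 1.0
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return my - b * mx, b

cash = [max(K - s, 0.0) for s in S[-1]]              # payoff at maturity
for t in range(steps - 1, 0, -1):
    cash = [disc * c for c in cash]                  # discount one period
    itm = [i for i in range(n_paths) if K > S[t][i]] # in-the-money paths only
    if len(itm) < 2:
        continue
    a, b = linfit([S[t][i] for i in itm], [cash[i] for i in itm])
    for i in itm:
        exercise = K - S[t][i]
        if exercise > a + b * S[t][i]:               # exercise beats continuation
            cash[i] = exercise

price = disc * sum(cash) / n_paths
# price is near the American put value (around 6 for these parameters)
```

The regression basis is exactly the piece that becomes a bottleneck in high dimension, which is what motivates the neural-network and Gaussian-process variants discussed below.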

In 84, two efficient techniques, called GPR Tree (GPR-Tree) and GPR Exact Integration (GPR-EI), are proposed to compute the price of American basket options. Both techniques are based on Machine Learning, exploited together with binomial trees or with a closed formula for integration. On the exercise dates, the value of the option is first computed as the maximum between the exercise value and the continuation value and then approximated by means of Gaussian Process Regression. In 86, an efficient method is provided to compute the price of multi-asset American options, based on Machine Learning, Monte Carlo simulations and variance reduction techniques. Numerical tests show that the proposed algorithm is fast and reliable, and can handle American options on very large baskets of assets, overcoming the curse of dimensionality issue.

Machine Learning in the Energy and Commodity Market.
Evaluating moving average options is a computational challenge for the energy and commodity market,
as the payoff of the option depends on the prices of underlying assets observed on a moving window.
An efficient method for pricing Bermudan style moving average options is presented in 85, based on Gaussian Process Regression and Gauss-Hermite quadrature.
This method is tested in the Clewlow-Strickland model, the reference framework for modeling energy commodity prices,
as well as in the Heston (non-Gaussian) model and in the rough Bergomi model, which involves a double non-Markovian feature, since the whole history of the volatility process impacts the future distribution of the process.
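The Gauss-Hermite quadrature used as a building block in 85 approximates expectations against a Gaussian law with very few nodes; a self-contained sketch with the standard five-point rule (toy integrands, not the option-pricing pipeline of 85):

```python
import math

# Gauss-Hermite quadrature: E[f(Z)] for Z ~ N(0,1) is approximated by
# (1/sqrt(pi)) * sum_i w_i f(sqrt(2) * x_i), where (x_i, w_i) are the
# nodes and weights of the physicists' Hermite polynomial of degree 5.
nodes = [-2.02018287045609, -0.958572464613819, 0.0,
         0.958572464613819, 2.02018287045609]
weights = [0.0199532420590459, 0.393619323152241, 0.945308720482942,
           0.393619323152241, 0.0199532420590459]

def gauss_hermite_mean(f):
    return sum(w * f(math.sqrt(2.0) * x)
               for x, w in zip(nodes, weights)) / math.sqrt(math.pi)

second_moment = gauss_hermite_mean(lambda z: z * z)           # exact: E[Z^2] = 1
exp_moment = gauss_hermite_mean(lambda z: math.exp(0.2 * z))  # exact: e^{0.02}
```

The five-point rule integrates polynomials up to degree 9 exactly, which is why smooth conditional expectations over Gaussian increments can be evaluated so cheaply inside a backward induction.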

Our project team is deeply involved in numerical probability, aiming at pushing numerical methods toward effective implementation. This numerical orientation is supported by a mathematical expertise which permits a rigorous analysis of the algorithms and provides theoretical support for the study of rates of convergence and the introduction of new tools for the improvement of numerical methods. This activity in the MathRisk team is strongly related to the development of the Premia software.

The approximation of SDEs and more general Markovian processes is a very active field. One important axis of research is the analysis of the weak error, that is the error between the law of the process and the law of its approximation. A standard way to analyse this is to focus on marginal laws, which boils down to the approximation of semigroups. The weak error of standard approximation schemes such as the Euler scheme has been widely studied, as well as higher order approximations such as those obtained with the Richardson-Romberg extrapolation method.
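The first-order weak error of the Euler scheme, and its cancellation by Richardson-Romberg extrapolation, can be seen on a toy example: for geometric Brownian motion the Euler mean is available in closed form, so the effect is visible without Monte Carlo noise (parameters are illustrative):

```python
import math

# Weak-error illustration on E[X_T] for GBM dX = r X dt + sigma X dW.
# The Euler scheme with n steps of size h = T/n has mean X0 * (1 + r*h)^n,
# so the O(h) weak error and its Richardson-Romberg cancellation
# 2*E_fine - E_coarse (leaving an O(h^2) error) can be computed exactly.
X0, r, T = 1.0, 0.1, 1.0
exact = X0 * math.exp(r * T)

def euler_mean(n):
    h = T / n
    return X0 * (1.0 + r * h) ** n

coarse, fine = euler_mean(10), euler_mean(20)
romberg = 2.0 * fine - coarse      # cancels the O(h) term

err_euler = abs(coarse - exact)
err_romberg = abs(romberg - exact)
# err_romberg is roughly two orders of magnitude smaller than err_euler
```

Halving the step roughly halves the Euler error (first order), while the extrapolated error shrinks quadratically, which is the mechanism behind higher-order weak approximations.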

Stochastic Volterra equations (SVEs) provide a wide family of non-Markovian stochastic processes. They were introduced in the early 80's by Berger and Mizel and have received recent attention in mathematical finance to model volatility: it has been noticed that SVEs with a fractional convolution kernel

In collaboration with L. Caramellino and G. Poly, V. Bally has developed a Malliavin-type calculus for a general class of random variables which are not assumed to be Gaussian (as is the case in the standard Malliavin calculus). This is an alternative to the

As an application of the above methodology, V. Bally et al. have studied several limit theorems of Central Limit type (see 65 and 63). In particular, they estimate the total variation distance between random polynomials, and prove a universality principle for the variance of the number of roots of trigonometric polynomials with random coefficients 67.

V. Bally, L. Caramellino and A. Kohatsu-Higa study the regularity properties of the law of the solutions of jump-type SDEs 61. They use an interpolation criterion (proved in 69) combined with Malliavin calculus for jump processes. They also use a Gaussian approximation of the solution combined with Malliavin calculus for Gaussian random variables. Another approach to the same regularity property, based on a semigroup method, has been developed by Bally and Caramellino in 66. An application to the Boltzmann equation is given by V. Bally in 69. Along the same lines but with a different application, the total variation distance between a jump equation and its Gaussian approximation is studied by V. Bally and his PhD student Y. Qin 68 and by V. Bally, V. Rabiet and D. Goreac 67. A general discussion of the link between total variation distance and integration by parts is given in 64. Finally, V. Bally et al. estimate in 62 the probability that a diffusion process remains in a tube around a smooth function.

In 90, B. Jourdain and A. Kebaier are interested in deriving non-asymptotic error bounds for the multilevel Monte Carlo method. As a first step, they deal with the explicit Euler discretization of stochastic differential equations with a constant diffusion coefficient. As long as the deviation is below an explicit threshold, they check that the multilevel estimator satisfies a Gaussian-type concentration inequality optimal in terms of the variance.

Approximation of conditional expectations.
The approximation of conditional expectations and the computation of expectations involving nested conditional expectations are important topics with a broad range of applications. In risk management, such quantities typically occur in the computation of the regulatory capital such as future Value-at-Risk or CVA. A. Alfonsi et al. 50
have developed a Multilevel Monte-Carlo (MLMC) method to calculate the Solvency Capital Requirement of insurance companies at future dates.
The main advantage of the method is that it avoids regression issues and has the same computational complexity as a plain Monte-Carlo method (i.e. a computational time in
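The multilevel Monte Carlo idea behind such methods can be sketched on a toy example: the telescoping sum E[P_L] = E[P_0] + Σ E[P_l − P_{l−1}] is estimated with coupled Euler paths at consecutive discretization levels (this is a generic illustration on GBM, not the nested SCR estimator of 50; all parameters are made up):

```python
import math
import random

random.seed(1)

# Minimal multilevel Monte Carlo sketch for E[X_T] under GBM,
# dX = r X dt + sigma X dW, with coupled Euler paths: the coarse path
# reuses the fine path's Brownian increments summed two by two.
X0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

def euler_pair(level):
    """One coupled sample (fine, coarse) with 2**level fine steps."""
    nf = 2 ** level
    hf = T / nf
    xf, xc = X0, X0
    increments = []
    for k in range(nf):
        dw = math.sqrt(hf) * random.gauss(0.0, 1.0)
        increments.append(dw)
        xf += r * xf * hf + sigma * xf * dw
        if level > 0 and k % 2 == 1:      # coarse step with summed increments
            hc = 2.0 * hf
            xc += r * xc * hc + sigma * xc * (increments[-2] + increments[-1])
    return xf, xc

def mlmc(L, samples_per_level):
    est = 0.0
    for level in range(L + 1):
        n = samples_per_level[level]
        s = 0.0
        for _ in range(n):
            fine, coarse = euler_pair(level)
            s += fine - (coarse if level > 0 else 0.0)
        est += s / n                      # level-0 mean + correction terms
    return est

estimate = mlmc(4, [20000, 10000, 5000, 2500, 1250])
# E[X_T] = X0 * exp(r*T) ≈ 1.0513; the MLMC estimator should land close to it
```

Because the coupled differences have small variance, far fewer samples are needed at the fine (expensive) levels, which is what keeps the overall complexity close to that of a plain Monte Carlo method.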

We have focused above on the research program of the last four years. We refer to the previous MathRisk activity report for a description of the research done earlier, in particular on Liquidity and Market Microstructure 54, 48, 4, dependence modelling 98, interest rate modeling 47, Robust option pricing in financial markets with imperfections 77, 105, 12, 11, Mean field control and Stochastic Differential Games 104, 89, 109, Stochastic control and optimal stopping (games) under nonlinear expectation 79, 81, 80, 78, robust utility maximization 108, 109, 83, Generalized Malliavin calculus and numerical probability.

The domains of application are quantitative finance and insurance, with emphasis on risk modeling and control. In particular, the project-team MathRisk focuses on financial modeling and calibration, systemic risk, option pricing and hedging, portfolio optimization, and risk measures.

Our work aims to contribute to a better management of risk in the banking and insurance systems, in particular by the study of systemic risk, asset price modeling, stability of financial markets.

MathRisk organized an international conference on numerical methods in finance in June 2023 in Udine (Italy) to celebrate the 25th anniversary of the Premia software and the MathRisk team.

MathRisk had a very successful evaluation in 2023.

On March 14, 2023, FIFA changed the format of the 2026 FIFA World Cup based on Julien Guyon's articles

The new release Premia 25 was delivered to the Consortium on September 29, 2023. It contains the following newly implemented algorithms.

I. Machine Learning algorithms and Risk Management:

• Optimal Stopping via Randomized Neural Networks. C. Herrera, F. Krach, P. Ruyssen, J. Teichmann
• Deep Learning-Based Least Square Forward-Backward Stochastic Differential Equation Solver for High-Dimensional Derivative Pricing. J. Liang, Z. Xu, P. Li. Quantitative Finance, 21-8, 2021.
• The Deep Parametric PDE Method: Application to Option Pricing. K. Glau, L. Wunderlich. Applied Mathematics and Computation, 432, 2022.
• Computing XVA for American basket derivatives by Machine Learning techniques. L. Goudenège, A. Molent, A. Zanette
• Backward Hedging for American Options with Transaction Costs. L. Goudenège, A. Molent, A. Zanette
• Pricing high-dimensional American options by kernel ridge regression. W. Hu, T. Zastawniak. Quantitative Finance, 20-5, 2020.
• KrigHedge: Gaussian Process Surrogates for Delta Hedging. M. Ludkovski, Y. Saporito. Applied Mathematical Finance, 28-4, 2021.

II. Advanced numerical methods for Equity Derivatives:

• Option Pricing using Quantum Computers. N. Stamatopoulos, D. J. Egger, Y. Sun, C. Zoufal, R. Iten, N. Shen, S. Woerner. Quantum, 4-291, 2020.
• The interpolated drift implicit Euler scheme Multilevel Monte Carlo method for pricing Barrier options and applications to the CIR and CEV models. M. Ben Derouich, A. Kebaier
• Hybrid multifactor scheme for stochastic Volterra equations. S. E. Rømer
• On the discrete-time simulation of the rough Heston model. A. Richard, X. Tan, F. Yang. SIAM Journal on Financial Mathematics, 14-1, 2023.
• The stochastic collocation Monte Carlo sampler: highly efficient sampling from ‘expensive’ distributions. L. A. Grzelak, J. A. S. Witteveen, M. Suarez-Taboada, C. W. Oosterlee. Quantitative Finance, 19-2, 2019.

A. Sulem, H. Amini, and their PhD student Z. Cao have studied the control of interbank contagion and the dynamics and stability of complex financial networks, using techniques from random graphs and stochastic control. In 20, we obtain limit results for default cascades in sparse heterogeneous financial networks subject to an exogenous macroeconomic shock. These limit theorems, for different system-wide wealth aggregation functions, allow us to provide systemic risk measures in relation with the structure and heterogeneity of the financial network. These results are applied to determine the optimal policy for a social planner to target interventions during a financial crisis, with a budget constraint and under partial information on the financial network. Banks can impact each other due to large-scale liquidations of similar assets or non-payment of liabilities. In 57, we present a general tractable framework for understanding the joint impact of fire sales and default cascades on systemic risk in complex financial networks. The effect of heterogeneity in network structure and price impact function on the final size of the default cascade and the fire sales loss is investigated.

In 55, we provide central limit theorems to analyze the combined effects of fire sales and default cascades on systemic risk within stochastic financial networks. Price impact is modeled through a specifically defined inverse demand function. Our study presents various limit theorems that delve into the dynamics of total shares sold and the equilibrium pricing of illiquid assets in a streamlined fire sales context. We show that the equilibrium prices of these assets exhibit asymptotically Gaussian fluctuations. In our numerical experiments, we demonstrate how our central limit theorems can be applied to construct confidence intervals for the magnitude of contagion and the extent of losses due to fire sales.

In 56, we study multidimensional Cramér-Lundberg risk processes where agents, located on a large sparse network, receive losses from their neighbors. To reduce the dimensionality of the problem, we introduce a classification of agents according to an arbitrary countable set of types. The ruin of any agent triggers losses for all of its neighbors. We consider the case where the loss arrival process induced by the ensemble of ruined agents follows a Poisson process with a general intensity function that scales with the network size. When the size of the network goes to infinity, we provide explicit ruin probabilities at the end of the loss propagation process for agents of any type. These limiting probabilities depend, in addition to the agents' types and the network structure, on the loss distribution and the loss arrival process. For more complex risk processes on open networks, where in addition to the internal networked risk processes the agents receive losses from external users, we provide bounds on ruin probabilities.

Agnès Sulem, Rui Chen, Andreea Minca and Roxana Dumitrescu have studied mean-field BSDEs with a generalized mean-field operator which can capture system influence with higher order interactions, such as those occurring in an inhomogeneous random graph.

We interpret the BSDE solution as a dynamic global risk measure for a representative bank whose risk attitude is influenced by the system. This influence can take a wide range of forms, including the average system state or the average intensity of system interactions 22.

This opens the path towards using dynamic risk measures induced by mean-field BSDEs as a complementary approach to systemic risk measurement.

Extensions to Graphon BSDEs with jumps are studied by H. Amini, A. Sulem, and their PhD student Z. Cao in 58. Graphons have recently emerged as a tool to analyze heterogeneous interactions in mean-field systems and game theory.
Existence, uniqueness and stability of solutions under some regularity assumptions are established. We also prove convergence results for interacting mean-field particle systems with inhomogeneous interactions to graphon mean-field BSDE systems.

In 59, we study continuous stochastic games with inhomogeneous mean field interactions on large networks and explore their graphon limits. We consider a model with a continuum of players, where each player's dynamics involve not only mean field interactions but also individual jumps induced by a Poisson random measure. We examine the case of controlled dynamics, with control terms present in the drift, diffusion, and jump components. We introduce the graphon game model based on a graphon controlled stochastic differential equation system with jumps, which can be regarded as the limiting case of a finite game's dynamic system as the number of players goes to infinity. Under some general assumptions, we establish the existence and uniqueness of Markovian graphon equilibria. We then provide convergence results on the state trajectories and their laws, transitioning from finite game systems to graphon systems. We also study approximate equilibria for finite games on large networks, using the graphon equilibrium as a benchmark. The rates of convergence are analyzed under various underlying graphon models and regularity assumptions.

This is ongoing work in collaboration with Mathieu Laurière (NYU Shanghai). We develop theoretical and numerical analysis of extended Graphon Mean Field Games (GMFG) in a discrete-time setting. On the theoretical side, we provide a rigorous analysis of the existence of approximate Nash equilibria of the GMFG system by considering joint state-action distributions, and we refine the existence proof by distinguishing pure policies and mixed policies. On the numerical side, we explore learning schemes (e.g. reinforcement learning) to compute graphon mean field equilibria.

D. Lamberton and Tiziano De Angelis (University of Torino) are working on the optimal stopping problem of a one-dimensional diffusion in finite horizon. They develop a probabilistic approach to the regularity of the associated free boundary.

They have derived a probabilistic proof of the differentiability of the free boundary for the optimal stopping problem of a one-dimensional diffusion, and are working on extensions of these results to higher order derivatives.

Some of the results on the American put price in the Heston model obtained in joint work with Giulia Terenzi have also been improved. In particular, estimates for the time derivative are now available without the Feller condition.

For many examples of couples

In 44, they complete the analysis of the Martingale Wasserstein Inequality started by checking that this inequality fails in dimension

While many questions in robust finance can be posed in the martingale optimal transport framework or its weak extension, others, like the subreplication price of VIX futures, the robust pricing of American options or the construction of shadow couplings, necessitate additional information to be incorporated into the optimization problem beyond that of the underlying asset. In 43, B. Jourdain and G. Pammer take this extra information into account by introducing an additional parameter to the weak martingale optimal transport problem. They prove the stability of the resulting problem with respect to the risk-neutral marginal distributions of the underlying asset. Finally, they deduce the stability of the three previously mentioned motivating examples.

In 29, B. Jourdain and G. Pagès are interested in comparing solutions to stochastic Volterra equations for the convex order on the space of continuous

In 42, B. Jourdain and G. Pagès are interested in the propagation of convexity by the strong solution to a one-dimensional Brownian stochastic differential equation with coefficients Lipschitz in the spatial variable uniformly in the time variable, and in the convex ordering between the solutions of two such equations. They prove that while these properties hold without further assumptions for convex functions of the processes at one instant only, an assumption almost amounting to spatial convexity of the diffusion coefficient is needed for the extension to convex functions at two instants. Under this spatial convexity of the diffusion coefficients, the two properties even hold for convex functionals of the whole path. For directionally convex functionals, the spatial convexity of the diffusion coefficient is no longer needed. The method of proof consists in first establishing the results for time discretization schemes of Euler type and then transferring them to their limiting Brownian diffusions. They thus exhibit approximations which avoid convexity arbitrages by preserving convexity propagation and comparison and can be computed by Monte Carlo simulation.

In the spirit of Guyon and Lekeufack (2023) 25, who are interested in the dependence of volatility indices (e.g. the VIX) on the paths of the associated equity indices (e.g. the S&P 500), H. Andrès, A. Boumezoued and B. Jourdain study in 37 how implied volatility can be predicted using the past trajectory of the underlying asset price. The empirical study reveals that a large part of the movements of the at-the-money (ATM) implied volatility for maturities up to two years can be explained using the past returns and their squares. Moreover, this feedback effect gets weaker as the maturity increases, and up to four years of the past evolution of the underlying price should be used for the prediction. Building on this new stylized fact, H. Andrès, A. Boumezoued and B. Jourdain fit to historical data a parsimonious version of the SSVI parameterization (Gatheral and Jacquier, 2014) of the implied volatility surface relying on only four parameters, and show that the two parameters ruling the ATM implied volatility as a function of the maturity exhibit a path-dependent behavior with respect to the underlying asset price. By adding this feedback effect to the path-dependent volatility model of Guyon and Lekeufack for the underlying asset price and by specifying a hidden semi-Markov diffusion model for the residuals of these two parameters and the two other parameters, they are able to simulate highly realistic, arbitrage-free paths of implied volatility surfaces.
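
The SSVI parameterization mentioned above (Gatheral and Jacquier, 2014) gives the total implied variance as an explicit function of log-moneyness and the ATM total variance. The sketch below implements that formula with the usual power-law curvature function; the parameter values are purely illustrative and not calibrated to any market data.

```python
import math

def ssvi_total_variance(k, theta, rho, eta, gamma):
    """SSVI total implied variance w(k, theta) of Gatheral-Jacquier (2014),
    with the power-law curvature function phi(theta) = eta * theta**(-gamma).
    k is the log-moneyness, theta the ATM total implied variance at the
    given maturity, and rho the correlation-like skew parameter."""
    phi = eta * theta ** (-gamma)
    return 0.5 * theta * (1.0 + rho * phi * k
                          + math.sqrt((phi * k + rho) ** 2 + 1.0 - rho ** 2))

# Illustrative parameters (assumed, not fitted)
theta, rho, eta, gamma = 0.04, -0.7, 1.0, 0.4

# At the money (k = 0) the formula collapses to w = theta
w_atm = ssvi_total_variance(0.0, theta, rho, eta, gamma)
```

With rho < 0 the surface exhibits the usual negative equity skew: the total variance is larger for negative log-moneyness than for positive log-moneyness of the same magnitude.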

With N. Vadillo Fernandez, A. Alfonsi has proposed in 35 a joint model for temperature and electricity spot prices in order to quantify the risk of derivatives such as quanto contracts, which deal with the fluctuations of climate (Heating Degree Day index) and electricity. We present an estimation method for this model and give analytic formulas for the average payoff and for a static quadratic hedging strategy based on HDD and electricity spot options.
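
To make the underlying quantities concrete, here is a minimal sketch of the Heating Degree Day index and of a simple HDD call payoff. The 18°C base temperature is the usual European convention, and the strike/tick values are hypothetical; none of this reproduces the specific model of the paper.

```python
def hdd_index(daily_avg_temps, base=18.0):
    """Heating Degree Day index over a period: sum of max(0, base - T_avg)
    over the daily average temperatures. The 18 degrees Celsius base is the
    common European convention (an assumption here, not taken from the paper)."""
    return sum(max(0.0, base - t) for t in daily_avg_temps)

def hdd_call_payoff(daily_avg_temps, strike, tick):
    """Payoff of a hypothetical HDD call: tick size times the positive part
    of the index minus the strike."""
    return tick * max(0.0, hdd_index(daily_avg_temps) - strike)

temps = [10.0, 12.5, 20.0, 5.0]  # illustrative daily average temperatures
idx = hdd_index(temps)           # 8 + 5.5 + 0 + 13 = 26.5
```

Quanto contracts of the kind discussed above combine such a temperature leg with an electricity price leg, which is why a joint model of the two risk factors is needed.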

J. Guyon and J. Lekeufack 25 learn from data that volatility is mostly path-dependent: up to 90% of the variance of the implied volatility of equity indexes is explained endogenously by past index returns, and up to 65% for (noisy estimates of) future daily realized volatility. The path-dependency they uncover is remarkably simple: a linear combination of a weighted sum of past daily returns and the square root of a weighted sum of past daily squared returns, with different time-shifted power-law weights capturing both short and long memory. This simple model, which is homogeneous in volatility, is shown to consistently outperform existing models across equity indexes and train/test sets for both implied and realized volatility. It suggests a simple continuous-time path-dependent volatility (PDV) model that may be fed historical or risk-neutral parameters. The weights can be approximated by superpositions of exponential kernels to produce Markovian models. In particular, J. Guyon and J. Lekeufack propose a 4-factor Markovian PDV model which captures all the important stylized facts of volatility, produces very realistic price and (rough-like) volatility paths, and jointly fits SPX and VIX smiles remarkably well. They thus show that a continuous-time Markovian parametric stochastic volatility (actually, PDV) model can practically solve the joint SPX/VIX smile calibration problem.
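
A minimal sketch of the two path-dependent features described above: a trend feature R1 (weighted sum of past daily returns) and a volatility feature R2 (square root of a weighted sum of past squared returns). Single exponential kernels stand in here for the paper's time-shifted power-law weights, and all decay rates and betas are illustrative, not calibrated.

```python
import math

def pdv_features(returns, lam1=0.1, lam2=0.05):
    """Trend feature R1 (exponentially weighted sum of past daily returns) and
    volatility feature R2 (square root of an exponentially weighted sum of past
    squared returns), computed recursively. Exponential kernels are stand-ins
    for the time-shifted power-law weights of the paper; decay rates are
    assumed values."""
    a1, a2 = math.exp(-lam1), math.exp(-lam2)
    r1, v2 = 0.0, 0.0
    for r in returns:
        r1 = a1 * r1 + (1.0 - a1) * r
        v2 = a2 * v2 + (1.0 - a2) * r * r
    return r1, math.sqrt(v2)

def pdv_volatility(returns, beta0=0.04, beta1=-0.1, beta2=0.6):
    """Volatility as a linear combination of the two features; beta1 < 0
    encodes the leverage effect. All betas are illustrative."""
    r1, r2 = pdv_features(returns)
    return beta0 + beta1 * r1 + beta2 * r2

vol_flat = pdv_volatility([0.0] * 100)    # no past moves: volatility is beta0
vol_down = pdv_volatility([-0.02] * 100)  # sustained sell-off raises volatility
```

Because the features are updated recursively, the resulting model is Markovian in (R1, v2), which is exactly what makes the 4-factor version of the model tractable.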

Using two years of S&P 500, Eurostoxx 50, and DAX data, M. El Amrani and J. Guyon 24 empirically investigate the term-structure of the at-the-money-forward (ATM) skew of equity indexes. While a power law (2 parameters) captures the term-structure well away from short maturities, the power law fit deteriorates considerably when short maturities are included. By contrast, 3-parameter shapes that look like power laws but do not blow up at vanishing maturity, such as time-shifted or capped power laws, are shown to fit well regardless of whether short maturities are included or not. Their study suggests that the term-structure of equity ATM skew has a power-law shape for maturities above 1 month but has a different behavior, and in particular may not blow up, for shorter maturities. The 3-parameter shapes are derived from non-Markovian variance curve models using the Bergomi-Guyon expansion. A simple 4-parameter term-structure similarly derived from the (Markovian) two-factor Bergomi model is also considered and provides even better fits. The extrapolated zero-maturity skew, far from being infinite, is distributed around a typical value of 1.5 (in absolute value).
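
The three shapes compared above can be written down directly. The sketch below contrasts the pure power law, which blows up as the maturity goes to zero, with the time-shifted and capped variants, which stay bounded; all parameter values are illustrative and not fitted to the data of the study.

```python
def power_law(tau, a, gamma):
    """Pure power-law ATM skew term-structure; blows up as tau -> 0."""
    return a * tau ** (-gamma)

def shifted_power_law(tau, a, gamma, delta):
    """Time-shifted power law: finite value a * delta**(-gamma) at tau = 0."""
    return a * (tau + delta) ** (-gamma)

def capped_power_law(tau, a, gamma, cap):
    """Power law capped at a finite short-maturity level."""
    return min(a * tau ** (-gamma), cap)

# Illustrative parameter values (assumed, not fitted to market data)
a, gamma, delta, cap = 0.35, 0.5, 0.01, 1.5
short = 1.0 / 365.0  # a one-day maturity

skew_pl_short = power_law(short, a, gamma)                 # explodes as maturity shrinks
skew_sh_short = shifted_power_law(short, a, gamma, delta)  # remains bounded
```

All three shapes agree for maturities well above the shift delta (or below the cap), which is why they are indistinguishable away from short maturities.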

J. Guyon and S. Mustapha 26 calibrate neural stochastic differential equations jointly to S&P 500 smiles, VIX futures, and VIX smiles. Drifts and volatilities are modeled as neural networks. Minimizing a suitable loss allows them to fit market data for multiple S&P 500 and VIX maturities. A one-factor Markovian stochastic local volatility model is shown to fit both smiles and VIX futures within bid-ask spreads. The joint calibration actually makes it a pure path-dependent volatility model, confirming the findings in [Guyon, 2022, The VIX Future in Bergomi Models: Fast Approximation Formulas and Joint Calibration with S&P 500 Skew].

G. Gazzani and J. Guyon consider a stochastic volatility model where the dynamics of the volatility process are described by a linear combination of an (exponentially) weighted sum of past daily returns and the square root of a weighted sum of past daily squared returns, in the spirit of 25. They discuss the influence of an additional parameter that allows them to reproduce the implied volatility smiles of SPX and VIX options within a 4-factor Markovian model (4FPDV). The empirical nature of this class of path-dependent volatility models (PDVs) comes with computational challenges, especially in relation to VIX options pricing and calibration. To address these challenges, they propose an accurate neural network approximation of the VIX, leveraging the Markovianity of the 4FPDV. This approximation is subsequently used to tackle the joint calibration problem of SPX and VIX options. They additionally discuss a local volatility extension of the 4FPDV, in order to exactly calibrate market smiles. A preprint will be posted in Q1 2024.

A. Alfonsi and E. Lombardo have developed in 19 high order schemes for the weak error of the CIR process, based on the construction proposed in a recent paper by A. Alfonsi and V. Bally. This analysis is being pursued to extend these results to the Heston model.
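
For context, a standard baseline against which such high order schemes are compared is the full-truncation explicit Euler scheme for CIR. The sketch below is only this first-order baseline, not the Alfonsi-Bally construction; it checks the weak error on the mean, for which the exact value is known in closed form.

```python
import math
import random

def cir_full_truncation_euler(x0, kappa, theta, sigma, T, n_steps, rng):
    """Baseline full-truncation Euler scheme for the CIR process
    dX = kappa*(theta - X) dt + sigma*sqrt(X) dW. Negative excursions are
    truncated inside the coefficients, and the terminal value is clipped
    at zero. This is a standard first-order weak scheme, NOT the high
    order scheme of the paper."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        xp = max(x, 0.0)
        x += kappa * (theta - xp) * dt + sigma * math.sqrt(xp * dt) * rng.gauss(0.0, 1.0)
    return max(x, 0.0)

rng = random.Random(0)
x0, kappa, theta, sigma, T = 0.04, 2.0, 0.09, 0.3, 1.0
samples = [cir_full_truncation_euler(x0, kappa, theta, sigma, T, 100, rng)
           for _ in range(10000)]
mean_est = sum(samples) / len(samples)
# Exact first moment of CIR: E[X_T] = theta + (x0 - theta) * exp(-kappa * T)
exact_mean = theta + (x0 - theta) * math.exp(-kappa * T)
```

High order schemes aim to shrink the discretization bias visible in such a comparison much faster than the O(dt) rate of this Euler baseline.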

In 34, A. Alfonsi studies the stochastic invariance of a convex domain for stochastic Volterra equations (SVEs). He also provides a second order approximation scheme for SVEs with multiexponential kernels which stay in some convex domain, and this is used for the multi-exponential Heston model. A. Alfonsi and A. Kebaier study the weak error for the approximation of stochastic Volterra equations and processes with rough paths.

In 41, R. Flenghi and B. Jourdain prove the joint convergence in distribution of q variables modulo one obtained as partial sums of a sequence of i.i.d. square integrable random variables multiplied by a common factor given by some function of an empirical mean of the same sequence. The limit is uniformly distributed over

The stratified resampling mechanism is one of the resampling schemes commonly used in the resampling steps of particle filters. In 40, R. Flenghi and B. Jourdain prove a central limit theorem for this mechanism under the assumption that the initial positions are independent and identically distributed and the weights are proportional to a positive function of the positions such that the image of their common distribution by this function has a nonzero component absolutely continuous with respect to the Lebesgue measure. This result relies on the convergence in distribution of the fractional part of partial sums of the normalized weights to some random variable uniformly distributed on
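
The stratified resampling mechanism itself is short enough to state as code: one uniform is drawn in each of the n strata ((j)/n, (j+1)/n) and matched against the cumulative weights. This is a standard textbook description of the scheme, not an implementation taken from the paper.

```python
import random

def stratified_resampling(weights, rng):
    """Stratified resampling for particle filters: draw one uniform in each
    stratum (j/n, (j+1)/n), j = 0..n-1, and select the ancestor index whose
    cumulative-weight interval contains it. weights must be positive and
    sum to one; the returned index list is nondecreasing."""
    n = len(weights)
    indices = []
    i, cum = 0, weights[0]
    for j in range(n):
        u = (j + rng.random()) / n  # uniform draw in the j-th stratum
        while u > cum:              # advance to the interval containing u
            i += 1
            cum += weights[i]
        indices.append(i)
    return indices

# With equal weights, each stratum selects its own particle regardless of the draws
idx_equal = stratified_resampling([0.25] * 4, random.Random(7))  # [0, 1, 2, 3]

idx = stratified_resampling([0.5, 0.3, 0.2], random.Random(42))
```

The fractional parts of the cumulative normalized weights that appear in this loop are precisely the quantities whose limiting uniform distribution drives the central limit theorem mentioned above.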

In 68, V. Bally and his PhD student Yifeng Qin obtain a total variation distance estimate between a jump equation and its Gaussian approximation, using Malliavin calculus techniques.

They approximate the invariant measure of a Markov process, solution of a stochastic equation with jumps, by using an Euler scheme with decreasing steps introduced by D. Lamberton and G. Pagès in the early 2000s in the case of diffusion processes driven by Brownian motion. The novelty here is that they deal with jump processes. Under appropriate non-degeneracy hypotheses, they have estimated the error in total variation distance and also proved convergence of the density functions.
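
The decreasing-step Euler scheme of Lamberton and Pagès can be illustrated on a Brownian toy example (not a jump process): an Ornstein-Uhlenbeck process whose invariant law is known. The weighted empirical measure, with weights equal to the step sizes, approximates the invariant measure; the step sequence gamma_k = k^(-1/3) below is one common choice, assumed here for illustration.

```python
import math
import random

def decreasing_step_euler_ou(sigma, n_steps, rng):
    """Euler scheme with decreasing steps gamma_k = k^(-1/3) for the
    Ornstein-Uhlenbeck process dX = -X dt + sigma dW, whose invariant
    measure is N(0, sigma^2 / 2). Returns the second moment of the
    empirical measure weighted by the step sizes, which converges to
    the invariant variance (Lamberton-Pages)."""
    x, h, m2 = 0.0, 0.0, 0.0
    for k in range(1, n_steps + 1):
        g = k ** (-1.0 / 3.0)
        m2 += g * x * x          # accumulate step-weighted second moment
        h += g                   # total weight
        x += -x * g + sigma * math.sqrt(g) * rng.gauss(0.0, 1.0)
    return m2 / h

var_est = decreasing_step_euler_ou(sigma=1.0, n_steps=100000, rng=random.Random(1))
# approaches the invariant variance sigma^2 / 2 = 0.5
```

Because the steps shrink, the discretization bias vanishes along the trajectory, so a single run approximates the invariant measure without any extrapolation; the cited work extends this mechanism to jump-driven equations.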

A. Alfonsi, V. Bally and A. Kohatsu-Higa (Ritsumeikan University) are working on a continuation of the above-mentioned work, on the approximation of the invariant measure for some nonlinear stochastic differential equations of McKean-Vlasov and Boltzmann type.

A. Alfonsi, J. Lelong and A. Kebaier are working on a numerical method to price American options based on the dual representation introduced by Rogers (2002).
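
For background, the dual representation of Rogers (2002) writes the American (here, Bermudan) price as an infimum over martingales of the expected pathwise maximum of the discounted payoff minus the martingale. The sketch below evaluates this bound under Black-Scholes with the crude choice M = 0, which is valid but loose; the numerical method mentioned above optimizes over martingales instead, and all parameter values here are hypothetical.

```python
import math
import random

def dual_upper_bound_bermudan_put(s0, strike, r, sigma, T, n_ex, n_paths, seed=0):
    """Monte Carlo evaluation of the Rogers (2002) dual bound for a Bermudan
    put under Black-Scholes:
        price = inf over martingales M of
                E[ max_k ( e^{-r t_k} (K - S_{t_k})^+ - M_{t_k} ) ].
    Here M = 0, giving a valid but loose upper bound; practical methods
    minimize over a parametrized family of martingales to tighten it."""
    rng = random.Random(seed)
    dt = T / n_ex
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s = s0
        best = max(strike - s0, 0.0)  # exercise value at time 0
        for k in range(1, n_ex + 1):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            best = max(best, math.exp(-r * k * dt) * max(strike - s, 0.0))
        total += best
    return total / n_paths

ub = dual_upper_bound_bermudan_put(s0=100.0, strike=100.0, r=0.05,
                                   sigma=0.2, T=1.0, n_ex=10, n_paths=5000)
```

Any martingale gives an upper bound, and the infimum is attained at the martingale part of the Snell envelope, which is what makes the dual approach attractive for constructing price bounds by simulation.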

A. Alfonsi and V. Bally have proposed a new approach based on the sewing lemma on the Wasserstein space to study existence and uniqueness of solutions of the Boltzmann equation 16. They are now working with L. Caramellino (Roma Univ.) to extend their results by using the stochastic sewing lemma recently proposed by Khoa Lê (2020).

We pursue the development of machine learning and deep learning techniques, in particular for McKean-Vlasov models of singular stochastic volatility, robust utility maximization, and high-dimensional optimal stopping problems. The corresponding algorithms are implemented in the Premia software.

We have started to build an option pricing framework using Qiskit, and have compared its efficiency with other techniques.

Chair Ecole Polytechnique - Ecole des Ponts ParisTech - Sorbonne Université - Société Générale "Financial Risks" of the Risk Foundation.

Postdoctoral grant : G.Szulda

Chair Ecole des Ponts ParisTech - Université Paris-Cité - BNP Paribas "Futures of Quantitative Finance"

Institut Europlace de Finance Louis Bachelier and Labex Louis Bachelier grant : "Multi-Agent Reinforcement Learning in Large Financial Networks with Heterogeneous Interactions" from November 2023.

A. Alfonsi

Co-organizer of the Mathrisk seminar “Méthodes stochastiques et finance”

Co-organizer of the Bachelier (Mathematical Finance) seminar (IHP, Paris).

V. Bally

Organizer of the seminar of the LAMA laboratory, Université Gustave Eiffel.

A. Sulem

Co-organizer of the seminar INRIA-MathRisk / Université Paris Diderot LPSM “Numerical probability and mathematical finance”

A. Alfonsi

Member of the editorial board of the Book Series "Mathématiques et Applications" of Springer.

J. Guyon

Associate editor of

B. Jourdain

Associate editor of

D. Lamberton

Associate editor of

A. Sulem

Associate editor of

A. Alfonsi

Member of the council of the Bachelier Finance Society

A. Sulem

Member of the Nominating Committee of the Bachelier Finance Society

B. Jourdain

Deputy head of the Labex Bézout.