The Inria project team MathRisk was created in 2013 as the follow-up of the MathFi project team founded in 2000. MathFi focused on financial mathematics, in particular on computational methods for pricing and hedging increasingly complex financial products.
The 2007 global financial crisis and its “aftermath crisis” abruptly highlighted the critical importance of a better understanding and management of risk.

The project team MathRisk addresses broad research topics embracing risk management in quantitative finance and insurance, as well as in related domains such as economics and sustainable development. In these contexts, the management of risk appears at different time scales, from high-frequency data to long-term life insurance management, raising challenging new modeling and numerical issues. We aim both at producing advanced mathematical tools, models, algorithms, and software in these domains, and at developing collaborations with various institutions involved in risk control. The scientific issues we consider include:

Option pricing and hedging, and risk management of portfolios in finance and insurance.
These remain crucial issues in finance and insurance, with the development of increasingly complex products and various regulatory requirements. Models must take into account multidimensional features, incompleteness issues, model uncertainties, and various market imperfections and defaults. It is also important to understand and capture the joint dynamics of the underlying assets and their volatilities. The insurance industry faces a large class of risks, including financial risks, and is subject to strict regulatory requirements. We aim at proposing modelling frameworks which capture the main specificities of life insurance contracts.

Systemic risk and contagion modeling.
These last years have been shaped by ever more interconnectedness among all aspects of human life. Globalization and economic growth, as well as technological progress, have led to more complex dependencies worldwide. While these complex networks facilitate physical, capital and informational transmission, they have an inherent potential to create and propagate distress and risk. The 2007-2009 financial crisis illustrated the significance of network structure in the amplification of initial shocks in the banking system to the level of the global financial system, leading to an economic recession.
We contribute to the study of systemic risk and financial networks, aiming to develop adequate tools for monitoring financial stability which accurately capture the risks due to a variety of interconnections in the financial system.

(Martingale) Optimal transport. Optimal transport problems arise in a wide range of topics, from economics to physics. In mathematical finance, an additional martingale constraint is considered to take the absence of arbitrage opportunities into account. The minimal and maximal costs provide price bounds robust to model risk, i.e. the risk of using an inadequate model. On the other hand, optimal transport is also useful to analyse mean-field interactions. We are in particular interested in particle approximations of McKean-Vlasov stochastic differential equations (SDEs) and the study of mean-field backward SDEs with applications to systemic risk quantification.

Advanced numerical probability methods and computational finance. Our project team is deeply involved in numerical probability, aiming at pushing numerical methods towards effective implementation. This numerical orientation is supported by a mathematical expertise which permits a rigorous analysis of the algorithms and provides theoretical support for the study of rates of convergence and the introduction of new tools for the improvement of numerical methods. Financial institutions and insurance companies, subject to increasingly stringent regulatory requirements such as FRTB or XVA computations, face numerical implementation challenges, and research focused on numerical efficiency is strongly needed. Overcoming the curse of dimensionality in computational finance is a crucial issue that we address by developing advanced stochastic algorithms and deep learning techniques.

The MathRisk project is strongly devoted to the development of new mathematical methods and numerical algorithms. Mathematical tools include stochastic modeling and stochastic analysis, in particular various aspects of stochastic control and optimal stopping with nonlinear expectations, Malliavin calculus, stochastic optimization, random graphs, (martingale) optimal transport, mean-field systems, numerical probability and, more generally, advanced numerical methods for effective solutions. The numerical platform Premia, which MathRisk is developing in collaboration with a consortium of financial institutions, focuses on the computational challenges raised by recent developments in financial mathematics, in particular risk control in large dimensions.

After the recent financial crisis, systemic risk has emerged as one of the major research topics in mathematical finance. Interconnected systems are subject to contagion in times of distress. The aim is to understand and model how the bankruptcy of a bank (or a large company) may or may not induce other bankruptcies. By contrast with the traditional approach in risk management, the focus is no longer on modeling the risks faced by a single financial institution, but on modeling the complex interrelations between financial institutions and the mechanisms of distress propagation among them.

The mathematical modeling of default contagion, by which an economic shock causing initial losses and default of a few institutions is amplified due to complex linkages, leading to large scale defaults, can be addressed by various techniques, such as network approaches (see in particular R. Cont et al. 54 and A. Minca 93) or mean field interaction models (Garnier-Papanicolaou-Yang 75).

The goal of our project is to develop a model that captures the dynamics of a complex financial network and to provide methods for the control of default contagion, both by a regulator and by the institutions themselves.

We have contributed in the last years to the research on the control of contagion in financial systems in the framework of random graph models (see PhD thesis of R. Chen 67).

In 55, 94, 8, we consider a financial network described as a weighted directed graph, in which nodes represent financial institutions and edges the exposures between them. Distress propagation is modeled as an epidemic on this graph. We study the optimal intervention of a lender of last resort who seeks to make equity infusions in a banking system prone to insolvency and to bank runs, under complete and incomplete information on the failure cluster, in order to minimize the contagion effects. The paper 8 provides in particular important insight into the relation between the value of a financial system, connectivity and optimal intervention.

The results show that up to a certain connectivity, the value of the financial system increases with connectivity. However, this is no longer the case if connectivity becomes too large. The natural question remains how to create incentives for the banks to attain an optimal level of connectivity. This is studied in 68, where network formation for a large set of financial institutions represented as nodes is investigated. Linkages are a source of income, but at the same time they bear the risk of contagion, which is endogenous and depends on the strategies of all nodes in the system. The optimal connectivity of the nodes results from a game. Existence of an equilibrium in the system and its stability properties are studied. The results suggest that financial stability is better described in terms of the mechanism of network formation than in terms of simple statistics of the network topology, such as the average connectivity.

In 7, H. Amini (University of Florida), A. Minca (Cornell University) and A. Sulem study a dynamic contagion risk model with recovery features. We introduce threshold growth in the classical threshold contagion model, in which nodes have downward jumps when there is a failure of a neighboring node. We are motivated by the application to financial and insurance-reinsurance networks, in which thresholds represent either capital or liquidity. An initial set of nodes fail exogenously and affect the nodes connected to them as they default on financial obligations. If those nodes’ capital or liquidity is insufficient to absorb the losses, they will fail in turn. In other words, if the number of failed neighbors reaches a node’s threshold, then this node will fail as well, and so on. Since contagion takes time, there is the potential for the capital to recover before the next failure. It is therefore important to introduce a notion of growth. Choosing the configuration model as underlying graph, we prove fluid limits for the baseline model, as well as extensions to the directed case, state-dependent inter-arrival times and the case of growth driven by upward jumps. We then allow nodes to choose their connectivity by trading off link benefits and contagion risk. Existence of an asymptotic equilibrium is shown, as well as convergence of the sequence of equilibria on the finite networks. In particular, these results show that systems with higher overall growth may have higher failure probability in equilibrium.
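As an illustration, the baseline mechanism (without the growth and recovery features) can be sketched as a simple cascade on a directed graph. The function below is a toy sketch with illustrative names, not the fluid-limit analysis of the paper:

```python
def threshold_cascade(succ, thresholds, seed_failures):
    """Toy threshold contagion: a node fails once the number of its
    failed in-neighbors reaches its threshold (no recovery/growth)."""
    failed = set(seed_failures)
    hits = {v: 0 for v in succ}        # count of failed in-neighbors
    frontier = list(seed_failures)
    while frontier:
        u = frontier.pop()
        for v in succ[u]:              # v is exposed to u
            if v in failed:
                continue
            hits[v] += 1
            if hits[v] >= thresholds[v]:
                failed.add(v)
                frontier.append(v)
    return failed
```

For instance, with exposures 0→1→2 and thresholds (1, 1, 2), the exogenous failure of node 0 brings down node 1 but not node 2, whose threshold absorbs a single failed neighbor.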

A. Sulem, M.C. Quenez and M. Grigorova have studied option pricing and hedging in a nonlinear incomplete financial market model with default. The underlying market model consists of a risk-free asset and a risky asset driven by a Brownian motion and a compensated default martingale. The portfolio processes follow nonlinear dynamics with a nonlinear driver, and the market is incomplete, in the sense that not every contingent claim can be replicated by a portfolio.
In this framework, we address in 13 the problem of pricing and (super)hedging of European options. Using a dynamic programming approach, we provide a dual formulation of the seller's superhedging price as the supremum, over a suitable set of equivalent probability measures, of the solution of a constrained BSDE with default. In 78, we study the superhedging problem for American options with irregular payoffs. We establish a dual formulation of the seller's price in terms of the value of a non-linear mixed optimal control/stopping problem. We also characterize the seller's price process as the minimal supersolution of a reflected BSDE with constraints. We then prove a duality result for the buyer's price in terms of the value of a non-linear optimal control/stopping game problem.
A crucial step in the proofs is to establish a non-linear optional and a non-linear predictable decomposition for processes. The complete market model with default was previously studied in 71, together with a complete analysis of BSDEs driven by a Brownian motion and a compensated default jump process.

The theory of optimal stopping in connection with American option pricing has been extensively studied in recent years. Our contributions in this area concern:

(i) The analysis of the binomial approximation of the American put price in the Black-Scholes model, for which we obtained the rate of convergence, up to a logarithmic factor.
(ii) The American put in the Heston stochastic volatility model. We have obtained existence and uniqueness results for the associated variational inequality in suitable weighted Sobolev spaces, following up on the work of P. Feehan et al. (2011, 2015, 2016) (cf. 89). We also established some qualitative properties of the value function (monotonicity, strict convexity, smoothness) 88.
(iii) A probabilistic approach to the smoothness of the free boundary in the optimal stopping of a one-dimensional diffusion (work in progress with T. De Angelis, University of Torino).
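For point (i), the binomial approximation itself is classical; a minimal Cox-Ross-Rubinstein sketch of the American put (illustrative parameters, not the convergence analysis of the cited work) reads:

```python
import math

def american_put_crr(S0, K, r, sigma, T, n):
    """Price an American put on a CRR binomial tree with n steps."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # payoffs at maturity, indexed by the number j of up-moves
    vals = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction with early exercise at each node
    for i in range(n - 1, -1, -1):
        vals = [max(K - S0 * u**j * d**(i - j),
                    disc * (p * vals[j + 1] + (1 - p) * vals[j]))
                for j in range(i + 1)]
    return vals[0]
```

The early-exercise premium appears in the max against the intrinsic value at each node; dropping it recovers the European put.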

The 3rd edition of the book Applied Stochastic Control of Jump
diffusions (Springer, 2019) by B. Øksendal and A. Sulem 15 contains recent developments within stochastic control and its applications. In particular, there is a new chapter devoted to a
comprehensive presentation of financial markets modelled by jump diffusions,
one on backward stochastic differential equations and risk measures, and an advanced stochastic control chapter including optimal control of mean-field systems, stochastic differential games and stochastic Hamilton-Jacobi-Bellman equations.

Life insurance contracts are popular and involve very large portfolios, for a total amount of trillions of euros in Europe. To manage them in the long run, insurance companies perform Asset and Liability Management (ALM): it consists in investing the deposits of policyholders in different asset classes such as equity, sovereign bonds, corporate bonds and real estate, while respecting a performance guarantee with a profit sharing mechanism for the policyholders. A typical question is how to determine an allocation strategy which maximizes the rewards and satisfies the regulatory constraints. The management of these portfolios is quite involved: the different cash reserves imposed by the regulator, the profit sharing mechanisms, and the way the insurance company determines the crediting rate for its policyholders make the whole dynamics path-dependent and rather intricate. A. Alfonsi et al. have developed in 46 a synthetic model that takes into account the main features of the life insurance business. This model is then used to determine the allocation that minimizes the Solvency Capital Requirement (SCR). In 47, numerical methods based on Multilevel Monte-Carlo algorithms are proposed to calculate the SCR at future dates, which is of practical importance for insurance companies. The standard formula prescribed by the regulator is essentially obtained from conditional expected losses given standard shocks that occur in the future.

Optimal transport problems arise in a wide range of topics, from economics to physics.
There exist different methods to solve optimal transport problems numerically.
A popular one is the Sinkhorn algorithm, which uses an entropic regularization of the cost function followed by iterative Bregman projections. Alfonsi et al. 49 have proposed an alternative relaxation which consists in replacing the constraint of matching the marginal laws exactly by constraints of matching some moments. Using Tchakaloff's theorem, it is shown that the optimum is reached by a discrete measure, and the optimal transport is found by using a (stochastic) gradient descent that determines the weights and the points of the discrete measure. The number of points depends only on the number of moments considered, and therefore does not depend on the dimension of the problem. The method has then been extended in 48 to symmetric multimarginal optimal transport problems, which arise in quantum chemistry with the Coulomb interaction cost.
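For reference, the entropic-regularization approach reduces to a matrix-scaling fixed point. A minimal pure-Python Sinkhorn sketch for discrete marginals (this is the classical algorithm, not the moment-constrained relaxation of 49):

```python
import math

def sinkhorn(mu, nu, cost, eps, n_iter=200):
    """Entropic optimal transport between discrete marginals mu and nu:
    alternate Bregman projections = scalings of the Gibbs kernel."""
    K = [[math.exp(-c / eps) for c in row] for row in cost]
    u = [1.0] * len(mu)
    v = [1.0] * len(nu)
    for _ in range(n_iter):
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(len(mu)))
             for j in range(len(nu))]
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(len(nu)))
             for i in range(len(mu))]
    # transport plan pi_ij = u_i * K_ij * v_j
    return [[u[i] * K[i][j] * v[j] for j in range(len(nu))]
            for i in range(len(mu))]
```

As the regularization parameter eps decreases, the plan concentrates on the support of an unregularized optimal plan, at the cost of slower convergence of the iterations.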

In 66, O. Bencheikh and B. Jourdain analyse the weak error between a stochastic differential equation with nonlinearity in the sense of McKean given by moments and its approximation by the Euler discretization, in terms of the time-step.
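A particle-system Euler scheme for a McKean-Vlasov SDE can be sketched as follows; the linear mean-field drift toward the empirical mean below is a hypothetical toy choice for illustration, not the dynamics analysed in 66:

```python
import math, random

def particle_euler(n_particles, n_steps, T, x0, rng):
    """Euler scheme for the particle approximation of the McKean-Vlasov
    SDE dX_t = (E[X_t] - X_t) dt + dW_t, with E[X_t] replaced by the
    empirical mean of the interacting particles."""
    dt = T / n_steps
    sq = math.sqrt(dt)
    xs = [x0] * n_particles
    for _ in range(n_steps):
        m = sum(xs) / n_particles            # empirical mean interaction
        xs = [x + (m - x) * dt + sq * rng.gauss(0.0, 1.0) for x in xs]
    return xs
```

For this toy drift the empirical mean is (up to noise of order 1/√N) conserved, while the particles fluctuate around it with a variance that equilibrates over time.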

In 85, B. Jourdain and A. Tse propose a generalized version of the central limit theorem for nonlinear functionals of the empirical measure of i.i.d. random variables, provided that the functional satisfies some regularity assumptions for the associated linear functional derivatives of various orders. Using this result to deal with the contribution of the initialization, they check the convergence of fluctuations between the empirical measure of particles in an interacting particle system and its mean-field limiting measure. In 39, R. Flenghi and B. Jourdain pursue their study of the central limit theorem for nonlinear functionals of the empirical measure of random variables by relaxing the i.i.d. assumption to deal with the successive values of an ergodic Markov chain. In 50, A. Alfonsi and B. Jourdain study structural properties of optimal couplings for the quadratic Wasserstein distance.

In mathematical finance, optimal transport problems with an additional martingale constraint are considered to handle the model risk, i.e. the risk of using an inadequate model.
The Martingale Optimal Transport (MOT) problem introduced in 65 provides model-free hedges and bounds on the prices of exotic options. The market prices of liquid call and put options give the marginal distributions of the underlying asset at each traded maturity. Under the simplifying assumption that the risk-free rate is zero, these probability measures are in increasing convex order, since by Strassen's theorem this property is equivalent to the existence of a martingale measure with the right marginal distributions. For an exotic payoff function of the values of the underlying on the time grid given by these maturities, the model-free upper bound (resp. lower bound) for the price consistent with these marginal distributions is given by the following martingale optimal transport problem: maximize (resp. minimize) the integral of the payoff with respect to the martingale measure over all martingale measures with the right marginal distributions. Super-hedging (resp. sub-hedging) strategies are obtained by solving the dual problem. With J. Corbetta, A. Alfonsi and B. Jourdain 5 have studied sampling methods preserving the convex order between two probability measures.
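The convex order invoked above can be tested on discrete marginals by comparing call functions: for measures with equal means, dominance of E[(X-k)+] at every strike characterizes the convex order. A small illustrative check on a finite strike grid (function names are ours):

```python
def call_fn(points, weights, k):
    """E[(X - k)+] for a discrete measure."""
    return sum(w * max(x - k, 0.0) for x, w in zip(points, weights))

def convex_order(mu, nu, strikes, tol=1e-12):
    """Necessary conditions for mu <= nu in the convex order, checked on
    a finite strike grid: equal means and dominated call functions."""
    mean_mu = sum(x * w for x, w in zip(*mu))
    mean_nu = sum(x * w for x, w in zip(*nu))
    if abs(mean_mu - mean_nu) > tol:
        return False
    return all(call_fn(*mu, k) <= call_fn(*nu, k) + tol for k in strikes)
```

For example, a Dirac mass at 0 is dominated in the convex order by the centered measure putting mass 1/2 on each of -1 and 1, but not conversely.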

Martingale Optimal Transport provides thus bounds for the prices of exotic options that take into account the risk neutral marginal distributions of the underlying assets deduced from the market prices of vanilla options. For these bounds to be robust, the stability of the optimal value with respect to these marginal distributions is needed. Because of the global martingale constraint, stability is far less obvious than in optimal transport (it even fails in multiple dimensions). B. Jourdain has advised the PhD of W. Margheriti devoted to this issue and related problems. He also initiated a collaboration on this topic with M. Beiglböck, one of the founders of MOT theory.
In 82, B. Jourdain and W. Margheriti exhibit a new family of martingale couplings between two one-dimensional probability measures in the convex order.

In order to exploit the natural links between quantization and convex order in view of numerical methods for (Weak) Martingale Optimal Transport, B. Jourdain has initiated a fruitful collaboration with G. Pagès, one of the leading experts of quantization. For two compactly supported probability measures in the convex order, any stationary quadratic primal quantization of the smaller remains dominated by any dual quantization of the larger. B. Jourdain and G. Pagès prove in 30 that any martingale coupling between the original probability measures can be approximated by a martingale coupling between their quantizations, in Wasserstein distance with a rate given by the quantization errors, but also in the much finer adapted Wasserstein distance. In 29, in order to approximate a sequence of more than two probability measures in the convex order by finitely supported probability measures still in the convex order, they propose to alternate transitions according to a martingale Markov kernel mapping a probability measure in the sequence to the next and dual quantization steps. In the case of ARCH models, the noise has to be truncated to enable the dual quantization steps. They exhibit conditions under which the ARCH model with truncated noise is dominated by the original ARCH model in the convex order, and also analyse the error of the scheme combining truncation of the noise according to primal quantization with the dual quantization steps. In 84, they study the case of compactly supported one-dimensional probability distributions having a log-concave density.
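Quadratic (primal) quantization itself can be illustrated with Lloyd's fixed-point iteration, which alternates nearest-center assignment and centroid updates on a sample of the measure; this generic sketch is unrelated to the dual quantization steps of the cited papers:

```python
def lloyd_1d(sample, centers, n_iter=100):
    """Lloyd's algorithm for quadratic quantization of a 1d sample:
    at a stationary quantizer, each center is the mean of its cell."""
    cells = [[] for _ in centers]
    for _ in range(n_iter):
        cells = [[] for _ in centers]
        for x in sample:
            j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            cells[j].append(x)
        # centroid update; keep a center unchanged if its cell is empty
        centers = [sum(c) / len(c) if c else m
                   for c, m in zip(cells, centers)]
    weights = [len(c) / len(sample) for c in cells]
    return centers, weights
```

On a uniform sample over [0, 1] with four centers, the iteration converges to the stationary quantizer (1/8, 3/8, 5/8, 7/8) with equal weights.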

Our project team is deeply involved in numerical probability, aiming at pushing numerical methods towards effective implementation. This numerical orientation is supported by a mathematical expertise which permits a rigorous analysis of the algorithms and provides theoretical support for the study of rates of convergence and the introduction of new tools for the improvement of numerical methods. This activity in the MathRisk team is strongly related to the development of the Premia software.

The approximation of SDEs and more general Markovian processes is a very active field. One important axis of research is the analysis of the weak error, that is the error between the law of the process and the law of its approximation. A standard way to analyse this is to focus on marginal laws, which boils down to the approximation of semigroups. The weak error of standard approximation schemes such as the Euler scheme has been widely studied, as well as higher order approximations such as those obtained with the Richardson-Romberg extrapolation method.
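As a toy illustration of weak-error reduction, the Richardson-Romberg combination 2A(h/2) - A(h) of two Euler estimators cancels the first-order term of the Talay-Tubaro expansion of the weak error. A sketch under a hypothetical geometric dynamics (all names and parameters are illustrative):

```python
import math, random

def euler_mean(x0, b, sig, T, n, n_paths, rng):
    """Monte-Carlo estimate of E[X_T] for the Euler scheme with n steps."""
    dt = T / n
    acc = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n):
            x += b(x) * dt + sig(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        acc += x
    return acc / n_paths

def richardson(x0, b, sig, T, n, n_paths, rng):
    """2*A(h/2) - A(h) cancels the leading O(h) weak-error term."""
    return (2.0 * euler_mean(x0, b, sig, T, 2 * n, n_paths, rng)
            - euler_mean(x0, b, sig, T, n, n_paths, rng))
```

The combination removes the O(h) bias at the cost of a moderate variance increase, which is why it is usually paired with variance-reduction or multilevel ideas.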

Stochastic Volterra Equations (SVEs) provide a wide family of non-Markovian stochastic processes. They were introduced in the early 80's by Berger and Mizel and have recently received attention in mathematical finance to model the volatility: it has been noticed that SVEs with a fractional convolution kernel reproduce the observed roughness of market volatility.

In collaboration with L. Caramellino and G. Poly, V. Bally has developed a Malliavin-type calculus for a general class of random variables which are not supposed to be Gaussian (as is the case in standard Malliavin calculus). This provides an alternative to the standard Gaussian framework.

As an application of the above methodology, V. Bally et al. have studied several limit theorems of central limit type (see 60 and 58). In particular, they estimate the total variation distance between random polynomials, and prove a universality principle for the variance of the number of roots of trigonometric polynomials with random coefficients 62.

V. Bally, L. Caramellino and A. Kohatsu-Higa study the regularity properties of the law of the solutions of jump-type SDEs 18. They use an interpolation criterion (proved in 63) combined with Malliavin calculus for jump processes. They also use a Gaussian approximation of the solution combined with Malliavin calculus for Gaussian random variables. Another approach to the same regularity property, based on a semigroup method, has been developed by Bally and Caramellino in 61. An application to the Boltzmann equation is given by V. Bally in 63. Along the same lines but with a different application, the total variation distance between a jump equation and its Gaussian approximation is studied by V. Bally and his PhD student Y. Qin 19 and by V. Bally, V. Rabiet and D. Goreac 62. A general discussion of the link between total variation distance and integration by parts is given in 59. Finally, V. Bally et al. estimate in 57 the probability that a diffusion process remains in a tube around a smooth function.

In 81, B. Jourdain and A. Kebaier are interested in deriving non-asymptotic error bounds for the multilevel Monte Carlo method. As a first step, they deal with the explicit Euler discretization of stochastic differential equations with a constant diffusion coefficient. As long as the deviation is below an explicit threshold, they show that the multilevel estimator satisfies a Gaussian-type concentration inequality which is optimal in terms of the variance.
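The multilevel construction itself can be sketched with coupled coarse/fine Euler paths sharing the same Brownian increments. Below is a toy scalar example with a constant diffusion coefficient, as in the setting of 81; the dynamics and parameters are illustrative, not those of the paper:

```python
import math, random

def level_correction(payoff, x0, b, sig, T, l, n_paths, rng):
    """MC estimate of E[P_l - P_{l-1}]: a fine path with 2^l Euler steps
    coupled to a coarse path with 2^(l-1) steps via shared increments."""
    nf = 2 ** l
    dt = T / nf
    acc = 0.0
    for _ in range(n_paths):
        xf = xc = x0
        dw_pair = 0.0
        for k in range(nf):
            dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
            xf += b(xf) * dt + sig * dw
            if l > 0:
                if k % 2 == 0:
                    dw_pair = dw
                else:   # coarse step uses the sum of two fine increments
                    xc += b(xc) * (2 * dt) + sig * (dw_pair + dw)
        acc += payoff(xf) - (payoff(xc) if l > 0 else 0.0)
    return acc / n_paths

def mlmc(payoff, x0, b, sig, T, L, paths, rng):
    """Telescoping multilevel estimator: sum over l of E[P_l - P_{l-1}]."""
    return sum(level_correction(payoff, x0, b, sig, T, l, paths[l], rng)
               for l in range(L + 1))
```

The coupling makes the variance of the level corrections decay with the level, so most paths can be spent on the cheap coarse levels.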

Approximation of conditional expectations.
The approximation of conditional expectations and the computation of expectations involving nested conditional expectations are important topics with a broad range of applications. In risk management, such quantities typically occur in the computation of regulatory capital, such as future Value-at-Risk or CVA. A. Alfonsi et al. 47
have developed a Multilevel Monte-Carlo (MLMC) method to calculate the Solvency Capital Ratio of insurance companies at future dates.
The main advantage of the method is that it avoids regression issues and has the same computational complexity as a plain Monte-Carlo method (i.e. a computational time of order 1/ε² to achieve a precision ε).
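For contrast, the plain nested Monte-Carlo estimator that such methods improve upon looks as follows; the Gaussian example in the test is a toy of ours, chosen to exhibit the inner-stage bias of order 1/n_inner:

```python
import random

def nested_mc(sample_outer, sample_inner, f, n_outer, n_inner, rng):
    """Plain nested Monte-Carlo for E[ f( E[Y | X] ) ]: the inner
    conditional expectation is replaced by an inner sample mean."""
    acc = 0.0
    for _ in range(n_outer):
        x = sample_outer(rng)
        m = sum(sample_inner(x, rng) for _ in range(n_inner)) / n_inner
        acc += f(m)
    return acc / n_outer
```

With X ~ N(0,1), Y|X ~ N(X,1) and f(m) = m², the exact value is E[X²] = 1, but the estimator targets 1 + 1/n_inner: the inner-sample noise creates a bias that multilevel constructions remove without resorting to regression.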

We have focused above on the research program of the last four years. We refer to the previous MathRisk activity reports for a description of the research done earlier, in particular on liquidity and market microstructure 51, 45, 4, dependence modelling 86, interest rate modelling 42, robust option pricing in financial markets with imperfections 69, 95, 12, 11, mean-field control and stochastic differential games 99, 80, 100, stochastic control and optimal stopping (games) under nonlinear expectation 71, 73, 72, 70, robust utility maximization 98, 100, 74, and generalized Malliavin calculus and numerical probability.

The domains of application are quantitative finance and insurance, with emphasis on risk modeling and control. In particular, the project team MathRisk focuses on financial modeling and calibration, systemic risk, option pricing and hedging, portfolio optimization, and risk measures.

Our work aims to contribute to a better management of risk in the banking and insurance systems, in particular through the study of systemic risk, asset price modeling, and the stability of financial markets.

Release 24 of the Premia software has been delivered to the Consortium in March 2022. It contains the following new implemented algorithms dedicated to Machine Learning in finance:
• Deep Hedging. H. Buhler, L. Gonon, J. Teichmann, B. Wood. Quantitative Finance, Volume 19-8, 2019.
• Deep neural network framework based on backward stochastic differential equations for pricing and hedging American options in high dimensions. Y. Chen, J.W.L. Wan. Quantitative Finance, 21-1, 2021.
• Deep learning for ranking response surfaces with applications to optimal stopping problems. R. Hu. Quantitative Finance, 21-1, 2021.
• DGM: A deep learning algorithm for solving partial differential equations. J. Sirignano, K. Spiliopoulos. Journal of Computational Physics 375, 2018.
• Financial option valuation by unsupervised learning with artificial neural networks. B. Salvador, C. W. Oosterlee, R. van der Meer.
• Equal Risk Pricing of Derivatives with Deep Hedging. A. Carbonneau, F. Godin. Quantitative Finance, 2020.
• Quant GANs: Deep Generation of Financial Time Series. M. Wiese, R. Knobloch, R. Korn, P. Kretschmer. Quantitative Finance, 20-9, 2020.
• Artificial neural network for option pricing with and without asymptotic correction. H. Funahashi. Quantitative Finance, 2020.
• Pricing Bermudan options using regression trees/random forests. Z. El Filali Ech-Chafiq, P. Henry-Labordere, J. Lelong.
• Moving average options: Machine Learning and Gauss-Hermite quadrature for a double non-Markovian problem. L. Goudenge, A. Molent, A. Zanette. European Journal of Operational Research, 2022.

Pricing of Equity Derivatives:
• American options in the Volterra Heston model. S. Pulido, E. Chevalier, E. Zuniga.
• Sinh-Acceleration: Efficient Evaluation of Probability Distributions, Option Pricing, and Monte-Carlo Simulations. S. Boyarchenko, S. Levendorskii. International Journal of Theoretical and Applied Finance 22-03, 2019.
• A Simple Wiener-Hopf Factorization Approach for Pricing Double Barrier Options. O. Kudryavtsev. In: Karapetyants A.N., Pavlov I.V., Shiryaev A.N. (eds) Operator Theory and Harmonic Analysis. OTHA 2020. Springer Proceedings in Mathematics & Statistics, vol 358.

A. Sulem and H. Amini are supervising a PhD student (Z. Cao) working on the control of interbank contagion and on the dynamics and stability of complex financial networks, using techniques from random graphs and stochastic control. We have obtained limit results for default cascades in sparse heterogeneous financial networks subject to an exogenous macroeconomic shock in 53. These limit theorems for different system-wide wealth aggregation functions allow us to provide systemic risk measures in relation with the structure and heterogeneity of the financial network. These results are applied to determine the optimal policy for a social planner targeting interventions during a financial crisis, with a budget constraint and under partial information of the financial network. Banks can impact each other through large-scale liquidations of similar assets or non-payment of liabilities. In 52, we present a general tractable framework for understanding the joint impact of fire sales and default cascades on systemic risk in complex financial networks. The effect of heterogeneity in network structure and of the price impact function on the final size of the default cascade and on fire sales losses is investigated.

Agnès Sulem, Rui Chen, Andreea Minca and Roxana Dumitrescu have studied mean-field BSDEs with a generalized mean-field operator which can capture system influence with higher-order interactions, such as those occurring in an inhomogeneous random graph.

We interpret the BSDE solution as a dynamic global risk measure for
a representative bank whose risk attitude is influenced by the system. This influence can come in a wide class of choices, including the average system state or average intensity of system interactions 24.

This opens the path towards using dynamic risk measures induced by mean-field BSDEs as a complementary approach to systemic risk measurement.

Extensions to graphon BSDEs with jumps are studied by H. Amini, A. Sulem, and their PhD student Z. Cao in 35. The use of graphons has emerged recently as a tool to analyze heterogeneous interactions in mean-field systems and game theory.
Existence, uniqueness, measurability and stability of solutions under some regularity assumptions are established. We also prove convergence results for interacting mean-field particle systems with inhomogeneous interactions to graphon mean-field BSDE systems.

D. Lamberton and Tiziano De Angelis (University of Torino) are working on the optimal stopping problem of a one-dimensional diffusion over a finite horizon. They develop a probabilistic approach to the regularity of the associated free boundary.

B. Jourdain and A. Tse have extended the central limit theorem, well known for linear functionals, to non-linear functionals of the empirical measure of independent and identically distributed random vectors. The main tool permitting this extension is the linear functional derivative, one of the notions of derivation on the Wasserstein space of probability measures that have recently been developed. In 39, R. Flenghi and B. Jourdain first relax the equal-distribution assumption and then the independence property, so as to deal with the successive values of an ergodic Markov chain.

Wasserstein projections in the convex order were first considered in the framework of weak optimal transport, and have found applications in various problems such as concentration inequalities and martingale optimal transport. In dimension one, it is well known that the set of probability measures with a given mean is a lattice w.r.t. the convex order. In 40, B. Jourdain, W. Margheriti and G. Pammer prove that, contrary to the minimum and maximum in the convex order, the Wasserstein projections are Lipschitz continuous w.r.t. the Wasserstein distance in dimension one. Moreover, they provide examples showing the sharpness of the obtained bounds for the 1-Wasserstein distance.

In 41, B. Jourdain and G. Pagès are interested in comparing solutions to stochastic Volterra equations for the convex order on the space of continuous paths.

V. Bally and A. Alfonsi have studied existence, uniqueness and Euler approximation for jump-type stochastic equations of Boltzmann and McKean-Vlasov type 44. The specificity of their approach is to use a methodology based on stochastic flows and the sewing lemma, which was introduced in rough path theory. Recently, V. Bally and his PhD student Y. Qin have used Malliavin calculus for jump processes to prove convergence of the Euler scheme (for the above mentioned equations) in total variation distance 19 (while in 44 only the Wasserstein distance is considered). Moreover, they study the approximation of the invariant measure by an adaptive Euler scheme, as introduced by Lamberton and Pagès and recently studied by Panloup and Pagès.

J. Guyon and S. Mustapha calibrate neural stochastic differential equations jointly to S&P 500 smiles, VIX futures, and VIX smiles. Drifts and volatilities are modeled as neural networks. Minimizing a suitable loss allows them to fit market data for multiple S&P 500 and VIX maturities. A one-factor Markovian stochastic local volatility model is shown to fit both smiles and VIX futures within bid-ask spreads. The joint calibration actually makes it a pure path-dependent volatility model, confirming the findings in 79.

Real-world economic scenarios provide stochastic forecasts of economic variables such as interest rates, equity stocks or indices, or inflation, and are widely used in the insurance sector. Unlike risk-neutral scenarios, they aim to be realistic in view of historical data and/or expert expectations of future outcomes. In 37, H. Andrès, A. Boumezoued and B. Jourdain propose a new approach for the validation of real-world economic scenarios motivated by insurance applications. This approach is based on the statistical test developed by Chevyrev and Oberhauser and relies on the notions of signature and maximum mean discrepancy. This test makes it possible to check whether two samples of stochastic process paths come from the same distribution. Their contribution is to apply this test to two stochastic processes, namely the fractional Brownian motion and the Black-Scholes dynamics. They analyze its statistical power based on numerical experiments under two constraints: 1. they work in an asymmetric setting in which they compare a large sample representing simulated real-world scenarios with a small sample mimicking information from historical data, both with a monthly time step as often considered in practice; and 2. they make the two samples identical from the perspective of the validation methods used in practice, i.e. they impose that the marginal distributions of the two samples be the same at a given one-year horizon. By performing specific transformations of the signature, they obtain high statistical powers and demonstrate the potential of this validation approach for real-world economic scenarios. They also discuss several challenges related to the numerical implementation of this approach, and highlight its domain of validity in terms of the distance between models and the volume of data at hand.
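
The test of Chevyrev and Oberhauser compares the laws of two path-valued samples through a maximum mean discrepancy applied to signatures. The sketch below is a deliberate simplification of ours: it shows the underlying two-sample MMD statistic with a Gaussian kernel on scalar samples instead of signatures, with illustrative parameters.

```python
import math, random

def mmd2(xs, ys, bw=1.0):
    """Unbiased estimate of the squared maximum mean discrepancy between two
    samples, with a Gaussian kernel of bandwidth bw."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2.0 * bw * bw))
    n, m = len(xs), len(ys)
    kxx = sum(k(a, b) for i, a in enumerate(xs)
              for j, b in enumerate(xs) if i != j) / (n * (n - 1))
    kyy = sum(k(a, b) for i, a in enumerate(ys)
              for j, b in enumerate(ys) if i != j) / (m * (m - 1))
    kxy = sum(k(a, b) for a in xs for b in ys) / (n * m)
    return kxx + kyy - 2.0 * kxy

random.seed(0)
draw = lambda mu, size: [random.gauss(mu, 1.0) for _ in range(size)]
same = mmd2(draw(0.0, 200), draw(0.0, 200))  # two samples from the same law
diff = mmd2(draw(0.0, 200), draw(2.0, 200))  # samples from different laws
print(same, diff)  # same-law statistic near 0, different-law statistic clearly positive
```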

In view of the long lifetime of insurance contracts, a major part of insurers' and reinsurers' asset portfolios is composed of bonds. Consequently, the main financial risk insurers are exposed to is interest rate risk. The thesis of S. Mehalla 92 (supervisor: B. Lapeyre) is dedicated to the study of efficient calibration procedures for interest rate models, such as the LIBOR Market Model, used by insurance/reinsurance undertakings. Financial models with a stochastic volatility factor are studied. A new method to efficiently price swap rate derivatives under the LIBOR Market Model with Stochastic Volatility and Displaced Diffusion is proposed (see 17). Fast calibration procedures based on optimization algorithms are studied (see 16, 56).

Insurance companies sell insurance contracts that give protection against weather fluctuations, typically on rain or temperature. These contracts are then managed using weather derivatives. In 34, A. Alfonsi and his PhD student N. Vadillo Fernandez propose a new stochastic volatility model for the temperature in order to price climate derivatives written on the Heating Degree Day index. They develop a simple and efficient estimation of the model parameters on historical data and provide efficient numerical pricing methods.
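
As a schematic illustration of degree-day pricing (a deliberately simplified stand-in of ours: a Gaussian mean-reverting temperature with constant volatility rather than the stochastic volatility model of 34, with illustrative parameters), the expected HDD payoff can be estimated by Monte Carlo:

```python
import math, random

def hdd_price(n_paths=5000, days=30, t_ref=18.0, mean_temp=10.0,
              kappa=0.3, sigma=3.0, seed=2):
    """Expected Heating Degree Day index over `days` days (zero interest rate):
    the daily temperature follows a discretized Ornstein-Uhlenbeck process
    reverting to a constant mean, and each day contributes max(t_ref - T, 0)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_paths):
        t, hdd = mean_temp, 0.0
        for _ in range(days):
            t += kappa * (mean_temp - t) + sigma * random.gauss(0.0, 1.0)
            hdd += max(t_ref - t, 0.0)  # heating degree days accumulate
        total += hdd
    return total / n_paths

hdd_value = hdd_price()
print(hdd_value)  # ≈ days * (t_ref - mean_temp) = 240, plus a small convexity correction
```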

A. Alfonsi and V. Bally 43 have proposed a method to construct high order approximations from an elementary approximation scheme by using it on suitable random grids. This method can be applied to any general semigroup, provided that the “elementary scheme” satisfies some properties, which hold e.g. for the Euler scheme for SDEs with regular coefficients. In 33, A. Alfonsi and E. Lombardo develop high order schemes for the weak error for the Cox-Ingersoll-Ross process, which has a singular diffusion coefficient. This process is widely used in mathematical finance to model the interest rate or the volatility, as in the Heston model. The method is based on the construction proposed in 43.
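
For concreteness, here is the kind of elementary building block such constructions start from: a full-truncation Euler scheme for the CIR process. This is only the plain first-order scheme, not the high order scheme of 33, and the parameters are ours.

```python
import math, random

def cir_euler(x0=0.04, kappa=1.5, theta=0.04, xi=0.3,
              T=1.0, steps=200, n_paths=4000, seed=3):
    """Full-truncation Euler scheme for the CIR process
    dX_t = kappa (theta - X_t) dt + xi sqrt(X_t) dW_t.
    The state is truncated at zero inside the coefficients so that the square
    root stays well defined.  Returns the Monte Carlo estimate of E[X_T]."""
    random.seed(seed)
    h = T / steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(steps):
            xp = max(x, 0.0)  # truncation of the possibly negative iterate
            x += kappa * (theta - xp) * h \
                 + xi * math.sqrt(xp * h) * random.gauss(0.0, 1.0)
        total += max(x, 0.0)
    return total / n_paths

mean_xt = cir_euler()
print(mean_xt)  # ≈ theta = 0.04, since E[X_T] = theta + (x0 - theta) e^{-kappa T} and x0 = theta
```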

Alfonsi and Kebaier 31 have analyzed the

O. Bencheikh and B. Jourdain have studied the Euler-Maruyama approximation of SDEs with constant diffusion coefficient, when the drift coefficient is bounded and merely measurable. In 22, they prove weak convergence with order

A. Alfonsi, B. Lapeyre and J. Lelong have revisited the classical Monte Carlo regression problem of approximating the conditional expectation.

In 23, O. Bencheikh and B. Jourdain analyse the rate of convergence of a system of interacting particles with mean-field rank-based interaction in the drift coefficient and constant diffusion coefficient.

In 19, V. Bally and his PhD student Yifen Qin obtain a total variation distance bound between a jump equation and its Gaussian approximation by Malliavin calculus techniques.

The pricing of an American option, or of its Bermudan approximation, amounts to solving a backward dynamic programming equation, in which the main difficulty comes from the conditional expectation involved in the computation of the continuation value.

In 90, B. Lapeyre and J. Lelong study neural network approximations of conditional expectations. They prove the convergence of the well-known Longstaff-Schwartz algorithm when the standard least-squares regression on a finite-dimensional vector space is replaced by a neural network approximation, and illustrate the numerical efficiency of the method on several examples. Its stability with respect to a change of parameters such as the interest rate and the volatility is shown. The numerical study shows that training the neural network with only a few chosen points in the grid of parameters makes it possible to price efficiently for a whole range of parameters.
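
The classical algorithm whose regression step 90 replaces by a neural network can be sketched as follows: a minimal least-squares Longstaff-Schwartz version on the polynomial basis {1, s/K, (s/K)²} under Black-Scholes. The parameters follow the textbook American put example; the helper `lstsq3` and all names are ours.

```python
import math, random

def lstsq3(xs, ys):
    """Least-squares coefficients of y ≈ c0 + c1*x + c2*x^2 via the 3x3 normal
    equations, solved by Gaussian elimination with partial pivoting."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in zip(xs, ys):
        phi = [1.0, x, x * x]
        for i in range(3):
            b[i] += phi[i] * y
            for j in range(3):
                A[i][j] += phi[i] * phi[j]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for k in range(c, 3):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

def longstaff_schwartz_put(s0=36.0, K=40.0, r=0.06, sig=0.2, T=1.0,
                           steps=50, n=10000, seed=4):
    """American put by Longstaff-Schwartz: regress the discounted future cash
    flows of in-the-money paths on {1, s/K, (s/K)^2} backwards in time, and
    exercise when the payoff beats the fitted continuation value."""
    random.seed(seed)
    h, disc = T / steps, math.exp(-r * T / steps)
    paths = []
    for _ in range(n):
        s, p = s0, []
        for _ in range(steps):
            s *= math.exp((r - 0.5 * sig * sig) * h
                          + sig * math.sqrt(h) * random.gauss(0.0, 1.0))
            p.append(s)
        paths.append(p)
    cash = [max(K - p[-1], 0.0) for p in paths]  # exercise at maturity
    for t in range(steps - 2, -1, -1):
        cash = [c * disc for c in cash]          # discount one step back
        itm = [i for i in range(n) if paths[i][t] < K]
        if len(itm) < 3:
            continue
        c0, c1, c2 = lstsq3([paths[i][t] / K for i in itm],
                            [cash[i] for i in itm])
        for i in itm:
            u = paths[i][t] / K
            if K - paths[i][t] > c0 + c1 * u + c2 * u * u:
                cash[i] = K - paths[i][t]        # early exercise
    return disc * sum(cash) / n

price = longstaff_schwartz_put()
print(price)  # ≈ 4.5 for this classical example (S0=36, K=40, r=6%, sigma=20%, T=1)
```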

In 76, two efficient techniques, called GPR Tree (GPR-Tree) and GPR Exact Integration (GPR-EI), are proposed to compute the price of American basket options. Both techniques are based on Machine Learning, exploited together with binomial trees or with a closed formula for integration. On the exercise dates, the value of the option is first computed as the maximum between the exercise value and the continuation value, and then approximated by means of Gaussian Process Regression. In 77, an efficient method is provided to compute the price of multi-asset American options, based on Machine Learning, Monte Carlo simulations and variance reduction techniques. Numerical tests show that the proposed algorithm is fast and reliable, and can handle American options on very large baskets of assets, overcoming the curse of dimensionality.

Machine Learning in the Energy and Commodity Market.
Evaluating moving average options is a computational challenge for the energy and commodity market,
as the payoff of the option depends on the prices of underlying assets observed on a moving window. An efficient method for pricing Bermudan-style moving average options is presented in 25, based on Gaussian Process Regression and Gauss-Hermite quadrature. This method is tested in the Clewlow-Strickland model, the reference framework for modeling energy commodity prices, the Heston (non-Gaussian) model, and the rough Bergomi model, which involves a double non-Markovian feature, since the whole history of the volatility process impacts the future distribution of the process.
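
Gauss-Hermite quadrature replaces a Gaussian expectation by a finite weighted sum over the roots of a Hermite polynomial. As a self-contained illustration of the quadrature component only (not of the GPR method of 25, and with illustrative parameters of ours), here is a European Black-Scholes call priced with NumPy's Gauss-Hermite nodes:

```python
import math
import numpy as np

def bs_call_gauss_hermite(s0=100.0, K=100.0, r=0.02, sig=0.3, T=1.0, n=100):
    """European call by Gauss-Hermite quadrature: write the lognormal
    expectation E[e^{-rT} max(S_T - K, 0)] as an integral against the standard
    Gaussian density and evaluate it on n Gauss-Hermite nodes (the weight is
    exp(-x^2), hence the change of variable z = sqrt(2) x and the 1/sqrt(pi))."""
    x, w = np.polynomial.hermite.hermgauss(n)  # nodes/weights for exp(-x^2)
    z = math.sqrt(2.0) * x                     # standard Gaussian abscissae
    st = s0 * np.exp((r - 0.5 * sig * sig) * T + sig * math.sqrt(T) * z)
    payoff = np.maximum(st - K, 0.0)
    return math.exp(-r * T) * float(w @ payoff) / math.sqrt(math.pi)

gh_price = bs_call_gauss_hermite()
print(gh_price)  # close to the Black-Scholes closed-form value, ≈ 12.8 here
```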

Investment strategies based on Machine learning algorithms.
The goal of the thesis of H. Madmoun 91 (Supervisor: B. Lapeyre)
is to show how portfolio allocation can benefit from the development of Machine Learning algorithms. Two investment strategies based on such algorithms are proposed.
The first strategy (Low Turbulence Model) is an asset allocation method based on an interpretable low-dimensional representation framework for financial time series, combining signal processing, deep neural networks, and Bayesian statistics.
The second one (Attention Based Ranking Model) is a stock-picking strategy for large-cap US stocks.

CIFRE agreement Braham gardens/Ecole des Ponts

PhD thesis of Hachem Madmoun: "Creating Investment Strategies based on Machine Learning Algorithms"

Chair Ecole Polytechnique-Ecole des Ponts ParisTech-Sorbonne Université-Société Générale "Financial Risks" of the Risk Foundation.

Postdoctoral grants: G. Szulda

Chair Ecole des Ponts ParisTech - Université Paris-Cité - BNP Paribas "Futures of Quantitative Finance"

Hamed Amini, Associate Professor (University of Florida): collaboration with Agnès Sulem on dynamic contagion in financial networks; co-advising of the PhD thesis of Z. Cao; research stay, Spring and Summer 2022.

A. Alfonsi:

Co-organizer of the MathRisk seminar “Méthodes stochastiques et finance”

Co-organizer of the Bachelier (Mathematical Finance) seminar (IHP, Paris).

V. Bally

- Organizer of the seminar of the LAMA laboratory, Université Gustave Eiffel.

- Organizer of the mini-symposium "Stochastic Equations", XVth French-Romanian Colloquium on Applied Mathematics, Toulouse, 29 August - 2 September 2022

A. Sulem

- Co-organizer of the seminar INRIA-MathRisk / Université Paris Diderot LPSM “Numerical probability and mathematical finance”

- Member of the scientific committee of the London-Paris Bachelier workshop (September 2022), IHP, Paris.

J. Guyon

Associate editor of

B. Jourdain

Associate editor of

D. Lamberton

Associate editor of

A. Sulem

Associate editor of

A. Sulem

Keynote speaker,
19th e-Summer School in Risk Finance and Stochastics 28/9 - 30/9/2022, Athens

J. Guyon

course "Probability Theory", 1st year ENPC

A. Sulem

Master of Mathematics, Université du Luxembourg: responsible for the course "Numerical Methods in Finance", and lectures (22 hours)

J. Guyon

PhD defense of William Lefebvre, LPSM, Université Paris-Cité, 09/12/2022,