The Inria project team
MathRisk was created in 2013. It is the follow-up of the
MathFi project team founded in 2000. MathFi focused on financial
mathematics, in particular on computational methods for pricing and hedging
increasingly complex financial products.
The 2007 global financial crisis and its “aftermath
crisis” abruptly highlighted the critical importance of a better
understanding and management of risk. The MathRisk project has
been reoriented towards the mathematical handling of risk, and addresses
broad research topics embracing risk measurement and risk management, modeling and
optimization in quantitative finance, as well as
other related domains where risk control is paramount.
The MathRisk project team aims both at producing mathematical tools and models
in these domains and at developing collaborations with various
institutions involved in risk control.
Quantitative finance remains an important source of
mathematical problems and applications for the project. Indeed, the
pressure of new legislation has led to a massive reorientation of
research priorities, and the interest of analysts has shifted towards risk
control.

The scientific issues related to quantitative finance we consider include systemic risk and contagion modeling, robust finance, market frictions, counterparty and liquidity risk, asset dependence modeling, market microstructure modeling and price impact. In this context, models must take into account the multidimensional nature of markets and various market imperfections. They are much more demanding mathematically and numerically, and require the development of risk measures taking into account incompleteness issues, model uncertainty, the interplay between information and performance, and various defaults.

Besides, financial institutions, subject to more stringent regulatory requirements such as FRTB or XVA computations, are facing practical implementation challenges which still need to be solved. Research focused on numerical efficiency remains strongly needed in this context, renewing the interest in the numerical platform Premia, which MathRisk is developing in collaboration with a consortium of financial institutions.

While these themes arise naturally in the world of quantitative finance, a number of these issues and mathematical tools are also relevant to the treatment of risk in other areas, such as economics, social insurance and sustainable development, which are of fundamental importance in today's society. In these contexts, the management of risk appears at different time scales, from high-frequency data to long-term life insurance management, raising challenging new modeling and numerical issues.

The MathRisk project is strongly involved in the development of new
mathematical methods and numerical algorithms. Mathematical tools
include stochastic modeling, stochastic analysis, in particular
stochastic (partial) differential equations and various aspects of
stochastic control and optimal stopping of these equations, nonlinear expectations, Malliavin calculus, stochastic optimization,
dynamic game theory, random graphs, martingale optimal transport
(especially in relation to numerical considerations), the long-time behavior
of Markov processes (with applications to Monte-Carlo methods) and,
more generally, advanced numerical methods for effective solutions.

After the recent financial crisis, systemic risk has emerged as one of the major research topics in mathematical finance. Interconnected systems are subject to contagion in times of distress. The goal is to understand and model how the bankruptcy of a bank (or a large company) may or may not induce other bankruptcies. By contrast with the traditional approach in risk management, the focus is no longer on modeling the risks faced by a single financial institution, but on modeling the complex interrelations between financial institutions and the mechanisms of distress propagation among them.

The mathematical modeling of default contagion, by which an economic shock causing initial losses and the default of a few institutions is amplified due to complex linkages, leading to large-scale defaults, can be addressed by various techniques, such as network approaches (see in particular R. Cont et al. and A. Minca) or mean field interaction models (Garnier-Papanicolaou-Yang).
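A minimal sketch of a threshold contagion cascade of this kind (a toy illustration of ours, not the exact model of the cited works): a node defaults once the number of its defaulted in-neighbours reaches its threshold, and the cascade is iterated to a fixed point.

```python
# Toy threshold-contagion cascade on a directed graph (illustrative sketch).
# A node defaults once enough of its in-neighbours have defaulted.

def cascade(in_neighbours, thresholds, initially_defaulted):
    """in_neighbours[i]: nodes with an exposure towards node i."""
    defaulted = set(initially_defaulted)
    changed = True
    while changed:
        changed = False
        for node, nbrs in enumerate(in_neighbours):
            if node not in defaulted:
                hits = sum(1 for n in nbrs if n in defaulted)
                if hits >= thresholds[node]:
                    defaulted.add(node)
                    changed = True
    return defaulted

# 4 banks: bank 0 fails exogenously; bank 1 fails after 1 neighbour fails, etc.
in_nb = [[], [0], [0, 1], [1, 2]]
thresholds = [1, 1, 2, 2]
print(sorted(cascade(in_nb, thresholds, {0})))  # the shock spreads to all nodes
```

Lower thresholds or denser linkages make the fixed point larger, which is the amplification mechanism described above.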

We have contributed in recent years to the research on the control of contagion in financial systems in the framework of random graph models: in , , , A. Sulem, A. Minca and H. Amini consider a financial network described as a weighted directed graph, in which nodes represent financial institutions and edges the exposures between them. Distress propagation is modeled as an epidemic on this graph. They study the optimal intervention of a lender of last resort who seeks to make equity infusions in a banking system prone to insolvency and bank runs, under complete and incomplete information on the failure cluster, in order to minimize contagion effects. The paper provides in particular important insight into the relation between the value of a financial system, connectivity and optimal intervention.

The results show that up to a certain connectivity, the value of the financial system increases with connectivity. However, this is no longer the case if connectivity becomes too large. A natural question remains: how to create incentives for the banks to attain an optimal level of connectivity. This is studied in , where network formation for a large set of financial institutions represented as nodes is investigated. Linkages are a source of income, but at the same time they bear the risk of contagion, which is endogenous and depends on the strategies of all nodes in the system. The optimal connectivity of the nodes results from a game. Existence of an equilibrium in the system and its stability properties are studied. The results suggest that financial stability is better described in terms of the mechanism of network formation than in terms of simple statistics of the network topology, such as the average connectivity.

Liquidity risk is the risk arising from the difficulty of selling (or buying) an asset. Usually, assets are quoted on a market with a Limit Order Book (LOB) that registers all the waiting limit buy and sell orders for the asset. The bid (resp. ask) price is the most expensive (resp. cheapest) waiting buy (resp. sell) order. If a trader wants to sell a single asset, he will sell it at the bid price, but if he wants to sell a large quantity of assets, he will have to sell them at a lower price in order to match further waiting buy orders. This creates an extra cost and raises important issues. From a short-term perspective (from a few minutes to a few days), it may be interesting to split the selling order and to focus on finding optimal selling strategies. This requires modeling the market microstructure, i.e. how the market reacts on a short time scale to execution orders. From a long-term perspective (typically, one month or more), one has to understand how this cost modifies portfolio management strategies (especially delta-hedging or optimal investment strategies). At this time scale, there is no need to model the market microstructure precisely, but one has to specify how the liquidity costs aggregate.
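As a toy illustration of the extra cost described above, the following sketch (with made-up prices and quantities) computes the proceeds of a market sell order that "walks down" the bid side of the book:

```python
# Toy limit order book: proceeds of a market sell order walking the book.

def sell_proceeds(bid_side, quantity):
    """bid_side: list of (price, size) pairs sorted from best bid downwards."""
    proceeds, remaining = 0.0, quantity
    for price, size in bid_side:
        filled = min(remaining, size)
        proceeds += filled * price
        remaining -= filled
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("not enough liquidity in the book")
    return proceeds

bids = [(100.0, 50), (99.5, 100), (99.0, 200)]
print(sell_proceeds(bids, 1))    # 100.0: one share trades at the best bid
print(sell_proceeds(bids, 200))  # 19900.0 < 200 * 100: the liquidity cost
```

The average execution price decreases with the order size, which is precisely why splitting large orders and optimizing the selling schedule matters.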

For rather liquid assets, liquidity risk is usually taken into account via price impact models, which describe how a (large) trader influences asset prices. One is then typically interested in the optimal execution problem: how to buy/sell a given amount of assets optimally within a given deadline. This issue is directly related to the existence of statistical arbitrage or Price Manipulation Strategies (PMS). Most price impact models deal with single assets. A. Alfonsi, F. Klöck and A. Schied have proposed a multi-asset price impact model that extends previous works. Price impact models are usually relevant when trading at an intermediate frequency (say, every hour). At a lower frequency, price impact is usually ignored, while at a high frequency (every minute or second), one has to take into account the other traders and the price jumps, tick by tick. Midpoint price models are thus usually preferred at this time scale. With P. Blanc, A. Alfonsi has proposed a model that builds a bridge between these two types of models: they consider an Obizhaeva and Wang price impact model, in which the flow of market orders generated by the other traders is given by an exogenous process. They have shown that Price Manipulation Strategies exist when the flow of orders is a compound Poisson process. However, modeling this flow by a mutually exciting Hawkes process with a particular parametrization allows them to exclude these PMS. Besides, the optimal execution strategy is explicit in this model. A practical implementation is given in .
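As an illustration of the kind of self-exciting order flow mentioned above, the following sketch simulates a Hawkes process with exponential kernel by Ogata's thinning algorithm. The parameters are illustrative choices of ours, not the particular parametrization of the cited paper that excludes PMS.

```python
import math, random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Hawkes process with intensity mu + sum_i alpha*exp(-beta*(t - t_i)),
    simulated by Ogata's thinning algorithm."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        # between events the intensity decays, so its current value is a
        # valid upper bound until the next accepted point
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t > horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:  # accept with prob lam(t)/lam_bar
            events.append(t)
    return events

# stable regime requires alpha/beta < 1
trades = simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, horizon=10.0)
```

Each accepted event raises the intensity, so events cluster in time, mimicking the mutual excitation of market orders.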

- Calibration of stochastic and local volatility models.
Volatility is a key concept in modern mathematical finance and
an indicator of market stability.
Risk management and associated instruments depend strongly on
volatility, and volatility modeling is a crucial issue in the finance industry. Of particular importance is the modeling of asset dependence.

By Gyöngy's theorem, a local and stochastic volatility model is calibrated to the market prices of all call options with positive maturities and strikes if its local volatility function is equal to the ratio of the Dupire local volatility function over the root conditional mean square of the stochastic volatility factor given the spot value. This leads to an SDE which is nonlinear in the sense of McKean. Particle methods based on a kernel approximation of the conditional expectation, as presented by Guyon and Henry-Labordère, provide an efficient calibration procedure, even if some calibration errors may appear when the range of the stochastic volatility factor is very large. But so far, no existence result is available for this SDE nonlinear in the sense of McKean. In the particular case when the local volatility function is equal to the inverse of the root conditional mean square of the stochastic volatility factor multiplied by the spot value given this value, and the interest rate is zero, the solution to the SDE is a fake Brownian motion. When the stochastic volatility factor is a constant (over time) random variable taking finitely many values and the range of its square is not too large, B. Jourdain and A. Zhou proved existence for the associated Fokker-Planck equation . Thanks to results obtained by Figalli in , they deduced the existence of a new class of fake Brownian motions. They extended these results to the special case of the LSV model called Regime Switching Local Volatility, in which the stochastic volatility factor is a jump process taking finitely many values and with jump intensities depending on the spot level.
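In generic notation (the symbols below are illustrative, not those of the cited works), for an LSV model with zero interest rate the calibration condition described above reads

```latex
dS_t = \sigma(t,S_t)\, f(Y_t)\, S_t \, dW_t, \qquad
\sigma(t,s) \;=\; \frac{\sigma_{\mathrm{Dup}}(t,s)}{\sqrt{\mathbb{E}\!\left[f(Y_t)^2 \mid S_t = s\right]}},
```

where $\sigma_{\mathrm{Dup}}$ is the Dupire local volatility and $f(Y_t)$ the stochastic volatility factor. The conditional expectation on the right-hand side depends on the law of $(S_t,Y_t)$, which is why the calibrated SDE is nonlinear in the sense of McKean.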

- Interest rates modeling.
Affine term structure models have been popularized by Dai and Singleton , and by Duffie, Filipovic and Schachermayer . They consider vector affine diffusions (the coordinates are usually called factors) and assume that the short interest rate is a linear combination of these factors. A model of this kind is the Linear Gaussian Model (LGM), which considers a vector of Ornstein-Uhlenbeck diffusions for the factors, see El Karoui and Lacoste . A. Alfonsi et al. have proposed an extension of this model in which the instantaneous covariation between the factors is given by a Wishart process. In doing so, the model keeps its affine structure and tractability while generating smiles for option prices. A price expansion around the LGM is obtained for caplet and swaption prices.
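In generic notation (ours, for illustration), the LGM can be sketched as follows: the factor vector follows Ornstein-Uhlenbeck dynamics and the short rate is an affine function of the factors,

```latex
dX_t = -K X_t \, dt + \Sigma \, dW_t, \qquad
r_t = \delta_0(t) + \delta^\top X_t,
```

where $K$ and $\Sigma$ are constant matrices. The extension mentioned above replaces the constant covariance $\Sigma\Sigma^\top$ by a stochastic Wishart process while preserving the affine structure.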

- Numerical Methods for Martingale Optimal Transport problems.

The Martingale Optimal Transport (MOT) problem introduced in has received recent attention in finance since it gives model-free hedges and bounds on the prices of exotic options. The market prices of liquid call and put options give the marginal distributions of the underlying asset at each traded maturity. Under the simplifying assumption that the risk-free rate is zero, these probability measures are in increasing convex order, since by Strassen's theorem this property is equivalent to the existence of a martingale measure with the right marginal distributions. For an exotic payoff function of the values of the underlying on the time grid given by these maturities, the model-free upper bound (resp. lower bound) for the price consistent with these marginal distributions is given by the following martingale optimal transport problem: maximize (resp. minimize) the integral of the payoff over all martingale measures with the right marginal distributions. Super-hedging (resp. sub-hedging) strategies are obtained by solving the dual problem.
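In generic notation, writing $\mathcal{M}(\mu_1,\dots,\mu_n)$ for the set of measures under which the underlying $(S_{t_k})_k$ is a martingale with marginal $\mu_k$ at maturity $t_k$, the model-free bounds for a payoff $c$ take the form

```latex
\inf_{\mathbb{Q}\in\mathcal{M}(\mu_1,\dots,\mu_n)} \mathbb{E}^{\mathbb{Q}}\big[c(S_{t_1},\dots,S_{t_n})\big]
\;\le\; \text{price} \;\le\;
\sup_{\mathbb{Q}\in\mathcal{M}(\mu_1,\dots,\mu_n)} \mathbb{E}^{\mathbb{Q}}\big[c(S_{t_1},\dots,S_{t_n})\big].
```

The notation is ours for illustration; by Strassen's theorem, $\mathcal{M}(\mu_1,\dots,\mu_n)$ is nonempty exactly when the marginals are in increasing convex order.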
With J. Corbetta, A. Alfonsi and B. Jourdain have studied sampling methods preserving the convex order for two probability measures.

Their method is the first generic approach to tackle the martingale optimal transport problem numerically and can also be applied to several marginals.

- Robust option pricing in financial markets with imperfections.

A. Sulem, M.C. Quenez and R. Dumitrescu have studied robust pricing in an imperfect financial market
with default.
The market imperfections are taken into account via the nonlinearity of the wealth dynamics.
In this setting, the pricing system is expressed as a nonlinear g-expectation.

In a Markovian framework, the results of the paper on combined optimal stopping/stochastic control with

The dissipation of general convex entropies for continuous-time Markov processes can be described in terms of backward martingales with respect to the tail filtration. The relative entropy is the expected value of a backward submartingale. In the case of (not necessarily reversible) Markov diffusion processes, J. Fontbona and B. Jourdain used Girsanov theory to make explicit the Doob-Meyer decomposition of this submartingale. They deduced a stochastic analogue of the well-known entropy dissipation formula, which is valid for general convex entropies, including the total variation distance. Under additional regularity assumptions, and using Itô calculus and ideas of Arnold, Carlen and Ju , they obtained a new Bakry-Emery criterion which ensures exponential convergence of the entropy to 0. This criterion is non-intrinsic, since it depends on the square root of the diffusion matrix and cannot be written in terms of the diffusion matrix alone. They provided examples where the classical Bakry-Emery criterion fails but their non-intrinsic criterion applies, without modifying the law of the diffusion process.

With J. Corbetta, A. Alfonsi and B. Jourdain have studied the time derivative of the Wasserstein distance between the marginals of two Markov processes . The Kantorovich duality leads to a natural candidate for this derivative: up to the sign, it is the sum of the integrals, with respect to each of the two marginals, of the corresponding generator applied to the corresponding Kantorovich potential. For pure jump processes with bounded jump intensity, they proved that the evolution of the Wasserstein distance is actually given by this candidate. In dimension one, they showed that this remains true for piecewise deterministic Markov processes. They applied the formula to estimate the exponential decrease rate of the Wasserstein distance between the marginals of two birth and death processes with the same generator in terms of the Wasserstein curvature.
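In schematic notation (ours, for illustration), if $A$ and $B$ denote the generators of the two Markov processes and $(\varphi_t,\psi_t)$ a pair of Kantorovich potentials for the marginals $(\mu_t,\nu_t)$, the candidate derivative reads, up to the sign,

```latex
\frac{d}{dt}\,\mathcal{W}(\mu_t,\nu_t) \;=\; \int A\varphi_t \, d\mu_t \;+\; \int B\psi_t \, d\nu_t .
```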

- Mean-field limits of systems of interacting particles.
In , B. Jourdain and his former PhD student J. Reygner have studied a mean-field version of rank-based models of equity markets such as the Atlas model introduced by Fernholz in the framework of Stochastic Portfolio Theory. They obtained an asymptotic description of the market when the number of companies grows to infinity. Then, they discussed the long-term capital distribution, recovering the Pareto-like shape of capital distribution curves usually derived from empirical studies, and providing a new description of the phase transition phenomenon observed by Chatterjee and Pal.
They have also studied multitype sticky particle systems which can be obtained as vanishing noise limits of multitype rank-based diffusions (see ).
Under a uniform strict hyperbolicity assumption on the characteristic fields, they constructed a multitype version of the sticky particle dynamics.
In ,
they obtain the optimal rate of convergence, as the number of particles grows to infinity, of the approximate solutions to the diagonal hyperbolic system based on multitype sticky particles and on easy-to-compute time discretizations of these dynamics.

- Mean field control and Stochastic Differential Games (SDGs).
To handle situations where controls are chosen by several agents who interact in various ways, one may use the theory of Stochastic Differential Games (SDGs). Forward-backward SDGs and stochastic control under model uncertainty are studied
in by A. Sulem and B. Øksendal.
Also of interest are large population games, where each player interacts with the average effect of the others and individually has a negligible effect on the overall population. Such an interaction pattern may be modeled by mean field coupling, which leads to the study of mean-field stochastic control and related SDGs.
A. Sulem, Y. Hu and B. Øksendal have studied singular mean field control problems and singular mean field two-player stochastic differential games . Both sufficient and necessary conditions for the optimal controls and for the Nash equilibrium are obtained. Under some assumptions, the optimality conditions for singular mean-field control reduce to a reflected Skorohod problem.
Applications to optimal irreversible investments under uncertainty have been investigated.
Predictive mean-field equations as a model for prices influenced by beliefs about the future are studied in .

M.C. Quenez and A. Sulem have studied optimal stopping with nonlinear expectation.

They have also addressed a generalized Dynkin game problem with nonlinear expectation.

A generalized mixed game problem, where the players have two actions, continuous control and stopping, is studied in a Markovian framework in . In this work, dynamic programming principles (DPPs) are established: a strong DPP is proved in the case of a regular obstacle and a weak one in the irregular case. Using these DPPs, links with parabolic partial integro-differential Hamilton-Jacobi-Bellman variational inequalities with two obstacles are obtained.

With B. Øksendal and C. Fontana, A. Sulem has contributed to the issues of robust utility maximization , , and the relations between information and performance .

Vlad Bally has extended the stochastic differential calculus built by P. Malliavin, which allows one to obtain integration by parts formulas and the associated regularity of probability laws. In collaboration with L. Caramellino (Tor Vergata University, Rome), V. Bally has developed an abstract version of Malliavin calculus based on a splitting method (see ). It concerns random variables whose law is locally lower bounded by the Lebesgue measure (the so-called Doeblin condition). Such random variables may be represented as the sum of a "smooth" random variable plus a remainder. Based on this smooth part, he develops a stochastic calculus inspired by Malliavin calculus . An interesting application of such a calculus is to prove convergence for irregular test functions (in total variation distance and, more generally, in distribution distance) in some more or less classical frameworks, such as the Central Limit Theorem, local versions of the CLT and, moreover, general stochastic polynomials . An exciting application concerns the number of roots of trigonometric polynomials with random coefficients . Using the Kac-Rice lemma in this framework, one comes back to a multidimensional CLT and employs Edgeworth expansions of order three for irregular test functions in order to study the mean and the variance of the number of roots. Another application concerns U-statistics associated with polynomial functions. The techniques of generalized Malliavin calculus developed in are applied in to the approximation of Markov processes (see and ). On the other hand, using the classical Malliavin calculus, V. Bally, in collaboration with L. Caramellino and P. Pigato, studied some subtle phenomena related to diffusion processes, such as short-time behavior and estimates of tube probabilities (see , ).

Our project team is very much involved in numerical probability, aiming at pushing numerical methods towards the effective implementation. This numerical orientation is supported by a mathematical expertise which permits a rigorous analysis of the algorithms and provides theoretical support for the study of rates of convergence and the introduction of new tools for the improvement of numerical methods. This activity in the MathRisk team is strongly related to the development of the Premia software.

With A. Kohatsu-Higa, A. Alfonsi and B. Jourdain have proved, using optimal transport tools, that the Wasserstein distance between the time marginals of an elliptic SDE and its Euler discretization with

With their former PhD student, A. Al Gerbi, E. Clément and B. Jourdain have proved strong convergence with order

A. Kebaier and B. Jourdain are interested in deriving non-asymptotic error bounds for the multilevel Monte Carlo method. As a first step, they dealt in with the explicit Euler discretization of stochastic differential equations with a constant diffusion coefficient, obtaining Gaussian-type concentration bounds. To do so, they used the Clark-Ocone representation formula and derived bounds for the moment generating functions of the squared difference between a crude Euler scheme and a finer one, and of the squared difference of their Malliavin derivatives. The estimation of such differences is much more involved than that of a single Euler scheme contribution, which explains why they assume the diffusion coefficient to be constant. This assumption ensures the boundedness of the Malliavin derivatives of both the SDE and its Euler scheme.
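A generic multilevel Monte Carlo sketch in the spirit of this setting (explicit Euler scheme, constant diffusion coefficient) is given below; the drift, test function and sample sizes are illustrative choices of ours, not those of the cited paper.

```python
import math, random

def euler_pair(b, sigma, x0, T, level, rng):
    """Fine (2^level steps) and coarse (2^(level-1) steps) Euler schemes
    driven by the same Brownian increments; at level 0 only the fine one."""
    n_f = 2 ** level
    h_f = T / n_f
    xf = xc = x0
    dw_c = 0.0
    for i in range(n_f):
        dw = rng.gauss(0.0, math.sqrt(h_f))
        xf += b(xf) * h_f + sigma * dw
        dw_c += dw
        if level > 0 and i % 2 == 1:   # one coarse step per two fine steps
            xc += b(xc) * (2 * h_f) + sigma * dw_c
            dw_c = 0.0
    return xf, xc

def mlmc(f, b, sigma, x0, T, L, samples_per_level, seed=0):
    """Telescopic estimator E[f level 0] + sum of level corrections."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(L + 1):
        n = samples_per_level[level]
        s = 0.0
        for _ in range(n):
            xf, xc = euler_pair(b, sigma, x0, T, level, rng)
            s += f(xf) - (f(xc) if level > 0 else 0.0)
        est += s / n
    return est

# Ornstein-Uhlenbeck dX = -X dt + dW started at 0, f(x) = x^2;
# exact value E[X_1^2] = (1 - e^{-2})/2 ~ 0.432
estimate = mlmc(lambda x: x * x, lambda x: -x, 1.0, 0.0, 1.0,
                L=5, samples_per_level=[100000, 50000, 25000, 12000, 6000, 3000])
```

The coupling of fine and coarse schemes through shared Brownian increments is what makes the level corrections small, so most samples can be spent on the cheap coarse levels.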

R. Dumitrescu and C. Labart have studied the discrete time approximation scheme for the solution of a doubly reflected Backward Stochastic Differential Equation with jumps, driven by a Brownian motion and an independent compensated Poisson process , .

V. Bally and A. Kohatsu-Higa have recently proposed an unbiased estimator based on the parametrix method to compute expectations of functions of a given SDE ( ). This method is very general, and A. Alfonsi, A. Kohatsu-Higa and M. Hayashi have applied it to the case of one-dimensional reflected diffusions. In this case, the estimator can be obtained explicitly by using the scheme of Lépingle and is quite simple to implement. It is compared to other simulation methods for reflected SDEs.

In joint work with A. Bouselmi, D. Lamberton has studied the asymptotic behavior of the exercise boundary near maturity
for American put options in exponential Lévy models. In , they deal with jump-diffusion models and establish that, in some cases, the behavior differs from the classical
Black-Scholes setting.
D. Lamberton has also worked on the binomial approximation of the American put. The conjectured rate of convergence
is

The domains of application are quantitative finance and insurance, with emphasis on risk modeling and control. In particular, MathRisk focuses on dependence modeling, systemic risk, market microstructure modeling and risk measures.

Our work aims to contribute to a better management of risk in the banking and insurance systems, in particular through the study of systemic risk, asset price modeling and the stability of financial markets.

Despite the pandemic, the MathRisk team has been increasingly active, with several PhD defenses, participation in the main international conferences and intense publication activity.

PREMIA: Numerical Platform for quantitative finance

PREMIA : Release 23

The goal of the project is to develop a model that captures the dynamics of a complex financial network and to provide methods for the control of default contagion, both by a regulator and by the institutions themselves. This introduces a new class of problems that are very challenging mathematically, as they rely on mean field games and random graph theory. Agnès Sulem, Andreea Minca (Cornell University) and Hamed Amini (J. Mack Robinson College of Business, Georgia State University) have studied a dynamic contagion risk model with recovery features. They introduce threshold growth in the classical threshold contagion model, in which nodes have downward jumps when a neighboring node fails. Choosing the configuration model as the underlying graph, they prove fluid limits for the baseline model, as well as extensions to the directed case, state-dependent inter-arrival times and the case of growth driven by upward jumps. They then allow nodes to choose their connectivity by trading off link benefits and contagion risk. They define a rational equilibrium concept in which nodes choose their connectivity according to an expected failure probability of any given link, and then impose the condition that the expected failure probability coincides with the actual failure probability under the optimal connectivity. Existence of an asymptotic equilibrium is shown, as well as convergence of the sequence of equilibria on the finite networks. In particular, these results show that systems with higher overall growth may have a higher failure probability in equilibrium .

Agnès Sulem, Hamed Amini and their PhD student Zhongyang Cao have obtained limit results for default cascades in sparse heterogeneous financial networks subject to an exogenous macroeconomic shock in . These results are applied to determine the optimal policy for a social planner who targets interventions during a financial crisis, under a budget constraint and partial information on the financial network. In , they present a general tractable framework for understanding the joint impact of fire sales and default cascades on systemic risk in complex financial networks. The effect of heterogeneity in the network structure and in the price impact function on the final size of the default cascade and on the fire sales loss is investigated.

Agnès Sulem, Rui Chen, Andreea Minca and Roxana Dumitrescu have studied mean-field BSDEs with a generalized mean-field operator which can capture system influence with higher-order interactions. Convergence of finite approximations to the mean-field BSDE has been obtained. In the finite system, the mean-field term can incorporate, for example, an inhomogeneous graph model in which the intensity of bilateral interactions depends on the states of the end nodes by means of a kernel function. This opens the path towards using dynamic risk measures induced by mean-field BSDEs as a complementary approach to systemic risk measurement.

Agnès Sulem has studied with Miryana Grigorova (University of Leeds) and
Marie-Claire Quenez (Université Paris Denis Diderot) superhedging prices and the associated superhedging strategies for both European and American options (see and ) in a non-linear incomplete market model with default.
The underlying market model consists of a risk-free asset and a risky asset driven by a Brownian motion and a compensated default martingale. The portfolio processes follow non-linear dynamics with a non-linear driver

With his PhD student N. Vadillo Fernandez, A. Alfonsi develops a stochastic model for temperature in order to price climate derivatives (on the Heating Degree Day index). They also work on its estimation from historical weather data.

A. Alfonsi has obtained a grant from the AXA Foundation for a Joint Research Initiative with a team of AXA France working on strategic asset allocation. This team has to make recommendations on investment across asset classes such as equity, real estate or bonds. To do so, each side of the balance sheet (assets and liabilities) is modeled, taking into account both its own dynamics and the interactions between the two. Given that insurance products are long-term contracts, the projections of the company's margins have to be made over long maturities. When running simulations to assess investment policies, it is necessary to take into account the SCR, the amount of capital that has to be set aside to manage the portfolio. Typically, the computation of the future values of the SCR involves expectations under conditional laws, which is demanding in computation time.

A. Alfonsi and his PhD student A. Cherchali have constructed a model of the ALM management of insurance companies that takes into account the regulatory constraints on life-insurance . They have developed Multilevel Monte-Carlo methods to approximate the SCR (Solvency Capital Requirement) at a future date and more generally, to calculate the worst of

Antonino Zanette, with Ludovic Goudenège (Ecole Centrale de Paris) and Andrea Molent (University of Udine), studies the valuation of GMWB variable annuities when both stochastic volatility and stochastic interest rates are considered, in the Heston Hull-White model .

Neural network regression for Bermudan option pricing. The pricing of Bermudan options amounts to solving a dynamic programming principle, in which the main difficulty, especially in high dimension, comes from the conditional expectation involved in the computation of the continuation value. These conditional expectations are classically computed by regression techniques on a finite-dimensional vector space. In ,
Bernard Lapeyre and Jérôme Lelong
study neural network approximations of conditional expectations.
They prove the convergence of the well-known Longstaff-Schwartz algorithm when the standard least-squares regression is replaced by a neural network approximation.
They illustrate the numerical efficiency of neural networks as an alternative to standard regression methods for approximating conditional expectations on several numerical examples.
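For reference, a minimal sketch of the classical Longstaff-Schwartz algorithm with a quadratic polynomial regression (the baseline regression that the cited work replaces by a neural network) might look as follows; the model and its parameters are illustrative choices of ours.

```python
import math, random

def lsm_bermudan_put(s0, K, r, sigma, T, n_ex, n_paths, seed=0):
    """Longstaff-Schwartz pricing of a Bermudan put under Black-Scholes,
    with quadratic polynomial regression for the continuation value."""
    rng = random.Random(seed)
    dt = T / n_ex
    disc = math.exp(-r * dt)
    # simulate GBM paths at the n_ex exercise dates
    paths = []
    for _ in range(n_paths):
        s, path = s0, []
        for _ in range(n_ex):
            s *= math.exp((r - 0.5 * sigma**2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            path.append(s)
        paths.append(path)
    payoff = lambda s: max(K - s, 0.0)
    value = [payoff(p[-1]) for p in paths]      # value at maturity
    for k in range(n_ex - 2, -1, -1):           # backward induction
        value = [disc * v for v in value]
        itm = [i for i in range(n_paths) if payoff(paths[i][k]) > 0]
        if itm:
            c = _quad_fit([paths[i][k] for i in itm], [value[i] for i in itm])
            for i in itm:
                x = paths[i][k]
                continuation = c[0] + c[1] * x + c[2] * x * x
                if payoff(x) > continuation:    # exercise beats continuation
                    value[i] = payoff(x)
    return disc * sum(value) / n_paths

def _quad_fit(xs, ys):
    """Least-squares fit of a + b*x + c*x^2 via the 3x3 normal equations."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                        # Gaussian elimination
        piv = max(range(col, 3), key=lambda r_: abs(A[r_][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for j in range(col, 3):
                A[row][j] -= f * A[col][j]
            b[row] -= f * b[col]
    c = [0.0] * 3
    for row in (2, 1, 0):                       # back substitution
        c[row] = (b[row] - sum(A[row][j] * c[j] for j in range(row + 1, 3))) / A[row][row]
    return c

price = lsm_bermudan_put(100.0, 100.0, 0.05, 0.2, 1.0, n_ex=10, n_paths=5000)
```

A neural network variant replaces `_quad_fit` and the quadratic continuation formula by a trained network evaluated at the current spot, which is what makes the approach tractable in high dimension.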

Machine learning for pricing American options.
In ,
L. Goudenège, A. Molent and A. Zanette
develop two techniques, called GPR Tree and GPR Exact Integration,
both based on machine learning, to compute prices of American basket options
in high dimension. Both Markovian and non-Markovian models are
studied, in particular the rough Bergomi model, which provides stochastic volatility with memory.
In , they propose an efficient method to compute the price of multi-asset American options, based on machine learning, Monte Carlo simulations and variance reduction techniques.

Big data techniques for portfolio optimization.
With his PhD student Hachem Madmoun, Bernard Lapeyre
analyses asset price trajectories
using the Fourier transform and machine learning techniques such as variational autoencoders (VAE)
and hidden Markov models (HMM)
for portfolio management.

In , Oumaima Bencheikh and Benjamin Jourdain study the approximation in Wasserstein distance with index

With V. Ehrlacher and R. Coyaud, Aurelien Alfonsi is working on numerical methods to approximate the optimal transport problem in the symmetric multimarginal case .

It is known since that two one-dimensional probability measures in the convex order admit a martingale coupling with respect to which the integral of

In , B. Jourdain and W. Margheriti are interested in martingale rearrangement couplings. As introduced by Wiesel in order to prove the stability of Martingale Optimal Transport problems, these are projections, in the adapted Wasserstein distance, of couplings between two probability measures on the real line in the convex order onto the set of martingale couplings between these two marginals. Because of the lack of relative compactness of the set of couplings with given marginals for the adapted Wasserstein topology, the existence of such a projection is not clear at all. Under a barycentre dispersion assumption on the original coupling, which is in particular satisfied by the Hoeffding-Fréchet or comonotone coupling, Wiesel gives a clear algorithmic construction of a martingale rearrangement when the marginals are finitely supported, and then gets rid of the finite support assumption by relying on a rather messy limiting procedure to overcome the lack of relative compactness. B. Jourdain and W. Margheriti give a direct general construction of a martingale rearrangement coupling under the barycentre dispersion assumption. This martingale rearrangement is obtained from the original coupling by an approach similar to the construction given in of the inverse transform martingale coupling, a member of a family of martingale couplings close to the Hoeffding-Fréchet coupling, but for a slightly different injection into the set of extended couplings introduced by Beiglböck and Juillet and which involves the uniform distribution on

While many questions in (robust) finance can be posed in the martingale optimal transport (MOT) framework, others require considering non-linear cost functionals as well.
Following the terminology of Gozlan, Roberto, Samson and Tetali for classical optimal transport, this corresponds to weak martingale optimal transport (WMOT). In , M. Beiglböck, B. Jourdain, W. Margheriti and G. Pammer establish the stability of WMOT in dimension one, which is important since financial data can give only imprecise information on the underlying marginals. As applications, they deduce the stability of the superreplication bound for VIX futures as well as the stability of the stretched Brownian motion, and they derive a monotonicity principle for WMOT.

Quantization provides a very natural way to preserve the convex order when approximating two ordered probability measures by two finitely supported ones. Indeed, when the dominating original probability measure (in the convex order) is compactly supported, it is smaller than any of its dual quantizations, while the dominated original measure is greater than any of its stationary (and therefore any of its optimal) quadratic primal quantizations. Moreover, the quantization errors then correspond to martingale couplings between each original probability measure and its quantization. This enables B. Jourdain and G. Pagès to prove
in that any martingale coupling between the original probability measures can be approximated by a martingale coupling between their quantizations in Wasserstein distance, with a rate given by the quantization errors, but also in the much finer adapted Wasserstein distance. As a consequence, while the stability of (Weak) Martingale Optimal Transport problems with respect to the marginal distributions has only been established in dimension 1 so far, their value function computed numerically for the quantized marginals converges, in any dimension, to the value for the original probability measures as the numbers of quantization points go to infinity.
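One standard way to compute a stationary quadratic primal quantization of an empirical measure on the real line is Lloyd's fixed-point iteration; the sketch below (our own illustration, not the authors' implementation) alternates nearest-centre assignment and cell-barycentre updates, so that at a stationary grid each centre is the conditional mean of its cell and the quantized measure is dominated in the convex order by the sampled law:

```python
import numpy as np

def lloyd_quantize(sample, n_points, iters=50):
    """Lloyd's iteration for quadratic primal quantization of an empirical
    measure: assign each atom to the nearest centre, then move each centre
    to the barycentre of its cell. Returns the quantized atoms and weights."""
    centers = np.quantile(sample, (np.arange(n_points) + 0.5) / n_points)
    for _ in range(iters):
        idx = np.argmin(np.abs(sample[:, None] - centers[None, :]), axis=1)
        for j in range(n_points):
            cell = sample[idx == j]
            if cell.size:
                centers[j] = cell.mean()
    weights = np.bincount(idx, minlength=n_points) / sample.size
    return centers, weights

rng = np.random.default_rng(1)
sample = rng.normal(size=20_000)
centers, weights = lloyd_quantize(sample, 5)
# Each centre is the barycentre of its cell, so the quantized measure
# preserves the sample mean exactly.
print(centers, weights)
```

Quantizing both marginals this way gives finitely supported measures on which a (W)MOT value function can be computed by linear programming.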

A. Alfonsi and A. Kebaier have studied the strong error for the approximation of Stochastic Volterra Equations and processes with rough paths in . They are now working on the study of the weak error for some approximation schemes.
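To fix ideas, a stochastic Volterra equation convolves the drift and diffusion coefficients with a kernel, so the solution is non-Markovian and a discretisation must re-weight the whole past at each step. The following Euler-type sketch with the fractional kernel K(u) = u^{H−1/2}/Γ(H+1/2) (rough when H < 1/2) is a basic first-order illustration under our own assumptions, not necessarily one of the schemes analysed in the paper:

```python
import math
import numpy as np

def euler_volterra(x0, b, sigma, hurst, T, n, rng):
    """Euler scheme for X_t = x0 + int_0^t K(t-s) b(X_s) ds
                              + int_0^t K(t-s) sigma(X_s) dW_s
    with fractional kernel K(u) = u^(H-1/2) / Gamma(H+1/2).
    The kernel is evaluated at left endpoints, away from its singularity."""
    dt = T / n
    t = np.arange(1, n + 1) * dt
    x = np.empty(n + 1)
    x[0] = x0
    dw = rng.normal(scale=math.sqrt(dt), size=n)
    for i in range(1, n + 1):
        s = np.arange(i) * dt  # left endpoints s_0, ..., s_{i-1}
        k = (t[i - 1] - s) ** (hurst - 0.5) / math.gamma(hurst + 0.5)
        x[i] = x0 + np.sum(k * (b(x[:i]) * dt + sigma(x[:i]) * dw[:i]))
    return x

rng = np.random.default_rng(2)
path = euler_volterra(1.0, b=lambda x: -x, sigma=lambda x: 0.3 * np.ones_like(x),
                      hurst=0.3, T=1.0, n=200, rng=rng)
print(path[-1])
```

The O(n²) cost of re-weighting the past is one reason the error analysis of such schemes is delicate.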

With E. Lombardo, A. Alfonsi is studying high order schemes for the weak error of the CIR process, based on the construction proposed in a recent paper by A. Alfonsi and V. Bally .
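For contrast with such high order schemes, the basic full-truncation Euler scheme for the CIR process dX_t = κ(θ − X_t)dt + σ√X_t dW_t has weak order one only; a self-contained sketch (our own illustration, not the construction of the paper):

```python
import numpy as np

def cir_full_truncation(x0, kappa, theta, sigma, T, n, n_paths, rng):
    """Full-truncation Euler scheme for the CIR process
        dX_t = kappa (theta - X_t) dt + sigma sqrt(X_t) dW_t.
    Negative excursions of the iterate are truncated at 0 inside the
    coefficients, a standard fix for the square root."""
    dt = T / n
    x = np.full(n_paths, float(x0))
    for _ in range(n):
        xp = np.maximum(x, 0.0)  # truncate before evaluating coefficients
        x = x + kappa * (theta - xp) * dt \
              + sigma * np.sqrt(xp) * rng.normal(scale=np.sqrt(dt), size=n_paths)
    return np.maximum(x, 0.0)

rng = np.random.default_rng(3)
xT = cir_full_truncation(x0=0.04, kappa=1.5, theta=0.04, sigma=0.3,
                         T=1.0, n=200, n_paths=50_000, rng=rng)
# The exact mean is theta + (x0 - theta) * exp(-kappa * T) = 0.04 here.
print(xT.mean())
```

High order weak schemes reach the same bias with far fewer time steps, which is the point of the construction studied above.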

B. Jourdain and his PhD student E. Kahn study the stochastic differential equations satisfied by the eigenvalues of Wishart processes , .

In collaboration with L. Caramellino (University Tor Vergata) and G. Poly (University of Rennes), V. Bally has developed a Malliavin-type calculus for a general class of random variables which are not assumed to be Gaussian (as is the case in standard Malliavin calculus). This is an alternative to the

A. Alfonsi and V. Bally propose a new approach to study existence and uniqueness of solutions of the Boltzmann equation in . They now work with L. Caramellino to extend their results under more general assumptions on the jumps.

In collaboration with L. Caramellino and A. Kohatsu-Higa, V. Bally studies the regularity of the solutions of jump-type equations. A first result is obtained in .

Damien Lamberton has worked on American options in stochastic volatility models (especially the Heston model). He has improved some results on the regularity of the value function and on the exercise boundary.

CIFRE agreement Milliman company/Ecole des Ponts,

PhD thesis of Sophian Mehalla on "Interest rate risk modeling for insurance companies", Supervisor: Bernard Lapeyre.

CIFRE agreement Bramham Gardens/Ecole des Ponts

PhD thesis of Hachem Madmoun: "Portfolio management using (big) data techniques"

Collaboration with IRT SystemX

PhD grant of Adrien Touboul on "Uncertainty computation in a graph of physical simulations", Supervisors: Bernard Lapeyre and Julien Reygner.

Chair X-ENPC-SU-Société Générale "Financial Risks" of the Risk Foundation.

Research stay of Hamed Amini, Associate Professor, Department of Risk Management and Insurance, Mack Robinson College of Business, Georgia State University, June-August 2021.

Collaboration with A. Sulem and PhD student Z. Cao on systemic risk in financial networks.

Collaborations with L. Caramellino (Tor Vergata University, Roma), A. Kohatsu-Higa (Ritsumeikan University, Japan), A. Minca (Cornell University), M. Grigorova (University of Leeds), H. Amini (Georgia State University).

Labex Bezout

Pôle Finance Innovation

A. Alfonsi

Licence

- A. Alfonsi: "Probability theory", first-year course, Ecole des Ponts.

- V. Bally: "Analyse Hilbertienne", L3 course, UPEMLV.

- B. Jourdain: "Mathematical tools for engineers", first-year course, ENPC.

- B. Jourdain: "Mathematical finance", second-year course, ENPC.

- D. Lamberton: "Intégration et probabilités", L3 course, Université Gustave Eiffel.

Master

- Aurélien Alfonsi

- Vlad Bally

- Benjamin Jourdain

- B. Jourdain, B. Lapeyre

- J.-F. Delmas, B.Jourdain

- D. Lamberton

- B. Lapeyre

- A. Sulem

Charles Meynard (May-August), adviser: Ludovic Goudenège.

"Numerical methods in finance and insurance"

Agnès Sulem