The Inria project team MathRisk was created in 2013 as the follow-up of the
MathFi project team founded in 2000. MathFi focused on financial
mathematics, in particular on computational methods for pricing and hedging
increasingly complex financial products.
The 2007 global financial crisis and its aftermath abruptly
highlighted the critical importance of a better understanding and
management of risk. The MathRisk project has accordingly been reoriented
towards the mathematical treatment of risk, and addresses broad research
topics embracing risk measurement and risk management, modeling and
optimization in quantitative finance, as well as other related domains
where risk control is paramount.
The MathRisk project team aims both at producing mathematical tools and models
in these domains and at developing collaborations with various
institutions involved in risk control.
Quantitative finance remains an important source of
mathematical problems and applications for the project. Indeed, the
pressure of new legislation has led to a massive reorientation of
research priorities, and the interest of analysts has shifted towards risk
control.

The scientific issues related to quantitative finance we consider include systemic risk and contagion modeling, robust finance, market frictions, counterparty and liquidity risk, asset dependence modeling, market microstructure modeling and price impact. In this context, models must take into account multidimensional features and various market imperfections. They are much more demanding mathematically and numerically, and require the development of risk measures that account for incompleteness issues, model uncertainty, the interplay between information and performance, and various defaults.

Besides, financial institutions, subject to more stringent regulatory
requirements such as FRTB or XVA computation, are facing practical
implementation challenges which still need to be solved.
Research focused on numerical efficiency remains strongly needed in this context, renewing the interest in
the numerical platform Premia.

While these themes arise naturally in the world of quantitative finance, a number of these issues and mathematical tools are also relevant to the treatment of risk in other areas such as economics, social insurance and sustainable development, which are of fundamental importance in today's society. In these contexts, risk management appears at different time scales, from high-frequency data to long-term life insurance management, raising challenging new modeling and numerical issues.

The MathRisk project is strongly involved in the development of new
mathematical methods and numerical algorithms. Mathematical tools
include stochastic modeling, stochastic analysis, in particular
stochastic (partial) differential equations and various aspects of
stochastic control and optimal stopping of these equations, nonlinear expectations, Malliavin calculus, stochastic optimization,
dynamic game theory, random graphs, martingale optimal transport
(especially in relation to numerical considerations), long time behavior
of Markov processes (with applications to Monte-Carlo methods) and
generally advanced numerical methods for effective solutions.

After the recent financial crisis, systemic risk has emerged as one of the major research topics in mathematical finance. Interconnected systems are subject to contagion in times of distress. The aim is to understand and model how the bankruptcy of a bank (or a large company) may or may not induce other bankruptcies. By contrast with the traditional approach in risk management, the focus is no longer on modeling the risks faced by a single financial institution, but on modeling the complex interrelations between financial institutions and the mechanisms of distress propagation among them.

The mathematical modeling of default contagion, by which an economic shock causing initial losses and default of a few institutions is amplified due to complex linkages, leading to large scale defaults, can be addressed by various techniques, such as network approaches (see in particular R. Cont et al. 48 and A. Minca 86) or mean field interaction models (Garnier-Papanicolaou-Yang 77).
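
The cascade mechanism itself is easy to illustrate. The following Python sketch (with an invented random exposure matrix and capital buffers — not a model from the cited works) iterates a default cascade to its fixed point: a node defaults once its losses on exposures to defaulted counterparties exceed its capital.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                        # number of institutions
# Hypothetical network: exposures[i, j] > 0 means i loses that amount
# if j defaults; capital[i] is i's loss-absorbing buffer.
adj = rng.random((n, n)) < 0.1
np.fill_diagonal(adj, False)
exposures = adj * rng.uniform(0.5, 1.5, (n, n))
capital = rng.uniform(1.0, 3.0, n)

def cascade(initial_defaults, exposures, capital, recovery=0.0):
    """Iterate the default cascade to its fixed point: a solvent node
    defaults once the (1 - recovery) fraction of its exposures to
    defaulted counterparties exceeds its capital buffer."""
    defaulted = np.zeros(len(capital), dtype=bool)
    defaulted[initial_defaults] = True
    while True:
        losses = (1.0 - recovery) * exposures[:, defaulted].sum(axis=1)
        new = (losses > capital) & ~defaulted
        if not new.any():
            return defaulted
        defaulted |= new

final = cascade([0], exposures, capital)
print(final.sum(), "defaults out of", n)
```

With a higher recovery rate, losses given default shrink and the cascade stops earlier.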

We have contributed in recent years to the research on the control of contagion in financial systems in the framework of random graph models: in 50, 87, 5, A. Sulem with A. Minca and H. Amini consider a financial network described as a weighted directed graph, in which nodes represent financial institutions and edges the exposures between them. Distress propagation is modeled as an epidemic on this graph. They study the optimal intervention of a lender of last resort who seeks to make equity infusions in a banking system prone to insolvency and to bank runs, under complete and incomplete information on the failure cluster, in order to minimize the contagion effects. The paper 5 provides in particular important insight on the relation between the value of a financial system, connectivity and optimal intervention.

The results show that up to a certain connectivity, the value of the financial system increases with connectivity. However, this is no longer the case if connectivity becomes too large. The natural question remains how to create incentives for the banks to attain an optimal level of connectivity. This is studied in 62, where network formation for a large set of financial institutions represented as nodes is investigated. Linkages are a source of income, but at the same time they bear the risk of contagion, which is endogenous and depends on the strategies of all nodes in the system. The optimal connectivity of the nodes results from a game. Existence of an equilibrium in the system and its stability properties are studied. The results suggest that financial stability is better described in terms of the mechanism of network formation than in terms of simple statistics of the network topology such as the average connectivity.

Liquidity risk is the risk arising from the difficulty of selling (or buying) an asset. Usually, assets are quoted on a market with a Limit Order Book (LOB) that registers all the waiting limit buy and sell orders for this asset. The bid (resp. ask) price is the most expensive (resp. cheapest) waiting buy (resp. sell) order. If a trader wants to sell a single asset, he will sell it at the bid price, but if he wants to sell a large quantity of assets, he will have to sell them at lower prices in order to match further waiting buy orders. This creates an extra cost, and raises important issues. From a short-term perspective (from a few minutes to a few days), it may be interesting to split the selling order and to look for optimal selling strategies. This requires modeling the market microstructure, i.e. how the market reacts on a short time scale to execution orders. From a long-term perspective (typically, one month or more), one has to understand how this cost modifies portfolio management strategies (especially delta-hedging or optimal investment strategies). At this time scale, there is no need to model the market microstructure precisely, but one has to specify how the liquidity costs aggregate.
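
The extra cost of a large order can be made concrete on a stylized book. The following sketch (hypothetical price levels and quantities, not market data) walks a sell order down the bid side of an LOB and measures the liquidity cost relative to the best bid.

```python
# Hypothetical bid side of a limit order book: (price, quantity),
# best bid first.
bids = [(100.0, 50), (99.9, 80), (99.8, 120), (99.7, 200)]

def sell_market_order(bids, qty):
    """Walk down the book; return (proceeds, average price, liquidity
    cost), the cost being what is lost relative to selling the whole
    quantity at the best bid."""
    remaining, proceeds = qty, 0.0
    for price, available in bids:
        take = min(remaining, available)
        proceeds += take * price
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("order larger than the visible book")
    return proceeds, proceeds / qty, qty * bids[0][0] - proceeds

print(sell_market_order(bids, 1))     # one share: zero liquidity cost
print(sell_market_order(bids, 200))   # large order: walks three levels
```

For a single share the liquidity cost vanishes; for 200 shares the order consumes three price levels and pays the corresponding extra cost.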

For rather liquid assets, liquidity risk is usually taken into account via price impact models, which describe how a (large) trader influences asset prices. One is then typically interested in the optimal execution problem: how to buy/sell a given amount of assets optimally within a given deadline. This issue is directly related to the existence of statistical arbitrage or Price Manipulation Strategies (PMS). Most price impact models deal with single assets. A. Alfonsi, F. Klöck and A. Schied 47 have proposed a multi-asset price impact model that extends previous works. Price impact models are usually relevant when trading at an intermediate frequency (say, every hour). At a lower frequency, price impact is usually ignored, while at a high frequency (every minute or second), one has to take into account the other traders and the price jumps, tick by tick. Midpoint price models are thus usually preferred at this time scale. With P. Blanc, Alfonsi 3 has proposed a model that bridges these two types of model: they have considered an Obizhaeva and Wang price impact model in which the flow of market orders generated by the other traders is given by an exogenous process. They have shown that Price Manipulation Strategies exist when the flow of orders is a compound Poisson process. However, modeling this flow by a mutually exciting Hawkes process with a particular parametrization allows them to exclude these PMS. Besides, the optimal execution strategy is explicit in this model. A practical implementation is given in 42.
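
A self-exciting order flow of the Hawkes type used in 3 can be simulated by Ogata's thinning algorithm. The sketch below is a minimal univariate version with an exponential kernel and illustrative parameters; the cited model is mutually exciting and more elaborate.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Ogata thinning for a self-exciting Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Stable when the branching ratio alpha / beta < 1."""
    t, events, lam_excess = 0.0, [], 0.0   # lam_excess = lambda(t) - mu
    while True:
        lam_bar = mu + lam_excess          # bounds lambda until next event
        w = rng.exponential(1.0 / lam_bar)
        t += w
        lam_excess *= np.exp(-beta * w)    # intensity decays between events
        if t > horizon:
            return np.array(events)
        if rng.random() * lam_bar <= mu + lam_excess:
            events.append(t)               # accepted: an order arrives
            lam_excess += alpha            # and excites future arrivals

events = simulate_hawkes(1.0, 0.5, 1.0, 1000.0, np.random.default_rng(0))
print(len(events) / 1000.0)   # close to mu / (1 - alpha/beta) = 2
```

The empirical event rate matches the stationary rate mu / (1 - alpha/beta), which makes the self-excitation visible compared to a Poisson flow of intensity mu.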

- Calibration of stochastic and local volatility models.
Volatility is a key concept in modern mathematical finance and
an indicator of market stability.
Risk management and the associated instruments depend strongly on
volatility, and volatility modeling is a crucial issue in the finance industry. Of particular importance is the modeling of asset dependence.

By Gyongy's theorem, a local and stochastic volatility model is calibrated to the market prices of all call options with positive maturities and strikes if its local volatility function is equal to the ratio of the Dupire local volatility function over the root conditional mean square of the stochastic volatility factor given the spot value. This leads to an SDE which is nonlinear in the sense of McKean. Particle methods based on a kernel approximation of the conditional expectation, as presented by Guyon and Henry-Labordère 79, provide an efficient calibration procedure, even if some calibration errors may appear when the range of the stochastic volatility factor is very large. But so far, no existence result is available for this SDE nonlinear in the sense of McKean. In the particular case when the interest rate is zero and the local volatility function is equal to the inverse of the root conditional mean square, given the spot value, of the stochastic volatility factor multiplied by this value, the solution to the SDE is a fake Brownian motion. When the stochastic volatility factor is a constant (over time) random variable taking finitely many values and the range of its square is not too large, B. Jourdain and A. Zhou proved existence for the associated Fokker-Planck equation 22. Thanks to results obtained by Figalli in 73, they deduced existence of a new class of fake Brownian motions. They extended these results to the special case of the LSV model called Regime Switching Local Volatility, where the stochastic volatility factor is a jump process taking finitely many values and with jump intensities depending on the spot level.
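
The core numerical step of the particle calibration procedure is a kernel estimate of a conditional expectation given the spot. A minimal Nadaraya-Watson sketch (toy data, Gaussian kernel, hand-picked bandwidth — all illustrative choices, not the cited implementation):

```python
import numpy as np

def nadaraya_watson(s_particles, f_particles, s_eval, bandwidth):
    """Kernel estimate of E[f | S = s] from a cloud of particles, as used
    in particle calibration of local stochastic volatility models
    (Gaussian kernel; the bandwidth is a tuning parameter)."""
    d = (s_eval[:, None] - s_particles[None, :]) / bandwidth
    w = np.exp(-0.5 * d * d)
    return (w * f_particles[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
s = rng.normal(size=20000)                       # particle spot values
f = s**2 + rng.normal(scale=0.1, size=s.size)    # here E[f | S = s] = s^2
grid = np.array([-1.0, 0.0, 1.0])
print(nadaraya_watson(s, f, grid, bandwidth=0.1))  # close to [1, 0, 1]
```

In the calibration loop, such an estimator is evaluated on the particles themselves at each time step to obtain the leverage function.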

- Interest rate modeling.
Affine term structure models have been popularized by Dai and Singleton 63 and by Duffie, Filipovic and Schachermayer 64. They consider vector affine diffusions (the coordinates are usually called factors) and assume that the short interest rate is a linear combination of these factors. A model of this kind is the Linear Gaussian Model (LGM), which takes a vector of Ornstein-Uhlenbeck diffusions for the factors, see El Karoui and Lacoste 72. A. Alfonsi et al. 39 have proposed an extension of this model in which the instantaneous covariation between the factors is given by a Wishart process. Doing so, the model keeps its affine structure and tractability while generating smiles for option prices. A price expansion around the LGM is obtained for caplet and swaption prices.
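
In the LGM the factors are Ornstein-Uhlenbeck processes, which can be sampled exactly on any time grid through their Gaussian transition law. A one-factor sketch with illustrative parameters:

```python
import numpy as np

def simulate_ou(x0, kappa, theta, sigma, dt, n_steps, n_paths, rng):
    """Exact simulation of dX_t = kappa (theta - X_t) dt + sigma dW_t,
    the building block of the Linear Gaussian Model, using the Gaussian
    transition law (no discretization bias)."""
    decay = np.exp(-kappa * dt)
    std = sigma * np.sqrt((1.0 - decay**2) / (2.0 * kappa))
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        x = theta + (x - theta) * decay + std * rng.normal(size=n_paths)
    return x

rng = np.random.default_rng(0)
xT = simulate_ou(0.02, 1.5, 0.03, 0.01, 0.05, 200, 100000, rng)
# After T = 10, the factor is close to its stationary law
# N(theta, sigma^2 / (2 kappa)).
print(xT.mean(), xT.std())
```

The short rate is then a linear combination of such factors, and bond prices follow from the affine structure.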

- Numerical Methods for Martingale Optimal Transport problems.

The Martingale Optimal Transport (MOT) problem introduced in 61 has received much attention in finance recently since it gives model-free hedges and bounds on the prices of exotic options. The market prices of liquid call and put options give the marginal distributions of the underlying asset at each traded maturity. Under the simplifying assumption that the risk-free rate is zero, these probability measures are in increasing convex order, since by Strassen's theorem this property is equivalent to the existence of a martingale measure with the right marginal distributions. For an exotic payoff function of the values of the underlying on the time grid given by these maturities, the model-free upper bound (resp. lower bound) for the price consistent with these marginal distributions is given by the following martingale optimal transport problem: maximize (resp. minimize) the integral of the payoff with respect to the martingale measure over all martingale measures with the right marginal distributions. Super-hedging (resp. sub-hedging) strategies are obtained by solving the dual problem.
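
With finitely supported marginals, the MOT bounds reduce to a linear program over couplings with prescribed marginals plus the martingale constraint. A small sketch (toy two-period marginals in convex order; scipy is assumed available). Note that for the quadratic payoff the two bounds collapse, since E[(Y-X)^2] = E[Y^2] - E[X^2] under every martingale coupling:

```python
import numpy as np
from scipy.optimize import linprog

# Toy marginals in convex order: X on {-1, 1}, Y on {-3, -1, 1, 3}.
x = np.array([-1.0, 1.0]);            mu = np.array([0.5, 0.5])
y = np.array([-3.0, -1.0, 1.0, 3.0]); nu = np.array([1, 3, 3, 1]) / 8.0

def mot_bound(payoff, maximize):
    """Model-free price bound: optimize E[payoff(X, Y)] over couplings
    pi(x_i, y_j) >= 0 with marginals mu, nu and the martingale
    constraint sum_j pi(x_i, y_j) (y_j - x_i) = 0 for every i."""
    nx, ny = len(x), len(y)
    c = np.array([payoff(xi, yj) for xi in x for yj in y])
    A, b = [], []
    for i in range(nx):                        # row marginals
        row = np.zeros(nx * ny); row[i * ny:(i + 1) * ny] = 1
        A.append(row); b.append(mu[i])
    for j in range(ny):                        # column marginals
        col = np.zeros(nx * ny); col[j::ny] = 1
        A.append(col); b.append(nu[j])
    for i in range(nx):                        # martingale property
        row = np.zeros(nx * ny); row[i * ny:(i + 1) * ny] = y - x[i]
        A.append(row); b.append(0.0)
    sign = -1.0 if maximize else 1.0
    res = linprog(sign * c, A_eq=np.array(A), b_eq=np.array(b),
                  bounds=(0, None))
    return sign * res.fun

lo = mot_bound(lambda a, b_: abs(b_ - a), maximize=False)
hi = mot_bound(lambda a, b_: abs(b_ - a), maximize=True)
print(lo, hi)   # model-free price interval for the payoff |Y - X|
# Quadratic payoff: both bounds equal E[Y^2] - E[X^2] = 3 - 1 = 2.
print(mot_bound(lambda a, b_: (b_ - a)**2, maximize=True))
```

Entropic regularization or dedicated solvers scale this idea to realistic grids, but the LP already exhibits the structure of the primal problem.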
With J. Corbetta, A. Alfonsi and B. Jourdain 13 have studied sampling methods preserving the convex order for two probability measures. Their method is the first generic approach to tackle the martingale optimal transport problem numerically and can also be applied to several marginals.

- Robust option pricing in financial markets with imperfections.

A. Sulem, M.C. Quenez and R. Dumitrescu have studied robust pricing in an imperfect financial market
with default.
The market imperfections are taken into account via the nonlinearity of the wealth dynamics.
In this setting, the pricing system is expressed as a nonlinear g-expectation.

In a Markovian framework, the results of the paper 8 on combined optimal stopping/stochastic control with nonlinear expectation apply to this pricing problem.

The dissipation of general convex entropies for continuous-time Markov processes can be described in terms of backward martingales with respect to the tail filtration. The relative entropy is the expected value of a backward submartingale. In the case of (not necessarily reversible) Markov diffusion processes, J. Fontbona and B. Jourdain 75 used Girsanov theory to make the Doob-Meyer decomposition of this submartingale explicit. They deduced a stochastic analogue of the well-known entropy dissipation formula, which is valid for general convex entropies, including the total variation distance. Under additional regularity assumptions, and using Itô calculus and ideas of Arnold, Carlen and Ju 51, they obtained a new Bakry-Émery criterion which ensures exponential convergence of the entropy to 0. This criterion is non-intrinsic since it depends on the square root of the diffusion matrix and cannot be written in terms of the diffusion matrix itself. They provided examples where the classical Bakry-Émery criterion fails but their non-intrinsic criterion applies, without modifying the law of the diffusion process.

With J. Corbetta, A. Alfonsi and B. Jourdain have studied the time derivative of the Wasserstein distance between the marginals of two Markov processes 44. The Kantorovich duality leads to a natural candidate for this derivative. Up to the sign, it is the sum of the integrals with respect to each of the two marginals of the corresponding generator applied to the corresponding Kantorovich potential. For pure jump processes with bounded intensity of jumps, J. Corbetta, A. Alfonsi and B. Jourdain 43 proved that the evolution of the Wasserstein distance is actually given by this candidate. In dimension one, they showed that this remains true for Piecewise Deterministic Markov Processes. They applied the formula to estimate the exponential decrease rate of the Wasserstein distance between the marginals of two birth and death processes with the same generator in terms of the Wasserstein curvature.
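
In dimension one, the Wasserstein distance between two samples of equal size is computed by coupling order statistics, which makes such decay estimates easy to observe numerically. A sketch (two Ornstein-Uhlenbeck processes with the same generator started from different points, synchronously coupled, illustrative parameters):

```python
import numpy as np

def w1_empirical(a, b):
    """W1 distance between two empirical measures with the same number of
    atoms: in dimension one, the optimal coupling sorts both samples."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

# Two copies of the same ergodic dynamics (Ornstein-Uhlenbeck, exact
# Gaussian transitions) started from different points, same noise.
rng = np.random.default_rng(3)
n, kappa, sigma, dt = 10000, 1.0, 0.5, 0.1
decay = np.exp(-kappa * dt)
std = sigma * np.sqrt((1.0 - decay**2) / (2.0 * kappa))
xa = np.full(n, 2.0)
xb = np.full(n, -1.0)
dists = []
for _ in range(30):
    g = rng.normal(size=n)            # synchronous coupling
    xa = xa * decay + std * g
    xb = xb * decay + std * g
    dists.append(w1_empirical(xa, xb))
print(dists[0], dists[-1])  # the distance decays like 3 * exp(-kappa * t)
```

Under the synchronous coupling the difference of the two processes is deterministic, so the empirical distance reproduces the exponential decay rate exactly.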

- Mean-field limits of systems of interacting particles.
In 82, B. Jourdain and his former PhD student J. Reygner have studied a mean-field version of rank-based models of equity markets such as the Atlas model introduced by Fernholz in the framework of Stochastic Portfolio Theory. They obtained an asymptotic description of the market when the number of companies grows to infinity. Then, they discussed the long-term capital distribution, recovering the Pareto-like shape of capital distribution curves usually derived from empirical studies, and providing a new description of the phase transition phenomenon observed by Chatterjee and Pal.
They have also studied multitype sticky particle systems which can be obtained as vanishing noise limits of multitype rank-based diffusions (see 84).
Under a uniform strict hyperbolicity assumption on the characteristic fields, they constructed a multitype version of the sticky particle dynamics.
In 83, they obtain the optimal rate of convergence, as the number of particles grows to infinity, of approximate solutions to the diagonal hyperbolic system based on multitype sticky particles and on easy-to-compute time discretizations of these dynamics.

In 76, N. Fournier and B. Jourdain are interested in the two-dimensional Keller-Segel partial differential equation. This equation is a model for chemotaxis (and for Newtonian gravitational interaction).

- Mean field control and Stochastic Differential Games (SDGs).
To handle situations where controls are chosen by several agents who interact in various ways, one may use the theory of Stochastic Differential Games (SDGs). Forward–Backward SDG and stochastic control under Model Uncertainty are studied
in 91 by A. Sulem and B. Øksendal.
Also of interest are large population games, where each player interacts with the average effect of the others and individually has negligible effect on the overall population. Such an interaction pattern may be modeled by mean field coupling and this leads to the study of mean-field stochastic control and related
SDGs.
A. Sulem, Y. Hu and B. Øksendal have studied singular mean-field control problems and singular mean-field two-player stochastic differential games 80. Both sufficient and necessary conditions for the optimal controls and for the Nash equilibrium are obtained. Under some assumptions, the optimality conditions for singular mean-field control reduce to a reflected Skorokhod problem.
Applications to
optimal irreversible investments under uncertainty have been investigated.
Predictive mean-field equations as a model for prices influenced by beliefs about the future are studied
in
92.

M.C. Quenez and A. Sulem have studied optimal stopping with nonlinear expectation.

In 8, M.C. Quenez, A. Sulem and R. Dumitrescu study a combined optimal control/stopping problem under nonlinear expectation. They establish a weak dynamic programming principle (DPP), from which they derive that the upper and lower semicontinuous envelopes of the value function are respectively viscosity sub- and supersolutions of an associated nonlinear Hamilton-Jacobi-Bellman variational inequality.

A generalized Dynkin game problem with nonlinear expectation has also been studied.

A generalized mixed game problem in which the players have two actions, continuous control and stopping, is studied in a Markovian framework in 68. In this work, dynamic programming principles (DPP) are established: a strong DPP is proved in the case of a regular obstacle and a weak one in the irregular case. Using these DPPs, links with parabolic partial integro-differential Hamilton-Jacobi-Bellman variational inequalities with two obstacles are obtained.

With B. Øksendal and C. Fontana, A. Sulem has contributed to the issues of robust utility maximization 90, 92, and of the relations between information and performance 74.

Vlad Bally has extended the stochastic differential calculus built by P. Malliavin, which allows one to obtain integration by parts formulas and associated regularity results for probability laws. In collaboration with L. Caramellino (Tor Vergata University, Roma), V. Bally has developed an abstract version of Malliavin calculus based on a splitting method (see 53). It concerns random variables whose law is locally lower bounded by the Lebesgue measure (the so-called Doeblin condition). Such random variables may be represented as the sum of a "smooth" random variable plus a remainder. Based on this smooth part, he develops a stochastic calculus inspired by Malliavin calculus 6. An interesting application of such a calculus is to prove convergence for irregular test functions (in total variation distance and, more generally, in distribution distance) in some more or less classical frameworks such as the Central Limit Theorem, local versions of the CLT and, moreover, general stochastic polynomials 55. An exciting application concerns the number of roots of trigonometric polynomials with random coefficients 56. Using the Kac-Rice lemma in this framework, one comes back to a multidimensional CLT and employs Edgeworth expansions of order three for irregular test functions in order to study the mean and the variance of the number of roots. Another application concerns U-statistics associated with polynomial functions. The techniques of generalized Malliavin calculus developed in 53 are applied to the approximation of Markov processes (see 60 and 59). On the other hand, using the classical Malliavin calculus, V. Bally, in collaboration with L. Caramellino and P. Pigato, studied some subtle phenomena related to diffusion processes, such as short-time behavior and estimates of tube probabilities (see 54, 57).

Our project team is deeply involved in numerical probability, aiming at pushing numerical methods towards effective implementation. This numerical orientation is supported by a mathematical expertise which permits a rigorous analysis of the algorithms and provides theoretical support for the study of convergence rates and the introduction of new tools for the improvement of numerical methods. This activity in the MathRisk team is strongly related to the development of the Premia software.

With A. Kohatsu-Higa, A. Alfonsi and B. Jourdain 4 have proved using optimal transport tools that the Wasserstein distance between the time marginals of an elliptic SDE and its Euler discretization with

With their former PhD student, A. Al Gerbi, E. Clément and B. Jourdain 1 have proved strong convergence with order

A. Kebaier and B. Jourdain are interested in deriving non-asymptotic error bounds for the multilevel Monte Carlo method. As a first step, they dealt in 81 with the explicit Euler discretization of stochastic differential equations with a constant diffusion coefficient. They obtained Gaussian-type concentration. To do so, they used the Clark-Ocone representation formula and derived bounds for the moment generating functions of the squared difference between a crude Euler scheme and a finer one and of the squared difference of their Malliavin derivatives. The estimation of such differences is much more complicated than the one of a single Euler scheme contribution and explains why they suppose the diffusion coefficient to be constant. This assumption ensures boundedness of the Malliavin derivatives of both the SDE and its Euler scheme.
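
A minimal multilevel Monte Carlo sketch (geometric Brownian motion, call payoff, illustrative parameters — not the constant-diffusion setting of 81): each level couples a fine and a coarse Euler scheme through shared Brownian increments, and the corrections telescope to the finest-level estimator.

```python
import numpy as np

def euler_pair(n_coarse, n_paths, rng, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Coupled Euler schemes for dX = mu X dt + sigma X dW with n_coarse
    and 2 * n_coarse steps, driven by the same Brownian increments."""
    dt = T / (2 * n_coarse)
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for _ in range(n_coarse):
        dw1 = rng.normal(scale=np.sqrt(dt), size=n_paths)
        dw2 = rng.normal(scale=np.sqrt(dt), size=n_paths)
        xf += mu * xf * dt + sigma * xf * dw1            # two fine steps
        xf += mu * xf * dt + sigma * xf * dw2
        xc += mu * xc * (2 * dt) + sigma * xc * (dw1 + dw2)  # one coarse
    return xf, xc

rng = np.random.default_rng(0)
n_paths = 100000
payoff = lambda s: np.maximum(s - 1.0, 0.0)      # call with strike 1

# Level 0: one-step Euler scheme alone (x0 + mu x0 T + sigma x0 W_T).
est = np.mean(payoff(1.0 + 0.05 + 0.2 * rng.normal(size=n_paths)))
# Levels l >= 1: corrections E[f(fine) - f(coarse)]; the sum telescopes
# to the expectation under the finest (32-step) scheme.
for n in [1, 2, 4, 8, 16]:
    xf, xc = euler_pair(n, n_paths, rng)
    est += np.mean(payoff(xf) - payoff(xc))
print(est)   # close to the lognormal value E[(X_T - 1)^+] ≈ 0.11
```

Because the coupled differences have small variance, the coarser levels absorb most of the statistical error at a fraction of the cost of fine-grid sampling.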

In 52, R. Assaraf, B. Jourdain, T. Lelièvre and R. Roux considered the solution to a stochastic differential equation with constant diffusion coefficient and
with a drift function which depends smoothly on some real parameter

R. Dumitrescu and C. Labart have studied the discrete time approximation scheme for the solution of a doubly reflected Backward Stochastic Differential Equation with jumps, driven by a Brownian motion and an independent compensated Poisson process 67, 66.

V. Bally and A. Kohatsu-Higa have recently proposed an unbiased estimator based on the parametrix method to compute expectations of functions of a given SDE (58). This method is very general, and A. Alfonsi, A. Kohatsu-Higa and M. Hayashi 45 have applied it to the case of one-dimensional reflected diffusions. In this case, the estimator can be obtained explicitly by using the scheme of Lépingle 85 and is quite simple to implement. It is compared to other simulation methods for reflected SDEs.

A. Alfonsi, A. Kebaier and C. Rey 46 have computed the Maximum Likelihood Estimator for the Wishart process and studied its convergence in the ergodic case and in some non-ergodic cases. In the ergodic case, which is the most relevant for applications, they obtain the standard square-root convergence. In the non-ergodic case, the analysis relies on refined results for the Laplace transform of Wishart processes, which are of independent interest.

In joint work with A. Bouselmi, D. Lamberton studied the asymptotic behavior of the exercise boundary near maturity
for American put options in exponential Lévy models. In 7, they deal with jump-diffusion models, and establish that, in some cases, the behavior differs from the classical
Black and Scholes setting.
D. Lamberton has also worked on the binomial approximation of the American put. The conjectured rate of convergence
is

The domains of application are quantitative finance and insurance, with emphasis on risk modeling and control. In particular, MathRisk focuses on dependence modeling, systemic risk, market microstructure modeling and risk measures.

The goal of the project is to develop a model that captures the dynamics of a complex financial network and to provide methods for the control of default contagion, both by a regulator and by the institutions themselves. This introduces a new class of problems that are very challenging mathematically, as it relies on mean field games and random graph theory. Agnès Sulem, Andreea Minca (Cornell University) and Hamed Amini (J. Mack Robinson College of Business, Georgia State University) have studied a dynamic contagion risk model with recovery features. They introduce threshold growth in the classical threshold contagion model, in which nodes have downward jumps when a neighboring node fails. Choosing the configuration model as the underlying graph, they prove fluid limits for the baseline model, as well as extensions to the directed case, state-dependent inter-arrival times and the case of growth driven by upward jumps. They then allow nodes to choose their connectivity by trading off link benefits and contagion risk. They define a rational equilibrium concept in which nodes choose their connectivity according to an expected failure probability of any given link, and then impose the condition that the expected failure probability coincides with the actual failure probability under the optimal connectivity. Existence of an asymptotic equilibrium is shown, as well as convergence of the sequence of equilibria on the finite networks. In particular, these results show that systems with higher overall growth may have a higher failure probability in equilibrium 49.

Zhangyong Cao (DIM Math-Innov doctoral allocation) has started a PhD thesis under the direction of A. Sulem on the dynamics and stability of complex financial networks (after an internship in Spring 2020). The first objective is to develop a ruin theory in random networks, in particular on the configuration model as the underlying graph.
Some limit results for default cascades in sparse heterogeneous financial networks have already been obtained.

Agnès Sulem, Rui Chen, Andreea Minca and Roxana Dumitrescu have studied mean-field BSDEs with a generalized mean-field operator that can capture system influence with higher-order interactions. Convergence of finite approximations to the mean-field BSDE has been obtained. In the finite system, the mean-field term can incorporate, for example, an inhomogeneous graph model in which the intensity of bilateral interactions depends on the states of the end nodes by means of a kernel function. This opens the path towards using dynamic risk measures induced by mean-field BSDEs as a complementary approach to systemic risk measurement.

Agnès Sulem has studied with Miryana Grigorova (University of Leeds) and
Marie-Claire Quenez (Université Paris Denis Diderot) superhedging prices and the associated superhedging strategies for both European and American options (see 20 and 78) in a non-linear incomplete market model with default.
The underlying market model consists of a risk-free asset and a risky asset driven by a Brownian motion and a compensated default martingale. The portfolio processes follow non-linear dynamics with a non-linear driver.
A. Alfonsi has obtained a grant from the AXA Foundation on a Joint Research Initiative with a team of AXA France working on strategic asset allocation. This team makes recommendations on the investment across asset classes such as equity, real estate or bonds. To do so, each side of the balance sheet (assets and liabilities) is modeled so as to take into account its own dynamics as well as the interactions between the two. Since insurance products are long-term contracts, projections of the company's margins have to be made over long maturities. When running simulations to assess investment policies, it is necessary to take into account the SCR, which is the amount of cash that has to be set aside to manage the portfolio. Typically, the computation of future values of the SCR involves expectations under conditional laws, which is computationally demanding.

A. Alfonsi and his PhD student A. Cherchali have constructed a model of the ALM management of insurance companies that takes into account the regulatory constraints on life-insurance 12. They have developed Multilevel Monte-Carlo methods to approximate the SCR (Solvency Capital Requirement) at a future date and more generally, to calculate the worst of

Antonino Zanette, with Ludovic Goudenège (Ecole Centrale de Paris) and Andrea Molent (University of Udine), studies the valuation of GMWB variable annuities when both stochastic volatility and stochastic interest rates are considered, in the Heston Hull-White model 18.

Neural network regression for Bermudan option pricing. The pricing of Bermudan options amounts to solving a dynamic programming principle, in which the main difficulty, especially in high dimension, comes from the conditional expectation involved in the computation of the continuation value. These conditional expectations are classically computed by regression techniques on a finite dimensional vector space. In 38,
Bernard Lapeyre and Jérôme Lelong
study neural networks approximations of conditional expectations.
They prove the convergence of the well-known Longstaff and Schwartz algorithm when the standard least-square regression is replaced by a neural network approximation.
They illustrate the numerical efficiency of neural networks as an alternative to standard regression methods for approximating conditional expectations on several numerical examples.
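
The backbone of the algorithm can be sketched with the classical polynomial least-squares regression that the paper replaces by a neural network (Bermudan put on geometric Brownian motion; the parameters below are a standard textbook example, not taken from 38):

```python
import numpy as np

def longstaff_schwartz(n_paths=100000, n_steps=50, s0=36.0, strike=40.0,
                       r=0.06, sigma=0.2, T=1.0, seed=0):
    """Least-squares Monte Carlo for a Bermudan put: regress the
    continuation value backwards in time on in-the-money paths. The
    regression step is the one replaced by a neural network in 38."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Risk-neutral GBM paths on the exercise dates dt, 2*dt, ..., T.
    g = rng.normal(size=(n_steps, n_paths))
    s = np.exp(np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                      + sigma * np.sqrt(dt) * g, axis=0))
    cash = np.maximum(strike - s[-1], 0.0)       # exercise at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                  # discount one period
        itm = strike - s[t] > 0                  # regress on ITM paths only
        basis = np.vander(s[t, itm], 4)          # cubic polynomial basis
        coef = np.linalg.lstsq(basis, cash[itm], rcond=None)[0]
        continuation = basis @ coef
        exercise = strike - s[t, itm] > continuation
        idx = np.flatnonzero(itm)[exercise]
        cash[idx] = strike - s[t, idx]
    return np.exp(-r * dt) * cash.mean()

price = longstaff_schwartz()
print(price)
```

Replacing the `lstsq` call by a neural network fit is exactly the modification whose convergence is analysed in 38; the rest of the backward induction is unchanged.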

Machine learning for pricing American options.
In 19, L. Goudenège, A. Molent and A. Zanette
develop techniques, called GPR Tree and GPR Exact Integration,
both based on Machine Learning, to compute prices of American basket options
in high dimension. Both Markovian and non-Markovian models are
studied, in particular the rough Bergomi model, which provides stochastic volatility with memory.

Big data techniques for portfolio optimization.
With his PhD student Hachem Madmoun, Bernard Lapeyre
analyses asset price trajectories using Fourier transforms
and machine learning techniques such as variational autoencoders (VAE)
and hidden Markov models (HMM)
for portfolio management.

In 36, Benjamin Jourdain and Alvin Tse propose a generalised version of the central limit theorem for nonlinear functionals of the empirical measure of i.i.d. random variables, provided that the functional satisfies some regularity assumptions for the associated linear functional derivatives of various orders. This generalisation can be applied to Monte-Carlo methods, even when there is a nonlinear dependence on the measure component. As a consequence of this result, they also analyse the convergence of the fluctuations between the empirical measure of the particles in an interacting particle system and their mean-field limiting measure (as the number of particles goes to infinity) when the dependence on the measure is nonlinear. In 15, A. Alfonsi and B. Jourdain study the structure of optimal couplings for the squared quadratic Wasserstein distance between probability measures with finite second-order moments.

In 28, Oumaima Bencheikh and Benjamin Jourdain study the approximation in Wasserstein distance with index

With V. Ehrlacher and R. Coyaud, Aurélien Alfonsi is working on numerical methods to approximate the optimal transport problem in the symmetric multimarginal case 14.

It is known since 21 that two one-dimensional probability measures in the convex order admit a martingale coupling with respect to which the integral of

In 33, B. Jourdain and W. Margheriti are interested in martingale rearrangement couplings. As introduced by Wiesel in order to prove the stability of Martingale Optimal Transport problems, these are projections, in the adapted Wasserstein distance, of couplings between two probability measures on the real line in the convex order onto the set of martingale couplings between these two marginals. Because the set of couplings with given marginals is not relatively compact for the adapted Wasserstein topology, the existence of such a projection is not clear at all. Under a barycentre dispersion assumption on the original coupling, which is in particular satisfied by the Hoeffding-Fréchet or comonotone coupling, Wiesel gives a clear algorithmic construction of a martingale rearrangement when the marginals are finitely supported, and then removes the finite-support assumption by a rather involved limiting procedure that overcomes the lack of relative compactness. B. Jourdain and W. Margheriti give a direct general construction of a martingale rearrangement coupling under the barycentre dispersion assumption. This martingale rearrangement is obtained from the original coupling by an approach similar to the construction, given in 21, of the inverse transform martingale coupling, a member of a family of martingale couplings close to the Hoeffding-Fréchet coupling, but for a slightly different injection into the set of extended couplings introduced by Beiglböck and Juillet, which involves the uniform distribution on [0,1].

In 34, Benjamin Jourdain and Gilles Pagès establish for dual quantization the counterpart of Kieffer's uniqueness result for compactly supported one-dimensional probability distributions having a log-concave density (also called strongly unimodal): for such distributions, Lr-optimal dual quantizers are unique at each level N, the optimal grid being the unique critical point of the quantization error. An example of a non-strongly unimodal distribution for which uniqueness of critical points fails is exhibited. In the quadratic case r=2, they propose an algorithm which computes the unique optimal dual quantizer with a geometric rate of convergence in the log-concave case; it provides a counterpart of Lloyd's method I in a Voronoi framework. Finally, semi-closed forms of Lr-optimal dual quantizers are established for power distributions on compact intervals and truncated exponential distributions.
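For comparison, the classical (primal) Lloyd's method I in dimension 1, applied to a sample rather than to the exact distribution, can be sketched as follows; this is the well-known Voronoi fixed-point iteration, not the authors' dual algorithm:

```python
import numpy as np

def lloyd_1d(sample, n_points, n_iter=100):
    """Lloyd's method I for quadratic primal quantization in dimension 1,
    on a sample approximation of the target distribution: each grid point
    is repeatedly moved to the mean of its Voronoi cell."""
    # Initialize the grid at evenly spread sample quantiles.
    grid = np.quantile(sample, (np.arange(n_points) + 0.5) / n_points)
    for _ in range(n_iter):
        # Voronoi cell boundaries are midpoints of consecutive grid points.
        edges = (grid[:-1] + grid[1:]) / 2
        cells = np.searchsorted(edges, sample)
        grid = np.array([sample[cells == k].mean() for k in range(n_points)])
    return grid

# Illustrative run on a standard normal sample (log-concave density,
# so the optimal grid is unique and symmetric around 0).
rng = np.random.default_rng(0)
sample = rng.standard_normal(100_000)
grid = lloyd_1d(sample, 5)
```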

Quantization provides a very natural way to preserve the convex order when approximating two ordered probability measures by two finitely supported ones. Indeed, when the original probability measure that dominates in the convex order is compactly supported, it is smaller than any of its dual quantizations, while the dominated original measure is greater than any of its stationary (and therefore any of its optimal) quadratic primal quantizations. Moreover, the quantization errors then correspond to martingale couplings between each original probability measure and its quantization. This enables B. Jourdain and G. Pagès to prove
in 35 that any martingale coupling between the original probability measures can be approximated by a martingale coupling between their quantizations, in Wasserstein distance with a rate given by the quantization errors, but also in the much finer adapted Wasserstein distance. As a consequence, while the stability of (Weak) Martingale Optimal Transport problems with respect to the marginal distributions has only been established in dimension 1 so far, their value function computed numerically for the quantized marginals converges in any dimension to the value for the original probability measures as the numbers of quantization points go to infinity.

In 29,
Oumaima Bencheikh and Benjamin Jourdain are interested
in the Euler-Maruyama discretization of a stochastic differential equation in dimension
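For reference, the Euler-Maruyama scheme in its generic one-dimensional form; the Ornstein-Uhlenbeck coefficients in the usage example are illustrative placeholders, not the equation studied in the cited work:

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama discretization of dX_t = b(X_t) dt + sigma(X_t) dW_t,
    returning the simulated values of X_T over n_paths independent paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + b(x) * dt + sigma(x) * dw
    return x

# Illustrative: Ornstein-Uhlenbeck process dX = -X dt + dW started at 1.
xT = euler_maruyama(lambda x: -x, lambda x: np.ones_like(x), 1.0,
                    T=1.0, n_steps=200, n_paths=50_000)
```

For this process the exact law of X_T is Gaussian with mean e^{-T} and variance (1 - e^{-2T})/2, which the simulated sample reproduces up to discretization and statistical error.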

A. Alfonsi and A. Kebaier are working on the approximation of some processes with rough paths.

B. Jourdain and his PhD student E. Kahn study stochastic differential equations coming from the eigenvalues of Wishart processes 31, 37.

In collaboration with L. Caramellino (University Tor Vergata) and G. Poly (University of Rennes), V. Bally has developed a Malliavin-type calculus for a general class of random variables which are not assumed to be Gaussian (as is the case in the standard Malliavin calculus). This is an alternative to the

In collaboration with L. Caramellino and A. Kohatsu-Higa, V. Bally studies the regularity of the solutions of jump-type equations. A first result is obtained in 26.

Damien Lamberton has revisited the results of Bensoussan and Lions on variational inequalities, using semigroup theory.
He has contributed to a winter school on "Theory and practice of optimal stopping and free boundary problems"
(cf. https://

CIFRE agreement Milliman company/Ecole des Ponts (http://

PhD thesis of Sophian Mehalla (started November 2017) on "Interest rate risk modeling for insurance companies", Supervisor: Bernard Lapeyre.

CIFRE agreement Brahham gardens/Ecole des Ponts

PhD thesis of Hachem Madmoun: "Gestion de portefeuilles utilisant des techniques de (big) data"
https://

Collaboration with IRT Systemx

PhD grant of Adrien Touboul (started November 2017) on "Uncertainty computation in a graph of physical simulations", Supervisors: Bernard Lapeyre and Julien Reygner.

Chair X-ENPC-SU-Société Générale "Financial Risks" of the Risk Foundation: A. Alfonsi, B. Jourdain, B. Lapeyre

Labex Bezout

Pôle Finance Innovation

A. Alfonsi:

Co-organizer of the working group seminar of MathRisk “Méthodes stochastiques et finance”.
http://

V. Bally

Organizer of the seminar of the LAMA laboratory, Université Gustave Eiffel.

A. Sulem

Co-organizer of the seminar INRIA-MathRisk/Université Paris Diderot (LPSM)
“Numerical probability and mathematical finance”.
https://

B. Jourdain

Associate editor of

D. Lamberton

Associate editor of

A. Sulem

Associate editor of

Master :

- Aurélien Alfonsi

- Vlad Bally

- Benjamin Jourdain

- B. Jourdain, B. Lapeyre

- J.-F. Delmas, B.Jourdain

- D. Lamberton

- B. Lapeyre

- A. Sulem