MathRisk is a joint Inria project-team with ENPC (CERMICS Laboratory) and the University Paris Est Marne-la-Vallée (UPEMLV, LAMA Laboratory), located in Paris and Marne-la-Vallée.


The starting point of the development of modern finance theory is traditionally associated with the publication of the famous 1973 paper of Black and Scholes. Since then, in spite of sporadic crises, generally well overcome, financial markets have grown exponentially. More and more complex exotic derivative products have appeared, first on equities, then on interest rates, and more recently on credit markets. The period between the end of the eighties and the crisis of 2008 can be described as the “golden age of financial mathematics”: finance became a quantitative industry, and financial mathematics programs flourished in top universities, fostering seminal interplay between the worlds of finance and applied mathematics. During its 12-year existence, the Mathfi project-team extensively contributed to the development of modeling and computational methods for the pricing and hedging of increasingly complex financial products.

Since the crisis of 2008, there has been a critical reorientation of research priorities in quantitative finance, with emphasis on risk. The 2008 “subprime” crisis called into question the very existence of some derivative products such as credit default swaps (CDS) and collateralized debt obligations (CDO), which were accused of being responsible for the crisis. The nature of this crisis is profoundly different from that of previous ones. It has negatively impacted the activity on exotic products in general, even on equity derivative markets, and the interest in the modeling issues for these products. The perfect replication paradigm, at the origin of the success of the Black-Scholes model, became unsound, in particular through the effects of the lack of liquidity. The interest of quantitative finance analysts and mathematicians then shifted to more realistic models taking into account the multidimensional features and the incompleteness of the markets, but thereby moving away from the “lost paradi(gm)” of perfect replication. These models are much more demanding numerically, and require the development of hedging risk measures and of decision procedures taking into account illiquidity and various types of default.

Moreover, this crisis, and in particular the Lehman Brothers bankruptcy and its consequences, has underlined the systemic risk due to the strong interdependencies between financial institutions. The failure of one of them can cause a cascade of failures, affecting the global stability of the system. A better understanding of these interlinkage phenomena has become crucial.

At the same time, independently of the subprime crisis, another phenomenon has appeared: deregulation of the organization of stock markets themselves, encouraged by the Markets in Financial Instruments Directive (MiFID), in effect since November 1, 2007. Together with the progress of networks and the widespread availability of high computing power, this has induced arbitrage opportunities on the markets through very short term trading, often performed automatically. Using these high frequency trading possibilities, some speculative operators benefit from the large volatility of the markets. The flash crash of May 6, 2010, for example, exhibited some perverse effects of these automatic speculative trading strategies. These phenomena are not well understood, and the theme of high frequency trading needs to be explored.

To summarize, financial mathematics is facing the following new evolutions:

complete market models have become unsatisfactory as a realistic picture of the market and are replaced by incomplete and multidimensional models, which lead to new modeling and numerical challenges;

quantitative measures of the risks coming from the markets, the hedging procedures, and the lack of liquidity are crucial for banks;

uncontrolled systemic risks may cause planetary economic disasters and require better understanding;

deregulation of stock markets and its consequences call for the study of high frequency trading.

The MathRisk project-team is designed to address these new issues, in particular dependence modeling, systemic risk, market microstructure modeling and risk measures. Research in modeling and numerical analysis remains active in this new context, motivated by these new issues.

The MathRisk project-team develops the software Premia, dedicated to the pricing and hedging of options and the calibration of financial models, in collaboration with a consortium of financial institutions.
https://

The MathRisk project is part of the
Université Paris-Est **“Labex” BÉZOUT**.

Volatility is a key concept in modern mathematical finance and an indicator of market stability.
Risk management and the associated instruments depend strongly on volatility, and volatility modeling has thus become a crucial issue in the finance industry. Of particular importance is the modeling of the *dependence* between assets.
The calibration of models for a single asset can now be well managed by banks, but the modeling of dependence is the bottleneck to efficiently aggregating such models.
A typical issue is how to go from the individual evolution of
each stock belonging to an index to the joint modeling of these stocks. In
this perspective, we want to model stochastic volatility in a *multidimensional* framework. To handle these questions mathematically, we have to deal with stochastic
differential equations that are defined on matrices in order to model either
the instantaneous covariance or the instantaneous correlation between the
assets. From a numerical point of view, such models are very demanding since
the main indices generally include more than thirty assets. It is therefore
necessary to develop efficient numerical methods for pricing options and
calibrating such models to market data.
As a first application, modeling the dependence between assets allows us to
better handle derivative products on a basket. It also gives a way to price
and hedge single-asset and basket products consistently. Besides, it can be a way to
capture how the market estimates the dependence between assets, which could give
some insight into how the market anticipates systemic risk.
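As a minimal illustration of joint modeling (our own toy example, not a MathRisk algorithm), the sketch below simulates two assets as correlated Black–Scholes diffusions with a constant correlation, the simplest precursor of the matrix-valued covariance models mentioned above, and prices a basket call by Monte Carlo. All names and parameter values are illustrative.

```python
import numpy as np

def simulate_basket(s0, sigma, rho, r, T, n_steps, n_paths, seed=0):
    """Simulate correlated geometric Brownian motions (Euler scheme on log-prices)."""
    rng = np.random.default_rng(seed)
    d = len(s0)
    # Cholesky factor of the constant instantaneous correlation matrix
    corr = np.full((d, d), rho)
    np.fill_diagonal(corr, 1.0)
    chol = np.linalg.cholesky(corr)
    dt = T / n_steps
    log_s = np.tile(np.log(s0), (n_paths, 1))
    sig = np.asarray(sigma)
    for _ in range(n_steps):
        z = rng.standard_normal((n_paths, d)) @ chol.T  # correlated Gaussian increments
        log_s += (r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z
    return np.exp(log_s)

# Monte Carlo price of an at-the-money call on the average of the two assets
s_T = simulate_basket(s0=[100.0, 100.0], sigma=[0.2, 0.3], rho=0.5,
                      r=0.02, T=1.0, n_steps=50, n_paths=100_000)
basket_call = np.exp(-0.02) * np.maximum(s_T.mean(axis=1) - 100.0, 0.0).mean()
```

Increasing the correlation increases the variance of the basket, hence the price of the basket call, which is one simple way the market-implied dependence shows up in basket option quotes.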

The financial crisis has caused increased interest in mathematical finance studies that take into account market incompleteness and liquidity risk. Loosely speaking, liquidity risk is the risk arising from the difficulty of selling (or buying) an asset. At the extreme, it may be impossible to sell an asset at all, as happened for “junk assets” during the subprime crisis. Fortunately, it is in general possible to sell assets, but possibly at some cost. Let us be more precise. Usually, assets are quoted on a market with a Limit Order Book (LOB) that registers all the waiting limit buy and sell orders for the asset. The bid (resp. ask) price is the most expensive waiting buy order (resp. the cheapest waiting sell order). A trader who wants to sell a single asset will sell it at the bid price. If instead he wants to sell a large quantity of assets, he will have to sell part of them at lower prices in order to match further waiting buy orders. This creates an extra cost and raises important issues. From a short-term perspective (a few minutes to a few days), it may be worthwhile to split the selling order and to look for optimal selling strategies. This requires modeling the market microstructure, i.e. how the market reacts on a short time-scale to execution orders. From a long-term perspective (typically one month or more), one has to understand how this cost modifies portfolio management strategies (especially delta-hedging or optimal investment strategies). At this time-scale, there is no need to model the market microstructure precisely, but one has to specify how the liquidity costs aggregate.
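The liquidity cost described above can be made concrete with a toy example (ours, with made-up book levels): a market sell order “walks down” the bid side of the book, so a large order is filled at progressively worse prices than the best bid.

```python
# Toy bid side of a limit order book: (price, available quantity), best bid first.
bids = [(99.8, 50), (99.6, 120), (99.3, 200), (98.9, 400)]

def sell_proceeds(bids, quantity):
    """Proceeds of a market sell order that consumes the bid queue level by level."""
    proceeds, remaining = 0.0, quantity
    for price, avail in bids:
        take = min(avail, remaining)
        proceeds += take * price
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("order exceeds book depth")
    return proceeds

best_bid = bids[0][0]
# A single share sells exactly at the bid: no liquidity cost.
cost_small = 1 * best_bid - sell_proceeds(bids, 1)
# A large order pays the difference between the best-bid valuation and the proceeds.
cost_large = 300 * best_bid - sell_proceeds(bids, 300)
```

Splitting the 300-share order over time, so that the book can refill between trades, is precisely the optimal execution problem discussed below.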

From a long-term perspective, illiquidity can be approached in various ways: transaction costs, delay in the execution of trading orders, trading constraints, or restrictions on the observation times. As far as derivative products are concerned, one has to understand how delta-hedging strategies must be modified; this has been considered for example by Cetin, Jarrow and Protter. We plan to contribute to these various aspects of liquidity risk modeling and the associated stochastic optimization problems. Let us mention here that the price impact generated by the trades of the investor is often neglected in a long-term perspective. This seems acceptable, since the investor has enough time to trade slowly and thus eliminate his market impact. By contrast, when the investor wants to make significant trades on a very short time horizon, it is crucial to take into account and to model how prices are modified by these trades. This question is addressed in the next paragraph on market microstructure.

The European directive MiFID has increased the competition between markets (NYSE-Euronext, Nasdaq, LSE and new competitors). As a consequence, the cost of posting buy or sell orders on markets has decreased, which has stimulated the growth of market makers. Market makers post bid and ask orders simultaneously on the same stock, and their profit comes from the bid-ask spread. Basically, their strategy is a “round trip” (i.e. their position is unchanged between the beginning and the end of the day) that generates a positive cash flow.

These new rules have also greatly stimulated research on
market microstructure modeling. From a
practitioner's point of view, the main issue is to solve the so-called
“optimal execution problem”: given a deadline and a (large) quantity of assets to sell, find the trading strategy that maximizes the revenue of the sale.

Solving the optimal execution problem is not only an interesting mathematical challenge. It is also a means to better understand market viability, high frequency arbitrage strategies and the consequences of the competition between markets. For example, when modeling the market microstructure, one would like to find conditions that allow or exclude round trips. Beyond this, even if round trips are excluded, it can happen that an optimal selling strategy involves large intermediate buy trades, which is unlikely in practice and may lead to market instability.

We are interested in finding synthetic market models in which we can describe and solve the optimal execution problem. A. Alfonsi and A. Schied (Mannheim University) have already proposed a simple Limit Order Book (LOB) model in which an explicit solution to the optimal execution problem can be found. We are now interested in more sophisticated models that take into account realistic features of the market, such as short memory or a stochastic LOB; this is a mid-term objective. In the longer term, one would like to connect these models to the behavior of the different agents, in order to understand the effect of the different quotation mechanisms (transaction costs for limit orders, tick size, etc.) on market stability.
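The flavor of these explicit solutions can be conveyed by a sketch (our simplified discrete-time setup, in the spirit of Obizhaeva–Wang-type block-shaped LOB models with exponential resilience, not the exact model of the cited work): the expected execution cost of trades x_0,…,x_N at uniform times is a quadratic form with kernel exp(-rho*|t_i - t_j|), and minimizing it under the constraint that the trades sum to the target quantity reduces to a linear system.

```python
import numpy as np

def optimal_execution(X0, T, N, rho):
    """Trades x_0..x_N at uniform times on [0, T] minimizing (1/2) x^T M x
    subject to sum(x) = X0, with decay kernel M_ij = exp(-rho*|t_i - t_j|).
    The first-order condition M x = lambda * 1 gives the optimum directly."""
    t = np.linspace(0.0, T, N + 1)
    M = np.exp(-rho * np.abs(t[:, None] - t[None, :]))
    y = np.linalg.solve(M, np.ones(N + 1))  # direction M^{-1} 1
    return X0 * y / y.sum()                 # rescale to meet the constraint

x = optimal_execution(X0=100.0, T=1.0, N=10, rho=5.0)
```

The optimal strategy in this setting has the well-known shape: two larger block trades at the beginning and at the end, and equal smaller trades in between, letting the book recover between trades.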

After the recent financial crisis, systemic risk has emerged as one of the major research topics in mathematical finance. The scope is to understand and model how the bankruptcy of a bank (or a large company) may or may not induce other bankruptcies. By contrast with the traditional approach in risk management, the focus is no longer on modeling the risks faced by a single financial institution, but on modeling the complex interrelations between financial institutions and the mechanisms of distress propagation among them. Ideally, one would like to be able to find capital requirements (such as the ones proposed by the Basel committee) that ensure that the probability of multiple defaults is below some level.

The mathematical modeling of default contagion, by which an economic shock causing initial losses and the default of a few institutions is amplified through complex linkages, leading to large-scale defaults, can be addressed by various techniques, such as network approaches (see in particular R. Cont et al. and A. Minca) or mean-field interaction models (Garnier-Papanicolaou-Yang).
A recent approach seems very promising. It describes the
financial network as a weighted directed graph, in which nodes represent financial institutions and
edges the exposures between them. Distress propagation in a financial system
may be modeled as an epidemic on this graph. In the case of incomplete
information on the structure of the interbank network, cascade dynamics may be
reduced to the evolution of a multi-dimensional Markov chain that corresponds
to a sequential discovery of exposures and determines at any time the size of
the contagion.
Little has been done so far on the *control* of such systems in order to
reduce systemic risk, and we aim to contribute to this domain.
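The basic cascade mechanism on such a graph can be sketched as follows (a deliberately minimal threshold-contagion toy model of our own, not the Markov-chain construction of the cited work): each bank has a capital buffer, a default inflicts losses on the defaulting bank's creditors, and a bank defaults in turn once its accumulated losses reach its buffer.

```python
def contagion(exposures, capital, initially_defaulted):
    """exposures[i][j]: amount bank i is owed by bank j (loss to i if j defaults).
    Returns the set of banks defaulted at the end of the cascade."""
    n = len(capital)
    losses = [0.0] * n
    defaulted = set(initially_defaulted)
    frontier = list(initially_defaulted)
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if i in defaulted:
                continue
            losses[i] += exposures[i][j]      # write-down of i's exposure to j
            if losses[i] >= capital[i]:       # buffer exhausted: i defaults too
                defaulted.add(i)
                frontier.append(i)
    return defaulted

# 4 banks; bank 0's failure wipes out bank 1, whose failure then reaches bank 2.
exposures = [
    [0, 0, 0, 0],
    [10, 0, 0, 0],
    [2, 6, 0, 0],
    [1, 1, 1, 0],
]
capital = [5.0, 8.0, 7.0, 10.0]
cascade = contagion(exposures, capital, {0})
```

Even this toy model shows why the graph structure matters: bank 2 survives the initial shock but not the second-round default of bank 1, while the well-capitalized bank 3 absorbs all three losses.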

The financial crisis has caused increased interest in mathematical finance studies that take into account market incompleteness, default risk modeling, the interplay between information and performance, model uncertainty and the associated robustness questions, and various nonlinearities. We address these questions by further developing the theory of stochastic control in a broad sense, including stochastic optimization, nonlinear expectations, Malliavin calculus, stochastic differential games and various aspects of optimal stopping.

The theory of American option pricing has stimulated a number of research articles on optimal stopping. Our recent contributions in this field concern optimal stopping in models with jumps, irregular obstacles, free boundary analysis, and reflected BSDEs.


Effective numerical methods are crucial in the pricing and hedging of derivative securities. The need for more complex models leads to stochastic differential equations which cannot be solved explicitly, and the development of discretization techniques is essential in the treatment of these models. The project MathRisk addresses fundamental mathematical questions as well as numerical issues in the following (non exhaustive) list of topics: Multidimensional stochastic differential equations, High order discretization schemes, Singular stochastic differential equations, Backward stochastic differential equations.
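For singular stochastic differential equations, a classical difficulty is that the naive Euler scheme can produce negative values where the coefficients are undefined. As an illustration (our own sketch of a standard technique, not a specific MathRisk scheme), the full-truncation Euler scheme for the CIR process takes the positive part of the state inside the coefficients; the simulated mean can be checked against the known closed-form mean of the CIR process.

```python
import numpy as np

def cir_full_truncation(x0, k, theta, sigma, T, n_steps, n_paths, seed=7):
    """Full-truncation Euler scheme for the CIR process
    dX_t = k*(theta - X_t) dt + sigma*sqrt(X_t) dW_t.
    The square-root diffusion coefficient is singular at 0, so the scheme
    evaluates drift and diffusion at the positive part of X."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        xp = np.maximum(x, 0.0)
        x = x + k * (theta - xp) * dt \
              + sigma * np.sqrt(xp * dt) * rng.standard_normal(n_paths)
    return np.maximum(x, 0.0)

# E[X_T] = theta + (x0 - theta) * exp(-k*T) for the exact CIR process
x_T = cir_full_truncation(x0=0.09, k=2.0, theta=0.04, sigma=0.3, T=1.0,
                          n_steps=100, n_paths=100_000)
```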

Monte-Carlo methods are a very useful tool for evaluating prices,
especially for complex models or options. We carry on research
on *adaptive variance reduction methods* and on the use of *Monte-Carlo methods for calibration* of advanced models.
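To fix ideas, the following sketch (ours; the adaptive methods studied by the team are more elaborate) illustrates one classical variance reduction technique, antithetic variates, on a European call under Black–Scholes: each Gaussian draw is paired with its mirror image, and for a monotone payoff the two estimates are negatively correlated, shrinking the standard error.

```python
import numpy as np

def bs_call_mc(s0, K, r, sigma, T, n, antithetic=False, seed=42):
    """Monte Carlo price of a European call, optionally with antithetic variates.
    Returns (price estimate, standard error)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    if antithetic:
        z = np.concatenate([z, -z])           # pair each draw with its mirror image
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.exp(-r * T) * np.maximum(s_T - K, 0.0)
    if antithetic:
        payoff = 0.5 * (payoff[:n] + payoff[n:])  # average the paired payoffs
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(len(payoff))

price_plain, err_plain = bs_call_mc(100, 100, 0.05, 0.2, 1.0, 100_000)
price_anti, err_anti = bs_call_mc(100, 100, 0.05, 0.2, 1.0, 100_000, antithetic=True)
```

Note that the antithetic run uses twice as many normal draws for the same number of independent samples; even per unit of work, the negative correlation of the paired payoffs makes it pay off for monotone payoffs such as the call.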

This activity in the MathRisk team is strongly related to the development of the Premia software.

The original Stochastic Calculus of Variations, now called the Malliavin calculus, was developed by Paul Malliavin in 1976. It was originally designed to study the smoothness of the densities of solutions of stochastic differential equations. One of its striking features is that it provides a probabilistic proof of the celebrated Hörmander theorem, which gives a condition for a partial differential operator to be hypoelliptic. This illustrates the power of this calculus. In the following years, many probabilists worked on this topic and the theory was developed further, either as analysis on the Wiener space or in a white noise setting. Many applications in the field of stochastic calculus followed. Several monographs and lecture notes (for example by D. Nualart, D. Bell, D. Ocone, and B. Øksendal) give expositions of the subject. See also V. Bally for an introduction to Malliavin calculus.

From the beginning of the nineties, applications of the Malliavin calculus in finance have appeared: in 1991, Karatzas and Ocone showed how the Malliavin calculus, as further developed by Ocone and others, could be used in the computation of hedging portfolios in complete markets.

Since then, the Malliavin calculus has attracted increasing interest and many other applications to finance have been found, such as minimal variance hedging and Monte Carlo methods for option pricing. More recently, the Malliavin calculus has also become a useful tool for studying insider trading models and some extended market models driven by Lévy processes or fractional Brownian motion.

We give below an idea why Malliavin calculus may be a useful instrument for probabilistic numerical methods.

We recall that the theory is based on an integration by parts formula of the form E[f′(X)] = E[f(X) Q], where X is a random variable which is smooth and non-degenerate in the Malliavin sense and f a test function.

An important feature is that one has a relatively explicit expression for the weight Q which appears in this formula.

Let us now look at one of the main consequences of the integration by parts formula.
If one considers the *Dirac* function δ_x, then δ_x = H′, where H is the *Heaviside* function, and the above
integration by parts formula reads E[δ_x(X)] = E[H(X − x) Q], which represents the density of X at the point x. This opens the way to probabilistic algorithms for computing densities, sensitivities of option prices (the so-called *Greeks*) and conditional
expectations, which appear in the pricing of American options
by dynamic programming.
See the papers by Fournié et al.
and the papers by Bally et al., Benhamou, Bermin et al., Bernis et al., Cvitanic et al., Talay and Zheng, and Temam.
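In the simplest Black–Scholes setting, the weight is fully explicit: the delta of a (possibly discontinuous) payoff f can be written E[e^{-rT} f(S_T) · W_T/(S_0 σ T)] (the formula of Fournié et al.), so the payoff itself is never differentiated. A minimal sketch, checked against the closed-form call delta N(d_1):

```python
import numpy as np
from math import erf, log, sqrt

def delta_malliavin(s0, K, r, sigma, T, n=400_000, seed=1):
    """Delta of a European call via the Malliavin weight W_T/(s0*sigma*T):
    the (kinked) payoff is never differentiated, only reweighted."""
    rng = np.random.default_rng(seed)
    w_T = sqrt(T) * rng.standard_normal(n)
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * w_T)
    payoff = np.maximum(s_T - K, 0.0)
    weight = w_T / (s0 * sigma * T)
    return np.mean(np.exp(-r * T) * payoff * weight)

def delta_closed_form(s0, K, r, sigma, T):
    d1 = (log(s0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return 0.5 * (1.0 + erf(d1 / sqrt(2.0)))  # N(d1)

d_mc = delta_malliavin(100, 100, 0.05, 0.2, 1.0)
d_cf = delta_closed_form(100, 100, 0.05, 0.2, 1.0)
```

The same reweighting idea applies to digital payoffs, where a finite-difference estimator would be very noisy; this is exactly the regime where integration-by-parts estimators shine.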

L. Caramellino, A. Zanette and V. Bally have been concerned with the computation of conditional expectations using integration by parts formulas and with applications to the numerical computation of the price and the Greeks (sensitivities) of American or Bermudan options. The aim of this research was to extend a paper of Lions and Régnier, who treated the problem in dimension one, to higher dimensions, which represent the real challenge in this field. Significant results have been obtained up to dimension 5, and the corresponding algorithms have been implemented in the Premia software.

Moreover, there is increasing interest in considering jump components in financial models, especially for calibration reasons. Algorithms based on integration by parts formulas have been developed to compute Greeks for options with discontinuous payoffs (e.g. digital options). Several papers and two theses (by M. Messaoud and M. Bavouzet, defended in 2006) have been published on this topic, and the corresponding algorithms have been implemented in Premia. Malliavin calculus for jump-type diffusions, and more generally for random variables with locally smooth laws, represents a large field of research, also for applications to credit risk problems.

The Malliavin calculus is also used in models of insider trading. The “enlargement of filtration” technique plays an important role in the modeling of such problems, and the Malliavin calculus can be used to obtain general results about when and how such filtration enlargement is possible (see the paper by P. Imkeller). Moreover, when the additional information of the insider is generated by adding the information about the value of one extra random variable, the Malliavin calculus can be used to find explicitly the optimal portfolio of an insider for a utility optimization problem with logarithmic utility (see the paper by J.A. León, R. Navarro and D. Nualart).

A. Kohatsu-Higa and
A. Sulem have studied a controlled stochastic system whose state
is described by a stochastic differential equation with anticipating
coefficients. These SDEs can be interpreted in the sense of *forward integrals*, which are the natural generalization of
semimartingale integrals, as introduced by Russo and Vallois. This methodology has
been applied to utility maximization for insiders.

The application domains are quantitative finance and insurance, with emphasis on risk modeling and control. In particular, MathRisk focuses on dependence modeling, systemic risk, market microstructure modeling and risk measures.

Creation of a joint seminar on Numerical probability and Mathematical Finance with the LPMA laboratory, University Paris-Diderot.

Organization by B. Jourdain, with B. Bouchard (Université Paris-Dauphine) and E. Gobet (Ecole Polytechnique), of the 2015-2016 thematic semester on Monte Carlo methods (financed by the Institut Louis Bachelier) at the Institut Henri Poincaré, Paris.
https://

Keywords: Financial products - Computational finance - Option pricing

Premia is a software designed for option pricing, hedging and financial model calibration.

The Premia project keeps track of the most recent advances in the field of computational finance in a well-documented way. It focuses on the implementation of numerical analysis techniques for both probabilistic and deterministic numerical methods. An important feature of the platform Premia is the detailed documentation which provides extended references in option pricing.

Premia is thus a powerful tool to assist Research & Development professional teams in their day-to-day duties. It is also a useful support for academics who wish to test new algorithms or pricing methods without starting from scratch.

Besides being a single entry point for accessible overviews and basic implementations of various numerical methods, the aim of the Premia project is: 1 - to be a powerful testing platform for comparing different numerical methods between each other, 2 - to build a link between professional financial teams and academic researchers, 3 - to provide a useful teaching support for Master and PhD students in mathematical finance.

Participants: Mathrisk project team and contributors

Partners: Ecole des Ponts ParisTech - Inria - Université Paris-Est - Consortium Premia

Contact: Agnès Sulem

URL: http://

AMS: 91B28;65Cxx;65Fxx;65Lxx;65Pxx

License: Licence Propriétaire (genuine license for the Consortium Premia)

OS/Middleware: Linux, Mac OS X, Windows

APP: The development of Premia started in 1999; 17 releases have been issued up to now and registered at the APP agency. Premia 16 was registered on 03/03/2015 under the number IDDN.FR.001.190010.013.S.C.2001.000.31000.

Programming language: C/C++

Documentation: scientific documentation of all the algorithms implemented. PNL has a 100-page user documentation.

Size of the software: for the Src part of Premia: 337,046 lines, i.e. 14 MB of code, and 117 MB of PDF documentation files; for PNL: 747,952 lines, i.e. 25 MB.

Interfaces: Nsp for Windows/Linux/Mac, Excel, a Python binding, and a Web interface.

Premia contains various numerical algorithms (Finite-differences, trees and Monte-Carlo) for pricing vanilla and exotic options on equities, interest rate, credit and energy derivatives.

**Equity derivatives:**

The following models are considered:

Black-Scholes model (up to dimension 10), stochastic volatility models (Hull-White, Heston, Fouque-Papanicolaou-Sircar), models with jumps (Merton, Kou, Tempered stable processes, Variance gamma, Normal inverse Gaussian), Bates model.

For high-dimensional American options, Premia provides the most recent Monte-Carlo algorithms: Longstaff-Schwartz, Barraquand-Martineau, Tsitsiklis-Van Roy, Broadie-Glasserman, quantization methods and Malliavin calculus based methods.

Dynamic Hedging for Black-Scholes and jump models is available.

Calibration algorithms for some models with jumps, local volatility and stochastic volatility are implemented.

**Interest rate derivatives**

The following models are considered:

HJM and Libor Market Models (LMM): affine models, Hull-White, CIR

Premia provides a calibration toolbox for Libor Market model using a database of swaptions and caps implied volatilities.

**Credit derivatives: Credit default swaps (CDS), Collateralized debt obligations (CDO)**

Reduced form models and copula models are considered.

Premia provides a toolbox for pricing CDOs using the most recent algorithms (Hull-White, Laurent-Gregory, El Karoui-Jiao, Yang-Zhang, Schönbucher).

**Hybrid products**

A PDE solver is implemented for pricing derivatives on hybrid products such as options on inflation and interest or exchange rates.

**Energy derivatives: swing options**

Mean reverting and jump models are considered.

Premia provides a toolbox for pricing swing options using finite differences, Monte-Carlo Malliavin-based approach and quantization algorithms.

To facilitate contributions, a standardized numerical library (PNL) has been developed by J. Lelong under the LGPL since 2009. It offers a wide variety of high-level numerical methods for dealing with linear algebra, numerical integration, optimization, random number generators, Fourier and Laplace transforms, and much more. Everyone who wishes to contribute is encouraged to base their code on PNL; providing such a unified numerical library has considerably eased the development of new algorithms, which have become more and more sophisticated over the releases.

This year, Jérôme Lelong has performed the following tasks in the development of PNL:

Releases 1.7.3 and 1.7.4 of the *PNL* library (http://

Simplify the use of PNL under Visual Studio. It can either be compiled using CMake or added as an external library to an existing project.

Improve the construction of large `PnlBasis` objects and make it possible
to deal with non-tensor functions.

Add complex error functions.

The software Premia is supported by a Consortium of financial institutions created in 1999. The members of the Consortium make an annual financial contribution and receive every year a new version enriched with new algorithms. They participate in the annual meeting where future developments are discussed.

All releases of the software Premia (18 in 2016) are registered at the French agency APP. The most recent one is provided to the Consortium with an appropriate license. An open-source version is also available for academic purposes. The software is thus used in many universities, both in France and abroad.

Premia 18 has been delivered to the consortium members in March 2016.

It contains the following new algorithms:

A Forward Solution for Computing Derivatives Exposure. M. Ben Taarit, B. Lapeyre.

Monte Carlo Calculation of Exposure Profiles and Greeks for Bermudan and Barrier Options under the Heston Hull-White Model. Q. Feng, C.W. Oosterlee.

Dynamic optimal execution in a mixed-market-impact Hawkes price
model. A. Alfonsi, P. Blanc.
*Finance & Stochastics*

A Hamilton Jacobi Bellman approach to optimal trade execution. P. Forsyth,
*Applied Numerical Mathematics 61, 2011.*

Value function approximation or stopping time approximation: a
comparison of two recent numerical methods for American option
pricing using simulation and regression. L. Stentoft,
*Journal of Computational Finance, 18( 1), 2014.*

Pricing American-Style Options by Monte Carlo Simulation:
Alternatives to Ordinary Least Squares. S. Tompaidis, C. Yang,
*Journal of Computational Finance, 18(1), 2014.*

Solving Optimal Stopping Problems using Martingale Bases. J. Lelong.

The Stochastic Grid Bundling Method: Efficient Pricing of Bermudan Options and their Greeks. S. Jain, C.W. Oosterlee.

Two-dimensional Fourier cosine series expansion method for
pricing financial options. M.J. Ruijter, C.W. Oosterlee, *SIAM J. Sci. Comput., 34(5), 2012.*

Estimation of the parameters of the Wishart process. A. Alfonsi,
A. Kebaier, C. Rey,
*Preprint.*

The 4/2 Stochastic Volatility Model. M. Grasselli,
*Preprint.*

Ninomiya Victoir Scheme and Multi Level Scheme. A. Al Gerbi, E. Clement, B. Jourdain.

Importance Sampling for Multilevel Monte Carlo. A. Kebaier, J. Lelong.

The evaluation of barrier option prices under stochastic
volatility. C. Chiarella, B. Kang, G.H. Meyer,
*Computers and Mathematics with Applications 64, 2012.*

Volatility swaps and volatility options on discretely sampled realized
variance. G. Lian, C. Chiarella, P.S. Kalev,
*Journal of Economic Dynamics and Control, 47, 2014.*

Efficient variations of the Fourier transform in applications to option pricing. S. Boyarchenko, S. Levendorskii,
*Journal of Computational Finance, 18(2), 2014.*

Model-free implied volatility: from surface to
index. M. Fukasawa et al.,
*Int. J. Theor. Appl. Finan. 14, 433, 2011*

Stratified approximations for the pricing of
options on average. N. Privault, J. Yu, *Journal of Computational
Finance.*

Moreover, J. Lelong has ensured everyday maintenance, fixing various bugs, especially related to Visual C++, and has gotten rid of the old set of scripts used to generate the HTML documentation by implementing the required mechanism directly in TeX, which makes the system much more robust. He has also worked on the continuous integration process with Sébastien Hinderer. Moreover, part of the Premia documentation is now generated directly from the source code: the 3000 lines of undocumented C code used so far had become unmaintainable and have been replaced by a much more flexible and efficient Python script.

Our objective is to study the magnitude of default contagion in a large financial system in which banks receive benefits from their connections, and to investigate how the institutions choose their connectivities by weighing the default risk against the benefits induced by connectivity. We study two versions of the model. In the first (static) version, the benefits are received at the end of the contagion: each bank receives fixed benefits per link if it survives, and its payoff is zero otherwise. In the second version, a dynamic model, banks receive cash flows from their connections, spread over time; effectively, these cash flows increase the threshold of the bank over the course of the contagion. We call this model contagion with intrinsic recovery features. In the first model there is no calendar time. In the second model, the cash flows arrive at a certain rate in calendar time, while the losses come with each revealed link; we thus need to relate the intensity of revealing a link to calendar time. Both models have new features compared to the past literature, the most important being that banks choose their connectivities optimally. The second model is dynamic and introduces growth over time. Computing the magnitude of contagion in this case is challenging, and we provide an iterative solution.

We pursue the development of the theory of stochastic control and optimal stopping with
nonlinear expectation induced by a nonlinear BSDE with (default) jump, and the
application to nonlinear pricing in financial markets with default.
To that purpose we have studied nonlinear BSDE with default and proved several properties for these equations.
We have also addressed the case with ambiguity on the model, in particular ambiguity on the default probability. In this context, we study robust superhedging strategies for the seller of a game optimal stopping problem by proving some duality results, and characterize the robust seller's price of a game option as the value function of a *mixed generalized* Dynkin game.

We study stochastic maximum principles, both necessary and sufficient, for SPDE with jumps with a general mean-field operator.

The majority of results on numerical methods for FBSDEs rely on a global Lipschitz assumption, which is not satisfied in a number of important cases such as the Fisher-KPP or FitzHugh-Nagumo equations. In a previous work, A. Lionnet, together with Gonzalo Dos Reis and Lukasz Szpruch, showed that for BSDEs with monotone drivers having polynomial growth in the primary variable y, only (sufficiently) implicit schemes converge, but these require an additional computational effort compared to explicit schemes. They have thus developed a general framework that allows the analysis, in a systematic fashion, of the integrability properties, convergence and qualitative properties (e.g. comparison theorem) of whole families of modified explicit schemes. These modified schemes are characterized by the replacement of the driver by a time-grid-dependent driver that converges to the original driver as the size of the time-steps goes to 0. The framework yields the convergence of some modified explicit schemes with the same rate as implicit schemes and at the computational cost of the standard explicit scheme.

With J. Corbetta (postdoc financed by the Chair Financial Risks), A. Alfonsi and B. Jourdain are interested in the time derivative of the Wasserstein distance between the marginals of two Markov processes. The Kantorovich duality leads to a natural candidate for this derivative. Up to the sign, it is the sum of the integrals, with respect to each of the two marginals, of the corresponding generator applied to the corresponding Kantorovich potential. For pure jump processes with bounded jump intensity, J. Corbetta, A. Alfonsi and B. Jourdain proved that the evolution of the Wasserstein distance is actually given by this candidate. In dimension one, they show that this remains true for piecewise deterministic Markov processes.
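In generic notation (ours), the candidate suggested by Kantorovich duality reads: for Markov processes with generators $\mathcal{L}$ and $\hat{\mathcal{L}}$ and marginal flows $(\mu_t)$ and $(\nu_t)$,

```latex
\frac{d}{dt}\, W_p^p(\mu_t,\nu_t)
\;=\; \int \mathcal{L}\varphi_t \, d\mu_t \;+\; \int \hat{\mathcal{L}}\psi_t \, d\nu_t ,
```

where $(\varphi_t,\psi_t)$ is an optimal pair of Kantorovich potentials between $\mu_t$ and $\nu_t$, up to the sign, depending on the sign convention chosen for the potentials.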

By Gyongy's theorem, a local and stochastic volatility model is calibrated to the market prices of all call options with positive maturities and strikes if its local volatility function is equal to the ratio of the Dupire local volatility function over the root conditional mean square of the stochastic volatility factor given the spot value. This leads to an SDE which is nonlinear in the sense of McKean. Particle methods based on a kernel approximation of the conditional expectation, as presented by Guyon and Henry-Labordère (2011), provide an efficient calibration procedure, even if some calibration errors may appear when the range of the stochastic volatility factor is very large. But so far, no existence result is available for this McKean-nonlinear SDE. In the particular case where the interest rate is zero and the local volatility function is equal to the inverse of the root conditional mean square, given the spot value, of the stochastic volatility factor multiplied by the spot value, the solution to the SDE is a fake Brownian motion. When the stochastic volatility factor is a constant (over time) random variable taking finitely many values and the range of its square is not too large, B. Jourdain and A. Zhou prove existence for the associated Fokker-Planck equation. Thanks to Figalli (2008), they then deduce the existence of a new class of fake Brownian motions. They extend these results to the special case of the LSV model called Regime Switching Local Volatility, where the stochastic volatility factor is a jump process taking finitely many values, with jump intensities depending on the spot level. Under the same condition on the range of its square, they prove existence for the associated Fokker-Planck PDE and finally deduce the existence of the calibrated model by extending the results of Figalli (2008).
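A minimal sketch of the particle calibration procedure of Guyon and Henry-Labordère (2011), under simplifying assumptions of ours: zero interest rate, a toy flat Dupire local volatility (0.2), an Ornstein-Uhlenbeck volatility factor with f(y) = exp(y), and a Gaussian Nadaraya-Watson kernel for the conditional expectation. All names and parameter values are illustrative, not those of any cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 1000, 20, 1.0            # particles, time steps, horizon
h = T / N
S = np.full(M, 100.0)              # spot particles
Y = np.zeros(M)                    # stochastic volatility factor particles
sigma_dup = lambda t, s: 0.2       # toy flat Dupire local volatility surface
f = np.exp                         # volatility function of the factor

for i in range(N):
    # Nadaraya-Watson kernel estimate of E[f(Y)^2 | S] on the particle cloud
    bw = max(1.06 * S.std() * M ** (-0.2), 1e-8)   # Silverman-type bandwidth
    K = np.exp(-0.5 * ((S[:, None] - S[None, :]) / bw) ** 2)
    cond = K @ f(Y) ** 2 / K.sum(axis=1)
    lev = sigma_dup(i * h, S) / np.sqrt(cond)      # calibrated leverage function
    vol = f(Y) * lev
    # log-Euler step for the spot, Euler step for the OU factor
    S *= np.exp(vol * rng.normal(0, np.sqrt(h), M) - 0.5 * vol ** 2 * h)
    Y += -Y * h + 0.3 * rng.normal(0, np.sqrt(h), M)
```

Each step re-estimates the leverage function on the current cloud, which is exactly the self-interaction that makes the limiting SDE nonlinear in the sense of McKean.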

With Mihail Zervos, D. Lamberton has worked on American options involving the maximum of the underlying asset. With Giulia Terenzi, he has been working on American options in Heston's model. They obtained existence and uniqueness results for the associated variational inequality in suitable weighted Sobolev spaces (see Feehan and co-authors for recent results on elliptic problems).

We have studied the maximum likelihood estimator for Wishart processes, and in particular its convergence in the ergodic case and in some non-ergodic cases. In the non-ergodic cases, our analysis relies on refined results on the Laplace transform of Wishart processes. Our work also extends the recent paper by Ben Alaya and Kebaier on maximum likelihood estimation for the CIR process.
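For reference, in the benchmark one-dimensional case of the CIR process $dX_t=(a-bX_t)\,dt+\sigma\sqrt{X_t}\,dW_t$ with $\sigma$ known, the continuous-observation log-likelihood of the drift parameters takes the classical Girsanov form (a standard reminder, not a result of the paper):

```latex
\ell_T(a,b) \;=\; \frac{1}{\sigma^2}\int_0^T \frac{a-bX_s}{X_s}\,\mathrm{d}X_s
\;-\; \frac{1}{2\sigma^2}\int_0^T \frac{(a-bX_s)^2}{X_s}\,\mathrm{d}s ,
```

and the MLE $(\hat a_T,\hat b_T)$ solves the resulting linear first-order conditions; in the Wishart case, the analogous quantities become matrix-valued.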

With A. Kohatsu-Higa and M. Hayashi, Aurélien Alfonsi is investigating how to apply to reflected SDEs the parametrix method recently proposed by V. Bally and A. Kohatsu-Higa. This method yields an unbiased estimator for expectations of general functions of the process.

This work was motivated by previous papers of Nicolas Fournier, J. Printemps, E. Clément, A. Debussche and V. Bally on the regularity of the law of the solutions of some equations with coefficients of little regularity, for example diffusion processes with Hölder coefficients (but also many other examples, including jump-type equations, the Boltzmann equation and stochastic PDEs). Since we do not have sufficient regularity, the usual approach by Malliavin calculus fails in this framework. One may then use an alternative idea which, roughly speaking, is the following: we approximate the law of the random variable

Continuing the above work, we study, in collaboration with Lucia Caramellino, the regularity of the solutions of jump-type equations. This subject has been extensively treated in the literature using different hypotheses and different variants of Malliavin calculus adapted to equations with jumps. The case of Poisson point measures with absolutely continuous intensity measure has been well understood since the paper of Bichteler, Gravereaux and Jacod in the 1980s, but the case of discrete intensity measures is more subtle. In that case, J. Picard succeeded in obtaining regularity results using a variant of Malliavin calculus based on finite differences. We also work in this framework, but instead of using a variant of Malliavin calculus directly, we use an interpolation argument. These are still working papers.

In collaboration with L. Caramellino, we work on invariance principles for stochastic series of polynomial type. In the case of polynomials of degree one, we recover the classical Central Limit Theorem (for random variables which are not identically distributed). For polynomials of higher degree, we are in the framework of the so-called U-statistics, introduced by Hoeffding in 1948, which play an important role in modern statistics. Our contribution on this topic concerns convergence in total variation distance for such objects. We use abstract Malliavin calculus and, more generally, the methods mentioned in the paragraph above.
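As a reminder of the objects involved, a degree-two U-statistic with kernel h(x, y) = (x − y)²/2 reduces exactly to the unbiased sample variance; a minimal numerical check (our example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
n = len(x)

# Degree-2 U-statistic: U_n = binom(n, 2)^{-1} * sum_{i<j} h(x_i, x_j)
h = lambda a, b: (a - b) ** 2 / 2
U = sum(h(x[i], x[j]) for i in range(n) for j in range(i + 1, n))
U /= n * (n - 1) / 2

# With this kernel, U_n coincides with the unbiased sample variance.
print(abs(U - np.var(x, ddof=1)))   # ~0 up to rounding
```

This follows from the identity sum_{i<j} (x_i − x_j)² = n·sum x_i² − (sum x_i)², which is the polynomial structure the invariance principles above exploit.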

Consortium PREMIA, Natixis - Inria

Consortium PREMIA, Crédit Agricole CIB - Inria

Chair Ecole Polytechnique-ENPC-UPMC-Société Générale "Financial Risks" of the Risk Foundation: A. Alfonsi, B. Jourdain, B. Lapeyre

ANR Stab 2013-2016, Participant: B. Jourdain; Partners: Lyon 1, Paris-Dauphine

ANR Cosmos 2015-2018, Participant: B. Jourdain; Partners: Ecole des Ponts, Telecom, Inria Rennes and IBPC

**Pôle Finance Innovation.**

Center of Excellence program in Mathematics and Life Sciences at the Department of Mathematics, University of Oslo, Norway, (B. Øksendal).

Department of Mathematics, University of Manchester (Tusheng Zhang, currently in charge of an EU-ITN program on BSDEs and Applications).

University of Kansas (Yaozhong Hu)

Mannheim University (Alexander Schied, Chair of Mathematics in Business and Economics, Department of Mathematics)

Roma Tor Vergata University (Lucia Caramellino)

Ritsumeikan University (A. Kohatsu-Higa).

Oleg Kudryavtsev, Rostov University (Russia), 2 months

Babacar Diallo [Inria, Trainee, from Mar 2016 until Aug 2016]

Nicolas Le Mouel [Inria, Trainee, from Jul 2016 until Oct 2016]

Mouad Ramil [Inria, Trainee, from Mar 2016 until Aug 2016]

Vlad Bally visited Tor Vergata University, Rome (collaboration with Lucia Caramellino).

B. Jourdain (with B. Bouchard and E. Gobet): organization of the 2015-2016 thematic semester on Monte Carlo methods financed by the Institute Louis Bachelier, and the closing conference.

A. Alfonsi: Co-organizer of the working group seminar of MathRisk “Méthodes stochastiques et finance”.
http://

A. Sulem: Co-organizer of the joint MathRisk/LPMA working group seminar "Mathematical finance and numerical probability", University Paris-Diderot.

J. Lelong:

Journées MAS 2016, Grenoble.

CEMRACS 2017

Les Journées de Probabilités 2017

Session organizer at CANUM 2016

Session organizer at “The International Conference on Monte Carlo techniques”, 2016, Paris

A. Sulem is a reviewer for *Mathematical Reviews*

R. Elie

Associate editor of *SIAM Journal on Financial Mathematics (SIFIN)* (since November 2014)

D. Lamberton

Associate editor of

Mathematical Finance,

Associate editor of ESAIM Probability & Statistics

A. Sulem

Associate editor of

2011- Present: *Journal of Mathematical Analysis and Applications (JMAA)*

2009- Present: *International Journal of Stochastic Analysis (IJSA)*

2008- Present: *SIAM Journal on Financial Mathematics (SIFIN)*

The members of the team reviewed numerous papers for many journals.

A. Alfonsi

January 15th 2016: "Wishart processes: MLE estimation and interest rate modelling", North British Probability seminar, Edinburgh.

February 5th 2016: "Dynamic optimal execution in a mixed-market-impact Hawkes price model", Frontiers in Stochastic Modeling for Finance, Padua, Italy

October 7th 2016: "Maximum Likelihood Estimation for Wishart processes", workshop "USPC-NUS Models and numerical methods for financial risk management", Paris Diderot.

November 17th 2016: "Extension and calibration of a Hawkes-based optimal execution model", SIAM Conference on Financial Mathematics & Engineering, Austin.

December 9th 2016: "Optimal Execution in a Hawkes Price Model and Calibration", Market Microstructure Confronting many viewpoints #4, Paris.

B. Jourdain

Seminar of the chair "Financial Risks", June 3rd 2016: "Strong convergence properties of the Ninomiya-Victoir scheme and applications to multilevel Monte Carlo"

Seminar MathRisk P7, September 22nd 2016: "Existence for a calibrated regime-switching local volatility model"

C. Labart

Frontiers in stochastic modelling for finance, Padua and Venice, Italy, February 2016.

Closing International Conference of Thematic cycle on Monte-Carlo techniques, Paris, July 2016.

J. Lelong

CANUM 2016

Seminar on Insurance Mathematics and Stochastic Finance at ETH Zurich, May 2016.

Journées MAS 2016.

Closing International Conference of Thematic cycle on Monte-Carlo techniques, Paris, July 2016

Frontiers in stochastic modelling for finance, Padua and Venice, Italy, February 2016

A. Sulem

Stochastic analysis, control and games with applications to financial economics, University of Leeds, November 2016.

National University of Singapore/ Université Paris-Diderot workshop on quantitative finance, October 2016, Paris.

Abel Symposium 2016 "Computation and Combinatorics in Dynamics, Stochastics and Control", August 2016,
Barony Rosendal, Norway.
http://

Simulation of stochastic graphs and applications, closing conference on "Monte-Carlo techniques", Paris, July 2016

Conference "Frontiers in Stochastic Modelling for Finance", Padua and Venice, Italy, February, 2016.
https://

"Actuarial and Financial Mathematics Conference" (plenary talk), February 2016, Brussels, Belgium.

A. Zanette

"Hybrid tree-finite difference methods for the Heston, Bates and Heston Hull-White models". SIMAI Politecnico di Milano 2016.

A. Sulem : Member of the Committee for technology development, Inria Paris

B. Jourdain: Head of the doctoral school MSTIC, University Paris-Est

**Undergraduate programs**

A. Alfonsi: "Probabilités", first-year course at the Ecole des Ponts.

B. Jourdain :

- course "Mathematical finance", 2nd year ENPC

- course "Introduction to probability theory", 1st year, Ecole Polytechnique

B. Jourdain, B. Lapeyre course "Monte-Carlo methods", 3rd year ENPC and Master Recherche Mathématiques et Application, Université Paris-Est Marne-la-Vallée

**Graduate programs**

A. Alfonsi:

- “Traitement des données de marché : aspects statistiques et calibration”, lecture for the Master at UPEMLV.

- “Mesures de risque”, Master course of UPEMLV and Paris VI.

- Professeur chargé de cours at Ecole Polytechnique

J.-F. Delmas, B. Jourdain: course "Jump processes with applications to energy markets", 3rd year ENPC and Master Recherche Mathématiques et Application, Université Paris-Est Marne-la-Vallée

B. Jourdain

- course "Stochastic numerical methods", 3rd year, Ecole Polytechnique

- projects in finance and numerical methods, 3rd year, Ecole Polytechnique

A. Sulem

- "Finite difference for PDEs in Finance", Master 2 MASEF, Université Paris IX-Dauphine, Département Mathématiques et Informatique de la Décision et des Organisations (MIDO), 18h.

- Master of Mathematics, University of Luxembourg, 22 h lectures and responsible of the module "Numerical Methods in Finance".

**Doctoral programs**

A. Sulem:
International summer school in mathematical finance, University of Alberta in Edmonton, Canada
"Informational and Imperfect Financial Markets", https://

PhD :

Anis Al Gerbi: "Ninomiya-Victoir scheme: strong convergence, asymptotics for the normalized error and multilevel Monte Carlo methods", Université Paris-Est, supervised by B. Jourdain and E. Clément, defended on October 10, 2016

PhD in progress :

Rui Chen (Fondation Sciences Mathématiques de Paris grant), "Stochastic control of mean-field systems and applications to systemic risk", from September 2014, Université Paris-Dauphine. Supervisor: A. Sulem

Marouen Iben Taarit, "On CVA and XVA computations", CIFRE Natixis/ENPC. Adviser: Bernard Lapeyre

Giulia Terenzi, "American options in complex financial models", Université Paris-Est Marne-la-Vallée. Supervisors: Damien Lamberton and Lucia Caramellino (University Tor Vergata, Rome)

Alexandre Zhou (started November 2015), "Analysis of stochastic particle methods applied to finance", supervised by B. Jourdain

B. Jourdain

- PhD of Khaled Salhi, defended on December 5, University of Lorraine

- Reviewer for the PhD of Anthony Le Cavil, defended on December 9, University Paris-Saclay

A. Sulem

PhD of Richárd Fischer, *Modélisation de la dépendance pour des statistiques d'ordre et estimation non-paramétrique* (Modelling the dependence of order statistics and nonparametric estimation), defended on September 30, 2016, Ecole des Ponts (jury chair).