The Inria research team Omega is located both at Inria Sophia Antipolis and Inria Lorraine (Nancy). The team develops and analyzes stochastic models and probabilistic numerical methods. Its present fields of application are finance, fluid mechanics, biology and chemical kinetics.
Our competences cover the mathematics behind stochastic modelling and stochastic numerical methods. We also benefit from wide experimental experience with calibration and simulation techniques for stochastic models, and with the numerical resolution of deterministic equations by probabilistic methods. We pay special attention to collaborations with engineers, practitioners, physicists, biologists and numerical analysts.
Concerning the probabilistic resolution of linear and nonlinear partial differential equations, the Omega team studies Monte Carlo methods, stochastic particle methods and ergodic methods. For example, we are interested in fluid mechanics equations (Burgers, Navier–Stokes, etc.), in equations of chemical kinetics, and in homogenization problems for PDEs with random coefficients.
We develop simulation methods which take the boundary conditions into account. We provide non-asymptotic error estimates in order to describe the global numerical error corresponding to each choice of numerical parameters: number of particles, discretization step, integration time, number of simulations, etc. The key argument consists in interpreting the algorithm as a discretized probabilistic representation of the solution of the PDE under consideration. Part of our research therefore consists in constructing probabilistic representations which allow us to derive efficient numerical methods. In addition, we validate our theoretical results by numerical experiments.
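The interplay between these numerical parameters can be sketched on a toy example (the SDE, payoff and parameter values below are illustrative, not one of the team's applications):

```python
import math
import random

def euler_mc(f, x0, T, n_steps, n_paths, drift, sigma, seed=0):
    """Monte Carlo estimate of u(T, x0) = E[f(X_T)] for the SDE
    dX = drift(X) dt + sigma(X) dW, discretized by an Euler scheme.
    The global error combines a bias in O(T / n_steps) and a
    statistical error in O(1 / sqrt(n_paths))."""
    rng = random.Random(seed)
    h = T / n_steps
    sqrt_h = math.sqrt(h)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += drift(x) * h + sigma(x) * sqrt_h * rng.gauss(0.0, 1.0)
        total += f(x)
    return total / n_paths

# Toy heat-equation check: dX = dW, so u(T, x) = E[f(x + W_T)];
# for f(y) = y^2 the exact value is x0^2 + T = 2.
est = euler_mc(lambda y: y * y, x0=1.0, T=1.0, n_steps=50, n_paths=20000,
               drift=lambda x: 0.0, sigma=lambda x: 1.0)
```

Tightening either the discretization step or the number of simulations alone does not reduce the global error; the non-asymptotic estimates quantify this trade-off.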
In financial mathematics and in actuarial science, Omega is concerned with market modelling and specific Monte Carlo methods. In particular we study calibration questions, financial risks connected with modelling errors, and the dynamical control of such risks. We also develop simulation methods to compute prices and sensitivities of various financial contracts.
In neurobiology we are concerned with stochastic models which describe neuronal activity. We also develop a stochastic numerical method which will hopefully be useful to the Odyssée project to make more efficient part of the resolution of the inverse problem whose aim is to identify magnetic permittivities around the brain from electroencephalographic measurements. A new component of the biological applications of Omega is population dynamics and genetics: we develop stochastic interacting particle models to study the ecology and evolution of populations.
Most often physicists, economists, biologists and engineers need a stochastic model because they cannot describe the physical, economic or biological experiment under consideration with deterministic systems, either because of its complexity and/or its dimension, or because precise measurements are impossible. They then give up trying to describe the state of the system at future times given its initial conditions and, instead, try to obtain a statistical description of the evolution of the system. For example, they wish to compute occurrence probabilities for critical events, such as financial losses or neuronal electrical potentials overstepping given thresholds, or to compute the mean time of occurrence of interesting events, such as the fragmentation down to a very small size of a large proportion of a given population of particles. By nature such problems lead to complex modelling issues: one has to choose appropriate stochastic models, which requires a thorough knowledge of their qualitative properties, and then one has to calibrate them, which requires specific statistical methods to cope with the lack or inaccuracy of data. In addition, having chosen a family of models and computed the desired statistics, one has to evaluate the sensitivity of the results to the unavoidable model specifications. The Omega team, in collaboration with specialists of the relevant fields, develops theoretical studies of stochastic models, calibration procedures, and sensitivity analysis methods.
In view of the complexity of the experiments, and thus of the stochastic models, one cannot expect to use closed-form solutions of simple equations to compute the desired statistics. Often one even has no representation other than the probabilistic definition (e.g., this is the case when one is interested in the quantiles of the probability law of the possible losses of financial portfolios). Consequently practitioners need Monte Carlo methods combined with simulations of stochastic models. As the models cannot be simulated exactly, they also need approximation methods which can be efficiently used on computers. The Omega team develops mathematical studies and numerical experiments in order to determine the global accuracy and the global efficiency of such algorithms.
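For instance, a quantile of a loss distribution that admits no closed form can be estimated by plain Monte Carlo sorting; the lognormal loss below is purely illustrative:

```python
import math
import random

def mc_loss_quantile(n_samples, alpha, seed=1):
    """Estimate the alpha-quantile (Value-at-Risk) of a portfolio loss L
    by plain Monte Carlo: simulate, sort, take the empirical quantile.
    Here L = exp(0.2 Z) - 1 with Z standard Gaussian, an illustrative
    stand-in for losses produced by a simulated market model."""
    rng = random.Random(seed)
    losses = sorted(math.exp(0.2 * rng.gauss(0.0, 1.0)) - 1.0
                    for _ in range(n_samples))
    return losses[int(alpha * n_samples)]

# exact 95% quantile of exp(0.2 Z) - 1 is exp(0.2 * 1.6449) - 1 ≈ 0.3896
var95 = mc_loss_quantile(100000, 0.95)
```

The empirical quantile converges at the usual Monte Carlo rate, which is exactly why the efficiency analysis mentioned above matters for risk computations.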
The simulation of stochastic processes is not motivated by stochastic models only. Stochastic differential calculus allows one to represent solutions of certain deterministic partial differential equations in terms of probability distributions of functionals of appropriate stochastic processes. For example, elliptic and parabolic linear equations are related to classical stochastic differential equations, whereas nonlinear equations such as the Burgers and the Navier–Stokes equations are related to McKean stochastic differential equations describing the asymptotic behavior of stochastic particle systems. In view of such probabilistic representations one can obtain numerical approximations by using discretization methods for the stochastic differential systems under consideration. These methods may be more efficient than deterministic methods when the space dimension of the PDE is large or when the viscosity is small. The Omega team develops new probabilistic representations in order to propose probabilistic numerical methods for equations such as conservation laws, kinetic equations, and nonlinear Fokker–Planck equations.
Omega is interested in developing stochastic models and probabilistic numerical methods. Our present motivations come from fluid mechanics, chemical kinetics, finance and biology.
In fluid mechanics, Omega develops probabilistic methods to solve vanishing-vorticity problems and to study complex flows at the boundary, in particular their interaction with the boundary. We elaborate and analyze stochastic particle algorithms. Our expertise covers:
The convergence analysis of stochastic particle methods on theoretical test cases. In particular, we explore speed-up methods such as variance reduction techniques and time-extrapolation schemes.
The design of original schemes for applied cases. A first example concerns the micro–macro model of polymeric fluids (the FENE model). A second one concerns the Lagrangian modelling of turbulent flows and its application to combustion in two-phase flow models (a collaboration with Électricité de France).
Monte Carlo methods for the simulation of fluid particles in a fissured (and thus discontinuous) porous medium.
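The variance reduction techniques mentioned in the first item can be illustrated by a minimal antithetic-variates sketch (the integrand E[exp(Z)] is a toy example, unrelated to the team's test cases):

```python
import math
import random

def antithetic_estimate(g, n_pairs, seed=2):
    """Variance reduction by antithetic variates: average g(Z) and g(-Z)
    over standard Gaussian draws Z, so that a monotone integrand is
    paired with a negatively correlated counterpart."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        total += 0.5 * (g(z) + g(-z))
    return total / n_pairs

# E[exp(Z)] = exp(1/2) ≈ 1.6487 for Z ~ N(0, 1)
est = antithetic_estimate(math.exp, 50000)
```

For the same number of Gaussian draws, the pairing reduces the variance of the estimator whenever g is monotone, which is the simplest instance of the speed-up methods studied on test cases.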
An important part of the work of the Omega team concerns coagulation and fragmentation models.
The areas in which coagulation and fragmentation models appear are numerous: polymerization, aerosols, the cement and binding-agents industry, the copper industry (formation of copper particles), the behavior of fuel mixtures in engines, the formation of stars and planets, population dynamics, etc.
For all these applications we are led to consider kinetic equations involving coagulation and fragmentation kernels (a typical example being the kinetics of polymerization reactions). The Omega team aims to analyze these kinetic equations and to solve them numerically. By using a probabilistic approach we describe the behavior of the clusters in the model and we develop original numerical methods. Our approach allows us to understand intuitively the time evolution of the system and to answer some open questions raised by physicists and chemists. More precisely, we can compute or estimate characteristic reaction times such as the gelation time (at which an infinite-sized cluster appears), the time after which a given degree of advancement of a reaction is reached, etc.
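A toy Marcus–Lushnikov simulation makes the gelation time concrete; the multiplicative kernel below is the textbook gelling example, not necessarily one of the team's models:

```python
import random

def marcus_lushnikov_gel_time(n, seed=3):
    """Toy Marcus–Lushnikov simulation with multiplicative kernel:
    each pair of clusters of masses x, y coalesces at rate x * y / n.
    Returns the (random) time at which a cluster of macroscopic size
    (> n / 2) first appears, a finite-n proxy for the gelation time
    (which equals 1 in the mean-field limit for this kernel)."""
    rng = random.Random(seed)
    masses = [1] * n
    t = 0.0
    while True:
        m = sum(masses)                      # total mass, conserved
        s2 = sum(x * x for x in masses)
        rate = (m * m - s2) / (2.0 * n)      # sum_{i<j} x_i x_j / n
        t += rng.expovariate(rate)
        # sample an unordered pair with probability ∝ x_i * x_j
        while True:
            i = rng.choices(range(len(masses)), weights=masses)[0]
            j = rng.choices(range(len(masses)), weights=masses)[0]
            if i != j:
                break
        masses[i] += masses[j]
        masses.pop(j)
        if masses[i if j > i else i - 1] > n // 2:
            return t

gel_t = marcus_lushnikov_gel_time(400)
```

Repeating the simulation over independent runs gives a Monte Carlo estimate of the distribution of such characteristic reaction times.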
For a long time now Omega has collaborated with researchers and practitioners in various financial institutions and insurance companies. We are particularly interested in calibration problems, risk analysis (especially model risk analysis), optimal portfolio management, Monte Carlo methods for option pricing and risk analysis, and asset and liability management. We also work on the partial differential equations related to financial issues, for example the Hamilton–Jacobi–Bellman equations of stochastic control. We study existence, uniqueness, qualitative properties, and appropriate deterministic or probabilistic numerical methods. At present we pay special attention to the financial consequences induced by modelling and calibration errors on hedging and portfolio management strategies.
For a couple of years Omega has studied stochastic models in biology. In neurobiology, stochastic methods are developed to analyze stochastic resonance effects and to solve inverse problems. For example, we are concerned with the identification of an elliptic operator involved in the calibration of the magnetic permittivity from electroencephalographic measurements. This elliptic operator is in divergence form and has a discontinuous coefficient. The discontinuities make it difficult to construct a probabilistic interpretation allowing us to develop an efficient Monte Carlo method for the numerical resolution of the elliptic problem. In population dynamics, stochastic individual-based models are developed to study ecology and evolution. One specific feature under study is the phenomenon of evolutionary branching, where evolution drives a population towards states where several subpopulations, despite selection, stably coexist inside the same population. The spatial counterpart of this phenomenon, where spatial interactions drive the population to subdivide into spatially distinct clusters, is often called spatial clustering.
M. Deaconu and A. Lejay have continued to study the random walk on rectangles Monte Carlo method to solve PDEs such as the Dirichlet problem. The idea for improving these results consists in using an importance sampling technique: with this approach, the weights involved in the algorithm are easily computed. Not only does this allow us to improve the speed of the random walk on rectangles method, but also to improve the simulation of rare events. Numerical tests have been performed. We are now studying a variance reduction technique based on stochastic algorithms.
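The random walk on rectangles is close in spirit to the classical walk-on-spheres method; as an illustration of this family of algorithms, here is a walk-on-spheres sketch for the Dirichlet problem on the unit disk (a standard textbook setting, not the team's rectangle algorithm itself):

```python
import math
import random

def walk_on_spheres(rng, x, y, g, eps=1e-3):
    """One walk: jump to a uniform point on the largest circle centered
    at (x, y) contained in the unit disk, until within eps of the
    boundary; then read the boundary data g at the projected point."""
    while True:
        r = 1.0 - math.hypot(x, y)       # distance to the unit circle
        if r < eps:
            norm = math.hypot(x, y)
            return g(x / norm, y / norm)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

# Dirichlet problem on the unit disk with boundary data g(x, y) = x,
# whose harmonic extension is u(x, y) = x; estimate u(0.3, 0.2).
rng = random.Random(4)
n_walks = 20000
u_est = sum(walk_on_spheres(rng, 0.3, 0.2, lambda bx, by: bx)
            for _ in range(n_walks)) / n_walks
```

Rectangles replace the spheres when the exit law from a rectangle can be sampled, which is precisely where the easily computed importance-sampling weights come in.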
P. Étoré and A. Lejay have continued their work on the simulation of one-dimensional diffusions with discontinuous coefficients. We have shown how to construct a random walk on an arbitrary grid which approximates a diffusion whose infinitesimal generator has uniformly elliptic and bounded coefficients. Both the time increment and the future position are computed by solving one-dimensional PDEs, which can be done simply if the coefficients are approximated by constant coefficients. In the case of highly heterogeneous coefficients, a homogenization procedure may be used, and the random walk then gives the behavior of the diffusion at some mesoscopic scale. We have started to study multidimensional situations.
A. Lejay, E. Mordecki (Universidad de la República Oriental del Uruguay) and S. Torrès (Universidad de Valparaíso) have continued to study an algorithm to solve backward stochastic differential equations with jumps. The objective is to develop numerical methods for semilinear integro-partial differential equations.
A. Lejay and S. Maire are working on Monte Carlo computations of the first eigenvalue of the Laplace operator in a bounded domain, improving previously obtained results.
In collaboration with E. Gobet (ENSIMAG), S. Maire works on the proof of convergence, and on the extension, of a new algorithm based on a domain decomposition method.
All these methods are motivated by problems in physics and finance: estimation of the electrical conductivity in the brain (the MEG problem), simulation of diffusions in heterogeneous media, computation of effective coefficients for PDEs at small scales, computation of option prices, etc.
In collaboration with C. Deluigi (Université de Toulon et du Var) and S. Dumont (Université d'Amiens), S. Maire has developed a numerical method to solve the Poisson equation in a hypercube in dimension $d$, for $d$ up to 6. The algorithm is based on a previously developed sparse polynomial approximation.
In numerous applications, one chooses to model a complex dynamical phenomenon by stochastic differential equations or, more generally, by semimartingales, either because random forces excite a mechanical system, or because time-dependent uncertainties disturb a deterministic trend, or because one aims to reduce the dimension of a large-scale system by considering that some components contribute stochastically to the evolution of the system. Examples of applications respectively concern mechanical oscillators subjected to random loading, prices of financial assets, and molecular dynamics.
Of course the calibration of the model is a crucial issue. A huge literature deals with the statistics of stochastic processes, particularly of diffusion processes. However, somewhat astonishingly, it seems to us that most papers consider that the dimension of the noise is known to the observer. This hypothesis is often questionable: there is no reason to fix this dimension a priori when one observes a basket of assets, or a complex mechanical structure in a random environment. Actually we were motivated to study this question by modelling and simulation issues related to the pricing of contracts based on baskets of energy prices within our past collaboration with Gaz de France.
Jointly with J. Jacod (Université Paris 6), A. Lejay and D. Talay have tackled the question of estimating the Brownian dimension of an Itô process from the observation of one trajectory during a finite time interval. More precisely, we aim to build estimators which provide an ``explicative Brownian dimension'' $r_B$: a model driven by an $r_B$-dimensional Brownian motion satisfyingly fits the information conveyed by the observed path, whereas increasing the Brownian dimension does not allow one to fit the data any better. Stated this way, the problem is obviously ill-posed, hence our first step consisted in defining a reasonable framework in which to develop our study. We developed several estimators and obtained partial theoretical results on their convergence. We also studied these estimators numerically using simulated observations of various models.
M. Bossy, N. Champagnat, D. Talay and E. Tanré have continued their work on the modelling of stochastic resonance effects in neuronal activity. We have to face important technical difficulties due to the huge complexity of the analytical formulae describing the probability densities of particular stopping times: these random times are the bounds of the spike intervals of the electrical potential along the neurons. We are trying to develop accurate approximate formulae which would allow us to quantify the level of random noise that should be added to the internal noise in order to improve the efficiency of the neuronal activity, in the sense that the period of a periodic electrical input signal is better recognized by the neuronal system. We are also developing and studying a mode recognition algorithm in order to estimate the modes of the probability density of the spiking time. These results were presented at the NeuroComp 2006 conference.
The phenomenon of coagulation was first described by the Polish physicist Marian von Smoluchowski in 1916, in a paper on precipitation in colloidal suspensions. It applies, and is studied, in many domains such as the formation of stars and planets, the behaviour of fuels in combustion engines, the polymerisation phenomenon, etc. Our expertise in this domain is now being developed in a new direction.
More precisely, M. Deaconu and E. Tanré are trying to adapt their methodology to a problem arising in the Chilean copper industry. Copper ore is ground in crushers by using steel balls, the aim being to fragment the ore with the least amount of energy. A stochastic algorithm that evaluates the time spent in the crusher has already been constructed.
A new study is underway, aiming to construct a model which effectively illustrates the phenomenon of fragmentation and accounts for new parameters, such as the position of particles in the crusher, the crusher geometry, the shape, size and number of steel balls used as projectiles, and the damage factor. This factor takes into account the effect of low-impact collisions, which may occur before particle fracture. A balance must be found between the model's complexity and its quality in order to approximate the real system: copper industry crushers. This study is the natural continuation of the preceding works on coagulation, in a more general context that accounts for both coagulation and fragmentation. A successful outcome will allow us to optimize crusher efficiency at minimal cost.
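The flavor of such a stochastic fragmentation clock can be sketched as follows; the binary uniform splitting and all parameter values are illustrative stand-ins, not the actual crusher model:

```python
import random

def time_to_grind(total_mass, target_fraction, frag_rate, seed=6):
    """Toy fragmentation process: each particle of mass m >= 1 splits
    into two uniform pieces at rate frag_rate * m. Returns the time at
    which the mass fraction of 'fine' particles (mass < 1) first
    reaches target_fraction. Geometry, ball sizes and damage effects
    of the real crusher model are deliberately omitted."""
    rng = random.Random(seed)
    coarse = [float(total_mass)]     # particles with mass >= 1
    fine_mass = 0.0
    t = 0.0
    while fine_mass / total_mass < target_fraction:
        t += rng.expovariate(frag_rate * sum(coarse))
        # split a coarse particle chosen proportionally to its mass
        m = rng.choices(coarse, weights=coarse)[0]
        coarse.remove(m)
        u = rng.uniform(0.0, 1.0)
        for piece in (u * m, (1.0 - u) * m):
            if piece < 1.0:
                fine_mass += piece
            else:
                coarse.append(piece)
    return t

t_grind = time_to_grind(100.0, 0.5, 1.0)
```

Averaging this random time over independent runs is the kind of quantity the stochastic algorithm evaluates for the time spent in the crusher.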
E. Tanré participated in the supervision of the Master's internship of Tomás Neumann (Pontificia Universidad Católica de Chile) on this subject.
In his thesis F. Bernardin studied an Euler numerical scheme for multivalued stochastic differential equations (MSDEs). The weak convergence rate of this scheme has been studied this year by F. Bernardin, M. Bossy and D. Talay. The probabilistic representation of partial differential inclusions in terms of MSDEs has been established in the framework of viscosity solutions introduced by Crandall and Lions. In the one-dimensional case the rate of convergence has been obtained. We now aim to extend our results to dimensions larger than 1.
In her Ph.D. thesis, A. Ganz studies stochastic particle numerical methods to approximate the limit, as $t$ goes to infinity, of a McKean–Vlasov partial differential equation. We suppose that the drift $b$ satisfies the monotonicity condition
$$(x - y)\cdot\big(b(x) - b(y)\big) \le -\alpha\,|x - y|^2 \quad\text{for some } \alpha > 0,$$
and that the nonlinear interaction kernel is a small perturbation of $b$. We have established the existence and uniqueness of the equilibrium measure and the convergence rate to equilibrium. We adapt earlier proofs, written for uniformly convex confinement and interaction potentials, as well as their extension to non-uniformly convex drifts. As in these references, we obtain existence and uniqueness of the equilibrium probability measure of the associated stochastic process $X_t$, and an exponential convergence rate to the stationary measure in the 2-Wasserstein distance. We also obtain estimates on the error made when the equilibrium measure is approximated by the empirical measure of the particle system at a large time $T$. Finally, in dimension 1, we use a previously introduced method to estimate the convergence rate of the Euler-scheme discretization of the particle system to the distribution function of the equilibrium measure. Our error bound is proportional to the discretization time step, decreases with the number of particles, and decays exponentially in $T$.
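To fix ideas, a confining drift satisfying the monotonicity condition, perturbed by a small linear interaction, can be simulated as follows (the coefficients and the linear form of the interaction are illustrative choices, not those of the thesis):

```python
import math
import random

def particle_system_equilibrium(n_particles, n_steps, h, delta, seed=7):
    """Euler scheme for the interacting particle system
      dX^i = -X^i dt + (delta / N) * sum_j (X^j - X^i) dt + dW^i,
    whose confining drift b(x) = -x satisfies the monotonicity
    condition (x - y)(b(x) - b(y)) = -(x - y)^2, perturbed by a small
    linear interaction of strength delta. Returns the empirical second
    moment at the final time, an estimate of the equilibrium variance."""
    rng = random.Random(seed)
    xs = [rng.uniform(-2.0, 2.0) for _ in range(n_particles)]
    sqrt_h = math.sqrt(h)
    for _ in range(n_steps):
        mean = sum(xs) / n_particles
        xs = [x + (-x + delta * (mean - x)) * h
                + sqrt_h * rng.gauss(0.0, 1.0)
              for x in xs]
    return sum(x * x for x in xs) / n_particles

# for this linear case the equilibrium variance is 1/(2(1 + delta))
second_moment = particle_system_equilibrium(1000, 1500, 0.02, 0.1)
```

The three error sources discussed above are visible here: the time step $h$, the number of particles, and the finite horizon $T = h \times$ (number of steps).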
Under the supervision of D. Talay and M. Bossy, M. Cissé studies the viscosity solution of a semilinear parabolic partial differential equation with homogeneous Neumann conditions at the boundary of a smooth convex domain. He uses the connection between this equation and a forward-backward stochastic differential equation with reflection in the forward component. The differentiability of the reflected forward SDE, in the sense of distributions, with respect to the initial data is deduced from results by Bouleau and Hirsch on the absolute continuity of probability measures.
During his postdoc in Berlin, N. Champagnat started to work with S. Roelly (Universität Potsdam) on critical or subcritical multitype Dawson–Watanabe (MDW) superprocesses. The motivation was to study the effect of conditioning on the non-extinction of the population (or of only one type) on each subpopulation of fixed type. This work is now progressing in the following directions. Using the cumulant semigroup characterization of the process, we computed the law of the process conditioned never to get extinct (the $Q$-process), gave its interpretation as a MDW process with immigration, and obtained the limit law of the mass of this process, first in the case where the mutation matrix is irreducible, and second for several specific 2-type reducible mutation matrices. The irreducible case is similar to the monotype one, whereas the reducible cases show a strong dependence of the $Q$-process and its limit law on the type conditioned to survive forever. We were in particular able to identify which subpopulation explodes or gets extinct according to the conditioning.
We are also studying the effect of conditioning on survival at time $t+\theta$ on the law of the process at time $t$, for finite $\theta$. One interesting new feature is that, in all cases (irreducible and reducible), one may permute the limits $t\to+\infty$ and $\theta\to+\infty$ to obtain the stationary distribution of the $Q$-process.
This preliminary work should allow us to study the effect of conditioning on non-extinction for a 2-type population with logistic competition. In general, such a conditioning leads to the survival of the ``best-fitted'' type, but we expect that, in mutualistic cases, coexistence may happen, providing another approach to evolutionary branching.
Under the assumptions of rare mutations and large population, N. Champagnat obtained a microscopic interpretation of a jump process of evolution in which evolution proceeds by a sequence of mutant invasions in the population, in the specific case where only one type can survive at any time. This process, called the ``trait substitution sequence'' (TSS) of adaptive dynamics, is the basis of the theory of adaptive dynamics. This theory provides in particular an
interpretation of the phenomenon of evolutionary branching, where the distribution of the phenotypic traits under consideration in the population, initially concentrated around a single value, is driven by selection to divide into two (or more) clusters corresponding to subpopulations with different phenotypes, that stably coexist and are still in competition.
In collaboration with S. Méléard (École Polytechnique, Palaiseau), we have started to study a similar scaling limit of the microscopic interacting particle system (a birth-death-mutation-competition model) in the case where coexistence is possible. Our goal is to study evolutionary branching through this model. In particular, we aim at mathematically confirming (or refuting) the branching criterion proposed by biologists, at distinguishing the cases where coexistence is possible forever from the cases where one branch gets extinct, and at studying the behavior of the branching time when the size of the mutation jumps is small.
A technical difficulty raised by this question is the study of the TSS process after the first branching time, where two types coexist in the population. Each jump in the TSS process then corresponds to the invasion of a mutant in such a population. In the scaling limit of the particle system, the competition phase that takes place after the invasion of the mutant is governed by a 3-type Lotka–Volterra system of competitive form. A collaboration with M. Benaïm (Université de Neuchâtel) has been initiated to search for criteria allowing us to determine the outcome of the competition (i.e. which type(s) eventually survive) when a mutant type invades a stable 2-type population.
N. Champagnat and A. Lambert (Laboratoire d'écologie, ENS Paris) have recently obtained the counterpart of the TSS of the previous section when the population size is finite. In this case, the invasion of a mutant in the population corresponds to its fixation (i.e. the event where the mutant type is the only surviving type). A diffusion process has also been obtained when the size of the jumps in the TSS vanishes, together with analytical expressions of its coefficients as series. From a biological viewpoint, this diffusion process quantifies the interplay between genetic drift and directional selection, and allows us to interpret the phenomenon of punctualism, where the evolution of the population is an alternation of long phases of stasis (constant state) and short phases of quick evolution, as a change of basin of attraction for the diffusion process.
We are now working on the numerical computation of the coefficients of the diffusion, in order to simulate it for a bistable model of the evolution of virulence in viruses that has already been studied in a deterministic setting. Our mathematical computation also allows us to decompose the fixation probability of a nearly neutral mutant type (i.e. one for which the mutation has very small effects on the dynamics' parameters) into five fundamental components that correspond to the mutation of specific combinations of the parameters of the model. The comparison of these coefficients allows us to determine which mutant type is more likely to invade the population. This problem is related to the biological notion of robustness of the population. We also plan to study the effect of the large-population limit (in the underlying particle system) on the diffusion process. In particular, the diffusion part converges to zero, and the deterministic limit obtained has to be compared with the (deterministic) ``canonical equation of adaptive dynamics'', which has been obtained in an infinite population setting.
N. Champagnat and S. Méléard (École Polytechnique, Palaiseau) have studied some large-population scalings of individual-based ecological birth-death-mutation-competition models where each particle moves in the (physical) space according to a diffusion reflected at the boundary of a fixed domain, and in the phenotypic space by mutations. Particular care is given to the spatial interaction range, in order to recover integro-PDE models with local spatial interactions proposed by Desvillettes et al. Several examples are given, illustrating spatial and phenotypic clustering and the importance of such models, which combine space and evolution, for the study of invasion phenomena.
A. Rousseau pursued his work on the primitive equations of the atmosphere and the ocean in the absence of viscosity. In the presence of viscosity, the mathematical theory was extensively studied in the 1990s by many authors. In the absence of viscosity, little progress has been made since the pioneering papers of the 1970s.
In earlier works, A. Rousseau collaborated with Roger Temam (Indiana University) and Joe Tribbia (NCAR) to investigate these equations in dimension 2. Numerical computations on the 2D model have confirmed the well-posedness of the so-called open boundary conditions.
The 3D model is now under investigation, both from the theoretical and numerical viewpoints.
This section presents the results obtained by Omega on the modeling of turbulent flows. It could actually be part of the previous section on stochastic modeling and applications; we have chosen to present this work in a distinct, coherent section.
In the statistical approach to the Navier–Stokes equations, the Eulerian properties of the fluid (such as velocity, pressure and other fundamental quantities) are supposed to depend on the possible realizations $\omega$. In the incompressible case, in order to compute the averaged velocity, one needs to model the equation of the Reynolds stress. A direct modeling is, for example, the so-called $k$-$\varepsilon$ turbulence model. An alternative approach consists in describing the flow through a Lagrangian stochastic model whose averaged properties are linked to those of the Eulerian fields by conditional means with respect to the particle position. In the computational fluid dynamics literature, those models are referred to as stochastic Lagrangian models. For example, the Simplified Langevin model, which characterizes the particle positions and velocities $(X_t, U_t)$ in the case of a homogeneous turbulent flow, is defined as
$$dX_t = U_t\,dt,\qquad
dU_t = -\nabla_x\langle P\rangle(t, X_t)\,dt
- \Big(\tfrac{1}{2} + \tfrac{3}{4}C_0\Big)\frac{\varepsilon(t, X_t)}{k(t, X_t)}\,
\big(U_t - \mathbb{E}[U_t \mid X_t]\big)\,dt
+ \sqrt{C_0\,\varepsilon(t, X_t)}\,dW_t,$$
where $C_0$, $\varepsilon$ and $k(t,x)$ are positive real quantities supposed to be known, and $W$ is a Brownian motion. The pressure gradient is treated as the solution of a Poisson equation in order to satisfy the constant mass-density condition (i.e. the particle positions are uniformly distributed) and the divergence-free condition. Moreover, in the presence of physical boundaries, the model is subjected to a limit condition at the boundary. The corresponding discrete algorithm is reported below.
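In the simplified setting of constant coefficients and no pressure term, a particle discretization of such a Langevin-type model can be sketched as follows (all numerical values are illustrative, and the conditional mean given the position is replaced, for this spatially homogeneous toy case, by the global empirical mean velocity):

```python
import math
import random

def simplified_langevin(n_particles, n_steps, h, c0, eps, k, seed=8):
    """Euler scheme for a Simplified-Langevin-type particle system with
    constant eps and k and no pressure gradient:
      X_{t+h} = X_t + U_t h,
      U_{t+h} = U_t - (1/2 + 3 C0/4)(eps/k)(U_t - <U>) h
                    + sqrt(C0 eps h) N(0, 1),
    where <U> is the empirical mean velocity over all particles."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    us = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    relax = (0.5 + 0.75 * c0) * eps / k
    noise = math.sqrt(c0 * eps * h)
    for _ in range(n_steps):
        u_mean = sum(us) / n_particles
        us = [u - relax * (u - u_mean) * h + noise * rng.gauss(0.0, 1.0)
              for u in us]
        xs = [x + u * h for x, u in zip(xs, us)]
    return xs, us

xs, us = simplified_langevin(1000, 1000, 0.01, c0=2.1, eps=1.0, k=1.0)
```

The relaxation of velocities towards their conditional mean is the mechanism the well-posedness and particle-approximation studies below address in the McKean–Vlasov setting.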
So far, we have considered a Lagrangian model of the above type in which the pressure gradient is removed and $\varepsilon$ and $k$ are supposed to be constant. We now follow two new directions.
M. Bossy and J.F. Jabir study the well-posedness of a simplified version of the above equation (in particular, without the pressure term). The conditional expectation involved in the equation implies that it is a McKean–Vlasov equation with singular kernels. We aim to study the corresponding smoothed interacting particle systems. Supposing that the drift term is continuous and bounded, we proved a propagation-of-chaos result for some particle systems, which allows us to construct a solution. Uniqueness was also proved under the same hypothesis.
Motivated by a downscaling application, we constructed a confined Langevin model which allows us to take boundary conditions into account within a resolution cell. The basic idea is to add a ``confinement'' term to the velocity dynamics when the particle hits the boundary. Under weak hypotheses on the initial condition (in order to avoid accumulations of jump times), (strong) existence and uniqueness have been established in the simplified case of classical SDEs with a constant diffusion. These results were also extended to the class of McKean–Vlasov equations with smooth kernels. Moreover O. Aboura, M. Bossy, and F. Bernardin have constructed a numerical Euler scheme and have studied its rate of convergence.
We combine a ``fractional step'' method and a ``particle-in-cell'' method to simulate the time evolution of the Lagrangian variables of the flow described by the above equation.
The most noticeable problem arises from the mass-density conservation between two time steps within a given cell. F. Bernardin and A. Rousseau have built their own numerical tool, which proceeds by correcting the positions of the particles after a prediction step on their velocities. The new field has the same distribution as before the velocity prediction step. Since the initial positions are uniformly distributed, the aim is to move the particles in a box such that:
their density is uniform (i.e. the number of particles is the same in each cell of a considered mesh);
the ``transport cost'' related to the particle transfer is minimal.
This problem is not classical at all and is referred to in the literature as a problem of discrete optimal transportation, which is known to be nonlinear and numerically very hard to solve in dimension 3, whereas the corresponding one-dimensional problem is linear and easy to handle. F. Bernardin and A. Rousseau introduced a 3D method based on the simple one-dimensional case, and refer to recent works to justify their approach.
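In dimension 1 the discrete optimal transportation problem indeed reduces to a sort, which is the key fact behind the 3D method; a minimal sketch:

```python
def transport_1d(positions, targets):
    """One-dimensional discrete optimal transportation: for a quadratic
    (or any convex) cost, the optimal assignment between two sets of n
    points matches them in sorted order, so the 1D problem is solved in
    O(n log n), in contrast with the genuinely hard 3D problem."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    sorted_targets = sorted(targets)
    new_positions = [0.0] * len(positions)
    for rank, i in enumerate(order):
        new_positions[i] = sorted_targets[rank]
    return new_positions

# particles at arbitrary positions are moved onto a uniform grid:
# the particle of smallest position goes to the smallest grid point, etc.
moved = transport_1d([0.9, 0.1, 0.5], [0.0, 0.5, 1.0])
```

Here the rank-matching realizes both requirements above: the target positions are uniform, and no cheaper rearrangement exists for a convex transport cost.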
The results described above were presented at the 31st Conference on Stochastic Processes and their Applications in Paris (by J.F. Jabir and A. Rousseau, July 2006), at the conference Numerical and Stochastic Models (by M. Bossy, October 2006) and at the Journées EDP Rhône-Alpes (by F. Bernardin and A. Rousseau). A paper has been accepted at the International Conference on Clean Electrical Power: Renewable Energy Resources Impact, 2007.
In collaboration with Rajna Gibson (Zürich University), Christophette Blanchet and Sylvain Rubenthaler (Université de Nice Sophia-Antipolis), B. de Saporta, D. Talay and E. Tanré elaborate an appropriate mathematical framework to analyze the financial performance of trading techniques often used by traders. This research is funded by NCCR FINRISK (Switzerland) and is part of its project ``Conceptual Issues in Financial Risk Management''.
In the financial industry, there are three main approaches to investment: the fundamental approach, where strategies are based on fundamental economic principles; the technical analysis approach, where strategies are based on past price behavior; and the mathematical approach, where strategies are based on mathematical models and studies. The main advantage of technical analysis is that it avoids model specification, and thus calibration problems, misspecification risks, etc. On the other hand, technical analysis methods have limited theoretical justification, and therefore no one can assert that they are riskless, or even efficient.
Consider an unstable financial economy. It is impossible to specify and calibrate models which can capture all the sources of instability over a long time interval. Thus it is natural to compare the performance obtained by using erroneously calibrated mathematical models with the performance obtained by technical analysis techniques. To our knowledge, this question has not been investigated in the literature.
We deal with the following model for a financial market in which two assets are traded continuously. The first one is an asset without systematic risk, called bond (or bank account), whose price at time t evolves according to the equation

dS_{t}^{0} = r S_{t}^{0} dt;  S_{0}^{0} = 1.

The remaining asset is subject to systematic risk; we shall refer to it as a stock and model the evolution of its price at time t by the linear stochastic differential equation

dS_{t} = σ S_{t} dB_{t} + μ(ε_{t}) S_{t} dt;  S_{0} = s_{0}.
(B_{t}) is a standard one-dimensional Brownian motion on a given probability space. The drift is driven by a random process (ε_{t}) taking finitely many values; the time lengths between two consecutive changes of the drift are independent exponential random variables. The difficulty here is that the trader does not observe the times of change exactly.
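A simulation of this regime-switching price model can be sketched with a plain Euler scheme; the function below and its parameter values are illustrative assumptions, not the calibrated model of the study.

```python
import numpy as np

def simulate_stock(s0, sigma, drifts, lam, T, dt, rng=None):
    """Euler simulation of dS_t = mu(eps_t) S_t dt + sigma S_t dB_t, where
    the (hidden) drift switches between the given values at the arrival
    times of a Poisson process with intensity lam.
    Placeholder parameters; illustrative only."""
    rng = np.random.default_rng(rng)
    n = round(T / dt)
    s = np.empty(n + 1)
    s[0] = s0
    regime = 0
    next_jump = rng.exponential(1.0 / lam)  # exponential inter-jump times
    t = 0.0
    for k in range(n):
        t += dt
        if t >= next_jump:  # drift change, unobserved by the trader
            regime = (regime + 1) % len(drifts)
            next_jump += rng.exponential(1.0 / lam)
        ds = drifts[regime] * s[k] * dt \
            + sigma * s[k] * np.sqrt(dt) * rng.standard_normal()
        s[k + 1] = s[k] + ds
    return s
```

The trader sees only the path s, not the regime variable, which is exactly the filtering difficulty mentioned above.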
This year we extended our earlier results , by taking transaction costs into account. At any time, the trader is allowed to invest all his wealth in the bond or in the stock. The trader's wealth evolves according to the equation
where δ_{t} ∈ {0; 1} is the strategy and g_{01} and g_{10} are the buying and selling costs. This leads to a nonclassical stochastic control problem. We have proved that the value function of this control problem is continuous, satisfies a dynamic programming principle, and is the unique viscosity solution of a Hamilton–Jacobi–Bellman (HJB) equation. We developed an algorithm to compute the value function and a suboptimal trading strategy, and compared its performance to that of the moving average strategy. When transaction costs are high, it is difficult for the technical analyst to outperform miscalibrated mathematical strategies. We are now working on the numerical analysis of the HJB equation; we aim to make precise the convergence rate of discretization methods.
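For reference, the moving average rule used as the technical analyst's benchmark can be sketched as follows; the window length and the function name are our illustrative choices, not those of the study.

```python
import numpy as np

def moving_average_signal(prices, window):
    """Technical-analysis benchmark: hold the stock (signal 1) when the
    price is above its trailing moving average, the bond (signal 0)
    otherwise. The window length is an arbitrary illustrative choice."""
    prices = np.asarray(prices, dtype=float)
    signal = np.zeros(len(prices), dtype=int)
    for k in range(window, len(prices)):
        ma = prices[k - window:k].mean()
        signal[k] = 1 if prices[k] > ma else 0
    return signal
```

Such a rule uses only past prices, which is why it is immune to model misspecification but has no optimality guarantee.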
The motivation and applications of this research concern American option pricing and the numerical resolution of the variational inequalities characterizing American option prices.
The probabilistic interpretation of variational inequalities in terms of backward stochastic differential equations is a useful tool to construct proper localized problems. M. Cissé continues his Ph.D. thesis in this direction. A part of his research is done jointly with M. El Otmani, a Ph.D. student from the University Cadi Ayyad of Marrakech. Let Z and Z̃ denote the gradients of the backward equations associated with a forward SDE without reflection and with reflection, respectively. Following the work of Ma and Zhang , which gives the interpretation of Z as the derivative of the viscosity solution of variational inequalities, we obtain the representation of Z̃ as the derivative of the viscosity solution of localized variational inequalities. The localization error of the variational inequalities with artificial Neumann boundary conditions is then controlled by the difference between Z and Z̃.
It is well known that many stochastic control problems in financial economics, especially portfolio optimization problems, are related to a particular kind of partial differential equation, the Hamilton–Jacobi–Bellman (HJB) equation. Unfortunately, this kind of equation can be solved explicitly only in a few cases. Therefore, resorting to numerical approaches is inevitable. However, implementing a numerical method for an HJB equation requires knowing the behavior of the solution on the finite boundaries of the computational domain, which is not provided explicitly in practice since most HJB equations are posed on unbounded domains. Generally speaking, there is no way to find the precise boundary conditions for an HJB equation; it is thus worth raising two questions:
How can one find a suitable approximation of the boundary value for solving the corresponding HJB equation numerically?
What impact do the boundary errors have on the global error of the solution of a HJB equation?
As an attempt to answer these two questions, J. Huang, in his thesis, is conducting an error analysis for the HJB equation derived from the utility maximization problem in finite horizon. Thanks to the probabilistic interpretation of the HJB equation, we find that the global error of the solution is controlled by the boundary errors and by the maximal probability that the portfolio reaches a given target. The problem involving this probability has been studied by several researchers, notably Browne , and Spivak and Cvitanić . Building on their results, we have succeeded in finding an explicit estimate of the global error. Similar estimates can also be given when the market coefficients are not deterministic but satisfy some special conditions, or when the portfolio strategies (stochastic controls) are assumed to be bounded.
Concerning the first question mentioned above, there is no general methodology to find suitable approximations of boundary values. Nevertheless, since the utility maximization problem can also be solved by the martingale method (or the duality approach), we might find a desirable approximation in this context if we are able to find the root of an implicit function and to compute the related expectations, which are the main difficulties in implementing the martingale method. Root-seeking for an unknown (implicit or error-corrupted) function is precisely what the Robbins–Monro algorithm addresses. We are therefore trying to use this algorithm, or more precisely the truncated version proposed by Chen et al. , together with the Monte Carlo method, to search for the root involved in the martingale method, which, we hope, might result in a desirable approximation of the boundary value. Further analysis is in progress.
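A minimal sketch of the classical (untruncated) Robbins–Monro recursion for such a noisy root-seeking problem; this is a generic illustration under standard step-size conditions, not the truncated algorithm of Chen et al., and all names are ours.

```python
import numpy as np

def robbins_monro(noisy_f, target, x0, n_iter=5000, gamma=1.0, rng=None):
    """Robbins-Monro stochastic approximation: find x with E[noisy_f(x)] = target
    when only noisy evaluations are available. The step sizes gamma/n satisfy
    the classical conditions (their sum diverges, the sum of squares converges)."""
    rng = np.random.default_rng(rng)
    x = x0
    for n in range(1, n_iter + 1):
        # move against the observed error, with a decreasing step
        x -= (gamma / n) * (noisy_f(x, rng) - target)
    return x
```

For instance, with noisy evaluations of f(x) = 2x corrupted by Gaussian noise and target 1, the recursion converges to the root x = 0.5.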
P. Protter (Cornell University) and D. Talay are addressing the following question. We have the possibility of trading in a risky asset (which we refer to as a stock) with both liquidity and transaction costs. We further assume that the stock price follows a diffusion, and that the stock is highly liquid. We limit our trading strategies to those which change our holdings only by jumps (i.e., discrete trading strategies); we begin with $0 and 0 shares, and we end with a liquidated portfolio (that is, we no longer hold any shares of the stock) on or before a predetermined ending time T. The question then is: given the structure of the liquidity and transaction costs, what is the optimal trading strategy which maximizes our gains? This amounts to maximizing the value of our risk-free savings account. This problem can be solved in this context if it is formulated as a nonclassical problem in stochastic optimal control. We prove existence and uniqueness results for the related Hamilton–Jacobi–Bellman equation.
During his internship supervised by E. Tanré, A. Richou worked on the discretization of backward stochastic differential equations (BSDEs), on optimal quantization methods and on the k-means algorithm. He studied their theoretical convergence and the practical problems of implementation. This allowed him to propose a time-space discretization scheme for some BSDEs and the corresponding PDEs. The theoretical convergence of this scheme is studied under various regularity assumptions. The scheme has been tested on American option pricing test cases.
In collaboration with Pierre Patie (University of Bern, Switzerland), M. Cissé and E. Tanré solve explicitly the optimal stopping problem with random discounting and an additive functional as cost of observations for a regular linear diffusion. This generalizes results by Beibel and Lerche , . The approach relies on a combination of Doob's h-transform, time changes and martingale techniques.
In collaboration with A. Volpi (ESSTIN), B. Roynette and P. Vallois have worked on the ruin time of an insurance company, whose benefits are modeled by a Lévy process. The authors have considered the joint distribution of the ruin time, the overshoot and the undershoot.
A revised version of this study was submitted to ESAIM Probability and Statistics in December 2006.
In this section we present our results on issues which are more abstract than the preceding ones and which, at first glance, might appear disconnected from our applied studies. However, most of them are originally motivated by modelling problems, or by technical difficulties to overcome in order to analyze, in full generality, stochastic numerical methods or properties of stochastic models.
In collaboration with P.L. Lions (Collège de France), M. Bossy, S. Maire, D. Talay and E. Tanré study the long time behavior of viscosity solutions of fully nonlinear stochastic partial differential equations. For particular equations and particular initial conditions, the asymptotic law can be made fully explicit. Another direction of research concerns numerical methods to approximate these solutions.
L. Quer i Sardanyons worked in collaboration with S. Tindel (Université Nancy 1) on a ``rough path'' approach to the one-dimensional wave equation with a general fractional Brownian noise . His construction is pathwise and allows one to consider a large class of noises, without being restricted to the Gaussian framework.
The article by A. Lejay presents the theory of rough paths from a new point of view, combining the approach of D. Feyel and A. de la Pradelle with the recent developments of this theory . This presentation makes clear that rough path integrals are a natural extension of the notion of integrals and differential equations driven by smooth functions, so that the required algebraic structures are gently introduced.
After P. Vallois' visit to Turku in August 2004, P. Vallois and P. Salminen have continued to study the joint distribution of the max-decrease and the max-increase of a Brownian motion. One motivation comes from mathematical finance, where the maximum decrease, also called maximum drawdown (MDD), is used to quantify the riskiness of a stock or any other asset. Related measures used hereby are, e.g., the recovery time from the MDD and the duration of the MDD. We are able to answer the question raised by Gabor Szekely about the covariance between the max-decrease and the max-increase at a fixed time. This is due to the fact that we are able to determine the joint distribution of the max-decrease and the max-increase stopped at an exponential time independent of the Brownian motion. This study has been accepted for publication.
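For the record, the maximum drawdown of a discretely observed path can be computed in a single pass over the data; a small illustrative sketch (our own, with a hypothetical function name):

```python
def max_drawdown(prices):
    """Maximum drawdown (max-decrease): the largest drop from a running
    maximum observed along the price path."""
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)       # running maximum so far
        mdd = max(mdd, peak - p)  # current drawdown from that maximum
    return mdd
```

The max-increase is obtained symmetrically, by tracking a running minimum instead.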
Let (P_{x}) denote the family of Wiener measures defined on the canonical space. Let (F_{t}) be an adapted, non-negative process such that 0 < E[F_{t}] < ∞ for any t ≥ 0. By definition, a penalisation principle holds if, under suitable assumptions on F, there exists a positive martingale (M_{t}^{F}), starting at 1, such that, for every s ≥ 0 and every bounded F_{s}-measurable functional Γ_{s},

lim_{t → ∞} E[Γ_{s} F_{t}] / E[F_{t}] = E[Γ_{s} M_{s}^{F}].
In a series of papers , , , , , (see also ), we have considered many situations where a penalisation principle holds true. Among them we can mention:
F_{t} = f(S_{t}), where f is a Borel function and S_{t} = sup_{u ≤ t} X_{u} denotes the one-sided maximum of the canonical process (X_{t});
F_{t} = f(L_{t}), where (L_{t}) denotes the local time at level 0 of (X_{t}).
J. Bertoin (Université Paris 6), M. Yor (Université Paris 6) and B. Roynette describe some connections between two functional spaces: the space of (sub)critical branching mechanisms and the space of Bernstein functions.
Started last year, our joint collaboration with the Laboratoire de Météorologie Dynamique (Université Paris 6, École Polytechnique, École Normale Supérieure) is funded by the French Environment and Energy Management Agency (ADEME) and concerns the modeling and simulation of local wind energy resources. We collaborate with Éric Peirano (ADEME), Philippe Drobinski and Tamara Salameh (LMD). S. Filali (who arrived on September 1st) concentrates her Post-Doc research on this topic.
This year we continued our work on numerical simulations of turbulent flows using Lagrangian stochastic methods in the context of weather prediction. Theoretical results (see ) were obtained and put into practice in the implementation of a fractional-step algorithm described in the report of the present contract (published in December 2006, see ).
We are planning to run the solver MM5 with two different resolutions. The first mesh will be coarse, and will be used to feed our stochastic model. The second mesh will be finer, and we hope that our simulations will provide comparable results at a smaller computational cost (CPU time) than a classical refinement of MM5 at the same resolution.
Finally, we will use observations from measurement campaigns, namely the FETCH campaign that took place in 1998 in Southern France, to validate our simulations.
We collaborate with the Laboratoire de Météorologie Dynamique within the ``MeteoStoch'' project funded by the French Ministry of Research through the ACI ``Nouvelles Interfaces des Mathématiques'' program. J.F. Jabir's Ph.D. thesis is funded by a fellowship attached to this project. Our aims are:
we aim to construct seasonal and local models by using stochastic processes, to qualitatively analyze these models (do they fit typical behaviors of atmospheric variables?), to develop calibration and simulation methods, to analyze the convergence rates of these methods, etc.
we aim to study the effects of our stochastic models on the pricing of various financial assets and on various risk measures for financial institutions exposed to climatic risks.
Last year, we developed a stochastic seasonal weather system coupling the local daily temperature and rainfall. Using 11 years of daily observations from the Météo-France database, the relevant parameters of the model have been calibrated. Our first results seem to confirm the validity of our theoretical assumptions and to capture some specific climate evolution during the season. This was the subject of a report for an EDF contract.
This year, we were more particularly concerned with financial aspects: as in Imkeller , we considered an agent trading a financial asset subject to climate risk due to temperature fluctuations. In the case of an exponential utility function, the optimal strategy and the contract value can be obtained explicitly by solving a Hamilton–Jacobi–Bellman equation. Our method remains accurate in the case of a diversified position where a climate derivative is traded simultaneously (a Heating Degree Days contract, for example). The sensitivity of the solution to the weather parameters was also tested.
We collaborate with the OASIS team within the ANR project entitled ``GCPMF'' funded by the ANR Research Program ``Calcul Intensif et Grilles de Calcul 2005''.
Financial applications require solving computations of such a large size that they cannot be tackled by conventional PCs. A typical example is the risk analysis evaluation periodically run by financial institutions (such as VaR – Value at Risk – and market risk measures: greeks, duration, beta, ...). Parallelism is already used in this financial context, but its usage on computing Grids is far from being mastered.
The aim of this ANR program is to highlight the potential of parallel techniques applied to mathematical finance computing on Grid infrastructures. The consortium that conducts this project includes ten participants from academic laboratories in computer science and mathematics, banks, and IT companies.
This year, in collaboration with Françoise Baude, Ludovic Henrio and Viet Dung Doan, from the OASIS team, and with Stéphane Vialle and his team from Supélec–Metz, we have designed and implemented a Grid software architecture. We have applied this architecture to the pricing of European options by Monte Carlo methods. The specification and implementation of the pricing of American options using the Longstaff–Schwartz algorithm is part of our current work. Part of this collaborative work is presented in .
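The elementary task that gets distributed over the Grid is the Monte Carlo pricing of an option; a sequential sketch for a European call under Black–Scholes dynamics (our illustration with placeholder parameters, not the project's implementation):

```python
import math
import numpy as np

def mc_european_call(s0, strike, r, sigma, T, n_paths, rng=None):
    """Plain Monte Carlo price of a European call under Black-Scholes:
    S_T = s0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), payoff discounted
    at rate r. Returns the estimate and its standard error."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    payoff = np.exp(-r * T) * np.maximum(st - strike, 0.0)
    # standard error decreases like 1/sqrt(n_paths)
    return payoff.mean(), payoff.std(ddof=1) / math.sqrt(n_paths)
```

Since the paths are independent, the loop over `n_paths` parallelizes trivially across Grid nodes, with a final reduction of the partial means.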
The general goal of this recently started research project is the development and analysis of new stochastic models of evolution which take into account the interactions and the diversity of scales in evolution. The partners (probabilists and evolutionary biologists mainly in Paris, Marseille and Grenoble) are exploring four research directions:
Evolution at the molecular scale: new models of the evolution of genes, taking into account the interactions between sites and the main factors of global changes in the genome (gene duplication, transfer, ...).
Adaptive evolution: macroscopic models of adaptive evolution that are deduced from the microscopic, individual scale and from genes to the organism.
Shape of random trees: random tree models for molecular evolution or for species evolution, and the mathematical tools to compare them in order to analyse the evolutionary relations between populations or species.
Coalescence: coalescent processes coding for the evolution of a group of genotypes inside a large population, allowing one to study the polymorphism when dependence between individuals and various scales are taken into account.
This research involves expertise on branching processes, random trees, coalescent processes, superprocesses, interacting particle systems and large deviations theory. A strong interest is brought to algorithmic implementations, numerical analysis and applications to data processing.
Within the ACI ``Structured Populations'', which ends at the end of 2006, S. Méléard (École Polytechnique, Palaiseau), L. Desvillettes (ENS Cachan), R. Ferrière, A. Lambert (both at the ``laboratoire d'écologie'', ENS Paris), C. Tran Viet (Université Paris 10) and N. Champagnat develop and study mathematical models of the dynamics of evolving populations. In these populations, the individuals are characterized by a trait (phenotype), but the population can also be spatially, sexually or age-structured. The phenomenon of selection results from ecological interactions between individuals, which explains the nonlinearity of the models. Two main aspects have been developed: first, the microscopic interpretation of macroscopic models, such as integro-differential equations or classical approximate biological models of evolution, from an ecologically realistic interacting particle system; second, the large time behaviour of the macroscopic models. Three biological questions have been studied: first, the interplay between invasion and evolution for (sexual or asexual) populations subject to migration; second, the evolution of cooperation strategies; third, the evolution of age-structured populations.
The team Omega participates in the ``Groupe de Recherche GRIP'' on stochastic interacting particles and in the European Network AMAMEF on Advanced Mathematical Methods for Finance. D. Talay serves as a member of the scientific committees of these two research networks.
A. Lejay is responsible for the subproject ``Monte Carlo Methods for Discontinuous Media'' within the project ``Particle Methods'' of the ``Groupe de Recherche MOMAS'' funded by ANDRA, BRGM, CEA, EDF, CNRS and IRSN.
D. Talay serves as an Associate Editor of: Stochastic Processes and their Applications, Annals of Applied Probability, ESAIM Probability and Statistics, Stochastics and Dynamics, SIAM Journal on Numerical Analysis, Mathematics of Computation, Monte Carlo Methods and Applications, Oxford IMA Journal of Numerical Analysis, Stochastic Environmental Research and Risk Assessment.
M. Bossy (up to June) and D. Talay (from June) served as members of the board of the French Society of Applied Mathematics (SMAI).
D. Talay is the new President of SMAI.
D. Talay serves as a member of the Committee for junior permanent research positions at Université Bordeaux 1.
D. Talay participated in the INRIA delegation which visited the Croucher Foundation and the City University of Hong Kong last spring.
M. Bossy serves as a member of the Scientific Committee of the École Doctorale ``Sciences Fondamentales et Appliquées'' of the University of Nice Sophia Antipolis.
M. Bossy serves as a member of the ``Suivi Doctoral'' Committee, of the ``Cours et Colloques'' Committee and of the scientific Committee for the ``COopérations LOcales de Recherche'' of INRIA Sophia Antipolis.
N. Champagnat is the administrator, with C. Tran Viet (Université de Nanterre), of the web site of the ANR MAEV.
M. Deaconu serves as a member of the Scientific Committee of the MAS group (Probability and Statistics) within SMAI.
M. Deaconu is a permanent reviewer for the Mathematical Reviews.
M. Deaconu serves as a member of the International Relations Work Group of COST at INRIA.
M. Deaconu serves as a member of the ``Comité des Projets'' of LORIA and INRIA Lorraine and of the ``COMIPERS – Commission Ingénieurs'' of INRIA Lorraine.
M. Deaconu serves as a member of the ``Conseil du laboratoire'' of IECN and of the ``Commission des spécialistes'' of the Department of Mathematics of the Henri Poincaré University.
A. Lejay serves as a member of the ``Commission de Spécialistes'' of Université Louis Pasteur in Strasbourg.
A. Lejay serves as a member of the ``Commission des moyens informatiques'' of INRIA Lorraine.
P. Vallois is the head of the Probability and Statistics group of Institut Élie Cartan.
P. Vallois serves as a member of the council of the UFR STMIA, the ``Conseil du laboratoire'' and the ``Commission de Spécialistes'' of the Mathematics Department of Université Nancy 1.
B. Bihain, the head of the Laboratoire de Médecine et Thérapie Moléculaire (INSERM, Nancy), asked our group of Probability and Statistics to investigate the proteomic profiling of clinical samples. More precisely, the question involves the differential analysis of the expression levels of a large subset of the proteome of a particular type of clinical specimen, in order to identify the proteins whose change in expression levels might be associated with a given disease process. The statistical tools are data analysis and Support Vector Machines. P. Vallois, S. Tindel (Université Nancy 1) and J.M. Monnez (Université Nancy 2) have organized several seminars on these topics. M. Deaconu and S. Mézière (Université Nancy 2) also participate in the biostatistics group. O. Collignon holds a CIFRE thesis grant.
A colloquium on the ``Discretization of Processes'' was organized by Omega on January 23–24 at INRIA Sophia Antipolis. Talks were given by the following speakers: Paul Malliavin (Académie des Sciences), Ana Bela Cruzeiro (Universidade Técnica de Lisboa), Pierre Etore, Mireille Bossy, Aurélien Alfonsi (École Nationale des Ponts et Chaussées), Ahmed Kebaier (Université de Marne-la-Vallée), Chi Tran Viet (Université de Nanterre), Florent Malrieu (Université de Rennes 1), Vlad Bally (Université de Marne-la-Vallée), Marie-Pierre Bavouzet (INRIA Rocquencourt), Valentin Konakov (Université Paris 6), Gilles Pagès (Université Paris 6), Sylvain Rubenthaler (Université de Nice), François Delarue (Université Paris 7) and Romuald Elie (Université Paris Dauphine).
D. Talay has a part time position of Professor at École Polytechnique. He also teaches probabilistic numerical methods at Université Paris 6 (DEA de Probabilités).
M. Bossy gives a 30h course on ``Stochastic calculus and financial mathematics'' in the Master IMAFA (``Informatique et Mathématiques Appliquées à la Finance et à l'Assurance'', Université de Nice Sophia Antipolis), and a 15h course on ``Risk management on energetic financial markets'' in the Master ``Ingénierie et Gestion de l'Energie'' (École des Mines de Paris) at Sophia Antipolis.
J.F. Jabir has given 14h of exercise class on ``Stochastic calculus and financial mathematics'' in the Master IMAFA.
E. Tanré gives a 6h course in the Master IMAFA.
P. Vallois gives courses in the Mathematical Finance Master 2 programme, in Nancy.
Pierre Étore defended his thesis, entitled ``Approximation de processus de diffusion à coefficients discontinus en dimension un et applications à la simulation'', on December 12 in Nancy.
F. Bernardin has given a seminar lecture at the Laboratoire de mécanique et ingénieries, Institut français de mécanique avancée, ClermontFerrand, in February.
F. Bernardin has given a seminar lecture at the Laboratoire régional des ponts et chaussées de Nice, équipe de recherche associée Risque sismique in March.
F. Bernardin and A. Rousseau have given a talk at the Journées EDP Rhône-Alpes, Saint-Étienne, France, in November.
M. Bossy gave a talk at the Colloquium Approximation Numérique des Processus Stochastiques in Sophia Antipolis in January.
M. Bossy gave a plenary talk at the Colloquium Journée EDP Proba, Paris, in March.
M. Bossy gave a plenary talk at the Conference on Numerical and Stochastic Models in Paris in October.
N. Champagnat gave talks at the first workshop of the ANR MAEV at the Institut Albert Bonniot, Grenoble, in October , at the séminaire commun Dieudonné–Omega in Nice in November and at the working group on control, games and biology at the EPU in Sophia-Antipolis in December.
J.F. Jabir and A. Rousseau have given a talk at the 31st Conference on Stochastic Processes and their Applications in July in Paris.
A. Lejay spent two and a half weeks in Chile with an INRIA–CONICYT grant to work with R. Rebolledo and S. Torrès.
A. Lejay has given talks at the Journée ``Incertitude'' in October at the Institut Français du Pétrole in Rueil-Malmaison, at the Journées ``Analyse et Probabilité'' in April in Marne-la-Vallée, at the Journées du GdR MOMAS ``Problèmes inverses et analyse d'incertitudes'' in October in Nice, and at the 3rd Encuentro Regional de Probabilidad y Estadística Matemática in December in Buenos Aires. He also gave seminars at Université Nancy 1 and at the Universitat de Barcelona.
L. Quer i Sardanyons gave a talk at the workshop Computational Aspects of Stochastic Partial Differential Equations in Salzburg in September and at the Probability seminar of the Institut Élie Cartan, and presented a poster at the conferences Stochastic Processes and Their Applications in July in Paris and the International Congress of Mathematicians in August in Madrid.
A. Rousseau gave a talk at the conference Mathematical and Geophysical Fluid Dynamics organized by the American Institute of Mathematics in February in Palo Alto, California.
B. Roynette gave a talk at the Seminar of Probability at Dijon in April.
B. de Saporta gave talks at the AMAMEF conference Numerical Methods in Finance in Rocquencourt in February and at the 31st Conference on Stochastic Processes and their Applications in Paris in July, and gave seminars at the Universities of Nice, Brest, Amiens and Évry in March, and at the séminaire Bachelier in Paris in June.
D. Talay gave a survey course in February at the Workshop ``Numerics for Stochastic Differential Equations with Applications'', Florida State University, Tallahassee.
D. Talay gave a plenary lecture in July at `Markov Processes and Related Topics, A conference in honor of Tom Kurtz on his 65th birthday', University of Wisconsin–Madison.
D. Talay served as a member of the Scientific Committee of the 31st Conference on Stochastic Processes and their Applications and attended the Conference in July in Paris. He also served as a member of the Scientific Committee of the 69th Annual Meeting of the Institute of Mathematical Statistics held in July in Rio de Janeiro. He also organized a session and gave a lecture there.
D. Talay coorganized an INRIA–City University of Hong Kong Workshop in Hong Kong in May. He also gave a lecture there.
D. Talay coorganized two SMAI Workshops on Probability and Partial Differential Equations in March and September in Institut Henri Poincaré, Paris.
D. Talay gave two seminars in May and June at the French bank IXIS.
D. Talay gave the welcome lecture at the `Journées MAS–SMAI' held in Lille in September.
E. Tanré spent two weeks in Chile in July within the INRIA–CONICYT collaboration. He gave seminars at the Pontificia Universidad Católica de Chile and at the IV Escuela de Invierno de Análisis Estocástico y Aplicaciones.
E. Tanré has given a talk at the First Conference on Advanced Mathematical Methods for Finance (AMAMEF) in Side (Turkey).
E. Tanré has given a seminar at Université de Rennes.
E. Tanré presented a poster at the conference NeuroComp 2006.
P. Vallois has given lectures at the 31st Conference on Stochastic Processes and their Applications in Paris in July, at the International conference on Stochastic Analysis and its Applications in Seattle in August, and at the Journées de Probabilités in Luminy in September.
P. Vallois has been invited for one week by P. Salminen in Turku (Finland) in October, and for one week by B. Boufoussi in Marrakech in November.
L. Stoica (University of Bucharest) spent a month in Nancy as an invited professor to work with A. Lejay.
The Omega seminar, organized by M. Bossy and partly by N. Champagnat since September, has received the following speakers: Alexandre Popier (WIAS), Miguel Martinez (Universidad de Valparaíso, Chile), Josselin Garnier (Université Paris 7), Pierre Del Moral (Université de Nice), Madalina Deaconu (Omega, Nancy), Carlos M. Mora (Universidad de Concepción, Chile), Lluis Quer i Sardanyons (Omega, Nancy), Pauline Barrieu (LSE, London), Tony Lelievre (ENPC CERMICS), Bruno Dupire (Bloomberg), Siham Filali (Université de Lille), Tamara Salameh (LMD, Paris 6), Christophe Baehr (Météo-France, Toulouse), François Bolley (Université Paris Dauphine), Ragnar Norberg (London School of Economics), Persi Diaconis (Stanford University), Susan Holmes (Stanford University) and Thérèse Malliavin (Institut Pasteur, Paris).
The seminar Probabilités, organized at Nancy by S. Tindel, has received the following speakers: Ivan Nourdin (Université de Paris 6), Emmanuel Monfrini (LORIA, Nancy), Massimiliano Gubinelli (Università di Pisa), Ilya Pavlyukevich (Humboldt-Universität, Berlin), Giambattista Giacomin (Université de Paris 7), Isabelle Turpin (Université de Valenciennes), Laurence Maillard (Institut Pasteur et LPMA Paris 6–7), Vincent Lemaire (Université de Marne-la-Vallée), Andreas Eberle (Universität Bonn), Laurent Rouvière (Université de Rennes 2), Julien Jacques (Université de Grenoble 2), Dierk Peithmann (Humboldt-Universität, Berlin), Alexis Devulder (Université de Paris 6), Sandie Ferrigno (Université de Montpellier 2), Aude Illig (Université de Paris Dauphine), Clément Dombry (Université de Lyon 1), Hermine Biermé (Université de Paris 5), Lucretiu Stoica (Université de Bucarest), Sophie Rainero (Université de Paris Dauphine), Zhan Shi (Université de Paris 6), Nicolas Fournier (Université de Paris 12), Pierre Calka (Université de Paris 5), Jean-Jacques Daudin (INA Paris-Grignon), Götz Kersting (University of Frankfurt am Main), Philippe Berthet (Université de Rennes 1), Fabien Panloup (Université Paris 6), Raby Guerbaz (Université Cadi Ayyad, Marrakech), Serguei Popov (Universidade de Campinas, Brazil), Ciprian Tudor (Université de Paris 1) and Jean-Baptiste Gouéré (Université d'Orléans).