Section: Research Program
Estimation and control for stochastic processes
We develop inference methods for the stochastic processes that we use in modeling. Control of stochastic processes also provides a way to optimise the administration (dose, frequency) of a therapy.
Many estimation techniques exist for diffusion processes, or for the coefficients of fractional or multifractional Brownian motion, from a set of observations. However, the inference problem for diffusions driven by a fractional Brownian motion is still in its infancy. Our team has solid expertise in the inference of the jump rate and the jump kernel of Piecewise Deterministic Markov Processes (PDMPs). Nevertheless, many directions remain to be explored. For instance, previous works assumed complete observation of the jumps and of the mode, which is unrealistic in practice. We tackle the inference problem for "hidden PDMPs". As an example, in pharmacokinetic modeling, we want to account for the presence of timing noise and to identify models from longitudinal data. We have expertise on these subjects, and we have also used mixed models to estimate tumor growth.
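To illustrate the flavour of such estimation problems, here is a minimal sketch of a standard quadratic-variations estimator of the Hurst exponent of fractional Brownian motion, applied to a path simulated exactly by Cholesky factorization of the covariance. This is a textbook construction given for illustration only; the sample size, seed, and true Hurst value are arbitrary choices, not taken from the works discussed above.

```python
import numpy as np

def simulate_fbm(n, hurst, rng):
    """Exact fBm sample on the grid {1/n, ..., 1} via Cholesky of the covariance."""
    t = np.arange(1, n + 1) / n
    h2 = 2.0 * hurst
    cov = 0.5 * (t[:, None]**h2 + t[None, :]**h2 - np.abs(t[:, None] - t[None, :])**h2)
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def estimate_hurst(path):
    """Quadratic-variations estimator based on second-order increments.

    V at step h scales like h^(2H) per term, so the coarse/fine ratio of the
    summed squared second differences is 2^(2H - 1).
    """
    fine = np.sum(np.diff(path, n=2) ** 2)         # second differences, step 1/n
    coarse = np.sum(np.diff(path[::2], n=2) ** 2)  # second differences, step 2/n
    return 0.5 * (1.0 + np.log2(coarse / fine))

rng = np.random.default_rng(0)
path = simulate_fbm(1024, hurst=0.7, rng=rng)
est = estimate_hurst(path)
print(est)
```

Second-order increments (rather than first-order ones) are used here because they yield asymptotically normal estimators over the whole range 0 < H < 1, whereas first-order quadratic variations degenerate for H > 3/4.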
We consider the control of stochastic processes within the framework of Markov Decision Processes and their generalization known as multi-player stochastic games, with a particular focus on infinite-horizon problems. In this context, we are interested in the complexity analysis of standard algorithms, as well as in the design and analysis of approximate numerical schemes for large problems. Regarding complexity, a central research topic is the analysis of the Policy Iteration algorithm, which has seen significant progress in recent years but is still not fully understood. For large problems, we have long experience with the sensitivity analysis of approximate dynamic programming algorithms for Markov Decision Processes, and we are currently investigating whether and how similar ideas can be adapted to multi-player stochastic games.
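For concreteness, here is a minimal sketch of Howard's Policy Iteration on a toy two-state, two-action discounted MDP. The model (a "stay" action and a "switch" action, with a reward for staying in state 1) and the discount factor are illustrative choices, not an example from the works discussed above; exact policy evaluation is done by solving the linear system (I - gamma * P_pi) v = r_pi.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Howard's Policy Iteration for a finite discounted MDP.

    P: (A, S, S) transition tensor, P[a, s, s'] = Prob(s' | s, a).
    R: (S, A) immediate rewards. Returns the optimal policy and its value.
    """
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states)]
        r_pi = R[np.arange(n_states), policy]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead on the Q-values.
        q = R.T + gamma * (P @ v)          # shape (A, S)
        new_policy = np.argmax(q, axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Toy MDP: action 0 stays in place, action 1 switches state;
# staying in state 1 earns reward 1, everything else earns 0.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # action 0: stay
              [[0.0, 1.0], [1.0, 0.0]]])  # action 1: switch
R = np.array([[0.0, 0.0],
              [1.0, 0.0]])
policy, v = policy_iteration(P, R, gamma=0.9)
print(policy, v)  # → [1 0] [ 9. 10.]
```

The algorithm converges after finitely many iterations because each improvement step yields a strictly better policy and there are finitely many policies; quantifying how many iterations are needed is precisely the kind of complexity question studied above.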