Section: New Results
Hamilton-Jacobi (HJ) approach
Dynamic programming and error estimates for stochastic control with maximum cost
Participants : Olivier Bokanowski, Athena Picarelli, Hasnaa Zidani.
The paper [35] is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second-order Hamilton-Jacobi-Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided; error estimates are obtained for the semi-Lagrangian scheme. These results apply in particular to the case of lookback options in finance. Moreover, optimal control problems with a maximum cost arise in the characterization of reachable sets for systems of controlled stochastic differential equations. Numerical simulations on examples of reachability analysis are included to illustrate the approach.
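To fix ideas, the sketch below writes, in generic notation chosen here (drift b, diffusion sigma, cost g and value function ϑ are placeholders, not the exact data of [35]), the augmented-state formulation and the type of HJB equation with oblique derivative boundary condition that arises for a running-maximum cost.

```latex
% Generic sketch (notation chosen here): minimize the expected running maximum
% of g along a controlled diffusion; the auxiliary variable y records the
% current maximum.
\begin{align*}
  & dX_s = b(X_s,u_s)\,ds + \sigma(X_s,u_s)\,dW_s, \qquad
    \vartheta(t,x,y) = \inf_{u}\,
    \mathbb{E}\Big[\, y \vee \max_{t\le s\le T} g\big(X_s^{t,x}\big) \Big],\\
  & -\partial_t \vartheta
    + \sup_{u}\Big\{ -b(x,u)\cdot D_x\vartheta
      - \tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}(x,u)\,D_x^2\vartheta\big)\Big\} = 0
    \quad \text{in } (0,T)\times\{(x,y):\, y > g(x)\},\\
  & -\partial_y \vartheta = 0 \quad \text{on } (0,T)\times\{(x,y):\, y = g(x)\},
    \qquad \vartheta(T,x,y) = y .
\end{align*}
```

The following toy script is a minimal sketch of a semi-Lagrangian scheme for a one-dimensional instance of this augmented problem; the dynamics, cost, grids and two-point quadrature of the Brownian increment are illustrative choices made here and do not reproduce the scheme or the error analysis of [35].

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Controlled SDE dX = u dt + sigma dW; cost to minimize: E[ max_{s<=T} g(X_s) ].
T, N = 1.0, 50                        # horizon, number of time steps
h = T / N                             # time step
sigma = 0.5                           # diffusion coefficient
U = np.linspace(-1.0, 1.0, 5)         # discretized control set
g = lambda x: x**2                    # cost whose running maximum is minimized

x_grid = np.linspace(-3.0, 3.0, 121)
y_grid = np.linspace(0.0, 9.0, 91)    # auxiliary variable y (current maximum)
X, Y = np.meshgrid(x_grid, y_grid, indexing="ij")

V = np.maximum(Y, g(X))               # terminal condition: V(T, x, y) = max(y, g(x))

for n in range(N):                    # backward time loop
    interp = RegularGridInterpolator((x_grid, y_grid), V,
                                     bounds_error=False, fill_value=None)
    V_new = np.full_like(V, np.inf)
    for u in U:
        # two-point quadrature of the Brownian increment (+/- sigma*sqrt(h))
        Xp = X + u * h + sigma * np.sqrt(h)
        Xm = X + u * h - sigma * np.sqrt(h)
        # the auxiliary variable is pushed up to g at the new position,
        # which mimics the oblique derivative condition on {y = g(x)}
        Yp = np.maximum(Y, g(Xp))
        Ym = np.maximum(Y, g(Xm))
        val = 0.5 * (interp(np.stack([Xp, Yp], axis=-1))
                     + interp(np.stack([Xm, Ym], axis=-1)))
        V_new = np.minimum(V_new, val)
    V = V_new

print("value at (t, x, y) = (0, 0, 0):", V[x_grid.searchsorted(0.0), 0])
```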
Optimal feedback control of undamped wave equations by solving a HJB equation
Participant : Hasnaa Zidani.
An optimal finite-time horizon feedback control problem for (semilinear) wave equations is studied in [42]. The feedback law can be derived from the dynamic programming principle and requires solving the evolutionary Hamilton-Jacobi-Bellman (HJB) equation. Classical discretization methods based on finite elements lead to approximate problems governed by ODEs in a high-dimensional space, which makes the numerical resolution by the HJB approach infeasible. In this paper, an approximation based on spectral elements is used to discretize the wave equation. The effect of noise is considered, and numerical simulations are presented to show the relevance of the approach.
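As an illustration of the dimension-reduction step, a generic spectral (Galerkin) semi-discretization of a controlled semilinear wave equation can be written as below; the notation (state y, basis functions φ_k, nonlinearity Φ, control operator B) is chosen here for the sketch and does not reproduce the exact setting of [42].

```latex
% Generic sketch: Galerkin projection of a controlled semilinear wave equation
% onto N spectral basis functions phi_1, ..., phi_N (assumed orthonormal, so
% the mass matrix is the identity).
\begin{align*}
  & \partial_{tt} y = \Delta y + \Phi(y) + B u , \qquad
    y(t,\cdot) \approx \sum_{k=1}^{N} c_k(t)\,\varphi_k ,\\
  & \ddot c(t) = A\,c(t) + \widehat\Phi\big(c(t)\big) + \widehat B\,u(t),
    \qquad A_{jk} = \big(\Delta \varphi_k, \varphi_j\big) .
\end{align*}
% The reduced state z = (c, \dot c) lives in R^{2N}; the feedback is then
% obtained from the value function v(t,z) of the corresponding HJB equation
%   -\partial_t v + \sup_u \{ -\nabla_z v \cdot F(z,u) - \ell(z,u) \} = 0 ,
% which is numerically tractable only when N is kept small, hence the interest
% of spectral elements over standard finite elements.
```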
Transmission conditions on interfaces for Hamilton-Jacobi-Bellman equations
Participants : Hasnaa Zidani, Zhiping Rao.
The works [43], [28] deal with deterministic control problems where the dynamics and the running cost can be completely different in two (or more) complementary domains of the space. As a consequence, the dynamics and the running cost are discontinuous at the interfaces of these domains, and suitable transmission conditions have to be identified to "glue" the value function across these interfaces. Several questions arise: how should the value function(s) be properly defined, and what is (are) the right Bellman equation(s) associated with this problem? In the case of a simple geometry (namely when the space is partitioned into two subdomains separated by an interface that is assumed to be a regular hypersurface, without any connectedness requirement), [43] discusses different conditions on the interface where the dynamics and the running cost are discontinuous, and studies the uniqueness properties of the Bellman problem. This paper uses a dynamical approach: instead of working with test functions, the emphasis is put on invariance properties of an augmented dynamics related to the integrated control system. The comparison principle is accordingly based not on (semi)continuity of the Hamiltonian appearing in the Hamilton-Jacobi-Bellman equation, but on weak separation properties of this dynamics with respect to the stratification. A more general situation, where the space is partitioned into several domains, is analyzed in [28].
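Schematically, and in generic notation chosen here (two subdomains Ω_1, Ω_2 with interface Γ, data (f_i, ℓ_i) on each side, H_Γ a suitable tangential Hamiltonian), the two-domain problem of [43] can be summarized as follows; the precise transmission condition on Γ is exactly what the paper identifies and is not reproduced here.

```latex
% Schematic two-domain setting (generic notation): discontinuous dynamics/cost
%   (f_1, \ell_1) on \Omega_1,  (f_2, \ell_2) on \Omega_2,
%   \mathbb{R}^d = \Omega_1 \cup \Gamma \cup \Omega_2 .
\begin{align*}
  & -\partial_t v + H_i\big(x, D v\big) = 0
      \quad \text{in } (0,T)\times\Omega_i, \quad i = 1,2,
      \qquad H_i(x,p) = \sup_{a}\big\{ -f_i(x,a)\cdot p - \ell_i(x,a) \big\},\\
  & \text{supplemented by a transmission condition on } (0,T)\times\Gamma
      \quad \text{(involving a tangential Hamiltonian } H_\Gamma\text{)} .
\end{align*}
```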
Singular perturbation of optimal control problems on multi-domains
Participants : Nicolas Forcadel, Hasnaa Zidani.
The goal of the paper [38] is to study a singular perturbation problem in the framework of optimal control on multi-domains. We consider an optimal control problem in which the controlled system contains a fast and a slow variable. This problem is reformulated as a Hamilton-Jacobi-Bellman (HJB) equation. The main difficulty comes from the fact that the fast variable lives in a multi-domain, whose geometric singularity leads to a discontinuous Hamiltonian. Under a controllability assumption on the fast variable, the limit equation (as the velocity of the fast variable goes to infinity) is obtained via a PDE approach and by means of tools from control theory.
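In schematic form, and with notation chosen here for illustration (not the exact formulation of [38]), the singularly perturbed system and its limit read:

```latex
% Generic fast/slow sketch: x is the slow variable, y the fast variable living
% in a multi-domain, and eps > 0 the small parameter.
\begin{align*}
  & \dot x_\varepsilon = f(x_\varepsilon, y_\varepsilon, u), \qquad
    \dot y_\varepsilon = \tfrac{1}{\varepsilon}\, g(x_\varepsilon, y_\varepsilon, u),
    \qquad v^\varepsilon \ \text{solves an HJB equation whose Hamiltonian is
    discontinuous in } y,\\
  & v^\varepsilon \ \longrightarrow \ v \quad (\varepsilon \to 0), \qquad
    -\partial_t v + \overline H\big(x, D_x v\big) = 0 ,
\end{align*}
% where \overline{H} is an effective Hamiltonian in the slow variable only,
% obtained by averaging out the fast variable thanks to the controllability
% assumption on y.
```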
Optimal control of first-order HJ equations with linearly bounded Hamiltonian
Participant : Philip Graber.
In [40], we consider the optimal control of solutions of first-order Hamilton-Jacobi equations, where the Hamiltonian is convex with linear growth. This models the problem of steering the propagation of a front by constructing an obstacle. We prove existence of minimizers of this optimization problem in a relaxed setting and characterize the minimizers as weak solutions to a mean field game type system of coupled partial differential equations. Furthermore, we prove existence and partial uniqueness of weak solutions to the PDE system. An interpretation in terms of mean field games is also discussed.
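For orientation, a generic first-order mean field game system has the structure below (notation chosen here); the optimality system derived in [40] shares this coupled HJ/transport structure, with the obstacle playing the role of the control, but its exact form is given in the paper.

```latex
% Prototype first-order MFG-type system (generic notation, not the exact
% optimality system of [40]):
\begin{align*}
  & -\partial_t u + H\big(x, D u\big) = F(x, m)
      \qquad \text{(backward Hamilton-Jacobi equation)},\\
  & \ \partial_t m - \mathrm{div}\big(m\, D_p H(x, D u)\big) = 0
      \qquad \text{(transport of the adjoint density along optimal trajectories)},\\
  & \ m(0) = m_0, \qquad u(T,\cdot) = u_T .
\end{align*}
```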
Zubov's equation for state-constrained perturbed nonlinear systems
Participant : Hasnaa Zidani.
The paper [41] gives a characterization of the uniform robust domain of attraction for a finite-dimensional non-linear controlled system subject to perturbations and state constraints. We extend the Zubov approach to characterize this domain by means of the value function of a suitable infinite-horizon state-constrained control problem, which at the same time is a Lyapunov function for the system. We provide the associated Hamilton-Jacobi-Bellman equations and prove existence and uniqueness of the solutions of these generalized Zubov equations.
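For reference, the sketch below recalls the generalized Zubov construction in the perturbed but unconstrained case, in generic notation chosen here (attractor A, perturbation a, auxiliary running cost g vanishing only on A); [41] modifies this construction so that the value function also encodes the state constraints.

```latex
% Generalized Zubov construction (perturbed, unconstrained case, generic
% notation): A is the locally asymptotically stable set, a(.) the perturbation,
% g >= 0 a suitable running cost vanishing only on A.
\begin{align*}
  & v(x) \;=\; \sup_{a(\cdot)} \Big( 1 - \exp\Big( -\int_0^{\infty}
      g\big(x(t;x,a), a(t)\big)\,dt \Big) \Big),\\
  & \sup_{a}\big\{ -D v(x)\cdot f(x,a) - \big(1 - v(x)\big)\, g(x,a) \big\} = 0,
      \qquad v = 0 \ \text{on } A,\\
  & \text{uniform robust domain of attraction} \;=\; \{\, x : v(x) < 1 \,\}.
\end{align*}
% In the state-constrained setting of [41], the construction is adapted so that
% the resulting equation becomes a state-constrained generalized Zubov (HJB)
% equation.
```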
Numerical methods for chance-constrained stochastic optimal control problems
Participant : Laurent Pfeiffer.
In Laurent Pfeiffer's PhD thesis, we study stochastic optimal control problems with a probability constraint on the final state: the constraint must be satisfied with a probability greater than or equal to a given level. We analyze and compare two approaches for discrete-time problems, a first one based on a dynamic programming principle and a second one using Lagrangian relaxation. Both approaches can also be used for continuous-time problems, for which we give numerical illustrations.
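Schematically, and with generic notation chosen here (the thesis's exact setting is richer), the discrete-time chance-constrained problem and its Lagrangian relaxation read:

```latex
% Generic discrete-time chance-constrained problem and its Lagrangian relaxation
% (illustrative notation, not the exact formulation of the thesis):
\begin{align*}
  & \min_{u}\ \mathbb{E}\Big[\ \sum_{k=0}^{N-1} \ell(X_k,u_k) + \phi(X_N) \Big]
    \qquad \text{s.t.} \qquad \mathbb{P}\big( X_N \in K \big) \ \ge\ p ,\\
  & L(u,\lambda) \;=\; \mathbb{E}\Big[\ \sum_{k=0}^{N-1} \ell(X_k,u_k) + \phi(X_N)
      + \lambda\, \mathbf{1}_{\{X_N \notin K\}} \Big] \;-\; \lambda\,(1-p),
      \qquad \lambda \ge 0 .
\end{align*}
% For each fixed lambda, minimizing L(u, lambda) over u is a standard
% (unconstrained) stochastic control problem with a penalized terminal cost,
% solvable by dynamic programming; the dual problem then maximizes over lambda.
```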