Section: New Results
Optimal control: new results
Let us list here our new results in optimal control theory beyond the sub-Riemannian framework.

In [4] we focus on regional deterministic optimal control problems, i.e., problems where the dynamics and the cost functional may differ in several regions of the state space and present discontinuities at their interface. Under the assumption that optimal trajectories have a locally finite number of switchings (no Zeno phenomenon), we use the duplication technique to show that the value function of the regional optimal control problem is the minimum, over all possible structures of trajectories, of value functions associated with classical optimal control problems settled over fixed structures, each of them being the restriction to some submanifold of the value function of a classical optimal control problem in higher dimension. The lifting duplication technique is thus seen as a kind of desingularization of the value function of the regional optimal control problem. In turn, we extend the classical sensitivity relations to regional optimal control problems, and we prove that the regularity of this value function is the same as (i.e., is not more degenerate than) that of the higher-dimensional classical optimal control problem that lifts the problem.

The goal of [9] is to show how nonparametric statistics can be used to solve some chance-constrained optimization and optimal control problems. We use the Kernel Density Estimation method to approximate the probability density function of a random variable with unknown distribution, from a relatively small sample. We then show how this technique can be applied and implemented for a class of problems including the Goddard problem and the trajectory optimization of an Ariane 5-like launcher.
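As a minimal sketch of the KDE step (not the implementation from [9]; the distribution, sample size, and threshold below are invented for illustration), one can estimate a density from a small sample and evaluate a chance constraint by integrating the estimate:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Small sample from a distribution treated as unknown
# (here drawn from a Gaussian, purely for illustration)
sample = rng.normal(loc=1.0, scale=0.5, size=200)

# Kernel Density Estimation of the unknown probability density
kde = gaussian_kde(sample)

# Approximate a chance constraint P(X <= x_max) >= 0.9
# by integrating the estimated density up to x_max
x_max = 1.64  # the exact 0.9-quantile of N(1, 0.5^2)
prob = kde.integrate_box_1d(-np.inf, x_max)
print(f"estimated P(X <= {x_max}) = {prob:.3f}")
```

In a chance-constrained optimization loop, such an estimate would replace the unknown true probability when checking feasibility of a candidate control.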

In control theory, the term chattering refers to fast oscillations of controls, such as an infinite number of switchings over a finite time interval. In [10] we focus on three typical instances of chattering: the Fuller phenomenon, referring to situations where an optimal control features an accumulation of switchings in finite time; the Robbins phenomenon, concerning optimal control problems with state constraints, where the optimal trajectory touches the boundary of the constraint set an infinite number of times over a finite time interval; and the Zeno phenomenon, for hybrid systems, referring to a trajectory exhibiting an infinite number of location switchings in finite time. From the practical point of view, when trying to compute an optimal trajectory, for instance by means of a shooting method, chattering may be a serious obstacle to convergence. In [10] we propose a general regularization procedure, adding an appropriate penalization of the total variation. This produces a family of quasi-optimal controls whose associated costs converge to the optimal cost of the initial problem as the penalization tends to zero. Under additional assumptions, we also quantify quasi-optimality by determining a speed of convergence of the costs.
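As a toy illustration of the penalization idea (far simpler than the setting of [10]; the signals and the weight below are invented), adding a total-variation term to the cost makes a chattering-like candidate control more expensive than a non-switching one:

```python
import numpy as np

# A rapidly switching control versus a constant one, on a uniform grid
N = 60
chattering = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)  # many switchings
constant = np.zeros(N)                                   # no switching

def penalized_cost(u, u_target, eps):
    # tracking cost + eps * total variation of the control
    return np.sum((u - u_target) ** 2) + eps * np.sum(np.abs(np.diff(u)))

u_target = chattering  # the "optimal" but chattering control
for eps in (0.0, 2.0):
    print(f"eps={eps}: chattering cost =",
          penalized_cost(chattering, u_target, eps),
          " constant cost =", penalized_cost(constant, u_target, eps))
```

For `eps = 0` the chattering control is optimal; for a positive weight the total-variation term dominates and the non-chattering control becomes preferable, which is the mechanism the regularization exploits.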

In [12], a new robust and fast method is developed to perform transfers that minimize fuel consumption between two invariant manifolds of periodic orbits in the circular restricted three-body problem. The method starts with an impulse transfer between two invariant manifolds to build an optimal control problem. This allows one to choose an adequate fixed transfer time. Using the Pontryagin maximum principle, the resolution of the problem is formulated as that of finding the zero of a shooting function (indirect method). The algorithm couples different kinds of continuations (on cost, final state, and thrust) to improve robustness and to initialize the solver. The efficiency of the method is illustrated with numerical examples. Finally, the influence of the transfer time is studied numerically thanks to a continuation on this parameter, and we check that, as the transfer duration goes to zero, the control converges to the impulse transfer it started with. This shows the robustness of the method and establishes a mathematical link between the two problems.
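To illustrate what "finding the zero of a shooting function" means in the simplest possible setting (a toy double integrator with an energy cost, not the three-body dynamics of [12]), one can integrate the Hamiltonian system of the Pontryagin maximum principle and solve for the unknown initial costate:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

T = 1.0
x0, v0 = 0.0, 0.0   # initial state
xT, vT = 1.0, 0.0   # target state

def extremal(t, y):
    # Hamiltonian system from the PMP for min (1/2)∫u² with x' = v, v' = u:
    # optimal control u = p_v; costates satisfy p_x' = 0, p_v' = -p_x
    x, v, px, pv = y
    return [v, pv, 0.0, -px]

def shooting(p0):
    # Shooting function: terminal-state mismatch as a function
    # of the unknown initial costate p0 = (p_x(0), p_v(0))
    sol = solve_ivp(extremal, (0.0, T), [x0, v0, p0[0], p0[1]],
                    rtol=1e-10, atol=1e-10)
    xf, vf = sol.y[0, -1], sol.y[1, -1]
    return [xf - xT, vf - vT]

p0 = fsolve(shooting, [1.0, 1.0])
print("initial costate:", p0, " residual:", shooting(p0))
```

In the actual method, a good initial guess for `p0` is exactly what the continuations (on cost, final state, and thrust) are designed to provide.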

In [15] we consider the controllability problem for finite-dimensional linear autonomous control systems, under state constraints but without imposing any control constraint. It is well known that, under the classical Kalman condition, in the absence of constraints on the state and the control, one can drive the system from any initial state to any final one in an arbitrarily small time. Furthermore, it is also well known that there is a positive minimal time in the presence of compact control constraints. We prove that, surprisingly, a positive minimal time may be required as well under state constraints, even if one does not impose any restriction on the control. This may even occur when the state constraints are unilateral, like the nonnegativity of some components of the state, for instance. Using the Brunovský normal forms of controllable systems, we analyze this phenomenon in detail, which we illustrate with several examples. We discuss some extensions to nonlinear control systems and formulate some challenging open problems.
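The Kalman condition mentioned above can be checked numerically; a minimal sketch on a toy system (the double integrator, chosen here purely for illustration and not taken from [15]):

```python
import numpy as np

def kalman_matrix(A, B):
    # Controllability (Kalman) matrix [B, AB, ..., A^{n-1}B];
    # the Kalman condition holds iff it has full rank n
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Double integrator x'' = u, written as a first-order system
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = kalman_matrix(A, B)
print("rank of Kalman matrix:", np.linalg.matrix_rank(K))
```

The point of [15] is that, even when this rank condition holds (so unconstrained transfers can be arbitrarily fast), adding a state constraint such as the nonnegativity of a component may force a positive minimal time.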

In [18] we consider a system of two coupled integro-differential equations modeling populations of healthy and cancer cells under therapy. Both populations are structured by a phenotypic variable, representing their level of resistance to the treatment. We analyse the asymptotic behaviour of the model under constant infusion of drugs. By designing an appropriate Lyapunov function, we prove that both densities converge to Dirac masses. We then define an optimal control problem, by considering all possible infusion protocols and minimising the number of cancer cells over a prescribed time frame. We provide a quasi-optimal strategy and prove that it solves this problem for large final times. For this modeling framework, we illustrate our results with numerical simulations, and compare our optimal strategy with periodic treatment schedules.

In [21] we use conductance-based neuron models and the mathematical modeling of optogenetics to define controlled neuron models, and we address the minimal time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large for a theoretical investigation of the existence of singular optimal controls, we observe the optimal bang-bang controls numerically.

In [23] we first derive a general integral-turnpike property around a set for infinite-dimensional nonautonomous optimal control problems with any possible terminal state constraints, under some appropriate assumptions. Roughly speaking, the integral-turnpike property means that the time average of the distance from any optimal trajectory to the turnpike set converges to zero, as the time horizon tends to infinity. Then, we establish the measure-turnpike property for strictly dissipative optimal control systems, with state and control constraints. The measure-turnpike property, which is slightly stronger than the integral-turnpike property, means that any optimal (state and control) solution remains essentially, along the time frame, close to an optimal solution of an associated static optimal control problem, except along a subset of times that is of small relative Lebesgue measure as the time horizon is large. Next, we prove that strict strong duality, which is a classical notion in optimization, implies strict dissipativity, and measure-turnpike. Finally, we conclude the paper with several comments and open problems.
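In symbols, a standard way of formalizing these two properties (schematic notation, not taken verbatim from [23]) is the following. The integral-turnpike property around a set $\mathcal{T}$ reads
\[
\lim_{T\to+\infty} \frac{1}{T}\int_0^T \operatorname{dist}\big(x_T(t),\mathcal{T}\big)\,dt = 0,
\]
while the measure-turnpike property around an optimal static solution $(\bar{x},\bar{u})$ requires that, for every $\varepsilon>0$, there exists $\Lambda(\varepsilon)>0$ such that
\[
\Big|\big\{\, t\in[0,T] : \|x_T(t)-\bar{x}\| + \|u_T(t)-\bar{u}\| > \varepsilon \,\big\}\Big| \le \Lambda(\varepsilon)
\qquad\text{for every } T>0,
\]
so that the exceptional set of times has vanishing relative Lebesgue measure as $T\to+\infty$.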

In [24], we investigate the asymptotic behavior of optimal designs for the shape optimization of 2D heat equations in long time horizons. The control is the shape of the domain on which heat diffuses. The class of 2D admissible shapes is the one introduced by Šverák, of all open subsets of a given bounded open set, whose complementary sets have a uniformly bounded number of connected components. Using a $\Gamma$-convergence approach, we establish that the parabolic optimal designs converge as the length of the time horizon tends to infinity, in the complementary Hausdorff topology, to an optimal design for the corresponding stationary elliptic equation.

In [25], we study the steady-state (or periodic) exponential turnpike property of optimal control problems in Hilbert spaces. The turnpike property, which is essentially due to the hyperbolic feature of the Hamiltonian system resulting from the Pontryagin maximum principle, reflects the fact that, in large time, the optimal state, control and adjoint vector remain most of the time close to an optimal steady-state. A similar statement holds true as well when replacing an optimal steady-state by an optimal periodic trajectory. To establish the result, we design an appropriate dichotomy transformation, based on solutions of the algebraic Riccati and Lyapunov equations. We illustrate our results with examples including linear heat and wave equations with periodic tracking terms.
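The exponential turnpike property described above is commonly written as an estimate of the following schematic form (standard in the turnpike literature; constants and norms are indicative, not quoted from [25]):
\[
\|x(t)-\bar{x}\| + \|u(t)-\bar{u}\| + \|p(t)-\bar{p}\|
\;\le\; C\big(e^{-\mu t} + e^{-\mu (T-t)}\big),
\qquad t\in[0,T],
\]
where $(\bar{x},\bar{u},\bar{p})$ is the optimal steady-state with its adjoint, and $C,\mu>0$ do not depend on the time horizon $T$; the two exponentials capture the transient arcs near $t=0$ and $t=T$.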

The Allee threshold of an ecological system distinguishes the sign of population growth, either towards extinction or towards carrying capacity. In practice, human interventions can tune the Allee threshold, for instance via the sterile male technique and mating disruption. In [26] we address various control objectives for a system described by a reaction-diffusion equation regulating the Allee threshold, viewed as a real parameter determining the unstable equilibrium of the bistable nonlinear reaction term. We prove that this system is the mean field limit of an interacting system of particles in which individual behaviours are driven by stochastic laws. Numerical simulations of the stochastic process show that population propagations are governed by wave-like solutions corresponding to traveling solutions of the macroscopic reaction-diffusion system. An optimal control problem for the macroscopic model is then introduced with the objective of steering the system to a target traveling wave. This problem is motivated by the fact that traveling wave solutions model how bounded space domains asymptotically reach an equilibrium configuration. Using well-known analytical results and stability properties of traveling waves, we show that well-chosen piecewise constant controls allow one to reach the target approximately in sufficiently long time. We then develop a direct computational method and show its efficiency for computing such controls in various numerical simulations. Finally, we show the efficiency of the obtained macroscopic optimal controls in the microscopic system of interacting particles, and we discuss their advantage when addressing situations that are out of reach for the analytical methods. We conclude the article with some open problems and directions for future research.
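A minimal numerical sketch of the bistable macroscopic model (a crude explicit finite-difference scheme; the equation form $u_t = u_{xx} + u(1-u)(u-\theta)$, domain, and parameters are illustrative choices, not those of [26]) showing a traveling front, with $\theta$ playing the role of the Allee threshold:

```python
import numpy as np

# Bistable (Allee-type) reaction-diffusion: u_t = u_xx + u(1-u)(u - theta).
# For theta < 1/2 the carrying-capacity state u = 1 invades u = 0
# through a traveling front.
L, N, T = 50.0, 500, 20.0
dx = L / N
dt = 0.4 * dx**2              # CFL-stable step for the explicit scheme
theta = 0.3                   # Allee threshold (unstable equilibrium)
x = dx * np.arange(N)
u = (x < 10.0).astype(float)  # step initial datum: population on the left

for _ in range(int(T / dt)):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[1]; lap[-1] = lap[-2]   # crude zero-flux boundaries
    u = u + dt * (lap + u * (1 - u) * (u - theta))

idx = int(np.argmin(np.abs(u - 0.5)))    # locate the front (u crosses 1/2)
print("front position after T =", T, ":", x[idx])
```

Tuning `theta` above or below 1/2 reverses the direction of propagation, which is the mechanism a threshold-regulating control can exploit to steer the population toward a target traveling wave.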

Consider a general nonlinear optimal control problem in finite dimension, with constant state and/or control delays. By the Pontryagin Maximum Principle, any optimal trajectory is the projection of a Pontryagin extremal. In [39] we establish that, under appropriate assumptions, Pontryagin extremals depend continuously on the parameter delays, for adequate topologies. The proofs of the continuity of the trajectory and of the control are quite easy; for the adjoint vector, however, the proof requires a much finer analysis. The continuity property of the adjoint with respect to the parameter delay opens a new perspective for the numerical implementation of indirect methods, such as the shooting method. We also discuss the sharpness of our assumptions.

In [43] we are concerned with the controllability of a general linear hyperbolic system of the form $\partial_t w(t,x) = \Sigma(x)\,\partial_x w(t,x) + \gamma C(x)\,w(t,x)$ ($\gamma \in \mathbb{R}$) in one space dimension, using boundary controls on one side. More precisely, we establish the optimal time for the null and exact controllability of the hyperbolic system for generic $\gamma$. We also present examples which show that the generic requirement is necessary. In the case of constant $\Sigma$ and of two positive directions, we prove that null-controllability is attained for any time greater than the optimal time, for all $\gamma \in \mathbb{R}$ and for all analytic $C$, if the slowest negative direction can be alerted by both positive directions. We also show that null-controllability is attained at the optimal time by a feedback law when $C \equiv 0$. Our approach is based on the backstepping method, paying special attention to the construction of the kernel and the selection of controls.

In [52] we consider a state-constrained optimal control problem for a system of two nonlocal partial differential equations, which is an extension of the one introduced in a previous work in mathematical oncology. The aim is to minimize the tumor size through chemotherapy while avoiding the emergence of resistance to the drugs. The numerical approach previously used to solve the problem was a combination of direct methods and continuation on discretization parameters, which turns out to be insufficient for the more complicated model, where diffusion is added to account for mutations. In [52], we propose an approach relying on changing the problem so that it can theoretically be solved thanks to a Pontryagin Maximum Principle in infinite dimension. This provides an excellent starting point for a much more reliable and efficient algorithm combining direct methods and continuations. The global idea is new and can be thought of as an alternative to other numerical optimal control techniques.

We would also like to mention the defense of the PhD theses of Riccardo Bonalli [1] and Antoine Olivier [2] on the subject.