Commands is a team devoted to dynamic optimization, for both deterministic and stochastic systems. This includes the following approaches: trajectory optimization, deterministic and stochastic optimal control, stochastic programming, dynamic programming and the Hamilton-Jacobi-Bellman equation.

Our aim is to derive new and powerful algorithms for the numerical solution of these problems, with applications in several industrial fields. While the numerical aspects are the core of our approach, the study of the convergence of these algorithms and the verification of their well-posedness and accuracy raise interesting and difficult theoretical questions, such as: for trajectory optimization, qualification conditions and second-order optimality conditions, well-posedness of the shooting algorithm, and estimates for discretization errors; for the Hamilton-Jacobi-Bellman approach, accuracy estimates and strong uniqueness principles in the presence of state constraints; for stochastic programming problems, sensitivity analysis.

For many years the team members have been deeply involved in various industrial applications, often in the framework of PhD theses. The Commands team itself has dealt since its foundation in 2009 with several types of applications:

Space vehicle trajectories, in collaboration with CNES, the French space agency.

Aeronautics, in collaboration with the startup Safety Line.

Production, management, storage and trading of energy resources, in collaboration with EDF, GDF and TOTAL.

Energy management for hybrid vehicles, in collaboration with Renault and IFPEN.

We give more details in the Bilateral contracts section.

The roots of deterministic optimal control lie in the “classical” theory of the calculus of variations, illustrated by the work of Newton, Bernoulli, Euler, and Lagrange (who introduced his famous multipliers), with improvements due to the “Chicago school” (notably Bliss) during the first half of the 20th century, and to the notions of relaxed problem and generalized solution (Young).

*Trajectory optimization* really started with the spectacular achievements of Pontryagin's group during the fifties,
who stated, for general optimal control problems, nonlocal optimality conditions generalizing those of Weierstrass.
This motivated applications to many industrial problems (see the classical books by Bryson and Ho, Leitmann,
Lee and Markus, and Ioffe and Tihomirov).

*Dynamic programming* was introduced and systematically studied by R. Bellman during the fifties. The HJB equation, whose solution is the
value function of the (parameterized) optimal control problem, is a variant of the classical Hamilton-Jacobi equation of mechanics
for the case of dynamics parameterized by a control variable. It may be viewed as a differential form of the dynamic programming principle.
This nonlinear first-order PDE turns out to be well-posed in the framework of *viscosity solutions* introduced by Crandall and Lions.
Theoretical contributions in this direction have kept growing; see the books by Barles and by Bardi and Capuzzo-Dolcetta.

The so-called *direct methods* consist in optimizing the trajectory, after discretizing time, with a nonlinear programming solver that possibly
takes into account the dynamic structure. The two main issues are therefore the choice of the discretization and of the nonlinear programming algorithm.
A third one is the possibility of refining the discretization after solving on a coarser grid.

In the *full discretization approach*, general Runge-Kutta schemes with different values of the control for each inner step are used. This makes it possible to obtain and
control high orders of precision; see Hager, Bonnans. In an interior-point algorithm context, the controls can be eliminated
and the resulting system of equations is easily solved thanks to its band structure. Discretization errors due to constraints are discussed in Dontchev et al.
See also Malanowski et al.
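As a minimal illustration of the full discretization idea (a toy problem of our own choosing, not one of the team's applications): minimize the control energy for the scalar dynamics x'(t) = u(t) with fixed endpoints, after an explicit Euler discretization in time, and hand the resulting nonlinear program to a generic solver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (illustrative): minimize  int_0^1 u(t)^2 dt  subject to
# x'(t) = u(t), x(0) = 0, x(1) = 1, after Euler time discretization.
# The exact optimal control is u == 1, with optimal cost 1.
N = 50
h = 1.0 / N

def cost(u):
    # Riemann-sum approximation of the integral cost
    return h * np.sum(u**2)

def endpoint(u):
    # Euler integration of x' = u from x(0) = 0; constraint x(1) = 1
    return h * np.sum(u) - 1.0

res = minimize(cost, np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": endpoint})
print(res.fun)   # close to the exact optimal cost 1.0
```

A real direct solver would also treat the states as decision variables and exploit the band structure of the resulting system, as described above; this sketch only shows the discretize-then-optimize pattern.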

In the *indirect* approach, the control is eliminated thanks to Pontryagin's maximum principle. One then has to solve the two-point boundary value problem
(with the state and costate as differential variables) by a single or multiple shooting method. The questions here are the choice of a discretization scheme for the
integration of the boundary value problem, and of a (possibly globalized) Newton-type algorithm for solving the resulting finite-dimensional problem.

For state constrained problems or singular arcs, the formulation of the shooting function may be quite elaborate. As initiated in earlier work, we focus more specifically on the handling of discontinuities, with ongoing work on geometric integration aspects (Hamiltonian conservation).
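The shooting idea can be sketched on a toy problem of our own (not one of the elaborate state-constrained cases above): minimize the energy of x' = u with x(0) = 0, x(1) = 1. The Hamiltonian H = u^2 + p u gives the costate equation p' = 0 and eliminates the control as u = -p/2; shooting then reduces the boundary value problem to a scalar root find on the initial costate.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Single-shooting sketch (illustrative toy problem):
# H = u^2 + p*u  =>  p' = -dH/dx = 0  and  u = argmin_u H = -p/2.
def state_costate(t, y):
    x, p = y
    u = -p / 2.0                      # control eliminated by the maximum principle
    return [u, 0.0]                   # (x', p')

def shooting(p0):
    # Integrate forward from a guessed initial costate; return x(1) - 1
    sol = solve_ivp(state_costate, (0.0, 1.0), [0.0, p0], rtol=1e-9)
    return sol.y[0, -1] - 1.0

# Root find on the initial costate (bracketed here instead of Newton)
p0 = brentq(shooting, -10.0, 10.0)
print(p0)   # -2.0, giving u = -p/2 = 1, the exact optimal control
```

A bracketing root finder replaces the (possibly globalized) Newton iteration mentioned above purely for compactness of the sketch; the structure of the shooting function is the same.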

This approach consists in computing the value function associated with the optimal control problem, and then synthesizing the feedback control and the optimal trajectory using Pontryagin's principle. Its great advantage is that it directly reaches the global optimum, which is very valuable when the problem is not convex.

*Characterization of the value function*
From the dynamic programming principle, we derive a characterization of the value function as a solution (in the viscosity sense) of a
Hamilton-Jacobi-Bellman equation, which is a nonlinear PDE of dimension equal to the number n of state variables.
Since the pioneering work of Crandall and Lions, many theoretical contributions have been made,
allowing an understanding of the properties of the value function as well as of the set of admissible trajectories.
However, a substantial effort is still required to develop effective and adapted numerical tools, mainly because of the numerical
complexity, which is exponential with respect to n.
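A minimal sketch of the dynamic programming principle at work, on a hypothetical one-dimensional minimum-time problem whose exact value function is V(x) = |x|; a grid with m points per dimension would give m^n unknowns in dimension n, which is the exponential complexity mentioned above.

```python
import numpy as np

# Illustrative toy example: fixed-point (value) iteration for the
# minimum time to reach x = 0 with dynamics x' = u, |u| <= 1, on [-1, 1].
xs = np.linspace(-1.0, 1.0, 201)   # state grid (step 0.01)
h = 0.01                           # time step, aligned with the grid
V = np.zeros_like(xs)
for _ in range(200):               # value iterations
    Vnew = np.empty_like(V)
    for i, x in enumerate(xs):
        if abs(x) < 1e-12:         # target reached: zero remaining time
            Vnew[i] = 0.0
            continue
        # dynamic programming principle: V(x) = min_u [ h + V(x + h*u) ]
        cand = [np.interp(np.clip(x + h * u, -1.0, 1.0), xs, V)
                for u in (-1.0, 1.0)]
        Vnew[i] = h + min(cand)
    V = Vnew
# maximum deviation from the exact value function |x|
print(np.max(np.abs(V - np.abs(xs))))
```

This semi-Lagrangian-style iteration is only a caricature of the schemes studied by the team, but it shows why the cost grows with both the grid size and the number of controls tested per node.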

*Optimal stochastic control problems* occur when the dynamical system is
uncertain. A decision typically has to be taken at each time, while the
realizations of future events are unknown (though some information on
their probability distribution is available).
In particular, problems of economic nature deal with large uncertainties
(on prices, production and demand).
Specific examples are the portfolio selection problems in a market with risky
and non-risky assets, super-replication with uncertain volatility,
management of power resources (dams, gas).
Air traffic control is another example of such problems.

*Nonsmoothness of the value function*.
In some cases the value function is smooth
and the associated HJB equation can be solved explicitly.
In general, however, the value function is not smooth enough to satisfy the
HJB equation in the classical sense. As in the deterministic
case, the notion of viscosity solution provides a convenient framework for
dealing with this lack of smoothness (see Pham),
and it also happens to be
well adapted to the study of discretization errors of numerical
discretization schemes.

For solving stochastic control problems, we studied the so-called Generalized Finite Differences (GFD), which allow choosing, at any node, the stencil approximating the diffusion matrix up to a certain threshold. Determining the stencil and the associated coefficients boils down to a quadratic program to be solved at each point of the grid, and for each control. This is definitely expensive, except for special structures where the coefficients can be computed at low cost. For two-dimensional systems, we designed a (very) fast algorithm for computing the coefficients of the GFD scheme, based on the Stern-Brocot tree.
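The mediant descent of the Stern-Brocot tree can be sketched as follows. This toy routine only approximates a given slope by rationals with small numerator and denominator (i.e. coarse grid directions); it is not the team's coefficient computation itself, which solves the quadratic programs mentioned above.

```python
# Illustrative sketch: descend the Stern-Brocot tree by mediants to
# approximate a nonnegative slope by a fraction p/q, as one would to
# enumerate candidate stencil directions on a 2-D grid.
def stern_brocot(target, depth=20):
    lo, hi = (0, 1), (1, 0)       # fractions 0/1 and 1/0 bracket [0, +inf)
    best = lo
    for _ in range(depth):
        med = (lo[0] + hi[0], lo[1] + hi[1])   # mediant of the bracket
        best = med
        if med[0] / med[1] < target:
            lo = med              # mediant too small: move left bound
        elif med[0] / med[1] > target:
            hi = med              # mediant too large: move right bound
        else:
            break                 # exact hit
    return best                   # (p, q) with p/q close to target

print(stern_brocot(0.618))        # a small-denominator approximation
```

Each step refines the bracket by one tree level, so the approximation error decreases while numerators and denominators stay as small as possible for that accuracy, which is what makes the grid stencils cheap.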

We have a collaboration with the startup Safety Line on the optimization of trajectories for civil aircraft. Key points include the reliable identification of the plane parameters (aerodynamic and thrust models) using data from the flight recorders, and the robust trajectory optimization of the climb and cruise phases. We use both local optimization tools (quasi-Newton interior-point algorithms) and global ones (dynamic programming).

We started a collaboration with IFPEN on energy management for hybrid vehicles. A significant direction is the analysis and classification of traffic data. We have preliminary results on the choice of the routing, which amounts to a certain type of constrained shortest path problem.

At the beginning of 2016 we started an Innovation Lab (Ilab), 'OSCAR', jointly with the startup Safety Line. Its subject is the design of algorithmic tools for (i) the identification of aircraft dynamics, based on flight data recorders, and (ii) the computation of energy-efficient flight trajectories.

Boîte à Outils pour le Contrôle OPtimal (toolbox for optimal control)

Keywords: Energy management - Numerical optimization - Biology - Identification - Dynamic Optimization - Transportation

Functional Description

Bocop is an open-source toolbox for solving optimal control problems, developed with industrial and academic partners. Optimal control (the optimization of dynamical systems governed by differential equations) has numerous applications in transportation, energy, process optimization, and biology. Bocop includes a module for parameter identification and a graphical interface, and runs under Linux / Windows / Mac.

Participants: Joseph Frédéric Bonnans, Pierre Martinon, Benjamin Heymann and Jinyan Liu

Contact: Pierre Martinon

URL: http://

Keywords: Optimization - Aeronautics

Functional Description

Optimizes the climb speeds and the associated fuel consumption for the flight planning of civil airplanes.

Participants: Joseph Frédéric Bonnans, Pierre Martinon, Stéphan Maindrault, Cindie Andrieu, Pierre Jouniaux and Karim Tekkal

Contact: Pierre Martinon

Participants: Joseph Frédéric Bonnans, Pierre Martinon, Benjamin Heymann and Jinyan Liu

Contact: Joseph Frédéric Bonnans

URL: http://

With Pierre Picard and Anasuya Raj, Ecole Polytechnique.

With Soledad Aronna, FGV, Rio de Janeiro. In this paper we consider second order optimality conditions for a bilinear optimal control problem governed by a strongly continuous semigroup operator, the control entering linearly in the cost function. We derive first and second order optimality conditions, taking advantage of the Goh transform. We then apply the results to the heat and wave equations.

With Soledad Aronna, FGV, Rio de Janeiro. This paper presents some optimality conditions for abstract optimization problems over complex spaces. We then apply these results to optimal control problems with a semigroup structure. As an application we detail the case when the state equation is the Schrödinger one, with pointwise constraints on the "bilinear" control. We derive first and second order optimality conditions and address in particular the case where the control enters the state equation and cost function linearly.

With Michael D. Chekroun, UCLA, and H. Liu, Virginia Tech. We consider nonlinear optimal control problems in infinite dimensions, for which we establish approximation theorems and reduction procedures. Such results are available in the literature; the originality of our approach lies in a combination of Galerkin approximation techniques with reduction techniques based on finite-horizon parameterizing manifolds. The numerical approximation of the control in feedback form based on the Hamilton-Jacobi equation also becomes affordable within this approach. The approach is applied to optimal control problems for delay differential equations and nonlinear parabolic equations.

With Andy Philpott and Faisal Wahid, U. Auckland. Mixed Integer Dynamic Approximation Scheme (MIDAS) is a new sampling-based algorithm for solving finite-horizon stochastic dynamic programs with monotonic Bellman functions. MIDAS approximates these value functions using step functions, leading to stage problems that are mixed integer programs. We provide a general description of MIDAS, and prove its almost-sure convergence to an epsilon-optimal policy when the Bellman functions are known to be continuous, and the sampling process satisfies standard assumptions.

With Francisco Silva XLIM, U. Limoges, Fernando Lanas and Guillermo Jimenez, U. Chile.

With Francisco Silva XLIM U. Limoges, Guillermo Jimenez, U. Chile.

In this ongoing work we discuss the variational analysis of stochastic volatility models with correlation, and its application to the pricing equations for European options. The considered framework is based on weighted Sobolev spaces. Furthermore, to verify the continuity of the rate term in the pricing equation, an approach based on commutator analysis is developed.

In the framework of the PhD thesis of Arthur Le Rhun, we study the energy management of hybrid (parallel) vehicles, and more specifically the optimal use of the thermal engine. Before the PhD, a 4-month internship focused on the eco-routing problem for hybrid vehicles, i.e. computing the optimal path. We proposed a method based on graphs: the road network is modeled as a graph and, to take into account the hybrid nature of the vehicle, the State of Charge (SoC) is discretized at each node. A simple shortest path algorithm (A*) applied to this extended graph then solves the routing problem. Numerical simulations indicate that the solution of our discrete eco-routing problem converges to the correct solution when a sufficiently fine discretization of the SoC is used. We illustrate the method on the Ille-et-Vilaine department, see Fig. and Table . The main disadvantage of the method is the large computation time when the size of the extended graph grows.

SoC disc. | improved cases | Fuel savings | CPU time (s) |
3 | 19% | 0.9753 | 6.03 |
5 | 65% | 0.8531 | 14.64 |
10 | 88% | 0.5831 | 52.80 |
20 | 88% | 0.4222 | 283.43 |
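The extended-graph construction can be sketched on a hypothetical three-node network (names, costs, and the two driving modes below are illustrative assumptions, not the Ille-et-Vilaine instance): each location is duplicated per discretized SoC level, and a label-setting shortest path (Dijkstra here, in place of A*) runs on the product graph.

```python
import heapq

# Hypothetical mini road network: each edge can be driven in electric
# mode (consumes one SoC level, no fuel) or thermal mode (keeps the SoC,
# costs fuel). Edge data: (next location, fuel cost if driven thermally).
roads = {"A": [("B", 1.0)], "B": [("C", 1.0)], "C": []}

def eco_route(start, goal, soc0):
    # Dijkstra on the extended graph of (location, SoC level) nodes
    pq = [(0.0, start, soc0)]
    done = set()
    while pq:
        fuel, node, soc = heapq.heappop(pq)
        if (node, soc) in done:
            continue
        done.add((node, soc))
        if node == goal:
            return fuel           # minimal fuel to reach the goal
        for nxt, thermal_fuel in roads[node]:
            heapq.heappush(pq, (fuel + thermal_fuel, nxt, soc))  # thermal
            if soc > 0:
                heapq.heappush(pq, (fuel, nxt, soc - 1))         # electric
    return None                   # goal unreachable

print(eco_route("A", "C", soc0=1))   # one edge electric, one thermal: 1.0
```

The extended graph has (number of locations) x (number of SoC levels) nodes, which is exactly the growth in computation time reported in the table above as the SoC discretization is refined.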

In the framework of an Ilab with Safety Line (a startup in aeronautics), we design tools for the optimization of fuel consumption for civil planes. A first part is devoted to the identification of the aerodynamic and thrust characteristics of the plane, using recorded data from hundreds of flights. Fig. shows the drag and lift coefficients for a Boeing 737, as functions of the Mach number and angle of attack. A second part deals with the optimization of the fuel consumption during the climb and cruise phases. Fig. shows a simulated climb phase, along with recorded data from the actual flight. This collaboration relies significantly on the toolboxes Bocop and BocopHJB developed by Commands since 2010.

Gaspard Monge Program for Optimization and Operational Research (Fondation Jacques Hadamard)

Michael D. Chekroun, U.C.L.A, collaboration on the approximation and reduction of optimal control problems in infinite dimension.

Alejandro Jofré, CMM, U. Chile, Santiago de Chile. Cosupervision of B. Heymann's PhD thesis.

Pablo Lotito, U. Tandil, Argentina, supervision of Justina Gianatti's PhD.

M. Chekroun (University of California, Los Angeles), 12.-14.12.2016.

Johannes Pfeffer (Technische Universität München), 12.-14.12.2016.

Luis Alberto Croquevielle Rendic: Classification of probability measures based on Optimal Transportation theory. January-March 2016. U. Catolica, Santiago, Chile.

Justina Gianatti, Discretization of stochastic control problems, U. Rosario (Argentina), May-July 2016.

F. Bonnans, 14th EUROPT Workshop on Advances in Continuous Optimization, Poznań, July 3-6, 2016.

F. Bonnans: Corresponding Editor of “ESAIM:COCV” (Control, Optimization and Calculus of Variations), and Associate Editor of “Applied Mathematics and Optimization”, “Optimization, Methods and Software”, and “Series on Mathematics and its Applications, Annals of The Academy of Romanian Scientists”.

Reviews for major journals in the field, such as Applied Mathematics and Optimization, Automatica, J. Differential Equations, J. of Optimization Theory and Applications, SIAM J. Optimization, SIAM J. Control and Optimization, Inverse Problems, Journal of Numerical Mathematics, Operations Research, Optimization, Process Control, and Math. Reviews.

A. Kröner: Seminars in U. Konstanz, U. Hamburg, 2016.

Minisymposium 'Numerical methods for time-dependent transportation and optimal control problems', Computational Methods in Applied Mathematics (CMAM), Jyväskylä, Finland, July 31-Aug. 6, 2016;

Minisymposium 'Optimal Control - Theory and Applications', Emerging Trends in Applied Mathematics and Mechanics, Perpignan, May 30-June 3, 2016;

Minisymposium 'Numerical aspects of controllability of PDEs and inverse problems', CANUM (Congrès d'Analyse Numérique), Obernai, May 9-13, 2016;

A. Kröner:
Workshop on *Numerical methods for Hamilton-Jacobi equations in optimal control and related fields*, Linz, Austria, Nov., 2016;

F. Bonnans: French representative to the IFIP-TC5 committee (International Federation of Information Processing; TC7 devoted to System Modeling and Optimization).

F. Bonnans: member of the PGMO board and Steering Committee (Gaspard Monge Program for Optimization and Operations Research, EDF-FMJH).

F. Bonnans: member of the Broyden Prize committee (from the Journal Optimization Methods and Software).

Master :

F. Bonnans: *Numerical analysis of partial differential equations
arising in finance and stochastic control*, 36h, M2,
Ecole Polytechnique and U. Paris 6, France.

F. Bonnans: *Optimal control*, 15h, M2, Optimization master
(U. Paris-Saclay) and Ensta, France.

F. Bonnans: *Stochastic optimization*, 15h, M2, Optimization master
(U. Paris-Saclay), France.

A. Kröner : Optimal control of partial differential equations, 20h, M2, Optimization master (U. Paris-Saclay), France.

**E-learning**
F. Bonnans, several lecture notes on the page

http://www.cmap.polytechnique.fr/

PhD : Benjamin Heymann, Dynamic optimization with uncertainty; application to energy production. Polytechnique fellowship, defense October 2016, F. Bonnans and A. Jofré.

PhD in progress : Cédric Rommel, Data exploration for the optimization of aircraft trajectories. Started November 2015, F. Bonnans and P. Martinon. CIFRE fellowship (Safety Line).

PhD in progress : Arthur Le Rhun, Optimal and robust control of hybrid vehicles (IFPEN fellowship), started Sept. 2016, F. Bonnans and P. Martinon.

HDR Juries: F. Silva (Limoges), A. Rondepierre (Toulouse, rapporteur).

J.F. Bonnans: *Comment optimiser la gestion d'un micro-réseau
électrique intelligent ?* (How to optimize the management of a smart electrical micro-grid?)
Cahier de l'Institut Louis Bachelier N. 23 (2016), p. 12-13.