Commands is a team devoted to dynamic optimization, for both deterministic and stochastic systems. This includes the following approaches: trajectory optimization, deterministic and stochastic optimal control, stochastic programming, dynamic programming, and the Hamilton-Jacobi-Bellman equation.

Our aim is to derive new and powerful algorithms for the numerical solution of these problems, with applications in several industrial fields. While the numerical aspects are the core of our approach, the study of the convergence of these algorithms and the verification of their well-posedness and accuracy raise interesting and difficult theoretical questions, such as: for trajectory optimization, qualification conditions and second-order optimality conditions, well-posedness of the shooting algorithm, and estimates for discretization errors; for the Hamilton-Jacobi-Bellman approach, accuracy estimates and strong uniqueness principles when state constraints are present; for stochastic programming problems, sensitivity analysis.

For many years the team members have been deeply involved in various industrial applications, often in the framework of PhD theses. The Commands team itself has dealt since its foundation in 2009 with several types of applications:

Space vehicle trajectories, in collaboration with CNES, the French space agency.

Aeronautics, in collaboration with the startup Safety Line.

Production, management, storage and trading of energy resources, in collaboration with EDF, ex-GDF, and Total.

Energy management for hybrid vehicles, in collaboration with Renault and IFPEN.

We give more details in the Bilateral contracts section.

The roots of deterministic optimal control lie in the “classical” theory of the calculus of variations, illustrated by the work of Newton, Bernoulli, Euler, and Lagrange (whose famous multipliers were introduced in ), with improvements due to the “Chicago school” around Bliss during the first part of the 20th century, and by the notions of relaxed problem and generalized solution (Young ).

*Trajectory optimization* really started with the spectacular achievements of Pontryagin's group during the fifties,
which stated, for general optimal control problems, nonlocal optimality conditions generalizing those of Weierstrass.
This motivated applications to many industrial problems (see the classical books by Bryson and Ho , Leitmann ,
Lee and Markus , and Ioffe and Tihomirov ).

*Dynamic programming* was introduced and systematically studied by R. Bellman during the fifties. The HJB equation, whose solution is the
value function of the (parameterized) optimal control problem, is a variant of the classical Hamilton-Jacobi equation of mechanics
for the case of dynamics parameterized by a control variable. It may be viewed as a differential form of the dynamic programming principle.
This nonlinear first-order PDE turns out to be well-posed in the framework of *viscosity solutions* introduced by Crandall and Lions .
Theoretical contributions in this direction have kept growing; see the books by Barles and by Bardi and Capuzzo-Dolcetta .
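In generic form (the notation here is ours, chosen for illustration: a finite-horizon problem minimizing a running cost \(\ell\) plus a terminal cost \(\varphi\), with controlled dynamics \(f\)), the HJB equation reads:

```latex
-\partial_t V(t,x) - \inf_{u \in U}
  \Big\{ \ell(x,u) + f(x,u) \cdot \nabla_x V(t,x) \Big\} = 0,
\qquad V(T,x) = \varphi(x),
```

where an optimal feedback is obtained by selecting, at each \((t,x)\), a control achieving the infimum.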

The so-called *direct methods* consist in optimizing the trajectory, after having discretized time, with a nonlinear programming solver that possibly
takes into account the dynamic structure. The two main issues are thus the choice of the discretization and of the nonlinear programming algorithm.
A third issue is the possibility of refining the discretization after solving on a coarser grid.
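A minimal sketch of a direct method on a toy problem of our own choosing (not one of the team's applications): time is discretized by explicit Euler, both states and controls are decision variables, and the dynamics become equality constraints handed to a nonlinear programming solver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize  ∫_0^1 u(t)^2 dt   s.t.  x'(t) = u(t), x(0) = 0, x(1) = 1.
# Full discretization: states x_0..x_N and controls u_0..u_{N-1} are all
# decision variables, linked by explicit Euler "defect" equality constraints.
N = 20
h = 1.0 / N

def cost(z):
    u = z[N + 1:]
    return h * np.sum(u ** 2)          # rectangle rule for the running cost

def defects(z):
    x, u = z[:N + 1], z[N + 1:]
    dyn = x[1:] - x[:-1] - h * u       # Euler defects: x_{k+1} = x_k + h u_k
    return np.concatenate([dyn, [x[0], x[-1] - 1.0]])  # plus boundary conditions

z0 = np.zeros(2 * N + 1)
res = minimize(cost, z0, constraints={'type': 'eq', 'fun': defects},
               method='SLSQP')
x_opt, u_opt = res.x[:N + 1], res.x[N + 1:]
```

For this convex problem the analytic optimum is the constant control u ≡ 1, with cost 1, which the solver recovers up to discretization and solver tolerances.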

In the *full discretization approach*, general Runge-Kutta schemes with different values of the control for each inner step are used. This makes it possible to obtain and
control high orders of precision, see Hager , Bonnans .
In the *indirect* approach, the control is eliminated thanks to Pontryagin's maximum principle. One then has to solve the two-point boundary value problem
(whose differential variables are the state and costate) by a single or multiple shooting method. The questions here are the choice of a discretization scheme for the
integration of the boundary value problem, and of a (possibly globalized) Newton-type algorithm for solving the resulting finite-dimensional problem.
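As a minimal illustration on a toy problem of our own (not from the text): for min ∫ u²/2 with x' = u, x(0) = 0, x(1) = 1, Pontryagin's principle gives u = -p and a constant costate, so single shooting reduces to root finding in the unknown initial costate.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Hamiltonian H = p*u + u^2/2 is minimized at u = -p; costate: p' = -dH/dx = 0.

def flow(p0):
    # Integrate the coupled state/costate system from a guessed initial costate.
    def rhs(t, y):
        x, p = y
        return [-p, 0.0]               # x' = u = -p,  p' = 0
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, p0], rtol=1e-10)
    return sol.y[:, -1]                # (x(1), p(1))

def shooting(p0):
    # Mismatch in the terminal condition x(1) = 1; a Newton-type solver
    # (here scipy's fsolve) drives this residual to zero.
    return [flow(p0[0])[0] - 1.0]

p0_star, = fsolve(shooting, x0=[0.0])
```

Here the exact solution is p(0) = -1, hence u ≡ 1; multiple shooting would split [0, 1] into subintervals with matching conditions to improve conditioning.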

This approach consists in computing the value function associated with the optimal control problem, and then synthesizing the feedback control and the optimal trajectory using Pontryagin's principle. The method has the major advantage of directly reaching the global optimum, which can be very valuable when the problem is not convex.
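A minimal sketch of the idea on a one-dimensional toy problem of our own (a semi-Lagrangian scheme, not the team's solver): the value function is computed backward in time by dynamic programming on a grid, and the feedback is then synthesized by picking the minimizing control at each point.

```python
import numpy as np

# Toy problem: dynamics x' = u, u in [-1,1], running cost u^2,
# terminal cost (x - 0.5)^2, horizon T = 1.
xs = np.linspace(-1.0, 1.0, 201)       # state grid
us = np.linspace(-1.0, 1.0, 21)        # control grid
T, nt = 1.0, 50
h = T / nt

V = (xs - 0.5) ** 2                    # terminal condition V(T, x)
for _ in range(nt):
    # Dynamic programming principle: for each control, add the running cost
    # and interpolate V at the foot of the characteristic x + h*u.
    candidates = [h * u ** 2 + np.interp(xs + h * u, xs, V) for u in us]
    V = np.min(candidates, axis=0)

def feedback(x, V):
    # Synthesis: the optimal control at x is the minimizer in the DP step.
    vals = [h * u ** 2 + np.interp(x + h * u, xs, V) for u in us]
    return us[int(np.argmin(vals))]
```

Since the whole value function is computed, the global optimum is obtained even for nonconvex costs, at the price of a grid whose size grows exponentially with the state dimension.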

*Optimal stochastic control problems* occur when the dynamical system is
uncertain. A decision typically has to be taken at each time, while
realizations of future events are unknown (though some information is given
on their probability distribution).
In particular, problems of an economic nature deal with large uncertainties
(on prices, production, and demand).
Specific examples are portfolio selection problems in a market with risky
and risk-free assets, super-replication under uncertain volatility, and the
management of power resources (dams, gas).
Air traffic control is another example of such problems.

For solving stochastic control problems, we studied the so-called Generalized Finite Differences (GFD), which allow one to choose, at each node, the stencil approximating the diffusion matrix up to a certain threshold . Determining the stencil and the associated coefficients boils down to a quadratic program to be solved at each point of the grid, and for each control. This is definitely expensive, except for special structures where the coefficients can be computed at low cost. For two-dimensional systems, we designed a (very) fast algorithm for computing the coefficients of the GFD scheme, based on the Stern-Brocot tree .
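The coefficient-fitting step can be illustrated by a minimal sketch, under assumptions of ours: we fix a small candidate stencil (the actual scheme selects directions adaptively, e.g. via the Stern-Brocot tree) and solve the resulting nonnegative least-squares problem, a simple instance of the quadratic program mentioned above.

```python
import numpy as np
from scipy.optimize import nnls

def gfd_coefficients(a, directions):
    """Fit nonnegative coefficients c_i so that sum_i c_i e_i e_i^T
    approximates the 2x2 symmetric diffusion matrix a in least squares."""
    def vec(m):
        # Vectorization preserving the Frobenius norm for symmetric 2x2 matrices.
        return np.array([m[0, 0], m[1, 1], np.sqrt(2.0) * m[0, 1]])
    A = np.column_stack([vec(np.outer(e, e)) for e in directions])
    c, residual = nnls(A, vec(np.asarray(a, dtype=float)))
    return c, residual

# Example: fit a diffusion matrix on a fixed stencil of integer directions.
a = np.array([[2.0, 1.0], [1.0, 2.0]])
stencil = [np.array(e) for e in [(1, 0), (0, 1), (1, 1), (1, -1)]]
c, residual = gfd_coefficients(a, stencil)
approx = sum(ci * np.outer(e, e) for ci, e in zip(c, stencil))
```

The nonnegativity of the coefficients is what guarantees the monotonicity of the resulting scheme; when the residual is below the threshold, the stencil is accepted.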

In collaboration with IFPEN and in the framework of A. Le Rhun's thesis, we have developed a methodology for the optimal energy management of hybrid vehicles, based on a statistical analysis of the traffic. See , , .

In collaboration with the Inbio team (Inst. Pasteur and Inria) we started to study the optimization of protein production based on cell culture.

We now have a strong involvement in the study of mean-field games (MFG) and their application to distributed energy production problems. In the paper we study MFG equilibria with a coupling of the agents through a price function (see more in the 'New Results' section). In the framework of the PhD of Pierre Lavigne, we currently study discrete-time models with risk-averse agents. Both directions take advantage of the recent recruitment of Laurent Pfeiffer as “chargé de recherche”, and of a starting collaboration with Jameson Graber (Baylor University, Texas).
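In a generic second-order form (the notation is ours, not the specific model of the paper), an MFG equilibrium couples a backward HJB equation for the value function \(u\) of a representative agent with a forward Fokker-Planck equation for the distribution \(m\) of agents:

```latex
\begin{aligned}
-\partial_t u - \nu \Delta u + H(x, \nabla u) &= f(x, m(t)), \\
 \partial_t m - \nu \Delta m - \operatorname{div}\!\big( m \, \nabla_p H(x, \nabla u) \big) &= 0,
\end{aligned}
```

with a terminal condition on \(u\) and an initial condition on \(m\); in the price-coupling setting, the right-hand side \(f\) depends on \(m\) through a price function.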

*Boite à Outils pour le Contrôle OPtimal*

Keywords: Dynamic Optimization - Identification - Biology - Numerical optimization - Energy management - Transportation

Functional Description: Bocop is an open-source toolbox for solving optimal control problems, developed in collaboration with industrial and academic partners. Optimal control (the optimization of dynamical systems governed by differential equations) has numerous applications in transportation, energy, process optimization, and biology. Bocop includes a module for parameter identification and a graphical interface, and runs under Linux / Windows / Mac.

Release Functional Description: handling of delay systems; alternate automatic differentiation tool (CppAD); update for CMake and MinGW (Windows version).

Participants: Benjamin Heymann, Virgile Andréani, Jinyan Liu, Joseph Frédéric Bonnans and Pierre Martinon

Contact: Pierre Martinon

URL: http://

Keywords: Optimal control - Stochastic optimization - Global optimization

Functional Description: Toolbox for stochastic or deterministic optimal control, dynamic programming / HJB approach.

Release Functional Description: user interface; state jumps for switched systems; explicit handling of final conditions; computation of the state probability density (first step towards mean field games).

Participants: Benjamin Heymann, Jinyan Liu, Joseph Frédéric Bonnans and Pierre Martinon

Contact: Joseph Frédéric Bonnans

URL: http://

Keywords: Optimization - Aeronautics

Functional Description: Optimize the climb speeds and associated fuel consumption for the flight planning of civil airplanes.

News Of The Year: improved atmosphere model; 2D interpolations for temperature and wind data.

Participants: Baptiste Gregorutti, Cindie Andrieu, Anamaria Lupu, Joseph Frédéric Bonnans, Karim Tekkal, Pierre Jouniaux and Pierre Martinon

Partner: Safety Line

Contact: Pierre Martinon

Keywords: Optimization - Aeronautics

Functional Description: Optimize the climb and cruising trajectory of flight by a HJB approach.

News Of The Year: First demonstrator for cruise flight deployed at Safety Line

Participants: Pierre Martinon, Joseph Frédéric Bonnans, Jinyan Liu, Baptiste Gregorutti and Anamaria Lupu

Partner: Safety Line

Contact: Pierre Martinon

The articles , , , co-written by L. Pfeiffer in the framework of his former position at the University of Graz, deal with the computation of feedback laws for stabilization problems of PDE systems. These problems are formulated as infinite-horizon optimal control problems.

Inria Project Lab ALGAE IN SILICO (2014-2018) was dedicated to providing an integrated platform for the numerical simulation of microalgae “from genes to industrial process”. Commands joined the project in 2017 to tackle the optimization aspects. Our previous collaborations with teams Modemic and Biocore on bioreactors , have been renewed in this framework.

Inria Project Lab COSY (started in 2017) aims at exploiting the potential of state-of-the-art biological modelling, control techniques, synthetic biology, and experimental equipment to achieve a paradigm shift in the control of microbial communities. More precisely, we plan to determine and implement control strategies to make heterogeneous communities diversify and interact in the most profitable manner. The study of yeast cells has started in collaboration with team Lifeware (G. Batt) in the framework of the PhD of V. Andreani, and is pursued in the postdoc of D. Lunz (started Nov. 2019).

F. Bonnans: PGMO Days, EDF Lab Palaiseau, Dec. 3-4, 2019.

F. Bonnans: Associate Editor: Math. & Appl. / Annals of the Academy of Romanian Scientists (AOSR)

Master :

F. Bonnans: *Numerical analysis of partial differential equations
arising in finance and stochastic control*, 18h, M2,
Ecole Polytechnique and U. Paris 6, France.

F. Bonnans: *Optimal control of ordinary differential equations*, 15h, M2, Optimization master
(U. Paris-Saclay) and Ensta, France.

A. Kröner : Optimal control of partial differential equations, 20h, M2, Optimization master (U. Paris-Saclay), France.

L. Pfeiffer: *Optimal control of ordinary differential equations*, 18h, M2, Optimization master
(U. Paris-Saclay) and Ensta, France.

L. Pfeiffer: *Optimisation continue et combinatoire*, 17h, Ensta, France.

Finished PhD : A. Le Rhun, Optimal and robust control of hybrid vehicles. Started September 2016 (IFPEN fellowship), finished December 2019, F. Bonnans and P. Martinon.

PhD in progress : G. Bonnet, Efficient schemes for the Hamilton-Jacobi-Bellman equation. Started Oct. 2018. F. Bonnans and J.-M. Mirebeau, LMO, U. Orsay.

PhD in progress : P. Lavigne, Mathematical study of economic equilibria for renewable energy sources. Started Oct. 2018. F. Bonnans and L. Pfeiffer.

F. Bonnans: codirection of a joint Allistene-Ancre commission (contribution to the national strategy for research), Numerics and Energy committee (2017-2019).

F. Bonnans: Dimitrie Pompeiu Prize Committee (Academy of Romanian Scientists).