## Section: Scientific Foundations

### Historical aspects

The roots of deterministic optimal control lie in the "classical" theory of the calculus of variations, illustrated by the work of Newton, Bernoulli, Euler, and Lagrange (whose famous multipliers were introduced in [67]), with improvements due to the "Chicago school", in particular Bliss [42], during the first part of the 20th century, and with the notions of relaxed problems and generalized solutions (Young [75]).

*Trajectory optimization* really started with the
spectacular achievement of Pontryagin's group [73]
during the fifties, who stated, for general optimal control
problems, nonlocal optimality conditions generalizing those of Weierstrass.
This motivated the application to many industrial problems
(see the classical books by Bryson and Ho [48],
Leitmann [69], Lee and Markus [68],
and Ioffe and Tihomirov [64]).
Since then, various theoretical advances have been obtained
by extending the results to nonsmooth problems; see Aubin [38],
Clarke [49], and Ekeland [56].

*Dynamic programming* was introduced and systematically studied by
R. Bellman during the fifties. The Hamilton-Jacobi-Bellman (HJB) equation,
whose solution is the value function of the (parameterized) optimal control problem,
is a variant of the classical Hamilton-Jacobi equation of mechanics
for the case of dynamics parameterized by a control variable.
It may be viewed as a differential form of the dynamic programming principle.
This nonlinear first-order PDE turns out to be well-posed in the framework of
*viscosity solutions* introduced by Crandall and Lions
[51], [52], [50]. These tools also make it possible to perform the
numerical analysis of discretization schemes.
Theoretical contributions in this direction
have kept growing; see the books by Barles [40] and
Bardi and Capuzzo-Dolcetta [39].
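To fix ideas, the HJB equation described above can be written out for a standard finite-horizon problem; the notation below (dynamics $f$, running cost $\ell$, final cost $\phi$, control set $U$) is generic and not tied to any particular reference cited above.

```latex
% Sketch: HJB equation for a finite-horizon optimal control problem.
% State dynamics \dot x(s) = f(x(s), u(s)), control u(s) \in U,
% value function V(t,x) = \inf_{u(\cdot)} \int_t^T \ell(x(s),u(s))\,ds + \phi(x(T)).
% The dynamic programming principle, in differential form, formally yields
\[
  -\partial_t V(t,x)
  + \sup_{u \in U}
      \bigl\{ -f(x,u) \cdot \nabla_x V(t,x) - \ell(x,u) \bigr\}
  = 0,
  \qquad V(T,x) = \phi(x).
\]
% Since V is in general nonsmooth, this nonlinear first-order PDE is
% interpreted in the sense of viscosity solutions, in which it is well-posed.
```

The supremum over $u$ makes the equation nonlinear in $\nabla_x V$, which is why the classical Hamilton-Jacobi theory of mechanics does not suffice and the viscosity-solution framework is needed.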