To develop new algorithms in deterministic and stochastic optimal control, and to deal with the associated applications, especially aerospace trajectories and resource management for the power industry (hydroelectric resources, storage of gas and petroleum).
In the field of deterministic optimal control, our objective is to develop algorithms combining fast iterative resolution of the optimality conditions (of the discretized problem) with refinement of the discretization, through the use of interior-point algorithms. At the same time we wish to study multi-arc problems (separations, rendezvous, formation flights), which necessitate the use of decomposition ideas.
In the field of stochastic optimal control, our first objective is to develop fast algorithms for problems of dimension two and three, based on fast computation of consistent approximations as well as on splitting methods. The second objective is to link these methods to the stochastic programming approach, in order to deal with problems of dimension greater than three.
For deterministic optimal control problems there are basically three approaches. The so-called direct method consists in optimizing the trajectory, after having discretized time, by a nonlinear programming solver that possibly takes into account the dynamic structure; see Betts . The indirect approach eliminates the control variables using Pontryagin's maximum principle, and solves the resulting two-point boundary value problem by a multiple shooting method. Finally, the dynamic programming approach solves the associated Hamilton-Jacobi-Bellman (HJB) equation, a partial differential equation whose dimension equals the number n of state variables. This allows one to find the global minimum, whereas the two other approaches are local; however, it suffers from the curse of dimensionality (the complexity is exponential with respect to n).
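The direct method can be illustrated on a toy linear-quadratic problem (a minimal sketch under invented data, not the project's solver): time is discretized by Euler's scheme, and the resulting finite-dimensional control vector is optimized by adjoint-based gradient descent.

```python
import numpy as np

# Direct-method sketch on: min  integral of (x^2 + u^2)/2 dt
# subject to x' = u, x(0) = 1, on [0, 1].  Euler discretization,
# then plain gradient descent on the control vector u.
N, T = 100, 1.0
h = T / N

def rollout(u, x0=1.0):
    """Integrate the discretized dynamics forward in time."""
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + h * u[k]
    return x

def cost_and_grad(u):
    x = rollout(u)
    J = 0.5 * h * np.sum(x[:N]**2 + u**2)
    # Backward (adjoint) pass: exact gradient of the discrete cost.
    p = np.zeros(N + 1)
    for k in range(N - 1, 0, -1):
        p[k] = p[k + 1] + h * x[k]
    g = h * (u + p[1:])
    return J, g

u = np.zeros(N)
for _ in range(300):
    J, g = cost_and_grad(u)
    u -= 50.0 * g        # fixed step, sufficient for this convex problem
```

A structure-exploiting NLP solver would replace the naive gradient loop, but the discretize-then-optimize pattern is the same.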
There are various additional issues: decomposition of large scale problems, simplification of models (leading to singular perturbation problems), computation of feedback solutions.
For stochastic optimal control problems there are essentially two approaches. The one based on the (stochastic) HJB equation has the same advantages and disadvantages as its deterministic counterpart. The stochastic programming approach is based on a finite approximation of uncertain events called a scenario tree (for problems with no decisions this boils down to the Monte Carlo method). Its complexity is polynomial with respect to the number of state variables but exponential with respect to the number of time steps. In addition, various heuristics have been proposed for dealing with the case (not covered by the two previous approaches) when both the number of state variables and the number of time steps are large.
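The two complexity regimes can be made concrete with a back-of-the-envelope count (the branching factors and grid sizes below are illustrative, not taken from any cited project):

```python
# Node counts for the two approaches: a scenario tree with branching
# factor b over T stages, versus a tensor grid for an HJB solver.
def tree_nodes(branching, stages):
    """Nodes of a scenario tree: 1 + b + b^2 + ... + b^T."""
    return sum(branching ** t for t in range(stages + 1))

def grid_nodes(points_per_dim, dim):
    """Grid points for an HJB solver: exponential in the state dimension."""
    return points_per_dim ** dim

# Scenario trees blow up with the number of time steps...
tree_sizes = [tree_nodes(3, T) for T in (4, 8, 12)]
# ...while HJB grids blow up with the state dimension.
grid_sizes = [grid_nodes(50, n) for n in (1, 2, 3, 4)]
```

With branching 3, twelve stages already need nearly 800,000 tree nodes, while a 50-point-per-axis grid in dimension 4 needs over six million points: each method hits its own wall.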
Aerospace trajectories (rockets, planes), automotive industry (car design), chemical engineering (optimization of transient phases, batch processes).
Storage and management, especially of natural and power resources, portfolio optimization.
We presently have two research software packages. The first is an implementation of interior-point algorithms for trajectory optimization, and the second is an implementation of fast algorithms for two-dimensional HJB equations of stochastic control.
We have studied state-constrained optimal control problems with only one control variable and one state constraint, of arbitrary order. We consider the case of finitely many boundary arcs and touch times. We obtain a no-gap theory of second-order conditions, in which the difference between the second-order necessary and sufficient conditions reduces to changing an inequality into a strict inequality. This allows us to characterize the second-order quadratic growth condition using the second-order information.
We have studied the shooting algorithm for optimal control problems with a scalar control and a regular scalar state constraint. Additional conditions are displayed under which the so-called alternative formulation is equivalent to Pontryagin's minimum principle. The shooting algorithm appears to be well-posed (invertible Jacobian) iff (i) the no-gap second-order sufficient optimality condition holds, and (ii) when the constraint is of order q >= 3, there is no boundary arc. Stability and sensitivity results without strict complementarity at touch points are derived using Robinson's strong regularity theory, under a minimal second-order sufficient condition. The directional derivatives of the control and state are obtained as solutions of a linear-quadratic problem. The result is published in .
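The basic single-shooting mechanics behind such algorithms can be sketched on an unconstrained linear-quadratic toy problem (illustration only; the results above concern the far more delicate state-constrained setting). For min of the integral of (x^2 + u^2)/2 with x' = u and x(0) = 1, Pontryagin's principle gives u = -p and the two-point boundary value problem x' = -p, p' = -x, x(0) = 1, p(T) = 0, and we shoot on the unknown initial costate z = p(0).

```python
import math

T, N = 1.0, 1000
h = T / N

def integrate(z):
    """RK4 integration of the Hamiltonian system from (x, p)(0) = (1, z)."""
    x, p = 1.0, z
    f = lambda x, p: (-p, -x)
    for _ in range(N):
        k1 = f(x, p)
        k2 = f(x + 0.5 * h * k1[0], p + 0.5 * h * k1[1])
        k3 = f(x + 0.5 * h * k2[0], p + 0.5 * h * k2[1])
        k4 = f(x + h * k3[0], p + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, p

def shoot():
    # The shooting function z -> p(T) is affine here (linear dynamics),
    # so one secant step gives the exact root; its nonzero slope is the
    # one-dimensional analogue of the invertible Jacobian discussed above.
    _, f0 = integrate(0.0)
    _, f1 = integrate(1.0)
    return -f0 / (f1 - f0)

z = shoot()
```

For this problem the exact answer is z = tanh(1), which the sketch recovers; in the state-constrained case the shooting unknowns also include jump parameters and junction times.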
Assuming well-posedness of a first-order state constraint and weak second-order optimality conditions (equivalent to uniform quadratic growth), we show that boundary arcs are structurally stable, and that a touch point can either remain a touch point, vanish, or be transformed into a single boundary arc. This is the first result of this type. It follows that the shooting algorithm (properly adapted to the possible structural transformations) is again well-posed in this case.
The result is published in .
In November 2006 we started a study of the multidimensional singular arc that can occur in the atmospheric flight of a launcher. The physical reason for not having a bang-bang control (despite the fact that the Hamiltonian function is affine w.r.t. the control) is that aerodynamic forces may make a high speed ineffective. Our preliminary results suggest that we have an effective means for computing such extremals.
In the framework of the thesis of N. Megdich, we have continued the study of numerical schemes for HJB equations coming from optimal control problems with state constraints (rendezvous problems, target problems, minimal time). When some controllability assumptions are not satisfied, the solution of the HJB equation is discontinuous, and the classical schemes, relying on finite differences and/or interpolation techniques, provide poor-quality approximations. In fact, these schemes lead to an increasing loss of precision around the discontinuities as well as for long-time approximations. Hence, the numerical solution is unsatisfactory for long-time approximations even in the continuous case.
We prove the convergence of a nonmonotone scheme for one-dimensional Hamilton-Jacobi-Bellman equations of the form u_t + max_a ( f(x,a) u_x ) = 0, u(0,x) = u_0(x). The scheme is related to the HJB-UltraBee scheme suggested in , , which has an antidiffusive behavior, but for which convergence had not been proved. In our approach we can consider discontinuous initial data u_0. In particular we show first-order convergence of the scheme in L^1 norm (i.e., an error bounded by a constant times Dx, where Dx is the mesh size) towards the viscosity solution. We also illustrate the nondiffusive behavior of the scheme on some numerical examples. A corresponding preprint is under preparation.
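For contrast, here is the classical monotone upwind (Godunov-type) scheme for the same class of equations, on the model case f(x,a) = a with a in {-1, +1}, i.e. u_t + |u_x| = 0 (an illustrative baseline only: this is the diffusive kind of scheme that antidiffusive methods such as HJB-UltraBee are designed to improve upon).

```python
import numpy as np

L_dom, nx = 2.0, 201
dx = 2 * L_dom / (nx - 1)
x = np.linspace(-L_dom, L_dom, nx)
u = np.abs(x)                       # initial data u_0(x) = |x|
t_final, dt = 0.5, 0.4 * dx         # CFL dt <= dx keeps the scheme monotone

t = 0.0
while t < t_final - 1e-12:
    step = min(dt, t_final - t)
    dm = (u[1:-1] - u[:-2]) / dx    # backward difference D- u
    dp = (u[2:] - u[1:-1]) / dx     # forward difference  D+ u
    # Godunov numerical Hamiltonian for H(p) = |p| = max_a (a p), |a| <= 1
    ham = np.maximum(np.maximum(dm, -dp), 0.0)
    u[1:-1] -= step * ham
    t += step

# Hopf-Lax formula: the exact viscosity solution is max(|x| - t, 0).
exact = np.maximum(np.abs(x) - t_final, 0.0)
```

The scheme converges to the viscosity solution but visibly smears the kinks of the exact solution over several cells, which is precisely the numerical diffusion the antidiffusive scheme avoids.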
Let us stress that our scheme is explicit and nonmonotone (not even monotone in the sense of R. Abgrall ). As far as we know, few nonmonotone schemes have been proved to converge for HJ equations (see also Lions and Souganidis for an implicit nonmonotone scheme).
We have continued the study of splitting algorithms for solving the HJB equation of stochastic control. We have clarified the issue of monotonicity and consistency of such algorithms. Unfortunately, it appears that monotonicity and consistency occur only under quite restrictive hypotheses.
In the framework of the thesis of S. Maroso, we have studied an implementable scheme for solving the HJB equation of stochastic impulse control problems. Our scheme is based on the cascade approach that we already used in  to study error estimates for the numerical approximation of the impulse HJB equation. At each step of our algorithm, we have to solve an obstacle problem. We suggest performing this resolution by doing a given number of iterations of the policy algorithm (also called Howard's algorithm).
We also study some convergence results for the policy algorithm. More precisely, we give a simple proof of superlinear convergence of the policy iterations when applied to solve a problem of the following form: find the vector X in IR^N such that max_a ( A(a) X - f(a) ) = 0, where a ranges over a compact set, A(a) is a monotone matrix, and f(a) is a vector in IR^N. The main idea of our proof is the formulation of Howard's algorithm as a semismooth Newton method applied to finding the zero of the function F defined by F(X) = max_a ( A(a) X - f(a) ). We also prove that the function F is differentiable in a weak sense (slant differentiability).
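A runnable sketch of Howard's algorithm in this finite setting follows; the data (tridiagonal diagonally dominant matrices, hence monotone M-matrices, with random right-hand sides) are invented for illustration and are not taken from the cited work.

```python
import numpy as np

# Howard's algorithm for: find X in R^N with
#   max_a (A(a) X - f(a)) = 0  componentwise,  a in a finite control set.
rng = np.random.default_rng(0)
N, n_controls = 6, 3

A_list, f_list = [], []
for a in range(n_controls):
    A = (np.diag(np.full(N, 2.0 + a))
         + np.diag(np.full(N - 1, -0.5), 1)
         + np.diag(np.full(N - 1, -0.5), -1))   # monotone M-matrix
    A_list.append(A)
    f_list.append(rng.normal(size=N))

def F(X):
    """The slantly differentiable map whose zero Howard's algorithm finds."""
    return np.max([A @ X - f for A, f in zip(A_list, f_list)], axis=0)

policy = np.zeros(N, dtype=int)
for _ in range(100):
    # Policy evaluation: row i taken from the matrix of control policy[i].
    M = np.array([A_list[policy[i]][i] for i in range(N)])
    b = np.array([f_list[policy[i]][i] for i in range(N)])
    X = np.linalg.solve(M, b)
    # Policy improvement: componentwise argmax of the residuals.
    residuals = np.array([A @ X - f for A, f in zip(A_list, f_list)])
    new_policy = np.argmax(residuals, axis=0)
    if np.array_equal(new_policy, policy):
        break                       # fixed policy, hence F(X) = 0
    policy = new_policy
```

Each iteration is exactly one semismooth Newton step on F: the linear solve uses the "slope" selected by the current policy, which is the mechanism behind the superlinear convergence mentioned above.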
On the other hand, we prove that Howard's algorithm, used for solving an obstacle problem of the form min( M X - b, X - g ) = 0 (g being the obstacle), is strictly equivalent to the primal-dual algorithm introduced by Ito and Kunisch . For more details, see Chapter 3.
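Assuming the obstacle problem is written in the complementarity form min(M X - b, X - g) = 0 (an assumption about notation; g denotes the obstacle), the equivalence can be sketched on synthetic data: the policy of Howard's algorithm is precisely the active set of a primal-dual iteration.

```python
import numpy as np

# Policy (= primal-dual active set) iteration for min(M X - b, X - g) = 0.
N = 8
M = (np.diag(np.full(N, 2.0))
     + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1))   # monotone M-matrix
rng = np.random.default_rng(1)
b = rng.normal(size=N)
g = np.zeros(N)                             # obstacle: X >= g = 0

active = np.zeros(N, dtype=bool)   # rows where the branch X = g is taken
for _ in range(50):
    # Policy evaluation: (M X - b)_i = 0 on inactive rows, X_i = g_i on
    # active rows; the mixed system is again an invertible M-type matrix.
    K, rhs = M.copy(), b.copy()
    idx = np.flatnonzero(active)
    K[idx, :] = 0.0
    K[idx, idx] = 1.0
    rhs[idx] = g[idx]
    X = np.linalg.solve(K, rhs)
    # Policy improvement: each row picks its smaller branch.
    new_active = (X - g) < (M @ X - b)
    if np.array_equal(new_active, active):
        break
    active = new_active
```

At termination the active set is exactly the contact set {X = g}, and the two branches satisfy the complementarity condition up to linear-solver precision.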
In a financial market consisting of a non-risky asset and some risky assets, one is interested in the minimal initial capital needed in order to super-replicate a given contingent claim under gamma constraints. Many authors have studied this problem from a theoretical point of view , , .
In collaboration with O. Bokanowski, we have studied the associated HJB equation, where J is a symmetric matrix differential operator associated with the Hamiltonian, and lambda^-(J) denotes the smallest eigenvalue of J.
The advantage of the above HJB equation lies in the fact that the operator J does not depend on the control variable, but the "non-standard" form of the equation could lead one to think that it is not useful. However, by standard calculations, we obtain a simple reformulation of ( ), in which the new variable can be seen as a bounded control variable.
We study an approximation scheme for equation ( ) based on the generalized finite differences algorithm introduced in , . We prove the existence and uniqueness of a bounded discrete solution. We also verify the monotonicity and stability of the scheme. Moreover, we give a consistency error estimate. Then, using the same arguments as in , we prove the convergence of the discrete solutions towards the value function when the discretization step size tends to 0.
A preliminary version of this work is presented in the thesis of S. Maroso, while a complete version will be submitted as an INRIA report.
The Unit Commitment Problem (UCP) consists in defining the minimal-cost power generation schedule for a given set of power plants. Due to many complex constraints, the UCP, even in its deterministic version, is a challenging large-scale, nonconvex, nonlinear optimization problem, but there now exist efficient tools to solve it. For a very short-term horizon, the deterministic UCP is satisfactory; it is currently used for daily scheduling in an industrial way. For the two- to four-week time horizon we are concerned with, uncertainty becomes significant and can no longer be ignored, making it necessary to treat the UCP as a stochastic problem. Dealing with uncertainty introduces a level of complexity that is an order of magnitude higher than in the deterministic case. Thus, there is a need to design new stochastic optimization techniques and models that are implementable in an industrial context.
Contribution. Among possible tools, robust optimization offers promising opportunities. It has the very attractive property of leading to computationally tractable problems. To investigate this new approach, we focused our investigations on integrating the uncertainty on water inflows into the management of a hydraulic valley. To account for the progressive unfolding of uncertainty and the opportunity to take corrective, or recourse, actions, we modeled future decisions as linear functions of observed past inflows. These so-called linear decision rules restrict the field of possible future recourse but still capture a good deal of the adaptive feature of real-time management. We implemented the robust optimization approach on a small but representative valley with a one-week horizon. We simulated the performance of the obtained policy on a large sample of randomly generated scenarios. In view of the lack of readily available alternatives, we benchmarked the robust optimization against a simple deterministic policy with daily revision. The latter approach is quite close to operational practice: the one-day-ahead controls are those obtained by optimizing a one-week deterministic model, and the model is revised on a daily basis to account for the actual water levels in the reservoirs. We could have used a similar periodic review scheme with the robust optimization approach but, even though it is perfectly implementable at the operational level, it turned out to be computationally too expensive in the extensive simulation runs. This put the robust optimization at a clear disadvantage with respect to the deterministic policy with periodic review. Nevertheless, the robust approach reduces violations of the volume constraints by 75% to 95%. This improvement is obtained at the expense of a modest 0.5% increase in production cost in comparison with the cost of the deterministic technique. Two EDF internal reports are in progress ( , ).
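A minimal sketch of the linear-decision-rule idea follows (all numbers are illustrative placeholders, not the EDF valley data): the release u_t of a single reservoir is an affine function of past observed inflows, u_t = d_t + sum over s &lt; t of E[t,s] (w_s - w_bar), and the rule is evaluated by simulating volume trajectories over randomly generated inflow scenarios.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 7                                   # one-week horizon, daily steps
v0, v_min, v_max = 50.0, 20.0, 100.0    # initial volume and volume bounds
w_bar = 10.0                            # average daily inflow

# Hypothetical rule: release the average inflow plus half of yesterday's
# inflow surplus (a robust optimization model would choose d and E).
d = np.full(T, w_bar)
E = np.diag(np.full(T - 1, 0.5), k=-1)

def simulate(w):
    """Volume trajectory under the inflow scenario w (shape (T,))."""
    u = d + E @ (w - w_bar)
    return v0 + np.cumsum(w - u)

# Monte Carlo evaluation of volume-constraint violations, as in the
# benchmark described above.
scenarios = rng.gamma(shape=4.0, scale=2.5, size=(1000, T))  # mean w_bar
violated = sum(np.any((v < v_min) | (v > v_max))
               for v in map(simulate, scenarios))
rate = violated / len(scenarios)
```

The point of the affine restriction is visible here: once d and E are fixed decision variables, the robust counterpart of the volume constraints stays a tractable (linear) problem in them.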
In the next year, we will study the problem of management of the production margin, defined as the balance between supply and demand. This problem can be formulated as finding optimal decisions, according to an economic criterion, for hedging against supply shortage or for selling a positive production margin. An EDF internal report is in progress ( ).
We are interested in the use of bundle methods to solve large-scale mixed-integer problems, possibly nonconvex. Specifically, we consider optimization problems arising in electrical power management. Given an electric generation mix, the aim is to minimize production costs subject to the operating constraints of the generation units and other external constraints, such as network flow capacities. Many different problems fit such a broad framework. In particular, the time horizon chosen for the scheduling largely determines the specificity of the problem. Short-, middle- and long-term decisions have their own peculiarities that need to be reflected in the modeling. While short-term problems are generally modeled in a deterministic framework, for longer terms the inherent uncertainties may result in poor solutions if a deterministic model is still used. Consider for instance the French case, where winter demand has uncertainties reaching up to several thousands of MW. Comparing this value to typical peak loads (70000 MW), we see that for the modeling to yield any significant values, it must explicitly incorporate the stochastic nature of the problem. For the mid-term problem we are interested in solving, uncertainty is represented by a scenario tree, composed of many nodes representing all possible values of the demand at each given time step.
Instead of dualizing all the constraints at once, an alternative approach is to choose at each iteration a subset of constraints to be dualized. In this dynamic relaxation, the subsets J have cardinality |J| much smaller than the original number of constraints. As a result, the corresponding dual function is manageable from the nondifferentiable optimization point of view. Such a dynamic relaxation could be done according to some rule depending, for example, on which multipliers are active (i.e., by analyzing at which nodes and blocks the demand is not satisfied). From the dual point of view, this approach yields multipliers with varying dimensions and a dual objective function that changes along the iterations.
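The mechanics of the working set can be sketched on a toy problem (synthetic data; the actual work applies bundle methods to the unit-commitment Lagrangian, not this plain subgradient loop): only the constraints in a small set J of the coupling system A x <= b carry multipliers, and J grows with the constraints found violated at the current point.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 15
c = rng.uniform(-1.0, 1.0, size=n)
A = rng.uniform(0.0, 1.0, size=(m, n))
b = np.full(m, 2.5)                 # fairly tight coupling constraints

J = []        # working set of dualized constraints (|J| << m)
lam = {}      # one multiplier per constraint in J
for it in range(200):
    # The Lagrangian subproblem over the box [0,1]^n separates:
    # x_i = 1 exactly when its reduced cost is negative.
    red = c.copy()
    for j in J:
        red += lam[j] * A[j]
    x = (red < 0.0).astype(float)
    viol = A @ x - b
    # Dynamic relaxation rule: dualize the most violated constraint, if new.
    k = int(np.argmax(viol))
    if viol[k] > 1e-9 and k not in lam:
        J.append(k)
        lam[k] = 0.0
    # Projected subgradient step on the working-set multipliers only.
    step = 1.0 / (it + 1)
    for j in J:
        lam[j] = max(0.0, lam[j] + step * viol[j])
```

The dual function seen by the method thus changes dimension as J grows, which is exactly the varying-dimension feature noted above.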
Work in progress.
We are finalizing a contract with CNES whose subject is the optimization of launcher trajectories when the atmospheric flight may have a singular arc.
We have two contracts with EDF, related to the CIFRE theses of G. Emiel and R. Apparigliato.
We have cooperation agreements with Onera and CNRS concerning studies on orbit transfer for low-thrust satellites, and on optimal trajectories for future launchers.
With Claudia Sagastizábal, IMPA, Rio de Janeiro: we are currently analysing some approaches to stochastic programming, with application to electricity production.
C. Sagastizábal and Mikhail Solodov (IMPA, Brazil), Hector Ramirez-Cabrera (DIM, Chile), Pablo Lotito (Argentina).
F. Bonnans
Associate Professor, Ecole Polytechnique (50 h), and course on Continuous Optimization, Mastère de Math. et Applications, Filière "OJME" (Optimisation, Jeux et Modélisation en Economie), Université Paris VI (18 h).
CIMPA School on Applied Mathematics, August 28-September 8, 2006, Castro Urdiales, Cantabria (Spain), a satellite activity of the International Congress of Mathematicians 2006. Course on "Optimal control of ordinary differential equations" (7h).
A. Hermant
Exercise sessions (8h) and computer labs (4h), "Quadratic Optimization" course, first year at ENSTA.
Computer labs (8h), "Control of PDEs" course, third year at ENSTA.
S. Maroso
Supervision of numerical work (4h) and exercises (8h), "Quadratic Optimization" course, first year at ENSTA.
Supervision of exercises (10h), "Differentiable Optimization" course, second year at ENSTA.
H. Zidani, Professor at ENSTA (70h)
"Quadratic Optimization" course, first year at ENSTA.
"Front propagation" course, third year at ENSTA.
"Optimal Control and Hamilton-Jacobi-Bellman Equations" course, third year at ENSTA.
N. Megdich
Supervision of numerical work (4h) and exercises (8h), "Quadratic Optimization" course, first year at ENSTA.
Course "Linear Control Systems" (30h), Master (1st year) University of ParisSud XI.
The Veszprém Optimization Conference: Advanced Algorithms (VOCAL), December 13-15, 2006, Veszprém, Hungary. F. Bonnans, invited plenary speaker.
Eccomas 2006, Egmond aan Zee, The Netherlands, September 5-8, 2006. F. Bonnans, member of the Scientific Committee.
XIX ISMP: International Symposium on Mathematical Programming, Rio de Janeiro, July 30-August 4, 2006. F. Bonnans, International Programme Committee.
G. Emiel (attendee).
Talk by F. Bonnans: "Second-Order Optimality Conditions and Sensitivity Analysis for State-Constrained Optimal Control Problems".
New Trends in Viscosity Solutions and Nonlinear PDE, Lisbon, July 24-28.
Talks by S. Maroso: "Error estimates for stochastic control problem with unbounded control", and by N. Megdich: "An antidissipative fast method for control problems with state constraints".
ROCOND: Robust Control Design, July 5-7, Toulouse. F. Bonnans, member of the Scientific Committee.
Workshop on Advances in Continuous Optimization, Reykjavik, Iceland, June 30-July 1, 2006. Talk by A. Hermant: "No-gap Second-order Optimality Conditions for State Constrained Optimal Control Problems".
Minisymposium "Quantitative Methods for Hamilton-Jacobi Equations and Applications", Torino, July 4, 2006. H. Zidani, invited speaker.
EURO XXI: 21st European Conference on Operational Research, Reykjavik, Iceland, July 2-5, 2006. Talk by F. Bonnans: "Fast computation of the least-core and prenucleolus of cooperative games".
13th IFAC Workshop on Control Applications of Optimisation (CAO'06), April 26-28, 2006, ENS Cachan, France. F. Bonnans, co-chair of the International Programme Committee.
Seminar of the 3rd Romand cycle of Operational Research, organized by the Swiss Association of Operational Research, Federal Polytechnic School of Lausanne, HEC Genève and University of Fribourg, March 2006. R. Apparigliato, attendee. http://roso.epfl.ch/3emecycle/
Workshop "Propagation de fronts et applications", CERMICS-ENPC, March 7, 2006. H. Zidani, invited speaker.
Conf. AMAMEF (finance), Rocquencourt, February 13, 2006. Talk by S. Maroso: "Error estimates for a stochastic impulse control problem".
Seminar at Tours. Talk by S. Maroso: "Numerical analysis of a stochastic control problem with unbounded control".
Seminar of Optimization, IMPA, Rio de Janeiro.
Talk by G. Emiel "Solving the midterm production planning problem of energy systems".