Commands is a team with a global view on dynamic optimization in its various aspects: trajectory optimization, geometric control, deterministic and stochastic optimal control, stochastic programming, dynamic programming and the Hamilton-Jacobi-Bellman approach.
Our aim is to derive new and powerful algorithms for the numerical solution of these various problems, with applications in several industrial fields. While the numerical aspects are the core of our approach, the study of the convergence of these algorithms and the verification of their well-posedness and accuracy raise interesting and difficult theoretical questions, such as: for trajectory optimization, qualification conditions and second-order optimality conditions, well-posedness of the shooting algorithm, and estimates of discretization errors; for the Hamilton-Jacobi-Bellman approach, accuracy estimates and strong uniqueness principles in the presence of state constraints; for stochastic programming problems, sensitivity with respect to the probability laws and the formulation of risk measures.
For many years the team members have been deeply involved in various industrial applications. The Commands team itself has dealt since its foundation in 2007 with two main types of applications:
Space vehicle trajectories, in collaboration with CNES, the French space agency,
Production, management, storage and trading of energy resources, in collaboration with EDF, GDF and TOTAL.
We give more details in the Application domain section.
In the framework of our research with CNES in fast numerical methods for solving Hamilton-Jacobi-Bellman equations, we were able to build an efficient numerical code for optimizing the trajectory of the European launcher Ariane 5, with maximal payload and under a structural constraint on dynamic pressure.
For deterministic state-constrained optimal control problems we were able to provide a better understanding of the well-posedness and numerical properties of the shooting algorithm . This algorithm has been applied to the optimization of an atmospheric reentry problem in .
For deterministic optimal control we will distinguish two approaches: trajectory optimization, in which the object under consideration is a single trajectory, and the Hamilton-Jacobi-Bellman approach, based on the dynamic programming principle, in which a family of optimal control problems is solved.
The roots of deterministic optimal control are the “classical” theory of the calculus of variations, illustrated by the work of Newton, Bernoulli, Euler, and Lagrange (whose famous multipliers were introduced in ), with improvements due to the “Chicago school”, Bliss during the first part of the 20th century, and by the notion of relaxed problem and generalized solution (Young ).
Trajectory optimization really started with the spectacular achievement of Pontryagin's group during the fifties, who stated, for general optimal control problems, nonlocal optimality conditions generalizing those of Weierstrass. This motivated the application to many industrial problems (see the classical books by Bryson and Ho , Leitmann , Lee and Markus , Ioffe and Tihomirov ). Since then, various theoretical achievements have been obtained by extending the results to nonsmooth problems, see Aubin , Clarke , Ekeland . Substantial improvements were also obtained by using tools of differential geometry, which concern a precise understanding of optimal syntheses in low dimension for large classes of nonlinear control systems, see Bonnard, Faubourg and Trélat .
Overviews of numerical methods for trajectory optimization are provided in Pesch , Betts . We follow here the classical presentation that distinguishes between direct and indirect methods.
Dynamic programming was introduced and systematically studied by R. Bellman during the fifties. The HJB equation, whose solution is the value function of the (parameterized) optimal control problem, is a variant of the classical Hamilton-Jacobi equation of mechanics for the case of dynamics parameterized by a control variable. It may be viewed as a differential form of the dynamic programming principle. This nonlinear first-order PDE appears to be well-posed in the framework of viscosity solutions introduced by Crandall and Lions , , . These tools also allow one to perform the numerical analysis of discretization schemes. The theoretical contributions in this direction have kept growing, see the books by Barles and by Bardi and Capuzzo-Dolcetta .
An interesting by-product of the HJB approach is an expression of the optimal control in feedback form. It also reaches the global optimum, whereas trajectory optimization algorithms are of local nature. A major difficulty when solving the HJB equation is the high cost for a large dimension n of the state (the complexity is exponential with respect to n).
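The dynamic programming principle behind the HJB approach, and the exponential complexity just mentioned, can be illustrated by a small backward-induction sketch; the grids, toy dynamics and costs below are purely illustrative:

```python
import numpy as np

# Backward dynamic programming sketch on a grid (all data illustrative):
# dynamics x' = u, running cost x^2 + u^2, horizon N*dt, explicit Euler.
dt, N = 0.1, 20
xs = np.linspace(-1.0, 1.0, 41)          # state grid
us = np.linspace(-1.0, 1.0, 21)          # control grid
V = np.zeros_like(xs)                    # zero terminal cost

for _ in range(N):                       # Bellman backward induction
    Vnew = np.empty_like(V)
    for i, x in enumerate(xs):
        xnext = x + dt * us              # next state for every control
        # running cost + interpolated value at the next state
        # (np.interp clamps at the grid boundary)
        Vnew[i] = np.min(dt * (x**2 + us**2) + np.interp(xnext, xs, V))
    V = Vnew
# each sweep costs (grid points) * (controls); an n-dimensional state
# grid has size m^n per sweep, hence the exponential complexity in n
```

Each sweep is cheap in dimension one, but the state grid itself grows as m^n in the state dimension n, which is the curse of dimensionality referred to above.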
The so-called direct methods consist in optimizing the trajectory, after a time discretization, by a nonlinear programming solver that possibly takes into account the dynamic structure. The two main issues are thus the choice of the discretization and of the nonlinear programming algorithm. A third issue is the possibility of refining the discretization after solving on a coarser grid.
Many authors prefer a coarse discretization of the control variables (typically constant or piecewise linear on each time step) and a higher-order discretization of the state equation. The idea is both to have an accurate discretization of the dynamics (since otherwise the numerical solution may be meaningless) and to obtain a small-scale nonlinear programming problem. See e.g. Kraft . A typical situation is when a few dozen time steps are enough and there are no more than five controls, so that the resulting NLP has at most a few hundred unknowns and can be solved with dense-matrix software. On the other hand, the error order (assuming the problem to be unconstrained) is governed by the (poor) control discretization. Note that the integration scheme need not be specified (provided it computes functions and gradients with enough precision), so that general Ordinary Differential Equation integrators may be used.
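A minimal sketch of such a direct method; the problem data, the left-rectangle cost rule and the solver choice are illustrative, not the team's actual setup:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal direct-method sketch (toy data): one constant control per time
# step, RK4 integration of the state equation x' = -x + u, and a generic
# NLP solver over the N control values.
# Cost: integral of x^2 + u^2 over [0, T], with x(0) = 1.
N, T = 20, 2.0
h = T / N

def cost(u):
    x, J = 1.0, 0.0
    for uk in u:
        f = lambda y: -y + uk                 # dynamics on this step
        k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
        J += h * (x**2 + uk**2)               # left-rectangle rule
        x += h/6 * (k1 + 2*k2 + 2*k3 + k4)    # one RK4 step
    return J

res = minimize(cost, np.zeros(N), method="BFGS")  # small dense NLP
```

With 20 steps and one control, the NLP has 20 unknowns; this is the "small-scale, dense-matrix" regime described above, and the accuracy is limited by the piecewise-constant control rather than by the RK4 state integration.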
On the other hand, a full discretization (i.e., in the context of Runge-Kutta methods, with different values of the control for each inner substep of the scheme) allows higher orders to be effectively computed, see Hager , Bonnans , in relation with the theory of partitioned Runge-Kutta schemes, Hairer et al. . In an interior-point algorithm context, the controls can be eliminated and the resulting system of equations is easily solved thanks to its band structure. Discretization errors due to constraints are discussed in Dontchev et al. . See also Malanowski et al. .
For large-horizon problems, integrating from the initial time to the final time may be impossible (finding a feasible point can be very hard!). By analogy with the indirect multiple shooting algorithm, a possibility is to add to the optimization parameters, besides the control variables, the state variables at a given set of times, subject of course to "sticking" (matching) constraints. Note that, once more, the integration scheme need not be specified, and the integration of the ODE can be performed in parallel. See Bock .
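This direct multiple shooting idea can be sketched on a hypothetical toy problem: the junction states become extra unknowns, glued by matching equality constraints, so each subinterval is integrated independently:

```python
import numpy as np
from scipy.optimize import minimize

# Direct multiple shooting sketch (illustrative toy problem):
# x' = u, x(0) = 1, minimize the integral of x^2 + u^2 over [0, 2].
M, S, T = 4, 5, 2.0              # M shooting intervals, S Euler steps each
h = T / (M * S)

def propagate(x, u):             # Euler integration over one subinterval
    J = 0.0
    for uk in u:
        J += h * (x**2 + uk**2)
        x = x + h * uk
    return x, J

def unpack(z):                   # z = [M*S controls, M junction states]
    return z[:M*S].reshape(M, S), z[M*S:]

def cost(z):
    u, nodes = unpack(z)
    starts = np.concatenate(([1.0], nodes[:-1]))
    return sum(propagate(starts[j], u[j])[1] for j in range(M))

def defects(z):                  # "sticking": integrated state == next node
    u, nodes = unpack(z)
    starts = np.concatenate(([1.0], nodes[:-1]))
    return np.array([propagate(starts[j], u[j])[0] - nodes[j]
                     for j in range(M)])

res = minimize(cost, np.zeros(M*S + M), method="SLSQP",
               constraints={"type": "eq", "fun": defects})
```

The M calls to `propagate` are independent given the junction states, which is what makes the parallel integration mentioned above possible.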
Methods have recently been proposed that reformulate the problem in terms of (possibly flat) output variables. By definition, the control and state variables are then combinations of derivatives of these output variables. When the latter are represented on a basis of smooth functions such as polynomials, their derivatives are given by linear combinations of the coefficients, so that the need for integration is avoided. One must of course take care of the possibly complicated expression of the constraints, which can cause numerical difficulties. The numerical analysis of these methods seems largely open. See on this subject Petit, Milam and Murray .
The collocation approach for solving an ODE consists in a polynomial interpolation of the dynamic variable, the dynamic equation being enforced only at a limited number of points (equal to the degree of the polynomial). Collocation can also be performed on each time step of a one-step method; it can be checked that collocation methods are a particular case of Runge-Kutta methods.
It is known that polynomial interpolation at equidistant points is unstable beyond about 20 points, and that Chebyshev points should be preferred, see e.g. Section 5.2.6. Nevertheless, several papers suggested the use of pseudospectral methods, Ross and Fahroo , in which a single (over time) high-order polynomial approximation is used for the control and the state. Pseudospectral methods should therefore not be used in the case of nonsmooth (e.g. discontinuous) controls.
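The instability of equidistant interpolation, and the benefit of Chebyshev points, can be checked numerically; Runge's function below is the standard illustrative example:

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Interpolate Runge's function at 31 equidistant vs Chebyshev points
# and compare the maximal errors on a fine grid.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 30                                         # polynomial degree
xfine = np.linspace(-1.0, 1.0, 1001)

def max_error(nodes):
    p = BarycentricInterpolator(nodes, f(nodes))   # stable evaluation
    return np.max(np.abs(p(xfine) - f(xfine)))

err_equi = max_error(np.linspace(-1.0, 1.0, n + 1))
err_cheb = max_error(np.cos((2*np.arange(n + 1) + 1) * np.pi / (2*n + 2)))
# err_equi blows up (Runge phenomenon) while err_cheb stays small
```

The equidistant error is several orders of magnitude larger than the Chebyshev one, which is why pseudospectral schemes rely on Chebyshev-type collocation points.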
In view of model and data uncertainties there is a need for robust solutions. Robust optimization has been a subject of increasing importance in recent years see Ben-Tal and Nemirovski . For dynamic problems taking the worst-case of the perturbation at each time-step may be too conservative. Specific remedies have been proposed in specific contexts, see Ben-Tal et al. , Diehl and Björnberg .
A relatively simple method taking into account robustness, applicable to optimal control problems, was proposed in Diehl, Bock and Kostina .
The dominant algorithms (for optimal control problems as well as for other fields) have successively been the augmented Lagrangian approach (1969, due to Hestenes and Powell , see also Bertsekas ), successive quadratic programming (SQP, late seventies, due to , ), and interior-point algorithms since 1984, Karmarkar . See the general textbooks on nonlinear programming , , .
When ordered by time, the optimality system has a "band structure". Interior-point algorithms can easily take advantage of this, whereas it is not so easy for SQP methods; see Berend et al. . There exist very reliable SQP software packages such as SNOPT, some of them dedicated to optimal control problems, Betts , as well as robust interior-point software, see Morales et al. , Wächter and Biegler , and for applications to optimal control Jockenhövel et al. .
We have developed a general SQP algorithm , for sparse nonlinear programming problems, and the associated software for optimal control problems; it has been applied to atmospheric reentry problems, in collaboration with CNES .
More recently, in collaboration with CNES and ONERA, we have developed a sparse interior-point algorithm with an embedded refinement procedure. The resulting TOPAZE code has been applied to various space trajectory problems , , . The method takes advantage of the analysis of discretization errors, which is well understood for unconstrained problems .
The indirect approach eliminates the control variables using Pontryagin's maximum principle, and solves the two-point boundary value problem (whose differential variables are the state and costate) by a single or multiple shooting method. The questions here are the choice of a discretization scheme for the integration of the boundary value problem, of a (possibly globalized) Newton-type algorithm for solving the resulting finite-dimensional problem in R^n (n being the number of state variables), and of a methodology for finding an initial point.
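A minimal single shooting sketch on an illustrative linear-quadratic problem, where Pontryagin's principle gives u = -p, so the only shooting unknown is the initial costate:

```python
import numpy as np

# Single shooting sketch (illustrative problem): minimize
# 0.5 * integral of x^2 + u^2, with x' = u, x(0) = 1, x(T) free.
# Pontryagin: u = -p, so the state-costate system is
#   x' = -p,  p' = -x,  x(0) = 1,  p(T) = 0,
# and Newton's method solves the shooting equation p(T) = 0 in p(0).
T, steps = 2.0, 4000
h = T / steps

def shoot(p0):                    # integrate, return the final costate
    x, p = 1.0, p0
    for _ in range(steps):        # explicit Euler on the Hamiltonian system
        x, p = x - h * p, p - h * x
    return p

p0 = 0.0
for _ in range(10):               # Newton with finite-difference derivative
    F = shoot(p0)
    dF = (shoot(p0 + 1e-6) - F) / 1e-6
    p0 -= F / dF
# for this problem the exact initial costate is p(0) = tanh(T)
```

Even on this well-behaved linear problem the state-costate system has an unstable mode (eigenvalues ±1), which hints at the integration difficulties discussed below for stiff or constrained problems.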
The choice of the discretization scheme for the numerical integration of the boundary value problem can have a great impact on the convergence of the method. First, the integration itself can be tricky. If the state equation is stiff (the linearized system has fast modes), then the state-costate system has both fast and unstable modes. Also, discontinuities of the control or of its derivative, due to commutations or changes in the set of active constraints, lead to the use of sophisticated variable-step integrators and/or switching detection mechanisms, see Hairer et al. , . Another point is the computation of gradients for the Newton method, for which basic finite differences can give inaccurate results with variable-step integrators (see Bock ). This difficulty can be treated in several ways, such as the so-called "internal differentiation" or the use of variational equations, see Gergaud and Martinon .
Most optimal control problems include control and state constraints. In that case the formulation of the TPBVP must take into account entry and exit times of boundary arcs for these constraints, and (for state constraints of order at least two) times of touch points (isolated contact points). In addition, for state-constrained problems, the so-called "alternative formulation" (which allows one to eliminate the "algebraic" variables, i.e. control and state, from the algebraic constraints) has to be used, see Hartl, Sethi and Vickson .
Another interesting point is the presence of singular arcs, appearing for instance when the control enters the system dynamics and the cost function linearly, which is common in practical applications. As for state constraints, the formulation of the boundary value problem must take these singular arcs into account; over them, the expression of the optimal control typically involves higher derivatives of the Hamiltonian, see Goh and Robbins .
As mentioned before, finding a suitable initial point can be extremely difficult for indirect methods, due to the small convergence radius of the Newton type method used to solve the boundary value problem. Homotopy methods are an effective way to address this issue, starting from the solution of an easier problem to obtain a solution of the target problem (see Allgower and Georg ). It is sometimes possible to combine the homotopy approach with the Newton method used for the shooting, see Deuflhard .
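The continuation idea can be sketched on a scalar toy equation (the nonlinearity below is illustrative): a cold Newton start fails for the stiff target problem, while warm-starting along the homotopy path succeeds.

```python
import numpy as np

# Homotopy sketch: solve tanh(k*(x - 1)) = 0 for a large stiffness k.
# Newton started cold at x = 0 diverges for large k (the residual
# saturates), but following the solution branch from k = 1 to k = 50,
# warm-starting each solve from the previous solution, reaches x = 1.

def newton(x, k, iters=50):
    for _ in range(iters):
        F = np.tanh(k * (x - 1.0))
        dF = k * (1.0 - F**2)          # derivative of tanh(k*(x-1))
        x -= F / dF
    return x

x = 0.0
for k in np.linspace(1.0, 50.0, 30):   # continuation in the parameter k
    x = newton(x, k)                   # warm start from the previous step
```

Shooting homotopies work the same way: the parameter k is replaced by a physical or penalty parameter, and the previous shooting solution initializes the next Newton solve.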
With a given value of the initial costate are associated (through an integration of the reduced state-costate system) a control and a state, and therefore a cost function value. The latter can then be minimized by ad hoc minimization algorithms, see Dixon and Bartholomew-Biggs . The advantage of this point of view is the possibility of using various descent methods in order to avoid convergence to a local maximum or saddle point. The extension of this approach to constrained problems (especially in the case of state constraints) is an open and difficult question.
We have recently clarified under which properties shooting algorithms are well-posed in the presence of state constraints. The (difficult) analysis was carried out in , . A related homotopy algorithm, restricted to the case of a single first-order state constraint, has been proposed in .
We also conducted a study of optimal trajectories with singular arcs for space launcher problems. The results obtained for the generalized three-dimensional Goddard problem (see ) have been successfully adapted for the numerical solution of a realistic launcher model (Ariane 5 class).
Furthermore, we continue to investigate the effects of the numerical integration of the boundary value problem and the accurate computation of Jacobians on the convergence of the shooting method. As initiated in , we focus more specifically on the handling of discontinuities, with ongoing work on the geometric integration aspects (Hamiltonian conservation).
Geometric approaches have succeeded in giving a precise description of the structure of optimal trajectories, as well as in clarifying related questions. For instance, many authors (Krener, Schättler, Bressan, Sussmann, Bonnard, Kupka, Ekeland, Agrachev, Sigalotti, etc.) have worked on a geometric description of the set of attainable points. It has been proved, in particular, by Krener and Schättler that, for generic single-input control-affine systems in R^3 where the control satisfies the constraint |u| ≤ 1, the boundary of the accessible set in small time consists of the surfaces generated by the trajectories x+x- and x-x+, where x+ (resp. x-) is an arc corresponding to the control u = 1 (resp. u = -1); moreover, every point inside the accessible set can be reached with a trajectory of the form x-x+x- or x+x-x+. It follows that minimal time trajectories of generic single-input control-affine systems in R^3 are locally of the form x-x+x- or x+x-x+, i.e., are bang-bang with at most two switchings.
This kind of result has recently been slightly improved by Agrachev and Sigalotti, although they do not take possible state constraints into account.
In , we have extended that kind of result to the case of state constraints: we described a complete classification, in terms of the geometry (Lie configuration) of the system, of local minimal time syntheses, in dimension two and three. This theoretical study was motivated by the problem of atmospheric re-entry posed by the CNES, and in , we showed how to apply this theory to this concrete problem, thus obtaining the precise structure of the optimal trajectory.
This approach consists in computing the value function associated with the optimal control problem, and then synthesizing the feedback control and the optimal trajectory using Pontryagin's principle. The method has the particular advantage of directly reaching the global optimum, which can be very interesting when the problem is not convex.
From the dynamic programming principle, we derive a characterization of the value function as a solution (in the viscosity sense) of a Hamilton-Jacobi-Bellman equation, which is a nonlinear PDE of dimension equal to the number n of state variables. Since the pioneering works of Crandall and Lions , , , many theoretical contributions have been carried out, allowing an understanding of the properties of the value function as well as of the set of admissible trajectories. However, an important effort remains to be made on the development of effective and adapted numerical tools, mainly because of the numerical complexity (the complexity is exponential with respect to n).
Several numerical schemes have already been studied to treat the case where the solution of the HJB equation (the value function) is continuous. Let us quote for example the semi-Lagrangian methods , studied by the team of M. Falcone (La Sapienza, Rome); the high-order ENO, WENO and discontinuous Galerkin schemes introduced by S. Osher, C.-W. Shu, E. Harten , , , , ; and the schemes on nonregular grids by R. Abgrall , . All these schemes rely on finite differences and/or interpolation techniques, which lead to numerical diffusion. Hence the numerical solution is unsatisfactory for long-time approximations, even in the continuous case.
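A semi-Lagrangian step is nothing but a discrete dynamic programming step with interpolation; a one-dimensional sketch on an illustrative equation shows where the interpolation, and hence the numerical diffusion, enters:

```python
import numpy as np

# Semi-Lagrangian sketch (illustrative 1-D example) for the HJB equation
#   v_t + |v_x| = 0,  v(T, x) = |x|,
# the value function of x' = a, |a| <= 1, with terminal cost |x|.
# The exact solution is v(t, x) = max(|x| - (T - t), 0).
T, nt = 0.5, 100
dt = T / nt
xs = np.linspace(-2.0, 2.0, 401)
v = np.abs(xs)                               # terminal condition

for _ in range(nt):                          # backward in time
    # dynamic programming step over the controls a in {-1, 0, +1}:
    # follow each characteristic one step and interpolate (np.interp
    # clamps at the boundary; the interpolation is what diffuses kinks)
    v = np.minimum(v, np.minimum(np.interp(xs - dt, xs, v),
                                 np.interp(xs + dt, xs, v)))

exact = np.maximum(np.abs(xs) - T, 0.0)
err = np.max(np.abs(v - exact))              # small, but smeared at kinks
```

The scheme is monotone and convergent, but the error concentrates at the kinks of the exact solution, which is precisely the smearing that motivates the antidiffusive schemes discussed below.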
In a realistic optimal control problem, there are often constraints on the state (reaching a target, restricting the state of the system to an acceptable domain, ...). When some controllability assumptions are not satisfied, the value function associated with such a problem is discontinuous, and the region of discontinuity is of great importance since it separates the zone of admissible trajectories from the nonadmissible zone.
In this case it is not reasonable to use the usual numerical schemes (based on finite differences) for solving the HJB equation: these schemes provide a poor approximation quality because of the numerical diffusion.
Discrete approximations of the Hamilton-Jacobi equation for an optimal control problem of a differential-algebraic system were studied in .
Numerical methods for the HJB equation in a bilevel optimization scheme where the upper-level variables are design parameters were used in . The algorithm has been applied to the parametric optimization of hybrid car engines.
Within the framework of the thesis of N. Megdich, we have studied new antidiffusive schemes for HJB equations with discontinuous data , . One of these schemes is based on the Ultra-Bee algorithm proposed, in the case of the advection equation with constant velocity, by Roe and recently revisited by Després and Lagoutière , . The numerical results on several academic problems show the relevance of the antidiffusive schemes. However, the theoretical study of convergence is a difficult question that has only partially been carried out.
Optimal stochastic control problems occur when the dynamical system is uncertain. A decision typically has to be taken at each time, while realizations of future events are unknown (but some information is given on their probability distribution). In particular, problems of an economic nature deal with large uncertainties (on prices, production and demand). Specific examples are portfolio selection problems in a market with risky and non-risky assets, super-replication with uncertain volatility, and the management of power resources (dams, gas). Air traffic control is another example of such problems.
By stochastic programming we mean stochastic optimal control in a discrete time (or even static) setting; see the overview by Ruszczynski and Shapiro . The static and single recourse cases are essentially well-understood; by contrast the truly dynamic case (multiple recourse) presents an essential difficulty Shapiro , Shapiro and Nemirovski . So we will speak only of the latter, assuming decisions to be measurable w.r.t. a certain filtration (in other words, all information from the past can be used).
In the standard case of the minimization of an expectation (possibly of a utility function), a dynamic programming principle holds. Essentially, this says that the decision is a function of the present state (we can ignore the past) and that a certain reverse-time induction over the associated values holds. Unfortunately, a straightforward resolution of the dynamic programming principle based on a discretization of the state space is out of reach (again, this is the curse of dimensionality). For convex problems one can build lower convex approximations of the value function: this is the stochastic dual dynamic programming (SDDP) approach, Pereira and Pinto . Another possibility is a parametric approximation of the value function; however, determining the basis functions is not easy, and identifying (or, we could say in this context, learning) the best parameters is a nonconvex problem, see however Bertsekas and J. Tsitsiklis , Munos .
A popular approach is to sample the uncertainties in the structured way of a tree (branching typically occurs at each time step). Computational limits allow only a small number of branchings, far fewer than the amount needed for an accurate solution, Shapiro and Nemirovski . Such a poor accuracy may nevertheless (in the absence of a more powerful approach) be a good way of obtaining a reasonable policy. Very often the resulting programs are linear, possibly with integer variables (on-off switches of plants, investment decisions), allowing the use of (possibly dedicated) mixed-integer linear programming codes. The tree structure (coupling variables) can be exploited by the numerical algorithms, see Dantzig and Wolfe , Kall and Wallace .
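A toy two-stage example (illustrative newsvendor-type data) shows the structure of such problems: a here-and-now decision followed by a recourse cost averaged over the branches of a one-stage tree:

```python
import numpy as np

# Two-stage toy: buy x now at unit cost c; after the demand d is
# revealed, buy any shortfall at the higher recourse price q.
# Enumerating the (one-stage) tree gives the exact expectation.
c, q = 1.0, 3.0
demands = np.array([1.0, 2.0, 3.0])            # equally likely branches

def total_cost(x):
    shortfall = np.maximum(demands - x, 0.0)   # recourse on each branch
    return c * x + q * shortfall.mean()        # here-and-now + expectation

xs = np.linspace(0.0, 3.0, 301)
best = xs[np.argmin([total_cost(x) for x in xs])]
# the optimal first-stage decision stocks up to the (1 - c/q)-quantile
# of demand; here the cost 3.0 is attained by any x in [2, 3]
```

With many stages, the tree size grows exponentially in the number of branchings per stage, which is the computational limit discussed above.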
By Monte Carlo we mean here sampling a given number of independent trajectories (of uncertainties). In the special case of optimal stopping (e.g., American options) it happens that the state space and the uncertainty space coincide. Then one can compute the transition probabilities of a Markov chain whose law approaches the original one, and then the problem reduces to the one of a Markov chain, see . Let us mention also the quantization approach, see .
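A minimal regression-based Monte Carlo sketch for optimal stopping, in the spirit of the Longstaff-Schwartz method; all market parameters below are illustrative:

```python
import numpy as np

# Regression Monte Carlo for a Bermudan put: simulate independent paths,
# then go backward in time, regressing the continuation value on the
# current state and exercising when the payoff beats it.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 1.0, 1.0, 0.05, 0.2, 1.0
nt, npaths = 50, 20000
dt = T / nt
dW = rng.normal(0.0, np.sqrt(dt), (npaths, nt))
S = S0 * np.exp(np.cumsum((r - 0.5*sigma**2)*dt + sigma*dW, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)
V = payoff(S[:, -1])                       # value at maturity
for k in range(nt - 2, -1, -1):            # backward over exercise dates
    V = V * np.exp(-r * dt)                # discount one step
    itm = payoff(S[:, k]) > 0.0            # regress on in-the-money paths
    if itm.sum() > 10:
        coef = np.polyfit(S[itm, k], V[itm], 2)       # continuation value
        cont = np.polyval(coef, S[itm, k])
        ex = payoff(S[itm, k])
        V[itm] = np.where(ex > cont, ex, V[itm])      # exercise if better
price = np.exp(-r * dt) * V.mean()         # discount back to time 0
```

The regression replaces the state-space grid of dynamic programming by a small parametric model fitted on the sampled paths, which is what keeps the method tractable in higher dimensions.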
In the general case, a useful possibility is to compute a tree by aggregating the original sample, as done in .
Maximizing the expectation of gains can lead to a solution with too high a probability of important losses (bankruptcy). In view of this, it is wise to make a compromise between expected gains and the risk of high losses. A simple and efficient way to achieve this may be to maximize the expectation of a utility function; this, however, needs ad hoc tuning. An alternative is the mean-variance compromise, presented in the case of portfolio optimization in Markowitz . A useful generalization of the variance, including asymmetric functions such as semideviations, is the theory of deviation measures, Rockafellar et al. .
Another possibility is to put a constraint on the level of gain to be obtained with a high probability value say at least 99%. The corresponding concept of value-at-risk leads to difficult nonconvex optimization problems, although convex relaxations may be derived, see Shapiro and Nemirovski .
Yet the most important contribution of the recent years is the axiomatized theory of risk measures Artzner et al. , satisfying the properties of monotonicity and possibly convexity.
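These notions are easy to estimate from samples; a sketch with illustrative Gaussian losses, showing value-at-risk as a quantile and its convex tail-average counterpart:

```python
import numpy as np

# Sample-based estimates (illustrative Gaussian losses): value-at-risk
# is the alpha-quantile of the loss distribution, and the conditional
# value-at-risk averages the losses beyond that quantile (the latter
# is convex and fits the risk-measure axioms mentioned above).
rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, 100_000)

alpha = 0.99
var = np.quantile(losses, alpha)           # 99% value-at-risk
cvar = losses[losses >= var].mean()        # expected tail loss (CVaR)
# for a standard normal, VaR ~ 2.33 and CVaR ~ 2.67 at the 99% level
```

The quantile constraint itself is nonconvex in the decision variables, whereas the tail average is convex, which is why the latter is the usual convex relaxation in optimization.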
In a dynamic setting, risk measures (over the total gains) are not coherent (they do not obey a dynamic programming principle). The theory of coherent risk measures is an answer, in which risk measures over successive time steps are inductively applied; see Ruszczyński and Shapiro . Their drawback is that they have no clear economic interpretation at the moment. Also, the associated numerical methods still have to be developed.
The study of relations between chance constraints (constraints on the probability of some event) and robust optimization is the subject of intense research. The idea is, roughly speaking, to solve a robust optimization problem (some classes of which are tractable in the sense of algorithmic complexity). See the recent work by Ben-Tal and Teboulle .
The continuous-time case can be handled with Bellman's dynamic programming principle, which leads to a characterization of the value function as the solution of a second-order Hamilton-Jacobi-Bellman equation , .
Sometimes this value function is smooth (e.g. in the case of Merton's portfolio problem, Oksendal ) and the associated HJB equation can be solved explicitly. In general, however, the value function is not smooth enough to satisfy the HJB equation in the classical sense. As in the deterministic case, the notion of viscosity solution provides a convenient framework for dealing with this lack of smoothness, see Pham ; it also happens to be well adapted to the study of discretization errors of numerical schemes , .
The numerical discretization of second-order HJB equations has been the subject of several contributions. The book of Kushner and Dupuis gives a complete synthesis of the Markov chain approximation schemes (i.e. finite differences, semi-Lagrangian, finite elements, ...). A main difficulty with these equations comes from the fact that the second-order operator (i.e. the diffusion term) is not uniformly elliptic and can degenerate. Moreover, the diffusion term (covariance matrix) may change direction at any space point and at any time (this matrix is associated with the volatility of the dynamics).
In the framework of the thesis of R. Apparigliato (to be completed at the end of 2007), we have studied the robust optimization approach to stochastic programming problems, in the case of hydroelectric production for one valley. The main difficulty lies in both the dynamic character of the system and the large number of constraints (capacity of each dam). We have also studied simplified electricity production models for respecting the "margin" constraint. In the framework of the thesis of G. Emiel, and in collaboration with CEPEL, we have studied large-scale bundle algorithms for solving (through a dual "price decomposition" method) stochastic problems for the Brazilian case.
For solving stochastic control problems, we studied the so-called Generalized Finite Differences (GFD), which allow one to choose, at any node, the stencil approximating the diffusion matrix up to a certain threshold . Determining the stencil and the associated coefficients boils down to a quadratic program to be solved at each point of the grid and for each control. This is definitely expensive, with the exception of special structures where the coefficients can be computed at low cost. For two-dimensional systems, we designed a (very) fast algorithm for computing the coefficients of the GFD scheme, based on the Stern-Brocot tree . The GFD scheme was used as a basis for the approximation of an HJB equation coming from a super-replication problem, motivated by a study conducted in collaboration with Société Générale, see .
Within the framework of the thesis of Stefania Maroso, we also contributed to the error estimates for the approximation of the Isaacs equation associated with a differential game with one player , and for the approximation of the HJB equation associated with an impulse control problem .
The field has been strongly influenced by the work of J.-L. Lions, who started the systematic study of optimal control problems for PDEs in , in relation with singular perturbation problems and ill-posed problems . A possible direction of research in this field consists in extending results from the finite-dimensional case, such as Pontryagin's principle, second-order conditions, the structure of bang-bang controls, singular arcs, and so on. On the other hand, PDEs have specific features, such as the finite speed of propagation for hyperbolic systems or the smoothing effect of parabolic systems, so that they may present qualitative properties that are deeply different from those of the finite-dimensional case.
Unilateral systems in mechanics, plasticity theory, multiphase heat (Stefan) equations, etc., are described by inequalities; see Duvaut and Lions , Kinderlehrer and Stampacchia . For an overview in a finite-dimensional setting, see Harker and Pang . Optimizing such systems often needs dedicated schemes with specific regularization tools, see Barbu , Bermúdez and Saguez . Nonconvex variational inequalities can be dealt with as well. Controllability of such systems is discussed in Brogliato et al. .
As for finite-dimensional problems, but with additional difficulties, there is a need for a better understanding of stability and sensitivity issues, in relation with the convergence of numerical algorithms. The second-order analysis of optimal control problems for PDEs is dealt with in e.g. , . Not much is known in the case of state constraints. At the same time, the convergence of numerical algorithms is strongly related to this second-order analysis.
Many models in control couple standard finite-dimensional control dynamics with partial differential equations (PDEs). For instance, a well-known but difficult problem is to optimize aircraft takeoff and landing trajectories so as to minimize, among other things, noise pollution. Noise propagation is modeled using wave-like equations, i.e., hyperbolic equations in which the signal propagates at a finite speed. By contrast, when controlling furnaces one has to deal with the heat equation, of parabolic type, which has a smoothing effect. Optimal control laws have to reflect such strong differences in the model.
Let us mention some applications where optimal control of PDEs occurs. One can study the atmospheric reentry problem with a model for heat diffusion in the vehicle. Another problem is the one of traffic flow, modeled by hyperbolic equations, with control on e.g. speed limitations. Of course control of beams, thin structures, furnaces, are important.
An overview of sensitivity analysis of optimization problems in a Banach space setting, with some applications to the control of PDEs of elliptic type, is given in the book . See also .
We studied various regularization schemes for solving optimal control problems of variational inequalities: see Bonnans and Tiba , Bonnans and Casas , Bergounioux and Zidani . The well-posedness of a "nonconvex" variational inequality modelling a mechanical equilibrium is considered in Bonnans, Bessi and Smaoui .
In Coron and Trélat , , we prove that it is possible, for both heat-like and wave-like equations, to move from any steady state to any other by means of a boundary control, provided that they are in the same connected component of the set of steady states. Our method is based on an effective feedback procedure which is easily and efficiently implementable. The first work was awarded the SIAM Outstanding Paper Prize (2006).
Dynamic optimization appears in various applied fields: mechanics, physics, biology, economics. Pontryagin's principle itself appeared in the fifties as an applied tool for mechanical engineers. Since that time, progress in the theory and in the applications has gone hand in hand, and so we are committed to developing both of them. We took part in the past few years in the following applied projects:
Aerospace trajectories - CNES, ONERA. We have a long tradition of studying aerospace trajectory optimization problems (ascent phase of launchers, reentry problem). Our main contributions in this field have been carried out in collaboration with CNES and ONERA, through either research contracts or PhD fellowships; see , , , , .
Production, storage and trading of natural gas and power resources - EDF, GDF, Total.
We have worked with EDF on the optimization of the short-term electricity production , as well as the mid-term electricity production , , , . We are starting a collaboration with TOTAL on the trading of LNG (liquefied natural gas).
SHOOT software for indirect shooting
TOPAZE code for trajectory optimization. Developed in the framework of the PhD thesis of J. Laurent-Varin, supported by CNES and ONERA. Implementation of an interior-point algorithm for multiarc trajectory optimization, with built-in refinement. Applied to several academic, launcher and reentry problems.
SOHJB code for second-order HJB equations. Developed since 2004 in C++ for solving stochastic HJB equations in dimension 2. The code is based on Generalized Finite Differences, and includes a decomposition of the covariance matrices into elementary diffusions pointing towards grid points. The implementation is very fast and was mainly tested on academic examples.
Sparse HJB-Ultrabee. Developed in C++ for solving HJB equations in dimension 4. This code is based on the Ultra-Bee scheme and an efficient storage technique with sparse matrices. The code also provides optimal trajectories for target problems. A preliminary version in Scilab was developed by N. Megdich. The current version is developed by O. Bokanowski, E. Cristiani and H. Zidani. A specific software package dedicated to space problems is also developed in C++, in the framework of a contract with CNES.
With F. Alvarez, CMM, Universidad de Chile, Santiago. Barrier methods for nonlinear optimization have proved to be very efficient. When applying them to optimal control problems a specific difficulty arises, namely the interaction between the discretization and the interior penalty. In the case of quadratic optimal control problems with uniformly strongly convex Hamiltonians and bound control constraints, we were able to give an expansion of the solution, proving that, for the logarithmic penalty with parameter ε and under standard regularity assumptions at the junction points, the error due to the penalty is of order √ε in the L∞ norm, and only ε|log ε| in the L1 norm. This is the first result on expansions of solutions of optimal control problems.
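As a toy illustration of how an interior log-penalty perturbs an active bound constraint (a hypothetical scalar analogue, not the optimal control setting above), consider minimizing ½(u-c)² over u ≥ 0 with c < 0: the solution is u* = 0, the penalized minimizer has a closed form, and its distance to u* shrinks linearly with the penalty parameter ε.

```python
import math

def barrier_solution(c, eps):
    # Minimizer of 0.5*(u - c)**2 - eps*log(u) over u > 0:
    # stationarity (u - c) - eps/u = 0  =>  u = (c + sqrt(c^2 + 4*eps)) / 2.
    return 0.5 * (c + math.sqrt(c * c + 4.0 * eps))

c = -1.0        # the unconstrained minimizer u = c is infeasible, so u* = 0
for eps in (1e-2, 1e-4, 1e-6):
    u_eps = barrier_solution(c, eps)
    print(eps, u_eps)  # penalty error shrinks like eps/|c|
```

For ε = 1e-4 the penalized minimizer is about 1e-4, i.e. the error behaves linearly in ε in this toy case; the result quoted above concerns the much more delicate infinite-dimensional control setting.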
With P. Lotito, U. Tandil (Argentina). In the framework of the STIC AmSud “Energetic Optimization” project and of the internship of S. Aronna, we have established some qualitative properties of optimal controls for a continuous-time hydrothermal electricity production model. We show that in the case of several “parallel” dams, nonuniqueness of optimal trajectories may occur, and we characterize them in some cases. We then discuss singular arcs using a reformulation of the controls that allows us to separate the “linear” and “nonlinear” controls.
We have completed , our work on the characterization of either “bounded strong” or “Pontryagin” minima satisfying a quadratic growth condition, for optimal control problems of ordinary differential equations with constraints on the initial-final state and the control. No Legendre-Clebsch condition is assumed. This extends previous work by A. Milyutin and N. Osmolovskiĭ where the control constraints were assumed to be uniformly linearly independent.
In the framework of research contracts with the CNES, we have studied since 2006 trajectory optimization for the atmospheric climbing phase of space launchers. More precisely, our aim was to investigate the existence of optimal trajectories with non-maximal thrust arcs (i.e., singular arcs). The physical reason behind this phenomenon is that aerodynamic forces may make high speed ineffective (because of the drag term, proportional to the square of the speed). Our main approach is an indirect method (Pontryagin's Minimum Principle and shooting method) combined with a continuation approach. The homotopic approach involves a quadratic regularization of the problem and is a way to handle the nonsmoothness of the optimal control, providing both sufficient information on the singular structure of the control and a suitable initial point for the shooting method. We also checked our numerical results with a direct method using the IPOPT solver.
We conducted the theoretical analysis and first simulations for a slightly simplified case (3D Goddard problem, see ). Then we moved on to the more complex Ariane 5 problem, with a realistic physical model for the thrust and drag forces. Our study concluded that there were no singular arcs for the typical Ariane 5 mission to the geostationary transfer orbit (GTO). However, we also showed that increasing the specific impulse and reference area of the launcher could lead to optimal trajectories with singular arcs, see .
Fig. shows the structure and some examples of such trajectories. The study of state constraints (dynamic pressure) was initiated for the Ariane launcher during the internship of G. Granato in summer 2008. We are currently extending this study to the case of winged launchers, with the additional issue of a more complex aerodynamic force including a lift term. In this case, both the drag and lift terms depend on the angle of attack of the launcher, and the throttle and direction of the thrust are no longer decoupled when applying the Minimum Principle.
For optimal control problems, the case where both the objective and the dynamics are linear with respect to the control (or part of it) is quite common. This happens naturally, for instance, when the controls represent forces applied to the system. The Hamiltonian is then linear with respect to the control, so its minimization typically gives a bang-bang optimal control with possible switchings between the control bounds. The control law is set by the sign of the switching function ψ, namely the derivative of the Hamiltonian with respect to the control.
Furthermore, if the Hamiltonian actually happens to be constant with respect to the control, the switching function vanishes and the optimal control is singular. Contrary to the bang-bang case, the control can then take values in the interior of the feasible control set. A time interval where the control is singular is called a singular arc. For space launcher problems for instance, a singular arc corresponds to a flight phase with a non-maximal thrust throttle.
The usual way to determine the singular control value typically involves the time derivatives of the switching function ψ, resulting in an equation of the form ψ̈ = 0 which is linear in the control. However, this approach is not always possible in practice, for instance if the physical model includes some tabulated data for which analytical time derivatives are not available. This was the case in the Ariane 5 trajectory optimization problem we studied for the CNES. We introduced a new, alternate way to compute the singular control, namely solving the equation ψ̈ = 0 numerically instead of formally: the values of ψ̈ are approximated from ψ, which is always available. We show on Fig. the comparison of the exact and alternate singular controls for the well-known Goddard problem.
We tried to simplify the formulation by using an approximation of ψ̈ based on ψ instead of ψ̇. This causes a slight loss of precision, but the user no longer needs to provide the expression of ψ̇. We are currently studying a further simplification with a generic switching function, independent of the problem; it is then no longer required to provide the expression of ψ, which makes the method easier to use. Another interesting challenge is the handling of singular arcs without structural assumptions (on the number of arcs in particular). Our aim is to extend the kind of approach used for control switchings detailed in .
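The alternate approach can be sketched on a hypothetical toy switching function (the model and all names below are illustrative, not the Ariane 5 model): ψ̈ is approximated by finite differences of ψ alone, and the singular control is found by a scalar root search.

```python
def switching_function(t, u):
    # Hypothetical toy model: along the arc, psi(t; u) = (u - 0.6) * t**2 / 2,
    # so the singular control (unknown to the solver) is u_sing = 0.6.
    return (u - 0.6) * t ** 2 / 2.0

def psi_ddot(u, t=1.0, h=1e-3):
    # Second time derivative of psi approximated from psi values only,
    # as in the alternate approach (no analytical derivatives needed).
    p = switching_function
    return (p(t + h, u) - 2.0 * p(t, u) + p(t - h, u)) / h ** 2

def singular_control(lo=0.0, hi=1.0, tol=1e-10):
    # Bisection on u for psi_ddot(u) = 0.
    flo = psi_ddot(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = psi_ddot(mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(singular_control())  # close to 0.6
```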
The reference Stability analysis of optimal control problems with a second-order state constraint gives stability results for nonlinear optimal control problems subject to a regular second-order state constraint. The strengthened Legendre-Clebsch condition is assumed to hold, and no assumption on the structure of the contact set is made. Under a weak second-order sufficient condition (taking into account the active constraints), we show that the solutions are Lipschitz continuous w.r.t. the perturbation parameter in the L2 norm, and Hölder continuous in the L∞ norm. We use a generalized implicit function theorem in metric spaces by Dontchev and Hager . The difficulty is that multipliers associated with second-order state constraints have a low regularity (they are only bounded measures). We obtain Lipschitz stability of a “primitive” of the state constraint multiplier.
The reference Homotopy Algorithm for Optimal Control Problems with a Second-order State Constraint presents a homotopy algorithm for nonlinear optimal control problems subject to a regular second-order state constraint, in the same setting as the stability analysis above.
The reference , Optimal Control of the Atmospheric Reentry of a Space Shuttle by an Homotopy Method, deals with the optimal control problem of the atmospheric reentry of a space shuttle with a second-order state constraint on the thermal flux. We solve the problem using the shooting algorithm combined with a homotopy method which automatically determines the structure of the optimal trajectory (composed of one boundary arc and one touch point).
We aim at solving optimal control problems by using the dynamic programming principle and the related Hamilton-Jacobi-Bellman (HJB) equations. Our study is motivated by a specific trajectory optimization problem for space launchers. The problem consists in maximizing the payload of the European launcher ARIANE V, optimizing its trajectory from the Earth to the desired orbit. The optimal trajectory is usually computed by means of the shooting method, although this method suffers from some shortcomings. For example, it usually computes only locally optimal solutions, so there is no guarantee that the computed trajectory is optimal among all possible trajectories. Moreover, shooting methods require knowledge of the behavior of the optimal trajectory (number of commutations, singular arcs, ...). In collaboration with CNES, we have carried out a study on the applicability of the HJB approach to the climbing problem.
Let us recall that the Hamilton-Jacobi approach consists in characterizing the value function v, associated with the finite-horizon control problem, as the unique solution of a first-order partial differential equation, the Hamilton-Jacobi-Bellman (HJB) equation:

    ∂t v(t,x) + min{ f(x,a)·∇x v(t,x) : a ∈ A(x) } = 0 on (0,T)×Ω,   v(T,x) = φ(x),

where f is the controlled dynamics of the system, depending on the state x and the control a, A(x) is the set of admissible controls at x, Ω is the region where the state of the system is allowed to stay, and φ is the final cost. Once the solution of the HJB equation is computed, it is possible to compute the (globally) optimal trajectory in real time, starting from any initial point in Ω. The number d of state variables corresponds to the dimension of the space in which the HJB equation has to be solved, so the HJB technique is usually restricted to low-dimensional problems (d ≤ 3) due to large CPU and memory requirements. Moreover, it can be slowed down by a restrictive CFL condition, if this condition requires a very small time step with respect to the space step. Since the launcher problem involves 7 state variables (4 in a reduced formulation), the priority was to find an efficient way to store and compute only a small number of nodes (and not all the grid nodes) at each time step, exploiting the nature of the equation, which guarantees that the information follows the characteristic flow. We developed a new dynamic data structure which stores an unordered set of nodes in memory while preserving fast access to all data. This technique led to good results (see ), dramatically reducing both the CPU time and the required memory, and allowing computations in reasonable time up to dimension 4, without parallelization.
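The idea of storing only the active nodes in a dynamic structure, rather than allocating a full grid, can be sketched on a 1-D toy minimum-time problem with unit-speed dynamics (all names below are illustrative):

```python
# Sketch: store only the "reached" nodes in a hash map instead of a full grid,
# propagating a minimum-time value outward from the target (1-D toy model,
# unit-speed dynamics, grid step dx).
def min_time_sparse(target=0, xmin=-50, xmax=50, dx=1.0):
    values = {target: 0.0}       # dynamic, unordered node storage
    frontier = [target]
    while frontier:
        new_frontier = []
        for i in frontier:
            for j in (i - 1, i + 1):
                if xmin <= j <= xmax and j not in values:
                    values[j] = values[i] + dx  # information follows characteristics
                    new_frontier.append(j)
        frontier = new_frontier
    return values

v = min_time_sparse()
print(v[10], v[-7])  # 10.0 7.0
```

Only nodes actually touched by the front are ever allocated, which is the point of the dynamic structure; the real code handles a 4-dimensional state space and a genuine HJB update instead of this constant-speed toy.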
After that, we focused on the real climbing problem (with real data given by CNES). The major difficulty was to include in the model the three different phases of the climb, corresponding to the three different engines of the launcher (boosters and main engine, main engine only, secondary engine). At each change of phase the (empty) fuel reservoir is dropped, so we have to take into account a discontinuity in the mass of the launcher. On the other hand, since the variation of the mass of the launcher during the climb is not at all negligible, the mass must be included in the state variables, and this makes the mass jumps difficult to handle. To overcome this difficulty we designed a new algorithm suited to discontinuous trajectories. The numerical results we obtained on the launcher problem show the relevance and good behavior of our algorithm. In Fig. we give the optimal trajectory to reach the target (GTO orbit). We compare 3 numerical solutions computed on different grids. The trajectory obtained on the coarsest grid is plotted in red, the one on a medium grid in blue, while in black we give the trajectory on the finest grid. The CPU times for the whole computation (approximation of the value function and synthesis of the optimal trajectory) are 620 s on the finest grid and only 80 s on the coarsest grid.
Finally, let us point out that the approximation on a rough grid gives qualitatively good results. This study also led to a very fast numerical code in C++ for solving HJB equations in dimension 4.
Let yx denote any controlled trajectory starting from x with an admissible control α taking values in a compact set A (yx could represent the evolution of a launcher as well as the energy consumption of a car). Consider the reachability function ϑ defined by

    ϑ(x) = inf{ t ≥ 0 : yx(t) ∈ C and yx(s) ∈ K for all s ∈ [0,t] },

with a closed set of state constraints K and a target C. The knowledge of the function ϑ provides the optimal time needed to reach the target as well as the capture basin.
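A discrete toy analogue of the reachability function (a hypothetical grid world, not the continuous setting above) can be computed by propagating minimal times from the target while respecting the constraint set:

```python
from collections import deque

# Sketch: K is the admissible set (cells not in 'walls'), C the target cell;
# theta[cell] is the minimal number of unit moves to reach C while staying
# in K, and the capture basin is the set of cells where theta is finite.
def reachability(n, walls, target):
    theta = {target: 0}
    queue = deque([target])
    while queue:
        x, y = queue.popleft()
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = cell
            if 0 <= nx < n and 0 <= ny < n and cell not in walls and cell not in theta:
                theta[cell] = theta[(x, y)] + 1
                queue.append(cell)
    return theta

walls = {(1, y) for y in range(1, 5)}   # a wall constraining the state
theta = reachability(5, walls, (0, 0))
print(theta[(2, 0)])     # 2: straight through the gap at y = 0
print((1, 2) in theta)   # False: forbidden cells get no value
```

Cells with no entry in `theta` are exactly those outside the capture basin, mirroring the role of ϑ = +∞ in the continuous problem.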
In the presence of state constraints, characterizing the reachability function as the solution of an HJB equation is not an easy task, and the known results require strong controllability assumptions (see , ). These assumptions are restrictive, and the characterization involves boundary conditions that cannot be taken into account in numerical algorithms.
Recently, we have been able to obtain a general characterization of the value function without any controllability assumption. Moreover, this characterization leads to an effective numerical procedure. A preprint on this subject is in preparation.
The DGA grant serves partly to invite Prof. Ariela Briani (from U. of Pisa) to spend one year in the Applied Mathematics Department (UMA - ENSTA). We have started a theoretical study of the HJB equations associated with optimal control problems with jumps in the dynamics.
We have investigated a new numerical approximation for HJB equations with discontinuous initial data. This approximation is based on a decomposition of the initial data into a set of level-set functions. We thereby obtain a system of uncoupled HJB equations. The main interest of this decomposition lies in the fact that each equation of the system has as initial condition a function taking only the two values {-1, 1}. We propose to solve these equations by anti-diffusive methods (Fast Marching method, or Ultra-Bee). We prove a comparison theorem which justifies the decomposition into level-set functions and ensures the convergence of our approach (decomposition of the initial condition + numerical scheme for each equation of the system). In , we present the results in dimension 1, and we validate the theoretical convergence by performing several tests, still in dimension 1. However, the idea of the decomposition is valid in higher dimensions. In the framework of the Master's internship of D. Venturini (Master 2, Univ. Rome), we have tested this method on the climbing problem (ARIANE 5). The numerical results are very encouraging and we hope to be able to integrate this approach in a forthcoming study with the CNES.
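The decomposition itself can be sketched in a few lines: a piecewise-constant initial condition is encoded by functions taking only the values {-1, 1}, one per threshold, and is recovered exactly from them (a toy illustration; the scheme described above then evolves each level-set function by an anti-diffusive method).

```python
# Sketch of the level-set decomposition of a discontinuous initial condition:
# u0 with values in {0, 1, 2} is encoded by functions taking only {-1, +1},
# one per threshold, and recovered exactly.
def decompose(u0, thresholds):
    return [[1.0 if v >= c else -1.0 for v in u0] for c in thresholds]

def recombine(phis):
    # Each phi contributes (phi + 1)/2, i.e. the indicator of {u0 >= c}.
    return [sum((p[i] + 1.0) / 2.0 for p in phis) for i in range(len(phis[0]))]

u0 = [0, 0, 2, 2, 1, 0]
phis = decompose(u0, thresholds=[1, 2])
print(recombine(phis))  # [0.0, 0.0, 2.0, 2.0, 1.0, 0.0]
```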
We also investigated the HJB equation from another point of view. It has long been known that, given an optimal control problem, the value function u computed by the HJB approach can be used to initialize the Pontryagin Minimum Principle (PMP) method in an optimal way (at least when u is differentiable), and then to guarantee the convergence of the PMP method to the optimal solution.
The goal here was to join together the respective advantages of the two approaches: in particular, the fact that the HJB technique gives a globally optimal trajectory, while the PMP technique is much more accurate. We have analyzed some control problems in dimensions 2, 3 and 4, and obtained good results in this direction. In the two-dimensional test the cost functional is the time needed to reach a target, and the dynamics is chosen in such a way that there are many possible trajectories corresponding to local minima but only one trajectory realizing the global minimum. In the following figures we can see the convergence domain of the PMP method in a large zone around the exact solution (left) and around the HJB guess (right). Crosses denote points from which the globally optimal trajectory is computed.
We also investigated how to extract useful information from the HJB approach when the value function is not differentiable.
The three-dimensional test is the classical Goddard problem, which is hard because the CFL condition is very restrictive and the problem involves singular arcs, so the PMP method has some extra unknowns. In the four-dimensional problem we deal with (saturated) state constraints.
In this work, we consider an extension of the Fast Marching (FM) method introduced by J. A. Sethian in 1996. The FM method is a well-known algorithm used to speed up the convergence of the classical iterative scheme for the eikonal equation c(x)|∇u(x)| = 1, where c(x) > 0 is given. The FM method was originally based on a first-order finite difference scheme, and it was extended to semi-Lagrangian schemes in . Unfortunately, the FM method is restricted to the eikonal equation, and it is not obvious how to generalize the method to other HJB equations while preserving the advantage in CPU time. In the last ten years some modified FM methods were proposed, but the equations these new methods deal with are just small variations of the eikonal equation. In we propose an FM-based method which is able to compute the solution of any equation modeling a monotone front propagation problem, including the Hamilton-Jacobi-Bellman and Hamilton-Jacobi-Isaacs equations arising in the framework of differential games.
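For reference, a minimal Fast-Marching-style sketch for the eikonal equation with c = 1 (first-order upwind updates and a heap of tentative nodes; a simplified illustration of the classical method, not the extended one described above):

```python
import heapq

# Fast-Marching-style solver for c(x)|grad u| = 1 on an n x n grid with c = 1,
# source at u = 0; first-order upwind updates, accepted nodes frozen in
# increasing order of u.
def fast_marching(n, source, h=1.0):
    INF = float("inf")
    u = [[INF] * n for _ in range(n)]
    u[source[0]][source[1]] = 0.0
    heap = [(0.0, source)]
    accepted = set()
    while heap:
        _, (i, j) = heapq.heappop(heap)
        if (i, j) in accepted:
            continue
        accepted.add((i, j))
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < n and 0 <= nj < n and (ni, nj) not in accepted:
                # Smallest neighbors in each coordinate direction.
                ux = min(u[ni - 1][nj] if ni > 0 else INF,
                         u[ni + 1][nj] if ni < n - 1 else INF)
                uy = min(u[ni][nj - 1] if nj > 0 else INF,
                         u[ni][nj + 1] if nj < n - 1 else INF)
                a, b = sorted((ux, uy))
                if b - a >= h:          # one-sided update
                    new = a + h
                else:                   # two-sided quadratic update
                    new = 0.5 * (a + b + (2 * h * h - (b - a) ** 2) ** 0.5)
                if new < u[ni][nj]:
                    u[ni][nj] = new
                    heapq.heappush(heap, (new, (ni, nj)))
    return u

u = fast_marching(11, (0, 0))
print(u[0][5], u[5][0])  # 5.0 5.0 along the axes
```

Each node is accepted exactly once, in increasing order of u, which is what makes the method fast; the extension discussed above replaces the eikonal update with a general monotone front-propagation update.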
In we study the numerical approximation of the Hamilton-Jacobi-Isaacs equations for differential games in the presence of state constraints. Here a system is controlled by two players A and B, who play one against the other. The dynamics of the system has the form f = f(x,a,b), where a and b are the controls of the first and second player respectively. The aim of the first player is to steer the system to a given target in minimum time, while the second player wants the system to stay away from it forever. Previously, in , a numerical scheme and a proof of convergence were presented in the particular case where the dynamics of the two players splits as f(x,y,a,b) = (fA(x,a), fB(y,b)). In this case each player controls just “its” part of the dynamics and is responsible for “its” part of the state constraints. The novelty here is that the dynamics is not split, and both players must cooperate in order to maintain the system in the permitted zone, choosing at each stage a couple of controls (a*, b*) which is admissible for the dynamics.
An important part of the work described below relies on recent developments on model-theoretic structures (o-minimal structures) which generalize axiomatically the qualitative properties of semialgebraic sets (see van den Dries, Shiota). Semialgebraic sets are subsets of Rn defined by a finite number of polynomial equalities and inequalities. Finite unions/intersections and complements of semialgebraic sets are semialgebraic; more importantly, linear projections of semialgebraic sets remain semialgebraic (Tarski-Seidenberg principle). These facts yield remarkable stability properties as well as a kind of “finiteness of pathologies” principle. An illustration of these considerations could be as follows: take a bounded semialgebraic set S and a polynomial function P: Rn × Rm → R; then the nonsmooth function x ↦ inf{ P(x,y) : y ∈ S } has a semialgebraic graph (stability) and is smooth everywhere save perhaps on a finite union of manifolds of low dimension (finiteness/tameness of pathologies). Keeping in mind the semialgebraic model, an o-minimal structure over R is a boolean collection of subsets of the spaces Rn enjoying (in particular) two major properties: the family is stable with respect to linear projection, and the “one-dimensional” sets are exactly the finite unions of intervals. A function/point-to-set mapping is said to belong to such a structure if its graph does. This “theoretical” extension of real algebraic geometry could seem useless if the only example of an o-minimal structure were given by the collection of semialgebraic sets. Two major breakthroughs by Gabrielov (globally subanalytic sets) and Wilkie (log-exp structure) have shown that o-minimal structures are numerous. This fact is, in our opinion, of high importance for applied mathematics.
The striking stability results enjoyed by such structures can indeed be used to show that many finite-dimensional optimization problems are (or can be) formulated within this setting. This was the starting point for the use of this theory in variational analysis and for the study of some related optimization algorithms. This being said, a general idea for understanding what can be obtained in this framework is to think of “o-minimal” problems (often qualified, in a more vivid way, as “tame”) as problems which are generically well-posed or well-behaved.
More specifically, the works we present here were developed in view of the study of the two following general problems:
convergence of gradient methods in a nonsmooth and nonconvex setting,
convergence of Newton's method, either smooth or nonsmooth.
Particular attention was dedicated to the analysis of convergence rate of such methods and to what is usually called global convergence. This last term means that the sequence/curve generated by the algorithm/dynamical system converges to a specific equilibrium despite the fact that a continuum of critical points may be involved.
Tame mappings are semismooth , with A. Daniilidis, A.S. Lewis. Superlinear convergence of the Newton method for nonsmooth equations requires a “semismoothness” assumption. In this work we have proved that locally Lipschitz functions definable in an o-minimal structure (in particular semialgebraic or globally subanalytic functions) are semismooth. Semialgebraic, or more generally globally subanalytic, mappings present the special interest of being γ-order semismooth, where γ is a positive parameter. As an application of this new estimate, we prove that the error after the k-th step of the Newton method behaves like O(||xk - x̄||^(1+γ)).
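A hypothetical one-dimensional illustration of semismooth Newton (the function and starting points below are made up for illustration; they are semialgebraic, hence semismooth by the result above):

```python
# Semismooth Newton on F(x) = max(x**2 - 1, x - 1), nonsmooth at the root
# x = 1; the iteration uses one element of the generalized derivative and
# still converges superlinearly.
def F(x):
    return max(x * x - 1.0, x - 1.0)

def dF(x):
    # An element of Clarke's generalized derivative (slope of the active piece).
    return 2.0 * x if x * x - 1.0 >= x - 1.0 else 1.0

def semismooth_newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x = x - fx / dF(x)
    return x

print(semismooth_newton(3.0))   # close to the root 1.0
print(semismooth_newton(-5.0))  # close to the other root -1.0
```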
On the convergence of the proximal algorithm for nonsmooth functions involving analytic features , with H. Attouch. We study the convergence of the proximal algorithm applied to nonsmooth functions that satisfy the Łojasiewicz inequality around their generalized critical points. Typical examples of functions complying with these conditions are continuous semialgebraic or subanalytic functions. Following Łojasiewicz's original idea, we have proved that any bounded sequence generated by the proximal algorithm converges to some generalized critical point. We also obtain convergence rate results which are related to the flatness of the function by means of Łojasiewicz exponents. Apart from the sharp and elliptic cases, which yield finite or geometric convergence, the decay estimates that are derived are of the type O(k^(-s)), where s ∈ (0, +∞) depends on the flatness of the function.
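The sublinear, flatness-dependent decay can be observed on a toy subanalytic function such as f(x) = x⁴, whose critical point 0 is "flat" (an illustrative sketch; the prox subproblem is solved here by bisection):

```python
# Proximal iteration on f(x) = x**4: each step solves
#   y = argmin { y**4 + (y - x)**2 / (2*lam) },
# i.e. the scalar equation y + 4*lam*y**3 = x, by bisection (for x > 0).
def prox_step(x, lam=1.0):
    if x <= 0.0:
        return 0.0  # toy sketch: we only track the positive half-line
    lo, hi = 0.0, x
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid + 4.0 * lam * mid ** 3 < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 1.0
for k in range(200):
    x = prox_step(x)
print(x)  # roughly 0.025: decay like O(k**-0.5), sublinear, not geometric
```

Since x_{k} - x_{k+1} = 4 x_{k+1}³ here, the iterates behave like (8k)^(-1/2), an instance of the O(k^(-s)) estimates above.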
A unifying local convergence result for Newton's method in Riemannian manifolds , with F. Alvarez, J. Munier. We consider the problem of finding a singularity of a vector field X on a complete Riemannian manifold. Inspired by previous work of X. Wang and of Zabrejko-Nguen on Kantorovich's majorant method, our approach relies on the introduction of an abstract one-dimensional Newton method obtained using an adequate Lipschitz-type radial function of the covariant derivative of X. Our main theorem asserts that if the one-dimensional method is well-posed and converges, then Newton's method for X inherits both its qualitative and quantitative specificities. A specialization of this result allows us to recover three famous results, namely the Kantorovich, Smale and Nesterov-Nemirovskii theorems. Concerning real-analytic vector fields, the convergence criterion does not involve any curvature terms, as was the case in the pioneering work of Dedieu et al. The result is an exact equivalent of Smale's γ-theorem in a Riemannian setting.
Alternating minimization methods for weakly coupled convex problems . We develop and explore the convergence properties of alternating algorithms for convex problems defined on a Hilbert space. Typical problems that are tackled are of the form

    (P)  min{ f(u) + g(v) + ||Au - Bv||^2 },

where f, g are defined on Hilbert spaces and (u,v) ↦ Au - Bv is a term corresponding to a weak linear coupling. We provide an algorithm which produces sequences that converge weakly to a solution of (P). Applications to domain decomposition (in this case the linear operators A and B are trace operators) and to potential games are considered.
We have furthered our study of convergence results for the policy iteration algorithm. In particular, we have been interested in the generalization of the algorithm to solve min-max problems of the form

    min_a max_b ( A(a,b) x - c(a,b) ) = 0,

where A(a,b) is a matrix and c(a,b) a vector for each pair of policies (a,b). These kinds of equations arise in many problems, such as optimal control, game theory, and also front propagation . We give an interpretation of the policy iteration algorithm as a semismooth Newton method. We derive the superlinear convergence result under a monotonicity assumption on the matrices (this assumption is essential to guarantee that the algorithm does not cycle). When the policy sets are countable, we obtain a bound on the number of iterations needed to compute the solution. Numerically, convergence is obtained in only a very few steps.
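Howard's policy iteration can be sketched on the simpler one-player equation min_a (A_a x - b_a) = 0, with a hypothetical 2x2 monotone example (the min-max case iterates similarly on pairs of policies):

```python
# Howard's policy iteration: solve the linear system of the current policy,
# then improve the policy row by row; with monotone (M-)matrices the loop
# terminates at a solution of min_a (A_a x - b_a) = 0.
def solve2(A, b):
    # Direct solve of a 2x2 linear system by Cramer's rule.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

rows = [  # rows[i][a] = (row coefficients, rhs) for choice a in row i
    [([2.0, -1.0], 1.0), ([3.0, -1.0], 2.0)],
    [([-1.0, 2.0], 1.0), ([-1.0, 3.0], 3.0)],
]

policy = [0, 0]
for _ in range(10):
    A = [rows[i][policy[i]][0] for i in range(2)]
    b = [rows[i][policy[i]][1] for i in range(2)]
    x = solve2(A, b)                       # value of the current policy
    new_policy = [min(range(2),            # row-wise residual minimization
                      key=lambda a: sum(c * xj for c, xj in zip(rows[i][a][0], x))
                                    - rows[i][a][1])
                  for i in range(2)]
    if new_policy == policy:
        break                              # policy stable: equation solved
    policy = new_policy
print(policy, x)  # [0, 1] [1.2, 1.4]
```

The iteration visits each policy at most once, which is the source of the iteration bound mentioned above for countable policy sets.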
With C. Sagastizábal, CEPEL, Rio de Janeiro. In the framework of G. Emiel's PhD thesis we have studied possible evolutions of non-smooth optimization algorithms when dealing with large scale problems. One of the main motivations is the resolution of stochastic optimization problems through Lagrangian decomposition. Those problems arise in particular in mid-term production planning.
The two main theoretical contributions of the thesis deal with dynamic Lagrangian relaxation and incremental resolution , see also , and were described in our last annual report. This year was devoted to improvements of these methods and to their application to the mid-term electricity production problem of EDF. The main result is that, using the methods studied in the thesis, we could significantly improve the efficiency of the “SOPRANO” approach (the scenario tree model of EDF).
In this paper we study the optimal control problem of the heat equation by a distributed control, in the presence of a state constraint. The latter is an integral over the space variable and has to be satisfied at each time. Using an alternative optimality system, we show that both the control and the multiplier are continuous in time. Under some natural geometric hypotheses (finite number of junction points), we obtain no-gap second-order optimality conditions. To our knowledge this is the first result giving no-gap second-order optimality conditions for an optimal control problem of partial differential equations with state constraints.
With F. Alouges (CMAP). The dynamics of the magnetization inside a ferromagnetic body is modeled by the Landau-Lifshitz equations. These equations may be written as a partial differential equation on the magnetization m, a three-dimensional vector field constrained to be of constant magnitude equal to 1 throughout the sample (after a renormalization):

    ∂t m = m × H(m) + α ( H(m) - (H(m)·m) m ),

where H(m) is the total magnetic field and α > 0 is the damping parameter. We have proposed a new finite element scheme which is easy to implement and which guarantees the constraint at each discretization node. F. Alouges improved it to make it unconditionally stable. Our recent work shows that these schemes seem to be more efficient than the finite difference schemes which are commonly used. We worked this year on a scheme of second order in time.
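As a simplified illustration (a constant applied field and plain renormalization, not the finite element scheme above), one explicit time step can enforce |m| = 1 at each node as follows:

```python
# One explicit Landau-Lifshitz step with a constant field H, followed by
# renormalization so that |m| = 1 holds after each step (toy sketch).
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def llg_step(m, h, alpha=0.1, dt=0.01):
    prec = cross(m, h)                                   # precession m x H
    damp = [h[i] - dot(h, m) * m[i] for i in range(3)]   # transverse damping
    m = [m[i] + dt * (prec[i] + alpha * damp[i]) for i in range(3)]
    norm = dot(m, m) ** 0.5
    return [x / norm for x in m]                         # enforce |m| = 1

m = [1.0, 0.0, 0.0]
h = [0.0, 0.0, 1.0]
for _ in range(10000):
    m = llg_step(m, h)
print(dot(m, m))   # 1 up to roundoff: constraint preserved at every step
print(m[2] > 0.99) # True: magnetization relaxed toward the field direction
```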
INRIA-EDF (PhD of G. Emiel), 2005-2008. Solving large scale problems for middle term power planning. Involved researchers: F. Bonnans.
ENSTA-CNES (OPALE pole framework), 2008-2009. HJB approach for the atmospheric re-entry problem. Involved researchers: F. Bonnans, H. Zidani.
ENSTA-DGA, 2007-2009. Study of HJB equations associated to motion planning. Involved researchers: N. Forcadel, H. Zidani.
INRIA-CNES (OPALE pole framework), 2008-2009. Study of singular arcs for the ascent phase of winged launchers. Involved researchers: F. Bonnans, P. Martinon, E. Trélat.
INRIA-TOTAL. PhD fellowship (CIFRE) of Y. Cen, dec. 2008-dec. 2011. Involved researchers: F. Bonnans.
In the setting of the STIC AmSud project on “Energy Optimization” we have a collaboration with P. Lotito (U. Tandil) on deterministic continuous-time models for the optimization of hydrothermal electricity and related optimal control problems with singular arcs. We have one PhD student, Maria S. Aronna, on this project.
With Felipe Alvarez (CMM, Universidad de Chile, Santiago) we study the logarithmic penalty approach for optimal control problems, in connection with the PhD thesis of Francisco Silva.
Italy: U. Roma (Sapienza). With M. Falcone: numerical methods for the resolution of HJB equations. Collaboration in connection with the master's internship of D. Venturini.
Short-term visits: invited professors,
Felipe Alvarez (CMM and U. Chile, Santiago, 2 weeks), Dan Tiba (Inst. Math., Bucarest, 3 weeks), Pablo Lotito (U. Tandil, Argentina, 2 weeks), Nikolai Osmolovskiĭ (Systems Research Institute, Warszawa, 3 weeks), Elisabetta Carlini (U. La Sapienza, Rome, 1 week).
F. Bonnans is one of the three Corresponding Editors of “ESAIM:COCV” (Control, Optimisation and Calculus of Variations), and Associate Editor of “Applied Mathematics and Optimization”, and “Optimization, Methods and Software”.
F. Bonnans is member of: (i) Council-at-Large of the Mathematical Programming Society (2006-2009), (ii) Optimal Control Technical Committee 2.4 of IFAC (International Federation of Automatic Control), 2005-2008, (iii) board of the SMAI-MODE group (2007-2010).
He is one of the founding organizers of the SPO (Séminaire Parisien d'Optimisation, IHP, Paris).
E. Trélat is Associate Editor of “ESAIM:COCV” (Control, Optimisation and Calculus of Variations) and of "International Journal of Mathematics and Statistics".
F. Bonnans. (i) Professeur Chargé de Cours, École Polytechnique. Courses on Operations Research and Optimal Control, 50 h. (ii) Mastère de Math. et Applications, “Parcours OJME” (Optimisation, Jeux et Modélisation en Économie), Université Paris VI. Course on Continuous Optimization and application to Stochastic Programming (18 h).
P. Jaisson. (i) University of Versailles Saint-Quentin (UVSQ): Mathematical methods for physics (tutorials: 36 h). (ii) ENSTA (first year): Quadratic programming, computer projects, 14 h.
E. Trélat. (i) University of Orléans: supervisor of the Master of Mathematics; M2 courses: Optimal control (30 h), Automatic control (30 h), Continuous optimization (24 h); other courses at this university: L3 Matrix numerical analysis (26 h), Numerical tools (36 h), preparation for the CAPES (96 h). (ii) ENSTA: Optimal control (22 h). (iii) Supélec M2 ATSI: Optimal control (12 h).
H. Zidani: (i) ENSTA: Quadratic Optimization (first year, 21 h), Hamilton-Jacobi-Bellman approach to Optimal Control (third year, 21 h), “Numerical methods for front propagation” (third year, 21 h). (ii) Master “Ingénierie Mathématiques”, University of Paris-Sud Orsay, ATS track: “Optimal control” (30 h). (iii) Course on Numerical methods in finance (30 h), third year of ENSTA / M2 MMMEF, U. Paris 1.
J. Bolte: Conference “Dynamiques et Optimisation”, June 9-11, 2008, Université Paris 6, Paris.
J.F. Bonnans and H. Zidani: Aerospace dynamics and Optimal Control. ENSTA, Paris, 23 May 2008.
R. Monneau, H. Zidani: CEA-EDF-INRIA School “Numerical methods for Hamilton-Jacobi equations and hyperbolic conservation laws”. Rocquencourt, Sept. 15-19, 2008.
E. Trélat: Organizing and scientific committee, Control of Physical Systems and Partial Differential Equations. Institut Henri Poincaré, Paris, June 16-20, 2008.
J.F. Bonnans: Conference on Optimization and Practices in the Industry (COPI'08), Paris, November 26-28, 2008.
E. Trélat: Colloque sur l'Optimisation et les Systèmes d'Information (COSI'2008), June 8-10, 2008, Tizi-Ouzou, Algeria.
E. Trélat: Scientific committee, Conférence Internationale Francophone d'Automatique (CIFA 2008), in charge of the theme “Optimization and control of nonlinear systems”, September 3-5, 2008, Bucharest.
J. Bolte: (i) The Łojasiewicz inequality: the analysis viewpoint. With A. Daniilidis, O. Ley, L. Mazet. Journées Franco-Chiliennes d'Optimisation, Toulon, May 19-21, 2008. (ii) Some results in tame optimization. Conference “Dynamiques et Optimisation”, June 9-11, Paris.
F. Bonnans: (i) Second-order optimality conditions for state-constrained optimal control problems. Second Int. Conf. on Nonlinear Programming with Applications (NPA2008), April 7-9, 2008, Beijing. (ii) Second-order optimality conditions for state-constrained optimal control problems. With A. Hermant. Franco-Chilean days on Optimization, Toulon, May 19-21, 2008. (iii) Analysis of singular arcs in optimal control problems. Application to hydrothermal electricity production. With M.S. Aronna and P. Lotito. Conference “Dynamiques et Optimisation”, June 9-11, 2008, Université Paris 6, Chevaleret. (iv) Second-order analysis of optimal control problems with control and initial-final state constraints. With N.P. Osmolovskiĭ. Conference “50 years of optimal control”, Sept. 15-20, 2008, Bedlewo (Poland). (v) Reduced gradient methods. Scientific days in honor of Pierre Huard, Nov. 24-25, 2008.
E. Trélat: (i) Optimal control and applications in aeronautics. Indo-French Conference, Chennai, India, Dec. 15-19, 2008. (ii) Optimal control and applications in aeronautics. COSI'08, Tizi Ouzou, Algeria, June 2008.
H. Zidani: (i) Some theoretical and numerical results for Howard's algorithm. Séminaire Math-AmSud, Lima, November 4-6, 2008. (ii) Numerical approximation of a super-replication problem. SM2A Congress, Rabat, February 6-8, 2008.
J. Bolte: (i) Characterizations of Łojasiewicz inequalities. Journées MODE'08, Clermont, Feb. 26-28, 2008. (ii) Characterizations of Łojasiewicz inequalities. Conference on Nonlinear Analysis and Optimization, Haifa, June 18-24, 2008.
F. Bonnans: Computation of polyhedral approximations of the Euclidean ball. With M. Lebelle. Journées MODE'08, Clermont, Feb. 26-28, 2008.
E. Cristiani: (i) An efficient data structure to solve front propagation problems and its application to the climbing problem. With O. Bokanowski and H. Zidani. CANUM, Saint Jean de Monts, May 26-30, 2008. (ii) A fast semi-Lagrangian method for the Isaacs equation of differential games with state constraints. Mathematical Theory of Networks and Systems (MTNS), Blacksburg, VA (USA), July 28 - August 1, 2008.
G. Emiel: Incremental Bundle Methods for Mid-term Power Planning. With C. Sagastizábal. EngOpt 2008 (International Conference on Engineering Optimization), Rio de Janeiro, June 1-5, 2008.
N. Forcadel: (i) Workshop “Singularities in nonlinear evolution phenomena and applications”, Pisa, May 26-30, 2008. (ii) Colloquium “Mathematical methods for imaging”, Orléans, April 1-3, 2008.
P. Martinon: (i) Numerical study of optimal trajectories with singular arcs for a space launcher problem. Journées MODE'08, Clermont, Feb. 26-28, 2008. (ii) Study of singular arcs for the atmospheric ascent of space launchers. Aerospace dynamics and optimal control, ENSTA, 23 May 2008.
F. Silva: Error estimation of a penalized LQ optimal control problem. Optimization and Practices in the Industry (COPI'08), Paris, November 26-28, 2008.
E. Trélat: (i) Tomographic reconstruction of binary images. Conference on 50 Years of Optimal Control, Bedlewo, Poland, September 2008. (ii) Dynamics around Lagrange points and Eight-Lissajous orbits. With P. Augros and G. Archambeau. Conference on Optimal Control Theory in Space and Quantum Dynamics, Dijon, June 2008.
J. Bolte: Characterizations of Łojasiewicz inequalities. Séminaire de Géométrie de Chambéry, March 8, 2008.
J.F. Bonnans: (i) Generalized finite differences approach to the HJB equation of stochastic control. Laboratoire de Finance des Marchés d'Energies (FIME) seminar, U. Paris IX Dauphine, Feb. 14, 2008. (ii) Expansion of solutions of variational problems. LMA (Laboratoire de Mécanique et d'Acoustique) seminar, Marseille, April 22, 2008.
N. Forcadel: (i) Groupe de travail numérique, Université d'Orsay, March 26, 2008. (ii) Séminaire d'analyse appliquée, Université de Marseille, March 4, 2008. (iii) Séminaire “Géométrie, EDP et Physique Mathématique”, Université de Cergy-Pontoise, February 20, 2008. (iv) Groupe de Travail “Applications des mathématiques”, ENS Cachan, Brittany branch, February 6, 2008. (v) Séminaire de Mathématiques Appliquées, Université de Clermont-Ferrand, January 31, 2008. (vi) Séminaire EDP, ENS rue d'Ulm, January 30, 2008. (vii) Séminaire d'Équations aux Dérivées Partielles, Université de Besançon, January 10, 2008.
G. Emiel: (i) Nonsmooth optimization methods with application to electricity production. Laboratoire de Finance des Marchés d'Energies (FIME) seminar, Institut Henri Poincaré, Oct. 24, 2008. (ii) Incremental bundle methods. IMPA Optimization seminar, Rio de Janeiro, June 2008.
E. Trélat: (i) Regularity of the value function in optimal control; applications to viscosity solutions and to stabilization. Séminaire de géométrie, Univ. Chambéry, November 28, 2008. (ii) Controllability and discretization of controlled PDEs. Séminaire EDP, Univ. Metz, October 17, 2008. (iii) Controllability and discretization of controlled PDEs. Séminaire, Univ. Amiens, April 28, 2008. (iv) Controllability and discretization of controlled PDEs. LAGEP, Lyon, January 18, 2008. (v) Controllability and stabilization properties of some PDEs. CMAP - INRIA Futurs, January 8, 2008.