
## Section: Scientific Foundations

### Structure and control of non-linear systems

#### Feedback control and optimal control

Participants: Jean-Baptiste Pomet, Ahed Hindawi, Jana Nemcova, Ludovic Rifford.

Using the terminology defined at the beginning of section 3.1, the class of models considered here is that of finite-dimensional nonlinear control systems. In many cases, a linear control based on the linear approximation around a nominal point or trajectory is sufficient. However, there are important instances where it is not: the magnitude of the control may be limited, the linear approximation may fail to be controllable, or the control problem, like path planning, may not be local in nature.

State feedback stabilization consists in designing a control law which is a function of the state and makes a given point (or trajectory) asymptotically stable for the closed-loop system. That function of the state must bear some regularity, at least enough regularity to allow the closed-loop system to make sense; continuous or smooth feedback would be ideal, but one may also be content with discontinuous feedback if robustness properties are not defeated. One can consider this as a weak version of the optimal control problem which is to find a control that minimizes a given criterion (for instance the time to reach a prescribed state). Optimal control generally leads to a rather irregular dependence on the initial state; in contrast, stabilization is a qualitative objective (i.e., to reach a given state asymptotically) which is more flexible and allows one to impose much more regularity.
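As a toy illustration of the stabilization objective (our own sketch, not taken from the team's work), consider the double integrator x' = v, v' = u with the smooth static state feedback u = -2x - 3v, which places the closed-loop eigenvalues at -1 and -2:

```python
# Toy example (illustrative names and gains, not from the source):
# stabilize the double integrator x' = v, v' = u with u = -2x - 3v.

def simulate(x0, v0, dt=1e-3, T=10.0):
    """Euler-integrate the closed-loop double integrator from (x0, v0)."""
    x, v = x0, v0
    for _ in range(int(T / dt)):
        u = -2.0 * x - 3.0 * v          # smooth (here linear) state feedback
        x, v = x + dt * v, v + dt * u
    return x, v

x, v = simulate(1.0, 0.0)               # the state decays towards (0, 0)
```

Any initial condition converges to the origin, qualitatively and with no optimality claim, which is exactly the flexibility the text contrasts with optimal control.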

Lyapunov functions are a well-known tool for studying the stability of uncontrolled dynamical systems. For a control system, a Control Lyapunov Function is a Lyapunov function for the closed-loop system obtained when the feedback is chosen appropriately. It can be characterized by a differential inequality called the “Artstein (in)equation” [37], reminiscent of the Hamilton-Jacobi-Bellman equation but largely under-determined. One can easily deduce a continuous stabilizing feedback control from the knowledge of a control Lyapunov function; moreover, even when such a control is known beforehand, obtaining a control Lyapunov function can still be very useful in dealing with robustness issues.
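One classical way to deduce a continuous feedback from a control Lyapunov function is Sontag's universal formula. A minimal sketch (system, CLF, and all names chosen by us for illustration): for the scalar system x' = f(x) + g(x)u with V(x) = x²/2, writing a = V'(x)f(x) and b = V'(x)g(x), the formula gives u = -(a + sqrt(a² + b⁴))/b when b ≠ 0 and u = 0 otherwise.

```python
import math

def sontag_feedback(a, b):
    """Sontag's universal formula: continuous feedback from a CLF."""
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a * a + b ** 4)) / b

def simulate(x0, dt=1e-3, T=10.0):
    """Stabilize x' = x^3 + u (unstable drift) using the CLF V = x^2/2."""
    x = x0
    for _ in range(int(T / dt)):
        a = x * x ** 3      # V'(x) f(x) = x * x^3
        b = x               # V'(x) g(x) = x * 1
        x += dt * (x ** 3 + sontag_feedback(a, b))
    return x

x_final = simulate(1.0)     # the origin is reached asymptotically
```

The closed loop is x' = -sqrt(x⁶ + x²), so V decreases strictly away from the origin, which is the content of the Artstein inequality for this example.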

Moreover, if one has to deal with a problem where it is important to optimize a criterion, and if the optimal solution is hard to compute, one can look for a control Lyapunov function that comes “close” (in the sense of the criterion) to the solution of the optimization problem but leads to a control that is easier to implement.

A class of systems of interest to us is that of systems with a conservative drift and a small control (whose effect is small in magnitude compared to the drift). A prototype is the control of a satellite with low-thrust propellers: the conservative drift is the classical Kepler problem and the control is small compared to the Earth's attraction. Starting with Alex Bombrun's PhD [54], we developed original averaging methods, which differ from classical ones in that the average is itself a control system, i.e., the averaging process does not depend on the control strategy. A reference paper has been submitted [27].
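The idea that the average is itself a control system can be illustrated on a deliberately simple example of our own (not the team's Kepler setting): for x' = ε cos²(t) u, averaging the vector field over the fast time variable before choosing any control gives the averaged control system x' = (ε/2) u, and the two stay close over the slow time scale T ~ 1/ε for any control played on that scale.

```python
import math

def integrate(rhs, T, dt=1e-3):
    """Euler integration of x' = rhs(x, t) from x(0) = 0."""
    x = 0.0
    for k in range(int(round(T / dt))):
        x += dt * rhs(x, k * dt)
    return x

eps = 0.01
u = lambda t: 1.0   # any slow control; constant for the demonstration
# full oscillating system vs. its average, over the slow time scale 1/eps
full = integrate(lambda x, t: eps * math.cos(t) ** 2 * u(t), T=1.0 / eps)
avg  = integrate(lambda x, t: (eps / 2.0) * u(t),            T=1.0 / eps)
```

The control u enters both systems in the same way, which is the point: the averaging was done once, independently of the control strategy.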

These constructions were exploited in joint research conducted with Thales Alenia Space (Cannes), where minimizing a certain cost (fuel consumption / transfer time) is very important while, at the same time, a feedback law is preferred for robustness and ease of implementation (see section 4.3).

#### Optimal transport

Participants: Ahed Hindawi, Jean-Baptiste Pomet, Ludovic Rifford.

Optimal transport is the problem of finding the cheapest transformation that moves a given initial measure to a given final one. The cost of the transformation is obtained by integrating against the measure a point-to-point cost, which may be the squared Euclidean distance, a Riemannian distance on a manifold, or a more exotic cost in which some directions are privileged; the latter naturally lean towards optimal control.
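A minimal discrete sketch of the problem (data and function names are ours, chosen for illustration): on the real line with quadratic cost c(x, y) = (x - y)², the optimal matching between two empirical measures with the same number of unit-mass atoms is the classical monotone rearrangement, i.e., match the sorted points in order. A brute-force search over all matchings confirms this on a small example.

```python
import itertools

def ot_cost_1d(xs, ys):
    """1-D quadratic-cost transport: monotone (sorted) matching is optimal."""
    return sum((a - b) ** 2 for a, b in zip(sorted(xs), sorted(ys)))

def brute_force(xs, ys):
    """Exhaustive search over all matchings, for checking only."""
    return min(sum((a - b) ** 2 for a, b in zip(xs, p))
               for p in itertools.permutations(ys))

xs, ys = [0.0, 1.5, 4.0], [1.0, 2.0, 3.0]
cost = ot_cost_1d(xs, ys)          # 1.0 + 0.25 + 1.0 = 2.25
```

In the team's setting the squared distance would be replaced by a cost coming from a control system (e.g., a sub-Riemannian distance), for which no such closed-form matching is available.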

The problem has a long history which goes back to the pioneering works [76] and [72], and was more recently revised and revitalized by [56] and [75]. At the same time, applications to many domains ranging from image processing to shape reconstruction or urban planning were developed; see a survey in [77].

We are interested in optimal transport problems with a cost coming from optimal control, i.e., from minimizing an integral quadratic cost among trajectories subject to differential constraints arising from a control system, and also in using geometric control methods in general transport problems [55]. The case of controllable affine control systems without drift (in which case the cost is the sub-Riemannian distance) is studied in [36], [34], [58]. Systems with drift are the topic of A. Hindawi's PhD.

The optimal transport problem in this setting borrows methods from control; at the same time, it may help in understanding optimal control: the problem of moving optimally from one point to another is a singular limit of the problem of moving optimally a measure with a smooth density to another one, as the measures tend to Dirac masses.

See new results in section 6.9.

#### Transformations and equivalences of non-linear systems and models

Participants: Laurent Baratchart, Jean-Baptiste Pomet.

The motivations for a detailed study of equivalence classes and invariance of models of control systems under various classes of transformations are two-fold:

• From the point of view of control, a control law satisfying specific objectives on the transformed system can be used to control the original system by including the transformation in the controller.

• From the point of view of identification and modeling, the interest is either to derive qualitative invariants that support the choice of a non-linear model given the observations, or to contribute to a classification of non-linear models, which is sorely missing today. This is a prerequisite for a general theory of non-linear identification; indeed, the success of the linear model in control and identification owes more to the deep understanding one has of it than to some universal character of linearity.

The interested reader can find a richer overview (in French) in the first chapter of [80].

A static feedback transformation is a (non-singular) re-parametrization of the control depending on the state, together with a change of coordinates in the state space. Static equivalence has motivated a very wide literature; in the differentiable case, classification has been performed in relatively low dimensions. It gives insight into models but also shows that this equivalence is “too fine”: very few systems are equivalent, and normal forms are far from stable. This motivates the search for a coarser equivalence that would account for more qualitative phenomena.

The Hartman-Grobman theorem states that every ordinary differential equation (i.e., dynamical system without control) is locally equivalent, in a neighborhood of a non-degenerate equilibrium, to a linear system via a transformation that is merely bi-continuous, whereas smooth equivalence requires many more invariants. This was a motivation to study topological (not necessarily smooth) equivalence. A “Hartman-Grobman theorem for control systems” is stated in [42] under weak regularity conditions; it is too abstract to be relevant to the above considerations on qualitative phenomena, since linearization is performed by functional non-causal transformations rather than feedback transformations stricto sensu; it does, however, acquire a concrete meaning when the inputs are themselves generated by finite-dimensional dynamics. A stronger Hartman-Grobman theorem for control systems (where the transformations are homeomorphisms in the state-control space) in fact cannot hold [50]: almost all topologically linearizable control systems are differentiably linearizable (in the same regularity class as the system itself). In general (equivalence between nonlinear systems), topological invariants remain a subject of interest to us.

A dynamic feedback transformation consists of a dynamic extension (adding new states and assigning them new dynamics) followed by a state feedback on the augmented system; dynamic equivalence is another attempt to enlarge the equivalence classes. It is indeed strictly more general than static equivalence: many systems are known to be dynamically equivalent, but not statically equivalent, to a linear controllable system. The classes containing a linear controllable system are those of differentially flat systems; it turns out (see [59]) that many practical systems are in this class and that being “flat” also means that all the solutions of the system are given by a (Monge) parametrization that describes them without any integration.
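The Monge parametrization can be made concrete on the unicycle x' = u1·cos(θ), y' = u1·sin(θ), θ' = u2, a standard flat system with flat output (x, y): the full state and control are recovered from the flat output and its derivatives, without integrating anything. A hedged numerical sketch (function names and the finite-difference scheme are ours):

```python
import math

def flat_lift(x, y, t, h=1e-5):
    """Recover (theta, u1, u2) of the unicycle from the flat output (x, y),
    using central finite differences in place of exact derivatives."""
    dx = (x(t + h) - x(t - h)) / (2 * h)
    dy = (y(t + h) - y(t - h)) / (2 * h)
    theta = math.atan2(dy, dx)                  # heading
    u1 = math.hypot(dx, dy)                     # forward speed
    # angular rate u2: central difference of the heading itself
    th_p = math.atan2((y(t + 2 * h) - y(t)) / (2 * h),
                      (x(t + 2 * h) - x(t)) / (2 * h))
    th_m = math.atan2((y(t) - y(t - 2 * h)) / (2 * h),
                      (x(t) - x(t - 2 * h)) / (2 * h))
    u2 = (th_p - th_m) / (2 * h)
    return theta, u1, u2

# unit circle traversed at unit speed: theta = t + pi/2, u1 = 1, u2 = 1
theta, u1, u2 = flat_lift(math.cos, math.sin, t=0.3)
```

Every smooth planar curve thus generates an admissible trajectory of the unicycle; this "all solutions from free functions" property is exactly what flatness asserts.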

An important question remains open: how can one algorithmically decide whether a given system has this property, i.e., is dynamically linearizable or not? The mathematical difficulty is that no a priori bound is known on the order of the differential operator giving the parametrization. Within the team, results on low-dimensional systems have been obtained [3]; the above-mentioned difficulty is not solved for these systems, but results are given under a priori prescribed bounds on this order.

For general dynamic equivalence, as well as for flatness, very few invariants are known. In particular, the fact that the size of the extra dynamics contained in the dynamic transformation (or, for flatness, the order of the above-mentioned differential operator) is not a priori bounded makes it very difficult to prove that two systems are not dynamic feedback equivalent, or that a system is not flat. Many simple systems pointed out in [3] are conjectured not to be flat, but no proof is available. The only known general necessary condition for flatness is the so-called ruled surface criterion; it was generalised by the team to dynamic equivalence between arbitrary nonlinear systems in [79].

Another attempt towards conditions for flatness uses the differential-algebraic point of view: the module of differentials of a controllable system is, generically, free and finitely generated over the ring of differential polynomials in $d/dt$ with coefficients in the ring of functions on the system's trajectories; flatness amounts to the existence of a basis consisting of closed differential forms. Expressed in this way, it looks like an extension of the classical Frobenius integrability theorem to the case where the coefficients are differential operators. Some non-classical conditions have to be added to the classical stability under exterior differentiation, and the problem remains open. In [38], a partial answer was given, but in a framework where infinitely many variables are allowed, and a finiteness criterion is still missing.