The project develops tropical methods motivated by applications arising in decision theory (deterministic and stochastic optimal control, game theory, optimization and operations research), in the analysis or control of classes of dynamical systems (including timed discrete event systems and positive systems), in the verification of programs and systems, and in the development of numerical algorithms. Tropical algebra tools are used in interaction with various methods, coming from convex analysis, Hamilton–Jacobi partial differential equations, metric geometry, Perron-Frobenius and nonlinear fixed-point theories, combinatorics or algorithmic complexity. The emphasis of the project is on mathematical modelling and computational aspects.

The subtitle of the Tropical project, namely, “structures, algorithms, and interactions”,
refers to the spirit of our research, including
a methodological component,
computational aspects, and finally interactions
with other scientific fields or real world applications, in particular
through mathematical modelling.

Tropical algebra, geometry, and analysis have enjoyed spectacular development in recent years. Tropical structures initially arose to solve problems in performance evaluation of discrete event systems 51, combinatorial optimization 57, or automata theory 97. They also arose in mathematical physics and asymptotic analysis 87, 84. More recently, these structures have appeared in several areas of pure mathematics, in particular in the study of combinatorial aspects of algebraic geometry 76, 111, 101, 80, in algebraic combinatorics 69, and in arithmetic 62. Also, further applications of tropical methods have appeared, including optimal control 88, program invariant computation 48, timed systems verification 86, and zero-sum games 2.

The term “tropical” generally refers to algebraic structures in which the laws originate from optimization processes. The prototypical tropical structure is the max-plus semifield, consisting of the real numbers, equipped with the maximum, thought of as an additive law, and the addition, thought of as a multiplicative law. Tropical objects appear as limits of classical objects along certain deformations (“log-limits sets” of Bergman, “Maslov dequantization”, or “Viro deformation”). For this reason, the introduction of tropical tools often yields new insights into old familiar problems, leading either to counterexamples or to new methods and results; see for instance 111, 92. In some applications, like optimal control, discrete event systems, or static analysis of programs, tropical objects do not appear through a limit procedure, but more directly as a modelling or computation/analysis tool; see for instance 106, 51, 78, 58.
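As a concrete illustration, here is a minimal sketch of max-plus arithmetic and the induced tropical matrix product (helper names are ours, chosen for the example):

```python
# Max-plus semifield sketch: "addition" is max, "multiplication" is +.
# The additive identity is -inf (the tropical zero); the multiplicative identity is 0.
NEG_INF = float("-inf")

def t_add(a, b):      # tropical addition
    return max(a, b)

def t_mul(a, b):      # tropical multiplication
    return a + b

def t_matmul(A, B):
    """Tropical matrix product: (A*B)[i][j] = max_k (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(t_mul(A[i][k], B[k][j]) for k in range(m))
             for j in range(p)] for i in range(n)]

A = [[0, 3], [NEG_INF, 1]]
B = [[2, NEG_INF], [0, 4]]
print(t_matmul(A, B))  # [[3, 7], [1, 5]]
```

Such a product is exactly the dynamic programming recursion of a maximization problem, which is why tropically linear operators arise naturally in decision problems.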

Tropical methods are linked to the fields of positive systems and of metric geometry 94, 12. Indeed, tropically linear maps are monotone (a.k.a. order-preserving). They are also nonexpansive in certain natural metrics (sup-norm, Hopf oscillation, Hilbert's projective metric, ...). In this way, tropical dynamical systems appear to be special cases of nonexpansive, positive, or monotone dynamical systems, which are studied as part of linear and non-linear Perron-Frobenius theory 85, 3. Such dynamical systems are of fundamental importance in the study of repeated games 91. Monotonicity properties are also essential in the understanding of the fixed point problems that determine program invariants by abstract interpretation 63. The latter problems are actually somehow similar to the ones arising in the study of zero-sum games; see 7. Moreover, positivity or monotonicity methods are useful in population dynamics, either in a discrete space setting 107 or in a PDE setting 53. In such cases, solving tropical problems often leads to solutions or combinatorial insights on classical problems involving positivity conditions (e.g., finding equilibria of dynamical systems with nonnegative coordinates, understanding the qualitative and quantitative behavior of growth rates / Floquet eigenvalues 10, etc). Other applications of Perron-Frobenius theory originate from quantum information and control 100, 105.

The dynamic programming approach allows one to analyze one or two-player dynamic decision problems by means of operators, or partial differential equations (Hamilton–Jacobi or Isaacs PDEs), describing the time evolution of the value function, i.e., of the optimal reward of one player, thought of as a function of the initial state and of the horizon. We work especially with problems having long or infinite horizon, modelled by stopping problems, or ergodic problems in which one optimizes a mean payoff per time unit. The determination of optimal strategies reduces to solving nonlinear fixed point equations, which are obtained either directly from discrete models, or after a discretization of a PDE.
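As a toy illustration of the dynamic programming approach, the following sketch iterates the Bellman operator of a hypothetical two-state, two-action discounted control problem until it reaches its fixed point (all numerical data are invented for the example):

```python
# Value iteration sketch for a 2-state, 2-action discounted control problem.
# r[s][a]: immediate reward; P[s][a][t]: transition probability (illustrative data).
gamma = 0.9
r = [[1.0, 2.0], [3.0, 0.5]]
P = [[[0.8, 0.2], [0.3, 0.7]],
     [[0.5, 0.5], [0.3, 0.7]]]

def bellman(v):
    """One step of the dynamic programming (Bellman) operator."""
    return [max(r[s][a] + gamma * sum(P[s][a][t] * v[t] for t in range(2))
                for a in range(2)) for s in range(2)]

v = [0.0, 0.0]
for _ in range(500):
    v = bellman(v)
# v now approximates the infinite-horizon value function,
# i.e., the fixed point of the Bellman operator.
```

The operator is a sup-norm contraction with factor gamma, so the iteration converges geometrically; the ergodic (mean payoff) problems discussed above correspond to the undiscounted limit, where fixed point equations of a different nature arise.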

The geometry of solutions of optimal control and game problems
Basic questions include, especially for stationary or ergodic problems, the understanding of existence and uniqueness conditions for the solutions of dynamic
programming equations, for instance in terms of controllability
or ergodicity properties, and more generally the understanding of the structure
of the full set of solutions of stationary Hamilton–Jacobi PDEs and of the set of optimal strategies. These issues are already challenging
in the one-player deterministic case, which is an application
of choice for tropical methods, since the Lax-Oleinik semigroup, i.e.,
the evolution semigroup of the Hamilton-Jacobi PDE, is a linear
operator in the tropical sense.
Recent progress in the deterministic case
has been made by combining dynamical systems and PDE techniques (weak KAM theory 66),
and also using metric geometry ideas (abstract boundaries can be used to represent the sets of solutions 79, 4).
The two player case is challenging, owing to the lack of compactness of the analogue of the Lax-Oleinik semigroup and to a richer geometry. The conditions of solvability of ergodic problems for games (for instance, solvability of ergodic Isaacs PDEs), and the representation of solutions are only understood in special cases, for instance in the finite state space case, through tropical geometry and non-linear Perron-Frobenius methods 43, 41, 3.

Algorithmic aspects: from combinatorial algorithms to the attenuation of the curse of dimensionality
Our general goal is to push the limits of solvable models by means
of fast algorithms adapted to large scale instances.
Such instances arise from discrete problems,
in which the state space may be so large that
it is only accessible through local oracles (for instance,
in some web ranking applications, the number of states may
be the number of web pages) 67. They also arise
from the discretization of PDEs, in which the number
of states grows exponentially with the number of degrees
of freedom, according to the “curse of dimensionality”.
A first line of research is the development of new
approximation methods for the value function. So far, classical approximations
by linear combinations have been used, as well as approximation
by suprema of linear or quadratic forms, which have
been introduced in the setting of dual dynamic programming
and of the so called “max-plus basis methods” 68. We believe that more concise
or more accurate approximations may be obtained by unifying
these methods.
Also, some max-plus basis methods have
been shown to attenuate the curse of dimensionality
for very special problems (for instance involving switching) 89, 73.
This suggests that the complexity of control or games problems may be measured
by more subtle quantities than the mere number of states, for instance,
by some forms of metric entropy (for example, certain large scale
problems have a low complexity owing to the presence of decomposition
properties, “highway hierarchies”, etc.).
A second line of our research is the development
of combinatorial algorithms,
to solve large scale zero-sum two-player problems
with discrete state space. This is related to current
open problems in algorithmic game theory. In particular,
the existence of polynomial-time algorithms for games
with ergodic payment is an open question.
See e.g. 44 for a polynomial time average complexity result derived
by tropical methods.
The two lines
of research are related, as the understanding of the geometry of solutions
allows one to develop better approximation or combinatorial algorithms.

Several applications (including population dynamics 10 and discrete event systems 51, 61, 45) lead to studying classes of dynamical systems with remarkable properties: preserving a cone, preserving an order, or being nonexpansive in a metric. These can be studied by techniques of non-linear Perron-Frobenius theory 3 or metric geometry 11. Basic issues concern the existence and computation of the “escape rate” (which determines the throughput, the growth rate of the population), the characterizations of stationary regimes (non-linear fixed points), or the study of the dynamical properties (convergence to periodic orbits). Nonexpansive mappings also play a key role in the “operator approach” to zero-sum games, since the one-day operators of games are nonexpansive in several metrics, see 8.

The different applications mentioned in the other sections lead us to develop some basic research on tropical algebraic structures and in convex and discrete geometry, looking at objects or problems with a “piecewise-linear” structure. These include the geometry and algorithmics of tropical convex sets 47, 40, tropical semialgebraic sets 14, the study of semi-modules (analogues of vector spaces when the base field is replaced by a semi-field), the study of systems of equations linear in the tropical sense, investigating for instance the analogues of the notions of rank, the analogue of the eigenproblems 42, and more generally systems of tropical polynomial equations. Our research also builds on, and concerns, classical convex and discrete geometry methods.

Tropical algebraic objects appear as a deformation of classical objects through various asymptotic procedures. A familiar example is the rule of asymptotic calculus,

e^{a/ε} + e^{b/ε} ≈ e^{max(a,b)/ε}, when ε → 0⁺.

This entails a relation between classical algorithmic problems and tropical algorithmic problems: one may first solve the tropical problem, which is often of a combinatorial nature, and then use the tropical solution as a guide for the classical one.

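This rule of asymptotic calculus can be checked numerically: the quantity ε log(e^{a/ε} + e^{b/ε}) tends to max(a, b) as ε → 0⁺. A small sketch, using the usual max-shift for numerical stability:

```python
import math

def log_sum(a, b, eps):
    """eps * log(exp(a/eps) + exp(b/eps)); tends to max(a, b) as eps -> 0+."""
    m = max(a, b)  # subtract the max before exponentiating, for numerical stability
    return m + eps * math.log(math.exp((a - m) / eps) + math.exp((b - m) / eps))

for eps in (1.0, 0.1, 0.01):
    print(eps, log_sum(2.0, 3.0, eps))
# the printed values approach max(2.0, 3.0) = 3.0 as eps shrinks
```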
In particular, tropicalization establishes a connection between polynomial systems and piecewise affine systems that are somehow similar to the ones arising in game problems. It allows one to transfer results from the world of combinatorics to “classical” equation solving. We investigate the consequences of this correspondence on complexity and numerical issues. For instance, combinatorial problems can be solved in a robust way. Hence, situations in which the tropicalization is faithful lead to improved algorithms for classical problems. In particular, scalings for polynomial eigenproblems based on tropical preprocessing have started to be used in matrix analysis 74, 77.

Moreover, the tropical approach has been recently applied to construct examples of linear programs in which the central path has an unexpectedly high total curvature 6, and it has also led to positive polynomial-time average case results concerning the complexity of mean payoff games. Similarly, we are studying semidefinite programming over non-archimedean fields 14, 49, with the goal of better understanding complexity issues in classical semidefinite and semi-algebraic programming.

One important class of applications of max-plus algebra comes from discrete event dynamical systems 51. In particular, modelling timed systems subject to synchronization and concurrency phenomena leads to studying dynamical systems that are non-smooth, but which have remarkable structural properties (nonexpansiveness in certain metrics, monotonicity) or combinatorial properties. Algebraic methods allow one to obtain analytical expressions for performance measures (throughput, waiting time, etc). A recent application, to emergency call centers, can be found in 45.

Optimal control and game theory have numerous well-established application fields: mathematical economics and finance, stock optimization, optimization of networks, decision making, etc. In most of these applications, one needs either to derive analytical or qualitative properties of solutions, or to design exact or approximate algorithms adapted to large scale problems.

We develop, or have developed, several aspects of operations research, including the application of stochastic control to optimal pricing and optimal measurement in networks 102. Applications of tropical methods arise in particular from discrete optimization 58, 60, scheduling problems with and-or constraints 93, or product mix auctions 109.

A number of programs and systems verification questions, in which safety considerations are involved, reduce to computing invariant subsets of dynamical systems. This approach appears in various guises in computer science, for instance in static analysis of programs by abstract interpretation, along the lines of P. and R. Cousot 63, but also in control (e.g., computing safety regions by solving Isaacs PDEs). These invariant sets are often sought in some tractable effective class: ellipsoids, polyhedra, parametric classes of polyhedra with a controlled complexity (the so-called “templates” introduced by Sankaranarayanan, Sipma and Manna 104), shadows of sets represented by linear matrix inequalities, disjunctive constraints represented by tropical polyhedra 48, etc. The computation of invariants boils down to solving large scale fixed point problems. The latter are of the same nature as the ones encountered in the theory of zero-sum games, and so, the techniques developed in the previous research directions (especially methods of monotonicity, nonexpansiveness, discretization of PDEs, etc) apply to the present setting; see e.g. 70, 75 for the application of policy iteration type algorithms, or 7 for the application to fixed point problems over the space of quadratic forms. The problem of computation of invariants is indeed a key issue needing the methods of several fields: convex and nonconvex programming, semidefinite programming and symbolic computation (to handle semialgebraic invariants), nonlinear fixed point theory, approximation theory, tropical methods (to handle disjunctions), and formal proof (to certify numerical invariants or inequalities).

The team has developed collaborations on the dimensioning of emergency call centers, with Préfecture de Police (Plate Forme d'Appels d'Urgence - PFAU - 17-18-112, operated jointly by Brigade de sapeurs pompiers de Paris and by Direction de la sécurité de proximité de l'agglomération parisienne) and also with the Emergency medical services of Assistance Publique – Hôpitaux de Paris (Centre 15 of SAMU75, 92, 93 and 94). This work is described further in Section 8.7.1. Some work done specifically during the Covid-19 crisis is described in Section 8.7.2.

Coq-Polyhedra is a library providing a formalization of convex polyhedra in the Coq proof assistant. While still in active development, it provides an implementation of the simplex method, and already handles the basic properties of polyhedra such as emptiness, boundedness, and membership. Several fundamental results in the theory of convex polyhedra, such as Farkas' lemma, the duality theorem of linear programming, and Minkowski's theorem, are also formally proved.

The formalization is based on the Mathematical Components library, and makes an extensive use of the boolean reflection methodology.

Coq-Polyhedra now provides most of the basic operations on polyhedra. They are expressed on a quotient type that avoids reasoning with particular inequality representations. They include:
* the construction of elementary polyhedra (half-spaces, hyperplanes, affine spaces, orthants, simplices, etc.)
* basic operations such as intersection, projection (thanks to the formalization of the Fourier-Motzkin algorithm), image under linear functions, computation of convex hulls, finitely generated cones, etc.
* computation of affine hulls of polyhedra, as well as their dimension

Thanks to this, we have made significant progress on the formalization of the combinatorics of polyhedra. The poset of faces, as well as its fundamental properties (lattice structure, gradedness, atomicity and co-atomicity, etc.), are now formalized. The manipulation of faces is based on an extensive use of canonical structures, which allows us to get the most appropriate inequality representations for reasoning. In this way, we arrive at very concise and elegant proofs, closer to pen-and-paper ones.

This software aims at enabling the definition of a Petri net execution semantics, as well as the instantiation and execution of such a net using the previously defined semantics.

The heart of the project is its kernel, which carries out the step-by-step execution of the net, obeying rules provided by an oracle. This user-defined, separate oracle computes the information the kernel needs to build the next state from the current one. The basis of our software is thus a framework for the instantiation and execution of Petri nets that makes no assumption on the semantics.
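A minimal sketch of this kernel/oracle split (class and parameter names are ours, chosen for the illustration, and do not reflect the actual code base) could look as follows:

```python
# Minimal sketch of a Petri net kernel driven by an external "oracle".
# The kernel only knows the net structure; the oracle decides which
# enabled transition fires next (names and structure are illustrative).
class PetriNet:
    def __init__(self, pre, post, marking):
        self.pre, self.post = pre, post        # pre/post: transition -> {place: weight}
        self.marking = dict(marking)           # place -> token count

    def enabled(self):
        """Transitions whose input places carry enough tokens."""
        return [t for t, needs in self.pre.items()
                if all(self.marking.get(p, 0) >= w for p, w in needs.items())]

    def step(self, oracle):
        """Fire the transition chosen by the oracle among the enabled ones."""
        choices = self.enabled()
        if not choices:
            return False
        t = oracle(choices, self.marking)
        for p, w in self.pre[t].items():       # consume input tokens
            self.marking[p] -= w
        for p, w in self.post[t].items():      # produce output tokens
            self.marking[p] = self.marking.get(p, 0) + w
        return True

net = PetriNet(pre={"t1": {"p1": 1}}, post={"t1": {"p2": 1}}, marking={"p1": 2})
while net.step(lambda ts, m: ts[0]):   # trivial oracle: always pick the first choice
    pass
print(net.marking)  # {'p1': 0, 'p2': 2}
```

The point of the split is that priorities, timing, or call-center-specific rules live entirely in the oracle, while the kernel stays generic.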

In the context of the study of the dynamics of emergency call centers, a second part of this software is the definition and implementation of the semantics of call centers modelled as Petri nets, and more specifically as timed prioritized Petri nets. A module interoperating with the kernel makes it possible to include all the operational specificities of call centers (urgency level, discriminating between operators and callers, ...) while preserving the genericity of the kernel, which embeds the Petri net formalism as such.

In order to enable the quantitative study of the throughput of calls managed by emergency call centers and the assessment of various organizational configurations considered by the stakeholders (firefighters, police, medical emergency services of the 75, 92, 93 and 94 French departments), this software models their behaviour by resorting to extensions of the Petri net formalism. Given a call transfer protocol in a call center, which corresponds to a topology and an execution semantics of a Petri net, the software generates a set of incoming calls in accordance with the empirically observed statistical distributions (share of very urgent calls, conversation length), then simulates their management by the operators with respect to the defined protocol. Transient-regime phenomena (peak load, support), which are not yet handled by mathematical analysis, can therefore be studied. The output of the software is a log file giving an execution trace of the simulation, featuring extensive information so as to enable the analysis of the data and provide simulation-based insights to decision makers.

The software relies on a Petri net simulation kernel designed to be as modular and adaptable as possible, fit for simulating other Petri-net related phenomena, even when their semantics differ greatly.

In a series of joint works with Antoine Hochart, we applied methods of non-linear fixed point theory to zero-sum games.

A key issue is the solvability of the ergodic equation associated to a zero-sum game with finite state space, i.e., given a dynamic programming operator T: R^n → R^n, the existence of a scalar λ and a vector u (the “bias”) such that T(u) = λe + u, where e denotes the unit vector of R^n; the scalar λ then yields the mean payoff of the game per time unit.

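As an illustration of how such ergodic equations can be solved numerically, relative value iteration normalizes the iterates of the dynamic programming operator; the following sketch runs it on a toy one-player instance (all data are invented for the example):

```python
# Relative value iteration sketch for the ergodic equation T(u) = lambda*e + u,
# on a toy 2-state, 2-action average-reward control problem (illustrative data).
r = [[1.0, 2.0], [3.0, 0.5]]                    # r[s][a]: reward
P = [[[0.6, 0.4], [0.3, 0.7]],                  # P[s][a][t]: transition probability
     [[0.5, 0.5], [0.8, 0.2]]]

def T(u):
    """Dynamic programming operator of the one-player game."""
    return [max(r[s][a] + sum(P[s][a][t] * u[t] for t in range(2))
                for a in range(2)) for s in range(2)]

u = [0.0, 0.0]
for _ in range(200):
    w = T(u)
    u = [wi - w[0] for wi in w]   # normalize so that u[0] = 0
lam = T(u)[0]                      # estimate of the mean payoff per time unit
```

Since all transition probabilities here are positive, the operator contracts the span seminorm and the iteration converges; the general solvability question studied in the text is precisely about what happens without such mixing assumptions.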
In 13, we apply game theory methods to the study of the nonlinear eigenproblem for homogeneous order preserving self maps of the interior of the cone. We show that the existence and uniqueness of an eigenvector is governed by combinatorial conditions, involving dominions (sets of states “controlled” by one of the two players). In this way, we characterize the situation in which the existence of an eigenvector holds independently of perturbations, and we solve an open problem raised in 72.

In 20, we introduce a non-linear fixed point method to approximate the joint spectral radius of a finite set of nonnegative matrices. We show in particular that the joint spectral radius is the limit of the eigenvalues of a family of non-linear risk-sensitive type dynamic programming operators. We develop a projective version of Krasnoselskii-Mann iteration to solve these eigenproblems, and report experimental results on large scale instances (several matrices in dimensions of order 1000 within a minute).
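As a much simplified illustration of the projective idea (a single matrix instead of several, so the problem reduces to classical Perron-Frobenius theory rather than the nonlinear operators of the cited work), a projective power iteration computes the Perron root of a nonnegative matrix:

```python
# Toy illustration: projective power iteration computing the Perron root of a
# nonnegative matrix, i.e. the eigenvalue of the map x -> Ax renormalized on
# the simplex. The instance is illustrative (one matrix only).
A = [[2.0, 1.0], [1.0, 3.0]]

def apply(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

x = [0.5, 0.5]
for _ in range(200):
    y = apply(A, x)
    s = sum(y)
    x = [yi / s for yi in y]          # projective step: renormalize onto the simplex
lam = apply(A, x)[0] / x[0]           # Perron eigenvalue estimate
```

For this matrix the Perron root is (5 + sqrt(5))/2, and the iteration converges geometrically since the matrix is positive.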

The PhD thesis of Benoît Tran 108, supervised by Jean-Philippe Chancelier (ENPC) and Marianne Akian, concerns the numerical solution of the dynamic programming equation of discrete time stochastic control problems.

Several methods have been proposed in the literature to bypass the curse of dimensionality of such an equation, by assuming a certain structure of the problem. Examples are the max-plus based method of McEneaney 90, 88, the stochastic max-plus scheme proposed by Zheng Qu 99, the stochastic dual dynamic programming (SDDP) algorithm of Pereira and Pinto 95, the mixed integer dynamic approximation scheme of Philpott, Faisal and Bonnans 50, the probabilistic numerical method of Fahim, Touzi and Warin 65, and its association with the max-plus based method proposed in 39. We propose to combine and compare these methods in order to handle more general structures.

In a first work 38, we build a common framework for both SDDP and a discrete time, finite horizon version of Zheng Qu's algorithm for deterministic problems involving a finite set-valued (or switching) control and a continuum-valued control. We propose an algorithm that generates monotone approximations of the value function as a pointwise supremum, or infimum, of basic (for example affine or quadratic) functions which are randomly selected. We give sufficient conditions that ensure almost sure convergence of the approximations to the value function. In 32, we study generalizations and combinations of these algorithms in the case of stochastic optimal control problems.
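The following sketch illustrates the underlying “cut” idea of such schemes: a convex value function is approximated from below by a pointwise supremum of affine functions (tangents sampled at a few points). The function and the sample points are illustrative:

```python
# "Cut" approximation sketch: a convex function approximated from below by the
# pointwise maximum of a few of its tangent affine functions (illustrative data).
def f(x):
    return x * x                       # toy convex value function

def df(x):
    return 2 * x                       # its derivative

cuts = []                              # list of (slope, intercept) pairs
for x0 in (-1.0, 0.0, 0.5, 1.0):       # sampled points
    cuts.append((df(x0), f(x0) - df(x0) * x0))

def lower_approx(x):
    """Pointwise maximum of the affine cuts: exact at sampled points,
    a lower bound everywhere else (by convexity)."""
    return max(a * x + b for a, b in cuts)
```

In SDDP-type algorithms the sample points are generated along simulated trajectories, and the supremum of cuts is refined iteratively; the dual (max-plus) variant instead uses infima of basic functions to approximate from above.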

In recent works we introduce and study an entropic relaxation of the Nested Distance introduced by Pflug 96, and the interchange between integration and minimization.

Accelerated gradient algorithms in convex optimization were introduced by Nesterov. A fundamental question is whether similar acceleration schemes work for the iteration of nonexpansive mappings. In a joint work 34 with Zheng Qu (Hong Kong University), motivated by the analysis of Markov decision processes and zero-sum repeated games, we study fixed point problems for Shapley operators, i.e., for sup-norm nonexpansive and order preserving mappings. We deal more especially with affine operators, corresponding to zero-player problems; the latter can be used as building blocks for one or two player problems, by means of policy iteration. For an affine operator associated to a Markov chain, the acceleration property can be formalized as follows: one should replace an original scheme with a convergence rate 1 − γ by a scheme with a convergence rate 1 − √γ, where γ is the spectral gap of the Markov chain. We characterize the spectra of Markov chains for which this acceleration is possible. We also characterize the spectra for which a multiple acceleration is possible, leading to a rate of 1 − γ^{1/d} for an integer d ≥ 2.
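The following toy computation contrasts the two rates on an instance of our own (a diagonal affine map, so the spectral assumptions under which Nesterov-type momentum is known to work are trivially satisfied; the momentum coefficient is the standard choice for this conditioning):

```python
# Hedged numerical sketch: accelerating the fixed point iteration of an affine
# map x -> Ax + b. Here A is diagonal with spectral radius 1 - gamma (gamma
# playing the role of the spectral gap), so coordinates iterate as scalars.
gamma = 0.1
a = [1 - gamma, 0.5]                   # eigenvalues of A (diagonal entries)
b = [1.0, 1.0]
star = [bi / (1 - ai) for ai, bi in zip(a, b)]   # exact fixed point

def err(x):
    return max(abs(xi - si) for xi, si in zip(x, star))

# plain iteration: contraction factor 1 - gamma per step
x = [0.0, 0.0]
for _ in range(60):
    x = [ai * xi + bi for ai, xi, bi in zip(a, x, b)]

# momentum iteration with the Nesterov-type coefficient for this conditioning
beta = (1 - gamma ** 0.5) / (1 + gamma ** 0.5)
y, y_prev = [0.0, 0.0], [0.0, 0.0]
for _ in range(60):
    z = [yi + beta * (yi - pi) for yi, pi in zip(y, y_prev)]
    y_prev, y = y, [ai * zi + bi for ai, zi, bi in zip(a, z, b)]

# err(y) is far smaller than err(x): rate roughly (1 - sqrt(gamma)) vs (1 - gamma)
```

The characterization result in the text is precisely about which spectra allow this speedup in the general (non-symmetric) affine case, where momentum can fail.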

In 37 (joint work with Vincent Leclère, ENPC), we study multistage stochastic problems with a linear structure and general cost distribution, and show that the value function is polyhedral. We characterize the affinity regions as the cells of a chamber complex. We deduce fixed-parameter tractability results, showing that when the dimensions of some state spaces are fixed, the problem (which is #P-complete in general) becomes polynomial.

Hamilton-Jacobi-Bellman equations arise as the dynamic programming equations of deterministic or stochastic optimal control problems. They allow one to obtain the global optimum of these problems and to synthesize an optimal feedback control, leading to a solution robust against system perturbations. Several methods have been proposed in the literature to bypass the obstruction of the curse of dimensionality of such equations, assuming a certain structure of the problem, and/or using “unstructured discretizations” that are not based on given grids. Among them, one may cite tropical numerical methods and probabilistic numerical methods. In another direction, “highway hierarchies”, developed by Sanders, Schultes and coworkers 64, 103, initially for applications to on-board GPS systems, are a computational method that allows one to accelerate Dijkstra's algorithm for discrete time and state shortest path problems.

The aim of the starting thesis of Shanqing Liu is to develop new numerical methods to solve Hamilton-Jacobi-Bellman equations that are less sensitive to the curse of dimensionality. We shall in particular develop methods based on continuous, or infinitesimal, analogues of highway hierarchies, which can also be adapted to unstructured discretization grids. The first step, initiated during the internship of Shanqing Liu, is to adapt highway hierarchies to the fast marching method.
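For background, the discrete analogue of these equations is the shortest path equation v(s) = min over edges (s, t) of [c(s, t) + v(t)], with v = 0 at the target, which Dijkstra's algorithm solves exactly and highway hierarchies accelerate. A minimal sketch on an illustrative graph:

```python
# Dijkstra's algorithm viewed as a solver of the discrete Hamilton-Jacobi-Bellman
# (shortest path) equation v(s) = min_(s,t) [c(s,t) + v(t)], v(target) = 0.
import heapq

def dijkstra(edges, target):
    """edges: dict node -> list of (neighbor, cost); returns the value function v."""
    v = {target: 0.0}
    heap = [(0.0, target)]
    while heap:
        d, s = heapq.heappop(heap)
        if d > v.get(s, float("inf")):
            continue                    # stale heap entry
        for t, c in edges.get(s, []):
            if d + c < v.get(t, float("inf")):
                v[t] = d + c
                heapq.heappush(heap, (v[t], t))
    return v

# edges are reversed: values propagate backwards from the target
edges = {"goal": [("a", 1.0), ("b", 4.0)], "a": [("b", 2.0)]}
print(dijkstra(edges, "goal"))  # {'goal': 0.0, 'a': 1.0, 'b': 3.0}
```

The fast marching method is the continuous-space counterpart of this causal, one-pass sweep, which is why hierarchy-type speedups are a natural candidate for it.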

In a recent paper 27, we investigated how the volume of a ball in a Hilbert geometry grows as its radius increases. In particular, we studied the volume entropy

Ent(Ω) = lim_{r → ∞} (1/r) log Vol B(x, r),

where B(x, r) denotes the metric ball of radius r centered at a point x of the Hilbert geometry defined by the convex body Ω; this quantity does not depend on the choice of the base point x.
We are continuing this work by investigating the volume of balls of finite
radius, rather than the asymptotics. We have found it convenient to turn
our attention to a metric different from the Hilbert metric,
but related to it. The Funk metric, as it is called, lacks the
symmetry property usually assumed for metric spaces.
However, it is somewhat simpler to work with when dealing with volumes,
and it exhibits the same interesting behaviour.
Given a convex body and a radius

This subsection summarizes the work done during the “délégation” at INRIA of Constantin Vernicos.

I started my secondment in early 2020 by revising my work with Cormac Walsh on the entropy of Hilbert geometries, which was finally accepted by the Annales de l'ENS, who had asked for these revisions; see Section 8.2.1 for more information. We also started working on another conjecture of ours: that the Busemann volume of a ball in any given Hilbert geometry is bounded from above by the volume of the ball of the same radius in hyperbolic geometry. We discovered that this fails for some small balls. This has to be put into perspective with the result obtained by Vernicos-Yang 110, which implies that for large radii this is always the case. We are currently studying the Holmes-Thompson volume in Funk geometry.

In the second semester of 2020, I also started a collaboration with Antonin Guilloux (IMJ), with whom I am trying to find out at how many points inside a Hilbert geometry one needs to know the unit tangent ball in order to characterise the convex body. Our conjecture is that one needs one more point than the dimension, and we have made some progress in proving this for polytopes.

At the same time I have been exchanging with Stéphane Gaubert who introduced me to the central path and barrier method. That is how I saw that the universal barrier is exactly the Holmes-Thompson volume of the Funk geometry of a convex set. Furthermore it allowed me to realize that a paper by Andreas Bernig 54 actually uses a barrier for which the central path is a quasi-geodesic of the polytope. We are currently trying to find out how this is beneficial for the optimization problem.

In a joint work with Ricardo Katz (Conicet, Argentina) and Pierre-Yves Strub (LIX, Ecole Polytechnique), we present the first formalization of faces of polyhedra in the proof assistant Coq. This builds on the formalization of a library providing the basic constructions and operations over polyhedra, including projections, convex hulls and images under linear maps. Moreover, we design a special mechanism which automatically introduces an appropriate representation of a polyhedron or a face, depending on the context of the proof. We demonstrate the usability of this approach by establishing some of the most important combinatorial properties of faces, namely that they constitute a family of graded atomistic and coatomistic lattices closed under sublattices. This is implemented in the CoqPolyhedra library (we refer to the software section for more details). This work has been published in the proceedings of the 10th International Joint Conference on Automated Reasoning 31, and has been invited for submission to a special issue of Logical Methods in Computer Science.

During his M2 internship under the supervision of X. Allamigeon and P.-Y. Strub, Quentin Canu has worked on the development of vertex enumeration methods in CoqPolyhedra. We are now working on the formalization of lattices and ordered structures, with a special emphasis on finite (sub)lattices.

In a joint work with Louis Rowen (Univ. Bar Ilan), we study linear algebra and convexity properties over “systems”. The latter provide a general setting encompassing extensions of the tropical semifields and hyperfields.

Closed tropical convex cones are the most basic examples of modules over the tropical semifield. They coincide with sub-fixed-point sets of Shapley operators (dynamic programming operators of zero-sum games). We study a larger class of cones, which we call “ambitropical” as it includes both tropical cones and their duals. Ambitropical cones can be defined as lattices in the order induced by R^n. Closed ambitropical cones are precisely the fixed point sets of Shapley operators. They are characterized by a property of best co-approximation arising from the theory of nonexpansive retracts of normed spaces. Finitely generated ambitropical cones arise when considering Shapley operators of deterministic games with finite action spaces. Finitely generated ambitropical cones are special polyhedral complexes whose cells are alcoved polyhedra and, locally, they are in bijection with order preserving retracts of the Boolean cube. This is a joint work with Sara Vannucci (invited PhD student from Salerno university).

We show that the problem of computing a best approximation of a collection of points by a tropical hyperplane is equivalent to solving a mean payoff game, and also to computing the maximal radius of an inscribed ball in a tropical polytope. We provide an application to economics (measuring the distance to equilibrium). We also study a dual problem, computing the minimal radius of a circumscribed ball of a tropical polytope, and apply it to the rank-one approximation of tropical matrices and tensors.

The tropical semifields can be thought of as images of fields with a non-archimedean valuation. They allow one, in this way, to study the asymptotics of Puiseux series with complex coefficients. When dealing with Puiseux series with real coefficients and the associated order, it is convenient to use the symmetrized tropical semiring introduced in 98 (see also 51), and the signed valuation, which associates to any series its valuation together with its sign.
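A minimal sketch of the signed valuation on a toy representation of Puiseux series (the representation and the sign convention are ours; here, following the max-plus convention, the valuation picks the leading, i.e. largest, exponent):

```python
# Sketch: signed valuation of a (formal) Puiseux series, represented as a dict
# mapping exponents to real coefficients. With the max-plus convention used here,
# the valuation is the leading (largest) exponent with a nonzero coefficient;
# the signed valuation also records the sign of that leading coefficient.
def signed_valuation(series):
    """series: {exponent: coefficient}; returns (leading exponent, sign)."""
    lead = max(e for e, c in series.items() if c != 0)
    return lead, (1 if series[lead] > 0 else -1)

# t^2 - 3*t^(1/2): the leading term is t^2, with a positive coefficient
print(signed_valuation({2.0: 1.0, 0.5: -3.0}))  # (2.0, 1)
```

The pair (valuation, sign) is exactly the kind of data the symmetrized tropical semiring is designed to manipulate algebraically.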

We study with these tools the asymptotics of eigenvalues and eigenvectors of symmetric positive definite matrices over the field of Puiseux series. This raises the problem of defining the appropriate notions of positive definite matrices over the symmetrized tropical semiring, of eigenvalues and eigenvectors of such matrices, and hence of roots of polynomials and their multiplicities. This builds on 14 and 52.

The entropic barrier, studied by Bubeck and Eldan (Proc. Mach. Learn. Research, 2015), is a self-concordant barrier with asymptotically optimal self-concordance parameter. In a joint work 35 with Abdellah Aznag (Columbia University) and Yassine Hamdi (Ecole Polytechnique), we study the tropicalization of the central path associated with the entropic barrier, i.e., the logarithmic limit of this central path for a parametric family of linear programs defined over the field of Puiseux series. Our main result is that the tropicalization of the entropic central path is a piecewise linear curve which coincides with the tropicalization of the logarithmic central path studied by Allamigeon et al. in 6. One consequence is that the number of linear pieces in the tropical entropic central path can be exponential in the dimension and the number of inequalities defining the linear program. This result suggests that interior point methods using the entropic barrier may be subject to the same pathology as the ones based on the logarithmic barrier, i.e., the number of iterations performed may be exponential in the dimension and the number of inequalities.

Linear complementarity programming is a generalization of linear programming which encompasses the computation of Nash equilibria for bimatrix games. While the latter problem is PPAD-complete, we show in 36 that the analogue of this problem in tropical algebra can be solved in polynomial time. Moreover, we prove that the Lemke–Howson algorithm carries over to the tropical setting and performs a linear number of pivots in the worst case. A consequence of this result is a new class of (classical) bimatrix games for which the computation of Nash equilibria can be done in polynomial time. This is joint work with Frédéric Meunier (Cermics, ENPC).

Semidefinite programming consists in optimizing a linear function over a spectrahedron. The latter is a subset of $\mathbb{R}^n$ of the form $\{x \in \mathbb{R}^n : Q^{(0)} + x_1 Q^{(1)} + \cdots + x_n Q^{(n)} \succeq 0\}$, where the $Q^{(k)}$ are symmetric matrices and $\succeq$ denotes the positive semidefinite order. Semidefinite programming can be studied as well over non-archimedean ordered fields: non-archimedean instances encode parametric families of ordinary instances, with an infinitesimal or arbitrarily large parameter.
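
As a hedged illustration of the classical (archimedean) setting, membership of a point in a spectrahedron can be tested by checking that the corresponding matrix pencil is positive semidefinite; the matrices below are made up for the example:

```python
# Illustrative sketch: test membership of a point x in the spectrahedron
# {x : Q0 + x1*Q1 + ... + xn*Qn is positive semidefinite}.
import numpy as np

def in_spectrahedron(Q0, Qs, x, tol=1e-9):
    pencil = Q0 + sum(xi * Qi for xi, Qi in zip(x, Qs))
    return bool(np.linalg.eigvalsh(pencil).min() >= -tol)

# Example: {(x1, x2) : [[1, x1], [x1, x2]] >= 0}, i.e. the region x2 >= x1^2.
Q0 = np.array([[1.0, 0.0], [0.0, 0.0]])
Q1 = np.array([[0.0, 1.0], [1.0, 0.0]])
Q2 = np.array([[0.0, 0.0], [0.0, 1.0]])
print(in_spectrahedron(Q0, [Q1, Q2], [1.0, 2.0]))  # True:  2 >= 1^2
print(in_spectrahedron(Q0, [Q1, Q2], [2.0, 1.0]))  # False: 1 <  2^2
```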

To this end, we studied tropical spectrahedra, which are defined as the images by the valuation of non-archimedean spectrahedra. We establish that they are closed semilinear sets and that, under a genericity condition, they are described by explicit inequalities expressing the nonnegativity of tropical minors of order 1 and 2. These results are presented in 14, together with more general material on the tropicalization of semi-algebraic sets.
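
For intuition only, here is a drastically simplified rendering of the order-2 minor conditions, for a symmetric matrix over the plain max-plus semiring; the actual characterization in 14 also involves the order-1 minors and the sign information of the symmetrized semiring, both omitted here:

```python
# Simplified sketch (not the exact characterization of 14): for a
# symmetric max-plus matrix M, require that each 2x2 "tropical minor"
# is dominated by its diagonal term, i.e. M[i][i] + M[j][j] >= 2*M[i][j].

def trop_minor_conditions(M):
    n = len(M)
    return all(M[i][i] + M[j][j] >= 2 * M[i][j]
               for i in range(n) for j in range(n) if i != j)

print(trop_minor_conditions([[0, -1], [-1, 0]]))  # True
print(trop_minor_conditions([[0, 1], [1, 0]]))    # False
```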

We study tropical posynomial systems, with motivations from call center performance evaluation (see Section 8.7.1). We exhibit a class of classical or tropical posynomial systems which can be solved by reduction to linear or convex programming problems. This relies on a notion of colorful vectors with respect to a collection of Newton polytopes, and extends the convex programming approach to one-player stochastic games. These results appeared in the proceedings of the International Congress on Mathematical Software 2020 29.

The primary goal of 16 is to better understand the topological properties of various tensor ranks, an aspect that has been somewhat neglected in existing studies. At the same time, the results on the path-connectedness and simple connectedness of sets defined by tensor rank, multilinear rank, and their symmetric counterparts have useful practical implications.

One of the most basic and common problems involving tensors in applications is to find low-rank approximations with respect to one of these notions of rank. Riemannian manifold optimization techniques have been used for this problem. For instance, one considers approximation by tensors of a fixed multilinear rank, which form a smooth Riemannian manifold, so that Riemannian optimization techniques can be applied. But this raises the question of whether this manifold is path-connected, since the behavior of such methods depends on the connected component in which they are initialized. Besides, homotopy continuation techniques have also made a recent appearance in tensor decomposition problems over the real and complex numbers.

Motivated by the above techniques, in 16 (joint work with Pierre Comon, Lek-Heng Lim, and Ke Ye) we systematically study path-connectedness and homotopy groups of sets of tensors defined by tensor rank, border rank, multilinear rank, as well as their symmetric counterparts for symmetric tensors.
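
As a concrete reference point for one of these notions, the multilinear rank of a 3-way tensor is the tuple of ranks of its three matrix unfoldings, which can be computed directly:

```python
# Illustrative sketch: the multilinear rank of a 3-way tensor is the
# tuple of ranks of its matrix unfoldings, one per mode.
import numpy as np

def multilinear_rank(T):
    ranks = []
    for mode in range(T.ndim):
        # Unfold: bring `mode` to the front, flatten the other modes.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        ranks.append(np.linalg.matrix_rank(unfolding))
    return tuple(ranks)

# A rank-1 tensor a ⊗ b ⊗ c has multilinear rank (1, 1, 1).
a, b, c = np.array([1., 2.]), np.array([1., -1., 3.]), np.array([2., 5.])
T = np.einsum('i,j,k->ijk', a, b, c)
print(multilinear_rank(T))  # (1, 1, 1)
```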

In 25 (joint work with Lek-Heng Lim and Mateusz Michałek) we show that the empirical risk minimization (ERM) problem for neural networks has no solution in general. More precisely, given a training set, the infimum of the empirical risk over the network parameters may fail to be attained by any choice of weights. We show that this phenomenon already occurs for small network architectures.

In 17 (joint work with Shmuel Friedland, University of Illinois at Chicago) we extend some characterizations and inequalities for the eigenvalues of nonnegative matrices, such as the Donsker-Varadhan, Friedland-Karlin, and Karlin-Ost inequalities, to nonnegative tensors. These inequalities are related to a correspondence between nonnegative tensors and ergodic control: the logarithm of the spectral radius of a tensor is given by the value of an ergodic problem in which instantaneous payments are given by a relative entropy. Some of these inequalities involve the tropical spectral radius, a limit of the spectral radius, which we characterize combinatorially as the value of an ergodic Markov decision process.
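
In the matrix case, the tropical (max-plus) spectral radius coincides with the maximum cycle mean of the associated weighted graph; a minimal sketch (with a made-up matrix) computes it from tropical matrix powers:

```python
# Hedged sketch (matrix case): the tropical spectral radius of a
# max-plus matrix A equals its maximum cycle mean, obtained as
# rho = max over k <= n and i of (A^(tropical k))_ii / k.

def trop_matmul(A, B):
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def trop_spectral_radius(A):
    n, P, rho = len(A), A, float('-inf')
    for k in range(1, n + 1):
        rho = max(rho, max(P[i][i] / k for i in range(n)))
        P = trop_matmul(P, A)
    return rho

# The 2-cycle 0 -> 1 -> 0 has mean (2 + 8) / 2 = 5, beating both self-loops.
print(trop_spectral_radius([[1, 2], [8, 0]]))  # 5.0
```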

Since 2014, we have been collaborating with the Préfecture de Police (Régis Reboul and Lcl Stéphane Raclot), more specifically with the Brigade de Sapeurs-Pompiers de Paris (BSPP) and the Direction de la Sécurité de Proximité de l'Agglomération Parisienne (DSPAP), on the performance evaluation of the new organization (PFAU, “Plate-forme d'appels d'urgence”) handling emergency calls to firemen and policemen in the Paris area. We developed analytical models based on Petri nets with priorities and fluid limits; see 45, 46, 55. In 2019, with four students of École polytechnique, Céline Moucer, Julia Escribe, Skandère Sahli and Alban Zammit, we performed case studies showing the improvement brought by the two-level filtering procedure.

Moreover, in 2019, this work was extended to encompass the handling of health emergency calls, through a new collaboration with leaders of the four medical emergency aid services (SAMU) of Assistance Publique – Hôpitaux de Paris (AP-HP), i.e., SAMU75, 92, 93 and 94, in the framework of a project coordinated by Dr. Christophe Leroy from AP-HP. As part of his PhD work, Marin Boyet has developed Petri net models capturing the characteristics of the call centers (CRRA) through which the SAMU handle emergency calls, in order to make dimensioning recommendations. Following this, we were strongly solicited by AP-HP during the Covid-19 pandemic to determine a crisis dimensioning of the SAMU. We have also initiated a new collaboration with SAMU69, also on dimensioning.

In parallel, we have further investigated the theoretical properties of timed Petri nets with preselection and priority routing. We represent the behavior of these systems by piecewise affine dynamical systems, and use tools from the theory of nonexpansive mappings to analyze them. We establish an equivalence theorem between priority-free fluid timed Petri nets and semi-Markov decision processes, from which we derive the convergence to a periodic regime and the polynomial-time computability of the throughput. More generally, we develop an approach inspired by tropical geometry, characterizing the congestion phases as the cells of a polyhedral complex. These results are illustrated by an application to the performance evaluation of the emergency call centers of the SAMU in the Paris area. They have been published in the 41st International Conference on Application and Theory of Petri Nets and Concurrency 30, and subsequently invited for submission to the journal Fundamenta Informaticæ.

This action began in March 2020, at the start of the Covid-19 crisis in the Paris area, when our team was contacted by the Emergency Medical Services (EMS) of Assistance Publique–Hôpitaux de Paris (AP-HP) to compute a dimensioning of the emergency call centers (CRRA of SAMU 75, 92, 93 and 94), i.e., to evaluate the numbers of medical regulation assistants and of emergency physicians needed to deal with the flux of calls during the epidemiologic peak which was yet to come. A crisis dimensioning was delivered, based on the models previously developed in 30 and on earlier modelling work with the SAMU of AP-HP, see Section 8.7.1.

To do so, we analyzed data from patient regulation files and found that they provide early and reliable signals of epidemiologic growth, which were previously not included among the indicators used to monitor the epidemic. This led to multidisciplinary work, carried out jointly with the physicians of the EMS of AP-HP, to construct medically relevant indicators. We then set up a flash research and development action, “PrediDRM”, involving, in addition to members of the Tropical team, engineers and researchers from other INRIA teams and centers: Laurent Massoulié, David Parsons, Théotime Grohens, and Thomas Lepoutre. The goal of PrediDRM was to produce these indicators and to analyse them. Most of this work is presented in the article 18, which encompasses both medical and epidemiological aspects. It includes an analysis of the evolution of the epidemic in the Paris area based on EMS data, and mathematical modelling results, showing that the logarithm of epidemic observables can be approximated by a piecewise linear curve, and that its nondifferentiability points allow one to estimate the delay between sanitary measures and their effects on the load of EMS and other hospital departments. The team also benefited from information on the numbers and types of emergency calls during the crisis, provided by SAMU69 and SAMU77 (calls to the number 15) and by the direction of the PFAU programme at the Préfecture de police (calls to the numbers 17-18-112). Supplementary analyses were produced using these data. We also benefited from information provided by Enedis, Orange Flux Vision, and SFR during the spring crisis, allowing us to evaluate the influence of mobility on epidemic growth; see 18 for more information.
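
The piecewise linear modelling idea can be sketched as follows (synthetic data and a brute-force breakpoint search; this is not the PrediDRM algorithm): fit the log of an observable by two line segments and pick the breakpoint minimizing the least-squares error:

```python
# Hedged sketch: fit log(observable) by a two-piece linear curve,
# choosing the breakpoint (candidate change point, e.g. the effect of a
# sanitary measure) by total least-squares error. Data are synthetic.
import numpy as np

def two_piece_fit(t, y):
    best = None
    for b in range(2, len(t) - 2):                 # candidate breakpoints
        err = 0.0
        for s in (slice(0, b + 1), slice(b, len(t))):
            A = np.vstack([t[s], np.ones(len(t[s]))]).T
            coef, *_ = np.linalg.lstsq(A, y[s], rcond=None)
            r = y[s] - A @ coef
            err += float(r @ r)
        if best is None or err < best[1]:
            best = (b, err)
    return best[0]

t = np.arange(20.0)
y = np.where(t <= 10, 0.3 * t, 3.0 - 0.1 * (t - 10))  # growth, then decay
print(t[two_piece_fit(t, y)])  # 10.0, the simulated change point
```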

A broader discussion of epidemiological indicators, including EMS indicators, is presented in the article 15, authored under a collective name by a group gathering physicians and researchers from AP-HP, INSERM and INRIA.

Ayoub Foussoul, as part of his master's internship at École polytechnique, analysed the conditioning of the piecewise linear approximation problem used in the monitoring algorithm. He received a “grand prix d'option” of École polytechnique for this work.

We were also requested by AP-HP to refine the EMS indicators by producing a cartography of the epidemic at a local scale in the Paris area, based on EMS calls, in order to help AP-HP identify clusters. This extension of the PrediDRM project, called “ClusterCarmen”, involved, in addition to Xavier Allamigeon, Stéphane Gaubert, and Laurent Massoulié, other researchers and engineers of INRIA: Cédric Adjih, Guillermo Barroso-Andrade, and Mathieu Simonin, with the help of Thomas Calmant. Within AP-HP, the ClusterCarmen project was coordinated by Dr. Philippe Le Toumelin and Dr. Paul-Georges Reuter, with the support of the four SAMU of AP-HP and of the AP-HP DSI. The cartography software was deployed on May 11th (the day the initial lockdown was lifted), the cartography being produced automatically every day and delivered to AP-HP and to other experts in charge of monitoring the epidemic. The cartography software produced by the INRIA team was transferred to AP-HP in the fall, to allow further developments and extensions.

This section presents results from the PhD work of Paulin Jacquot, in collaboration with Nadia Oudjane, Olivier Beaude and Cheng Wan (EDF Lab), that were published in 2020.

This work of Paulin Jacquot concerns the application of game theory and distributed optimization techniques to the operation of decentralized electric systems, and in particular to the management of distributed electric consumption flexibilities. We start by adopting the point of view of a centralized operator in charge of the management of flexibilities for several agents. We provide a distributed and privacy-preserving algorithm to compute consumption profiles for agents that are optimal for the operator. In the proposed method, the individual constraints as well as the individual consumption profile of each agent are never revealed to the operator or to the other agents 21. A patent related to this method has been published 82.

A collaboration with Cheng Wan (EDF Lab) led to an additional part of this PhD thesis. We consider an operator dealing with a very large number of players, for which computing the equilibria of a congestion game is difficult. To address this issue, we give approximation results on the equilibria in congestion and aggregative games with a very large number of players, in the presence of coupling constraints. These results, obtained in the framework of variational inequalities and under some monotonicity conditions, can be used to compute an approximate equilibrium, solution of a small-dimension problem 22. In line with the idea of modeling large populations, we consider nonatomic congestion games with coupling constraints, with an infinity of heterogeneous players: these games arise when the characteristics of a population are described by a parametric density function. Under monotonicity hypotheses, we prove that Wardrop equilibria of such games, given as solutions of an infinite-dimensional variational inequality, can be approximated by symmetric Wardrop equilibria of auxiliary games, solutions of low-dimension variational inequalities. Again, these results can serve as the basis of tractable methods to compute an approximate Wardrop equilibrium in a nonatomic infinite-type congestion game 83. Lastly, in a collaboration with Hélène Le Cadre, Cheng Wan and Clémence Alasseur, we consider a game model for the study of decentralized peer-to-peer energy exchanges between a community of consumers with renewable production sources. We study the generalized equilibria of this game, which characterize the possible energy trades and the associated individual consumptions. We compare these equilibria with the centralized solution minimizing the social cost, and evaluate the efficiency of equilibria through the price of anarchy 24.
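
As a toy illustration of the price-of-anarchy comparison (a classical Pigou-type example, not taken from 24): with two parallel routes of latencies l1(x) = x and l2(x) = 1 and unit demand, all traffic uses route 1 at the Wardrop equilibrium, while the social optimum splits the flow:

```python
# Illustrative Pigou-style example: Wardrop equilibrium vs social optimum
# for two parallel routes with latencies l1(x) = x and l2(x) = 1.
import numpy as np

def social_cost(x1):
    # Total travel time when a fraction x1 of the unit demand uses route 1.
    return x1 * x1 + (1 - x1) * 1.0

eq_cost = social_cost(1.0)              # equilibrium: all flow on route 1
opt_cost = min(social_cost(x) for x in np.linspace(0, 1, 10001))
print(eq_cost / opt_cost)               # price of anarchy = 4/3
```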

This work is part of the PhD work of Maxime Grangereau, co-supervised by Emmanuel Gobet, in collaboration with Wim van Ackooij (EDF).

We studied a multistage and stochastic extension of the Optimal Power Flow problem (OPF). We developed semidefinite relaxations, extending the ones which arise in static and deterministic OPF problems. We provided a priori conditions which guarantee the absence of a relaxation gap, as well as a posteriori methods allowing one to bound this gap. We applied this approach to examples of grids, with scenario trees representing the random solar power production 71.

The aim of the internship of Luz Pascal, co-supervised by Iadine Chades (CSIRO, Australia), was to develop an algorithm for Adaptive Management (AM).

AM is the principal tool for conserving endangered species under global change. AM problems can be solved using simplified Mixed Observable Markov Decision Processes called hidden model MDPs (hmMDPs) when the unknown dynamics are assumed stationary 59. hmMDPs provide optimal policies to AM problems by augmenting the MDP state space with an unobservable state variable representing a finite set of predefined models (transition probabilities). A drawback in formalising an AM problem is that experts are often solicited to provide this predefined set of models, and that the true transition probabilities are assumed to be included in the candidate model set.

A first work of the internship was to propose an original approach to build a hmMDP with a universal set of predefined models. This has been done in the case of a 2-state n-action AM problem, and has been assessed on two species conservation case studies from Australia and on randomly generated problems. This work will be published in the proceedings of the AAAI conference (held online on Feb. 2–9, 2021).

A second work was to study the convergence of the SARSOP algorithm, generally used for solving stationary POMDPs, by comparing this algorithm with the combination of SDDP and tropical algorithms introduced in 32.

See Section 11.1.1.