The last decade has witnessed a remarkable convergence between several sub-domains of the calculus of variations, namely optimal transport (and its many generalizations), the infinite dimensional geometry of diffeomorphism groups and inverse problems in imaging (in particular sparsity-based regularization). This convergence is due to (i) the mathematical objects manipulated in these problems, namely sparse measures (e.g. couplings in transport, edge locations in imaging, displacement fields for diffeomorphisms) and (ii) the use of similar numerical tools from non-smooth optimization and geometric discretization schemes. Optimal Transportation, diffeomorphisms and sparsity-based methods are powerful modeling tools that impact a rapidly expanding list of scientific applications and call for efficient numerical strategies. Our research program shows the important part played by the team members in the development of these numerical methods and their application to challenging problems.
Optimal Mass Transportation is a mathematical research topic which started two centuries ago with Monge's work on the “Théorie des déblais et des remblais”. This engineering problem consists in minimizing the transport cost between two given mass densities. In the 1940s, Kantorovich introduced a powerful linear relaxation together with its dual formulation. The Monge-Kantorovich problem became a specialized research topic in optimization, and Kantorovich obtained the 1975 Nobel prize in economics for his contributions to resource allocation problems. Since the seminal discoveries of Brenier in the 1990s, Optimal Transportation has received renewed attention from mathematical analysts, and the Fields Medal awarded in 2010 to C. Villani, who made important contributions to Optimal Transportation and wrote the modern reference monographs, arrived at a culminating moment for this theory. Optimal Mass Transportation is today a mature area of mathematical analysis with a constantly growing range of applications. Optimal Transportation has also received a lot of attention from probabilists (see for instance the recent survey of the Schrödinger problem, a stochastic variant of the Benamou-Brenier dynamical formulation of optimal transport). The development of numerical methods for Optimal Transportation and related problems is a difficult topic and comparatively underdeveloped. This research field has experienced a surge of activity in the last five years, with important contributions by the Mokaplan group (see the list of important publications of the team). We describe below a few recent and less recent Optimal Transportation concepts and methods which are connected to the future activities of Mokaplan:
Brenier's theorem characterizes the unique optimal map as the gradient of a convex potential. As such Optimal Transportation may be interpreted as an infinite dimensional optimisation problem under “convexity constraint": i.e. the solution of this infinite dimensional optimisation problem is a convex potential. This connects Optimal Transportation to “convexity constrained" non-linear variational problems such as, for instance, Newton's problem of the body of minimal resistance. The value function of the optimal transport problem is also known to define a distance between source and target densities called the Wasserstein distance which plays a key role in many applications such as image processing.
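As a toy illustration of the Wasserstein distance (not tied to any team code), in one dimension and for two empirical measures with the same number of equally weighted points, the optimal coupling is the monotone (sorted-to-sorted) matching, so the distance has a closed form. A minimal sketch, with an illustrative function name:

```python
def wasserstein_1d(a, b, p=1):
    """W_p distance between two 1-D empirical measures with equal numbers
    of equally weighted points: the optimal plan matches sorted samples."""
    assert len(a) == len(b)
    a, b = sorted(a), sorted(b)
    return (sum(abs(x - y) ** p for x, y in zip(a, b)) / len(a)) ** (1.0 / p)

# moving three unit-spaced points one step to the right costs exactly 1
d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])
```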
A formal substitution of the optimal transport map as the gradient of a convex potential in the mass conservation constraint (a Jacobian equation) gives a non-linear Monge-Ampère equation. Caffarelli used this result to extend the regularity theory for the Monge-Ampère equation. In the last ten years, it has also motivated new research on numerical solvers for non-linear degenerate elliptic equations. Geometric approaches based on Laguerre diagrams and discrete data have also been developed. Monge-Ampère based Optimal Transportation solvers have recently given the first linear-cost computations of (smooth) Optimal Transportation maps.
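In symbols, writing the source and target densities as $\mu$ and $\nu$ and substituting $T = \nabla\varphi$ into the Jacobian equation $\nu(T(x))\,\det DT(x) = \mu(x)$, the mass conservation constraint becomes the Monge-Ampère equation:

```latex
\det D^2\varphi(x)\,\nu(\nabla\varphi(x)) = \mu(x), \qquad \varphi \ \text{convex}.
```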
In recent years, the classical Optimal Transportation problem has been extended in several directions. First, different ground costs measuring the “physical” displacement have been considered. In particular, well-posedness for a large class of convex and concave costs has been established by Gangbo and McCann. Optimal Transportation techniques have been applied, for example, to a Coulomb ground cost in quantum chemistry in relation with Density Functional Theory: given the electron densities, Optimal Transportation models the potential energy of their relative positions. For more than 2 electrons (and therefore more than 2 densities) the natural extension of Optimal Transportation is the so-called Multi-marginal Optimal Transport. Another instance of multi-marginal Optimal Transportation arises in the so-called Wasserstein barycenter problem between an arbitrary number of densities. An interesting overview of this emerging field of optimal transport and its applications can be found in the recent survey of Ghoussoub and Pass.
Optimal transport has found many applications, starting from its relation with several physical models such as the semi-geostrophic equations in meteorology, mesh adaptation, the reconstruction of the early mass distribution of the Universe in astrophysics, and the numerical optimisation of reflectors following the Optimal Transportation interpretation of Oliker and Wang. Extensions of OT such as multi-marginal transport have potential applications in Density Functional Theory (DFT), in generalized solutions of the Euler equations, and in statistics and finance. Recently, there has been a surge of interest in applications of OT methods in imaging sciences, statistics and machine learning. This is largely due to the emergence of fast numerical schemes to approximate the transportation distance and its generalizations. Figure shows an example of application of OT to color transfer. Figure shows an example of application in computer graphics to interpolate between input shapes.
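The fast schemes alluded to above are typically based on entropic regularization and Sinkhorn's matrix-scaling iterations. A self-contained sketch on toy discrete measures (parameter values chosen for illustration only, not a production implementation):

```python
import math

def sinkhorn(mu, nu, C, eps=0.1, iters=500):
    """Entropic OT: alternately rescale the rows and columns of
    K = exp(-C/eps) so the coupling P = diag(u) K diag(v) has
    marginals mu and nu."""
    n, m = len(mu), len(nu)
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# two points each, mass split evenly; the cost favors staying in place,
# so the regularized coupling concentrates near the diagonal
P = sinkhorn([0.5, 0.5], [0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]])
```

Each Sinkhorn iteration only involves matrix-vector products, which is what makes the entropic approach scale to large problems.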
While the optimal transport problem, in its original formulation, is a static problem (no time evolution is considered), it makes sense in many applications to consider time evolution instead. This is relevant for instance in applications to fluid dynamics, or in medical imaging to perform registration of organs and model tumor growth.
In this perspective, optimal transport in Euclidean space corresponds to an evolution where each particle of mass evolves along a straight line. This interpretation corresponds to the Computational Fluid Dynamics (CFD) formulation proposed by Benamou and Brenier. These solutions are time curves in the space of densities and geodesics for the Wasserstein distance. The CFD formulation relaxes the non-linear mass conservation constraint into a time-dependent continuity equation; the cost function remains convex but is highly non-smooth. A remarkable feature of this dynamical formulation is that it can be re-cast as a convex but non-smooth optimization problem. This convex dynamical formulation finds many non-trivial extensions and applications. The CFD formulation also appears as a limit case of Mean Field Games (MFGs), a large class of economic models introduced by Lasry and Lions leading to a system coupling a Hamilton-Jacobi equation with a Fokker-Planck equation. In contrast, the Monge case, where the ground cost is the Euclidean distance, leads to a static system of PDEs.
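In one dimension this “straight line” picture can be made concrete: the optimal map between two empirical measures (equal numbers of equally weighted points) is the monotone rearrangement, and McCann's displacement interpolation moves each sorted particle at constant speed. A toy sketch, with an illustrative function name:

```python
def displacement_interpolation(a, b, t):
    """Geodesic in Wasserstein space between two 1-D empirical measures:
    sort both point clouds (monotone rearrangement = optimal map) and
    move each particle along the straight line to its target."""
    a, b = sorted(a), sorted(b)
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

# halfway between {0, 1} and {2, 3}: each particle has covered half its path
mid = displacement_interpolation([1.0, 0.0], [3.0, 2.0], 0.5)
```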
Another extension is, instead of considering geodesics for the transportation metric (i.e. minimizing the Wasserstein distance to a target measure), to make the density evolve so as to minimize some functional. Computing the steepest descent direction with respect to the Wasserstein distance defines a so-called Wasserstein gradient flow, also known as a JKO gradient flow after its authors Jordan, Kinderlehrer and Otto. This is a popular tool to study a large class of non-linear diffusion equations. Two interesting examples are the Keller-Segel system for chemotaxis, and a model of congested crowd motion proposed by Maury, Santambrogio and Roudneff-Chupin. From the numerical point of view, these schemes are understood to be the natural analogue of implicit schemes for linear parabolic equations. The resolution is however costly, as it involves taking the derivative, in the Wasserstein sense, of the relevant energy, which in turn requires the resolution of a large-scale convex but non-smooth minimization problem.
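For reference, one step of the JKO scheme with time step $\tau$ and energy $E$ reads:

```latex
\rho_{k+1} \in \operatorname*{argmin}_{\rho} \; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) + E(\rho).
```

For the entropy-plus-potential energy $E(\rho) = \int \rho \log \rho + \int V \rho$, this implicit scheme discretizes the Fokker-Planck equation, which is the original JKO example.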
To tackle more complicated warping problems, such as those encountered in medical image analysis, one unfortunately has to drop the convexity of the functional involved in defining the gradient flow. This gradient flow can either be understood as defining a geodesic on the (infinite dimensional) group of diffeomorphisms, or on an (infinite dimensional) space of curves or surfaces. The de-facto standard to define, analyze and compute these geodesics is the “Large Deformation Diffeomorphic Metric Mapping” (LDDMM) framework of Trouvé, Younes, Holm and co-authors. While in the CFD formulation of optimal transport the metric on infinitesimal deformations is simply the L² norm on the velocity field, in LDDMM a stronger (Sobolev-type) norm is used, which guarantees that the resulting flow is a diffeomorphism.
Beside image warping and registration in medical image analysis, a key problem in nearly all imaging applications is the reconstruction of high quality data from low resolution observations. This field, commonly referred to as “inverse problems”, is very often concerned with the precise location of features such as point sources (modeled as Dirac masses) or sharp contours of objects (modeled as gradients being Dirac masses along curves). The underlying intuition behind these ideas is the so-called sparsity model (either of the data itself, its gradient, or other more complicated representations such as wavelets, curvelets, bandlets and learned representations).
The huge interest in these ideas started mostly from the introduction of convex methods to serve as proxies for these sparse regularizations. The most well known is the ℓ¹ norm, which underlies the Lasso in statistics and Basis Pursuit in signal processing.
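For concreteness, the resulting ℓ¹-regularized least-squares (Lasso) problem, min over x of ½‖Ax − y‖² + λ‖x‖₁, can be solved by the classical ISTA proximal iteration: a gradient step on the smooth term followed by soft-thresholding. A minimal pure-Python sketch (toy sizes; the step size is assumed to satisfy step ≤ 1/‖A‖²):

```python
def ista(A, y, lam, step, iters=200):
    """ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1:
    gradient step, then soft-thresholding (the prox of lam*||.||_1)."""
    n = len(A[0])
    x = [0.0] * n
    soft = lambda t, s: max(abs(t) - s, 0.0) * (1.0 if t > 0 else -1.0)
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(A))]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# with A = identity, the solution is soft-thresholding of y:
# the large entry is shrunk, the small one is set exactly to zero
x = ista([[1.0, 0.0], [0.0, 1.0]], [2.0, 0.1], lam=0.5, step=1.0)
```

The exact zero in the second coordinate illustrates why the non-smooth ℓ¹ term, unlike a squared penalty, actually enforces sparsity.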
However, the theoretical analysis of sparse reconstructions involving real-life acquisition operators (such as those found in seismic imaging, neuro-imaging, astrophysical imaging, etc.) is still mostly an open problem. A recent research direction, triggered by a paper of Candès and Fernandez-Granda, is to study directly the infinite dimensional problem of reconstruction of sparse measures (i.e. sums of Dirac masses) using the total variation of measures (not to be mistaken for the total variation of 2-D functions). Several works have used this framework to provide theoretical performance guarantees, basically by studying how the distance between neighboring spikes impacts noise stability.
In image processing, one of the most popular methods is total variation regularization. It favors low-complexity images that are piecewise constant; see Figure for examples on image processing problems. Beside applications in image processing, sparsity-related ideas have also had a deep impact in statistics and machine learning. As a typical example, for applications to recommendation systems, it makes sense to consider sparsity of the singular values of matrices, which can be relaxed using the so-called nuclear norm (a.k.a. trace norm). The underlying methodology is to make use of low-complexity regularization models, which turns out to be equivalent to the use of partly-smooth regularization functionals, enforcing the solution to belong to a low-dimensional manifold.
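The discrete total variation that this regularization penalizes is simply the sum of absolute differences between neighboring values; it is small for piecewise constant signals and large for oscillating ones, which is why minimizing it promotes piecewise constant reconstructions. A one-line illustration in 1-D:

```python
def total_variation(x):
    """Discrete 1-D total variation: sum of the jumps between neighbors."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

# a piecewise constant signal has far smaller TV than an
# oscillating one spanning the same range of values
tv_flat = total_variation([0.0, 0.0, 1.0, 1.0, 1.0])
tv_osc = total_variation([0.0, 1.0, 0.0, 1.0, 0.0])
```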
The dynamical formulation of optimal transport creates a link between optimal transport and geodesics on diffeomorphism groups. This formal link has at least two strong implications that Mokaplan will elaborate on: (i) the development of novel models that bridge the gap between these two fields; (ii) the introduction of novel fast numerical solvers based on ideas from both non-smooth optimization techniques and Bregman metrics, as highlighted in Section .
In a similar line of ideas, we believe a unified approach is needed to tackle both sparse regularization in imaging and various generalized OT problems. Both require solving related non-smooth and large scale optimization problems. Ideas from proximal optimization have proved crucial to address problems in both fields. Transportation metrics are also the correct way to compare and regularize variational problems that arise in image processing (see for instance Radon inversion methods) and machine learning. This unity in terms of numerical methods is once again at the core of Section .
The first layer of methodological tools developed by our team is a set of theoretical continuous models that aim at formalizing the problems studied in the applications. These theoretical findings will also pave the way to efficient numerical solvers that are detailed in Section .
(Participants: G. Carlier, J-D. Benamou, V. Duval, Xavier Dupuis (LUISS Guido Carli University, Roma)) The principal agent problem plays a distinguished role in the literature on asymmetric information and contract theory (with important contributions from several Nobel laureates such as Mirrlees, Myerson or Spence) and it has many important applications in optimal taxation, insurance and nonlinear pricing. The typical problem consists in finding a cost minimizing strategy for a monopolist facing a population of agents who have an unobservable characteristic; the principal therefore has to take into account the so-called incentive compatibility constraint, which is very similar to the cyclical monotonicity condition which characterizes optimal transport plans. In a special case, Rochet and Choné reformulated the problem as a variational problem subject to a convexity constraint. For more general models, and using ideas from Optimal Transportation, Carlier considered the more general case of a c-convexity constraint.
Our expertise: We have already contributed to the numerical resolution of the Principal Agent problem in the case of the convexity constraint.
Goals: So far, the mathematical PA model can be numerically solved only for simple utility functions. A Bregman approach is currently being developed for more general functions; it would be extremely useful as a complement to the theoretical analysis. A new semi-discrete geometric approach is also under investigation, where the method reduces to non-convex polynomial optimization.
(Participants: G. Carlier, J-D. Benamou, G. Peyré) A challenging branch of emerging generalizations of Optimal Transportation arising in economics, statistics and finance concerns Optimal Transportation with conditional constraints. Martingale optimal transport, which appears naturally in mathematical finance, aims at computing robust bounds on option prices as the value of an optimal transport problem where not only the marginals are fixed but the coupling must be the law of a martingale, since it represents the prices of the underlying asset under the risk-neutral probability at the different dates. Note that as soon as more than two dates are involved, we are facing a multi-marginal problem.
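Concretely, for two dates the coupling $\pi$ between marginals $\mu$ and $\nu$ must satisfy, in addition to the marginal constraints, the martingale (barycenter) condition

```latex
\int y \, d\pi_x(y) = x \qquad \text{for } \mu\text{-a.e. } x,
```

where $(\pi_x)$ denotes the disintegration of $\pi$ with respect to its first marginal; this is the extra conditional constraint referred to above.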
Our expertise: Our team has a deep expertise on the topic of OT and its generalizations, including many existing collaborations between its members, which have led to several representative recent collaborative publications.
Goals: This is a non-trivial extension of Optimal Transportation theory, and Mokaplan will develop numerical methods (in the spirit of entropic regularization) to address it. A popular problem in statistics is the so-called quantile regression problem; recently, Carlier, Chernozhukov and Galichon used an Optimal Transportation approach to extend quantile regression to several dimensions. In this approach, again, not only fixed marginal constraints are present but also constraints on conditional means. As in the martingale Optimal Transportation problem, one has to deal with an extra conditional constraint. The usual duality approach breaks down under such constraints, and the characterization of optimal couplings is a challenging task both from a theoretical and a numerical viewpoint.
(Participants: G. Carlier, J-D. Benamou, M. Laborde, Q. Mérigot, V. Duval) The connection between the static and dynamic transportation problems (see Section ) opens the door to many extensions, most notably by leveraging the use of gradient flows in metric spaces. The gradient flow with respect to the transportation distance was introduced by Jordan, Kinderlehrer and Otto (JKO) and provides a variational formulation of many linear and non-linear diffusion equations. The prototypical example is the Fokker-Planck equation. We will explore this formalism to study new variational problems over probability spaces, and also to derive innovative numerical solvers. The JKO scheme has been very successfully used to study evolution equations that have the structure of a gradient flow in the Wasserstein space. Indeed many important PDEs have this structure: the Fokker-Planck equation (as first considered by JKO), the porous medium equations and the granular media equation, to give a few examples. It also finds applications in image processing. Figure shows examples of gradient flows.
Our expertise: There is an ongoing collaboration between the team members on the theoretical and numerical analysis of gradient flows.
Goals: We apply and extend our research on JKO numerical methods to treat various extensions:
Wasserstein gradient flows with a non-displacement-convex energy (as in the parabolic-elliptic Keller-Segel chemotaxis model);
systems of evolution equations which can be written as gradient flows of some energy on a product space (possibly mixing the Wasserstein and L² structures);
perturbations of gradient flows: multi-species or kinetic models are not gradient flows, but may be viewed as perturbations of Wasserstein gradient flows; we shall therefore investigate the convergence of splitting methods for such equations or systems.
(Participants: G. Carlier, J-D. Benamou, G. Peyré) Congested transport theory in the discrete framework of networks has received a lot of attention since the 1950s, starting with the seminal work of Wardrop. A few years later, Beckmann proved that equilibria are characterized as solutions of a convex minimization problem. However, this minimization problem involves one flow variable per path on the network, so its dimension quickly becomes too large in practice. An alternative is to consider continuous-in-space models of congested optimal transport, which lead to very degenerate PDEs.
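Beckmann's continuous formulation seeks a traffic flow field $w$ transporting $\mu$ to $\nu$ at minimal cost:

```latex
\min_{w} \int \Phi(w(x)) \, dx \quad \text{subject to} \quad \nabla \cdot w = \mu - \nu,
```

with $\Phi$ convex; $\Phi(w) = |w|$ recovers the uncongested Monge problem, while superlinear $\Phi$ models congestion and leads, via duality, to the degenerate elliptic PDEs mentioned above.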
Our expertise: MOKAPLAN members have contributed a lot to the analysis of congested transport problems and to optimization problems with respect to a metric which can be attacked numerically by fast marching methods .
Goals: The case of general networks/anisotropies is still not well understood.
(Participants: F-X. Vialard, J-D. Benamou, G. Peyré, L. Chizat) A major issue with the standard dynamical formulation of OT is that it does not allow for variation of mass during the evolution, which is required when tackling medical imaging applications such as tumor growth modeling or the tracking of elastic organ movements. Previous attempts to introduce a source term in the evolution typically lead to mass teleportation (propagation of mass with infinite speed), which is not always satisfactory.
Our expertise: Our team has already established key contributions both to connect OT to fluid dynamics and to define geodesic metrics on the space of shapes and diffeomorphisms.
Goals: Lenaic Chizat's PhD thesis aims at bridging the gap between the dynamical OT formulation and LDDMM diffeomorphism models (see Section ). This will lead to biologically-plausible evolution models that are both more tractable numerically than LDDMM competitors and benefit from strong theoretical guarantees associated with properties of OT.
(Participants: G. Carlier, J-D. Benamou) The Optimal Transportation Computational Fluid Dynamics (CFD) formulation is a limit case of variational Mean-Field Games (MFGs), a new branch of game theory recently developed by J-M. Lasry and P-L. Lions with an extremely wide range of potential applications. Non-smooth proximal optimization methods used successfully for Optimal Transportation can be used in the case of deterministic MFGs with singular data and/or potentials. They provide a robust treatment of the positivity constraint on the density of players.
Our expertise: J.-D. Benamou has pioneered with Brenier the CFD approach to Optimal Transportation. Regarding MFGs, on the numerical side, our team has already worked on the use of augmented Lagrangian methods in MFGs and on the analytical side has explored rigorously the optimality system for a singular CFD problem similar to the MFG system.
Goals: We will work on the extension to stochastic MFGs. It leads to non-trivial numerical difficulties that have already been pointed out in the literature.
(Participants: G. Carlier, J-D. Benamou, Q. Mérigot, F. Santambrogio (U. Paris-Sud), Y. Achdou (Univ. Paris 7), R. Andreev (Univ. Paris 7)) Many models from PDEs and fluid mechanics have been used to describe people or vehicles moving in a congested environment. These models can be classified according to the dimension (1-D models are mostly used for cars on traffic networks, while 2-D models are most suitable for pedestrians), to the congestion effects (“soft” congestion standing for the phenomenon where high densities slow down the movement, “hard” congestion for the sudden effects when contacts occur or a certain threshold is attained), and to the possible rationality of the agents. Maury et al. recently developed a theory for 2-D hard congestion models without rationality, first in a discrete and then in a continuous framework. This model produces a PDE that is difficult to attack with usual PDE methods, but it has been successfully studied via Optimal Transportation techniques, again related to the JKO gradient flow paradigm. Another possibility to model crowd motion is to use the mean field game approach of Lasry and Lions, which describes the limits of Nash equilibria when the number of players is large. This also gives macroscopic models where congestion may appear, but this time a global equilibrium strategy is modelled rather than local optimisation by players as in the JKO approach. Numerical methods are starting to become available.
Our expertise: We have developed numerical methods to tackle both the JKO approach and the MFG approach. The Augmented Lagrangian (proximal) numerical method can actually be applied to both models, JKO and deterministic MFGs.
Goals: We want to extend our numerical approach to more realistic congestion models where the speed of agents depends on the density; see Figure for preliminary results. Comparisons with different numerical approaches will also be performed within the ANR ISOTACE project. The extension of the Augmented Lagrangian approach to stochastic MFGs will be studied.
(Participants: F-X. Vialard, G. Peyré, B. Schmitzer, L. Chizat) Diffeomorphic image registration is widely used in medical image analysis. This class of problems can be seen as the computation of a generalized optimal transport, where the optimal path is a geodesic on a group of diffeomorphisms. The major difference between the two approaches is that optimal transport leads, in general, to non-smooth optimal maps, whereas smoothness of the map is mandatory in diffeomorphic image matching. In contrast, optimal transport enjoys a convex variational formulation, whereas in LDDMM the minimization problem is non-convex.
Our expertise: F-X. Vialard is an expert in diffeomorphic image matching (LDDMM). Our team has already studied flows and geodesics over non-Riemannian shape spaces, which allow for piecewise smooth deformations.
Goals: Our aim consists in bridging the gap between standard optimal transport and diffeomorphic methods by building new diffeomorphic matching variational formulations that are convex (geometric obstructions might however appear). A related perspective is the development of new registration/transport models in a Lagrangian framework, in the spirit of , to obtain more meaningful statistics on longitudinal studies.
Diffeomorphic matching consists in the minimization of a functional that is the sum of a deformation cost and a similarity measure. The choice of the similarity measure is as important as that of the deformation cost. It is often chosen as a norm on a Hilbert space such as functions, currents or varifolds. From a Bayesian perspective, these similarity measures are related to the noise model on the observed data; this noise is of a geometric nature, which is not taken into account when using Hilbert norms. Optimal transport fidelities have been used in the context of signal and image denoising, and it is an important question to extend these approaches to registration problems. Therefore, we propose to develop similarity measures that are geometric and computationally very efficient using entropic regularization of optimal transport.
Our approach is to use a regularized optimal transport to design new similarity measures on all of those Hilbert spaces. Understanding the precise connections between the evolution of shapes and probability distributions will be investigated to cross-fertilize both fields by developing novel transportation metrics and diffeomorphic shape flows.
The corresponding numerical schemes are however computationally very costly. Leveraging our understanding of the dynamic optimal transport problem and its numerical resolution, we propose to develop new algorithms. These algorithms will use the smoothness of the Riemannian metric to improve both accuracy and speed, using for instance higher order minimization algorithms on (infinite dimensional) manifolds.
(Participants: F-X. Vialard, G. Peyré, B. Schmitzer, L. Chizat) The LDDMM framework has been advocated to enable statistics on the space of shapes or images that benefit from the estimation of the deformation. Its statistical results strongly depend on the choice of the Riemannian metric. A possible direction consists in learning the right-invariant Riemannian metric, as done in previous work where a correlation matrix (Figure ) is learnt which represents the covariance matrix of the deformation fields for a given population of shapes. In the same direction, a question of emerging interest in medical imaging is the analysis of time sequences of shapes (called longitudinal analysis) for early diagnosis of disease. A key question is the inter-subject comparison of the organ evolution, which is usually done by transporting the time evolution into a common coordinate system via parallel transport or other more basic methods. Once again, the statistical results (Figure ) strongly depend on the choice of the metric, or more generally on the connection that defines parallel transport.
Our expertise: Our team has already studied statistics on longitudinal evolutions.
Goals: Developing higher order numerical schemes for parallel transport (only low order schemes are available at the moment), and developing variational models to learn the metric or the connection in order to improve statistical results.
(Participants: G. Peyré, V. Duval, C. Poon, Q. Denoyelle) As detailed in Section , popular methods for regularizing inverse problems in imaging make use of variational analysis over infinite-dimensional (typically non-reflexive) Banach spaces, such as Radon measures or bounded variation functions.
Our expertise: We have recently shown how, in the finite dimensional case, the non-smoothness of the functionals at stake is crucial to enforce the emergence of geometrical structures (edges in images or fractures in physical materials) for discrete (finite dimensional) problems. We extended this result to a simple infinite dimensional setting, namely sparse regularization of Radon measures for deconvolution. A deep understanding of those continuous inverse problems is crucial to analyze the behavior of their discrete counterparts, and we have taken advantage of this understanding to develop a fine analysis of the artifacts induced by discrete (i.e. grid-based) deconvolution models. These works are also closely related to the problem of limit analysis and yield design in mechanical plasticity, an existing collaboration between Mokaplan's team members.
Goals: A current major front of research in the mathematical analysis of inverse problems is to extend these results to more complicated infinite dimensional signal and image models, such as, for instance, the set of piecewise regular functions. The key bottleneck is that, contrary to sparse measures (which are finite sums of Dirac masses), here the objects to recover (smooth edge curves) are not parameterized by a finite number of degrees of freedom. The relevant previous works in this direction are the fundamental results of Chambolle, Caselles and co-workers. They however only deal with the specific case where there is no degradation operator and no noise in the observations. We believe that adapting these approaches using our construction of vanishing-derivative pre-certificates could lead to a solution to these theoretical questions.
(Participants: G. Peyré, J-M. Mirebeau, D. Prandi) Modeling and processing natural images requires taking into account their geometry through anisotropic diffusion operators, in order to denoise and enhance directional features such as edges and textures. This requirement is also at the heart of recently proposed models of cortical processing. A mathematical model for this processing is diffusion on a sub-Riemannian manifold. These methods assume a fixed, usually linear, mapping from the 2-D image to a lifted function defined on the product of space and orientation (which in turn is equipped with a sub-Riemannian manifold structure).
Our expertise: J-M. Mirebeau is an expert in the discretization of highly anisotropic diffusions through the use of locally adaptive computational stencils. G. Peyré has made several contributions on the definition of geometric wavelet transforms and directional texture models. Dario Prandi has recently applied methods from sub-Riemannian geometry to image restoration.
Goals: A first aspect of this work is to study non-linear, data-adaptive liftings from the image to the space/orientation domain. This mapping will be implicitly defined as the solution of a convex variational problem. This will open both theoretical questions (existence of a solution and its geometrical properties, when the image to recover is piecewise regular) and numerical ones (how to provide a faithful discretization and fast second order Newton-like solvers). A second aspect of this task is to study the implications of these models for biological vision, in collaboration with the UNIC Laboratory (directed by Yves Fregnac), located in Gif-sur-Yvette. In particular, the study of the geometry of singular vectors (or “ground states”) of the non-linear sub-Riemannian diffusion operators is highly relevant from a biological modeling point of view.
(Participants: G. Peyré, V. Duval, C. Poon) Scanner data acquisition is mathematically modeled as a (sub-sampled) Radon transform. It is a difficult inverse problem because the Radon transform is ill-posed and the set of observations is often aggressively sub-sampled and noisy. Typical approaches try to recover piecewise smooth solutions in order to locate precisely the position of the organ being imaged. There is however a very poor understanding of the actual performance of these methods, and little is known on how to enhance the recovery.
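For reference, the Radon transform integrates the image $f$ along lines, parameterized by a direction $\theta \in S^1$ and an offset $s \in \mathbb{R}$:

```latex
R f(\theta, s) = \int_{\{x \,:\, \langle x, \theta \rangle = s\}} f(x) \, d\mathcal{H}^1(x).
```

Sub-sampled tomographic acquisition observes $Rf$ only for a limited set of angles $\theta$, possibly corrupted by noise, which is the source of the ill-posedness discussed above.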
Our expertise: We have obtained a good understanding of the performance of inverse problem regularization on compact domains for pointwise source localization.
Goals: We aim at extending the theoretical performance analysis obtained for sparse measures to the set of piecewise regular 2-D and 3-D functions. Some interesting previous work of C. Poon et al. (C. Poon is currently a postdoc in Mokaplan) has tackled related questions in the field of variable Fourier sampling for compressed sensing applications (a toy model for fMRI imaging). These approaches are, however, not directly applicable to Radon sampling, and require some non-trivial adaptations. We also aim at better exploring the connection of these methods with optimal-transport-based fidelity terms such as those introduced in .
(Participants: G. Peyré, F-X. Vialard, J-D. Benamou, L. Chizat) Some applications in medical image analysis require tracking shapes whose evolution is governed by a growth process. A typical example is tumor growth, where the evolution depends on some typically unknown but meaningful parameters that need to be estimated. There exist well-established mathematical models , of non-linear diffusions that take into account recently observed biological properties of tumors. Some related optimal transport models with mass variations have also recently been proposed , which are connected to so-called metamorphosis models in the LDDMM framework .
Our expertise: Our team has strong experience with both dynamical optimal transport models and diffeomorphic matching methods (see Section ).
Goals: The close connection between tumor growth models , and gradient flows for (possibly non-Euclidean) Wasserstein metrics (see Section ) makes the numerical methods we develop particularly appealing for large-scale forward tumor evolution simulation. A significant departure from the classical OT-based convex models is, however, required. The final problem we wish to solve is the backward (inverse) problem of estimating tumor parameters from noisy and partial observations. This also requires setting up a meaningful and robust data fidelity term, which can be, for instance, a generalized optimal transport metric.
The above continuous models require a careful discretization, so that the fundamental properties of the models are transferred to the discrete setting. Our team aims at developing innovative discretization schemes as well as associated fast numerical solvers that can deal with the geometric complexity of the variational problems studied in the applications. This will ensure that the discrete solution is correct and converges to the solution of the continuous model within a guaranteed precision. We give below examples for which a careful mathematical analysis of the continuous-to-discrete approximation is essential, and where dedicated non-smooth optimization solvers are required.
(Participants: J-D. Benamou, G. Carlier, J-M. Mirebeau, Q. Mérigot) Optimal transportation models as well as continuous models in economics can be formulated as infinite dimensional convex variational problems with the constraint that the solution belongs to the cone of convex functions. Discretizing this constraint is however a tricky problem, and usual finite element discretizations fail to converge.
Our expertise: Our team is currently investigating new discretizations, see in particular the recent proposal for the Monge-Ampère equation and for general non-linear variational problems. Both offer convergence guarantees and are amenable to fast numerical resolution techniques such as Newton solvers. Following the work explaining how to treat Transport Boundary Conditions for Monge-Ampère efficiently and in full generality, this is a promising, fast new approach to compute Optimal Transportation viscosity solutions. A monotone scheme is needed: one is based on the work of Froese and Oberman , and a different, more accurate approach has been proposed by Mirebeau, Benamou and Collino . As shown in , discretizing the constraint that a continuous function be convex is not trivial. Our group has largely contributed to solving this problem, with contributions by G. Carlier , Quentin Mérigot and J-M. Mirebeau . This problem is connected to the construction of monotone schemes for the Monge-Ampère equation.
Goals: The currently available methods are 2-D. They need to be optimized and parallelized, and a non-trivial extension to 3-D is necessary for many applications. The notion of
(Participants: J-D. Benamou, G. Carlier, J-M. Mirebeau, G. Peyré, Q. Mérigot) As detailed in Section , gradient flows for the Wasserstein metric (aka JKO gradient flows ) provide a variational formulation of many non-linear diffusion equations. They also open the way to novel discretization schemes. From a computational point of view, although the JKO scheme is constructive (it is based on the implicit Euler scheme), it has not been used much in practice because the Wasserstein term is difficult to handle (except in dimension one).
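For concreteness, one step of the JKO scheme reads (this is the standard formulation, with τ the time step and F the driving energy, not specific to our discretizations):

```latex
\rho_{k+1} \;\in\; \operatorname*{argmin}_{\rho}\; \frac{1}{2\tau}\, W_2^2(\rho,\rho_k) \;+\; F(\rho),
```

where $W_2$ denotes the Wasserstein distance; taking $F(\rho)=\int \rho\log\rho$ recovers the heat equation as $\tau\to 0$, which is why each step amounts to solving an optimal-transport-like problem.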
Our expertise:
Solving one step of a JKO gradient flow is similar to solving an optimal transport problem. An approach based on a geometric discretization of the Monge-Ampère operator has been proposed by Mérigot, Carlier, Oudet and Benamou in (see Figure ). The Gamma-convergence of the discretization (in space) has been proved.
Goals: We are also investigating the application of other numerical approaches to optimal transport to JKO gradient flows, either based on the CFD formulation or on the entropic regularization of the Monge-Kantorovich problem (see Section 3.2.3). An in-depth study and comparison of all these methods will be necessary.
(Participants: V. Duval, G. Peyré, G. Carlier, Jalal Fadili (ENSICaen), Jérôme Malick (CNRS, Univ. Grenoble)) While pervasive in the numerical analysis community, the problem of discretization and
Our expertise: We have provided the first results on discrete-to-continuous convergence for both sparse regularization variational problems , and the static formulation of OT and Wasserstein barycenters .
Goals: In a collaboration with Jérôme Malick (Inria Grenoble), our first goal is to generalize the results of to generic partly-smooth convex regularizers routinely used in imaging science and machine learning, a prototypical example being the nuclear norm (see for a review of this class of functionals). Our second goal is to extend the results of to the novel class of entropic discretization schemes we have proposed , to lay out the theoretical foundation of these ground-breaking numerical schemes.
(Participants: G. Peyré, V. Duval, C. Poon) There has been a recent surge of attention in the imaging community on so-called “grid-free” methods, where one tries to directly tackle the infinite-dimensional recovery problem over the space of measures, see for instance , . The general idea is that if the range of the imaging operator is finite-dimensional, the associated dual optimization problem is also finite-dimensional (for deconvolution, it corresponds to optimization over the set of trigonometric polynomials).
Our expertise: We have provided in a sharp analysis of the support recovery property of this class of methods for the case of sparse spikes deconvolution.
Goals: A key bottleneck of these approaches is that, while finite-dimensional, the dual problem requires handling a polynomial positivity constraint, which is notoriously difficult to manipulate (except in the very particular 1-D case, which is the one exposed in ). A possible, but very costly, methodology is to resort to Lasserre's SDP representation hierarchy . We will make use of these approaches and study how restricting the level of the hierarchy (to obtain fast algorithms) impacts the recovery performance (since this corresponds to computing only approximate solutions). We will pay particular attention to the recovery of 2-D piecewise constant functions (the so-called total variation of functions regularization ), see Figure for some illustrative applications of this method.
(Participants: G. Peyré, J-D. Benamou, G. Carlier, Jalal Fadili (ENSICaen)) Both sparse regularization problems in imaging (see Section ) and dynamical optimal transport (see Section ) are instances of large-scale, highly structured, non-smooth convex optimization problems. First-order proximal splitting algorithms have recently gained a lot of interest for these applications because they are the only ones capable of scaling to giga-pixel discretizations of images and volumes while handling non-smooth objective functions. They have been successfully applied to optimal transport , , congested optimal transport , and to sparse regularizations (see for instance and the references therein).
Our expertise: The pioneering work of our team has shown how these proximal solvers can be used to tackle the dynamical optimal transport problem , see also . We have also recently developed new proximal schemes that can cope with non-smooth composite objective functions .
Goals: We aim at extending these solvers to a wider class of variational problems, most notably optimization under divergence constraints . Another subject we are investigating is the extension of these solvers to both non-smooth and non-convex objective functionals, which are mandatory to handle more general transportation problems and novel imaging regularization penalties.
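To fix ideas, here is a minimal sketch of a first-order proximal splitting scheme, namely forward-backward splitting (ISTA), on the prototypical sparse regularization problem min_x ½‖Ax−b‖² + λ‖x‖₁. This is a generic textbook scheme, not the team's solvers; all names and parameters are illustrative.

```python
import numpy as np

def ista(A, b, lam, step, n_iter=200):
    """Forward-backward splitting for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # gradient of the smooth (quadratic) part
        z = x - step * grad                  # forward (explicit gradient) step
        # backward step: proximal operator of the l1 norm = soft-thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# toy run: with A = Id, the minimizer is the soft-thresholding of b
A = np.eye(5)
b = np.array([2.0, -1.5, 0.3, 0.0, 1.0])
x = ista(A, b, lam=0.5, step=1.0, n_iter=50)
# x == [1.5, -1.0, 0.0, 0.0, 0.5]
```

The step size must satisfy step < 2/L with L the Lipschitz constant of the smooth gradient (here the largest eigenvalue of AᵀA); accelerated (FISTA-type) and primal-dual variants follow the same forward/backward pattern.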
(Participants: G. Peyré, G. Carlier, L. Nenna, J-D. Benamou, Marco Cuturi (Kyoto Univ.)) The entropic regularization of the Kantorovich linear program for OT has been shown to be surprisingly simple and efficient, in particular for applications in machine learning . As shown in , this is a special instance of the general method of Bregman iterations, which is also a particular instance of first-order proximal schemes with respect to the Kullback-Leibler divergence.
Our expertise: We have recently shown how Bregman projections and Dykstra's algorithm offer a generic optimization framework to solve a variety of generalized OT problems. Carlier and Dupuis have designed a new method based on alternating Dykstra projections and applied it to the principal-agent problem in microeconomics.
We have applied this method in computer graphics in a paper accepted in SIGGRAPH 2015 . Figure shows the potential of our approach to handle giga-voxel datasets: the input volumetric densities are discretized on a
Goals: Following some recent works (see in particular ), we first aim at studying primal-dual optimization schemes with respect to Bregman divergences (going much beyond gradient descent and iterative projections), in order to offer a versatile and very effective framework to solve variational problems involving OT terms. We then also aim at extending the scope of this method to applications in quantum mechanics (Density Functional Theory, see ) and fluid dynamics (Brenier's weak solutions of the incompressible Euler equation, see ). The computational challenge is that realistic physical examples are of huge size, not only because of the space discretization of one marginal but also because of the large number of marginals involved (for incompressible Euler, the number of marginals equals the number of time steps).
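The entropic regularization mentioned above reduces, in its basic form, to simple matrix-scaling (Sinkhorn) iterations. The following sketch is a generic textbook version, not the team's code; grid, cost and the regularization strength eps are illustrative.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=1000):
    """Sinkhorn matrix-scaling iterations for entropy-regularized OT
    between histograms mu, nu with ground cost matrix C."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)                  # enforce the row marginal
        v = nu / (K.T @ u)                # enforce the column marginal
    return u[:, None] * K * v[None, :]    # regularized transport plan

# toy example: two Gaussian-like histograms on a 1-D grid
n = 60
x = np.linspace(0.0, 1.0, n)
mu = np.exp(-((x - 0.2) ** 2) / 0.01); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 0.01); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2        # quadratic ground cost
P = sinkhorn(mu, nu, C, eps=0.05)
```

Each iteration is a KL (Bregman) projection onto one marginal constraint; the generalized OT problems discussed above replace these two projections by projections onto other convex constraint sets, or by Dykstra's algorithm when the sets are not affine.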
Following the pioneering work of Caffarelli and Oliker , Wang has shown that the inverse problem of designing a freeform convex reflector which sends a prescribed source to a target intensity is a particular instance of Optimal Transportation. This is a promising approach to automating the industrial design of optimized, energy-efficient reflectors (car headlights or public lighting, for instance). We show in Figure the experimental setting and one of the first numerical simulations produced by the ADT Mokabajour.
The method developed in has been used by researchers at TU Eindhoven, in collaboration with Philips Lighting Labs, to compute reflectors in a simplified setting (directional light source). Another approach, based on a geometric discretization of Optimal Transportation, has been developed in , and is able to handle more realistic conditions (point light source).
Solving the exact Optimal Transportation model for the Reflector inverse problem involves a generalized Monge-Ampère problem and is linked to the open problem of c-convexity compatible discretization we plan to work on. The corresponding software development is the topic of the ADT Mokabajour.
See Section 4.3 below for software. These methods will clearly become mainstream in reflector design, but also in lens design . The industrial problems mainly concern efficiency (light pollution) and security (car headlights), based on free tailoring of the illumination. The figure below is an extreme test case where we exactly reproduce an image. These methods may represent one of the first incursions of PDE-discretization-based methods into the field of non-imaging optics.
The analysis of large-scale datasets to perform unsupervised (clustering) and supervised (classification, regression) learning requires the design of advanced models that capture the geometry of the input data. We believe optimal transport is a key tool to address this problem because (i) many of these datasets are composed of histograms (social network activity, image signatures, etc.) and (ii) optimal transport makes use of a ground metric that enhances the performance of classical learning algorithms, as illustrated for instance in .
Some of the theoretical and numerical tools developed by our team, most notably Wasserstein barycenters , , are now becoming mainstream in machine learning , . In its simplest (convex) form, where one seeks only to maximize pairwise Wasserstein distances, metric learning corresponds to the congestion problem studied by G. Carlier and collaborators , , and we will elaborate on this connection to perform theoretical analysis and develop numerical schemes (see for instance our previous work ).
We aim at developing novel variational estimators extending classification and regression energies (SVM, logistic regression ) and kernel methods (see ). One of the key bottlenecks is to design numerical schemes to learn an optimal metric for this purpose, extending the method of Marco Cuturi to large-scale and more general estimators. Our main targeted application is natural language processing. The analysis and processing of large corpora of text is becoming a key problem at the interface between linguistics and machine learning . Extending classical machine learning methods to this field requires designing suitable metrics over both words and bags-of-words (i.e. histograms). Optimal transport is thus a natural candidate to bring innovative solutions to these problems. In a collaboration with Marco Cuturi (Kyoto University), we aim at unleashing the power of transportation distances by performing ground-distance learning on large databases of text. This requires lifting previous work on distances on words (see in particular ) to distances on bags-of-words using transport and metric learning.
The Brenier interpretation of the generalized solutions of the Euler equations in the sense of Arnold is an instance of multi-marginal optimal transportation, a recent and expanding research field which also appears in DFT (see Chemistry below). Recent numerical developments in OT provide new means of exploring this class of solutions.
In the 2000s, after the pioneering works of Otto, the theory of many-particle systems became “geometrized” thanks to the observed intimate relation between the geometric theory of geodesic convexity in the Wasserstein distance and the proof of entropy dissipation inequalities that determine the trend to equilibrium. The OT approach to the study of equilibration is still an extremely active field, in particular through the various recently established connections to sharp functional inequalities and isoperimetric problems.
A third specific topic is the use of optimal transport models in non-imaging optics. Light intensity here plays the role of the prescribed source/target mass, and the transport map defines the physical shape of the specular reflector or refracting lens achieving such a transformation. These models have been around since the works of Oliker and Wang in the 90's. Recent numerical progress indicates that OT may have an important industrial impact in the design of optical elements and calls for further modeling and analysis.
The treatment of chemical reactions in the framework of OT is a rather recent development. The classical theory must be extended to deal with the transfer of mass between different particle species by means of chemical reactions.
A promising and significant recent advance is the introduction and analysis of a novel metric that combines the pure transport elements of the Wasserstein distance with the annihilation and creation of mass, which is a first approximation of chemical reactions. The logical next challenge is the extension of OT concepts to vectorial quantities, which makes it possible to rewrite cross-diffusion systems for the concentrations of several chemical species as gradient flows in the associated metric. An example of application is the modeling of a chemical vapor deposition process, used for instance in the manufacturing of thin-film solar cells. This leads to degenerate cross-diffusion equations, whose analysis, without the use of OT theory, is delicate. Finding an appropriate OT framework to give the formal gradient flow structure a rigorous meaning would be a significant advance for the applicability of the theory, also in other contexts, such as biological multi-species diffusion.
A very different application of OT in chemistry is a novel approach to understanding density functional theory (DFT) by using optimal transport with “Coulomb costs”, which are highly non-convex and singular. Although this theory shares some properties with usual optimal transportation problems, it does not induce a metric between probability measures. It also uses the multi-marginal extension of OT, which is an active field in its own right.
OT methods have been introduced in biology via gradient flows in the Wasserstein metric. Writing certain chemotaxis systems in variational form made it possible to prove sharp estimates on the long-time asymptotics of bacterial aggregation. This application had a surprising payback for the theory: it led to a better understanding and novel proofs of important functional inequalities, such as the logarithmic Hardy-Littlewood-Sobolev inequality. Further applications followed, such as transport models for species that avoid over-crowding, or cross-diffusion equations for the description of biological segregation. The inclusion of dissipative cross-diffusion systems in the framework of gradient flows in OT-like metrics appears to be one of the main challenges for the future development of the theory. This extension is not only relevant for biological applications, but is clearly of interest to participants with primary interests in physics or chemistry as well.
Further applications include the connection of OT with game theory, following the idea that many selection processes are based on competition. The ansatz is quite universal and has been used in other areas of the life sciences as well, like for the modeling of personal income in economics.
Applications of variational methods are widespread in medical imaging, especially for diffeomorphic image matching. The large-deformation diffeomorphic formulation consists in finding geodesics on a group of diffeomorphisms. This can be seen as a non-convex, smoothed version of optimal transport in which a correspondence is sought between objects that can be more general than densities. Whereas the diffeomorphic approach is well established, similarity measures between the objects of interest are needed to drive the optimization. While crucial for the final registration results, these similarity measures are often non-geometric, due to the need for fast computability and gradient computation. However, our team pioneered the use of entropic smoothing for optimal transport, which gives fast and differentiable similarity measures that take the geometry into account. We therefore expect an important impact on this topic; this work is still in progress. This application belongs to the larger class of inverse problems where a geometric similarity measure such as optimal transport might notably enhance the results. Concerning this particular application, potential interactions with the Inria teams ARAMIS and ASCLEPIOS can leverage the newly proposed similarity measures towards a more applicative impact.
Recent years have seen intense cross-fertilization between OT and various problems arising in economics. The principal-agent problem with adverse selection is particularly important in modern microeconomics; mathematically, it consists in minimizing a certain integral cost functional among the set of
New ERC Grant for G. Peyré
Gabriel Peyré is the recipient of a second ERC grant (consolidator), project NORIA (http://
Pisa
Four members of Mokaplan (G. Peyré, G. Carlier, J-D. Benamou and Simone Di Marino, starting 2017) have been invited speakers at the Pisa Scuola Normale bi-annual Optimal Transport conference (November 7-11). This is considered the most prestigious conference in the field.
Functional Description
We design software resolving the following inverse problem: define the shape of a mirror which reflects the light from a source to a defined target, the distribution and support of the densities being prescribed. Classical applications include the design of solar ovens, public lighting, car headlights... Mathematical modeling of this problem, related to optimal transport theory, takes the form of a nonlinear Monge-Ampère type PDE. The numerical resolution of these models remained until recently a largely open problem. The MOKABAJOUR project aims to develop, using algorithms invented at Inria and LJK, reflector design software more efficient than the geometrical methods used so far. Different solvers
Participants: Simon Legrand, Jean-David Benamou, Quentin Merigot and Boris Thibert
Contact: Jean-David Benamou
Lenaic Chizat
https://
This Julia toolbox provides several tools for solving optimal transport, the unbalanced extensions and related problems.
Bernhard Schmitzer
http://
G. Peyré, V. Duval, Q. Denoyelle,C. Poon
In , we have studied the stability of a classical image processing method, the Total Variation (TV) denoising model introduced by Rudin, Osher and Fatemi . While TV denoising is a well-studied problem, our contribution is one of the first to address the impact of noise on the solutions. We have shown that the level lines of the denoised image (hence the edges and the gradients of shade) are located near an area called the “extended support”, which depends on the curvature of the image to recover. This yields a precise description of the so-called “staircasing” effect which is characteristic of the method, as well as its support stability (see Figure ). In particular, we have proved that indicator functions of calibrable sets are stable to noise, in the sense that the level lines of the denoised image will be close to the boundary of the original set.
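For readers unfamiliar with the model, here is a toy 1-D illustration of ROF/TV denoising. It minimizes a smoothed surrogate of the TV energy by plain gradient descent; this is only a hypothetical stand-in for the non-smooth solvers and the 2-D analysis of the actual study, with all parameters chosen for illustration.

```python
import numpy as np

def tv_denoise_1d(y, lam, delta=0.1, step=0.1, n_iter=2000):
    """Gradient descent on the smoothed 1-D ROF energy
       0.5*||x - y||^2 + lam * sum_k sqrt((x_{k+1} - x_k)^2 + delta^2),
    a differentiable surrogate of the (non-smooth) TV term."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d ** 2 + delta ** 2)   # smoothed "sign" of each jump
        grad = x - y                            # gradient of the fidelity term
        grad[:-1] -= lam * w                    # discrete divergence of w
        grad[1:] += lam * w
        x = x - step * grad
    return x

# piecewise-constant signal corrupted by noise
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, lam=0.2)
```

The weight w saturates at ±1 across the large jump (which is therefore preserved) while acting as a strong diffusion on the flat, noisy plateaus; this is the 1-D analogue of the edge-preserving behavior whose stability the paper quantifies.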
In , we have studied the problem of recovering a sparse signal (say, a sum of Dirac masses) from its blurred, partial Radon transform, or equivalently by sampling the low-frequency coefficients of its Fourier transform along a few radial lines. We have proved that, using a total variation (of measures) regularization approach in the spirit of , one may reconstruct the signal exactly under some geometric condition, or, in a compressed sensing approach, with high probability if one subsamples the coefficients. We propose a numerical algorithm to solve this problem exactly, by converting it to a few low-dimensional semi-definite programs.
Roman Andreev
We apply the augmented Lagrangian method to the convex optimization problem of time-dependent variational mean field games with diffusion. The system is first discretized with space-time tensor-product piecewise polynomial bases. This leads to a sequence of linear problems posed on the space-time cylinder that are second order in the temporal variable and fourth order in the spatial variable. To solve these large linear problems with the preconditioned conjugate gradient method, we propose a parameter-robust preconditioner based on a temporal transformation coupled with a spatial multigrid. Numerical examples illustrate the method .
G. Carlier and J-D. Benamou have written, in collaboration with F. Santambrogio, a review paper on variational MFG, covering both theoretical and numerical aspects; the latter are addressed by the augmented Lagrangian techniques developed by our team, also used in the context of optimal transport for an arbitrary Finsler metric cost (the main advantage of our method being that we never have to evaluate the cost).
G. Peyré, J. Solomon, M. Cuturi
A bottleneck of optimal transport (OT) methods for some applications in graphics and machine learning is that they require knowledge of an a priori fixed ground cost. This cost is often chosen as some power of a distance, which in turn requires that the data to compare or modify be pre-registered in a common embedding metric space (e.g. 3-D or 2-D Euclidean space for shape matching). For many applications (such as shape matching in vision or molecule comparison in quantum chemistry), this is simply not the case. We thus propose in to extend the computational machinery of OT to cope with an unknown cost by using the so-called Gromov-Wasserstein distance. This distance makes it possible to compare probability distributions living in different, un-registered metric spaces, by coupling together pairs of points instead of single points. This leads to a non-convex energy minimization, similar to the graph matching problem. We propose to use the entropic regularization scheme to solve it numerically, and we show that it leads to a very effective Sinkhorn-like algorithm. In (published at SIGGRAPH, the best computer graphics conference) we explore various applications in computer graphics (such as shape matching or the organization of collections of surfaces and images), while in (published at ICML, one of the two best machine learning conferences) we extend this machinery to compute interpolations and barycenters of several metric spaces, with applications to shape interpolation and supervised learning for quantum chemistry.
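A minimal sketch of the entropic Gromov-Wasserstein iteration with square loss: alternate between linearizing the quadratic GW energy at the current plan and a Sinkhorn projection. This is a simplified toy version of the scheme described above; the initialization, eps and iteration counts are illustrative, not the values used in the papers.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=200):
    """Standard Sinkhorn scaling for entropy-regularized OT."""
    K = np.exp(-C / eps)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]

def entropic_gw(C1, C2, mu, nu, eps=0.1, n_outer=20):
    """Entropic Gromov-Wasserstein (square loss): the linearized cost at plan P
    is, up to terms depending only on the marginals, -2 * C1 @ P @ C2.T;
    those constant terms are absorbed by the Sinkhorn scalings."""
    P = np.outer(mu, nu)                  # independent coupling as initialization
    for _ in range(n_outer):
        cost = -2.0 * C1 @ P @ C2.T       # linearization of the quadratic energy
        P = sinkhorn(mu, nu, cost - cost.min(), eps)
    return P

# toy example: match a 3-point metric space with itself
pts = np.array([0.0, 1.0, 3.0])
C1 = np.abs(pts[:, None] - pts[None, :]); C1 /= C1.max()
C2 = C1.copy()
mu = nu = np.full(3, 1.0 / 3.0)
P = entropic_gw(C1, C2, mu, nu)
```

Each outer step is a Sinkhorn solve, which is why the overall method inherits the simplicity and speed of the entropic OT solver while handling the non-convex GW energy only through repeated linearizations.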
A. Genevay, G. Peyré, M. Cuturi, F. Bach
Optimal transport has recently proved (in particular through the works of our team) to be very successful at solving various low-dimensional problems, mostly in 2-D and 3-D. These successes are mainly due to the specific structure of these problems (the connections with PDEs and the use of entropic regularization), but these approaches do not scale to the high-dimensional, large-scale problems encountered in machine learning. In these problems, it is not possible to discretize the space, and one does not have direct access to the densities to compare; one can only sample from these distributions. To address these difficulties, we propose in (published at NIPS, one of the two best machine learning conferences) the first provably convergent algorithm that can cope with high-dimensional OT problems, with both discrete and continuous input measures. This approach leverages both the structure of the dual problem and the smoothness induced by entropic regularization. We show an application of this method to the classification of high-dimensional bag-of-features histograms.
F-X. Vialard Q. Mérigot L. Nenna G. Carlier J-D. Benamou
Several new algorithms based on optimal transport have been applied to generalized Euler geodesics and the Cauchy problem for the Euler equation. The methods rely on the generalized polar decomposition of Brenier, implemented numerically either through semi-discrete optimal transport or through entropic regularization. The approach is robust enough to extract the non-classical, multi-valued solutions of Euler's equations predicted by Brenier and Schnirelman. The semi-discrete approach also leads to a numerical scheme able to approximate regular solutions of the Cauchy problem for the Euler equations. See Luca Nenna's thesis and .
A new link between optimal transport and fluid dynamics was discovered in . Since the work of Brenier, optimal transport has been tightly linked with the incompressible Euler equation and can be seen as a nonlinear extension of the pressure. Recently, a new optimal transport model between unbalanced measures was proposed by members of Mokaplan. In , it is shown that the corresponding fluid dynamic equation is the Camassa-Holm equation, well known to model waves in shallow water and wave breaking. On the theoretical side, we prove that solutions to the Camassa-Holm equation can be seen as particular solutions of the incompressible Euler equation. This work paves the way for the study of generalized Camassa-Holm geodesics and for numerical methods, based on unbalanced optimal transport scaling algorithms, to solve them.
G. Peyré F-X. Vialard L. Chizat B. Schmitzer S. Di Marino
B. Schmitzer has developed a sparse solver based on entropic regularization, and numerical methods to solve unbalanced optimal transport (developed by our team in 2015) have been proposed in . The core of the method consists in using the entropy functional as a regularizer together with a barrier method. This is a generalization of the Sinkhorn method recently introduced to numerical optimal transport by M. Cuturi. One important contribution of this work is to give a unified formulation of unbalanced optimal transport that can address a whole range of possible metrics and encompasses different applications such as Karcher-Fréchet averages, gradient flows, and multi-marginal unbalanced optimal transport. These two works essentially rely on a log-domain stabilized formulation, an adaptive truncation of the kernel and a coarse-to-fine scheme. This makes it possible to solve large problems where the regularization is almost negligible.
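To illustrate the log-domain stabilization idea mentioned above (only the idea, not the actual solver, which additionally uses kernel truncation and a coarse-to-fine scheme), the Sinkhorn updates can be rewritten on the dual potentials, where only log-sum-exp operations appear and the kernel exp(-C/eps) is never formed; all parameters below are illustrative.

```python
import numpy as np

def lse(A, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = A.max(axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(A - m), axis=axis, keepdims=True))).squeeze(axis)

def sinkhorn_log(mu, nu, C, eps, n_iter=2000):
    """Sinkhorn iterations on the dual potentials (f, g): the log-domain
    form stays stable for small eps, when forming exp(-C/eps) directly
    would be numerically disastrous."""
    f = np.zeros_like(mu)
    g = np.zeros_like(nu)
    logmu, lognu = np.log(mu), np.log(nu)
    for _ in range(n_iter):
        f = eps * (logmu - lse((g[None, :] - C) / eps, axis=1))
        g = eps * (lognu - lse((f[:, None] - C) / eps, axis=0))
    return np.exp((f[:, None] + g[None, :] - C) / eps)

# small-eps example on a 1-D grid
n = 40
x = np.linspace(0.0, 1.0, n)
mu = np.full(n, 1.0 / n)
nu = np.exp(-((x - 0.6) ** 2) / 0.02); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn_log(mu, nu, C, eps=0.005)
```

The division by row/column sums of the scaled kernel becomes a subtraction of log-sum-exps, so the potentials f, g remain of moderate size even when the plan entries span hundreds of orders of magnitude.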
In particular, this scaling algorithm is applied, in its gradient-flow formulation for the unbalanced case, to obtain accurate simulations of the Hele-Shaw model of cancer tumor growth.
G. Carlier J-D. Benamou L. Nenna, G. De Bie
G. Carlier and L. Nenna, in collaboration with Adrien Blanchet, developed an entropic-regularization scheme to compute Cournot-Nash equilibria (i.e. equilibria in games with a continuum of players) for generic costs. With Lina Mallozzi, G. Carlier introduced a partial optimal mass transport approach for spatial monopoly pricing, in both the deterministic and stochastic cases. G. Carlier, J-D. Benamou and X. Dupuis developed various numerical strategies for solving the principal-agent problem in the framework of optimal pricing. Carlier, Chernozhukov and Galichon studied multivariate quantile regression by optimal transport and duality techniques beyond the specified case; Gwendoline de Bie implemented these ideas via entropic regularization.
(S. Legrand V. Duval L. Chizat J-D. Benamou).
This collaboration with CLS, funded by CNES, intends to test optimal transportation interpolation techniques, for both balanced and unbalanced data, on Column of Tropospheric Humidity data.
J-D. Benamou is the coordinator of the ANR ISOTACE (Interacting Systems and Optimal Transportation, Applications to Computational Economics), ANR-12-MONU-0013 (2012-2016). The consortium explores new numerical methods in Optimal Transportation and Mean Field Game theory, with applications in economics and congested crowd motion. Check
https://
J-D. Benamou and G. Carlier are members of the ANR MFG (ANR-16-CE40-0015-01). Scientific topics of the project: mean field analysis; analysis of the MFG systems and of the Master equation; numerical analysis; models and applications.
J-D. Benamou, G. Carlier and F-X. Vialard are members of the ANR MAGA. The Monge-Ampère equation is a fully nonlinear elliptic equation which plays a central role in geometry and in the theory of optimal transport. However, the singular and non-linear nature of the equation is a serious obstruction to its efficient numerical resolution. The first aim of the MAGA project is to study and implement discretizations of optimal transport and Monge-Ampère equations which rely on tools from computational geometry (Laguerre diagrams). In a second step, these solvers will be applied to concrete problems from various fields involving optimal transport or Monge-Ampère equations, such as computational physics (early universe reconstruction problem, congestion/incompressibility constraints), economics (principal-agent problems), geometry (variational problems over convex bodies), and reflector and refractor design for non-imaging optics.
V. Duval and F-X. Vialard are members of the CAVALIERI project (CAlcul des VAriations pour L'Imagerie, l'Edition et la Recherche d'Images).
This project, coordinated by V. Duval, aims at proposing new methods for comparing and reconstructing images, relying on recent progress in the calculus of variations. Typical applications are co-segmentation, statistics transfer and interpolation, as well as tomographic reconstruction. A major emphasis is placed on methods derived from (generalized) Optimal Transportation. See
http://
Gabriel Peyré is the principal investigator of the ERC project SIGMA-Vision (http://
Title: Numerical Optimal Transportation in (Mathematical) Economics
International Partner (Institution - Laboratory - Researcher):
McGill University (Canada) - mathematics - Oberman Adam
Start year: 2014
See also: https://
The team investigates new modelling and numerical resolution methods in Mathematical Economics using the theory of Optimal Transportation.
F-X. Vialard was invited to participate in the program Mathematics of Shapes and Applications (4-31 July 2016), held in Singapore.
The following people visited MOKAPLAN during 2016.
Lina Mallozzi (Professor, Napoli): Feb. 28-March 5
Andrei Sobolevski (Research Associate, Moscow) and Aleksei Kroshnin (PhD Student, Moscow): Oct 17-Oct 21
Teresa Radice (Research Associate, Napoli): Jan. 25-Jan. 31, Apr. 7-Apr. 15 and Jul. 25-Aug. 10
Giuseppe Buttazzo (Professor, Pisa): Nov. 29-Dec. 2
G. Carlier spent three weeks in Canada in July: one week in Victoria for a collaboration with Agueh (and a master's committee) and two weeks in Montreal for the Mokalien meeting and discussions with Oberman. He also visited Naples twice (one week each time, to work with Mallozzi and Radice), Pisa twice (one week each time, to work with Buttazzo) and NYU (3 days).
V. Duval and F-X. Vialard organized the CAVALIERI workshop, which was held at the Inria Paris research center (October 11th and 12th).
The team has organized the “Journées MokaTAO” together with the McTAO team on October 3rd and 4th. G. Peyré co-organized the SIGMA 2016 conference at the CIRM in Nov. 2016.
J-D. Benamou co-organized the Computational Optimal Transportation workshop at CRM Montreal (July 18-22).
Guillaume Carlier is on the editorial boards of Journal de l'École Polytechnique, Applied Mathematics and Optimization (since 2016) and Mathematics and Financial Economics; with Filippo Santambrogio and Thierry Champion, he co-edited a special issue of the RICAM Series devoted to optimal transport. G. Peyré is an editor for the SIAM Journal on Imaging Sciences and the Springer Journal of Mathematical Imaging and Vision. He co-edited a special issue of the RICAM Series devoted to inverse problems.
The members of the team frequently review papers for SIIMS (SIAM Journal on Imaging Sciences), JMAA (Journal of Mathematical Analysis and Applications), IPOL (Image Processing On Line), JVCI (Journal of Visual Communication and Image Representation), COCV, M2AN, Discrete and Computational Geometry, Journal of the London Mathematical Society, JOTA, JCP, Information and Inference: A Journal of the IMA, JMIV, Optimization Letters, PAMI, SIAM Journal on Control and Optimization, as well as IPMI and MICCAI (leading conferences in medical imaging).
V. Duval was invited to give talks at: the Oberwolfach workshop on Mathematical Imaging and Surface Processing (January 2016), the Séminaire Parisien de Statistique (SEMSTAT, March 2016), the McGill-Mokalien Workshop on Numerical Optimal Transport (July 2016) and the Demi-heure de Science at Inria Paris (October 2016).
G. Carlier gave seminars in Grenoble, Liège, Bielefeld and at NYU; he was a plenary speaker at the SMAI-MODE conference (Toulouse) and gave talks at the workshop Nonlinear Problems from Materials Science and Shape Optimization (Pisa), the Computational Optimal Transport workshop (Montreal), the workshop New Developments in Econometrics and Time Series (Madrid), the MAFE Meeting (Bielefeld) and OTT16 (Pisa).
G. Peyré was a plenary speaker at: the Oxford Summer School on Inverse Problems (Jul. 2016); Optimization without Borders (Les Houches, Feb. 2016); the UCL workshop on sparse signal processing (Sep. 2016); the Computational Optimal Transport workshop (Montreal, Jul. 2016); and OTT16 (Pisa, Dec. 2016).
J-D. Benamou spoke at the Calculus of Variations seminar at U. Paris Diderot (Nov.) and was an invited speaker at OTT16 (Pisa, Dec. 2016).
F-X. Vialard gave talks at: the applied mathematics seminar, Cermics, ENPC (January); the Geometric Analysis Theory in Vision and Control conference (Voss-Bergen, May); the Large Scale Inverse Problems in Medical Imaging minisymposium at SIAM Imaging Science (Albuquerque, May); Mathematics of Shapes and Applications (Singapore, July); Geometric Measure Theory: Analysis and Non-smooth Objects, CIMI analysis semester (Toulouse, September); the MokaTAO meeting (Inria Paris, October); the calculus of variations seminar (Orsay, October); and the Journée transport optimal, équation de Monge-Ampère et applications (IHES, December).
G. Peyré is on the scientific boards of the Fondation Sciences Mathématiques de Paris (since 2013), the Chaire CFM-ENS on Data Sciences (since 2016), the Chaire Havas-Dauphine END (since 2013) and Ceremade, Paris-Dauphine (2013-2016).
J-D. Benamou is an elected member of the "Conseil Académique" of the PSL COMUE.
Licence: V. Duval, Functional Analysis, 27h équivalent TD, L3 level, INSA Rouen-Normandie
Master : G. Peyré, Master 2 MVA on imaging and machine learning (Cachan)
Licence: F-X. Vialard, Linear Algebra 2, Université Paris-Dauphine.
Internship: Gwendoline de Bie did her master's internship (2nd year at ENSAE ParisTech) with G. Carlier on entropic regularization for multivariate quantile regression.
Internship: Christine Durok, An iterative method for the discrete inverse reflector problem, J-D. Benamou and Q. Mérigot.
PhD in progress: Julien André, CIFRE PhD thesis with the company OPTIS and Grenoble-INP (co-supervision D. Attali, B. Thibert, Q. Mérigot).
PhD in progress: Jocelyn Meyron, Extension of semi-discrete optimal transport to other costs, ED de Grenoble, Q. Mérigot, D. Attali and B. Thibert.
PhD in progress: Miao Yu, Optimal Transport distances and geophysical imaging, J-D. Benamou (co-direction J.-P. Vilotte, IPGP).
PhD in progress : Paul Catala, Low-rank approaches for off-the-grid superresolution, October 2016, G. Peyré and V. Duval.
PhD in progress: Lénaïc Chizat, Unbalanced Optimal Transport, October 2014, F-X. Vialard and G. Peyré.
PhD in progress: Aude Genevay, Optimal Transport for Machine Learning, October 2015, J-D. Benamou and G. Peyré.
PhD in progress : Quentin Denoyelle, Off-the-grid super-resolution: theory, algorithms and applications in fluorescence imaging, October 2014, G. Peyré and V. Duval.
Postdoc completed: Dario Prandi, sub-Riemannian model for imaging, Oct. 2015 – Aug. 2016, G. Peyré and J-M Mirebeau
Postdoc completed: Thomas Gallouèt, Fluid model and optimal transport, Oct. 2015 – Aug. 2016, Q. Mérigot and Yann Brenier.
Postdoc in progress: Roman Andreev, Numerical Methods for Mean Field Games, May 2015, Yves Achdou and J-D. Benamou.
PhD completed: Luca Nenna, Numerical Methods for Multi-Marginal Optimal Transportation, October 2013-December 2016, J-D. Benamou and G. Carlier.
PhD completed: Maxime Laborde, Interacting particle systems: a gradient flow approach in the Wasserstein space, September 2013-December 2016, G. Carlier.
PhD in progress: Jonathan Vacher, Machine learning approaches for neurosciences of the visual brain, October 2013-Jan 2017, G. Peyré and C. Monier.
Postdoc completed: Bernhard Schmitzer, fast algorithms for optimal transport, Oct. 2014-August 2016, G. Peyré.
Postdoc completed: Clarice Poon, Support recovery using total variation and other sparse priors, September 2015-August 2016, G. Peyré and V. Duval.
Vincent Duval was on the PhD committee of Romain Hug (Grenoble, December 9).
J-D. Benamou and G. Carlier were on the PhD committee of Mathieu Laurière (Paris-Diderot, November 21); G. Carlier was also on the PhD committee of Guo (École Polytechnique) and on the HDR committees of Silva (Limoges, referee) and Lamboley (Dauphine, coordinator). G. Peyré was on the PhD committees of: Mitra Fatemi (EPFL, Feb. 2016), Morgane Henry (Grenoble, March 2016), Antoine Bonnefoy (Marseille, March 2016), Sébastien Combrexelle (Toulouse, Oct. 2016), Augustin Cosse (UCL, Aug. 2016), Irène Kaltenmark (ENS Cachan, Oct. 2016), Olivia Miraucourt (Reims, Oct. 2016), Fred Maurice Ngole (CEA, Oct. 2016), Lara Raad (ENS Cachan, Oct. 2016), Emmanuel Soubies (Nice, Oct. 2016), Luc Le Magoarou (Rennes, Dec. 2016) and Chen Da (Paris, Dec. 2016).
G. Peyré wrote the popular-science article “Claude Shannon et la compression de données” for Images des Mathématiques.