The last decade has witnessed a remarkable convergence between several sub-domains of the calculus of variations, namely optimal transport (and its many generalizations), the infinite-dimensional geometry of diffeomorphism groups, and inverse problems in imaging (in particular sparsity-based regularization). This convergence is due to (i) the mathematical objects manipulated in these problems, namely sparse measures (e.g. couplings in transport, edge locations in imaging, displacement fields for diffeomorphisms), and (ii) the use of similar numerical tools from non-smooth optimization and geometric discretization schemes. Optimal Transportation, diffeomorphisms and sparsity-based methods are powerful modeling tools that impact a rapidly expanding list of scientific applications and call for efficient numerical strategies. Our research program shows the important part played by the team members in the development of these numerical methods and their application to challenging problems.

*Optimal Mass Transportation* is a mathematical research topic which started two centuries ago with Monge's work on the “Théorie des déblais et des remblais" (see ).
This engineering problem consists in minimizing the transport cost between two given mass densities. In the 40's, Kantorovich introduced a powerful linear relaxation together with its dual formulation. The *Monge-Kantorovich* problem became a specialized research topic in optimization, and Kantorovich obtained the 1975 Nobel prize in economics for his contributions to resource allocation problems. Since the seminal discoveries of Brenier in the 90's , Optimal Transportation has received renewed attention from mathematical analysts, and the Fields Medal awarded in 2010 to C. Villani, who made important contributions to Optimal Transportation and wrote the modern reference monographs , , arrived at a culminating moment for this theory. Optimal Mass Transportation is today a mature area of mathematical analysis with a constantly growing range of applications. Optimal Transportation has also received a lot of attention from probabilists (see for instance the recent survey for an overview of the Schrödinger problem, which is a stochastic variant of the Benamou-Brenier dynamical formulation of optimal transport). The development of numerical methods for Optimal Transportation and related problems is a difficult topic and comparatively underdeveloped. This research field has experienced a surge of activity in the last 3 years, with important contributions of the Mokaplan group (see the list of important publications of the team). We describe below a few recent and less recent Optimal Transportation concepts and methods which are connected to the future activities of Mokaplan:
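To make the linear relaxation concrete, the discrete Monge-Kantorovich problem is a finite linear program over couplings. The following minimal sketch (illustrative only, not the team's code; the tiny instance is invented for the example) solves it with `scipy`:

```python
# Minimal illustrative sketch (not the team's code): the discrete
# Monge-Kantorovich problem is a linear program over couplings P with
# prescribed marginals mu and nu.
import numpy as np
from scipy.optimize import linprog

def kantorovich(mu, nu, C):
    """Minimize <C, P> subject to P 1 = mu, P^T 1 = nu, P >= 0."""
    m, n = len(mu), len(nu)
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # i-th row of P sums to mu[i]
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # j-th column of P sums to nu[j]
    b_eq = np.concatenate([mu, nu])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun, res.x.reshape(m, n)

x = np.array([0.0, 1.0])                   # source support
y = np.array([0.0, 2.0])                   # target support
mu = np.array([0.5, 0.5])
nu = np.array([0.5, 0.5])
C = (x[:, None] - y[None, :]) ** 2         # quadratic ground cost
cost, P = kantorovich(mu, nu, C)           # optimal cost is 0.5 here
```

The optimal coupling here is the monotone matching (0→0 and 1→2), a vertex of the transportation polytope.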

A formal substitution of the optimal transport map as the gradient of a convex potential in the mass conservation constraint (a Jacobian equation) gives a non-linear Monge-Ampère equation. Caffarelli used this result to extend the regularity theory for the Monge-Ampère equation. In the last ten years, it has also motivated new research on numerical solvers for non-linear degenerate elliptic equations (see and the references therein). Geometric approaches based on Laguerre diagrams and discrete data have also been developed . Monge-Ampère based Optimal Transportation solvers have recently given the first linear-cost computations of (smooth) Optimal Transportation maps.

In recent years, the classical Optimal Transportation problem has been extended in several directions. First, different ground costs measuring the “physical" displacement have been considered. In particular, well-posedness for a large class of convex and concave costs has been established by McCann and Gangbo . Optimal Transportation techniques have been applied, for example, to a Coulomb ground cost in quantum chemistry in relation with Density Functional Theory . Given the densities of electrons, Optimal Transportation models the potential energy of their relative positions. For more than 2 electrons (and therefore more than 2 densities), the natural extension of Optimal Transportation is the so-called Multi-marginal Optimal Transport (see and the references therein). Another instance of multi-marginal Optimal Transportation arises in the so-called Wasserstein barycenter problem between an arbitrary number of densities . An interesting overview of this emerging field of optimal transport and its applications can be found in the recent survey of Ghoussoub and Pass .
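In one dimension the Wasserstein barycenter admits a simple closed form: its quantile function is the weighted average of the input quantile functions. The toy sketch below (a hypothetical example, not team code) uses the special case of uniform measures on equally many points, where this reduces to averaging sorted samples:

```python
# Toy sketch of the 1-D Wasserstein barycenter: the barycenter's quantile
# function is the weighted average of the input quantile functions, so for
# uniform measures on n points it amounts to averaging sorted samples.
import numpy as np

def barycenter_1d(samples, weights):
    """samples: list of equal-length 1-D arrays; weights: nonneg, sum to 1."""
    return sum(w * np.sort(s) for w, s in zip(weights, samples))

a = np.array([2.0, 0.0, 1.0])
b = np.array([8.0, 4.0, 6.0])
bar = barycenter_1d([a, b], [0.5, 0.5])   # support of the midpoint measure
```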

Optimal transport has found many applications, starting from its relation with several physical models such as the semi-geostrophic equations in meteorology , , , , , mesh adaptation , the reconstruction of the early mass distribution of the Universe in astrophysics , and the numerical optimisation of reflectors following the Optimal Transportation interpretation of Oliker and Wang . Extensions of OT such as multi-marginal transport have potential applications in Density Functional Theory (DFT) , in generalized solutions of the Euler equations and in statistics and finance , among others. Recently, there has been a spread of interest in applications of OT methods in imaging sciences , statistics and machine learning . This is largely due to the emergence of fast numerical schemes to approximate the transportation distance and its generalizations, see for instance . Figure shows an example of application of OT to color transfer. Figure shows an example of application in computer graphics to interpolate between input shapes.
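One of the fast schemes alluded to above is entropic regularization solved by Sinkhorn iterations. The sketch below is the generic textbook version (not the implementation of any cited work), run on two invented discretized Gaussians:

```python
# Generic sketch of entropic regularization solved by Sinkhorn iterations
# (textbook version, not the implementation of any cited work).
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iter=2000):
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)                # enforce second marginal
        u = mu / (K @ v)                  # enforce first marginal
    P = u[:, None] * K * v[None, :]       # regularized transport plan
    return P, np.sum(P * C)

x = np.linspace(0, 1, 50)
mu = np.exp(-((x - 0.3) ** 2) / 0.01); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 0.01); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2
# entropic approximation of W2^2 (= 0.16 for this pure translation)
P, cost = sinkhorn(mu, nu, C)
```

Each iteration only involves matrix-vector products with the Gibbs kernel, which is what makes these schemes fast and parallelizable.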

While the optimal transport problem, in its original formulation, is a static problem (no time evolution is considered), it makes sense in many applications to consider time evolution instead. This is relevant for instance in applications to fluid dynamics, or in medical imaging to perform registration of organs and to model tumor growth.

In this perspective, optimal transport in Euclidean space corresponds to an evolution where each particle of mass moves in a straight line. This interpretation corresponds to the *Computational Fluid Dynamic* (CFD) formulation proposed by Benamou and Brenier in . Its solutions are time curves in the space of densities and geodesics for the Wasserstein distance. The CFD formulation relaxes the non-linear mass conservation constraint into a time-dependent continuity equation; a remarkable feature of this dynamical formulation is that it can be re-cast as a convex, although highly non-smooth, optimization problem. This convex dynamical formulation finds many non-trivial extensions and applications, see for instance . The CFD formulation also appears as a limit case of *Mean Field Games* (MFGs), a large class of economic models introduced by Lasry and Lions , leading to a system coupling a Hamilton-Jacobi equation with a Fokker-Planck equation. In contrast, the Monge case, where the ground cost is the Euclidean distance, leads to a static system of PDEs .
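The straight-line motion of mass particles is easiest to see in 1-D, where the optimal map is the monotone rearrangement. A minimal sketch of the resulting displacement (McCann) interpolation, on invented point clouds:

```python
# Minimal sketch of displacement interpolation: in 1-D the optimal map is
# the monotone rearrangement, so each sorted particle travels in a
# straight line between its initial and final position.
import numpy as np

def displacement_interp(x, y, t):
    xs, ys = np.sort(x), np.sort(y)      # monotone (optimal) matching
    return (1 - t) * xs + t * ys         # straight-line particle paths

x = np.array([1.0, 0.0])
y = np.array([5.0, 2.0])
mid = displacement_interp(x, y, 0.5)     # Wasserstein geodesic midpoint
```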

Another extension is, instead of considering geodesics for the transportation metric (i.e. minimizing the Wasserstein distance to a target measure), to make the density evolve in order to minimize some functional. Computing the steepest descent direction with respect to the Wasserstein distance defines a so-called Wasserstein gradient flow, also known as a *JKO gradient flow* after its authors . This is a popular tool to study a large class of non-linear diffusion equations. Two interesting examples are the Keller-Segel system for chemotaxis and a model of congested crowd motion proposed by Maury, Santambrogio and Roudneff-Chupin . From the numerical point of view, these schemes are understood to be the natural analogue of implicit schemes for linear parabolic equations. The resolution is however costly, as it involves taking the derivative in the Wasserstein sense of the relevant energy, which in turn requires the solution of a large-scale convex but non-smooth minimization problem.
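To fix ideas, here is an illustrative particle discretization of one JKO step (a pedagogical sketch under simplifying assumptions, not a team algorithm): for a pure potential energy E(ρ) = ∫ V dρ and equal-mass particles in 1-D, the squared Wasserstein distance decouples after sorting, so the step reduces to one proximal (implicit Euler) problem per particle:

```python
# Illustrative sketch of one JKO step for the potential energy
# E(rho) = int V d(rho), discretized with equal-mass particles in 1-D.
# The squared Wasserstein distance between sorted particle clouds
# decouples, so each particle solves argmin_z (z - y)^2/(2 tau) + V(z),
# i.e. a proximal (implicit Euler) step.
import numpy as np
from scipy.optimize import minimize_scalar

def jko_step(particles, V, tau):
    new_positions = []
    for y in np.sort(particles):
        prox = minimize_scalar(lambda z, y=y: (z - y) ** 2 / (2 * tau) + V(z))
        new_positions.append(prox.x)
    return np.array(new_positions)

V = lambda z: z ** 2 / 2                 # quadratic confining potential
p0 = np.array([-2.0, 1.0, 3.0])
p1 = jko_step(p0, V, tau=0.5)            # exact prox here: y / (1 + tau)
```

With interaction or internal (entropy) energies the per-particle problems no longer decouple, which is precisely the large-scale non-smooth minimization mentioned above.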

To tackle more complicated warping problems, such as those encountered in medical image analysis, one unfortunately has to drop the convexity of the functional involved in defining the gradient flow. This gradient flow can either be understood as defining a geodesic on the (infinite-dimensional) group of diffeomorphisms , or on an (infinite-dimensional) space of curves or surfaces . The de facto standard to define, analyze and compute these geodesics is the “Large Deformation Diffeomorphic Metric Mapping” (LDDMM) framework of Trouvé, Younes, Holm and co-authors , . While in the CFD formulation of optimal transport the metric on infinitesimal deformations is just the L² metric on velocity fields, LDDMM uses stronger (typically Sobolev) norms, which guarantee that the resulting deformations are smooth.

Beside image warping and registration in medical image analysis, a key problem in nearly all imaging applications is the reconstruction of high-quality data from low-resolution observations. This field, commonly referred to as “inverse problems”, is very often concerned with the precise location of features such as point sources (modeled as Dirac masses) or sharp contours of objects (modeled as gradients being Dirac masses along curves). The intuition underlying these ideas is the so-called sparsity model (either of the data itself, its gradient, or other more complicated representations such as wavelets, curvelets, bandlets and learned representations ).

The huge interest in these ideas started mostly from the introduction of convex methods to serve as proxies for these sparse regularizations. The most well known is the ℓ¹ norm, which underlies methods such as basis pursuit and the Lasso.
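As an illustration of the ℓ¹ convex proxy, the sketch below runs textbook iterative soft-thresholding (ISTA) on an invented random measurement operator (chosen for convenience; real acquisition operators are far more structured) to solve min_u ½‖Au − b‖² + λ‖u‖₁:

```python
# Generic ISTA sketch (textbook algorithm; the random operator and spike
# train are invented for illustration) for the l1-regularized problem
#   min_u 0.5 * ||A u - b||^2 + lam * ||u||_1.
import numpy as np

def soft_threshold(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista(A, b, lam, n_iter=1000):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = soft_threshold(u - A.T @ (A @ u - b) / L, lam / L)
    return u

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60)) / np.sqrt(30)
u_true = np.zeros(60)
u_true[[10, 30, 50]] = [1.5, -1.0, 0.8]  # a sparse spike train
b = A @ u_true                           # noiseless measurements
u_rec = ista(A, b, lam=0.01)             # recovers the spike positions
```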

However, the theoretical analysis of sparse reconstructions involving real-life acquisition operators (such as those found in seismic imaging, neuro-imaging, astro-physical imaging, etc.) is still mostly an open problem. A recent research direction, triggered by a paper of Candès and Fernandez-Granda , is to study directly the infinite-dimensional problem of reconstructing sparse measures (i.e. sums of Dirac masses) using the total variation of measures (not to be mistaken for the total variation of 2-D functions). Several works , , have used this framework to provide theoretical performance guarantees, essentially by studying how the distance between neighboring spikes impacts noise stability.

In image processing, one of the most popular methods is total variation regularization , . It favors low-complexity images that are piecewise constant; see Figure for examples on some image processing problems. Beside applications in image processing, sparsity-related ideas have also had a deep impact in statistics and machine learning . As a typical example, in applications to recommendation systems it makes sense to consider sparsity of the singular values of matrices, which can be relaxed using the so-called nuclear norm (a.k.a. trace norm) . The underlying methodology is to make use of low-complexity regularization models, which turn out to be equivalent to partly-smooth regularization functionals , enforcing the solution to belong to a low-dimensional manifold.
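For the nuclear norm just mentioned, the basic computational step of low-rank recovery solvers is its proximal operator, namely soft-thresholding of the singular values. A minimal sketch on an invented low-rank matrix:

```python
# Minimal sketch: the proximal operator of the nuclear norm is
# soft-thresholding of the singular values, the building block of
# low-rank matrix recovery algorithms.
import numpy as np

def prox_nuclear(M, t):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

rng = np.random.default_rng(1)
M_low = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 8))  # rank <= 3
M_noisy = M_low + 0.01 * rng.normal(size=(8, 8))
M_hat = prox_nuclear(M_noisy, t=0.5)    # small noise singular values vanish
```

The thresholding kills the singular values created by the noise (which are far below the threshold here), so the output is again low-rank.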

The dynamical formulation of optimal transport creates a link between optimal transport and geodesics on diffeomorphism groups. This formal link has at least two strong implications that Mokaplan will elaborate on: (i) the development of novel models that bridge the gap between these two fields; (ii) the introduction of novel fast numerical solvers based on ideas from both non-smooth optimization techniques and Bregman metrics, as highlighted in Section .

In a similar line of ideas, we believe a unified approach is needed to tackle both sparse regularization in imaging and various generalized OT problems. Both require solving related non-smooth and large-scale optimization problems. Ideas from proximal optimization have proved crucial to address problems in both fields (see for instance , ). Transportation metrics are also the correct way to compare and regularize variational problems that arise in image processing (see for instance the Radon inversion method proposed in ) and machine learning (see ). This unity in terms of numerical methods is once again at the core of Section .

The first layer of methodological tools developed by our team is a set of theoretical continuous models that aim at formalizing the problems studied in the applications. These theoretical findings will also pave the way to efficient numerical solvers that are detailed in Section .

(*Participants:* G. Carlier, J-D. Benamou, V. Duval, Xavier Dupuis (LUISS Guido Carli University, Roma)) The principal agent problem plays a distinguished role in the literature on asymmetric information and contract theory (with important contributions from several Nobel prizes such as Mirrlees, Myerson or Spence) and it has many important applications in optimal taxation, insurance and nonlinear pricing. The typical problem consists in finding a cost-minimizing strategy for a monopolist facing a population of agents who have an unobservable characteristic; the principal therefore has to take into account the so-called incentive compatibility constraint, which is very similar to the cyclical monotonicity condition that characterizes optimal transport plans. In a special case, Rochet and Choné reformulated the problem as a variational problem subject to a convexity constraint. For more general models, and using ideas from Optimal Transportation, Carlier considered variational problems subject to a generalized notion of convexity constraint.

*Our expertise:* We have already contributed to the numerical resolution of the Principal Agent problem in the case of the convexity constraint, see , , .

*Goals:* So far, the mathematical PA model can be numerically solved only for simple utility functions. A Bregman approach inspired by is currently being developed for more general functions; it would be an extremely useful complement to the theoretical analysis. A new semi-discrete geometric approach is also being investigated, in which the method reduces to non-convex polynomial optimization.

(*Participants:* G. Carlier, J-D. Benamou, G. Peyré)
A challenging branch of emerging generalizations of Optimal Transportation arising in *economics, statistics and finance* concerns Optimal Transportation with *conditional* constraints. The *martingale optimal transport* problem , which appears naturally in mathematical finance, aims at computing robust bounds on option prices as the value of an optimal transport problem where not only the marginals are fixed but the coupling must be the law of a martingale, since it represents the prices of the underlying asset under the risk-neutral probability at the different dates. Note that as soon as more than two dates are involved, we are facing a multi-marginal problem.
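To make the extra conditional constraint concrete, here is a hypothetical toy instance (two dates, invented marginals in convex order) of discrete martingale optimal transport written as a linear program:

```python
# Hypothetical toy instance of discrete martingale optimal transport:
# a linear program over couplings with fixed marginals plus the
# martingale constraint E[Y | X = x_i] = x_i.
import numpy as np
from scipy.optimize import linprog

x = np.array([-0.5, 0.5]); mu = np.array([0.5, 0.5])              # today
y = np.array([-1.0, 0.0, 1.0]); nu = np.array([0.25, 0.5, 0.25])  # tomorrow
m, n = len(x), len(y)
C = np.abs(x[:, None] - y[None, :]).ravel()  # cost (e.g. an option payoff)

A_eq, b_eq = [], []
for i in range(m):                            # first marginal constraints
    r = np.zeros(m * n); r[i * n:(i + 1) * n] = 1.0
    A_eq.append(r); b_eq.append(mu[i])
for j in range(n):                            # second marginal constraints
    c = np.zeros(m * n); c[j::n] = 1.0
    A_eq.append(c); b_eq.append(nu[j])
for i in range(m):                            # martingale constraint at x_i
    r = np.zeros(m * n); r[i * n:(i + 1) * n] = y - x[i]
    A_eq.append(r); b_eq.append(0.0)

res = linprog(C, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
P = res.x.reshape(m, n)                       # optimal martingale coupling
```

Without the last two constraint rows this is a plain Kantorovich LP; the martingale rows are exactly the “conditional” constraints discussed above.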

*Our expertise:* Our team has a deep expertise on the topic of OT and its generalizations, including many existing collaborations between its members; see for instance , , for some representative recent collaborative publications.

*Goals:* This is a non-trivial extension of Optimal Transportation theory, and Mokaplan will develop numerical methods (in the spirit of entropic regularization) to address it. A popular problem in statistics is the so-called quantile regression problem; recently, Carlier, Chernozhukov and Galichon used an Optimal Transportation approach to extend quantile regression to several dimensions. In this approach again, not only are fixed-marginal constraints present but also constraints on conditional means. As in the martingale Optimal Transportation problem, one has to deal with an extra conditional constraint. The duality approach usually breaks down under such constraints, and the characterization of optimal couplings is a challenging task from both a theoretical and a numerical viewpoint.

(*Participants:* G. Carlier, J-D. Benamou, M. Laborde, Q. Mérigot, V. Duval) The connection between the static and dynamic transportation problems (see Section ) opens the door to many extensions, most notably by leveraging the use of gradient flows in metric spaces. The flow with respect to the transportation distance has been introduced by Jordan-Kinderlehrer-Otto (JKO) and provides a variational formulation of many linear and non-linear diffusion equations. The prototypical example is the Fokker-Planck equation. We will explore this formalism to study new variational problems over probability spaces, and also to derive innovative numerical solvers.
The JKO scheme has been used very successfully to study evolution equations that have the structure of a gradient flow in the Wasserstein space. Indeed many important PDEs have this structure: the Fokker-Planck equation (as first considered in ), the porous medium equation and the granular media equation, to give just a few examples. It also finds applications in image processing . Figure shows examples of gradient flows.

*Our expertise:* There is an ongoing collaboration between the team members on the theoretical and numerical analysis of gradient flows.

*Goals:* We apply and extend our research on JKO numerical methods to treat various extensions:

- Wasserstein gradient flows with a non-displacement-convex energy (as in the parabolic-elliptic Keller-Segel chemotaxis model );

- systems of evolution equations which can be written as gradient flows of some energy on a product space (possibly mixing the Wasserstein and L² structures);

- perturbations of gradient flows: multi-species or kinetic models are not gradient flows, but may be viewed as perturbations of Wasserstein gradient flows; we shall therefore investigate the convergence of splitting methods for such equations or systems.

(*Participants:* G. Carlier, J-D. Benamou, G. Peyré, R. Hatchi) Congested transport theory in the discrete framework of networks has received a lot of attention since the 50's, starting with the seminal work of Wardrop. A few years later, Beckmann proved that equilibria are characterized as solutions of a convex minimization problem. However, this minimization problem involves one flow variable per path on the network, so its dimension quickly becomes too large in practice. An alternative is to consider continuous-in-space models of congested optimal transport, as was done in , which leads to very degenerate PDEs .

*Our expertise:* MOKAPLAN members have contributed a lot to the analysis of congested transport problems and to optimization problems with respect to a metric, which can be attacked numerically by fast marching methods .

*Goals:* The case of general networks/anisotropies is still not well understood.

(*Participants:* F-X. Vialard, J-D. Benamou, G. Peyré, L. Chizat) A major issue with the standard dynamical formulation of OT is that it does not allow for variation of mass during the evolution, which is required when tackling medical imaging applications such as tumor growth modeling or the tracking of elastic organ movements . Previous attempts , to introduce a source term in the evolution typically lead to mass teleportation (propagation of mass at infinite speed), which is not always satisfactory.

*Our expertise:* Our team has already established key contributions both to connect OT to fluid dynamics and to define geodesic metrics on the space of shapes and diffeomorphisms .

*Goals:* Lenaic Chizat's PhD thesis aims at bridging the gap between dynamical OT formulations and LDDMM diffeomorphic models (see Section ). This will lead to biologically plausible evolution models that are both more tractable numerically than LDDMM competitors and benefit from the strong theoretical guarantees associated with properties of OT.

(*Participants:* G. Carlier, J-D. Benamou) The Optimal Transportation Computational Fluid Dynamics (CFD) formulation is a limit case of variational Mean-Field Games (MFGs), a new branch of game theory recently developed by J-M. Lasry and P-L. Lions with an extremely wide range of potential applications . Non-smooth proximal optimization methods used successfully for Optimal Transportation can also be used in the case of deterministic MFGs with singular data and/or potentials . They provide a robust treatment of the positivity constraint on the density of players.

*Our expertise:* J-D. Benamou pioneered with Brenier the CFD approach to Optimal Transportation. Regarding MFGs, on the numerical side our team has already worked on the use of augmented Lagrangian methods , and on the analytical side it has rigorously explored the optimality system for a singular CFD problem similar to the MFG system.

*Goals:* We will work on the extension to stochastic MFGs, which leads to non-trivial numerical difficulties already pointed out in .

(*Participants:* G. Carlier, J-D. Benamou, Q. Mérigot, F. Santambrogio (U. Paris-Sud), Y. Achdou (Univ. Paris 7), R. Andreev (Univ. Paris 7))
Many models from PDEs and fluid mechanics have been used to give a description of *people or vehicles moving in a congested environment*.
These models have to be classified according to the dimension (1-D models are mostly used for cars on traffic networks, while 2-D models are more suitable for pedestrians), to the congestion effects (“soft” congestion standing for the phenomenon where high densities slow down the movement, “hard” congestion for the sudden effects when contacts occur or a certain threshold is attained), and to the possible rationality of the agents.
Maury et al. recently developed a theory for 2-D hard congestion models without rationality, first in a discrete and then in a continuous framework. This model produces a PDE that is difficult to attack with usual PDE methods, but it has been successfully studied via Optimal Transportation techniques, again related to the JKO gradient-flow paradigm. Another possibility to model crowd motion is to use the mean field game approach of Lions and Lasry, which considers limits of Nash equilibria when the number of players is large. This also gives macroscopic models where congestion may appear, but this time a global equilibrium strategy is modelled rather than the local optimisation by players of the JKO approach. Numerical methods are starting to be available, see for instance , .

*Our expertise:* We have developed numerical methods to tackle both the JKO approach and the MFG approach. The augmented Lagrangian (proximal) numerical method can actually be applied to both models, JKO and deterministic MFGs .

*Goals:* We want to extend our numerical approach to more realistic congestion models where the speed of agents depends on the density; see Figure for preliminary results. Comparisons with different numerical approaches will also be performed inside the ANR ISOTACE project. The extension of the augmented Lagrangian approach to stochastic MFGs will be studied.

(*Participants:* F-X. Vialard, G. Peyré, B. Schmitzer, L. Chizat) Diffeomorphic image registration is widely used in medical image analysis. This class of problems can be seen as the computation of a generalized optimal transport, where the optimal path is a geodesic on a group of diffeomorphisms. The major difference between the two approaches is that optimal transport leads to non-smooth optimal maps in general, whereas smoothness is compulsory in diffeomorphic image matching. In contrast, optimal transport enjoys a convex variational formulation, whereas in LDDMM the minimization problem is non-convex.

*Our expertise:* F-X. Vialard is an expert in diffeomorphic image matching (LDDMM) , , . Our team has already studied flows and geodesics over non-Riemannian shape spaces, which allow for piecewise smooth deformations .

*Goals:* Our aim is to bridge the gap between standard
optimal transport and diffeomorphic methods by building new diffeomorphic matching variational formulations that are
convex (geometric obstructions might however appear). A related
perspective is the development of new registration/transport models in
a Lagrangian framework, in the spirit of , to
obtain more meaningful statistics on longitudinal studies.

Diffeomorphic matching consists in the minimization of a functional that is the sum of a deformation cost and a similarity measure. The choice of the similarity measure is as important as that of the deformation cost. It is often chosen as a norm on a Hilbert space such as a space of functions, currents or varifolds. From a Bayesian perspective, these similarity measures are related to the noise model on the observed data, which is of geometric nature and is not taken into account when using Hilbert norms. Optimal transport fidelities have been used in the context of signal and image denoising , and it is an important question to extend these approaches to registration problems. Therefore, we propose to develop similarity measures that are geometric and computationally very efficient, using entropic regularization of optimal transport.

Our approach is to use regularized optimal transport to design new similarity measures on all of these Hilbert spaces. Understanding the precise connections between the evolution of shapes and of probability distributions will be investigated to cross-fertilize both fields, by developing novel transportation metrics and diffeomorphic shape flows.

The corresponding numerical schemes are however computationally very costly. Leveraging our understanding of the dynamic optimal transport problem and its numerical resolution, we propose to develop new algorithms. These algorithms will exploit the smoothness of the Riemannian metric to improve both accuracy and speed, using for instance higher-order minimization algorithms on (infinite-dimensional) manifolds.

(*Participants:* F-X. Vialard, G. Peyré, B. Schmitzer, L. Chizat) The LDDMM framework has been advocated to enable statistics on the space of shapes or images that benefit from the estimation of the deformation. These statistical results strongly depend on the choice of the Riemannian metric. A possible direction consists in learning the right invariant Riemannian metric as done in , where a correlation matrix (Figure ) is learnt, representing the covariance matrix of the deformation fields for a given population of shapes.
In the same direction, a question of emerging interest in medical imaging is the analysis of time sequences of shapes (called longitudinal analysis) for early diagnosis of disease, for instance . A key question is the inter-subject comparison of the organ evolution, which is usually done by transporting the time evolution to a common coordinate system via parallel transport or other more basic methods. Once again, the statistical results (Figure ) strongly depend on the choice of the metric or, more generally, on the connection that defines parallel transport.

*Our expertise:* Our team has already studied statistics on longitudinal evolutions in
, .

*Goals:* Developing higher-order numerical schemes for parallel transport (only low-order schemes are available at the moment) and developing variational models to learn the metric or the connection in order to improve statistical results.

(*Participants:* G. Peyré, V. Duval, C. Poon, Q. Denoyelle) As detailed in Section , popular methods for regularizing inverse problems in imaging make use of variational analysis over infinite-dimensional (typically non-reflexive) Banach spaces, such as Radon measures or bounded variation functions.

*Our expertise:* We have recently shown in how, in the finite-dimensional case, the non-smoothness of the functionals at stake is crucial to enforce the emergence of geometrical structures (edges in images or fractures in physical materials ) for discrete (finite-dimensional) problems. We extended this result to a simple infinite-dimensional setting, namely sparse regularization of Radon measures for deconvolution .
A deep understanding of those continuous inverse problems is crucial to analyze the behavior of their discrete counterparts, and in we have taken advantage of this understanding to develop a fine analysis of the artifacts induced by discrete (*i.e.* which involve grids) deconvolution models.
These works are also closely related to the problem of limit analysis and yield design in mechanical plasticity, see , for an existing collaboration between Mokaplan's team members.

*Goals:* A current major front of research in the mathematical analysis of inverse problems is to extend these results for more complicated infinite dimensional signal and image models, such as for instance the set of piecewise regular functions. The key bottleneck is that, contrary to sparse measures (which are finite sums of Dirac masses), here the objects to recover (smooth edge curves) are not parameterized by a finite number of degrees of freedom.
The relevant previous works in this direction are the fundamental results of Chambolle, Caselles and co-workers , , . They however only deal with the specific case where there is no degradation operator and no noise in the observations. We believe that adapting these approaches using our construction of vanishing-derivative pre-certificates could lead to a solution of these theoretical questions.

(*Participants:* G. Peyré, J-M. Mirebeau, D. Prandi) Modeling and processing natural images requires taking into account their geometry through anisotropic diffusion operators, in order to denoise and enhance directional features such as edges and textures , . This requirement is also at the heart of recently proposed models of cortical processing . A mathematical model for this processing is diffusion on a sub-Riemannian manifold. These methods assume a fixed, usually linear, mapping from the 2-D image to a lifted function defined on the product of space and orientation (which in turn is equipped with a sub-Riemannian manifold structure).

*Our expertise:* J-M. Mirebeau is an expert in the discretization of highly anisotropic diffusions through the use of locally adaptive computational stencils , . G. Peyré has made several contributions to the definition of geometric wavelet transforms and directional texture models, see for instance . Dario Prandi has recently applied methods from sub-Riemannian geometry to image restoration .

*Goals:* A first aspect of this work is to study non-linear, data-adaptive, lifting from the image to the space/orientation domain. This mapping will be implicitly defined as the solution of a convex variational problem. This will open both theoretical questions (existence of a solution and its geometrical properties, when the image to recover is piecewise regular) and numerical ones (how to provide a faithful discretization and fast second order Newton-like solvers). A second aspect of this task is to study the implication of these models for biological vision, in a collaboration with the UNIC Laboratory (directed by Yves Fregnac), located in Gif-sur-Yvette. In particular, the study of the geometry of singular vectors (or “ground states” using the terminology of ) of the non-linear sub-Riemannian diffusion operators is highly relevant from a biological modeling point of view.

(*Participants:* G. Peyré, V. Duval, C. Poon) Scanner data acquisition is mathematically modeled as a (sub-sampled) Radon transform . It is a difficult inverse problem because the Radon transform is ill-posed and the set of observations is often aggressively sub-sampled and noisy . Typical approaches try to recover piecewise smooth solutions in order to locate precisely the position of the organ being imaged. There is however a very poor understanding of the actual performance of these methods, and little is known on how to enhance the recovery.
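For intuition, here is a toy discretization of the (fully sampled, noiseless) Radon operator, using `scipy.ndimage.rotate` for convenience on an invented test image; real scanner models add the aggressive sub-sampling and noise on top of this forward operator:

```python
# Toy sketch of a (fully sampled, noiseless) Radon transform: for each
# angle, rotate the image and integrate along one axis. Real scanner
# models add aggressive sub-sampling and noise on top of this operator.
import numpy as np
from scipy.ndimage import rotate

n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = np.exp(-(((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 40.0))  # centered blob
angles = [0.0, 30.0, 60.0, 90.0]
sinogram = [rotate(img, a, reshape=True, order=1).sum(axis=0) for a in angles]
# each projection carries (approximately) the same total mass, a basic
# consistency check on the discretization
```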

*Our expertise:* We have obtained a good understanding of the performance of inverse problem regularization on *compact* domains for pointwise sources localization .

*Goals:* We aim at extending the theoretical performance analysis obtained for sparse measures to the set of piecewise regular 2-D and 3-D functions.
Some interesting previous works of C. Poon et al. (C. Poon is currently a postdoc in Mokaplan) have tackled related questions in the field of variable Fourier sampling for compressed sensing applications (a toy model for fMRI imaging). These approaches are however not directly applicable to Radon sampling, and require some non-trivial adaptations.
We also aim at better exploring the connection of these methods with optimal-transport based fidelity terms such as those introduced in .

(*Participants:* G. Peyré, F-X. Vialard, J-D. Benamou, L. Chizat)
Some applications in medical image analysis require to track shapes whose evolution is governed by a growth process. A typical example is tumor growth, where the evolution depends on some typically unknown but meaningful parameters that need to be estimated. There exist well-established mathematical models , of non-linear diffusions that take into account recently observed biological properties of tumors. Some related optimal transport models with mass variations have also recently been proposed , which are connected to so-called metamorphosis models in the LDDMM framework .

*Our expertise:* Our team has a strong experience on both dynamical optimal transport models and diffeomorphic matching methods (see Section ).

*Goals:* The close connection between tumor growth models , and gradient flows for (possibly non-Euclidean) Wasserstein metrics (see Section ) makes the application of the numerical methods we develop particularly appealing to tackle large scale forward tumor evolution simulation.
A significant departure from the classical OT-based convex models is however required.
The final problem we wish to solve is the backward (inverse) problem of estimating tumor parameters from noisy and partial observations.
This also requires setting up a meaningful and robust data fidelity term, which can be for instance a generalized optimal transport metric.

The above continuous models require a careful discretization, so that the fundamental properties of the models are transferred to the discrete setting. Our team aims at developing innovative discretization schemes as well as associated fast numerical solvers, that can deal with the geometric complexity of the variational problems studied in the applications. This will ensure that the discrete solution is correct and converges to the solution of the continuous model within a guaranteed precision. We give below examples for which a careful mathematical analysis of the continuous to discrete model is essential, and where dedicated non-smooth optimization solvers are required.

(*Participants:* J-D. Benamou, G. Carlier, J-M. Mirebeau, Q. Mérigot) Optimal transportation models as well as continuous models in economics can be formulated as infinite dimensional convex variational problems with the constraint that the solution belongs to the cone of convex functions. Discretizing this constraint is however a tricky problem, and usual finite element discretizations fail to converge.

*Our expertise:* Our team is currently investigating new discretizations, see in particular the recent proposals for the Monge-Ampère equation and for general non-linear variational problems. Both offer convergence guarantees and are amenable to fast numerical resolution techniques such as Newton solvers.
Having explained how to treat Transport Boundary Conditions for Monge-Ampère efficiently and in full generality , we now have a promising new and fast approach to compute Optimal Transportation viscosity solutions. A monotone scheme is needed: one is based on the work of Froese and Oberman , and a different, more accurate approach has been proposed by Mirebeau, Benamou and Collino .
As shown in , discretizing the constraint that a continuous function be convex is not trivial.
Our group has largely contributed to solving this problem, with contributions by G. Carlier , Q. Mérigot and J-M. Mirebeau . This problem is connected to the construction of monotone schemes for the Monge-Ampère equation.
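For reference, for the quadratic cost the optimal transport problem between densities f on X and g on Y reduces to the following Monge-Ampère equation with the Transport (second) Boundary Condition (standard formulation, in our notation):

```latex
\det D^2 u(x) \;=\; \frac{f(x)}{g(\nabla u(x))} \quad \text{in } X,
\qquad \nabla u(X) \subset Y, \qquad u \text{ convex},
```

and the optimal map is T = ∇u by Brenier's theorem; monotone schemes are precisely what guarantees convergence to the viscosity solution of this fully non-linear PDE.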

*Goals:* The currently available methods are 2-D. They need to be optimized and parallelized. A non-trivial extension to 3-D is necessary for many applications. The notion of

(*Participants:* J-D. Benamou, G. Carlier, J-M. Mirebeau, G. Peyré, Q. Mérigot) As detailed in Section , gradient flows for the Wasserstein metric (aka JKO gradient flows ) provide a variational formulation of many non-linear diffusion equations. They also open the way to novel discretization schemes.
From a computational point of view, although the JKO scheme is constructive (it is based on the implicit Euler scheme), it has seldom been used in numerical practice because the Wasserstein term is difficult to handle (except in dimension one).
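For concreteness, one step of the JKO scheme with time step τ > 0 for an energy functional E reads (standard notation, assumed here):

```latex
\rho_{k+1} \;\in\; \operatorname*{argmin}_{\rho} \; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) \;+\; E(\rho),
```

which is exactly the implicit Euler step mentioned above; the Wasserstein term W_2^2 is the computationally difficult one.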

*Our expertise:*

Solving one step of a JKO gradient flow is similar to solving an Optimal Transport problem. An approach based on a geometric discretization of the Monge-Ampère operator has been proposed by Mérigot, Carlier, Oudet and Benamou in , see Figure . The Gamma-convergence of the discretization (in space) has been proved.

*Goals:* We are also investigating the application of other numerical approaches to Optimal Transport to JKO gradient flows either
based on the CFD formulation or on the entropic regularization of the Monge-Kantorovich problem (see section 3.2.3).
An in-depth study and comparison of all these methods will be necessary.

(*Participants:* V. Duval, G. Peyré, G. Carlier, Jalal Fadili (ENSICaen), Jérôme Malick (CNRS, Univ. Grenoble)) While pervasive in the numerical analysis community, the problem of discretization and

*Our expertise:* We have provided the first results on the discrete-to-continuous convergence in both sparse regularization variational problems , and the static formulation of OT and Wasserstein barycenters .

*Goals:* In a collaboration with Jérôme Malick (Inria Grenoble), our first goal is to generalize the result of to generic partly-smooth convex regularizers routinely used in imaging science and machine learning, a prototypical example being the nuclear norm (see for a review of this class of functionals).
Our second goal is to extend the results of to the novel class of entropic discretization schemes we have proposed , to lay out the theoretical foundation of these ground-breaking numerical schemes.

(*Participants:* G. Peyré, V. Duval, C. Poon) There has been a recent surge of interest in the imaging community for so-called “grid free” methods, where one tries to directly tackle the infinite dimensional recovery problem over the space of measures, see for instance , .
The general idea is that if the range of the imaging operator is finite dimensional, the associated dual optimization problem is also finite dimensional (for deconvolution, it corresponds to optimization over the set of trigonometric polynomials).
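To illustrate, for the deconvolution of a measure m from observations y = Φm + noise with a low-pass operator Φ (generic notation, not that of the cited works), the dual of the ℓ¹-type recovery problem is finite dimensional:

```latex
\max_{p \in \mathbb{R}^n} \;\; \langle y, p \rangle - \tfrac{1}{2}\|p\|^2
\quad \text{s.t.} \quad \|\Phi^* p\|_\infty \le \lambda,
```

where Φ*p is a trigonometric polynomial, so the constraint is exactly a uniform bound on a polynomial.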

*Our expertise:* We have provided in a sharp analysis of the support recovery property of this class of methods for the case of sparse spikes deconvolution.

*Goals:* A key bottleneck of these approaches is that, while being finite dimensional, the dual problem requires handling a polynomial positivity constraint, which is notoriously difficult to manipulate (except in the very particular case of 1-D problems, which is the one exposed in ). A possible, but very costly, methodology is to resort to Lasserre's SDP representation hierarchy . We will make use of these approaches and study how restricting the level of the hierarchy (to obtain fast algorithms) impacts the recovery performance (since this corresponds to only computing approximate solutions). We will pay particular attention to the recovery of 2-D piecewise constant functions (the so-called total variation regularization of functions ), see Figure for some illustrative applications of this method.

(*Participants:* G. Peyré, J-D. Benamou, G. Carlier, Jalal Fadili (ENSICaen)) Both sparse regularization problems in imaging (see Section ) and dynamical optimal transport (see Section ) are instances of large scale, highly structured, non-smooth convex optimization problems.
First order proximal splitting optimization algorithms have recently gained a lot of interest for these applications because they are the only ones capable of scaling to giga-pixel discretizations of images and volumes while at the same time handling non-smooth objective functions. They have been successfully applied to optimal transport , , congested optimal transport and to sparse regularizations (see for instance and the references therein).
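To make the sparse-regularization side concrete, here is a minimal proximal gradient (ISTA) sketch for the finite-dimensional LASSO; it only illustrates the splitting principle, is not the team's solvers, and `A`, `y`, `lam` are placeholder names:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Proximal gradient descent for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # explicit step on the smooth quadratic part
        x = soft_threshold(x - grad / L, lam / L)  # prox step on the non-smooth l1 part
    return x
```

Each iteration alternates an explicit gradient step with a closed-form proximal step, which is why such schemes scale to very large problems while handling non-smooth objectives.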

*Our expertise:* The pioneering work of our team has shown how these proximal solvers can be used to tackle the dynamical optimal transport problem , see also . We have also recently developed new proximal schemes that can cope with non-smooth composite objective functions .
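Recall the dynamical (Benamou-Brenier) formulation that these proximal solvers address, in its standard form:

```latex
W_2^2(\rho_0, \rho_1) \;=\; \inf_{(\rho, v)} \int_0^1 \!\!\int_\Omega |v_t(x)|^2 \, \mathrm{d}\rho_t(x)\, \mathrm{d}t
\quad \text{s.t.} \quad \partial_t \rho_t + \operatorname{div}(\rho_t v_t) = 0,
\;\; \rho_{t=0} = \rho_0, \;\; \rho_{t=1} = \rho_1;
```

after the change of variables m = ρv, the integrand |m|²/ρ becomes jointly convex in (ρ, m), which is what makes proximal splitting applicable.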

*Goals:* We aim at extending these solvers to a wider class of variational problems, most notably optimization under divergence constraints . Another subject we are investigating is the extension of these solvers to both non-smooth and non-convex objective functionals, which are mandatory to handle more general transportation problems and novel imaging regularization penalties.

(*Participants:* G. Peyré, G. Carlier, L. Nenna, J-D. Benamou, Marco Cuturi (Kyoto Univ.)) The entropic regularization of the Kantorovich linear program for OT has been shown to be surprisingly simple and efficient, in particular for applications in machine learning . As shown in , this is a special instance of the general method of Bregman iterations, which is also a particular instance of first order proximal schemes for the Kullback-Leibler divergence.
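A minimal sketch of the resulting scaling (Sinkhorn) iterations, for discrete marginals `a`, `b` and a cost matrix `C`; this is an illustrative implementation, not the team's code:

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=1000):
    """Entropic-regularized OT between histograms a and b with cost matrix C.

    Alternately rescales the Gibbs kernel so that the transport plan
    matches the two prescribed marginals (Bregman/KL projections).
    """
    K = np.exp(-C / eps)                 # Gibbs kernel of the regularized problem
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # project onto the second-marginal constraint
        u = a / (K @ v)                  # project onto the first-marginal constraint
    return u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)
```

Each iteration costs only two matrix-vector products (or convolutions on geometric domains), which explains the efficiency mentioned above.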

*Our expertise:* We have recently shown how Bregman projections and Dykstra's algorithm offer a generic optimization framework to solve a variety of generalized OT problems. Carlier and Dupuis have designed a new method based on alternating Dykstra projections and applied it to the *principal-agent problem* in microeconomics.
We have applied this method in computer graphics in a paper accepted in SIGGRAPH 2015 . Figure shows the potential of our approach to handle giga-voxel datasets: the input volumetric densities are discretized on a

*Goals:* Following some recent works (see in particular ) we first aim at studying primal-dual optimization schemes according to Bregman divergences (that would go much beyond gradient descent and iterative projections), in order to offer a versatile and very effective framework to solve variational problems involving OT terms.
We then also aim at extending the scope of usage of this method to applications in quantum mechanics (Density Functional Theory, see ) and fluid dynamics (Brenier's weak solutions of the incompressible Euler equation, see ). The computational challenge is that realistic physical examples are of a huge size not only because of the space discretization of one marginal but also because of the large number of marginals involved (for incompressible Euler the number of marginals equals the number of time steps).

Following the pioneering work of Caffarelli and Oliker , Wang has shown that the inverse problem of designing a free-form *convex* reflector which sends a prescribed source to a target intensity is a particular instance of Optimal Transportation. This is a promising approach to automatize the industrial design of optimized, energy-efficient reflectors (car/public lights for instance).
We show in Figure the experimental setting and one of the first numerical simulations produced by the ADT Mokabajour.

The method developed in has been used by researchers of TU Eindhoven in collaboration with Philips Lighting labs to compute reflectors in a simplified setting (directional light source). Another approach, based on a geometric discretization of Optimal Transportation, has been developed in , and is able to handle more realistic conditions (punctual light source).

Solving the exact Optimal Transportation model for the Reflector inverse problem involves a generalized Monge-Ampère problem and is linked to the open problem of c-convexity compatible discretization we plan to work on. The corresponding software development is the topic of the starting ADT Mokabajour.

See Section 4.3 below for software. These methods will clearly become mainstream in reflector design but also in lens design . The industrial problems mainly concern efficiency (light pollution) and security (car headlights) based on free tailoring of the illumination. The figure below is an extreme test case where we exactly reproduce an image. These methods may represent one of the first incursions of PDE-discretization-based methods into the field of non-imaging optics.

The analysis of large scale datasets to perform un-supervised (clustering) and supervised (classification, regression) learning requires the design of advanced models to capture the geometry of the input data. We believe that optimal transport is a key tool to address this problem because (i) many of these datasets are composed of histograms (social network activity, image signatures, etc.) and (ii) optimal transport makes use of a ground metric that enhances the performance of classical learning algorithms, as illustrated for instance in .

Some of the theoretical and numerical tools developed by our team, most notably Wasserstein barycenters , , are now becoming mainstream in machine learning , . In its simplest (convex) form, where one seeks only to maximize pairwise Wasserstein distances, metric learning corresponds to the congestion problem studied by G. Carlier and collaborators , , and we will elaborate on this connection to perform both theoretical analysis and develop numerical schemes (see for instance our previous work ).

We aim at developing novel variational estimators extending classification and regression energies (SVM, logistic regression ) and kernel methods (see ). One of the key bottlenecks is to design numerical schemes to learn an optimal metric for this purpose, extending the method of Marco Cuturi to large scale and more general estimators. Our main targeted application is natural language processing. The analysis and processing of large corpora of texts is becoming a key problem at the interface between linguistics and machine learning . Extending classical machine learning methods to this field requires designing suitable metrics over both words and bags-of-words (i.e. histograms). Optimal transport is thus a natural candidate to bring innovative solutions to these problems. In a collaboration with Marco Cuturi (Kyoto University), we aim at unleashing the power of transportation distances by performing ground distance learning on large databases of text. This requires lifting previous works on distances on words (see in particular ) to distances on bags-of-words using transport and metric learning.

The Brenier interpretation of the generalized solutions of Euler equations in the sense of Arnold is an instance of multi-marginal optimal transportation, a recent and expanding research field which also appears in DFT (see chemistry below). Recent numerical developments in OT provide new means of exploring this class of solutions.

In the 2000s, following the pioneering works of Otto, the theory of *many-particle systems* became “geometrized”
thanks to the observed intimate relation between the geometric theory of geodesic convexity in the Wasserstein distance
and the proof of entropy dissipation inequalities that determine the trend to equilibrium.
The OT approach to the study of equilibration is still an extremely active field,
in particular through the various recently established connections to sharp functional inequalities and isoperimetric problems.

A third specific topic is the use of optimal transport models
in *non-imaging optics*. Light intensity here plays the role of the source/target prescribed mass
and the transport map defines the physical shape of a specular reflector or refracting lens achieving such a transformation.
These models have been around since the works of Oliker and Wang in the 90's. Recent numerical progress indicates that OT may have an important industrial impact in the design of
optical elements and calls for further modeling and analysis.

The treatment of *chemical reactions* in the framework of OT is a rather recent development.
The classical theory must be extended to deal with
the transfer of mass between different particle species by means of chemical reactions.
That extension is still far from complete at the moment,
but there is a lot of progress currently, some of which we try to capture in the workshop.

A promising and significant recent advance is the introduction and analysis of a novel metric
that combines the pure transport elements of the Wasserstein distance
with the annihilation and creation of mass, which is a first approximation of chemical reactions.
The logical next challenge is the extension of OT concepts to vectorial quantities,
which would allow one to rewrite cross-diffusion systems for the concentrations of several chemical species as gradient flows in the associated metric.
An example of application is the modeling of a *chemical vapor deposition process*,
used for the manufacturing of thin-film solar cells for instance.
This leads to degenerate cross-diffusion equations whose analysis, without the use of OT theory, is delicate.
Finding an appropriate OT framework to give the formal gradient flow structure a rigorous meaning
would be a significant advance for the applicability of the theory, also in other contexts, like for biological multi-species diffusion.

A very different application of OT in chemistry is a novel approach to the understanding of *density functional theory* (DFT)
by using optimal transport with “Coulomb costs”, which are highly non-convex and singular.
Although this theory shares some properties with usual optimal transportation problems,
it does not induce a metric between probability measures.
It also uses the multi-marginal extension of OT, which is an active field in its own right.

OT methods have been introduced in biology via gradient flows in the Wasserstein metric.
Writing certain *chemotaxis* systems in variational form
made it possible to prove sharp estimates on the long-time asymptotics of bacterial aggregation.
This application had a surprising payback on the theory:
it led to a better understanding and novel proofs of important functional inequalities,
like the logarithmic Hardy-Littlewood-Sobolev inequality.
Further applications followed, like transport models for species that avoid over-crowding,
or cross-diffusion equations for the description of *biological segregation*.
The inclusion of dissipative cross-diffusion systems into the framework of gradient flows in OT-like metrics
appears to be one of the main challenges for the future development of the theory.
This extension is not only relevant for biological applications,
but is clearly of interest to participants with primary interest in physics or chemistry as well.

Further applications include the connection of OT with game theory,
following the idea that many selection processes are based on competition.
The ansatz is quite universal and has been used in other areas of the *life sciences* as well,
like for the modeling of personal income in economics.
If time permits, some of those “exotic” applications will be discussed in the workshop as well.

Applications of variational methods are widespread in medical imaging, especially for diffeomorphic image matching. The formulation of large deformations by diffeomorphisms consists in finding geodesics on a group of diffeomorphisms. This can be seen as a non-convex and smoothed version of optimal transport where a correspondence is sought between objects that can be more general than densities. Whereas the diffeomorphic approach is well established, similarity measures between objects of interest are needed in order to drive the optimization. While being crucial for the final registration results, these similarity measures are often non-geometric, due to the need for fast computability and gradient computation. However, our team pioneered the use of entropic smoothing for optimal transport, which gives fast and differentiable similarity measures that take the geometry into account. We therefore expect an important impact on this topic; work is still in progress. This example of application belongs to the larger class of inverse problems where a geometric similarity measure such as optimal transport might notably enhance the results. Concerning this particular application, potential interactions with the Inria teams ARAMIS and ASCLEPIOS can leverage the newly proposed similarity measures towards a more applicative impact.

Recent years have seen intense cross-fertilization between OT and various problems arising in economics. The principal-agent problem with adverse selection is particularly important in modern microeconomics, mathematically it consists in minimizing a certain integral cost functional among the set of

*Fast entropic methods for optimal transport problems:* In a series of papers
, MOKAPLAN's team members derived a new class of algorithms to obtain efficient approximations of the solution to various problems related to OT (including barycenters, Euler equations, unbalanced problems, gradient flows). This method makes use of entropic regularization and first order optimization methods for the Kullback-Leibler divergence. See Section for details about the software output.

*Relaxing the mass conservation constraint:* Our team derived a new theoretical and numerical framework to deal with “unbalanced” optimal transport problems , . This contribution is a breakthrough that will open the door to applications in image processing and machine learning. See Section for more details.

Functional Description

ALG2 for Monge Mean-Field Games, Monge problem and Variational problems under divergence constraint. A generalisation of the ALG2 algorithm has been implemented in FreeFem++.

Contact: Jean-David Benamou

URL: https://

Functional Description

We design a software resolving the following inverse problem: define the shape of a mirror which reflects the light from a source to a defined target, the distribution and support of the densities being prescribed. Classical applications include the design of solar ovens, public lighting, car headlights... The mathematical modeling of this problem, related to optimal transport theory, takes the form of a nonlinear Monge-Ampère type PDE. The numerical resolution of these models remained until recently a largely open problem. The MOKABAJOUR project aims to develop, using algorithms invented at Inria and LJK, a reflector design software more efficient than the geometrical methods used so far.

Participants: Jean-David Benamou, Vincent Duval, Simon Legrand, Quentin Mérigot and Boris Thibert

Contact: Jean-David Benamou

Functional Description

We design a software to compute fast approximations of optimal transport (and related problems such as barycenters) on geometric domains (either regular Euclidean grids or triangulated meshes). This numerical scheme relies on two key ideas: entropic regularization of the initial linear problem and fast approximate convolution on geometric domains. This algorithm is both extremely fast and highly parallelizable, being able to take advantage of GPU computational architectures.

Participants: Gabriel Peyré, Jean-David Benamou, Guillaume Carlier, Marco Cuturi (Kyoto), Justin Solomon.

Contact: Gabriel Peyré

URL: https://

Functional Description

Several codes developed by the team are available in an online Jupyter Notebook (Julia and Python). In particular, the Semi-Discrete Principal Agent code and a new Monge-Ampère second boundary value problem Finite Difference code.

Participants: Simon Legrand, Xavier Dupuis, Vincent Duval, Jean-David Benamou.

Contact: Simon Legrand

*J-D. Benamou, G. Carlier, M. Laborde, G. Peyré, B. Schmitzer, V. Duval*

Taking advantage of the Benamou-Brenier dynamic formulation of optimal transport, we propose in a convex formulation for each step of the JKO scheme for Wasserstein gradient flows, which can be attacked by an augmented Lagrangian method; we call this the ALG2-JKO scheme. We test the algorithm in particular on the porous medium equation. We also consider a semi-implicit variant which enables us to treat nonlocal interactions as well as systems of interacting species. Regarding systems, we can also use the ALG2-JKO scheme for the simulation of crowd motion models with several species.

We have also investigated the entropic regularization of the Wasserstein metric to compute gradient flows , . This entropic regularization trades the usual Wasserstein fidelity term for a Kullback-Leibler divergence term. Adapting first-order proximal methods to this framework, we have developed numerical schemes which dramatically reduce the computational load needed to simulate the evolution of a mass density through a JKO flow. By construction, the entropic regularization yields an additional diffusion effect in the evolution, but we have proved that a careful choice of the regularization parameter with respect to the timestep yields the convergence of the scheme towards the solutions of the continuous PDE.

A novel Lagrangian method using a discretization of the Monge-Ampère operator for JKO has been developed in . Not only has the convergence of the scheme been established, but the method also makes it possible to use Newton's method .

*J-D. Benamou, Luca Nenna, G. Carlier*

*G. Peyré, V. Duval, Q. Denoyelle, C. Poon*

In , we have analyzed the recovery performance of two popular finite dimensional approximations of the sparse spikes deconvolution problem over Radon measures, namely the LASSO and the Continuous Basis-Pursuit. The LASSO is the de-facto standard for the sparse regularization of inverse problems in imaging. It performs a nearest neighbor interpolation of the spikes locations on the sampling grid. The C-BP method, introduced by Ekanadham, Tranchina and Simoncelli, uses a linear interpolation of the locations to perform a better approximation of the infinite-dimensional optimization problem, for positive measures. We have proved that, in the small noise regime, both methods estimate twice the number of original spikes, and we have provided an explicit formula which allows one to predict the locations and amplitudes of the spurious spikes. All those properties are in fact connected to an intrinsic property of the signal: the source condition , .
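Both methods discretize the recovery problem over Radon measures, which in a common formulation (our notation) reads:

```latex
\min_{m \in \mathcal{M}(\mathbb{T})} \; \frac{1}{2}\, \|\Phi m - y\|^2 \;+\; \lambda\, |m|(\mathbb{T}),
```

where |m|(𝕋) denotes the total variation (mass) of the measure, the natural extension of the ℓ¹ norm which promotes sparse, spiky solutions.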

Those effects are typically due to the use of a discrete grid in the reconstruction process. Several authors have recently proposed algorithms to tackle the problem directly in a continuous setting , . As we have shown in , the method fails when the distance between spikes with opposite signs is below a certain threshold. However, when all the spikes have the same sign, the LASSO on a continuous domain works for arbitrarily close spikes, at the price of an increased sensitivity to noise. In , we have given a detailed analysis of the noise sensitivity of the method: if

Minimal geodesics along volume preserving maps, through semi-discrete optimal transport

Q. Mérigot and J.-M. Mirebeau introduced a numerical method for extracting minimal geodesics
along the group of volume preserving maps, equipped with the

*J-D. Benamou, Xavier Dupuis, G. Carlier*
An alternating projection numerical scheme for the more general

A semi-discrete approach to the PA problem is investigated. The range of products is discrete and leads to a non-convex
problem. Non-linear optimization methods are tested. See https://

*G. Carlier, F-X. Vialard, B. Schmitzer, L. Chizat*
Classical optimal transport theory and algorithms assume that the input measures are normalized, i.e. that their total mass is 1. This is an important limitation for many problems in imaging sciences and machine learning, where input data are typically not normalized, and where one should enable local creation or destruction of mass. Handling such “unbalanced” transportation problems is also relevant for applications in biological modeling, for instance to take into account cellular growth through optimal transport gradient flows.

Recently, several researchers of MOKAPLAN made important progress on this problem, by deriving a general framework extending optimal transport to this “unbalanced” setting. In we derived a dynamic optimal transport formulation that enables a source term in the initial formulation of Benamou and Brenier . We proved that it defines a distance on positive measures, enjoys many important properties (dual formulation) and can be computed using fast first order convex optimization methods. We then provided in an even larger class of “unbalanced” optimal transport optimization problems, obtained via a static formulation, and showed that one recovers the dynamic formulation in some specific cases. Similar models were derived independently and at the same time by two other international teams , , which shows the timeliness of our research. We believe these new theoretical and numerical findings will have a strong impact on the development of optimal transport methods in imaging sciences and machine learning.
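One common form of such a dynamic unbalanced model (a sketch consistent with the source-term extension described above; the trade-off parameter δ is our notation) penalizes a growth rate α entering the continuity equation:

```latex
\inf_{(\rho, v, \alpha)} \int_0^1 \!\!\int_\Omega \big( |v_t(x)|^2 + \delta^2\, \alpha_t(x)^2 \big)\, \mathrm{d}\rho_t(x)\, \mathrm{d}t
\quad \text{s.t.} \quad \partial_t \rho_t + \operatorname{div}(\rho_t v_t) = \alpha_t\, \rho_t,
```

so that mass can be created or destroyed at a cost, interpolating between pure transport and pure growth/decay.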

J-D. Benamou is the coordinator of the ANR
ISOTACE (Interacting Systems and Optimal Transportation, Applications to Computational Economics) ANR-12-MONU-0013 (2012-2016). The consortium explores new numerical methods in Optimal Transportation and Mean Field Game
theory with applications in Economics and congested crowd motion. Check
https://

V. Duval and F-X. Vialard are members of the CAVALIERI project (CAlcul des VAriations pour L'Imagerie, l'Edition et la Recherche d'Images).
This project, coordinated by V. Duval, aims at proposing new methods for comparing and reconstructing images relying on recent progress in the calculus of variations. Typical applications are co-segmentation, statistics transfer and interpolation, as well as tomographic reconstruction. A major emphasis is given on methods derived from (generalized) Optimal Transportation. See
http://

Gabriel Peyré is the principal investigator of the ERC project SIGMA-Vision (http://

Title: Numerical Optimal Transportation in (Mathematical) Economics

International Partner (Institution - Laboratory - Researcher):

McGill University (Canada) - mathematics - Oberman Adam

Start year: 2014

See also: https://

The team investigates new modeling and numerical resolution methods using the theory of Optimal Transportation.

Jun Kitagawa (University of Toronto) visited Q. Mérigot and B. Thibert from June 1st to 10th, 2015. They worked on theoretical properties of Newton's algorithm for semi-discrete optimal transport problems arising in geometric optics.

Marco Cuturi (Kyoto Univ.) visited MOKAPLAN as invited professor at Paris-Dauphine during the summer 2015 (2 months), to work on applications of optimal transport to machine learning.

Kévin Degraux, a PhD candidate at the Université Catholique de Louvain (Belgium), visited Mokaplan from November 2015 to January 2016. His work focuses on sparse signal reconstruction.

Q. Mérigot visited Jose-Antonio Carrillo at Imperial College, to start a collaboration on the discretization of Wasserstein gradient flows using Voronoi diagrams.

F.-X. Vialard was invited for one month in April at the semester on geometric mechanics and stochastic analysis at the EPFL Bernoulli Institute, to work with Darryl D. Holm and other researchers.

F.-X. Vialard was invited in January and February to the semester on Riemannian geometry in infinite dimensions in Vienna.

G. Carlier spent six months at U. Victoria visiting Prof. Martial Agueh.

Gabriel Peyré visited the laboratory of Marco Cuturi (Kyoto Univ.) as invited professor during April 2015, to work on applications of optimal transport to machine learning.

Q. Mérigot was chair of the annual SMAI-Sigma meeting in Paris (2 Nov 2015).

G. Peyré is the chair of the conference SIGMA 2016 (https://

G. Peyré is in the organizing committee of Mathematics and Image Analysis MIA'16 (https://

G. Peyré is in the conference program committees of CANUM 2016 (http://

Q. Mérigot and G. Peyré were part of the program committee of Geometric Science of Information 2015.

G. Carlier was member of the Scientific Committee of SMAI-2015.

G. Peyré is reviewer for conferences in machine learning (ICML, NIPS) and computer graphics (SIGGRAPH).

V. Duval has reviewed several contributions to the conferences GRETSI, CAMSAP, SSVM, SPARS.

Q. Mérigot has reviewed for the Symposium on Computational Geometry (SoCG) and the Symposium on Theory of Computing (STOC).

G. Carlier is a member of the editorial board of "Journal de l'École Polytechnique" and co-editor of "Mathematics and Financial Economics".

G. Peyré is associate editor for SIAM Journal on Imaging Sciences and Journal of Mathematical Imaging and Vision (Springer).

The members of the team frequently review papers for journals such as SIIMS (SIAM Journal on Imaging Sciences), JMAA (Journal of Mathematical Analysis and Applications), IPOL (Image Processing On Line), JVCI (Journal of Visual Communication and Image Representation), COCV, M2AN, Discrete and Computational Geometry, Journal of the London Mathematical Society, JOTA, JCP, Information and Inference: A Journal of the IMA, JMIV, Optimization Letters, PAMI, and SIAM Journal on Control and Optimization.

V. Duval has given invited talks at the Séminaire de Mathématique Appliquée au Traitement d'Image (Télécom ParisTech & Université Paris-Descartes), Journée Traitement d'Images du projet M2NUM du GRR LMN (INSA Rouen), and Mokameetings (Inria & Université Paris-Dauphine).

Q. Mérigot: Séminaire parisien de géométrie algorithmique, Paris (December 2015); Applied PDEs Seminar, Imperial College, London (December 2015); Convexity, Probability and Discrete Structures, Marne-la-Vallée (October 2015); Journée thématique transport optimal et applications, Bordeaux (October 2015); Mini-symposium on gradient flows, SciCADE conference, Potsdam (September 2015); Geometric Computing Group Seminar, Stanford University (February 2015).

F.-X. Vialard was invited to give talks at: the semester on Riemannian geometry in infinite dimension in Vienna, the semester on geometric mechanics and stochastic analysis at EPFL, the Math on the Rocks conference, and the Séminaire d'analyse at Université Paris 11.

The members of the team are frequently reviewing and evaluating ANR projects.

G. Peyré was on 2015 recruitment committees at Nice Univ. (Professor in analysis) and Paris-Dauphine (Maître de Conférences in statistics).

G. Carlier was in the AERES visiting committee at Université du Havre.

Q. Mérigot participated in the MCF section 26 committee at Paris 6.

F.-X. Vialard was a reviewer for a DFG-RSF grant proposal (Russian-German cooperation grant).

J-D. Benamou is a member of the Inria-Paris restaurant committee.

J-D. Benamou is an elected member of the Academic council of PSL.

G. Peyré is in the scientific advisory committee of "Fondation Sciences Mathématiques de Paris" and in the scientific advisory committee of the Ceremade laboratory, University Paris-Dauphine.

Q. Mérigot teaches "Analyse convexe approfondie" (advanced convex analysis), 50h équivalent TD, Université Paris-Dauphine.

Gabriel Peyré teaches two courses, "Sparsity and Compressed Sensing" and "Deformable Models and Geodesic Methods", in the Master 2 MVA at ENS Cachan, France, and a pre-doctoral course "Image and Surface Processing" at PSL.

PhD: Roméo Hatchi, Université Paris 9 Dauphine, December 2015, supervised by G. Carlier.

PhD: Julien André, CIFRE thesis with the company OPTIS, Grenoble-INP (co-supervised by D. Attali, B. Thibert, Q. Mérigot).

PhD in progress: Jocelyn Meyron, ED de Grenoble, supervised by Q. Mérigot, D. Attali and B. Thibert.

PhD in progress: Lénaïc Chizat, October 2014, F.-X. Vialard and G. Peyré.

PhD in progress: Aude Genevay, October 2015, J-D. Benamou and G. Peyré.

PhD in progress: Luca Nenna, October 2013, J-D. Benamou and G. Carlier.

PhD in progress : Jonathan Vacher, Machine learning approaches for neurosciences of the visual brain, October 2013, G. Peyré and Y. Fregnac.

PhD in progress : Quentin Denoyelle, *Analyse théorique et numérique de la super-résolution sans grille*, October 2014, G. Peyré and V. Duval.

Postdoc in progress: Clarice Poon, *Support recovery using total variation and other sparse priors*, September 2015, G. Peyré and V. Duval.

Postdoc in progress: Dario Prandi, sub-Riemannian models for imaging, Oct. 2015, G. Peyré and J.-M. Mirebeau.

Postdoc in progress: Bernhard Schmitzer, fast algorithms for optimal transport, Oct. 2014, G. Peyré.

Postdoc in progress: Thomas Gallouët, fluid models and optimal transport, Oct. 2015, Q. Mérigot and Yann Brenier.

Postdoc in progress: Roman Andreev, numerical methods for Mean Field Games, May 2015, Yves Achdou and J-D. Benamou.

J-D. Benamou and G. Carlier were in the Ph.D. committee of Roméo Hatchi (Paris 9, December 2015), and G. Carlier was referee for the Ph.D. of A. Meszaros (Paris Sud Orsay).

Gabriel Peyré was PhD reviewer for Yi-Qing Wang (Cachan, March 2015), Laurent Gajny (Lille, April 2015), Arthur Leclaire (Paris, June 2015), Nicolas Chauffert (Toulouse, Sept. 2015), and Matthieu Toutain (Nice, Dec. 2015).

Gabriel Peyré was habilitation reviewer for Boris Thibert (Grenoble, June 2015) and Marianne Clausel (Grenoble, Sep. 2015).

Gabriel Peyré was in the PhD committee of Solène Ozeré (Rouen, Dec. 2015).

G. Carlier gave a general audience lecture on mathematics of urban traffic at the Consulat de France in Vancouver.