Keywords
Computer Science and Digital Science
- A3.4.1. Supervised learning
- A3.4.2. Unsupervised learning
- A3.4.4. Optimization and learning
- A3.4.6. Neural networks
- A5.5. Computer graphics
- A5.5.1. Geometrical modeling
- A5.5.4. Animation
- A6.1.4. Multiscale modeling
- A6.1.5. Multiphysics modeling
- A6.2.5. Numerical Linear Algebra
- A6.2.6. Optimization
- A6.2.8. Computational geometry and meshes
- A6.5.1. Solid mechanics
- A6.5.2. Fluid mechanics
- A8.3. Geometry, Topology
- A8.7. Graph theory
- A8.12. Optimal transport
- A9.2. Machine learning
Other Research Topics and Application Domains
- B9.2.2. Cinema, Television
- B9.2.3. Video games
- B9.5.1. Computer science
- B9.5.2. Mathematics
- B9.5.3. Physics
- B9.5.5. Mechanics
- B9.5.6. Data science
1 Team members, visitors, external collaborators
Research Scientists
- Mathieu Desbrun [Team leader, INRIA, Senior Researcher]
- Pooran Memari [CNRS, Researcher]
- Steve Oudot [INRIA, Senior Researcher, HDR]
Faculty Member
- Maks Ovsjanikov [Ecole Polytechnique, Professor, HDR]
Post-Doctoral Fellows
- Wei Li [Inria, from Sep 2022]
- Lei Li [LIX, from Sep 2022]
PhD Students
- Souhaib Attaiki [LIX, from Sep 2022]
- Theo Braune [ECOLE POLY PALAISEAU, from Sep 2022]
- Lucas Brifault [Dassault Systèmes, from Sep 2022, partnership Inria/DS]
- Nicolas Donati [LIX, from Sep 2022]
- Vadim Lebovici [Ecole Normale Supérieure, from Sep 2022]
- Robin Magnet [LIX, from Sep 2022]
- Nissim Maruani [Inria, from Nov 2022, co-advising with Pierre Alliez, Inria Sophia-Antipolis]
- Mariem Mezghanni [LIX, from Sep 2022]
- Julie Mordacq [Ministère des Armées, from Sep 2022]
- Ramana Sundararaman [LIX, from Sep 2022]
- Jiayi Wei [LIX, from Sep 2022]
Interns and Apprentices
- Siddharth Setlur [Inria, from Sep 2022, M.Sc. student at ETH Zurich]
Administrative Assistant
- Michael Barbosa [Inria, from May 2022]
2 Overall objectives
Historical context. Geometry has been a unifying formalism for science: predictive models of the world around us have often been derived using geometric notions which formalize observable symmetries and experimental invariants. Tools such as differential geometry and tensor calculus quickly became invaluable in describing the complexity of natural phenomena and mechanical systems through concise equations, condensing local and global properties into simple relations between measurable quantities. Today, geometry (be it Euclidean or not) is at the core of many current physical theories: general relativity, electromagnetism (E&M), gauge theory, quantum mechanics, as well as solid and fluid mechanics, all have strong underlying structures that are best described and elucidated through geometric notions like differential forms, curvatures, vector bundles, connections, and covariant derivative. Geometry also creeps up in unexpected fields such as number theory and functional analysis, offering new insights and even breakthroughs, e.g., the use of algebraic geometry to address Fermat's last theorem.
Geometry in Digital Sciences. In sharp contrast, the role of geometry was mostly ignored at the inception of computer science. Yet, it has now become clear that digital sciences are imbued with an overwhelming amount of fundamentally geometric and topological concepts. Some are rather obvious, when dealing with the modeling of Euclidean shapes in computer graphics or the analysis of images in computer vision for instance; some are more subtle, such as the “manifold hypothesis” underlying a number of supervised or unsupervised learning techniques; and some are only nascent, such as the fields of Information Geometry (basically, the geometry used to understand probability distributions), Geometric Statistics (new statistical methodology for non-Euclidean entities), and Topological Data Analysis (where algebraic topology is used as a tool to enhance data analysis pipelines). In fact, even the discretization of physical theories needed to offer fast numerical simulation has brought geometry back to the forefront after it was understood that the loss of numerical fidelity in standard numerical methods is due to a fundamental failure to preserve geometric or topological structures of the underlying continuous models: partial differential equations (PDEs) modeling our physical world are typically encoding invariants and structures that are independent of the choice of coordinates used to express the equations and the tensors involved in them; but invariance to the choice of basis is often lost during discretization, as numerical approximations will in general not capture, let alone preserve, the key geometric structures that exist in the continuous case. Seeing these numerical issues through the lens of geometry is thus not just of academic interest: failure to maintain geometric invariants has serious consequences for the accuracy and stability of solutions.
Rationale. Given the unusual reach of geometry and its rich literature, there is an opportunity to assemble a team of experts in geometry and its vernacular, to help broadly impact digital science and technology. We thus propose the creation of a new project-team whose core scientific mission is to use geometry as a bedrock for the development of numerical tools and algorithms: we wish to exploit the properties of infinite-dimensional and finite-dimensional spaces that are related to distance, shape, size, and relative position, and to bring them to bear on computational discretizations and algorithms for analysis, processing, and simulation. Adhering to geometric structures and invariants as a guiding principle for computations is a rich source of both theoretical and practical challenges, allowing us to combine concepts and results from different areas of geometry broadly construed to produce new computational tools with solid mathematical foundations. While our team will be very focused in terms of the mathematical foundations and tools upon which it builds, it will also be very broad in terms of applications given the pervasiveness of geometry in sciences and technology. This makes for an unusual, yet powerful scientific setup that will facilitate interdisciplinary projects through the common use of geometric foundations and their specialized terminology. It will also allow us to contribute sporadically to pure and computational mathematics when appropriate in order to push our scientific mission forward.
Positioning. We see GeomeriX as first and foremost Inria Saclay’s graphics team, but with wider objectives afforded by the broad relevance of geometry. It is worth noting that graphics has evolved to the point where it often intersects with applied mathematics, machine learning, vision, and computational science in some of its efforts, and GeomeriX intends to continue this trend.
Objectives. Our project-team's overall scientific objective is to contribute, through a geometric perspective, both foundational and practical methods for geometric data processing. In particular, we seek the development of predictive computational tools by drawing from the many facets of geometry and topology: whether it be discrete geometry, basic differential geometry or exterior calculus, symplectic geometry, persistent homology or sheaf theory, optimal transport, Riemannian or conformal geometry, these topics of geometry inform and guide both our discretizations and algorithmic designs towards computing. Note that we do not plan to merely adapt and exploit geometric concepts and understanding for numerical purposes, as our focus on digital data may even result in contributions to these mathematical fields, extending the current body of knowledge. While we intentionally leave the range of our mathematical foundations open so as not to restrict our potential team-wide explorations, we concentrate our research on four concrete themes, which we believe can be most significantly impacted by a geometric approach to developing new numerical tools:
- Euclidean shape processing: from computer graphics to geometry processing and vision, the analysis and manipulation of low-dimensional shapes (2D and 3D) is an important endeavor with applications covering a wide range of areas from entertainment and classical computer-aided design, to reverse engineering and biomedical engineering. Our project-team intends to lead efforts in this competitive field, with key contributions in shape matching, geometric analysis, and discrete calculus on meshes.
- Simulation: traditional finite-element treatments of various physical models have had tremendous success. Recently, a number of geometric integrators have upended the field, either through structure-preserving integration which offers improved statistical predictability by respecting the geometric properties of the exact flow of the differential equations, or through novel discretizations of the state space. We intend to continue introducing novel integration methods for increasingly complex multiphysics systems, as well as exploiting the use of learning methods to accelerate simulation.
- Dynamical systems: we intend to leverage the geometric nature of dynamical systems to investigate and promote high-dimensional data analysis for dynamics: the study of dynamical systems from a limited number of observations of the state of a given system (for example, time series or a sparse set of trajectories) offers a unique opportunity to develop scalable computational tools to detect or characterize unusual features and coherent structures. Meanwhile, the study of dynamical systems from a combinatorial point of view opens up the possibility of characterizing their invariant sets and assessing their stability.
- Data science: finally, we are intent on exploring the underlying role of geometry in machine learning and statistical analysis. This role has been put forward in recent years, with the emergence of approaches such as geometric deep learning or topological data analysis, whose aim is to leverage the underlying geometry or topology of the data to enhance the performance, robustness, or explainability of the methods used for their analysis. We will pursue investigations toward this goal, concentrating our efforts on topics related to explainable feature design, geometric feature learning, geometry-driven learning, and geometry for categorical and mixed data types.
Evidently, our research efforts may at times span several of these themes given our multi-disciplinary objectives, and it is our hope that every team member will eventually participate in all four themes.
3 Research program
Below we introduce the details of our four research themes, in four separate subsections. In each subsection, we first present the scientific focus and research objectives of the corresponding theme, then we detail the research topics we intend to address and how we plan to leverage topology and geometry for each one of them. For each theme, we list the most likely contributors, and organize the various subtopics within each theme from short to long-term goals, based on our current expectations and focus.
3.1 Geometry for Euclidean shape processing
Euclidean space is the default setting of classical geometry in two or three dimensions. Shapes in 3D space are of particular interest as they represent the typical objects we interact with. Geometry processing is an area of research focusing on these low-dimensional shapes in Euclidean space, with the goal to design algorithms, data structures, as well as analysis tools for their digital acquisition, reconstruction, analysis, manipulation, synthesis, classification, transmission, and animation. Digital shapes are typically discretized through either point clouds, triangle meshes, or polygonal meshes for surfaces, and through tetrahedron or polytopal meshes for volumes. Analyzing and manipulating these digital representations already involve fundamental difficulties in terms of efficiency, scalability, and robustness to arbitrary sampling, for which computational geometry and computer graphics have generated a number of key algorithms. Simple surface meshes in 3D also offer a simple context in which to define discrete notions of basic topological properties (quantities preserved through arbitrary stretching, such as Euler characteristic, genus, Betti numbers, etc) and relevant geometric properties (normal, curvatures, covariant derivatives, parallel transport, etc). Yet the digital counterpart of the low-dimensional case of Euclidean geometry is far from being settled or complete: it remains obviously relevant in a number of scientific fields on which we plan to focus. A few research directions of particular interest are described below.
Operator-based methods for shape analysis.
We plan to develop novel approaches for representing and manipulating geometric concepts as linear functional operators. Specifically, we will focus on tools for shape matching, design and analysis of differential quantities such as vector fields or cross fields, shape deformation and shape comparison, where functional approaches have recently been shown to provide a natural and discretization-agnostic representation 98, 31, 32, 108. This “functional” point of view is classical in many scientific areas, including dynamical systems (where the pullback with respect to a map is closely related to the Koopman or composition operator, allowing the study of ergodicity or mixing properties of non-linear maps through the spectral properties of a linear operator), differential geometry (where vector fields are often defined by their action on real-valued functions) and representation theory among others. However, it has only recently been adopted in geometry processing, with tremendous and constantly growing potential in both axiomatic and learning-based approaches 87, 76, 59. We will continue developing efficient and robust algorithms by considering shapes as functional spaces and by representing various geometric operations as linear operators acting on appropriate real-valued functions. In addition to the efficiency and robustness of methods obtained by considering this linear operator point of view of geometry processing and dynamical systems, another very significant advantage of these techniques is that they allow us to express many different geometric operations in a common language. For example, it becomes easy to define the pushforward of a vector field with respect to a map by simply considering a composition of appropriate discrete operators. Despite the significant recent success of tools within this area, especially related to the functional map framework 99, there does not exist a unified coherent theoretical framework in which different geometric concepts can be represented and manipulated via their functional equivalents. Our main long-term goal therefore would be to establish a novel field within geometry processing by creating both a computational framework and a coherent theoretical formalism in which all of the different basic geometric operations can be expressed, and thus in which different concepts can “communicate” with one another. We believe that such a formalism and associated computational tools, already quite well developed, will not only greatly extend the scope of applicability of many existing geometry processing pipelines, but will also help expand this language to novel concepts, and ultimately help pave the way towards representation-agnostic geometric data manipulation.
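To make the operator-based viewpoint concrete, here is a minimal sketch (with illustrative names of our own, assuming precomputed Laplace-Beltrami eigenbases and lumped vertex areas for each shape) of a least-squares functional map estimated from corresponding descriptor functions, and of function transfer through it.

```python
import numpy as np

# Minimal functional-map sketch (illustrative names, not an existing library API).
# Each shape is assumed to come with a Laplace-Beltrami eigenbasis (columns of
# `basis`, one row per vertex) and lumped vertex areas `mass` for L2 projections.

def project(f, basis, mass):
    """Coefficients of one (or several) vertex-based functions in the eigenbasis."""
    weighted = mass[:, None] * f if f.ndim > 1 else mass * f
    return basis.T @ weighted

def fit_functional_map(descr_src, descr_tgt, basis_src, basis_tgt, mass_src, mass_tgt):
    """Least-squares matrix C mapping source coefficients to target coefficients,
    aligned on corresponding descriptor functions (one column per descriptor)."""
    A = project(descr_src, basis_src, mass_src)   # k_src x d
    B = project(descr_tgt, basis_tgt, mass_tgt)   # k_tgt x d
    # Solve C A ~= B, i.e. A^T C^T ~= B^T, in the least-squares sense.
    return np.linalg.lstsq(A.T, B.T, rcond=None)[0].T

def transfer(f_src, C, basis_src, basis_tgt, mass_src):
    """Push a real-valued function from the source shape onto the target via C."""
    return basis_tgt @ (C @ project(f_src, basis_src, mass_src))
```

Once such a map is in matrix form, further geometric operations can be expressed by composing matrices of this kind, which is precisely the appeal of the functional formalism described above.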
Discrete metrics and applications.
While three-dimensional shapes are often encoded via their Euclidean embedding, numerous research efforts have focused on studying and discretizing their intrinsic metric. Regge calculus 106, an early approach to numerical relativity without coordinates, proposed the use of edge lengths to encode a piecewise-Euclidean metric per simplex, from which the Riemann curvature tensor can be easily computed to derive local areas or curvatures. This early work led to a series of alternative metric representations: tip angles, for instance, are known to encode the intrinsic geometry of a triangle mesh up to a scaling, while local measurements (an angle 107 or a length cross-ratio 83 per edge) later formed the basis of circle patterns 35, 81 as well as conformal representations 113; the discrete Laplace-Beltrami cotan formula 102 also determines the edge lengths of a triangle mesh (and thus its discrete metric) up to a global scaling 125. More recently, generalized notions of metrics were proposed; for instance, 73 presented a characterization of an augmented discrete metric resulting from the orthogonal primal-dual structure of weighted triangulations. Common to many of these various metric characterizations is the existence of convex energies which make it possible to compute these metrics efficiently from various boundary conditions. We intend to investigate the discrete treatment of metrics for low-dimensional manifolds as a counterpart to the discretization of antisymmetric tensors (differential forms): the former is far less studied, and a discrete theory unifying symmetric and anti-symmetric tensors remains elusive despite recent advances 72. Moreover, the metric of a surface is known in the continuous realm to induce Hodge stars and a canonical torsion-free Levi-Civita connection (or parallel transport), but this picture is far less clear for discrete manifolds, even if the construction of arbitrary-order discrete Hodge stars and metric connections is by now well understood. A few research directions on generalized metrics seem particularly interesting due to their likelihood of resulting in novel algorithmic and computational frameworks:
- Metric-dependent meshing: Given a set of metric-based operators, optimized mesh structures can be designed to offer optimal accuracy akin to Hodge-star mesh optimization for the augmented weighted metric proposed in 95. Another interesting research question is the existence and construction of intrinsic Delaunay triangulation, the most common discrete shape representation, with respect to a particular metric 36.
- Metric-aware sampling: Metric-dependent descriptors such as the pair correlation function are particularly efficient in characterizing statistical properties of point distributions for texture synthesis 60. Extending this framework to arbitrary non-flat domains through Multi-Dimensional Scaling (MDS) seems particularly promising (a small sketch of the flat-case pair correlation function is given after this list).
- Shape characterization: Highly convoluted embeddings like the cortical surface of the brain and its functional connectivity graph are naturally hyperbolic 41. However, investigating a link between cortical folding and the volumetric fiber bundle structure from a purely geometric viewpoint through a hyperbolic metric characterization has surprisingly not been done in brain analysis, despite striking visual similarities between brain folding and geometric realizations of the hyperbolic plane (see 118 and Taimiņa's crochet model). We are hoping that this intrinsic metric characterization can be investigated through recent discrete hyperbolic parametrization tools 68, which may also lead to other shape classification techniques in more general contexts.
- Piecewise-linear maps: We also wish to study the classification of the deformation of a triangle mesh through its induced metric change in the embedding space. Developing an approach to decompose such a diffeomorphic piecewise-linear map into canonical geometric transformations through either linear algebra or convex minimization could offer new discrete equivalences for conformal, equiareal, and curvature-preserving maps between triangulations, with direct applications to mesh parameterization and more general processing of discrete meshes.
- Geodesic abstractions: Curve-network representations 71 based on a few geodesics to describe a shape provide a compact encoding of surfaces. While such representations are increasingly useful for artistic depictions, we also want to study their relevance as a compact compression scheme from which the shape and its metric can be derived with controllable precision.
- Metric-dependent cages: Finally, we also want to understand how to define optimized metric-dependent cages for intuitive and expressive deformation and animation of complex shapes 116, and how these cages can be understood as polygonal or polyhedral cells to locally simplify a simplicial complex.
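As announced in the metric-aware sampling item above, here is a small illustrative sketch (our own, flat Euclidean case only) of the pair correlation function of a 2D point set, estimated as a kernel-smoothed histogram of pairwise distances; replacing the Euclidean distance by a mesh metric is the extension we target.

```python
import numpy as np

# Pair correlation function of a 2D point set in the unit square, estimated as a
# kernel-smoothed histogram of pairwise distances (flat Euclidean case only).

def pair_correlation(points, radii, sigma=0.01, domain_area=1.0):
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # each distinct pair counted once
    density = n / domain_area
    pcf = np.zeros_like(radii)
    for i, r in enumerate(radii):
        kernel = np.exp(-(d - r)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
        pcf[i] = kernel.sum() / (np.pi * r * n * density)   # normalized by Poisson expectation
    return pcf

rng = np.random.default_rng(0)
g = pair_correlation(rng.random((256, 2)), np.linspace(0.01, 0.25, 50))
```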
Discrete differential and tensor calculus.
When working on low-dimensional spaces, the use of meshes (as opposed to just point clouds) pays dividends as it allows for the development of discrete versions of Exterior Calculus (see DEC 55 or FEEC 29), where k-dimensional integrals can be directly evaluated on k-cells, and differentiation can be formally achieved through the boundary operator: the concept of chains and cochains from algebraic topology forms the basis of a discrete analog of Cartan's exterior calculus of differential forms, providing crucial numerical tools such as a discrete de Rham cohomology and a discrete Helmholtz-Hodge decomposition that precisely mimic their continuous counterparts. Moreover, finite elements of arbitrary order can be associated with these discrete forms through subdivision 70 to provide a powerful Isogeometric Analysis (IGA). Recent developments 88, 69 have also offered a discrete approach to tangent vector fields. While DEC encodes vector fields as 1-forms, processing tangent vectors and, more generally, directional fields sampled at vertices of discrete surfaces requires the development of discrete (metric) connections 52, 88 (which can be seen as discrete equivalents of the Christoffel symbols) to handle the non-linearity of non-flat domains. From these connections can be derived the usual continuous notions of covariant derivatives or Killing operator, and these discrete operators demonstrate the same intimate link between geometry and topology as exemplified by the hairy ball theorem (Hopf index theorem). While these operators apply equally well on discrete three-manifolds, much remains to be done: properly defining the notion of curvature (a matrix-valued 2-form) or torsion (a vector-valued 2-form) in 3D and checking that these definitions provide consistent Bianchi identities (i.e., there exists an exterior covariant derivative satisfying fundamental geometric and topological properties) is an exciting research direction. Not only will it allow us to deal robustly with line singularities in hexahedral meshing, but it will also provide a Bochner Laplacian (also called the vector Laplacian) in 3D devoid of the type of spurious modes that discrete Laplacians over flat domains can introduce if one does not enforce a proper discrete de Rham complex. Such a tensor calculus for three-manifolds may allow us to explore possible applications in the context of general relativity in the longer term. Finally, the design of simplicial or cell meshes that guarantee accurate computations while approximating a given domain well remains an important endeavor for practical applications.
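As a toy illustration of these building blocks (not the team's code, and with hypothetical function names), the sketch below assembles the signed incidence matrix d0 (the discrete exterior derivative on 0-forms) and a diagonal cotan Hodge star on primal edges of a triangle mesh; their composition is the familiar cotan Laplacian.

```python
import numpy as np
import scipy.sparse as sp

# Illustrative DEC sketch on a triangle mesh (V: n x 3 vertex array, F: m x 3 faces).
# d0 is the signed vertex-to-edge incidence matrix (discrete exterior derivative on
# 0-forms); star1 is the diagonal "cotan" Hodge star on primal 1-forms; their
# composition d0^T star1 d0 is the familiar cotan Laplacian.

def build_dec_operators(V, F):
    edge_index, cot = {}, {}
    for f in F:
        for k in range(3):
            i, j, o = f[k], f[(k + 1) % 3], f[(k + 2) % 3]   # edge (i,j), opposite vertex o
            e = (min(i, j), max(i, j))
            edge_index.setdefault(e, len(edge_index))
            u, w = V[i] - V[o], V[j] - V[o]
            cot[e] = cot.get(e, 0.0) + np.dot(u, w) / np.linalg.norm(np.cross(u, w))
    ne, nv = len(edge_index), len(V)
    rows, cols, vals = [], [], []
    star_diag = np.zeros(ne)
    for (i, j), idx in edge_index.items():
        rows += [idx, idx]
        cols += [i, j]
        vals += [-1.0, 1.0]
        star_diag[idx] = 0.5 * cot[(i, j)]        # dual/primal edge length ratio
    d0 = sp.csr_matrix((vals, (rows, cols)), shape=(ne, nv))
    star1 = sp.diags(star_diag)
    return d0, star1, d0.T @ star1 @ d0            # cotan Laplacian (weak form)
```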
3.2 Geometry for simulation
Mathematical models of the evolution in time of mechanical systems generally involve systems of differential equations. Simulating a physical system consists in figuring out how to move the system forward in time from a set of initial conditions, allowing the computation of an actual trajectory through classical methods such as fourth-order Runge-Kutta or Newmark schemes. However, a geometric — instead of a traditional numerical-analytic — approach to the problem of time integration is particularly pertinent 74: the very essence of a mechanical system is indeed characterized by its symmetries and invariants (e.g., momenta), thus preserving these geometric notions into the discrete computational setting is of paramount importance if one wants discrete time integration to properly capture the underlying continuous motion. Considering mechanics from a variational point of view goes back to Euler, Lagrange and Hamilton 62, and Poincaré famously stated that geometry and physics are “indissociable”. The variational principle most important for continuous mechanics is due to Hamilton, and is often called Hamilton’s principle or the least action principle: it states that a dynamical system always finds an optimal course from one position to another. One consequence is that we can recast the traditional way of thinking about an object accelerating in response to applied forces, into a geometric viewpoint: the path followed by the object between two space-time positions has optimal geometric properties, analogous to the notion of geodesics on curved surfaces. This point of view is equivalent to Newton’s laws in the context of classical mechanics, but is broad enough to encompass physical models extending to E&M and quantum mechanics 91. While the idea of discretizing variational formulations of mechanics is standard for elliptic problems using Galerkin Finite Element methods for instance, only recently did it get used to derive variational time-stepping algorithms for mechanical systems 92. These variational integrators have been shown to be remarkably versatile, powerful, and general for simulations of physical phenomena when compared to traditional numerical time stepping methods: the symplectic character of variational integrators guarantees good statistical predictability through accurate preservation of the geometric properties of the exact flow of the differential equations. We endeavor to continue contributing to this particular application of geometry and extend it further, as we foresee a number of interesting scientific developments and industrial applications.
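As a toy illustration of why structure preservation matters (ours, purely for exposition), the sketch below contrasts explicit Euler with a first-order symplectic Euler step on a pendulum: only the update order differs, yet the symplectic scheme keeps the energy error bounded over long runs while explicit Euler drifts, the behavior that variational integrators generalize to full mechanical systems.

```python
import numpy as np

# Toy comparison on a pendulum (H = p^2/2 - cos q): explicit Euler vs. a first-order
# symplectic scheme. Only the update order differs, yet the long-run energy behavior does not.

def explicit_euler(q, p, h):
    return q + h * p, p - h * np.sin(q)

def symplectic_euler(q, p, h):
    p = p - h * np.sin(q)      # kick: update momentum with the current force
    q = q + h * p              # drift: update position with the new momentum
    return q, p

def energy(q, p):
    return 0.5 * p**2 - np.cos(q)

h, steps, q0, p0 = 0.05, 20000, 1.0, 0.0
qe, pe, qs, ps = q0, p0, q0, p0
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, h)
    qs, ps = symplectic_euler(qs, ps, h)
print("energy drift (explicit):  ", energy(qe, pe) - energy(q0, p0))
print("energy drift (symplectic):", energy(qs, ps) - energy(q0, p0))
```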
State-space discretization of statistical physics.
Kinetic equations are used to describe a variety of phenomena in various scientific fields, ranging from rarefied gas dynamics and plasma physics to biology and socio-economics, and appear naturally when one considers a statistical description of a large particle system evolving in time. In incompressible fluid simulation, kinetic solvers based on the lattice Boltzmann method (LBM) have generated growing interest due to their use of the Boltzmann transport equation and to its unusual state-space discretization based on a computationally-efficient lattice 111: compared to macroscopic solvers directly integrating Navier-Stokes equations, LBM totally bypasses the difficult issue of discretizing advection to high order, and the absence of global pressure solves makes for extremely efficient parallel implementations, which are now surpassing alternative discretizations 85. However, the numerical treatment of the collision operator of the Boltzmann equation has not reached maturity; most surprising is the complete absence of geometric approaches to deal with Boltzmann equations. One should be able to formulate a variational approach to LBM based on Hamilton's principle to derive a systematic integrator with guaranteed accuracy and structure-preserving properties. Moreover, while dealing with isothermal and incompressible flows is a good starting point, the kinetic standpoint of fluid dynamics is not theoretically restricted to this case: far more complex physical systems, from compressible flow (with shocks), to thermal conductivity, to even acoustics for example, can be handled; but far less is known on how to handle these more involved cases computationally, because no systematic numerical approach to handle Boltzmann equations is known. Success in our geometric approach to LBM should offer a much better handle on these difficult cases: new Hermite regularization tools 37, 51, combined with the recent introduction of variational integrators for non-equilibrium thermodynamical systems, should provide the necessary theoretical foundations to establish a geometric solver for this generalized case.
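For concreteness, a minimal D2Q9 lattice-Boltzmann step (BGK collision followed by streaming) is sketched below; it is purely illustrative, with no boundary conditions and no variational or structure-preserving collision model, which is precisely the open question raised above.

```python
import numpy as np

# Minimal D2Q9 lattice-Boltzmann sketch: BGK collision followed by streaming on a
# periodic grid. Purely illustrative (no boundaries, no structure-preserving model).

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])            # lattice velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)                  # lattice weights

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def lbm_step(f, tau=0.6):
    rho = f.sum(axis=-1)                                       # zeroth moment: density
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]        # first moment: velocity
    f = f + (equilibrium(rho, u) - f) / tau                    # BGK collision (relaxation)
    for q in range(9):                                         # streaming along c_q
        f[..., q] = np.roll(f[..., q], shift=tuple(c[q]), axis=(0, 1))
    return f

# 64x64 periodic domain, at rest, with a small random density perturbation
rng = np.random.default_rng(0)
f = equilibrium(1.0 + 0.01 * rng.random((64, 64)), np.zeros((64, 64, 2)))
for _ in range(100):
    f = lbm_step(f)
```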
Learning-aided simulation.
Computational physics is experiencing a tectonic shift as data-driven approaches are quickly becoming mainstream. While we do not adhere to the idea being floated that numerical integration could be simply “learned” to improve current solvers, the fact is that many machine learning tools may have a profound influence on practical applications using simulation. Long-standing problems such as the design of perfectly matched layers (PML, an artificial absorbing layer for transport equations used to reduce the domain of simulation without suffering from reflected waves 49) or flux limiters in high-resolution schemes 120 (to avoid the spurious oscillations (wiggles) that would otherwise occur due to shocks or sharp changes) could be found through training, and applied at very low numerical cost. We are curious to see if geometry can help design better architectures or approaches for this type of learning-aided simulation, by helping with better loss functions (with soft constraints) or better architectures (to enforce hard constraints) that account for the importance of structure preservation. Learning the highly non-linear and chaotic dynamics of fluids is also an interesting direction: we believe that one can infer predictive high-frequency details of a turbulent flow from a low-resolution simulation, an attractive alternative to non-linear turbulence modeling, extending the computationally-expensive Reynolds-Averaged Navier-Stokes (RANS 27), Large-Eddy Simulation (LES 79), or Detached-Eddy Simulation (DES 112) models used in CFD. Many other learning efforts in the domain of simulation are being explored, in particular towards the goal of allowing real-time design of shapes that satisfy some physical properties, such as lowest drag for improved aerodynamics or highest stiffness for a light cantilever.
Geometric integration of physical systems and multiphysics.
Although the use of geometric integrators for differential equations in computational physics has recently brought about many numerical improvements, the large body of knowledge in differential geometric mechanics remains vastly under-utilized in discrete mechanics. Many mechanical systems require geometric objects such as diffeomorphisms, vector fields, or (principal) connections for which no structure-preserving discretization exists. Hydrodynamics, for instance, has well-established and rich differential geometric foundations, but few numerical methods take advantage of this body of knowledge so far. Yet, satisfying a form of “particle relabeling” symmetry 91 on a discrete level could directly enforce Kelvin’s circulation theorem, a momentum preservation as important as angular momentum preservation for rigid bodies. Relativity is another example, albeit much more involved, where structure-preserving numerics would strongly impact the scientific community: having discretizations automatically enforcing Bianchi’s identities would not only simplify the numerical procedures involved in gravitational theory (as spectral accuracy would no longer be required to avoid spurious modes), but could in fact result in conservation of energy and angular momentum. Moreover, multiphysics (coupled mechanical systems involving more than one simultaneously occurring physical field) can be consistently described through constrained variational principles: a simple, yet already interesting example is the case of the equations of motion for the garden hose, where rod dynamics coupled with fluid motion was only fully modeled (along with its nonlinear solutions of traveling-wave type) a few years back 104 through such a geometric treatment. Now that a variational formulation of nonequilibrium thermodynamics extending Hamilton's principle to include irreversible processes has been proposed 66, we are particularly interested in advancing further the arsenal of computational methods for physical simulation.
3.3 Geometry for dynamical systems
Dynamical systems – whether physical, biological, chemical, or social – are ubiquitous in nature, and their study deals with the concept of change, rate of change, rate of rate of change, etc. Dynamical systems are often better elucidated and modeled through topology and geometry. Whether we consider a continuous-time dynamical system (flow) or discrete-time dynamical system (map), the geometric theory of dynamical systems studies phase portraits: on the state-space manifold (a geometric model for the set of all possible states of the system), the global behavior of the dynamical system is determined by a cellular structure of basins enclosed by separatrices, each basin being dominated by a different specific behavior or fate. A system's trajectories on the state-space manifold determine velocity vectors by differentiation; conversely, velocity vectors determine trajectories by integration. Bifurcations can also be understood as geometric models for the controlled change of one system into another, while the rate of divergence of trajectories in phase space measures a system's stability. Given this overwhelming relevance of geometry in dynamical systems, we intend to dedicate some of our activities to developing geometry-based computational tools to study time series and dynamical systems: while classic dynamical systems theory has established solid foundations to study structures in steady and time-periodic flows and maps, new tools are needed to analyze the complexity of time series or aperiodic large-scale flows from sampled trajectories, and to automatically generate a simplified skeleton of the overall dynamics of a system from input data. We discuss next a few directions in which we intend to make further impact.
Time series.
Geometric methods play an important part in the study of time series. Of particular interest are time-delay embeddings, which are generically able to capture the underlying state space and dynamics from which the time series data have been acquired, by the Takens embedding theorem 115. Such embeddings transform discrete time series into point clouds in Euclidean space, so that the underlying geometry of the point cloud reflects the geometry of the phase space the data originate from. By doing so, questions related to the seasonality or anomalous behavior of the time series are naturally reformulated into questions about the geometry or topology of their embeddings 101. Besides this approach, other more direct methods apply geometric or topological tools in the original physical or frequency domain, an approach which, despite its simplicity, has proven to be relevant in various contexts 54, 58. A common thread to all these developments is their restriction to numerical time series, including (but not restricted to) data for which geometry plays an obvious role—e.g. inertial or gyroscopic sensor data. With potential medical applications in mind, one of our main long-term goals will be to adapt and extend these approaches to handle categorical data, in connection with the Geometry for categorical and mixed data types item in the Geometry for data science theme. We also plan to find principled methods for tuning the various parameters involved in the techniques, e.g. the window size in time-delay embeddings: we will seek to optimize or learn these parameters automatically, in connection with the Geometry-driven learning item in the Geometry for data science theme. We will also seek to make these parameters adaptive, e.g. using time-varying window sizes in time-delay embeddings of irregular time series, in order to obtain more accurate data representations and improved learning performance.
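A minimal sketch of the time-delay embedding mentioned above (illustrative code of our own, not a library API): a scalar series is turned into a point cloud whose loops and clusters reflect periodicity or regime changes, and whose window parameters are exactly those we propose to tune or learn.

```python
import numpy as np

# Sliding-window (time-delay) embedding: a scalar series becomes a point cloud in
# R^dim whose geometry reflects the underlying phase space (Takens). The window
# size dim*tau is exactly the parameter we propose to tune or learn.

def delay_embedding(series, dim=3, tau=5):
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i * tau: i * tau + n] for i in range(dim)], axis=1)

# Example: a noisy periodic signal embeds as a closed loop, whose presence can be
# detected, e.g., with 1-dimensional persistent homology of the point cloud.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=len(t))
cloud = delay_embedding(signal, dim=3, tau=25)   # array of shape (n, 3)
```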
Coherent structures.
Another interesting area in need of new numerical methods concerns coherent structures, i.e., persisting features of a flow over long periods that tend to favor or inhibit material transport between distinct flow regions. While there is no universally agreed-upon definition for coherent structures (there exist ergodicity-based 40, observer-based 93, and probabilistic 64 approaches to their definition), most variants and associated computational methods assume a fine knowledge of the Eulerian velocity field in space and time to deduce a good approximation of the flow. However, flows are often known only as a set of sparse particle trajectories in time (the trajectories of buoys in the ocean are one example). Such a sparse sampling of the dynamical system does not lend itself well to a geometric analysis of transport, so topological methods have recently been proposed to extract structures from a sparse set of trajectories by measuring their entanglement 117, 28, 124 based on the theory of braid groups, a classical area of topology. Coherent regions can then be defined as containing particles that possibly mix with other particles within the region itself but do not mix with particles outside the region; the set of trajectories arising from the particles within a coherent region forms a coherent bundle. Even if the use of braid groups offers sound foundations and numerical tools for the definition of coherent structures in 2D, there have been only limited efforts in developing practical and scalable computational tools for the efficient analysis of flow structures in 3D, offering a clear opportunity for us to try new geometric insights.
Invariant sets.
Much of the theory of dynamical systems revolves around the existence and structure of invariant sets, which by definition are subsets of the state space that are invariant under the action of the dynamics. Invariant sets come in many different forms (stationary solutions, periodic orbits, connecting orbits, chaotic invariant sets, etc.), and their structure can be very complicated and can undergo drastic changes under perturbations of the system, thus making their study difficult. This is all the more true in practical applications, where the systems are only known through space and/or time discretizations. Conley index theory 50 overcomes these issues by restricting the focus to invariant sets that admit an isolating neighborhood, and by introducing a topological invariant—the Conley index—that characterizes whether such isolated invariant sets are attracting, repelling, or saddle-like. It is defined as the homotopy type of a pair of compact subsets of the neighborhood, and it is proven to be independent of the choice of neighborhood—thus characterizing the invariant set itself. We are interested in the study of invariant sets in the discrete space and continuous time setting, where the space is typically described by a simplicial complex and the dynamics by a combinatorial vector (or multivector) field. Building upon Forman's seminal work in combinatorial dynamical systems 61, recent advances 33, 86 have shown that isolated invariant sets and their Conley indices can be properly defined even in this setting, and that they can be related to the dynamics of some upper semicontinuous acyclic multivalued map defined on the geometric realization of the simplicial complex; in simpler terms, not only can Conley index theory be adapted to the combinatorial setting, but it also connects to its classical analog in the underlying space. Two important questions for applications arise from this line of work: (1) how to compute the invariant sets and their Conley indices (including choosing relevant isolating neighborhoods) efficiently? (2) how do they behave under perturbations of the input vector field or simplicial complex? These questions have just started to be addressed 56, 57, mostly through the lens of single-parameter topological persistence theory, developed in the context of topological data analysis. We intend to push this direction further, notably using multi-parameter persistence theory to cope with some of the key difficulties such as the choice of isolating neighborhoods.
3.4 Geometry for data science
The last decade has seen the advent of machine learning (ML), and in particular deep learning (DL), in a large variety of fields, including some directly connected to geometry. For instance, DL-based approaches have become increasingly popular in geometry processing 105 due to their ability to outperform state-of-the-art, domain-specific methods by leveraging the ever-increasing amounts of available labeled data. On the downside, DL approaches suffer from a general lack of explainability. Moreover, their performances can be disappointing on small data due to their large numbers of parameters; this is especially true with end-to-end learning pipelines, which tend to require humongous amounts of training data to learn the right data representation. Finally, DL is by essence tied to Euclidean data representations, and as such it requires intermediate transforms in order to be applicable to non-Euclidean data types such as graphs or probability measures. Because of these limitations, we are seeing a rise of geometric and topological methods for data science in general, and for ML and DL in particular, whose aim is to help address the aforementioned challenges as well as others. For instance, geometric deep learning 38 tries to generalize deep neural models to non-Euclidean domains. This includes using information geometry to apply deep neural models in probability spaces. Topological data analysis (TDA) 96 is another popular approach to enhance ML and DL methods. It contributes to data science in at least three different ways: first, by providing data mining tools that can help users uncover hidden structures in data; second, by providing generic descriptors for geometric data that can be turned into features for ML and DL with provable stability properties; third, by integrating itself deeply into existing ML methods or DL architectures to enhance their performances or to analyze their behavior 46, 89. Other contributions of geometry to data science at large include: the use of Forman’s Ricci curvature and its corresponding Ricci flow in networks, to understand the networks' properties and growth 121; the application of the Hodge-Helmholtz decomposition to statistical ranking problems with sparse response data, with theoretical connections to both PageRank and LASSO 78; the use of Reeb graphs or Morse-Smale complexes in statistical inference 47 as well as in data visualization 119. These important developments reinforce our argument that geometry and topology have their role to play in the elaboration of the next-generation data analysis tools. We plan to focus on a few research directions related to these developments, which are of particular interest in our view.
Deep learning for large-scale 3D geometric data analysis.
We first propose to develop efficient algorithms and mathematical tools for analyzing large geometric data collections using Deep Learning techniques. This includes 3D shapes represented as triangle or quad meshes, volumetric data, point clouds possibly embedded in high dimensions, and graphs representing geometric (e.g. proximity) data. Our project is motivated by the fact that large annotated collections of geometric models have recently become available 45, 123, and that machine learning algorithms applied to such collections have shown promising initial results, both for data analysis as well as synthesis. We believe that these results can be significantly extended by building on recent advances in geometry processing, optimization and learning. Our ultimate goal is to design novel deep learning techniques capable both of handling geometric data directly and of combining and integrating different data sources into a unified analysis pipeline. A key challenge in this project is the fact that geometric data can come in a myriad of different representations, such as point clouds and meshes among others, with variable sampling and discretization. Furthermore, geometric shapes can undergo both rigid and non-rigid deformations. Unfortunately, most existing deep learning approaches focus only on a particular type of representation and deformation class (e.g., considering purely extrinsic or purely intrinsic methods). Instead, we propose to place special focus on designing learning techniques capable of handling diverse multimodal data sources undergoing arbitrary deformations, in a coherent theoretical and practical framework. Moreover, we propose to develop novel powerful interactive tools for analysis and annotation, to help harness user input, and also provide better mechanisms for exploration of variability in the data 108, 100.
Explainable geometric and topological features for data.
Another of our goals is to design geometric and topological features that can capture richer content from the data, while maintaining the robustness and stability properties that the existing features enjoy. If we can make our features rich enough so that they characterize the input data (or their underlying geometric structures, assuming such structures exist) completely, then we will be able to leverage them in the context of explainable AI, to compute pre-images with guarantees on the corresponding interpretations. In cases where our features cannot completely describe the data, we will study the geometry of the fibers of the feature extraction step, in order to quantify the discrepancy that may appear between different interpretations of the same feature. We envision two complementary approaches for this:
- The first approach relies on feature aggregation. In the context of TDA for instance, one may consider using multiple filtrations (or filter functions on a fixed simplicial complex), computing their corresponding topological descriptors, then aggregating these descriptors together to form a feature vector.
- The second approach relies on more elaborate geometric and topological tools to design the features. The idea is to encode the joint effect of multiple geometric and topological constructions on the data, in a more integrated way than just by aggregating the corresponding features. By encoding more complex effects, we hope to extract a richer content using smaller constructions.
Research on the first approach in TDA started with 53, 67, who proved that, in the special case where the data are sampled from some subanalytic compact sets in Euclidean space, the compact sets themselves are fully described by the aggregated features obtained by orthogonal projections onto lines. This follows from a more fundamental result on the invertibility of the Radon transforms of constructible functions 110, to which the above aggregated features belong. This initial result has sparked a thriving new direction of research, exploring larger and larger classes of compact sets 77, 90, 97. Many important questions arise from this line of work, some of which have been partially addressed, including: what kind of stability or robustness properties do these aggregated features enjoy? Can the size of the collection of filter functions used be reduced, to become finite and (more importantly) independent of the compact set under consideration? Can the aggregated features be computed efficiently? Can non-Euclidean compact sets, such as manifolds or length spaces, be considered as well, with similar guarantees?
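As a small illustration of the feature-aggregation idea (a simplified surrogate of the projection-based constructions above, with names of our own choosing), the sketch below filters a fixed neighborhood graph of a 2D point cloud by its height in several directions, computes the 0-dimensional persistence pairs of each lower-star filtration with a union-find sweep, and concatenates simple diagram summaries into one aggregated feature vector.

```python
import numpy as np

# Directional aggregated features for a 2D point cloud (illustrative surrogate of the
# projection-based constructions above): fix a k-nearest-neighbor graph, filter its
# vertices by height in several directions, and record 0-dimensional persistence
# pairs of each lower-star filtration with a union-find sweep (elder rule).

def zero_dim_pairs(f, edges):
    parent = list(range(len(f)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    pairs = []
    for u, v in sorted(edges, key=lambda e: max(f[e[0]], f[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if f[ru] < f[rv]:              # keep the older (lower-birth) root alive
            ru, rv = rv, ru
        pairs.append((f[ru], max(f[u], f[v])))   # (birth, death) of the dying component
        parent[ru] = rv
    return pairs

def aggregated_feature(points, n_dirs=8, k=6):
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    edges = {(min(i, j), max(i, j)) for i in range(len(points))
             for j in np.argsort(d[i])[1:k + 1]}               # k-NN graph, built once
    feats = []
    for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
        f = points @ np.array([np.cos(theta), np.sin(theta)])  # height in direction theta
        pairs = zero_dim_pairs(f, edges)
        feats += [len(pairs), sum(death - birth for birth, death in pairs)]
    return np.array(feats)
```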
The second approach is related to the development of multi-parameter persistence 42, which is undeniably the most widely open and long-standing research topic in TDA today. The core challenge is to define computationally tractable algebraic invariants that can capture as much of the joint structure of multiple topological constructions as possible. The notorious difficulty of this question comes from the fact that the algebraic objects underlying multi-parameter topological constructions are significantly more complicated than the ones underlying single-parameter constructions. The question also connects to notoriously hard problems in other areas of pure mathematics, such as the classification of isomorphism classes of indecomposable poset representations in quiver representation theory for instance. It can benefit from these connections, as mathematical tools that have been developed for those problems can be imported into the TDA literature—several promising such imports have been made in the recent past, including from representation theory 34 and from sheaf theory 80. In turn, mathematical and algorithmic advances made in multi-parameter persistence may benefit these other areas of mathematics as well. This is clearly a high-risk and long-term research topic, but if successful, it may eventually have an enormous impact on TDA and related areas.
Geometric feature learning.
Geometry and topology have played a key role in the design of feature extraction pipelines for certain types of data. The numerous existing geometric features for geometry processing (shape contexts 63, differential and integral invariants 103, heat or wave kernel signatures 30, 114, etc.) are a sign of the importance of this topic for the computer graphics community. Meanwhile, the TDA community has developed generic feature extraction pipelines, based on combinatorial constructions and their algebraic invariants, which have proven to be useful in a variety of application domains 96. All these approaches are, however, handcrafted, with hyperparameters being tuned via manual, grid, or random search. Our goal is to make these approaches transition from a paradigm of feature engineering to that of feature learning, in order to set up end-to-end learning pipelines for improved performances and adaptability. Two complementary directions are considered:
- designing piecewise-smooth variants of the existing pipelines, with a fine control over the underlying stratification. This will make it possible to apply variational optimization methods, typically stochastic (sub-)gradient descent, and to optimize the gradient sampling steps for improved convergence rates.
- designing novel pipelines based on a combination of geometric/topological tools and deep learning, in order to get the best out of both worlds.
Research in the first direction is still in its infancy. Promising theoretical advances were made recently, towards understanding the piecewise differentiability of the basic topological persistence operator in full generality 84, as well as towards optimizing its parameters using classical stochastic gradient descent 43. Can the knowledge gained in these studies about the underlying stratification of the operator be leveraged to optimize the gradient sampling step and thus improve the convergence rates? Can these results be extended to more advanced pipelines, such as the one for Mapper or for zigzags and multi-parameter persistence?
The idea behind the second direction is to integrate topological or geometric layers into neural network architectures such as auto-encoders or GANs for feature extraction — the challenge being to determine how to do it in the appropriate way, so that we can make the most of this combination. This question connects to the research topic described further down in this section.
Geometry-driven learning.
Most of the contributions of geometry and topology to machine learning until recently have been to the design of pre-processing steps (e.g. feature extraction) to enhance the performances of the learning pipeline. There is now a thriving effort of the community toward integrating geometric and/or topological computations deeper into the core of the pipeline. This includes for instance: ToMATo 46, which integrates a TDA-based feedback loop into density based algorithms to improve their stability and robustness; topological regularizers 48, 75, which add topology-based regularization terms to the loss in supervised statistical learning; topological layers 44, 65, 82, which are meant to be incorporated into neural networks. Meanwhile, geometry and topology have been used to analyze the behavior of neural networks 109, 39. This exciting line of work is just emerging, and our intent is to push this direction further, in particular to address the following important questions:
- How can we generalize the use of topological layers in neural networks? This question is connected to the differentiability of the TDA pipeline, addressed in the research topic Geometric feature learning. Indeed, generalizing the current (nascent) framework for differential calculus and optimization with the TDA pipeline will be key to designing both generic and effective topological layers. Another more practical aspect of the question is to evaluate the contribution of topological layers as initial or intermediate layers, depending on the neural network architecture that they are combined with and on the data they are applied to.
- The same question arises for topological regularizers, with similar theoretical and practical challenges.
- The development of richer families of geometric and topological descriptors, undertaken in the item Explainable geometric and topological features for data, will eventually lead to the question of generalizing the current differentiable framework to these new descriptors, in order to make them as widely applicable as the current descriptors, and also to the practical question of determining how to best combine them with existing loss functions, regularizers, or neural network architectures.
- The aforementioned contributions and research directions concern mostly supervised learning. Can we contribute as well to unsupervised learning problems, including clustering (as ToMATo does already for density-based clustering), dimensionality reduction, or unsupervised feature learning? This question connects also to the research topic Geometric feature learning described previously. One direction we may explore is the design of geometric or topological layers to be inserted in unsupervised neural network architectures such as auto-encoders or GANs.
- Finally, as TDA is concerned primarily with topology, an obvious (yet still wide open) question to ask is whether it can contribute to the current effort towards generating neural network architectures automatically.
Geometry for categorical and mixed data types.
Categorical data types are notoriously hard to deal with in the context of ML and AI. Indeed, most of the existing ML toolbox has been designed specifically to work with numerical variables, usually sitting in some vector or metric space. By contrast, spaces of categorical data do not naturally come equipped with a linear structure or a metric. More importantly, these spaces are discrete by nature, so choices of metrics or (dis-)similarity measures can be scarce, with limited effects on the learning efficiency. To make things worse, categorical variables are often mixed with numerical variables, and choosing a proper weighting for them is a challenge in its own right. Meanwhile, categorical variables play an important part in many applications: for instance, in precision medicine, where the monitoring of patients relies on collected longitudinal data that include not only numerical variables such as temperature or blood pressure, but also categorical variables such as illness antecedents or symptoms lists. Thus, handling categorical and mixed data types represents an important challenge today. Unfortunately, with very few exceptions 122, it has been mostly overlooked so far in the development of topological methods for ML and AI, so our goal will be to help fix this situation. The standard approach for handling categorical variables is to define a proper vector representation, then to apply—either off-the-shelf or with minor adaptations—an analysis method designed for numerical variables to the new data representation. A prototypical instance of this approach is Multiple Correspondence Analysis for dimensionality reduction 26, which applies classical PCA to the one-hot encoding matrix of the input data. A variant of the approach replaces the vector representation by a suitable metric or (dis-)similarity measure on the initial categorical variables or on some transformed version of those. For instance, in clustering, one can define a metric on the input data, e.g. Jaccard or Hamming distance, then apply a hierarchical bottom-up clustering algorithm such as single-linkage to the resulting distance matrix. This variant seems quite appropriate for geometric or topological methods, since the latter typically work with metric or (dis-)similarity spaces. The challenge is to determine with which metrics or (dis-)similarity measures, and on which data types, geometric or topological methods will be provably better.
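A minimal sketch of the metric-based variant described above, using standard SciPy routines on toy categorical records (the data and thresholds are illustrative): Hamming distances feed a single-linkage hierarchical clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Metric-based baseline for categorical records (toy data, illustrative thresholds):
# integer-coded categorical variables, Hamming distances, single-linkage clustering.

records = np.array([      # e.g. (symptom A, symptom B, antecedent), coded as integers
    [0, 1, 2],
    [0, 1, 1],
    [1, 0, 2],
    [1, 0, 0],
])
condensed = pdist(records, metric="hamming")      # fraction of differing variables
dendrogram = linkage(condensed, method="single")  # hierarchical bottom-up clustering
labels = fcluster(dendrogram, t=2, criterion="maxclust")
```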
A more refined version of the approach learns the new data representation instead of engineering it, which is particularly relevant when end-to-end learning pipelines are sought. The methods are usually tailored to a specific data type; for instance, word2vec 94 computes word embeddings for text data using a two-layer neural network. Our developments in the research topic Geometry-driven learning will make it possible to combine TDA layers with such networks, and thus to benefit from the most recent advances on representation learning for these data types. The challenge will be to understand when and how to make the most of this combination.
4 Application domains
Our work aims at a wide range of applications covering 3D shape analysis and processing, simulation, and data science in general. While we typically focus on contributions that are of a fundamental, mathematical and algorithmic nature, we seek collaborations with academic and industrial partners from applied fields, who can use our tools on practical and concrete problems. Here are a few examples of collaborations:
- In the context of 3D geometry processing, we collaborate with Dassault Systèmes for a) the PhD of Lucas Brifault on the design of novel geometric representations for shapes through measure theory and b) the PhD of Mariem Mezghanni on the design of physical simulation layers for 3D modeling.
- In the context of personalized medicine, we collaborate with statisticians and medical doctors to incorporate our geometric and topological features into learning pipelines to design better dynamic treatment regimens (AEx PreMediT).
- In a collaboration with the French Ministry of Defense, we seek to develop tools to analyze multimodal time-series data in order to predict the occurrence of G-LOC (g-force induced loss of consciousness) episodes among fighter jet pilots in training or in operation (PhD of Julie Mordacq).
Besides these few illustrative examples, GeomeriX also maintains regular collaborations with Sanofi, EDF, Danone R&D, and Immersion Tools, as well as with several key players in the worldwide tech industry, including Ansys, Adobe Research, Disney/Pixar, and NVidia.
5 Highlights of the year
5.1 Awards
- P. Memari won a Best Paper award at JFIG in November 2022.
- M. Ovsjanikov received a Best Paper award for 3DV 2022, and one of his students received a Best Thesis award from the GdR IG-RV.
- M. Desbrun won a Best Paper award at JFIG 2022.
5.2 Nominations
- S. Oudot is a CAS fellow at the Norwegian Academy of Science and Letters for the academic year 2022-2023.
6 New results
We list our new results for each of the four themes around which our team's research is organized.
6.1 Geometry for Euclidean shape processing
6.1.1 Point-Pattern Synthesis using Gabor and Random Filters
Participants: Pooran Memari. In collaboration with Xingchang Huang, Hans-Peter Seidel, and Gurprit Singh (MPI Saarbrücken).
Point-pattern synthesis requires capturing both local and non-local correlations from a given exemplar. Neural networks have shown remarkable success in such tasks for both point-pattern and texture synthesis. In this work 14, we show that more traditional Gabor transform-based features—together with convolutional filters—can perform even better, while making the pipeline more versatile than previous approaches. The resulting pipeline better captures both local and non-local structures, does not require any dataset-specific training, and easily extends to handle multi-class and multi-attribute point patterns, e.g., disk and other element distributions. Our method outperforms state-of-the-art synthesis methods on a large variety of point patterns in terms of both qualitative and quantitative measures over different applications.
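To make the notion concrete, the snippet below (illustrative only, not the authors' implementation) builds a basic real-valued 2D Gabor kernel, the kind of hand-crafted filter that Gabor-based feature pipelines rely on; all parameter values are arbitrary.

```python
# Illustrative only: a basic 2D Gabor kernel (Gaussian envelope times a plane wave).
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, freq=0.25):
    """Real part of a 2D Gabor filter with orientation `theta` and frequency `freq`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)           # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))  # isotropic Gaussian envelope
    return envelope * np.cos(2.0 * np.pi * freq * xr)      # modulated by a cosine wave

kernel = gabor_kernel(theta=np.pi / 4)
print(kernel.shape)   # (15, 15)
```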
6.1.2 Fast and Robust Planar Cutting of Arbitrary Domains
Participants: Mathieu Desbrun. In collaboration with Prof. Jin Huang, Zhejiang University, PRC.
Given a complex three-dimensional domain delimited by a closed and non-degenerate input triangle mesh without any self-intersection, a common geometry processing task consists in cutting up the domain into cells through a set of planar cuts, creating a “cut-cell mesh”, i.e., a volumetric decomposition of the domain amenable to visualization (e.g., exploded views), animation (e.g., virtual surgery), or simulation (finite volume computations). A large number of methods have proposed either efficient or robust solutions, sometimes restricting the cuts to form a regular or adaptive grid for simplicity; yet, none can guarantee both properties, severely limiting their usefulness in practice. At the core of the difficulty is the determination of topological relationships among large numbers of vertices, edges, faces and cells in order to assemble a proper cut-cell mesh: while exact geometric computations provide a robust solution to this issue, their high computational cost has prompted a number of faster solutions based on, e.g., local floating-point angle sorting to significantly accelerate the process — but losing robustness in doing so. In this paper 13, entitled TopoCut: Fast and Robust Planar Cutting of Arbitrary Domains, we introduce a new approach to planar cutting of 3D domains that substitutes topological inference for numerical ordering through a novel mesh data structure, reverting to exact numerical evaluations only in the rare cases where they are strictly necessary. We show that our novel concept of topological cuts exploits the inherent structure of cut-cell mesh generation to save computational time while still guaranteeing exactness for, and robustness to, arbitrary cuts and surface geometry. We demonstrate the superiority of our approach over state-of-the-art methods on almost 10,000 meshes with a wide range of geometric and topological complexity. We also provide an open-source implementation.
6.2 Geometry for simulation
6.2.1 General Regularized Green’s Functions for Elasticity
Participants: Mathieu Desbrun. In collaboration with Dr. Jiong Chen, Telecom.
The fundamental solutions (Green’s functions) of linear elasticity for an infinite and isotropic medium are ubiquitous in interactive graphics applications that cannot afford the computational costs of volumetric meshing and finite-element simulation. For instance, the recent work of de Goes and James at ACM SIGGRAPH 2017 leveraged these Green’s functions to formulate sculpting tools that capture, in real time, broad and physically-plausible deformations more intuitively and realistically than traditional editing brushes. In this paper 21, entitled Go Green: General Regularized Green’s Functions for Elasticity, we extend this family of Green’s functions by exploiting the anisotropic behavior of general linear elastic materials, where the relationship between stress and strain in the material depends on its orientation. While this more general framework prevents the existence of analytical expressions for its fundamental solutions, we show that a finite sum of spherical harmonics can be used to decompose a Green’s function, which can be further factorized into directional, radial, and material-dependent terms. From such a decoupling, we show how to numerically derive sculpting brushes that generate anisotropic deformations and finely control their falloff profiles in real time.
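For context, and as a classical result recalled here rather than a contribution of the paper: in the isotropic case, the fundamental solution being generalized is the Kelvin solution, which gives the displacement at a point $\mathbf{x}$ induced by a point force $\mathbf{f}$ applied at the origin of an infinite medium with shear modulus $\mu$ and Poisson ratio $\nu$:

$$u_i(\mathbf{x}) \;=\; \frac{1}{16\pi\mu(1-\nu)\,r}\left[(3-4\nu)\,\delta_{ij} + \frac{x_i x_j}{r^2}\right] f_j, \qquad r = \|\mathbf{x}\|.$$

The anisotropic setting studied in the paper no longer admits such a closed form, hence the spherical-harmonics decomposition described above.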
6.2.2 Efficient Kinetic Simulation of Two-Phase Flows
Participants: Mathieu Desbrun, Wei Li. In collaboration with Prof. Xiaopei Liu, Shanghaitech University, PRC.
Real-life multiphase flows exhibit a number of complex and visually appealing behaviors, involving bubbling, wetting, splashing, and glugging. However, most state-of-the-art simulation techniques in graphics can only demonstrate a limited range of multiphase flow phenomena, due to their inability to handle the real water-air density ratio and to the large amount of numerical viscosity introduced in the flow simulation and its coupling with the interface. Recently, kinetic-based methods have achieved success in simulating large density ratios and high Reynolds numbers efficiently; but their memory overhead, limited stability, and numerically-intensive treatment of coupling with immersed solids remain enduring obstacles to their adoption in movie productions. In this paper 17 entitled Efficient Kinetic Simulation of Two-Phase Flows, we propose a new kinetic solver to couple the incompressible Navier-Stokes equations with a conservative phase-field equation which remedies these major practical hurdles. The resulting two-phase immiscible fluid solver is shown to be efficient due to its massively-parallel nature and GPU implementation, as well as very versatile and reliable because of its enhanced stability to large density ratios, high Reynolds numbers, and complex solid boundaries. We highlight the advantages of our solver through various challenging simulation results that capture intricate and turbulent air-water interaction, including comparisons to previous work and real footage.
6.3 Geometry for data science
6.3.1 Signed Barcodes for Multi-Parameter Persistence via Rank Decompositions
Participants: Steve Oudot. In collaboration with Magnus Botnan (VU Amsterdam) and Steffen Oppermann (NTNU).
In this work 20 we introduce the signed barcode, a new visual representation of the global structure of the rank invariant of a multi-parameter persistence module or, more generally, of a poset representation. Like its unsigned counterpart in one-parameter persistence, the signed barcode encodes the rank invariant as a ℤ-linear combination of rank invariants of indicator modules supported on segments in the poset. It can also be enriched to encode the generalized rank invariant as a ℤ-linear combination of generalized rank invariants in fixed classes of interval modules. In the paper we develop the theory behind these rank decompositions, showing under what conditions they exist and are unique, so that the signed barcode is canonically defined. We also illustrate the contribution of the signed barcode to the exploration of multi-parameter persistence modules through a practical example.
6.3.2 On the bottleneck stability of rank decompositions of multi-parameter persistence modules
Participants: Steve Oudot. In collaboration with Magnus Botnan (VU Amsterdam), Steffen Oppermann (NTNU), and Luis Scoccola (Northeastern University).
The notion of rank decomposition of a multi-parameter persistence module was introduced as a way of constructing complete and discrete representations of the rank invariant of the module. In particular, the minimal rank decomposition by rectangles of a persistence module, also known as the generalized persistence diagram, gives a uniquely defined representation of the rank invariant of the module by a pair of rectangle-decomposable modules. This pair is interpreted as a signed barcode, with the rectangle summands of the first (resp. second) module playing the role of the positive (resp. negative) bars. The minimal rank decomposition by rectangles generalizes the concept of persistence barcode from one-parameter persistence, and, being a discrete invariant, it is amenable to manipulations on a computer. However, here 25 we show that it is not bottleneck stable under the natural notion of signed bottleneck matching between signed barcodes. To remedy this, we turn our focus to the signed barcode induced by the Betti numbers of the module relative to the so-called rank exact structure, which we prove to be bottleneck stable under signed matchings. As part of our proof, we obtain two intermediate results of independent interest: we compute the global dimension of the rank exact structure on the category of finitely presentable multi-parameter persistence modules, and we prove a bottleneck stability result for hook-decomposable modules, which are in fact the relative projective modules of the rank exact structure. We also bound the size of the multigraded Betti numbers relative to the rank exact structure in terms of the usual multigraded Betti numbers, we prove a universality result for the dissimilarity function induced by the notion of signed matching, and we compute, in the two-parameter case, the global dimension of a different exact structure that is related to the upsets of the indexing poset.
6.3.3 On Rectangle-Decomposable 2-Parameter Persistence Modules
Participants: Vadim Lebovici, Steve Oudot. In collaboration with Magnus Botnan (VU Amsterdam).
In this work 12 we address two questions: (a) can we identify a sensible class of 2-parameter persistence modules on which the rank invariant is complete? (b) can we determine efficiently whether a given 2-parameter persistence module belongs to this class? We provide positive answers to both questions, and our class of interest is that of rectangle-decomposable modules. Our contributions include: on the one hand, a proof that the rank invariant is complete on rectangle-decomposable modules, together with an inclusion-exclusion formula for counting the multiplicities of the summands; on the other hand, algorithms to check whether a module induced in homology by a bifiltration is rectangle-decomposable, and to decompose it in the affirmative, with a better complexity than state-of-the-art decomposition methods for general 2-parameter persistence modules. Our algorithms are backed up by a new structure theorem, whereby a 2-parameter persistence module is rectangle-decomposable if, and only if, its restrictions to squares are. This local characterization is key to the efficiency of our algorithms, and it generalizes previous conditions derived for the smaller class of block-decomposable modules. It also admits an algebraic formulation that turns out to be a weaker version of the one for block-decomposability. By contrast, we show that general interval-decomposability does not admit such a local characterization, even when locality is understood in a broad sense. Our analysis focuses on the case of modules indexed over finite grids; the more general cases are left as future work.
6.3.4 Hybrid Transforms of Constructible Functions
Participants: Vadim Lebovici.
In this work 15 we introduce a general definition of hybrid transforms for constructible functions. These are integral transforms combining Lebesgue integration and Euler calculus. Lebesgue integration gives access to well-studied kernels and to regularity results, while Euler calculus conveys topological information and allows for compatibility with operations on constructible functions. We conduct a systematic study of such transforms and introduce two new ones: the Euler–Fourier and Euler–Laplace transforms. We show that the first has a left inverse and that the second provides a satisfactory generalization of Govc and Hepworth’s persistent magnitude to constructible sheaves, in particular to multi-parameter persistence modules. Finally, we prove index-theoretic formulae expressing a wide class of hybrid transforms as generalized Euler integral transforms. This yields expectation formulae for transforms of constructible functions associated with (sub)level-set persistence of random Gaussian filtrations.
6.3.5 Learning Multi-resolution Functional Maps with Spectral Attention for Robust Shape Matching
Participants: Maks Ovsjanikov, Lei Li, Nicolas Donati.
In this work 23, published at NeurIPS, we present a novel non-rigid shape matching framework based on multi-resolution functional maps with spectral attention. The framework is applicable in both supervised and unsupervised settings, and the network is trained so that it can adapt the spectral resolution depending on the given shape input. The approach is not only accurate on near-isometric input, but also robust and able to produce reasonable matchings even in the presence of significant non-isometric distortion. The superior performance of the approach is demonstrated through experiments on challenging near-isometric and non-isometric shape matching benchmarks.
6.3.6 Neural Correspondence Prior for Effective Unsupervised Shape Matching
Participants: Maks Ovsjanikov, Souhaib Attaiki.
In this work 19, published at NeurIPS, we present a new paradigm for computing correspondences between 3D shapes. Our approach is fully unsupervised and can lead to high-quality correspondences even in challenging cases such as sparse point clouds or non-isometric meshes, where current methods fail. Most notably, we show that given a noisy map as input, training a feature extraction network with the input map as supervision tends to remove artifacts from the input and can act as a powerful correspondence denoising mechanism, both between individual pairs and within a collection. We call this approach Neural Correspondence Prior (NCP) and show that it significantly improves the accuracy of state-of-the-art maps, especially when trained within a collection.
6.3.7 DiffusionNet: Discretization Agnostic Learning on Surfaces
Participants: Maks Ovsjanikov, Souhaib Attaiki. In collaboration with Nicholas Sharp and Keenan Crane (CMU).
In this work 18 we introduce a new general-purpose approach to deep learning on three-dimensional surfaces based on the insight that a simple diffusion layer is highly effective for spatial communication. The resulting networks are automatically robust to changes in resolution and sampling of a surface—a basic property that is crucial for practical applications. Our networks can be discretized on various geometric representations, such as triangle meshes or point clouds, and can even be trained on one representation and then applied to another. We optimize the spatial support of diffusion as a continuous network parameter ranging from purely local to totally global, removing the burden of manually choosing neighborhood sizes. The only other ingredients in the method are a multi-layer perceptron applied independently at each point and spatial gradient features to support directional filters. The resulting networks are simple, robust, and efficient. Here, we focus primarily on triangle mesh surfaces and demonstrate state-of-the-art results for a variety of tasks, including surface classification, segmentation, and non-rigid correspondence.
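The core idea of a learned diffusion layer can be sketched in a few lines; the code below is a hedged, simplified illustration (hypothetical tensor names, identity mass matrix, precomputed Laplacian eigenpairs), not the released implementation of the paper.

```python
# Simplified sketch of a learned spectral diffusion layer: each feature channel is
# diffused over the surface for a learnable amount of time in the Laplacian
# eigenbasis, interpolating between purely local and fully global support.
import torch
import torch.nn as nn

class LearnedDiffusion(nn.Module):
    def __init__(self, n_channels):
        super().__init__()
        # One diffusion time per channel, kept positive via softplus.
        self.t = nn.Parameter(torch.zeros(n_channels))

    def forward(self, feats, evals, evecs):
        # feats: (V, C) per-vertex features; evals: (K,) Laplacian eigenvalues;
        # evecs: (V, K) eigenvectors, assumed orthonormal (identity mass matrix).
        coeffs = evecs.T @ feats                                   # (K, C) spectral coefficients
        times = nn.functional.softplus(self.t)                     # (C,) positive diffusion times
        decay = torch.exp(-evals[:, None] * times[None, :])        # (K, C) heat-kernel decay
        return evecs @ (decay * coeffs)                            # diffused features, (V, C)
```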
6.3.8 Reduced Representation of Deformation Fields for Effective Non-rigid Shape Matching
Participants: Maks Ovsjanikov, Ramana Sundararaman.
In this work 24, published at NeurIPS, we present a novel approach for computing correspondences between non-rigid objects by exploiting a reduced representation of deformation fields. The approach builds on mesh-free methods: a network learns deformation parameters at a sparse set of positions in space (nodes), and the continuous deformation field is reconstructed in closed form with guaranteed smoothness. This reduction in degrees of freedom yields significant improvements in data efficiency and enables training with limited supervision, while also providing direct access to the first-order derivatives of the deformation field, which facilitates enforcing desirable regularizations effectively. The resulting model has high expressive power and is able to capture complex deformations, as shown by state-of-the-art results across multiple deformable shape matching benchmarks.
6.3.9 Weakly Supervised 3D Local Descriptor Learning for Point Cloud Registration
Participants: Maks Ovsjanikov, Lei Li.
In this work 16, we present a novel method called WSDesc to learn 3D local descriptors in a weakly supervised manner for robust point cloud registration. Our work builds upon recent 3D CNN-based descriptor extractors, which leverage a voxel-based representation to parameterize local geometry of 3D points. Instead of using a predefined fixed-size local support in voxelization, we propose to learn the optimal support in a data-driven manner. To this end, we design a novel differentiable voxelization layer that can back-propagate the gradient to the support size optimization. To train the extracted descriptors, we propose a novel registration loss based on the deviation from rigidity of 3D transformations, and the loss is weakly supervised by the prior knowledge that the input point clouds have partial overlap, without requiring ground-truth alignment information. Through extensive experiments, we show that our learned descriptors yield superior performance on existing geometric registration benchmarks.
6.3.10 Learning Locally Accurate and Globally Consistent Non-Rigid Shape Correspondence
Participants: Maks Ovsjanikov, Lei Li, Souhaib Attaiki.
In this work 22, we present a novel learning-based framework that combines the local accuracy of contrastive learning with the global consistency of geometric approaches, for robust non-rigid matching. We first observe that while contrastive learning can lead to powerful point-wise features, the learned correspondences commonly lack smoothness and consistency, owing to the purely combinatorial nature of the standard contrastive losses. To overcome this limitation we propose to boost contrastive feature learning with two types of smoothness regularization that inject geometric information into correspondence learning. With this novel combination in hand, the resulting features are both highly discriminative across individual points, and, at the same time, lead to robust and consistent correspondences, through simple proximity queries. Our framework is general and is applicable to local feature learning in both the 3D and 2D domains. We demonstrate the superiority of our approach through extensive experiments on a wide range of challenging matching benchmarks, including 3D non-rigid shape correspondence and 2D image keypoint matching.
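As a rough illustration of this general recipe (and not the exact losses used in the paper), the snippet below combines an InfoNCE-style point-wise contrastive term with a Dirichlet (Laplacian) smoothness penalty on the learned features; all names and weights are hypothetical.

```python
# Illustrative sketch only: point-wise contrastive loss + Laplacian smoothness term.
import torch
import torch.nn.functional as F

def contrastive_plus_smoothness(feat_a, feat_b, corr, L, lam=0.1, tau=0.07):
    # feat_a, feat_b: (V, C) per-vertex features on two shapes;
    # corr: (V,) indices mapping vertices of shape A to their matches on shape B;
    # L: (V, V) Laplacian of shape A (dense or sparse).
    sim = feat_a @ feat_b.T / tau                                 # (V, V) similarity logits
    contrastive = F.cross_entropy(sim, corr)                      # InfoNCE-style point-wise term
    smoothness = torch.sum(feat_a * (L @ feat_a)) / feat_a.shape[0]  # Dirichlet energy of features
    return contrastive + lam * smoothness
```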
7 Bilateral contracts and grants with industry
7.1 Bilateral contracts with industry
Participants: Mathieu Desbrun.
Mathieu Desbrun participated in the Inria-Dassault Systèmes convention, and Lucas Brifault became the first PhD student in GeomeriX to start under this convention (which is not a CIFRE grant), for a thesis entitled “Applied geometric measure theory for the modeling of complex shapes” (Théorie de la mesure géométrique appliquée pour la modélisation de formes complexes).
8 Partnerships and cooperations
8.1 International research visitors
8.1.1 Visits of international scientists
- Prof. Christian Lessig, from Otto-von-Guericke-Universität Magdeburg, visited in January 2022 for a talk and an afternoon of research discussions.
Other international visits to the team
Sara Hahner
- Status: PhD student
- Institution of origin: Universität Bonn
- Country: Germany
- Dates: Sept. - Dec. 2022
- Context of the visit: creating generative models with surface-based deep learning approaches, in collaboration with Maks Ovsjanikov
- Mobility program/type of mobility: research stay
Siddharth Setlur
- Status: intern (Master)
- Institution of origin: ETH Zürich
- Country: Switzerland
- Dates: Sept. - Dec. 2022
- Context of the visit: Master's internship in topological data analysis under Steve Oudot
- Mobility program/type of mobility: internship
8.1.2 Visits to international teams
Research stays abroad
Mathieu Desbrun
- Visited institution: Caltech
- Country: USA
- Dates: March 2022
- Context of the visit: collaboration with Prof. Houman Owhadi on numerical homogenization
- Mobility program/type of mobility: research stay and lecture
Theo Braune
- Visited institution: Technische Universität Berlin
- Country: Germany
- Dates: November 2022
- Context of the visit: collaboration with Prof. Ulrich Pinkall
- Mobility program/type of mobility: research stay
8.2 European initiatives
8.2.1 Horizon Europe
ERC Starting Grant
Participants: Maks Ovsjanikov.
- Title: EXPROTEA: Exploring Relations in Structured Data with Functional Maps
- Partner Institution(s): none
- Date/Duration: 2018-2022
- Additional info/keywords: establishing theoretical foundations and designing efficient computational methods for analyzing, quantifying and exploring relations and variability in structured data sets, such as collections of geometric shapes, point clouds, and large networks or graphs, among others. Ultimately, we expect our study to lead to a new rigorous, unified paradigm for computational variability, providing a common language and sets of tools applicable across diverse underlying domains.
8.2.2 H2020 projects
H2020 EU Project Clipe
Participants: Pooran Memari.
- Title: Creating Lively Interactive Populated Environments
- Partner Institution(s): University of Cyprus, Universitat Politecnica de Catalunya, University College London, Trinity College Dublin, Max Planck Institute for Intelligent Systems, KTH Royal Institute of Technology
- Date/Duration: 2020-2024
- Additional info/keywords: this project designs new techniques to create and control interactive virtual worlds and characters, benefiting from opportunities opened by the wide availability of emergent technologies in the domains of human digitization and artificial intelligence.
8.3 National initiatives
AEx PreMediT
Participants: Steve Oudot.
- Title: Precision Medicine using Topology
- Partner Institution(s): CRESS, Hôtel-Dieu, France
- Date/Duration: 2022-2025
- Additional info/keywords: while recent advances in machine learning are opening promising prospects for precision medicine, the sometimes small size, sparsity, or partly categorical nature of the data involved pose crucial challenges. The goal of PreMediT is to address these challenges by integrating information about the geometric and topological structure of the data into the machine learning pipelines.
ANR AI Chair AIGRETTE
Participants: Maks Ovsjanikov.
- Title: Analyzing Large Scale Geometric Data Collections
- Partner Institution(s): ANR
- Date/Duration: 2020-2024
- Additional info/keywords: motivated by the deluge of 3D data using geometric representations (point clouds, triangle and quad meshes, graphs, ...) that are ill-suited for modern applications, we are developing efficient algorithms and mathematical tools for analyzing diverse geometric data collections.
9 Dissemination
People involved: Mathieu Desbrun, Pooran Memari, Maks Ovsjanikov, Steve Oudot.
9.1 Promoting scientific activities
9.1.1 Scientific events: organisation
Member of the organizing committees
- Maks Ovsjanikov co-organized the Hi! PARIS Meet Up! on Computer Vision in September 2022. This event, which took place at the Capgemini corporate building, consisted of presentations by Computer Vision experts from Hi! PARIS institutions, as well as a general round-table discussion on Computer Vision topics with corporate guests, in particular corporate donors of Hi! PARIS.
9.1.2 Scientific events: selection
Chair of conference program committees
- M. Desbrun chaired the international committee deciding on the Best Thesis award at ACM SIGGRAPH 2022.
Member of the conference program committees
- Steve Oudot was a member of the scientific committee of the GETCO 2022 conference.
- Pooran Memari was a member of the program committees of the Eurographics 2022 conference, of the Symposium on Geometry Processing, and of the ACM SIGGRAPH technical papers track.
- Maks Ovsjanikov was a member of the program committee of the Symposium on Geometry Processing, and of the ACM SIGGRAPH technical papers track.
- Mathieu Desbrun was a member of the Courses Program Committee for the ACM SIGGRAPH Asia conference.
9.1.3 Journal
Member of the editorial boards
- Steve Oudot is a member of the Editorial Board of the Journal of Computational Geometry.
- Mathieu Desbrun is a member of the Editorial Board of the Journal of Geometric Mechanics.
- Pooran Memari is an Associate Editor of Computer Graphics Forum (CGF).
9.1.4 Invited talks
- M. Desbrun was a guest speaker at Luminy's Demi-Journées du Pôle Calcul in December 2022, a keynote speaker at Eurographics 2022, a keynote speaker at GraPhys 2022, a guest speaker at Applied Geometry for Data Sciences in Chongqing, China, and an invited speaker at Inria Sophia-Antipolis.
- P. Memari was a keynote speaker at the Journées Françaises d'Informatique Graphique (JFIG) in November 2022.
- Steve Oudot. Interview of Frédéric Chazal. Applied Algebraic Topology Research Network Interview Series, 2022-2023.
- Steve Oudot. Optimization in topological data analysis. Workshop Geometry, topology and statistics in data sciences, Institut Henri Poincaré (Paris), Oct. 2022.
- Steve Oudot. Generalized persistence diagrams, rank decompositions, signed barcodes. Workshop Interactions between representation theory and topological data analysis, Center for Advanced Study (Oslo), Dec. 2022.
9.1.5 Leadership within the scientific community
- Steve Oudot was co-responsible, with L. Castelli-Aleardi, for the GT GeoAlgo working group within the GdR-IM (until Sept. 2022).
- Pooran Memari is the local coordinator for the GT-MG working group (Modélisation Géométrique / Geometric Modeling).
9.1.6 Research administration
- Steve Oudot is vice-president of the Commission Scientifique at Inria Saclay.
- Pooran Memari is a member of the LIX web committee and a deputy member of the LIX laboratory council, École Polytechnique.
- Pooran Memari and Mathieu Desbrun were members of the hiring committee of the computer science department of École Polytechnique (DIX) in 2022.
9.2 Teaching - Supervision - Juries
9.2.1 Teaching
- Master: Steve Oudot, Topological data analysis, 45h eq-TD, M1, École polytechnique, France;
- Master: Mathieu Desbrun, Digital Representation and Analysis of Shapes, M2, École polytechnique, France;
- Master: Pooran Memari, Artificial Intelligence and Advanced Visual Computing, and Digital Representation and Analysis of Shapes, M2, École polytechnique, France;
- Master: Maks Ovsjanikov, Artificial Intelligence and Advanced Visual Computing, École polytechnique, France;
- Undergrad-Master: Steve Oudot, Algorithms for data analysis in C++, 22.5h eq-TD, L3/M1, École Polytechnique, France.
- Master-PhD: Pooran Memari is a member of the Jury d’admission Masters & PhD Track IGD (Interaction, Graphics & Design), IP-Paris (2020-2023).
9.2.2 Supervision
- PhD in progress: Vadim Lebovici, Laplace transform for constructible functions. Started Sept. 1st, 2020. Steve Oudot and François Petit (CRESS).
- PhD in progress: Julie Mordacq, Topological data analysis and machine learning to analyze and predict phase transitions in n dimensions. Started Sept. 1st, 2022. Steve Oudot.
- PhD in progress: Lucas Brifault, Applied geometric measure theory for the modeling of complex shapes. Started May 1st, 2022. Mathieu Desbrun.
- PhD in progress: Theo Braune, Geometry-based discretization of differential operators. Started Oct. 1st, 2022. Mathieu Desbrun.
- PhD in progress: Nissim Maruani, Cognitive and physics-informed 3D models for digital twins and smart territories. Started Oct. 1st, 2022. Pierre Alliez and Mathieu Desbrun.
- PhD in progress: Jiayi Wei, New Geometric Representations for Volumetric Brain Analysis. Started Oct. 1st, 2020. Pooran Memari and Damien Rohmer.
- PhD in progress: Nicolas Donati, Robust representations for 3D shape matching via supervised and unsupervised learning. Started Sep. 1st, 2019. Maks Ovsjanikov and Etienne Corman.
- PhD in progress: Mariem Mezghanni, Structural and functional learning for the automation of industrial design. Started Nov. 1st, 2019. Maks Ovsjanikov.
- PhD in progress: Souhaib Attaiki, Analysis of 3D shapes with deep learning methods. Started Oct. 1st, 2020. Maks Ovsjanikov.
- PhD in progress: Robin Magnet, Exploring variability in generic data. Started Feb. 1st, 2021. Maks Ovsjanikov.
- PhD in progress: Ramana S Sundararaman, Large-scale 3D shape analysis with learning-based approaches. Started Oct. 1st, 2021. Maks Ovsjanikov.
9.2.3 Juries
- Steve Oudot was an invited member of the Ph.D. defence committee of Julian Le Deunff, IMT Atlantique, Dec. 2022.
- Maks Ovsjanikov was a reviewer for the Ph.D. of Janis Born, Aachen University, Mar. 2022.
- Maks Ovsjanikov was an examiner for the Ph.D. defence of Théo Deprelle, Université Paris-Est, Oct. 2022.
- Maks Ovsjanikov was an examiner for the Ph.D. defence of Tarek Ben Charrada, Cergy Paris Université, Oct. 2022.
- Pooran Memari was an examiner for the Ph.D. defence of Yanis Marchand, Université Paris Est, Nov. 2022.
10 Scientific production
Major publications
10.1 Publications of the year
International journals
International peer-reviewed conferences
Reports & preprints
10.2 Other
Cited publications