Our overall objective is the computerized geometric modeling of complex scenes from physical measurements. Within the geometric modeling and processing pipeline, this objective corresponds to the steps required to convert physical scenes into effective digital representations: analysis, reconstruction and approximation. A longer-term objective is the synthesis of complex scenes. This objective is related to analysis, as we assume that the main sources of data are measurements and that synthesis is carried out from samples.
The related scientific challenges include i) resilience to defect-laden data caused by uncertainty in the measurement processes and by imperfect algorithms along the pipeline, ii) resilience to heterogeneous data, both in type and in scale, iii) dealing with massive data, and iv) recovering or preserving the structure of complex scenes. We define the quality of a computerized representation by its i) geometric accuracy, or faithfulness to the physical scene, ii) complexity, iii) structural accuracy and control, and iv) amenability to effective processing and high-level scene understanding.
Geometric modeling and processing revolve around three main end goals: a computerized shape representation that can be visualized (creating a realistic or artistic depiction), simulated (anticipating the real) or realized (manufacturing a conceptual or engineering design). Aside from the mere editing of geometry, central research themes in geometric modeling involve conversions between physical (real), discrete (digital), and mathematical (abstract) representations. Going from physical to digital is referred to as shape acquisition and reconstruction; going from mathematical to discrete is referred to as shape approximation and mesh generation; going from discrete to physical is referred to as shape rationalization.
Geometric modeling has become an indispensable component of computational and reverse engineering. Simulations are now routinely performed on complex shapes derived not only from computer-aided design but also from an increasing amount of available measurements. The scale of acquired data is growing quickly: we no longer deal exclusively with individual shapes but with entire scenes, possibly at the scale of entire cities, with many objects defined as structured shapes. We are witnessing a rapid evolution of acquisition paradigms, with an increasing variety of sensors and the development of community-generated as well as openly disseminated data.
In recent years, the evolution of acquisition technologies and methods has translated into an increasing overlap of algorithms and data across the computer vision, image processing, and computer graphics communities. Beyond the rapid increase in resolution enabled by technological advances in sensors and image mosaicking methods, the line between laser scan data and photos is getting thinner. Combining, e.g., laser scanners with panoramic cameras yields massive 3D point sets with color attributes. In addition, dense point sets can now be generated not only by laser scanners but also by photogrammetry techniques when a well-designed acquisition protocol is used. Depth cameras are increasingly common and, beyond retrieving depth information, the main acquisition systems can be enriched with additional hardware that measures geometric information about the sensor and improves data registration: e.g., accelerometers or GPS for geographic location, and compasses or gyroscopes for orientation. Finally, complex scenes can be observed at different scales, ranging from satellite and aerial down to pedestrian level.
These evolutions allow practitioners to measure urban scenes at resolutions that were until now possible only at the scale of individual shapes. The related scientific challenge, however, goes beyond dealing with the massive data sets resulting from increased resolution, as complex scenes are composed of multiple objects with structural relationships. The latter relate i) to the way individual shapes are grouped to form objects, object classes or hierarchies, ii) to geometry, when dealing with similarity, regularity, parallelism or symmetry, and iii) to domain-specific semantic considerations. Beyond reconstruction and approximation, the consolidation and synthesis of complex scenes require rich structural relationships.
The problems arising from these evolutions suggest that the strengths of geometry and images may be combined in the form of new methodological solutions such as photo-consistent reconstruction. In addition, the process of measuring the geometry of the sensors (through gyroscopes and accelerometers) often requires both geometry processing and image analysis for improved accuracy and robustness. Modeling urban scenes from measurements illustrates this growing synergy, and it has become a central concern for a variety of applications ranging from urban planning and simulation to rendering and special effects.
Complex scenes are usually composed of a large number of objects which may significantly differ in terms of complexity, diversity, and density. These objects must be identified and their structural relationships must be recovered in order to model the scenes with improved robustness, low complexity, variable levels of detail and, ultimately, semantization (the automated enrichment of semantic content).
Object classification is an ill-posed task in which the objects composing a scene are detected and recognized with respect to predefined classes, an objective that goes beyond scene segmentation. The high variability within each class may explain the success of stochastic approaches, which are able to model widely variable classes. As it requires a priori knowledge, this process is often domain-specific: in urban scenes, for instance, we wish to distinguish between classes such as ground, vegetation and buildings. Additional challenges arise when each class must be refined, such as roof superstructures for urban reconstruction.
Structure extraction consists in recovering structural relationships between objects or parts of objects. The structure may be related to adjacencies between objects, hierarchical decomposition, singularities or canonical geometric relationships. It is crucial for effective geometric modeling through levels of detail or hierarchical multiresolution modeling. Ideally, we wish to learn the structural rules that govern the construction of the physical scene. Understanding the main canonical geometric relationships between object parts involves detecting regular structures and equivalences under certain transformations, such as parallelism, orthogonality and symmetry. Identifying structural and geometric repetitions or symmetries is relevant for dealing with missing data during data consolidation.
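As a toy illustration of detecting canonical geometric relationships, the hedged sketch below (plain Python, hypothetical function names not taken from our software) flags near-parallel and near-orthogonal pairs among unit normals by thresholding absolute dot products; real structure-extraction pipelines operate on detected primitives and handle noise far more carefully.

```python
def classify_pairs(normals, tol=1e-2):
    """Label pairs of unit normals as 'parallel' or 'orthogonal' when
    their absolute dot product is close to 1 or 0, respectively."""
    relations = {}
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            dot = abs(sum(a * b for a, b in zip(normals[i], normals[j])))
            if dot > 1.0 - tol:
                relations[(i, j)] = 'parallel'
            elif dot < tol:
                relations[(i, j)] = 'orthogonal'
    return relations
```

On the normals of two opposite facades and a perpendicular one, the sketch recovers the expected parallelism and orthogonality relations.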
Data consolidation is a problem of growing interest for practitioners, with the increase of heterogeneous and defect-laden data. To be exploitable, such defect-laden data must be consolidated by improving the data sampling quality and by reinforcing the geometric and structural relations underlying the observed scenes. Enforcing canonical geometric relationships such as local coplanarity or orthogonality is relevant for the registration of heterogeneous or redundant data, as well as for improving the robustness of the reconstruction process.
Our objective is to explore the approximation of complex shapes and scenes with surface and volume meshes, as well as with surface and domain tilings. A general way to state the shape approximation problem is to say that we search for the shape discretization (possibly with several levels of detail) that realizes the best complexity/distortion trade-off. Such a problem statement requires defining a discretization model, an error metric to measure distortion, and a way to measure complexity. The latter is most commonly expressed as a number of polygonal primitives, but measures closer to information theory lead to quantities such as the number of bits or the minimum description length.
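To make the complexity/distortion trade-off concrete, here is a minimal, hedged sketch (plain Python, hypothetical names, polylines instead of meshes): a greedy simplification removes the vertex whose removal adds the least error, and distortion is then measured as the largest distance from an original vertex to the simplified polyline.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(polyline, budget):
    """Greedily remove interior vertices until only `budget` vertices
    remain, always removing the vertex whose removal introduces the
    least local error (its distance to the bridging segment)."""
    pts = list(polyline)
    while len(pts) > budget:
        best_i, best_err = None, float('inf')
        for i in range(1, len(pts) - 1):
            err = point_segment_distance(pts[i], pts[i - 1], pts[i + 1])
            if err < best_err:
                best_i, best_err = i, err
        del pts[best_i]
    return pts

def distortion(original, simplified):
    """Max distance from the original vertices to the simplified polyline."""
    return max(min(point_segment_distance(p, simplified[i], simplified[i + 1])
                   for i in range(len(simplified) - 1))
               for p in original)
```

Shrinking the vertex budget (the complexity measure here) drives the measured distortion up, which is exactly the trade-off stated above.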
For surface meshes we intend to conceive methods that provide control and guarantees over both the global approximation error and the validity of the embedding. In addition, we seek resilience to heterogeneous data and robustness to noise and outliers. This would allow repairing and simplifying triangle soups with cracks, self-intersections and gaps. Another exploratory objective is to deal generically with different error metrics, such as the symmetric Hausdorff distance or a Sobolev norm that mixes errors in geometry and normals.
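The symmetric Hausdorff distance mentioned above can be sketched for finite point sets as follows (a brute-force illustration only; on meshes one must sample the surfaces and rely on spatial search structures):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite 2D point sets:
    the largest distance from a point of one set to the other set."""
    def one_sided(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(one_sided(A, B), one_sided(B, A))
```

The symmetrization matters: the one-sided distance from a subset to its superset is zero, while the symmetric distance is not.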
For surface and domain tiling, the term tiling is substituted for meshing to stress the fact that tiles need not be simple elements but can model complex smooth shapes such as bilinear quadrangles. Quadrangle surface tiling is central to the so-called resurfacing problem in reverse engineering: the goal is to tile an input raw surface geometry such that the union of the tiles approximates the input well and each tile matches certain properties related to its shape or size. In addition, we may require parameterization domains with a simple structure. Our goal is to devise surface tiling algorithms that are reliable and resilient to defect-laden inputs, effective from the shape approximation point of view, and offer flexible control over the structure of the tiling.
Assuming a geometric dataset made out of points or slices, the process of shape reconstruction amounts to recovering a surface or a solid that matches these samples. This problem is inherently ill-posed as infinitely-many shapes may fit the data. One must thus regularize the problem and add priors such as simplicity or smoothness of the inferred shape.
The concept of geometric simplicity has led to a number of interpolating techniques commonly based upon the Delaunay triangulation. The concept of smoothness has led to a number of approximating techniques that commonly compute an implicit function such that one of its isosurfaces approximates the inferred surface. Reconstruction algorithms can also use an explicit set of prior shapes for inference, by assuming that the observed data can be described by these predefined prior shapes. One key lesson learned in the shape reconstruction problem is that there is probably no single solution that can handle all cases, each case coming with its own distinctive features. In addition, some data sets, such as point sets acquired on urban scenes, are very domain-specific and require a dedicated line of research.
In recent years the smooth, closed case (i.e., shapes without sharp features or boundaries) has received considerable attention. However, state-of-the-art methods have several shortcomings: in addition to being generally not robust to outliers and insufficiently robust to noise, they often require additional attributes as input, such as lines of sight or oriented normals. We wish to devise shape reconstruction methods which are both geometrically and topologically accurate without requiring additional attributes, while exhibiting resilience to defect-laden inputs. Resilience formally translates into stability with respect to noise and outliers. Correctness of the reconstruction translates into convergence in geometry and in (the stable parts of) topology of the reconstruction with respect to the inferred shape known through measurements.
Moving from the smooth, closed case to the piecewise smooth case (possibly with boundaries) is considerably harder as the ill-posedness of the problem applies to each sub-feature of the inferred shape. Further, very few approaches tackle the combined issue of robustness (to sampling defects, noise and outliers) and feature reconstruction.
In addition to tackling scientific challenges, our research on geometric modeling and processing is motivated by applications to computational engineering, reverse engineering, digital mapping and urban planning. The main deliverables of our research are algorithms with theoretical foundations. Ultimately we wish to contribute to making geometric modeling and processing routine for practitioners who deal with real-world data. Our contributions may also be used as a sound basis for future software and technology developments.
Our ambition for technology transfer is to consolidate the components of our research experiments as new software components for the Computational Geometry Algorithms Library (CGAL). Through CGAL we wish to contribute to the “standard geometric toolbox”, so as to provide a generic answer to application needs instead of fragmenting our contributions. We already cooperate with the Inria spin-off company Geometry Factory, which commercializes CGAL, maintains it and provides technical support.
We also started increasing our research momentum with companies through advising Cifre Ph.D. theses and postdoctoral fellows.
CGAL is a C++ library of geometric algorithms and data structures. Our team is involved in several ongoing implementations: surface reconstruction, point set processing, shape detection in unstructured point sets, constrained 3D Delaunay triangulations, and generalized barycentric coordinates (in collaboration with Dmitry Anisimov). Pierre Alliez is a member of the CGAL Editorial Board.
Point processes are a natural extension of Markov Random Fields (MRFs), designed to handle parametric objects. They have shown efficiency and competitiveness in tackling object extraction problems in vision. Simulating these stochastic models is, however, a difficult task. The performance of existing samplers is limited in terms of computation time and convergence stability, especially on large scenes. We propose a new sampling procedure based on a Monte Carlo formalism. Our algorithm exploits the Markovian property of point processes to perform the sampling in parallel. This procedure is embedded into a data-driven mechanism so that the points are distributed in the scene according to spatial information extracted from the input data. The performance of the sampler is analyzed through a set of experiments on various object detection problems from large scenes, including comparisons to existing algorithms. The sampler is also tested as an optimization algorithm for MRF-based labeling problems.
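The parallel, data-driven sampler itself is beyond a few lines, but the underlying Monte Carlo machinery can be illustrated with a serial birth-death Metropolis-Hastings sampler for a Strauss point process on the unit square. This is a hedged sketch with hypothetical names; the acceptance ratios follow the standard birth-death formulas, and the boundary case at the empty configuration is handled loosely.

```python
import math, random

def strauss_log_density(points, beta, gamma, r):
    """Unnormalized log-density of a Strauss process:
    n*log(beta) + s*log(gamma), where s counts point pairs closer
    than the interaction radius r (gamma < 1 encodes repulsion)."""
    n = len(points)
    s = sum(1 for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < r)
    return n * math.log(beta) + s * math.log(gamma)

def birth_death_sampler(iters, beta=50.0, gamma=0.2, r=0.1, seed=0):
    """Serial Metropolis-Hastings sampler alternating birth and death
    moves; the proposal ratio accounts for the uniform choice of the
    point to kill and for the unit-area birth window."""
    rng = random.Random(seed)
    pts = []
    for _ in range(iters):
        if pts and rng.random() < 0.5:
            # death move: remove a uniformly chosen point
            i = rng.randrange(len(pts))
            cand = pts[:i] + pts[i + 1:]
            log_alpha = (strauss_log_density(cand, beta, gamma, r)
                         - strauss_log_density(pts, beta, gamma, r)
                         + math.log(len(pts)))
        else:
            # birth move: insert a uniform point in the unit square
            cand = pts + [(rng.random(), rng.random())]
            log_alpha = (strauss_log_density(cand, beta, gamma, r)
                         - strauss_log_density(pts, beta, gamma, r)
                         - math.log(len(cand)))
        if math.log(rng.random() + 1e-12) < log_alpha:
            pts = cand
    return pts
```

The point-process samplers used in object extraction extend this scheme with data-driven proposal kernels and parametric marks (rectangles, ellipses) attached to each point.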
In collaboration with EADS ASTRIUM.
We contributed a method for the automatic reconstruction of permanent structures of indoor scenes, such as walls, floors and ceilings, from raw point clouds acquired by laser scanners. Our approach employs graph-cut to solve an inside/outside labeling of a space decomposition. To allow for an accurate reconstruction, the space decomposition is aligned with the permanent structures. A Hough transform is applied to extract the wall directions while allowing a flexible reconstruction of scenes. The graph-cut formulation takes into account data consistency, through an inside/outside prediction computed for the cells of the space decomposition by stochastic ray casting, while favoring a low geometric complexity of the model. Our algorithm produces watertight reconstructed models of multi-level buildings and complex scenes.
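The inside/outside labeling via graph-cut can be illustrated on a toy 1D cell decomposition. The hedged Python sketch below (hypothetical names; Edmonds-Karp max flow instead of the optimized graph-cut solvers used in practice) wires a source to each cell with its "outside" cost, each cell to the sink with its "inside" cost, and adjacent cells with a smoothness weight; the minimum s-t cut then yields the energy-minimizing labeling.

```python
from collections import deque

def min_cut_labels(unary_in, unary_out, pairwise):
    """Binary inside/outside labeling of a 1D row of cells by s-t min
    cut, computed with Edmonds-Karp max flow on a dense capacity matrix."""
    n = len(unary_in)
    S, T = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        cap[S][i] = unary_out[i]  # cut if cell i ends on the sink side
        cap[i][T] = unary_in[i]   # cut if cell i ends on the source side
    for i in range(n - 1):        # smoothness between adjacent cells
        cap[i][i + 1] = cap[i + 1][i] = pairwise

    def bfs_parents():
        parent = [-1] * (n + 2)
        parent[S] = S
        queue = deque([S])
        while queue:
            u = queue.popleft()
            for v in range(n + 2):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        return parent

    while True:  # augment along shortest residual paths until none remain
        parent = bfs_parents()
        if parent[T] == -1:
            break
        bottleneck, v = float('inf'), T
        while v != S:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = T
        while v != S:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]

    reachable = bfs_parents()  # source side of the min cut = inside
    return ['in' if reachable[i] != -1 else 'out' for i in range(n)]
```

With strong inside evidence for the first two cells and outside evidence for the third, the cut pays one pairwise penalty and places the boundary between cells 1 and 2.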
In collaboration with Matthew Berger, Andrea Tagliasacchi, Lee Seversky, Joshua Levine, Andrei Sharf and Claudio Silva.
The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work focused on reconstructing a piecewise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations – not necessarily the explicit geometry. This state-of-the-art report surveys the field of surface reconstruction, providing a categorization with respect to priors, data imperfections, and reconstruction output. By considering a holistic view of surface reconstruction, this report provides a detailed characterization of the field, highlights similarities between diverse reconstruction techniques, and provides directions for future work in surface reconstruction.
In collaboration with David Cohen-Steiner.
We describe a framework for robust shape reconstruction from raw point sets, based on optimal transportation between measures, where the input point sets are seen as distributions of mass. In addition to robustness to defect-laden point sets hampered by noise and outliers, our approach can reconstruct smooth closed shapes as well as piecewise smooth shapes with boundaries.
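The notion of transport between measures underlying this approach can be illustrated in one dimension, where optimal transport between two uniform discrete measures of equal mass reduces to matching samples in sorted order (a hedged illustration of the transport cost only; the reconstruction framework itself transports mass from the input points onto candidate simplices):

```python
def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two uniform discrete measures on
    the line with the same number of samples: match samples in sorted
    order and average the transport distances."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)
```

Unlike a pointwise metric, this cost compares distributions: reordering the samples changes nothing, while shifting the whole measure is charged exactly by the shift.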
In collaboration with Leif Kobbelt.
We contributed an algorithm that approximates 2-manifold surfaces with Zometool models while preserving their topology. Zometool is a popular hands-on mathematical modeling system used in teaching, research and for recreational model assemblies at home. This construction system relies on a single node type with a small, fixed set of directions and only nine different edge types in its basic form. While naturally well suited for modeling symmetries, various polytopes or visualizing molecular structures, the inherent discreteness of the system poses difficult constraints on any algorithmic approach to supporting the modeling of freeform shapes. We contribute a set of local, topology-preserving Zome mesh modification operators enabling the efficient exploration of the space of 2-manifold Zome models around a given input shape. Starting from a rough initial approximation, the operators are iteratively selected within a stochastic framework guided by an energy functional measuring the quality of the approximation. We demonstrate our approach on a number of designs and also describe parameters used to explore different complexities and enable coarse approximations.
In collaboration with Jean-Daniel Boissonnat and Mariette Yvinec.
CGALmesh is the mesh generation software package of the Computational Geometry Algorithms Library (CGAL). It generates isotropic simplicial meshes – surface triangular meshes or volume tetrahedral meshes – from input surfaces, 3D domains and 3D multi-domains, with or without sharp features. The underlying meshing algorithm relies on restricted Delaunay triangulations to approximate domains and surfaces, and on Delaunay refinement to ensure both approximation accuracy and mesh quality. CGALmesh provides guarantees on approximation quality as well as on the size and shape of the mesh elements. It provides four optional mesh optimization algorithms to further improve the mesh quality. A distinctive property of CGALmesh is its high flexibility with respect to the input domain representation. Such flexibility is achieved through a careful software design, gathering into a single abstract concept, called the oracle, all required interface features between the meshing engine and the input domain. We already provide oracles for domains defined by polyhedral and implicit surfaces.
In collaboration with Hans-Christian Ebke and Leif Kobbelt from RWTH Aachen.
The most effective and popular tools for obtaining feature-aligned quad meshes from triangular input meshes are based on cross field guided parametrization. These methods are incarnations of a conceptual three-step pipeline: (1) cross field computation, (2) field-guided surface parametrization, (3) quad mesh extraction. While in most meshing scenarios the user prescribes a desired target quad size or edge length, this information is typically taken into account from step 2 onwards only, not in the cross field computation step. This becomes a problem in the presence of small-scale geometric or topological features or noise in the input mesh: closely placed singularities are induced in the cross field, which cannot be properly reproduced by vertices in a quad mesh with the prescribed edge length, causing severe distortions or even failure of the meshing algorithm. We reformulate the construction of cross fields as well as field-guided parametrizations in a scale-aware manner which effectively suppresses densely spaced features and noise, of both geometric and topological kinds. Dominant large-scale features are adequately preserved in the output by relying on the unaltered input mesh as the computational domain.
In collaboration with Technicolor and Gwenael Doerr.
A watermarking strategy for triangle surface meshes consists in modifying the vertex positions along radial directions in order to adjust the distribution of radial distances and thereby encode the desired payload. To guarantee that watermark embedding does not alter the center of mass, prior work formulated this task as a quadratic programming problem. We contribute a generalization of this formulation with: (i) integral reference primitives, (ii) arbitrary relocation directions to alter the vertex positions, and (iii) alternate distortion metrics to minimize the perceptual impact of the embedding process. These variants are evaluated against a range of attacks and we report both improved robustness performance, in particular against simplification attacks, and improved control over the embedding distortion.
In collaboration with Technicolor, thesis co-advised by Pierre Alliez and Gwenael Doerr.
3D models are valuable assets widely used in industry and likely to face piracy issues. This dissertation deals with robust mesh watermarking used for traitor tracing. Following a review of state-of-the-art 3D watermarking systems, the robustness of several content adaptation transforms is evaluated. An embedding domain robust against pose is investigated, with a thickness estimation based on a robust distance function to a point cloud constructed from some mesh diameters. A benchmark showcases the performance of this domain, which provides a basis for robust watermarking in 3D animations. For static meshes, modulating the radial distances is an efficient approach to watermarking. It has been formulated as a quadratic programming problem minimizing the geometric distortion while embedding the payload in the radial distances. This formulation is leveraged to create a robust watermarking framework, with the integration of the spread transform, integral reference primitives, arbitrarily selected relocation directions and alternate metrics to minimize the perceived distortion. Benchmarking results showcase the benefits of these add-ons w.r.t. the fidelity vs. robustness watermarking trade-off. The watermark security is then investigated with two obfuscation mechanisms and a series of attacks that highlight the remaining limitations. A resynchronization approach is finally integrated to deal with cropping attacks: landmarks are embedded in a configuration that conveys synchronization information otherwise lost after cropping. During decoding, this information is blindly retrieved and significant robustness improvements are achieved.
In collaboration with Technicolor and Gwenael Doerr.
Modulating the distances between the vertices and the center of mass of a triangular mesh is a popular approach to watermarking 3D objects. Prior work has formulated this approach as a quadratic programming problem which minimizes the geometric distortion while embedding the watermark payload in the histogram of distances. To enhance this framework, we introduce two watermarking components, namely the spread transform and perceptual shaping based on roughness information. Benchmarking results showcase the benefits of these add-ons with respect to the fidelity-robustness trade-off.
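A stripped-down version of histogram-based radial watermarking can be sketched as follows. This is a hedged toy (hypothetical names; distances assumed pre-normalized to [0, 1); no spread transform, no center-of-mass constraint and no distortion minimization, unlike the quadratic programming formulation above): each histogram bin carries one bit, encoded by shifting the bin's distances so their mean lies above or below the bin center.

```python
def bin_of(d, nbins):
    """Histogram bin of a normalized radial distance in [0, 1)."""
    return min(nbins - 1, int(d * nbins))

def embed(distances, bits, strength=0.2):
    """Shift each bin's distances so the bin mean encodes one bit:
    above the bin center for 1, below it for 0."""
    nbins = len(bits)
    out = list(distances)
    for b, bit in enumerate(bits):
        idx = [i for i, d in enumerate(distances) if bin_of(d, nbins) == b]
        if not idx:
            continue
        center = (b + 0.5) / nbins
        offset = (strength if bit else -strength) / nbins
        mean = sum(distances[i] for i in idx) / len(idx)
        shift = center + offset - mean
        for i in idx:
            out[i] = distances[i] + shift
    return out

def extract(distances, nbins):
    """Decode one bit per bin by comparing the bin mean to its center."""
    bits = []
    for b in range(nbins):
        vals = [d for d in distances if bin_of(d, nbins) == b]
        center = (b + 0.5) / nbins
        bits.append(1 if vals and sum(vals) / len(vals) > center else 0)
    return bits
```

The embedding distortion is bounded by the per-bin shift; the quadratic programming formulation instead distributes the modification over the vertices so as to minimize a chosen distortion metric.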
The main goal of this collaboration is to devise new algorithms for reconstructing 3D indoor models that are more accurate, meaningful and complete than those of existing methods. The conventional way of modeling indoor scenes is based on plane arrangements. This type of representation is limited and must be improved by devising more complex geometric entities adapted to a detailed and semantized description of scenes.
- Starting date: April 2012
- Duration: 3 years
The aim of this collaboration is to devise a new type of 2.5D representation from satellite multi-view stereo images that is more accurate, compact and meaningful than conventional DEMs. A key direction consists in incorporating semantic information directly into the image matching process. This semantic information relates to the types of components in the scene, such as vegetation, roofs, building edges, roads and land.
- Starting date: November 2013
- Duration: 3 years
The goal of this collaboration was to devise a method for watermarking 3D models, with resilience to a wide range of attacks and poses.
- Starting date: October 2012
- Duration: 3 years
Culture 3D Clouds (started in October 2012, duration 3 years) is a national project aimed at devising a cloud computing platform for 3D scanning, documentation, preservation and dissemination of cultural heritage.
Information and communication technologies offer new possibilities for cultural exchange, creation, education and shared knowledge, greatly expanding access to culture and heritage. Culture 3D Cloud is part of a process that aims to create a technological breakthrough in the digitization of heritage artifacts, enabling the emergence of new viable business models. Today, the 3D scanning of heritage artifacts evolves slowly and only provides resources for researchers and specialists; the technology and equipment used for 3D scanning are sophisticated and require highly specialized skills, and the significant cost limits widespread practice. The Culture 3D Clouds project aims to give capture back to the photographers and distribution back to the agencies and image banks, which will develop a value chain to commercialize 3D reproductions on demand for their customers and expand the market valuation of business assets (commercial publishers, the public).
Partners: IGN, CMN, RMN, Inria, EISTI, CNRS-MAP, UCP-ETIS, CEA, HPC Project, ValEISTI, BeIngenious.
Web site: http://
Type: IDEAS
Instrument: ERC Starting Grant
Duration: January 2011 - December 2015
Coordinator: Pierre Alliez
Inria contact: Pierre Alliez
Abstract: The purpose of this project is to bring forth the full scientific and technological potential of Digital Geometry Processing by consolidating its most foundational aspects. Our methodology will draw from and bridge the two main communities (computer graphics and computational geometry) involved in discrete geometry to derive algorithmic and theoretical contributions that provide both robustness to noisy, unprocessed inputs, and strong guarantees on the outputs. The intended impact is to make the digital geometry pipeline as generic and ironclad as its Digital Signal Processing counterpart.
Prof. Mathieu Desbrun, head of the Information Sciences and Mathematics Department at Caltech, obtained an Inria International Chair. We are collaborating on robust surface reconstruction, optimal transport and variational meshing.
Florent Lafarge was co-chair of the ISPRS working group on point cloud processing. Pierre Alliez joined the Horizon 2020 Advisory Group for Societal Challenge 6, ‘Europe in a changing world – Inclusive, Innovative and Reflective Societies’. Pierre Alliez joined the Steering Board of the EUROGRAPHICS Workshop on Graphics and Cultural Heritage.
Pierre Alliez was program co-chair of GMP (Geometric Modeling and Processing).
Florent Lafarge was a program committee member for PCV'14 and 3DV'14. Pierre Alliez was a program committee member for the EUROGRAPHICS Symposium on Geometry Processing, Shape Modeling International, the EUROGRAPHICS Workshop on Graphics and Cultural Heritage and the EUROGRAPHICS Workshop on Urban Data Modelling and Visualisation.
Pierre Alliez is an associate editor of ACM Transactions on Graphics since 2009, of Elsevier Graphical Models since 2010, and of Computer Aided Geometric Design since 2013.
Florent Lafarge was a reviewer for CVPR, ECCV, SIGGRAPH, SIGGRAPH Asia, EUROGRAPHICS, IJCV, JPRS, CVIU, T-VCG and Computers
Florent Lafarge was a keynote speaker at the ISPRS Commission V Symposium, Riva del Garda, on June 23 (talk: "Dense image matching and surface reconstruction: when photogrammetry meets computational geometry").
Master: Pierre Alliez and Florent Lafarge, Algorithmes géométriques - théorie et pratique, 9h, M2, University of Nice Sophia Antipolis, France.
Master: Pierre Alliez and Florent Lafarge, 3D Meshes and Applications, 32h, M2, Ecole des Ponts ParisTech, France.
Master: Pierre Alliez, Mathématiques pour la géométrie, 24h, M2, EFREI, France.
Master: Florent Lafarge, Traitement d'images numériques, 9h, M2, University of Nice Sophia Antipolis, France.
Master: Florent Lafarge, Imagerie numérique, 10h, M2, University of Nice Sophia Antipolis, France.
HdR: Florent Lafarge, Some contributions to geometric modeling of urban environments, University of Nice Sophia Antipolis, September 29.
PhD: Xavier Rolland-Neviere, Watermarking of 3D Models, defended November 12, advised by Pierre Alliez.
PhD in progress: Simon Giraudot, Robust surface reconstruction, since October 2011, Pierre Alliez.
PhD in progress: Sven Oesau, Reconstruction of indoor scenes, since October 2012, Florent Lafarge and Pierre Alliez.
PhD in progress: Manish Mandad, Shape approximation with guarantees, since October 2012, Pierre Alliez.
PhD in progress: Dorothy Duan, Semantized Elevation Maps, since October 2013, Florent Lafarge.
PhD in progress: Jean-Dominique Favreau, Sketch-based modeling in multi-view context, since October 2014, Florent Lafarge.
Florent Lafarge:
Thesis reviewer: Trung Thanh Pham (University of Adelaide, Australia).
Pierre Alliez:
Thesis reviewer: Thierry Guillemot (Telecom ParisTech).
Thesis reviewer: Ricard Campos (University of Girona, Spain).
Thesis reviewer: Henrik Zimmer (University of Aachen, Germany).
Thesis committee: Jean-Luc Peyrot (CNRS-Université de Nice Sophia Antipolis).
Thesis reviewer: Louis Cuel (Université de Savoie).
Thesis reviewer: Alexandre Boulch (Ecole des Ponts ParisTech).
HdR committee: Florent Lafarge (Université de Nice Sophia Antipolis).