Our overall objective is the computerized geometric modeling of complex scenes from physical measurements. In the geometric modeling and processing pipeline, this objective corresponds to the steps required for converting physical measurements into effective digital representations: *analysis*, *reconstruction* and *approximation*. Another, longer-term objective is the *synthesis* of complex scenes. This objective is related to analysis in that we assume the main sources of data are measurements, and that synthesis is carried out from samples.

The related scientific challenges include i) being resilient to defect-laden data due to the uncertainty in the measurement processes and imperfect algorithms along the pipeline, ii) being resilient to heterogeneous data, both in type and in scale, iii) dealing with massive data, and iv) recovering or preserving the structure of complex scenes. We define the quality of a computerized representation by its i) geometric accuracy, or faithfulness to the physical scene, ii) complexity, iii) structure accuracy and control, and iv) amenability to effective processing and high level scene understanding.

We obtained the second best paper award at the EUROGRAPHICS Symposium on Geometry Processing.

Geometric modeling and processing revolve around three main end goals: a computerized shape representation that can be visualized (creating a realistic or artistic depiction), simulated (anticipating the real) or realized (manufacturing a conceptual or engineering design). Aside from the mere editing of geometry, central research themes in geometric modeling involve conversions between physical (real), discrete (digital), and mathematical (abstract) representations. Going from physical to digital is referred to as shape acquisition and reconstruction; going from mathematical to discrete is referred to as shape approximation and mesh generation; going from discrete to physical is referred to as shape rationalization.

Geometric modeling has become an indispensable component of computational and reverse engineering. Simulations are now routinely performed on complex shapes obtained not only from computer-aided design but also from an increasing amount of available measurements. The scale of acquired data is growing quickly: we no longer deal exclusively with individual shapes but with entire *scenes*, possibly at the scale of entire cities, with many objects defined as structured shapes. We are witnessing a rapid evolution of acquisition paradigms, with an increasing variety of sensors and the development of community-contributed and openly disseminated data.

In recent years, the evolution of acquisition technologies and methods has translated into an increasing overlap of algorithms and data across the computer vision, image processing, and computer graphics communities. Beyond the rapid increase in resolution enabled by technological advances in sensors and image mosaicing methods, the line between laser scan data and photos is getting thinner. Combining, e.g., laser scanners with panoramic cameras yields massive 3D point sets with color attributes. In addition, dense point sets can now be generated not only by laser scanners but also by photogrammetry techniques when a well-designed acquisition protocol is used. Depth cameras are increasingly common, and beyond retrieving depth information we can enrich the main acquisition systems with additional hardware that measures geometric information about the sensor itself and improves data registration: e.g., accelerometers or GPS for geographic location, and compasses or gyroscopes for orientation. Finally, complex scenes can be observed at different scales, ranging from satellite through aerial to pedestrian levels.

These evolutions allow practitioners to measure urban scenes at resolutions that were until now possible only at the scale of individual shapes. The related scientific challenge, however, goes beyond handling the massive data sets that result from increased resolution, as complex scenes are composed of multiple objects with structural relationships. The latter relate i) to the way individual shapes are grouped to form objects, object classes or hierarchies, ii) to geometry, when dealing with similarity, regularity, parallelism or symmetry, and iii) to domain-specific semantic considerations. Beyond reconstruction and approximation, the consolidation and synthesis of complex scenes require rich structural relationships.

The problems arising from these evolutions suggest that the strengths of geometry and images may be combined in the form of new methodological solutions such as photo-consistent reconstruction. In addition, the process of measuring the geometry of sensors (through gyroscopes and accelerometers) often requires both geometry processing and image analysis for improved accuracy and robustness. Modeling urban scenes from measurements illustrates this growing synergy, and it has become a central concern for a variety of applications ranging from urban planning and simulation to rendering and special effects.

Complex scenes are usually composed of a large number of objects which may significantly differ in terms of complexity, diversity, and density. These objects must be identified and their structural relationships recovered in order to model the scenes with improved robustness, low complexity, variable levels of detail and, ultimately, semantization (the automated enrichment of a model with semantic content).

*Object classification* is an ill-posed task in which the objects composing a scene are detected and recognized with respect to predefined classes, an objective going beyond scene segmentation. The high variability within each class may explain the success of stochastic approaches, which are able to model widely variable classes. As it requires a priori knowledge, this process is often domain-specific, as for urban scenes where we wish to distinguish between instances such as ground, vegetation and buildings. Additional challenges arise when each class must be refined, such as roof superstructures for urban reconstruction.
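As a toy illustration of such domain-specific knowledge (not our actual stochastic approach), an urban-scene classifier can be sketched as a few rules over per-point features; the feature names and thresholds below are purely hypothetical, and the facade/roof split hints at the class refinement mentioned above:

```python
def classify_point(height, roughness, normal_z):
    """Toy rule-based classifier for urban point clouds.

    height:    elevation above a local ground estimate (meters)
    roughness: residual of a local plane fit (meters)
    normal_z:  |z| component of the unit surface normal
    All thresholds are illustrative, not tuned values.
    """
    if height < 0.2:
        return "ground"
    if roughness > 0.15:       # irregular local geometry suggests vegetation
        return "vegetation"
    # smooth, elevated surfaces: refine the building class into facade vs roof
    return "facade" if normal_z < 0.3 else "roof"
```

Real classifiers replace such hand-written rules with learned or stochastic models, precisely because of the high in-class variability.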

*Structure extraction* consists in recovering structural relationships between objects or parts of objects. The structure may be related to adjacencies between objects, hierarchical decomposition, singularities or canonical geometric relationships. It is crucial for effective geometric modeling through levels of detail or hierarchical multiresolution modeling. Ideally, we wish to learn the structural rules that governed the construction of the physical scene. Understanding the main canonical geometric relationships between object parts involves detecting regular structures and equivalences under certain transformations such as parallelism, orthogonality and symmetry. Identifying structural and geometric repetitions or symmetries is relevant for dealing with missing data during data consolidation.
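A minimal sketch of detecting two of these canonical relationships, parallelism and orthogonality, between planar parts given their unit normals (the tolerance is an arbitrary illustrative value):

```python
def canonical_relation(n1, n2, tol=0.05):
    """Classify the relation between two unit plane normals.

    Returns "parallel" when the normals are (anti-)aligned, "orthogonal"
    when they are perpendicular, and "none" otherwise (toy tolerance)."""
    dot = abs(sum(a * b for a, b in zip(n1, n2)))
    if dot > 1.0 - tol:
        return "parallel"
    if dot < tol:
        return "orthogonal"
    return "none"
```

In practice such tests are applied to primitives detected in noisy data, and the detected relations are then enforced globally rather than pairwise.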

*Data consolidation* is a problem of growing interest for practitioners, with the increase of heterogeneous and defect-laden data. To be exploitable, such defect-laden data must be consolidated by improving the data sampling quality and by reinforcing the geometric and structural relations underlying the observed scenes. Enforcing canonical geometric relationships such as local coplanarity or orthogonality is relevant for the registration of heterogeneous or redundant data, as well as for improving the robustness of the reconstruction process.

Our objective is to explore the approximation of complex shapes and scenes with surface and volume meshes, as well as with surface and domain tilings. A general way to state the shape approximation problem is to say that we search for the shape discretization (possibly with several levels of detail) that realizes the best complexity / distortion trade-off. Such a problem statement requires defining a discretization model, an error metric to measure distortion, and a way to measure complexity. The latter is most commonly expressed as a number of polygon primitives, but other measures closer to information theory lead to quantities such as bit counts or minimum description length.
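The complexity / distortion trade-off can be illustrated on the simplest discretization model, a polyline: the classic Douglas–Peucker scheme below greedily removes vertices while keeping the simplified polyline within a distortion bound eps (a sketch of the trade-off, not our approximation method, and not optimal in vertex count):

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment [a, b] in 2D."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(points, eps):
    """Douglas-Peucker: recursively keep the farthest vertex while the
    deviation exceeds eps, otherwise collapse to the two endpoints."""
    if len(points) < 3:
        return list(points)
    i, dmax = max(((k, point_segment_dist(points[k], points[0], points[-1]))
                   for k in range(1, len(points) - 1)), key=lambda kv: kv[1])
    if dmax <= eps:
        return [points[0], points[-1]]  # low distortion: minimal complexity
    left = simplify(points[:i + 1], eps)
    return left[:-1] + simplify(points[i:], eps)
```

Raising eps trades distortion for complexity: a nearly straight noisy polyline collapses to its endpoints, while a sharp corner is kept.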

For surface meshes we intend to conceive methods which provide control and guarantees both over the global approximation error and over the validity of the embedding. In addition, we seek resilience to heterogeneous data, and robustness to noise and outliers. This would allow repairing and simplifying triangle soups with cracks, self-intersections and gaps. Another exploratory objective is to deal generically with different error metrics, such as the symmetric Hausdorff distance or a Sobolev norm mixing errors in geometry and normals.

For surface and domain tiling, we substitute the term tiling for meshing to stress the fact that tiles need not be simple elements but can model complex smooth shapes such as bilinear quadrangles. Quadrangle surface tiling is central for the so-called *resurfacing* problem in reverse engineering: the goal is to tile an input raw surface geometry such that the union of the tiles approximates the input well and such that each tile matches certain properties related to its shape or its size. In addition, we may require parameterization domains with a simple structure. Our goal is to devise surface tiling algorithms that are reliable and resilient to defect-laden inputs, effective from the shape approximation point of view, and with flexible control over the structure of the tiling.

Assuming a geometric dataset made out of points or slices, the process of shape reconstruction amounts to recovering a surface or a solid that matches these samples. This problem is inherently ill-posed as infinitely-many shapes may fit the data. One must thus regularize the problem and add priors such as simplicity or smoothness of the inferred shape.

The concept of geometric simplicity has led to a number of interpolating techniques commonly based upon the Delaunay triangulation. The concept of smoothness has led to a number of approximating techniques that commonly compute an implicit function such that one of its isosurfaces approximates the inferred surface. Reconstruction algorithms can also use an explicit set of prior shapes for inference, by assuming that the observed data can be described by these predefined priors. One key lesson learned from the shape reconstruction problem is that no single method is likely to solve all cases, as each data set comes with its own distinctive features. In addition, some data sets, such as point sets acquired on urban scenes, are very domain-specific and require a dedicated line of research.
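As a minimal illustration of the implicit approach, one can define a signed function from oriented samples (point, outward normal) whose zero level set approximates the inferred shape. The nearest-sample sketch below assumes clean, already-oriented 2D input, which is precisely what real defect-laden data does not provide:

```python
import math

def implicit_value(q, samples):
    """Signed value at query q: (q - p) . n for the nearest oriented
    sample (p, n). The zero level set of this function approximates
    the sampled curve (negative inside, positive outside)."""
    p, n = min(samples, key=lambda s: math.dist(q, s[0]))
    return (q[0] - p[0]) * n[0] + (q[1] - p[1]) * n[1]

# Demo: samples on the unit circle with outward normals.
samples = [((math.cos(t), math.sin(t)), (math.cos(t), math.sin(t)))
           for t in (2 * math.pi * k / 64 for k in range(64))]
```

Approximating methods smooth and regularize such a function instead of interpolating it, which is what provides resilience to noise.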

In recent years the *smooth, closed case* (i.e., shapes without sharp features or boundaries) has received considerable attention. However, the state-of-the-art methods have several shortcomings: in addition to being generally not robust to outliers and insufficiently robust to noise, they often require additional attributes as input, such as lines of sight or oriented normals. We wish to devise shape reconstruction methods which are both geometrically and topologically accurate without requiring additional attributes, while exhibiting resilience to defect-laden inputs. Resilience formally translates into stability with respect to noise and outliers. Correctness of the reconstruction translates into convergence in geometry and (stable parts of) topology of the reconstruction with respect to the inferred shape known through measurements.

Moving from the smooth, closed case to the *piecewise smooth case* (possibly with boundaries) is considerably harder as the ill-posedness of the problem applies to each sub-feature of the inferred shape. Further, very few approaches tackle the combined issue of robustness (to sampling defects, noise and outliers) and feature reconstruction.

In addition to tackling scientific challenges, our research on geometric modeling and processing is motivated by applications to computational engineering, reverse engineering, digital mapping and urban planning. The main deliverables of our research are algorithms with theoretical foundations. Ultimately we wish to contribute to making geometric modeling and processing routine for practitioners who deal with real-world data. Our contributions may also be used as a sound basis for future software and technology developments.

Our ambition for technology transfer is to consolidate the components of our research experiments in the form of new software components for the CGAL (Computational Geometry Algorithms Library) library. Through CGAL we wish to contribute to the “standard geometric toolbox”, so as to provide a generic answer to application needs instead of fragmenting our contributions. We already cooperate with the Inria spin-off company Geometry Factory, which commercializes CGAL, maintains it and provides technical support.

We also started increasing our research momentum with companies through advising Cifre Ph.D. theses and postdoctoral fellows.

CGAL is a C++ library of geometric algorithms and data structures. Our team is involved in several ongoing implementations: parallelization of mesh generation and triangulations, shape detection in unstructured point sets, geodesic distances on surface meshes and barycentric coordinates (in collaboration with Dmitry Anisimov). Pierre Alliez is a member of the CGAL Editorial Board.

MeshMantics is a software component for segmenting 2-manifold surface meshes in an urban context. Four classes of interest are considered: ground, vegetation, roofs and facades.

Point processes constitute a natural extension of Markov Random Fields (MRF), designed to handle parametric objects. They have shown efficiency and competitiveness for tackling object extraction problems in vision. Simulating these stochastic models is however a difficult task. The performance of existing samplers is limited in terms of computation time and convergence stability, especially on large scenes. We propose a new sampling procedure based on a Monte Carlo formalism. Our algorithm exploits the Markovian property of point processes to perform the sampling in parallel. This procedure is embedded into a data-driven mechanism so that the points are distributed in the scene as a function of spatial information extracted from the input data. The performance of the sampler is analyzed through a set of experiments on various object detection problems from large scenes, including comparisons to the existing algorithms. The sampler is also evaluated as an optimization algorithm for MRF-based labeling problems (Figure ).
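To give the flavor of such stochastic models, a greatly simplified, greedy caricature of a point-process sampler (with a made-up energy, and without the parallel, data-driven machinery of the actual method): a configuration of points is scored by a data term pulling points toward detected evidence and a prior term penalizing overlapping objects, and explored through birth/death moves:

```python
import math
import random

def energy(config, data_peaks, r=0.1):
    """Toy energy of a point configuration.
    Data term: each detected peak should be covered by a nearby point.
    Prior term: hard-core penalty for point pairs closer than r."""
    if not config:
        return float("inf")
    e = sum(min(math.dist(p, q) for p in config) for q in data_peaks)
    for i in range(len(config)):
        for j in range(i + 1, len(config)):
            if math.dist(config[i], config[j]) < r:
                e += 10.0
    return e

def sample(data_peaks, iters=2000, seed=0):
    """Greedy birth/death exploration in the unit square (real samplers
    use a full Metropolis-Hastings acceptance rule, possibly in parallel)."""
    rng = random.Random(seed)
    config = []
    for _ in range(iters):
        if config and rng.random() < 0.5:                    # death move
            cand = list(config); cand.pop(rng.randrange(len(cand)))
        else:                                                # birth move
            cand = config + [(rng.random(), rng.random())]
        if energy(cand, data_peaks) < energy(config, data_peaks):
            config = cand
    return config
```

Run on two synthetic peaks, the surviving points cluster near the evidence while the hard-core prior prevents duplicates.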

In collaboration with Dengfeng Chai (Zheijiang University, China) and Wolfgang Forstner (University of Bonn, Germany).

We tackle the automatic extraction of line-networks from images. Appearance and shape considerations have been deeply explored in the literature to improve accuracy in the presence of occlusions, shadows, and a wide variety of irrelevant objects. However, most existing work has ignored the structural aspect of the problem. We present an original method which provides structurally-coherent solutions. Contrary to pixel-based and object-based methods, our result is a graph in which each node represents either a connection or an ending in the line-network. Based on stochastic geometry, we develop a new family of point processes which samples junction-points in the input image through a Monte Carlo mechanism. The quality of a configuration is measured by a probability density which takes into account both image consistency and shape priors. Our experiments on a variety of problems illustrate the potential of our approach in terms of accuracy, flexibility and efficiency (Figure ).

In collaboration with Leif Kobbelt from RWTH Aachen.

Quadrilateral remeshing approaches based on global parametrization enable many desirable mesh properties. Two of those are (1) high regularity due to explicit control over irregular vertices and (2) smooth distribution of distortion achieved by convex variational formulations. In this work we propose a novel convex Mixed-Integer Quadratic Programming (MIQP) formulation which ensures by construction that the resulting map is within the class of so-called Integer-Grid Maps that are guaranteed to imply a quad mesh. In order to overcome the NP-hardness of MIQP we propose two additional optimizations: a complexity reduction algorithm and singularity separating conditions. While the former decouples the dimension of the MIQP search space from the input complexity of the triangle mesh, the latter improves the continuous relaxation, which is crucial for the success of modern MIQP optimizers. Our algorithm also enables the global search for high-quality coarse quad layouts, as illustrated in Figure , a difficult task previously tackled only by insufficient greedy methodologies.

In collaboration with Leif Kobbelt from RWTH Aachen.

Among the class of quad remeshing techniques, the ones based on parameterization strive to generate an integer-grid map, i.e., a parametrization of the input surface in 2D such that the canonical grid of integer iso-lines forms a quad mesh when mapped back onto the surface in 3D. An essential, albeit broadly neglected aspect of these methods is the quad extraction step. This step is not a trivial matter: ambiguities induced by numerical inaccuracies and limited solver precision, as well as imperfections in the maps produced by most methods (unless costly countermeasures are taken) pose significant hurdles to the quad extractor. In this work we present a method to sanitize a provided parametrization such that it becomes numerically consistent even with limited precision floating point arithmetic. We also devise a novel strategy to cope with common local fold-overs in the parametrization. We can generate all-quadrilateral meshes where otherwise holes, non-quad polygons or no output at all would have been produced like for the example in Figure .

In collaboration with Leif Kobbelt (RWTH Aachen).

A purely topological approach for the generation of hexahedral meshes from quadrilateral surface meshes of genus zero has been proposed by M. Müller-Hannemann: in a first stage, the input surface mesh is reduced to a single hexahedron by successively eliminating loops from the dual graph of the quad mesh; in the second stage, the hexahedral mesh is constructed by extruding a layer of hexahedra for each dual loop from the first stage in reverse elimination order. We introduce several techniques to extend the scope of target shapes of the approach and significantly improve the quality of the generated hexahedral meshes. While the original method can only handle almost-convex objects and requires mesh surgery and remeshing in case of concave geometry, we propose a method to overcome this issue by introducing the notion of concave dual loops in order to handle non-convex objects like the one displayed in Figure . Furthermore, we analyze and improve the heuristic to determine the elimination order for the dual loops such that the inordinate introduction of interior singular edges, i.e., edges of degree other than four in the hexahedral mesh, can be avoided in many cases.

In collaboration with Mathieu Desbrun, Fernando de Goes and Houman Owhadi from Caltech.

We contributed a novel approach for the analysis and design of self-supporting simplicial masonry structures. A finite-dimensional formulation of their compressive stress field is derived, offering a new interpretation of thrust networks through numerical homogenization theory. We further leverage geometric properties of the resulting force diagram to identify a set of reduced coordinates characterizing the equilibrium of simplicial masonry. We finally derive computational form-finding tools that improve over previous work in efficiency, accuracy, and scalability.

In collaboration with David Cohen-Steiner (GEOMETRICA project-team)

We devised a noise-adaptive shape reconstruction method specialized to smooth, closed shapes. Our algorithm takes as input a defect-laden point set with variable noise and outliers, and comprises three main steps. First, we compute a novel noise-adaptive distance function to the inferred shape, which relies on the assumption that the inferred shape is a smooth submanifold of known dimension. Second, we estimate the sign and confidence of the function at a set of seed points, through minimizing a quadratic energy expressed on the edges of a uniform random graph. Third, we compute a signed implicit function through a random walker approach with soft constraints chosen as the most confident seed points computed in the previous step.

We present a method for reconstructing surfaces from point sets. The main novelty lies in a structure-preserving approach where the input point set is first consolidated by structuring and resampling the planar components, before reconstructing the surface from both the consolidated components and the unstructured points. Structuring facilitates the surface reconstruction as the point set is substantially reduced and the points are enriched with structural meaning related to adjacency between primitives. Our approach departs from the common dichotomy between smooth/piecewise-smooth and primitive-based representations by gracefully combining canonical parts from detected primitives and free-form parts of the inferred shape (Figure ).

In collaboration with Renaud Keriven (Acute3D), Mathieu Bredif (IGN), and Hiep Vu (Ecole des Ponts ParisTech).

We present an original multi-view stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: irregular elements are described by meshes whereas regular structures are described by canonical geometric primitives. We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multi-label Markov Random Field based model and second, in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e. geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation (Figure ).

In collaboration with EADS ASTRIUM

We present a method for automatic reconstruction of permanent structures of indoor scenes, such as walls, floors and ceilings, from raw point clouds acquired by laser scanners. Our approach employs graph-cut to solve an inside/outside labeling of a space decomposition. To allow for an accurate reconstruction, the space decomposition is aligned with permanent structures. A Hough Transform is applied for extracting the wall directions while allowing a flexible reconstruction of scenes. The graph-cut formulation takes into account data consistency through an inside/outside prediction for the cells of the space decomposition by stochastic ray casting, while favoring low geometric complexity of the model. Our experiments produce watertight reconstructed models of multi-level buildings and complex scenes (Figure ).
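The inside/outside labeling step can be illustrated in isolation: given per-cell data terms and pairwise smoothness weights, a minimum s-t cut yields the optimal binary labeling. Below is a self-contained sketch using Edmonds–Karp max-flow on a tiny cell adjacency graph (illustrative only; practical pipelines rely on dedicated graph-cut solvers):

```python
from collections import deque, defaultdict

def graph_cut_labels(n, edges, inside_weight, outside_weight):
    """Min s-t cut labeling of n cells.
    edges: (i, j, w) smoothness terms between adjacent cells;
    inside_weight[i] / outside_weight[i]: unary preferences for
    labeling cell i inside (source side) / outside (sink side)."""
    S, T = n, n + 1
    cap = defaultdict(int)
    adj = defaultdict(set)
    def add_edge(u, v, w):
        cap[(u, v)] += w
        adj[u].add(v); adj[v].add(u)
    for i in range(n):
        add_edge(S, i, inside_weight[i])
        add_edge(i, T, outside_weight[i])
    for i, j, w in edges:
        add_edge(i, j, w); add_edge(j, i, w)
    while True:  # Edmonds-Karp: BFS augmenting paths until none remain
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        flow = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= flow
            cap[(v, u)] += flow
    # cells still reachable from S in the residual graph are labeled inside
    reach, queue = {S}, deque([S])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in reach and cap[(u, v)] > 0:
                reach.add(v)
                queue.append(v)
    return ["inside" if i in reach else "outside" for i in range(n)]
```

A strong smoothness weight drags an ambiguous cell toward the label of its confident neighbor, which is what keeps the reconstructed volume watertight.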

In collaboration with Marc Van Kreveld and Remco Veltkamp

The demand for large geometric models, especially of urban environments, is increasing. This has resulted in the production of massive point cloud data from images or LiDAR. Visualization and further processing generally require a detailed, yet concise representation of the scene's surfaces. Related work generally either approximates the data with the risk of over-smoothing, or interpolates the data with excessive detail. Many surfaces in urban scenes can be modeled more concisely by planar approximations. We present a method that combines these polygons into a watertight model. The polygon-based shape is closed with free-form meshes based on visibility information. To achieve this, we divide 3-space into inside and outside volumes by combining a constrained Delaunay tetrahedralization with a graph-cut. We compare our method with related work on several large urban LiDAR data sets. We construct similar shapes with a third fewer triangles to model the scenes. Additionally, our results are more visually pleasing and closer to a human modeler's description of urban scenes using simple boxes (Figure ).

In collaboration with David Cohen-Steiner, Julie Digne, Mathieu Desbrun and Fernando de Goes

We introduce a robust and feature-capturing surface reconstruction and simplification method that turns an input point set into a low triangle-count simplicial complex. Our approach starts with a (possibly non-manifold) simplicial complex filtered from a 3D Delaunay triangulation of the input points. This initial approximation is iteratively simplified based on an error metric that measures, through optimal transport, the distance between the input points and the current simplicial complex, both seen as mass distributions. Our approach exhibits both robustness to noise and outliers, as well as preservation of sharp features and boundaries (Figure ). Our new feature-sensitive metric between point sets and triangle meshes can also be used as a post-processing tool that, from the smooth output of a reconstruction method, recovers sharp features and boundaries present in the initial point set.
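The idea of comparing two geometries as mass distributions via optimal transport can be illustrated in one dimension, where the optimal matching between two sets of equal unit masses is simply the sorted one (a sketch of the concept only; the actual metric operates between point sets and triangle meshes in 3D):

```python
def wasserstein1_1d(xs, ys):
    """Exact 1-Wasserstein (earth mover's) distance between two
    equal-size 1D point sets with unit masses: in 1D the optimal
    transport plan matches the samples in sorted order."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

Unlike a nearest-neighbor distance, this metric accounts for mass: a cluster of points cannot all be "explained" by a single nearby feature, which is what makes the error metric robust to outliers.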

In collaboration with Mariette Yvinec (EPI GEOMETRICA), Ricard Campos (University of Girona), Raphael Garcia (University of Girona)

We introduce a method for surface reconstruction from point sets that is able to cope with noise and outliers. First, a splat-based representation is computed from the point set. A robust local 3D RANSAC-based procedure is used to filter the point set for outliers, then a local jet surface – a low-degree surface approximation – is fitted to the inliers. Second, we extract the reconstructed surface in the form of a surface triangle mesh through Delaunay refinement (Figure ). The Delaunay refinement meshing approach requires computing intersections between line segment queries and the surface to be meshed. In the present case, intersection queries are solved from the set of splats through a 1D RANSAC procedure.
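The RANSAC filtering follows the standard hypothesize-and-verify pattern, sketched below for a single plane fit in 3D with illustrative parameters (the method above fits local jets per neighborhood and uses a 1D variant for the intersection queries):

```python
import random

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Fit a dominant plane (n, d), with n . x + d = 0, to 3D points
    contaminated by outliers: repeatedly fit an exact plane to 3 random
    points and keep the hypothesis with the most inliers (toy parameters)."""
    rng = random.Random(seed)
    best_count, best_plane = 0, None
    for _ in range(n_iter):
        p, q, r = rng.sample(points, 3)
        u = [q[i] - p[i] for i in range(3)]
        v = [r[i] - p[i] for i in range(3)]
        # plane normal = cross product of two in-plane vectors
        n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-12:
            continue  # degenerate (collinear) sample, try again
        n = [c / norm for c in n]
        d = -sum(n[i] * p[i] for i in range(3))
        count = sum(abs(sum(n[i] * x[i] for i in range(3)) + d) < tol
                    for x in points)
        if count > best_count:
            best_count, best_plane = count, (n, d)
    return best_plane
```

Points farther than tol from the winning plane are discarded as outliers before the (more faithful) local surface fit.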

The main goal of this collaboration is to devise new algorithms for reconstructing 3D indoor models that are more accurate, meaningful and complete than existing methods. The conventional way for modeling indoor scenes is based on plane arrangements. This type of representation is particularly limited and must be improved by devising more complex geometric entities adapted to a detailed and semantized description of scenes.

- Starting date: April 2012

- Duration: 3 years

The aim of this collaboration is to devise a new type of 2.5D representation from satellite multi-view stereo images that is more accurate, compact and meaningful than conventional digital elevation models (DEMs). A key direction consists in incorporating semantic information directly during the image matching process. This semantic information relates to the types of components in the scene, such as vegetation, roofs, building edges, roads and land.

- Starting date: November 2013

- Duration: 3 years

The goal of this Cifre Ph.D. thesis project is to devise a method for watermarking 3D models, with resilience to a wide range of attacks and poses.

- Starting date: October 2012

- Duration: 3 years

...

Culture 3D Clouds (started in October 2012) is a national project aimed at devising a cloud computing platform for 3D scanning, documentation, preservation and dissemination of cultural heritage.

Type: IDEAS

Instrument: ERC Starting Grant

Duration: January 2011 - December 2015

Coordinator: Pierre Alliez

Inria contact: Pierre Alliez

Abstract: The purpose of this project is to bring forth the full scientific and technological potential of Digital Geometry Processing by consolidating its most foundational aspects. Our methodology will draw from and bridge the two main communities (computer graphics and computational geometry) involved in discrete geometry to derive algorithmic and theoretical contributions that provide both robustness to noisy, unprocessed inputs, and strong guarantees on the outputs. The intended impact is to make the digital geometry pipeline as generic and ironclad as its Digital Signal Processing counterpart.

Dmitry Anisimov, from University of Lugano, visited us in September-October. We also had short visits of Marcel Campen and Henrik Zimmer from RWTH Aachen.

Anmol Garg from IIT Bombay: Anisotropic metrics for shape approximation.

David Bommes visited the Applied Geometry Lab at California Institute of Technology (Caltech) from May to June.

Pierre Alliez, Clément Jamin, Manish Mandad and Thijs van Lankveld have contributed to the organization of Digital Heritage 2013 held in Marseille, October 28 to November 1st.

Pierre Alliez was a paper committee member of the EUROGRAPHICS Symposium on Geometry Processing, the ACM Virtual Reality International Conference and Shape Modeling International. He has been an associate editor of ACM Transactions on Graphics since 2009, and of Elsevier Graphical Models since 2010. This year he joined the editorial board of Computer Aided Geometric Design.

David Bommes was a paper committee member of EUROGRAPHICS Symposium on Geometry Processing, Shape Modeling International, EUROGRAPHICS short papers, ACM SIGGRAPH Asia technical briefs and posters and Digital Heritage.

Florent Lafarge was a paper committee member of ISPRS workshop on City Models, Roads and Traffic, ACM SIGGRAPH Asia technical briefs and posters, CAD/Graphics, Laser Scanning, Digital Heritage and Indoor3D, and a reviewer for CVPR and ICCV.

Florent Lafarge gave seminars at the University of Auckland (New Zealand), UCL (United Kingdom) and ETH Zurich (Switzerland). He was a keynote speaker at ISPRS SSG'13 in Antalya (Turkey).

Master : Pierre Alliez, Algorithmes géométriques - théorie et pratique, 9h, M2, university Nice Sophia Antipolis, France.

Master : Pierre Alliez and Florent Lafarge, 3D Meshes and Applications, 32h, M2, Ecole des Ponts ParisTech, France.

Master : Pierre Alliez, Mathématiques pour la géométrie, 30h, M2, EFREI, France.

Master : Florent Lafarge, Traitement d'images numériques, 9h, M2, university Nice Sophia Antipolis, France.

Master : Florent Lafarge, Imagerie numérique, 10h, M2, university Nice Sophia Antipolis, France.

PhD : Yannick Verdie, Urban scene modeling from airborne data, University of Nice Sophia Antipolis, October 15, Florent Lafarge and Josiane Zérubia.

PhD in progress : Simon Giraudot, Noise-adaptive shape reconstruction, October 2011, Pierre Alliez.

PhD in progress : Xavier Rolland-Neviere, watermarking of 3D models, October 2011, Pierre Alliez.

PhD in progress : Manish Mandad, Shape approximation with guarantees, October 2012, Pierre Alliez.

PhD in progress : Sven Oesau, reconstruction of indoor scenes, October 2012, Florent Lafarge and Pierre Alliez.

PhD in progress : Dorothy Duan, Semantized Elevation Maps, October 2013, Florent Lafarge and Pierre Alliez.

Pierre Alliez:

Thesis committee: Ricardo Uribe-Lobello (LIRIS, CNRS Lyon).

Thesis committee: Yannick Verdie (Inria).

Thesis reviewer: Thijs van Lankveld (Utrecht University).

Thesis committee: Adrien Maglo (Centrale Paris).

Thesis committee: Noura Faraj (Telecom ParisTech).

HDR thesis reviewer: Guillaume Lavoue (Universite Lyon).

Florent Lafarge gave talks to high school students on city modeling (Lycee Tocqueville, Lycee De La Montagne, and Stage MathC2+). He is a member of the editorial board of LISA (Inria SAM newsletter).