PIXEL is a research team stemming from team ALICE, founded in 2004 by Bruno Lévy. The main scientific goal of ALICE was to develop new algorithms for computer graphics, with a special focus on geometry processing. From 2004 to 2006, we developed new methods for automatic texture mapping (LSCM, ABF++, PGP) that became de-facto standards. We then realized that these algorithms could be used to create an abstraction of shapes usable for geometry processing and modeling purposes, which we developed from 2007 to 2013 within the GOODSHAPE StG ERC project. We transformed the research prototype stemming from this project into industrial geometry processing software within the VORPALINE PoC ERC project, and commercialized it (TotalEnergies, Dassault Systèmes; GeonX and ANSYS currently under discussion). From 2013 to 2018, we developed more contacts and cooperations with the “scientific computing” and “meshing” research communities.

After a part of the team “spun off” around Sylvain Lefebvre and his ERC project SHAPEFORGE to become the MFX team (on additive manufacturing and computer graphics), we progressively moved the center of gravity of the rest of the team from computer graphics towards scientific computing and computational physics, in terms of cooperations, publications and industrial transfer.

We realized that geometry plays a central role in numerical simulation, and that
“cross-pollination” with methods from our field (graphics) can lead to original algorithms. In particular, computer graphics
routinely uses irregular and dynamic data structures that are seldom encountered in scientific computing.
Conversely, scientific computing routinely uses mathematical tools that are neither widespread nor well
understood in computer graphics.
Our goal is to establish a stronger connection between both domains, and exploit the fundamental aspects of both
scientific cultures to develop new algorithms for computational physics.

Mesh generation is a notoriously difficult task. A quick search on the NSF grant web page with the keywords “mesh
generation AND finite element” returns more than 30 currently active grants,
for a total of $8 million.
NASA identifies mesh generation as one of the major challenges for 2030 42, and estimates that it accounts for 80% of the time and effort in numerical simulation.
This is due to the need for constructing supports that match both the geometry and the physics of the system to be modeled.
In our team we pay particular attention to scientific computing, because we believe it has a world-changing impact.

It is very unsatisfactory that meshing, i.e. just “preparing the data” for the simulation, eats up the major part of the time and effort. Our goal is to change the situation, by studying the influence of shapes and discretizations, and inventing new algorithms to automatically generate meshes that can be directly used in scientific computing. This goal is a result of our progressive shift from pure graphics (“Geometry and Lighting”) to real-world problems (“Shape Fidelity”).

Meshing is central in geometric modeling because it provides a way to represent functions on the objects being studied (texture coordinates, temperature, pressure, speed, etc.). There are numerous ways to represent functions, but if we suppose that the functions are piecewise smooth, the most versatile way is to discretize the domain of interest. Ways to discretize a domain range from point clouds to hexahedral meshes; let us list a few of them sorted by the amount of structure each representation has to offer (refer to Figure 1).

At one end of the spectrum there are point clouds: they exhibit no structure at all (white noise point samples) or very little (blue noise point samples).
The recent explosive development of acquisition techniques (e.g. scanning or photogrammetry) provides an easy way to build 3D models of real-world objects
that range from figurines and cultural heritage objects to geological outcrops and entire city scans.
These technologies produce massive, unstructured data (billions of 3D points per scene) that can be directly used for visualization purposes,
but this data is not suitable for high-level geometry processing algorithms and numerical simulations that usually expect meshes.
Therefore, at the very beginning of the acquisition-modeling-simulation-analysis pipeline, powerful scan-to-mesh algorithms are required.

During the last decade, many solutions have been proposed 38, 22, 34, 33, 25,
but the problem of building a good mesh from scattered 3D points is far from being solved.
Besides the sheer size of the data, existing algorithms are also challenged by the extreme variation in data quality.
Raw point clouds have many defects: they are often corrupted by noise, redundant, and incomplete (due to occlusions); in a word, they are uncertain.

Triangulated surfaces are ubiquitous: they are the most widely used representation for 3D objects.
Some applications like 3D printing do not impose heavy requirements on the surface: typically it has to be watertight, but the triangles can have arbitrary shape.
Other applications like texturing require very regular meshes, because they suffer from elongated triangles with large angles.

While being a common solution for many problems, triangle mesh generation is still an active topic of research. The diversity of representations (meshes, NURBS, ...) and file formats often results in a “Babel” problem when one has to exchange data. The only common representation is often the mesh used for visualization, which in most cases has many defects, such as overlaps, gaps or skinny triangles. Re-injecting this solution into the modeling-analysis loop is non-trivial, since again this representation is not well adapted to analysis.

Tetrahedral meshes are the volumetric equivalent of triangle meshes; they are very common in the scientific computing community.
Tetrahedral meshing is now a mature technology. It is remarkable that, still today, all the existing software used in the
industry is built on top of a handful of kernels, all written by a small number of individuals
27, 40, 46, 29, 39, 41, 28, 50.

Meshing requires a long-term, focused, dedicated research effort that combines deep theoretical studies with advanced software development. We have the ambition to bring this kind of maturity to a different type of mesh (structured, with hexahedra), which is highly desirable for some simulations, and for which, unlike tetrahedra, no satisfying automatic solution exists. In the light of recent contributions, we believe that the domain is ready to overcome the principal difficulties.

Finally, at the most structured end of the spectrum there are hexahedral meshes, composed of deformed cubes (hexahedra). They are preferred for certain physics simulations (deformation mechanics, fluid dynamics, ...) because they can significantly improve both speed and accuracy. This is because (1) they contain a smaller number of elements (5-6 tetrahedra for a single hexahedron), (2) the associated tri-linear function basis has cubic terms that can better capture higher-order variations, (3) they avoid the locking phenomena encountered with tetrahedra 20, (4) hexahedral meshes exploit the inherent tensor-product structure, and (5) hexahedral meshes are superior in direction-dominated physical simulations (boundary layers, shock waves, etc.). Being extremely regular, hexahedral meshes are often claimed to be the Holy Grail for many finite element methods 21, outperforming tetrahedral meshes both in terms of computational speed and accuracy.
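To make point (2) above concrete, here is the trilinear interpolant over a reference hexahedron, written in local coordinates (a textbook sketch, not tied to any particular code):

\[
u(x,y,z) \;=\; a_0 + a_1 x + a_2 y + a_3 z + a_4\,xy + a_5\,yz + a_6\,zx + a_7\,xyz,
\]

where the eight coefficients are determined by the values at the eight vertices. The mixed terms, up to the cubic monomial $xyz$, are exactly what a single linear tetrahedral element cannot represent.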

Despite 30 years of research efforts and important advances, mainly by the Lawrence Livermore
National Labs in the U.S. 45, 44, hexahedral meshing still requires
considerable manual intervention in most cases (days, weeks and even months for the most complicated domains).
Some automatic methods exist 32, 48 that constrain the boundary to a regular grid, but they are
not fully satisfactory either, since the grid is not aligned with the boundary. The advancing front method
19 does not have this problem, but it generates irregular elements on
the medial axis, where the fronts collide. Thus, there is no
fully automatic algorithm that produces satisfactory boundary alignment.

Currently, transforming the raw point cloud into a triangular mesh is a long pipeline involving disparate geometry processing algorithms:

The output of this pipeline is a locally-structured model which is used in downstream mesh analysis methods such as feature extraction, segmentation into meaningful parts, or building Computer-Aided Design (CAD) models.

It is well known that point cloud data contains measurement errors due to factors related
to the external environment and to the measurement system itself 43, 37, 23.
These errors propagate through all processing steps: pre-processing, registration and mesh generation.
Even worse, the heterogeneous nature of the different processing steps makes it extremely difficult to know how these errors propagate through the pipeline.
To give an example, cloud-to-cloud alignment requires estimating normals;
however, the normals are discarded from the point cloud produced by the registration stage.
Later on, when triangulating the cloud, the normals are re-estimated on the modified data, thus introducing uncontrollable errors.

We plan to develop new reconstruction, meshing and re-meshing algorithms, with a specific focus on accuracy and robustness to all the defects present in the raw input data. We think that a pervasive treatment of uncertainty is the missing ingredient to achieve this goal. We plan to rethink the pipeline with the positional uncertainty maintained during the whole process. Input points can be considered either as error ellipsoids 47 or as probability measures 31. In a nutshell, our idea is to start by computing an error ellipsoid 49, 35 for each point of the raw data, and then to accumulate the errors (approximations) made at each step of the processing pipeline while building the mesh. In this way, the final users will be able to take this knowledge of the uncertainty into account and rely on this confidence measure for further analysis and simulations. Quantifying uncertainties for reconstruction algorithms, and propagating them from the input data to high-level geometry processing algorithms, has never been considered before, possibly due to the very different methodologies of the approaches involved. At the very beginning we will re-implement the entire pipeline, and then attack the weak links through all three reconstruction stages.
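As a minimal illustration of the very first step, an error ellipsoid for a raw point can be estimated from the covariance matrix of its neighborhood: the eigenvectors and eigenvalues of this 3×3 matrix give the axes and extents of the ellipsoid. The sketch below only computes the covariance (the neighborhood search and the 3×3 eigen-decomposition are assumed to be done elsewhere); it is an illustrative sketch, not a piece of our pipeline.

```cpp
#include <array>
#include <vector>

// A raw scan point; in a real pipeline it would also carry sensor metadata.
struct Point3 { double x, y, z; };

// Covariance matrix of a point neighborhood. Its eigenvectors/eigenvalues
// (computed with any 3x3 symmetric eigensolver) give the error ellipsoid.
std::array<std::array<double,3>,3> covariance(const std::vector<Point3>& nbrs) {
    const int n = static_cast<int>(nbrs.size());
    double mean[3] = {0, 0, 0};
    for (const Point3& p : nbrs) { mean[0] += p.x; mean[1] += p.y; mean[2] += p.z; }
    for (int i = 0; i < 3; i++) mean[i] /= n;

    std::array<std::array<double,3>,3> C = {};
    for (const Point3& p : nbrs) {
        const double d[3] = { p.x - mean[0], p.y - mean[1], p.z - mean[2] };
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                C[i][j] += d[i] * d[j] / n;
    }
    return C;
}
```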

Parameterizations are among our team's favorite tools, and we have made major contributions to the field: we have solved a fundamental problem formulated more than 60 years ago 2. Parameterizations provide a very powerful way to reveal structures on objects. The most widespread application of parameterizations is texture mapping: texture maps provide a way to represent in 2D (on the map) information related to a surface. Once the surface is equipped with a map, we can do much more than merely coloring the surface: we can approximate geodesics, edit the mesh directly in 2D, or transfer information from one mesh to another.

Parameterizations constitute a family of methods that involve optimizing an objective function subject to a set of constraints (equality, inequality, integrality, etc.). Computing the exact solution to such problems is beyond any hope; therefore approximations are the only resort. This raises a number of problems, such as the minimization of highly nonlinear functions and the definition of the topology of direction fields, without forgetting the robustness of the software that puts all this into practice.
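Schematically, and without committing to a particular distortion energy, such a problem takes the form

\[
\min_{x}\; E(x) \quad \text{subject to} \quad g_i(x) = 0,\;\; h_j(x) \le 0,\;\; x_k \in \mathbb{Z} \;\text{ for } k \in K,
\]

where $E$ is a generally highly nonlinear energy and the integer variables typically encode the combinatorial structure of the parameterization (e.g. grid transitions between charts).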

We are particularly interested in a specific instance of parameterization: hexahedral meshing.
The idea 6, 4 is to build a transformation (a global parameterization) of the domain such that the hexahedra we want to generate become the unit cubes of a regular grid in the parametric space.
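In slightly more formal terms (a simplified sketch of the construction, using our own notation), one looks for a map

\[
f : \Omega \subset \mathbb{R}^3 \;\longrightarrow\; \mathbb{R}^3, \qquad \det(\nabla f) > 0 \;\text{ away from the singularity graph},
\]

and the hexahedral cells are the preimages under $f$ of the unit cubes of the integer grid $\mathbb{Z}^3$ in the parametric space.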

Current global parameterizations allow grids to be positioned inside geometrically simple objects whose internal structure (the singularity graph) can be relatively basic. We wish to be able to handle more configurations by improving three aspects of current methods:

All global parameterization approaches are decomposed into three steps: frame field generation, field integration to get a global parameterization, and final mesh extraction. Getting a full hexahedral mesh from a global parameterization requires the parameterization to have a positive Jacobian everywhere except on the frame field singularity graph. To our knowledge, there is no solution to guarantee this property, but efforts have been made to limit the proportion of failure cases. An alternative is to produce hexahedral-dominant meshes. Our position is in between those two points of view:

The global parameterization approach yields impressive results on some geometric objects, which is encouraging, but not yet sufficient for numerical analysis. Note that while we attack remeshing with our parameterization toolset, the wish to improve the tool itself (as described above) is orthogonal to the effort we put into making the results usable by the industry. To go further, our idea (as opposed to 36, 26) is that the global parameterization should not handle all the remeshing, but merely act as a guide to fill a large proportion of the domain with a simple structure; it must cooperate with other remeshing bricks, especially if we want to take final application constraints into account.

For each application we will take as input domains, sets of constraints and, possibly, fields (e.g. the magnetic field in a tokamak). Having established the criteria of mesh quality (per application!), we will incorporate this input into the mesh generation process, and then validate the mesh with numerical simulation software.

Numerical simulation is the main targeted application domain for the geometry processing tools that we develop. Our mesh generation tools will be tested and evaluated within the context of our cooperation with Hutchinson, experts in vibration control, fluid management and sealing system technologies. We think that the hex-dominant meshes that we generate have geometrical properties that make them suitable for some finite element analyses, especially for simulations with large deformations.

We also have a tight collaboration with geophysical modeling specialists via the RING consortium. In particular, we produce hexahedral-dominant meshes for geomechanical simulations of gas and oil reservoirs. From a scientific point of view, this use case introduces new types of constraints (alignment with faults and horizons), and allows certain types of non-conformities that we did not consider until now.

Our cooperation with RhinoTerrain pursues the same goal: the reconstruction of buildings from point cloud scans makes it possible to perform 3D analyses and studies of insolation, floods and wave propagation, and wind and noise simulations necessary for urban planning.

Our spin-off company Tessael was awarded a prize at the i-Lab innovation competition organized by Bpifrance.
The project is entitled “Subsurface2.0 (Tessael): New 3D mesh technology to de-risk the geological storage of CO2”.

Geological storage of CO2

Bringing together research work by Pixel and relying on the long experience of Tessael's co-founders in the field of geological resource exploitation, our geological meshing solution offers three benefits:

Thanks to the Subsurface2.0 project, with Tessael's scientific and industrial partners, such as Inria, the University of Lorraine, TotalEnergies and IFP, the goal is to double the activity related to CO2

Our PhD students David Desobry, Yoann Coudert–Osmont, Justine Basselin, and Guillaume Coiffier (Fig. 4, left) participated in Google Hash Code 2022, an international contest where roughly ten thousand teams confront their heuristics on a few instances of an NP-hard problem defined for the contest, within a time limit of 4 hours. Our team qualified among the 40 teams participating in the world final. Finishing 18th in the final, the team obtained the best result for a 100% French team since 2016. The ranking and the problem of the finals are available on Google's website.

Yoann Coudert–Osmont and David Desobry participated in the challenge organized by the International Symposium on Parameterized and Exact Computation (IPEC). They designed a heuristic solver for the Directed Feedback Vertex Set problem: remove a minimum number of vertices from a digraph such that the resulting digraph is acyclic (Fig. 4, right). The team proposed new graph reductions for this problem 7.

The solver was submitted to the 2022 edition of the Parameterized Algorithms and Computational Experiments challenge, and the team was ranked 2nd out of 49 on a dataset composed of 200 graphs. Our solver, DreyFVS, first performs a guess on a reduced instance by leveraging the Sinkhorn-Knopp algorithm, and then improves this solution by pipelining two local search methods.
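For reference, the Sinkhorn-Knopp iteration alternately rescales the rows and the columns of a non-negative matrix so that it converges towards a doubly stochastic matrix; the sketch below shows the classical algorithm (our own simplified version, not the DreyFVS code), whose output DreyFVS leverages to produce its initial guess.

```cpp
#include <vector>

// Classical Sinkhorn-Knopp iteration on a non-negative n x n matrix A
// (stored row-major): alternately normalize rows and columns so that A
// converges towards a doubly stochastic matrix.
void sinkhorn_knopp(std::vector<double>& A, int n, int iterations) {
    for (int it = 0; it < iterations; it++) {
        for (int i = 0; i < n; i++) {                          // normalize rows
            double s = 0;
            for (int j = 0; j < n; j++) s += A[i*n + j];
            if (s > 0) for (int j = 0; j < n; j++) A[i*n + j] /= s;
        }
        for (int j = 0; j < n; j++) {                          // normalize columns
            double s = 0;
            for (int i = 0; i < n; i++) s += A[i*n + j];
            if (s > 0) for (int i = 0; i < n; i++) A[i*n + j] /= s;
        }
    }
}
```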

Many problems can be stated as the minimization of some objective function. Typically, if we can evaluate the function at some point together with the corresponding gradient, we can descend in the gradient direction. We can do better and use second derivatives (the Hessian matrix), but this is costly. L-BFGS is a quasi-Newton optimization algorithm for solving large nonlinear optimization problems [1,2] that avoids this cost. It employs function value and gradient information to search for a local optimum. It uses (as the name suggests) the BFGS (Broyden-Fletcher-Goldfarb-Shanno) update to approximate the inverse Hessian matrix. The size of the memory available to store the approximation of the inverse Hessian is limited (hence the L- in the name): in fact, we do not need to store the approximated matrix explicitly; we only need to be able to multiply it by a vector (most often the gradient), and this can be done efficiently by using the history of past updates.
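The core of this trick is the classical two-loop recursion. The sketch below (our own simplified version, not the stlbfgs source) applies the implicit inverse-Hessian approximation to a gradient vector, using the stored histories of position differences s_i and gradient differences y_i:

```cpp
#include <numeric>
#include <vector>

typedef std::vector<double> Vec;

static double dot(const Vec& a, const Vec& b) {
    return std::inner_product(a.begin(), a.end(), b.begin(), 0.);
}

// Two-loop recursion: returns q ~ H*g, where H is the inverse Hessian
// approximation implicitly defined by the histories S = {s_i = x_{i+1}-x_i}
// and Y = {y_i = grad_{i+1}-grad_i}, most recent pair last.
Vec apply_inverse_hessian(const Vec& g, const std::vector<Vec>& S, const std::vector<Vec>& Y) {
    const int m = static_cast<int>(S.size());
    const int n = static_cast<int>(g.size());
    Vec q = g;
    std::vector<double> alpha(m), rho(m);
    for (int i = m - 1; i >= 0; i--) {          // first (backward) loop
        rho[i]   = 1. / dot(Y[i], S[i]);
        alpha[i] = rho[i] * dot(S[i], q);
        for (int k = 0; k < n; k++) q[k] -= alpha[i] * Y[i][k];
    }
    double gamma = m ? dot(S[m-1], Y[m-1]) / dot(Y[m-1], Y[m-1]) : 1.;
    for (int k = 0; k < n; k++) q[k] *= gamma;  // initial scaling H_0 = gamma*I (a common choice)
    for (int i = 0; i < m; i++) {               // second (forward) loop
        const double beta = rho[i] * dot(Y[i], q);
        for (int k = 0; k < n; k++) q[k] += (alpha[i] - beta) * S[i][k];
    }
    return q;                                   // the descent direction is -q
}
```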

Surprisingly enough, most L-BFGS solvers that can be found in the wild are wrappers or transpilations of the original Fortran/Matlab codes by Jorge Nocedal and Dianne O'Leary.

Such code is impossible to improve if, for example, we are working near the limits of floating point precision; therefore, meet stlbfgs: a from-scratch modern C++ implementation. The project has zero external dependencies: no Eigen, nothing, just the plain standard library.

This implementation uses the Moré-Thuente line search algorithm [3]. The preconditioning is performed using the M1QN3 strategy [5,6]. The Moré-Thuente line search routine is tested against the data from [3] (Tables 1-6), and the L-BFGS routine is tested on problems taken from [4].
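For reference, the Moré-Thuente procedure searches for a step length $\alpha$ along a descent direction $d$ satisfying the strong Wolfe conditions (standard notation, with $0 < c_1 < c_2 < 1$):

\[
f(x + \alpha d) \;\le\; f(x) + c_1\,\alpha\,\nabla f(x)^{\top} d,
\qquad
\bigl|\nabla f(x + \alpha d)^{\top} d\bigr| \;\le\; c_2\,\bigl|\nabla f(x)^{\top} d\bigr| .
\]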

[1] J. Nocedal (1980). Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151), 773-782.

[2] Dong C. Liu, Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming (1989).

[3] Jorge J. Moré, David J. Thuente. Line search algorithms with guaranteed sufficient decrease. ACM Transactions on Mathematical Software (1994)

[4] Jorge J. Moré, Burton S. Garbow, Kenneth E. Hillstrom, "Testing Unconstrained Optimization Software", ACM Transactions on Mathematical Software (1981)

[5] Gilbert JC, Lemaréchal C. The module M1QN3. INRIA Rep., version. 2006,3:21.

[6] Gilbert JC, Lemaréchal C. Some numerical experiments with variable-storage quasi-Newton algorithms. Mathematical Programming 45, pp. 407-435, 1989.

This library does not contain any ready-to-execute remeshing algorithms. It simply provides a friendly way to manipulate a surface/volume mesh; it is meant to be used by your geometry processing software.

There are lots of mesh processing libraries in the wild; excellent specimens are:

geogram

libigl

pmp

CGAL

We are, however, not satisfied with any of those. At the moment of this writing, Geogram, for instance, has 847 thousand (sic!) lines of code. We strive to make a library of under 10K LOC able to handle common mesh processing tasks for surfaces and volumes. Another reason to create yet another mesh library is that we like explicit types and avoid auto as long as it is reasonable. In practice this means that we cannot use libigl, for the simple reason that we do not know what this data represents:

Eigen::MatrixXd V, Eigen::MatrixXi F

Is it a polygonal surface or a tetrahedral mesh? If it is a surface, is it triangulated or is it a generic polygonal mesh? We simply cannot tell... Thus, ultimaille provides several classes to represent meshes:

PointSet
PolyLine
Triangles, Quads, Polygons
Tetrahedra, Hexahedra, Wedges, Pyramids

You cannot mix tetrahedra and hexahedra in a single mesh; we believe it would be confusing to do otherwise. If you need a mixed mesh, create a separate mesh for each cell type: these classes can share a common set of vertices via a std::shared_ptr.
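As an illustration of this design, the hypothetical sketch below shows two meshes of different cell types referencing one shared vertex set (the class names are those listed above, but the members and calls are made up for the example and are not the actual ultimaille interface):

```cpp
// Hypothetical sketch, not the actual ultimaille API: two meshes of different
// cell types sharing one vertex set through a std::shared_ptr.
#include <memory>
#include <vector>

struct PointSet   { std::vector<double> xyz; };                                  // shared vertices
struct Tetrahedra { std::shared_ptr<PointSet> points; std::vector<int> cells; };
struct Hexahedra  { std::shared_ptr<PointSet> points; std::vector<int> cells; };

int main() {
    std::shared_ptr<PointSet> pts = std::make_shared<PointSet>();
    Tetrahedra tets  = { pts, {} };  // both meshes see the same vertices,
    Hexahedra  hexes = { pts, {} };  // but each cell type lives in its own mesh
    return 0;
}
```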

Common principles

This library is meant to have reasonable performance: we strive to make it as fast as possible as long as this does not deteriorate the readability of the source code. The whole library is built around STL containers (mainly std::vector<int>); normally it contains no malloc/free/new/delete instructions. Despite that, there is no size_t and no iterator:: in the code. An int is an int. Period. There are as few templates as is reasonable; the default data types are int and double.

Polycube-maps are used as base complexes in various fields of computational geometry, including the generation of regular all-hexahedral meshes free of internal singularities. For our contract with the CEA, we focus on robust polycube generation enabling polycube-based hexahedral remeshing that is as coarse as possible. So, starting from a tetrahedral mesh, we want to compute a hexahedral mesh. Polycube methods are usually split into four steps (Fig. 5):

All four steps have robustness issues, and this year we have addressed the flagging and the deformation steps.

The strict alignment constraints behind polycube-based methods make their computation challenging for CAD models used in numerical simulation via finite element methods (FEM). We proposed a novel approach 13 based on an evolutionary algorithm to robustly compute polycube-maps in this context. We address the labelling problem, which aims to precompute polycube alignment by assigning one of the base axes to each boundary face on the input. Previous research has described ways to initialize and improve a labelling via greedy local fixes. However, such algorithms lack robustness and often converge to inaccurate solutions for complex geometries. Our proposed framework Evocube (Fig. 5) alleviates this issue by embedding labelling operations in an evolutionary heuristic, defining fitness, crossover, and mutations in the context of labelling optimization. We evaluate our method on a thousand smooth and CAD meshes, showing that it converges to accurate labellings on a wide range of shapes. The limitations of our method are also discussed thoroughly.
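The skeleton of such a heuristic is short; the sketch below is a generic evolutionary loop over labellings (one axis label per boundary face), with the fitness, crossover and mutation operators passed in as parameters. It is only an illustration of the principle, not the Evocube implementation.

```cpp
#include <algorithm>
#include <random>
#include <vector>

typedef std::vector<int> Labelling;  // one label in {0..5} = {+X,-X,+Y,-Y,+Z,-Z} per boundary face

// Generic evolutionary loop: keep the best half of the population, refill the
// rest with mutated offspring of the survivors. In Evocube the operators act
// on whole patches of faces and the fitness measures labelling validity and
// fidelity to the boundary normals.
template <class Fitness, class Crossover, class Mutate>
Labelling evolve(std::vector<Labelling> population, int generations,
                 Fitness fitness, Crossover crossover, Mutate mutate, std::mt19937& rng) {
    for (int g = 0; g <= generations; g++) {
        std::sort(population.begin(), population.end(),
                  [&](const Labelling& a, const Labelling& b) { return fitness(a) > fitness(b); });
        if (g == generations) break;                            // final sort only ranks the result
        const int elite = static_cast<int>(population.size()) / 2;
        for (int i = elite; i < static_cast<int>(population.size()); i++) {
            const Labelling& p1 = population[rng() % elite];
            const Labelling& p2 = population[rng() % elite];
            population[i] = mutate(crossover(p1, p2));
        }
    }
    return population.front();  // best labelling found
}
```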

An important part of recent advances in hexahedral meshing focuses on the deformation of a domain into a polycube; the polycube deformed by the inverse map fills the domain with a hexahedral mesh. These methods are appreciated because they generate highly regular meshes. This year we have addressed a robustness issue that systematically occurs when a coarse mesh is desired: algorithms produce deformations that are not one-to-one, leading to the collapse of large portions of the model when trying to apply the (undefined) inverse map (Fig. 5). The origin of the problem is that the deformation requires mixed integer optimization, where the difficulty of enforcing the integer constraints is directly tied to the expected coarseness. Our solution 14 is to introduce sanity constraints preventing the loss of bijectivity due to the integer constraints.

We continue to improve the performance of our mapping algorithms. In particular, we aim at finding meaningful correspondences between 3D shapes. We build upon a representation of mappings between surfaces as linear operators acting on functions, called "functional maps". This representation has proven to be very flexible and efficient. The problem, however, is that, as it mostly relies on geodesic distances, it is challenging to take the surface orientation into account. As a consequence, if a surface has internal symmetries, there are several equivalently good solutions to the shape matching problem.
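In the standard setting (classical notation, recalled here only to fix ideas), a point-to-point map $T \colon M \to N$ induces a linear operator on functions by pull-back, which is encoded as a small matrix in the truncated Laplace–Beltrami eigenbases $\{\phi_i^M\}$ and $\{\phi_j^N\}$ of the two shapes:

\[
T_F(g) \;=\; g \circ T, \qquad C_{ij} \;=\; \bigl\langle \phi_i^{M},\; \phi_j^{N} \circ T \bigr\rangle_{M},
\]

so that a function with coefficients $b$ on $N$ is transferred to the function with coefficients $a = C\,b$ on $M$. The complex functional maps described below lift this construction from scalar functions to tangent vector fields.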

This year, we introduced complex functional maps 11, which extend the functional map framework to conformal maps between tangent vector fields on surfaces. A key property of these maps is their orientation awareness (refer to Fig. 6). More specifically, we demonstrated that, unlike regular functional maps that link the functional spaces of two surfaces, our complex functional maps establish a link between oriented tangent bundles, thus permitting robust and efficient transfer of tangent vector fields. By endowing the tangent bundle of each shape with a complex structure and then exploiting it, the resulting operations become naturally orientation-aware, thus favoring orientation- and angle-preserving correspondences across shapes, without relying on descriptors or extra regularization.

Using the complex functional map representation, we also proposed 12 a new deep learning approach to learn orientation-aware features in a fully unsupervised setting (refer to Fig. 6). Our architecture is built on top of DiffusionNet, making it robust to discretization changes. Additionally, we introduce a vector-field-based loss, which promotes orientation preservation without using (often unstable) extrinsic descriptors.

In collaboration with Elmar Eisemann and Ricardo Marroquim from TU Delft, we proposed 9 a semi-automatic method to extract perspective lines from paintings (Fig. 6). Perspective cues play an important role in painting analysis, as they may unveil important characteristics of the painter's techniques and creation process. Nevertheless, extracting perspective lines and their corresponding vanishing points is usually a laborious manual task. Moreover, small variations in the lines may lead to large variations in the vanishing points. The goal of this work is to mitigate the human variability factor and reduce the workload of this task.

In collaboration with the MFX team, we explored the optimization of closed space-filling curves under orientation objectives (Fig. 6). By solidifying material along the closed curve, solid layers of 3D prints can be manufactured in a single continuous extrusion motion. The control over orientation enables the deposition to align with specific directions in different areas, or to produce a locally uniform distribution of orientations, patterning the solidified volume in a precisely controlled manner. Our optimization framework 8 proceeds in two steps. First, we cast the problem as a combinatorial one, optimizing Hamiltonian cycles within a specially constructed graph. We rely on a stochastic optimization process based on local operators that modify a cycle while preserving its Hamiltonian property. Second, we use the result to initialize a geometric optimizer that improves the smoothness and uniform coverage of the cycle while further optimizing for alignment and orientation objectives.

Company: CEA

Duration: 01/10/2019 – 30/09/2022

Participants: Dmitry Sokolov, Nicolas Ray and François Protais

Abstract:
This project revolves around the generation of Polycubes guided by orientation fields. The first goal of the project is to define a new Polycube method that deforms an object along a previously generated 3D orientation field. Such a solution would overcome two major defects of Polycube methods, namely: (1) the possible absence of aligned mesh layers along the smooth edges; and (2) the poor treatment of sharp edges (which are very common on mechanical parts in CAD).

François defended his PhD 16 on October 21, 2022.

Company: RhinoTerrain

Duration: 01/12/2019 – 30/03/2024

Participants: Dmitry Sokolov, Nicolas Ray and Justine Basselin

Abstract: In this project, we are interested in the reconstruction phase in the context of LIDAR point clouds of city districts. These data are acquired from an airplane and contain every object present in the city: cars, trees, building roofs, roads and ground features, etc. Applications of these data, ranging from city visualization for tourism to flood and wind simulation, require the reconstruction of buildings as geometrical objects. As LIDAR point clouds are acquired from the sky, buildings are only represented by their roofs. Hence, determining the polygonal surfaces of roofs enables the reconstruction of the whole city by extrusion.

Justine defended her PhD 15 on December 14, 2022.

Company: TotalEnergies

Duration: 01/10/2020 – 30/03/2024

Participants: Dmitry Sokolov, Nicolas Ray and David Desobry

Abstract: The goal of this project is to improve the accuracy of rubber behavior simulations for certain parts produced by TotalEnergies, notably gaskets.
To do this, both parties need to develop meshing methods adapted to the simulation of large deformations in non-linear mechanics.
The Pixel team has strong expertise in hex-dominant meshing, while TotalEnergies has a strong background in numerical simulation in an industrial context. This collaborative project aims to take advantage of both areas of expertise.

Members of the team were reviewers for Eurographics, SIGGRAPH, SIGGRAPH Asia, ISVC, Pacific Graphics, and SPM.

Members of the team were reviewers for Computer Aided Design (Elsevier), Computer Aided Geometric Design (Elsevier), Transactions on Visualization and Computer Graphics (IEEE), Transactions on Graphics (ACM), Computer Graphics Forum (Wiley), Computational Geometry: Theory and Applications (Elsevier) and Computers & Graphics (Elsevier).

Dmitry Sokolov is responsible for the Industrial Club of GdR IG-RV (Groupement de recherche Informatique Géométrique et Graphique, Réalité Virtuelle et Visualisation).

Dmitry Sokolov is responsible for the third year of the computer science Bachelor's degree (Licence) at the University of Lorraine.

Members of the team have taught the following courses:

This year two of our students defended their PhDs, both in connection with bilateral contracts with industry (more details in section 8).

We participated in various science popularization events:

Dmitry Sokolov was also invited to speak at the round table dedicated to teaching computer graphics in France during JFIG 2022 (Bordeaux).