PIXEL is a research team stemming from the ALICE team, founded in 2004 by Bruno Lévy. The main scientific goal of ALICE was to develop new algorithms for computer graphics, with a special focus on geometry processing. From 2004 to 2006, we developed new methods for automatic texture mapping (LSCM, ABF++, PGP) that became de-facto standards. We then realized that these algorithms could be used to create an abstraction of shapes useful for geometry processing and modeling, which we developed from 2007 to 2013 within the GOODSHAPE ERC Starting Grant project. With the VORPALINE ERC Proof-of-Concept project, we turned the research prototype stemming from GOODSHAPE into an industrial geometry processing software and commercialized it (TotalEnergies and Dassault Systèmes; GeonX and ANSYS are currently under discussion). From 2013 to 2018, we developed more contacts and cooperations with the scientific computing and meshing research communities.

After a part of the team “spun off” around Sylvain Lefebvre and his ERC project SHAPEFORGE to become the MFX team (on additive manufacturing and computer graphics), we progressively moved the center of gravity of the rest of the team from computer graphics towards scientific computing and computational physics, in terms of cooperations, publications and industrial transfer.

We realized that geometry plays a central role in numerical simulation, and that "cross-pollination" with methods from our field (graphics) can lead to original algorithms. In particular, computer graphics routinely uses irregular and dynamic data structures, which are seldom encountered in scientific computing. Conversely, scientific computing routinely uses mathematical tools that are neither widespread nor well understood in computer graphics. Our goal is to establish a stronger connection between the two domains, and to exploit the fundamental aspects of both scientific cultures to develop new algorithms for computational physics.

Mesh generation is a notoriously difficult task. A quick search on the NSF grant web page with the keywords "mesh generation AND finite element" returns more than 30 currently active grants, for a total of $8 million. NASA identifies mesh generation as one of the major challenges for 2030 38, and estimates that it accounts for 80% of the time and effort in numerical simulation. This is due to the need to construct supports that match both the geometry and the physics of the system to be modeled. In our team we pay particular attention to scientific computing, because we believe it has a world-changing impact.

It is very unsatisfactory that meshing, i.e. just “preparing the data” for the simulation, eats up the major part of the time and effort. Our goal is to change the situation, by studying the influence of shapes and discretizations, and inventing new algorithms to automatically generate meshes that can be directly used in scientific computing. This goal is a result of our progressive shift from pure graphics (“Geometry and Lighting”) to real-world problems (“Shape Fidelity”).

Meshing is central in geometric modeling because it provides a way to represent functions on the objects being studied (texture coordinates, temperature, pressure, speed, etc.). There are numerous ways to represent functions, but if we suppose that the functions are piecewise smooth, the most versatile way is to discretize the domain of interest. Ways to discretize a domain range from point clouds to hexahedral meshes; let us list a few of them sorted by the amount of structure each representation has to offer (refer to Figure 1).

At one end of the spectrum there are point clouds: they exhibit no structure at all (white noise point samples) or very little (blue noise point samples).
The recent explosive development of acquisition techniques (e.g. scanning or photogrammetry) provides an easy way to build 3D models of real-world objects
that range from figurines and cultural heritage objects to geological outcrops and entire city scans.
These technologies produce massive, unstructured data (billions of 3D points per scene) that can be directly used for visualization purposes,
but this data is not suitable for high-level geometry processing algorithms and numerical simulations that usually expect meshes.
Therefore, at the very beginning of the acquisition-modeling-simulation-analysis pipeline, powerful scan-to-mesh algorithms are required.

During the last decade, many solutions have been proposed 34, 18, 30, 29, 21, but the problem of building a good mesh from scattered 3D points is far from solved. Besides the sheer size of the data, existing algorithms are also challenged by the extreme variation in data quality. Raw point clouds have many defects: they are often corrupted with noise, redundant, and incomplete (due to occlusions). In short, they are uncertain.

Triangulated surfaces are ubiquitous; they are the most widely used representation for 3D objects.
Some applications, like 3D printing, do not impose heavy requirements on the surface: typically it has to be watertight, but the triangles can have arbitrary shapes.
Other applications, like texturing, require very regular meshes, because they suffer from elongated triangles with large angles.

While being a common solution for many problems, triangle mesh generation is still an active topic of research. The diversity of representations (meshes, NURBS, ...) and file formats often results in a “Babel” problem when one has to exchange data. The only common representation is often the mesh used for visualization, that has in most cases many defects, such as overlaps, gaps or skinny triangles. Re-injecting this solution into the modeling-analysis loop is non-trivial, since again this representation is not well adapted to analysis.

Tetrahedral meshes are the volumetric equivalent of triangle meshes; they are very common in the scientific computing community.
Tetrahedral meshing is now a mature technology. Remarkably, still today, all the software used in
industry is built on top of a handful of kernels, all written by a small number of individuals
23, 36, 42, 25, 35, 37, 24, 46.

Meshing requires a long-term, focused, dedicated research effort that combines deep theoretical studies with advanced software development. We have the ambition to bring this kind of maturity to a different type of mesh (structured, with hexahedra), which is highly desirable for some simulations, and for which, unlike tetrahedra, no satisfying automatic solution exists. In the light of recent contributions, we believe that the domain is ready to overcome the principal difficulties.

Finally, at the most structured end of the spectrum there are hexahedral meshes composed of deformed cubes (hexahedra).
They are preferred for certain physics simulations (deformation mechanics, fluid dynamics ...) because they can significantly
improve both speed and accuracy. This is because (1)
they contain a smaller number of elements (a single hexahedron replaces roughly 5–6 tetrahedra),
(2) the associated tri-linear function basis has cubic terms that
can better capture higher-order variations, (3) they
avoid the locking phenomena encountered with tetrahedra 16,
(4) hexahedral meshes exploit inherent tensor product structure
and (5) hexahedral meshes are superior in direction-dominated physical simulations (boundary layers, shock waves, etc.).
Being extremely regular, hexahedral meshes are often claimed to be The Holy Grail for many finite element methods 17,
outperforming tetrahedral meshes both in terms of computational speed and accuracy.
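To make point (2) concrete, the tri-linear interpolant on a reference hexahedron reads

```latex
f(x,y,z) = a_0 + a_1 x + a_2 y + a_3 z + a_4\, xy + a_5\, yz + a_6\, zx + a_7\, xyz,
```

where the $xyz$ monomial is the cubic term absent from the purely linear basis of a tetrahedron, which is why a single hexahedron can capture variations that would require several tetrahedra.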

Despite 30 years of research efforts and important advances, mainly by the Lawrence Livermore
National Labs in the U.S. 41, 40, hexahedral meshing still requires
considerable manual intervention in most cases (days, weeks and even months for the most complicated domains).
Some automatic methods exist 28, 44, that constrain the boundary into a regular grid, but they are
not fully satisfactory either, since the grid is not aligned with the boundary. The advancing front method
15 does not have this problem, but generates irregular elements on
the medial axis, where the fronts collide. Thus, there is no
fully automatic algorithm that results in satisfactory boundary alignment.

Currently, transforming the raw point cloud into a triangular mesh is a long pipeline involving disparate geometry processing algorithms.

The output of this pipeline is a locally-structured model which is used in downstream mesh analysis methods such as feature extraction, segmentation in meaningful parts or building Computer-Aided Design (CAD) models.

It is well known that point cloud data contains measurement errors due to factors related
to the external environment and to the measurement system itself 39, 33, 19.
These errors propagate through all processing steps: pre-processing, registration and mesh generation.
Even worse, the heterogeneous nature of different processing steps makes it extremely difficult to know how these errors propagate through the pipeline.
To give an example, cloud-to-cloud alignment requires estimating normals.
However, these normals are discarded in the point cloud produced by the registration stage.
Later on, when triangulating the cloud, the normals are re-estimated on the modified data, thus introducing uncontrollable errors.

We plan to develop new reconstruction, meshing and re-meshing algorithms, with a specific focus on accuracy and robustness to all the defects present in the input raw data. We think that pervasive treatment of uncertainty is the missing ingredient to achieve this goal. We plan to rethink the pipeline with the position uncertainty maintained during the whole process. Input points can be considered either as error ellipsoids 43 or as probability measures 27. In a nutshell, our idea is to start by computing an error ellipsoid 45, 31 for each point of the raw data, and then to accumulate the errors (approximations) made at each step of the processing pipeline while building the mesh. In this way, the final users will be able to take the knowledge of the uncertainty into account and rely on this confidence measure for further analysis and simulations. Quantifying uncertainties for reconstruction algorithms, and propagating them from input data to high-level geometry processing algorithms, has never been considered before, possibly due to the very different methodologies of the approaches involved. At the very beginning we will re-implement the entire pipeline, and then attack the weak links through all three reconstruction stages.
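As a rough illustration of the very first step (the function name and types below are ours, purely hypothetical, not part of any existing pipeline), a per-point error ellipsoid can be estimated from the covariance matrix of the point's neighborhood: its eigenvectors and eigenvalues give the axes and extents of the ellipsoid.

```cpp
#include <array>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Covariance matrix of a point neighborhood around its centroid.
// Its eigen-decomposition yields the axes/extents of an error ellipsoid.
Mat3 neighborhood_covariance(const std::vector<Vec3>& pts) {
    Vec3 mean = {0, 0, 0};
    for (const Vec3& p : pts)
        for (int i = 0; i < 3; i++) mean[i] += p[i] / pts.size();
    Mat3 cov = {}; // zero-initialized
    for (const Vec3& p : pts)
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                cov[i][j] += (p[i] - mean[i]) * (p[j] - mean[j]) / pts.size();
    return cov;
}
```

For points sampled on a plane, the covariance is degenerate in the normal direction (a flat ellipsoid), which is exactly the kind of anisotropic confidence information the pipeline would carry along.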

Parameterizations are one of our team's favorite tools, and we have made major contributions to the field: we solved a fundamental problem formulated more than 60 years ago 2. Parameterizations provide a very powerful way to reveal structures on objects. The most ubiquitous application of parameterizations is texture mapping: texture maps provide a way to represent, in 2D (on the map), information related to a surface. Once the surface is equipped with a map, we can do much more than merely color the surface: we can approximate geodesics, edit the mesh directly in 2D, or transfer information from one mesh to another.

Parameterizations constitute a family of methods that involve optimizing an objective function, subject to a set of constraints (equality, inequality, integrality, etc.). Computing the exact solution to such problems is beyond any hope, therefore approximations are the only resort. This raises a number of problems, such as the minimization of highly nonlinear functions and the definition of the topology of direction fields, without forgetting the robustness of the software that puts all this into practice.

We are particularly interested in a specific instance of parameterization: hexahedral meshing.
The idea 6, 4 is to build a transformation between the object and a parametric domain, in which the preimage of a regular grid defines the hexahedral mesh.

Current global parameterizations allow grids to be positioned inside geometrically simple objects whose internal structure (the singularity graph) can be relatively basic. We wish to be able to handle more configurations by improving three aspects of current methods:

All global parameterization approaches are decomposed into three steps: frame field generation, field integration to obtain a global parameterization, and final mesh extraction. Getting a full hexahedral mesh from a global parameterization requires the parameterization to have a positive Jacobian everywhere except on the frame field singularity graph. To our knowledge, there is no solution that guarantees this property, but some efforts have been made to limit the proportion of failure cases. An alternative is to produce hexahedral-dominant meshes. Our position lies between those two points of view:

The global parameterization approach yields impressive results on some geometric objects, which is encouraging, but not yet sufficient for numerical analysis. Note that while we attack the remeshing with our parameterizations toolset, the wish to improve the tool itself (as described above) is orthogonal to the effort we put into making the results usable by the industry. To go further, our idea (as opposed to 32, 22) is that the global parameterization should not handle all the remeshing, but merely act as a guide to fill a large proportion of the domain with a simple structure; it must cooperate with other remeshing bricks, especially if we want to take final application constraints into account.

For each application we will take as input domains, sets of constraints and, possibly, fields (e.g. the magnetic field in a tokamak). Having established the criteria of mesh quality (per application!), we will incorporate this input into the mesh generation process, and then validate the mesh with numerical simulation software.

Numerical simulation is the main targeted application domain for the geometry processing tools that we develop. Our mesh generation tools will be tested and evaluated within the context of our cooperation with Hutchinson, experts in vibration control, fluid management and sealing system technologies. We think that the hex-dominant meshes that we generate have geometrical properties that make them suitable for some finite element analyses, especially for simulations with large deformations.

We also have a tight collaboration with geophysical modeling specialists via the RING consortium. In particular, we produce hexahedral-dominant meshes for geomechanical simulations of gas and oil reservoirs. From a scientific point of view, this use case introduces new types of constraints (alignments with faults and horizons), and allows certain types of nonconformities that we did not consider until now.

Our cooperation with RhinoTerrain pursues the same goal: the reconstruction of buildings from point cloud scans makes it possible to perform 3D analyses and studies on insolation, floods and wave propagation, and the wind and noise simulations necessary for urban planning.

Ultimaille is a lightweight mesh handling library. It does not contain any ready-to-execute remeshing algorithms; it simply provides a friendly way to manipulate a surface/volume mesh, and it is meant to be used by external geometry processing software.

There are lots of mesh processing libraries in the wild; excellent specimens include geogram, libigl, pmp, and CGAL.

We are, however, not satisfied with any of them. At the time of this writing, geogram, for instance, has 847 thousand (sic!) lines of code. We strive to make a library of under 10K lines of code able to perform common mesh handling tasks for surfaces and volumes. The idea is to propose a lightweight dependency for sharing geometry processing algorithms.

Another reason to create yet-another-mesh-library is the speed of development and debugging. We believe that explicit typing allows for easier code maintenance, and we tend to avoid "auto" as long as it is reasonable. In practice this means that we cannot use libigl, for the simple reason that we do not know what this data represents:

Eigen::MatrixXd V, Eigen::MatrixXi F: is it a polygonal surface or a tetrahedral mesh? If it is a surface, is it triangulated or is it a generic polygonal mesh? Difficult to tell. Thus, ultimaille provides several classes to represent meshes:

PointSet, PolyLine, Triangles, Quads, Polygons, Tetrahedra, Hexahedra, Wedges, Pyramids. You cannot mix tetrahedra and hexahedra in a single mesh; we believe it would be confusing to do otherwise. If you need a mixed mesh, create a separate mesh for each cell type: these classes can share a common set of vertices via a std::shared_ptr.
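To illustrate this design choice with a simplified sketch (this is not the actual ultimaille API; all names here are ours), each cell-type class owns its connectivity, while several meshes can point to one shared vertex set:

```cpp
#include <array>
#include <memory>
#include <vector>

using Vec3 = std::array<double, 3>;
using PointSet = std::vector<Vec3>;

// One class per cell type: the connectivity layout is unambiguous,
// unlike a generic (V, F) matrix pair.
struct Tetrahedra {
    std::shared_ptr<PointSet> points;
    std::vector<int> cells; // 4 vertex indices per tetrahedron
};

struct Hexahedra {
    std::shared_ptr<PointSet> points;
    std::vector<int> cells; // 8 vertex indices per hexahedron
};
```

A "mixed" model is then just a tetrahedral mesh and a hexahedral mesh constructed over the same shared_ptr<PointSet>: moving a vertex through one mesh is immediately visible from the other.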

This library is meant to have reasonable performance: we strive to make it as fast as possible, as long as this does not hurt the readability of the source code.

Many problems can be stated as a minimization of some objective function. Typically, if we can evaluate the function at some point together with its gradient, we can descend along the gradient direction. We can do better and use second derivatives (the Hessian matrix). There is an alternative to this costly option: L-BFGS is a quasi-Newton optimization algorithm for solving large nonlinear optimization problems [1,2]. It employs function value and gradient information to search for a local optimum. It uses (as the name suggests) the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm to approximate the inverse Hessian matrix. The size of the memory available to store the approximation of the inverse Hessian is limited (hence the L- in the name): in fact, we do not need to store the approximated matrix directly, but rather we need to be able to multiply it by a vector (most often the gradient), and this can be done efficiently by using the history of past updates.

Surprisingly enough, most L-BFGS solvers found in the wild are wrappers/transpilations of the original Fortran/Matlab codes by Jorge Nocedal and Dianne O'Leary.

Such code is impossible to improve if, for example, we are working near the limits of floating point precision. Hence stlbfgs: a from-scratch, modern C++ implementation. The project has zero external dependencies: no Eigen, nothing beyond the plain standard library.

This implementation uses the Moré–Thuente line search algorithm [3], and the preconditioning is performed using the M1QN3 strategy [5,6]. The Moré–Thuente line search routine is tested against the data from [3] (Tables 1-6), and the L-BFGS routine is tested on problems taken from [4].
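For reference, the Moré–Thuente routine searches for a step length $\alpha$ along the descent direction $p$ satisfying the strong Wolfe conditions:

```latex
f(x+\alpha p) \le f(x) + c_1 \alpha\, \nabla f(x)^{T} p,
\qquad
\left|\nabla f(x+\alpha p)^{T} p\right| \le c_2 \left|\nabla f(x)^{T} p\right|,
\qquad 0 < c_1 < c_2 < 1,
```

i.e. sufficient decrease of the function value together with sufficient flattening of the directional derivative, which guarantees that the curvature pairs fed to L-BFGS keep the inverse Hessian approximation positive definite.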

[1] J. Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773-782, 1980.

[2] D. C. Liu, J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503-528, 1989.

[3] J. J. Moré, D. J. Thuente. Line search algorithms with guaranteed sufficient decrease. ACM Transactions on Mathematical Software, 1994.

[4] J. J. Moré, B. S. Garbow, K. E. Hillstrom. Testing unconstrained optimization software. ACM Transactions on Mathematical Software, 1981.

[5] J. C. Gilbert, C. Lemaréchal. The module M1QN3. INRIA report, 2006.

[6] J. C. Gilbert, C. Lemaréchal. Some numerical experiments with variable-storage quasi-Newton algorithms. Mathematical Programming, 45:407-435, 1989.

A large family of quadmeshing methods is based on a parametrization of the input surface. Intuitively, a quadmesh can be computed by first laying an input triangle mesh onto the plane using a parametrization technique, overlaying the uv-coordinates with an orthogonal grid of the plane, and lifting the result back onto the surface. In other words, using a parametrization for quadmeshing boils down to applying a grid texture onto a surface and extracting the resulting connectivity; refer to Figure 3. However, one needs to be careful about what happens at the seams and at the boundary of the input mesh in order to avoid discontinuities and retrieve complete and valid quads. Computing a global seamless parametrization is a challenging problem in geometry processing, mostly because any attempt at solving it needs to determine two very different sets of variables. On the one hand, a seamless parametrization is defined by a discrete set of singular points (or cones) that concentrate all the curvature in multiples of π/2; on the other hand, the continuous coordinates of the map must be optimized for low distortion while remaining consistent with these discrete choices.

To overcome this apparent complexity, a common approach is to decouple the problem into two independent subproblems: first computing a rotationally seamless parameterization, and then computing a truly seamless (integer-seamless) map; refer to Figure 4 for an illustration. This year we contributed 7, 8 to both parts of the pipeline.
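In the usual formulation, "seamless" means that across every seam the two copies $(u,v)$ and $(u',v')$ of the map differ by a symmetry of the unit grid:

```latex
\begin{pmatrix} u' \\ v' \end{pmatrix}
= R_{\pi/2}^{\,k}\begin{pmatrix} u \\ v \end{pmatrix}
+ \begin{pmatrix} t_u \\ t_v \end{pmatrix},
\qquad k \in \{0,1,2,3\}.
```

A rotationally seamless map fixes only the rotation part $R_{\pi/2}^{\,k}$; the truly (integer-)seamless map additionally requires the translations $(t_u, t_v)$ to be integer, which is what makes the grid lines match across the seam.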

Even having relaxed part of the integer variables by targeting a rotationally seamless parameterization, we still face a hard problem.
The common strategy is to choose the singularity locations using a cross field or other proxies, and then compute a global rotationally seamless parametrization constrained to the given singularity locations. This procedure, however, loosens the link between cone placement and the distortion of the final result. As a consequence, a quadmesh extracted from this parametrization can contain quads that stray far from squares, because its connectivity was optimized almost independently of its geometry.

We proposed the first method 7 that keeps the connection between the rotational and positional variables. In differential geometry, Cartan's method of moving frames provides a rich theory to design and describe local deformations. This framework uses local frames as references, making local coordinates translation and rotation invariant. Furthermore, Cartan's first structure equation provides a necessary and sufficient condition to the existence of an embedded surface, as it describes how differential coordinates should change relative to the frame's motion, effectively removing all influence of the ambient coordinate systems on the deformation.

In this work, we extend Cartan's method to singular frame fields and we prove that any solution of the derived structure equations is a valid cone parametrization. Most importantly, we provide a vertex-based discretization of the smooth theory which provably preserves all its properties. The absence of a global coordinate system allows us to compute parametrizations without prior knowledge of the cut positions, and to automatically place quantized cones optimizing a given distortion energy. This makes the algorithm very versatile with respect to user-prescribed constraints such as feature or boundary alignment and forced cone locations. The algorithm takes the form of a single non-linear least-squares problem minimized using quasi-Newton methods. We demonstrate its performance on a large dataset of models, where we are able to output seamless parametrizations that are less distorted than those of previous works.

As we have mentioned earlier, truly seamless maps can be obtained by solving a mixed integer optimization problem: real variables define the geometry of the charts and integer variables define the combinatorial structure of the decomposition. To make this optimization problem tractable, a common strategy is to ignore integer constraints at first by computing a rotationally seamless map, then to enforce them in a so-called quantization step.

Existing quantization algorithms exploit the geometric interpretation of the integer variables to solve an equivalent problem: they consider the final quadmesh as a subdivision of a T-mesh embedded in the surface, and optimize the number of subdivisions for each edge of this T-mesh (Fig. 5). We propose 8 to operate on a decimated version of the original surface instead of the T-mesh. One motivation for this work is the ease of adapting the implementation to constraints such as free boundaries, complex feature curve networks, etc. In addition, this approach is very promising for possible extensions to the 3D (hex meshing) case.
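Schematically (this is our simplified reading of such methods, not the exact formulation of 8), quantization can be written as a small integer program over the T-mesh edge lengths $\ell_e$:

```latex
\min_{\ell \in \mathbb{Z}_{\ge 0}^{E}} \; \sum_{e \in E} \left| \ell_e - h_e \right|
\quad \text{s.t.} \quad
\sum_{e \in A} \ell_e = \sum_{e \in B} \ell_e
\;\; \text{for opposite sides } A, B \text{ of every patch,}
```

where $h_e$ denotes the (real-valued) edge length suggested by the rotationally seamless parametrization; the equality constraints ensure that each patch can be filled with a regular grid of quads.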

This year we have also contributed to a few topics not directly related to the core topic of the team.

One-class classification (OCC), or outlier detection, is the task of determining whether a sample belongs to a given distribution. We propose a new method, dubbed One-Class Signed Distance Function (OCSDF), to perform one-class classification by provably learning the Signed Distance Function (SDF) to the boundary of the support of a distribution; refer to Fig. 5. The distance to the support can be interpreted as a normality score, and its approximation using 1-Lipschitz neural networks provides robustness bounds against l2 adversarial attacks, an underexplored weakness of deep learning-based OCC algorithms. We show that OCSDF is competitive against concurrent methods on tabular and image data while being considerably more robust to adversarial attacks, illustrating its theoretical properties. Finally, as exploratory research perspectives, we theoretically and empirically show how OCSDF connects OCC with image generation and implicit neural surface parametrization.

We designed a twin-width solver 9 for the exact track of the 2023 PACE (Parameterized Algorithms and Computational Experiments) Challenge. Our solver is based on a simple branch-and-bound algorithm with search space reductions and is implemented in C++. The solver was evaluated on a dataset of 20 graphs and ranked third out of twenty teams. The twin-width of an undirected graph is a natural number used to study the parameterized complexity of graph algorithms. Intuitively, it measures how similar the graph is to a cograph, a type of graph that can be reduced to a single vertex by repeatedly merging twins, i.e. vertices that have the same neighbors. The twin-width is defined from a sequence of repeated mergers where the vertices are not required to be twins, but have nearly equal sets of neighbors.

We present a simple algorithm 13, 14 to help generate simple structures: "Fibonacci words" (i.e., certain words that are enumerated by Fibonacci numbers), Motzkin words, and Schröder trees of size n. It starts by choosing an initial integer with uniform probability (i.e.,

Company: TotalEnergies

Duration: 01/10/2020 – 30/03/2024

Participants: Dmitry Sokolov, Nicolas Ray and David Desobry

Abstract: The goal of this project is to improve the accuracy of rubber behavior simulations for certain parts produced by TotalEnergies, notably gaskets.
To do this, both parties need to develop meshing methods adapted to the simulation of large deformations in non-linear mechanics.
The Pixel team has strong expertise in hex-dominant meshing, while TotalEnergies has a strong background in numerical simulation in an industrial context. This collaborative project aims to take advantage of both areas of expertise.

David defended his PhD 12 on August 23, 2023.

This year we hosted the 4th edition of the FRAMES workshop. The goal of FRAMES is to gather both theoretical (computer science & applied mathematics) and practical (engineering & industry) specialists in the field of frame-based hex-meshing, in view of numerical computations.

Members of the team were reviewers for Eurographics, SIGGRAPH, SIGGRAPH Asia, ISVC, Pacific Graphics, and SPM.

Members of the team were reviewers for Computer Aided Design (Elsevier), Computer Aided Geometric Design (Elsevier), Transactions on Visualization and Computer Graphics (IEEE), Transactions on Graphics (ACM), Computer Graphics Forum (Wiley), Computational Geometry: Theory and Applications (Elsevier) and Computers & Graphics (Elsevier).

Dobrina Boltcheva is responsible for the software engineering study program at the IUT of Saint-Dié. Dmitry Sokolov is responsible for the third year of the computer science Licence degree at the Université de Lorraine.

Members of the team have taught the following courses:

We have participated in various science popularization events: