ALICE is a team founded in 2004 by Bruno Lévy. The main scientific goal of ALICE was to develop new algorithms for computer graphics, with a special focus on geometry processing. From 2004 to 2006, we developed new methods for automatic texture mapping (LSCM, ABF++, PGP) that became de-facto standards. We then realized that these algorithms could be used to create an abstraction of shapes useful for geometry processing and modeling purposes, which we developed from 2007 to 2013 within the GOODSHAPE StG ERC project. We transformed the research prototype stemming from this project into an industrial geometry processing software with the VORPALINE PoC ERC project, and commercialized it (TOTAL, Dassault Systèmes, with GeonX and ANSYS currently under discussion). From 2013 to 2018, we developed more contacts and cooperations with the “scientific computing” and “meshing” research communities.

After a part of the team “spun off” around Sylvain Lefebvre and his ERC project SHAPEFORGE to become the MFX team (on additive manufacturing and computer graphics), we progressively moved the center of gravity of the rest of the team from computer graphics towards scientific computing and computational physics, in terms of cooperations, publications and industrial transfer.

We realized that *geometry* plays a central role in numerical simulation, and that
“cross-pollination” with methods from our field (graphics) can lead to original algorithms. In particular, computer graphics
routinely uses irregular and dynamic data structures, which are more seldom encountered in scientific computing.
Conversely, scientific computing routinely uses mathematical tools that are not widespread and not well
understood in computer graphics.
Our goal is to establish a stronger connection between both domains, and exploit the fundamental aspects of both
scientific cultures to develop new algorithms for computational physics.

Mesh generation is a notoriously difficult task. NASA identifies mesh generation as one of the major challenges for 2030, and estimates that it accounts for 80% of the time and effort in numerical simulation.
This is due to the need for constructing supports that match both the geometry and the physics of the system to be modeled.
In our team we pay particular attention to scientific computing, because we believe it has a world-changing impact.

It is very unsatisfactory that meshing, i.e. just “preparing the data” for the simulation, eats up the major part of the time and effort. Our goal is to make the situation evolve, by studying the influence of shapes and discretizations, and inventing new algorithms to automatically generate meshes that can be directly used in scientific computing. This goal is a result of our progressive shift from pure graphics (“Geometry and Lighting”) to real world problems (“Shape Fidelity”).

Meshing is so central in geometric modeling because it provides a way to represent functions on the objects studied (texture coordinates, temperature, pressure, speed, etc.). There are numerous ways to represent functions, but if we suppose that the functions are piecewise smooth, the most versatile way is to discretize the domain of interest. Ways to discretize a domain range from point clouds to hexahedral meshes; let us list a few of them sorted by the amount of structure each representation has to offer (refer to Figure ).

At one end of the spectrum there are **point clouds:** they exhibit no structure at all (white noise point samples) or very little (blue noise point samples).
The recent explosive development of acquisition techniques (e.g. scanning or photogrammetry) provides an easy way to build 3D models of real-world objects
that range from figurines and cultural heritage objects to geological outcrops and entire city scans.
These technologies produce massive, unstructured data (billions of 3D points per scene) that can be directly used for visualization purposes,
but this data is not suitable for high-level geometry processing algorithms and numerical simulations that usually expect meshes.
Therefore, at the very beginning of the acquisition-modeling-simulation-analysis pipeline, powerful scan-to-mesh algorithms are required.

During the last decade, many solutions have already been proposed , , , , ,
but the problem of building a good mesh from scattered 3D points is far from being solved.
Besides the fact that the data is unusually large, the existing algorithms are also challenged by the extreme variation of data quality.
Raw point clouds have many defects: they are often corrupted with noise, redundant, and incomplete (due to occlusions): *they are all uncertain*.

**Triangulated surfaces** are ubiquitous: they are the most widely used representation for 3D objects.
Some applications like 3D printing do not impose heavy requirements on the surface: typically it has to be watertight, but the triangles can have arbitrary shapes.
Other applications like texturing require very regular meshes, because they suffer from elongated triangles with large angles.

While being a common solution for many problems, triangle mesh generation is still an active topic of research. The diversity of representations (meshes, NURBS, ...) and file formats often results in a “Babel” problem when one has to exchange data. The only common representation is often the mesh used for visualization, that has in most cases many defects, such as overlaps, gaps or skinny triangles. Re-injecting this solution into the modeling-analysis loop is non-trivial, since again this representation is not well adapted to analysis.

**Tetrahedral meshes** are the volumetric equivalent of triangle meshes; they are very common in the scientific computing community.
Tetrahedral meshing is now a mature technology. It is remarkable that still today all the existing software used in the
industry is built on top of a handful of kernels, all written by a small number of individuals
, , , , , , , .
Meshing requires a long-term, focused, dedicated research effort, that combines deep theoretical studies with advanced software development.
We have the ambition to bring a different type of mesh (structured, with hexahedra) to this kind of maturity; such meshes are highly
desirable for some simulations, and, unlike for tetrahedra, no satisfying automatic solution exists.
In the light of recent contributions, we believe that the domain is ready to overcome the principal difficulties.

Finally, at the most structured end of the spectrum there are **hexahedral meshes** composed of deformed cubes (hexahedra).
They are preferred for certain physics simulations (deformation mechanics, fluid dynamics ...) because they can significantly
improve both speed and accuracy. This is because (1)
they contain a smaller number of elements (it takes 5–6 tetrahedra to fill a single hexahedron),
(2) the associated tri-linear function basis has cubic terms that
can better capture higher order variations, (3) they
avoid the locking phenomena encountered with tetrahedra ,
(4) hexahedral meshes exploit inherent tensor product structure
and (5) hexahedral meshes are superior in direction
dominated physical simulations (boundary layer, shock waves, etc).
Being extremely regular, hexahedral meshes are often claimed to be the Holy Grail for many finite element methods ,
outperforming tetrahedral meshes both in terms of computational speed and accuracy.
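Point (2) above can be made explicit by comparing the interpolants on the two reference elements: the linear basis on a tetrahedron stops at degree one, while the tri-linear basis on a hexahedron carries mixed terms up to the cubic monomial $xyz$:

```latex
f_{\text{tet}}(x,y,z) \;=\; a_0 + a_1 x + a_2 y + a_3 z,
\qquad
f_{\text{hex}}(x,y,z) \;=\; \sum_{i,j,k \in \{0,1\}} a_{ijk}\, x^i y^j z^k
\;=\; a_{000} + \cdots + a_{111}\, xyz .
```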

Despite 30 years of research efforts and important advances, mainly by the Lawrence Livermore
National Labs in the U.S. , , hexahedral meshing still requires
considerable manual intervention in most cases (days, weeks and even months for the most complicated domains).
Some automatic methods exist , , that constrain the boundary into a regular grid, but they are
not fully satisfactory either, since the grid is not aligned with the boundary. The advancing front method
does not have this problem, but generates irregular elements on
the medial axis, where the fronts collide. Thus, *there is no
fully automatic algorithm that results in satisfactory boundary alignment*.

Currently, transforming the raw point cloud into a triangular mesh is a long pipeline involving disparate geometry processing algorithms:

*Point pre-processing:* colorization, filtering to remove unwanted background, first noise reduction along the acquisition viewpoint;

*Registration:* cloud-to-cloud alignment, filtering of remaining noise, registration refinement;

*Mesh generation:* triangular mesh from the complete point cloud, re-meshing, smoothing.

The output of this pipeline is a locally structured model which is used in downstream mesh analysis methods such as feature extraction, segmentation in meaningful parts or building CAD models.

It is well known that point cloud data contains measurement errors due to factors related
to the external environment and to the measurement system itself , , .
These errors propagate through all processing steps: pre-processing, registration and mesh generation.
Even worse, the heterogeneous nature of different processing steps makes it extremely difficult to know *how* these errors propagate through the pipeline.
To give an example, for cloud-to-cloud alignment it is necessary to estimate normals.
However, the normals are forgotten in the point cloud produced by the registration stage.
Later on, when triangulating the cloud, the normals are re-estimated on the modified data, thus introducing uncontrollable errors.
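To make this concrete, here is a minimal sketch (illustrative only, not our pipeline code) of the classical PCA normal estimate: the normal is taken as the eigenvector of the local covariance matrix with the smallest eigenvalue, so any modification of the neighborhood by an intermediate processing step silently changes the result.

```python
import numpy as np

def estimate_normal(neighbors):
    """Classical PCA normal estimate: the eigenvector of the local covariance
    matrix with the smallest eigenvalue (the direction of least variance)."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]

# Points sampled on the plane z = 0: the estimate is (0, 0, +/-1). Any noise
# added by a later processing step changes `cov`, hence the normal.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (50, 2)), np.zeros(50)])
n = estimate_normal(pts)
```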

We plan to develop new reconstruction, meshing and re-meshing algorithms, with a specific focus on accuracy and resistance to all the defects present in the input raw data. We think that pervasive treatment of uncertainty is the missing ingredient to achieve this goal. We plan to rethink the pipeline with the position uncertainty maintained during the whole process. Input points can be considered either as error ellipsoids or as probability measures . In a nutshell, our idea is to start by computing an error ellipsoid for each point of the raw data, and then to accumulate the errors (approximations) committed at each step of the processing pipeline while building the mesh. In this way, the final users will be able to take the uncertainty knowledge into account and rely on this confidence measure for further analysis and simulations. Quantifying the uncertainty of reconstruction algorithms, and propagating it from the input data to high-level geometry processing algorithms, has never been considered before, possibly due to the very different methodologies of the approaches involved. At the very beginning we will re-implement the entire pipeline, and then attack the weak links through all three reconstruction stages.
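A minimal sketch of the kind of propagation we have in mind, under a first-order (linearized) model: each processing step f transforms an error ellipsoid Σ into JΣJᵀ, where J is the Jacobian of f. The rotation used below is a hypothetical stand-in for one alignment stage.

```python
import numpy as np

def propagate_covariance(f, x, cov, eps=1e-6):
    """First-order propagation of an error ellipsoid through a processing
    step f: cov' ~= J cov J^T, with J the numerical Jacobian of f at x."""
    J = np.empty((f(x).size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size); dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J @ cov @ J.T

# Toy processing step: a rigid rotation about z (e.g. one alignment stage).
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
cov_in = np.diag([0.01, 0.04, 0.0025])   # anisotropic scanner noise
cov_out = propagate_covariance(lambda p: R @ p, np.zeros(3), cov_in)
```

A rigid motion preserves the volume of the ellipsoid, while a smoothing or resampling step would shrink or reshape it; chaining such updates along the pipeline yields the per-point confidence measure mentioned above.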

Parameterizations are among the favorite tools of our team: they provide a very powerful way to reveal structure on objects. The most ubiquitous application of parameterizations is texture mapping: texture maps provide a way to represent in 2D (on the map) information related to a surface. Once the surface is equipped with a map, we can do much more than merely color the surface: we can approximate geodesics, edit the mesh directly in 2D, or transfer information from one mesh to another.

Parameterization methods involve optimizing an objective function subject to a set of constraints (equality, inequality, integrality, etc.). Computing the exact solution to such problems is beyond any hope; approximation is therefore the only resort. This raises a number of problems, such as the minimization of highly nonlinear functions and the definition of the topology of direction fields, without forgetting the robustness of the software that puts all this into practice.

We are particularly interested in a specific instance of parameterization: hexahedral meshing.
The idea is to build a transformation from the object to a parametric space in which the desired mesh becomes a regular grid: the preimages of the integer grid planes then decompose the object into hexahedra.
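A 2D toy analogue makes the principle concrete (the affine map below is a hypothetical stand-in for a real global parameterization): the map sends the domain to parametric space, and mesh vertices are the preimages of the integer lattice points that land inside the image.

```python
import numpy as np

# Toy 2D "parameterization" of the unit square: u = A x.
A = np.array([[2.0, 0.5], [0.0, 2.0]])
f_inv = lambda u: np.linalg.solve(A, u)   # pull a parametric point back

# Collect the integer lattice points whose preimage lies in the unit square:
# each unit cell of the lattice then maps back to one (deformed) quad.
lattice = [np.array([i, j], float) for i in range(-1, 4) for j in range(-1, 4)]
verts = [f_inv(u) for u in lattice
         if np.all(f_inv(u) >= -1e-9) and np.all(f_inv(u) <= 1 + 1e-9)]
```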

Current global parameterizations allow grids to be positioned inside geometrically simple objects whose internal structure (the singularity graph) can be relatively basic. We wish to be able to handle more configurations by improving three aspects of current methods:

Local grid orientation is usually prescribed by minimizing the curvature of a 3D steering field. Unfortunately, this heuristic does not always provide singularity curves that can be integrated by the parameterization. We plan to explore how to embed integrability constraints in the generation of the direction fields. To address the problem, we already identified necessary validity criteria, for example, the permutation of axes along elementary cycles that go around a singularity must preserve one of the axes (the one tangent to the singularity). The first step to enforce this (necessary) condition will be to split the frame field generation into two parts: first we will define a locally stable vector field, followed by the definition of the other two axes by a 2.5D directional field (2D advected by the stable vector field).
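The necessary condition above can be sketched with signed axis permutations (a toy check, not our actual implementation): composing the inter-cell frame transitions along an elementary cycle around a singularity must yield a permutation that keeps the tangent axis exactly fixed.

```python
import numpy as np

def rot_z90():
    """Transition between adjacent cells: a 90-degree rotation about z,
    i.e. a signed permutation of the axes that keeps the z axis."""
    return np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])

def preserved_axes(P):
    """Indices of the coordinate axes mapped exactly to themselves."""
    I = np.eye(3, dtype=int)
    return [i for i in range(3) if np.array_equal(P[:, i], I[:, i])]

# Transitions met along an elementary cycle encircling a singularity curve
# tangent to the z axis; their composition must keep the z axis fixed.
cycle = [rot_z90(), np.eye(3, dtype=int), np.eye(3, dtype=int)]
P = np.eye(3, dtype=int)
for T in cycle:
    P = T @ P
```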

The grid combinatorial information is characterized by a set of integer coefficients whose values are currently determined through
numerical optimization of a geometric criterion: the shape of the hexahedra must be as close as possible to the steering direction field.
Thus, the number of layers of hexahedra between two surfaces is determined solely by the size of the hexahedra that one wishes to generate.
In this setting degenerate configurations arise easily, and we want to avoid them.
In practice, mixed integer solvers often choose to allocate a negative or zero number of layers of hexahedra between two constrained sheets (boundaries of the object, internal constraints or singularities).
We will study how to inject strict positivity constraints into these cases, which is a very complex problem because of the subtle interplay between different degrees of freedom of the system.
Our first results for quad-meshing of surfaces give promising leads, notably thanks to *motorcycle graphs* ,
a notion we wish to extend to volumes.

Optimization for the geometric criterion makes it possible to control the average size of the hexahedra, but it does not ensure the bijectivity (even locally) of the resulting parameterizations. Considering other criteria, as we did in 2D , would probably improve the robustness of the process. Our idea is to keep the geometry criterion to find the global topology, but try other criteria to improve the geometry.

All global parameterization approaches are decomposed into three steps: frame field generation, field integration to obtain a global parameterization, and final mesh extraction. Getting a full hexahedral mesh from a global parameterization requires it to have a positive Jacobian everywhere except on the frame field singularity graph. To our knowledge, there is no solution to ensure this property, but some efforts have been made to limit the proportion of failure cases. An alternative is to produce hexahedral-dominant meshes. Our position is in between those two points of view:

We want to produce full hexahedral meshes;

We consider it pragmatic to keep hexahedral-dominant meshes as a fallback solution.

The global parameterization approach yields impressive results on some geometric objects, which is encouraging, but not yet sufficient for numerical analysis. Note that while we attack the remeshing with our parameterizations toolset, the wish to improve the tool itself (as described above) is orthogonal to the effort we put into making the results usable by the industry. To go further, our idea (as opposed to , ) is that the global parameterization should not handle all the remeshing, but merely act as a guide to fill a large proportion of the domain with a simple structure; it must cooperate with other remeshing bricks, especially if we want to take final application constraints into account.

For each application we will take as input domains, sets of constraints and, possibly, fields (e.g. the magnetic field in a tokamak). Having established the criteria of mesh quality (per application!), we will incorporate this input into the mesh generation process, and then validate the mesh with numerical simulation software.

Numerical simulation is the main targeted application domain for the geometry processing tools that we develop. Our mesh generation tools will be tested and evaluated within the context of our cooperation with Hutchinson, experts in vibration control, fluid management and sealing system technologies. We think that the hex-dominant meshes that we generate have geometrical properties that make them suitable for some finite element analyses, especially for simulations with large deformations.

We also have a tight collaboration with Wan Chiu Li, a geophysical modeling specialist. He is creating a start-up company, MESHSPACE, whose goal is to transfer to industry the remeshing tools developed in the ALICE team. In particular, he uses hexahedral-dominant meshes for geomechanical simulations of gas and oil reservoirs. From a scientific point of view, this use case introduces new types of constraints (alignments with faults and horizons), and allows certain types of nonconformities that we did not consider until now.

Our cooperation with RhinoTerrain pursues the same goal: the reconstruction of buildings from point cloud scans makes it possible to perform 3D analyses and studies of insolation, floods and wave propagation, and the wind and noise simulations necessary for urban planning.

Dmitry Sokolov won the “best expertise” nomination of TechnoText 2019, the challenge for the best Russian-language IT-related text of 2019.
The award was given for his article on understandable ray tracing .

*VORPALINE mesh generator*

Keywords: 3D modeling - Unstructured heterogeneous meshes

Scientific Description: This software is the result of the team's work on the parameterization of surfaces and volumes, on the generation of Voronoi diagrams and mesh generation.

Functional Description: VORPALINE is a surface and volume mesh generator for simplicial meshes (triangles and tetrahedra) and for quad-dominant and hex-dominant meshes. It also contains surface and volume parameterization modules.

Release Functional Description: Computer vision algorithms allow us to reconstruct surfaces in a 3D scene. The colors associated with these surfaces can be stored in textures, but these are often incomplete due to a lack of reliable data. For example, some points on the surface are not present in the images, are insufficiently illuminated or, on the contrary, lie in a reflection that does not give the true color of the object. We have developed in Vorpaline an algorithm capable of generating these missing colors from those present in their vicinity. The originality of our approach is that the optimization is performed over neighborhoods defined on the surface and not in texture space.

Participants: Bruno Lévy, Dmitry Sokolov and Nicolas Ray

Contact: Bruno Lévy

When printing 3D objects with Fused Filament Fabrication technology, the plastic is deposited by following a 2D path to produce the first layer. Each following layer is printed with the same method on top of the previous layers. For technical reasons, it is convenient to use horizontal layers of constant height, but this generates aliasing errors that are especially visible (Figure , right) when the object's surface is close to horizontal. The objective of this project is to reduce these artefacts by printing curved layers (Figure , left). Printing curved layers is a challenging task because all technical aspects of printing have to be adapted to the curved case. The key idea of our approach is to (virtually) deform the object in such a way that the surface that is close to horizontal becomes exactly horizontal, then to define all the printing instructions (tool path, slicing, pressure, etc.) in this deformed space with standard algorithms. The final printing instructions are obtained by coming back to the original space. In collaboration with the MFX team, we have worked on the problem of finding the deformation by a global optimization method that tries to make large portions of the object's surface horizontal, under constraints of layer thickness, tool collisions, object self-intersections, etc. The results were published at SIGGRAPH this year .
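The deformation idea can be sketched with a toy height function (all names below are hypothetical; the actual method optimizes the deformation globally under fabrication constraints): deform space so the near-horizontal surface becomes flat, slice with ordinary horizontal planes there, and map the flat layers back.

```python
import numpy as np

def surface_offset(x, y):
    """Hypothetical height of the near-horizontal surface above its mean plane."""
    return 0.1 * np.sin(x) + 0.05 * y

# Deform space so the surface becomes flat: z' = z - s(x, y).
# Slicing is then done with ordinary horizontal planes in deformed space.
def flat_layers(z_max, layer_height):
    return np.arange(layer_height, z_max + 1e-9, layer_height)

# Mapping a flat layer z' = c back to the original space gives a curved
# layer z = c + s(x, y) that follows the surface, removing the aliasing.
def curved_layer(c, x, y):
    return c + surface_offset(x, y)

layers = flat_layers(z_max=1.0, layer_height=0.2)
```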

This work is done as part of an informal (soon to be formalized) collaboration between our team and CEA.
Many simulation codes require block-structured meshes.
This requires decomposing the geometric domain into a set of hexahedral blocks, each one being discretized by a regular grid.
Our approach to generate such structures is to generate global parameterizations.
Those methods give promising results in many cases, but still face many robustness issues.
To tackle those issues, we are currently working on a subset of those methods called Polycube deformation.
The idea is to deform our original domain into a polycube (an axis-aligned assembly of boxes), mesh the polycube with a regular grid, and map this mesh back onto the original domain.
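The first step of such a deformation is usually a labeling that assigns each boundary facet one of the six axis directions; a minimal sketch of this heuristic (simplified: real pipelines also smooth the labels and repair invalid configurations):

```python
import numpy as np

# The six possible facet labels: +x, -x, +y, -y, +z, -z.
AXES = np.array([[1, 0, 0], [-1, 0, 0],
                 [0, 1, 0], [0, -1, 0],
                 [0, 0, 1], [0, 0, -1]], dtype=float)

def polycube_label(normal):
    """Assign a boundary facet the axis direction closest to its normal
    (largest dot product): the facet's target orientation on the polycube."""
    return int(np.argmax(AXES @ normal))

# A facet tilted slightly off +z snaps to the +z label (index 4).
n = np.array([0.1, -0.05, 0.99]); n /= np.linalg.norm(n)
label = polycube_label(n)
```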

This work is done as part of an informal (soon to be formalized) collaboration between our team and RhinoTerrain. We have roof models in the form of surface meshes (Figure ). Our data are LIDAR point clouds. Based on this data and a roof model chosen by the user, we seek to optimize the position of the model so that it “best” matches the data. This optimization must comply with two constraints:

It is important to ignore possible outliers in the point cloud, such as parts that do not belong to the roof (trees, electrical wires, *etc.*) or should not be taken into account by the model (chimney, skylight, parabolic antenna, *etc.*);

The roof geometry is subject to certain constraints, such as the planarity of certain rectangular faces or the alignment of certain axes.

This work is an extension of the VSDM algorithm (*Voronoï Squared Distance Minimization*) developed by the team .
The idea is to optimize a well-chosen energy function whose global minimum corresponds to the desired position of the mesh.
The preliminary results are very promising, and we are preparing a publication.
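A toy illustration of the principle (a simplified nearest-point variant, purely for intuition; VSDM itself measures squared distances via Voronoi cells and handles the full pose, not just a translation): alternately assign each model vertex its nearest data point and apply the closed-form translation update.

```python
import numpy as np

def fit_translation(model, data, iters=10):
    """Toy stand-in for squared-distance minimization: find the translation
    of `model` minimizing the sum of squared distances to the nearest data
    points, alternating nearest-point assignment with the closed-form
    translation update."""
    t = np.zeros(model.shape[1])
    for _ in range(iters):
        d2 = ((data[:, None, :] - (model + t)[None, :, :]) ** 2).sum(-1)
        nearest = data[np.argmin(d2, axis=0)]   # nearest data point per vertex
        t = (nearest - model).mean(axis=0)      # closed-form optimum
    return t

# A 5x5 template grid and the same grid shifted: the shift is recovered.
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
model = np.column_stack([gx.ravel(), gy.ravel()])
data = model + np.array([0.08, -0.05])
t = fit_translation(model, data)
```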

*Company:* Polygonal Design

*Duration:* 01/02/2018 – 01/08/2020

*Participants:* Bruno Lévy and Laurent Alonso

*Amount:* 38k euros

*Abstract:* The goal of this project is to provide scientific and technical expertise to Polygonal Design. In particular this concerns the Unfold3d software, developed and marketed by the company.
This software is built on top of our algorithms developed in 2002–2006.

We coordinate a work package of the CPER CyberEntreprise 2017–2020 project.

*Program:* CPER (Contrat de Plan État Région)

*Project title:* Cyber-Entreprises

*Duration:* 01/07/2015 – 31/12/2020

*Participants:* Bruno Lévy, Dmitry Sokolov and Nicolas Ray

*Coordinator:* Emmanuel Thomé and Marc Jungers (CRAN)

The team has organized the “2nd Workshop of the Grand-Est in Computer Graphics and Virtual Reality” (50 participants).

Members of the team were IPC members for SPM and ISVC.

Members of the team were reviewers for Eurographics, SIGGRAPH, SIGGRAPH Asia, ISVC, Pacific Graphics, and SPM.

Members of the team were reviewers for Computer Aided Design (CAD), Journal of Computational Physics (Elsevier), Transactions on Visualization and Computer Graphics (IEEE), Transactions on Graphics (ACM), and Computers & Graphics (Elsevier).

Licence : Dobrina Boltcheva, Computer Graphics, 30h, 3A, IUT Saint-Dié-des-Vosges

Licence : Dobrina Boltcheva, Advanced Object Oriented Programming & UML, 60h, 2A, IUT Saint-Dié-des-Vosges

Licence : Dobrina Boltcheva, Advanced algorithmics, 50h, 2A, IUT Saint-Dié-des-Vosges

Licence : Dobrina Boltcheva, Image Processing, 30h, 2A, IUT Saint-Dié-des-Vosges

Licence : Dobrina Boltcheva, UML Modeling, 20h, 1A, IUT Saint-Dié-des-Vosges

Licence : Dobrina Boltcheva, Algorithmics, 30h, 3A, PolyTech Nancy

Licence : Dobrina Boltcheva, Computer Graphics, 12h, 5A, PolyTech Nancy

Licence : Dmitry Sokolov, Computer Graphics, 12h, 5A, PolyTech Nancy

Licence : Dmitry Sokolov, C++, 40h, 2A, University of Lorraine

Licence : Dmitry Sokolov, Programming, 30h, 1A, University of Lorraine

Licence : Dmitry Sokolov, Logic, 30h, 3A, University of Lorraine

Master : Dmitry Sokolov, Logic, 22h, M1, University of Lorraine

Master : Dmitry Sokolov, 3D printing, 12h, M2, University of Lorraine

Master : Dmitry Sokolov, Numerical modeling, 12h, M2, University of Lorraine

PhD in progress: Justine Basselin, *“Reconstruction of buildings from 3D point clouds”*, since December 2019, Dmitry Sokolov, Nicolas Ray and Hervé Barthélémy.

PhD in progress: François Protais, *“Polycube-dominant meshing”*, since October 2019, Dmitry Sokolov and Franck Ledoux.

Dmitry Sokolov participated in the PhD jury of Lucas Morlet as an examiner.

Dmitry Sokolov continues to develop TinyRenderer.