Numerical simulation has been booming over the last thirty years, thanks to increasingly powerful numerical methods, computer-aided design (CAD) and mesh generation for complex 3D geometries, and the advent of supercomputers (HPC). The discipline is now mature and has become an integral part of design in science and engineering applications. This new status has led scientists and engineers to consider the numerical simulation of problems with ever-increasing geometrical and physical complexity. A simple observation of the classical computational pipeline

shows that: no mesh = no simulation,
along with: "bad" mesh = wrong simulation.
We have concluded that the mesh is at the core of the classical computational pipeline and a key component for significant improvements.
The requirements on meshing methods are therefore ever increasing: they must produce, in an automated manner and with growing difficulty, high-quality meshes that enable reliable solution predictions.
These requirements on meshing or equivalent technologies cannot be removed, and all approaches face similar issues.

In this context, the Gamma team was created in 1996 and focused on the development of robust automated mesh generation methods in 3D, which was clearly a bottleneck at that time,
when most numerical simulations were 2D. The team has been very successful in tetrahedral meshing with the well-known software Ghs3d, which has been distributed worldwide, and in hexahedral meshing with the software Hexotic, which was the first automated full-hex mesher. The team has also worked on surface meshing with Yams and BLSurf and on visualization with Medit. Before Medit, 3D meshes could not be visualized in real time!

In 2010, the Gamma3 team replaced Gamma, with the choice to focus more on meshing for numerical simulation. The main goal was to emphasize and strengthen the link between meshing technologies and numerical methods (flow or structure solvers). The metric-based anisotropic mesh adaptation strategy has been very successful, with the development of
many error estimates, the generation of highly anisotropic meshes, its application to the compressible Euler and Navier-Stokes equations, and its extension to unsteady problems with moving geometries, leading to the development of several software packages:
Feflo.a/AMG-Lib, Wolf, Metrix, Wolf-Interpol. A significant accomplishment
was the high-fidelity prediction of the sonic boom emitted by supersonic aircraft.
We were the first to compute a certified aircraft sonic boom propagation in the atmosphere, thanks to mesh adaptation.
The team also started to work on parallelism, with the development of the multi-threaded library LPlib, the efficient management of memory using space-filling curves, and the generation of large meshes (a billion elements). Theoretical work on high-order meshes has also been carried out.

Today, numerical simulation is an integral part of design in engineering applications, with the main goal of reducing costs and speeding up the process of creating new designs. Four main issues for industry are: (i) generating meshes from complex CAD geometries, (ii) making every tool of the simulation pipeline HPC-compliant, (iii) controlling the cost of numerical simulations, and (iv) certifying errors and uncertainties in high-fidelity simulations.

Let us now discuss each of these issues in more detail.

Generating a discrete surface mesh from a CAD geometry definition has been the Achilles' heel of numerical simulation for the last 30 years. Significant issues are far too common, ranging from persistent translation issues between systems, which can produce ill-defined geometry definitions, to overwhelming complexity for full configurations with all their components. An ill-defined geometry definition often fails to capture the geometry's features and leads to a bad mesh and a broken simulation. Unfortunately, CAD system design is essentially decoupled from the needs of numerical simulation and is largely driven by those of manufacturing and other areas. As a result, this step of the numerical simulation pipeline is still labor-intensive and the most time-consuming. There is a need to develop alternative geometry processes and models that are more suitable for numerical simulation.

Companies working on high-tech projects with high added value (Boeing, Safran, Dassault-Aviation, Ariane Group, ...) consider their design pipeline
inside an HPC framework. Indeed, they perform complex numerical simulations on complex geometries on a daily basis, and they aim to use them in a shape-optimization loop. Therefore, any tool added to their numerical platform should be HPC-compliant. This means that all developments should consider hybrid parallelism, i.e., be compatible with both distributed-memory architectures (MPI) and shared-memory architectures (multi-threading),
to achieve scalable parallelism.

One of the main goals of numerical simulation is to reduce the cost of creating new designs (e.g., reducing the number of wind-tunnel and flight tests in the aircraft industry). The emergence of 3D printers is, in some cases, making tests easier to perform, faster and cheaper. It is thus mandatory to control the cost of numerical simulations; in other words, it is important to use fewer resources to achieve the same accuracy. The cost takes into account the engineer's time as well as the computing resources needed to perform the numerical simulation. The cost of one simulation can vary from 15 euros for simple models (1D-2D), to 150 euros for stationary Reynolds-averaged Navier-Stokes (3D) models, up to 15 000 euros for unsteady models like LES or Lattice-Boltzmann. It is important to know that a design loop is equivalent to performing between 100 and 1 000 numerical simulations. Consequently, the need for more efficient algorithms and processes remains a key factor.
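To make these orders of magnitude concrete, the following sketch multiplies the per-simulation figures quoted above by the 100 to 1 000 runs of a design loop. The figures are the illustrative ones from this report, not a costing model.

```python
# Rough cost envelope of a design loop, using the per-simulation figures
# quoted above (illustrative only; actual costs vary widely).
COST_PER_RUN = {
    "1D-2D model": 15,            # euros per simulation
    "RANS 3D steady": 150,
    "LES / Lattice-Boltzmann": 15_000,
}

def design_loop_cost(cost_per_run, n_min=100, n_max=1_000):
    """Return the (min, max) cost in euros of one design loop,
    assuming between n_min and n_max simulations per loop."""
    return n_min * cost_per_run, n_max * cost_per_run

for model, cost in COST_PER_RUN.items():
    lo, hi = design_loop_cost(cost)
    print(f"{model}: {lo:,} - {hi:,} euros per design loop")
```

An LES-based design loop thus lands in the 1.5 to 15 million euro range, which is why more efficient algorithms matter as much as raw computing power.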

Another crucial point is the verification and certification of errors and uncertainties in high-fidelity numerical simulations. These errors can come from several sources:

The error assessment and mesh generation procedure employed in the aerospace industry for CFD simulations relies heavily on the experience of the CFD user. The inadequacy of this practice, even for geometries frequently encountered in engineering practice, has been highlighted in studies of the AIAA CFD Drag Prediction Workshops and High-Lift Prediction Workshops. These studies suggest that the range of scales present in the turbulent flow cannot be adequately resolved using meshes generated following what are considered the best present practices. In this regard, anisotropic mesh adaptation is considered the future, as stated in the NASA report "CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences" and the study dedicated to mesh adaptation.

These concerns are at the core of the Gamma project's scientific program.
To address the first issue, Gamma will focus on designing and developing
a geometry modeling framework specifically intended for mesh generation and numerical
simulation purposes. This is a mandatory step for automated geometry-mesh
and mesh adaptation processes with an integrated geometry model.
To address the last three issues, the Gamma team will work on the development of a high-order mesh-adaptive solution platform compatible with
HPC environments. To this end, Gamma will pursue its work on advanced mesh generation methods, which should provide the following capabilities:

Gamma will also continue to work on robust flow solvers, solving the turbulent Navier-Stokes equations from second order, using a Finite Volume - Finite Element numerical scheme, to higher order, using the Flux Reconstruction (FR) method.

The combination of adaptation, high order and multi-elements, coupled with appropriate error estimates, is for the team the way forward to reduce the cost of numerical simulations while ensuring high fidelity in a fully automated framework.

The main axes are:

Our research in mesh generation, mesh adaptation and certification of the numerical simulation pipeline finds applications in several domains such as aviation and aerospace, but also in all fields where computation and simulation are used: fluid mechanics, solid mechanics, wave equations (acoustics, electromagnetism, ...), energy and biomedicine.

After the third volume of Meshing, Geometric Modeling and Numerical Simulation was published in 2020 (the books are also available in French), a glossary in French on meshing was written. From A to Z, this glossary provides definitions along with a number of comments. Moreover, it points out subtleties that are probably not so easy to grasp, even for readers familiar with the French language.

The whole library was completely rewritten to implement automatic finite-element shader generation, which converts simple user source code into an OpenCL source that is compiled on the GPU at run time.
The library handles all meshing data structures (file reading, renumbering and vectorizing for efficient access on the GPU, and transfer to the graphics card) automatically and transparently.
With this framework, the user can focus on the calculation part of the code, known as the kernel, as all the rest is taken care of by the library.
The OpenCL language was chosen as it is hardware-agnostic: it runs on any GPU (Intel, Nvidia and AMD) and can also use the multicore and vector capabilities of modern CPUs.

Julien Vanharen developed a basic heat solver using v3.0 of the library as a test case, so we could successfully validate the software with various boundary conditions, calculation schemes, unstructured meshes and different memory access patterns. Even with a basic calculation that does not stress the GPU's full power, we achieved a speedup of two orders of magnitude over a single CPU core and of one order of magnitude over a multithreaded implementation. As Julien moved to ONERA, we plan to set up a collaboration between the two teams to implement more complex HPC codes.

Work continued on P1toPk, a software package that transforms any first-order hybrid mesh (triangles, quads, tets, pyramids, prisms and hexes) into a second-order one while respecting a prescribed surface curvature.
Efforts were devoted to boundary-layer curving, which is challenging because Jacobian validity is harder to guarantee as the elements get highly stretched, and considerable effort also went into speeding up the code by optimizing mathematical operations and parallelizing them.
The code is now mature enough to be sent to industrial users for real-life usage, and we expect valuable feedback in the coming year.

Classic visualization software like ParaView, TecPlot, FieldView, Ensight, Medit, Vizir (legacy OpenGL version) and Gmsh historically relies on the display of linear triangles with linear solutions on them. More precisely, each element of the mesh is divided into a set of elementary triangles. A value and an associated color are attached to each vertex of an elementary triangle, and the value and color inside the triangle are then deduced by linear interpolation. With the rise of high-order methods and high-order meshes, these programs adapted their technology by using subdivision methods. If a mesh has high-order elements, these elements are subdivided into a set of linear triangles in order to approximate the shape of the high-order element. Likewise, if a mesh carries a high-order solution, each element is subdivided into smaller linear triangles in order to approximate the rendering of the high-order solution. The subdivision process can be very expensive if done naively. For this reason, mesh adaptation procedures are used to efficiently render high-order solutions and high-order elements with the standard linear rendering approaches. Even when optimized, these approaches have a large memory (RAM) footprint, as the subdivision is done on the CPU in a preprocessing step. Moreover, the adaptive subdivision process can depend on the palette (i.e., the range of values over which the solution is studied), as the color only varies when the associated value is in this range. In this case, a change of palette inevitably imposes a new adaptation process. Finally, the use of non-conforming mesh adaptation can lead to a discontinuous rendering of a continuous solution.
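The per-pixel linear interpolation underlying this classical rendering can be sketched with barycentric coordinates: each pixel's value (hence its color, via the palette) is a convex combination of the three vertex values. This is a generic illustration, not code from any of the packages named above.

```python
def barycentric_interpolate(tri, values, p):
    """Linearly interpolate the three vertex values at point p inside
    triangle tri, as legacy linear rendering does per pixel.

    tri: three (x, y) vertices; values: scalar attached to each vertex.
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2  # barycentric coordinates sum to 1
    return l1 * values[0] + l2 * values[1] + l3 * values[2]

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# At the centroid, the interpolated value is the mean of the vertex values.
print(barycentric_interpolate(tri, [0.0, 1.0, 2.0], (1 / 3, 1 / 3)))
```

A high-order solution rendered this way is only as accurate as the subdivision into such linear triangles, which is precisely the limitation discussed above.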

Other approaches are specifically devoted to high-order solutions and are based on ray casting. The idea is, for a given pixel, to find its exact color. To do so, for each
pixel, rays are cast from the screen position into physical space, and their intersection with the scene determines the color of the pixel. If high-order features are taken into account, the color is determined
exactly for this pixel. However, this method relies on two non-linear problems: root finding and the inversion of the geometrical mapping. These problems are very costly and cannot compete with the
interactivity of the standard linear rendering methods, even when the latter involve a subdivision process, unless they are performed jointly on the GPU. However, the synchronization between GPU computation and
OpenGL buffers is non-trivial.

The proposed method intends to be a good compromise between the two approaches. It guarantees pixel-exact rendering of linear elements without extra subdivision or ray casting, and it keeps the interactivity of a classical method. Moreover, the subdivision of the curved entities is done on the fly on the GPU, which keeps the RAM footprint at the size of the loaded mesh.

We are developing a software package, ViZiR 4, with exact non-linear solution rendering to address the high-order visualization challenge.
ViZiR 4 is bundled as a light, simple and interactive visualization software for high-order meshes and solutions. It is based on the OpenGL 4 core graphic pipeline. The use of the OpenGL Shading Language (GLSL) makes it possible to perform pixel-exact rendering of high-order solutions on straight elements and almost pixel-exact rendering on curved elements (high-order meshes). ViZiR 4 enables the representation of high-order meshes (up to degree 4) and high-order solutions (up to degree 10) with pixel-exact rendering. Furthermore, in comparison with standard rendering techniques based on legacy OpenGL, the use of the OpenGL 4 core version improves rendering speed, reduces the memory footprint and increases flexibility. Many post-processing tools, such as picking, surface hiding, isolines, clipping and capping, are integrated to enable on-the-fly analysis of the numerical results.

We added in ViZiR 4 the visualization of polygonal and polyhedral meshes.
Such meshes offer flexibility as the number of vertices and faces are arbitrary. Dual meshes are examples of such meshes.
Many visualization programs do not handle polygons, and when they do, interactivity is often limited.
We showed that our graphic pipeline can be used to render polygons.
New keywords have been introduced in the libMeshb library to define these polygons and polyhedra. Furthermore, some functions have been introduced to ease access to these data.
New tessellation algorithms have been developed that do not add extra vertices, as the aim is to minimize the number of triangles in order to maximize rendering performance.
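ViZiR's tessellation algorithms are not detailed here, but the stated constraint (no extra vertices, minimal triangle count) can be illustrated with the simplest such scheme: a fan triangulation, which splits a convex n-gon into exactly n-2 triangles using only the polygon's own vertices. This is a hypothetical minimal sketch, valid for convex polygons only.

```python
def fan_triangulate(polygon):
    """Tessellate a convex polygon into n-2 triangles without adding
    any extra vertex, by fanning from the first vertex.

    polygon: list of vertex indices (counter-clockwise);
    returns a list of index triples, one per triangle.
    """
    v0 = polygon[0]
    return [(v0, polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

# A pentagon with vertex indices 10..14 yields 3 triangles:
print(fan_triangulate([10, 11, 12, 13, 14]))
```

n-2 is a lower bound for triangulating an n-gon, so any scheme meeting it maximizes rendering performance in the sense described above; non-convex polygons require a more careful (e.g., ear-clipping) strategy.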

For years, numerical methods have solved Partial Differential Equations by means of a piecewise linear representation of the physical phenomenon on linear meshes. This choice was merely driven by computational limitations. With the increase in computational capabilities, it became possible to increase the polynomial order of the solution while keeping the mesh linear. This was motivated by the fact that, even if increasing the polynomial order requires more computational resources per solver iteration, it yields faster convergence of the approximation error and makes it possible to track unsteady features for a longer time, and with a coarser mesh, than with a linear approximation of the solution. However, it was theoretically shown that, for elliptic problems, the optimal convergence rate of a high-order method is obtained with a curved boundary of the same order, and evidence was given that, without a high-order representation of the boundary, the studied physical phenomenon is not solved exactly by a high-order method. It was even highlighted that, in some cases, the order of the mesh should be higher than that of the solver. In other words, if the mesh used is not a high-order mesh, then the obtained high-order solution will never reliably represent the physical phenomenon.

Based on these issues, the development of high-order mesh generation procedures appears mandatory. Several approaches exist to generate high-order meshes. The first approach was tackled twenty years ago for both surface and volume meshing; at that time, the idea was to use all the available meshing tools to get a valid high-order mesh. The same problem was revisited a few years later for bio-medical applications. In these first approaches and in all the following ones, the underlying idea is to use a linear mesh and elevate it to the desired order. Some make use of a PDE or variational approach to do so, while others are based on optimization and smoothing operations and start from a linear mesh with a constrained high-order curved boundary in order to generate a suitable high-order mesh. Also, when dealing with the Navier-Stokes equations, the question of generating curved boundary-layer meshes (also called viscous meshes) arises; most of the time, dedicated approaches are set up to deal with this problem. In all these techniques, the key feature is to find the best deformation to apply to the linear mesh and to optimize it. The prerequisite of these methods is that the initial boundary is curved and is used as input data. A natural question is consequently to study the optimal position of the high-order nodes on the curved boundary, starting from an initial linear or high-order boundary mesh. This can be done in a coupled way with the volume, or in a preprocessing phase. In this process, the position of the nodes is set by projection onto the CAD geometry or by minimizing an error between the surface mesh and the CAD surface. Note that the vertices of the boundary mesh can move as well during the process. In the case of an initial linear boundary mesh with no CAD geometry, approaches based on normal reconstructions can be used to create a surrogate for the CAD model.
Finally, a last question remains when dealing with such high-order meshes: given a set of degrees of freedom, is the definition of these objects always valid? Until recent work, no real approach was proposed to deal robustly with the validity of high-order elements. The novelty of these approaches was to see the geometrical elements and their Jacobian as Bézier entities. Based on the properties of the Bézier representation, the validity of the element can be concluded in a robust sense, whereas the other methods only used a sampling of the Jacobian to conclude on its sign, without any guarantee on the validity of the whole element.
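The key property behind this robust validity check can be sketched in one dimension: a polynomial written in the Bernstein (Bézier) basis is a convex combination of its control coefficients, so strictly positive coefficients certify positivity everywhere on the element, something no finite sampling of the Jacobian can guarantee. This is an illustrative sketch of the principle, not the team's actual elementwise algorithm.

```python
def bernstein_positive(coeffs):
    """Sufficient positivity test for a polynomial given by its Bezier
    (Bernstein) control coefficients on [0, 1]: since the polynomial is
    a convex combination of its coefficients, all coefficients > 0
    implies the polynomial > 0 everywhere. (The converse is false; in
    practice, degree elevation or subdivision refines the bound.)"""
    return all(c > 0 for c in coeffs)

def bezier_eval(coeffs, t):
    """Evaluate the Bezier polynomial at t by de Casteljau's algorithm."""
    pts = list(coeffs)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# A quadratic "Jacobian" with control coefficients (1.0, 0.2, 2.0):
coeffs = [1.0, 0.2, 2.0]
print(bernstein_positive(coeffs))  # certified positive on [0, 1]
print(min(bezier_eval(coeffs, i / 100) for i in range(101)))
```

Sampling (the second print) only inspects 101 points; the coefficient test certifies the sign over the whole interval, which is what makes the Bézier viewpoint robust for curved-element validity.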

In this context, several issues have been addressed: the analogy between high-order and Bézier elements, the development of high-order error estimates suitable for parametric high-order surface mesh generation, and the generalization of mesh optimization operators and their application to curved mesh generation, moving-mesh methods, boundary-layer mesh generation and mesh adaptation.

Metric fields are the link between particular error estimates (be they for low-order or high-order methods, for the solution of a PDE or a quantity of interest derived from it, such as drag or lift) and automatic mesh adaptation. In the case of linear meshes, a metric field locally distorts the measure of distance such that, when the mesh adaptation algorithm has constructed a uniform mesh in the induced Riemannian space, that mesh is strongly anisotropic in the usual Euclidean (physical) space. As such, anisotropy arises naturally, without ever being explicitly sought by the (re)meshing algorithm.
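The distortion of distance by a metric tensor can be made concrete: the length of an edge e under a symmetric positive-definite metric M is sqrt(e^T M e), and a "unit mesh" in the Riemannian sense has all edges of metric length close to 1. The sketch below illustrates this standard formula with a hypothetical diagonal metric.

```python
import math

def metric_edge_length(edge, metric):
    """Length of a 2D edge under a 2x2 symmetric metric tensor M:
    l_M(e) = sqrt(e^T M e). A mesh that is uniform (unit) in the
    metric space becomes anisotropic in Euclidean space whenever M
    prescribes different sizes in different directions."""
    ex, ey = edge
    (m11, m12), (_, m22) = metric
    return math.sqrt(m11 * ex * ex + 2.0 * m12 * ex * ey + m22 * ey * ey)

# Metric prescribing size h1 = 0.1 along x and h2 = 1.0 along y:
# M = diag(1/h1^2, 1/h2^2), i.e., a 10:1 anisotropy ratio.
M = [[100.0, 0.0], [0.0, 1.0]]
print(metric_edge_length((0.1, 0.0), M))  # short x-edge is "unit"
print(metric_edge_length((0.0, 1.0), M))  # long y-edge is also "unit"
```

Both edges have metric length 1 despite a factor-of-10 difference in Euclidean length, which is exactly how anisotropy emerges without being explicitly sought.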

We seek to extend these principles of metric-based mesh adaptation to high-order methods.

The metric field's own intrinsic curvature may derive from any error estimate, be it a boundary approximation error or an interpolation error estimate. So far, interpolation error estimates on high-order elements are limited to the isotropic case.

Robustness and modularity of the general remeshing algorithm may be derived from the use of a single topological operator, such as the cavity operator. This operator remeshes element subsets (so-called cavities) by reconnecting the cavity boundary nodes to a given point in space, already belonging to the mesh or not. This very elementary operation can handle the most common topological operations: insertions, collapses, and edge or face swaps. Therefore, it is central both to mesh adaptation (node creation, deletion) and to mesh optimization (mainly swaps).
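The reconnection step at the heart of the cavity operator reduces, in 2D, to forming one triangle per cavity boundary edge. The sketch below shows only that combinatorial core; the real operator also chooses the cavity, checks element validity and orientation, and works in 3D with boundary faces.

```python
def cavity_remesh(cavity_boundary_edges, new_point):
    """2D sketch of the cavity operator's reconnection step: the
    cavity's elements are removed, and every boundary edge (a, b) of
    the cavity is reconnected to a single point, producing one triangle
    per boundary edge. Insertion, collapse and swap are all instances
    of this one operation, differing only in how the cavity and the
    point are chosen."""
    return [(a, b, new_point) for (a, b) in cavity_boundary_edges]

# Insert new vertex 9 in a cavity whose boundary is the loop 0-1-2-3:
boundary = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(cavity_remesh(boundary, 9))  # four new triangles sharing vertex 9
```

Choosing `new_point` as an existing cavity vertex instead of a new one yields a collapse, which is why a single operator covers both adaptation and optimization.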


The scope of this work is to demonstrate the viability and efficiency of unstructured anisotropic mesh adaptation techniques for turbomachinery applications. The main difficulty in turbomachinery is the periodicity of the domain, which must be taken into account in the mesh-adaptive solution process. The periodicity is strongly enforced in the flow solver using ghost cells to minimize the impact on the source code. For the mesh adaptation, the local remeshing is done in two steps. First, the inner domain is remeshed with frozen periodic frontiers; second, the periodic surfaces are remeshed after moving geometric entities from one side of the domain to the other. One of the main goals of this work is to demonstrate how mesh adaptation, thanks to its automation, is able to generate meshes that are extremely difficult to envision and almost impossible to generate manually. This study only considers a feature-based error estimate, namely the standard multi-scale Lp interpolation error estimate. We present all the specific modifications that have been introduced in the adaptive process to deal with periodic simulations for turbomachinery applications. The periodic mesh adaptation strategy is then tested and validated on the LS89 high-pressure axial turbine vane and the NASA Rotor 37 test cases.

Due to their varied nature, the physical phenomena that we seek to capture in CFD simulations may have specific mesh requirements. For example, to resolve the boundary layer, some numerical schemes favor structured meshes aligned with the boundary of the domain, while these constraints are not necessary elsewhere. Our approach is to use metric-based mesh adaptation techniques to generate a mixed-element mesh that can fulfill these different requirements. This approach is based on metric-orthogonal point placement, creating structured parts from the intrinsic directional information borne by the metric field. Some unstructured areas may remain where structure is not needed. The main goals of this work are to improve the orthogonality of the output mesh and its alignment with the metric field. This work has three main axes. First, we have improved the preprocessing gradation step to smooth the metric field and improve the orthogonality of the final mesh. Then, we have studied two methods to obtain quadrilaterals: one using an a posteriori quadrilateral recombination, the other directly detecting the orthogonal patterns during the remeshing step. Finally, the work on the solver Wolf has been carried on and corrected to perform robust and accurate simulations on mixed-element meshes. These three developments were embodied in a mixed-element adaptation loop. The first two topics are detailed in what follows.

The previously described generation method highly relies on the metric field. However, a metric field computed from a solution during the adaptation process is most of the time quite messy and shows abrupt size variations. In standard mesh adaptation, it leads to low-quality elements. In orthogonal mesh adaptation, it additionally breaks the alignment and the structure of the output mesh. An additional step to smooth the input metric field is therefore required. In the context of mixed-element mesh adaptation, this gradation correction process has been modified to improve the number and the quality of the quadrilaterals in the final mesh. Further developments have been considered on this topic, in particular to increase the robustness of the method. Results have been published in .

Metric-orthogonal point placement is currently used to generate quasi-structured meshes with right-angled triangles where the metric is the most anisotropic and unit triangles elsewhere. The aim of this work is to recover some quadrilaterals in the structure. To do so, two approaches can be considered: an a posteriori quadrilateral recombination based on geometrical criteria, and an a priori quadrilateral detection. The latter is more straightforward because it uses the point-placement information directly. A framework was established to set up this method. Developments and preliminary results were presented in .

In order to obtain a correct metric field on hybrid meshes, a robust hybrid solver is mandatory. When dealing with 2D (3D) elements other than triangles (tetrahedra), the trickiest aspect is the gradient formulation. This is due to the fact that, within a Finite Element interpolation framework, the gradient on an element with more than three nodes (i.e., a non-simplicial element) is not element-wise constant. This adds many difficulties to the flux-balance computation. A first attempt at performing inviscid and laminar simulations on hybrid meshes approximated the gradients on quadrilaterals by their values at the iso-barycenter. The extension of this formulation to turbulent flows, however, highlighted a lack of efficiency and robustness. For this reason, an APFE (APproximated Finite-Element) method has been implemented and extended to an implicit time-integration scheme. This approach turns out to be very efficient and robust in many fully-structured mesh verification cases. The extension to 3D cases (prisms and pyramids) is ongoing.
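The non-constant gradient mentioned above is easy to exhibit on the reference Q1 quadrilateral: unlike P1 triangle shape functions, whose gradients are constant over the element, the bilinear shape-function gradients depend on the evaluation point. This generic sketch uses the standard Q1 basis on [0,1]^2, not the solver's actual APFE formulation.

```python
def bilinear_shape_gradient(xi, eta):
    """Gradients (d/dxi, d/deta) of the four Q1 shape functions on the
    reference quadrilateral [0,1]^2, with
    N1 = (1-xi)(1-eta), N2 = xi(1-eta), N3 = xi*eta, N4 = (1-xi)*eta.
    They depend on (xi, eta): the gradient is NOT element-wise
    constant, which complicates the flux-balance computation on quads."""
    return [(-(1.0 - eta), -(1.0 - xi)),
            ((1.0 - eta), -xi),
            (eta, xi),
            (-eta, (1.0 - xi))]

# The gradients differ from one corner to the opposite one:
print(bilinear_shape_gradient(0.0, 0.0))
print(bilinear_shape_gradient(1.0, 1.0))
```

Evaluating the gradients once at the iso-barycenter (xi = eta = 1/2), as in the first attempt described above, therefore discards the within-element variation, which helps explain the robustness issues observed for turbulent flows.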

When solving the RANS equations, one usually decouples the equations relative to the mean flow from those relative to turbulence. This division provides two separate systems to be solved at each time step, one for the mean flow and the other for the turbulence. It presents two main drawbacks. First, in the flow solver, the Jacobian of the system lacks the terms coupling the mean flow and the turbulence, which can slow down the residual convergence. The second drawback concerns the adjoint problem, which consists of a linear system assembled with the transpose of the Jacobian matrix of the residuals and, on the right-hand side, the derivative of an aeronautical coefficient with respect to the flow variables. A Jacobian missing the coupling terms between the mean flow and the turbulence yields a null adjoint turbulent viscosity, which is a limitation in the development of more complex discretization error estimates. We have therefore developed a 2D version of the coupled flow and adjoint solver which includes in the Jacobian the coupling terms between the mean flow and the turbulence. We have tested this method on the 2D geometry provided for the 4th AIAA CFD High Lift Prediction Workshop, and the results showed several features emerging, such as a high mesh refinement inside the boundary layer of the leading edges and inside the regions of high turbulence destruction. This work is pursued in collaboration with Philippe Spalart (Boeing).

Goal-oriented mesh adaptation is a methodology used to adapt the mesh in order to minimize the discretization error committed on a functional depending on the solution. As an intermediate step, one finds an upper bound on this discretization error taking the form of a weighted sum of interpolation errors of the solution; this upper bound is called the error estimate. Regarding Wolf and the RANS equations, up to now we have focused only on the mean-flow part of this error estimate, meaning that the terms coming from the turbulence have been neglected. The scope of this work is to enrich the error estimate with the information coming from the turbulence. In particular, the methodology has been tested on the 2D geometry provided for the 4th AIAA CFD High Lift Prediction Workshop, providing high mesh refinement on the boundaries of the turbulent regions. This work is also pursued in collaboration with Philippe Spalart (Boeing).

Ideal Mesh generation for modern solvers and comPuting ArchiteCTureS.