Numerical simulation has been booming over the last thirty years, thanks to increasingly powerful numerical methods, computer-aided design (CAD) and mesh generation for complex 3D geometries, and the advent of supercomputers (HPC). The discipline is now mature and has become an integral part of design in science and engineering applications. This new status has led scientists and engineers to consider the numerical simulation of problems of ever increasing geometrical and physical complexity. A simple observation of the classical computational pipeline shows: **no mesh = no simulation**, along with **"bad" mesh = wrong simulation**.
We have concluded that the mesh is at the core of the classical computational pipeline and a key component for significant improvements.
Therefore, meshing methods face an ever increasing demand, of ever increasing difficulty: to produce high-quality meshes that enable reliable solution predictions in a fully automated manner.
These requirements on meshing or equivalent technologies cannot be removed and all approaches face similar issues.

In this context, the `Gamma` team was created in 1996 and focused on the development of robust automated mesh generation methods in 3D, which was clearly a bottleneck at a time
when most numerical simulations were 2D. The team has been very successful in tetrahedral meshing with the well-known software `Ghs3d`, which has been distributed worldwide, and in hexahedral meshing with the software `Hexotic`, which was the first automated full-hexahedral mesher. The team has also worked on surface meshing with `Yams` and `BLSurf` and on visualization with `Medit`. Before `Medit`, we were unable to visualize 3D meshes in real time!

In 2010, the `Gamma3` team succeeded `Gamma`, with the choice to focus more on meshing for numerical simulation. The main goal was to emphasize and strengthen the link between meshing technologies and numerical methods (flow or structure solvers). The metric-based anisotropic mesh adaptation strategy has been very successful, with the development of
many error estimates, the generation of highly anisotropic meshes, its application to the compressible Euler and Navier-Stokes equations, and its extension to unsteady problems with moving geometries, leading to the development of several software packages:
`Feflo.a/AMG-Lib`, `Wolf`, `Metrix`, `Wolf-Interpol`. A significant accomplishment was the high-fidelity prediction of the sonic boom emitted by supersonic aircraft.
We were the first to compute a certified aircraft sonic boom propagation in the atmosphere, thanks to mesh adaptation.
The team also started working on parallelism, with the development of the multi-threaded library `LPlib`, efficient memory management using space-filling curves, and the generation of large meshes (a billion elements). Theoretical work on high-order meshes has also been carried out.

Today, numerical simulation is an integral part of design in engineering applications, with the main goal of reducing costs and speeding up the creation of new designs. Four main issues for industry are:

- Generation of a discrete surface mesh from a continuous CAD model, which is the last non-automated step of the design pipeline and, thus, the most human-time-consuming;
- High-performance computing (HPC) for all tools included in the design loop;
- The cost in euros of a numerical simulation;
- Certification of high-fidelity numerical simulations by controlling errors and uncertainties.

Let us now discuss each of these issues in more detail.

Generating a discrete surface mesh from a CAD geometry definition has been numerical analysis' Achilles' heel for the last 30 years. Significant issues are far too common, ranging from persistent translation issues between systems, which can produce ill-defined geometry definitions, to overwhelming complexity for full configurations with all their components. An ill-defined geometry definition often fails to capture the geometry's features exactly and leads to a bad mesh and a broken simulation. Unfortunately, CAD system design is essentially decoupled from the needs of numerical simulation and is largely driven by those of manufacturing and other areas. As a result, this step of the numerical simulation pipeline is still labor-intensive and the most time-consuming. There is a need to develop alternative geometry processes and models that are more suitable for numerical simulation.

Companies working on high-tech projects with high added value (Boeing, Safran, Dassault Aviation, Ariane Group, ...) consider their design pipeline
inside an HPC framework. Indeed, they perform complex numerical simulations on complex geometries on a daily basis, and they aim at using them in shape-optimization loops. Therefore, any tool added to their numerical platform should be HPC-compliant. This means that all developments should consider hybrid parallelism, *i.e.,* be compatible with both distributed-memory (MPI) and shared-memory (multi-threaded) architectures,
to achieve scalable parallelism.

One of the main goals of numerical simulation is to reduce the cost of creating new designs
(e.g., reducing the number of wind-tunnel and flight tests in the aircraft industry).
The emergence of 3D printers is, in some cases, making tests easier to perform, faster and cheaper.
It is thus mandatory to control the cost of numerical simulations; in other words, it is important to use fewer resources to achieve the same accuracy. This cost takes into account the engineer's time as well as the computing resources needed to perform the numerical simulation.
The cost of one simulation can vary from 15 euros for simple models (1D-2D), to 150 euros for stationary Reynolds-averaged Navier-Stokes (3D) models,
up to 15,000 euros for unsteady models such as LES or Lattice-Boltzmann.

Another crucial point is the checking and certification of errors and uncertainties in high-fidelity numerical simulations.
These errors can come from several sources: i) modeling error (for example via turbulence models or initial conditions), ii) discretization error (due to the mesh),
iii) geometry error (due to the representation of the design), and iv) implementation errors in the software considered. The error assessment and mesh generation procedures employed in the aerospace industry for CFD
simulations rely heavily on the experience of the CFD user. The inadequacy of this practice, even for geometries
frequently encountered in engineering practice, has been highlighted in studies of the AIAA.

These concerns are at the core of the Gamma project's scientific program.
To address the first issue, Gamma will focus on designing and developing
a geometry modeling framework specifically intended for mesh generation and numerical
simulation purposes. This is a mandatory step toward automated geometry-mesh and mesh adaptation processes with an integrated geometry model. To address the last three issues, the Gamma team will work on the development of a high-order mesh-adaptive solution platform compatible with
HPC environments. To this end, Gamma will pursue its work on advanced mesh generation methods, which should provide the following capabilities:

The combination of adaptation, high order, and multi-element meshes, coupled with appropriate error estimates, is, for the team, the way forward to reduce the cost of numerical simulations while ensuring high fidelity in a fully automated framework.

*Metrix: Error Estimates and Mesh Control for Anisotropic Mesh Adaptation*

Keywords: Meshing - Metric - Metric fields

Functional Description: Metrix is a software package that provides, in various ways, metrics to govern mesh generation. Generally, these metrics are constructed from error estimates (a priori or a posteriori) applied to the numerical solution. Metrix computes metric fields from scalar solutions by means of several error estimates: interpolation error, iso-line error estimate, interface error estimate and goal-oriented error estimate. It also contains several modules that handle meshes and metrics. For instance, it can extract the metric associated with a given mesh, and it performs metric operations such as metric gradation and metric intersection.
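As an illustration of metric intersection, a standard way to combine two metrics is simultaneous reduction of the two tensors: in every common eigendirection, the most restrictive (largest) size prescription is kept. The sketch below is purely illustrative (a minimal Python version written for this text, not code from Metrix; all names are ours):

```python
import numpy as np

def metric_intersection(m1, m2):
    """Intersect two SPD metric tensors by simultaneous reduction.

    The matrix N = M1^{-1} M2 is diagonalizable in a basis that
    diagonalizes both metrics; in each common eigendirection we keep
    the larger eigenvalue, i.e. the smaller prescribed mesh size.
    """
    n = np.linalg.solve(m1, m2)          # N = M1^{-1} M2
    _, p = np.linalg.eig(n)              # columns: common eigendirections
    # lengths of the common eigenvectors measured in each metric
    l1 = np.diag(p.T @ m1 @ p)
    l2 = np.diag(p.T @ m2 @ p)
    d = np.maximum(l1, l2)               # most restrictive size per direction
    p_inv = np.linalg.inv(p)
    return p_inv.T @ np.diag(d) @ p_inv  # reassemble the intersected metric
```

Note that the simultaneous-reduction formula degenerates when the two metrics are proportional (the common eigenbasis is then not unique), a case implementations typically detect and shortcut.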

Participants: Adrien Loseille and Frédéric Alauzet

Contact: Frédéric Alauzet

URL: https://

An important project, initiated in 2017 and still active in 2020, consists in writing down a book (in several volumes); the motivation for this work is detailed in what follows.

Why this book, why 2 volumes, why not 3 volumes?

Our last (general-purpose) book on meshing dates back to 2000, with an update in 2008. A colleague committed a new book in 2015, very well written but rather classical in its content, far from industrial concerns and (!) containing a few howlers (not enough experience with real problems).

Add to this my ease (this is P.L. G. speaking) in writing (well or badly, that is not the question; it is enough for me to hit a few keys on a keyboard), the desire of my (first) co-author to make a mark in the field, and the willingness (in spite of themselves) of the other co-authors to take part in this adventure. All this coupled with recent progress in the field (think of curved elements and high-order methods, but also of what HPC can mean in this area): all the ingredients are there, so off we go.

The first draft (a single volume) proved impossible to complete; it would have required at least 800 pages, hence two volumes at a minimum. Once the two volumes were finished, was there not room for a third? Observing with dismay that our students (but not only they) master many concepts well yet are unable to see, in practice, how to put them to work, the third volume appeared self-evident (and we will end up, in total, at around 1000 pages).

Who are these volumes for? Good question. This is not exactly light reading, but we have endeavored to take the unfortunate reader by the hand and lead them progressively toward (very) advanced concepts. Thus, the book is very verbose and is in no way a scholarly display of theorems and propositions, which does not prevent it from saying things plainly. Moreover, we have deliberately put a dose of subjectivity into the text, to suggest (and this can be contradicted) that such and such a method does not have our favor. Personally, I believe that, although rare in books, such opinions can only help readers form their own view on a given point.

The books are published by ISTE and written in French (indeed!), but an English translation is available from Wiley. The presence of the French language in the scientific literature seems important to me (and is in line with my (our) publisher's policy). To conclude, it is rather satisfying to think that these books (perhaps destined to become the reference on the subject) come from Inria in the "nine-one" (département 91).

Classic visualization software packages like ParaView, TecPlot, FieldView, Ensight, Medit, Vizir (legacy-OpenGL version), Gmsh, ... historically rely on the display of linear triangles with linear solutions on them. More precisely, each element of the mesh is divided into a set of elementary triangles. A value and an associated color are attached to each vertex of an elementary triangle, and the value and color inside the triangle are then deduced by linear interpolation. With the
rise of high-order methods and high-order meshes, these packages adapted their technology by using subdivision methods. If a mesh has high-order elements, these elements are subdivided into a set of linear
triangles in order to approximate the shape of the high-order element. Likewise, if a mesh carries a high-order solution, each element is subdivided into smaller linear triangles in order to
approximate the rendering of the high-order solution. The subdivision process can be really expensive if done naively. For this reason, mesh adaptation procedures
are used to efficiently render high-order solutions and high-order elements with standard linear rendering approaches. Even when optimized, these approaches have a
huge RAM footprint, as the subdivision is done on the CPU in a preprocessing step. Moreover, the adaptive subdivision process can depend on the palette (*i.e.*, the range of values over which the solution is studied),
as the color only varies when the associated value lies in this range. In this case, a change of palette inevitably imposes a new adaptation process. Finally, the use of non-conforming mesh adaptation can lead to a
discontinuous rendering of a continuous solution.
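The linear shading described above can be sketched in a few lines. The following Python fragment (purely illustrative, not taken from any of the packages cited) computes the interpolated value at a point of a triangle from its vertex values via barycentric coordinates:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    t = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l1, l2 = np.linalg.solve(t, np.asarray(p, float) - np.asarray(a, float))
    return 1.0 - l1 - l2, l1, l2

def shade(p, tri, values):
    """Linearly interpolated value at p from the triangle's vertex values."""
    w = barycentric(p, *tri)
    return sum(wi * vi for wi, vi in zip(w, values))
```

Mapping the resulting value through the palette then gives the pixel color; this per-fragment interpolation is essentially what the fixed-function graphics pipeline performs.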

Other approaches are specifically devoted to high-order solutions and are based on ray casting. The idea is, for a given pixel, to find its exact color. To do so, for each
pixel, rays are cast from the screen position into the physical space, and their intersection with the scene determines the pixel's color. If high-order features are taken into account, the color is determined
exactly for this pixel. However, this method is based on two non-linear problems: the root-finding problem and the inversion of the geometrical mapping. These problems are really costly and cannot compete with the
interactivity of the standard linear rendering methods, even when the latter are called with a subdivision process, unless they are performed jointly on the GPU. However, synchronization between the GPU and `OpenGL` buffers is
non-trivial.

The proposed method intends to be a good compromise between both approaches. It guarantees pixel-exact rendering on linear elements without extra subdivision or ray casting, and it keeps the interactivity of a classical method. Moreover, the subdivision of curved entities is done on the fly on the GPU, which keeps the RAM footprint at the size of the loaded mesh.

For years, numerical simulation has consisted in solving Partial Differential Equations by means of a piecewise-linear representation of the physical phenomenon on linear meshes. This choice was mainly
driven by computational limitations. With the increase of computational capabilities, it became possible to increase the polynomial order of the solution while keeping the mesh linear. This was motivated by the fact
that even if increasing the polynomial order requires more computational resources per solver iteration, it yields a faster convergence of the approximation error.
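This faster convergence can be made concrete with a small numerical experiment (an illustrative Python sketch written for this text: we measure the piecewise degree-p Lagrange interpolation error of sin on [0, 1] and estimate the observed order when the mesh size is halved, which should be about p + 1 for a smooth function):

```python
import numpy as np

def interp_error(f, p, n):
    """Max error of piecewise degree-p Lagrange interpolation of f
    on [0, 1] split into n equal elements (fit in local coordinates
    for better conditioning)."""
    err = 0.0
    h = 1.0 / n
    for k in range(n):
        a = k * h
        t = np.linspace(0.0, h, p + 1)        # p + 1 nodes: exact interpolant
        coeffs = np.polyfit(t, f(a + t), p)
        x = np.linspace(0.0, h, 50)           # sample the element
        err = max(err, np.max(np.abs(np.polyval(coeffs, x) - f(a + x))))
    return err

# Observed convergence order when halving h: expect roughly p + 1
orders = {p: np.log2(interp_error(np.sin, p, 8) / interp_error(np.sin, p, 16))
          for p in (1, 2, 3)}
```

Doubling the resolution divides the error by about 2^(p+1), which is why raising the order pays off despite the higher per-iteration cost.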

Based on these issues, the development of high-order mesh generation procedures appears mandatory. To generate high-order meshes, several approaches exist. The first approach was tackled twenty years ago for both surface and volume meshing. At that time, the idea was to use all the available meshing tools to get a valid high-order mesh. The same problem was revisited a few years later for bio-medical applications. In these first approaches and in all the following ones, the underlying idea is to start from a linear mesh and elevate it to the desired order. Some methods use a PDE or variational approach to do so, while others are based on optimization and smoothing operations and start from a linear mesh with a constrained high-order curved boundary in order to generate a suitable high-order mesh. Also, when dealing with the Navier-Stokes equations, the question of generating curved boundary layer meshes (also called viscous meshes) arises. Most of the time, dedicated approaches are set up to deal with this problem. In all these techniques, the key feature is to find the best deformation to apply to the linear mesh and to optimize it. The prerequisite of these methods is that the initial boundary is curved and is used as input data. A natural question is consequently to study the optimal position of the high-order nodes on the curved boundary, starting from an initial linear or high-order boundary mesh. This can be done in a coupled way with the volume, or in a preprocessing phase. In this process, the position of the nodes is set by projection onto the CAD geometry or by minimization of an error between the surface mesh and the CAD surface. Note that the vertices of the boundary mesh can move as well during the process. In the case of an initial linear boundary mesh with no CAD geometry available, approaches based on normal reconstruction can be used to create a surrogate for the CAD model.
Finally, a last question remains when dealing with such high-order meshes: given a set of degrees of freedom, is the definition of these objects always valid? Until the work presented in the cited references, no real approach had been proposed to deal robustly with the validity of high-order elements. The novelty of these approaches was to see the geometrical elements and their Jacobian as Bézier entities. Based on the properties of the Bézier representation, the validity of the element is concluded in a robust sense, while other methods only used a sampling of the Jacobian to conclude about its sign, without any guarantee on the validity of the whole element.
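The Bézier-based validity argument can be illustrated in its simplest one-dimensional form (a hypothetical Python sketch of ours, not the cited implementations): writing a Jacobian-like polynomial in the Bernstein basis yields a sufficient positivity certificate, since a Bernstein polynomial with positive control coefficients is positive on [0, 1], whereas pointwise sampling can miss sign changes.

```python
from math import comb

def bernstein_coeffs_from_power(power):
    """Convert power-basis coefficients a_k (p(t) = sum a_k t^k)
    into Bernstein coefficients b_i on [0, 1]:
    b_i = sum_{k<=i} C(i,k)/C(n,k) a_k."""
    n = len(power) - 1
    return [sum(comb(i, k) / comb(n, k) * power[k] for k in range(i + 1))
            for i in range(n + 1)]

def certified_positive(power):
    """Sufficient (conservative) test: if every Bernstein control
    coefficient is positive, then p(t) > 0 for all t in [0, 1]."""
    return all(b > 0 for b in bernstein_coeffs_from_power(power))
```

The test is sufficient but not necessary: a polynomial can be positive on [0, 1] while some Bernstein coefficients are negative; the cited works refine the certificate, e.g. by degree elevation or subdivision of the Bézier representation.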

In this context, several issues have been addressed: the analogy between high-order and Bézier elements, the development of high-order error estimates suitable for parametric high-order surface mesh generation, and the generalization of mesh optimization operators and their application to curved mesh generation, moving-mesh methods, boundary layer mesh generation and mesh adaptation.

The scope of this paper is to demonstrate the viability and efficiency of unstructured anisotropic mesh adaptation techniques for turbomachinery applications. The main difficulty in turbomachinery is the periodicity of the domain, which must be taken into account in the solution mesh-adaptive process. The periodicity is strongly enforced in the flow solver using ghost cells to minimize the impact on the source code. For the mesh adaptation, the local remeshing is done in two steps. First, the inner domain is remeshed with frozen periodic frontiers; second, the periodic surfaces are remeshed after moving geometric entities from one side of the domain to the other. One of the main goals of this work is to demonstrate how mesh adaptation, thanks to its automation, is able to generate meshes that are extremely difficult to envision and almost impossible to generate manually. This study only considers feature-based error estimates based on the standard multi-scale L^p interpolation error estimate. We present all the specific modifications that have been introduced in the adaptive process to deal with the periodic simulations used in turbomachinery applications. The periodic mesh adaptation strategy is then tested and validated on the LS89 high-pressure axial turbine vane and the NASA Rotor 37 test cases.


The aim of mesh adaptation is to generate the optimal mesh to perform a specific numerical simulation. It is nowadays a mature tool, mathematically well-posed and fully automatic for tetrahedral meshes.
Yet, there is still a strong demand for structured meshes, as many numerical schemes have proven to be more accurate on quadrilateral meshes than on triangular meshes, and as many favor structured elements over tetrahedra in the
boundary layer to simulate viscous turbulent flows. Since no method can automatically provide purely hexahedral adapted meshes respecting alignment constraints, one solution is to use hybrid meshes,
*i.e.* meshes containing both structured and unstructured elements. Accordingly, the following work focuses on hybrid metric-based mesh adaptation and CFD simulation on such meshes. Regarding hybrid mesh
generation, the method relies on a preliminary mesh obtained through the so-called metric-aligned and metric-orthogonal approaches. These approaches use the directional information held by a prescribed metric field to
generate right-angled elements, which can be combined into structured elements to form a hybrid mesh. The result highly depends on the quality of the metric field; thus, emphasis is put on the size gradation control
performed beforehand. This process is re-designed to favor metric-orthogonal meshes. To validate the method, CFD simulations are performed, and the modifications brought to the existing Finite Volume solver to enable
such computations have been developed.


A new strategy for mesh adaptation dealing with Fluid-Structure Interaction (FSI) problems is presented using
a partitioned approach. The Euler equations are solved by an edge-based Finite Volume solver whereas
the linear elasticity equations are solved by the Finite Element Method using the Lagrange

Boeing

Safran Tech

DGA RAPID project

Ideal Mesh generation for modern solvers and comPuting ArchiteCTureS.

Coordinator: Adrien Loseille

The rapid improvement of computer hardware and physical simulation capabilities has revolutionized science and engineering, placing computational simulation on an equal footing with theoretical analysis and physical experimentation. This rapidly increasing reliance on predictive capabilities has created the need for rigorous control of the numerical errors that strongly impact these predictions. Such rigorous control of the numerical error can only be achieved through mesh adaptivity. In this context, the role of mesh adaptation is prominent, as the quality of the mesh, its refinement, and its alignment with the physics are major contributors to these numerical errors. The IMPACTS project aims at pushing the envelope in mesh adaptation in the context of large-size, very high fidelity simulations by proposing a new adaptive mesh generation framework. This framework will be based on new theoretical developments on Riemannian metric fields and on innovative algorithmic developments coupling a unique cavity operator with an advancing-point technique in order to produce high-quality hybrid, curved and adapted meshes.