Numerical simulation has boomed over the last thirty years, thanks to increasingly powerful numerical methods, computer-aided design (CAD) and mesh generation for complex 3D geometries, and the advent of high-performance computing (HPC). The discipline is now mature and has become an integral part of design in science and engineering applications. This new status has led scientists and engineers to consider the numerical simulation of problems of ever increasing geometrical and physical complexity. A simple observation summarizes the situation:
shows: no mesh = no simulation
along with "bad" mesh = wrong simulation.
We have concluded that the mesh is at the core of the classical computational pipeline and a key component for significant improvements.
Therefore, meshing methods face an ever increasing demand, with ever increasing difficulty, to produce high-quality meshes that enable reliable solution predictions in an automated manner.
These requirements on meshing or equivalent technologies cannot be removed, and all approaches face similar issues.
In this context, the Gamma team was created in 1996 and focused on the development of robust automated mesh generation methods in 3D, which was clearly a bottleneck at that time,
when most numerical simulations were 2D. The team has been very successful in tetrahedral meshing with the well-known software Ghs3d 28, 29, which has been distributed worldwide, and in hexahedral meshing with the software Hexotic 49, 50, which was the first automated full-hex mesher. The team has also worked on surface meshing with Yams 20 and BLSurf 12, and on visualization with Medit. Before Medit, we were unable to visualize 3D meshes in real time!
In 2010, the Gamma3 team replaced Gamma, with the choice to focus more on meshing for numerical simulation. The main goal was to emphasize and strengthen the link between meshing technologies and numerical methods (flow or structure solvers). The metric-based anisotropic mesh adaptation strategy has been very successful, with the development of many error estimates, the generation of highly anisotropic meshes, its application to the compressible Euler and Navier-Stokes equations 5, and its extension to unsteady problems with moving geometries 8, leading to several software packages:
Feflo.a/AMG-Lib, Wolf, Metrix, Wolf-Interpol. A significant accomplishment
was the high-fidelity prediction of the sonic boom emitted by supersonic aircraft 6.
We were the first to compute a certified aircraft sonic boom propagation in the atmosphere, thanks to mesh adaptation.
The team has also started to work on parallelism, with the development of the multi-threaded library LPlib, efficient memory management using space-filling curves, and the generation of large meshes (a billion elements) 43. Theoretical work on high-order meshes has also been done 27.
Today, numerical simulation is an integral part of design in engineering applications, with the main goal of reducing costs and speeding up the process of creating new designs. Four main issues for industry are:
Let us now discuss each of these issues in more detail.
Generating a discrete surface mesh from a CAD geometry definition has been the Achilles' heel of numerical analysis for the last 30 years. Significant issues are far too common, ranging from persistent translation issues between systems, which can produce ill-defined geometry definitions, to the overwhelming complexity of full configurations with all their components. An ill-defined geometry definition often fails to capture the geometry's features and leads to a bad mesh and a broken simulation. Unfortunately, CAD system design is essentially decoupled from the needs of numerical simulation and is largely driven by those of manufacturing and other areas. As a result, this step of the numerical simulation pipeline remains labor intensive and the most time consuming. There is a need to develop alternative geometry processes and models that are more suitable for numerical simulation.
Companies working on high-tech projects with high added value (Boeing, Safran, Dassault-Aviation, Ariane Group, ...) consider their design pipeline
inside an HPC framework. Indeed, they perform complex numerical simulations on complex geometries on a daily basis, and they aim at using these simulations in a shape-optimization loop. Therefore, any tool added to their numerical platform should be HPC compliant. This means that all developments should consider hybrid parallelism, i.e., be compatible with distributed memory architectures (MPI) and shared memory architectures (multi-threading),
to achieve scalable parallelism.
One of the main goals of numerical simulation is to reduce the cost of creating new designs (e.g., reducing the number of wind-tunnel and flight tests in the aircraft industry). The emergence of 3D printers is, in some cases, making physical tests easier, faster and cheaper to perform. It is thus mandatory to control the cost of numerical simulations; in other words, it is important to use fewer resources to achieve the same accuracy. This cost takes into account the engineer's time as well as the computing resources needed to perform the simulation. The cost of one simulation can vary from 15 euros for simple models (1D-2D), to 150 euros for stationary Reynolds-averaged Navier-Stokes (3D) models, up to 15 000 euros for unsteady models like LES or Lattice-Boltzmann 1. Note that one design loop is equivalent to performing between 100 and 1 000 numerical simulations. Consequently, the need for more efficient algorithms and processes remains a key factor.
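As a back-of-the-envelope illustration of these figures, the sketch below (the cost table and function name are ours, built only from the per-run prices and loop sizes quoted above) shows why a full design loop quickly becomes expensive:

```python
# Rough design-campaign cost estimate from the per-run figures quoted above
# (15 / 150 / 15 000 euros) and a loop of 100 to 1 000 simulations.
# Illustrative only; the labels and helper are hypothetical.

COST_PER_RUN = {"1D-2D": 15.0, "RANS-3D": 150.0, "LES": 15_000.0}

def campaign_cost(model: str, n_runs: int) -> float:
    """Total cost in euros for one design loop of n_runs simulations."""
    return COST_PER_RUN[model] * n_runs

# A 1 000-run RANS loop already costs 150 000 euros, and an LES loop
# of only 100 runs costs 1.5 million euros, which is why per-run
# efficiency gains matter so much.
```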
Another crucial point is the checking and certification of errors and uncertainties in high-fidelity numerical simulations. These errors can come from several sources:
The error assessment and mesh generation procedures employed in the aerospace industry for CFD simulations rely heavily on the experience of the CFD user. The inadequacy of this practice, even for geometries frequently encountered in engineering, has been highlighted in studies of the AIAA 2 CFD Drag Prediction Workshops 53 and High-Lift Prediction Workshops 68, 67. These studies suggest that the range of scales present in the turbulent flow cannot be adequately resolved using meshes generated following what is considered best present practice. In this regard, anisotropic mesh adaptation is considered the future, as stated in the NASA report "CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences" 70 and the study dedicated to mesh adaptation 59.
These preoccupations are the core of the Gamma project scientific program.
To answer the first issue, Gamma will focus on designing and developing
a geometry modeling framework specifically intended for mesh generation and numerical
simulation purposes. This is a mandatory step for automated geometry-mesh
and mesh adaptation processes with an integrated geometry model.
To answer the last three issues, the Gamma team will work on the development of a high-order mesh-adaptive solution platform compatible with an HPC environment. To this end, Gamma will pursue its work on advanced mesh generation methods, which should provide the following capabilities:
Gamma will also continue to work on robust flow solvers, solving the turbulent Navier-Stokes equations from second order, using a Finite Volume - Finite Element numerical scheme, to higher order, using the Flux Reconstruction (FR) method.
The combination of adaptation, high order and multi-element meshes, coupled with appropriate error estimates, is, for the team, the way to reduce the cost of numerical simulations while ensuring high fidelity in a fully automated framework.
The main axes are:
Applied Mathematics, Computation and Simulation.
Input: a triangulated surface mesh and an optional size map to control the size of inner elements.
Output: a fully hexahedral mesh (no hybrid elements), valid (no negative Jacobian) and conformal (no dangling nodes), whose surface matches the input geometry.
The software is a simple command line tool that requires no knowledge of meshing. Its arguments are an input mesh and some optional parameters to control element sizing, curvature and subdomains, as well as features like boundary layer generation.
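A common necessary (though not sufficient) validity test for such trilinear hexahedra checks the sign of the Jacobian at the eight corners. The sketch below is a hypothetical Python illustration of that idea, not Hexotic's actual check:

```python
import numpy as np

# For each corner (VTK hexahedron vertex ordering), the indices of its
# three edge neighbours, ordered so the unit cube gives +1 everywhere.
ADJ = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
       (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]

def corner_jacobians(pts):
    """Signed Jacobian determinant at each corner of a trilinear hex."""
    pts = np.asarray(pts, dtype=float)
    jac = np.empty(8)
    for c, (a, b, d) in enumerate(ADJ):
        # Determinant of the three edge vectors leaving corner c.
        e = np.stack([pts[a] - pts[c], pts[b] - pts[c], pts[d] - pts[c]])
        jac[c] = np.linalg.det(e)
    return jac

def looks_valid(pts):
    """Necessary (not sufficient) validity test: all corner Jacobians > 0."""
    return bool(np.all(corner_jacobians(pts) > 0.0))
```

On the unit cube all eight corner Jacobians equal 1; swapping two vertices makes some of them negative, flagging the element as invalid.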
The third volume on Meshing, Geometric Modeling and Numerical Simulation 3 was published in 2020 (see Fig. 1), after 11, 25. These books have also been written in French (see Fig. 2) 10, 24, 23.
The whole library was completely rewritten to implement automatic finite-element shader generation, which converts simple user source code into an OpenCL source that is compiled on the GPU at run time.
The library handles all meshing data structures: file reading, renumbering and vectorizing for efficient GPU access, and transfer to the graphics card, all automatically and transparently.
With this framework, the user can focus on the calculation part of the code, known as the kernel, as the rest is taken care of by the library.
The OpenCL language was chosen as it is hardware agnostic: it runs on any GPU (Intel, Nvidia and AMD) and can also use the multicore and vector capabilities of modern CPUs.
Julien Vanharen developed a basic heat solver using v3.0 as a test case, which allowed us to successfully validate the software with various boundary conditions, calculation schemes, unstructured meshes and memory access patterns. Even with a basic calculation that does not stress the GPU's full power, we achieved a speed-up of two orders of magnitude over a single CPU core and one order of magnitude over a multithreaded implementation. As Julien moved to ONERA, we plan to set up a collaboration between the two teams to implement more complex HPC codes.
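The run-time generation idea can be illustrated by a minimal sketch: a user-supplied per-node statement is templated into an OpenCL kernel source string, which would then be handed to the OpenCL compiler (e.g. via clCreateProgramWithSource / clBuildProgram). The template and the name gen_kernel are hypothetical and hugely simplified compared to what the library actually does:

```python
# Minimal sketch of the "user code -> OpenCL kernel" idea described above.
# The template and names are hypothetical illustrations of run-time source
# generation; the real library also handles renumbering, vectorization
# and data transfers.

KERNEL_TEMPLATE = """
__kernel void fe_loop(__global const double *u,
                      __global double *r,
                      const int n)
{{
    int i = get_global_id(0);
    if (i < n) {{
        {body}
    }}
}}
"""

def gen_kernel(user_body: str) -> str:
    """Embed a user-supplied per-node statement into an OpenCL kernel."""
    return KERNEL_TEMPLATE.format(body=user_body)

src = gen_kernel("r[i] = 2.0 * u[i];")
# 'src' would then be passed to clCreateProgramWithSource and built
# on the GPU at run time.
```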
Classic visualization software packages like ParaView 34, TecPlot 35, FieldView 40, Ensight 33, Medit 21, Vizir (legacy OpenGL version) 46, Gmsh 30, ... historically rely on the display of linear triangles with linear solutions on them. More precisely, each element of the mesh is divided into a set of elementary triangles. A value and an associated color are attached to each vertex of an elementary triangle, and the value and color inside the triangle are then deduced by linear interpolation. With the rise of high-order methods and high-order meshes, these packages adapted their technology by using subdivision methods. If a mesh has high-order elements, these elements are subdivided into a set of linear triangles in order to approximate the shape of the high-order element 75. Likewise, if a mesh carries a high-order solution, each element is subdivided into smaller linear triangles in order to approximate the rendering of the high-order solution. The subdivision process can be really expensive if done naively. For this reason, mesh adaptation procedures 62, 51, 52 are used to efficiently render high-order solutions and high-order elements with standard linear rendering approaches. Even when optimized, these approaches have a large RAM footprint, as the subdivision is done on the CPU in a preprocessing step. Moreover, the adaptive subdivision process can depend on the palette (i.e., the range of values in which the solution is studied), as the color only varies when the associated value lies in this range. In this case, a change of palette inevitably imposes a new adaptation process. Finally, the use of non-conforming mesh adaptation can lead to a discontinuous rendering of a continuous solution.
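The linear rendering baseline described above amounts to barycentric interpolation of vertex values inside each elementary triangle. A minimal sketch (helper names are ours):

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p in a 2D triangle tri ((3, 2))."""
    a, b, c = np.asarray(tri, dtype=float)
    T = np.column_stack([b - a, c - a])
    l1, l2 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    return np.array([1.0 - l1 - l2, l1, l2])

def interp_value(p, tri, vertex_values):
    """Linear interpolation of the three vertex values at p, exactly as
    done per elementary triangle in flat (linear) rendering."""
    return float(barycentric(p, tri) @ np.asarray(vertex_values, dtype=float))
```

At the centroid the interpolated value is simply the mean of the three vertex values; colors are interpolated the same way, component by component.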
Other approaches are specifically devoted to high-order solutions and are based on ray casting 57, 58, 60. The idea is, for a given pixel, to find its exact color. To do so, for each pixel, rays are cast from the screen position into physical space, and their intersection with the scene determines the color of the pixel. If high-order features are taken into account, the color is determined exactly for this pixel. However, this method relies on two non-linear problems: a root-finding problem and the inversion of the geometrical mapping. These problems are really costly and cannot compete with the interactivity of standard linear rendering methods, even when the latter use a subdivision process, unless they are solved conjointly on the GPU. However, the synchronization between the GPU computation and the OpenGL buffers is non-trivial.
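The second non-linear problem, inverting the geometrical mapping, is typically solved with a Newton iteration. A one-dimensional sketch on a quadratic Bézier mapping (the 1D setting and names are illustrative only; the real problem is multi-dimensional and done per pixel):

```python
def bezier2(x0, x1, x2, t):
    """Quadratic Bezier curve: a degree-2 geometric mapping in 1D."""
    return (1 - t) ** 2 * x0 + 2 * t * (1 - t) * x1 + t ** 2 * x2

def bezier2_deriv(x0, x1, x2, t):
    """Derivative of the quadratic Bezier mapping."""
    return 2 * (1 - t) * (x1 - x0) + 2 * t * (x2 - x1)

def invert_mapping(x0, x1, x2, x_target, t0=0.5, tol=1e-12, maxit=50):
    """Newton iteration for the reference coordinate t with x(t) = x_target."""
    t = t0
    for _ in range(maxit):
        f = bezier2(x0, x1, x2, t) - x_target
        if abs(f) < tol:
            break
        t -= f / bezier2_deriv(x0, x1, x2, t)
    return t
```

Each ray-element intersection requires such an inversion, which is why doing it per pixel on the CPU is far too slow for interactive use.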
The proposed method intends to be a good compromise between the two. It guarantees pixel-exact rendering of linear elements without extra subdivision or ray casting, and it keeps the interactivity of a classical method. Moreover, the subdivision of curved entities is done on the fly on the GPU, which keeps the RAM footprint at the size of the loaded mesh.
We are developing a software package, ViZiR 4, with exact non-linear solution rendering to address the high-order visualization challenge 2.
ViZiR 4 is bundled as a light, simple and interactive visualization software package for high-order meshes and solutions. It is based on the OpenGL 4 core graphic pipeline. The use of the OpenGL Shading Language (GLSL) makes it possible to perform pixel-exact rendering of high-order solutions on straight elements and almost pixel-exact rendering on curved elements (high-order meshes). ViZiR 4 enables the representation of high-order meshes (up to degree 4) and high-order solutions (up to degree 10) with pixel-exact rendering. Furthermore, in comparison with standard rendering techniques based on legacy OpenGL, the use of the OpenGL 4 core version improves rendering speed, reduces the memory footprint and increases flexibility. Many post-processing tools, such as picking, surface hiding, isolines, clipping and capping, are integrated to enable on-the-fly analysis of numerical results.
For years, numerical methods have consisted in solving Partial Differential Equations by means of a piecewise linear representation of the physical phenomenon on linear meshes. This choice was merely driven by computational limitations. With the increase in computational capabilities, it became possible to increase the polynomial order of the solution while keeping the mesh linear. This was motivated by the fact that, even if increasing the polynomial order requires more computational resources per solver iteration, it yields a faster convergence of the approximation error 3, 74 and makes it possible to track unsteady features for a longer time and with a coarser mesh than with a linear approximation of the solution. However, in 14, 39, it was theoretically shown that, for elliptic problems, the optimal convergence rate of a high-order method is obtained with a curved boundary of the same order, and in 9, evidence was given that, without a high-order representation of the boundary, the studied physical phenomenon is not exactly solved by a high-order method. In 78, it was even highlighted that, in some cases, the order of the mesh should be higher than that of the solver. In other words, if the mesh is not a high-order mesh, then the obtained high-order solution will never reliably represent the physical phenomenon.
Based on these issues, the development of high-order mesh generation procedures appears mandatory 1. Several approaches exist to generate high-order meshes. The problem was first tackled twenty years ago 16 for both surface and volume meshing; at that time, the idea was to use all the available meshing tools to obtain a valid high-order mesh. The same problem was revisited a few years later in 69 for bio-medical applications. In these first approaches and in all the following ones, the underlying idea is to start from a linear mesh and elevate it to the desired order. Some methods use a PDE or variational approach to do so 4, 61, 18, 54, 73, 76, 31; others are based on optimization and smoothing operations and start from a linear mesh with a constrained high-order curved boundary in order to generate a suitable high-order mesh 38, 22, 71. Also, when dealing with the Navier-Stokes equations, the question of generating curved boundary layer meshes (also called viscous meshes) arises. Most of the time, dedicated approaches are set up to deal with this problem 55, 37. In all these techniques, the key is to find, and optimize, the best deformation to apply to the linear mesh. The prerequisite of these methods is that the initial boundary is curved and is used as input data. A natural question is consequently to study the optimal position of the high-order nodes on the curved boundary, starting from an initial linear or high-order boundary mesh. This can be done in a coupled way with the volume 64, 72 or in a preprocessing phase 65, 66. In this process, the position of the nodes is set by projection onto the CAD geometry or by minimization of an error between the surface mesh and the CAD surface. Note that the vertices of the boundary mesh can move as well during the process. In the case of an initial linear boundary mesh with no CAD geometry available, approaches based on normal reconstruction can be used to create a surrogate for the CAD model 75, 32.
Finally, a last question remains when dealing with such high-order meshes: given a set of degrees of freedom, is the definition of these objects always valid? Until the work presented in 27, 36, 26, no approach dealt robustly with the validity of high-order elements. The novelty of these approaches was to view the geometrical elements and their Jacobian as Bézier entities. Based on the properties of the Bézier representation, the validity of the element is concluded in a robust sense, while other methods only sampled the Jacobian to conclude about its sign, without any guarantee on the validity of the whole element.
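The Bézier argument can be illustrated in one dimension: for a quadratic (P2) edge, the Jacobian x'(t) is itself a degree-1 Bézier polynomial, and, by the convex-hull property, positivity of its control coefficients guarantees positivity everywhere, something sampling alone cannot guarantee. A hypothetical sketch:

```python
def p2_edge_validity(x0, x1, x2):
    """Sufficient validity check for a 1D quadratic (P2) element.

    The Jacobian x'(t) of the mapping with Bezier control points
    (x0, x1, x2) is a degree-1 Bezier polynomial with control
    coefficients 2*(x1 - x0) and 2*(x2 - x1). By the convex-hull
    property, both being positive guarantees x'(t) > 0 on all of
    [0, 1], whereas sampling x'(t) at a few points could miss a
    sign change.
    """
    c0 = 2.0 * (x1 - x0)
    c1 = 2.0 * (x2 - x1)
    return c0 > 0.0 and c1 > 0.0
```

The 2D and 3D checks in the cited work follow the same principle, with the Jacobian expanded in a higher-degree Bézier basis.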
In this context, several issues have been addressed: the analogy between high-order and Bézier elements, the development of high-order error estimates suitable for parametric high-order surface mesh generation, and the generalization of mesh optimization operators and their application to curved mesh generation, moving-mesh methods, boundary layer mesh generation and mesh adaptation.
Metric fields are the link between particular error estimates - be they for low-order 41, 42 or high-order methods 15, for the solution of a PDE 44 or for a quantity of interest derived from it, such as drag or lift 45 - and automatic mesh adaptation. In the case of linear meshes, a metric field locally distorts the measure of distance such that, when the mesh adaptation algorithm has constructed a uniform mesh in the induced Riemannian space, that mesh is strongly anisotropic in the usual Euclidean (physical) space. As such, anisotropy arises naturally, without ever being explicitly sought by the (re)meshing algorithm.
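As a small illustration of how a metric distorts the measure of distance: the length of an edge vector e in the metric M is sqrt(e^T M e), and a mesh is "unit" in M when every edge has metric length close to 1. The sketch below (names are ours) shows how an edge that is short in the Euclidean sense can be unit-length in an anisotropic metric:

```python
import numpy as np

def metric_length(edge, M):
    """Length of edge vector e in the Riemannian metric M: sqrt(e^T M e)."""
    e = np.asarray(edge, dtype=float)
    return float(np.sqrt(e @ np.asarray(M, dtype=float) @ e))

# An anisotropic metric prescribing size h1 = 0.1 across a feature and
# h2 = 1.0 along it: M = diag(1/h1^2, 1/h2^2).
M = np.diag([1.0 / 0.1 ** 2, 1.0 / 1.0 ** 2])

# The edge (0.1, 0) has Euclidean length 0.1 but metric length 1:
# it is a "unit" edge of the adapted, anisotropic mesh.
```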
We seek to extend these principles of metric-based mesh adaptation to high-order meshes and solutions.
The metric field's own intrinsic curvature may derive from any error estimate, be it a boundary approximation error 19, 17 or an interpolation error estimate. So far, interpolation error estimates on high-order elements are limited to isotropy (13 in
If genericity with regard to error estimation is achieved through the use of a metric field, the robustness and modularity of the general remeshing algorithm may derive from the use of a single topological operator, such as the cavity operator 43, 47, 48. This is why we chose to extend the original cavity operator.
Work has begun on the new
The scope of this paper is to demonstrate the viability and efficiency of unstructured anisotropic mesh adaptation techniques for turbomachinery applications. The main difficulty in turbomachinery is the periodicity of the domain, which must be taken into account in the mesh-adaptive solution process. The periodicity is strongly enforced in the flow solver using ghost cells, to minimize the impact on the source code. For the mesh adaptation, the local remeshing is done in two steps. First, the inner domain is remeshed with frozen periodic frontiers; second, the periodic surfaces are remeshed after moving geometric entities from one side of the domain to the other. One of the main goals of this work is to demonstrate how mesh adaptation, thanks to its automation, is able to generate meshes that are extremely difficult to envision and almost impossible to generate manually. This study only considers a feature-based error estimate based on the standard multi-scale Lp interpolation error estimate. We present all the specific modifications introduced in the adaptive process to deal with the periodic simulations used in turbomachinery applications. The periodic mesh adaptation strategy is then tested and validated on the LS89 high-pressure axial turbine vane and the NASA Rotor 37 test cases.
CFD simulations aim at capturing several phenomena of various natures, and these phenomena have very different mesh requirements. For example, most numerical schemes for the boundary layer require a structured mesh aligned with the boundary of the domain.
We use metric-based mesh adaptation techniques to generate a hybrid mesh that can fulfill these different requirements. This approach is based on metric-orthogonal point placement, creating structured parts from the intrinsic directional information borne by the metric field. Some unstructured areas may remain where structure is not required. The main goals of this work in progress are to improve the orthogonality of the output mesh and its alignment with the metric field. The work falls into three parts. First, we have re-designed the preliminary step of size gradation correction. Then, we have studied two hybrid mesh generation methods: one using an a priori quadrilateral recombination, the other building the quadrilaterals directly during the remeshing step. Finally, some modifications have been added to the solver Wolf to perform simulations on hybrid meshes. Wolf is now able to run simulations of inviscid and viscous (laminar and turbulent) flows on two-dimensional hybrid meshes. The first two topics are detailed in what follows.
The generation method described above relies heavily on the metric field. However, a metric field computed from a solution during the adaptation process is most of the time quite messy, with, for example, strong size variations. In standard mesh adaptation, this leads to low-quality elements. In orthogonal mesh adaptation, it additionally breaks the alignment and structure of the output mesh. In both cases, a step has been added to smooth the input metric field. In the context of hybrid mesh adaptation, this gradation correction process has been re-designed to improve the number and quality of the quadrilaterals in the output mesh.
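Gradation correction can be pictured on a one-dimensional size field: each size is clamped by its neighbours' sizes grown at a rate beta per unit distance, and repeated sweeps propagate the bound through the field. This is a deliberately simplified sketch under those assumptions, not the actual algorithm implemented in the team's tools:

```python
def limit_gradation(sizes, positions, beta=1.5, n_sweeps=5):
    """One-dimensional sketch of size-gradation correction.

    Each vertex size h_i is clamped so that it never exceeds a
    neighbour's size grown at rate beta over the separating distance:
        h_i <= h_j * beta ** |x_i - x_j|.
    Gauss-Seidel-like sweeps propagate the bound along the field.
    """
    h = list(sizes)
    n = len(h)
    for _ in range(n_sweeps):
        for i in range(n):
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    d = abs(positions[i] - positions[j])
                    h[i] = min(h[i], h[j] * beta ** d)
    return h
```

Starting from sizes [0.01, 10, 10] at unit spacing with beta = 2, the corrected field becomes [0.01, 0.02, 0.04]: the tiny size no longer sits next to a size three orders of magnitude larger.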
Metric-orthogonal point placement is currently used to generate quasi-structured meshes with right-angled triangles where the metric is most anisotropic, and unit triangles elsewhere. The aim of this work is to recover quadrilaterals in the structured parts. To do so, two approaches can be considered: an a posteriori quadrilateral recombination based on geometrical criteria, and an a priori quadrilateral detection. The latter is more straightforward because it directly uses the point-placement information.
A new strategy for mesh adaptation dealing with Fluid-Structure Interaction (FSI) problems is presented, using
a partitioned approach. The Euler equations are solved by an edge-based Finite Volume solver, whereas
the linear elasticity equations are solved by the Finite Element Method using the Lagrange
When using anisotropic mesh adaptation in computational fluid dynamics, the interactions occurring among complex geometries and high gradients (such as boundary layers and shocks) present some drawbacks, such as stagnation and oscillations in the global residual convergence, especially when slope limiters are used. In particular, we studied how the presence of a slope limiter affects the overall convergence of a simulation of the Navier-Stokes equations using anisotropic mesh adaptation and the Spalart-Allmaras turbulence model, and we have shown several techniques to reduce such undesirable effects. In this regard, we successfully tested the freezing of the slope limiter and the reduction of the CFL number at the mesh vertices where the slope limiter oscillates, and we then tested the same methodologies inside the shockwaves generated by transonic flows.
In aeronautical engineering, anisotropic mesh adaptation is used to accurately predict dimensionless quantities such as the lift and drag coefficients and, in general, functionals depending on the solution field. However, in order to get the optimal adapted mesh with respect to the accuracy of a goal functional, it is necessary to solve an adjoint system providing the sensitivity of the goal functional to the residuals of the equations. The linear system associated with the adjoint problem proved to be stiff for the RANS equations with a standard solver such as GMRES preconditioned with several SGS iterations; hence, an alternative method has been developed, based on the transient simulation of the RANS adjoint state, starting from a previously computed valid solution.
Ideal Mesh generation for modern solvers and comPuting ArchiteCTureS.