Numerical simulation has been booming over the last thirty years, thanks to increasingly powerful numerical methods, computer-aided design (CAD) and mesh generation for complex 3D geometries, and the advent of high-performance computing (HPC) on supercomputers. The discipline is now mature and has become an integral part of design in science and engineering applications. This new status has led scientists and engineers to consider the numerical simulation of problems of ever increasing geometrical and physical complexity. A simple observation of the classical computational pipeline

shows: no mesh = no simulation
along with "bad" mesh = wrong simulation.
We conclude that the mesh lies at the core of the classical computational pipeline and is a key component for significant improvements.
The requirements on meshing methods are therefore ever increasing: to produce, in an automated manner and for ever more difficult configurations, high-quality meshes that enable reliable solution predictions.
These requirements on meshing or equivalent technologies cannot be removed, and all approaches face similar issues.

In this context, the Gamma team was created in 1996 and focused on the development of robust automated mesh generation methods in 3D, which was clearly a bottleneck at a time
when most numerical simulations were 2D. The team has been very successful in tetrahedral meshing with the well-known software Ghs3d 29, 30, which is distributed worldwide, and in hexahedral meshing with the software Hexotic 38, 39, which was the first automated full-hex mesher. The team has also worked on surface meshing with Yams 26 and BLSurf 23, and on visualization with Medit. Before Medit, it was not even possible to visualize 3D meshes in real time!

In 2010, the Gamma3 team replaced Gamma, with the choice to focus more on meshing for numerical simulation. The main goal was to emphasize and strengthen the link between meshing technologies and numerical methods (flow or structure solvers). The metric-based anisotropic mesh adaptation strategy has been very successful, with the development of
many error estimates, the generation of highly anisotropic meshes, its application to the compressible Euler and Navier-Stokes equations 20, and its extension to unsteady problems with moving geometries 22, leading to the development of several software packages:
Feflo.a/AMG-Lib, Wolf, Metrix, and Wolf-Interpol. A significant accomplishment
was the high-fidelity prediction of the sonic boom emitted by supersonic aircraft 21.
We were the first to compute a certified aircraft sonic boom propagation in the atmosphere, thanks to mesh adaptation.
The team also started to work on parallelism, with the development of the multi-threaded library LPlib, the efficient management of memory using space-filling curves, and the generation of large meshes (a billion elements) 36. Theoretical work on high-order meshes has also been carried out 28.

Today, numerical simulation is an integral part of design in engineering applications, with the main goal of reducing costs and speeding up the creation of new designs. Four main issues for industry are:

Let us now discuss each of these issues in more detail.

Generating a discrete surface mesh from a CAD geometry definition has been the Achilles' heel of numerical simulation for the last 30 years. Significant issues are far too common, ranging from persistent translation issues between systems, which can produce ill-defined geometry definitions, to overwhelming complexity for full configurations with all their components. An ill-defined geometry definition often fails to capture the geometry's features and leads to a bad mesh and a broken simulation. Unfortunately, CAD system design is essentially decoupled from the needs of numerical simulation and is largely driven by those of manufacturing and other areas. As a result, this step of the numerical simulation pipeline is still labor intensive and the most time consuming. There is a need to develop alternative geometry processes and models that are more suitable for numerical simulation.

Companies working on high-tech projects with high added value (Boeing, Safran, Dassault Aviation, Ariane Group, ...) consider their design pipeline
inside an HPC framework. Indeed, they perform complex numerical simulations on complex geometries on a daily basis, and they aim to use them in a shape-optimization loop. Therefore, any tool added to their numerical platform must be HPC compliant. This means that all developments should consider hybrid parallelism, i.e., be compatible with both distributed-memory architectures (MPI) and shared-memory architectures (multi-threading),
to achieve scalable parallelism.

One of the main goals of numerical simulation is to reduce the cost of creating new designs (e.g., reducing the number of wind-tunnel and flight tests in the aircraft industry). The emergence of 3D printers is, in some cases, making tests easier, faster and cheaper to perform. It is thus mandatory to control the cost of the numerical simulations; in other words, it is important to use fewer resources to achieve the same accuracy. The cost takes into account the engineer's time as well as the computing resources needed to perform the numerical simulation. The cost of one simulation can vary from 15 euros for simple models (1D-2D), to 150 euros for stationary Reynolds-averaged Navier-Stokes (3D) models, and up to 15 000 euros for unsteady models like LES or Lattice-Boltzmann 1. It is important to know that a design loop is equivalent to performing between 100 and 1 000 numerical simulations. Consequently, the need for more efficient algorithms and processes remains a key factor.

Another crucial point is the control and certification of errors and uncertainties in high-fidelity numerical simulations. These errors can come from several sources:

The error assessment and mesh generation procedures employed in the aerospace industry for CFD simulations rely heavily on the experience of the CFD user. The inadequacy of this practice, even for geometries frequently encountered in engineering practice, has been highlighted in studies of the AIAA 2 CFD Drag Prediction Workshops 40 and High-Lift Prediction Workshops 44, 43. These studies suggest that the range of scales present in the turbulent flow cannot be adequately resolved using meshes generated following what are considered best current practices. In this regard, anisotropic mesh adaptation is considered the future, as stated in the NASA report "CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences" 45 and in the study dedicated to mesh adaptation 41.

These preoccupations are the core of the Gamma project scientific program.
To answer the first issue, Gamma will focus on designing and developing
a geometry modeling framework specifically intended for mesh generation and numerical
simulation purposes. This is a mandatory step for automated geometry-mesh
and mesh adaptation processes with an integrated geometry model.
To answer the last three issues, the Gamma team will work on the development of a high-order mesh-adaptive solution platform compatible with
HPC environments. To this end, Gamma will pursue its work on advanced mesh generation methods, which should provide the following capabilities:

In addition, Gamma will continue to work on robust flow solvers for the turbulent Navier-Stokes equations, from second-order Finite Volume - Finite Element numerical schemes to higher-order Flux Reconstruction (FR) methods.

The combination of adaptation, high order and multi-element meshes, coupled with appropriate error estimates, is for the team the way forward to reduce the cost of numerical simulations while ensuring high fidelity in a fully automated framework.

The main axes are:

Our research in mesh generation, mesh adaptation and certification of the numerical simulation pipeline finds applications in several domains, such as aviation and aerospace, but also in all fields where computation and simulation are used: fluid mechanics, solid mechanics, wave equations (acoustics, electromagnetism, ...), energy and biomedical engineering.

Input: a triangulated surface mesh and an optional size map to control the size of inner elements.

Output: a fully hexahedral mesh (no hybrid elements), valid (no negative Jacobian) and conformal (no dangling nodes), whose surface matches the input geometry.
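The "no negative Jacobian" criterion can be illustrated with a minimal Python sketch (not Hexotic's actual code): for a trilinear hex, the Jacobian at each of the 8 corners reduces to the triple product of the three edge vectors meeting there, and positivity of all corner Jacobians is the standard necessary validity check.

```python
# Corner ordering assumed: bottom face 0-1-2-3 (counterclockwise), top face 4-5-6-7.
# For each corner c, the tuple lists three neighbors (i, j, k) ordered so that
# the edge triple (c->i, c->j, c->k) is right-handed on a positively oriented hex.
CORNER_EDGES = [
    (0, 1, 3, 4), (1, 2, 0, 5), (2, 3, 1, 6), (3, 0, 2, 7),
    (4, 7, 5, 0), (5, 4, 6, 1), (6, 5, 7, 2), (7, 6, 4, 3),
]

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def det3(u, v, w):
    # scalar triple product u . (v x w)
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

def corner_jacobians(hex_pts):
    """Jacobian of the trilinear map evaluated at the 8 corners."""
    jacs = []
    for c, i, j, k in CORNER_EDGES:
        p = hex_pts[c]
        jacs.append(det3(sub(hex_pts[i], p), sub(hex_pts[j], p), sub(hex_pts[k], p)))
    return jacs

def is_valid(hex_pts):
    # "no negative Jacobian": every corner Jacobian must be strictly positive
    return min(corner_jacobians(hex_pts)) > 0.0
```

On the unit cube every corner Jacobian equals 1; a collapsed element fails the test.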

The software is a simple command-line tool that requires no knowledge of meshing. Its arguments are an input mesh and some optional parameters to control element sizing, curvature and subdomains, as well as features like boundary layer generation.

The whole library was completely rewritten to implement automatic finite-element shader generation, which converts simple user source code into OpenCL source that is compiled on the GPU at run time.
The library handles all meshing data structures, from file reading, renumbering and vectorizing for efficient access on the GPU, to the transfer to the graphics card, all automatically and transparently.
With this framework, the user can focus on the calculation part of the code, known as kernel, as all the rest is taken care of by the library.
The OpenCL language was chosen as it is hardware agnostic and runs on any GPU (Intel, Nvidia and AMD) and can also use the multicore and vector capacities of modern CPUs.
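The kernel-generation idea can be sketched as follows (hypothetical names, not the library's actual API): a per-entity user expression is wrapped into a complete OpenCL kernel source string, which would then be handed to the OpenCL runtime for just-in-time compilation.

```python
# Illustrative sketch only: wrap a scalar user expression, written for one
# mesh entity, into a full OpenCL kernel source. The real library also
# renumbers and vectorizes mesh data for coalesced GPU access; only the
# source-generation step is shown here.
KERNEL_TEMPLATE = """\
__kernel void {name}(__global const double *src, __global double *dst, int n)
{{
    int i = get_global_id(0);      /* one work-item per mesh entity */
    if (i < n) {{
        dst[i] = {user_code};      /* user-supplied per-entity calculation */
    }}
}}
"""

def generate_kernel(name, user_code):
    """Turn a user expression into compilable OpenCL source text."""
    return KERNEL_TEMPLATE.format(name=name, user_code=user_code)

src = generate_kernel("scale_temperature", "0.5 * src[i]")
```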

Julien Vanharen developed a basic heat solver using version 3.0 as a test case, which allowed us to successfully validate the software with various boundary conditions, calculation schemes, unstructured meshes and different memory access patterns. Even with a basic calculation that does not stress the GPU's full power, we achieved a speed-up of two orders of magnitude over a single CPU core and of one order of magnitude over a multithreaded implementation.

Work continued on P1toPk, a software that transforms any first-order hybrid mesh (triangles, quads, tets, pyramids, prisms and hexes) into a second-order one while respecting a prescribed surface curvature.
Efforts were devoted to boundary-layer curving, which is challenging because Jacobian validity is harder to guarantee as the elements get highly stretched, and considerable effort also went into speeding up the code by optimizing mathematical operations and parallelizing them.
The code is now mature enough to be sent to industrial users for real-life usage, and we are expecting valuable feedback in the coming year.

We are developing ViZiR 4, a visualization software with pixel-exact rendering, to address the challenges of high-order visualization 25, 15.
ViZiR 4 is a light, simple and interactive software for visualizing high-order meshes and solutions. It is based on the OpenGL 4 core graphic pipeline. The use of the OpenGL Shading Language (GLSL) makes it possible to perform pixel-exact rendering of high-order solutions on straight elements (without extra subdivision or ray casting) and almost pixel-exact rendering on curved elements (high-order meshes). ViZiR 4 enables the representation of high-order meshes (up to degree 4) and high-order solutions (up to degree 10) with pixel-exact rendering. Unlike other visualization software (ParaView 33, TecPlot 34, FieldView 35, Ensight 32, Medit 27, Vizir (the legacy OpenGL version) 37, Gmsh 31), there is neither an expensive subdivision process nor a visualization error to be controlled.
Moreover, the subdivision of curved entities is done on the fly on the GPU, which keeps the RAM memory footprint at the size of the loaded mesh.
Furthermore, in comparison with standard rendering techniques based on legacy OpenGL, the use of the OpenGL 4 core version improves rendering speed, reduces the memory footprint and increases flexibility. Many post-processing tools, such as picking, surface hiding, isolines, clipping and capping, are integrated to enable on-the-fly analysis of the numerical results.
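The pixel-exact idea can be sketched in Python rather than GLSL: instead of subdividing a high-order element into many linear pieces, the shader evaluates the Bernstein-Bézier polynomial exactly at each pixel's parametric coordinate, for instance with the de Casteljau algorithm (shown here in 1D for simplicity).

```python
def de_casteljau(coeffs, t):
    """Evaluate a Bezier polynomial with control coefficients `coeffs` at t in [0, 1].

    Each pass takes convex combinations of consecutive coefficients; after
    degree-many passes a single value remains, the exact polynomial value.
    """
    pts = list(coeffs)
    while len(pts) > 1:
        pts = [(1.0 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# Degree-2 example: coefficients (0, 1, 0) give 2t(1-t), so the value at
# t = 0.5 is exactly 0.5, with no subdivision error to control.
value = de_casteljau([0.0, 1.0, 0.0], 0.5)
```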

We aim to deal with three main topics around high-order mesh adaptation, with applications to classical a posteriori curving.

One, a new method to untangle high-order Bézier meshes is introduced.
It directly maximizes the minimum control coefficient over the mesh.
Under some conditions, the

Two, Riemannian edge length minimization is used to prescribe metric-based mesh-interior curvature.
This weakly generalizes the unit-mesh definition from linear meshes, and is very fast since only edge-scale problems are solved.
Through metric gradation on surface metrics, surface curvature is naturally propagated to the interior by length minimization.
Similarly, the mesh-intrinsic metric can be used to curve boundary layers.
In realistic settings, a so-called back mesh holding the discrete metric field is kept unchanged throughout remeshing.
This prevents anisotropy loss through repeated interpolation.
It also leads to the metric at a point

Three, the cavity operator is extended to

Necessary curvature results from subsequent high-order untangling, as prescribed curvature is not expected to yield valid cavities.

Numerical results focus on the a posteriori curving of difficult cases. Boundary control points are projected. Metric-based curvature is used to propagate surface curvature, curve boundary layers, and follow the natural curvature of a metric field resulting from CFD adaptation. The simplex-based Jacobian smoother corrects the resulting meshes. Examples are based on 3D real-world geometries encountered in Computational Fluid Dynamics (CFD). This framework allows us to curve highly anisotropic meshes with around 10 million elements within minutes.
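The Riemannian edge length underlying the unit-mesh notion used above can be sketched in a few lines: the length of an edge e under a symmetric positive-definite metric M is sqrt(e^T M e), and a unit mesh is one whose edges all have metric length close to 1. The 2D example below is illustrative.

```python
import math

def metric_length(edge, M):
    """Length of edge vector (ex, ey) under the 2x2 SPD metric M = (a, b, c),
    representing the matrix [[a, b], [b, c]]: sqrt(e^T M e)."""
    ex, ey = edge
    a, b, c = M
    return math.sqrt(a * ex * ex + 2.0 * b * ex * ey + c * ey * ey)

# An anisotropic metric prescribing size h = 0.1 along x and h = 1.0 along y:
# the eigenvalues of M are 1/h^2 in each principal direction.
M = (1.0 / 0.1**2, 0.0, 1.0 / 1.0**2)

# Both edges below are "unit" for this metric despite very different
# Euclidean lengths, which is exactly how anisotropy is encoded.
lx = metric_length((0.1, 0.0), M)   # unit in the fine direction
ly = metric_length((0.0, 1.0), M)   # unit in the coarse direction
```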

We aim to demonstrate the viability and efficiency of unstructured anisotropic mesh adaptation techniques for turbomachinery applications. The main difficulty in turbomachinery is the periodicity of the domain, which must be taken into account in the mesh-adaptive solution process. The periodicity is strongly enforced in the flow solver using ghost cells to minimize the impact on the source code. For the mesh adaptation, the local remeshing is done in two steps. First, the inner domain is remeshed with frozen periodic frontiers; second, the periodic surfaces are remeshed after moving geometric entities from one side of the domain to the other. One of the main goals of this work is to demonstrate how mesh adaptation, thanks to its automation, is able to generate meshes that are extremely difficult to envision and almost impossible to generate manually. This study only considers feature-based error estimation based on the standard multi-scale Lp interpolation error estimate. We present all the specific modifications that have been introduced in the adaptive process to deal with periodic simulations for turbomachinery applications. The periodic mesh adaptation strategy is then tested and validated on the LS89 high-pressure axial turbine vane and the NASA Rotor 37 test cases.
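For reference, the standard multi-scale Lp interpolation error estimate mentioned above leads, in the continuous-mesh framework of the metric-based adaptation literature, to the optimal metric

```latex
\mathcal{M}_{L^p} \;=\; N^{\frac{2}{n}}
\left( \int_{\Omega} \bigl(\det |H_u|\bigr)^{\frac{p}{2p+n}} \,\mathrm{d}\mathbf{x} \right)^{-\frac{2}{n}}
\bigl(\det |H_u|\bigr)^{-\frac{1}{2p+n}} \, |H_u|,
```

where u is the sensor field, |H_u| the absolute value (SPD rescaling) of its Hessian, N the target mesh complexity, and n the spatial dimension; the periodic treatment described above leaves this estimate unchanged and only modifies the remeshing step.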

Due to their varied nature, the physical phenomena that we seek to capture in CFD simulations may have specific mesh requirements. For example, to resolve the boundary layer, some numerical schemes favor structured meshes aligned with the boundary of the domain, while these constraints are not necessary elsewhere. Our approach is to use metric-based mesh adaptation techniques to generate a mixed-element mesh that can fulfill these different requirements. This approach is based on metric-orthogonal point placement, creating structured parts from the intrinsic directional information borne by the metric field. Some unstructured areas may remain where structure is not needed. The main goals of this work are to improve the orthogonality of the output mesh and its alignment with the metric field. This work has three main axes. First, we have improved the preprocessing gradation step to smooth the metric field and improve the orthogonality of the final mesh. Then, we have studied two methods to obtain quadrilaterals: one using an a priori quadrilateral recombination, the other directly detecting orthogonal patterns during the remeshing step. Finally, the work on the solver Wolf has been pursued, and the solver corrected, to perform robust and accurate simulations on mixed-element meshes. These three developments are embodied in a mixed-element adaptation loop. The first two topics are detailed in what follows.

The generation method described above relies heavily on the metric field. However, a metric field computed from a solution during the adaptation process is most of the time quite noisy and shows abrupt size variations. In standard mesh adaptation, this leads to low-quality elements. In orthogonal mesh adaptation, it additionally breaks the alignment and the structure of the output mesh. An additional step to smooth the input metric field is therefore required. In the context of mixed-element mesh adaptation, this gradation correction process has been modified to improve the number and the quality of the quadrilaterals in the final mesh. Further developments have been considered on this topic, in particular to increase the robustness of the method. Results have been published in 47.
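The scalar analogue of this gradation correction can be sketched as follows (a minimal isotropic illustration, not the actual metric-field algorithm): each vertex size is repeatedly limited by its neighbors' sizes so that growth along any edge stays below a gradation factor beta, removing abrupt size variations.

```python
def grade_sizes(sizes, edges, lengths, beta=1.5, max_iter=100):
    """Limit size growth so that h grows at most linearly with slope
    (beta - 1) along every edge; iterate until a fixed point is reached.

    sizes: per-vertex target sizes; edges: list of (i, j) vertex pairs;
    lengths: Euclidean length of each edge.
    """
    h = list(sizes)
    for _ in range(max_iter):
        changed = False
        for (i, j), d in zip(edges, lengths):
            # limit each endpoint by the size propagated from the other one
            for a, b in ((i, j), (j, i)):
                allowed = h[a] + (beta - 1.0) * d
                if h[b] > allowed + 1e-12:
                    h[b] = allowed
                    changed = True
        if not changed:
            break
    return h
```

With beta = 1.5, a size of 10 one unit away from a size of 0.1 is clipped to 0.1 + 0.5 * 1 = 0.6.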

Metric-orthogonal point placement is currently used to generate quasi-structured meshes with right-angled triangles where the metric is most anisotropic and unit triangles elsewhere. The aim of this work is to recover quadrilaterals in the structured parts. To do so, two approaches can be considered: an a posteriori quadrilateral recombination based on geometrical criteria, and an a priori quadrilateral detection. The latter is more straightforward because it directly uses the point-placement information. A framework was established to set up this method. Developments and preliminary results were presented in 48.

In order to obtain a correct metric field on hybrid meshes, a robust hybrid solver is mandatory. When dealing with 2D (resp. 3D) elements other than triangles (resp. tetrahedra), the trickiest aspect is the gradient formulation. This is due to the fact that, within a finite-element interpolation framework, the gradient on an element with more than three nodes (i.e., non-simplicial) is not element-wise constant. This brings many added difficulties to the flux balance computation. A first attempt at performing inviscid and laminar simulations on hybrid meshes was to approximate gradients on quadrilaterals by their iso-barycenter values 48. The extension of this formulation to turbulent flows, however, highlighted a lack of efficiency and robustness. For this reason, an APFE (APproximated Finite-Element) method 42 has been implemented and extended to an implicit time integration scheme. This approach turns out to be very efficient and robust in many fully structured mesh verification cases. The extension to 3D cases (prisms and pyramids) is ongoing. Details can be found in 17.
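The iso-barycenter approximation mentioned above can be sketched as follows: on a bilinear Q4 quadrilateral the gradient is not constant, so it is evaluated once at the reference center (xi = eta = 0) and used as the element value. This is an illustrative sketch, not the solver's code.

```python
def quad_gradient_at_center(xy, u):
    """Gradient of a bilinearly interpolated field on a Q4 quad, evaluated
    at the element iso-barycenter.

    xy: 4 (x, y) nodes in counterclockwise order; u: 4 nodal values.
    """
    # reference-space shape-function gradients at the center (xi = eta = 0)
    dxi  = (-0.25,  0.25, 0.25, -0.25)
    deta = (-0.25, -0.25, 0.25,  0.25)
    # Jacobian of the geometric map at the center
    j11 = sum(d * p[0] for d, p in zip(dxi,  xy))   # dx/dxi
    j12 = sum(d * p[0] for d, p in zip(deta, xy))   # dx/deta
    j21 = sum(d * p[1] for d, p in zip(dxi,  xy))   # dy/dxi
    j22 = sum(d * p[1] for d, p in zip(deta, xy))   # dy/deta
    det = j11 * j22 - j12 * j21
    # gradient of u in reference space
    gxi  = sum(d * v for d, v in zip(dxi,  u))
    geta = sum(d * v for d, v in zip(deta, u))
    # push forward to physical space: grad u = J^{-T} (gxi, geta)
    gx = ( j22 * gxi - j21 * geta) / det
    gy = (-j12 * gxi + j11 * geta) / det
    return gx, gy
```

On the unit square with nodal values of u = x + 2y, this returns the exact gradient (1, 2); on distorted quads it is only an approximation, which is precisely the weakness exposed by turbulent flows.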

An extension of the Vertex-Centered Mixed-Element-Volume MUSCL scheme to 3D mixed-element meshes is proposed for the convective fluxes. This scheme involves a clever exploitation of the FE gradients, which can be efficiently implemented as

The combination of the proposed discretization strategies for convective and diffusive fluxes shows good robustness on both regular and irregular mixed-element meshes, as well as interesting results for turbulent flows. The two-dimensional analysis highlighted the influence of the gradient formulation in the source terms for highly anisotropic meshes. This aspect is under investigation in 3D. On the whole, the proposed methodology seems to be a promising candidate to tackle mixed-element adaptation in future work. More details can be found in 46.

When solving the RANS equations, one usually decouples the equations for the mean flow from the equations for the turbulence. This division provides two separate systems to be solved at each time step, one for the mean flow and the other for the turbulence. It presents two main drawbacks. First, in the flow solver, the Jacobian of the system lacks the terms coupling the mean flow and the turbulence, which can slow down the residual convergence. The second drawback concerns the adjoint problem, which consists of a linear system assembled with the transpose of the Jacobian matrix of the residuals and, on the right-hand side, the derivative of an aeronautical coefficient with respect to the flow variables. A Jacobian missing the coupling terms between the mean flow and the turbulence yields a null adjoint turbulent viscosity, which is a limitation in the development of more complex discretization error estimates. We have therefore developed a 2D version of the coupled flow and adjoint solver that includes in the Jacobian the coupling terms between the mean flow and the turbulence. We have tested this method on the 2D geometry provided for the 4th AIAA CFD High Lift Prediction Workshop, and several features emerged in the results, such as high mesh refinement inside the boundary layers of the leading edges and inside the regions of high turbulence destruction. This work is pursued in collaboration with Philippe Spalart (Boeing) and is presented in 24.
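In symbols, with R the residual of the discretized equations, W = (W_mean, W_turb) the flow variables, j the aeronautical coefficient of interest, and lambda the adjoint state, the adjoint problem described above reads

```latex
\left( \frac{\partial R}{\partial W} \right)^{T} \lambda
\;=\;
\left( \frac{\partial j}{\partial W} \right)^{T}.
```

The coupling terms are the off-diagonal blocks of the Jacobian, partial R_mean / partial W_turb and partial R_turb / partial W_mean; dropping them forces the turbulent component of lambda, and hence the adjoint turbulent viscosity, to vanish.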

Goal-oriented mesh adaptation is a methodology used to adapt the mesh in order to minimize the discretization error committed on a functional depending on the solution. As an intermediate step, one derives an upper bound on this discretization error taking the form of a weighted sum of interpolation errors of the solution; this upper bound is called the error estimate. Regarding Wolf and the RANS equations, we had so far focused only on the mean-flow part of this error estimate, meaning that the terms coming from the turbulence were neglected. The scope of this work is to enrich the error estimate with the information coming from the turbulence. The methodology has been tested on the 2D geometry provided for the 4th AIAA CFD High Lift Prediction Workshop, yielding high mesh refinement at the boundaries of the turbulent regions. This work is also pursued in collaboration with Philippe Spalart (Boeing) and is presented in 24.

NEXTAIR project on cordis.europa.eu

Radical changes in aircraft configurations and operations are required to meet the target of climate-neutral aviation. To foster this transformation, innovative digital methodologies are of utmost importance to enable the optimisation of aircraft performance.

NEXTAIR will develop and demonstrate innovative design methodologies, data-fusion techniques and smart health-assessment tools enabling the digital transformation of aircraft design, manufacturing and maintenance. NEXTAIR proposes digital enablers covering the whole aircraft life-cycle, devoted to easing the maturation of breakthrough technologies, their flawless entry into service, and smart health assessment. They will be demonstrated in 8 industrial test cases, representative of multi-physics industrial design and maintenance problems and environmental challenges of interest to aircraft and engine manufacturers.

NEXTAIR will increase high-fidelity modelling and simulation capabilities to accelerate and derisk new disruptive configurations and breakthrough technologies design. NEXTAIR will also improve the efficiency of uncertainty quantification and robust optimisation techniques to effectively account for manufacturing uncertainty and operational variability in the industrial multi-disciplinary design of aircraft and engine components. Finally, NEXTAIR will extend the usability of machine learning-driven methodologies to contribute to aircraft and engine components' digital twinning for smart prototyping and maintenance.

NEXTAIR brings together 16 partners from 6 countries specialised in various disciplines: digital tools, advanced modelling and simulation, artificial intelligence, machine learning, aerospace design, and innovative manufacturing. The consortium includes 9 research organisations, 4 leading aeronautical industries providing digital-physical scaled demonstrators of aircraft and engines, and 2 high-tech SMEs providing expertise in industrial scientific computing and data intelligence.