This interdisciplinary project brings together researchers from different backgrounds (applied mathematics and fluid mechanics), who have gradually developed a common vision of what the simulation tools for fluid dynamics of tomorrow should be. Our applications will be focused on
wall-bounded turbulent flows, featuring complex phenomena such as aeroacoustics, hydrodynamic instabilities, phase change processes, complex walls, buoyancy or localized relaminarization. Because such flows exhibit a multiplicity of time and length scales of fluctuations resulting from complex interactions, their simulation is extremely challenging. Even if various methods of simulation (DNS 1) and turbulence modeling (RANS 2, LES 3, hybrid RANS-LES) are available and have been significantly improved over time, none of them satisfies all the needs encountered in industrial and environmental configurations. We consider that all these methods will be useful in the future in different situations or regions of the flow, if combined in the same simulation in order to benefit from their respective advantages wherever relevant, while mutually compensating their known limitations. This will lead to a description of turbulence at widely varying scales in the computational domain, hence the name multi-scale simulations. For example, the RANS mode may extend throughout regions where turbulence is sufficiently close to equilibrium, leaving to LES or DNS the handling of regions where large-scale coherent structures are present. However, a considerable body of work is required to:

But the best agile modeling and high-order discretization methods are useless without recourse to high-performance computing (HPC) to bring the simulation time down to values compatible with the requirements of the end users. Therefore, a significant part of our activity will be devoted to the proper handling of constantly evolving supercomputer architectures. Likewise, even the best simulation library is useless if it is not disseminated and increasingly used by the CFD community as well as by our industrial partners. In that respect, the significant success of the low-order finite volume simulation suite OpenFOAM 4, or of the more recently proposed SU2 5 from Stanford, can serve, if not as models to follow, then at least as sources of inspiration. Our natural inclination, though, will be to promote the use of the library toward our present and future industrial and academic partners, with a special interest in the SMEs active in the highly competitive and strategic economic sectors of energy production and aerospace propulsion. Indeed, these sectors are experiencing a revolution of the entire design process, especially for complex parts, with an intimate mix between simulations and additive manufacturing (3D printing) in the early stages of design. For large companies, such as General Electric or Safran (co-developing the CFM Leap-1 engines with 3D-printed fuel nozzles), as well as medium-size companies such as Aerojet Rocketdyne, this is a unique opportunity to reduce the duration, and hence the cost, of the development of their systems, while preserving, if not strengthening, their capability of designing innovative components that cannot be produced by classical manufacturing processes.
On the other hand, for the small companies of this sector, this may have a rather detrimental effect on their competitiveness, since their capability of mastering both these new manufacturing processes and advanced simulation approaches is far more limited. Thus, through our sustained direct (EDF, Turbomeca, PSA group, AD Industrie, Dassault Aviation) or indirect (European programs: WALLTURB, KIAI, IMPACT-AE, SOPRANO; ANR program MONACO_2025) partnerships with different companies, we are able to identify, from our point of view as scientists, relevant generic configurations to serve as support for the development of our approach. This methodological choice is motivated by the desire to carry out a transfer activity that is as efficient as possible, while maintaining a clear distinction between what falls within our field of competence as researchers and what pertains to the development of products by our industrial partners. The long-term objective of this project is to develop, validate, promote and transfer an original and effective approach for modeling and simulating generic flows representative of flow configurations encountered in the field of energy production and aeronautical/automotive propulsion. Our approach will combine mesh (h) + turbulence model (m) + discretization order (p) agility. This will be achieved by:

Concerning applications, our objectives are:

A typical continuous solution of the Navier-Stokes equations at sufficiently large values of the Reynolds number is governed
by a wide spectrum of temporal and spatial scales closely connected with the turbulent nature of the flow. The term deterministic chaos employed by Frisch in his enlightening book 39 certainly conveys most adequately the difficulty of analyzing and simulating this kind of flow.
The broadness of the turbulence spectrum is directly controlled by the Reynolds number, defined as the ratio between the inertial forces and the viscous forces. This number is not only useful to determine the transition from a laminar to a turbulent flow regime; it also indicates the range of scales of fluctuations present in the flow under consideration. Typically, for the velocity field and far from solid walls, the ratio between the largest scale (the integral length scale) and the smallest one (the Kolmogorov scale) is proportional to Re^{3/4}. Resolving this whole range of scales, as DNS does, makes it possible to (i) improve our knowledge of turbulent flows and (ii) test (i.e., validate or invalidate) and improve the modeling hypotheses inherently associated with the RANS and LES approaches. From a numerical point of view, due to the steady nature of the RANS equations, numerical accuracy is generally not ensured via the use of high-order schemes, but rather through careful grid convergence studies. In contrast, the high computational cost of LES or DNS makes the use of highly accurate numerical schemes necessary in order to optimize the use of computational resources.
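As a back-of-the-envelope illustration (a generic textbook estimate, not project data), the classical Kolmogorov scaling can be turned into a rough DNS cost estimate: the scale ratio grows like Re^(3/4), so a three-dimensional DNS resolving all scales needs on the order of Re^(9/4) grid points.

```python
# Illustration of the Kolmogorov scale-separation estimate: the ratio of the
# integral length scale L to the Kolmogorov scale eta behaves like Re^(3/4),
# so a 3D DNS resolving all scales needs on the order of Re^(9/4) grid points.

def scale_ratio(re: float) -> float:
    """L/eta ~ Re^(3/4) (classical Kolmogorov estimate)."""
    return re ** 0.75

def dns_points_3d(re: float) -> float:
    """Rough 3D grid-point count: (L/eta)^3 ~ Re^(9/4)."""
    return scale_ratio(re) ** 3

for re in (1e3, 1e5, 1e7):
    print(f"Re = {re:.0e}: L/eta ~ {scale_ratio(re):.1e}, N_3D ~ {dns_points_3d(re):.1e}")
```

This makes concrete why DNS is restricted to moderate Reynolds numbers, and why coarser descriptions (LES, RANS) remain indispensable at industrially relevant Re.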

With the notable exception of hybrid RANS-LES modeling, which is not yet accepted as a reliable tool for industrial design, as mentioned in the preamble of the Go4hybrid European program 6, a turbulence model represents turbulent mechanisms in the same way in the whole flow. Thus, depending on its intrinsic strengths and weaknesses, its accuracy will be a rather volatile quantity, strongly dependent on the flow configuration. For instance, RANS is perfectly suited to attached boundary layers, but exhibits severe limitations in massively separated flow regions. Therefore, the turbulence modeling and industrial design communities waver between the desire to continue to rely on the RANS approach, which is unrivaled in terms of computational cost but still unable to accurately represent all the complex phenomena, and the temptation to switch to LES, which outperforms RANS in many situations but is prohibitively expensive for high-Reynolds-number wall-bounded flows. In order to account for the limitations of the two approaches and to combine them so as to significantly improve the overall performance of the models, the hybrid RANS-LES approach has emerged during the last two decades as a viable intermediate way. Our project is firmly rooted in this innovative field of research, with an original approach though, based on temporal filtering (hybrid temporal LES, HTLES) rather than spatial filtering, and on a systematic and progressive validation process against experimental data produced by the team.

All the methods considered in the project are mesh-based methods: the computational domain is divided into cells of elementary shape:
triangles and quadrangles in two dimensions, and tetrahedra, hexahedra, pyramids, and prisms in three dimensions. If the cells are only regular hexahedra, the mesh is said to be structured. Otherwise, it is said to be unstructured. If the mesh is composed of more than one elementary shape, it is said to be hybrid. In the project, the numerical strategy is based on discontinuous Galerkin methods. These methods were introduced by Reed and Hill 55 and first studied by Lesaint and Raviart 49. The extension to the Euler system with explicit time integration was mainly led by Shu, Cockburn and their collaborators. The steps of time integration and slope limiting were similar to those of high-order ENO schemes,
whereas the specific constraints induced by the finite-element nature of the scheme were gradually addressed for scalar conservation laws 34, 32,
one-dimensional systems 31, multidimensional scalar conservation laws 30, and multidimensional systems 33. For the same system, we can
also cite the work of 38, 45, which is slightly different: the stabilization is achieved by adding a nonlinear term, and the time
integration is implicit. In contrast to continuous Galerkin methods, the discretization of diffusive operators is not straightforward. This is due to the discontinuous
approximation space, which does not fit well with the function space in which the diffusive system is well posed. A first stabilization was proposed by Arnold 22. The first application of discontinuous Galerkin methods to the Navier-Stokes equations was proposed in 27 by
means of a mixed formulation. However, this first attempt led to a non-compact computational stencil, and was later proved to be unstable.
A compactness improvement was made in 28, which was later analyzed, and proved to be stable in a more
unified framework 23. The combination with the

To conclude this section, numerical schemes based on the discontinuous Galerkin method already exist and have proved efficient for computing compressible viscous flows. Nevertheless, many aspects remain to be improved, including: efficient shock-capturing methods for supersonic flows, high-order discretization of curved boundaries, the low-Mach-number behavior of these schemes, and their combination with second-moment RANS closures. Another aspect that deserves attention is the computational cost of discontinuous Galerkin methods, due to the accurate representation of the solution, which calls for particular care in the implementation to be efficient. We believe that this cost can be balanced by the strong memory locality of the method, which is an asset for porting to emerging many-core architectures.

With the considerable and constant development of computer performance, many people thought at the turn of the 21st century that, in the short term, CFD would replace experiments, considered too costly and not flexible enough. Simply flipping through scientific journals such as the Journal of Fluid Mechanics, Combustion and Flame, Physics of Fluids or the Journal of Computational Physics, or through websites such as that of Ercoftac 7, is sufficient to convince oneself that recourse to experiments, to provide either a quantitative description of complex phenomena or reference values for the assessment of the predictive capabilities of models and simulations, is still necessary. The major change that can be noted, though, concerns the content of the interaction between experiments and CFD (understood in the broad sense). Indeed, LES or DNS assessment calls for the experimental determination of temporal and spatial turbulent scales, as well as time-resolved measurements and the determination of single- or multi-point statistical properties of the velocity field. Thus, the team methodology incorporates from the very beginning an experimental component operated in strong interaction with the modeling and simulation activities.

A crucial point for any multi-scale simulation able to locally switch (in space or time) from a coarse to a fine level of description of turbulence is the enrichment of the solution by fluctuations that are as physically meaningful as possible. Basically, this issue is an extension of the problem of the generation of realistic inlet boundary conditions in DNS or LES of subsonic turbulent flows. In that respect, the method of anisotropic linear forcing (ALF) we have developed in collaboration with EDF proved very encouraging in terms of efficiency, generality and simplicity of implementation. It therefore seems natural, on the one hand, to extend this approach to the compressible framework and to implement it in AeroSol. On the other hand, we shall concentrate (in cooperation with EDF R&D in Chatou, in the framework of the CIFRE PhD of V. Duffal) on the theoretical link between local variations of the scale of description of turbulence (e.g., a sudden variation in the size of the time filter) and the intensity of the ALF forcing, transiently applied to promote the development of the missing fluctuating scales.
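The spirit of such a linear forcing can be illustrated with a deliberately simplified sketch (our own toy construction, not the team's actual ALF formulation): a forcing term f' = C u' acting on the velocity fluctuations contributes C R + R C^T to the evolution of the resolved Reynolds stress tensor R, and choosing the gain C appropriately makes R relax toward a prescribed target.

```python
import numpy as np

# Toy sketch of anisotropic linear forcing: with C = (R_t - R) R^{-1} / (2 T),
# the forcing contribution C R + R C^T equals (R_t - R)/T, so the resolved
# Reynolds stresses R relax toward the target R_t with time scale T. This is
# only the spirit of ALF-style enrichment, not the published model itself.

def alf_gain(R, R_target, T):
    """Linear forcing gain matrix driving R toward R_target."""
    return (R_target - R) @ np.linalg.inv(R) / (2.0 * T)

def relax_reynolds_stresses(R0, R_target, T=1.0, dt=1e-3, steps=5000):
    """Integrate dR/dt = C R + R C^T with the gain above (forward Euler)."""
    R = R0.copy()
    for _ in range(steps):
        C = alf_gain(R, R_target, T)
        R = R + dt * (C @ R + R @ C.T)
    return R

R0 = np.eye(3)                    # isotropic initial fluctuation state
Rt = np.diag([2.0, 1.0, 0.5])     # anisotropic target stresses
R = relax_reynolds_stresses(R0, Rt)
print(np.round(R, 3))
```

This captures why the forcing intensity must be tied to the relaxation time T: a shorter T injects the missing fluctuating energy faster, which is precisely the coupling with local changes of the filter width that the PhD work investigates.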

In aerodynamics, and especially for subsonic computations, handling inlet and outlet boundary conditions is a difficult issue. A significant amount of work has already been performed for second-order schemes for the Navier-Stokes equations, see 54, 60 and the huge number of papers citing them. We believe that decisive improvements are necessary for higher-order schemes: indeed, the less dissipative the scheme, the worse the impact of spurious reflections. For this purpose, we will first concentrate on the linearized Navier-Stokes system and analyze the way boundary conditions are imposed in a discontinuous Galerkin framework, with an approach similar to that of 40. We will also try to extend the work of 61, which deals with the Euler equations, to the Navier-Stokes equations.

We shall develop in parallel our multi-scale turbulence modeling and the related adaptive numerical methods of AeroSol. Without prejudging which methods will prevail in the future, a first step in this direction will be to extend to the compressible framework the continuous temporal hybrid RANS/LES method we have developed up to now in a zero-Mach-number context.

In the targeted application domains, turbulence/wall interactions and heat transfer at the fluid-solid interface are physical phenomena whose numerical prediction is at the heart of the concerns of our industrial partners. For instance, for a jet engine manufacturer, properly designing the cooling of the walls of the engine combustion chamber in the presence of thermoacoustic instabilities relies on the proper identification and a thorough understanding of the major mechanisms that drive the dynamics of wall transfer. Our objective is to take advantage of our analysis, experimental and computational tools to actively participate in the improvement of the collective knowledge of this kind of transfer. The flow configurations dealt with from the beginning of the project are those of subsonic, single-phase impinging jets or jets in crossflow (JICF), with the possible presence of an interacting acoustic wave. The issue of conjugate heat transfer at the wall will also be gradually investigated. The existing switchover criteria of hybrid RANS/LES models will be tested on these flow configurations in order to determine their domain of validity. In parallel, the hydrodynamic instability modes of the JICF will be studied experimentally and theoretically (in cooperation with the SIAME laboratory) in order to determine the possibility of driving a change of instability regime (e.g., from absolute to convective), and thus to propose challenging flow conditions relevant for the setting up of a hybrid LES/DNS approach aimed at supplementing the hybrid RANS/LES approach.

The production and subsequent use of DNS (AeroSol library) and experimental (MAVERIC bench) databases dedicated to the improvement of the physical models is a significant part of our activity. In that respect, our present capability of producing in-situ experimental data for simulation validation and flow analysis is clearly a strongly differentiating mark of our project. The analysis of the produced DNS and experimental data makes the improvement of the hybrid RANS/LES approach possible. Our hybrid temporal LES (HTLES) method has a decisive advantage over other hybrid RANS/LES approaches, since it relies on a well-defined time-filtering formalism. This feature greatly facilitates the proper extraction from the databases of the various terms appearing in the transport equations obtained at the different scales involved (e.g., from RANS to LES). However, this discussion would not be complete without questioning the relevance of any simulation-experiment comparison. In other words, a central issue is the following question: are we comparing the same quantities between simulations and experiments? From an experimental point of view, the questions to be raised include the possible difference in resolution between the experiment and the simulations, the co-location of measurement points and simulation points, and the acceptable level of random error associated with the necessarily finite number of samples. In that respect, recourse to uncertainty quantification techniques will be advantageously considered.

As the flows simulated are very computationally demanding, we will maintain our efforts in the development of AeroSol in the following directions:

In high-order discontinuous Galerkin methods, the unknown vector is the concatenation of the unknowns in the cells of the mesh. An explicit residual computation is composed of three loops: an integration loop over the cells, in which computations in two different cells are independent; an integration loop over the boundary faces, in which computations depend on the data of one cell and on the boundary conditions; and an integration loop over the interior faces, in which computations depend on the data of the two neighboring cells. Each of these loops is composed of three steps: the first consists in interpolating data at the quadrature points; the second, in computing a nonlinear flux at the quadrature points (the physical flux for the cell loop, an upwind flux for interior faces, or a flux adapted to the kind of boundary condition for boundary faces); and the third, in projecting the nonlinear flux onto the degrees of freedom.

In this research direction, we propose to exploit the strong memory locality of the method (i.e., the fact that all the unknowns of a cell are stored contiguously). This formulation reduces the linear steps of the method (interpolation at the quadrature points and projection onto the degrees of freedom) to simple matrix-matrix products, which can be optimized. For the nonlinear steps, composed of the computation of the physical flux in the cells and of the numerical flux on the faces, we will try to exploit vectorization.
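The three steps of the cell loop can be sketched as batched matrix products. The following minimal NumPy illustration (our own sketch, not the AeroSol implementation; the Burgers flux and all sizes are arbitrary stand-ins) shows how storing one column of degrees of freedom per cell turns the interpolation and projection steps into single GEMMs over the whole batch of cells.

```python
import numpy as np

# Sketch of the three steps of a DG cell loop as dense matrix products over a
# batch of cells stored contiguously: (1) interpolate DoFs to quadrature
# points, (2) evaluate a nonlinear flux pointwise (Burgers f(u) = u^2/2 as a
# stand-in for the physical flux), (3) project back with quadrature weights.

n_dof, n_q, n_cells = 4, 6, 100
rng = np.random.default_rng(0)
Phi = rng.standard_normal((n_q, n_dof))    # basis functions at quad points
w = rng.random(n_q)                        # quadrature weights
U = rng.standard_normal((n_dof, n_cells))  # unknowns, one column per cell

Uq = Phi @ U                               # step 1: one GEMM for all cells
Fq = 0.5 * Uq**2                           # step 2: pointwise nonlinear flux
R = Phi.T @ (w[:, None] * Fq)              # step 3: one GEMM for projection

# Same result as a naive cell-by-cell loop, up to rounding:
R_loop = np.stack(
    [Phi.T @ (w * (0.5 * (Phi @ U[:, c]) ** 2)) for c in range(n_cells)], axis=1
)
print(np.allclose(R, R_loop))
```

The point of the batched form is that steps 1 and 3 become large, cache-friendly GEMMs, while step 2 stays an embarrassingly parallel pointwise kernel amenable to vectorization.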

For our computations in the IMPACT-AE project, we used explicit time stepping. The time step is limited by the CFL condition and, in our flows, by the acoustic wave velocity. As the Mach number of the flow we simulated in IMPACT-AE was low, the acoustic time-step restriction was much more severe than that based on the turbulent time scale, which is driven by the flow velocity. We expect to obtain better efficiency by using implicit time methods, which allow a time step driven by the flow velocity.
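The severity of this restriction is easy to quantify with a generic estimate (illustrative only, not IMPACT-AE data): the explicit acoustic step scales with 1/(|u| + c) while the physically relevant convective step scales with 1/|u|, so their ratio behaves like M/(1 + M) and collapses as the Mach number M goes to zero.

```python
# Generic estimate of the explicit time-step penalty at low Mach number:
# dt_acoustic / dt_convective = (h/(|u|+c)) / (h/|u|) = M / (1 + M),
# with M = |u|/c. The mesh size h cancels out of the ratio.

def dt_ratio(mach: float) -> float:
    """Ratio of the acoustic CFL time step to the convective time step."""
    return mach / (1.0 + mach)

for M in (0.5, 0.1, 0.01):
    print(f"M = {M}: explicit step is {dt_ratio(M):.4f} of the convective step")
```

At M = 0.01 the explicit scheme is thus forced to take roughly a hundred times more steps than the convective physics requires, which is the efficiency gap an implicit method aims to close.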

Using implicit time stepping for compressible flows is particularly difficult, because the system is fully nonlinear, so that the nonlinear solve theoretically requires building the Jacobian many times. Our experience with implicit methods is that building a Jacobian is very costly, especially in three dimensions and in a high-order framework, because optimizing the memory usage is very difficult. That is why we propose to use a Jacobian-free implementation, based on 47. This method consists in solving the linear steps of the Newton method with a Krylov method, which only requires Jacobian-vector products. The key idea of this method is to replace this product by an approximation based on a difference of residuals, thereby avoiding any Jacobian computation. Nevertheless, Krylov methods are known to converge slowly, especially for the compressible system at low Mach number, because the system is ill-conditioned. As a preconditioner, we propose to use an aggregation-based multigrid method, which consists in applying the same numerical method on coarser meshes obtained by aggregation of the cells of the initial mesh. This choice is driven by the fact that multigrid methods are the only ones that scale linearly 63, 64 with the number of unknowns in terms of number of operations, and that this preconditioning does not require any Jacobian computation.
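The Jacobian-free idea can be sketched on a tiny nonlinear system (our simplified illustration of the approach of 47, not the AeroSol solver; the residual F and all tolerances are arbitrary): the Jacobian-vector product J v needed by the Krylov method is replaced by the finite difference (F(u + eps v) - F(u)) / eps, so no Jacobian is ever assembled. Here an unpreconditioned SciPy GMRES plays the role of the Krylov solver; in practice the multigrid preconditioner discussed below would be added.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    """A small nonlinear residual standing in for the discretized equations."""
    return np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**3 - 2.0])

def jfnk_solve(u0, tol=1e-9, eps=1e-7, max_newton=20):
    """Newton iterations with Jacobian-free GMRES linear solves."""
    u = u0.copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        def jv(v):  # Jacobian-vector product by residual differencing
            return (F(u + eps * v) - F(u)) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r, atol=1e-12)
        u = u + du
    return u

u = jfnk_solve(np.array([1.0, 1.0]))
print("solution:", u, "residual norm:", np.linalg.norm(F(u)))
```

The only price paid is one extra residual evaluation per Krylov iteration, which is exactly the trade the method makes: residual evaluations are cheap and memory-local, whereas a stored high-order 3D Jacobian is not.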

Beyond the technical aspects of the multigrid approach, which is challenging to implement, we are also interested in the design of an efficient aggregation. This often means performing an aggregation based on criteria such as the anisotropy of the problem 53. To this aim, we propose to extend the scalar analysis of 65 to a linearized version of the Euler and Navier-Stokes equations, and to try to deduce an optimal strategy for anisotropic aggregation based on the local characteristics of the flow. Note that discontinuous Galerkin methods are particularly well suited to h-p aggregation, as such methods can be defined on cells of any shape 25.
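The aggregation mechanism itself can be illustrated with a textbook one-dimensional example (a generic construction, not the planned anisotropic strategy): cells are grouped in pairs, the prolongation P is piecewise constant over each aggregate, and the coarse operator is the Galerkin product P^T A P, so no rediscretization, and in particular no Jacobian assembly, is needed on the coarse level.

```python
import numpy as np

# Toy aggregation-based coarsening for a 1D Poisson matrix: aggregate cells
# {0,1}, {2,3}, ..., prolong by piecewise constants, and build the coarse
# operator as the Galerkin product P^T A P.

n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson matrix

P = np.zeros((n, n // 2))
for i in range(n):
    P[i, i // 2] = 1.0        # P[i, j] = 1 if cell i belongs to aggregate j

Ac = P.T @ A @ P              # Galerkin coarse operator, no rediscretization
print(Ac)                     # again tridiagonal (-1, 2, -1), now of size 4
```

An anisotropic strategy would replace the fixed pairing above by aggregates aligned with the strong-coupling direction of the flow, which is precisely the design question raised in the text.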

Until the beginning of the 2000s, computing capacities were improved by interconnecting an increasing number of more and more powerful computing nodes. The computing capacity of each node was increased by improving the clock speed, the number of cores per processor, the introduction of a separate and dedicated memory bus per processor, but also instruction-level parallelism and the size of the memory cache. Even if the number of transistors has kept growing, clock speed improvements have flattened since the mid-2000s 59. Already in 2003, 44 pointed out the difficulties of efficiently using the biggest clusters: "While these super-clusters have theoretical peak performance in the Teraflops range, sustained performance with real applications is far from the peak. Salinas, one of the 2002 Gordon Bell Awards was able to sustain 1.16 Tflops on ASCI White (less than 10% of peak)." From the current multi-core architectures, the trend is now to move to many-core accelerators. The idea behind many-core is to use an accelerator composed of a large number of relatively slow and simplified cores for executing the simplest parts of the algorithm. The larger the part of the code executed on the accelerator, the faster the code may become. Therefore, it is necessary to work on the heterogeneous aspects of the computations. These heterogeneities are intrinsic to our computations and have two sources. The first one is the use of hybrid meshes, which are necessary for using a locally structured mesh in a boundary layer. As the different cell shapes (pyramids, hexahedra, prisms and tetrahedra) have neither the same number of degrees of freedom nor the same number of quadrature points, the execution time on one face or one cell depends on its shape. The second source of heterogeneity is the boundary conditions: depending on their kind, user-defined boundary values might be needed, which induces a different computational cost.
Heterogeneities are typically what may decrease parallel efficiency if the workload is not well balanced between the cores. Note that heterogeneities were not dealt with in what we consider one of the most advanced works on discontinuous Galerkin methods on GPU 46, where only straight simplicial cell shapes were addressed. To manage our heterogeneous computations on heterogeneous architectures at best, we propose to use the StarPU execution runtime 24. For this, the discontinuous Galerkin algorithm will be reformulated in terms of a graph of tasks. The previous tasks on memory management will be useful for that. The linear steps of the discontinuous Galerkin method also require memory transfers, and one issue consists in determining the optimal task granularity for this step, i.e., the number of cell or face integrations to be sent in parallel to the accelerator. On top of that, the question of which device is the most appropriate for each kind of task remains to be discussed.
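The first source of heterogeneity is easy to quantify with standard degrees-of-freedom counts per cell shape (generic formulas for nodal bases, not AeroSol-specific numbers): for the same polynomial order p, the work per cell in a hybrid mesh varies by a factor of three or more between a tetrahedron and a hexahedron.

```python
# Standard DoF counts per cell shape for polynomial order p: total-degree
# basis on the tetrahedron, tensor-product basis on the hexahedron, a
# triangle-times-segment product on the prism, and square node layers summed
# over on the pyramid. This illustrates the per-cell workload imbalance of
# hybrid meshes.

def dofs(shape: str, p: int) -> int:
    if shape == "tetrahedron":
        return (p + 1) * (p + 2) * (p + 3) // 6
    if shape == "hexahedron":
        return (p + 1) ** 3
    if shape == "prism":
        return (p + 1) * (p + 1) * (p + 2) // 2
    if shape == "pyramid":
        return (p + 1) * (p + 2) * (2 * p + 3) // 6
    raise ValueError(f"unknown cell shape: {shape}")

p = 3
for s in ("tetrahedron", "pyramid", "prism", "hexahedron"):
    print(f"{s:12s}: {dofs(s, p):3d} DoFs at order p = {p}")
```

At p = 3 the counts range from 20 (tetrahedron) to 64 (hexahedron), so a runtime that schedules cells in homogeneous batches regardless of shape would leave some workers idle, which motivates the task-based reformulation with StarPU.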

Last, we point out that, for multigrid, the combination of shared-memory and distributed-memory parallel programming models is better suited than a purely distributed-memory model, because in a hybrid version a wider part of the mesh shares the same memory, therefore making a coarser aggregation possible.

These aspects will benefit from a particularly stimulating environment in the INRIA Bordeaux Sud Ouest center around high-performance computing, which is one of the strategic axes of the center.

We will gradually insert the models developed in research direction 3.2.2 into the AeroSol library, in which we develop methods for the DNS of compressible turbulent flows at low Mach number. Indeed, due to its formalism based on temporal filtering, the HTLES approach offers a consistent theoretical framework characterized by a continuous transition from RANS to DNS, even for complex flow configurations (e.g., without directions of spatial homogeneity). As for the discontinuous Galerkin method presently available in AeroSol, it is the versatile method best suited to meet the requirements of accuracy, stability and cost related to the local (varying) level of resolution of the turbulent flow at hand, regardless of its complexity. The first step in this direction was taken in 2017 during the internship of Axelle Perraud, who implemented a turbulence model (

To supplement, whenever necessary, the test flow configurations of MAVERIC, and apart from configurations that could emerge in the course of the project, the following configurations, for which experimental data, simulation data or both have been published, will be used whenever relevant for benchmarking the quality of our agile computations:

Cagire is presently involved in studies mainly related to:

In 2020, the following points were addressed in AeroSol:

* Update and documentation of the test cases, including the associated wiki.

* Development of continuous-Galerkin-based models for Advection, Diffusion and AdvectionDiffusion.

* Implementation of eddy-viscosity turbulence models (k-epsilon, Spalart-Allmaras).

* Continuation of the interfacing with PARMMG for parallel mesh adaptation.

* Development of an HLL flux for finite volume and DG discretizations

* Development of a filtering method for low Mach number flows

* Development of symmetric boundary conditions for Advection and Diffusion problems.

This work aims at developing an efficient and versatile tool to simulate non-reacting and reacting flow fields in a zero-Mach-number framework. The formulation and implementation of the unsteady artificial compressibility approach in a rather standard finite volume framework was carried out in two steps. In the first one, only divergence-free velocity fields were considered, hence imposing quite a severe restriction on the class of reacting flows that could be dealt with. The preliminary results obtained during this first phase were presented in 13. During the second step, the approach was generalized so as to also deal with non-divergence-free velocity fields, opening the way to a much broader field of application. The resulting time-accurate scheme was tested on five different cases: Stokes' second problem, the unsteady wake flow past a circular cylinder, the flow around a heated cylinder placed in a square enclosure, a steady Tsuji diffusion flame, and a flickering buoyant diffusion flame. The flow over the oscillating plate and the flow past a circular cylinder were chosen to demonstrate the basic properties of the implemented time-accurate approach, such as its ability to describe unsteady non-reactive flows and its convergence rate. An analysis of the influence of the artificial compressibility factor on the convergence rate was carried out. For the flow in the enclosure hosting a heated cylinder, a configuration featuring quite high density gradients, the predicted flow topology as well as the heat flux distribution along the enclosure walls matched very well those reported in the literature. Then, a comparison with experimental and numerical results was performed for a steady diffusion flame case.
The simulated temperature profile and flame shape were in good agreement with those reported in the literature, except in the region around the maximum temperature, where the reaction layer was not so well described, most likely because of the infinite-rate chemistry assumption considered here. The fifth case investigated the capability of the present approach to correctly predict the large-scale unsteadiness of a flickering diffusion flame. An excellent agreement with the experimental results was observed in terms of the fundamental frequency of the flame flickering. Qualitatively speaking, the predicted flame topology appeared to be in line with that observed experimentally. A journal paper gathering these results has been submitted.

Our approach to modeling the wall/turbulence interaction, based on elliptic blending, was successfully applied to flows with standard thermal boundary conditions at the walls 10. However, conjugate heat transfer, which couples the fluid and solid domains, is particularly challenging for turbulence models. We have developed an innovative model, the Elliptic Blending Differential Flux Model, to account for the influence of various wall thermal boundary conditions on the turbulent heat flux and the temperature variance. An assessment of this new model in conjugate heat transfer was performed for several values of the fluid-solid thermal diffusivity and conductivity ratios. Careful attention was paid to the discontinuity of the dissipation rate associated with the temperature variance at the fluid-solid interface. The analysis is supported by successful comparisons with direct numerical simulations. This work led to the publication of an article in the Journal of Fluid Mechanics 11 and constitutes the main part of the thesis of Gaetan Mangeon, defended in 2020 19.

The EB-RSM, developed by the team, is a sophisticated Reynolds-stress model sensitized to the influence of the wall, implemented in several open-source, industrial and commercial codes (OpenFoam, Code

Eddy-viscosity turbulence models have been sensitized to the effects of buoyancy, in order to improve predictions in natural convection flows. The approach linearly extends the constitutive relations for the Reynolds stress and the turbulent heat flux, in order to account for the anisotropic influence of buoyancy. The novelty of this work lies in the application of the buoyancy extension to three very different eddy-viscosity models, which leads to encouraging results for the highly challenging cases of the differentially heated vertical channel and the differentially heated cavity 42, 43. These models were developed during the PhD of Saad Jameel, defended in December 2020 18.

In the framework of the ANR MONACO project (post-doc of Saad Jameel), in collaboration with the PSA Group, these models are implemented in the commercial code ANSYS Fluent and applied to real underhood configurations of vehicles in situations driven by natural convection.

The HTLES (hybrid temporal LES) approach, developed by the team, has been improved by introducing shielding functions and an internal consistency constraint to enforce the RANS behavior in the near-wall regions 51. The influence of the underlying closure model was studied by applying HTLES to two RANS models: the k-

In the framework of the ANR project MONACO, the HTLES approach is extended to flows influenced by buoyancy. During the PhD thesis of Puneeth Bikkanahally, the model has been applied to cavity flows, and in particular the strongly stratified cavity of 57. Due to the co-existence of a turbulent, buoyant boundary layer and a laminar region in the center of the cavity, the shielding function based on the Kolmogorov scale developed during the thesis of Vladimir Duffal 16 deteriorates the results. The ongoing work is dedicated to the development of an alternative shielding function based on the resolution of an elliptic equation 29.

Jointly with Alireza Mazaheri (NASA Langley) and Chi-Wang Shu, we have developed a compact WENO stabilization that moreover ensures the positivity of physical quantities. This work was published in 52.
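To illustrate the positivity-preserving ingredient, the snippet below sketches the classical Zhang–Shu scaling idea (an illustration of the general technique, not the exact limiter of 52): reconstructed nodal values inside a cell are squeezed toward the cell average until the minimum is non-negative, while the cell average, and hence conservation, is left untouched.

```python
def positivity_scaling(node_vals, cell_avg, eps=1e-13):
    """Scale nodal values of a cell-local reconstruction toward the
    (positive) cell average so that min(node_vals) >= eps.

    The map v -> cell_avg + theta * (v - cell_avg) is affine about
    cell_avg, so the cell average itself is preserved exactly.
    """
    m = min(node_vals)
    if m >= eps:
        # Reconstruction already positive: nothing to do.
        return list(node_vals)
    # Largest theta in [0, 1] guaranteeing the minimum reaches eps.
    theta = (cell_avg - eps) / (cell_avg - m)
    return [cell_avg + theta * (v - cell_avg) for v in node_vals]


# Example: a reconstruction that undershoots below zero.
vals = [-0.2, 0.5, 1.0]
avg = sum(vals) / len(vals)          # positive cell average
limited = positivity_scaling(vals, avg)
```

After limiting, all nodal values are non-negative and the mean of the nodal values (the cell average in this toy setting) is unchanged, which is the property that makes the limiter conservative.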

National University of Córdoba (UNC), Córdoba, Argentina: ECOS-Sud A17A07 project

During 2020, Mariovane Sabino Donini (Instituto Nacional de Pesquisas Espaciais (INPE), Brazil) spent the last eight months of his 12-month stay in the team. He worked with P. Bruel on the development of a numerical methodology to simulate inert and reacting gaseous flows featuring large density variations. This is the first time that the simulation of reacting flows (diffusion flames in this case) has been considered in the team. As a first step, a Boussinesq-like approximation was adopted to deal with these reacting flows, and a first version of the program was developed on that basis. As expected for reacting flows, the validation test cases clearly evidenced the limitations of such an approximation as soon as the Froude number is small enough to induce strong buoyancy-related flame unsteadiness 13. The methodology has therefore been adapted to remove this limitation, and the first results obtained with the new version of the code are quite promising.
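The limitation in question is the textbook validity constraint of the Boussinesq approximation, in which density variations are retained only in the buoyancy term of the momentum equation:

```latex
\rho \;\simeq\; \rho_0\left[\,1 - \beta\,(T - T_0)\,\right],
\qquad \text{valid only if}\quad \beta\,(T - T_0) \ll 1,
```

a condition strongly violated in diffusion flames, where the temperature, and hence the density, varies by a factor of several across the flame front. This is why the methodology had to evolve toward a formulation accounting for large density variations.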

We are a member of the CNRS GIS Success (Groupement d'Intérêt Scientifique) organised around two of the major CFD codes employed by the Safran group, namely AVBP and Yales2. This year, the capability of the compressible module of Yales2 at low Mach number has been evaluated.

The ambition of the MONACO_2025 project, coordinated by Rémi Manceau, is to join the efforts made in two different industrial sectors in order to tackle the industrial simulation of transient, turbulent flows affected by buoyancy effects. It brings together two academic partners, the project-team Cagire hosted by the University of Pau and the Pprime institute of CNRS/ENSMA/University of Poitiers (PPRIME), and the R&D departments of two industrial partners, the PSA group and the EDF group, which are major players in the automobile and energy production sectors, respectively.

SEIGLE stands for "Simulation Expérimentation pour l'Interaction de Gouttes Liquides avec un Ecoulement fortement compressible" (simulation and experimentation for the interaction of liquid drops with a highly compressible flow). It is a 3-year program which started in October 2017, funded by Région Nouvelle-Aquitaine, ISAE-ENSMA, CESTA and Inria. The interest in understanding aerodynamic mechanisms and liquid drop atomization stems from the fields of application in which they play a key role, especially the new detonation-based propulsion technologies in aerospace, as well as the security field. The SEIGLE project is articulated around a triptych of experimentation, modeling and simulation. An experimental database will be constituted, relying on a newly installed facility (Pprime), a supersonic/hypersonic gust wind tunnel fed by a high-pressure gaseous detonation tube. This will make it possible to test modeling approaches (Pprime/CEA) and numerical simulations (Inria/CEA) with high-order schemes for multiphase compressible flows, suitable for handling shock waves in two-phase media.

HPC scalable ecosystem is a 3-year program funded by Région Nouvelle-Aquitaine (call 2018), Airbus, CEA-CESTA, the University of Bordeaux, INRA, ISAE-ENSMA and Inria. A two-year post-doc will be hired in 2019 or 2020. The objective is to extend the prototype developed in 37 to high-order (discontinuous Galerkin) discretizations and to non-reactive diffusive flows in 3D. The same basis will be developed in collaboration with Pprime for WENO-based methods for reactive flows.

Team members participated in the following thesis juries:

Team members participated in the following HDR jury: