CARDAMOM is a joint team of INRIA Bordeaux - Sud-Ouest (University of Bordeaux and Bordeaux Inst. Nat. Polytechnique) and IMB (Institut de Mathématiques
de Bordeaux – CNRS UMR 5251, University of Bordeaux).
The CARDAMOM project-team was created in 2015 (website).

The CARDAMOM project aims at providing a robust modelling strategy for
engineering applications involving complex flows with moving fronts.
The term front here denotes either an actual material boundary (e.g. multiple phases),
a physical discontinuity (e.g. shock waves),
or a transition layer between regions with completely different dominant flow behaviour (e.g. breaking waves).
These fronts introduce a multi-scale behaviour. Resolving
all of these scales is however not feasible in certification and optimization cycles. Moreover, the full scale behaviour is not
necessary in many engineering applications, while in others it is enough to
model the average effect of small scales on large ones (closure models).
We plan to develop application-tailored models
obtained by a tight combination of asymptotic PDE (Partial Differential Equations) modelling,
adaptive high order PDE discretizations, and a quantitative certification step assessing
the sensitivity of outputs to both model components
(equations, numerical methods, etc) and random variations of the data.
The goal is to improve operational models used in parametric analysis and design cycles,
by increasing both accuracy and confidence in the results. This is achieved by combining
improved physical and numerical modelling, and assessment of output uncertainties.
This requires a research program mixing PDE analysis,
high order discretizations, Uncertainty Quantification (UQ), and to some extent optimization and inverse modelling. These skills also need to be combined with specific engineering know-how to tackle specific applications.
Some of these scientific themes and activities were already part of the work of the BACCHUS and MC teams.
CARDAMOM harmonizes and gives new directions to this know-how.

The objective of CARDAMOM is to provide improved analysis and design tools for engineering applications involving fluid flows with moving fronts.
In our applications a front is either an actual material interface, a boundary of the domain, or a well identified transition region in which
the flow undergoes a change in its dominant macroscopic character. One example is the certification of wing de-icing and anti-icing systems, involving the prediction of ice formation and detachment,
and of ice debris trajectories, to evaluate the risk of downstream impact on aircraft components 114, 58.
Another application, relevant for space reentry, is the study of transitional regimes in high altitude gas dynamics in which extremely thin
layers appear in the flow which cannot be analysed with classical continuous models (Navier-Stokes equations) used by engineers 65, 87. A classical example relevant in
coastal engineering is free surface flows. The free surface itself is a material interface, but other fronts can also be identified, e.g. the flooding line (wet/dry transition) or the transition between propagating and breaking waves,
across which the relevance of dissipation and vorticity changes dramatically 66. For wave energies, as well as
for aquifers, the transition between free surface and congested flows (below a solid surface) is another example 78.
Other similar examples exist in geophysics, astrophysics, aeronautic and aerospace engineering, civil engineering, energy engineering, material engineering, etc.

In all cases, computationally affordable, fast, and accurate numerical modelling is essential to allow reliable predictions in early stages of the design/analysis 117. Such computational models are also needed for simulations over very long times, especially if changes in many variable input parameters need to be investigated.

To achieve this goal one needs to have a physically relevant Partial Differential Equation (PDE) model, which can be treated numerically efficiently and accurately,
possibly using adaptive numerical techniques to minimize the computational effort.
To this end, the dynamics of some of the fronts can be modelled by appropriate asymptotic/homogeneised PDEs, while other interfaces are explicitly described.
Even in the best of circumstances, in all practical applications the reliability of the numerical predictions is limited by the intrinsic uncertainty on the operational conditions
(e.g. boundary/initial conditions, geometry, etc.). To this aleatory uncertainty we must add
the structural (epistemic) uncertainty related to the use of approximate PDE models.
Besides the limited validity of the derivation
assumptions, these models are often calibrated/validated with experimental data which is itself subject to errors and post-processing procedures (filtering, averaging, etc.) 73, 106.
This is even worse in complex flows for which measurements are difficult or impossible to plan or perform due to the inherent exceptional character of the phenomenon (e.g. tsunami events), or technical issues and
danger (e.g. high temperature reentry flows, or combustion), or
impracticality due to the time scales involved (e.g. study of some
new materials' micro-/meso- structure 74).
So the challenge is to construct
computationally affordable models that are robust under the variability of input parameters,
whether this variability comes from uncertainties, from certification/optimization cycles, or from modelling choices.

To face this challenge and provide new tools to accurately and robustly model and certify engineering devices based on fluid flows with moving fronts, we propose a program mixing scientific research in asymptotic PDE analysis, high order adaptive PDE discretizations and uncertainty quantification.

In a standard approach a certification study can be described as a modelling exercise involving two black boxes. The first box is the computational model itself, composed of: PDE system, mesh generation/adaptation, and discretization of the PDE (numerical scheme). The second box is the main robust certification loop, which contains separate boxes involving the evaluation of the physical model, the post-processing of the output, and the exploration of the spaces of physical and stochastic parameters (uncertainties). Many interactions exist in this process. Exploiting these interactions could allow us to tap as much as possible into the potential of high order methods 91, such as h-, p-, r- adaptation in the physical model w.r.t. some parametric quantity/sensitivity not necessarily associated to the solution's smoothness.

Our objective is to provide fundamental advances bringing modern high order numerical techniques and multi-fidelity certification and optimization algorithms closer to the operational level, possibly using some clever paradigm different from the two-black-box approach above, and involving tight interactions between all the parts at play: PDE modelling, numerical discretization techniques, uncertainty quantification methods, mesh generation/adaptation methods, physical model validation/calibration, etc. The initial composition of the team provided a unique combination of skills covering all the necessary topics to explore such an avenue. The questions that need to be tackled can be organized in the following main axes/scientific questions:

These themes are discussed in the following sections together with some challenges specific to the engineering applications considered:

In many of the applications we consider, intermediate fidelity models can be derived using an asymptotic expansion of the relevant scale-resolving PDEs, possibly combined with some form of homogenization or averaging. The resulting systems of PDEs are often very complex. One of the main challenges is to characterize the underlying structure of such systems: possible conservation laws embedded; additional constraints related to consistency with particular physical states (exact solutions), or to stability (entropy/energy dissipation); etc. A question of paramount importance in practical applications is also the formulation of the boundary conditions. The understanding of these properties is necessary for any new model. Moreover, different forms of the PDE may be better suited to enforce some of these properties at the numerical level.

Another issue when working with asymptotic approximations is that of closure. Indeed, important physical phenomena may be unaccounted for either due to some initial modelling assumptions, or because they involve scales much smaller than those modelled. A typical example is wave breaking in some depth averaged models. Another, relevant for our work, is the appropriate prediction of heat fluxes in turbulent flows.

So our main activities on this axis can be classified according to three main questions:

The efficient and robust discretization of complex PDEs is a classical and widespread research subject. The notion of efficiency is in general related to the combination of high order of accuracy and of some adaptation strategy based on an appropriate model of the error 108, 116.

This strategy is of course also part of our work. However, we are convinced that a more effective path
to efficient discretizations consists in exploiting the knowledge of the PDE structure, embedding it as much as possible in the discrete equations. This is related to the notion of enhanced consistency, which
goes in the direction of what is today often referred to as
constraint- or property-preserving discretizations. For the PDE systems of interest to us, the
properties of paramount importance to control are for example: the balance between
flux divergence and forcing terms (the so-called well-balanced or C-property 61, 105) and the preservation of some
specific steady states; the correct reproduction of the dispersion
relation of the system, especially but not only for dispersive wave propagation;
the preservation of some algebraic constraints, typically the non-negativity of some thermodynamic quantities;
the respect of a discrete entropy/energy equality or inequality (for stability); the strong consistency with some asymptotic limit of the PDE (AP property); etc.
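As a minimal illustration of one such constraint, the sketch below (a toy example of ours, not CARDAMOM code) implements a first-order finite volume scheme for the 1D shallow water equations with the classical hydrostatic reconstruction of Audusse et al., which makes the scheme exactly well-balanced for lake-at-rest states (the C-property mentioned above):

```python
import numpy as np

g = 9.81  # gravity

def rusanov(hL, huL, hR, huR):
    """Rusanov (local Lax-Friedrichs) flux for the 1D shallow water system."""
    uL = huL / max(hL, 1e-12)
    uR = huR / max(hR, 1e-12)
    a = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
    fL = np.array([huL, huL * uL + 0.5 * g * hL**2])
    fR = np.array([huR, huR * uR + 0.5 * g * hR**2])
    return 0.5 * (fL + fR) - 0.5 * a * np.array([hR - hL, huR - huL])

def step(h, hu, b, dx, dt):
    """One forward Euler step on a periodic grid with bathymetry b."""
    n = len(h)
    dh, dhu = np.zeros(n), np.zeros(n)
    for i in range(n):
        j = (i + 1) % n
        bstar = max(b[i], b[j])                  # interface bathymetry
        hLs = max(0.0, h[i] + b[i] - bstar)      # hydrostatic reconstruction
        hRs = max(0.0, h[j] + b[j] - bstar)
        uL = hu[i] / max(h[i], 1e-12)
        uR = hu[j] / max(h[j], 1e-12)
        F = rusanov(hLs, hLs * uL, hRs, hRs * uR)
        # source corrections restoring consistency with the bed slope
        dh[i] -= F[0]
        dhu[i] -= F[1] + 0.5 * g * (h[i]**2 - hLs**2)
        dh[j] += F[0]
        dhu[j] += F[1] + 0.5 * g * (h[j]**2 - hRs**2)
    return h + dt / dx * dh, hu + dt / dx * dhu
```

Starting from h + b = const and hu = 0, the discrete state remains at rest up to round-off, which is exactly the balance between flux divergence and forcing terms discussed above.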

A fundamental issue is the efficient and accurate treatment of boundary and interface conditions. The idea is to have an approach which tolerates non-conformal meshes, is genuinely high order, is compatible with adaptation, and avoids conformal meshing of the boundary/discontinuity. Techniques allowing the control of the geometrical error due to non-conformity are required. For discontinuities, this also requires an ad-hoc treatment of the jump conditions. For wall boundaries, initial work using penalization has been done in CARDAMOM in the past 55, 98. On Cartesian meshes several techniques exist to control the consistency order based on extrapolation/interpolation, or adaptive methods (cf. e.g. 109, 96, 56, 71, 80, 115 and references therein). For discontinuities, we can learn from fitting techniques 62, and from past work by Prof. Glimm and co-workers 69.

For efficiency, mesh adaptation plays a major role. Mesh adaptation, based either on deformation (r-adaptation) or on remeshing (h-adaptation), can be designed based on a representative error model. For unsteady flows, the capability to use moving meshes becomes necessary, and geometrical conservation (GCL) needs to be added to the list of constraints to be accounted for 110, 92. In particular, one technique providing meshes of optimal quality moving together with the unsteady flow, a reduction of the errors due to convective terms, the GCL respected up to machine precision, and high order of accuracy, is offered by the direct Arbitrary-Lagrangian-Eulerian (ALE) methods on moving Voronoi meshes with topology changes 82, 81, which will be further investigated.

As already mentioned, our focus is on four main classes of problems:

There are several common aspects. One is the use of asymptotic vertically averaged approximations to produce efficient application-tailored PDE models. Another common point is the construction of possibly high order constraint/property preserving numerical approximations. This entails the characterization of the underlying PDE models with a set of embedded properties, which go from classical conservation, to exact solutions (steady or moving), to the preservation of differential operators, to the thermodynamic admissibility (non-negativity, preservation of physical bounds). For all applications, the investigation of the parameter dependence of the results will take several forms, from sensitivity analyses, to classical parametric studies to understand physical processes, to approximation in parameter space in the framework of hybrid PDE-meta-/reduced-order models.

Impact of large ice debris on downstream aerodynamic surfaces and ingestion by aft mounted engines must be considered during the aircraft certification process. It is typically the result of ice accumulation on unprotected surfaces, ice accretions downstream of ice protected areas, or ice growth on surfaces due to delayed activation of ice protection systems (IPS) or IPS failure. This raises the need for accurate ice trajectory simulation tools to support pre-design, design and certification phases while improving cost efficiency. Present ice trajectory simulation tools have limited capabilities due to the lack of appropriate experimental aerodynamic force and moment data for ice fragments and the large number of variables that can affect the trajectories of ice particles in the aircraft flow field, like the shape, size, mass, initial velocity, shedding location, etc. There are generally two types of models used to track shed ice pieces. The first type assumes that ice pieces do not significantly affect the flow. The second type intends to take into account ice pieces interacting with the flow. We are concerned with the second type of models, involving fully coupled time-accurate aerodynamic and flight mechanics simulations, and thus requiring the use of highly efficient adaptive tools, possibly allowing to easily track moving objects in the flow. We will in particular pursue and enhance our initial work based on adaptive immersed boundary capturing of moving ice debris, whose movements are computed using basic mechanical laws.

In 59 it has been proposed to model ice shedding trajectories by an innovative paradigm that is based on CArtesian grids, PEnalization and LEvel Sets (LESCAPE code). Our objective is to use the potential of high order unstructured mesh adaptation and immersed boundary techniques to provide a geometrically flexible extension of this idea. These activities will be linked to the development of efficient mesh adaptation and time stepping techniques for time dependent flows, and their coupling with the immersed boundary methods we started developing in the FP7 EU project STORM 55, 98. In these methods we compensate for the error at solid walls introduced by the penalization by using anisotropic mesh adaptation 76, 93, 94. From the numerical point of view one of the major challenges is to guarantee efficiency and accuracy of the time stepping in presence of highly stretched adaptive and moving meshes. Semi-implicit, locally implicit, multi-level, and split discretizations will be explored to this end.
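The penalization idea can be illustrated on a deliberately simple 1D model problem (a sketch of ours, not the STORM/LESCAPE implementation): a Brinkman-type penalty term forces the solution towards the wall value inside a solid mask, and the stiff term is treated implicitly so the penalty parameter does not constrain the time step:

```python
import numpy as np

def penalized_heat_step(u, chi, ub, dx, dt, eta=1e-8):
    """One step of u_t = u_xx - (chi/eta)(u - ub) on a periodic grid.
    chi is the solid mask (1 inside the obstacle); as eta -> 0 the
    penalization enforces u = ub there. The stiff penalty term is
    treated implicitly, so dt is only limited by the diffusion CFL."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    un = u + dt * lap                      # explicit diffusion
    return (un + (dt / eta) * chi * ub) / (1.0 + (dt / eta) * chi)
```

The error introduced at the penalized wall depends on the penalty parameter and on the local mesh size, which is precisely what the anisotropic mesh adaptation mentioned above is used to compensate.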

Besides the numerical aspects, we will deal with modelling challenges. One source of complexity is the initial conditions, which are essential to compute ice shedding trajectories. It is thus extremely important to understand the mechanisms of ice release. With the development of next generations of engines and aircraft, there is a crucial need to better assess and predict icing aspects early in design phases and identify breakthrough technologies for ice protection systems compatible with future architectures. When a thermal ice protection system is activated, it melts a part of the ice in contact with the surface, creating a liquid water film and therefore lowering the ability of the ice block to adhere to the surface. The aerodynamic forces are then able to detach the ice block from the surface 60. In order to assess the performance of such a system, it is essential to understand the mechanisms by which the aerodynamic forces manage to detach the ice. The current state of the art in icing codes is an empirical criterion, which is unsatisfactory. Following the early work of 64, 58 we will develop appropriate asymptotic PDE approximations to describe the water runoff on the wing surface, also accounting for phase change, thus allowing to describe the ice formation and possibly rupture and detachment. These models will constitute closures for aerodynamics/RANS and URANS simulations in the form of PDE wall models, or modified boundary conditions.

In addition to this, several sources of uncertainty are associated to the ice geometry, size, orientation and the shedding location. In very few papers 101, sensitivity analyses based on the Monte Carlo method have been conducted to take into account the uncertainties of the initial conditions and the chaotic nature of the ice particle motion. We aim to propose a systematic approach to handle every source of uncertainty in an efficient way, relying on state-of-the-art techniques developed in the team. In particular, we will perform an uncertainty propagation of some uncertainties on the initial conditions (position, orientation, velocity, ...) through a low-fidelity model in order to get statistics of a multitude of particle tracks. This study will be done in collaboration with ETS (Ecole de Technologies Supérieure, Canada). The long-term objective is to produce footprint maps and to analyse the sensitivity of the models developed.
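The kind of low-fidelity Monte Carlo propagation envisaged can be sketched as follows (a toy point-mass trajectory model with invented parameter values, purely illustrative of the workflow, not the actual debris model):

```python
import numpy as np

rng = np.random.default_rng(0)

def landing_distance(v0, angle, drag, g=9.81, dt=1e-3):
    """Toy low-fidelity model: point mass with quadratic drag,
    integrated until it returns to its release height."""
    x, y = 0.0, 0.0
    vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
    while True:
        s = np.hypot(vx, vy)
        vx -= dt * drag * s * vx
        vy -= dt * (g + drag * s * vy)
        x += dt * vx
        y += dt * vy
        if y < 0.0:
            return x

# propagate uncertain release speed and orientation through the model
samples = np.array([landing_distance(rng.normal(30.0, 3.0),
                                     rng.normal(0.5, 0.05), 0.02)
                    for _ in range(200)])
mean, std = samples.mean(), samples.std()  # footprint statistics
```

Replacing the toy model with the actual low-fidelity trajectory code, and the plain Monte Carlo loop with more efficient estimators, would yield the footprint statistics mentioned above.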

Wave energy conversion is an emerging sector in energy engineering. The design of new and efficient Wave Energy Converters (WECs) is thus a crucial activity.
As pointed out by Weber 117, it is more economical to raise the technology performance level (TPL) of a wave energy converter concept at low technology readiness level (TRL).
Such a development path puts a greater demand on the numerical methods used.

Our previous work 78, 63 has shown the potential of depth-averaged models for simulating
wave energy devices. The approach followed so far relies on an explicit coupling of the different domains involving the flow under the structure and the free surface region. This approach has the advantage of requiring only efficient solvers for well-known systems of equations (compressible and incompressible flow). However, the transmission condition between these two regimes is not always well understood, depending on the underlying PDE models. Moreover, several sources of numerical instability exist because of the different nature of the regions involved (compressible/incompressible). A different approach is proposed in 86, 85, and will be pursued in the coming years. The idea is to solve a unique model in the whole computational domain, with the effect of the structure being accounted for by means of an appropriate pressure variable playing the role of a Lagrange multiplier.
Our numerical developments will be performed within the parallel platform GeoFun, based on the Aerosol library. In order to simulate the dynamics of the floating structures, we will consider the coupling with the open source code tChrono, an external code specialized in the resolution of rigid body dynamics. The coupling is still under development.
In parallel, we will add closures for other complex physical effects, e.g. the modelling of air pockets trapped under the structures. Several industrial concepts (SeaTurns, Hace, ...) are based on chambers compressing the air inside through the movement of the water surface. This strategy has the advantage of taking the turbines used for energy production out of the water. Our approach is based on a polytropic modelling of the gas dynamics taking into account the merging and splitting of the pockets, without a major impact on the efficiency of the simulation (robustness and numerical cost).
This work benefits from the associated team LARME
with RISE (C. Eskilson).

Because of their high strength and low weight, ceramic-matrix composite materials (CMCs) are the focus of active research for aerospace and energy applications involving high temperatures, either military or civil. Self-healing (SH) CMCs are composed of a complex three-dimensional topology of woven fabrics containing fibre bundles immersed in a matrix coating of different phases. The oxide seal protects the fibres which are sensitive to oxidation, thus delaying failure. The obtained lifetimes reach hundreds of thousands of hours 104.

The behaviour of a fibre bundle is actually extremely variable, as the oxidation reactions generating the self-healing mechanism have kinetics strongly dependent on temperature and composition. In particular, the lifetime of SH-CMCs depends on: (i) temperature and composition of the surrounding atmosphere; (ii) composition and topology of the matrix layers; (iii) the competition of the multidimensional diffusion/oxidation/volatilization processes; (iv) the multidimensional flow of the oxide in the crack; (v) the inner topology of fibre bundles; (vi) the distribution of critical defects in the fibres. Unfortunately, experimental investigations on the full materials are too long (they can last years) and their output too qualitative (the coupled effects can only be observed a-posteriori on a broken sample). Modelling is thus essential to study and to design SH-CMCs.

In collaboration with the LCTS laboratory (a joint CNRS-CEA-SAFRAN-Bordeaux University lab devoted to the study of thermo-structural materials in Bordeaux), we are developing a multi-scale model in which a structural mechanics solver is coupled with a closure model for the crack physico-chemistry. This model is obtained as a multi-dimensional asymptotic crack-averaged approximation of the transport equations (Fick's laws) with chemical reaction sources, plus a potential model for the flow of oxide 74, 77, 102. We have demonstrated the potential of this model in showing the importance of taking into account the multi-dimensional topology of a fibre bundle (distribution of fibres) in the rupture mechanism. This means that the 0-dimensional model used in most of the studies (see e.g. 72) will appreciably underestimate the lifetime of the material. Based on these recent advances, we will further pursue the development of multi-scale multi-dimensional asymptotic closure models for the parametric design of self-healing CMCs. Our objectives are to provide: (i) a new, non-linear multi-dimensional mathematical model of CMCs, in which the physico-chemistry of the self-healing process is more strongly coupled to the two-phase (liquid-gas) hydrodynamics of the healing oxide; (ii) a model to represent and couple crack networks; (iii) a robust and efficient coupling with the structural mechanics code; (iv) a validation of this platform with experimental data obtained at the LCTS laboratory. The final objective is to set up a multi-scale platform for the robust prediction of the lifetime of SH-CMCs, which will be a helpful tool for the tailoring of the next generation of these materials.

Our objective is to bridge the gap between the development of high order adaptive methods, which has mainly taken place in the industrial context, and environmental applications, with particular attention to coastal and hydraulic engineering. We want to provide tools for adaptive non-linear modelling at large and intermediate scales (near shore, estuarine and river hydrodynamics). We will develop multi-scale adaptive models for free surface hydrodynamics. Besides the models and codes themselves, based on the most advanced numerics we will develop during this project, we want to provide sufficient know-how to control, adapt and optimize these tools.

We will focus our effort on understanding the interactions between asymptotic approximation and numerical approximation. This is extremely important in several ways. An example is the capability of a numerical model to handle highly dispersive wave propagation. This is usually done by high accuracy asymptotic PDE expansions or by means of multilayer models. In the first case, there is an issue with the constraints on the numerical approximation. Investigations of appropriate error models for adaptivity in the horizontal may permit to alleviate some of these constraints, allowing a reasonable use of lower order discretizations. Concerning multi-layer models, we plan to use results concerning the relations between vertical asymptotic expansions and truncation/approximation error to improve the models by some adaptive approach.

Another important aspect which is not understood well enough at the moment is the role of dissipation in the evolution of the free surface dynamics, and of course in wave breaking regions. There are several examples of breaking closures, going from algebraic and PDE-based eddy viscosity methods 90, 107, 100, 75, to hybrid methods coupling dispersive PDEs with hyperbolic ones and trying to mimic wave breaking with travelling bores 112, 113, 111, 88, 79. In both cases, numerical dissipation plays an important role in the activation (or not) of the breaking closure, as well as in the establishment of stationary travelling profiles and in the appearance of solitary waves. These aspects are related to the notion of numerical dissipation and to its impact on the resulting numerical solutions. These elements must be clarified to allow full control of adaptive techniques for the models used in this type of applications.

A fundamental issue that needs to be addressed is the proper discrete formulation of the boundary conditions for dispersive wave approximations. These conditions play of course a critical role in applications and remain an open problem for most Boussinesq models.

This work is related to large scale simulations requiring the solution of PDEs on manifolds. Examples are tsunami simulations, as those performed in the past in the TANDEM project, as well as some applications considered in the ANR LAGOON project for climate change. The MSCA project SuPerMan proposes applications in astrophysics which also involve similar issues. The idea is to consider both coordinate changes related to mesh movement, as in ALE formulations, as well as genuinely space-time manifolds as in hyperbolic reformulations of relativity 83, and combinations of both when, for example, considering mesh movement and adaptation in curvilinear coordinates 57. Challenges are related to the appropriate PDE formulation, and to the respect of continuous constraints at the discrete level.

The objective here is to devise the most appropriate manifold representation, and formulate the PDE system in the appropriate way allowing to embed as many continuous constraints as possible (well balancing, energy conservation, positivity preservation, etc). Embedding the ALE mapping will be necessary to envisage adaptive strategies, improving on 57 and 84.

Geophysical applications are of interest for BRGM, while the more exploratory application to general relativity of the MSCA project SuPerMan will push the numerical discretizations to their limit, due to the great complexity of the model, and allow new collaborations in the domain of astrophysics, as e.g. with Max Planck institute.

CARDAMOM successfully passed its second evaluation in September 2022.

Luca Cirrottola, a permanent research engineer in scientific computing and high performance computing in the SED department, joined the CARDAMOM and CAGIRE teams. He is working on the AEROSOL software.

Mathieu Colin has been promoted to a professor’s position.

Operational platform for near shore coastal applications based on the following main elements:

- Fully-nonlinear wave propagation.

- Wave breaking handled by some mechanism allowing to mimic the energy dissipation in breakers.

- A high order finite element discretization combined with mesh and polynomial order adaptation for optimal efficiency.

- An efficient parallel object oriented implementation based on a hierarchical view of all the data management aspects cared for by middle-ware libraries developed at Inria within the finite element platform Aerosol.

- A modular wrapping allowing for application tailored processing of all input/output data (including mesh generation, and high order visualization).

- Spherical coordinates based on a local projection on a real 3D spherical map (as of 2021)

- Compilation with GUIX available (as of 2022)

- Homogenization and standardization of code outputs and hazard quantification (as of 2022)

- Correction of the management of dry/wet fronts in the presence of structures represented by a single high point (as of 2022)

- Use of FES for the calculation of the tide directly in UHAINA through an API. New compilation option for activation (as of 2022)

- Boundary conditions accounting for tides from FES and corrected with the effect of the inverse barometer, for the simulation of tidal propagation and surge on domains at the regional scale (as of 2022)

- Hydraulic connections (e.g. sewers) in the simulation of urban flooding (as of 2022)

- Mass source term, for the injection of the volume of water overtopping structures not accounted for in the elevation model during flooding episodes by sea surges (as of 2022)

Explicit, arbitrary high order accurate, one step (ADER), Finite Volume and Discontinuous Galerkin schemes on 2D moving Voronoi meshes for the solution of general first-order hyperbolic PDEs. Main peculiarity: the Voronoi mesh is moved according to the fluid flow using a direct Arbitrary-Lagrangian-Eulerian (ALE) method achieving high quality of the moving mesh for long simulation times. The high quality of the mesh is maintained thanks to a) mesh optimization techniques and b) the additional freedom of allowing topology changes. The high quality of the results is obtained thanks to the high order ADER schemes. The main novelty is the capability of using high-order schemes on moving Voronoi meshes with topology changes.

The code is written in Fortran + OpenMP.

The main highlight of 2022 concerning the AeroSol library was the hiring of Luca Cirrottola as an INRIA permanent engineer.

In 2022, the development of the library was focused on the following points

* Development environment
  - Work on the packaging with Guix.
  - Work on continuous integration by using Plafrim as a gitlab runner, with dependencies handled by Guix (and Modules for legacy).
  - Fix of an old memory leak in PaMPA and integration in the packaging.
  - Beginning of work for merging the master branch, the branch used for the Uhaina/Lagoon project, and the branch including turbulence models and axisymmetric models.
  - The development of a new library, DM2, has started. It aims at replacing PaMPA.

* General numerical features of the library
  - Postprocessing on walls and lines, including computation of Cp and Cf.
  - New finite elements for quads and hexa, based on Gauss-Lobatto or Gauss-Lagrange elements, were added.

* Work on SBM methods
  - Shifted boundary method for Neumann and Dirichlet boundary conditions.
  - Development of high order derivatives in some of the finite element classes.

* Low Mach number flows
  - Low Mach number filtering was extended to quads.
  - Implementation of discrete semi-norms for div and grad; extension of some fixes to the full Euler equations with arbitrary EOS.

* RANS turbulent flow computations
  - Inlet/outlet boundary conditions for the Euler/Navier-Stokes systems allowing to reach a stationary solution.
  - HLLC numerical flux, exact Jacobian with frozen wave speeds, and extension to the RANS equations coupled to transport equations (turbulence models).
  - Test cases: laminar and turbulent flat plate; development and documentation of an axisymmetric test case.
  - Fix of the Spalart-Allmaras model, handling of the negative fix.
  - Beginning of implementation of a coupled elliptic problem with Neumann and Dirichlet boundary conditions on one variable.

* Flux reconstruction:
  - Implementation of flux reconstruction methods on Cartesian meshes.

DM2 is a C++ library for managing meshes and mesh-attached data in an MPI parallel environment. It is conceived to provide parallel mesh and data management for high order finite element solvers in continuum mechanics.

The user should provide a mesh file which is read by the library. Then DM2 is able to:

- Read the mesh, and read the data provided in the mesh file, possibly in parallel

- Redistribute the mesh in order to distribute the data on a given set of processors. This redistribution is performed through a graph partitioner such as ParMETIS or PT-Scotch.

- Allocate the memory in parallel when the number of unknowns per entity type is provided by the user.

- Centralize the data.

- Compute the halo required for a numerical method. The halo is adapted to each of the possible discretizations.

- Renumber mesh elements so as to distinguish those that require communication from those that do not.

- Aggregate a mesh based on a metric, for developing multigrid methods.

This year, the development of the DM2 library began. We focused on the following points.

- Mesh reading: the parallel mesh reader of AeroSol was reused and improved. It can

* read a GMSH file with lines, triangles, quads, prisms, tetrahedra or pyramids, each cell or face being straight or curved;

* perform the reading in parallel;

* rely on a geometrical pre-partitioner for the mesh distribution when reading in parallel.

- From the mesh reading, a C++ graph structure is built. When the mesh is read in parallel, this structure is parallel-consistent. Some work was done on the graph structure (C++ containers, adjacency list) to ensure interoperability with C structures.

- From the C++ structure, a C graph structure compatible with Scotch or METIS can be built, and a parallel mesh redistribution can be performed based on these software packages.

Fundamental work on schemes. To reduce the costs associated with the DG finite element method of our previous approach, we study the use of (both continuous and discontinuous) cubature elements, which allow a considerable reduction of the number of operations, including a full diagonalization of the mass matrix. In 97 we provided a first investigation of the fully discrete linear stability of continuous finite elements with different stabilization operators. The theoretical results are confirmed by numerical computations on linear and nonlinear problems, confirming the potential of the cubature approach in terms of CPU time for a given error. This year we have proposed a multidimensional extension of this fully discrete analysis. The challenges in two dimensions are related to the Fourier analysis. Here, we perform it on two types of periodic
triangular meshes, varying the angle of the advection, and we combine all the results for a general stability analysis. Due to the large number of modes involved, in the fully discrete case we have combined the basic Fourier stability criterion with the one obtained from Dahlquist's equation for the spatial discretization alone.
Furthermore, we introduce an additional high order viscosity to stabilize the discontinuities, in order to show how to use
these methods for tests of practical interest. A simple model for this nonlinear term has been included in the spectral analysis as well. All the theoretical results are thoroughly validated numerically on both
linear and nonlinear problems, and error-CPU time curves are provided. Our final conclusions suggest that cubature
elements combined with SSPRK time stepping and OSS stabilization are the most promising combination. This work is discussed in 21 (accepted in J. Sci. Comput.).
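As a toy illustration of such a fully discrete von Neumann analysis (a Python sketch for a deliberately simple 1D pair, second order central differences in space with the three-stage SSPRK3 scheme in time, not the cubature elements studied in the paper), one can scan the amplification factor over all Fourier modes and bisect for the largest stable CFL number:

```python
import math

def rk3_amplification(z):
    # Stability polynomial shared by all three-stage, third order RK schemes.
    return 1 + z + z**2 / 2 + z**3 / 6

def max_stable_cfl(n_theta=720):
    """Largest CFL for which |R(cfl * lambda(theta))| <= 1 for all modes."""
    # Central differences for u_t + a u_x = 0 give the purely imaginary
    # spatial eigenvalue lambda(theta) = -i sin(theta) (units a = h = 1).
    thetas = [math.pi * k / n_theta for k in range(1, n_theta + 1)]

    def stable(cfl):
        return all(abs(rk3_amplification(-1j * cfl * math.sin(t))) <= 1 + 1e-12
                   for t in thetas)

    lo, hi = 0.0, 4.0            # bisection on the stability threshold
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if stable(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For this space-time pair the scan recovers the classical limit CFL = sqrt(3) ≈ 1.732; the same mode-scanning machinery, with matrix-valued symbols, underlies the multidimensional analysis described above.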

Global flux based schemes. We have developed schemes based on a fully discrete well balanced criterion which exploits the idea of a global flux formulation. In this framework, classically, some primitive function of the source terms is defined and included in the hyperbolic flux. In our work, however, this idea is mostly used to infer an ad-hoc quadrature strategy for the source, which we refer to as "global flux quadrature". This quadrature approach allows us to establish a one-to-one correspondence, for a given local set of data on a given stencil, between the discretization of a non-local integral operator and the discretization of the local steady differential problem. This equivalence is a discrete well balanced notion which allows the construction of balanced schemes without explicit knowledge of the steady state, and in particular without the need to solve a local Cauchy problem. We have used this idea in the setting of finite elements, both discontinuous in 50 and continuous in the framework of the PhD of Lorenzo Micalizzi at U. Zurich, co-advised by R. Abgrall and M. Ricchiuto. We have shown that the use of specific finite element spaces allows a very precise characterization of the discrete solution. For example,
the use of Gauss-Lobatto elements provides a natural connection to continuous collocation methods for integral equations. This allows us to prove super-convergence estimates for the steady discrete solution.
In the continuous finite element case, this new well balanced criterion requires the design of compatible stabilization operators. This aspect was presented at the HONOM 2022 conference. A paper on the latest developments is in preparation.
Similar ideas have been used to build fully well balanced WENO finite volume schemes in 47. Genuinely multidimensional extensions, as well as extensions to time dependent solutions, are also in the works.
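The global flux quadrature mechanism can be sketched on the 1D shallow water equations with bathymetry (a minimal Python illustration, not the finite element schemes of the papers): accumulating a trapezoidal primitive of the topography source into the momentum flux yields a discrete global flux that is exactly constant on lake-at-rest data, without ever solving for the steady state:

```python
import math

def global_flux(h, b, g=9.81):
    """Discrete global flux G_i = f_i + primitive of the source g h b_x,
    with the primitive accumulated by the trapezoidal rule node by node."""
    f = [0.5 * g * hi * hi for hi in h]     # momentum flux at rest (u = 0)
    S, G = 0.0, [f[0]]
    for i in range(1, len(h)):
        # Trapezoidal quadrature of g h b_x over one cell.
        S += 0.5 * g * (h[i] + h[i - 1]) * (b[i] - b[i - 1])
        G.append(f[i] + S)
    return G

def lake_at_rest(n=101, H=2.0):
    # Lake-at-rest data: free surface h + b constant, zero velocity.
    b = [0.5 * math.exp(-((5.0 * k / (n - 1) - 2.5) ** 2)) for k in range(n)]
    return [H - bk for bk in b], b
```

On the lake-at-rest data the returned global flux is constant to machine precision; any consistent scheme written in terms of differences of G therefore preserves this state exactly, which is the discrete well balanced property described above.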

Shallow water equations on manifolds.
The work on well balanced schemes for shallow water type models has seen major enhancements.
We proposed a novel hyperbolic reformulation of the shallow water model written in covariant coordinates, i.e. metric independent, and for this system we derived a cheap well balanced method able to maintain water-at-rest equilibria to machine precision for general metrics and complex-shaped domains. The numerical results obtained, both in 1D and 2D, are reported in the publications 10, 40.
This work has been presented at Shark 2022 (Portugal), HYP 2022 (Spain), and at the conference "Essentially hyperbolic problems: unconventional numerics, and applications" 2022 (Switzerland).

Astrophysics.
Following the work in 83, we have continued our research on the implementation of robust and accurate numerical schemes for the simulation of hyperbolic models of general relativity.
We investigated in particular the general relativistic magnetohydrodynamics (GRMHD) equations and the Einstein field equations with different first order reformulations, such as the Z4 and CCZ4 models.
We also considered the Buchman 67 and the novel TEGR 103 tetrad formulations.
The implementation work is done in a 3D Cartesian code. The need for well balanced techniques appears more and more crucial as the applications grow in complexity, and a series of numerical tests of increasing difficulty shows how the well balancing significantly improves the long-time stability of the finite volume scheme compared to a standard one, in particular for the study of neutron stars.
Various manuscripts are in preparation. The results obtained have been presented at the HYP conference in Malaga (Spain) and at the "Séminaire du Laboratoire Jacques-Louis Lions (LJLL)" in November in Paris.

Entropy conservative ADER-DG. We have developed a fully discrete entropy preserving ADER-Discontinuous Galerkin (ADER-DG) method.
To obtain this result, we equipped the spatial part of the method with entropy correction terms that balance the entropy production in space, inspired by the work of Abgrall, while for the time discretization we applied the relaxation approach introduced by Ketcheson, which modifies the timestep so as to preserve the entropy to machine precision. To the best of our knowledge, this is the first time that a provably entropy preserving fully discrete ADER-DG scheme has been constructed. We have verified our theoretical results with various numerical simulations, reported in 16.
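The relaxation idea of Ketcheson can be illustrated on a toy ODE with a quadratic invariant (a Python sketch, unrelated to the ADER-DG implementation itself): after each RK4 step the increment is rescaled by a factor gamma chosen so that the energy is conserved exactly, computable here in closed form because the invariant is quadratic:

```python
def f(y):
    x, v = y
    return (v, -x)      # harmonic oscillator: the energy x^2 + v^2 is conserved

def rk4_increment(y, dt):
    def axpy(a, k):
        return (y[0] + a * k[0], y[1] + a * k[1])
    k1 = f(y)
    k2 = f(axpy(dt / 2, k1))
    k3 = f(axpy(dt / 2, k2))
    k4 = f(axpy(dt, k3))
    return tuple(dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2))

def relaxed_step(y, dt):
    d = rk4_increment(y, dt)
    # Relaxation: pick gamma so that |y + gamma d|^2 = |y|^2 exactly,
    # i.e. gamma = -2 (y . d) / |d|^2 (close to 1 for an accurate step).
    yd = y[0] * d[0] + y[1] * d[1]
    dd = d[0] * d[0] + d[1] * d[1]
    gamma = 1.0 if dd == 0.0 else -2.0 * yd / dd
    return (y[0] + gamma * d[0], y[1] + gamma * d[1])

def energy(y):
    return y[0] ** 2 + y[1] ** 2

def energy_after(n_steps, dt):
    y = (1.0, 0.0)
    for _ in range(n_steps):
        y = relaxed_step(y, dt)
    return energy(y)
```

Over a thousand steps the energy stays at its initial value up to round-off, while a plain RK4 integration would slowly drift; in the ADER-DG scheme the same rescaling acts on the timestep with the total entropy playing the role of the energy.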

A posteriori sub-cell conservative correction of nonconservative schemes. We proposed a novel quasi-conservative high order discontinuous Galerkin (DG) method able
to capture contact discontinuities without spurious numerical artifacts, thanks to the PDE evolution in primitive variables, while at the same time being strongly conservative across shocks, thanks to a conservative a posteriori subcell finite volume (FV) limiter.
In particular, we have verified the improved reliability of our scheme on the multi-fluid Euler system on examples like the interaction of a shock with a helium bubble.
The results obtained have been presented in a seminar at the CEA center of Bruyères-le-Châtel, and a paper is in preparation.

Coupling dispersive and non-dispersive models. We continued our work on the coupling of dispersive shallow water models by deriving asymptotic interface operators. We derived transmission operators for coupling the linear Green-Naghdi equations (LGNE) with the linear shallow water equations (LSWE) – the heterogeneous case – or for coupling LGNE with LGNE – the homogeneous case. We derived them from a domain decomposition method (Neumann-Dirichlet) for the linear Euler equations, by applying the same vertical-averaging process and truncation of the asymptotic expansion of the velocity field used in the derivation of the equations. We find that the new asymptotic transmission conditions also correspond to Neumann and Dirichlet operators. In the homogeneous case the method has the same convergence condition as the parent domain decomposition method, but leads to a solution that differs from the monodomain solution.

Dispersive waves with porous media. This year we started working on a conservative form of the extended Boussinesq equations for waves in porous media. This model can be used in both porous and non-porous media since it does not require any boundary condition at the interface between them. A hybrid Finite Volume/Finite Difference (FV/FD) scheme has been implemented to solve this conservative form. The FV formulation, with a Roe-type approximate Riemann solver, is applied to the hyperbolic part of the governing equations, while the dispersive and porosity terms are discretized by FD. The model is validated against experimental data for solitary waves interacting with porous structures and for a porous dam break in a one-dimensional flow. The work was published in the Ocean Engineering journal 27.

Wave breaking. In 2022 we continued our work on wave breaking for Boussinesq type modelling, and more precisely for the GN equations. Using the numerical model already described in 79, we attempted to provide more understanding of the sensitivity of some closure approaches to the numerical set-up. More precisely, based on 89, we focused on two closure strategies for modelling wave breaking. The first is the hybrid method, which consists in suppressing the dispersive terms in a breaking region; the second is an eddy viscosity approach based on the solution of a turbulent kinetic energy model. The two closures use the same conditions for triggering the breaking mechanisms. Both the triggering conditions and the breaking models themselves involve case dependent, ad hoc parameters which affect the numerical solution as they vary. The scope of this work is to use sensitivity indices computed by means of Analysis of Variance (ANOVA) to quantify the sensitivity of wave breaking simulations to the variation of parameters such as those involved in each breaking model. The paper was published in the Water Waves journal 18.

Hyperbolic-elliptic splitting. We performed additional work on the use of the splitting between hyperbolic and elliptic steps proposed in 79 to solve Boussinesq type equations. For the Green-Naghdi (GN) model, the latest results and developments are discussed in 19. In this paper we use a high order FV method to solve the hyperbolic step, and a standard P1 finite element method for the elliptic system associated to the dispersive correction. We study the impact of the reconstruction used in the hyperbolic phase; the representation of the FV data in the FE method used in the elliptic phase, and its impact on the theoretical accuracy of the method; and the well-posedness of the overall method. For the first element, we proposed a systematic implementation of an iterative reconstruction providing, on arbitrary meshes, up to third order solutions, fully second order first derivatives, as well as a consistent approximation of the second derivatives. These properties are exploited to improve the assembly of the elliptic solver, showing a dramatic improvement of the final accuracy when the FV representation is correctly accounted for. Concerning the elliptic step, the original problem is usually better suited for an approximation in H(div) spaces. However, it has been shown that perturbed problems involving similar operators with a small Laplace perturbation are well behaved in H1, provided that some numerical dissipation is embedded in the overall discretization. In our case, the use of upwind numerical fluxes in the hyperbolic step suffices to this end, allowing us not only to obtain convergent results, but also to recover the expected convergence rates.

One of the drawbacks of this splitting approach is that a linear system needs to be assembled and solved at each substep. For multi-stage temporal integration, as in classical high order RK methods, this leads to schemes with a considerable overhead when passing from shallow water to Boussinesq. In 11 we have proposed a strategy to design one step discretizations based on the splitting strategy allowing a single evaluation of the elliptic step. The schemes are based on a simplified Lax-Wendroff procedure in which polynomial extrapolation in time is used to evaluate part of the dispersive terms required to obtain the necessary truncation order in the Taylor development in time underlying the Lax-Wendroff process. Second and third order schemes are constructed based on this idea, their spectral properties are analyzed, and several nonlinear 1D benchmarks are used to show the benefits of the proposed method. The multidimensional extension has already been implemented and its study is ongoing.

Model order reduction for weakly dispersive waves. To alleviate the overheads related to the approximation of dispersive effects, we have explored a hybrid strategy using the hyperbolic-elliptic splitting, in which a reduced model is used to approximate the dispersive elliptic operator. The numerical evidence suggests that not only is this possible, but the resulting discrete model provides accurate predictions with computational savings of at least one order of magnitude, and with increased robustness compared to fully reduced approximations 26. This work opens the way to many developments, the first of which will be the extension of the initial results to breaking waves and to the multidimensional case.

Projection structure of the time-discrete Green-Naghdi equations. The direct use of this structure has allowed us to answer several open questions, one of the most pressing being the correct imposition of boundary conditions.
In particular, using techniques inspired by the discretization of the incompressible flow equations, we proposed in 22 a numerical treatment accounting for very general boundary conditions, while guaranteeing that the whole scheme is entropy stable.
We are currently working on exploiting this structure to design a well-balanced numerical scheme, i.e. a scheme able to preserve all the steady states, including those not at rest.

In-flight icing is a major source of incidents and accidents. The effects of atmospheric icing can be anticipated by Computational Fluid Dynamics (CFD). Past studies show that the convective heat transfer influences the ice accretion and is itself a function of surface roughness. Uncertainty quantification (UQ) can help quantify the impact of surface roughness parameters on the reliability of ice accretion prediction. The prediction of heat transfer in Reynolds-Averaged Navier-Stokes (RANS) simulations requires corrections for rough surfaces. The turbulence models are adapted to cope with surface roughness impacting the near-wall behaviour compared to a smooth surface. These adjustments correctly predict the skin friction but tend to overpredict the heat transfer compared to experiments, which requires an additional thermal correction model to lower it. Finding the numerical parameters that best fit the experimental results is non-trivial, since roughness patterns are often irregular. The objective of the paper 17 is to develop a methodology to calibrate the roughness parameters of a thermal correction model for a rough curved channel test case. First, a design of experiments allows the generation of metamodels for the prediction of the heat transfer coefficients, using the polynomial chaos expansion approach. The metamodels are then used successively with a Bayesian inversion and a genetic algorithm to estimate the set of roughness parameters that best fits the available experimental results. Starting from unknown roughness parameters, this methodology allows them to be calibrated, with an average discrepancy between 4.7% and 10% between the calibrated RANS heat transfer prediction and the experimental results.
The methodology is promising, showing the ability to finely select the roughness parameters to input in the numerical model to fit the experimental heat transfer, without an a priori knowledge of the actual roughness pattern.
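The polynomial chaos metamodel ingredient can be sketched in one dimension (an illustration only; the study uses multivariate expansions coupled with Bayesian inversion and a genetic algorithm): a Legendre PCE is fitted to sampled responses by least squares and then evaluated as a cheap surrogate of the expensive simulation:

```python
def legendre(n, x):
    # First Legendre polynomials P0, P1, P2 on [-1, 1].
    return [1.0, x, 1.5 * x * x - 0.5][n]

def fit_pce(samples, values, degree=2):
    """Least-squares PCE fit: model(x) = sum_k c_k P_k(x)."""
    m = degree + 1
    # Normal equations of the least-squares problem (small, dense system).
    A = [[sum(legendre(i, x) * legendre(j, x) for x in samples) for j in range(m)]
         for i in range(m)]
    rhs = [sum(legendre(i, x) * v for x, v in zip(samples, values)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, m):
            fac = A[r][c] / A[c][c]
            for k in range(c, m):
                A[r][k] -= fac * A[c][k]
            rhs[r] -= fac * rhs[c]
    coeff = [0.0] * m
    for c in reversed(range(m)):
        coeff[c] = (rhs[c] - sum(A[c][k] * coeff[k] for k in range(c + 1, m))) / A[c][c]
    return coeff

def eval_pce(coeff, x):
    return sum(c * legendre(k, c_idx := k) or c * 0 for k, c in [])  # placeholder
```

A usage sketch: fitting `2 + 3 x^2` on a few samples recovers its exact Legendre coordinates `(3, 0, 2)`, since `x^2 = (2 P2 + 1) / 3`; the surrogate can then be queried anywhere in the parameter range at negligible cost.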

This year, numerical investigations have been performed on the Eulerian droplet model for the computation of droplet impingement. The model equations are close to the Euler equations, but without a pressure term. Consequently, the resulting system is only weakly hyperbolic and standard Riemann solvers cannot be used. To circumvent this problem, the system is modified to include the divergence of a particle pressure. The main purpose of this work 31 is to implement a multidimensional HLLC Riemann solver for the modified formulation of the Eulerian droplet model. The method should preserve physical properties such as the positivity of the density, and must produce accurate results compared to existing codes.
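A minimal 1D illustration of why the added particle pressure helps (using, for simplicity, a Rusanov/local Lax-Friedrichs flux rather than the paper's multidimensional HLLC solver, and an assumed pressure law p(a) = eps * a^gamma): the modified system can be advanced with a standard conservative FV scheme, conserving mass exactly and keeping the volume fraction positive on smooth data:

```python
import math

def rusanov_step(alpha, q, dx, dt, eps=1e-4, gamma=1.4):
    """One FV step for the 1D modified droplet model, periodic boundaries.
    State: volume fraction alpha and momentum q = alpha * u."""
    n = len(alpha)
    pres = lambda a: eps * a ** gamma              # assumed particle pressure
    def flux(a, m):
        u = m / a
        return (m, m * u + pres(a))
    def smax(a, m):
        c = (gamma * pres(a) / a) ** 0.5           # particle "sound speed"
        return abs(m / a) + c
    fa, fq = [0.0] * n, [0.0] * n
    for i in range(n):                             # interface between cells i, i+1
        j = (i + 1) % n
        fl, fr = flux(alpha[i], q[i]), flux(alpha[j], q[j])
        s = max(smax(alpha[i], q[i]), smax(alpha[j], q[j]))
        fa[i] = 0.5 * (fl[0] + fr[0]) - 0.5 * s * (alpha[j] - alpha[i])
        fq[i] = 0.5 * (fl[1] + fr[1]) - 0.5 * s * (q[j] - q[i])
    new_a = [alpha[i] - dt / dx * (fa[i] - fa[i - 1]) for i in range(n)]
    new_q = [q[i] - dt / dx * (fq[i] - fq[i - 1]) for i in range(n)]
    return new_a, new_q

def initial_data(n=100):
    a = [1.0 + 0.5 * math.sin(2 * math.pi * k / n) for k in range(n)]
    return a, [0.2 * ai for ai in a]               # uniform velocity u = 0.2

def run(n_steps=50, n=100):
    a, q = initial_data(n)
    mass0 = sum(a)
    for _ in range(n_steps):
        a, q = rusanov_step(a, q, 1.0 / n, 0.2 / n)
    return mass0, a
```

With the telescoping interface fluxes the total mass is conserved to round-off, and the diffusive Rusanov flux keeps the volume fraction positive; HLLC achieves the same properties with much less numerical dissipation.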

We have continued exploring new ideas allowing to improve the accuracy of immersed and embedded boundary methods, both on a fundamental level and in applications.

Shifted boundary method, extensions and developments. We have proposed several applications and extensions of the original ideas behind the shifted boundary method of 95. This method is based on the simple idea that, when applying the boundary conditions on a modified (surrogate) boundary, we can modify the imposed conditions to account for this offset by means of a Taylor series expansion truncated to the desired accuracy. First of all, we extended the enriched formulation proposed in 99 to the approximation of parabolic problems with moving interfaces, modelling phase change. This is being done in the PhD of T. Carlier 34, 45.

We have worked on extending the use of the same idea to higher orders. In this case, the evaluation of the entire Taylor expansion at each boundary quadrature point is quite expensive. On the other hand, when using local data to evaluate the Taylor series, this procedure can be readily shown to be simply a local change of polynomial basis. This has allowed us to rewrite the method via a simple but effective high order polynomial correction based on the polynomial representation already available in the cell. This approach has been thoroughly tested on hyperbolic problems in 2D and 3D in 12. Its in-depth study for elliptic problems is ongoing, in collaboration with DTU Compute and Duke University.
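The first-order version of the underlying Taylor correction is easy to demonstrate on a 1D Poisson problem whose right boundary falls between grid nodes (a Python sketch with an assumed manufactured solution u = sin x, not the actual SBM implementation): imposing u + d u' = g at the surrogate node, instead of u = g directly, removes the O(d) boundary error:

```python
import math

def solve_poisson(h, b, taylor_correction):
    """Solve -u'' = sin(x) on [0, b] (exact solution u = sin) on nodes x_i = i h;
    the true boundary b lies a distance d past the last node (surrogate boundary).
    Returns the max nodal error."""
    u_exact, f = math.sin, math.sin
    m = int(b / h)                      # surrogate boundary node x_m <= b
    d = b - m * h
    lo = [0.0] * (m + 1); di = [0.0] * (m + 1)
    up = [0.0] * (m + 1); rhs = [0.0] * (m + 1)
    di[0], rhs[0] = 1.0, u_exact(0.0)   # exact Dirichlet data at x = 0
    for i in range(1, m):               # standard 3-point Laplacian rows
        lo[i], di[i], up[i] = -1.0 / h**2, 2.0 / h**2, -1.0 / h**2
        rhs[i] = f(i * h)
    if taylor_correction:
        # Shifted condition u(x_m) + d u'(x_m) = g(b), u' ~ (u_m - u_{m-1}) / h.
        lo[m], di[m] = -d / h, 1.0 + d / h
    else:
        di[m] = 1.0                     # naive: impose g(b) directly at x_m
    rhs[m] = u_exact(b)
    for i in range(1, m + 1):           # Thomas algorithm, forward sweep
        w = lo[i] / di[i - 1]
        di[i] -= w * up[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * (m + 1)
    u[m] = rhs[m] / di[m]
    for i in reversed(range(m)):        # back substitution
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i]
    return max(abs(u[i] - u_exact(i * h)) for i in range(m + 1))
```

On this example the corrected boundary condition reduces the maximum error by a substantial factor compared to the naive imposition; higher order corrections follow by keeping more Taylor terms, or, equivalently, by the polynomial change of basis described above.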

At the same time, we have tried to reformulate the problem based on a continuous view of the scheme. Exploiting the anisotropy of the thin region between the under-resolved and physical boundaries, we have been able to derive a sub-grid asymptotic approximation whose trace on the surrogate boundary is precisely the condition used in the shifted boundary method. This allows the design of several boundary conditions of any desired order of accuracy. This work is performed in collaboration with L. Nouveau (INSA Rennes) and C. Poignard (Inria, MONC).

Embedding shock waves and discontinuities. Using similar ideas, we have proposed a discontinuity tracking approach which uses polynomial extrapolation to pass from the computational mesh to a mesh approximating the discontinuity. This method, originally introduced in 70, avoids introducing small cut cells, as the original front tracking method does, and allows the formal accuracy of the scheme to be recovered without the need for any limiter, and with none of the spurious effects related to the capturing of strong shocks. This year we have extended this work to the approximation of interactions of several discontinuities, using both unstructured 14 and structured flow solvers 4. Preliminary results in 3D are discussed in 30.

Immersed boundaries for turbulent flows. Realistic applications to external aerodynamics are being pursued in collaboration with ONERA and CEA-Cesta.
Within the PhD of Benjamin Constant (ONERA), we have proposed an improved Immersed Boundary Method based on volume penalization for turbulent flow simulations on Cartesian grids. The proposed approach removes spurious oscillations of the skin pressure and friction coefficients at the wall. Results are compared to a body-fitted simulation using the same wall function, showing that the stair-step immersed boundary provides a smooth solution, comparable to the body-fitted one. The IBM has been modified to adapt the location of the forced and forcing points involved in the immersed boundary reconstruction to the Reynolds number. This method has been validated for both subsonic and transonic flow regimes, through the simulation of the subsonic turbulent flow around a NACA0012 profile, the transonic flow around an RAE2822 profile, and the three-dimensional ONERA M6 wing. This work has been extended to more complex 3D geometries and presented in 35. This work continues with the thesis of Michele Romanelli. In 23, Michele presents a data-based methodology to build Reynolds-Averaged Navier-Stokes (RANS) wall models for aerodynamic simulations at low Mach numbers. Like classical approaches, the model is based on nondimensional local quantities derived from the wall friction velocity, the wall viscosity, and the wall density; a fully-connected neural network approximates the relation. We consider reference data (obtained with RANS simulations based on fine meshes up to the wall) of attached turbulent flows at various Reynolds numbers over different geometries of bumps, covering a range of wall pressure gradients. After training the neural networks on a subset of the reference data, the paper assesses their ability to accurately recover data for unseen conditions on meshes that have been trimmed from the wall up to an interface height where the learned wall law is applied. The network's interpolation and extrapolation capabilities are quantified and carefully examined.
Overall, when tested within its interpolation and extrapolation capabilities, the neural network model shows good robustness and accuracy. The global error on the skin friction coefficient is a few percent and behaves consistently over all the considered test cases.

Florent Nauleau's doctoral work aims at adapting the immersed boundary conditions (IBC) technique to three-dimensional (3D) large eddy simulations (LES) of viscous hypersonic flows around complex vehicles. The work relies on a pre-existing in-house IBC code, HYPERION (HYPERsonic vehicle design using Immersed bOuNdaries). As a first step towards the optimization of the 3D version of HYPERION, we discuss in the paper 32 a novel MPI/OpenMP hybrid rasterization algorithm allowing the detection of immersed cells in record time, even for very large problems. We then consider the least-squares-based reconstruction algorithm of HYPERION. In 3D configurations, it is found that the number of neighbours has to be very high to ensure the proper conditioning of the least-squares matrix. If the computation is distributed over several MPI processes (as is always the case in 3D for realistic return times), gathering the information from that many neighbours can cause obvious communication issues: it amounts to covering large stencils with unrealistically large MPI halos. We therefore introduce an algorithm designed for a hybrid MPI/OpenMP environment, based on migratable tasks and a consensus algorithm, to remedy this shortcoming. Finally, we discuss the premise of the implementation of LES capabilities in HYPERION.

Another aspect of the work of Florent Nauleau concerns scientific visualisation. The application paper 33 presents a comprehensive experimental evaluation of the suitability of Topological Data Analysis (TDA) for the quantitative comparison of turbulent flows. Specifically, our study documents the usage of the persistence diagram of the maxima of flow enstrophy (an established vorticity indicator) for the topological representation of 180 ensemble members, generated by a coarse sampling of the parameter space of five numerical solvers. We document five main hypotheses reported by domain experts, describing their expectations regarding the variability of the flows generated by the distinct solver configurations. We contribute three evaluation protocols to assess the validation of the above hypotheses by two comparison measures: (i) a standard distance used in scientific imaging (the L2 norm) and (ii) an established topological distance between persistence diagrams (the L2-Wasserstein metric). Extensive experiments on the input ensemble demonstrate the superiority of the topological distance (ii) in reporting as close to each other flows which are expected to be similar by domain experts, due to the configuration of their vortices. Overall, the insights reported by our study bring experimental evidence of the suitability of TDA for representing and comparing turbulent flows, thereby giving the fluid dynamics community confidence in its usage in future work. Our flow data and evaluation protocols also provide the TDA community with an application-approved benchmark for the evaluation and design of further topological distances.

We have completed, in collaboration with the LCTS laboratory, a first fully coupled study of a self-healing mini-composite. To this end, we proposed a slow crack growth model explicitly dependent on the environmental parameters, which we calibrated using a particular exact solution of the corresponding ODE and integrated numerically in the general case. The tow failure results from the statistical distribution of the fibres' initial strength, the slow crack growth kinetics, and the load transfer following fibre breakage. The lifetime prediction capabilities of the model, as well as the effects of temperature, of the spatial variation of the statistical distribution of fibre strength, and of the applied load, are investigated, highlighting the influence of the diffusion/reaction (healing) processes on the fibre breakage scenarios. This work is reported in 6.

Based on this model, we have proposed an extensive characterization of the uncertainty propagation, as well as a sensitivity analysis which has allowed us to envision a first approach to upscale the healing closure to a single (or a finite number of) ODE(s) in each crack. Results are discussed in the dissertation 41. A paper is in preparation.

Parallel mesh adaptation. Task-based parallelism in the Mmg tools was investigated within the European project
Microcard. Several approaches were considered to implement the Mmg algorithms in the StarPU framework, although none has yet been fully successful. Broader investigations on the adaptation of the code base to shared-memory parallelism were also carried out.
The source code, documentation and contributions to these projects are hosted on
GitHub.

Goal oriented mesh adaptation. The work on goal-oriented mesh adaptation techniques for geophysical flows has continued, in the context of the collaboration with Imperial College London.

The MPI-parallel metric-based mesh adaptation toolkit ParMmg was integrated into the solver library PETSc. This coupling brings robust, scalable anisotropic mesh adaptation to a wide community of PETSc users, as well as users of downstream packages. We demonstrate the new functionality via the solution of Poisson problems in three dimensions, with both uniform and spatially-varying right-hand sides.

Metric-based mesh adaptation methods were applied to advection-dominated tracer transport modelling problems in two and three dimensions, using the finite element package Firedrake. In particular, the mesh adaptation methods considered were built upon goal-oriented estimates of the error incurred in evaluating a diagnostic quantity of interest (QoI). In the motivating example of modelling to support desalination plant outfall design, such a QoI could be the salinity at the plant inlet, which could be negatively impacted by the transport of brine from the plant's outfall. Four approaches were considered, one of which yields isotropic meshes. Since the focus on advection-dominated problems means that flows are often anisotropic, three anisotropic approaches were also considered. Meshes resulting from each of the four approaches yield solutions to the tracer transport problem which approximate QoI values better than uniform meshing, for a given mesh size. The methodology was validated on an existing 2D tracer transport test case with a known analytical solution. Goal-oriented meshes for an idealised time-dependent desalination outfall scenario were also presented.

Further work was carried out to examine the accuracy and sensitivity of tidal array performance assessment by numerical techniques applying goal-oriented mesh adaptation. We sought to improve the accuracy of the discontinuous Galerkin method applied to a depth-averaged shallow water model of a tidal energy farm, where turbines are represented using a drag parametrisation and the energy output is specified as the QoI. Two goal-oriented adaptation strategies were considered, giving rise to meshes with isotropic and anisotropic elements, respectively. We reproduced results from the literature which demonstrate how a staggered array configuration extracts more energy than an aligned array. We also made detailed qualitative and quantitative comparisons between the fixed mesh and adaptive outputs. The proposed goal-oriented mesh adaptation strategies were validated for the purposes of tidal energy resource assessment. Using 10% as many degrees of freedom as a high resolution fixed mesh benchmark, they are shown to keep energy output differences smaller than 10%. Applied to a tidal array with aligned rows of turbines, the anisotropic adaptation scheme is shown to yield differences smaller than 1%.

Coupling mesh adaptation with model reduction. In the framework of the Eflows4HPC project, we started a new line of research investigating the introduction of anisotropic mesh adaptation techniques in Model Order Reduction.

We have continued the development of the code called AleVoronoi: direct Arbitrary Lagrangian Eulerian high order finite volume and discontinuous Galerkin schemes on Voronoi moving meshes with topology changes. The code is written in Fortran with the OpenMP parallel paradigm.
It implements an arbitrary high order accurate numerical scheme which exploits the ADER paradigm for both the Finite Volume and the Discontinuous Galerkin case, and can be used to study the Burgers equation, the Euler equations, the multi-material Euler equations, Shallow Water models written in various systems of coordinates and in covariant form, the MHD equations, and the GPR model.
It belongs to the family of Arbitrary-Lagrangian-Eulerian methods, so it can be used in a fixed Eulerian framework or with a moving Voronoi mesh regenerated at each time step. Its peculiarity is the capability of dealing with topology changes while maintaining the high order of accuracy.
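As a minimal illustration of the finite-volume building block underlying such solvers (not the high-order ADER-ALE scheme itself), a first-order explicit update for the Burgers equation on a fixed periodic Eulerian grid can be sketched as follows:

```python
import numpy as np

def rusanov_flux(ul, ur):
    """Rusanov (local Lax-Friedrichs) numerical flux for the Burgers
    equation, f(u) = u^2/2."""
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    smax = np.maximum(np.abs(ul), np.abs(ur))
    return 0.5 * (fl + fr) - 0.5 * smax * (ur - ul)

def fv_step(u, dx, dt):
    """One first-order explicit finite-volume update on a fixed
    periodic grid. AleVoronoi replaces this elementary step with
    high-order ADER reconstruction on moving Voronoi cells."""
    up = np.roll(u, -1)          # right neighbours (periodic)
    f = rusanov_flux(u, up)      # flux at the right face of each cell
    return u - dt / dx * (f - np.roll(f, 1))

# toy run: smooth initial data steepening towards a shock
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)
dx = 1.0 / n
for _ in range(100):
    dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)  # CFL condition
    u = fv_step(u, dx, dt)
```

Because the update is written in conservation form, the cell average of u is preserved exactly up to round-off, a property the ALE schemes above retain even across mesh regeneration.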

The work of 2022 has been devoted to the following major improvements.
First, we have added to AleVoronoi a robust a posteriori subcell Finite Volume limiter, and we have started the implementation of novel mesh adaptation strategies. A manuscript entirely devoted to these implementation works is in preparation, and a book chapter collecting some preliminary results can be found in 39.
In particular, the manuscript aims at showing, through simple and salient examples, the capabilities of high-order ALE schemes and of our novel technique based on the high-order space-time treatment of topology changes.

Then, we have compared the capabilities of different sets of basis functions on Voronoi meshes: the obtained results are described in 9.
There we propose a new high order accurate nodal discontinuous Galerkin (DG) scheme in which, rather than using classical piecewise polynomials within each Voronoi element, we use a continuous finite element basis defined on a subgrid inside each polygon.
We call the resulting subgrid basis an agglomerated finite element (AFE) basis for the DG method on general polygons, since it is obtained by the agglomeration of the finite element basis functions associated with the subgrid triangles.
The basis functions on each sub-triangle are defined, as usual, on a universal reference element, hence allowing the universal mass, flux and stiffness matrices for the subgrid triangles to be computed once and for all, in a pre-processing stage, on the reference element only. Consequently, an efficient quadrature-free algorithm can be constructed despite the unstructured nature of the computational grid.
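For linear (P1) basis functions on triangles, this pre-computation step can be sketched as follows. Under an affine map, the physical mass matrix is simply the reference matrix scaled by the Jacobian determinant, which is what makes a quadrature-free implementation possible; the AFE basis exploits the same mechanism on the subgrid triangles of each polygon.

```python
import numpy as np

# Linear (P1) basis on the reference triangle with vertices
# (0,0), (1,0), (0,1): phi_1 = 1-x-y, phi_2 = x, phi_3 = y.
# Exact reference mass matrix M_ij = integral of phi_i * phi_j over
# the reference triangle (whose area is 1/2):
M_ref = (1.0 / 24.0) * np.array([[2.0, 1.0, 1.0],
                                 [1.0, 2.0, 1.0],
                                 [1.0, 1.0, 2.0]])

def physical_mass_matrix(vertices):
    """Mass matrix of a physical sub-triangle: for an affine map the
    integrand only picks up the constant Jacobian determinant, so the
    precomputed reference matrix is rescaled -- no quadrature needed
    at run time."""
    (x1, y1), (x2, y2), (x3, y3) = vertices
    det_j = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return abs(det_j) * M_ref
```

A useful sanity check is that the entries of any mass matrix sum to the triangle's area, since the P1 basis functions sum to one.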

Finally, we have implemented the novel quasi-conservative strategy for multi-material flow simulations already described above; here we simply underline that the method has been implemented in two space dimensions on unstructured Voronoi meshes.

The main developer of AleVoronoi is Elena Gaburro. The described work has been realized in collaboration with Mario Ricchiuto, Michele Giuliano Carlino, Walter Boscheri (University of Ferrara, Italy) and Simone Chiocchetti (University of Trento, Italy).

The objective of this project is to propose a numerical tool (the software GeoFun) for the simulation of flows in aquifers based on unified models. Different types of flows can appear in an aquifer: free surface flows (hyperbolic equations) for lakes and rivers, and porous flows (elliptic equations) for groundwater. The variation in time of the domain in which each type of flow must be solved makes the simulation of flows in aquifers a scientific challenge. Our strategy consists of writing a model that can be solved in the whole domain, i.e. without domain decomposition.

At the beginning of the project, we started by considering only the saturated areas. In 46, we propose and study a unified model between the shallow water and Dupuit-Forchheimer models, which are classical models in their respective areas. A numerical scheme has been proposed and analysed; it satisfies a discrete entropy dissipation which ensures strong stability.
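To fix ideas on the porous regime, the Dupuit-Forchheimer model reduces, over a flat impermeable bottom, to the nonlinear diffusion equation h_t = (K h h_x)_x for the water-table height h with hydraulic conductivity K. A minimal explicit scheme for it can be sketched as below; this is an illustration only, not the unified entropy-dissipating scheme of 46, which also covers the shallow-water regime.

```python
import numpy as np

def dupuit_step(h, dx, dt, K=1.0):
    """One explicit finite-volume step for h_t = (K h h_x)_x
    (Dupuit-Forchheimer groundwater flow). Homogeneous Neumann
    boundaries via ghost-cell duplication keep the scheme
    mass-conservative."""
    hp = np.concatenate([h[:1], h, h[-1:]])   # ghost cells
    h_face = 0.5 * (hp[:-1] + hp[1:])         # h at cell faces
    grad = (hp[1:] - hp[:-1]) / dx            # h_x at cell faces
    flux = K * h_face * grad
    return h + dt / dx * (flux[1:] - flux[:-1])

# toy run: relaxation of a step in the water table
n = 100
dx = 0.1
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
for _ in range(200):
    # explicit stability restriction for diffusion coefficient K*h
    h = dupuit_step(h, dx, dt=0.5 * dx**2 / (2.0 * h.max()))
```

Total water volume is conserved exactly and the solution stays between its initial bounds, mimicking at the discrete level the stability that the entropy dissipation of the unified scheme guarantees.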

In 51, we also propose a model and a numerical strategy to take into account the air pockets that can be trapped under an impermeable structure. This work can also be used for the simulation of some marine energy converters such as the solutions of Seaturns or Hace. In parallel, we work on the structure of the code in order to more easily integrate further ideas. In particular, specific numerical integrators and time schemes have been implemented.

SuPerMan project on cordis.europa.eu

SuPerMan proposes the development and efficient implementation of new structure preserving schemes for conservation laws formulated in an elegant and universal form through covariant derivatives on spacetime manifolds.

Indeed, nonlinear systems of hyperbolic PDEs are characterized by invariants, whose preservation at the discrete level is not trivial, but plays a fundamental role in improving the long term predictivity and reducing the computational effort of modern algorithms. Besides mass and linear momentum conservation, typical of any Finite Volume scheme, the preservation of stationary and moving equilibria, asymptotic limits and interfaces still represents an open challenge, especially in astrophysical applications, such as turbulent flows in gas clouds rotating around black holes.

In this project, our focus will thus be on General Relativistic Hydrodynamics (GRHD) for which such Well Balanced (WB) Structure Preserving (SP) schemes have never been studied before. In particular, we plan to devise smart methods, independent of the coordinate system. This will be achieved by directly including the metric, implicitly contained in the covariant derivative, as a conserved variable inside the GRHD model.

This approach will first be tested on the Euler equations of gasdynamics with Newtonian gravity, extending already existing WB-SP techniques to general coordinate systems. All novel features will be carefully proven theoretically. Next, the new schemes will be incorporated inside a massively parallel high order accurate Arbitrary-Lagrangian-Eulerian Finite Volume (FV) and Discontinuous Galerkin (DG) code, to be released as open source. The feasibility of the project is guaranteed by the strong network surrounding the ER, including experts on WB-SP techniques and mesh adaptation (INRIA France), FV-DG schemes and GRHD (UniTN Italy) and computational astrophysics (MPG Germany). This MSCA project will support the applicant's transition to becoming an independent researcher.

eFlows4HPC project on cordis.europa.eu

Today, developers lack tools that enable the development of complex workflows involving HPC simulation and modelling with data analytics (DA) and machine learning (ML). The eFlows4HPC project aims to deliver a workflow software stack and an additional set of services to enable the integration of HPC simulation and modelling with big data analytics and machine learning in scientific and industrial applications. The software stack will allow the development of innovative adaptive workflows that use computing resources efficiently, while also considering innovative storage solutions.

To widen access to HPC for newcomers, the project will provide HPC Workflows as a Service (HPCWaaS), an environment for sharing, reusing, deploying and executing existing workflows on HPC systems. The workflow technologies and the associated machine learning and big data libraries used in the project leverage previous open source European initiatives. Specific optimization tasks for the use of accelerators (FPGAs, GPUs) and the EPI will be performed in the project use cases.

To demonstrate the workflow software stack, use cases from three thematic pillars have been selected. Pillar I focuses on the construction of Digital Twins for the prototyping of complex manufactured objects, integrating state-of-the-art adaptive solvers with machine learning and data mining, contributing to the Industry 4.0 vision. Pillar II develops innovative adaptive workflows for climate and for the study of Tropical Cyclones (TC) in the context of the CMIP6 experiment, including in-situ analytics. Pillar III explores the modelling of natural catastrophes - in particular, earthquakes and their associated tsunamis - shortly after such an event is recorded. Leveraging two existing workflows, the Pillar will work on integrating them with the eFlows4HPC software stack and on producing policies for urgent access to supercomputers. The pillar results will be demonstrated in the target community CoEs to foster adoption and gather feedback.

Microcard project on cordis.europa.eu

Cardiovascular diseases are the most frequent cause of death worldwide and half of these deaths are due to cardiac arrhythmia, a disorder of the heart's electrical synchronization system. Numerical models of this complex system are highly sophisticated and widely used, but to match observations in aging and diseased hearts they need to move from a continuum approach to a representation of individual cells and their interconnections. This implies a different, harder numerical problem and a 10,000-fold increase in problem size. Exascale computers will be needed to run such models.

We propose to develop an exascale application platform for cardiac electrophysiology simulations that is usable for cell-by-cell simulations. The platform will be co-designed by HPC experts, numerical scientists, biomedical engineers, and biomedical scientists, from academia and industry. We will develop, in concert, numerical schemes suitable for exascale parallelism, problem-tailored linear-system solvers and preconditioners, and a compiler to translate high-level model descriptions into optimized, energy-efficient system code for heterogeneous computing systems. The code will be parallelized with a recently developed runtime system that is resilient to hardware failures and will use an energy-aware task placement strategy.
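The per-cell workload that such a cell-by-cell simulation must advance at every time step is a system of nonlinear ODEs describing membrane excitability. As a toy stand-in (the platform targets far more detailed ionic models), the classic FitzHugh-Nagumo model stepped with explicit Euler looks like this:

```python
def fhn_step(v, w, dt, i_app=0.5, a=0.7, b=0.8, eps=0.08):
    """One explicit Euler step of the FitzHugh-Nagumo model, a classic
    two-variable caricature of cardiac cell excitability: v is the
    membrane potential, w a recovery variable, i_app an applied
    stimulus current. Parameter values are standard textbook choices,
    not those of any model used by the project."""
    dv = v - v**3 / 3.0 - w + i_app
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

# toy run: a single cell oscillating under a constant stimulus
v, w = -1.0, 1.0
for _ in range(5000):
    v, w = fhn_step(v, w, dt=0.05)
```

In a cell-by-cell model, millions of such ODE systems are coupled through the resistive connections between neighbouring cells, which is where problem-tailored solvers, preconditioners and exascale parallelism become indispensable.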

The platform will be applied in real-life use cases with high impact in the biomedical domain and will showcase HPC in this area where it is painfully underused. It will be made accessible for a wide range of users both as code and through a web interface.

We will further employ our HPC and biomedical expertise to accelerate the development of parallel segmentation and (re)meshing software, necessary to create, from available large volumes of microscopy data, the extremely large and complex meshes needed.

The platform will be adaptable to similar biological systems such as nerves, and components of the platform will be reusable in a wide range of applications.

ExaQUte project on cordis.europa.eu