The `CARDAMOM` team was created in 2015. Its activity combines *asymptotic PDE* (Partial Differential Equations) modelling,
*adaptive high order PDE discretizations*, and a *quantitative certification* step assessing
the sensitivity of outputs to both model components
(equations, numerical methods, etc.) and random variations of the data.
The goal is to improve parametric analysis and design cycles,
by increasing both accuracy and confidence in the results thanks to
improved physical and numerical modelling, and to a quantitative assessment of output uncertainties.
This requires a research program mixing PDE analysis,
high order discretizations, Uncertainty Quantification (UQ), robust optimization, and some specific engineering know-how.
Part of these scientific activities started in the `BACCHUS` and `MC2` teams, and are continued in `CARDAMOM`.

The objective of this project is to provide improved analysis and design tools for engineering applications involving fluid flows, and in particular flows with moving fronts.
In our applications *a front is either an actual material interface, or a well identified and delimited transition region in which
the flow undergoes a change in its dominant macroscopic character*. One example is the certification of wing de-/anti-icing systems, involving the prediction of ice formation and detachment,
and of ice debris trajectories, to evaluate the risk of downstream impact on aircraft components.
Another application, relevant for space reentry, is the study of transitional regimes in high altitude gas dynamics, in which extremely thin
layers appear in the flow that cannot be analysed with the classical continuum models (Navier-Stokes equations) used by engineers. An important example in
coastal engineering is the transition between propagating and breaking waves, characterized by a strong local production of vorticity and by
dissipative effects absent when waves propagate. Similar examples in energy and material engineering provide the motivation of this project.

All these application fields involve either the study of new technologies (e.g. new design/certification tools for aeronautics, or for wave energy conversion), or parametric studies of complex environments (e.g. harbour dynamics, or estuarine hydrodynamics), or hazard assessment and prevention. In all cases, computationally affordable, fast, and accurate numerical modelling is essential to improve the quality of (or to shorten) design cycles, and to allow performance enhancements in the early stages of development. The goal is to afford simulations over very long times with many parameters, or to embed a model in an alert system.

In addition, even in the best of circumstances, the reliability of numerical predictions is limited by the intrinsic randomness of the data used in practice
to define boundary conditions, initial conditions, geometry, etc. This uncertainty, related to measurement errors,
is defined as *aleatory*, and can be neither removed nor reduced. Physical models and the related Partial Differential Equations (PDEs) also
feature a structural uncertainty, since they are derived under assumptions of limited validity and calibrated with manipulated experimental data (filtering, averaging, etc.). These uncertainties are defined as *epistemic*, as they are a deficiency due to a lack of knowledge.
Unfortunately, measurements in fluids are delicate and expensive.
In complex flows, especially in flows involving interfaces and moving fronts, they are sometimes impossible to carry out, due to scaling problems, repeatability issues
(e.g. tsunami events), technical issues (different physics in the different flow regions)
or dangerousness (e.g. high temperature reentry flows, or combustion). Frequently,
they are impractical, due to the time scales involved (e.g. the characterisation of oxidation processes related
to a new material micro-/meso-structure).
This increases the uncertainties associated with measurements and reduces the amount of information available to construct physical/PDE models.
These uncertainties also play a crucial role in the numerical certification or optimization of a fluid based device.
Certification and optimization loops, however, make the required number of flow simulations grow to hundreds or even thousands.
The associated costs are usually prohibitive. So the real challenge is to be able to construct
an accurate and computationally affordable numerical model that handles uncertainties efficiently.
In particular, this model should be able to take into account the variability due to uncertainties,
those coming from the certification/optimization parameters as well as those coming from modelling choices.

To face this challenge, and to provide new tools to accurately and robustly model and certify engineering devices based on fluid flows with moving fronts, we propose a program mixing scientific research in asymptotic PDE analysis, high order adaptive PDE discretizations, and uncertainty quantification.

A standard way a certification study may be conducted can be described as two-box modelling. The first box is the
physical model itself, which is composed of three main elements: the PDE system, mesh generation/adaptation, and the discretization of the PDE (numerical scheme). The second box is the main robust certification loop, which contains separate boxes involving the evaluation of the physical model, the post-processing of the output, and the exploration of the spaces of physical and stochastic parameters (uncertainties).
There are some known interactions taking place in the loop which it is necessary to exploit as much as possible to realize the potential of high order methods.

As things stand today, we will not be able to take advantage of the potential of new high order numerical techniques and of hierarchical (multi-fidelity) robust certification approaches without a very aggressive adaptive methodology. Such a methodology will require interactions between, e.g., the uncertainty quantification methods and the adaptive spatial discretization, as well as with the PDE modelling part.
Such a strategy cannot be developed, let alone implemented in an operational context, without completely disassembling the two-box scheme, and letting all the parts (PDE system, mesh generation/adaptation, numerical scheme, evaluation of the physical model, post-processing of the output, exploration of the spaces of physical and stochastic parameters) interact together. This is what we want to do in `CARDAMOM`.

Our strength is also our unique chance of exploring the interactions between all the parts. We will try to answer some fundamental questions related to the following aspects:

What are the relations between PDE model accuracy (asymptotic error) and scheme accuracy, and how can we control, and possibly exploit, these relations to minimize the error for a given computational effort ;

How to devise and implement adaptation techniques ;

How to exploit the wealth of information made available by the optimization *and* uncertainty quantification processes to construct a more aggressive adaptation strategy in physical, parameter, and stochastic space, and in the physical model itself ;

These research avenues, related to the PDE models and numerical methods used, will allow us to have an impact on the targeted applications communities, which are:

Aeronautics and aerospace engineering (de-/anti-icing systems, space re-entry) ;

Energy engineering (organic Rankine cycles and wave energy conversion) ;

Material engineering (self healing composite materials) ;

Coastal engineering (coastal protection, hazard assessment etc.).

The main research directions related to the above topics are discussed in the following section.

In many of the applications we consider, intermediate fidelity models are or can be derived using an asymptotic expansion of the relevant scale resolving PDEs, possibly considering some averaged form of the resulting continuous equations. The resulting systems of PDEs are often very complex, and their characterization, e.g. in terms of stability, is unclear, or poor, or too complex to allow discrete analogues of the continuous properties to be obtained. This makes the numerical approximation of these PDE systems a real challenge. Moreover, most of these models are based on asymptotic expansions involving small geometrical scales. This is true for many applications considered here involving flows in/of thin layers (free surface waves, liquid films on wings generating ice layers, oxide flows in material cracks, etc.). Such an asymptotic expansion is nothing else than a discretization (some sort of Taylor expansion) in terms of the small parameter. The actual discretization of the PDE system is another expansion, in space, involving the mesh size as the small parameter. What is the interaction between these two expansions? Could we use the spatial discretization (truncation error) as a means of filtering undesired small scales, instead of having to explicitly derive PDEs for the large scales? We will investigate in depth the relations between asymptotics and discretization by:

comparing the asymptotic limits of discretized forms of the relevant scale resolving equations with the discretization of the analogous continuous asymptotic PDEs. Can we discretize a well understood system of PDEs instead of a less understood and more complex one? ;

studying the asymptotic behaviour of the error terms generated by a coarse one-dimensional discretization in the direction of the “small scale”. What is the influence of the number of cells along the vertical direction, and of their clustering? ;

deriving equivalent continuous equations (modified equations) for anisotropic discretizations in which the direction of the “small scale” is approximated with a small number of cells. What is the relation with known asymptotic PDE systems?
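As an illustration of the kind of analysis involved in the last point, recall the textbook modified equation of the first order upwind scheme for linear advection (a standard computation, not specific to our models):

```latex
% Semi-discrete upwind scheme for u_t + a u_x = 0, with a > 0:
%   du_i/dt + a (u_i - u_{i-1}) / \Delta x = 0.
% Expanding u_{i-1} = u(x_i - \Delta x) in a Taylor series yields the
% equivalent (modified) continuous equation
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x}
  = \frac{a\,\Delta x}{2}\,\frac{\partial^{2} u}{\partial x^{2}} + O(\Delta x^{2})
```

The truncation error acts as a mesh-controlled diffusion. In an anisotropic setting, a deliberately coarse resolution in the “small scale” direction produces exactly this kind of filtering term, and the comparison with the continuous asymptotic PDEs aims to quantify this effect.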

Our objective is to gain sufficient control of the interaction between discretization and asymptotics to be able to replace
the coupling of several complex PDE systems by adaptive, strongly anisotropic finite element approximations of relevant and well understood PDEs.
Here the anisotropy is intended in the sense of having a specific direction in which a much poorer (and possibly variable with the flow conditions)
polynomial approximation (expansion) is used. The final goal is, profiting from the availability of faster and cheaper computational platforms,
to be able to automatically control numerical *and* physical accuracy of the model with the same techniques.
This activity will be used to improve our modelling in coastal engineering as well as
for de-/anti-icing systems, wave energy converters, and composite materials (cf. next sections).

In parallel to these developments, we will make an effort to gain a better understanding of continuous asymptotic PDE models. We will in particular work on improving, and possibly simplifying, their numerical approximation. An effort will also be made to embed in these more complex nonlinear PDE models discrete analogues of the operator identities necessary for stability (see e.g. recent work on this topic and references therein).

We will work on both the improvement of high order mesh generation and adaptation techniques, and the construction of more efficient, adaptive high order discretisation methods.

Concerning curved mesh generation, we will focus on two points. First, we will propose a robust and automatic method to generate curved simplicial meshes for realistic geometries. The untangling algorithm we plan to develop is a hybrid technique that combines a local mesh optimization applied on the surface of the domain with a linear elasticity analogy applied in its volume. Second, we plan to extend the method proposed in to hybrid (prism/tetra) meshes.

For time dependent adaptation we will try to exploit as much as possible mesh deformation and re-meshing techniques.

The development of high order schemes for the discretization of the PDE will be a major part of our activity. We will work from the start in an Arbitrary Lagrangian Eulerian setting, so that mesh movement will be easily accommodated, and investigate the following main points:

the ALE formulation is well adapted both to handling moving meshes and to providing conservative, high order, and monotone remaps between different meshes. We want to address the cost-accuracy trade-off of adaptive mesh computations by exploring different degrees of coupling between the flow and the mesh PDEs. Initial experience has indicated that a clever coupling may lead to a considerable CPU time reduction for a given resolution. This balance is certainly dependent on the nature of the PDEs, on the accuracy level sought, on the cost of the scheme, and on the time stepping technique. All these elements will be taken into account to try to provide the most efficient formulation ;

the conservation of volume, and the subsequent preservation of constant mass-momentum-energy states on deforming domains, is one of the most fundamental elements of Arbitrary Lagrangian-Eulerian formulations. For complex PDEs such as the ones considered here, and especially for some applications, there may be a competition between the conservation of e.g. mass and the conservation of other constant states, as important as mass. This is typically the case for free surface flows, in which mass preservation is in competition with the preservation of constant free surface levels. Similar problems may arise in other applications. Possible solutions to this competition may come from super-approximation (the use of higher order polynomials) of some of the data (e.g. the bathymetry), allowing to reduce the error in the preservation of one of the competing quantities. This is similar to what is done in super-parametric approximations of the boundaries of an object immersed in the flow, except that in our case the data may enter the PDE explicitly, and not only through the boundary conditions. Several efficient solutions for this issue will be investigated to obtain fully conservative moving mesh approaches ;
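A minimal one-dimensional sketch of this competition (toy data, Python, shallow water “lake at rest”): a naive centered discretization of the momentum flux and of the bathymetry source term leaves a spurious residual on a constant free surface, while the same balance written directly in terms of the free surface level is preserved to machine accuracy.

```python
import numpy as np

g = 9.81
N = 100
x = np.linspace(0.0, 10.0, N)
b = 0.5 * np.exp(-((x - 5.0) ** 2))   # smooth bathymetry bump (toy data)
eta = 2.0                              # lake at rest: constant free surface level
h = eta - b                            # water depth

def ddx(f):
    # second order centered differences in the interior
    return np.gradient(f, x)

# naive momentum residual: d/dx (g h^2 / 2) + g h db/dx
# analytically zero at rest, but the two truncation errors do not cancel
res_naive = ddx(0.5 * g * h**2) + g * h * ddx(b)

# "well-balanced" form: g h d(eta)/dx, with eta = h + b constant
res_wb = g * h * ddx(h + b)
```

The discrete residual of the naive form is of the order of the truncation error, i.e. a spurious flow is generated from a state of rest; the second form vanishes identically. This is the simplest instance of the competition between conserved quantities referred to above.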

an issue related to the previous one is the accurate treatment of wall boundaries. It is known that, even for standard lower order (second order) methods,
a higher order, curved approximation of the boundaries may be beneficial. This, however, may become difficult when considering moving objects, as in the case e.g. of the study of the impact of ice debris in the flow.
To alleviate this issue, we plan to follow on with our initial work on the combined use of immersed boundary techniques with high order, anisotropic (curved) mesh adaptation.
In particular, we will develop combined approaches involving high order hybrid meshes on fixed boundaries with the use of penalization techniques and immersed boundaries
for moving objects. We plan to study the accuracy obtainable across discontinuous functions with these combined approaches ;

the proper treatment of different physics may be addressed by using mixed/hybrid schemes in which different variables/equations are approximated using different polynomial expansions. A typical example is our work on the discretization of highly non-linear wave models, in which we have shown how to use a standard continuous Galerkin method for the elliptic equation/variable representative of the dispersive effects, while the underlying hyperbolic system is evolved using a (discontinuous) third order finite volume method. This technique will be generalized to other classes of discontinuous methods, and similar ideas will be used in other contexts to provide a flexible approximation. Such methods have clear advantages in multiphase flows, but not only there. A typical example where such mixed methods are beneficial are flows involving different species and tracer equations, which are typically better treated with a discontinuous approximation. Another example is the use of this mixed approximation to describe the topography with a high order continuous polynomial even in a discontinuous method. This allows to greatly simplify the numerical treatment of the bathymetric source terms ;

the enhancement of stabilized methods based on some continuous finite element approximation will remain a main topic. We will further
pursue the study of the construction of simplified stabilization operators which do not involve any contributions to the mass matrix.
We will in particular generalize our initial results to higher order spatial approximations using cubature points, or Bezier polynomials, or also hierarchical approximations.
This will also be combined with time dependent variants of the reconstruction techniques initially proposed by D. Caraeni,
allowing to have a more flexible approach ;

time stepping is an important issue, especially in the presence of local mesh adaptation. The techniques we use will force us to investigate local and multilevel techniques. We will study the possibility of constructing semi-implicit methods combining extrapolation techniques with space-time variational approaches. Other techniques will be considered, such as multi-stage type methods obtained using Defect-Correction, Multi-step Runge-Kutta methods, as well as spatial partitioning techniques. A major challenge will be to guarantee sufficient locality to the time integration method to allow highly refined meshes to be treated efficiently, especially for viscous reactive flows. Another challenge will be to embed these methods in the stabilized methods we will develop.

As already remarked, classical methods for uncertainty quantification are affected by the so-called Curse of Dimensionality. The adaptive approaches proposed so far are limited in terms of efficiency, or of accuracy. Our aim here is to develop methods and algorithms permitting a very high-fidelity simulation in the physical and in the stochastic space at the same time. We will focus on both non-intrusive and intrusive approaches.

Simple non-intrusive techniques to reduce the overall cost of simulations under uncertainty will be based on adaptive quadrature in stochastic space, with mesh adaptation in physical space using error monitors related to the variance or to the sensitivities obtained e.g. by an ANOVA decomposition. For steady state problems, remeshing using metric techniques is enough. For time dependent problems, both mesh deformation and re-meshing techniques will be used. This approach may be easily used in multiple space dimensions to minimize the overall cost of model evaluations, by using high order moments of a properly chosen output functional for the adaptation (as in optimization). Also, for high order curved meshes, the use of high order moments and sensitivities issued from the UQ method or from optimization provides a viable solution to the lack of error estimators for high order schemes.
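A minimal illustration of the non-intrusive idea (with a hypothetical scalar toy model, not one of our solvers): the statistics of an output functional are obtained from a small number of independent deterministic evaluations at quadrature points in the stochastic space.

```python
import numpy as np

def model(xi):
    # hypothetical output of a deterministic solver, as a function
    # of one uncertain parameter xi, uniformly distributed on [-1, 1]
    return np.exp(0.3 * xi)

# Gauss-Legendre quadrature in the stochastic space (uniform density 1/2 on [-1, 1])
nodes, weights = np.polynomial.legendre.leggauss(8)
outputs = model(nodes)                    # 8 independent "solver runs"
mean = 0.5 * np.sum(weights * outputs)
variance = 0.5 * np.sum(weights * (outputs - mean) ** 2)
```

Each evaluation is independent, which is what makes the approach embarrassingly parallel and usable with any application code; adaptivity enters by choosing where to add quadrature points, and the resulting moments can in turn drive mesh adaptation in physical space.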

Despite the coupling between stochastic and physical space, this approach can be made massively parallel by means of extrapolation/interpolation techniques for the high order moments, in time and on a reference mesh, guaranteeing the complete independence of deterministic simulations. This approach has the additional advantage of being feasible for several different application codes due to its non-intrusive character.

To improve on the accuracy of the above methods, intrusive approaches will also be studied. To propagate uncertainties in stochastic differential equations, we will use Harten's multiresolution framework. This framework allows a reduction of the dimensionality of the discrete space of function representation, defined in a proper stochastic space. This reduction allows a reduction of the number of explicit evaluations required to represent the function, and thus a gain in efficiency. Moreover, multiresolution analysis offers a natural tool to investigate the local regularity of a function, can be employed to build an efficient refinement strategy, and also provides a procedure to refine/coarsen the stochastic space for unsteady problems. This strategy should allow to capture and follow all types of flow structures and, as proposed in the literature, allows to formulate a non-linear scheme in terms of compression capabilities, which should allow to handle non-smooth problems. The potential of the method also relies on its moderately intrusive character, compared to e.g. spectral Galerkin projection, where a theoretical manipulation of the original system is needed.
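A one-dimensional sketch of the mechanism (Haar wavelets, the simplest multiresolution setting, applied to an assumed toy quantity of interest with a jump at a non-dyadic point): the detail coefficients are negligible where the function is smooth and concentrate at the discontinuity, which is exactly what drives the refinement/coarsening strategy.

```python
import numpy as np

def haar_decompose(u):
    # one multiresolution level: cell-pair averages (coarse) and half-differences (details)
    coarse = 0.5 * (u[0::2] + u[1::2])
    detail = 0.5 * (u[0::2] - u[1::2])
    return coarse, detail

def haar_reconstruct(coarse, detail):
    u = np.empty(2 * coarse.size)
    u[0::2] = coarse + detail
    u[1::2] = coarse - detail
    return u

N = 256
x = (np.arange(N) + 0.5) / N
# smooth response with a jump at x = 1/3 (toy discontinuous quantity of interest)
u = np.sin(2 * np.pi * x) + np.where(x < 1.0 / 3.0, 0.0, 2.0)

details, coarse = [], u.copy()
for _ in range(3):
    coarse, d = haar_decompose(coarse)
    details.append(d)

# the transform is exactly invertible (no information is lost)
rec = coarse
for d in reversed(details):
    rec = haar_reconstruct(rec, d)

# thresholding: only the few details straddling the jump survive
eps = 0.1
kept = sum(int(np.sum(np.abs(d) > eps)) for d in details)
total = sum(d.size for d in details)
```

Keeping only the above-threshold details compresses the representation strongly while localizing the refinement where the solution is non-smooth; the multidimensional, unsteady version of this idea is what the intrusive scheme builds on.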

Several activities are planned to generalize our initial work, and to apply it to complex flows in multiple (space) dimensions and with many uncertain parameters.

The first is the improvement of the efficiency. This may be achieved by means of anisotropic mesh refinement, and by experimenting with a strong parallelization of the method. Concerning the first point, we will investigate several anisotropic refinement criteria existing in the literature (also in the UQ framework), starting with those already used in the team to adapt the physical grid. Concerning the implementation, the scheme is conceived to be highly parallel due to the external cycle on the number of dimensions in the space of uncertain parameters. In principle, a number of parallel threads equal to the number of spatial cells could be employed. The scheme should be developed and tested for treating unsteady and discontinuous probability density functions, and correlated random variables. Both the compression capabilities and the accuracy of the scheme (in the stochastic space) should be enhanced with a high-order multidimensional conservative and non-oscillatory polynomial reconstruction (ENO/WENO).

Another main objective is related to the use of multiresolution in both physical and stochastic space. This requires a careful handling of data and an updated definition of the wavelet. Until now, only a weak coupling has been performed, since the number of points in the stochastic space varies according to the physical space, but the number of points in the physical space remains unchanged. Several works exist on the multiresolution approach for image compression, but this could be the first time in which this kind of approach is applied at the same time in the two spaces, with an unsteady procedure for refinement (and coarsening). The experimental code developed using these technologies will have to fully exploit the processing capabilities of modern massively parallel architectures, since there is a unique mesh to handle in the coupled physical/stochastic space.

Due to the computational cost, it is of prominent importance to consider multi-fidelity approaches combining high-fidelity and low-fidelity computations. Note that low-fidelity solutions can be given both by the use of surrogate models in the stochastic space, and/or by simplified choices of the physical models of some elements of the system. Procedures which deal with optimization under uncertainty for complex problems may require the evaluation of costly objective and constraint functions hundreds or even thousands of times. The associated costs are usually prohibitive. For this reason, the robustness of the optimal solution should be assessed at an affordable cost, thus requiring the formulation of efficient methods for coupling the optimization and stochastic spaces. Different approaches will be explored. Work will be developed along three axes:

a robust strategy using the statistics evaluation will be applied separately,
*i.e.* using only low- or high-fidelity evaluations. Some classical optimization algorithms will be used in this case.
The influence of high-order statistics and of model reduction on the robust design optimization will be explored,
also by further developing some low-cost methods for robust design optimization working on the so-called Simplex ;

a multi-fidelity strategy using low-fidelity and high-fidelity estimators in an efficient way, both in physical and in stochastic space, will be conceived, using a Bayesian framework to take into account model discrepancy, and a Polynomial Chaos (PC) expansion model to build a surrogate model ;

develop advanced methods for robust optimization. In particular, the Simplex
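A minimal sketch of one way low- and high-fidelity evaluations can be combined (a control variate estimator on hypothetical toy models; the Bayesian/PC machinery mentioned above is of course richer): the cheap correlated model absorbs most of the sampling noise, so few expensive evaluations are needed for a given accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def high_fidelity(x):      # expensive model (toy stand-in)
    return np.sin(x) + 0.1 * x**2

def low_fidelity(x):       # cheap, biased, strongly correlated surrogate
    return x + 0.1 * x**2  # sin(x) ~ x for small x

n_hf, n_lf = 50, 20000
x_hf = rng.normal(0.0, 0.3, n_hf)      # few expensive samples
x_lf = rng.normal(0.0, 0.3, n_lf)      # many cheap samples

y_hf = high_fidelity(x_hf)
y_lf = low_fidelity(x_hf)              # LF evaluated at the same points as HF
alpha = np.cov(y_hf, y_lf)[0, 1] / np.var(y_lf, ddof=1)

# control variate correction: HF mean + alpha * (accurate LF mean - paired LF mean)
estimate = y_hf.mean() + alpha * (low_fidelity(x_lf).mean() - y_lf.mean())
```

The variance of the corrected estimator is driven by the residual between the two models rather than by the output itself, which is the basic argument behind all the multi-fidelity strategies above.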

This work is related to the activities foreseen in the EU contract MIDWEST, in the ANR LabCom project VIPER (currently under evaluation), in a joint project with DGA and VKI, and in two projects under way with AIRBUS and SAFRAN-HERAKLES.

The impact of large ice debris on downstream aerodynamic surfaces, and their ingestion by aft mounted engines, must be considered during the aircraft certification process. Such debris is typically the result of ice accumulation on unprotected surfaces, of ice accretion downstream of ice protected areas, or of ice growth on surfaces due to delayed activation of ice protection systems (IPS) or to IPS failure. This raises the need for accurate ice trajectory simulation tools to support the pre-design, design, and certification phases, while improving cost efficiency. Present ice trajectory simulation tools have limited capabilities, due to the lack of appropriate experimental aerodynamic force and moment data for ice fragments, and to the large number of variables that can affect the trajectories of ice particles in the aircraft flow field: shape, size, mass, initial velocity, shedding location, etc. There are generally two types of model used to track shed ice pieces. The first type assumes that ice pieces do not significantly affect the flow. The second type takes into account the interaction of the ice pieces with the flow. We are concerned with the second type of model, involving fully coupled time-accurate aerodynamic and flight mechanics simulations, and thus requiring the use of highly efficient adaptive tools, and possibly tools allowing to easily track moving objects in the flow. We will in particular pursue and enhance our initial work based on adaptive immersed boundary capturing of moving ice debris, whose movements are computed using basic mechanical laws.

It has been proposed to model ice shedding trajectories by an innovative paradigm based on CArtesian grids, PEnalization and LEvel Sets (the LESCAPE code). Our objective is to use the potential of high order unstructured mesh adaptation and immersed boundary techniques to provide a geometrically flexible extension of this idea. These activities will be linked to the development of efficient mesh adaptation and time stepping techniques for time dependent flows, and to their coupling with the immersed boundary methods we started developing in the FP7 EU project STORM. In these methods, we compensate for the error introduced at solid walls by the penalization by using anisotropic mesh adaptation. From the numerical point of view, one of the major challenges is to guarantee the efficiency and accuracy of the time stepping in the presence of highly stretched adaptive and moving meshes. Semi-implicit, locally implicit, multi-level, and split discretizations will be explored to this end.

Besides the numerical aspects, we will deal with modelling challenges. One source of complexity is the initial conditions, which are essential to compute ice shedding trajectories. It is thus extremely important to understand the mechanisms of ice release. With the development of the next generations of engines and aircraft, there is a crucial need to better assess and predict icing aspects early in the design phases, and to identify breakthrough technologies for ice protection systems compatible with future architectures. When a thermal ice protection system is activated, it melts a part of the ice in contact with the surface, creating a liquid water film and therefore lowering the ability of the ice block to adhere to the surface. The aerodynamic forces are then able to detach the ice block from the surface. In order to assess the performance of such a system, it is essential to understand the mechanisms by which the aerodynamic forces manage to detach the ice. The current state of the art in icing codes is an empirical criterion, which is unsatisfactory. Following early work on this problem, we will develop appropriate asymptotic PDE approximations allowing to describe ice formation and detachment, trying to embed in this description elements from damage/fracture mechanics. These models will constitute closures for aerodynamic RANS and URANS simulations, in the form of PDE wall models or modified boundary conditions.

In addition, several sources of uncertainty are associated with the ice geometry, size, orientation, and the shedding location. In very few papers, sensitivity analyses based on the Monte Carlo method have been conducted to take into account the uncertainties of the initial conditions and the chaotic nature of the ice particle motion. We aim to propose a systematic approach to handle every source of uncertainty in an efficient way, relying on state-of-the-art techniques developed in the team. In particular, we will propagate uncertainties on the initial conditions (position, orientation, velocity, ...) through a low-fidelity model in order to get statistics of a multitude of particle tracks. This study will be done in collaboration with ETS (Ecole de Technologie Supérieure, Canada). The long-term objective is to produce footprint maps and to analyse the sensitivity of the models developed.
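A sketch of the kind of low-fidelity propagation we have in mind (a point-mass ballistic model with quadratic drag in an assumed uniform freestream; all numbers are illustrative, not aircraft data): sampling the release conditions yields landing-point statistics, the building block of a footprint map.

```python
import numpy as np

rng = np.random.default_rng(42)
g, rho_air, rho_ice = 9.81, 1.2, 900.0
wind = np.array([60.0, 0.0])           # assumed uniform freestream (m/s)

def landing_abscissa(z0, u0, w0, diam, cd=1.0, dt=2e-3):
    """Low-fidelity model: spherical fragment under gravity + quadratic drag."""
    mass = rho_ice * np.pi / 6.0 * diam**3
    area = np.pi / 4.0 * diam**2
    pos = np.array([0.0, z0])          # (x, z), release at x = 0
    vel = np.array([u0, w0])
    while pos[1] > 0.0:                # integrate until ground impact
        vrel = wind - vel
        drag = 0.5 * rho_air * cd * area * np.linalg.norm(vrel) * vrel
        vel = vel + dt * (drag / mass + np.array([0.0, -g]))
        pos = pos + dt * vel
    return pos[0]                      # downstream landing abscissa

# propagate uncertain release conditions through the low-fidelity model
landings = np.array([
    landing_abscissa(z0=rng.uniform(2.0, 4.0),
                     u0=rng.normal(0.0, 1.0), w0=rng.normal(0.0, 1.0),
                     diam=rng.uniform(0.02, 0.06))
    for _ in range(200)
])
mean_x, std_x = landings.mean(), landings.std()
```

Replacing the Monte Carlo loop by the adaptive quadrature or multiresolution techniques described earlier, and the point-mass model by higher-fidelity coupled simulations, gives the multi-fidelity propagation chain targeted here.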

We will develop modelling and design tools, as well as dedicated platforms, for Rankine cycles using complex fluids (organic compounds), and for wave energy extraction systems.

*Organic Rankine Cycles (ORCs)* use heavy organic compounds as working fluids. This results in superior efficiency over steam Rankine cycles for source temperatures below 900 K.
ORCs typically require only a single-stage rotating component making them much simpler than typical multi-stage steam turbines.
The strong pressure reduction in the turbine may lead to supersonic flows in the rotor, and thus to the appearance of shocks, which reduces the efficiency due to the associated losses.
To avoid this, either a larger multi-stage installation is used, in which smaller pressure drops are obtained in each stage, or centripetal turbines are used at very high rotation speeds (of the order of 25,000 rpm).
The second solution keeps the simplicity of the expander, but leads to poor turbine efficiencies (60-80%, compared to modern, highly optimized steam and gas turbines) and to higher mechanical constraints.
The use of *dense-gas working fluids*, *i.e.* operating close to the saturation curve, in properly chosen conditions could increase the turbine critical Mach number avoiding the formation of shocks,
and increasing the efficiency. Specific shape optimization may enhance these effects, possibly allowing the reduction of rotation speeds.
However, dense gases may have significantly different properties with respect to dilute ones. Their
dynamics is governed by a thermodynamic parameter known as the fundamental derivative of gas dynamics

$$\Gamma = 1 + \frac{\rho}{c}\left(\frac{\partial c}{\partial \rho}\right)_s,$$

where $\rho$ is the density, $c$ is the speed of sound, and the derivative is taken at constant entropy $s$.
The simulation of these gases requires accurate thermodynamic models, such as Span-Wagner or Peng-Robinson. The data needed to build these models are scarce, due to the difficulty of performing reliable experiments; the related uncertainty is thus very high. Our work will go in the following directions:

develop deterministic models for the turbine and for the other elements of the cycle. These will involve multi-dimensional high fidelity models, as well as intermediate and low fidelity (one- and zero-dimensional) models, for the turbine, and some 0D/1D models for the other elements of the cycle (pump, condenser, etc.) ;

validate the coupling between the various elements. The following aspects will be considered: characterization of the uncertainties on the cycle components (e.g. the empirical coefficients modelling the pump or the condenser), calibration of the thermodynamic parameters, modelling of the uncertainty of each element, and the influence of unsteady experimental data ;

demonstrate the interest of a specific optimization of geometry, operating conditions, and the choice of the fluid, according to the geographical location by including local solar radiation data. Multi-objective optimization will be considered to maximize performance indexes (e.g. Carnot efficiency, mechanical work and energy production), and to reduce the variability of the output.

This work will provide modern tools for the robust design of ORC systems. It benefits from the direct collaboration with the SME EXOES (ANR LabCom VIPER), and from a collaboration with LEMMA.

*Wave energy conversion* is an emerging sector in energy engineering. The design of new and efficient Wave Energy Converters (WECs) is thus a crucial activity.
As pointed out by Weber , it is more economical to raise the technology performance level (TPL) of a wave energy converter concept at a low technology readiness level (TRL).
Such a development path puts a greater demand on the numerical methods used. The findings of Weber also tell us that important design decisions as well as optimization should be performed as early in the development process as possible. However, as already mentioned, today the wave energy sector relies heavily on the use of tools based on simplified linear hydrodynamic models for the prediction of motions, loads, and power production.
Our objective is to provide this sector, and especially SMEs, with robust design tools
to minimize the uncertainties in predicted power production, loads, and costs of wave energy.

Following our initial work , we will develop, analyse, compare, and use for multi-fidelity optimization, non-linear models of different scales (fidelities), ranging from simple linear hydrodynamics, over asymptotic discrete nonlinear wave models, to non-hydrostatic anisotropic Euler free surface solvers. We will not work on the development of small scale models (VOF-RANS or LES) but may use such models, developed by our collaborators, for validation purposes. These developments will benefit from all our methodological work on asymptotic modelling and high order discretizations. As shown in , asymptotic models for WECs involve an equation for the pressure on the body, inducing a PDE structure similar to that of the incompressible flow equations. The study of appropriate stable and efficient high order approximations (velocity-pressure coupling, efficient time stepping) will be an important part of this activity. Moreover, the flow-floating body interaction formulation introduces time stepping issues similar to those encountered in fluid-structure interaction problems, and requires a clever handling of complex floater geometries based on adaptive and ALE techniques. For this application, the derivation of fully discrete asymptotics may actually simplify our task.

Once available, we will use this hierarchy of models to investigate and identify the modelling errors, and provide a more certain estimate of the cost of wave energy. Subsequently we will look into optimization cycles by comparing time-to-decision in a multi-fidelity optimization context. In particular, this task will include the development and implementation of appropriate surrogate models to reduce the computational cost of expensive high fidelity models; artificial neural networks (ANNs) and Kriging response surfaces (KRS) in particular will be investigated. This activity on asymptotic non-linear modelling for WECs, which has received very little attention in the past, will provide entirely new tools for this application. Multi-fidelity robust optimization is also an approach which has never been applied to WECs.
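As a sketch of the kind of surrogate in question, the following hypothetical example interpolates a handful of expensive model evaluations with a Gaussian-kernel (simple Kriging-like) response surface; the kernel choice, length scale, and stand-in "solver" are illustrative assumptions, not the settings of the actual optimization chain:

```python
import numpy as np

def rbf_surrogate(x_train, y_train, length=0.15):
    """Gaussian-kernel interpolant: a simple Kriging-like response surface."""
    K = np.exp(-((x_train[:, None] - x_train[None, :]) / length) ** 2)
    # small diagonal 'nugget' added for numerical stability
    w = np.linalg.solve(K + 1e-10 * np.eye(len(x_train)), y_train)
    def predict(x):
        k = np.exp(-((np.atleast_1d(x)[:, None] - x_train[None, :]) / length) ** 2)
        return k @ w
    return predict

# stand-in for a few expensive high-fidelity evaluations of a design parameter
x = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * x)   # placeholder output of a high-fidelity solver
model = rbf_surrogate(x, y)   # cheap to evaluate inside an optimization loop
```

Inside an optimization loop, the surrogate replaces most calls to the expensive solver, which is invoked only to refresh the training set.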

This work is the core of the EU OCEANEranet MIDWEST project, which we coordinate. It will be performed in collaboration with our European partners, and with a close supervision of European SMEs in the sector, which are part of the steering board of MIDWEST (WaveDragon, Waves4Power, Tecnalia).

Because of their high strength and low weight, ceramic-matrix composite materials (CMCs) are the focus of active research for aerospace and energy applications involving high temperatures, either military or civil. Though based on brittle ceramic components, these composites are not brittle, thanks to a fibre/matrix interphase that preserves the fibres from cracks appearing in the matrix. Recent developments aim at introducing, also in civil aero engines, a specific class of CMCs that show a self-healing behaviour. Self-healing consists in filling cracks appearing in the material with a dense fluid formed in-situ by oxidation of part of the matrix components. Self-healing (SH) CMCs are composed of a complex three-dimensional topology of woven fabrics containing fibre bundles immersed in a matrix coating of different phases. The oxide seal protects the fibres, which are sensitive to oxidation, thus delaying failure. The obtained lifetimes reach hundreds of thousands of hours .

The behaviour of a fibre bundle is actually extremely variable, as the oxidation reactions generating the self-healing mechanism have kinetics strongly dependent on temperature and composition. In particular, the lifetime of SH-CMCs depends on: (i) temperature and composition of the surrounding atmosphere; (ii) composition and topology of the matrix layers; (iii) the competition of the multidimensional diffusion/oxidation/volatilization processes; (iv) the multidimensional flow of the oxide in the crack; (v) the inner topology of fibre bundles; (vi) the distribution of critical defects in the fibres. Unfortunately, experimental investigations on the full materials are too long (they can last years) and their output too qualitative (the coupled effects can only be observed a-posteriori on a broken sample). Modelling is thus essential to study and to design SH-CMCs.
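To fix ideas on the temperature dependence in point (i), the healing kinetics can be sketched with a generic diffusion-limited (parabolic) oxide-growth law with Arrhenius kinetics; the prefactor and activation energy below are placeholder values for illustration, not fitted SH-CMC data:

```python
import math

def oxide_thickness(t, T, A=1.0e-10, Ea=1.2e5, R=8.314):
    """Parabolic oxide growth e(t) = sqrt(2*k(T)*t), k(T) = A*exp(-Ea/(R*T)).
    A [m^2/s] and Ea [J/mol] are illustrative placeholders."""
    k = A * math.exp(-Ea / (R * T))
    return math.sqrt(2.0 * k * t)

# growth is much faster at higher temperature and scales like sqrt(t)
e_hot = oxide_thickness(3600.0, 1500.0)
e_cold = oxide_thickness(3600.0, 1200.0)
```

The strong (exponential) sensitivity to temperature is one reason why the lifetime of the material is so variable, and why uncertainty quantification is needed on top of the deterministic closure models.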

In collaboration with the LCTS laboratory (a joint CNRS-CEA-SAFRAN-Bordeaux University lab devoted to the study of thermo-structural materials), we are developing a multi-scale model in which a structural mechanics solver is coupled with a closure model for the crack physico-chemistry. This model is obtained as a multi-dimensional asymptotic crack-averaged approximation of the transport equations (Fick's laws) with chemical reaction sources, plus a potential model for the flow of oxide , , . We have demonstrated the potential of this model by showing the importance of taking into account the multi-dimensional topology of a fibre bundle (distribution of fibres) in the rupture mechanism. This means that the zero-dimensional models used in most studies (see e.g. ) appreciably underestimate the lifetime of the material. Based on these recent advances, we will further pursue the development of multi-scale multi-dimensional asymptotic closure models for the parametric design of self-healing CMCs. Our objectives are to provide: (i) a new, non-linear multi-dimensional mathematical model of CMCs, in which the physico-chemistry of the self-healing process is more strongly coupled to the two-phase (liquid-gas) hydrodynamics of the healing oxide ; (ii) a model to represent and couple crack networks ; (iii) a robust and efficient coupling with the structural mechanics code ; (iv) a validation of this platform against experimental data obtained at the LCTS laboratory. The final objective is to set up a multi-scale platform for the robust prediction of the lifetime of SH-CMCs, which will be a helpful tool for the tailoring of the next generation of these materials.

Our objective is to bridge the gap between the development of high order adaptive methods, which has mainly been performed in the industrial context, and environmental applications, with particular attention to coastal and hydraulic engineering. We want to provide tools for adaptive non-linear modelling at large and intermediate scales (near shore, estuarine and river hydrodynamics). We will develop multi-scale adaptive models for free surface hydrodynamics. Besides the models and codes themselves, based on the most advanced numerics we will develop during this project, we want to provide sufficient know-how to control, adapt and optimize these tools.

We will focus our effort on the understanding of the interactions between asymptotic approximations
and numerical approximations. This is extremely important in at least two respects.
The first is the capability of a numerical model to handle highly dispersive wave propagation.
This is usually achieved by high accuracy asymptotic PDE expansions. Here we plan to make heavy use of our
results concerning the relations between vertical asymptotic expansions and
standard finite element approximations. In particular, we will invest some effort in the development of

Another important aspect, not sufficiently well understood at the moment, is the role of dissipation in wave breaking regions. There are several examples of breaking closures, going from algebraic and PDE-based eddy viscosity methods , , , , to hybrid methods coupling dispersive PDEs with hyperbolic ones, which mimic wave breaking with travelling bores , , , , . In both cases numerical dissipation plays an important role in the activation (or not) of the breaking closure, and its quantitative contribution to the flow has not been properly investigated. These elements must be clarified to allow full control of the adaptive techniques for the models used in this type of applications.
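The detection step of a hybrid closure can be sketched as follows: cells where the free surface slope exceeds a critical value are flagged, and there the dispersive terms are switched off, so that the hyperbolic (shallow water) equations and the shock-capturing dissipation take over. The slope criterion and threshold below are deliberately simplified assumptions; the closures cited above use more elaborate criteria:

```python
import numpy as np

def breaking_flags(eta, dx, slope_crit=0.3):
    """Flag cells as 'breaking' where |d eta / dx| exceeds slope_crit.
    In a hybrid scheme, flagged cells revert to the hyperbolic equations."""
    return np.abs(np.gradient(eta, dx)) > slope_crit
```

The open question discussed above is precisely how the numerical dissipation produced in the flagged cells compares, quantitatively, with the physical dissipation of a breaking wave.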

Another point we want to clarify is how to optimize the discretization of asymptotic PDE models. In particular, when adding the mesh size(s) and the time step, we are in the presence of at least three (or even more) small parameters. The relations between the physical parameters have been more or less investigated, as have those between the purely numerical ones. We plan to study the impact of the numerics on asymptotic PDE modelling by reverting the usual process and studying the asymptotic limits of finite element discretizations of the Euler equations. Preliminary results show that this approach provides some understanding of this interaction and may allow to propose considerably improved numerical methods .

In September 2019 Martin Parisot, previously CR in the ANGE team, joined CARDAMOM.

In September 2019 Nicolas Barral, previously post-doc in the Computational Geoscience and Energy division of the Department of Earth Science and Engineering at Imperial College London, joined CARDAMOM.

H. Beaugendre has contributed to the organization of Inria's Autumn school, November 4-8 2019, Inria Bordeaux Sud-Ouest.

School's objective: The school will aim at simulating a physical problem, from its modeling to its implementation in a high performance computing (HPC) framework. The school will offer both plenary courses and hands-on sessions. The physical problem considered will be the harmonic wave propagation.

The first day will be dedicated to the modelling of the problem and its discretization using a Discontinuous Galerkin scheme. The following two days will be dedicated to linear algebra for solving large sparse systems. Background on direct, iterative and hybrid methods for sparse linear systems will be discussed. Hands-on sessions on related parallel solvers will then be proposed. This will be followed by a session dedicated to advanced parallel schemes using task-based paradigms, including a hands-on with the StarPU runtime system. The final hands-on session will be devoted to the use of parallel profiling tools. The school will close with plenary talks illustrating the usage of such a workflow in an industrial context.

38 participants, mostly PhD students and Post-docs.

This school received support from CEA, Inria, PRACE and SYSNUM.

In November 2019 M. Ricchiuto has been granted the honorary appointment of Adjunct Professor at the Civil and Environmental Engineering department of Duke University in North Carolina (USA).

Keyword: Finite element modelling

Functional Description: The AeroSol software is a high order finite element library written in C++. The code has been designed so as to allow for efficient computations, with continuous and discontinuous finite element methods on hybrid and possibly curvilinear meshes. The work of the team CARDAMOM (previously Bacchus) is focused on continuous finite element methods, while the team Cagire is focused on discontinuous Galerkin methods. However, everything is done to share as much code as possible: classes concerning IO, finite elements, quadrature, geometry, time iteration, linear solvers, models and the interface with PaMPA are used by both teams. This modularity is achieved by means of template abstraction, so as to preserve good performance. The distribution of the unknowns is handled by the software PaMPA, developed within the team TADAAM (and previously in Bacchus) and the team Castor.

News Of The Year: In 2019, the following points were addressed in AeroSol

*Update, documentation, and wiki for the test cases

*Exact solution of the Riemann problem and exact Godunov solver

*Development of a droplet model, and of a Baer and Nunziato diphasic model.

*Beginning of the implementation of eddy viscosity turbulence models (k-epsilon, Spalart-Allmaras).

*Added the possibility of using mesh dependent data (for example, a flow computed by AeroSol with the Euler system) as input for another model (e.g. advection of droplets within this flow). This feature is also used for the wall distance needed by turbulence models.

*Penalization problems with single core mesh adaptation were merged into the master branch.

*Improvements of the PETSc usage: possibility of solving linear problems that are not of size nvar, usage of the MUMPS LU solver through PETSc.

*Interfacing with SLEPc for solving eigenvalue and eigenvector problems.

*High order visualization based on GMSH.

*Beginning of the interfacing with ParMmg for parallel mesh adaptation.

*Cleanup of warnings, error messages, etc.

Participants: Benjamin Lux, Damien Genet, Mario Ricchiuto, Vincent Perrier, Héloïse Beaugendre, Subodh Madhav Joshi, Christopher Poette, Marco Lorini, Jonathan Jung and Enrique Gutierrez Alvarez

Partner: BRGM

Contact: Vincent Perrier

*Mmg Platform*

Keywords: Mesh adaptation - Anisotropic - Mesh generation - Mesh - Isovalue discretization

Scientific Description: The Mmg platform gathers open source software for two-dimensional, surface and volume remeshing. The platform software performs local mesh modifications: the mesh is iteratively modified until the user prescriptions are satisfied.

The three pieces of software can be used from the command line or through the library version (C, C++ and Fortran APIs): - Mmg2d performs mesh generation and isotropic and anisotropic mesh adaptation. - Mmgs allows isotropic and anisotropic mesh adaptation of 3D surface meshes. - Mmg3d is a new version of the MMG3D4 software. It remeshes both the volume and surface mesh of a tetrahedral mesh. It performs isotropic and anisotropic mesh adaptation and isovalue discretization of a level-set function.

The platform software allows control of the boundary approximation: the "ideal" geometry is reconstructed from the piecewise linear mesh using cubic Bezier triangular patches. The surface mesh is modified to respect a maximal Hausdorff distance between the ideal geometry and the mesh.

Inside the volume, the software performs local mesh modifications (such as edge swaps, pattern splits, isotropic and anisotropic Delaunay insertions...).

Functional Description: The Mmg platform gathers open source software for two-dimensional, surface and volume remeshing. It provides three applications: 1) mmg2d: generation, adaptation and optimization of a triangular mesh; 2) mmgs: adaptation and optimization of a surface triangulation representing a piecewise linear approximation of an underlying surface geometry; 3) mmg3d: adaptation and optimization of a tetrahedral mesh and isovalue discretization.

The platform software performs local mesh modifications: the mesh is iteratively modified until the user prescriptions are satisfied.

News Of The Year: Release 5.3.0 improves: - the mmg3d mesh adaptation algorithm (better convergence and edge lengths closer to 1), - the software behaviour in case of failure (warnings/error messages are printed only once and there are no more exits in the code), - the mmg2d software, which now uses the same structures as mmgs and mmg3d.

It adds: - the -hsiz option for mmg2d/s/3d (generation of a uniform mesh of size ), - the -nosurf option for mmg2d (the mesh boundaries are left unmodified), - the -opnbdy option for mmg3d (preservation of an open boundary inside a volume mesh), - the possibility to provide meshes containing prisms to mmg3d (the prism entities are preserved while the tetrahedral ones are modified).

Participants: Algiane Froehly, Charles Dapogny, Pascal Frey and Luca Cirrottola

Partners: Université de Bordeaux - CNRS - IPB - UPMC

Contact: Algiane Froehly

*Mmg3d*

Keywords: Mesh - Anisotropic - Mesh adaptation

Scientific Description: Mmg3d is an open source software for tetrahedral remeshing. It performs local mesh modifications: the mesh is iteratively modified until the user prescriptions are satisfied.

Mmg3d can be used from the command line or through the library version (C, C++ and Fortran APIs). It is a new version of the MMG3D4 software: it remeshes both the volume and surface mesh of a tetrahedral mesh, and performs isotropic and anisotropic mesh adaptation as well as isovalue discretization of a level-set function.

Mmg3d allows control of the boundary approximation: the "ideal" geometry is reconstructed from the piecewise linear mesh using cubic Bezier triangular patches. The surface mesh is modified to respect a maximal Hausdorff distance between the ideal geometry and the mesh.

Inside the volume, the software performs local mesh modifications (such as edge swaps, pattern splits, isotropic and anisotropic Delaunay insertions...).

Functional Description: Mmg3d is one of the software products of the Mmg platform. It is dedicated to the modification of 3D volume meshes: it performs the adaptation and optimization of a tetrahedral mesh and allows the discretization of an isovalue.

Mmg3d performs local mesh modifications: the mesh is iteratively modified until the user prescriptions are satisfied.

Participants: Algiane Froehly, Charles Dapogny, Pascal Frey and Luca Cirrottola

Partners: Université de Bordeaux - CNRS - IPB - UPMC

Contact: Algiane Froehly

Keywords: Finite element modelling - Multi-physics simulation - Chemistry - Incompressible flows - 2D

Functional Description: Numerical modelling of the healing process in ceramic matrix composites

Participants: Gérard Vignoles, Gregory Perrot, Guillaume Couegnat, Mario Ricchiuto and Giulia Bellezza

Partner: LCTS (UMR 5801)

Contact: Mario Ricchiuto

*Shallow-water fLOWS*

Keywords: Simulation - Free surface flows - Unstructured meshes

Scientific Description: Three different approaches are available, based on conditionally depth-positivity preserving implicit schemes, on conditionally depth-positivity preserving genuinely explicit discretizations, or on an unconditionally depth-positivity preserving space-time approach. Newton and frozen Newton loops are used to solve the implicit nonlinear equations. The linear algebraic systems arising in the discretization are solved with the MUMPS library. This year implicit and explicit (extrapolated) multistep higher order time integration methods have been implemented, and a mesh adaptation technique based on simple mesh deformation has also been included.
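The frozen Newton loop mentioned above can be sketched as follows (a generic illustration, not the SLOWS implementation): the Jacobian is assembled, and can be factored, once at the initial guess and then reused, trading quadratic convergence for cheaper iterations:

```python
import numpy as np

def frozen_newton(F, J, u0, tol=1e-10, max_it=50):
    """Solve F(u) = 0 with a frozen Jacobian: J is evaluated only at u0."""
    u = np.array(u0, dtype=float)
    J0 = np.atleast_2d(J(u))            # assembled (and factorable) once
    for _ in range(max_it):
        r = np.atleast_1d(F(u))
        if np.linalg.norm(r) < tol:
            break
        u = u - np.linalg.solve(J0, r)  # reuse the frozen Jacobian
    return u

# toy scalar problem: u^2 = 2, starting from u = 1
root = frozen_newton(lambda u: u**2 - 2.0, lambda u: np.array([[2.0 * u[0]]]), [1.0])
```

In a PDE code the saving comes from reusing the sparse factorization (here, the role played by MUMPS) over many frozen iterations.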

Functional Description: SLOWS is a C-platform allowing the simulation of free surface shallow water flows with friction. It can be used to simulate near shore hydrodynamics, wave transformations processes, etc.

Participants: Maria Kazolea and Mario Ricchiuto

Contact: Mario Ricchiuto

URL: https://

Keyword: Physical simulation

Scientific Description: TUCWave advances the methodology for the solution of dispersive models. It uses a high-order well-balanced unstructured finite volume (FV) scheme on triangular meshes for modelling weakly nonlinear and weakly dispersive water waves over varying bathymetries, as described by the 2D depth-integrated extended Boussinesq equations of Nwogu (1993), rewritten in conservation law form. The FV scheme numerically solves the conservative form of the equations following the median-dual node-centered approach, for both the advective and dispersive parts of the equations. The code follows an efficient edge-based structured technique. For the advective fluxes, the scheme utilizes an approximate Riemann solver along with a well-balanced topography source term upwinding. Higher order accuracy in space and time is achieved through a MUSCL-type reconstruction technique and through a strong stability preserving explicit Runge-Kutta time stepping. Special attention is given to the accurate numerical treatment of moving wet/dry fronts and boundary conditions. Furthermore, the model has been applied to several examples of wave propagation over variable topographies and the computed solutions compared to experimental data.
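The MUSCL-type reconstruction mentioned above can be illustrated in one dimension with a minmod-limited linear reconstruction of the left/right states at cell faces (a generic sketch of the technique, not TUCWave code):

```python
import numpy as np

def minmod(a, b):
    # returns the argument of smallest modulus when signs agree, else 0
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_face_states(u, dx):
    """Limited linear (MUSCL) reconstruction at the interior cell faces."""
    du = np.diff(u)
    slope = np.zeros_like(u)
    slope[1:-1] = minmod(du[:-1], du[1:]) / dx
    uL = u[:-1] + 0.5 * dx * slope[:-1]  # state left of each interior face
    uR = u[1:] - 0.5 * dx * slope[1:]    # state right of each interior face
    return uL, uR
```

The reconstructed face states are then fed to the approximate Riemann solver; the limiter keeps the scheme non-oscillatory near steep fronts such as moving wet/dry interfaces.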

Functional Description: Fortran platform for the study of near shore processes.

Participants: Argiris Delis, Ioannis Nikolos and Maria Kazolea

Partner: Technical University of Crete

Contact: Maria Kazolea

Keyword: Mesh adaptation

Functional Description: FMG is a library deforming an input/reference simplicial mesh w.r.t. a given smoothness error monitor (function gradient or Hessian), metric field, or given mesh size distribution. Displacements are computed by solving an elliptic Laplacian-type equation with a continuous finite element method. The library returns an adapted mesh with a corresponding projected solution, obtained by either a second order projection or an ALE finite element remap. The addition of a new mass conservative approach developed ad hoc for shallow water flows is under way.
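A one-dimensional sketch of this deformation idea (an illustration of the principle only, not the FMG implementation): node positions are relaxed with a monitor-weighted discrete Laplacian, so that edges shrink where the error monitor is large:

```python
import numpy as np

def deform_mesh(x, monitor, n_iter=200, tau=0.45):
    """Relax node positions with a monitor-weighted Laplacian (endpoints fixed).
    At equilibrium, edge length is roughly inversely proportional to the monitor."""
    x = x.copy()
    for _ in range(n_iter):
        w = 0.5 * (monitor(x[:-1]) + monitor(x[1:]))   # one weight per edge
        x[1:-1] += tau * (w[1:] * (x[2:] - x[1:-1])
                          - w[:-1] * (x[1:-1] - x[:-2])) / (w[1:] + w[:-1])
    return x

# cluster nodes around x = 0.5, where the (assumed) error monitor peaks
x0 = np.linspace(0.0, 1.0, 21)
xa = deform_mesh(x0, lambda s: 1.0 + 10.0 * np.exp(-100.0 * (s - 0.5) ** 2))
```

Because each update is a convex combination of neighbouring positions (for tau < 1), the node ordering is preserved and no element is inverted, which is the property the elliptic formulation is chosen for.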

News Of The Year: - Development of an elasticity model to compute the node displacements. - Development of a new, mixed model to compute the node displacements, combining the advantages of the Laplacian and elasticity models: a refined mesh where the solution varies strongly, and a smooth gradation of the edge sizes elsewhere. - Extension to three dimensions.

Participants: Leo Nouveau, Luca Arpaia, Mario Ricchiuto and Luca Cirrottola

Contact: Algiane Froehly

Keywords: 3D - Mesh adaptation - Anisotropic - Isotropic - Isovalue discretization - Distributed Applications - MPI communication

Functional Description: The ParMmg software builds parallel (MPI based) mesh adaptation capabilities on top of the sequential open-source remesher Mmg, which is iteratively called on sub-meshes of the initial mesh.

ParMmg is available: - through the command line, - in library mode using the dedicated API.

Release Functional Description: Version 1.2 of ParMmg provides 3D volume mesh adaptation with constrained surface.

It adds to the previous release: - mesh repartitioning through parallel interface displacement, - support for Scotch renumbering.

Participants: Algiane Froehly and Luca Cirrottola

Partners: FUI Icarus - ExaQUte

Contact: Algiane Froehly

URL: https://

Participants: Umberto Bosi, Mathieu Colin, Maria Kazolea, Mario Ricchiuto

Corresponding member: Maria Kazolea

This year we continued our work on free surface flow modelling. Our work can be divided along four main axes. First, we presented a depth-integrated Boussinesq model for the efficient simulation of nonlinear wave-body interaction . The model exploits a "unified" Boussinesq framework, i.e. the fluid under the body is also treated with the depth-integrated approach. The unified Boussinesq approach was initially proposed by Jiang and recently analysed by Lannes . The choice of Boussinesq-type equations removes the vertical dimension of the problem, resulting in a wave-body model with adequate precision for weakly nonlinear and dispersive waves, expressed in horizontal dimensions only. The framework involves the coupling of two different domains with different flow characteristics. Inside each domain, the continuous spectral/hp element method is used to solve the appropriate flow model, since it allows to achieve high-order, possibly exponential, convergence for non-breaking waves. Flux-based conditions for the domain coupling are used, following the recipes provided by the discontinuous Galerkin framework. The main contribution of this work is the inclusion of floating surface-piercing bodies in the conventional depth-integrated Boussinesq framework and the use of a spectral/hp element method for high-order accurate numerical discretization in space. The model is verified using manufactured solutions and validated against published results for wave-body interaction. The model is shown to have excellent accuracy and is relevant for applications of waves interacting with wave energy devices. The outcome of this work is the PhD thesis of Umberto Bosi.

Second, a detailed analysis of undular bore dynamics in channels of variable cross-section was performed and presented in . Two undular bore regimes, low Froude number (LFN) and high Froude number (HFN), are simulated with a Serre-Green-Naghdi model, and the results are compared with the experiments by Treske (1994). We show that, contrary to Favre waves and HFN bores, which are controlled by dispersive non-hydrostatic mechanisms, LFN bores correspond to a hydrostatic phenomenon. The dispersive-like properties of the LFN bores are related to wave refraction on the banks, in a way similar to that of edge waves in the near shore. A fully hydrostatic asymptotic model for these dispersive-like bores is derived and compared to the observations, confirming our claim.

We also continued our work on the modelling of free surface flows by investigating a new family of models, derived from the so-called Isobe-Kakinuma models. The Isobe-Kakinuma model is a system of Euler-Lagrange equations for a Lagrangian approximating Luke's Lagrangian for water waves. In , we consider the Isobe-Kakinuma model for two-dimensional water waves in the case of a flat bottom. We show theoretically the existence of a family of small amplitude solitary wave solutions to the Isobe-Kakinuma model in the long wave regime. We have also performed numerical computations for a toy system, including large amplitude solitary wave solutions. Our computations suggest the existence of a solitary wave of extreme form with a sharp crest. This model seems very promising for future research.

Participants: Héloïse Beaugendre, Mathieu Colin and Francois Morency

Corresponding member: Héloïse Beaugendre

In-flight icing on an aircraft's surface can be a major hazard in aeronautical safety. Numerical simulation of ice accretion on aircraft is a common procedure to anticipate ice formation when flying in a cloud of supercooled water droplets. Numerical simulations bring a better understanding of ice accretion phenomena and performance degradation, and lead to ever more efficient thermal de-icing system designs. Such simulations imply modelling the phase change of water and the associated mass and energy transfers. The Messinger model, developed in the 1950s, is still used today as a reliable basis for the development of new models. This model estimates the ice growth rate using mass and energy balances coupled to a runback water flow. The main parameter introduced with this approach is the freezing fraction, denoting the fraction of incoming water that effectively freezes on the airfoil.
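The freezing fraction can be sketched with a deliberately simplified steady balance on a surface cell held at the freezing temperature (evaporation, kinetic heating, and runback from upstream cells are neglected; the retained terms and signs are illustrative assumptions, not the full Messinger model):

```python
def freezing_fraction(m_imp, q_loss, T_rec, T_f=273.15, L_f=3.34e5, c_w=4186.0):
    """Fraction f of the impinging water mass flux m_imp [kg/(s*m^2)] that
    freezes, from the balance  f*m_imp*L_f = q_loss + m_imp*c_w*(T_f - T_rec):
    the latent heat released covers the convective heat loss q_loss [W/m^2]
    plus the sensible heating of the incoming water. Clipped to [0, 1]."""
    f = (q_loss + m_imp * c_w * (T_f - T_rec)) / (m_imp * L_f)
    return min(max(f, 0.0), 1.0)
```

The limits behave as expected: with no heat loss and water arriving at the freezing temperature nothing freezes (f = 0, glaze-ice limit), while a large heat loss freezes all impinging water (f = 1, rime-ice limit).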

In-flight ice accretion code predictions depend on the heat loss over rough surfaces. The equivalent sand grain roughness models the friction coefficient, but an additional model is needed for the heat transfer prediction. The turbulent Prandtl number correction and the sublayer Stanton-based models are commonly used. This year, both models have been used with the Spalart-Allmaras turbulence model to predict heat transfer over rough surfaces typical of ice accretion. The objective here is to compare the results of the two models. First, the sublayer Stanton-based model is rewritten in the context of a turbulent Prandtl number correction formulation. Then, the two models are implemented in the open source software SU2, and verified and validated for flows over rough flat plates, airfoils, and wings. The two models' predictions evolve differently with the local Reynolds number, but are always within the experimental

Numerical simulation of separated flow around an iced airfoil is still a challenge. Predictions of post-stall aerodynamic performance by RANS models are unsatisfactory. Recent hybrid RANS/LES methods, based on modified DDES models, have shown promising results for separated flow. However, questions still arise about the best compromise between computation time and accuracy for unsteady 3D simulations. The span width of the domain and the best grid practice to obtain accurate results still have to be investigated, especially taking into account the recent method improvements. A recent method such as shear-layer adapted DDES should give acceptable flow predictions with a relatively coarse mesh. In the paper , we further study the effects of the span width on the predicted aerodynamic coefficients and on the pressure coefficient. The study is done using the open-source software SU2. The backward facing step and a stalled NACA0012 are used to validate the numerical results. Then, the numerical flow around an iced Model 5-6 is studied, especially the flow within the separation bubble behind the ice. The accuracy of the CFD results is discussed, and a recommendation is made about the span width of the computational domain and the grid size.

In order to save time and resources while increasing the accuracy of the aerodynamic coefficients of the stalled wing configuration, the following study offers a parametric grid study for the DDES model. For three different grid refinements, characteristics of the lift and of the eddy phenomena are presented and compared to determine, for an infinite wing, the best compromise between time and resource consumption and accuracy of the results. Using the open source software SU2 6.1 (Stanford University Unstructured), we generate three different types of grid refinement around an airfoil, extruded spanwise to obtain a straight wing. On the same stalled configuration for each mesh, CFD solutions are run with the DDES model, and the raw data are post-processed with the open source software ParaView 5.6. We then compare the aerodynamic coefficient distributions obtained with the three meshes. The general modelling of the vortex shedding topology and of the turbulent viscosity is compared with the literature to ensure the correct rendering of the vortex structures. Chordwise pressure and friction coefficient distributions, as well as the spanwise lift coefficient, are also compared. We conclude with the optimum mesh in terms of results and resource consumption.

Participants: Héloïse Beaugendre, Mirco Ciallella, Benjamin Constant and Mario Ricchiuto

Corresponding member: Héloïse Beaugendre

In recent years the team has invested some effort in developing the high order embedded method known as the "shifted boundary method". In this method, geometrical boundaries are not meshed exactly but embedded in the mesh. Boundary conditions are imposed on a surrogate boundary, roughly defined as the collection of mesh faces "closest" to the true boundary. To recover high order of accuracy, the boundary condition imposed on this surrogate boundary is modified to account for the distance from the true boundary.
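The core idea can be written in one line: if d is the vector from a surrogate boundary point to its closest true-boundary point, a first-order Taylor expansion gives u(true) ≈ u(surrogate) + d·∇u, so the Dirichlet datum g imposed on the surrogate boundary is shifted to g - d·∇u. A minimal sketch of this shift (generic, with an illustrative linear field):

```python
import numpy as np

def shifted_dirichlet(g_true, grad_u, d):
    """Value to impose on the surrogate boundary so that the true-boundary
    datum g_true is recovered at first order: g_true - d . grad(u)."""
    return g_true - np.dot(d, grad_u)

# linear exact solution u(x) = a.x + b: the shift is exact in this case
a, b = np.array([2.0, -1.0]), 0.5
x_true = np.array([1.0, 1.0])    # point on the true boundary
d = np.array([0.1, 0.3])         # distance vector, surrogate -> true boundary
g_surrogate = shifted_dirichlet(a @ x_true + b, a, d)
```

In the actual method the gradient is the discrete solution gradient, so the shifted condition is solved implicitly together with the rest of the problem.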

This year's work has focused on two aspects. First, we proposed an efficient extension of the method to elliptic diffusion equations in mixed form (e.g., Darcy flow, heat diffusion problems with rough coefficients, etc.). Our aim is to obtain an improved formulation that, for linear finite elements, is at least second-order accurate for both the flux and the primary variable, when either Dirichlet or Neumann boundary conditions are applied. Following previous work of Nishikawa and Mazaheri in the context of residual distribution methods, we consider the mixed form of the diffusion equation (i.e., with Darcy-type operators), and introduce an enrichment of the primary variable. This enrichment is obtained by exploiting the relation between the primary variable and the flux variable, which is explicitly available at nodes in the mixed formulation. The proposed enrichment mimics a formally quadratic pressure approximation, although only nodal unknowns are stored, as in a linear finite element approximation. We consider both continuous and discontinuous finite element approximations and present two approaches: a non-symmetric enrichment, which, as in the original references, only improves the consistency of the overall method; and a symmetric enrichment, which enables a full error analysis in the classical finite element context. Combined with the shifted boundary method, these two approaches are extended to high-order embedded computations, and enable the approximation of both the primary and the flux (gradient) variables with second-order accuracy, independently of the type of boundary conditions applied. We also show that the primary variable is third-order accurate when pure Dirichlet boundary conditions are embedded.

Second, using the same ideas underlying the shifted boundary method, a novel approach to handle shock waves has been proposed. In this method shocks are seen as embedded
boundaries on which appropriately shifted jump conditions are imposed, connecting the upstream and downstream domains. This new
technique, named "shifted shock-fitting", has been implemented on two-dimensional unstructured grids to deal with shocks by treating them as if they were immersed boundaries.
The new algorithm couples a floating shock-fitting technique with the shifted boundary method, so far introduced only to simulate flows with embedded boundaries
(see , full-length paper under revision at *J. Comput. Phys.*).

A new PhD in collaboration with ONERA has started (Benjamin Constant's thesis), devoted to the numerical simulation of unsteady flows around complex geometries in aeronautics. In the CFD simulation process, mesh generation is the main bottleneck when studying realistic configurations, such as an aircraft landing gear. A mesh can represent one to several months of work for a specialist engineer, which is prohibitive in the pre-design phase, where several geometries must be evaluated in a very short time frame. For this reason, Inria and Onera have been working for several years on an immersed boundary method, which does not require representing obstacles by a mesh conforming to the wall, thus simplifying mesh generation. The wall is taken into account by introducing a forcing term at certain points in the vicinity of the obstacles. In our approach, this technique is combined with a method generating adaptive octree Cartesian meshes, which allows us to exploit the advantages of Cartesian grids (fast generation and adaptation, performance gains of a dedicated Cartesian solver). The boundary layer is represented by a wall model to avoid the extra cost of resolving it. This method has been implemented for the simulation of steady turbulent flows around geometries studied in compressible aerodynamics (wing-fuselage, engine air intake, helicopter fuselage, etc.), providing a very good compromise between the quality of the aerodynamic solution and the turnaround time from geometry definition to solution. However, the quality of the solution obtained by steady simulations is not sufficient to predict the acoustics satisfactorily. Indeed, oscillations appear in certain quantities of interest (such as the turbulent viscosity or the pressure fluctuations) in the vicinity of the wall. In addition, the transition between grids of different refinement levels causes spurious reflections, which can greatly degrade the prediction of the acoustic solution.
The objective of this thesis is to solve these two problems so as to be able to perform unsteady simulations around complex geometries, such as a landing gear. On the one hand, we will study an algorithm to regularize the solution at the IBM points located near the wall. In parallel, we will work on improving the wall model, in collaboration with modelling specialists from the department, and a study based on error estimators will be carried out to analyse the impact of these improvements on the solution. Validations on academic test cases will be performed. A second step is to improve the transfer of the solution between grids of different levels (across which the mesh size doubles). We will propose an algorithm to regularize this transition, both by modifying the mesh geometrically and by modifying the transfer formula for the solution at the grid interface. A validation will be carried out on an unsteady LEISA airfoil case. Finally, a demonstrative application that is both geometrically complex and of acoustic interest will be computed, typically a LAGOON or Gulfstream landing gear.
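A minimal sketch of the kind of forcing term mentioned above, in the spirit of direct-forcing immersed boundary methods. The names and the single-step relaxation toward a wall-model target are illustrative assumptions, not the Onera/Inria implementation.

```python
import numpy as np

def direct_forcing(u, u_target, near_wall_mask, dt):
    """Immersed-boundary direct forcing (illustrative sketch).

    At grid points flagged as lying near an obstacle, a source term
    f = (u_target - u) / dt is added to the momentum equation so that
    one explicit step of size dt drives the velocity to the target
    value supplied by the wall model; all other points are untouched.
    """
    f = np.zeros_like(u)
    f[near_wall_mask] = (u_target[near_wall_mask] - u[near_wall_mask]) / dt
    return f

u = np.array([1.0, 2.0, 3.0])
target = np.zeros(3)                   # no-slip target at the wall
mask = np.array([True, False, True])   # points flagged near the obstacle
print(direct_forcing(u, target, mask, dt=0.5))  # [-2.  0. -6.]
```

The regularization issue described above arises exactly here: the abrupt switch between forced and unforced points can produce oscillations in near-wall quantities.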

Participants: Giulia Bellezza, Mathieu Colin and Mario Ricchiuto

Corresponding member: Mario Ricchiuto

Self-healing is an important phenomenon in new-generation refractory ceramic-matrix composites, obtained by the oxidation of a glass-forming phase in the composite. The dynamics of oxygen diffusion, glass formation and flow are the basic ingredients of a self-healing model that has been developed here in 2D in a transverse crack of a mini-composite . The model can work on a realistic image of the material section and is able to simulate healing and quantify the exposure of the material to oxygen, a prerequisite for its lifetime prediction. Crack-reopening events are handled satisfactorily, and secondary healing can be simulated. The paper describes and discusses a typical case in order to show the model's potential.

Additional work involves two main topics. The first is dedicated to the modelling of the propagation of a self-healing oxide in a crack. The aim here is to introduce new models which describe both the self-healing behaviour and oxygen diffusion towards the fibres. In general, the evolution of an incompressible fluid can be described by the Navier-Stokes equations. However, a direct numerical method applied to these equations induces a significant computational cost, especially in our case, due to the long lifespan of the material. We therefore derive several asymptotic models obtained by performing a dimensional analysis of the Navier-Stokes equations: we focus here on shallow water models and thin film models. The second topic is more theoretical. We propose, in full generality, a link between the BD entropy introduced by D. Bresch and B. Desjardins for the viscous shallow-water equations and the Bernis-Friedman (BF) dissipative entropy introduced to study the lubrication equations. Different dissipative entropies are obtained by varying the drag terms in the viscous shallow water equations. This makes it possible, for instance, to prove global existence of nonnegative weak solutions for the lubrication equations starting from the global existence of nonnegative weak solutions of appropriate viscous shallow-water equations.
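For the reader's convenience, the two families of equations referred to above can be written in one space dimension as follows (standard forms, up to the choice of physical constants; the notation is ours):

```latex
\begin{aligned}
&\text{viscous shallow water:}\quad
\partial_t h + \partial_x (hu) = 0,\\
&\qquad \partial_t (hu) + \partial_x\Big(hu^2 + \tfrac{g}{2}\,h^2\Big)
 = \nu\,\partial_x\big(h\,\partial_x u\big) - r_0\,u - r_1\,h\,|u|\,u,\\[4pt]
&\text{thin film / lubrication (Bernis--Friedman):}\quad
\partial_t h + \partial_x\big(h^3\,\partial_x^3 h\big) = 0.
\end{aligned}
```

Varying the laminar and turbulent drag coefficients $r_0$ and $r_1$ is what produces the different dissipative (BD/BF-type) entropies mentioned above.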

Participants: Nicolas Barral, Héloïse Beaugendre, Luca Cirrottola, Algiane Froehly, Mario Ricchiuto.

Corresponding member: Nicolas Barral

Similar adaptive strategies have been investigated for shallow water flows in the context of space-time residual distribution methods . In the scalar case, these schemes can be designed to be unconditionally (w.r.t. the time step) positive, even on the distorted space-time prisms which arise from moving the nodes of an unstructured triangular mesh. Consequently, a local increase in mesh resolution does not impose a more restrictive stability constraint on the time step, which can instead be chosen according to accuracy or physical requirements. Moreover, schemes of this type are analogous to conservative ALE formulations and automatically satisfy a discrete geometric conservation law. For shallow water flows over variable bed topography, the so-called C-property (retention of the hydrostatic balance between flux and source terms, required to maintain the steady state of still, flat water) can also be satisfied by considering the mass balance equation in terms of the free surface level instead of the water depth, even when the mesh is moved. Combined with a simple implementation of Laplacian-based r-adaptation, this technique has been shown to allow up to 60% CPU time savings for a given error.
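The rewriting of the mass balance in terms of the free-surface level that underlies the C-property can be sketched in one dimension as follows (standard pre-balanced form, with $\eta = h + b$ the free surface and $b$ the bed elevation):

```latex
\begin{aligned}
\partial_t \eta + \partial_x (hu) &= 0,\\
\partial_t (hu) + \partial_x\Big(hu^2 + \tfrac{g}{2}\big(\eta^2 - 2\,\eta\,b\big)\Big)
 &= -\,g\,\eta\,\partial_x b, \qquad h = \eta - b.
\end{aligned}
```

At the lake-at-rest state ($u = 0$, $\eta$ constant) the flux gradient and the source cancel exactly, so a discretization treating both terms consistently preserves still water, including on a moving mesh.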

We also extended these methods to curvilinear coordinates to perform shallow water simulations on the sphere for oceanographic applications . To provide enhanced resolution of the moving fronts present in the flow, we consider adaptive discrete approximations on moving triangulations of the sphere. To this end, we restate all the Arbitrary Lagrangian Eulerian (ALE) transport formulas, as well as the volume transformation laws, for a 2D manifold. Using these results, we write the set of ALE-SWEs on the sphere and propose a residual distribution discrete approximation of the governing equations. Classical properties such as the DGCL and the C-property (well-balancedness) are reformulated in this more general context, and an adaptive mesh movement strategy is proposed. The resulting discrete framework is thoroughly tested on standard benchmarks in large-scale oceanography to demonstrate its potential, as well as the advantage brought by the adaptive mesh movement.

Title: ETRURIA: Robust simulation tools for non-hydrostatic free surface flows

Type: Appel à Projets Recherche Région Nouvelle-Aquitaine

Coordinator: M. Ricchiuto

Other partners: BRGM, UMR EPOC (P. Bonneton)

Abstract: The objective of this project is to combine high order continuous finite elements, with embedded methods and mesh adaptation in the simulation of coastal and urban inundation. Realistic validation cases will be provided by BRGM. This project co-funds (50%) the PhD of S. Michel.

Title: VIrtual Self-healing Composites for Aeronautic Propulsion

Type: ANR

Duration: 48 months

Starting date : 1st Jan 2018

Coordinator: Gérard Vignoles (Université de Bordeaux and LCTS - UMR 5801)

Abstract: Self-healing Ceramic-Matrix Composites (SH-CMCs) have extremely long lifetimes even under severe thermal, mechanical and chemical solicitations. They are made of ceramic fibres embedded in a brittle ceramic matrix subject to multi-cracking, yielding a damageable-elastic mechanical behaviour. These materials have the particularity of protecting themselves against corrosion by the formation of a sealing oxide that fills the matrix cracks, considerably delaying the degradation of the fibres. Applications encompass the hot parts of civil aeronautic propulsion engines and represent a considerable market; however, this is only possible if the lifetime of the materials is fully certified. The ambition of this innovative project is to provide reliable, experimentally validated numerical models able to reproduce the behaviour of SH-CMCs. The starting point is an existing image-based coupled model of progressive oxidative degradation under tensile stress of a mini-composite (i.e. a unidirectional bundle of fibres embedded in a multi-layered matrix). Important improvements will be brought to this model in order to better describe several physico-chemical phenomena leading to a non-linear behaviour; this will require an important effort in mathematical analysis and numerical model building. A systematic benchmarking will allow the creation of a large database suited for the statistical analysis of the impact of material and environmental parameter variations on lifetime. Experimental verification of this model is proposed, with respect to tests carried out on model materials using in-situ X-ray tomography (in a specially adapted high-temperature environmental and mechanical testing cell) and other characterizations. The extension of the modelling procedure to Discrete Crack Networks for the large-scale description of the material life will be the next action; it will require important developments on mesh manipulation and on mathematical model analysis.
Finally, experimental validation will be carried out by comparing the results of the newly created software to tests run on 3D composite material samples provided by the industrial partner of the project. The originality of the project lies in its multidisciplinary character, mixing competences in physico-chemistry, mechanics, numerical and mathematical modelling, software engineering and high-performance computing. It aims at creating a true computational platform describing the multi-scale, multidimensional and multi-physics character of the phenomena that determine the material's lifetime. Important outcomes are expected in the domain of civil aircraft jet propulsion, which could also apply to materials other than those considered in this study.

Title: Intensive Calculation for AeRo and automotive engines Unsteady Simulations.

Type: FUI

Duration: January 2017 - December 2019

Coordinator: Turbomeca, Safran group

Abstract: Large Eddy Simulation is an accurate simulation tool for turbulent flows which is becoming more and more attractive as parallel computing techniques and platforms become more and more efficient. This project aims at improving the performance of some existing simulation tools (such as AVBP, Yales and ARGO), at developing meshing/re-meshing tools tailored to LES, at improving the ergonomics of these tools for the industrial world (improved interfaces, data handling, code coupling, etc.), and at validating the progress made on case studies representative of typical design simulations in the automotive and aeronautic industries.

Title: Modélisation d'un système de dégivrage thermique

Type: University of Bordeaux project

Duration: 36 months

Starting date: October 2016

Coordinators: H. Beaugendre and M. Colin

Abstract: Since the beginning of aeronautics, icing has been classified as a serious issue: ice accretion on airplanes is due to the presence of supercooled droplets inside clouds and can lead to major risks, such as a crash. As a consequence, each airplane has its own protection system, the most important being an anti-icing system which runs permanently. In order to reduce fuel consumption, de-icing systems are being developed by manufacturers. One alternative to real experiments consists in developing robust and reliable numerical models: this is the aim of this project. These new models have to take into account a multi-physics and multi-scale environment: phase change, thermal transfer, aerodynamic flows, etc. We aim to use thin-film equations coupled to level-set methods in order to describe the phase change of water. The overall objective is to provide a simulation platform enabling the complete design of these systems.

Program: FETHPC-02

Project acronym: ExaQute

Project title: Exascale quantification of uncertainties for technology and science simulation

Duration: June 2018 - April 2019

Coordinator: CIMNE (Spain)

Other partners: BSC (Spain), TUM (Germany), IT4 (Czech Republic), EPFL (Switzerland), UPC (Spain), Structure (Germany).

Abstract: The ExaQUte project aims at constructing a framework to enable Uncertainty Quantification and Optimization Under Uncertainties in complex engineering problems, using computational simulations on Exascale systems. The description of complex geometries will be made possible by employing embedded methods, which guarantee a high robustness in the mesh generation and adaptation steps while preserving an exact representation of the geometry. The efficient exploitation of the Exascale system will be addressed by combining state-of-the-art dynamic task-scheduling technologies with space-time accelerated solution methods, where parallelism is harvested both in space and time. The methods and tools developed in ExaQUte will be applicable to many fields of science and technology. The chosen application focuses on wind engineering, a field of notable industrial interest for which no reliable solution currently exists.

Program: OCEANEraNET

Project acronym: MIDWEST

Project title: Multi-fIdelity Decision making tools for Wave Energy SysTems

Duration: December 2015 - April 2019

Coordinator: Mario Ricchiuto

Other partners: Chalmers University (Sweden), DTU Compute (Denmark), IST Lisbon (Portugal)

Abstract: Wave energy converter (WEC) design currently relies on low-fidelity linear hydrodynamic models. While these models disregard fundamental nonlinear and viscous effects, which may lead to sub-optimal designs, high-fidelity fully nonlinear Navier-Stokes models are prohibitively expensive for optimization. The MIDWEST project will provide an efficient asymptotic nonlinear finite element model of intermediate fidelity, investigate the fidelity level required to resolve a given engineering output, and construct a multi-fidelity optimization platform using surrogate models blending the different fidelity levels. Combining know-how in wave energy technology, finite element modelling, high-performance computing, and robust optimization, the MIDWEST project will provide a new efficient decision-making framework for the design of the next generation of WECs, which will benefit all industrial actors of the European wave energy sector.

Title: High order Adaptive moving MeSh finiTE elements in immeRsed computational mechanics

International Partner (Institution - Laboratory - Researcher):

Duke (United States) - Civil and Environmental Engineering and Mechanical Engineering and Material Science - Guglielmo Scovazzi

Inria Bordeaux -SO (France) - CARDAMOM team - Mario Ricchiuto

Start year: 2017

See also: https://

This project focuses on adaptive unstructured-mesh finite-element-type methods for fluid flows with moving fronts. These fronts may be interfaces between different fluids, or between fluid and solid, as well as modelling or physical fronts (e.g. shock waves) present in the flow. The two teams involved in the project have developed complementary strategies over the years: one focuses more on an Eulerian description aiming at capturing fronts on adaptive unstructured grids, while the other works more on Lagrangian approaches aiming at following some of these features exactly. Unfortunately, classical Lagrangian methods are at a disadvantage in the presence of complex deformation patterns, especially for fronts undergoing large deformations, since the onset of vorticity quickly leads to mesh rotation and eventually tangling. On the other hand, capturing approaches, as well as Immersed Boundary/Embedded Boundary (IB/EB) methods, while providing enormous flexibility in complex cases, require a careful use of mesh adaptivity to guarantee an accurate capturing of the interface physics. The objective of this joint team is to study advanced hybrid methods combining high-order, adaptive, monotone capturing techniques developed in an Eulerian or ALE setting with fitting techniques and fully Lagrangian approaches.

**IIC ABGRALL Rémi**

Title: Numerical approximation of complex PDEs & Interaction between models, schemes, data and ROMs

International Partner (Institution - Laboratory - Researcher):

ETH Zurich (Switzerland) - Institut fur Mathematik & Computational Science - Rémi Abgrall

Duration: 2019 - 2023

Start year: 2019

Claes Eskilsson, associate professor at Aalborg University, visited Mario Ricchiuto in July 2019.

Francois Morency, Professor at Ecole de Technologie Supérieure de Montréal, visited Héloïse Beaugendre in January 2019 and July 2019 to work on aircraft icing, roughness modelling and performance degradation.

Masahito Ohta, Professor at Tokyo University of Science, visited Mathieu Colin in December 2019.

Nicolas Perinet, postdoctoral fellow at the University of Chile, visited Mario Ricchiuto in October 2019 to work on the benchmarking of the SLOWS code.

Guglielmo Scovazzi, Professor at Duke University, visited M. Ricchiuto during the summer to work on the shifted boundary method.

Davide Torlo, PhD candidate at the University of Zurich, visited M. Ricchiuto in June 2019 to work on relaxation finite element approximations of the shallow water equations.

Mirco Ciallella (Inria, M. Sc. Student). Until Jan 2019.

Simon Le Berre (Inria, M. Sc. Student). From Apr 2019 until Sep 2019.

M. Ricchiuto co-organized the Hywec2 workshop on the hydrodynamics of wave energy converters.
The workshop was organized in the framework of work packages 3 and 5 of the Sysnum Excellence Cluster, as one of two twin events devoted to marine renewable energies held in Bilbao
(the VI Marine Energy Conference, June 25th) and Bordeaux. The event was also
supported by the Oceanera-net MIDWEST project and the Fondation Del Duca. Its main goals were to
focus on PDE and numerical modelling techniques, with attention to advanced and recent approaches,
and to give an overview of industrial techniques and applications with several European industrial actors.
For more information refer to the web page https://

Mathieu Colin is a member of the scientific committee of the JEF days.

M. Ricchiuto has co-organized the mini-symposium “Some modern questions in the simulation of advection dominated problems”, MS FT-1-10 at the ICIAM conference in Valencia

Mathieu Colin is a member of the board of the journal Applications and Applied Mathematics: An International Journal (AAM)

M. Ricchiuto is a member of the editorial boards of Computers & Fluids (Elsevier) and of Water Waves (Springer)

We reviewed papers for top international journals in the main scientific themes of the team: Nonlinearity, Water Waves, Analysis and PDE, Communications in Contemporary Mathematics, Journal of Scientific Computing, Open Physics, Computational and Applied Mathematics, and Journal of Fluid Mechanics.

License: Nicolas Barral, numerical analysis tutorials, 24h, L3, ENSEIRB-MATMÉCA, France

Master: Nicolas Barral, C++ tutorials, 48h, M1, ENSEIRB-MATMÉCA, France

Master: Nicolas Barral, meshing techniques, 36h, M2, ENSEIRB-MATMÉCA and Université de Bordeaux, France

License: Héloïse Beaugendre, supervision of student projects on lift modelling, 20h, L3, ENSEIRB-MATMÉCA, France

Master: Héloïse Beaugendre, C++ tutorials, 48h, M1, ENSEIRB-MATMÉCA, France

Master: Héloïse Beaugendre, high-performance computing (OpenMP-MPI), 40h, M1, ENSEIRB-MATMÉCA and Université de Bordeaux, France

Master: Héloïse Beaugendre, head of a third-year specialization track, 15h, M2, ENSEIRB-MATMÉCA, France

Master: Héloïse Beaugendre, parallel computing (MPI), 39h, M2, ENSEIRB-MATMÉCA, France

Master: Héloïse Beaugendre, supervision of student projects in the high-performance computing track, 6h, M2, ENSEIRB-MATMÉCA, France

Master: Héloïse Beaugendre, supervision of student projects on pyrolysis modelling, 20h, M1, ENSEIRB-MATMÉCA, France

Master: Héloïse Beaugendre, final-year projects, 4h, M2, ENSEIRB-MATMÉCA, France

Master: Mathieu Colin, integration, 54h, M1, ENSEIRB-MATMÉCA, France

Master: Mathieu Colin, Fortran 90, 44h, M1, ENSEIRB-MATMÉCA, France

Master: Mathieu Colin, PDEs, 30h, M1, University of Bordeaux, France

License: Mathieu Colin, analysis, 47h, L1, ENSEIRB-MATMÉCA, France

Master: Mathieu Colin, professional project and internship supervision, 15h, ENSEIRB-MATMÉCA, France

Master: Mathieu Colin, supervision of TER projects, 20h, ENSEIRB-MATMÉCA, France

Master: Mathieu Colin, head of industry relations for the apprenticeship programme, 30h, ENSEIRB-MATMECA, France

Master: Mathieu Colin, supervision of an apprentice in industry, 28h, ENSEIRB-MATMÉCA, France

Master: Mario Ricchiuto, multiphysics course, 21h of lectures, M2, ENSEIRB-MATMÉCA, France

PhD : Umberto Bosi, A unified spectral/hp element depth-integrated Boussinesq model for nonlinear wave-floating body interaction, U. Bordeaux, defended in June 2019, supervised by M. Ricchiuto (see )

PhD in progress : E. Solai, Multi-fidelity modeling of an immersive battery cooling system for electric vehicles, started in November 2018, co-supervised by H. Beaugendre and P.M. Congedo

PhD in progress: B. Constant, high-order immersed methods for turbulent flows, started in September 2019, supervised by H. Beaugendre

PhD in progress : S. Michel, shallow water simulations with immersed higher order residual methods on adaptive meshes, started in November 2018, supervised by M. Ricchiuto

PhD in progress : G. Bellezza, multi scale modelling for self-healing composite materials, started in February 2019, supervised by M. Ricchiuto and G. Vignoles (LCTS)

PhD in progress : M. Ciallella, bridging shock fitting and embedded methods to handle shock waves in hyperbolic systems, started in October 2019, supervised by M. Ricchiuto and R. Paciorri (U. Roma La Sapienza)

PhD in progress : A. Cauquis, high order shock capturing methods for tsunami simulations, started in November 2019, supervised by M. Ricchiuto and P. Heinrich (CEA)

Héloïse Beaugendre took part in the following thesis defense committees:

Iñigo Bidaguren, BCAM, Bilbao, Spain, June 2019 (as examiner)

Pierre Trontin, HDR, Toulouse University, September 2019 (as examiner)

Quentin Carmouze, Côte d'Azur University, November 2019 (as reviewer)

Maria Kazolea took part in the following thesis defense committee:

Umberto Bosi, PhD, University of Bordeaux, April 2019 (as examiner)

Mario Ricchiuto took part in the following juries:

A. Menasria, PhD, ENSAM ParisTech, March 2019 (as president)

Julien Carlier, PhD, U. Paris-Saclay, December 2019 (as reviewer)