The `CARDAMOM` team was created in 2015. Its scientific activity combines *asymptotic PDE* (Partial Differential Equations) modelling,
*adaptive high order PDE discretizations*, and a *quantitative certification* step assessing
the sensitivity of outputs both to model components
(equations, numerical methods, etc.) and to random variations of the data.
The goal is to improve parametric analysis and design cycles,
by increasing both accuracy and confidence in the results, thanks to
improved physical and numerical modelling and to a quantitative assessment of output uncertainties.
This requires a research program mixing PDE analysis,
high order discretizations, Uncertainty Quantification (UQ), robust optimization, and some specific engineering know-how.
Part of these scientific activities started in the `BACCHUS` and `MC2` teams, from which `CARDAMOM` stems.

The objective of this project is to provide improved analysis and design tools for engineering applications involving fluid flows, and in particular flows with moving fronts.
In our applications *a front is either an actual material interface, or a well identified and delimited transition region in which
the flow undergoes a change in its dominant macroscopic character*. One example is the certification of wing de-/anti-icing systems, involving the prediction of ice formation and detachment,
and of ice debris trajectories, to evaluate the risk of downstream impact on aircraft components.
Another application, relevant for space re-entry, is the study of transitional regimes in high altitude gas dynamics, in which extremely thin
layers appear in the flow that cannot be analysed with the classical continuum models (Navier-Stokes equations) used by engineers. An important example in
coastal engineering is the transition between propagating and breaking waves, characterized by a strong local production of vorticity and by
dissipative effects absent when waves simply propagate. Similar examples in energy and material engineering provide the motivation for this project.

All these application fields involve either the study of new technologies (e.g. new design/certification tools for aeronautics or for wave energy conversion), or parametric studies of complex environments (e.g. harbour dynamics or estuarine hydrodynamics), or hazard assessment and prevention. In all cases, computationally affordable, fast, and accurate numerical modelling is essential to improve the quality of (or to shorten) design cycles, and to allow performance enhancements in early stages of development. The goal is to afford simulations over very long times with many parameters, or to embed a model in an alert system.

In addition, even in the best of circumstances, the reliability of numerical predictions is limited by the intrinsic randomness of the data used in practice
to define boundary conditions, initial conditions, geometry, etc. This uncertainty, related to measurement errors,
is called *aleatory*, and can be neither removed nor reduced. Physical models and the related Partial Differential Equations (PDEs)
also carry a structural uncertainty, since they are derived under assumptions of limited validity and calibrated with manipulated experimental data (filtering, averaging, etc.). These uncertainties are called *epistemic*, as they stem from a lack of knowledge.
Unfortunately, measurements in fluids are delicate and expensive.
In complex flows, especially flows involving interfaces and moving fronts, they are sometimes impossible to carry out, due to scaling problems, repeatability issues
(e.g. tsunami events), technical issues (different physics in the different flow regions),
or dangerousness (e.g. high temperature re-entry flows, or combustion). Frequently,
they are impractical due to the time scales involved (e.g. the characterisation of oxidation processes related
to a new material micro-/meso-structure).
This increases the uncertainty associated with measurements and reduces the amount of information available to construct physical/PDE models.
These uncertainties also play a crucial role in the numerical certification or optimization of a fluid-based device.
Certification and optimization under uncertainty may require hundreds or even thousands of flow simulations,
at a usually prohibitive cost. The real challenge is thus to construct
an accurate and computationally affordable numerical model that handles uncertainties efficiently.
In particular, this model should account for the variability due to uncertainties,
whether they come from the certification/optimization parameters or from modelling choices.

To face this challenge and provide new tools to accurately and robustly model and certify engineering devices based on fluid flows with moving fronts, we propose a program combining scientific research in asymptotic PDE analysis, high order adaptive PDE discretizations, and uncertainty quantification.

A standard way of conducting a certification study can be described as two-box modelling. The first box is the
physical model itself, composed of three main elements: the PDE system, mesh generation/adaptation, and the discretization of the PDE (numerical scheme). The second box is the main robust certification loop, which contains separate boxes for the evaluation of the physical model, the post-processing of the output, and the exploration of the spaces of physical and stochastic parameters (uncertainties).
Some interactions already take place within this loop, and exploiting them is necessary to realize the full potential of high order methods.
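Schematically, the two-box structure can be sketched as follows (a toy illustration; all function names and the scalar model are ours, standing in for the actual PDE chain):

```python
import statistics

def physical_model(physical_params, random_inputs):
    """Box 1: PDE system + mesh generation/adaptation + numerical scheme.
    Here a toy scalar placeholder for one deterministic simulation."""
    return physical_params["design"] * (1.0 + 0.1 * random_inputs["xi"])

def certification_loop(designs, samples):
    """Box 2: explore the physical and stochastic parameter spaces,
    evaluate the model, and post-process the outputs into statistics."""
    results = {}
    for d in designs:                                   # physical parameter space
        outs = [physical_model({"design": d}, {"xi": xi})   # model evaluations
                for xi in samples]                      # stochastic space
        results[d] = (statistics.mean(outs),            # post-processing:
                      statistics.pstdev(outs))          # output statistics
    return results

stats = certification_loop(designs=[1.0, 2.0], samples=[-1.0, 0.0, 1.0])
```

The point of the text above is precisely that this nesting treats Box 1 as a black box; the adaptive strategies discussed next break that separation.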

As things stand today, we will not be able to take advantage of the potential of new high order numerical techniques and of hierarchical (multi-fidelity) robust certification approaches without a very aggressive adaptive methodology. Such a methodology will require interactions between, e.g., the uncertainty quantification methods and the adaptive spatial discretization, as well as with the PDE modelling part.
Such a strategy cannot be developed, let alone implemented in an operational context, without completely disassembling the two-box scheme, and letting all the parts (PDE system, mesh generation/adaptation, numerical scheme, evaluation of the physical model, post-processing of the output, exploration of the spaces of physical and stochastic parameters) interact. This is what we want to do in `CARDAMOM`.

Our strength is also our unique chance of exploring the interactions between all these parts. We will try to answer some fundamental questions related to the following aspects:

What are the relations between PDE model accuracy (asymptotic error) and scheme accuracy, and how can we control, and possibly exploit, these relations to minimize the error for a given computational effort ;

How to devise and implement adaptation techniques ;

How to exploit the wealth of information made available by the optimization *and* uncertainty quantification processes to construct a more aggressive adaptation strategy in physical, parameter, and stochastic space, and in the physical model itself ;

These research avenues, related to the PDE models and numerical methods used, will allow us to have an impact on the targeted application communities, which are:

Aeronautics and aerospace engineering (de-/anti-icing systems, space re-entry) ;

Energy engineering (organic Rankine cycles and wave energy conversion) ;

Material engineering (self-healing composite materials) ;

Coastal engineering (coastal protection, hazard assessment etc.).

The main research directions related to the above topics are discussed in the following section.

In many of the applications we consider, intermediate fidelity models are, or can be, derived using an asymptotic expansion of the relevant scale resolving PDEs, possibly followed by some averaging of the resulting continuous equations. The resulting systems of PDEs are often very complex, and their characterization, e.g. in terms of stability, is unclear, poor, or too involved to allow discrete analogs of the continuous properties to be obtained. This makes the numerical approximation of these PDE systems a real challenge. Moreover, most of these models are based on asymptotic expansions involving small geometrical scales. This is true for many applications considered here involving flows in/of thin layers (free surface waves, liquid films on wings generating ice layers, oxide flows in material cracks, etc.). Such an asymptotic expansion is nothing else than a discretization (a sort of Taylor expansion) in terms of the small parameter. The actual discretization of the PDE system is another expansion, in space, with the mesh size as small parameter. What is the interaction between these two expansions? Could we use the spatial discretization (truncation error) as a means of filtering undesired small scales, instead of having to explicitly derive PDEs for the large scales? We will investigate in depth the relations between asymptotics and discretization by:

comparing the asymptotic limits of discretized forms of the relevant scale resolving equations with the discretization of the analogous continuous asymptotic PDEs. Can we discretize a well understood system of PDEs instead of a less understood and more complex one? ;

studying the asymptotic behaviour of the error terms generated by coarse one-dimensional discretizations in the direction of the “small scale”. What is the influence of the number of cells along the vertical direction, and of their clustering? ;

deriving equivalent continuous equations (modified equations) for anisotropic discretizations in which the direction of the “small scale” is approximated with a small number of cells. What is the relation with known asymptotic PDE systems?
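Schematically, the interplay these questions point to can be summarized by two nested expansions (notation ours; the exponents $k$ and $p$ depend on the model and scheme at hand):

```latex
% model reduction in the small parameter \varepsilon (e.g. a depth-to-length
% ratio), and discretization in the mesh size h:
\| u_{\varepsilon} - u \| = O(\varepsilon^{k}), \qquad
\| u_{h} - u_{\varepsilon} \| = O(h^{p+1}),
% so the total error w.r.t. the scale resolving solution u behaves like
\| u_{h} - u \| \;\lesssim\; C_{1}\,\varepsilon^{k} \;+\; C_{2}\,h^{p+1},
% and balancing the two contributions suggests h \sim \varepsilon^{k/(p+1)}:
% refining the mesh further only resolves the modelling error.
```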

Our objective is to gain sufficient control of the interaction between discretization and asymptotics to be able to replace
the coupling of several complex PDE systems by adaptive, strongly anisotropic finite element approximations of relevant and well understood PDEs.
Here the anisotropy is intended in the sense of having a specific direction in which a much poorer (and possibly variable with the flow conditions)
polynomial approximation (expansion) is used. The final goal is, profiting from the availability of faster and cheaper computational platforms,
to be able to automatically control numerical *and* physical accuracy of the model with the same techniques.
This activity will be used to improve our modelling in coastal engineering, as well as
for de-/anti-icing systems, wave energy converters, and composite materials (cf. next sections).

In parallel to these developments, we will strive to gain a better understanding of continuous asymptotic PDE models. We will in particular work on improving, and possibly simplifying, their numerical approximation. An effort will be made to embed in these more complex nonlinear PDE models discrete analogs of the operator identities necessary for stability (see e.g. recent work and references therein).

We will work on both the improvement of high order mesh generation and adaptation techniques, and the construction of more efficient, adaptive, high order discretization methods.

Concerning curved mesh generation, we will focus on two points. First, we will propose a robust and automatic method to generate curved simplicial meshes for realistic geometries. The untangling algorithm we plan to develop is a hybrid technique combining local mesh optimization on the surface of the domain with a linear elasticity analogy in its volume. Second, we plan to extend this method to hybrid (prism/tetrahedron) meshes.

For time dependent adaptation, we will try to exploit as much as possible the combination of mesh deformation and re-meshing techniques.

The development of high order schemes for the discretization of the PDE will be a major part of our activity. We will work from the start in an Arbitrary Lagrangian Eulerian setting, so that mesh movement will be easily accommodated, and investigate the following main points:

the ALE formulation is well adapted both to handling moving meshes and to providing conservative, high order, monotone remaps between different meshes. We want to address the cost-accuracy trade-off of adaptive mesh computations by exploring different degrees of coupling between the flow and mesh PDEs. Initial experience has indicated that a clever coupling may lead to a considerable CPU time reduction for a given resolution. This balance certainly depends on the nature of the PDEs, on the accuracy level sought, on the cost of the scheme, and on the time stepping technique. All these elements will be taken into account to provide the most efficient formulation ;

the conservation of volume, and the subsequent preservation of constant mass-momentum-energy states on deforming domains, is one of the most fundamental requirements of Arbitrary Lagrangian-Eulerian formulations. For complex PDEs such as the ones considered here, and especially for some applications, there may be a competition between the conservation of e.g. mass and the conservation of other constant states that are just as important. This is typically the case for free surface flows, in which mass preservation is in competition with the preservation of constant free surface levels. Similar problems may arise in other applications. Possible solutions to this competition may come from super-approximation (use of higher order polynomials) of some of the data (e.g. the bathymetry), allowing to reduce the error in the preservation of one of the competing quantities. This is similar to what is done in super-parametric approximations of the boundaries of an object immersed in the flow, except that here the data may enter the PDE explicitly, and not only through the boundary conditions. Several efficient solutions for this issue will be investigated to obtain fully conservative moving mesh approaches ;

an issue related to the previous one is the accurate treatment of wall boundaries. It is known that, even for standard lower (second) order methods,
a higher order, curved approximation of the boundaries may be beneficial. This, however, may become difficult when considering moving objects, as e.g. in the study of the impact of ice debris in the flow.
To alleviate this issue, we plan to follow up on our initial work on the combined use of immersed boundary techniques with high order, anisotropic (curved) mesh adaptation.
In particular, we will develop combined approaches involving high order hybrid meshes on fixed boundaries, together with penalization and immersed boundary techniques
for moving objects. We plan to study the accuracy obtainable across discontinuous functions with these combined approaches ;

the proper treatment of different physics may be addressed by using mixed/hybrid schemes in which different variables/equations are approximated with different polynomial expansions. A typical example is our work on the discretization of highly non-linear wave models, in which we have shown how to use a standard continuous Galerkin method for the elliptic equation/variable representing the dispersive effects, while the underlying hyperbolic system is evolved with a (discontinuous) third order finite volume method. This technique will be generalized to other classes of discontinuous methods, and similar ideas will be used in other contexts to provide a flexible approximation. Such methods have clear advantages in multiphase flows, but not only there. A typical example where such mixed methods are beneficial is flows involving different species and tracer equations, which are typically better treated with a discontinuous approximation. Another example is the use of this mixed approximation to describe the topography with a high order continuous polynomial, even within a discontinuous method. This greatly simplifies the numerical treatment of the bathymetric source terms ;

the enhancement of stabilized methods based on continuous finite element approximations will remain a main topic. We will further
pursue the construction of simplified stabilization operators which do not contribute to the mass matrix.
We will in particular generalize our initial results to higher order spatial approximations using cubature points, Bezier polynomials, or hierarchical approximations.
This will also be combined with time dependent variants of the reconstruction techniques initially proposed by D. Caraeni,
allowing a more flexible approach ;

time stepping is an important issue, especially in the presence of local mesh adaptation. The techniques we use will force us to investigate local and multilevel approaches. We will study the possibility of constructing semi-implicit methods combining extrapolation techniques with space-time variational approaches. Other techniques will be considered, such as multi-stage methods obtained by Defect-Correction, multi-step Runge-Kutta methods, and spatial partitioning techniques. A major challenge will be to guarantee sufficient locality of the time integration method to allow the efficient treatment of highly refined meshes, especially for viscous reactive flows. Another challenge will be to embed these methods in the stabilized schemes we will develop.
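The conservation competition mentioned above for free surface flows can be illustrated, on a fixed 1D mesh, by a minimal well-balanced finite volume sketch (hydrostatic reconstruction with a Rusanov flux, a standard textbook construction, not the team's scheme): a lake-at-rest state over a bathymetry bump is preserved to round-off while mass is conserved.

```python
import numpy as np

g = 9.81

def rusanov(UL, UR):
    """Rusanov numerical flux for 1D shallow water, U = (h, hu)."""
    def phys(U):
        h, q = U
        return np.array([q, q * q / h + 0.5 * g * h * h])
    a = max(abs(UL[1] / UL[0]) + np.sqrt(g * UL[0]),
            abs(UR[1] / UR[0]) + np.sqrt(g * UR[0]))
    return 0.5 * (phys(UL) + phys(UR)) - 0.5 * a * (UR - UL)

def step(h, q, b, dx, dt):
    """One explicit step of the hydrostatic-reconstruction scheme
    (well balanced for lake at rest), periodic boundaries."""
    n = len(h)
    hn, qn = h.copy(), q.copy()
    s = dt / dx
    for i in range(n):
        j = (i + 1) % n
        bstar = max(b[i], b[j])
        hL = max(0.0, h[i] + b[i] - bstar)   # reconstructed interface depths
        hR = max(0.0, h[j] + b[j] - bstar)
        F = rusanov(np.array([hL, hL * q[i] / h[i]]),
                    np.array([hR, hR * q[j] / h[j]]))
        hn[i] -= s * F[0]
        qn[i] -= s * (F[1] + 0.5 * g * (h[i] ** 2 - hL ** 2))  # source corr.
        hn[j] += s * F[0]
        qn[j] += s * (F[1] + 0.5 * g * (h[j] ** 2 - hR ** 2))
    return hn, qn

# lake at rest over a smooth bump: the free surface h + b should stay flat
n = 50
x = (np.arange(n) + 0.5) / n
b = 0.2 * np.exp(-((x - 0.5) / 0.1) ** 2)
h = 1.0 - b                                   # flat surface eta = 1
q = np.zeros(n)
mass0 = h.sum()
dt = 0.4 * (1.0 / n) / np.sqrt(g)             # CFL-type bound, wave speed <= sqrt(g h)
for _ in range(200):
    h, q = step(h, q, b, 1.0 / n, dt)

drift_eta = np.max(np.abs(h + b - 1.0))       # free surface drift
drift_q = np.max(np.abs(q))                   # spurious momentum
```

A naive centered discretization of the bathymetric source term would instead generate spurious currents near the bump, which is exactly the mass/free-surface competition discussed in the text.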

As already remarked, classical methods for uncertainty quantification suffer from the so-called curse of dimensionality. The adaptive approaches proposed so far are limited in terms of efficiency or accuracy. Our aim here is to develop methods and algorithms permitting very high-fidelity simulation in the physical and stochastic spaces at the same time. We will focus on both non-intrusive and intrusive approaches.

Simple non-intrusive techniques to reduce the overall cost of simulations under uncertainty will be based on adaptive quadrature in stochastic space, combined with mesh adaptation in physical space using error monitors related to the variance or to the sensitivities obtained e.g. by an ANOVA decomposition. For steady state problems, re-meshing using metric techniques is enough. For time dependent problems, both mesh deformation and re-meshing techniques will be used. This approach may easily be used in multiple space dimensions to minimize the overall cost of model evaluations, by using high order moments of a properly chosen output functional for the adaptation (as in optimization). Moreover, for high order curved meshes, the use of high order moments and sensitivities issued from the UQ method or optimization provides a viable remedy to the lack of error estimators for high order schemes.

Despite the coupling between stochastic and physical space, this approach can be made massively parallel by means of extrapolation/interpolation techniques for the high order moments, in time and on a reference mesh, guaranteeing the complete independence of deterministic simulations. This approach has the additional advantage of being feasible for several different application codes due to its non-intrusive character.
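The non-intrusive idea can be sketched minimally as follows (the deterministic solver is a toy placeholder of ours): each quadrature node in stochastic space is one independent deterministic run, and the moments driving the adaptation are assembled afterwards.

```python
import numpy as np

def deterministic_solver(xi):
    """Placeholder for one deterministic PDE run with uncertain input xi."""
    return (1.0 + 0.3 * xi) ** 2      # toy output functional

# Gauss-Legendre quadrature for xi uniformly distributed on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(5)
weights = weights / 2.0               # normalize: uniform density is 1/2

# each node is an independent run -> embarrassingly parallel
outputs = np.array([deterministic_solver(xi) for xi in nodes])

mean = np.dot(weights, outputs)                  # first moment
var = np.dot(weights, (outputs - mean) ** 2)     # second centered moment
```

In an adaptive version, `var` (or ANOVA sensitivities in several stochastic dimensions) would steer both the placement of new quadrature nodes and the physical mesh adaptation.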

To improve on the accuracy of the above methods, intrusive approaches will also be studied. To propagate uncertainties in stochastic differential equations, we will use Harten's multiresolution framework. This framework reduces the dimensionality of the discrete space of function representation, defined in a proper stochastic space, and thus the number of explicit evaluations required to represent the function, yielding a gain in efficiency. Moreover, multiresolution analysis offers a natural tool to investigate the local regularity of a function, can be employed to build an efficient refinement strategy, and also provides a procedure to refine/coarsen the stochastic space for unsteady problems. This strategy should allow to capture and follow all types of flow structures and, formulated as a non-linear scheme in terms of compression capabilities, to handle non-smooth problems. The potential of the method also relies on its moderately intrusive character, compared to e.g. spectral Galerkin projection, where a theoretical manipulation of the original system is needed.
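The prediction/detail mechanism at the heart of Harten's framework can be illustrated in 1D (a toy sketch of ours, not the team's stochastic implementation): fine-level point values are predicted from the coarse level by interpolation, and only the prediction errors (details) above a threshold need to be stored.

```python
import numpy as np

def decompose(u):
    """One Harten multiresolution step on point values: keep the even
    samples as the coarse level; store the odd samples as prediction
    errors w.r.t. linear interpolation of the coarse level."""
    coarse = u[::2]
    pred = 0.5 * (coarse[:-1] + coarse[1:])   # predict the odd points
    return coarse, u[1::2] - pred             # (coarse level, details)

def reconstruct(coarse, details):
    """Exact inverse of decompose (up to any thresholding of details)."""
    u = np.empty(2 * len(details) + 1)
    u[::2] = coarse
    u[1::2] = 0.5 * (coarse[:-1] + coarse[1:]) + details
    return u

# smooth function: details are small, so most can be dropped
x = np.linspace(0.0, 1.0, 2 ** 7 + 1)
u = np.sin(2.0 * np.pi * x)
c, d = decompose(u)
compressed = np.where(np.abs(d) > 1e-3, d, 0.0)   # thresholding
err = np.max(np.abs(reconstruct(c, compressed) - u))
kept = np.count_nonzero(compressed)               # details actually stored
```

Where the function is locally non-smooth the details stay large, which is exactly the regularity indicator the text proposes to drive refinement and coarsening.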

Several activities are planned to generalize our initial work, and to apply it to complex flows in multiple (space) dimensions and with many uncertain parameters.

The first is the improvement of efficiency. This may be achieved by means of anisotropic mesh refinement in the stochastic space, and by a strong parallelization of the method. Concerning the first point, we will investigate several anisotropic refinement criteria existing in the literature (also in the UQ framework), starting with those already used in the team to adapt the physical grid. Concerning the implementation, the scheme is conceived to be highly parallel thanks to the external cycle over the dimensions of the space of uncertain parameters: in principle, a number of parallel threads equal to the number of spatial cells could be employed. The scheme should also be developed and tested for unsteady and discontinuous probability density functions, and for correlated random variables. Both the compression capabilities and the accuracy of the scheme (in the stochastic space) should be enhanced with a high-order multidimensional conservative and non-oscillatory polynomial reconstruction (ENO/WENO).

Another main objective is the use of multiresolution in both physical and stochastic space. This requires a careful handling of data and an updated definition of the wavelets. Until now, only a weak coupling has been performed: the number of points in the stochastic space varies with the physical space, but the number of points in the physical space remains unchanged. Several works exist on multiresolution approaches for image compression, but this could be the first time this kind of approach is applied simultaneously in the two spaces, with an unsteady refinement (and coarsening) procedure. The experimental code developed with these technologies will have to fully exploit the processing capabilities of modern massively parallel architectures, since there is a unique mesh to handle in the coupled physical/stochastic space.

Due to the computational cost, it is of prominent importance to consider multi-fidelity approaches gathering high-fidelity and low-fidelity computations. Note that low-fidelity solutions can be given by surrogate models in the stochastic space, and/or by simplified physical models of some element of the system. Optimization under uncertainty for complex problems may require the evaluation of costly objective and constraint functions hundreds or even thousands of times, at a usually prohibitive cost. Moreover, the robustness of the optimal solution should be assessed, requiring efficient methods for coupling the optimization and stochastic spaces. Different approaches will be explored, and work will be developed along three axes:

a robust strategy using the statistics evaluation will be applied separately,
*i.e.* using only low-fidelity or only high-fidelity evaluations. Some classical optimization algorithms will be used in this case.
The influence of high-order statistics and of model reduction on robust design optimization will be explored,
also by further developing some low-cost methods for robust design optimization based on so-called Simplex methods ;

a multi-fidelity strategy, efficiently combining low-fidelity and high-fidelity estimators in both physical and stochastic space, will be conceived, using a Bayesian framework to take into account model discrepancy, and a Polynomial Chaos (PC) expansion to build a surrogate model ;

developing advanced methods for robust optimization, in particular by extending the Simplex approach mentioned above.

This work is related to the activities foreseen in the EU contract MIDWEST, in the ANR LabCom project VIPER (currently under evaluation), in a joint project with DGA and VKI, and in two projects under way with AIRBUS and SAFRAN-HERAKLES.

Impact of large ice debris on downstream aerodynamic surfaces, and ingestion by aft mounted engines, must be considered during the aircraft certification process. Such debris is typically the result of ice accumulation on unprotected surfaces, of ice accretion downstream of protected areas, or of ice growth on surfaces due to delayed activation of ice protection systems (IPS) or IPS failure. This raises the need for accurate ice trajectory simulation tools to support pre-design, design, and certification phases while improving cost efficiency. Present ice trajectory simulation tools have limited capabilities, due to the lack of appropriate experimental aerodynamic force and moment data for ice fragments, and to the large number of variables that can affect the trajectories of ice particles in the aircraft flow field: shape, size, mass, initial velocity, shedding location, etc. There are generally two types of models used to track shed ice pieces. The first type assumes that ice pieces do not significantly affect the flow. The second type takes into account the interaction of ice pieces with the flow. We are concerned with the second type of models, involving fully coupled time-accurate aerodynamic and flight mechanics simulations, and thus requiring high efficiency adaptive tools, and possibly tools allowing to easily track moving objects in the flow. We will in particular pursue and enhance our initial work based on adaptive immersed boundary capturing of moving ice debris, whose movements are computed using basic mechanical laws.

It has been proposed to model ice shedding trajectories by an innovative paradigm based on CArtesian grids, PEnalization and LEvel Sets (the LESCAPE code). Our objective is to use the potential of high order unstructured mesh adaptation and immersed boundary techniques to provide a geometrically flexible extension of this idea. These activities will be linked to the development of efficient mesh adaptation and time stepping techniques for time dependent flows, and to their coupling with the immersed boundary methods we started developing in the FP7 EU project STORM. In these methods, we compensate for the error introduced at solid walls by the penalization through anisotropic mesh adaptation. From the numerical point of view, one of the major challenges is to guarantee the efficiency and accuracy of the time stepping in the presence of highly stretched, adaptive, and moving meshes. Semi-implicit, locally implicit, multi-level, and split discretizations will be explored to this end.
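In its simplest form, volume penalization can be sketched as follows (a generic 1D toy of ours, unrelated to the LESCAPE or STORM codes): the solid is represented by a mask χ and a stiff forcing (χ/η)(u − u_solid) that drives the solution to the solid value, at the price of a penalization error near the wall that mesh adaptation must then compensate.

```python
import numpy as np

# 1D Laplace problem with a penalized solid obstacle on [0.4, 0.6]:
#   -u'' + (chi/eta) * (u - u_solid) = 0,   u(0) = 0, u(1) = 1
n, eta, u_solid = 201, 1e-8, 0.5
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
chi = ((x >= 0.4) & (x <= 0.6)).astype(float)   # mask of the solid region

A = np.zeros((n, n))
rhs = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0                       # Dirichlet boundary rows
rhs[-1] = 1.0
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = -1.0 / dx ** 2  # standard 3-point Laplacian
    A[i, i] = 2.0 / dx ** 2 + chi[i] / eta      # + stiff penalization term
    rhs[i] = chi[i] / eta * u_solid

u = np.linalg.solve(A, rhs)
dev = abs(u[n // 2] - u_solid)   # deviation at the obstacle centre
```

As η → 0 the solution inside the mask converges to the solid value; the residual wall error concentrated at the mask edges is what the anisotropic adaptation mentioned above is meant to absorb.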

Besides the numerical aspects, we will deal with modelling challenges. One source of complexity is the initial conditions, which are essential to compute ice shedding trajectories. It is thus extremely important to understand the mechanisms of ice release. With the development of the next generations of engines and aircraft, there is a crucial need to better assess and predict icing aspects early in design phases, and to identify breakthrough technologies for ice protection systems compatible with future architectures. When a thermal ice protection system is activated, it melts a part of the ice in contact with the surface, creating a liquid water film and therefore lowering the ability of the ice block to adhere to the surface. The aerodynamic forces are then able to detach the ice block from the surface. In order to assess the performance of such a system, it is essential to understand the mechanisms by which the aerodynamic forces manage to detach the ice. The current state of the art in icing codes is an empirical criterion, which is unsatisfactory. Following early work in the literature, we will develop appropriate asymptotic PDE approximations describing ice formation and detachment, trying to embed in this description elements of damage/fracture mechanics. These models will constitute closures for aerodynamics/RANS and URANS simulations, in the form of PDE wall models or modified boundary conditions.

In addition, several sources of uncertainty are associated with the ice geometry, size, orientation, and shedding location. In very few papers, sensitivity analyses based on the Monte Carlo method have been conducted to take into account the uncertainty of the initial conditions and the chaotic nature of the ice particle motion. We aim to propose a systematic approach to handle every source of uncertainty in an efficient way, relying on state-of-the-art techniques developed in the team. In particular, we will propagate uncertainties on the initial conditions (position, orientation, velocity, ...) through a low-fidelity model in order to get statistics for a multitude of particle tracks. This study will be done in collaboration with ETS (École de technologie supérieure, Canada). The long-term objective is to produce footprint maps and to analyse the sensitivity of the models developed.

As already mentioned, atmospheric re-entry involves multi-scale fluid flow physics including highly rarefied effects, aerothermochemistry, and radiation. All this must be coupled to the response of thermal protection materials to extreme conditions. This response is most often the actual objective of the study, allowing the certification of Thermal Protection Systems (TPS).

One of the applications we will consider is the so-called post-flight analysis of a space mission. This involves reconstructing the history of the re-entry module (trajectory and flow) from data measured on the spacecraft by means of a Flush Air Data System (FADS), a set of sensors flush mounted in the thermal protection system to measure the static pressure (pressure taps) and heat flux (calorimeters). This study involves the accurate determination of the freestream conditions along the trajectory, which in practice means determining the temperature, pressure, and Mach number in front of the bow shock forming during re-entry. As shown by zur Nieden and Olivier, state-of-the-art techniques for freestream characterization rely on several approximations, such as using equivalent calorically perfect gas formulas instead of taking into account the complex aero-thermo-chemical behaviour of the fluid. These techniques integrate neither measurement errors nor the heat flux contribution, a correct knowledge of which drives more complex models such as gas-surface interaction. In this context, CFD supplied with UQ tools makes it possible to take into account chemical effects, and to include both measurement errors and epistemic uncertainties, e.g. those due to the fluid approximation, on the chemical model parameters in the bulk and at the wall (surface catalysis).

Rebuilding the freestream conditions from the stagnation point data therefore amounts to solving a stochastic inverse problem, as in robust optimization. Our objective is to build a robust and global framework for rebuilding the freestream conditions from stagnation-point measurements along the trajectory of a re-entry vehicle. To achieve this goal, methods should be developed for:

an accurate simulation of the flow in all the regimes, from rarefied, to transitional, to continuum ;

providing a complete analysis of the reliability and prediction capability of numerical simulations of hypersonic flows, determining the most important sources of error in the simulation (PDE model, discretization, mesh, etc.) ;

reducing the overall computational cost of the analysis.
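As a deliberately simplified toy version of the rebuilding idea, one can invert the classical Rayleigh pitot formula, i.e. precisely the calorically perfect gas approximation the text proposes to go beyond, for the freestream Mach number (all names ours; the full problem is stochastic and multi-parameter):

```python
def rayleigh_pitot(M, gamma=1.4):
    """Pitot-to-static pressure ratio p02/p1 behind a normal shock
    (Rayleigh pitot formula, calorically perfect gas, M > 1)."""
    a = ((gamma + 1.0) ** 2 * M * M) / (4.0 * gamma * M * M - 2.0 * (gamma - 1.0))
    b = (1.0 - gamma + 2.0 * gamma * M * M) / (gamma + 1.0)
    return a ** (gamma / (gamma - 1.0)) * b

def rebuild_mach(ratio, lo=1.0, hi=50.0):
    """Invert the (monotone increasing) pitot relation by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rayleigh_pitot(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

measured = rayleigh_pitot(2.0)      # synthetic noise-free "measurement"
M_rebuilt = rebuild_mach(measured)
```

The stochastic inverse problem described above replaces this scalar deterministic inversion with a statistical one: noisy pressure and heat flux measurements, uncertain chemistry, and several freestream unknowns at once.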

Our work on the improvement of the simulation capabilities for re-entry flows will focus on both the models and the methods. We will in particular provide an approach to extend the use of standard CFD models to the transitional regime, with CPU gains of several orders of magnitude w.r.t. Boltzmann solvers. To do this, we will use the results of a boundary layer analysis that corrects the Navier-Stokes equations. This theory yields modified (or extended) boundary conditions, the so-called "slip velocity" and "temperature jump" conditions. It seems to be largely ignored by the aerospace engineering community, which instead uses a simpler theory due to Maxwell that also gives slip and jump boundary conditions, but with incorrect coefficients. This is why several teams have tried to modify these coefficients by empirical methods, but it seems that this does not yield satisfactory boundary conditions.

Our project is twofold. First, we want to revisit the asymptotic theory and make it known in the aerospace community. Second, we want to perform an intensive sensitivity analysis of the model with respect to the various coefficients of the boundary conditions. Indeed, there are two kinds of coefficients in these boundary conditions. The first one is the accommodation coefficient: in the kinetic model, it sets the proportion of molecules that are specularly reflected, while the others are re-emitted according to a Maxwellian distribution (the so-called diffuse reflection). This coefficient is a parameter of the kinetic model that can be measured experimentally: it depends on the material and structure of the solid boundary, and on the gas. Its influence on the results of a Navier-Stokes simulation is certainly quite important. The other coefficients are those of the slip and jump boundary conditions: they come from the boundary layer analysis, and we have no idea of the order of magnitude of their influence on the results of a Navier-Stokes solution. In particular, it is not clear whether these results are more sensitive to the accommodation coefficient or to the slip and jump coefficients.

In this project, we shall use the expertise of the team on uncertainty quantification to investigate the sensitivity of the Navier-Stokes model, equipped with slip and jump boundary conditions, to these various coefficients. This would be rather new in the aerospace community. It could also have an impact in other fields in which slip and jump boundary conditions with incorrect coefficients are still used, for instance in spray simulations: for very small particles immersed in a gas (when the particle radius is of the same order of magnitude as the mean free path in the gas), the drag coefficient is modified to account for rarefaction effects, and slip and jump boundary conditions are used.
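To fix ideas on how such a sensitivity study can be set up, the toy model below solves planar Couette flow with the classical first-order Maxwell slip condition, whose slip length is (2 − σ)/σ · λ, and scans the accommodation coefficient σ; all numerical values are illustrative, and the model ignores the temperature jump entirely.

```python
def couette_slip(sigma, U=100.0, H=1e-3, mfp=1e-5, mu=1.8e-5):
    """Planar Couette flow with first-order Maxwell slip at both walls.
    sigma : accommodation coefficient in (0, 1]; sigma = 1 -> fully
    diffuse reflection, small sigma -> mostly specular, hence more slip.
    U, H, mfp, mu : plate speed, gap, mean free path, viscosity (illustrative).
    Returns (wall shear stress, slip velocity at the moving wall)."""
    Ls = (2.0 - sigma) / sigma * mfp      # Maxwell slip length
    dudy = U / (H + 2.0 * Ls)             # uniform shear rate with slip at both walls
    return mu * dudy, Ls * dudy

# Crude sensitivity scan of the accommodation coefficient.
sigmas = [0.2, 0.4, 0.6, 0.8, 1.0]
shears = [couette_slip(s)[0] for s in sigmas]
```

Even this caricature shows the qualitative trend a full UQ study would quantify: lowering σ increases the slip length and reduces the wall shear.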

Another application with very close similarities to the physics of de-anti icing systems is the modelling of the solid and liquid ablation of the thermal protection system of the vehicle. This involves the degradation and recession of the solid boundary of the protection layer due to the heating generated by friction. As for de-anti icing systems, the simulation of these phenomena needs to take into account the heat conduction in the solid, its phase change, and the coupling between a weakly compressible and a compressible phase. Fluid/solid coupling methods are generally based on a weak approach. Here we will study, by theoretical and numerical techniques, a strong coupling method for the interaction between the fluid and the solid, and, as for de-anti icing systems, attempt to develop appropriate asymptotic models. These would constitute some sort of thin layer/wall models to couple to the external flow solver.

These modelling capabilities will be coupled to high order adaptive discretizations to provide high fidelity flow models. One of the most challenging problems is the minimization of the influence of the mesh and of the scheme on the wall conditions computed on the re-entry module. To reduce this influence, we will investigate high order adaptation across the bow shock, possibly adaptation based on high order uncertainty quantification moments related to the heat flux estimation, and shock fitting techniques , . These tools will be coupled to our robust inverse techniques. One of our objectives is the development of a low-cost strategy for improving the numerical prediction by taking into account experimental data. Some methods have recently been introduced for providing an estimation of the numerical errors/uncertainties. We will use metamodels for solving the inverse problem, by considering all sources of uncertainty, including those on the physical models. We will validate the framework using the experimental data available through our strong collaboration with the von Karman Institute for Fluid Dynamics (VKI). In particular, data coming from the VKI Longshot facility will be used. We will then show applications of the developed numerical tools to the prediction of flight conditions.

These activities will benefit from our strong collaborations with the CEA and with the von Karman Institute for Fluid Dynamics and ESA.

We will develop modelling and design tools, as well as dedicated platforms, for Rankine cycles using complex fluids (organic compounds), and for wave energy extraction systems.

*Organic Rankine Cycles (ORCs)* use heavy organic compounds as working fluids. This results in superior efficiency over steam Rankine cycles for source temperatures below 900 K.
ORCs typically require only a single-stage rotating component, making them much simpler than typical multi-stage steam turbines.
The strong pressure reduction in the turbine may lead to supersonic flows in the rotor, and thus to the appearance of shocks, which reduces the efficiency due to the associated losses.
To avoid this, either a larger multi-stage installation is used, in which smaller pressure drops are obtained in each stage, or centripetal turbines are used, at very high rotation speeds (of the order of 25,000 rpm).
The second solution preserves the simplicity of the expander, but leads to poor turbine efficiencies (60-80%) - w.r.t. modern, highly optimized, steam and gas turbines - and to higher mechanical constraints.
The use of *dense-gas working fluids*, *i.e.* operating close to the saturation curve, in properly chosen conditions could increase the turbine critical Mach number, avoiding the formation of shocks
and increasing the efficiency. Specific shape optimization may enhance these effects, possibly allowing a reduction of the rotation speeds.
However, dense gases may have significantly different properties with respect to dilute ones. Their
dynamics is governed by a thermodynamic parameter known as the fundamental derivative of gas dynamics

Γ = 1 + (ρ/c) (∂c/∂ρ)|_s ,

where ρ is the density, c the speed of sound, and s the entropy. For a calorically perfect gas Γ = (γ+1)/2, while dense gases close to the saturation curve may exhibit Γ < 1, or even Γ < 0, allowing non-classical gas dynamic behaviour.
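The perfect gas limit provides a sanity check of this parameter: evaluating Γ = 1 + (ρ/c)(∂c/∂ρ)|_s along an isentrope p = Kρ^γ must recover the analytic value (γ+1)/2. A minimal finite-difference verification, with illustrative values:

```python
import math

gamma = 1.4            # ratio of specific heats (illustrative)
K = 1.0                # isentrope constant: p = K * rho**gamma

def sound_speed(rho):
    """c = sqrt(gamma * p / rho) along the isentrope p = K rho^gamma."""
    return math.sqrt(gamma * K * rho ** (gamma - 1.0))

def fundamental_derivative(rho, h=1e-6):
    """Gamma = 1 + (rho/c) dc/drho|_s via a central difference."""
    c = sound_speed(rho)
    dcdrho = (sound_speed(rho + h) - sound_speed(rho - h)) / (2.0 * h)
    return 1.0 + rho / c * dcdrho

Gamma = fundamental_derivative(1.2)
```

For a real dense-gas equation of state (Span-Wagner, Peng-Robinson) the same finite-difference probe would be applied to the tabulated sound speed, where Γ can fall below one.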
The simulation of these gases requires accurate thermodynamic models, such as Span-Wagner or Peng-Robinson (see ). The data needed to build these models are scarce due to the difficulty of performing reliable experiments, so the related uncertainty is very high. Our work will go in the following directions:

develop deterministic models for the turbine and the other elements of the cycle. These will involve multi-dimensional high fidelity models, as well as intermediate and low fidelity (one- and zero-dimensional) models, for the turbine, and 0D/1D models for the other elements of the cycle (pump, condenser, etc.);

validate the coupling between the various elements. The following aspects will be considered: characterization of the uncertainties on the cycle components (e.g. empirical coefficients modelling the pump or the condenser), calibration of the thermodynamic parameters, modelling of the uncertainty of each element, and the influence of the unsteady experimental data;

demonstrate the interest of a specific optimization of the geometry, the operating conditions, and the choice of the fluid, according to the geographical location, by including local solar radiation data. Multi-objective optimization will be considered to maximize performance indexes (e.g. Carnot efficiency, mechanical work and energy production) and to reduce the variability of the output.

This work will provide modern tools for the robust design of ORC systems. It benefits from the direct collaboration with the SME EXOES (ANR LabCom VIPER), and from a collaboration with LEMMA.

*Wave energy conversion* is an emerging sector in energy engineering. The design of new and efficient Wave Energy Converters (WECs) is thus a crucial activity.
As pointed out by Weber , it is more economical to raise the technology performance level (TPL) of a wave energy converter concept at low technology readiness level (TRL).
Such a development path puts a greater demand on the numerical methods used. The findings of Weber also tell us that important design decisions as well as optimization should be performed as early in the development process as possible. However, as already mentioned, today the wave energy sector relies heavily on the use of tools based on simplified linear hydrodynamic models for the prediction of motions, loads, and power production.
Our objective is to provide this sector, and especially SMEs, with robust design tools
to minimize the uncertainties in predicted power production, loads, and costs of wave energy.

Following our initial work , we will develop, analyse, compare, and use for multi-fidelity optimization, non-linear models of different scales (fidelities), ranging from simple linear hydrodynamics, over asymptotic discrete nonlinear wave models, to non-hydrostatic anisotropic Euler free surface solvers. We will not work on the development of small scale models (VOF-RANS or LES) but may use such models, developed by our collaborators, for validation purposes. These developments will benefit from all our methodological work on asymptotic modelling and high order discretizations. As shown in , asymptotic models for WECs involve an equation for the pressure on the body, inducing a PDE structure similar to that of the incompressible flow equations. The study of appropriate stable and efficient high order approximations (velocity-pressure coupling, efficient time stepping) will be an important part of this activity. Moreover, the flow/floating-body interaction formulation introduces time stepping issues similar to those encountered in fluid-structure interaction problems, and requires a clever handling of complex floater geometries based on adaptive and ALE techniques. For this application, the derivation of fully discrete asymptotics may actually simplify our task.
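The trade-off between these model fidelities is already visible on linear dispersion. The sketch below compares the exact Airy phase speed with the non-dispersive shallow water limit and with one classical weakly dispersive Boussinesq-type closure, c² = gh/(1 + (kh)²/3); the closure chosen is only a representative example, not a model from our hierarchy.

```python
import math

g = 9.81  # gravitational acceleration (m/s^2)

def c2_airy(k, h):
    """Exact linear (Airy) phase speed squared: c^2 = g tanh(kh)/k."""
    return g * math.tanh(k * h) / k

def c2_shallow(k, h):
    """Non-dispersive shallow water limit: c^2 = g h."""
    return g * h

def c2_boussinesq(k, h):
    """A weakly dispersive Boussinesq-type closure: c^2 = g h / (1 + (kh)^2/3)."""
    return g * h / (1.0 + (k * h) ** 2 / 3.0)

# Intermediate depth, kh = 1: the dispersive correction already matters.
h, k = 1.0, 1.0
err_sw = abs(c2_shallow(k, h) - c2_airy(k, h))
err_bq = abs(c2_boussinesq(k, h) - c2_airy(k, h))
```

At kh = 1 the Boussinesq-type closure stays within a few percent of Airy while the shallow water limit is off by roughly a quarter, which is exactly the kind of modelling error a multi-fidelity hierarchy must track.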

Once this hierarchy of models is available, we will use it to investigate and identify the modelling errors, and provide a more certain estimate of the cost of wave energy. Subsequently we will look into optimization cycles by comparing time-to-decision in a multi-fidelity optimization context. In particular, this task will include the development and implementation of appropriate surrogate models to reduce the computational cost of expensive high fidelity models. Artificial neural networks (ANNs) and Kriging response surfaces (KRSs) in particular will be investigated. This activity on asymptotic non-linear modelling for WECs, which has received very little attention in the past, will provide entirely new tools for this application. Multi-fidelity robust optimization is also an approach which has never been applied to WECs.
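A minimal sketch of the Kriging idea follows: plain Gaussian-process interpolation with a squared-exponential kernel and hand-picked hyper-parameters, standing in for a cheap surrogate of an expensive model. A production surrogate would estimate the hyper-parameters (e.g. by maximum likelihood) rather than fixing them, and would also exploit the posterior variance.

```python
import numpy as np

def sq_exp_kernel(xa, xb, length=1.0):
    """Squared-exponential covariance between two 1D point sets."""
    d = xa[:, None] - xb[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def kriging_fit(x_train, y_train, length=1.0, jitter=1e-10):
    """Precompute the kernel weights K^-1 y (small jitter for conditioning)."""
    K = sq_exp_kernel(x_train, x_train, length)
    K += jitter * np.eye(len(x_train))
    return np.linalg.solve(K, y_train)

def kriging_predict(x_new, x_train, weights, length=1.0):
    """Posterior mean at new points: k(x_new, X) @ K^-1 y."""
    return sq_exp_kernel(x_new, x_train, length) @ weights

# Stand-in for an expensive solver: a smooth 1D response.
x_tr = np.linspace(0.0, 2.0 * np.pi, 8)
y_tr = np.sin(x_tr)
w = kriging_fit(x_tr, y_tr)
y_hat = kriging_predict(np.array([np.pi / 3.0]), x_tr, w)
```

The interpolation property (exact reproduction of the training data, up to the jitter) is what distinguishes Kriging from regression-type surrogates.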

This work is the core of the EU OCEANEranet MIDWEST project, which we coordinate. It will be performed in collaboration with our European partners, and with a close supervision of European SMEs in the sector, which are part of the steering board of MIDWEST (WaveDragon, Waves4Power, Tecnalia).

Because of their high strength and low weight, ceramic-matrix composite materials (CMCs) are the focus of active research for aerospace and energy applications involving high temperatures, whether military or civil. Though based on brittle ceramic components, these composites are not brittle, thanks to a fibre/matrix interphase that preserves the fibres from cracks appearing in the matrix. Recent developments aim at also implementing, in civil aero engines, a specific class of CMCs that show a self-healing behaviour. Self-healing consists in filling cracks appearing in the material with a dense fluid formed in situ by oxidation of part of the matrix components. Self-healing (SH) CMCs are composed of a complex three-dimensional topology of woven fabrics containing fibre bundles immersed in a matrix coating of different phases. The oxide seal protects the fibres, which are sensitive to oxidation, thus delaying failure. The obtained lifetimes reach hundreds of thousands of hours .

The behaviour of a fibre bundle is actually extremely variable, as the oxidation reactions generating the self-healing mechanism have kinetics strongly dependent on temperature and composition. In particular, the lifetime of SH-CMCs depends on: (i) temperature and composition of the surrounding atmosphere; (ii) composition and topology of the matrix layers; (iii) the competition of the multidimensional diffusion/oxidation/volatilization processes; (iv) the multidimensional flow of the oxide in the crack; (v) the inner topology of fibre bundles; (vi) the distribution of critical defects in the fibres. Unfortunately, experimental investigations on the full materials are too long (they can last years) and their output too qualitative (the coupled effects can only be observed a-posteriori on a broken sample). Modelling is thus essential to study and to design SH-CMCs.

In collaboration with the LCTS laboratory (a joint CNRS-CEA-SAFRAN-Bordeaux University lab devoted to the study of thermo-structural materials in Bordeaux), we are developing a multi-scale model in which a structural mechanics solver is coupled with a closure model for the crack physico-chemistry. This model is obtained as a multi-dimensional, crack-averaged asymptotic approximation of the transport equations (Fick's laws) with chemical reaction sources, plus a potential model for the flow of oxide , , . We have demonstrated the potential of this model by showing the importance of taking into account the multi-dimensional topology of a fibre bundle (distribution of fibres) in the rupture mechanism. This means that the 0-dimensional models used in most studies (see e.g. ) appreciably underestimate the lifetime of the material. Based on these recent advances, we will further pursue the development of multi-scale, multi-dimensional asymptotic closure models for the parametric design of self-healing CMCs. Our objectives are to provide: (i) new, non-linear multi-dimensional mathematical models of CMCs, in which the physico-chemistry of the self-healing process is more strongly coupled to the two-phase (liquid-gas) hydrodynamics of the healing oxide; (ii) a model to represent and couple crack networks; (iii) a robust and efficient coupling with the structural mechanics code; (iv) a validation of this platform with experimental data obtained at the LCTS laboratory. The final objective is to set up a multi-scale platform for the robust prediction of the lifetime of SH-CMCs, which will be a helpful tool for the tailoring of the next generation of these materials.
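As a caricature of such a crack-averaged closure, consider steady oxygen diffusion along a crack with first-order consumption by the oxidation reaction, −D c″ + k c = 0, with a fixed concentration at the crack mouth and a no-flux condition at the tip. All coefficients below are purely illustrative, and the real model is multi-dimensional and coupled; a finite-difference solve nevertheless reproduces the analytic cosh profile of this 1D sketch:

```python
import numpy as np

# Illustrative values: diffusivity, rate constant, crack length, mouth concentration.
D, k, L, c0, N = 1e-9, 1e-3, 1e-3, 1.0, 200
h = L / N
x = np.linspace(0.0, L, N + 1)

# Assemble -D c'' + k c = 0 with c(0) = c0 (Dirichlet) and c'(L) = 0
# (Neumann, imposed with a second-order mirror node at the tip).
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], b[0] = 1.0, c0
for i in range(1, N):
    A[i, i - 1] = A[i, i + 1] = -D / h ** 2
    A[i, i] = 2.0 * D / h ** 2 + k
A[N, N - 1] = -2.0 * D / h ** 2
A[N, N] = 2.0 * D / h ** 2 + k
c = np.linalg.solve(A, b)

# Analytic solution: c = c0 cosh(sqrt(k/D)(L - x)) / cosh(sqrt(k/D) L).
c_exact = c0 * np.cosh(np.sqrt(k / D) * (L - x)) / np.cosh(np.sqrt(k / D) * L)
err = float(np.max(np.abs(c - c_exact)))
```

The decay of the concentration towards the tip is what the 0-dimensional closures average away, and what the multi-dimensional bundle topology further modulates.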

Our objective is to bridge the gap between the development of high order adaptive methods, which has mainly taken place in an industrial context, and environmental applications, with particular attention to coastal and hydraulic engineering. We want to provide tools for adaptive non-linear modelling at large and intermediate scales (near-shore, estuarine and river hydrodynamics). We will develop multi-scale adaptive models for free surface hydrodynamics. Besides the models and codes themselves, based on the most advanced numerics we will develop during this project, we want to provide sufficient know-how to control, adapt and optimize these tools.

We will focus our effort on the understanding of the interactions between asymptotic approximations
and numerical approximations. This is extremely important in at least two respects.
The first is the capability of a numerical model to handle highly dispersive wave propagation.
This is usually achieved with high accuracy asymptotic PDE expansions. Here we plan to make heavy use of our
results concerning the relations between vertical asymptotic expansions and
standard finite element approximations. In particular, we will invest some effort in the development of

Another important aspect which is not well enough understood at the moment is the role of dissipation in wave breaking regions. There are several examples of breaking closures, going from algebraic and PDE-based eddy viscosity methods , , , , to hybrid methods coupling dispersive PDEs with hyperbolic ones and trying to mimic wave breaking with travelling bores , , , , . In both cases, numerical dissipation plays an important role in the activation (or not) of the breaking closure, and the quantitative contribution of numerical dissipation to the flow has not been properly investigated. These elements must be clarified to allow full control of adaptive techniques for the models used in this type of applications.

Another point we want to clarify is how to optimize the discretization of asymptotic PDE models. In particular, once the mesh size(s) and the time step are added to the physical scales, we are in presence of at least three (or even more) small parameters. The relations between the physical ones have been more or less investigated, as have those between the purely numerical ones. We plan to study the impact of numerics on asymptotic PDE modelling by reverting the usual process and studying the asymptotic limits of finite element discretizations of the Euler equations. Preliminary results show that this does provide some understanding of this interaction and may lead to considerably improved numerical methods .

Developed since 2011 by V. Perrier in partnership with the Cardamom Inria team, the AeroSol library is a high order finite element library written in C++. The code has been designed to perform efficient computations with continuous and discontinuous finite element methods on hybrid, possibly curvilinear, meshes.

The work of the Cardamom team is focused on continuous finite element methods, while we focus on discontinuous Galerkin methods. However, everything is done to share the largest possible part of the code. The distribution of the unknowns is handled by the software PaMPA, first developed within the Inria teams Bacchus and Castor, and currently maintained in the Tadaam team.

The generic features of the library are:

**High order**. The methods can theoretically be of any order of accuracy, but
the finite element bases and quadrature formulas currently implemented provide
up to fifth order accuracy.

**Hybrid and curvilinear meshes**. AeroSol can deal with
conformal meshes of up to fifth order composed of lines, triangles, quadrangles,
tetrahedra, hexahedra, prisms, and pyramids.

**Continuous and discontinuous discretization**. AeroSol deals
with both continuous and discontinuous finite element methods.

We would like to emphasize three assets of this library:

**Its development environment** To allow good collaborative work and a functional library, a strong emphasis has been put on the use of modern
collaborative tools for developing our software. This includes the active use
of a repository, the use of CMake for the compilation, the constant
development of unit and functional tests for all the parts of the library
(using CTest),
and the use of the continuous integration tool Jenkins for testing the
different configurations of AeroSol and its dependencies. Efficiency is
regularly tested with direct interfacing with the PAPI library or with
tools like Scalasca.

**Its genericity** Many classes are common to all the
discretizations, for example classes concerning I/O, finite element functions,
quadratures, geometry, time integration, linear solvers, models and the interface
with PaMPA. Adding simple features (e.g. models, numerical fluxes, finite element
bases or quadrature formulas) can easily be done by writing the class and
declaring its use in only one class of the code.

**Its efficiency** This modularity is achieved by means
of template abstraction, so as to keep good performance. A dedicated efficient
implementation, based on the data locality of the discontinuous Galerkin
method, has been developed. As far as parallelism is concerned, we use
point-to-point communications and the HDF5 library for parallel I/O. The behavior of
the AeroSol library at medium scale (1000 to 2000 cores) was studied in
.

The AeroSol project fits the first axis of the Bordeaux Sud-Ouest development strategy, which is to build a coherent software suite that is scalable and efficient on new architectures, as the AeroSol library relies on several tools developed in other Inria teams, especially for the management of the parallel aspects. At the end of 2015, AeroSol had the following features:

**Boundary conditions** Periodic boundary conditions, time-dependent
inlet and outlet boundary conditions. Adiabatic and isothermal walls.
Steger-Warming based boundary conditions. Synthetic Eddy Method for generating turbulence.

**C++/Fortran interface** Tests for binding Fortran with C++.

**Development environment**
An upgraded use of CMake for compilation (gcc, icc and xlc),
CTest for automatic tests and memory checking,
lcov and gcov for code coverage reports. A CDash server for collecting the unit tests and the memory checking.
An interface for functional tests, currently under development.
Optional linking with HDF5, PAPI, and with dense small matrix libraries (BLAS, Eigen).
An updated shared PlaFRIM project and a joint AeroSol/Scotch/PaMPA project on the continuous integration platform. An on-going integration of Spack for handling dependencies. A fixed ESSL interface.

**Finite elements** Up to fourth degree Lagrange finite elements and hierarchical orthogonal finite element bases (with Dubiner transform on simplices) on lines, triangles, quadrangles, tetrahedra, prisms, hexahedra and pyramids. Finite element bases that are interpolation bases on Gauss-Legendre points for lines, quadrangles, and hexahedra, and for triangles (first and second order only).

**Geometry** Elementary geometrical functions for first order lines, triangles, quadrangles, prisms, tetrahedra, hexahedra and pyramids. Handling of high order meshes.

**In/Out**
Link with the XML library for handling parameter files. Parallel reader
for GMSH, with an embedded geometrical pre-partitioner.
Writer in the VTK-ASCII legacy format (cell and point centered). Parallel output in vtu and pvtu (Paraview) for cell-centered visualization, and in the XDMF/HDF5 format for both cell and point centered visualization. Ability to save the high order solution and to restart from it. Computation of volume and probe statistics. Ability to save averaged layer data in quad and hexa meshes. Ability to define user-defined output visualization variables.

**Instrumentation** AeroSol can trace memory consumption and memory problems through an interface with the PAPI library. Tests have also been performed with VTune and TAU. Tests with Maqao and Scalasca (VI-HPS workshop).

**Linear Solvers** Link with the external linear solvers UMFPACK, PETSc and MUMPS. Internal solver for diagonal and block-diagonal matrices.

**Memory handling** Discontinuous and continuous, sequential and parallel discretizations
based on PaMPA for generic meshes, including hybrid meshes.

**Models** Perfect gas Euler system, real gas Euler system (template-based abstraction for a generic equation of state), scalar
advection, wave equation in first order formulation, generic interface for defining space-time models from space models.
Diffusive models: isotropic and anisotropic diffusion, compressible Navier-Stokes. Scalar advection-diffusion model. Linearized Euler equations, and Sutherland model for non-isothermal diffusive flows. Shallow water model.

**Multigrid** Development in progress.

**Numerical fluxes** Centered fluxes, exact Godunov flux for linear hyperbolic systems, and Lax-Friedrichs flux. Riemann solvers for low Mach flows. A numerical flux accurate for both steady and unsteady computations.

**Numerical schemes** Continuous Galerkin method for the Laplace problem (up to fifth order) with non consistent time iteration or with direct matrix inversion.
Explicit and implicit discontinuous Galerkin methods for hyperbolic, diffusive and advection-diffusion problems. Ongoing optimization by storing the geometry for advection problems. SUPG and residual distribution schemes. Optimization of DG schemes for
advection-diffusion problems: storage of the geometry and use of BLAS for all the linear phases of the scheme.

**Parallel computing** Mesh redistribution, computation of overlaps with PaMPA. Collective asynchronous communications (PaMPA based). Asynchronous point-to-point communications. Tests on the Avakas cluster from MCIA, on the Mésocentre de Marseille, and on PlaFRIM. Tier-1 Turing (BlueGene). Weighted load balancing for hybrid meshes.

**Postprocessing** High order projections on lines for postprocessing; possibility of storing averaged data, such as the average flow and the Reynolds stresses.

**Quadrature formulas** Up to 11th order for lines, quadrangles, hexahedra, pyramids and prisms, up to 14th order for tetrahedra, and up to 21st order for triangles. Gauss-Lobatto type quadrature formulas for lines, triangles, quadrangles and hexahedra.

**Time iteration** Explicit Runge-Kutta schemes up to fourth order, explicit
Strong Stability Preserving schemes up to third order. Optimized CFL time schemes: SSP(2,3) and SSP(3,4). CFL-based time stepping. Implicit integration with BDF schemes from 2nd to 6th order. Newton method for stationary problems. Implicit unsteady time iterator, non consistent in time, for stationary problems. Implementation of in-house GMRES and conjugate gradient solvers based on Jacobian-free iterations.

**Validation** Poiseuille flow, Taylor-Green vortex. Laplace equation on a ring and Poiseuille flow on a ring. Volume forcing
based on wall dissipation. Turbulent channel flow.
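The accuracy claims for the quadrature formulas listed above can be checked directly: an n-point Gauss-Legendre rule on the reference line integrates polynomials up to degree 2n − 1 exactly, and fails one even degree higher. A quick check using NumPy's tabulated rule (illustrative of the property, not of AeroSol's own implementation):

```python
import numpy as np

n = 5                                       # 5 points -> exact up to degree 9
nodes, weights = np.polynomial.legendre.leggauss(n)

# Integrate x^8 over [-1, 1]; exact value is 2/9.
approx = float(np.sum(weights * nodes ** 8))
exact = 2.0 / 9.0

# One even degree higher (x^10, exact value 2/11) is no longer integrated exactly.
approx10 = float(np.sum(weights * nodes ** 10))
exact10 = 2.0 / 11.0
```

The same exactness-degree bookkeeping underlies the 11th/14th/21st order claims on the other reference elements.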
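The Strong Stability Preserving time schemes mentioned above can be illustrated with the classical three-stage, third-order Shu-Osher scheme, written as a convex combination of forward Euler sub-steps; this variant is only representative, and AeroSol's SSP(2,3) and SSP(3,4) schemes differ in stage count and CFL coefficient.

```python
import math

def ssprk3_step(f, u, dt):
    """One step of the Shu-Osher SSP(3,3) scheme: a convex combination
    of three forward Euler sub-steps, third-order accurate in time."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

def integrate(f, u0, T, n):
    """Advance u' = f(u) from u0 over [0, T] in n uniform steps."""
    u, dt = u0, T / n
    for _ in range(n):
        u = ssprk3_step(f, u, dt)
    return u

# Convergence check on u' = -u, u(0) = 1: halving dt should divide
# the error by about 2^3 = 8 for a third-order scheme.
f = lambda u: -u
err_coarse = abs(integrate(f, 1.0, 1.0, 10) - math.exp(-1.0))
err_fine = abs(integrate(f, 1.0, 1.0, 20) - math.exp(-1.0))
rate = math.log(err_coarse / err_fine, 2.0)
```

The convex-combination structure is what transfers the stability of forward Euler (under a CFL restriction) to the high order scheme.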

In 2016, the following features have been added:

Geometric multigrid methods: aggregation of the mesh based on PaMPA, definition of finite element basis on arbitrary shape cells. Definition of geometry, quadratures and numerical schemes on aggregated finite elements.

Sutherland law in the Navier-Stokes equations.

Mass matrix free implementation of discontinuous Galerkin methods.

Improvement of installation documentation. Spack based installation.

Implementation of Boussinesq-type models and shallow water discretizations with well balancing, positivity preservation, wet/dry handling, and limiters based on entropy viscosity.

Implementation of the barotropic Euler equations.

Implementation of Taylor-based basis on simplices.

Keywords: Moving bodies, Rarefied flows

Contact: Luc Mieussens

Functional Description

Cake (Cut cell Algorithm for Kinetic Equations) can simulate 2D plane rarefied flows around moving obstacles, using an immersed boundary technique with Cartesian grids. The code can simulate flows induced by temperature gradients, like the thermal creep flow. It has for instance been applied to the simulation of the Crookes radiometer.

Keywords: Image processing, structural analysis, 2D crystallography

Participants: Jean Mercat and Cécile Dobrzynski

Partners: ISM - LCPO - LCTS (UMR 5801)

Contact: Cécile Dobrzynski

Crysa is a library for studying the organization of objects placed on a hexagonal grid, thus allowing the analysis of the crystal structure/organization of an image. The library can detect regions of coherence in an image of crystals, and can assess, e.g., the good separation of objects in an experiment (new polymeric materials, plasma, ...).

Keywords: Stochastic models - Uncertainty quantification

Participants: Pietro-Marco Congedo

Contact: Pietro-Marco Congedo

Scientific Description

An anchored analysis of variance (ANOVA) method is proposed to decompose the statistical moments. Compared to the standard ANOVA with mutually orthogonal component functions, the anchored ANOVA, with an arbitrary choice of the anchor point, loses orthogonality if the same measure is employed. However, an advantage of the anchored ANOVA is the considerably reduced number of deterministic solver computations, which makes the uncertainty quantification of real engineering problems much easier. Differently from existing methods, the covariance decomposition of the output variance is used in this work to take into account the interactions between non-orthogonal components, yielding an exact variance expansion and thus, with a suitable numerical integration method, a convergent strategy. This convergence is verified on academic tests. In particular, the sensitivity of existing methods to the choice of the anchor point is analyzed via the Ishigami case, and we point out that the covariance decomposition is not affected by this issue. Moreover, with a truncated anchored ANOVA expansion, numerical results prove that the proposed approach is less sensitive to the anchor point. The covariance-based sensitivity indices (SI) are also used and compared to the variance-based SI. Furthermore, we emphasize that the covariance decomposition can be generalized in a straightforward way to decompose higher-order moments. For academic problems, results show that the method converges to the exact solution for both the skewness and the kurtosis. The proposed method can indeed be applied to a large number of engineering problems.

Functional Description

The Cut-ANOVA code (Fortran 90, MPI + OpenMP) is devoted to the stochastic analysis of numerical simulations. The method implemented is based on the spectral expansion of the "anchored ANOVA", allowing a covariance-based sensitivity analysis. Compared to the conventional Sobol method, Cut-ANOVA provides three sensitivity indices instead of one, which allows a better analysis of the reliability of the numerical prediction. In addition, Cut-ANOVA is able to compute higher order statistical moments such as the skewness (3rd order moment) and the kurtosis (4th order moment). Several dimension reduction techniques have also been implemented to reduce the computational cost. Finally, thanks to the innovative method implemented in the Cut-ANOVA code, one can obtain a similar accuracy for the stochastic quantities using considerably fewer deterministic model evaluations than with the classical Monte Carlo method.
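For orientation, the conventional Sobol approach against which Cut-ANOVA is compared can be sketched with a pick-freeze Monte Carlo estimator on the Ishigami function mentioned above; the sample size and estimator variant below are illustrative, and this is not the Cut-ANOVA algorithm itself.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function; x has shape (N, 3), inputs uniform on [-pi, pi]."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

def first_order_sobol(f, dim, n, rng):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol indices:
    S_i = E[f(B) (f(AB_i) - f(A))] / Var(f)."""
    A = rng.uniform(-np.pi, np.pi, (n, dim))
    B = rng.uniform(-np.pi, np.pi, (n, dim))
    fA, fB = f(A), f(B)
    var = np.var(fA)
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs but i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

rng = np.random.default_rng(0)
S = first_order_sobol(ishigami, 3, 50_000, rng)
```

For a = 7, b = 0.1 the analytic first-order indices are about 0.314, 0.442 and 0 respectively; the Monte Carlo cost of reaching even this modest accuracy is what motivates the cheaper anchored-ANOVA expansion.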

Keywords: Mesh adaptation, mesh deformation, elasticity, Laplacian mesh equation

Participants: Cécile Dobrzynski, Mario Ricchiuto, Leo Nouveau and Luca Arpaia

Contact: Cécile Dobrzynski

Functional Description

FMG is a library deforming an input/reference simplicial mesh w.r.t. a given smoothness error monitor (function gradient or Hessian), metric field,
or given mesh size distribution. Displacements are computed by solving an elliptic Laplacian-type equation with a continuous finite element method.
The library returns an adapted mesh with a correspondingly projected solution, obtained by either a second order projection or an ALE finite element remap.
This year a new semi-linear elasticity formulation has been implemented, involving a constant coefficient PDE with a nonlinear *force* accounting for the smoothness of the target
function. The advantage of this approach is that the non-linearity does not affect the elastic differential operator, leading to a deformation
governed by a time-invariant matrix even in unsteady simulations. Other developments currently implemented in SLOWS are being imported into FMG.
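The mesh-size-distribution input can be caricatured in 1D: given a monitor function (here a hypothetical Gaussian bump marking a feature at x = 0.5), equidistributing its integral concentrates nodes where the monitor is large. FMG's actual elliptic/elasticity formulation differs; this sketch only illustrates the target size distribution.

```python
import numpy as np

def equidistribute(monitor, n_nodes, n_fine=2000):
    """Place n_nodes in [0, 1] so that each cell holds an equal share of
    the monitor integral (de Boor-style equidistribution)."""
    xf = np.linspace(0.0, 1.0, n_fine)
    m = monitor(xf)
    # Cumulative trapezoidal integral of the monitor, normalized to [0, 1].
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(xf))))
    cum /= cum[-1]
    # Invert the cumulative map at equally spaced target values.
    targets = np.linspace(0.0, 1.0, n_nodes)
    return np.interp(targets, cum, xf)

# Hypothetical smoothness monitor: large near a feature at x = 0.5.
monitor = lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)
nodes = equidistribute(monitor, 41)
spacing = np.diff(nodes)
```

Cells shrink by roughly the local monitor ratio near the bump, which is the one-dimensional analogue of the size distributions FMG realizes by mesh deformation.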

Keywords: Mesh - Anisotropic - Mesh adaptation

Participants: Cécile Dobrzynski, Pascal Frey, Charles Dapogny and Algiane Froehly

Partners: CNRS - IPB - Université de Bordeaux - UPMC

Contact: Cécile Dobrzynski

Scientific Description

Mmg3d is an open source software for tetrahedral remeshing. It performs local mesh modifications, iteratively modifying the mesh until the user prescriptions are satisfied.

Mmg3d can be used from the command line or through the library version (C, C++ and Fortran APIs). It is a new version of the MMG3D4 software. It remeshes both the volume and the surface mesh of a tetrahedral mesh, and performs isotropic and anisotropic mesh adaptation as well as the isovalue discretization of a level-set function.

Mmg3d allows controlling the boundary approximation: the "ideal" geometry is reconstructed from the piecewise linear mesh using cubic Bézier triangular patches, and the surface mesh is modified so as to respect a maximal Hausdorff distance between the ideal geometry and the mesh.

Inside the volume, the software performs local mesh modifications (such as edge swaps, pattern splits, and isotropic and anisotropic Delaunay insertion).

Functional Description

Mmg3d is one of the software components of the Mmg platform. It is dedicated to the modification of 3D volume meshes: it performs the adaptation and optimization of a tetrahedral mesh and can discretize an isovalue.

Mmg3d performs local mesh modifications. The mesh is iteratively modified until the user's prescriptions are satisfied.

Keywords: Mesh - Mesh generation - Anisotropic - Mesh adaptation - Isovalue discretization

Participants: Cécile Dobrzynski, Charles Dapogny, Pascal Frey and Algiane Froehly

Partners: CNRS - IPB - Université de Bordeaux - UPMC

Contact: Cécile Dobrzynski

Scientific Description

The Mmg platform gathers open source software for two-dimensional, surface and volume remeshing. The platform software performs local mesh modifications: the mesh is iteratively modified until the user's prescriptions are satisfied.

The three software components can be used from the command line or through the library version (C, C++ and Fortran API): Mmg2d performs mesh generation and isotropic and anisotropic mesh adaptation; Mmgs provides isotropic and anisotropic mesh adaptation for 3D surface meshes; Mmg3d is a new version of the MMG3D4 software, which remeshes both the volume and surface of a tetrahedral mesh and performs isotropic and anisotropic mesh adaptation as well as isovalue discretization of a level-set function.

The platform software makes it possible to control the boundary approximation: the "ideal" geometry is reconstructed from the piecewise linear mesh using cubic Bézier triangular patches, and the surface mesh is modified to respect a maximal Hausdorff distance between the ideal geometry and the mesh.

Inside the volume, the software performs local mesh modifications (such as edge swaps, pattern splits, and isotropic and anisotropic Delaunay insertion).

Functional Description

The Mmg platform gathers open source software for two-dimensional, surface and volume remeshing. It provides three applications: 1) mmg2d: generation, adaptation and optimization of a triangular mesh; 2) mmgs: adaptation and optimization of a surface triangulation representing a piecewise linear approximation of an underlying surface geometry; 3) mmg3d: adaptation and optimization of a tetrahedral mesh and isovalue discretization.

The platform software performs local mesh modifications. The mesh is iteratively modified until the user's prescriptions are satisfied.

Participants: Cécile Dobrzynski and Algiane Froehly

Partners: CNRS - IPB - Université de Bordeaux

Contact: Cécile Dobrzynski

Keywords: Mesh - Curved mesh - Tetrahedral mesh

Functional Description

NOMESH is a software allowing the generation of third-order curved simplicial meshes. Starting from a "classical" mesh with straight elements composed of triangles and/or tetrahedra, we are able to curve the boundary mesh. Starting from a mesh with some curved elements, we can verify that the mesh is valid, i.e. that there are no crossing elements and that all Jacobians are positive. If the curved mesh is not valid, we modify it using linear elasticity equations until a valid curved mesh is obtained.
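Jacobian positivity for a curved element can be checked by sampling. Below is a small sketch for a quadratic (P2) triangle, written for illustration only (NOMESH handles third-order elements and corrects invalid ones via elasticity): the determinant of the mapping Jacobian is sampled over the reference element, and the element is declared valid when all samples are positive.

```python
import numpy as np

def p2_jacobians(nodes, k=10):
    """Sample the Jacobian determinant of a quadratic (P2) triangle over the
    reference element. nodes: 6x2 array [v1, v2, v3, mid12, mid23, mid31].
    The curved element is valid when every sampled determinant is positive."""
    dets = []
    for i in range(k + 1):
        for j in range(k + 1 - i):
            xi, eta = i / k, j / k
            l1 = 1.0 - xi - eta
            # derivatives of the six P2 basis functions w.r.t. (xi, eta)
            dxi = np.array([1.0 - 4.0 * l1, 4.0 * xi - 1.0, 0.0,
                            4.0 * (l1 - xi), 4.0 * eta, -4.0 * eta])
            deta = np.array([1.0 - 4.0 * l1, 0.0, 4.0 * eta - 1.0,
                             -4.0 * xi, 4.0 * xi, 4.0 * (l1 - eta)])
            J = np.array([dxi @ nodes, deta @ nodes])   # 2x2 mapping Jacobian
            dets.append(np.linalg.det(J))
    return np.array(dets)

straight = np.array([[0., 0.], [1., 0.], [0., 1.],
                     [.5, 0.], [.5, .5], [0., .5]])
bent = straight.copy()
bent[3] = [2.0, 0.0]            # edge midpoint pushed far out: invalid element
```

For the straight-sided element the mapping is the identity and every sampled determinant equals one; the distorted element exhibits negative determinants, which is what triggers the elasticity-based correction step.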

Keywords: Free surface flows - Unstructured meshes - Shallow water equations - Boussinesq equations

Participants: Luca Arpaia, Andrea Filippini, Maria Kazolea, Mario Ricchiuto and Nikolaos Pattakos

Contact: Mario Ricchiuto

Functional Description

SLOWS (Shallow-water fLOWS) is a C platform allowing the simulation of free surface shallow water flows with friction. It can be used to simulate near shore hydrodynamics, wave transformation processes, etc. The kernel of the code is a shallow water solver based on second order residual distribution or second and third order finite volume schemes. Three different approaches are available to march in time, based on conditionally depth-positivity preserving implicit schemes, on conditionally depth-positivity preserving genuinely explicit discretizations, or on an unconditionally depth-positivity preserving space-time approach. Newton and frozen Newton loops are used to solve the implicit nonlinear equations, and the sparse linear systems arising in the discretization are solved with external linear algebra libraries. This year several enhancements have been implemented:

a correction of both the residual distribution and finite volume methods to solve the shallow water equations in spherical (or Mercator) curvilinear coordinates;

mass-conserving mesh movement to adapt an initial grid to wet-dry interfaces as well as to other physical features of the flow;

a new library has been developed to enhance the shallow water equations. This library computes an algebraic source term by inverting an elliptic (grad-div type) PDE. The addition of this term to the shallow water version of SLOWS allows to recover a fully nonlinear weakly dispersive Green-Naghdi solver. The solution of the elliptic PDE is performed with a classical Galerkin FEM approach, and the MUMPS library is used to invert the resulting matrix;

initial optimization and OpenMP parallelization of the shallow water kernel.

SLOWS is our main simulation tool in both the TANDEM and Tides projects.
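A minimal illustration of a depth-positivity preserving discretization is given below: a one-dimensional first-order scheme with a Rusanov flux, not the actual SLOWS kernel (which uses residual distribution and higher order finite volume schemes). Under the CFL restriction, the update of the depth is a convex combination of non-negative quantities, so the water depth stays positive.

```python
import numpy as np

g = 9.81

def rusanov_step(h, hu, dx, dt):
    """One first-order finite volume step for the 1D shallow water equations
    with the Rusanov flux; positivity of h holds under the CFL restriction."""
    u = hu / h
    c = np.abs(u) + np.sqrt(g * h)                 # characteristic speed per cell
    fh, fq = hu, hu * u + 0.5 * g * h**2           # physical fluxes
    a = np.maximum(c[:-1], c[1:])                  # face wave speed estimate
    Fh = 0.5 * (fh[:-1] + fh[1:]) - 0.5 * a * (h[1:] - h[:-1])
    Fq = 0.5 * (fq[:-1] + fq[1:]) - 0.5 * a * (hu[1:] - hu[:-1])
    h[1:-1] -= dt / dx * (Fh[1:] - Fh[:-1])
    hu[1:-1] -= dt / dx * (Fq[1:] - Fq[:-1])
    return h, hu

# Dam break onto a shallow downstream layer (illustrative setup)
n, dx = 200, 0.01
h = np.where(np.arange(n) < n // 2, 1.0, 1e-3)
hu = np.zeros(n)
for _ in range(100):
    h, hu = rusanov_step(h, hu, dx, dt=0.2 * dx / np.sqrt(g))
```

The interior update is conservative, so total mass is preserved as long as the disturbance has not reached the boundaries.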

Adaptive sparse polynomial dimensional decomposition for global sensitivity analysis

Keywords: Stochastic models - Uncertainty quantification

Participants: Pietro-Marco Congedo

Contact: Pietro-Marco Congedo

Scientific Description

The polynomial dimensional decomposition (PDD) is employed in this code for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approach, PDD is able to provide a simpler and more direct evaluation of the Sobol sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this code proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this code: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.

Functional Description

This code allows an efficient meta-modeling for a complex numerical system featuring a moderate-to-large number of uncertain parameters. This innovative approach involves polynomial representations combined with the Analysis of Variance decomposition, with the objective to quantify the numerical output uncertainty and its sensitivity upon the variability of input parameters.
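The overall idea can be sketched in a few lines. The toy version below is illustrative only: it keeps first-order ANOVA terms with a fixed Legendre basis, and uses a simple variance threshold in place of the full stepwise regression. A sparse polynomial surrogate is fitted by least squares, and the first-order Sobol indices are read off the retained coefficients.

```python
import numpy as np

# Orthonormal Legendre polynomials on [-1, 1] with the uniform measure
P = [lambda t: np.ones_like(t),
     lambda t: np.sqrt(3.0) * t,
     lambda t: np.sqrt(5.0) * 0.5 * (3.0 * t**2 - 1.0)]

def sparse_pdd_sobol(f, dim, degree=2, nsamp=400, tol=1e-8, seed=0):
    """Least-squares fit of a first-order (ANOVA-truncated) polynomial
    surrogate; only terms whose variance contribution exceeds tol are kept,
    and first-order Sobol indices are read off the surviving coefficients."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (nsamp, dim))
    y = f(x)
    cols, owner = [np.ones(nsamp)], [None]
    for i in range(dim):                            # univariate ANOVA terms
        for d in range(1, degree + 1):
            cols.append(P[d](x[:, i]))
            owner.append(i)
    c, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    keep = [(ci, o) for ci, o in zip(c, owner) if o is not None and ci**2 > tol]
    var = sum(ci**2 for ci, _ in keep)              # variance of retained terms
    sobol = np.zeros(dim)
    for ci, o in keep:
        sobol[o] += ci**2 / var
    return sobol

s = sparse_pdd_sobol(lambda x: x[:, 0] + x[:, 1]**2, dim=3)
```

For the additive test function above the exact first-order indices are 15/19, 4/19 and 0, which the sparse fit recovers while discarding the inactive third variable.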

Keywords: Free surface flows - Boussinesq equations - Weakly nonlinear models - Unstructured grids

Participants: Maria Kazolea, Argiris Delis and Ioannis Nikolos

Contact: Maria Kazolea

Functional Description

Fortran platform for the study of near shore processes. TUCWave uses a high-order well-balanced unstructured finite volume (FV) scheme on triangular meshes for modeling weakly nonlinear and weakly dispersive water waves over varying bathymetries, as described by the 2D depth-integrated extended Boussinesq equations of Nwogu (1993), rewritten in conservation law form. The FV scheme numerically solves the conservative form of the equations following the median dual node-centered approach, for both the advective and dispersive parts of the equations. The code follows an efficient edge-based technique. For the advective fluxes, the scheme utilizes an approximate Riemann solver along with well-balanced up-winding of the topography source term. Higher order accuracy in space and time is achieved through a MUSCL-type reconstruction technique and a strong stability preserving explicit Runge-Kutta time stepping. Special attention is given to the accurate numerical treatment of moving wet/dry fronts and boundary conditions. Furthermore, the model is applied to several examples of wave propagation over variable topographies and the computed solutions are compared to experimental data. TUCWave is used in the TANDEM project to provide reference solutions with a weakly nonlinear and dispersive model.
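The MUSCL reconstruction at the heart of such schemes can be sketched in one dimension (illustrative code, not TUCWave's edge-based implementation): cell averages are extrapolated linearly to the faces with a minmod-limited slope, which is second order on smooth data and introduces no new extrema near discontinuities.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the least steep of two slopes when they agree in sign."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_states(u):
    """Limited linear extrapolation of cell averages to the two sides of each
    interior face (second order in smooth regions, monotone near jumps)."""
    s = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])   # limited slope per inner cell
    uc = u[1:-1]
    left = uc[:-1] + 0.5 * s[:-1]                   # trace from the left cell
    right = uc[1:] - 0.5 * s[1:]                    # trace from the right cell
    return left, right

u_lin = np.linspace(0.0, 1.0, 6)                    # smooth (linear) data
u_jump = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # discontinuous data
```

On linear data the left and right traces coincide (exact reconstruction); across the jump the limiter returns zero slopes, so the traces stay within the data range.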

Participants: Héloise Beaugendre, Cécile Dobrzynski, Léo Nouveau, Mario Ricchiuto, Quentin Viville

Corresponding member: Héloise Beaugendre

Our work on high order unstructured discretizations this year has pursued three main avenues:

We have extended the team's previous work on the consistent residual based approximation
of viscous flow equations to the framework of Immersed Boundary Methods (IBM).
This is an increasingly popular approach in Computational Fluid Dynamics, as it simplifies the mesh generation problem.
In our work, we consider a technique based on the addition of a penalty term to the Navier-Stokes equations to account for the wall boundary conditions.
To adapt the residual distribution method to the IBM, we developed a new formulation based on a Strang splitting approach in time.
This approach couples, in a fully consistent manner, an implicit asymptotically exact integration procedure for the penalization ODE
with the explicit residual distribution discretization of the Navier-Stokes equations, based on the method proposed in . The ODE integrator provides an operator which is exact up to orders
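The structure of the splitting can be sketched on a scalar model problem (hypothetical coefficients; not the team's Navier-Stokes solver): the penalization ODE du/dt = -(chi/eta)(u - u_wall) is integrated exactly, so the stiffness introduced by a small penalty parameter eta imposes no time-step restriction.

```python
import numpy as np

def penalty_exact(u, u_wall, chi, eta, dt):
    """Exact solution of du/dt = -(chi/eta)(u - u_wall) over a step dt: stiff
    for small eta, yet integrated with no time-step restriction."""
    return u_wall + (u - u_wall) * np.exp(-chi * dt / eta)

def strang_step(u, transport, u_wall, chi, eta, dt):
    """Strang splitting: half penalty step, full transport step, half penalty step."""
    u = penalty_exact(u, u_wall, chi, eta, 0.5 * dt)
    u = transport(u, dt)
    return penalty_exact(u, u_wall, chi, eta, 0.5 * dt)

shift = lambda v, dt: v + dt                        # dummy transport operator
inside = strang_step(np.array([5.0]), shift, 0.0, 1.0, 1e-8, 0.1)   # chi=1: wall enforced
outside = strang_step(np.array([5.0]), shift, 0.0, 0.0, 1e-8, 0.1)  # chi=0: pure transport
```

Inside the body (chi = 1) the state relaxes to the wall value regardless of the time step; outside (chi = 0) the penalty steps reduce to the identity and only the transport operator acts.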

Another research axis consists in proposing a novel approach that allows to use p-adaptation with continuous finite elements. Under certain conditions, primarily the use of a residual distribution scheme, it is possible to relax the continuity constraint imposed on the approximate solution, while still retaining the advantages of a method using continuous finite elements. The theoretical material, the complete numerical method and practical results show, as a proof of concept, that p-adaptation is possible with continuous finite elements. This year, we extended the p-adaptation method to the Navier-Stokes equations and coupled it with the immersed boundary method.

We have studied the high order approximation of problems with dispersion and suggested a route to construct high order methods (up to order 4) that achieve the same accuracy for the solution and for its first and second order derivatives. Initial validation of the proposed approach has been shown for the time dependent KdV equations , .

Participants: Luca Arpaia, Cécile Dobrzynski, Ghina El Jannoun, Léo Nouveau, Mario Ricchiuto

Corresponding member: Cécile Dobrzynski

This year several new algorithmic improvements have been obtained which will enhance our meshing tools:

We have enhanced our work on

Participants: Andrea Cortesi, Pietro Marco Congedo, Nassim Razaaly, Francois Sanson

Corresponding member: Pietro Marco Congedo

We have developed an efficient sparse polynomial decomposition for sensitivity analysis and for building a surrogate in problems featuring a large number of parameters. The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

Concerning sensitivity analysis, we illustrate how third and fourth-order moments, i.e. skewness and kurtosis respectively, can be decomposed mimicking the ANOVA approach. It is also shown how this decomposition is correlated to a Polynomial Chaos (PC) expansion, leading to a simple strategy to compute each term. New sensitivity indices, based on the contributions to the skewness and kurtosis, are proposed. The outcome of the proposed analysis is depicted by considering several test functions. Moreover, the ranking of the sensitivity indices is shown to vary with the order of the statistical moment considered. Furthermore, the problem of formulating a truncated polynomial representation of the original function is treated. Both the reduction of the number of dimensions and the reduction of the order of interaction between parameters are considered. In both cases, the impact of the reduction is assessed in terms of statistics, namely the probability density function. Feasibility of the proposed analysis in a real case is then demonstrated by presenting the sensitivity analysis of the performance of a turbine cascade in Organic Rankine Cycles (ORCs), in the presence of complex thermodynamic models and multiple sources of uncertainty.

Moreover, we have developed a new framework for performing robust design optimization: a strategy to deal with the error affecting the objective functions in uncertainty-based optimization. We refer to problems where the objective functions are the statistics of a quantity of interest computed by an uncertainty quantification technique that propagates some uncertainties of the input variables through the system under consideration. In real problems, the statistics are computed by a numerical method and therefore they are affected by a certain level of error, depending on the chosen accuracy. The errors on the objective functions can be interpreted with the abstraction of a bounding box around the nominal estimation in the objective function space. In addition, in some cases the uncertainty quantification methods providing the objective functions also supply the possibility of adaptive refinement to reduce the error bounding box. The novel method relies on the exchange of information between the outer loop based on the optimization algorithm and the inner uncertainty quantification loop. In particular, in the inner uncertainty quantification loop, a control is performed to decide whether a refinement of the bounding box for the current design is appropriate or not. In single-objective problems, the current bounding box is compared to that of the current optimal design. In multi-objective problems, the decision is based on the comparison of the error bounding box of the current design with the current Pareto front. With this strategy, fewer computations are made for clearly dominated solutions and an accurate estimate of the objective functions is provided for the interesting, non-dominated solutions. The results presented in this work prove that the proposed method improves the efficiency of the global loop, while preserving the accuracy of the final Pareto front.
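The refinement decision can be sketched as follows, with made-up numbers and a simplified criterion (the actual method compares full error boxes): a design's bounding box needs no refinement when even its most optimistic corner is Pareto-dominated by a point of the current front.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is at least as good everywhere
    and strictly better in at least one objective."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def needs_refinement(box, front):
    """box: per-objective (lo, hi) error bounds of the current design.
    Refinement is skipped when even the optimistic corner of the bounding
    box is dominated by a member of the current Pareto front."""
    best_corner = [lo for lo, hi in box]
    return not any(dominates(p, best_corner) for p in front)

front = [(1.0, 3.0), (2.0, 1.0)]        # current Pareto front (two objectives)
overlapping = needs_refinement([(0.9, 1.1), (2.5, 3.5)], front)  # worth refining
dominated = needs_refinement([(1.5, 1.7), (3.2, 3.6)], front)    # clearly dominated
```

This is the mechanism that saves computations on clearly dominated designs while spending accuracy only on candidates that may enter the Pareto front.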

Concerning semi-intrusive methods, a novel multiresolution framework, namely the Truncate and Encode (TE) approach, is generalized and extended to take into account uncertainty in partial differential equations (PDEs). Innovative ingredients are an algorithm permitting to recover the multiresolution representation without requiring the fully resolved solution, the possibility to treat an arbitrary form of probability density function, and the use of high-order (even non-linear, i.e. data-dependent) reconstruction in the stochastic space. Moreover, the spatial-TE method is introduced, which is a weakly intrusive scheme for uncertainty quantification (UQ) that couples the physical and stochastic spaces while minimizing the computational cost for PDEs. The proposed scheme is particularly attractive when treating moving discontinuities (such as shock waves in compressible flows), even if they appear during the simulation, as is common in unsteady aerodynamics applications. The proposed method is very flexible since it can easily be coupled with different deterministic schemes, even those with high-resolution features. Flexibility and performance of the present method are demonstrated on various numerical test cases (algebraic functions and ordinary differential equations), including partial differential equations, both linear and non-linear, in the presence of randomness.
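The truncate-and-encode idea can be illustrated with a one-level Harten-style multiresolution transform (a toy version; the actual method is multi-level and coupled to the PDE solver): values are split into coarse averages and detail coefficients, and only details above a threshold are encoded.

```python
import numpy as np

def truncate_encode(u, eps):
    """One-level Harten multiresolution: coarse pairwise averages plus the
    detail coefficients, with details below eps truncated to zero."""
    coarse = 0.5 * (u[0::2] + u[1::2])
    detail = u[0::2] - coarse                       # prediction error per pair
    mask = np.abs(detail) > eps
    return coarse, np.where(mask, detail, 0.0), int(mask.sum())

def decode(coarse, detail):
    """Inverse transform from coarse averages and (truncated) details."""
    u = np.empty(2 * coarse.size)
    u[0::2] = coarse + detail
    u[1::2] = coarse - detail
    return u

u = np.where(np.arange(64) < 31, 0.0, 1.0)          # jump inside the pair (30, 31)
coarse, detail, kept = truncate_encode(u, eps=0.01)
```

For a discontinuous signal only the detail coefficient straddling the jump survives the threshold, while smooth data is compressed to its coarse representation with a reconstruction error bounded by the threshold; this is what makes the approach attractive for stochastic discontinuities.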

We applied part of this method to a problem associated with atmospheric reentry. An accurate determination of the catalytic property of thermal protection materials is crucial to design reusable atmospheric entry vehicles. This property is determined by combining experimental measurements and simulations of the reactive boundary layer near the material surface. The inductively-driven Plasmatron facility at the von Karman Institute for Fluid Dynamics provides a test environment to analyze gas-surface interactions under effective hypersonic conditions. In this study, we develop an uncertainty quantification methodology to rebuild values of the gas enthalpy and material catalytic property from Plasmatron experiments. A non-intrusive spectral projection method is coupled with an in-house boundary-layer solver to propagate uncertainties and provide error bars on the rebuilt gas enthalpy and material catalytic property, as well as to determine which uncertainties have the largest contribution to the outputs of the experiments. We show that the uncertainties computed with this methodology are significantly reduced compared to those determined using the more conservative engineering approach adopted in the analysis of previous experimental campaigns.
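Non-intrusive spectral projection can be sketched in a few lines for a single Gaussian uncertainty (illustrative only; the actual study couples the projection to a boundary-layer solver): polynomial chaos coefficients are obtained by Gauss-Hermite quadrature, and the output moments follow directly from the coefficients.

```python
import numpy as np
from math import factorial

def nisp_coefficients(model, order=4, nquad=8):
    """Polynomial chaos coefficients of model(xi), xi standard normal, by
    non-intrusive spectral projection with Gauss-Hermite quadrature
    (probabilists' Hermite polynomials He_k, with E[He_k^2] = k!)."""
    x, w = np.polynomial.hermite_e.hermegauss(nquad)
    w = w / np.sqrt(2.0 * np.pi)                    # weights now sum to one
    y = model(x)
    return np.array([
        np.sum(w * y * np.polynomial.hermite_e.hermeval(x, [0.0] * k + [1.0]))
        / factorial(k)
        for k in range(order + 1)])

c = nisp_coefficients(lambda t: t**2 + 2.0 * t + 3.0)   # = He_2 + 2 He_1 + 4
```

The zeroth coefficient is the output mean, and the variance is the weighted sum of the squared higher coefficients; this is how error bars on the rebuilt quantities are obtained without modifying the deterministic solver.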

Participants: Luca Arpaia, Stevan Bellec, Mathieu Collin, Sebastien De Brye, Andrea Filippini, Maria Kazolea, Luc Mieussens, and Mario Ricchiuto

Corresponding member: Mario Ricchiuto

We have introduced a new systematic method to obtain discrete numerical models for incompressible free-surface flows.
Our approach allows to recover discrete asymptotic equations from a semi-discretized form (keeping the vertical

We also continue our study of wave breaking techniques on unstructured meshes . In particular, we evaluate the coupling of both a weakly and a fully non-linear Boussinesq system with a turbulence model. We reformulate an evolution model for the turbulent kinetic energy, initially proposed by Nwogu , and evaluate its capability to provide sufficient dissipation in breaking regions. We also compare this dissipation to the one introduced by the numerical discretization. A research paper on the topic is under preparation. Furthermore, we studied and tested the application and validation of the TUCWave code on the transformation, breaking and run-up of irregular waves. It is the first time that an unstructured high-resolution FV numerical solver for the 2D extended Boussinesq equations of Nwogu is tested on the generation and propagation of irregular waves. A research paper is under preparation.

The tools developed have also been used intensively in funded research programs. Within the TANDEM project, several benchmarks relevant to tsunami modelling have been performed and several common publications with the project partners are submitted and/or in preparation , . We also used our code SLOWS to study the conditions for tidal bore formation in convergent alluvial estuaries . A new set of dimensionless parameters has been introduced to describe the problem, and the code SLOWS has been used to explore the space of these parameters, determining a critical curve that characterizes an estuary as "bore forming" or not. Surprising physical behaviours, in terms of dissipation and nonlinearity of the tides, have been highlighted.

Finally, in collaboration with F. Veron (University of Delaware at Newark, USA), L. Mieussens has developed a model to describe the effect of rain falling on water waves . This model is based on a kinetic description of rain droplets that is used to compute the induced pressure on a water wave. This makes it possible to estimate the dissipation (or amplification) of the wave due to rainy conditions.

Participants: Umberto Bosi, Mario Ricchiuto

Corresponding member: Mario Ricchiuto

We have developed a prototype spectral element solver for a coupled set of differential equations modelling wave propagation (the so-called outer domain)
and the submerged flow under a floating body (inner domain). Both systems of equations are depth-averaged (Boussinesq-type) systems involving dispersive terms.
They are further coupled to a force balance providing a (system of) ODE(s) for the floater. This model constitutes an intermediate fidelity approximation
of the hydrodynamics of a wave energy converter. Unlike the industrial state of the art, it is a (fully) nonlinear model.
However, its cost is extremely low compared to full three-dimensional CFD analyses, thanks to the dimensional reduction brought by the depth-averaged modelling.
Last year we showed the potential of this approach to predict the hydrodynamics of a floater in a simplified case
, (journal version to appear in *J. Ocean Eng. and Marine Energy*).
This year we have further studied the issue of the coupling between domains with different PDE models (in our case the inner and outer domains),
and suggested an approach (based on a first order reformulation) allowing to couple domains with different equations, with or without dispersive effects on either side.
This work is done in the framework of the MIDWEST project funded by the EU OCEANEranet call.
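The floater force balance can be sketched as a scalar ODE (a linear stand-in with made-up coefficients; the actual model couples the floater to the nonlinear depth-averaged solver):

```python
import numpy as np

def heave_amplitude(m, k, c, amp, omega, dt=1e-3, T=60.0):
    """Steady-state heave amplitude of the linear force balance
    m z'' + c z' + k z = amp*cos(omega t), by semi-implicit Euler; the forcing
    stands in for the wave load the depth-averaged model would supply."""
    z, w, history = 0.0, 0.0, []
    for i in range(int(T / dt)):
        w += dt * (amp * np.cos(omega * i * dt) - k * z - c * w) / m
        z += dt * w
        history.append(z)
    return np.max(np.abs(history[-10000:]))         # last 10 s spans a full period

a_num = heave_amplitude(m=1.0, k=4.0, c=0.5, amp=1.0, omega=1.0)
a_ref = 1.0 / np.sqrt((4.0 - 1.0)**2 + 0.5**2)      # linear theory amplitude
```

In the linear setting the computed steady-state amplitude matches the analytic transfer function, which is a convenient sanity check before the force balance is fed by the fully nonlinear wave model.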

Several contracts have been realized:

SAFRAN-HERAKLES, 20 Keuros for the development of a code for computing low-probability.

CNES, 10 KEuros, for the technological transfer of Sparse-PDD code.

CEA 2015 10237, 60 Keuro for the supervision of the post-doc of Maxence Clayes by P.M. Congedo

CEA 16-CIFRE PELUCHON, 20 Keuro for the supervision by L. Mieussens of the PhD of Simon Peluchon at the CEA-CESTA (1/1/15 - 31/12/17)

BGS IT&E (2016-2018), 20 Keuro for consulting by M. Ricchiuto on the implementation of some of the technology of the code SLOWS in their in-house model.

**CRA 15/ THESE SANSON 10199**

Thesis co-funded by Airbus Safran Launchers and the Aquitaine Region during the period 2016-2019

Topic : uncertainty propagation approach in a system of codes

**VIPER Project**

Thesis co-funded by the Aquitaine Region and Inria. PhD student to be recruited during the period 2017-2020

Topic : robust design of the EVE engine in collaboration with the SME EXOES.

Title: TIDES: Robust simulation tools for non-hydrostatic free surface flows

Type: Appel à Projets Recherche du Conseil de la Région Aquitaine

Coordinator: M. Ricchiuto

Other partners: UMR EPOC (P. Bonneton)

Abstract: This project proposes to combine modern high order adaptive finite elements techniques with state of the art nonlinear and non-hydrostatic models for free surface waves to provide an accurate tool for the simulation of near shore hydrodynamics, with application to the study and prediction of tidal bores. The Garonne river will be used as a case study. This project co-funds (50%) the PhD of A. Filippini.

Title: Maillages adaptatifs pour les interfaces instationnaires avec deformations, etirements, courbures.

Type: ANR

Duration: 48 months

Starting date : 1st Oct 2013

Coordinator: Dervieux Alain (Inria Sophia)

Abstract: Mesh adaptive numerical methods allow computations which are otherwise impossible due to the computational resources required. We address in the proposed research several well identified main obstacles in order to maintain a high-order convergence for unsteady Computational Mechanics involving moving interfaces separating and coupling continuous media. A priori and a posteriori error analyses of Partial Differential Equations on static and moving meshes will be developed from interpolation error, goal-oriented error, and norm-oriented error. From the minimization of the chosen error, an optimal unsteady metric is defined. The optimal metric is then converted into a sequence of anisotropic unstructured adapted meshes by means of mesh regeneration, deformation, high stretching, and curvature. A particular effort will be devoted to building an accurate representation of physical phenomena involving curved boundaries and interfaces. In association with curved boundaries, part of the studies will address third-order accurate mesh adaptation. Mesh optimality produces a nonlinear system coupling the physical fields (velocities, etc.) and the geometrical ones (unsteady metric, including mesh motion). Parallel solution algorithms for the implicit coupling of these different fields will be developed. Addressing these issues efficiently is a compulsory condition for the simulation of a number of challenging physical phenomena related to industrial unsolved or insufficiently solved problems. Non-trivial benchmark tests will be shared by consortium partners and by external attendees to workshops organized by the consortium. The various advances will be used by SME partners and proposed on the software market.

Title: Tsunamis in the Atlantic and the English ChaNnel: Definition of the Effects through numerical Modeling (TANDEM)

Type: PIA - RSNR (Investissement d'Avenir, “Recherches en matière de Sûreté Nucléaire et Radioprotection”)

Duration: 48 months

Starting date : 1st Jan 2014

Coordinator: H. Hebert (CEA)

Abstract: TANDEM is a project dedicated to the appraisal of coastal effects due to tsunami waves on the French coastlines, with a special focus on the Atlantic and Channel coastlines, where French civil nuclear facilities have been operated for about 30 years. As identified in the RSNR call, this project aims at drawing conclusions from the 2011 catastrophic tsunami, in the sense that it will make it possible, together with a Japanese research partner, to design, adapt and check numerical methods of tsunami hazard assessment against the outstanding observation database of the 2011 tsunami. These validated methods will then be applied to define, as accurately as possible, the tsunami hazard for the French Atlantic and Channel coastlines, in order to provide guidance for risk assessment on the nuclear facilities.

Title : Reactive fluid flows with interface : macroscopic models and application to self-healing materials

Type : Project Bordeaux 1

Duration : 36 months

Starting : September 2014

Coordinator : M. Colin

Abstract : Because of their high strength and low weight, ceramic-matrix composite materials (CMCs) are the focus of active research, for aerospace and energy applications involving high temperatures. Though based on brittle ceramic components, these composites are not brittle due to the use of a fiber/matrix interphase that manages to preserve the fibers from cracks appearing in the matrix. The lifetime-determining part of the material is the fibers, which are sensitive to oxidation; when the composite is in use, it contains cracks that provide a path for oxidation. The obtained lifetimes can be of the order of hundreds of thousands of hours. These time spans make most experimental investigations impractical. In this direction, the aim of this project is to furnish predictions based on computer models that have to take into account: 1) the multidimensional topology of the composite made up of a woven ceramic fabric; 2) the complex chemistry taking place in the material cracks; 3) the flow of the healing oxide in the material cracks.

Title : Modélisation d'un système de dégivrage thermique

Type : Project University of Bordeaux

Duration : 36 months

Starting : October 2016

Coordinator : H. Beaugendre and M. Colin

Abstract : From the beginning of aeronautics, icing has been classified as a serious issue: ice accretion on airplanes is due to the presence of supercooled droplets inside clouds and can lead to major risks, such as a crash. As a consequence, each airplane has its own protection system; the most important one is an anti-icing system which runs permanently. In order to reduce fuel consumption, de-icing systems are developed by manufacturers. One alternative to real experiments consists in developing robust and reliable numerical models: this is the aim of this project. These new models have to take into account a multi-physics and multi-scale environment: phase change, thermal transfer, aerodynamic flows, etc. We aim to use thin film equations coupled to level-set methods in order to describe the phase change of water. The overall objective is to provide a simulation platform able to support the complete design of these systems.

Type: COOPERATION

Instrument: Specific Targeted Research Project

Objectif: The main objectives of this research programme are to develop, through the ESR's individual projects, fundamental mathematical methods and algorithms to bridge the gap between Uncertainty Quantification and Optimisation and between Probability Theory and Imprecise Probability Theory for Uncertainty Quantification, and to efficiently solve high-dimensional, expensive and complex engineering problems.

Duration: 2017 - 2021

Coordinator: University of Strathclyde (Scotland, UK)

Partner: University of Strathclyde (Scotland, UK), Inria Bordeaux Sud-Ouest (France), ESTECO (Italy), CIRA, Centro Italiano Aerospaziali (Italy), Politecnico di Milano (Italy), Jozef Stefan Institute (Slovenia), Cologne University of Applied Sciences (Germany), University of Durham (England, UK), Ghent University (Belgium), Von Karman Institute (Belgium), DLR, Institute of Aerodynamics and Flow Technology (Germany), National Physical Laboratory (England, UK), Leonardo Aircraft S.p.A (Italy), Airbus Operations Gmbh (England, UK), Stanford University (USA)

Inria contact: Pietro Marco Congedo

Abstract:
Research activities will be developed in the context of the European project - UTOPIAE http://

Type: COOPERATION

Instrument: Specific Targeted Research Project

Duration: October 2013 - September 2016

Coordinator: SNECMA (France)

Partner: SNECMA SA (FR), AEROTEX UK LLP (UK), AIRBUS OPERATIONS SL (ES), Airbus Operations Limited (UK), AIRCELLE SA (FR), ARTTIC (FR), CENTRO ITALIANO RICERCHE AEROSPAZIALI SCPA (IT), CRANFIELD UNIVERSITY (UK), DEUTSCHES ZENTRUM FUER LUFT - UND RAUMFAHRT EV (DE), EADS DEUTSCHLAND GMBH (DE), ONERA (FR), TECHSPACE AERO SA (BE)

Inria contact: Héloise Beaugendre

Abstract: During the different phases of a flight, aircraft face severe icing conditions. When this ice breaks away and is ingested through the remainder of the engine and nacelle, it causes multiple damages that have a serious negative impact on operating costs and may also lead to incidents. To minimise ice accretion, propulsion systems (engine and nacelle) are equipped with Ice Protection Systems (IPS), which, however, have performance issues of their own. The design methodologies used to characterise icing conditions are based on empirical methods and past experience, and cautious design margins result in non-optimised design solutions. In addition, engine and nacelle manufacturers are now limited in the development of future architectures by the lack of knowledge of icing behaviour within the next generation of propulsive systems, and by newly adopted regulations that require aero-engine manufacturers to address an extended range of icing conditions.

In this context, STORM proposes to: characterise ice accretion and release through partial tests; model ice accretion, ice release and ice trajectories; develop validated tools for runback; characterise ice-phobic coatings; and select and develop innovative low-cost and low-energy anti-icing and de-icing systems. STORM will thus strengthen the predictability of industrial design tools and reduce the number of tests needed. It will permit lower design margins for aircraft systems, and thereby reduce energy consumption as well as prevent incidents and breakdowns due to icing issues.

Program: OCEANEraNET

Project acronym: MIDWEST

Project title: Multi-fIdelity Decision making tools for Wave Energy SysTems

Duration: December 2015 - December 2018

Coordinator: Mario Ricchiuto

Other partners: Chalmers University (Sweden), DTU Compute (Denmark), IST Lisbon (Portugal)

Abstract: The design of wave energy converters (WECs) currently relies on low-fidelity linear hydrodynamic models. While these models disregard fundamental nonlinear and viscous effects, which may lead to sub-optimal designs, high-fidelity fully nonlinear Navier-Stokes models are prohibitively expensive computationally for optimization. The MIDWEST project will provide an efficient asymptotic nonlinear finite element model of intermediate fidelity, investigate the fidelity level required to resolve a given engineering output, and construct a multi-fidelity optimization platform using surrogate models that blend models of different fidelity. Combining know-how in wave energy technology, finite element modelling, high performance computing, and robust optimization, the MIDWEST project will provide a new efficient decision-making framework for the design of the next generation of WECs, which will benefit all industrial actors of the European wave energy sector.
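The multi-fidelity idea described above can be illustrated with a minimal sketch: a cheap low-fidelity model is corrected with a surrogate of its discrepancy from a few expensive high-fidelity samples. The functions `f_hi` and `f_lo` below are hypothetical stand-ins, not the MIDWEST models; the additive-correction strategy is only one of several possible blendings.

```python
import numpy as np

# Hypothetical model hierarchy: f_lo is a cheap, biased approximation of f_hi.
def f_hi(x):  # "high-fidelity" response (expensive in practice)
    return np.sin(2 * np.pi * x) + 0.3 * x**2

def f_lo(x):  # "low-fidelity" response (cheap)
    return np.sin(2 * np.pi * x)

# A few expensive samples are used to learn the discrepancy f_hi - f_lo.
x_train = np.linspace(0.0, 1.0, 5)
delta = f_hi(x_train) - f_lo(x_train)
coeffs = np.polyfit(x_train, delta, deg=2)  # quadratic correction surrogate

def f_mf(x):
    """Multi-fidelity surrogate: cheap model plus learned correction."""
    return f_lo(x) + np.polyval(coeffs, x)

x_test = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(f_mf(x_test) - f_hi(x_test)))
print(f"max surrogate error: {err:.2e}")
```

In an optimization loop, `f_mf` would replace most calls to the expensive model, with occasional high-fidelity evaluations to refresh the correction.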

**Inria@SiliconValley**

Associate Team involved in the International Lab:

Title: Advanced methods for uncertainty quantification in compressible flows

International Partner (Institution - Laboratory - Researcher):

Stanford (United States) - Department of Mechanical Engineering - Gianluca Iaccarino

Start year: 2014

See also: http://

This research project deals with uncertainty quantification in computational fluid dynamics. Uncertainty Quantification (UQ) aims at developing rigorous methods to characterize the impact of limited knowledge on quantities of interest. The main objective of this collaboration is to build a flexible and efficient numerical platform, based on intrusive methods, for solving stochastic partial differential equations. In particular, the idea is to handle highly nonlinear system responses driven by shocks.
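For intuition only, the non-intrusive counterpart of such UQ methods can be sketched in a few lines: sample the uncertain input, propagate it through the model, and read statistics off the output ensemble. The shock-like quantity of interest `qoi` below is a hypothetical toy response, not one of the stochastic PDE solvers of the collaboration (which operate intrusively on the equations themselves).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quantity of interest: a steep tanh transition mimics a shock-driven
# response to an uncertain input xi in [0, 1].
def qoi(xi):
    return np.tanh(10.0 * (xi - 0.5))

# Non-intrusive Monte Carlo propagation of the input uncertainty.
xi = rng.uniform(0.0, 1.0, size=200_000)
samples = qoi(xi)
print(f"mean ~ {samples.mean():.3f}, std ~ {samples.std():.3f}")
```

The steep gradient is precisely what makes such responses hard for intrusive spectral expansions, motivating the shock-capturing focus of the project.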

Title: Advanced Modeling of Shear Shallow Flows for Curved Topography: Water and Granular Flows.

International Partner (Institution - Laboratory - Researcher):

Inria Sophia-Antipolis and University of Nice (France)

Inria Bordeaux and University of Bordeaux (France)

University of Marseille (France)

National Cheng Kung University, Tainan, Taiwan

National Taiwan University and Academia Sinica, Taipei, Taiwan

Duration: 2014 - 2016

See also: https://

Our objective is to generalize the promising modeling strategy proposed by G.L. Richard and S.L. Gavrilyuk (2012) to genuinely 3D shear flows, and to take into account the curvature effects related to topography. Special care will be exercised to ensure that the numerical methodology can take full advantage of massively parallel computational platforms and serve as a practical engineering tool. At first we will consider quasi-2D sheared flows on a curved topography defined by an arc, so as to derive a model parameterized by the local curvature and the nonlinear profile of the bed. Experimental measurements and numerical simulations will be used to validate and improve the proposed modeling on curved topography for quasi-2D flows. Thereafter, we will focus on 3D flows, first on simple geometries (inclined plane) and then on quadric surfaces, thus preparing the generalization to complex topographies in the context of geophysical flows.

University of Zurich : R. Abgrall. Collaboration on penalisation on unstructured grids and high order adaptive methods for CFD and uncertainty quantification.

Politecnico di Milano, Aerospace Department (Italy) : Pr. A. Guardone. Collaboration on ALE for complex flows (compressible flows with complex equations of state, free surface flows with moving shorelines).

von Karman Institute for Fluid Dynamics (Belgium). With Pr. T. Magin we work on Uncertainty Quantification problems for the identification of inflow conditions of hypersonic nozzle flows. With Pr. H. Deconinck we work on the design of high order methods, including goal-oriented mesh adaptation strategies.

NASA Langley: Dr. Alireza Mazaheri. Collaboration on high order schemes for PDEs with second and third order derivatives, with particular emphasis on high order approximations of solution derivatives.

Technical University of Crete, School of Production Engineering & Management: Pr. A.I. Delis. Collaboration on high order schemes for depth-averaged free surface flow models, including robust code-to-code validation.

Chalmers University (C. Eskilsson) and Technical University of Denmark (A.-P. Engsig-Karup): our collaboration with Chalmers and with DTU Compute in Denmark aims at developing high order non-hydrostatic finite element Boussinesq-type models for the simulation of floating wave energy conversion devices such as floating point absorbers.

University of Delaware: F. Veron. Collaboration on the modelling of rain effects on wave propagation.

From 27/11 to 03/12/2016 Pascal POULLET (Université des Antilles) has visited M. Ricchiuto to work on nonlinear residual based approximations of free surface flows with moving bathymetries.

From 21/11 to 09/12/2016 Luca CIRROTTOLA (Politecnico di Milano) has visited C. Dobrzynski to work on parallel mesh adaptation.

From 21/10 to 05/11/2016 François MORENCY (ETS, University of Québec, Montréal) has visited us to work on the LESCAPE code with Héloïse, Léo and Aurore. The Spalart-Allmaras turbulence model has been validated using the periodic channel flow test case.

From 01/10 to 29/10/2016 Claes ESKILSSON (Chalmers University of Technology, Sweden) has visited us to work with Mario Ricchiuto and U. Bosi on spectral element methods for Boussinesq models with floating structures.

From 12/09 to 22/09/2016 Kazuo AOKI (University of Taiwan) has visited us to work with Luc Mieussens on models for reentry flows.

From 07/07 to 09/07/2016 Volker ROEBER (Tohoku University, International Research Institute of Disaster Science) has visited us to work with Maria Kazolea and Mario Ricchiuto on robust code-to-code validation for coastal engineering problems.

From 27/03 to 01/04/2016 Alireza MAZAHERI (NASA Langley) came to visit Mario Ricchiuto and V. Perrier to work on the implementation of a hyperbolic formulation of the Navier-Stokes equations in the AeroSol platform.

From 16/03/2016 Guglielmo SCOVAZZI (Duke University) has visited M. Ricchiuto to work on stabilized finite elements for geo-mechanics.

From 01/01/2016 to 30/04/2016 Gianluca IACCARINO (Stanford University) has visited the team in the context of the AQUARIUS associate team, collaborating actively with all the PhD students involved in uncertainty quantification research. All the students involved (Razaaly, Sanson and Cortesi) have then visited the group of G. Iaccarino at Stanford University in the fall of 2016.

From 15/05/2016 to 17/07/2016 Fabrice VERON (University of Delaware at Newark, USA) has visited us to work with Luc Mieussens on a project dedicated to the modelling and simulation of the interaction between rain and water waves.

From April 2015 to April 2016, T. WATANABE (Department of Mathematics, Faculty of Science, Kyoto Sangyo University) visited M. Colin to work on the approximation of solitary wave solutions of nonlinear dispersive PDEs.

From Feb 2016 to Jul 2016 Rama Ayoub (Inria, M. Sc. Student)

From Apr 2016 to Sep 2016 Toufik Boubehziz (EDF, M. Sc. Student)

From Jan 2016 to Mar 2016 Maxence Claeys (CEA, Phd Student)

From Feb 2016 to Jul 2016 Antoine Fondaneche (Inria, M. Sc. Student)

From Oct 2016 to Feb 2017 Esben Grange (Inria, M. Sc. Student)

From Jun 2016 to Sep 2016 Adrien Paumelle (Inria, Univ. Bordeaux)

From May 2016 to Sep 2016 Raphael Robyn (Inria, Univ. Bordeaux)

H. Beaugendre: Numerical workshop for the STORM European project, Inria Bordeaux, France, November 2016

M. Colin: Congress JEF, dedicated to young researchers in PDE analysis and applications, Bordeaux, France, March 2016

M. Ricchiuto : International workshop B'WAVES 2016, Bergen, Norway, June 2016 ( https://

M. Ricchiuto : TANDEM and Defis Littoral Tsunami School, Bordeaux, France, April 2016 (http://

M. Ricchiuto : Verification, Validation et Quantification des incertitudes en simulation numerique (VVUQ), Aristote seminar cycles, Ecole Polytechnique, France, November 2016 (http://

P.M. Congedo : NICFD 2016 Conference, Varenna, Italy, October 2016.

Mathieu Colin is a member of the board of the journal Applications and Applied Mathematics: An International Journal (AAM)

Mario Ricchiuto is a member of the editorial boards of *Computers & Fluids (Elsevier)* and of *GEM - International Journal on Geomathematics (Springer)*

A special issue of the European Journal of Mechanics / B Fluids will be dedicated to the two editions of the international workshop B'Waves on wave breaking, held in 2014 in Bordeaux (M. Colin and M. Ricchiuto as co-organizers) and in 2016 in Bergen (M. Ricchiuto as co-organizer). M. Colin and M. Ricchiuto will be guest editors of this issue.

We reviewed papers for top international journals in the main scientific themes of the team: Journal of Computational Physics, Computer Methods in Applied Mechanics and Engineering, Optimization and Engineering, International Journal for Numerical Methods in Fluids, Physics of Fluids, Journal of Marine Science and Technology, Engineering Applications of Computational Fluid Mechanics, Computers and Fluids, International Journal of Modelling and Simulation in Engineering, Aircraft Engineering and Aerospace Technology, International Journal of Computational Fluid Dynamics, Applications and Applied Mathematics: An International Journal, Discrete and Continuous Dynamical Systems - Series A, Electronic Journal of Differential Equations, Calculus of Variations and Partial Differential Equations, Nonlinear Analysis: Modelling and Control, Advanced Nonlinear Studies, Communications on Pure and Applied Analysis, Communications in Computational Physics, Nonlinearity, Journal of Differential Equations, Analysis and Mathematical Physics.

P.M. Congedo, Presentation at Journees Scientifiques Inria, June 2016, Rennes

P.M. Congedo, “General introduction to Uncertainty Quantification”, CNES, March 2016, Toulouse

M. Kazolea, “Wave breaking in Boussinesq free surface models”, International Workshop B'Waves2016, Bergen (Norway)

M. Ricchiuto, “Numerical issues in tsunami simulation: dispersion and diffusion 'scales', what order of accuracy?”, TANDEM and Defis Littoral 2016 Tsunami school, Bordeaux

P.M. Congedo has been appointed as the Co-Director of the Inria International Lab Inria-CWI.

Licence : Cécile Dobrzynski, Langages en Fortran 90, 54h, L3, ENSEIRB-MATMÉCA, FRANCE

Master : Héloïse Beaugendre, TP langage C++, 48h, M1, ENSEIRB-MATMÉCA, FRANCE

Master : Héloïse Beaugendre, Calcul Haute Performance (OpenMP-MPI), 40h, M1, ENSEIRB-MATMÉCA et Université de Bordeaux, France

Master : Héloïse Beaugendre, Initiation librairie MPI, 12h, M2, École de Technologie Supérieure, Université du Québec, Montréal, Canada

Master : Héloïse Beaugendre, Responsable de filière de 3ème année, 15h, M2, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Calcul parallèle (MPI), 78h, M2, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Encadrement de projets de la filière Calcul Haute Performance, 11h, M2, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre , Projet fin d'études, 4h, M2, ENSEIRB-MATMÉCA, FRANCE

Master : Mathieu Colin : Integration, M1, 54h, ENSEIRB-MATMÉCA, FRANCE

Master : Mathieu Colin : PDE, M2, 30h, ENSEIRB-MATMÉCA, FRANCE

Master : Mathieu Colin : Fortran 90, M1, 44h, ENSEIRB-MATMÉCA, FRANCE

Master : Mathieu Colin : PDE, M1, 28h, University of Bordeaux, FRANCE

Master : Mathieu Colin : Analysis, L1, 47h, ENSEIRB-MATMÉCA, FRANCE

Master: Luc Mieussens, Transport de particules : modèles, simulation, et applications, 24h, M2, ENSEIRB-MATMECA, France

Master : Luc Mieussens, Projet fin d'études, 4h, M2, ENSEIRB-MATMÉCA, FRANCE

Doctorat : P.M. Congedo, Uncertainty quantification, theory and application to algorithms, CFD and global change, Apr 2015, CERFACS, Toulouse, France, 4h.

Master : Mario Ricchiuto : Fluid Dynamics II, 20h, ENSEIRB-MATMÉCA, FRANCE

Master : Mario Ricchiuto, Encadrement de projets TER, 10h, ENSEIRB-MATMÉCA, FRANCE

HdR : Héloïse Beaugendre, Contributions à la simulation numérique des écoulements fluides : exemples en milieu poreux et en aéronautique, Bordeaux University, 18 March 2016.

PhD : Fusi Francesca, Stochastic robust optimization of a helicopter rotor airfoil, March 2016.

PhD : Bellec Stevan, Discrete asymptotic modelling of free surface flows, 5 October 2016.

PhD : Viville Quentin, Construction d'une méthode hp-adaptative pour les schémas aux Résidus Distribués, Bordeaux University, 22 November 2016.

PhD: Filippini Andrea, Nonlinear finite element Boussinesq modelling of non-hydrostatic free surface flows, 14 December 2016.

PhD: Nouveau Léo, Adaptive Residual Based Schemes for Solving the Penalized Navier-Stokes Equations with Moving Bodies - Application to Ice Shedding Trajectories, Bordeaux University, 16 December 2016.

PhD in progress : Arpaia Luca, Continuous mesh deformation and coupling with uncertainty quantification for coastal inundation problems, started in March 2014.

PhD in progress : Bosi, Umberto, ALE spectral element Boussinesq modelling of wave energy converters, started in November 2015.

PhD in progress : Cortesi Andrea, Predictive numerical simulation for rebuilding freestream conditions in atmospheric entry flows, started in October 2014.

PhD in progress: Lin Xi, Asymptotic modelling of incompressible reactive flows in self-healing composites, started in October 2014.

PhD in progress: Perrot Gregory, Physico-chemical modelling of self-healing ceramic composites, started in October 2011.

PhD in progress : Peluchon Simon, Approximation numérique et modélisation de l'ablation différentielle de deux matériaux: application à l'ablation liquide. Started in December 2014. Advisor: Luc Mieussens. PhD hosted in CEA-CESTA.

PhD in progress: Aurore Fallourd, Modeling and Simulation of inflight de-icing systems, Started in October 2016.

PhD in progress: Guillaume Jeanmasson, Explicit methods with local time stepping for the simulation of unsteady turbulent flows. Started in October 2016. Advisor: Luc Mieussens. Hosted in ONERA Châtillon.

PhD in progress: Francois Sanson, Uncertainty propagation in a system of codes, started in February 2016.

PhD in progress: Nassim Razaaly, Robust optimization of ORC systems, started in February 2016.

P.M. Congedo: PhD thesis reviewer (rapporteur) for Elio Bufi, ENSAM ParisTech, December 2016.