`BACCHUS` is a joint team of Inria Bordeaux - Sud-Ouest, LaBRI
(Laboratoire Bordelais de Recherche en Informatique – CNRS UMR 5800,
University of Bordeaux and Bordeaux Inst. Nat. Polytechnique) and IMB (Institut de Mathématiques
de Bordeaux – CNRS UMR 5251, University of Bordeaux).
`BACCHUS` was created on January 1st, 2009
(http://

The purpose of the `BACCHUS` project is to efficiently analyze and solve
scientific computing problems that arise in complex
research and industrial applications and that involve scaling. By
scaling we mean that the applications considered require enormous
computational power, of the order of tens or hundreds of teraflops,
and that they handle huge amounts of data. Solving such
problems requires a multidisciplinary approach involving both applied
mathematics and computer science.

Our major focus is on fluid problems, and especially the simulation of
*physical wave propagation problems*, including fluid mechanics,
inert and reactive flows, multimaterial and multiphase flows,
acoustics, real-gas effects, etc. `BACCHUS` intends to contribute to the solution of
these problems at all steps of the
development chain, from the design of new high-performance,
more robust and more precise numerical schemes, to the creation and
implementation of optimized parallel algorithms and high-performance
codes.

By taking architectural and performance concerns into account from the early stages of design and implementation, the high-performance software implementing our numerical schemes will be able to run efficiently on most of today's major parallel computing platforms (UMA and NUMA machines, large networks of nodes, production grids).

A large number of engineering problems involve fluid
mechanics. They may involve the coupling of one or more physical
models. An example is provided by aeroelastic problems, which have
been studied in detail by other Inria teams. Another example is given by
flows in pipelines, where the fluid (a mixture of air, water and gas) does
not have well-known physical properties, and there are even more exotic situations.
On some occasions, one needs specific numerical tools to take into account *e.g.* a fluid's exotic equation of state,
or the influence of small flow scales in a macro-/mesoscopic flow model. Efficient schemes are needed
for unsteady flows, where
the amount of required computational resources becomes huge. Another situation
where specific tools are needed is when one is interested in very particular physical quantities, such as
the lift and drag of an airfoil, or the boundary of the area flooded by a tsunami.

In these situations, commercial tools can only provide a crude answer.
These codes, while allowing users to
simulate a lot of different flow types, and “always” providing an
answer, often give results of poor quality.
This is mainly due to their general-purpose character, and to the fact that
the numerical technology implemented in these codes
is not the most recent. To give a few examples, consider the noise generated
by wake vortices in supersonic flows (external aerodynamics/aeroacoustics),
or the direct simulation of a 3D compressible mixing layer in a complex geometry (as in combustion chambers).
To the best of our knowledge, due to the very different temporal
and physical scales that need to be captured,
a direct simulation of these phenomena is
not within reach of the most recent technologies, because the numerical
resources required are currently unavailable.
*We need to invent specific algorithms for this purpose.*

*Our goal is to develop more accurate and more efficient schemes that can adapt to modern computer architectures,
and allow the efficient simulation of complex real-life flows*.

*We develop a class of numerical schemes*, known in the literature as
Residual Distribution schemes, *specifically
tailored to unstructured and hybrid meshes*. They have the most compact
stencil compatible with the expected order of accuracy.
This *accuracy is at least second order, and can be raised to any order,
even though fourth order is what is considered for practical applications.*
Since the stencil is compact, the implementation on parallel machines becomes
simple. These schemes are very flexible in nature, which is so far one of their most important advantages
over other techniques. This feature has allowed us to adapt the schemes to the requirements of different
physical situations (*e.g.* different formulations allow either an efficient explicit
time advancement for problems involving small time scales, or a fully implicit space-time
variant which is unconditionally stable and makes it possible to handle stiff problems
where only the large time scales are relevant). This flexibility has also enabled
us to devise a variant using the same data structure as the popular Discontinuous Galerkin
schemes, which are also part of our scientific focus.
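
The basic Residual Distribution principle, as it appears in the literature, can be sketched as follows: on each element $K$ one computes a total residual and distributes signed fractions of it to the element's nodes, so that node $i$ only receives contributions from the elements containing it (hence the compact stencil):

```latex
\phi^K = \int_K \nabla\cdot\mathcal{F}(u_h)\,\mathrm{d}\mathbf{x},
\qquad
\phi_i^K = \beta_i^K\,\phi^K
\quad\text{with}\quad
\sum_{i\in K}\beta_i^K = 1,
\qquad
u_i^{n+1} = u_i^{n} - \frac{\Delta t}{|C_i|}\sum_{K\ni i}\phi_i^K ,
```

where $|C_i|$ is the measure of the dual cell of node $i$. The choice of the distribution coefficients $\beta_i^K$ is what determines the accuracy and monotonicity properties of the scheme.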

The compactness of the second-order version of the schemes enables us to use efficiently the high-performance parallel linear algebra tools developed by the team. However, the high-order versions of these schemes, which are under development, require modifications to these tools that take into account the nature of the data structure used to reach higher orders of accuracy. This leads to new scientific problems at the border between numerical analysis and computer science. In parallel to these fundamental aspects, we also work on adapting more classical numerical tools to complex physical problems such as those encountered in interface flows, turbulent or multiphase flows, geophysical flows, and material science. Particular attention has been devoted to the implementation of complex thermodynamic models, making it possible to simulate several classes of fluids and to take into account real-gas effects and some exotic phenomena, such as rarefaction shock waves.

Within these applications, a strong effort has been made in developing more predictive tools for both multiphase compressible flows and non-hydrostatic free surface flows.

Concerning multiphase flows, several advances have been made: considering a more complete system of equations including viscosity, working on the thermodynamic modelling of complex fluids, and developing stochastic methods for uncertainty quantification in compressible flows. Concerning depth-averaged free surface flow modelling, on one hand we have shown the advantages of using the compact schemes we develop for hydrostatic shallow water models. On the other, we have shown how to extend our approach to non-hydrostatic Boussinesq modelling, including wave dispersion and wave breaking effects.
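
For reference, the hydrostatic shallow water model mentioned above reads, in one space dimension and with standard notation ($h$ water depth, $u$ depth-averaged velocity, $b$ bathymetry, $g$ gravity):

```latex
\partial_t h + \partial_x (hu) = 0,
\qquad
\partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}\,g h^2\right) = -\,g\,h\,\partial_x b .
```

Boussinesq-type non-hydrostatic models augment the momentum equation with higher-order dispersive terms, which is what allows them to represent wave dispersion in the shoaling region before breaking.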

We expect to be able to demonstrate the potential of our developments on applications ranging from the reproduction of the complex multidimensional interactions between tidal waves and estuaries, to the unsteady aerodynamics and aeroacoustics associated with both external and internal compressible flows, and the behaviour of complex materials. This will be achieved by means of a multidisciplinary effort involving our research on residual discretization schemes, the parallel advances in algebraic solvers and partitioners, and strong interactions with specialists in computer science, scientific computing, physics, mechanics, and mathematical modeling.

Concerning software platforms,
our research in numerical algorithms has led to the development of the
`RealfluiDS` platform and of the `SLOWS` (Shallow-water fLOWS) code for free surface flows,
both described in dedicated sections below. Simultaneously,
we have contributed to the advancement of the
new, object-oriented, parallel finite element library `AeroSol`, also described below,
which is destined to replace the existing codes and become the team's CFD kernel.
Concerning uncertainty quantification and robust optimization, we are developing the platform `RobUQ`.

New software developments are under way in the fields of complex materials modelling and multiphase flows with heat and mass transfer.
Concerning materials modelling,
these developments are performed in the solver `COCA` (CodeOxydationCompositesAutocicatrisants)
for the simulation of the self-healing process in composite materials,
and are described in a dedicated section below.
Concerning the numerical simulation of multiphase flows, we have developed the code `sDEM`, one of the rare codes able to simulate metastable states with complex thermodynamics while using uncertainty quantification techniques.

*Funding and external collaborations.* This work is supported by several sources, including the
last year of the `ADDECCO` ERC grant, the FP7 project STORM,
the ANR project `UFO` and the PIA project `TANDEM`. Important contributions to these activities are given by our external collaborators,
in particular R. Abgrall (University of Zurich) and the associated team `AQUARIUS`.

Another topic of interest is the quantification of uncertainties in nonlinear problems. In many applications, the physical model is not known accurately. The typical example is that of turbulence models in aeronautics. These models all depend on a number of parameters which can radically change the output of the simulation. Since it is impossible to lump the large number of temporal and spatial scales of a turbulent flow into a few model parameters, these values are often calibrated to quantitatively reproduce a certain range of effects observed experimentally. A similar situation is encountered in many applications, such as real-gas or multiphase flows, where the form of the equation of state suffers from uncertainties, and free surface flows with sediment transport, where often both the hydrodynamic model and the sediment transport model depend on several parameters and may have more than one formal expression.

This type of uncertainty, called *epistemic*, is associated
with a lack of knowledge and could be reduced by further experiments and investigation.
Another type of uncertainty, called *aleatory*, is instead related to the
intrinsic randomness of a physical measurement and cannot be reduced.
The dependency of the numerical simulation on these uncertainties can be studied by propagation-of-chaos techniques, such as the
polynomial chaos methods developed in recent years. Different implementations exist,
depending on whether the method is intrusive or not. The accuracy of these
methods is still a matter of research, as well as how large a number of
uncertainties they can handle, and their versatility with
respect to the structure of the random variables' PDFs.
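
As an illustration of the non-intrusive approach, the following sketch (our own toy example, assuming a single Gaussian uncertain parameter and a hypothetical scalar model `f`, not one of our solvers) projects the model output onto a probabilists' Hermite polynomial chaos basis via Gauss-Hermite quadrature; mean and variance then follow directly from the chaos coefficients.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pc_expand(f, order):
    """Non-intrusive PC expansion of y = f(xi), xi ~ N(0, 1).
    Returns the chaos coefficients c_k of f in the He_k basis."""
    x, w = He.hermegauss(order + 1)   # Gauss-Hermite rule, weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)      # normalize to the N(0, 1) measure
    y = f(x)
    # c_k = E[y He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
    return [float(np.sum(w * y * He.hermeval(x, [0.0] * k + [1.0])))
            / math.factorial(k) for k in range(order + 1)]

# Hypothetical toy model with a lognormal output.
c = pc_expand(lambda xi: np.exp(0.3 * xi), order=6)
mean = c[0]
var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, len(c)))
```

For this toy output the exact statistics are known in closed form (mean $e^{0.045}$, variance $e^{0.18}-e^{0.09}$), which makes the sketch easy to verify; intrusive methods instead propagate the coefficients $c_k$ through the governing equations themselves.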

Our objective is to develop non-intrusive and semi-intrusive methods, trying to define a unified framework for obtaining a reliable and accurate numerical solution at a moderate computational cost. This work has produced a large number of publications in peer-reviewed journals. Concerning the class of intrusive methods, we are developing a unified scheme in the coupled physical/stochastic space based on a multi-resolution framework. Here, the idea is to build a framework capable of refining a discontinuity in both the stochastic and the deterministic mesh. We are extending this class of methods to complex models in CFD, such as multiphase flows. Concerning the non-intrusive methods, we are working on several methods for treating the following problems: handling a large number of uncertainties, treating high-order statistical decompositions (variance, skewness and kurtosis), and solving inverse problems efficiently.

We have used these methods to several ends: to obtain a highly accurate quantitative reconstruction of a simulation output's variation over a complex space of parameter variations, in order to study a given model (uncertainty propagation); as a means of comparing different models' sensitivity to certain parameters, thus assessing their robustness (model robustness); or as a tool to compare different numerical implementations (schemes and codes) of a similar model, to assess simultaneously the robustness of the numerics and the universality of the trends of the statistics and of the sensitivity measures (robust cross-validation). Moreover, we statistically rebuild some input parameters relying on experimental measures of the output, thus solving an inverse problem.

The developed methods and tools have been applied to several applications of interest: real-gas effects, multiphase flows, cavitation, aerospace applications and geophysical flows.

Concerning robust optimization, we focus on problems with a high-dimensional representation of the stochastic inputs, which can be computationally prohibitive. In fact, for a robust design, statistics of the fitness functions are also important, so uncertainty quantification (UQ) becomes the predominant issue to handle when a large number of uncertainties is taken into account. Several methods have been proposed in the literature to consider high-dimensional stochastic problems, but their accuracy on realistic problems, where highly nonlinear effects may exist, is not proven at all. We have developed several efficient global strategies for robust optimization: the first class of methods is based on the extension of simplex stochastic collocation to the optimization space, while the second consists of hybrid strategies using ANOVA decomposition.

These developments and computations are performed in the platform `RobUQ`,
which includes most of the methods developed in the team.

*Funding and external collaborations.* This part of our activities is supported by the ANR-MN project `UFO` and the associated
team `AQUARIUS`. It benefits from collaborations with external members, in particular R. Abgrall (University of Zurich).

Many simulations which model the evolution of a given phenomenon in time (turbulence and unsteady flows, for instance) need to re-mesh some portions of the problem graph in order to capture more accurately the properties of the phenomenon in areas of interest. This re-meshing is performed according to criteria which are closely linked to the ongoing computation and can involve large mesh modifications: while elements are created in critical areas, others may be merged in areas where the phenomenon is no longer critical. To alleviate the cost of this re-meshing phase, we have started looking into time-dependent continuous mesh deformation techniques. These may allow some degree of adaptation between two re-meshing phases, which in theory may then be less frequent and more local.

When working in parallel, re-meshing introduces additional problems. In particular, splitting an element located on the frontier between several processors is not an easy task, because deciding when to split an element, and defining the direction along which to split it so as to best preserve numerical stability, require shared knowledge which is not available in distributed-memory architectures. Ad hoc data structures and algorithms have to be devised so as to achieve these goals without resorting to extra communication and synchronization, which would impact the running speed of the simulation.

Most works on parallel mesh adaptation attempt to parallelize in some way all the mesh operations: edge swap, edge split, point insertion, etc. This implies deep modifications in the (re)mesher and often leads to poor performance in terms of CPU time. Another approach bases parallel re-meshing on an existing sequential mesher, combined with load balancing, so as to be able to modify the elements located on the frontiers between processors.

In addition, the preservation of load balance in the re-meshed simulation requires dynamic redistribution of mesh data across processing elements. Several dynamic repartitioning methods have been proposed in the literature, which rely on diffusion-like algorithms and on the solving of flow problems to minimize the amount of data to be exchanged between processors. However, integrating such algorithms into a global framework for handling adaptive meshes in parallel has yet to be done.
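
The diffusion idea can be illustrated with a minimal sketch (a generic first-order scheme on the processor graph, our own toy code rather than the algorithms cited above): each processor repeatedly exchanges load with its neighbours in proportion to the load difference, so that work flows from overloaded to underloaded parts while total load is conserved.

```python
def diffuse(loads, neighbors, alpha=0.25, steps=50):
    """First-order diffusion load balancing: at each step, processor i
    receives alpha * (l_j - l_i) units of load from each neighbour j
    (negative amounts flow the other way, so the total is conserved)."""
    loads = list(loads)
    for _ in range(steps):
        new = list(loads)
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                new[i] += alpha * (loads[j] - loads[i])
        loads = new
    return loads

# Ring of four processors, all load initially on processor 0.
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
balanced = diffuse([100.0, 0.0, 0.0, 0.0], ring)
```

On this ring the iteration converges geometrically to the uniform load of 25 per processor; in a real repartitioner the computed flows would then drive the migration of mesh entities.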

The path that we are following is based on the decomposition of the areas to remesh into balls that can be processed concurrently, each by a sequential remesher. This requires devising scalable algorithms for building such balls, scheduling them on as many processors as possible, reconstructing the remeshed mesh, and redistributing its data.

*Funding and external collaborations.* Most of this
research started within the context of the PhD of Cédric Lachat,
funded by a CORDI grant of EPI `PUMAS`, and was continued thanks
to funding by the ADT grant `El Gaucho`, which was completed this year. The work on
adaptation by continuous deformation started with the PhD of L. Arpaia and benefits from the funding of the PIA project `TANDEM`.

Unlike their predecessors of two decades ago, today's very large parallel architectures can no longer implement a uniform memory model. They are based on a hierarchical structure, in which cores are assembled into chips, chips are assembled into boards, boards are assembled into cabinets and cabinets are interconnected through high speed, low latency communication networks. On these systems, communication is non uniform: communicating with cores located on the same chip is cheaper than with cores on other boards, and much cheaper than with cores located in other cabinets. The advent of these massively parallel, non uniform machines impacts the design of the software to be executed on them, both for applications and for service tools. It is in particular the case for the software whose task is to balance workload across the cores of these architectures.

A common method for task allocation is to use graph partitioning tools. The elementary computations to perform are represented by vertices and their dependencies by edges linking two vertices that need to share some piece of data. Finding good solutions to the workload distribution problem amounts to computing partitions with small vertex or edge cuts and that balance evenly the weights of the graph parts. Yet, computing efficient partitions for non uniform architectures requires to take into account the topology of the target architecture. When processes are assumed to coexist simultaneously for all the duration of the program, this generalized optimization problem is called mapping. In this problem, the communication cost function to minimize incorporates architecture-dependent, locality improving terms, such as the dilation of each edge (that is, by how much it is “stretched” across the graph representing the target architecture), which is sometimes also expressed as some “hop metric”. A mapping is called static if it is computed prior to the execution of the program and is never modified at run-time.
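
The mapping cost function can be illustrated with a small sketch (our own toy code, not `Scotch`'s internals): the communication cost of a mapping is the sum, over the edges of the problem graph, of the edge weight times its dilation, i.e. the hop distance between the target nodes hosting its two endpoints.

```python
from collections import deque

def hops(target_adj, src):
    """BFS hop distances from src in the target-architecture graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in target_adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def mapping_cost(edges, mapping, target_adj):
    """Sum of edge weight * dilation for a given vertex -> node mapping."""
    return sum(w * hops(target_adj, mapping[u])[mapping[v]]
               for (u, v, w) in edges)

# Three processors on a path 0 - 1 - 2; edge (a, b) is "stretched"
# across two hops, while edge (b, c) stays local.
path = {0: [1], 1: [0, 2], 2: [1]}
cost = mapping_cost([("a", "b", 2), ("b", "c", 3)],
                    {"a": 0, "b": 2, "c": 2}, path)
```

A classical graph partitioner minimizes only the edge cut, i.e. it would charge both cut edges equally; the hop-weighted cost above is what makes the mapping problem architecture-aware.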

The sequential `Scotch` tool developed within the `BACCHUS` team
(see the dedicated section below)
has been able to perform static mapping since its first version, in 1994,
but this feature was not widely known nor used by the community. With
the increasing need to map very large problem graphs onto very large
and strongly non uniform parallel machines, there is an increasing
demand for parallel static mapping tools. Since, in the context of
dynamic repartitioning, parallel mapping software will have to
run on their target architectures, parallel mapping and remapping
algorithms suitable for efficient execution on such heterogeneous
architectures have to be investigated. This raises three interwoven
challenges:

scalability: such algorithms must be able to map graphs of more than a billion vertices onto target architectures comprising millions of cores;

heterogeneity: not only must these algorithms take into account the topology of the target architecture they map graphs onto, but they must also themselves run efficiently on these very architectures;

asynchronicity: most parallel partitioning algorithms use collective communication primitives, that is, some form of heavy synchronization. With the advent of machines having several millions of cores, and in spite of the continuous improvement of communication subsystems, the demand for more asynchronicity in parallel algorithms is likely to increase.

This year, our work mostly concerned the tighter integration of
`Scotch` with `PaMPA`. In particular, the routines for partitioning
with fixed vertices, which are mandatory in `PaMPA` to balance
remeshing workload across processing elements that already contain
some mesh data, have been redesigned almost from scratch.

The `AeroSol` software is jointly developed by the teams Bacchus and
Cagire. It is a high-order finite element library written in
C++. The code has been designed so as to allow for efficient
computations, with continuous and discontinuous finite element methods
on hybrid and possibly curvilinear meshes.
The work of team Bacchus focuses on continuous finite element
methods, while team Cagire focuses on discontinuous Galerkin methods. However, everything
is done to share as much
code as possible. More precisely, classes concerning I/O, finite
elements, quadrature, geometry, time iteration,
linear solvers, models and the interface with `PaMPA` are used by both
teams. This modularity is achieved by means
of template abstraction, so as to preserve performance.
The distribution of the unknowns
is handled by the software `PaMPA`, developed within the teams Bacchus
and Castor.

The work performed this year in the `BACCHUS` team has focused
on the experimentation of parallelization solutions on heterogeneous machines, and
in particular on the study of efficient solutions for shared-memory parallelism.
In particular, in the framework of the PhD of D. Genet, the coupling with runtime systems such as
StarPU (Inria team Runtime) and DAGuE (University of Tennessee) has been compared to a more classical OpenMP implementation.
This initial work, done for scalar problems, will now be extended to systems of equations.

`COCA` (CodeOxydationCompositesAutocicatrisants) is a
`Fortran 90` code for the simulation of the oxidation process
in self-healing composites. `COCA` solves the discrete finite element equations
relative to the oxidation (chemistry) and flow (potential) models.
Time integration is performed with an implicit approach (backward Euler or second-order backward
differencing). The linear algebraic systems arising in the discretization are solved with
the `MUMPS` library. `COCA` is also used as a numerical closure for
continuous mechanics solvers, in order to perform numerical strain tests for self-healing composites.

`RealfluiDS` is a software dedicated to the simulation of inert or
reactive flows. It is also able to simulate multiphase, multimaterial
and MHD flows, as well as turbulent flows (using the Spalart-Allmaras model). 2D and 3D
versions exist. The 2D version is used to test new ideas that
are later implemented in the 3D one. This software implements the most
recent residual distribution schemes. The code has been parallelized
with and without overlap of the domains. The uncertainty quantification
library `RobUQ` has been coupled to the software. A partitioning tool exists in
the package, which uses `Scotch`. Recently, the code has been extended to take into account real-gas effects,
so that arbitrarily complex equations of state can be used. Further developments concerning multiphase effects are under way.

`MMG3D` is a fully automatic tetrahedral remesher. Starting from a
tetrahedral mesh, it produces quasi-uniform meshes with respect to a
metric tensor field. This tensor prescribes a length and a direction
for the edges, so that the resulting meshes can be anisotropic.
The software is based on local mesh modifications, and an anisotropic
version of the Delaunay kernel is implemented to insert vertices into the
mesh. Moreover, `MMG3D` allows one to deal with rigid body motion and
moving meshes. When a displacement is prescribed on a part of the
boundary, a final mesh is generated such that the surface points are
moved according to this displacement. `MMG3D` is or was used by the Gamma3 team, at EPFL (mathematics
department), Dassault Aviation, Lemma (a French SME), Renault, etc. `MMG3D` can
be used in `FreeFem++` (http://
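
The quasi-uniformity criterion used by such anisotropic remeshers can be stated compactly: an edge $e$ has unit length in the metric $M$ when $\sqrt{e^\top M e} = 1$. A small sketch (our own illustration, not `MMG3D` code):

```python
import numpy as np

def metric_edge_length(p, q, M):
    """Length of the edge pq measured in the metric tensor M (an SPD
    matrix). Quasi-uniform anisotropic meshing aims at unit length
    for every edge of the mesh."""
    e = np.asarray(q, dtype=float) - np.asarray(p, dtype=float)
    return float(np.sqrt(e @ M @ e))

# A metric prescribing size 0.1 along x and size 1 along y:
M = np.diag([1.0 / 0.1**2, 1.0])
length = metric_edge_length((0.0, 0.0), (0.1, 0.0), M)  # a "unit" edge
```

An edge of Euclidean length 0.1 aligned with the x axis thus has metric length 1, i.e. it already conforms to the prescribed anisotropic sizing.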

Version 5.0 of `MMG3D` allows the modification of the surface triangulation based on cubic Bézier patches.
A. Froehly, an engineer in the FUI Rodin project, is working on this new version.

More details can be found on
http://

The `ORComp` platform is a simulation tool for designing an
ORC cycle. Starting from the solar radiation, this platform computes
the cycle providing the best performance with optimal choices of the
fluid and the operating conditions. It includes `RobUQ`, a
simulation block for the ORC cycles, the `RealfluiDS` code for
the simulation of the turbine and of the heat exchanger, and the software
`FluidProp` (developed at Delft University of Technology) for
computing the fluid thermodynamic properties.

The `sDEM` platform is a simulation tool for multiphase flows with transition modelling.
In particular, the code relies on the formulation of a DEM method, the use of complex thermodynamics,
and the possibility to model cavitating phenomena. Moreover, the method has been generalized in order to
take uncertainty directly into account, thus yielding the so-called Stochastic DEM (sDEM) method.
This is one of the first stochastic semi-intrusive schemes, making it possible to consider uncertainties
in multiphase flows including heat and mass transfer terms.
This software is developed together with the University of Zurich.

`PaMPA` (“Parallel Mesh Partitioning and Adaptation”) is a
middleware library dedicated to the management of distributed
meshes. Its purpose is to relieve solver writers from the tedious and
error-prone task of writing, again and again, service routines for mesh
handling, data communication and exchange, remeshing, and data
redistribution. It is based on a distributed data structure that
represents meshes as a set of *entities* (elements, faces,
edges, nodes, etc.), linked by *relations* (that is,
computation dependencies).

`PaMPA` interfaces with `Scotch` for mesh redistribution, and with
`MMG3D` for parallel remeshing of tetrahedral elements. Other sequential
remeshers can be plugged in, in order to handle other types of elements.

Version `1.0` of `PaMPA` allows users to declare distributed
meshes, to declare values attached to the entities of the meshes
(e.g. temperature attached to elements, pressures to the faces, etc.),
to exchange values between overlapping entities located at the
boundaries of subdomains assigned to different processors, to iterate
over the relations of entities (e.g. iterate over the faces of
elements), to remesh in parallel the areas of a mesh that need to be
remeshed, and to redistribute the remeshed mesh evenly across the
processors of the parallel architecture.
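
The entity/relation abstraction can be illustrated with a deliberately simplified sketch (our own toy code; all names and calls are hypothetical and do NOT reproduce `PaMPA`'s actual API): entity sets such as elements, faces and nodes are linked by relations that encode computation dependencies, and values can be attached to any entity.

```python
class ToyMesh:
    """Toy model of "entities linked by relations" (not PaMPA's API)."""
    def __init__(self):
        self.entities = {}    # entity name -> list of ids
        self.relations = {}   # (src, dst) -> {src id: [dst ids]}
        self.values = {}      # (entity, field) -> {id: value}

    def declare(self, name, ids):
        self.entities[name] = list(ids)

    def relate(self, src, dst, table):
        self.relations[(src, dst)] = dict(table)

    def attach(self, entity, field, data):
        self.values[(entity, field)] = dict(data)

    def over(self, src, dst, sid):
        """Iterate over related entities, e.g. the faces of an element."""
        return iter(self.relations[(src, dst)][sid])

m = ToyMesh()
m.declare("elem", [0, 1])
m.declare("face", [0, 1, 2])
m.relate("elem", "face", {0: [0, 1], 1: [1, 2]})  # shared face 1
m.attach("elem", "temperature", {0: 290.0, 1: 300.0})
faces_of_0 = list(m.over("elem", "face", 0))
```

In the distributed setting, the same abstraction is what allows a library to exchange the values attached to overlapping entities at subdomain boundaries without the solver knowing about communication.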

`PaMPA` is already used as the data structure manager for two solvers
being developed at Inria: `Plato` (team PUMAS) and `AeroSol` (teams
BACCHUS and CAGIRE).

Following expressions of interest from industrial partners, a formal
industrialization process of `PaMPA` has been started, under the
auspices of the *Direction du Transfert Technologique* (DTI) at
Inria. In this context, much work was directed towards improving the
robustness of the code, by including it into a continuous integration
framework based on `Jenkins`.

The `RobUQ` platform has been conceived to solve problems in
uncertainty quantification and robust design. It includes the
optimization code `ALGEN`, and the uncertainty quantification
code `NISP`. It also includes methods for the computation
of high-order statistics, efficient strategies for robust
optimization, and the Simplex2 method. Some methods are developed in
partnership with the Stanford University (in the framework of the
associated team AQUARIUS). Other methods are developed in the
context of ANR `UFO`.

parallel graph partitioning, parallel static mapping, parallel sparse matrix block ordering, graph repartitioning, mesh partitioning.

`Scotch` (http://

The initial purpose of `Scotch` was to compute high-quality static
mappings of valuated graphs representing parallel computations onto
target architectures of arbitrary topologies. This allows the mapper
to take into account the topology and heterogeneity of the target
architecture in terms of processor speed and link bandwidth. This
feature, which was meant for the NUMA machines of the 1980s, was not
widely used in the past because machines in the 1990s became UMA
again thanks to hardware advances. Now that architectures are becoming NUMA
again, these features are regaining popularity.

The `Scotch` package consists of two libraries: the sequential
`Scotch` library, and the parallel `PT-Scotch` library (for
“*Parallel Threaded* `Scotch`”), which operates according to the
distributed-memory paradigm, using MPI. `Scotch` was the first full
64-bit implementation of a general-purpose graph partitioner.

Version `6.0`, released in December 2012, offers many new
features: static mapping with fixed vertices, static remapping, and
static remapping with fixed vertices. Several critical algorithms of
the formerly strictly sequential `Scotch` library can now run in a
multi-threaded way. All of these features, which exist only in the
sequential version, will be available in the parallel
`PT-Scotch` library in the upcoming release `6.1`, the
development of which has been pursued this year. Also, `Scotch` has
been integrated into the `Jenkins` continuous integration
framework that is used for other projects of the team, such as
`PaMPA` and `AeroSol`.

`Scotch` has been integrated into numerous third-party software packages,
which indirectly contribute to its diffusion.
It is natively available in several Linux and Unix distributions,
as well as on some vendor platforms (SGI, etc.).

`SLOWS` (“*Shallow-water fLOWS*”) is a `C` platform
allowing the simulation of free surface shallow water flows with
friction. Arbitrary bathymetries are allowed, defined either by some
complex piecewise analytical expression or by discrete data; the linear
algebraic systems arising in the implicit phases of `SLOWS` are solved
with the `MUMPS` library.

`Nomesh` is a software allowing the generation of third-order curved simplicial meshes.
Starting from a "classical" mesh with straight elements composed of triangles and/or tetrahedra, we are able
to curve the boundary mesh.
Starting from a mesh with some curved elements, we can verify whether the mesh is valid, meaning that there are no crossing elements
and that all Jacobians are positive. If the curved mesh is not valid, we modify it using linear elasticity equations until a valid curved mesh is obtained.
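
The validity criterion can be illustrated on straight triangles (our own simplified sketch; for curved elements the check must control the sign of the Jacobian of the Bézier map over the whole element, not just at the vertices):

```python
def triangle_jacobian(p0, p1, p2):
    """Jacobian (twice the signed area) of a straight triangle; a
    positive value means the element is correctly oriented and not
    inverted."""
    return ((p1[0] - p0[0]) * (p2[1] - p0[1])
            - (p1[1] - p0[1]) * (p2[0] - p0[0]))

def mesh_is_valid(triangles, points):
    """A mesh is valid when every element has a positive Jacobian."""
    return all(triangle_jacobian(points[a], points[b], points[c]) > 0
               for (a, b, c) in triangles)

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
ok = mesh_is_valid([(0, 1, 2)], pts)        # correctly oriented
flipped = mesh_is_valid([(0, 2, 1)], pts)   # inverted element
```

When such a check fails for a curved mesh, the elasticity-based untangling step mentioned above relocates the high-order nodes until all sampled Jacobians become positive.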

In Computational Fluid Dynamics, interest in embedded boundary methods for the Navier-Stokes equations is increasing, because they simplify the meshing issue, the simulation of multi-physics flows, and the coupling of fluid-solid interactions in situations of large motions or deformations. Nevertheless, an accurate treatment of the wall boundary conditions remains an issue for these methods. In this work we develop an immersed boundary method for unstructured meshes based on a penalization technique, and we use mesh adaptation to improve the accuracy of the method close to the boundary. The idea is to combine the strength of mesh adaptation, which provides an accurate flow description especially when dealing with wall boundary conditions, with the simplicity of embedded grid techniques, which simplify the meshing issue and the wall boundary treatment when combined with a penalization term enforcing the boundary conditions. The bodies are described using a level-set method and are embedded in an unstructured grid. Once a first numerical solution is computed, mesh adaptation based on two criteria (the level-set and the quality of the solution) is performed. The full paper was published in the Journal of Computational Physics in January 2014.

*External contributors.* This work has benefitted from the collaboration with the University of Zurich, and in particular with R. Abgrall.

As discussed in the section "Meshes and scalable discrete data structures", an accurate resolution of time-dependent flows requires a dynamic mesh adaptation procedure which is quite complex and costly, especially when combined with parallel distributed-memory implementations. To alleviate this cost, while still allowing mesh adaptation for time-dependent problems, we have started to look into adaptation techniques which do not involve any re-meshing. In particular, we have studied methods based on continuous mesh deformation. These methods require, at each time step, the solution of a PDE for the mesh as well as for the flow variables. This year we have settled several fundamental questions related to the basic formulation of the method and to its coupling with either implicit or explicit time discretisation methods for the flow variables. Initial applications to free surface flows have been considered, showing the generality and potential of our results .
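As a toy illustration of adaptation without re-meshing, the sketch below equidistributes a monitor function in 1D by relaxing interior node positions, a classical r-adaptation building block. The setup and names are illustrative assumptions, not the team's actual formulation:

```python
# Sketch: 1D r-adaptation by equidistribution of a monitor function.
# Nodes move (no re-meshing) so that omega * dx is roughly constant.
import numpy as np

def equidistribute(x, monitor, iters=200):
    """Relax interior nodes towards equidistribution of `monitor`."""
    x = x.copy()
    for _ in range(iters):
        # Monitor evaluated at cell midpoints (frozen during the sweep)
        w = monitor(0.5 * (x[:-1] + x[1:]))
        # Gauss-Seidel sweep of the discrete (omega x')' = 0
        for i in range(1, len(x) - 1):
            x[i] = (w[i-1] * x[i-1] + w[i] * x[i+1]) / (w[i-1] + w[i])
    return x
```

Each update places a node at a convex combination of its neighbours, so the mesh stays monotone (no tangling); nodes cluster where the monitor is large, e.g. near a steep flow feature.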

This year we have made substantial progress in the understanding of the properties of Boussinesq-type models for near-shore applications. In particular, we have performed a systematic analysis of the nonlinear behaviour of these models in the surf zone, and in particular of their shoaling properties. These properties fundamentally influence the wave breaking process, and thus the impact of the wave on coastal structures. We have clearly identified two families of physical behaviours, associated with a similar formal structure of the equations. This result has been presented in , , and the full study is currently under revision for the Coastal Engineering journal.

In parallel, we have continued the study of the implementation of wave breaking models, comparing several physical criteria for the detection of the beginning and end of the breaking process. So far, we have only tested the so-called hybrid approach, in which the hyperbolic Shallow Water equations are used in breaking regions, and the energy dissipation of breaking waves is modelled by the dissipation of mathematical entropy in shock waves. The work performed complements the initial study performed by M. Kazolea in her PhD, and also proposes new physical detection criteria , (a full paper is in preparation).

Furthermore, we have begun a systematic study of the existence of particular solutions (such as solitary waves) to the different Boussinesq-type models, in order to obtain reference solutions with which to assess the efficiency of our numerical schemes and to perform preliminary simulations.

The last important theoretical brick added this year is the study of fully discrete asymptotic models, obtained by pre-discretizing the two-dimensional incompressible free surface Euler equations with a finite element method, and then performing an asymptotic development (in terms of the classical nonlinearity and dispersion parameters). We have thus obtained a discrete model which, although consistent with a known continuous Boussinesq system, represents a surprisingly improved discrete version of these equations, hardly obtainable by classical discretisation choices.

Besides the modelling effort, we have also started working on real applications. In particular, we have worked on case studies involving harbour dynamics and river hydraulics. In the first case, M. Kazolea has performed a systematic study of the contribution of harbour resonance to the excitation of the Venetian harbour basin of Chania during typical winter storms. Concerning river hydraulics, we have performed a parametric study of the appearance of tidal bores in estuaries, with parameters given by the tide nonlinearity (amplitude) and the friction in the river. Both works will be presented at the next world congress of the International Association for Hydro-Environment Engineering.

*External contributors.* This work has benefitted from the collaboration with the EPOC lab in Bordeaux, and in particular with P. Bonneton.

A discrete equation method (DEM) for the simulation of compressible multiphase flows including real-gas effects has been developed. A reduced five-equation model is obtained starting from the semi-discrete numerical approximation of the two-phase model. A simple procedure is then proposed for using a more complex equation of state, thus improving the quality of the numerical prediction. Classical test cases well known in the literature are performed, showing the strong importance of thermodynamic complexity for a good prediction of the temperature evolution. A computational study of the occurrence of rarefaction shock waves (RSW) in a two-phase shock tube is then presented, with dense vapors of complex organic fluids. Since previous studies have shown that an RSW is relatively weak in a single-phase (vapor) configuration, its occurrence and intensity are investigated considering the influence of the initial volume fraction, the initial conditions and the thermodynamic model . A transition model has also been introduced to account for heat and mass transfer terms; in this way, metastable states have been simulated in cavitating flows. Finally, a semi-intrusive stochastic technique has been formulated for taking into account uncertainties in the simulation of metastable states.

*External contributors.* This work has benefitted from the collaboration with the University of Zurich, and in particular with R. Abgrall.

A novel adaptive strategy for stochastic problems has been developed, inspired by the classical Harten framework. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and handles a large class of problems, from unsteady to discontinuous solutions. Its formulation permits recovering the same results concerning the interpolation theory of the classical multiresolution approach, but with an extension to uncertainty quantification problems. The present strategy permits building numerical schemes with higher accuracy than other classical uncertainty quantification techniques, but with a strong reduction of the numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying ones, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distributions are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two-degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injected uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method .
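The Harten-type refinement criterion underlying this strategy can be illustrated in one stochastic dimension: fine-level values are predicted from the coarse level, and refinement is triggered only where the prediction error (the "detail" coefficient) is large. This is a hedged sketch with simple linear prediction; names are illustrative, and the actual algorithm is far more general:

```python
# Sketch of a Harten-type multiresolution flag in the stochastic dimension:
# refine only where the detail (prediction error) exceeds a tolerance.
import numpy as np

def mr_flag(f, xi_coarse, tol=1e-3):
    """Flag coarse cells whose detail coefficient exceeds tol.

    f:         response evaluated pointwise in the random variable xi
    xi_coarse: sorted coarse-level sample points
    """
    xi_fine = 0.5 * (xi_coarse[:-1] + xi_coarse[1:])          # new midpoints
    predicted = 0.5 * (f(xi_coarse[:-1]) + f(xi_coarse[1:]))  # linear prediction
    detail = np.abs(f(xi_fine) - predicted)
    return detail > tol   # True where the stochastic grid must be refined
```

For a smooth response the details decay rapidly and almost no cells are flagged, while for a discontinuous probability response only the cells straddling the discontinuity are refined, which is the source of the cost reduction mentioned above.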

*External contributors.* This work has benefitted from the collaboration with the University of Zurich, and in particular with R. Abgrall.

Title: TIDES: Robust simulation tools for non-hydrostatic free surface flows

Type: Appel à Projets Recherche du Conseil de la Région Aquitaine

Coordinator: M. Ricchiuto

Other partners: UMR EPOC (P. Bonneton)

Abstract: This project proposes to combine modern high order adaptive finite element techniques with state of the art nonlinear and non-hydrostatic models for free surface waves, to provide an accurate tool for the simulation of near-shore hydrodynamics, with application to the study and prediction of tidal bores. The Garonne river will be used as a case study. This project co-funds (50%) the PhD of A. Filippini.

Since January 2013, the team has been participating in the
C2S@Exa http://

Title: Robust structural Optimization for Design in Industry (Rodin)

Type: FUI

Duration: July 2012 - July 2015

Coordinator: ALBERTELLI Marc (Renault)

Abstract: From the research point of view, the RODIN project will focus on: (1) extending level set methods to nonlinear mechanical or multiphysics models and to complex geometrical constraints, (2) developing algorithms for moving meshes with a possible change of topology, (3) adapting, in a level-set framework, second-order optimization algorithms able to handle a large number of design variables and constraints.

The project will last 3 years and will be supported by a consortium of 7 partners: (1) 2 significant end-users, Renault and EADS, who will provide use-cases reflecting industrial complexity; (2) 3 academic partners, CMAP, the J.-L. Lions laboratory and Inria Bordeaux, who will bring expertise in applied mathematics, structural optimization and mesh deformation; (3) a software editor, ESI Group, who will provide a mechanical software package and will pave the way for industrialization; (4) an SME, Eurodecision, specialized in large-scale optimization.

Title: Maillages adaptatifs pour les interfaces instationnaires avec déformations, étirements, courbures.

Type: ANR

Duration: 48 months

Starting date : 1st Oct 2013

Coordinator: Dervieux Alain (Inria Sophia)

Abstract: Mesh adaptive numerical methods allow computations which are otherwise impossible due to the computational resources required. We address in the proposed research several well-identified obstacles to maintaining high-order convergence for unsteady Computational Mechanics involving moving interfaces separating and coupling continuous media. A priori and a posteriori error analyses of Partial Differential Equations on static and moving meshes will be developed from interpolation error, goal-oriented error, and norm-oriented error. From the minimization of the chosen error, an optimal unsteady metric is defined. The optimal metric is then converted into a sequence of anisotropic unstructured adapted meshes by means of mesh regeneration, deformation, high stretching, and curvature. A particular effort will be devoted to building an accurate representation of physical phenomena involving curved boundaries and interfaces. In association with curved boundaries, part of the studies will address third-order accurate mesh adaptation. Mesh optimality produces a nonlinear system coupling the physical fields (velocities, etc.) and the geometrical ones (unsteady metric, including mesh motion). Parallel solution algorithms for the implicit coupling of these different fields will be developed. Efficiently addressing these issues is a necessary condition for the simulation of a number of challenging physical phenomena related to unsolved or insufficiently solved industrial problems. Non-trivial benchmark tests will be shared by consortium partners and by external attendees of workshops organized by the consortium. The various advances will be used by SME partners and offered on the software market.

Title: Uncertainty quantification For compressible fluid dynamics and Optimisation.

Type: ANR

Duration: 36 months

Starting date : 1st June 2011

Coordinator: Remi Abgrall (Inria Bordeaux Sud-Ouest)

Abstract: This project deals with the simulation and optimization of stochastic flows where the uncertainties can lie both in the data and in the models. The focus is on handling the uncertainties coming from the turbulence models or the thermodynamic models in dense-gas flows. Since the thermodynamic models for dense-gas flows are not well known, it is mandatory to compute the probability density functions of some quantities of interest starting from experimental data. Several methods have been developed for both reducing the global computational cost and increasing the accuracy of the statistics computation.

Title: Tsunamis in the Atlantic and the English ChaNnel: Definition of the Effects through numerical Modeling (TANDEM)

Type: PIA - RSNR (Investissement d'Avenir, “Recherches en matière de Sûreté Nucléaire et Radioprotection”)

Duration: 48 months

Starting date : 1st Jan 2014

Coordinator: H. Hebert (CEA)

Abstract: TANDEM is a project dedicated to the appraisal of coastal effects due to tsunami waves on the French coastlines, with a special focus on the Atlantic and Channel coastlines, where French civil nuclear facilities have been operated for about 30 years. As identified in the RSNR call, this project aims at drawing conclusions from the catastrophic 2011 tsunami, in the sense that it will allow the partners, together with a Japanese research partner, to design, adapt and check numerical methods of tsunami hazard assessment against the outstanding observation database of the 2011 tsunami. These validated methods will then be applied to define, as accurately as possible, the tsunami hazard for the French Atlantic and Channel coastlines, in order to provide guidance for risk assessment for the nuclear facilities.

Title: On a new mathematical and numerical approach for simulations in coastal engineering

Type: PEPS IDEX-CNRS

Duration: 12 months

Starting date: May 2013

Coordinator: M. Colin

Abstract: The modeling of free surface flows is a major challenge in coastal engineering, and its understanding is crucial if one wants to predict the impact of large-scale phenomena such as tsunami propagation. The aim of this project is to provide pertinent and efficient numerical asymptotic models describing fluid flows, in view of producing a computational platform. We will pay particular attention to scalar models in order to describe wave breaking in the near-shore region. Finally, we will introduce a new method to obtain numerical asymptotic models, which consists in inverting the usual paradigm

**Full models** → **Asymptotic models** → **Numerical scheme**.

Title: Reactive fluid flows with interface: macroscopic models and application to self-healing materials

Type: Project Bordeaux 1

Duration: 36 months

Starting date: September 2014

Coordinator: M. Colin

Abstract: Because of their high strength and low weight, ceramic-matrix composite materials (CMCs) are the focus of active research for aerospace and energy applications involving high temperatures. Though based on brittle ceramic components, these composites are not brittle, due to the use of a fiber/matrix interphase that manages to preserve the fibers from cracks appearing in the matrix. The lifetime-determining part of the material is the fibers, which are sensitive to oxidation; when the composite is in use, it contains cracks that provide a path for oxidation. The obtained lifetimes can be of the order of hundreds of thousands of hours. These time spans make most experimental investigations impractical. The aim of this project is therefore to furnish predictions based on computer models that have to take into account: 1) the multidimensional topology of the composite made up of a woven ceramic fabric; 2) the complex chemistry taking place in the material cracks; 3) the flow of the healing oxide in the material cracks.

Type: COOPERATION

Defi: NC

Instrument: Specific Targeted Research Project

Objectif: NC

Duration: October 2013 - September 2016

Coordinator: SNECMA (France)

Partner: SNECMA SA (FR), AEROTEX UK LLP (UK), AIRBUS OPERATIONS SL (ES), Airbus Operations Limited (UK), AIRCELLE SA (FR), ARTTIC (FR), CENTRO ITALIANO RICERCHE AEROSPAZIALI SCPA (IT), CRANFIELD UNIVERSITY (UK), DEUTSCHES ZENTRUM FUER LUFT - UND RAUMFAHRT EV (DE), EADS DEUTSCHLAND GMBH (DE), ONERA (FR), TECHSPACE AERO SA (BE)

Inria contact: Heloise Beaugendre

Abstract: During the different phases of a flight, aircraft face severe icing conditions. When this ice then breaks away and is ingested through the remainder of the engine and nacelle, it creates multiple damages which have a serious negative impact on operating costs and may also cause incidents. To minimise ice accretion, propulsion systems (engine and nacelle) are equipped with Ice Protection Systems (IPS), which however have performance issues of their own. Design methodologies used to characterise icing conditions are based on empirical methods and past experience. Cautious design margins are used, leading to non-optimised design solutions. In addition, engine and nacelle manufacturers are now limited in the development of their future architecture solutions by a lack of knowledge of icing behaviour within the next generation of propulsive systems, and by newly adopted regulations that require aero engine manufacturers to address an extended range of icing conditions.

In this context, STORM proposes to: characterise ice accretion and release through partial tests; model ice accretion, ice release and ice trajectories; develop validated tools for runback; characterise ice-phobic coatings; and select and develop innovative low-cost and low-energy anti-icing and de-icing systems. Thus, STORM will strengthen the predictability of the industrial design tools and reduce the number of tests needed. It will permit lower design margins for aircraft systems, and thus reduce energy consumption as well as prevent incidents and breakdowns due to icing issues.

Title: Uncertainty quantification and numerical simulation of high Reynolds number flows

International Partner (Institution - Laboratory - Researcher):

Stanford University (USA)

Duration: 2011 - 2016

See also: http://

This research project deals with uncertainty quantification and the numerical simulation of high Reynolds number flows. It represents a challenging study demanding accurate and efficient numerical methods. It involves the Inria team BACCHUS and the groups of Pr. Charbel Farhat from the Department of Aeronautics and Astronautics and Pr. G. Iaccarino from the Department of Mechanical Engineering at Stanford University. The first topic concerns the simulation of flows when only partial information about the physics or the simulation conditions (initial conditions, boundary conditions) is available. In particular, we are interested in developing methods to be used in complex flows where the uncertainties, represented as random variables, can have arbitrary probability density functions. The second topic focuses on the accurate and efficient simulation of high Reynolds number flows. Two different approaches are developed (one relying on the XFEM technology, and one on the Discontinuous Enrichment Method (DEM), with the coupling based on Lagrange multipliers). The purpose of the project is twofold: i) to conduct a critical comparison of the approaches of the two groups (Stanford and Inria) on each topic, in order to create a synergy which will improve the status of our individual research efforts in these areas; ii) to apply the improved methods to realistic problems in high Reynolds number flow.

Title: Advanced Modeling of Shear Shallow Flows for Curved Topography: water and granular flows.

International Partner (Institution - Laboratory - Researcher):

Inria Sophia-Antipolis and University of Nice (France)

Inria Bordeaux and University of Bordeaux (France)

University of Marseille (France)

National Cheng Kung University, Tainan, Taiwan

National Taiwan University and Academia Sinica, Taipei, Taiwan

Duration: 2014 - 2016

See also: https://

Our objective is to generalize the promising modeling strategy proposed by G.L. Richard and S.L. Gavrilyuk (2012) to genuinely 3D shear flows, and also to take into account the curvature effects related to topography. Special care will be exercised to ensure that the numerical methodology can take full advantage of massively parallel computational platforms and serve as a practical engineering tool. At first we will consider quasi-2D sheared flows on a curved topography defined by an arc, so as to derive a model parameterized by the local curvature and the nonlinear profile of the bed. Experimental measurements and numerical simulations will be used to validate and improve the proposed modeling on curved topography for quasi-2D flows. Thereafter, we will focus on 3D flows, first on simple geometries (inclined plane), before an extension to quadric surfaces, and thus prepare the generalization to complex topography in the context of geophysical flows.

University of Zurich: R. Abgrall. Collaboration on penalisation on unstructured grids and on high order adaptive methods for CFD and uncertainty quantification.

Politecnico di Milano, Aerospace Department (Italy): Pr. A. Guardone. Collaboration on ALE for complex flows (compressible flows with complex equations of state, free surface flows with moving shorelines).

von Karman Institute for Fluid Dynamics (Belgium). With Pr. T. Magin we work on Uncertainty Quantification problems for the identification of inflow conditions of hypersonic nozzle flows. With Pr. H. Deconinck we work on the design of high order methods, including goal-oriented mesh adaptation strategies.

University of Nottingham, Department of Mathematics: Dr. M.E. Hubbard. Collaboration on high order schemes for time-dependent shallow water flows.

Technical University of Crete, School of Production Engineering & Management: Pr. A.I. Delis. Collaboration on high order schemes for depth-averaged free surface flow models, including robust code-to-code validation.

Chalmers University (C. Eskilsson) and Technical University of Denmark (A.-P. Engsig-Karup): our collaboration with Chalmers and with DTU Compute in Denmark aims at developing high order non-hydrostatic finite element Boussinesq-type models for the simulation of floating wave energy conversion devices such as floating point absorbers.

In the context of the `HOSCAR` project jointly funded by Inria
and CNPq, coordinated by Stéphane LANTERI on the French side,
François Pellegrini and Pierre Ramet have participated in a
joint workshop in Petrópolis last September.
A collaboration is envisioned regarding parallel graph partitioning
algorithms for data placement in the context of big data applications.

Prof. B. Muller (Norwegian University of Science and Technology) has been hosted for a sabbatical from January to May. During his stay he has interacted with P. Congedo and M.G. Rodio on the modelling of compressible multiphase flows.

Prof. A.I. Delis (Technical University of Crete) has been hosted during the whole month of September (funded by the mathematics department's invited professors campaign, University of Bordeaux). During his stay he worked with M. Ricchiuto on the setup of a robust code-to-code comparison strategy for long wave run-up.

A. Larat (CNRS, EM2C lab Paris) has been hosted for a month during November and December to work with M. Ricchiuto on space time Galerkin schemes for KdV type equations.

Besides these longer stays, this year we have hosted several of our collaborators, such as K. Aoki (Kyoto University), E. Miglio (Politecnico di Milano), S. Blaise (University of Louvain-la-Neuve), C. Eskilsson (Chalmers University), A.-P. Engsig-Karup (DTU Compute), and many others.

In the context of the associated team AQUARIUS2, three one-month visits have been made during September-October 2014 at Stanford University (Pietro Marco Congedo, Maria Giovanna Rodio, Francesca Fusi).

M. Colin and M. Ricchiuto have co-organized (with P. Lubin, I2M Bordeaux) the event B'Waves 2014, a workshop on breaking waves hosting some of the world's leaders in the modelling and simulation of free surface waves. The event has been hosted by Inria BSO, and a second edition, B'Waves 2016, will be held in Norway (host: Bergen University);

M. Ricchiuto has co-organized (with A. Delis, Technical University of Crete) the mini-symposium "Non-hydrostatic free surface flows: models and methods" at the ECMI 2014 conference (Taormina, June 2014);

C. Dobrzynski is a member of the board of the GAMNI group of SMAI, of which she is secretary. She has participated in the organization of the "journées SMAI-MAIRCI-GAMNI sur le Maillage" (Paris, May 2014);

P.M. Congedo has organized a mini-symposium on Uncertainty Quantification Techniques for Fluid-flow Problems at the ECCOMAS 2014 Conference (Barcelona, July 2014) ;

P.M. Congedo has contributed to the organization of VKI Lecture Series Uncertainty Quantification in Computational Fluid Dynamics - STO-AVT-235 (Brussels, September 2014).

We reviewed papers for top international journals and conferences in the main scientific themes of the team: Journal of Computational Physics, Computer Methods in Applied Mechanics and Engineering, Optimization and Engineering, International Journal of Numerical Methods in Fluids, Physics of Fluids, Journal of Marine Science and Technology, Engineering Applications of Computational Fluid Mechanics, Computers and Fluids, International Journal of Modelling and Simulation in Engineering, Aircraft Engineering and Aerospace Technology, International Journal of Computational Fluid Dynamics, Applications and Applied Mathematics: An International Journal, Discrete and Continuous Dynamical Systems - Series A, Electronic Journal of Differential Equations, Calculus of Variations and Partial Differential Equations, Nonlinear Analysis: Modelling and Control, Advanced Nonlinear Studies, Communications on Pure and Applied Analysis, Communications in Computational Physics.

Licence : Héloïse Beaugendre, Co-responsable des projets TER de première année, 10h, L3, ENSEIRB-MATMÉCA, FRANCE

Licence : Héloïse Beaugendre, Encadrement TER, 16h, L3, ENSEIRB-MATMÉCA, FRANCE

Licence : Mathieu Colin, Analyse Fonctionnelle et Intégration, 54 h, L3, ENSEIRB-MATMÉCA, FRANCE

Licence : Mathieu Colin, TER 32h, L3, ENSEIRB-MATMÉCA, FRANCE

Licence, Mathieu Colin, Analyse, L1, Formation alternée INP, FRANCE

Licence : Pietro Marco Congedo, Fundamentals of Numerical Analysis II, 24h, ENSEIRB-MATMÉCA, France.

Licence : Pietro Marco Congedo, Fundamentals of Fluid Mechanics II, 20h, ENSEIRB-MATMÉCA, France.

Licence : Cécile Dobrzynski, Langages en Fortran 90, 54h, L3, ENSEIRB-MATMÉCA, FRANCE

Licence : Cécile Dobrzynski, Analyse numérique, 24h, L3, ENSEIRB-MATMÉCA, FRANCE

Licence : Mario Ricchiuto, Fundamentals of Numerical Analysis, 24h,ENSEIRB-MATMÉCA, France.

Master : Héloïse Beaugendre, Responsable de filière de 3ème année, 10h, M2, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Approximation numérique et problèmes industriels, 26h, M1, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Outils informatiques pour l'insertion professionnelle, 9h, M2, Université de Bordeaux, France

Master : Héloïse Beaugendre, Calcul Parallèle (OpenMP-MPI), 40h, M1, ENSEIRB-MATMÉCA et Université de Bordeaux, France

Master : Héloïse Beaugendre, Calcul Haute Performance (MPI), 36h, M2, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Calcul Haute Performance et décomposition de domaine, 36h, M2, ENSEIRB-MATMÉCA et Université Bordeaux, France

Master : Mathieu Colin, PDE, 30 H, M1, ENSEIRB-MATMÉCA, FRANCE

Master : Mathieu Colin, EDP approfondies, 36 h, M2, Université de Bordeaux, FRANCE

Master : Mathieu Colin, TER, 12h, M1, ENSEIRB-MATMÉCA, FRANCE

Master : Mathieu Colin, Projet fin d'études, 6h, M2, ENSEIRB-MATMÉCA, FRANCE

Master : Pietro Marco Congedo, Simulation Numérique des écoulements fluides, 20h, M2, ENSEIRB-MATMÉCA, France

Master : Cécile Dobrzynski, Projet fin d'études, 6h, M2, ENSEIRB-MATMÉCA, FRANCE

Master : Cécile Dobrzynski, TER, 16h, M1, ENSEIRB-MATMÉCA, FRANCE

Master : Cécile Dobrzynski, Théorie du maillage, 12h, M2, formation Structures Composites, ENSCBP, FRANCE

Master : Cécile Dobrzynski, Techniques de maillages, 36h, M2, ENSEIRB-MATMÉCA, FRANCE

Master : Mario Ricchiuto, Simulation Numérique des écoulements fluides, 16h, M2, ENSEIRB-MATMÉCA, France

Master : Mario Ricchiuto, Post-graduate course on introduction to CFD, 18h, M2 IAS (Master Spécialisé Ingénierie Aéronautique et Spatiale, http://

PhD in progress : Arpaia Luca, Continuous mesh deformation and coupling with uncertainty quantification for coastal inundation problems, started in March 2014.

PhD in progress : Bellec Stevan, Discrete asymptotic modelling of free surface flows, October 2013.

PhD in progress : Cortesi Andrea, Predictive numerical simulation for rebuilding freestream conditions in atmospheric entry flows, started in October 2014.

PhD in progress : Filippini Andrea, Nonlinear finite element Boussinesq modelling of non-hydrostatic free surface flows, started in February 2014.

PhD in progress: Fusi Francesca, Stochastic robust optimization of a helicopter rotor airfoil, started in October 2013.

PhD in progress: Lin Xi, Asymptotic modelling of incompressible reactive flows in self-healing composites, started in October 2014.

PhD in progress : Nouveau Léo, Adaptation de maillages non structurés anisotropes pour les méthodes de pénalisation en mécanique des fluides compressibles, started in Oct 2013.

PhD in progress: Perrot Gregory, Physico-chemical modelling of self-healing ceramic composites, started in October 2011.

PhD in progress : Peluchon Simon, Approximation numérique et modélisation de l'ablation différentielle de deux matériaux: application à l'ablation liquide. Started in December 2014.

PhD in progress : Viville Quentin, Etude sur les méthodes de pénalisation adaptées aux maillages non structurés fortement anisotropes et utilisation de l'adaptation de maillage, started in Oct 2013.