The main objective of this project is the development of innovative algorithms and efficient software tools for the simulation of complex flow problems. Accurate predictions of physical quantities are of great interest in fluid mechanics, for example in order to analyze instabilities, predict forces acting on a body, estimate the flow through an orifice, or predict thermal conductivity coefficients. Due to the complex and highly nonlinear equations to be solved, it is difficult to know in advance how fine the spatial or temporal resolution should be and how detailed a given physical model has to be represented. We propose to develop a systematic approach to these questions based on auto-adaptive methods.

Most of the physical problems under consideration have a three-dimensional character and involve the coupling of models and widely varying scales. This makes the development of fast numerical methods and their efficient implementation a question of feasibility. Our contributions concern modern discretization methods (high-order and adaptivity) and goal-oriented simulation tools (prediction of physical quantities, numerical sensitivities, and parameter identification). Concrete applications originate from aerodynamics, viscoelastic flows, heat transfer, and porous media.

The goal of the **first phase** of the project is to develop
flow solvers based on modern numerical methods such as high-order
discretization in space and time and self-adaptive
algorithms. Adaptivity based on a posteriori error estimators has
become a new paradigm in scientific computing, first because of
the necessity to give rigorous error bounds, and second because of
the possible speed-up of simulation tools. A systematic approach
to these questions requires an appropriate variational framework
and the development of a posteriori error estimates and adaptive
algorithms, as well as sufficiently general software tools able to
realize these algorithms. To this end we develop a single common
library written in C++ and use concrete applications to study the
benefits and difficulties of these algorithms in the context of
fluid mechanics. The main ingredients of our
numerical approach are adaptive finite element discretizations
combined with multilevel solvers and hierarchical modeling. We
develop different kinds of finite element methods, such as
discontinuous Galerkin (DGFEM) and stabilized finite element
methods (SFEM), based on either continuous or non-conforming
finite element spaces (NCFEM). The availability of such tools is also a
prerequisite for testing advanced physical models, concerning for
example turbulence, compressibility effects, and realistic models
for viscoelastic flows.

The goal of the **second phase** is to tackle questions going
beyond forward numerical simulations: parameter identification,
design optimization, and questions related to the interaction
between numerical simulations and physical experiments. It
appears that many questions in the field of complex flow problems
can neither be solved by experiments nor by simulations alone. In
order to improve the experiment, the software has to be able to
provide information beyond the results of a plain forward simulation.
Here, information on sensitivities with respect to selected
measurements and parameters is required. The parameters could in
practice be as different in nature as a diffusion coefficient and
a velocity boundary condition. It is our long-term objective to
develop the necessary computational framework and to contribute to
the rational interaction between simulation and experiment.

Interdisciplinary collaboration is at the heart of this project. The team consists of mathematicians and physicists, and we are developing collaborations with computer scientists.

First, we describe some typical difficulties in our fields of application which require both the improvement of established methods and the development of new ones.

**Coupling of equations and models**

The general equations of fluid dynamics form a strongly coupled nonlinear system. Its mathematical nature depends on the precise model, but it generally contains hyperbolic, parabolic, and elliptic parts. The spectrum of physical phenomena described by these equations is very large: convection, diffusion, waves... In addition, it is often necessary to couple different models in order to describe different parts of a mechanical system: chemistry, fluid-fluid interaction, fluid-solid interaction...

**Robustness with respect to physical parameters**

The values of physical parameters such as diffusion coefficients and the constants describing different state equations and material laws lead to different behaviour, characterized for example by the Reynolds, Mach, and Weissenberg numbers. Optimized numerical methods are available in many situations, but in some fields of application it remains a challenging problem to develop robust discretizations and solution algorithms.

**Multiscale phenomena**

The inherent nonlinearities lead to an interplay of a wide range of physical modes, well known for example from the study of turbulent flows. Since the resolution of all modes is often out of reach, it is a challenging task to develop numerical methods that are still able to reproduce the essential features of the physical phenomenon under study.

The discontinuous Galerkin method has gained enormous success in CFD due to its flexibility, its links with finite volume methods, and its local conservation properties. In particular, it seems to be the most widely used finite element method for the Euler equations. On the other hand, the main drawback of this approach is the large number of unknowns compared to standard finite element methods. The situation is even worse if one counts the number of nonzero entries of the resulting system matrices. In order to find a more efficient approach, it therefore seems important to study the connections with other finite element methods.

In view of the ubiquitous problem of large Péclet numbers, stabilization techniques have been in use for a long time. They are based either on upwinding or on additional terms in the discrete variational formulation. The drawback of the first technique is a loss of consistency, which generally leads to large numerical diffusion. The grandfather of the second technique is the SUPG/GLS method. Recently, new approaches have been developed which try to avoid the coupling of the different equations through the residuals. In this context we cite LPS (local projection stabilization) and CIP (continuous interior penalty).
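The effect of residual-based stabilization can be illustrated on the simplest model problem. The following standalone sketch (our own illustration, not Concha code) compares plain Galerkin with a SUPG-type scheme for a 1D convection-diffusion equation; for constant coefficients and constant right-hand side, SUPG reduces to the artificial diffusion τb², with the classical choice τ ≈ h/(2|b|):

```python
import numpy as np

def solve_conv_diff_1d(eps, b, n, tau=None):
    """Linear FE for -eps*u'' + b*u' = 1 on (0,1), u(0) = u(1) = 0.

    tau=None gives the plain Galerkin scheme; otherwise the SUPG term
    contributes the artificial diffusion tau*b**2 (constant data, so
    the right-hand side is unchanged)."""
    h = 1.0 / n
    eps_eff = eps + (tau * b**2 if tau is not None else 0.0)
    m = n - 1                       # number of interior nodes
    A = np.zeros((m, m))
    rhs = np.full(m, h)             # load for f = 1
    for i in range(m):
        A[i, i] = 2.0 * eps_eff / h
        if i > 0:
            A[i, i - 1] = -eps_eff / h - b / 2.0   # diffusion + central convection
        if i < m - 1:
            A[i, i + 1] = -eps_eff / h + b / 2.0
    return np.linalg.solve(A, rhs)

# Convection-dominated regime: Galerkin oscillates, SUPG does not.
u_gal = solve_conv_diff_1d(1e-3, 1.0, 50)
u_supg = solve_conv_diff_1d(1e-3, 1.0, 50, tau=0.01)  # tau ~ h/(2|b|)
```

With the cell Péclet number well above one, the Galerkin solution overshoots the maximum principle bound near the outflow layer, while the stabilized solution stays within the physical bounds.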

The construction of finite element methods on quadrilateral and, particularly, hexahedral meshes can be a complicated task; especially the development of mixed and non-conforming methods is an active field of research. The difficulties arise not only from the fact that adequate degrees of freedom have to be found, but also from the non-constancy of the element Jacobians; an arbitrary hexahedron, which we define as the image of the unit cube under a tri-linear transformation, does in general not have planar faces, which implies, for example, that the normal vector is not constant on a face.
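This can be made concrete in a few lines (a standalone illustration): on a bilinear face of a hexahedron with one vertex moved out of plane, the normal computed from the tangent vectors differs from corner to corner:

```python
import numpy as np

# Face of a general hexahedron: the bilinear image of the unit square,
#   x(s,t) = (1-s)(1-t)p0 + s(1-t)p1 + (1-s)t p2 + s t p3.
# If p3 is moved out of the plane of p0, p1, p2 the face is not planar
# and the normal varies over the face.
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
p3 = np.array([1.0, 1.0, 0.5])   # perturbed vertex: non-planar face

def unit_normal(s, t):
    dx_ds = (1 - t) * (p1 - p0) + t * (p3 - p2)   # tangent in s
    dx_dt = (1 - s) * (p2 - p0) + s * (p3 - p1)   # tangent in t
    n = np.cross(dx_ds, dx_dt)
    return n / np.linalg.norm(n)

n00 = unit_normal(0.0, 0.0)
n11 = unit_normal(1.0, 1.0)
# The two unit normals differ: this face has no single normal direction.
```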

In collaboration with Eric Dubach (associate professor at LMAP) and Jean-Marie Thomas (former professor at LMAP) we have built a new class of finite element functions (named pseudo-conforming) on quadrilateral and hexahedral meshes. The degrees of freedom are the same as those of classical iso-parametric finite elements, but the basis functions are defined as polynomials on each element of the mesh. On general quadrilaterals and hexahedra, our method leads to a non-conforming method; in the particular case of parallelotopes, the new finite elements coincide with the classical ones.

The NXFEM (Nitsche eXtended finite element method) is based on a pure variational formulation with standard finite element spaces, which are locally enriched in such a way that an interface not aligned with the underlying mesh can be captured accurately, giving a rigorous formulation of the very popular XFEM. A typical computation for the Stokes problem with varying, piecewise constant viscosity is shown in Figure . This technology opens the door to many applications in the field of fluid mechanics, such as immiscible flows, free surface flows, and so on.

Adaptive finite element methods are becoming a standard tool in
numerical simulations, and their application in CFD is one of the
main topics of Concha. Such methods are based on a posteriori
error estimates of the discretization error avoiding explicit
knowledge of properties of the solution, in contrast to a priori
error estimates. The estimator is used in an adaptive loop by
means of a local mesh refinement algorithm. The mathematical
theory of these algorithms was for a long time limited to the
proof of upper and lower bounds, but has made important
progress in recent years. For illustration, a typical sequence
of adaptively refined meshes is shown in Figure .

The theoretical analysis of mesh-adaptive methods, even in the most standard case of the Poisson problem, is still in its infancy. The first important results in this direction concern the convergence of the sequence of solutions generated by the algorithm (the standard a priori error analysis does not apply, since the global mesh size does not necessarily go to zero). In order to prove convergence, an unavoidable data approximation term has to be treated in addition to the error estimator. These results say nothing about the convergence speed, that is, the number of unknowns required to achieve a given accuracy. Such complexity estimates are the subject of active research, initiated by a first fundamental result in this direction.

Our first contribution to this field has been the introduction of a new adaptive algorithm which makes use of an adaptive marking strategy: it refines according to the data oscillations only if they are larger than the estimator by a certain factor. This algorithm allowed us to prove geometric convergence and quasi-optimal complexity, avoiding the additional iteration used before. We have extended our results to conforming finite elements without inner-node refinement and to mixed finite elements. In this case, a major additional difficulty arises from the fact that, due to the saddle-point formulation, the orthogonality relation known from continuous FEM does not hold. In addition, we have considered the case of incomplete solution of the discrete systems. To this end, we have developed a simple adaptive stopping criterion based on a comparison of the iteration error with the discretization error estimator.
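The marking strategy just described can be sketched as follows. This is an illustration with hypothetical parameter names (kappa for the oscillation factor, theta for the bulk fraction), not the actual Concha implementation:

```python
import numpy as np

def doerfler_mark(indicators, theta):
    """Bulk criterion: smallest set of cells whose squared indicators
    sum to at least theta times the total."""
    order = np.argsort(indicators)[::-1]       # cells by decreasing indicator
    cum = np.cumsum(indicators[order] ** 2)
    k = int(np.searchsorted(cum, theta * cum[-1])) + 1
    return order[:k]

def adaptive_mark(eta, osc, kappa=0.5, theta=0.5):
    """Adaptive marking in the spirit described above: refine according
    to the data oscillations only if they dominate the estimator by the
    factor kappa; otherwise mark by the estimator eta."""
    if np.sum(osc**2) > kappa**2 * np.sum(eta**2):
        return doerfler_mark(osc, theta)
    return doerfler_mark(eta, theta)

eta = np.array([0.1, 0.5, 0.2, 0.05])   # per-cell error indicators
osc = np.array([0.01, 0.02, 0.01, 0.01])  # per-cell data oscillations
marked = adaptive_mark(eta, osc)        # oscillation is small: mark by eta
```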

Goal-oriented error estimation allows error control and adaptivity to be oriented directly toward the computation of physical quantities, such as the drag and lift coefficients or the Nusselt number.
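For a linear variational problem a(u, φ) = f(φ) and a goal functional J, the underlying error identity can be sketched as follows (standard dual-weighted-residual form; z denotes the solution of the dual problem and i_h an interpolation operator):

```latex
% Dual problem: find z such that  a(\varphi, z) = J(\varphi)  for all \varphi.
% Residual of the discrete solution u_h:
%   \rho(u_h)(\varphi) := f(\varphi) - a(u_h, \varphi).
% Galerkin orthogonality allows subtraction of any interpolant i_h z:
J(u) - J(u_h) \;=\; \rho(u_h)(z) \;=\; \rho(u_h)(z - i_h z)
  \;\approx\; \sum_{K \in \mathcal{T}_h} \eta_K ,
```

where the local contributions η_K, obtained by splitting the weighted residual over the mesh cells, drive the goal-oriented mesh adaptation.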

Aerodynamics provides a challenging field for numerical simulations in fluid dynamics, with a wide range of applications. Robustness of the simulation software with respect to physical parameters such as the Reynolds and Mach numbers is a necessary condition. In general, realistic simulations need to be done in three dimensions, which makes the efficiency of the numerical approach and its implementation a question of feasibility. Therefore, different efforts are made in this project to tackle these subjects.

Hammou El-Otmany started his PhD thesis in October 2012 in our group, supervised by D. Capatina and D. Graebling. The thesis is financed by UPPA (50%) and CDAPP (50%) and concerns the numerical simulation of biological fluid flows. We will focus more particularly on the physical and numerical modeling of red blood cells.

Clinically, some pathologies such as drepanocytosis (sickle cell anemia) are due to the abnormal shape of red blood cells. In the microcirculation, where cells must deform to pass through narrow capillaries, the deformability of individual red blood cells is a major determinant of resistance to flow.

The goal is twofold. On the one hand, we want to propose a realistic model of red blood cells in artery flow by taking into account the membrane's viscoelasticity and thus its deformability. The latter is essentially linked to the cell's structure (i.e. its geometry, membrane properties, and cytoplasmic viscosity); thus structural abnormalities, as found in some haematological disorders, can be expected to affect blood flow in the microcirculation and/or the red cell lifespan.

On the other hand, we want to develop an efficient and stable numerical method to treat the coupling between the different models involved: Navier-Stokes for the matrix (blood) and for the cytoplasm (interior of the cell), and a non-Newtonian fluid (for instance, Giesekus) for the membrane. We will use the NXFEM method to take the interfaces between the fluids into account.

Heat transfer problems involve the coupling of the flow field of the fluid with temperature inside the flow and possibly on the boundary of the flow domain. A typical example of a heat transfer problem is the cooling of a combustion engine, see the project Optimal described in Section .

Turbulent flows are ubiquitous in industrial applications. Direct numerical simulation (DNS), which aims at the complete resolution of the flow field down to the Kolmogorov scale, has historically been limited to very simple geometries. The increase of computational power and the development of specialized numerical methods open the door to a wider range of applications. However, for most applications of practical interest, some kind of turbulence modeling is unavoidable in order to predict averaged values, and commercial software is in general based on such approaches combined with wall laws. In many applications, such as the project Optimal, see Section , the Reynolds number is at an intermediate level, which means that the turbulence is not fully developed and the heuristics behind most turbulence models are questionable. Especially in heat transfer problems, the use of wall laws seems to lower the accuracy of the predicted mean values considerably. In order to improve the computation of such values, we are particularly interested in variational multiscale methods and their relation to stabilized finite element methods.

Flows in fractured porous media are very important in petroleum engineering. They provide a good framework for applying the tools developed in the Concha library, such as the NXFEM method, goal-oriented adaptivity, multiscale coupling of different models, and multilevel solvers.

The objectives of our library Concha are to offer flexible and extensible software with respect to:

Numerical methods and

Physical models.

The aim is to have a flexible code which could easily switch between the different discretizations, in order to provide a toolbox for rapid testing of new ideas.

The software architecture is designed in such a way that a group of core developers can contribute in an efficient manner, and that independent development of different physical applications is possible. Further, in order to accelerate the integration of new members and to provide a basis for our educational purposes (see Section ), the software offers different entrance levels. The basic structure consists of a common block and several special libraries corresponding to the different fields of application described in Sections – : hyperbolic solvers, low-Mach-number flow solvers, DNS, and viscoelastic flows. A more detailed description of each special library may be found below. In order to coordinate the cooperative development of the library, Concha is hosted on the Inria GForge.

We are confronted with heterogeneous backgrounds and levels of involvement of the developers and users. It therefore seems crucial to be able to respond to these different needs. Our aim is to facilitate the development of the library and, at the same time, to make it possible for our colleagues involved in physical modeling to access the functionality of the software with a reasonable investment of time. Two graphical user interfaces have been developed: one for the installation of the library and another for the building and execution of projects. They are based on a common database and scripts written in Python. The scripts can also be launched from a shell. In Figure the user interface of the install tool is shown. The option panel allows the user to choose the components for conditional compilation and the compilation type (debug or release).

The tools offered by this development platform are based on a Python interface to the library, called pyConcha. It offers a common interface based on a plugin system, which allows the development of command-line tools in parallel. This year, the consolidation of the interface part of pyConcha has been an important task. The pyConcha library is now a framework rather than a simple interface to the Concha C++ library. It now allows the creation of plugins, so that each user-programmer can customize pyConcha to his own goals. Previously, two main programs were available: concha-install.py to install the library, and concha-project.py for (semi-)end-users. Both are now plugins of pyConcha and can be launched by pyConcha at startup. A visualization plugin can now be developed independently and launched by pyConcha on demand.
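A plugin mechanism of this kind can be sketched in a few lines. All names here (PluginRegistry, register, launch) are illustrative and do not reflect the actual pyConcha API:

```python
# Minimal sketch of a plugin registry of the kind described above.
# Hypothetical names, not the pyConcha API.
class PluginRegistry:
    def __init__(self):
        self._plugins = {}

    def register(self, name, factory):
        """Each plugin registers a factory under a command name."""
        self._plugins[name] = factory

    def launch(self, name, *args):
        """Plugins are launched on demand, e.g. at startup or from the CLI."""
        if name not in self._plugins:
            raise KeyError(f"unknown plugin: {name}")
        return self._plugins[name](*args)

registry = PluginRegistry()
registry.register("install", lambda: "installing library ...")
registry.register("project", lambda target: f"building project {target}")

print(registry.launch("project", "navier-stokes"))
```

The design choice is that the framework only dispatches by name, so an independently developed plugin (e.g. for visualization) needs no changes to the core to be launched on demand.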

The structure of the pyConcha framework is clearly split into several modules (layers): a Command Line Interface module, a Graphical User Interface module, and Handler modules, see Figure . A great effort has been made on the internationalization of pyConcha.

Based on the Concha library we have developed a solver for hyperbolic PDEs based on DGFEM. So far, different standard solvers for the Euler equations, such as Lax-Friedrichs, Steger-Warming, and HLL, have been implemented for test problems. A typical example is the scramjet test case shown in Figure .
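As an illustration of the building blocks involved, the following standalone sketch applies a local Lax-Friedrichs (Rusanov) flux to a scalar conservation law in one dimension; the actual solver works on the Euler system in the DG setting:

```python
import numpy as np

def lax_friedrichs_flux(f, uL, uR, lam):
    """Local Lax-Friedrichs (Rusanov) numerical flux for a scalar
    conservation law u_t + f(u)_x = 0; lam bounds |f'(u)|."""
    return 0.5 * (f(uL) + f(uR)) - 0.5 * lam * (uR - uL)

def fv_step(u, f, lam, dt_over_dx):
    """One explicit finite-volume step with periodic boundaries
    (equivalent to lowest-order DG)."""
    uL, uR = u, np.roll(u, -1)               # states at face i+1/2
    F = lax_friedrichs_flux(f, uL, uR, lam)  # flux at face i+1/2
    return u - dt_over_dx * (F - np.roll(F, 1))

# Burgers equation f(u) = u^2/2 with a step initial datum.
u0 = np.where(np.linspace(0, 1, 100, endpoint=False) < 0.5, 1.0, 0.0)
u1 = fv_step(u0, lambda u: 0.5 * u**2, lam=1.0, dt_over_dx=0.4)
```

Because the face fluxes telescope over the periodic grid, the cell average is conserved exactly, and under the CFL restriction the lowest-order scheme is monotone, illustrating the local conservation and maximum-principle properties mentioned above.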

We have started the validation of the implementation of different finite element methods for incompressible flows by means of standard benchmark problems, such as the Stokes flow around a symmetric cylinder and the stationary flow around a slightly non-symmetric cylinder, see Figure .

For the direct numerical simulation of incompressible turbulent flows, we have started to develop a special solver based on structured meshes with a fast multigrid algorithm incorporating projection-like schemes. The main idea is to use non-conforming finite elements for the velocities with piecewise constant pressures, leading to a special structure of the discrete Schur complement when the convection and diffusion terms are treated explicitly.

Validation and comparison with other CFD-software is crucial in order to evaluate the potential of our numerical schemes concerning accuracy, computing time and other practical aspects.

We have compared the Concha library with other software for incompressible and compressible flows. For incompressible flows, we have used a test case proposed by Hulsen and the well-known Schäfer-Turek cylinder benchmark in order to validate the accuracy of the Stokes and Navier-Stokes solvers. The viscoelastic code has been compared with PolyFlow for different test configurations.

The compressible Euler code has been compared to the ELSA software developed by ONERA.

For further comparison and validation, it would be important to consider other commercial and research tools such as: *Aéro3* (Inria-Smash), AVBP (CERFACS), Fluent (ANSYS), and OpenFOAM (OpenCfd).

For this purpose we have proposed the ADT-project VALSE in collaboration with a small company involved in aerodynamics (EPSILON Toulouse), which unfortunately has been rejected by Inria.

The theoretical analysis of mesh-adaptive methods is a very active field of research. We have generalized our previous results concerning the optimality of adaptive methods to nonconforming finite elements. Our results include the error due to the iterative solution of the discrete systems by means of a simple stopping criterion related to the error estimator. The main difficulty was the treatment of the nonconformity, which leads to a perturbation of the orthogonality relation at the heart of the proofs for conforming finite elements. We have been able to extend this result to the Stokes equations, considering different lowest-order nonconforming finite elements on triangular and quadrilateral meshes.

Finally, we have shown optimality of a new goal-oriented adaptive method.

Our theoretical studies, which are motivated by the aim to develop better adaptive algorithms, have been accompanied by software implementation with the Concha library, see Section . It hopefully opens the door to further theoretical and experimental studies.

The original formulation of NXFEM is based on the doubling of elements. In some situations, such as the case of a moving interface, it is computationally more convenient to have a method with local enrichment, as for the standard XFEM. We have developed such an approach based on NXFEM, as well as a hierarchical formulation for a fictitious domain method.

One of the technical difficulties is the simultaneous robustness of the method with respect to the size of the intersection of a mesh cell with the interface and with respect to the discontinuous diffusion parameters. In a note (CRAS 2012) we proposed a modified formulation of NXFEM which achieves this robustness for the Darcy equation.

In connection with the thesis of Nelly Barrau, supervised by Robert Luce and Eric Dubach (LMAP) we have:

implemented numerous geometrical tools in 2D and 3D necessary for the NXFEM methods,

extended the method to

generalized the residual estimator and developed an adaptive process with hanging nodes,

adapted the method to the transport equation.

Mesh adaptivity is nowadays an essential tool in numerical
simulations; in order to achieve it, reliable and efficient, easily
computable *a posteriori* error estimators are needed. Such
estimators obtained by reconstructing locally conservative fluxes in
the Raviart-Thomas finite element space have been widely employed
in recent years.

We have so far considered the convection-diffusion equation and proposed a unified framework for several finite element approximations (conforming, nonconforming, and discontinuous Galerkin). The main advantage of our approach is that, in contrast to existing approaches, it uses only the primal mesh for the flux reconstruction, which offers certain advantages from a computational point of view.

For this purpose, the construction of the

Our first results have already been presented. We are working on the extension to higher-order approximations, to quadrilateral meshes, and to other model problems.

Over the past years, significant advances have been made in developing discontinuous Galerkin finite element methods (DGFEM) for applications in fluid flow and heat transfer. Certain features of the method have made it attractive as an alternative to other popular methods, such as finite volume and conventional finite element methods, in thermal fluid engineering analyses. The DGFEM has been used successfully to solve hyperbolic systems of conservation laws. It makes use of the same local function space as the continuous method, but with relaxed continuity at inter-element boundaries. Since it uses discontinuous piecewise polynomial bases, the discretization is locally conservative, and in the lowest-order case considered here, the method preserves the maximum principle for scalar equations.

One of the challenges in Computational Fluid Dynamics (CFD) is to obtain a solution of the problem under consideration that is as accurate as possible at a very low computational cost. Our principal work is therefore to find relevant and robust strategies and techniques for mesh adaptation, in order to concentrate the computation where there are physical phenomena to capture. From an industrial point of view, the aim is to obtain the stationary solution as quickly as possible with as much accuracy as possible. The main limitation of the available theoretical results in CFD concerns the underlying models: for example, nearly nothing seems to be known for (even linear) first-order systems or for realistic nonlinear equations. We have therefore developed different modern techniques, especially adaptive methods, to tackle this kind of problem in compressible CFD. The strategy is to iteratively improve the quality of the approximate solutions based on computed information (a posteriori error analysis). In this way, a sequence of locally refined meshes is constructed, which allows for better efficiency compared to more classical approaches in the presence of different kinds of singularities. The main goal is to improve the aerodynamic design process for complex configurations by significantly reducing the time from geometry to solution at engineering-required accuracy using high-order adaptive methods.

One of our refinement strategies is based on the creation of hanging nodes, commonly called non-conforming refinement. The figures show the superposition of two meshes: a non-conformingly refined mesh (black) and the initial grid (red) on which the refinement has been performed. They illustrate the technique of cutting the cells where singularities occur in the scramjet inlet.

The mesh adaptation is driven by criteria such as a posteriori error estimates. We have designed criteria based on the jump of physical quantities like density, pressure, entropy, temperature, and Mach number at the inter-element boundaries. This criterion appears to be a very good indicator for mesh adaptation. Figure compares the isolines of the density for the scramjet internal flow at Mach 3 on the initial mesh and on the third and sixth refined meshes; the indicator used is the density jump. It shows the impact on the accuracy of the solution obtained after the sixth refinement iteration.
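In a simplified one-dimensional finite-volume setting, such a jump-based indicator can be sketched as follows (an illustration with a hypothetical threshold parameter, not the Concha implementation):

```python
import numpy as np

def jump_indicator(rho, fraction=0.2):
    """Flag the cells on both sides of any face where the jump of a
    physical quantity (here the density) exceeds a fraction of the
    largest jump in the field."""
    jumps = np.abs(np.diff(rho))            # |[rho]| at interior faces
    flagged = jumps > fraction * jumps.max()
    marked = np.zeros(rho.size, dtype=bool)
    marked[:-1] |= flagged                  # cell left of a flagged face
    marked[1:] |= flagged                   # cell right of a flagged face
    return marked

rho = np.array([1.0, 1.0, 1.01, 2.5, 2.5, 2.5])  # shock between cells 2 and 3
marked = jump_indicator(rho)
```

Only the two cells adjacent to the large density jump are marked, so the refinement concentrates at the discontinuity.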

We have also devised another indicator, which is hierarchical. It
measures the difference of

We compare the computational time between a non-conformingly refined mesh and a globally refined mesh with nearly the same number of cells. The meshes contain quadrangles or triangles. We can observe through the following tables that the adapted meshes, whether triangular or quadrangular, reduce the computational time by a factor of 20 to 90 compared to the globally refined mesh (see Tables 1 and 2).

In Table 1, the gain in time is a factor of 35 for the quadrangular grids and 90 for the triangular ones; in Table 2, the gain is a factor of 18 for the quadrangular grids and 58 for the triangular ones. One can therefore say that adaptive meshes, together with the strategies and techniques we have developed, are efficient and robust in capturing physical phenomena at a very reasonable cost.

In conclusion, the refinement procedure saves computational time while maintaining good accuracy of the computed solution. Our focus is to continue improving our methods and strategies in order to meet the requirements of accuracy, robustness, and efficiency. Several other developments are in progress, such as slope limiters for high-order discontinuous Galerkin methods and low-Mach-number computations with suitable approaches.

Optimal is a research project related to the cooling of the stator of a turbomachine. Both physical experiments and numerical simulations are employed. This project has three industrial partners (Liebherr, Epsilon, and SIBI) and three academic partners (the Universities of Pau, Poitiers, and Toulouse). It has been evaluated by the cluster Aerospace Valley. The PhD thesis of Kossivi Gokpi is financed by this project.

Our contributions concern the numerical simulation of the viscous flow in different geometrical configurations. Comparisons with experimental data will be carried out with respect to the Nusselt number. The computed temperature and streamlines for typical geometries are shown in Figure . In addition, the computed Nusselt numbers for the two configurations and varying inflow velocities are given.

Among the different questions concerning modeling such as the boundary conditions at the in- and outlets and the sensitivity to the geometry, a particular point of interest is the study of compressibility effects.

The experimental part of the project is conducted in collaboration with Mathieu Mory, professor at UPPA; the post-doctoral position of Stéphane Soubacq, who started in October 2009, is financed by the project. The modeling and numerical simulation are done in collaboration with Abdellah Saboni, professor at UPPA.

We have developed specific meshing tools to take into account the interaction between faults and a petroleum reservoir for the company Total. This work was done in collaboration with Eric Dubach and Pierre Puiseux from the LMA.

The LMA has proposed a new Master program, starting in 2007, called MMS (Mathématiques, Modélisation et Simulation), with a focus on analysis, modeling, and numerical computation for PDEs; Robert Luce and R. Becker are jointly responsible for this Master program. The core of this education is formed by lectures in four fields: PDE theory, mechanics, numerical analysis, and simulation tools.

This Master program includes lectures on physical applications; one of the three proposed application fields is CFD. The lectures are provided by members of the project; in particular, the following lectures have been given:

Simulation numérique 1, Robert Luce and Eric Dubach,

Analyse numérique des EDP, R. Becker and D. Capatina,

Simulation numérique 2, Robert Luce and Eric Dubach,

Méthodes numériques pour les EDP, R. Becker,

Mécanique des fluides, R. Becker,

Simulation numérique 3, P. Puiseux

Mécanique des Fluides et Turbulence, Eric Schall, D. Graebling