The focus of our research is the development of novel parallel numerical algorithms and tools for state-of-the-art mathematical models used in complex scientific applications, in particular numerical simulations. The proposed research program is multi-disciplinary by nature, interweaving aspects of applied mathematics and computer science with those of several specific applications, such as porous media flows, elasticity, and wave propagation in multi-scale media.

Our first objective is to develop numerical methods and tools for complex scientific and industrial applications that enhance their scalable execution on the emerging heterogeneous, hierarchical architectures of massively parallel machines. Our second objective is to integrate the novel numerical algorithms into a middle layer that hides, as much as possible, the complexity of massively parallel machines from their users.

The research described here is directly relevant to several steps of the numerical simulation chain. Given a numerical simulation expressed as a set of differential equations, our research focuses on mesh generation methods for parallel computation, novel numerical algorithms for linear algebra, and algorithms and tools for their efficient and scalable implementation on high performance computers. The validation and exploitation of the results will be performed with collaborators from the application fields and will rely on existing tools. In summary, the topics studied in our group are the following:

Numerical methods and algorithms

Mesh generation for parallel computation

Solvers for numerical linear algebra

Computational kernels for numerical linear algebra

Validation on numerical simulations

Among the engineering, research, and teaching communities, there is a strong demand for simulation frameworks that are simple to install and use, efficient, sustainable, and that solve complex problems efficiently and accurately when no dedicated tools or codes are available. In our group we develop FreeFem++ (see http://

getting a quick answer to a specific problem,

prototyping the resolution of a new complex problem.

The current users of FreeFem++ are mathematicians, engineers, university professors, and students. For these users, installing public libraries such as MPI, MUMPS, Ipopt, BLAS, LAPACK, OpenGL, FFTW, or SCOTCH is in general very difficult. For this reason, the authors of FreeFem++ have created a user-friendly language and, over the years, have enriched its capabilities and provided tools for compiling FreeFem++ so that users do not need special knowledge of computer science. This entails significant work on porting the software to different emerging architectures.

Today, the main components of parallel FreeFem++ are:

1. definition of a coarse grid,

2. splitting of the coarse grid,

3. mesh generation for all subdomains of the coarse grid, and construction of parallel data structures for vectors and sparse matrices from the mesh of each subdomain,

4. call to a linear solver,

5. analysis of the result.

All these components are parallel, except for point (5), which is not in the focus of our research. For the moment, however, the parallel mesh generation algorithm is very simple and not sufficient; for example, it addresses only polygonal geometries. Developing a better parallel mesh generation algorithm is one of the goals of our project. In addition, in the current version of FreeFem++, parallelism is not hidden from the user: it is expressed through direct calls to MPI. Our goal is also to hide all the MPI calls within the specific language part of FreeFem++.

Iterative methods are widely used in industrial applications, and preconditioning is the most important research subject here. Our research considers domain decomposition methods and iterative methods, with the goal of developing solvers that are suitable for parallelism and that exploit the fact that the matrices arise from the discretization of a system of PDEs on unstructured grids.

One of the main challenges that we address is the lack of robustness and scalability of existing methods such as incomplete LU factorizations or Schwarz-based approaches, for which the number of iterations increases significantly with the problem size or with the number of processors. This is often due to the presence of several low-frequency modes that hinder the convergence of the iterative method. To address this problem, we study direction-preserving solvers in the context of multilevel domain decomposition methods with adaptive coarse spaces and multilevel incomplete decompositions. A judicious choice of the directions to be preserved, through filtering or low-rank approximations, allows us to alleviate the effect of low-frequency modes on the convergence.
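The scalability issue can be seen on a toy sketch (not our actual solvers): plain conjugate gradient applied to the 1D Poisson matrix. The iteration count grows with the problem size, which is precisely the behavior that coarse spaces and direction-preserving techniques are designed to cure; the function names here are purely illustrative.

```python
import math

def poisson_matvec(x):
    """y = A x for the 1D Poisson matrix A = tridiag(-1, 2, -1)."""
    n = len(x)
    return [2.0 * x[i]
            - (x[i - 1] if i > 0 else 0.0)
            - (x[i + 1] if i < n - 1 else 0.0) for i in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_iters(n, tol=1e-8, maxit=100000):
    """Number of unpreconditioned CG iterations to reach tol * ||b||."""
    b = [1.0] * n
    x = [0.0] * n
    r = b[:]                     # residual of the zero initial guess
    p = r[:]
    rs = dot(r, r)
    nb = math.sqrt(dot(b, b))
    it = 0
    while math.sqrt(rs) > tol * nb and it < maxit:
        Ap = poisson_matvec(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
        it += 1
    return it

iters_small, iters_large = cg_iters(50), cg_iters(200)
```

Running it shows `iters_large` considerably exceeding `iters_small`: without a preconditioner that captures the low-frequency modes, the iteration count grows with the mesh size.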

We also focus on developing boundary integral equation methods that would be adapted to the simulation of wave propagation in complex physical situations, and that would lend themselves to the use of parallel architectures, which includes devising adapted domain decomposition approaches. The final objective is to bring the state of the art on boundary integral equations closer to contemporary industrial needs.

The design of new numerical methods that are robust and that have well proven convergence properties is one of the challenges addressed in Alpines. Another important challenge is the design of parallel algorithms for the novel numerical methods and the underlying building blocks from numerical linear algebra. The goal is to enable their efficient execution on a diverse set of node architectures and their scaling to emerging high-performance clusters with an increasing number of nodes.

Increased communication cost is one of the main challenges in high performance computing that we address in our research by investigating algorithms that minimize communication, such as communication avoiding algorithms. We propose to integrate the minimization of communication into the algorithmic design of numerical linear algebra problems. This is different from previous approaches, where the communication problem was addressed as a scheduling or a tuning problem. The communication avoiding algorithmic design is an approach developed in our group since 2007 (initially in collaboration with researchers from UC Berkeley and CU Denver). While in the mid term we focus on reducing communication in numerical linear algebra, in the long term we aim to consider the communication problem one level higher, within the parallel mesh generation tool described earlier.

We study the simulation of compositional multiphase flow in porous media with different types of applications, and we focus in particular on reservoir/basin modeling and geological CO2 underground storage. All these simulations are linearized using Newton's method, and at each time step and each Newton step a linear system needs to be solved, which is the most expensive part of the simulation. This application leads to some of the most difficult problems to be solved by iterative methods, because the linear systems arising in multiphase porous media flow simulations accumulate many difficulties. These systems are non-symmetric, involve several unknowns of different nature per grid cell, display strong or very strong heterogeneities and anisotropies, and change during the simulation. Many researchers focus on these simulations, and many innovative techniques for solving linear systems have been introduced while studying them, such as the nested factorization [Appleyard and Cheshire, 1983, SPE Symposium on Reservoir Simulation].
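The Newton linearization mentioned above can be sketched generically (this toy example and its helper names are ours, not the reservoir simulator): each Newton step solves a linear system J(x) δ = -F(x), and in the porous media simulations this linear solve is the dominant cost.

```python
# Generic Newton iteration: at each step, solve J(x) * delta = -F(x).
def newton(F, J, solve, x, tol=1e-12, maxit=50):
    for _ in range(maxit):
        fx = F(x)
        if max(abs(v) for v in fx) < tol:
            break
        delta = solve(J(x), [-v for v in fx])    # the (dominant) linear solve
        x = [xi + di for xi, di in zip(x, delta)]
    return x

# Small nonlinear test problem: x0^2 + x1 = 3, x0 + x1^2 = 5 (root at (1, 2)).
F = lambda x: [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0]
J = lambda x: [[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]]

def solve2x2(A, b):
    # Direct solve by Cramer's rule; the real simulations use iterative solvers.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

root = newton(F, J, solve2x2, [1.5, 1.5])
```

In the actual simulations, `solve` is replaced by a preconditioned iterative method applied to a large, non-symmetric, heterogeneous sparse system.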

The research of F. Nataf on inverse problems is rather new since this activity was started from scratch in 2007. Since then, several papers were published in international journals and conference proceedings. All our numerical simulations were performed in FreeFem++.

We focus on methods related to time reversal techniques. Since the seminal paper [M. Fink et al., Imaging through inhomogeneous media using time reversal mirrors. Ultrasonic Imaging, 13(2):199, 1991], time reversal has been a subject of very active research. The main idea is to take advantage of the reversibility of wave propagation phenomena, as it occurs in acoustics, elasticity, or electromagnetism in a non-dissipative unknown medium, to back-propagate signals to the sources that emitted them. A number of industrial applications have already been developed: touchscreens, medical imaging, non-destructive testing, and underwater communications. The initial experiment was to refocus, very precisely, a recorded signal after passing through a barrier consisting of randomly distributed metal rods. In [de Rosny and Fink. Overcoming the diffraction limit in wave physics using a time-reversal mirror and a novel acoustic sink. Phys. Rev. Lett., 89 (12), 2002], the source that created the signal is time reversed in order to have a perfect time reversal experiment. Since then, numerous applications of this physical principle have been designed; see [Fink, Renversement du temps, ondes et innovation. Ed. Fayard, 2009] or, for numerical experiments, [Larmat et al., Time-reversal imaging of seismic sources and application to the great Sumatra earthquake. Geophys. Res. Lett., 33, 2006] and references therein.
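The back-propagation principle can be demonstrated on a toy 1D wave equation (our own illustrative sketch, unrelated to the cited experiments): with the leapfrog scheme at dx = dt = c = 1, which is exact in 1D, a pulse emitted at one point and recorded at another refocuses at the original source location when the recording is time-reversed and re-emitted.

```python
N, s, r, T = 200, 50, 150, 160   # grid size, source, receiver, record length

def step(prev, curr):
    """One leapfrog step of u_tt = u_xx with dx = dt = c = 1 (exact in 1D)."""
    nxt = [0.0] * N
    for i in range(1, N - 1):
        nxt[i] = curr[i + 1] + curr[i - 1] - prev[i]
    return nxt

# Forward simulation: a displacement pulse emitted at s, recorded at r.
u_prev = [0.0] * N; u_prev[s] = 1.0
u_curr = [0.0] * N; u_curr[s - 1] = 0.5; u_curr[s + 1] = 0.5
rec = [u_prev[r], u_curr[r]]
for _ in range(T - 2):
    u_prev, u_curr = u_curr, step(u_prev, u_curr)
    rec.append(u_curr[r])

# Backward simulation: re-emit the time-reversed recording from r.
v_prev = [0.0] * N
v_curr = [0.0] * N
for n in range(T):
    v_prev, v_curr = v_curr, step(v_prev, v_curr)
    v_curr[r] += rec[T - 1 - n]

focus = v_curr[s]   # the back-propagated field at the original source point
```

After the backward run, the field concentrates at the original source location `s` at the mirrored time, which is the refocusing property the text describes.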

We are interested in the development of fast numerical methods for the simulation of electromagnetic waves in multi-scale situations where the geometry of the medium of propagation may be described through characteristic lengths that are, in some places, much smaller than the average wavelength. In this context, we propose to develop numerical algorithms that rely on simplified models obtained by means of asymptotic analysis applied to the problem under consideration.

Here we focus on situations involving boundary layers and *localized* singular perturbation problems, where wave propagation takes place in media whose geometry or material characteristics are subject to a small-scale perturbation localized around a point, a surface, or a line, but not distributed over a volumetric sub-region of the propagation medium. Although a huge literature is already available on the study of localized singular perturbations and boundary layer phenomena, very few works have proposed efficient numerical methods that rely on asymptotic modeling. This is due to their natural functional framework, which involves singular functions that are difficult to handle numerically. The aim of this part of our research is to develop and analyze numerical methods for singular perturbation problems that are amenable to high-order numerical approximation and robust with respect to the small parameter characterizing the singular perturbation.

We focus on computationally intensive numerical algorithms arising in the data analysis of current and forthcoming Cosmic Microwave Background (CMB) experiments in astrophysics. This application is studied in collaboration with researchers from University Paris Diderot, and the objective is to make available the algorithms to the astrophysics community, so that they can be used in large experiments.

In CMB data analysis, astrophysicists produce and analyze multi-frequency 2D images of the universe as it was at 5% of its current age. The new generation of CMB experiments observes the sky with thousands of detectors over many years, producing overwhelmingly large and complex data sets, which nearly double every year, thereby following Moore's Law. Planck (http://

**FreeFem++** is a PDE solver based on a flexible
language that allows a large number of problems to be expressed
(elasticity, fluids, etc.) with different finite element
approximations on different meshes. There are more than 2000 users,
and the mailing list has 430 members. Among those, we are
aware of at least 10 industrial companies, 8 French and 2
non-French. It is used for teaching at Ecole
Polytechnique, Ecole Centrale, Ecole des Ponts, Ecole des Mines,
University Paris 11, University Paris Dauphine, La Rochelle, Nancy,
Metz, Lyon, etc. Outside France, it is used for example at
universities in Japan (Tokyo, Kyoto, Hiroshima; a FreeFem++ user
guide exists in Japanese), Spain (Sevilla, BCAM; a user guide is
available in Spanish), the UK (Oxford), Slovenia, Switzerland (EPFL,
ETH), and China. Every new version is checked against 350 regression
tests, and we provide rapid correction of reported bugs. FreeFem++
is licensed under the LGPL.

In the project-team we develop a library that integrates the direction-preserving and low-rank approximation preconditioners for both approximate factorizations and domain decomposition-like methods. It will be available through `FreeFem++` and also as a stand-alone library, and we expect to have a first version of this library available in 2014.

**HPDDM** is an efficient implementation of various domain decomposition methods (DDM) such as one- and two-level Restricted Additive Schwarz methods, the Finite Element Tearing and Interconnecting (FETI) method, and the Balancing Domain Decomposition (BDD) method. These methods can be enhanced with deflation vectors computed automatically by the framework using methods developed by members of the team:

Generalized Eigenvalue problems on the Overlap (GenEO), an approach first introduced in the PhD of Nicole Spillane.

Local Dirichlet-to-Neumann operators, an approach first introduced in a paper by Nataf et al. and recently revisited by Conen et al.

This code has been proven to be efficient for solving various elliptic problems, such as scalar diffusion equations and the system of linear elasticity, but also frequency-domain problems like the Helmholtz equation. A comparison with modern multigrid methods can be found in the thesis of Pierre Jolivet.
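The overlapping Schwarz idea behind these methods can be illustrated on a toy problem (a sequential sketch of multiplicative Schwarz, not HPDDM's actual RAS/FETI/BDD implementations): solve the 1D Poisson problem with two overlapping subdomains and exact subdomain solves, and watch the residual contract at each sweep.

```python
def thomas(n, d):
    """Solve tridiag(-1, 2, -1) x = d with the Thomas algorithm."""
    c = [0.0] * n            # modified upper-diagonal coefficients
    x = [0.0] * n
    c[0] = -0.5
    x[0] = d[0] / 2.0
    for i in range(1, n):
        beta = 2.0 + c[i - 1]
        c[i] = -1.0 / beta
        x[i] = (d[i] + x[i - 1]) / beta
    for i in range(n - 2, -1, -1):
        x[i] -= c[i] * x[i + 1]
    return x

def matvec(x):
    n = len(x)
    return [2.0 * x[i]
            - (x[i - 1] if i > 0 else 0.0)
            - (x[i + 1] if i < n - 1 else 0.0) for i in range(n)]

def schwarz_residuals(n=60, overlap=20, sweeps=30):
    """Residual history of multiplicative Schwarz with two overlapping subdomains."""
    half = n // 2
    subdomains = [list(range(0, half + overlap // 2)),
                  list(range(half - overlap // 2, n))]
    b = [1.0] * n
    x = [0.0] * n
    history = []
    for _ in range(sweeps):
        for sub in subdomains:
            r = [bi - yi for bi, yi in zip(b, matvec(x))]
            correction = thomas(len(sub), [r[i] for i in sub])
            for j, i in enumerate(sub):
                x[i] += correction[j]     # exact local solve, then update
        r = [bi - yi for bi, yi in zip(b, matvec(x))]
        history.append(max(abs(v) for v in r))
    return history

res = schwarz_residuals()
```

With a generous overlap the residual contracts by a constant factor per sweep; without a coarse space, this factor deteriorates as the number of subdomains grows, which is what GenEO-type coarse spaces fix.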

HPDDM is a header-only library written in C++11 with MPI and OpenMP for parallelism. While its interface relies on plain old data objects, it requires a modern C++ compiler: g++ 4.7.3 and above, clang++ 3.3 and above, icpc 15.0.0.090 and above. HPDDM has to be linked against BLAS and LAPACK (as found in OpenBLAS, in the Accelerate framework on OS X, in IBM ESSL, or in Intel MKL) as well as a direct solver like MUMPS, SuiteSparse, MKL PARDISO, or PaStiX. At compile time, define one of the preprocessor macros MUMPSSUB, SUITESPARSESUB, MKL_PARDISOSUB, or PASTIXSUB (resp. DMUMPS, DSUITESPARSE, DMKL_PARDISO, or DPASTIX) before including HPDDM.hpp to use the corresponding solver inside each subdomain (resp. for the coarse operator). Additionally, an eigenvalue solver is recommended; there is an existing interface to ARPACK, and other (eigen)solvers can easily be added using the existing interfaces. For building robust two-level methods, an interface with a discretization kernel like FreeFem++ or Feel++ is also needed; it can then be used to provide, for example, the elementary matrices that the GenEO approach requires. As such, HPDDM is not a purely algebraic solver unless only one-level methods are considered. Note that for substructuring methods, this is more of a limitation of the mathematical approach than of HPDDM itself.

We have released a version of FreeFem++ (v 3.33) which introduces new and important features related to high performance computing:

Interface with PETSc library

Interface with HPDDM (see above)

Improved interface with the parallel direct solver MUMPS

This release enables, for the first time, end-users to run the very same code on computers ranging from laptops to clusters and even large-scale computers with thousands of computing nodes.

Our group continues to work on algorithms for dense linear algebra operations that minimize communication. During this year we focused on improving the performance of the communication avoiding QR factorization, as well as on designing algorithms that reduce communication on multilevel hierarchical platforms.
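The communication-avoiding QR idea can be sketched as follows (a serial toy version of TSQR with a depth-one reduction tree; the function names are ours): each block of rows is factored locally, and only the small R factors are combined, so the tall matrix itself is never communicated.

```python
import math, random

def mgs_qr(A):
    """Modified Gram-Schmidt QR of an m x k matrix (list of rows), m >= k."""
    m, k = len(A), len(A[0])
    Q = [row[:] for row in A]            # columns are orthogonalized in place
    R = [[0.0] * k for _ in range(k)]
    for j in range(k):
        R[j][j] = math.sqrt(sum(Q[i][j] ** 2 for i in range(m)))
        for i in range(m):
            Q[i][j] /= R[j][j]
        for l in range(j + 1, k):
            R[j][l] = sum(Q[i][j] * Q[i][l] for i in range(m))
            for i in range(m):
                Q[i][l] -= R[j][l] * Q[i][j]
    return Q, R

def tsqr_R(A, nblocks=4):
    """R factor of a tall-skinny A computed block-wise: only R factors are merged."""
    m = len(A)
    step = m // nblocks
    Rs = []
    for p in range(nblocks):             # "local" QR on each block of rows
        block = A[p * step: m if p == nblocks - 1 else (p + 1) * step]
        Rs.extend(mgs_qr(block)[1])
    return mgs_qr(Rs)[1]                 # combine the stacked small R factors

random.seed(0)
A = [[random.random() for _ in range(3)] for _ in range(40)]
R = tsqr_R(A)
```

Since every reduction step preserves the Gram matrix, the resulting `R` satisfies R^T R = A^T A, exactly like the R factor of a direct QR of `A`, while in the parallel setting each block lives on a different processor and only k x k triangles travel.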

Krylov subspace methods are among the most practical and popular iterative methods today. They are polynomial iterative methods that aim to solve systems of linear equations Ax = b.

Our work focused on the design of robust algebraic preconditioners and domain decomposition methods to accelerate the convergence of iterative methods.

The Helmholtz equation governing wave propagation and scattering phenomena is difficult to solve numerically. Its discretization with piecewise linear finite elements typically results in large linear systems of equations. Inherently parallel domain decomposition methods hence constitute a promising class of preconditioners. An essential element of these methods is a good coarse space. Here, the Helmholtz equation presents a particular challenge, as even slight deviations from the optimal choice of coarse space can be devastating.

Coarse spaces are instrumental in obtaining scalability for domain decomposition methods for partial differential equations (PDEs). However, it is known that most popular choices of coarse spaces perform rather weakly in the presence of heterogeneities in the PDE coefficients, especially for systems of PDEs. In this work, we introduce, in a variational setting, a new coarse space that is robust even in the presence of such heterogeneities. We achieve this by solving local generalized eigenvalue problems in the overlaps of subdomains that isolate the terms responsible for slow convergence. We prove a general theoretical result that rigorously establishes the robustness of the new coarse space and give some numerical examples on two- and three-dimensional heterogeneous PDEs and systems of PDEs that confirm this property.

Multiphase, compositional porous media flow models lead to the solution of highly heterogeneous systems of Partial Differential Equations (PDEs). In this work, we focus on overlapping Schwarz-type methods on parallel computers and on multiscale methods. We recall a coarse space that is robust even when there are such heterogeneities. The two-level domain decomposition approach is compared to multiscale methods.

We studied a spectral problem

We consider an electromagnetic wave propagation problem in harmonic regime in a bounded cavity, in the case where the medium of propagation contains small perfectly conducting inclusions. We prove that the solution to this problem depends continuously on the data in a uniform manner with respect to the size of the inclusions.

We study direct first-kind boundary integral equations arising from transmission problems for the Helmholtz equation with piecewise constant coefficients and Dirichlet boundary conditions imposed on a closed surface. We identify necessary and sufficient conditions for the occurrence of so-called spurious resonances, that is, the failure of the boundary integral equations to possess unique solutions.

Following [A. Buffa and R. Hiptmair, Numer Math, 100, 1–19 (2005)] we propose a modified version of the boundary integral equations that is immune to spurious resonances. Via a gap construction it will serve as the basis for a universally well-posed stabilized global multi-trace formulation that generalizes the method of [X. Claeys and R. Hiptmair, Commun Pure and Appl Math, 66, 1163–1201 (2013)] to situations with Dirichlet boundary conditions.

One of the application domains on which our algorithms are validated is data analysis in astrophysics. Estimation of the sky signal from sequences of time-ordered data is one of the key steps in Cosmic Microwave Background (CMB) data analysis, commonly referred to as the map-making problem. Some of the most popular and general methods proposed for this problem involve solving generalised least squares (GLS) equations with non-diagonal noise weights given by a block-diagonal matrix with Toeplitz blocks. In this work we study new map-making solvers potentially suitable for applications to the largest anticipated data sets. They are based on iterative conjugate gradient (CG) approaches enhanced with novel, parallel, two-level preconditioners (2lvl-PCG). We apply the proposed solvers to examples of simulated, non-polarised and polarised CMB observations and a set of idealised scanning strategies with a sky coverage ranging from nearly a full sky down to small sky patches. We discuss in detail their implementation for massively parallel computational platforms and their performance for a broad range of parameters characterising the simulated data sets. We find that our best new solver can outperform carefully optimised standard solvers as used today by as much as a factor of 5 in terms of the convergence rate and a factor of 4 in terms of the time to solution, and does so without significantly increasing the memory consumption or the volume of inter-processor communication. The performance of the new algorithms is also found to be more stable, robust, and less dependent on the specific characteristics of the analysed data set. We therefore conclude that the proposed approaches are well suited to successfully address the challenges posed by new and forthcoming CMB data sets.

Spherical Harmonic Transforms (SHT) are at the heart of many scientific and practical applications ranging from climate modelling to cosmological observations. In many of these areas new, cutting-edge science goals have recently been proposed, requiring simulations and analyses of experimental or observational data at very high resolutions and of unprecedented volumes. Both of these aspects pose a formidable challenge for the currently existing implementations of the transforms.

ANR-MN (Modèles Numériques) October 2013 - September 2017

The main goal is the methodological and numerical development of a new robust inversion tool, associated with the numerical solution of the electromagnetic forward problem, including the benchmarking of other existing approaches (Time Reverse Absorbing Condition, Method of Small-Volume Expansions, Level Set Method). This project involves the development of a general parallel open source simulation code, based on the high-level integrated development environment of FreeFem++, for modeling the electromagnetic direct problem: the scattering of arbitrary electromagnetic waves in highly heterogeneous media over a wide frequency range in the microwave domain. The first applications considered here will be medical: microwave tomographic images of brain strokes and brain injuries, from both synthetic and experimental data, in collaboration with EMTensor GmbH, Vienna (Austria), an electromagnetic medical imaging company.

Type: COOPERATION

Instrument: Specific Targeted Research Project

Objective: NC

Duration: September 2013 - August 2016

Coordinator: Imec, Belgium

Partners: UA Belgium, USI Switzerland, Intel France, NAG England, UVSQ France, T-Systems SfR Germany, IT4Innovations Czech Republic.

Inria contact: Luc Giraud

Abstract: The goal of this project is to develop novel algorithms and programming models to tackle what will otherwise be a series of major obstacles to using a crucial component of many scientific codes at exascale, namely solvers and their constituents. The results of this work will be combined in running programs that demonstrate the application-targeted use of these algorithms and programming models in the form of proto-applications. The application targeting will be done by an analysis of a representative selection of scientific applications using solvers and/or the constituent parts that we target. The results of the project will be disseminated to the reference application owners through a scientific and industrial board (SIB), and board-partner specific code targeting activities, to help generate momentum behind our approach in the HPC community. The proto-applications will serve as a proof-of-concept, a benchmark for doing machine/software co-design, and as a basis for constructing future exascale full applications. In addition, the use of the SIB is a means to extract the commonalities of a range of HPC problems from different scientific domains and different industrial sectors to be able to concentrate on maximising the impact of the project by improving precisely those parts that are common across different simulation needs.

Alpines role: in charge of the Task "Preconditioners" in the working group focusing on numerical algorithms.

Members of Alpines are part of the International Lab JLPC (United States).

Title: Communication Optimal Algorithms for Linear Algebra

International Partner (Institution - Laboratory - Researcher):

University of California Berkeley (USA)

Duration: 2010 - 2015

See also: https://

Our goal is to continue the COALA associated team, an Inria associate team that focuses on the design and implementation of numerical algorithms for today's large supercomputers formed by thousands of multicore processors, possibly with accelerators. We focus on operations that are at the heart of many scientific applications, such as solving linear systems of equations or least squares problems. The algorithms belong to a new class, referred to as communication avoiding, that provably minimize communication, where communication means the data transferred between levels of the memory hierarchy or between processors in a parallel computer. This research is motivated by studies showing that communication costs can already exceed arithmetic costs by orders of magnitude, and the gap is growing exponentially over time. An important aspect that we consider here is the validation of the algorithms in real applications through our collaborations.

A collaboration focused on the theoretical and numerical analysis for the simulation of wave scattering by means of boundary integral formulation has been in place for several years between Xavier Claeys and the group of Ralf Hiptmair from the Seminar of Applied Mathematics at ETH Zürich.

Joint Laboratory for Petascale Computing, JLPC (United States). We take part in this joint effort, in the numerical libraries aspects of the joint laboratory. We collaborate and interact in particular with B. Gropp, UIUC, and J. Brown and M. Knepley, Argonne.

Visit of Jed Brown, Argonne National Laboratory, 1 week, June 2014, in the context of the JLPC (United States).

Jean-Yves Pallaro, Master 2 student, University of Lille. Jean-Yves worked on the LORASC preconditioner.

Laura Grigori

Date: Aug 2014 - Aug 2015

Institution: University of California Berkeley (USA)

Xavier Claeys, visit to SAM, ETH Zürich, for collaboration with Ralf Hiptmair, 3rd August - 16th August 2014.

Sebastien Cayrols, Visit to UC Berkeley in the context of COALA associated team, December 2014 - April 2015.

Laura Grigori: Co-Chair, Parallel Matrix Algorithms and Applications Workshop, Lugano, Switzerland, June 2014.

Laura Grigori: Program Director of the SIAM SIAG on Supercomputing (SIAM special interest group on supercomputing), January 2014 - December 2015.

Frédéric Hecht: Organizer of the 6th FreeFem++ days (December 2014, Paris).

Frédéric Hecht: Chair of the session Maillage, Horizon Maths 2014, FSMP and IFPen, Rueil-Malmaison.

Laura Grigori: Member of organizing committee, SIAM Conference on Parallel Processing and Scientific Computing, February 2014, Portland, USA.

Laura Grigori: Member of Organizing Committee, Combinatorial Scientific Computing Workshop, Lyon 2014.

Laura Grigori: Chair of Parallel Numerical Algorithms Area, EuroPar Conference September 2014.

Laura Grigori: Member of Program Committee of IEEE International Parallel and Distributed Processing Symposium, IPDPS 2015.

Laura Grigori: Member of Program Committee of HiPC 2014, 19th IEEE Int'l Conference on High Performance Computing.

Laura Grigori: Area editor for Parallel Computing Journal, Elsevier, since June 2013.

Laura Grigori: Member of the editorial board for the SIAM book series Software, Environments and Tools. See
http://

Frédéric Nataf: Member of the editorial board of Journal of Numerical Mathematics since 2012.

Licence : Xavier Claeys, 2MOI1: Orientation et Insertion Professionnelle, 40 hrs of practical sessions, L2 level, Université Pierre et Marie Curie, France

Master : Xavier Claeys, MM009: Informatique de base, 72 hrs of practical sessions in C++ programming, M1 level, Université Pierre et Marie Curie, France

Master : Xavier Claeys, NM406: Résolution des EDP par la méthode des éléments finis, 18 hrs of practical sessions in C++ programming, M2 level, Université Pierre et Marie Curie, France

Master : Xavier Claeys, MM031: Informatique Scientifique, 44 hrs of lectures, M1 level, Université Pierre et Marie Curie, France

Master : Laura Grigori, Course on *Calcul haute performance, algorithmes parallèles d'algèbre linéaire à grande échelle, stabilité numérique*, Master 2 Mathématiques & Applications, UPMC, 24 hours.

Master : Frederic Hecht, Tutorials of practical C++ programming for the course *Informatique de Base*, 36 hours, Master 1 Mathématiques et Applications, UPMC, France

Master : Frederic Hecht, Course on Numerical methods for fluid mechanics, 30 hours, Master 2 Mechanics, UPMC, France.

Master : Frederic Hecht, From PDEs to their resolution by finite element methods, 36 hours per year, Master 2 UPMC, France.

Master : Frederic Nataf, Course on Domain Decomposition Methods, 30 hours, Master 2, UPMC, France.

Master : Frederic Nataf, Course on Domain Decomposition Methods, 15 hours, Master 2, ENSTA, France.

PhD : Pierre Jolivet, defended September 2014, University of Grenoble, advisors F. Nataf, C. Prud'homme, and F. Hecht

PhD : Sophie Moufawad, Enlarged Krylov Subspace Methods and Preconditioners for Avoiding Communication, defended December 2014, UPMC, advisor L. Grigori

PhD : Nicole Spillane, (funded by Michelin), co-advisor F. Nataf

PhD : Sylvain Auliac, defended 2014, UPMC, advisor F. Hecht

PhD : Pierre-Henri Tournier, defended 2015, UPMC, advisor F. Hecht and M. Comte

PhD in progress : Sebastien Cayrols, since October 2013 (funded by Maison de la simulation), advisor L. Grigori.

PhD in progress : Ryadh Haferssas, since October 2013 (funded by Ecole Doctorale, UPMC), advisor F. Nataf

PhD in progress : Mireille El-Haddad, since March 2014 (cotutelle between Université Saint-Joseph de Beyrouth, Lebanon, and UPMC), advisors F. Hecht and T. Sayah

Laura Grigori: Phd thesis, Laurent Berenguer, Rapportrice, Université Claude Bernard, Lyon 1, Mathématiques Appliquées, defended October 2014.

Laura Grigori: HDR habilitation of Clement Pernet, Rapportrice, Université de Grenoble, Informatique, defended November 2014.

F. Nataf: HDR habilitation of Michel Belliard, Rapporteur, Université de Marseille, Applied Mathematics, defended in November 2014.

F. Nataf: HDR habilitation of Marion Darbas, Université de Picardie, defended in December 2014.

F. Nataf: Phd thesis, Abdoulaye Samake, Université de Grenoble, Applied Mathematics, defended in December 2014.

F. Hecht: HDR habilitation of Mme. Naïma Debit, Rapporteur, Université Claude Bernard Lyon I, defended in December 2014.

F. Hecht: Phd thesis, M. Ange Toulougoussou, Université Pierre et Marie Curie, defended in December 2014.

F. Hecht: Phd thesis, M. Jimmy Mullaert, Université Pierre et Marie Curie, defended in December 2014.

F. Hecht: Phd thesis, M. Stéphane Veys, Université Joseph Fourier, Grenoble, defended in November 2014.

F. Hecht: Phd thesis, M. Jeremy Veysset, CEMEF - Mines ParisTech, Sophia Antipolis, defended in November 2014.

F. Hecht: Phd thesis, M. Jad Dakroub, Université Pierre et Marie Curie and Université Saint Joseph de Beyrouth, defended in October 2014.