The SAGE team undertakes research on high-performance computing and deals with three subjects:
numerical algorithms, mostly large sparse linear algebra,
large scale high performance computing, involving parallel and grid computing,
environmental and geophysical applications, mostly in hydrogeology.
These three subjects are highly interconnected: the first topic aims at designing numerical algorithms which achieve high performance on parallel and grid architectures and which are applied in geophysical models.
The focus of this topic is the design of efficient and robust numerical algorithms in linear algebra. The main objective is to solve large systems of equations Ax = b, where the matrix A has a sparse structure (many coefficients are zero). High performance computing is required in order to tackle large scale problems. Algorithms and solvers are applied to problems arising from hydrogeology and geophysics.
Direct methods, based on the factorization A = LU, induce fill-in in the matrices L and U. Reordering techniques can be used to reduce this fill-in, and hence memory requirements and floating-point operations.
More precisely, direct methods involve two steps: first factoring the matrix A into the product A = P1 L U P2, where P1 and P2 are permutation matrices, L is lower triangular, and U is upper triangular; then solving P1 L U P2 x = b by processing one factor at a time. The most time consuming and complicated step is the first one, which is further broken down into the following steps:
1. Choose P1 and diagonal matrices D1 and D2 so that P1 D1 A D2 has a ``large diagonal''. This helps to assure accuracy of the final solution.
2. Choose P2 so that the L and U factors of P1 A P2 are as sparse as possible.
3. Perform a symbolic analysis, i.e. identify the locations of the nonzero entries of L and U.
4. Factorize P1 A P2 into L and U.
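The two-phase solve sketched above (factor once, then process one triangular factor at a time) can be illustrated with SciPy's SuperLU interface; this is a stand-in assumption, with `splu` and the COLAMD column ordering replacing whichever solver and fill-reducing reordering are actually used:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assemble a small sparse symmetric tridiagonal system.
n = 100
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.ones(n)

# Factor P1 A P2 = L U; 'COLAMD' chooses the column permutation P2
# so as to limit fill-in in L and U, as discussed above.
lu = spla.splu(A, permc_spec="COLAMD")
x = lu.solve(b)  # triangular solves, one factor at a time

print(np.linalg.norm(A @ x - b))  # residual at machine-precision level
```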
The most efficient iterative methods build a Krylov subspace. If the matrix is symmetric positive definite, the method of choice is the Conjugate Gradient; for symmetric indefinite matrices, there are mainly three methods: SYMMLQ, MINRES and LSQR. For unsymmetric matrices, it is not possible to have both the minimization property and short recurrences. The GMRES method minimizes the residual but must be restarted to limit memory requirements. The BICGSTAB and QMR methods have short recurrences but do not guarantee a decreasing residual. All iterative methods require preconditioning to speed up convergence: the system M^{-1} A x = M^{-1} b is solved, where M is a matrix close to A such that linear systems Mz = c are easy to solve. A family of preconditioners uses incomplete factorizations A = LU + R, where R is implicitly defined by the level of fill-in allowed in L and U. Other types of preconditioners include algebraic multigrid approaches and approximate inverses.
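A minimal sketch of the preconditioned Krylov iteration described above, under the assumption that SciPy's GMRES and incomplete-LU routines stand in for the team's own solvers; the drop tolerance of `spilu` plays the role of the allowed level of fill-in:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
rng = np.random.default_rng(0)
# A diagonally dominant unsymmetric sparse matrix, so GMRES converges.
A = (sp.random(n, n, density=0.05, random_state=rng)
     + 10.0 * sp.eye(n)).tocsc()
b = rng.standard_normal(n)

# Incomplete factorization A = LU + R: R is implicitly defined by the
# entries dropped below drop_tol.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)  # applies M^{-1}

x, info = spla.gmres(A, b, M=M, restart=30)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means convergence
```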
For linear least-squares problems min ||Ax - b||, direct methods are based on the normal equations A^T A x = A^T b, using either a Cholesky factorization of A^T A or a QR factorization of A, whereas the most common Krylov iterative method is LSQR. If the discrete problem is ill-posed, a regularization such as Tychonov or a Truncated Singular Value Decomposition (TSVD) is required. For large matrices, the so-called complete orthogonal factorization is also useful. The first step is a pivoted QR factorization, followed by a second factorization in which U and V are orthogonal matrices and E is a matrix negligible with respect to the chosen threshold. Such a decomposition is a robust rank-revealing factorization and it provides the Moore-Penrose generalized inverse for free. Recently, efficient QR factorization software libraries have become available, but they do not consider column or row permutations based on numerical considerations, since the corresponding orderings often end up with an intractable level of fill-in.
The team studies iterative Krylov methods for regularized problems, as well as rank-revealing QR factorizations.
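The TSVD regularization mentioned above can be shown on a small synthetic example (an illustration only; the matrix and the truncation threshold are made up for the demonstration): singular values below the noise level are discarded before forming the pseudo-inverse solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# A discretely ill-posed matrix: rapidly decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)          # sigma_i = 10^{-i}
A = U @ np.diag(s) @ V.T

x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)  # noisy right-hand side

# TSVD: keep only the singular values above the noise level.
Us, ss, Vts = np.linalg.svd(A)
k = int(np.sum(ss > 1e-7))
x_tsvd = Vts[:k].T @ ((Us[:, :k].T @ b) / ss[:k])

# The naive inverse would amplify the noise enormously;
# the TSVD solution stays bounded.
print(np.linalg.norm(x_tsvd))
```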
Nonlinear methods to solve F(x) = 0 include fixed-point methods, nonlinear stationary methods, the secant method and the Newton method. The team studies Newton-Krylov methods, where the linearized problem at each Newton iteration is solved by a Krylov method, as well as Broyden methods and Proper Orthogonal Decomposition methods.
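A minimal Newton-Krylov run, assuming SciPy's implementation as a stand-in for the team's solvers: the Jacobian is never formed, and each Newton step is solved by a Krylov method (LGMRES by default in `scipy.optimize.newton_krylov`). The test system below is a made-up mildly nonlinear discrete Laplacian.

```python
import numpy as np
from scipy.optimize import newton_krylov

def F(x):
    # Residual of a tridiagonal system with a small quadratic term.
    n = x.size
    r = np.empty(n)
    r[0] = 2 * x[0] - x[1] + 0.1 * x[0] ** 2 - 1.0
    r[1:-1] = 2 * x[1:-1] - x[:-2] - x[2:] + 0.1 * x[1:-1] ** 2 - 1.0
    r[-1] = 2 * x[-1] - x[-2] + 0.1 * x[-1] ** 2 - 1.0
    return r

x0 = np.zeros(50)
sol = newton_krylov(F, x0, f_tol=1e-10)  # inner linear solves are Krylov
print(np.max(np.abs(F(sol))))
```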
Another subject of interest is the parallelization of ODEs in time. The idea is to divide the time interval into subintervals, to apply a time-stepping scheme in each subinterval and to apply a nonlinear correction at both ends of the subintervals.
Let us consider the problem of computing some extremal eigenvalues of a large sparse symmetric matrix A. The Davidson method is a subspace method that builds a sequence of subspaces onto which the initial problem is projected. At every step, approximations of the sought eigenpairs are computed: let V_m be an orthonormal basis of the subspace at step m and let (theta, z) be an eigenpair of the matrix H_m = V_m^T A V_m; then the Ritz pair (theta, x = V_m z) is an approximation of an eigenpair of A. The specificity of the method comes from how the subspace is augmented for the next step. In contrast to the Lanczos method, which is the reference method, the subspaces are not Krylov subspaces: the new vector t = x + y which is added to the subspace is obtained by an acceleration procedure, where the correction y is obtained by an exact Newton step (Jacobi-Davidson method) or an inexact Newton step (Davidson method). The behavior of the Davidson method and the Jacobi-Davidson method are studied and described in the literature. These methods bring a substantial improvement over the Lanczos method when computing the eigenvalues of smallest amplitude. For that reason, the team considered Davidson methods to compute the smallest singular values of a matrix B, by applying them to the matrix B^T B.
In several applications, the eigenvalues of a nonsymmetric matrix are often needed to decide whether they belong to a given part of the complex plane (e.g. the half-plane of complex numbers with negative real part, or the unit disc). However, since the matrix is not exactly known (at best, to the precision of the floating point representation), the result of the computation is not always guaranteed, especially for ill-conditioned eigenvalues. Actually, the problem is not to compute the eigenvalues precisely, but to characterize whether they lie in a given region of the complex plane. For that purpose, the notion of epsilon-spectrum, or equivalently pseudospectrum, was introduced simultaneously by Godunov and Trefethen. Several teams have proposed software to compute pseudospectra, including the SAGE team with the software PPAT, described below.
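The quantity underlying pseudospectrum computations is f(z) = sigma_min(A - zI), whose epsilon-level curves bound the epsilon-pseudospectrum. A brute-force evaluation (path-following tools such as PPAT trace the curves adaptively instead) can be sketched as:

```python
import numpy as np

def f_sigma_min(A, z):
    """Smallest singular value of A - zI, evaluated at a point z."""
    n = A.shape[0]
    return np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]

# A small nonnormal matrix: both eigenvalues are 0, yet f stays small
# far from the spectrum, so the pseudospectrum is large.
A = np.array([[0.0, 100.0],
              [0.0, 0.0]])
print(f_sigma_min(A, 0.0))   # 0: z is an eigenvalue
print(f_sigma_min(A, 1.0))   # small, although z is far from the spectrum
```

For a normal matrix, f(z) is exactly the distance from z to the spectrum; the gap between the two is what makes the pseudospectrum informative for nonsymmetric matrices.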
The focus of this topic is the development of parallel algorithms and software. The objectives are to solve large scale equations in linear algebra and to use high performance computing for dealing with problems arising from hydrogeology and geophysics.
Algorithms have been described above. The team works on the development of parallel software for sparse direct solvers (LU factorization), iterative solvers (PCG, GMRES, subdomain methods) and least-squares solvers (QR factorization). The target is Giga-systems with billions (10^9) of unknowns.
Our applications in hydrogeology and geophysics are in the framework of Partial Differential Algebraic Equations (PDAE). We usually discretize time by a classical one-step or multi-step scheme, and space by a Finite Element Method or a similar method. To get a fully parallel implementation, it is necessary to parallelize the matrix generation and computation. A common approach is to divide the computational domain into subdomains. Once the matrix is computed, it is used in linear solvers. The challenge is to reduce communication between the two phases. Recently, we have also investigated particle methods. Parallel particle trackers are still an area of research.
Our applications are quite often multi-physics models, where nonlinear coupling occurs. Our objective is to design software components which provide great modularity and flexibility for using the models in different contexts. The main numerical difficulty is to design a coupling algorithm with parallel potential. We also investigate the implementation on grid architectures, in collaboration with the Paris Inria team. The challenge is to develop and use a middleware for high-level applications.
In our applications, we use stochastic modelling in order to take geophysical variability into account. From a numerical point of view, this amounts to running multiparametric simulations. The objective is to use the power of grid computing. The target architecture is a heterogeneous collection of parallel clusters, with high-speed networks inside the clusters and slower networks interconnecting the clusters.
The team has chosen a particular domain of application, namely geophysics. In this domain, many problems require solving large scale systems of equations arising from the discretization of coupled models. Emphasis is put on hydrogeology, but the team also investigates geodesy, submarine acoustics, geological rock formation and heat transfer in soil. One of the objectives is to use high performance computing in order to tackle 3D large scale computational domains with complex physical models.
Many environmental studies rely on modelling geo-chemical and hydrodynamic processes. Some issues concern aquifer contamination, underground waste disposal, underground storage of nuclear waste, land-filling of waste, and clean-up of former waste deposits. Simulation of contaminant transport in groundwater is a highly complex problem, governed by coupled nonlinear PDAEs. The main objective of the team is to design and implement an efficient and robust numerical method to solve the systems of nonlinear coupled equations at each time step. The output will be a software running on parallel platforms such as clusters and on experimental computational grids. Simulations of several test cases will assess the performance of the software.
Recent research has shown that rock solid masses are in general fractured and that fluids can percolate through networks of inter-connected fractures. Rock media are thus interesting for water resources as well as for the underground storage of nuclear waste. Fractured media are by nature very heterogeneous and multi-scale, so that homogenisation approaches are not relevant. The team develops a numerical model for fluid flow and contaminant transport in three-dimensional fracture networks.
The kernel of SCILAB includes a special format for sparse matrices and some factorizations as well. Iterative linear solvers (PCG, GMRES, BiCGSTAB, QMR, etc.) for large sparse linear systems have been integrated into the Scilab distribution. SCILIN is a SCILAB toolbox providing preconditioners for these solvers. SCILIN can be downloaded at http://www.irisa.fr/sage/SCILIN/.
PPAT (Parallel PATh following software) is a parallel code, developed by D. Mezher, W. Najem (University of Saint-Joseph, Beirut, Lebanon) and B. Philippe. This tool can follow the contours of a functional mapping the complex plane to the real line. The present version is adapted to determining the level curves of the function f(z) = sigma_min(A - zI), which gives the pseudospectrum of the matrix A.
The algorithm is reliable: it does not assume that the curve has a derivative everywhere. The process is proved to terminate even when taking into account roundoff errors. The structure of the code spawns many independent tasks which provide good efficiency in parallel runs.
The software can be downloaded under the GPL licence from: http://sourceforge.net/projects/ppat.
Gridmesh is an interactive 2D structured mesh generator. A first version has a graphical user interface entirely built with Matlab. A second version is developed in Fortran 95 with the use of the MUESLI library (see below). The Matlab version is friendlier than the F95 one, but is practically limited to moderate mesh sizes. Gridmesh can create and modify a 2D mesh with associated boundary conditions for both the flow and the transport parts. Several numbering schemes can be used, in order to get a more or less banded connectivity matrix. A mesh partition can also be imposed, with an arbitrary number of subdivisions (but this number must be a power of two). A description of this software can be found in the references.
Doing linear algebra with sparse and dense matrices is somewhat difficult in scientific computing. Specific libraries do exist for this area (e.g. BLAS and LAPACK for dense matrices, SPARSKIT for sparse ones), but their use is often tedious, mainly because of the great number of arguments which must be passed. Moreover, classical libraries do not provide dynamic allocation. Lastly, the two types of storage (sparse and dense) are so different that the user must know the storage in advance in order to declare the corresponding numerical arrays correctly.
MUESLI is designed to help in dealing with such structures and it provides the convenience of coding with a matrix-oriented syntax; its aim is therefore to speed up the development process and to enhance portability.
MUESLI is a Fortran 95 library split in two modules:
FML (Fortran Muesli Library) contains all the necessary material to work numerically with a dynamic array (dynamic in size, type and structure), called mfArray.
FGL (Fortran Graphics Library) contains graphical routines (some of them interactive) which use the mfArray objects.
MUESLI includes some parts of the following numerical libraries: Arpack, GSL, HSL, Slatec, SuiteSparse and Triangle. Moreover, it requires some external libraries: zlib, pnglib, hdf5 (with the f90 interface), BLAS and LAPACK.
Linux is the platform which has been used for developing and testing MUESLI. Whereas the FML part (numerical computations) should work on any platform (e.g. Win32, Mac OS X, Unix), the FGL part is intended to be used only with X11 (i.e. under all UNIXes).
MUESLI was first used to build an efficient interactive X11 version of Gridmesh (see above). It is now used by members of the team and by visitors. A user's guide for version 1.0 is available. We plan to register MUESLI with the APP and then to make it available via a specific web page.
When dealing with nonlinear free-surface flows, mixed Eulerian-Lagrangian methods have numerous advantages, because we can follow marker particles distributed on the free surface and then compute the surface position accurately, without the need for interpolation over a grid. Besides, if the liquid velocity is large enough, the Navier-Stokes equations can be reduced to a Laplace equation, which is numerically solved by a Boundary Element Method (BEM); this latter method is very fast and efficient because computations occur only on the fluid boundary. This method is applied to the spreading of a liquid drop impacting a solid wall and to droplet formation at a nozzle; applications arise, among others, in ink-jet printing processes.
The code used (CANARD) has been developed with Jean-Luc Achard (LEGI, Grenoble) over fifteen years and is used today mainly through a collaboration with Carmen Georgescu at UPB (University Politehnica of Bucharest, Romania). Several publications are related to this code.
This software is developed in collaboration with J.-R. de Dreuzy (Geosciences, University of Rennes 1) and A. Beaudoin (University of Le Havre).
Hydrolab aims at modeling flow and transport of solute in highly heterogeneous porous or fractured media. Numerical models currently include steady-state flow in saturated media and transport by advection-diffusion. Physical models can be either a porous medium or a network of fractures. For flow equations, Hydrolab uses a mixed finite element method or a finite volume method, and it includes a particle tracker for transport equations. The software is organized in software components and relies as far as possible on existing free libraries, such as sparse linear solvers. Because the target is large computational domains, the software makes use of high performance computing and all modules have a parallel version. The targets are currently clusters with distributed memory and grid architectures. All modules are written in C++ and use the MPI library for parallel computing. The software is currently implemented on Windows systems and will be ported to Linux systems as well. The objective is to develop a free software available on the Web; it has already been registered on the Inria Gforge, and a second step will be to register the different components with the APP.
We have pursued the work on a parallel version of the GMRES method preconditioned by Multiplicative Schwarz, by testing the parallel codes on large problems. The method appears to be very robust, since convergence occurred almost always at the desired accuracy. In collaboration with M. Sosonkina, from the University of Iowa (USA), we compared our method to the approaches developed in the pARMS software (http://www-users.cs.umn.edu/~saad/software/pARMS/). Only a Schur complement approach was potentially performing better than ours. Further experiments are ongoing.
A graph partitioning algorithm has been designed. It aims at partitioning a sparse matrix into a block-diagonal form such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of the Multiplicative Schwarz preconditioner (EFMS) described in a previous work (see the 2005 activity report). The graph partitioning algorithm partitions the graph of the input matrix into K partitions, such that every partition i has at most two neighbors, i+1 and i-1. First, an ordering algorithm that reduces the matrix profile, such as the reverse Cuthill-McKee algorithm, is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments have been performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach. Further work is going on to improve the efficiency of the block partitioning.
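The first stage described above, a profile-reducing ordering, can be sketched with SciPy's reverse Cuthill-McKee routine (a stand-in for the team's implementation); the reduced profile is what an overlapped block-diagonal partition is then cut from. The scrambled tridiagonal matrix below is a made-up test case:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    """Largest distance of a nonzero entry from the diagonal."""
    coo = M.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

# A tridiagonal (path-graph) matrix whose rows/columns are scrambled.
n = 80
T = sp.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csr")
p = np.random.default_rng(3).permutation(n)
S = T[p][:, p].tocsr()                  # scrambled: large bandwidth

perm = reverse_cuthill_mckee(S, symmetric_mode=True)
R = S[perm][:, perm].tocsr()            # reordered: small profile again

print(bandwidth(S), bandwidth(R))
```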
In a joint work with Nabil Gmati in Tunis, we have studied the convergence of the GMRES method when it is efficiently preconditioned. It is already well known that convergence occurs when the eigenvalues of the transformed system are all included in the disk centered at 1 with unit radius. We have proved that, when p eigenvalues lie outside that disk, they may delay the convergence by at most p steps. That result can be applied to the Schwarz alternating preconditioner and, more generally, to the Multiplicative Schwarz preconditioner.
The Aitken-Schwarz domain decomposition method uses the convergence properties of Schwarz-type domain decomposition methods in order to accelerate the convergence of the solution at the artificial interfaces by means of the Aitken convergence acceleration technique. The generalized Schwarz alternating method (GSAM) has been introduced previously. Its purely linear convergence in the case of linear operators suggests that the convergent sequence of trace solutions at the artificial interfaces can be accelerated by the well-known Aitken convergence acceleration process.
We have faced the problem of extending the Aitken acceleration method to nonuniform meshes. For this purpose, we developed a new, original method to compute the Non Uniform Discrete Fourier Transform (NUDFT) based on the function values at the nonuniform points. Moreover, this technique creates a robust framework for the adaptive acceleration of the Schwarz method, by using an approximation of the error operator at the artificial interfaces based on a posteriori estimates of the behavior of the Fourier modes (preprint submitted).
The target application is flow in heterogeneous porous media; in collaboration with J.-R. de Dreuzy, the structure of the code was modified in order to handle the Aitken-Schwarz domain decomposition.
This work was done in the context of the Grid'5000 project, in collaboration with A. Beaudoin, from the University of Le Havre, and J.-R. de Dreuzy, from the department of Geosciences at the University of Rennes 1. We have compared a direct linear solver (PSPASES, from the University of Minnesota and IBM) and two iterative multigrid linear solvers (SMG/HYPRE and BoomerAMG/HYPRE, from Lawrence Livermore National Laboratory) to compute the flow in a heterogeneous 2D porous medium. Computations were done on a cluster of PCs of the grid at Irisa. Our results show that we are able to deal with very large 2D highly heterogeneous porous media. The direct solver is highly parallel and scalable, but its complexity and memory requirements are prohibitive for very large computational domains. On the other hand, multigrid solvers are not as scalable, but their complexity and memory requirements are kept low. Moreover, BoomerAMG is more efficient than SMG when the heterogeneity increases (paper in preparation).
We have presented an algorithm to compute a rank-revealing sparse QR factorization. First, a sparse QR factorization with no pivoting is performed, which allows us to obtain a sparse upper triangular factor R efficiently. Second, an incremental condition number estimator is used iteratively on the factor R to identify redundant columns. These columns are moved to the end of the matrix, and R is kept in upper triangular form by means of Givens rotations, preserving its sparsity as much as possible. Numerical results have shown the effectiveness of our algorithm (papers in preparation).
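A dense stand-in for the rank-revealing idea (not the team's sparse algorithm, which works incrementally with Givens rotations): column-pivoted QR moves the redundant columns to the end, and the decay of |R[i, i]| reveals the numerical rank.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)
m, n, r = 60, 40, 25
# A matrix of exact rank r, built as a product of two thin factors.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Column-pivoted QR: A[:, piv] = Q R, with |R[i, i]| non-increasing.
Q, R, piv = qr(A, pivoting=True, mode="economic")
diag = np.abs(np.diag(R))
rank = int(np.sum(diag > 1e-10 * diag[0]))
print(rank)
```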
Eigenvalue solvers are described in the literature. By considering the eigenvalue problem as a system of nonlinear equations, it is possible to develop a number of solution schemes which are related to the Newton iteration. For example, to compute eigenvalues and eigenvectors of an n x n matrix A, the Davidson and Jacobi-Davidson techniques construct `good' basis vectors by approximately solving a ``correction equation'' which provides a correction to be added to the current approximation of the sought eigenvector. That equation is a linear system with the residual r of the approximate eigenvector as right-hand side.
In cooperation with Yousef Saad, we have extended this general technique to the ``block'' situation, i.e., the case where a set of p approximate eigenpairs is available, in which case the residual r becomes an n x p matrix. The resulting paper defines two algorithms based on this approach. For symmetric real matrices, the first algorithm converges quadratically and the second cubically. The second part of the same paper considers the class of substructuring methods, such as the Component Mode Synthesis (CMS) and the Automatic Multi-Level Substructuring (AMLS) methods, and views them from the angle of the block correction equation. In particular, this viewpoint allows us to define an iterative version of well-known one-level substructuring algorithms (CMS or one-level AMLS).
In the context of Mohammed Ziani's thesis, we have started to study the convergence of nonlinear solvers on classical physical cases. We start with the well-known Newton algorithm and use a few rules to obtain more robust convergence (Armijo's criterion) as well as faster convergence (forcing terms). We then investigate Broyden's method. This method tries to ``learn'' the Jacobian matrix along the nonlinear iterations, but it requires storing all the so-called Broyden directions in memory, so a classical limited-memory heuristic is often used. The goal is to shrink the memory needed for the Broyden directions, but this new parameter is quite hard to tune. Therefore, we propose a new method, called the auto-adaptive limited-memory Broyden method, which does not have the drawback of an extra parameter. The method starts with only one Broyden direction and increases the size of its space when it detects poor convergence. This need for extra storage is detected when the largest singular value of the discarded Broyden vectors goes above the nonlinear residual.
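The limited-memory idea can be sketched with SciPy's `broyden1`, whose `max_rank` option caps the number of stored Broyden directions; this fixed cap is precisely the parameter that the auto-adaptive variant described above grows on demand instead. The nonlinear system below is a made-up test case:

```python
import numpy as np
from scipy.optimize import broyden1

def F(x):
    # A mildly nonlinear discrete Laplacian, a classical test case.
    r = 2 * x + 0.05 * x ** 3 - 1.0
    r[1:] -= x[:-1]
    r[:-1] -= x[1:]
    return r

# max_rank bounds the memory spent on Broyden directions.
sol = broyden1(F, np.zeros(40), max_rank=10, f_tol=1e-9)
print(np.max(np.abs(F(sol))))
```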
The acceleration of Newton methods by proper orthogonal decomposition of the time-iterate solutions of nonstationary nonlinear problems has been proposed for the lid-driven cavity problem in the stream-function biharmonic formulation. The developed methodology satisfies the constraints of metacomputer/grid architectures with a client-server approach. First results, with modest acceleration, have been obtained on the IPARS code of M. Wheeler of the University of Texas at Austin.
This work was done in the context of the MOMAS GdR, in collaboration with M. Kern (Estime Inria team at Rocquencourt), and in the context of the Andra contract, in collaboration with A. Dimier (Andra).
Reactive transport models are complex nonlinear PDEs, coupling the transport engine with the reaction operator. We consider here chemical reactions at equilibrium. We have compared the different solutions in the literature and we have proposed to use a PDAE (Partial Differential Algebraic Equations) framework, a method of lines and a DAE solver. Contrary to other approaches, we solve neither the nonlinear chemistry equations nor the transport equations alone, but the linearized chemistry equations coupled with the transport equations. We have developed a prototype in Matlab, on a 1D domain, which shows the efficiency of the method (paper in preparation). We have also analyzed the MT3D and TRACES software for transport equations and the PHREEQC software for chemistry equations, in order to use components of these libraries for our global method.
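A minimal method-of-lines sketch in the spirit of the approach above; this is an assumption-laden toy (plain 1D advection-diffusion instead of the coupled chemistry, and SciPy's stiff BDF integrator standing in for the DAE solver):

```python
import numpy as np
from scipy.integrate import solve_ivp

n, L = 100, 1.0
dx = L / (n + 1)
D, v = 1e-2, 0.5          # diffusion and advection coefficients (assumed)

def rhs(t, c):
    # Spatial discretization: central diffusion, upwind advection,
    # Dirichlet boundaries (inflow concentration 1, outflow 0).
    cc = np.concatenate(([1.0], c, [0.0]))
    diff = D * (cc[2:] - 2 * cc[1:-1] + cc[:-2]) / dx**2
    adv = -v * (cc[1:-1] - cc[:-2]) / dx
    return diff + adv

c0 = np.zeros(n)
sol = solve_ivp(rhs, (0.0, 1.0), c0, method="BDF", rtol=1e-6, atol=1e-9)
print(sol.y[:, -1].max(), sol.y[:, -1].min())
```

The semi-discrete system is stiff (the diffusion term scales as 1/dx^2), which is why an implicit integrator is the natural choice, just as a DAE solver is in the coupled setting.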
Three approaches have been developed concerning the solution of large ODE/DAE systems and time domain decomposition. The first one consists in enhancing domain decomposition methods by introducing a definition of a coarse grid based on the relative tolerance of the time integrator; this allows stiff problems to be solved. The second approach consists in an automatic preconditioned Schur domain decomposition to solve the linearized Jacobian problem with no a priori knowledge of the structure of the Jacobian matrix. The third approach combines a time domain decomposition with spectral deferred corrections in order to obtain two-level parallelism, pipelining the iterations to increase the accuracy of the solution.
The rescaling method has been designed for solving differential equations related to evolution problems. Through a change of variables, it generates a sequence of time slices such that the time variable and the solution are reset to zero at the beginning of each slice, and the rescaled solution is controlled by a uniform criterion for ending slices.
This method has the advantage of leading to similar rescaled models on every slice, and it has proved very efficient for solving evolution problems whose solution blows up in finite time.
The sequential implementation of the rescaling method has shown the existence of a relation between the initial values of the successive time slices (the ratio phenomenon). Approximating this relation allows the prediction of these initial values, and hence the start of a parallel-in-time integration through a prediction-correction scheme. This has the double advantage of similarity between the rescaled models on the successive slices and the absence of any singularity on each slice, allowing (due to the uniform criterion for ending slices) the solution to remain sufficiently regular.
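A toy sequential run illustrating the slice criterion and the ratio phenomenon on y' = y^2, y(0) = 1, whose solution 1/(1 - t) blows up at t* = 1 (an illustration only: a fixed growth factor plays the role of the uniform end-of-slice criterion, without the full change of variables):

```python
import numpy as np
from scipy.integrate import solve_ivp

def one_slice(y0, growth=2.0):
    """Integrate y' = y^2 from y0 until the solution reaches growth * y0."""
    event = lambda t, y: y[0] - growth * y0
    event.terminal = True
    sol = solve_ivp(lambda t, y: y**2, (0.0, 1e3), [y0],
                    events=event, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0], sol.y_events[0][0, 0]

t_total, y, sizes = 0.0, 1.0, []
for _ in range(20):
    dt, y = one_slice(y)    # each slice ends on the same growth criterion
    sizes.append(dt)
    t_total += dt

# The slice sizes halve from one slice to the next: the ratio phenomenon.
ratios = np.array(sizes[1:]) / np.array(sizes[:-1])
print(t_total, ratios[-1])
```

For this equation the ratio is exactly 1/2 (each slice covers half the remaining time to blow-up), so the cumulative time converges to t* = 1; predicting the initial values from such ratios is what starts the parallel-in-time scheme.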
The RaPTI algorithm (Ratio-based Parallel Time Integration) starts by running the rescaling method sequentially on a few slices and computing the ratios of the successive initial values, in order to determine how they are related and to predict the following initial values needed to start the prediction-correction scheme.
Unlike other parallel-in-time algorithms, RaPTI does not involve any sequential computation (except for the first slices) and generates time slices whose sizes vary with the behavior of the solution, ensuring similarity between the slices.
This algorithm has first been tested on the reaction-diffusion equation, showing fast convergence.
The rescaling method has then been adapted to oscillatory evolution problems such as those arising from population dynamics, namely the logistic predator-prey model of Lotka-Volterra (with 2 or 3 species). Such models lead to trajectories with an oscillating behavior that approaches a constant. Their ``orbits'' in a phase portrait show inward spirals toward a stable equilibrium point.
After the sequential adaptation of the rescaling method (which again showed a ratio phenomenon), the RaPTI algorithm has been tested successfully, leading to fast convergence.
Then, after a survey of the mathematical models for infectious diseases, the rescaling method has been applied to the classical endemic SIR model (whose behavior is very similar to that of the Lotka-Volterra model). At present, we are investigating the age-structured SIR epidemic model with vertical transmission.
In the fourth part of the IFREMER contract, we consider a new seismic inverse problem: we seek to invert seismic traveltime measurements from an available seismic survey in order to find the underground wave velocity. Our inversion method is based on ray tracing and genetic algorithms.
We studied seismic ray theory, which may be applied to wave propagation problems under some approximations. We developed a code (EIKOLIN) which is more suitable for our application than another existing code available on the Internet (ANRAY). EIKOLIN is computationally inexpensive because only linear interpolations are used (instead of cubic B-splines for ANRAY). This feature lets us carry out the inversion using genetic algorithms, which require a great number of simulations.
This code has been applied to wide-angle seismic data taken from a campaign carried out in Morocco (Ifremer 2002). The first figure shows the wave paths (seismic rays) in a cross section of the underground. These rays are refracted when they cross the interface between the two layers (direct and reflected rays are not considered). Note that the scale is not the same in the x and z directions. In the lower layer, the rays are curved due to the presence of a velocity gradient. The second figure represents the solution of our inverse problem: it shows the velocity distribution in a cross section of the underground.
An SVD-based sensitivity analysis has been carried out. It gives the identifiable parameters of the velocity in the seismic inversion: we found that only the wave velocity in the lower layer and near the interface can be recovered.
A report is related to the previous part of this contract, whereas several publications are related to this work.
This work was done in the context of the STIC/Tunisia project, in collaboration with ENIT. The geoid is the level surface of the earth's gravity field at sea level. That surface is obtained as a correction of a regular surface by fitting existing measurements. The problem ends up as a large, structured generalized least-squares problem. Therefore, we plan to apply our algorithms on QR factorizations. During that year, a Matlab chain of treatments has been developed from existing techniques. Its reliability is presently being tested on public data. One of the goals is to apply it to data from the Tunisian services in order to recover parts of the Tunisian geoid. The main research direction on which we now focus is the determination of an equivalent mass system which can generate a given geoid. That type of inverse problem meets the common interests of the SAGE team and of the LAMSIN team.
This work was done in the context of the STIC/Tunisia project, in collaboration with ENIT. We have designed two different methods for recovering missing data on the heart surface from electrical measurements taken on the body surface. Both methods recover the epicardial potential and flux. One method is based on a self-regularizing, energy-like error functional and uses two direct problems and their adjoints to compute the gradient of the functional. The other method is based on a classical measurement-to-computation misfit function and uses a boundary integral model and a regularized generalized least-squares formulation. Numerical experiments highlight the efficiency and robustness of the two methods (paper in preparation).
This work was done in collaboration with M. Al Ghoul, from the American University of Beirut, Lebanon, in the context of the Sarima project ( ).
The project consists of setting forth a theoretical and numerical model describing the transport and chemical reaction processes taking place in a geological self-organizing system, as an attempt to simulate the observed banding. We then plan to compare the simulation results with novel experiments designed and performed in the laboratory of Professor Rabih Sultan, American University of Beirut. These experiments explore the possible similarities between the well-known Liesegang banding phenomenon in precipitate systems and the stripe formation observed in a large number of rocks. Banded rock formations are thought to arise from cyclic precipitation as mineral-rich water infiltrates a porous rock and reacts to form an insoluble product. Very few attempts to simulate the Liesegang banding phenomenon in rocks, experimentally or in situ, have been reported in the literature.
In this study, we propose to carry out a theoretical simulation of reaction-diffusion in a rock-bearing medium. The coupling of transport (diffusion and flow velocity) to chemical reactions (here precipitation) causes the deposition of the mineral in the form of bands resembling those of a Liesegang pattern. In 2D, concentric rings of the precipitate originating from a central diffusion source are expected to form. The dynamics of this system can be described by coupled reaction-diffusion equations, where the weakly soluble salts are represented by continuous spatio-temporal size distribution functions. The reaction-diffusion equations for the aqueous species are given by conservation equations for the concentrations of these species, using Fick's second law. The global system of Partial Differential Equations is discretised by a Finite Difference Method and an implicit time scheme adapted to stiff systems. The sparse linear systems arising at each time step are solved by a direct method. Simulation results are validated through comparison with experimental results.
We now plan to use a Finite Element Method and to reduce computational time by mesh adaptation methods.
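The structure of the time loop described above (implicit time stepping, finite differences, and a sparse direct solve at each step) can be sketched on a minimal 1D diffusion analogue. The reaction and precipitation terms of the actual model are omitted here; only the diffusive transport from a central source is kept:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# 1D implicit-Euler diffusion: (I - dt*D*Laplacian) c^{n+1} = c^n,
# discretized by finite differences on a uniform grid, solved with a
# sparse direct (LU) method. Parameters are illustrative only.
n, dx, dt, D = 101, 0.01, 1e-4, 1.0
c = np.zeros(n)
c[n // 2] = 1.0                          # central diffusion source

r = D * dt / dx**2
A = diags([-r, 1 + 2 * r, -r], [-1, 0, 1], shape=(n, n), format="csc")
lu = splu(A)                             # factor once, reuse at every time step

for _ in range(50):
    c = lu.solve(c)                      # one implicit time step per solve

print(abs(c.sum() - 1.0) < 1e-4)         # mass approximately conserved: True
```

Factoring the matrix once and reusing the LU factors across time steps is the key saving when the operator does not change between steps, as in this constant-coefficient case.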
This work is done in collaboration with A. Beaudoin, from the University of Le Havre, and J.-R. de Dreuzy, from the department of Geosciences at the University of Rennes 1, in the context of the HYDROLAB project (see , ). We have developed parallel software for simulating flow and solute transport in a 2D rectangle, where the permeability field is highly heterogeneous. The flow module includes problem generation, spatial discretization by a finite volume method using a structured mesh, linear solving, flux computation and visualization. The transport module includes a parallel particle tracker for advection-dispersion. All algorithms are parallelized using a message-passing approach and a subdomain decomposition. We have used this software to run many random simulations and to derive a stochastic analysis of the results. To obtain a well-defined asymptotic regime, we have used very large computational domains and run our simulations on a cluster ( ). We could compute with no ambiguity the longitudinal and transverse dispersion coefficients for large heterogeneities (paper submitted to Water Resources Research).
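The principle of a particle tracker for advection-dispersion can be illustrated by a toy serial random-walk version (the real code is parallel and uses the heterogeneous velocity field, which is not modelled here). The dispersion coefficient is recovered from the growth of the plume variance, var(x) = 2 D t:

```python
import numpy as np

# Random-walk particle tracking for 1D advection-dispersion:
# each particle moves by u*dt (advection) plus a Gaussian step of
# standard deviation sqrt(2*D*dt) (dispersion). Parameters are
# illustrative, not taken from the simulations described above.
rng = np.random.default_rng(3)
n_particles, n_steps, dt = 20000, 200, 0.01
u, D = 1.0, 0.1                     # velocity and dispersion coefficient

x = np.zeros(n_particles)
for _ in range(n_steps):
    x += u * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)

t = n_steps * dt
D_est = x.var() / (2 * t)           # recover D from the plume variance
print(abs(D_est - D) / D < 0.05)    # estimate within 5% of D: True
```

In the stochastic analysis mentioned above, the same variance-growth idea is applied to ensembles of simulations to extract asymptotic longitudinal and transverse coefficients.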
This work was done in collaboration with J.-R. de Dreuzy, from the department of Geosciences at the University of Rennes 1, and with J. Demmel, from the University of California, Berkeley.
We have developed parallel software for simulating flow in a 3D network of interconnected plane fractures; we assume that the matrix (the rock) surrounding the fractures is impervious and that the fractures have a constant thickness. Flow computation includes problem generation, mesh generation, spatial discretization by a mixed finite element method, linear solving, flux computation and visualization. Numerical results show that we are able to deal with complex large 3D networks of fractures (paper in preparation).
During his internship at the University of California, Berkeley, and at Irisa, B. Poirriez has analyzed the matrix structures and studied a computational model, in order to improve performance.
Mohamad Muhhiedine began his PhD thesis in October 2006 on the subject "Numerical simulations of prehistoric fires", co-advised by Ramiro March (ArcheoSciences, Rennes). This project takes place in the archeological/human sciences program "Man and fire: towards a comprehension of the evolution of thermal energy control and its technical, cultural and paleo-environmental consequences". Both physical and numerical approaches are used to understand the functioning mode and the thermal history of the studied structures. We plan to improve an existing numerical code to include heat transfer phenomena taking into account the propagation of a dry/humid front. Modelling structures in three dimensions requires giving up the current method (finite differences) and switching to a finite element method.
Contract with Andra
time: three years from October 2005.
This contract is related to C. de Dieuleveult's PhD thesis. The subject is reactive transport, with application to nuclear waste disposal , . For more details, see sections , .
See http://momas.univ-lyon1.fr/
The working group MOMAS is now led by A. Ern from CERMICS and includes many partners from CNRS, INRIA, universities, CEA, ANDRA, EDF and BRGM. It covers many subjects related to mathematical modeling and numerical simulations for nuclear waste disposal problems. We participate in the project entitled ``Numerical and mathematical methods for reactive transport in porous media'' , . See sections , .
HydroGrid: Coupling codes for flow and solute transport in geological media: a software component approach.
ACI GRID grant, No. 102C07270021319
time: from October 2002 until February 2006
Coordinator: M. Kern, Estime Inria team
Participants: Paris Irisa team, Estime Inria team, Sage Irisa team, IMFS (U. of Strasbourg), CAREN (U. of Rennes 1).
See http://www-rocq.inria.fr/~kern/HydroGrid/HydroGrid.html
The final report of the project was presented in February. We have worked on three applications: saltwater intrusion , reactive transport , and 3D networks of fractures . See sections , . We have developed parallel software components and run experiments on the grid at Irisa . We have also developed the software GRIDMESH for generating a regular mesh . See section .
ACI GRID program GRID'5000, two projects entitled Grille Rennes
time: three years from 2003 and three years from 2004. Coordinator : Y. Jégou, Paris team, INRIA-Rennes.
Our parallel algorithms for solving linear systems were tested on the clusters of the Grid at Rennes , , , , , , . See sections , and .
BQR from the University of Rennes 1
Participants : Sage team and Geosciences department.
time: 2006.
We aim at developing a platform, called Hydrolab, integrating our different modules for simulating flow and solute transport in porous and fractured media . We have organized the software in well-defined components, tested the simulations on both Windows and Linux systems, created a project on the Inria GForge and set up a Subversion repository. See sections , , , , .
IFREMER contracts, No 06/2 210 099
Partners : Irisa, IFREMER
Title : Numerical model for the propagation of elastic waves
time : from January 2006 until March 2007.
This work is done in the context of the ``Contrat de Plan Etat Région Bretagne 2000-2006'' (signed in October 2002 – it contains five parts and is spread out over the period 2002-2007), for the development of new geophysical exploration means.
The first objective of this study was to develop software simulating the propagation of elastic waves in seawater and in the underwater geophysical layers. We have used the code FLUSOL from the INRIA team ONDES. The second objective is to study inverse methods to find layer properties in the ground from acoustic measurements alone, recorded near the sea surface by a ship. The reflection of the wave at each interface allows us to collect information in the fluid and, via appropriate numerical methods, to determine the density and the elasticity of each layer constituting the solid sub-marine soil , , , . See sections , .
ERCIM Working Group, started in 2001.
Title: Matrix Computations and Statistics
Chairmen: B. Philippe (former) and E. Kontoghiorghes (current) (U. Cyprus)
Members: 50 researchers from 15 European countries.
http://www.irisa.fr/sage/wg-statlin/
This working group aims to find new topics of research emerging from some statistical applications which involve the use of linear algebra methods. The members are especially concerned by very large problems that need the design of reliable and fast procedures. High Performance Computing including parallel computing is addressed.
The Sage team was able to hire Petko Yanev as a postdoctoral fellow through the ERCIM fellowship programme. In 2006, the WG met in Salerno in August during its 8th Workshop. A new organization of the Working Group will take place in 2007.
ERCIM Working Group, started in 2001.
Title: Applications of Numerical Mathematics in Science
Chairman: Mario Arioli, RAL.
Members: 27 European research teams.
The Working Group wants to create a forum within ERCIM Institutional Organizations in which a cross fertilization between numerical techniques used in different fields of scientific computing might take place. Thus, the Working Group intends to focus on this underpinning theme of computational and numerical mathematics. In this way, the intention is that any resulting numerical algorithm will achieve wider applicability, greater robustness, and better accuracy.
The team SAGE collaborates with the laboratory LAMSIN at ENIT, Tunis. The collaboration is funded by an agreement between INRIA and the Ministry of University and Research in Tunisia. The title of the activity is ``Generalized Least Squares Problems - Application to the determination of the Geoid of Tunisia''. In that framework, M. Moakher (ENIT) and B. Philippe co-advise A. Abdelmoula's PhD research and collaborate with OTC (Office de Topographie et Cartographie), the Tunisian Department of Topography and Cartography. Also, J. Erhel collaborates with T. N. Baranger, A. Ben Abda and N. Hariga-Tlatli on energy-like error functionals for data completion. See sections , , and .
Title: Numerical simulations in hydrogeology
This 3-year project includes six partners from Rabat, Annaba, Tunis, Naples, Barcelona and Rennes. The project deals with the numerical simulation of groundwater flows and the transport of pollutants. The goal is to organize a network of teams gathering expertise over the whole spectrum of the domain: physical models, mathematical methods, numerical algorithms, and codes. The activity will be organized through a list of cooperative actions defined during the first year. The network will be a training tool for each involved team. The success of the approach should materialize, after three years, in the availability of common codes, in publications and in the ability to access and use computing grids.
A first workshop was organized in Rabat (Sep. 20-22) , , , , . From it, six tasks have been defined on topics including modelling, numerical approximations and inverse problems.
SARIMA project Inria/Ministry of Foreign Affairs
Support to Research Activities in Mathematics and Computer Science in Africa
Partner : CIMPA (International Center for Pure and Applied Mathematics)
Duration : 2004-2007.
The project SARIMA is managed by the ministry of Foreign Affairs. It involves INRIA and CIMPA as financial operators.
The aim of the project is to reinforce the cooperation between French research teams and African and Middle-East ones in mathematics and computer science. The strategy consists in reinforcing existing research teams so that they become true poles of excellence for their topic and their region. A network-based organization should strengthen the individual situation of the groups. Building on the experience of CARI (African Conference on Research in Computer Science) and of CIMPA (International Center for Pure and Applied Mathematics), the initial network includes seven teams (five in French-speaking sub-Saharan countries, two acting for the whole Maghreb, one in Tunisia in applied mathematics and one in Algeria in computer science, and one team in Lebanon).
The activity of the network is managed by the SARIMA GIS (Groupement d'Intérêt Scientifique). In this project, INRIA is responsible for all the visits of African researchers to research groups in France. In 2006, more than 120 researchers (PhD students and researchers) were funded for visits to France of one to six months.
B. Philippe is the coordinator of the project for INRIA and the president of the Groupement d'Intérêt Scientifique which manages the project. The Sage team was able to hire five PhD students from Cameroon, Lebanon, Morocco and Tunisia, thanks to Sarima grants, with advisors in their respective countries. Exchanges could also be organized. See sections , , , , , , .
the SAGE team and the CDCSP (University of Lyon 1) organized a workshop entitled Journées des Doctorants SAGE/CDCSP autour du Calcul scientifique sur architectures parallèles et grilles, May, Irisa. http://www.irisa.fr/sage/SAGE_CDCSP.htm.
The SAGE team organized the fourth edition of the conference Parallel Matrix Algorithms and Applications (PMAA'06, Sept. 6-9) in Rennes. Local organizing committee: É. Canot, J. Erhel, L. Grigori, F. Guyomarc'h, É. Lebret, B. Philippe, P. Yanev. That edition gathered 60 attendees coming from 20 countries. É. Canot was the webmaster: http://pmaa06.irisa.fr. All the abstracts and slides are available at this site. An award for the best student presentation was given. A special issue of Parallel Computing will be devoted to the conference. The guest editors of this special issue are Laura Grigori (guest editor in charge), Bernard Philippe, Ahmed Sameh, Damien Tromeur-Dervout, Marian Vajtersic.
J. Erhel was member of the program committee of Renpar'17, October, Perpignan, France.
B. Philippe and Masha Sosonkina (Ames Lab., Iowa) organized a minisymposium, entitled ``Issues on robustness of Algebraic Parallel Preconditioners'', at the SIAM Conference on Parallel Processing (PP06, Feb. 22-24, San Francisco).
B. Philippe organized the invited session ``Matrix Computations and Statistics'' at the congress COMPSTAT'06 in Rome.
B. Philippe co-organized the workshop of the network HYDRO3+3 which was held in Rabat (Sept. 20-22). The network is sponsored by the programme Euro-Méditerranée of INRIA and is devoted to hydrogeology.
B. Philippe was the chairman of the special week ``High Performance Computing'' organized at Tunis by the LAMSIN (November).
B. Philippe was member of the program committee of the Functional and Numerical Analysis Days, in honor of Michel Crouzeix (June 2-3, Guidel).
B. Philippe was member of the program committee of the 8th workshop of the ERCIM Working Group on Matrix Computations and Statistics (Sep. 2-3, Salerno).
B. Philippe was member of the program committee of the 8th African Conference on Research in Computer Science (CARI06, Nov. 6-9, Cotonou).
B. Philippe was member of the program committee of the 15th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP2007, Feb. 7-9, 2007, Naples).
B. Philippe is one of the four chief-editors of the electronic journal ARIMA.
B. Philippe is member of the editorial board of the International Journal on High Speed Computing (World Scientific Publishing).
J. Erhel is member and secretary of the Comité de Gestion Local of AGOS at INRIA-Rennes.
J. Erhel is member of Comité Technique Paritaire of INRIA.
J. Erhel is member of commission de spécialistes, section 27, of the University of Rennes 1.
F. Guyomarc'h is member of the CUMI (Commission des Utilisateurs de Moyens Informatiques), of INRIA-Rennes.
F. Guyomarc'h is member of commission de spécialistes, section 27, of the University of Rennes 1.
F. Guyomarc'h is responsible for the first year of the DIIC (Diplôme d'Ingénieur en Informatique et Communication) and is a member of the working group for updating the academic plans.
In the International Affairs Department of INRIA, B. Philippe is in charge of the cooperating programmes with scientific teams in Africa and Middle-East countries.
B. Philippe is the INRIA coordinator for the SARIMA project (see ).
B. Philippe is the corresponding person for the agreement between the University of Rennes 1, The University of Reims, the Lebanese University and AUF (Agence Universitaire Francophone) which supports a Master.
É. Canot, C. de Dieuleveult, J. Erhel and S. Zein taught Applied Mathematics (MAP) for DIIC, IFSIC, Rennes 1 (second year). Lecture notes on http://www.irisa.fr/sage/jocelyne
F. Guyomarc'h gave lectures on algorithms (ALG2) for Master (M2-CCI), IFSIC, University of Rennes 1.
F. Guyomarc'h gave lectures on object-oriented programming (PROG2) for Master (M2-CCI), IFSIC, University of Rennes 1 and also in the first year of DIIC.
B. Philippe gave a one-week course in February on Methods for Solving Large Systems, in Beirut, Lebanon (DEA de mathématiques appliquées, co-organized by the Lebanese University, IRISA and University of Reims). Lecture notes on http://www.irisa.fr/sage/jocelyne
B. Philippe gave a 3-hour tutorial on eigenvalue solvers at the Collège Polytechnique in Paris (March) during the session ``Méthodes performantes en algèbre linéaire pour la résolution de systèmes linéaires et le calcul de valeurs propres''.
G. Atenekeng Kahou gave a course on implementation of parallel libraries, with H. Leroy (Irisa), at the workshop on High Performance Computing, ENIT, Tunis, November.
J. Erhel gave a talk at the Festival des Sciences, organized by Rennes Metropole, in October. The conference, with four contributors, was about Un tourbillon d'équations and the talk was entitled Comprendre l'écoulement de l'eau dans les roches grâce à l'informatique.
G. Atenekeng Kahou and J. Erhel: participation in training on scientific writing, Irisa, September.
É. Canot, C. de Dieuleveult, F. Guyomarc'h, B. Philippe, D. Tromeur-Dervout and M. Ziani: participation in Workshop 3+3 on numerical computing for groundwater flows, Rabat (Morocco), September. Communications.
C. de Dieuleveult: participation in training on geochemistry and transport, Andra, Paris, February.
C. de Dieuleveult and S. Zein: participation in the training "Réussir vos interventions en public", Irisa, December.
J. Erhel and B. Philippe: participation in Colloque d'Analyse Numérique CANUM, Guidel, June.
J. Erhel and B. Philippe: participation in Conference on Approximation and Iterative Methods, Lille, June. Invited presentation by J. Erhel.
J. Erhel: participation in ECCOMAS CFD 2006, Egmond aan Zee, The Netherlands, September. Invited communication in a mini-symposium.
C. de Dieuleveult and J. Erhel: participation in working day of GdR Momas, Orsay, October. Communication.
A. Abdelmoula, G. Atenekeng Kahou, J. Erhel, B. Philippe and M. Ziani: participation in Workshop HPC on High Performance Computing at LAMSIN, Tunis, November. Invited talks and communications.
F. Guyomarc'h: participation in ILAS06, Amsterdam (Netherlands), July. Communication.
F. Guyomarc'h: scientific visit to DART team, INRIA-Futurs, Lille, October-December.
M. Muheiddine: attendance at the lecture "Archéosciences" given by R. March, University of Rennes, October.
B. Philippe: participation in International meeting on GRID and parallel computing, Beirut (Lebanon), January. Invited conference.
B. Philippe: participation in SIAM conference on Parallel Processing (PP06), San Francisco (USA), February. Organization of a mini-symposium and communication.
B. Philippe: participation in COMPSTAT'06, Rome (Italy), August.
B. Philippe: participation in ERCIM Working Group Matrix Computations and Statistics, Salerno (Italy), September. Communication.
B. Philippe: participation in CARI, Cotonou (Benin), November.
J. Erhel visited ENIT/LAMSIN in Tunis, during one week, in May, in the context of the STIC grant.
B. Philippe visited ENIT/LAMSIN in Tunis, during one week, in June, in the context of the STIC grant.
The team has invited the following persons:
I. Moukouop, PhD student, University of Yaounde, Cameroon, 3 months, April-June.
M. Sosonkina, University of Iowa, USA, 2 weeks, May.
N. Nassif, American University of Beirut, Lebanon, 1 week, June.
R. Badé, PhD student, University of Tunis, Tunisia, 2 months, June-July.
M. Moakher, University of Tunis, Tunisia, 2 weeks, March and September.
E. Kamgnia, University of Yaounde, Cameroon, 2 months, September-October.