Section: New Results

Communication avoiding algorithms for preconditioned iterative methods

Our group continues to work on algorithms for dense and sparse linear algebra operations that minimize communication, introduced in [1], [4]. An overview of communication avoiding algorithms for dense linear algebra operations is presented in [18]. This year we focused on communication avoiding iterative methods and on designing algorithms for computing rank revealing and low rank approximations of dense and sparse matrices.

Iterative methods are widely used in industrial applications, and in the context of communication avoiding algorithms, our research aims at increasing the scalability of Krylov subspace iterative methods. Indeed, the dot products associated with the orthogonalization of the Krylov subspace, performed at each iteration of a Krylov method, require collective communication among all processors. This collective communication does not scale to a very large number of processors and is thus a main bottleneck for the scalability of Krylov subspace methods. Our research focuses on enlarged Krylov subspace methods, an approach we introduced in recent years [5] that consists of enlarging the Krylov subspace by up to t vectors per iteration, based on a domain decomposition of the graph of the input matrix. The solution of the linear system is sought in the enlarged subspace, which is a superset of the classical subspace. Enlarged Krylov projection subspace methods converge in fewer iterations and lead to parallelizable algorithms with less communication than classical Krylov methods.
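
To illustrate the enlargement step, the following minimal NumPy sketch builds the block of t vectors from a residual r and a given partition of the unknowns into subdomains. The helper name split_residual and the contiguous toy partition are illustrative assumptions (a real partition would come from a graph partitioner), not the implementation of [5].

    import numpy as np

    def split_residual(r, domains):
        """Split a residual vector r into t vectors, one per subdomain.

        `domains` is a list of t index arrays partitioning the unknowns.
        The columns of the returned matrix are nonzero only on their own
        subdomain and together span the enlarged block that replaces the
        single residual of classical CG.
        """
        n, t = r.shape[0], len(domains)
        T = np.zeros((n, t))
        for j, idx in enumerate(domains):
            T[idx, j] = r[idx]
        return T

    # Toy example: 8 unknowns split into t = 2 contiguous subdomains.
    r = np.arange(1.0, 9.0)
    domains = [np.arange(0, 4), np.arange(4, 8)]
    T = split_residual(r, domains)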

In [20] we propose an algebraic method to dynamically reduce the number of search directions during block Conjugate Gradient iterations. Indeed, by monitoring the rank of the optimal step α_k it is possible to detect inexact breakdowns and remove the corresponding search directions. We also propose an algebraic criterion that ensures, in theory, the equivalence between our method with dynamic reduction of the search directions and the classical block Conjugate Gradient method. Numerical experiments show that the method is both stable (the number of iterations with or without reduction is of the same order) and effective (the search space is significantly reduced). We use this approach in the context of enlarged Krylov subspace methods, which reduce communication when implemented on large scale machines. Reducing the number of search directions further decreases the computational cost and the memory usage of these methods.
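
The rank monitoring can be sketched as follows in NumPy: the numerical rank of the small step matrix α_k is estimated with an SVD, and the block of search directions is rotated and truncated accordingly. The helper retained_directions and the relative tolerance test are our illustrative choices, not the exact algebraic criterion of [20].

    import numpy as np

    def retained_directions(P, alpha, tol=1e-10):
        """Reduce the t search directions (columns of P) by monitoring
        the numerical rank of the small t-by-t block step alpha.

        Directions associated with negligible singular values of alpha
        contribute almost nothing to the solution update (an inexact
        breakdown) and can be removed for the next iteration.
        """
        U, s, _ = np.linalg.svd(alpha)
        k = int(np.sum(s > tol * s[0]))  # numerical rank of the step
        # Rotate the directions so the first k columns carry the useful
        # part of the update, then keep only those k columns.
        return P @ U[:, :k]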

In [19] we propose a variant of the GMRES method for solving linear systems of equations with one or multiple right-hand sides. Our method is based on the idea of the enlarged Krylov subspace to reduce communication, and it can be interpreted as a block GMRES method. Hence, we are interested in detecting inexact breakdowns, and we introduce a strategy to perform this detection test. Furthermore, we propose an eigenvalue deflation technique with two benefits. The first is to avoid the plateau of convergence after the end of a cycle in the restarted version. The second is to obtain very fast convergence when solving the same system with different right-hand sides, each given at a different time (useful in the context of the CPR preconditioner). With the same memory cost, we obtain a saving of up to 50% in the number of iterations to reach convergence with respect to the original method.
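
As an illustration of the effect of deflation when a new right-hand side arrives, the following hedged NumPy sketch corrects the initial guess with a Galerkin solve in the span of previously computed approximate eigenvectors W (e.g. Ritz vectors harvested during an earlier solve). This standard projection step only conveys the idea; it is not the precise deflation strategy of [19].

    import numpy as np

    def deflated_initial_guess(A, b, x0, W):
        """Correct x0 for a new right-hand side b using approximate
        eigenvectors stored as the columns of W.

        Solving the small Galerkin problem (W^T A W) y = W^T r0 removes
        the components of the residual along span(W), so the subsequent
        GMRES cycles no longer have to resolve those eigenvalues.
        """
        r0 = b - A @ x0
        y = np.linalg.solve(W.T @ A @ W, W.T @ r0)  # small dense solve
        return x0 + W @ y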