

Section: New Results

Enlarged Krylov methods

Krylov subspace methods are among the most practical and popular iterative methods today. They are polynomial iterative methods that aim to solve systems of linear equations (Ax=b) by finding a sequence of vectors x1, x2, ..., xk that minimizes some measure of error over the corresponding spaces x0+𝒦i(A,r0), i=1,...,k, where 𝒦i(A,r0) = span{r0, Ar0, A²r0, ..., A^(i-1)r0} is the Krylov subspace of dimension i, x0 is the initial iterate, and r0 is the initial residual. These methods are governed by BLAS1 and BLAS2 operations such as dot products and sparse matrix-vector multiplications. Parallelizing dot products is constrained by communication, since the computation performed is negligible relative to the data exchanged. If the dot products are instead performed by a single processor, communication is needed both before and after the computation. In either case, communication is a bottleneck.

In [21] we introduce a new approach for reducing communication in Krylov subspace methods that consists of enlarging the Krylov subspace by a maximum of t vectors per iteration, based on the domain decomposition of the graph of A. The obtained enlarged Krylov subspace 𝒦t,k(A,r0) is a superset of the Krylov subspace 𝒦k(A,r0), i.e. 𝒦k(A,r0) ⊆ 𝒦t,k+1(A,r0). Thus it is possible to search for the solution of the system Ax=b in 𝒦t,k(A,r0) instead of 𝒦k(A,r0). Moreover, we show that the enlarged Krylov projection subspace methods lead to faster convergence in terms of iterations, and to parallelizable algorithms that require less communication than classical Krylov methods.
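The construction above can be sketched in a few lines: the initial residual r0 is split into t vectors according to a partition of the unknowns into subdomains, and the enlarged subspace is spanned by these split vectors and their images under repeated multiplication by A. The following is a minimal dense-matrix sketch; the function names, the toy partition, and the dense setting are illustrative assumptions, not the implementation of [21].

```python
import numpy as np

def split_residual(r0, domains):
    """T(r0): split r0 into t vectors, one per subdomain (zero elsewhere).

    Hypothetical helper; `domains` is a list of index arrays partitioning
    the unknowns, standing in for a domain decomposition of the graph of A.
    """
    R = np.zeros((r0.size, len(domains)))
    for j, idx in enumerate(domains):
        R[idx, j] = r0[idx]
    return R

def enlarged_krylov_basis(A, r0, domains, k):
    """Columns spanning K_{t,k}(A, r0) = span{T(r0), A T(r0), ..., A^(k-1) T(r0)}."""
    blocks = [split_residual(r0, domains)]
    for _ in range(k - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Toy example: 6x6 SPD tridiagonal matrix, two subdomains of 3 nodes each.
n, k = 6, 3
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r0 = np.arange(1.0, n + 1)
domains = [np.arange(3), np.arange(3, 6)]
Kt = enlarged_krylov_basis(A, r0, domains, k)  # basis of K_{t,k}(A, r0)

# Superset property: every standard Krylov vector A^i r0, i < k, lies in
# the span of the enlarged basis (least-squares residual ~ 0), because
# A^i r0 is the sum of the columns of A^i T(r0).
v = r0.copy()
for _ in range(k):
    coef, *_ = np.linalg.lstsq(Kt, v, rcond=None)
    assert np.linalg.norm(v - Kt @ coef) < 1e-8
    v = A @ v
```

The check at the end illustrates why searching in 𝒦t,k(A,r0) can only help: the classical Krylov subspace is contained in the enlarged one, so the minimization is over a larger space.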