Inria's research teams produce an annual Activity Report presenting their activities and the results of the year. These reports include the team members, the scientific program, the software developed by the team, and the new results of the year. The report also describes the grants, contracts, and dissemination and teaching activities. Finally, the report gives the list of publications of the year.


## Section: Partnerships and Cooperations

### International Initiatives

#### Inria International Labs

In the framework of the Joint Laboratory for Extreme Scale Computing (JLESC), a collaboration between Inria and Argonne National Laboratory, a new joint project studies how lossy compression can be monitored by Krylov solvers to significantly reduce the memory footprint when solving very large sparse linear systems. The resulting solvers will alleviate the I/O penalty paid when running large calculations that use either checkpoint mechanisms to address resiliency or out-of-core techniques to solve huge problems. For the solution of large linear systems of the form $Ax=b$, where $A\in \mathbb{R}^{n\times n}$ and $x, b\in \mathbb{R}^{n}$, Krylov subspace methods are among the most commonly used iterative solvers; they have further been extended to cope with extreme-scale computing by integrating features such as communication hiding in the variants referred to as pipelined Krylov solvers [7]. On the one hand, Krylov subspace methods such as GMRES allow some inexactness when computing the orthonormal search basis; more precisely, theoretical results [24], [25] show that the matrix-vector product involved in the construction of the new search directions can be computed less and less accurately as the convergence towards the solution takes place. An inexact scheme of that form can be written as a generalized Arnoldi equality

 $\left[(A+E_{1})v_{1},\cdots ,(A+E_{k})v_{k}\right]=\left[v_{1},\cdots ,v_{k},v_{k+1}\right]\overline{H}_{k},$ (1)

where the theory gives a bound on $\parallel {E}_{k}\parallel$ that depends on the residual norm $\parallel b-A{x}_{k}\parallel$ at step $k$, ${x}_{k}$ being the ${k}^{\mathrm{th}}$ iterate. Such a result is of major interest in applications where the matrix is not formed explicitly, e.g., in the fast multipole method (FMM) or domain decomposition method (DDM) context, where it allows one to drastically reduce the computational effort.
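The relaxation strategy above can be illustrated with a small numerical sketch (names and constants are ours, not the project's): a GMRES iteration in which the matrix-vector product is deliberately perturbed by an $E_k$ whose norm is allowed to grow in inverse proportion to the current residual norm, as the inexact Krylov theory permits.

```python
import numpy as np

def inexact_gmres(A, b, tol=1e-8, max_iter=50):
    """GMRES (zero initial guess) with a deliberately inexact matvec.

    At step k the product A @ v_k is perturbed by a vector of norm
    eps_k ~ tol / ||r_{k-1}||: the smaller the residual, the larger
    the tolerated perturbation, following the inexact Krylov bound.
    The 1/max_iter safety factor keeps the accumulated gap between
    the computed and true residuals at the tol level.
    """
    n = b.size
    rng = np.random.default_rng(0)
    beta = np.linalg.norm(b)                  # r_0 = b since x_0 = 0
    V = np.zeros((n, max_iter + 1))           # Arnoldi basis
    H = np.zeros((max_iter + 1, max_iter))    # Hessenberg matrix
    V[:, 0] = b / beta
    res = beta
    for k in range(max_iter):
        # Relaxation rule: allowed matvec error grows as res shrinks.
        eps_k = min(tol / (max_iter * res), 1.0)
        w = A @ V[:, k]
        e = rng.standard_normal(n)
        w += eps_k * e / np.linalg.norm(e)    # (A + E_k) v_k
        for j in range(k + 1):                # modified Gram-Schmidt
            H[j, k] = V[:, j] @ w
            w -= H[j, k] * V[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        V[:, k + 1] = w / H[k + 1, k]
        # Small least-squares problem gives iterate and residual norm.
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        res = np.linalg.norm(H[:k + 2, :k + 1] @ y - e1)
        if res < tol:
            break
    return V[:, :y.size] @ y, res

# Demo on a hypothetical well-conditioned random system.
rng_demo = np.random.default_rng(1)
n = 50
A = 2.0 * np.eye(n) + rng_demo.standard_normal((n, n)) / (2.0 * np.sqrt(n))
b = rng_demo.standard_normal(n)
x, res = inexact_gmres(A, b)
```

Even though the late matvecs are perturbed far more than the first ones, the true residual $\parallel b-Ax\parallel$ still lands near the requested tolerance, which is the practical payoff of the inexactness bounds in [24], [25].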

On the other hand, novel agnostic lossy data compression techniques are studied to reduce the I/O footprint of large applications that have to store snapshots of the calculation, be it for a posteriori analysis, for out-of-core calculation, or for checkpointing data for resilience. Those lossy compression techniques allow for precise control of the error introduced by the compressor, ensuring that the stored data remain meaningful for the considered application. In the context of Krylov methods, the basis ${V}_{k+1}=\left[{v}_{1},\cdots ,{v}_{k},{v}_{k+1}\right]$ represents the most demanding data in terms of memory footprint, so that, in a fault-tolerant or out-of-core context, storing it in a lossy form would allow for tremendous savings.
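The core mechanism of such error-bounded compressors can be sketched in a few lines (a minimal illustration of the principle, not the codec actually used in the project): uniform scalar quantization with step $2\epsilon$ guarantees a pointwise reconstruction error of at most $\epsilon$; production compressors then encode the resulting integer indices losslessly.

```python
import numpy as np

def compress(v, abs_err):
    # Uniform quantization with step 2*abs_err: each entry is mapped to
    # the nearest multiple of the step, so the pointwise reconstruction
    # error is at most abs_err. Real error-bounded compressors apply a
    # lossless entropy-coding stage to these indices afterwards.
    return np.round(v / (2.0 * abs_err)).astype(np.int64)

def decompress(q, abs_err):
    return q.astype(np.float64) * (2.0 * abs_err)

v = np.random.default_rng(0).standard_normal(10_000)
w = decompress(compress(v, abs_err=1e-3), abs_err=1e-3)
```

The user-prescribed `abs_err` is exactly the kind of knob the application can tighten or relax depending on how accurate the stored snapshot needs to be.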

The objective of this work, developed within the postdoctoral project of N. Schenkels, is to dynamically control the compression error of ${V}_{k+1}$ so as to comply with the inexact Krylov theory. The main difficulty is to translate the known theoretical bound on the inexactness ${E}_{k}$ into a suitable lossy compression mechanism for ${v}_{k}$ with loss $\parallel \delta {v}_{k}\parallel$.
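A back-of-the-envelope argument (our simplification, not the project's actual analysis) shows why such a translation is plausible: applying the exact operator to a compressed basis vector $v_{k}+\delta v_{k}$ can be reinterpreted as applying an inexact operator to the exact vector,

```latex
A\,(v_{k}+\delta v_{k}) = (A+E_{k})\,v_{k}
\quad\text{with}\quad
E_{k} = \frac{A\,\delta v_{k}\, v_{k}^{T}}{\|v_{k}\|^{2}},
```

so that, the Arnoldi vectors being unit norm, $\|E_{k}\|\le \|A\|\,\|\delta v_{k}\|$. Plugging this into a bound of the type $\|E_{k}\|\lesssim \epsilon /\|b-Ax_{k-1}\|$ suggests a per-vector compression budget $\|\delta v_{k}\|\lesssim \epsilon /\left(\|A\|\,\|b-Ax_{k-1}\|\right)$, i.e., the later a basis vector is generated, the more aggressively it may be compressed.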