Keywords
Computer Science and Digital Science
 A2.4. Formal method for verification, reliability, certification
 A4.3. Cryptography
 A7.1. Algorithms
 A8. Mathematics of computing
 A8.1. Discrete mathematics, combinatorics
 A8.4. Computer Algebra
 A8.5. Number theory
 A8.10. Computer arithmetic
Other Research Topics and Application Domains
 B6.6. Embedded systems
 B9.5. Sciences
 B9.10. Privacy
1 Team members, visitors, external collaborators
Research Scientists
 Bruno Salvy [Team leader, Inria, Senior Researcher]
 Nicolas Brisebarre [CNRS, Researcher, HDR]
 Claude-Pierre Jeannerod [Inria, Researcher]
 Vincent Lefèvre [Inria, Researcher]
 Benoît Libert [CNRS, Senior Researcher, HDR]
 Jean-Michel Muller [CNRS, Senior Researcher, HDR]
 Alain Passelègue [Inria, Researcher]
 Nathalie Revol [Inria, Researcher]
 Gilles Villard [CNRS, Senior Researcher, HDR]
Faculty Members
 Guillaume Hanrot [École Normale Supérieure de Lyon, Professor, HDR]
 Fabien Laguillaumie [Univ Claude Bernard, Professor, until Aug 2020, HDR]
 Nicolas Louvet [Univ Claude Bernard, Associate Professor]
 Damien Stehlé [École Normale Supérieure de Lyon, Professor, HDR]
PostDoctoral Fellows
 Qian Chen [University of Trondheim, Norway, until Oct 2020]
 Rikki Amit Inder Deo [Inria / ENS, March–June 2020; ENS de Lyon since July 2020]
 Alonso Gonzalez [École Normale Supérieure de Lyon]
 Dingding Jia [CNRS, until Oct 2020]
 Changmin Lee [Univ de Lyon, until Sep 2020]
 Hervé Tale Kalachi [Inria, until Aug 2020]
PhD Students
 Orel Cosseron [Zama SAS, CIFRE, from Oct 2020]
 Julien Devevey [École Normale Supérieure de Lyon, from Sep 2020]
 Adel Hamdi [Orange Labs, CIFRE]
 Huyen Nguyen [École Normale Supérieure de Lyon]
 Miruna Rosca [BitDefender, Bucharest, Romania, until Oct 2020]
 Hippolyte Signargout [École Normale Supérieure de Lyon, from Oct 2020]
 Radu Titiu [BitDefender, Bucharest, Romania, until Oct 2020]
 Ida Tucker [École Normale Supérieure de Lyon, until Sep 2020]
Technical Staff
 Rikki Amit Inder Deo [Inria, Engineer, until Feb 2020]
 Joris Picot [École Normale Supérieure de Lyon, Engineer]
Interns and Apprentices
 Calvin Abou Haidar [École Normale Supérieure de Lyon, until Jun 2020]
 Bilel Bensaid [École Normale Supérieure de Lyon, from Jun 2020 until Aug 2020]
 Quentin Corradi [École Normale Supérieure de Lyon, from Apr 2020 until Jul 2020]
 Miguel Cueto Noval [École Normale Supérieure de Lyon, from May 2020 until Jun 2020]
 Mathis Deronzier [École Normale Supérieure de Lyon, from Jun 2020 until Aug 2020]
 Julien Devevey [École Normale Supérieure de Lyon, until Jun 2020]
 Justine Sauvage [École Normale Supérieure de Lyon, from Apr 2020 until Jul 2020]
 Mohamed Sidi Ali Cherif [École Normale Supérieure de Lyon, from Apr 2020 until Jul 2020]
 Hermenegilde Valentin [École Normale Supérieure de Lyon, until Jun 2020]
Administrative Assistants
 Nelly Amsellem [École Normale Supérieure de Lyon, until Mar 2020]
 Chiraz Benamor [École Normale Supérieure de Lyon, from Mar 2020]
 Virginie Bouyer [École Normale Supérieure de Lyon, until Mar 2020]
 Octavie Paris [École Normale Supérieure de Lyon, European H2020 project manager, since June 2019]
External Collaborator
 Fabien Laguillaumie [Univ de Montpellier, from Sep 2020, HDR]
2 Overall objectives
A major challenge in modeling and scientific computing is the simultaneous mastery of hardware capabilities, software design, and mathematical algorithms for the efficiency and reliability of the computation. In this context, the overall objective of AriC is to improve computing at large, in terms of performance, efficiency, and reliability. We work on the fine structure of floating-point arithmetic, on controlled approximation schemes, on algebraic algorithms and on new cryptographic applications, most of these themes being pursued in their interactions. Our approach combines fundamental studies, practical performance and qualitative aspects, with a shared strategy going from high-level problem specifications and standardization actions, to computer arithmetic and the lowest-level details of implementations.
This makes AriC the right place for drawing the following lines of action:
 Design and integration of new methods and tools for mathematical program specification, certification, security, and guarantees on numerical results. Some main ingredients here are: the interleaving of formal proofs, computer arithmetic and computer algebra; error analysis and computation of certified error bounds; the study of the relationship between performance and numerical quality; and, on the cryptography side, a focus on the practicality of existing protocols and the design of more powerful lattice-based primitives.
 Generalization of a hybrid symbolic-numeric trend: the interplay between arithmetics, both for improving and controlling numerical approaches (symbolic $\to $ numeric) and for accelerating exact solutions (symbolic $\leftarrow $ numeric). This trend, especially in the symbolic computation community, has acquired a strategic role for the future of scientific computing. The integration in AriC of computer arithmetic, reliable computing, and algebraic computing is expected to lead to a deeper understanding of these problems and to novel solutions.
 Mathematical and algorithmic foundations of computing. We address algorithmic complexity and fundamental aspects of approximation, polynomial and matrix algebra, and lattice-based cryptography. Practical questions concern the design of high-performance and reliable computing kernels, thanks to optimized computer arithmetic operators and an improved adequacy between arithmetic bricks and higher-level ones.
According to the application domains that we target and our main fields of expertise, these lines of action are developed in three themes with specific objectives.
 Efficient approximation methods (§3.1). Here lies the question of interleaving formal proofs, computer arithmetic and computer algebra, for significantly extending the range of functions whose reliable evaluation can be optimized.
 Lattices: algorithms and cryptography (§3.2). Long-term goals are to go beyond the current design paradigm in basis reduction, and to demonstrate the superiority of lattice-based cryptography over contemporary public-key cryptographic approaches.
 Algebraic computing and high performance kernels (§3.3). The problem is to keep the algorithm and software designs in line with the scales of computational capabilities and application needs, by simultaneously working on the structural and the computer arithmetic levels.
3 Research program
3.1 Efficient and certified approximation methods
3.1.1 Safe numerical approximations
The last twenty years have seen the advent of computer-aided proofs in mathematics, and this trend is becoming increasingly important. Such proofs require: fast and stable numerical computations; numerical results with a guarantee on the error; and formal proofs of these computations, or computations with a proof assistant. One of our main long-term objectives is to develop a platform where one can study a computational problem at all (or any) of these three levels of rigor. At this stage, most of the necessary routines are not easily available (or do not even exist) and one needs to develop ad hoc tools to complete the proof. We plan to provide more and more algorithms and routines to address such questions. Possible applications lie in the study of mathematical conjectures where exact mathematical results are required (e.g., stability of dynamical systems), or in more applied questions, such as the automatic generation of efficient and reliable numerical software for function evaluation. From a complementary viewpoint, numerical safety is also critical in robust space mission design, where guidance and control algorithms become more complex in the context of increased satellite autonomy. We will pursue our collaboration with specialists of that area, whose questions bring useful focus on relevant issues.
3.1.2 Floating-point computing
Floating-point arithmetic is currently undergoing a major evolution, in particular with the recent advent of a greater diversity of available precisions on the same system (from 8 to 128 bits) and of coarser-grained floating-point hardware instructions. This new arithmetic landscape raises important issues at the various levels of computing, which we will address along the following three directions.
Floating-point algorithms, properties, and standardization
One of our targets is the design of building blocks of computing (e.g., algorithms for the basic operations and functions, and algorithms for complex or double-word arithmetic). Establishing properties of these building blocks (e.g., the absence of “spurious” underflows/overflows) is also important. The IEEE 754 standard on floating-point arithmetic (which was revised slightly in 2019) will have to undergo a major revision within a few years: first because advances in technology or new needs make some of its features obsolete, and second because new features need standardization. We aim at playing a leading role in the preparation of the next standard.
Error bounds
We will pursue our studies in rounding error analysis, in particular for the “low precision–high dimension” regime, where traditional analyses become ineffective and where improved bounds are thus most needed. For this, the structure of both the data and the errors themselves will have to be exploited. We will also investigate the impact of mixed-precision and coarser-grained instructions (such as small matrix products) on accuracy analyses.
High performance kernels
Most directions in the team are concerned with optimized and high-performance implementations. We will pursue our efforts concerning the implementation of well-optimized floating-point kernels, with an emphasis on numerical quality, and taking into account the current evolution of computer architectures (the increasing width of SIMD registers, and the availability of low-precision formats). We will focus on computing kernels used within other axes of the team such as, for example, extended-precision linear algebra routines within the FPLLL and HPLLL libraries.
3.2 Lattices: algorithms and cryptology
We intend to strengthen our assessment of the cryptographic relevance of problems over lattices, and to broaden our studies in two main (complementary) directions: hardness foundations and advanced functionalities.
3.2.1 Hardness foundations
Recent advances in cryptography have broadened the scope of encryption functionalities (e.g., encryption schemes that allow computing over encrypted data or delegating partial decryption keys). While simple variants (e.g., identity-based encryption) are already practical, the more advanced ones still lack efficiency. Towards reaching practicality, we plan to investigate simpler constructions of the fundamental building blocks (e.g., pseudorandom functions) involved in these advanced protocols. We aim at simplifying known constructions based on standard hardness assumptions, but also at identifying new sources of hardness from which simple constructions, naturally suited for the aforementioned advanced applications, could be obtained (e.g., constructions that minimize critical complexity measures such as the depth of evaluation). Understanding the core source of hardness of today's standard hard algorithmic problems is an interesting direction, as it could lead to new hardness assumptions (e.g., tweaked versions of standard ones) from which we could derive much more efficient constructions. Furthermore, it could open the way to completely different constructions of advanced primitives based on new hardness assumptions.
3.2.2 Cryptanalysis
Lattice-based cryptography has come much closer to maturity in the recent past. In particular, NIST has started a standardization process for post-quantum cryptography, and lattice-based proposals are numerous and competitive. This dramatically increases the need for cryptanalysis:
Do the underlying hard problems suffer from structural weaknesses? Are some of the problems used easy to solve, e.g., asymptotically?
Are the chosen concrete parameters meaningful for concrete cryptanalysis? In particular, how secure would they be if all the known algorithms and implementations thereof were pushed to their limits? How would these concrete performances change in case (fullfledged) quantum computers get built?
On another front, the cryptographic functionalities reachable under lattice hardness assumptions seem to get closer to an intrinsic ceiling. For instance, to obtain cryptographic multilinear maps, functional encryption and indistinguishability obfuscation, new assumptions have been introduced. They often have a lattice flavour, but are far from standard. Assessing the validity of these assumptions will be one of our priorities in the midterm.
3.2.3 Advanced cryptographic primitives
In the design of cryptographic schemes, we will pursue our investigations on functional encryption. Despite recent advances, efficient solutions are only available for restricted function families. Indeed, solutions for general functions are either far too inefficient for practical use or rely on uncertain security foundations, such as the existence of circuit obfuscators (or both). We will explore constructions based on well-studied hardness assumptions that are closer to being usable in real-life applications. For specific functionalities, we will aim at more efficient realizations satisfying stronger security notions.
Another direction we will explore is multiparty computation, via a new approach exploiting the rich structure of class groups of quadratic fields. We have already shown that such groups have a positive impact in this field, by designing new efficient encryption switching protocols from the additively homomorphic encryption we introduced earlier. We want to go further in this direction, which raises interesting questions: how to design efficient zero-knowledge proofs for groups of unknown order, how to exploit their structure in the context of two-party cryptography (such as two-party signing), and how to extend these techniques to the multiparty setting.
In the context of the PROMETHEUS H2020 project, we will keep seeking to develop new quantum-resistant privacy-preserving cryptographic primitives (group signatures, anonymous credentials, e-cash systems, etc.). This includes the design of more efficient zero-knowledge proof systems that can interact with lattice-based cryptographic primitives.
3.3 Algebraic computing and high performance kernels
The connections between algorithms for structured matrices and for polynomial matrices will continue to be developed, since they have proved to bring progress to fundamental questions with applications throughout computer algebra. The new fast algorithm for the bivariate resultant opens an exciting area of research which should lead to improvements on a variety of questions related to polynomial elimination. We expect to produce further results in that area.
For definite summation and integration, we now have fast algorithms for single integrals of general functions and sequences and for multiple integrals of rational functions. The longterm objective of that part of computer algebra is an efficient and general algorithm for multiple definite integration and summation of general functions and sequences. This is the direction we will take, starting with single definite sums of general functions and sequences (leading in particular to a faster variant of Zeilberger's algorithm). We also plan to investigate geometric issues related to the presence of apparent singularities and how they seem to play a role in the complexity of the current algorithms.
4 Application domains
4.1 Floatingpoint and Validated Numerics
Our expertise in validated numerics is useful to analyze, improve, and guarantee the quality of numerical results in a wide range of applications, including:
 scientific simulation;
 global optimization;
 control theory.
Much of our work, in particular the development of correctly rounded elementary functions, is critical to the reproducibility of floatingpoint computations.
4.2 Cryptography, Cryptology, Communication Theory
Lattice reduction algorithms have direct applications in
 publickey cryptography;
 Diophantine equations;
 communications theory.
5 Highlights of the year
5.1 Awards
Ida Tucker is one of 35 winners of the 2020 L’OréalUNESCO France Rising Talent Award for Women in Science.
6 New software and platforms
6.1 New software
6.1.1 FPLLL
 Keywords: Euclidean Lattices, Computer algebra system (CAS), Cryptography
 Scientific Description: The fplll library is used by, or has been adapted for integration within, several mathematical computation systems such as Magma, Sage, and PARI/GP. It is also used for cryptanalytic purposes, to test the resistance of cryptographic primitives.

Functional Description:
fplll contains implementations of several lattice algorithms. The implementation relies on floating-point orthogonalization, and LLL is central to the code, hence the name.
It includes implementations of floating-point LLL reduction algorithms, offering different speed/guarantee trade-offs. It contains a “wrapper” choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible. In the case of the wrapper, the succession of variants is oblivious to the user.
It includes an implementation of the BKZ reduction algorithm, including the BKZ 2.0 improvements (extreme enumeration pruning, preprocessing of blocks, early termination). Additionally, slide reduction and self-dual BKZ are supported.
It also includes a floating-point implementation of the Kannan–Fincke–Pohst algorithm that finds a shortest nonzero lattice vector. For the same task, the GaussSieve algorithm is also available in fplll. Finally, it contains a variant of the enumeration algorithm that computes a lattice vector closest to a given vector belonging to the real span of the lattice.

 URL: https://github.com/fplll/fplll
 Contact: Damien Stehlé
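To illustrate what LLL reduction computes (this is a minimal textbook variant over exact rationals, not fplll's optimized floating-point implementation; the example basis below is our own):

```python
from fractions import Fraction

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL reduction over exact rationals (no floating point).
    B is a list of linearly independent integer vectors; reduced in place."""
    n = len(B)
    def dot(u, v):
        return sum(Fraction(a) * b for a, b in zip(u, v))
    def gso():
        # Gram-Schmidt orthogonalization of the current basis
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = [Fraction(x) for x in B[i]]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu
    k = 1
    while k < n:
        # size-reduce b_k against b_{k-1}, ..., b_0
        for j in range(k - 1, -1, -1):
            _, mu = gso()
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
        Bs, mu = gso()
        # Lovasz condition
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k-1] ** 2) * dot(Bs[k-1], Bs[k-1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)
    return B
```

fplll performs the same reduction with floating-point Gram–Schmidt for speed, with BKZ and enumeration layered on top.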
6.1.2 Gfun
 Name: generating functions package
 Keyword: Symbolic computation
 Functional Description: Gfun is a Maple package for the manipulation of linear recurrence and differential equations. It provides tools for guessing a sequence or a series from its first terms, and for rigorously manipulating solutions of linear differential or recurrence equations, using the equation as a data structure.

 URL: http://perso.ens-lyon.fr/bruno.salvy/software/the-gfun-package/
 Contact: Bruno Salvy
6.1.3 GNU MPFR
 Keywords: Multiple-Precision, Floating-point, Correct Rounding
 Functional Description: GNU MPFR is an efficient arbitrary-precision floating-point library with well-defined semantics (copying the good ideas from the IEEE 754 standard), in particular correct rounding in 5 rounding modes. It provides about 80 mathematical functions, in addition to utility functions (assignments, conversions, ...). Special data (Not a Number, infinities, signed zeros) are handled as in the IEEE 754 standard. GNU MPFR is based on the mpn and mpz layers of the GMP library.

 URL: https://www.mpfr.org/
 Publications: hal-01394289, hal-01502326, inria-00069930, inria-00070174, inria-00103655, inria-00000026
 Contact: Vincent Lefèvre
 Participants: Guillaume Hanrot, Paul Zimmermann, Philippe Théveny, Vincent Lefèvre
6.1.4 Sipe
 Keywords: Floating-point, Correct Rounding
 Functional Description: Sipe is a mini-library in the form of a C header file, to perform radix-2 floating-point computations in very low precisions with correct rounding, either to nearest or toward zero. The goal of such a tool is to carry out proofs of algorithms/properties, or computations of tight error bounds, in these precisions by exhaustive tests, in order to try to generalize them to higher precisions. The currently supported operations are addition, subtraction, multiplication (possibly with the error term), fused multiply-add/subtract (FMA/FMS), and miscellaneous comparisons and conversions. Sipe provides two implementations of these operations, with the same API and the same behavior: one based on integer arithmetic, and a newer one based on floating-point arithmetic.

 URL: https://www.vinc17.net/research/sipe/
 Publications: hal-00763954, hal-00864580
 Contact: Vincent Lefèvre
 Participant: Vincent Lefèvre
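The low-precision rounding that Sipe provides can be mimicked in a few lines of Python (Sipe itself is a C header; the function below, whose name is ours, rounds a binary64 value to a p-bit significand with round-to-nearest, ties-to-even):

```python
import math

def round_to_precision(x, p):
    # Round the binary64 value x to a p-bit significand (p <= 53),
    # round-to-nearest, ties-to-even. Exact because scaling by a power
    # of two is error-free and Python's round() breaks ties to even.
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    r = round(m * 2.0 ** p)         # integer significand, correctly rounded
    return math.ldexp(r, e - p)
```

Note that naively composing such roundings with binary64 operations can double-round incorrectly; handling each operation rigorously is precisely what a library like Sipe is for.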
6.1.5 LinBox
 Keyword: Exact linear algebra
 Functional Description: LinBox is an open-source C++ template library for exact, high-performance linear algebra computations. It is considered the reference library for numerous computations (such as linear system solving, rank, characteristic polynomial, Smith normal form, ...) over finite fields and the integers, with dense, sparse, and structured matrices.

 URL: http://linalg.org/
 Contacts: Clément Pernet, Thierry Gautier, Gilles Villard
 Participants: Clément Pernet, Thierry Gautier
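The “exact” in exact linear algebra can be illustrated by a naive rank computation over GF(p) (a toy sketch of the kind of task LinBox performs at high performance; the function name is ours):

```python
def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p) by Gaussian elimination.
    Exact: arithmetic mod p involves no rounding at all."""
    A = [[x % p for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    rank, r = 0, 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], -1, p)          # modular inverse (Python >= 3.8)
        A[r] = [(x * inv) % p for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(a - f * b) % p for a, b in zip(A[i], A[r])]
        r += 1
        rank += 1
        if r == rows:
            break
    return rank
```

LinBox uses far more sophisticated algorithms (blocking, Krylov/black-box methods) to make such exact computations competitive on large matrices.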
6.1.6 HPLLL
 Keywords: Euclidean Lattices, Computer algebra system (CAS)
 Functional Description: Software library for linear algebra and Euclidean lattice problems

 URL: http://perso.ens-lyon.fr/gilles.villard/hplll/
 Contact: Gilles Villard
7 New results
7.1 Efficient approximation methods
7.1.1 Computation of Tight Enclosures for Laplacian Eigenvalues
Recently, there has been interest in high-precision approximations of the first eigenvalue of the Laplace–Beltrami operator on spherical triangles, for combinatorial purposes. We compute improved and certified enclosures of these eigenvalues. This is achieved by applying the method of particular solutions in high precision, the enclosure being obtained by a combination of interval arithmetic and Taylor models. The index of the eigenvalue is certified by exploiting the monotonicity of the eigenvalue with respect to the domain. The classically troublesome case of singular corners is handled by combining expansions at all corners and an expansion from an interior point. In particular, this allows us to compute 100 digits of the fundamental eigenvalue for the three-dimensional Kreweras model, which has been the object of previous efforts [6].
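The interval-arithmetic ingredient can be illustrated with a toy enclosure type (the actual computations use high-precision interval and Taylor-model arithmetic; the class below is our own minimal sketch, using `math.nextafter`, available from Python 3.9, for outward rounding):

```python
import math

class Interval:
    """Toy interval arithmetic with outward rounding: every result is
    widened by one ulp on each side, so it rigorously encloses the exact
    result of the operation on the endpoints."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(ps), -math.inf),
                        math.nextafter(max(ps), math.inf))
    def __contains__(self, x):
        return self.lo <= x <= self.hi
```

Widening by one ulp is cruder than the directed roundings used by real interval libraries, but it already yields rigorous enclosures.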
7.2 Floatingpoint and Validated Numerics
7.2.1 Error Analysis of some Operations Involved in the Cooley–Tukey Fast Fourier Transform
In [4], we obtain error bounds for the classical Cooley–Tukey FFT algorithm in floating-point arithmetic, for the 2-norm as well as for the infinity norm. For that purpose, we also give some results on the relative error of the complex multiplication by a root of unity, and on the largest value that the real or imaginary part of one term of the FFT of a vector $x$ can take, assuming that all terms of $x$ have real and imaginary parts less than some value $b$.
7.2.2 Influence of the Condition Number on Interval Computations: Illustration on Some Examples
The condition number is a quantity that is well known in “classical” numerical analysis, that is, where numerical computations are performed using floating-point numbers. This quantity appears much less frequently in interval numerical analysis, that is, where computations are performed on intervals. The goal of [29] is twofold. On the one hand, it stresses that the notion of condition number already appears in the literature on interval analysis, even if it does not bear that name. On the other hand, three small examples are used to illustrate experimentally the impact of the condition number on interval computations.
7.2.3 The Relative Accuracy of (x+y)*(x−y)
We consider in [8] the relative accuracy of evaluating $(x+y)(x-y)$ in IEEE floating-point arithmetic, when $x$, $y$ are two floating-point numbers and rounding is to nearest. This expression can be used, for example, as an efficient cancellation-free alternative to $x^2-y^2$ and (at least in the absence of underflow and overflow) is well known to have low relative error, namely, at most about $3u$ with $u$ denoting the unit roundoff. In this paper we propose to complement this traditional analysis with a finer-grained one, aimed at improving and assessing the quality of that bound. Specifically, we show that if the tie-breaking rule is to-away then the bound $3u$ is asymptotically optimal (as the precision tends to $\infty $). In contrast, if the tie-breaking rule is to-even, we show that asymptotically optimal bounds are now $2.25u$ for base two and $2u$ for larger bases, such as base ten. In each case, asymptotic optimality is obtained by the explicit construction of a certificate, that is, some floating-point input $(x,y)$ parametrized by $u$ and such that the error of the associated result is equivalent to the error bound as $u$ tends to zero. We conclude with comments on how $(x+y)(x-y)$ compares with $x^2-y^2$ in the presence of floating-point arithmetic, in particular showing cases where the computed value of $(x+y)(x-y)$ exceeds that of $x^2-y^2$.
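The accuracy gap is easy to observe in binary64; the inputs below are our own illustrative choice (nearly equal $x$ and $y$, where the naive form cancels):

```python
from fractions import Fraction

def rel_err(computed, exact):
    # exact relative error, using rationals as the reference
    return abs((Fraction(computed) - exact) / exact)

u = Fraction(1, 2**53)                 # unit roundoff of binary64
x, y = 1.0 + 2.0**-27, 1.0             # nearly equal inputs: cancellation
exact = Fraction(x)**2 - Fraction(y)**2
err_factored = rel_err((x + y) * (x - y), exact)
err_naive    = rel_err(x * x - y * y, exact)
```

On these inputs `err_factored` stays far below the $3u$ bound discussed above, while `err_naive` is larger by many orders of magnitude.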
7.2.4 Emulating Round-to-nearest-ties-to-zero “Augmented” Floating-point Operations Using Round-to-nearest-ties-to-even Arithmetic
The 2019 version of the IEEE 754 Standard for Floating-Point Arithmetic recommends that new “augmented” operations be provided for the binary formats. These operations use a new “rounding direction”: round-to-nearest ties-to-zero. In [2], we show how they can be implemented using the currently available operations, using round-to-nearest ties-to-even, with a partial formal proof of correctness. This is a collaboration with S. Boldo (Inria Saclay) and C. Lauter (University of Alaska).
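Such emulations are built on error-free transforms; the sketch below is not the paper's algorithm, only the classical 2Sum building block it relies on (here in Python, whose floats are IEEE binary64):

```python
def two_sum(a, b):
    # Knuth's 2Sum: returns (s, t) with s = RN(a+b) and s + t == a + b
    # exactly (valid in round-to-nearest, barring overflow)
    s = a + b
    a_approx = s - b
    b_approx = s - a_approx
    t = (a - a_approx) + (b - b_approx)
    return s, t
```

An augmented addition must additionally correct `s` when `a + b` falls exactly halfway between two floats, which is where the ties-to-zero subtlety, and the formal proof, come in.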
7.2.5 Elementary Functions and Approximate Computing
In [12], we review some of the classical methods used for quickly obtaining low-precision approximations to the elementary functions. Then, for each of the three main classes of elementary function algorithms (shift-and-add algorithms, polynomial or rational approximations, table-based methods) and for the additional “bit-manipulation” techniques specific to approximate computing, we examine what can be done to obtain very fast estimates of a function, at the cost of a (controlled) loss of accuracy.
7.2.6 Algorithms for Manipulating Quaternions in Floating-point Arithmetic
Quaternions form a set of four global but not unique parameters, which can represent three-dimensional rotations in a nonsingular way. They are frequently used in computer graphics and in drone and aerospace vehicle control. Floating-point quaternion operations (addition, multiplication, reciprocal, norm) are often implemented “by the book”. Although all usual implementations are algebraically equivalent, their numerical behavior can be quite different. For instance, the arithmetic operations on quaternions, as well as conversion algorithms to/from rotation matrices, are subject to spurious under/overflow (an intermediate calculation underflows or overflows, making the computed final result irrelevant even though the exact result lies in the domain of the representable numbers). We analyze such algorithms and then propose workarounds and more accurate alternatives [24].
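The spurious-overflow phenomenon is easy to reproduce on the quaternion norm (function names are ours, and the paper's workarounds are more refined than this classical scaling trick):

```python
import math

def quat_norm_naive(q):
    # "by the book": a*a overflows for large components, although the
    # exact norm is representable
    a, b, c, d = q
    return math.sqrt(a*a + b*b + c*c + d*d)

def quat_norm_scaled(q):
    # scale by the largest magnitude to avoid spurious overflow/underflow
    m = max(abs(x) for x in q)
    if m == 0.0:
        return 0.0
    s = [x / m for x in q]
    return m * math.sqrt(sum(x * x for x in s))
```

For a quaternion with a component near 1e200, the naive squared sum overflows to infinity while the scaled version returns the correct, representable norm.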
7.2.7 Alternative Split Functions and Dekker's Product
We introduce algorithms for splitting a positive binary floating-point number into two numbers of around half the system precision, using arithmetic operations all rounded either toward $-\infty $ or toward $+\infty $. We use these algorithms to compute “exact” products (i.e., to express the product of two floating-point numbers as the unevaluated sum of two floating-point numbers, the rounded product and an error term). This is similar to the classical Dekker product, adapted here to directed roundings [23].
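For reference, here is the classical round-to-nearest version that the paper adapts to directed roundings: Veltkamp splitting followed by Dekker's product (a textbook sketch in binary64, not the paper's new algorithms):

```python
def veltkamp_split(x, s=27):
    # split x into xh + xl, each representable on about half the precision
    c = (2.0**s + 1.0) * x
    xh = c - (c - x)
    return xh, x - xh

def dekker_prod(x, y):
    # Dekker's product: (p, e) with p = RN(x*y) and p + e == x*y exactly
    # (round-to-nearest binary64, barring over/underflow)
    p = x * y
    xh, xl = veltkamp_split(x)
    yh, yl = veltkamp_split(y)
    e = ((xh * yh - p) + xh * yl + xl * yh) + xl * yl
    return p, e
```

With an FMA instruction the error term is simply `fma(x, y, -p)`; the splitting approach matters on targets without one, or, as in the paper, under directed roundings.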
7.2.8 Formalization of Double-word Arithmetic
Recently, a complete set of algorithms for manipulating double-word numbers (some classical, some new) was analyzed by Joldes, Popescu, and Muller. We have formally proven all the theorems given in that paper, using the Coq proof assistant. The formal proof work led us to: i) locate mistakes in some of the original paper proofs (mistakes that, however, do not hinder the validity of the algorithms), ii) significantly improve some error bounds, and iii) generalize some results by showing that they remain valid if the rounding mode is slightly changed. The consequence is that the algorithms presented in Joldes et al.'s paper can be used with high confidence, and that some of them are even more accurate than was previously believed. This illustrates what formal proof can bring to computer arithmetic: beyond mere (yet extremely useful) verification, correction and consolidation of already known results, it can help to find new properties. All our formal proofs are freely available [34].
7.2.9 Hardware Implementation of Division Algorithms
We show the details of an implementation of Ercegovac and Muller's variable-radix division algorithm. This implementation takes advantage of the easier prescaling offered by low-radix division and recodes it as necessary for higher-radix iterations throughout the design. This, along with proper use of redundant digit sets, allows us to significantly alter performance characteristics relative to exclusively high-radix division implementations. Comparisons to existing architectures are shown, as well as common implementation optimizations for future iterations. Results are given in cmos32soi 32 nm MTCMOS technology using ARM-based standard cells and commercial EDA toolsets. This work was done in cooperation with M. Ercegovac (U.C. Los Angeles) and J. Stine (Oklahoma State Univ.) [28].
7.3 Lattices: Algorithms and Cryptology
7.3.1 On the Smoothing Parameter and Last Minimum of Random Orthogonal Lattices
Let $X\in {\mathbb{Z}}^{n\times m}$, with each entry independently and identically distributed from an integer Gaussian distribution. We consider the orthogonal lattice ${\Lambda}^{\perp}\left(X\right)$, i.e., the set of vectors $v\in {\mathbb{Z}}^{m}$ such that $Xv=0$. We prove probabilistic upper bounds on the smoothing parameter and the $(m-n)$-th minimum of ${\Lambda}^{\perp}\left(X\right)$. These bounds improve on, and the techniques build upon, prior works of Agrawal, Gentry, Halevi and Sahai [Asiacrypt’13], and of Aggarwal and Regev [Chicago J. Theoret. Comput. Sci.’16] [9].
7.3.2 ModFalcon: Compact Signatures Based On Module-NTRU Lattices
Lattices lead to promising practical post-quantum digital signatures, combining asymptotic efficiency with strong theoretical security guarantees. However, tuning their parameters for practical instantiations is a delicate task. On the one hand, NIST round-2 candidates based on Lyubashevsky's design (such as Dilithium and qTESLA) allow several trade-offs between security and efficiency, but at the expense of a large bandwidth consumption. On the other hand, the hash-and-sign Falcon signature is much more compact and still very efficient, but it allows only two security levels, with large compactness and security gaps between them. We introduce a new family of signature schemes based on the Falcon design, which relies on module lattices. Our concrete instantiation enjoys the compactness and efficiency of Falcon, and allows an intermediate security level. It leads to the most compact lattice-based signature achieving a quantum security above 128 bits [19].
7.3.3 Faster EnumerationBased Lattice Reduction
We give a lattice reduction algorithm that achieves root Hermite factor ${k}^{1/\left(2k\right)}$ in time ${k}^{k/8+o\left(k\right)}$ and polynomial memory. This improves on the previously best known enumeration-based algorithms, which achieve the same quality but in time ${k}^{k/\left(2e\right)+o\left(k\right)}$. A cost of ${k}^{k/8+o\left(k\right)}$ was previously mentioned as potentially achievable (Hanrot–Stehlé '10) or as a heuristic lower bound (Nguyen '10) for enumeration algorithms. We prove the complexity and quality of our algorithm under a heuristic assumption, and provide empirical evidence from simulation and implementation experiments attesting to its performance for practical and cryptographic parameter sizes. The techniques used to achieve these results also suggest potential avenues for achieving costs below ${k}^{k/8+o\left(k\right)}$ for the same root Hermite factor, based on the geometry of SDBKZ-reduced bases [14].
7.3.4 MPSign: a Signature from Small-Secret Middle-Product Learning with Errors
We describe a digital signature scheme MPSign, whose security relies on the conjectured hardness of the Polynomial Learning With Errors problem (PLWE) for at least one defining polynomial within an exponential-size family (as a function of the security parameter). The proposed signature scheme follows the Fiat-Shamir framework and can be viewed as the Learning With Errors counterpart of the signature scheme described by Lyubashevsky at Asiacrypt 2016, whose security relies on the conjectured hardness of the Polynomial Short Integer Solution (PSIS) problem for at least one defining polynomial within an exponential-size family. As opposed to the latter, MPSign enjoys a security proof from PLWE that is tight in the quantum-access random oracle model.
The main ingredient is a reduction from PLWE for an arbitrary defining polynomial among exponentially many, to a variant of the Middle-Product Learning with Errors problem (MPLWE) that allows for secrets that are small compared to the working modulus. We present concrete parameters for MPSign using such small secrets, and show that they lead to significant savings in signature length over Lyubashevsky's Asiacrypt 2016 scheme (which uses larger secrets) at typical security levels. As an additional small contribution, and in contrast to MPSign (or MPLWE), we present an efficient key-recovery attack against Lyubashevsky's scheme (or the inhomogeneous PSIS problem) when it is used with sufficiently small secrets, showing the necessity of a lower bound on secret size for the security of that scheme. 16
7.3.5 Towards Practical GGM-Based PRF from (Module-)Learning with Rounding
We investigate the efficiency of a (module-)LWR-based PRF built using the GGM design. Our construction enjoys the security proof of the GGM construction and the (module-)LWR hardness assumption, which is believed to be post-quantum secure. We propose GGM-based PRFs from PRGs with a larger ratio of output to input. This reduces the number of PRG invocations, which improves the PRF performance and reduces the security loss in the GGM security reduction. Our construction bridges the gap between practical and provably secure PRFs. We demonstrate the efficiency of our construction by providing parameters achieving at least 128-bit post-quantum security and optimized implementations utilizing AVX2 vector instructions. Our PRF requires, on average, only 39.4 cycles per output byte.
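To illustrate the GGM design and why a wider PRG reduces the number of invocations, here is a minimal Python sketch; the hash-based PRG, the parameter names and the block sizes are illustrative stand-ins, not the paper's (module-)LWR-based construction.

```python
import hashlib

def prg(seed: bytes, blocks: int) -> list:
    # Illustrative stand-in PRG stretching one 32-byte seed into `blocks` outputs
    # (the paper instead builds the PRG from the (module-)LWR assumption).
    return [hashlib.sha256(seed + bytes([i])).digest() for i in range(blocks)]

def ggm_prf(key: bytes, x: int, n_bits: int, w: int = 4) -> bytes:
    # GGM tree with branching factor 2**w: each PRG call consumes w input bits,
    # so only ceil(n_bits / w) invocations are needed instead of n_bits for w = 1.
    seed = key
    for pos in range(0, n_bits, w):
        chunk = (x >> pos) & ((1 << w) - 1)   # next w bits select a child node
        seed = prg(seed, 1 << w)[chunk]
    return seed
```

With w = 4 and 128-bit inputs, each evaluation makes 32 PRG calls instead of 128; this is exactly the kind of saving a larger output-to-input ratio buys in the GGM construction and its security reduction.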
7.3.6 Measure-Rewind-Measure: Tighter Quantum Random Oracle Model Proofs for One-Way to Hiding and CCA Security
We introduce a new technique called ‘Measure-Rewind-Measure’ (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the ‘double-sided’ quantum One-Way to Hiding Lemma (O2H) of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous ‘impossibility result’ of Jiang, Zhang and Ma [IACR eprint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption. 25
7.3.7 New Constructions of Statistical NIZKs: Dual-Mode DV-NIZKs and More
Non-interactive zero-knowledge proofs (NIZKs) are important primitives in cryptography. A major challenge since the early works on NIZKs has been to construct NIZKs with a statistical zero-knowledge guarantee against unbounded verifiers. In the common reference string (CRS) model, such “statistical NIZK arguments” are currently known from k-Lin in a pairing group and from LWE. In the (reusable) designated-verifier model (DV-NIZK), where a trusted setup algorithm generates a reusable verification key for checking proofs, we also have a construction from DCR. If we relax our requirements to computational zero-knowledge, we additionally have NIZKs from factoring and CDH in a pairing group in the CRS model, and from nearly all assumptions that imply public-key encryption (e.g., CDH, LPN, LWE) in the designated-verifier model. Thus, there still remains a gap in our understanding of statistical NIZKs in both the CRS and the designated-verifier models. In 27, we develop new techniques for constructing statistical NIZK arguments. First, we construct statistical DV-NIZK arguments from the k-Lin assumption in pairing-free groups, the QR assumption, and the DCR assumption. These are the first constructions in pairing-free groups and from QR that satisfy statistical zero-knowledge. All of our constructions are secure even if the verification key is chosen maliciously (i.e., they are “malicious-designated-verifier” NIZKs), and moreover, they satisfy a “dual-mode” property where the CRS can be sampled from two computationally indistinguishable distributions: one distribution yields statistical DV-NIZK arguments while the other yields computational DV-NIZK proofs. We then show how to adapt our k-Lin construction in a pairing group to obtain new publicly-verifiable statistical NIZK arguments from pairings, with a qualitatively weaker assumption than existing constructions of pairing-based statistical NIZKs. Our constructions follow the classic paradigm of Feige, Lapidot, and Shamir (FLS).
While the FLS framework has traditionally been used to construct computational (DV-)NIZK proofs, we show for the first time that the same framework can be leveraged to construct dual-mode (DV-)NIZKs.
7.3.8 Adaptive Simulation Security for Inner Product Functional Encryption
Inner product functional encryption (IPFE) is a popular primitive which enables inner product computations on encrypted data. In IPFE, the ciphertext is associated with a vector x, the secret key is associated with a vector y, and decryption reveals the inner product ⟨x, y⟩. Previously, it was known how to achieve adaptive indistinguishability-based (IND) security for IPFE from the DDH, DCR and LWE assumptions. However, in the stronger simulation-based (SIM) security game, it was only known how to support a restricted adversary that makes all its key requests either before or after seeing the challenge ciphertext, but not both. In more detail, Wee (TCC 2017) showed that the DDH-based scheme of Agrawal et al. (Crypto 2016) achieves semi-adaptive simulation-based security, where the adversary must make all its key requests after seeing the challenge ciphertext. On the other hand, O'Neill showed that all IND-secure IPFE schemes (which may be based on DDH, DCR and LWE) satisfy SIM-based security in the restricted model where the adversary makes all its key requests before seeing the challenge ciphertext. In 13, we resolve the question of SIM-based security for IPFE by showing that variants of the IPFE constructions by Agrawal et al., based on DDH, Paillier and LWE, satisfy the strongest possible adaptive SIM-based security, where the adversary can make an unbounded number of key requests both before and after seeing the (single) challenge ciphertext. This establishes optimal security of the IPFE schemes, under all hardness assumptions on which they can (presently) be based.
7.3.9 Simulation-Sound Arguments for LWE and Applications to KDM-CCA2 Security
The Naor-Yung paradigm is a well-known technique that constructs IND-CCA2-secure encryption schemes by means of non-interactive zero-knowledge proofs satisfying a notion of simulation-soundness. Until recently, it was an open problem to instantiate it under the sole Learning-With-Errors (LWE) assumption without relying on random oracles. While the recent results of Canetti et al. (STOC'19) and Peikert-Shiehian (Crypto'19) provide a solution to this problem by applying the Fiat-Shamir transform in the standard model, the resulting constructions are extremely inefficient as they proceed via a reduction to an NP-complete problem. In 26, we give a direct, non-generic method for instantiating Naor-Yung under the LWE assumption outside the random oracle model. Specifically, we give a direct construction of an unbounded simulation-sound NIZK argument system which, for carefully chosen parameters, makes it possible to express the equality of plaintexts encrypted under different keys in Regev's cryptosystem. We also give a variant of our argument that provides tight security. As an application, we obtain an LWE-based public-key encryption scheme for which we can prove (tight) key-dependent message security under chosen-ciphertext attacks in the standard model.
7.3.10 Lattice-based e-cash, Revisited
Electronic cash (e-cash) was introduced 40 years ago as the digital analogue of traditional cash. It allows users to withdraw electronic coins that can be spent anonymously with merchants. As advocated by Camenisch et al. (Eurocrypt 2005), it should be possible to store the withdrawn coins compactly (i.e., with logarithmic cost in the total number of coins), which has led to the notion of compact e-cash. Many solutions were proposed for this problem, but the security proofs of most of them were invalidated by a very recent paper by Bourse et al. (Asiacrypt 2019). The same paper describes a generic way of fixing existing constructions/proofs, but concrete instantiations of this patch are currently unknown in some settings. In particular, compact e-cash is no longer known to exist under quantum-safe assumptions. In 20, we resolve this problem by proposing the first secure compact e-cash system based on lattices following the result from Bourse et al. In contrast with the latter work, our construction is not only generic: we also describe two concrete instantiations. We depart from previous frameworks of e-cash systems by leveraging lossy trapdoor functions to construct our coins. The indistinguishability of lossy and injective keys allows us to avoid the very strong requirements on the involved pseudorandom functions that were necessary to instantiate the generic patch proposed by Bourse et al.
7.3.11 Signatures of Knowledge and NIZK Proofs for Boolean Circuits
In 15, we construct unbounded simulation-sound proofs for Boolean circuit satisfiability under standard assumptions, with proofs comprised of $O(n+d)$ group elements, where $d$ is the depth and $n$ is the input size of the circuit. Our technical contribution is to add unbounded simulation soundness to a recent NIZK of González and Ràfols (ASIACRYPT'19) with very small overhead. We give two different constructions: the first one is more efficient but not tight, while the second one is tight. The new scheme can be used to construct Signatures of Knowledge based on standard assumptions that can also be composed universally with other cryptographic protocols/primitives. As an independent contribution, we also detail a simple formula to encode Boolean circuits as Quadratic Arithmetic Programs.
7.3.12 Bandwidth-efficient Threshold ECDSA
Threshold signatures allow n parties to share the power of issuing digital signatures so that any coalition of size at least t+1 can sign, whereas groups of t or fewer players cannot. Over the last few years, many schemes have addressed the question of realizing efficient threshold variants for the specific case of ECDSA signatures. In 18, we present new solutions to the problem that aim at reducing the overall bandwidth consumption. Our main contribution is a new variant of the Gennaro and Goldfeder protocol from ACM CCS 2018 that avoids all the required range proofs, while retaining provable security against malicious adversaries in the dishonest-majority setting. Our experiments show that – for all levels of security – our signing protocol reduces the bandwidth consumption of the best previously known secure protocols by factors varying between 4.4 and 9, while key generation is consistently two times less expensive. Furthermore, compared to these same protocols, our signature generation is faster for 192 bits of security and beyond.
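The threshold property itself (any t+1 players can reconstruct, t or fewer learn nothing) is classically obtained by Shamir secret sharing of the signing key. A minimal Python sketch of that standard building block, with a toy modulus and not the paper's full ECDSA protocol, is:

```python
import random

q = 2**127 - 1  # a Mersenne prime modulus (toy choice; ECDSA uses the curve order)

def share(secret, t, n):
    # Shamir sharing: evaluate a random degree-t polynomial with constant term
    # `secret` at points 1..n; any t+1 shares determine the polynomial.
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    return [(i, sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 recovers the constant term, i.e. the secret.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        total = (total + yi * num * pow(den, -1, q)) % q
    return total
```

For example, with t = 2 and n = 5, any 3 of the 5 shares reconstruct the secret, while 2 shares are consistent with every possible secret; threshold ECDSA protocols such as the one above then sign under the shared key without ever reconstructing it.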
7.3.13 Blind Functional Encryption
Functional encryption (FE) gives the power to retain control of sensitive information and is particularly suitable in several practical real-world use cases. Using this primitive, anyone having a specific functional decryption key (derived from some master secret key) can only obtain the evaluation of an authorized function f over a message m, given its encryption. In many scenarios, the data owner is different from the functionality owner, so that a classical implementation of functional encryption naturally implies an interactive key generation protocol between an entity owning the function f and another one managing the master secret key. We focus on this particular phase and consider the case where the function needs to be secret. In 17, we introduce the new notion of blind functional encryption, in which, during an interactive key generation protocol, the master secret key owner does not learn anything about the function f. Our new notion can be seen as a generalisation of the existing concepts of blind IBE/ABE. After a deep study of this new property and its relation with other security notions, we show how to obtain a generic blind FE from any non-blind FE, using homomorphic encryption and zero-knowledge proofs of knowledge. We finally illustrate this construction by giving an efficient instantiation in the case of the inner product functionality.
7.3.14 Alternative Constructions of Asymmetric Primitives from Obfuscation: Hierarchical IBE, Predicate Encryption, and More
We revisit constructions of asymmetric primitives from obfuscation and give simpler alternatives. We consider public-key encryption, (hierarchical) identity-based encryption ((H)IBE), and predicate encryption. Obfuscation has already been shown to imply PKE by Sahai and Waters (STOC’14) and full-fledged functional encryption by Garg et al. (FOCS’13). We simplify all these constructions and reduce the necessary assumptions on the class of circuits that the obfuscator needs to support. Our PKE scheme relies on just a PRG and does not need any puncturing. Our IBE and bounded HIBE schemes convert natural key-delegation mechanisms from (recursive) applications of puncturable PRFs to IBE and HIBE schemes. Our most technical contribution is an unbounded HIBE, which uses (public-coin) differing-inputs obfuscation for circuits and whose proof relies on a recent pebbling-based hybrid argument by Fuchsbauer et al. (ASIACRYPT’14). All our constructions are anonymous, support arbitrary inputs, and have compact keys and ciphertexts. 21
7.3.15 From Cryptomania to Obfustopia through Secret-Key Functional Encryption
Functional encryption lies at the frontiers of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO) while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of secret-key functional encryption with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key crypto, but on the other hand, we do not know how to construct it without the heavy hammers in obfustopia. In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia. On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get “exponentially-efficient indistinguishability obfuscation” (XIO), a notion recently introduced by Lin et al. (PKC ’16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme. Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS ’15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption.
We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia. 1
7.3.16 Privately Outsourcing Exponentiation to a Single Server: Cryptanalysis and Optimal Constructions
In 5, we address the problem of speeding up group computations in cryptography using a single untrusted computational resource. We analyze the security of two efficient protocols for securely outsourcing (multi-)exponentiations. We show that the schemes do not achieve the claimed security guarantees, and we present practical polynomial-time attacks on the delegation protocols which allow the untrusted helper to recover part (or the whole) of the device’s secret inputs. We then provide simple constructions for outsourcing group exponentiations in different settings (e.g., public/secret, fixed/variable bases and public/secret exponents). Finally, we prove that our attacks are unavoidable if one wants to use a single untrusted computational resource and to limit the computational cost of the limited device to a constant number of (generic) group operations. In particular, we show that our constructions are actually optimal in terms of operations in the underlying group.
7.3.17 Functional Encryption and Distributed Signatures Based on Projective Hash Functions, the Benefit of Class Groups
One of the current challenges in cryptographic research is the development of advanced cryptographic primitives ensuring a high level of confidence. In this thesis, we focus on their design, while proving their security under well-studied algorithmic assumptions.
This work is grounded in the linearity of homomorphic encryption, which makes it possible to perform linear operations on encrypted data. Precisely, it builds upon the linearly homomorphic encryption scheme introduced by Castagnos and Laguillaumie at CT-RSA'15. Their scheme possesses the unusual property of having a prime-order plaintext space, whose size can essentially be tailored to one's needs. Aiming at a modular approach, technical tools are designed from their work (projective hash functions, zero-knowledge proofs of knowledge) which provide a rich framework lending itself to many applications.
This framework first makes it possible to build functional encryption schemes; this highly expressive primitive allows fine-grained access to the information contained in, e.g., an encrypted database. Then, in a different vein, but from these same tools, threshold digital signatures are designed, allowing a secret key to be shared among multiple users, so that the latter must collaborate in order to produce valid signatures. Such signatures can be used, among other applications, to secure cryptocurrency wallets.
Significant efficiency gains, namely in terms of bandwidth, result from the instantiation of these constructions from class groups. This work is at the forefront of the revival that these mathematical objects have seen in cryptography over the last few years. 32
7.4 Algebraic Computing and High-performance Kernels
7.4.1 Effective Coefficient Asymptotics of Multivariate Rational Functions via Semi-Numerical Algorithms for Polynomial Systems
The coefficient sequences of multivariate rational functions appear in many areas of combinatorics. Their diagonal coefficient sequences enjoy nice arithmetic and asymptotic properties, and the field of analytic combinatorics in several variables (ACSV) makes it possible to compute asymptotic expansions. We consider these methods from the point of view of effectivity. In particular, given a rational function, ACSV requires one to determine a (generically) finite collection of points that are called critical and minimal. Criticality is an algebraic condition, meaning it is well treated by classical methods in computer algebra, while minimality is a semi-algebraic condition describing points on the boundary of the domain of convergence of a multivariate power series. We show how to obtain dominant asymptotics for the diagonal coefficient sequence of multivariate rational functions under some genericity assumptions using symbolic-numeric techniques. To our knowledge, this is the first completely automatic treatment and complexity analysis for the asymptotic enumeration of rational functions in an arbitrary number of variables. 11
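As a classical warm-up illustration of what ACSV computes (this standard example is not taken from the paper):

```latex
% For F(x,y) = 1/(1-x-y), the diagonal coefficients are central binomials.
% The smooth critical point equations H = 0, x H_x = y H_y for H = 1 - x - y
% give the critical (and minimal) point (1/2, 1/2), from which the method
% recovers the dominant asymptotics:
\[
  [x^n y^n]\, \frac{1}{1-x-y} \;=\; \binom{2n}{n}
  \;\sim\; \frac{4^n}{\sqrt{\pi n}} .
\]
```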
7.4.2 Explicit Degree Bounds for Right Factors of Linear Differential Operators
If a linear differential operator with rational function coefficients is reducible, its factors may have coefficients with numerators and denominators of very high degree. We give a completely explicit bound for the degrees of the (monic) right factors in terms of the degree and the order of the original operator, as well as the largest modulus of the local exponents at all its singularities, for which bounds are known in terms of the degree, the order and the height of the original operator. 3
7.4.3 Fast Computation of Approximant Bases in Canonical Form
In 7, we design fast algorithms for the computation of approximant bases in shifted Popov normal form. We first recall the algorithm known as PM-Basis, which will be our second fundamental engine after polynomial matrix multiplication: most other fast approximant basis algorithms basically aim at efficiently reducing the input instance to instances for which PM-Basis is fast. Such reductions usually involve partial linearization techniques due to Storjohann, which have the effect of balancing the degrees and dimensions in the manipulated matrices. Following these ideas, Zhou and Labahn gave two algorithms which are faster than PM-Basis for important cases including Hermite-Padé approximation, yet only for shifts whose values are concentrated around the minimum or the maximum value. The three mentioned algorithms were designed for balanced orders and compute approximant bases that are generally not normalized. Here, we show how they can be modified to return the shifted Popov basis without impact on their cost bound; besides, we extend Zhou and Labahn's algorithms to arbitrary orders. Furthermore, we give an algorithm which handles arbitrary shifts with one extra logarithmic factor in the cost bound compared to the above algorithms. To the best of our knowledge, this improves upon previously known algorithms for arbitrary shifts, including for particular cases such as Hermite-Padé approximation. This algorithm is based on a recent divide-and-conquer approach that reduces the general case to the case where information on the output degree is available. As outlined above, we solve the latter case via partial linearizations and PM-Basis.
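As a concrete instance of the problem these bases solve, Hermite-Padé approximation in its simplest (two-series) form asks for polynomials p, q with q·f ≡ p mod x^(m+n+1). The following naive Python sketch (dense linear algebra over the rationals, nothing like the fast structured algorithms of the paper) computes such a Padé approximant:

```python
from fractions import Fraction

def solve(A, b):
    # Gauss-Jordan elimination over the rationals.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, m, n):
    # [m/n] Pade approximant of f = sum c_k x^k: find p (deg <= m) and
    # q (deg <= n, q(0) = 1) with q*f = p mod x^(m+n+1); needs len(c) >= m+n+1.
    c = [Fraction(ck) for ck in c]
    A = [[c[m + i - j] if m + i - j >= 0 else Fraction(0) for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    b = [-c[m + i] for i in range(1, n + 1)]
    q = [Fraction(1)] + solve(A, b)          # denominator coefficients
    p = [sum(q[j] * c[k - j] for j in range(min(k, n) + 1)) for k in range(m + 1)]
    return p, q
```

On the exponential series 1, 1, 1/2, 1/6 with m = n = 1, this returns p = [1, 1/2] and q = [1, -1/2], i.e. the approximation exp(x) ≈ (1 + x/2)/(1 - x/2). The fast algorithms of the paper compute whole bases of such approximants, for vectors of series and arbitrary degree shifts, in quasi-linear time.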
8 Bilateral contracts and grants with industry
8.1 Bilateral contracts with industry
Bosch (Germany) contracted us for support on the design and implementation of square root algorithms in fixed-point and floating-point arithmetic (participants: Claude-Pierre Jeannerod and Jean-Michel Muller).
8.2 Bilateral grants with industry
 Miruna Rosca and Radu Titiu are employees of BitDefender. Their PhDs are supervised by Damien Stehlé and Benoît Libert, respectively. Miruna Rosca works on the foundations of lattice-based cryptography, and Radu Titiu works on pseudorandom functions and functional encryption.
 Adel Hamdi is doing his PhD with Orange Labs and is supervised by Fabien Laguillaumie. He is working on advanced encryption protocols for the cloud.
 Orel Cosseron is doing his PhD with Zama SAS and is supervised by Damien Stehlé. He is working on fully homomorphic encryption.
9 Partnerships and cooperations
9.1 International initiatives
9.1.1 Participation in other international programs
IFCPAR Research grant with IIT Madras
Participants: Benoît Libert, Damien Stehlé, Dingding Jia.
The project “Computing on Encrypted Data: New Paradigms in Functional Encryption” has been funded by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR/CEFIPRA) since January 2019 (for 3 years) and is hosted by CNRS on the French side. The co-PIs are Shweta Agrawal (IIT Madras) and Benoît Libert. The global budget is 200,000€. This project deals with a cryptographic primitive called “Functional Encryption”, which aims at developing new techniques for evaluating expressive functions on encrypted data and obtaining the evaluation result in the clear. It aims at leveraging the potential of Euclidean lattices to build functional encryption schemes that evaluate expressive functions (represented by Boolean circuits or Turing machines) on encrypted data.
9.2 European initiatives
9.2.1 FP7 & H2020 Projects
PROMETHEUS H2020 Project
Participants: Benoît Libert, Damien Stehlé, Amit Deo, Octavie Paris.
PROMETHEUS (Privacy-Preserving Systems from Advanced Cryptographic Mechanisms Using Lattices) is a European H2020 project (call H2020-DS-2016-2017, Cybersecurity PPP Cryptography DS-06-2017) that started in January 2018 (http://
9.3 National initiatives
ANR ALAMBIC Project
Participants: Benoît Libert, Fabien Laguillaumie, Ida Tucker, Alonso Gonzalez.
ALAMBIC is a four-year project (started in October 2016) focused on the applications of cryptographic primitives with homomorphic or malleability properties. The project received a 6-month extension due to the COVID crisis and now ends in April 2021. The web page of the project is https://
RISQ Project
Participants: Chitchanok Chuengsatiansup, Rikki Amit Inder Deo, Hervé Tale Kalachi, Fabien Laguillaumie, Benoît Libert, Damien Stehlé.
RISQ (Regroupement de l’Industrie française pour la Sécurité Post-Quantique) is a BPI-DGE four-year project (started in January 2017) focused on the transfer of post-quantum cryptography from academia to industrial products. The web page of the project is http://
10 Dissemination
10.1 Promoting scientific activities
10.1.1 Scientific events: organisation
Member of the organizing committees
Bruno Salvy was an organizer of the workshop "Symbolic Analysis" of the conference "Foundations of Computational Mathematics" in Vancouver, which ended up being cancelled.
Damien Stehlé co-organized the workshop "Lattices: From Theory to Practice", which was part of the Simons Institute trimester "Lattices: Algorithms, Complexity, and Cryptography".
10.1.2 Scientific events: selection
Member of the conference program committees
Benoît Libert was a program committee member for PKC 2020, SCN 2020, Asiacrypt 2020, Eurocrypt 2021 and CT-RSA 2021.
Nathalie Revol was a member of the program committee for Arith 2020 and Arith 2021.
Bruno Salvy was a member of the program committee for AofA 2020.
Damien Stehlé was a member of the PQCrypto 2020 program committee.
Jean-Michel Muller was a member of the program committee for Arith 2020, Arith 2021, and ASAP 2021.
10.1.3 Journal
Member of the editorial boards
Nathalie Revol belongs to the editorial board of "Reliable Computing".
Bruno Salvy is an editor of the "Journal of Symbolic Computation", of "Annals of Combinatorics" and of the collection "Texts and Monographs in Symbolic Computation" (Springer).
Damien Stehlé is an editor for the "Journal of Cryptology" and "Designs, Codes and Cryptography".
Jean-Michel Muller is Associate Editor-in-Chief of the IEEE Transactions on Emerging Topics in Computing.
10.1.4 Leadership within the scientific community
Claude-Pierre Jeannerod is a member of the scientific committee of JNCF (Journées Nationales de Calcul Formel). He is also a member of the recruitment committee for postdocs and sabbaticals at Inria Grenoble–Rhône-Alpes.
Nathalie Revol is a member of the scientific committee of the GdR Calcul.
Bruno Salvy is on the steering committee of the conference AofA. He is a member of the scientific councils of CIRM (Luminy) and of the GDR Informatique Mathématique of the CNRS. He headed a selection committee for professor and assistant professor positions in computer science at École polytechnique and was a member of a selection committee for a professor position in mathematics at Université Versailles-Saint-Quentin.
Damien Stehlé is a member of the advisory board of the Cryptography Research Centre of the Technology Innovation Institute (Abu Dhabi).
Damien Stehlé is a member of the advisory board of CryptoNext Security (France).
Damien Stehlé was a member of a selection committee for an assistant professor position in computer science at ENS Lyon.
Jean-Michel Muller is on the steering committee of the Arith conference.
10.1.5 Scientific expertise
Nathalie Revol belonged to the scientific jury for the attribution of BQR grants at U. Perpignan. She has been an expert for the European H2020 program.
Jean-Michel Muller belongs to the scientific council of CERFACS.
10.1.6 Research administration
Jean-Michel Muller is co-head of the GdR IM.
10.2 Teaching  Supervision  Juries
Teaching
 Master: Nicolas Brisebarre, Approximation Theory and Proof Assistants: Certified Computations, 12h, M2, ENS de Lyon, France
 Master: ClaudePierre Jeannerod, FloatingPoint Arithmetic, 6h, M2, ENS de Lyon, France
 Master: JeanMichel Muller, FloatingPoint Arithmetic, 6h, M2, ENS de Lyon, France
 Master: Guillaume Hanrot, Computer algebra, 10h, ENS de Lyon, France
 Master: Guillaume Hanrot, Cryptanalysis, 15h, ENS de Lyon, France
 Master (1&2): Fabien Laguillaumie, Cryptography, 160h, ISFA, UCBL, France
 Master: Nicolas Louvet, Compilers, 15h, M1, UCB Lyon 1, France
 Master: Nicolas Louvet, Operating Systems, 30h, M2, UCB Lyon 1, France
 Master: Alain Passelègue, Computer Algebra, 10h, M1, ENS de Lyon, France
 Master: Alain Passelègue, Advanced Topics in Cryptography, 30h, M2, ENS de Lyon, France
 Master: Bruno Salvy, Computer Algebra, 6h, ENS de Lyon, France
 Master: Bruno Salvy, Logic and Complexity, 32h, École polytechnique, France
 Master: Gilles Villard, Computer Algebra, 8h, ENS de Lyon, France
 Master: Vincent Lefèvre, Computer arithmetic, 12h, ISFA (Institut de Science Financière et d'Assurances), Université Claude Bernard Lyon 1, France
 Bachelor: Guillaume Hanrot, Calculability and complexity, 32h, ENS de Lyon, France
 Bachelor: Nicolas Louvet, Computer Architecture, 6h, L1, UCB Lyon 1, France
 Bachelor: Nicolas Louvet, Operating Systems, 35h, L2, UCB Lyon 1, France
 Bachelor: Nicolas Louvet, Data Structures and Algorithms, 24h, L2, UCB Lyon 1, France
 Bachelor: Nicolas Louvet, Data Structures and Algorithms, 45h, L3, UCB Lyon 1, France
 Bachelor: Nicolas Louvet, Formal Languages, 21h, L3, UCB Lyon 1, France
 Bachelor: Nicolas Louvet, Classical Logic, 24h, L3, UCB Lyon 1, France
 Bachelor: Bruno Salvy, Design and Analysis of Algorithms, 20h, École polytechnique, France
Supervision
 PhD: Miruna Rosca, On Algebraic Variants of Learning With Errors, November 16, Damien Stehlé
 PhD: Radu Titiu, New Encryption Schemes and Pseudo Random Functions with Advanced Properties from Standard Assumptions, October 16, Benoît Libert
 PhD in progress: Huyen Nguyen, Cryptographic aspects of orthogonal lattices, September 2018, Damien Stehlé
 PhD in progress: Orel Cosseron, Fully homomorphic encryption, October 2020, Damien Stehlé (co-supervised by Pascal Paillier and Marc Joye, Zama)
 PhD in progress: Julien Devevey, Threshold cryptography, October 2020, co-supervised by Benoît Libert and Damien Stehlé
 PhD in progress: Adel Hamdi, Functional Encryption, December 2017, Fabien Laguillaumie (cosupervised by Sébastien Canard, Orange)
 PhD in progress: Ida Tucker, Advanced cryptographic primitives from homomorphic encryption, October 2017, Fabien Laguillaumie (codirection with Guilhem Castagnos, Université de Bordeaux)
Juries
 Fabien Laguillaumie was a reviewer and jury member for the PhD of Guillaume Kaim (Université de Rennes).
 Benoît Libert was a member of the PhD committee of Michele Orrù (ENS Paris) and Patrick Towa (LIP6).
 Nathalie Revol was a member of the jury of the CAPES NSI (the French certification for secondary-school computer-science teachers). She was also a member of the PhD committee of Andrea Bocco (U. Lyon).
 Bruno Salvy was a member of the PhD committee of Mickaël Maazoun (maths, ENS Lyon) and of the habilitation committee of Delphine Boucher (maths, U. Rennes).
 Damien Stehlé was a reviewer and jury member of the PhD of Mélissa Rossi (ENS Paris) and a reviewer for the PhD of Monosij Maitra (IIT Madras).
 Gilles Villard was a reviewer for the PhD thesis of Robin Larrieu (Université Paris-Saclay); examiner for the habilitation of Pascal Giorgi (Université de Montpellier) and the PhD thesis of Matías Bender (Sorbonne Université).
10.3 Popularization
10.3.1 Internal or external Inria responsibilities
Nathalie Revol is the scientific editor of the Web magazine Interstices.
Regarding gender equality: Nathalie Revol is the co-chair of the LIP parity committee and a member of the Inria parity committee.
10.3.2 Education
Nicolas Brisebarre was a scientific consultant for « Les maths et moi », a one-man show by Bruno Martins. He also took part in Q & A sessions with the audience after some shows.
10.3.3 Interventions
Nathalie Revol gave a talk on computer arithmetic for "OpenMinds", an event intended for ENS de Lyon students that pairs a science talk with a humanities talk.
For high-school pupils (about 70 pupils): as an incentive, especially for girls, to choose scientific careers, Nathalie Revol gave talks at the Mondial des Métiers (February 2020) and at "Sciences, un métier de femmes" (March 2020). She took part in two "Filles & Maths-Info" days, each gathering around 80 high-school girls in première S in France and Lebanon: as a speaker in May 2020 and as a co-organizer in December 2020.
Damien Stehlé was interviewed on LCI and Radio B to warn about the privacy risks of contact-tracing apps. He co-signed an opinion column in L'Humanité entitled "StopCovid. Ce traitement expéditif d'une question aussi centrale pour les libertés individuelles" ("StopCovid: an expeditious treatment of an issue so central to individual liberties").
11 Scientific production
11.1 Publications of the year
International journals
 1 article: From Cryptomania to Obfustopia Through Secret-Key Functional Encryption. Journal of Cryptology 33(2), April 2020, pp. 357–405.
 2 article: Emulating round-to-nearest ties-to-zero "augmented" floating-point operations using round-to-nearest ties-to-even arithmetic. IEEE Transactions on Computers, 2020.
 3 article: Explicit degree bounds for right factors of linear differential operators. Bulletin of the London Mathematical Society 53(1), February 2021, pp. 53–62.
 4 article: Error analysis of some operations involved in the Cooley–Tukey Fast Fourier Transform. ACM Transactions on Mathematical Software 46(2), May 2020, pp. 1–34.
 5 article: Privately Outsourcing Exponentiation to a Single Server: Cryptanalysis and Optimal Constructions. Algorithmica 83(1), January 2021, pp. 72–115.
 6 article: Computation of Tight Enclosures for Laplacian Eigenvalues. SIAM Journal on Scientific Computing 42(5), September 2020, pp. A3210–A3232.
 7 article: Fast computation of approximant bases in canonical form. Journal of Symbolic Computation 98, 2020, pp. 192–224.
 8 article: The relative accuracy of $(x+y)*(x-y)$. Journal of Computational and Applied Mathematics, 2020, pp. 1–17.
 9 article: On the smoothing parameter and last minimum of random orthogonal lattices. Designs, Codes and Cryptography 88(5), May 2020, pp. 931–950.
 10 article: Adaptively Secure Non-interactive CCA-Secure Threshold Cryptosystems: Generic Framework and Constructions. Journal of Cryptology 33, June 2020, pp. 1405–1441.
 11 article: Effective Coefficient Asymptotics of Multivariate Rational Functions via Semi-Numerical Algorithms for Polynomial Systems. Journal of Symbolic Computation 103, 2021, pp. 234–279.
 12 article: Elementary Functions and Approximate Computing. Proceedings of the IEEE 108(12), December 2020.
International peerreviewed conferences
 13 inproceedings: Adaptive Simulation Security for Inner Product Functional Encryption. PKC 2020 – International Conference on Public-Key Cryptography, Virtual, United Kingdom, June 2020, pp. 1–30.
 14 inproceedings: Faster Enumeration-Based Lattice Reduction: Root Hermite Factor $k^{1/(2k)}$ in Time $k^{k/8+o(k)}$. Advances in Cryptology – CRYPTO 2020, Santa Barbara, United States, 2020, pp. 186–212.
 15 inproceedings: Signatures of Knowledge for Boolean Circuits under Standard Assumptions (Full version). AFRICACRYPT 2020 – 12th International Conference on Cryptology in Africa, LNCS 12174, Cairo / Virtual, Egypt, Springer, July 2020.
 16 inproceedings: MPSign: A Signature from Small-Secret Middle-Product Learning with Errors. PKC 2020 – IACR International Conference on Public-Key Cryptography, Edinburgh, United Kingdom, April 2020, pp. 66–93.
 17 inproceedings: Blind Functional Encryption. ICICS 2020 – 22nd International Conference on Information and Communications Security, LNCS 12282, Copenhagen, Denmark, Springer, November 2020, pp. 183–201.
 18 inproceedings: Bandwidth-Efficient Threshold ECDSA. PKC 2020 – 23rd IACR International Conference on Practice and Theory of Public-Key Cryptography, Edinburgh / Virtual, United Kingdom, Springer, April 2020, pp. 266–296.
 19 inproceedings: ModFalcon: Compact Signatures Based On Module-NTRU Lattices. ASIACCS, Taipei, Taiwan, ACM, 2020, pp. 853–866.
 20 inproceedings: Lattice-Based E-Cash, Revisited. Asiacrypt 2020 – 26th Annual International Conference on the Theory and Application of Cryptology and Information Security, South Korea (held virtually due to COVID), December 2020, pp. 1–47.
 21 inproceedings: Alternative Constructions of Asymmetric Primitives from Obfuscation: Hierarchical IBE, Predicate Encryption, and More. Indocrypt 2020, Virtual conference, India, December 2020.
 22 inproceedings: Adaptively Secure ABE for DFA from k-Lin and More. EUROCRYPT 2020 – 39th Annual International Conference on the Theory and Applications of Cryptographic Techniques, LNCS 12107, Zagreb / Virtual, Croatia, Springer, May 2020, pp. 278–308.
 23 inproceedings: Alternative Split Functions and Dekker's Product. ARITH-2020 – IEEE 27th Symposium on Computer Arithmetic, Portland, United States, IEEE, June 2020, pp. 1–7.
 24 inproceedings: Algorithms for manipulating quaternions in floating-point arithmetic. ARITH-2020 – IEEE 27th Symposium on Computer Arithmetic, Portland, United States, IEEE, June 2020, pp. 1–8.
 25 inproceedings: Measure-Rewind-Measure: Tighter Quantum Random Oracle Model Proofs for One-Way to Hiding and CCA Security. Advances in Cryptology – EUROCRYPT 2020, Zagreb, Croatia, May 2020, pp. 703–728.
 26 inproceedings: Simulation-Sound Arguments for LWE and Applications to KDM-CCA2 Security. Asiacrypt 2020 – 26th Annual International Conference on the Theory and Application of Cryptology and Information Security, Virtual, South Korea, December 2020, pp. 1–67.
 27 inproceedings: New Constructions of Statistical NIZKs: Dual-Mode DV-NIZKs and More. Eurocrypt 2020 – 39th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zagreb / Virtual, Croatia, Springer, May 2020, pp. 1–85.
 28 inproceedings: An Architecture for Improving Variable Radix Real and Complex Division Using Recurrence Division. Proceedings of the 2020 Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA (virtual), United States, November 2020.
Scientific book chapters
 29 inbook: Influence of the Condition Number on Interval Computations: Illustration on Some Examples. In: Beyond Traditional Probabilistic Data Processing Techniques: Interval, Fuzzy etc. Methods and Their Applications, Studies in Computational Intelligence 835, Springer, 2020, pp. 359–373.
Doctoral dissertations and habilitation theses
 30 thesis: Miruna Rosca, On algebraic variants of Learning With Errors, Université de Lyon, November 2020.
 31 thesis: Radu Titiu, New Encryption Schemes and Pseudo-Random Functions with Advanced Properties from Standard Assumptions, Université de Lyon, October 2020.
 32 thesis: Ida Tucker, Functional encryption and distributed signatures based on projective hash functions, the benefit of class groups, Université de Lyon, October 2020.
Reports & preprints
 33 report: Termination of Ethereum's Smart Contracts. Univ Rennes, Inria, CNRS, IRISA, April 2020.
 34 misc: Formalization of double-word arithmetic, and comments on "Tight and rigorous error bounds for basic building blocks of double-word arithmetic". October 2020.
 35 misc: Computing the Characteristic Polynomial of Generic Toeplitz-like and Hankel-like Matrices. April 2021.