**The overall objective of AriC (Arithmetic and Computing) is, through computer arithmetic and computational
mathematics, to improve computing at large.**

A major challenge in modeling and scientific computing is the simultaneous mastery of hardware capabilities, software design, and mathematical algorithms for the efficiency of the computation. Furthermore, performance relates as much to efficiency as to reliability, requiring progress on automatic proofs, certificates, and code generation. In this context, computer arithmetic and mathematical algorithms are the keystones of AriC. Our approach reconciles fundamental studies, practical performance, and qualitative aspects, with a shared strategy going from high-level problem specifications and standardization actions, to computer arithmetic and the lowest-level details of implementations.

We focus on the following lines of action:

Design and integration of new methods and tools for mathematical program specification, certification, security, and guarantees on numerical results. Some main ingredients here are: the interleaving of formal proofs, computer arithmetic and computer algebra; error analysis and computation of certified error bounds; the study of the relationship between performance and numerical quality; and on the cryptology aspects, focus on the practicality of existing protocols and design of more powerful lattice-based primitives.

Generalization of a hybrid symbolic-numeric trend, and of the interplay between arithmetics, for both improving and controlling numerical approaches.

Mathematical and algorithmic foundations of computing. We address algorithmic complexity and fundamental aspects of approximation, polynomial and matrix algebra, and lattice-based cryptology. Practical questions concern the design of high performance and reliable computing kernels, thanks to optimized computer arithmetic operators and an improved adequacy between arithmetic bricks and higher level ones.

In line with the application domains that we target and our main fields of expertise, these lines of action are organized into three themes with specific objectives. These themes also correspond to complementary angles for addressing the general computing challenge stated at the beginning of this introduction:

**Efficient approximation methods** (§). Here lies the question of
interleaving formal proofs, computer arithmetic and computer algebra, for significantly extending the range of
functions whose reliable evaluation can be optimized.

**Lattices: algorithms and cryptology** (§). Long term goals are to go beyond the current
design paradigm in basis reduction, and to demonstrate the superiority of lattice-based cryptography over contemporary
public-key cryptographic approaches.

**Algebraic computing and high performance kernels** (§).
The problem is to keep the algorithm and software designs in line with the scales of computational capabilities and application needs,
by simultaneously working on the structural and the computer arithmetic levels.

We plan to focus on the generation of certified and efficient approximations for solutions of linear differential equations. These functions cover many classical mathematical functions, and many more can be built by combining them. One classical target area is the numerical evaluation of elementary or special functions, which is currently performed by code handcrafted for each function. The computation of approximations and the error analysis are major steps of this process that we want to automate, in order to reduce the probability of errors, to make it possible to implement “rare functions”, and to quickly adapt a function library to a new context: a new processor, or new requirements in terms of speed or accuracy.

In order to significantly extend the current range of functions under consideration, several methods originating from approximation theory have to be considered (divergent asymptotic expansions; Chebyshev or generalized Fourier expansions; Padé approximants; fixed-point iterations for integral operators). We have done preliminary work on some of them. Our plan is to revisit them all from the points of view of effectiveness and computational complexity (exploiting linear differential equations to obtain efficient algorithms), as well as of their ability to produce provable error bounds. This work would constitute major progress towards the automatic generation of efficient code for moderate- or arbitrary-precision evaluation. Other useful, if not critical, applications are certified quadrature, the determination of certified trajectories of objects in space, and many more important questions in optimal control theory.
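As a toy illustration of the first step of such a pipeline, the sketch below (plain Python, with illustrative helper names of our own) interpolates a function at Chebyshev nodes and estimates the approximation error on a fine grid. A certified version would replace the empirical estimate with a rigorous bound derived, e.g., from the differential equation satisfied by the function.

```python
import math

def cheb_nodes(n):
    # Chebyshev nodes of the first kind on [-1, 1]
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def newton_interp(f, xs):
    # In-place divided differences: Newton-form coefficients of the interpolant
    coef = [f(x) for x in xs]
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    # Horner-like evaluation of the Newton form
    y = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        y = y * (x - xs[i]) + coef[i]
    return y

xs = cheb_nodes(8)
coef = newton_interp(math.exp, xs)
# Empirical (NOT certified) error estimate over a fine grid;
# the automated pipeline would produce a provable bound instead.
grid = [-1 + 2 * t / 1000 for t in range(1001)]
err = max(abs(newton_eval(xs, coef, x) - math.exp(x)) for x in grid)
```

For exp on [-1, 1] with 8 nodes, the classical interpolation error bound e/(2^7 · 8!) already guarantees an error below 10^-6; automating such bounds, for much larger classes of functions, is precisely the goal stated above.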

As computer arithmeticians, a broad and important target for us is the design of efficient and certified linear filters in digital signal processing (DSP). Indeed, since the advent of Matlab as the major tool for filter design, DSP experts now systematically delegate to Matlab all parts of the design related to numerical issues. Yet various key Matlab routines are neither optimized nor certified. There is therefore much room for improving numerous DSP numerical implementations, and several promising approaches exist to do so.

The main challenge that we want to address over the next period is the development and the implementation of optimal methods for rounding the coefficients involved in the design of the filter. If done in a naive way, this rounding may lead to a significant loss of performance. We will study in particular FIR and IIR filters.
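To make the coefficient-rounding issue concrete, here is a minimal sketch (plain Python, with illustrative function names): it designs a windowed-sinc FIR filter, naively rounds its coefficients to a fixed-point format, and bounds the worst-case deviation of the frequency response. The optimal rounding methods mentioned above aim to do significantly better than this naive coefficient-by-coefficient rounding.

```python
import math, cmath

def lowpass_fir(n_taps, cutoff):
    # Windowed-sinc FIR design (Hamming window); cutoff in (0, 0.5) cycles/sample
    m = n_taps - 1
    h = []
    for k in range(n_taps):
        t = k - m / 2
        s = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * k / m)
        h.append(s * w)
    return h

def quantize(h, frac_bits):
    # Naive round-to-nearest of each coefficient to a fixed-point grid
    q = 2.0 ** frac_bits
    return [round(c * q) / q for c in h]

def max_response_err(h, hq, n_freq=256):
    # Worst-case deviation of the frequency response after quantization
    err = 0.0
    for i in range(n_freq):
        w = math.pi * i / n_freq
        H = sum((a - b) * cmath.exp(-1j * w * k)
                for k, (a, b) in enumerate(zip(h, hq)))
        err = max(err, abs(H))
    return err

h = lowpass_fir(31, 0.2)
hq = quantize(h, 8)   # 8 fractional bits
```

With 31 taps and 8 fractional bits, the naive bound on the response deviation is 31/512; optimal coefficient rounding exploits error cancellation across taps to beat this kind of bound.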

There is a clear demand for hardest-to-round cases, and several computer manufacturers recently contacted us to obtain new ones. These hardest-to-round cases are a precious help for building libraries of correctly rounded mathematical functions. The current code, based on Lefèvre's algorithm, will be rewritten and formally proved.

We plan to use uniform polynomial approximation and Diophantine techniques to tackle the case of IEEE quad precision, and analytic number theory techniques (exponential sum estimates) for counting the hardest-to-round cases.
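The notion of a hardest-to-round case can be illustrated by brute force in a toy precision (this is emphatically not Lefèvre's algorithm, which avoids the exhaustive function evaluations below): for each precision-P input, measure how close the exact function value lies to a rounding boundary.

```python
import math

P = 12  # toy target precision, in bits

def hardness(x):
    # Distance (within one ulp) of exp(x) from the nearest rounding boundary,
    # i.e., a midpoint between consecutive precision-P numbers.
    # A value of 0 means exp(x) falls exactly on a midpoint: hardest to round.
    y = math.exp(x)
    e = math.frexp(y)[1]          # y lies in [2^(e-1), 2^e)
    ulp = 2.0 ** (e - P)          # ulp of a precision-P number near y
    return abs((y / ulp) % 1.0 - 0.5)

# Exhaustive scan of all precision-P floating-point inputs in [1, 2)
inputs = [1 + k * 2.0 ** (1 - P) for k in range(2 ** (P - 1))]
worst = min(inputs, key=hardness)
```

For realistic formats (binary64, and a fortiori binary128) the input space is far too large for such a scan, which is why Lefèvre's algorithm, uniform polynomial approximation, and exponential-sum estimates are needed.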

Lattice-based cryptography (LBC) is an exceptionally promising, attractive (and competitive) research ground in cryptography, thanks to a combination of unmatched properties:

**Improved performance.** LBC primitives have low asymptotic costs, but remain cumbersome in practice (e.g., for parameters achieving security against computations of up to 2^100 bit operations). To address this limitation, a whole branch of LBC has evolved where security relies on the restriction of lattice problems to a family of more structured lattices called *ideal lattices*. Primitives based on such lattices can have quasi-optimal costs (i.e., quasi-constant amortized complexities), outperforming all contemporary primitives. This asymptotic performance sometimes translates into practice, as exemplified by NTRUEncrypt.

**Improved security.** First, lattice problems seem to remain hard even for quantum computers. Moreover, the security of most of LBC holds under the assumption that standard lattice problems are hard in the worst case. In contrast, contemporary cryptography assumes that specific problems are hard with high probability, for some precise input distributions. Many of these problems were introduced artificially to serve as a security foundation for new primitives.

**Improved flexibility.** The master primitives (encryption, signature) can all be realized based on worst-case (ideal) lattice assumptions. More evolved primitives such as ID-based encryption (where the public key of a recipient can be publicly derived from their identity) and group signatures, which were the playground of pairing-based cryptography (a subfield of elliptic-curve cryptography), can also be realized in the LBC framework, although less efficiently and with restricted security properties. More intriguingly, lattices have enabled long-wished-for primitives. The most notable example is homomorphic encryption, which enables computations on encrypted data. It is the appropriate tool to securely outsource computations, and will help overcome the privacy concerns that are slowing down the adoption of the cloud.

We now detail the three directions on which we work.

All known lattice reduction algorithms follow the same design principle: perform a sequence of small elementary steps transforming a current basis of the input lattice, where these steps are driven by the Gram-Schmidt orthogonalisation of the current basis.
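This design principle is visible in the textbook LLL algorithm, sketched below over exact rational arithmetic: all elementary steps (size reductions and swaps) are driven by the Gram-Schmidt orthogonalisation (GSO) of the current basis. This is a deliberately naive version that recomputes the GSO from scratch at every step; practical implementations such as those in fplll instead maintain floating-point approximations of the GSO.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    # Textbook LLL over the rationals: size-reduction steps and swaps,
    # driven by the Gram-Schmidt orthogonalisation of the current basis.
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def gso():
        mu = [[Fraction(0)] * n for _ in range(n)]
        bstar, norm = [], []
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = sum(x * y for x, y in zip(b[i], bstar[j])) / norm[j]
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
            norm.append(sum(x * x for x in v))
        return mu, norm

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce b_k against b_j
            mu, _ = gso()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        mu, norm = gso()
        if norm[k] >= (delta - mu[k][k - 1] ** 2) * norm[k - 1]:
            k += 1                           # Lovász condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]  # swap and backtrack
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

red = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

On this small example the reduced basis consists of short, nearly orthogonal vectors, while the lattice (and hence the determinant, up to sign) is preserved.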

In the short term, we will fully exploit this paradigm, and hopefully lower the cost of reduction algorithms with respect to the lattice dimension. We aim at asymptotically fast algorithms with complexity bounds closer to those of basic and normal form problems (matrix multiplication, Hermite normal form). In the same vein, we plan to investigate the parallelism potential of these algorithms.

Our long term goal is to go beyond the current design paradigm, to reach better trade-offs between run-time and shortness of the output bases. To reach this objective, we first plan to strengthen our understanding of the interplay between lattice reduction and numerical linear algebra (how far can we push the idea of working on approximations of a basis?), to assess the necessity of using the Gram-Schmidt orthogonalisation (e.g., to obtain a weakening of LLL-reduction that would work up to some stage, and save computations), and to determine whether working on generating sets can lead to more efficient algorithms than manipulating bases. We will also study algorithms for finding shortest non-zero vectors in lattices, and in particular look for quantum accelerations.

We will implement and distribute all algorithmic improvements, e.g., within the fplll library. We are interested in high performance lattice reduction computations (see application domains below), in particular in connection/continuation with the HPAC ANR project (algebraic computing and high performance consortium).

Our long-term goal is to demonstrate the superiority of lattice-based cryptography over contemporary public-key cryptographic approaches. For this, we will (1) strengthen its security foundations, (2) drastically improve the performance of its primitives, and (3) show that lattices make it possible to devise advanced and elaborate primitives.

The practical security foundations will be strengthened by the improved understanding of the limits of lattice reduction algorithms (see above). On the theoretical side, we plan to attack two major open problems: Are ideal lattices (lattices corresponding to ideals in rings of integers of number fields) computationally as hard to handle as arbitrary lattices? What is the quantum hardness of lattice problems?

Lattice-based primitives involve two types of operations: sampling from discrete Gaussian distributions
(with lattice supports), and arithmetic in polynomial rings such as
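A minimal sketch of the first operation, sampling a discrete Gaussian over the integers by rejection from a truncated uniform window (illustrative only; practical samplers are far more efficient, and often need to be constant-time):

```python
import math, random

def sample_z(sigma, center=0.0, tail=10):
    # Rejection sampling of a discrete Gaussian over Z with parameter sigma:
    # draw uniformly in a truncated window around the center, and accept
    # with probability proportional to exp(-(x - c)^2 / (2 sigma^2)).
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        x = random.randint(lo, hi)
        if random.random() < math.exp(-(x - center) ** 2 / (2 * sigma ** 2)):
            return x

random.seed(1)
samples = [sample_z(3.2) for _ in range(2000)]
```

Sampling with lattice supports reduces, via randomized nearest-plane procedures, to many such integer samplings with varying centers, which is why the efficiency of this basic step matters.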

Our main objective in terms of cryptographic functionality will be to determine the extent to which lattices can help secure cloud services. For example, is there a way for users to delegate computations on their outsourced dataset while minimizing what the server eventually learns about their data? Can servers compute on encrypted data in an efficiently verifiable manner? Can users retrieve their files and query remote databases anonymously, provided they hold appropriate credentials? Lattice-based cryptography is the only approach so far that has made progress in these directions possible. We will investigate the practicality of the current constructions, the extension of their properties, and the design of more powerful primitives, such as functional encryption (which allows the recipient to learn only a function of the plaintext message). To achieve these goals, we will in particular focus on cryptographic multilinear maps.

This research axis of AriC is gaining strength thanks to the recruitment of Benoît Libert. We will be particularly interested in practical and operational impacts, and for this reason we envision a collaboration with an industrial partner.

Diophantine equations. Lattice reduction algorithms can be used to solve Diophantine equations, and in particular to find simultaneous rational approximations to real numbers. We plan to investigate the interplay between this algorithmic task, the task of finding integer relations between real numbers, and lattice reduction. A related question is to devise LLL-reduction algorithms that exploit specific shapes of input bases. This will be done within the ANR DynA3S project.

Communications. We will continue our collaboration with Cong Ling (Imperial College) on the use of lattices in communications. We plan to work on the wiretap channel over a fading channel (modeling cell-phone communications in a fast-moving environment). The current approaches rely on ideal lattices, and we hope to find new approaches thanks to the expertise on them that we have acquired through their use in lattice-based cryptography. We will also tackle the problem of sampling vectors from Gaussian distributions with lattice support, for a very small standard deviation parameter. This would significantly improve current lattice-based communication schemes, as well as several cryptographic primitives.

Cryptanalysis of variants of RSA. Lattices have been used extensively to break variants of the RSA encryption scheme, via Coppersmith's method for finding small roots of polynomials. We plan to work with Nadia Heninger (U. of Pennsylvania) on improving these attacks, to make them more practical. This is an excellent test case for the practicality of LLL-type algorithms. Nadia Heninger has strong experience in large-scale cryptanalysis based on Coppersmith's method (http://

The main theme here is the study of fundamental operations (“kernels”) on a hierarchy of symbolic or numeric data types spanning integers, floating-point numbers, polynomials, power series, as well as matrices of all these. Fundamental operations include basic arithmetic (e.g., how to multiply or how to invert) common to all such data, as well as more specific ones (change of representation/conversions, GCDs, determinants, etc.). For such operations, which are ubiquitous and at the very core of computing (be it numerical, symbolic, or hybrid numeric-symbolic), our goal is to ensure both high-performance and reliability.

On the symbolic side, we will focus on the design and complexity analysis of algorithms for matrices over various domains (fields, polynomials, integers) and possibly with specific properties (structure). So far, our algorithmic improvements for polynomial matrices and structured matrices have been obtained in a rather independent way. Both types are well known to have much in common, but this is sometimes not reflected by the complexities obtained, especially for applications in cryptology and coding theory. Our goal in this area is thus to explore these connections further, to provide a more unified treatment, and eventually to bridge these complexity gaps. A first step towards this goal will be the design of enhanced algorithms for various generalizations of Hermite-Padé approximation; in the context of list decoding, this should in particular make it possible to match or even improve on the structured-matrix approach, which is so far the fastest known.
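The simplest instance of Hermite-Padé approximation is the classical Padé approximant, which reduces to a small linear system over the rationals. The sketch below (plain Python, illustrative; real algorithms exploit the displacement structure of this system instead of dense elimination) recovers the well-known [2/2] approximant of exp.

```python
from fractions import Fraction
from math import factorial

def solve(A, rhs):
    # Dense Gaussian elimination over the rationals (illustrative only)
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def pade(c, m, n):
    # [m/n] Pade approximant p/q from Taylor coefficients c[0..m+n]:
    # impose q * (sum c_i x^i) = p (mod x^(m+n+1)) with q_0 = 1, so that
    # the coefficients of x^(m+1), ..., x^(m+n) of q * c must vanish.
    A = [[c[m + 1 + i - j] if m + 1 + i - j >= 0 else Fraction(0)
          for j in range(1, n + 1)] for i in range(n)]
    rhs = [-c[m + 1 + i] for i in range(n)]
    q = [Fraction(1)] + solve(A, rhs)
    p = [sum(q[j] * c[i - j] for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return p, q

c = [Fraction(1, factorial(k)) for k in range(5)]  # Taylor series of exp
p, q = pade(c, 2, 2)
```

The structured-matrix viewpoint mentioned above comes precisely from the Toeplitz-like shape of the matrix A, which faster algorithms exploit.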

On the numeric side, we will focus on the design of algorithms for certified computing. We will study the use of various representations, such as mid-rad for classical interval arithmetic, or affine arithmetic. We will explore the impact of precision tuning in intermediate computations, possibly done dynamically, on the accuracy of the results (e.g., for iterative refinement and Newton iterations). We will continue to revisit and improve the classical error bounds of numerical linear algebra in the light of the subtleties of IEEE floating-point arithmetic.
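For illustration, a minimal mid-rad interval type might look as follows (a sketch only: the outward padding for the rounding of midpoint operations is a deliberately crude few-ulp bound, where a real library would use directed rounding):

```python
import math

class MidRad:
    # Mid-rad interval [mid - rad, mid + rad]; a crude outward padding of a
    # few ulps accounts for the rounding of the midpoint operations.
    def __init__(self, mid, rad=0.0):
        self.mid, self.rad = mid, rad

    def _pad(self, mid, rad):
        eps = math.ulp(abs(mid)) if mid else 0.0
        return MidRad(mid, rad + 2 * eps)

    def __add__(self, o):
        return self._pad(self.mid + o.mid, self.rad + o.rad)

    def __mul__(self, o):
        # |xy - mx*my| <= |mx|*ry + |my|*rx + rx*ry for x, y in the intervals
        rad = abs(self.mid) * o.rad + abs(o.mid) * self.rad + self.rad * o.rad
        return self._pad(self.mid * o.mid, rad)

x = MidRad(1.0, 0.001)
y = MidRad(2.0, 0.002)
z = x * y
```

The mid-rad form is attractive for multiple precision because the radius can be kept in low precision while only the midpoint needs full precision.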

Our goals in linear algebra and lattice basis reduction that have been detailed above in Section will be achieved in the light of a hybrid symbolic-numeric approach.

Our work on certified computing and especially on the analysis of algorithms in floating-point arithmetic leads us to manipulate floating-point data in their greatest generality, that is, as symbolic expressions in the base and the precision. Our aim here is thus to develop theorems as well as efficient data structures and algorithms for handling such quantities by computer rather than by hand as we do now. The main outcome would be a “symbolic floating-point toolbox” which provides a way to check automatically the certificates of optimality we have obtained on the error bounds of various numerical algorithms.
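A tiny example of the kind of fact such a toolbox should check automatically, with the precision p kept as an explicit parameter and the verification done exactly over the rationals: the classical bound (1+u)^n <= 1 + n*u/(1 - n*u) on accumulated rounding errors, where u = 2^-p is the unit roundoff (this sketch merely tests the bound for chosen p and n; the toolbox would prove it symbolically).

```python
from fractions import Fraction

def check_gamma_bound(p, n):
    # Worst-case check of the classical bound (1+u)^n <= 1 + n*u/(1 - n*u),
    # with u = 2^-p the unit roundoff, computed exactly over the rationals.
    u = Fraction(1, 2 ** p)
    assert n * u < 1          # the bound only makes sense when n*u < 1
    return (1 + u) ** n <= 1 + n * u / (1 - n * u)

# Exact checks across several precisions (binary16-like, binary32, binary64)
ok = all(check_gamma_bound(p, n) for p in (11, 24, 53) for n in (1, 2, 10, 1000))
```

Working over the rationals sidesteps any rounding in the verification itself, which is exactly the spirit of manipulating floating-point data symbolically in the base and the precision.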

We will also work on the interplay between floating-point and integer arithmetics.
Currently, small numerical kernels like an exponential or a

A third direction will be to work on algorithms for performing correctly rounded arithmetic operations in medium precision as efficiently and reliably as possible. Indeed, many numerical problems require higher precision than the conventional floating-point (single, double) formats. One solution is to use multiple-precision libraries, such as GNU MPFR, which allow the manipulation of very high precision numbers, but their generality (they are able to handle numbers with millions of digits) makes them a rather heavyweight alternative when high performance is needed. Our objective here is thus to design a multiple-precision arithmetic library for problems where a precision of a few hundred bits is sufficient but performance requirements are strong. Applications include the long-term iteration of chaotic dynamical systems, ranging from the classical Hénon map to calculations of planetary orbits. The designed algorithms will be formally proved.
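Medium-precision arithmetics of this kind are typically built on error-free transformations such as Knuth's two-sum, sketched below (in Python for readability) together with a simplified double-double addition; a production version must handle renormalization corner cases and special values.

```python
def two_sum(a, b):
    # Knuth's error-free transformation: s and e are floats with
    # s = fl(a + b) and s + e == a + b exactly (as real numbers).
    s = a + b
    bb = s - a
    return s, (a - (s - bb)) + (b - bb)

def dd_add(xh, xl, yh, yl):
    # Simplified addition of double-double numbers (hi, lo pairs);
    # illustrative sketch without the full renormalization step.
    s, e = two_sum(xh, yh)
    e += xl + yl
    return two_sum(s, e)
```

Chaining such transformations yields roughly twice the working precision at a small constant-factor cost, which is the regime targeted by the library described above.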

Finally, our work on the IEEE 1788 standard leads naturally to the development of associated reference libraries for interval arithmetic. A first direction will be to implement IEEE 1788 interval arithmetic within MPFI, our library for interval arithmetic using the arbitrary precision floating-point arithmetic provided by MPFR: indeed, MPFI has been originally developed with definitions and handling of exceptions which are not compliant with IEEE 1788. Another one will be to provide efficient support for multiple-precision intervals, in mid-rad representation and by developing MPFR-based code-generation tools aimed at handling families of functions.

The algorithmic developments for medium precision floating-point arithmetic discussed above will lead to high performance implementations on GPUs. As a follow-up of the HPAC project (which will end in December 2015) we will pursue the design and implementation of high performance linear algebra primitives and algorithms.

Our expertise in validated numerics is useful for analyzing, improving, and guaranteeing the quality of numerical results in a wide range of applications, including:

- scientific simulation;
- global optimization;
- control theory.

Much of our work, in particular the development of correctly rounded elementary functions, is critical to the reproducibility of floating-point computations.

Lattice reduction algorithms have direct applications in:

- public-key cryptography;
- Diophantine equations;
- communications theory.

Since 1969, ARITH has been the primary international conference for presenting the latest research in computer arithmetic. In June 2015, we organized it in Lyon.

At ISSAC'2015.

Best papers at Eurocrypt'2015

fplll contains several algorithms on lattices that rely on floating-point computations. This includes implementations of the floating-point LLL reduction algorithm, offering different trade-offs between speed and guarantees. It contains a “wrapper” that chooses the estimated best sequence of variants in order to provide a guaranteed output as fast as possible; the succession of variants is oblivious to the user. It also includes a rigorous floating-point implementation of the Kannan-Fincke-Pohst algorithm, which finds a shortest non-zero lattice vector, and of the BKZ reduction algorithm.

The fplll library is distributed under the LGPL license. It has been used in or ported to several mathematical computation systems such as Magma, Sage, and PARI/GP. It is also used for cryptanalytic purposes, to test the resistance of cryptographic primitives.

Participants: Shi Bai, Damien Stehlé

Contact: Damien Stehlé

Keywords: Multiple-Precision - Floating-point - Correct Rounding

GNU MPFR is an efficient multiple-precision floating-point library
written in C with well-defined semantics (copying the good ideas from
the IEEE-754 standard), in particular correct rounding in 5 rounding
modes. GNU MPFR provides about 80 mathematical functions, in addition
to utility functions (assignments, conversions...). Special data
(*Not a Number*, infinities, signed zeros) are handled like in
the IEEE-754 standard. It is distributed under the LGPL license.

The development of MPFR started in Loria (Nancy). When Vincent Lefèvre moved from Nancy to Lyon, it became a joint project between the project-team Caramel (Nancy) and AriC. Many systems use MPFR, several of them being listed on its web page. MPFR 3.1.3 was released on 19 June 2015.

New developments in the trunk: full rewrite of `mpfr_sum` completed, with new tests. Generic tests improved. Bug fixes and various improvements, in particular concerning the flags.

Participants: Vincent Lefèvre, Guillaume Hanrot and Paul Zimmermann

Contact: Vincent Lefèvre

URL: http://

Gfun is a Maple package that provides tools for guessing a sequence or a series from its first terms, and for rigorously manipulating solutions of linear differential or recurrence equations, using the equation as a data structure.

Its development moved to AriC with Bruno Salvy in 2012, while a submodule NumGfun dedicated to symbolic-numeric computations with linear ODEs has been developed by Marc Mezzarobba during his post-doc at AriC. An old version of gfun is distributed with the Maple library. Newer versions are available on the web page of gfun, which also lists a number of articles by scientists who cited it.

Contact: Bruno Salvy

URL: http://

Keywords: Floating-point - Correct Rounding

Sipe is a mini-library, in the form of a C header file, for performing radix-2 floating-point computations in very low precisions with correct rounding, either to nearest or toward zero. The goal of such a tool is to prove algorithms or properties, or to compute tight error bounds, in these precisions by exhaustive testing, in order to try to generalize them to higher precisions. It is distributed under the LGPL license and is mostly used internally.

Participant: Vincent Lefèvre

Contact: Vincent Lefèvre

LinBox is a C++ template library for exact, high-performance linear algebra computation with dense, sparse, and structured matrices over the integers and over finite fields. LinBox is distributed under the LGPL license. The library is developed by a consortium of researchers in Canada, USA, and France. Clément Pernet is a main contributor, especially with a focus on parallel aspects during the period covered by this report.

Participant:

Contact: Clément Pernet

The search for the worst cases for the correct rounding
(hardest-to-round cases) of mathematical functions (

The support for the

Participant: Vincent Lefèvre

Contact: Vincent Lefèvre

A Perl implementation of algorithms for the multiplication by integer constants has been updated to get more results based on exhaustive tests: threading has been implemented in this part of the script.

Participant: Vincent Lefèvre

Contact: Vincent Lefèvre

We improve the usual relative error bound for the computation of

In order to derive efficient and robust floating-point implementations of a given function

In their book, Scientific Computing on the Itanium, Cornea et al. (2002) introduce an accurate algorithm for evaluating expressions of the form

Let

In collaboration with Christoph Lauter and Marc Mezzarobba (LIP6 laboratory, Paris), Nicolas Brisebarre and Jean-Michel Muller introduce an algorithm to compare a binary floating-point (FP) number and a decimal FP number, assuming the “binary encoding” of the decimal formats is used, and with a special emphasis on the basic interchange formats specified by the IEEE 754-2008 standard for FP arithmetic. It is a two-step algorithm: a first pass, based on the exponents only, quickly eliminates most cases; then, when the first pass does not suffice, a more accurate second pass is performed. They provide an implementation of several variants of their algorithm and compare them.

We design a linearly homomorphic encryption scheme whose security relies on the hardness of the decisional Diffie-Hellman problem. Our approach requires some special features of the underlying group. In particular, its order is unknown and it contains a subgroup in which the discrete logarithm problem is tractable. Therefore, our instantiation holds in the class group of a non-maximal order of an imaginary quadratic field. Its algebraic structure makes it possible to obtain such a linearly homomorphic scheme whose message space is the whole set of integers modulo a prime

As formalized by Kiltz et al. (ICALP'05), append-only signatures (AOS) are digital signature schemes where anyone can publicly append extra message blocks to an already signed sequence of messages. This property is useful, e.g., in secure routing, or in collecting response lists, reputation lists, or petitions. Bethencourt, Boneh and Waters (NDSS'07) suggested an interesting variant, called history-hiding append-only signatures (HH-AOS), which handles messages as sets rather than ordered tuples. This HH-AOS primitive is useful when the exact order of signing needs to be hidden. When free of subliminal channels (i.e., channels that can tag elements in an undetectable fashion), it also finds applications in the storage of ballots on electronic voting terminals or in other archival applications (such as the record of petitions, where we want to hide the influence among messages). However, the only subliminal-free HH-AOS to date only provides heuristic arguments in terms of security: only a proof in the idealized (non-realizable) random oracle model is given. This paper provides the first HH-AOS construction secure in the standard model. Like the system of Bethencourt et al., our HH-AOS features constant-size public keys, no matter how long the messages to be signed are, which is atypical (we note that secure constructions often suffer from a space penalty when compared to their random-oracle-based counterparts). As a second result, we show that, even if we use it to sign ordered vectors as in an ordinary AOS (which is always possible with HH-AOS), our system provides considerable advantages over existing realizations. As a third result, we show that HH-AOS schemes provide improved identity-based ring signatures (i.e., in prime-order groups and with better efficiency than the state-of-the-art schemes).

Quasi-adaptive non-interactive zero-knowledge (QA-NIZK) proofs are a powerful paradigm, suggested recently by Jutla and Roy (Asiacrypt'13), which is motivated by the seminal Groth-Sahai techniques for efficient non-interactive zero-knowledge (NIZK) proofs. In this paradigm, the common reference string may depend on specific language parameters, a fact that allows much shorter proofs in important cases. It even makes certain standard model applications competitive with the Fiat-Shamir heuristic in the Random Oracle idealization (such QA-NIZK proofs were recently optimized to constant size by Jutla and Roy (Crypto'14) and Libert et al. (Eurocrypt'14) for the important case of proving that a vector of group elements belongs to a linear subspace). While, e.g., the QA-NIZK arguments of Libert et al. provide unbounded simulation-soundness and constant proof length, their simulation-soundness is only loosely related to the underlying assumption (with a gap proportional to the number of adversarial queries) and it is unknown how to alleviate this limitation without sacrificing efficiency. Here, we deal with the basic question of whether and to what extent we can simultaneously optimize the proof size and the tightness of security reductions, allowing for important applications with tight security (which are typically quite lengthy to date) to be of shorter size. In this paper, we resolve this question by describing a novel simulation-sound QA-NIZK argument showing that a vector v ∈ G^n belongs to a subspace of rank t < n using a constant number of group elements. Unlike previous constant-size QA-NIZK proofs of such statements, the unbounded simulation-soundness of our system is nearly tightly related (i.e., the reduction only loses a factor proportional to the security parameter) to the standard Decision Linear assumption.
To show simulation-soundness in the constrained context of tight reductions, we employ a number of techniques, and explicitly point at a technique – which may be of independent interest – of hiding the linear span of a structure-preserving homomorphic signature (which is part of an OR proof). As an application, we design a public-key cryptosystem with almost tight CCA2-security in the multi-challenge, multiuser setting with improved length (asymptotically optimal for long messages). We also adapt our scheme to provide CCA security in the key-dependent message scenario (KDM-CCA2) with ciphertext length reduced by 75% when compared to the best known tightly secure KDM-CCA2 system so far.

Group signatures are a central cryptographic primitive which allows users to sign messages while hiding their identity within a crowd of group members. In the standard model (without the random oracle idealization), the most efficient constructions rely on the Groth-Sahai proof systems (Eurocrypt'08). The structure-preserving signatures of Abe et al. (Asiacrypt'12) make it possible to design group signatures based on well-established, constant-size number theoretic assumptions (a.k.a. “simple assumptions”) like the Symmetric eXternal Diffie-Hellman or Decision Linear assumptions. While much more efficient than group signatures built on general assumptions, these constructions incur a significant overhead w.r.t. constructions secure in the idealized random oracle model. Indeed, the best known solution based on simple assumptions requires 2.8 kB per signature for currently recommended parameters. Reducing this size and presenting techniques for shorter signatures are thus natural questions. In this paper, our first contribution is to significantly reduce this overhead. Namely, we obtain the first fully anonymous group signatures based on simple assumptions with signatures shorter than 2 kB at the 128-bit security level. In dynamic (resp. static) groups, our signature length drops to 1.8 kB (resp. 1 kB). This improvement is enabled by two technical tools. As a result of independent interest, we first construct a new structure-preserving signature based on simple assumptions which shortens the best previous scheme by 25%. Our second tool is a new method for attaining anonymity in the strongest sense using a new CCA2-secure encryption scheme which is simultaneously a Groth-Sahai commitment.

Multilinear maps have become popular tools for designing cryptographic schemes since a first approximate realisation candidate was proposed by Garg, Gentry and Halevi (GGH). This construction was later improved by Langlois, Stehlé and Steinfeld, who proposed GGHLite, which offers smaller parameter sizes. In this work, we provide the first implementation of such approximate multilinear maps based on ideal lattices. Implementing GGH-like schemes naively would not allow instantiating them for non-trivial parameter sizes. We hence propose a strategy which reduces parameter sizes further, as well as several technical improvements to allow for an efficient implementation. In particular, since finding a prime ideal when generating instances is an expensive operation, we show how we can drop this requirement. We also propose algorithms and implementations for sampling from discrete Gaussians, for inverting in some cyclotomic number fields, and for computing norms of ideals in some cyclotomic number rings. Due to our improvements we were able to compute a multilinear jigsaw puzzle for

The Rényi divergence is a means of measuring the closeness of two distributions. We show that it can often be used as an alternative to the statistical distance in security proofs for lattice-based cryptography. Using the Rényi divergence is particularly suited for security proofs of primitives in which the attacker is required to solve a search problem (e.g., forging a signature). We show that it may also be used in the case of distinguishing problems (e.g., semantic security of encryption schemes), when they enjoy a public sampleability property. The techniques lead to security proofs for schemes with smaller parameters.
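For concreteness, a sketch of the exponentiated Rényi divergence of order a > 1 between discrete distributions, together with the probability-preservation property that drives such security proofs (the two toy distributions are ours, chosen only for illustration):

```python
def renyi_divergence(P, Q, a=2.0):
    """Exponentiated Renyi divergence of order a > 1 between discrete
    distributions given as {outcome: probability} dicts, assuming
    the support of P is contained in the support of Q."""
    s = sum(p ** a / Q[x] ** (a - 1) for x, p in P.items() if p > 0)
    return s ** (1.0 / (a - 1))

# Probability preservation: for any event E,
#   Q(E) >= P(E)^(a/(a-1)) / R_a(P, Q),
# so when R_a is small, an attack succeeding under P also succeeds
# under Q with comparable probability.
P = {0: 0.5, 1: 0.5}
Q = {0: 0.6, 1: 0.4}
R = renyi_divergence(P, Q)   # slightly above 1 for close P and Q
```

The multiplicative loss R contrasts with the additive loss of the statistical distance, which is what enables smaller parameters in search-problem settings.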

Functional encryption is a modern public-key paradigm where a master secret key can be used to derive sub-keys SK_F associated with certain functions

Two main computational problems serve as security foundations of current fully homomorphic encryption schemes: Regev's Learning With Errors problem (LWE) and Howgrave-Graham's Approximate Greatest Common Divisor problem (AGCD). Our first contribution is a reduction from LWE to AGCD. As a second contribution, we describe a new AGCD-based fully homomorphic encryption scheme, which outperforms all prior AGCD-based proposals: its security does not rely on the presumed hardness of the so-called Sparse Subset Sum problem, and the bit-length of a ciphertext is only
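A toy AGCD instance generator (the bit sizes below are our own, chosen only for illustration and nowhere near secure parameters): each sample is x_i = p·q_i + r_i for a secret odd p and small noise r_i, and recovering p from the x_i is the AGCD problem.

```python
import random

def agcd_samples(n, p_bits=40, q_bits=30, r_bits=8, seed=0):
    """Generate n samples x_i = p*q_i + r_i of a toy Approximate-GCD
    instance.  The secret p is an odd p_bits-bit integer; the noise
    r_i is r_bits bits, far smaller than p."""
    rng = random.Random(seed)
    p = rng.getrandbits(p_bits) | (1 << (p_bits - 1)) | 1
    samples = [p * rng.getrandbits(q_bits) + rng.getrandbits(r_bits)
               for _ in range(n)]
    return p, samples
```

With zero noise the problem collapses to an exact gcd computation; the small r_i are what make it (conjecturally) hard.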

We describe a polynomial-time cryptanalysis of the (approximate) multilinear map of Coron, Lepoint and Tibouchi (CLT). The attack relies on an adaptation of the so-called zeroizing attack against the Garg, Gentry and Halevi (GGH) candidate multilinear map. Zeroizing is much more devastating for CLT than for GGH. In the case of GGH, it makes it possible to break generalizations of the Decision Linear and Subgroup Membership problems from pairing-based cryptography. For CLT, this leads to a total break: all quantities meant to be kept secret can be efficiently and publicly recovered.

In March 2015, Gu Chunsheng proposed a candidate ideal multilinear map [eprint 2015/269]. An ideal multilinear map makes it possible to perform as many multiplications as desired, while in

Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiency can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes along with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. In this work, we define the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. We prove that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves generalize arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE.
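A minimal sketch of a Module-LWE sample (with toy, insecure parameters of our own choosing): the secret and the public vector live in R_Q^D for the ring R_Q = Z_Q[x]/(x^N + 1), so rank D = 1 recovers Ring-LWE while degree N = 1 degenerates to plain LWE, which is exactly the bridge described above.

```python
import random

Q, N, D = 257, 8, 3   # toy modulus, ring degree, module rank (not secure)

def ring_mul(a, b):
    """Product in Z_Q[x]/(x^N + 1) of two coefficient lists."""
    c = [0] * (2 * N)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return [(c[i] - c[i + N]) % Q for i in range(N)]   # uses x^N = -1

def mlwe_sample(s, rng, noise=2):
    """One Module-LWE sample (a, b = <a, s> + e) for a secret s in
    R_Q^D, with a uniform a in R_Q^D and a small error polynomial e."""
    a = [[rng.randrange(Q) for _ in range(N)] for _ in range(D)]
    b = [rng.randrange(-noise, noise + 1) % Q for _ in range(N)]   # e
    for ai, si in zip(a, s):
        b = [(x + y) % Q for x, y in zip(b, ring_mul(ai, si))]
    return a, b
```

Distinguishing such samples from uniform pairs (a, b) is the decision version of the problem.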

In Broadcast Encryption (BE) systems like Pay-TV, AACS, online content sharing and broadcasting, reducing the header length (communication overhead per session) is of practical interest. The Subset Difference (SD) scheme due to Naor-Naor-Lotspiech (NNL) is the most widely used BE scheme. This work introduced the

We study the complexity of Gröbner bases computation, in particular in the generic situation where the variables are in simultaneous Noether position with respect to the system. We give a bound on the number of polynomials of degree

The interpolation step in the Guruswami-Sudan algorithm is a bivariate interpolation problem with multiplicities commonly solved in the literature using either structured linear algebra or basis reduction of polynomial lattices. This problem has been extended to three or more variables; for this generalization, all fast algorithms proposed so far rely on the lattice approach. In this work, we reduce this multivariate interpolation problem to a problem of simultaneous polynomial approximations, which we solve using fast structured linear algebra. This improves the best known complexity bounds for the interpolation step of the list-decoding of Reed-Solomon codes, Parvaresh-Vardy codes, and folded Reed-Solomon codes. In particular, for Reed-Solomon list-decoding with re-encoding, our approach has complexity

We present block algorithms and their implementation for the parallelization of sub-cubic Gaussian elimination on shared memory architectures. In contrast to the classical cubic algorithms in parallel numerical linear algebra, we focus here on recursive algorithms and coarse-grain parallelization. Indeed, sub-cubic matrix arithmetic can only be achieved through recursive algorithms, making coarse-grain block algorithms perform more efficiently than fine-grain ones. This work is motivated by the design and implementation of dense linear algebra over a finite field, where fast matrix multiplication is used extensively and where costly modular reductions also advocate for coarse-grain block decomposition. We incrementally build efficient kernels, for matrix multiplication first, then for triangular system solving, on top of which a recursive PLUQ decomposition algorithm is built. We study the parallelization of these kernels using several algorithmic variants: either iterative or recursive, and using different splitting strategies. Experiments show that recursive adaptive methods for matrix multiplication, hybrid recursive-iterative methods for triangular system solving, and tile recursive versions of the PLUQ decomposition, together with various data mapping policies, provide the best performance on a 32-core NUMA architecture. Overall, we show that the overhead of modular reductions is more than compensated by the fast linear algebra algorithms and that exact dense linear algebra matches the performance of full-rank reference numerical software even in the presence of rank deficiencies.
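The coarse-grain recursive shape, with modular reductions delayed across each dot product, can be illustrated at toy scale (our own sketch, assuming power-of-two dimensions; the kernels of the paper are of course far more elaborate):

```python
P = 65537  # toy word-size prime

def naive_matmul(A, B):
    """Classical product over Z_P with one reduction per output entry:
    the modular reduction is delayed across the whole dot product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % P
             for j in range(n)] for i in range(n)]

def split(M):
    h = len(M) // 2
    return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
            [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])

def add_mod(A, B):
    return [[(x + y) % P for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def block_matmul(A, B, cutoff=2):
    """Recursive block product over Z_P for power-of-two dimensions:
    each quadrant product is an independent coarse-grain task, the
    shape that lets sub-cubic algorithms be plugged in recursively."""
    n = len(A)
    if n <= cutoff:
        return naive_matmul(A, B)
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    C11 = add_mod(block_matmul(A11, B11), block_matmul(A12, B21))
    C12 = add_mod(block_matmul(A11, B12), block_matmul(A12, B22))
    C21 = add_mod(block_matmul(A21, B11), block_matmul(A22, B21))
    C22 = add_mod(block_matmul(A21, B12), block_matmul(A22, B22))
    return ([a + b for a, b in zip(C11, C12)]
            + [a + b for a, b in zip(C21, C22)])
```

Replacing the eight quadrant products by seven (Strassen) is what makes the recursion sub-cubic, and the block granularity is what makes it parallelize well.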

The row (resp. column) rank profile of a matrix describes the staircase shape of its row (resp. column) echelon form. In an ISSAC'13 paper, we proposed a recursive Gaussian elimination that can compute simultaneously the row and column rank profiles of a matrix, as well as those of all of its leading sub-matrices, in the same time as state-of-the-art Gaussian elimination algorithms. Here we first study the conditions making a Gaussian elimination algorithm reveal this information. To this end, we propose the definition of a new matrix invariant, the rank profile matrix, summarizing all information on the row and column rank profiles of all the leading sub-matrices. We also explore the conditions for a Gaussian elimination algorithm to compute all or part of this invariant, through the corresponding PLUQ decomposition. As a consequence, we show that the classical iterative CUP decomposition algorithm can actually be adapted to compute the rank profile matrix. Used, in a Crout variant, as a base case for our ISSAC'13 implementation, it delivers a significant improvement in efficiency. Second, the row and column echelon forms of a matrix are usually computed via different dedicated triangular decompositions. We show here that, from some PLUQ decompositions, it is possible to recover the row and column echelon forms of a matrix, and of any of its leading sub-matrices, thanks to an elementary post-processing algorithm.
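The invariant can be made concrete by a brute-force computation over a small prime field (our own illustration: it performs quadratically many rank computations, whereas the point of the paper is to read the invariant off a single PLUQ decomposition):

```python
P = 7  # toy prime field

def rank_mod(M):
    """Rank over GF(P) by plain Gaussian elimination."""
    if not M or not M[0]:
        return 0
    M = [[x % P for x in row] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(rank, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], P - 2, P)   # inverse by Fermat
        M[rank] = [x * inv % P for x in M[rank]]
        for i in range(len(M)):
            if i != rank and M[i][col]:
                f = M[i][col]
                M[i] = [(x - f * y) % P for x, y in zip(M[i], M[rank])]
        rank += 1
    return rank

def rank_profile_matrix(A):
    """0/1 matrix R such that the rank r(i, j) of every leading i x j
    submatrix of A equals the sum of R over its leading i x j block;
    R thus encodes the rank profiles of all leading submatrices."""
    m, n = len(A), len(A[0])
    r = [[rank_mod([row[:j] for row in A[:i]]) for j in range(n + 1)]
         for i in range(m + 1)]
    return [[r[i + 1][j + 1] - r[i][j + 1] - r[i + 1][j] + r[i][j]
             for j in range(n)] for i in range(m)]
```

Each entry of R is an inclusion-exclusion difference of four leading ranks, which is why R has 0/1 entries with at most one 1 per row and per column.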

We describe a simple method that produces automatically closed forms for the coefficients of continued fractions expansions of a large number of special functions. The function is specified by a non-linear differential equation and initial conditions. This is used to generate the first few coefficients and from there a conjectured formula. This formula is then proved automatically thanks to a linear recurrence satisfied by some remainder terms. Extensive experiments show that this simple approach and its straightforward generalization to difference and
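A classical instance of the kind of closed form produced by such a method: the partial denominators of the continued fraction of tan are the odd integers, 2k - 1. The sketch below (function name and truncation depth are ours) evaluates that continued fraction bottom-up and checks it numerically:

```python
import math

def tan_cfrac(x, depth=20):
    """Evaluate tan x via its classical continued fraction
    tan x = x / (1 - x^2 / (3 - x^2 / (5 - ...))),
    whose partial denominators have the closed form 2k - 1.
    Evaluation proceeds bottom-up from a truncated tail."""
    v = 2 * depth + 1
    for k in range(depth, 0, -1):
        v = (2 * k - 1) - x * x / v
    return x / v
```

The method of the paper conjectures such formulas from the first few coefficients and then proves them via a linear recurrence on remainder terms.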

The diagonal of a multivariate power series

Marie Paindavoine is supported by an Orange Labs PhD Grant (from October 2013 to November 2016). She works on privacy-preserving encryption mechanisms.

Within the program Nano 2017, we collaborate with the Compilation Expertise Center of STMicroelectronics on the theme of floating-point arithmetic for embedded processors.

ARC6 PhD Programme. The PhD grant of Valentina Popescu has been funded since September 2014 by Région Rhône-Alpes through the “ARC6” programme.

PALSE Project. Benoît Libert was awarded a 500 k€ grant (from July 2014 to November 2016) for his PALSE (Programme d'Avenir Lyon Saint-Etienne) project *Towards practical enhanced asymmetric encryption schemes*.

“High-performance Algebraic Computing” (HPAC) is a four-year ANR project that started in January 2012.
The Web page of the project is
http://

The overall ambition of HPAC is to provide international reference high-performance libraries for exact linear algebra and algebraic systems on multi-processor architectures and to influence parallel programming approaches for algebraic computing. The central goal is to extend the efficiency of the LinBox and FGb libraries to emerging parallel architectures, such as clusters of multi-processor systems and graphics processing units, in order to tackle a broader class of problems in lattice-based cryptography and algebraic cryptanalysis. HPAC conducts research along three axes:

A domain specific parallel language (DSL) adapted to high-performance algebraic computations;

Parallel linear algebra kernels and higher-level mathematical algorithms and library modules;

Library composition, their integration into state-of-the-art software, and innovative high performance solutions for cryptology challenges.

Dyna3s is a four-year ANR project that started in October 2013. The Web page of the project is http://

The aim is to study algorithms that compute the greatest common divisor (gcd) from the point of view of dynamical systems. A gcd algorithm is considered as a discrete dynamical system acting on integer inputs. We are mainly interested in the computation of the gcd of several integers. Another motivation comes from discrete geometry, a framework where the understanding of basic primitives, discrete lines and planes, relies on algorithms of the Euclidean type.
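In this dynamical-systems view, the Euclidean algorithm is the iteration of a single map, and its cost is the length of the orbit. A minimal sketch (our own toy code) for pairs and for several integers:

```python
def euclid_orbit(a, b):
    """Orbit of (a, b) under the Euclidean map (a, b) -> (b, a mod b).
    The trajectory ends on (gcd, 0), and its length counts the division
    steps -- the quantity studied by the dynamical analysis."""
    orbit = [(a, b)]
    while b:
        a, b = b, a % b
        orbit.append((a, b))
    return orbit

def gcd_many(*xs):
    """gcd of several integers by iterating the two-dimensional map,
    folding each new input into the running gcd."""
    g = 0
    for x in xs:
        while x:
            g, x = x, g % x
    return g
```

Other gcd algorithms (subtractive, binary) correspond to different maps on the same inputs, with different orbit statistics.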

FastRelax stands for “Fast and Reliable Approximation”. It is a four-year ANR project that started in October 2014.
The web page of the project is http://

The aim of this project is to develop computer-aided proofs of numerical values, with certified and reasonably tight error bounds, without sacrificing efficiency. Applications to zero-finding, numerical quadrature or global optimization can all benefit from using our results as building blocks. We expect our work to initiate a “fast and reliable” trend in the symbolic-numeric community. This will be achieved by developing interactions between our fields, designing and implementing prototype libraries and applying our results to concrete problems originating in optimal control theory.
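A minimal illustration of the certified-bounds theme (our own sketch, not one of the project's libraries; it uses `math.nextafter`, available since Python 3.9): interval operations with outward rounding, so that the exact real result is guaranteed to lie in the computed enclosure.

```python
import math

def iadd(x, y):
    """Sum of intervals x = (lo, hi) and y = (lo, hi).  Nudging each
    rounded endpoint one ulp outward absorbs the round-to-nearest
    error, so the exact sum of any points of x and y is enclosed."""
    return (math.nextafter(x[0] + y[0], -math.inf),
            math.nextafter(x[1] + y[1], math.inf))

def imul(x, y):
    """Product of intervals: min/max over the four endpoint products,
    again rounded outward to keep the enclosure certified."""
    prods = [a * b for a in x for b in y]
    return (math.nextafter(min(prods), -math.inf),
            math.nextafter(max(prods), math.inf))
```

Enclosures like these are the elementary bricks on top of which tight certified error bounds for zero-finding, quadrature or optimization are built.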

MetaLibm is a four-year project (started in October 2013) focused on the
design and implementation of code generators for mathematical functions and filters.
The web page of the project is
http://

Damien Stehlé was awarded an ERC Starting Grant for his project *Euclidean lattices: algorithms and cryptography* (LattAC) in
2013 (1.4 M€ for 5 years from January 2014). The LattAC project aims at studying all computational aspects of lattices,
from algorithms for manipulating them to applications. The main objective is to enable the rise of lattice-based cryptography.

is an H2020 Infrastructure project providing substantial funding to the open-source computational mathematics ecosystem. It will run for four years, starting from September 2015. Clément Pernet is a participant.

Jung Hee Cheon from July to August;

Arnold Neumaier from August to December;

Khoa Ta Toa Nguyen until October;

Peter Tang, from June to July;

Yong Sue Song from July to August.

Fabrice Mouhartem

Date: February 2015–July 2015

Institution: ENS de Lyon

Supervisor: Benoît Libert

Alice Pellet-Mary

Date: February 2015–July 2015

Institution: ENS de Lyon

Supervisor: Damien Stehlé

Andrada Popa

Date: July 2015–September 2015

Institution: Technical University of Cluj-Napoca (Roumanie)

Supervisor: Nicolas Brisebarre

Pablo Rotondo

Date: March 2015–June 2015

Institution: Universidad de la Republica Uruguay (Uruguay)

Supervisor: Bruno Salvy

Weiqiang Wen

Date: February 2015–July 2015

Institution: SCNU, China

Supervisor: Damien Stehlé

Jean-Michel Muller was the General Chair of ARITH'22, Lyon.

Bruno Salvy is a co-organizer with Alin Bostan of Alea'2016, Luminy.

Fabien Laguillaumie was a member of the program committee of WCC'15.

Benoît Libert was a member of the program committees of ACM-CCS'15, Eurocrypt'15, PKC'15 and '16, and Africacrypt'16.

Jean-Michel Muller was a member of the program committee of ASAP'2015.

Nathalie Revol was a member of the program committee of NRE1 at SuperComputing'15.

Bruno Salvy is a member of program committee of AofA'2016, Warsaw, Poland.

Damien Stehlé was a member of the program committees of Latincrypt'15, Asiacrypt'15, PQCrypto'16, PKC'15 and '16. He is a member of the program committees of ANTS'16 and SCN'16.

Jean-Michel Muller is a member of the editorial board of the *IEEE Transactions on Computers.* He is a member of the board of foundation editors of the *Journal for Universal Computer Science*.

Bruno Salvy is a member of the editorial boards of the *Journal
of Symbolic Computation*, of the *Journal of Algebra* (section
Computational Algebra) and of the collections *Texts and Monographs
in Symbolic Computation* (Springer) and *Mathématiques et
Applications* (SMAI-Springer).

Gilles Villard is a member of the editorial board of the *Journal
of Symbolic Computation*.

Claude-Pierre Jeannerod was an invited speaker at the MACIS conference (Sixth International Conference on Mathematical Aspects of Computer and Information Sciences, Berlin, November 2015).

Damien Stehlé gave two invited talks at the HEAT workshop on fully homomorphic encryption and multilinear maps, held in Paris in October. He gave an invited talk at a Sloan Foundation workshop on the mathematics of modern cryptography, which was held in Berkeley in July. He gave two invited talks at the summer school on real-world crypto and privacy that was held in Sibenik in June.

Damien Stehlé is a member of the steering committee of the PQCrypto conference series. He is also a member of the steering committee of the Cryptography and Coding French research grouping (C2).

Claude-Pierre Jeannerod is a member of the scientific committee of JNCF (Journées Nationales de Calcul Formel).

Nathalie Revol is the chair of the IEEE 1788 group for the standardization of interval arithmetic: the general standard was published in July 2015 (IEEE 1788-2015) and the work now addresses the set-based model (IEEE P1788.1). She was a member of a hiring committee at U. Montpellier 2 (MCF position). She is a member of the Equality-Parity Committee at Inria.

Jean-Michel Muller is a member of the Scientific Council of CERFACS (Toulouse). He was a member of the Scientific Council of the “La Recherche” prize for 2015.

Guillaume Hanrot is director of the LIP laboratory (Laboratoire de l'Informatique du Parallélisme).

Jean-Michel Muller is co-director of the Groupement de Recherche (GDR) *Informatique Mathématique* of CNRS.

Master: Nicolas Brisebarre, *Introduction to Effective Approximation Theory* (24h), Hanoi Institute of Mathematics (Vietnam).

Master: Claude-Pierre Jeannerod, Nathalie Revol, *Algorithmique numérique et fiabilité des calculs en arithmétique flottante* (24h), M2 ISFA (Institut de Science Financière et d'Assurances), Université Claude Bernard Lyon 1.

Master: Fabien Laguillaumie, Cryptography, Error Correcting Codes (150h), Université Claude Bernard Lyon 1.

Master: Vincent Lefèvre, *Arithmétique des ordinateurs* (20h), M2 ISFA (Institut de Science Financière et d'Assurances), Université Claude Bernard Lyon 1.

Master: Benoît Libert, Advanced cryptographic protocols (24h), ENS de Lyon; Computer science and privacy (12h), ENS de Lyon; Cryptography (12h), ENS de Lyon; Public-key cryptology (21h), M2 ISFA (Institut de Science Financière et d'Assurances), Université Claude Bernard Lyon 1.

Master: Bruno Salvy, Calcul Formel (9h), MPRI.

Master: Bruno Salvy, Mathématiques expérimentales (44h), École polytechnique.

Master: Bruno Salvy, Logique et complexité (32h), École polytechnique.

Master: Damien Stehlé, Cryptography (24h), ENS de Lyon.

Master: Nicolas Louvet, *Compilation* (24h), Lyon 1.

Bachelor: Nicolas Louvet, *Algorithmique*, *Architecture des ordinateurs*, *Unix*, ..., (170h), Lyon 1.

Professional teaching: Nathalie Revol, *Contrôler et améliorer la qualité numérique d'un code de calcul industriel* (2h30), Collège de Polytechnique.

Miscellaneous: Bruno Salvy, *Introduction à la D-finitude* (2h), *Leçons de mathématiques d'aujourd'hui*, Bordeaux.

PhD in progress: Louis Dumont, *Algorithmique efficace pour les diagonales, applications en combinatoire, physique et théorie des nombres*, since September 2013, co-supervised by Alin Bostan (SpecFun team) and Bruno Salvy.

PhD in progress: Silviu Filip,
*Filtroptim : tools for an optimal synthesis of numerical filters*,
since September 2013, co-supervised by Nicolas Brisebarre and Guillaume Hanrot.

PhD in progress: Stephen Melczer, *Effective analytic combinatorics in one and several variables*, since September 2014, co-supervised by George Labahn (U. Waterloo, Canada) and Bruno Salvy.

PhD in progress: Fabrice Mouhartem, *Privacy-preserving protocols from lattices and bilinear maps*, since September 2015, co-supervised by Benoît Libert (95%) and Damien Stehlé (5%).

PhD in progress: Vincent Neiger,
*Multivariate interpolation in computer algebra: efficient algorithms and applications*,
since September 2013, co-supervised by Claude-Pierre Jeannerod and Gilles Villard (together with Éric Schost (Western University, London, Canada)).

PhD in progress: Marie Paindavoine,
*Méthodes de calculs sur des données chiffrées*,
since October 2013 (Orange Labs - UCBL), co-supervised by Fabien Laguillaumie (together with Sébastien Canard).

PhD in progress: Antoine Plet,
*Contribution à l'analyse d'algorithmes en arithmétique virgule flottante*,
since September 2014, co-supervised by Nicolas Louvet and Jean-Michel Muller.

PhD in progress: Valentina Popescu,
*Vers des bibliothèques multi-précision certifiées et performantes*,
since September 2014, co-supervised by Mioara Joldes (LAAS) and Jean-Michel Muller.

PhD in progress: Serge Torres,
*Some tools for the design of efficient and reliable function evaluation libraries*,
since September 2010, co-supervised by Nicolas Brisebarre and Jean-Michel Muller.

PhD in progress: Weiqiang Wen, *Hard problems on lattices*, since September 2015, supervised by Damien Stehlé.

Nicolas Brisebarre was in the PhD committee of Catherine Lelay (LRI, Inria Saclay, Université Paris-Sud) and Esteban Segura Ugalde (XLIM, Université de Limoges).

Fabien Laguillaumie was in the PhD committee of Olivier Sanders (DI, ENS Paris, Orange Labs).

Benoît Libert was a reviewer and a member of the PhD committee for the PhD thesis of Olivier Sanders (DI, ENS Paris, Orange Labs).

Nathalie Revol was in the PhD committee of Qiaochu Li (Université de Technologie de Compiègne).

Bruno Salvy was a reviewer for the PhD thesis of Amaury Pouly (LIX, École polytechnique). He was in the PhD committees of Simone Naldi (LAAS, Toulouse) and Romain Serra (LAAS, Toulouse).

Damien Stehlé was a reviewer for the PhD theses of Thomas Prest (DI, ENS Paris), Zheng Wang (Imperial College, UK), Rina Zeitoun (LIP6, UPMC) and Cheng Shantian (NTU, Singapore). He is in the PhD committee of Thijs Laarhoven (TU Eindhoven, The Netherlands) and in the Habilitation committee of Ludovic Perret (LIP6, UPMC).

Nicolas Brisebarre co-organizes a series of public science lectures, called «Éclats de sciences», at the Maison du Livre, de l'Image et du Son in Villeurbanne. About three such lectures take place each year.

Nathalie Revol is a member of the steering committee of the MMI: Maison des Mathématiques et de l'Informatique. She led a "Coding morning" for Inria assistants at Montbonnot (February 2015). She gave two talks for high-school teachers (Montbonnot, February and Grenoble, May 2015). As an incentive for high-school pupils, and especially girls, to choose scientific careers, she gave talks at Lycée Simone Veil (Châtillon d'Azergues) and at the Mondial des Métiers (both in March 2015). She gave talks for the Rallye des Maths (May 2015) and during the Science Fair (2 high-school classes, October 2015). During the Science Fair, she was also present one afternoon at the Médiathèque du Bachut - Lyon 8e. She presented computer science unplugged for primary-school pupils (CM1, École Guilloux, St-Genis-Laval: 8 sessions of 45 min each) and computer science plugged (CM2, École Guilloux, St-Genis-Laval: 10 sessions of 1h in 2015-2016, 2 classes). She presented this work during the Forum Maths Vivantes (October 2015). She also met people from Ébulliscience and FERS (Fondation Entreprise Réussite Scolaire) to advise them on their projects popularizing computer science for primary- and high-school pupils.