A major challenge in modeling and scientific computing is the simultaneous mastery of hardware capabilities, software design, and mathematical algorithms, so as to achieve efficient and reliable computation. In this context, the overall objective of AriC is to improve computing at large, in terms of performance, efficiency, and reliability. We work on the fine structure of floating-point arithmetic, on controlled approximation schemes, on algebraic algorithms, and on new cryptographic applications, pursuing most of these themes through their interactions. Our approach combines fundamental studies, practical performance, and qualitative aspects, with a shared strategy going from high-level problem specifications and standardization actions down to computer arithmetic and the lowest-level details of implementations.
This makes AriC the right place for drawing the following lines of action:
Design and integration of new methods and tools for mathematical program specification, certification, security, and guarantees on numerical results. Key ingredients here are: the interleaving of formal proofs, computer arithmetic and computer algebra; error analysis and the computation of certified error bounds; the study of the relationship between performance and numerical quality; and, on the cryptographic side, a focus on the practicality of existing protocols and the design of more powerful lattice-based primitives.
Generalization of a hybrid symbolic-numeric trend, and interplay between arithmetics, for both improving and controlling numerical approaches (symbolic ↔ numeric).
Mathematical and algorithmic foundations of computing. We address algorithmic complexity and fundamental aspects of approximation, polynomial and matrix algebra, and lattice-based cryptography. Practical questions concern the design of high-performance, reliable computing kernels, based on optimized computer arithmetic operators and a better fit between low-level arithmetic bricks and higher-level ones.
According to the application domains that we target and our main fields of expertise, these lines of action are organized into three themes with specific objectives.
Efficient approximation methods (§). Here lies the question of interleaving formal proofs, computer arithmetic and computer algebra, for significantly extending the range of functions whose reliable evaluation can be optimized.
Lattices: algorithms and cryptography (§). Long term goals are to go beyond the current design paradigm in basis reduction, and to demonstrate the superiority of lattice-based cryptography over contemporary public-key cryptographic approaches.
Algebraic computing and high performance kernels (§). The problem is to keep the algorithm and software designs in line with the scales of computational capabilities and application needs, by simultaneously working on the structural and the computer arithmetic levels.
The last twenty years have seen the advent of computer-aided proofs in mathematics, and this trend is becoming increasingly important. Such proofs require: fast and stable numerical computations; numerical results with guaranteed error bounds; and formal proofs of these computations, or computations within a proof assistant. One of our main long-term objectives is to develop a platform where a computational problem can be studied at all (or any) of these three levels of rigor. At this stage, most of the necessary routines are not easily available (or do not even exist), and ad hoc tools must be developed to complete a proof. We plan to provide more and more algorithms and routines to address such questions. Possible applications lie in the study of mathematical conjectures where exact mathematical results are required (e.g., stability of dynamical systems), or in more applied questions such as the automatic generation of efficient and reliable numerical software for function evaluation. From a complementary viewpoint, numerical safety is also critical in robust space mission design, where guidance and control algorithms become more complex in the context of increased satellite autonomy. We will pursue our collaboration with specialists of that area, whose questions bring a valuable focus on relevant issues.
Floating-point arithmetic is currently undergoing a major evolution, in particular with the recent advent of a greater diversity of available precisions on the same system (from 8 to 128 bits) and of coarser-grained floating-point hardware instructions. This new arithmetic landscape raises important issues at the various levels of computing, which we will address along the following three directions.
One of our targets is the design of building blocks of computing (e.g., algorithms for the basic operations and functions, and algorithms for complex or double-word arithmetic). Establishing properties of these building blocks (e.g., the absence of “spurious” underflows/overflows) is also important. The IEEE 754 standard on floating-point arithmetic (whose next version, a rather minor revision, will be released soon) will have to undergo a major revision within a few years: first because advances in technology or new needs make some of its features obsolete, and second because new features need standardization. We aim at playing a leading role in the preparation of the next standard.
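One classical building block of the kind mentioned above is Knuth's TwoSum, which computes the rounding error of a floating-point addition exactly, using only floating-point operations. The following is a minimal illustrative sketch, not team code:

```python
def two_sum(a, b):
    """Knuth's TwoSum: return (s, e) with s = fl(a+b) and s + e equal to the
    exact sum a + b, for any two binary floating-point numbers (no branch on
    magnitudes, no overflow assumed)."""
    s = a + b
    ap = s - b          # the part of s that came from a
    bp = s - ap         # the part of s that came from b
    da = a - ap         # rounding error on the a part
    db = b - bp         # rounding error on the b part
    return s, da + db

# The error term recovers the digits lost to rounding:
s, e = two_sum(1.0, 2.0**-60)
# s == 1.0 (the small term is absorbed), e == 2.0**-60 (recovered exactly)
```

Double-word (and triple-word) arithmetic, as mentioned above, is built on exactly this kind of error-free transformation.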
We will pursue our studies in rounding error analysis, in particular for the “low precision–high dimension” regime, where traditional analyses become ineffective and where improved bounds are thus most needed. For this, the structure of both the data and the errors themselves will have to be exploited. We will also investigate the impact of mixed-precision and coarser-grained instructions (such as small matrix products) on accuracy analyses.
Most directions in the team involve optimized, high-performance implementations. We will pursue our efforts concerning the implementation of well-optimized floating-point kernels, with an emphasis on numerical quality, taking into account the current evolution of computer architectures (the increasing width of SIMD registers, and the availability of low-precision formats). We will focus on computing kernels used within the team's other axes, such as extended-precision linear algebra routines within the FPLLL and HPLLL libraries.
We intend to strengthen our assessment of the cryptographic relevance of problems over lattices, and to broaden our studies in two main (complementary) directions: hardness foundations and advanced functionalities.
Recent advances in cryptography have broadened the scope of encryption functionalities (e.g., encryption schemes that allow computing over encrypted data or delegating partial decryption keys). While simple variants (e.g., identity-based encryption) are already practical, the more advanced ones still lack efficiency. Towards reaching practicality, we plan to investigate simpler constructions of the fundamental building blocks (e.g., pseudorandom functions) involved in these advanced protocols. We aim at simplifying known constructions based on standard hardness assumptions, but also at identifying new sources of hardness from which simple constructions naturally suited for the aforementioned advanced applications could be obtained (e.g., constructions that minimize critical complexity measures such as the depth of evaluation). Understanding the core source of hardness of today's standard hard algorithmic problems is an interesting direction, as it could lead to new hardness assumptions (e.g., tweaked versions of standard ones) from which we could derive much more efficient constructions. Furthermore, it could open the way to completely different constructions of advanced primitives based on new hardness assumptions.
Lattice-based cryptography has come much closer to maturity in the recent past. In particular, NIST has started a standardization process for post-quantum cryptography, and lattice-based proposals are numerous and competitive. This dramatically increases the need for cryptanalysis: Do the underlying hard problems suffer from structural weaknesses? Are some of the problems used easy to solve, e.g., asymptotically? Are the chosen concrete parameters meaningful with respect to state-of-the-art cryptanalysis? In particular, how secure would they be if all the known algorithms and implementations thereof were pushed to their limits? How would these concrete performances change in case (full-fledged) quantum computers get built?
On another front, the cryptographic functionalities reachable under lattice hardness assumptions seem to get closer to an intrinsic ceiling. For instance, to obtain cryptographic multilinear maps, functional encryption and indistinguishability obfuscation, new assumptions have been introduced. They often have a lattice flavour, but are far from standard. Assessing the validity of these assumptions will be one of our priorities in the mid-term.
In the design of cryptographic schemes, we will pursue our investigations on functional encryption. Despite recent advances, efficient solutions are only available for restricted function families. Indeed, solutions for general functions are either way too inefficient for practical use, or rely on uncertain security foundations such as the existence of circuit obfuscators (or both). We will explore constructions based on well-studied hardness assumptions that are closer to being usable in real-life applications. For specific functionalities, we will aim at more efficient realizations satisfying stronger security notions.
Another direction we will explore is multi-party computation via a new approach exploiting the rich structure of class groups of quadratic fields. We have already shown that such groups have a positive impact in this field, by designing new efficient encryption switching protocols from the additively homomorphic encryption we introduced earlier. We want to go deeper in this direction, which raises interesting questions: how to design efficient zero-knowledge proofs for groups of unknown order, how to exploit their structure in the context of two-party cryptography (such as two-party signing), and how to extend these techniques to the multi-party setting.
In the context of the PROMETHEUS H2020 project, we will keep seeking to develop new quantum-resistant privacy-preserving cryptographic primitives (group signatures, anonymous credentials, e-cash systems, etc.). This includes the design of more efficient zero-knowledge proof systems that can interact with lattice-based cryptographic primitives.
The connections between algorithms for structured matrices and for polynomial matrices will continue to be developed, since they have proved to bring progress on fundamental questions with applications throughout computer algebra. The new fast algorithm for the bivariate resultant opens an exciting area of research, which should produce improvements to a variety of questions related to polynomial elimination, and we expect to produce results in that area.
For definite summation and integration, we now have fast algorithms for single integrals of general functions and sequences and for multiple integrals of rational functions. The long-term objective of that part of computer algebra is an efficient and general algorithm for multiple definite integration and summation of general functions and sequences. This is the direction we will take, starting with single definite sums of general functions and sequences (leading in particular to a faster variant of Zeilberger's algorithm). We also plan to investigate geometric issues related to the presence of apparent singularities and how they seem to play a role in the complexity of the current algorithms.
Our expertise in validated numerics is useful for analyzing, improving, and guaranteeing the quality of numerical results in a wide range of applications, including:
scientific simulation;
global optimization;
control theory.
Much of our work, in particular the development of correctly rounded elementary functions, is critical to the reproducibility of floating-point computations.
Lattice reduction algorithms have direct applications in
public-key cryptography;
diophantine equations;
communications theory.
Florent Bréhard, jointly with Mioara Joldes and Jean-Bernard Lasserre (CNRS LAAS), received the Distinguished Paper Award at ISSAC 2019 for “On Moment Problems with Holonomic Functions”.
Alice Pellet-Mary was an awardee of the L'Oréal-UNESCO For Women in Science fellowship.
Keywords: Euclidean Lattices - Computer algebra system (CAS) - Cryptography
Scientific Description: The fplll library is used by, or has been adapted for integration within, several mathematical computation systems such as Magma, Sage, and Pari/GP. It is also used for cryptanalytic purposes, to test the resistance of cryptographic primitives.
Functional Description: fplll contains implementations of several lattice algorithms. The implementation relies on floating-point orthogonalization, and LLL is central to the code, hence the name.
It includes implementations of floating-point LLL reduction algorithms, offering various trade-offs between speed and guarantees. It contains a 'wrapper' that chooses the estimated best sequence of variants in order to provide a guaranteed output as fast as possible; in that case, the succession of variants is hidden from the user.
It includes an implementation of the BKZ reduction algorithm, including the BKZ 2.0 improvements (extreme enumeration pruning, pre-processing of blocks, early termination). Additionally, slide reduction and self-dual BKZ are supported.
It also includes a floating-point implementation of the Kannan-Fincke-Pohst algorithm that finds a shortest non-zero lattice vector. For the same task, the GaussSieve algorithm is also available in fplll. Finally, it contains a variant of the enumeration algorithm that computes a lattice vector closest to a given vector belonging to the real span of the lattice.
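In dimension 2, the shortest-vector computation that fplll performs in high dimension reduces to the classical Lagrange-Gauss algorithm. The sketch below is a self-contained toy illustration of that two-dimensional case, not fplll code:

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D integer lattice basis.
    Returns a basis whose first vector is a shortest nonzero lattice vector."""
    def dot(x, y):
        return x[0] * y[0] + x[1] * y[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # Size-reduce v against u by the nearest integer multiple of u.
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u         # v became shorter: swap and continue

u, v = gauss_reduce((1, 0), (1000, 1))   # → ((1, 0), (0, 1))
```

In higher dimension, enumeration and sieving (as in fplll) replace this simple exchange loop, but the size-reduction step is the same basic ingredient.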
Author: Damien Stehlé
Contact: Damien Stehlé
generating functions package
Keyword: Symbolic computation
Functional Description: Gfun is a Maple package for the manipulation of linear recurrence or differential equations. It provides tools for guessing a sequence or a series from its first terms, and for rigorously manipulating solutions of linear differential or recurrence equations, using the equation as a data structure.
Contact: Bruno Salvy
URL: http://
Keywords: Multiple-Precision - Floating-point - Correct Rounding
Functional Description: GNU MPFR is an efficient arbitrary-precision floating-point library with well-defined semantics (copying the good ideas from the IEEE 754 standard), in particular correct rounding in 5 rounding modes. It provides about 80 mathematical functions, in addition to utility functions (assignments, conversions...). Special data (Not a Number, infinities, signed zeros) are handled like in the IEEE 754 standard. GNU MPFR is based on the mpn and mpz layers of the GMP library.
Participants: Guillaume Hanrot, Paul Zimmermann, Philippe Théveny and Vincent Lefèvre
Contact: Vincent Lefèvre
Publications: Correctly Rounded Arbitrary-Precision Floating-Point Summation -
Optimized Binary64 and Binary128 Arithmetic with GNU MPFR -
Évaluation rapide de fonctions hypergéométriques -
Arbitrary Precision Error Analysis for computing
Keywords: Floating-point - Correct Rounding
Functional Description: Sipe is a mini-library in the form of a C header file, to perform radix-2 floating-point computations in very low precisions with correct rounding, either to nearest or toward zero. The goal of such a tool is to do proofs of algorithms/properties or computations of tight error bounds in these precisions by exhaustive tests, in order to try to generalize them to higher precisions. The currently supported operations are addition, subtraction, multiplication (possibly with the error term), fused multiply-add/subtract (FMA/FMS), and miscellaneous comparisons and conversions. Sipe provides two implementations of these operations, with the same API and the same behavior: one based on integer arithmetic, and a new one based on floating-point arithmetic.
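The behavior Sipe implements can be modeled exactly with rational arithmetic. The following Python sketch (an illustration only, not Sipe's integer- or floating-point-based C code) rounds a rational number to a p-bit significand with round-to-nearest, ties to even:

```python
from fractions import Fraction
import math

def round_p(x, p):
    """Round the rational x to the nearest radix-2 floating-point value with a
    p-bit significand (ties to even). Exact model with unbounded exponent range,
    so no underflow/overflow, unlike a real low-precision format."""
    x = Fraction(x)
    if x == 0:
        return x
    sign = 1 if x > 0 else -1
    x = abs(x)
    e = x.numerator.bit_length() - x.denominator.bit_length()
    if Fraction(2) ** e > x:     # fix the off-by-one of the bit-length estimate
        e -= 1
    # Now 2**e <= x < 2**(e+1); scale so the significand lies in [2**(p-1), 2**p).
    scaled = x / Fraction(2) ** (e - p + 1)
    n = math.floor(scaled)
    frac = scaled - n
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and n % 2 == 1):
        n += 1                   # round up, with ties going to the even significand
    return sign * n * Fraction(2) ** (e - p + 1)

# With p = 3 significand bits, representable numbers near 1 are spaced by 1/4:
assert round_p(Fraction(9, 8), 3) == 1               # tie 1.125 rounds to even
assert round_p(Fraction(11, 8), 3) == Fraction(3, 2)
```

Such an exact model is convenient for the exhaustive low-precision tests mentioned above, at the price of being much slower than Sipe itself.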
Participant: Vincent Lefèvre
Contact: Vincent Lefèvre
Publications: SIPE: Small Integer Plus Exponent - Sipe: a Mini-Library for Very Low Precision Computations with Correct Rounding
Keyword: Exact linear algebra
Functional Description: LinBox is an open-source C++ template library for exact, high-performance linear algebra computations. It is considered the reference library for numerous computations (such as linear system solving, rank, characteristic polynomial, Smith normal form, ...) over finite fields and the integers, with dense, sparse, and structured matrices.
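As a toy illustration of exact linear algebra (not LinBox's optimized C++ code), the rank of an integer matrix over a prime field can be computed with no rounding whatsoever:

```python
def rank_mod_p(mat, p):
    """Rank of an integer matrix over the prime field GF(p), by Gaussian
    elimination in exact modular arithmetic. Unlike a numerical rank, there
    is no tolerance threshold: a pivot is either zero mod p or it is not."""
    m = [[x % p for x in row] for row in mat]
    rank, rows = 0, len(m)
    for c in range(len(m[0])):
        piv = next((r for r in range(rank, rows) if m[r][c] != 0), None)
        if piv is None:
            continue                              # no pivot in this column
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], -1, p)              # modular inverse (p prime)
        m[rank] = [x * inv % p for x in m[rank]]
        for r in range(rows):
            if r != rank and m[r][c] != 0:
                f = m[r][c]
                m[r] = [(x - f * y) % p for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

# A 3x3 matrix of rank 2 over the rationals also has rank 2 mod 7:
assert rank_mod_p([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 7) == 2
```

LinBox computes the same kind of invariant, but with blocked, cache-aware kernels and asymptotically fast matrix multiplication rather than this schoolbook elimination.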
Participants: Clément Pernet and Thierry Gautier
Contact: Clément Pernet
URL: http://
Keywords: Euclidean Lattices - Computer algebra system (CAS)
Functional Description: Software library for linear algebra and Euclidean lattice problems
Contact: Gilles Villard
Machine implementation of mathematical functions often relies on polynomial approximations. The particularity is that rounding errors occur both when representing the polynomial coefficients on a finite number of bits and when evaluating the polynomial in finite precision. Hence, to find the best polynomial (for a given degree, norm, and interval), one has to consider both types of errors: approximation and evaluation. While efficient algorithms had already been developed for taking into account the approximation error, the evaluation part was usually handled a posteriori, in an ad hoc manner. In , we formulate a semi-infinite linear optimization problem whose solution is the best polynomial with respect to the supremum norm of the sum of both errors. This problem is then solved with an iterative exchange algorithm, which can be seen as an extension of the well-known Remez algorithm. A discussion and comparison of the results obtained on different examples are finally presented.
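The two error sources can be demonstrated on a toy example: take exact rational coefficients (here a truncated Taylor series of exp, standing in for the best polynomial of the paper), round them to binary64, and compare exact and floating-point Horner evaluation. This only illustrates the problem statement, not the exchange algorithm of the paper:

```python
from fractions import Fraction

# "Ideal" degree-3 coefficients (Taylor series of exp truncated at degree 3).
ideal = [Fraction(1), Fraction(1), Fraction(1, 2), Fraction(1, 6)]
rounded = [float(c) for c in ideal]           # binary64 coefficients

def horner(coeffs, x, zero):
    """Horner evaluation; exact with Fractions, rounded at each step with floats."""
    r = zero
    for c in reversed(coeffs):
        r = r * x + c
    return r

coef_err = eval_err = Fraction(0)
for i in range(257):                          # sample [0, 1] at 257 points
    x = Fraction(i, 256)
    exact_ideal = horner(ideal, x, Fraction(0))
    exact_rounded = horner([Fraction(c) for c in rounded], x, Fraction(0))
    fl = horner(rounded, float(x), 0.0)       # what a libm kernel actually computes
    coef_err = max(coef_err, abs(exact_rounded - exact_ideal))
    eval_err = max(eval_err, abs(Fraction(fl) - exact_rounded))

# Here 1/6 is the only coefficient that is not exactly representable in binary64,
# so coef_err captures pure representation error, while eval_err captures the
# rounding of each * and + in Horner's scheme; both are tiny but nonzero.
```

The point of the paper is precisely to optimize over both contributions at once, instead of bounding them separately as this experiment does.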
Many algorithms for reconstructing algebraic data from moments have been developed in optimization, analysis, and statistics. Lasserre and Putinar proposed an exact reconstruction algorithm for the algebraic support of the Lebesgue measure, or of measures with density equal to the exponential of a known polynomial. Their approach relies on linear recurrences for the moments, obtained using Stokes' theorem. In , we extend this study to measures with holonomic densities and supports with real algebraic boundary. In the framework of holonomic distributions (i.e., distributions satisfying a holonomic system of linear partial or ordinary differential equations with polynomial coefficients), we propose an alternative to creative telescoping for computing linear recurrences for the moments. When the coefficients of a polynomial vanishing on the support boundary are given as parameters, the obtained recurrences have the advantage of remaining linear with respect to them. This property allows for an efficient reconstruction method. Given a finite number of numerically computed moments for a measure with holonomic density, and assuming a real algebraic boundary for the support, we propose an algorithm for solving the inverse problem of obtaining both the coefficients of a polynomial vanishing on the boundary and those of the polynomials involved in the holonomic operators that annihilate the density.
In , we present a library for verifying rigorous approximations of univariate functions on real numbers with the Coq proof assistant. Based on interval arithmetic, this library also implements an a posteriori validation technique based on the Banach fixed-point theorem, which we illustrate on the division and square root operations. The library features a collection of abstract structures that organize the specification of rigorous approximations and modularize the related proofs. Finally, we provide an implementation of verified Chebyshev approximations, and we discuss a few examples of computations.
In , we are interested in obtaining error bounds for the classical Cooley-Tukey FFT algorithm in floating-point arithmetic, for the 2-norm as well as for the infinity norm. For that purpose, we also give some results on the relative error of the complex multiplication by a root of unity, and on the largest value that the real or imaginary part of one term of the FFT of a vector can take.
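As a small empirical companion to such analyses (an illustration, not the paper's bounds), one can implement the radix-2 Cooley-Tukey recursion in binary64 and observe how small the accumulated rounding error remains in practice:

```python
import cmath

def fft(a):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # The root of unity is itself rounded, one of the error sources
        # analyzed in the paper.
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def ifft(a):
    n = len(a)
    conj = fft([z.conjugate() for z in a])
    return [z.conjugate() / n for z in conj]

x = [complex(i % 5, -i % 3) for i in range(64)]
roundtrip = ifft(fft(x))
err = max(abs(u - v) for u, v in zip(x, roundtrip))
# err is a tiny accumulation of rounding errors (well below 1e-11 here)
```

Rigorous a priori bounds of the kind derived in the paper guarantee such behavior for all inputs, which no finite set of experiments can.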
Triple-word arithmetic consists in representing high-precision numbers as the unevaluated sum of three floating-point numbers (with “nonoverlapping” constraints that are made explicit in the paper). We introduce and analyze in various algorithms for manipulating triple-word numbers: rounding a triple-word number to a floating-point number, and adding, multiplying, dividing, and computing square roots of triple-word numbers, etc. We compare our algorithms, implemented in the Campary library, with other solutions of comparable accuracy. It turns out that our new algorithms are significantly faster than what one would obtain by just using the usual floating-point expansion algorithms in the special case of expansions of length 3.
In , we deal with accurate complex multiplication in binary floating-point arithmetic, with an emphasis on the case where one of the operands is a “double-word” number. We provide an algorithm that returns a complex product with a normwise relative error bound close to the best possible one, i.e., the rounding unit
The normal and complementary error functions are ubiquitous special functions for any mathematical library, with a wide range of applications. Practical applications call for customized implementations with strict accuracy requirements. Accurate numerical implementation of these functions is, however, non-trivial. In particular, the complementary error function erfc heavily suffers from cancellation for large positive arguments, largely due to its asymptotic behavior. We provide a semi-automatic code generator for the erfc function, parameterized by a user-given bound on the relative error. Our solution, presented in , exploits the asymptotic expression of erfc and leverages the automatic code generator Metalibm, which provides accurate polynomial approximations. A fine-grained a priori error analysis provides a libm developer with the required accuracy for each step of the evaluation. In critical parts, we exploit double-word arithmetic to achieve implementations that are fast yet accurate up to 50 bits, even for large input arguments. We demonstrate that for high required accuracies the automatically generated code has performance comparable to that of the standard libm, and for lower ones our code demonstrated roughly
Many properties of the IEEE 754 floating-point number system are taken for granted in modern computers and are deeply embedded in compilers and low-level software routines such as elementary functions or BLAS. In , we review such properties for the recently proposed Posit number system. Some are still true. Some are no longer true, but sensible work-arounds are possible, and even represent exciting challenges for the community. Some, in particular the loss of scale invariance for accuracy, are extremely dangerous if Posits are to replace floating point completely. This study helps frame where Posits are better than floating-point numbers, where they are worse, and what tools are missing in the Posit landscape. For general-purpose computing, using Posits as a storage format only could be a way to reap their benefits without losing those of classical floating-point arithmetic. The hardware cost of this alternative is also studied.
We consider in the relative accuracy of evaluating
IEEE 1788-2015 standardized interval arithmetic. However, few libraries for interval arithmetic comply with this standard. In the first part of , the main features of the IEEE 1788-2015 standard are detailed; these features were not present in the libraries developed prior to the elaboration of the standard. MPFI is such a library: a C library, based on MPFR, for arbitrary-precision interval arithmetic. MPFI is not (yet) compliant with the IEEE 1788-2015 standard; the planned modifications are presented.
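The enclosure principle at the heart of MPFI and IEEE 1788-2015 can be sketched in a few lines. The toy class below emulates directed rounding by nudging each endpoint one binary64 ulp outward with math.nextafter; MPFI instead rounds each endpoint directly in the right direction with MPFR, which is tighter, but the enclosure property is the same:

```python
import math

class Interval:
    """Toy outward-rounded interval arithmetic (illustration only: no handling
    of overflow, NaN, or the decorations required by IEEE 1788-2015)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi

    def __add__(self, other):
        # One-ulp outward nudge absorbs the rounding of each endpoint sum.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(prods), -math.inf),
                        math.nextafter(max(prods), math.inf))

    def __contains__(self, x):
        return self.lo <= x <= self.hi

# 0.1 is not a binary64 number, so we first enclose the exact value 1/10:
tenth = Interval(math.nextafter(0.1, 0.0), math.nextafter(0.1, 2.0))
s = tenth + tenth + tenth
assert 0.3 in s     # the enclosure contains 3/10 despite every rounding
```

A standard-compliant library additionally tracks decorations (e.g., whether an operation was everywhere defined), which this sketch ignores entirely.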
In , we describe an algorithm to solve the approximate Shortest Vector Problem for lattices corresponding to ideals of the ring of integers of an arbitrary number field
The algorithm builds upon the algorithms of Cramer et al. [EUROCRYPT 2016] and Cramer et al. [EUROCRYPT 2017]. It relies on the framework of Buchmann [Séminaire de théorie des nombres 1990], which makes it possible to merge them and to extend their applicability from prime-power cyclotomic fields to all number fields. The cost improvements are obtained by allowing precomputations that depend on the field only.
The LLL algorithm takes as input a basis of a Euclidean lattice, and, within a polynomial number of operations, it outputs another basis of the same lattice but consisting of rather short vectors. In , we provide a generalization to
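For reference, the textbook rational-arithmetic version of LLL fits in a few dozen lines. Real implementations such as fplll replace the exact Gram-Schmidt below with carefully controlled floating-point computations for speed; this sketch favors clarity over efficiency:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer basis (list of integer lists),
    with exact rational Gram-Schmidt recomputed at each step (slow but safe)."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size reduction
            _, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                                # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]       # swap and backtrack
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

For example, lll([[1, 0], [1000, 1]]) recovers the basis [[1, 0], [0, 1]] of the integer lattice; fplll's floating-point strategies compute the same output orders of magnitude faster in high dimension.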
In , we propose the General Sieve Kernel (G6K), an abstract stateful machine supporting a wide variety of lattice reduction strategies based on sieving algorithms. Using the basic instruction set of this abstract stateful machine, we first give concise formulations of previous sieving strategies from the literature and then propose new ones. We then also give a light variant of BKZ exploiting the features of our abstract stateful machine. This encapsulates several recent suggestions (Ducas at Eurocrypt 2018; Laarhoven and Mariano at PQCrypto 2018) to move beyond treating sieving as a blackbox SVP oracle and to utilise strong lattice reduction as preprocessing for sieving. Furthermore, we propose new tricks to minimise the sieving computation required for a given reduction quality with mechanisms such as recycling vectors between sieves, on-the-fly lifting and flexible insertions akin to Deep LLL and recent variants of Random Sampling Reduction.
Moreover, we provide a highly optimised, multi-threaded and tweakable implementation of this machine, which we make open-source. We then illustrate the performance of this implementation of our sieving strategies by applying G6K to various lattice challenges. In particular, our approach allows us to solve previously unsolved instances of the Darmstadt SVP (151, 153, 155) and LWE (e.g., (75, 0.005)) challenges. Our solution for the SVP-151 challenge was found 400 times faster than the time reported for the SVP-150 challenge, the previous record. For exact SVP, we observe a performance crossover between G6K and FPLLL's state-of-the-art enumeration implementation at dimension 70.
In , we present a new cryptanalytic algorithm against obfuscations based on the GGH15 multilinear map. Our algorithm, the statistical zeroizing attack, directly distinguishes two distributions from obfuscation while following the zeroizing attack paradigm, that is, it uses evaluations of zeros of obfuscated programs.
Our attack breaks the recent indistinguishability obfuscation candidate suggested by Chen et al. (CRYPTO'18) for the optimal parameter settings. More precisely, we show that there are two functionally equivalent branching programs whose CVW obfuscations can be efficiently distinguished by computing the sample variance of evaluations.
This statistical attack gives a new perspective on the security of the indistinguishability obfuscations: we should consider the shape of the distributions of evaluation of obfuscation to ensure security.
In other words, while most of the previous (weak) security proofs have been studied with respect to the algebraic attack model or the ideal model, our attack shows that this algebraic security is not enough to achieve indistinguishability obfuscation. In particular, we show that the obfuscation scheme suggested by Bartusek et al. (TCC'18) does not achieve the desired security in a certain parameter regime, in which their algebraic security proof still holds.
The correctness of statistical zeroizing attacks holds under a mild assumption on the preimage sampling algorithm with a lattice trapdoor. We experimentally verify this assumption for the obfuscation implementation of Halevi et al. (ACM CCS'17).
The reference is the journal version of the Eurocrypt'15 article with the same title and authors.
Multi-client functional encryption (MCFE) allows
Zero-knowledge elementary databases (ZK-EDBs) are cryptographic schemes that allow a prover to commit to a set D of key-value pairs so as to be able to prove statements such as “x belongs to the support of D and D(x) = y” or “x is not in the support of D”. Importantly, proofs should leak no information beyond the proven statement, and even the size of D should remain private. Chase et al. (Eurocrypt'05) showed that ZK-EDBs are implied by a special flavor of non-interactive commitment, called mercurial commitment, which enables efficient instantiations based on standard number-theoretic assumptions. On the other hand, the resulting ZK-EDBs are only known to support proofs for simple statements like (non-)membership and value assignments. In , we show that mercurial commitments actually enable significantly richer queries. We show that, modulo an additional security property met by all known efficient constructions, they actually enable range queries over keys and values (even for ranges of super-polynomial size), as well as membership/non-membership queries over the space of values. Beyond that, we exploit the range queries to realize richer queries such as k-nearest neighbors and revealing the k smallest or largest records within a given range. In addition, we provide a new realization of trapdoor mercurial commitments from standard lattice assumptions, thus obtaining the most expressive quantum-safe ZK-EDB construction so far.
Lossy algebraic filters (LAFs) are function families where each function is parametrized by a tag, which determines if the function is injective or lossy. While initially introduced by Hofheinz (Eurocrypt 2013) as a technical tool to build encryption schemes with key-dependent message chosen-ciphertext (KDM-CCA) security, they also find applications in the design of robustly reusable fuzzy extractors. So far, the only known LAF family requires tags comprised of
Despite recent advances in the area of pairing-friendly Non-Interactive Zero-Knowledge proofs, there have not been many efficiency improvements in constructing arguments of satisfiability of quadratic (and larger degree) equations since the publication of the Groth-Sahai proof system (J. of Cryptology 2012). In , we address the problem of aggregating such proofs using techniques derived from the interactive setting and recent constructions of SNARKs. For certain types of quadratic equations, this problem was investigated before by González et al. (Asiacrypt'15). Compared to their result, we reduce the proof size by approximately 50
The paper constructs efficient non-interactive arguments for correct evaluation of arithmetic and Boolean circuits with proof size
Ring signatures, introduced by Rivest, Shamir and Tauman (ASIACRYPT 2001), make it possible to sign a message on behalf of a set of users while guaranteeing authenticity and anonymity. Groth and Kohlweiss (EUROCRYPT 2015) and Libert et al. (EUROCRYPT 2016) constructed schemes with signatures of size logarithmic in the number of users. An even shorter ring signature, of size independent of the number of users, was recently proposed by Malavolta and Schroeder (ASIACRYPT 2017). However, all these short signatures are obtained under strong and controversial assumptions: the former schemes are both proven secure in the random oracle model, while the latter requires non-falsifiable assumptions.
The most efficient construction under mild assumptions remains the construction of Chandran et al. (ICALP 2007) with a signature of size
In , we construct an asymptotically shorter ring signature from the hardness of the Diffie-Hellman assumption in bilinear groups. Each signature comprises
ECDSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve, and known solutions often require expensive zero-knowledge proofs to deal with malicious adversaries. For the two-party case, Lindell (CRYPTO 2017) recently managed to get an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier's cryptosystem.
In this paper we generalize Lindell's solution using hash proof systems. The main advantage of our generic method is that it results in a simulation-based security proof without resorting to non-standard interactive assumptions.
Moving to concrete constructions, we show how to instantiate our framework using class groups of imaginary quadratic fields. Our implementations show that the practical impact of dropping such interactive assumptions is minimal. Indeed, while for 128-bit security our scheme is marginally slower than Lindell's, for 256-bit security it turns out to be better in both key generation and signing time. Moreover, in terms of communication cost, our implementation significantly reduces both the number of rounds and the number of transmitted bits, in all cases.
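For context, plain (single-party) ECDSA signing and verification can be sketched in a few lines. The toy curve below (y² = x³ + 2x + 2 mod 17, base point of prime order 19) is a standard textbook example chosen only for readability; it is unrelated to the 128/256-bit instantiations discussed above, and the two-party protocol itself is far more involved.

```python
# Single-party ECDSA sketch over a toy curve y^2 = x^3 + 2x + 2 (mod 17),
# whose base point G = (5, 1) has prime order n = 19. Illustrative only:
# real ECDSA uses ~256-bit curves and a hash; here the digest h is given.
p, a, n = 17, 2, 19
G = (5, 1)

def inv(x, m):
    return pow(x, -1, m)                 # modular inverse

def add(P, Q):                           # elliptic-curve point addition
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                      # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * P[0] ** 2 + a) * inv(2 * P[1], p) % p
    else:
        lam = (Q[1] - P[1]) * inv(Q[0] - P[0], p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):                           # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

def sign(d, h, k):                       # d: secret key, h: digest, k: nonce
    r = mul(k, G)[0] % n
    s = inv(k, n) * (h + r * d) % n
    return r, s

def verify(Q, h, r, s):                  # Q = d*G is the public key
    w = inv(s, n)
    P = add(mul(h * w % n, G), mul(r * w % n, Q))
    return P is not None and P[0] % n == r
```

The difficulty in the two-party setting is precisely that the multiplication k⁻¹(h + rd) mixes the nonce and the secret key, which is why additively homomorphic encryption (Paillier in Lindell's scheme, class groups in ours) enters the picture.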
In , we construct the first pseudorandom functions that resist a strong class of attacks in which an adversary is able to run the cryptosystem not only with the fixed secret key, but also with related keys in which bits of its choice of the original key have been flipped. This problem is motivated by practical attacks that have been performed against physical devices. Our construction guarantees that every output, whether for the original key or for tampered keys, is pseudorandom, i.e., computationally hard to distinguish from a truly random value. To achieve this, we rely on a tool recently introduced in cryptography and termed multilinear maps. While multilinear maps have recently been attacked by several techniques, we prove that our construction remains secure despite the numerous vulnerabilities of current constructions of multilinear maps.
Most theoretical models in cryptography suppose that an attacker can only observe the input/output behavior of a cryptosystem and nothing more. Yet, in the real world, cryptosystems run on physical devices, and auxiliary information leaks from these devices. This leakage can sometimes be used to attack the system, even though it is proven secure in theory. To circumvent these issues, cryptographers have introduced several new security models in an attempt to encompass the different forms of leakage. Some models are simple, such as the probing model, and simple compilers can transform a system into one secure in the probing model, while more realistic models, such as the noisy-leakage model, are much more involved. In , we show that these models are actually equivalent, proving in particular that the simple compilers are sufficient to guarantee security in realistic environments.
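The simple probing-model compilers mentioned above are based on masking. As a generic illustration (a minimal sketch, not the construction of the paper): each secret value is split into random additive shares, so that any strict subset of the shares is uniformly distributed and reveals nothing about the secret.

```python
# Additive masking: split a secret into n random shares modulo m. Any n-1
# of the shares are uniform and independent of the secret, which is what
# security in the probing model formalizes.
import secrets

def share(secret, n, m=256):
    shares = [secrets.randbelow(m) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % m)   # last share fixes the sum
    return shares

def reconstruct(shares, m=256):
    return sum(shares) % m
```

A compiler then replaces each operation on secrets by an operation on shares; the equivalence result above says that security proved in this simple model already implies security against the more realistic noisy leakage.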
A lot of information concerning solutions of linear differential equations can be computed directly from the equation. It is therefore natural to consider these equations as a data-structure, from which mathematical properties can be computed. A variety of algorithms has thus been designed in recent years that do not aim at “solving”, but at computing with this representation. Many of these results are surveyed in .
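A minimal illustration of this "equation as data structure" viewpoint (a generic sketch, not taken from the survey): from y'' + y = 0 with y(0) = 0, y'(0) = 1, whose solution is sin, the equation directly induces the coefficient recurrence (n+2)(n+1)·c_{n+2} = -c_n, from which Taylor coefficients, and hence accurate values of the solution, can be computed without ever "solving" in closed form.

```python
# Compute Taylor coefficients of the solution of y'' + y = 0, y(0)=0, y'(0)=1
# directly from the recurrence (n+2)(n+1) c_{n+2} = -c_n read off the equation.
def taylor_coeffs(nterms):
    c = [0.0, 1.0]                       # c_0 = y(0), c_1 = y'(0)
    for k in range(nterms - 2):
        c.append(-c[k] / ((k + 2) * (k + 1)))
    return c

def eval_series(c, x):
    return sum(ci * x ** i for i, ci in enumerate(c))
```

With 25 coefficients, evaluating the truncated series at x = 1 reproduces sin(1) to full double precision; the same mechanism applies to any linear differential equation with polynomial coefficients.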
The absolute separation of a polynomial is the minimum nonzero difference between the absolute values of its roots. In the case of polynomials with integer coefficients, it can be bounded from below in terms of the degree and the height (the maximum absolute value of the coefficients) of the polynomial. We improve the known bounds for this problem and related ones. Then we report on extensive experiments in low degrees, suggesting that the current bounds are still very pessimistic.
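The quantity in question can be checked numerically on small examples. The sketch below (an illustrative computation, not the paper's method) approximates all complex roots with a plain Durand-Kerner iteration and returns the minimum nonzero gap between their absolute values; the example polynomial (x-1)(x-2)(x-4) is a hypothetical choice.

```python
# Estimate the absolute separation of a polynomial: the smallest nonzero
# difference between the absolute values of its (complex) roots.
def _prod(it):
    v = 1
    for x in it:
        v *= x
    return v

def peval(c, x):                         # Horner evaluation, c[0] = leading coeff
    v = 0
    for a in c:
        v = v * x + a
    return v

def dk_roots(c, iters=200):              # Durand-Kerner simultaneous iteration
    c = [a / c[0] for a in c]            # make monic
    d = len(c) - 1
    z = [(0.4 + 0.9j) ** k for k in range(d)]   # standard initial guesses
    for _ in range(iters):
        z = [zi - peval(c, zi) /
                  _prod(zi - zj for j, zj in enumerate(z) if j != i)
             for i, zi in enumerate(z)]
    return z

def abs_separation(c, tol=1e-9):
    mags = sorted(abs(r) for r in dk_roots(c))
    gaps = [b - a for a, b in zip(mags, mags[1:]) if b - a > tol]
    return min(gaps)
```

For [1, -7, 14, -8], whose roots have absolute values 1, 2 and 4, this returns 1; such experiments are exactly how the gap between proven bounds and observed behavior is measured in low degrees.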
We consider in the LU factorization of an
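For reference, the kind of kernel such rounding-error analyses apply to can be written down in a few lines. This is a generic textbook LU factorization with partial pivoting, not the specific algorithm analyzed in the paper.

```python
# LU factorization with partial pivoting on a dense matrix given as a list
# of rows; returns the packed L\U factors (L unit lower triangular, stored
# below the diagonal) and the row permutation.
def lu_partial_pivoting(A):
    n = len(A)
    A = [row[:] for row in A]            # work on a copy
    perm = list(range(n))
    for k in range(n):
        m = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[m] = A[m], A[k]
        perm[k], perm[m] = perm[m], perm[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]           # multiplier, stored in L's slot
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A, perm
```

Each subtraction and division above commits a rounding error of at most half an ulp, and bounding how these errors accumulate is the object of the backward error analysis.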
In we design fast algorithms for the computation of approximant bases in shifted Popov normal form. For
Bosch (Germany) contracted us for support in implementing complex numerical algorithms (participants: Claude-Pierre Jeannerod and Jean-Michel Muller).
Miruna Rosca and Radu Titiu are employees of BitDefender. Their PhDs are supervised by Damien Stehlé and Benoît Libert, respectively. Miruna Rosca works on the foundations of lattice-based cryptography, and Radu Titiu works on pseudo-random functions and functional encryption.
Adel Hamdi is doing his PhD with Orange Labs under the supervision of Fabien Laguillaumie. He is working on advanced encryption protocols for the cloud.
FastRelax stands for “Fast and Reliable Approximation”. It is a four-year ANR project (started in October 2014 and extended until September 2019).
The web page of the project is http://
The aim of this project is to develop computer-aided proofs of numerical values, with certified and reasonably tight error bounds, without sacrificing efficiency. Applications to zero-finding, numerical quadrature or global optimization can all benefit from using our results as building blocks. We expect our work to initiate a “fast and reliable” trend in the symbolic-numeric community. This will be achieved by developing interactions between our fields, designing and implementing prototype libraries and applying our results to concrete problems originating in optimal control theory.
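The "certified and reasonably tight error bounds" above can be illustrated by interval arithmetic, one of the basic building blocks of such computer-aided proofs (a minimal sketch, not one of the project's actual libraries): each operation rounds its endpoints outward, so the exact real result is guaranteed to lie in the returned enclosure.

```python
# Minimal outward-rounded interval arithmetic: every operation widens its
# endpoints by one ulp, so the true real result is always enclosed even
# though the endpoint computations themselves round.
import math

def widen(lo, hi):
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def iadd(a, b):                          # [a] + [b]
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):                          # [a] * [b]
    prods = [x * y for x in a for y in b]
    return widen(min(prods), max(prods))
```

For example, iadd((1.0, 2.0), (3.0, 4.0)) returns an interval slightly wider than [4, 6]. "Reasonably tight" means keeping such enclosures from growing much faster than the true error, which naive interval evaluation often fails to do.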
ALAMBIC is a four-year project (started in October 2016) focused on the applications of cryptographic primitives with homomorphic or malleability properties.
The web page of the project is
https://
RISQ (Regroupement de l’Industrie française pour la Sécurité Post-Quantique) is a BPI-DGE four-year project (started in January 2017) focused on the transfer of post-quantum cryptography from academia to industrial products. The web page of the project is
http://
PROMETHEUS (Privacy-Preserving Systems from Advanced Cryptographic Mechanisms Using Lattices) is a 4-year European H2020 project (call H2020-DS-2016-2017, Cybersecurity PPP Cryptography, DS-06-2017) that started in January 2018. It gathers 8 academic partners (ENS de Lyon and Université de Rennes 1; CWI, the Netherlands; IDC Herzliya, Israel; Royal Holloway University of London, United Kingdom; Universitat Politècnica de Catalunya, Spain; Ruhr-Universität Bochum, Germany; Weizmann Institute, Israel) and 4 industrial partners (Orange, Thales, TNO, Scytl). The goal of this project is to develop a toolbox of privacy-preserving cryptographic algorithms and protocols (like group signatures, anonymous credentials, or digital cash systems) that resist quantum adversaries. Solutions will be mainly considered in the context of Euclidean lattices and they will be analyzed from a theoretical point of view (i.e., from a provable security aspect) and a practical angle (which covers the security of cryptographic implementations and side-channel leakages). The project is hosted by ENS de Lyon; Benoît Libert is the administrative coordinator and Orange is the scientific leader.
This is a 3-year project accepted in July 2018, expected to begin on January 1, 2019. Benoît Libert is co-PI with Shweta Agrawal (IIT Madras, India). The budget on the French side amounts to 100k€.
Functional encryption is a paradigm that enables users to perform data mining and analysis on encrypted data. Users are provided cryptographic keys corresponding to particular functionalities which enable them to learn the output of the computation without learning anything about the input. Despite recent advances, efficient realizations of functional encryption are only available for restricted function families, which are typically represented by small-depth circuits: indeed, solutions for general functionalities are either way too inefficient for practical use or they rely on uncertain security foundations like the existence of circuit obfuscators (or both). This project will explore constructions based on well-studied hardness assumptions which are closer to being usable in real-life applications. To this end, we will notably consider solutions supporting models of computation other than Boolean circuits (like Turing machines), which support variable-size inputs. In the context of particular functionalities, the project will aim for more efficient realizations that satisfy stronger security notions.
TUCKER Warwick
Department of Mathematics - Uppsala University - Sweden
Title: Hénon attractor and Abelian integrals related to Hilbert's 16th problem
2018 – 2022
Ron Steinfeld, Monash University (June)
Amin Sakzad, Monash University (June)
Shi Bai, Florida Atlantic University (June and July)
David Wu, University of Virginia (July)
Olivier Bernard, Université Rennes 1 and Thalès (October and November)
Gautier Eberhart, Université Rennes 1 (October and November)
Federico Savasta, Università degli Studi di Catania (October)
Damien Stehlé organized a Winter School on the mathematical foundations of public-key cryptography, in Aussois (France), from March 17 to March 22.
Bruno Salvy was a co-chair of AofA'2019 (Analysis of Algorithms), in Luminy, France.
Elena Kirshanova was in the program committee of Asiacrypt 2019.
Benoît Libert was in the program committees of PKC 2019 and PKC 2020.
Nathalie Revol is in the steering committee of the Arith conference series. She was in the program committees of Arith'26, Correctness 2019 (a workshop of SuperComputing), and PPAM (Parallel Processing and Applied Mathematics) 2019.
Bruno Salvy is in the steering committee of AofA. He was in the program committee for FPSAC 2019 and in the scientific committee of OPSFA 2019. He is in the program committee for AofA 2020.
Damien Stehlé is in the steering committee of the PQCrypto conference series. He was in the program committee of PQCrypto 2019 and is in the program committee of PQCrypto 2020. He was also in the program committee of CRYPTO 2019.
Gilles Villard was in the program committee of ISSAC 2019.
Benoît Libert was a member of the editorial board of IET Information Security until July 31, 2019.
Jean-Michel Muller is associate editor in chief of the IEEE Transactions on Emerging Topics in Computing.
Nathalie Revol is a member of the editorial board of Reliable Computing.
Bruno Salvy and Gilles Villard are members of the editorial board of Journal of Symbolic Computation.
Bruno Salvy is a member of the editorial board of the collection Text and Monographs in Symbolic Computation (Springer) and of the journal Annals of Combinatorics.
Damien Stehlé is a member of the editorial board of Journal of Cryptology.
Claude-Pierre Jeannerod gave an invited talk at the workshop Structured Matrix Days (Limoges, May 23–24, 2019).
Benoît Libert gave an invited presentation during the “Workshop on Modern Trends in Cryptography” organized at Nanyang Technological University (Singapore) on June 13-14, 2019.
Damien Stehlé gave lectures during the “Euclidean lattices: theory and applications” Summer school that was held in Kaliningrad (Russia), from July 15 to July 19.
Guillaume Hanrot was a member of selection committees for professors at Université de Lorraine (in CS) and at Université de Nouvelle-Calédonie (in Mathematics). He was also in the hiring committee of the computer science department of École polytechnique. He is a member of the working group on the revision of the CS curriculum in Classes préparatoires aux grandes écoles.
Claude-Pierre Jeannerod is a member of the scientific committee of JNCF (Journées Nationales de Calcul Formel). He is also a member of the recruitment committee for postdocs and sabbaticals at Inria Grenoble–Rhône-Alpes.
Jean-Michel Muller is co-head of the GDR Informatique Mathématique of the CNRS. He is also a member of the Scientific Council of CERFACS (Toulouse).
Alain Passelègue is a member of the steering committee of the Groupe de Travail Codage et Cryptographie (GT-C2) of the GDR-IM.
Nathalie Revol is a member of the steering committee of GDR Calcul; she was a member of the hiring committee (Comité de Sélection) for 2 positions at U. Nantes.
Bruno Salvy is a member of the scientific councils of the CIRM, Luminy and of the GDR Informatique Mathématique of the CNRS.
Nathalie Revol has been an expert for the European H2020 program.
Jean-Michel Muller is a member of the Commission Administrative Paritaire (CAP) Directeurs de Recherches of CNRS.
Gilles Villard is a member of the Section 6 of the Comité national de la recherche scientifique.
Master: Guillaume Hanrot, Computer algebra, 10h, ENS de Lyon, France
Master: Guillaume Hanrot, Cryptanalysis, 15h, ENS de Lyon, France
Master: Claude-Pierre Jeannerod, Computer Algebra, 18h, M2Pro ISFA (Institut de Science Financière et d’Assurances), Université Claude Bernard Lyon 1, France
Master (1&2): Fabien Laguillaumie, Cryptography, 160h, ISFA, UCBL, France
Master: Benoît Libert, Advanced Topics in Cryptography, 15h, ENS de Lyon, France
Master: Nicolas Louvet, Compilers, 22h, M1, UCB Lyon 1, France
Master: Alain Passelègue, Computer Algebra, 10h, M1, ENS de Lyon, France
Master: Alain Passelègue, Advanced Topics in Cryptography, 30h, M2, ENS de Lyon, France
Master: Nathalie Revol, Numerical Algorithms and Reliability, 12h, M2Pro ISFA (Institut de Science Financière et d’Assurances), Université Claude Bernard Lyon 1, France
Master: Bruno Salvy, Computer Algebra, 6h, ENS de Lyon, France
Master: Bruno Salvy, Logic and Complexity, 32h, École polytechnique, France
Master: Damien Stehlé, Cryptanalysis, 15h, ENS de Lyon, France
Master: Gilles Villard, Computer Algebra, 8h, ENS de Lyon, France
Bachelor: Guillaume Hanrot, Calculability and complexity, 32h, ENS de Lyon, France
Bachelor: Nicolas Louvet, Computer Architecture, 27h, L1, UCB Lyon 1, France
Bachelor: Nicolas Louvet, Operating Systems, 50h, L2, UCB Lyon 1, France
Bachelor: Nicolas Louvet, Data Structures and Algorithms, 24h, L2, UCB Lyon 1, France
Bachelor: Nicolas Louvet, Data Structures and Algorithms, 40h, L3, UCB Lyon 1, France
Bachelor: Nicolas Louvet, Formal Languages, 15h, L3, UCB Lyon 1, France
Bachelor: Nicolas Louvet, Classical Logic, 15h, L3, UCB Lyon 1, France
Bachelor: Bruno Salvy, Design and Analysis of Algorithms, 20h, École polytechnique, France
PhD: Florent Bréhard, Certified Numerics in Function Spaces: Polynomial Approximations Meet Computer Algebra and Formal Proof, ENS de Lyon, July 12, Nicolas Brisebarre (co-supervision with Mioara Joldes, CNRS LAAS, and Damien Pous, LIP)
PhD: Alice Pellet-Mary, On ideal lattices and the GGH13 multilinear map, ENS de Lyon, October 16, Damien Stehlé
PhD: Chen Qian, Lossy Trapdoor Primitives, Zero-Knowledge Proofs and Applications, IRISA Rennes, October 4, Benoît Libert (co-supervision with Pierre-Alain Fouque, IRISA)
PhD in progress: Miruna Rosca, The middle-product learning with errors problem, January 2017, Damien Stehlé
PhD in progress: Huyen Nguyen, Cryptographic aspects of orthogonal lattices, September 2018, Damien Stehlé
PhD in progress: Radu Titiu, Advanced cryptographic primitives based on standard assumptions, January 2017, Benoît Libert
PhD in progress: Adel Hamdi, Functional Encryption, December 2017, Fabien Laguillaumie (co-supervised by Sébastien Canard, Orange)
PhD in progress: Ida Tucker, Advanced cryptographic primitives from homomorphic encryption, October 2017, Fabien Laguillaumie (co-direction with Guilhem Castagnos, Université de Bordeaux)
Damien Stehlé was a jury member for the PhD defences of Ilia Iliashenko (K.U. Leuven, Belgium) and Joost Rijneveld (Radboud U., the Netherlands) and for the habilitation defence of Omar Fawzi (ENS de Lyon). He was a reviewer and jury member of the PhD of Thomas Debris-Alazard (Sorbonne U.) and for the habilitation of Guilhem Castagnos (U. Bordeaux).
Benoît Libert was a reviewer for the PhD theses of Romain Gay (ENS Paris) and Jérémy Chotard (Univ. of Limoges). He was a PhD examiner for the thesis of Andrea Cerulli (University College London, United Kingdom). He also chaired the PhD committee of Anca Nitulescu (ENS Paris).
Fabien Laguillaumie was a reviewer and jury member of the PhD of Pauline Bert (Université de Rennes) and of the HDR of Olivier Blazy (Université de Limoges).
Jean-Michel Muller was a reviewer and jury member of the PhD of Clothilde Jeangoudoux (Sorbonne University, Paris).
Gilles Villard was a reviewer for the PhD thesis of Robin Larrieu (Université Paris-Saclay), and an examiner for the habilitation of Pascal Giorgi (Université de Montpellier) and the PhD thesis of Matías Bender (Sorbonne Université).
Nathalie Revol was in the scientific committee for the Journées Scientifiques Inria (Lyon, 5-7 June 2019).
Nathalie Revol was a member of the editorial committee of interstices; she has been the scientific editor of this Web magazine since September 2019.
Nathalie Revol belonged to the steering committee of MMI (Maison des Mathématiques et de l'Informatique) until July 2019; she is now a member of its prospective committee.
Bruno Salvy is “référent chercheur” for the Inria Grenoble Center.
Nathalie Revol belonged to the working group that designed the "7 families of computer science" playcards, launched in February 2019.
Nathalie Revol taught "Dissemination of Scientific Knowledge", 10h, to the 4th year students (between Master and PhD) of ENS de Lyon, France.
Nathalie Revol works with DANE (Délégation Académique au Numérique dans l'Éducation) of Rectorat de Lyon on educating primary-school teachers, by training the trainers (e-RUN); she is a member of the Conseil Scientifique du Numérique.
Nathalie Revol presented activities for teaching computer science at every school level, and in particular activities led by Inria, for a Taiwanese delegation (Grenoble, October 2019).
Nathalie Revol spent 2 days at École Guilloux, with 50 pupils aged 10 (level: CM2), teaching computer science without any computer (so-called "unplugged computer science"): data, algorithms, networks.
For high-school pupils (
For a larger audience: she took part in Pop Science, doing magic tricks in the street at La Duchère, Lyon, May 2019; with Florent Masséglia, she introduced 5 important figures of computer science, chosen among the personalities of the "7 families of computer science" playcards, for L'Esprit Sorcier, at la Fête de la Science, Paris, October 2019, cf https://
Nathalie Revol took part in a day of teaching unplugged computer science to anyone interested, at La Gaîté Lyrique, Paris, June 2019.
Nathalie Revol belongs to the working group "Informatique Sans Ordinateur", which creates unplugged activities to teach computer science; this group meets twice a year, usually in Lyon.