2023 Activity Report: Project-Team ARIC
RNSR: 201221021B. Research center: Inria Lyon Centre
 In partnership with: CNRS, Université Claude Bernard (Lyon 1), École normale supérieure de Lyon
 Team name: Arithmetic and Computing
 In collaboration with:Laboratoire de l'Informatique du Parallélisme (LIP)
 Domain:Algorithmics, Programming, Software and Architecture
 Theme:Algorithmics, Computer Algebra and Cryptology
Keywords
Computer Science and Digital Science
 A2.4. Formal method for verification, reliability, certification
 A4.3. Cryptography
 A7.1. Algorithms
 A8. Mathematics of computing
 A8.1. Discrete mathematics, combinatorics
 A8.4. Computer Algebra
 A8.5. Number theory
 A8.10. Computer arithmetic
Other Research Topics and Application Domains
 B6.6. Embedded systems
 B9.5. Sciences
 B9.10. Privacy
1 Team members, visitors, external collaborators
Research Scientists
 Bruno Salvy [Team leader, INRIA, Senior Researcher]
 Nicolas Brisebarre [CNRS, Senior Researcher, HDR]
 Claude-Pierre Jeannerod [INRIA, Researcher]
 Vincent Lefèvre [INRIA, Researcher]
 Jean-Michel Muller [CNRS, Senior Researcher, HDR]
 Alain Passelègue [INRIA, Researcher, until Aug 2023]
 Nathalie Revol [INRIA, Researcher]
 Gilles Villard [CNRS, Senior Researcher, HDR]
Faculty Members
 Guillaume Hanrot [ENS DE LYON, Professor, until Aug 2023, HDR]
 Nicolas Louvet [UNIV LYON I, Associate Professor]
 Damien Stehlé [ENS DE LYON, Professor, until Mar 2023, HDR]
PhD Students
 Calvi Abou Haidar [INRIA]
 Orel Cosseron [ZAMA SAS, until Aug 2023]
 Julien Devevey [ENS DE LYON, until Sep 2023]
 Pouria Fallahpour [ENS DE LYON]
 Joel Felderhoff [INRIA]
 Alaa Ibrahim [INRIA]
 Mahshid Riahinia [ENS DE LYON]
 Hippolyte Signargout [ENS DE LYON, until Oct 2023]
Technical Staff
 Joris Picot [ENS DE LYON, Engineer]
Interns and Apprentices
 Louis Gaillard [ENS DE LYON, Intern, from Feb 2023 until Jul 2023]
 Tom Hubrecht [ENS PARIS-SACLAY, from Sep 2023]
Administrative Assistant
 Chiraz Benamor [ENS DE LYON]
Visiting Scientist
 Warwick Tucker [Monash University, Australia]
2 Overall objectives
A major challenge in modeling and scientific computing is the simultaneous mastery of hardware capabilities, software design, and mathematical algorithms for the efficiency and reliability of the computation. In this context, the overall objective of AriC is to improve computing at large, in terms of performance, efficiency, and reliability. We work on the fine structure of floating-point arithmetic, on controlled approximation schemes, on algebraic algorithms and on new cryptographic applications, most of these themes being pursued in their interactions. Our approach combines fundamental studies, practical performance and qualitative aspects, with a shared strategy going from high-level problem specifications and standardization actions, to computer arithmetic and the lowest-level details of implementations.
This makes AriC the right place to pursue the following lines of action:
 Design and integration of new methods and tools for mathematical program specification, certification, security, and guarantees on numerical results. Some main ingredients here are: the interleaving of formal proofs, computer arithmetic and computer algebra; error analysis and computation of certified error bounds; the study of the relationship between performance and numerical quality; and, on the cryptography side, a focus on the practicality of existing protocols and the design of more powerful lattice-based primitives.
 Generalization of a hybrid symbolic-numeric trend: interplay between arithmetic for both improving and controlling numerical approaches (symbolic $\to $ numeric), as well as actions accelerating exact solutions (symbolic $\leftarrow $ numeric). This trend, especially in the symbolic computation community, has acquired a strategic role for the future of scientific computing. The integration in AriC of computer arithmetic, reliable computing, and algebraic computing is expected to lead to a deeper understanding of the problem and novel solutions.
 Mathematical and algorithmic foundations of computing. We address algorithmic complexity and fundamental aspects of approximation, polynomial and matrix algebra, and lattice-based cryptography. Practical questions concern the design of high-performance and reliable computing kernels, thanks to optimized computer arithmetic operators and a better fit between low-level arithmetic building blocks and higher-level ones.
According to the application domains that we target and our main fields of expertise, these lines of action are organized into three themes with specific objectives.
 Efficient approximation methods (§3.1). Here lies the question of interleaving formal proofs, computer arithmetic and computer algebra, for significantly extending the range of functions whose reliable evaluation can be optimized.
 Lattices: algorithms and cryptography (§3.2). Long-term goals are to go beyond the current design paradigm in basis reduction, and to demonstrate the superiority of lattice-based cryptography over contemporary public-key cryptographic approaches.
 Algebraic computing and high performance kernels (§3.3). The problem is to keep the algorithm and software designs in line with the scales of computational capabilities and application needs, by simultaneously working on the structural and the computer arithmetic levels.
3 Research program
3.1 Efficient and certified approximation methods
3.1.1 Safe numerical approximations
The last twenty years have seen the advent of computer-aided proofs in mathematics, and this trend is gaining importance. Such proofs require: fast and stable numerical computations; numerical results with a guarantee on the error; and formal proofs of these computations, or computations with a proof assistant. One of our main long-term objectives is to develop a platform where one can study a computational problem at all (or any) of these three levels of rigor. At this stage, most of the necessary routines are not easily available (or do not even exist) and one needs to develop ad hoc tools to complete the proof. We plan to provide more and more algorithms and routines to address such questions. Possible applications lie in the study of mathematical conjectures where exact mathematical results are required (e.g., stability of dynamical systems); or in more applied questions, such as the automatic generation of efficient and reliable numerical software for function evaluation. From a complementary viewpoint, numerical safety is also critical in robust space mission design, where guidance and control algorithms become more complex in the context of increased satellite autonomy. We will pursue our collaboration with specialists of that area, whose questions bring us interesting focus on relevant issues.
3.1.2 Floating-point computing
Floating-point arithmetic is currently undergoing a major evolution, in particular with the recent advent of a greater diversity of available precisions on the same system (from 8 to 128 bits) and of coarser-grained floating-point hardware instructions. This new arithmetic landscape raises important issues at the various levels of computing, which we will address along the following three directions.
Floating-point algorithms, properties, and standardization
One of our targets is the design of building blocks of computing (e.g., algorithms for the basic operations and functions, and algorithms for complex or double-word arithmetic). Establishing properties of these building blocks (e.g., the absence of “spurious” underflows/overflows) is also important. The IEEE 754 standard on floating-point arithmetic (which was slightly revised in 2019) will have to undergo a major revision within a few years: first because advances in technology or new needs make some of its features obsolete, and second because new features need standardization. We aim at playing a leading role in the preparation of the next standard.
Error bounds
We will pursue our studies in rounding error analysis, in particular for the “low precision–high dimension” regime, where traditional analyses become ineffective and where improved bounds are thus most needed. For this, the structure of both the data and the errors themselves will have to be exploited. We will also investigate the impact of mixed-precision and coarser-grained instructions (such as small matrix products) on accuracy analyses.
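To illustrate why improved bounds matter in this regime, a small Python experiment (arbitrary size and seed, binary64 arithmetic) compares the classical worst-case bound for recursive summation with the error actually committed, measured exactly with rationals:

```python
from fractions import Fraction
import random

u = 2.0 ** -53          # unit roundoff of binary64

def rel_error_of_sum(xs):
    """Exact relative error of left-to-right recursive float summation."""
    s = 0.0
    for x in xs:
        s += x                           # each += commits one rounding
    exact = sum(Fraction(x) for x in xs)
    return abs((Fraction(s) - exact) / exact)

random.seed(1)
xs = [random.random() for _ in range(10_000)]
err = rel_error_of_sum(xs)
bound = (len(xs) - 1) * u                # classical worst-case bound ~ n*u
# In practice err sits orders of magnitude below the worst-case bound,
# which is what refined (e.g., probabilistic) analyses try to capture.
```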
High performance kernels
Most directions in the team are concerned with optimized and high-performance implementations. We will pursue our efforts concerning the implementation of well-optimized floating-point kernels, with an emphasis on numerical quality, and taking into account the current evolution in computer architectures (the increasing width of SIMD registers, and the availability of low-precision formats). We will focus on computing kernels used within other axes in the team such as, for example, extended-precision linear algebra routines within the FPLLL and HPLLL libraries.
3.2 Lattices: algorithms and cryptology
We intend to strengthen our assessment of the cryptographic relevance of problems over lattices, and to broaden our studies in two main (complementary) directions: hardness foundations and advanced functionalities.
3.2.1 Hardness foundations
Recent advances in cryptography have broadened the scope of encryption functionalities (e.g., encryption schemes that allow computing over encrypted data or delegating partial decryption keys). While simple variants (e.g., identity-based encryption) are already practical, the more advanced ones still lack efficiency. Towards reaching practicality, we plan to investigate simpler constructions of the fundamental building blocks (e.g., pseudorandom functions) involved in these advanced protocols. We aim at simplifying known constructions based on standard hardness assumptions, but also at identifying new sources of hardness from which simple constructions, naturally suited for the aforementioned advanced applications, could be obtained (e.g., constructions that minimize critical complexity measures such as the depth of evaluation). Understanding the core source of hardness of today's standard hard algorithmic problems is an interesting direction, as it could lead to new hardness assumptions (e.g., tweaked versions of standard ones) from which we could derive much more efficient constructions. Furthermore, it could open the way to completely different constructions of advanced primitives based on new hardness assumptions.
3.2.2 Cryptanalysis
Lattice-based cryptography has come much closer to maturity in the recent past. In particular, NIST has started a standardization process for post-quantum cryptography, and lattice-based proposals are numerous and competitive. This dramatically increases the need for cryptanalysis:
Do the underlying hard problems suffer from structural weaknesses? Are some of the problems used easy to solve, e.g., asymptotically?
Are the chosen concrete parameters meaningful for concrete cryptanalysis? In particular, how secure would they be if all the known algorithms and implementations thereof were pushed to their limits? How would this concrete performance change in case (full-fledged) quantum computers get built?
On another front, the cryptographic functionalities reachable under lattice hardness assumptions seem to get closer to an intrinsic ceiling. For instance, to obtain cryptographic multilinear maps, functional encryption and indistinguishability obfuscation, new assumptions have been introduced. They often have a lattice flavour, but are far from standard. Assessing the validity of these assumptions will be one of our priorities in the midterm.
3.2.3 Advanced cryptographic primitives
In the design of cryptographic schemes, we will pursue our investigations on functional encryption. Despite recent advances, efficient solutions are only available for restricted function families. Indeed, solutions for general functions are either way too inefficient for practical use or rely on uncertain security foundations, like the existence of circuit obfuscators (or both). We will explore constructions based on well-studied hardness assumptions that are closer to being usable in real-life applications. In the case of specific functionalities, we will aim at more efficient realizations satisfying stronger security notions.
Another direction we will explore is multiparty computation via a new approach exploiting the rich structure of class groups of quadratic fields. We already showed that such groups have a positive impact in this field, by designing new efficient encryption switching protocols from the additively homomorphic encryption we introduced earlier. We want to go further in this direction, which raises interesting questions, such as how to design efficient zero-knowledge proofs for groups of unknown order, how to exploit their structure in the context of two-party cryptography (such as two-party signing), and how to extend to the multiparty setting.
In the context of the PROMETHEUS H2020 project, we will keep seeking to develop new quantum-resistant privacy-preserving cryptographic primitives (group signatures, anonymous credentials, e-cash systems, etc.). This includes the design of more efficient zero-knowledge proof systems that can interact with lattice-based cryptographic primitives.
3.3 Algebraic computing and high performance kernels
The connections between algorithms for structured matrices and for polynomial matrices will continue to be developed, since they have proved to bring progress to fundamental questions with applications throughout computer algebra. The new fast algorithm for the bivariate resultant opens an exciting area of research which should produce improvements to a variety of questions related to polynomial elimination. We naturally expect to produce results in that area.
For definite summation and integration, we now have fast algorithms for single integrals of general functions and sequences, and for multiple integrals of rational functions. The long-term objective of that part of computer algebra is an efficient and general algorithm for multiple definite integration and summation of general functions and sequences. This is the direction we will take, starting with single definite sums of general functions and sequences (leading in particular to a faster variant of Zeilberger's algorithm). We also plan to investigate geometric issues related to the presence of apparent singularities, and how they seem to play a role in the complexity of the current algorithms.
4 Application domains
4.1 Floating-point and Validated Numerics
Our expertise on validated numerics is useful for analyzing, improving, and guaranteeing the quality of numerical results in a wide range of applications, including:
 scientific simulation;
 global optimization;
 control theory.
Much of our work, in particular the development of correctly rounded elementary functions, is critical to the reproducibility of floating-point computations.
4.2 Cryptography, Cryptology, Communication Theory
Lattice reduction algorithms have direct applications in
 public-key cryptography;
 Diophantine equations;
 communications theory.
5 Highlights of the year
5.1 Awards
Best paper award at ARITH 2023 for the article “Towards Machine-Efficient Rational L${}^{\infty}$-Approximations of Mathematical Functions”, by Silviu-Ioan Filip and Nicolas Brisebarre 12.
6 New software, platforms, open data
6.1 New software
6.1.1 FPLLL

Keywords:
Euclidean Lattices, Computer algebra system (CAS), Cryptography

Scientific Description:
The fplll library is used or has been adapted to be integrated within several mathematical computation systems such as Magma, Sage, and Pari/GP. It is also used for cryptanalytic purposes, to test the resistance of cryptographic primitives.

Functional Description:
fplll contains implementations of several lattice algorithms. The implementation relies on floating-point orthogonalization, and LLL is central to the code, hence the name.
It includes implementations of floating-point LLL reduction algorithms, offering different trade-offs between speed and guarantees. It contains a 'wrapper' choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible; in that case, the succession of variants is hidden from the user.
It includes an implementation of the BKZ reduction algorithm, including the BKZ 2.0 improvements (extreme enumeration pruning, preprocessing of blocks, early termination). Additionally, Slide reduction and self-dual BKZ are supported.
It also includes a floating-point implementation of the Kannan-Fincke-Pohst algorithm that finds a shortest nonzero lattice vector. For the same task, the GaussSieve algorithm is also available in fplll. Finally, it contains a variant of the enumeration algorithm that computes a lattice vector closest to a given vector belonging to the real span of the lattice.
 URL:

Contact:
Damien Stehlé
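For the flavor of what the library computes, here is a deliberately naive textbook LLL sketch in exact rational arithmetic; fplll's contribution is precisely to replace the exact Gram-Schmidt below with fast floating-point computations while keeping guarantees:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook Lenstra-Lenstra-Lovász reduction of integer row vectors,
    using exact rational Gram-Schmidt (slow but obviously correct)."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def dot(u, v):
        return sum(p * q for p, q in zip(u, v))

    def gram_schmidt():
        mu = [[Fraction(0)] * n for _ in range(n)]
        bs = []
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bs[j]) / dot(bs[j], bs[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bs[j])]
            bs.append(v)
        return bs, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size reduction
            bs, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bs, mu = gram_schmidt()
        if dot(bs[k], bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bs[k - 1], bs[k - 1]):
            k += 1                                 # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]        # swap and backtrack
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]
```

Recomputing the full Gram-Schmidt at every step makes this quadratically wasteful; it is only meant to make the structure of the algorithm visible.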
6.1.2 Gfun

Name:
generating functions package

Keyword:
Symbolic computation

Functional Description:
Gfun is a Maple package for the manipulation of linear recurrence and differential equations. It provides tools for guessing a sequence or a series from its first terms, and for rigorously manipulating solutions of linear differential or recurrence equations, using the equation as a data structure.
 URL:

Contact:
Bruno Salvy
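The “guessing” idea can be illustrated outside Maple; the following Python sketch guesses a constant-coefficient linear recurrence from initial terms (gfun itself is far more general, handling polynomial coefficients and differential equations):

```python
from fractions import Fraction

def solve(A, y):
    """Gauss-Jordan elimination over the rationals; None if singular."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def guess_recurrence(terms, order):
    """Guess c_1..c_r with a(n+r) = c_1*a(n+r-1) + ... + c_r*a(n) from the
    first terms; the guess is checked on the remaining terms, as in gfun."""
    r = order
    if len(terms) < 2 * r + 1:
        return None
    A = [[Fraction(terms[n + r - 1 - j]) for j in range(r)] for n in range(r)]
    y = [Fraction(terms[n + r]) for n in range(r)]
    c = solve(A, y)
    if c is None:
        return None
    for n in range(r, len(terms) - r):
        if sum(c[j] * terms[n + r - 1 - j] for j in range(r)) != terms[n + r]:
            return None
    return c

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
# guess_recurrence(fib, 2) recovers a(n+2) = a(n+1) + a(n), i.e. [1, 1].
```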
6.1.3 GNU MPFR

Keywords:
Multiple-Precision, Floating-point, Correct Rounding

Functional Description:
GNU MPFR is an efficient arbitrary-precision floating-point library with well-defined semantics (copying the good ideas from the IEEE 754 standard), in particular correct rounding in 5 rounding modes. It provides about 100 mathematical functions, in addition to utility functions (assignments, conversions...). Special data (Not a Number, infinities, signed zeros) are handled like in the IEEE 754 standard. GNU MPFR is based on the mpn and mpz layers of the GMP library.
 URL:
 Publications:

Contact:
Vincent Lefèvre

Participants:
Guillaume Hanrot, Paul Zimmermann, Philippe Théveny, Vincent Lefèvre
6.1.4 MPFI

Name:
Multiple Precision Floating-point Interval

Keyword:
Arithmetic

Functional Description:
MPFI is a C library based on MPFR and GMP for arbitrary precision interval arithmetic.

Release Contributions:
Updated for the autoconf installation. New functions added: rev_sqrt, exp10, exp2m1, exp10m1, log2p1, log10p1.
 URL:

Contact:
Nathalie Revol
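To give the flavor of what an interval library guarantees, here is a toy Python inclusion-preserving interval type; unlike MPFI it is limited to binary64 precision, and it emulates directed rounding with one-ulp outward steps (valid, though wider than the tightest directed rounding MPFI achieves):

```python
import math

class Interval:
    """Toy inclusion-preserving interval type over binary64.
    Outward rounding is emulated with math.nextafter: one ulp outward on
    each bound, which preserves enclosures (at the price of slightly
    wider intervals than true directed rounding)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(prods), -math.inf),
                        math.nextafter(max(prods), math.inf))

    def __contains__(self, x):
        return self.lo <= x <= self.hi

x = Interval(0.1, 0.2)
y = (x + x) * x       # y encloses the range of (t + t) * t over x
```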
7 New results
7.1 Efficient approximation methods
7.1.1 Efficient and Validated Numerical Evaluation of Abelian Integrals
Abelian integrals play a key role in the infinitesimal version of Hilbert's 16th problem. Being able to evaluate such integrals, with guaranteed error bounds, is a fundamental step in computer-aided proofs aimed at this problem. Using interpolation by trigonometric polynomials and quasi-Newton-Kantorovitch validation, we develop a validated numerics method for computing Abelian integrals in a quasi-linear number of arithmetic operations. Our approach is both effective, as exemplified on two practical perturbed integrable systems, and amenable to implementation in a formal proof assistant, which is key to providing fully reliable computer-aided proofs 4.
7.1.2 Towards Machine-Efficient Rational L${}^{\infty}$-Approximations of Mathematical Functions
Software implementations of mathematical functions often use approximations that can be either polynomial or rational in nature. While polynomials are the preferred approximation in most cases, rational approximations are nevertheless an interesting alternative when dealing with functions that have a pronounced "non-polynomial behavior" (such as poles close to the approximation domain, asymptotes or finite limits at $\pm \infty $). The major challenge is that of computing good rational approximations with machine number coefficients (e.g., floating-point or fixed-point) with respect to the supremum norm, a key step in most procedures for evaluating a mathematical function. This is made more complicated by the fact that even when dealing with real-valued coefficients, optimal supremum norm solutions are sometimes difficult to obtain. Here, we introduce flexible and fast algorithms for computing such rational approximations with both real and machine number coefficients. Their effectiveness is explored on several examples 12.
7.1.3 Approximation speed of quantized vs. unquantized ReLU neural networks and beyond
We deal with two complementary questions about approximation properties of ReLU networks. First, we study how the uniform quantization of ReLU networks with real-valued weights impacts their approximation properties. We establish an upper bound on the minimal number of bits per coordinate needed for uniformly quantized ReLU networks to keep the same polynomial asymptotic approximation speeds as unquantized ones. We also characterize the error of nearest-neighbour uniform quantization of ReLU networks. This is achieved using a new lower bound on the Lipschitz constant of the map that associates the parameters of ReLU networks to their realization, and an upper bound generalizing classical results. Second, we investigate when ReLU networks can be expected, or not, to have better approximation properties than other classical approximation families. Indeed, several approximation families share the following common limitation: their polynomial asymptotic approximation speed of any set is bounded from above by the encoding speed of this set. We introduce a new abstract property of approximation families, called infinite-encodability, which implies this upper bound. Many classical approximation families, defined with dictionaries or ReLU networks, are shown to be infinite-encodable. This unifies and generalizes several situations where this upper bound is known 7.
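The nearest-neighbour uniform quantization studied here can be illustrated with a small Python experiment on a toy one-hidden-layer ReLU network; all sizes, seeds and the quantization step (roughly 8 bits per coordinate on [-1, 1]) are arbitrary choices for illustration, not the paper's setting:

```python
import random

def relu(v):
    return [max(x, 0.0) for x in v]

def forward(W1, W2, x):
    """One-hidden-layer bias-free ReLU network with scalar output."""
    h = relu([sum(w * xi for w, xi in zip(row, x)) for row in W1])
    return sum(w * hi for w, hi in zip(W2, h))

def quantize(w, step):
    return round(w / step) * step      # nearest-neighbour uniform quantization

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
W2 = [random.uniform(-1, 1) for _ in range(8)]
step = 2.0 ** -8
Q1 = [[quantize(w, step) for w in row] for row in W1]
Q2 = [quantize(w, step) for w in W2]

# Largest deviation between the network and its quantized version on samples;
# it is controlled by the quantization step times a Lipschitz-type constant.
dev = max(abs(forward(W1, W2, x) - forward(Q1, Q2, x))
          for x in ([random.uniform(-1, 1) for _ in range(4)]
                    for _ in range(100)))
```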
7.1.4 A pathnorm toolkit for modern networks: consequences, promises and challenges
This work introduces the first toolkit around path-norms that is fully able to encompass general DAG ReLU networks with biases, skip connections and any operation based on the extraction of order statistics: max pooling, GroupSort etc. This toolkit notably allows us to establish generalization bounds for modern neural networks that are not only the most widely applicable path-norm-based ones, but also recover or beat the sharpest known bounds of this type. These extended path-norms further enjoy the usual benefits of path-norms: ease of computation, invariance under the symmetries of the network, and improved sharpness on feedforward networks compared to the product of operators' norms, another complexity measure most commonly used. The versatility of the toolkit and its ease of implementation allow us to challenge the concrete promises of path-norm-based generalization bounds, by numerically evaluating the sharpest known bounds for ResNets on ImageNet 28.
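As a toy illustration of why path-norms are easy to compute, the following Python sketch checks, on a small bias-free feedforward ReLU network, the classical identity between the sum over paths of products of absolute weights and a product of entrywise absolute-value matrices (the paper's toolkit handles far more general DAG networks, which this sketch does not):

```python
from itertools import product

# Weights of a toy bias-free feedforward ReLU network: 2 inputs -> 2 hidden -> 1 output.
W1 = [[1.0, -2.0], [0.5, 3.0]]
W2 = [[-1.0, 4.0]]

def path_norm_by_paths(Ws):
    """Sum over all input-output paths of the product of |weights|."""
    dims = [len(Ws[0][0])] + [len(W) for W in Ws]
    total = 0.0
    for path in product(*(range(d) for d in dims)):
        p = 1.0
        for l, W in enumerate(Ws):
            p *= abs(W[path[l + 1]][path[l]])
        total += p
    return total

def path_norm_by_matrices(Ws):
    """Same quantity via entrywise absolute-value matrix-vector products
    applied to the all-ones vector: a few cheap passes over the weights."""
    v = [1.0] * len(Ws[0][0])
    for W in Ws:
        v = [sum(abs(w) * x for w, x in zip(row, v)) for row in W]
    return sum(v)

# Both computations agree; the matrix form is why path-norms are cheap.
```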
7.1.5 Can sparsity improve the privacy of neural networks?
Sparse neural networks are mainly motivated by resource efficiency, since they use fewer parameters than their dense counterparts but still reach comparable accuracies. This article empirically investigates whether sparsity could also improve the privacy of the data used to train the networks. The experiments show positive correlations between the sparsity of the model, its privacy, and its classification error. Simply comparing the privacy of two models with different sparsity levels can yield misleading conclusions on the role of sparsity, because of the additional correlation with the classification error. From this perspective, some caveats are raised about previous works that investigate sparsity and privacy 23.
7.2 Floatingpoint and Validated Numerics
7.2.1 Floating-point arithmetic: invited survey for Acta Numerica
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE 754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In the survey 2 we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE 754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
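One example of such a basic block: the rounding error of each addition in a sum can be estimated in floating-point arithmetic itself and fed back, which is the idea of Kahan's compensated summation. A small Python (binary64) sketch, with the exact error measured via rationals:

```python
from fractions import Fraction

def compensated_sum(xs):
    """Kahan's compensated summation: the rounding error of each addition
    is estimated and subtracted from the next term, giving accuracy close
    to working precision, essentially independent of the number of terms."""
    s = 0.0
    c = 0.0                  # running compensation (estimated lost low bits)
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y      # estimate of the error of s + y, in float arithmetic
        s = t
    return s

xs = [0.1] * 10_000
exact = sum(Fraction(x) for x in xs)
naive = 0.0
for x in xs:
    naive += x
err_naive = abs(Fraction(naive) - exact)
err_comp = abs(Fraction(compensated_sum(xs)) - exact)
# err_comp is several orders of magnitude smaller than err_naive.
```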
7.2.2 Error in ulps of the multiplication or division by a correctly-rounded function or constant in binary floating-point arithmetic
Assume we use a binary floating-point arithmetic and that $RN$ is the round-to-nearest function. Also assume that $c$ is a constant or a real function of one or more variables, and that we have at our disposal a correctly rounded implementation of $c$, say $\widehat{c}=RN\left(c\right)$. For evaluating $x\cdot c$ (resp. $x/c$ or $c/x$), the natural way is to replace it by $RN(x\cdot\widehat{c})$ (resp. $RN(x/\widehat{c})$ or $RN(\widehat{c}/x)$), that is, to call function $\widehat{c}$ and to perform a floating-point multiplication or division. This can be generalized to the approximation of $n/d$ by $RN(\widehat{n}/\widehat{d})$ and the approximation of $n\cdot d$ by $RN(\widehat{n}\cdot\widehat{d})$, where $\widehat{n}=RN\left(n\right)$ and $\widehat{d}=RN\left(d\right)$, and $n$ and $d$ are functions for which we have at our disposal a correctly rounded implementation. We discuss tight error bounds in ulps of such approximations. From our results, one immediately obtains tight error bounds for calculations such as $\mathtt{x*pi}$, $\mathtt{ln(2)/x}$, $\mathtt{x/(y+z)}$, $\mathtt{(x+y)*z}$, $\mathtt{x/sqrt(y)}$, $\mathtt{sqrt(x)/y}$, $\mathtt{(x+y)(z+t)}$, $\mathtt{(x+y)/(z+t)}$, $\mathtt{(x+y)/(zt)}$, etc. in floating-point arithmetic 5.
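As a toy illustration (not the paper's method) of measuring such errors in ulps, the following Python sketch takes the constant $c = 1/3$, so that the exact product is a rational we can compare against; the sampling grid is an arbitrary choice:

```python
import math
from fractions import Fraction

# Measure, in ulps, the error of approximating x*c by RN(x * RN(c)),
# for c = 1/3 (the paper's results cover constants such as pi or ln 2,
# whose exact values cannot be represented with Fraction).
c = Fraction(1, 3)
c_hat = 1.0 / 3.0                  # RN(c) in binary64

def error_in_ulps(x):
    computed = x * c_hat           # this product is RN(x * c_hat)
    exact = Fraction(x) * c
    return abs(Fraction(computed) - exact) / Fraction(math.ulp(computed))

worst = max(error_in_ulps(1.0 + k / 64.0) for k in range(64))
# worst is a bit above 0.5 ulp: the 0.5 ulp of the final rounding plus
# the propagated contribution of the rounding of c.
```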
7.2.3 Testing The Sharpness of Known Error Bounds on The Fast Fourier Transform
The computation of Fast Fourier Transforms (FFTs) in floating-point arithmetic is inexact due to roundings, and for some applications it can prove very useful to know a tight bound on the final error. Although it can be almost attained by specifically built input values, the best known error bound for the Cooley-Tukey FFT seems to be much larger than most actually obtained errors. Also, interval arithmetic can be used to compute a bound on the error committed with a given set of input values, but it is generally considered to be hampered by large overestimation. We report results of intensive computations to test the two approaches, in order to estimate the numerical performance of state-of-the-art bounds. Surprisingly enough, we observe that while interval-arithmetic-based bounds are overestimated, they remain, in our computations, tighter than the general known bounds 13.
7.2.4 Affine Iterations and Wrapping Effect: Various Approaches
Affine iterations of the form ${x}_{n+1}=A{x}_{n}+b$ converge, using real arithmetic, if the spectral radius of the matrix $A$ is less than 1. However, substituting interval arithmetic for real arithmetic may lead to divergence of these iterations, in particular if the spectral radius of the absolute value of $A$ is greater than 1. In 11, we review different approaches to limit the overestimation of the iterates, when the components of the initial vector ${x}_{0}$ and of $b$ are intervals. We compare, both theoretically and experimentally, the widths of the iterates computed by these different methods: the naive iteration, methods based on the QR- and SVD-factorizations of $A$, and Lohner's QR-factorization method. The method based on the SVD-factorization is computationally less demanding and gives good results when the matrix is poorly scaled; it is superseded either by the naive iteration or by Lohner's method otherwise.
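A minimal Python sketch of the divergence phenomenon, on the homogeneous case $b = 0$ with $A$ a contracted rotation; directed rounding is deliberately ignored here, since the blow-up is caused by interval dependencies (the wrapping effect), not by rounding:

```python
import math

# A = 0.9 * (rotation by pi/4): the real iteration x_{n+1} = A x_n converges
# (spectral radius 0.9 < 1), but |A| has spectral radius 0.9*sqrt(2) > 1,
# so the naive interval iterates blow up.
r = 0.9 * math.sqrt(2.0) / 2.0
A = [[r, -r], [r, r]]

def iv_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iv_scale(c, a):
    lo, hi = c * a[0], c * a[1]
    return (min(lo, hi), max(lo, hi))

def iv_matvec(M, x):
    return [iv_add(iv_scale(M[i][0], x[0]), iv_scale(M[i][1], x[1]))
            for i in range(2)]

def width(x):
    return max(hi - lo for lo, hi in x)

x = [(-1.0, 1.0), (-1.0, 1.0)]
for _ in range(20):
    x = iv_matvec(A, x)
# After 20 steps, the real iterates have norm <= 0.9**20 ~ 0.12, while the
# interval width has grown by roughly (0.9*sqrt(2))**20 ~ 125.
```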
7.2.5 A framework to test interval arithmetic libraries and their IEEE 1788-2015 compliance
As developers of libraries implementing interval arithmetic, we have all faced the same difficulties when it comes to testing our libraries. What must be tested? How can we devise relevant test cases for unit testing? How can we ensure a high (and possibly 100%) test coverage? In 1, before considering these questions, we briefly recall the main features of interval arithmetic and of the IEEE 1788-2015 standard for interval arithmetic. After listing the different aspects that, in our opinion, must be tested, we contribute a first step towards offering a test suite for an interval arithmetic library. First we define a format that enables the exchange of test cases, so that they can be read and tried easily. Then we offer a first set of test cases, for a selected set of mathematical functions. Next, we examine how the Julia interval arithmetic library IntervalArithmetic.jl performs on these tests. As this is ongoing work, we list extra tests that we deem important to perform.
7.2.6 About the "accurate mode" of the IEEE 1788-2015 standard for interval arithmetic
The IEEE 1788-2015 standard for interval arithmetic defines three accuracy modes for the so-called set-based flavor: tightest, accurate and valid. This work in progress 30 focuses on the accurate mode. First, an introduction to interval arithmetic and to the IEEE 1788-2015 standard is given, then the accurate mode is defined. How can this accurate mode be tested, when a library implementing interval arithmetic claims to provide this mode? The chosen approach is unit testing, and the elaboration of testing pairs for this approach is developed. A discussion closes the paper: how can the tester be tested? And, going to the roots of the subject, is the accurate mode really relevant, or should it be dropped from the next version of the standard?
7.2.7 Towards a correctly-rounded and fast power function in binary64 arithmetic
In 16 we design algorithms for the correct rounding of the power function ${x}^{y}$ in the binary64 IEEE 754 format, for all rounding modes, modulo the knowledge of hardest-to-round cases. Our implementation of these algorithms largely outperforms previous correctly rounded implementations and is not far from the efficiency of current mathematical libraries, which are not correctly rounded. Still, we expect our algorithms can be further improved for speed. The proofs of correctness are fully detailed in an extended version of this paper, with the goal to enable a formal proof of these algorithms. We hope this work will motivate the next IEEE 754 revision committee to require correct rounding for mathematical functions.
7.2.8 Accurate calculation of Euclidean norms
This work was done with Laurence Rideau (STAMP team, Sophia). In 8, we consider the computation of the Euclidean (or L2) norm of an $n$-dimensional vector in floating-point arithmetic. We review the classical solutions used to avoid spurious overflow or underflow and/or to obtain very accurate results. We modify a recently published algorithm (that uses double-word arithmetic) to allow for a very accurate solution, free of spurious overflows and underflows. To that purpose, we use a double-word square-root algorithm for which we provide a tight error analysis. The returned L2 norm is within very slightly more than $0.5$ ulp of the exact result, which means that we almost always provide correct rounding.
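For illustration, the classical scaling remedy reviewed in the paper can be sketched in Python (binary64); unlike the paper's double-word algorithm, this sketch only addresses spurious overflow/underflow, not the 0.5-ulp accuracy target:

```python
import math

def naive_norm(v):
    """Direct formula: x*x overflows to inf for components near 1e200."""
    return math.sqrt(sum(x * x for x in v))

def scaled_norm(v):
    """Overflow/underflow-free two-pass Euclidean norm: divide by the
    largest magnitude before squaring, multiply back at the end.
    (A simple classical remedy; it does not reach 0.5-ulp accuracy.)"""
    m = max(abs(x) for x in v)
    if m == 0.0:
        return 0.0
    return m * math.sqrt(sum((x / m) ** 2 for x in v))

big = [3e200, 4e200]
# naive_norm(big) overflows to inf, while scaled_norm(big) returns ~5e200.
```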
7.3 Lattices: Algorithms and Cryptology
7.3.1 Constrained Pseudorandom Functions from Homomorphic Secret Sharing
We propose and analyze a simple strategy for constructing 1-key constrained pseudorandom functions (CPRFs) from homomorphic secret sharing (HSS). In the process, we obtain the following contributions. First, we identify desirable properties of the underlying HSS scheme for our strategy to work. Second, we show that (most) recent existing HSS schemes satisfy these properties, leading to instantiations of CPRFs for various constraints and from various assumptions. Notably, we obtain the first (1-key selectively secure, private) CPRFs for inner-product constraints and (1-key selectively secure) CPRFs for NC$^1$ from the DCR assumption, and more. Lastly, we revisit two applications of HSS schemes equipped with these additional properties to secure computation: we obtain secure computation in the silent preprocessing model with one party able to precompute its whole preprocessing material before even knowing the other party, and we construct one-sided statistically secure computation with sublinear communication for restricted forms of computation. This is a joint work by Geoffroy Couteau, Pierre Meyer, Alain Passelègue, and Mahshid Riahinia, published at Eurocrypt 2023 22.
7.3.2 A Detailed Analysis of Fiat-Shamir with Aborts
Lyubashevsky's signatures are based on the Fiat-Shamir with Aborts paradigm, which transforms an interactive identification protocol with a non-negligible probability of aborting into a signature scheme by repeating executions until a loop iteration does not trigger an abort. Interaction is removed by replacing the verifier's challenge with the evaluation of a hash function, modeled as a random oracle in the analysis. The access to the random oracle is classical (ROM), resp. quantum (QROM), if one is interested in security against classical, resp. quantum, adversaries. Most analyses in the literature consider a setting with a bounded number of aborts (i.e., signing fails if no signature is output within a prescribed number of loop iterations), while practical instantiations (e.g., Dilithium) run until a signature is output (i.e., the number of loop iterations is unbounded).
In this work, we emphasize that combining random oracles with loop iterations induces numerous technicalities for analyzing the correctness, runtime, and security of the resulting schemes, both in the bounded and unbounded case. As a first contribution, we shed light on errors in all existing analyses. We then provide two detailed analyses in the QROM for the bounded case, adapted from Kiltz et al. [EUROCRYPT'18] and Grilo et al. [ASIACRYPT'21]. In the process, we prove that the underlying protocol achieves a stronger zero-knowledge property than usually considered for protocols with aborts, which enables a corrected analysis. A further contribution is a detailed analysis of the case of unbounded aborts, the latter inducing several additional subtleties.
This is a joint work by Julien Devevey, Pouria Fallahpour, Alain Passelègue, and Damien Stehlé, published at Crypto 2023 14.
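The bounded-aborts signing loop analyzed here can be illustrated on a toy Schnorr-like discrete-log instance (all parameters below are invented for readability and are NOT secure): the response is released only when it lands in a range whose distribution is independent of the secret key, and signing fails if no iteration succeeds within the prescribed bound.

```python
import hashlib
import secrets

# Toy parameters (illustration only, NOT secure).
Q = 2**31 - 1            # prime modulus
G = 3                    # "commitment" base
B_MASK = 2**24           # masking randomness range
B_RESP = B_MASK - 2**21  # acceptance bound: |c * sk| < 2**21 below

def challenge(w: int, msg: bytes) -> int:
    """Random-oracle-style challenge: hash of commitment and message."""
    h = hashlib.sha256(str(w).encode() + msg).digest()
    return int.from_bytes(h, "big") % 2**16

def sign(sk: int, msg: bytes, max_iters: int = 1000):
    """Fiat-Shamir with bounded aborts: retry until the response z falls in a
    range independent of sk, or fail after max_iters loop iterations."""
    for _ in range(max_iters):
        y = secrets.randbelow(2 * B_MASK) - B_MASK  # masking term
        w = pow(G, y, Q)                            # commitment
        c = challenge(w, msg)
        z = y + c * sk
        if abs(z) > B_RESP:                         # abort: z would leak sk
            continue
        return (w, c, z)
    return None  # signing failure, specific to the bounded case

def verify(pk: int, msg: bytes, sig) -> bool:
    w, c, z = sig
    return (abs(z) <= B_RESP and c == challenge(w, msg)
            and pow(G, z, Q) * pow(pk, -c, Q) % Q == w)

sk = 23                  # small secret so that |c * sk| stays below 2**21
pk = pow(G, sk, Q)
sig = sign(sk, b"hello")
assert sig is not None and verify(pk, b"hello", sig)
```

In this toy, an accepted response is uniform on the acceptance range whatever `sk` is, which is a (simple-distribution analogue of the) zero-knowledge property the analyses rely on.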
7.3.3 G+G: A Fiat-Shamir Lattice Signature Based on Convolved Gaussians
We describe an adaptation of Schnorr's signature to the lattice setting, which relies on Gaussian convolution rather than on the flooding or rejection sampling of previous approaches. It does not involve any abort, can be proved secure in the ROM and QROM using existing analyses of the Fiat-Shamir transform, and enjoys smaller signature sizes (both asymptotically and for concrete security levels).
This is a joint work by Julien Devevey, Alain Passelègue, and Damien Stehlé, published at Asiacrypt 2023 18.
7.3.4 Efficient Updatable Public-Key Encryption from Lattices
Updatable public-key encryption (UPKE) has recently been introduced as a solution to achieve forward security in the context of secure group messaging without hurting efficiency, but so far, no efficient lattice-based instantiation of this primitive is known.
In this work, we construct the first LWE-based UPKE scheme with polynomial modulus-to-noise rate which is CPA-secure in the standard model. At the core of our security analysis is a generalized reduction from the standard LWE problem to (a stronger version of) the Extended LWE problem. We further extend our construction to achieve stronger security notions by proposing two generic transforms. Our first transform yields CCA security in the random oracle model and adapts the Fujisaki-Okamoto transform to the UPKE setting. Our second transform achieves security against malicious updates by adding a NIZK argument to the update mechanism. In the process, we also introduce the notion of Updatable Key Encapsulation Mechanism (UKEM), the updatable variant of KEMs. Overall, we obtain a CCA-secure UKEM in the random oracle model whose ciphertext sizes are of the same order of magnitude as those of CRYSTALS-Kyber.
This is a joint work by Calvin Abou Haidar, Alain Passelègue, and Damien Stehlé, published at Asiacrypt 2023 19.
7.3.5 Ideal-SVP is Hard for Small-Norm Uniform Prime Ideals
The presumed hardness of the Shortest Vector Problem for ideal lattices (Ideal-SVP) has been a fruitful assumption to understand other assumptions on algebraic lattices and as a security foundation of cryptosystems. Gentry [CRYPTO'10] proved that Ideal-SVP enjoys a worst-case to average-case reduction, where the average-case distribution is the uniform distribution over the set of inverses of prime ideals of small algebraic norm (below ${d}^{O\left(d\right)}$ for cyclotomic fields, where $d$ refers to the field degree). De Boer et al. [CRYPTO'20] obtained another random self-reducibility result for an average-case distribution involving integral ideals of norm ${2}^{{d}^{2}}$.
In this work, we show that Ideal-SVP for the uniform distribution over inverses of small-norm prime ideals reduces to Ideal-SVP for the uniform distribution over small-norm prime ideals. Combined with Gentry's reduction, this leads to a worst-case to average-case reduction for the uniform distribution over the set of small-norm prime ideals. Using the reduction from Pellet-Mary and Stehlé [ASIACRYPT'21], this notably leads to the first distribution over NTRU instances with a polynomial modulus whose hardness is supported by a worst-case lattice problem.
This is a joint work by Joël Felderhoff, Alice Pellet-Mary, Damien Stehlé, and Benjamin Wesolowski, published at TCC 2023 15.
7.4 Algebraic Computing and High-performance Kernels
7.4.1 Minimization of differential equations and algebraic values of E-functions
Given a power series that is the solution of a linear differential equation with appropriate initial conditions, minimization consists in finding a nontrivial linear differential equation of minimal order having this power series as a solution. This problem exists in both homogeneous and inhomogeneous variants; it is distinct from, but related to, the classical problem of factorization of differential operators. Recently, minimization has found applications in Transcendental Number Theory, more specifically in the computation of nonzero algebraic points where Siegel's E-functions take algebraic values. We present algorithms for these questions and discuss implementation and experiments 3.
7.4.2 Differential-Difference Properties of Hypergeometric Series
Six families of generalized hypergeometric series in a variable $x$ and an arbitrary number of parameters are considered. Each of them is indexed by an integer $n$. Linear recurrence relations in $n$ relate these functions and their product by the variable $x$. We give explicit factorizations of these equations as products of first-order recurrence operators. Related recurrences are also derived for the derivative with respect to $x$. These formulas generalize well-known properties of the classical orthogonal polynomials 6.
7.4.3 Faster modular composition
A new Las Vegas algorithm is presented for the composition of two polynomials modulo a third one, over an arbitrary field. When the degrees of these polynomials are bounded by $n$, the algorithm uses $O\left({n}^{1.43}\right)$ field operations, breaking through the 3/2 barrier in the exponent for the first time. The previous fastest algebraic algorithms, due to Brent and Kung in 1978, require $O\left({n}^{1.63}\right)$ field operations in general, and ${n}^{3/2+o\left(1\right)}$ field operations in the particular case of power series over a field of large enough characteristic. Using cubic-time matrix multiplication, the new algorithm runs in ${n}^{5/3+o\left(1\right)}$ operations, while previous ones run in $O\left({n}^{2}\right)$ operations. Our approach relies on the computation of a matrix of algebraic relations that is typically of small size. Randomization is used to reduce arbitrary input to this favorable situation 9.
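For reference, the problem itself can be stated in a few lines of code: the naive Horner scheme below (over a toy prime field, with schoolbook arithmetic) performs about $n$ modular multiplications, which is what Brent and Kung's baby-step/giant-step method and the new algorithm improve upon. This sketch only defines modular composition; it implements neither algorithm.

```python
P = 97  # toy prime field F_p

def poly_mulmod(a, b, f, p=P):
    """Schoolbook product of polynomials a*b reduced modulo a monic f.
    Polynomials are coefficient lists, lowest degree first."""
    n = len(f) - 1
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for d in range(len(prod) - 1, n - 1, -1):   # reduce by monic f, top down
        c = prod[d]
        if c:
            for k in range(n + 1):
                prod[d - n + k] = (prod[d - n + k] - c * f[k]) % p
    return prod[:n] + [0] * (n - len(prod))     # exactly n coefficients

def modcomp_naive(g, h, f, p=P):
    """g(h) mod f by Horner: deg(g) modular multiplications, hence cubic
    with schoolbook arithmetic; this only states the problem."""
    res = [0] * (len(f) - 1)
    for coef in reversed(g):
        res = poly_mulmod(res, h, f, p)
        res[0] = (res[0] + coef) % p
    return res

# x о x modulo x^2 + 1 gives x, and x^2 gives -1 = 96 mod 97.
assert modcomp_naive([0, 0, 1], [0, 1], [1, 0, 1]) == [96, 0]
```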
7.4.4 Positivity certificates for linear recurrences
We show that for solutions of linear recurrences with polynomial coefficients of Poincaré type and with a unique simple dominant eigenvalue, positivity reduces to deciding the genericity of initial conditions in a precisely defined way. We give an algorithm that produces a certificate of positivity that is a data structure for a proof by induction. The induction works by showing that an explicitly computed cone is contracted by the iteration of the recurrence 17.
7.4.5 Reduction-Based Creative Telescoping for Definite Summation of D-Finite Functions
Creative telescoping is an algorithmic method initiated by Zeilberger to compute definite sums by synthesizing summands that telescope, called certificates. We describe a creative telescoping algorithm that computes telescopers for definite sums of D-finite functions as well as the associated certificates in a compact form. The algorithm relies on a discrete analogue of the generalized Hermite reduction, or equivalently, a generalization of the Abramov-Petkovšek reduction. We provide a Maple implementation with good timings on a variety of examples 27.
7.4.6 High-order lifting for polynomial Sylvester matrices
A new algorithm is presented for computing the resultant of two “sufficiently generic” bivariate polynomials over an arbitrary field. For such $p$ and $q$ in $\mathsf{K}[x,y]$ of degree $d$ in $x$ and $n$ in $y$, the resultant with respect to $y$ is computed using $O\left({n}^{1.458}d\right)$ arithmetic operations as long as $d=O\left({n}^{1/3}\right)$. For $d=1$, the complexity estimate is therefore essentially reconciled with the best known estimates of 9 for the related problems of modular composition and characteristic polynomial in a univariate quotient algebra. This crosses the $3/2$ barrier in the exponent of $n$ for the first time in the case of the resultant. More generally, our algorithm improves on the best previous algebraic ones as long as $d=O\left({n}^{0.47}\right)$ 10.
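As a baseline, the objects involved can be defined directly: the sketch below builds the Sylvester matrix of two univariate polynomials (for bivariate input one would take coefficients in $\mathsf{K}[x]$) and evaluates the resultant as its determinant by exact Gaussian elimination, far from the complexity above but enough to state the problem.

```python
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of p and q (coefficient lists, highest degree first)."""
    m, n = len(p) - 1, len(q) - 1
    rows = []
    for i in range(n):                        # n shifted copies of p
        rows.append([0] * i + p + [0] * (n - 1 - i))
    for i in range(m):                        # m shifted copies of q
        rows.append([0] * i + q + [0] * (m - 1 - i))
    return rows

def det(mat):
    """Determinant by exact fraction-based Gaussian elimination."""
    a = [[Fraction(x) for x in row] for row in mat]
    n, sign, d = len(a), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if a[r][col]), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            sign = -sign
        d *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return sign * d

def resultant(p, q):
    return det(sylvester(p, q))

# res_x(x^2 - 1, x - 2) = (x^2 - 1) evaluated at 2, i.e. 3.
assert resultant([1, 0, -1], [1, -2]) == 3
```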
7.4.7 Exact computations with quasiseparable matrices
Quasiseparable matrices are a class of rank-structured matrices widely used in numerical linear algebra and of growing interest in computer algebra, with applications in, e.g., the linearization of polynomial matrices. Various representation formats exist for these matrices but have rarely been compared. We show how the most central formats, SSS and HSS, can be adapted to symbolic computation, where the exact rank replaces threshold-based numerical ranks. We clarify their links and compare them with the Bruhat format. To this end, we state their space and time cost estimates based on fast matrix multiplication, and compare them along with their leading constants. The comparison is supported by software experiments. We make further progress on the Bruhat format, for which we give a generation algorithm, following a Crout elimination scheme, that specializes into fast algorithms for construction from a sparse matrix or from a sum of Bruhat representations 20, 25.
7.4.8 Elimination ideal and bivariate resultant over finite fields
A new algorithm is presented for computing the largest degree invariant factor of the Sylvester matrix (with respect either to $x$ or $y$) associated to two polynomials $a$ and $b$ in ${\mathbb{F}}_{q}[x,y]$ which have no nontrivial common divisors. The algorithm is randomized, of Monte Carlo type, and requires ${(de\log q)}^{1+o\left(1\right)}$ bit operations, where $d$ and $e$ respectively bound the input degrees in $x$ and in $y$. It follows that the same complexity estimate is valid for computing: a generator of the elimination ideal $\langle a,b\rangle \cap {\mathbb{F}}_{q}\left[x\right]$ (or ${\mathbb{F}}_{q}\left[y\right]$), as long as the polynomial system $a=b=0$ has no roots at infinity; the resultant of $a$ and $b$ when they are sufficiently generic, in particular such that the Sylvester matrix has a unique nontrivial invariant factor. Our approach is to reduce the problem to the computation of a minimal polynomial in the quotient algebra ${\mathbb{F}}_{q}[x,y]/\langle a,b\rangle $. By proposing a new method, based on structured polynomial matrix division, for computing with the elements in the quotient, we manage to improve the best known complexity bounds 21.
8 Bilateral contracts and grants with industry
8.1 Bilateral contracts with industry
Bosch (Stuttgart) contracted with us for support with the design and implementation of accurate functions in binary32 floating-point arithmetic (inverse trigonometric functions, hyperbolic functions and their inverses, exponential, logarithm, ...).
Participants: Claude-Pierre Jeannerod, Jean-Michel Muller.
9 Partnerships and cooperations
9.1 International initiatives
9.1.1 Associate Teams in the framework of an Inria International Lab or in the framework of an Inria International Program
Associated team Symbolic, Canada-France, 2022-2024, University of Waterloo and Inria.
Participants: Claude-Pierre Jeannerod, Bruno Salvy, Gilles Villard.
 Coordinators: Éric Schost (PI Waterloo), Gilles Villard (PI AriC). The Symbolic Computation Group at Waterloo and AriC are expanding established collaborations to design and implement algorithms for linear and nonlinear symbolic algebra. Four main directions are investigated in particular: dense matrices, structured linear algebra, polynomial arithmetic, and analytic combinatorics.
9.2 International research visitors
9.2.1 Visits of international scientists
Inria International Chair
Participant: Warwick Tucker.
From Monash University, Australia. Title: Hénon attractor; Abelian integrals related to Hilbert's 16th problem
Summary: The goal of the proposed research program is to unify the techniques of modern scientific computing with the rigors of mathematics and develop a functional foundation for solving mathematical problems with the aid of computers. Our aim is to advance the field of computeraided proofs in analysis; we strongly believe that this is the only way to tackle a large class of very hard mathematical problems.
9.3 National initiatives
9.3.1 France 2030 ANR Project  PEPR Cybersecurity  SecureCompute
Participant: Alain Passelègue.
SecureCompute is a France 2030 ANR 6-year project (started in July 2022) focused on the study of cryptographic mechanisms for ensuring the security of data during their transfer, at rest, but also during processing, despite uncontrolled environments such as the Internet for exchanges and the Cloud for hosting and processing. Security, in this context, means not only confidentiality but also integrity, i.e., the correct execution of operations. See the web page of the project. It is headed by ENS-PSL (Inria CASCADE project-team) and, besides AriC, also involves CEA, IRIF (Université Paris Cité), and LIRMM (Université de Montpellier).
The project ended prematurely on August 31, 2023, following the departure of Alain Passelègue.
9.3.2 France 2030 ANR Project  PEPR Quantique  PostQuantumTLS
Participant: Damien Stehlé.
PostQuantumTLS is a France 2030 ANR 5-year project (started in 2022) focused on post-quantum cryptography. The famous "padlock" appearing in browsers when one visits websites whose address is preceded by "https" relies on cryptographic primitives that would not withstand a quantum computer. This integrated project aims to develop post-quantum primitives into a prototype "post-quantum lock" that will be implemented in an open-source browser. The evolution of cryptographic standards has already started; the choice of new primitives will be made quickly, and the transition will take place in the next few years. The objective is to play a driving role in this evolution and to make sure that the French actors of post-quantum cryptography, already strongly involved, are able to influence the cryptographic standards of the decades to come.
Benjamin Wesolowski (UMPA) replaced Damien Stehlé as the leader in Lyon for this project.
9.3.3 ANR RAGE Project
Participant: Alain Passelègue.
RAGE is a four-year project (started in January 2021) focused on randomness generation for advanced cryptography. See the web page of the project. It is headed by Alain Passelègue and also involves Pierre Karpman (UGA) and Thomas Prest (PQShield). The main goals of the project are: (i) to construct and analyze the security of low-complexity pseudorandom functions that are well-suited for MPC-based and FHE-based applications, (ii) to construct advanced forms of pseudorandom functions, such as (private) constrained PRFs.
The project ended prematurely on August 31, 2023, following the departure of Alain Passelègue.
9.3.4 ANR CHARM Project
Participants: Damien Stehlé, Guillaume Hanrot, Joël Felderhoff.
CHARM is a three-year project (started in October 2021) focused on the cryptographic hardness of module lattices. See the web page of the project. It is co-headed by Shi Bai (FAU, USA) and Damien Stehlé, with two other sites: the University of Bordeaux team led by Benjamin Wesolowski (with Bill Allombert, Karim Belabas, Aurel Page, and Alice Pellet-Mary) and the Cornell team led by Noah Stephens-Davidowitz. The main goal of the project is to provide a clearer understanding of the intractability of module lattice problems via improved reductions and improved algorithms. It will be approached by investigating the following directions: (i) showing evidence that there is a hardness gap between rank 1 and rank 2 module problems, (ii) determining whether the NTRU problem can be considered a rank-1.5 module problem, (iii) designing algorithms dedicated to module lattices, along with implementations and experiments.
Following the departures of Guillaume Hanrot and Damien Stehlé, Benjamin Wesolowski (UMPA) took the lead on this project.
9.3.5 France 2030 ANR Project  HQI
Participant: Damien Stehlé.
The Hybrid HPC Quantum Initiative is a France 2030 ANR 5-year project (started in 2022) focused on quantum computing. We are involved in the Cryptanalysis work package. The application of quantum algorithms to cryptanalysis has been known since the early stages of quantum computing, when Shor presented a polynomial-time quantum algorithm for factoring, a problem widely believed to be hard for classical computers and whose hardness is one of the main cryptographic assumptions currently used. With the development of (fully scalable) quantum computers, the security of many cryptographic protocols of practical importance would therefore be broken. It is thus necessary to find other computational assumptions that can lead to cryptographic schemes secure against quantum adversaries. While we have candidate assumptions, their security against quantum attacks is still under scrutiny. In this work package, we will study new quantum algorithms for cryptanalysis and their implementation on the national hybrid platform. The goal is to explore potential weaknesses of old and new cryptographic assumptions, potentially finding new attacks on the proposed schemes.
The project ended prematurely on March 31, 2023, following the departure of Damien Stehlé, but Benjamin Wesolowski (UMPA) is still involved.
9.3.6 ANR NuSCAP Project
Participants: Nicolas Brisebarre, Jean-Michel Muller, Joris Picot, Bruno Salvy.
NuSCAP (Numerical Safety for Computer-Aided Proofs) is a four-year project started in February 2021. See the web page of the project. It is headed by Nicolas Brisebarre and, besides AriC, involves people from the LIP lab, the Gallinette, Lfant, Stamp and Toccata Inria teams, LAAS (Toulouse), LIP6 (Sorbonne Université), LIPN (Univ. Sorbonne Paris Nord), and LIX (École Polytechnique). Its goal is to develop theorems, algorithms, and software that will allow one to study a computational problem at all (or any) of the desired levels of numerical rigor, from fast and stable computations to formal proofs of the computations.
9.3.7 ANR/Astrid AMIRAL Project
Participants: Alain Passelègue, Damien Stehlé.
AMIRAL is a four-year project (started in January 2022) that aims to improve lattice-based signatures and to develop more advanced related cryptographic primitives. See the web page of the project. It is headed by Adeline Roux-Langlois from Irisa (Rennes) and locally by Alain Passelègue. The main goals of the project are: (i) to optimize the NIST lattice-based signatures CRYSTALS-DILITHIUM and FALCON, (ii) to develop advanced signatures, such as threshold signatures, blind signatures, or aggregated signatures, and (iii) to generalize the techniques developed along the project to other related primitives, such as identity-based and attribute-based encryption.
The project ended prematurely on August 31, 2023, following the departure of Alain Passelègue.
10 Dissemination
10.1 Promoting scientific activities
10.1.1 Scientific events: organisation
General chair, scientific chair
 Alain Passelègue and Damien Stehlé (as well as Benjamin Wesolowski from ENS de Lyon - UMPA) co-organized Eurocrypt 2023 in Lyon in April 2023.
Member of the organizing committees
 Bruno Salvy and Gilles Villard co-organized two trimesters in 2023 on Recent Trends in Computer Algebra, in Lyon and Paris.
10.1.2 Scientific events: selection
Member of the conference program committees
 Jean-Michel Muller and Nathalie Revol served on the program committee of the Arith 2023 conference.
 Alain Passelègue served on the program committees of PKC 2023 and Crypto 2023, and of the national Journées Codes et Cryptographie (C2).
10.1.3 Journal
Member of the editorial boards
 Jean-Michel Muller is associate editor-in-chief of the journal IEEE Transactions on Emerging Topics in Computing.
 Nathalie Revol is associate editor of the journal IEEE Transactions on Computers.
 Bruno Salvy is a member of the editorial board of the Journal of Symbolic Computation, of Annals of Combinatorics, and of the collection Texts and Monographs in Symbolic Computation (Springer).
 Damien Stehlé is an editor of the Journal of Cryptology and of Designs, Codes and Cryptography.
 Gilles Villard is a member of the editorial board of the Journal of Symbolic Computation.
10.1.4 Scientific expertise
 Nicolas Brisebarre is a member of the scientific council of "Journées Nationales de Calcul Formel".
 Claude-Pierre Jeannerod is a member of "Comité des Moyens Incitatifs" of the Lyon Inria research center.
10.1.5 Research administration
 Nicolas Brisebarre is co-head of GT ARITH (GDR IM).
 Nathalie Revol is a member of the Inria Evaluation Committee and of the Inria Commission Administrative Paritaire.
 Alain Passelègue is a member of the board of GT C2, as well as a member of the program committee of the national Séminaire C2.
10.2 Teaching  Juries
10.2.1 Teaching

 Master: Nicolas Brisebarre, Computer Algebra, 30h, Univ. Polynésie Française
 Master: Nicolas Brisebarre, "Approximation Theory and Proof Assistants: Certified Computations", 12h, M2, ENS de Lyon
 Master: Claude-Pierre Jeannerod, Computer Algebra, 30h in 2023, M2, ISFA, France
 Master: Nicolas Louvet, Compilers, 15h, M1, UCB Lyon 1, France
 Master: Nicolas Louvet, Introduction to Operating Systems, 30h, M2, UCB Lyon 1, France
 Master: Vincent Lefèvre, Computer Arithmetic, 10.5h in 2023, M2, ISFA, France
 Master: Jean-Michel Muller, Floating-Point Arithmetic and beyond, 7h in 2021, M2, ENS de Lyon, France
 Master: Alain Passelègue, Cryptography and Security, 24h, M1, ENS de Lyon, France
 Master: Alain Passelègue, Interactive and Non-Interactive Proofs in Complexity and Cryptography, 20h, M2, ENS de Lyon, France
 Licence: Alain Passelègue, in charge of 1st-year student (L3) research internships, 12h, L3, ENS de Lyon, France
 Postgrad: Nathalie Revol, "Scientific Dissemination and Outreach Activities", 36h in 2023 (3 groups, 12h/group), 4th-year students, ENS de Lyon, France
 Master: Bruno Salvy, Computer Algebra, 24h, M1, ENS de Lyon, France
 Master: Bruno Salvy, Modern Algorithms in Symbolic Summation and Integration, 10h, M2, ENS de Lyon, France
 Master: Damien Stehlé, Post-quantum cryptography, 12h, M2, ENS de Lyon, France
 Master: Gilles Villard, Modern Algorithms in Symbolic Summation and Integration, 10h, M2, ENS de Lyon, France
10.2.2 Juries
 Nathalie Revol was a member of the recruiting committee for an associate professor position at Sorbonne University.
 Nathalie Revol was an examiner in the PhD committees of Nuwan Herath Mudiyanselage (U. Lorraine), Daria Pchelina (U. Sorbonne Paris Nord) and Maria Luiza Costa Vianna (École Polytechnique).
10.3 Popularization
10.3.1 Internal or external Inria responsibilities
 Nathalie Revol is the scientific leader of the Interstices magazine (over 750,000 page visits per year), where she also regularly writes reading recommendations.
 Regarding parity issues: Nathalie Revol is a member of the parity committee of the LIP laboratory; in particular, she co-organized "women-only lunches". She is a member of the parity committee of Inria: this year, her work focused on the adoption of a charter about the inclusion of LGBTQI+ people, and on making the working environment more inclusive. As every year, she co-organized in November a "Journée Filles & Maths-Info" day at ENS de Lyon for female high-school pupils (around 90 pupils). She was in charge of the animation of workshops about debunking stereotypes, in Lyon and Saint-Étienne. She co-organized the visit of the LIP laboratory by 3 groups of 15 female high-school pupils around the 8th of March, during the "Sciences: un métier de femmes" day.
10.3.2 Articles and contents
 Nathalie Revol wrote an article about women in mathematics and computer science 33, as an introduction to the special issue no. 86 of MathemaTICE, a Web magazine for mathematics teachers. She took part in two roundtables on March 8, one at the "Global Industry and AI" forum and one at the Préfecture du Rhône, about women in computer science. She presented the structure and content of the "Journée Filles & Maths-Info" in 180 seconds during the Inria "séminaire médiation" in Sophia-Antipolis, April 2023. She took part in the creation of the topics that will be explored by high-school pupils during a computer science camp in Fall 2024, about "green networks", in relation with the Facto ANR project.
10.3.3 Education
 Nathalie Revol teaches how to popularize science for a large audience to 4th-year students at ENS de Lyon.
10.3.4 Interventions
 Nicolas Brisebarre has been a scientific consultant for “Les maths et moi”, a one-man show by Bruno Martins, since 2020. He also takes part in Q&A sessions with the audience after some shows.
 Nathalie Revol took part in the Declics action, once as "captain" and once as a member (lycée Juliette Récamier, 70 pupils). She was present during the Mondial des Métiers to provide information about scientific careers to high-school pupils and their parents (around 65 persons).
 Alain Passelègue spent half a day in a middle school in Lagnieu talking about modern cryptography and its usage in the world, as well as about the work of a researcher. This was part of a project around Alan Turing led by English, Maths, and History teachers from the school.
11 Scientific production
11.1 Publications of the year
International journals
 1 article A framework to test interval arithmetic libraries and their IEEE 1788-2015 compliance. Concurrency and Computation: Practice and Experience, August 2023, e7856. HAL DOI
 2 article Floating-point arithmetic. Acta Numerica 32, May 2023, 203-290. HAL DOI
 3 article Minimization of differential equations and algebraic values of $E$-functions. Mathematics of Computation, 2023. HAL
 4 article Efficient and Validated Numerical Evaluation of Abelian Integrals. ACM Transactions on Mathematical Software, 2023. HAL
 5 article Error in ulps of the multiplication or division by a correctly-rounded function or constant in binary floating-point arithmetic. IEEE Transactions on Emerging Topics in Computing, 2023. HAL DOI
 6 article Differential-Difference Properties of Hypergeometric Series. Proceedings of the American Mathematical Society 151 (6), 2023, 2603-2617. HAL DOI
 7 article Approximation speed of quantized vs. unquantized ReLU neural networks and beyond. IEEE Transactions on Information Theory 69 (6), June 2023, 3960-3977. HAL DOI
 8 article Accurate calculation of Euclidean Norms using Double-word arithmetic. ACM Transactions on Mathematical Software 49 (1), March 2023, 1-34. HAL DOI
 9 article Faster Modular Composition. Journal of the ACM (JACM), 2023. HAL
 10 article High-order lifting for polynomial Sylvester matrices. Journal of Complexity 80, 2023. HAL DOI
 11 article Affine Iterations and Wrapping Effect: Various Approaches. Acta Cybernetica 26 (1), June 2023, 129-147. HAL DOI
International peer-reviewed conferences
 12 inproceedings Towards Machine-Efficient Rational L∞ Approximations of Mathematical Functions. Proceedings of the 30th IEEE International Symposium on Computer Arithmetic (ARITH 2023), Portland, Oregon, United States, September 2023. HAL
 13 inproceedings Testing The Sharpness of Known Error Bounds on The Fast Fourier Transform. 30th IEEE International Symposium on Computer Arithmetic (ARITH 2023), Portland, Oregon, United States, September 2023. HAL
 14 inproceedings A Detailed Analysis of Fiat-Shamir with Aborts. Crypto 2023, Lecture Notes in Computer Science 14085, Santa Barbara, United States, Springer Nature Switzerland, August 2023, 327-357. HAL DOI
 15 inproceedings Ideal-SVP is Hard for Small-Norm Uniform Prime Ideals. Theory of Cryptography (TCC 2023), Lecture Notes in Computer Science 14372, Taipei, Taiwan, Springer Nature Switzerland, November 2023, 63-92. HAL DOI
 16 inproceedings Towards a correctly-rounded and fast power function in binary64 arithmetic. 2023 IEEE 30th Symposium on Computer Arithmetic (ARITH 2023), Portland, Oregon, United States, 2023. HAL
 17 inproceedings Positivity certificates for linear recurrences. Proceedings of SODA, Alexandria, Virginia, United States, 2024. HAL
 18 inproceedings G+G: A Fiat-Shamir Lattice Signature Based on Convolved Gaussians. Asiacrypt 2023, Guangzhou (Canton), China, 2023. HAL
 19 inproceedings Efficient Updatable Public-Key Encryption from Lattices. Asiacrypt 2023, Guangzhou (Canton), China, 2023. HAL
 20 inproceedings Exact computations with quasiseparable matrices. ISSAC'23: the 2023 International Symposium on Symbolic and Algebraic Computation, Tromsø, Norway, ACM, July 2023, 480-489. HAL DOI
 21 inproceedings Elimination ideal and bivariate resultant over finite fields. Proceedings of the 2023 International Symposium on Symbolic and Algebraic Computation (ISSAC 2023), Tromsø, Norway, ACM, July 2023, 526-534. HAL DOI
National peer-reviewed Conferences
 22 inproceedings Constrained Pseudorandom Functions from Homomorphic Secret Sharing. 42nd Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT 2023), Lecture Notes in Computer Science 14006, Lyon, France, Springer Nature Switzerland, April 2023, 194-224. HAL DOI
 23 inproceedings Can sparsity improve the privacy of neural networks? GRETSI 2023, XXIXème Colloque Francophone de Traitement du Signal et des Images, Grenoble, France, April 2023. HAL
Doctoral dissertations and habilitation theses
 24 thesis Lattice-based Signatures in the Fiat-Shamir Paradigm. École Normale Supérieure de Lyon, September 2023. HAL
 25 thesis Exact computations with quasiseparable matrices and polynomial matrices with a displacement structure. École Normale Supérieure de Lyon, October 2023. HAL
Reports & preprints
 26 misc Integer points close to a transcendental curve and correctly-rounded evaluation of a function. 2023. HAL
 27 misc Reduction-Based Creative Telescoping for Definite Summation of D-Finite Functions. July 2023. HAL
 28 misc A path-norm toolkit for modern networks: consequences, promises and challenges. September 2023. HAL
 29 misc Leading constants of rank deficient Gaussian elimination. February 2023. HAL
 30 misc About the "accurate mode" of the IEEE 1788-2015 standard for interval arithmetic. April 2023. HAL
11.2 Other
Scientific popularization
 31 article Le dilemme du fabricant de tables. La Recherche 572, January 2023. HAL
 32 inproceedings Arithmétique des Ordinateurs. Colloque Raisonner en arithmétique, est-ce incongru ?, Talence (33), France, 2023. HAL
 33 article Computer science and mathematics: far from parity. MathémaTICE 86, September 2023. HAL