A major challenge in modeling and scientific computing is the simultaneous mastery of hardware capabilities, software design, and mathematical algorithms needed to make computations efficient and reliable. In this context, the overall objective of AriC is to improve computing at large, in terms of performance, efficiency, and reliability. We work on the fine structure of floating-point arithmetic, on controlled approximation schemes, on algebraic algorithms, and on new cryptographic applications, pursuing most of these themes in their interactions. Our approach combines fundamental studies, practical performance, and qualitative aspects, with a shared strategy going from high-level problem specifications and standardization actions down to computer arithmetic and the lowest-level details of implementations.

This makes AriC the right place for drawing the following lines of action:

According to the application domains that we target and our main fields of expertise, these lines of action are organized into three themes with specific objectives.

The last twenty years have seen the advent of computer-aided proofs in mathematics, and this trend is becoming increasingly important. Such proofs require:
fast and stable numerical computations;
numerical results with a guarantee on the error;
formal proofs of these computations or computations with a proof assistant.
One of our main long-term objectives is to develop a platform where one can study a computational problem on all (or any) of these three levels of rigor.
At this stage, most of the necessary routines are not easily available (or do not even exist) and one needs to develop ad hoc tools to complete the proof. We plan to provide more and more algorithms and routines to address such questions. Possible applications lie in the study of mathematical conjectures where exact mathematical results are required (e.g., stability of dynamical systems); or in more applied questions, such as the automatic generation of efficient and reliable numerical software for function evaluation.
From a complementary viewpoint, numerical safety is also critical in robust space mission design, where guidance and control algorithms become more complex in the context of increased satellite autonomy. We will pursue our collaboration with specialists of that area, whose questions help us focus on the most relevant issues.

Floating-point arithmetic is currently undergoing a major evolution, in particular with the recent advent of a greater diversity of available precisions on the same system (from 8 to 128 bits) and of coarser-grained floating-point hardware instructions. This new arithmetic landscape raises important issues at the various levels of computing, which we will address along the following three directions.

One of our targets is the design of building blocks of computing (e.g., algorithms for the basic operations and functions, and algorithms for complex or double-word arithmetic). Establishing properties of these building blocks (e.g., the absence of “spurious” underflows/overflows) is also important. The IEEE 754 standard on floating-point arithmetic (which was slightly revised in 2019) will have to undergo a major revision within a few years: first because advances in technology or new needs make some of its features obsolete, and second because new features need standardization. We aim at playing a leading role in the preparation of the next standard.
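To illustrate the kind of “spurious overflow” such building blocks must avoid, here is a small Python sketch (Python floats are IEEE binary64; the function names are ours, for illustration only): the textbook 2-norm formula overflows on large inputs even though the exact result is representable, while a scaled variant does not.

```python
import math

def naive_hypot(x, y):
    # Textbook formula: x*x overflows to infinity when x is large,
    # even though sqrt(x*x + y*y) itself may be representable.
    return math.sqrt(x * x + y * y)

def scaled_hypot(x, y):
    # Scale by the larger magnitude first to avoid spurious overflow.
    a, b = abs(x), abs(y)
    if a < b:
        a, b = b, a
    if a == 0.0:
        return 0.0
    return a * math.sqrt(1.0 + (b / a) ** 2)
```

For instance, `naive_hypot(1e200, 1e200)` returns infinity, while `scaled_hypot(1e200, 1e200)` returns a finite result close to `math.hypot(1e200, 1e200)`.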

We will pursue our studies in rounding error analysis, in particular for the “low precision–high dimension” regime, where traditional analyses become ineffective and where improved bounds are thus most needed. For this, the structure of both the data and the errors themselves will have to be exploited. We will also investigate the impact of mixed-precision and coarser-grained instructions (such as small matrix products) on accuracy analyses.

Most directions in the team are concerned with optimized and high performance implementations. We will pursue our efforts concerning the implementation of well optimized floating-point kernels, with an emphasis on numerical quality, and taking into account the current evolution in computer architectures (the increasing width of SIMD registers, and the availability of low precision formats). We will focus on computing kernels used within other axes in the team such as, for example, extended precision linear algebra routines within the FPLLL and HPLLL libraries.

We intend to strengthen our assessment of the cryptographic relevance of problems over lattices, and to broaden our studies in two main (complementary) directions: hardness foundations and advanced functionalities.

Recent advances in cryptography have broadened the scope of encryption functionalities (e.g., encryption schemes that allow computing over encrypted data or delegating partial decryption keys). While simple variants (e.g., identity-based encryption) are already practical, the more advanced ones still lack efficiency. Towards reaching practicality, we plan to investigate simpler constructions of the fundamental building blocks (e.g., pseudorandom functions) involved in these advanced protocols. We aim at simplifying known constructions based on standard hardness assumptions, but also at identifying new sources of hardness from which simple constructions, naturally suited for the aforementioned advanced applications, could be obtained (e.g., constructions that minimize critical complexity measures such as the depth of evaluation). Understanding the core source of hardness of today's standard hard algorithmic problems is an interesting direction, as it could lead to new hardness assumptions (e.g., tweaked versions of standard ones) from which we could derive much more efficient constructions. Furthermore, it could open the way to completely different constructions of advanced primitives based on new hardness assumptions.

Lattice-based cryptography has come much closer to maturity in the recent past. In particular, NIST has started a standardization process for post-quantum cryptography, and lattice-based proposals are numerous and competitive. This dramatically increases the need for cryptanalysis:

Do the underlying hard problems suffer from structural weaknesses? Are some of the problems used easy to solve, e.g., asymptotically?

Are the chosen concrete parameters meaningful for concrete cryptanalysis? In particular, how secure would they be if all the known algorithms and implementations thereof were pushed to their limits? How would these concrete performances change in case (full-fledged) quantum computers get built?

On another front, the cryptographic functionalities reachable under lattice hardness assumptions seem to get closer to an intrinsic ceiling. For instance, to obtain cryptographic multilinear maps, functional encryption and indistinguishability obfuscation, new assumptions have been introduced. They often have a lattice flavour, but are far from standard. Assessing the validity of these assumptions will be one of our priorities in the mid-term.

In the design of cryptographic schemes, we will pursue our investigations on functional encryption. Despite recent advances, efficient solutions are only available for restricted function families. Indeed, solutions for general functions are either way too inefficient for practical use or rely on uncertain security foundations, such as the existence of circuit obfuscators (or both). We will explore constructions based on well-studied hardness assumptions that are closer to being usable in real-life applications. For specific functionalities, we will aim at more efficient realizations satisfying stronger security notions.

Another direction we will explore is multi-party computation via a new approach exploiting the rich structure of class groups of quadratic fields. We already showed that such groups have a positive impact in this field by designing new efficient encryption switching protocols from the additively homomorphic encryption we introduced earlier. We want to go deeper in this direction, which raises interesting questions: how to design efficient zero-knowledge proofs for groups of unknown order, how to exploit their structure in the context of two-party cryptography (such as two-party signing), and how to extend these constructions to the multi-party setting.

In the context of the PROMETHEUS H2020 project, we will keep seeking to develop new quantum-resistant privacy-preserving cryptographic primitives (group signatures, anonymous credentials, e-cash systems, etc.). This includes the design of more efficient zero-knowledge proof systems that can interact with lattice-based cryptographic primitives.

The connections between algorithms for structured matrices and for polynomial matrices will continue to be developed, since they have proved to bring progress to fundamental questions with applications throughout computer algebra. The new fast algorithm for the bivariate resultant opens an exciting area of research, which should yield improvements to a variety of questions related to polynomial elimination, and we expect to contribute results in that area.

For definite summation and integration, we now have fast algorithms for single integrals of general functions and sequences and for multiple integrals of rational functions. The long-term objective of that part of computer algebra is an efficient and general algorithm for multiple definite integration and summation of general functions and sequences. This is the direction we will take, starting with single definite sums of general functions and sequences (leading in particular to a faster variant of Zeilberger's algorithm). We also plan to investigate geometric issues related to the presence of apparent singularities and how they seem to play a role in the complexity of the current algorithms.

Our expertise on validated numerics is useful for analyzing, improving, and guaranteeing the quality of numerical results in a wide range of applications, including:

Much of our work, in particular the development of correctly rounded elementary functions, is critical to the reproducibility of floating-point computations.

Lattice reduction algorithms have direct applications in

Best paper award at ASIACRYPT 2021, for the article 'On the hardness of the NTRU problem' by Alice Pellet-Mary and Damien Stehlé 13. More in Section 7.3.5.

fplll contains implementations of several lattice algorithms. The implementation relies on floating-point orthogonalization, and LLL is central to the code, hence the name.

It includes implementations of floating-point LLL reduction algorithms, offering various trade-offs between speed and guarantees. It contains a 'wrapper' that chooses the estimated best sequence of variants in order to provide a guaranteed output as fast as possible. In the case of the wrapper, the succession of variants is hidden from the user.
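For reference, the core LLL procedure that these floating-point variants accelerate can be sketched in exact rational arithmetic. The sketch below is a textbook version (with the usual δ = 3/4 and a full Gram-Schmidt recomputation at each step), not fplll's optimized implementation; it assumes a basis of linearly independent integer vectors.

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction in exact rational arithmetic.
    Slow but simple: Gram-Schmidt data is recomputed at each step."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        _, mu = gram_schmidt()
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        # Lovasz condition: accept, or swap and backtrack.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

The output basis generates the same lattice (only unimodular operations are applied) and its first vector is short in the usual LLL sense.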

It includes an implementation of the BKZ reduction algorithm, including the BKZ-2.0 improvements (extreme enumeration pruning, pre-processing of blocks, early termination). Additionally, slide reduction and self-dual BKZ are supported.

It also includes a floating-point implementation of the Kannan-Fincke-Pohst algorithm that finds a shortest non-zero lattice vector. For the same task, the GaussSieve algorithm is also available in fplll. Finally, it contains a variant of the enumeration algorithm that computes a lattice vector closest to a given vector belonging to the real span of the lattice.

Despite several significant advances over the last 30 years, guaranteeing the correctly rounded evaluation of elementary functions, such as

The 2019 version of the IEEE 754 Standard for Floating-Point Arithmetic recommends that new “augmented” operations should be provided for the binary formats. These operations use a new “rounding direction”: round to nearest ties-to-zero. In collaboration with S. Boldo (Toccata) and C. Lauter (U. Alaska), we showed how they can be implemented using the currently available operations, which round to nearest ties-to-even, with a partial formal proof of correctness 1.
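The error-free transformations underlying such implementations can be illustrated by Knuth's classical TwoSum (a Python sketch; Python floats are binary64 with round-to-nearest ties-to-even). It is only the basic ingredient, not the full augmented-operation algorithm of the paper.

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, t) with s = RN(a + b) and
    s + t = a + b exactly (round-to-nearest, barring overflow)."""
    s = a + b
    ap = s - b          # approximation of a recovered from s
    bp = s - ap         # approximation of b recovered from s
    da = a - ap         # rounding error on the a part
    db = b - bp         # rounding error on the b part
    return s, da + db
```

For example, `two_sum(1.0, 2.0**-60)` returns `(1.0, 2.0**-60)`: the tiny addend, absorbed by rounding, is recovered exactly in the second component.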

Recently, a complete set of algorithms for manipulating double-word numbers (some classical, some new) was analyzed 1. In collaboration with L. Rideau (STAMP), we have formally proven all the theorems given in that paper, using the Coq proof assistant. The formal proof work led us to: i) locate mistakes in some of the original paper proofs (mistakes that, however, do not hinder the validity of the algorithms), ii) significantly improve some error bounds, and iii) generalize some results by showing that they are still valid if we slightly change the rounding mode. The consequence is that the algorithms presented in Joldes et al.'s paper can be used with high confidence, and that some of them are even more accurate than what was believed before. This illustrates what formal proof can bring to computer arithmetic: beyond mere (yet extremely useful) verification, correction and consolidation of already known results, it can help to find new properties. All our formal proofs are freely available 6.
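The flavour of the double-word algorithms analyzed in that paper can be conveyed by a short sketch: our Python transcription of an addition algorithm in the style of AccurateDWPlusDW (in the naming of Joldes et al.). A double-word number is an unevaluated sum xh + xl with |xl| much smaller than |xh|.

```python
def two_sum(a, b):
    # Error-free addition: s = RN(a + b), and s + t = a + b exactly.
    s = a + b
    ap = s - b
    bp = s - ap
    return s, (a - ap) + (b - bp)

def fast_two_sum(a, b):
    # Error-free addition assuming |a| >= |b| (or a == 0).
    s = a + b
    return s, b - (s - a)

def dw_add(xh, xl, yh, yl):
    """Double-word addition (AccurateDWPlusDW-style):
    relative error O(u^2) for binary64 inputs."""
    sh, sl = two_sum(xh, yh)          # high parts
    th, tl = two_sum(xl, yl)          # low parts
    c = sl + th
    vh, vl = fast_two_sum(sh, c)      # renormalize
    w = tl + vl
    return fast_two_sum(vh, w)        # final normalized pair
```

The returned pair represents the sum with roughly twice the precision of a single binary64 addition.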

Expressions such as

Affine iterations of the form

In 17,
we consider Kahan's compensated summation of
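For context, Kahan's compensated summation is the following classical loop (a Python sketch; Python floats are binary64):

```python
def kahan_sum(values):
    """Kahan's compensated summation: a running correction term c
    captures the low-order bits lost when adding each term."""
    s = 0.0
    c = 0.0                  # running compensation
    for x in values:
        y = x - c            # apply the previous correction
        t = s + y
        c = (t - s) - y      # (t - s) is the part of y actually added
        s = t
    return s
```

On inputs without severe cancellation, the result is far more accurate than a naive loop: its error bound is essentially independent of the number of terms.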

In 8, we considered threshold encryption schemes, where the decryption servers distributively hold the private key shares, and a threshold of these servers should collaborate to decrypt the message (while the system remains secure when fewer than the threshold are corrupted). We investigated the notion of chosen-ciphertext secure threshold systems, which has been historically hard to achieve. We further require the systems to be both adaptively secure (i.e., secure against a strong adversary making corruption decisions dynamically during the protocol) and non-interactive (i.e., where decryption servers do not interact amongst themselves but rather each efficiently contributes a single message). To date, only pairing-based implementations were known to achieve security in the standard security model without relaxation (i.e., without assuming the random oracle idealization) under the above stringent requirements. We investigate how to achieve the above using other assumptions (in order to understand what other algebraic building blocks and mathematical assumptions are needed to extend the domain of encryption methods achieving the above). Specifically, we show realizations under the Decision Composite Residuosity (DCR) and Learning-With-Errors (LWE) assumptions.

In anonymous credentials and group signature schemes, a tension exists between users' anonymity and their accountability. Usually, the tracing authority can identify the author of any signature. In 11, we suggest an alternative approach where the tracing authority's capability to trace a signature back to its source depends on the signed message. More precisely, the traceability of a signature is determined by a predicate evaluated on the message and the user's identity/credential. At the same time, the schemes provide what we call "branch-hiding"; namely, the resulting predicate value hides from outsiders whether a given signature is traceable or not. Specifically, we precisely define and give the first construction and security proof of a "Bifurcated Anonymous Signature" (BiAS): a scheme which supports either absolute anonymity or anonymity with accountability, based on a specific contextual predicate, while being branch-hiding. This novel signing scheme has numerous applications not easily implementable or not considered before, especially because: (i) the conditional traceability does not rely on a trusted authority, as it is (non-interactively) encapsulated into signatures; and (ii) signers know the predicate value and can make a conscious choice at each signing time. Technically, we realize BiAS from homomorphic commitments for a general family of predicates that can be represented by bounded-depth circuits. Our construction is generic and can be instantiated in the standard model from lattices and, more efficiently, from bilinear maps. In particular, the signature length is independent of the circuit size when we use commitments with suitable efficiency properties.

In a selective-opening chosen ciphertext (SO-CCA) attack on a public-key encryption scheme, an adversary has access to a decryption oracle, and after getting a number of ciphertexts, can then adaptively corrupt a subset of them, obtaining the plaintexts and corresponding encryption randomness. SO-CCA security requires that the privacy of the remaining plaintexts be well protected. There are two flavors of SO-CCA definitions: the weaker indistinguishability-based (IND) one and the stronger simulation-based (SIM) one. In 3, we study SO-CCA-secure PKE constructions from all-but-many lossy trapdoor functions (ABM-LTFs) in pairing-friendly prime-order groups. Concretely,

In distributed pseudorandom functions (DPRFs), a PRF secret key

The 25-year-old NTRU problem is an important computational assumption in public-key cryptography. However, from a reduction perspective, its relative hardness compared to other problems on Euclidean lattices is not well understood. Its decision version reduces to the search Ring-LWE problem, but this only provides a hardness upper bound. In 13, we provide two answers to the long-standing open problem of providing reduction-based evidence of the hardness of the NTRU problem. First, we reduce the worst-case approximate Shortest Vector Problem over ideal lattices to an average-case search variant of the NTRU problem. Second, we reduce another average-case search variant of the NTRU problem to the decision NTRU problem.

Several recent proposals of efficient public-key encryption are based on variants of the polynomial learning with errors problem (PLWE

In 9, we describe the first polynomial-time average-case reductions for the search variant of I-PLWE

Broadcast encryption is a fundamental cryptographic primitive that gives the ability to send a secure message to any chosen target set among registered users. In this work, we investigate broadcast encryption with anonymous revocation, in which ciphertexts do not reveal any information on which users have been revoked. We provide a scheme whose ciphertext size grows linearly with the number of revoked users. Moreover, our system also achieves traceability in the black-box confirmation model.

In 7, our contribution is threefold. First, we develop a generic transformation of linear functional encryption toward trace-and-revoke systems. It is inspired by the transformation of Agrawal et al. (CCS'17), with the novelty of achieving anonymity. Our second contribution is to instantiate the underlying linear functional encryption schemes from standard assumptions. We propose a DDH-based (Decision Diffie-Hellman) construction which no longer requires discrete logarithm evaluation during decryption and thus significantly improves the performance compared to the DDH-based construction of Agrawal et al. In the LWE-based setting, we tried to instantiate our construction by relying on the scheme of Wang et al. (PKC'19), but eventually found an attack on that scheme. Our third contribution is to extend the 1-bit encryption from the generic transformation to

In the context of the NIST post-quantum cryptography project, there have been claims that the Gaborit and Aguilar-Melchor patent could apply to the Kyber and Saber encryption schemes. In 19, we argue that these claims are in contradiction with the potential validity of the patent.

This short note was complemented by a post on the post-quantum standardisation mailing list, which provided recommendations regarding how to proceed towards post-quantum standardisation in light of the scientifically baseless patent claims from CNRS on the Kyber and Saber submissions.

In 10, we construct a publicly verifiable, non-interactive delegation scheme for any polynomial-size arithmetic circuit with proof-size and verification complexity comparable to those of pairing-based zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs). Concretely, the proof consists of a constant number of group elements and verification requires

A new Las Vegas algorithm is presented for the composition of two polynomials modulo a third one, over an arbitrary field. When the degrees of these polynomials are bounded by
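The problem in question, modular composition, asks for g(h) mod f. A naive Horner-style sketch over a small prime field (ours, for illustration only; it costs a modular multiplication per coefficient of g, which is exactly what the fast algorithms improve upon) looks like:

```python
def poly_mulmod(a, b, f, p):
    """Multiply a * b, then reduce modulo the monic polynomial f,
    in GF(p)[x]. Coefficient lists are lowest degree first."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    n = len(f) - 1                       # f is monic of degree n
    for k in range(len(prod) - 1, n - 1, -1):
        c = prod[k]
        if c:                            # cancel the leading term with c * f * x^(k-n)
            for j in range(n + 1):
                prod[k - n + j] = (prod[k - n + j] - c * f[j]) % p
    prod = prod[:n]
    return prod + [0] * (n - len(prod))  # pad to degree n - 1

def modcomp(g, h, f, p):
    """Horner evaluation of g at h, modulo f: returns g(h) mod f."""
    res = [0]
    for c in reversed(g):
        res = poly_mulmod(res, h, f, p)
        res[0] = (res[0] + c) % p
    return res
```

For instance, over GF(7) with f = x^3 - 2, composing g = x^2 with h = x + 1 gives (x + 1)^2 = x^2 + 2x + 1.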

New algorithms are presented for computing annihilating polynomials of Toeplitz, Hankel, and more generally Toeplitz+Hankel-like matrices over a field. Our approach follows works on Coppersmith’s block Wiedemann method with structured projections, which have been recently successfully applied for computing the bivariate resultant and for modular composition. In particular, if the displacement rank is considered constant, then we compute the characteristic polynomial of a generic matrix that belongs to one of the classes above in

The coefficient sequences of multivariate rational functions appear in many areas of combinatorics. Their diagonal coefficient sequences enjoy nice arithmetic and asymptotic properties, and the field of analytic combinatorics in several variables (ACSV) makes it possible to compute asymptotic expansions. We consider these methods from the point of view of effectivity. In particular, given a rational function, ACSV requires one to determine a (generically) finite collection of points that are called critical and minimal. Criticality is an algebraic condition, meaning it is well treated by classical methods in computer algebra, while minimality is a semi-algebraic condition describing points on the boundary of the domain of convergence of a multivariate power series. We show how to obtain dominant asymptotics for the diagonal coefficient sequence of multivariate rational functions under some genericity assumptions using symbolic-numeric techniques. To our knowledge, this is the first completely automatic treatment and complexity analysis for the asymptotic enumeration of rational functions in an arbitrary number of variables. 5

If a linear differential operator with rational function coefficients is reducible, its factors may have coefficients with numerators and denominators of very high degree. We give a completely explicit bound for the degrees of the (monic) right factors in terms of the degree and the order of the original operator, as well as the largest modulus of the local exponents at all its singularities, for which bounds are known in terms of the degree, the order and the height of the original operator. 2

Bosch (Germany) commissioned from us support for the design and implementation of trigonometric functions in fixed-point and floating-point arithmetic (choice of formats and parameters, possibility of various speed/accuracy/range compromises depending on application needs, etc.).

PROMETHEUS is a project over 54 months ending in June 2022. The goal is to develop a toolbox of privacy-preserving cryptographic algorithms and protocols (such as group signatures, anonymous credentials, or digital cash systems) that resist quantum adversaries. Solutions are mainly considered in the context of Euclidean lattices and analyzed from a theoretical point of view (i.e., from a provable security aspect) and a practical angle (which covers the security of cryptographic implementations and side-channel leakages). Orange is the scientific leader and Benoît Libert is the administrative lead on behalf of ENS de Lyon.

ALAMBIC is a project focused on the applications of cryptographic primitives with homomorphic or malleability properties. Started in October 2016 and initially due to end in October 2021, it received a 6-month extension due to the COVID crisis and now ends in April 2022. The web page of the project is

https://crypto.di.ens.fr/projects:alambic:description. It is headed by Damien Vergnaud (ENS Paris and CASCADE team) and, besides AriC, also involves teams from the XLIM laboratory (Université de Limoges) and the CASCADE team (ENS Paris). The main goals of the project are: (i) Leveraging the applications of malleable cryptographic primitives in the design of advanced cryptographic protocols which require computations on encrypted data; (ii) Enabling the secure delegation of expensive computations to remote servers in the cloud by using malleable cryptographic primitives; (iii) Designing more powerful zero-knowledge proof systems based on malleable cryptography.

RISQ (Regroupement de l’Industrie française pour la Sécurité Post – Quantique) is a BPI-DGE four-year project (started in January 2017) focused on the transfer of post-quantum cryptography from academia to industrial products. The web page of the project is

http://risq.fr. It is headed by Secure-IC and, besides AriC, also involves teams from ANSSI (Agence Nationale de la Sécurité des Systèmes d’Information), Airbus, C&S (Communication et Systèmes), CEA (CEA-List), CryptoExperts, Gemalto, Orange, Thales Communications & Security, Paris Center for Quantum Computing, the EMSEC team of IRISA, and the Cascade and Polsys INRIA teams. The outcome of this project will include an exhaustive encryption and transaction signature product line, as well as an adaptation of the TLS protocol. Hardware and software cryptographic solutions meeting these constraints in terms of security and embedded integration will also be included. Furthermore, documents guiding industrials on the integration of these post-quantum technologies into complex systems (defense, cloud, identity and payment markets) will be produced, as well as reports on the activities of standardization committees.

RAGE is a four-year project (started in January 2021) focused on the randomness generation for advanced cryptography. The web page of the project is

https://perso.ens-lyon.fr/alain.passelegue/projects.html. It is headed by Alain Passelègue and also involves Pierre Karpmann (UGA) and Thomas Prest (PQShield). The main goals of the project are: (i) construct and analyze the security of low-complexity pseudorandom functions that are well-suited for MPC-based and FHE-based applications, (ii) construct advanced forms of pseudorandom functions, such as (private) constrained PRFs.

CHARM is a three-year project (started in October 2021) focused on the cryptographic hardness of module lattices. The web page of the project is

https://github.com/CHARM-project/charm-project.github.io. It is co-headed by Shi Bai (FAU, USA) and Damien Stehlé, with two other sites: the U. of Bordeaux team led by Benjamin Wesolowski (with Bill Allombert, Karim Belabas, Aurel Page and Alice Pellet-Mary) and the Cornell team led by Noah Stephens-Davidowitz. The main goal of the project is to provide a clearer understanding of the intractability of module lattice problems via improved reductions and improved algorithms. It will be approached by investigating the following directions: (i) showing evidence that there is a hardness gap between rank 1 and rank 2 module problems, (ii) determining whether the NTRU problem can be considered as a rank 1.5 module problem, (iii) designing algorithms dedicated to module lattices, along with implementation and experiments.

NuSCAP (Numerical Safety for Computer-Aided Proofs) is a four-year project started in February 2021. The web page of the project is

https://nuscap.gitlabpages.inria.fr/. It is headed by Nicolas Brisebarre and, besides AriC, involves people from the LIP laboratory, the Galinette, Lfant, Stamp and Toccata INRIA teams, LAAS (Toulouse), LIP6 (Sorbonne Université), LIPN (Univ. Sorbonne Paris Nord) and LIX (École Polytechnique). Its goal is to develop theorems, algorithms and software that will allow one to study a computational problem on all (or any) of the desired levels of numerical rigor, from fast and stable computations to formal proofs of the computations.

AMIRAL is a four-year project (starting in January 2022) that aims to improve lattice-based signatures and to develop more advanced related cryptographic primitives. The web page of the project is

https://perso.ens-lyon.fr/alain.passelegue/projects.html. It is headed by Adeline Roux-Langlois from Irisa (Rennes) and locally by Alain Passelègue. The main goals of the project are: (i) optimize the NIST lattice-based signatures, namely CRYSTALS-DILITHIUM and FALCON, (ii) develop advanced signatures, such as threshold signatures, blind signatures, or aggregated signatures, and (iii) generalize the techniques developed along the project to other related primitives, such as identity-based and attribute-based encryption.

Alain Passelègue gave an invited talk about contact tracing apps during the Journées Nationales du GDR Sécurité.

Damien Stehlé gave an invited talk on the cryptographic aspects of module lattices, at the PQCRYPTO 2021 conference.

Jean-Michel Muller gave an invited talk at the SIAM CSE21 Minisymposium on Reduced Precision Arithmetic and Stochastic Rounding.

Nathalie Revol gave a talk at the International Online Seminar on Interval Methods in Control Engineering.

Claude-Pierre Jeannerod was a member of the scientific committee of JNCF (Journées Nationales de Calcul Formel).

Bruno Salvy is chair of the steering committee of the conference AofA.

Damien Stehlé is a member of the steering committee of the PQCRYPTO conference series.

Jean-Michel Muller and Nathalie Revol are members of the steering committee of the ARITH conference series.

Nathalie Revol is a member of the scientific committee of the SCAN conference series.

Bruno Salvy is a member of the scientific councils of the CIRM (Luminy) and of the GDR Informatique Mathématique of the CNRS. This year, he was in the hiring committee for young researchers of Inria Lyon and in a hiring committee for a “Maître de conférences” position at the University of Nancy.

Damien Stehlé was in the hiring committees for a “Maître de conférences” position at Sorbonne University and a professor position at ENS Rennes.

Jean-Michel Muller chaired the evaluation committee of LABRI (Laboratoire Bordelais de Recherche en Informatique). He is a member of the Scientific Council of CERFACS (Toulouse).

Nathalie Revol was in the hiring committee for a “Maître de conférences” position at Sorbonne University. She was an expert for the European Commission.

Claude-Pierre Jeannerod was a member of the recruitment committee for postdocs and sabbaticals at Inria Grenoble–Rhône-Alpes. He has been a member of the Comité des Moyens Incitatifs for the Lyon Inria research center since October 2021.

Guillaume Hanrot was in the hiring committee for a “Professor” position at Sorbonne University and for two professor positions (mathematics and economics) at ENS de Lyon. He was also a member of the general hiring committee for positions in computer science at École polytechnique.

Gilles Villard was a member of Section 6 of the Comité national de la recherche scientifique, 2016-2021.

Vincent Lefèvre participates in the revision of the ISO C standard via the C Floating Point Study Group.

Alain Passelègue is a member of the directive board of the GT C2.

Jean-Michel Muller is co-head of GDR IM (Groupement de Recherches Informatique Mathématique).

Jean-Michel Muller is a member of the Commission Administrative Paritaire 1 of CNRS.

Guillaume Hanrot has been head of the Laboratoire d'excellence MILyon since Sept. 1st, 2021.

Alain Passelègue was a member of the Inria working group GT Recrutement-Accueil on developing new processes for hiring and welcoming new Inria employees.

Nathalie Revol is a member of the Inria committee on Gender Equality and Equal Opportunities, working in 2021 on recommendations for a better inclusion of LGBTI+ collaborators.

Damien Stehlé was interviewed for Le Monde, in the context of the baseless patent claims from CNRS on Kyber and Saber ('Quand un brevet perturbe l'innovation post-quantique', 16 November 2021).

Jean-Michel Muller wrote a chapter 15 for the popular science book “De la mesure en toutes choses”, published by CNRS Éditions.

Nicolas Brisebarre was a scientific consultant for “Les maths et moi”, a one-man show by Bruno Martins. He also took part in Q&A sessions with the audience after some shows.

Nathalie Revol is the scientific editor of the website Interstices for the dissemination of computer science to a large audience, with 25 publications and more than half a million visits in 2021.

Alain Passelègue was a member of the panel discussion on contact tracing during the Journées Nationales du GDR Sécurité.

Damien Stehlé gave an invited talk on post-quantum cryptography at the webinar of the Chey Institute for Advanced Studies.

Damien Stehlé was invited to the Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) for a public hearing on quantum technologies (8 October 2021).

Nathalie Revol gave talks at a "Filles et Maths-Info" day in St-Étienne for 80 high-school girls and at lycée Descartes in St-Genis-Laval for 250 high-school pupils, to encourage them, especially girls, to choose scientific careers.