Grace combines expertise and deep knowledge in algorithmic number theory and algebraic geometry to build and analyse public-key cryptosystems and to design new error-correcting codes, with real-world concerns such as cybersecurity and blockchains in mind: software and hardware implementations, secure implementations in constrained environments, countermeasures against side-channel attacks, and white-box cryptography.

The foundations of Grace therefore lie in algorithmic number theory (fundamental algorithms for primality testing and factorization), number fields, the arithmetic geometry of curves, algebraic geometry, and the theory of algebraic codes.

Arithmetic Geometry is the meeting point of algebraic geometry and number theory: the study of geometric objects defined over arithmetic number systems. In our case, the most important objects are curves and their Jacobians over finite fields; these are fundamental to our applications in both coding theory and cryptology. Jacobians of curves are excellent candidates for cryptographic groups when constructing efficient instances of public-key cryptosystems, of which Diffie–Hellman key exchange is an instructive example.

Coding theory originated with the idea of using redundancy in messages to protect them against noise and errors. While the last decade of the 20th century saw the success of so-called iterative decoding methods, we now see many new ideas in the realm of algebraic coding, foremost among them list decoding and (zero-knowledge or not) proofs of computation.

Part of the team's activities are oriented towards post-quantum cryptography, either based on elliptic curves (isogenies) or on codes. The team also studies cryptography relevant to the blockchain arena.

The group is strongly invested in cybersecurity: software security, secure hardware implementations, privacy, etc.

Algorithmic Number Theory is concerned with replacing special cases with general algorithms to solve problems in number theory. In the Grace project, it appears in three main threads:

We use computer algebra in many ways. Research in cryptology has motivated a renewed interest in Algorithmic Number Theory in recent decades, but the fundamental problems remain interesting in their own right. Indeed, while the application of algorithmic number theory to cryptanalysis is epitomized by using factorization to break RSA public keys, many other problems are relevant to various areas of computer science. Roughly speaking, the problems of the cryptological world are of bounded size, whereas Algorithmic Number Theory is also concerned with asymptotic results.

Theme: Arithmetic Geometry: Curves and their Jacobians

Arithmetic geometry is the meeting point of algebraic geometry and number theory: that is, the study of geometric objects defined over arithmetic number systems (such as the integers and finite fields). The fundamental objects for our applications in both coding theory and cryptology are curves and their Jacobians over finite fields.

An algebraic plane curve over a field k is the set of solutions of an equation f(x, y) = 0, where f is a polynomial with coefficients in k. (Not every curve is planar—we may have more variables, and more defining equations—but from an algorithmic point of view, we can always reduce to the plane setting.) The genus of a curve is its most important geometric invariant; to every curve of genus g is attached its Jacobian, a g-dimensional algebraic group. The simplest curves with nontrivial Jacobians are curves of genus 1, known as elliptic curves; they are typically defined by equations of the form y^2 = x^3 + ax + b.

Theme: Curve-Based Cryptology

Jacobians of curves are excellent candidates for cryptographic groups when constructing efficient instances of public-key cryptosystems. Diffie–Hellman key exchange is an instructive example.

Suppose Alice and Bob want to establish a secure communication
channel. Essentially, this means establishing a common secret
key, which they will then use for encryption and decryption.
Some decades ago, they would have exchanged this key in person, or
through some trusted intermediary; in the modern, networked world,
this is typically impossible, and in any case completely unscalable.
Alice and Bob may be anonymous parties who want to do e-business, for
example, in which case they cannot securely meet, and they have no way
to be sure of each other's identities. Diffie–Hellman key exchange
solves this problem. First, Alice and Bob publicly agree on a
cryptographic group G with a generator g. Alice chooses a secret a and
publishes g^a, while Bob chooses a secret b and publishes g^b; each can
then compute the shared secret g^(ab), while an eavesdropper who sees
only g^a and g^b cannot.

This simple protocol has been in use, with only minor modifications,
since the 1970s. The challenge is to create examples of groups in which
the underlying discrete logarithm problem (DLP) is hard, while the
group operation remains fast.

The classic example of a group suitable for the Diffie–Hellman protocol
is the multiplicative group of a finite field; however, index calculus
algorithms make the DLP subexponential in such groups, forcing
relatively large key sizes.
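A minimal sketch of the protocol over such a multiplicative group follows. The prime and generator are illustrative placeholders, not a vetted cryptographic group:

```python
import secrets

# Textbook Diffie-Hellman over the multiplicative group of F_p.
# Toy 64-bit prime for illustration only; real deployments use
# >= 2048-bit primes, or elliptic-curve groups.
p = 0xFFFFFFFFFFFFFFC5        # 2^64 - 59, a prime (illustrative)
g = 2                         # generator candidate (illustrative)

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)    # Alice publishes g^a
B = pow(g, b, p)    # Bob publishes g^b

shared_alice = pow(B, a, p)    # (g^b)^a
shared_bob   = pow(A, b, p)    # (g^a)^b
assert shared_alice == shared_bob  # both sides derive g^(ab)
```

The eavesdropper sees only p, g, A and B; recovering the shared secret from these is the computational Diffie–Hellman problem.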

This is where Jacobians of algebraic curves come into their own.
First, elliptic curves and Jacobians of genus 2 curves do not have a
subexponential index calculus algorithm: in particular, from the point
of view of the DLP, a generic elliptic curve is currently as
strong as a generic group of the same size. Second, they provide
some diversity: we have many degrees of freedom in choosing
curves over a fixed base field.

Theme: Coding theory

Coding theory originated with the idea of using redundancy in messages to protect against noise and errors. The last decade of the 20th century saw the success of so-called iterative decoding methods, which enable us to get very close to the Shannon capacity. The capacity of a given channel is the best achievable transmission rate for reliable transmission. The consensus in the community is that this capacity is more easily reached with these iterative and probabilistic methods than with algebraic codes (such as Reed–Solomon codes).

However, algebraic coding is useful in settings other than the Shannon context. Indeed, the Shannon setting is an average-case setting, and promises only a vanishing error probability. In contrast, the algebraic Hamming approach is a worst-case approach: under combinatorial restrictions on the noise, the noise can be adversarial, and decoding succeeds with strictly zero errors.

These considerations were renewed by the topic of list decoding after the breakthrough of Guruswami and Sudan at the end of the nineties. List decoding relaxes the uniqueness requirement of decoding, allowing a small list of candidates to be returned instead of a single codeword. List decoding can reach a capacity close to the Shannon capacity, with zero failure and small lists, even in the adversarial case. The method of Guruswami and Sudan enabled list decoding of most of the main algebraic codes: Reed–Solomon codes, Algebraic–Geometry (AG) codes, and newer related constructions known as “capacity-achieving list-decodable codes”. These results open the way to applications against adversarial channels, which correspond to worst-case settings in the classical computer science language.

Another avenue of our studies is AG codes over various geometric objects. Although Reed–Solomon codes are the best possible codes for a given alphabet, they are very limited in their length, which cannot exceed the size of the alphabet. AG codes circumvent this limitation, using the theory of algebraic curves over finite fields to construct long codes over a fixed alphabet. The striking result of Tsfasman–Vladut–Zink showed that codes better than random codes can be built this way, for medium to large alphabets. Disregarding the asymptotic aspects and considering only finite length, AG codes can be used either for longer codes with the same alphabet, or for codes with the same length with a smaller alphabet (and thus faster underlying arithmetic).
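The length limitation of Reed–Solomon codes is easy to see concretely: a codeword is the evaluation of a message polynomial at distinct field elements, so the length can never exceed the field size. A minimal sketch over a toy prime field of our own choosing:

```python
# Minimal Reed-Solomon encoder over F_p: a message of k symbols is the
# coefficient vector of a polynomial of degree < k, and the codeword is
# its evaluation at n distinct field elements. Distinct evaluation points
# force n <= p -- the limitation that AG codes circumvent.

p = 13           # field size (illustrative)
k, n = 4, 13     # dimension and length; n <= p is forced

def rs_encode(msg, n, p):
    assert len(msg) <= n <= p, "length cannot exceed the alphabet size"
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(n)]

c1 = rs_encode([1, 2, 3, 4], n, p)
c2 = rs_encode([1, 2, 3, 5], n, p)
# Two distinct polynomials of degree < k agree on fewer than k points,
# so distinct codewords are at Hamming distance >= n - k + 1 (here 10):
# Reed-Solomon codes are MDS.
dist = sum(a != b for a, b in zip(c1, c2))
assert dist >= n - k + 1
```

Here the two messages differ only in the degree-3 coefficient; their difference polynomial vanishes at a single point, so the codewords differ in 12 of the 13 positions.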

From a broader point of view, wherever Reed–Solomon codes are used, we can substitute AG codes with some benefits: either beating random constructions, or beating Reed–Solomon codes which are of bounded length for a given alphabet.

Another area of algebraic coding theory with which we have more recently been concerned is that of Locally Decodable Codes. Having first been introduced theoretically, these codes are now beginning to find practical applications, most notably in cloud-based remote storage systems.

Theme: Cryptography

A huge amount of work is being put into developing an efficient quantum computer. Even if the advent of such a computer may be decades away, it is urgent to deploy post-quantum cryptography (PQC), i.e. quantum-safe solutions on our current devices. Indeed, an attacker could store encrypted sessions and wait until a quantum computer is available to decrypt them.
In this context, the National Institute of Standards and Technology (NIST) launched in 2017 a call for standardizing public-key PQC schemes (key exchanges and signatures). Among the mathematical objects used to design post-quantum primitives, one finds error-correcting codes, Euclidean lattices and isogenies.

The NIST standardization process is now in its final phase, and most of the selected solutions are based on codes and lattices. These preliminary results suggest that codes and lattices will, in the near future, form the foundation of our digital security. While isogenies are less represented, they remain of deep interest, since they appear to be the post-quantum solution providing the smallest key sizes. The purpose of our research program is to advance these post-quantum solutions in order to improve their efficiency and diversity, and to increase our trust in them.

Proofs of computation are cryptographic protocols which allow a prover to convince a verifier that a statement or an output of a computation is correct. The prover is untrusted, in the sense that it may try to convince the verifier that a false statement is true. The verifier, on the other hand, is computationally restricted and has very little power: the proof should be short and easy to verify. These protocols may be interactive or not.

While the topic originated around 1990, several important steps towards practicality have been made in the last decade, with efficient, real-life implementations and industrial deployments in recent years, backed by substantial funding.

There are several cryptographic paths for designing such proof
systems. Within Grace, two main techniques are investigated. The
first relies on elliptic curves and pairings, and produces
very short (constant-size) proofs; the second relies on
error-correcting codes and interactive oracle proofs.

We are interested in developing interactions between cryptography and cybersecurity. In particular, we conduct research in embedded security (side-channel and fault attacks), software security (finding vulnerabilities efficiently) and privacy (security of Tor).

The huge hype around blockchains has attracted the attention of many companies to advanced cryptographic protocols. While basic and standard blockchain designs rely, on the cryptographic side, on very basic and standard primitives like signatures and hash functions, more elaborate cryptographic techniques can alleviate some shortcomings of blockchains, such as poor bandwidth and lack of privacy.

The topic of verifiable computation consists in verifying heavy computations done by a remote computer, using a lightweight computer which is not able to do the computation itself. The remote computer, called the prover, is allowed to provide a proof alongside the result of the computation. This proof must be very short and fast to verify. It can also be made zero-knowledge, in which case the prover hides some inputs to the computation, yet proves that the result is correct.

There are two competing propositions which provide a mathematical and
algorithmic background for these proof techniques: one based on a
line of research dating back to the celebrated 1990 PCP theorem
(error correcting codes), and one based on the discrete logarithm
problem and pairing based protocols (elliptic curves over finite
fields).

These proofs allow data and computation to be moved off chain, pushing
the burden to off-chain servers, which then commit short commitments
of the updates of their off-chain data, accompanied by short proofs
which are easy to verify on chain. This mechanism is called a rollup and is at the core of the proposed “rollup-centric” path for
scaling Ethereum, a predominant blockchain.

Daniel Augot, together with Julien Prat (economist, ENSAE), is also
co-leading a Polytechnique teaching and research “chair”, called
Blockchain and B2B Platforms, funded by CapGemini, Caisse des Dépôts and
Nomadic Labs. This patronage funded Sarah Bordage's PhD
thesis, and gives us visibility and outreach beyond the academic
sphere.

The team is concerned with several aspects of reliability and security of cloud storage, addressed mainly with tools from coding theory. On the privacy side, we build protocols for so-called Private Information Retrieval, which enable a user to query a remote database for an entry without revealing his query. For instance, a user could query a service for stock quotes without revealing which company he is interested in. On the availability side, we study protocols for proofs of retrievability, which enable a user to gain assurance that a huge file is still available on a remote server, using a low-bandwidth protocol which does not require downloading the whole file. For instance, in a peer-to-peer distributed storage system where nodes are rewarded for storing data, nodes can be audited with proof-of-retrievability protocols to make sure they indeed hold the data.
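To give a taste of how simple such protocols can be, here is a toy information-theoretic two-server PIR in the style of Chor–Goldreich–Kushilevitz–Sudan, assuming the two servers do not collude. The database and queried index are illustrative:

```python
import secrets

# Toy 2-server PIR: the user sends each server the indicator vector of a
# subset of database positions; each server returns the XOR of the
# selected bits. The two subsets differ exactly at the secret index i,
# so each query alone is a uniformly random subset and leaks nothing.

db = [1, 0, 1, 1, 0, 0, 1, 0]       # public database of bits (illustrative)
i = 5                               # index the user wants, kept private

S = [secrets.randbelow(2) for _ in db]   # uniformly random subset
S_i = S.copy()
S_i[i] ^= 1                              # same subset with position i flipped

def server_answer(db, subset):
    """Each server returns the XOR of the selected database entries."""
    bit = 0
    for d, s in zip(db, subset):
        bit ^= d & s
    return bit

# XORing the two answers cancels every position except i.
assert server_answer(db, S) ^ server_answer(db, S_i) == db[i]
```

The communication here is linear in the database size; the locally decodable codes mentioned below are precisely what drives this cost down in more sophisticated schemes.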

We investigate these problems with algebraic coding theory for the effective construction of protocols. In this respect, we mainly use locally decodable codes, and in particular high-rate lifted codes.

Maxime Roméas is a PhD student of the team. (PhD grant from IP Paris/Ecole Polytechnique for a 3-year doctorate, Oct 2019-Sept 2022). The subject of his thesis is "The Constructive Cryptography paradigm applied to Interactive Cryptographic Proofs".

The Constructive Cryptography framework, introduced by Maurer in 2011, redefines basic cryptographic primitives and protocols starting from discrete systems of three types (resources, converters, and distinguishers). This not only permits constructing them effectively, but also lightens and sharpens their security proofs. One strength of this model is its composability. The purpose of the PhD is to apply this model to rephrase existing interactive cryptographic proofs so as to assert their genuine security, as well as to design new proofs. The main concern here is security and privacy in distributed storage settings. Another axis of the PhD is to augment the CC model by, e.g., introducing new functionalities to a so-called Server Memory Resource.

The ENCODE project is a doctoral network project submitted to the
Horizon Europe Marie Skłodowska-Curie Actions – Doctoral Networks 2021 call.
The project's principal investigator is Eimear Byrne
from University College Dublin; the Grace team is one of the project's 5 poles.
The project has been granted, with a score of 100%, ranked first among more than 1000 projects. ENCODE starts in March 2023.

The Action Exploratoire CACHAÇA, led by

Six Ph.D. students of the team defended their theses during the last year:

We are concerned that our management neglects high-quality research, or even research itself, showing a preference for short-term, short-lived activities judged by their “impact”, whatever that means.

We are proud that INRIA has a high-quality national evaluation committee (“commission d'évaluation”, CE) in charge of evaluating the scientific activity and merit of individual researchers. The committee members perform a deep, sensible and thorough scientific analysis of each researcher's file, well beyond publications and bibliometrics. This gives us strong confidence that recruitments are of high quality, and that promotions are made on the core, real-life, and meaningful basis of our activity. Yet, during the last year, our evaluation committee has come under veiled and constant criticism from our top management. We fear that future recruitments will not be based on scientific merit, lowering the quality of research done at INRIA.

Finally, please note also that a new information system, “EKSAE”, has been deployed at INRIA. EKSAE is completely malfunctioning and is a plague for the administrative staff, who cannot handle basic tasks. Ph.D. students are not reimbursed for their trips to conferences, invited researchers are not reimbursed for their costs, and experts writing reports for the evaluation committee (CE) are not paid. At the international level, this destroys any credit INRIA may have, shames us, and prevents us from establishing important scientific connections.

When considering error-correcting codes, one usually endows the ambient space with the Hamming metric. However, other metrics have been considered; in particular, codes endowed with the rank metric have found applications in cryptography, network communications and data storage. Compared to the Hamming world, only a few families of codes endowed with the rank metric are known to have efficient decoding algorithms. Gabidulin codes are the rank-metric analogues of Reed–Solomon codes; by interleaving them, it is possible to give a probabilistic decoder correcting errors beyond the unique decoding radius.
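The rank metric itself is easy to illustrate: a word over an extension field F_{2^m} expands into a binary matrix (one column per coordinate), and its rank weight is the rank of that matrix, which is at most its Hamming weight. A small sketch with illustrative parameters:

```python
# Rank weight vs Hamming weight. A word over F_{2^m} expands into an
# m x n binary matrix; its rank weight is the rank of that matrix.
# Here m = n = 3 (illustrative).

def gf2_rank(rows):
    """Rank of a binary matrix over F_2, rows encoded as integers."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = max(rows)
        rows.remove(pivot)
        if pivot == 0:
            continue
        rank += 1
        high = pivot.bit_length() - 1
        # Eliminate the pivot's leading bit from all remaining rows.
        rows = [r ^ pivot if (r >> high) & 1 else r for r in rows]
    return rank

# Word (5, 5, 5) over F_8: three equal nonzero coordinates, so its
# Hamming weight is 3 but all matrix columns coincide: rank weight 1.
cols = [0b101, 0b101, 0b101]
matrix_rows = [sum(((c >> bit) & 1) << j for j, c in enumerate(cols))
               for bit in range(3)]
assert sum(c != 0 for c in cols) == 3      # Hamming weight
assert gf2_rank(matrix_rows) == 1          # rank weight
```

The gap between the two weights is exactly what makes rank-metric decoding a different (and harder) problem from Hamming-metric decoding.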

In 42,

In 67,

The security of most code-based cryptosystems relies on the hardness
of the so-called Decoding Problem. While its search version (given a
random linear code and a noisy codeword, it should be hard to decode,
i.e. to remove the error and recover the original message) is
quite well understood, many proposals actually rely on the decision version, which can be formulated as follows: given a random
linear code, it should be hard to distinguish between a uniformly
random vector of the ambient space and a noisy codeword. This
decision version can be thought of as the code-based analogue of the
Decisional Diffie–Hellman problem, and for general random linear codes
the search and decision problems are known to be equivalent. Such a
result is known as a search-to-decision reduction. However, for
efficiency purposes, it is very appealing to use algebraically
structured codes, such as quasi-cyclic codes, that can be represented
more compactly. In this situation, the hardness of the decision
Decoding Problem is only conjectured. On the other hand, one of the
reasons for the success of lattice-based cryptography is that it
benefits from a rich literature of security reductions for both
general lattices and so-called structured lattices, i.e.
lattices arising from orders of number fields.
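The two distributions of the decisional Decoding Problem are easy to sample. The sketch below uses toy parameters; a real instance would use cryptographic sizes (and, for compactness, structured codes):

```python
import random

# The decisional Decoding Problem asks to distinguish (G, m*G + e), with
# e a random error of low weight w, from (G, u) with u uniform. The
# parameters here are toy values; rows of G are encoded as integers.

n, k, w = 16, 8, 2

G = [random.getrandbits(n) for _ in range(k)]   # random generator matrix

def noisy_codeword(G, n, w):
    m = random.getrandbits(len(G))
    c = 0
    for i, row in enumerate(G):                 # c = m * G over F_2
        if (m >> i) & 1:
            c ^= row
    e = sum(1 << j for j in random.sample(range(n), w))  # weight-w error
    return c ^ e

sample_code = noisy_codeword(G, n, w)   # "noisy codeword" sample
sample_unif = random.getrandbits(n)     # "uniform" sample
assert 0 <= sample_code < 2 ** n and 0 <= sample_unif < 2 ** n
```

A distinguisher receives one of the two samples together with G and must tell which distribution it came from; for general random codes this is provably as hard as the search problem.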

In 29, based on a strong analogy between number fields and function
fields, and especially using Carlitz modules, which can be considered
as an analogue in positive characteristic of cyclotomic number fields,
we introduce a new generic problem that we call the Function Field
Decoding Problem, and derive the first search-to-decision reduction
in this context.

The security of code-based cryptography relies primarily on the hardness of generic decoding of linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoders (ISD). A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-checks of moderate weight, and uses some kind of majority voting on these equations to recover the error we are looking for in the decoding problem. This algorithm was long forgotten because even its best variants performed poorly when compared to the simplest ISD algorithm. In 31, we revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to get LPN samples whose secret is part of the error, and the LPN noise is related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able to significantly outperform information set decoders at code rates smaller than 0.3. This gives, for the first time in 60 years, a better decoding algorithm for a significant range of rates which does not belong to the ISD family.

In 42, we study the decoding of interleaved Gabidulin codes, working
on the right-hand side, which gives a simpler point of view on the
decoding of such error patterns.

In 55, we give a decoder correcting any symmetric error pattern,
whatever its rank, without failure. This algorithm works for a broad
family of codes which includes the aforementioned Gabidulin codes.
Moreover, this holds when the rate is larger than
We give a quantum reduction from finding short codewords in a random linear code to decoding for the Hamming metric. This is the first time such a reduction (classical or quantum) has been obtained. Our reduction adapts to linear codes Stehlé-Steinfeld-Tanaka-Xagawa's re-interpretation of Regev's quantum reduction from finding short lattice vectors to solving the Closest Vector Problem. The Hamming metric is a much coarser metric than the Euclidean metric, and this adaptation required several new ingredients to make it work. For instance, in order to have a meaningful reduction it is necessary in the Hamming metric to choose a very large decoding radius, and in many cases this means going beyond the radius where decoding is unique. Another crucial step in the analysis of the reduction is the choice of the errors that are fed to the decoding algorithm. For lattices, errors are usually sampled according to a Gaussian distribution. However, it turns out that the Bernoulli distribution (the analogue for codes of the Gaussian) is too spread out and cannot be used for the reduction with codes. Instead we choose the uniform distribution over errors of a fixed weight, and bring in orthogonal polynomial tools to perform the analysis, together with an additional amplitude amplification step, to obtain the aforementioned result.

The result is presented in the preprint 56.

In 18, we have proposed an adaptation of the algorithmic reduction theory of lattices to binary codes. This includes the celebrated LLL algorithm (Lenstra, Lenstra, Lovasz, 1982), as well as adaptations of associated algorithms such as the Nearest Plane Algorithm of Babai (1986). Interestingly, the adaptation of LLL to binary codes can be interpreted as an algorithmic version of the bound of Griesmer (1960) on the minimal distance of a code. Using these algorithms, we demonstrate —both with a heuristic analysis and in practice— a small polynomial speed-up over the Information-Set Decoding algorithm of Lee and Brickell (1988) for random binary codes. This appears to be the first such speed-up that is not based on a time-memory trade-off. The above speed-up should be read as a very preliminary example of the potential of a reduction theory for codes, for example in cryptanalysis.

This work 63 presented the first full implementation of Wave, a post-quantum code-based signature scheme. We define Wavelet, a concrete Wave scheme at the 128-bit classical security level (or NIST post-quantum security Level 1) equipped with a fast verification algorithm targeting embedded devices. Wavelet offers 930-byte signatures, with a public key of 3161 kB. We include implementation details using AVX instructions, and on ARM Cortex-M4, including a solution to deal with Wavelet's large public keys, which do not fit in the SRAM of a typical embedded device. Our verification algorithm is approximately 4.65 times faster than the original, verifying in 1 087 538 cycles using AVX instructions, or 13 172 ticks on an ARM Cortex-M4.

Bringing practical post-quantum security to low-end IoT devices is a pressing challenge. In 64 we evaluate a range of pre- and post-quantum secure signature schemes in the context of SUIT software updates (specified by the IETF), on three popular, off-the-shelf microcontroller boards (ARM Cortex-M4, ESP32, and RISC-V) that are representative of the 32-bit landscape. We show that upgrading to postquantum security is practical now, and reflect on the best choices for various use cases. This work was selected for presentation at Real World Crypto 2022, and was published at ACNS 2022.

Polynomial multiplication is one of the most costly operations of ideal lattice-based cryptosystems. In 71, with Aurélien Greuet (Idemia), we study its optimization when one of the operands has coefficients close to 0. We focus on this structure since it is at the core of lattice-based Key Encapsulation Mechanisms submitted to the NIST call for post-quantum cryptography. In particular, we propose optimizations of this operation for embedded devices, by using an RSA/ECC coprocessor that provides efficient and secure large-integer arithmetic. In this context, we compare Kronecker Substitution with two specific algorithms that we introduce: KSV, a variant of this substitution, and an adaptation of schoolbook multiplication, denoted Shift&Add. All these algorithms rely on the transformation of polynomial multiplication into large-integer arithmetic. Then, thanks to these algorithms, existing secure coprocessors dedicated to large-integer arithmetic can be re-purposed to speed up post-quantum schemes. The efficiency of these algorithms depends on the component specifications and the cryptosystem parameter set. Thus, we establish a methodology to determine which algorithm to use, for a given component, by only implementing basic large-integer operations. Moreover, the three algorithms are assessed on a chip, ensuring that the theoretical methodology matches practical results.
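Kronecker substitution itself can be shown in miniature: pack the coefficients into one large integer, perform a single big-integer multiplication, then unpack. The sketch below handles only small non-negative coefficients; the setting of the paper additionally deals with signed coefficients and reduction modulo q:

```python
# Kronecker substitution in miniature: to multiply two polynomials with
# small non-negative coefficients, evaluate both at x = 2^B for B large
# enough that product coefficients never overlap, multiply the two
# resulting integers, and read the coefficients back off in base 2^B.
# This is how a large-integer coprocessor can serve polynomial arithmetic.

def kronecker_mul(f, g, B):
    F = sum(c << (B * i) for i, c in enumerate(f))   # F = f(2^B)
    G = sum(c << (B * i) for i, c in enumerate(g))   # G = g(2^B)
    H = F * G                                        # one big-integer product
    mask = (1 << B) - 1
    return [(H >> (B * i)) & mask for i in range(len(f) + len(g) - 1)]

f = [1, 2, 3]    # 1 + 2x + 3x^2
g = [4, 5]       # 4 + 5x
# Product: 4 + 13x + 22x^2 + 15x^3; B = 16 bits is ample here.
assert kronecker_mul(f, g, 16) == [4, 13, 22, 15]
```

The choice of B is the whole game: it must exceed the bit size of the largest possible product coefficient, trading integer length against correctness.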

NTRU was first introduced by J. Hoffstein, J. Pipher and J. H. Silverman in 1998. Its security, efficiency and compactness properties have been carefully studied for more than two decades. A key encapsulation mechanism (KEM) version was even submitted to the NIST standardization competition and made it to the final round. Even though it was not chosen as a new standard, NTRU remains a relevant, practical and trustworthy post-quantum cryptographic primitive. In 25, with Luk Bettale (Idemia), Julien Eynard (ANSSI) and Rémi Strullu (ANSSI), we investigate the side-channel resistance of the NTRU Decrypt procedure. In contrast with previous works on side-channel analysis of NTRU, we consider a weak attacker model and we focus on an implementation that incorporates some side-channel countermeasures. The attacker is assumed to be unable to mount powerful attacks, for instance by using templates or by forging malicious ciphertexts. In this context, we show how a non-profiled side-channel analysis can be carried out against a core operation of NTRU decryption. Despite the considered countermeasures and the weak attacker model, our experiments show that the secret key can be fully retrieved with a few tens of traces.

Together with Cyprien Delpech de Saint Guilhem (KU Leuven), Tako Boris Fouotsa (Università degli Studi Roma Tre), Peter Kutas (University of Birmingham), Christophe Petit (Université Libre de Bruxelles), Javier Silva (Universitat Pompeu Fabra) and Benjamin Wesolowski (Institut de Mathématiques de Bordeaux), Luca De Feo and Antonin Leroux have introduced a new post-quantum public-key encryption scheme that makes constructive use of the torsion-point attack against the SIDH key exchange. The publication includes a C implementation of this new construction. Another contribution of this work is the "uber-isogeny assumption", which aims at generalizing computational assumptions encountered in various schemes in the literature.

Elliptic curves over finite fields are either ordinary or supersingular. Distinguishing supersingular elliptic curves is an important task in algorithmic number theory, and now forms a crucial step in public-key validation for some isogeny-based cryptosystems such as CSIDH. In 13, we improve the state of the art in supersingularity testing, especially for cryptographic applications, with faster algorithms backed up by high-speed software implementations.

The tasks of evaluating and verifying isogenies are fundamental for isogeny-based cryptography. The suborder representation, introduced in 37 and presented at ASIACRYPT 2022, targets the case of (big) prime-degree isogenies. The core of our new method is the revelation of endomorphisms of smooth norm inside a well-chosen suborder of the codomain's endomorphism ring. This new representation opens interesting prospects for isogeny-based cryptography under the hardness of a new computational problem: the SubOrder to Ideal Problem (SOIP). As an application, we introduce pSIDH, a new NIKE based on the suborder representation. Studying new assumptions appears particularly crucial in the light of the recent attacks against isogeny-based cryptography.

The special combinatorial properties of the elliptic supersingular isogeny graph have made it a fruitful setting for new isogeny-based cryptosystems. Naturally, then, we seek to understand the properties of their generalizations, starting with the isogeny graphs formed by supersingular and superspecial abelian surfaces. In 20 we investigate the intricate local structure of these graphs.

In 17 we investigate the isogeny graphs of supersingular elliptic curves over quadratic extension fields that are equipped with an isogeny to their Galois conjugate. These curves are interesting because they are, in a sense, a generalization of curves defined over the prime field, and there is an action of an ideal class group on the isogeny graphs. We investigate constructive and destructive aspects of these graphs in isogeny-based cryptography, including generalizations of the CSIDH cryptosystem and the Delfs–Galbraith algorithm.

Pseudorandom correlation functions (Boyle et al. 68) allow two parties to locally generate, from short correlated keys, a near-unbounded number of pseudorandom samples according to a target correlation. The candidate introduced by Boyle et al. was built on a new assumption, variable-density learning parity with noise (VDLPN), which they support by showing its resistance to a large class of attacks (called linear attacks). In 57, G. Couteau and

Suppose a user of a small device needs a powerful computer to perform a heavy computation for him; the computation cannot be performed by the device itself. After completing the computation, the powerful computer reports a result. Suppose now that the user does not fully trust the remote computer to perform correctly or behave honestly. How can the user be assured that the correct result has been returned, given that he cannot redo the computation?

The topic of verifiable computation deals with this issue. Essentially, it is a cryptographic protocol in which the prover (i.e. the remote computer) provides a proof to a weak verifier (i.e. the user) that a computation is correct. The protocol may be interactive, in which case there are one or more rounds of interaction between the prover and the verifier, or non-interactive, in which case the prover simply sends a proof that the computation is correct.
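A classic toy instance of this verify-cheaper-than-recompute principle is Freivalds' probabilistic check of a claimed matrix product. It is not one of the proof systems studied by the team, only an illustration of a weak verifier catching a cheating prover:

```python
import random

# Freivalds' check: verifying a claimed product C = A*B costs three
# matrix-vector products per round (O(n^2)) instead of recomputing A*B
# (O(n^3) naively); a wrong C is caught with probability >= 1/2 per round.

def freivalds(A, B, C, rounds=20):
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br  = [sum(B[i][j] * r[j]  for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr  = [sum(C[i][j] * r[j]  for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False          # claimed product is certainly wrong
    return True                   # correct with probability >= 1 - 2^-rounds

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]     # the true product A*B
C_bad  = [[19, 22], [43, 51]]     # one corrupted entry
assert freivalds(A, B, C_good)
assert not freivalds(A, B, C_bad)
```

Real proof systems replace the random vector with cryptographic machinery so that the check also works non-interactively and for arbitrary computations, but the asymmetry between proving and verifying is the same.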

These protocols have zero-knowledge variants, where the scenario is different: a service performs a computation on data, part of which remains private (for instance, statistics on citizens' incomes). It is possible for the service to prove the correctness of the result without revealing the data (which has to be committed to anyway).

Two directions for building these protocols are discrete logarithms (and pairings) on elliptic curves, and a coding-theoretic setting (originating in the PCP theorem). Both variants admit a zero-knowledge version, and the core of the research is more about provable computation than the zero-knowledge aspect, which comes rather easily in comparison.

In the coding-theoretic setting, these protocols have been popularized, in particular in the blockchain area, under the name of (ZK-)STARKs, Scalable Transparent ARguments of Knowledge, introduced in 2018. The short non-interactive proofs are derived from protocols called IOPs (Interactive Oracle Proofs), which combine IPs (Interactive Proofs) and PCPs (Probabilistically Checkable Proofs) to get the best of both worlds and make PCPs practical.

At the core of these protocols lies the following coding problem: how to decide, with high confidence, that a very long ambient word is close to a given code, while looking at only very few of its coordinates.
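As a toy illustration of how such a check can be made local, here is the folding step underlying FRI-type proximity tests for Reed-Solomon codes (a sketch of the general idea only, not the protocols discussed in this report): each round halves the degree of the claimed low-degree polynomial, and consistency between rounds can be spot-checked using only two coordinates of the previous word per query. The small prime field p = 97 and all concrete values are illustrative choices.

```python
# Toy sketch of one FRI-style folding round (illustrative parameters).
p = 97  # hypothetical small prime field

def poly_eval(coeffs, x):
    """Evaluate a polynomial given by its coefficient list, via Horner."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def fold(coeffs, beta):
    """Split f(x) = f_e(x^2) + x*f_o(x^2) into even/odd parts and return
    g = f_e + beta*f_o, a polynomial of half the degree."""
    f_e, f_o = coeffs[0::2], coeffs[1::2]
    f_o += [0] * (len(f_e) - len(f_o))  # pad odd part if needed
    return [(e + beta * o) % p for e, o in zip(f_e, f_o)]
```

A verifier can spot-check a round: for any x != 0, the folded word must satisfy g(x^2) = (f(x)+f(-x))/2 + beta*(f(x)-f(-x))/(2x), which requires reading only the two coordinates f(x) and f(-x). Iterating the folding reduces proximity to a huge code to a tiny final check.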

These protocols were originally designed for the simplest algebraic codes, Reed-Solomon codes. Daniel Augot and Sarah Bordage provided a generalization of these protocols to multivariate codes, i.e. products of Reed-Solomon codes and Reed-Muller codes. The performance does not degrade badly with respect to the basic Reed-Solomon case 12. It remains to assess the relevance of these codes for building proof systems and to compare with the literature, where products of Reed-Solomon codes have been studied for more than twenty years.

A very important issue is to have a smaller alphabet, and this can be done using algebraic-geometric codes. This was done by

Verifiable computation can also be built using the theory of elliptic curves, the hardness of the discrete logarithm problem, and pairings, as introduced in 72 and made practical in 73. These proofs are much shorter than those provided by STARKs, at a higher cost for the prover. Furthermore, these systems are not post-quantum, and there are important issues in the setup of the proof system, where a trusted third party is required.

The verifiable computation problem leads to several new questions in elliptic curve cryptography, since the required operations depart from the standard ones used, for instance, in signature algorithms.

A very interesting topic is the notion of "proof of proofs". Essentially, verifying a proof is itself a computation, and a proof that a proof has been verified can be given. The same idea applies to verifying hundreds of proofs: a single proof can attest that hundreds of proofs have been checked.

This is very powerful in the elliptic curve setting, because the size of a proof is constant (a few hundred bytes, depending only on the security parameter, not on the computation). This means that the above hundreds of statements admit a very short proof. In the blockchain world, this translates into a very short proof that many off-chain transactions are correct.

To achieve this goal, one needs an elliptic curve for proving computations done over another elliptic curve. The problem is that there is an arithmetic mismatch: the statement which is to be proved is defined over

In collaboration with Aurore Guillevic, Youssef El Housni provided curves which are very efficient for this recursion 70, 33. These curves beat the competition, and an implementation has been provided. Other blockchain players (CELO, Consensys) have also used these curves in their implementations of verifiable computation and zero-knowledge proofs.

Once the curves are built, it remains to perform other elliptic curve operations particular to SNARKs, for instance computing a sum of many scalar multiplications with different points (multi-scalar multiplication).
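As a sketch of the kind of operation involved, here is the classical bucket (Pippenger) method for multi-scalar multiplication, written generically over any abelian group; plain integer addition stands in below for elliptic-curve point addition, and the window size c = 4 is an arbitrary illustrative choice. This is a generic textbook technique, not the team's optimized implementation.

```python
def msm_buckets(scalars, points, add, zero, c=4):
    """Compute sum(s_i * P_i) with the bucket (Pippenger) method,
    processing the scalars c bits at a time.  `add` is the group law
    and `zero` its neutral element."""
    nbits = max(s.bit_length() for s in scalars)
    windows = (nbits + c - 1) // c
    result = zero
    for w in reversed(range(windows)):
        # Shift the running result left by c bits (result *= 2^c).
        for _ in range(c):
            result = add(result, result)
        # Throw each point into the bucket of its current c-bit digit.
        buckets = [zero] * (1 << c)
        for s, P in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            if digit:
                buckets[digit] = add(buckets[digit], P)
        # Sum j * buckets[j] with a running suffix sum (no multiplications).
        running, acc = zero, zero
        for b in reversed(buckets[1:]):
            running = add(running, b)
            acc = add(acc, running)
        result = add(result, acc)
    return result
```

The point of the bucket trick is that the cost is dominated by one group addition per scalar per window, instead of a full double-and-add per scalar.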

In collaboration with Matthieu Rambaud (Télécom Paris), Daniel Augot is advising Angelo Saadeh. The issue addressed is the following: two parties each privately hold distinct slices of common data, and they want to compute a logistic regression on the whole data set without either party revealing its data to the other.

Computing a common output from the inputs of several participants, as above, is done in cryptography using MPC (Secure Multiparty Computation), introduced by Yao 74 and recently made practical, with several implementations. Yet, as classically observed in MPC, the actual result, once learned, may leak information about the secret inputs. The same problem occurs here, where the model may leak information about the data.
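For illustration, here is a minimal sketch of additive secret sharing, the basic building block of many such MPC protocols (a generic textbook construction, not the specific protocol studied in this work). Linear operations on shared data can be performed locally by each party; non-linear steps, such as the sigmoid in a logistic regression, require interaction and are not shown. The modulus is an arbitrary illustrative choice.

```python
import random

p = 2**61 - 1  # illustrative prime modulus for the shares

def share(x, n_parties=2):
    """Additively share x mod p.  Each party receives one share; any
    strict subset of the shares reveals nothing about x."""
    shares = [random.randrange(p) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % p)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % p

def local_add(shares_x, shares_y):
    """Add two shared secrets without communication: each party simply
    adds its own two shares."""
    return [(a + b) % p for a, b in zip(shares_x, shares_y)]
```

This locality of linear operations is what makes secret-sharing-based MPC efficient for the linear-algebra-heavy parts of machine learning, while each non-linear gate costs a round of interaction.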

Thus it is natural to investigate the use of

Proofs of Retrievability (PoR) protocols ensure that a client can fully retrieve a large outsourced file from an untrusted server. In 40 we design a good PoR based on a family of graph codes called expander codes. We use expander codes based on graphs derived from point-line incidence relations of finite affine planes. These codes have good dimension and minimum distance over a relatively small alphabet. Moreover, expander codes possess very efficient unique decoding algorithms. We take advantage of these results to design a PoR scheme that extracts the outsourced file in quasi-linear time and features better concrete parameters than state-of-the-art schemes w.r.t. storage overhead and size of the outsourced file. This work was presented at CANS 2022.

Another line of work on PoRs was to design good PoR schemes with simple security proofs. To this end, in 39 we propose a framework for the design of secure and efficient PoR schemes that is based on Locally Correctable Codes, and whose security is phrased in the Constructive Cryptography model of Maurer. We give an instantiation of our framework using the high-rate lifted codes introduced by Guo et al. This yields an infinite family of good PoRs. We establish their security by solving a finite geometry problem, giving an explicit formula for the probability of an adversary fooling the client. This was presented at I4CS 2022.

Cornacchia's algorithm is an important building block of CM elliptic curve cryptography. Since it shares many properties with fast integer gcd algorithms, we worked on a fast version of this tool. A paper is to be submitted to ISSAC 2022 and the code is to be made available on gitlab.
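For readers unfamiliar with it, here is a minimal, unoptimized sketch of the classical algorithm (not the fast gcd-style variant mentioned above): it solves x^2 + d*y^2 = p for an odd prime p by running the Euclidean algorithm on p and a square root of -d mod p, truncated at sqrt(p). The brute-force square-root search is for exposition only; a real implementation would use Tonelli-Shanks.

```python
from math import isqrt

def cornacchia(d, p):
    """Solve x^2 + d*y^2 = p for an odd prime p and 1 <= d < p.
    Returns (x, y), or None if no solution exists."""
    # Find a square root r of -d modulo p (naive search, for exposition).
    r = next((t for t in range(1, p) if (t * t + d) % p == 0), None)
    if r is None:
        return None          # -d is not a quadratic residue mod p
    if 2 * r < p:
        r = p - r            # take the root in the interval (p/2, p)
    # Euclidean algorithm on (p, r), stopped at the first remainder
    # below sqrt(p): that remainder is the candidate x.
    a, b, limit = p, r, isqrt(p)
    while b > limit:
        a, b = b, a % b
    x = b
    rem = p - x * x
    if rem % d != 0:
        return None
    y = isqrt(rem // d)
    if y * y != rem // d:
        return None          # (p - x^2)/d is not a perfect square
    return (x, y)
```

The structural similarity to a plain gcd computation is visible in the inner loop, which is exactly what makes fast-gcd techniques applicable.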

One of the most powerful factoring algorithms is ECM, which uses elliptic curves. To improve it, families of curves are traditionally built over the rationals. In this work, number fields are used to treat special numbers. See the preliminary results in 60.

Groups of unknown order are a classic setting for asymmetric cryptosystems, RSA being the most famous example. In recent times, unknown-order groups have returned to prominence as a setting for new, advanced cryptosystems including accumulators and VDFs (Verifiable Delay Functions). In these applications, trustlessness becomes critical: not even the constructor of the group should know its order. In 19 (joint work with Samuel Dobson and Steven Galbraith), we re-evaluate the security of ideal class groups, the most popular source of trustless unknown-order groups, and show that generally accepted parameters do not meet claimed security levels. We also propose a more efficient alternative: Jacobians of genus-3 hyperelliptic curves.

SPARTA project on cordis.europa.eu

ANR (Cryptography, Isogenies, and Abelian varieties Overwhelming) is a JCJC 2019 project led by Damien Robert (Inria EP LFANT). This project, which started in October 2019, examines applications of higher-dimensional abelian varieties in isogeny-based cryptography.

ANR (Code–based Cryptography) is a project starting in October 2017, led by Jean-Pierre Tillich (Inria, EP Cosmiq); it focuses on the design and the security analysis of code-based primitives, in the context of the current NIST competition.

ANR (An interface between COde and LAttice-based cryptography) is an ANR JCJC project starting in October 2021, led by Thomas Debris-Alazard; it focuses on bringing post-quantum solutions based on codes and lattices closer together, to improve our trust in cryptanalysis and to open new perspectives in terms of design.

BARRACUDA is a collaborative ANR project accepted in 2021 and led by

Website : barracuda.inria.fr

The project gathers specialists of coding and cryptology on the one hand, and specialists of number theory and algebraic geometry on the other. Its objectives concern problems arising from modern cryptography which require advanced algebraic objects and techniques: for instance, mathematical problems with applications to distributed storage, multi-party computation, or zero-knowledge proofs for protocols.

SANGRIA is a collaborative ANR project accepted in 2021.

Website : lip6.fr/Damien.Vergnaud/projects/sangria/

The main scientific challenges of the SANGRIA project (Secure distributed computAtioN: cryptoGRaphy, combinatorIcs and computer Algebra) are (1) to construct specific protocols that take practical constraints into account and prove them secure, and (2) to implement them and to improve the efficiency of existing protocols significantly. The project combines research from cryptography, combinatorics and computer algebra. It is expected to impact central problems in secure distributed computation, while enriching the general landscape of cryptography.

MobiS5 is a collaborative ANR project accepted in 2018.

Website : mobis5.limos.fr/

MobiS5 aims to foresee and counter the threats posed to 5G architectures by the architectural modifications suggested in TR 22.861-22.864. Concretely, we will provide a provably-secure cryptographic toolbox for 5G networks, validated formally and experimentally, responding to the needs of 5G architectures at three levels:

* Challenge 1: security in the network infrastructure and end points, including core network security and attack detection and prevention;
* Challenge 2: cryptographic primitives and protocols, notably a selection of basic primitives, an authenticated key-exchange protocol, tools to compute on encrypted data, and post-quantum cryptographic countermeasures;
* Challenge 3: mobile applications, specifically the use-case of a secure server that aids or processes outsourced computation, and the example of a smart home.

CryptiQ is a collaborative ANR project accepted in 2018.

The goal of the CryptiQ project is to anticipate major changes due to quantum computing by considering three plausible scenarios, from the closest to the furthest foreseeable future, depending on the means of the adversary and the honest parties. In the first scenario, the honest execution of protocols remains classical while the adversary may have oracle access to a quantum computer. This is so-called post-quantum cryptography, which is the best-known setting. In the second scenario (quantum-enhanced classical cryptography), we allow honest parties to have access to quantum technologies in order to achieve enhanced properties, but we restrict this access to those quantum technologies that are currently available (or that can be built in the near term). The adversary is still allowed to use any quantum technology. Finally, in the third scenario (cryptography in a quantum world), we allow the most general quantum operations to the adversary, and we consider that anybody can now have access to both quantum communication and computation.

This integrated project aims to develop, within 5 years, post-quantum cryptographic primitives which would be implemented in an open-source web browser.
The evolution of cryptographic standards has already begun: the choice of new primitives will be made soon, and the transition should take place within a few years. The objective of the project is to play a crucial role in this evolution,
so that French researchers, who are already strongly involved in this process, can influence the choice of cryptographic standards in the coming years.

RIOT-fp is a research project on cyber-security targeting low-end, microcontroller-based IoT devices, which run operating systems such as RIOT and a low-power network stack. It links the project-teams EVA, GRACE, PROSECCO, TRiBE, and TEA. Taking a global and practical approach, RIOT-fp gathers partners planning to enhance RIOT with an array of security mechanisms. The main challenges tackled by RIOT-fp are:

Beyond academic outcomes, the output of RIOT-fp is open source code published, maintained and integrated in the open source ecosystem around RIOT. As such, RIOT-fp strives to contribute usable building blocks for an open source IoT solution improving the typical functionality vs. risk tradeoff for end-users.

The CACHAÇA project, based at Campus Cyber, started in 2022. CACHAÇA aims to bring high-assurance techniques from formal methods to the initial design and implementation phase for new post-quantum cryptosystems, to produce fast, safe, and portable software implementations, especially for constrained environments such as IoT devices.

Its leader has associate researcher status, and so CACHAÇA is an anchor-point for collaborations between GRACE and the Secure Components laboratory at ANSSI. It will also encompass GRACE's contribution to planned industrial consortia (expected to begin in 2023).