Section: New Results
Lattices and cryptography
Worst-Case to Average-Case Reductions for Module Lattices
Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiency can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on Euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. Adeline Langlois and Damien Stehlé defined the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. They proved that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves bridge arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE [8].
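For orientation, the shape of a Module-LWE sample can be illustrated with a toy generator over R_q = Z_q[x]/(x^n + 1); all parameters and function names below are illustrative sketches, not taken from [8]:

```python
import numpy as np

def polymul(f, g, n, q):
    """Multiply two polynomials in Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    full = np.convolve(f, g)            # plain product, degree <= 2n - 2
    res = np.zeros(n, dtype=np.int64)
    for i, c in enumerate(full):
        if i < n:
            res[i] += c
        else:
            res[i - n] -= c             # reduce using x^n = -1
    return res % q

def mlwe_sample(s, e, n, q, rng):
    """One Module-LWE sample (a, b): a uniform in R_q^d, b = <a, s> + e.

    The module rank is d = len(s); d = 1 recovers Ring-LWE, while n = 1
    recovers plain LWE in dimension d.
    """
    d = len(s)
    a = [rng.integers(0, q, n) for _ in range(d)]
    b = e % q
    for j in range(d):
        b = (b + polymul(a[j], s[j], n, q)) % q
    return a, b
```

With a small secret and small error, subtracting the inner product ⟨a, s⟩ from b recovers the error term, which is the mechanism decryption relies on in schemes built from these problems.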
Semantically Secure Lattice Codes for the Gaussian Wiretap Channel
Cong Ling (Imperial College, UK), Laura Luzzi (ENSEA), Jean-Claude Belfiore (Telecom ParisTech) and Damien Stehlé proposed a new wiretap lattice coding scheme that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in their security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. They not only introduced the notion of secrecy-good lattices, but also proposed the flatness factor as a design criterion for such lattices. Both the modulo-lattice Gaussian channel and the genuine Gaussian channel are considered. In the latter case, they proposed a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within half a nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in their proposed schemes [9].
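For orientation, the flatness factor of an n-dimensional lattice Λ with fundamental region R(Λ) of volume V(Λ) can be stated as follows (a standard formulation consistent with [9]):

```latex
f_{\sigma,\Lambda}(\mathbf{x})
  = \frac{1}{(2\pi\sigma^2)^{n/2}}
    \sum_{\lambda \in \Lambda} e^{-\|\mathbf{x}-\lambda\|^2 / (2\sigma^2)},
\qquad
\epsilon_{\Lambda}(\sigma)
  = \max_{\mathbf{x} \in \mathcal{R}(\Lambda)}
    \bigl|\, V(\Lambda)\, f_{\sigma,\Lambda}(\mathbf{x}) - 1 \,\bigr|.
```

A small ε_Λ(σ) means that a Gaussian of width σ folded modulo the lattice is nearly uniform over R(Λ), which is what bounds the information leaked to the eavesdropper.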
GGHLite: More Efficient Multilinear Maps from Ideal Lattices
The Garg-Gentry-Halevi (GGH) Graded Encoding Scheme, based on ideal
lattices, is the first plausible approximation to a cryptographic
multilinear map. Unfortunately, the scheme requires very large
parameters to provide security for its underlying encoding
re-randomization process. Adeline Langlois, Damien Stehlé and Ron
Steinfeld (Monash University, Australia) formalized, simplified and improved the
efficiency and the security analysis of the re-randomization process
in the GGH construction. This results in a new construction that they
called GGHLite. In particular, they first lowered the size of a standard
deviation parameter of the GGH re-randomization process from
exponential to polynomial in the security parameter. This first
improvement is obtained via a finer security analysis of the
so-called drowning step of re-randomization, in which they applied the Rényi
divergence instead of the conventional statistical distance as a
measure of distance between distributions. Their second improvement is
to reduce the number of randomizers needed to 2, independently
of the dimension of the underlying ideal lattices. These two
contributions allowed them to decrease the bit size of the public
parameters to
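The measure replacing the statistical distance in the drowning-step analysis can be sketched as follows (the standard Rényi divergence of order a > 1 over a discrete support, together with the probability-preservation property such arguments rely on):

```latex
R_a(P \,\|\, Q)
  = \Bigl( \sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^a}{Q(x)^{a-1}} \Bigr)^{\frac{1}{a-1}},
\qquad
Q(E) \;\ge\; \frac{P(E)^{\frac{a}{a-1}}}{R_a(P \,\|\, Q)}
  \quad \text{for any event } E.
```

Since R_a can remain small in regimes where the statistical distance would force exponentially large drowning noise, the standard deviation can be taken polynomial in the security parameter.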
LLL reducing with the most significant bits
Let
Hardness of k-LWE and Applications in Traitor Tracing
San Ling (NTU, Singapore), Duong Hieu Phan (LAGA), Damien Stehlé and
Ron Steinfeld (Monash University, Australia) introduced the k-LWE
problem and used its hardness to build a lattice-based traitor tracing
scheme.
Lattice-Based Group Signature Scheme with Verifier-local Revocation
Support of membership revocation is a desirable functionality for any
group signature scheme. Among the known revocation approaches,
verifier-local revocation (VLR) seems to be the most flexible one,
because it only requires the verifiers to possess some up-to-date
revocation information, but not the signers. All of the contemporary
VLR group signatures operate in the bilinear map setting, and all of
them will be insecure once quantum computers become a reality. Adeline
Langlois, San Ling, Khoa Nguyen and Huaxiong Wang (NTU, Singapore)
introduced the first lattice-based VLR group
signature [21], and thus the first such
scheme that is believed to be quantum-resistant. In comparison with
existing lattice-based group signatures, this scheme has several
noticeable advantages: support of membership revocation,
logarithmic-size signatures, and weaker security assumption. In the
random oracle model, their scheme is proved to be secure based on the
hardness of the Shortest Independent Vector Problem with approximation
factor
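To illustrate the verifier-local revocation mechanism itself (verifiers hold the up-to-date revocation list; signers need no update), here is a toy hash-based sketch; it omits the actual group-signature component, and all names are hypothetical, not the construction of [21]:

```python
import hashlib
import secrets

def _h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of the given byte strings."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def sign(token: bytes, message: bytes):
    """Toy VLR-style signing: bind a fresh nonce to the member's secret
    revocation token (the real group-signature part is omitted)."""
    nonce = secrets.token_bytes(16)
    return nonce, _h(nonce, message, token)

def verify(revocation_list, message: bytes, sig) -> bool:
    """Verifier-local check: accept unless the implicit token matches an
    entry of the verifier's local revocation list. The signer is never
    contacted, which is the defining feature of VLR."""
    nonce, tag = sig
    return all(_h(nonce, message, rt) != tag for rt in revocation_list)
```

As in actual VLR schemes, a revoked member loses anonymity: once a token is published in the revocation list, anyone can link that member's signatures.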
Proxy Re-Encryption Scheme Supporting a Selection of Delegatees
Julien Devigne (Orange Labs), Eleonora Guerrini (Univ. Montpellier 2, LIRMM) and Fabien Laguillaumie adapted the proxy re-encryption primitive, which allows a user to decide that, in case of unavailability, one (or several) particular users, the delegatees, will be able to read his confidential messages. They modified it so that a sender can choose which among many potential delegatees will be able to decrypt his messages, and proposed a simple and efficient scheme that is secure under chosen-plaintext attack under a standard algorithmic assumption in a bilinear setting. They also investigated the possibility of adding traceability of the proxy, so that one can detect whether it has leaked some re-encryption keys [17].
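To illustrate the proxy re-encryption primitive being adapted (not the delegatee-selection scheme of [17], which works in a bilinear setting), here is a toy ElGamal-based sketch in the style of Blaze, Bleumer and Strauss; the parameters are illustrative and offer no real security:

```python
import secrets
from math import gcd

P = 2**127 - 1          # Mersenne prime modulus; toy parameters only
G = 3

def keygen():
    """Secret exponent chosen invertible mod P-1 so re-keying works below."""
    while True:
        sk = secrets.randbelow(P - 3) + 2
        if gcd(sk, P - 1) == 1:
            return sk, pow(G, sk, P)

def encrypt(pk, m):
    """Ciphertext (g^{a r}, m * g^r) under pk = g^a."""
    r = secrets.randbelow(P - 3) + 2
    return pow(pk, r, P), (m * pow(G, r, P)) % P

def rekey(sk_a, sk_b):
    """Re-encryption key a -> b, namely b / a mod (P - 1)."""
    return (sk_b * pow(sk_a, -1, P - 1)) % (P - 1)

def reencrypt(rk, ct):
    """Proxy step: turns g^{a r} into g^{b r} without seeing the plaintext."""
    c1, c2 = ct
    return pow(c1, rk, P), c2

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c1, pow(sk, -1, P - 1), P)       # recover g^r
    return (c2 * pow(g_r, -1, P)) % P
```

The scheme of [17] departs from this classic template by letting the sender decide, at encryption time, which delegatees' re-encryption keys will be effective.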
Practical validation of several fault attacks against the Miller algorithm
Ronan Lashermes (SAS-ENSMSE, PRISM), Marie Paindavoine, Nadia El Mrabet (Univ. P8, LIASD), Jacques Fournier (SAS-ENSMSE) and Louis Goubin (UVSQ, PRISM) described practical implementations of fault attacks against the Miller algorithm, which computes pairing evaluations on algebraic curves. These implementations validate common fault models used against pairings. In light of the implemented fault attacks, they showed that some blinding techniques proposed to protect the algorithm against side-channel analyses cannot be used as countermeasures against the implemented fault attacks [23].
Non-Malleability from Malleability: Simulation-Sound Quasi-Adaptive NIZK Proofs and CCA2-Secure Encryption from Homomorphic Signatures
Verifiability is central to building protocols and systems with
integrity. Initially, efficient methods employed the Fiat-Shamir
heuristic. Since 2008, the Groth-Sahai techniques have been the most
efficient in constructing non-interactive witness indistinguishable
and zero-knowledge proofs for algebraic relations in the standard
model. For the important task of proving membership in linear
subspaces, Jutla and Roy (Asiacrypt 2013) gave significantly more
efficient proofs in the quasi-adaptive setting (QA-NIZK). For
membership of the row space of a
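For orientation, the linear-subspace languages handled by such QA-NIZK proofs can be written as follows (standard notation, not quoted from the cited works):

```latex
\mathcal{L}_{\mathbf{M}}
  = \bigl\{ \mathbf{v} \in \mathbb{G}^{n} \;:\;
      \exists\, \mathbf{x} \in \mathbb{Z}_p^{t},\;
      \mathbf{v} = \mathbf{x}^{\top} \mathbf{M} \bigr\},
\qquad \mathbf{M} \in \mathbb{G}^{t \times n},
```

where the product is a multi-exponentiation in the group G; "quasi-adaptive" means the common reference string is allowed to depend on the matrix M.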
Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares
Threshold cryptography is a fundamental distributed computational
paradigm for enhancing the availability and the security of
cryptographic public-key schemes. It does so by dividing private keys
into
In [24], Benoît Libert, Marc Joye
(Technicolor, USA) and Moti Yung (Google and Columbia U, USA)
constructed practical fully distributed (the private key is born
distributed), non-interactive schemes – where the servers can compute
their partial signatures without communication with other servers –
with adaptive security (i.e., the adversary corrupts servers
dynamically based on its full view of the history of the
system). Their schemes are very efficient in terms of computation,
communication, and scalable storage (with private key shares of size
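The division of private keys into shares is classically instantiated with Shamir secret sharing; a minimal (t, n)-threshold sketch over a prime field follows (illustrative only, not the distributed key generation of [24]):

```python
# Minimal Shamir (t, n) secret sharing over a prime field -- the standard
# key-splitting tool behind threshold cryptography.
import secrets

P = 2**89 - 1   # a Mersenne prime; the field the shares live in

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

In a threshold signature scheme, the servers evaluate their partial signatures under their shares and the Lagrange coefficients recombine them, so the full private key is never reassembled in one place.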
Concise Multi-challenge CCA-Secure Encryption and Signatures with Almost Tight Security
To gain strong confidence in the security of a public-key scheme, it
is most desirable for the security proof to feature a tight reduction
between the adversary and the algorithm solving the underlying hard
problem. Recently, Chen and Wee (Crypto '13) described the first
Identity-Based Encryption scheme with almost tight security under a
standard assumption. Here, “almost tight” means that the security
reduction only loses a factor
In [25], Benoît Libert, Thomas Peters (UCL, Belgium), Marc Joye (Technicolor, USA) and Moti Yung (Google and Columbia U, USA) considered space-efficient schemes with security almost tightly related to standard assumptions. As a step in solving the open question by Hofheinz and Jager, they constructed an efficient CCA-secure public-key encryption scheme whose chosen-ciphertext security in the multi-challenge, multi-user setting almost tightly relates to the DLIN assumption (in the standard model). Quite remarkably, the ciphertext size decreases to 69 group elements under the DLIN assumption, whereas the best previous solution required about 400 group elements. Their scheme is obtained by taking advantage of a new almost tightly secure signature scheme (in the standard model) they developed, which is based on the recent concise proofs of linear subspace membership in the quasi-adaptive non-interactive zero-knowledge setting (QA-NIZK) defined by Jutla and Roy (Asiacrypt '13). The new signature scheme reduces the length of the previous such signatures (by Chen and Wee) by 37% under the Decision Linear assumption, by almost 50% under the K-LIN assumption, and it becomes only 3 group elements long under the Symmetric eXternal Diffie-Hellman assumption. Their signatures are obtained by carefully combining the proof technique of Chen and Wee and the above-mentioned QA-NIZK proofs.
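The notion of tightness referred to here can be sketched generically as follows (a standard formulation, not quoted from [25]):

```latex
\mathbf{Adv}^{\mathrm{problem}}(\mathcal{B})
  \;\ge\; \frac{\mathbf{Adv}^{\mathrm{scheme}}(\mathcal{A})}{L},
```

where B is the reduction built from the adversary A and L is its loss: the reduction is tight when L is a small constant, and almost tight when L grows only with the security parameter, independently of the numbers of users, challenges, and adversarial queries.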