The rise of the Internet and the ubiquity of electronic devices have changed our way of life. Many face-to-face and paper transactions nowadays have digital counterparts: home banking, electronic commerce, e-voting, and so on; even our social life has partially moved online. This digitalisation of the world comes with tremendous risks for our security and privacy, as illustrated by the following examples.
Financial transactions. According to the FEVAD (French federation of remote selling and e-commerce), 51.1 billion Euros were spent through e-commerce in France in 2013, with fraud estimated at 1.9 billion Euros by Certissim [1]. As discussed in another white paper [2] by Dave Marcus (Director of Advanced Research and Threat Intelligence, McAfee) and Ryan Sherstobitoff (Threat Researcher, Guardian Analytics), bank fraud has changed dramatically. Fraudsters aim to steal increasingly high amounts from bank accounts (with single transfers over 50,000 Euros) and develop fully automated attack tools to do so. As a consequence, protocols need to implement more advanced, multi-factor authentication methods.
Electronic voting. In the last few years several European countries (Estonia, France, Norway and Switzerland) organised legally binding political elections that allowed (part of) the voters to cast their votes remotely via the Internet. For example, in June 2012 French citizens living abroad (“expats”) were allowed to vote via the Internet in parliamentary elections. An engineer demonstrated that it was possible to write malware that could change the value of a cast vote without any way for the voter to notice [3]. In Estonia, a similar attack on the 2011 parliamentary election was reported by computer scientist Paavo Pihelgas, who conducted a real-life experiment with aware, consenting test subjects [4].
Privacy violations. Another security threat is the violation of an individual's privacy. For instance, radio-frequency identification (RFID) technology can be used to trace persons, e.g. through automatic toll-paying devices [5] or in public transportation. Even though security protocols are deployed to prevent tracing by third parties, protocol design errors enabled the tracing of European e-passports [6]. Recently, a flaw was identified in the 3G mobile phone protocols that allows a third party, i.e., not only the operator, to trace telephones [32]. Also, anonymised data from social networks has been effectively used to identify persons by comparing data from several social networks [7].
The aim of the Pesto project is to build formal models and techniques for the computer-aided analysis and design of security protocols (in a broad sense). While historically the main goals of protocols were confidentiality and authentication, the situation has changed. E-voting protocols need to guarantee privacy of the votes while ensuring transparency of the election; electronic devices communicate data by means of web services; RFID and mobile phone protocols must guarantee that people cannot be traced. Due to malware, security protocols must rely on additional mechanisms, such as trusted hardware components or multi-factor authentication, to guarantee security even if the computing platform is a priori untrusted. Existing techniques and tools are, however, unable to analyse the properties required by these new protocols and to take the newly deployed mechanisms and associated attacker models into account.
Before being able to analyse and properly design security protocols, it is essential to have a model with a precise semantics of the protocols themselves, the attacker and its capabilities, as well as the properties a protocol must ensure.
Most current languages for protocol specification are quite basic and do not provide support for global state, loops, or complex data structures such as lists or Merkle trees. As an example, we may cite Hardware Security Modules, which rely on a notion of mutable global state that does not arise in traditional protocols; see e.g. the discussion by Herzog [45].
Similarly, the properties a protocol should satisfy are generally not precisely defined, and stating the “right” definitions is often a challenging task in itself. In the case of authentication, many protocol attacks were due to the lack of a precise meaning, cf. [43]. While authentication has been widely studied, the recent digitalisation of all kinds of transactions and services introduces a plethora of new properties, including for instance anonymity in e-voting, untraceability of RFID tokens, verifiability of outsourced computations, as well as sanitisation of data in social networks. We expect that many privacy and anonymity properties can be modelled as particular observational equivalences in process calculi [39], or as indistinguishability between cryptographic games [3]; sanitisation of data may also rely on information-theoretic measures.
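To make the symbolic view of cryptography concrete, the following is a minimal sketch (in Python, with names of our own choosing) of the kind of equational reasoning underlying such models: terms are nested tuples, and the equation dec(enc(m, k), k) = m is oriented into a rewrite rule. None of this is the code of an actual verification tool; it only illustrates the idea of messages as symbolic terms.

```python
# Minimal sketch of symbolic message rewriting, Dolev-Yao style.
# The equation dec(enc(m, k), k) = m is oriented into a rewrite rule;
# all names (enc, dec, normalise) are illustrative.

def normalise(term):
    """Rewrite a term to normal form under dec(enc(m, k), k) -> m."""
    if not isinstance(term, tuple):
        return term                      # atomic name, e.g. 'k' or 'm'
    head, *args = term
    args = [normalise(a) for a in args]  # normalise subterms first
    if head == 'dec' and isinstance(args[0], tuple) \
            and args[0][0] == 'enc' and args[0][2] == args[1]:
        return args[0][1]                # dec(enc(m, k), k) -> m
    return (head, *args)

# Decrypting with the right key yields the plaintext ...
assert normalise(('dec', ('enc', 'm', 'k'), 'k')) == 'm'
# ... while a wrong key leaves the term stuck (no rule applies).
assert normalise(('dec', ('enc', 'm', 'k'), 'k2')) \
    == ('dec', ('enc', 'm', 'k'), 'k2')
```

In this abstraction, an attacker can only decrypt when it knows the key; observational equivalence then asks whether an attacker can distinguish two such symbolic processes.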
We also need to take into account that the attacker model is changing. While historically the attacker was considered to control the communication network, nowadays even (part of) the host executing the software may be compromised, e.g., through malware. This situation motivates the use of secure elements and multi-factor authentication with out-of-band channels. A typical example occurs in e-commerce: to validate an online payment, a user needs to enter an additional code sent by the bank via SMS to the user's mobile phone. Such protocols require the possession of a physical device in addition to the knowledge of a password, which could have been leaked on an untrusted platform. The fact that data needs to be copied by a human requires these data to be short, and hence amenable to brute-force or guessing attacks.
Most automated tools for verifying security properties rely on techniques stemming from automated deduction. Existing techniques, however, often do not apply directly, or do not scale up due to state explosion problems. For instance, the use of Horn clause resolution requires dedicated resolution methods [33, 35]. Another example is unification modulo equational theories, which is a key technique in several tools, e.g. [42]. Security protocols, however, require considering particular equational theories that are not naturally studied in classical automated reasoning. Sometimes, even new concepts have been introduced. One example is the finite variant property [37], which is used in several tools, e.g., Akiss [35], Maude-NPA [42] and TAMARIN [46]. Another example is the notion of asymmetric unification [41], a variant of unification used in Maude-NPA to apply powerful syntactic pruning of the search space, even when reasoning modulo an equational theory. For each of these topics we need to design efficient decision procedures for a variety of equational theories.
We design dedicated techniques for automated protocol verification. While existing techniques for security protocol verification are efficient and have reached maturity for the verification of confidentiality and authentication properties (or, more generally, safety properties), our goal is to go beyond these properties and the standard attacker models, covering the properties and attacker models identified in Section 3.1.
These goals are beyond the scope of most current analysis tools and require both theoretical advances in the area of verification, as well as the design of new efficient verification tools.
Given our experience in the formal analysis of security protocols, including both proving protocols correct and finding flaws, it is natural to use this experience to design protocols with security in mind, accompanied by security proofs. This part includes both provably secure design techniques and the development of new protocols.
Design techniques include composition results that allow one to design protocols in a modular way [38, 36]. Composition results come in many flavours: they may allow one to compose protocols with different objectives, e.g. compose a key exchange protocol with a protocol that requires a shared key or relies on a protocol for secure channel establishment, compose different protocols in parallel that may re-use some key material, or compose different sessions of the same protocol.
Another area where composition is of particular importance is Service Oriented Computing, where an “orchestrator” must combine available component services while guaranteeing security properties. In this context, we work on the automated synthesis of the orchestrator, or of monitors for enforcing the security goals. These problems require the study of new classes of automata that communicate with structured messages.
We also design new protocols. Application areas that seem of particular importance are:
Security protocols, such as TLS, Kerberos, ssh or AKA (mobile communication), are the main tool for securing our communications. The aim of our work is to improve their security guarantees. For this, we propose models that are expressive enough to formally represent protocol executions in the presence of an adversary, formal definitions of the security properties to be satisfied by these protocols, and automated tools able to analyse them and possibly exhibit design flaws.
Many techniques for symbolic verification of security are rooted in automated reasoning. A typical example is equational reasoning used to model the algebraic properties of a cryptographic primitive. Our work therefore aims to improve and adapt existing techniques or propose new ones when needed for reasoning about security.
Electronic elections have in recent years been used in several countries for politically binding elections; their use in professional elections is even more widespread. The aim of our work is to increase our understanding of the security properties needed for secure elections, to propose techniques for analysing e-voting protocols, to design state-of-the-art voting protocols, but also to highlight the limitations of e-voting solutions.
The treatment of information released by users on social networks can violate a user's privacy. The goal of our work is to allow users to control the information released while guaranteeing their privacy.
Véronique Cortier and Alexandre Debant, in collaboration with Pierrick Gaudry (Caramba), obtained a bug bounty for a vulnerability detected in the Swiss Post e-voting protocol.
Charlie Jacomme received the GDR Sécurité PhD award 2021.
Belenios is an open-source online voting system that provides vote confidentiality and verifiability. End-to-end verifiability relies on the fact that the ballot box is public (voters can check that their ballots have been received) and on the fact that the tally is publicly verifiable (anyone can recount the votes). Vote confidentiality relies on the encryption of the votes and on the distribution of the decryption key (no single party holds the secret key).
Belenios supports various kinds of elections. In the standard mode, Belenios supports simple elections where voters select one or more candidates. It also supports arbitrary counting functions, at the cost of a slightly more complex tally procedure for the authorities. For example, Belenios supports Condorcet, STV, and Majority Judgement, where voters rank or grade the candidates.
Belenios is available in several languages for the voters as well as the administrators of an election. More languages can be freely added by users.
In 2021, our platform was used for the organization of about 2000 elections, with about 70,000 ballots counted.
This year, we modified the voting platform to make it more user-friendly and responsive: it automatically adapts on a cell phone, for example. We also developed two new interfaces to vote by ranking the candidates (Condorcet) or by rating them (majority judgment). Following several requests, Belenios now offers weighted votes, where each voter has a certain number of votes. Less visible to users, an important change was the update of the cryptographic core, in order to better link a ballot to the context of the election. Finally, we initiated the development of a REST API and modernized the management of administrator accounts.
ProVerif is an automatic security protocol verifier in the symbolic model (the so-called Dolev-Yao model). In this model, cryptographic primitives are considered as black boxes. The verifier is based on an abstract representation of the protocol by Horn clauses. Its main features are:
It can verify various security properties (secrecy, authentication, process equivalences).
It can handle many different cryptographic primitives, specified as rewrite rules or as equations.
It can handle an unbounded number of sessions of the protocol (even in parallel) and an unbounded message space.
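The Horn-clause view of attacker deduction can be illustrated with a toy forward-chaining saturation: facts att(t) mean "the attacker knows t", and rules derive new knowledge until a fixpoint. This is an illustration of the general idea only, not ProVerif's actual resolution algorithm, and the rules below are simplified ground instances.

```python
# Toy illustration of Horn-clause saturation for attacker deduction
# (not ProVerif's algorithm): facts are tuples ('att', term), rules
# are (premises, conclusion) pairs, chained to a fixpoint.

def saturate(facts, rules):
    """Forward-chain Horn rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in facts for p in premises) \
                    and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The attacker observes enc(s, k) and later learns k:
facts = {('att', ('enc', 's', 'k')), ('att', 'k')}
rules = [
    # att(enc(s, k)) & att(k) => att(s): ground instance of decryption
    ([('att', ('enc', 's', 'k')), ('att', 'k')], ('att', 's')),
]
# Saturation derives that the secret s is deducible.
assert ('att', 's') in saturate(facts, rules)
```

Secrecy of s then amounts to the fact att(s) not being derivable from the saturated clause set.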
The Jasmin programming language smoothly combines high-level and low-level constructs, so as to support “assembly in the head” programming. Programmers can control many low-level details that are performance-critical: instruction selection and scheduling, what registers to spill and when, etc. The language also features high-level abstractions (variables, functions, arrays, loops, etc.) to structure the source code and make it more amenable to formal verification. The Jasmin compiler produces predictable assembly and ensures that the use of high-level abstractions incurs no run-time penalty.
The semantics is formally defined to allow rigorous reasoning about program behaviors. The compiler is formally verified for correctness (the proof is machine-checked by the Coq proof assistant). This justifies that many properties can be proved on a source program and still apply to the corresponding assembly program: safety, termination, functional correctness…
Jasmin programs can be automatically checked for safety and termination (using a trusted static analyzer). The Jasmin workbench leverages the EasyCrypt toolset for formal verification. Jasmin programs can be extracted to corresponding EasyCrypt programs to prove functional correctness, cryptographic security, or security against side-channel attacks (constant-time).
The year 2021 brought several improvements to the Jasmin programming language, enabling the implementation of more complex programs: local functions (preserved during compilation), sub-arrays, etc. The release of a new major version is scheduled for early 2022.
Preparatory work on the support of several target architectures has also been carried out.
The correctness theorem of the compiler has been made more precise. It now allows reasoning at the source level about some non-functional properties of the program produced by the compiler. In particular, there is now a formal proof (in Coq) that the compiler always preserves the “constant-time” security property.
Security properties of cryptographic protocols are typically expressed as reachability or equivalence properties. Secrecy and authentication are examples of reachability properties while privacy properties such as untraceability, vote secrecy, or anonymity are generally expressed as behavioral equivalence in a process algebra that models security protocols.
Cortier, Delaune and Sundararajan [9] identify a new decidable class of security protocols, both for reachability and equivalence properties. The result holds for an unbounded number of sessions and for protocols with nonces, and covers all standard cryptographic primitives. The class rests on three main assumptions. (i) Protocols need to be without else branches and “simple”, meaning that an attacker can precisely identify from which participant and which session a message originates. (ii) Protocols should be type-compliant, which is intuitively guaranteed as soon as two encrypted messages of the protocol cannot be confused. (iii) Finally, the dependency graph of the protocol must be acyclic. The dependency graph is a new notion that characterises how actions depend on each other. Revisiting this approach, Cortier, Dallon, and Delaune [17] show that it is possible to significantly bound the number of sessions for a similar class of protocols, for both reachability and equivalence properties. Experiments show that on most basic protocols of the literature, the proposed algorithm computes a small number of sessions (a dozen). As a consequence, tools for a bounded number of sessions, like DeepSec, can then be used to conclude that a protocol is secure for an unbounded number of sessions.
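The acyclicity condition on the dependency graph is a standard cycle check; the sketch below shows the check itself on a made-up example of protocol actions and their dependencies (the graph and the action names are ours, not taken from the paper).

```python
# Checking acyclicity of a dependency graph (condition (iii)) with a
# depth-first cycle search; the graph is an illustrative example.

def is_acyclic(graph):
    """True iff the directed graph (adjacency dict) has no cycle."""
    state = {}                         # node -> 'visiting' | 'done'
    def visit(n):
        if state.get(n) == 'done':
            return True
        if state.get(n) == 'visiting':
            return False               # back edge: cycle found
        state[n] = 'visiting'
        ok = all(visit(m) for m in graph.get(n, []))
        state[n] = 'done'
        return ok
    return all(visit(n) for n in graph)

# send1 -> recv1 -> send2: acyclic, so the decidable class applies ...
assert is_acyclic({'send1': ['recv1'], 'recv1': ['send2'], 'send2': []})
# ... whereas mutually dependent actions fall outside the class.
assert not is_acyclic({'a': ['b'], 'b': ['a']})
```

The substance of the result is of course not the cycle check but the proof that acyclicity (together with conditions (i) and (ii)) entails decidability.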
Cheval (Inria Paris), Crubillé and Kremer study probabilistic process equivalences for security protocols. Symbolic models are classically purely non-deterministic. Indeed, generating random keys and nonces, or using randomized cryptographic primitives (like any secure encryption scheme), is idealized in symbolic models, replacing random numbers that can be guessed with only negligible probability by perfectly fresh values that cannot be guessed at all. This abstraction has been widely used and has shown its usefulness. Another source of randomness may however come from the control flow. Typically, protocols aiming at anonymity, such as the Dining Cryptographers protocol, require users to take one action or another probabilistically. In this work we propose an extension of the applied pi calculus with a probabilistic choice operator and randomized schedulers, as non-randomized schedulers lead to definitions with undesirable properties: we show, for instance, that typical behavioral relations would not be transitive, and we point out a flaw in the main theorem of a previous framework [44] that chose non-randomized schedulers. Mixing non-determinism and probabilistic choices generally leads to unsatisfactory behavioral equivalences: as the non-deterministic choices can leak the probabilistic choices, the resulting equivalence is too strong, modelling unrealistic attacker capabilities. We therefore investigate two sub-classes of protocols. We first consider the class of protocols that do not make any probabilistic choices, but allow the attacker to do so. Even though the honest processes may be purely non-deterministic, the resulting may-testing equivalence is strictly stronger when allowing a probabilistic attacker. We show that for a bounded number of sessions, may-testing with a probabilistic attacker coincides with purely possibilistic similarity. Second, we consider a class of simple processes, with very limited non-determinism. For this class, we show that trace equivalence coincides with may-testing where attackers are sequential processes (no parallel composition, nor non-deterministic choice).
In collaboration with Erbatur (UT Dallas, USA) and Marshall (Univ. Mary Washington, USA), Ringeissen studies decision procedures for equational theories used in protocol analysis. In [19, 29], hierarchical unification procedures have been developed for non-disjoint unions of theories closed by equational paramodulation, such as paramodulation modulo Associativity-Commutativity. Beyond the decision problems related to equational unification and (intruder) theories, Ringeissen also works on SMT (Satisfiability Modulo Theories) solvers to model verification conditions. In collaboration with Sheng, Zohar, Lange, Barrett (Stanford, USA) and Fontaine (Veridis project-team and University of Liège, Belgium), Ringeissen has published a short paper showing that the theory of datatypes is strongly polite, so it can be combined with arbitrary disjoint theories to obtain a satisfiability procedure using polite combination [21]. In collaboration with Sheng, Zohar, Barrett (Stanford, USA), together with Reynolds and Tinelli (U. Iowa, USA), Ringeissen has been involved in a new contribution to the study of polite combination. In [22], the difference between polite and strongly polite theories has been proven, and an optimization of the polite combination method is proposed. This optimization reduces the number of guesses to be considered, in a way similar to the Nelson-Oppen method; preliminary evidence for its benefit is given by demonstrating a speed-up on a smart contract verification benchmark.
Motivated by the addition of global states in ProVerif, Cheval and Cortier have conducted a major revision of the popular ProVerif tool. This revision goes well beyond global states and is conducted in collaboration with Bruno Blanchet, the original and main developer of ProVerif. One of the first main changes is the addition to ProVerif of the notions of “lemmas”, “axioms”, and “restrictions”, which can be used either to encode additional properties (axioms and restrictions) or to help ProVerif prove the desired properties. It is now possible to specify lemmas that significantly reduce the number of clauses considered in the saturation procedure of ProVerif. These lemmas must of course be proved themselves by ProVerif, possibly by induction, thanks to careful handling of the order of literals in the saturation procedure. The new approach provides more flexibility in cases where ProVerif was not able to terminate or yielded false attacks (e.g. in the presence of global states).
Moreover, even when ProVerif is able to prove security, the tool suffers from efficiency issues when applied to complex industrial protocols (up to one month of running time for the analysis of the NoiseExplorer protocol). While revisiting the core procedure of ProVerif, its efficiency has been considerably improved at several steps of the algorithm. For example, clause generation has been made lazier in order to generate fewer clauses. Moreover, techniques from automated deduction have been introduced to speed up checking whether a clause subsumes another one, and the detection and removal of redundant clauses have also been optimized. The experimental results show significant speed-ups on many examples: on average, ProVerif is now 10 to 50 times faster than its previous release, with some examples peaking at a 500 to 1,000 times speedup.
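Redundancy elimination hinges on subsumption: a clause C subsumes D when some substitution maps C's literals into a subset of D's, so D can be discarded. The ground (substitution-free) case sketched below is a deliberate simplification of the idea, not the optimized procedure implemented in ProVerif.

```python
# Illustrative ground subsumption and redundant-clause removal.
# Clauses are lists of literals; the full problem additionally
# searches for a substitution, which is omitted here.

def subsumes_ground(c, d):
    """Ground subsumption: every literal of c occurs in d."""
    return set(c) <= set(d)

def remove_subsumed(clauses):
    """Keep only clauses not subsumed by another (distinct) clause."""
    return [d for d in clauses
            if not any(c is not d and subsumes_ground(c, d)
                       for c in clauses)]

# att(k) subsumes the weaker clause att(k) | att(s), which is dropped.
kept = remove_subsumed([[('att', 'k')], [('att', 'k'), ('att', 's')]])
assert kept == [[('att', 'k')]]
```

Even in this simple form one sees why fast subsumption tests matter: the check is run against every pair of clauses produced during saturation.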
The correctness of the new procedure is proven for the entire syntax and semantics of ProVerif, covering optimizations and features that were never formally defined in previous papers. For instance, correspondence queries are no longer restricted to having only events in their conclusion. The result will be presented at S&P'22 [15].
The TAMARIN prover is a state-of-the-art verification tool for cryptographic protocols in the symbolic model developed jointly by CISPA, ETH Zurich and the PESTO team.
One major strength of TAMARIN is that it offers an interactive mode, allowing users to go beyond what push-button tools can typically handle. TAMARIN is for example able to verify complex protocols such as TLS or the authentication protocols from the 5G standard. However, one of its drawbacks is its lack of automation. For many simple protocols, the user often needs to help TAMARIN by writing specific lemmas, called “sources lemmas”, which requires some knowledge of the internal behaviour of the tool. Cortier, Delaune, and Dreier propose a technique to automatically generate sources lemmas in TAMARIN. They prove formally that the lemmas indeed hold, for arbitrary protocols that use cryptographic primitives that can be modelled with a subterm-convergent equational theory (modulo associativity and commutativity), and they have implemented their approach within TAMARIN. Experiments show that, in most examples from the literature, suitable sources lemmas can now be generated automatically, replacing the handwritten lemmas. As a direct application, many simple protocols can now be analysed fully automatically, whereas they previously required user interaction. These results, presented at ESORICS'20, were selected for submission to a special issue of the JCS journal. The journal version contains a further improvement of the algorithm, designed by Cortier, Delaune, Dreier, and Klein, and is currently under submission.
Cheval (Inria Paris), Jacomme (CISPA), Kremer and Künnemann (CISPA) have integrated into TAMARIN a protocol verification platform dubbed SAPIC, which allows users to compile a common input language, a stateful dialect of the applied pi calculus, to three state-of-the-art verification tools.
Passwords are still the most widespread means of authenticating users, even though they have been shown to create huge security problems. This has motivated the use of additional authentication mechanisms in so-called multi-factor authentication protocols. Kremer and Jacomme (CISPA) define a detailed threat model for this kind of protocol: while in classical protocol analysis attackers control the communication network, we take into account that many communications are performed over TLS channels, that computers may be infected by different kinds of malware, that attackers could perform phishing, and that humans may omit some actions. We formalize this model in the applied pi calculus and perform an extensive analysis and comparison of several widely used protocols: variants of Google 2-step and FIDO's U2F (Yubico's Security Key token). The analysis is completely automated, systematically generating all combinations of threat scenarios for each of the protocols and using the ProVerif tool for automated protocol analysis. To validate our model and attacks, we demonstrate their feasibility in practice, even though our experiments are run in a laboratory environment. Our analysis highlights the weaknesses and strengths of the different protocols, allowing us to suggest several small, easy-to-implement modifications of the existing protocols, as well as an extension of Google 2-step that improves security in several threat scenarios. This work has been published in ACM TOPS [11].
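The "systematically generate all combinations of threat scenarios" step can be pictured as a Cartesian product over independent threat dimensions. The dimension names below loosely follow the threat model described above but are our own illustrative encoding, not the paper's.

```python
# Sketch of systematic threat-scenario generation: each threat
# dimension is toggled independently and every combination would be
# handed to the verifier. Dimension names are illustrative.
from itertools import product

dimensions = {
    'network': ['honest', 'compromised'],
    'malware_on_pc': [False, True],
    'phishing': [False, True],
    'human_error': [False, True],
}

scenarios = [dict(zip(dimensions, combo))
             for combo in product(*dimensions.values())]

# 2 * 2 * 2 * 2 = 16 scenarios to analyse, one ProVerif run each.
assert len(scenarios) == 16
assert {'network': 'honest', 'malware_on_pc': False,
        'phishing': False, 'human_error': False} in scenarios
```

The point of the systematic product is completeness: every protocol is compared under exactly the same grid of attacker capabilities, so strengths and weaknesses can be tabulated side by side.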
ZCash is a specification and an implementation of a decentralized anonymous payment scheme based on, and improving, the ZeroCash protocol. Cheval (Inria Paris), Hirschi, Kremer and Laporte provide a detailed formal model of ZCash, version Sapling, and analyse it using ProVerif. The model includes the complete key derivation infrastructure, a precise model of transactions, and a symbolic model of the notion of treestate. The underlying blockchain infrastructure is currently idealized, as we consider a single, consistent view of the treestate shared by all participants.
A particular effort has been put into the modelling of cryptographic primitives. We propose a model for complex zero-knowledge proofs, different kinds of signatures (relying on an existing model for signatures that includes potential weaknesses), and a novel model of hash functions. We take particular care to reflect assumptions on hash functions, such as collision resistance, second pre-image resistance and one-wayness. While precisely modelling such assumptions is difficult in symbolic models, we go beyond the standard modelling, which considers perfect hash functions (as in the Random Oracle model), and provide a best-effort modelling that is able to provide a number of proofs even in extreme cases where hash functions are considered invertible or admit trivial collisions (when these properties were not assumed).
Our analysis focuses on two essential properties: the balance property and non-malleability of transactions. We discuss our modelling choices regarding these properties, which turned out to be non-trivial to model. We prove that in our model balance holds under the given security assumptions on the underlying cryptographic primitives; similarly, we show that non-malleability is satisfied. However, a mistake in a first model, due to a lack of clarity in the specification, led to an attack: it allows the adversary to stealthily change some encrypted fields and disable the recipient's ability to spend the corresponding notes.
Although ProVerif is routinely used for the analysis of large, real-life cryptographic protocols, this case study pushed the tool to its limits. As a result we added optimizations, both to the ProVerif tool, which are of general interest, and to our models, that speed up verification and reduce memory consumption. We also added a few features improving usability. Thanks to these optimizations, the simplest verification queries terminate in a few seconds, while most take a few hours, and the most complex queries take up to 14 days of computation time while consuming 80 GB of memory on average.
Verifiability is a key requirement for electronic voting. However, the use of cryptographic techniques to achieve it usually requires specialist knowledge to understand; hence voters cannot easily assess the validity of such arguments themselves. To address this, solutions have been proposed using simple tables and checks, which require only simple verification steps with almost no cryptography.
This simplicity comes at a cost: numerous verification checks must be made on the tables to ensure their correctness, raising the question whether the success of all the small verification steps entails the overall goal of end-to-end verifiability while preserving vote secrecy. Do the final results reflect the voters' will? Moreover, do the verification steps leak information about the voters' choices?
In ACM CCS 2021 [13], Basin, Dreier, Giampietro, and Radomirovic provide mathematical foundations and an associated methodology for defining and proving verifiability and voter privacy for table-based election protocols. We apply them to three case studies: the Eperio protocol, Scantegrity, and Chaum's Random-Sample Election protocol. In all three cases, our methodology helps us identify previously unknown problems that allow an election authority to cheat and modify the election outcome. Furthermore, it helps us formulate and verify the corrected versions.
In a paper published in ACM TOCL [8], Barthe (MPI Security and Privacy), Jacomme (CISPA) and Kremer study decidability problems for the equivalence of probabilistic programs, for a core probabilistic programming language over finite fields of fixed characteristic. The programming language supports uniform sampling, addition, multiplication and conditionals, and is thus sufficiently expressive to encode boolean and arithmetic circuits. We consider two variants of equivalence: the first one considers an interpretation over a fixed finite field, while the second, universal equivalence, requires the programs to be equivalent over all extensions of that field.
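For a fixed small field, equivalence of such programs can be decided naively by enumerating all samplings and comparing the induced output distributions. The toy sketch below does exactly that over F_2; the encoding (programs as Python functions of their samples) is ours, not the paper's, and it of course says nothing about the complexity results established there.

```python
# Toy equivalence check for probabilistic programs over F_p: enumerate
# all samplings and compare exact output distributions. Over F_2,
# "sample x; return x" and "sample x; return x + 1" are equivalent
# (both uniform), as is "return x * x" since x*x = x in F_2.
from collections import Counter
from fractions import Fraction

P = 2  # field size (fixed characteristic 2)

def distribution(program, n_samples):
    """Exact output distribution of `program`, a function of its
    samples, each sample uniform over F_p."""
    outcomes = Counter()
    total = P ** n_samples
    def enum(prefix):
        if len(prefix) == n_samples:
            outcomes[program(*prefix)] += 1
        else:
            for v in range(P):
                enum(prefix + (v,))
    enum(())
    return {o: Fraction(c, total) for o, c in outcomes.items()}

p1 = lambda x: x % P            # sample x; return x
p2 = lambda x: (x + 1) % P      # sample x; return x + 1
p3 = lambda x: (x * x) % P      # over F_2, x*x = x

assert distribution(p1, 1) == distribution(p2, 1)
assert distribution(p1, 1) == distribution(p3, 1)
```

Universal equivalence is harder precisely because no such finite enumeration over a single field suffices: the programs must agree over infinitely many interpretations.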
Fuzzing implementations of cryptographic protocols is challenging. In contrast to traditional fuzzing of file formats, cryptographic protocols require a specific flow of cryptographic and mutually dependent messages to reach deep protocol states.
Moreover, state-of-the-art fuzzing techniques are adequate for finding safety vulnerabilities (sometimes with potential security implications) but are unfortunately unable to find logical attacks in protocols, i.e., attacks that exploit flaws in the protocol logic. Indeed, most of these techniques generate random operations, such as bit-flips on network packets, which makes finding logical attacks overwhelmingly unlikely, and they use a code-based notion of coverage, which provides poor feedback as it fails to capture the diversity of executions corresponding to different adversarial behaviours that reach the same code coverage.
In his master's thesis, Max Ammann has designed, implemented, and evaluated a new fuzzing engine tailored to capturing logical attacks in cryptographic protocols, and applied it to the TLS 1.2 and TLS 1.3 protocols. The core idea of this fuzzing engine is to take as input space symbolic traces in a Dolev-Yao-style model, which are executed by target TLS libraries such as OpenSSL by concretizing symbolic messages as bitstrings. Although the TLS specifications have been formally verified multiple times, establishing strong security guarantees, those guarantees do not apply to the actual implementations in use. Because the development of cryptographic protocols is error-prone, multiple security vulnerabilities have already been discovered in TLS implementations that are not present in its specification. The goal of this fuzzing methodology is to explore this blind spot between formal verification and testing.
Inspired by symbolic protocol verification, the thesis presents a reference implementation of a fuzzer named TLSPuffin, which employs a concrete semantics to execute symbolic TLS 1.2 and 1.3 traces. This makes it possible to use a genetic fuzzing algorithm on protocol flows. The novel approach rediscovers known vulnerabilities in TLS that are out of scope for classical bit-level fuzzers.
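The term-level idea can be sketched as follows (all names and the trace shape are hypothetical, not TLSPuffin's actual API): messages are symbolic terms, and mutations swap well-formed subterms between messages instead of flipping bits, so mutated inputs stay meaningful protocol flows:

```python
import random
from dataclasses import dataclass

@dataclass
class Term:
    """A Dolev-Yao-style symbolic term: an operator applied to subterms."""
    op: str
    args: tuple = ()

# A hypothetical symbolic trace for a handshake fragment; the message
# and field names are illustrative only.
trace = [
    Term("ClientHello", (Term("nonce_c"), Term("cipher_suites"))),
    Term("ServerHello", (Term("nonce_s"), Term("chosen_suite"))),
]

def subterms(t):
    yield t
    for a in t.args:
        yield from subterms(a)

def mutate(trace, rng):
    """Replace one argument of one message by a subterm taken from
    another message: the trace stays well-formed at the term level,
    unlike what random bit-flips would produce."""
    new = list(trace)
    i, j = rng.sample(range(len(new)), 2)
    donor = rng.choice(list(subterms(new[j])))
    target = new[i]
    if target.args:
        k = rng.randrange(len(target.args))
        args = list(target.args)
        args[k] = donor
        new[i] = Term(target.op, tuple(args))
    return new

mutated = mutate(trace, random.Random(7))
assert all(isinstance(t, Term) for t in mutated)
```

A genetic fuzzer then selects and recombines such traces based on protocol-level feedback rather than raw code coverage.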
Timing side-channels are arguably one of the main sources of vulnerabilities in cryptographic implementations. One effective mitigation against timing side-channels is to write programs that do not perform secret-dependent branches and memory accesses. This mitigation, known as “cryptographic constant-time”, is adopted by several popular cryptographic libraries. Such mitigation is usually implemented by programmers at the source level.
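The coding pattern behind this mitigation can be sketched as follows. This is an illustrative Python transcription of the usual C/assembly idiom (Python itself gives no timing guarantees): a secret-dependent branch is replaced by branch-free arithmetic selection:

```python
def select_branchy(secret_bit, a, b):
    # NOT constant-time: the executed path depends on the secret,
    # so timing can reveal secret_bit.
    if secret_bit:
        return a
    return b

def select_ct(secret_bit, a, b):
    # Constant-time style: an arithmetic mask selects the result
    # without any secret-dependent branch or memory access.
    mask = -secret_bit & 0xFFFFFFFF      # all-ones or all-zeros
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)

assert select_ct(1, 0xAAAA, 0x5555) == 0xAAAA
assert select_ct(0, 0xAAAA, 0x5555) == 0x5555
```

Real libraries write such code in C or assembly; the point of the work described next is that the compiler must not silently undo this discipline.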
In 12, Laporte, in collaboration with Barthe, Grégoire, and Priya, designs a methodology to formally verify that a compiler preserves the “cryptographic constant-time” (CCT) security property. More generally, they provide means to soundly reason at the source level about many security properties of interest that are captured by instrumented semantics modeling the functional behavior and the leakage of programs. To achieve this goal, they put forward the idea of structured leakage. In contrast to the usual modeling of leakage as a sequence of observations, structured leakage is tightly coupled with the operational semantics of programs. This coupling greatly simplifies the definition of leakage transformers that map the leakage of source programs to the leakage of their compilations, and yields more precise statements about the preservation of security properties.
This methodology has been instantiated on the Jasmin compiler. Along with a target program, the compiler can produce a leakage transformer that precisely describes how to compute the leakage of an execution of the target program from the leakage of the corresponding source-level execution. The correctness theorem of the compiler (along with its machine-checked proof) has been extended to make this statement precise and formal. This implies on one hand that the Jasmin compiler always preserves the CCT security property. On the other hand, as instrumented semantics enable reasoning about run-time execution costs, sound leakage transformers allow us to carry out such reasoning at the source-level in order to obtain precise guarantees about the target-level cost.
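A minimal sketch of an instrumented semantics, assuming a hypothetical mini-language rather than the Jasmin formalization: each run additionally produces a leakage trace (here, branch outcomes), and CCT means that this trace does not depend on secrets:

```python
def run(prog, env):
    """Execute `prog` on store `env`; return the leakage trace."""
    leak = []
    for stmt in prog:
        if stmt[0] == "assign":              # ("assign", var, fun)
            _, var, fun = stmt
            env[var] = fun(env)
        elif stmt[0] == "branch":            # ("branch", cond, then, else)
            _, cond, then_p, else_p = stmt
            taken = bool(cond(env))
            leak.append(("branch", taken))   # branches leak their outcome
            leak.extend(run(then_p if taken else else_p, env))
    return leak

# Branching on a secret: the leakage depends on the secret, so this
# program is not cryptographic constant-time.
branchy = [("branch", lambda e: e["sec"] == 1,
            [("assign", "r", lambda e: e["a"])],
            [("assign", "r", lambda e: e["b"])])]
assert run(branchy, {"sec": 0, "a": 1, "b": 2}) != \
       run(branchy, {"sec": 1, "a": 1, "b": 2})

# The branch-free rewriting leaks nothing for any secret: it is CCT.
ct = [("assign", "r", lambda e: e["b"] + e["sec"] * (e["a"] - e["b"]))]
assert run(ct, {"sec": 0, "a": 1, "b": 2}) == \
       run(ct, {"sec": 1, "a": 1, "b": 2}) == []
```

A leakage transformer, in this picture, is a function mapping the source-level trace to the trace of the compiled program, which is what the extended compiler correctness theorem makes precise.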
In 1968, Liu described the problem of securing documents in a shared secret project: in his example, at least six out of eleven participating scientists need to be present to open the lock securing the secret documents. Shamir proposed a mathematical solution to this physical problem in 1979, by designing an efficient threshold secret sharing scheme.
In this work, accepted for the Journal of Computer Security 10, Dreier, Dumas, Lafourcade, and Robert relax some implicit assumptions in Liu and Shamir's claim and propose an optimal physical solution to Liu's problem that uses physical padlocks, where the number of padlocks does not exceed the number of participants. They then show that no device can do better.
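Shamir's scheme can be sketched in a few lines (an illustrative implementation, not the paper's physical construction): a random polynomial of degree k-1 with the secret as constant term is evaluated at n points, and any k points reconstruct the secret by Lagrange interpolation:

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for the secret

def share(secret, k, n, rng=random):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):                      # Horner evaluation mod P
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, k=6, n=11)   # Liu's 6-out-of-11 scenario
assert reconstruct(shares[:6]) == 123456789    # any 6 shares suffice
assert reconstruct(shares[2:8]) == 123456789
assert reconstruct(shares[:5]) != 123456789    # 5 shares fail (almost surely)
```

The paper's contribution is to match this threshold behaviour with purely physical padlocks, and to prove the padlock count optimal.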
As part of a contract with Idemia, Cortier, Debant, Dreier, Turuani and Yang are designing a novel electronic voting system, tailored to the voting context envisioned by Idemia. The system is made for on-site elections, using smart cards. However, the goal is that trust should not be placed in any single part of the system; hence the smart cards cannot be trusted. One originality of the approach is the possibility to reuse existing techniques, in conjunction with smart cards and paper ballots. The designed protocol is meant to achieve vote secrecy, coercion resistance, and cast-as-intended. Coercion resistance is eased by the fact that voters enter a physical voting booth. Cast-as-intended was more difficult to achieve since Idemia aimed at two strong guarantees: all cast ballots should be audited by voters (this is not left to the voter's choice), and whenever the system attempts to cheat, its misbehavior can be proved to a third party, possibly leading to a sanction against the system. The proposed protocol has been proved secure with the ProVerif tool, using some of its new features, as explained in Section 7.1.2. A challenge was to cover three families of properties (vote secrecy, verifiability, and accountability) under various corruption scenarios, in a unified way.
There are two main approaches for tallying an election in the context of electronic voting. The first one is the homomorphic tally. Thanks to the homomorphic property of the encryption scheme (typically ElGamal), the ballots are combined to compute the (encrypted) sum of the votes. Then only the resulting ciphertext needs to be decrypted to reveal the election result, without leaking the individual votes. However, this approach can only be applied to simple vote counting functions. The second main approach is based on mixnets. The encrypted ballots are shuffled and re-randomized such that the resulting ballots cannot be linked to the original ones. Several mixers are used in succession, and then each (re-randomized) ballot is decrypted, yielding the original votes in clear, in a random order. This approach can be used for any vote counting function, but it reveals much more information than the result itself (the winner(s) of the election) and is subject to so-called Italian attacks.
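The homomorphic tally can be sketched with exponential ElGamal (a toy group, for illustration only and of course never usable in a real election): multiplying ciphertexts componentwise yields an encryption of the sum of the votes, and only that sum is decrypted:

```python
import random

# Toy group: the order-11 subgroup of squares modulo 23.
q, n, g = 23, 11, 4                 # modulus, subgroup order, generator
sk = random.randrange(1, n)         # trustee's secret key
pk = pow(g, sk, q)                  # election public key

def encrypt(vote):
    """Exponential ElGamal: Enc(v) = (g^r, g^v * pk^r)."""
    r = random.randrange(1, n)
    return pow(g, r, q), pow(g, vote, q) * pow(pk, r, q) % q

def tally(ballots):
    """Combine ciphertexts componentwise (the product encrypts the sum
    of the votes), decrypt only the product, and recover the small sum
    by exhaustive search of the exponent."""
    c1 = c2 = 1
    for a, b in ballots:
        c1, c2 = c1 * a % q, c2 * b % q
    gm = c2 * pow(c1, -sk, q) % q          # = g^(sum of votes)
    return next(s for s in range(len(ballots) + 1) if pow(g, s, q) == gm)

votes = [1, 0, 1, 1, 0]
assert tally([encrypt(v) for v in votes]) == sum(votes)
```

Note that individual ciphertexts are never decrypted, which is exactly why this approach leaks nothing beyond the sum, and also why it is limited to counting functions expressible as sums.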
Quentin Yang, co-supervised by Cortier and Gaudry (Caramba project-team), has studied the possibility of computing the election result from a set of encrypted ballots without leaking any other information. This can be seen as an instance of Multi-Party Computation (MPC). Cortier, Gaudry and Yang 27 have unveiled several flaws or limitations of existing works, and they provide a toolbox to implement, at a reasonable cost, several key counting functions from the literature: Majority Judgement, Condorcet, and STV. One of the surprises of this work is that it is often preferable to use the very standard ElGamal encryption instead of Paillier encryption, which is typically considered the Swiss Army knife of MPC.
While detailed security analyses have been conducted for several protocols of the literature (e.g. CHVote or Swiss Post), this was not the case for our own voting protocol, Belenios. We have started an analysis in ProVerif, with the objective of staying as close as possible to the practical usage of Belenios. In particular, our analysis takes into account the fact that Belenios supports multiple elections where trustees use the same key; it also covers the case where voters check their vote during the election only, and not once the voting phase is over. Our analysis unveils previously unknown flaws in some corruption scenarios. We propose fixes and prove them secure.
The SwissPost e-voting system is currently under the scrutiny of the community, before being deployed in 2022 for political elections in several Swiss cantons. We explain 26 how real-world constraints led to shortcomings that allowed a privacy attack to be mounted. More precisely, dishonest authorities can learn the votes of several voters of their choice, without being detected, even when the required threshold of honest authorities act as prescribed. This flaw has been acknowledged by Swiss Post and made public, and the system will be patched to prevent the problem. We also obtained a generous reward from the bug bounty program (40 kEuros).
The results of electronic elections should be verifiable so that any cheating is detected. To support this, many protocols employ an electronic bulletin board (BB) for publishing data that can be read by participants and used for verifiability checks. In our paper 20, we explore the role of BBs in e-voting and show that previous designs and requirements were not sufficient for key security goals to hold. We present practical attacks based on equivocation against some state-of-the-art designs (Civitas, Belenios, and Helios) supporting our thesis that the threat of equivocation was overlooked or underestimated. To fix those protocols and future designs, we propose provably minimal BB requirements and propose a concrete BB protocol achieving them. Our protocol can replace existing BBs, enabling verifiability under much weaker trust assumptions.
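The equivocation threat can be illustrated with a minimal hash-chained board (a sketch of the general idea, not the paper's construction): because the board's head commits to the whole history, two voters who compare the heads they received immediately detect diverging views:

```python
import hashlib

def h(x: bytes) -> str:
    return hashlib.sha256(x).hexdigest()

class Board:
    """Append-only bulletin board whose 'head' commits to the full
    history via a hash chain."""
    def __init__(self):
        self.items, self.head = [], h(b"genesis")
    def append(self, item: bytes):
        self.items.append(item)
        self.head = h(self.head.encode() + item)
        return self.head

# Equivocation: a dishonest board operator shows two voters
# different histories for the same election.
view_a, view_b = Board(), Board()
for x in [b"ballot1", b"ballot2"]:
    view_a.append(x)
for x in [b"ballot1", b"ballot2-modified"]:
    view_b.append(x)

# Comparing heads exposes the divergence; identical histories agree.
assert view_a.head != view_b.head
```

A single hash chain is only the starting point: the paper's contribution is pinning down the minimal requirements (and a concrete protocol) under which such checks actually guarantee a single consistent view.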
Social media such as Facebook provide a new way to connect, interact and learn. Facebook allows users to share photos and express their feelings through comments. However, Facebook users are vulnerable to attribute inference attacks, where an attacker tries to guess private attributes (e.g., gender, age, political views) of target users through their online profiles and/or their vicinity (e.g., what their friends reveal). Given user-generated pictures on Facebook, Alipour, Imine and Rusinowitch show how to launch gender inference attacks on their owners from picture metadata composed of: (i) alt-texts generated by Facebook to describe the content of pictures, and (ii) comments posted by friends, friends of friends or regular users. They assume these two types of metadata are the only information available to the attacker. Evaluation results demonstrate that an adversary can infer the gender with high accuracy by combining alt-texts and comments. Moreover, they can identify sensitive words in the metadata and hide them to drastically decrease the adversary's prediction accuracy. To our knowledge, this is the first inference attack on Facebook that exploits comments and alt-texts alone. Moreover, by computing several vector representations for the same word or emoji, each one specific to an attribute value, the machine learning algorithms can select the best one to boost the model accuracy 18. In our experiments, the attributes of Facebook users can be inferred from commenters' reactions to their publications with an AUC from 94% to 98%, depending on the trait (gender, age, relationship status). Bizhan Alipour's thesis, to be defended in February 2022, will present the employed techniques and the results in more detail. Protection against these attribute inference attacks has been investigated using machine learning explainability and adversarial defense strategies.
More precisely, effective adversarial reactions have been generated to fool sensitive-attribute (blackbox) classifiers 14. Experiments show that the resulting FOX system successfully fools the classifiers (about 99.7% and 93.2% of the time), improving on state-of-the-art baselines with good transferability of the adversarial features.
Privacy-preserving big data management is gaining momentum within the research community, especially with respect to the wide class of emerging big data applications such as social networks, bio-informatics and web recommendation systems. Co-supervised with Prof. Alfredo Cuzzocrea (Excellence Chair in Computer Engineering, Lorraine University), Ala Eddine Laouir's thesis aims at devising new models and methods for effectively supporting privacy-preserving big data management in distributed environments, and at providing significant realizations in reference case studies.
In a joint project with the Resist project-team and the Numeryx company, Abboud, Lahmadi (Resist) and Rusinowitch are working on the design, implementation and evaluation of a double-mask technique for building compressed, verifiable filtering rules in Software Defined Networks. As an alternative solution to the memory limitation of switches, they investigate the distribution of filtering rules among several devices while preserving the network policy semantics. They also apply the rule distribution algorithm to design an efficient update strategy for varying sets of rules and topologies 24.
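The value/mask matching underlying such filtering rules can be sketched as follows (a generic TCAM-style illustration; the double-mask compression of the paper itself is more elaborate):

```python
def matches(addr: int, value: int, mask: int) -> bool:
    """A rule (value, mask) matches addr iff addr agrees with value
    on every bit selected by mask; unselected bits are wildcards."""
    return (addr & mask) == (value & mask)

# Two illustrative 8-bit rules: a contiguous prefix 1010**** and a
# non-contiguous mask *0*0*0*0 (first match wins, as in a switch table).
rules = [
    (0b10100000, 0b11110000, "drop"),
    (0b00000000, 0b01010101, "allow"),
]

def classify(addr: int) -> str:
    for value, mask, action in rules:
        if matches(addr, value, mask):
            return action
    return "default"

assert classify(0b10101111) == "drop"
assert classify(0b00100010) == "allow"
```

Compressing a rule set means covering the same classification function with fewer (value, mask) entries, which is what makes distribution across memory-limited switches worthwhile.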
We have several contracts with industrial partners interested in the design of electronic voting systems:
A CIFRE contract with Numeryx has started with the Resist project-team and Pesto, to develop algorithms for optimizing sets of filtering rules in Software Defined Networks.
COST ACTION CA19122 EUGAIN — European Network For Gender
Balance in Informatics, duration: 4 years, since 2020,
participant and leader of Working Group 3 – From PhD to
Professor: Steve Kremer
Women are underrepresented in Informatics at all levels, from undergraduate and graduate studies to participation and leadership in academia and industry. The main aim and objective of EUGAIN is to improve gender balance in Informatics at all levels through the creation of a European network of colleagues working on the forefront of the efforts for gender balance in Informatics in their countries and research communities.
ANR Chaire IA ASAP Tools for automated, symbolic analysis
of real-world cryptographic protocols, duration:
September 2020 – August 2024, leader: Steve Kremer.
The goal of this project is the development of efficient algorithms and tools for the automated verification of cryptographic protocols, able to comprehensively analyse detailed models of real-world protocols by building on techniques from automated reasoning. Automated reasoning is the subfield of AI whose goal is the design of algorithms that enable computers to reason automatically; these techniques underlie almost all modern verification tools. However, current analysis tools for cryptographic protocols do not scale well, or require (over)simplified models, when applied to real-world, deployed cryptographic protocols. We aim at overcoming these limitations: we design new, dedicated algorithms, integrate these algorithms into verification tools, and use the resulting tools for the security analysis of real-world cryptographic protocols.
ANR TECAP Protocol Analysis — Combining Existing Tools,
duration: January 2018 – June 2022, leader: Vincent Cheval, other
partners: ENS Cachan, Inria Paris, Inria Sophia Antipolis, IRISA,
LIX.
Despite the large number of automated verification tools, several
cryptographic protocols (e.g. stateful protocols) still represent a
real challenge for these tools and reveal their limitations. To cope
with these limits, each tool focuses on different classes of
protocols depending on the primitives, the security properties,
etc. Moreover, the tools cannot interact with each other as they
evolve in their own model with specific assumptions. The aim of this
project is to get the best of all these tools, that is, to improve
the theory and implementation of each individual tool towards the
strengths of the others and to build bridges that allow the
cooperation of the methods/tools. We will focus in this project on
CryptoVerif, EasyCrypt, Scary, ProVerif, TAMARIN, Akiss and
APTE. In order to validate the results, we will apply them to
several case studies such as the Authentication and Key Agreement
protocol from the telecommunication networks, the Scytl and Helios
voting protocols, and the low entropy 3D-Secure authentication
protocol. These protocols have been chosen to cover many challenges
that the current tools are facing.
ANR SEVERITAS Secure and Verifiable Test and Assessment System,
duration: May 2021 – April 2025, local coordinator: Jannik Dreier, other
partners: LIG/University Grenoble Alpes (coordinator France), SnT/University of Luxembourg (coordinator Luxembourg), LIMOS/Université Clermont Auvergne.
SEVERITAS advances the socio-technical security of Electronic Test and Assessment Systems (e-TAS). These systems measure skills and performance in education and training. They improve management, reduce time-to-assessment, and reach larger audiences, but they do not always provide security by design. This project recognizes that the security aspects of e-TAS are still mostly unexplored. We fill these gaps by studying current and yet-to-be-defined security properties. We develop automated tools to advance the formal verification of security and show how to validate e-TAS security rigorously. We develop new secure, transparent, verifiable and lawful e-TAS procedures and protocols. We also deploy novel run-time monitoring strategies to reduce fraud, and we study user experience to foster usable e-TAS security. Thanks to connections with players in the e-TAS business, such as OASYS, this project will contribute to the development of secure e-TAS.
Licence:
J. Dreier, Introduction to Logic, Fall 2021, 20 hours (ETD), TELECOM Nancy
V. Laporte, Introduction to Theoretical Computer Science (Logic, Languages, Automata), Spring 2021, 42 hours (ETD), TELECOM Nancy
V. Laporte, Introduction to Logic, Fall 2021, 20 hours (ETD), TELECOM Nancy
Master:
J. Dreier, Protocol Security and Verification, 39 hours (ETD), M2 Computer Science, TELECOM Nancy
J. Dreier, Advanced Cryptography, 37 hours, M2 Computer Science, TELECOM Nancy
A. Imine, Security for XML Documents, 12 hours (ETD), M1, Univ Lorraine
S. Kremer, Security Theory, 24 hours (ETD), M2 Computer science, Univ Lorraine
C. Ringeissen, Decision Procedures for Software Verification, 8 hours (ETD), M2 Computer science, Univ Lorraine
L. Vigneron, Introduction to cryptography, 17 hours (ETD), Polytech Nancy – Information Systems and Networks, Univ Lorraine
L. Vigneron, Advanced Security, 30 hours (ETD), Polytech Nancy – Information Systems and Networks, Univ Lorraine
L. Vigneron, Security of information systems, 28 hours (ETD), M2 MIAGE – Audit and Design of Information Systems, Univ Lorraine
Summer School:
J. Dreier and L. Hirschi. Symbolic verification of cryptographic protocols using Tamarin. “Cyber In Saclay” French Cybersecurity Doctoral School on formal methods for security, virtual, February 2021.
V. Laporte. Machine-Checked Cryptography with EasyCrypt and Jasmin, Summer School of the SAC 2021 conference (Selected Areas in Cryptography), virtual, October 2021.
PhD defended in 2021:
Ahmad Abboud, Efficient Rules Management Algorithms In Software Defined Networking 24, December 9th 2021, Univ. Lorraine (Abdelkader Lahmadi and Michaël Rusinowitch)
Itsaka Rakotonirina, Symbolic verification of cryptographic protocols: theory and practice 25, February 1st 2021, Univ. Lorraine (S. Kremer and V. Cheval).
PhD in progress:
Elise Klein, Automatic Synthesis of Cryptographic Protocols, started in October 2021. (J. Dreier and S. Kremer)
Bizhan Alipour Pijani, Attribute Inference Attacks on Social Network Publications started in October 2018 (A. Imine and M. Rusinowitch)
Maïwenn Racouchot, Automated Learning of Proof Strategies in Tamarin, started in October 2021. (J. Dreier and S. Kremer)
Quentin Yang, Design of a cast-as-intended, verifiable, and coercion-resistant electronic voting protocol, started in November 2020. (V. Cortier and P. Gaudry)