Section: New Results

Timing side-channel attacks

We have pursued our studies on the foundations of language-based security along two axes of timing side-channel research:

Speculative constant time

The most robust way to deal with timing side channels in software is via constant-time programming, the paradigm used to implement almost all modern cryptography. Constant-time programs can neither branch on secrets nor access memory based on secret data. These restrictions ensure that programs do not leak secret information via timing side channels, at least on hardware without microarchitectural features. However, microarchitectural features are a major source of timing side channels, as the growing list of attacks (Spectre, Meltdown, etc.) shows. Moreover, code deemed constant-time in the usual sense may in fact leak information on processors with microarchitectural features. Thus the decade-old constant-time recipes are no longer enough.

We lay the foundations for constant-time in the presence of the microarchitectural features that have been exploited in recent attacks: out-of-order and speculative execution. We focus on constant-time for two key reasons. First, impact: constant-time programming is largely used in narrow, high-assurance code (mostly cryptographic implementations) where developers already go to great lengths to eliminate leaks via side channels. Second, foundations: constant-time programming is already rooted in foundations, with well-defined semantics. These semantics consider very powerful attackers who have control over the cache and the scheduler. A nice effect of considering powerful attackers is that the semantics can already overlook many hardware details (e.g., since the cache is adversarially controlled there is no point in modeling it precisely), making constant-time amenable to automated verification and enforcement.
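As a minimal illustration of the constant-time discipline (a sketch; the function names and the 32-bit mask idiom are ours, not from this work), compare a selection that branches on a secret bit with one that turns the secret into data instead of control flow:

```javascript
// NOT constant-time: branches on the secret, so branch prediction and
// instruction timing can reveal secretBit.
function selectLeaky(secretBit, a, b) {
  if (secretBit === 1) return a;
  return b;
}

// Constant-time style: -secretBit coerced to a 32-bit integer is an
// all-ones mask when secretBit is 1 and all-zeros when it is 0, so the
// result is computed with no secret-dependent branch or memory access.
function selectCT(secretBit, a, b) {
  const mask = -secretBit | 0;       // 0xFFFFFFFF or 0x00000000
  return (a & mask) | (b & ~mask);
}
```

Both functions return the same value; only the second keeps the secret out of the control flow and address stream.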

We have first defined a semantics for an abstract, three-stage (fetch, execute, and retire) machine. This machine supports out-of-order and speculative execution by modeling reorder buffers and transient instructions, respectively. Our semantics assumes that attackers have complete control over microarchitectural features (e.g., the branch target predictor), and uses adversarial execution directives to model the adversary's control over predictors. We have then defined speculative constant-time, the counterpart of constant-time for machines with out-of-order and speculative execution. This definition has allowed us to discover microarchitectural side channels in a principled way: all four classes of Spectre attacks as classified by Canella et al., for example, manifest as violations of our constant-time property. Our semantics even revealed a new Spectre variant that exploits the aliasing predictor. The variant can be disabled by unsetting a flag, but it illustrates the usefulness of our semantics. This study is described in a paper currently under submission.
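To make the gap between the two notions concrete, here is a hypothetical Spectre-v1-style gadget (array names and sizes are illustrative, and this is a sketch, not code from the paper). Under a sequential semantics the bounds check makes the code constant-time; under speculation the check may be mispredicted, the out-of-bounds load executes transiently, and the secret-dependent second load leaves a cache footprint:

```javascript
const publicArray = new Uint8Array(16);       // in-bounds, public data
const probeArray = new Uint8Array(256 * 64);  // attacker-observable via cache timing

function gadget(i) {
  if (i < publicArray.length) {          // the predictor may guess "in bounds"
    const b = publicArray[i];            // transient out-of-bounds read under speculation
    return probeArray[b * 64];           // secret-dependent address: cache side channel
  }
  return 0;
}
```

Architecturally, out-of-bounds calls return 0 and nothing leaks; speculatively, the transient loads violate speculative constant-time.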

Remote timing attacks

A common approach to dealing with timing attacks is to prevent secrets from affecting the execution time, thus achieving security with respect to a strong, local attacker who can measure the timing of program runs. Another approach is to allow branching on secrets but prohibit any subsequent attacker-visible side effects of the program. It is sometimes used to handle internal timing leaks, i.e., when the timing behavior of threads affects the interleaving of attacker-visible events via the scheduler.

While these approaches are compatible with strong attackers, they are highly restrictive for program runs as soon as those runs branch on a secret. It is commonly accepted that “adhering to constant-time programming is hard” and “doing so requires the use of low-level programming languages or compiler knowledge, and forces developers to deviate from conventional programming practices”.

This restrictiveness stems from the fact that there are many ways to set up timing leaks in a program. For example, after branching on a secret the program might take different amounts of time in the branches because of: (i) more time-consuming operations in one of the branches; (ii) cache effects, when data or instructions are cached in one of the branches but not in the other; (iii) garbage collection (GC), when GC is triggered in one of the branches but not in the other; and (iv) just-in-time (JIT) compilation, when a JIT-compiled function is called in one of the branches but not in the other. Researchers have been painstakingly addressing these types of leaks, often by creating mechanisms specific to some of them. Because of the intricacies of each type, addressing their combination poses a major challenge that these approaches have largely yet to address.
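Leak type (i) can be sketched in a few lines (the names `checkPin` and `slowWork` are hypothetical, and the loop merely stands in for any expensive operation such as logging or hashing):

```javascript
function slowWork() {
  let x = 0;
  for (let i = 0; i < 1e6; i++) x += i;  // stand-in for an expensive operation
  return x;
}

function checkPin(secretPin, guess) {
  if (guess === secretPin) {  // branch on the secret
    slowWork();               // slow branch: correct guesses take longer
    return true;
  }
  return false;               // fast branch
}
```

An attacker timing repeated calls to `checkPin` learns more than the boolean result alone: the response time itself distinguishes the branches.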

This motivates a general mechanism to tackle timing leaks independently of their type. However, rather than combining enforcement mechanisms for the different types of timing leaks against strong local attackers, is there a setting where attacker capabilities are not as strong, enabling us to design a general and less restrictive mechanism against a variety of timing attacks with respect to this weaker attacker?

We focus on timing leaks under remote execution. A key difference is that the remote attacker does not generally have a reference point of when a program run has started or finished, which significantly restricts attacker capabilities.

We illustrate remote timing attacks in two settings: a server-side setting of IoT apps, where apps that manipulate private information run on a server, and a client-side setting, where e-voting code runs in a browser.

IFTTT (If This Then That), Zapier, and Microsoft Flow are popular IoT platforms driven by end-user programming. App makers publish their apps on these platforms. Upon installation, apps manipulate sensitive information, connecting cyber-physical “things” (e.g., smart homes, cars, and fitness armbands) to online services (e.g., Google and Dropbox) and social networks (e.g., Facebook and Twitter). An important security goal is to prevent a malicious app from leaking private information of a user to the attacker.

Recent research identifies ways to leak private information by IoT apps and suggests tracking information flows in IoT apps to control these leaks. The suggested mechanisms perform data-flow (explicit) and control-flow (implicit) tracking. Unfortunately, they do not address timing leaks, implying that a malicious app maker can still exfiltrate private information, even if the app is subject to the security restrictions imposed by the proposed mechanisms.
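To see why explicit- and implicit-flow tracking alone misses such leaks, consider a hypothetical malicious app (a sketch of ours with a simulated clock, not an actual IFTTT app): the secret never flows into the contents of any attacker-visible message, yet the timestamp of the message encodes the secret bit.

```javascript
function runApp(secretBit) {
  let clock = 0;                         // simulated milliseconds
  const sleep = (ms) => { clock += ms; };
  const log = [];                        // attacker-visible events with timestamps

  if (secretBit === 1) sleep(1000);      // secret-dependent delay, no data written
  log.push({ msg: "ping", at: clock });  // identical message contents either way

  return log;
}
```

Data-flow tracking sees no secret in `"ping"`, and implicit-flow tracking sees no attacker-visible assignment under the secret branch; only the event's timestamp differs between runs.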

In addition, Verificatum, an advanced client-side cryptographic library for e-voting, motivates the question of remote timing leaks with respect to attackers who can observe the presence of encrypted messages on the network.

This leads us to the following general research questions:

  1. What is the right model for remote timing attacks?

  2. How do we rule out remote timing leaks without rejecting useful secure programs?

  3. How do we generalize enforcement to multiple security levels?

  4. How do we harden existing information flow tools to track remote timing leaks?

  5. Are there case studies to give evidence for the feasibility of the approach?

To help answer these questions, we propose an extensional knowledge-based security characterization that captures the essence of remote timing attacks. In contrast to the local attacker, who counts execution steps/time from the beginning of the execution, our model only allows the remote attacker to observe inputs and outputs on attacker-visible channels, along with their timestamps. At the same time, the attacker is in charge of the potentially malicious code, with capabilities to access the clock, in line with assumptions about remote execution on IoT app platforms and e-voting clients.

A timing leak is typically enabled by branching on a secret and taking different time or exhibiting different cache behavior in the branches. However, as discussed earlier, it is desirable to avoid restrictive options like forcing the execution to take constant time, prohibiting attacker-visible output any time after the branching, or prohibiting branching on a secret in the first place.

Our key observation is that for a remote attacker to successfully set up and exploit a timing leak, program behavior must follow the following pattern: (i) branching on a secret takes place in a program run, and either (ii-a) the branching is followed by more than one attacker-visible I/O event, or (ii-b) the branching is followed by one attacker-visible I/O event, and prior to the branching there is either an attacker-visible I/O event or a read of the clock.
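The pattern can be sketched as a check over an abstract trace of run events (a simplification of ours, not Clockwork's actual implementation; the event names "branch", "io", and "clock" are illustrative):

```javascript
// trace: array of events in program order.
//   "branch" = branching on a secret, "io" = attacker-visible I/O,
//   "clock"  = a read of the clock.
function hasTimingLeakPattern(trace) {
  const b = trace.indexOf("branch");
  if (b === -1) return false;                        // (i) no secret branch: safe
  const ioAfter = trace.slice(b + 1).filter((e) => e === "io").length;
  if (ioAfter > 1) return true;                      // (ii-a)
  if (ioAfter === 1) {
    const before = trace.slice(0, b);
    return before.includes("io") || before.includes("clock"); // (ii-b)
  }
  return false;                                      // no visible I/O after branch
}
```

For instance, a run with a single attacker-visible output after a secret branch, and nothing visible before it, does not match the pattern and can be accepted.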

Based on this pattern, we design Clockwork, a monitor that rules out timing leaks. Our mechanism pushes for permissiveness. For example, runs (free of explicit and implicit flows) that do not access the clock and have only one attacker-visible I/O event are accepted. Runs that do not perform attacker-visible I/O after branching on a secret are accepted as well. As we will see, these kinds of runs are frequently encountered in secure IoT and e-voting apps.

We implement our monitor for JavaScript, leveraging JSFlow, a state-of-the-art information flow tracker for JavaScript. We demonstrate the feasibility of the approach on a case study with IFTTT, showing how to prevent malicious app makers from exfiltrating users' private information via timing, and a case study with Verificatum, showing how to track remote timing attacks with respect to network attackers. Our case studies demonstrate both security and permissiveness: while apps with timing leaks are rejected, benign apps that use the clock and I/O operations in a non-trivial fashion are accepted.