Section: New Results

Formal Methods for Developing and Analyzing Algorithms and Systems

Participants : Étienne André, Marie Duflot-Kremer, Yann Duplouy, Margaux Duroeulx, Igor Konnov, Dominique Méry, Stephan Merz, Nicolas Schnepf, Christoph Weidenbach.

Synthesis of Security Chains for Software Defined Networks

Joint work with Rémi Badonnel and Abdelkader Lahmadi (Inria Nancy – Grand Est, Resist).

The PhD thesis of Nicolas Schnepf focuses on applying formal methods to network communications, in particular to the construction, verification, and optimization of chains of security functions in the setting of software-defined networks (SDN). The main objective is to prevent applications from disrupting the functioning of the network or its services, for example by launching denial-of-service attacks, port scans, or similar activities.

We designed techniques for formally verifying security chains using SMT solving and symbolic model checking. Furthermore, we developed and prototyped an approach for (i) learning a Markov chain that characterizes the network behavior of an Android application based on its observed communications, (ii) inferring appropriate security functions from the structure of that Markov chain and thresholds set by the network operator, using techniques of logic programming, (iii) combining security functions for individual applications into larger security chains, and (iv) optimizing the deployment of security chains for a given SDN infrastructure using techniques of (linear or non-linear) optimization or optimizing SMT solvers. Two papers were presented at IM 2019 [39], [38], the PhD thesis [12] was defended in September 2019, and a journal paper is in preparation.
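
As an illustration of step (i), a first-order Markov chain can be estimated from an observed event trace by counting transitions. The following Python sketch is a minimal illustration, not the actual implementation; the event labels and the operator threshold are hypothetical.

```python
from collections import defaultdict

def learn_markov_chain(trace):
    """Estimate the transition probabilities of a first-order Markov
    chain from an observed sequence of abstract network events
    (hypothetical labels such as 'dns', 'http', 'tls')."""
    counts = defaultdict(lambda: defaultdict(int))
    for src, dst in zip(trace, trace[1:]):
        counts[src][dst] += 1
    chain = {}
    for src, succs in counts.items():
        total = sum(succs.values())
        chain[src] = {dst: c / total for dst, c in succs.items()}
    return chain

# Toy trace of an application's observed communications.
trace = ["dns", "http", "http", "dns", "http", "tls"]
chain = learn_markov_chain(trace)

# A security function could then be inferred when, e.g., the probability
# of some transition exceeds a threshold set by the network operator.
suspicious = {dst: p for dst, p in chain["dns"].items() if p > 0.9}
```

The structure of the learned chain (its states and high-probability transitions) is what steps (ii)–(iv) would then exploit.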

Satisfiability Techniques for Reliability Assessment

Joint work with Nicolae Brînzei at the Centre de Recherche en Automatique de Nancy.

In the context of the PhD thesis of Margaux Durœulx, funded by the Lorraine University of Excellence program, we explore the applicability of satisfiability techniques for assessing the reliability of complex systems. In particular, we consider component-based systems modeled using fault trees that can be seen as a visual representation of the structure function indicating which combinations of component failures lead to system failures. We rely on SAT solvers to compute minimal tie sets, i.e., minimal sets of components whose functioning ensures that the overall system works. These tie sets are instrumental for a probabilistic reliability assessment. In 2019, we have extended this idea to dynamic fault trees where the order of component failures needs to be taken into account in order to determine the failure status of the overall system [34].
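
To illustrate the notion, the minimal tie sets of a small fault tree can be enumerated by brute force. The actual approach encodes the structure function for a SAT solver; the Python sketch below, with a hypothetical three-component system, only conveys the underlying idea.

```python
from itertools import combinations

def minimal_tie_sets(components, works):
    """Enumerate minimal tie sets: minimal sets of components whose
    joint functioning makes the overall system work. (The thesis uses
    SAT solving; this brute-force version only handles toy systems.)"""
    tie_sets = []
    for k in range(1, len(components) + 1):
        for subset in combinations(components, k):
            s = set(subset)
            # Keep s only if it works and no smaller tie set is contained in it.
            if works(s) and not any(t <= s for t in tie_sets):
                tie_sets.append(s)
    return tie_sets

# Toy fault tree: the system works if A works and at least one of B, C works.
works = lambda up: "A" in up and ("B" in up or "C" in up)
mts = minimal_tie_sets(["A", "B", "C"], works)
# two minimal tie sets: {A, B} and {A, C}
```

From these tie sets, standard inclusion-exclusion arguments yield the probabilistic reliability assessment mentioned above.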

Statistical Model Checking of Distributed Programs

Yann Duplouy joined the HAC SPECIS project (cf. section 9.2) in December 2018 as a post-doctoral researcher, with the objective of designing and implementing a statistical model checker within the SimGrid framework. So far, he has added to SimGrid the ability to use stochastic profiles, introducing probabilities into the model of the network. He also developed a prototype tool that can be interfaced with the SimGrid simulators to perform statistical model checking on the actual programs simulated using the SimGrid framework. He is now validating this prototype on concrete case studies, including the BitTorrent protocol with probabilistic node failures.
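
In its simplest form, statistical model checking estimates the probability that a property holds by sampling random executions. The Python sketch below is a toy illustration of this principle (the failure probability and the property are hypothetical), not the SimGrid-based prototype.

```python
import random

def smc_estimate(simulate, prop, runs=10000, seed=42):
    """Statistical model checking in its simplest form: estimate the
    probability of a property by Monte Carlo simulation. `simulate`
    produces one random execution, `prop` checks it."""
    rng = random.Random(seed)
    hits = sum(prop(simulate(rng)) for _ in range(runs))
    return hits / runs

# Toy stochastic profile: each of 5 peers fails independently with
# probability 0.1 (loosely inspired by the BitTorrent case study with
# probabilistic node failures).
def simulate(rng):
    return [rng.random() >= 0.1 for _ in range(5)]

# Property: at least 3 peers stay alive in this execution.
p = smc_estimate(simulate, lambda peers: sum(peers) >= 3)
```

Confidence in the estimate grows with the number of runs; a real tool would also report a confidence interval.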

Parameterized Verification of Threshold-Guarded Fault-Tolerant Distributed Algorithms

Joint work with Nathalie Bertrand (Inria Rennes Bretagne – Atlantique, SUMO), Marijana Lazić (TU Munich) and Ilina Stoilkovska, Josef Widder, Florian Zuleger (TU Wien).

Many fault-tolerant distributed algorithms use threshold guards: processes broadcast messages and count the number of messages that they receive from their peers. Based on the total number n of processes and an upper bound on the number t of faulty processes, a correct process tolerates faults by receiving “sufficiently many” messages. For instance, when a correct process has received t+1 messages from distinct processes, at least one of these messages must originate from a non-faulty process. The main challenge is to verify such algorithms for all combinations of parameters n and t that satisfy a resilience condition, e.g., n>3t.
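
The arithmetic behind such a guard can be checked exhaustively for small parameters. The Python sketch below merely illustrates why receiving messages from t+1 distinct senders guarantees that at least one of them is correct; it is not part of the verification tool chain.

```python
def guard_is_sound(t):
    """Among any t + 1 distinct senders, at most t can be faulty, so at
    least one message originates from a correct (non-faulty) process."""
    senders = t + 1
    worst_case_faulty = t   # all faulty processes may have sent a message
    return senders - worst_case_faulty >= 1

# Check the guard for small values of t. The total number n of processes
# does not enter the computation; the resilience condition (e.g., n > 3t)
# only bounds how many distinct messages can be received at all.
assert all(guard_is_sound(t) for t in range(10))
```

The parameterized verification problem is precisely to establish such properties for all admissible n and t at once, rather than for finitely many instances.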

In earlier work, we introduced threshold automata for representing processes in such algorithms and showed that systems of threshold automata have bounded diameters that do not depend on the parameters such as n and t, provided that a single-step acceleration is allowed [62], [63], [64].

Our previous results apply to asynchronous algorithms. It is well known that distributed consensus cannot be solved in purely asynchronous systems [61]. However, when an algorithm is provided with a random coin, consensus becomes solvable (e.g., Ben-Or's algorithm, 1983). In [29], we introduced an approach to parameterized verification of randomized threshold-guarded distributed algorithms, which proceed in an unbounded number of rounds and toss a coin to break symmetries. This approach integrates two levels of reasoning: (1) proving safety and liveness of a single-round system with ByMC, replacing randomization with non-determinism, and (2) showing almost-sure termination of the algorithm by using the verification results for the non-deterministic system. To show soundness, we proved several theorems that reduce reasoning about multiple rounds to reasoning about a single round. We verified five prominent algorithms, including Ben-Or's randomized consensus and randomized one-step consensus (RS-BOSCO [71]). The verification of the latter algorithm required us to run experiments on Grid'5000. This paper was presented at CONCUR 2019.
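
A highly simplified, failure-free round of a Ben-Or-style algorithm can illustrate why the random coin breaks symmetry. The Python sketch below is a toy model, not one of the verified algorithms; its majority threshold and round structure are chosen for illustration only.

```python
import random

def ben_or_round(values, rng):
    """One drastically simplified round of a Ben-Or-style binary
    consensus (no faults modeled): if a value has a strong majority,
    every process adopts it; otherwise each process tosses a coin."""
    n = len(values)
    for v in (0, 1):
        if values.count(v) > 2 * n / 3:
            return [v] * n
    return [rng.randint(0, 1) for _ in values]

# Even from a perfectly split initial configuration, repeated coin
# tosses eventually produce a strong majority, so agreement is reached
# almost surely (here we cap the number of rounds defensively).
rng = random.Random(1)
values, rounds = [0, 1, 0, 1, 0, 1], 0
while len(set(values)) > 1 and rounds < 1000:
    values, rounds = ben_or_round(values, rng), rounds + 1
```

Replacing the coin toss by a non-deterministic choice, as in level (1) of the approach, preserves safety but loses the probabilistic termination argument, which is why level (2) is needed.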

Another way of making consensus solvable is to impose synchrony on the executions of a distributed system. In [40] we introduced synchronous threshold automata, which execute in lock-step and count the number of processes in given local states. We showed that, in general, even reachability of a parameterized set of global states in such a distributed system is undecidable. However, we proved that systems of automata with monotonic guards have bounded diameters, which allows us to use SMT-based bounded model checking as a complete parameterized verification technique. We introduced a procedure for computing the diameter of a counter system of synchronous threshold automata, applied it to the counter systems of 8 distributed algorithms from the literature, and found that their diameters are tiny (between 1 and 4). This makes our approach practically feasible, despite the undecidability of the general problem. The paper was presented at TACAS 2019 and was invited to the conference's special issue of the International Journal on Software Tools for Technology Transfer, to appear in 2020.
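
The role of the diameter can be illustrated on a toy counter system: a breadth-first search from the initial configuration yields the longest shortest path, and bounded model checking up to that depth is then complete. The two-state "automaton" and its guard below are hypothetical and chosen for illustration; they are not among the 8 benchmarks.

```python
from collections import deque

def diameter(initial, successors):
    """Diameter of the reachable part of a counter system: the longest
    shortest path from the initial configuration, computed by BFS."""
    dist = {initial: 0}
    queue = deque([initial])
    while queue:
        c = queue.popleft()
        for s in successors(c):
            if s not in dist:
                dist[s] = dist[c] + 1
                queue.append(s)
    return max(dist.values())

# Toy lock-step system for n = 4 processes with two local states,
# represented by the counter vector (n0, n1). In one synchronous round,
# k processes move from state 0 to state 1; as a hypothetical guard,
# more than one process may move only once state 1 is inhabited.
def successors(cfg):
    n0, n1 = cfg
    for k in range(n0 + 1):
        if k <= 1 or n1 >= 1:
            yield (n0 - k, n1 + k)

d = diameter((4, 0), successors)
# every reachable configuration is reached within d rounds
```

For this toy system the diameter is 2, independently of how the remaining processes are distributed, mirroring the parameter-independent bounds established in the paper.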

Symbolic Model Checking of TLA+ Specifications

Joint work with Jure Kukovec, Thanh Hai Tran, Josef Widder (TU Wien).

TLA+ is a general language introduced by Leslie Lamport for specifying the temporal behavior of computer systems [66]. The tool set for TLA+ includes the explicit-state model checker TLC. As explicit-state model checkers do not scale to large verification problems, we started the APALACHE project (WWTF grant ICT15-103, https://forsyte.at/research/apalache/) on developing a symbolic model checker for TLA+ in 2016.

Following our results of 2018 [65], we have extended the symbolic model checker for TLA+. In [22], we defined the translation from TLA+ to SMT as a series of rewriting rules and proved the soundness of this translation. Our experiments show that APALACHE runs faster than TLC when proving inductive invariants. APALACHE also implements bounded model checking, which still needs to be improved in order to become competitive with TLC. The paper [22] was presented at ACM OOPSLA 2019.
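
As a simplified illustration of rule-based translation to SMT, a bounded existential quantifier over an integer interval can be rewritten into a finite disjunction in SMT-LIB syntax. The Python rule below is a toy sketch, far cruder than the translation defined in [22]; in particular, naive string replacement stands in for proper capture-avoiding substitution.

```python
def rewrite_exists(var, lo, hi, body):
    """One toy rewriting rule in the spirit of a TLA+-to-SMT
    translation: a bounded existential over an integer interval
    becomes a finite disjunction, emitted in SMT-LIB syntax."""
    cases = [body.replace(var, str(i)) for i in range(lo, hi + 1)]
    return "(or {})".format(" ".join(cases))

# \E x \in 1..3 : x + x = 4   ~~>   a 3-way SMT-LIB disjunction
smt = rewrite_exists("x", 1, 3, "(= (+ x x) 4)")
# → "(or (= (+ 1 1) 4) (= (+ 2 2) 4) (= (+ 3 3) 4))"
```

A sound translation must handle such rules compositionally over the whole TLA+ expression language, which is what the soundness proof in [22] establishes.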

Incremental Development of Systems and Algorithms

Joint work with Rosemary Monahan (NUI Maynooth, Ireland) and Mohammed Mosbah (LaBRI, Bordeaux).

The development of distributed algorithms and, more generally, of distributed systems is a complex, delicate, and challenging process. The refinement-based approach applies a design methodology that starts from the most abstract model and leads, in an incremental way, to a distributed solution. The use of a proof assistant gives a formal guarantee of the conformance of each refinement with the model it refines. Our main result in 2019 is the development of a distributed pattern [26] for handling dynamic network topologies.