The VeriDis project team includes members of the Formal Methods department at LORIA, the
computer science laboratory in Nancy, and members of the research group
Automation of Logic at the Max-Planck-Institut für Informatik in
Saarbrücken. It is headed by Stephan Merz and Christoph Weidenbach. VeriDis
was created in 2010 as a local research group of Inria Nancy – Grand Est and
has been an Inria project team since July 2012.

The objectives of VeriDis are to contribute to advances in verification techniques, including automated and interactive theorem proving, and to make them available for the development and analysis of concurrent and distributed algorithms and systems, based on mathematically precise and practically applicable development methods. The techniques that we develop are intended to assist designers of algorithms and systems in carrying out formally verified developments, where proofs of relevant properties, as well as bugs, can be found with a high degree of automation.

Within this context, we work on techniques for automated theorem
proving for expressive languages based on first-order logic, with support
for theories (including fragments of arithmetic or of set theory) that are relevant for
specifying algorithms and systems. Ideally, systems and their properties would
be specified using high-level, expressive languages, errors in specifications
would be discovered automatically, and finally, full verification could also
be performed completely automatically. Due to the fundamental undecidability
of the problem, this cannot be achieved in general. Nevertheless, we have
observed important advances in automated deduction in recent years, to which
we have contributed. These advances suggest that a substantially higher degree
of automation can be achieved over what is available in today's tools
supporting deductive verification. Our techniques are developed within SMT
(satisfiability modulo theories) solving and first-order logic reasoning based
on superposition, the two
main frameworks of contemporary automated reasoning that have complementary
strengths and weaknesses, and we are interested in making them converge when
appropriate. Techniques developed within the symbolic computation domain, such
as algorithms for quantifier elimination for appropriate theories, are also
relevant, and we are working on integrating them into our portfolio of
techniques. In order to handle expressive input languages, we are working on
techniques that encompass tractable fragments of higher-order logic, for
example for specifying inductive or co-inductive data types, for automating
proofs by induction, or for handling collections defined through a
characteristic predicate.
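As a concrete illustration of the kind of reasoning involved, the following sketch uses the Python API of the Z3 SMT solver (chosen here purely for illustration; it is not one of the team's own tools) to combine linear integer arithmetic with congruence reasoning over an uninterpreted function.

```python
# A minimal illustration of SMT solving over combined theories, using the
# Z3 Python API (pip install z3-solver). Illustrative only; the solvers
# developed by VeriDis (e.g., veriT, Spass) have their own interfaces.
from z3 import Ints, Function, IntSort, Solver

x, y = Ints('x y')
f = Function('f', IntSort(), IntSort())   # an uninterpreted function

s = Solver()
# Linear arithmetic combined with equality reasoning over f:
s.add(x + 2 * y == 10, f(x) != f(10 - 2 * y))

# x + 2*y == 10 forces x == 10 - 2*y, so f(x) != f(10 - 2*y) contradicts
# congruence; the combination of decision procedures reports unsat.
print(s.check())   # prints: unsat
```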

Since full automatic verification remains elusive, another line of our
research targets interactive proof platforms. We intend these platforms
to benefit from our work on automated deduction by incorporating powerful
automated backends and thus raise the degree of automation beyond what current
proof assistants can offer. Since most conjectures stated by users are
initially wrong (due to type errors, omitted hypotheses or overlooked border
cases), it is also important that proof assistants be able to detect and
explain such errors rather than letting users waste considerable time in
futile proof attempts. Moreover, increased automation must not come at the
expense of trustworthiness: skeptical proof assistants expect to be given an
explanation of the proof found by the backend prover that they can certify.

Model checking is also an established and highly successful technique
for verifying systems and for finding errors. Our contributions in this area
specifically target quantitative systems, in particular timed or
probabilistic ones. A distinctive focus of VeriDis is on partially specified
systems that contain parameters; the verification problem then becomes the
synthesis of suitable parameter valuations.

Our methodological and foundational research is accompanied by the development
of efficient software tools, several of which go beyond pure research
prototypes: they have been used by others, have been integrated in
verification platforms developed by other groups, and participate in
international competitions. We also validate our work on verification
techniques by applying them to the formal development of algorithms and
systems. We mainly target high-level descriptions of concurrent and
distributed algorithms and systems. Such algorithms are by now
ubiquitous, ranging from multi- and many-core algorithms to large networks and
cloud computing, and their formal verification is notoriously difficult.
Targeting high levels of abstraction allows the designs of such systems to be
verified before an actual implementation has been developed, contributing to
reducing the costs of formal verification. The potential of distributed
systems for increased resilience to component failures makes them attractive
in many contexts, but also makes formal verification even more important and
challenging. Our work in this area aims at identifying classes of algorithms
and systems for which we can provide guidelines and identify patterns of
formal development that make verification less an art and more an engineering
discipline. We mainly target components of operating systems, distributed and
cloud services, and networks of computers or mobile devices.

Beyond formal system verification, we pursue applications of some of the symbolic techniques that we develop in other domains. We have observed encouraging success in using techniques of symbolic computation for the qualitative analysis of biological and chemical networks described by systems of ordinary differential equations that were previously accessible only to large-scale simulation. Such networks include biological reaction networks as they occur in models of diseases such as diabetes or cancer. They furthermore include epidemic models such as variants and generalizations of SEIR models, which are typically used for Influenza A or Covid-19. This work is being pursued within a large-scale interdisciplinary collaboration. Our aim is for this work, grounded in verification, to have an impact on the sciences beyond engineering, which will in turn feed back into our core formal-methods community.
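For readers unfamiliar with this class of models, the following minimal sketch integrates a textbook SEIR system with SciPy; the rate constants are hypothetical and serve only to show the shape of the ODE systems whose qualitative analysis we target symbolically.

```python
# A minimal SEIR epidemic model, numerically integrated with SciPy. The
# parameter values are hypothetical; the point is merely to show the kind
# of ODE system under discussion.
from scipy.integrate import solve_ivp

beta, sigma, gamma = 0.5, 1 / 5.0, 1 / 10.0   # transmission, incubation, recovery rates

def seir(t, y):
    S, E, I, R = y
    N = S + E + I + R
    return [-beta * S * I / N,              # dS/dt
            beta * S * I / N - sigma * E,   # dE/dt
            sigma * E - gamma * I,          # dI/dt
            gamma * I]                      # dR/dt

sol = solve_ivp(seir, (0, 160), [0.99, 0.01, 0.0, 0.0], max_step=1.0)
print(f"peak infected fraction: {sol.y[2].max():.3f}")
```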

The VeriDis team gathers experts in techniques and tools for automatic deduction and interactive theorem proving, and specialists in methods and formalisms designed for the development of trustworthy concurrent and distributed systems and algorithms. Our common objective is twofold: first, we wish to advance the state of the art in automated and interactive theorem proving, and their combinations. Second, we work on making the resulting technology available for the computer-aided verification of distributed systems and protocols. In particular, our techniques and tools are intended to support sound methods for the development of trustworthy distributed systems that scale to algorithms relevant for practical applications.

VeriDis members from Saarbrücken are developing the Spass
workbench 10.
It currently consists of one of the leading automated theorem provers for
first-order logic based on the superposition calculus
56, a theory solver for linear arithmetic
2, a CDCL-based satisfiability solver, and a converter of propositional formulas to clausal normal form.
Recently, we have extended it with a Datalog
hammer that solves universal and existential queries with respect to a Horn
Bernays-Schoenfinkel theory modulo linear
arithmetic 61, 60.
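To illustrate what a clausal normal form looks like, here is a minimal sketch using Z3's tseitin-cnf tactic as a stand-in; Spass has its own, independent converter, and this example is purely illustrative.

```python
# Clausification in the style of a Tseitin transformation, illustrated with
# Z3's 'tseitin-cnf' tactic (Spass has its own converter; this only shows
# the shape of the resulting clause set).
from z3 import Bools, Or, And, Not, Goal, Tactic

p, q, r = Bools('p q r')
g = Goal()
g.add(Or(And(p, q), Not(r)))      # (p and q) or not r

cnf = Tactic('tseitin-cnf')(g)    # apply the tactic; result holds subgoals
for clause in cnf[0]:             # print each clause of the first subgoal
    print(clause)
```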

In a complementary approach to automated deduction, VeriDis members from Nancy work on techniques for integrating reasoners for specific theories. They develop veriT 1, an SMT (satisfiability modulo theories) solver that combines decision procedures for different fragments of first-order logic. The veriT solver is designed to produce detailed proofs; this makes it particularly suitable as a component of a robust cooperation of deduction tools.
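veriT emits fine-grained proofs in a dedicated format (Alethe) that external checkers can verify. As a much coarser illustration of machine-checkable justifications, the following sketch extracts an unsatisfiable core with Z3, i.e., a subset of the assertions that already suffices for the contradiction.

```python
# Extracting an unsatisfiable core with Z3: a coarse stand-in for the
# detailed proofs that veriT produces, shown here only to convey the idea
# of a justification that other tools can inspect.
from z3 import Ints, Solver

x, y = Ints('x y')
s = Solver()
s.set(unsat_core=True)
s.assert_and_track(x > 0, 'a1')   # each assertion gets a label
s.assert_and_track(y > x, 'a2')
s.assert_and_track(y < 0, 'a3')   # clashes with a1 and a2

print(s.check())        # unsat
print(s.unsat_core())   # the labels of a contradictory subset, e.g. [a1, a2, a3]
```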

Finally, VeriDis members design effective quantifier elimination methods and decision procedures for algebraic theories, supported by their efficient implementation in the Redlog system 5.

An important objective of this line of work is the integration of theories in automated deduction. Typical theories of interest, including fragments of arithmetic, are difficult or impossible to express in first-order logic. We therefore explore efficient, modular techniques for integrating semantic and syntactic reasoning methods, and develop novel combination results and techniques for quantifier instantiation. These problems are addressed from both sides, i.e., by embedding decision procedures into the superposition framework or by allowing an SMT solver to accept axiomatizations for plug-in theories. We also develop specific decision procedures for theories such as non-linear real arithmetic that are important when reasoning about certain classes of (e.g., real-time) systems but that also have interesting applications beyond verification.
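The following sketch illustrates axiom-based theory integration and trigger-guided quantifier instantiation with Z3's Python API (an illustrative stand-in, not one of our solvers): a quantified axiom is annotated with a pattern, and a ground occurrence of f triggers the instance that closes the proof.

```python
# Quantifier instantiation guided by triggers: the pattern annotation tells
# the solver to instantiate the axiom whenever a ground term f(t) appears.
from z3 import Int, Function, IntSort, ForAll, Solver

f = Function('f', IntSort(), IntSort())
x = Int('x')
a = Int('a')

axiom = ForAll([x], f(x) >= x, patterns=[f(x)])   # plug-in axiom with trigger f(x)

s = Solver()
s.add(axiom, f(a) < a)   # the ground term f(a) triggers the instance f(a) >= a
print(s.check())         # unsat
```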

We rely on interactive theorem provers for reasoning about specifications at a
high level of abstraction when fully automatic verification is not (yet)
feasible. An interactive proof platform should help verification engineers lay
out the proof structure at a sufficiently high level of abstraction; powerful
automatic plug-ins should then discharge the resulting proof steps. Members of
VeriDis have ample experience in the specification and subsequent
machine-assisted, interactive verification of algorithms. In particular, we
participate in a project at the joint Microsoft Research-Inria Centre on the
development of methods and tools for the formal proof of specifications
written in the TLA+ language 65. Our prover
relies on a declarative proof language, and calls upon several automatic
backends 4. Trust in the correctness of the
overall proof can be ensured when the backends provide justifications that can
be checked by the trusted kernel of a proof assistant. During the development
of a proof, most obligations that are passed to the prover actually fail,
for example because necessary information is not present in the context or
because the invariant is too weak. We are therefore interested in explaining
failed proof attempts to the user, in particular through the construction of
counter-models.
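The following sketch illustrates, with Z3, how a counter-model can explain a failed proof attempt: the (hypothetical) conjecture below omits a necessary hypothesis (z >= 0), and the model found for its negation exhibits the overlooked border case.

```python
# Explaining a failed proof attempt by a counter-model: the conjecture
# "x <= y implies x*z <= y*z" is wrong because it omits z >= 0; checking
# the satisfiability of its negation yields a concrete witness.
from z3 import Ints, Implies, Not, Solver, sat

x, y, z = Ints('x y z')
conjecture = Implies(x <= y, x * z <= y * z)

s = Solver()
s.add(Not(conjecture))    # look for a counter-example
if s.check() == sat:
    print(s.model())      # e.g. [x = 0, y = 1, z = -1]
```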

Members of VeriDis formalize a framework in the proof assistant Isabelle/HOL for representing the correctness and completeness of automated theorem provers. This work encompasses proof calculi such as ordered resolution or superposition, as well as concrete prover architectures such as Otter or DISCOUNT loops. It also covers the most recent splitting techniques that bring proof calculi closer to SMT solvers.

Theorem provers are not used in isolation, but they support the
application of sound methodologies for modeling and verifying systems. In this
respect, members of VeriDis have gained expertise and recognition in making
contributions to formal methods for concurrent and distributed algorithms and
systems 3, 8, and in applying them to
concrete use cases. In particular, the concept of refinement 53, 57, 69 in state-based
modeling formalisms is central to our approach because it allows us to present
a rational (re)construction of system development. An important goal in
designing such methods is to establish precise proof obligations, many of which
can be discharged by automatic tools. This requires taking into account
specific characteristics of certain classes of systems and tailoring the model
to concrete computational models. Our research in this area is supported by
carrying out case studies for academic and industrial developments. This
activity benefits from and influences the development of our proof tools.

In this line of work, we investigate specific development and verification patterns for particular classes of algorithms, in order to reduce the work associated with their verification. We are also interested in applications of formal methods and their associated tools to the development of systems that are subject to specific certification requirements, e.g., in the sense of the Common Criteria. Finally, we are interested in the adaptation of model checking techniques for verifying actual distributed programs, rather than high-level models.

Today, the formal verification of a new algorithm is typically the subject of a PhD thesis, if it is addressed at all. This situation is not sustainable given the move towards more and more parallelism in mainstream systems: algorithm developers and system designers must be able to productively use verification tools for validating their algorithms and implementations. On a high level, the goal of VeriDis is to make formal verification standard practice for the development of distributed algorithms and systems, just as symbolic model checking has become commonplace in the development of embedded systems and as security analysis for cryptographic protocols is becoming standard practice today. Although the fundamental problems in distributed programming are well-known, they pose new challenges in the context of modern system paradigms, including ad-hoc and overlay networks or peer-to-peer systems, and they must be integrated for concrete applications.

The paradigm of model checking is based on automatically verifying properties
over a formal model of a system, using mathematical foundations.
Model checking, while useful and highly successful in practice, can encounter
the infamous state space explosion problem.
One direction of VeriDis therefore addresses the efficiency of model checking,
by proposing new algorithms or heuristics to speed up analysis.
We notably focus on the quantitative setting (time, probabilities), and more
specifically on the parametric paradigm where some quantitative constants are
unknown, and the goal becomes to synthesize suitable valuations.
A recent application pursued by the VeriDis team is opacity (in the broader field of cybersecurity), addressed using model checking.
The team considers a novel definition of opacity in timed automata, where an attacker has access only to the execution time; several recent works address this direction.

The unprecedented accumulation of information in biology and medicine during the last 20 years has led to a situation where any new progress in these fields depends on the capacity to model and make sense of large amounts of data. Until recently, foundational research was concerned with simple models of 2 to 5 ordinary differential equations. The analysis of even such simple models was sufficiently involved that it resulted in one or several scientific publications for a single model. Much larger models are built today to represent cell processes, or to explain and predict the origin and evolution of complex diseases and the differences between patients in precision and personalized medicine. For instance, the biomodels.net model repository 66 contains thousands of hand-built models of up to several hundred variables. Numerical analysis of large models requires an exhaustive scan of the parameter space or the identification of the numerical parameters from data. Both are infeasible for large biological systems, because parameters are largely unknown and because of the curse of dimensionality: even rich data rapidly become sparse as the dimensionality of the problem increases. On these grounds, VeriDis researchers aim at formal symbolic analysis instead of numerical simulation.

As an illustration of the approach, consider BIOMD0000000716 in the
above-mentioned BioModels database, which models the transmission dynamics of
subtype H5N6 of the avian Influenza A virus in the Philippines in August 2017
67. This model describes four species (susceptible/infected
bird or human) together with their dynamics. Using purely symbolic algorithms,
we obtain a decomposition of the dynamics into three subsystems evolving on
different time scales.

This multiple time-scale reduction emphasizes a cascade of successive
relaxations of model variables; the corresponding surface is shown in
Figure 1(a).

The computation time is less than a second. The computation is based on
massive SMT solving over various theories, including QF_LRA for
tropicalizations, QF_NRA for testing Hurwitz conditions on
eigenvalues, and QF_LIA for finding sufficient differentiability
conditions for hyperbolic attractivity of critical manifolds. Gröbner
reduction techniques are used for final algebraic simplification
64. Observe that numerical simulation would not be able
to provide such a global analysis of the overall system, even in the absence
of symbolic parameters, as is the case in our rather simple example.
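The flavor of such Hurwitz tests in QF_NRA can be conveyed by a small sketch with Z3's Python API; the 2x2 Jacobian below is a hypothetical stand-in, and the Routh-Hurwitz criterion reduces attractivity to sign conditions on its trace and determinant.

```python
# Testing a Hurwitz condition in QF_NRA with Z3. For a 2x2 matrix, all
# eigenvalues have negative real part iff trace < 0 and determinant > 0
# (Routh-Hurwitz). The Jacobian J = [[-1, k], [1, -2]] with parameter k is
# a hypothetical stand-in for the Jacobian of a reduced subsystem.
from z3 import Real, Solver, sat

# trace(J) = -3 < 0 holds trivially, so only the determinant matters here.
k = Real('k')
det = (-1) * (-2) - k * 1     # det(J) = 2 - k

s = Solver()
s.add(det > 0)                # Hurwitz condition on the parameter
if s.check() == sat:
    print(s.model())          # e.g. [k = 0]: the steady state is attractive
```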

Distributed algorithms and protocols are found at all levels of computing infrastructure, from many-core processors and systems on chip to wide-area networks. We are particularly interested in the verification of algorithms that are developed for supporting novel computing paradigms, including ad-hoc networks that underlie mobile and low-power computing or overlay networks, peer-to-peer networks that provide services for telecommunication, or cloud computing services. Computing infrastructure must be highly available and is ideally invisible to the end user; correctness is therefore crucial. One should note that standard problems of distributed computing such as consensus, group membership or leader election have to be reformulated for the dynamic context of these modern systems. We are not ourselves experts in the design of distributed algorithms, but we work together with domain experts on designing formal models of these protocols, and on verifying their properties. These collaborations help us focus on concrete algorithms and ensure that our work is relevant to the distributed algorithms community.

Our work on symbolic procedures for solving polynomial constraints finds applications beyond verification. In particular, we have been working in interdisciplinary projects with researchers from mathematics, computer science, systems biology, and system medicine on the analysis of reaction networks and epidemic models in order to infer principal qualitative properties. Our techniques complement numerical analysis techniques and are validated against collections of models from computational biology.

The team uses extensions of timed automata (such as parametric timed automata 54) as an underlying formalism to solve practical questions. Our work on parametric timed automata is partly motivated by applications in cybersecurity, notably within the ANR-NRF ProMiS project. Foundational decidability results and novel notions of non-interference and opacity for this class of automata allow us, for example, to determine the maximal frequency of attacker actions for the attack to succeed (i.e., so that these actions remain invisible to the external observer). Several software artefacts were implemented by the team in this domain 55.

Pascal Fontaine obtained an Amazon Research Award, and Stephan Merz received a research award from Oracle Corporation.

Jasmin Blanchette, Qi Qiu and Sophie Tourret received the best paper award at
CADE-29, the 29th International Conference on Automated Deduction, for their
paper about verified given clause procedures.

An article on the use of interactive proof assistants for formalizing mathematical developments was published in Communications of the ACM 12.

Redlog is an integral part of the interactive computer algebra system Reduce. It supplements Reduce's comprehensive collection of powerful symbolic computation methods by supplying more than 100 functions on first-order formulas.

Redlog generally works with interpreted first-order logic in contrast to free first-order logic. Each first-order formula in Redlog must exclusively contain atoms from one particular Redlog-supported theory, which corresponds to a choice of admissible functions and relations with fixed semantics. Redlog-supported theories include Nonlinear Real Arithmetic (Real Closed Fields), Presburger Arithmetic, Parametric QSAT (quantified satisfiability solving), and many more.
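To convey what quantifier elimination does, here is a minimal sketch using Z3's qe tactic from Python; Redlog itself runs inside Reduce and covers far richer theories than this toy linear example.

```python
# Quantifier elimination over the reals, illustrated with Z3's 'qe' tactic.
# This Python sketch only shows what eliminating a quantifier means; Redlog
# provides much more general elimination procedures.
from z3 import Reals, Exists, And, Goal, Tactic

a, b, x = Reals('a b x')
g = Goal()
g.add(Exists([x], And(a <= x, x <= b)))   # "the interval [a, b] is nonempty"

# The tactic returns an equivalent quantifier-free condition, e.g. a <= b.
print(Tactic('qe')(g))
```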

As in recent years, efforts in 2023 focused on higher-order logic and better proof production. We also initiated a profound code-refactoring phase, to better accommodate the role of the solver as a platform for testing new ideas.

We target applications where the validation of formulas is crucial, such as proofs about specifications written in the B or TLA+ languages, and we work together with the developers of the respective verification platforms to make veriT even more useful in practice. The solver is available as a plugin for the Rodin platform, and it is integrated within Atelier B.

SMT solvers have over the years been extended to cope with increasingly expressive logics, from ground formulas to full first-order logic (FOL). In the past, we proposed a pragmatic extension for SMT solvers to support higher-order logic reasoning natively without compromising performance on FOL reasoning, thus leveraging the extensive research and implementation efforts dedicated to efficient SMT solving. However, the higher-order SMT solvers resulting from this work are not as effective as one would expect given their performance on first-order problems. We believe this is because only the core of the SMT solver was extended, ignoring in particular the modules for quantifier instantiation.

This motivated us to start working, in 2020, on an extension of the main quantifier-instantiation approach (congruence closure with free variables, CCFV) to higher-order logic. We are working on an encoding of the higher-order CCFV problem into a set of SAT constraints. In previous years, we concentrated our efforts on the theory, proving the soundness and completeness of our approach, and developed pseudo-code for all elements of the CCFV computation. In 2022 and 2023, these algorithms were implemented in a C++ library and tested on benchmarks from the SMT-LIB collection. We started to integrate this library within a new SMT framework. The library will eventually be released under a permissive open-source license.

We are part of a group developing a framework for formal refutational completeness proofs of abstract provers that implement automated reasoning calculi, especially calculi based on saturation such as ordered resolution and superposition. In previous work, we published a framework that fully captures the dynamic aspects of proof search with a saturation calculus. This framework covers clause splitting as supported by modern superposition provers with the help of a SAT solver. In particular, our formalization revealed some completeness issues with the theorem prover Vampire.

This year, we extended the Isabelle formalization with representations of the main loops of saturation-based theorem provers and their fairness conditions. In the process, we found and repaired several issues in the (in fact, our own) description of the Zipperposition loop, a novel loop that handles inferences producing an infinite stream of conclusions. In parallel, Martin Desharnais, for his PhD thesis, completed an instantiation of this framework for the superposition calculus. We also made progress on the Isabelle/HOL mechanization of the framework with clause splitting; Ghilain Bergeron contributed to this endeavor for his master's thesis. This last piece of work is still ongoing, while the first was completed in 2022 and led to a paper presented at CADE in 2023 23.

When automatic reasoning techniques are applied to prove a specific property of some system, a proof is a certificate of success, and we have worked on explaining the gist of such proofs 63. If a proof attempt fails, automatic reasoning techniques may still terminate and implicitly provide a representation of a counter-model of the property of interest. We have worked on effective representations of such counter-models that provide insight into why the desired property does not hold. In this way, the system, its formalization, or the property itself can be debugged 47, 27.

Cyclic induction is a powerful reasoning technique that consists in blocking
the proof development of subgoals already encountered during the proof
process. In the setting of first-order logic with inductive definitions and
equality, cyclic proofs can be built by automated provers such as the
E-Cyclist prover, but their implementations are error-prone, and human
validation of the generated proofs may be tedious. On the other hand, cyclic
induction is not yet integrated into certifying proof environments that
support first-order logic and inductive definitions, such as Isabelle and Coq.

We have proposed in 20 a general procedure for
certifying formula-based Noetherian induction reasoning in order to check
E-Cyclist proof steps without using proof reconstruction
techniques. This approach also allows errors in cyclic proofs to be found in a
very precise way, at the level of individual proof steps. We thus established
a bridge between formula-based Noetherian induction and the cyclic induction
implemented in the E-Cyclist prover.

In her PhD work, Rosalie Defourné defined improved encodings of the
non-temporal theory of TLA+ in the input languages of automated theorem
provers for first-order and higher-order logic, including SMT solvers and
Zipperposition. The SMT encoding relies on an axiomatization for the operators
of TLA+ set theory, annotated by triggers for finding relevant instances
of these axioms. The previously existing encoding heavily relied on rewriting
input formulas in order to simplify them before submitting them to the backend
solver. Verifying its soundness required not only inspecting the axioms given
to the solver, but also understanding the preprocessing techniques that were
applied to the input formulas. Besides, optimizing the set of rewrite rules
was delicate because properties such as confluence and termination had to be
maintained. In contrast, the soundness of the new encoding can be determined
directly by verifying the background axioms. Adding triggers does not endanger
soundness, and benchmarking the new backend over an extensive corpus of
existing TLA+ proofs showed that it performs comparably to or better than
the old encoding. This work was presented in a conference publication
30 and in Rosalie Defourné's PhD thesis 45,
which was defended in November.

F3FLUID is a unified formal framework for the development of safety-critical interactive systems. It is based on the Formal Language of User Interface Design (FLUID), defined in the FORMEDICIS ANR project (2017–2022) for expressing high-level system requirements for interactive systems. FLUID is specifically designed for handling concepts of safety-critical interactive systems, including domain knowledge. A FLUID model serves as the source model for generating several target models in different modeling languages that support formal verification methods such as theorem proving and model checking.

The Event-B modeling language is used for checking functional behaviors, user interactions, safety properties, and domain properties. A FLUID model is transformed into an Event-B model, and the Rodin tool is then used to check internal consistency with respect to the given safety properties. In addition, an interactive cooperative objects (ICO) model is derived from the Event-B model for animation, visualization, and validation of dynamic behaviors, visual properties, and task analysis, using the ProB model checker. An industrial case study complying with the ARINC 661 standard, Multi-Purpose Interactive Applications (MPIA), illustrates the effectiveness of the F3FLUID framework for the development of safety-critical interactive systems. Moreover, we show in 18 how formal ontologies can be used to model domain-specific knowledge, and how system models may refer to these ontologies through annotations. This work relies on the state-based refinement-and-proof method of Event-B, and its associated theories, to define a framework in which domain-specific knowledge ontologies are formalized as Event-B theories defining data types that are used to type Event-B system design models. The framework is then deployed for the specific case of interactive critical systems, illustrated by a case study of the Traffic Collision Avoidance System (TCAS).

The increasing popularity of wireless networks in the industrial, embedded, medical, and public sectors has made them an appealing attack surface for attackers who exploit vulnerabilities in network protocols to launch attacks such as Evil Twin, man-in-the-middle, or sniffing, which may result in economic and non-economic losses. To protect wireless networks against such attacks, the IEEE 802.11 working group keeps updating the protocol standards with new, more secure versions. There has always been a direct correlation between attacks and the improvement of protocol standards. As the sophistication of attacks increases, protocol standards tend to move towards higher security, resulting in a significant increase in both latency and computational overhead and a severe degradation of the performance of low-latency applications such as the Industrial Internet of Things (IIoT), automotive, or robotics. The paper 16 highlights the importance of both latency and security in wireless networks from the implementation and performance perspectives. We review existing IEEE 802.11 protocols in terms of the security they offer and the overhead they incur, substantiating the need for a protocol that, in addition to providing strong security against attacks, also keeps latency and overhead low. We also propose a secure and low-latency protocol called Secure Authentication Protocol (SAP), which operates in two phases: the first phase is a one-time process implemented using asymmetric cryptography, and the second phase is implemented using symmetric cryptography. The protocol is structured so that it maintains the original structure of IEEE 802.11 protocols and performs both phases using fewer messages than existing protocols. By simulating the protocol in the well-established OMNeT++ simulator, we demonstrated that it incurs a low computation overhead, making it suitable for low-latency applications. We extensively verified the security properties of the proposed protocol using formal verification with the widely accepted Scyther tool. Finally, we performed a comparative analysis of SAP with existing IEEE 802.11 wireless network protocols to highlight the improvement.
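The following sketch (in Python, using the pyca/cryptography package) illustrates only the generic two-phase pattern that SAP follows, not the protocol itself: an initial asymmetric exchange establishes a session key, after which all traffic uses cheap symmetric cryptography.

```python
# A generic sketch (not SAP itself) of the asymmetric-then-symmetric
# pattern: phase 1 wraps a fresh session key with RSA, phase 2 encrypts
# per-message traffic symmetrically. Uses the pyca/cryptography package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Phase 1 (one-time, asymmetric): the client wraps a fresh symmetric key
# with the access point's RSA public key.
ap_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = ap_private.public_key().encrypt(session_key, oaep)

# Phase 2 (per-message, symmetric): both sides now share the session key.
ap_side = Fernet(ap_private.decrypt(wrapped, oaep))
token = Fernet(session_key).encrypt(b"association request")
print(ap_side.decrypt(token))   # b'association request'
```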

Whenever continuous dynamics and discrete control interact, hybrid systems arise. As hybrid systems become ubiquitous and ever more complex, analysis and synthesis techniques are in high demand for designing safe hybrid systems. This is challenging due to the nature of hybrid systems and their designs, and the question arises of how to formulate and reason about their safety problems. Previous work demonstrated how to extend the discrete modeling language Event-B with support for continuous domains in order to integrate traditional refinement in hybrid system design. We have extended our strategy 50 for coherently refining an abstract hybrid system design, subject to safety constraints, down to a concrete one, integrated with implementable discrete control, that behaves safely. We demonstrate our proposal on a smart heating system that regulates room temperature between two reference values. The main innovation lies in combining the Event-B modeling language (events) and the B modeling language (operations); the transformation of events into operations is proved correct using a proof assistant. We also explore the frequency-domain modeling approach of control theory using the Event-B modeling language. This modeling style is unusual in this context, and we provide a Python artefact assisting with the use of the approach.

Any given programming language may come with several semantics definitions, such as big-step, small-step, or even abstract machines, which can be seen as implementations of a language. They all describe identical behaviors of programs, but each may be better adapted for some purpose: for instance, small-step semantics are better suited to proving subject reduction.

To have access to all kinds of semantics at once, we develop transformations between semantics to be able to generate one from the other at no extra cost for the language designer. In a previous work 59, we developed a semantics format called zipper semantics, a small-step semantics from which we can automatically derive sound and complete abstract machines for non-deterministic languages such as process calculi. We build on this work in two ways: first, we propose a new format of zipper semantics, called leaf-first, in which we can express lazy scope extrusion or join patterns. Next, we make the non-deterministic abstract machines more efficient by reusing some design principles of abstract machines for the lambda-calculus, like environments and refocusing.
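The zipper-semantics work targets non-deterministic process calculi, but the flavor of an environment-based abstract machine is easiest to see on the lambda calculus. The following sketch is a classic CEK machine, not one of our derived machines: terms are tagged tuples, values are closures, and the continuation records the evaluation context.

```python
# A classic CEK abstract machine for the call-by-value lambda calculus.
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg).
def run(term):
    state = (term, {}, ('halt',))                 # (control, environment, kont)
    while True:
        c, e, k = state
        if c[0] == 'var':                         # variable: enter its closure
            val, venv = e[c[1]]
            state = (val, venv, k)
        elif c[0] == 'app':                       # application: evaluate function first
            state = (c[1], e, ('arg', c[2], e, k))
        elif k[0] == 'arg':                       # c is a lambda: evaluate the argument
            state = (k[1], k[2], ('fun', (c, e), k[3]))
        elif k[0] == 'fun':                       # apply the waiting closure to c
            lam, lenv = k[1]
            state = (lam[2], {**lenv, lam[1]: (c, e)}, k[2])
        else:                                     # k == ('halt',): c is the final value
            return c

identity = ('lam', 'y', ('var', 'y'))
print(run(('app', ('lam', 'x', ('var', 'x')), identity)))   # prints the identity term
```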

In joint work with Aman Goel (Amazon) and Karem Sakallah (University of
Michigan), we investigated the use of symbolic model checking techniques for
automatically computing inductive invariants for parameterized distributed
algorithms. Specifically, the IC3PO model checker applies the well-known IC3
model checking algorithm to finite instances of the algorithm and, in case of
success, retrieves inductive invariants for those instances. It then inspects
these invariants for symmetries with respect to space (processes) and time
(e.g., ballot numbers used during the algorithm) and expresses those
symmetries by introducing quantifiers. The resulting formulas are then checked
for larger input sizes until no new invariants are inferred. We applied the
technique to two versions of the well-known Bakery algorithm, which ensures
mutual exclusion among an arbitrary number of processes.
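The core check that such tools automate can be stated in a few lines: an invariant is inductive if it holds initially and is preserved by every transition. The sketch below verifies both conditions with Z3 for a toy counter system, deliberately much simpler than the Bakery algorithm.

```python
# Checking that a candidate invariant is inductive. Toy system: a counter
# starting at 0 and increasing by 2; candidate invariant: even and nonnegative.
from z3 import Ints, And, Not, Implies, Solver, unsat

x, xp = Ints('x xp')              # current-state and next-state variable
init = x == 0                     # initial condition
trans = xp == x + 2               # transition relation

def inv(v):                       # candidate invariant
    return And(v >= 0, v % 2 == 0)

def valid(f):                     # f is valid iff its negation is unsatisfiable
    s = Solver()
    s.add(Not(f))
    return s.check() == unsat

print(valid(Implies(init, inv(x))))                   # initiation: True
print(valid(Implies(And(inv(x), trans), inv(xp))))    # consecution: True
```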

We designed an extension of the PlusCal algorithmic language for
modeling distributed algorithms. Rather
than introducing many new features that could break the design
objectives of PlusCal being a lightweight front-end to writing
TLA+ specifications, we added only a few orthogonal
concepts inspired from those found in distributed programming
languages while both remaining compatible with the existing language
and keeping simple the generation of human-readable TLA+
specifications. Compared to the original PlusCal language,
Distributed PlusCal allows processes to consist of multiple threads
that communicate via process-local variables, and it introduces
communication channels that can be declared as preserving FIFO order
or not. We illustrated Distributed PlusCal using two
well-known algorithms and our preliminary findings indicate that the
extensions provided help us express distributed algorithms in a
natural way. Moreover, the verification overhead, compared to a
specification written directly in TLA+, does not differ from
that of ordinary PlusCal. This work was presented in a conference paper
29.

Programming distributed systems is error-prone, and while formal
methods such as TLA+ can help design such systems, the
translation from the verified specification towards the final
implementation is rarely certified. Nevertheless, we believe that we
can still gain confidence in the fact that an implementation behaves
according to a high-level specification by collecting traces of
executions and checking that they are allowed by the
specification. We have developed a framework for the instrumentation
of Java programs allowing one to record events and variable updates
that are meaningful at the level of the TLA+ specification, and
we use the TLA+ model checker to verify that the recorded trace of
an execution indeed corresponds to a possible execution of the
high-level specification 4.
Although this cannot formally establish the correctness of an implementation,
preliminary experience with this approach indicates that it can be
surprisingly effective at discovering discrepancies between the
TLA+ specification and the implementation.
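The actual framework instruments Java programs; the following Python sketch merely conveys the recording side of the idea: every update to a specification-level variable is appended to a log, yielding a trace that can later be validated against the high-level specification.

```python
# A minimal sketch of trace recording: assignments to specification-level
# variables are logged as JSON lines, one record per update.
import json

class TracedVar:
    def __init__(self, name, value, log):
        self.name, self._value, self.log = name, value, log
        self._record()                    # log the initial value

    def _record(self):
        self.log.write(json.dumps({self.name: self._value}) + "\n")

    def set(self, value):                 # every assignment is logged
        self._value = value
        self._record()

with open("trace.ndjson", "w") as log:
    color = TracedVar("color", "red", log)
    color.set("green")   # trace: {"color": "red"} then {"color": "green"}
```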

Opacity of a system is a property describing the impossibility for an external attacker to deduce some private information by observing an execution of the system. In two publications 22, 11 we considered the opacity of systems modeled by timed automata where an attacker has access to the duration of the executions only. Our goal was to determine whether a system was opaque, and in some cases, under which conditions it could be made opaque. More precisely, the models we considered allowed for parameters representing unknown delays within the system, and we wished to identify for which parameter values opacity was achieved. Throughout this line of research we have considered many different variants of the notion of opacity, for instance by allowing the secret behaviour to expire, meaning that some private behaviour could stop being deemed private if the private information was not detected by the observer quickly enough. The paper 11 serves as a survey of the current state of this line of research.
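At its core, this notion of opacity reduces to an inclusion of duration sets, which can be phrased as a constraint query. In the sketch below, the duration sets are hypothetical placeholders; a real analysis derives them from the timed automaton.

```python
# Execution-time opacity as a constraint query: the system is opaque iff
# every duration of a secret run can also be produced by a non-secret run.
# The duration sets below are hypothetical stand-ins.
from z3 import Real, And, Not, Solver, unsat

d = Real('d')
secret = And(2 <= d, d <= 4)      # durations of runs visiting the secret
public = And(1 <= d, d <= 5)      # durations of runs avoiding it

s = Solver()
s.add(secret, Not(public))        # is there a distinguishing duration?
print("opaque" if s.check() == unsat else "not opaque")   # opaque
```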

For the analysis of timed automata, including the analysis of opacity mentioned above, the notion of semi-linear sets (i.e., sets definable in Presburger arithmetic) appears regularly. For instance, the durations of runs in a timed automaton can be described as such a set. Presburger arithmetic, however, no longer suffices once parameters are included in the automata: the presence of parameters often leads to multiplications between integer variables, which cannot be represented in Presburger arithmetic. Peano arithmetic could of course represent those sets, but it is well known to be undecidable and thus not a real option. Much previous research has studied the gap between Presburger and Peano arithmetic. To our knowledge, none of the existing work was able to handle the formulas that we generate in the analysis of parametric timed automata. In 51, we developed a parametric extension of Presburger arithmetic tailored to our needs. It shows how to handle certain formulas beyond Presburger arithmetic, which we plan to use in future work.

Loops are a fundamental staple of any programming language, and the study of loops plays a pivotal role in many subfields of computer science, including automated verification, abstract interpretation, program analysis, and semantics. A lot of work has been done in the linear dynamical systems community to represent program loops as simple linear systems, and to analyze which kinds of properties formulated in monadic second-order logic can be checked over the evolution of a loop. In 35, we modify the usual linear models to account for the fact that computers rely on floating-point arithmetic. This change leads to a completely different approach to the problem, as the usual tools for tackling linear dynamical systems fail to apply. An interesting result is that, while termination of simple linear loops is a long-standing open problem in the usual setting, it becomes undecidable with floating points. Moreover, if we add a very simple restriction on the system, one that changes nothing in the original setting, termination in our setting becomes decidable.
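The discrepancy between real-valued and floating-point semantics is easy to witness: over the reals, the loop below never terminates, while IEEE-754 doubles underflow to zero after finitely many steps.

```python
# Over the reals, x := x/2 never reaches 0 from x = 1; over IEEE-754
# doubles, repeated halving underflows to 0, so this loop halts.
x, steps = 1.0, 0
while x > 0.0:
    x /= 2.0
    steps += 1
print(steps)   # 1075 with double-precision floats
```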

Section 3.3 showed an example of the analysis of the
kinetics of a chemical reaction network at multiple time scales
64. Although general in its implementation, this
reduction method can fail in a number of cases. A major cause of failure is
the degeneracy of the quasi-steady state, when the fast dynamics has a
continuous variety of steady states. Typically, this happens when the fast
truncated ODEs have first integrals, i.e. quantities that are conserved on
any trajectory and that have to be fixed at arbitrary values depending on
the initial conditions. The quasi-steady states are then no longer
hyperbolic, because the Jacobian matrix of the fast part of the dynamics
becomes singular. To address this issue, we have now proposed a concept of
approximate conservation laws, which allows additional reductions.

Technically, this framework requires parametric versions of various established algorithms from symbolic computation. One simple example is the computation of the rank of a matrix with real parameters, which produces a formal finite case distinction where possible ranks are paired with necessary and sufficient conditions expressed as Tarski formulas. This makes it possible to identify critical cases with respect to the above-mentioned singularity of the Jacobian. Another example is the use of comprehensive Gröbner bases in the course of the parametric computation of certain syzygy modules. From a practical point of view, a central issue with all such algorithms is the combinatorial explosion of the number of cases.
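The shape of such parametric case distinctions can be conveyed by a tiny example: for a matrix with one real parameter, SymPy's symbolic determinant yields the condition separating the generic rank from the degenerate cases. The actual algorithms are, of course, far more general.

```python
# Parametric linear algebra in miniature: the rank of a matrix with a real
# parameter comes with a case distinction on that parameter. (The actual
# work uses, e.g., comprehensive Groebner bases; this only illustrates the
# shape of the output.)
from sympy import Matrix, symbols, solve

a = symbols('a', real=True)
M = Matrix([[1, a], [a, 1]])
det = M.det()                 # 1 - a**2

print(det)                    # rank(M) = 2 iff 1 - a**2 != 0
print(solve(det, a))          # critical cases a = -1 and a = 1, where rank(M) = 1
```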

We use SMT solving as well as real quantifier elimination methods to detect inconsistent cases and prune the tree of case distinctions early. The decision procedures used typically have doubly exponential complexity and can easily turn into a bottleneck preventing termination within reasonable time altogether, in particular when the degrees of the polynomial terms grow. Since the results remain correct even without the elimination of some redundant cases, we combine various methods and use suitable timeouts. This work has resulted in two articles, which have been accepted for publication in the SIAM Journal on Applied Dynamical Systems.