Computers, and the programs running on them, are powerful tools in many domains of human activity. In some of these domains, program errors can have enormous consequences, so it is crucial for all stakeholders that the best available techniques be used when designing these programs.

We advocate using higher-order logic proof assistants as tools to obtain better quality programs and designs. These tools make it possible to build designs where all decisive arguments are explicit, ambiguity is eliminated, and logical steps can be verified precisely. In practice, we are intensive users of the Coq system: we participate actively in the development of this tool, in collaboration with other teams at Inria, and we also take an active part in promoting its usage among academic and industrial users around the world.

Many domains of modern computer science and engineering make heavy use of mathematics. If we wish to use proof assistants to avoid errors in designs, we need to develop corpora of formally verified mathematics adapted to these domains. Developing libraries of formally verified mathematics is the main motivation for our research. In these libraries, we wish to capture not only the knowledge that is usually recorded in definitions and theorems, but also the practical knowledge that is recorded in mathematical practice, idioms, and work habits. Thus, we are interested in logical facts, algorithms, and notation habits. Moreover, the very process of developing an ambitious library is a matter of organization, with design decisions that need to be evaluated and improved; refactoring of libraries is also an important topic. Among all proof assistants based on higher-order logic, we contend that those based on type theory are the best suited for this work on libraries, thanks to their strong capabilities for abstraction and modular reuse.

The interface between mathematics, computer science and engineering is large. To focus our activities, we will concentrate on applications of proof assistants to two main domains: cryptography and robotics. We also develop specific tools for proofs in cryptography, mainly around a proof tool named EasyCrypt.

The proof assistants that we consider provide both a programming language, where users can describe algorithms performing tasks in their domain of interest, and a logical language to reason about the programs, thus making it possible to ensure that the algorithms do solve the problems for which they were designed. Trustability is gained because algorithms and logical statements provide multiple views of the same topic, thus making it possible to detect errors coming from a mismatch between expected and established properties. The verification process is itself a logical process, where the computer can bring rigor in aligning expectations and guarantees.

The foundations of proof assistants rest on the very foundations of mathematics. As a consequence, all aspects of reasoning must be made completely explicit in the process of formally verifying an algorithm. All aspects of the formal verification of an algorithm are expressed in a discourse whose consistency is verified by the computer, so that unclear or intuitive arguments need to be replaced by precise logical inferences.

One of the foundational features on which we rely extensively is type theory. In this approach, a very simple programming language is equipped with a powerful discipline to check the consistency of usage: types represent sets of data with similar behavior, functions represent algorithms mapping types to other types, and consistency can be verified by a simple computer program, a type-checker. Although they can be verified by a simple program, types can express arbitrarily complex objects or properties, so that the verification work lives in an interesting realm, where verifying proofs is decidable, but finding the proofs is undecidable.
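As an illustration, the central activity of a type-checker is validating inference steps such as the standard application rule (a schematic sketch, not the exact rule set of any particular system):

```latex
\frac{\Gamma \vdash f : A \to B \qquad \Gamma \vdash a : A}
     {\Gamma \vdash f\,a : B}
```

Checking that a given term is built by such rules is the decidable part; under the Curry-Howard reading, where a type $A$ is a proposition and a term of type $A$ is a proof of it, finding such a term is the undecidable part.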

This process for producing new algorithms and theorems is a novelty in the development of mathematical knowledge or algorithms, and new working methods must be devised for it to become a productive approach to high quality software development. Questions that arise are numerous. How do we avoid requiring human assistance to work on mundane aspects of proofs? How do we take advantage of all the progress made in automatic theorem proving? How do we organize the maintenance of ambitious corpora of formally verified knowledge in the long term?

To acquire hands-on expertise, we concentrate our activity on three aspects. The first one is foundational: we develop and maintain a library of mathematical facts that covers many aspects of algebra and analysis. In the past, we applied this library to proofs in group theory, but it is increasingly used for many different areas of mathematics and by other teams around the world, from combinatorics to elliptic-curve cryptography, for instance. The second aspect is applicative: we develop a specific tool for proofs in cryptography, where we need to reason about the probability that opponents manage to access information we wish to protect. For this activity, we develop a specific proof system, relying on a wider set of automatic tools, with the objective of finding the tools that are well adapted to this domain and of attracting users who are initially specialists in cryptography but not in formal verification. The third domain is robotics, as we believe that the current trend towards more and more autonomous robots and vehicles will raise questions of safety and trustability where formal verification can bring significant added value.

The Mathematical Components library is the main by-product of an effort, started almost two decades ago, to provide a formally verified proof of a major theorem in group theory. Because this theorem had a published proof spanning several hundred pages of books, with elements coming from character theory, others from algebra, and some from real analysis, the effort was an exercise in building a large library, with results in many domains, and in establishing clear guidelines for further growth and for searching existing results.

This library has proved to be a useful repository of mathematical facts for a wide range of applications, so that it has a growing community of users in many countries (Denmark, France, Germany, Japan, Singapore, Spain, Sweden, UK, USA) and for a wide variety of topics (transcendental number theory, elliptic curve cryptography, articulated robot kinematics, and, recently, blockchain foundations).

Interesting questions about this library range over the importance of decidability and proof irrelevance, ways of structuring knowledge so that theorems are automatically inherited from one topic to another, and ways of generating the infrastructure that makes this automation efficient and predictable. In particular, we want to concentrate on adding a new mathematical topic to this library: real analysis, and then complex analysis (Mathematical Components Analysis).

On the front of automation, we are convinced that a higher-level language is required to describe similarities between theories, to generate theorems that are immediate consequences of structures, etc., and for this reason we invest in the development of a new language on top of the proof assistant (ELPI, the Embeddable Lambda Prolog Interpreter).

When we work on cryptography, we are interested in the formal verification of proofs showing that some cryptographic primitives provide good guarantees against unwanted access to information. Over the years we have developed a technique for this kind of reasoning that relies on a programming logic (close to Hoare logic) with probabilistic aspects and the capability to establish relations between several implementations of a primitive. The resulting programming logic is called probabilistic relational Hoare logic (pRHL). We also study questions of side-channel attacks, where we wish to guarantee that opponents cannot gain access to protected knowledge even if they observe specific features of execution, such as execution time (to which the answer lies in constant-time execution) or partial access to memory bits (to which the answer lies in masking).
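Schematically, a probabilistic relational Hoare logic (pRHL) judgment has the following form (notation simplified and not tied to a particular tool):

```latex
\models \{\,P\,\}\; c_1 \sim c_2 \;\{\,Q\,\}
```

Here $P$ and $Q$ are relations over pairs of memories: whenever the initial memories of the two programs $c_1$ and $c_2$ are related by $P$, the distributions of their final memories are related by (a lifting of) $Q$. Taking $c_1$ and $c_2$ to be two implementations of the same primitive, with $P$ and $Q$ equalities, yields a statement of observational equivalence.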

For this domain of application, we choose to work with a specific proof tool (EasyCrypt), which combines powerful first-order reasoning and use of automatic tools, with a specific support for probabilistic relational Hoare Logic. The development of this EasyCrypt proof tool is one of the objectives of our team.

When it comes to formal proofs of resistance to side-channel attacks, we contend that it is necessary to verify formally that the compiler used in the production of actually running code respects the resistance properties that were established in formally verified proofs. One of our objectives is to develop such a compiler (Jasmin) and show its strength on a variety of applications.

The pair of tools EasyCrypt and Jasmin has also proved its worth in the formal verification of correctness for post-quantum cryptography.

Robots are man-made artifacts where numerous design decisions can be argued based on logical or mathematical principles. For this reason, we wish to use this domain of application as a focus for our investigations. The questions for which we are close to providing answers involve precision issues in numeric computation, obstacle avoidance and motion planning (including questions of graph theory), articulated limb kinematics and dynamics, and balance and active control.

From the mathematical perspective, these topics require that we improve our library to cover real algebraic geometry, computational geometry, real analysis, graph theory, and refinement relations between abstract algorithms and executable programs.

In the long run, we hope to exhibit robots where pieces of software and part of the design have been subject to formal verification.

The ELPI programming language has the following features:

- Native support for variable binding and substitution, via a Higher Order Abstract Syntax (HOAS) embedding of the object language. The programmer does not need to care about technical devices to handle bound variables, like De Bruijn indices.

- Native support for hypothetical contexts. When moving under a binder, one can attach extra information to the bound variable; this information is automatically discarded when the variable goes out of scope. For example, when writing a type-checker, the programmer need not care about managing the typing context.

- Native support for higher-order unification variables, again via HOAS. Unification variables of the meta-language (lambdaProlog) can be reused to represent the unification variables of the object language. The programmer does not need to care about the unification-variable assignment map, and cannot assign to a unification variable a term containing variables that are out of scope, nor build a circular assignment.

- Native support for syntactic constraints and their meta-level handling rules. The generative semantics of Prolog can be disabled by turning a goal into a syntactic constraint (suspended goal). A syntactic constraint is resumed as soon as relevant variables get assigned. Syntactic constraints can be manipulated by constraint handling rules (CHR).

- Native support for backtracking, to ease implementation of search.

- The constraint store is extensible. The host application can declare non-syntactic constraints and use custom constraint solvers to check their consistency.

- Clauses are graftable. The user is free to extend an existing program by inserting/removing clauses, both at runtime (using implication) and at "compilation" time by accumulating files.

Most of these features come with lambdaProlog. Constraints and propagation rules are novel in ELPI.
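To give a flavor of the HOAS idea outside lambdaProlog, here is a minimal, hypothetical sketch in Python of a type-checker for the simply typed lambda calculus, where a binder's body is a host-language closure; what takes explicit code here (fresh variables carrying their typing assumption) is native in ELPI:

```python
from dataclasses import dataclass
from typing import Callable, Union

# Object-language terms in HOAS style: a binder's body is a host-language
# function, so object-level substitution is just Python function application.
@dataclass
class Var:
    typ: "Ty"                     # a fresh variable carries its typing assumption

@dataclass
class App:
    fun: "Tm"
    arg: "Tm"

@dataclass
class Lam:
    arg_ty: "Ty"
    body: Callable[["Tm"], "Tm"]  # HOAS: the body is a function of the bound variable

Tm = Union[Var, App, Lam]
Ty = Union[str, tuple]            # a base type name, or (domain, codomain)

def typeof(t: Tm) -> Ty:
    """Type-check without de Bruijn indices or an explicit context:
    entering a binder applies the closure to a fresh Var that carries
    its own typing assumption (the 'hypothetical context')."""
    if isinstance(t, Var):
        return t.typ
    if isinstance(t, Lam):
        return (t.arg_ty, typeof(t.body(Var(t.arg_ty))))
    fty, aty = typeof(t.fun), typeof(t.arg)
    assert isinstance(fty, tuple) and fty[0] == aty, "ill-typed application"
    return fty[1]
```

For example, `typeof(Lam("base", lambda x: x))` returns `("base", "base")`, i.e., the type base → base.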

ELPI implements a variant of lambdaProlog enriched with Constraint Handling Rules, a programming language well suited to manipulate syntax trees with binders and unification variables.

ELPI is a research project aimed at providing a programming platform for the so-called elaborator component of an interactive theorem prover.

ELPI is designed to be embedded into larger applications written in OCaml as an extension language. It comes with an API to drive the interpreter and with an FFI for defining built-in predicates and data types, as well as quotations and similar goodies that come in handy to adapt the language to the host application.

The Jasmin programming language smoothly combines high-level and low-level constructs, so as to support “assembly in the head” programming. Programmers can control many low-level details that are performance-critical: instruction selection and scheduling, what registers to spill and when, etc. The language also features high-level abstractions (variables, functions, arrays, loops, etc.) to structure the source code and make it more amenable to formal verification. The Jasmin compiler produces predictable assembly and ensures that the use of high-level abstractions incurs no run-time penalty.

The semantics is formally defined to allow rigorous reasoning about program behaviors. The compiler is formally verified for correctness (the proof is machine-checked by the Coq proof assistant). This ensures that many properties can be proved on a source program and still apply to the corresponding assembly program: safety, termination, functional correctness…

Jasmin programs can be automatically checked for safety and termination (using a trusted static analyzer). The Jasmin workbench leverages the EasyCrypt toolset for formal verification. Jasmin programs can be extracted to corresponding EasyCrypt programs to prove functional correctness, cryptographic security, or security against side-channel attacks (constant-time).

2023.06.0 is a major release of Jasmin. It contains a few noteworthy changes:

- local functions now use call and ret instructions,
- experimental support for the ARMv7 (i.e., Cortex-M4) architecture,
- a few aspects of the safety checker can be finely controlled through annotations or command-line flags,
- shift and rotation operators have a simpler semantics.

As usual, it also brings in various fixes and improvements, such as bit rotation operators and automatic slicing of the input program.

Trocq is a prototype of a modular parametricity plugin for Coq, aiming to perform proof transfer by translating the goal into an associated goal featuring the target data structures as well as a rich parametricity witness from which a function justifying the goal substitution can be extracted.

The plugin features a hierarchy of parametricity witness types, ranging from structure-less relations to a new formulation of type equivalence, gathering several pre-existing parametricity translations, including univalent parametricity and CoqEAL, in the same framework.

This modular translation performs a fine-grained analysis and generates witnesses that are rich enough to preprocess the goal yet are not always a full-blown type equivalence, making it possible to perform proof transfer with the power of univalent parametricity while avoiding the univalence axiom in cases where it is not required.

The translation is implemented in Coq-Elpi and features transparent and readable code with respect to a sequent-style theoretical presentation.

VsCoq is an extension for Visual Studio Code (VS Code) and VSCodium which provides support for the Coq Proof Assistant.

VsCoq is distributed in two flavours:

- VsCoq Legacy (required for Coq < 8.18, compatible with Coq >= 8.7) is based on the original VsCoq implementation by C.J. Bell. It uses the legacy XML protocol spoken by CoqIDE.

- VsCoq (recommended for Coq >= 8.18) is a full reimplementation around a language server which natively speaks the LSP protocol.

We have mainly been working on stability and bug fixes. In this release you will find:

- Some improvements to performance on large files.
- Fixes for document state invalidation bugs.
- Goal view improvements.

In July 2022, NIST announced the first batch of “winners” of the post-quantum project, i.e., schemes that will be forwarded to standardization 25. This first batch contained three signature schemes (CRYSTALS-Dilithium 28, 32, Falcon 33, and SPHINCS+) and one key-encapsulation mechanism (CRYSTALS-Kyber).

We have started the formal verification of three of those primitives, among them CRYSTALS-Dilithium and CRYSTALS-Kyber.

For Kyber, we give a (readable) formal specification in the EasyCrypt proof assistant, which is syntactically very close to the pseudocode description of the scheme as given in the most recent version of the NIST submission. We also provide high-assurance open-source implementations of Kyber written in the Jasmin language, along with machine-checked proofs that they are functionally correct with respect to the EasyCrypt specification. To make this possible it was necessary to extend the Jasmin language. This work has been published in 11.

For CRYSTALS-Dilithium, the verification work is still in progress.

We continue our study of approaches to combine two mechanized tools to verify protocols. We developed a translation from CryptoVerif to EasyCrypt that allows cryptographic assumptions that cannot be proved in CryptoVerif to be translated to EasyCrypt and proved there. We used the translation to prove several hypotheses assumed in CryptoVerif.

We completed the translation to cover a wider range of the language that CryptoVerif uses for specifying assumptions on cryptographic primitives. This work 22 has been accepted for publication in CSF 2024.

We have extended Jasmin with a new back-end for ARMv7. The main difficulty was to generalize the compiler to be independent of the architecture (different pointer sizes, calling conventions, instruction sets, and so on). Before that, the only back-end was for x86-64 with the AVX2 extension. This generalization is an important step, because it will make it easy to add other back-ends; in particular, we plan to add RISC-V.

The language has been extended with new features for security.

In an effort to synthesize several years of investigations around the computation of robot trajectories, we developed a Coq model for a program that takes as input a description of obstacles and a pair of points, and produces as output a trajectory for a robot from one point to the other between these obstacles. The obstacles are given by a collection of straight line segments, and the produced trajectory is composed of straight line segments and Bézier curves, so that the trajectory is smooth. An article describing the different phases of the program has been submitted for publication 21. This Coq model is actually a program that can be run inside Coq. Thanks to the extraction tool, the same program can also be run in a web page. Proofs of correctness for this program are under construction.

In the initial revision of Hierarchy Builder, definitions needed to be added in a precise order; otherwise, instances of structures would be missing from the final inheritance graph. We developed an extension that detects all the instances that would otherwise be missing and adds them. Thanks to this extension, the Hierarchy Builder program is more robust, as the user no longer needs to respect a specific order of definitions.

We are developing a new type class solver for Coq by compiling type class instances into rules for the Elpi programming language. Currently we are validating a prototype implementation on the Std++ and TLC Coq libraries, two widely used libraries that rely on type classes.

We are trying to use the Elpi programming language to automate proofs in separation logic. Diaframe is an existing automatic prover based on Coq type classes that suffers from the limitations of the current Coq solver. We have improved the indexing data structures used by Elpi in order to make them scale to larger inputs. Also, we are trying to use partial evaluation in order to specialize, ahead of time, the rules used for type class search in the context of separation logic.

We ported the entire Mathcomp ecosystem to the new major release (version 2) of the Mathematical Components library. Most of the software has been released; Mathcomp Analysis and Abel are ported but not released yet. The details of the port are described in 19.

In interactive theorem proving, a range of different representations may be available for a single mathematical concept, and some proofs may rely on several representations. Without automated support such as proof transfer, theorems stated with different representations cannot be combined without manual input from the user. Tools with this purpose exist, but in proof assistants based on dependent type theory, such transfers still require human effort, whereas they are obvious and often left implicit on paper. We present Trocq, a new proof transfer framework based on a generalization of the univalent parametricity translation, thanks to a new formulation of type equivalence. This translation takes care to avoid dependency on the axiom of univalence for transfers in a delimited class of statements, and may be used with relations that are not necessarily isomorphisms. We motivate and apply our framework on a set of examples designed to show that it unifies several existing proof transfer tools. The article 23 also discusses an implementation of this translation for the Coq proof assistant, in the Coq-Elpi metalanguage.

A rewrite of the VsCoq extension was completed this year, leading to the publication of release v2.0.1 in September. This effort is meant to continue for a few years and provide a modern and stable user interface for Coq. Maxime Dénès and Enrico Tassi have worked in regular sprints since February, helping Romain Tetley to dive into the Coq language server.

We are experimenting with new design patterns to automate the conversion between sets and types, to automatically prove set membership and to automatically cast between types even when an external proof is required. The result of these experiments will be integrated in Hierarchy Builder in order to extend its expressiveness, in particular in the formalization of topology, number theory and category theory.

This is ongoing work, without any publication yet. Early experiments were presented during meetings of the Liberabaci project.

We have been working since June 2023 on the CoREACT project, which addresses the development of applied category theory in Coq. Our work involves using the Hierarchy Builder (HB) and improving it to match the project goals. HB is useful in the formalization of complex algebraic hierarchies, making it possible to automate inheritance and to manage efficiently the evolution of a hierarchy relative to a subject type. First, we extended HB to support reasoning about enriched categories. Indeed, the localization associated with enrichment made it necessary to implement an appropriate connector, which we call a wrapper, allowing the user to benefit from the automation provided by HB without resorting to mathematically unnatural formulations. Part of our initial work also involved clarifying the operational meaning of wrapping with respect to the informal semantics of HB. Then, since October, we have moved on to formalizing categorical theories that make use of related notions, notably double categories and internal categories. We have so far provided two alternative characterizations of double categories and are proving their equivalence, as part of the development of a Coq library.

We extended the Abel-Galois theorem to the case of positive characteristic. This involved generalizing several definitions and lemmas, and in particular formalizing Hilbert's Theorem 90 in its additive version.

The construction of the Lebesgue integral and its measure has been completed and published in 9. This paper describes the formalization techniques that were needed to obtain comfortably usable definitions.

We have introduced a construction of finite fields in the Mathcomp library. We first defined polynomials of bounded degree, from which we derived the standard module structure. Then we used the theory of irreducible polynomials to obtain finite fields. This contribution has been added to Mathcomp version 1.19.
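The underlying quotient construction can be illustrated by a tiny, hypothetical sketch in Python: elements of GF(4) are polynomials over GF(2) of degree below 2 (encoded as bit vectors), multiplied modulo an irreducible polynomial.

```python
# GF(2^2) = GF(2)[x] / (x^2 + x + 1): 2-bit integers encode polynomials
# of degree < 2 over GF(2); bit i is the coefficient of x^i.
IRRED = 0b111  # x^2 + x + 1 is irreducible over GF(2)

def gf4_add(a: int, b: int) -> int:
    return a ^ b  # coefficient-wise addition modulo 2

def gf4_mul(a: int, b: int) -> int:
    # schoolbook polynomial multiplication over GF(2)...
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    # ...followed by reduction modulo the irreducible polynomial
    if r & 0b100:
        r ^= IRRED
    return r

# Irreducibility makes the quotient a field: every nonzero element is invertible.
assert all(any(gf4_mul(a, b) == 1 for b in range(1, 4)) for a in range(1, 4))
```

Picking a reducible modulus instead would break the final assertion, which is exactly why the Mathcomp construction goes through the theory of irreducible polynomials.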

We have continued our collaboration inside the ANR Nuscap project on double-word arithmetic. First, an article on the formalization of algorithms for the Euclidean norm has been published 8. Second, we have started a formalization in Coq+Flocq of the proofs given in 34 (with an extended version in 35). This paper describes algorithms for the correct rounding of the power function in the IEEE 754 format, for all rounding modes. We have verified (and amended, with the help of the authors) all the paper proofs given in the article. The formal proofs are available on GitHub. For this work we also had to formalize the correctness of the FastTwoSum algorithm with directed roundings, given in 36.
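The FastTwoSum algorithm itself is short; here is a minimal Python sketch (Python floats are IEEE 754 binary64 with round-to-nearest; the formalized result also covers directed roundings):

```python
def fast_two_sum(a: float, b: float) -> tuple[float, float]:
    """FastTwoSum: given |a| >= |b|, return (s, t) with s = fl(a + b)
    and a + b = s + t exactly, so t is the rounding error of the sum."""
    s = a + b
    z = s - a   # the part of b that was actually absorbed into s
    t = b - z   # the part of b that was rounded away
    return s, t

s, t = fast_two_sum(1.0, 2.0 ** -60)
# 2^-60 is below half an ulp of 1.0, so s rounds to 1.0 while
# t = 2^-60 recovers the discarded term exactly.
```

The precondition |a| >= |b| matters: without it, the error term t is no longer guaranteed to be exact.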

We have continued our collaboration inside the ANR Nuscap project on the Fast Fourier Transform. First, we have a formal proof of the relative error bound for the Cooley-Tukey Fast Fourier Transform given in 31. Second, we have developed a certified Fast Fourier Transform algorithm that is executable inside Coq. It uses a complex-number interval arithmetic built on top of the Coq Interval library, and is used to obtain a toy implementation of a multiplication algorithm for complex-number polynomials.
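For reference, here is a plain (uncertified) Python sketch of the radix-2 Cooley-Tukey recursion whose rounding behavior is the object of the formal error analysis; the certified Coq version replaces floating-point complex numbers with interval arithmetic:

```python
import cmath

def fft(xs: list) -> list:
    """Radix-2 Cooley-Tukey FFT; len(xs) must be a power of two."""
    n = len(xs)
    if n == 1:
        return [complex(xs[0])]
    # recurse on even-indexed and odd-indexed sub-sequences
    evens, odds = fft(xs[0::2]), fft(xs[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)  # twiddle factor
        out[k] = evens[k] + w * odds[k]
        out[k + n // 2] = evens[k] - w * odds[k]
    return out
```

Each level of the recursion performs one rounded complex multiplication and two rounded additions per point, which is precisely what a relative-error analysis of the algorithm has to track.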

In work that was started in Pierre Boutry's thesis, we study how Tarski's work on axioms for reasoning in geometry can be made constructive. This is a follow-up of work on the same topic from 2020. We progressed on the independence of the new axioms. The current state has been presented at ADG 13.

The STAMP team participates with the Grace team (Inria Saclay) in the JASMIN contract funded in the framework of the Inria-Nomadic Labs collaboration for research related to the Tezos blockchain. This contract funds the PhD thesis of Swarn Priya.

Cyril Cohen has created and is now a co-administrator of the Coq Zulip chat.

Laurence Rideau is a member of the editorial board of Interstices.