The EPI Gallinette aims at developing a new generation of proof assistants, with the belief that practical experiments must go hand in hand with foundational investigations:

Software quality is a requirement that is becoming more and more prevalent,
by now far exceeding the traditional scope of embedded systems. The
development of tools to construct software that respects a given specification
is a major challenge in computer science. Proof assistants
such as Coq 39 provide a formal method whose central
innovation is to produce certified programs by transforming
the very activity of programming. Programming and proving are merged
into a single development activity, informed by an elegant but rigid
mathematical theory inspired by the correspondence between programming,
logic and algebra: the Curry-Howard correspondence. For the
certification of programs, this approach has shown its effectiveness
in the development of important pieces of certified software such
as the C compiler of the CompCert project 45.
The extracted CompCert compiler is reliable and efficient, running
only 15% slower than GCC 4 at optimisation level 2 (gcc -O2),
a level of optimisation that was previously considered too unreliable for
critical applications such as embedded systems.

Proof assistants can also be used to formalise mathematical
theories: they not only provide a means of representing mathematical
theories in a form amenable to computer processing, but their internal
logic provides a language for reasoning about such theories. In the
last decade, proof assistants have been used to verify extremely large
and complicated proofs of recent mathematical results, sometimes requiring
either intensive computations 42, 43
or intricate combinations of a multitude of mathematical theories
41. But formalised mathematics is more than just proof
checking and proof assistants can help with the organisation of mathematical
knowledge or even with the discovery of new constructions and proofs.

Unfortunately, the rigidity of the theory behind proof assistants restricts their expressiveness both as programming languages and as logical systems. For instance, a program extracted from Coq only uses a purely functional subset of OCaml, leaving behind important means of expression such as side-effects and objects. Limitations also appear in the formalisation of advanced mathematics: proof assistants do not cope well with classical axioms such as excluded middle and choice which are sometimes used crucially. The fact of the matter is that the development of proof assistants cannot be dissociated from a reflection on the nature of programs and proofs coming from the Curry-Howard correspondence. In the EPC Gallinette, we propose to address several limitations of proof assistants by pushing the boundaries of this correspondence.

In the 1970's, the Curry-Howard correspondence was seen as a perfect match between functional programs, intuitionistic logic, and Cartesian closed categories. It received several generalisations over the decades, and now it is more widely understood as a fertile correspondence between computation, logic, and algebra.

Nowadays, the Curry-Howard correspondence is not perceived as a perfect match anymore, but rather as a collection of theories meant to explain similar structures at work in logic and computation, underpinned by mathematical abstractions. By relaxing the requirement of a perfect match between programs and proofs, and instead emphasising the common foundations of both, the insights of the Curry-Howard correspondence may be extended to domains for which the requirements of programming and mathematics may in fact be quite different.

Consider the following two major theories of the past decades, which were until recently thought to be irreconcilable:

We now outline a series of scientific challenges aimed at understanding type theory, effects, and their combination.

More precisely, three key axes of improvement have been identified:

To validate the new paradigms, we propose in Section 3.5 three particular application fields in which members of the team already have a strong expertise: code refactoring, constraint programming and symbolic computation.

The experience of the team with various extensions of type theory (definitionally proof-irrelevant propositions, observational type theory, gradual type theory, opetopic type theory) naturally raises the question of integrating these distinct flavours of type theory into a single one. At a theoretical level, we will investigate type theories with multiple universe hierarchies, hosting theories with potentially incompatible principles and able to express efficiently a variety of mathematical situations. At a practical level, we will develop a version of the Coq proof assistant with multiple sorts, generalizing the existing situation where the sorts of types, propositions and definitionally proof-irrelevant propositions cohabit de facto. An important challenge in that direction is to design a sound mechanism of sort polymorphism to factor away the constructions common to multiple sorts and prevent the combinatorial explosion induced by a naive implementation. A somewhat related line of research is the design of an efficient decision procedure for universe level constraints, following the work of 38.
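The de facto cohabitation of sorts mentioned above can already be observed in current Coq. A minimal sketch (sFalse is an arbitrary illustrative name, not a standard-library definition):

```
(* Three sorts cohabiting in today's Coq: the predicative Type,
   the impredicative Prop, and the definitionally proof-irrelevant SProp. *)
Inductive sFalse : SProp := .

Check (nat : Type).
Check (True : Prop).
Check (sFalse : SProp).
```

A sort-polymorphic mechanism would let a single definition be instantiated at any of these sorts instead of being duplicated for each.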

In the long quest towards a practical and extensional notion of equality, observational type theory, first introduced by 36 and further developed and studied in 50 and 13, represents an important milestone that should now be implemented in practice. We will pursue in parallel other extensionality principles to enhance the expressivity of type theories. In particular, functor laws for type formers 33 provide a common basis for type cast operations with a structural behaviour.

The investigation of extensions of CIC with side effects, in particular with exceptions and a case analysis operator on types, yields important insights for giving sound models of the cast calculi behind gradual dependent types. We plan to go beyond these relatively simple extensions and consider other widely used side effects, for instance the addition of a global state to a type theory. These type theories with primitive support for effectful operations could provide a new approach to the verification of programs exhibiting side effects. Extending our previous work on classical sequent calculus with dependent types, we will study the integration of classical axioms such as excluded middle and choice in a rich type theory. One goal is to better integrate insights from (classical) proof theory into the state of the art of type theory (or into an alternative approach thereof). We also aim to look at concrete issues met in formalized mathematics stemming from the classical/intuitionistic divide.

The MetaCoq project currently provides bare-bones meta-programming facilities of quotation and denotation. We plan to improve this into a full-featured meta-programming facility, and to explore the possibility of giving strong specifications to, and verifying, our meta-programs. A prime example is the development of support for verified parametricity translations, which can have many uses during formalization (elimination principles, automatic transport, etc.).

We aim to pursue the study of representation independence principles
and the implementation of corresponding tools, so as to dramatically
reduce the practical cost of library development. The expected mid-term
outcome concerns the design of refinement libraries, which
connect proof-oriented with computation-oriented data structures, and
of better transport instruments for formalized mathematics, e.g., automated reasoning modulo structure isomorphisms.

The porting of Abel et al.'s development of logical relations for MLTT from Agda to Coq paves the way to a much more modular library. We would like to
extend this work by developing a generic framework for dependently-typed
logical relations, and use it for a wide variety of new dependent type theories.
The main goal is to establish strong metatheoretical properties: normalization,
but also suitable forms of interoperability. Ultimately, we
believe this framework could interface with the MetaCoq project.

We will keep investigating the semantic foundations of features of
systems programming languages from a mathematical point of view. Based
on our earlier work showing a link between resources and ordered
logic, we will study resource management in the context of the
formal theory of side-effects and linearity.
Existing theorems will need to be generalized in many ways (extension
of the notions of effect and resource modalities, handling of order,
etc.). A link with linear logic will help make tighter connections
between systems programming and “linear” approaches to program
semantics (ownership, linear types, etc.). The notion of “borrowing”
will be studied from the angle of linear logic, with possible
applications to program verification. This study should also be
extended to notions of fault tolerance (exception-safety and
isolation) which might show links with modal logics.
The anticipated outcome is an understanding of advanced notions in
programming languages that better align with proven concepts from
systems programming, compared to experimental type systems originating
from pure theory, while providing clear distinctions between essential
and accidental aspects of these real-world languages. As concrete
experiments, we will keep researching ways to integrate systems
programming concepts such as resources and fault tolerance in
functional programming languages (notably OCaml and the OCaml-Rust
interface).

We will continue our work on game semantics for programming languages, with the aim of studying interoperability and compilation between languages. Indeed, these semantics are particularly well suited to studying the interaction between a program and an environment written in different languages. We believe this approach will make it possible to overcome major open problems concerning interoperability between languages equipped with abstraction properties statically enforced by parametric polymorphism, and untyped languages where such abstraction properties are enforced dynamically. We will also continue studying the automation of reasoning on these semantics, along the lines of the CAVOC project. To do this, we want to apply abstract interpretation techniques, in particular the Abstracting Abstract Machine methodology, to automatically check reachability properties of programs, such as unverified assertions.

As part of the CANofGAS project, we also plan to apply these interactive semantics to develop compositional cost models for programs. This would provide compositional reasoning on time and space complexity for higher-order programs.

The MetaCoq project's Achilles' heel is that it relies on an assumption of strong normalization for the calculus. Ongoing work in the team on defining powerful logical relations in Coq without relying on induction-recursion gives hope that a strong-normalization model for a large fragment of MetaCoq can be constructed in the future. The main scientific obstacle is to specify the syntactic guard/productivity condition at the heart of termination checking in such a way that it can be reduced to an eliminator-based definition of (co-)inductive types, which is how they are usually modelled. We anticipate difficulties with nested and indexed inductive types, which might be currently accepted by Coq but difficult to emulate with eliminators. However, this can only lead to a better understanding of the theory. As part of the ReCiProg project, we also plan to establish formal links between the validation criteria derived from circular proofs and this guard condition.

The benefits of formally verified symbolic computation are twofold: increased trust in computer-produced mathematics, and expanded automation for users of proof assistants. The main challenge is to enable the formal verification of efficient programs, whose correctness proofs involve sophisticated mathematical ingredients rather than subtle memory or parallelism issues. This involves in particular scaling up the automatic transport of libraries, as well as the formal verification of existing imperative code from computer algebra systems (typically written in C).

The MetaCoq erasure pipeline, targeting C or OCaml, provides a guarantee that the evaluation
of the compiled program gives a semantically correct result. However, in general extracted programs
are linked to larger programs of the target language, where we lose guarantees of correctness
in most non-trivial cases of interoperability. We are hence interested in developing
techniques to show interoperability results between code that
is extracted through our certified compilation pipeline and external
code, e.g., in OCaml or C.
In 32, we developed a complete verified extraction pipeline
from Coq to OCaml. The goal for the future is to scale this work to allow more scenarios of
interoperability with effectful target programs, using a formal semantics for the target language.
In particular, we should be able to soundly interpret the primitive constructs that are already part
of Coq, which are fixed-width integers, IEEE 754 floating-point numbers and persistent arrays.
There is a point of synergy here with the previous goal of enabling the development of efficient,
formally verified symbolic computation.
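These primitive constructs are already usable from within Coq. A minimal sketch (module names are those of Coq's standard library in recent versions and may vary across releases):

```
From Coq Require Import Uint63 PrimFloat PArray.

(* Primitive 63-bit integers, IEEE 754 floats and persistent arrays,
   evaluated natively by Coq's kernel rather than as inductive data. *)
Definition three : int := (1 + 2)%uint63.
Definition half : float := (1 / 2)%float.
Definition arr : array nat := PArray.make 3%uint63 0.
```

Giving these constructs a formal semantics in the target language is precisely what is needed to extend the verified extraction pipeline to them.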

Ltac2 is a member of the ML family of languages, in the sense that it is an effectful call-by-value functional language with static typing à la Hindley-Milner. It is commonly accepted that ML constitutes a sweet spot in PL design, as it is relatively expressive while being neither too lax (unlike dynamic typing) nor too strict (unlike, say, dependent types).

The main goal of Ltac2 is to serve as a meta-language for Coq. As such, it naturally fits in the ML lineage, just as the historical ML was designed as the tactic language for the LCF prover. It can also be seen as a general-purpose language, by simply forgetting about the Coq-specific features.

Sticking to a standard ML type system can be considered somewhat weak for a meta-language designed to manipulate Coq terms. In particular, there is no way to statically guarantee that a Coq term resulting from an Ltac2 computation will be well-typed. This is actually a design choice, motivated by backward compatibility with Ltac1. Instead, well-typedness is deferred to dynamic checks, allowing many primitive functions to fail whenever they are provided with an ill-typed term.

The language is naturally effectful, as it manipulates the global state of the proof engine. This makes it possible to think of proof-modifying primitives as effects in a straightforward way. Semantically, proof manipulation lives in a monad, which ensures that Ltac2 satisfies the same equations as a generic ML with unspecified effects would, e.g., function reduction is substitution by a value.
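A small illustration of Ltac2's ML flavour, statically typed yet effectful (a minimal sketch; Int.add comes from the Ltac2 standard library):

```
From Ltac2 Require Import Ltac2.

(* An ordinary ML-style recursive function over polymorphic lists,
   typed à la Hindley-Milner by the Ltac2 typechecker. *)
Ltac2 rec length (l : 'a list) : int :=
  match l with
  | [] => 0
  | _ :: tl => Int.add 1 (length tl)
  end.
```

The same language constructs are used to write tactics proper, where the effects act on the proof engine's state.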

Equations is a tool designed to help with the definition of programs in the setting of dependent type theory, as implemented in the Coq proof assistant. Equations provides a syntax for defining programs by dependent pattern-matching and well-founded recursion and compiles them down to the core type theory of Coq, using the primitive eliminators for inductive types, accessibility and equality. In addition to the definitions of programs, it also automatically derives useful reasoning principles in the form of propositional equations describing the functions, and an elimination principle for calls to this function. It realizes this using a purely definitional translation of high-level definitions to core terms, without changing the core calculus in any way, or using axioms.

The main features of Equations include:

Dependent pattern-matching in the style of Agda/Epigram, with inaccessible patterns and with and where clauses. The use of the K axiom or a proof of K is configurable, and the tool is able to solve unification problems without resorting to the K rule when it is not necessary.

Support for well-founded and mutual recursion using measure/well-foundedness annotations, even on indexed inductive types, using an automatic derivation of the subterm relation for inductive families.

Support for mutual and nested structural recursion using with and where auxiliary definitions, making it possible to factor multiple uses of the same nested fixpoint definition. It proves the expected elimination principles for mutual and nested definitions.

Automatic generation of the defining equations as rewrite rules for every definition.

Automatic generation of the unfolding lemma for well-founded definitions (requiring only functional extensionality).

Automatic derivation of the graph of the function and its elimination principle. In case the automation fails to prove these principles, the user is asked to provide a proof.

A new dependent elimination tactic based on the same splitting tree compilation scheme, which can advantageously replace dependent destruction and sometimes inversion as well. The as clause of dependent elimination makes it possible to specify exactly the patterns and the naming of the new variables needed for an elimination.

A set of Derive commands for automatic derivation of constructions from an inductive type: its signature, no-confusion property, well-founded subterm relation and decidable equality proof, if applicable.
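To illustrate dependent pattern-matching as compiled by Equations, here is a classic example on length-indexed vectors (a minimal sketch; the exact incantations may vary slightly across Equations versions):

```
From Equations Require Import Equations.

Inductive vec (A : Type) : nat -> Type :=
| vnil : vec A 0
| vcons : A -> forall n, vec A n -> vec A (S n).
Arguments vnil {A}. Arguments vcons {A} _ {n} _.

Derive Signature NoConfusion for vec.

(* Dependent pattern-matching: the index S n rules out the vnil case,
   so no branch is needed for it, and no default value is required. *)
Equations vhead {A n} (v : vec A (S n)) : A :=
  vhead (vcons a v) := a.
```

Equations compiles this down to the primitive eliminators of vec, without extending Coq's core calculus.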

Equations is a function definition plugin for Coq (supporting Coq 8.13 to 8.17, with special support for the Coq-HoTT library) that allows the definition of functions by dependent pattern-matching and well-founded, mutual or nested structural recursion, and compiles them into core terms. It automatically derives the defining equations of the clauses, the graph of the function and its associated elimination principle.

Equations is based on a simplification engine for the dependent equalities appearing in dependent eliminations that is also usable as a separate tactic, providing an axiom-free variant of dependent destruction.

This is a new major release of Equations, working with Coq 8.15 to 8.17. This version adds an improved syntax (fewer required , separators), integration with the Coq-HoTT library and numerous bug fixes. See the reference manual for details.

This version introduces minor breaking changes along with the following features:

Enhancements of pattern interpretation

No explicit shadowing of pattern variables is allowed anymore. This fixes numerous bugs where generated implicit names introduced by the elaboration of patterns could shadow user-given names, leading to incorrect names in right-hand sides and confusing environments.

Improved syntax for "concise" clauses separated by |, at top-level or inside with subprograms. They no longer need to be separated by ,. For example, the following definition is now accepted:

Equations foo : nat -> nat :=
| 0 => 1
| S n => S (foo n).

The old syntax is however still supported for backwards compatibility.

Multiple patterns can be separated by , in addition to |, as in:

Equations trans {A} {x y z : A} (e : x = y) (e' : y = z) : x = z :=
| 1, 1 => 1.

Require Import Equations.Equations. does not work anymore. One has to use Require Import Equations.Prop.Equations to load the plugin's default instance, where equality is in Prop. From Equations Require Import Equations. is unaffected.

Use Require Import Equations.HoTT.All to use the HoTT variant of the library, compatible with the Coq HoTT library. The plugin then reuses the definition of paths from the HoTT library, and all its constructions are universe polymorphic. As for the HoTT library alone, Coq must be passed the arguments -noinit -indices-matter to use the library and plugin. The coq-equations opam package depends optionally on coq-hott, so if coq-hott is installed before it, coq-equations will automatically install the HoTT variant in addition to the standard one. This variant of Equations makes it possible to write very concise dependent pattern-matchings on equality:

Require Import Equations.HoTT.All.
Equations sym {A} {x y : A} (e : x = y) : y = x :=
| 1 => 1.

New attribute #[tactic=tac] to locally set the default tactic used to solve remaining holes. The goals on which the tactic applies are now always of the form Γ |- τ, where Γ is the context in which the hole was introduced and τ the expected type, even when the Obligation machinery is used to solve them, resulting in a possible incompatibility if the obligation tactic treated the context differently from the conclusion. By default, the program_simpl tactic performs a simpl call before introducing the hypotheses, so you might need to add a simpl in * to your tactics.

New attributes #[derive(equations=yes|no, eliminator=yes|no)] can be used in place of the (noeqns, noind) flags, which are deprecated.

The MetaCoq project aims to provide a certified meta-programming environment in Coq. It builds on Template-Coq, a plugin for Coq originally implemented by Malecha (Extensible proof engineering in intensional type theory, Harvard University, 2014), which provided a reifier for Coq terms and global declarations, as represented in the Coq kernel, as well as a denotation command. Recently, it was used in the CertiCoq certified compiler project (Anand et al., in: CoqPL, Paris, France, 2017), as its front-end language, to derive parametricity properties (Anand and Morrisett, in: CoqPL'18, Los Angeles, CA, USA, 2018). However, the syntax lacked semantics, be it typing semantics or operational semantics, which should reflect, as formal specifications in Coq, the semantics of Coq's type theory itself. The tool was also rather bare-bones, providing only rudimentary quoting and unquoting commands. MetaCoq generalizes it to handle the entire polymorphic calculus of cumulative inductive constructions, as implemented by Coq, including the kernel's declaration structures for definitions and inductives, and implements a monad for general manipulation of Coq's logical environment. The MetaCoq framework allows Coq users to define many kinds of general-purpose plugins, whose correctness can be readily proved in the system itself, and that can be run efficiently after extraction. Examples of implemented plugins include a parametricity translation and a certified extraction to call-by-value lambda-calculus. The meta-theory of Coq itself is verified in MetaCoq, along with verified conversion, type-checking and erasure procedures providing highly trustable alternatives to the procedures in Coq's OCaml kernel. MetaCoq is hence a foundation for the development of higher-level certified tools on top of Coq's kernel.

MetaCoq is made of 4 main components:

- The entry point of the project is the Template-Coq quoting and unquoting library for Coq, which allows quotation and denotation of terms between three variants of the Coq AST: the OCaml one used by Coq's kernel, the Coq one defined in MetaCoq, and the one defined by the extraction of the MetaCoq AST, making it possible to extract OCaml plugins from Coq implementations.

- The PCUIC component is a full formalization of Coq's typing and reduction rules, along with proofs of important metatheoretic properties: weakening, substitution, validity, subject reduction and principality. The PCUIC calculus differs slightly from the Template-Coq one and verified translations between the two are provided.

- The checker component contains verified implementations of weak-head reduction, conversion and type inference for the PCUIC calculus, along with a verified checker for Coq theories.

- The erasure component contains a verified implementation of erasure/extraction from PCUIC to untyped (call-by-value) lambda calculus extended with a dummy value for erased terms.
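The quotation entry point is a small surface; a minimal sketch of its use (the definition name is illustrative):

```
From MetaCoq.Template Require Import All.

(* Quote a Coq term into the reified Template-Coq AST, then
   inspect the resulting syntax tree. *)
MetaCoq Quote Definition reified_plus := (fun x y : nat => x + y).
Print reified_plus.
```

The verified checker and erasure components then operate on such reified syntax, with correctness proofs carried out against the PCUIC specification.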

This new version integrates:

- Support for primitive integers and floating point values, using the same typechecking mechanism as Coq's kernel, up to the erased lambda-box language.
- Better computational behavior of the safe checker.
- Support for nix and cachix (useful for CI; makes it possible to reuse remotely compiled components).
- Registering of projections for inductive types defined as records.
- More efficient eta-expansion transformation, using environment maps instead of association lists.

Memprof-limits is an implementation of per-thread global memory limits and per-thread allocation limits à la Haskell, as well as CPU-bound thread cancellation, for OCaml, compatible with multiple threads.

Memprof-limits interrupts the execution by raising an asynchronous exception: an exception that can arise at almost any location in the program. It is provided with a guide on how to recover from asynchronous exceptions and other unexpected exceptions, summarising for the first time practical knowledge acquired in OCaml by the Coq proof assistant as well as in other programming languages.

Memprof-limits is probabilistic, as it is based on the statistical memory accountant memprof. It is provided with a statistical analysis that the user can rely on to have guarantees about the enforcement of limits.

Memprof-limits is an implementation of (per-thread) global memory limits, (per-thread) allocation limits, and cancellation of CPU-bound threads, for OCaml. Memprof-limits interrupts a computation by raising an exception asynchronously and offers features to recover from them such as interrupt-safe resources.

It is provided with an extensive documentation with examples which explains what must be done to ensure one recovers from an interrupt. This documentation summarises for the first time the experience acquired in OCaml in the Coq proof assistant, as well as in other situations in other programming languages.

Trocq is a prototype of a modular parametricity plugin for Coq, aiming to perform proof transfer by translating the goal into an associated goal featuring the target data structures as well as a rich parametricity witness from which a function justifying the goal substitution can be extracted.

The plugin features a hierarchy of parametricity witness types, ranging from structure-less relations to a new formulation of type equivalence, gathering several pre-existing parametricity translations, including univalent parametricity and CoqEAL, in the same framework.

This modular translation performs a fine-grained analysis and generates witnesses that are rich enough to preprocess the goal, yet are not always a full-blown type equivalence, making it possible to perform proof transfer with the power of univalent parametricity while trying not to pull in the univalence axiom in cases where it is not required.

The translation is implemented in Coq-Elpi and features transparent and readable code with respect to a sequent-style theoretical presentation.

In dependent type theory, impredicativity is a powerful logical principle that allows the definition of propositions that quantify over arbitrarily large types, potentially resulting in self-referential propositions. Impredicativity can provide a system with increased logical strength and flexibility, but in return it comes with multiple incompatibility results. In particular, Abel and Coquand showed that adding definitional uniqueness of identity proofs (UIP) to the main proof assistants that support impredicative propositions (Coq and Lean) breaks the normalization procedure, and thus the type-checking algorithm. However, it was not known whether this stems from a fundamental incompatibility between UIP and impredicativity, or if a more suitable algorithm could decide type-checking for a type theory that supports both. In 13, we design a theory that handles both UIP and impredicativity by extending the recently introduced observational type theory TTobs with an impredicative universe of definitionally proof-irrelevant types, as initially proposed in the seminal work on observational equality of 36. We prove decidability of conversion for the resulting system, which we call CCobs, by harnessing proof irrelevance to avoid computing with impredicative proof terms. Additionally, we prove normalization for CCobs in plain Martin-Löf type theory, thereby showing that adding proof-irrelevant impredicativity does not increase the computational content of the theory.

Dependently-typed proof assistants rely crucially on definitional equality, which relates types and terms that are automatically identified in the underlying type theory. In 33, we extend type theory with definitional functor laws, equations satisfied propositionally by a large class of container-like type constructors F.
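For a concrete instance, the functor laws for lists hold propositionally in plain Coq, as the following sketch shows; the point of the work in 33 is to make such equations hold definitionally:

```
Require Import List.

(* Identity law: mapping the identity function is the identity. *)
Lemma map_id {A} (l : list A) : map (fun x => x) l = l.
Proof. induction l as [|a l IH]; simpl; [reflexivity | now rewrite IH]. Qed.

(* Composition law: mapping g after mapping f is mapping their composite. *)
Lemma map_comp {A B C} (f : A -> B) (g : B -> C) (l : list A) :
  map g (map f l) = map (fun x => g (f x)) l.
Proof. induction l as [|a l IH]; simpl; [reflexivity | now rewrite IH]. Qed.
```

Making these equations definitional means casts along them compute silently, without explicit rewriting steps in proofs.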

We report in 15 on a mechanization in the Coq proof assistant of the decidability of conversion and type-checking for Martin-Löf Type Theory (MLTT), extending a previous Agda formalization. Our development proves the decidability not only of conversion, but also of type-checking, using bidirectional derivations that are canonical for typing. Moreover, we wish to narrow the gap between the object theory we formalize (currently MLTT with Π, Σ, N and one universe) and the metatheory used to prove the normalization result, e.g., MLTT, to a mere difference of universe levels. We thus avoid induction-recursion or impredicativity, which are central in previous work. Working in Coq, we also investigate how its features, including universe polymorphism and the metaprogramming facilities provided by tactics, impact the development of the formalization compared to the development style in Agda. The development is freely accessible on GitHub.

In his PhD thesis, defended this year, Antoine Allioux developed the theory of dependent opetopic types and of the universe of polynomial monads, and provided example synthetic proofs of results from higher category theory, including adjunction and representability theorems.

By mimicking the internal sheafification construction in an arbitrary topos, we provide a simple description of the various sheaf-related constructions in a purely type-theoretical setting 34. Assuming mild extensionality properties on our target type theory, we show that it almost results in a syntactic model of CIC. Unfortunately, a well-known topos-theoretic issue carries over, and the resulting theory, called ShTT, does not feature universes. We propose three different solutions to this problem: making the target theory univalent, allowing effects in the source theory, or changing our point of view while using strict propositions. As a side-product, we also give a computational interpretation of sheaves that highlights their deep relationship with the well-known structure of interaction trees.

In formal systems combining dependent types and inductive types, such as the Coq proof assistant, non-terminating programs are frowned upon. They can indeed be made to return impossible results, thus endangering the consistency of the system, although the transient usage of a non-terminating Y combinator, typically for searching for witnesses, is safe. To avoid this issue, the definition of a recursive function is allowed only if one of its arguments is of an inductive type and any recursive call is performed on a syntactically smaller argument. If there is no such argument, the user has to artificially add one, e.g., an accessibility property. Free monads can still be used to address general recursion, and elegant methods make it possible to extract partial functions from sophisticated recursive schemes. The latter, however, rely on an inductive characterization of the domain of a function, and of its computational graph, which in turn might require a substantial effort of specification and proof. This leads to a rather frustrating situation when computations are involved. Indeed, the user first has to formally prove that the function will terminate, then the computation can be performed, and finally a result is obtained (assuming the user waited long enough). But since the computation did terminate, what was the point of proving that it would terminate? In 22, we investigate how users of proof assistants based on variants of the Calculus of Inductive Constructions could benefit from manifestly terminating computations. A companion file showcasing the approach in the Coq proof assistant is available online.
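The syntactic restriction described above can be observed directly: the following sketch is rejected by Coq's guard checker (the Fail command succeeds precisely because the definition is refused), even though the first recursive call always terminates.

```
(* Rejected by Coq's syntactic guard condition: neither recursive call
   is on a syntactically smaller argument, so an artificial argument
   (e.g. an accessibility proof) would be needed to define this. *)
Fail Fixpoint collatz (n : nat) : nat :=
  match n with
  | 0 | 1 => 0
  | _ => if Nat.even n then S (collatz (Nat.div2 n))
         else S (collatz (3 * n + 1))
  end.
```

For such functions, the cost of setting up the accessibility argument is exactly the specification-and-proof effort that the work in 22 seeks to avoid when the computation is run anyway.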

In the context of interactive theorem provers based on a dependent type theory, automation tactics (dedicated decision procedures, calls to automated solvers, etc.) are often limited to goals that fall exactly within some expected logical fragment. This very often prevents users from applying these tactics in other, even similar, contexts. 16 discusses the design and the implementation of pre-processing operations for automating formal proofs in the Coq proof assistant. It presents the implementation of a wide variety of predictable, atomic goal transformations, which can be composed in various ways to target different backends. A gallery of examples illustrates how this significantly expands the power of automation engines.

Since their inception, proof assistants based on dependent type theory have featured some way to quantify over types. Leveraging dependent products, the most common way to do so is to introduce a type of types, known as a universe. Care has to be taken, as paradoxes lurk in the dark. Martin-Löf famously introduced in his seminal type theory MLTT a universe U with the typing rule U : U, which Girard showed to be inconsistent. The standard remedy is to stratify universes into a countable hierarchy U_0 : U_1 : U_2 : ..., so that each universe is a member of the next one, and every quantification over types carries a universe index.

While trivial from the point of view of the typing rules, this additional index is a major source of non-modularity. In 26 we report on the implementation of sort polymorphism in Coq to solve this modularity issue.
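The stratification can be observed directly in Coq; the following is standard behaviour of the system, shown here for context (it is independent of the sort polymorphism of 26):

```coq
Set Printing Universes.
Universe u.

(* Every universe is a member of the next one up:
   [Check Type@{u}.] reports [Type@{u} : Type@{u+1}]. *)
Check Type@{u}.

(* But no universe may contain itself: reinstating [U : U]
   immediately yields a universe inconsistency. *)
Fail Check (Type@{u} : Type@{u}).
```

Keeping all such indices consistent across independently developed libraries is precisely the modularity problem that sort polymorphism addresses.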

Dependently typed languages such as Coq or Agda are very convenient tools to program with strong invariants and to develop mathematical proofs. However, users may be inconvenienced by the fact that n and n+0 are not considered definitionally equal, or by the inability to postulate their own constructs with computation rules, such as exceptions. Coq modulo theory solves the first of these two problems by extending Coq's conversion with decision procedures, e.g., for linear integer arithmetic. Rewrite rules can be used to deal with directed equalities for natural numbers, but also to implement exceptions that compute. They were introduced in Agda a few years ago, and later extended to provide more guarantees with a modular confluence checker. We present in 20 a work-in-progress extension of Coq which supports user-defined rewrite rules. While we mostly follow in the footsteps of the Agda implementation, we also have to face new issues due to differences in the implementation and meta-theory of Coq and Agda, the most prominent one being the treatment of universes, as Coq supports cumulativity but no first-class universe levels.
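To give a taste of the feature, here is a sketch in the syntax of the experimental implementation (rewrite rules must be explicitly enabled, e.g., with the -allow-rewrite-rules flag, and come with no confluence or termination guarantees):

```coq
(* A symbol is a new head constant with no definition of its own;
   its computational behaviour is given by rewrite rules. *)
Symbol pplus : nat -> nat -> nat.
Notation "n ++ m" := (pplus n m).

Rewrite Rules pplus_rules :=
| 0 ++ ?m    => ?m
| ?n ++ 0    => ?n
| S ?n ++ ?m => S (?n ++ ?m)
| ?n ++ S ?m => S (?n ++ ?m).
```

With these rules, `n ++ 0` reduces to `n` definitionally, unlike the standard `n + 0`, which is only propositionally equal to `n`.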

The Mathematical Components library (hereafter, MathComp) provides, among others, a number of mathematical structures organized as hierarchies. Hierarchy Builder (hereafter, HB) is an extension of the Coq proof assistant to ease the development of hierarchies of structures. MathComp 2 is the result of the port of MathComp to HB. 30 is a technical report whose goal is to explain how to port MathComp developments to MathComp 2. It has been written by the participants of the MathComp Documentation Sprint that happened from 2023-05-03 to 2023-05-10.
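For readers unfamiliar with HB, the following hedged sketch shows the declaration style that MathComp 2 is written in: a minimal monoid structure, assuming the hierarchy-builder package and MathComp's ssrfun vocabulary (names chosen for illustration):

```coq
From HB Require Import structures.
From mathcomp Require Import ssrfun.

(* A mixin packs the operations and axioms of a structure. *)
HB.mixin Record isMonoid M := {
  one   : M;
  mul   : M -> M -> M;
  mulA  : associative mul;
  mul1x : left_id one mul;
  mulx1 : right_id one mul;
}.

(* The structure is the class of types equipped with the mixin;
   HB synthesizes the packed records and inference machinery. *)
HB.structure Definition Monoid := { M of isMonoid M }.

(* Concrete types are then registered with [HB.instance], and
   extensions of the hierarchy simply add further mixins. *)
```

Porting a development to MathComp 2 essentially amounts to restating its hand-written packed-class boilerplate in this declarative form.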

In interactive theorem proving, a range of different representations may be available for a single mathematical concept, and some proofs may rely on several representations. Without automated support such as proof transfer, theorems available under different representations cannot be combined without manual input from the user, ranging from light to substantial. Tools for this purpose exist, but in proof assistants based on dependent type theory, proof transfer still requires human effort, whereas it is obvious and often left implicit on paper. In 17, we present Trocq, a new proof transfer framework based on a generalization of the univalent parametricity translation, made possible by a new formulation of type equivalence. This translation takes care to avoid depending on the axiom of univalence for transfers in a delimited class of statements, and may be used with relations that are not necessarily isomorphisms. We motivate and apply our framework on a set of examples designed to show that it unifies several existing proof transfer tools. The article also discusses an implementation of this translation for the Coq proof assistant, in the Coq-Elpi metalanguage.

Coq is built around a well-delimited kernel that performs type checking for definitions in a variant of the Calculus of Inductive Constructions (CIC). Although the metatheory of CIC is very stable and reliable, the correctness of its implementation in Coq is less clear. Indeed, implementing an efficient type checker for CIC is a rather complex task, and many parts of the code rely on implicit invariants which can easily be broken by further evolution of the code. As a consequence, on average, one critical bug has been found every year in Coq. In 35, 14, we present the first implementation of a type checker for the kernel of Coq (without the module system, template polymorphism and some other advanced features), which is proven correct in Coq with respect to a formal specification of its type theory.

We continued previous work establishing a formal link between resource-management features from systems programming (C++/Rust), and ordered (or non-commutative) logic.

We previously showed that there exist algorithms for clean-up functions that are efficient in time and space for ordered algebraic datatypes. We have extended this approach to support abstract types and separate compilation, in order to make it suitable for code generation, as part of Jean Caspar's internship. This addresses a long-standing conceptual problem with compiler-generated clean-up functions causing stack overflows in deep structures.

In 25, we present a functional translation of a subset of safe Rust programs, building upon the results of Aeneas. It preserves linearity and captures a new feature, namely lifetime bounds. This is work in progress: in particular, the translation rules are not yet fixed.

Thanks to the semantic understanding of resources in systems programming languages mentioned in the previous paragraph, we proposed a design for extending functional programming languages towards usages from systems programming, centered around the ML language 27. One motivation is to show the feasibility of supporting linear allocation with re-use 44, 37 inside languages that still leverage state-of-the-art garbage collection for non-linear values. The design includes the possibility to choose the most suitable allocation mode, either with a garbage collector (GC) or with a controlled form of dynamic memory (linear allocation with re-use), based on kinds.

A result by Führmann and by Hasegawa is important in the type-theoretic semantics of side-effects and linearity, as it characterises what it means to be free of side-effects in the context of classical and linear logics. It was widely believed to be true, but a written proof was missing, for want of a conceptual approach to the result. We proposed such a conceptual proof with Éléonore Mangel.

In 21, we prove decidability for contextual equivalence of the

Destructors, responsible for releasing memory and other resources in languages such as C++ and Rust, can lead to stack overflows when releasing a recursive structure that is too deep. In certain cases, it is possible to generate an efficient destructor (non-allocating and tail recursive) using a typed variant of pointer reversal. In 24, we extend this technique by making it more modular, in order to handle abstract types, separate compilation, and unboxed types.

State-separating proofs (SSP) is a recent methodology for structuring game-based cryptographic proofs in a modular way, by using algebraic laws to exploit the modular structure of composed protocols. While promising, this methodology was previously not fully formalized and came with little tool support. We address this by introducing SSProve 12, the first general verification framework for machine-checked state-separating proofs. SSProve combines high-level modular proofs about composed protocols, as proposed in SSP, with a probabilistic relational program logic for formalizing the lower-level details, which together enable constructing machine-checked cryptographic proofs in the Coq proof assistant. Moreover, SSProve is itself fully formalized in Coq, including the algebraic laws of SSP, the soundness of the program logic, and the connection between these two verification styles. To illustrate SSProve, we use it to mechanize the simple security proofs of ElGamal and pseudo-random-function–based encryption. We also validate the SSProve approach by conducting two more substantial case studies: First, we mechanize an SSP security proof of the key encapsulation mechanism–data encryption mechanism (KEM-DEM) public key encryption scheme, which led to the discovery of an error in the original paper proof that has since been fixed. Second, we use SSProve to formally prove security of the sigma-protocol zero-knowledge construction, and we moreover construct a commitment scheme from a sigma-protocol to compare with a similar development in CryptHOL. We instantiate the security proof for sigma-protocols to give concrete security bounds for Schnorr’s sigma-protocol.

The Cantor-Bernstein theorem (CB) from set theory, stating that two sets which can be injectively embedded into each other are in bijection, is inherently classical in its full generality, i.e., it implies the law of excluded middle, a result due to Pradic and Brown 48. Recently, Escardó has provided a proof of CB in univalent type theory, assuming the law of excluded middle. It is natural to ask which restrictions of CB can be proved without axiomatic assumptions. In 19, we give a partial answer to this question, contributing an assumption-free proof of CB restricted to enumerable discrete types, i.e., types which can be treated computationally. In fact, we construct several bijections from injections. The first is obtained by translating a proof of the Myhill isomorphism theorem from computability theory (stating that 1-equivalent predicates are recursively isomorphic) to constructive type theory, where the bijection is constructed in stages and an algorithm with an intricate termination argument is used to extend the bijection at every step. The second is also constructed in stages, but with a simpler extension algorithm sufficient for CB. The third is constructed directly, in such a way that it only relies on the given enumerations of the types, not on the given injections. We aim at keeping the explanations simple, accessible, and concise, in the style of a "proof pearl". All proofs are machine-checked in Coq but should transport to other foundations: they do not rely on impredicativity, on choice principles, or on large eliminations.
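A rendering of the restricted statement can be sketched in a few lines of self-contained Coq (this is our own illustrative formulation of "enumerable discrete types", not necessarily the exact definitions of 19):

```coq
Definition injective {X Y} (f : X -> Y) : Prop :=
  forall x x', f x = f x' -> x = x'.

(* Discrete: equality is decidable. *)
Definition discrete (X : Type) := forall x y : X, {x = y} + {x <> y}.

(* Enumerable: some function [nat -> option X] hits every element. *)
Definition enumerable (X : Type) :=
  exists e : nat -> option X, forall x, exists n, e n = Some x.

(* CB restricted to enumerable discrete types: two injections
   yield a bijection, with no axiomatic assumptions. *)
Definition CB_restricted : Prop :=
  forall X Y, discrete X -> discrete Y ->
  enumerable X -> enumerable Y ->
  (exists f : X -> Y, injective f) ->
  (exists g : Y -> X, injective g) ->
  exists (F : X -> Y) (G : Y -> X),
    (forall x, G (F x) = x) /\ (forall y, F (G y) = y).
```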

We present in 18 a constructive analysis and machine-checked theory of one-one, many-one, and truth-table reductions based on synthetic computability theory in the Calculus of Inductive Constructions, the type theory underlying the proof assistant Coq. We give elegant, synthetic, and machine-checked proofs of Post's landmark results that a simple predicate exists, is enumerable and undecidable, but many-one incomplete (Post's problem for many-one reducibility), and that a hypersimple predicate exists, is enumerable and undecidable, but truth-table incomplete (Post's problem for truth-table reducibility). In synthetic computability, one assumes axioms allowing one to carry out computability theory with all definitions and proofs purely in terms of functions of the type theory, with no mention of a model of computation. Proofs can focus on the essence of the argument without having to sacrifice formality. Synthetic computability also clears the lens for constructivisation. Our constructively careful definition of simple and hypersimple predicates allows us to avoid classical axioms, even Markov's principle, while still yielding the expected strong results.
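The synthetic style can be conveyed by a few one-line Coq definitions; the following is our own rendering, close in spirit to the notions used in 18:

```coq
(* In synthetic computability, all notions are phrased with plain
   functions of the type theory, with no model of computation. *)
Definition decidable {X} (p : X -> Prop) : Prop :=
  exists f : X -> bool, forall x, p x <-> f x = true.

Definition enumerable {X} (p : X -> Prop) : Prop :=
  exists f : nat -> option X,
    forall x, p x <-> exists n, f n = Some x.

(* Many-one reducibility: a function of the theory translating
   questions about [p] into questions about [q]. *)
Definition many_one_red {X Y} (p : X -> Prop) (q : Y -> Prop) : Prop :=
  exists f : X -> Y, forall x, p x <-> q (f x).
```

Since every such `f` is a function of the type theory, there is no need for Turing machines or μ-recursive functions anywhere in the development.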

In 23, we discuss the formalization of proofs "by diagram chasing", a standard technique for proving properties in abelian categories. We discuss how the essence of diagram chases can be captured by a simple many-sorted first-order theory, and we study the models and decidability of this theory. The longer-term motivation of this work is the design of a computer-aided instrument for writing reliable proofs in homological algebra, based on interactive theorem provers.

Using order structures in a proof assistant naturally raises the problem of working with multiple instances of the same structure over a common type of elements. This goes against the main design pattern of the hierarchies used, for instance, in Coq's MathComp or Lean's mathlib libraries, where types are canonically associated to at most one instance and instances share a common overloaded syntax. In 31, we present new design patterns to address these issues, and apply them to the formalization of order structures in the MathComp library. A common idea in these patterns is underloading, i.e., a disambiguation of operators on a common type. In addition, our design patterns include a convenient way to deal with duality in order structures. We hence formalize a large hierarchy which includes partial orders, semilattices, lattices, as well as many variants. We finally pay special attention to order substructures. We introduce a new kind of structure called prelattices. They are abstractions of semilattices, and allow us to deal with finite lattices and their sublattices within a common signature. As an application, we report on significant simplifications of the formalization of the face lattices of polyhedra in the Coq-Polyhedra library.

One of the central claims to fame of the Coq proof assistant is extraction, i.e., the ability to obtain efficient programs in industrial programming languages such as OCaml, Haskell, or Scheme from programs written in Coq's expressive dependent type theory. Extraction is of great practical usefulness, used crucially, e.g., in the CompCert project. However, for executables obtained by extraction, the extraction process is part of the trusted code base (TCB), as are Coq's kernel and the compiler used to compile the extracted code. The extraction process contains intricate semantic transformations of programs that rely on subtle operational features of both the source and target language. Its code has also evolved since the last theoretical exposition, in the seminal PhD thesis of Pierre Letouzey. Furthermore, while the exact correctness statements for the execution of extracted code are described clearly in the academic literature, interoperability with unverified code has never been investigated formally, and yet it is used in virtually every project relying on extraction. In 32, we describe the development of a novel extraction pipeline from Coq to OCaml, implemented and verified in Coq itself, with a clear correctness theorem and guarantees for safe interoperability. We build our work on the MetaCoq project, which aims at decreasing the TCB of Coq's kernel by reimplementing it in Coq itself and proving it correct w.r.t. a formal specification of Coq's type theory in Coq. Since OCaml does not have a formal specification, we make use of the Malfunction project, which specifies the semantics of the intermediate language of the OCaml compiler. Our work fills some gaps in the literature and highlights important differences between the operational semantics of Coq programs and their extracted variants.
In particular, we focus on the guarantees that can be provided for interoperability with unverified code, identify guarantees that are infeasible to provide, and raise interesting open questions regarding semantic guarantees that could be provided. As a central result, we prove that extracted programs over first-order data types are correct and can safely interoperate, whereas for higher-order programs even simple interoperations can lead to incorrect behaviour and even outright segfaults.
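For context, here is what extraction looks like from the user's point of view with the standard (unverified) mechanism shipped with Coq; the verified pipeline of 32 provides the same workflow with end-to-end guarantees:

```coq
From Coq Require Import Extraction.

(* A small verified-by-construction Coq program. *)
Fixpoint sum (l : list nat) : nat :=
  match l with
  | nil => 0
  | cons x t => x + sum t
  end.

Extraction Language OCaml.

(* Prints OCaml definitions for [nat], [list] and [sum];
   note that [nat] is extracted as a unary datatype (O / S),
   a first-order type for which safe interoperation holds. *)
Recursive Extraction sum.
```

The first-order/higher-order distinction in the result above matters precisely at this boundary: passing an unverified OCaml function back into extracted code is where the guarantees break down.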

FRESCO project on cordis.europa.eu

The use of computers for formulating conjectures, but also for substantiating proof steps, pervades mathematics, even in its most abstract fields. Most computer proofs are produced by symbolic computations, using computer algebra systems. Sadly, these systems suffer from severe, intrinsic flaws, key to their amazing efficiency, but preventing any flavor of post-hoc verification.

But can computer algebra become reliable while remaining fast? Bringing a positive answer to this question represents an outstanding scientific challenge per se, which this project aims at solving.

Our starting point is that interactive theorem provers are the best tools for representing mathematics in silico. But we intend to disrupt their architecture, shaped by decades of applications in computer science, so as to dramatically enrich their programming features, while remaining compatible with their logical foundations.

We will then design a novel generation of mathematical software, based on the firm grounds of modern programming language theory. This environment will feature a new, high-level, performance-oriented programming language, devised for writing efficient and correct code easily, and for serving the frontline of research in computational mathematics. Users will have access to fast implementations, and to powerful proving technologies for verifying any component à la carte, with high productivity. Logic- and computer-based formal proofs will prevent run-time errors, and incorrect mathematical semantics.

We will maintain a close, continuous collaboration with interested high-profile mathematicians on the verification of cutting-edge research results, today beyond the reach of formal proofs. We aim to empower mathematical journals to set up high-quality artifact evaluation for the cases where peer-reviewing falls short of assessing computer proofs. This project will eventually impact the use of formal methods in engineering, in areas like cryptography or signal processing.

Coqaml project on cordis.europa.eu

The Coq proof assistant is a popular tool to verify the correctness of security-critical software. The CompCert C compiler, some implementations of blockchain languages, and the implementation of the P-256 elliptic curve in Google’s BoringSSL library are all OCaml programs obtained by extraction from Coq functions.

While a type checker for Coq has recently been verified via a machine-checked mathematical proof based on the MetaCoq project for verified meta-programming, the extraction process from Coq to OCaml is still part of the trusted computing base (TCB).

The Coqaml project will minimise the TCB for extracted programs even further by also providing a machine-checked correctness proof for the extraction mechanism to OCaml. Under the supervision of Nicolas Tabareau, head of the Inria Gallinette team in Nantes, the experienced researcher (ER) will implement Coq's extraction as a mechanically verified MetaCoq plugin, obtaining the guarantee that extracted OCaml programs behave exactly as specified by the original Coq functions.

In order to be usable in industrial applications, Coqaml will include a novel extraction targeting generalized algebraic datatypes (GADTs) in OCaml. The project includes a secondment of the ER to Nomadic Labs in Paris, who require GADTs as a target for Coq's extraction. The intermediate semantic correctness proof for type and proof erasure, which allows axioms like functional extensionality or proof irrelevance in verified programs, can also be exploited in other extraction projects such as the CertiCoq compiler from Coq to C code.

The Coqaml project is interdisciplinary by design, spanning logic, type theory, programming languages, and compilers. The density of some of the world's leading experts on Coq and type theory in the Gallinette team, together with the expertise at Nomadic Labs, will ensure an environment ideal for the success of the Coqaml project and for the development of the ER, greatly enhancing his future career prospects.

Assia Mahboubi has served in the scientific committee of the GdR Informatique Mathématique.

Assia Mahboubi is an elected member of the Commission d'Évaluation Inria and member of the conseil de laboratoire of the Laboratoire des Sciences du Numérique (LS2N).

Guilhem Jaber participated in the production of a popular science magazine during a residency of journalists from "Les Autres Possible".

Assia Mahboubi co-coordinates a joint computer science and mathematics departments program for Art+Science actions in schools at Nantes Université.

Assia Mahboubi has given a talk at the Université Ouverte Lyon 1 (Villeurbanne).