The EPI Gallinette aims at developing a new generation of proof assistants, with the belief that practical experiments must go hand in hand with foundational investigations:

Software quality is a requirement that is becoming more and more prevalent,
by now far exceeding the traditional scope of embedded systems. The
development of tools to construct software that respects a given specification
is a major challenge in computer science. Proof assistants
such as Coq 57 provide a formal method whose central
innovation is to produce certified programs by transforming
the very activity of programming. Programming and proving are merged
into a single development activity, informed by an elegant but rigid
mathematical theory inspired by the correspondence between programming,
logic and algebra: the Curry-Howard correspondence. For the
certification of programs, this approach has shown its effectiveness
in the development of important pieces of certified software such
as the C compiler of the CompCert project 88.
The extracted CompCert compiler is reliable and efficient, running
only 15% slower than GCC 4 at optimisation level 2 (gcc -O2),
a level of optimisation that was previously considered too unreliable
for critical applications such as embedded systems.

Proof assistants can also be used to formalise mathematical
theories: they not only provide a means of representing mathematical
theories in a form amenable to computer processing, but their internal
logic provides a language for reasoning about such theories. In the
last decade, proof assistants have been used to verify extremely large
and complicated proofs of recent mathematical results, sometimes requiring
either intensive computations 69, 73
or intricate combinations of a multitude of mathematical theories
68. But formalised mathematics is more than just proof
checking and proof assistants can help with the organisation of mathematical
knowledge or even with the discovery of new constructions and proofs.

Unfortunately, the rigidity of the theory behind proof assistants restricts their expressiveness both as programming languages and as logical systems. For instance, a program extracted from Coq only uses a purely functional subset of OCaml, leaving behind important means of expression such as side-effects and objects. Limitations also appear in the formalisation of advanced mathematics: proof assistants do not cope well with classical axioms such as excluded middle and choice which are sometimes used crucially. The fact of the matter is that the development of proof assistants cannot be dissociated from a reflection on the nature of programs and proofs coming from the Curry-Howard correspondence. In the EPC Gallinette, we propose to address several limitations of proof assistants by pushing the boundaries of this correspondence.

In the 1970's, the Curry-Howard correspondence was seen as a perfect match between functional programs, intuitionistic logic, and Cartesian closed categories. It received several generalisations over the decades, and now it is more widely understood as a fertile correspondence between computation, logic, and algebra.

Nowadays, the Curry-Howard correspondence is not perceived as a perfect match anymore, but rather as a collection of theories meant to explain similar structures at work in logic and computation, underpinned by mathematical abstractions. By relaxing the requirement of a perfect match between programs and proofs, and instead emphasising the common foundations of both, the insights of the Curry-Howard correspondence may be extended to domains for which the requirements of programming and mathematics may in fact be quite different.

Consider the following two major theories of the past decades, which were until recently thought to be irreconcilable:

We now outline a series of scientific challenges aimed at understanding type theory, effects, and their combination.

More precisely, three key axes of improvement have been identified:

To validate the new paradigms, we propose in Section 3.5 three particular application fields in which members of the team already have a strong expertise: code refactoring, constraint programming and symbolic computation.

The democratisation of proof assistants based on type theory has likely been impeded by one central problem: the mismatch between the conception of equality in mathematics and its formalisation in type theory. Indeed, some basic principles that are used implicitly in mathematics—such as Church’s principle of propositional extensionality, which says that two propositions are equal when they are logically equivalent—are not derivable in type theory. Even more problematically, from a computer science point of view, the basic concept of two functions being equal when they are equal at every “point” of their domain is also not derivable: rather, it must be added as an additional axiom. Of course, these principles are consistent with type theory so that working under the corresponding additional assumptions is safe. But the use of these assumptions in a definition potentially clutters its computational behaviour: since axioms are computational black boxes, computation gets stuck at the points of the code where they have been used.
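As a concrete illustration, here is a minimal Coq sketch of these two principles, which must be postulated rather than proved (the standard library states them as axioms in the modules Coq.Logic.PropExtensionality and Coq.Logic.FunctionalExtensionality):

    (* Neither principle is derivable in Coq; both must be assumed. *)
    Axiom propositional_extensionality :
      forall P Q : Prop, (P <-> Q) -> P = Q.

    Axiom functional_extensionality :
      forall (A B : Type) (f g : A -> B),
        (forall x, f x = g x) -> f = g.

    (* Being axioms, these constants carry no computational content: a term
       built from them does not reduce further, which is how computation
       "gets stuck" at their use sites. *)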

We propose to investigate how expressive logical transformations such as forcing 79 and the sheaf construction might be used to enhance the computational and logical power of proof assistants—with a particular emphasis on their implementation in the Coq proof assistant by means of effective translations (or compilation phases). One of the main topics of this task, in connection with the ERC project CoqHoTT, is the integration in Coq of new concepts inspired by homotopy type theory 109, such as the univalence principle and higher inductive types.

In the Coq proof assistant, proof irrelevance, the principle that any two
proofs of the same proposition are equal, can be expressed formally;
supporting this principle by construction is the raison d'être of a
dedicated sort of propositions.
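In recent versions of Coq, the sort SProp of definitionally proof-irrelevant propositions realises this principle by conversion. A minimal sketch, assuming a Coq version with SProp enabled:

    (* Any two inhabitants of a strict proposition are convertible, so a
       proof can be transported between them without any rewriting. *)
    Inductive sUnit : SProp := stt.

    Definition irrelevant (P : sUnit -> Type) (p q : sUnit) (h : P p) : P q := h.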

The univalence principle is becoming widely accepted as a very promising
avenue to provide new foundations for mathematics and type theory.
However, this principle has not yet been incorporated into a proof
assistant. Indeed, the very mathematical structures (known as
∞-groupoids) on which its semantics is based are extremely complex.

Hence a major objective is to achieve a complete internalisation of univalence in intensional type theory, including its integration into a new version of Coq. We will strive to keep compatibility with previous versions, in particular from a performance point of view. Indeed, the additional complexity of homotopy type theory should not induce an overhead in the type checking procedure used by the software if we want our new framework to be rapidly adopted by the community. Concretely, we will make sure that the compilation time of Coq's Standard Library remains of the same order of magnitude.
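For reference, a minimal sketch of how univalence can be stated in Coq, as an axiom and with a simplified (quasi-inverse) notion of equivalence; the full principle is stronger, asserting that the canonical map from equalities A = B to equivalences between A and B is itself an equivalence:

    Record Equiv (A B : Type) := {
      equiv_fun  : A -> B;
      equiv_inv  : B -> A;
      equiv_sect : forall x, equiv_inv (equiv_fun x) = x;
      equiv_retr : forall y, equiv_fun (equiv_inv y) = y
    }.

    (* Stated as an axiom, univalence is a computational black box, which is
       precisely the obstacle this objective aims to remove. *)
    Axiom univalence : forall A B : Type, Equiv A B -> A = B.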

Extending the power of a logic using model transformations (e.g.,
forcing transformation 80, 79 or the sheaf
construction 112) is a classic topic of mathematical
logic 53, 86. However, these ideas have
not been much investigated in the setting of type theory, even though
they may provide a useful framework for extending the logical power
of proof assistants in a modular way. There is a good reason for this:
with a syntactic notion of equality, the underlying structure of type
theory does not conform to the structure of topos used in mathematical
logic. A direct incorporation of the standard techniques is therefore
not possible. However, a univalent notion of equality brings type
theory closer to the required algebraic structure, as it corresponds
to the notion of equality found in (higher) toposes.

The Gallinette project advocates the use of distinct compilation phases as a methodology for the design of a new generation of proof assistants featuring modular extensions of a core logic. The essence of a compiler is the separation of the complexity of a translation process into modular stages, and the organization of their re-composition. This idea finds a natural application in the design of complex proof assistants (Figure 1). For instance, the definition of type classes in Coq follows this pattern, and is morally given by means of a translation into a type-class-free kernel. More recently, a similar approach by compilation stages, using the forcing transformation, was used to relax the strict positivity condition guarding inductive types 80, 79. We believe that this flavour of compilation-based strategies offers a promising direction of investigation for the purpose of defining a decidable type checking algorithm for HoTT.
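As a small illustration of this translation-based view, Coq type classes elaborate to ordinary records and functions of a class-free kernel, and instance resolution amounts to dictionary passing (the class and instance below are illustrative):

    Class Eqb (A : Type) := eqb : A -> A -> bool.

    #[export] Instance Eqb_bool : Eqb bool :=
      fun x y => match x, y with
                 | true, true | false, false => true
                 | _, _ => false
                 end.

    (* [eqb x y] below elaborates to an ordinary application [@eqb A d x y],
       where the dictionary [d] is found by instance resolution. *)
    Definition same {A} `{Eqb A} (x y : A) : bool := eqb x y.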

We propose the incorporation of effects in the theory of proof assistants at a foundational level. Not only would this allow for certified programming with effects, but it would moreover have implications for both semantics and logic.

We mean effects in a broad sense that encompasses both Moggi's
monads 100 and Girard's linear
logic 66. These two seminal works have given rise to respective
theories of effects (monads) and resources (co-monads). Recent advances,
however, have unified these two lines of thought: it is now clear that
the defining feature of effects, in the broad sense, is sensitivity
to evaluation order 89, 58.

In contrast, the type theory that forms the foundations of proof assistants
is based on pure functional computation; its combination with effects has
remained “the next difficult step [...] currently under investigation” 101.

Any realistic program contains effects: state, exceptions, input-output.
More generally, evaluation order may simply be important for complexity
reasons. With this in mind, many works have focused on certified programming
with effects: notably Ynot 105, and more recently

We propose to develop the foundations of a type theory with effects
taking into account the logical and semantic aspects, and to study
their practical and theoretical consequences. A type theory that integrates
effects would have logical, algebraic and computational implications
when viewed through the Curry-Howard correspondence. For instance,
effects such as control operators establish a link with classical
proof theory 71. Indeed, control
operators provide computational interpretations of type isomorphisms
such as

The goal is to develop a type theory with effects that accounts both for practical experiments in certified programming, and for clues from denotational semantics and logical phenomena, in a unified setting.

A crucial step is the integration of dependent types with effects,
a topic which has remained “currently under investigation”
101 ever since the beginning. The difficulty resides
in expressing the dependency of types on terms that can perform side-effects
during the computation. On the side of denotational semantics, several
extensions of categorical models for effects with dependent types
have been proposed 33, 120 using axioms that should
correspond to restrictions in terms of expressivity but whose practical
implications, however, are not immediately transparent. On the side
of logical approaches 75, 76, 87, 99,
one first considers a drastic restriction to terms that do not compute,
which is then relaxed by semantic means. On the side of systems for
certified programming such as

Thus, the recurring idea is to introduce restrictions on the dependency
in order to establish an encapsulation of effects. In our approach,
we seek a principled description of this idea by developing the concept
of semantic value (thunkables, linears) which arose from foundational
considerations 65, 115, 103 and
whose relevance was highlighted in recent works 90, 107.
The novel aspect of our approach is to seek a proper extension of
type theory which would provide foundations for a classical type theory
with axiom of choice in the style of Herbelin 75,
but which moreover could be generalised to effects other than just
control by exploiting an abstract and adaptable notion of semantic
value.

In our view, the common idea that evaluation order does not matter
for pure and terminating computations should serve as a bridge between
our proposals for dependent types in the presence of effects and traditional
type theory. Building on the previous goal, we aim to study the relationship
between semantic values, purity, and parametricity theorems 114, 67.
Our goal is to characterise parametricity as a form of intuitionistic
depolarisation following the method by which the first game model
of full linear logic was given (Melliès 96, 97).
We have two expected outcomes in mind: enriching type theory with
intensional content without losing its properties, and giving an explanation
of the dependent types in the style of Idris and

An integrated type theory with effects requires an understanding of
evaluation order from the point of view of rewriting. For instance,
rewriting properties can entail the decidability of some conversions,
allowing the automation of equational reasoning in types 31.
They can also provide proofs of computational consistency (that terms
are not all equivalent) by showing that extending calculi with new
constructs is conservative 117. In our approach,
the

One goal is to prove computational consistency or decidability of conversions purely using advanced rewriting techniques following a technique introduced in 117. Another goal is the characterisation of weak reductions: extensions of the operational semantics to terms with free variables that preserve termination, whose iteration is equivalent to strong reduction 32, 62. We aim to show that such properties derive from generic theorems of higher-order rewriting 113, so that weak reduction can easily be generalised to richer systems with effects.

Proof theory and rewriting are a source of coherence theorems
in category theory, which show how calculations in a category can
be simplified with an embedding into a structure with stronger properties
94, 85. We aim to explore such results
for categorical models of effects 89, 58. Our key
insight is to consider the reflection between indirect and direct
models 65, 103 as a coherence theorem:
it allows us to embed the traditional models of effects into structures
for which the rewriting and proof-theoretic techniques from the previous
section are effective.

Building on this, we are further interested in connecting operational
semantics to 2-category theory, in which a second dimension is traditionally
considered for modelling conversions of programs rather than equivalences.
This idea has been successfully applied for the

The unified theory of effects and resources 58 prompts an investigation into the semantics of safe and automatic resource management, in the style of Modern C++ and Rust. Our goal is to show how advanced semantics of effects, resources, and their combination arise by assembling elementary blocks, pursuing the methodology applied by Melliès and Tabareau in the context of continuations 98. For instance, combining control flow (exceptions, return) with linearity allows us to describe in a precise way the “Resource Acquisition Is Initialisation” idiom, in which resource safety is ensured by scope-based destructors. A further step would be to reconstruct uniqueness types and borrowing using similar ideas.

The development of tools to construct software systems that respect
a given specification is a major challenge of current and future research
in computer science. Certified programming with dependent types has
recently attracted a lot of interest, and Coq is the de facto
standard for such endeavours, with an increasing number of users,
pedagogical resources, and large-scale projects. Nevertheless, significant
work remains to be done to make Coq more usable from a software engineering
point of view. The Gallinette team proposes to make progress on three
lines of work: (i) the development of gradual certified programming,
(ii) the integration of imperative features and object polymorphism
in Coq, and (iii) the development of robust tactics for proof engineering
for the scaling of formalised libraries.

One of the main issues faced by a programmer starting to internalise in a proof assistant code written in a more permissive world is that type theory is constrained by a strict type discipline which lacks flexibility. Concretely, as soon as you start giving a more precise type/specification to a function, the rest of the code interacting with this function needs to be more precise too. To address this issue, the Gallinette team will put strong efforts into the development of gradual typing in type theory to allow progressive integration of code that comes from a more permissive world.
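A small Coq example of this rigidity (standard Coq, no gradual features): refining an argument type to a subset type forces every caller to provide a proof, whereas a gradual discipline would let such obligations be deferred to dynamic checks.

    (* A more precise type for the input of a function... *)
    Definition pos := { n : nat | 0 < n }.

    Definition pred_pos (p : pos) : nat := Nat.pred (proj1_sig p).

    (* ...forces every call site to become more precise as well. *)
    Definition two : pos := exist _ 2 (le_n_S 0 1 (le_0_n 1)).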

Indeed, on the way to full verification, programmers can take advantage of a gradual approach in which some properties are simply asserted instead of proven, subject to dynamic verification. Tabareau and Tanter have made preliminary progress in this direction 119. This work, however, suffers from a number of limitations, the most important being the lack of a mechanism for handling the possibility of runtime errors within Coq. Instead of relying on axioms, this project will explore applying the results of Section 3.3 to embed effects in Coq. This way, instead of postulating axioms for parts of the development that are too hard or too marginal to be dealt with, the system adds dynamic checks. Then, after extraction, we obtain a program that corresponds to the initial one, but with dynamic checks for the parts that have not been proven, ensuring that the program raises an error instead of stepping outside its specification.

This will yield new foundations of gradual certified programming, both more expressive and practical. We will also study how to integrate previous techniques with the extraction mechanism of Coq programs to OCaml, in order to exploit the exception mechanism of OCaml.

Abstract data types (ADTs) become useful as the size of programs grows, since they provide a modular approach, allowing abstractions about data to be expressed and then instantiated. Moreover, ADTs are natural concepts in the calculus of inductive constructions. But while it is easy to declare an ADT, it is often difficult to implement an efficient one. Compare this situation with, for example, Okasaki's purely functional data structures 106, which implement ADTs like the queues found in languages with imperative features. Of course, Okasaki's queues enforce some additional properties for free, such as persistence, but the programmer may prefer to use and to study a simpler implementation without those additional properties. In certified symbolic computation as well (see 3.5.3), an efficient functional implementation of ADTs is often not available, and efficiency is a major challenge in this area. Relying on the theoretical work done in 3.3, we will equip Coq with imperative features and we will demonstrate how they can be used to provide efficient implementations of ADTs. However, imperative implementations are often hard to reason about, requiring for instance the use of separation logic. In that case, we can benefit from recent work on the integration of separation logic in the Coq proof assistant, in particular the Iris project.
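For instance, a queue ADT can be declared abstractly as a module signature (a sketch with illustrative names); the point above is that providing an efficient implementation of such a signature is much easier with imperative features:

    Module Type QUEUE.
      Parameter t : Type -> Type.
      Parameter empty : forall A, t A.
      Parameter enqueue : forall A, A -> t A -> t A.
      Parameter dequeue : forall A, t A -> option (A * t A).
    End QUEUE.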

Object-oriented programming has evolved since its founding idea of representing computations as an exchange of messages between objects. In modern programming languages like Scala, which aims at a synthesis between object-oriented and functional programming, object-orientation concretely results in the use of hierarchies of interfaces ordered by the subtyping relation, together with interface implementations that can interoperate. As observed by Cook and Aldrich 56, 35, interoperability can be considered the essential feature of objects and is a requirement for many modern frameworks and ecosystems: it means that two different implementations of the same interface can interoperate.

Our objective is to provide a representation of object-oriented programs, by focusing on subtyping and interoperability.

For subtyping, the natural solution in type theory is coercive subtyping 92, as implemented in Coq, with an explicit operator for coercions. This should lead to a shallow embedding, but has limitations: indeed, while it allows subtyping to be faithfully represented, it does not provide a direct means to represent union and intersection types, which are often associated with subtyping (for instance, intersection types are present in Scala). A more ambitious solution would be to resort to subsumptive subtyping (or semantic subtyping 64): in its more general form, a type algebra is extended with boolean operations (union, intersection, complement) to get a boolean algebra with operators (the original type constructors). Subtyping is then interpreted as the natural partial order of the boolean algebra.
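A minimal sketch of coercive subtyping as it exists in Coq today, with illustrative types: the coercion is an explicit function, declared once, and inserted by the elaborator wherever the supertype is expected.

    Record Shape  := { area : nat }.
    Record Square := { side : nat }.

    Definition square_to_shape (s : Square) : Shape :=
      {| area := side s * side s |}.
    Coercion square_to_shape : Square >-> Shape.

    (* A Square can now be used where a Shape is expected. *)
    Definition total_area (s1 s2 : Shape) : nat := area s1 + area s2.
    Definition example (sq : Square) : nat := total_area sq sq.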

We propose to use the type class machinery of Coq to implement semantic subtyping for dependent type theory. Using type class resolution, we can emulate the inference rules of subsumptive subtyping without modifying Coq internally. This has another advantage: as subsumptive subtyping for dependent types is expected to be undecidable in general, using type class resolution allows for an incomplete yet extensible decision procedure.
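A hypothetical sketch of this encoding, with illustrative names: the subtyping judgement is represented by a type class, so that adding instances extends the (incomplete) decision procedure implemented by resolution.

    Class SubType (A B : Type) := coerce : A -> B.

    #[export] Instance sub_refl (A : Type) : SubType A A :=
      fun x => x.

    (* Structural rules are added as further instances, e.g. for pairs. *)
    #[export] Instance sub_pair (A A' B B' : Type)
      `{SubType A A'} `{SubType B B'} : SubType (A * B) (A' * B') :=
      fun p => (coerce (fst p), coerce (snd p)).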

When developing certified software, a major part of the effort is
spent not only on writing proof scripts, but on rewriting them,
either for the purpose of code maintenance or because of more significant
changes in the base definitions. Regrettably, proof scripts suffer
more often than not from a bad programming style, and too many proof
developers casually neglect the most elementary principles of well-behaved
programmers. As a result, many proof scripts are very brittle, user-defined
tactics are often difficult to extend, and sometimes even lack a clear
specification. Formal libraries are thus generally very fragile pieces
of software. One reason for this unfortunate situation is that proof
engineering is very badly served by the tools currently available
to the users of the Coq proof assistant, starting with its tactic
language. One objective of the Gallinette team is to develop better
tools to write proof scripts.

Completing and maintaining a large corpus of formalised mathematics
requires a well-designed tactic language. This language should both
accommodate the possible specific needs of the theories at stake,
and help with diagnostics at refactoring time. Coq's tactic language
is in fact two-leveled. First, it includes a basic tactic language,
to organise the deductive steps in a proof script and to perform the
elementary bureaucracy. Its second layer is a meta-programming language,
which allows users to define their own new tactics at toplevel. Our
first direction of work consists in the investigation of the appropriate
features of the basic tactic language. For instance, the design
of the Ssreflect tactic language, and its support for the small scale
reflection methodology 70, has been a
key ingredient in at least two large scale formalisation endeavours:
the Four Colour Theorem 69 and the Odd Order
Theorem 68. Building on our experience with the Ssreflect
tactic language, we will contribute to the ongoing work on the basic
tactic language for Coq. The second objective of this task is to contribute
to the design of a typed tactic language. In particular, we
will build on the work of Ziliani and his collaborators 121,
extending it with reasoning about the effects that tactics have on
the “state of a proof” (e.g. number of sub-goals, metavariables in
context). We will also develop a novel approach for incremental type
checking of proof scripts, so that programmers gain access to a richer
discovery—engineering interaction with the proof assistant.
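A small illustration of this two-level structure (illustrative example): basic tactics organise individual deductive steps, while the meta-programming layer, here plain Ltac, lets users package them into new tactics whose specification and robustness are precisely what is at stake above.

    (* A user-defined tactic combining two basic tactics from the prelude. *)
    Ltac bound_nat := repeat apply le_n_S; apply le_0_n.

    Lemma three_le_five : 3 <= 5.
    Proof. bound_nat. Qed.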

The first three axes of the EPC Gallinette aim at developing a new generation of proof assistants. But we strongly believe that foundational investigations must go hand in hand with practical experiments. We therefore expect to benefit from the existing expertise and collaborations in the team to experiment with our extensions of Coq on real-world developments. These practical experiments are strongly guided by team members' long history of research in software engineering.

In the context of refactoring of C programs, we intend to formalise program transformations written in an imperative style, in order to test the usability of our addition of effects to the proof assistant. This subject has been chosen based on the expertise of team members.

We are currently working on the formalisation of refactoring tools in Coq 55. Automatic refactoring of programs in industrial languages is difficult because of the large number of potential interactions between language features, which are difficult to predict and to test. Indeed, all available refactoring tools suffer from bugs: they fail to ensure that the generated program has the same behaviour as the input program. To cope with that difficulty, we have chosen to build a refactoring tool with Coq: a program transformation is written in the Coq programming language, then proven correct on all possible inputs, and then an executable OCaml program is generated by the platform. We rely on the CompCert C formalisation of the C language; CompCert is currently the most complete formalisation of an industrial language, which justifies this choice. We have three goals in that project:

We plan to make use of the internalisation of the object-oriented paradigm in the context of constraint programming. Indeed, this domain relies on very complex algorithms that are often developed using object-oriented programming (as is the case, for instance, for CHOCO, which is developed in the Tasc group at IMT Atlantique, Nantes). We will in particular focus on filtering algorithms in constraint solvers, for which research publications currently propose new algorithms with manual proofs; their formalisation in Coq is challenging. Another interesting part of constraint solving to formalise is the part that deals with program generation (as opposed to extraction). However, when there are numerous generated pieces of code, it is not realistic to prove their correctness manually, and it can be too difficult to prove the correctness of a generator. We therefore intend to explore a middle path that consists in generating a piece of code along with its corresponding proof (script or proof term). A target application could be interval constraints (for instance Allen's interval algebra or the region connection calculus), which can generate thousands of specialised filtering algorithms for a small number of variables 42.

Finally, Rémi Douence has already worked with several members of the Tasc team (co-authored publications 72, 111, 61, 44, 39, 38, and PhD thesis supervision 110, 40). He currently co-supervises, with Nicolas Beldiceanu, the PhD thesis of Jovial Cheukam Ngouonou in the Tasc team, who studies graph invariants to enhance learning algorithms. This work requires proofs, done manually for now; we would like to explore when these proofs could be mechanized.

We will investigate how the addition of effects in the Coq proof assistant can facilitate the marriage of computer algebra with formal proofs. Computer algebra systems and proof assistants are both designed for doing mathematics with the help of a computer, by the means of symbolic computations. These two families of systems are however very different in nature: computer algebra systems allow for implementations faithful to the theoretical complexity of the algorithms, whereas proof assistants have the expressiveness to specify exactly the semantics of the data-structures and computations. By adding effects in the Coq proof assistant, it should be possible to specify and certify computer algebra routines in Coq, thus getting efficiency at the same time as formal correctness.

Experiments have been run that link computer algebra systems with Coq 60, 50. These bridges rely on the implementation of formal proof-producing core algorithms such as normalisation procedures. In addition, they require non-trivial maintenance work to survive the evolution of both systems. Other proof assistants, like the Isabelle/HOL system, make use of so-called reflection schemes: the proof assistant can produce code in an external programming language like SML, but also allows the values output by these extracted programs to be imported back inside formal proofs. This feature extends the trusted code base quite significantly, but it has been used for major achievements like a certified symbolic/numeric ODE solver 78.

We would like to bring Coq closer to the efficiency and user-friendliness of computer algebra systems: for now, it is difficult to write Coq programs so that certified implementations of computer algebra algorithms have the right observable complexity when they are executed inside Coq. We see the addition of effects to the proof assistant as an opportunity to ease these implementations, for instance by making use of caching mechanisms or of profiling facilities. Such enhancements should enable the verification of computation-intensive mathematical proofs that are currently beyond reach, like the validation of Helfgott's proof of the weak Goldbach conjecture 74.

Acknowledging the ordeal of a fully formal development in a proof assistant such as Coq, we have investigated in 12 gradual variations on the Calculus of Inductive Constructions (CIC) for swifter prototyping with imprecise types and terms. We observe, with a no-go theorem, a crucial tradeoff between graduality and the key properties of canonicity, decidability, and closure of universes under dependent product that CIC enjoys. Beyond this Fire Triangle of Graduality, we explore the gradualization of CIC with three different compromises, each relaxing one edge of the Fire Triangle. We develop a parametrized presentation of Gradual CIC that encompasses all three variations, and jointly develop their metatheory. We first present a bidirectional elaboration of Gradual CIC to a dependently-typed cast calculus, which elucidates the interrelation between typing, conversion, and graduality. We then establish the metatheory of this cast calculus through both a syntactic model into CIC, which provides weak canonicity, confluence, and, when applicable, normalization, and a monotone model that supports the study of the graduality of two of the three variants. This work informs and paves the way towards the development of malleable proof assistants and dependently-typed programming languages.

Gradualizing the Calculus of Inductive Constructions (CIC) involves dealing with subtle tensions between normalization, graduality, and conservativity with respect to CIC. Recently, GCIC has been proposed as a parametrized gradual type theory that admits three variants, each sacrificing one of these properties. For devising a gradual proof assistant based on CIC, normalization and conservativity with respect to CIC are key, but the tension with graduality needs to be addressed. Additionally, several challenges remain: (1) The presence of two wildcard terms at any type—the error and unknown terms—enables trivial proofs of any theorem, jeopardizing the use of a gradual type theory in a proof assistant; (2) Supporting general indexed inductive families, most prominently equality, is an open problem; (3) Theoretical accounts of gradual typing and graduality so far do not support handling type mismatches detected during reduction; (4) Precision and graduality are external notions not amenable to reasoning within a gradual type theory. All these issues manifest primally in CastCIC, the cast calculus used to define GCIC. In 13, we present an alternative to CastCIC called GRIP. GRIP is a reasonably gradual type theory that addresses the issues above, featuring internal precision and general exception handling. For consistent reasoning about gradual terms, GRIP features an impure sort of types inhabited by errors and unknown terms, and a pure sort of strict propositions. By adopting a novel interpretation of the unknown term that carefully accounts for universe levels, GRIP satisfies graduality for a large and well-defined class of terms, in addition to being normalizing and a conservative extension of CIC. Internal precision supports reasoning about graduality within GRIP itself, for instance to characterize gradual exception-handling terms, and supports gradual subset types. We develop the metatheory of GRIP using a model formalized in Coq, and provide a prototype implementation of GRIP in Agda.

Building on the recent extension of dependent type theory with a universe of definitionally proof-irrelevant types, we introduce in 14

In dependent type theory, impredicativity is a powerful logical principle that allows the definition of propositions that quantify over arbitrarily large types, potentially resulting in self-referential propositions. Impredicativity can provide a system with increased logical strength and flexibility, but in counterpart it comes with multiple incompatibility results. In particular, Abel and Coquand showed that adding definitional uniqueness of identity proofs (UIP) to the main proof assistants that support impredicative propositions (Coq and Lean) breaks the normalization procedure, and thus the type-checking algorithm. However, it was not known whether this stems from a fundamental incompatibility between UIP and impredicativity or if a more suitable algorithm could decide type-checking for a type theory that supports both. In 23, we design a theory that handles both UIP and impredicativity by extending the recently introduced observational type theory TTobs with an impredicative universe of definitionally proof-irrelevant types, as initially proposed in the seminal work on observational equality of 36. We prove decidability of conversion for the resulting system, that we call CCobs, by harnessing proof-irrelevance to avoid computing with impredicative proof terms. Additionally, we prove normalization for CCobs in plain Martin-Löf type theory, thereby showing that adding proof-irrelevant impredicativity does not increase the computational content of the theory.

In 16, we generalize to a rich dependent type theory a proof originally developed by Escardó 63 that all System T functionals are continuous. It relies on the definition of a syntactic model of Baclofen Type Theory, a type theory where dependent elimination must be strict, into the Calculus of Inductive Constructions. The model is given by three translations: the axiom translation, that adds an oracle to the context; the branching translation, based on the dialogue monad, turning every type into a tree; and finally, a layer of algebraic binary parametricity, binding together the two translations. In the resulting type theory, every function

The MetaCoq project formalizes the theory of Coq, a reference implementation of its type-checking kernel and a verified erasure pipeline. During this year, we completed the completeness proof of the system, including a bidirectional version of the type-checker described in 12, and improved both the clarity of the specification and the efficiency of the implementations of type-checking and erasure. In parallel, work on later stages of the erasure pipeline consisted in proving various optimization phases correct, to bring the idealized

Bots are becoming a popular method for automating basic everyday tasks in many software projects. This is true in particular because of the availability of many off-the-shelf task-specific bots that teams can quickly adopt (sometimes complemented with additional task-specific custom bots). Based on our experience in the Coq project, where we have developed and maintained a multi-task project-specific bot, we argue that this alternative approach to project automation should receive more attention because it strikes a good balance between productivity and adaptability. In 15, we describe the kind of automation that our bot implements, what advantages we have gained by maintaining a project-specific bot, and the technology and architecture choices that have made it possible. We draw conclusions that should generalize to other medium-sized software teams willing to invest in project automation without disrupting their workflows.

In the context of interactive theorem provers based on a dependent type theory, automation tactics (dedicated decision procedures, calls to automated solvers, ...) are often limited to goals which fall exactly within some expected logical fragment. This very often prevents users from applying these tactics in other, even similar, contexts. 18 discusses the design and the implementation of pre-processing operations for automating formal proofs in the Coq proof assistant. It presents the implementation of a wide variety of predictable, atomic goal transformations, which can be composed in various ways to target different backends. A gallery of examples illustrates how it helps to expand significantly the power of automation engines.

In a proof assistant such as Coq, the same mathematical object can often be formalised using different data structures. For example, the type Z of binary integers, in the Coq standard library, represents the relative (signed) integers, as does the type ssrint of unary integers provided by the MathComp library. In practice, this situation, familiar in programming, is an obstacle to automated formal proof. In 24, we present trakt, a tool whose objective is to ease the access of Coq users to automation tactics, for the representation of the decidable theories of their choice. This tool builds an auxiliary formula from a user goal, together with a proof that the former implies the initial goal. The auxiliary formula is designed to be well suited to automated proof tools (lia, SMTCoq, etc.). The tool is extensible, thanks to an API allowing the user to define several kinds of embeddings into a set of reference data structures. The Coq-Elpi meta-language, used for the implementation, provides welcome facilities for handling binders and for implementing the term traversals involved in these tactics.

In 22, a tight connection between two models of the λ-calculus is established, namely Milner's encoding into the π-calculus (precisely, the Internal π-calculus) and operational game semantics (OGS). The operational correspondence between the behaviours of the encoding provided by π and by OGS is first explored, for various LTSs: the standard LTS for π and a new 'concurrent' LTS for OGS; an 'output-prioritised' LTS for π and the standard alternating LTS for OGS. It is then shown that the equivalences induced on λ-terms by all these LTSs (for π and OGS) coincide. These connections allow us to transfer results and techniques between π and OGS. In particular, up-to techniques are imported from π onto OGS, and congruence and compositionality results for OGS are derived from those for π. The study is illustrated for call-by-value; similar results hold for call-by-name.

25 reports an experiment with a large pages allocator for the OCaml runtime, with measured performance improvements. One goal was to evaluate the possible performance of a page table for multicore OCaml, a data structure of the OCaml runtime used by libraries implementing experimental memory allocation and memory sharing schemes. While we did not use original techniques, some of the results are unexpected given beliefs expressed in the OCaml community: the allocator lets programs run faster rather than slower, by letting statically-allocated values be skipped during garbage collection.

In 26, we propose a new API and implementation for managing garbage collector (GC) roots for the OCaml foreign-function interface (FFI), which offers:

Our contributions include a C library called Boxroot which is already in use in several OCaml-Rust interfacing libraries (ocaml-rs, ocaml-interop). We believe that this approach generalizes beyond OCaml, to other FFI situations where a language with GC interacts with a language without pervasive GC.

The first paper reports, in essence, the good hypothetical performance of embedding (borrowing) linearly-allocated values inside garbage-collected values. The second paper reports a symmetrical result: how to efficiently embed (own) garbage-collected values inside linearly-allocated values. Taken together, the broader motivation is to show the feasibility of building languages on linear allocation with re-use 84, 41 while still leveraging state-of-the-art garbage collection for non-linear values.

In 21, we present a generalised, constructive, and machine-checked approach to Kolmogorov complexity in the constructive type theory underlying the Coq proof assistant. By proving that nonrandom numbers form a simple predicate, we obtain elegant proofs of undecidability for random and nonrandom numbers and a proof of uncomputability of Kolmogorov complexity. We use a general and abstract definition of Kolmogorov complexity and subsequently instantiate it to several definitions frequently found in the literature. Whereas textbook treatments of Kolmogorov complexity usually rely heavily on classical logic and the axiom of choice, we put emphasis on the constructiveness of all our arguments, however without blurring their essence. We first give a high-level proof idea using classical logic, which can be formalised with Markov's principle via folklore techniques we subsequently explain. Lastly, we show how to eliminate Markov's principle from a certain class of computability proofs, rendering all our results fully constructive. All our results are machine-checked by the Coq proof assistant, which is enabled by using a synthetic approach to computability: rather than formalising a model of computation, which is well-known to introduce a considerable overhead, we abstractly assume a universal function, allowing the proofs to focus on the mathematical essence.

In 19, 30, we present a constructive analysis and machine-checked theory of one-one, many-one, and truth-table reductions based on synthetic computability theory in the Calculus of Inductive Constructions, the type theory underlying the proof assistant Coq. We give elegant, synthetic, and machine-checked proofs of Post's landmark results that a simple predicate exists, is enumerable, undecidable, but many-one incomplete (Post's problem for many-one reducibility), and that a hypersimple predicate exists, is enumerable, undecidable, but truth-table incomplete (Post's problem for truth-table reducibility). In synthetic computability, one assumes axioms allowing one to carry out computability theory with all definitions and proofs purely in terms of functions of the type theory, with no mention of a model of computation. Proofs can focus on the essence of the argument without having to sacrifice formality. Synthetic computability also clears the lens for constructivisation. Our constructively careful definition of simple and hypersimple predicates allows us not to assume classical axioms, not even Markov's principle, while still yielding the expected strong results.

In synthetic computability, pioneered by Richman, Bridges 49, and Bauer 43, one develops computability theory without an explicit model of computation. This is enabled by assuming an axiom equivalent to postulating a function

The Cantor-Bernstein theorem (CB) from set theory, stating that two sets which can be injectively embedded into each other are in bijection, is inherently classical in its full generality, i.e. it implies the law of excluded middle, a result due to Pradic and Brown 108. Recently, Escardó has provided a proof of CB in univalent type theory, assuming the law of excluded middle. It is a natural question to ask which restrictions of CB can be proved without axiomatic assumptions. In 20, we give a partial answer to this question, contributing an assumption-free proof of CB restricted to enumerable discrete types, i.e. types which can be treated computationally. In fact, we construct several bijections from injections. The first is obtained by translating a proof of the Myhill isomorphism theorem from computability theory (stating that 1-equivalent predicates are recursively isomorphic) to constructive type theory, where the bijection is constructed in stages and an algorithm with an intricate termination argument is used to extend the bijection at every step. The second is also constructed in stages, but with a simpler extension algorithm sufficient for CB. The third is constructed directly, in such a way that it only relies on the given enumerations of the types, not on the given injections. We aim at keeping the explanations simple, accessible, and concise in the style of a "proof pearl". All proofs are machine-checked in Coq but should transport to other foundations: they do not rely on impredicativity, on choice principles, or on large eliminations.
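For reference, a Coq sketch of the shape of the restricted statement discussed here; the definitions are illustrative and not those of the formal development, where "discrete" means that equality is decidable and "enumerable" that the type is covered by a function from nat:

    Definition discrete (A : Type) := forall x y : A, {x = y} + {x <> y}.
    Definition enumerable (A : Type) :=
      exists e : nat -> option A, forall x : A, exists n, e n = Some x.
    Definition injective {A B} (f : A -> B) := forall x y, f x = f y -> x = y.
    Definition bijective {A B} (f : A -> B) :=
      injective f /\ forall y, exists x, f x = y.

    Definition CantorBernstein_enum_discrete :=
      forall A B : Type,
        discrete A -> discrete B -> enumerable A -> enumerable B ->
        (exists f : A -> B, injective f) ->
        (exists g : B -> A, injective g) ->
        exists h : A -> B, bijective h.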

To automate the discovery of conjectures on combinatorial objects, we introduce in 17 the concept of a map of sharp bounds on characteristics of combinatorial objects, which provides a set of interrelated sharp bounds for these combinatorial objects. We then describe a Bound Seeker, a constraint-programming-based system that gradually acquires maps of conjectures. The system was tested by searching for conjectures on bounds on characteristics of digraphs: it constructs sixteen maps involving 431 conjectures on sharp lower and upper bounds on eight digraph characteristics.

A. Mahboubi holds a part-time endowed professor position in the Department of Mathematics at the Vrije Universiteit Amsterdam (the Netherlands).

FRESCO project on cordis.europa.eu

Coqaml project on cordis.europa.eu

This section involves all the permanent members of the team.

Matthieu Sozeau and Nicolas Tabareau organized the TYPES 2022 conference.

The Gallinette team held the TYPES 2022 conference in June 2022 in Nantes.

Guilhem Jaber participated in the production of a popular science magazine during the residence of journalists from "Les Autres Possible".

Guilhem Jaber participated in an art and science teaching module for second-year bachelor students with Pascale Kuntz and artists from "Compagnie Brume". This experience was covered in an article in L'Étudiant.

Guilhem Jaber and Assia Mahboubi participated in performances of the "Pièces à pédales", an art and science collaboration with the Athenor theater (Saint-Nazaire, France) and composer Alessandro Bosetti. It was performed at GMEA (Albi, France) and GMEM (Marseille, France).