There is an emerging consensus that formal methods must be used as a matter of course in software development. Most software is too complex to be fully understood by one programmer or even a team of programmers, and requires the help of computerized techniques such as testing and model checking to analyze and eliminate entire classes of bugs. Moreover, in order for the software to be maintainable and reusable, it not only needs to be bug-free but also needs to have fully specified behavior, ideally accompanied with formal and machine-checkable proofs of correctness with respect to the specification. Indeed, formal specification and machine verification is the only way to achieve the highest level of assurance (EAL7) according to the ISO/IEC Common Criteria.1

Historically, achieving such a high degree of certainty in the operation of software has required a significant investment of manpower, and hence of money. As a consequence, only software of critical importance (and relatively unchanging), such as monitoring software for nuclear reactors or fly-by-wire controllers in airplanes, has been subjected to such intense scrutiny. However, we are entering an age where we need trustworthy software in more mundane situations, with rapid development cycles, and without huge costs. For example: modern cars are essentially mobile computing platforms, smart devices manage our intensely personal details, elections (and election campaigns) are increasingly fully computerized, and networks of drones monitor air pollution, traffic, military arenas, etc. Bugs in such systems can lead to unpleasant, dangerous, or even life-threatening incidents.

The field of formal methods has stepped up to meet this growing need for trustworthy general-purpose software in recent decades. Techniques such as computational type systems and explicit program annotations/contracts, and tools such as model checkers and interactive theorem provers, are starting to become standard in the computing industry. Indeed, many of these tools and techniques are now part of undergraduate computer science curricula. In order to be usable by ordinary programmers (without PhDs in logic), such tools and techniques have to be high level and rely heavily on automation. Furthermore, multiple tools and techniques often need to be marshaled to achieve a verification task, so theorem provers, solvers, model checkers, property testers, etc. need to be able to communicate with—and, ideally, trust—each other.

With all this sophistication in formal tools, there is an obvious
question: what should we trust?
Sophisticated formal reasoning tools are, generally speaking, complex
software artifacts themselves; if we want complex software to undergo
rigorous formal analysis we must be prepared to formally analyze the
tools and techniques used in formal reasoning itself.
Historically, the issue of trust has been addressed by means of
relativizing it to small and simple cores.
This is the basis of industrially successful formal reasoning systems
such as Coq, Isabelle, HOL4, and ACL2.
However, the relativization of trust has led to a balkanization of the
formal reasoning community, since the Coq kernel, for example, is
incompatible with the Isabelle kernel, and neither can directly
cross-validate formal developments built with the other.
Thus, there is now a burgeoning cottage industry of translations and
adaptations of different formal proof languages for bridging the gap.
A number of proposals have also been made for universal or
retargetable proof languages (e.g., Dedukti, ProofCert) so that the
cross-platform trust issues can be factorized into single trusted
checkers.

Beyond the mutual incompatibility caused by relativized trust, there is a
bigger problem: the proof evidence that is accepted by small
kernels is generally far too detailed to be useful.
Formal development usually occurs at a much higher level, relying on
algorithmic techniques such as unification, simplification, rewriting,
and controlled proof search to fill in details.
Indeed, the most reusable products of formal developments tend to be
these algorithmic techniques and associated collections of
hand-crafted rules.
Unfortunately, these techniques are even less portable than the fully
detailed proofs themselves, since the techniques are often implemented
in terms of the behaviors of the trusted kernels.
We can broadly say that the problem with relativized trust is that it
is based on the operational interpretation of implementations
of trusted kernels.
There still remains the question of meta-theoretic correctness.
Most formal reasoning systems implement a variant of a well known
mathematical formalism (e.g., Martin-Löf type theory, set theory,
higher-order logic), but it is surprising that hardly any mainstream
system has a formalized meta-theory.2
Furthermore, formal reasoning systems are usually associated with
complicated checkers for side-conditions that often have unclear
mathematical status.
For example, the Coq kernel has a built-in syntactic termination
checker for recursive fixed-point expressions that is required to work
correctly for the kernel to be sound.
This termination checker evolves and improves with each version of
Coq, and therefore the most accurate documentation of its behavior is
its own source code.
Coq is not special in this regard: similar trusted features exist in
nearly every mainstream formal reasoning system.
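To make the flavor of such syntactic side-conditions concrete, here is a toy guardedness check in OCaml. This is purely our own illustration (far simpler than Coq's actual termination checker): recursive definitions over Peano naturals are accepted only when every recursive call is applied to a variable known to be a strict subterm of the decreasing argument.

```ocaml
(* Toy illustration (not Coq's algorithm): a purely syntactic guard
   for recursive definitions over Peano naturals. *)
type expr =
  | Zero
  | Succ of expr
  | Var of string
  | Match of string * expr * string * expr
      (* Match (x, e0, y, es): match x with 0 -> e0 | S y -> es *)
  | Rec of expr
      (* Rec e: recursive call on argument e *)

(* [guarded arg smaller e] checks e, where [smaller] lists variables
   known to be strictly smaller than the decreasing argument [arg]. *)
let rec guarded (arg : string) (smaller : string list) (e : expr) : bool =
  match e with
  | Zero | Var _ -> true
  | Succ e1 -> guarded arg smaller e1
  | Match (x, e0, y, es) ->
      (* the predecessor y is strictly smaller when x is the argument
         itself or a variable already known to be smaller *)
      let smaller' =
        if x = arg || List.mem x smaller then y :: smaller else smaller
      in
      guarded arg smaller e0 && guarded arg smaller' es
  | Rec (Var y) -> List.mem y smaller
  | Rec _ -> false

(* e.g. the body of: let rec f n = match n with 0 -> n | S p -> S (f p) *)
let ok = guarded "n" [] (Match ("n", Var "n", "p", Succ (Rec (Var "p"))))
(* rejected: a recursive call on the argument itself *)
let bad = guarded "n" [] (Rec (Var "n"))
```

Even this toy version shows why such checkers are delicate: the accepted set of definitions depends on fine syntactic details, which in a real system evolve from release to release.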

The Partout project is interested in the principles of deductive
and computational formalisms.
In the broadest sense, we are interested in the question of
trustworthy and verifiable meta-theory.
At one end, this includes the well studied foundational questions of
the meta-theory of logical systems and type systems: cut-elimination
and focusing in proof theory, type soundness and normalization
theorems in type theory, etc.
The focus of our research here is on the fundamental relationships
behind the notions of computation and deduction.
We are particularly interested in relationships that go beyond the
well known correspondences between proofs and programs.3
Indeed, interpreting computation in terms of deduction (as in
logic programming) or deduction in terms of computation (as in
rewrite systems or in model checking) can often lead to fruitful and
enlightening research questions, both theoretical and practical.

At the other end, Partout works on the question of the
essential nature of deductive and computational formalisms.
For instance, we are interested in the question of proof
identity: when are two proofs of the same theorem the same?
Surprisingly, this very basic question is left unanswered in
proof theory, the branch of mathematics that supposedly treats
proofs as algebraic objects of interest.
We also pay particular attention to the combinatorial and
complexity-theoretic properties of the formalisms.
Indeed, it is surprising that until very recently the

To put trustworthy meta-theory to use, the Partout project also
works on the design and implementations of formal reasoning tools and
techniques.
We study the mathematical principles behind the representations of
formal concepts, for instance in the structural operational semantics
(SOS) style.
We also work on foundational questions about induction and
co-induction, which are used in intricate combinations in
metamathematics.

Software and hardware systems perform computation (systems that
process, compute and perform) and deduction (systems that
search, check or prove). The makers of those systems express their
intent using various frameworks such as programming languages,
specification languages, and logics. The Partout project aims
at developing and using mathematical principles to design better
frameworks for computation and reasoning. Principles of expression are
researched from two directions, in tandem:

Foundational approaches, from theories to applications: studying fundamental problems of programming and proof theory.

Examples include studying the complexity of reduction strategies in lambda-calculi with sharing, or studying proof representations that quotient over rule permutations and can be adapted to many different logics.

Empirical approaches, from applications to theories: studying systems currently in use to build a theoretical understanding of the practical choices made by their designers.

Examples include studying realistic implementations of programming languages and proof assistants, which differ in interesting ways from their usual high-level formal description (regarding sharing of code and data, for example), or studying new approaches to efficient automated proof search, relating them to existing approaches of proof theory, for example to design proof certificates or to generalize them to non-classical logics.

One of the strengths of Partout is the co-existence of a number
of different kinds of expertise and points of view. Many dichotomies exist in
the study of computation and deduction:
functional programming vs logic programming,
operational semantics vs denotational semantics,
constructive logic vs classical logic,
proof terms vs proof nets, etc.
We do not identify with any one of them in particular, rather with
them as a whole, believing in the value of interaction and
cross-fertilization between different approaches.
Partout defines its scope through the following core tenets:

More concretely, the research in Partout will be centered around
the following four themes:

The Partout team studies the structure of mathematical proofs, in ways that often make them more amenable to automated theorem proving – automatically searching the space of proof candidates for a statement to find an actual proof – or a counter-example.

(Due to fundamental computability limits, fully automatic proving is only possible for simple statements, but this field has been making a lot of progress in recent years, and is in particular interested in the idea of generating verifiable evidence for the proofs that are found, which fits squarely within the expertise of Partout.)

Our work on the structure of proofs also suggests ways in which proofs could be presented to a user, edited, and maintained, in particular in “proof assistants”, automated tools that assist the writing of mathematical proofs with automatic checking of their correctness.

Our work also gives insight on the structure and properties of programming languages. We can improve the design or implementation of programming languages, help programmers or language implementors reason about the correctness of the programs in a given language, or reason about the cost of execution of a program.

Dale Miller was named an ACM Fellow for contributions to proof theory and computational logic.

Dale Miller was named a Fellow of the Asia-Pacific Artificial Intelligence Association (AAIA).

Accattoli has been co-chair of the international conference PPDP 2022.

Version 5.0 of the OCaml programming language implementation was released, with participation from Gabriel Scherer.

Paper 9 by Accattoli, Dal Lago, and Vanoni received the distinguished paper award of the international conference ICFP 2022.

MOIN is a SWI-Prolog theorem prover for classical and intuitionistic modal logics. The modal and intuitionistic modal logics considered are all 15 systems occurring in the modal S5-cube, and all the decidable intuitionistic modal logics in the IS5-cube. MOIN also provides a prototype implementation for the intuitionistic logics whose decidability is not known (IK4, ID5, and IS4). MOIN consists of a set of Prolog clauses, each clause representing a rule in one of the three proof systems. The clauses are recursively applied to a given formula, constructing a proof-search tree. The user selects the nested proof system, the logic, and the formula to be tested. In the case of classical nested sequents and Maehara-style nested sequents, MOIN yields a derivation if proof search succeeds, or a countermodel if it fails. The countermodel for classical modal logics is a Kripke model, while for intuitionistic modal logics it is a bi-relational model. In the case of Gentzen-style nested sequents, the prover does not perform countermodel extraction.

A system description of MOIN is available at https://hal.inria.fr/hal-02457240
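The rule-as-clause architecture can be sketched abstractly. The following is an illustrative OCaml model of backtracking proof search (our own sketch, not MOIN's actual Prolog code), for a minimal made-up intuitionistic fragment: each inference rule maps a goal sequent to its premise sequents, and the prover recursively tries the rules, backtracking over alternatives.

```ocaml
(* Illustrative sketch: rules as functions from a goal sequent to its
   premises, mirroring how MOIN's Prolog clauses encode rules. *)
type formula =
  | Atom of string
  | Imp of formula * formula
  | And of formula * formula

(* A sequent: hypotheses |- goal *)
type sequent = { hyps : formula list; goal : formula }

(* Each rule either does not apply (None) or reduces the goal to a
   list of premise sequents (Some premises; [] means "proved"). *)
let rules : (sequent -> sequent list option) list = [
  (* axiom: the goal is among the hypotheses *)
  (fun s -> if List.mem s.goal s.hyps then Some [] else None);
  (* right implication: to prove A -> B, assume A and prove B *)
  (fun s -> match s.goal with
     | Imp (a, b) -> Some [ { hyps = a :: s.hyps; goal = b } ]
     | _ -> None);
  (* right conjunction: prove both components *)
  (fun s -> match s.goal with
     | And (a, b) -> Some [ { s with goal = a }; { s with goal = b } ]
     | _ -> None);
]

(* Recursively apply the rules, building the proof-search tree. *)
let rec prove (s : sequent) : bool =
  List.exists
    (fun rule ->
       match rule s with
       | Some premises -> List.for_all prove premises
       | None -> false)
    rules
```

For example, `prove { hyps = []; goal = Imp (Atom "a", Imp (Atom "b", Atom "a")) }` succeeds, while an unprovable atom fails; a real prover such as MOIN additionally records the derivation or extracts a countermodel on failure.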

External Collaborators: Matteo Acclavio (Univ. Luxembourg)

Combinatorial proofs form a syntax-independent presentation of proofs, originally proposed by Hughes for classical propositional logic. In this paper we present a notion of combinatorial proofs for the constructive modal logics CK and CD, we show soundness and completeness of combinatorial proofs by translation from and to sequent calculus proofs, and we discuss the notion of proof equivalence enforced by these translations.

This work has been published at the AiML 2022 conference 18

External Collaborators: Willem Heijltjes (University of Bath), Dominic Hughes (UC Berkeley)

We present normalization for intuitionistic combinatorial proofs (ICPs) and relate it to the simply-typed lambda-calculus. We prove confluence and strong normalization. Combinatorial proofs, or "proofs without syntax", form a graphical semantics of proof in various logics that is canonical yet complexity-aware: they are a polynomial-sized representation of sequent proofs that factors out exactly the non-duplicating permutations. Our approach to normalization aligns with these characteristics: it is canonical (free of permutations) and generic (readily applied to other logics). Our reduction mechanism is a canonical representation of reduction in sequent calculus with closed cuts (no abstraction is allowed below a cut), and relates to closed reduction in lambda-calculus and supercombinators. While we will use ICPs concretely, the notion of reduction is completely abstract, and can be specialized to give a reduction mechanism for any representation of typed normal forms.

This work was published at the FSCD 2022 conference 20.

Combinatorial flows are a graphical representation of proofs. They can be seen as a generalization of atomic flows on one side and of combinatorial proofs on the other side. From atomic flows, introduced by Guglielmi and Gundersen, they inherit the close correspondence with open deduction and the possibility of tracing the occurrences of atoms in a derivation. From combinatorial proofs, introduced by Hughes, they inherit the correctness criterion that allows one to reconstruct the derivation from the flow. In fact, combinatorial flows form a proof system in the sense of Cook and Reckhow. We show how to translate between open deduction derivations and combinatorial flows, and we show how they are related to combinatorial proofs with cuts.

This work has been published in 23

External Collaborators: Nguyễn, Lê Thành Dũng (ENS Lyon)

BV and pomset logic are two logics that both conservatively extend unit-free multiplicative linear logic by a third binary connective, which (i) is non-commutative, (ii) is self-dual, and (iii) lies between the "par" and the "tensor". It was conjectured early on (more than 20 years ago) that these two logics, which share the same language, which both admit cut elimination, and whose connectives have essentially the same properties, are in fact the same. In this paper we show that this is not the case. We present a formula that is provable in pomset logic but not in BV.

We also studied the complexity of the two logics. These results are presented in 22 and 34.

External Collaborators: Danko Ilik (Siemens)

A compiler consists of a sequence of phases going from lexical analysis to code generation. Ideally, the formal verification of a compiler should include the formal verification of every component of the tool-chain. In order to contribute to the end-to-end verification of compilers, we implemented a verified lexer generator with usage similar to OCamllex. This software, Coqlex, reads a lexer specification and generates a lexer equipped with Coq proofs of its correctness. Although the performance of the generated lexers does not measure up to that of a standard lexer generator such as OCamllex, the safety guarantees it comes with make it an interesting alternative when implementing totally verified compilers or other language processing tools.

More details on this work can be found in 35

External Collaborators: Matteo Acclavio (Univ. Luxembourg), Ross Horne (Univ. Luxembourg)

In this work, published in 11, we present a proof system that operates on graphs instead of formulas. Starting from the well-known relationship between formulas and cographs, we drop the cograph conditions and look at arbitrary (undirected) graphs. This means that we lose the tree structure of the formulas corresponding to the cographs, and we can no longer use standard proof-theoretical methods that depend on that tree structure. In order to overcome this difficulty, we use a modular decomposition of graphs and some techniques from deep inference where inference rules do not rely on the main connective of a formula. For our proof system we show the admissibility of cut and a generalization of the splitting property. Finally, we show that our system is a conservative extension of multiplicative linear logic with mix, and we argue that our graphs form a notion of generalized connective.

External Collaborators: Matteo Acclavio (Univ. Luxembourg), Ross Horne (Univ. Luxembourg), Sjouke Mauw (Univ. Luxembourg)

Logical time is a partial order over events in distributed systems, constraining which events precede others. Special interest has been given to series-parallel orders since they correspond to formulas constructed via the two operations for "series" and "parallel" composition. For this reason, series-parallel orders have received attention from proof theory, leading to pomset logic, the logic BV, and their extensions. However, logical time does not always form a series-parallel order; indeed, ubiquitous structures in distributed systems are beyond current proof-theoretic methods. In this paper, we explore how this restriction can be lifted. We design new logics that work directly on graphs instead of formulas, we develop their proof theory, and we show that our logics are conservative extensions of the logic BV.

This work was published at the FSCD 2022 conference 17.

External Collaborators: Agata Ciabattoni (TU Wien), Matteo Tesi (SNS Pisa)

Bounded depth refers to a property of Kripke frames that serve as semantics for intuitionistic logic. We introduce nested sequent calculi for the intermediate logics of bounded depth. Our calculi are obtained in a modular way by adding suitable structural rules to a variant of Fitting’s calculus for intuitionistic propositional logic, for which we present the first syntactic cut elimination proof. This proof modularly extends to the new nested sequent calculi introduced in this paper, which has been published at the AiML 2022 conference 19

External Collaborators: Sonia Marin (University of
Birmingham), Elaine Pimentel (University College of London), and
Marco Volpe (Osnabrück University).

We examine the synthetic inference rules that arise when using theories composed of bipolars in both classical and intuitionistic logics. A key step in transforming a formula into synthetic inference rules involves attaching a polarity to atomic formulas and some logical connectives. Since there are different choices in how polarity is assigned, it is possible to produce different synthetic inference rules for the same formula. We show that this flexibility allows for the generalization of different approaches for transforming axioms into sequent rules present in the literature. We also show how to apply these results to organize the proof theory of labeled sequent systems for several propositional modal logics. This work was published in the Annals of Pure and Applied Logic 13.

We use

This work was presented at TLLA-Linearity 2022 and appears in 33.

When a proof-checking kernel completes the checking of a formal proof, that kernel asserts that a specific formula follows from a collection of lemmas within a given logic. We describe a framework in which such an assertion can be made globally so that any other proof assistant willing to trust that kernel can use that assertion without rechecking (or even understanding) the formal proof associated with that assertion. In this framework, we propose to move beyond autarkic proof checkers (i.e., self-sufficient provers that trust proofs only when they are checked by their kernel) to an explicitly non-autarkic setting. This framework must, of course, explicitly track which agents (proof checkers and their operators) are being trusted when a trusting proof checker makes its assertions. We describe how we have integrated this framework into a particular theorem prover while making minor changes to how the prover inputs and outputs text files. This framework has been implemented using off-the-shelf web-based technologies, such as JSON, IPFS, IPLD, and public key cryptography. A preliminary report on this work appears in 31.

The focused proof system LJF can be used as a framework for describing
term structures and substitution. Since the proof theory of LJF does
not pick a canonical polarization for primitive types, two different
approaches to term representation arise. When primitive types are
given the negative polarity, LJF proofs encode terms as tree-like
structures in a familiar fashion. In this situation, cut elimination
also yields the familiar notion of substitution. On the other hand,
when primitive types are given the positive polarity, LJF proofs yield
a structure in which explicit sharing of term structures is
possible. Such a representation of terms provides an explicit method
for sharing term structures. In this setting, cut elimination yields a
different notion of substitution. We illustrate these two approaches
to term representation by applying them to the encoding of untyped

This work will be presented as an invited paper at CSL 2023 21.
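The contrast between the two term representations can be sketched concretely. The following OCaml model uses our own hypothetical types, not the paper's formal encoding: negative polarization corresponds to ordinary tree-shaped terms, while positive polarization corresponds to terms with explicit, named sharing, where a subterm defined once can be referenced many times.

```ocaml
(* Tree-shaped terms: the familiar representation. *)
type tree = Var of string | App of tree * tree

(* Terms with explicit sharing: a list of named definitions plus a
   root name, so the same definition can be referenced repeatedly. *)
type node = NVar of string | NApp of string * string
type shared = { defs : (string * node) list; root : string }

(* Reading a shared term back as a tree duplicates the shared
   subterms, which is why the shared form can be much more compact. *)
let rec readback (t : shared) (name : string) : tree =
  match List.assoc name t.defs with
  | NVar x -> Var x
  | NApp (f, a) -> App (readback t f, readback t a)
```

For instance, the shared term with definitions `x = v` and `y = x x` reads back as the tree `App (Var "v", Var "v")`: the single definition of `x` is used twice. In the shared setting, a cut-elimination step substitutes a definition rather than copying a subtree.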

Deep inference provides ways to construct proof steps by pointing to two different locations in the goal and/or hypotheses. This had been described in 36. We have built on this to provide a novel proof interface, Actema, for constructing proofs without textual commands, in a way which is, we believe, intuitive and user-friendly. A version of the system can be used on the system's web page.

This work was published at the CPP 2022 conference 27.

A new version of Actema is under way, which will allow using it as a front-end for the Coq proof system.

Antoine Séré is conducting his PhD research, with Pierre-Yves Strub as external main adviser, on formally certified cryptography. This year he participated in the formal correctness proof of the Kyber cryptographic primitive (which won the NIST post-quantum competition). An article is to be submitted.

External Collaborators: Maico Leberle (former PhD student in Partout).

This work studies useful sharing, which is a sophisticated optimization for

This work belongs to the foundations of complexity analysis for functional programs theme of the research program of Partout.
It has been published in 16.

External Collaborators: Ugo Dal Lago (University of Bologna & Inria), Gabriele Vanoni (University of Bologna & Inria).

Can the λ-calculus be considered a reasonable computational model? Can we use it for measuring the time and space consumption of algorithms? While the literature contains positive answers about time, much less is known about space.

This work presents a new reasonable space cost model for the

This work belongs to the foundations of complexity analysis for functional programs theme of the research program of Partout.
It has been published in 14, and it has been selected for the special issue of the conference.

External Collaborators: Ugo Dal Lago (University of Bologna & Inria), Gabriele Vanoni (University of Bologna & Inria).

This work continues the study of the previous subsection, providing a new system of multi types (a variant of intersection types), and shows how to extract from multi type derivations the space used by the Space KAM, capturing in a type system the space complexity of the abstract machine. Additionally, we show how to capture the time of the Space KAM, which is a reasonable time cost model, via minor changes to the type system.

This work belongs to the foundations of complexity analysis for functional programs theme of the research program of Partout.
It has been published in 9, and it received the distinguished paper award of the conference.

This work introduces the exponential substitution calculus (ESC), a new presentation of cut elimination for intuitionistic multiplicative and exponential linear logic (IMELL), based on proof terms and building on the idea that exponentials can be seen as explicit substitutions (a formalism for representing sharing in the

One of the key properties of the LSC is that it naturally models the
sub-term property of abstract machines, which is the key ingredient
for the study of reasonable time cost models for the

For the ESC, we also prove untyped confluence and typed strong normalization, showing that it is an alternative to proof nets for an advanced study of cut elimination.

This work belongs to the foundations of complexity analysis for functional programs theme of the research program of Partout.
It has been published in 15, and it has been selected for the special issue of the conference.

External Collaborators: Giulio Guerrieri (Edinburgh Research Centre, Huawei, UK).

The denotational semantics of the untyped solvable terms, which are elegantly characterized in many different ways. In particular, unsolvable terms
provide a consistent notion of meaningless term. The semantics of the untyped call-by-value

This work belongs to the foundations of complexity analysis for functional programs theme of the research program of Partout.
It has been published in 10.

We begin by explaining how any context-free grammar encodes a functor of operads from a freely generated operad into a certain "operad of spliced words". This motivates a more general notion of CFG over any category

We then turn to the Chomsky-Schützenberger Representation Theorem. We describe how a non-deterministic finite state automaton can be seen as a category

This work has been published in 28.

External Collaborators: Paul-André Melliès (CNRS)

Cofunctors are a kind of map between categories which lift morphisms along an object assignment. In this paper, we introduce cofunctors between categories enriched in a distributive monoidal category. We define a double category of enriched categories, enriched functors, and enriched cofunctors, whose horizontal and vertical 2-categories have 2-cells given by enriched natural transformations between functors and cofunctors, respectively. Enriched lenses are defined as a compatible enriched functor and enriched cofunctor pair; weighted lenses, which were introduced by Perrone, are precisely lenses enriched in weighted sets. Several other examples are also studied in detail.

A preliminary version of this article was released as the preprint 32. A revised version is in preparation, for journal submission.

External Collaborators: Matthew di Meglio

External Collaborators: Nathanaëlle Courant (INRIA Paris), Julien Lepiller (Yale University)

It is common for programming languages that their reference implementation is written in the language itself. This requires a "bootstrap binary": the executable form of a previous version of the implementation is provided along with the sources, to be able to run the implementation itself. Those bootstrap binaries are opaque; they could contain bugs, or even malicious changes that could reproduce themselves when running the source version of the language implementation: this is called the "trusting trust attack". A collective project called Bootstrappable was launched to remove bootstrap binaries, providing alternative build paths that do not rely on opaque binaries.

Camlboot is our project to debootstrap the OCaml compiler, version 4.07. Using diverse double-compilation, we were able to prove the absence of a trusting trust attack in the existing bootstrap binaries of the standard OCaml implementation.

To our knowledge, our publication 12 is the first scholarly discussion of "tailored" debootstrapping for high-level programming languages. Debootstrapping recently grew an active community of free software contributors, but so far the interactions with the programming language research community have been minimal. We share our experience on Camlboot, trying to highlight aspects that are of interest to other language designers and implementors; we hope to foster stronger ties between the Bootstrappable project and relevant academic communities. In particular, the debootstrapping experience has been an interesting reflection on language design and implementation.

External Collaborators: Nicolas Chataing (INRIA Paris),
Camille Noûs (laboratoire Cogitamus)

In 24 we propose an implementation of a new feature for OCaml, constructor unboxing. It makes it possible to eliminate certain constructors from the dynamic representation of values when this does not create ambiguity between different values of the same type. We describe:

For our static analysis, we must normalize certain type expressions, with a normalization relation that does not necessarily terminate in the presence of mutually recursive types; we describe a termination analysis that guarantees normalization without rejecting the type declarations of interest.
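For context, OCaml has long supported unboxing for single-constructor types via the [@@unboxed] attribute; constructor unboxing generalizes this idea to multi-constructor types when the representations cannot clash. A minimal sketch of the existing mechanism:

```ocaml
(* With [@@unboxed], the single constructor adds no runtime box:
   Id 42 is represented exactly as the integer 42. *)
type id = Id of int [@@unboxed]

(* We can observe this with the (unsafe, introspective) Obj module:
   the runtime representation is an immediate integer, not a block. *)
let same_representation = Obj.is_int (Obj.repr (Id 42))
```

Without the attribute, `Id 42` would be a heap-allocated block; the challenge addressed in 24 is deciding when several constructors of the same type can all drop their boxes without their representations becoming ambiguous.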

External Collaborators:
Camille Noûs (laboratoire Cogitamus)

François Pottier's union-find library is parameterized over an
underlying store of mutable references, and provides the usual
references, transactional reference stores (for rolling back some
changes in case of higher-level errors), and persistent reference
stores. In 26, we extend this library with a new implementation of
backtracking reference stores, to get a Union-Find implementation
that efficiently supports arbitrary backtracking and also subsumes
the transactional interface.

Our backtracking reference stores are not specific to union-find; they can be used to build arbitrary backtracking data structures. The natural implementation, using a journal to record all writes, provides amortized-constant-time operations with a space overhead linear in the number of store updates. A refined implementation reduces the memory overhead to be linear in the number of store cells updated, and gives performance that matches non-backtracking references in practice.
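The journal-based approach can be sketched as follows (a simplified model with a hypothetical API, not the interface of the actual library): every write first records the cell's previous value in a journal, and rolling back to a snapshot undoes, most recent first, the writes performed since that snapshot.

```ocaml
(* Simplified journal-based backtracking store (illustrative API). *)
type 'a cell = { mutable value : 'a }
type 'a store = {
  mutable journal : ('a cell * 'a) list;  (* (cell, previous value) *)
  mutable length : int;                   (* number of logged writes *)
}
type snapshot = int  (* journal length at snapshot time *)

let create () : 'a store = { journal = []; length = 0 }
let make_cell (_store : 'a store) (v : 'a) : 'a cell = { value = v }
let get (c : 'a cell) : 'a = c.value

(* Log the previous value before overwriting, so it can be undone. *)
let set (s : 'a store) (c : 'a cell) (v : 'a) : unit =
  s.journal <- (c, c.value) :: s.journal;
  s.length <- s.length + 1;
  c.value <- v

let snapshot (s : 'a store) : snapshot = s.length

(* Undo every write performed since the snapshot. *)
let rollback (s : 'a store) (snap : snapshot) : unit =
  while s.length > snap do
    begin match s.journal with
    | (c, old) :: rest ->
        c.value <- old;
        s.journal <- rest;
        s.length <- s.length - 1
    | [] -> assert false
    end
  done
```

This naive version logs every write, giving the space overhead linear in the number of store updates mentioned above; the refined implementation avoids logging a cell more than once per snapshot interval.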

External Collaborators:
Guillaume Munch-Maccagnoni (INRIA Rennes/Nantes)

In 29 we study the safe manipulation of GC-managed values inside non-managed (foreign) code. Focusing on the problem of implementing a safe and convenient FFI for OCaml in Rust, we propose a new interface and implementation for storing roots for the OCaml GC inside foreign (C and Rust) data structures, along with a typing discipline in Rust's ownership type system, which offer:

External Collaborators:
Nathanaëlle Courant (INRIA Paris)

In 30 we detail a use-case for strong call-by-need reduction in the OCaml compiler. Strong call-by-need reduction is a sophisticated reduction strategy for programming languages, and to our knowledge all its practical applications known today are in the field of proof assistants. Our use-case is the first sighting of a use for strong call-by-need reduction outside this specific domain.

The goal of the thesis is to develop ways to optimize the performance of software, while not sacrificing the safety guarantees already provided for non-optimized code. The software that Siemens uses for its self-driving trains (e.g. Metro 14 in Paris) is programmed in Ada. Due to the high safety requirements for the software, the Ada compiler used has to be certified. At the current state of the art, only non-optimized code fulfils all necessary requirements. Because of higher performance needs, we are interested in producing optimized code that also fulfils these requirements.

Stated most generally, the aim of the thesis is to assure, at the same time:

The OCaml Software Foundation (OCSF),4 established in 2018 under the umbrella of the Inria Foundation, aims to promote, protect, and advance the OCaml programming language and its ecosystem, and to support and facilitate the growth of a diverse and international community of OCaml users.

Since 2019, Gabriel Scherer serves as the director of the foundation.

Nomadic Labs, a Paris-based company, has implemented the Tezos blockchain and cryptocurrency entirely in OCaml. In 2019, Nomadic Labs and Inria signed a framework agreement (“contrat-cadre”) that allows Nomadic Labs to fund multiple research efforts carried out by Inria groups. Within this framework, we participate in the following grants, in collaboration with the project-team Cambium at INRIA Paris:

This grant is intended to fund a number of improvements to OCaml, including the addition of new features and a possible re-design of the OCaml type-checker. This grant funds the PhD thesis of Olivier Martinot on this topic.

This grant is intended to fund the day-to-day maintenance of OCaml as well as the considerable work involved in managing the release cycle.

LambdaComb is an interdisciplinary project financed by the Agence Nationale de la Recherche (PRC grant ANR-21-CE48-0017). Broadly, the project aims to deepen connections between lambda calculus and logic on the one hand and combinatorics on the other. One important motivation for the project is the discovery over recent years of a host of surprising links between subsystems of lambda calculus and enumeration of graphs on surfaces, or "maps", the latter being an active subfield of combinatorics with roots in W. T. Tutte's work in the 1960s. Using these new links and other ideas and tools, the LambdaComb project aims to:

The project also intersects with and aims to shed new light on other established connections between logic and geometry, notably Joyal and Street's categorical framework of string diagrams as well as Girard's proof nets for linear logic.

Wendlasida Ouedraogo was a teaching assistant for the courses

at École Polytechnique.