There is an emerging consensus that formal methods must be used as a matter of course in software development. Most software is too complex to be fully understood by one programmer or even a team of programmers, and requires the help of computerized techniques such as testing and model checking to analyze and eliminate entire classes of bugs. Moreover, for the software to be maintainable and reusable, it not only needs to be bug-free but also needs to have fully specified behavior, ideally accompanied by formal and machine-checkable proofs of correctness with respect to the specification. Indeed, formal specification and machine verification are the only way to achieve the highest level of assurance (EAL7) according to the ISO/IEC Common Criteria.1

Historically, achieving such a high degree of certainty in the operation of software has required significant investment of manpower, and hence of money. As a consequence, only software that is of critical importance (and relatively unchanging), such as monitoring software for nuclear reactors or fly-by-wire controllers in airplanes, has been subjected to such intense scrutiny. However, we are entering an age where we need trustworthy software in more mundane situations, with rapid development cycles, and without huge costs. For example: modern cars are essentially mobile computing platforms, smart devices manage our intensely personal details, elections (and election campaigns) are increasingly fully computerized, and networks of drones monitor air pollution, traffic, military arenas, etc. Bugs in such systems can certainly lead to unpleasant, dangerous, or even life-threatening incidents.

The field of formal methods has stepped up to meet this growing need for trustworthy general-purpose software in recent decades. Techniques such as computational type systems and explicit program annotations/contracts, and tools such as model checkers and interactive theorem provers, are starting to become standard in the computing industry. Indeed, many of these tools and techniques are now a part of undergraduate computer science curricula. In order to be usable by ordinary programmers (without PhDs in logic), such tools and techniques have to be high level and rely heavily on automation. Furthermore, multiple tools and techniques often need to be marshaled to achieve a verification task, so theorem provers, solvers, model checkers, property testers, etc. need to be able to communicate with—and, ideally, trust—each other.

With all this sophistication in formal tools, there is an obvious
question: what should we trust?
Sophisticated formal reasoning tools are, generally speaking, complex
software artifacts themselves; if we want complex software to undergo
rigorous formal analysis we must be prepared to formally analyze the
tools and techniques used in formal reasoning itself.
Historically, the issue of trust has been addressed by means of
relativizing it to small and simple cores.
This is the basis of industrially successful formal reasoning systems
such as Coq, Isabelle, HOL4, and ACL2.
However, the relativization of trust has led to a balkanization of the
formal reasoning community, since the Coq kernel, for example, is
incompatible with the Isabelle kernel, and neither can directly
cross-validate formal developments built with the other.
Thus, there is now a burgeoning cottage industry of translations and
adaptations of different formal proof languages for bridging the gap.
A number of proposals have also been made for universal or
retargetable proof languages (e.g., Dedukti, ProofCert) so that the
cross-platform trust issues can be factorized into single trusted
checkers.

Beyond the mutual incompatibility caused by relativized trust, there is a
bigger problem: the proof evidence accepted by small kernels is
generally far too detailed to be useful.
Formal development usually occurs at a much higher level, relying on
algorithmic techniques such as unification, simplification, rewriting,
and controlled proof search to fill in details.
Indeed, the most reusable products of formal developments tend to be
these algorithmic techniques and associated collections of
hand-crafted rules.
Unfortunately, these techniques are even less portable than the fully
detailed proofs themselves, since the techniques are often implemented
in terms of the behaviors of the trusted kernels.
We can broadly say that the problem with relativized trust is that it
is based on the operational interpretation of implementations
of trusted kernels.
There still remains the question of meta-theoretic correctness.
Most formal reasoning systems implement a variant of a well known
mathematical formalism (e.g., Martin-Löf type theory, set theory,
higher-order logic), but it is surprising that hardly any mainstream
system has a formalized meta-theory.2
Furthermore, formal reasoning systems are usually associated with
complicated checkers for side-conditions that often have unclear
mathematical status.
For example, the Coq kernel has a built-in syntactic termination
checker for recursive fixed-point expressions that is required to work
correctly for the kernel to be sound.
This termination checker evolves and improves with each version of
Coq, and therefore the most accurate documentation of its behavior is
its own source code.
Coq is not special in this regard: similar trusted features exist in
nearly every mainstream formal reasoning system.

The Partout project is interested in the principles of deductive
and computational formalisms.
In the broadest sense, we are interested in the question of
trustworthy and verifiable meta-theory.
At one end, this includes the well studied foundational questions of
the meta-theory of logical systems and type systems: cut-elimination
and focusing in proof theory, type soundness and normalization
theorems in type theory, etc.
The focus of our research here is on the fundamental relationships
behind the notions of computation and deduction.
We are particularly interested in relationships that go beyond the
well known correspondences between proofs and programs.3
Indeed, interpreting computation in terms of deduction (as in
logic programming) or deduction in terms of computation (as in
rewrite systems or in model checking) can often lead to fruitful and
enlightening research questions, both theoretical and practical.

At the other end, Partout works on the question of the
essential nature of deductive or computational formalisms.
For instance, we are interested in the question of proof
identity that attempts to answer the following question: when are
two proofs of the same theorem the same?
Surprisingly, this very basic question is left unanswered in
proof theory, the branch of mathematics that supposedly treats
proofs as algebraic objects of interest.
We also pay particular attention to the combinatorial and
complexity-theoretic properties of the formalisms.
Indeed, it is surprising that, until very recently, these
combinatorial aspects had received comparatively little attention.

To put trustworthy meta-theory to use, the Partout project also
works on the design and implementations of formal reasoning tools and
techniques.
We study the mathematical principles behind the representation of
formal concepts, such as specifications in the structural operational
semantics (SOS) style.
We also work on foundational questions about induction and
co-induction, which are used in intricate combinations in
metamathematics.

Software and hardware systems perform computation (systems that
process, calculate, and execute) and deduction (systems that
search, check, or prove). The makers of those systems express their
intent using various frameworks such as programming languages,
specification languages, and logics. The Partout project aims
at developing and using mathematical principles to design better
frameworks for computation and reasoning. Principles of expression are
researched from two directions, in tandem:

Foundational approaches, from theories to applications: studying fundamental problems of programming and proof theory.

Examples include studying the complexity of reduction strategies in lambda-calculi with sharing, or studying proof representations that quotient over rule permutations and can be adapted to many different logics.

Empirical approaches, from applications to theories: studying systems currently in use to build a theoretical understanding of the practical choices made by their designers.

Examples include studying realistic implementations of programming languages and proof assistants, which differ in interesting ways from their usual high-level formal description (regarding the sharing of code and data, for example), or studying new approaches to efficient automated proof search, relating them to existing approaches in proof theory, for example to design proof certificates or to generalize them to non-classical logics.

One of the strengths of Partout is the coexistence of several kinds
of expertise and points of view. Many dichotomies exist in
the study of computation and deduction:
functional programming vs logic programming,
operational semantics vs denotational semantics,
constructive logic vs classical logic,
proof terms vs proof nets, etc.
We do not identify with any one of them in particular, but rather with
them as a whole, believing in the value of interaction and
cross-fertilization between different approaches.
Partout defines its scope through the following core tenets:

More concretely, the research in Partout will be centered around
the following four themes:

The Partout team studies the structure of mathematical proofs, in ways that often make them more amenable to automated theorem proving, that is, automatically searching the space of proof candidates for a statement to find an actual proof or a counter-example.

(Due to fundamental computability limits, fully automatic proving is only possible for simple statements, but this field has been making a lot of progress in recent years, and is in particular interested in the idea of generating verifiable evidence for the proofs that are found, which fits squarely within the expertise of Partout.)

Our work on the structure of proofs also suggests ways in which they could be presented to a user, edited, and maintained, in particular in “proof assistants”: automated tools that assist in the writing of mathematical proofs while automatically checking their correctness.

Our work also gives insight into the structure and properties of programming languages. We can improve the design or implementation of programming languages, help programmers or language implementors reason about the correctness of programs in a given language, or reason about the cost of executing a program.

Noam Zeilberger was awarded an ANR PRC grant as project coordinator for the LambdaComb project, which starts in 2022. The aim of the project is to further develop the surprising connections between the lambda calculus and combinatorics that have been discovered in recent years. Partners include labs in Paris (LIX, LIPN, LIGM), Marseille (LIS), and Poland (Jagiellonian University), with an overall budget of roughly 285k€.

MOIN is a SWI-Prolog theorem prover for classical and intuitionistic modal logics. The modal and intuitionistic modal logics considered are all 15 systems of the modal S5-cube and all the decidable intuitionistic modal logics in the IS5-cube. MOIN also provides a prototype implementation for the intuitionistic logics whose decidability is not known (IK4, ID5, and IS4). MOIN consists of a set of Prolog clauses, each clause representing a rule in one of the three proof systems. The clauses are applied recursively to a given formula, constructing a proof-search tree. The user selects the nested proof system, the logic, and the formula to be tested. In the case of classical nested sequents and Maehara-style nested sequents, MOIN yields a derivation if the proof search succeeds, or a countermodel if it fails. The countermodel for classical modal logics is a Kripke model, while for intuitionistic modal logics it is a bi-relational model. In the case of Gentzen-style nested sequents, the prover does not perform countermodel extraction.
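
The prover's architecture can be conveyed by a small sketch: each inference rule becomes one case of a recursive procedure that is applied backwards to the goal, and the recursive calls form the proof-search tree. The following OCaml fragment is only an illustration of that idea on a tiny intuitionistic fragment (atoms, conjunction, implication); the actual MOIN implementation consists of Prolog clauses covering full nested sequent systems.

type formula =
  | Atom of string
  | And of formula * formula
  | Imp of formula * formula

type sequent = { hyps : formula list; goal : formula }

(* [prove s] returns [true] if the sequent [s] is derivable in this tiny
   calculus.  Each match branch plays the role of one inference rule; the
   recursive calls correspond to the premises of the rule, so the call
   tree is exactly the proof-search tree. *)
let rec prove { hyps; goal } =
  match goal with
  | And (a, b) ->                                 (* right conjunction *)
      prove { hyps; goal = a } && prove { hyps; goal = b }
  | Imp (a, b) ->                                 (* right implication *)
      prove { hyps = a :: hyps; goal = b }
  | Atom _ ->
      List.mem goal hyps                          (* axiom *)
      || List.exists
           (fun h ->
             let rest = List.filter (( <> ) h) hyps in
             match h with
             | And (a, b) ->                      (* left conjunction *)
                 prove { hyps = a :: b :: rest; goal }
             | Imp (a, b) when List.mem a hyps -> (* restricted left implication,
                                                     keeps this naive search terminating *)
                 prove { hyps = b :: rest; goal }
             | _ -> false)
           hyps

(* Example: a ⊃ (b ⊃ a) is provable. *)
let () =
  assert (prove { hyps = []; goal = Imp (Atom "a", Imp (Atom "b", Atom "a")) })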

A system description of MOIN is available at https://hal.inria.fr/hal-02457240

External Collaborators: Matteo Acclavio (University of Luxembourg), Davide Catta (University of Montpellier)

Continuing our work on constructive and intuitionistic modal logics, we provide the first game semantics for the constructive modal logic CK. We define arenas encoding modal formulas, and we define winning innocent strategies for games on these arenas. Finally, we characterize the winning strategies corresponding to proofs in the logic CK. To prove the full completeness of our semantics, we provide a sequentialization procedure for winning strategies. We also prove their compositionality and show how our results can be extended to the constructive modal logic CD. The results are published in 23 and 24.

External Collaborators: Dominic Hughes (U.C. Berkeley)

We uncover a close relationship between combinatorial and syntactic proofs for first-order logic (without equality). Whereas syntactic proofs are formalized in a deductive proof system based on inference rules, a combinatorial proof is a syntax-free presentation of a proof that is independent of any set of inference rules. We show that the two proof representations are related via a deep inference decomposition theorem that establishes a new kind of normal form for syntactic proofs. This yields (a) a simple proof of soundness and completeness for first-order combinatorial proofs, and (b) a full completeness theorem: every combinatorial proof is the image of a syntactic proof.

This result was published at the LICS 2021 conference 17.

External Collaborators: Danko Ilik (Siemens)

A compiler consists of a sequence of phases going from lexical analysis to code generation. Ideally, the formal verification of a compiler should include the formal verification of every component of the tool-chain. In order to contribute to the end-to-end verification of compilers, we implemented a verified lexer generator with usage similar to OCamllex. This software, Coqlex, reads a lexer specification and generates a lexer equipped with Coq proofs of its correctness. Although the performance of the generated lexers does not measure up to that of a standard lexer generator such as OCamllex, the safety guarantees it provides make it an interesting alternative when implementing fully verified compilers or other language processing tools.
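
For readers unfamiliar with the format, a lexer specification in the OCamllex style pairs regular expressions with semantic actions, as in the following sketch; Coqlex accepts specifications with a similar usage, though its exact input syntax may differ from what is shown here.

{ (* header: ordinary OCaml code, here defining the token type *)
  type token = INT of int | PLUS | EOF }

rule token = parse
  | [' ' '\t' '\n']   { token lexbuf }            (* skip whitespace *)
  | ['0'-'9']+ as n   { INT (int_of_string n) }   (* integer literals *)
  | '+'               { PLUS }
  | eof               { EOF }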

This work has been presented at the ML 2021 workshop 22.

For more than thirty years, various researchers, including current and previous members of Partout and Parsifal, have been applying proof theory to multiple topics in computational logic. A survey of that work has been published in 9, an invited submission to the 20th Anniversary Issue of Theory and Practice of Logic Programming. This survey documents various ways in which proof theory has been applied to logic programming. One can also see from this history a surprising influence of logic programming on the development of proof theory. These reciprocal influences between logic programming and proof theory are reported in the paper 10.

External Collaborators: Tomer Libal, University of Luxembourg, Computer Science Department

Higher-order pattern unification is often used in proof assistants and other computational logic systems to discover appropriate instances of quantifiers when implementing the mechanical search for proofs. Recently, this approach to unification has been expanded to the functions-as-constructors higher-order unification setting: this expanded setting continues to be decidable and possesses most general unifiers whenever unifiers exist 7.
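
As a standard illustration of the pattern fragment (not specific to the cited work): a unification problem in which every metavariable is applied only to distinct bound variables admits a most general unifier, for instance

\[
  \exists F.\;\forall x\,\forall y.\; F\,x\,y = g\,y\,x
  \qquad\text{is solved by}\qquad
  F := \lambda x.\,\lambda y.\; g\,y\,x,
\]

whereas a problem such as \(\exists F.\,\forall x.\; F\,(f\,x) = x\) falls outside the fragment, since the metavariable \(F\) is applied to a non-variable argument.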

External Collaborators: Alexandre Viel

The formulation of equality used in the team's implementations of model checking and theorem proving treats equality as a logical connective. Other treatments of equality view it as a non-logical predicate axiomatized by an appropriate theory. Our treatment as a logical connective, however, allows us to have very strong focusing theorems in the setting of arithmetic proofs. The Bedwyr and Abella systems incorporate this approach to equality. The paper 11 provides a proof that such unification, in its most general form, is undecidable.
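
Concretely, treating equality as a logical connective means giving it left and right introduction rules; a sketch of the standard rules (as used, up to presentation details, in systems like Abella and Bedwyr) is

\[
  \frac{}{\;\Gamma \vdash t = t\;}\;{=}R
  \qquad\qquad
  \frac{\{\,\theta\Gamma \vdash \theta C \;\mid\; \theta \in \mathrm{csu}(s,t)\,\}}
       {\Gamma,\, s = t \vdash C}\;{=}L
\]

where \(\mathrm{csu}(s,t)\) is a complete set of unifiers of \(s\) and \(t\); in particular, when \(s\) and \(t\) are not unifiable the left rule has no premises and the sequent is immediately provable.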

External Collaborators: Chuck Liang, Hofstra University, New York, USA.

Gentzen's sequent calculus LK is a landmark proof system. Given the extensive use that computer scientists have made of LK in recent decades, several undesirable features of this calculus have been identified. Among them is the fact that its inference rules are low-level and frequently permute over each other. As a result, large-scale structures within sequent calculus proofs are hard to identify. Liang and Miller 26 present a different approach to designing a sequent calculus for classical logic. Starting with LK, they examined the proof-search meaning of its inference rules and classified them as involving either don't-care nondeterminism or don't-know nondeterminism. Based on that classification, they designed the focused proof system LKF, in which inference rules belong to one of two phases of proof construction depending on which flavor of nondeterminism they involve. They then prove that the cut rule and the general form of the initial rule are admissible in LKF. These results can be used to provide simple proofs of various meta-theoretic properties of classical logic, including Herbrand's theorem.
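
To illustrate the two flavors of nondeterminism: the right rule for conjunction is invertible, so it can be applied eagerly without losing provability (don't-care), whereas the right rule for the existential quantifier requires guessing a witness term (don't-know):

\[
  \frac{\Gamma \vdash A, \Delta \qquad \Gamma \vdash B, \Delta}
       {\Gamma \vdash A \wedge B, \Delta}
  \qquad\qquad
  \frac{\Gamma \vdash A[t/x], \Delta}
       {\Gamma \vdash \exists x.\,A, \Delta}
\]

In LKF, rules of the first kind are grouped into one phase of proof construction and rules of the second kind into the other.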

Inferno is a software library by François Pottier (EPI Cambium) for implementing constraint-based type inference in a pleasant, declarative style. It contains a proof-of-concept inference engine for a very small programming language, but it is not obvious how to scale its declarative style to richer language features.
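
The core idea of the constraint-based style can be sketched as follows (hypothetical types and names chosen for this illustration, not the actual Inferno API): the program is first traversed to generate constraints, which are then solved by unification in a separate phase.

type ty = TInt | TArrow of ty * ty | TVar of int

type term =
  | Int of int
  | Var of string
  | Lam of string * term
  | App of term * term

type constr =
  | CEq of ty * ty            (* the two types must be made equal *)
  | CAnd of constr * constr   (* both sub-constraints must hold *)

(* generate a fresh inference variable *)
let fresh = let r = ref 0 in fun () -> incr r; TVar !r

(* [infer env t w] builds a constraint whose solutions make [w] the type
   of [t] under the typing environment [env]; no unification happens
   here, it is delegated to a separate constraint solver (omitted). *)
let rec infer env t w =
  match t with
  | Int _ -> CEq (w, TInt)
  | Var x -> CEq (w, List.assoc x env)
  | Lam (x, body) ->
      let a = fresh () and b = fresh () in
      CAnd (CEq (w, TArrow (a, b)), infer ((x, a) :: env) body b)
  | App (f, arg) ->
      let a = fresh () in
      CAnd (infer env f (TArrow (a, w)), infer env arg a)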

Olivier Martinot, as a PhD student, has been working with Gabriel Scherer on extending the Inferno approach to more language features, hoping to eventually cover a large subset of the OCaml type system. This action is continued from last year.

Gabriel presented part of this joint work at the 2021 edition of the ML Family Workshop 21.

Participants: Nicolas Chataing (M2 intern), Gabriel Scherer

Constructor unboxing is a proposed data-representation optimization for OCaml; it could improve memory usage in certain cases by eliminating overhead in the memory representation of values. We worked on a prototype implementation of constructor unboxing. This required solving a decision problem about the unfolding of ML datatype declarations in the presence of mutual recursion, an interesting scientific result in its own right.
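
OCaml already supports unboxing in the single-constructor case via the [@@unboxed] attribute; the proposal generalizes this idea to some multi-constructor types. The second declaration below is only a hypothetical illustration of the kind of type the optimization targets, not the final design.

(* Existing feature: a single-constructor, single-argument type can be
   declared unboxed, so an [id] is represented exactly like the
   underlying string, with no extra allocation. *)
type id = Id of string [@@unboxed]

(* Hypothetical target of constructor unboxing: if the arguments of the
   two constructors can be told apart at runtime (an immediate integer
   vs. a pointer), neither constructor needs its own box. *)
type key = Name of string | Index of int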

This work was presented at the ML workshop 20 and will be presented at JFLA'22 19.

Participants: Nathanaëlle Courant (INRIA Paris, EPI Cambium), Julien Lepiller (Yale University), Gabriel Scherer

Camlboot is a software project to "debootstrap" the OCaml compiler, that is, to compile the OCaml compiler (which is itself written in OCaml) without requiring a previous version of the OCaml compiler. It relies on a reference interpreter for OCaml written in a small subset of OCaml, which can be compiled to bytecode by a small, special-purpose compiler written in Scheme. The reference interpreter for OCaml is also a result of independent interest.
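
The role of a reference interpreter can be conveyed by a toy sketch: a direct, naive evaluator written for readability rather than speed. The actual Camlboot interpreter plays this role for a large subset of OCaml; the fragment below only illustrates the style on a minimal expression language.

type expr =
  | Int of int
  | Var of string
  | Add of expr * expr
  | Let of string * expr * expr   (* let x = e1 in e2 *)

(* straightforward environment-passing evaluator *)
let rec eval env = function
  | Int n -> n
  | Var x -> List.assoc x env
  | Add (e1, e2) -> eval env e1 + eval env e2
  | Let (x, e1, e2) -> eval ((x, eval env e1) :: env) e2

(* Example: let x = 1 in x + 2  evaluates to 3. *)
let () = assert (eval [] (Let ("x", Int 1, Add (Var "x", Int 2))) = 3)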

External Collaborators:
Ugo Dal Lago (University of Bologna & Inria) and
Gabriele Vanoni (University of Bologna & Inria).

This work studies the time performance of abstract machines for the λ-calculus.

This work belongs to the research theme Foundations of complexity analysis for functional programs and it has been published in 5.

External Collaborators:
Ugo Dal Lago (University of Bologna & Inria) and
Gabriele Vanoni (University of Bologna & Inria).

This work complements the one in the previous subsection by studying the space usage of the interaction abstract machine (shortened to IAM) for the λ-calculus.

This work belongs to the research theme Foundations of complexity analysis for functional programs and it has been published in 15.

External Collaborators:
Andrea Condoluci (Tweag I/O) and Claudio Sacerdoti Coen (University of Bologna).

This work proves that the number of steps taken by the strong call-by-value evaluation strategy of the λ-calculus is a reasonable time cost model: the strategy can be implemented with an overhead that is linear in both the number of evaluation steps and the size of the initial term, up to a logarithmic factor.

This work belongs to the research theme Foundations of complexity analysis for functional programs and it has been published in 13.

Participants: Noam Zeilberger

External Collaborators:
Olivier Bodini, LIPN, Université Sorbonne Paris Nord; Alexandros Singh, LIPN, Université Sorbonne Paris Nord

This work builds on recent surprising connections discovered between lambda calculus and the study of map enumeration, which is an active subfield of combinatorics initiated by Bill Tutte in the 1960s.
Notably, bijections between different families of linear lambda terms and different families of rooted maps were independently discovered by Bodini, Gardy, and Jacquot (2013) and by Zeilberger and Giorgetti (2015), and have since been the subject of a variety of followup works (for an overview, see the introduction to Zeilberger, “A theory of linear typings as flows on 3-valent graphs”, LICS'2018).

In this paper, we dive deeper into the study of the combinatorics of linear lambda calculus, focusing on the analysis of different parameters of lambda terms and their map-theoretic counterparts.
For instance, under the bijections mentioned above, closed subterms of a linear lambda term correspond to bridges in the corresponding map, i.e., edges whose deletion increases the number of connected components.
We proved that the limit distribution of the number of closed proper subterms of a random closed linear lambda term is a Poisson distribution of parameter 1.
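
Recall that a Poisson distribution of parameter 1 assigns probability

\[
  \Pr[X = k] \;=\; \frac{e^{-1}}{k!}, \qquad k = 0, 1, 2, \ldots
\]

so, in particular, the asymptotic probability that a random closed linear lambda term has no closed proper subterm is \(1/e \approx 0.37\).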

This work occurs in the context of Singh's doctoral thesis research, co-supervised by Bodini and Zeilberger.
It has been presented by Singh at various venues including the journées ALEA 2021, the “Structure Meets Power” workshop at LICS 2021, and the “Combinatorics and Arithmetic for Physics” workshop at IHES, among others.
The pre-print 25 has been submitted to the open-access journal Combinatorial Theory and is currently under review.

In 2013 we proposed a new method of interactive theorem proving based on subformula linking, which involves creating links between arbitrary subformulas of a goal conjecture (i.e., the theorem being proved) 29. In work published at CADE 2021 16 we show how to extend this technique to intuitionistic logic and a certain kind of intuitionistic type theory.

The main purpose of this work is to build new interactive theorem proving interfaces that are decoupled from the formal languages used to instruct proof verifiers. This allows the same proof-building interface to be used across a variety of theorem provers, and also simplifies the instruction that must be given to novice users.

We are currently exploring mechanisms to add induction and higher-order logic to this framework.

The goal of the thesis is to develop ways to optimize the performance of software while not sacrificing the safety guarantees already provided for non-optimized code. The software that Siemens uses for its self-driving trains (e.g., Metro line 14 in Paris) is programmed in Ada. Due to the high safety requirements on the software, the Ada compiler used has to be certified. At the current state of the art, only non-optimized code fulfils all necessary requirements. Because of increasing performance needs, we are interested in producing optimized code that also fulfils these requirements.

Stated most generally, the aim of the thesis is to assure, at the same time:

The OCaml Software Foundation (OCSF),4 established in 2018 under the umbrella of the Inria Foundation, aims to promote, protect, and advance the OCaml programming language and its ecosystem, and to support and facilitate the growth of a diverse and international community of OCaml users.

Since 2019, Gabriel Scherer has served as the director of the foundation.

Nomadic Labs, a Paris-based company, has implemented the Tezos blockchain and cryptocurrency entirely in OCaml. In 2019, Nomadic Labs and Inria signed a framework agreement (“contrat-cadre”) that allows Nomadic Labs to fund multiple research efforts carried out by Inria groups. Within this framework, we participate in the following grants, in collaboration with the project-team Cambium at INRIA Paris:

This grant is intended to fund a number of improvements to OCaml, including the addition of new features and a possible re-design of the OCaml type-checker. This grant funds the PhD thesis of Olivier Martinot on this topic.

This grant is intended to fund the day-to-day maintenance of OCaml as well as the considerable work involved in managing the release cycle.