There is an emerging consensus that formal methods must be used as a matter of course in software development. Most software is too complex to be fully understood by one programmer or even a team of programmers, and requires the help of computerized techniques such as testing and model checking to analyze and eliminate entire classes of bugs. Moreover, for software to be maintainable and reusable, it not only needs to be bug-free but also needs to have fully specified behavior, ideally accompanied by formal and machine-checkable proofs of correctness with respect to the specification. Indeed, formal specification and machine verification are the only way to achieve the highest level of assurance (EAL7) according to the ISO/IEC Common Criteria.1

Historically, achieving such a high degree of certainty in the operation of software has required significant investment of manpower, and hence of money. As a consequence, only software that is of critical importance (and relatively unchanging), such as monitoring software for nuclear reactors or fly-by-wire controllers in airplanes, has been subjected to such intense scrutiny. However, we are entering an age where we need trustworthy software in more mundane situations, with rapid development cycles, and without huge costs. For example: modern cars are essentially mobile computing platforms, smart-devices manage our intensely personal details, elections (and election campaigns) are increasingly fully computerized, and networks of drones monitor air pollution, traffic, military arenas, etc. Bugs in such systems can certainly lead to unpleasant, dangerous, or even life-threatening incidents.

The field of formal methods has stepped up to meet this growing need for trustworthy general purpose software in recent decades. Techniques such as computational type systems and explicit program annotations/contracts, and tools such as model checkers and interactive theorem provers, are starting to become standard in the computing industry. Indeed, many of these tools and techniques are now a part of undergraduate computer science curricula. In order to be usable by ordinary programmers (without PhDs in logic), such tools and techniques have to be high level and rely heavily on automation. Furthermore, multiple tools and techniques often need to be marshaled to achieve a verification task, so theorem provers, solvers, model checkers, property testers, etc. need to be able to communicate with—and, ideally, trust—each other.

With all this sophistication in formal tools, there is an obvious
question: what should we trust?
Sophisticated formal reasoning tools are, generally speaking, complex
software artifacts themselves; if we want complex software to undergo
rigorous formal analysis we must be prepared to formally analyze the
tools and techniques used in formal reasoning itself.
Historically, the issue of trust has been addressed by means of
relativizing it to small and simple cores.
This is the basis of industrially successful formal reasoning systems
such as Coq, Isabelle, HOL4, and ACL2.
However, the relativization of trust has led to a balkanization of the
formal reasoning community, since the Coq kernel, for example, is
incompatible with the Isabelle kernel, and neither can directly
cross-validate formal developments built with the other.
Thus, there is now a burgeoning cottage industry of translations and
adaptations of different formal proof languages for bridging the gap.
A number of proposals have also been made for universal or
retargetable proof languages (e.g., Dedukti, ProofCert) so that the
cross-platform trust issues can be factorized into single trusted
checkers.

Beyond mutual incompatibility caused by relativized trust, there is a
bigger problem: the proof evidence that is accepted by small
kernels is generally far too detailed to be useful.
Formal development usually occurs at a much higher level, relying on
algorithmic techniques such as unification, simplification, rewriting,
and controlled proof search to fill in details.
Indeed, the most reusable products of formal developments tend to be
these algorithmic techniques and associated collections of
hand-crafted rules.
Unfortunately, these techniques are even less portable than the fully
detailed proofs themselves, since the techniques are often implemented
in terms of the behaviors of the trusted kernels.
We can broadly say that the problem with relativized trust is that it
is based on the operational interpretation of implementations
of trusted kernels.
There still remains the question of meta-theoretic correctness.
Most formal reasoning systems implement a variant of a well known
mathematical formalism (e.g., Martin-Löf type theory, set theory,
higher-order logic), but it is surprising that hardly any mainstream
system has a formalized meta-theory.2
Furthermore, formal reasoning systems are usually associated with
complicated checkers for side-conditions that often have unclear
mathematical status.
For example, the Coq kernel has a built-in syntactic termination
checker for recursive fixed-point expressions that is required to work
correctly for the kernel to be sound.
This termination checker evolves and improves with each version of
Coq, and therefore the most accurate documentation of its behavior is
its own source code.
Coq is not special in this regard: similar trusted features exist in
nearly every mainstream formal reasoning system.

The Partout project is interested in the principles of deductive
and computational formalisms.
In the broadest sense, we are interested in the question of
trustworthy and verifiable meta-theory.
At one end, this includes the well studied foundational questions of
the meta-theory of logical systems and type systems: cut-elimination
and focusing in proof theory, type soundness and normalization
theorems in type theory, etc.
The focus of our research here is on the fundamental relationships
behind the notions of computation and deduction.
We are particularly interested in relationships that go beyond the
well known correspondences between proofs and programs.3
Indeed, interpreting computation in terms of deduction (as in
logic programming) or deduction in terms of computation (as in
rewrite systems or in model checking) can often lead to fruitful and
enlightening research questions, both theoretical and practical.

From another end, Partout works on the question of the
essential nature of deductive or computational formalisms.
For instance, we are interested in the question of proof
identity that attempts to answer the following question: when are
two proofs of the same theorem the same?
Surprisingly, this very basic question is left unanswered in
proof theory, the branch of mathematics that supposedly treats
proofs as algebraic objects of interest.
We also pay particular attention to the combinatorial and
complexity-theoretic properties of the formalisms.
Indeed, it is surprising that until very recently the

To put trustworthy meta-theory to use, the Partout project also
works on the design and implementations of formal reasoning tools and
techniques.
We study the mathematical principles behind the representations of
formal concepts in structural operational semantics (SOS) style.
We also work on foundational questions about induction and
co-induction, which are used in intricate combinations in
metamathematics.

Software and hardware systems perform computation (systems that
process, compute and perform) and deduction (systems that
search, check or prove). The makers of those systems express their
intent using various frameworks such as programming languages,
specification languages, and logics. The Partout project aims
at developing and using mathematical principles to design better
frameworks for computation and reasoning. Principles of expression are
researched from two directions, in tandem:

Foundational approaches, from theories to applications: studying fundamental problems of programming and proof theory.

Examples include studying the complexity of reduction strategies in lambda-calculi with sharing, or studying proof representations that quotient over rule permutations and can be adapted to many different logics.

Empirical approaches, from applications to theories: studying systems currently in use to build a theoretical understanding of the practical choices made by their designers.

Examples include studying realistic implementations of programming languages and proof assistants, which differ in interesting ways from their usual high-level formal description (regarding sharing of code and data, for example), or studying new approaches to efficient automated proof search, relating them to existing approaches of proof theory, for example to design proof certificates or to generalize them to non-classical logics.

One of the strengths of Partout is the co-existence of a number
of different expertise and points of view. Many dichotomies exist in
the study of computation and deduction:
functional programming vs logic programming,
operational semantics vs denotational semantics,
constructive logic vs classical logic,
proof terms vs proof nets, etc.
We do not identify with any one of them in particular, rather with
them as a whole, believing in the value of interaction and
cross-fertilization between different approaches.
Partout defines its scope through the following core tenets:

More concretely, the research in Partout will be centered around
the following four themes:

The Partout team studies the structure of mathematical proofs, in ways that often make them more amenable to automated theorem proving – automatically searching the space of proof candidates for a statement to find an actual proof – or a counter-example.

(Due to fundamental computability limits, fully-automatic proving is only possible for simple statements, but this field has been making a lot of progress in recent years, and is in particular interested in the idea of generating verifiable evidence for the proofs that are found, which fits squarely within the expertise of Partout.)

Our work on the structure of proofs also suggests ways in which they could be presented to a user, edited, and maintained, in particular in “proof assistants”, automated tools that assist the writing of mathematical proofs with automatic checking of their correctness.

Our work also gives insight on the structure and properties of programming languages. We can improve the design or implementation of programming languages, help programmers or language implementors reason about the correctness of the programs in a given language, or reason about the cost of execution of a program.

Partout participated in the discussion within INRIA Saclay and LIX of a new multi-year law on research (LPPR: Loi Pluriannuelle de Programmation de la Recherche). In particular, Partout team members (Gabriel Scherer and Luc Pellissier) organized and ran the main physical meeting of the lab / research centre on this topic in March 2020.

MOIN is a SWI-Prolog theorem prover for classical and intuitionistic modal logics. The modal and intuitionistic modal logics considered are all 15 systems occurring in the modal S5-cube, and all the decidable intuitionistic modal logics in the IS5-cube. MOIN also provides a prototype implementation for the intuitionistic logics for which decidability is not known (IK4, ID5, and IS4). MOIN consists of a set of Prolog clauses, each clause representing a rule in one of the three proof systems. The clauses are recursively applied to a given formula, constructing a proof-search tree. The user selects the nested proof system, the logic, and the formula to be tested. In the case of classical nested sequents and Maehara-style nested sequents, MOIN yields a derivation if the proof search succeeds, or a countermodel if it fails. The countermodel for classical modal logics is a Kripke model, while for intuitionistic modal logics it is a bi-relational model. In the case of Gentzen-style nested sequents, the prover does not perform countermodel extraction.

A system description of MOIN is available at <https://hal.inria.fr/hal-02457240>

Participants: Matteo Acclavio, Lutz Straßburger

External Collaborators: Ross Horne, University of Luxembourg

In this work we developed a proof system whose basic objects of reasoning are not formulas but arbitrary graphs. More precisely, we start from the well-known relationship between formulas and cographs, which are undirected graphs that do not have an induced subgraph isomorphic to the four-vertex path P4.

Participants: Marianna Girlando, Marianela Morales, Lutz Straßburger

External Collaborators: Sonia Marin, UCL

We continued our work on the development of labelled sequent systems and nested sequent systems for intuitionistic modal logics. In particular, our labelled systems are equipped with two relation symbols, one for the accessibility relation associated with the Kripke semantics for modal logics and one for the preorder relation associated with the Kripke semantics for intuitionistic logic.

The nested sequent systems come in three flavours: first, standard Gentzen-style single-conclusion systems; second, Maehara-style multiple-conclusion systems; and third, fully structured systems with two kinds of brackets, one for each of the two relation symbols. For all systems we established a close correspondence with the bi-relational Kripke semantics for intuitionistic modal logic.

Participants: Gabriel Scherer

External Collaborators: Alban Reynaud (ENS Lyon), Jeremy Yallop (University of Cambridge, UK)

This work presents a new approach to checking recursive definitions, designed to solve a technical difficulty in the design of the OCaml programming language that was causing the compiler to accept erroneous (memory-unsafe) programs involving certain classes of recursive definitions. Our involvement in this work started during the 2018 internship of Alban Reynaud in our project-team, followed by development work resulting in the inclusion of our new criterion in the OCaml compiler in 2019, and finally an academic publication (to be published in January 2021 4) that details the new check, studies its theory, and proves its correctness in an idealized setting.
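To illustrate the kind of distinction such a check must make, here is a sketch in ordinary OCaml source terms (these are illustrative examples of our own, not the compiler's internal analysis):

```ocaml
(* Accepted: the recursive occurrence sits behind a constructor, so the
   cyclic value can be allocated first and backpatched afterwards. *)
let rec ones = 1 :: ones

(* Accepted: the recursive occurrence is guarded by a function
   abstraction, so it is not dereferenced at definition time. *)
let rec fact n = if n <= 1 then 1 else n * fact n

(* Rejected by the check, since evaluating the right-hand side would
   inspect [loop] before its definition is complete:
     let rec loop = loop + 1
*)

let () =
  assert (List.nth ones 10 = 1);
  assert (fact 6 = 720)
```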

Participants: Gabriel Scherer

External Collaborators: Francesco Mecca (University of Turin, Italy)

We propose an algorithm for the translation validation of a pattern matching compiler for a small subset of the OCaml pattern matching features. Given a source program and its compiled version, the algorithm checks whether the two are equivalent, or produces a counterexample in case of a mismatch.

Our equivalence algorithm works with decision trees. Source patterns are converted into a decision tree using matrix decomposition. Target programs, described in a subset of the Lambda intermediate representation of the OCaml compiler, are turned into decision trees by applying symbolic execution.
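As a toy illustration of the decision-tree representation both sides are compiled to (the datatype and names below are our own minimal model, not the OCaml compiler's actual Lambda IR):

```ocaml
(* A minimal model of decision trees for pattern-matching compilation. *)
type value = Ctor of string * value list

type tree =
  | Leaf of int                               (* index of the matching clause *)
  | Switch of (string * tree) list * tree option
                                              (* dispatch on head constructor *)

(* Running a decision tree against a value's head constructor. *)
let rec run (Ctor (c, _) as v) = function
  | Leaf i -> i
  | Switch (branches, default) ->
    (match List.assoc_opt c branches, default with
     | Some t, _ -> run v t
     | None, Some d -> run v d
     | None, None -> failwith "match failure")

(* The source match [function None -> 0 | Some _ -> 1] can be read as this
   tree; equivalence checking then amounts to comparing the behaviour of
   the source tree and the tree recovered by symbolic execution. *)
let t = Switch ([ ("None", Leaf 0); ("Some", Leaf 1) ], None)

let () =
  assert (run (Ctor ("None", [])) t = 0);
  assert (run (Ctor ("Some", [ Ctor ("0", []) ])) t = 1)
```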

This work, presenting the results of Francesco Mecca's master internship in our project-team, was presented at the "ML Family Workshop" in August 2020 22.

Participants: Gabriel Scherer, Olivier Martinot

Inferno is a software library from François Pottier (EPI Cambium) to implement constraint-based type-inference in a pleasant, declarative style. It contains a proof-of-concept inference engine for a very small programming language, but it is not obvious how to scale its declarative style to richer language features.

Olivier Martinot, as a master intern and then a beginning PhD student, has been working with Gabriel Scherer on extending the Inferno approach to more language features, hoping to eventually cover a large subset of the OCaml type system.
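As a rough sketch of the constraint-based style of inference described above (a toy solver of our own invention; Inferno's actual API is different and far richer):

```ocaml
(* Toy constraint-based inference: equality constraints between types,
   solved by naive first-order unification (no occurs check, for brevity). *)
type ty = TVar of int | TInt | TArrow of ty * ty

let rec resolve subst = function
  | TVar v ->
    (match List.assoc_opt v subst with
     | Some t -> resolve subst t
     | None -> TVar v)
  | t -> t

let rec unify subst (t1, t2) =
  match resolve subst t1, resolve subst t2 with
  | TVar v, TVar w when v = w -> subst
  | TVar v, t | t, TVar v -> (v, t) :: subst
  | TInt, TInt -> subst
  | TArrow (a1, b1), TArrow (a2, b2) ->
    unify (unify subst (a1, a2)) (b1, b2)
  | _ -> failwith "type error"

(* For [fun x -> x + 1], constraint generation would emit "type of x =
   int"; solving the constraint instantiates the inference variable. *)
let subst = unify [] (TVar 0, TInt)
let () = assert (resolve subst (TVar 0) = TInt)
```

The declarative appeal of this style is that constraint generation and constraint solving are entirely separate phases, which is what makes scaling it to richer language features an interesting design problem.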

Olivier presented a part of his master work at the "ML Family Workshop" 2020, 21.

Participants: Luc Pellissier

External Collaborators: Davide Catta (LIRMM, Montpellier), Christian Rétoré (LIRMM, Montpellier)

According to inferentialism, the meaning of a statement lies in its argumentative use, its justifications, its refutations and more generally its deductive relation to other statements. Luc Pellissier worked on a first step towards an "implementation" of the inferentialist view of meaning, and a first proposal for a logical structure which describes an argumentation. The work 11 proposes a simple notion of argumentative dialogue, which can be either carried in purely logical terms or in natural language.

Participants: Dale Miller

Most computer checked proofs are tied to the particular technology of a prover's software. While sharing results between proof assistants is a recognized and desirable goal, the current organization of theorem proving tools makes such sharing an exception instead of the rule. In fact, the current architecture of proof assistants and formal proofs needs to be turned inside-out. That is, instead of having a few mature theorem provers include within them their formally checked theorems and proofs, proof assistants should sit on the edge of a web of formal proofs and proof assistants should be exporting their proofs so that they can exist independently of any theorem prover. While it is necessary to maintain the dependencies between definitions, theories, and theorems, no explicit library structure should be imposed on this web of formal proofs. Thus a theorem and its proofs should not necessarily be located at a particular URL or within a particular prover's library. While the world of symbolic logic and proof theory certainly allows for proofs to be seen as global and permanent objects, there is a lot of research and engineering work that is needed to make this possible. The W3Proof Action Exploratoire is based on these observations and goals.

This work has been published in 14.

Participants: Matteo Manighetti, Dale Miller

External Collaborators:
Roberto Blanco, Max Planck Institute for Security and Privacy; Enrico
Tassi, Inria Sophia Antipolis.

Logic programming implementations of the foundational proof certificate (FPC) framework are capable of checking a wide range of proof evidence. Proof checkers based on logic programming can make use of both unification and backtracking search to allow varying degrees of proof reconstruction to take place during proof checking. Such proof checkers are also able to elaborate proofs lacking full details into proofs containing much more detail. We are using the Coq-Elpi plugin, which embeds an implementation of the λProlog language into Coq.

This work has been published in 25. This work was also presented as “work-in-progress” at the LFMTP 2020 workshop in June 2020.

Participants: Dale Miller

External Collaborators:
Matteo Cimini, University of Massachusetts, Lowell;
Jeremy Siek, Indiana University, Bloomington.

In this work, we present a type system over language definitions that classifies parts of the operational semantics of a language in input, and models a common language design organization. The resulting typing discipline guarantees that the language at hand is automatically type sound. Thanks to the use of types to model language design, our type checker has a high-level view on the language being analyzed and can report messages using the same jargon as language designers. We have implemented our type system in the LANG-N-CHECK tool, and we have applied it to derive the type soundness of several functional languages, including those with recursive types, polymorphism, exceptions, lists, sums, and several common types and operators. Our system is designed to output proof scripts of language correctness using the Abella theorem prover.

This work has been published in 12.

Participants: Beniamino Accattoli

External Collaborators:
Ugo Dal Lago, University of Bologna & Inria;
Gabriele Vanoni, University of Bologna & Inria.

This work revisits the Interaction Abstract Machine (IAM), a machine based on Girard's Geometry of Interaction. It is an unusual machine, radically different from the mainstream paradigm of environment-based machines for functional languages. The soundness proof in the literature, due to Danos, Regnier & Herbelin, is convoluted and passes through various other formalisms. Here we provide a new direct proof of its correctness, based on a variant of Sands's improvements, a natural notion of bisimulation. Moreover, our proof is carried out on a new presentation of the IAM, defined as a machine acting directly on λ-terms, rather than on linear logic proof nets.

The work was the first step towards the complexity analysis of the IAM. As such, it belongs to the research theme Foundations of complexity analysis for functional programs.

This work has been published in 5.

Participants: Beniamino Accattoli

External Collaborators:
Alejandro Díaz-Caro, CONICET & University of Buenos Aires & University Nacional de Quilmes (Argentina).

We introduce a simple extension of the

This work arose as a collaboration while Accattoli was visiting the University of Buenos Aires in the summer of 2019 to teach at the ECI 2019 summer school.

This work has been published in 6.

Participants: Beniamino Accattoli

External Collaborators:
Stéphane Graham-Lengrand, SRI International;
Delia Kesner, Université de Paris & CNRS.

This line of work extends a work titled "Tight Typings and Split Bounds", previously published at the conference ICFP 2018 by the same authors, and included in the 2018 report of the PARSIFAL team.

Essentially, that paper was about the use of multi types, a variant of intersection types, to extract bounds on evaluation lengths and on the size of normal forms of typed terms, with respect to different evaluation strategies and notions of normal form.

Here we refined and extended many of the results of the conference version, producing an extended journal version 3.

Participants: Beniamino Accattoli

External Collaborators:
Claudia Faggian,
Université de Paris & CNRS, France;
Giulio Guerrieri
University of Bath, UK.

This is a work about rewriting theory, with applications to the

We present a new technique for proving factorization theorems for compound rewriting systems in a modular way, which is inspired by the Hindley-Rosen technique for confluence. Specifically, our approach is well adapted to deal with extensions of the call-by-name and call-by-value λ-calculi.

The technique is first developed abstractly. We isolate a sufficient condition (called linear swap) for lifting factorization from components to the compound system, and which is compatible with β-reduction. We then closely analyze some common factorization schemas for the λ-calculus.

Concretely, we apply our technique to diverse extensions of the

The work has been published in 7.

Participants: Noam Zeilberger

External Collaborators:
Tarmo Uustalu, Reykjavik University and Tallinn University of Technology;
Niccolò Veltri, Tallinn University of Technology.

Skew monoidal categories are a well-motivated generalization of monoidal categories where the three structural laws of left and right unitality and associativity are not required to be isomorphisms but merely transformations in a particular direction.
They have been thoroughly studied from a categorical perspective since being axiomatized by Szlachányi (2012), and in a programming languages context, they were considered by Uustalu as an outgrowth of his influential work on relative monads (Altenkirch, Chapman, Uustalu 2015).
The simpler setting where one drops the unit laws and only keeps an ordered associativity law yields the Tamari order, which has likewise seen a resurgence of interest from the enumerative combinatorics community following an unexpected result by Chapoton (2006) on the number of intervals in Tamari lattices.

In “A sequent calculus for a semi-associative law” (FSCD'2017, extended version published as LMCS 15:1, 2019), Zeilberger observed that the Tamari order is precisely captured by a sequent calculus with a simple variation of the rules in Lambek (1958). This was used to give a surprising application of proof theory to combinatorics, in particular a new proof of Chapoton's result by way of a cut-elimination and focusing theorem for the sequent calculus, as well as a new proof of the original result by Friedman and Tamari (1961) that the order satisfies a lattice property.

These two independent threads were brought together in the collaboration by the three named authors, which resulted in four papers published in 2020.
23 is a longer version of a paper at MFPS'2018, in which we showed how the sequent calculus for the Tamari order may be extended to a sequent calculus that precisely captures skew monoidal categories, in the sense that we can use it to prove a coherence theorem.
In 17, we further extended this approach to capture “partially skew” categories with different sets of normality conditions.
Our MFPS'2020 paper 16 had a more semantic bent, studying the relationship between skew monoidal categories and skew closed categories.
Finally, 15 investigated the proof theory of skew prounital closed categories, which have good motivations as models of ordered linear lambda calculus.

Participants: Noam Zeilberger

External Collaborators:
Nicolas Blanco, University of Birmingham.

Polycategories are known to give rise to models of classical linear logic in so-called representable polycategories with duals, which ask for the existence of various polymaps satisfying the different universal properties needed to define tensor, par, and negation. We begin by explaining how these different universal properties can all be seen as instances of a single notion of universality of a polymap parameterised by an input or output object, which also generalises the classical notion of universal multimap in a multicategory. We then proceed to introduce a definition of in-cartesian and out-cartesian polymaps relative to a refinement system (= strict functor) of polycategories, in such a way that universal polymaps can be understood as a special case. In particular, we obtain that a polycategory is a representable polycategory with duals if and only if it is bifibred over the terminal polycategory 1. Finally, we present a Grothendieck correspondence between bifibrations of polycategories and pseudofunctors into MAdj, the (weak) 2-polycategory of multivariable adjunctions. When restricted to bifibrations over 1 we get back the correspondence between *-autonomous categories and Frobenius pseudomonoids in MAdj that was recently observed by Shulman.

This work arose in the context of Blanco's doctoral thesis research, and has been published in 10.

Participants: Noam Zeilberger

External Collaborators:
Olivier Bodini, LIPN, Université Sorbonne Paris Nord; Alexandros Singh, LIPN, Université Sorbonne Paris Nord

In graph theory, a “map” is another name for a graph embedded on a surface in such a way that the surface is cut up into a collection of simply-connected regions.
The study of map enumeration has been an active subfield of combinatorics since the pioneering work of Bill Tutte in the 1960s, and quite surprisingly appears to have deep connections to the combinatorics of lambda calculus.
Notably, bijections between different families of linear lambda terms and different families of maps were independently discovered by Bodini, Gardy, and Jacquot (2013) and by Zeilberger and Giorgetti (2015), and have since been the subject of a variety of followup works (for an overview, see the introduction to Zeilberger, “A theory of linear typings as flows on 3-valent graphs”, LICS'2018).

In this work, we dive deeper into the study of the combinatorics of linear lambda calculus, focusing on the analysis of different parameters of lambda terms and their map-theoretic counterparts.
For instance, under the bijections mentioned above, closed subterms of a linear lambda term correspond to bridges in the corresponding map, i.e., edges whose deletion increases the number of connected components.
We proved that the limit distribution of the number of closed proper subterms of a random linear lambda term is a Poisson distribution of parameter 1 (= the asymptotic probability of having
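For reference, a Poisson distribution of parameter λ = 1 (a standard definition, recalled here for the reader) assigns the probabilities:

```latex
\Pr[X = k] \;=\; \left.\frac{e^{-\lambda}\lambda^{k}}{k!}\right|_{\lambda = 1} \;=\; \frac{1}{e\,k!},
\qquad \text{so in particular } \Pr[X = 0] = 1/e .
```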

This is work that occurs in the context of Singh's doctoral thesis research, and was presented as a talk at CLA'2020.

The goal of the thesis is to develop ways to optimize the performance of software, while not sacrificing the guarantees of safety already provided for non-optimized code. The software that Siemens uses for its self-driving trains (e.g. Metro 14 in Paris) is programmed in Ada. Due to the high safety requirements for the software, the Ada compiler used has to be certified. At the current state of the art, only non-optimized code fulfils all necessary requirements. Because of higher performance needs, we are interested in producing optimized code that also fulfils these requirements.

Stated most generally, the aim of the thesis is to assure, at the same time:

The OCaml Software Foundation (OCSF),4 established in 2018 under the umbrella of the Inria Foundation, aims to promote, protect, and advance the OCaml programming language and its ecosystem, and to support and facilitate the growth of a diverse and international community of OCaml users.

Since 2019, Gabriel Scherer serves as the director of the foundation.

Nomadic Labs, a Paris-based company, has implemented the Tezos blockchain and cryptocurrency entirely in OCaml. In 2019, Nomadic Labs and Inria signed a framework agreement (“contrat-cadre”) that allows Nomadic Labs to fund multiple research efforts carried out by Inria groups. Within this framework, we participate in the following grants, in collaboration with the project-team Cambium at INRIA Paris:

This project teams up three research groups, one at Inria Saclay, one at the University of Bath, and one at University College London, who are driven by their joint interest in the development of a combinatorial proof theory which is able to treat formal proofs independently from syntactic proof systems.

We plan to focus our research in two major directions: First, study the normalization of combinatorial proofs, with possible applications for the implementation of functional programming languages, and second, study combinatorial proofs for the logic of bunched implications, with the possible application for separation logic and its use in the verification of imperative programs.

ANR JCJC project COCA HOLA: Cost Models for Complexity Analyses of Higher-Order Languages, coordinated by B. Accattoli, 2016–2021, ANR-16-CE40-004-01.

Jui-Hsuan Wu, supervised by Lutz Straßburger (April 2020 – August 2020, M2 MPRI)