## Section: Research Program

### Semantic and logical foundations for effects in proof assistants based on type theory

We propose to incorporate effects into the theory of proof assistants at a foundational level. Not only would this allow for certified programming with effects, but it would also have implications for both semantics and logic.

We mean *effects* in a broad sense that encompasses both Moggi's
monads [90] and Girard's linear
logic [57]. These two seminal works have given rise to respective
theories of effects (monads) and resources (comonads). Recent advances,
however, have unified these two lines of thought: it is now clear that
the defining feature of effects, in the broad sense, is sensitivity
to evaluation order [79], [50].
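
This sensitivity can be made concrete in the simplest case of state: the same two commands, sequenced in the two possible orders, produce observably different results. A hand-rolled Haskell sketch (all names are ours, for illustration only):

```haskell
-- A minimal state monad, to show that effects make evaluation order
-- observable: sequencing the same two commands in the two possible
-- orders yields different results.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State gf <*> State ga =
    State (\s -> let (f, s1) = gf s
                     (a, s2) = ga s1
                 in (f a, s2))

instance Monad (State s) where
  State g >>= f = State (\s -> let (a, s') = g s in runState (f a) s')

-- Return the current counter, then increment it.
next :: State Int Int
next = State (\n -> (n, n + 1))

leftFirst, rightFirst :: State Int (Int, Int)
leftFirst  = do { a <- next; b <- next; return (a, b) }
rightFirst = do { b <- next; a <- next; return (a, b) }
-- runState leftFirst 0  evaluates to ((0,1), 2)
-- runState rightFirst 0 evaluates to ((1,0), 2)
```

In a pure calculus the two definitions would be interchangeable; with the state effect, the order of the two binds is part of the program's meaning.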

In contrast, the type theory that forms the foundations of proof assistants
is based on the pure $\lambda $-calculus and is built on the assumption
that evaluation order is irrelevant. Evaluation order is therefore
the blind spot of type theory. Already for Moggi [91], integrating
the dependent types of type theory with monads was *“the next
difficult step [...] currently under investigation”*.

Any realistic program contains effects: state, exceptions, input-output. More generally, evaluation order may simply be important for complexity reasons. With this in mind, many works have focused on certified programming with effects: notably Ynot [95], and more recently $\mathrm{F}^{\star}$ [105] and Idris [41], which propose various ways for encapsulating effects and restricting the dependency of types on effectful terms. Effects are either specialised, such as the monads with Hoare-style pre- and post-conditions found in Ynot or $\mathrm{F}^{\star}$, or more general, such as the algebraic effects implemented in Idris. But whereas there are several experiments and projects pursuing the certification of programs with effects, each making its own choices on how effects and dependency should be merged, there is on the other hand a deficit of logical and semantic investigations.
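
As a point of comparison, the algebraic-effects style can be sketched with a free monad: programs are trees of operations drawn from a signature (here a minimal `Get`/`Put` state signature), and handlers assign them meaning. This is an illustrative Haskell sketch, not Idris's actual effects library; all names are ours:

```haskell
-- Programs as trees of operations from a Get/Put signature.
data Prog s a
  = Pure a
  | Get (s -> Prog s a)   -- request the current state, then continue
  | Put s (Prog s a)      -- overwrite the state, then continue

instance Functor (Prog s) where
  fmap f (Pure a)  = Pure (f a)
  fmap f (Get k)   = Get (fmap f . k)
  fmap f (Put s p) = Put s (fmap f p)

instance Applicative (Prog s) where
  pure = Pure
  pf <*> pa = pf >>= \f -> fmap f pa

instance Monad (Prog s) where
  Pure a  >>= f = f a
  Get k   >>= f = Get ((>>= f) . k)
  Put s p >>= f = Put s (p >>= f)

-- A handler interpreting the operations as state passing.
runState :: Prog s a -> s -> (a, s)
runState (Pure a)  s = (a, s)
runState (Get k)   s = runState (k s) s
runState (Put s p) _ = runState p s

-- Return the counter, then increment it.
tick :: Prog Int Int
tick = Get (\n -> Put (n + 1) (Pure n))
-- runState (tick >> tick) 0 evaluates to (1, 2)
```

The separation between the syntax of operations and their handlers is what lets a type system track which effects a term may perform.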

We propose to develop the foundations of a type theory with effects, taking into account its logical and semantic aspects, and to study their practical and theoretical consequences. A type theory that integrates effects would have logical, algebraic and computational implications when viewed through the Curry-Howard correspondence. For instance, effects such as control operators establish a link with classical proof theory [62]. Indeed, control operators provide computational interpretations of type isomorphisms such as $A\cong \neg \neg A$ and $\neg \forall xA\cong \exists x\neg A$ (e.g. [92]), whereas the conventional wisdom of type theory holds that such axioms are non-constructive (this is for instance the point of view that has been advocated so far in homotopy type theory [107]). Another example of an effect with logical content is state (more precisely, memoisation), which is used to provide constructive content to the classical dependent axiom of choice [38], [74], [66]. In the long term, a whole body of literature on the constructive content of classical proofs is to be explored and integrated, providing rich sources of inspiration: Kohlenbach's proof mining [73] and Simpson's reverse mathematics [103], for instance, are certainly interesting to investigate from the Curry-Howard perspective.
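
The computational content of $A\cong \neg \neg A$ can be made concrete with continuations: reading $\neg A$ as $A\rightarrow r$, a control operator such as call/cc inhabits double-negation elimination. A self-contained Haskell sketch (all names are ours; it assumes the double negation eventually invokes its argument):

```haskell
-- Continuations from scratch: Cont r a ~ (a -> r) -> r, i.e. "not not a"
-- when r plays the role of falsity.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

instance Applicative (Cont r) where
  pure a = Cont (\k -> k a)
  Cont cf <*> Cont ca = Cont (\k -> cf (\f -> ca (k . f)))

instance Monad (Cont r) where
  Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))

-- call/cc: capture the current continuation as a first-class function.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\a -> Cont (\_ -> k a))) k)

-- Double-negation elimination: from ((a -> _) -> _) extract an a,
-- by passing the captured continuation as the "negation".
dne :: ((a -> Cont r b) -> Cont r b) -> Cont r a
dne nn = callCC (\k -> nn k >>= \_ -> error "negation was never invoked")

example :: Int
example = runCont (dne (\notA -> notA 42)) id
-- example evaluates to 42: the value escapes through the captured
-- continuation, giving computational meaning to a classical principle.
```

The `error` branch is unreachable whenever the double negation actually uses its argument, which is exactly the classical reading of the isomorphism.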

The goal is to develop a type theory with effects that accounts both for practical experiments in certified programming, and for clues from denotational semantics and logical phenomena, in a unified setting.

#### Models for integrating effects with dependent types

A crucial step is the integration of dependent types with effects,
a topic which has remained *“currently under investigation”*
[91] ever since the beginning. The difficulty resides
in expressing the dependency of types on terms that can perform side-effects
during the computation. On the side of denotational semantics, several
extensions of categorical models for effects with dependent types
have been proposed [29], [108] using axioms that should
correspond to restrictions in expressivity, but whose practical
implications are not immediately transparent. On the side
of logical approaches [66], [67], [77], [89],
one first considers a drastic restriction to terms that do not compute,
which is then relaxed by semantic means. On the side of systems for
certified programming such as $\mathrm{F}^{\star}$, the type system ensures that
types only depend on pure and terminating terms.

Thus, the recurring idea is to introduce restrictions on the dependency
in order to establish an encapsulation of effects. In our approach,
we seek a principled description of this idea by developing the concept
of *semantic value* (thunkables, linears) which arose from foundational
considerations [56], [102], [93] and
whose relevance was highlighted in recent works [80], [99].
The novel aspect of our approach is to seek a proper extension of
type theory which would provide foundations for a classical type theory
with the axiom of choice in the style of Herbelin [66],
but which moreover could be generalised to effects other than just
control by exploiting an abstract and adaptable notion of semantic
value.
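
Concretely, Führmann's thunkability gives one such notion of semantic value: a computation $t$ is thunkable when thunking it and forcing the thunk is indistinguishable from $t$ itself. A hedged Haskell illustration, specialised to monads with decidable equality (the name `thunkable` is ours):

```haskell
-- A computation t is "thunkable" (a semantic value) when
--   fmap return t == return t,
-- i.e. suspending t and forcing the suspension is the same as t.
-- (Führmann's criterion, restricted here to monads whose values
-- support equality so that we can test it.)
thunkable :: (Monad m, Eq (m (m a))) => m a -> Bool
thunkable t = fmap return t == return t

-- In the Maybe monad, pure computations Just x are thunkable, but the
-- "effectful" Nothing (failure) is not:
--   thunkable (Just 3)            -- True:  Just (Just 3) == Just (Just 3)
--   thunkable (Nothing :: Maybe Int) -- False: Nothing /= Just Nothing
-- In the list monad (nondeterminism), only deterministic computations
-- are thunkable:
--   thunkable [1 :: Int]     -- True:  [[1]] == [[1]]
--   thunkable [1, 2 :: Int]  -- False: [[1],[2]] /= [[1,2]]
```

Restricting type dependency to thunkable terms is one way to make "types depend only on values" precise without fixing a particular effect in advance.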

#### Intuitionistic depolarisation

In our view, the common idea that evaluation order does not matter
for pure and terminating computations should serve as a bridge between
our proposals for dependent types in the presence of effects and traditional
type theory. Building on the previous goal, we aim to study the relationship
between semantic values, purity, and parametricity theorems [101], [58].
Our goal is to characterise parametricity as a form of *intuitionistic
depolarisation* following the method by which the first game model
of full linear logic was given (Melliès [86], [87]).
We have two expected outcomes in mind: enriching type theory with
intensional content without losing its properties, and giving an explanation
of the dependent types in the style of Idris and $\mathrm{F}^{\star}$ where purity-
and termination-checking play a role.
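
For instance, parametricity yields Wadler-style "free theorems": any `f :: [a] -> [a]` that is uniform in `a` commutes with `map`. A small Haskell sanity check of one instance (the names `candidate` and `commutes` are ours):

```haskell
-- Free theorem from parametricity: every f :: [a] -> [a] defined
-- uniformly in a satisfies  f . map g == map g . f  for every g.
-- We check a single instance; parametricity guarantees the equation
-- for all uniform f and all g.
candidate :: [a] -> [a]
candidate = reverse . take 2

commutes :: Bool
commutes =
  (candidate . map (* 10)) [1, 2, 3 :: Int]
    == (map (* 10) . candidate) [1, 2, 3]
-- Both sides evaluate to [20, 10].
```

Such theorems hold precisely because a pure, uniform term cannot inspect the elements it manipulates; this is the kind of property our depolarisation analysis aims to explain semantically.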

#### Developing the rewriting theory of calculi with effects

An integrated type theory with effects requires an understanding of evaluation order from the point of view of rewriting. For instance, rewriting properties can entail the decidability of some conversions, allowing the automation of equational reasoning in types [27]. They can also provide proofs of computational consistency (that terms are not all equivalent) by showing that extending calculi with new constructs is conservative [104]. In our approach, the $\lambda $-calculus is replaced by a calculus modelling the evaluation in an abstract machine [51]. We have shown how this approach generalises the previous semantic and proof-theoretic approaches [33], [79], [81], and overcomes their shortcomings [94].

One goal is to prove computational consistency or decidability of conversions purely using advanced rewriting techniques following a technique introduced in [104]. Another goal is the characterisation of weak reductions: extensions of the operational semantics to terms with free variables that preserve termination, whose iteration is equivalent to strong reduction [28], [54]. We aim to show that such properties derive from generic theorems of higher-order rewriting [110], so that weak reduction can easily be generalised to richer systems with effects.
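
To fix intuitions, weak and strong reduction can be contrasted on a toy $\lambda $-calculus with de Bruijn indices: weak reduction never enters a binder, while strong reduction also normalises under it. A self-contained Haskell sketch (illustrative only, not the abstract-machine calculus of [51]):

```haskell
-- Untyped lambda-terms with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term deriving (Eq, Show)

-- Shift free variables >= c by d (standard de Bruijn machinery).
shift :: Int -> Int -> Term -> Term
shift d c (Var k) | k >= c    = Var (k + d)
                  | otherwise = Var k
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App t u) = App (shift d c t) (shift d c u)

-- Substitute s for variable j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k) | k == j    = s
                  | otherwise = Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App t u) = App (subst j s t) (subst j s u)

beta :: Term -> Term -> Term
beta body arg = shift (-1) 0 (subst 0 (shift 1 0 arg) body)

-- Weak reduction: reduce head redexes, but never go under a Lam.
weak :: Term -> Term
weak (App t u) = case weak t of
  Lam body -> weak (beta body u)
  t'       -> App t' u
weak t = t

-- Strong reduction: iterate weak reduction under binders as well.
strong :: Term -> Term
strong (Var k)   = Var k
strong (Lam t)   = Lam (strong t)
strong (App t u) = case weak t of
  Lam body -> strong (beta body u)
  t'       -> App (strong t') (strong u)

-- weak stops at the binder of  \x. (\y. y) x,  strong reduces it to \x. x.
```

The point of the goal above is precisely that `strong` is obtained by iterating `weak` under binders, and that this decomposition should follow from generic higher-order rewriting theorems rather than be re-proved for each calculus.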

#### Direct models and categorical coherence

Proof theory and rewriting are a source of *coherence theorems*
in category theory, which show how calculations in a category can
be simplified with an embedding into a structure with stronger properties
[84], [75]. We aim to explore such results
for categorical models of effects [79], [50]. Our key
insight is to consider the reflection between *indirect and direct
models* [56], [93] as a coherence theorem:
it allows us to embed the traditional models of effects into structures
for which the rewriting and proof-theoretic techniques from the previous
section are effective.

Building on this, we are further interested in connecting operational semantics to 2-category theory, in which a second dimension is traditionally considered for modelling conversions of programs rather than equivalences. This idea has been successfully applied to the $\lambda $-calculus [72], [68] but does not yet scale to more realistic models of computation. In our approach, we have already observed that the expected symmetries arising from categorical dualities are better represented, motivating a fresh investigation of this long-standing question.

#### Models of effects and resources

The unified theory of effects and resources [50] prompts an investigation into the semantics of safe and automatic resource management, in the style of Modern C++ and Rust. Our goal is to show how advanced semantics of effects, resources, and their combination arise by assembling elementary blocks, pursuing the methodology applied by Melliès and Tabareau in the context of continuations [88]. For instance, combining control flow (exceptions, return) with linearity allows us to describe in a precise way the “Resource Acquisition Is Initialisation” idiom, in which resource safety is ensured with scope-based destructors. A further step would be to reconstruct uniqueness types and borrowing using similar ideas.
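
The interaction of control flow with scope-based release can be sketched in Haskell with `bracket`, which mirrors the RAII discipline: the release action runs on every exit path, normal return or exception alike. A minimal sketch, standing in for the linear semantics the section aims to develop (`withResource` and the journal are illustrative):

```haskell
import Control.Exception (SomeException, bracket, throwIO, try)
import Data.IORef

-- RAII-style scope-based release: `bracket` guarantees the "destructor"
-- runs whether the body returns normally or throws. The journal records
-- the order of events so we can observe the guarantee.
withResource :: IORef [String] -> IO a -> IO a
withResource journal body =
  bracket (modifyIORef journal ("acquire" :))      -- constructor
          (\_ -> modifyIORef journal ("release" :)) -- destructor
          (\_ -> body)

main :: IO ()
main = do
  journal <- newIORef []
  _ <- try (withResource journal (throwIO (userError "boom")))
         :: IO (Either SomeException ())
  readIORef journal >>= print . reverse
  -- prints ["acquire","release"]: released despite the exception
```

The semantic blocks we aim to assemble would account for exactly this guarantee, with linearity ensuring that the release action is used once and only once.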