Section: Research Program
Semantic and logical foundations for effects in proof assistants based on type theory
We propose the incorporation of effects in the theory of proof assistants at a foundational level. Not only would this allow for certified programming with effects, but it would moreover have implications for both semantics and logic.
We mean effects in a broad sense that encompasses both Moggi's monads [90] and Girard's linear logic [57]. These two seminal works have given rise to respective theories of effects (monads) and resources (co-monads). Recent advances, however, have unified these two lines of thought: it is now clear that the defining feature of effects, in the broad sense, is sensitivity to evaluation order [79], [50].
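As a minimal OCaml illustration of this sensitivity (the helper pr is ours), the two bindings below denote the same number under a pure reading, yet their printing behaviour reveals the chosen evaluation order:

    (* Under a pure reading both bindings compute 1 + 2 = 3, but the printing
       effect makes the evaluation order observable. *)
    let pr s x = print_string s; x

    let ab = let x = pr "a" 1 in let y = pr "b" 2 in x + y   (* prints "ab" *)
    let ba = let y = pr "b" 2 in let x = pr "a" 1 in x + y   (* prints "ba" *)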
In contrast, the type theory that forms the foundations of proof assistants is based on pure, effect-free computation.
Any realistic program contains effects: state, exceptions, input-output.
More generally, evaluation order may simply be important for complexity
reasons. With this in mind, many works have focused on certified programming
with effects: notably Ynot [95], and more recent systems.
We propose to develop the foundations of a type theory with effects
taking into account the logical and semantic aspects, and to study
their practical and theoretical consequences. A type theory that integrates
effects would have logical, algebraic and computational implications
when viewed through the Curry-Howard correspondence. For instance,
effects such as control operators establish a link with classical
proof theory [62]. Indeed, control operators provide computational interpretations of classical reasoning principles and of the type isomorphisms they induce.
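As one standard instance of this correspondence, sketched here in OCaml (the fixed answer type ans and the continuation-passing reading of types are our own illustrative choices), the control operator call/cc inhabits Peirce's law ((A -> B) -> A) -> A:

    (* CPS reading of types: a computation of type A consumes an
       A-continuation and produces the fixed answer. *)
    type ans = unit
    type 'a cont = ('a -> ans) -> ans

    (* call/cc typed as Peirce's law: given a way to build an A from an
       "escape" function A -> B, produce an A.  Invoking the escape function
       discards the local continuation and resumes the captured one k. *)
    let callcc : (('a -> 'b cont) -> 'a cont) -> 'a cont =
      fun f k -> f (fun x _ -> k x) k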
The goal is to develop a type theory with effects that accounts both for practical experiments in certified programming, and for clues from denotational semantics and logical phenomena, in a unified setting.
Models for integrating effects with dependent types
A crucial step is the integration of dependent types with effects,
a topic which has remained “currently under investigation”
[91] ever since the beginning. The difficulty resides
in expressing the dependency of types on terms that can perform side-effects
during the computation. On the side of denotational semantics, several
extensions of categorical models for effects with dependent types
have been proposed [29], [108], using axioms that should correspond to restrictions in expressivity, but whose practical implications are not immediately transparent. On the side
of logical approaches [66], [67], [77], [89],
one first considers a drastic restriction to terms that do not compute,
which is then relaxed by semantic means. On the side of systems for certified programming, comparable restrictions on dependency are imposed in practice.
Thus, the recurring idea is to introduce restrictions on the dependency in order to establish an encapsulation of effects. In our approach, we seek a principled description of this idea by developing the concept of semantic value (thunkable and linear terms), which arose from foundational considerations [56], [102], [93] and whose relevance was highlighted in recent works [80], [99]. The novel aspect of our approach is to seek a proper extension of type theory that would provide foundations for a classical type theory with the axiom of choice in the style of Herbelin [66], but which could moreover be generalised to effects other than control by exploiting an abstract and adaptable notion of semantic value.
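To make the notion of semantic value concrete, the following OCaml sketch spells out one standard formulation of thunkability in a simple output monad (the module Output and its helpers are illustrative): a computation m is thunkable when map return m = return m, that is, when delaying it cannot be observed; pure values pass this test, while a computation that writes output does not.

    module Output = struct
      type 'a t = string * 'a                  (* accumulated output, result *)
      let return x = ("", x)
      let bind (s, x) f = let (s', y) = f x in (s ^ s', y)
      let map f m = bind m (fun x -> return (f x))
    end

    open Output
    let thunkable m = (map return m = return m)

    let () =
      assert (thunkable (return 42));          (* pure, hence thunkable *)
      assert (not (thunkable ("hello", 42)))   (* writes output, not thunkable *)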
Intuitionistic depolarisation
In our view, the common idea that evaluation order does not matter
for pure and terminating computations should serve as a bridge between
our proposals for dependent types in the presence of effects and traditional
type theory. Building on the previous goal, we aim to study the relationship
between semantic values, purity, and parametricity theorems [101], [58].
Our goal is to characterise parametricity as a form of intuitionistic depolarisation, following the method by which the first game model of full linear logic was given (Melliès [86], [87]).
We have two expected outcomes in mind: enriching type theory with
intensional content without losing its properties, and giving an explanation
of dependent types in the style of Idris and related languages.
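As a reminder of the kind of parametricity theorem we have in mind, here is the classic free-theorem instance for the type 'a list -> 'a list, sketched in OCaml (the choice of r is arbitrary among parametric functions): any such r commutes with List.map, a guarantee that relies on purity.

    (* Free theorem for 'a list -> 'a list:
       List.map f (r xs) = r (List.map f xs) for every f and xs. *)
    let r (xs : 'a list) : 'a list = List.rev xs

    let () =
      let xs = [1; 2; 3] and f = string_of_int in
      assert (List.map f (r xs) = r (List.map f xs))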
Developing the rewriting theory of calculi with effects
An integrated type theory with effects requires an understanding of
evaluation order from the point of view of rewriting. For instance,
rewriting properties can entail the decidability of some conversions,
allowing the automation of equational reasoning in types [27].
They can also provide proofs of computational consistency (that terms
are not all equivalent) by showing that extending calculi with new
constructs is conservative [104].
One goal is to prove computational consistency or the decidability of conversions purely by means of advanced rewriting techniques, following the approach introduced in [104]. Another goal is the characterisation of weak reductions: extensions of the operational semantics to terms with free variables that preserve termination, and whose iteration is equivalent to strong reduction [28], [54]. We aim to show that such properties derive from generic theorems of higher-order rewriting [110], so that weak reduction can easily be generalised to richer systems with effects.
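The following OCaml sketch (our own minimal term datatype, with naive substitution that assumes distinctly named binders) makes the distinction concrete: weak reduction never goes under a binder, and a strong normal form is obtained here by iterating weak normalisation under λ, in the spirit of the characterisations above.

    type term = Var of string | Lam of string * term | App of term * term

    (* Naive substitution of u for x; assumes binders are distinctly named. *)
    let rec subst x u = function
      | Var y -> if x = y then u else Var y
      | Lam (y, t) -> if x = y then Lam (y, t) else Lam (y, subst x u t)
      | App (t1, t2) -> App (subst x u t1, subst x u t2)

    (* One step of weak (head) reduction: never reduces under a λ. *)
    let rec weak_step = function
      | App (Lam (x, t), u) -> Some (subst x u t)
      | App (t, u) ->
          (match weak_step t with Some t' -> Some (App (t', u)) | None -> None)
      | _ -> None

    let rec weak_normalise t =
      match weak_step t with Some t' -> weak_normalise t' | None -> t

    (* Strong normalisation by iterating weak normalisation under binders
       (a sketch: terminates only on normalising terms). *)
    let rec strong_normalise t =
      match weak_normalise t with
      | Lam (x, u) -> Lam (x, strong_normalise u)
      | App (t1, t2) -> App (strong_normalise t1, strong_normalise t2)
      | Var x -> Var x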
Direct models and categorical coherence
Proof theory and rewriting are a source of coherence theorems in category theory, which show how calculations in a category can be simplified with an embedding into a structure with stronger properties [84], [75]. We aim to explore such results for categorical models of effects [79], [50]. Our key insight is to consider the reflection between indirect and direct models [56], [93] as a coherence theorem: it allows us to embed the traditional models of effects into structures for which the rewriting and proof-theoretic techniques from the previous section are effective.
Building on this, we are further interested in connecting operational
semantics to 2-category theory, in which a second dimension is traditionally
considered for modelling conversions of programs rather than equivalences.
This idea has already been applied successfully in closely related settings.
Models of effects and resources
The unified theory of effects and resources [50] prompts an investigation into the semantics of safe and automatic resource management, in the style of Modern C++ and Rust. Our goal is to show how advanced semantics of effects, resources, and their combination arise by assembling elementary blocks, pursuing the methodology applied by Melliès and Tabareau in the context of continuations [88]. For instance, combining control flow (exceptions, return) with linearity allows us to describe precisely the “Resource Acquisition Is Initialisation” idiom, in which resource safety is ensured by scope-based destructors. A further step would be to reconstruct uniqueness types and borrowing using similar ideas.
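As a small illustration of this combination, the following OCaml sketch (with_file and first_line are illustrative names) uses the standard Fun.protect to release a resource on every exit path, whether the body returns normally or raises, in the manner of scope-based destructors; linearity would additionally forbid the channel from escaping its scope:

    (* The input channel is closed on both normal return and exception,
       in the style of RAII / scope-based destructors. *)
    let with_file path f =
      let ic = open_in path in
      Fun.protect ~finally:(fun () -> close_in ic) (fun () -> f ic)

    (* Usage: the channel is closed even if input_line raises End_of_file. *)
    let first_line path = with_file path input_line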