Section: New Results

Probabilistic Systems and Resource Control

Participants : Martin Avanzini, Flavien Breuvart, Alberto Cappai, Raphaëlle Crubillé, Ugo Dal Lago, Francesco Gavazzo, Charles Grellois, Simone Martini, Alessandro Rioli, Davide Sangiorgi, Marco Solieri, Valeria Vignudelli.

Probabilistic Systems

Behavioural Equivalences and Metrics

Finding effective methods for checking program equivalence is one of the oldest problems in the theory of programming languages, and it has also been studied for probabilistic programming languages. One particularly fruitful research direction consists in characterising context equivalence, the most natural way to define equivalence in higher-order languages, by way of coinductive notions of equivalence akin to bisimulation. In 2016, Focus was involved in defining notions of environmental bisimulation for probabilistic lambda-calculi [37], proving them not only adequate but also fully abstract. Contrary to applicative bisimulation, environmental bisimulation is robust enough to be applicable to languages with local store. Moreover, the proof of adequacy for environmental bisimulation turns out to be simpler than the one for applicative bisimulation, which requires sophisticated arguments from linear programming.

In a probabilistic setting, programs are more naturally compared through metrics than through equivalences, due to their intrinsic quantitative nature. Conveniently, coinductive methodologies for program equivalence can be generalised to metrics by way of so-called behavioural metrics. This year, we have studied behavioural metrics in the context of concurrent processes, and defined enhancements of the proof method based on bisimulation metrics by extending the theory of up-to techniques to premetrics on discrete probabilistic concurrent processes [32].
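To make the notion concrete, here is one standard formulation of bisimulation metrics on a labelled Markov chain, via the Kantorovich lifting; the calculi treated in [32] and [37] are richer, so this is only an illustrative sketch. Given a 1-bounded pseudometric d on states, its lifting K(d) to distributions is

  K(d)(μ, ν) = min over couplings ω of μ and ν of Σ_{s,t} ω(s,t) · d(s,t)

(the minimum is attained in the discrete case). A pseudometric d is then a bisimulation metric when, for all states s, t and every action a with transition distributions Δ(s,a) and Δ(t,a),

  d(s,t) ≥ K(d)(Δ(s,a), Δ(t,a)),

and the behavioural metric is the least such d. Bisimilarity is recovered as its kernel, i.e., the pairs (s,t) with d(s,t) = 0; up-to techniques in the style of [32] lighten proofs by letting one exhibit premetrics smaller than a full bisimulation metric when bounding d.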

Programming Languages for Machine Learning

In recent years, higher-order functional programming languages such as Church, Anglican, and Venture have proved extremely effective as ways to specify not algorithms but rather Bayesian models in the context of machine learning. The operational semantics of these languages, and the behaviour of learning algorithms applied to programs written in them, had so far been defined only informally. In 2016, we developed the operational semantics of an untyped probabilistic lambda-calculus with continuous distributions, as a foundation for universal probabilistic programming languages like those cited above [31]. Our first contribution was to adapt the classic operational semantics of the lambda-calculus to a continuous setting. Our second contribution was to formalise the trace Markov chain Monte Carlo (MCMC) implementation technique for our calculus and to prove it correct.
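The following OCaml fragment is a minimal, self-contained sketch of the kind of calculus involved: an untyped lambda-calculus with a sampling primitive, evaluated while threading a trace of random draws. All names are ours and purely illustrative; the actual semantics of [31] is measure-theoretic, and the Metropolis-Hastings acceptance step of trace MCMC is elided.

(* A term of an untyped probabilistic lambda-calculus: the usual
   constructors, plus real constants and a Sample primitive drawing
   uniformly from [0,1].  Illustrative only; see [31] for the actual
   calculus. *)
type term =
  | Var of string
  | Lam of string * term
  | App of term * term
  | Const of float
  | Sample

(* Substitution of a value for a variable.  We evaluate closed
   programs call-by-value, so substituted values are closed and no
   variable capture can occur. *)
let rec subst x v = function
  | Var y -> if x = y then v else Var y
  | Lam (y, b) when y <> x -> Lam (y, subst x v b)
  | App (f, a) -> App (subst x v f, subst x v a)
  | t -> t

(* Evaluate a program against a proposed trace of random draws, the
   basic operation behind trace MCMC: draws are replayed from the
   trace while it lasts, then sampled fresh, and the draws actually
   used are returned alongside the value.  A proposal perturbs one
   entry of a previous trace and re-runs the program. *)
let eval_with_trace proposed prog =
  let remaining = ref proposed in
  let used = ref [] in
  let draw () =
    let r =
      match !remaining with
      | r :: rest -> remaining := rest; r
      | [] -> Random.float 1.0
    in
    used := r :: !used;
    r
  in
  let rec eval = function
    | Var _ -> failwith "open term"
    | (Lam _ | Const _) as v -> v
    | Sample -> Const (draw ())
    | App (f, a) ->
        let fv = eval f in
        let av = eval a in
        (match fv with
         | Lam (x, b) -> eval (subst x av b)
         | _ -> failwith "application of a non-function")
  in
  let v = eval prog in
  (v, List.rev !used)

For instance, eval_with_trace [] p runs p with fresh samples and returns the trace it consumed; re-running eval_with_trace t' p on a perturbed trace t' is the deterministic replay on which trace MCMC is built.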

Resource Control

Complexity Analysis of Higher-Order Functional Programs

Complexity analysis of higher-order programs has been one of the main research themes inside Focus since its inception. It remains so today, although the emphasis is progressively shifting towards problems related to the implementation of complexity analysis methodologies rather than towards their foundations. One issue with most existing complexity analysis methodologies is that they are insensitive to the sharing of computations among subprograms. We have studied how the interpretation method and dependency tuples, two prominent complexity analysis techniques, can be adapted to graph rewriting, thus accounting for the possible performance gains due to sharing [38]. We have also contributed to the development of TCT, the Tyrolean Complexity Tool [29], a state-of-the-art complexity analyzer for term rewrite systems, making it capable of efficiently applying not just one but many methodologies to the input program. Finally, we have studied how the geometry of interaction can provide effective ways of compiling higher-order functional programs into circuits, thus guaranteeing space efficiency [21].
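As a reminder of how the interpretation method yields complexity bounds, consider a textbook example, unrelated to the specific results of [38] and [29]: the rewrite system for addition on unary naturals,

  add(0, y) -> y        add(s(x), y) -> s(add(x, y)).

Interpret terms over the naturals by [0] = 0, [s](n) = n + 1 and [add](n, m) = 2n + m + 1. Each interpretation is strictly monotone in every argument, and both rules strictly decrease the interpreted value:

  [add(0, y)] = y + 1 > y = [y],
  [add(s(x), y)] = 2x + y + 3 > 2x + y + 2 = [s(add(x, y))].

Hence every rewrite step, in any context, strictly decreases the interpretation of the whole term, so the interpretation of the initial term bounds the number of steps: add applied to the numerals for n and m admits at most 2n + m + 1 steps, a linear runtime bound. Adapting interpretations of this kind to graph rewriting is what allows [38] to account for sharing.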

On the Foundations of Complexity Analysis

One of the main foundational issues in complexity analysis is whether simple time cost models can be proved invariant, i.e., polynomially related to the cost models of low-level machines such as Turing machines. We have solved a long-standing open problem, proving that the unitary cost model, namely the one attributing unit cost to each beta-reduction step, is invariant for the pure lambda-calculus under leftmost-outermost evaluation [12]. We have also studied to what extent traditional methodologies like the interpretation method and light logics can be adapted to higher-order languages [16] and processes [20], respectively.
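The difficulty behind this invariance result can be seen on the well-known size-explosion family, a standard example not specific to [12]: define

  t_1 = λx. x x        t_{n+1} = λx. t_n (x x).

For a variable y, the term t_n y has size linear in n, yet it reduces in exactly n leftmost-outermost beta steps to the complete binary tree of applications of y to itself, whose size is of order 2^n (for example, t_2 y reduces to t_1 (y y) and then to (y y) (y y)). Counting each beta step as one unit thus seems to undercount by an exponential factor; the result of [12] shows that leftmost-outermost reduction can nonetheless be simulated with only polynomial overhead on a low-level machine, roughly by working with compact shared representations of terms rather than the terms themselves.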