Ubiquitous Computing refers to the situation in which computing facilities are embedded or integrated into everyday objects and activities. Networks are large-scale, including both hardware devices and software agents. The systems are highly mobile and dynamic: programs or devices may move and often execute in networks owned and operated by others; new devices or software pieces may be added; the operating environment or the software requirements may change. The systems are also heterogeneous and open: the pieces that form a system may be quite different from each other, built by different people or industries, even using different infrastructures or programming languages; the constituents of a system have only partial knowledge of the overall system, and may be aware of only a subset of the entities that operate in it.
A prominent recent phenomenon in Computer Science is the emergence of
interaction and communication as key architectural and programming
concepts. This is especially visible in ubiquitous systems.
Complex distributed
systems are being conceived and designed as structured compositions of
computational units, usually referred to as components. These
components interact with each other, and their interactions are
orchestrated into conversations and
dialogues. In the remainder, we will write CBUS for
Component-Based Ubiquitous Systems.
In CBUS, the systems are complex. As for complex systems in other disciplines, such as physics, economics, and biology, theories are needed that allow us to understand the systems, design or program them, and analyze them.
Focus investigates the semantic foundations for CBUS. The foundations are intended to be instrumental in formalizing and verifying important computational properties of the systems, as well as in proposing linguistic constructs for them. Prototypes are developed to test the implementability and usability of the models and the techniques. Throughout our work, `interaction' and `component' are central concepts.
The members of the project have solid experience in algebraic and logical models of computation, and related techniques, and this is the basis for our study of ubiquitous systems. The use of foundational models inevitably leads to opportunities for developing the foundational models themselves, with particular interest in issues of expressiveness and in the transplant of concepts or techniques from one model to another.
The objective of Focus is to develop concepts, techniques, and
possibly also tools, that may contribute to the analysis and synthesis
of CBUS.
Fundamental to these activities is modeling.
Therefore designing, developing and studying computational models
appropriate for CBUS is a central activity of the project.
The models are used
to formalise and verify important computational
properties of the systems, as well as to propose new linguistic constructs.
The models we study are in the realm of process
calculi (e.g., the π-calculus).
Modern distributed
systems have witnessed a clear shift
towards interaction and conversations
as
basic building blocks
for software architects and programmers.
The systems are made of components that
interact and carry out
dialogues in order to achieve
some predefined goal; Web services are a good example of this.
Process calculi are models that have been designed precisely with the
goal
of understanding interaction and composition.
The theory and tools that have been developed on top of process
calculi can set a basis with which CBUS challenges can be tackled.
Indeed, industrial proposals of languages for Web services such as BPEL
are strongly inspired by process calculi, notably the π-calculus.
Type systems and logics for reasoning on computations are among the
most successful outcomes in the history of programming language research.
A number of elegant and powerful results have been obtained
in implicit computational complexity,
concerning the characterisation of complexity classes by means of type systems and logics.
The main application domain for Focus is ubiquitous systems, i.e. systems whose distinctive features are: mobility, high dynamicity, heterogeneity, variable availability (the availability of services offered by the constituent parts of a system may fluctuate, and similarly the guarantees offered by single components may not be the same all the time), open-endedness, and complexity (the systems are made of a large number of components, with sophisticated architectural structures). In Focus we are particularly interested in the following aspects.
Today the component-based methodology often refers to Service Oriented Computing. This is a specialized form of the component-based approach. According to W3C, a service-oriented architecture is “a set of components which can be invoked, and whose interface descriptions can be published and discovered”. In the early days of Service Oriented Computing, the term “services” was strictly related to that of Web Services. Nowadays, it has a much broader meaning, as exemplified by the XaaS (everything as a service) paradigm: based on modern virtualization technologies, Cloud computing offers the possibility of building sophisticated service systems on virtualized infrastructures accessible from everywhere and from any kind of computing device. Such infrastructures are usually examples of sophisticated service-oriented architectures that, differently from traditional service systems, should also be able to adapt elastically, on demand, to user requests.
Davide Sangiorgi received the CONCUR Test-of-Time Award (period 1992–1995) for his work “A Theory of Bisimulation for the pi-Calculus”.
The purpose of the award is to recognize important achievements in concurrency theory that were published at the CONCUR conference and have stood the test of time.
In 2020 Jolie transitioned to version 1.9.x. The new major release includes a more advanced tracing system, forward and backward documentation primitives, support for configuration with JSON files (useful, for example, in the development of Docker images), and extended support for Java 11+. The release also includes minor fixes, such as more complete support for the HTTP(S) protocol, runtime checks for infinite alias loops, and other performance optimisations.
The Jolie ecosystem also expanded in 2020 with new tools: Jolier, which aids in the publication of Jolie APIs following the REST style; jolie2openapi, which produces an OpenAPI definition from a Jolie interface; openapi2jolie, which produces a Jolie interface from an OpenAPI definition; jolietraceviewer, which uses the updated tracing system to visualise the execution trace of a service; and joliedoc, a preexisting tool that received major improvements, including support for forward and backward documentation primitives and a facelift to the documents that it generates.
The documentation of the language received a major restyle, both content-wise and structure-wise, to distinguish between features that belong to different versions of the language — e.g. so that users do not mistakenly assume the presence of more modern features in older versions of the language, and can access consistent documentation dedicated to the version they are using.
Development-wise, the Jolie build system has been ported to Maven and now includes continuous-integration routines to expedite the inclusion of new features and fixes in the main development branch.
During 2019 the Jolie project saw three major actions.
The first action regards the build system used for the development of the language, which has been transitioned to Maven, the main build automation tool used for Java projects. The move to Maven is dictated by two needs. The first is to streamline the development and release processes of Jolie, as Maven greatly helps in obtaining, updating, and managing library dependencies. The second necessity addressed by Maven is helping in partitioning the many sub-projects that constitute the Jolie codebase, reducing development and testing times. Having Jolie as a Maven project also helps in providing Jolie sub-components (as Maven libraries) to other projects. Finally, the move to Maven is set within a larger effort to expedite the inclusion in the main Jolie development branch of contributions by new members of its growing community.
The second action regards the transition to Netty as a common framework to support communication protocols and data formats in Jolie. Netty is a widely-adopted Java framework for the development of network applications, and it was used in 2018 to successfully support several IoT communication protocols and data formats in a Jolie spin-off project, called JIoT. The work in 2019 integrated into the Jolie codebase the protocols and data formats developed within the JIoT project and pushed towards the integration of the Netty development branch into the main branch of the Jolie project (i.e., re-implementing using Netty the many protocols and data formats already supported by Jolie). The Netty development branch is currently in a beta phase and is subject to thorough in-production tests, to ensure that its behaviour remains consistent with the previous implementation.
The third action regards the development and support of a new official integrated development environment (IDE) for Jolie. Along with the plugins already existing for the Atom and Sublime Text editors, Jolie developers can now use the Jolie plugin (based on the Language Server Protocol) for the Visual Studio Code editor to obtain syntax highlighting, documentation aids, file navigation, syntax checking, semantic checking, and quick-run shortcuts for their Jolie programs.
In addition to the above actions, in 2019 Jolie transitioned through three minor releases and a major one, from 1.7.1 to 1.8.2. The minor releases mainly fixed bugs, improved performance, and included new protocol/data-format functionalities. The major release included a slim-down of the notation for the composition of statements, type definitions, and tree structures, for a terser codebase. Upgrades to 1.8.2 also introduced timeouts for solicit-response invocations, to handle the interruption of long-standing requests, and more user-friendly messages from the Jolie interpreter, including easier-to-parse errors and the pretty-printing of data structures, for a more effective development and debugging experience.
In 2019 Jolie also saw the development of a new Jolie library, called Tquery, a query framework integrated into the Jolie language for the data handling/querying of Jolie trees. Tquery is based on a tree-based instantiation (language and semantics) of MQuery, a sound variant of the Aggregation Framework, the query language of the most popular document-oriented database: MongoDB. Usage scenarios for Tquery include (but are not limited to) eHealth, the Internet-of-Things, and Edge Computing, where data should be handled in an ephemeral way, i.e., in a real-time manner and under the constraint that data shall not persist in the system.
Tquery is a query framework integrated into the Jolie language for the data handling/querying of Jolie trees.
Tquery is based on a tree-based instantiation (language and semantics) of MQuery, a formalisation of a sound fragment of the Aggregation Framework, the query language of the most popular document-oriented database: MongoDB.
Tree-shaped documents are the main format in which data flows within modern digital systems, e.g., eHealth, the Internet-of-Things, and Edge Computing. Tquery is particularly suited to developing real-time, ephemeral scenarios, where data shall not persist in the system.
Serverless computing is a Cloud development paradigm where developers write and compose stateless functions, abstracting from their deployment and scaling.
APP (Allocation Priority Policies) is a declarative language for specifying policies that inform the scheduling of serverless function execution, so as to optimise performance against user-defined goals.
APP is currently implemented as a prototype extension of the Serverless Apache OpenWhisk platform.
Choral is a language for the programming of choreographies. A choreography is a multiparty protocol that defines how some roles (the proverbial Alice, Bob, etc.) should coordinate with each other to do something together.
Choral is designed to help developers program distributed authentication protocols, cryptographic protocols, business processes, parallel algorithms, or any other protocol for concurrent and distributed systems. At the press of a button, the Choral compiler translates a choreography into a library for each role. Developers can use the generated libraries to make sure that their programs (like a client, or a service) follow the choreography correctly. Choral makes sure that the generated libraries are compliant implementations of the source choreography.
In essence, Choral developers program a choreography with the simplicity of a sequential program. Then, through the Choral compiler, they obtain a set of programs that implement the roles acting in the distributed system. The generated programs coordinate in a decentralised way and faithfully follow the specification given by their source choreography, avoiding the incompatibilities that can arise from discordant manual implementations. Programmers can use or distribute the single implementations of each role to their customers with a higher level of confidence in their reliability. Moreover, they can reliably compose different Choral(-compiled) programs, to mix different protocols and build the topology that they need.
Choral currently interoperates with Java (and it is planned to support also other programming languages) at three levels: 1) its syntax is a direct extension of Java (if you know Java, Choral is just a step away), 2) Choral code can reuse Java libraries, 3) the libraries generated by Choral are in pure Java with APIs that the programmer controls, and that can be used inside of other Java projects directly.
Automata models are well-established in many areas of computer science and are supported by a wealth of theoretical results including a wide range of algorithms and techniques to specify and analyse systems. In 14, we have introduced choreography automata for the choreographic modelling of communicating systems. The projection of a choreography automaton yields a system of communicating finite-state machines. We have considered both the standard asynchronous semantics of communicating systems and a synchronous variant of it. For both, the projections of well-formed automata are proved to be live as well as lock- and deadlock-free.
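As a toy illustration of projection (and only that: the well-formedness conditions, communicating finite-state machines, and asynchronous semantics of the cited work are beyond this sketch, and all names here are made up for the example), one can project a linear choreography, given as a sequence of interactions, onto the local behaviour of each role:

```python
def project(choreography, role):
    """Project a linear choreography onto one role.

    The choreography is a list of interactions (sender, receiver, label);
    the projection is the role's local behaviour, i.e. the sequence of
    send/receive actions it performs.  Interactions not involving the
    role are skipped.
    """
    local = []
    for sender, receiver, label in choreography:
        if role == sender:
            local.append(("send", receiver, label))
        elif role == receiver:
            local.append(("recv", sender, label))
    return local

# A two-party ping-pong protocol and its projections.
pingpong = [("A", "B", "ping"), ("B", "A", "pong")]
# project(pingpong, "A") → [("send", "B", "ping"), ("recv", "B", "pong")]
# project(pingpong, "B") → [("recv", "A", "ping"), ("send", "A", "pong")]
```

The point of the formal development is precisely that, for well-formed choreographies, composing such local projections yields a system that is live and free of locks and deadlocks.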
Serverless computing is a paradigm for programming cloud applications in terms of stateless functions, executed and scaled in proportion to inbound requests. In 19 we have revisited SKC, a calculus capturing the essential features of serverless programming. By exploring the design space of the language, we refined the integration between the fundamental features of the two calculi that inspire SKC: the λ-calculus and the π-calculus.
Finally, following previous work on the automated deployment of component based applications, we have presented in 27 a formal model specifically tailored for reasoning on the deployment of microservice architectures. The first result that we have presented is a formal proof of decidability of the problem of synthesizing optimal deployment plans for microservice architectures, a problem which was previously proved to be undecidable for generic component-based applications. Then, given that such proof translates the deployment problem into a constraint satisfaction problem, we have presented the implementation of a tool that, by exploiting state-of-the-art constraint solvers, can be used to actually synthesize optimal deployment plans. We have also evaluated the applicability of our tool on a realistic microservice architecture taken from the literature, namely, the deployment of an email processing pipeline that needs to scale-in or -out depending on the amount of incoming emails.
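The essence of the deployment problem can be conveyed with a deliberately naive sketch (an illustration only: real tools delegate to state-of-the-art constraint solvers rather than brute force, and the services, nodes, and objective below are invented for the example):

```python
from itertools import product

def optimal_deployment(services, nodes):
    """Brute-force sketch of deployment-plan synthesis.

    `services` maps service names to memory requirements; `nodes` maps
    node names to memory capacities.  Each service must be placed on a
    node without exceeding capacities (the constraints); among feasible
    plans we minimise the number of nodes used (the objective).
    Returns (nodes_used, placement) or None if no feasible plan exists.
    """
    best = None
    names = list(services)
    for placement in product(nodes, repeat=len(names)):
        load = {n: 0 for n in nodes}
        for svc, node in zip(names, placement):
            load[node] += services[svc]
        if all(load[n] <= nodes[n] for n in nodes):      # capacity constraints
            used = sum(1 for n in nodes if load[n] > 0)  # objective
            if best is None or used < best[0]:
                best = (used, dict(zip(names, placement)))
    return best
```

The decidability result matters precisely because it licenses this kind of translation into a (finite) constraint satisfaction problem, which a solver can then handle at realistic scales.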
We have continued the study of reversibility started in the past
years. A main line of research this year has been trying to distil
the essence of causal-consistent reversibility (where any action of a
concurrent system can be undone provided that its consequences have
been undone beforehand) so as to extract general results and techniques
from the plethora of ad hoc approaches in the literature. Following
this idea, in 21 we considered systems based
on causal-consistent reversibility as Labelled Transition Systems with
Independence and investigated which axioms need to hold in order to
ensure that relevant properties from the literature (such as the
Parabolic Lemma or the Causal-Consistency Theorem) hold. It turned out
that few axioms are enough to guarantee most of them. Also, we defined
two new properties, Causal Liveness and Causal Safety, which state,
respectively, that a past action can be undone if and only if all its
consequences have been undone. These properties directly formalise
the common informal presentation of causal-consistent
reversibility. In 20 we defined a technique to
take a forward system defined as a Labelled Transition System
satisfying suitable constraints and automatically build its
causal-consistent reversible extension. This nicely complements the
work above, and indeed we showed that the built model satisfies the
axioms proposed in 21 and hence enjoys a
number of relevant properties. The general studies above were
expressive enough to cover many ad hoc models in the literature, such
as the reversible higher-order π-calculus.
On the application side, in 23 we defined a formal framework to model Software Transactional Memories (STM), a concurrency control mechanism for shared memory systems where concurrent accesses are allowed, but undone in case they create interferences. In particular, when a transaction aborts, all the updates it made are reversed and the system is brought back to the state before the transaction is executed. We have shown that with minor variations it is possible to model two common policies for STM: reader preference and writer preference.
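The rollback behaviour at the heart of STM can be sketched in a few lines (a minimal sketch of the undo mechanism only: conflict detection and the reader/writer-preference policies of the formal framework are not modelled, and all names are illustrative):

```python
_ABSENT = object()  # sentinel: the key had no value before the write

class Store:
    """A shared memory, represented as a dictionary."""
    def __init__(self):
        self.mem = {}

class Transaction:
    """Toy model of rollback in software transactional memory.

    Every write logs the previous value of the location, so that on
    abort all updates are undone in reverse order and the store is
    brought back to the state it had before the transaction executed.
    """
    def __init__(self, store):
        self.store, self.log = store, []

    def write(self, key, value):
        self.log.append((key, self.store.mem.get(key, _ABSENT)))
        self.store.mem[key] = value

    def commit(self):
        self.log.clear()  # updates become permanent

    def abort(self):
        for key, old in reversed(self.log):  # undo, most recent first
            if old is _ABSENT:
                del self.store.mem[key]
            else:
                self.store.mem[key] = old
        self.log.clear()
```

Aborting a transaction is thus an instance of reversible computation: the log plays the role of the causal history that makes the backward steps possible.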
Finally, we have participated in a large dissemination effort to present the results of the European COST Action IC1405 on “Reversible Computation - Extending Horizons of Computing”, which took place in the years 2015–2019. The COST Action covered most areas of reversible computation, and we have contributed to the areas of foundations 26 and software and systems 30, as well as to the case study on debugging 29. Since the COST Action involved 33 (mostly European) countries, the dissemination effort included most of the results obtained by the European reversible-computation community in the last 5 years.
In Focus, we are interested in studying probabilistic higher-order programming languages and, more generally, the fundamental properties of probabilistic computation when placed in an interactive scenario, for instance the one of concurrent systems.
One of the most basic but nevertheless
desirable properties of programs is of course termination. Termination
can be seen as a minimal guarantee about the time complexity of the
underlying program. When probabilistic choice comes into play,
termination can be defined by stipulating that a program is
terminating if its probability of convergence is 1, this way giving
rise to the notion of almost sure termination.
Termination, already undecidable for
deterministic (universal) programming languages, remains so in the
presence of probabilistic choice, becoming provably harder.
A stronger
notion of termination is the one embodied in positive almost sure
termination, which asks the average runtime of the underlying program
to be finite.
If the average computation time is not only finite, but also suitably
limited (for example by a polynomial function), one moves towards a
notion of bounded average runtime complexity. Over recent years, the Focus team has established various formal systems for reasoning about (positive) almost sure termination and average runtime complexity, and has even developed methodologies for deriving average runtime bounds in a fully automated manner. This trend continued in 2020.
Recently, Focus has also begun to take an interest in the foundational aspects of quantum computing, and in particular in quantum programming languages and quantum computational models. In the presence of measurements, in fact, quantum programs have an essentially probabilistic evolution, and therefore the techniques for termination and complexity analysis can potentially also be applied to quantum programs. The resources of interest here include the number of qubits, a parameter of paramount importance given the inherent scarcity of this resource in concrete quantum architectures.
In addition to the analysis of complexity, which can be seen as a property of individual programs, Focus has also been interested, for some years now, in the study of relational properties of programs. More specifically, we are interested in how to evaluate the differences between the behaviours of distinct programs, going beyond the concept of program equivalence, and even beyond that of a metric. In this way, approximately correct program transformations can also be justified, and it becomes possible to give a measure of how close a program is to a certain specification.
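A toy instance of the idea of program distance (with strong simplifying assumptions: behaviours are taken to be plain input-output functions on the reals, and the distance is the supremum of pointwise differences approximated on a finite sample; the metrics actually studied by the team are far more general):

```python
def program_distance(f, g, inputs):
    """Naive distance between two 'programs' f and g.

    The distance is the largest pointwise difference over a finite
    sample of inputs, a finite approximation of the sup metric.  A
    distance of 0 on all inputs would be ordinary equivalence; a small
    positive distance justifies an *approximately* correct
    transformation of f into g.
    """
    return max(abs(f(x) - g(x)) for x in inputs)

# A transformation that perturbs a program by at most 0.001:
spec = lambda x: x * x
impl = lambda x: x * x + 0.001   # e.g. a cheaper but inexact variant
```

Here `program_distance(spec, impl, sample)` bounds how far the transformed program strays from its specification, which is precisely the kind of quantitative judgement that equivalences alone cannot express.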
Below we describe the results obtained by Focus this year, dividing them into three strands.
In the last two decades, there has been much progress on model checking
of both probabilistic systems and higher-order programs. Dal Lago, together
with Kobayashi and Grellois, has initiated a study 7
on the probabilistic higher-order model checking problem, by giving some
first theoretical and experimental results. Interestingly, reachability analysis has been proven to be undecidable already at order two.
Dal Lago, in a joint work with Heijtjes and Guerrieri 18, has introduced a probabilistic lambda-calculus in which the probabilistic choice operator is decomposed into two syntactic constructs, thus recovering a form of confluence which is impossible to achieve without this decomposition.
Avanzini et al. 12 have introduced a novel methodology
for the automated resource analysis of probabilistic imperative programs, which gives rise to a modular approach. Program fragments are analysed in full independence, thereby allowing the methodology to scale to larger programs.
This methodology has also been implemented in the tool eco-IMP, see Section 6. Ample experimental evidence shows that this tool runs typically at least one order of magnitude faster than existing tools, while retaining their precision.
In joint work with Crubillé and Barak 13, Dal Lago has introduced a new model of complexity-bounded probabilistic higher-order computation in which every algorithm is by construction polynomial-time computable, and which is thus perfectly adequate as a language for cryptographic constructions. Interestingly, certain cryptographic primitives are shown to be impossible to generalize to a higher-order setting.
Martini, together with Guerrini and Masini 6, has proposed a definition of quantum Turing machines more general than the usual ones, at the same time extending and unifying the existing definitions due to Deutsch and Bernstein & Vazirani. In particular, an arbitrary quantum input is allowed, together with meaningful superpositions of computations, some of them being “terminated” with an “output”, some others not.
Geoffroy and Pistone 24 have shown that the standard metric on real numbers can be lifted to higher-order types in a novel way, yielding a metric semantics of the simply typed lambda-calculus in which types are interpreted as quantale-valued partial metric spaces. Using such metrics, a class of higher-order denotational models, called diameter space models, has been shown to provide a quantitative semantics of approximate program transformations.
It is known that proving behavioural equivalences in
higher-order languages can be hard,
because interactions involve complex values, namely terms of the
language.
In coinductive (i.e., bisimulation-like)
techniques for these languages, a useful enhancement is the 'up-to
context' reasoning, whereby common pieces of context in related
terms are factorised out and erased. In higher-order process
languages, however, such techniques are rare, as their soundness is
usually delicate and difficult to establish. In
4
we have adapted
the technique of unique solution of equations, which implicitly
captures 'up-to context' reasoning, to the setting of the
Higher-Order π-calculus.
In 10 we have studied the formalisation (in the HOL theorem prover HOL4) of the theory of unique solution of equations and contraction, for CCS, as well as of the main results in Milner's book on the Calculus of Communicating Systems (CCS). The formalisation consists of about 24,000 lines (1 MB) of code. Some refinements of the theory of `unique solution of contractions' itself have thereby been derived.
Below we summarise other papers that are about applications of the theory of coinduction to various settings.
In 11 we have revisited the Interaction Abstract Machine (IAM), a machine based on Girard's Geometry of Interaction. It is an unusual machine, not relying on environments, presented on linear logic proof nets, and whose soundness proof is convoluted and passes through various other formalisms. Here we have provided a new direct proof of its correctness, based on a variant of Sands's improvements, a form of bisimulation related to the above-mentioned contractions.
In 3 we have introduced a notion of applicative similarity in which not terms, but the monadic values arising from the evaluation of effectful terms, are compared. We have proven this notion to be fully abstract whenever terms are evaluated in call-by-name order.
In 15 we have discussed how to ensure relevant communication properties of communicating systems, such as deadlock freedom, in a compositional way. The basic idea is that communicating systems can be composed by taking two of their participants and transforming them into coupled forwarders connecting the two systems. We have investigated how to adapt the idea to settings with asynchronous and synchronous communications.
In 2 we have reviewed techniques, mainly based on the theory of process calculi, that we used over the past twenty years to prove results about the expressiveness of coordination languages and behavioural contracts for Service-Oriented Computing. Then, we have shown how such techniques recently contributed to the clarification of aspects of session types such as asynchronous session subtyping.
In 17 we have presented a type-based analysis ensuring memory safety and object protocol completion in the Java-like language Mungo. Objects are annotated with usages, typestates-like specifications of the admissible sequences of method calls. The type system has been implemented in the form of a type checker and a usage inference tool.
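The idea of usages can be conveyed with a small sketch (an illustration only: Mungo's usages annotate Java-like classes and are checked statically, whereas this dynamic check, and the file protocol it tests, are invented for the example):

```python
class Usage:
    """A typestate-like usage: a finite automaton over method names.

    `transitions` maps (state, method) pairs to successor states.  A
    sequence of calls is legal if every call is enabled in the current
    state, and the protocol is *completed* if the run ends in an
    accepting state.
    """
    def __init__(self, transitions, start, accepting):
        self.t, self.start, self.acc = transitions, start, accepting

    def check(self, calls):
        state = self.start
        for method in calls:
            if (state, method) not in self.t:
                return False          # illegal call in this state
            state = self.t[(state, method)]
        return state in self.acc      # protocol completion

# A hypothetical file protocol: open, then read any number of times,
# then close (the object must end up closed).
file_usage = Usage(
    transitions={("init", "open"): "opened",
                 ("opened", "read"): "opened",
                 ("opened", "close"): "closed"},
    start="init",
    accepting={"closed"},
)
```

A type system in the style of the cited work rules out, at compile time, both illegal call orders and programs that drop an object before its protocol is complete.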
Concurrent Constraint Programming (CCP) is a declarative model for concurrency where agents interact by telling and asking constraints (pieces of information) in a shared store. Some previous works have developed (approximated) declarative debuggers for CCP languages. However, the task of debugging concurrent programs remains difficult. In 5 we have defined a dynamic slicer for CCP (and other language variants) and we have shown it to be a useful companion tool for the existing debugging techniques. Our slicer allows for marking part of the state of the computation and assists the user to eliminate most of the redundant information in order to highlight the errors. We have shown that this technique can be tailored to several variants of CCP, such as the timed language ntcc, linear CCP (an extension of CCP based on linear logic where constraints can be consumed) and some extensions of CCP dealing with epistemic and spatial information. We have also developed a prototypical implementation freely available for making experiments.
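The tell/ask interaction at the core of CCP can be sketched as follows (a minimal sketch under strong assumptions: constraints are opaque tokens and entailment is plain membership, whereas real CCP stores accumulate logical information and entail consequences):

```python
import threading

class CCPStore:
    """Minimal model of a concurrent-constraint store.

    `tell` adds a constraint, monotonically increasing the information
    in the store; `ask` blocks the calling agent until the constraint
    is entailed (here: present), or until the timeout expires.
    """
    def __init__(self):
        self._constraints = set()
        self._cond = threading.Condition()

    def tell(self, c):
        with self._cond:
            self._constraints.add(c)   # the store only ever grows
            self._cond.notify_all()    # wake up agents blocked on ask

    def ask(self, c, timeout=None):
        with self._cond:
            return self._cond.wait_for(
                lambda: c in self._constraints, timeout)
```

A slicer in the spirit of the cited work would mark part of such a store and prune the tells that cannot have contributed to the marked information, narrowing the search for a bug.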
Logical relations are one of the most powerful techniques in the
theory of programming languages, and have been used extensively for
proving properties of a variety of higher-order calculi. However,
there are properties that cannot be immediately proved by means of
logical relations, for instance program continuity and
differentiability in higher-order languages extended with
real-valued functions.
Informally, the problem stems from the fact
that these properties are naturally expressed in terms of non-ground
type (or, equivalently, on open terms of base type), and there is no
apparent good definition for a base case (i.e. for closed terms of
ground types). To overcome this issue, in
16
we have studied a generalization of
the concept of a logical relation, called open logical relation.
Our setting is a simply-typed λ-calculus extended with real-valued functions.
We have studied why and how to teach computer science principles (nowadays often referred to as “computational thinking”, CT), in the context of K-12 education. We have been interested in philosophical, sociological, and historical motivations to teach computer science. Furthermore, we have studied what concepts and skills related to computer science are not only technical abilities, but have a general value for all students. Finally, we have tried to find/produce/evaluate suitable materials (tools, languages, lesson plans...) to teach these concepts, taking into account: difficulties in learning computer science concepts (particularly programming); stereotypes about computer science (teachers' and students' mindset); teacher training (both non-specialist and disciplinary teachers); innovative teaching methodologies (primarily based on constructivist and constructionist learning theories).
Apart from these investigations, this year we have also been involved in the development of methodologies for assessing dropout rates of students, to benefit students as well as education institutions.
We have reviewed several definitions of computational thinking 8, finding that they share many common elements, of very different natures. We have classified these elements into mental processes, methods, practices, and transversal skills. Many of them seem to be shared with other disciplines and resonate with the current narrative on the importance of 21st-century skills. Our classification helps shed light on the misconceptions related to each of the four categories, showing that, in order not to dilute the concept, the elements of computational thinking should be understood within the discipline of Informatics, as its “disciplinary way of thinking”.
Among the many open problems in the learning process, student dropout is one of the most complex and harmful, both for students and for institutions, and being able to predict it could help alleviate its social and economic costs. To address this problem we have developed in 28 a tool that, by exploiting machine-learning techniques, allows one to predict the dropout of a first-year undergraduate student. The proposed tool allows one to estimate the risk of quitting an academic course, and it can be used either during the application phase or during the first year, since it selectively accounts for personal data, academic records from secondary school, and first-year course credits. Our experiments have been performed on real data of students from eleven schools of a major university.
SEAWALL (SEAmless loW latency cLoud pLatforms) is coordinated by Poggipolini, a company in Bologna. M. Gabbrielli coordinates the University of Bologna unit. The industrial partners are Aetna, Bonfiglioli Riduttori, IMA, Sacmi, Philip Morris, Siemens, CDM, and Digital River.
The project started in July 2020, and is expected to take 18 months.
In the following we list only those projects held by members of Focus which are managed by INRIA; notably, this excludes the ERC CoG DIAPAsON "Differential Program Semantics".
Martin Avanzini is a member of the INRIA associate team TC(PRO).
Due to the ongoing COVID-19 pandemic, no international research visits have been conducted this year.
BEHAPI (Behavioural Application Program Interfaces) is a European H2020-MSCA-RISE-2017 project, running from March 2018 to February 2022. The topic of the project is behavioural types, a suite of technologies that formalise the intended usage of API interfaces. Indeed, currently APIs are typically flat structures, i.e. sets of service/method signatures specifying the expected service parameters and the kind of results one should expect in return. However, correct API usage also requires the individual services to be invoked in a specific order. Despite its importance, this information is often either omitted or stated informally via textual descriptions. The expected benefits of behavioural types include guarantees such as service compliance, deadlock freedom, dynamic adaptation in the presence of failure, and load balancing. The project aims to bring the existing prototype tools based on these technologies to mainstream programming languages and development frameworks used in industry.
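As a minimal illustration of the gap that behavioural types address, the following sketch uses an invented `Channel` API whose intended protocol is `open(); send()*; close()`. Here the protocol is encoded as a run-time state machine; behavioural-type systems aim to make such order violations into static (compile-time) errors rather than run-time ones.

```python
# Invented API for illustration: the flat signatures of open/send/close say
# nothing about call order, so we encode the intended protocol
#   open() ; send()* ; close()
# as an explicit state machine checked at run time.

class ProtocolError(Exception):
    """Raised when the API is used out of its intended order."""

class Channel:
    def __init__(self):
        self._state = "INIT"

    def open(self):
        if self._state != "INIT":
            raise ProtocolError("open() allowed only once, in state INIT")
        self._state = "OPEN"

    def send(self, msg: str) -> str:
        if self._state != "OPEN":
            raise ProtocolError("send() allowed only after open()")
        return f"sent: {msg}"

    def close(self):
        if self._state != "OPEN":
            raise ProtocolError("close() allowed only after open()")
        self._state = "CLOSED"

# Conforming usage succeeds; calling send() before open() raises.
ch = Channel()
ch.open()
print(ch.send("hello"))
ch.close()
```

A behavioural type for `Channel` would express the same automaton at the interface level, so that a client invoking `send` before `open` is rejected by the type checker instead of failing at run time.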
DCore (Causal debugging for concurrent systems) is a four-year ANR project that started in March 2019 and will end in August 2023.
The overall objective of the project is to develop a semantically well-founded, novel form of concurrent debugging, which we call “causal debugging”. Causal debugging will comprise and integrate two main engines: (i) a reversible execution engine that allows programmers to backtrack and replay a concurrent or distributed program execution and (ii) a causal analysis engine that allows programmers to analyze concurrent executions to understand why some desired program properties could be violated.
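The idea behind engine (i) can be conveyed with a toy example: each execution step records enough information to be undone, so the debugger can backtrack. The single-variable store below is an invented illustration of that principle, not DCore's actual engine (which targets concurrent and distributed executions).

```python
# Toy sketch of reversible execution: every assignment pushes the value it
# overwrites onto a history stack, so each step can be undone exactly.

_MISSING = object()  # sentinel: the variable did not exist before the step

class ReversibleStore:
    def __init__(self):
        self.env = {}
        self._history = []  # stack of (name, previous_value_or_MISSING)

    def assign(self, name, value):
        """Forward step: record the old binding, then update."""
        self._history.append((name, self.env.get(name, _MISSING)))
        self.env[name] = value

    def undo(self):
        """Backward step: restore the binding saved by the last assign."""
        name, old = self._history.pop()
        if old is _MISSING:
            del self.env[name]
        else:
            self.env[name] = old

store = ReversibleStore()
store.assign("x", 1)
store.assign("x", 2)
store.undo()               # backtrack: x is 1 again
print(store.env["x"])
```

In the concurrent setting targeted by the project, undoing a step must also respect causality: a step can only be reversed after the steps that causally depend on it, which is where the causal analysis engine (ii) comes in.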
REPAS (Reliable and Privacy-Aware Software Systems via Bisimulation Metrics) is an ANR project that started in October 2016 and finished in October 2020.
The project investigated quantitative notions and tools for proving program correctness and protecting privacy. In particular, the focus was on bisimulation metrics, the natural extension of bisimulation to quantitative systems. As a key application, we developed mechanisms to protect the privacy of users when their location traces are collected.
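A well-known mechanism in this line of work is location obfuscation by planar Laplace noise, as in geo-indistinguishability: the reported location is the true one perturbed by noise whose radius follows a Gamma(2, 1/ε) distribution (the radial law of the planar Laplace distribution). The sketch below is an illustrative instance of that mechanism, not necessarily the project's exact construction.

```python
# Sketch of location obfuscation with planar Laplace noise.  A point drawn
# from the planar Laplace distribution has a uniform angle and a radius with
# density eps^2 * r * exp(-eps * r), i.e. Gamma(shape=2, scale=1/eps).
import math
import random

def obfuscate(x: float, y: float, epsilon: float, rng: random.Random):
    """Return a noisy version of location (x, y); smaller epsilon = more noise."""
    theta = rng.uniform(0.0, 2 * math.pi)          # uniform direction
    r = rng.gammavariate(2, 1.0 / epsilon)         # radius ~ Gamma(2, 1/eps)
    return x + r * math.cos(theta), y + r * math.sin(theta)

rng = random.Random(42)
print(obfuscate(44.49, 11.34, 1.0, rng))  # a perturbed location
```

With this law the expected displacement is 2/ε, so the privacy parameter ε directly trades utility (accuracy of the reported trace) against privacy, which is exactly the kind of quantitative trade-off that bisimulation metrics are designed to reason about.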
PROGRAMme (What is a program? Historical and philosophical perspectives) is an ANR project that started in October 2017 and will finish in October 2022; PI: Liesbeth De Mol (CNRS/Université de Lille3).
The aim of this project is to develop a coherent analysis and pluralistic understanding of the notion of “computer program” and its implications for theory and practice.
PPS (Probabilistic Programming Semantics) is an ANR PRC project that started in January 2020 and will finish in July 2024.
Probabilities are essential in Computer Science. Many algorithms use probabilistic choices for efficiency or convenience, and probabilistic algorithms are crucial in communicating systems. Recently, probabilistic programming, and more specifically functional probabilistic programming, has proven crucial in various works on Bayesian inference and Machine Learning. Motivated by the rising impact of such probabilistic languages, the aim of this project is to develop formal methods for probabilistic computing (semantics, type systems, logical frameworks for program verification, abstract machines, etc.) to systematize the analysis and certification of functional probabilistic programs.
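As a toy illustration of what functional probabilistic programming means, the sketch below writes a model as an ordinary function that makes random choices, and conditions it on observed data by naive rejection sampling. The coin-bias example and all names are invented; real probabilistic languages provide far more efficient inference.

```python
# A probabilistic program is just a function making random choices; inference
# conditions its executions on observations.  Here: infer a coin's bias.
import random

def model(rng: random.Random):
    """Prior: bias ~ Uniform(0, 1); then flip the coin three times."""
    bias = rng.random()
    flips = [rng.random() < bias for _ in range(3)]
    return bias, flips

def infer(observed, n=20_000, seed=0):
    """Posterior mean of the bias, by naive rejection sampling: keep only
    the runs whose flips match the observed data."""
    rng = random.Random(seed)
    accepted = [bias for bias, flips in (model(rng) for _ in range(n))
                if flips == observed]
    return sum(accepted) / len(accepted)

# Conditioning on three heads pushes the posterior mean toward 1
# (analytically, the posterior is Beta(4, 1), with mean 0.8):
print(infer([True, True, True]))
```

Certifying that such a sampler actually computes the distribution the model denotes is precisely the kind of question the project's semantic tools (type systems, program logics, abstract machines) are meant to answer.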
Below are the details on the PhD students in Focus: starting date, topic or provisional title of the thesis, and supervisor(s).
PhD theses completed in 2020:
Michael Lodi and Simone Martini have carried out extensive work in scientific popularization, including the following.
Apart from the above-mentioned contributions, Martini has taken steps in a larger project: reflecting on and tracing the interaction between mathematical logic and programming (languages), identifying some of the driving forces of this process.
Despite the insight of some of the pioneers (Turing, von Neumann, Curry, Böhm), programming the early computers was a matter of fiddling with small architecture-dependent details. Only in the sixties did some form of “mathematical program development” appear on the agenda of some of the most influential players of the time. A “Mathematical Theory of Computation” is the name chosen by John McCarthy for his approach, which uses a class of recursively computable functions as an (extensional) model of a class of programs. It marks the beginning of the grand endeavour to present programming as a mathematical activity, and reasoning on programs as a form of mathematical logic. An important part of this process is the standard model of programming languages: the informal assumption that the meaning of programs should be understood on an abstract machine with unbounded resources and with true arithmetic. In 22 we present some crucial moments of this story, concluding with the emergence, in the seventies, of the need for more “intensional” semantics, such as the sequential algorithms on concrete data structures.