Ubiquitous Computing refers to the situation in which computing facilities are embedded or integrated into everyday objects and activities. The resulting networks are large-scale, including both hardware devices and software agents. The systems are highly mobile and dynamic: programs or devices may move and often execute in networks owned and operated by others; new devices or software pieces may be added; the operating environment or the software requirements may change. The systems are also heterogeneous and open: the pieces that form a system may be quite different from each other, built by different people or industries, even using different infrastructures or programming languages; the constituents of a system have only partial knowledge of the overall system, and may only know, or be aware of, a subset of the entities that operate on it.

A prominent recent phenomenon in Computer Science is the emergence of
interaction and communication as key architectural and programming
concepts. This is especially visible in ubiquitous systems.
Complex distributed
systems are being conceived and designed as structured compositions of
computational units, usually referred to as components. These
components are expected to interact with each other, and their
interactions are orchestrated into conversations and
dialogues. In the remainder, we write CBUS for
Component-Based Ubiquitous Systems.

In CBUS, the systems are complex. As for complex systems in other disciplines, such as physics, economics, and biology, CBUS requires theories that allow us to understand the systems, to design or program them, and to analyze them.

Focus investigates the semantic foundations for CBUS. The foundations are intended to be instrumental in formalizing and verifying important computational properties of the systems, as well as in proposing linguistic constructs for them. Prototypes are developed to test the implementability and usability of the models and techniques. Throughout our work, `interaction' and `component' are central concepts.

The members of the project have a solid experience in algebraic and logical models of computation, and related techniques, and this is the basis for our study of ubiquitous systems. The use of foundational models inevitably leads to opportunities for developing the foundational models themselves, with particular interest in issues of expressiveness and in the transplant of concepts or techniques from one model to another.

The objective of Focus is to develop concepts, techniques, and
possibly also tools, that may contribute to the analysis and synthesis
of CBUS.
Fundamental to these activities is modeling.
Therefore designing, developing and studying computational models
appropriate for CBUS is a central activity of the project.
The models are used
to formalise and verify important computational
properties of the systems, as well as to propose new linguistic constructs.

The models we study are rooted in process calculi (e.g., the π-calculus).

Modern distributed
systems have witnessed a clear shift
towards interaction and conversations
as
basic building blocks
for software architects and programmers.
The systems are made of components that
interact and carry out
dialogues in order to achieve
some predefined goal; Web services are a good example of this.
Process calculi are models that have been designed precisely with the
goal
of understanding interaction and composition.
The theory and tools that have been developed on top of process
calculi can set a basis with which CBUS challenges can be tackled.
Indeed, industrial proposals of languages for Web services such as BPEL
are strongly inspired by process calculi, notably the π-calculus.

Type systems and logics for reasoning on computations are among the
most successful outcomes in the history of research in programming languages.
A number of elegant and powerful results have been obtained
in implicit computational complexity
concerning the λ-calculus.

The main application domain for Focus is ubiquitous systems, i.e. systems whose distinctive features are: mobility; high dynamicity; heterogeneity; variable availability (the availability of services offered by the constituent parts of a system may fluctuate, and similarly the guarantees offered by single components may not be the same all the time); open-endedness; complexity (the systems are made of a large number of components, with sophisticated architectural structures). In Focus we are particularly interested in the following aspects.

Today the component-based methodology often refers to Service-Oriented Computing. This is a specialized form of the component-based approach. According to the W3C, a service-oriented architecture is “a set of components which can be invoked, and whose interface descriptions can be published and discovered”. In the early days of Service-Oriented Computing, the term “service” was strictly related to that of Web Service. Nowadays, it has a much broader meaning, as exemplified by the XaaS (everything as a service) paradigm: building on modern virtualization technologies, Cloud computing offers the possibility of building sophisticated service systems on virtualized infrastructures accessible from everywhere and from any kind of computing device. Such infrastructures are usually examples of sophisticated service-oriented architectures that, differently from traditional service systems, should also be capable of elastically adapting on demand to user requests.

In Service-Oriented Computing (SOC), systems made of interacting and collaborating components are specified by describing the “local behaviour” of each of them (i.e., the way one component may interact with the others), and/or the “global behaviour” of the system (i.e., the expected interaction protocols that should take place within the system).

From the language perspective, the programming of “local” and “global” behaviours has been addressed with orchestration and choreography languages, respectively. Regarding applications, where SOC usually meets Cloud Computing, we find two state-of-the-art architectural styles. Microservices are a revisitation of service-oriented architectures where fine-grained, loosely coupled, independent services help developers assemble reusable, flexible, and scalable architectures. Serverless is a programming style and deployment technique where users program Cloud applications in terms of stateless functions, which execute and scale in proportion to inbound requests.
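To make the local/global duality concrete, here is a minimal sketch in Python (the Buyer/Seller protocol and all names are hypothetical, for illustration only): a global behaviour, given as an ordered list of interactions, is projected onto the local behaviour of each participant, in the spirit of choreography languages.

```python
# Minimal sketch: a "global behaviour" as an ordered list of interactions,
# projected onto the "local behaviour" of each participant.
# The Buyer/Seller protocol below is a hypothetical example.

def project(choreography, participant):
    """Project a global choreography onto one participant's local actions."""
    local = []
    for sender, receiver, msg in choreography:
        if participant == sender:
            local.append(f"{receiver}!{msg}")   # send action
        elif participant == receiver:
            local.append(f"{sender}?{msg}")     # receive action
        # interactions not involving the participant are skipped
    return local

g = [("Buyer", "Seller", "request"),
     ("Seller", "Buyer", "quote"),
     ("Buyer", "Seller", "accept")]

print(project(g, "Buyer"))   # ['Seller!request', 'Seller?quote', 'Seller!accept']
print(project(g, "Seller"))  # ['Buyer?request', 'Buyer!quote', 'Buyer?accept']
```

An orchestration language programs each of these local behaviours directly; a choreography language starts from the global description and obtains the local ones by projection.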

Communication is an essential element of modern software, yet programming and analysing communicating systems are difficult tasks. In 25 we tackle the lack of compositional mechanisms that preserve relevant communication properties. This work belongs to a strand of research based on modelling sets of communicating components as finite-state machines. We propose a composition mechanism based on gateways that, under suitable compatibility conditions, guarantees deadlock freedom. Previous work focussed on (a)synchronous symmetric communications (where sender and receiver play the same part in determining which message to exchange). In 25 we provide the same guarantees for the synchronous asymmetric case (where senders decide independently which message should be exchanged).
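As an illustration of the general setting (not of the specific gateway construction of 25), the following Python sketch models two components as finite-state machines with send/receive transitions, explores their synchronous composition, and reports any reachable deadlocked configurations; the client/server machines are hypothetical.

```python
# Components as finite-state machines: transitions are ("!", msg, next) for
# sends and ("?", msg, next) for receives. We explore the synchronous product
# of two machines and collect reachable non-final states with no successor
# (deadlocks). The machines below are hypothetical examples.

def reachable_deadlocks(m1, s1, m2, s2, finals):
    seen, stack, deadlocks = set(), [(s1, s2)], []
    while stack:
        a, b = stack.pop()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        succs = [(a2, b2)
                 for k1, msg1, a2 in m1.get(a, [])
                 for k2, msg2, b2 in m2.get(b, [])
                 if {k1, k2} == {"!", "?"} and msg1 == msg2]  # handshake
        if not succs and (a, b) not in finals:
            deadlocks.append((a, b))
        stack.extend(succs)
    return deadlocks

client = {0: [("!", "req", 1)], 1: [("?", "ack", 2)]}
server = {0: [("?", "req", 1)], 1: [("!", "ack", 2)]}
print(reachable_deadlocks(client, 0, server, 0, {(2, 2)}))  # []: deadlock-free

bad_server = {0: [("?", "query", 1)]}  # expects a message never sent
print(reachable_deadlocks(client, 0, bad_server, 0, set()))  # [(0, 0)]
```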

We also worked on formalisations of choreographic languages. In 24 we introduce a metamodel dubbed formal choreographic languages. Our main motivation is to establish a framework for the comparison and generalisation of standard constructions and properties from the literature. Formal choreographic languages capture existing formalisms for message-passing systems; we detail the cases of multiparty session types and choreography automata. Unlike many other models, formal choreographic languages can naturally model systems exhibiting non-regular behaviour. We also worked on the modelling of real business processes taken from the official documentation of European customs business process models, using a formal choreographic approach 29.

In 35 we extend the theory of choreography automata by equipping communications with assertions on the values that can be communicated, enabling a design-by-contract approach. We provide a toolchain allowing the generation of APIs for TypeScript web programming. Programs communicating via the generated APIs follow, by construction, the prescribed communication pattern and are free from communication errors such as deadlocks.

The service-oriented programming language Jolie is a long-standing project within Focus. In 36 we study the usage of model-driven engineering languages to produce Jolie APIs. This allows us to implement a synthesiser that, given a data model (specified in the LEMMA modelling framework), can produce the interfaces of the corresponding data types and service interfaces (APIs) in Jolie. Another work related to Jolie is Tquery 18, the first query framework designed for ephemeral data handling (where queried data must not persist in the system) in microservices. We first formalise the theory for querying semi-structured data compatible with Jolie and then implement it as a Jolie library. We present a use case based on a medical algorithm to exemplify the usage of the library, and report microbenchmarks showing that, for ephemeral data handling, Tquery outperforms the use of a database (e.g., MongoDB).

Looking more generally at Cloud deployment, we worked on both microservices and serverless.

In 14 we conducted a systematic review of the relationship between microservices and security, collecting 290 relevant publications (at the time, the largest curated dataset on the topic). We developed our analysis both by looking at publication metadata—charting publication outlets, communities, approaches, and tackled issues—and through 20 research questions, used to aggregate the literature and spot gaps left open. These results allowed us to summarise our conclusions in the form of a call for action to address the main open challenges of the field.

In previous work, Focus members developed “global scaling”, a microservices deployment technique that, given a functional specification of a microservice architecture, orchestrates the scaling of all its components, avoiding the cascading slowdowns typical of uncoordinated, mainstream autoscaling. In 43 we propose a proactive version of global scaling, implemented using data analytics, able to anticipate future scaling actions. This eliminates inefficiencies of the original, reactive version of global scaling. We also present a hybrid variant that mixes reactive and proactive scaling. From our benchmarks, proactive global scaling consistently outperforms the reactive version, while the hybrid solution is the best-performing one.
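The difference between reactive and proactive scaling can be sketched in a few lines; the capacity constant and the naive linear forecast below are illustrative assumptions, not the data-analytics predictor of 43.

```python
# Reactive vs. proactive replica sizing. CAPACITY and the linear forecast are
# hypothetical simplifications for illustration purposes.

CAPACITY = 100  # hypothetical requests/s that one replica can sustain

def ceil_div(n, d):
    return -(-n // d)

def reactive_replicas(history):
    """Size replicas on the load just observed."""
    return max(1, ceil_div(history[-1], CAPACITY))

def proactive_replicas(history):
    """Size replicas on a naive forecast: last value plus the latest trend."""
    forecast = max(history[-1] + (history[-1] - history[-2]), 0)
    return max(1, ceil_div(forecast, CAPACITY))

load = [100, 200, 300]            # steadily rising workload
print(reactive_replicas(load))    # 3: enough for the load already seen
print(proactive_replicas(load))   # 4: provisioned for the anticipated load
```

Under a rising load, the reactive policy always lags one step behind, which is precisely the inefficiency that anticipating future scaling actions removes.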

Besides scaling techniques, we also worked on adaptability. In 12, 23 we focussed on workload orchestration and presented the SEAWALL platform, which enables heterogeneous data acquisition and low-latency processing for Industry 4.0. SEAWALL is developed within the homonymous project funded by the Italian BIREX industrial consortium. The SEAWALL architecture features cutting-edge technologies (such as Kubernetes, Istio, KubeEdge, W3C WoT) to support the seamless orchestration of workloads among the nodes of a cloud-edge continuum in QoS-aware scenarios where the latency requirements of anomaly detection must be continuously assessed. We present the industrial use case from the SEAWALL project and the cloud/edge microservice architecture. In 28 we discuss possible trade-offs and challenges of adopting autonomic adaptability, following a MAPE-K approach, to help software architects build adaptable microservice-based systems.

Finally, in 40 we worked on serverless function scheduling. State-of-the-art serverless platforms use hardcoded scheduling policies that are unaware of the possible topological constraints of functions. Taking these constraints into account when scheduling functions leads to considerable performance improvements, e.g., minimising loading times or data-access latencies. To address this problem, we present and implement (as an extension of the OpenWhisk serverless platform) a declarative language for defining serverless scheduling policies that expresses constraints on topologies of schedulers and execution nodes.
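A much-simplified illustration of topology-aware scheduling (not the actual declarative language of 40): each function declares the nodes on which it may run, and the policy picks, among them, the node with the lowest latency to the function's data. Node names and the latency table are hypothetical.

```python
# Each function declares where it may run; the policy selects, among the
# allowed nodes, the one with the lowest latency to the node holding the
# function's data. Node names and the latency table are hypothetical.

LATENCY = {("edge1", "edge1"): 1, ("edge1", "cloud"): 20,
           ("cloud", "edge1"): 20, ("cloud", "cloud"): 1}

def schedule(allowed_nodes, data_node):
    """Pick the allowed node minimising data-access latency."""
    return min(allowed_nodes, key=lambda n: LATENCY[(n, data_node)])

fn_nodes = ["edge1", "cloud"]
print(schedule(fn_nodes, "edge1"))  # edge1: co-located with its data
print(schedule(fn_nodes, "cloud"))  # cloud
```

A topology-unaware policy would pick a node regardless of where the data resides, paying the higher data-access latency on every invocation.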

We have continued the study of reversibility started in the past years. As usual, our efforts target concurrent systems and follow the causal-consistent reversibility approach, where any action can be undone provided that its consequences, if any, are undone beforehand. We tackled both theoretical questions and practical applications in the area of debugging of concurrent Erlang programs.

From a theoretical point of view, we considered the impact of time on causal-consistent reversibility 27. More precisely, we defined a causal-consistent reversible extension of the Temporal Process Language by Hennessy and Regan, a CCS with discrete time and a timeout operator. We proved that our extension satisfies the properties expected from a causal-consistent reversible calculus and we showed that it can also be interpreted as an extension of reversible CCS (in particular CCSK from Ulidowski and Phillips) with time.
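The causal-consistent discipline itself can be illustrated with a few lines of Python (the actions and the dependency relation are hypothetical): an action can be undone only once every action that causally depends on it has been undone.

```python
# Executed actions are undone in a causal-consistent way: undoing an action is
# allowed only once all actions depending on it have been undone. The actions
# and the dependency relation below are hypothetical.

history = []                 # executed, not-yet-undone actions
deps = {"b": {"a"}}          # action "b" causally depends on action "a"

def do(action):
    history.append(action)

def undo(action):
    pending = [x for x in history if action in deps.get(x, set())]
    if pending:
        raise ValueError(f"undo {pending} before undoing '{action}'")
    history.remove(action)

do("a"); do("b")
try:
    undo("a")                # rejected: "b" depends on "a" and is still there
except ValueError as e:
    print("blocked:", e)
undo("b"); undo("a")         # causal-consistent order succeeds
print(history)               # []
```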

Reversible debugging of concurrent systems can be tackled using causal-consistent debugging, which exploits causal-consistent reversibility to explore a concurrent execution backward following causality links from a visible misbehavior towards the bug causing it. In the past we exploited this idea to build CauDEr, a Causal-consistent Debugger for Erlang. We extended CauDEr to support the primitives used in Erlang to manage a shared map associating process identifiers to names 38. Notably, these primitives are imperative, while the core of Erlang is functional. We also showcased the application of CauDEr to find a concurrency bug in a small case study, in a form accessible to programmers with no previous knowledge of reversibility 45. Finally, to bridge the gap between the theoretical foundations of causal-consistent reversibility and CauDEr, we developed a generator able to take a semantics in Maude (following a pre-defined structure) and automatically produce its causal-consistent extension 34. We applied it to a semantics of Erlang we defined. The resulting executable causal-consistent semantics can be used as an oracle against which CauDEr's behavior can be tested, while remaining in direct correspondence with the theoretical foundations of Erlang and reversibility.

In Focus, we are interested in studying probabilistic higher-order programming languages and, more generally, the fundamental properties of probabilistic computation when placed in an interactive scenario, for instance the one of concurrent systems.

One of the most basic but nevertheless desirable properties of programs is of course termination. Termination can be seen as a minimal guarantee about the time complexity of the underlying program. When probabilistic choice comes into play, termination can be defined by stipulating that a program is terminating if its probability of convergence is 1, thus giving rise to the notion of almost sure termination. Termination, already undecidable for deterministic (universal) programming languages, remains so in the presence of probabilistic choice, becoming provably harder. A stronger notion of termination is the one embodied in positive almost sure termination, which asks the average runtime of the underlying program to be finite. If the average computation time is not only finite, but also suitably limited (for example by a polynomial function), one moves towards a notion of bounded average runtime complexity. Over the recent years, the Focus team has established various formal systems for reasoning about (positive) almost sure termination and average runtime complexity, and has also developed methodologies for deriving average runtime bounds in a fully automated manner. This trend continued in 2022.
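The distinction can be illustrated by random walks (a standard textbook example, not taken from the cited works): a symmetric walk on the naturals reaches 0 with probability 1, hence is almost surely terminating, yet its expected runtime from 1 is infinite, so it is not positively almost surely terminating; a downward-biased walk is.

```python
# Symmetric vs. biased random walk on the naturals, as programs whose
# termination event is "reaching 0". The cap keeps the demo finite.
import random

def walk(p_down, start=1, cap=10**6):
    """Number of steps taken to reach 0 from `start` (capped at `cap`)."""
    x, steps = start, 0
    while x > 0 and steps < cap:
        x += -1 if random.random() < p_down else 1
        steps += 1
    return steps

random.seed(1)
symmetric = [walk(0.5) for _ in range(200)]  # a.s. terminating, heavy-tailed
biased = [walk(0.7) for _ in range(200)]     # positively a.s. terminating
print("biased mean steps:", sum(biased) / len(biased))
print("symmetric max steps:", max(symmetric))
```

Empirically, the biased walk's runtimes average out to a small constant, while the symmetric walk's runtimes, despite each run terminating, exhibit the heavy tail responsible for the infinite expectation.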

In addition to the analysis of complexity, which can be seen as a property of individual programs, Focus has also been interested, for some years now, in the study of relational properties of programs. More specifically, we are interested in how to evaluate the differences between behaviours of distinct programs, going beyond the concept of program equivalence, but also beyond that of metrics. In this way, approximate correct program transformations can be justified, while it also becomes possible to give a measure of how close a program is to a certain specification.

Below we describe the results obtained by Focus this year, dividing them into several strands.

In Focus, we are interested in studying notions of termination and resource analysis for non-standard computing paradigms, like those induced by the presence of randomized and quantum effects. Notably, some of the contributions along these lines in 2022 have to do with the Curry-Howard correspondence.

In 31, a system of session types is introduced as induced by a Curry-Howard correspondence applied to bounded linear logic, suitably extended with probabilistic choice operators and ground types. The resulting system satisfies some expected properties, like subject reduction and progress, but also unexpected ones, like a polynomial bound on the time needed to reduce processes. This makes the system not only capable of characterizing notions of efficiency in a randomized setting, but also suitable for modeling experiments and proofs from the so-called computational model of cryptography.

We also showed that an intuitionistic version of counting propositional logic
(itself introduced and studied within Focus in 2021) corresponds, in the sense
of Curry and Howard, to an expressive type system for the probabilistic event
λ-calculus. Types not only guarantee that validity (respectively, termination)
holds, but also reveal the underlying probability. We also showed how to obtain
a system precisely capturing the probabilistic behavior of typed programs.

We also looked at the nature of time in the context of security protocols.
Timed cryptography refers to cryptographic primitives designed to meet their
security goals only for a short (polynomial) amount of time, and had not been
dealt with in the symbolic model so far. We introduced a timed extension of
the applied π-calculus.

Additionally, we achieved major improvements in the study of relational reasoning for higher-order continuous probabilistic programs, viz. higher-order programs sampling from continuous distributions. In 13, in fact, we isolated a set of natural conditions on the expressive power of a programming language that guarantee full abstraction of bisimulation-based equivalences, viz. event and applicative bisimilarity, thereby solving an open problem in the context of higher-order probabilistic reasoning.

Finally, we have also investigated to what extent techniques for reasoning
about time (in expectation) and termination extend from classical, probabilistic
programs to quantum programs. Specifically,
in 22 we introduce a new kind of expectation transformer for a mixed
classical-quantum programming language. This semantic approach relies on a
new notion of cost structure, which we introduce and which can be seen as a
specialisation of the Kegelspitzen of Keimel and Plotkin. Through the analysis of
several quantum algorithms within this new framework, we show that our
calculus is both sound and adequate with respect to the operational
semantics of our quantum language.

Program semantics is traditionally concerned with program equivalence, in which all pairs of programs that are not equivalent are treated the same and simply dubbed incomparable. In recent years, various forms of program metrics have been introduced, in which the distance between non-equivalent programs is measured as a not necessarily numerical quantity.

By letting the underlying quantale vary as the type of the compared programs becomes more complex, the recently introduced framework of differential logical relations allows for a new contextual form of reasoning. We showed 16 that all this can be generalised to effectful higher-order programs, in which not only the values, but also the effects computations produce can be appropriately distanced in a principled way. We show that the resulting framework is flexible, allowing various forms of effects to be handled, and that it provides compact and informative judgments about program differences. Moreover, we have introduced 30 a new general categorical construction that allows one to extract differential logical relations from traditional ones, and used such a construction to prove new correctness results for operationally-defined (pure and effectful) differential logical relations.

Mardare et al.'s quantitative algebras represent a novel and elegant way of
generalizing equational logic to a notion of distance. We explored the
possibility of extending quantitative algebras to the structures which
naturally emerge from Combinatory Logic and the λ-calculus.

Sometimes, the quantitative aspects are in the language itself. This is the case for graded modal type systems and coeffects, which are becoming a standard formalism to deal with context-dependent, usage-sensitive computations, especially when combined with computational effects. We developed 15 a general theory and calculus of program relations for higher-order languages with combined effects and coeffects. The relational calculus builds upon the novel notion of a corelator (or comonadic lax extension) to handle coeffects relationally. Inside such a calculus, we define three notions of effectful and coeffectful program refinement: contextual approximation, logical preorder, and applicative similarity. We prove that all of them are precongruences, and thus sound.

While the amount of time a functional program requires when
evaluated can be taken as the number of reduction steps the underlying abstract
machine or rewriting mechanism performs, the same cannot be said about space.
What is a proper notion of working space for a functional program?
We introduced 20 a new reasonable space cost model
for the λ-calculus.

In this area, during the past year, our efforts have gone in three main directions: unifying semantics; coinductive proof techniques; type-based techniques.

This direction has to do with
comparing, and
possibly unifying, two major approaches to the semantics of
higher-order languages. One
is the operational approach stemming from process calculi (such as the
π-calculus).
In other works, we investigate the kind of models that pure process calculi
such as the π-calculus give rise to.

In 11 we report on a tool that verifies Java source code with respect to typestates. A typestate defines an object's states, the methods that can be called in each state, and the states resulting from the calls. The tool statically verifies that, when a Java program runs: sequences of method calls obey objects' protocols; objects' protocols are completed; null-pointer exceptions are not raised; subclasses' instances respect the protocols of their superclasses.
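The idea of a typestate can be illustrated with a small dynamic check in Python (the cited tool works statically on Java source; the file-like protocol below is a hypothetical example): a protocol table lists, for each state, the methods allowed and the resulting state.

```python
# Dynamic illustration of a typestate, with a hypothetical file-like
# protocol: open -> read* -> close, and no calls after close.

PROTOCOL = {            # state -> {method allowed in that state: next state}
    "INIT":   {"open": "OPEN"},
    "OPEN":   {"read": "OPEN", "close": "CLOSED"},
    "CLOSED": {},       # protocol completed: no further calls allowed
}

class TypestateFile:
    def __init__(self):
        self.state = "INIT"

    def _step(self, method):
        allowed = PROTOCOL[self.state]
        if method not in allowed:
            raise RuntimeError(f"'{method}' not allowed in state {self.state}")
        self.state = allowed[method]

    def open(self):
        self._step("open")

    def read(self):
        self._step("read")
        return "data"

    def close(self):
        self._step("close")

f = TypestateFile()
f.open(); f.read(); f.close()
print(f.state)  # CLOSED: the protocol was completed
```

A static typestate checker establishes at compile time what this sketch only detects at run time: that every sequence of calls the program may perform stays within the protocol table.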

The paper 33
is a retrospection on our earlier work on session types, where we show that session
types are encodable into standard π-calculus types.

There exists a rich and well-developed theory of enhancements of the coinduction proof method, widely used for behavioural relations such as bisimilarity. In 19, we study how to develop an analogous theory for inductive behavioural relations, i.e., relations defined from inductive observables. Similarly to the coinductive setting, our theory makes use of (semi-)progressions of the form R -> F(R), where R is a relation on processes and F is a function on relations, meaning that there is an appropriate match on the transitions that the processes in R can perform, in which the process derivatives are in F(R). For a given preorder, an enhancement corresponds to a sound function, i.e., one for which R -> F(R) implies that R is contained in the preorder; and similarly for equivalences. We isolate a subclass of sound functions that contains non-trivial functions and enjoys closure properties with respect to desirable function constructors, so as to be able to derive sophisticated sound functions (and hence sophisticated proof techniques) from simpler ones. We test our enhancements on a few non-trivial examples.

We have designed and tested activities to teach the basic concepts of cryptography to high school students, creating both digital environments that simulate unplugged activities (for distance learning) and task-specific block-based programming languages for learning by manipulating computational objects 39. The course focuses on the big ideas of cryptography, and the introduction of each cryptosystem is motivated by overcoming the limitations of the previous one.
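As an example of the kind of computational object such activities revolve around (an illustration, not material from the cited course), here is the Caesar cipher in Python; its trivial breakability by brute force is exactly the sort of limitation that motivates introducing stronger cryptosystems.

```python
# Caesar cipher: shift each letter by a fixed amount, preserving case and
# leaving non-letters untouched. Decryption is encryption with the opposite shift.

def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

msg = caesar("attack at dawn", 3)
print(msg)              # dwwdfn dw gdzq
print(caesar(msg, -3))  # attack at dawn
```

Since only 25 non-trivial shifts exist, a student can break the cipher by trying them all, which naturally motivates the next, stronger cryptosystem in the sequence.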

We also designed and tested an interdisciplinary training module on cryptography 42 for prospective STEM teachers that leveraged some “boundary objects” between Math and CS (e.g., adjacency matrices, graphs, computational complexity, factoring) in an important social context (the debate on the benefits and risks of end-to-end cryptography). The module proved useful in making students mobilize concepts, methods, and practices of the two disciplines and making them move between semiotic representations of the interdisciplinary objects involved.

SEAWALL (SEAmless loW latency cLoud pLatforms) is an industrial project financed by the Italian Ministry of Economic Development dedicated to the digital innovation of manufacturing companies. In particular, methods and techniques have been investigated to manage the development of cloud applications with low-latency constraints. The proposed solution is based on a platform, deployed on the edge-cloud continuum, in which latency-critical tasks are dynamically and automatically moved from cloud to edge and vice versa. In this way, automatic workload mobility can be managed at run time following user constraints and preferences.

SEAWALL is coordinated by a company in Bologna (Poggipolini). M. Gabbrielli is coordinating the University of Bologna unit. The industrial partners are Aetna, Bonfiglioli Riduttori, IMA, Sacmi, Philip Morris, Siemens, CDM, and Digital River. The project started in July 2020, and ended in January 2022.

Giallorenzo co-leads “Ranflood”, a three-year project collaboration started in July 2021 between the Regional Environmental Protection and Energy Agency of Emilia-Romagna (ARPAE Emilia-Romagna) and the Department of Computer Science and Engineering (DISI) at the University of Bologna. The collaboration concerns the development of techniques and software to combat the spread of malware by exploiting resource contention.

BehAPI (Behavioural Application Program Interfaces) is a European project (H2020-MSCA-RISE-2017) running in the period March 2018 — February 2024. The topic of the project is behavioural types, as a suite of technologies that formalise the intended usage of APIs. Currently, APIs are typically flat structures, i.e. sets of service/method signatures specifying the expected service parameters and the kind of results one should expect in return. However, correct API usage also requires the individual services to be invoked in a specific order. Despite its importance, this information is often either omitted or stated informally via textual descriptions. The expected benefits of behavioural types include guarantees such as service compliance, deadlock freedom, dynamic adaptation in the presence of failure, load balancing, etc. The project aims to bring the existing prototype tools based on these technologies to mainstream programming languages and development frameworks used in industry.

DCore (Causal debugging for concurrent systems) is an ANR project that started in March 2019 and will end in March 2024.

The overall objective of the project is to develop a semantically well-founded, novel form of concurrent debugging, which we call “causal debugging”. Causal debugging will comprise and integrate two main engines: (i) a reversible execution engine that allows programmers to backtrack and replay a concurrent or distributed program execution and (ii) a causal analysis engine that allows programmers to analyze concurrent executions to understand why some desired program properties could be violated.

PROGRAMme (What is a program? Historical and philosophical perspectives) is an ANR project that started in October 2017 and will finish in October 2023 (a one-year extension was granted).

The aim of this project is to develop a coherent analysis and pluralistic understanding of “computer program” and its implications to theory and practice.

PPS (Probabilistic Programming Semantics) is an ANR PRC project that started in January 2020 and will finish in July 2024.

Probabilities are essential in Computer Science. Many algorithms use probabilistic choices for efficiency or convenience, and probabilistic algorithms are crucial in communicating systems. Recently, probabilistic programming, and more specifically functional probabilistic programming, has proven crucial in various works on Bayesian inference and Machine Learning. Motivated by the rising impact of such probabilistic languages, the aim of this project is to develop formal methods for probabilistic computing (semantics, type systems, logical frameworks for program verification, abstract machines, etc.) to systematize the analysis and certification of functional probabilistic programs.

Below are the details on the PhD students in Focus: starting date, topic or provisional title of the thesis, supervisor(s).

Michael Lodi and Simone Martini have carried out extensive work of scientific popularization, including the following.