
Overall objectives

Most software-driven systems we commonly use in our daily life are huge hierarchical assemblies of components. This observation holds from the micro-scale (multi-core chips) to the macro-scale (data centers), and from hardware systems (telecommunication networks) to software systems (choreographies of web services). The main features of these pervasive applications are size, complexity, heterogeneity, and modularity (or concurrency). Moreover, several such systems are deployed and running before they are fully mastered, or they have grown so much that they now raise new problems that are hardly manageable by human operators. As these systems and applications become more essential, or even critical, the demand for their reliability, efficiency, and manageability becomes a central concern for computer science. The main objective of SUMO is to develop theoretical tools to address such systems, along the following axes.

Necessity of quantitative models.

Several disciplines in computer science have of course addressed some of the issues raised by large systems: formal methods (essentially for verification purposes), discrete event systems (diagnosis, control, planning, and their distributed versions), and concurrency theory (modelling and analysis of large concurrent systems). Practical needs have oriented these methods towards the introduction of quantitative aspects, such as time, probabilities, costs, and combinations of them. This drastically changes the nature of the questions that are addressed. For example, verification questions become the reachability of a state within a bounded time, the average sojourn duration in a state, the probability that a run of the system satisfies some property, the existence of control strategies with a given winning probability, etc. In this setting, exact computations are not always appropriate, as they may incur prohibitive complexity or even run into undecidability. Approximation strategies then offer a promising way forward, and are certainly also a key to handling large systems. Discrete event systems approaches follow the same trend towards quantitative models. For diagnosis, one is interested in the most likely explanations of observed failures, in the identification of the most informative tests to perform, and in the optimal placement of sensors; for control problems, one is of course interested in optimal control, in minimizing communications, in the robustness of the proposed controllers, in the online optimization of QoS (Quality of Service) indicators, etc.
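As a minimal illustration of one such quantitative question — the probability that a run of a system eventually reaches a given state — the sketch below computes reachability probabilities in a small finite Markov chain by value iteration. The chain, state names, and function are illustrative assumptions, not a model taken from the text.

```python
# Hedged sketch: probability of eventually reaching a target state in a
# finite Markov chain, computed by iterating the fixed-point equations
#   x[s] = 1                     if s is a target state
#   x[s] = sum_t P(s,t) * x[t]   otherwise.
# The toy chain below is an assumption made for illustration.

def reach_probability(P, target, iters=1000):
    """P: dict state -> list of (next_state, prob); target: set of states."""
    states = list(P)
    x = {s: 1.0 if s in target else 0.0 for s in states}
    for _ in range(iters):
        x = {s: 1.0 if s in target else
             sum(p * x[t] for t, p in P[s]) for s in states}
    return x

# Toy chain: from 'a' we move to 'goal' with prob 0.5 and to an absorbing
# 'sink' with prob 0.5, so the reachability probability from 'a' is 0.5.
P = {
    'a':    [('goal', 0.5), ('sink', 0.5)],
    'goal': [('goal', 1.0)],
    'sink': [('sink', 1.0)],
}
probs = reach_probability(P, {'goal'})
print(probs['a'])  # -> 0.5
```

Exact value iteration like this is feasible on small chains; as the paragraph above notes, on large systems such exact computations may become prohibitive, which motivates approximation strategies.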

Specificities of distributed systems.

While the above questions have already received partial answers, they remain largely unexplored in a distributed setting. We focus on structured systems, typically networks of dynamic systems with a known interaction topology, which may itself be static or dynamic. Interactions can be synchronous or asynchronous. The state space explosion raised by such systems has been addressed through two techniques. The first consists in adopting true concurrency models, which take advantage of parallelism to reduce the size of the trajectory sets. The second looks for modular or distributed “supervision” methods, taking the shape of a network of local supervisors, one per component. While these approaches are relatively well understood, combining them with quantitative models remains a challenge; in particular, there is no established setting that brings together concurrency theory and stochastic systems. This field is largely open, both for modeling, analysis, and verification purposes, and for distributed supervision techniques. These difficulties are compounded by the emergence of data-driven distributed systems (such as web services or data-centric systems), where the data exchanged by the various components influence both the behaviors of these components and the quantitative aspects of their reactions (e.g. QoS). Such systems call for symbolic or parametric approaches, for which a theory is still missing.
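The gain offered by true concurrency models can be quantified on a toy case: interleaving the runs of two independent components of n local events each produces C(2n, n) distinct schedules, whereas a partial-order (true concurrency) representation keeps only the 2n events and their (empty) dependency relation. The sketch below, an illustrative computation rather than anything from the text, shows how fast the interleaving count grows.

```python
# Hedged sketch of the state-explosion argument behind true-concurrency
# models: two independent sequences of n and m events can be interleaved
# in C(n+m, n) ways, while the partial order itself has only n+m events.
from math import comb

def interleavings(n, m):
    """Number of interleaved schedules of two independent event sequences."""
    return comb(n + m, n)

for n in (2, 5, 10):
    print(n, interleavings(n, n))
# Already at n = 10 events per component there are 184756 interleavings,
# versus a single partial-order object with 20 events.
```

With k components the count becomes a multinomial coefficient, so the blow-up is even sharper; this is the combinatorics that modular and partial-order methods avoid exploring explicitly.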

New issues raised by large systems.

Some existing distributed systems, like telecommunication networks, data centers, or large-scale web applications, have reached sizes and complexities that reveal new management problems. One can no longer assume that the model of the managed system is static and fully known at any time and any scale. To scale up management methods to such applications, one needs to be able to design reliable abstractions of parts of the systems, or to build part of their model online, on demand, according to the management functions to be realized. Besides, one does not wish to define management objectives at the scale of each single component, but rather to pilot these systems through high-level policies (maximizing throughput, minimizing energy consumption, etc.). These systems and problems have connections with other approaches for the management of large structured stochastic systems, such as Bayesian networks (BN) and their variants. The similarity can actually be made more formal: inference techniques for BN rely on the concept of conditional independence, which has a counterpart for networks of dynamic systems and is at the core of techniques like distributed diagnosis, distributed optimal planning, or the synthesis of distributed controllers. The potential of this connection is largely unexplored, but it suggests that one could rather easily derive from it good approximate management methods for large distributed dynamic systems.
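The conditional independence at the heart of BN inference can be illustrated on the smallest possible example: a chain A → B → C, where C is independent of A given B. The factorization P(A,B,C) = P(A)P(B|A)P(C|B) lets one compute P(C) by eliminating one variable at a time instead of summing over the full joint. All probability values below are made-up assumptions for illustration.

```python
# Hedged sketch: variable elimination on a tiny Bayesian network A -> B -> C.
# Since C is conditionally independent of A given B, the joint factorizes as
# P(A, B, C) = P(A) * P(B | A) * P(C | B), and P(C) is obtained by two small
# sums rather than one sum over the whole joint. Numbers are illustrative.

pA = {0: 0.6, 1: 0.4}                     # P(A)
pB_A = {0: {0: 0.9, 1: 0.1},              # P(B | A = a), indexed pB_A[a][b]
        1: {0: 0.2, 1: 0.8}}
pC_B = {0: {0: 0.7, 1: 0.3},              # P(C | B = b), indexed pC_B[b][c]
        1: {0: 0.5, 1: 0.5}}

# Eliminate A:  P(B = b) = sum_a P(a) * P(b | a)
pB = {b: sum(pA[a] * pB_A[a][b] for a in pA) for b in (0, 1)}
# Eliminate B:  P(C = c) = sum_b P(b) * P(c | b)
pC = {c: sum(pB[b] * pC_B[b][c] for b in pB) for c in (0, 1)}

print(pC)  # -> {0: 0.624, 1: 0.376}
```

On a chain the saving is modest, but on large sparse networks the same idea — exploiting conditional independence to keep intermediate factors small — is what makes inference tractable, and it is the structural analogy the paragraph above draws with networks of dynamic systems.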