Section: Research Program
Management of Quantitative Behavior
Participants: Benedikt Bollig, Thomas Chatain, Paul Gastin, Stefan Haar, Serge Haddad, Benjamin Monmege.
Introduction
Besides the logical functionalities of programs, the quantitative aspects of component behavior and interaction play an increasingly important role.

Real-time properties cannot be neglected even if time is not an explicit functional issue, since transmission delays, parallelism, etc., can lead to timeouts firing and thus change even the logical course of processes. Again, this phenomenon arises in telecommunications and web services, but also in transport systems.

In the same contexts, probabilities need to be taken into account, for many diverse reasons such as unpredictable functionalities, or because the outcome of a computation may be governed by race conditions.

Last but not least, constraints on cost cannot be ignored, be it in terms of money or any other limited resource, such as memory space or available CPU time.
Traditional mainframe systems were proprietary and (essentially) localized; therefore, the impact of delays, unforeseen failures, etc. could be considered under the control of the system manager. It was therefore natural, in the verification and control of systems, to focus entirely on functional behavior.
With the increase in size of computing systems and their growing degree of compositionality and distribution, quantitative factors enter the stage:

calling remote services and transmitting data over the web creates delays;

remote or non-proprietary components are not “deterministic”, in the sense that their behavior is uncertain.
Time and probability are thus parameters that the management of distributed systems must be able to handle; along with both, the cost of operations is often subject to restrictions, or its minimization is at least desired. The mathematical treatment of these features in distributed systems is an important challenge, which MExICo is addressing; the following describes our activities concerning probabilistic and timed systems. Note that cost optimization is not a current activity but enters the picture in several intended activities.
Probabilistic Distributed Systems
Participants: Stefan Haar, Serge Haddad, Claudine Picaronny.
Nonsequential probabilistic processes
Practical fault diagnosis requires selecting explanations of maximal likelihood. For partial-order-based diagnosis, this leads to the question of what the probability of a given partially ordered execution is. In Benveniste et al. [51], [44], we presented a model of stochastic processes whose trajectories are partially ordered, based on local branching in Petri net unfoldings; an alternative and complementary model based on Markov fields is developed in [69], which takes a different view of the semantics and overcomes the first model's restrictions on applicability.
Both approaches abstract away from real-time progress and randomize choices in logical time. On the other hand, the relative speeds, and thus, indirectly, the real-time behavior of the system's local processes, are crucial factors determining the outcome of probabilistic choices, even if nondeterminism is absent from the system.
In another line of research [55], we have studied the likelihood of occurrence of non-sequential runs under random durations in a stochastic Petri net setting. It remains to better understand the properties of the probability measures thus obtained, to relate them to the models in logical time, and to exploit them, e.g., in diagnosis.
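As a toy illustration of how random durations resolve concurrency, consider two concurrently enabled transitions racing with exponentially distributed firing times; the rates below are invented for this sketch and not taken from [55]. A short Monte Carlo estimate of the winning probability can be written as:

```python
import random

# Race between two concurrently enabled transitions with exponentially
# distributed firing durations -- a toy stand-in for the stochastic Petri
# net setting; the rates are hypothetical. For exponential durations, the
# probability that t1 fires before t2 is rate1 / (rate1 + rate2).

def race_probability(rate1, rate2, n=200_000):
    """Estimate Pr[t1 fires before t2] by sampling both durations n times."""
    wins = sum(
        random.expovariate(rate1) < random.expovariate(rate2)
        for _ in range(n)
    )
    return wins / n
```

For instance, with rates 2.0 and 1.0 the estimate approaches 2/3, matching the closed-form value; the same sampling scheme extends to longer runs, where each race decides which partially ordered continuation occurs.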
Distributed Markov Decision Processes
Participant: Serge Haddad.
Distributed systems featuring nondeterministic and probabilistic aspects are usually hard to analyze and, more specifically, to optimize. Furthermore, high theoretical complexity lower bounds have been established for models like partially observed Markov decision processes and distributed partially observed Markov decision processes. We believe that these negative results are consequences of the choice of models rather than of the intrinsic complexity of the problems to be solved. Thus we plan to introduce new models in which the associated optimization problems can be solved more efficiently. More precisely, we start by studying connection protocols weighted by costs, and we look for online and offline strategies that optimize the mean cost of completing the protocol. We have been cooperating on this subject with the SUMO team at Inria Rennes; in the joint work [45], we strive to synthesize, for a given MDP, a control that guarantees a specific stationary behavior, rather than, as is usually done, one that maximizes some reward.
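A minimal sketch of the kind of cost optimization involved, on an invented two-action "connection protocol" MDP (the states, costs, and success probabilities are hypothetical and not those of [45]): value iteration computes the minimal expected cost of reaching the connected state.

```python
# Toy MDP sketch (illustrative only): from state "disconnected", each action
# pays a cost and succeeds (reaching the absorbing "connected" state) with
# some probability; on failure we are back in "disconnected". Value iteration
# computes the minimal expected total cost to connect.

ACTIONS = {  # action -> (cost per attempt, success probability); invented
    "cheap_retry":  (1.0, 0.50),
    "robust_retry": (3.0, 0.95),
}

def value_iteration(actions, eps=1e-9):
    v = 0.0  # current estimate of the expected cost-to-connect
    while True:
        # Bellman update: pay the cost; with probability (1 - p) the attempt
        # fails and the expected remaining cost v is paid again.
        new_v = min(c + (1 - p) * v for c, p in actions.values())
        if abs(new_v - v) < eps:
            return new_v
        v = new_v

best = value_iteration(ACTIONS)
# Optimal here is "cheap_retry": expected cost 1.0 / 0.5 = 2.0, versus
# 3.0 / 0.95 ≈ 3.16 for "robust_retry".
```

The mean-cost criterion of the text would replace this total-cost objective, but the fixed-point structure of the computation is the same.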
Large-scale probabilistic systems
Addressing large-scale probabilistic systems requires facing state explosion, due to both the discrete part and the probabilistic part of the model. In order to deal with such systems, different approaches have been proposed:

Restricting the synchronization between components, as in queuing networks, makes it possible to express the steady-state distribution of the model by an analytical formula called a product-form [50].

Some methods that tackle the combinatorial explosion for discrete-event systems can be generalized to stochastic systems using an appropriate theory. For instance, symmetry-based methods have been generalized to stochastic systems with the help of aggregation theory [58].

Finally, simulation, which is applicable as soon as a stochastic operational semantics is defined, has been adapted to perform statistical model checking. Roughly speaking, it consists in producing a confidence interval for the probability that a random path fulfills a formula of some temporal logic [83].
We want to contribute to these three axes: (1) we are looking for product-forms related to systems where synchronizations are more involved (as in Petri nets), see [9]; (2) we want to adapt methods for discrete-event systems that require some theoretical developments in the stochastic framework; and (3) we plan to address some important limitations of statistical model checking, such as the expressiveness of the associated logic and the handling of rare events.
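The statistical model-checking idea can be sketched as follows, on an invented three-state Markov chain and the bounded reachability property "reach goal within 10 steps" (both hypothetical): sampling paths yields an estimate of the satisfaction probability together with a normal-approximation confidence interval.

```python
import random, math

# Statistical model checking sketch (toy example; chain and property are
# invented for illustration). We estimate p = Pr[a random path reaches
# "goal" within 10 steps] by simulation and report a 95% confidence interval.

CHAIN = {  # state -> list of (successor, probability)
    "init": [("init", 0.5), ("goal", 0.3), ("fail", 0.2)],
    "goal": [("goal", 1.0)],
    "fail": [("fail", 1.0)],
}

def path_satisfies(horizon=10):
    """Sample one path and check the bounded reachability property."""
    state = "init"
    for _ in range(horizon):
        if state == "goal":
            return True
        r, acc = random.random(), 0.0
        for succ, p in CHAIN[state]:
            acc += p
            if r < acc:
                state = succ
                break
    return state == "goal"

def estimate(n=100_000):
    hits = sum(path_satisfies() for _ in range(n))
    p_hat = hits / n
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # normal-approx 95% CI
    return p_hat, (p_hat - half, p_hat + half)

p_hat, (lo, hi) = estimate(20_000)
```

For this toy chain the exact value is 0.3 · (1 − 0.5¹⁰)/0.5 ≈ 0.599; the CI half-width shrinks as 1/√n, which is precisely why rare events (tiny p) are hard for this method, as noted in point (3) above.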
Real-time distributed systems
Nowadays, software systems largely depend on complex timing constraints and usually consist of many interacting local components. Railway crossings, traffic control units, mobile phones, computer servers, and many other safety-critical systems are subject to particular quality standards. It is therefore becoming increasingly important to look at networks of timed systems, which allow real-time systems to operate in a distributed manner.
Timed automata are a well-studied formalism for describing reactive systems that come with timing constraints. For modeling distributed real-time systems, networks of timed automata have been considered, where the local clocks of the processes usually evolve at the same rate [74], [56]. It is, however, not always adequate to assume that distributed components of a system obey a global time. Actually, there is generally no reason to assume that different timed systems in a network refer to the same time or evolve at the same rate. Each component is rather governed by local influences such as temperature and workload.
Implementation of Real-Time Concurrent Systems
Participants: Thomas Chatain, Stefan Haar, Serge Haddad.
This was one of the tasks of the ANR ImpRo.
Formal models for real-time systems, like timed automata and time Petri nets, have been extensively studied and have proved their interest for the verification of real-time systems. On the other hand, using these models as specifications for designing real-time systems raises some difficulties. One of them comes from the fact that real-time constraints introduce artifacts, because of which some syntactically correct models have a formal semantics that is clearly unrealistic. One famous situation is the case of Zeno executions, where the formal semantics allows the system to perform infinitely many actions in finite time. But there are other problems, and some of them are related to the distributed nature of the system. These are the ones we address here.
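The Zeno phenomenon can be made concrete with a classic schematic example (not tied to any specific model in the text): a self-loop whose n-th firing occurs after a delay of 2⁻ⁿ time units performs unboundedly many actions while total elapsed time never reaches 1.

```python
# Classic schematic Zeno run (illustrative): the n-th action of a self-loop
# fires after a delay of 2**-n time units, so the number of actions is
# unbounded while the total elapsed time stays strictly below 1.

def elapsed_after(n_actions):
    """Total time elapsed after n_actions firings of the loop."""
    return sum(2.0 ** -k for k in range(1, n_actions + 1))

# elapsed_after(10) = 1 - 2**-10 ≈ 0.999; the sequence is bounded by 1, so
# the formal semantics admits infinitely many actions within one time unit,
# although no real device could implement such a behavior.
```

Requirements such as non-Zenoness (every infinite run must take unbounded time) are exactly the kind of behavioral sanity condition mentioned below.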
One approach to implementability problems is to formalize either syntactical or behavioral requirements about what should be considered a reasonable model, and to reject the other models. Another approach is to adapt the formal semantics so that only realistic behaviors are considered.
These techniques are prerequisites for dealing with the problem of the implementability of models. Indeed, implementing a model may be possible at the cost of some transformation which makes it suitable for the target device. Moreover, these transformations may be of interest to the designer, who can now use high-level features in a model of a system or protocol and rely on the transformation to make it implementable.
We aim at formalizing and automating translations that preserve both the timed semantics and the concurrent semantics. This effort is crucial for extending concurrency-oriented methods for logical time, in particular for exploiting partial-order properties. In fact, validation and management, in a broad sense, of distributed systems is not realistic in general without understanding and control of their real-time-dependent features; the link between real-time and logical-time behaviors is thus crucial for many aspects of MExICo's work.
Weighted Automata and Weighted Logics
Participants: Benedikt Bollig, Paul Gastin.
Time and probability are only two facets of quantitative phenomena. A generic concept for adding weights to qualitative systems is provided by the theory of weighted automata [43]. They allow one to treat probabilistic or reward models in a unified framework. Unlike finite automata, which are based on the Boolean semiring, weighted automata build on more general structures such as the natural or real numbers (equipped with the usual addition and multiplication) or the probabilistic semiring. Hence, a weighted automaton associates with every possible behavior a weight, beyond the usual Boolean classification of “acceptance” or “non-acceptance”. Weighted automata have given rise to a well-established theory and come, e.g., with a characterization in terms of rational expressions, which generalizes Kleene's famous theorem from the unweighted setting. Equipped with a solid theoretical basis, weighted automata have found their way into numerous application areas such as natural language processing, speech recognition, and digital image compression.
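To make the semiring view concrete, here is a small sketch (the two-state automata and their weights are invented for illustration): the weight of a word is an initial-vector / transition-matrix / final-vector product over a pluggable semiring, instantiated once probabilistically and once with the tropical (min, +) semiring.

```python
from math import inf

# Weighted-automaton sketch over a generic semiring (illustrative automata).
# The weight of a word generalizes Boolean acceptance of classical finite
# automata: "or"/"and" are replaced by the semiring's addition/multiplication.

class Semiring:
    def __init__(self, plus, times, zero, one):
        self.plus, self.times, self.zero, self.one = plus, times, zero, one

PROB = Semiring(lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)
TROPICAL = Semiring(min, lambda a, b: a + b, inf, 0.0)  # (min, +): costs

def _sum(sr, xs):
    acc = sr.zero
    for x in xs:
        acc = sr.plus(acc, x)
    return acc

def word_weight(sr, init, trans, final, word):
    """Weight of word a1...an: semiring product init * M(a1) * ... * final."""
    vec = list(init)
    for letter in word:
        m = trans[letter]  # transition matrix for this letter
        vec = [_sum(sr, (sr.times(vec[i], m[i][j]) for i in range(len(vec))))
               for j in range(len(m[0]))]
    return _sum(sr, (sr.times(vec[i], final[i]) for i in range(len(vec))))

# Probabilistic reading: probability of being in state 1 after reading "aa".
p = word_weight(PROB, [1.0, 0.0], {"a": [[0.5, 0.5], [0.0, 1.0]]},
                [0.0, 1.0], "aa")
# Tropical reading: cheapest two-step path from state 0 to state 1.
c = word_weight(TROPICAL, [0.0, inf], {"a": [[1.0, 2.0], [inf, 0.0]]},
                [inf, 0.0], "aa")
```

Only the semiring object changes between the two readings; this is the unification the text refers to, and it is what makes a single Kleene-style theory cover probabilities, costs, and rewards alike.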
What is still missing in the theory of weighted automata are satisfactory connections with verification-related issues such as (temporal) logic and bisimulation, which could lead to a general approach to the corresponding satisfiability and model-checking problems. A first step towards a more satisfactory theory of weighted systems was taken in [54]. That paper, however, does not give definite answers to all the aforementioned problems; it identifies directions for future research that we will be tackling.