The project is located at the LSV, a joint INRIA-CNRS-ENSC lab at ENS Cachan.

In an increasingly networked world, the reliability of applications becomes ever more critical as the number of users of communication systems, web services, transportation, etc. grows steadily. Management of networked systems, in a very general sense of the term, is therefore a crucial but also difficult task.

*MExICo* strives to take advantage of distribution by orchestrating cooperation between different agents that observe local subsystems and interact in a localized fashion.

The need for applying formal methods in the analysis and management of complex systems has long been recognized. It is with much less unanimity that the scientific community embraces methods based on asynchronous and distributed models. Centralized and sequential modeling still prevails.

However, we observe that crucial applications have increasing numbers of users, and that networks providing services grow fast both in the number of participants and in physical size and degree of spatial distribution. Moreover, traditional *isolated* and *proprietary* software products for local systems are no longer typical of emerging applications.

In contrast to traditional centralized and sequential machinery for which purely functional specifications are efficient, we have to account for applications being provided from diverse and non-coordinated sources. Their distribution (e.g. over the Web) must change the way we verify and manage them. In particular, one cannot ignore the impact of quantitative features such as delays or failure likelihoods on the functionalities of composite services in distributed systems.

We thus identify three main characteristics of complex distributed systems that constitute research challenges:

*Concurrency* of behavior;

*Interaction* of diverse and semi-transparent components; and

management of *Quantitative* aspects of behavior.

The increasing size and the networked nature of communication systems, controls, distributed services, etc. confront us with an ever higher degree of parallelism between local processes.
This field of application for our work includes telecommunication systems and composite web services. The challenge is to provide sound theoretical foundations and efficient algorithms for
management of such systems, ranging from controller synthesis and fault diagnosis to integration and adaptation. While these tasks have received considerable attention in the
*sequential* setting, managing *non-sequential* behavior requires profound modifications of existing approaches, and often the development of new approaches altogether. We see concurrency in distributed systems as an opportunity rather than a nuisance. Our goal is to *exploit* asynchronicity and distribution as an advantage. Clever use of adequate models, in particular *partial order semantics* (ranging from Mazurkiewicz traces to event structures to MSCs), actually helps in practice. In fact, the partial order vision allows us to make causal precedence relations explicit, and to perform diagnosis and test for the dependency between events. This is a conceptual advantage that interleaving-based approaches cannot match. The two key features of our work will be *(i)* the exploitation of concurrency by using asynchronous models with partial order semantics, and *(ii)* the distribution of the agents performing management tasks.

Systems and services exhibit non-trivial *interaction* between specialized and heterogeneous components. A coordinated interplay of several components is required; this is challenging since each of them has only a limited, partial view of the system's configuration. We refer to this problem as *distributed synthesis* or *distributed control*. An aggravating factor is that the structure of a component might be semi-transparent, which requires a form of *grey box management*.

Besides the logical functionalities of programs, the *quantitative* aspects of component behavior and interaction play an increasingly important role.

*Real-time* properties cannot be neglected even if time is not an explicit functional issue, since transmission delays, parallelism, etc. can trigger time-outs, and thus change even the logical course of processes. Again, this phenomenon arises in telecommunications and web services, but also in transport systems.

In the same contexts, *probabilities* need to be taken into account, for many diverse reasons, such as unpredictable functionalities, or because the outcome of a computation may be governed by race conditions.

Last but not least, constraints on *cost* cannot be ignored, be it in terms of money or of any other limited resource, such as memory space or available CPU time.

The article *"Synthesis and Analysis of Product-form Petri Nets"* by Serge Haddad, Jean Mairesse and Hoang-Thach Nguyen received the *Best Paper Award* at the *International Conference on Theory and Application of Petri Nets (ICATPN) 2011* in Newcastle, UK.

For a large Markovian model, a "product form" is an explicit description of the steady-state behaviour, which is otherwise generally intractable. First introduced for queueing networks, the concept has been adapted to Markovian Petri nets. Here we address three relevant issues for product-form Petri nets which were left fully or partially open: (1) we provide a sound and complete set of rules for synthesis; (2) we characterise the exact complexity of classical problems like reachability; and (3) we introduce a new subclass for which the normalising constant (a crucial value for the product-form expression) can be efficiently computed.
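
For illustration, the normalising constant of a closed product-form network with single-server stations can be computed by Buzen's convolution algorithm in O(M·N) time instead of enumerating all states. A minimal sketch (the visit ratios `rhos` and population `N` are invented inputs, not taken from the paper):

```python
from itertools import product

def normalizing_constant(rhos, N):
    """Buzen's convolution algorithm: G(N) is the sum, over all states
    (n_1, ..., n_M) with n_1 + ... + n_M = N, of prod_i rhos[i]**n_i,
    computed in O(M*N) by adding one station at a time."""
    g = [1.0] + [0.0] * N          # G(n, 0 stations): 1 if n == 0 else 0
    for rho in rhos:               # convolve in station after station
        for n in range(1, N + 1):
            g[n] += rho * g[n - 1]
    return g[N]

def normalizing_constant_bruteforce(rhos, N):
    """Direct state enumeration, for cross-checking small instances."""
    total = 0.0
    for ns in product(range(N + 1), repeat=len(rhos)):
        if sum(ns) == N:
            term = 1.0
            for rho, n in zip(rhos, ns):
                term *= rho ** n
            total += term
    return total
```

The steady-state probability of a state (n_1, ..., n_M) is then the product of the rho_i^{n_i} divided by G(N).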

*Concurrency*: property of systems allowing some interacting processes to be executed in parallel.

*Diagnosis*: the process of deducing, from a partial observation of a system, aspects of the internal states or events of that system; in particular, *fault diagnosis* aims at determining whether or not some non-observable fault event has occurred.

*Testing*: feeding dedicated input into an implemented system and observing the resulting output, so as to check the system's conformance with a specification.

It is well known that, whatever the intended form of analysis or control, a *global* view of the system state leads to overwhelming numbers of states and transitions, thus slowing down algorithms that need to explore the state space. Worse yet, it often blurs the mechanics that are at work rather than exhibiting them. Conversely, respecting concurrency relations avoids exhaustive enumeration of interleavings. It allows us to focus on 'essential' properties of non-sequential processes, which are expressible with causal precedence relations. These precedence relations are usually called causal (partial) orders. Concurrency is the explicit absence of such a precedence between actions that do not have to wait for one another. Both causal orders and concurrency are in fact essential elements of a specification. This is especially true when the specification is constructed in a distributed and modular way. Making these ordering relations explicit requires leaving the framework of state/interleaving-based semantics. Therefore, we need to develop new dedicated algorithms for tasks such as conformance testing, fault diagnosis, or control for distributed discrete systems. Existing solutions for these problems often rely on centralized sequential models which do not scale up well.
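
As a toy illustration of these causal orders, the partial order of a Mazurkiewicz trace can be recovered from any of its linearizations once the independence relation on actions is known; a minimal sketch (the alphabet and independence relation are invented for the example):

```python
def causal_order(word, independent):
    """Causal partial order of the trace of `word`: occurrence i
    precedes occurrence j iff i < j in the word and the two letters
    are dependent, closed under transitivity.  `independent` is a
    symmetric relation given as a set of letter pairs."""
    n = len(word)
    before = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if (word[i], word[j]) not in independent and \
               (word[j], word[i]) not in independent:
                before[i][j] = True        # dependent letters stay ordered
    for k in range(n):                     # transitive closure
        for i in range(n):
            for j in range(n):
                if before[i][k] and before[k][j]:
                    before[i][j] = True
    return {(i, j) for i in range(n) for j in range(n) if before[i][j]}
```

With `a` and `b` independent, `causal_order("abab", {("a", "b")})` orders the two `a`-events and the two `b`-events among themselves, while events across the two processes remain concurrent: exactly the information an interleaving hides.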

*Fault diagnosis* for discrete event systems is a crucial task in automatic control. Our focus is on *event-oriented* (as opposed to *state-oriented*) model-based diagnosis, asking, e.g., the following questions:

Given a (potentially large) *alarm pattern* formed of observations, what are the possible *fault scenarios* in the system that *explain* the pattern?

Based on the observations, can we deduce whether or not a certain (invisible) fault has actually occurred?

Model-based diagnosis starts from a discrete event model of the observed system (or rather, of its relevant aspects, such as possible fault propagations, abstracting away other dimensions). From this model, an extraction or unfolding process, guided by the observation, recursively produces the explanation candidates.
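
On a finite-state model, the principle can be sketched by brute force: enumerate the runs whose observable projection matches the alarm pattern and check whether every, or no, explanation contains the fault. The model below is a hypothetical example, and real implementations work on unfoldings rather than explicit run enumeration:

```python
def diagnose(trans, init, pattern, fault="f", bound=12):
    """Event-oriented model-based diagnosis, brute-force version.
    trans: dict state -> list of (label, successor); labels starting
    with 'u' and the fault label are unobservable.  Explores runs of
    length <= bound whose observable projection equals `pattern` and
    reports whether the fault occurred in all / none / some of them."""
    verdicts = set()
    stack = [(init, 0, False, 0)]      # state, matched, fault seen, steps
    while stack:
        s, k, seen, steps = stack.pop()
        if k == len(pattern):          # the whole pattern is explained
            verdicts.add(seen)
            continue
        if steps == bound:
            continue
        for label, t in trans.get(s, []):
            if label == fault or label.startswith("u"):
                stack.append((t, k, seen or label == fault, steps + 1))
            elif label == pattern[k]:  # next expected observation
                stack.append((t, k + 1, seen, steps + 1))
    if verdicts == {True}:
        return "fault"
    if verdicts == {False}:
        return "no-fault"
    return "ambiguous" if verdicts else "no-explanation"

# Invented model: after alarm "a", either the invisible fault "f"
# happens and "b" is emitted, or a silent move "u1" leads to "c".
MODEL = {"s0": [("a", "s1")],
         "s1": [("f", "s2"), ("u1", "s3")],
         "s2": [("b", "s4")],
         "s3": [("c", "s5")]}
```

Observing `["a", "b"]` forces the fault in every explanation, while `["a", "c"]` excludes it.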

In asynchronous partial-order based diagnosis with Petri nets, one unfolds the *labelled product* of a Petri net model and the observed alarm pattern; this yields those partial-order executions *(configurations)* of the model that explain *exactly* the observations.

Diagnosis algorithms have to operate in contexts with low observability, i.e., in systems where many events are invisible to the supervisor. Checking *observability* and *diagnosability* for the supervised systems is therefore a crucial and non-trivial task in its own right. Analysis of the relational structure of occurrence nets allows us to check whether the system exhibits sufficient visibility to allow diagnosis. Developing efficient methods both for *diagnosability checking* under concurrency and for the *diagnosis* itself in distributed, composite and asynchronous systems is an important field for *MExICo*.

Distributed computation of unfoldings allows one to factor the unfolding of the global system into smaller *local* unfoldings, computed by local supervisors associated with sub-networks and communicating among each other. Elements of a methodology for distributed computation of unfoldings between several supervisors, underwritten by algebraic properties of the category of Petri nets, have been developed. Generalizations, in particular to graph grammars, are still to be done.

Computing diagnosis in a distributed way is only one aspect of a much vaster topic, that of *distributed diagnosis*. In fact, it involves a more abstract and often indirect reasoning to conclude whether or not some given invisible fault has occurred. Combination of local scenarios is in general not sufficient: the global system may have behaviors that do not reveal themselves as faulty (or, dually, non-faulty) on any local supervisor's domain. Rather, the local diagnosers have to join all *information* that is available to them locally, and then deduce collectively further information from the combination of their views. In particular, even the *absence* of fault evidence on all peers may allow them to deduce fault occurrence jointly. Automating such procedures for the supervision and management of distributed and locally monitored asynchronous systems is a mid-term goal of *MExICo*.

Assuring the correctness of concurrent systems is notoriously difficult due to the many unforeseeable ways in which the components may interact and the resulting state-space explosion. A well-established approach to alleviate this problem is to model concurrent systems as Petri nets and analyse their unfoldings, essentially an acyclic version of the Petri net whose simpler structure permits easier analysis.

However, Petri nets are inadequate to model concurrent read accesses to the same resource. Such situations arise naturally in many circumstances, for instance in concurrent databases or in asynchronous circuits. The encoding tricks typically used to model these cases in Petri nets make the unfolding technique inefficient.

Contextual nets, which explicitly do model concurrent read accesses, address this problem. Their accurate representation of concurrency makes contextual unfoldings up to exponentially smaller in certain situations, which promises to yield more efficient analysis procedures. In order to realize such procedures, we shall study contextual nets and their properties, in particular the efficient construction and analysis of their unfoldings, and their applications in verification, diagnosis, and planning.
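
At the level of the firing rule, a contextual net only changes one thing with respect to an ordinary net: context places reached by read arcs are tested for tokens but not consumed, so several readers of the same place are genuinely concurrent. A minimal 1-safe sketch (place and transition names are invented):

```python
def enabled(marking, t):
    """A transition is a triple (preset, context, postset) of place
    sets; in a 1-safe net it is enabled iff both its preset and its
    context are marked."""
    pre, ctx, _post = t
    return pre <= marking and ctx <= marking

def fire(marking, t):
    """Consume the preset, keep the context, produce the postset."""
    pre, _ctx, post = t
    assert enabled(marking, t)
    return (marking - pre) | post

# Two processes read the shared place "db" through read arcs:
t1 = (frozenset({"p1"}), frozenset({"db"}), frozenset({"q1"}))
t2 = (frozenset({"p2"}), frozenset({"db"}), frozenset({"q2"}))
m0 = frozenset({"p1", "p2", "db"})
```

Both `t1` and `t2` are enabled at `m0`, and firing one leaves the other enabled; encoding the read arc as a consume/produce loop would instead serialize the two accesses, which is what blows up the ordinary unfolding.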

In a DIGITEO PhD project, we will study logical specification formalisms for concurrent recursive programs. With the advent of multi-core processors, the analysis and synthesis of such programs is becoming more and more important. However, it cannot be achieved without more comprehensive formal mathematical models of concurrency and parallelization. Most existing approaches have in common that they restrict attention to an over- or under-approximation of the actual program executions and do not focus on a behavioral semantics. In particular, temporal logics have not been considered. Their design and study will require combining prior work on logics for sequential recursive programs with work on concurrent finite-state programs.

The gap between specification and implementation is at the heart of research on formal testing. The general *conformance testing problem* can be defined as follows: does a given implementation conform to its specification? Testing proceeds by feeding dedicated *input streams* into the implementation and observing the resulting outputs.

In this project, we focus on distributed or asynchronous versions of the conformance testing problem. There are two main difficulties. First, due to the distributed nature of the system, it may not be possible to have a unique global observer for the outcome of a test. Hence, we may need to use *local* observers which record only *partial views* of the execution. This makes it difficult, or even impossible, to reconstruct a coherent global execution. The second difficulty is the lack of global synchronization in distributed asynchronous systems. Up to now, models were described with I/O automata having a centralized control, hence inducing global synchronizations.

Since 2006, and in particular during his sabbatical stay at the University of Ottawa, Stefan Haar has been working with Guy-Vincent Jourdan and Gregor v. Bochmann of UOttawa and Claude Jard of IRISA on asynchronous testing. In the synchronous (sequential) approach, the model is described by an I/O automaton with a centralized control and transitions labeled with individual input or output actions. This approach has known limitations when inputs and outputs are distributed over remote sites, a feature that is characteristic of, e.g., web computing. To account for concurrency in the system, they have developed asynchronous conformance testing for automata with transitions labeled with (finite) partial orders of I/O. Intuitively, this is a “big step” semantics where each step allows concurrency but the system is synchronized before the next big step. This is already an important improvement on the synchronous setting. The non-trivial challenge is now to cope with fully asynchronous specifications using models with decentralized control, such as Petri nets.

Message Sequence Charts (MSCs) provide models of behaviors of distributed systems with communicating processes. An important problem is to test whether an implementation conforms to a specification given, for instance, by an HMSC. In *local testing*, one proceeds by injecting messages into the local processes and observing, for each process separately, the local responses.

The first step that should be reached in the near future is the completion of asynchronous testing in the setting without any big-step synchronization. In parallel, work on the problems in local testing should progress sufficiently to allow us, in a mid-term perspective, to understand the relations and possible interconnections between local (i.e. distributed) and asynchronous (centralized) testing. This is the objective of the *TECSTES* project (2011-2014), funded by a DIGITEO *DIM/LSC* grant, which involves Hernán Ponce de Léon and Stefan Haar of *MExICo*, and Delphine Longuet at LRI, University Paris-Sud/Orsay.

The mid- to long-term goal (perhaps not yet achievable in this four-year term) is the comprehensive formalization of testing and testability in asynchronous systems with distributed architecture and test protocols.

Systems and services exhibit non-trivial *interaction* between specialized and heterogeneous components. This interplay is challenging for several reasons. On the one hand, a coordinated interplay of several components is required, though each has only a limited, partial view of the system's configuration. We refer to this problem as *distributed synthesis* or *distributed control*. An aggravating factor is that the structure of a component might be semi-transparent, which requires a form of *grey box management*.

Interaction, one of the main characteristics of the systems under consideration, often involves an environment that is not under the control of the cooperating services. To achieve a common goal, the services need to agree upon a strategy that allows them to react appropriately regardless of the interactions with the environment. Clearly, the notions of opponents and strategies fall within *game theory*, which is naturally one of our main tools in exploring interaction. We will apply to our problems techniques and results developed in the domains of distributed games and of games with partial information. We will also consider new problems on games that arise from our applications.

Program synthesis, as introduced by Church, aims at deriving an implementation directly from a specification, so that the implementation is correct by design. When the implementation is already at hand but choices remain to be resolved at run time, the problem becomes controller synthesis. Both program and controller synthesis have been extensively studied for sequential systems. In a distributed setting, we need to synthesize a distributed program or distributed controllers that interact locally with the system components. The main difficulty comes from the fact that the local controllers/programs have only a partial view of the entire system. This is also an old problem, largely considered undecidable in most settings.

Actually, the main sources of undecidability come from the fact that this problem was addressed in a synchronous setting using global runs viewed as sequences. In a truly distributed system where interactions are asynchronous, we have recently obtained encouraging decidability results. This is clear evidence that concurrency may be exploited to obtain positive results. It is essential to specify expected properties directly in terms of the causality revealed by partial order models of executions (MSCs or Mazurkiewicz traces). We intend to develop this line of research with the ambitious aim of obtaining decidability for all natural systems and specifications. More precisely, we will identify natural hypotheses, both on the architecture of our distributed system and on the specifications, under which the distributed program/controller synthesis problem is decidable. This should open the way to important applications, e.g., for distributed control of embedded systems.
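
As a baseline for the decidable cases, the finite-state, perfect-information variant of controller synthesis reduces to solving a two-player safety game by a simple fixed-point computation; a minimal sketch on an invented four-state turn-based arena (the distributed, partial-information settings discussed above are substantially harder):

```python
def safe_controller(states, edges, controllable, bad):
    """Solve a turn-based safety game: compute the states from which
    the controller (owner of `controllable` states) can avoid `bad`
    forever, together with a memoryless winning strategy.
    edges: dict state -> list of successors; deadlocks count as safe."""
    losing = set(bad)
    changed = True
    while changed:                      # fixed point of the attractor to bad
        changed = False
        for s in states:
            if s in losing:
                continue
            succ = edges.get(s, [])
            if s in controllable:       # controller loses if forced into bad
                if succ and all(t in losing for t in succ):
                    losing.add(s); changed = True
            else:                       # environment wins with one bad move
                if any(t in losing for t in succ):
                    losing.add(s); changed = True
    winning = set(states) - losing
    strategy = {s: next(t for t in edges[s] if t in winning)
                for s in controllable & winning if edges.get(s)}
    return winning, strategy

# Invented arena: the controller owns state 0; state 3 is unsafe;
# states 1 and 2 belong to the environment.
STATES = {0, 1, 2, 3}
EDGES = {0: [1, 3], 1: [0], 2: [3], 3: []}
```

Here the controller wins from 0 by always choosing successor 1; state 2 is losing because the environment can move straight into the bad state.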


Contrary to mainframe systems or monolithic applications of the past, we are experiencing and using an increasing number of services that are performed not by one provider but rather by the interaction and cooperation of many specialized components. As these components come from different providers, one can no longer assume all of their internal technologies to be known (as is the case with proprietary technology). Thus, in order to compose, e.g., orchestrated services over the web, to determine violations of specifications or contracts, or to adapt existing services to new situations, one needs to analyze the interaction behavior of *boxes* that are known only through their public interfaces. Owing to their semi-transparent, semi-opaque nature, we shall refer to them as **grey boxes**. While the concrete nature of these boxes can range from vehicles in a highway section to hotel reservation systems, the tasks of *grey box management* have universal features allowing for generalized approaches with formal methods. Two central issues emerge:

Abstraction: from the designer's point of view, there is a need for a trade-off between transparency (no abstraction), in order to integrate the box in different contexts, and opacity (full abstraction), for security reasons.

Adaptation: since a grey box gives only a partial view of the behavior of the component, the design of an adapter is possible even if the component is not immediately usable in some context. The goal is thus the synthesis of such an adapter from a formal specification of the component and the environment.

Besides the logical functionalities of programs, the *quantitative* aspects of component behavior and interaction play an increasingly important role.

*Real-time* properties cannot be neglected even if time is not an explicit functional issue, since transmission delays, parallelism, etc. can trigger time-outs, and thus change even the logical course of processes. Again, this phenomenon arises in telecommunications and web services, but also in transport systems.

In the same contexts, *probabilities* need to be taken into account, for many diverse reasons, such as unpredictable functionalities, or because the outcome of a computation may be governed by race conditions.

Last but not least, constraints on *cost* cannot be ignored, be it in terms of money or of any other limited resource, such as memory space or available CPU time.

Traditional mainframe systems were proprietary and (essentially) localized; therefore, the impact of delays, unforeseen failures, etc. could be considered under the control of the system manager. It was thus natural, in verification and control of systems, to focus entirely on *functional* behavior.

With the increase in size of computing systems and the growing degree of compositionality and distribution, quantitative factors enter the stage:

calling remote services and transmitting data over the web creates *delays*;

remote or non-proprietary components are not “deterministic”, in the sense that their behavior is uncertain.

*Time* and *probability* are thus parameters that management of distributed systems must be able to handle; along with both, the *cost* of operations is often subject to restrictions, or its minimization is at least desired. The mathematical treatment of these features in distributed systems is an important challenge which *MExICo* is addressing; the following describes our activities concerning probabilistic and timed systems. Note that cost optimization is not a current activity but enters the picture in several intended activities.

Practical fault diagnosis requires selecting explanations of *maximal likelihood*; this leads to the question of what the probability of a given partially ordered execution is. In work with Benveniste et al., we presented a model of stochastic processes whose trajectories are partially ordered, based on local branching in Petri net unfoldings; an alternative and complementary model based on Markov fields has been developed, which takes a different view on the semantics and overcomes the first model's restrictions on applicability.

Both approaches abstract away from real-time progress and randomize choices in *logical* time. On the other hand, the relative speeds of the system's local processes, and thus, indirectly, their real-time behavior, are crucial factors determining the outcome of probabilistic choices, even if non-determinism is absent from the system.

Recently, we started a new line of research with Anne Bouillard, Sidney Rosario, and Albert Benveniste in the DistribCom team at INRIA Rennes, studying the likelihood of occurrence of non-sequential runs under random durations in a stochastic Petri net setting.

Once the properties of the probability measures thus obtained are understood, it will be interesting to relate them with the two above models in logical time, and understand their differences. Another mid-term goal, in parallel, is the transfer to diagnosis with possible cooperation with René Boel's group in Ghent/Belgium.

Distributed systems featuring non-deterministic and probabilistic aspects are usually hard to analyze and, more specifically, to optimize. Furthermore, high complexity theoretical lower bounds have been established for models like partially observed Markovian decision processes and distributed partially observed Markovian decision processes. We believe that these negative results are consequences of the choice of the models rather than the intrinsic complexity of problems to be solved. Thus we plan to introduce new models in which the associated optimization problems can be solved in a more efficient way. More precisely, we start by studying connection protocols weighted by costs and we look for online and offline strategies for optimizing the mean cost to achieve the protocol. We cooperate on this subject with Eric Fabre in the DistribCom team at INRIA Rennes, in the context of the DISC project.
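
For the fully observed, centralized case, optimizing expected cost is classical dynamic programming; a minimal value-iteration sketch on an invented two-state example (the partially observed and distributed variants mentioned above are precisely the hard cases):

```python
def value_iteration(states, actions, cost, trans, gamma=1.0, eps=1e-9):
    """Expected-cost value iteration for a finite MDP.
    actions[s]: available actions; cost[(s, a)]: immediate cost;
    trans[(s, a)]: dict successor -> probability.  With gamma = 1 this
    assumes an absorbing zero-cost goal reachable under some policy."""
    V = {s: 0.0 for s in states}
    while True:
        Vn = {s: min(cost[(s, a)]
                     + gamma * sum(p * V[t] for t, p in trans[(s, a)].items())
                     for a in actions[s])
              for s in states}
        if max(abs(Vn[s] - V[s]) for s in states) < eps:
            break
        V = Vn
    policy = {s: min(actions[s],
                     key=lambda a: cost[(s, a)]
                     + gamma * sum(p * V[t] for t, p in trans[(s, a)].items()))
              for s in states}
    return V, policy

# Invented protocol example: a "fast" action reaches the goal surely
# at cost 2; a cheaper "slow" action (cost 0.6) succeeds half the time.
STATES = ["s", "goal"]
ACTIONS = {"s": ["fast", "slow"], "goal": ["stay"]}
COST = {("s", "fast"): 2.0, ("s", "slow"): 0.6, ("goal", "stay"): 0.0}
TRANS = {("s", "fast"): {"goal": 1.0},
         ("s", "slow"): {"s": 0.5, "goal": 0.5},
         ("goal", "stay"): {"goal": 1.0}}
```

The optimal mean cost from `s` solves V = min(2, 0.6 + 0.5 V), i.e. V = 1.2, so retrying the cheap action is the better strategy.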

Addressing large-scale probabilistic systems requires facing state explosion, due both to the discrete part and to the probabilistic part of the model. In order to deal with such systems, different approaches have been proposed:

Restricting the synchronization between the components, as in queueing networks, allows one to express the steady-state distribution of the model by an analytical formula called a product form.

Some methods that tackle the combinatorial explosion for discrete-event systems can be generalized to stochastic systems using an appropriate theory. For instance, symmetry-based methods have been generalized to stochastic systems with the help of aggregation theory.

Finally, simulation, which works as soon as a stochastic operational semantics is defined, has been adapted to perform statistical model checking. Roughly speaking, this consists in producing a confidence interval for the probability that a random path fulfills a formula of some temporal logic.

We want to contribute to these three axes: (1) we are looking for product forms related to systems where synchronizations are more involved (as in Petri nets); (2) we want to adapt methods for discrete-event systems that require some theoretical developments in the stochastic framework; and (3) we plan to address some important limitations of statistical model checking, such as the expressiveness of the associated logic and the handling of rare events.
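
The simulation-based approach can be sketched in a few lines: the Chernoff-Hoeffding bound fixes, a priori, how many sample runs guarantee a given precision and confidence. In the sketch below, the coin-flip "property" is a stand-in for checking a temporal-logic formula on a random path of the model:

```python
import math
import random

def smc_estimate(run_satisfies, eps, delta, rng):
    """Statistical model checking, fixed-sample-size version: draw
    n = ceil(ln(2/delta) / (2 * eps**2)) independent runs, so that the
    empirical frequency is within eps of the true probability with
    confidence at least 1 - delta (Chernoff-Hoeffding bound)."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(1 for _ in range(n) if run_satisfies(rng))
    return hits / n, n

# Stand-in for "a random path satisfies the formula": a fair coin.
estimate, n = smc_estimate(lambda rng: rng.random() < 0.5,
                           eps=0.05, delta=0.05, rng=random.Random(42))
```

Note that `n` grows as 1/eps², which is exactly why rare events (tiny probabilities requiring tiny `eps`) are a known limitation of the plain method.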

Nowadays, software systems largely depend on complex timing constraints and usually consist of many interacting local components. Among them, railway crossings, traffic control units, mobile phones, computer servers, and many more safety-critical systems are subject to particular quality standards. It is therefore becoming increasingly important to look at networks of timed systems, which allow real-time systems to operate in a distributed manner.

Timed automata are a well-studied formalism to describe reactive systems that come with timing constraints. For modeling distributed real-time systems, networks of timed automata have been considered, where the local clocks of the processes usually evolve at the same rate. It is, however, not always adequate to assume that distributed components of a system obey a global time. Actually, there is generally no reason to assume that different timed systems in the networks refer to the same time or evolve at the same rate. Any component is rather determined by local influences such as temperature and workload.

A first step towards formal models of distributed timed systems with independently evolving clocks has been made. As the precise evolution of local clock rates is often too complex or even unknown, the authors study different semantics of a given system: the *existential semantics* exhibits all those behaviors that are possible under *some* time evolution; the *universal semantics* captures only those behaviors that are possible under *all* time evolutions. While emptiness and universality of the universal semantics are in general undecidable, the existential semantics is always regular and offers a way to check a given system against safety properties. A decidable under-approximation of the universal semantics, called the *reactive semantics*, is introduced to check a system for liveness properties. It assumes the existence of a *global* controller that allows the system to react upon local time evolutions. A short-term goal is to investigate a *distributed* reactive semantics where controllers are located at processes and only have local views of the system behaviors.

Several questions, however, have not yet been tackled in this previous work or remain open. In particular, we plan to exploit the power of synchronization via local clocks and to investigate the *synthesis problem*: for which (global) specifications can a distributed timed system with independently evolving local clocks be synthesized?

If, on the other hand, a system is already given and complemented with a specification, then one is usually interested in controlling the system in such a way that it meets its specification. The interaction between the actual *system* and the *environment* (i.e., the local time evolution) can now be understood as a 2-player game: the system's goal is to guarantee a behavior that conforms with the specification, while the environment aims at violating the specification. Thus, building a controller for a system actually amounts to computing winning strategies in imperfect-information games with infinitely many states, where the unknown or unpredictable evolution of time reflects the imperfect information of the environment. Only few efforts have been made to tackle these kinds of games. One reason might be that, in the presence of imperfect information and infinitely many states, one is quickly confronted with undecidability of basic decision problems.

This is one of the tasks of the ANR ImpRo.

The objective is to provide formal guarantees on the implementation of real-time distributed systems, despite the semantic differences between the model and the code. We consider two kinds of timed models: time Petri nets and networks of timed automata.

Time Petri nets allow the designer to make the concurrent parts of the system explicit, without yet deciding how to localize the different actions on the different components. In that sense, TPNs are more abstract than networks of timed automata, which can be seen as possible (ideal) distributed implementations. This raises the question of the semantic comparison of these two models in the light of preserving the maximum of concurrency.

In order to implement our models on distributed architectures, we need a way to evaluate how much the implementation preserves the concurrency that is described in the model. For this we must be able to identify concurrency in the behavior of the models. This is done by equipping the models with a concurrent semantics (unfoldings), allowing us to consider the behaviors as partial orders.

For instance, we would like to be able to transform a time Petri net into a network of timed automata, which is closer to the implementation since the processes are well identified. But we require that this transformation preserve concurrency. Yet the first works on formal comparisons of the expressivity of these models do not consider preservation of concurrency.

In contrast, we aim at formalizing and automating translations that preserve both the timed semantics and the concurrent semantics. This effort is crucial for extending concurrency-oriented methods for logical time, in particular for exploiting partial order properties. In fact, validation and management (in a broad sense) of distributed systems is not realistic *in general* without understanding and control of their real-time-dependent features; the link between real-time and logical-time behaviors is thus crucial for many aspects of *MExICo*'s work.

Time and probability are only two facets of quantitative phenomena. A generic concept of adding weights to qualitative systems is provided by the theory of weighted automata. They allow one to treat probabilistic or reward models in a unified framework. Unlike finite automata, which are based on the Boolean semiring, weighted automata build on more general structures such as the natural or real numbers (equipped with the usual addition and multiplication) or the probabilistic semiring. Hence, a weighted automaton associates with any possible behavior a weight beyond the usual Boolean classification of “acceptance” or “non-acceptance”. Weighted automata have produced a well-established theory and come, e.g., with a characterization in terms of rational expressions, which generalizes the famous theorem of Kleene in the unweighted setting. Equipped with a solid theoretical basis, weighted automata have finally found their way into numerous application areas such as natural language processing and speech recognition, or digital image compression.
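
The semiring view is easy to make concrete: the weight of a word is computed by one vector-matrix product per letter, with the semiring operations passed in as parameters. A minimal sketch, instantiated with the tropical (min, +) semiring on an invented one-state "cost" automaton:

```python
def _fold(plus, items, zero):
    """Sum a sequence in the semiring, starting from its zero."""
    acc = zero
    for x in items:
        acc = plus(acc, x)
    return acc

def word_weight(word, init, mats, final, plus, times, zero):
    """Weight of `word` in a weighted automaton over a semiring
    (plus, times, zero): the sum, over all runs, of the products of
    the traversed edge weights.  init/final are weight vectors and
    mats[a] is the transition matrix of letter a."""
    vec = list(init)
    for a in word:
        M = mats[a]
        vec = [  # vector-matrix product in the semiring
            _fold(plus, (times(vec[i], M[i][j]) for i in range(len(vec))), zero)
            for j in range(len(vec))
        ]
    return _fold(plus, (times(v, f) for v, f in zip(vec, final)), zero)

# Tropical semiring (min, +): weights behave like costs, and the
# weight of a word is the cost of its cheapest accepting run.
INF = float("inf")
TROPICAL = dict(plus=min, times=lambda x, y: x + y, zero=INF)
mats = {"a": [[2.0]], "b": [[3.0]]}   # letter a costs 2, letter b costs 3
```

Swapping in Boolean `or`/`and` with zero `False` recovers plain acceptance, which is the sense in which weighted automata generalize finite automata.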

What is still missing in the theory of weighted automata are satisfactory connections with verification-related issues such as (temporal) logic and bisimulation, which could lead to a general approach to the corresponding satisfiability and model-checking problems. A first step towards a more satisfactory theory of weighted systems has been taken in recent work. That work, however, does not give final solutions to all of the aforementioned problems; it identifies directions for future research that we will be tackling.

*MExICo*'s research is motivated by problems of system management in several domains:

In the domain of service-oriented computing, it is often necessary to insert a Web service into an existing orchestrated business process, e.g. to replace another component after a failure. This requires ensuring, often actively, conformance to the interaction protocol. One therefore needs to synthesize
*adapters* for every component in order to steer its interaction with the surrounding processes.

In the domain of telecommunications, the supervision of a network tends to move from out-of-band technology, with a fixed dedicated supervision infrastructure, to in-band supervision, where the supervision process uses the supervised network itself. This new setting requires revisiting the existing supervision techniques using control and diagnosis tools.

Several recent Intelligent Transport Systems projects aim at providing assistance to drivers, e.g. on (partially) automated motorways. We will focus on the modeling and analysis of collision avoidance problems in critical short sections of motorways.

This list is likely to grow over the next years as we continue our research.

In the context of traditional hard-wired communication networks, supervision structures for managing faults, configuration, provisioning etc. could be developed with a fixed infrastructure,
performing the communication between sensors, supervisors, policy enforcement points etc. over a separate network using separate hardware. This rigid,
**out-of-band** technology does not survive the passage to today's and tomorrow's services and networks. In fact, the dynamic mobility of services combined across sites and domains cannot be
captured unless the network used for supervision evolves in the same way and simultaneously, which rules out static solutions; but providing an out-of-band infrastructure that grows with the
networks to be supervised would be prohibitively expensive, if technically feasible at all.
*Heterogeneity* is the other feature of modern networks that forces a change, since different domains are unlikely to agree on pervasive third-party supervision. Rather, the providers
will keep control over the internal state and evolution of their domain, and accept only exchanges through standardized outward interfaces.

Supervision thus has to be re-invented on an
*in-band*,
*autonomous* basis: monitoring probes deployed on the web, dysfunctions on one peer node diagnosed by another peer in a network with changing configuration, enhanced supervisor and actor capacities of services, etc.
*MExICo* will work on improving the interoperability of service components through continued application of, e.g., distributed techniques for control and diagnosis.

The
*Intelligent Transport Systems*
**(ITS)** community tries to deal with the numerous challenges that arise when designing secure and reliable software dedicated to automatic transport systems.

Several recent ITS projects aim at providing assistance to drivers and deal with partially automated motorways. The community first investigated a fully automated infrastructure-and-vehicles approach (as in the PATH project) in the 1990s. That approach was then abandoned in favor of a new line of research and development activities, more centered on safety strategies ensuring properties such as collision avoidance or safety margins for assistance vehicles.

This vision relies on cooperative systems where
*“road operators, infrastructure, vehicles, their drivers and other road users will cooperate to deliver the most efficient, safe, secure and comfortable journeys”*.
Implementing such a system then follows a peer-to-peer organization where each vehicle must fully cooperate in a time-constrained and safety-critical environment.

In that context, many projects deal with safety-oriented applications based on sensors, communication devices and protocols, as well as with distributed traffic management systems involving cooperation between the infrastructure and vehicles. Thus, reliability and flexibility in the design, as well as safety, are primary issues. Such systems are even more complex to analyze than previous distributed systems. Consequently, there is a need for specific methodology and tools to design and analyze them.

We will focus on an approach for the modeling and analysis of collision avoidance problems in critical short sections of motorways, with the aim of checking whether a control strategy exists depending on the parameters (speed, safety distances, etc.). We intend to cope with the undecidability of such problems by appropriate discretizations, and with the high complexity of the resulting systems by using elaborate data structures based on decision diagrams.

Specific applications targeted by
*MExICo* include the problem of adaptation in Service-Oriented Computing (SOC). The challenge here is twofold, stemming both from the distributed nature of services (scattered over the entire web) and from their heterogeneous origins.

Web services have become the most frequently used model of component-based design and programming for business applications. Web service languages like BPEL offer useful constructs that manage, for instance, exceptions, (timed, guarded) waiting for messages, parallel execution of processes, remote service invocations, etc. Interoperability of components is based on interaction protocols associated with them, often published in public or private registries. In the framework of Web services, these protocols are called abstract processes, by contrast with business processes (i.e. services). Composition of components must be analyzed for several reasons, not least to avoid deadlocks during execution. This has led to numerous works that focus on compositional verification, substitution of one component by another, synthesis of adapters, etc., and has triggered a push towards a unifying theoretical framework.

Interoperability requires that knowledge of the interaction protocol be enough for a user or a program to interact with the component. Our previous works have shown that interaction protocols can be inherently ambiguous: no client can conduct a correct interaction with the component in every scenario. This problem is even more complex when the protocol can evolve during execution due to adaptation requirements. The composition of components also raises interesting problems: when composing optimal components (w.r.t. the number of states, for instance), the global component can be non-optimal, so one aims at reducing the global component a posteriori or, better, on the fly. Finally, the dynamic insertion of a component into a business process requires checking whether this insertion is behaviorally consistent.

We do not intend to check global properties based on a modular verification technique. Rather, given an interaction protocol per component and a global property to ensure, we want to synthesize an adapter per component such that this property is fulfilled, or to detect that no such adapters can exist. In another research direction, one can introduce the concept of the utility of a service and then optimize a system, i.e. keep the same utility value while reducing the resources (states, transitions, clocks, etc.).

**libalf** is a comprehensive, open-source library for learning finite-state automata, covering various well-known learning techniques (such as Angluin's L*).
**libalf** is highly flexible and allows for easily interchanging learning algorithms and combining domain-specific features in a plug-and-play fashion. Its modular design and its implementation in C++ make it a flexible platform for adding and engineering further efficient learning algorithms for new target models (e.g., Büchi automata).

Details on **libalf** can be found at http://

**Mole** computes, given a safe Petri net, a finite prefix of its unfolding. It is designed to be compatible with other tools, such as PEP and the Model-Checking Kit, which use the resulting unfolding for reachability checking and other analyses. The tool Mole arose out of earlier work on Petri nets. Details on Mole can be found at http://

In the context of MExICo, we have created a new tool called
**Cunf**, which is able to handle contextual nets (Petri nets with read arcs). Recent work carried out within MExICo has transformed a preliminary implementation into an efficient tool. While in principle every contextual net can be transformed into an “equivalent” Petri net and then unfolded using Mole, Cunf can take advantage of the special features of contextual nets to do the job faster. More details can be found at http://

COSMOS is a statistical model checker for the Hybrid Automata Stochastic Logic (HASL). HASL employs Linear Hybrid Automata (LHA), a generalization of Deterministic Timed Automata (DTA), to describe accepting execution paths of a Discrete Event Stochastic Process (DESP), a class of stochastic models which includes, but is not limited to, Markov chains. As a result, HASL verification turns out to be a unifying framework where sophisticated temporal reasoning is naturally blended with elaborate reward-based analysis. COSMOS takes as input a DESP (described in terms of a Generalized Stochastic Petri Net), an LHA and an expression Z representing the quantity to be estimated. It returns a confidence-interval estimate of Z. COSMOS is written in C++ and is freely available to the research community.
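
To illustrate the kind of estimation COSMOS performs, here is a minimal sketch of the Monte Carlo core of statistical model checking; the birth-death process, its parameters and the reachability property are hypothetical stand-ins for a DESP and an HASL expression, and the interval uses a simple normal approximation:

```python
# Sketch: estimate the probability that a birth-death process (a toy
# stand-in for a DESP) reaches population K before time horizon T,
# and return a 95% confidence interval on that probability.

import math
import random

def simulate_path(birth=1.0, death=0.8, K=5, T=10.0, rng=random):
    """One trajectory; returns True iff the population hits K before T."""
    t, n = 0.0, 1
    while t < T:
        rate = birth + (death if n > 0 else 0.0)
        t += rng.expovariate(rate)              # next event time (CTMC)
        if t >= T:
            return False
        if rng.random() < birth / rate:
            n += 1
        elif n > 0:
            n -= 1
        if n >= K:
            return True
    return False

def estimate(n_samples=10000, seed=42, **kwargs):
    """Monte Carlo estimate with a normal-approximation 95% CI."""
    rng = random.Random(seed)
    hits = sum(simulate_path(rng=rng, **kwargs) for _ in range(n_samples))
    p = hits / n_samples
    half = 1.96 * math.sqrt(p * (1 - p) / n_samples)
    return p - half, p + half

lo, hi = estimate()
print(f"95% CI for P(reach K before T): [{lo:.3f}, {hi:.3f}]")
```

Rare-event settings, where `p` is tiny and plain sampling almost never hits the property, are exactly where the importance-sampling extension mentioned elsewhere in this report becomes necessary.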

Details on COSMOS can be found at http://

We have introduced a mathematical model to capture the behavior of concurrent and recursive systems, and we have identified typical properties of these systems that programmers may want to verify. We have come up with a specification language powerful enough to express such properties. In fact, we give a framework by which programmers can define their own specification language depending on the specific application, as long as the semantics of the operators can be defined in monadic second-order logic. We have shown that checking whether a specified property is satisfiable, or whether a given system satisfies a property specified in such a language, is decidable with manageable complexity (doubly exponential time). The proof technique is general enough to capture the results for various other well-studied models. Our results were presented at MFCS'11.

While product-form Petri nets have been intensively studied, some important questions were left open. We have recently solved most of these open problems. We have provided a sound and complete set of rules to synthesize product-form Petri nets. We have characterized the complexity class of the standard problems (liveness, reachability and coverability). Finally, we have proposed a large subclass of product-form Petri nets for which the normalizing constant (a key quantity) can be efficiently computed. This paper obtained the outstanding paper award of the ATPN'2011 conference.

We have designed a logic, HASL, for stochastic systems that can express elaborate performance indices related to path behaviors. We have shown how it can be integrated into the design process of flexible manufacturing systems. We have developed a tool, COSMOS, for statistical model checking of HASL formulas over stochastic Petri nets with general distributions. In parallel, we have developed the first importance-sampling method for rare events that still produces a confidence interval (rather than a point estimate), and we have integrated this method into COSMOS.

We have developed a framework to efficiently solve large Markov decision problems specified by high-level Petri nets. Such a specification makes it possible to decrease the size of the MDP by analyzing the symmetries of the model. In a purely probabilistic context, we have designed two complementary methods for handling partial symmetries in stochastic high-level Petri nets and studied their efficiency on several relevant case studies.

We studied data words, i.e., strings where each position carries both a label from a finite alphabet and some values from an infinite domain. Data words are suitable to model the behavior of concurrent systems with dynamic process creation, as the infinite alphabet can be used to represent an unbounded number of process identifiers. A variety of formalisms, including logics and automata, have been studied to specify sets of data words in the context of verification. However, logics and automata that capture dynamic communicating systems were missing. We closed this gap and developed a quite general logical and automata-theoretic framework for the specification and implementation of sets of data words. On the specification side, we considered a fragment of monadic second-order (MSO) logic, which comes with a predicate to test two word positions for data equality. As a model of an implementation, we introduced class register automata. Our model combines the well-known models of register automata and class memory automata, and it indeed captures dynamic communicating automata, whose semantics can be described as a set of message sequence charts. We studied the realizability problem and showed that every formula from the existential fragment of MSO logic can be effectively translated into a class register automaton. These results were presented at CONCUR'11.
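
To give a flavor of the automata involved, here is a minimal sketch of a one-register automaton on data words; the property and the word encoding are hypothetical, and the register's nondeterministic guesses are simulated by tracking the set of storable values:

```python
# Sketch: a one-register automaton on data words, illustrating the
# data-equality tests discussed above. Each input position carries a
# letter and a data value; the (hypothetical) property checked here
# is "some later position repeats the data value of an earlier
# position labelled 'a'".

def accepts(word):
    """word: list of (letter, data) pairs."""
    # Simulate the nondeterministic register by tracking all values
    # that some run could have stored so far.
    stored = set()
    for letter, data in word:
        if data in stored:
            return True            # data equality with a stored value
        if letter == 'a':
            stored.add(data)       # guess: store this value
    return False

print(accepts([('a', 1), ('b', 2), ('b', 1)]))  # True
print(accepts([('a', 1), ('b', 2), ('a', 3)]))  # False
```

The same subset-tracking idea is what makes nonemptiness of such single-register devices manageable, even though the data domain is infinite.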

We continued our study of the verification of quantitative properties and its application to queries over XML documents. Verification of quantitative systems follows a classical three-step scheme: specification, modeling, and algorithmics. Hence, we started by exhibiting a specification language. To describe natural qualitative properties, we chose to use, as a fragment, a Boolean logic such as first-order or monadic second-order logic. We then encapsulate these properties into the quantitative formalism, allowing sum and product computations in a specified general semiring. In the word case, we obtained very strong results relating this kind of specification/computation language to the well-known weighted finite automata and to the new weighted pebble automata, which permit modeling several interesting quantitative computations over words. We extended these results to trees, in particular finite unranked trees and nested words, which are a natural model for XML documents. We published preliminary results in a research report, and we have worked on submitting these results to several conferences. Our next goal is to tackle some of the algorithmic questions that naturally arise in this context, such as satisfiability and model checking.

Contextual nets are an extension of Petri nets that, unlike ordinary Petri nets, faithfully models concurrent read accesses to shared resources. This is interesting not only from a semantic but also from an algorithmic point of view, as the analysis of such nets can better exploit the fact that concurrent reads are independent.
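
The firing rule with read arcs can be sketched as follows; the two-reader net below is a hypothetical example, not taken from the cited work:

```python
# Sketch: the firing rule of a contextual net (Petri net with read
# arcs). A transition needs tokens on its pre-places *and* on its
# context (read) places, but only the pre-places are consumed, so
# two transitions reading the same place can fire concurrently.

def enabled(marking, t):
    pre, context, _post = t
    return all(marking.get(p, 0) >= 1 for p in pre | context)

def fire(marking, t):
    assert enabled(marking, t)
    pre, _context, post = t
    m = dict(marking)
    for p in pre:                 # consume only the pre-places;
        m[p] -= 1                 # read places keep their tokens
    for p in post:
        m[p] = m.get(p, 0) + 1
    return m

# Transitions given as (pre-set, context-set, post-set):
t1 = ({'a'}, {'r'}, {'a_done'})   # t1 reads the shared resource r
t2 = ({'b'}, {'r'}, {'b_done'})   # t2 reads r as well

m0 = {'r': 1, 'a': 1, 'b': 1}
m1 = fire(m0, t1)                 # r is still marked afterwards,
m2 = fire(m1, t2)                 # so t2 stays enabled: concurrent reads
print(m2)  # {'r': 1, 'a': 0, 'b': 0, 'a_done': 1, 'b_done': 1}
```

Encoding the read arcs as consume-and-restore loops in an ordinary Petri net would serialize t1 and t2, which is exactly the loss of independence that contextual unfoldings avoid.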

In particular, the unfolding of a contextual net may be up to exponentially smaller in certain situations. While the theoretical foundations of contextual unfoldings were established in earlier work, it remained unclear whether the approach could be useful in practice.

Recent work on the theoretical foundations, as well as on appropriate data structures and algorithms, has closed this gap. This work was presented at CONCUR'11 and has resulted in an efficient tool. More details can be found in a technical report.

We are currently exploring applications of these techniques in the areas of verification, diagnosis, and planning. Some preliminary steps have already been presented.

Occurrence nets are a well-known partial-order model for the concurrent behavior of Petri nets. The causality and conflict relations between events, which are explicitly represented in occurrence nets, induce logical dependencies between event occurrences: the occurrence of an event e in a run implies that all its causal predecessors also occur, and that no event in conflict with e occurs. But these structural relations do not express all the logical dependencies between event occurrences in maximal runs: in particular, the occurrence of e in any maximal run may imply the occurrence of another event that is not a causal predecessor of e in that run. The
*reveals* relation had been introduced in earlier work to express this dependency between two events. In this work, presented at ACSD 2011, we extend the theory in two ways: first, we generalize the reveals relation to express more general dependencies, involving more than two events, and we introduce ERL logic to express these dependencies as Boolean formulas. Second, we solve the synthesis problem that arises for a given ERL formula.

The
*reveals* relation has been introduced in earlier work between events in occurrence nets. Essentially, one event
*reveals* another if every maximal run containing the first also contains the second.

The reveals relation was shown to be decidable for occurrence nets that represent unfoldings of safe Petri nets; however, the upper bound was prohibitive for computing the relation in practice. We address this problem and drastically reduce the upper bound. We also propose efficient algorithms to actually compute the relation on a given occurrence net.
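
For intuition, the definition can be evaluated by brute force on a toy occurrence net (the efficient algorithms mentioned above avoid exactly this enumeration); the events, causality and conflict below form a hypothetical example without conflict inheritance:

```python
# Sketch: brute-force computation of the reveals relation, following
# its definition: event x reveals event y iff every maximal run
# containing x also contains y.

from itertools import combinations

events = {'a', 'b', 'c', 'd'}
causes = {'a': set(), 'b': set(), 'c': {'a'}, 'd': {'a'}}  # direct causes
conflict = {frozenset({'b', 'c'})}                          # direct conflicts

def in_conflict(x, y):
    return frozenset({x, y}) in conflict   # no inheritance, tiny example

def is_config(C):
    """Causally closed and conflict-free set of events."""
    return (all(causes[e] <= C for e in C)
            and not any(in_conflict(x, y) for x, y in combinations(C, 2)))

def maximal_configs():
    configs = [set(C) for r in range(len(events) + 1)
               for C in combinations(sorted(events), r) if is_config(set(C))]
    return [C for C in configs
            if not any(C < D for D in configs)]   # no strict extension

def reveals(x, y):
    return all(y in C for C in maximal_configs() if x in C)

print(reveals('d', 'a'))   # True: d causally depends on a
print(reveals('b', 'd'))   # True: revealed without a causal link
```

Here b reveals d although d is neither a cause nor a consequence of b: choosing b excludes c, and d occurs in every remaining maximal run, which is exactly the kind of hidden dependency the relation captures.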

So far, several contacts with industry have been established, but no bilateral contracts have materialized yet. Cooperations with France Télécom, Alcatel-Lucent and NEC are currently being developed within the EU IP UNIVERSELF, which started in October 2010.

Attacks with timing channels have been described and simulated, for instance on TCP/IP protocols, Web communications or cryptographic operations. The scientific objective of the CoChaT project is to study the conditions under which such attacks can occur in timed systems, in two main directions: (a) the first step consists in defining a theoretical framework in which timing channels can be formally described; (b) a second part of the work concerns the design of detection and verification algorithms, for which decidability issues are involved. Progress in both steps will have to take into account practical examples like the case studies mentioned above, in order to validate the formal approach.

Benedikt Bollig and Paul Gastin obtained a DIGITEO PhD grant for their student Aiswarya Cyriac. The aim of the PhD will be to design linear-time temporal logics for concurrent recursive programs.

Stefan Haar and Delphine Longuet of LRI, Univ. Paris-Sud/Orsay, have obtained a
*DIM/LSC* grant for the project
*TECSTES*, which finances the PhD thesis of their student Hernán Ponce de León. The subject of the project is the asynchronous testing of concurrent systems via event structures.

This project involves IRCCyN (Nantes), IRISA (Rennes), LIP6 (Paris), LSV (Cachan), LIAFA (Paris), and LIF (Marseille).

It addresses issues related to the practical implementation of formal models for the design of communicating embedded systems: such models abstract away many complex features or limitations of the execution environment. The modeling of time, in particular, is usually idealized, with infinitely precise clocks, instantaneous tests or mode commutations, etc. Our objective is thus to study to what extent the practical implementation of these models preserves their good properties. We will first define a generic mathematical framework to reason about and measure implementability, and then study the possibility of integrating implementability constraints into the models. We will particularly focus on the combination of several sources of perturbation, such as resource allocation and the distributed architecture of applications. We will also study implementability through control and diagnosis techniques. We will finally apply the developed methods to a case study based on the AUTOSAR architecture, a standard of the automotive industry.

The increasing use of computerized systems in all aspects of our lives makes it ever more important that they function correctly. The presence of such systems in safety-critical applications, coupled with their increasing complexity, makes their verification indispensable. Thus model-checking techniques, i.e. the automated form of formal verification, are of particular interest. As verification techniques have become more efficient and more prevalent, the natural next step is to widen the range of models and specification formalisms to which model checking can be applied. Indeed, the behavior of many real-life processes is inherently stochastic, and the formalism has thus been extended to probabilistic model checking. Accordingly, different formalisms in which the underlying system is modeled by Markovian models have been proposed.

Stochastic model checking can be performed by numerical or statistical methods. Since, in the model-checking formalism, models are checked to see whether the considered measures are guaranteed or not, bounding techniques become useful. We propose to apply the stochastic comparison technique to numerical stochastic model checking. The main advantage of this approach is the possibility of deriving transient and steady-state bounding distributions, as well as the possibility of avoiding the state-space explosion problem. For statistical model checking, we propose to study the application of perfect simulation by coupling from the past. This method has been shown to be efficient for exact steady-state distribution sampling when the underlying system is monotone. We plan to extend this approach to transient analysis and to model checking by means of bounding models and stochastic monotonicity. One difficult problem we envisage studying is model checking when the state space is infinite; in some cases, it is possible to consider bounding models defined over a finite state space.
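
As an illustration of perfect simulation by coupling from the past, here is a minimal sketch for a monotone chain; the random walk, its bounds and the update rule are hypothetical, and monotonicity is what lets us couple only the two extreme trajectories:

```python
# Sketch: coupling from the past (perfect simulation) for a small
# monotone Markov chain on {0, ..., N}. Because the update function
# is monotone in the state, coalescence of the trajectories started
# from N and 0 implies coalescence of all trajectories, and the
# coalesced value is an exact sample from the stationary distribution.

import random

N = 5

def step(x, u):
    """Monotone update driven by shared uniform randomness u."""
    if u < 0.4:
        return min(x + 1, N)
    if u < 0.8:
        return max(x - 1, 0)
    return x

def cftp(seed=0):
    rng = random.Random(seed)
    us = []                       # randomness for times -1, -2, ...
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        top, bot = N, 0
        for t in range(T - 1, -1, -1):   # run from time -T up to 0
            top, bot = step(top, us[t]), step(bot, us[t])
        if top == bot:            # coalescence: exact stationary sample
            return top
        T *= 2                    # go further into the past, reusing
                                  # the SAME randomness at each time

print(cftp())
```

Reusing the same randomness when restarting further in the past is the crucial point: it is what makes the returned value an unbiased stationary sample rather than an approximation.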

Indeed, formal verification using model checking on the one hand, and performance and dependability evaluation on the other, have much in common. We think it would be interesting to apply the methods in which we have extensive experience from quantitative evaluation to stochastic model checking.

Title:
*Univerself*

Type: COOPERATION (ICT)

Defi: The Network of the Future

Instrument: Integrated Project (IP)

Duration: September 2010 - August 2013

Coordinator: Alcatel Lucent (France)

Other partners:

Universiteit Twente,

Alcatel Lucent Ireland,

Alcatel Lucent Deutschland,

Valtion Teknillinen Tutkimuskeskus (Finland),

University of Piraeus,

France Telecom,

Telecom Italia,

National University of Athens,

Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung,

Interdisciplinary Institute for Broadband Technology,

Telefonica Investigacion y Desarrollo,

Thales Communications,

INRIA,

Nec Europe,

University of Surrey,

University College London

IBBT (Belgium).

See also:
http://

Abstract:
*UniverSelf* unites 17 partners with the aim of overcoming the growing management complexity of future networking systems, and of reducing the barriers that complexity and ossification pose to further growth.
*UniverSelf* was launched in October 2010 and is scheduled for four years.

Serge Haddad and Stefan Haar are participating, as associate members of INRIA Rennes, in the project
*Distributed Supervisory Control of Large Plants (DISC)*. The European Commission supports the project financially through the EU ICT program, Challenge ICT-2007.3.3 (Information and Communication Technologies (ICT)), from 1 September 2008 to 1 September 2011. Project partners:

University of Cagliari (coordinator),

CWI - Amsterdam, Ghent University,

Technical University of Berlin,

University of Zaragoza,

INRIA,

Akhela s.r.l. Italy,

Czech Academy of Sciences,

Ministry of the Flemish Government,

CyBio AG.

Serge Haddad and Stefan Haar are among the INRIA participants of the IP UniverSelf on autonomous management in telecommunications, along with members of the DistribCom group at INRIA Rennes and the MADYNES group at INRIA Nancy.

Title: Hycon2 (Highly-complex and networked control systems)

Type: COOPERATION (ICT)

Defi: Engineering of Networked Monitoring and Control Systems

Instrument: Network of Excellence (NoE)

Duration: September 2010 - August 2014

Coordinator: CNRS (France)

Other partners:

Institut français des sciences et technologies des transports, de l'aménagement et des réseaux (IFSTTAR), France;

European Embedded Control Institute (EECI), France;

Eidgenössische Technische Hochschule (ETH) Zürich, Switzerland;

Technische Universität Dortmund, Germany;

Technische Universität Berlin, Germany;

Universität Kassel, Germany;

Ruhr-Universität Bochum, Germany;

Universidad de Sevilla, Spain;

Universidad de Valladolid, Spain;

Università degli Studi dell'Aquila, Italy;

Università di Pisa, Italy;

Università degli Studi di Trento, Italy;

Consiglio Nazionale delle Ricerche, Italy;

Università degli Studi di Cagliari, Italy;

Università degli Studi di Padova, Italy;

Università degli Studi di Pavia, Italy;

Technische Universiteit Eindhoven, Netherlands;

Technische Universiteit Delft, Netherlands;

Rijksuniversiteit Groningen, Netherlands;

Kungliga Tekniska Högskolan, Sweden;

Lunds Universitet, Sweden;

Laboratoire de Recherche en Informatique (LRI), Univ. Paris-Sud, France;

IMT Lucca Institute for Advanced Studies, Italy.

See also:
http://

Abstract: The FP7 NoE HYCON2, started in September 2010, is a four-year project coordinated by Françoise Lamnabhi-Lagarrigue. It aims at stimulating and establishing a long-term integration in the strategic field of control of complex, large-scale, and networked dynamical systems. It focuses in particular on the domains of ground and aerospace transportation, electrical power networks, process industries, and biological and medical systems.

TU München, Lehrstuhl Esparza (Germany)

Unfoldings of Petri nets (reveals relation)

University of Padova, Department of Pure and Applied Mathematics (Italy)

Analysis of Contextual nets and their unfoldings

DISCO team, Università degli Studi di Milano (Italy)

Structural analysis of partially ordered structures, in particular orthomodularity.

Madhavan Mukund of CMI Chennai, India, visited (within the ARCUS project) in May 2011.

K. Narayan Kumar of CMI Chennai, India, visited (within the ARCUS project) from May 2 to June 5 and from November 7 to November 20.

From February 23 to 25, the team received the visit of Lucia Pomello and Luca Bernardinello (both professors at the University of Milan, Italy) and of Carlo Ferigato, researcher at JRC Ispra, Italy.

Roshan Kumar (IIT Delhi, India) cancelled his summer internship due to illness.

Subject: Petri net unfolding methods for verifying weak properties

Institution: IIT Delhi (India)

Stefan Schwoon visited LaBRI on February 16 to work with Jérôme Leroux and to give a talk in the MVTSI seminar. From November 17 to 21, he visited Javier Esparza's group at TU München and gave a talk in the PUMA seminar.

Within the DISC project, Serge Haddad and Stefan Haar participated in the project meeting at CWI Amsterdam, March 14–16, and in the summer school in Cagliari, June 6–10.

Stefan Haar visited the DISCO team of Lucia Pomello and Luca Bernardinello at the University of Milan from June 8 to June 10.

In October 2011, Benedikt Bollig joined Dietrich Kuske's group at the Technische Universität Ilmenau, Germany, for two weeks.

Serge Haddad gave talks at the DISC summer school in Cagliari (June 6–10) and an invited talk at the summer school of the University of Western Brittany (August 29 to September 1).

Most members of the team have participated in sub-project 4,
*Formal approaches for computer systems*, of the
*Île de France/INDE* project of the ARCUS Program (Région Île-de-France and French Ministry of Foreign Affairs), initially funded for three years (2008–2010) and extended until September 2011.

To pursue the very active and fruitful collaboration with our Indian partners, we have proposed the creation of an International Associated Laboratory (LIA). The LIA is called INFORMEL, which stands for INdo-French FORmal MEthods Lab. The scientific coordinators are Paul Gastin (LSV, ENS Cachan) and Madhavan Mukund (CMI, Chennai). The French partners are mainly from LSV (ENS Cachan) and LaBRI (Bordeaux). The Indian partners are from the Chennai Mathematical Institute (CMI), the Institute of Mathematical Sciences (IMSc), and the Indian Institute of Science (IISc) in Bangalore. The LIA proposal has been positively evaluated by the CoNRS and should be created on January 1, 2012.

was the organizer and a PC member of the 3rd Young Researchers Workshop on Concurrency Theory (YR-CONCUR), which was co-located with CONCUR'11. He was a member of the scientific committee of the 3rd Workshop on Automata, Concurrency and Timed Systems (ACTS), which took place at the Chennai Mathematical Institute, India. He was a PC member of the 18th International Symposium on Temporal Representation and Reasoning (TIME'11) and of the International Workshop on Interactions, Games and Protocols (IWIGP'11). Moreover, he served as a reviewer for numerous international conferences and journals. He was also a member of the commission scientifique of INRIA Saclay.

supervises Sandie Balaguer's PhD thesis on concurrency in real-time distributed systems. He is a member of the program committee of ACSD'12 (
*International Conference on Application of Concurrency to System Design*).

is an associate editor of the
*Journal of Automata, Languages and Combinatorics*.

He was a member of the program committees of CONCUR'11 (
*International Conference on Concurrency Theory*) and DLT'11 (
*International Conference on Developments in Language Theory*), and will be a PC member of LATA 2012 and CONCUR 2012.

He is the head of the computer science department of ENS Cachan.

is an associate editor of
*Discrete Event Dynamic Systems: Theory and Applications*, and on the program committees of
*SAFEPROCESS 2012* and
*PNSE 2012*. He is the correspondent of the
*DRI* (international relations service) of INRIA for the Saclay center and, eo ipso, a member of the GTRI (international relations working group) of INRIA's COST. Stefan was the president of the PhD defence board of Bartosz Grabiec, on
*"Supervision of distributed systems using constrained unfoldings of timed models"*, at ENS Cachan, Antenne de Bretagne, in Ker Lann on October 4. The jury members were Claude Jard (supervisor), Béatrice Bérard and Pierre-Yves Schobbens (reviewers), Loïc Hélouët, Axel Legay and Didier Lime. He supervises, with Delphine Longuet of LRI/Univ. Paris-Sud, the PhD thesis of Hernán Ponce de León on testing of concurrent systems via event structures.

has been a member of the editorial board of the journal
*Technique et Science Informatiques* since 2007, and a member of the steering committee of the
*International Conference on Applications and Theory of Petri Nets (ICATPN)* since 2001.

In 2011, he was a reviewer of the PhDs of Gilles Benattar and Loïc Paulevé (Nantes) and of the HdR of Hind Castel (Télécom SudParis). He has also been a member of the PhD committee of Mathieu Sassolas (Paris 6) and of the HdR committee of Benoît Caillaud (Rennes).

He was invited by the GDR MACS to present a seminar on recursive Petri nets, and by ETR'2011 to present a seminar on model checking.

He was a member of three selection committees (two in Nancy, one at Paris 6).

He taught a course at the DISC PhD School (Cagliari, Italy) on verification and expressiveness of timed models.

Cooperation with Professor R. Hennicker (Ludwig-Maximilians University of Munich) and with Associate Professor F. Rosa Velardo (University of Madrid). Serge Haddad is responsible for years L3 and M1 of the computer science department of ENS Cachan.

is a member of the program committees of TACAS'12 (18th International Conference on Tools and Algorithms for the Construction and Analysis of Systems) and GRAPHITE'12 (First Workshop on GRAPH Inspection and Traversal Engineering). He supervises César Rodríguez's PhD thesis on contextual Petri net unfoldings. He gave an invited presentation at DCFS'11 (13th International Workshop on Descriptional Complexity of Formal Systems).

**Sandie Balaguer** is a teaching assistant at Paris Diderot University.

**Benedikt Bollig** gave a lecture (12h) about distributed timed systems within the Parisian Master of Research in Computer Science (MPRI).

**Thomas Chatain** is maître de conférences at ENS Cachan.

**Aiswarya Cyriac** is a teaching assistant at the Department of Computer Science of ENS Cachan.

**Hilal Djafri** is ATER at University Paris-Est Créteil.

**Paul Gastin** is a professor and head of the computer science department at ENS Cachan. He was also head of the Parisian Master of Research in Computer Science until August 2009.

**Stefan Haar** teaches, with Serge Haddad and Thomas Chatain, a course on Algorithms in the preparation program for the
*agrégation*. He also taught a course on Languages, Decidability and Complexity at the DISC PhD Summer School in Cagliari, Italy, in June 2011.

**Serge Haddad** is a professor at ENS Cachan. He teaches “Algorithms” in L3 and in the preparation program for the Agrégation of Mathematics. He also teaches “Computational Complexity”
in L3 and “Probabilistic Aspects of Computer Science” in M1.

**Benjamin Monmege** is a teaching assistant at ENS Cachan.

**César Rodríguez** is a teaching assistant at the Department of Computer Science of ENS Cachan.

**Stefan Schwoon** is maître de conférences at ENS Cachan. He currently teaches a course and tutorial on operating systems at the L3 level of ENS Cachan, and a course on verification
at the M1 level of the MPRI program. He also participated in the entrance examination
*(concours d'entrée en 3ème année, informatique 1)* at ENS Cachan in 2011.

**Marc Zeitoun** participated in the entrance examination
*(concours d'entrée en 3ème année, épreuve écrite Informatique 2)* at ENS Cachan in 2010. He taught a course on weighted automata in the master's program MPRI 2010-11 (C 2-8). He also
gave a lecture on algebraic aspects of language theory at the spring school
*École de Printemps d'Informatique Théorique* in May 2011.

PhD & HdR

PhD theses in progress:

**Sandie Balaguer**, since September 2010:
*Concurrency in real-time distributed systems*, under the supervision of Thomas Chatain and Stefan Haar.

**Benoît Barbot**, since September 2011:
*Rare events for statistical model checking*, under the supervision of Serge Haddad and Claudine Picaronny.

**Aiswarya Cyriac**, since September 2010:
*Temporal logics for concurrent recursive programs*, under the supervision of Benedikt Bollig and Paul Gastin.

**Hilal Djafri**, since October 2008:
*Numerical and Statistical Approaches for Model-Checking of Stochastic Processes*, under the supervision of Serge Haddad.

**Benjamin Monmege**, since September 2010:
*Verification of quantitative properties and applications to queries for XML documents*, under the supervision of Benedikt Bollig and Paul Gastin.

**Hernán Ponce de Léon**, since September 2011:
*Conformance testing for concurrent systems through event structures*, under the supervision of Stefan Haar and Delphine Longuet (LRI).

**César Rodríguez**, since September 2010:
*Contextual nets and their applications*, under the supervision of Stefan Schwoon.