In an increasingly networked world, the reliability of applications becomes ever more critical as the number of users of communication systems, web services, transportation, etc., grows steadily. Management of networked systems, in a very general sense of the term, is therefore a crucial task, but also a difficult one.
MExICo strives to take advantage of distribution by orchestrating cooperation between different agents that observe local subsystems, and interact in a localized fashion.
The need for applying formal methods in the analysis and management of complex systems has long been recognized. It is with much less unanimity that the scientific community embraces methods based on asynchronous and distributed models. Centralized and sequential modeling still prevails.
However, we observe that crucial applications have increasing numbers of users, and that service-providing networks grow fast in the number of participants as well as in physical size and degree of spatial distribution. Moreover, traditional isolated and proprietary software products for local systems are no longer typical of emerging applications.
In contrast to traditional centralized and sequential machinery, for which purely functional specifications are adequate, we have to account for applications being provided from diverse and non-coordinated sources. Their distribution (e.g. over the Web) must change the way we verify and manage them. In particular, one cannot ignore the impact of quantitative features such as delays or failure likelihoods on the functionalities of composite services in distributed systems.
We thus identify three main characteristics of complex distributed systems that constitute research challenges:
Concurrency of behavior;
Interaction of diverse and semi-transparent components; and
Management of quantitative aspects of behavior.
The increasing size and the networked nature of communication systems, controls, distributed services, etc. confront us with an ever higher degree of parallelism between local processes. This field of application for our work includes telecommunication systems and composite web services. The challenge is to provide sound theoretical foundations and efficient algorithms for the management of such systems, ranging from controller synthesis and fault diagnosis to integration and adaptation. While these tasks have received considerable attention in the sequential setting, managing non-sequential behavior requires profound modifications of existing approaches, and often the development of new approaches altogether. We see concurrency in distributed systems as an opportunity rather than a nuisance. Our goal is to exploit asynchronicity and distribution as an advantage. Clever use of adequate models, in particular partial order semantics (ranging from Mazurkiewicz traces to event structures to MSCs), actually helps in practice. In fact, the partial order vision allows us to make causal precedence relations explicit, and to perform diagnosis and test for the dependency between events. This is a conceptual advantage that interleaving-based approaches cannot match. The two key features of our work are (i) the exploitation of concurrency by using asynchronous models with partial order semantics, and (ii) the distribution of the agents performing management tasks.
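As a toy illustration of explicit causal precedence, the following sketch (the events and dependency edges are invented for illustration, and this is not code from any of our tools) represents a partially ordered run and answers precedence and concurrency queries via transitive closure.

```python
# Toy sketch: explicit causal precedence in a partially ordered run.
# Events and direct-causality edges are invented for illustration.
from itertools import product

events = ["send1", "recv1", "send2", "recv2"]
direct = {("send1", "recv1"), ("send2", "recv2"), ("send1", "send2")}

# Transitive closure (Floyd-Warshall style) yields the causal order.
causes = set(direct)
for k, i, j in product(events, repeat=3):
    if (i, k) in causes and (k, j) in causes:
        causes.add((i, j))

def concurrent(e, f):
    """Two events are concurrent iff neither causally precedes the other."""
    return e != f and (e, f) not in causes and (f, e) not in causes

print(("send1", "recv2") in causes)   # True: causal precedence made explicit
print(concurrent("recv1", "send2"))   # True: no ordering between these events
```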
Systems and services exhibit non-trivial interaction between specialized and heterogeneous components. A coordinated interplay of several components is required; this is challenging since each of them has only a limited, partial view of the system's configuration. We refer to this problem as distributed synthesis or distributed control. An aggravating factor is that the structure of a component might be semi-transparent, which requires a form of grey box management.
Besides the logical functionalities of programs, the quantitative aspects of component behavior and interaction play an increasingly important role.
Real-time properties cannot be neglected even if time is not an explicit functional issue, since transmission delays, parallelism, etc., can cause time-outs to strike and thus change even the logical course of processes. Again, this phenomenon arises in telecommunications and web services, but also in transport systems.
In the same contexts, probabilities need to be taken into account, for many diverse reasons such as unpredictable functionalities, or because the outcome of a computation may be governed by race conditions.
Last but not least, constraints on cost cannot be ignored, be it in terms of money or any other limited resource, such as memory space or available CPU time.
Since the creation of MExICo, the weight of quantitative aspects in all parts of our activities has grown, whether in terms of the models considered (weighted automata and logics), in transforming verification or diagnosis verdicts into probabilistic statements (probabilistic diagnosis, statistical model checking), or within the recently started SystemX cooperation on supervision in multi-modal transport systems. This trend is certain to continue over the next couple of years, along with the growing importance of diagnosis and control issues.
In another development, the theory and use of partial order semantics has gained momentum in the past four years, and we intend to strengthen our efforts and contacts in this domain so as to further develop and apply partial-order based deduction methods.
As concerns the study of interaction, our progress has so far been less in the domain of distributed approaches than in the analysis of system composition, such as in networks of untimed or timed automata. While continuing this line of study, we also intend to turn more strongly towards distributed algorithms, notably in terms of parametrized verification methods.
Concurrency; Semantics; Automatic Control; Diagnosis; Verification
Property of systems allowing some interacting processes to be executed in parallel.
The process of deducing from a partial observation of a system aspects of the internal states or events of that system; in particular, fault diagnosis aims at determining whether or not some non-observable fault event has occurred.
Feeding dedicated input into an implemented system.
It is well known that, whatever the intended form of analysis or control, a global view of the system state leads to overwhelming numbers of states and transitions, thus slowing down algorithms that need to explore the state space. Worse yet, it often blurs the mechanics that are at work rather than exhibiting them. Conversely, respecting concurrency relations avoids exhaustive enumeration of interleavings. It allows us to focus on 'essential' properties of non-sequential processes, which are expressible with causal precedence relations. These precedence relations are usually called causal (partial) orders. Concurrency is the explicit absence of such a precedence between actions that do not have to wait for one another. Both causal orders and concurrency are in fact essential elements of a specification. This is especially true when the specification is constructed in a distributed and modular way. Making these ordering relations explicit requires leaving the framework of state/interleaving based semantics. Therefore, we need to develop new dedicated algorithms for tasks such as conformance testing, fault diagnosis, or control for distributed discrete systems. Existing solutions for these problems often rely on centralized sequential models which do not scale up well.
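The blow-up avoided by partial-order representations can be made concrete with a small computation: n pairwise-independent actions give rise to n! interleavings, while the partial-order view keeps a single object with n events and an empty precedence relation. A minimal, assumption-free combinatorial illustration:

```python
# Toy computation: growth of the number of interleavings of n independent
# actions, versus a single partial order with an empty precedence relation.
from math import factorial

for n in range(2, 9):
    print(f"{n} concurrent events: 1 partial order, "
          f"{factorial(n)} interleavings")
```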
Fault diagnosis for discrete event systems is a crucial task in automatic control. Our focus is on event-oriented (as opposed to state-oriented) model-based diagnosis, asking e.g. the following questions:
given a (potentially large) alarm pattern formed of observations, what are the possible fault scenarios in the system that explain the pattern?
Based on the observations, can we deduce whether or not a certain, invisible, fault has actually occurred?
Model-based diagnosis starts from a discrete event model of the observed system, or rather of its relevant aspects, such as possible fault propagations, abstracting away other dimensions. From this model, an extraction or unfolding process, guided by the observation, recursively produces the explanation candidates.
In asynchronous partial-order based diagnosis with Petri nets, one unfolds the labelled product of a Petri net model and the observed alarm pattern; the resulting occurrence net contains the explanation candidates as partially ordered executions. A toy sketch of the verdict computation follows below.
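The sketch announced above: a minimal verdict computation on an invented toy model, in a purely sequential (interleaving-based) formulation; our actual algorithms operate on unfoldings rather than on explicitly enumerated runs.

```python
# Sketch of event-oriented diagnosis on a toy labelled transition system.
# The model, alphabet split and fault label are illustrative assumptions;
# real diagnosers work on Petri-net unfoldings, not on enumerated runs.
OBSERVABLE = {"a", "b"}
FAULT = "f"

# transitions: state -> list of (label, next_state)
lts = {
    0: [("a", 1), ("f", 2)],
    1: [("b", 3)],
    2: [("a", 4)],
    4: [("b", 3)],
    3: [],
}

def runs(state=0, prefix=(), bound=6):
    """All runs (label sequences) from the initial state, up to `bound`."""
    yield prefix
    if len(prefix) < bound:
        for label, nxt in lts.get(state, []):
            yield from runs(nxt, prefix + (label,), bound)

def diagnose(observation):
    """Classify an observation: fault certain, possible, or absent."""
    explains = [r for r in runs()
                if tuple(e for e in r if e in OBSERVABLE) == observation]
    faulty = [r for r in explains if FAULT in r]
    if not explains:
        return "observation impossible in the model"
    if len(faulty) == len(explains):
        return "fault has certainly occurred"
    return "fault possible" if faulty else "no fault occurred"

print(diagnose(("a", "b")))   # both a faulty and a correct explanation exist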
Diagnosis algorithms have to operate in contexts with low observability, i.e., in systems where many events are invisible to the supervisor. Checking observability and diagnosability for the supervised systems is therefore a crucial and non-trivial task in its own right. Analysis of the relational structure of occurrence nets allows us to check whether the system exhibits sufficient visibility to allow diagnosis. Developing efficient methods both for checking diagnosability under concurrency and for the diagnosis itself of distributed, composite and asynchronous systems is an important field for MExICo.
Distributed computation of unfoldings allows one to factor the unfolding of the global system into smaller local unfoldings, computed by local supervisors associated with sub-networks and communicating with one another. Elements of a methodology for distributed computation of unfoldings by several supervisors, underpinned by algebraic properties of the category of Petri nets, have been developed in our previous work. Generalizations, in particular to graph grammars, remain to be done.
Computing diagnosis in a distributed way is only one aspect of a much vaster topic, that of distributed diagnosis. In fact, it involves a more abstract and often indirect reasoning to conclude whether or not some given invisible fault has occurred. Combination of local scenarios is in general not sufficient: the global system may have behaviors that do not reveal themselves as faulty (or, dually, non-faulty) on any local supervisor's domain. Rather, the local diagnosers have to join all information that is available to them locally, and then collectively deduce further information from the combination of their views. In particular, even the absence of fault evidence on all peers may allow the peers to jointly deduce that a fault has occurred; a self-contained toy example follows below. Automating such procedures for the supervision and management of distributed and locally monitored asynchronous systems is a long-term goal to which MExICo hopes to contribute.
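The toy example announced above (runs, alphabets and observations are invented): each local verdict is inconclusive, while the joint verdict is conclusive.

```python
# Toy illustration of joint fault deduction by two local supervisors.
# Site 1 observes only 'a', site 2 observes only 'b'; 'f' is the fault.
RUNS = [("a",), ("b",), ("f", "a", "b")]          # hypothetical system runs
SITES = {"site1": {"a"}, "site2": {"b"}}

def proj(run, alphabet):
    return tuple(e for e in run if e in alphabet)

def verdict(candidates):
    faulty = ["f" in r for r in candidates]
    if all(faulty):
        return "fault certain"
    return "fault possible" if any(faulty) else "no fault"

# Each site alone: the observation is also consistent with a correct run.
obs = {"site1": ("a",), "site2": ("b",)}
for site, alphabet in SITES.items():
    local = [r for r in RUNS if proj(r, alphabet) == obs[site]]
    print(site, "->", verdict(local))             # fault possible (twice)

# Joint view: only the faulty run matches both local observations.
joint = [r for r in RUNS
         if all(proj(r, SITES[s]) == obs[s] for s in SITES)]
print("joint ->", verdict(joint))                 # fault certain
```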
Hybrid systems constitute a model for cyber-physical systems which integrates continuous-time dynamics (modes) governed by differential equations, and discrete transitions which switch instantaneously from one mode to another. Thanks to their ease of programming, hybrid systems have been integrated into power electronics systems, and more generally into cyber-physical systems. In order to guarantee that such systems meet their specifications, classical methods finitely abstract the systems by discretizing the (infinite) state space, and automatically derive the appropriate mode control from the specification using standard graph techniques. These methods face the well-known “curse of dimensionality” and cannot generally treat systems of dimension exceeding 5 or 6. Thanks to the introduction of original compositional techniques as well as finer estimations of integration errors, we are now able to control several case studies of greater dimension. In the real world, however, many parameters of hybrid models are not known precisely and require adjustment to experimental data. We plan to elaborate methods based on parameter estimation and machine learning techniques in order to define formal stability criteria and well-posed learning problems in the framework of hybrid systems with nonlinear dynamics.
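A minimal sketch of the discretization-based control scheme just described, for an invented one-dimensional switched system with two modes; actual case studies are higher-dimensional and rely on validated error bounds rather than the naive cell-centre Euler image used here.

```python
# Sketch: finite abstraction of a 1-D switched system and safe mode control.
# Dynamics, sampling period, grid and safe set are invented for illustration.
MODES = {"heat": lambda x: 0.5, "cool": lambda x: -0.7}   # dx/dt per mode
TAU, CELLS, LO, HI = 0.5, 20, 0.0, 2.0                    # period, grid, domain
SAFE = set(range(4, 16))                                  # safe cells

def cell(x):
    """Index of the grid cell containing x (clamped to the domain)."""
    return min(CELLS - 1, max(0, int((x - LO) / (HI - LO) * CELLS)))

def successor(c, mode):
    """Euler-style image of the cell centre under `mode` over one period."""
    x = LO + (c + 0.5) * (HI - LO) / CELLS
    return cell(x + TAU * MODES[mode](x))

# Greatest-fixpoint iteration of the classical safety game: keep a cell iff
# some mode maps it into the current controllable set.
ctrl = set(SAFE)
changed = True
while changed:
    changed = False
    for c in list(ctrl):
        if not any(successor(c, m) in ctrl for m in MODES):
            ctrl.discard(c)
            changed = True

control = {c: next(m for m in MODES if successor(c, m) in ctrl) for c in ctrl}
print(f"{len(ctrl)} of {len(SAFE)} safe cells admit a safe mode choice")
```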
Assuring the correctness of concurrent systems is notoriously difficult due to the many unforeseeable ways in which the components may interact and the resulting state-space explosion. A well-established approach to alleviate this problem is to model concurrent systems as Petri nets and analyse their unfoldings, essentially an acyclic version of the Petri net whose simpler structure permits easier analysis.
However, Petri nets are inadequate for modelling concurrent read accesses to the same resource. Such situations often arise naturally, for instance in concurrent databases or in asynchronous circuits. The encoding tricks typically used to model these cases in Petri nets make the unfolding technique inefficient. Contextual nets, which explicitly model concurrent read accesses, address this problem. Their accurate representation of concurrency makes contextual unfoldings up to exponentially smaller in certain situations. An abstract algorithm for contextual unfoldings was first given in earlier work. More recently, we studied this subject further from a theoretical and practical perspective, which allowed us to develop concrete, efficient data structures and algorithms and a tool (Cunf) that improves upon the existing state of the art. This work led to the PhD thesis of César Rodríguez in 2014.
Contextual unfoldings deal well with two sources of state-space explosion: concurrency and shared resources. Recently, we proposed an improved data structure, called contextual merged processes (CMP), to deal with a third source of state-space explosion, namely sequences of choices. The work on CMP is currently at an abstract level. In the short term, we want to put this work into practice, which requires some theoretical groundwork as well as programming and experimentation.
Another well-known approach to verifying concurrent systems is partial-order reduction, exemplified by the tool SPIN. Although it is known that both partial-order reduction and unfoldings have their respective strengths and weaknesses, we are not aware of any conclusive comparison between the two techniques. SPIN comes with a high-level modeling language having an explicit notion of processes, communication channels, and variables, and indeed the reduction techniques implemented in SPIN exploit the specific properties of these features. On the other hand, while there exist highly efficient tools for unfoldings, Petri nets are a relatively general low-level formalism, so these techniques do not exploit properties of higher language features. Our work on contextual unfoldings and CMPs represents a first step towards making unfoldings exploit richer models. In the long run, we wish to raise the unfolding technique to a suitable high-level modelling language and develop appropriate tool support.
Traditional mainframe systems were proprietary and (essentially) localized; the impact of delays, unforeseen failures, etc. could therefore be considered under the control of the system manager. It was thus natural, in the verification and control of systems, to focus entirely on functional behavior.
With the increase in size of computing systems and the growing degree of compositionality and distribution, quantitative factors enter the stage:
calling remote services and transmitting data over the web creates delays;
remote or non-proprietary components are not “deterministic”, in the sense that their behavior is uncertain.
Time and probability are thus parameters that management of distributed systems must be able to handle; along with both, the cost of operations is often subject to restrictions, or its minimization is at least desired. The mathematical treatment of these features in distributed systems is an important challenge, which MExICo is addressing; the following describes our activities concerning probabilistic and timed systems. Note that cost optimization is not a current activity but enters the picture in several intended activities.
Practical fault diagnosis requires selecting explanations of maximal likelihood; for partial-order based diagnosis, this leads to the question of the probability of a given partially ordered execution. In work with Benveniste et al., we presented a model of stochastic processes whose trajectories are partially ordered, based on local branching in Petri net unfoldings; an alternative and complementary model based on Markov fields, developed subsequently, takes a different view on the semantics and overcomes the first model's restrictions on applicability.
Both approaches abstract away from real-time progress and randomize choices in logical time. On the other hand, the relative speeds of the system's local processes, and thus, indirectly, its real-time behavior, are crucial factors determining the outcome of probabilistic choices, even if non-determinism is absent from the system.
In another line of research we have studied the likelihood of occurrence of non-sequential runs under random durations in a stochastic Petri net setting. It remains to better understand the properties of the probability measures thus obtained, to relate them with the models in logical time, and exploit them e.g. in diagnosis.
Distributed systems featuring non-deterministic and probabilistic aspects are usually hard to analyze and, more specifically, to optimize. Furthermore, high complexity-theoretic lower bounds have been established for models like partially observed Markov decision processes and their distributed variants. We believe that these negative results are consequences of the choice of the models rather than of the intrinsic complexity of the problems to be solved. We therefore plan to introduce new models in which the associated optimization problems can be solved more efficiently. More precisely, we start by studying connection protocols weighted by costs, and we look for online and offline strategies optimizing the mean cost of achieving the protocol. We have been cooperating on this subject with the SUMO team at Inria Rennes; in joint work, we strive to synthesize, for a given MDP, a control guaranteeing a specific stationary behavior, rather than, as is usually done, maximizing some reward.
Addressing large-scale probabilistic systems requires facing state explosion, due both to the discrete part and to the probabilistic part of the model. In order to deal with such systems, different approaches have been proposed:
Restricting the synchronization between components, as in queueing networks, allows one to express the steady-state distribution of the model by an analytical formula called a product-form.
Some methods that tackle the combinatorial explosion for discrete-event systems can be generalized to stochastic systems using an appropriate theory. For instance, symmetry-based methods have been generalized to stochastic systems with the help of aggregation theory.
Finally, simulation, which works as soon as a stochastic operational semantics is defined, has been adapted to perform statistical model checking. Roughly speaking, this consists in producing a confidence interval for the probability that a random path fulfills a formula of some temporal logic.
We want to contribute to these three axes: (1) we are looking for product-forms related to systems where synchronizations are more involved (as in Petri nets); (2) we want to adapt methods for discrete-event systems that require some theoretical developments in the stochastic framework; and (3) we plan to address some important limitations of statistical model checking, such as the expressiveness of the associated logic and the handling of rare events (a minimal sketch of the statistical approach follows below).
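The sketch announced in point (3): statistical model checking of a bounded reachability property on an invented discrete-time Markov chain, with a Chernoff-Hoeffding confidence interval. This illustrates the principle only and is not COSMOS code.

```python
# Sketch of statistical model checking on a toy discrete-time Markov chain:
# estimate p = Pr[ F<=H goal ] with a Chernoff-Hoeffding confidence interval.
import math, random

CHAIN = {  # state -> list of (probability, successor); invented numbers
    "init": [(0.6, "work"), (0.4, "init")],
    "work": [(0.5, "goal"), (0.3, "work"), (0.2, "init")],
    "goal": [(1.0, "goal")],
}
HORIZON = 20

def sample_path_hits_goal(rng):
    state = "init"
    for _ in range(HORIZON):
        if state == "goal":
            return True
        u, acc = rng.random(), 0.0
        for p, nxt in CHAIN[state]:
            acc += p
            if u <= acc:
                state = nxt
                break
    return state == "goal"

def estimate(n_samples=100_000, confidence=0.99, seed=42):
    rng = random.Random(seed)
    hits = sum(sample_path_hits_goal(rng) for _ in range(n_samples))
    p_hat = hits / n_samples
    # Hoeffding: Pr[|p_hat - p| >= eps] <= 2 exp(-2 n eps^2)
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n_samples))
    return p_hat, eps

p_hat, eps = estimate()
print(f"Pr[F<=20 goal] ~ {p_hat:.4f} +/- {eps:.4f} (99% confidence)")
```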
Nowadays, software systems largely depend on complex timing constraints and usually consist of many interacting local components. Among them, railway crossings, traffic control units, mobile phones, computer servers, and many more safety-critical systems are subject to particular quality standards. It is therefore becoming increasingly important to look at networks of timed systems, which allow real-time systems to operate in a distributed manner.
Timed automata are a well-studied formalism to describe reactive systems that come with timing constraints. For modeling distributed real-time systems, networks of timed automata have been considered, where the local clocks of the processes usually evolve at the same rate. It is, however, not always adequate to assume that distributed components of a system obey a global time. Actually, there is generally no reason to assume that different timed systems in the networks refer to the same time or evolve at the same rate. Any component is rather determined by local influences such as temperature and workload.
This was one of the tasks of the ANR project ImpRo.
Formal models for real-time systems, like timed automata and time Petri nets, have been extensively studied and have proved their interest for the verification of real-time systems. On the other hand, using these models as specifications for designing real-time systems raises some difficulties. One of them comes from the fact that real-time constraints introduce artifacts, because of which some syntactically correct models have a formal semantics that is clearly unrealistic. One famous situation is the case of Zeno executions, where the formal semantics allows the system to perform infinitely many actions in finite time. But there are other problems, and some of them are related to the distributed nature of the system. These are the ones we address here.
One approach to implementability problems is to formalize either syntactical or behavioral requirements about what should be considered as a reasonable model, and reject other models. Another approach is to adapt the formal semantics such that only realistic behaviors are considered.
These techniques are prerequisites for dealing with the problem of implementability of models. Indeed, implementing a model may be possible at the cost of some transformation that makes it suitable for the target device. Such transformations may also be of interest to the designer, who can then use high-level features in a model of a system or protocol and rely on the transformation to make it implementable.
We aim at formalizing and automating translations that preserve both the timed semantics and the concurrent semantics. This effort is crucial for extending concurrency-oriented methods for logical time, in particular for exploiting partial order properties. In fact, validation and management, in a broad sense, of distributed systems is not realistic in general without understanding and control of their real-time dependent features; the link between real-time and logical-time behaviors is thus crucial for many aspects of MExICo's work.
Telecommunications
MExICo’s research is motivated by problems of system management in several domains, such as:
In the domain of service-oriented computing, it is often necessary to insert some Web service into an existing orchestrated business process, e.g. to replace another component after a failure. This requires ensuring, often actively, conformance to the interaction protocol. One therefore needs to synthesize adaptors for every component in order to steer its interaction with the surrounding processes.
Still in the domain of telecommunications, the supervision of a network tends to move from out-of-band technology, with a fixed dedicated supervision infrastructure, to in-band supervision, where the supervision process uses the supervised network itself. This new setting requires revisiting the existing supervision techniques using control and diagnosis tools.
Currently, we have no active cooperation on these subjects.
We began in 2014 to examine concurrency issues in systems biology, and are currently enlarging the scope of our research's applications in this direction. To see the context, note that in recent years a considerable shift of biologists' interest can be observed, from the mapping of static genotypes to gene expression, i.e. the processes in which genetic information is used in producing functional products. These processes are far from being uniquely determined by the gene itself, or even jointly with static properties of the environment; rather, regulation occurs throughout the expression processes, with specific mechanisms increasing or decreasing the production of various products, and thus modulating the outcome. These regulations are central in understanding cell fate (how does the cell differentiate? do mutations occur? etc.), and progress there hinges on our capacity to analyse, predict, monitor and control complex and variegated processes. We have applied Petri net unfolding techniques to the efficient computation of attractors in a regulatory network, that is, to identify strongly connected reachability components that correspond to stable evolutions, e.g. of a cell that differentiates into a specific functionality (or mutation). This constitutes the starting point of a broader research effort applying Petri net unfolding techniques to regulation. In fact, the use of ordinary Petri nets for capturing regulatory network (RN) dynamics overcomes the limitations of traditional RN models: those impose, e.g., monotonicity properties on the influence that one factor has upon another, i.e. always increasing or always decreasing, and were thus unable to cover all actual behaviours (see [75]). Rather, we follow the more refined model of boolean networks of automata, where the local states of the different factors jointly determine which state transitions are possible. For these connectors, ordinary Petri nets constitute a first approximation, improving greatly over the literature but leaving room for improvement in terms of introducing more refined logical connectors. Future work thus involves transcending this class of Petri net models. Via unfoldings, one has access, provided efficient techniques are available, to all behaviours of the model, rather than to over- or under-approximations as previously. This opens the way to efficiently searching, in particular, for determinants of the cell fate: which attractors are reachable from a given stage, and what are the factors that decide in favor of one or the other attractor, etc. Our current research focuses on cellular reprogramming.
The validation of safety properties is a crucial concern for the design of computer-guided systems, in particular for automated transport systems. Our approach consists in analyzing the interactions of a randomized environment (roads, cross-sections, etc.) with a vehicle controller. This requires us to:
define the relevant case studies;
extend our tool COSMOS to handle general hybrid systems;
conduct experimentations and analyze their results.
In [SIA2017], we have shown that this approach scales rather well, but with a controller written in C. The next step will be to combine Simulink models with Petri nets, since Simulink is widely used for specifying hybrid systems in industry. In order to do so, we need to define an operational semantics for Simulink and to design an elegant way of specifying the interface between nets and Simulink models. Then we will implement the solution in COSMOS.
See the 'New results' section.
Functional Description: COSMOS is a statistical model checker for the Hybrid Automata Stochastic Logic (HASL). HASL employs Linear Hybrid Automata (LHA), a generalization of Deterministic Timed Automata (DTA), to describe accepting execution paths of a Discrete Event Stochastic Process (DESP), a class of stochastic models which includes, but is not limited to, Markov chains. As a result, HASL verification turns out to be a unifying framework where sophisticated temporal reasoning is naturally blended with elaborate reward-based analysis. COSMOS takes as input a DESP (described in terms of a Generalized Stochastic Petri Net), an LHA, and an expression Z representing the quantity to be estimated. It returns a confidence-interval estimation of Z. Recently, it has been equipped with functionalities for rare-event analysis. COSMOS is written in C++.
Participants: Benoît Barbot, Hilal Djafri, Marie Duflot-Kremer, Paolo Ballarini and Serge Haddad
Contact: Hilal Djafri
Functional Description: CosyVerif is a platform dedicated to the formal specification and verification of dynamic systems. It allows one to specify systems using several formalisms (such as automata and Petri nets), and to run verification tools on these models.
Participants: Alban Linard, Fabrice Kordon, Laure Petrucci and Serge Haddad
Partners: LIP6 - LSV - LIPN (Laboratoire d'Informatique de l'Université Paris Nord)
Contact: Serge Haddad
Functional Description: Mole computes, given a safe Petri net, a finite prefix of its unfolding. It is designed to be compatible with other tools, such as PEP and the Model-Checking Kit, which use the resulting unfolding for reachability checking and other analyses. The tool Mole arose out of earlier work on Petri nets.
Participant: Stefan Schwoon
Contact: Stefan Schwoon
Diagnosis is the task of detecting fault occurrences in a partially observed system. Depending on the possible observations, a discrete-event system may be diagnosable or not. Active diagnosis aims at controlling the system to render it diagnosable. Past research has proposed solutions for this problem, but their complexity remains to be improved. Here, we solve the decision and synthesis problems for active diagnosability, proving that (1) our procedures are optimal with respect to computational complexity, and (2) the memory required for our diagnoser is minimal. We then study the delay between a fault occurrence and its detection by the diagnoser. We construct a memory-optimal diagnoser whose delay is at most twice the minimal delay, whereas the memory required to achieve optimal delay may be much larger. We also provide a solution for parametrized active diagnosis, where we automatically construct the most permissive controller respecting a given delay.
The diagnosis problem for discrete event systems consists in deciding whether some fault event occurred or not in the system, given partial observations on the run of that system. Diagnosability checks whether a correct diagnosis can be issued in bounded time after a fault, for all faulty runs of that system. This problem appeared two decades ago and numerous facets of it have been explored, mostly for permanent faults. It is known for example that diagnosability of a system can be checked in polynomial time, while the construction of a diagnoser is exponential. The present paper examines the case of transient faults, that can appear and be repaired. Diagnosability in this setting means that the occurrence of a fault should always be detected in bounded time, but also before the fault is repaired, in order to prepare for the detection of the next fault or to take corrective measures while they are needed. Checking this notion of diagnosability is proved to be PSPACE-complete. It is also shown that faults can be reliably counted provided the system is diagnosable for faults and for repairs.
Active diagnosis is performed by a controller whose aim is to make a system diagnosable. To prevent the controller from degrading the system too strongly, it is generally given a second objective in terms of quality of service. For probabilistic systems, one possible specification consists in ensuring a positive probability that an infinite execution is correct, which is called safe active diagnosis. We introduce here two alternative specifications. The gamma-correctness of the system assigns to each execution a correctness value depending on a discount factor gamma, and the controller must ensure an average value above a fixed threshold. Alpha-degradation requires that, asymptotically, at each time unit a proportion of at least alpha of the executions that were correct so far remain so. From a semantic point of view, we exhibit significant links between the different notions. Algorithmically, we establish the frontier between decidability and undecidability of the problems, and in the decidable cases we exhibit the precise complexity as well as a synthesis procedure, potentially with infinite memory.
Diagnosability and opacity are two well-studied problems in discrete-event systems. We revisit these two problems with respect to expressiveness and complexity issues. We first relate different notions of diagnosability and opacity. We consider in particular fairness issues and extend the definition of Germanos et al. [ACM TECS, 2015] of weakly fair diagnosability for safe Petri nets to general Petri nets and to opacity questions. Second, we provide a global picture of complexity results for the verification of diagnosability and opacity. We show that diagnosability is NL-complete for finite state systems, PSPACE-complete for safe Petri nets (even with fairness), and EXPSPACE-complete for general Petri nets without fairness, while non-diagnosability is inter-reducible with reachability when fault events are not weakly fair. Opacity is ESPACE-complete for safe Petri nets (even with fairness) and undecidable for general Petri nets, already without fairness.
We consider opacity questions where an observation function provides to an external attacker a view of the states along executions and secret executions are those visiting some state from a fixed subset. Disclosure occurs when the observer can deduce from a finite observation that the execution is secret, the ε-disclosure variant corresponding to the execution being secret with probability greater than 1 − ε. In a probabilistic and non deterministic setting, where an internal agent can choose between actions, there are two points of view, depending on the status of this agent: the successive choices can either help the attacker trying to disclose the secret, if the system has been corrupted, or they can prevent disclosure as much as possible if these choices are part of the system design. In the former situation, corresponding to a worst case, the disclosure value is the supremum over the strategies of the probability to disclose the secret (maximisation), whereas in the latter case, the disclosure is the infimum (minimisation). We address quantitative problems (comparing the optimal value with a threshold) and qualitative ones (when the threshold is zero or one) related to both forms of disclosure for a fixed or finite horizon. For all problems, we characterise their decidability status and their complexity. We discover a surprising asymmetry: on the one hand optimal strategies may be chosen among deterministic ones in maximisation problems, while it is not the case for minimisation. On the other hand, for the questions addressed here, more minimisation problems than maximisation ones are decidable.
We introduce in this paper D-SPACES, an implementation of constraint systems with space and extrusion operators. Constraint systems are algebraic models that allow for a semantic language-like representation of information in systems where the concept of space is a primary structural feature. We give this information mainly an epistemic interpretation and consider various agents as entities acting upon it. D-SPACES is coded as a C++11 library providing implementations for constraint systems, space functions and extrusion functions. The interfaces to access each implementation are minimal and thoroughly documented. D-SPACES also provides property-checking methods as well as an implementation of a specific type of constraint systems (a boolean algebra). This last implementation serves as an entry point for quick access and proof of concept when using these models. Furthermore, we offer an illustrative example in the form of a small social network where users post their beliefs and utter their opinions.
Computing steady-state distributions in infinite-state stochastic systems is in general a very difficult task. Product-form Petri nets are those Petri nets for which the steady-state distribution can be described as a natural product corresponding, up to a normalising constant, to an exponentiation of the markings. However, even though some classes of nets are known to have a product-form distribution, computing the normalising constant can be hard. The class of (closed) …
We present an application of statistical model-checking to the verification of an autonomous vehicle controller. Our goal is to check safety properties in various traffic situations. More specifically, we focus on a traffic jam situation.
The controller is specified by a C++ program. Using sensors, it registers positions and velocities of nearby vehicles and modifies the position and velocity of the controlled vehicle to avoid collisions. We model the environment using a stochastic high-level Petri net, where random behaviors of other vehicles can be described. We use HASL, a quantitative variant of linear temporal logic, to express the desired properties. A large family of performance indicators can be specified in HASL, and we target in particular the expectation of travelled distance or the collision probability.
We evaluate the properties of this model using COSMOS. This simulation tool implements numerous statistical techniques, such as sequential hypothesis testing and most of the standard confidence-interval computation methods. Its efficiency allowed us to conduct several experiments with success.
Many industrial projects, notably in the automotive industry, rely on the Simulink tool suite for the design and validation of critical components representing hybrid systems, i.e. systems combining discrete and continuous aspects. However, the associated formalisms lack a formal semantics, which may reduce the confidence of engineers in the produced results. We propose here such a semantics, proceeding in two steps. We first develop an exact but non-executable semantics. We then enrich it with an approximate operational semantics, with the objective of quantifying the error induced by this approximation.
Continuous Petri nets are a relaxation of classical discrete Petri nets in which transitions can be fired a fractional number of times, and consequently places may contain a fractional number of tokens. Such continuous Petri nets are an appealing object of study since they over-approximate the set of reachable configurations of their discrete counterparts, and their reachability problem is known to be decidable in polynomial time. The starting point of this paper is to show that the reachability relation for continuous Petri nets is definable by a sentence of linear size in the existential theory of the rationals with addition and order. Using this characterization, we obtain decidability and complexity results for a number of classical decision problems for continuous Petri nets. In particular, we settle the open problem about the precise complexity of reachability set inclusion. Finally, we show how continuous Petri nets can be incorporated inside the classical backward coverability algorithm for discrete Petri nets as a pruning heuristic in order to tackle the symbolic state explosion problem. The cornerstone of the approach we present is that our logical characterization enables us to leverage the power of modern SMT-solvers in order to yield a highly performant and robust decision procedure for coverability in Petri nets. We demonstrate the applicability of our approach on a set of standard benchmarks from the literature.
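As a hedged illustration of the SMT-based flavour of this approach, the sketch below encodes only the classical state (marking) equation of an invented continuous Petri net over the rationals and hands it to Z3. This is merely a necessary condition for continuous reachability, not the paper's exact logical characterization.

```python
# Sketch: a (necessary) state-equation check for a continuous Petri net,
# encoded over the rationals and dispatched to the SMT solver Z3.
# The net and markings are invented for illustration.
from z3 import Real, Solver, sat

places = ["p1", "p2"]
trans  = ["t1", "t2"]
pre  = {("p1", "t1"): 1, ("p2", "t2"): 1}          # arc weights Pre[p, t]
post = {("p2", "t1"): 1, ("p1", "t2"): 1}          # arc weights Post[p, t]
m0     = {"p1": 1.0, "p2": 0.0}
target = {"p1": 0.25, "p2": 0.75}

s = Solver()
x = {t: Real(f"x_{t}") for t in trans}             # fractional firing counts
for t in trans:
    s.add(x[t] >= 0)
for p in places:
    flow = sum(post.get((p, t), 0) * x[t] for t in trans) \
         - sum(pre.get((p, t), 0) * x[t] for t in trans)
    s.add(m0[p] + flow == target[p])               # marking equation per place

if s.check() == sat:
    print("state equation solvable:", s.model())   # necessary condition holds
else:
    print("target not reachable (state equation unsatisfiable)")
```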
Memoryless determinacy of (infinite) parity games is an important result with numerous applications. It was first established independently by Emerson and Jutla [1] and Mostowski [2], but their proofs involve elaborate developments. The elegant and simpler proof of Zielonka [3] still requires a nested induction on the finite number of priorities and on ordinals for sets of vertices. There are other proofs for finite games, like the one of Björklund, Sandberg and Vorobyov [4], which relies on relating infinite- and finite-duration games. We present here another simple proof that finite parity games are determined with memoryless strategies, using induction on the number of relevant states. The closest proof relying on induction over non-absorbing states is the one of Grädel [5]. However, instead of focusing on a single appropriate vertex for the induction as we do here, he considers two reduced games per vertex, for all vertices of the game. The idea of reasoning about a single state was inspired by the analysis of finite stochastic priority games by Karelovic and Zielonka [6].
Markov Decision Processes (MDP) are a widely used model including both non-deterministic and probabilistic choices. Minimal and maximal probabilities to reach a target set of states, with respect to a policy resolving non-determinism, may be computed by several methods including value iteration. This algorithm, easy to implement and efficient in terms of space complexity, iteratively computes the probabilities of paths of increasing length. However, it raises three issues: (1) defining a stopping criterion ensuring a bound on the approximation, (2) analysing the rate of convergence, and (3) specifying an additional procedure to obtain the exact values once a sufficient number of iterations has been performed. The first two issues are still open and, for the third one, an upper bound on the number of iterations has been proposed. Based on a graph analysis and transformation of MDPs, we address these problems. First we introduce an interval iteration algorithm, for which the stopping criterion is straightforward. Then we exhibit its convergence rate. Finally we significantly improve the upper bound on the number of iterations required to get the exact values. We extend our approach to also deal with Interval Markov Decision Processes (IMDP) that can be seen as symbolic representations of MDPs.
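A minimal sketch of the interval iteration idea on an invented toy MDP; it assumes, as the graph-based preprocessing described above guarantees in general, that the goal and sink states are the only end components, so that both the lower and the upper bound converge.

```python
# Sketch of interval iteration for maximal reachability in a toy MDP.
# Assumes goal/sink are the only end components (ensured in general by
# MEC-based preprocessing), so lower AND upper bounds converge.
MDP = {  # state -> action -> list of (probability, successor); invented
    "s0": {"a": [(0.5, "s1"), (0.5, "sink")],
           "b": [(0.9, "s1"), (0.1, "sink")]},
    "s1": {"a": [(0.4, "goal"), (0.6, "s0")]},
}
STATES = ["s0", "s1", "goal", "sink"]

def step(v):
    """One Bellman backup for Pr_max(F goal); goal and sink stay fixed."""
    new = dict(v)
    for s, acts in MDP.items():
        new[s] = max(sum(p * v[t] for p, t in dist) for dist in acts.values())
    return new

lo = {s: 0.0 for s in STATES}; lo["goal"] = 1.0   # under-approximation
hi = {s: 1.0 for s in STATES}; hi["sink"] = 0.0   # over-approximation

iters = 0
while max(hi[s] - lo[s] for s in STATES) > 1e-6:  # sound stopping criterion
    lo, hi = step(lo), step(hi)
    iters += 1

print(f"after {iters} iterations: Pr_max(F goal)(s0) lies in "
      f"[{lo['s0']:.6f}, {hi['s0']:.6f}]")
```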
A novel method to cluster event log traces is presented in this paper. In contrast to the approaches in the literature, the clustering approach of this paper assumes an additional input: a process model that describes the current process. The core idea of the algorithm is to use model traces as centroids of the clusters detected, computed from a generalization of the notion of alignment. This way, model explanations of observed behavior are the driving force to compute the clusters, instead of current model-agnostic approaches, which group log traces merely by their vector-space similarity. We believe alignment-based trace clustering provides results more useful for stakeholders. Moreover, in case of log incompleteness, noisy logs or concept drift, it can be more robust for dealing with highly deviating traces. The technique of this paper can be combined with any clustering technique to provide model explanations for the clusters computed. The proposed technique relies on encoding the individual alignment problems into the (pseudo-)Boolean domain, and has been implemented in our tool DarkSider, which uses an open-source solver.
Certifying that a process model is aligned with the real process executions is perhaps the most desired feature a process model may have: aligned process models are crucial for organizations, since strategic decisions can be made easier on models instead of on plain data. In spite of its importance, the current algorithmic support for computing alignments is limited: either techniques that explicitly explore the model behavior (which may be worst-case exponential with respect to the model size), or heuristic approaches that cannot guarantee a solution, are the only alternatives. In this paper we propose a solution that sits right in the middle in the complexity spectrum of alignment techniques; it can always guarantee a solution, whose quality depends on the exploration depth used and local decisions taken at each step. We use linear algebraic techniques in combination with an iterative search which focuses on progressing towards a solution. The experiments show a clear reduction in the time required for reaching a solution, without sacrificing significantly the quality of the alignment obtained.
Cellular reprogramming, a technique that opens huge opportunities in modern and regenerative medicine, heavily relies on identifying key genes to perturb. Most computational methods focus on finding mutations to apply to the initial state in order to control which attractor the cell will reach. However, it has been shown, and is proved in this article, that waiting between the perturbations and using the transient dynamics of the system allow new reprogramming strategies. To identify these temporal perturbations, we consider a qualitative model of regulatory networks, and rely on Petri nets to model their dynamics and the putative perturbations. Our method establishes a complete characterization of temporal perturbations, whether permanent (mutations) or only temporary, to achieve the existential or inevitable reachability of an arbitrary state of the system. We apply a prototype implementation on small models from the literature and show that we are able to derive temporal perturbations to achieve trans-differentiation.
Unfoldings provide an efficient way to avoid the state-space explosion due to interleavings of concurrent transitions when exploring the runs of a Petri net. The theory of adequate orders allows one to define finite prefixes of unfoldings which contain all the reachable markings. In this paper we are interested in reachability of a single given marking, called the goal. We propose an algorithm for computing a finite prefix of the unfolding of a 1-safe Petri net that preserves all minimal configurations reaching this goal. Our algorithm combines the unfolding technique with on-the-fly model reduction by static analysis aiming at avoiding the exploration of branches which are not needed for reaching the goal. We present some experimental results.
Hybrid systems are a powerful formalism for modeling and reasoning about cyber-physical systems. They mix the continuous and discrete natures of the evolution of computerized systems. Switched systems are a special kind of hybrid systems, with restricted discrete behaviours: those systems only have finitely many different modes of (continuous) evolution, with isolated switches between modes. Such systems provide a good balance between expressiveness and controllability, and are thus in widespread use in large branches of industry such as power electronics and automotive control. The control law for a switched system defines the way of selecting the modes during the run of the system. Controllability is the problem of (automatically) synthesizing a control law in order to satisfy a desired property, such as safety (maintaining the variables within a given zone) or stabilisation (confinement of the variables in a close neighborhood around an objective point). In order to compute the control of a switched system, we need to compute the solutions of the differential equations governing the modes. Euler's method is the most basic technique for approximating such solutions. We present here an estimation of the Euler method's local error, using the notion of "one-sided Lipschitz constant" for modes. This yields a general control synthesis approach which can encompass several features such as bounded disturbance and compositionality.
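As a toy companion to this abstract, the following sketch compares a coarse Euler trajectory with a fine-step reference on an invented one-sided Lipschitz mode; the paper derives analytic error bounds from the one-sided Lipschitz constant, which this purely empirical comparison only illustrates.

```python
# Toy look at Euler integration error for one mode dx/dt = f(x).
# The analytic bounds of the paper use the one-sided Lipschitz constant of f;
# here we merely compare a coarse Euler run against a fine-step reference.
def f(x):                      # invented contractive mode
    return -2.0 * x + 1.0      # one-sided Lipschitz constant lambda = -2

def euler(x0, step, n):
    x = x0
    for _ in range(n):
        x += step * f(x)
    return x

T, x0 = 1.0, 5.0
reference = euler(x0, T / 100_000, 100_000)   # fine-step stand-in for x(T)
for n in (10, 100, 1000):
    approx = euler(x0, T / n, n)
    print(f"tau = {T/n:.4f}: |error| = {abs(approx - reference):.2e}")
```

As the step tau shrinks, the observed error decreases roughly linearly in tau, consistent with the first-order nature of Euler's method.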
A novel algorithm for control synthesis for nonlinear switched systems is presented in this paper. Based on an existing procedure of state-space bisection, and made applicable to nonlinear systems with the help of guaranteed integration, the algorithm has been improved to consider longer patterns of modes with a better pruning approach. Moreover, the use of guaranteed integration also makes it possible to take bounded perturbations and varying parameters into account. This is particularly interesting for safety-critical applications, such as in the aeronautical, military or medical fields. The whole approach is entirely guaranteed and the induced controllers are correct-by-design. Experiments are performed to show the important gain of the new algorithm.
In a previous work, we explained how Euler's method for computing approximate solutions of systems of ordinary differential equations can be used to synthesize safety controllers for sampled switched systems. We continue here this line of research by showing how Euler's method can also be used for synthesizing safety controllers in a distributed manner. The global system is seen as an interconnection of two (or more) sub-systems where, for each component, the sub-state corresponding to the other component is seen as an "input"; the method exploits (a variant of) the notion of incremental input-to-state stability (δ-ISS).
In this paper, we propose a symbolic control synthesis method for nonlinear sampled switched systems whose vector fields are one-sided Lipschitz. The main idea is to use an approximate model obtained from the forward Euler method to build a guaranteed control. The benefit of this method is that the error introduced by symbolic modeling is bounded by choosing suitable time and space discretizations. The method is implemented in the interpreted language Octave. Several examples from the literature are run, and the results are compared with those obtained with a previous method based on Runge-Kutta integration.
We propose a novel method for transforming delay- line time-to-digital converters (TDCs) into TDCs that output Gray code without relying on synchronizers. We formally prove that the inevitable metastable memory upsets (Marino, TC'81) do not induce an additional time resolution error. Our modified design provides suitable inputs to the recent metastability-containing sorting networks by Lenzen and Medina (ASYNC'16) and Bund et al. (DATE'17). In contrast, employing existing TDCs would require using thermometer code at the TDC output (followed by conversion to Gray code) or resolving metastability inside the TDC. The former is too restrictive w.r.t. the dynamic range of the TDCs, while the latter loses the advantage of enabling (accordingly much faster) computation without having to first resolve metastability.
Our all-digital designs are also of interest in their own right: they support high sample rates and large measuring ranges at nearly optimal bit-width of the output, yet maintain the original delay line's time resolution. No previous approach unifies all these properties in a single device.
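The property that makes Gray code attractive in this setting can be checked in a few lines: consecutive code words differ in exactly one bit, so a timing boundary can corrupt at most one output bit. A generic sketch (the standard binary-reflected Gray code, not the paper's circuit):

```python
# Binary-reflected Gray code: consecutive values differ in exactly one bit,
# which is what makes Gray outputs robust at sampling boundaries.
def to_gray(b: int) -> int:
    return b ^ (b >> 1)

def from_gray(g: int) -> int:
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

for v in range(8):
    g, g_next = to_gray(v), to_gray(v + 1)
    assert bin(g ^ g_next).count("1") == 1        # exactly one bit flips
    assert from_gray(g) == v                      # conversion is invertible
    print(f"{v}: {g:03b}")
```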
Synchronization using flip-flop chains imposes a latency of a few clock cycles when transferring data and control signals between clock domains. We propose a design scheme that avoids this latency by performing synchronization as part of state/data computations while guaranteeing that metastability is contained and its effects tolerated (with an acceptable failure probability). We present a theoretical framework for modeling synchronous state machines in the presence of metastability and use it to prove properties that guarantee some form of reliability. Specifically, we show that the inevitable state/data corruption resulting from propagating metastable states can be confined to a subset of computations. Applications that can tolerate certain failures can exploit this property to leverage low-latency and quasi-reliable operation simultaneously. We demonstrate the approach by designing a Network-on-Chip router with zero-latency asynchronous ports and show via simulation that it outperforms a variant with two flip-flop synchronizers at a negligible cost in packet transfer reliability.
Thomas Chatain, Stefan Haar, Serge Haddad and Stefan Schwoon are participating in the ANR project ALGORECELL.
Matthias Függer participates in the ANR project FREDDA.
Title: Life Sciences need formal Methods!
International Partner (Institution - Laboratory - Researcher):
Newcastle University (United Kingdom) - School of Computing Science - Victor Khomenko
Start year: 2016
This project extends an existing cooperation between the MExICo team and Newcastle University on partial-order based formal methods for concurrent systems. We enlarge the partnership to bioinformatics and synthetic biology. The proposal addresses challenges concerning formal specification, verification, monitoring and control of synthetic biological systems, with use cases conducted in the Centre for Synthetic Biology and the Bioeconomy (CSBB) in Newcastle. A main challenge is to create a solid modelling framework based on Petri-net type models that allows for causality analysis and rapid state-space exploration for verification, monitoring and control purposes; a potential extension to be investigated concerns the study of attractors and cell reprogramming in Systems Biology.
Joost-Pieter Katoen, Aachen, spent two weeks with MEXICO.
Aalok Thakkar, 2nd year student from CMI (India), did a two-month research internship on 'Semantics of Mutation Dynamics' under the supervision of Stefan Haar, from May 2nd to July 21st, 2017.
Matthias Függer is general co-chair of ASYNC 2018.
Laurent Fribourg was a PC member of:
7th International Conference on New Computational Methods for Inverse Problems, Cachan, 2017
The 27th International Symposium on Logic-based Program Synthesis and Transformation, Namur, Belgium, 2017
Seventh Workshop on Design, Modeling and Evaluation of Cyber Physical Systems, Seoul, 2017.
Matthias Függer was a member of the PC of DDECS'17.
Stefan Haar was a member of the PCs of the conferences MSR 2017 and ACSD 2017 and of the workshop ATAED 2017.
Serge Haddad was a PC member of the International Workshop on Petri Nets and Software Engineering (PNSE) 2017 at Zaragoza, Spain, and of the 11th International Conference on Verification and Evaluation of Computer and Communication Systems (VECOS) 2017 at Montreal, Québec, Canada. He was also a member of the scientific committee of Ecole d'été temps-réel (ETR 2017).
Matthias Függer was a reviewer for ICALP, DISC, FMCAD, ICDCN, and OPODIS.
Stefan Haar was a reviewer for FOSSACS.
Stefan Schwoon was a reviewer for the following conferences taking place in 2017: STACS, TACAS, ESOP, ATVA, and FSTTCS.
Stefan Haar is associate editor of the Journal of Discrete Event Dynamic Systems: Theory and Applications.
Thomas Chatain was a reviewer for Acta Informatica, Artificial Intelligence and Journal of Discrete Event Dynamic Systems.
Matthias Függer was a reviewer for MFCS.
Stefan Haar was a reviewer for MSCS and IEEE Transactions on Automatic Control.
Stefan Schwoon was a reviewer for Fundamenta Informaticae, International Journal on Software Tools for Technology Transfer, and the Journal of Discrete Event Dynamic Systems.
Laurent Fribourg gave the following invited talk: "Euler's Method Applied to the Control of Switched Systems", at the 15th International Conference on Formal Modelling and Analysis of Timed Systems, Berlin, 2017.
Matthias Függer gave invited talks at the Theory of Hardware seminar in Vienna in February, the Noon seminar at the Max-Planck Institute for Informatics in April, and the Distributed Computing Seminar at LaBRI in November.
Serge Haddad gave the following invited talks:
at Centre Fédéré en Vérification, Bruxelles, Belgique on February 24, 2017, entitled From Continuous Petri nets to Petri nets and Back;
at LACL, Créteil on February 27, 2017, entitled Probabilistic Disclosure: Maximisation vs. Minimisation;
at the MSR 2017 conference, Marseille, France, on November 16, 2017, entitled Réseaux de Petri discrets et continus : apports réciproques.
Claudine Picaronny gave an invited talk on 'Vérification probabiliste, numérique ou statistique' on April 21, 2017, at Alea 17, CIRM, Marseille, France.
Serge Haddad was an expert for the evaluation of researcher premiums at Université Pierre et Marie Curie.
Laurent Fribourg is a member of:
the Comité de Direction of the Department Sciences et technologies de l'information et de la communication of Université Paris-Saclay;
the Bureau of the Domaine d'Intérêt Majeur émergent du Réseau Francilien en Sciences Informatiques.
Stefan Haar is the president of Inria's GTRI-COST committee for international relations, and the head of the SciLex (Software Reliability and Security) axis of the LABEX DIGICOSME, and ipso facto a member of DIGICOSME's executive committee and scientific commission.
Serge Haddad was the president of the HCERES evaluation committee of the laboratory LIAS, Poitiers.
Stefan Haar taught one half of an L3-level class on Formal Languages at ENS Paris-Saclay (15 h CM, 22.5 EQTD).
Serge Haddad is professor at ENS Paris-Saclay. Claudine Picaronny, Thomas Chatain, and Stefan Schwoon are associate professors of the same university.
Serge Haddad is the head of the Computer Science Department, and Stefan Schwoon is in charge of the L3 program.
Claudine Picaronny is a co-director of the ENS Paris-Saclay Mathematics department and a member of the juries of the 'agrégation interne de Mathématiques' and of the second 'concours de Mathématiques' of ENS Paris-Saclay; she is also the coordinator of the mathematics/computer science examination of E3A, parts MP and MC.
Matthias Függer is teaching "Initiation à la recherche" at ENS Paris-Saclay.
Theses in progress:
Hugues Mandon, ENS Paris-Saclay since October 2016, on computational models and algorithms for the prediction of Cell Reprogramming Strategies, co-supervised by Stefan Haar and L. Paulevé (LRI)
Juraj Kolčák, ENS Paris-Saclay since March 2017, on Unfoldings and Abstract Interpretation for Parametric Biological Regulatory Networks, co-supervised by Stefan Haar and L. Paulevé (LRI).
Adnane Saoud, Université Paris-Saclay since 2016, jointly supervised by Laurent Fribourg and Antoine Girard (Centrale-Supelec).
Engel Lefaucheux, ENS Paris-Saclay since 2015, Controlling information in probabilistic systems, jointly supervised by Nathalie Bertrand (SUMO team) and Serge Haddad
Yann Duplouy, IRT SystemX since 2015, Application of formal methods to the development of embedded systems for autonomous vehicles, supervised by Béatrice Bérard and Serge Haddad.
Robert Najvirt (TU Wien, Austrian FWF SIC project), realistic delay models with applications in high-speed and low-power circuits, co-supervised by Matthias Függer and Andreas Steininger.
Martin Perner (TU Wien, Austrian FWF SIC project), clock generation on-chip and formalisms suitable to prove correct VLSI circuits, co-supervised by Matthias Függer and Ulrich Schmid.
Juergen Maier (TU Wien, Austrian FWF SIC project), on realistic delay models with applications in high-speed and low-power circuits, with focus on noise and high-order models, co-supervised by Matthias Függer and Ulrich Schmid.
Laurent Fribourg was a member of the Jury of Irini-Eleftheria Mens's PhD Thesis on “Learning regular languages over large alphabets", defended at University of Grenoble, October 2017.
Stefan Haar was a reviewer of the thesis by Guillaume Madelaine on 'Simplifications Exactes et Structurelles de Réseaux de Réactions Biologiques', defended on February 28 at Lille University, France.
Serge Haddad was:
reviewer in the jury of Bruno Karelovic's thesis on Quantitative Analysis of Stochastic Systems – Priority Games and Populations of Markov Chains, defended on July 7, 2017 at University Paris 7;
president of the jury of Nicolas David's thesis on Réseaux de Petri à Paramètres Discrets, defended on October 20 at the University of Nantes;
reviewer in the jury of Thomas Geffroy's thesis on Vers des outils efficaces pour la vérification de systèmes concurrents, defended on December 12, 2017 at the University of Bordeaux.
Claudine Picaronny was a member of the jury of Pierre Carlier's thesis on 'Verification of Stochastic Timed Automata', defended on December 8, 2017 in Mons, Belgium.