The VerTeCs team focuses on the use of formal methods to assess the reliability, safety and security of reactive software systems. By reactive software system we mean a system controlled by software which interacts with its environment (human or other reactive software). Among these, critical systems are of primary importance, as errors occurring during their execution may have dramatic economic or human consequences. It is thus essential to establish their correctness before they are deployed in a real environment, or at least to detect incorrect behaviour during execution and take appropriate action. To this end, the VerTeCs team promotes the use of formal methods, i.e. formal specification of software and its required properties, together with mathematically founded validation methods. Our research covers several validation methods, all oriented towards better reliability of software systems:

Verification, which is used during the analysis and design phases, and whose aim is to establish the correctness of specifications with respect to requirements, properties or higher level specifications.

Control synthesis, which consists in “forcing” (specifications of) systems to stay within desired behaviours by coupling them with a supervisor.

Conformance testing, which is used to check the correctness of a real system with respect to its specification. In this context, we are interested in model-based testing, and in particular automatic test generation of test cases from specifications.

Diagnosis and monitoring, which are used during execution to detect erroneous behaviour.

Combinations of these techniques, both at the methodological level (combining several techniques within formal validation methodologies) and at the technical level (as the same set of formal verification techniques - model checking, theorem proving and abstract interpretation - are required for control synthesis, test generation and diagnosis).

Our research is thus concerned with the development of formal models for the description of software systems, the formalization of relations between software artifacts (e.g. satisfaction, conformance between properties, specifications and implementations), and the interaction between these artifacts (modelling of execution, composition, etc.). We develop methods and algorithms for verification, controller synthesis, test generation and diagnosis that ensure desirable properties (e.g. correctness, completeness, optimality). We try to be as generic as possible in terms of models and techniques in order to cope with a wide range of application domains and specification languages. Our research has been applied to telecommunication systems, embedded systems, smart-card applications, and control-command systems. We implement prototype tools for distribution in the academic world, or for transfer to industry.

Our research is based on formal models and our basic tools are **verification** techniques such as model checking, theorem proving, abstract interpretation, the control theory of discrete event systems, and their underlying models and logics. The close connection between testing, control and verification produces a synergy between these research topics and allows us to share theories, models, algorithms and tools.

N. Bertrand was hired as an Inria researcher in 2007 and arrived on October 1st. She brings a strong background in formal verification, in particular in qualitative and quantitative verification of Markovian models.

This project, which started in 2004 and ended in September 2007, was our first involvement in security. It allowed us to learn a lot in this domain, to make some contributions by adapting work on test generation, diagnosis and control, and to start a larger collaboration in the RNTL Politess grant.

The formal models we use are mainly automata-like structures such as labelled transition systems (LTS) and some of their extensions: an LTS is a tuple M = (Q, Σ, →, q_{0}) where Q is a non-empty set of states; q_{0} ∈ Q is the initial state; Σ is the alphabet of actions; and → ⊆ Q × Σ × Q is the transition relation. These models are adapted to testing and controller synthesis.
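As a concrete illustration, a finite LTS can be encoded directly as a small data structure. This is only a sketch in Python; the class and field names are ours, not taken from any VerTeCs tool:

```python
from dataclasses import dataclass

@dataclass
class LTS:
    """A labelled transition system M = (Q, Sigma, ->, q0)."""
    states: set        # Q: non-empty set of states
    alphabet: set      # Sigma: alphabet of actions
    transitions: set   # transition relation, a subset of Q x Sigma x Q
    initial: int       # q0, a member of Q

    def successors(self, q, a):
        """One-step successors of state q under action a."""
        return {q2 for (q1, act, q2) in self.transitions
                if q1 == q and act == a}

# A two-state system alternating between an output !send and an input ?ack.
m = LTS(states={0, 1},
        alphabet={"!send", "?ack"},
        transitions={(0, "!send", 1), (1, "?ack", 0)},
        initial=0)
```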

To model reactive systems in the testing context, we use Input/Output labelled transition systems (IOLTS for short). In this setting, the interactions between the system and its environment (where the tester lies) must be partitioned into inputs (controlled by the environment), outputs (observed by the environment), and internal (non-observable) events modelling the internal behavior of the system. The alphabet Σ is then partitioned into Σ = Σ_{?} ∪ Σ_{!} ∪ Σ_{τ}, where Σ_{!} is the alphabet of outputs, Σ_{?} the alphabet of inputs, and Σ_{τ} the alphabet of internal actions.

In the controller synthesis theory, we also distinguish between controllable and uncontrollable events (Σ = Σ_{c} ∪ Σ_{uc}), and between observable and unobservable events (Σ = Σ_{o} ∪ Σ_{uo}).

In order to cope with more realistic models, closer to real specification languages, we also need higher level models that consider both control and data aspects. We defined (input-output)
symbolic transition systems ((IO)STS), which are extensions of (IO)LTS that operate on data (i.e., program variables, communication parameters, symbolic constants) through message passing,
guards, and assignments. Formally, an IOSTS is a tuple (V, Θ, Σ, T), where V is a set of variables (including a counter variable encoding the control structure); Θ is the initial condition, defined by a predicate on V; Σ is the finite alphabet of actions, where each action has a signature (just as in IOLTS, Σ can be partitioned, e.g. as Σ = Σ_{?} ∪ Σ_{!} ∪ Σ_{τ}); and T is a finite set of symbolic transitions of the form t = (a, p, G, A), where a is an action (possibly with a polarity reflecting its input/output/internal nature), p is a tuple of communication parameters, G is a guard defined by a predicate on p and V, and A is an assignment of variables. The semantics of an IOSTS is defined in terms of an (IO)LTS whose states are vectors of values of the variables, and whose transitions are labelled with instantiated actions (actions with valued communication parameters). This (IO)LTS semantics allows us to perform syntactical transformations at the (IO)STS level while ensuring semantical properties at the (IO)LTS level. We also consider extensions of these models with added features such as recursion, fifo channels, etc. An alternative to IOSTS for specifying systems with data variables is the model of synchronous dataflow equations.
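To make the symbolic-transition tuple concrete, a transition t = (a, p, G, A) can be sketched as plain data whose guard and assignment are functions. This is purely illustrative and is not the input syntax of STG:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SymbolicTransition:
    """t = (a, p, G, A): action, parameters, guard, assignment."""
    action: str                           # e.g. "?deposit" (input polarity)
    params: tuple                         # names of communication parameters
    guard: Callable[[dict, dict], bool]   # predicate on parameters and variables
    assign: Callable[[dict, dict], dict]  # new valuation of the variables

# A counter that accepts ?deposit(amount) while the balance stays below 100.
t = SymbolicTransition(
    action="?deposit",
    params=("amount",),
    guard=lambda p, v: v["balance"] + p["amount"] <= 100,
    assign=lambda p, v: {"balance": v["balance"] + p["amount"]},
)

v = {"balance": 40}
p = {"amount": 30}
if t.guard(p, v):
    v = t.assign(p, v)   # firing the instantiated transition
```

Instantiating the parameters and variables, as done above, is exactly the passage to the (IO)LTS semantics.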

Our research is based on well established theories: conformance testing, supervisory control, abstract interpretation, and theorem proving. Most of the algorithms that we employ take their origins in these theories:

graph traversal algorithms (breadth first, depth first, strongly connected components, ...). We use these algorithms for verification as well as test generation and control synthesis.

BDD (Binary Decision Diagram) algorithms, for manipulating Boolean formulas, and their MTBDD (Multi-Terminal Binary Decision Diagram) extension for manipulating more general functions. We use these algorithms for verification and test generation.

abstract interpretation algorithms, specifically in the abstract domain of convex polyhedra (for example, Chernikova's algorithm for the computation of dual forms). Such algorithms are used in verification and test generation.

logical decision algorithms, such as satisfiability of formulas in Presburger arithmetic. We use these algorithms during generation and execution of symbolic test cases.
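The graph traversal algorithms of the first item can be illustrated by a breadth-first computation of the reachable states of a finite transition relation (a minimal sketch, with states and transitions encoded as Python values):

```python
from collections import deque

def reachable(initial, transitions):
    """Breadth-first traversal: the set of states reachable from `initial`
    through transition triples (source, action, target)."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        q = frontier.popleft()
        for (q1, _a, q2) in transitions:
            if q1 == q and q2 not in seen:
                seen.add(q2)
                frontier.append(q2)
    return seen

trans = {(0, "a", 1), (1, "b", 2), (3, "c", 0)}
# state 3 has no incoming path from 0, so it is never reached
```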

Most of our research, and in particular controller synthesis and conformance testing, relies on the ability to solve some verification problems. A large part of these problems reduces to reachability and coreachability problems on a formal model (a state s is *reachable* from an initial state s_{i} if some execution starting in s_{i} reaches s; symmetrically, s is *coreachable* from a set of target states if some execution starting in s reaches that set).

Verification in its full generality consists in checking that a system, specified in a formal model, satisfies a required property. When the state space of the system is finite and not too large, verification can be carried out by graph algorithms (model checking). For large or infinite state spaces, we can perform approximate computations, either by computing a finite abstraction and resorting to graph algorithms, or preferably by using more sophisticated abstract interpretation techniques. Another way to cope with large or infinite state systems is deductive verification, which, either alone or in combination with compositional and abstraction techniques, can deal with complex systems that are beyond the scope of fully automatic methods.

Most problems in test generation or controller synthesis reduce to state reachability and state coreachability problems which can be solved by fixpoint computations (and also by deductive methods).

The big change induced by taking into account the data, and not only the (finite) control, of the systems under study is that the fixpoints become uncomputable. This undecidability is overcome by resorting to approximations, using the theoretical framework of Abstract Interpretation.

Abstract Interpretation is a theory of approximate solving of fixpoint equations, applied to program analysis. Most program analysis problems, among them reachability analysis, come down to solving a fixpoint equation

x = F(x), x ∈ C

where C is a lattice. In the case of reachability analysis, if we denote by S the state space of the considered program, C is the lattice ℘(S) of sets of states, ordered by inclusion, and F is roughly the "*successor states*" function defined by the program.

The exact computation of such an equation is generally not possible for undecidability (or complexity) reasons. The fundamental principles of Abstract Interpretation are:

to substitute for the *concrete domain* C a simpler *abstract domain* A (static approximation) and to transpose the fixpoint equation into the abstract domain, so that one has to solve an equation y = G(y), y ∈ A;

to use a *widening operator* (dynamic approximation) to make the iterative computation of the least fixpoint of G converge, after a finite number of steps, to some upper approximation (more precisely, a post-fixpoint).

Approximations are conservative, so the obtained result is an upper approximation of the exact result. These two principles are well illustrated by Interval Analysis, which consists in associating with each numerical variable of a program an interval representing an (upper) set of reachable values:

One substitutes for the concrete domain ℘(R^{n}) induced by the numerical variables the abstract domain (I_{R})^{n}, where I_{R} denotes the set of intervals on real numbers; a set of values of a variable is then represented by the smallest interval containing it.

An iterative computation on this domain may not converge: it is indeed easy to generate an infinite sequence of intervals which is strictly growing. The "standard" widening operator extrapolates to +∞ the upper bound of an interval if it does not stabilize within a given number of steps (and similarly, to -∞, for the lower bound).
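The standard interval widening described above can be sketched as follows, on a toy loop that increments a counter starting from 0; the function names are ours:

```python
def widen(old, new):
    """Standard interval widening: any bound that did not stabilize
    between two iterates is extrapolated to the corresponding infinity."""
    lo = old[0] if new[0] >= old[0] else float("-inf")
    hi = old[1] if new[1] <= old[1] else float("inf")
    return (lo, hi)

def post(iv):
    """Abstract effect of one iteration of `x = x + 1` with x initially 0."""
    return (min(iv[0], 0), iv[1] + 1)

x = (0, 0)
while True:
    nxt = post(x)
    if nxt == x:           # post-fixpoint reached
        break
    x = widen(x, nxt)
# x = (0, inf): a sound over-approximation of the reachable values of x
```

Without the widening, the upper bound would grow forever: 0, 1, 2, ...; with it, the iteration stabilizes in two steps at the cost of precision.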

In this example, the state space ℘(R^{n}) that should be abstracted has a simple structure, but this may be more complicated when variables belong to different data types (Booleans, numerics, arrays) and when it is necessary to establish *relations* between the values of different types.

Programs performing dynamic allocation of objects in memory have an even more complex state space. Solutions have been devised to represent in an approximate way the memory heap and pointers between memory cells by graphs (*shape analysis*). Values contained in memory cells are however generally ignored.

In the same way, programs with recursive procedure calls, parameter passing and local variables are more delicate to analyse with precision. The difficulty is to abstract the execution stacks, which may have a complex structure, particularly when parameter passing by reference is allowed, as it induces aliasing on the stack.

For verification we also use theorem proving, and more particularly the PVS and Coq proof assistants. These are two general-purpose systems based on two different versions of higher-order logic. A verification task in such a proof assistant consists in encoding the system under verification and its properties into the logic of the proof assistant, together with verification *rules* that allow one to prove the properties. Using the rules usually requires input from the user; for example, proving that a state predicate holds in every reachable state of the system (i.e., that it is an *invariant*) typically requires providing a stronger, *inductive* invariant, which is preserved by every execution step of the system. Another type of verification problem is proving a *simulation* between a concrete and an abstract semantics of a system. This can also be done by induction in a systematic manner, by showing that, in each reachable state of the system, each step of the concrete system is simulated by a corresponding step at the abstract level.

In testing, we are mainly interested in conformance testing. Conformance testing consists in checking whether a black box implementation under test (the real system that is only known by its interface) behaves correctly with respect to its specification (the reference which specifies the intended behavior of the system). In the line of model-based testing, we use formal specifications and their underlying models to unambiguously define the intended behavior of the system, to formally define conformance and to design test case generation algorithms. The difficult problems are to generate test cases that correctly identify faults (the oracle problem) and, as exhaustiveness is impossible to reach in practice, to select an adequate subset of test cases that are likely to detect faults. Hereafter we detail some elements of the models, theories and algorithms we use.

**Models:** We use IOLTS (or IOSTS) as formal models for specifications, implementations, test purposes, and test cases. Most often, specifications are not directly given in such low-level models, but are written in higher-level specification languages (e.g. SDL, UML, Lotos). The tools associated with these languages often contain a simulation API that implements their semantics under the form of IOLTS. On the other hand, the IOSTS model is expressive enough to allow a direct representation of most constructs of the higher-level languages.

**Conformance testing theory:** We adapt a well-established theory of conformance testing, which formally defines conformance as a relation between formal models of specifications and implementations. This conformance relation, called **ioco**, is defined in terms of the visible behaviors (called *suspension traces*) of the implementation I (denoted by STraces(I)) and those of the specification S (denoted by STraces(S)). Suspension traces are sequences of inputs, outputs or quiescence (absence of action, denoted by δ), and thus abstract away internal behaviors that cannot be observed by testers. The conformance relation ioco was originally written as follows:

I ioco S ≡ ∀σ ∈ STraces(S) : Out(I after σ) ⊆ Out(S after σ)

where M after σ is the set of states where M can stay after the observation of the suspension trace σ, and Out(M after σ) is the set of outputs and quiescence allowed by M in this set. Intuitively, I ioco S if, after a suspension trace of the specification, the implementation I can only show outputs and quiescences of the specification S. We re-formulated ioco as a partial inclusion of visible behaviors, as follows:

STraces(I) ∩ STraces(S)·(Σ_{!} ∪ {δ}) ⊆ STraces(S)

Intuitively, this means that suspension traces of I which are suspension traces of S prolongated by an output or quiescence should still be suspension traces of S. Interestingly, this characterization presents conformance with respect to S as a safety property of the suspension traces of I. In fact, STraces(S)·(Σ_{!} ∪ {δ}) \ STraces(S) characterizes the finite unexpected behaviours. Thus conformance with respect to S is clearly a safety property of I, whose negation can be specified by a "non-conformance" observer A_{¬ioco S} built from S and recognizing these unexpected behaviours. However, as I is a black box, one cannot check conformance exhaustively, but may only experiment on I using test cases, expecting the detection of some non-conformances. In fact, the non-conformance observer A_{¬ioco S} can also be thought of as the *canonical tester* of S for ioco, i.e. the most general testing process of S for ioco. It then also serves as a basis for test selection.
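On explicitly enumerated, finite sets of suspension traces, the reformulated inclusion can be checked directly. The sketch below only illustrates the definition (traces are tuples, δ is a plain token, and the output alphabet is fixed by hand); it is not the algorithm used in TGV or STG:

```python
DELTA = "delta"                    # quiescence observation
OUTPUTS = {"!ack", "!nack", DELTA} # Sigma_! together with quiescence

def ioco(straces_impl, straces_spec):
    """I ioco S iff every implementation trace that extends a specification
    trace by one output (or quiescence) is still a specification trace."""
    for t in straces_impl:
        if t and t[-1] in OUTPUTS and t[:-1] in straces_spec:
            if t not in straces_spec:
                return False
    return True

spec = {(), ("?req",), ("?req", "!ack")}
good = {(), ("?req",), ("?req", "!ack")}
bad  = {(), ("?req",), ("?req", "!nack")}   # unexpected output after ?req
```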

Test cases are processes executed against implementations in order to detect non-conformance. They are also formalized by IOLTS (or IOSTS), with special states indicating *verdicts*. The execution of test cases against implementations is formalized by a parallel composition with synchronization on common actions. Usually, a *Fail* verdict means that the IUT is rejected and should correspond to non-conformance; a *Pass* verdict means that the IUT exhibited a correct behavior and that some specific targeted behaviour has been observed; while an *Inconclusive* verdict is given to a correct behavior that is not targeted. Based on these models, the execution semantics, and the conformance relation, one can then define required properties of test cases and test suites (sets of test cases). Typical properties are soundness (only non-conformant implementations should be rejected by a test case) and exhaustiveness (every non-conformant implementation may be rejected by some test case). Soundness is not difficult to obtain, but exhaustiveness is not achievable in practice, and one has to select test cases.

**Test selection:** In the literature, in particular in white-box testing, test selection is often based on the coverage of some criteria (state coverage, transition coverage, etc.). But in practice, test cases are often associated with *test purposes* describing some particular behaviors targeted by a test case. We have developed test selection algorithms based on the formalization of these *test purposes*. In our framework, test purposes are specified as IOLTS (or IOSTS) associated with marked states or dedicated variables, giving them the status of automata or observers accepting runs (or sequences of actions, or suspension traces). We denote by ASTraces(S, TP) the suspension traces of these accepted runs. Selection of test cases then amounts to selecting these traces ASTraces(S, TP), and complementing them with unspecified outputs leading to *Fail*. Alternatively, this can be seen as the computation of a sub-automaton of the canonical tester A_{¬ioco S} whose accepted traces are ASTraces(S, TP) and whose failed traces are a subset of the unexpected behaviours. The resulting test case is then both an observer of the negation of a safety property (non-conformance w.r.t. S) and an observer of a reachability property (acceptance by the test purpose).

Test selection algorithms are based on the computation of the visible behaviors of the specification, STraces(S), involving the identification of quiescence (δ actions) followed by determinisation, the construction of a product between the specification and the test purpose, whose accepted behavior is ASTraces(S, TP), and finally the selection of these accepted behaviors. Selection can be reduced to a model-checking problem where one wants to identify the states (and the transitions between them) which are both reachable from the initial state and coreachable from the accepting states. We have proved that these algorithms ensure soundness. Moreover, the (infinite) set of all possibly generated test cases is also exhaustive. Apart from these theoretical results, our algorithms are designed to be as efficient as possible in order to scale up to real applications.
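The selection step just described (keeping states both reachable from the initial state and coreachable from the accepting states) can be sketched as two fixpoints followed by an intersection; the encoding below is illustrative, not the TGV implementation:

```python
def selected_transitions(init, accept, trans):
    """Keep only transitions between states that are both reachable from
    `init` (forward pass) and coreachable from `accept` (backward pass)."""
    reach, stack = {init}, [init]          # forward fixpoint
    while stack:
        q = stack.pop()
        for (q1, _a, q2) in trans:
            if q1 == q and q2 not in reach:
                reach.add(q2)
                stack.append(q2)
    coreach, changed = set(accept), True   # backward fixpoint
    while changed:
        changed = False
        for (q1, _a, q2) in trans:
            if q2 in coreach and q1 not in coreach:
                coreach.add(q1)
                changed = True
    keep = reach & coreach
    return {t for t in trans if t[0] in keep and t[2] in keep}

# ?req then !ack reaches the accepting state 2; the !err branch is pruned.
trans = {(0, "?req", 1), (1, "!ack", 2), (1, "!err", 3)}
tc = selected_transitions(0, {2}, trans)
```

In an actual test case the pruned outputs are not dropped but redirected to *Fail* or *Inconclusive* verdicts.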

Our first test generation algorithms are based on enumerative techniques, thus adapted to IOLTS models, and optimized to fight the state-space explosion problem. We have developed on-the-fly algorithms, which consist in performing a lazy exploration of the set of states that are reachable in both the specification and the test purpose. This technique is implemented in the TGV tool. However, this enumerative technique suffers from some limitations when specification models contain data.

More recently, we have explored symbolic test generation techniques for IOSTS specifications. This is a promising technique whose main objective is to avoid the state-space explosion problem induced by the enumeration of values of variables and communication parameters. The idea consists in computing a test case under the form of an *IOSTS*, i.e., a reactive program in which the operations on data are kept in a symbolic form. Test selection is still based on test purposes (also described as IOSTS) and involves syntactical transformations of IOSTS models that should ensure properties of their IOLTS semantics. However, most of the operations involved in test generation (determinisation, reachability, and coreachability) become undecidable. For determinisation, we employ heuristics that allow us to solve the so-called bounded observable non-determinism (i.e., the result of an internal choice can be detected after finitely many observable actions). The product is defined syntactically. Finally, test selection is performed as a syntactical transformation of transitions, based on a semantical reachability and coreachability analysis. As both problems are undecidable for IOSTS, the syntactical transformations are guided by over-approximations computed with abstract interpretation techniques. Nevertheless, these over-approximations still ensure the soundness of test cases. These techniques are implemented in the STG tool, with an interface to NBAC, which is used for abstract interpretation.

**The Supervisory Control Problem** is concerned with ensuring (not only checking) that a computer-operated system works correctly. More precisely, given a specification model and a required property, the problem is to control the specification's behavior, by coupling it to a supervisor, such that the controlled specification satisfies the property. The models used are LTSs, say G, and their associated languages, say L(G), which make a distinction between *controllable* and *non-controllable* actions and between *observable* and *non-observable* actions. Typically, the controlled system is constrained by the supervisor, which acts on the system's controllable actions and forces it to behave as specified by the property. The control synthesis problem can be seen as a constructive verification problem: building a supervisor that prevents the system from violating a property. Several kinds of properties can be ensured, such as reachability, invariance (i.e. safety), attractivity, etc. Techniques adapted from model checking are then used to compute the supervisor w.r.t. the objectives. Optimality must be taken into account, as one often wants a supervisor that constrains the system as little as possible.

**The Supervisory Control Theory overview**. Supervisory control theory deals with the control of Discrete Event Systems. In this theory, the behavior of the system S is assumed not to be fully satisfactory. Hence, it has to be restricted by means of a feedback control (named Supervisor or Controller) in order to achieve a given set of requirements. Namely, if S denotes the specification of the system and Φ is a safety property that has to be ensured on S (i.e., S does not already satisfy Φ), the problem consists in computing a supervisor C such that

S ∥ C ⊨ Φ     (1)

where ∥ is the classical parallel composition between two LTSs. Given S, some events of S are said to be uncontrollable (Σ_{uc}), i.e. the occurrence of these events cannot be prevented by a supervisor, while the others are controllable (Σ_{c}). This means that not all the supervisors satisfying (1) are good candidates. In fact, the behavior of the controlled system must respect an additional condition, which happens to be similar to the ioco conformance relation that we previously defined. This condition is called the *controllability condition* and is defined as follows:

L(S ∥ C)·Σ_{uc} ∩ L(S) ⊆ L(S ∥ C)     (2)

Namely, when acting on S, a supervisor is not allowed to disable uncontrollable events. Given a safety property Φ, which can be modeled by an LTS, there actually exist many different supervisors satisfying both (1) and (2). Among all the valid supervisors, we are interested in computing the supremal one, i.e. the one that restricts the system as little as possible. It has been shown that such a supervisor always exists and is unique. It gives access to a behavior of the controlled system that is called the supremal controllable sub-language of Φ w.r.t. S and Σ_{uc}. In some situations, it may also be interesting to force the controlled system to be non-blocking.

The underlying techniques are similar to the ones used for automatic test generation. They consist in computing a product between the specification and the property, and removing from the obtained LTS the states that may lead, by triggering only uncontrollable events, to states that violate the property.
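The pruning step can be sketched as a fixpoint that removes every state from which a forbidden state can be reached through uncontrollable events alone (an illustrative encoding with Σ_uc given explicitly as a set of action names):

```python
def safe_states(states, trans, bad, uncontrollable):
    """Largest set of states from which no forbidden state can be reached
    by triggering only uncontrollable events (the pruning fixpoint)."""
    unsafe = set(bad)
    changed = True
    while changed:
        changed = False
        for (q1, a, q2) in trans:
            if a in uncontrollable and q2 in unsafe and q1 not in unsafe:
                unsafe.add(q1)
                changed = True
    return states - unsafe

states = {0, 1, 2, 3}
trans = {(0, "start", 1), (1, "fail", 2), (1, "stop", 3)}
# "fail" is uncontrollable, so state 1 cannot be kept once state 2 is
# forbidden; the supervisor must disable the controllable "start" instead.
ok = safe_states(states, trans, bad={2}, uncontrollable={"fail"})
```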

**Control of Structured Discrete Event Systems.** In many applications and control problems, LTSs are the starting point to model fragments of a large-scale system, which usually consists of several composed and nested sub-systems. Knowing that the number of states of the global system grows exponentially with the number of parallel and nested sub-systems, we have been interested in designing algorithms that perform the controller synthesis phase by taking advantage of the structure of the plant, without expanding the system.

Similarly, in order to take into account nested behaviors, some techniques based on model aggregation methods have been proposed to deal with hierarchical control problems. Another direction was proposed by Brave and Heimann, who introduced Hierarchical State Machines, which constitute a simplified version of Statecharts. Compared to classical state machines, they add concurrency and hierarchy features. Other works also deal with control and hierarchy. This is the direction we have chosen in the VerTeCs team.

**Optimal Control.** We are also interested in the Optimal Control Problem. The purpose of optimal control is to study the behavioral properties of a system in order to generate a supervisor that constrains the system to a desired behavior according to quantitative and qualitative requirements. In this spirit, we have been working on the optimal scheduling of a system through a set of multiple goals that the system must visit one by one. We have also extended these results to the case of partial observation in order to handle more realistic applications. Symbolic algorithms have also been developed and implemented in Sigali.

The methods and tools developed by the VerTeCs project-team for test generation and control synthesis of reactive systems are intended to be as generic as possible. This allows us to apply them in many application domains where the presence of software is predominant and its correctness is essential. In particular, we apply our research in the context of telecommunication systems, embedded systems, smart-card applications, and control-command systems.

Our research on test generation was initially proposed for conformance testing of telecommunication protocols. In this domain, testing is a normalized process, and formal specification languages are widely used (SDL in particular). Our test generation techniques have already proved useful in this context, going up to industrial transfer. New standardized component-based design methodologies such as UML and OMG's MDE increase the need for formal techniques in order to ensure the compositionality of components, by verification and testing. Our techniques, by their genericity and adaptability, have also proved useful at different levels of these methodologies, from component testing to system testing. The telecommunication industry now also tries to provide more and more services to users. These services must be validated. We are involved with France Telecom R&D in a project on the validation of vocal services. Very recently, we also started to study the impact of our test generation techniques in the domain of network security. More specifically, we believe that testing that a network or information system meets its security policy is a major concern, and complements other design and verification techniques.

In the context of transport, embedded software systems are increasingly predominant. This is particularly true in automotive systems, where software replaces electronics for the power train, the chassis (e.g. engine control, steering, brakes) and the cabin (e.g. wipers, windows, air conditioning), and where new services to passengers are increasing (e.g. telematics, entertainment). Car manufacturers have to integrate software components provided by many different suppliers, according to specifications. One of the problems is that testing is done late in the life cycle, when the complete system is available. Faced with these problems, but also with the complexity of systems, the compositionality of components, distribution, etc., car manufacturers now try to promote standardized interfaces and component-based design methodologies. They also develop virtual platforms which allow for testing components before the system is complete. It is clear that software quality and trust are among the problems that have to be tackled in this context. This is why we believe that our techniques (testing and control) can be useful in such a context.

We have also applied our test generation techniques in the context of smart-card applications. Such applications are typically reactive as they describe interactions between a user, a terminal and a card. The number and complexity of such applications is increasing, with more and more services offered to users. The security of such applications is of primary interest for both users and providers and testing is one of the means to improve it.

The main application domain for controller synthesis is control-command systems. In general, such systems control costly machines (e.g. robotic systems, flexible manufacturing systems) that are connected to an environment (e.g. a human operator). Such systems are often critical, and errors occurring during their execution may have dramatic economic or human consequences. In this field, the controller synthesis methodology (CSM) is useful to ensure by construction the correct interaction between 1) the different components, and 2) the environment and the system itself. For the first point, the CSM is often used as a safe scheduler, whereas for the second, the supervisor can be interpreted as a safe discrete tele-operation system.

STG (Symbolic Test Generation) is a prototype tool for the generation and execution of test cases using symbolic techniques. It takes as input a specification and a test purpose described as IOSTS, and generates a test case program, also in the form of an IOSTS. Test generation in STG is based on a syntactic product of the specification and test purpose IOSTS, an extraction of the subgraph corresponding to the test purpose, elimination of internal actions, determinisation, and simplification. The simplification phase now relies on NBAC, which approximates reachable and coreachable states using abstract interpretation. It is used to eliminate unreachable states, and to strengthen the guards of system inputs in order to eliminate some *Inconclusive* verdicts. After a translation into C++ or Java, test cases can be executed on an implementation in the corresponding language. Constraints on system input parameters are solved on the fly (i.e. during execution) using a constraint solver. The first version of STG was developed in C++, using Omega as the constraint solver during execution. This version has been deposited at APP (IDDN.FR.001.510006.000.S.P.2004.000.10600).

A new version in OCaml has been developed in the last two years. This version is more generic and will serve as a library for symbolic operations on IOSTS. Most functionalities of the C++ version have been re-implemented. A new translation of abstract test cases into Java executable tests has also been developed, in which the constraint solver is LuckyDraw (VERIMAG). This version has also been deposited at APP and is available for download on the web, together with its documentation and some examples.

Sigali is a model-checking tool that operates on ILTS (Implicit Labeled Transition Systems, an equational representation of an automaton), an intermediate model for discrete event systems. It offers functionalities for the verification of reactive systems and discrete controller synthesis. It is developed jointly by the ESPRESSO and VerTeCs teams. The techniques used consist in manipulating the system of equations instead of the set of solutions, which avoids the enumeration of the state space. Each set of states is uniquely characterized by a predicate, and operations on sets can be equivalently performed on the associated predicates. Therefore, a wide spectrum of properties, such as liveness, invariance, reachability and attractivity, can be checked. Algorithms for the computation of predicates on states are also available , . Sigali is connected with the Polychrony environment (ESPRESSO project-team) as well as the Matou environment (VERIMAG), thus allowing the modeling of reactive systems by means of Signal specifications or Mode Automata and the visualization of the synthesized controller by an interactive simulation of the controlled system. Sigali is protected by APP (Agence de Protection des Programmes).
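The fixpoint computations behind such reachability and invariance checks can be sketched as follows. For readability the sketch manipulates explicit Python sets; Sigali instead represents each set of states by a predicate of the equational system, which is precisely how it avoids the enumeration performed here:

```python
def reachable(init, post):
    """Least fixpoint of the image function: all states reachable from init."""
    reach, frontier = set(init), set(init)
    while frontier:
        new = post(frontier) - reach   # image of the frontier, minus known states
        reach |= new
        frontier = new
    return reach

def invariant_holds(init, post, good):
    """Invariance check: every reachable state satisfies the predicate good."""
    return all(good(s) for s in reachable(init, post))
```

For example, with a counter modulo 4 as the transition function, the invariant "counter < 4" holds while "counter < 3" is violated by the reachable state 3.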

Ctrl-S is a tool dedicated to the control and simulation of structured discrete event systems. Ctrl-S is a graphical tool dedicated to (1) the simulation of synchronous products of finite state machines, and (2) the integration of toolboxes that compute their controllers. It now encompasses the former tool Syntool that was developed in our team during the past years.

For several years we have been interested in the control of Concurrent Discrete Event Systems defined by a collection of components that interact with each other. We investigate the computation of the supremal controllable language contained in the language of the specification. We use a modular centralized approach and perform the control on some approximations of the plant derived from the behavior of each component. The behavior of these approximations is restricted so that they respect a new language property for discrete event systems, called the *partial controllability condition*, that depends on the safety property. It is shown that, under some assumptions (the objectives have to be *locally consistent*), the intersection of these “controlled approximations” corresponds to the supremal controllable language contained in the property with respect to the plant. This computation is performed without building the whole plant. Further, we relax the usual assumption that all shared events are controllable by introducing two new structural conditions relying on the global mutual controllability condition. The novel concept used as a sufficient structural condition is strong global mutual controllability. The main result uses a weaker condition, called global mutual controllability, together with local consistency of the specification. An example illustrates the approach. This work has been done in cooperation with Jan Komenda (Academy of Sciences, Brno, Czech Republic), Jan van Schuppen (CWI, The Netherlands) and Benoit Gaudin (UCD, Dublin).
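For reference, the classical monolithic computation that the modular approach avoids, pruning the plant/specification product until no remaining state violates controllability, can be sketched as follows. The automaton encoding is illustrative, and the modular approximations described above are not reproduced:

```python
def enabled(delta, q):
    """Events enabled in state q of an automaton with delta: (state, event) -> state."""
    return {e for (s, e) in delta if s == q}

def supcon(plant, spec, uncontrollable):
    """Explicit-state sketch of the supremal controllable sublanguage:
    build the plant/spec product, then iteratively remove product states
    where the plant enables an uncontrollable event that the (pruned)
    product does not, until a fixpoint is reached."""
    # 1. Synchronous product of plant and specification.
    delta, init = {}, (plant["init"], spec["init"])
    states, todo = {init}, [init]
    while todo:
        (p, s) = todo.pop()
        for e in enabled(plant["delta"], p) & enabled(spec["delta"], s):
            t = (plant["delta"][(p, e)], spec["delta"][(s, e)])
            delta[((p, s), e)] = t
            if t not in states:
                states.add(t)
                todo.append(t)
    # 2. Iterated pruning of uncontrollable states.
    changed = True
    while changed:
        changed = False
        for q in list(states):
            p = q[0]  # plant component of the product state
            bad = any(e in uncontrollable and
                      ((q, e) not in delta or delta[(q, e)] not in states)
                      for e in enabled(plant["delta"], p))
            if bad:
                states.remove(q)
                changed = True
        delta = {k: v for k, v in delta.items()
                 if k[0] in states and v in states}
    return states, delta
```

In the toy usage below, a controllable event `c` leads to a state where an uncontrollable `u` would violate the specification, so the supremal controllable behaviour is to never fire `c` at all.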

Embedded systems require safe design methods based on formal methods, as well as safe execution based on fault-tolerance techniques. This year, we propose a safe design method for safe execution systems: it uses optimal discrete controller synthesis (DCS) to generate a correct reconfiguring fault-tolerant system. The properties enforced concern consistent execution, functionality fulfillment (whatever the faults, under some failure hypothesis), and several optimizations (of the tasks' execution time). We propose an algorithm for optimal DCS on bounded paths. We propose model patterns for a set of periodic tasks with checkpoints, a set of distributed, heterogeneous and fail-silent processors, and an environment model that expresses the potential fault patterns. We describe an implementation of our method, using the Sigali symbolic DCS tool and Mode Automata. This work has been done in cooperation with Emil Dumitrescu, Alain Girault and Eric Rutten , , .

This year, we have pursued, in collaboration with Sophie Pinchinat from the INRIA project S4 at IRISA, the development of the open platform named Ctrl-S, dedicated to (1) the simulation of synchronous products of finite state machines, and (2) the integration of toolboxes that compute their controllers. This development started in 2005 as a demo for the 30th birthday of IRISA. Programming tasks were assigned to Samer Maroun, an MSc. student from the “École Supérieure d'Ingénieurs de Beyrouth” (Lebanon), supported by an INRIA internship. We also pursued the integration of the tool Syntool, by considering new controller synthesis algorithms. A generic 3D library of components has been developed, allowing demonstrations to be devised easily .

In this work we describe a methodology integrating verification and conformance testing. A specification of a system - an extended input-output automaton, which may be infinite-state - and a set of safety properties (“nothing bad ever happens”) and possibility properties (“something good may happen”) are assumed. The properties are first tentatively verified on the specification using automatic techniques based on approximated state-space exploration, which are sound but, as a price to pay for automation, are not complete for the given class of properties. Because of this incompleteness and of state-space explosion, the verification may not succeed in proving or disproving the properties. However, even if verification did not succeed, the testing phase can proceed and provide useful information about the implementation. Test cases are automatically and symbolically generated from the specification and the properties, and are executed on a black-box implementation of the system. The test execution may detect violations of conformance between implementation and specification; in addition, it may detect violation/satisfaction of the properties by the implementation and by the specification. In this sense, testing completes verification. The approach is illustrated on simple examples and on a Bounded Retransmission Protocol.

This work is done in collaboration with Bertrand Jeannet (Inria Rhône-Alpes) and partly supported by France Telecom R & D. It addresses the generation of test cases for testing the conformance of a reactive black-box implementation with respect to its specification. We aim at extending the principles and algorithms of model-based testing to recursive interprocedural specifications that can be modeled by Push-Down Systems (PDS). Such specifications may be more compact than non-recursive ones and are more expressive. The generated test cases are selected according to a test purpose, a (set of) scenarios of interest that one wants to observe during test execution. The test generation method we propose in this paper is based on program transformations and a coreachability analysis, which allows one to decide whether and how the test purpose can still be satisfied. However, despite the possibility of performing an exact analysis, the inability of test cases to inspect their own stack prevents them from fully exploiting the coreachability information. We discuss this partial observation problem, its consequences, and how to minimize its impact .

While a lot of work has been done on formal verification of security, in particular for cryptographic protocols, very little has been done on formal security testing. As a consequence, testing security often relies on expert knowledge and leads to ad hoc solutions. The general challenge is to study how formalization of security policies and information systems can help in automatically (or systematically) performing security testing. Several approaches are already being investigated. In the context of ACI Potestat and RNRT Politess, we study how test generation techniques, and in particular test generation from safety properties , can be used for the automatic generation of possible attacks; because of the abstraction used in modelling and generation, these attacks should then be tested on the real system.

Finally, during our collaboration with the University of Nijmegen we studied the combination of verification, testing and learning. The verification of cryptographic protocol specifications is an active research topic and has received much attention from the formal verification community. By contrast, the black-box testing of actual implementations of protocols, which is, arguably, as important as verification for ensuring the correct functioning of protocols in the “real” world, is not much studied. We propose an approach for checking secrecy and authenticity properties not only on protocol specifications, but also on black-box implementations. The approach is compositional and integrates ideas from verification, testing, and learning. It is illustrated on the Basic Access Control protocol implemented in biometric passports .

The PhD thesis of Tristan Le Gall, co-supervised by Bertrand Jeannet (Pop-Art project-team), is concerned with the verification of asynchronous systems communicating through FIFO channels and its applications. Communication protocols can be formally described by the Communicating Finite-State Machines (CFSM) model. This model is expressive, but not expressive enough to deal with complex protocols that involve structured messages encapsulating integers or lists of integers. That is the reason why we studied, this year, more complex models with an infinite alphabet of messages. We thus propose a new abstract domain for languages over infinite alphabets, which acts as a functor taking an abstract domain for a concrete alphabet and lifting it to an abstract domain for words on this alphabet. The abstract representation is based on lattice automata, which are finite automata labeled by elements of an atomic lattice. We define a normal form, standard language operations and a widening operator for these automata. We apply this abstract lattice to the verification of symbolic communicating machines, and we discuss its usefulness for interprocedural analysis , .
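The functor idea, lifting an abstract domain for letters to an abstraction of words, can be illustrated with integer intervals as the letter domain. The sketch below only joins and widens words letter-wise (padding the shorter word with bottom); actual lattice automata are far more expressive, since they abstract whole languages by automata labeled with lattice elements:

```python
BOT = None  # bottom element of the interval lattice

def join_itv(a, b):
    """Least upper bound of two intervals (None is bottom)."""
    if a is BOT: return b
    if b is BOT: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen_itv(a, b):
    """Interval widening: any unstable bound jumps to infinity,
    which guarantees termination of fixpoint iterations."""
    if a is BOT: return b
    if b is BOT: return a
    lo = a[0] if b[0] >= a[0] else float("-inf")
    hi = a[1] if b[1] <= a[1] else float("inf")
    return (lo, hi)

def lift(op):
    """Functor: lift a letter-level lattice operation to words of
    letters, applied component-wise (shorter word padded with bottom)."""
    def word_op(u, v):
        n = max(len(u), len(v))
        u = u + [BOT] * (n - len(u))
        v = v + [BOT] * (n - len(v))
        return [op(a, b) for a, b in zip(u, v)]
    return word_op
```

Here widening a word whose last letter keeps growing immediately stabilizes it at an unbounded interval, mirroring the role of the widening operator defined for lattice automata.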

This is common work with Manuel Clavel from the University of Madrid. In , we present an approach based on inductive theorem proving for verifying invariance properties of systems specified in Rewriting Logic (RL), an executable specification language implemented, among others, in the Maude tool. Since theorem proving is not directly available for rewriting logic, we define an encoding of rewriting logic into its Membership Equational (sub)Logic (MEL). Then, inductive theorem provers for MEL, such as the ITP tool, can be used for verifying the resulting membership equational logic specification, and, implicitly, for verifying the original RL specification. The approach is illustrated on the 2-process Bakery algorithm and also on the parametric, n-process version of the algorithm.
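For the 2-process instance, the invariant established by theorem proving (mutual exclusion) can be cross-checked by plain state exploration, provided ticket values are bounded. The model below assumes an atomic ticket assignment and is only a sanity check; the theorem-proving approach also covers the unbounded and parametric cases:

```python
from collections import deque

def bakery_states(cap=4):
    """Explore the 2-process Bakery protocol with atomic ticket
    assignment (t_i := t_other + 1), bounding tickets by cap so the
    state space stays finite. A state is (pc0, t0, pc1, t1)."""
    init = ("idle", 0, "idle", 0)
    seen, todo = {init}, deque([init])
    while todo:
        st = todo.popleft()
        for i in (0, 1):
            pc, t = st[2 * i], st[2 * i + 1]
            other_t = st[2 * (1 - i) + 1]
            succs = []
            if pc == "idle" and other_t + 1 <= cap:
                succs.append(("wait", other_t + 1))      # take a ticket
            elif pc == "wait" and (other_t == 0 or t < other_t):
                succs.append(("crit", t))                # enter critical section
            elif pc == "crit":
                succs.append(("idle", 0))                # leave, drop ticket
            for (pc2, t2) in succs:
                nxt = list(st)
                nxt[2 * i], nxt[2 * i + 1] = pc2, t2
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
    return seen

def mutual_exclusion(states):
    """The invariant: the two processes are never both critical."""
    return all(not (s[0] == "crit" and s[2] == "crit") for s in states)
```

Both processes do reach their critical sections in this model, so the invariant is not vacuously true.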

Like most models used in model-checking, timed automata are an idealized mathematical model for representing systems with strong timing requirements. In such mathematical models, properties can be violated due to unlikely (sequences of) events. In , we propose two new semantics for the satisfaction of LTL formulas, one based on probabilities and the other based on topology, to rule out these sequences. We prove that the two semantics are equivalent and lead to a PSPACE-complete model-checking problem for LTL over finite executions.

Regarding security, besides our work on test generation for security properties, we have been interested in constructing monitors for the detection of confidential information flow in the context of partially observed discrete event systems modelled by finite labelled transition systems. We focused on the case where the secret information is given as regular languages. First, we characterised the set of observations allowing an attacker to infer secret information. Further, based on the theory of diagnosis of discrete event systems, we provided necessary and sufficient conditions under which detection and prediction of secret information flow can be ensured, and constructed a monitor allowing an administrator to detect it. We considered the general case where the attacker and the administrator have different partial views of the system .
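The attacker's inference can be sketched with the classical belief (subset) construction under partial observation. The encoding below is illustrative (transitions as a dict from state to (event, target) pairs), and the secret is given as a set of states rather than a regular language:

```python
def uclosure(states, trans, unobs):
    """Close a set of states under unobservable transitions."""
    clo, todo = set(states), list(states)
    while todo:
        s = todo.pop()
        for (a, t) in trans.get(s, []):
            if a in unobs and t not in clo:
                clo.add(t)
                todo.append(t)
    return frozenset(clo)

def observer_step(belief, obs_event, trans, unobs):
    """Belief update: fire obs_event from every compatible state,
    then close under unobservable moves."""
    return uclosure({t for s in belief for (a, t) in trans.get(s, [])
                     if a == obs_event}, trans, unobs)

def leaks_secret(belief, secret):
    """The observation reveals the secret exactly when every state
    compatible with it lies inside the secret set."""
    return bool(belief) and belief <= secret
```

A monitor for the administrator simply runs this observer alongside the system and raises an alarm whenever `leaks_secret` becomes true.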

The goal of this 3-year project (starting October 2004) is to build a platform for the formal validation of France Telecom's vocal phone services. Vocal services are based on speech recognition and synthesis algorithms; they include automatic connection to the callee's phone number by pronouncing her name, or automatic pronunciation of the callee's name whose phone number was dialed in by the user. Here, we are not interested in validating the voice recognition/synthesis algorithms, but in the logic surrounding them. For example, the system may allow itself a certain number of attempts for recognizing a name, after which it switches to normal number-dialing mode, during which the user may choose to go back to voice-recognition mode by pronouncing a certain keyword. This logic may become quite intricate, and this complexity is multiplied by the number of clients that may be using the service at any given time. Its correctness has been identified by France Telecom as a key factor in the success of the deployment of voice-based systems. To validate these services, we plan to apply a combination of formal verification and conformance testing techniques (cf. Section ). In the context of Camille Constant's PhD, we also study test generation from models of programs with recursion (pushdown automata and extensions).

The POTESTAT project [2004-2007].

In the framework of open service implementations, based on the interconnection of heterogeneous systems, the security managers lack of well-formalized analysis techniques. The security of such systems is therefore organized from pragmatic elements, based on well-known vulnerabilities and their associated solutions. It then remains to verify if such security policies are correctly and effectively implemented in the actual system. This is usually carried out by auditing the administrative procedures and the system configuration. Tests are then performed, for instance by probing, to check the presence of some particular vulnerabilities. Although some tools are already available for specific tests (like password crackers), there is no solution to analyse the whole system conformance with respect to a security policy. The initial approach to the problem was based on previous experience of the partners. We had experience on the use of formal models either to test the conformance of a distributed implementation to a specification (conformance testing for network protocols) or to analyse downloaded code (where testing can complement static analysis techniques). Based on this background, we proposed the two following different directions.

Diagnosis. Whereas protocol testing is usually done through active tests, it turns out that passive testing techniques may be better suited to checking security requirements, for instance through monitors or access controllers .

Generation of attacks. We investigated the use of test generation techniques for the generation of attacks from security policies (modeled as observers) and network models (an abstraction of the network behavior) .

The POLITESS project.

We are partners of the ARTIST2 Network of Excellence on Embedded Systems.

In ARTIST2, the main role of VerTeCs is to integrate our research on testing and test generation based on symbolic transition systems with other work based on timed models.

We collaborate with several Inria project-teams: with the LANDE project-team in two ACI-Sécurité grants (V3F and POTESTAT); with the ESPRESSO project-team on the development of the Sigali tool inside the Polychrony environment; with the Pop-Art project-team on the use of the controller synthesis methodology for the control of control-command systems (e.g. robotic systems); with DISTRIBCOM on security testing in the context of the Potestat and Politess grants; with the S4 project-team on the use of control, game theory and diagnosis for test generation; and with the VASY project-team on the use of CADP libraries in TGV and the distribution of TGV in the CADP toolbox.

Our main collaborations in France are with Vérimag. Beyond formalized collaborations (ACI Potestat and APRON, RNRT Politess, Rex ARTIST2), we also collaborate on the connection of NBAC with Lurette for the analysis of Lustre programs, as well as the connection of SIGALI and Matou. We are also involved in several collaborations with LSR Imag (ACI Potestat and RNRT Politess).

in Tunisia (M. Tahar Bhiri) on security testing. Thierry Jéron co-supervises Hatem Hamdi, a PhD student working on robustness and security testing.

(Jan Komenda) on supervisory control of concurrent systems.

(Prof. Manuel Clavel) on theorem proving for rewriting logic.

in USA (Prof. Stéphane Lafortune) on control and diagnosis of discrete event systems.

is teaching at Licence and Master levels at the University of Rennes 1 (96h/year).

is teaching at INSA Rennes (30h in 2006-2007) on the Scheme programming language.

is teaching on Model-based Testing in the Research Master of Computer Science at the University of Rennes 1.

is teaching at Licence and Master levels at the University of Rennes 1 (96h/year).

**Current PhD. theses:**

“*Verification and symbolic test generation for reactive systems*”, 3rd year,

“*Abstract lattice of fifo channels for verification and control synthesis*”, 3rd year,

“*Testing of network security*”, in collaboration with the University of Sfax, 2nd year,

“*Formal methods for testing and monitoring security of open networks*”, 1st year.

**Trainees 2005-2006:**

“*A tool for the simulation of controlled discrete-event systems*”, internship, ESIB (Lebanon) (2.5 months).

was PC member of Testcom/Fates'07 (Tallinn, Estonia) in June 2007 and Rosatea'07 (Boston) in July 2007. He is PC member of the forthcoming Testcom/Fates'08 (Osaka, Japan) in June 2008 and IEEE ICST 2008 (Lillehammer, Norway) in April 2008. Thierry Jéron gave a keynote. He is member of the steering committee and co-organiser of Movep 2008 (Orléans) in June 2008. He was reviewer and president of the PhD defense committee of Tarik Nahhal (Verimag, Grenoble, October 2007), reviewer of the PhD defense of Moez Krichen (Verimag, Grenoble, December 2007), and member of the PhD defense committee of Alexandra Desmoulins (Univ. Rennes 1, December 2007). He is member of the IFIP Working Group 10.2 on Embedded Systems.

was PC member of the MSR'07 conference on modeling of reactive systems, as well as of the ICINCO'07 conference. He is a member of the IFAC Technical Committee TC 1.3 on Discrete Event and Hybrid Systems for the 2005-2008 triennium. He is PC member of the forthcoming Wodes'08 and ICINCO'08 conferences. He is a member of the “Commission de Spécialistes 27e section” at the University of Rennes 1.

Vlad Rusu was PC member of TestCom/Fates'07 (Tallinn, Estonia). He gave invited talks at LIFL (Lille) in May 2007, at LIFC (Besançon) in June 2007, and at LORIA (Nancy) in June 2007. He was a referee in the PhD committee of Delphine Longuet (Univ. Evry, Oct. 2007).

was invited to give seminars on *Verification of communicating protocols / Abstract interpretation of regular languages* at VERIMAG Grenoble (June 2006) and LIAFA Paris (October 2006).

gave a talk on “the construction of monitors for the supervision of security properties” during the summer school FOSAD 2007. He is president of the ADOC (PhD students' association).