The VerTeCs team is focused on the reliability of reactive software using formal methods. By reactive software we mean software that continuously interacts with its environment. The environment can be a human user, for a complete reactive system, or another piece of software using the reactive software as a component. Among these systems, critical systems are of primary importance, as errors occurring during their execution may have dramatic economic or human consequences. Thus, it is essential to establish their correctness before they are deployed in a real environment. Correctness is also essential for less critical applications, in particular for COTS components, whose behavior should be trusted before integration into software systems.

For this, the VerTeCs team promotes the use of formal methods, i.e. formal specification and mathematically founded analysis methods. During the analysis and design phases, correctness of specifications with respect to requirements or higher-level specifications can be established by formal *verification*. Alternatively, *control* consists in forcing specifications to stay within desired behaviours by coupling them with a supervisor. During validation, *testing* can be used to check the conformance of implementations with respect to their specifications. *Test generation* is the process of automatically deriving test cases from specifications.

More precisely, the aim of the VerTeCs project is to improve the reliability of reactive systems by providing software engineers with methods and tools for automating **verification**, **test generation** and **controller synthesis** from formal specifications. We adapt or develop formal models for the description of testing and control artifacts, e.g. specifications, implementations, test cases, supervisors. We formally describe correctness relations (e.g. conformance or satisfaction). We also formally describe interaction semantics between testing artifacts. From these models, relations and interaction semantics, we develop algorithms for automatic test and controller synthesis that ensure desirable properties. We try to be as generic as possible in terms of models and techniques, in order to cope with a wide range of specification languages and application domains. We implement prototype tools for distribution in the academic world, or for transfer to industry.

Our research is based on formal models, and our basic tools are **verification** techniques such as model checking, theorem proving, abstract interpretation, the control theory of discrete event systems, and their underlying models and logics. The close connection between testing, control and verification produces a synergy between these research topics and allows us to share theories, models, algorithms and tools.

The formal models we use are mainly automata-like structures such as labelled transition systems (LTS) and some of their extensions: an LTS is a tuple M = (Q, A, →, q_0) where Q is a non-empty set of states; q_0 ∈ Q is the initial state; A is the alphabet of actions; and → ⊆ Q × A × Q is the transition relation. These models are adapted to testing and controller synthesis.
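As an illustrative sketch (the class, state and action names below are our own, not those of our tools), an LTS and the breadth-first computation of its reachable states can be written as:

```python
from collections import deque

# A minimal sketch of an LTS M = (Q, A, ->, q0).
class LTS:
    def __init__(self, states, alphabet, transitions, initial):
        self.states = set(states)            # Q: non-empty set of states
        self.alphabet = set(alphabet)        # A: alphabet of actions
        self.transitions = set(transitions)  # subset of Q x A x Q
        self.initial = initial               # q0 in Q

    def reachable(self):
        """States reachable from q0, by breadth-first traversal."""
        seen, todo = {self.initial}, deque([self.initial])
        while todo:
            q = todo.popleft()
            for (src, _a, dst) in self.transitions:
                if src == q and dst not in seen:
                    seen.add(dst)
                    todo.append(dst)
        return seen

m = LTS({"q0", "q1", "q2", "q3"}, {"a", "b"},
        {("q0", "a", "q1"), ("q1", "b", "q2")}, "q0")
print(sorted(m.reachable()))  # q3 is unreachable and is not listed
```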

To model reactive systems in the testing context, we use Input/Output labelled transition systems (IOLTS for short). In this setting, the interactions between the system and its environment (where the tester lies) must be partitioned into inputs (controlled by the environment), outputs (observed by the environment), and internal (non-observable) events modeling the internal behavior of the system. The alphabet A is then partitioned into A = A_? ∪ A_! ∪ A_int, where A_! is the alphabet of outputs, A_? the alphabet of inputs, and A_int the alphabet of internal actions.

In the controller synthesis theory, we also distinguish between controllable and uncontrollable events (A = A_c ∪ A_uc), and between observable and unobservable events (A = A_o ∪ A_uo).

In order to cope with more realistic models, closer to real specification languages, we also need higher-level models that consider both control and data aspects. We defined (input-output) symbolic transition systems ((IO)STS), which are extensions of (IO)LTS that operate on data (i.e., program variables, communication parameters, symbolic constants) through message passing, guards, and assignments. Formally, an IOSTS is a tuple (V, Θ, Σ, T), where V is a set of variables (including a counter variable encoding the control structure); Θ is the initial condition, defined by a predicate on V; Σ is the finite alphabet of actions, where each action has a signature (just like in IOLTS, Σ can be partitioned as e.g. Σ = Σ_? ∪ Σ_! ∪ Σ_int); and T is a finite set of symbolic transitions of the form t = (a, p, G, A), where a is an action (possibly with a polarity reflecting its input/output/internal nature), p is a tuple of communication parameters, G is a guard defined by a predicate on p and V, and A is an assignment of variables. The semantics of an IOSTS is defined in terms of an (IO)LTS whose states are vectors of values of the variables, and whose transitions are labelled with instantiated actions (actions with valued communication parameters). This (IO)LTS semantics allows us to perform syntactical transformations at the (IO)STS level while ensuring semantical properties at the (IO)LTS level. We also consider extensions of these models with added features such as recursion, fifo channels, etc. An alternative to IOSTS for specifying systems with data variables is the model of synchronous dataflow equations.
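The relation between a symbolic transition t = (a, p, G, A) and its (IO)LTS semantics can be sketched as follows; the deposit action, its guard and its assignment are purely illustrative assumptions, not part of our actual models:

```python
# One IOSTS transition t = (a, p, G, A) and its (IO)LTS semantics,
# obtained by instantiating the communication parameter p with a value.

def make_transition(action, guard, assign):
    return {"action": action, "guard": guard, "assign": assign}

# ?deposit(p) with guard p > 0 and assignment balance := balance + p
t = make_transition(
    action="?deposit",
    guard=lambda v, p: p > 0,
    assign=lambda v, p: {**v, "balance": v["balance"] + p},
)

def fire(state, t, p):
    """Concrete (IO)LTS step: states are valuations of the variables,
    labels are instantiated actions carrying the parameter value."""
    if t["guard"](state, p):
        return (f'{t["action"]}({p})', t["assign"](state, p))
    return None  # guard false: no transition in the semantics

s0 = {"balance": 0}
print(fire(s0, t, 5))    # instantiated action and updated valuation
print(fire(s0, t, -1))   # None: the guard p > 0 fails
```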

Our research is based on well-established theories: conformance testing, supervisory control, abstract interpretation, and theorem proving. Most of the algorithms that we employ have their origins in these theories:

graph traversal algorithms (breadth-first, depth-first, strongly connected components, ...). We use these algorithms for verification as well as for test generation and control synthesis.

BDD (Binary Decision Diagram) algorithms, for manipulating Boolean formulas, and their MTBDD (Multi-Terminal Binary Decision Diagram) extension for manipulating more general functions. We use these algorithms for verification and test generation.

abstract interpretation algorithms, specifically in the abstract domain of convex polyhedra (for example, Chernikova's algorithm for the computation of dual forms). Such algorithms are used in verification and test generation.

logical decision algorithms, such as satisfiability of formulas in Presburger arithmetic. We use these algorithms during the generation and execution of symbolic test cases.

Most of our research, and in particular controller synthesis and conformance testing, relies on the ability to solve some verification problems. A large part of these problems reduces to reachability and coreachability problems on a formal model (a state s is *reachable* from an initial state s_i if some execution starting from s_i leads to s; symmetrically, s is *coreachable* from a set of final states if some execution starting in s reaches this set).

Verification in its full generality consists in checking that a system, specified in a formal model, satisfies a required property. When the state space of the system is finite and not too large, verification can be carried out by graph algorithms (model checking). For large or infinite state spaces, we can perform approximate computations, either by computing a finite abstraction and resorting to graph algorithms, or preferably by using more sophisticated abstract interpretation techniques. Another way to cope with large or infinite state systems is deductive verification, which, either alone or in combination with compositional and abstraction techniques, can deal with complex systems that are beyond the scope of fully automatic methods.

The techniques described above, which are dedicated to the analysis of LTS, are already mature. It seems natural to extend them to IOSTS or to dataflow applications that manipulate variables taking their values in possibly infinite data domains.

The techniques we develop for test generation or controller synthesis require solving state reachability and state coreachability problems; these can be addressed by fixpoint computations (and also by deductive methods).

The big change induced by taking into account the data, and not only the (finite) control, of the systems under study is that the fixpoints become uncomputable. This undecidability is overcome by resorting to approximations, using the theoretical framework of Abstract Interpretation.

Abstract Interpretation is a theory of approximate solving of fixpoint equations applied to program analysis. Most program analysis problems, among others reachability analysis, come down to solving a fixpoint equation

x = F(x), x ∈ C

where C is a lattice. In the case of reachability analysis, if we denote by S the state space of the considered program, C is the lattice ℘(S) of sets of states, ordered by inclusion, and F is roughly the ``*successor states*'' function defined by the program.

The exact computation of the solution of such an equation is generally not possible, for undecidability (or complexity) reasons. The fundamental principles of Abstract Interpretation are:

to substitute for the *concrete domain* C a simpler *abstract domain* A (static approximation) and to transpose the fixpoint equation into the abstract domain, so that one has to solve an equation y = G(y), y ∈ A;

to use a *widening operator* (dynamic approximation) to make the iterative computation of the least fixpoint of G converge after a finite number of steps to some upper-approximation (more precisely, a post-fixpoint).

Approximations are conservative, so that the obtained result is an upper-approximation of the exact result. These two principles are well illustrated by Interval Analysis, which consists in associating with each numerical variable of a program an interval representing an (upper) set of reachable values:

One substitutes for the concrete domain ℘(R^n), induced by the numerical variables, the abstract domain (I_R)^n, where I_R denotes the set of intervals on real numbers; a set of values of a variable is then represented by the smallest interval containing it;

An iterative computation on this domain may not converge: it is indeed easy to generate an infinite sequence of intervals that is strictly growing. The ``standard'' widening operator extrapolates to +∞ the upper bound of an interval if this bound does not stabilize within a given number of steps (and similarly to -∞ for the lower bound).
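These two principles can be illustrated by a toy interval analysis of the loop `x := 0; while x < 1000 do x := x + 1`, using the standard widening described above (a minimal sketch, not our actual analyzer):

```python
# Toy interval analysis with the standard widening.
# We analyse: x := 0; while x < 1000: x := x + 1
INF = float("inf")

def join(a, b):                     # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                    # extrapolate unstable bounds to infinity
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def step(x):                        # abstract effect of one loop iteration
    lo, hi = x
    lo, hi = lo, min(hi, 999)       # meet with the loop guard x < 1000
    return (lo + 1, hi + 1)         # then x := x + 1

x = (0, 0)                          # initialization x := 0
for _ in range(5):                  # widening forces quick convergence
    nxt = join(x, step(x))
    if nxt == x:
        break
    x = widen(x, nxt)
print(x)  # an upper-approximation of the reachable values of x
```

Widening extrapolates the unstable upper bound to +∞, so the result (0, +∞) is a conservative upper-approximation; a subsequent narrowing pass (not shown) would typically recover the bound x ≤ 1000.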

In this example, the state space ℘(R^n) that should be abstracted has a simple structure, but this may be more complicated when variables belong to different data types (Booleans, numerics, arrays) and when it is necessary to establish *relations* between the values of different types.

Programs performing dynamic allocation of objects in memory have an even more complex state space. Solutions have been devised to represent in an approximate way the memory heap and the pointers between memory cells by graphs (*shape analysis*). Values contained in memory cells are, however, generally ignored.

In the same way, programs with recursive procedure calls, parameter passing and local variables are more delicate to analyse with precision. The difficulty is to abstract the execution stacks, which may have a complex structure, particularly when parameter passing by reference is allowed, as it induces aliasing on the stack.

For verification we also use theorem proving, and more particularly the PVS and Coq proof assistants. These are two general-purpose systems based on two different versions of higher-order logic. A verification task in such a proof assistant consists in encoding the system under verification and its properties into the logic of the proof assistant, together with verification *rules* that allow one to prove the properties. Using the rules usually requires input from the user; for example, proving that a state predicate holds in every reachable state of the system (i.e., that it is an *invariant*) typically requires providing a stronger, *inductive* invariant, which is preserved by every execution step of the system. Another type of verification problem is proving *simulation* between a concrete and an abstract semantics of a system. This too can be done by induction in a systematic manner, by showing that, in each reachable state of the system, each step of the concrete system is simulated by a corresponding step at the abstract level.
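The need for a stronger inductive invariant can be seen on a toy transition system (a counter stepping by 2 modulo 10; all names below are illustrative):

```python
# Checking that a predicate is an *inductive* invariant of a tiny
# transition system: it holds initially and is preserved by every step.
init = 0

def successors(x):
    return [(x + 2) % 10]   # x -> x + 2, modulo 10

def is_inductive(inv):
    """inv holds in the initial state and is preserved by every step
    from every state satisfying inv (reachable or not)."""
    if not inv(init):
        return False
    return all(inv(y) for x in range(10) if inv(x) for y in successors(x))

even = lambda x: x % 2 == 0
print(is_inductive(even))             # evenness is inductive
print(is_inductive(lambda x: x < 9))  # not inductive (see note below)
```

Here x < 9 holds in every reachable state (0, 2, 4, 6, 8) but is not inductive, because the unreachable state 7 satisfies it yet steps to 9; the stronger invariant ``x is even'' is inductive and implies it.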

In testing, we are mainly interested in conformance testing. Conformance testing consists in checking whether a black box implementation under test (the real system that is only known by its interface) behaves correctly with respect to its specification (the reference which specifies the intended behavior of the system). In the line of model-based testing, we use formal specifications and their underlying models to unambiguously define the intended behavior of the system, to formally define conformance and to design test case generation algorithms. The difficult problems are to generate test cases that correctly identify faults (the oracle problem) and, as exhaustiveness is impossible to reach in practice, to select an adequate subset of test cases that are likely to detect faults. Hereafter we detail some elements of the models, theories and algorithms we use.

**Models:** We use IOLTS (or IOSTS) as formal models for specifications, implementations, test purposes, and test cases. Most often, specifications are not directly given in such low-level models, but are written in higher-level specification languages (e.g. SDL, UML, Lotos). The tools associated with these languages often contain a simulation API that implements their semantics in the form of IOLTS. On the other hand, the IOSTS model is expressive enough to allow a direct representation of most constructs of the higher-level languages.

**Conformance testing theory:** We adapt a well-established theory of conformance testing, which formally defines conformance as a relation between formal models of specifications and implementations. This conformance relation, called **ioco**, is defined in terms of visible behaviors (called *suspension traces*) of the implementation I (denoted by STraces(I)) and those of the specification S (denoted by STraces(S)). Suspension traces are sequences of inputs, outputs or quiescence (absence of action, denoted by δ); they thus abstract away internal behaviors that cannot be observed by testers. The conformance relation ioco was originally written as follows:

I ioco S  ⟺  ∀σ ∈ STraces(S), Out(I after σ) ⊆ Out(S after σ)

where M after σ is the set of states where M can stay after the observation of the suspension trace σ, and Out(M after σ) is the set of outputs and quiescence allowed by M in this set. Intuitively, I ioco S if, after a suspension trace of the specification, the implementation I can only show outputs and quiescences of the specification S. We re-formulated ioco as a partial inclusion of visible behaviors, as follows:

STraces(I) ∩ [STraces(S) · (A_! ∪ {δ})] ⊆ STraces(S)

Intuitively, this says that suspension traces of I which are suspension traces of S prolonged by an output or quiescence should still be suspension traces of S. Interestingly, this characterization presents conformance with respect to S as a safety property of the suspension traces of I. In fact, the set STraces(S) · (A_! ∪ {δ}) \ STraces(S) characterizes the finite unexpected behaviours. Thus conformance with respect to S is clearly a safety property of I, whose negation can be specified by a ``non-conformance'' observer A_{¬ioco S} built from S and recognizing these unexpected behaviours. However, as I is a black box, one cannot check conformance exhaustively, but may only experiment on I using test cases, expecting the detection of some non-conformances. In fact, the non-conformance observer A_{¬ioco S} can also be thought of as the *canonical tester* of S for ioco, i.e. the most general testing process of S for ioco. It then also serves as a basis for test selection.
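On finite deterministic models, the ioco check can be sketched as follows (the coffee-machine example, the action names and the explicit trace list are illustrative assumptions; the actual relation quantifies over all suspension traces of S):

```python
# Toy ioco check on finite deterministic IOLTS: outputs start with '!',
# inputs with '?', and DELTA marks quiescence (no output enabled).
DELTA = "delta"

def out(lts, state):
    """Outputs (and quiescence) enabled in a state."""
    outs = {a for (s, a, _) in lts if s == state and a.startswith("!")}
    return outs or {DELTA}

def after(lts, init, trace):
    """The state reached after a trace (None if the trace is impossible)."""
    state = init
    for a in trace:
        if a == DELTA:
            continue  # quiescence does not change the state
        nxt = [d for (s, b, d) in lts if s == state and b == a]
        if not nxt:
            return None
        state = nxt[0]
    return state

def ioco(impl, spec, traces):
    """I ioco S: after each suspension trace of S, Out(I) ⊆ Out(S)."""
    for t in traces:
        si, ss = after(impl, 0, t), after(spec, 0, t)
        if ss is not None and si is not None \
           and not out(impl, si) <= out(spec, ss):
            return False
    return True

S = [(0, "?coin", 1), (1, "!coffee", 2)]   # specification
I = [(0, "?coin", 1), (1, "!tea", 2)]      # implementation under test
traces = [(), ("?coin",), ("?coin", "!coffee")]
print(ioco(I, S, traces))  # False: I outputs !tea where S allows only !coffee
print(ioco(S, S, traces))  # True
```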

Test cases are processes executed against implementations in order to detect non-conformance. They are also formalized by IOLTS (or IOSTS), with special states indicating *verdicts*. The execution of test cases against implementations is formalized by a parallel composition with synchronization on common actions. Usually, a *Fail* verdict means that the IUT is rejected and should correspond to non-conformance, a *Pass* verdict means that the IUT exhibited a correct behavior in which some specific targeted behaviour has been observed, while an *Inconclusive* verdict is given to a correct behavior that is not targeted. Based on these models, the execution semantics, and the conformance relation, one can then define required properties of test cases and test suites (sets of test cases). Typical properties are soundness (only non-conformant implementations should be rejected by a test case) and exhaustiveness (every non-conformant implementation may be rejected by some test case). Soundness is not difficult to obtain, but exhaustiveness is not achievable in practice, and one has to select test cases.

**Test selection:** In the literature, in particular in white-box testing, test selection is often based on the coverage of some criteria (state coverage, transition coverage, etc.). But in practice, test cases are often associated with *test purposes* describing some particular behaviors targeted by a test case. We have developed test selection algorithms based on the formalization of these *test purposes*. In our framework, test purposes are specified as IOLTS (or IOSTS) associated with marked states or dedicated variables, giving them the status of automata or observers accepting runs (or sequences of actions, or suspension traces). We denote by ASTraces(S, TP) the suspension traces of these accepted runs. Selection of test cases then amounts to selecting these traces ASTraces(S, TP) and complementing them with unspecified outputs leading to *Fail*. Alternatively, this can be seen as the computation of a sub-automaton of the canonical tester A_{¬ioco S} whose accepted traces are ASTraces(S, TP) and whose failed traces are a subset of the unexpected behaviours. The resulting test case is then both an observer of the negation of a safety property (non-conformance with respect to S) and an observer of a reachability property (acceptance by the test purpose).

Test selection algorithms are based on the computation of the visible behaviors of the specification, STraces(S), involving the identification of quiescence (δ actions) followed by determinisation, the construction of a product between the specification and the test purpose, whose accepted behavior is ASTraces(S, TP), and finally the selection of these accepted behaviors. Selection can be reduced to a model-checking problem where one wants to identify states (and transitions between them) which are both reachable from the initial state and coreachable from the accepting states. We have proved that these algorithms ensure soundness. Moreover, the (infinite) set of all possibly generated test cases is also exhaustive. Apart from these theoretical results, our algorithms are designed to be as efficient as possible, in order to scale up to real applications.
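The reachable/coreachable selection step can be sketched on a plain graph (the example graph and state names are illustrative):

```python
# Test selection as a reachability/coreachability computation: keep the
# states both reachable from the initial state and coreachable from the
# accepting states; everything else is pruned.
def reach(edges, starts, forward=True):
    seen, todo = set(starts), list(starts)
    while todo:
        q = todo.pop()
        for (s, d) in edges:
            s, d = (s, d) if forward else (d, s)  # reverse edges if backward
            if s == q and d not in seen:
                seen.add(d)
                todo.append(d)
    return seen

edges = [("init", "a"), ("a", "accept"), ("a", "sink"), ("b", "accept")]
keep = reach(edges, {"init"}) & reach(edges, {"accept"}, forward=False)
print(sorted(keep))  # 'sink' (not coreachable) and 'b' (not reachable) pruned
```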

Our first test generation algorithms are based on enumerative techniques, thus adapted to IOLTS models, and optimized to fight the state-space explosion problem. We have developed on-the-fly algorithms, which consist in performing a lazy exploration of the set of states that are reachable in both the specification and the test purpose. This technique is implemented in the TGV tool. However, this enumerative technique suffers from some limitations when specification models contain data.

More recently, we have explored symbolic test generation techniques for IOSTS specifications. This is a promising technique whose main objective is to avoid the state-space explosion problem induced by the enumeration of values of variables and communication parameters. The idea consists in computing a test case in the form of an *IOSTS*, i.e., a reactive program in which the operations on data are kept in a symbolic form. Test selection is still based on test purposes (also described as IOSTS) and involves syntactical transformations of IOSTS models that should ensure properties of their IOLTS semantics. However, most of the operations involved in test generation (determinisation, reachability, and coreachability) become undecidable. For determinisation, we employ heuristics that allow us to solve the so-called bounded observable non-determinism (i.e., the result of an internal choice can be detected after finitely many observable actions). The product is defined syntactically. Finally, test selection is performed as a syntactical transformation of transitions, based on a semantical reachability and coreachability analysis. As both problems are undecidable for IOSTS, the syntactical transformations are guided by over-approximations computed using abstract interpretation techniques. Nevertheless, these over-approximations still ensure the soundness of test cases. These techniques are implemented in the STG tool, with an interface with NBAC used for abstract interpretation.

**The Supervisory Control Problem** is concerned with ensuring (not only checking) that a computer-operated system works correctly. More precisely, given a specification model and a required property, the problem is to control the specification's behavior, by coupling it to a supervisor, such that the controlled specification satisfies the property. The models used are LTSs, say G, and their associated languages, say L(G), which make a distinction between *controllable* and *non-controllable* actions and between *observable* and *non-observable* actions. Typically, the controlled system is constrained by the supervisor, which acts on the system's controllable actions and forces it to behave as specified by the property. The control synthesis problem can be seen as a constructive verification problem: building a supervisor that prevents the system from violating a property. Several kinds of properties can be ensured, such as reachability, invariance (i.e. safety), attractivity, etc. Techniques adapted from model checking are then used to compute the supervisor with respect to the objectives. Optimality must be taken into account, as one often wants to obtain a supervisor that constrains the system as little as possible.

**The Supervisory Control Theory overview**. Supervisory control theory deals with the control of Discrete Event Systems. In this theory, the behavior of the system S is assumed not to be fully satisfactory. Hence, it has to be restricted by means of a feedback control (named Supervisor or Controller) in order to achieve a given set of requirements. Namely, if S denotes the specification of the system and Φ is a safety property that has to be ensured on S (i.e., S does not necessarily satisfy Φ), the problem consists in computing a supervisor C such that

S ∥ C ⊨ Φ     (1)

where ∥ is the classical parallel composition between two LTSs. Given S, some events of S are said to be uncontrollable (A_uc), i.e. the occurrence of these events cannot be prevented by a supervisor, while the others are controllable (A_c). This means that not all the supervisors satisfying (1) are good candidates. In fact, the behavior of the controlled system must respect an additional condition, which happens to be similar to the ioco conformance relation previously defined. This condition is called the *controllability condition* and is defined as follows:

L(S ∥ C) · A_uc ∩ L(S) ⊆ L(S ∥ C)     (2)

Namely, when acting on S, a supervisor is not allowed to disable uncontrollable events. Given a safety property Φ, which can be modeled by an LTS, there actually exist many different supervisors satisfying both (1) and (2). Among all the valid supervisors, we are interested in computing the supremal one, i.e. the one that restricts the system as little as possible. It has been shown that such a supervisor always exists and is unique. It gives access to a behavior of the controlled system that is called the supremal controllable sub-language of Φ with respect to S and A_uc. In some situations, it may also be interesting to force the controlled system to be non-blocking.

The underlying techniques are similar to the ones used for automatic test generation. They consist in computing a product between the specification and the property, and in removing from the obtained LTS the states that may lead, by triggering only uncontrollable events, to states that violate the property.
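This computation can be sketched as a fixpoint that iteratively removes unsafe states (an illustrative sketch with made-up events, not our actual synthesis tools):

```python
# Safety-control sketch: iteratively remove states from which an
# *uncontrollable* event leads outside the safe set; what remains is
# the supremal set of safe states.
def supremal_safe(states, trans, uncontrollable, bad):
    safe = set(states) - set(bad)
    changed = True
    while changed:
        changed = False
        for (s, a, d) in trans:
            # A controllable event can be disabled by the supervisor;
            # an uncontrollable one cannot, so s must also be removed.
            if s in safe and d not in safe and a in uncontrollable:
                safe.remove(s)
                changed = True
    return safe

states = {"s0", "s1", "s2", "bad"}
trans = [("s0", "start", "s1"),   # controllable: can be disabled
         ("s1", "fail", "bad"),   # uncontrollable: s1 must be avoided
         ("s0", "skip", "s2")]    # controllable
print(sorted(supremal_safe(states, trans, {"fail"}, {"bad"})))
```

State s1 is removed because "fail" cannot be prevented there, while s0 survives: the supervisor simply disables the controllable event "start".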

**Optimal Control.** We are also interested in the Optimal Control Problem. The purpose of optimal control is to study the behavioral properties of a system in order to generate a supervisor that constrains the system to a desired behavior according to quantitative and qualitative requirements. In this spirit, we have been working on the optimal scheduling of a system through a set of multiple goals that the system has to visit one by one. We have also extended these results to the case of partial observation, in order to handle more realistic applications.

**Control of Structured Discrete Event Systems.** In many applications and control problems, LTSs are the starting point to model fragments of a large-scale system, which usually consists of several composed and nested sub-systems. Knowing that the number of states of the global system grows exponentially with the number of parallel and nested sub-systems, we have been interested in designing algorithms that perform the controller synthesis phase by taking advantage of the structure of the plant, without expanding the system. Given a concurrent system and a *safety property, modeled as a language* (also called a specification) that has to be ensured on this system, we have investigated the computation of the supremal controllable language contained in the expected language. To do so, we use a modular centralized approach and perform the control on some approximations of the plant derived from the behavior of each component. The behavior of these approximations is restricted so that they respect a new language property for discrete event systems, called the *partial controllability condition*, that depends on the safety property. It is shown that, under some assumptions, the intersection of these ``controlled approximations'' corresponds to the supremal controllable language contained in the specification with respect to the plant. This computation is performed without building the whole plant, hence avoiding the state-space explosion induced by the concurrent nature of the plant.

Similarly, in order to take into account nested behaviors, some techniques based on model aggregation methods have been proposed to deal with hierarchical control problems. Another direction was introduced by Brave and Heymann with Hierarchical State Machines, which constitute a simplified version of Statecharts: compared to classical state machines, they add concurrency and hierarchy features. Other works also deal with control and hierarchy. This is the direction we have chosen in the VerTeCs team.

The methods and tools developed by the VerTeCs project-team for test generation and control synthesis of reactive systems are intended to be as generic as possible. This allows us to apply them in many application domains where the presence of software is predominant and its correctness is essential. In particular, we apply our research in the context of telecommunication systems, embedded systems, smart-card applications, and control-command systems.

Our research on test generation was initially proposed for conformance testing of telecommunication protocols. In this domain, testing is a standardized process, and formal specification languages are widely used (SDL in particular). Our test generation techniques have already proved useful in this context, going up to industrial transfer. New standardized component-based design methodologies such as UML and the OMG's MDE increase the need for formal techniques in order to ensure the compositionality of components, by verification and testing. Our techniques, by their genericity and adaptability, have also proved useful at different levels of these methodologies, from component testing to system testing. The telecommunication industry now also tries to provide more and more services to users. These services also have to be validated. We are involved with France Telecom R&D in a project on the validation of vocal services. Very recently, we also started to study the impact of our test generation techniques in the domain of network security. More specifically, we believe that testing that a network or an information system meets its security policy is a major concern, and complements other design and verification techniques.

In the context of transport, software embedded systems are increasingly predominant. This is particularly important in automotive systems, where software replaces electronics for the power train, the chassis (e.g. engine control, steering, brakes) and the cabin (e.g. wipers, windows, air conditioning), and where new services to passengers are increasing (e.g. telematics, entertainment). Car manufacturers have to integrate software components provided by many different suppliers, according to specifications. One of the problems is that testing is done late in the life cycle, when the complete system is available. Faced with these problems, but also with the complexity of systems, the compositionality of components, distribution, etc., car manufacturers now try to promote standardized interfaces and component-based design methodologies. They also develop virtual platforms which allow for testing components before the system is complete. It is clear that software quality and trust are among the problems that have to be tackled in this context. This is why we believe that our techniques (testing and control) can be useful in such a context.

We have also applied our test generation techniques in the context of smart-card applications. Such applications are typically reactive, as they describe interactions between a user, a terminal and a card. The number and complexity of such applications are increasing, with more and more services offered to users. The security of such applications is of primary interest for both users and providers, and testing is one of the means to improve it.

The main application domain for controller synthesis is control-command systems. In general, such systems control costly machines (e.g. robotic systems, flexible manufacturing systems) that are connected to an environment (e.g. a human operator). Such systems are often critical systems, and errors occurring during their execution may have dramatic economic or human consequences. In this field, the controller synthesis methodology (CSM) is useful to ensure by construction the correct interaction between 1) the different components, and 2) the environment and the system itself. For the first point, the CSM is often used as a safe scheduler, whereas for the second one, the supervisor can be interpreted as a safe discrete tele-operation system.

NBAC is a verification/slicing tool developed in collaboration with Vérimag. This tool analyses synchronous and deterministic reactive systems containing combinations of Boolean and numerical variables and continuously interacting with an external environment. Its input format is directly inspired by the low-level semantics of the LUSTRE dataflow synchronous language. Asynchronous and/or non-deterministic systems can be compiled into this model. The kinds of analyses performed by NBAC are: reachability analysis from a set of initial states, which allows computing invariants satisfied by the system; coreachability analysis from a set of final states, which allows computing sets of states that may lead to a final state; and combinations of the above. The result of an analysis is either a set of states, together with a necessary condition on states and inputs to stay in this set during an execution, or a verdict for a verification problem. The tool is founded on the theory of abstract interpretation: sets of states are approximated by abstract values belonging to an abstract domain, on which fixpoint computations are performed. The originality of NBAC resides in:

the use of a very general notion of control structure in order to very precisely tune the trade-off between precision and efficiency;

the ability to dynamically refine the control structure, and to guide this refinement by the needs of the analysis;

sophisticated methods for computing postconditions and preconditions of abstract values.

More recently, NBAC has been extended with the auxiliary translation tools Auto2nbac and Nbac2auto. This makes it possible to specify the systems to be analyzed as a product of hybrid automata with constant differential inclusions, and to obtain the result of the analysis on the product automaton.
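The kind of fixpoint computation with abstraction that NBAC performs can be illustrated on a toy example. The sketch below (our own simplification, not NBAC's actual code or domain) runs a forward reachability analysis of a one-counter system in the interval abstract domain, using the classic widening operator to force termination:

```python
# Minimal sketch of abstract-interpretation-based reachability: intervals
# with widening. All names here are illustrative, not NBAC's API.

INF = float("inf")

def join(a, b):
    # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    # classic interval widening: unstable bounds jump to infinity
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def post(itv):
    # toy transition relation: the counter is incremented, x := x + 1
    return (itv[0] + 1, itv[1] + 1)

def reach(init):
    # Kleene iteration accelerated by widening: terminates by construction
    x = init
    while True:
        nxt = widen(x, join(x, post(x)))
        if nxt == x:
            return x
        x = nxt

print(reach((0, 0)))  # over-approximated invariant: (0, inf)
```

The result is a sound over-approximation of the reachable states: precision is lost (the upper bound jumps to infinity), which is exactly the trade-off the control-structure refinement mentioned above is designed to tune.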

STG (Symbolic Test Generation) is a prototype tool for the generation and execution of test cases using symbolic techniques. It takes as input a specification and a test purpose described as IOSTS, and generates a test case program, also in the form of an IOSTS. Test generation in STG is based on a syntactic product of the specification and test purpose IOSTS, extraction of the subgraph corresponding to the test purpose, elimination of internal actions, determinisation, and simplification. The simplification phase now relies on NBAC, which approximates reachable and coreachable states using abstract interpretation. It is used to eliminate unreachable states and to strengthen the guards of system inputs in order to eliminate some Inconclusive verdicts. After a translation into C++ or Java, test cases can be executed on an implementation in the corresponding language. Constraints on system input parameters are solved on the fly during execution using a constraint solver. The first version of STG was developed in C++, using Omega as the constraint solver during execution. This version has been deposited at the APP (IDDN.FR.001.510006.000.S.P.2004.000.10600).

A new version in OCaml is still under development. This version is more generic and will serve as a library for symbolic operations on IOSTS. Most functionalities of the C++ version have been re-implemented. A new translation of abstract test cases into executable Java tests has also been developed, in which the constraint solver is LuckyDraw (VERIMAG).
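The first step of the STG pipeline, the product of specification and test purpose, can be sketched on plain labelled transition systems (the real tool works on symbolic IOSTS with guards and data; the automata and state names below are invented for illustration):

```python
# Hedged sketch of a synchronous product of a specification and a test
# purpose, both as dicts mapping state -> list of (label, next_state).

def product(spec, purpose):
    prod, todo = {}, [("s0", "p0")]
    while todo:
        s, p = todo.pop()
        if (s, p) in prod:
            continue
        prod[(s, p)] = []
        for (a, s2) in spec.get(s, []):
            for (b, p2) in purpose.get(p, []):
                if a == b:  # synchronise on common labels
                    prod[(s, p)].append((a, (s2, p2)))
                    todo.append((s2, p2))
    return prod

# '?' marks inputs, '!' marks outputs, as in the IOSTS convention
spec = {"s0": [("?req", "s1")], "s1": [("!ack", "s0"), ("!nack", "s0")]}
purpose = {"p0": [("?req", "p1")], "p1": [("!ack", "Accept")]}
print(product(spec, purpose))
```

Only the behaviours selected by the test purpose survive in the product (here the `!nack` branch is pruned); subsequent phases then determinise and simplify this automaton.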

Sigali is a model-checking tool that operates on ILTS (Implicit Labeled Transition Systems, an equational representation of an automaton), an intermediate model for discrete event systems. It offers functionalities for the verification of reactive systems and discrete controller synthesis. It is developed jointly by the ESPRESSO and VerTeCs teams. The techniques used consist in manipulating the system of equations instead of the set of solutions, which avoids enumerating the state space. Each set of states is uniquely characterized by a predicate, and operations on sets can be equivalently performed on the associated predicates. Therefore, a wide spectrum of properties, such as liveness, invariance, reachability and attractivity, can be checked. Algorithms for the computation of predicates on states are also available. Sigali is connected with the Polychrony environment (ESPRESSO project-team) as well as the Matou environment (VERIMAG), thus allowing reactive systems to be modeled by means of Signal specifications or Mode Automata, and the synthesized controller to be visualized through an interactive simulation of the controlled system. Sigali is protected by the APP (Agence de Protection des Programmes).
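The principle of trading sets of states for predicates can be shown on a toy example. In the sketch below (our simplification: Sigali uses an equational, decision-diagram-like representation, whereas here predicates are Python functions evaluated over a tiny explicit state space for readability), an invariance check is phrased purely in terms of predicates and predicate transformers:

```python
# Toy illustration: sets of states represented by predicates; set
# operations become operations on predicates. Names are illustrative.

from itertools import product as valuations

VARS = 3  # states are Boolean vectors (x0, x1, x2)
STATES = list(valuations([0, 1], repeat=VARS))

def trans(s):
    # example dynamics: rotate the Boolean vector by one position
    return [s[1:] + s[:1]]

def pre(pred):
    # predicate transformer: states with SOME successor satisfying pred
    return lambda s: any(pred(t) for t in trans(s))

def invariant(pred):
    # E is invariant iff every state of E only leads back into E
    return all(not pred(s) or all(pred(t) for t in trans(s)) for s in STATES)

even = lambda s: sum(s) % 2 == 0  # predicate: even number of 1s
print(invariant(even))  # rotation preserves the number of 1s
```

The point is that `pre` and `invariant` never need an explicit list of states conceptually; with a symbolic predicate representation, the same definitions scale to state spaces far too large to enumerate.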

Ctrl-S is a tool dedicated to the control and simulation of structured discrete event systems. Ctrl-S is a graphical tool connected with Oris for the simulation aspect, as well as with Umdes for the controller synthesis computations. It now encompasses the former tool Syntool, developed in our team in previous years. This tool is currently under testing.

This year we investigated supervisor synthesis for concurrent systems based on reduced system models, with the aim of reducing complexity. It is assumed that the expected behavior (specification) is given on a subset of the system alphabet, and the system behavior is reduced to this alphabet. Supervisors are computed for each reduced subsystem using a modular decentralized approach. Depending on the chosen architecture, we provide sufficient conditions for the consistent implementation of the reduced supervisors on the original system. This work has been done in cooperation with Klaus Schmidt and Benoit Gaudin.
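The basic reduction operation behind this approach is the natural projection, which erases all events outside the specification alphabet. A minimal sketch (toy finite language, invented event names; the actual work operates on automata, not enumerated trace sets):

```python
# Hedged sketch of the natural projection used to "reduce" a system
# behaviour to the specification alphabet.

def project(trace, alphabet):
    # erase every event that does not belong to the sub-alphabet
    return tuple(e for e in trace if e in alphabet)

def reduce_language(traces, alphabet):
    # project each trace of a (finite, toy) system language
    return {project(t, alphabet) for t in traces}

system = {("start", "tick", "load", "done"),
          ("start", "load", "tick", "done")}
spec_alphabet = {"start", "load", "done"}
print(reduce_language(system, spec_alphabet))
# both traces collapse to the same reduced behaviour
```

Supervisors are then computed on the (smaller) projected behaviour; the sufficient conditions mentioned above guarantee that re-inserting the erased events does not break the control.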

We embed decentralized control problems into modular ones in order to benefit from the know-how of modular control theory: any decentralized control problem is associated with a natural modular control problem that over-approximates it. We then discuss how a solution of the latter problem delivers a solution of the former. This work has been done in cooperation with Jan Komenda and Sophie Pinchinat.

Here we tackle the safety controller synthesis problem for various models, from finite transition systems to hybrid systems. Within this framework, we are mainly interested in the synthesis problem for an intermediate model: the symbolic transition system. Modeling needs led us to redefine the concept of controllability by associating it with guards of transitions instead of events. We then define synthesis algorithms based on abstract interpretation techniques, so that finiteness of the computations is ensured. We finally generalize our methodology to the control of hybrid systems, which gives a unified framework for the supervisory control problem over several classes of models.
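In the finite case, safety synthesis boils down to a classic fixpoint: extend the set of forbidden states with every state from which an uncontrollable transition inevitably leads into it, then let the supervisor disable controllable transitions into that set. A minimal sketch (toy automaton and names are our own; the work above performs the analogous computation symbolically, with abstract interpretation):

```python
# Hedged sketch of the safety-synthesis fixpoint on an explicit automaton.

def forbidden_states(trans, bad):
    # trans: list of (src, event, controllable?, dst)
    bad = set(bad)
    changed = True
    while changed:
        changed = False
        for (s, e, ctrl, t) in trans:
            # an uncontrollable step into a bad state cannot be prevented
            if not ctrl and t in bad and s not in bad:
                bad.add(s)
                changed = True
    return bad

trans = [
    ("q0", "a", True,  "q1"),
    ("q1", "u", False, "q2"),   # uncontrollable event
    ("q2", "b", True,  "q3"),
]
print(forbidden_states(trans, {"q2"}))  # q1 must also be forbidden
```

Here the supervisor cannot allow `a` in `q0`: once in `q1`, the uncontrollable `u` may fire, so `q1` joins the forbidden set and the controllable transition into it is disabled.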

For several years we have addressed the generation of symbolic test cases for testing the conformance of a black-box implementation with respect to a specification. More specifically, the problem we consider is the off-line selection of test cases according to a test purpose, which is here a set of scenarios of interest that one wants to observe during test execution. We extend these selection techniques to infinite-state symbolic models (IOSTS), showing how approximate fixpoint computations can be used in a conservative way. The same kind of technique is also adapted for test selection with respect to safety properties, and for its combination with verification. When dealing with non-deterministic IOSTS specifications, off-line test selection involves a determinisation phase, which is not always feasible for IOSTS. However, a determinisation procedure that terminates for a sub-class of IOSTS has been identified.
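The conformance check behind the verdicts can be illustrated in an ioco-like style on finite traces (a toy sketch with invented states and actions, not the symbolic relation used in the work above): after a trace of the specification, an implementation output is conformant only if the specification also allows it.

```python
# Minimal sketch of an ioco-style verdict. Inputs start with '?',
# outputs with '!', following the usual IOSTS convention.

def spec_outputs_after(spec, trace):
    # spec: state -> list of (action, next_state); initial state "s0"
    states = {"s0"}
    for a in trace:
        states = {t for s in states for (x, t) in spec.get(s, []) if x == a}
    return {x for s in states for (x, _) in spec.get(s, []) if x.startswith("!")}

def verdict(spec, trace, observed):
    allowed = spec_outputs_after(spec, trace)
    return "PASS" if observed in allowed else "FAIL"

spec = {"s0": [("?req", "s1")], "s1": [("!ack", "s0")]}
print(verdict(spec, ["?req"], "!ack"))   # PASS
print(verdict(spec, ["?req"], "!nack"))  # FAIL
```

The difficulty addressed in the work above is precisely that, for symbolic infinite-state models, the set of "allowed outputs after a trace" cannot be enumerated and must be approximated conservatively.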

Instead of extending the finite-state IOLTS model with variables, one can also extend the IOLTS model with recursion, which corresponds to a pushdown system. A preliminary study was done in 2004 in the master's thesis of Liva Randriamanohisoa. One of the problems still to be solved is the determinisation of a pushdown system, which may be necessary when testing under partial observation. When pushdown automata are deterministic, however, test selection techniques with test purposes can be used with some adaptations. This is the subject of Camille Constant's PhD.

This year, we also started work on testing the conformance of open networks with respect to their security properties. This is part of Jeremy Dubreil's and Hatem Hamdi's PhDs.

In this work we describe a methodology integrating verification and conformance testing for the formal validation of reactive systems. A specification of the system - an extended input-output automaton, which may be infinite-state - and a set of safety properties (``nothing bad ever happens'') and possibility properties (``something good may happen'') are assumed. The properties are first tentatively verified on the specification using automatic techniques based on approximated state-space exploration, which are sound but, as a price to pay for automation, not complete for the given class of properties. Because of this incompleteness and of state-space explosion, the verification may not succeed in proving or disproving the properties. However, even if verification does not succeed, the testing phase can proceed and provide useful information about the implementation. Test cases are automatically and symbolically generated from the specification and the properties, and are executed on a black-box implementation of the system. The test execution may detect violations of conformance between implementation and specification; in addition, it may detect violation or satisfaction of the properties by the implementation and by the specification. In this sense, testing completes verification. The approach is illustrated on simple examples and on a Bounded Retransmission Protocol.

The PhD thesis of Tristan Le Gall is concerned with the verification of asynchronous systems communicating through FIFO channels, and its applications. This year, we addressed the verification of communication protocols or distributed systems that can be modeled by Communicating Finite State Machines (CFSMs), i.e. a set of sequential machines communicating via unbounded FIFO channels. Unlike recent related works based on acceleration techniques, we propose to apply the abstract interpretation approach to such systems, which consists in using approximated representations of sets of configurations. We show that the use of regular languages together with an extrapolation operator provides a simple and elegant method for the analysis of CFSMs, which is moreover often as accurate as acceleration techniques, and in some cases more expressive. Finally, when the system has several queues, our method can be implemented either as an attribute-independent analysis or as a more precise (but also more costly) attribute-dependent analysis.
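The role of the extrapolation operator can be conveyed with a deliberately crude abstraction (our own simplification: the thesis works on automata representing regular languages, not on the bounded-prefix abstraction used here): sets of queue contents are abstracted by their prefixes up to a fixed bound k, and longer contents are over-approximated by "prefix followed by anything", which forces the fixpoint iteration to terminate.

```python
# Very rough illustration of widening a set of queue contents.
# An abstract element ('ab', True) means "ab" followed by anything.

K = 2  # bound on exactly-represented prefixes (illustrative choice)

def alpha(words):
    # abstract a set of concrete queue contents
    return {(w[:K], len(w) > K) for w in words}

def member(word, abs_set):
    # concretisation test against the abstract set
    return any(word[:K] == p and (ext or len(word) <= K)
               for (p, ext) in abs_set)

concrete = {"", "a", "ab", "abb", "abbb"}  # an unbounded-looking queue
abstract = alpha(concrete)
print(member("abbbbb", abstract))  # True: sound over-approximation
print(member("ba", abstract))      # False: precision is kept on prefixes
```

As with the interval widening in NBAC, soundness is preserved (no reachable queue content is missed) at the cost of precision on long contents.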

We define a symbolic determinisation procedure for a class of infinite-state systems consisting of automata extended with symbolic variables that may be infinite-state. The subclass of extended automata for which the procedure terminates is characterised as bounded-lookahead extended automata. It corresponds to automata for which, in any location, observing a trace of bounded length is enough to infer the first transition actually taken. We discuss applications of the algorithm to the verification, testing, and diagnosis of infinite-state systems.
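The finite-state analogue of this procedure is the standard subset construction, sketched below (toy automaton, invented names); the contribution of the work above is to make a similar construction terminate on symbolic, infinite-state automata under the bounded-lookahead condition.

```python
# Hedged sketch of subset-construction determinisation.

def determinise(delta, init):
    # delta: (state, action) -> set of next states (nondeterministic)
    start = frozenset([init])
    dfa, todo = {}, [start]
    while todo:
        cur = todo.pop()
        if cur in dfa:
            continue
        dfa[cur] = {}
        actions = {a for (s, a) in delta if s in cur}
        for a in actions:
            # merge all nondeterministic targets into one macro-state
            nxt = frozenset(t for s in cur for t in delta.get((s, a), set()))
            dfa[cur][a] = nxt
            todo.append(nxt)
    return dfa

delta = {("q0", "a"): {"q1", "q2"}, ("q1", "b"): {"q3"}, ("q2", "b"): {"q3"}}
dfa = determinise(delta, "q0")
print(dfa[frozenset({"q1", "q2"})])  # the two targets of "a" are merged
```

For extended automata, macro-states carry symbolic constraints on the variables rather than finite sets, which is why termination needs the bounded-lookahead restriction.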

In this work, we are interested in the diagnosis of discrete event systems modeled by finite transition systems. We propose a model of supervision patterns general enough to capture past occurrences of particular trajectories of the system. Modeling the diagnosis objective by supervision patterns allows us to generalize the properties to be diagnosed and to make them independent of the description of the system. We first formally define the diagnosis problem in this context. We then derive techniques for the construction of a diagnoser and for the verification of diagnosability, based on standard operations on transition systems. We show that these techniques are general enough to express and solve in a unified way a broad class of diagnosis problems found in the literature, e.g. diagnosing permanent faults, multiple faults, fault sequences and some problems of intermittent faults. This work has been done in cooperation with Marie-Odile Cordier (Dream project-team) and Sophie Pinchinat (S4 project-team).
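The classic diagnoser idea underlying this work can be sketched as a belief-tracking computation (toy automaton and names are our own; the paper's supervision patterns generalize the single fault flag used here): after each observable event, track the set of possible (state, fault?) pairs, and announce a fault only when every pair carries the fault flag.

```python
# Hedged sketch of diagnoser-style belief tracking.

def diagnose(trans, obs_trace, init):
    # trans: list of (src, event, observable?, fault?, dst)
    def silent_closure(bs):
        # add states reachable through unobservable transitions
        seen, todo = set(bs), list(bs)
        while todo:
            (s, f) = todo.pop()
            for (src, e, ob, fl, dst) in trans:
                if src == s and not ob and (dst, f or fl) not in seen:
                    seen.add((dst, f or fl))
                    todo.append((dst, f or fl))
        return seen
    beliefs = silent_closure({(init, False)})
    for o in obs_trace:
        beliefs = silent_closure({(dst, f or fl)
                                  for (s, f) in beliefs
                                  for (src, e, ob, fl, dst) in trans
                                  if src == s and ob and e == o})
    if beliefs and all(f for (_, f) in beliefs):
        return "FAULT"
    if any(f for (_, f) in beliefs):
        return "AMBIGUOUS"
    return "OK"

trans = [
    ("q0", "fault", False, True,  "q1"),  # unobservable faulty step
    ("q1", "b",     True,  False, "q2"),
    ("q0", "a",     True,  False, "q2"),
]
print(diagnose(trans, ["b"], "q0"))  # only the faulty path explains "b"
```

Diagnosability then amounts to checking that ambiguous beliefs cannot persist forever, which the paper reduces to standard operations on transition systems.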

Our aim now is to extend these results to infinite-state systems as well as to non-permanent patterns, and to apply these techniques to the automatic generation of passive testers (intrusion detection systems), in order to test on-line whether an implementation respects a given security policy.

In this work, we describe a methodology and a case study in formal verification. The case study is the SSCOP protocol, a member of the ATM adaptation layer whose main role is to perform a reliable data transfer over an unreliable communication medium. The methodology involves: (1) simulation for initial debugging; (2) partial-order abstraction that preserves the properties of interest; and (3) compositional verification of the properties at the abstract level using the PVS theorem prover. Steps (2) and (3) guarantee that the properties still hold on the whole (composed, concrete) system. The value of the approach lies in adapting and integrating several existing formal techniques into a new verification methodology that is able to deal with real case studies.

We present a practical tool for defining and proving properties of recursive functions in the Coq proof assistant. The tool proceeds by generating, from pseudo-code (Coq functions that need be neither total nor terminating), the graph of the intended function as an inductive relation, and then proves that the relation actually represents a function, which is by construction the function that we are trying to define. We then generate induction and inversion principles, and a fixpoint equation for proving other properties of the function. Our tool builds upon state-of-the-art techniques for defining recursive functions, and can also be used to generate executable functions from inductive descriptions of their graphs. We illustrate the benefits of our tool on two case studies.

The goal of this 3-year project (started October 2004) is to build a platform for the formal validation of France Telecom's vocal phone services. Vocal services are based on speech recognition and synthesis algorithms; they include automatic connection to the callee's phone number by pronouncing her name, or automatic pronunciation of the callee's name whose phone number was dialed in by the user. Here, we are not interested in validating the voice recognition/synthesis algorithms, but in the logic surrounding them. For example, the system may allow itself a certain number of attempts for recognizing a name, after which it switches to normal number-dialing mode, during which the user may choose to go back to voice-recognition mode by pronouncing a certain keyword. This logic may become quite intricate, and this complexity is multiplied by the number of clients that may be using the service at any given time. Its correctness has been identified by France Telecom as a key factor in the success of the deployment of voice-based systems. To validate these services we plan to apply a combination of formal verification and conformance testing techniques (cf. Section ). In the context of Camille Constant's PhD, we also study test generation from models of programs with recursion (pushdown automata and extensions).

The Potestat project ( http://www-lsr.imag.fr/POTESTAT/) [2004-2007] involves LSR-IMAG Grenoble, VERIMAG Grenoble, and the Lande and VerTeCs project-teams at Irisa.

In the framework of open service implementations based on the interconnection of heterogeneous systems, security managers lack well-formalized analysis techniques. The security of such systems is therefore organized from pragmatic elements, based on well-known vulnerabilities and their associated solutions. It then remains to verify whether such security policies are correctly and effectively implemented in the actual system. This is usually carried out by auditing the administrative procedures and the system configuration. Tests are then performed, for instance by probing, to check for the presence of particular vulnerabilities. Although some tools are already available for specific tests (like password crackers), there is no solution for analysing the conformance of a whole system with respect to a security policy. This lack may be explained by several factors. First, there is currently no complete study of the formal modeling of a security policy, even if some particular aspects have been studied more thoroughly. Furthermore, verification-based research on security usually concerns more specific elements, like cryptographic protocols or code analysis. Finally, most of these works are dedicated to an *a priori* verification of the coherency of security policies before their implementation. We are concerned here with the conformance of a system configuration with respect to a given policy. In the framework of the POTESTAT project we plan to tackle this problem according to the following items:

Formal modeling of security policies, allowing a test-directed analysis.

Definition of a conformance notion between a system configuration and some elements of security policies. The goal is to obtain a test theory similar to the one existing in the protocol testing area (like the Z.500 standard).

Definition of methods to test this conformance notion, including testability problems, the execution environment, code analysis and test selection.

A long-term objective of this project is to offer tools allowing security managers to model information flow and network elements (protocols, node types and their associated security policies, etc.), in order to better describe the security policy for conformance testing, and to provide practical tools for coherency verification and vulnerability detection.

The APRON (Analyse de PROgrammes Numériques) project ( http://www.cri.ensmp.fr/apron/) [2004-2007] involves ENSMP, LIENS-ENS, LIX-Polytechnique, VERIMAG and VerTeCs-Irisa.

The goal is to develop methods and tools to statically analyse embedded software with a high level of criticality, for which the detection of errors at run-time is unacceptable for safety or security reasons. Such safety- and security-critical software is found in transportation, automotive, avionics, space, industrial process control and supervision, etc. One characteristic of such software is that it is based on physical models and hence involves many numerical computations. Moreover, *counters* play an important role in the control of reactive programs (e.g., delay counting in synchronous programming). Critical properties depending on these counters are generally outside the scope of model-checking approaches, while being simple enough to be accurately analysed by more sophisticated numerical analyses.

The goal of the project is the static analysis of large specifications (e.g. à la Lustre) and corresponding programs (e.g. 100 to 500 000 lines of C code), made of thousands of procedures and involving many numerical floating-point computations as well as Boolean and counter-based control, in order to verify critical properties (including the detection of possible runtime errors), and to help in automatically locating the origin of potential violations of critical properties.

An example of such critical properties, as found in control/command programs, is of the form ``under a condition holding on boolean and numerical variables for some time, the program must imperatively establish a given boolean and/or numerical property, in a given bounded delay''.

VerTeCs contributes to the following topics within the APRON project:

The design and implementation of a common interface to several abstraction libraries (intervals, linear equalities, octagons, polyhedra, ... and their combinations).

The study of adaptive techniques for adjusting the trade-off between the efficiency and the precision of analyses, among them dynamic partitioning techniques. Results have already been obtained in the intraprocedural case, but to a lesser extent in the interprocedural case.

VerTeCs focuses mainly on Lustre specifications and provides, with the NBAC tool, one of the main experimental platforms of the project for the verification of critical properties on such specifications.

In 2006, most of the effort of VerTeCs was spent on the design and implementation of the common interface.

V3F ( http://lifc.univ-fcomte.fr/~v3f/) [2003-2006] is a project involving LIFC Besançon, Inria-I3S Nice, LIST-CEA Saclay and the Lande and VerTeCs project-teams at Irisa. The goal of this project is to provide tools supporting the verification and validation of programs with floating-point numbers. More precisely, the V3F project investigates techniques to check that a program satisfies the computation hypotheses on the real numbers that were made during the modeling step. The underlying technology is based on constraint programming. Constraint solving techniques have been successfully used in recent years for automatic test data generation, model checking and static analysis. However, in all these applications, the domains of the constraints were restricted to finite subsets of the integers, rational numbers or intervals of real numbers. Hence, the investigation of solving techniques for constraint systems over floating-point numbers is an essential issue for handling problems over the floats.

The results obtained in the course of the V3F project are a clean design of constraint solving techniques over floating-point numbers, and a study of the capabilities of these techniques in the software validation and verification process. An open and generic prototype of a constraint solver over the floats was developed. We also paid attention to the integration of floats into various formal notations (e.g., B, Lustre, UML/OCL) to allow an effective use of the constraint solver in formal model verification, automatic test data generation (functional and structural) and static analysis.

Our contribution to this project is, first, to precisely formalize a conformance testing theory for floating-point programs with respect to their specifications and, second, to describe test generation algorithms in this framework. We consider the IOSTS model for the specification and the test purpose. An important point is to obtain a computable conformance relation. The solution we propose takes into account the inaccuracy of floating-point computations with respect to the real semantics, by allowing a limited skew of floating-point values in conformant traces. In order to recognize conformant traces/executions during test execution, and to check that the allowed skew does not diverge, we use a projection technique that allows the tester to safely use the values emitted by the implementation for its own execution. A nice point of our approach is that we can fully reuse the test generation and selection techniques implemented in our STG tool, the only change being in the implementation of the test driver. This result has been presented at the last meeting but has not yet been published.
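The skew-based comparison at the heart of this conformance relation can be illustrated with a toy check (the bound, the traces and the function names below are our own; the actual relation is defined on IOSTS traces and checks that the skew does not diverge along the execution):

```python
# Hedged sketch: an implementation trace conforms to the real-valued
# specification trace if every value stays within a small allowed skew.

def conforms(spec_trace, impl_trace, skew=1e-9):
    return len(spec_trace) == len(impl_trace) and all(
        abs(s - i) <= skew for (s, i) in zip(spec_trace, impl_trace))

spec = [0.1 + 0.2, 1.0 / 3.0]                       # "ideal" values
impl = [0.30000000000000004, 0.3333333333333333]    # observed floats
print(conforms(spec, impl))         # True within the allowed skew
print(conforms(spec, [0.31, 0.3]))  # False: skew too large
```

The projection technique mentioned above goes further: the tester re-injects the implementation's own values into its execution, so that rounding differences do not accumulate into spurious Fail verdicts.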

This project ended this year with a final report presenting its main results, and with the organization by V3F project members of the CSTVA'06 workshop (Workshop on Constraints in Software Testing) in Nantes in September 2006, where the results of V3F were presented.

The POLITESS project ( http://www.rnrt-politess.info/) [2006-2008] involves GET (INT Evry and ENST Rennes), INPG-IMAG (LSR and VERIMAG laboratories), France Telecom R&D Caen, Leyrios Technologies, SAP Research, AQL Silicomp Rennes and Irisa. In a sense, this project is an extension of the Potestat project. Its objective is to study and provide methodological guidelines and software solutions for a formal approach to network security. This encompasses the specification of high-level security policies with clear semantics, their deployment on the network in terms of security artifacts, the analysis of this deployment, and the testing and monitoring of security based on models of security policies and abstract models of networks. Our team is involved in several activities, in particular modelling (defining adequate models for both the system and security policies), testing (modelling security testing, test generation/selection), supervision (intrusion detection, diagnosis) and case studies.

We are partners in the ARTIST2 Network of Excellence on Embedded Systems ( http://www.artist-embedded.org/), involved in the Testing and Verification cluster with BRICS Aalborg (DK), the University of Twente (NL), the University of Liège (B), Uppsala (SE), VERIMAG Grenoble, ENS Cachan, LIAFA Paris and EPFL Lausanne (CH). The aim of the cluster is to develop a theoretical foundation for real-time testing, real-time monitoring and optimal control, to design data structures and algorithms for quantitative analysis, and to apply testing and verification tools in industrial settings. For security, we plan to create a common semantic framework for describing security protocols, including a notion of ``trust'', and to develop tools and methods for the verification of security protocols. Test and verification tools developed by partners will be made available via a web portal and dedicated verification servers.

In Artist2, the main role of VerTeCs is to integrate our research on testing and test generation based on symbolic transition systems with other works based on timed models.

This year we participated in a cluster meeting that took place at the Embedded Systems Institute in Eindhoven (The Netherlands) in April 2006. We also participated in the second review meeting of Artist2 in Paris in November 2006. A three-week visit of Thierry Massart (ULB) was funded by Artist2.

This grant involving three groups, IRISA/INRIA, VERIMAG, and University of Illinois at Urbana-Champaign (Grigore Roşu), is concerned with various aspects of runtime analysis of software systems, and aims both at advancing theoretical foundations and at developing and improving supporting tools and prototypes.

This bilateral collaboration grant between France and Argentina involves four teams: the MOVE team of LIF (Laboratoire d'Informatique Fondamentale) in Marseille (Peter Niebert) and the VerTeCs project on the French side, and the University of Córdoba (Pedro D'Argenio) and the Universidad de la Empresa in Buenos Aires (Alfredo Olivero) on the Argentinian side.

The aim of this collaboration (August 2005 - August 2007) is to make progress in the verification of probabilistic timed concurrent systems. The VerTeCs project brings its expertise in the algorithms and abstraction techniques implemented in the Rapture tool for Markov Decision Processes.

Our ambition is to progress in the following directions:

Apply the abstraction techniques implemented in the Rapture tool, as well as the probabilistic partial-order reduction techniques, to *timed* probabilistic automata instead of untimed systems only;

Extend the same techniques to the verification of untimed and timed temporal logic formulae (expressed in (timed) LTL);

Explore the applicability of program slicing and abstract interpretation techniques to quantitative model checking;

Explore the possibility to use these techniques for performance analysis;

Implement the fundamental results;

Analyse case studies in order to better understand the performances of the tools.

We collaborate with several Inria project-teams: with the LANDE project-team in two ACI-Sécurité grants (V3F and POTESTAT); with the ESPRESSO project-team on the development of the Sigali tool inside the Polychrony environment; with the DART project-team on the use of the controller synthesis methodology for the control of control-command systems (e.g. robotic systems); with DISTRIBCOM on security testing in the context of the Potestat and Politess grants; with the S4 project-team on the use of control, game theory and diagnosis for test generation; with the DREAM project-team on the diagnosis of discrete event systems; and with the VASY project-team on the use of CADP libraries in TGV and the distribution of TGV in the CADP toolbox.

Our main collaboration in France is with Vérimag. Beyond formalized collaborations (ACI Potestat and APRON, RNRT Politess, Rex Artist2), we also collaborate on the connection of NBAC with Lurette for the analysis of Lustre programs, as well as the connection of SIGALI with Matou. We are also involved in several collaborations with LSR-IMAG (ACI Potestat and RNRT Politess).

in The Netherlands (E. Brinksma, J. Tretmans) on test generation (symbolic in particular), following the Van Gogh bilateral cooperation (1999-2001). This year, Vlad Rusu was a sabbatical visitor at the University of Nijmegen (NL) from February to September 2006, in the Informatics for Technical Applications (ITA) group led by Prof. Frits Vaandrager. He worked in a joint project between the ITA and Security of Systems (SoS) groups on the formal validation of the new Dutch electronic passport by means of a combination of formal verification, black-box testing, and learning techniques.

in Tunisia (M. Tahar Bhiri) on security testing. Thierry Jéron is co-supervisor of Hatem Hamdi, a PhD student working on robustness and security testing.

in Belgium (Thierry Massart) on testing. Thierry Massart visited us for three weeks in April 2006, funded by Artist2. We collaborate with him on testing from pushdown automata models.

in Quebec (Guy-Vincent Jourdan) on security testing. G.-V. Jourdan visited IRISA during summer 2006.

(Jan Komenda) on supervisory control of concurrent systems. Hervé Marchand published a paper with Jan Komenda.

in Germany (with Klaus Schmidt). Hervé Marchand published a paper with Klaus Schmidt (others are in preparation).

in the USA (Prof. Stéphane Lafortune) on the control and diagnosis of discrete event systems. Hervé Marchand visited the University of Michigan this year (one week).

is teaching at INSA de Rennes (32h in 2005-2006), on compilation.

is teaching at INSA de Rennes (26h in 2006), on the Scheme programming language.

is teaching at Licence level at the University of Rennes 1 and in the ``Magistère Informatique-Télécom'' at ENS Cachan-Bretagne (64h/year).

**Current PhD. theses:**

``*Verification and symbolic test generation for reactive systems*'', 2nd year,

``*Abstract lattice of fifo channels for verification and control synthesis*'', 2nd year,

``*Testing of network security*'', in collaboration with the University of Sfax, 1st year,

``*Formal methods for testing and monitoring security of open networks*'', 1st year.

**Trainees 2005-2006:**

``*A tool for the simulation of controlled discrete-event systems*'', internship, ESIB (Lebanon) (2 months).

``*Diagnosis for intrusion detection*'', Master's student, University of Uppsala and ENST Brest (5 months).

was PC member of Testcom 2006 (New York, May 2006), PC member and Tool Chair of Tacas'06 (Vienna, March 2006), and PC member of Rosatea'06 (Portland, Maine, USA, July 2006). He is on the steering committee of the Movep summer school (Bordeaux, June 2006). He is PC member of the forthcoming Testcom/Fates'07 (Tallinn, Estonia, June 2007) and Rosatea'07 (Boston, July 2007). Thierry Jéron gave a keynote speech on *Model-based test selection for infinite state reactive systems* at the 5th International Symposium on Formal Methods for Components and Objects (FMCO'06, Amsterdam, November 2006) and was invited to give a seminar at LaBRI Bordeaux (June 2006) on *Test selection with approximate analysis*. He was a reviewer for the PhD defense of Assia Touil (University of Evry, December 2006). He is a member of the new IFIP Working Group 10.2 on Embedded Systems ( http://jerry.c-lab.de/ifip-wg-102/).

is PC member of the forthcoming MSR'07 conference on the modeling of reactive systems. He was PC member of WODES'2006 and is a member of the IFAC Technical Committee TC 1.3 on Discrete Event and Hybrid Systems for the 2005-2008 triennium. He gave two seminars, on ``Control of Concurrent Discrete Event Systems'' and ``Diagnosis of Discrete-Event Systems'', at the University of Michigan. He was also invited to give a seminar on the latter topic at LaBRI (Bordeaux). He is a member of the ``Commission de Spécialistes, 27e section'' at the University of Rennes 1.

Vlad Rusu gave invited talks at the Centrum voor Wiskunde en Informatica (CWI), Amsterdam, NL (May 2006) on *Combining verification and testing for reactive systems*, and at the IPA Dutch spring school in Computer Science, Vught, NL (April 2006), on the same topic.

was invited to give seminars on *Verification of communication protocols / Abstract interpretation of regular languages* at VERIMAG Grenoble (June 2006) and LIAFA Paris (October 2006).

is a member of the Ifsic Council as PhD student representative.