The VerTeCs team is focused on the reliability of reactive software
using formal methods. By reactive software we mean software that
continuously interacts with its environment. The environment can be a human
user for a complete reactive system, or another piece of software using the reactive
software as a component. Among these, critical systems are of primary
importance, as errors occurring during their execution may have dramatic
economic or human consequences. Thus, it is essential to establish their
correctness before they are deployed in a real environment. Correctness is
also essential for less critical applications, in particular for COTS
components, whose behavior should be trusted before integration into software
systems.

For this, the VerTeCs team promotes the use of formal methods, i.e.
formal specification and mathematically founded analysis methods. During the
analysis and design phases, the correctness of specifications with respect to
requirements or higher-level specifications can be established by formal
verification. Alternatively, control consists in forcing specifications to
stay within desired behaviours by coupling them with a supervisor. During
validation, testing can be used to check the conformance of implementations
with respect to their specifications. Test generation is the process of
automatically deriving test cases from specifications.

More precisely, the aim of the VerTeCs project is to improve the
reliability of reactive systems by providing software engineers with methods
and tools for automating test generation and controller
synthesis from formal specifications. We adapt or develop formal models for
the description of testing and control artifacts, e.g. specifications,
implementations, test cases, supervisors. We formally describe correctness
relations (e.g. conformance or satisfaction). We also formally describe
interaction semantics between testing artifacts.
From these models, relations and
interaction semantics, we develop algorithms for automatic test and controller
synthesis that ensure desirable properties. We try to be as generic as
possible in terms of models and techniques in order to cope with a wide range
of application domains. We implement prototype tools for distribution in the
academic world, or for transfer to industry.

Our research is based on formal models and verification techniques such
as model checking, theorem proving, abstract interpretation, the control theory
of discrete event systems, and their underlying models and logics. The close
connection between testing, control and verification produces a synergy between
these research topics and allows us to share theories, models, algorithms and
tools.

The formal models we use are mainly automata-like structures such as labelled
transition systems (LTS) and some of their extensions: an LTS is a tuple
$(Q,\Lambda ,\rightarrow ,{q}_{0})$ where $Q$ is a set of states, $\Lambda$ a set of action labels, $\rightarrow \subseteq Q\times \Lambda \times Q$ the transition relation, and ${q}_{0}\in Q$ the initial state.
To model reactive systems in the testing context, we use Input/Output labeled transition systems (IOLTS for short). In this setting, interactions between the system and its environment are modeled by input events (controlled by the environment) and output events (observed by the environment), while the internal behavior of the system is modeled by internal (non-observable) events. In the controller synthesis theory, we also distinguish between controllable and uncontrollable events, and between observable and unobservable events. In testing, we also manipulate input-output symbolic transition systems (IOSTS), which are extensions of IOLTS that operate on data (i.e., program variables, communication parameters, symbolic constants) through message passing, guards, and assignments. An alternative to IOSTS for specifying systems with data variables is the model of synchronous dataflow equations.
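As a rough illustration, an IOLTS can be encoded very directly. The sketch below is a toy encoding (all names, the coffee-machine example, and the ?/!-prefix convention for inputs and outputs are assumptions, not the team's actual data structures):

```python
from dataclasses import dataclass

@dataclass
class IOLTS:
    """Toy IOLTS encoding: inputs are prefixed '?', outputs '!',
    internal actions could be written 'tau' (convention assumed here)."""
    states: set
    init: str
    transitions: list  # (source, label, target) triples

    def labels(self, kind):
        # Partition the visible alphabet by its prefix.
        prefix = {"input": "?", "output": "!"}[kind]
        return {l for (_, l, _) in self.transitions if l.startswith(prefix)}

# A two-state coffee machine: one input event, one output event.
coffee = IOLTS({"s0", "s1"}, "s0",
               [("s0", "?coin", "s1"), ("s1", "!coffee", "s0")])
print(coffee.labels("input"), coffee.labels("output"))  # → {'?coin'} {'!coffee'}
```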

Our research is based on well-established theories: conformance testing, supervisory control, abstract interpretation, and theorem proving. Most of the algorithms that we employ take their origins in these theories:

graph traversal algorithms (breadth-first, depth-first, strongly connected components, ...). We use these algorithms for verification as well as for test generation and control synthesis.

abstract interpretation algorithms, specifically in the abstract domain of polyhedra (for example, Chernikova's algorithm for the computation of dual forms). Such algorithms are used in verification and test generation.

logical decision algorithms, such as satisfiability of formulas in Presburger arithmetic. We use these algorithms during the generation and execution of symbolic test cases.
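The role of the graph traversals mentioned above can be sketched on an explicit transition system. The following minimal example (the encoding and the toy LTS are assumptions) computes the states that are both reachable from the initial state and co-reachable from an accepting state, which is the core of the selection step used in test generation:

```python
from collections import deque

def reachable(transitions, init):
    """Forward breadth-first search: states reachable from `init`."""
    seen, todo = set(init), deque(init)
    while todo:
        s = todo.popleft()
        for (src, _label, dst) in transitions:
            if src == s and dst not in seen:
                seen.add(dst)
                todo.append(dst)
    return seen

def coreachable(transitions, final):
    """States from which `final` can be reached: forward search on reversed edges."""
    return reachable([(d, l, s) for (s, l, d) in transitions], final)

# Toy LTS: selection keeps states both reachable from the initial state 0
# and co-reachable from the accepting state 2 (state 3 is pruned).
ts = [(0, "a", 1), (1, "b", 2), (0, "c", 3)]
keep = reachable(ts, {0}) & coreachable(ts, {2})
print(sorted(keep))  # → [0, 1, 2]
```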

Conformance testing consists in checking
whether an
implementation under test (abbreviated as IUT) behaves correctly with respect
to its specification. In the line of model-based testing,
we use formal specifications and their underlying models
to unambiguously define conformance testing and test case generation.
One difficult problem is to generate adequate
test cases (the selection problem) that correctly identify faults (the oracle
problem). We use test purposes for selection, which allow us to
generate tests targeted at specific behaviours.
For solving the oracle problem we adapt a well-established
theory of conformance testing: the ioco relation compares
suspension traces of the specification and of the implementation.
Suspension traces are sequences of inputs, outputs and
quiescences (absences of action), and thus abstract away internal behaviors
that cannot be observed by testers.
Roughly speaking, an implementation is conformant if, after a
suspension trace of the specification,
the implementation can only show outputs and quiescences allowed by the
specification.
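The intuition behind this relation can be sketched on finite sets of suspension traces. The check below is a simplification (in the real theory, quiescence is computed from the IOLTS and trace sets are generally infinite); all names, the "delta" label for quiescence, and the request/acknowledge example are assumptions:

```python
def outs(traces, sigma, outputs):
    """Outputs (or quiescence 'delta') a model may show after suspension trace sigma."""
    n = len(sigma)
    return {t[n] for t in traces
            if len(t) > n and t[:n] == sigma and t[n] in outputs}

def ioco(impl_traces, spec_traces, outputs):
    # After each suspension trace of the spec, the implementation's
    # outputs must be among those the spec allows.
    return all(outs(impl_traces, s, outputs) <= outs(spec_traces, s, outputs)
               for s in spec_traces)

OUT  = {"!ack", "!err", "delta"}
spec = {(), ("?req",), ("?req", "!ack")}
good = {(), ("?req",), ("?req", "!ack")}        # conformant implementation
bad  = {(), ("?req",), ("?req", "!err")}        # unexpected output after ?req
print(ioco(good, spec, OUT), ioco(bad, spec, OUT))  # → True False
```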

We use IOLTS (or IOSTS) as formal models for specifications, implementations, test purposes, and test cases. Most often, specifications are not directly given in such low-level models, but are written in higher-level specification languages (e.g. SDL, UML, Lotos). The tools associated with these languages often contain a simulation API that implements their semantics in the form of IOLTS. On the other hand, the IOSTS model is expressive enough to allow a direct representation of most constructs of the higher-level languages. Test purposes are specified directly as IOLTS (or IOSTS). They are associated with marked states, giving them the status of automata or observers. When IOLTS are considered, test purposes accept sequences of actions whose projection on visible actions is selected for test generation. When IOSTS are considered, test purposes are extended automata that observe behaviors, i.e. sequences of actions and vectors of variables of the specifications, whose projection is selected for test generation.

A test case produces
a verdict when executed on an implementation.
Verdicts are formalized in IOLTS (or IOSTS) by special states (or locations).
A Fail verdict means that the IUT is rejected, a Pass verdict means that
the IUT exhibited a correct behavior and the test purpose has been satisfied, while an
Inconclusive verdict is given to a correct behavior that is not accepted by
the test purpose.
Based on these models, an
interaction semantics, and the conformance relation, one can then define
correctness properties of test cases and test suites (sets of test cases).
Typical properties are soundness (no conformant implementation may be
rejected) and exhaustiveness (every non-conformant implementation may be
rejected).

We have developed test generation algorithms. These algorithms are based on the construction of a product between the specification and the test purpose, the computation of its visible behaviors involving the identification of quiescence and determinization, and the selection of accepted behaviors. Selection can be seen as a model-checking problem where one wants to identify states (and transitions between them) that are reachable from the initial state and co-reachable from the accepting states. We have proved that these algorithms ensure that the (infinite) set of all possibly generated test cases is sound and exhaustive.
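The first step, the product between specification and test purpose, can be sketched as a synchronous product of two explicit transition systems. The toy encodings and the request/acknowledge models below are assumptions, not the actual TGV data structures:

```python
def sync_product(spec, tp, spec_init, tp_init):
    """Synchronous product of a specification and a test purpose,
    each given as a dict state -> {label: next_state} (toy encoding)."""
    start = (spec_init, tp_init)
    states, todo, trans = {start}, [start], {}
    while todo:
        s, t = todo.pop()
        moves = {}
        for label, s2 in spec.get(s, {}).items():
            if label in tp.get(t, {}):          # the test purpose follows the action
                nxt = (s2, tp[t][label])
                moves[label] = nxt
                if nxt not in states:
                    states.add(nxt)
                    todo.append(nxt)
        trans[(s, t)] = moves
    return trans

spec = {"q0": {"?req": "q1"}, "q1": {"!ack": "q0"}}
tp   = {"t0": {"?req": "t0", "!ack": "accept"}, "accept": {}}
trans = sync_product(spec, tp, "q0", "t0")
print(("q0", "accept") in trans)  # → True: the accepting behavior is reachable
```

Selection would then keep, in this product, the states that are reachable from the initial pair and co-reachable from the accepting pairs.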

Apart from these theoretical results, our algorithms are designed to be as efficient as possible in order to scale up to real applications. Roughly speaking, test generation consists in computing the intersection of the observable behavior of the specification (traces and quiescence) and the language accepted by the test purpose. The computation of observable behaviors involves determinization, while language intersection is based on computing the set of states that are reachable from the initial states and co-reachable from the accepting states.

Our first test generation algorithms are based on enumerative techniques, optimized to fight the state-space explosion problem. We have developed on-the-fly algorithms, which consist in performing a lazy exploration of the set of states that are reachable in both the specification and the test purpose. The resulting test case is an IOLTS whose traces describe interactions with an implementation under test. This technique is now mature and implemented in the TGV tool, which we often use in industrial collaborations. We are continuously improving the technique. However, what characterizes this enumerative technique is that values of variables and communication parameters are instantiated at test generation time.

More recently, we have explored symbolic test generation techniques. This is a promising technique whose main objective
is to avoid the state space explosion problem induced by the enumeration of values of
variables and communication parameters. The idea consists in computing a test case under the form of an IOSTS,
i.e., a reactive program in which the operations on data are kept in a symbolic form. However, most of the operations involved in test generation
(determinization, reachability, and coreachability) become
undecidable. For determinization we employ heuristics that handle the
so-called bounded observable non-determinism (i.e., the result of an internal choice can be detected after finitely many observable actions).
For the remaining operations, reachable and co-reachable sets of states are approximated using abstract interpretation techniques.
These techniques are implemented in the STG tool.

Controller synthesis is concerned with ensuring (not only checking) that a computer-operated system
works correctly. More precisely, given a specification model and a required
property, the problem is to control the specification's behavior, by coupling
it to a supervisor, such that the controlled specification satisfies the
property. In this framework, one distinguishes between controllable and
non-controllable actions, and between observable and non-observable actions.
Typically, the controlled system is constrained by
the supervisor, which acts on the system's controllable actions and forces it
to behave as specified by the property. The control synthesis problem can be
seen as constructive verification: building a supervisor that prevents the
system from violating a property. Several kinds of properties can be ensured,
such as reachability, invariance, attractivity, etc. Techniques adapted from
model checking are then used to compute the supervisor w.r.t. the objectives.
Optimality must be taken into account, as one often wants to obtain a supervisor
that constrains the system as little as possible.

We are also interested in the Optimal Control Problem. The purpose of optimal
control is to study the behavioral properties of a system in order to generate
a supervisor that constrains the system to a desired behavior according to
quantitative and qualitative requirements. In this spirit, we have been
working on the optimal scheduling of a system through a set of multiple goals
that the system had to visit one by one.

In many applications and control problems, FSMs are the starting point to model
fragments of a large-scale system, which usually consists of several composed
and nested sub-systems. Knowing that the number of states of the global
system grows exponentially with the number of parallel and nested sub-systems,
we have been interested in designing algorithms that perform the controller
synthesis phase by taking advantage of the structure of the plant, without
expanding the system.

In order to reduce the complexity of the supervisor synthesis phase, several
approaches have been considered in the literature, such as modular
control. Another direction relies on Statecharts: compared to classical state machines, they add
orthogonality and hierarchy features. Other works dealing with control
and hierarchy can also be found in the literature.

Both controller synthesis and conformance testing rely on the ability to solve reachability and coreachability problems on a formal model. These problems are particular but important cases of verification problems. Verification in its full generality consists in checking that a system, which is specified in a formal model, satisfies a required property. When the state space of the system is finite and not too large, verification can be carried out by graph algorithms (model checking). For large or infinite state spaces, we can perform approximate computations, either by computing a finite abstraction and resorting to graph algorithms, or preferably by using more sophisticated abstract interpretation techniques. Another way to cope with large or infinite state systems is deductive verification, which, either alone or in combination with compositional and abstraction techniques, can deal with complex systems that are beyond the scope of fully automatic methods.

The techniques described above, which are dedicated to the analysis of
LTSs, are already mature. It seems natural to extend them to
IOSTSs or data-flow applications that manipulate variables taking
their values in possibly infinite data domains.

The techniques we develop for test generation or controller synthesis require
solving state reachability and state coreachability problems: a state $s$ is reachable from an initial state ${s}_{i}$ if some execution starting from ${s}_{i}$ leads to $s$, and coreachable from a final state ${s}_{f}$ if some execution starting from $s$ leads to ${s}_{f}$.

The big change induced by taking into account the data, and not only the
(finite) control, of the systems under study is that the fixpoints
are no longer computable. The undecidability is overcome by resorting to
approximations, using the theoretical framework of Abstract
Interpretation.

Abstract Interpretation is a theory of approximate solving of fixpoint equations applied to program analysis. Most program analysis problems, among others reachability analysis, come down to solving a fixpoint equation

$x=F\left(x\right),x\in C$

where $F$ involves the ``successor states'' function defined by the program.

The exact computation of such an equation is generally not possible for undecidability (or complexity) reasons. The fundamental principles of Abstract Interpretation are:

to substitute for the concrete domain $C$ a simpler abstract domain $A$ (static approximation), and to transpose the fixpoint equation into the abstract domain, so that one has to solve an equation $y=G\left(y\right),y\in A$;

to use a widening operator (dynamic approximation) to make the iterative computation of the least fixpoint of $G$ converge after a finite number of steps to some upper-approximation (more precisely, a post-fixpoint).

Approximations are conservative, so that the obtained result is an
upper-approximation of the exact result. These two principles are well illustrated by Interval Analysis.

One substitutes for the concrete domain $\wp ({R}^{n})$ induced by the numerical variables the abstract domain $({I}_{R}{)}^{n}$, where ${I}_{R}$ denotes the set of intervals on real numbers; a set of values of a variable is then represented by the smallest interval containing it. An iterative computation on this domain may not converge: it is indeed easy to generate an infinite sequence of intervals which is strictly growing. The ``standard'' widening operator extrapolates to $+\infty $ the upper bound of an interval if the upper bound does not stabilize within a given number of steps (and similarly for the lower bound).
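The interval iteration with the standard widening can be sketched on an assumed toy loop, `x = 0; while x < 100: x += 1` (the delay of 3 iterations before widening is an arbitrary choice for the example):

```python
WIDEN_DELAY = 3  # iterate exactly a few times before widening (assumed choice)

def join(a, b):
    """Least upper bound of two intervals (lo, hi)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Standard widening: push to infinity any bound that keeps growing."""
    lo = old[0] if new[0] >= old[0] else float("-inf")
    hi = old[1] if new[1] <= old[1] else float("inf")
    return (lo, hi)

def step(iv):
    """Abstract effect of one iteration of 'while x < 100: x += 1':
    intersect with the guard x <= 99, then shift by +1."""
    lo, hi = iv[0], min(iv[1], 99)
    return (lo + 1, hi + 1) if lo <= hi else iv

inv = (0, 0)                          # interval of x at the loop head
for i in range(1000):
    nxt = join((0, 0), step(inv))     # entry states joined with body effect
    nxt = nxt if i < WIDEN_DELAY else widen(inv, nxt)
    if nxt == inv:
        break
    inv = nxt
print(inv)  # → (0, inf): a sound upper-approximation of the exact [0, 100]
```

A subsequent narrowing (descending) iteration would typically recover the exact invariant $[0,100]$; widening alone only guarantees a conservative post-fixpoint.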

In this example, the abstraction of the state space loses the relations between the values of the different variables.

Programs performing dynamic allocation of objects in memory have an even more
complex state space. Solutions have been devised to represent in an
approximate way the memory heap and the pointers between memory cells by graphs
(shape analysis).

In the same way, programs with recursive procedure calls, parameter passing
and local variables are more delicate to analyse with precision. The
difficulty is to abstract the execution stacks, which may have a complex
structure, particularly when parameter passing by reference is allowed, as it
induces aliasing on the stack.

For verification we also use theorem proving, and more particularly the
PVS and Coq proof assistants, whose rules allow proving the required properties.
Using the rules usually requires input from the user; for example, proving that a state
predicate holds in every reachable state of the system (i.e., it is an invariant)
typically requires providing a stronger, inductive invariant, which is preserved
by every execution step of the system. Another type of verification problem is proving
simulation between a concrete and an abstract semantics of a system. This too can
be done by induction in a systematic manner, by showing that, in each reachable state of the system, each step of the concrete system is simulated by a corresponding step at the abstract level.
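The two proof obligations for an inductive invariant (initial states satisfy it, and it is preserved by every step) can be illustrated exhaustively on a finite toy system; everything below (the wrap-around counter, the candidate sets) is an assumed example, not one of the team's case studies:

```python
STATES = range(0, 10)
INIT = {0}

def post(s):
    """One execution step of a toy wrap-around counter."""
    return {(s + 1) % 10}

def is_inductive(inv):
    # Init ⊆ Inv, and Inv is closed under the transition relation.
    return INIT <= inv and all(post(s) <= inv for s in inv)

print(is_inductive({0, 1, 2}))        # → False: not preserved by the step from 2
print(is_inductive(set(STATES)))      # → True: trivially inductive
```

On infinite-state systems, checking the preservation obligation is exactly what requires interactive proof in PVS or Coq.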

The methods and tools developed by the VerTeCs project-team for test
generation and control synthesis of reactive systems are quite generic. This
allows us to apply them in many application domains where the presence of
software is predominant and its correctness is essential. In particular, we
apply our research in the context of telecommunication systems, embedded
systems, smart-card applications, and control-command systems.

Our research on test generation was initially proposed for conformance
testing of telecommunication protocols. In this domain, testing is a
normalized process.

In the context of transport, embedded software systems are increasingly predominant. This is particularly important in automotive systems, where software replaces electronics for the power train, the chassis (e.g. engine control, steering, brakes) and the cabin (e.g. wipers, windows, air conditioning), and where new services to passengers are increasing (e.g. telematics, entertainment). Car manufacturers have to integrate software components provided by many different suppliers, according to specifications. One of the problems is that testing is done late in the life cycle, when the complete system is available. Faced with these problems, but also with the complexity of systems, the compositionality of components, distribution, etc., car manufacturers now try to promote standardized interfaces and component-based design methodologies. They also develop virtual platforms which allow for testing components before the system is complete. It is clear that software quality and trust are among the problems that have to be tackled in this context. This is why we believe that our techniques (testing and control) can be useful in such a context.

We have also applied our test generation techniques in the context of smart-card applications. Such applications are typically reactive, as they describe interactions between a user, a terminal and a card. The number and complexity of such applications is increasing, with more and more services offered to users. The security of such applications is of primary interest for both users and providers, and testing is one of the means to improve it.

The main application domain for controller synthesis is control-command systems. In general, such systems control costly machines (e.g. robotic systems, flexible manufacturing systems) that are connected to an environment (e.g. a human operator). Such systems are often critical, and errors occurring during their execution may have dramatic economic or human consequences. In this field, the controller synthesis methodology (CSM) is useful to ensure by construction the correct interaction between (1) the different components and (2) the environment and the system itself. For the first point, the CSM is often used as a safe scheduler, whereas for the second, the supervisor can be interpreted as a safe discrete tele-operation system.

TGV (Test Generation with Verification technology) is a tool for test generation
of conformance test suites from specifications of reactive
systems.

NBAC is a verification/slicing tool developed in collaboration with
Vérimag. This tool analyses synchronous and deterministic reactive systems
containing combinations of Boolean and numerical variables and continuously
interacting with an external environment. Its input format is directly
inspired by the low-level semantics of the LUSTRE dataflow synchronous
language. Asynchronous and/or non-deterministic systems can be compiled into
this model. The kinds of analyses performed by NBAC are: reachability
analysis from a set of initial states, which allows computing invariants
satisfied by the system; coreachability analysis from a set of final states,
which allows computing the sets of states that may lead to a final state; and
combinations of the above. The result of an analysis is either a set of
states together with a necessary condition on states and inputs to stay in
this set during an execution, or a verdict of a verification problem.
The tool is founded on the theory of abstract interpretation: sets of states
are approximated by abstract values belonging to an abstract domain, on which
fixpoint computations are performed. The originality of NBAC resides in:

the use of a very general notion of control structure in order to very precisely tune the tradeoff between precision and efficiency;

the ability to dynamically refine the control structure, and to guide this refinement by the needs of the analysis.

sophisticated methods for computing postconditions and preconditions of abstract values.

Stg (Symbolic Test Generation)

Sigali is a model-checking tool that operates on ILTS (Implicit Labeled
Transition Systems, an equational representation of an automaton), an
intermediate model for discrete event systems. It offers functionalities for
the verification of reactive systems and discrete controller synthesis. It is
developed jointly by the ESPRESSO and VerTeCs teams. The
techniques used consist in manipulating the system of equations instead of
the sets of solutions, which avoids the enumeration of the state space. Each
set of states is uniquely characterized by a predicate, and the operations on
sets can be equivalently performed on the associated predicates. Therefore,
a wide spectrum of properties, such as liveness, invariance, reachability and
attractivity, can be checked. Algorithms for the computation of predicates on
states are also available.
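The principle of operating on predicates rather than on enumerated state sets can be illustrated with plain functions (a deliberately naive sketch: Sigali itself works on polynomial equation systems, not Python closures, and all names here are assumptions):

```python
# Sets of states represented intensionally by their characteristic predicates;
# set operations become operations on predicates, with no enumeration.
def union(p, q):     return lambda s: p(s) or q(s)
def intersect(p, q): return lambda s: p(s) and q(s)
def complement(p):   return lambda s: not p(s)

even  = lambda s: s % 2 == 0
small = lambda s: s < 4

good = intersect(even, small)             # characterizes {0, 2} over 0..9
print([s for s in range(10) if good(s)])  # → [0, 2]
```

Enumeration happens here only at the very end, for display; all the set manipulation was symbolic.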

Syntool is a tool dedicated to the control of structured discrete
event systems. It implements the theory we developed for such systems.
Syntool has an API allowing the user to
graphically describe the different LTSs modeling the plant, to perform
controller synthesis computations solving e.g. the forbidden state avoidance
problem for structured systems, and finally to simulate the result (i.e. the
behavior of the controlled system). This tool is currently under testing.

Rapture is a verification tool developed jointly by BRICS and
INRIA. It analyses probabilistic processes communicating à la
CSP. Processes can also manipulate local and global variables of finite
type. Probabilistic reachability properties are specified by defining two
sets of initial and final states together with a probability bound. The
originality of the tool is to provide two reduction techniques that limit the
state-space explosion problem: automatic abstraction and refinement
algorithms, and the so-called essential states reduction.

Following our preliminary results on control of hierarchical
systems.

These techniques have been implemented in a prototype named Syntool. It
is currently under testing.

In collaboration with the BIP Project (now POP-ART) and VERIMAG, we have been interested in the programming of real-time control systems, such as in robotics or avionics. These systems are designed with multiple tasks, each with multiple modes. We propose a model of tasks in terms of transition systems, designed especially with the purpose of applying existing discrete controller synthesis techniques (based on the SIGALI framework). This provides us with a systematic methodology for the automatic generation of safe task handlers, with the support of synchronous languages and associated tools for compilation and formal computation. This work is still in progress.

Our test generation techniques were previously based on selection by test purposes. However, this approach requires specifying those test purposes. Users sometimes want more automatic ways to generate test cases, from more general selection mechanisms. In the context of the Agedis European project, we have defined more general selection mechanisms, called test selection directives. They allow describing both coverage directives (on states, transitions and, more generally, expressions on variables), test purposes (extended to more general observers) and constraints on data values, and combining them. Taking these test directives into account required a deep modification of some test generation algorithms. We also designed algorithms that generate test cases randomly, without any test directive. All these algorithms are incremental, in the sense that they produce test cases as soon as they are computed, without waiting for the end of the process, thus allowing users to interrupt the process with a partial result. These results are still unpublished.

In this work, we define a methodology that combines verification and conformance testing for validating safety requirements of reactive systems. The safety requirements, specified as observers of visible behaviors, are first automatically verified on the system's specification. Then, test cases are automatically derived from the specification and the requirements, and executed on a black-box implementation of the system. This allows checking whether the requirements hold on the implementation as well. It is shown that an implementation conforms to its specification (for the conformance relation of Tretmans) if and only if it satisfies all the relevant safety requirements that are satisfied by the specification. The main differences with our previous works on test generation from test purposes are that test purposes express reachability properties, while requirements express safety properties, which are the type of property most used in verification, and that our methodology establishes a direct link between verification and test generation, as what is tested on the implementation is exactly what is verified on the specification. This work will be presented at Testcom in 2004.

The PhD thesis of Elena Zinovieva describes the integration of the approximated reachability and co-reachability algorithms embodied in the NBac tool into the general symbolic test generation algorithm of the STG tool. The new reachability/co-reachability algorithms allow for a precise handling of data, and as a result, the new version of the tool is able to generate better test cases (i.e., having fewer Inconclusive verdicts).

The problem addressed here is how to force a known implementation model to
conform to a reference specification. The proposed solution is to control
the implementation with an internal controller and to compute it by control
synthesis techniques, considering conformance with the specification as a
control objective. The problem is attacked in the context of total and
partial observation of the controller on the
implementation.

Robustness is the ability of a system to behave acceptably in the presence of hazards. One problem is the generation of test cases for testing robustness. Given a specification with modes (normal and degraded) and hazards, and required robustness properties, we propose an approach in two phases. First, the specification should at least ensure the robustness properties, which is achieved by control. Then test cases focused on robustness are selected. This is done by using a TGV-like technique on the controlled specification, but focused on mode changes and hazards, using test purposes derived from the properties and the results of the control problem.

Before visiting us, Ahmed Khoumsi had contributed to the testing and control
of real-time systems, based on a transformation of classes of Timed Automata
(TA) into equivalent finite state automata called SetExp Automata (SEA).
During his visit, we used this transformation as a basis for the
generalization of two problems addressed in the project. First, for a class
of determinizable TA, we extended the TGV method to the real-time
case, based on a timed extension of the ioco conformance relation.

In this work, we propose to extract parts (components) from a specification, and to perform the
verification on the components rather than on the whole specification.
Under reasonable sufficient conditions, this constitutes a sound
compositional verification technique, in the sense that a property
verified on the components also holds on the whole specification. This
may considerably reduce the global verification effort. Moreover, once
verified, a component forms the basis of an adequate test case, i.e., when
executed on an implementation, it will not issue false positive or
negative verdicts with respect to the verified properties.
The approach has been implemented using the STG test selection tool and the
PVS theorem prover, and demonstrated on a smart-card application (an electronic purse system).

In this
work, we combine several verification methods with the PVS theorem prover. Its
originality lies
more in the combination of the methods than in the methods themselves,
and its value is that it scales up to real-size systems.
This is
demonstrated by verifying a real ATM protocol whose main requirement is to perform a
reliable data transfer over an unreliable communication medium.

This work has been done in cooperation with David Cachera, Thomas Jensen, and David Pichardie from the Lande project-team of Irisa. We show how to formalise a constraint-based data flow analysis in the
specification language of the Coq proof assistant. This involves
defining a dependent type of lattices together with a library of
lattice functors allowing for a modular construction of complex
abstract domains. Constraints
are expressed via an intermediate representation that allows
for efficient constraint resolution.
Correctness with respect to an operational semantics is proved
formally. The proof of existence of a
correct, minimal solution of the constraints is constructive, which means that
the extraction mechanism of Coq provides
a provably correct data flow analyser in Ocaml.
The library of lattices together with the intermediate representation
of constraints are defined in an analysis-independent fashion, thus
providing a generic framework for proving and extracting static
analysers in Coq.
This work will be presented at ESOP in 2004.

In the context of the ARC Modocop, we have proposed a new approach to
interprocedural analysis/verification of programs, consisting in deriving an
interprocedural analysis method by abstract interpretation of the standard
operational semantics of programs.

A tool is currently being implemented. Our final goal is to extend to
recursive programs the sophisticated techniques implemented in the tool
NBac for reactive programs.

This work has been done in cooperation with Thomas Reps, of the University of Wisconsin–Madison, during a three-month stay there. The goal of shape analysis is to analyze the possible memory configurations occurring during the execution of a program performing dynamic allocation of objects in the memory heap. Of course, the configurations computed by such an analysis abstract the concrete memory configurations, usually by using graphs representing memory cells and their pointer relations. We have applied the interprocedural analysis method described above to shape analysis, using the abstract lattice of 3-valued logical structures developed by Thomas Reps, M. Sagiv and R. Wilhelm. The challenge was to apply the interprocedural method with a very complex abstract lattice, and to extend the abstract lattice with interprocedural operations.

This work has been done in collaboration with the synchronous team of
VERIMAG. Ludic isolates the relevant parts of the program using
slicing techniques. Then the verification tool NBac computes the
set of states that are both reachable from the starting (or initial) state
and coreachable from the states satisfying the property. Lurette uses this
information to search efficiently for a (short) execution starting from the
initial state and leading to some final state.
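On an explicit transition graph, this reachable-and-coreachable pruning amounts to one forward and one backward traversal. A minimal Python sketch (hypothetical code; NBac itself works on symbolic state spaces of Lustre programs):

```python
# Hypothetical sketch of the pruning described above: keep only states
# that are both reachable from the initial state and coreachable from
# the target (property) states, so that the later search for an
# execution is restricted to useful states.

def reachable(trans, starts):
    """Forward reachability by worklist exploration."""
    seen, frontier = set(starts), list(starts)
    while frontier:
        s = frontier.pop()
        for t in trans.get(s, []):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def coreachable(trans, targets):
    """Backward reachability: forward reachability in the reversed graph."""
    rev = {}
    for s, succs in trans.items():
        for t in succs:
            rev.setdefault(t, []).append(s)
    return reachable(rev, targets)

# Small transition system: state 4 is a dead end, state 5 is unreachable.
trans = {0: [1, 4], 1: [2], 2: [3], 5: [3]}
useful = reachable(trans, {0}) & coreachable(trans, {3})
print(sorted(useful))  # → [0, 1, 2, 3]
```

States 4 (a dead end) and 5 (unreachable) are discarded, so any search confined to `useful` can only produce executions that actually lead from the initial state to the target.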

Agedis [11/2000-01/2004] is a European IST project.

The EAST-EEA project (ITEA Project No. 00009) continues the AEE Development Initiative.

The objective of this project with CEA LIST, Thalès Airborne Systems and Thalès R&D is to automate testing from UML models in the context of Model Driven Engineering. Our participation consists in expertise on test generation and the use of the TGV tool, in collaboration with the Triskell project-team.

Modocop: VerTeCs works on the analysis of Java programs, and interfaces STG with (Synchronous) Java, both as an output language for the execution of test cases and as an input language allowing direct specification in a Java-like language. W. Serwe did his post-doc in this context.

AS STIC 23

The objective is to define properly what robustness testing is, and to define new techniques for robustness test generation. In this context, we proposed a new approach based on a combination of control synthesis and conformance test generation.

We collaborate with several Inria project-teams: with the ESPRESSO project-team on the development of the SIGALI tool inside the Polychrony environment; with the POP-ART project-team on the use of the controller synthesis methodology for the control of control-command systems (e.g. robotic systems); with the TRISKELL project-team on testing, in particular the connection of TGV and UMLAUT and symbolic distributed test generation; with the S4 project-team on the use of control and game theory for test generation; and with the VASY project-team on the use of CADP libraries in TGV and the distribution of TGV in the CADP toolbox.

Our main collaborations are with Vérimag. Beyond formalized collaborations (IST Agedis, CNRS AS Test, ARC Modocop), we also collaborate on the connection of NBAC with Lurette for the analysis of Lustre programs, as well as the connection of SIGALI and Matou.

We collaborate with teams in Denmark (K. G. Larsen) and at the University of Twente (P. Katoen) on probabilistic verification; we participate in the development of the Rapture tool.

We collaborate in the USA (D. Clarke) on symbolic test generation, with emphasis on the STG tool.

We collaborate with a team in Italy (A. Bertolino) on using TGV for test generation for software architectures.

We collaborate with the University of Wisconsin–Madison (T. Reps) on shape analysis; Bertrand Jeannet visited T. Reps during spring 2003.

We collaborate with a team in Tunisia (M. Tahar Bhiri); Thierry Jéron co-supervises a master's student, Hatem Hamdi, working on robustness testing.

We have been involved in three proposals for Networks of Excellence
in the 6th PCRD, among them Define.

T. Jéron is responsible for a course in the Master of Computer Science at the University of Rennes 1, and for a course in the engineering school DIIC at the University of Rennes 1. He also teaches in the engineering school EnstB in Rennes and Brest. The topics of all these lectures are testing and model-checking. He is also a member of the ``Commissions de Spécialistes'' of ENS Cachan and the University of Rennes 1.

V. Rusu teaches in the Master of Computer Science in Rennes, on deductive verification methods.

B. Jeannet teaches in the Master of Computer Science in Rennes, on abstract interpretation.

B. Gaudin teaches in DEUG and DIIC 2 at the University of Rennes 1 (64h/year).

V. Tschaen teaches in DEUG and DIIC 3 at the University of Rennes 1 (64h/year).

E. Demairy teaches compilation and project management in IUP-2 and IUP-3 at the University of Rennes 1 (40h).

Current PhD theses:

Benoit Gaudin, « Control of Structured Discrete Event Systems »

Valéry Tschaen, « Automatic test generation: Models and Techniques »

Elena Zinovieva, « Generation and simplification of symbolic testing »

Trainees :

Laurent Payen, « Syntool, a tool dedicated to controller synthesis for Structured Discrete Event Systems », DESS Trainee (6 months).

Sophie Quinton, « Deductive proof with PVS », ENS Cachan (3 months).

Mbolatiana Rafamantanantsoa, « Robustness testing », Master student (6 months).

Thierry Jéron was a member of the PhD juries of Manuel Aguilar (INPG Grenoble) and Armelle Prigent (École Centrale de Nantes).

Thierry Jéron is a PC member of Fates'03 (Montréal, 10/03), Testcom 2004 (Oxford, 3/04) and Tacas'05 (Edinburgh, 4/05). He is Financial Chair of ISSRE 2004 (St Malo, 11/04), and a member of the Organizing Committee of the Movep School (Liège, 12/04).

Thierry Jéron was invited to give a tutorial on test generation at the school ETR'03 (École Temps Réel, Toulouse).