The VerTeCs team is focused on the reliability of reactive software using formal methods. By reactive software we mean software that continuously interacts with its environment. The environment can be a human user, for a complete reactive system, or another software component using the reactive software as a component. Among these, critical systems are of primary importance, as errors occurring during their execution may have dramatic economic or human consequences. Thus, it is essential to establish their correctness before they are deployed in a real environment. Correctness is also essential for less critical applications, in particular for COTS components, whose behavior should be trusted before integration into software systems.

For this, the VerTeCs team promotes the use of formal methods, i.e.
formal specification and mathematically founded analysis methods. During the
analysis and design phases, correctness of specifications with respect to
requirements or higher level specifications can be established by formal *verification*. Alternatively, *control* consists in forcing
specifications to stay within desired behaviours by coupling with a
supervisor. During validation, *testing* can be used to check the
conformance of implementations with respect to their specifications. *Test
generation* is the process of automatically generating test cases from
specifications.

More precisely, the aim of the VerTeCs project is to improve the
reliability of reactive systems by providing software engineers with methods
and tools for automating **test generation** and **controller
synthesis** from formal specifications. We adapt or develop formal models for
the description of testing and control artifacts, e.g. specifications,
implementations, test cases, supervisors. We formally describe correctness
relations (e.g. conformance or satisfaction). We also formally describe
interaction semantics between testing artifacts.
From these models, relations and
interaction semantics, we develop algorithms for automatic test and controller
synthesis that ensure desirable properties. We try to be as generic as
possible in terms of models and techniques in order to cope with a wide range
of specification languages and application domains.
We implement prototype tools for distribution in the
academic world, or for transfer to industry.

Our research is based on formal models and **verification** techniques such
as model checking, theorem proving, abstract interpretation, the control theory
of discrete event systems, and their underlying models and logics. The close
connection between testing, control and verification produces a synergy between
these research topics and allows us to share theories, models, algorithms and
tools.

The formal models we use are mainly automata-like structures such as labelled
transition systems (LTS) and some of their extensions. An LTS is a tuple M = (Q, A, →, q_{0}) where Q is a non-empty set of states;
q_{0} ∈ Q is the initial state; A is the alphabet of actions; and → ⊆ Q × A × Q is the transition relation.

To model reactive systems in the testing context, we use Input/Output labeled
transition systems (IOLTS for short). In this setting, interactions between
the system and its environment are modeled by input (controlled by the
environment) and output events (observed by the environment), and the
internal behavior of the system is modeled by internal (non observable)
events. The alphabet is then partitioned into A = A_{!} ∪ A_{?} ∪ I, where A_{!} is the alphabet of outputs,
A_{?} the alphabet of inputs, and I the alphabet of internal
actions. In the controller synthesis theory, we also distinguish between
controllable and uncontrollable events (A = A_{c} ∪ A_{uc}), and between observable and unobservable events (A = A_{o} ∪ A_{uo}). In order to consider both control and data aspects, we also
manipulate (input-output) symbolic transition systems ((IO)STS), which are
extensions of (IO)LTS that operate on data (i.e., program variables,
communication parameters, symbolic constants) through message passing,
guards, and assignments. An IOSTS is a tuple (V, Θ, A, T), where
V is a set of variables (including a counter variable encoding the control
structure), Θ is the initial condition, defined by a predicate on V,
A is the finite alphabet of actions, where each action has a signature
(just like in IOLTS, A can be partitioned as e.g. A = A_{!} ∪ A_{?} ∪ I), and T is a finite set of symbolic
transitions of the form t = (a, p, G, A) where a is an action, p
is a tuple of communication parameters, G is a guard defined by a predicate
on p and V, and A is an assignment of variables. The semantics of
IOSTS is defined in terms of (IO)LTS where states are vectors of values of
variables, and transitions between them are labelled with instantiated
actions. This (IO)LTS semantics allows us to perform syntactical
transformations at the (IO)STS level while ensuring semantical properties at
the (IO)LTS level. An alternative to IOSTS to specify systems with data
variables is the model of
synchronous dataflow equations.
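To make these ingredients concrete, the following Python sketch encodes one symbolic transition t = (a, p, G, A) with its guard and assignment, and fires it on instantiated values, mirroring the (IO)LTS semantics. The deposit action and variable names are hypothetical, not taken from STG:

```python
# Sketch of one IOSTS symbolic transition t = (a, p, G, A): an action with
# communication parameters, a guard G over parameters and variables,
# and an assignment A updating the variables.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SymbolicTransition:
    action: str                            # action name, e.g. an input "?deposit"
    params: tuple                          # communication parameter names
    guard: Callable[[dict, dict], bool]    # G(p, V)
    assign: Callable[[dict, dict], dict]   # A: new valuation of V

# ?deposit(x) with guard x > 0 and assignment balance := balance + x
t = SymbolicTransition(
    action="?deposit",
    params=("x",),
    guard=lambda p, v: p["x"] > 0,
    assign=lambda p, v: {**v, "balance": v["balance"] + p["x"]},
)

def fire(t, p, v):
    """Instantiated (IO)LTS semantics: one step if the guard holds, else None."""
    return t.assign(p, v) if t.guard(p, v) else None
```

States of the underlying (IO)LTS are exactly the variable valuations `v`, so `fire(t, {"x": 5}, {"balance": 10})` yields the successor valuation, while a guard violation yields no transition.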

Our research is based on well established theories: conformance testing, supervisory control, abstract interpretation, and theorem proving. Most of the algorithms that we employ take their origins in these theories:

graph traversal algorithms (breadth first, depth first, strongly connected components, ...). We use these algorithms for verification as well as test generation and control synthesis.

abstract interpretation algorithms, specifically in the abstract domain of convex polyhedra (for example, Chernikova's algorithm for the computation of dual forms). Such algorithms are used in verification and test generation.

logical decision algorithms, such as satisfiability of formulas in Presburger arithmetic. We use these algorithms during generation and execution of symbolic test cases.
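As an illustration of the first family, a breadth-first reachability computation over an explicit transition relation can be sketched as follows (illustrative Python, far simpler than the on-the-fly algorithms of our tools):

```python
# Breadth-first computation of the set of states reachable from `initial`,
# where `transitions` is a set of (source, action, target) triples.
from collections import deque

def reachable(initial, transitions):
    seen, queue = {initial}, deque([initial])
    while queue:
        q = queue.popleft()
        for (q1, _, q2) in transitions:
            if q1 == q and q2 not in seen:
                seen.add(q2)
                queue.append(q2)
    return seen

# State 3 has no incoming path from 0, so it is not reachable.
edges = {(0, "a", 1), (1, "b", 2), (3, "c", 0)}
```

Depth-first traversal and strongly connected component algorithms follow the same explicit-graph pattern, replacing the queue by a stack or by Tarjan-style bookkeeping.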

Conformance testing consists in checking whether an implementation under test
(abbreviated *IUT*) behaves correctly with respect to its
specification. In the line of model-based testing, we use formal
specifications and their underlying models to unambiguously define
conformance testing and test case generation. One difficult problem is to
generate adequate test cases (the selection problem) that correctly identify
faults (the oracle problem). We use *test purposes* for selection, which
allows us to generate tests targeted to specific behaviours. For solving the
oracle problem we adapt a well-established theory of conformance
testing , which formally defines conformance as a
relation between formal models of specifications and implementations.
This
conformance relation, called **ioco** is defined in terms
of visible behaviors (called *suspension traces*)
of the implementation I (denoted by STraces(I))
and those of the specification S (denoted by STraces(S)).
Suspension traces are sequences of inputs,
outputs or quiescence (the absence of action, denoted by δ),
and thus abstract away internal
behaviors that cannot be observed by testers.
The conformance relation ioco
can be formulated as a partial inclusion of visible behaviors:
I ioco S iff, for every suspension trace σ ∈ STraces(S),
out(I after σ) ⊆ out(S after σ),
where out(· after σ) denotes the set of outputs and quiescences
observable after σ.

Roughly speaking, an implementation is conformant if, after a suspension trace of the specification, the implementation can only show outputs and quiescences of the specification.
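On finite, explicitly enumerated behaviours, the ioco check reduces to the inclusion above. The following Python sketch (the coffee-machine example and the trace-to-outputs encoding are ours, abstracting away the real IOLTS machinery) illustrates it:

```python
# Behaviours are given as maps from suspension traces (tuples of actions)
# to the sets of outputs -- or quiescence "delta" -- observable after them.
def ioco(impl_out, spec_out):
    """I ioco S iff, after every suspension trace of S, the implementation
    shows only outputs/quiescences allowed by S."""
    return all(impl_out.get(sigma, set()) <= outs
               for sigma, outs in spec_out.items())

# Specification: quiescent initially; after a coin, serves coffee or tea.
spec = {(): {"delta"}, ("?coin",): {"!coffee", "!tea"}}
good = {(): {"delta"}, ("?coin",): {"!coffee"}}   # only allowed outputs
bad  = {(): {"delta"}, ("?coin",): {"!beer"}}     # an unspecified output
```

Note the "partial" aspect: traces absent from the specification put no constraint on the implementation, which is what distinguishes ioco from plain trace inclusion.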

We use IOLTS (or IOSTS) as formal models for specifications, implementations,
test purposes, and test cases. Most often, specifications are not directly
given in such low-level models, but are written in higher-level specification
languages (e.g. SDL, UML, LOTOS). The tools associated with these languages
often contain a simulation API that implements their semantics under the form
of IOLTS. On the other hand, the IOSTS model is expressive enough to allow a
direct representation of most constructs of the higher-level languages. Test
purposes are specified directly as IOLTS (or IOSTS). They are associated with
marked states, giving them the status of automata or observers
accepting sequences of actions or visible behaviors (suspension traces),
denoted by ASTraces(TP).
Selection of test cases roughly amounts to computing the
visible behaviors of the specification
that are accepted by the test purpose, i.e.
STraces(S) ∩ ASTraces(TP).

A test case produces a *verdict* when executed on an implementation. These
are formalized in IOLTS (or IOSTS) by special states (or locations). A *Fail* verdict means that the IUT is rejected, a *Pass* verdict means that the
IUT exhibited a correct behavior and the test purpose has been satisfied, while
an *Inconclusive* verdict is given to a correct behavior that is not accepted by
the test purpose. Based on these models, an interaction semantics, and the
conformance relation, one can then define correctness properties of test cases
and test suites (sets of test cases). Typical properties are soundness (no
conformant implementation may be rejected) and exhaustiveness (every non
conformant implementation may be rejected).

We have developed test generation algorithms. These algorithms are based on
the
computation of the visible behaviors of the specification STraces(S),
involving the identification of quiescence and determinization,
the construction of a product between the specification and the test purpose,
whose accepted behavior is STraces(S) ∩ ASTraces(TP),
and finally the selection of accepted behaviors.
Selection can be seen as a model-checking problem
where one wants to identify states (and transitions between them)
that are reachable from the initial state and
co-reachable from the accepting states.
We have proved that these algorithms
ensure that the (infinite) set of all possibly generated test cases is sound
and exhaustive.
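The selection step above can be sketched as two fixpoint computations over an explicit transition set, keeping only transitions on paths from the initial state to an accepting state (illustrative Python, not the on-the-fly algorithms of TGV):

```python
# Test selection as reachability + co-reachability over (src, action, dst) triples.
def forward(init, edges):
    """States reachable from `init` (forward fixpoint)."""
    reach, changed = {init}, True
    while changed:
        new = {q2 for (q1, _, q2) in edges if q1 in reach} - reach
        changed = bool(new)
        reach |= new
    return reach

def backward(accept, edges):
    """States co-reachable from the accepting set (backward fixpoint)."""
    coreach, changed = set(accept), True
    while changed:
        new = {q1 for (q1, _, q2) in edges if q2 in coreach} - coreach
        changed = bool(new)
        coreach |= new
    return coreach

def select(init, accept, edges):
    keep = forward(init, edges) & backward(accept, edges)
    return {(q1, a, q2) for (q1, a, q2) in edges
            if q1 in keep and q2 in keep}

edges = {(0, "a", 1), (1, "b", 2), (1, "c", 3)}   # 2 accepting, 3 a dead end
```

Here `select(0, {2}, edges)` prunes the transition into the dead-end state 3, exactly the kind of pruning that, on real models, avoids Inconclusive executions.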

Apart from these theoretical results, our algorithms are designed to be as efficient as possible in order to scale up to real applications. Roughly speaking, test generation consists in computing the intersection of the observable behavior of the specification (traces and quiescence) and the language accepted by the test purpose. The computation of observable behaviors involves determinization, while language intersection is based on computing the set of states that are reachable from the initial states and co-reachable from the accepting states.
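The determinization step mentioned above is essentially a subset construction with closure under internal actions. A minimal sketch (with the string "tau" standing for internal actions, an encoding of our own) is:

```python
# Subset-construction determinization with closure under internal ("tau") actions.
def tau_closure(states, edges):
    """All states reachable from `states` by internal actions only."""
    closure, changed = set(states), True
    while changed:
        new = {q2 for (q1, a, q2) in edges
               if a == "tau" and q1 in closure} - closure
        changed = bool(new)
        closure |= new
    return frozenset(closure)

def det_step(macro, action, edges):
    """Successor macro-state of `macro` under a visible `action`."""
    targets = {q2 for (q1, a, q2) in edges if a == action and q1 in macro}
    return tau_closure(targets, edges)

edges = {(0, "tau", 1), (0, "a", 2), (1, "a", 3)}
start = tau_closure({0}, edges)   # the initial macro-state
```

Macro-states are frozensets so they can serve as states of the resulting deterministic IOLTS; iterating `det_step` from `start` over the visible alphabet builds it on the fly.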

Our first test generation algorithms are based on enumerative techniques, optimized to fight the state-space explosion problem. We have developed on-the-fly algorithms, which consist in performing a lazy exploration of the set of states that are reachable in both the specification and the test purpose. The resulting test case is an IOLTS whose traces describe interactions with an implementation under test. This technique is now mature and implemented in the TGV tool, which we often use in industrial collaborations. We are continuously improving the technique. However, what characterizes this enumerative technique is that values of variables and communication parameters are instantiated at test generation time.

More recently, we have explored symbolic test generation techniques. This is
a promising technique whose main objective is to avoid the state space
explosion problem induced by the enumeration of values of variables and
communication parameters. The idea consists in computing a test case under
the form of an *IOSTS*, i.e., a reactive program in which the operations
on data are kept in symbolic form.
The syntactical transformations that we define on IOSTS
should ensure properties of their IOLTS semantics.
However, most of the operations involved
in test generation (determinization, product, reachability, and coreachability) become
undecidable. For determinization we employ heuristics that allow us to handle
so-called bounded observable non-determinism (i.e., the result of an
internal choice can be detected after finitely many observable actions).
The product is defined syntactically.
Finally test selection is performed as a syntactical transformation
of transitions which is based on a semantical reachability and
co-reachability analysis. As both problems are undecidable for IOSTS,
we compute over-approximations by abstract interpretation techniques.
Nevertheless, these overapproximations still ensure soundness
of test cases.
These techniques are implemented in the STG tool, with an interface
with NBAC used for abstract interpretation.

Controller synthesis is concerned with ensuring (not only checking) that a computer-operated system
works correctly. More precisely, given a specification model and a required
property, the problem is to control the specification's behavior, by coupling
it to a supervisor, such that the controlled specification satisfies the
property . The models used are LTS (and the associated
language), which make a distinction between *controllable* and *non-controllable* actions and between *observable* and *non-observable* actions. Typically, the controlled system is constrained by
the supervisor, which acts on the system's controllable actions and forces it
to behave as specified by the property. The control synthesis problem can be
seen as a constructive verification: building a supervisor that prevents the
system from violating a property. Several kinds of properties can be ensured
such as reachability, invariance, attractivity, etc. Techniques adapted from
model checking are then used to compute the supervisor w.r.t. the objectives.
Optimality must be taken into account as one often wants to obtain a supervisor
that constrains the system as little as possible.

We are also interested in the Optimal Control Problem. The purpose of optimal control is to study the behavioral properties of a system in order to generate a supervisor that constrains the system to a desired behavior according to quantitative and qualitative requirements. In this spirit, we have been working on the optimal scheduling of a system through a set of multiple goals that the system has to visit one by one . We have also extended the results of to the case of partial observation in order to handle more realistic applications .

In many applications and control problems, FSMs are the starting point to model fragments of a large-scale system, which usually consists of several composed and nested sub-systems. Knowing that the number of states of the global system grows exponentially with the number of parallel and nested sub-systems, we have been interested in designing algorithms that perform the controller synthesis phase by taking advantage of the structure of the plant, without expanding the system . In other words, given the modular structure of the system, it becomes of interest, for computational reasons, to be able to synthesize a supervisor on each sub-part of the system and then to infer a global supervisor from the local ones.

In order to reduce the complexity of the supervisor synthesis phase, several approaches have been considered in the literature. Modular control and modular plants are natural ways to handle this problem. Similarly, in order to take into account nested behaviors, some techniques based on model aggregation methods have been proposed to deal with hierarchical control problems. Another direction has been proposed in . Brave and Heimann in introduced Hierarchical State Machines, which constitute a simplified version of Statecharts. Compared to classical state machines, they add orthogonality and hierarchy features. Some other works dealing with control and hierarchy can be found in . This is the direction we have chosen in the VerTeCs team .

Both controller synthesis and conformance testing rely on the ability to solve reachability and coreachability problems on a formal model. These problems are particular but important cases of verification problems. Verification in its full generality consists in checking that a system, which is specified in a formal model, satisfies a required property. When the state space of the system is finite and not too large, verification can be carried out by graph algorithms (model checking). For large or infinite state spaces, we can perform approximate computations, either by computing a finite abstraction and resorting to graph algorithms, or preferably by using more sophisticated abstract interpretation techniques. Another way to cope with large or infinite state systems is deductive verification, which, either alone or in combination with compositional and abstraction techniques, can deal with complex systems that are beyond the scope of fully automatic methods.

The techniques described above, which are dedicated to the analysis of LTSs, are already mature. It seems natural to extend them to IOSTSs or data-flow applications that manipulate variables taking their values in possibly infinite data domains.

The techniques we develop for test generation or controller synthesis require
solving state reachability and state coreachability problems (a state s is
*reachable from an initial state s_{i}* if some execution starting from
s_{i} reaches s; dually, s is *coreachable* from a set of states if some
execution starting from s reaches one of them).

The big change induced by taking into account the data, and not only the (finite) control, of the systems under study is that the fixpoints are no longer computable. The undecidability is overcome by resorting to approximations, using the theoretical framework of Abstract Interpretation .

Abstract Interpretation is a theory of approximate solving of fixpoint equations applied to program analysis. Most program analysis problems, among others reachability analysis, come down to solving a fixpoint equation

x = F(x),  x ∈ C

where C is a lattice. In the case of reachability
analysis, if we denote by S the state space of the considered program, C
is the lattice ℘(S) of sets of states, ordered by inclusion, and F is
roughly the ``*successor states*'' function defined by the program.

The exact computation of such an equation is generally not possible for undecidability (or complexity) reasons. The fundamental principles of Abstract Interpretation are:

to substitute for the *concrete domain* C a simpler *abstract
domain* A (static approximation) and to transpose the fixpoint equation
into the abstract domain, so that one has to solve an equation y = G(y), y ∈ A;

to use a *widening operator* (dynamic approximation) to make the
iterative computation of the least fixpoint of G converge after a finite
number of steps to some upper-approximation (more precisely, a
post-fixpoint).

Approximations are conservative, so that the obtained result is an upper-approximation of the exact result. These two principles are well illustrated by Interval Analysis , which consists in associating to each numerical variable of a program an interval representing an (upper) set of reachable values:

One substitutes for the concrete domain ℘(R^{n}) induced by the
numerical variables the abstract domain (I_{R})^{n}, where I_{R} denotes
the set of intervals on real numbers; a set of values of a variable is then
represented by the smallest interval containing it;

An iterative computation on this domain may not converge: it is indeed easy to generate an infinite sequence of intervals which is strictly growing. The ``standard'' widening operator extrapolates to +∞ the upper bound of an interval if the upper bound does not stabilize within a given number of steps (and similarly for the lower bound, extrapolated to -∞).
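The standard interval widening can be written in a few lines. The sketch below (illustrative Python) applies the extrapolation immediately, rather than after a given number of stable steps as in a production analyzer:

```python
# Standard interval widening: a bound that grows between two iterates is
# pushed to +/- infinity, forcing the fixpoint iteration to converge.
import math

def widen(old, new):
    """old, new: intervals as (lo, hi) pairs; returns old "nabla" new."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)
```

For a loop x := x + 1 starting from x = 0, the iterates [0,0], [0,1], [0,2], ... never stabilize; one widening step `widen((0, 0), (0, 1))` jumps directly to [0, +∞], a sound post-fixpoint.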

In this example, the state space ℘(R^{n}) that should be abstracted
has a simple structure, but this may be more complicated when variables belong
to different data types (Booleans, numerics, arrays) and when it is necessary
to establish *relations* between the values of different types.

Programs performing dynamic allocation of objects in memory have an even more
complex state space. Solutions have been devised to represent in an
approximate way the memory heap and pointers between memory cells by graphs
(*shape analysis*). Values contained in memory
cells are however generally ignored.

In the same way, programs with recursive procedure calls, parameter passing and local variables are more delicate to analyse with precision. The difficulty is to abstract the execution stacks, which may have a complex structure, particularly when parameter passing by reference is allowed, as it induces aliasing on the stack .

For verification we also use theorem proving, and more particularly the PVS and Coq proof assistants. These
are two general-purpose systems based on two different versions of higher-order
logic. A verification task in such a proof assistant consists in encoding the
system under verification and its properties into the logic of the proof
assistant, together with verification *rules* that allow one to prove the
properties. Using the rules usually requires input from the user; for example,
proving that a state predicate holds in every reachable state of the system
(i.e., it is an *invariant*) typically requires providing a stronger, *inductive* invariant, which is preserved by every execution step of the
system. Another type of verification problem is proving *simulation*
between a concrete and an abstract semantics of a system. This too can be done
by induction in a systematic manner, by showing that, in each reachable state
of the system, each step of the concrete system is simulated by a corresponding
step at the abstract level.
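On a finite-state system, this proof scheme can even be checked mechanically. The following Python sketch (a toy modulo-4 counter of our own, standing in for the inductive argument one would discharge in PVS or Coq) tests the base case and the induction step:

```python
# Inductive-invariant check on a finite transition relation:
# the predicate must hold on initial states (base case) and be
# preserved by every transition (induction step).
def is_inductive_invariant(pred, inits, transitions):
    base = all(pred(s) for s in inits)
    step = all(pred(t) for (s, t) in transitions if pred(s))
    return base and step

# Counter incrementing modulo 4: "counter < 4" is an inductive invariant.
trans = {(i, (i + 1) % 4) for i in range(4)}
```

Here `is_inductive_invariant(lambda s: s < 4, {0}, trans)` succeeds, while the stronger predicate s < 3 fails the induction step at the transition (2, 3), illustrating why one often needs to strengthen a desired invariant before it becomes inductive.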

The methods and tools developed by the VerTeCs project-team for test generation and control synthesis of reactive systems are quite generic. This allows us to apply them in many application domains where the presence of software is predominant and its correctness is essential. In particular, we apply our research in the context of telecommunication systems, embedded systems, smart-card applications, and control-command systems.

Our research on test generation was initially proposed for conformance testing of telecommunication protocols. In this domain, testing is a standardized process , and formal specification languages are widely used (SDL in particular). Our test generation techniques have already proved useful in this context, going up to industrial transfer. New standardized component-based design methodologies such as UML and OMG's MDE will certainly increase the need for formal techniques in order to ensure the composability of components, by verification and testing. We believe that our techniques, by their genericity and adaptability, will also prove useful at different levels of these methodologies, from component testing to system testing.

In the context of transport, embedded software systems are increasingly predominant. This is particularly important in automotive systems, where software replaces electronics for power train, chassis (e.g. engine control, steering, brakes) and cabin (e.g. wipers, windows, air conditioning), and where new services to passengers are increasing (e.g. telematics, entertainment). Car manufacturers have to integrate software components provided by many different suppliers, according to specifications. One of the problems is that testing is done late in the life cycle, when the complete system is available. Faced with these problems, but also with the complexity of systems, the compositionality of components, distribution, etc., car manufacturers now try to promote standardized interfaces and component-based design methodologies. They also develop virtual platforms which allow for testing components before the system is complete. It is clear that software quality and trust are among the problems that have to be tackled in this context. This is why we believe that our techniques (testing and control) can be useful in such a context.

We have also applied our test generation techniques in the context of smart-card applications. Such applications are typically reactive, as they describe interactions between a user, a terminal and a card. The number and complexity of such applications are increasing, with more and more services offered to users. The security of such applications is of primary interest for both users and providers, and testing is one of the means to improve it.

The main application domain for controller synthesis is control-command systems. In general, such systems control costly machines (e.g. robotic systems, flexible manufacturing systems) that are connected to an environment (e.g. a human operator). Such systems are often critical systems, and errors occurring during their execution may have dramatic economic or human consequences. In this field, the controller synthesis methodology (CSM) is useful to ensure by construction the correct interaction between (1) the different components, and (2) the environment and the system itself. For the first point, the CSM is often used as a safe scheduler, whereas for the second one, the supervisor can be interpreted as a safe discrete tele-operation system.

NBAC is a verification/slicing tool developed in collaboration with Vérimag. This tool analyses synchronous and deterministic reactive systems containing combinations of Boolean and numerical variables and continuously interacting with an external environment. Its input format is directly inspired by the low-level semantics of the LUSTRE dataflow synchronous language. Asynchronous and/or non-deterministic systems can be compiled into this model. The kinds of analyses performed by NBAC are: reachability analysis from a set of initial states, which allows one to compute invariants satisfied by the system; coreachability analysis from a set of final states, which allows one to compute the set of states that may lead to a final state; and combinations of the above. The result of an analysis is either a set of states, together with a necessary condition on states and inputs to stay in this set during an execution, or a verdict for a verification problem. The tool is founded on the theory of abstract interpretation: sets of states are approximated by abstract values belonging to an abstract domain, on which fixpoint computations are performed. The originality of NBAC resides in

the use of a very general notion of control structure in order to very precisely tune the tradeoff between precision and efficiency;

the ability to dynamically refine the control structure, and to guide this refinement by the needs of the analysis.

sophisticated methods for computing postconditions and preconditions of abstract values.

Sigali is a model-checking tool that operates on ILTS (Implicit Labeled Transition Systems, an equational representation of an automaton), an intermediate model for discrete event systems. It offers functionalities for the verification of reactive systems and for discrete controller synthesis. It is developed jointly by the ESPRESSO and VerTeCs teams. The techniques used consist in manipulating the system of equations instead of the sets of solutions, which avoids the enumeration of the state space. Each set of states is uniquely characterized by a predicate, and operations on sets can be equivalently performed on the associated predicates. Therefore, a wide spectrum of properties, such as liveness, invariance, reachability and attractivity, can be checked. Algorithms for the computation of predicates on states are also available . SIGALI is connected to the Polychrony environment (as well as the Matou environment), thus allowing the modeling of reactive systems by means of Signal specifications or Mode Automata, and the visualization of the synthesized controller by an interactive simulation of the controlled system. SIGALI is protected by the APP (Agence de Protection des Programmes).

Syntool is a tool dedicated to the control of structured discrete event systems. It implements the theory developed in . Syntool has an API allowing the user to graphically describe the different LTSs modeling the plant, to perform controller synthesis computations solving e.g. the forbidden state avoidance problem for structured systems, and finally to simulate the result (i.e. the behavior of the controlled system). This tool is currently under test.

Rapture is a verification tool developed jointly by BRICS and
INRIA . The tool is designed to verify reachability
properties on Markov Decision Processes (MDP), also known as Probabilistic
Transition Systems. This model can be viewed either as
classical (finite-state) transition systems extended with probability
distributions on successor states, or as Markov Chains extended with
non-determinism. We have developed a simple automata language that allows one to
describe a set of processes communicating over a set of channels *à la*
CSP. Processes can also manipulate local and global variables of finite
type. Probabilistic reachability properties are specified by defining two
sets of initial and final states together with a probability bound. The
originality of the tool is to provide two reduction techniques that limit the
state space explosion problem: automatic abstraction and refinement
algorithms, and the so-called essential states reduction .

We investigated the supervisory control of a class of Discrete Event Systems modeled either by a collection of Finite State Machines that behave asynchronously or by a Hierarchical Finite State Machine. The basic problem of interest is to ensure the invariance of a set of particular configurations in the system. When the system is modeled as asynchronous FSMs, we provide algorithms that, based on a particular decomposition of the set of forbidden configurations, solve the control problem locally (i.e. on each component, without computing the whole system) and produce a global supervisor ensuring the desired property. We then provide sufficient conditions under which the obtained controlled system is non-blocking. This kind of objective may be useful to perform dynamic interactions between different parts of a system. Finally, we apply these results to the case of Hierarchical Finite State Machines .

We also developed a methodology for controlling structured discrete event
systems, when the control objectives are expressed in terms of languages. We
investigated the computation of the supremal controllable language contained
in the language of the specification. We do not adopt the decentralized approach.
Instead, we have chosen to perform the control on some approximations of the
plant derived from the behavior of each component. The behavior of these
approximations is restricted so that they respect a new language property for
discrete event systems called *partial controllability condition* that
depends on the specification. It is shown that, under some assumptions, the
intersection of these ``controlled approximations'' corresponds to the
supremal controllable language contained in the specification with respect to
the plant. This computation is performed without having to build the whole
plant, hence avoiding the state space explosion induced by the concurrent
nature of the plant .

In collaboration with the POP-ART project, we have been interested in the programming of real-time control systems, such as robotic, automotive or avionics systems. These systems are designed with multiple tasks, each with multiple modes, every mode implementing a functionality with different levels of quality (e.g., computation approximation) and cost (e.g., computation time, energy). Due to the interactions between components, it is complex to design task handlers that control the switching of activities in order to ensure safety properties of the global system. We proposed a model of tasks in terms of transition systems, designed especially with the purpose of applying existing (optimal) discrete controller synthesis techniques. This provides us with a systematic methodology for the automatic generation of safe task handlers, with the support of synchronous languages and associated tools for compilation and formal computation .

This year, we have been interested in solving the safety controller synthesis problem for various models (from finite transition systems to hybrid systems). Within this framework, we have been mainly interested in an intermediate model: the symbolic transition system. We first redefined the concept of controllability by introducing the notion of uncontrollable transitions. We then defined synthesis algorithms based on abstract interpretation techniques, so that we can ensure finiteness of the computations. We finally generalized our methodology to the control of hybrid systems, which gives a unified framework for the supervisory control problem for several classes of models.

Our test generation techniques were previously based on selection by test purposes. However, this approach requires users to specify those test purposes, and users sometimes want more automatic ways to generate test cases, from more general selection mechanisms. In the context of the Agedis European project (see ), we have defined more general selection mechanisms, called test selection directives. They make it possible to describe coverage directives (on states, transitions and, more generally, expressions on variables), test purposes (extended to more general observers) and constraints on data values, and to combine them. Taking these test directives into account involved a deep modification of some test generation algorithms. We also designed algorithms that generate test cases randomly, without any test directive. All these algorithms are incremental, in the sense that they produce test cases as soon as they are computed, without waiting for the end of the process, thus allowing users to interrupt the process with a partial result. We are now formalizing these notions and algorithms. The starting point is a variation of the IOSTS model and its IOLTS semantics where information in states (composed of vectors of variable values) is explicitly considered for test generation. Operations performed during test generation with various selection directives, including coverage, can then be defined at this semantic level. This formalization should bridge the gap between enumerative test generation and symbolic test generation.
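To illustrate what a transition-coverage directive may mean operationally, here is a sketch (not the actual implementation) of coverage-driven selection on an enumerative IOLTS: each test case is the shortest run reaching a not-yet-covered transition, and a run also covers every transition it traverses. All names are invented for the example:

```python
from collections import deque

def cover_transitions(init, trans):
    """Greedy selection of test sequences covering every transition.
    trans: dict state -> list of (action, target)."""
    uncovered = {(s, a, t) for s, outs in trans.items() for (a, t) in outs}
    tests = []
    while uncovered:
        # BFS from the initial state to the closest uncovered transition.
        queue = deque([(init, [])])
        seen = {init}
        best = None
        while queue and best is None:
            s, path = queue.popleft()
            for (a, t) in trans.get(s, []):
                step = path + [(s, a, t)]
                if (s, a, t) in uncovered:
                    best = step
                    break
                if t not in seen:
                    seen.add(t)
                    queue.append((t, step))
        if best is None:
            break  # remaining transitions are unreachable
        uncovered -= set(best)  # the whole run contributes coverage
        tests.append([a for (_, a, _) in best])
    return tests

# Toy IOLTS: '?' marks inputs, '!' outputs.
print(cover_transitions(0, {0: [('?req', 1)], 1: [('!ok', 0), ('!err', 0)]}))
```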

In our seminal work, we presented a framework for symbolic test generation. These were preliminary ideas which have been improved since then. First, the PhD thesis of Elena Zinovieva presents a complete formalization of these ideas, and describes the integration of the approximated reachability and co-reachability algorithms embodied in the NBac tool into the general symbolic test generation algorithm of the STG tool. The new reachability/co-reachability algorithms allow for a precise handling of data, and as a result the new version of the tool is able to generate better test cases (i.e., having fewer Inconclusive verdicts). The Habilitation of Thierry Jéron presents another viewpoint and new results, with a more abstract IOSTS model and a language-based point of view that completely explains IOSTS transformations in terms of their IOLTS semantics. Finally, we reformulate these results and define new qualitative properties of test cases. We show precisely what is lost by approximate analysis compared to exact analysis (e.g. in the finite IOLTS case): Inconclusive verdicts can be delayed, while Pass and Fail verdicts are exact.

In this work, we present combinations of verification and conformance testing techniques for the formal validation of reactive systems. A formal specification of a system - an input-output automaton with variables that may range over infinite domains - is assumed. Additionally, a set of safety properties for the specification is given, under the form of observers described in the same formalism. Then, each property is verified on the specification using automatic techniques (e.g., abstract interpretation) that are sound but not necessarily complete for the class of safety properties considered here. Next, for each property, a test case is automatically generated from the specification and the property, and is executed on a black-box implementation of the system. If the verification step was successful, that is, it has established that the specification satisfies the property, then the test execution may detect the violation of the property by the implementation, as well as the violation of the standard ioco conformance relation between implementation and specification. On the other hand, if the verification step did not conclude (i.e., it could neither prove nor disprove the property), then the test execution may additionally detect a violation of the property by the specification. The information about the relative (in)consistencies between specification, implementation and properties is reported to the user as test verdicts. The approach is illustrated on the BRP protocol. This work proposes a symbolic approach that extends previous results based on finite models and enumerative methods.
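The verdict logic can be illustrated in a toy enumerative setting. The sketch below (an illustration of the idea, not of the actual symbolic algorithm) runs a trace both on a deterministic specification and on a property observer, then combines the outcomes into a verdict; all automata are invented examples:

```python
def run(automaton, init, trace):
    """Run a trace on a deterministic automaton given as a dict
    (state, action) -> state; return the final state, or None if stuck."""
    s = init
    for a in trace:
        s = automaton.get((s, a))
        if s is None:
            return None
    return s

def combined_verdict(trace, spec, spec_init, observer, obs_init, obs_bad):
    """Combine conformance (trace is a spec trace) with a safety observer
    (reaching a bad observer state means the property is violated)."""
    conforms = run(spec, spec_init, trace) is not None
    violates = run(observer, obs_init, trace) in obs_bad
    if violates and not conforms:
        return "Fail: property and conformance violated"
    if violates:
        return "Fail: property violated (spec may violate it too)"
    if not conforms:
        return "Fail: non-conformant"
    return "Pass"

spec = {(0, 'a'): 1, (1, 'b'): 0}               # invented toy specification
obs = {(0, 'a'): 0, (0, 'b'): 0, (0, 'c'): 1}   # observer: 'c' is forbidden
print(combined_verdict(['a', 'b'], spec, 0, obs, 0, {1}))  # Pass
print(combined_verdict(['a', 'c'], spec, 0, obs, 0, {1}))
```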

The project has mainly studied the problem of conformance test generation in the case where the specification is intraprocedural, *i.e.,* it does not contain procedure calls and returns. However, allowing such a feature in specifications would provide more expressiveness, both theoretically and from a specification language point of view. The master thesis of L. Randriamanohisoa investigated the problem of test generation in the case where the specification is defined by a pushdown system (PDS), which extends finite transition systems (LTS) with a stack of symbols belonging to a finite set, and which generates context-free languages. PDSs make it possible to model any recursive program manipulating finite-state variables, and can thus be viewed as the interprocedural generalization of LTSs for our application. Two important problems for test generation are still decidable with a low polynomial complexity: intersection with a regular language, which allows us to consider test purposes modeled with IOLTSs, and co-reachability (and reachability) analysis, which allows us to perform test selection. This allowed us to design an exact test generation algorithm resulting in an IOPDS test case. Interesting problems of partial observation are raised by this work: a PDS test case indeed observes only the topmost symbol of its stack.
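The decidability of (co-)reachability for PDSs rests on the classical pre* saturation procedure over a finite automaton representing a regular set of configurations. A minimal sketch of that procedure, with invented names and a toy pop rule, is given below; the control states of the PDS double as initial states of the configuration automaton:

```python
def pre_star(rules, trans):
    """Saturation computing pre* of a regular set of PDS configurations.
    rules: list of ((p, g), (p2, w)) meaning (p, g·s) -> (p2, w·s),
    with w a tuple of stack symbols (usually of length <= 2).
    trans: set of automaton transitions (state, symbol, state)."""
    changed = True
    while changed:
        changed = False
        for ((p, g), (p2, w)) in rules:
            # States reachable from p2 by reading w in the current automaton.
            targets = {p2}
            for sym in w:
                targets = {q2 for q in targets
                           for (q1, s, q2) in trans
                           if q1 == q and s == sym}
            # Saturation rule: add p --g--> q for each such q.
            for q in targets:
                if (p, g, q) not in trans:
                    trans.add((p, g, q))
                    changed = True
    return trans

# Toy PDS: one pop rule (p, a) -> (p, eps); target set {(p, 'a')},
# accepted by the single automaton transition p --a--> qf.
result = pre_star([(('p', 'a'), ('p', ()))], {('p', 'a', 'qf')})
print(sorted(result))  # pre* = {(p, a^n) | n >= 1}
```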

We studied the problem of controlling a plant of a system by means of an automatically computed supervisor, in order to ensure a certain conformance relation between the plant and its formal specification. The supervisor can be seen as a device that automatically fixes errors, which otherwise should have been discovered by testing and fixed by hand. The resulting controlled plant conforms to the specification and is maximal in terms of observable behavior.

This work was part of the master internship of Sebastien Saudrais. Model checking consists in checking that a model of a system satisfies some property. Typically, properties are expressed in a temporal logic (CTL, LTL, etc.), and systems are modelled with Kripke structures (LTSs where states carry the atomic propositions true in those states). Coverage in model checking consists in checking that a property covers the model of the system. Intuitively, a state is covered if it plays a role in the satisfaction of the property, i.e. if a mutant obtained by inverting an atomic proposition in this state violates the property. After a study of the state of the art, we tried to transpose the problem to the model of IOLTSs, where information is carried by transitions. We first defined mutants for these models and their unfoldings, obtained by addition, suppression and modification of transitions. We then defined adequate notions of coverage for LTL properties on traces and compared these notions. This work constitutes a first step towards new formal definitions of coverage for test generation.
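The mutation-based notion of coverage can be made concrete on a small example. The sketch below uses only the deletion operator and explores traces up to a fixed depth (both simplifications); a transition is marked covered when deleting it changes the satisfaction of a trace property:

```python
def mutants(trans):
    """Mutants of an IOLTS by deleting one transition (one of the mutation
    operators considered: deletion, addition, modification)."""
    for i in range(len(trans)):
        yield trans[:i] + trans[i + 1:]

def traces(trans, init, bound=4):
    """All traces of length <= bound; trans is a list of (src, action, dst)."""
    result = {()}
    frontier = {(init, ())}
    for _ in range(bound):
        frontier = {(s2, tr + (a,))
                    for (s, tr) in frontier
                    for (s1, a, s2) in trans if s1 == s}
        result |= {tr for (_, tr) in frontier}
    return result

def covered(trans, prop, init):
    """Transitions whose deletion flips the verdict of `prop` on the traces."""
    base = prop(traces(trans, init))
    return [trans[i] for i, m in enumerate(mutants(trans))
            if prop(traces(m, init)) != base]

trans = [(0, 'a', 1), (1, 'b', 0), (0, 'c', 0)]
prop = lambda ts: any('b' in t for t in ts)   # "some trace performs b"
print(covered(trans, prop, 0))  # 'a' and 'b' transitions are covered
```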

This work has been done in cooperation with David Cachera, Thomas Jensen, and David Pichardie from the Lande project-team of Irisa. We show how to formalise a constraint-based data flow analysis in the specification language of the Coq proof assistant. This involves defining a dependent type of lattices together with a library of lattice functors allowing for a modular construction of complex abstract domains. Constraints are expressed via an intermediate representation that allows for efficient constraint resolution. Correctness with respect to an operational semantics is proved formally. The proof of existence of a correct, minimal solution of the constraints is constructive, which means that the extraction mechanism of Coq provides a provably correct data flow analyser in OCaml. The library of lattices together with the intermediate representation of constraints are defined in an analysis-independent fashion, thus providing a generic framework for proving and extracting static analysers in Coq.
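The constraint-resolution part can be illustrated, outside Coq, by a chaotic-iteration solver over a finite-height lattice. The Python sketch below is only a simplified analogue of the extracted analyser, with a powerset lattice as the example domain:

```python
def solve(n, constraints, bot, join):
    """Least solution of a system of constraints x[j] >= f(x), by chaotic
    iteration from bottom. Termination assumes a finite-height lattice.
    constraints: list of (j, f) with f a monotone function of the vector x."""
    x = [bot] * n
    changed = True
    while changed:
        changed = False
        for (j, f) in constraints:
            new = join(x[j], f(x))
            if new != x[j]:
                x[j] = new
                changed = True
    return x

# Powerset-lattice example: x0 >= {1}, x1 >= x0, x0 >= x1 U {2}.
cs = [(0, lambda x: {1}), (1, lambda x: x[0]), (0, lambda x: x[1] | {2})]
print(solve(2, cs, frozenset(), lambda a, b: frozenset(a | b)))
```

The minimal solution assigns {1, 2} to both variables; in the Coq development, the corresponding existence proof is constructive, which is what makes extraction possible.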

A tool is currently being implemented. Our final goal is to extend the sophisticated techniques implemented in the NBac tool for reactive programs to recursive programs.

This work addresses the verification of properties of imperative programs with recursive procedure calls, heap-allocated storage, and destructive updating of pointer-valued fields, i.e., interprocedural shape analysis. It presents a way to apply some previously known approaches to interprocedural dataflow analysis, which in past work had been applied only to a much less rich setting, so that they can be applied to programs that use heap-allocated storage and perform destructive updating.

Agedis [11/2000-01/2004] is a European IST project (http://www.agedis.de/) on the automated generation and execution of test suites for distributed component-based software. The goal of the project is to propose a toolset for test generation, test execution and test result analysis, starting from UML models of distributed software. This project allowed us to improve the TGV technology by extending it with coverage-based test generation (see ), using a more complete API to the IF simulation tool of VERIMAG. It also gave us a new opportunity to connect our technology to the UML world. Partners are IBM Haifa, IBM Hursley, France Télécom R&D, IntraSoft, Imbus, Oxford University and Vérimag. We are subcontractors of Vérimag in this project. Agedis ended in early 2004 with demonstrations of the complete chain, from UML design models and test selection directives to test generation and execution on target.

The goal of this 3-year project (starting October 2004) is to build a platform for the formal validation of France Telecom's vocal phone services. Vocal services are based on speech recognition and synthesis algorithms; they include automatic connection to a callee's phone number obtained by pronouncing her name, or automatic pronunciation of the name of the callee whose phone number was dialed in by the user. Here, we are not interested in validating the voice recognition/synthesis algorithms themselves, but the logic surrounding them. For example, the system may allow itself a certain number of attempts for recognizing a name, after which it switches to normal number-dialing mode, from which the user may choose to go back to voice-recognition mode by pronouncing a certain keyword. This logic may become quite intricate, and this complexity is multiplied by the number of clients that may be using the service at any given time. Its correctness has been identified by France Telecom as a key factor in the success of the deployment of voice-based systems. To validate these services, we plan to apply a combination of formal verification and conformance testing techniques (cf. Section ).

V3F (http://lifc.univ-fcomte.fr/~v3f/) [2003-2005] is a project involving LIFC Besançon, Inria-I3S Nice, LIST-CEA Saclay and the Lande and Vertecs project-teams at Irisa. The goal of this project is to provide tools to support the verification and validation process of programs with floating-point numbers. More precisely, the V3F project will investigate techniques to check that a program satisfies the hypotheses on real-number computations that were made during the modelling step. The underlying technology will be based on constraint programming. Constraint solving techniques have been successfully used in recent years for automatic test data generation, model checking and static analysis. However, in all these applications, the domains of the constraints were restricted either to finite subsets of the integers, to rational numbers or to intervals of real numbers. Hence, the investigation of solving techniques for constraint systems over floating-point numbers is an essential issue for handling problems over the floats.

The expected results of the V3F project are thus a clean design of constraint solving techniques over floating-point numbers, and a deep study of the capabilities of these techniques in the software validation and verification process. More precisely, we will develop an open and generic prototype of a constraint solver over the floats. We will also pay special attention to the integration of floats into various formal notations (e.g., B, Lustre, UML/OCL), to allow an effective use of the constraint solver in formal model verification, automatic test data generation (functional and structural) and static analysis.

Our contribution to this project is, first, to precisely formalize a conformance testing theory for programs with floating-point computations with respect to their specifications and, second, to describe test generation algorithms in this framework. We are currently investigating several possibilities for the first point, where the conformance testing theory should take into account imprecisions due to floating-point calculations.
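As a hedged illustration of constraint propagation over floats, the sketch below shows interval addition with outward rounding, one of the basic sound operations such a solver needs. The one-ulp outward step via `nextafter` is a simplification of what a real solver does (controlling the IEEE rounding mode):

```python
import math

def add_intervals(a, b):
    """Interval addition over IEEE-754 doubles. Stepping one ulp outward
    with nextafter makes the result soundly enclose all exact sums,
    whatever the rounding of the two additions."""
    lo = a[0] + b[0]
    hi = a[1] + b[1]
    return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

x = (0.1, 0.2)
y = (0.3, 0.3)
lo, hi = add_intervals(x, y)
# Soundness: the result encloses every sum of representable values.
assert lo <= 0.1 + 0.3 and hi >= 0.2 + 0.3
print(lo, hi)
```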

The Potestat project (http://www-lsr.imag.fr/POTESTAT/) [2004-2006] involves LSR-IMAG Grenoble, Verimag Grenoble and Lande and Vertecs project teams in Irisa.

In the framework of open service implementations, based on the interconnection of heterogeneous systems, security managers lack well-formalized analysis techniques. The security of such systems is therefore organized from pragmatic elements, based on well-known vulnerabilities and their associated solutions. It then remains to verify whether such security policies are correctly and effectively implemented in the actual system. This is usually carried out by auditing the administrative procedures and the system configuration. Tests are then performed, for instance by probing, to check the presence of some particular vulnerabilities. Although some tools are already available for specific tests (like password crackers), there is no solution to analyze the conformance of the whole system with respect to a security policy. This lack may be explained by several factors. First, there is currently no complete study of the formal modelling of a security policy, even if some particular aspects have been studied more thoroughly. Furthermore, verification-based research on security has usually concerned more specific elements, like cryptographic protocols or code analysis. Finally, most of these works are dedicated to a priori verification of the coherency of security policies before their implementation. We are concerned here with the conformance of a system configuration with respect to a given policy. In the framework of the POTESTAT project, we plan to tackle this problem according to the following items:
- formal modelling of security policies, allowing a test-directed analysis;
- definition of a conformance notion between a system configuration and some elements of security policies, the goal being to obtain a test theory similar to the one existing in the protocol testing area (like the Z.500 norm);
- definition of methods to test this conformance notion, including testability problems, the environment of execution, code analysis and test selection.
A long-term goal of this project is to offer security managers:
- means to model information flows and network elements (protocols, node types and their associated security policy, etc.), so as to better describe the security policy for conformance testing;
- practical tools to perform coherency verification and vulnerability detection.

The APRON (Analyse de PROgrammes Numériques) project (http://www.cri.ensmp.fr/apron/) [2004-2006] involves ENSMP, LIENS-ENS, LIX-Polytechnique, VERIMAG and Vertecs-Irisa.

The goal is to develop methods and tools to statically analyze embedded software with a high level of criticality, for which the detection of errors at run-time is unacceptable for safety or security reasons. Such safety- and security-critical software is found in the context of transportation, automotive, avionics, space, industrial process control and supervision, etc. One characteristic of such software is that it is based on physical models and hence involves a lot of numerical computations.
Moreover, *counters* play an important role in the control of reactive
programs (e.g., delay counting in synchronous programming). Critical properties
depending on these counters are generally outside the scope of model-checking
approaches, while being simple enough to be accurately analyzed by more
sophisticated numerical analyses.
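Interval analysis with widening is the textbook instance of such numerical analyses of counters. The sketch below (invented toy loop, standard interval widening) computes a loop-head invariant for a bounded counter; note that without a subsequent narrowing pass the upper bound is over-approximated to +infinity:

```python
import math

def join(a, b):
    """Least upper bound of two intervals (None = bottom)."""
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Standard interval widening: unstable bounds jump to infinity."""
    if a is None: return b
    if b is None: return a
    lo = a[0] if a[0] <= b[0] else -math.inf
    hi = a[1] if a[1] >= b[1] else math.inf
    return (lo, hi)

# Analyse the toy loop `i = 0; while i < 10: i += 1`:
# abstract fixpoint on the interval of i at the loop head.
inv = None
while True:
    body = None
    if inv is not None:
        guarded = (inv[0], min(inv[1], 9))   # intersect with guard i < 10
        if guarded[0] <= guarded[1]:
            body = (guarded[0] + 1, guarded[1] + 1)   # effect of i += 1
    new = widen(inv, join((0, 0), body))     # entry joins the back edge
    if new == inv:
        break
    inv = new
print(inv)  # (0, inf): sound, but a narrowing pass would recover (0, 10)
```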

The goal of the project is the static analysis of large specifications (e.g. à la Lustre) and corresponding programs (e.g. of 100 to 500 000 LOCs of C), made of thousands of procedures, involving a lot of numerical floating-point computations, as well as boolean and counter-based control in order to:

verify critical properties (including the detection of possible runtime errors), and

help automatically locate the origin of potential violations of critical properties.

An example of such critical properties, as found in control/command programs, is of the form ``under a condition holding on boolean and numerical variables for some time, the program must imperatively establish a given boolean and/or numerical property, within a given bounded delay''.

Vertecs contributes to the following topics within the APRON project:

The design and implementation of a common interface to several abstraction libraries (intervals, linear equalities, octagons, polyhedra, ...and their combination).

The study of adaptive techniques for adjusting the tradeoff between the efficiency and the precision of analyses, among them dynamic partitioning techniques. Results have already been obtained in the intraprocedural case, but to a lesser extent in the interprocedural case.

Vertecs focuses mainly on Lustre specifications and provides, with the NBac tool, the main experimental platform of the project for the verification of critical properties on such specifications.

We collaborate with several Inria project-teams: with the LANDE project-team in two ACI-Sécurité grants (V3F and POTESTAT); with the ESPRESSO project-team on the development of the SIGALI tool inside the Polychrony environment; with the POP-ART project-team on the use of the controller synthesis methodology for the control of control-command systems (e.g. robotic systems); with the TRISKELL project-team on testing, in particular on the connection of TGV and UMLAUT; with DISTRIBCOM on symbolic distributed test generation and security testing; with the S4 project-team on the use of control and game theory for test generation; and with the VASY project-team on the use of CADP libraries in TGV and the distribution of TGV in the CADP toolbox.

*Our main collaborations are with Vérimag. Beyond formalized
collaborations (IST Agedis, CNRS AS Test, ARC Modocop), we also collaborate
on the connection of NBac with Lurette for the analysis of Lustre programs,
as well as on the connection of SIGALI and Matou.*

*in Denmark (K. G. Larsen) and University of
Twente (P. Katoen) on probabilistic verification. We participate in the
development of the Rapture tool.*

*in USA (D. Clarke) on symbolic test generation
with emphasis on the STG tool.*

*in Italy (A. Bertolino) on using TGV for
test generation for software architectures.*

*(T. Reps) on shape analysis. Bertrand
Jeannet visited Tom Reps during summer 2004.*

*in Tunisia (M. Tahar Bhiri). Thierry Jéron is
co-supervisor of a master student Hatem Hamdi working
on robustness testing.*

We are partners of the ARTIST Network of Excellence (http://www.artist-embedded.org/) on Embedded Systems, involved in the Testing and Verification cluster with Brics in Aalborg (DK), University of Twente (NL), University of Liège (B), Uppsala (SE), Verimag Grenoble, ENS Cachan, LIAFA Paris, EPFL Lausanne (CH). The aim of the cluster is to develop a theoretical foundation for real-time testing, real-time monitoring and optimal control, to design data structures and algorithms for quantitative analysis, and to apply testing and verification tools in industrial settings. For security, we plan to create a common semantic framework for describing security protocols, including a notion of ``trust'', and to develop tools and methods for the verification of security protocols. Testing and verification tools developed by the partners will be made available via a web portal and dedicated verification servers. A first cluster meeting took place in Brussels in December 2004, during which T. Jéron and V. Rusu gave presentations.

B. Gaudin is teaching in DEUG and DIIC 2 in University of Rennes 1 (64h/year).

B. Jeannet teaches in the Master of Computer Science in Rennes, on abstract interpretation.

T. Jéron is responsible for a course on Testing in the Master of Computer Science at the University of Rennes 1. He also teaches in the engineering school EnstB in Rennes and Brest. The topics of both lectures are testing and model-checking.

V. Rusu teaches in the Master of Computer Science in Rennes, on deductive verification methods.

V. Tschaen is teaching in DEUG and DIIC 3 in University of Rennes 1 (64h/year).

PhD theses defended in 2004

Benoit Gaudin, ``*Control of Structured Discrete Event Systems*''.
November 15th 2004

Elena Zinovieva, ``*Symbolic methods in test generation for reactive
systems with data*''. November 22nd, 2004.

Current PhD theses:

Camille Constant, ``*Verification and symbolic test generation for reactive systems*'',

Tristan Le Gall, ``*Abstract lattice of fifo channels for
verification and control synthesis*''

Valéry Tschaen, ``*Automatic test generation: Models and Techniques*''

Trainees 2003-2004:

Camille Constant, ``*Verification and Test of safety properties*'',
Master student (6 months).

Tristan Le Gall, ``*Supervisory Control of symbolic and hybrid transition systems*'', Master student (6 months).

Sebastien Saudrais, ``*Coverage in model checking and testing*'',
Master student (6 months).

Liva Randriamanohisoa, ``*Test generation for interprocedural specifications*'',
Master student (6 months).

Hatem Hamdi, ``*Generation and execution of test cases with Agedis*'', Master student ENIS Sfax.

Trainees 2004-2005:

Sophie Quinton, ``*Runtime verification and conformance testing*'',
Master student (6 months).

T. Jéron has been a member of the ``Commission de Spécialistes'' (27th section) of the University of Rennes 1 until 2004. He was a member of the local selection jury for Inria researchers.

Hervé Marchand is a member of the ``Commission de Spécialistes'' (27th section) at the University of Rennes 1.

Bertrand Jeannet is the organizer of the thematic seminar ``68 NQRT'' (http://www.irisa.fr/NQRT/index.html) in Irisa.

Thierry Jéron is a PC member of Fates'04 (Linz, Austria, 09/04) and Fates'05 (Edinburgh, 07/05), Testcom 2004 (Oxford, 03/04) and Testcom 2005 (Montreal, 06/05), Tacas'05 (Edinburgh, 04/05), and Tool Chair of Tacas'06 (Vienna, 03/06). He is a member of the Organizing Committee of the Movep School (Brussels, 12/04). He is Financial Chair of ISSRE 2004 (St Malo, 11/04). He is a reviewer for Zentralblatt Math.

Thierry Jéron gave a presentation on test generation in the Inria-Industry seminar in January 2004. He was also invited to give a seminar in Labri Bordeaux in January 2004 on test generation and control synthesis.

Jérome Leroux was invited to give a seminar on ``The convex hull of a Number Decision Diagram is an effectively computable polyhedron'' at LORIA, Nancy (10/2004).

Hervé Marchand is PC member of MSR'05 (Grenoble, 10/05) and is member of the Organizing Committee of ACSD'05 (St Malo, 06/05).

Hervé Marchand was invited to give a seminar on ``the optimal control of discrete event systems'' during the ``Journée GDR MACS'' (Aix-en-Provence, 10/04) and on ``the control of structured discrete event systems'' during the ``Journée QSL'' (LORIA, Nancy, 10/05).

Vlad Rusu gave invited talks at the Dagstuhl seminar ``Perspectives in Model-Based Testing'' (12/09) and at the British FORTEST seminar on model-based testing (IBM Hursley, 21/09).

Vlad Rusu organized a meeting (25-26/11) at IRISA on ``Verification, Testing, and Synthesis'' around the visit of Doron Peled, chair of Software Engineering at the University of Warwick. The meeting included a talk by Doron Peled, talks by members of several project-teams of IRISA that are active in the fields of verification, testing, and synthesis, and discussions on collaborations.

Valery Tschaen gave an invited talk on ``Test generation algorithms based on preorders'' at the Dagstuhl seminar ``Model-based Testing of Reactive Systems'' (12-15/01).