The PAREO team aims at designing and implementing tools for the specification, analysis and verification of software and systems. At the heart of our project is therefore the will to study fundamental aspects of programming languages (logic, semantics, algorithmics, etc.) and to make major contributions to the design of new programming languages. An important part of our research effort will be dedicated to the design of new fundamental concepts and tools for analyzing existing programs and systems. To achieve this goal we focus on:

the improvement of theoretical foundations of rewriting and deduction;

the integration of the corresponding formal methods in programming and verification environments;

the practical applications of the proposed formalisms.

It is a common claim that rewriting is ubiquitous in computer science and mathematical logic. Indeed, the rewriting concept appears in settings ranging from very theoretical frameworks to very practical implementations. Two extreme examples are the Unix mail system, which uses rules to rewrite mail addresses into canonical forms (see the `/etc/sendmail.cf` file in the mail system configuration), and the transition rules describing the behavior of tree automata. Rewriting is used in semantics to describe the meaning of programming languages, as well as in program transformations such as the re-engineering of Cobol programs. It is used to compute, implicitly or explicitly as in Mathematica or MuPAD, but also to perform deduction when describing a logic, a theorem prover or a constraint solver by inference rules. It is of course central in systems that make the notion of rule an explicit and first-class object, such as expert systems, programming languages based on equational logic, algebraic specifications (*e.g.*, OBJ), functional programming (*e.g.*, ML) and transition systems (*e.g.*, Murphi).

In this context, the study of the theoretical foundations of rewriting has to be continued, and effective rewrite-based tools should be developed. The extensions of first-order rewriting with higher-order and higher-dimensional features are hot topics, and these research directions naturally encompass the study of the rewriting calculus, of polygraphs, and of their interaction. The usefulness of these concepts becomes clearer when they are implemented, and considerable effort is thus currently put into the development of expressive and efficient rewrite-based programming languages.

Programming languages are formalisms used to describe programs, applications, or software intended to be executed on given hardware. In principle, any Turing-complete language suffices to describe the computations we want to perform. In practice, however, the choice of programming language matters because it helps the programmer to be effective and improves the quality of the software. For instance, a web application is rarely developed using a Turing machine or assembly language. By choosing an adequate formalism, it becomes easier to reason about the program and to analyze, certify, transform, optimize, or compile it. The choice of programming language also has an impact on software quality: by providing high-level constructs as well as static verifications, such as typing, a language can influence software design, allowing more expressiveness, more modularity, and better code reuse. This also improves the programmer's productivity and contributes to reducing errors.

The quality of a programming language depends on two main factors. The first is its *intrinsic design*, which covers the programming model, the data model, the features provided by the language, and the semantics of its constructs. The second factor is the targeted programmer and application. A language is not necessarily good for a given application if the concepts of the application domain cannot be easily manipulated in it. Similarly, it may not be good for a given person if its constructs are not correctly understood by the programmer.

In the *Pareo* group we target a population of programmers interested in improving the long-term maintainability and the quality of their software, as well as their efficiency in implementing complex algorithms. Our privileged domain of application is large since it concerns the development of *transformations*. This ranges from the transformation of textual or structured documents such as XML, to the analysis and the transformation of programs and models. It also includes the development of tools such as theorem provers, proof assistants, or model checkers, where the transformations of proofs and the transitions between states play a crucial role. In that context, the *expressiveness* of the programming language is important: high-level notions such as abstract types and transformation rules should be provided directly, so that complex encodings into low-level data structures can be avoided.

It is now well established that the notions of *term* and *rewrite rule* are two universal abstractions well suited to model tree-based data types and the transformations that can be performed on them. Over the last ten years we have developed strong experience in designing and programming with rule-based languages. We have introduced and studied the notion of *strategy*, which is a way to control how the rules should be applied. This provides the separation that is essential to isolate the logic and to make the rules reusable in different contexts.
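This separation between rules and strategies can be made concrete with a minimal sketch (plain Python rather than an actual rule-based language such as *ELAN* or *Tom*; all names are illustrative): rules are partial functions on terms encoded as nested tuples, and strategy combinators decide where and when they apply.

```python
# Illustrative sketch (not Tom or ELAN syntax): rules are partial
# functions on terms encoded as nested tuples, returning None when they
# do not apply; strategies decide where and when to apply them.

def plus_zero(t):
    # plus(0, x) -> x
    if isinstance(t, tuple) and t[0] == "plus" and t[1] == "0":
        return t[2]
    return None

def plus_succ(t):
    # plus(s(x), y) -> s(plus(x, y))
    if isinstance(t, tuple) and t[0] == "plus" \
            and isinstance(t[1], tuple) and t[1][0] == "s":
        return ("s", ("plus", t[1][1], t[2]))
    return None

def choice(*rules):
    # Try the rules in order; keep the first result.
    def strat(t):
        for r in rules:
            u = r(t)
            if u is not None:
                return u
        return None
    return strat

def innermost(rule):
    # Normalise bottom-up: rewrite the subterms first, then the root.
    def strat(t):
        if isinstance(t, tuple):
            t = tuple(strat(c) if i > 0 else c for i, c in enumerate(t))
        u = rule(t)
        return strat(u) if u is not None else t
    return strat

rules = choice(plus_zero, plus_succ)
two, one = ("s", ("s", "0")), ("s", "0")
print(innermost(rules)(("plus", two, one)))
# -> ('s', ('s', ('s', '0'))), i.e. 2 + 1 = 3 in Peano notation
```

Note that the same rules could be driven by a different strategy (outermost, one-step, etc.) without being rewritten, which is precisely the reuse the separation is meant to enable.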

To improve the quality of programs, it is also essential to have a clear description of their intended behaviors. For that, the *semantics* of the programming language should be formally specified.

There is still a lot of progress to be done in these directions. In particular, rule based programming can be made even more expressive by extending the existing matching algorithms to context-matching or to new data structures such as graphs or polygraphs. New algorithms and implementation techniques have to be found to improve the efficiency and make the rule based programming approach effective on large problems. Separating the rules from the control is very important. This is done by introducing a language for describing strategies. We still have to invent new formalisms and new strategy primitives which are both expressive enough and theoretically well grounded. A challenge is to find a good strategy language we can reason about, to prove termination properties for instance.

On the static analysis side, new formalized typing algorithms are needed to properly integrate rule-based programming into existing host languages such as Java. The notion of traversal strategy deserves further study, in order to become more flexible while still guaranteeing that the result of a transformation is correctly typed.

The huge diversity of the rewriting concept is obvious and when one wants to focus on the underlying notions, it becomes quickly clear that several technical points should be settled. For example, what kind of objects are rewritten? Terms, graphs, strings, sets, multisets, others? Once we have established this, what is a rewrite rule? What is a left-hand side, a right-hand side, a condition, a context? And then, what is the effect of a rule application? This leads immediately to defining more technical concepts like variables in bound or free situations, substitutions and substitution application, matching, replacement; all notions being specific to the kind of objects that have to be rewritten. Once this is solved one has to understand the meaning of the application of a set of rules on (classes of) objects. And last but not least, depending on the intended use of rewriting, one would like to define an induced relation, or a logic, or a calculus.
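For the term case, the basic ingredients listed above (variables, substitution, matching, replacement) can be sketched as follows. This is an illustrative Python fragment, not the algorithm of any particular system, and the `?x` variable convention is an assumption of the sketch.

```python
# Illustrative Python sketch of the basic ingredients of term rewriting:
# terms are encoded as ("f", arg1, ...) tuples, and strings starting
# with "?" are variables (a convention of this sketch only).

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def match(pattern, subject, subst=None):
    """Return a substitution making `pattern` equal to `subject`, or None."""
    subst = dict(subst or {})
    if is_var(pattern):
        if pattern in subst:             # same variable seen twice
            return subst if subst[pattern] == subject else None
        subst[pattern] = subject
        return subst
    if isinstance(pattern, tuple) and isinstance(subject, tuple) \
            and len(pattern) == len(subject):
        for p, s in zip(pattern, subject):
            subst = match(p, s, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == subject else None

def substitute(subst, t):
    """Replace every variable of `t` by its image under `subst`."""
    if is_var(t):
        return subst.get(t, t)
    if isinstance(t, tuple):
        return tuple(substitute(subst, c) for c in t)
    return t

# One rewrite step at the root with plus(s(?x), ?y) -> s(plus(?x, ?y)):
lhs = ("plus", ("s", "?x"), "?y")
rhs = ("s", ("plus", "?x", "?y"))
sigma = match(lhs, ("plus", ("s", "0"), ("s", "0")))
print(substitute(sigma, rhs))   # -> ('s', ('plus', '0', ('s', '0')))
```

Each of these notions would have to be redefined for other kinds of objects (graphs, strings, multisets), which is exactly the point made above.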

In this very general picture, we have introduced a calculus whose main design concept is to make all the basic ingredients of rewriting explicit objects, in particular the notions of rule *application* and *result*. We concentrate on *term* rewriting, we introduce a very general notion of rewrite rule, and we make rule application and result explicit concepts. These are the basic ingredients of the *rewriting-* or *ρ-*calculus, whose originality comes from the fact that terms, rules, rule application and application strategies are all treated at the object level (a rule can be applied to a rule, for instance).

The λ-calculus is usually put forward as the abstract computational model underlying functional programming. However, modern functional programming languages have pattern-matching features which cannot be directly expressed in the λ-calculus. To palliate this problem, pattern calculi have been introduced. The rewriting calculus is also a pattern calculus, one that combines the expressiveness of pure functional calculi and of algebraic term rewriting. This calculus is designed and used for logical and semantical purposes. It can be equipped with powerful type systems and used for expressing the semantics of rule-based as well as object-oriented languages. It allows one to naturally express exception-handling mechanisms and elaborate rewriting strategies. It can also be extended with imperative features and cyclic data structures.
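The flavour of such pattern calculi, where a rule is an ordinary value and its application returns a possibly empty collection of results, can be hinted at with a small sketch (plain Python, not actual rewriting-calculus syntax or semantics; all names are illustrative).

```python
# Illustrative sketch: a rule is an ordinary value, and applying it
# returns a list of results, where the empty list models matching
# failure and longer lists model non-determinism.

def rule(pattern, action):
    # A rule fires only when its pattern (a predicate here) accepts the term.
    return lambda t: [action(t)] if pattern(t) else []

def both(r1, r2):
    # "Structure" of two rules: apply both and collect all results.
    return lambda t: r1(t) + r2(t)

is_zero = rule(lambda t: t == 0, lambda t: "zero")
negate  = rule(lambda t: isinstance(t, int), lambda t: -t)
is_rule = rule(callable, lambda t: "a rule")   # a rule applied to a rule

print(both(is_zero, negate)(0))   # -> ['zero', 0]
print(is_zero("foo"))             # -> []
print(is_rule(negate))            # -> ['a rule']
```

The last line illustrates the object-level treatment mentioned above: since rules are first-class values, a rule can itself be the argument of another rule.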

The study of the rewriting calculus has turned out to be extremely successful in terms of fundamental results and of applications. Different instances of this calculus, together with their corresponding type systems, have been proposed and studied. The expressive power of this calculus was illustrated by comparing it with similar formalisms, and in particular by giving a typed encoding of the standard strategies used in first-order rewriting and in classical rewrite-based languages like *ELAN* and *Tom*.

Rewriting theory, in computer science, and combinatorial algebra, in mathematics, are two research fields that share striking similarities: both study the properties of *intensional* descriptions of complex objects, respectively rewriting systems and presentations by generators and relations. This research direction is devoted to developing a theoretical setting that unifies rewriting and combinatorial algebra, in order to transpose methods from one field to the other.

Rewriting systems and presentations of algebraic structures have a common generalisation as *polygraphs*, which are cellular specifications of higher-dimensional categories. In this setting, we can describe, in a uniform way, different kinds of objects, such as the following:

A rule-based program that computes the list-splitting function used in the merge-sort algorithm:

The usual presentation by generators and relations of the structure of Hopf algebras, containing, in particular, the following relations:

The Reidemeister moves for braids, giving a combinatorial description of those topological objects:

More precisely, polygraphs are a common algebraic setting for: abstract, word, term and term-graph rewriting systems and, in particular, first-order functional programs; set-theoretic operads, pro(p)s and algebraic theories; Turing machines and Petri nets; and formal proofs of propositional calculus and linear logic.

In the theoretical setting of polygraphs, the Fox-Squier theory shows that the computational properties of rewriting systems and the mathematical properties of presentations are intimately related to the same topological properties of polygraphs. From this starting point, we progressively establish a correspondence between programs and algebras, and we use it to develop applications in different directions:

Algebraic semantics of programs, such as the new characterisation of evaluation strategies in terms of algebraic resolutions. Here, we want to use well-founded mathematical theories to give a better understanding of programming and, in that way, extend the expressiveness of rule-based programming languages.

Methods from algebraic topology to produce new tools for the static analysis of programs, such as the use of *derivations* to prove termination and to bound computational complexity. In this direction, we plan to develop tools from cohomology to classify derivations and, in this way, to propose a radically new point of view on computational complexity theory.

New applications for rewriting in the field of the formalisation and certification of mathematics, such as the use of rewriting methods to prove coherence theorems or to build resolutions.

Beside the theoretical transfer that can be performed via cooperations or scientific publications, an important part of the research done in the *Pareo* group is published within software. *Tom* is our flagship implementation. It is available via the INRIA Gforge (http://

Teaching: when (for good or bad reasons) functional programming is neither taught nor used, *Tom* is an interesting alternative for illustrating the notions of abstract data type and pattern matching in a Java object-oriented course.

Software quality: it is now well established that functional languages such as Caml are very successful for producing high-assurance software as well as tools used for software certification. In the same vein, *Tom* is very well suited to developing, in Java, tools such as provers, model checkers, or static analyzers.

Symbolic transformation: the use of formal anchors makes it possible to transform low-level data structures such as C structures or arrays using a high-level formalism, namely pattern matching, including associative matching. *Tom* is therefore a natural choice whenever a symbolic transformation has to be implemented in C or Java, for instance. *Tom* has been successfully used to implement the Rodin simplifier for the B formal method.

Prototyping: by providing abstract data types, private types, pattern matching, rules and strategies, *Tom* allows the development of quite complex prototypes in a short time. When using Java as the host language, the full runtime library can be used. Combined with the constructs provided by *Tom*, such as strategies, this is a tremendous advantage.

One of the most successful transfers is certainly the use of *Tom* by Business Objects/SAP. Indeed, after benchmarking several other rule-based languages, they decided to choose *Tom* to implement a part of their software. *Tom* is used in Paris, Toulouse and Vancouver, and the standard representation provided by *Tom* is used as an exchange format by the teams at these sites.

ATerm (short for Annotated Term) is an abstract data type designed for the exchange of tree-like data structures between distributed applications.

The ATerm library provides a comprehensive procedural interface for creating and manipulating ATerms in C and Java. The ATerm implementation is based on maximal subterm sharing and automatic garbage collection.

A binary exchange format for the concise representation of ATerms (sharing preserved) allows the fast exchange of ATerms between applications. In a typical application—parse trees which contain considerable redundant information—less than 2 bytes are needed to represent a node in memory, and less than 2 bits are needed to represent it in binary format. The implementation of ATerms scales up to the manipulation of ATerms in the giga-byte range.
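Maximal subterm sharing can be illustrated with a tiny hash-consing sketch (illustrative Python with a hypothetical `mk` constructor, not the ATerm library's actual C/Java API): a global table interns every constructed node, so structurally equal subterms are represented once.

```python
# Illustrative hash-consing sketch in the ATerm spirit (hypothetical
# API, not the real library): a global table interns every constructed
# node, so structurally equal subterms are represented once and shared.

_table = {}

def mk(fun, *args):
    """Build the term fun(args...); return the unique shared copy."""
    key = (fun,) + args          # children are already interned tuples
    node = _table.get(key)
    if node is None:
        node = key
        _table[key] = node
    return node

x = mk("plus", mk("s", mk("0")), mk("s", mk("0")))
y = mk("plus", mk("s", mk("0")), mk("s", mk("0")))
print(x is y)        # -> True: the whole term exists once in memory
print(x[1] is x[2])  # -> True: the two occurrences of s(0) are shared
```

Because equal terms are the same object, equality tests reduce to pointer comparison, which is one reason the real implementation scales to very large terms.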

The ATerm library provides a comprehensive interface in C and Java to handle the annotated term data-type in an efficient manner.

We are involved (with the CWI) in the implementation of the Java version, as well as in the garbage collector of the C version. The Java version of the ATerm library is used in particular by *Tom*.

The ATerm library is documented, maintained, and available
at the following address:
http://

Since 2002, we have developed a new system called *Tom*. This system consists of a pattern-matching compiler which is particularly well suited for programming various transformations on trees/terms and XML documents. Its design follows our experience with the efficient compilation of rule-based systems. The main originality of this system is that it is language- and data-structure-independent. This means that the *Tom* technology can be used in a C, C++ or Java environment. The tool can be seen as a Yacc-like compiler translating patterns into executable pattern-matching automata. Similarly to Yacc, when a match is found, the corresponding semantic action (a sequence of instructions written in the chosen underlying language) is triggered and executed. *Tom* supports sophisticated matching theories such as associative matching with neutral element (also known as list-matching). This kind of matching theory is particularly well suited to performing list- or XML-based transformations, for example.
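List-matching typically yields several solutions for a single pattern. The following sketch (illustrative Python, not Tom's compiled matching automata; the `*` suffix for list variables is a convention of the sketch) enumerates them by trying every possible split.

```python
# Illustrative sketch of list-matching (associative matching with
# neutral element), not Tom's compiled automata: variables ending in
# "*" capture sublists, so one pattern may match in several ways.

def list_match(pattern, subject, subst=None):
    """Return the list of all substitutions matching `pattern` against
    `subject`. Repeated list variables are not handled in this sketch."""
    subst = subst or {}
    if not pattern:
        return [subst] if not subject else []
    head, rest = pattern[0], pattern[1:]
    if isinstance(head, str) and head.endswith("*"):
        solutions = []
        for i in range(len(subject) + 1):   # every split, incl. empty
            solutions += list_match(rest, subject[i:],
                                    {**subst, head: subject[:i]})
        return solutions
    if subject and subject[0] == head:
        return list_match(rest, subject[1:], subst)
    return []

# All ways to isolate an element "b" inside a list:
for sol in list_match(["X*", "b", "Y*"], ["a", "b", "c", "b"]):
    print(sol["X*"], sol["Y*"])
# -> ['a'] ['c', 'b']
# -> ['a', 'b', 'c'] []
```

The multiplicity of solutions is exactly what makes associative matching more expressive, and harder to compile efficiently, than syntactic matching.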

In addition to the notion of *rule*, *Tom* offers a sophisticated way of controlling rule application: a strategy language. Based on a clear semantics, this language allows one to define classical traversal strategies such as *innermost*, *outermost*, *etc.* Moreover, *Tom* provides an extension of pattern matching, called *anti-pattern matching*. This corresponds to a natural way to specify *complements* (*i.e.*, what should not be there for a rule to fire). *Tom* also supports the definition of cyclic graph data structures, as well as matching algorithms and rewrite rules for term-graphs.
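The idea of anti-patterns, matching what should *not* be there, can be sketched for ground, linear patterns as follows (an illustrative Python fragment; Tom's actual anti-pattern algorithms also handle variables and equational theories).

```python
# Illustrative sketch of anti-pattern matching on ground, linear
# patterns: ("not", p) matches exactly the subjects that p does not,
# and "_" is a wildcard (both conventions of this sketch).

def anti_match(pattern, subject):
    if pattern == "_":                              # wildcard
        return True
    if isinstance(pattern, tuple) and pattern[0] == "not":
        return not anti_match(pattern[1], subject)  # complement
    if isinstance(pattern, tuple) and isinstance(subject, tuple) \
            and len(pattern) == len(subject):
        return all(anti_match(p, s) for p, s in zip(pattern, subject))
    return pattern == subject

# "a cons cell whose head is not zero":
p = ("cons", ("not", "0"), "_")
print(anti_match(p, ("cons", ("s", "0"), "nil")))   # -> True
print(anti_match(p, ("cons", "0", "nil")))          # -> False
```

Such a condition would otherwise need a default rule or an explicit negative guard, which is precisely what anti-patterns let the programmer avoid.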

*Tom* is documented, maintained, and available at http://

*Lemuridae* is a proof assistant for the sequent calculus instance of superdeduction modulo. It is written in *Tom* and features automatic super-rule derivation with support for axiom-directed focusing, automated derivation of induction principles using a deduction modulo encoding of higher-order logic, as well as some basic automatic tactics. Soundness is ensured by a tiny kernel checking the generated proof trees.

It has been used as a prototyping environment for the development of the encoding of Pure Type Systems, as well as for the simulation of inductive types in superdeduction modulo.

In 2009, the kernel was entirely rewritten to take advantage of newly developed proof terms. To this end, we have developed *FreshGom*, an extension of the *Tom* language providing mechanisms that ease the manipulation of terms containing binders. This allowed us to export *Lemuridae*'s proofs to other formats, including Parigot's λμ-calculus and *Coq* proof terms.

*Lemuridae* is available in the *Tom* subversion repository, under `applications/lemuridae`.

Cat is a library for polygraphic calculus, written in Caml. It has been used, in joint work with F. Blanqui, to produce an automatic termination prover for first-order functional programs. It translates such a rewriting system into a polygraph and tries to find a derivation proving its termination. If possible, it seeks a derivation proving that the program is polynomial. Cat is also the basis of Catex.

Catex is a tool for (pdf)Latex, used in the same way as Bibtex, that automatically produces string diagrams from their algebraic expression. It follows the same design as Tom, a Catex file being a Latex file enriched with formal islands corresponding to those algebraic expressions, such as:

`\deftwocell[red]{delta : 1 -> 2}`

`\deftwocell[orange]{mu : 2 -> 1}`

`\deftwocell[crossing]{tau : 2 -> 2}`

`\twocell{(delta *0 delta) *1 (1 *0 tau *0 1) *1 (mu *0 mu)}`

Catex dissolves such an island into Latex code, using the PGF/Tikz package. Executed on the result, (pdf)Latex produces the following diagram:

Catex is distributed through the page:
http://

In collaboration with P. Malbos (Institut Camille Jordan, Univ. Lyon 1), we have explored two extensions of our polygraphic version of Squier's theory.

First, we have used rewriting to provide a theoretical setting and concrete formal methods to formalise and give constructive proofs of coherence theorems. The first result is Mac Lane's classical theorem on monoidal categories, for which our new proof is a direct application of these methods. Then, cases like symmetric monoidal categories are a first step towards a "rewriting modulo" version of the same work. Finally, we give a new understanding and a constructive proof of the result for cases like braided monoidal categories.

Second, we have generalised this work to any dimension. We have introduced a notion of polygraphic resolution that generalises usual algebraic resolutions from combinatorial algebra and also, more surprisingly, normalisation strategies as used in rewriting theory and, in particular, in rule-based programming languages. Thus, a functional program can be mathematically defined as a complete cellular model of the functions it computes. This gives a strong mathematical background to the notion of program, together with a constructive way to build resolutions from convergent polygraphs. This work was presented in an invited talk at the International Congress on Operads and Universal Algebra, held in Tianjin, China, in July 2010. These results will be further explored to give a mathematical description of the strategies used in Tom, in order to develop methods from algebraic topology for studying their computational properties, like termination and complexity.

Higher order rewrite systems are useful abstractions to model both operational semantics of programming languages and equational reasoning in certain theories. The termination of these systems is very useful for proving correctness of programs, finding decision procedures, and proving consistency of certain theorem proving systems based on type theory.

A well-studied method of proving termination is *size-based termination*, which allows comparison of the sizes of arguments in recursive calls by typing.
We investigate a type system related to size-based termination, *refinement types*, in which the type of terms in a given data type contains additional information on the *shape* of the normal forms of the term, in the spirit of abstract interpretation. We show that, given a typed higher-order rewrite system, it is possible to build a type-based *dependency graph*, and to give a termination criterion by syntactic considerations on this graph, which is sufficiently powerful to express structural decrease. We demonstrate the advantages of this method over other approaches to higher-order dependency pair frameworks. This work is described in a technical report.

Superdeduction is a method specially designed to ease the use of first-order theories in predicate logic. The theory is used to enrich the deduction system with new deduction rules in a systematic, correct and complete way. A proof-term language and a cut-elimination reduction already exist for superdeduction, both based on Christian Urban's work on classical sequent calculus. However, Christian Urban's calculus is not directly related to the Curry-Howard correspondence, contrarily to the λ̄μμ̃-calculus, which relates straightforwardly to the lambda-calculus. We have initiated a further exploration of the computational content of superdeduction proofs: we extend the λ̄μμ̃-calculus in order to obtain a proof-term language together with a cut-elimination reduction for superdeduction. We also prove strong normalisation for this extension of the λ̄μμ̃-calculus.

We have defined a methodology for validating implicit induction proofs. In collaboration with Vincent Demange, we gave evidence of the possibility of performing implicit-induction-based proofs inside certified reasoning environments, such as the one provided by the Coq proof assistant. This is the first step of a long-term project focused on 1) mechanically certifying implicit induction proofs generated by automated provers like Spike, and 2) narrowing the gap between automated and interactive proof techniques by devising powerful proof strategies inside proof assistants that aim to perform multiple induction steps automatically and to deal with mutual induction more conveniently. Contrary to current approaches, which reconstruct implicit induction proofs into scripts based on the explicit induction tactics integrated into the usual proof assistants, our checking methodology is simpler and better suited to automation. The underlying implicit induction principles are separated and validated independently from the proof scripts, which consist of one-to-one translations of implicit induction proof steps. The translated steps can be checked independently, too, so the validation process is well suited to parallelisation and to the management of large proof scripts. Moreover, our approach is more general: any kind of implicit induction proof can be considered, because the limitations imposed by proof reconstruction techniques no longer exist. This result was first presented at the poster session of the 2010 Grande Region Security and Reliability Day, Saarbrücken.

Based on the previous result, we have implemented automatic translators that generate fully checkable Coq scripts from Spike proofs. The induction ordering underlying the Spike induction principle was defined using *COCCINELLE*, a Coq library well suited for modelling mathematical notions needed for rewriting, such as term algebras and RPO. *COCCINELLE* formalises RPO in a generic way, using a precedence and a status (multiset/lexicographic) for each function symbol. Spike automatically generates a term algebra starting from Coq function symbols, preserving the precedence of the original Spike symbols. Many useful properties of RPO orderings were already provided by *COCCINELLE*. On the other hand, the induction ordering was modelled as a multiset extension of RPO, and only a few properties about it were provided by *COCCINELLE*. We have proved useful lemmas about it and added them to *COCCINELLE*, for example that the multiset extension of RPO is stable under substitution. Finally, every single inference step derived with a restricted version of Spike can be automatically translated into an equivalent Coq script. The restricted inference system was powerful enough to prove properties about specifications involving mutually defined functions and to validate a sorting algorithm. The scripts resulting from the translation of these proofs were successfully validated by Coq.

Another improvement of the *COCCINELLE* library was the redefinition of the RPO ordering in order to consider precedences that take into account equivalent function symbols. The new release of CoLoR (http://) integrates *COCCINELLE*. This improvement allowed Rainbow, a program developed within the CoLoR project that uses *COCCINELLE*'s formalisation of RPO, to certify more than 30 additional proofs from the set of CPF files generated during the last annual termination competition (http://

We have proposed a framework which makes possible the integration of formally defined constructs into an existing language. The *Tom* system is an instance of this principle: terms, first-order signatures, rewrite rules, strategies and matching constructs are integrated into Java and C, for instance. We have presented the high-level programming features provided by this approach, documented the *Tom* system, and given a general overview of the research problems raised by this approach.

One interest of *Tom* is that it makes the compilation process independent of the considered data structure. Given any existing data structure, using a *formal anchor* definition, it becomes possible to match and to apply transformation rules on the considered data structures.

We have defined a new type system for *Tom* which allows one to declare first-order signatures with subtyping. This is particularly useful for encoding inheritance relations into algebraic terms. Considering Java as the host language, we have presented this type system with subtyping for *Tom*, which is compatible with Java's type system and performs both type checking and type inference. We propose an algorithm that checks whether all patterns of a *Tom* program are well typed. In addition, we propose an algorithm based on equality and subtyping constraints that infers the types of variables occurring in a pattern. Both algorithms are exemplified, and the proposed type system is shown to be sound and complete.

A first application consists in defining algebraic mappings for implementations of meta-models generated by the Eclipse Modeling Framework.

Negation is intrinsic to human thinking, and most of the time, when searching for something, we base our patterns on both positive and negative conditions. We have introduced the notion of anti-terms, i.e., terms that may contain complement symbols. We present algorithms for solving anti-pattern matching problems in the syntactic case as well as modulo an arbitrary equational theory E, and we study the specific and practically very useful case of associativity, possibly with a unity (AU). To this end, based on the syntacticness of associativity, we present a rule-based associative matching algorithm, and we extend it to AU. This algorithm is then used to solve AU anti-pattern matching problems. AU anti-patterns are implemented in the *Tom* language, and we show some examples of their usage.

Access control policies, a particular case of security policies, should guarantee that information can be accessed only by authorized users, and thus prevent all information leakage. We have proposed a methodology for specifying, implementing and analysing access control policies using the rewrite-based framework *Tom*. This verification approach specifies the analysed systems without making a clear distinction between the policies and the corresponding contexts. We have since proposed a framework where the security policies and the systems they are applied to are specified separately, but using a common formalism. This separation allows not only the analysis of a policy independently of the target system, but also the application of a given policy to different systems. In this framework, we propose a method to check properties like confidentiality, integrity or confinement over secure systems based on different policy specifications. We have also considered the operational aspects related to the abstract point of view mentioned above.

In computer networks, security policies are generally implemented by firewalls. We have proposed an original framework, based on tree automata, which can be used to specify firewalls and which takes into account the network address translation functionality. We show that this framework allows us to perform structural analysis as well as query analysis and comparisons over firewall policies.

New development chains for critical systems rely on Domain Specific Modeling Languages (DSML) and on qualifiable transformations (assurance that a transformation preserves the properties of interest). To specify and implement such transformations, we have started to extend *Tom*.

The first part of this extension is *Tom*-*EMF*, a tool integrating *Tom* with *EMF*. The idea of this tool is to generate *Tom* mappings (*i.e.*, an algebraic view) by introspecting *EMF*-generated Java code. These mappings can then be used to describe transformations of models that have been created in Eclipse. *Tom*-*EMF* is documented and available in the *Tom* source distribution.

The second part of this extension is still in development: it consists in the addition of new *Tom* language constructs to express transformations of models. We are studying several use cases to guide this design.
We obtained a financial support from the Lorraine region for funding the research activities of Tony Bourdier.

We participate in the “Logic and Complexity” part of the GDR–IM (CNRS Research Group on Mathematical Computer Science), in the projects “Logic, Algebra and Computation” (mixing algebraic and logical systems) and “Geometry of Computation” (using geometrical and topological methods in computer science).

We participate in and co-animate the “Transformation” group of the GDR–GPL (CNRS Research Group on Software Engineering).

The ANR project “Complexité implicite, concurrence et extraction” (Complice), headed by Patrick Baillot (CNRS, LIP Lyon), federates researchers from Lyon (LIP), Nancy (LORIA) and Villetaneuse (LCR). The coordinator for the LORIA site is Guillaume Bonfante (Carte).

Ravaj (Réécriture et Approximation pour la Vérification d'Applications Java) is an ANR project coordinated by Thomas Genet (Irisa). The goal is to model Java bytecode programs using term rewriting and to use completion techniques to compute the set of reachable terms. Then, it is possible to check some properties related to reachability (in particular safety and security properties) on the modeled system using tree automata intersection algorithms.
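The reachability check behind Ravaj can be illustrated in miniature: the (over-approximated) set of reachable terms and the set of "bad" terms are each given by a tree automaton, and the safety property holds when their intersection is empty. The bottom-up automata below use invented alphabets and states; real completion-based tools work on much richer automata.

```python
# Hedged sketch: emptiness of a tree-automata intersection, the core test
# in completion-based safety checking. Transitions map (symbol, arg states)
# to a state; all symbols and states here are invented.

def producible_states(transitions):
    # Fixpoint: states reachable bottom-up from the transitions.
    states, changed = set(), True
    while changed:
        changed = False
        for (sym, args), q in transitions.items():
            if all(a in states for a in args) and q not in states:
                states.add(q)
                changed = True
    return states

def intersection_empty(t1, f1, t2, f2):
    # Product automaton: run both automata in lockstep on the same symbol.
    prod = {}
    for (s1, a1), q1 in t1.items():
        for (s2, a2), q2 in t2.items():
            if s1 == s2 and len(a1) == len(a2):
                prod[(s1, tuple(zip(a1, a2)))] = (q1, q2)
    reach = producible_states(prod)
    return not any((p, q) in reach for p in f1 for q in f2)

# A1 accepts f(a), A2 accepts g(a): the languages are disjoint.
t1 = {("a", ()): "qa", ("f", ("qa",)): "qf"}
t2 = {("a", ()): "pa", ("g", ("pa",)): "pg"}
print(intersection_empty(t1, {"qf"}, t2, {"pg"}))  # True
```

An empty intersection certifies that no reachable term (per the over-approximation) is a bad term; a non-empty one flags a potential violation.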

“SSURF: Safety and Security under FOCAL” is an ANR project coordinated by Mathieu Jaume (LIP6). The SSURF project consists in characterizing and studying the features that an Integrated Development Environment (IDE) must provide in order not only to obtain software systems conforming to high Evaluation Assurance Levels (EAL-5, 6 and 7), but also to ease the evaluation process according to various standards (*e.g.*, IEC 61508, CC, ...). Moreover, we aim at developing a formal generic framework describing various security properties, *e.g.*, access control policies, together with their implementations using such an IDE. A more detailed presentation is available at
http://

The INRIA ARC "Conception et réalisation d'assistants à
la preuve fondés sur la surdéduction modulo" [Design and implementation of proof assistants based on superdeduction modulo] (CORiAS),
headed by Germain Faure (INRIA, Saclay), federates
researchers from Nancy (LORIA) and Saclay (LIX). The
coordinator for the LORIA site is Horatiu Cirstea. A
detailed presentation is available at
http://

This project is concerned with security and access control for Web data exchanges, in the context of Web applications and Web services. We aim at defining automatic verification methods for checking properties of access control policies (ACPs) for XML, such as consistency or secrecy. A more detailed presentation is available at
http://
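To make the consistency property concrete, one can picture a policy as allow/deny rules over XML paths; the policy is inconsistent when some query is both allowed and denied. The paths, rules and wildcard convention below are invented for illustration and are far simpler than XML ACP formalisms.

```python
# Hedged sketch: consistency of an access control policy over XML paths.
# Rules, paths and the '*' wildcard convention are invented for the sketch.

def matches(rule_path, query_path):
    # '*' in a rule matches any single element at that position.
    return len(rule_path) == len(query_path) and all(
        r == "*" or r == q for r, q in zip(rule_path, query_path))

def decisions(policy, query_path):
    return {d for d, rp in policy if matches(rp, query_path)}

policy = [("allow", ("hospital", "patient", "*")),
          ("deny",  ("hospital", "patient", "ssn"))]

# Inconsistent: /hospital/patient/ssn is both allowed and denied.
print(sorted(decisions(policy, ("hospital", "patient", "ssn"))))
# ['allow', 'deny']
```

Automatic verification lifts this pointwise check to all possible queries at once, which is where symbolic representations such as automata become necessary.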

The INRIA ARC "Redesigning logical syntax" (REDO),
headed by Lutz Straßburger (INRIA, LIX Saclay), federates
researchers from Bath (Department of Computer Science),
Nancy (LORIA) and Saclay (LIX). The coordinator for the
LORIA site is François Lamarche (Calligramme). The ARC held
a meeting in Nancy on November 16-18. A more detailed
presentation is available at
http://

“QUARTEFT: QUAlifiable Real TimE Fiacre Transformations” is a research project funded by the FRAE (Fondation de Recherche pour l'Aéronautique et l'Espace). A first goal is to develop an extension of the Fiacre intermediate language to support real-time constructs. A second goal is to develop new model transformation techniques to translate this extended language, Fiacre-RT, into core Fiacre. A main difficulty consists in proposing transformation techniques that can be verified in a formal way. A more detailed presentation is available at
http://

**Chile.** We have an associated team, “VanaWeb”, that started in 2008 and continues the collaboration initiated during the joint INRIA-CONICYT (Chile) project VANANAA (formerly COCARS). It is a project on rules and strategies for the hybrid resolution of constraint problems, with applications to composition problems for the Web. We have many exchanges with Carlos Castro and his group (UTFSM, Valparaíso, Chile).

**Brazil.** The INRIA-CNPq (Brazil) project DA CAPO addresses automated deduction for the verification of specifications and programs. It is a project on the development of proof systems for the verification of specifications and software components. The coordinators of this project are David Déharbe (UFRN Natal, Brazil) and Christophe Ringeissen (CASSIS). On the French side, DA CAPO also involves the CASSIS project.

Tony Bourdier:

Member of the scientific board of the University of Nancy 1.

Member of the board of the Doctoral School in Computer Science and Mathematics.

Horatiu Cirstea:

Member of the Loria Laboratory Council.

PC member of RuleML 2010 (International RuleML Symposium on Rule Interchange and Applications).

Steering committee of RULE.

Yves Guiraud:

Animator of the group “Invariants
algébriques en informatique” [Algebraic Invariants in
Computer Science],
http://

Pierre-Etienne Moreau:

Co-chair of LDTA 2010 (International Workshop on Language Descriptions, Tools and Applications).

PC member of SLE 2010 (International Conference on Software Language Engineering), RULE 2010 (International Workshop on Rule-Based Programming), WRLA 2010 (International Workshop on Rewriting Logic and its Applications), IWS 2010 (International Workshop on Strategies in Rewriting, Proving, and Programming).

Member of the board of the Doctoral School in Computer Science and Mathematics.

Member of the Direction board of LORIA.

Head of the local committee for INRIA “détachements” and “délégations”.

We do not mention the teaching activities of the various teaching assistants, associate professors and professors of the project who work in various universities of the region.

Tony Bourdier:

Licence (L3) course in Nancy (ESSTIN) on statistics.

Supervision of François Prugniel (internship, ESIAL).

Horatiu Cirstea:

Master course in Nancy on programming and proving with rule based languages, with Pierre-Etienne Moreau.

Supervision of Cyrille Cornu (internship École des Mines de Nancy).

Pierre-Etienne Moreau:

Head of the Computer Science department at École des Mines de Nancy.

Master course in Nancy on programming and proving with rule based languages, with Horatiu Cirstea.

Sorin Stratulat:

In the framework of the ERASMUS LLP EMaCS
project (
http://

Tony Bourdier:

“Constrained rewriting in recognizable theories”, Journées Nationales GEOCAL-LAC, March 2010, Nice.

“Formal analysis of firewalls using tree automata techniques”, Grande Region Security and Reliability Day, March 2010, Saarbrücken, Germany.

Horatiu Cirstea:

“On Formal Specification and Analysis of Security Policies”, Grande Region Security and Reliability Day, March 2010, Saarbrücken, Germany.

“Rule-based Specification and Analysis of Security Policies”, 5th International Workshop on Security and Rewriting Techniques - SecRet'10, June 2010, Valencia, Spain.

Yves Guiraud:

“Higher-dimensional normalisation strategies for acyclicity”, invited conference, Operads and Universal Algebra, Tianjin, China, July 2010.

Pierre-Etienne Moreau:

“Combiner Java et réécriture, c'est possible et utile” [Combining Java and rewriting is possible and useful], invited talk at the Journées Nationales 2010 du GDR GPL, March 2010, Pau.

Cody Roux:

“Dependent types as higher order dependency pairs”, Journées Nationales GEOCAL-LAC, March 2010, Nice.

Horatiu Cirstea, “Le calcul de réécriture” [The rewriting calculus], HDR

Clément Houtmann, “Représentation et interaction des preuves en superdéduction modulo” [Representation and interaction of proofs in superdeduction modulo], PhD

Paul Brauner, “Fondements et mise en œuvre de la Super Déduction Modulo” [Foundations and implementation of Superdeduction Modulo], PhD

Pierre-Etienne Moreau:

Paul Brauner: “Fondements et mise en œuvre de la Super Déduction Modulo”, PhD, March 2010, INPL.

Julien Blond: “Modélisation et implantation d'une politique de sécurité d'un OS multi-niveaux via une traduction de FoCaLize vers C”, PhD, December 2010, Paris 6.

Laurent Hubert: “Foundations and Implementation of a Tool Bench for Static Analysis of Java Bytecode Programs”, PhD, December 2010, Rennes 1.

Member of the recruitment committee for a Professor position at ESIAL (Nancy 1).