The aim of the Parsifal team is to develop and exploit the theories of proofs and types to support the specification and verification of computer systems. To achieve these goals, the team works on several levels.

The team has expertise in *proof theory* and *type theory* and conducts basic research in these fields: in particular, the team is developing results that help with the automation of deduction and with the formal manipulation and communication of proofs.

Based on experience with computational systems and theoretical results, the team *designs* new logical principles, new proof systems, and new theorem proving environments.

Some of these new designs are appropriate for *implementation*, and the team works to develop prototype systems that help validate basic research results.

By using the implemented tools, the team can develop examples of specifications and verification to test the success of the design and to help suggest new logical and proof-theoretic principles that need to be developed in order to improve one's ability to specify and verify.

The foundational work of the team focuses on the proof theory of classical, intuitionistic, and linear logics, making use primarily of sequent calculus and deep inference formalisms. A major challenge for the team is reasoning about computational specifications that are written in a relational style: this challenge is being addressed with the introduction of some new approaches to dealing with induction, co-induction, and generic judgments. Another important challenge for the team is the development of normal forms of deduction: such normal forms can be used to greatly enhance the automation of proof search (one only needs to search for normal forms) and to support the communication of proofs (and proof certificates) for validation.

The principal application areas of concern for the team are currently functional programming (e.g., the λ-calculus), concurrent computation (e.g., the π-calculus), interactive computation (e.g., games), and biological systems.

The team has developed the theme of *focused proof search* to broaden its proof-theoretic and game-theoretic basis as well as to support a number of computer science applications. Additional advances were made on an approach to reasoning with bindings in syntactic expressions. Together with a team at the University of Minnesota, we have implemented two prototype provers, Abella and Taci, which have been used to explore the effectiveness of both these theoretical threads.

In the specification of computational systems, logics are generally used in one of two approaches. In the *computation-as-model* approach, computations are encoded as mathematical structures containing such items as nodes, transitions, and state. Logic is used in an external sense to make statements *about* those structures. That is, computations are used as models for logical expressions. Intensional operators, such as the modals of temporal and dynamic logics or the triples of Hoare logic, are often employed to express propositions about the change in state. This use of logic to represent and reason about computation is probably the oldest and most broadly successful use of logic in computation.

The *computation-as-deduction* approach directly uses pieces of logic's syntax (such as formulas, terms, types, and proofs) as elements of the specified computation. In this much more rarefied setting, there are two rather different approaches to how computation is modeled.

The *proof normalization* approach views the state of a computation as a proof term and the process of computing as normalization (known variously as β-reduction or cut-elimination). Functional programming can be explained using proof normalization as its theoretical basis, and this basis has been used to justify the design of new functional programming languages.
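To make the proof-normalization reading of computation concrete, the following OCaml sketch normalizes terms of an untyped λ-calculus by repeated β-reduction, using de Bruijn indices for bound variables. It is only an illustration of the idea; it is not code from any of the team's systems, which work with far richer calculi.

```ocaml
(* The state of the computation is a term; computing is normalizing. *)
type tm = Var of int | Lam of tm | App of tm * tm

(* Shift free variables >= cutoff c by d. *)
let rec shift d c = function
  | Var n -> if n >= c then Var (n + d) else Var n
  | Lam b -> Lam (shift d (c + 1) b)
  | App (f, a) -> App (shift d c f, shift d c a)

(* Substitute s for variable n. *)
let rec subst n s = function
  | Var m -> if m = n then s else Var m
  | Lam b -> Lam (subst (n + 1) (shift 1 0 s) b)
  | App (f, a) -> App (subst n s f, subst n s a)

(* Reduce to normal form, contracting beta-redexes as they appear. *)
let rec normalize = function
  | Var n -> Var n
  | Lam b -> Lam (normalize b)
  | App (f, a) ->
      (match normalize f with
       | Lam b -> normalize (shift (-1) 0 (subst 0 (shift 1 0 a) b))
       | f' -> App (f', normalize a))
```

For example, `normalize (App (Lam (Var 0), Lam (Var 0)))` yields the identity `Lam (Var 0)`.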

The *proof search* approach views the state of a computation as a sequent (a structured collection of formulas) and the process of computing as the process of searching for a proof of a sequent: the changes that take place in sequents capture the dynamics of computation. Logic programming can be explained using proof search as its theoretical basis, and this basis has been used to justify the design of new logic programming languages.
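The proof-search reading can likewise be sketched in a few lines of OCaml: a backward-chaining prover for propositional Horn clauses, where proving a goal means searching for a clause whose head matches it and whose body is provable. The atoms and the program below are hypothetical examples, not drawn from the team's systems.

```ocaml
(* A Horn clause: head holds if every formula in body holds. *)
type clause = { head : string; body : string list }

(* Backward-chaining proof search; backtracking over the choice of
   clause is implicit in List.exists. *)
let rec prove program goal =
  List.exists
    (fun c -> c.head = goal && List.for_all (prove program) c.body)
    program

(* A small illustrative program about reachability. *)
let program =
  [ { head = "path_a_b"; body = [] };
    { head = "path_b_c"; body = [] };
    { head = "path_a_c"; body = ["path_a_b"; "path_b_c"] } ]
```

Here `prove program "path_a_c"` succeeds while `prove program "path_c_a"` finitely fails; the changing list of pending goals plays the role of the changing sequent. (This naive search can loop on cyclic programs; real proof search must of course do better.)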

The divisions proposed above are informal and suggestive: such a classification is helpful in pointing out the different sets of concerns represented by these two broad approaches (reductions, confluence, etc., versus unification, backtracking search, etc.). Of course, a real advance in computational logic might allow us to merge or reorganize this classification.

Although type theory has been essentially designed to fill the gap between these two kinds of approaches, it appears that each system implementing type theory up to now only follows one of the approaches. For example, the Coq system, implementing the Calculus of Inductive Constructions (CIC), uses proof normalization, while the Twelf system, implementing the Edinburgh Logical Framework (LF, a sub-system of CIC), follows the proof search approach (normalization appears in LF, but it is much weaker than in, say, CIC).

The Parsifal team works on both the proof normalization and proof search approaches to the specification of computation.

Once a computational system (e.g., a programming language, a specification language, a type system) is given a logical (relational) specification, how do we reason about the formal properties of such a specification? New results in proof theory are being developed to help answer this question.

The traditional architecture for systems designed to support reasoning about the formal correctness of specification and programming languages can generally be characterized at a high level as follows:

**First: Implement mathematics.** This often involves choosing between a classical or constructive (intuitionistic) foundation, as well as choosing an abstraction mechanism (e.g., sets or functions). The Coq and NuPRL systems, for example, have chosen an intuitionistic typed λ-calculus for their approach to the formalization of mathematics. Systems such as HOL use classical higher-order logic, while systems such as Isabelle/ZF use classical set theory.

**Second: Reduce programming correctness problems to mathematics.** Thus, data structures, states, stacks, heaps, invariants, etc., are all represented as various kinds of mathematical objects. One then reasons directly on these objects using standard mathematical techniques (induction, primitive recursion, fixed points, well-founded orders, etc.).

Such an approach to formal methods is, of course, powerful and successful. There is, however, growing evidence that many proof search specifications that rely on such intensional aspects of logic as bindings and resource management (as in linear logic) are not served well by encoding them into the traditional data structures found in such systems. In particular, the resulting encoding can often be complicated enough that the *essential logical character* of a problem is obfuscated.

Despeyroux, Pfenning, Leleu, and Schürmann proposed two different type theories based on modal logic in which expressions (possibly with binding) live in a weak function space while general functions (for case and iteration reasoning) live in the full function space. These works give a possible answer to the problem of extending the Edinburgh Logical Framework, well suited for describing expressions with binding, with recursion and induction principles internalized in the logic (as done in the Calculus of Inductive Constructions). However, extending these systems to dependent types seems to be difficult (an initial attempt has been given).

The LINC logic appears to be a good meta-logical setting for proving theorems about such logical specifications. The three key ingredients of LINC can be described as follows.

First, LINC is an intuitionistic logic for which provability is described similarly to Gentzen's LJ calculus. Quantification at higher-order types (but not predicate types) is allowed, and terms are simply typed λ-terms considered up to βη-equivalence. This core logic provides support for *λ-tree syntax*, a particular approach to *higher-order abstract syntax*. Considering a classical logic extension of LINC is also of some interest, as is an extension allowing for quantification at predicate type.

Second, LINC incorporates the proof-theoretical notion of *definition* (also called *fixed points*), a simple and elegant device for extending a logic with the if-and-only-if closure of a logic specification and for supporting inductive and co-inductive reasoning over such specifications. This notion of definition was developed by Hallnäs and Schroeder-Heister and, independently, by Girard. Later, McDowell, Miller, and Tiu made substantial extensions to our understanding of this concept. Tiu and Momigliano have also shown how to modify the notion of definition to support induction and co-induction in the sequent calculus.
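The idea of a definition as the if-and-only-if closure of a specification can be illustrated by computing a least fixed point over a finite domain. The OCaml sketch below (a hypothetical example, not part of LINC or its implementations) closes a `path` relation under the clause "path(x,y) iff edge(x,y) or there is a z with edge(x,z) and path(z,y)" by iterating until nothing new is derivable.

```ocaml
(* Pairs of ints as a finite relation. *)
module S = Set.Make (struct
  type t = int * int
  let compare = compare
end)

(* An illustrative edge relation: 1 -> 2 -> 3. *)
let edges = [ (1, 2); (2, 3) ]

(* One application of the definition's closure operator. *)
let step known =
  List.fold_left
    (fun acc (x, z) ->
       S.fold
         (fun (z', y) acc -> if z = z' then S.add (x, y) acc else acc)
         known
         (S.add (x, z) acc))
    known edges

(* Iterate to the least fixed point. *)
let rec lfp f s =
  let s' = f s in
  if S.equal s s' then s else lfp f s'

let path = lfp step S.empty
(* path now contains (1,2), (2,3), and the derived fact (1,3) *)
```

Inductive reasoning over the definition corresponds to reasoning about this least fixed point, and co-inductive reasoning to the greatest one.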

Third, LINC contains a new (third) logical quantifier ∇ (nabla). After several attempts to reason about logic specifications without using this new quantifier, it became clear that when the object-logic supports λ-tree syntax, the *generic judgment* and its associated quantifier ∇ could play a strong and declarative role in reasoning. This new quantifier helps capture internal (intensional) reasons for a judgment to hold generically, in contrast with the universal judgment, which holds for external (extensional) reasons. Another important observation about ∇ is that, given a logic specification that is essentially a collection of Horn clauses (that is, with no uses of negation in the specification), there is no distinction to be made between ∇ and ∀ in the premise (body) of semantic definitions. In the presence of negations and implications, a difference between these two quantifiers does arise.

There is a great deal of non-determinism present in the search for proofs (in the sense of automated deduction). The non-determinism involved with generating lemmas is one extreme: when attempting to prove one formula, it is possible to generate a potential lemma and then attempt to prove it and to use it to prove the original formula. In general, there are no clues as to what is a useful lemma to construct. The famous “cut-elimination” theorem says that it is possible to prove theorems without using lemmas (that is, by restricting to *cut-free* proofs). Of course, cut-free proofs are not appropriate for all domains of computational logic since they can be vastly larger than proofs containing cuts. Even when restricting to cut-free proofs makes sense (as in logic programming, model checking, and some areas of automated reasoning), the construction of cut-free proofs still contains a great deal of non-determinism.

Structuring the non-deterministic choices within the search for cut-free proofs has received increasing attention in recent years with the development of *focusing proof systems*. In such proof systems, there is a clear separation between non-deterministic choices for which no backtracking is required (“don't care non-determinism”) and choices where backtracking may be required (“don't know non-determinism”). Furthermore, when a backtrackable choice is required, that choice actually extends over a series of inference rules, representing a “focus” during the construction of a proof. One focusing-style proof system was developed within the early work on providing a proof-theoretic foundation for logic programming via *uniform proofs*. The first comprehensive analysis of focusing proofs was done in linear logic by Andreoli. There it was shown that proofs are constructed in two alternating phases: a *negative phase* in which the don't-care non-deterministic choices are made and a *positive phase* in which a focused sequence of don't-know non-deterministic choices is applied.
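The phase distinction can be illustrated on a tiny intuitionistic fragment. In the following OCaml sketch (an illustration only, not one of the focused systems discussed here, and handling only goals whose hypotheses decompose into atoms and conjunctions), the invertible right rules for conjunction and implication are applied eagerly with no backtracking (the negative phase), while disjunction and the choice of a hypothesis to focus on may require backtracking (the positive phase):

```ocaml
type form =
  | Atom of string
  | And of form * form
  | Or of form * form
  | Imp of form * form

(* Invertible decomposition of a hypothesis before adding it. *)
let rec add ctx = function
  | And (a, b) -> add (add ctx a) b
  | f -> f :: ctx

(* prove ctx goal: negative-phase rules first, then positive choices. *)
let rec prove ctx = function
  | And (a, b) -> prove ctx a && prove ctx b   (* invertible: no backtracking *)
  | Imp (a, b) -> prove (add ctx a) b          (* invertible: no backtracking *)
  | Or (a, b) -> prove ctx a || prove ctx b    (* positive: may backtrack *)
  | Atom p -> List.mem (Atom p) ctx            (* focus: initial rule *)
```

For example, `prove [] (Imp (And (Atom "p", Atom "q"), Atom "p"))` succeeds without any backtracking, whereas proving `Imp (Atom "p", Or (Atom "q", Atom "p"))` requires abandoning the left disjunct before finding the right one.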

Since a great deal of automated deduction (in the sense of logic programming, type inference, and model checking) is done in intuitionistic and classical logic, there is a strong need to have comprehensive focusing results for these logics as well. In linear logic, the assignment of inference rules to the positive and negative phases is canonical (only the treatment of atomic formulas is left as a non-canonical choice). Within intuitionistic and classical logic, a number of inference rules do not have canonical treatments. Instead, several focusing-style proof systems have been developed one-by-one for these logics. A general scheme for putting all of these choices together has recently been developed within the team and will be described below.

There has been a good deal of concern in the proof theory literature on the nature of proofs as objects. An example of such a concern is the question as to whether or not two proofs should be considered equal. Such considerations were largely of interest to philosophers and logicians. Computer scientists started to get involved with the structure of proofs more generally in essentially two ways. The first is the extraction of algorithms from constructive proofs. The second is the use of proof-like objects to help make theorem proving systems more sophisticated (proofs could be stored, edited, and replayed, for example).

It was not until the development of the topic of *proof carrying code (PCC)* that computer scientists from outside the theorem proving discipline took a particular interest in having proofs as actual data structures within computations. In the PCC setting, proofs of safety properties need to be communicated from one host to another: the existence of the proof means that one does not need to trust the correctness of a piece of software (for example, that it is not a virus). Given this need to produce, communicate, and check proofs, the actual structure and nature of proofs-as-objects becomes increasingly important.

Often the term *proof certificate* (or just *certificate*) is used to refer to some data structure that can be used to communicate a proof so that it can be checked. A number of proposals have been made for the possible structure of such certificates: for example, proof scripts in theorem provers such as Coq are frequently used. Other notions include oracles and fixed points.

The earliest papers on PCC made use of the logic programming systems Twelf and λProlog. It seems that the setting of logic programming is a natural one for exploring the structure of proofs and the trade-offs between proof size and the need for run-time proof search. For example, there is a general trade-off between the size of a proof object and the amount of search one must do to verify that an object does, in fact, describe a proof. Exploring such trade-offs should be easy and natural in the proof search setting, where such search is automated. In particular, focused proof systems should be a large component of such an analysis.

Deep inference is a novel methodology for presenting deductive systems. Unlike traditional formalisms like the sequent calculus, it allows rewriting of formulas deep inside arbitrary contexts. This new freedom for designing inference rules creates a richer proof theory. For example, in systems using deep inference, we have a greater variety of normal forms for proofs than in sequent calculus or natural deduction systems. Another advantage of deep inference systems is their close relationship to categorical proof theory: due to the deep inference design, one can directly read off the morphisms from the derivations, with no need for a counterintuitive translation.

One reason for using categories in proof theory is to give a precise algebraic meaning to the identity of proofs: two proofs are the same if and only if they give rise to the same morphism in the category. Finding the right axioms for the identity of proofs for classical propositional logic had long been thought to be impossible, due to “Joyal's Paradox”. For the same reasons, it was believed for a long time that it is not possible to have proof nets for classical logic. Nonetheless, Lutz Straßburger and François Lamarche provided proof nets for classical logic and analyzed the category theory behind them; a deeper analysis of the category-theoretical axioms for proof identification in classical logic followed. Particular focus is on the so-called *medial rule*, which plays a central role in the deep inference deductive system for classical logic.

The following research problems are investigated by members of the Parsifal team:

Find deep inference systems for richer logics. This is necessary for making the proof-theoretic results of deep inference accessible to the applications described in the previous sections of this report.

Investigate the possibility of focusing proofs in deep inference. As described before, focusing is a way to reduce the non-determinism in proof search. However, it is well investigated only for the sequent calculus. In order to apply deep inference in proof search, we need to develop a theory of focusing for deep inference.

Use the results on deep inference to find new axiomatic descriptions of categories of proofs for various logics. So far, this is well understood only for linear and intuitionistic logics. Already for classical logic there is no commonly accepted notion of proof category. How logics like LINC can be given a categorical axiomatisation is completely open.

Proof nets are abstract (graph-like) presentations of proofs such that all "trivial rule permutations" are quotiented away. More generally, we investigate combinatorial objects and correctness criteria for studying proofs independently from syntax. Ideally the notion of proof net should be independent from any syntactic formalism. But due to the almost absolute monopoly of the sequent calculus, most notions of proof nets proposed in the past were formulated in terms of their relation to the sequent calculus. Consequently we could observe features like “boxes” and explicit “contraction links”. The latter appeared not only in Girard's proof nets for linear logic but also in Robinson's proof nets for classical logic. In this kind of proof nets, every link in the net corresponds to a rule application in the sequent calculus.

The concept of deep inference allows the design of entirely new kinds of proof nets. Recent work by Lamarche and Straßburger has extended the theory of proof nets for multiplicative linear logic to multiplicative linear logic with units. This seemingly small step—just adding the units—had long been an open problem, and the solution was found only by systematically exploiting the new insights coming from deep inference. A proof net no longer just mimics the sequent calculus proof tree, but is rather an additional graph structure placed on top of the formula tree (or sequent forest) of the conclusion. The work on proof nets within the team is focused on the following two directions:

Extend the work of Lamarche and Straßburger to larger fragments of linear logic, containing the additives, the exponentials, and the quantifiers.

Find (for classical logic) a notion of proof nets that is deductive, i.e., that can effectively be used for doing proof search. An important property of deductive proof nets must be that correctness can be checked in linear time. For the classical logic proof nets by Lamarche and Straßburger, this takes exponential time (in the size of the net). We hope that eventually deductive proof nets will provide a “bureaucracy-free” formalism for proof search.

Systems in molecular biology, such as those for regulatory gene networks or protein-protein interactions, can be seen as state transition systems that have an additional notion of *rate* of change. Methods for specifying such systems are an active research area. However, to our knowledge, no logic (more powerful than boolean logic) has been proposed so far to both specify and reason about these systems.

One current and prominent method uses process calculi, such as the stochastic π-calculus, which have a built-in notion of rate. Process calculi, however, have the deficiency that reasoning about the specifications is external to the specifications themselves, usually depending on simulations and trace analysis.

Kaustuv Chaudhuri and Joëlle Despeyroux are considering the problem of giving a *logical* instead of a *process-based* treatment both to specify and to reason about biological systems in a uniform linguistic framework. The logic they have proposed, called HyLL, is an extension of (intuitionistic) linear logic with a modal situated truth that may be reified by means of the satisfaction operator from *hybrid logic*. A variety of semantic interpretations can be given to this logic, including rates and delays.

The expressiveness of the logic has been demonstrated on small examples and first meta-theoretical properties of the logic have been proven. Considerable work needs to be done before this proposal succeeds as a natural logical framework for systems biology. Remaining work mainly includes the description of larger examples (requiring more specifications of usual biological notions), and automating reasoning about the specifications. It also includes further studies of the meta-theoretical properties of the logic, and of course eventual extensions of the logic (for example to get branching semantics).

When operational semantics is presented as inference rules, it can often be encoded naturally as a logic program, which means that it is usually easy to animate such semantic specifications in direct and natural ways. Given the natural duality between finite success and finite failure (given proof-theoretic foundations in earlier papers), it is also possible to describe model checking systems in a proof-theoretic setting.

One application area for this work is, thus, the development of model checking software that can work on linguistic expressions that may contain bound variables. Specific applications could be towards checking bisimulation of π-calculus and λ-calculus expressions.

More about a prototype model checker in this area is described below.

There has been increasing interest in the international community in the use of formal methods to provide proofs of properties of programs and entire programming languages. Proof carrying code is one such example. Two more areas in which the team's efforts should have important applications are the following two challenges.

Tony Hoare's Grand Challenge titled “Verified Software: Theories, Tools, Experiments” has as a goal the construction of “verifying compilers” to support a vision of a world where programs would only be produced with machine-verified guarantees of adherence to specified behavior. Guarantees could be given in a number of ways: proof certificates being one possibility.

When one looks at systems of biochemical reactions in molecular biology, such as gene-protein and protein-protein interaction systems, one observes two basic phenomena: state change (where, for example, two or more molecules interact to form other molecules) and delay. Each of the state changes has an associated delay before the state change is observed, or, more precisely, a probability distribution over possible delays: the rate of the change. A system of biochemical reactions can therefore be seen as a stochastic computation.
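This stochastic reading can be made concrete with a minimal Gillespie-style simulation of a single reaction A + B → C with rate constant `k`. The sketch below is an illustration under simplifying assumptions (one reaction channel, mass-action propensity), not part of HyLL or any team formalism: each state change is preceded by an exponentially distributed delay whose rate is the reaction's current propensity.

```ocaml
(* Minimal Gillespie-style simulation of one reaction A + B -> C.
   Returns the elapsed time and the final molecule counts. *)
let simulate ?(seed = 42) k steps (a0, b0, c0) =
  Random.init seed;
  let rec loop t (a, b, c) n =
    if n = 0 || a = 0 || b = 0 then (t, (a, b, c))
    else begin
      (* Propensity of A + B -> C under mass-action kinetics. *)
      let propensity = k *. float_of_int (a * b) in
      (* Delay before the next state change: Exp(propensity). *)
      let u = 1.0 -. Random.float 1.0 in   (* u in (0, 1] *)
      let delay = -. log u /. propensity in
      loop (t +. delay) (a - 1, b - 1, c + 1) (n - 1)
    end
  in
  loop 0.0 (a0, b0, c0) steps
```

Running `simulate 0.5 10 (20, 20, 0)` fires the reaction ten times; the elapsed time varies with the random seed, while the counts are determined by the stoichiometry. The probability distribution over delays is exactly the "rate of the change" mentioned above.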

The HyLL logic proposed by Kaustuv Chaudhuri and Joëlle Despeyroux is a first attempt at providing a logical framework for both specifying and reasoning about such computations.

In order to provide some practical validation of the formal results mentioned above regarding the logic LINC and the quantifier ∇, we picked a small but expressive subset of that logic for implementation. While that subset did not include the proof rules for induction and co-induction (which are difficult to automate), it did allow for model-checking style computation. During 2006 and 2007, the Parsifal team, with contributions from our close colleagues at the University of Minnesota and the Australian National University, designed and implemented the Bedwyr system for doing proof search in that fragment of LINC. This system is organized as an open source project, is hosted on INRIA's GForge server, and has been described in two conference papers. The system, which is implemented in OCaml, has been downloaded about 200 times since it was first released.

Bedwyr is a generalization of logic programming that allows model checking directly on syntactic expressions possibly containing bindings. This system, written in OCaml, is a direct implementation of two recent advances in the theory of proof search.

It is possible to capture both finite success and finite failure in a sequent calculus. Proof search in such a proof system can capture both may and must behavior in operational semantics.

Higher-order abstract syntax is directly supported using term-level λ-binders, the ∇-quantifier, higher-order pattern unification, and explicit substitutions. These features allow reasoning directly on expressions containing bound variables.
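The general idea behind higher-order abstract syntax can be sketched in a few lines of OCaml: object-level binders are represented as meta-level functions, so capture-avoiding substitution comes for free. This is a simplified illustration only; Bedwyr's actual treatment via λ-tree syntax and higher-order pattern unification is considerably more refined.

```ocaml
(* Object-level lambda-terms in higher-order abstract syntax:
   the body of a binder is a meta-level OCaml function, so there
   is no variable-name bookkeeping at all. *)
type tm =
  | App of tm * tm
  | Lam of (tm -> tm)

(* Reduce a top-level beta-redex: substitution is just application
   of the meta-level function. *)
let step = function
  | App (Lam f, arg) -> Some (f arg)
  | _ -> None
```

The flip side of this encoding is that analyzing the structure underneath a binder requires extra machinery; the difficulties of such function spaces are part of what motivates the λ-tree syntax approach and the ∇-quantifier discussed above.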

Bedwyr has served well to validate the underlying theoretical considerations while at the same time providing a useful tool for exploring some applications. The distributed system comes with several example applications, including the finite π-calculus (operational semantics, bisimulation, trace analysis, and modal logics), the spi-calculus (operational semantics), value-passing CCS, the λ-calculus, winning strategies for games, and various other model checking problems.

During the summer of 2007, Baelde (LIX PhD student) and visiting intern Zachary Snow (PhD student from the University of Minnesota) built a prototype theorem prover, called *Taci*, which we are currently using “in-house” to experiment with a number of large examples and a few different logics. During the summer of 2008, Snow visited the team again: he, Baelde, and Viel developed Taci in a new direction, taking one logic (an intuitionistic logic of fixed points) and developing an automated prover for it based on the focused proof search techniques that the team and others have been developing in recent years. This new prototype is now a focal point for further implementations.

An important application area for some of the proof theory results of the team is reasoning about the operational semantics of programs and specifications. Miller surveyed various approaches to encoding such semantic specifications in a survey article. Miller and former team member Alwen Tiu recently submitted a paper that provides, in their opinion, the definitive treatment of the (finite) π-calculus within the λ-tree syntax approach to encoding. In that paper, the syntax and transition semantics of the π-calculus are presented as simple logic programs. Using those specifications, they developed the notions of open and late bisimulation. All of these specifications are declarative and without side conditions. As a result of maintaining a high level of declarativeness in these specifications, it was possible to describe a novel characterization of the differences between open and late bisimulation. Full adequacy results were provided, thereby showing a precise match between the “standard” techniques for the specification of the π-calculus and the more abstract, proof-theoretically inspired approach based on λ-tree syntax.

The team has been actively extending the scope and effectiveness of ∇-quantification. As Tiu and Miller have shown, the ∇ quantifier (developed in previous years within the team) provides a completely satisfactory treatment of binding structures in the *finite* π-calculus. Moving this quantifier to treat infinite behaviors via induction and co-induction required new advances in the underlying proof theory of ∇-quantification.

The team has explored two different approaches to this problem. David Baelde has developed a minimalist generalization of previous work by Miller and Tiu: he has found what seems to be the simplest extension of that earlier work that allows ∇ to interact properly with fixed points and their inference rules (namely, induction and co-induction). His logical approach allows for a rather careful and rigid understanding of scope in the treatment of the meta-theory of logics and computational specifications.

Another angle has been developed as a result of our close international collaborations. Alwen Tiu, now at the Australian National University, has developed a logic that extends the earlier, “minimal” approach by introducing the structural rules of strengthening and exchange into the context of generic variables. As a result, the behavior of bindings becomes much more like the behavior of names more generally, while still maintaining much of their status as binders. In combination with our close colleagues at the University of Minnesota, we have extended this work to include a new definitional principle, called *nabla-in-the-head*, that strengthens our ability to declaratively describe the structure of contexts and proof invariants. This new definitional principle, along with examples of its use, has been presented in recent papers. Our colleague Andrew Gacek (a PhD student at the University of Minnesota and former intern with Parsifal) has also built the Abella proof editor, which allows for the direct implementation of this new definitional principle. His system is in distribution and has been used by a number of people to develop examples in this logic.

Since focusing proof systems seem to be behind much of our computational logic framework, the team has spent some energy developing further some foundational aspects of this approach to proof systems.

Chuck Liang and Miller have recently finished a paper in which a comprehensive approach to focusing in intuitionistic and classical logic is developed.

Given the team's ambition to automate logics that require induction and co-induction, we have also looked in detail at the proof theory of fixed points. In particular, David Baelde's recent PhD thesis contains a number of important, foundational theorems regarding focusing and fixed points. Notably, he has examined the logic MALL (multiplicative additive linear logic). To strengthen this decidable logic into a more general logic, Girard added the exponentials, which allow for modeling unbounded (“infinite”) behavior. Baelde considers, instead, the addition of fixed points, and he has developed the proof theory of the resulting logic. We see this logic as being behind much of the work that the team will be doing in the coming few years.

Alexis Saurin's recent PhD thesis also contains a wealth of new material concerning focused proof systems. In particular, he provides a new and modular approach to proving the completeness of focused proof systems and develops the theme of multifocusing.

A particular outcome of our work on focused proof search is the use of *maximally multifocused proofs* to help provide sequent calculus proofs with a notion of canonicity. In particular, Chaudhuri (a former Parsifal post doc), Miller, and Saurin have shown that maximally multifocused sequent proofs can be placed in one-to-one correspondence with more traditional proof net structures for subsets of MALL.

Given the team's previous efforts at building automated deduction systems that viewed finite success and finite failure as duals within a proof-theoretic setting, we were intrigued to see if that duality could be extended beyond the simple, Horn clause setting. This past year, Olivier Delande and Miller described a way to extend that notion of proving and refuting to the richer setting of MALL. They showed how it is possible in that logic to view moves in a game as contributing simultaneously to both a proof and a refutation of a given formula. Of course, only one of the players of such a game can be a winner. Delande, Miller, and Saurin have since completed a comprehensive treatment of this style of game.

It is well known how to use an intuitionistic meta-logic to specify natural deduction systems. It is also possible to use linear logic as a meta-logic for the specification of a variety of sequent calculus proof systems. Nigam and Miller have shown that, by adopting different
*focusing* annotations for such linear logic specifications, a range of other proof systems can also be specified. In particular, they have shown that natural deduction (normal and
non-normal), sequent proofs (with and without cut), tableaux, and proof systems using general elimination and general introduction rules can all be derived from essentially the same linear
logic specification by altering focusing annotations. By using elementary linear logic equivalences and the completeness of focused proofs, we are able to derive new and modular proofs of the
soundness and completeness of these various proof systems for intuitionistic and classical logics.

In proof complexity one usually distinguishes between proofs “with extension” and proofs “without extension”, whereas in other areas of proof theory one distinguishes between proofs “with cut” and proofs “without cut”. We have shown that, with the use of deep inference, it is possible to provide a uniform treatment of both classifications. This allows, in particular, the study of cut-free proofs with extension, which is not possible in other formalisms. Using deep inference we could also give a new and simpler proof of the well-known theorem that extended Frege systems p-simulate Frege systems with substitution, and vice versa.

One way of obtaining cut-free proofs is to normalize a proof that contains cuts. Stéphane Lengrand has studied the termination of normalization procedures in sequent calculi. The methodology uses proof terms and rewrite systems on these objects, which must then be proved terminating. Lengrand has shown the termination of a cut-elimination system for a focused sequent calculus in relation to call-by-value semantics. The proof-term approach could provide a systematic method for obtaining cut-elimination results in focused sequent calculi such as LJF and LKF.

Meta-variables are central in proof search mechanisms to represent incomplete proofs and incomplete objects. They are used in almost all implementations of proof software, yet their meta-theory remains less explored than that of complete proofs and objects such as the λ-calculus.

Stéphane Lengrand and Jamie Murdoch Gabbay have studied an extension of the λ-calculus with a particular kind of meta-variables originating from nominal logic. A paper on this study is under revision for publication in Information and Computation.

The highlight of 2008 is the design of a typing system with polymorphism
*à la* Hindley-Milner, as used in programming languages like Caml. The extension of the
λ-calculus with meta-variables creates particular difficulties for the typing algorithm to be implemented in the compiler. A decision algorithm for typing is being developed using
constraints and graph theory.
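The constraint-solving idea behind such typing algorithms can be sketched as follows (an illustrative toy only, with invented names; it shows ordinary first-order unification of type-equality constraints and is not the authors' algorithm, which must additionally handle meta-variables in terms):

```python
# Sketch of constraint-based typing: collect equality constraints between
# types containing unification variables (written "?a") and solve them by
# first-order unification.  The occurs check is omitted for brevity.

def resolve(t, subst):
    """Follow the substitution as far as possible on a variable."""
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def unify(constraints):
    """Solve a list of type-equality constraints; return a substitution."""
    subst = {}
    while constraints:
        s, t = constraints.pop()
        s, t = resolve(s, subst), resolve(t, subst)
        if s == t:
            continue
        if isinstance(s, str) and s.startswith("?"):       # variable on left
            subst[s] = t
        elif isinstance(t, str) and t.startswith("?"):     # variable on right
            subst[t] = s
        elif isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
            constraints.extend(zip(s, t))                  # decompose arrows
        else:
            raise TypeError(f"cannot unify {s} and {t}")
    return subst

# Typing "f applied to an int" where f : ?a -> ?a gives the constraints
# ?f = ?a -> ?a and ?f = int -> ?r; solving them forces ?r = int.
subst = unify([("?f", ("->", "?a", "?a")), ("?f", ("->", "int", "?r"))])
print(resolve("?r", subst))  # → "int"
```

Hindley-Milner polymorphism then enters by generalizing the variables left unconstrained after solving, at let-bindings.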

Kaustuv Chaudhuri and Joëlle Despeyroux have proposed a logic, called HyLL (for Hybrid Linear Logic), for defining stochastic transition systems using linear implications for the state transitions and a modal logic with the worlds representing the rates of transitions. The worlds (rates) are reified in the propositional syntax using the connectives of hybrid logic, which allows directly encoding and reasoning about biological reaction systems, the intended use of this logic. This year, they have demonstrated the expressivity of the logic by giving an adequate encoding of the stochastic pi-calculus using a focused sequent calculus. A paper has been submitted and a technical report is in preparation.
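The kind of system HyLL is designed to reason about can be illustrated with a small simulation (a sketch with invented states and rates; it shows only the operational notion of rated transitions, not the logic itself): each transition carries a rate, and a step is chosen with probability proportional to its rate, as in Gillespie-style stochastic simulation.

```python
# Sketch: a stochastic transition system where each transition carries a
# rate, and the next state is drawn with probability rate / total_rate.
import random

transitions = {
    "A": [("B", 2.0), ("C", 1.0)],   # A -> B at rate 2.0, A -> C at rate 1.0
    "B": [("C", 0.5)],
    "C": [],                          # terminal state
}

def step(state, rng):
    """Pick the next state with probability proportional to the rate."""
    outs = transitions[state]
    if not outs:
        return state
    total = sum(rate for _, rate in outs)
    pick = rng.uniform(0.0, total)
    for target, rate in outs:
        pick -= rate
        if pick <= 0:
            return target
    return outs[-1][0]

rng = random.Random(0)
state = "A"
while transitions[state]:
    state = step(state, rng)
print(state)  # → "C", the only terminal state
```

In HyLL, a linear implication encodes the state change while a hybrid-logic world records the rate, so such systems can be encoded and reasoned about inside the logic.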

The ANR blanc project “INFER: Theory and Application of Deep Inference”, coordinated by Lutz Straßburger, was accepted in September 2006. Besides Parsifal, the teams associated with this effort are represented by François Lamarche (INRIA-Loria) and Michel Parigot (CNRS-PPS).

Slimmer, which stands for
*Sophisticated logic implementations for modeling and mechanical reasoning*, is an “Équipe Associée” with seed money from INRIA. This project was initially designed to bring together the
Parsifal personnel and Gopalan Nadathur's Teyjus team at the University of Minnesota (USA). Separate NSF funding for this effort has also been awarded to the University of Minnesota. We are
planning to expand the scope of this project to include other French and non-French sites, in particular, Alwen Tiu (Australian National University), Elaine Pimentel (Universidade Federal de
Minas Gerais, Brazil), and Brigitte Pientka (McGill University, Canada).

Mobius stands for “Mobility, Ubiquity and Security” and is an Integrated Project proposal submitted in response to the call FP6-2004-IST-FETPI. The proposal involves numerous sites in Europe and was awarded in September 2005. This large, European project is coordinated via INRIA-Sophia.

TYPES was a European project (a coordination action from the IST program) aimed at developing the technology of formal reasoning based on type theory. The project brought together 36 universities and research centers from 8 European countries (France, Italy, Germany, the Netherlands, the United Kingdom, Sweden, Poland, and Estonia). It was the continuation of a series of European projects dating back to 1992. The funding from the last project helped maintain collaboration within the community by supporting an annual workshop, a few smaller thematic workshops, one summer school, and visits of researchers to one another's labs. The funding of the project ended in April 2008.

The PAI Amadeus program for collaboration between France and Austria approved the grant “The Realm of Cut Elimination” in November 2006. This grant has allowed for collaborations between the Parsifal team and the groups of Agata Ciabattoni at the Technische Universität Wien (Austria) and Michel Parigot at CNRS-PPS.

This collaboration between Paris and Bern (Switzerland) aims at exploring questions in the structure and identity of proofs. The people involved in the Paris area are Lutz Straßburger, Dale Miller, Alexis Saurin, David Baelde, Michel Parigot, Stéphane Lengrand, and Séverine Maingaud. The people involved in Bern are Kai Brünnler, Richard McKinley, and Phiniki Stouppa.

Lutz Straßburger co-organized (with Pascal Manoury, Michel Parigot, and Paul Roziere from PPS) a workshop on “Structural Proof Theory” from 19–21 November 2008 in Paris. The workshop was partially supported by the projects ANR INFER, PHC “Realm of Cut Elimination”, and PHC “Deep Inference and the Essence of Proofs”.

Stéphane Lengrand organized two workshops on “Proof Search in Type Theories” at the École Polytechnique, on 10 April and 5–6 June 2006.

Dale Miller has the following editorial duties.

*Theory and Practice of Logic Programming*. Member of Advisory Board since 1999. Cambridge University Press.

*ACM Transactions on Computational Logic (ToCL)*. Area editor for
*Proof Theory*since 1999. Published by ACM.

*Journal of Functional and Logic Programming*. Permanent member of the Editorial Board since 1996. MIT Press.

*Journal of Logic and Computation*. Associate editor since 1989. Oxford University Press.

Dale Miller was a program committee member for the following conferences.

PPDP 2008: 10th International ACM SIGPLAN Symposium on Principles and Practice of Declarative Programming, Valencia, 15-17 July.

LSFA08: Third Workshop on Logical and Semantic Frameworks with Applications, Brazil.

CSL08: 17th Annual Conference of the European Association for Computer Science Logic, 15-20 September, Bertinoro, Italy.

TCS 2008: 5th IFIP International Conference on Theoretical Computer Science, August, Milano, Italy.

FOSSACS 2008: Foundations of Software Science and Computation Structures. Budapest, Spring.

FLOPS 2008: Ninth International Symposium on Functional and Logic Programming, 14-16 April, Ise, Japan.

Stéphane Lengrand was invited to speak at the Workshop on Classical Logic and Computation 2008 (affiliated with IJCAR08), Reykjavik, 13 July 2008.

Dale Miller was invited to speak at the following meetings.

Journées du projet PEPS-Relations, University of Paris XIII, 15 - 16 December 2008.

APS: 4th International Workshop on Analytic Proof Systems, part of LPAR 2008, Doha, Qatar, 22 November 2008.

SOS 2008: Structural Operational Semantics, an affiliated workshop of ICALP 2008, Reykjavik, Iceland, 6 July 2008.

WFLP 2008: 17th International Workshop on Functional and (Constraint) Logic Programming, Siena, 3-4 July 2008.

LFMTP 2008: International Workshop on Logical Frameworks and Meta-Languages: Theory and Practice (affiliated with LICS08), Pittsburgh, 23 June 2008.

Dimostrazioni, Polaritá e Cognizione, Facoltà di Lettere e Filosofia, Università di Roma Tre, 18 April 2008.

Dale Miller co-teaches the course “Logique Linéaire et paradigmes logiques du calcul” (linear logic and logical paradigms of computation) in the masters program MPRI (“Master Parisien de Recherche en Informatique”) (2004-2008).

Stéphane Lengrand teaches the course “Logique formelle et Programmation Logique” (formal logic and logic programming) at the École d'ingénieur ESIEA (2008).

Dale Miller was an examiner for the PhD theses of Samuel Mimram (Université Paris VII, 1 December 2008) and of Paolo Di Giamberardino (University of Rome 3 and University of the Mediterranean, 18 April 2008).

From March to August 2008, Edlira Nano (MPRI) was writing her Master's thesis on “Star-autonomous categories without units” under the supervision of Lutz Straßburger.

Also from March to August 2008, both Ivan Gazeau and Alexandre Viel conducted their MPRI internships at LIX under the supervision of Dale Miller.