The aim of the CARTE research team is to take adversity into account in computations: adversity induced by actors whose behaviors are unknown or unclear. We call this notion adversary computation.

The project combines two approaches, and we believe their combination will be fruitful. The first is the analysis of the behavior of wide-scale systems, using tools coming from both Continuous Computation Theory and Game Theory. The second is to build defenses with tools coming from logic, rewriting and, more generally, Programming Theory.

The activities of the CARTE team are organized around three research actions:

Computations by Dynamical Systems

Robust and Distributed Algorithms, Algorithmic Game Theory

Computer Virology.

Winner of the OSEO prize, in the Emergence category, for the project of a malware detector based on morphological analysis. The current implementation has been validated with large-scale experiments.

Opening of the High Security lab (LHS), see
http://

Survey in the Journal TOCL on sup-interpretation methods to analyze program resources

Using code instrumentation, thanks to Pin and TraceSurfer, we are able to deal with packed malware .

We propose a fine-grained stratification to characterize small complexity classes, together with a diagonalisation method to separate classes .

We survey the different fields which underlie the scientific basis of CARTE, emphasizing the aspects related to adverse computations.

Today's classical computability and complexity theory deals with discrete time and space models of computation. However, discrete time models of machines working on a continuous space have also been considered: see e.g. *Blum, Shub and Smale* machines , or recursive analysis . Models of machines working with continuous time and space can also be considered: see e.g. the General Purpose Analog Computer of Claude Shannon .

Continuous models of computation lead to particular continuous dynamical systems. More generally, continuous time dynamical systems arise as soon as one attempts to model systems that evolve over a continuous space with a continuous time. They can even emerge as natural descriptions of discrete time or space systems. Utilizing continuous time systems is a common approach in fields such as biology, physics or chemistry, when a huge population of agents (molecules, individuals, ...) is abstracted into real quantities such as proportions or thermodynamic data , .

Computation theory of continuous dynamical systems allows us to understand both the hardness of questions related to continuous dynamical systems and the computational power of continuous analog models of computations.

A survey on continuous-time computation theory, with discussions of relations between both approaches, co-authored by Olivier Bournez and Manuel Campagnolo, can be found in .

Rewriting has reached some maturity, and the rewriting paradigm is now widely used for specifying, modeling, programming and proving. It allows for easily expressing deduction systems in a declarative way, and for expressing complex relations on infinite sets of states in a finite way, provided they are countable. Programming languages and environments with a rewriting-based semantics have been developed; let us cite ASF+SDF , Maude , and Tom .

For basic rewriting, many techniques have been developed to prove properties of rewrite systems, like confluence, completeness, consistency or various notions of termination. To a lesser extent, proof methods have also been proposed for extensions of rewriting: equational extensions, consisting of rewriting modulo a set of axioms; conditional extensions, where rules are applied only under certain conditions; typed extensions, where rules are applied only if there is a type correspondence between the rule and the term to be rewritten; and constrained extensions, where rules are enriched with formulas to be satisfied , , .
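As a toy illustration of the basic rewriting mechanism (our own sketch, unrelated to the systems cited above), Peano addition can be given by two rules and evaluated by repeatedly applying them until a normal form is reached:

```python
# Toy term rewriting: terms are the constant 'z' (zero) or nested tuples
# ('s', t) (successor) and ('add', t1, t2).
# Rules: add(z, y) -> y ; add(s(x), y) -> s(add(x, y)).
def rewrite_step(t):
    """Apply one rule at the root if possible, else recurse into subterms.
    Returns (new_term, changed)."""
    if isinstance(t, tuple) and t[0] == 'add':
        l, r = t[1], t[2]
        if l == 'z':
            return r, True
        if isinstance(l, tuple) and l[0] == 's':
            return ('s', ('add', l[1], r)), True
    if isinstance(t, tuple):
        for i, sub in enumerate(t[1:], 1):
            new, changed = rewrite_step(sub)
            if changed:
                return t[:i] + (new,) + t[i + 1:], True
    return t, False

def normalize(t):
    """Rewrite to normal form; terminates here because each applied rule
    strictly shrinks the first argument of 'add'."""
    changed = True
    while changed:
        t, changed = rewrite_step(t)
    return t

two, one = ('s', ('s', 'z')), ('s', 'z')
print(normalize(('add', two, one)))  # ('s', ('s', ('s', 'z'))), i.e. s(s(s(z))) = 3
```

Proving termination or confluence of such a system is exactly the kind of property the techniques above address; here termination is immediate, but for richer systems it requires the dedicated proof methods discussed in this section.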

An interesting aspect of the rewriting paradigm is that it allows automatable or semi-automatable correctness proofs for systems or programs. Indeed, properties of rewriting systems such as those cited above carry over to the deduction systems or programs they formalize, and the proof techniques may apply to them directly.

Another interesting aspect is that it allows characteristics or properties of the modeled systems to be expressed as equational theorems, often automatically provable using the rewriting mechanism itself or induction techniques based on completion . Note that the rewriting and completion mechanisms also enable the transformation and simplification of formal systems or programs. Applications of rewriting-based proofs to computer security are various; let us mention recent work using rule-based specifications for the detection of computer viruses , .

Game theory aims at analyzing situations of competition between rational players . After the seminal works of Émile Borel and John von Neumann, one key event was the publication in 1944 of the book by John von Neumann and Oskar Morgenstern. Game theory then spent a long period in the doldrums. Much effort was devoted at that time to the mathematics of two-person, zero-sum games.

For general games, the key concept of Nash equilibrium was proposed in the early 50s by John Nash in , but it was not until the early 70s that it was fully realized what a powerful tool Nash had provided in formulating this concept. It is now a central concept in economics, biology, sociology and psychology for discussing general situations of competition, as attested for example by several Nobel prizes in economics.
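The equilibrium concept is easy to check on a finite game. The following sketch (with illustrative Prisoner's Dilemma payoffs, not tied to any result of the team) enumerates the pure Nash equilibria of a two-player game given by payoff matrices:

```python
# Prisoner's Dilemma payoffs (illustrative values).
# Strategies: 0 = cooperate, 1 = defect.
A = [[3, 0],   # row player's payoffs
     [5, 1]]
B = [[3, 5],   # column player's payoffs
     [0, 1]]

def is_pure_nash(A, B, i, j):
    """(i, j) is a pure Nash equilibrium iff neither player can gain
    by deviating unilaterally."""
    row_best = all(A[i][j] >= A[k][j] for k in range(len(A)))
    col_best = all(B[i][j] >= B[i][k] for k in range(len(B[0])))
    return row_best and col_best

equilibria = [(i, j) for i in range(2) for j in range(2) if is_pure_nash(A, B, i, j)]
print(equilibria)  # [(1, 1)]: mutual defection, although (0, 0) pays both players more
```

The brute-force enumeration above is exponential in the number of players and strategies; the complexity of computing equilibria in general is precisely one of the questions algorithmic game theory addresses.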

Algorithmic game theory differs from game theory by taking into account algorithmic and complexity aspects. Indeed, the main developments of classical game theory have historically been carried out in a mathematical context, without real consideration of the effectiveness of the constructions.

Game theory and algorithmic game theory have large domains of application in theoretical computer science: they have been used to understand the complexity of computing equilibria , the loss of performance due to individual behavior in distributed algorithmics , the design of incentive mechanisms , and problems related to the pricing of services in some protocols ...

From a historical point of view, the first official virus appeared in 1983 on a Vax-PDP 11. At the very same time, a series of papers was published which still remains a reference in computer virology: Thompson , Cohen and Adleman .

The literature which explains and discusses practical issues is quite extensive; see for example Ludwig's book or Szor's , and many web sites. But we think that the best references are the two books by Filiol (English translation ) and . However, there are only a few theoretical/scientific studies which attempt to give a model of computer viruses.

A virus is essentially a self-replicating program inside an adversary environment. Self-replication has a solid background, based on work on fixed points in the λ-calculus and on the studies of von Neumann . More precisely, we establish in that Kleene's second recursion theorem is the cornerstone from which viruses and infection scenarios can be defined and classified. The bottom line of a virus's behavior is:

A virus infects programs by modifying them

A virus copies itself and can mutate

A virus spreads throughout a system

The above scientific foundation justifies our position of using the word virus as a generic word for self-replicating malware. (There is, however, a difference: a malware has a payload, whereas a virus may not have one.) For example, worms are autonomous self-replicating malware and so fall under our definition. In fact, the current malware taxonomy (viruses, worms, trojans, ...) is unclear and subject to debate.

Understanding computation theories for continuous systems leads to studying hardness of verification and control of these systems. This has been used to discuss problems in fields as diverse as verification (see e.g. ), control theory (see e.g. ), neural networks (see e.g. ), and so on.

We are interested in the formal decidability of properties of dynamical systems, such as reachability , the Skolem-Pisot problem , and the computability of the ω-limit set . These problems are analogous to the verification of safety properties.

In contrast with the discrete setting, it is of utmost importance to compare the various models of computation over the reals, as well as their associated complexity theories. In particular, we focus on the General Purpose Analog Computer of Claude Shannon , on recursive analysis , on the algebraic approach and on computability in a probabilistic context .

A crucial point for future investigations is to fill the gap between continuous and discrete computational models. This is one deep motivation of our work on computation theories for continuous systems.

The other research direction on dynamical systems we are interested in is the study of properties of adversary systems or programs, i.e. of systems whose behavior is unknown or indistinct, or which do not have classical expected properties. We would like to offer proof and verification tools, to guarantee the correctness of such systems.

On one hand, we are interested in continuous and hybrid systems. In a mathematical sense, a hybrid system can be seen as a dynamical system whose transition function does not satisfy the classical regularity hypotheses, like continuity, or continuity of its derivative. The properties to be verified are often expressed as reachability properties. For example, a safety property is often equivalent to the (non-)reachability of a subset of unsafe states from an initial configuration, or to stability (with its numerous variants like asymptotic stability, local stability, mortality, etc.). Thus we will essentially focus on the verification of these properties in various classes of dynamical systems.

We are also interested in rewriting techniques, used to describe dynamic systems, in particular in the adversary context. As they were initially developed in the context of automated deduction, the rewriting proof techniques, although now numerous, are not yet adapted to the complex framework of modelization and programming. An important stake in the domain is then to enrich them to provide realistic validation tools, both by providing finer rewriting formalisms with their associated proof techniques, and by developing new validation concepts in the adversary case, i.e. when usual properties of the systems, like termination for example, are not verified.

For several years, we have been developing specific procedures for proving properties of rewriting, for programming purposes, in particular with an inductive technique already applied with success to termination under strategies , , , to weak termination , to sufficient completeness and to probabilistic termination .

The last three results take place in the context of adversary computations, since they allow for proving that even a divergent program, in the sense that it does not terminate, can give the expected results.

A common mechanism has been extracted from the above works, providing a generic inductive proof framework for properties of reduction relations, which can be parametrized by the property to be proved , . Provided program code can be translated into rule-based specifications, this approach can be applied to correctness proof of software in a larger context.

A crucial element of the safety and security of software systems is the problem of resources. We are working in the field of Implicit Computational Complexity. Interpretation-based methods, like quasi-interpretations (QI) or sup-interpretations, are the approach we have been developing over the last five years; see , , . Implicit complexity is an approach to the analysis of the resources that are used by a program. Its tools come essentially from proof theory. The aim is to compile a program while certifying its complexity.
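To give a flavour of interpretation-based methods (an illustrative sketch of our own, not the Crocus implementation): a quasi-interpretation assigns a monotone function to each symbol, and every rewrite rule must be non-increasing under the interpretation, so that the interpretation of a term bounds the size of its possible results. For list append, a candidate assignment can be checked numerically on the recursive rule:

```python
# Candidate quasi-interpretation (assumed values, for illustration):
# [nil] = 0, [cons](x, l) = x + l + 1, [append](l1, l2) = l1 + l2.
cons = lambda x, l: x + l + 1
append = lambda l1, l2: l1 + l2

# Rule: append(cons(x, l), l2) -> cons(x, append(l, l2)).
# QI condition: interpretation of LHS >= interpretation of RHS, for all values.
ok = all(append(cons(x, l), l2) >= cons(x, append(l, l2))
         for x in range(10) for l in range(10) for l2 in range(10))
print(ok)  # True: here the two sides have equal interpretations
```

Sampling only illustrates the inequality; in the actual method the condition is established symbolically, and tools like Crocus synthesize such interpretations automatically.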

One of the problems related to distributed algorithmics corresponds to the minimization of resources (time of transit, quality of services) in problems of transiting information (routing problems, group telecommunications) in telecommunication networks.

Each type of network gives rise to natural constraints on models. For example, a network is generally modeled by a graph. The material and physical constraints on each component of the network (routers, communication media, topology, etc ...) result in different models. One natural objective is then to build algorithms to solve those types of problems on various models. One can also constrain solutions to offer certain guarantees: for example the property of self-stabilization, which expresses that the system must end in a correct state whatever its initial state is; or certain guarantees of robustness: even in the presence of a small proportion of Byzantine actors, the final result will remain correct; even in the presence of rational actors with divergent interests, the final result will remain acceptable.

Algorithms of traditional distributed algorithmics were designed with the strong assumption that the interest of each actor does not differ from the interest of the group. For example, in a routing problem, classical distributed algorithms do not take into account the economic interests of the various autonomous systems, and only try to minimize criteria such as shortest distances, completely ignoring the economical consequences of decisions for involved agents.

If one wants to have more realistic models, and take into account the way the different agents behave, one gets more complex models.

However, the models one gets today are hard to analyse. For example:

models of dynamism are missing: e.g., how to model a negotiation in a distributed auction mechanism for access to a telecommunications service;

only few methods are known to guarantee that the equilibrium reached by such systems remains in some domain that could be qualified as safe or reasonable;

there is almost no method discussing the speed of convergence, when there is convergence;

only a little is known about the time and space resources necessary to establish some techniques to guarantee correct behavior.

Thus, it is important to reconsider the algorithms of distributed algorithmics from the angle of the competing interests that the involved agents can have (adversary computation). This requires a good understanding of how to reason about these types of models.

Our current thinking leads us to define four different research tracks, which we describe below.

It is legitimate to wonder why there are only a few fundamental studies on computer viruses, while they constitute one of the important flaws in software engineering. The lack of theoretical studies maybe explains the weakness in the anticipation of computer diseases and the difficulty of improving defenses. For these reasons, we do think that it is worth exploring fundamental aspects, and in particular self-reproducing behaviors.

The crucial question is how to detect viruses or self-replicating malware. Cohen demonstrated that this question is undecidable. Anti-virus heuristics are based on two methods. The first one consists in searching for virus signatures. A signature is a regular expression which identifies a family of viruses. There are obvious defects: for example, an unknown virus will not be detected, like one related to a 0-day exploit. We strongly suggest having a look at the independent audit in order to understand the limits of this method. The second one consists in analysing the behavior of a program by monitoring it. Following , this kind of method is not yet really implemented. Moreover, the large number of false positives makes it barely usable. To end this short survey, intrusion detection encompasses virus detection. However, unlike computer virology, which has a solid scientific foundation as we have seen, the IDS notion of “malware” with respect to some security policy is not well defined. The interested reader may consult .
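A signature scanner of the first kind can be sketched in a few lines. The signature base below is invented for illustration (except for the well-known EICAR test string) and does not come from any real product:

```python
import re

# Hypothetical signature base: name -> regular expression over bytes.
SIGNATURES = {
    "Eicar-Test": re.compile(rb"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"),
    "Fake-Dropper": re.compile(rb"\x90{8,}\xe8...\x00"),  # NOP sled + call (made up)
}

def scan(data: bytes):
    """Return the names of all signatures matching the given byte string."""
    return [name for name, sig in SIGNATURES.items() if sig.search(data)]

sample = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
print(scan(sample))  # ['Eicar-Test']
```

The limits mentioned above are visible here: any mutation of the matched byte string escapes the signature, which is exactly why unknown or polymorphic viruses defeat this method.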

The aim is to define security policies in order to prevent malware propagation. For this, we need (i) to define what a computer is in different programming languages and settings, and (ii) to take into consideration resources like time and space. We think that formal methods like rewriting, type theory, logic, or formal languages should help to define the notion of a *formal immune system*, which defines a certified protection.

This study of computer virology leads us to propose and construct a “high security lab” in which experiments can be done in compliance with French law. This “high security lab” project is one of the main projects of the CPER 2007-2013.

In the context of our study of rule-based program proof and validation, we develop and distribute CARIBOO (
http://

Written in ELAN and Java, it has a reflexive aspect, since ELAN is itself a rule-based language. CARIBOO was partially developed in the Toundra QSL project, and reinforced in the framework of the Modulogic ACI , .

The CROCUS software aims at synthesizing quasi-interpretations. It takes programs as input and returns the corresponding quasi-interpretation. In doing so, it can guarantee bounds on the memory used during computations by the input program. The currently analyzed programs are written in a subset of the CAML language, more precisely a first-order functional subset of CAML. The synthesis procedure has been reconsidered; it is now more robust and efficient.

We have developed a new approach to malware detection that we call morphological analysis. Refer to Section for explanations of how it works. This software is registered (APP deposit). Thanks to this malware detection engine, we won an OSEO prize in the Emergence category in 2009. Publications related to this are .

TraceSurfer is our prototype implementation using dynamic binary instrumentation for malware analysis. This tool, based on Pin , can reconstruct the code waves used in self-modifying programs and detect protection patterns based on these code waves. Publications related to TraceSurfer are , , .

Mr. Waffles is a small implementation of the CTL model-checking algorithm described in the classical textbook by Clarke et al. Its purpose is to allow easy experimentation with model checking for academic projects, with a focus on program analysis. For this reason, we actually implemented checking of CTL-FV formulas, a more expressive extension of CTL with backward branches and free variables, originally described in . The project has been released as an open source Python library at
http://
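The core of such a model checker is a fixpoint computation over the transition relation; for instance, EF φ (φ is reachable) is the least fixpoint of X = φ ∪ EX X. A minimal sketch of our own, independent of the Mr. Waffles code base:

```python
def ex(trans, phi):
    """EX phi: states with at least one successor satisfying phi."""
    return {s for s, succs in trans.items() if any(t in phi for t in succs)}

def ef(trans, phi):
    """EF phi: least fixpoint of X = phi | EX(X), computed by iteration."""
    cur = set(phi)
    while True:
        nxt = cur | ex(trans, cur)
        if nxt == cur:
            return cur
        cur = nxt

# Tiny Kripke structure: 0 -> 1 -> 2, 2 -> 2, and an isolated loop 3 -> 3.
trans = {0: {1}, 1: {2}, 2: {2}, 3: {3}}
print(sorted(ef(trans, {2})))  # [0, 1, 2]: state 3 cannot reach state 2
```

The iteration converges because the state set is finite and each step only grows the current set; the same schema, with greatest fixpoints, handles the remaining CTL operators.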

Tartetatintools is made of four program instrumentation tools which are useful for program analysis:

antiantidebug: detects a few anti-debugging tricks on Windows

puppetmaster: detects a few CPU-based VMM detection tricks

stracewin_ia32: logs system calls and their parameters for the traced program

tracesurfer: a self-modifying code analyzer (along with an IDA add-on)

The project has been released as an open source project at
http://

Cremebrulee is an experimental JavaScript dynamic instrumentation engine. It takes a script, rewrites it (i.e. instruments it) and runs the instrumented version. During the rewriting, a few modifications are made that log interesting events in obfuscated scripts.

This tool is useful for analyzing programs written in JavaScript.

The project has been released as an open source project at
http://

In the context of program analysis, we develop and distribute Pym's library
http://

The former is a Python disassembling library that provides interfaces for reverse engineering and static analysis, such as control flow extraction. The latter is a Software-as-a-Service application which allows one to visualize the instructions and the control flow of a program.

While the notion of computable function over the natural numbers is universally accepted, its counterpart over continuous spaces, such as the real line, is subject to discussion. The wide range of possible formalizations partly has its origin in the diversity of structures continuous spaces can be endowed with: e.g. the real line can be seen as a topological space, a measure space, a field, a vector space, a manifold, etc., depending on the particular problem one is concerned with. It happens that the topological structure of the set of real numbers is usually implicitly taken as a reference for the theory of computable functions.

On the other hand, we are interested in the analysis of dynamical systems from the computability point of view. It happens that the probabilistic framework is of much interest to understand the behavior of dynamical systems, as it enables one to distinguish physically relevant features of such systems, providing at the same time a way to understand robustness to noise. In , Mathieu Hoyrup, together with Peter Gács and Cristóbal Rojas, fully characterize the algorithmic effectivity of a natural class of properties arising naturally in dynamical systems.

As a result, restricting to a topological approach is somewhat limitative, and we are interested in a theory of computable functions that would fit well with probabilities. We carry out such a development in . Here the algorithmic theory of randomness, initiated by Martin-Löf in 1966 , comes into play. This theory offers a way to distinguish, in a probability space, the elements that are plausible w.r.t. the probability measure put on the space: the *random* elements. This theory is already at the intersection between probability and computability. In , Mathieu Hoyrup and Cristóbal Rojas show that it gives a powerful and elegant way of handling computability in a probabilistic context. They present applications of this framework in .

Olivier Bournez, Walid Gomaa, and Emmanuel Hainry presented in a framework that uses approximation to characterize both computability and complexity classes of functions from recursive analysis. This work provides an algebraic characterization of the polynomial-time computable functions in the sense of Ko, and also extends the techniques introduced in for comparing discrete models with continuous models.

In the last few years, a significant amount of work has been done to propose correctness proof methods for rewriting-based programming. For termination and sufficient completeness, for instance, various proof techniques are now available for cases where the reduction relation is enriched by equations or conditions, or where it is applied with particular strategies. Nevertheless, there is still a lack of techniques, for example for certain strategies, or for weak properties, i.e., properties that are not verified on every computation branch of the reduction relation. The latter properties are interesting since, in practice, programs do not always satisfy the properties in their strong sense.

For several years, we have been trying to answer the above problem by developing an induction-based proof approach. For the problem of strategies, specific procedures were given for proving termination under the innermost, outermost and local strategies , , . We then extracted the common mechanisms of these procedures to propose a simpler and more general framework, parametrized by the strategy . We also proposed an instance of this mechanism for priority rewriting, for which there was no specific termination proof method until now .

For the problem of weak properties, our technique was applied to weak termination under the innermost strategy , and to C-reducibility, a weak form of sufficient completeness that we have defined as the existence of a constructor form on at least one derivation branch from every term . We have continued the generalization of our approach for the proof of weak properties. Our inductive technique consists in developing proof trees from patterns representing ground terms, by abstracting the subterms on which induction can be applied, and by narrowing. Thanks to a lifting mechanism, the proof trees model the rewriting trees on which the properties to be proved are defined. For weak properties, the choice of the narrowing branches of a term u is crucial. For weak termination, it is sufficient to consider a set of branches representing at least one rewriting step for every reducible ground instance of u. For C-reducibility, the set of narrowing branches has to be covering, i.e., it has to represent at least one reduction step for every ground instance of u. A new definition of narrowing has been proposed to integrate these conditions. The correctness proof of the approach has also been factorized, highlighting the common and the specific characteristics of both properties .

Whatever the property to be proved, the above inductive technique relies on the notion of reducibility on ground terms. We have characterized how to model the reducibility and irreducibility of rewriting on ground terms using equational and disequational constraints. We have shown in particular that innermost (ir)reducibility can be modeled with a particular narrowing relation, and that the equational and disequational constraints are issued from the most general unifiers of this narrowing relation. We then proposed a proof of an innermost lifting lemma using this (dis)equation-based characterization .

The goal of implicit computational complexity is to give ontogenetic models of computational complexity. We follow two lines of research. The first line is more theoretical and is related to the initial ramified recursion theory due to Leivant and Marion, and to the light linear logic due to Girard. The second is more practical and is related to interpretation methods, quasi-interpretations and sup-interpretations, which provide an upper bound on some of the computational resources necessary for a program execution. This approach seems to have practical interest, and we have developed a software tool, Crocus, that automatically infers complexity upper bounds of functional programs.

Guillaume Bonfante, together with Florian Deloup and Antoine Henrot, has reconsidered the use of reals in the context of the interpretation of programs in . The main issue is that the ordering over the reals is not well-founded and, consequently, bounds on the length of computations are lost. Actually, bounds on the size of terms are also lost. The contribution is to show that these bounds can be recovered when one uses interpretations defined by the function max and polynomials. This comes from the Positivstellensatz, a deep result of algebraic geometry.

The sup-interpretation method is proposed as a new tool to control the memory resources of first-order functional programs with pattern matching by static analysis . Basically, a sup-interpretation provides an upper bound on the size of function outputs. A criterion, which can be applied to terminating as well as non-terminating programs, is developed in order to polynomially bound the stack frame size. Sup-interpretations were proposed by Jean-Yves Marion and Romain Péchoux. Sup-interpretations may be used in various programming settings, like object-oriented programming .

Olivier Bournez, Walid Gomaa and Emmanuel Hainry investigated the notion of implicit complexity in the framework of recursive analysis. In , a characterization of polynomial-time computable functions was presented, as well as a framework to extend classical complexity results to the real field. This characterization is the first implicit characterization of this class of functions, and as such opens the field of implicit complexity in recursive analysis.

We considered a model of interdomain routing, proposed by a partner of the SOGEA project, that is based on the well-known BGP protocol. We proved that the model has no pure Nash equilibrium, even for 4 nodes. The convergence of the fictitious play dynamics for the corresponding network has been established for some specific cases.

We reviewed the different models of dynamism in literature in game theory, in particular models from evolutionary game theory. We presented some ways to use them to realize distributed computations in . Considered models are particular continuous time models, and hence are also covered by the survey . Octave Boussaton, who has now completed his PhD, is currently working on the theory of learning equilibria, in particular in Wardrop routing networks. The proof of the convergence of a specific learning strategy has been established for some networks. The result has been presented in .

We analyzed the behavior of providers in a specific scenario, mainly by considering the simple but not simplistic case of one source and one destination. The analysis of the centralized transit price negotiation problem shows that the only non-cooperative equilibrium is the one where the lowest-cost provider takes the whole market. The prospect of the game being repeated makes cooperation possible while maintaining higher prices. Then, we considered the system in a distributed framework. We simulated the behavior of the distributed system under a simple price adjustment strategy and analyzed whether it matches the theoretical results or not. This work is published in .

Moreover, we presented both a game-theoretic and an algorithmic approach for solving the routing problem of choosing the best path in a path-based protocol such as BGP. We proposed a distributed learning algorithm which is able to learn Nash equilibria in a Wardrop network. This work was published in Parallel Processing Letters, and a newer result on the time of convergence was published in . The complexity of the method depends on the total number of paths, which can become unsustainable if the network is too large. We subsequently developed another approach that narrows down the complexity of the method, which now depends on the number of nodes in the graph representing the network. This work has not been published yet; it appears in Octave Boussaton's PhD and will soon lead to a submission.

Morphological analysis is a new method of malware detection that we propose . It is based on the signature recognition of abstractions of the control flow graphs of binaries. We provide a fast rooted directed acyclic graph pattern matching algorithm based on tree automata. Compared to other (industrial) approaches, morphological-analysis-based detection engines have at least two advantages. First, they are quite robust with respect to malware code mutation. Second, signatures may be extracted automatically. There is a running implementation that we currently test on thousands of samples coming from honeypots and from the telescope operated by the Madynes EPI in the context of the LHS. This detector may run in two modes: (i) it analyses binaries statically, and (ii) it analyses binaries dynamically using an instrumentation method based on Pin and related to our second main software development, TraceSurfer.
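The matching step can be pictured as a bottom-up labelling of the abstracted control flow graph, in the spirit of a run of a tree automaton. The following simplified sketch (with invented opcode classes, not the registered engine) memoizes labels so that each DAG node is processed only once:

```python
# A rooted DAG: node -> (opcode_class, tuple of successor nodes).
# Opcode classes are purely illustrative ('call', 'jmp', 'ret').
GRAPH = {
    "a": ("call", ("b",)),
    "b": ("jmp",  ("c", "d")),
    "c": ("ret",  ()),
    "d": ("ret",  ()),
}

def fingerprint(graph, node, _memo=None):
    """Canonical label of the sub-DAG rooted at node. Memoization visits each
    DAG node once, which is what keeps matching fast on shared structure."""
    if _memo is None:
        _memo = {}
    if node not in _memo:
        op, succs = graph[node]
        _memo[node] = (op,) + tuple(fingerprint(graph, s, _memo) for s in succs)
    return _memo[node]

# A 'signature' is the label of a known malicious sub-DAG (invented here).
signature = ("jmp", ("ret",), ("ret",))
hits = [n for n in GRAPH if fingerprint(GRAPH, n) == signature]
print(hits)  # ['b']
```

The real engine works on abstracted control flow graphs of binaries and supports graph mutation robustness; this sketch only conveys the bottom-up, automaton-like flavour of the matching.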

Most malware is nowadays packed in order to protect itself against analysis performed by computers or by humans. In order to cope with packing techniques, we began studying self-modifying programs from both a theoretical and a practical perspective; packers are a particular case of self-modifying programs. We built a tool, TraceSurfer , , , based on instruction-level trace analysis and a theoretical framework. In order to model self-modifying programs, we introduce the notion of pseudo-programs, for which the program text is not fixed w.r.t. the semantics. We then develop a type system which collects information at runtime (like tainting), but which also has the ability to predict information-flow properties (like traditional type systems). This leads us to explain a self-modifying program execution as a sequence of code waves. Next, we study non-interference-like properties. Then, we use this typing information to define behavior patterns, which give a high-level description of decrypted or scrambled code, for example. With these behavior patterns we are able to classify binaries, to detect suspect runs, and to design security policies. TraceSurfer has been tested on thousands of binaries in large-scale experiments using the cluster of the LHS.
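The wave construction can be sketched on an idealized write/execute trace. In this toy model (ours, not TraceSurfer's exact semantics), the original code is wave 0, and executing an address written by wave k puts the execution in wave k+1:

```python
def max_wave(trace):
    """trace: list of ('write', addr) / ('exec', addr) events, in order.
    Returns the deepest code wave reached (0 for a non-self-modifying run)."""
    writer = {}    # addr -> wave of the instruction that last wrote it
    cur = 0        # wave of the currently executing instruction
    deepest = 0
    for event, addr in trace:
        if event == 'exec':
            cur = writer.get(addr, -1) + 1   # never-written code is wave 0
            deepest = max(deepest, cur)
        else:  # 'write'
            writer[addr] = cur
    return deepest

# A two-layer packer: original code writes layer 1, which writes layer 2.
trace = [('exec', 0), ('write', 100), ('exec', 100), ('write', 200), ('exec', 200)]
print(max_wave(trace))  # 2
```

On real binaries the events come from instruction-level instrumentation, and the wave structure is what exposes unpacking loops and layered protections.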

On a more theoretical level, Guillaume Bonfante, Jean-Yves Marion and Daniel Reynaud have proposed a new formalization of the notion of self-rewriting. To hide themselves from antivirus software, malware make heavy use of self-modification. We provide an operational semantics for an abstract programming language, and we prove that both compilations can be performed: from non-self-modifying programs to self-modifying programs, and conversely from self-modifying programs to non-self-modifying programs. These compilation procedures are based on two theoretical constructions: the Rogers isomorphism and the Futamura projection.
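
One direction of this correspondence can be given a toy illustration (ours, not the paper's formalism): a self-modifying program is memory that holds its own instructions, and a fixed interpreter over that memory, whose own code never changes, simulates it. This is also the intuition behind compiling self-modification away by specializing the interpreter, in the sense of the Futamura projection.

```python
# Toy illustration (hypothetical instruction set, not the cited construction).
# Instructions: ("set", addr, value) stores value at addr, possibly over an
# instruction (self-modification); ("halt", addr) returns mem[addr].

def run(mem, pc=0):
    """Fixed, non-self-modifying interpreter over mutable program memory."""
    while True:
        op = mem[pc]
        if op[0] == "halt":
            return mem[op[1]]
        if op[0] == "set":
            mem[op[1]] = op[2]      # may overwrite an instruction
        pc += 1

# A program that rewrites its own halt instruction before reaching it:
prog = [("set", 2, ("halt", 3)),   # overwrite the instruction at index 2
        ("set", 3, 42),            # store data at index 3
        ("halt", 0),               # never executed as originally written
        0]
run(prog)   # 42
```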

We work on behavioral analysis in order to detect malware; the idea is to detect a behavior such as that of a keylogger. Again, our approach is to build a sound theory on which to base solutions. Lastly, we also propose an attack on electronic voting based on web browsers.

In 2009, we pursued the construction of the high security lab (LHS) in order to carry out computer security experiments on a safe platform. The EPI Madynes is working with us on this project. There are currently two operational modules: a telescope and a "baby" cluster. There will be two equipped and secure rooms inside the Loria building devoted to the LHS.

CARTE is part of the “Sécurité et Sûreté des Systèmes (SSS)” theme of the “contrat de plan État-Région”. Olivier Bournez is the head of the research operation TATA. Jean-Yves Marion is the co-head of the research operation LHS.

Jean-Yves Marion is the head of the high security lab (laboratoire de haute sécurité - LHS). CARTE members are fully involved in this project.

The three-year ANR project “COMPLICE” began in January 2009. It deals with implicit computational complexity.

We participate in an 18-month research project, CyS, on cybercriminality and smartphones, with the Technology University of Troyes (UTT) and the IRCGN (Institut de Recherche Criminelle de la Gendarmerie Nationale, Rosny-sous-Bois).

We are funded for a project related to computer virology by the INPL and the Région Lorraine.

Jean-Yves Marion is a member of the steering committee of the International Workshop on Logic and Computational Complexity (LCC/ICC).

Équipe Associée ComputR. The Équipe Associée ComputR began in January 2009. It involves members of the CARTE team, of the Laboratoire d'Informatique de l'École Polytechnique, and of the Instituto de Telecomunicações, Instituto Superior Técnico, Lisbon. It deals with computation in a continuous context. The head of this project is Emmanuel Hainry.

http://

José Fernandez from the École Polytechnique of Montréal was invited by Jean-Yves Marion.

Marco Gaboardi from Torino University was also invited.

Guillaume Bonfante: member of the engineering part of the Comipers hiring committee at LORIA.

Jean-Yves Marion:

member of the “équipe de direction” and associate director of Loria

member of CNU, section 27

Expert for AERES (LIP, December 2009)

elected to the scientific council of INPL in July 2003 and member of the board

Expert PES (Prime d'excellence scientifique), section 27

member of the board of GIS 3SGS

member of the hiring committee at ESIAL - UHP

Guillaume Bonfante organised PCC at Loria.

Jean-Yves Marion was co-chair of STACS in February 2009.

Jean-Yves Marion is a member of the program committees of Malware, EICAR and FOPARA.

Isabelle Gnaedig is coordinator of the course on “Design of Safe Software” at ESIAL, 3rd year. In this context, she also gave courses and supervised practical sessions on “Rule-based Programming”.

Guillaume Bonfante is teaching (full service) at the École des Mines de Nancy.

Emmanuel Hainry is teaching (full service) at the Institut Universitaire de Technologie Nancy Brabois (Nancy Université, Université Henri Poincaré).

Romain Péchoux is teaching (full service) at Université Nancy 2.

Jean-Yves Marion is supervising the thesis work of Philippe Beaucamps from November 2007.

Jean-Yves Marion is supervising the thesis work of Daniel Reynaud from November 2007.

Jean-Yves Marion and José Fernandez (Ecole Polytechnique of Montréal) are supervising the thesis work of Joan Calvet from September 2009.

Emmanuel Hainry has been supervising two postdoctoral fellows, Walid Gomaa and Mathieu Hoyrup.

Isabelle Gnaedig: ESIAL admission committee.

Jean-Yves Marion is member of the jury of habilitation of Véronique Cortier, Ammar Oulamara and Radu State.

Here is the list of talks given by members of the team during the year 2009.

Walid Gomaa:

*Algebraic Characterization of Computable and Complexity-Theoretic Analysis*. “New Worlds of Computation” workshop, Orléans, January 12, 2009.

*A Survey of Recursive Analysis and Moore's Notion of Real Computation*. “Physics and Computation” satellite workshop of UC09, Ponta Delgada, September 10, 2009.

Emmanuel Hainry:

*Computing over the reals, computing with the reals*. SIESTE (Student's seminar of the École Normale Supérieure de Lyon) on December 2, 2008.

*Decidability in continuous time dynamical systems*. “New Worlds of Computation” workshop, Orléans, January 12, 2009.

*Implicit complexity in recursive analysis*. 3rd meeting of the Complice ANR Project, Nancy, October 23, 2009.

Mathieu Hoyrup:

*Effective probability theory*. Invited talk in the workshop “New Interactions between Analysis, Topology and Computability”, Birmingham, January 9, 2009.

*Approches algorithmiques des probabilités*. “Séminaire de Probabilités de l'Institut Élie Cartan”, Nancy, April 30, 2009.

*Layerwise computability*. Invited talk in the 4th conference on “Logic, Computability and Randomness”, CIRM Marseille, June 30, 2009.

*Dynamical systems: unpredictability vs uncomputability*. Invited talk in the workshop “Physics and Computation”, satellite of UC09, Ponta Delgada, September 10, 2009.

Jean-Yves Marion

*NICS*. Invited talk at Torino University, with a one-week stay.