DistribCom addresses distributed and iterative algorithms for both network & service management and digital communications.

The first and main focus of DistribCom is on
*algorithms for distributed management.* Today, research on network and service management focuses mainly on issues of software architecture and infrastructure deployment. However, management also involves algorithmic problems such as fault diagnosis and alarm correlation, provisioning and optimization, negotiation for QoS, and security. DistribCom develops the foundations supporting such algorithms: fundamentals of distributed observation and supervision of systems involving concurrency.

Our algorithms are model-based. For obvious reasons of complexity, such models cannot be built by hand. Therefore we also address the novel topic of *self-modeling,* i.e., the automatic construction of models, both structural and behavioral. For this we use techniques from computer and software engineering (e.g., genericity and reflective infrastructures) as well as techniques from statistics and control (e.g., Bayesian learning techniques to infer probabilistic parameters from observations).

Some of the iterative techniques we develop are also useful for handling *joint algorithms of signal processing and coding in digital communications.* We develop such studies in the context of Multiple Input Multiple Output (MIMO) or Multi User digital communications. As antennas play a central role in the latter sector, we complement this line of research by investigating estimation, detection, and identification techniques related to antenna processing.

Our research topics are currently structured as follows: 1/ fundamentals of distributed observation and supervision of concurrent systems; 2/ self-modeling; 3/ algorithms for distributed management of telecommunications systems and services; 4/ joint algorithms of signal processing and coding in digital communications; 5/ estimation with parsimony in signal processing. The team will be reorganized in 2006, when the activities related to signal processing for digital communications will move to another team.

Our main industrial ties are with Alcatel and France-Telecom, on the topic of network and service management. We participate jointly with them in the SWAN RNRT project on self-management of networks and Web services. On a related topic, we cooperate with France-Telecom on the technology of scenarios.

Blaise Genest was recruited at CNRS this year, and joined our team only this fall. Therefore his activities are not included here.

For Finite State Machines (FSMs), a large body of theory has been developed to address problems such as observation (the inference of hidden state trajectories from incomplete observations), control, diagnosis, and learning. These are difficult problems, even for simple models such as FSMs. One of the research tracks of DistribCom consists in extending such theories to distributed systems involving concurrency, i.e., systems in which both time and states are local, not global. For such systems, even very basic concepts such as ``trajectories'' or ``executions'' need to be deeply revisited. Computer scientists have long recognized this topic of concurrent and distributed systems as a central one. In this section, we briefly introduce the reader to the models of scenarios, event structures, nets, languages of scenarios, graph grammars, and their variants.

The simplest concept related to concurrency is that of a finite execution of a distributed machine. The *scenario* shown in the figure is an example. It shows the life-time (from top to bottom) of four processes (or instances). The instances can exchange asynchronous messages. In this example, some local variables can be tested and assigned. In this model, events are totally ordered for each instance, but only partially ordered between different instances. Thus, time is local, not global. The natural concept of state is local too (i.e., attached to individual instances). Global states can be defined, but they require nontrivial algorithms for their distributed construction. Finite scenarios introduce the two key concepts of *causality* and *concurrency.* The causality relation is a partial order. In the figure, the reception of AU_AIS is causally related to the sending of MS_AIS by the rs_TTP, while it is concurrent with the receipt of MS_AIS by the alarm manager.
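The causal order of a scenario can be sketched in a few lines. The following is a minimal illustration (instance and event names are invented, not those of the figure): causality is the transitive closure of the per-instance order together with the send-before-receive order of messages, and two events are concurrent when neither causes the other.

```python
# Toy scenario: three instances P, Q, R with invented events; a message
# send precedes its receive; per-instance events are totally ordered.
lifelines = {"P": ["p1", "p2"], "Q": ["q1", "q2"], "R": ["r1"]}
messages = [("p1", "q1"), ("q2", "r1")]      # (send, receive) pairs

edges = set(messages)
for evs in lifelines.values():
    edges |= set(zip(evs, evs[1:]))          # local top-to-bottom order

events = [e for evs in lifelines.values() for e in evs]
leq = {(e, e) for e in events} | edges       # reflexive closure, then transitive:
changed = True
while changed:
    changed = False
    for a, b in list(leq):
        for c, d in list(leq):
            if b == c and (a, d) not in leq:
                leq.add((a, d)); changed = True

def concurrent(a, b):                        # neither causally related
    return (a, b) not in leq and (b, a) not in leq

print(("p1", "r1") in leq)     # True: p1 -> q1 -> q2 -> r1
print(concurrent("p2", "q1"))  # True: no causal path either way
```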

Scenarios have been informally used by telecom engineers for a long time. Their formalization was introduced by the work done in the framework of the ITU and the OMG on High-level Message Sequence Charts and on UML Sequence Diagrams over the last ten years. This in particular made it possible to formally define infinite scenarios, and to enhance them with variables, guards, etc. Today, scenarios are routinely offered by UML and related systems and software modeling tools; the figure above shows such an example.

The next step is to model sets of finite executions of a distributed machine. *Event structures* were invented by Glynn Winskel and co-authors in 1980. This data structure collects all the executions by superimposing shared prefixes. The figure shows an example.

The topmost diagram shows an HMSC, i.e., an automaton whose transitions are labeled by basic scenarios. First regard the scenarios as abstract labels. The set of all executions of this automaton is then shown in the bottom left diagram, in the form of an execution tree. For sequential machines, execution trees collect all the executions by superimposing shared prefixes.

Now, the right diagram shows the ``white box'' version of the former, in which the concatenation of the successive basic scenarios has been performed by chaining them instance by instance. The result is an *event structure,* i.e., a branching structure consisting of events related by a *causality* relation (depicted by directed arrows) and a *conflict* relation (depicted by a non-directed arc labeled by a # symbol). Events that are neither causally related nor in conflict are called *concurrent.* Concurrency models the ``parallel progress'' of components.

Categories of event structures have been defined, with associated morphisms, products, and co-products. Products and co-products formalize the concepts of parallel composition and ``union'' of event structures, respectively. This provides the needed apparatus for composing and projecting (or abstracting) systems. Event structures have mostly been used to give the semantics of various formalisms or languages, such as Petri nets, CCS, CSP, etc. We in DistribCom make a nonstandard use of them, e.g., as a structure to compute and express the solutions of observation or diagnosis problems for concurrent systems.
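The three relations of an event structure can be illustrated by a toy sketch (invented events; this is not the categorical machinery above): causality is a partial order, conflict is symmetric and inherited through causality, and concurrency is the absence of both.

```python
# Toy prime event structure: a causes b and c, b causes e, b and c are in
# minimal conflict (a choice after a), and d is independent of everything.
direct = {("a", "b"), ("a", "c"), ("b", "e")}
min_conflict = {("b", "c")}
events = {"a", "b", "c", "d", "e"}

leq = {(x, x) for x in events} | direct
changed = True
while changed:                        # transitive closure of causality
    changed = False
    for x, y in list(leq):
        for u, v in list(leq):
            if y == u and (x, v) not in leq:
                leq.add((x, v)); changed = True

# Conflict is symmetric and inherited: x # y and y <= y2 imply x # y2.
conflict = set()
for x, y in min_conflict | {(y, x) for x, y in min_conflict}:
    for u, v in leq:
        if u == y:
            conflict.add((x, v)); conflict.add((v, x))

def concurrent(x, y):                 # neither ordered nor in conflict
    return x != y and (x, y) not in leq and (y, x) not in leq \
        and (x, y) not in conflict

print(("c", "e") in conflict)  # True: c # b and b <= e, so c # e
print(concurrent("d", "b"))    # True
```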

The next step is to have finite representations of systems having possibly infinite executions. In DistribCom, we use two such formalisms: *Petri nets* and *languages of scenarios* such as High-level Message Sequence Charts (HMSCs). Petri nets are well known, at least in their basic form, so we do not introduce them here. We use so-called *safe* Petri nets, in which markings are boolean (each place holds either 0 or 1 token); we also use variants, see below. Languages of scenarios are simply obtained as illustrated in the figure: 1/ equip basic scenarios with a concatenation operation; 2/ consider an automaton whose transitions are labeled with basic scenarios. Executions of Petri nets and HMSCs can be represented with concurrency in the form of event structures. We have shown this for HMSCs in the figure above, and it is obtained in a similar way for Petri nets.
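The token game of a safe net fits in a few lines. A minimal sketch (net and names invented for illustration): a marking is the set of marked places, and a transition fires by consuming its preset and producing its postset.

```python
# Toy safe Petri net: t1 forks one token into two concurrent tokens;
# t2 joins them back (a synchronization).
preset  = {"t1": {"p1"}, "t2": {"p2", "p3"}}
postset = {"t1": {"p2", "p3"}, "t2": {"p1"}}

def enabled(t, marking):
    return preset[t] <= marking              # every input place is marked

def fire(t, marking):
    assert enabled(t, marking)
    assert not (postset[t] & (marking - preset[t]))  # safety: no double token
    return (marking - preset[t]) | postset[t]

m = {"p1"}
m = fire("t1", m)
print(sorted(m))     # ['p2', 'p3']: two concurrent tokens
m = fire("t2", m)
print(sorted(m))     # ['p1']
```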

Two extensions of the basic concepts of nets or scenario languages are useful for us:

Nets or scenario languages enriched with variables, actions, and guards. These are useful to model general concurrent and distributed dynamical systems in which a certain discrete abstraction of the control is represented by means of a net or a scenario language. Manipulating such *symbolic nets* requires abstraction techniques. Time Petri nets and networks of timed automata are particular cases of symbolic nets.

Probabilistic nets or event structures. Whereas a huge literature exists on stochastic Petri nets and stochastic process algebras (in computer science), randomizing *concurrent models,* i.e., models whose trajectories are concurrent (partially ordered), not sequential, has only been addressed since the beginning of the 21st century. We have contributed to this new area of research.

The last and perhaps most important issue, for our applications, is the handling of dynamic changes in the system's model. This is motivated by the constant use of dynamic reconfigurations in management systems. Extensions of net models have been proposed to capture this, for example the *dynamic nets* of Vladimiro Sassone; for the moment, such models lack a suitable theory of unfoldings. A relevant alternative is the class of *graph grammars*. Graph grammars transform graphs by means of a finite set of rules.
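A single rule application can be sketched as follows. This is a toy reconfiguration example with invented labels, not a full graph-grammar engine: the host graph is a set of labeled edges, and a rule matches a pattern and replaces it.

```python
# Host graph as directed labeled edges (src, label, dst); names invented.
host = {("client", "uses", "server1"),
        ("server1", "status", "failed"),
        ("server2", "status", "ok")}

def rewire_rule(g):
    """One rule application: a client using a failed server is rewired
    to some server whose status is ok."""
    for (c, lbl, s) in sorted(g):
        if lbl == "uses" and (s, "status", "failed") in g:
            for (s2, lbl2, st) in sorted(g):
                if lbl2 == "status" and st == "ok":
                    return (g - {(c, "uses", s)}) | {(c, "uses", s2)}
    return g                      # no match: graph unchanged

host = rewire_rule(host)
print(("client", "uses", "server2") in host)  # True: client rewired
```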

Graph grammars have been equipped with a rich theory of unfoldings, and more generally have received much attention from a theoretical viewpoint. While there are numerous modeling applications in biology, chemistry, computer science, etc., we at DistribCom test their use for distributed network management algorithms for systems subject to reconfiguration.

Due to the ever-growing number of users and the demand for transmission of huge amounts of data (for new services), high-rate communication is of great importance today (e.g., for multimedia wireless communications, fixed wireless loops, LANs, and more). However, the physical limitations of wireless channels, such as scarce channel capacity and frequency-selective fading, make high-rate transmission particularly challenging. Facing this challenge and achieving higher data rates therefore require a major research effort.

The past decade has witnessed notable advances toward reliable communications, and signal processing has become a mature but also very specialized field. To meet future demands in wireless communications, recent trends show that digital communications at the physical layer should evolve from a traditional approach, where the different functions of modulation, coding, and equalization are considered separately, to an integrated systems approach.

Traditionally these problems have been solved separately, mainly for complexity reasons: equalization to deal with multipath channels, channel coding to better utilize the channel capacity and thus lower the required signal-to-noise ratios (SNRs), and multiple access to let the users transmit simultaneously. Although this disjoint approach ensures lower complexity, it is sub-optimal, since the different elements may be optimized in possibly antagonistic ways. For instance, most conventional equalizers assume that all possible channel input sequences are equally likely; this, however, is not true in the presence of channel coding, which is used in most wireless systems.

Rather than considering each problem in isolation, it is therefore more appropriate to consider them jointly. However, the complexity of the optimal solution (in the Maximum Likelihood sense) increases exponentially with the system size. *Turbo* or *iterative* techniques were proposed to address the resulting performance/complexity trade-off. The idea stems from turbo-codes, which aimed at getting the performance of a code of doubled length while only doubling, not squaring, the complexity. Berrou *et al.* showed that turbo-codes can perform within 0.5 dB of the Shannon capacity (at a bit error rate of 10^{-5}). Turbo techniques can also be successfully applied to other joint problems (equalization and decoding, multiuser detection and decoding, source/channel coding). In that case, the joint design of different functions can be seen as the distributed coordination between different algorithms.

Moreover, it has been shown that the turbo algorithm of Berrou et al. (1993) is an instance of Pearl's (1982) belief propagation algorithm, well known in the artificial intelligence community. This made it possible to cast all turbo-iterative schemes into a unified framework, in which the transmitter is naturally described by graphs (factor graphs, or bipartite graphs) and the data are estimated at the receiver by message passing (the sum-product or max-product algorithm).

This connection explains the good performance of turbo-codes only partially: since the graph of a compound system has many loops, belief propagation yields only approximate maximum likelihood solutions. It is thus fundamental to create tools to analyze such iterative algorithms in order to design the system (transmitter and receiver). One of the most promising approaches for analyzing iterative decoding schemes employs the notion of density evolution, where the density of the exchanged soft messages is tracked. This approach was developed for a particular type of codes, the Low Density Parity Check (LDPC) codes, for which the graph contains variables of dimension 2 only. Instead of the whole density, ten Brink proposed to track the mutual information, which can therefore be applied to more general graphs. We have shown that mutual information can also be used to design the transmitter.
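On a loop-free graph, the sum-product algorithm computes exact marginals. A minimal sketch (invented numbers; a single parity-check factor over three bits, which is a tree):

```python
from itertools import product

# Channel posteriors for three bits (invented) and one parity check
# x1 ^ x2 ^ x3 = 0; with a single factor, sum-product is exact.
priors = [{0: 0.1, 1: 0.9}, {0: 0.2, 1: 0.8}, {0: 0.4, 1: 0.6}]

def factor_to_var(i):
    """Message from the parity factor to bit i: sum, over assignments of
    the other two bits, of the product of their incoming messages."""
    msg = {0: 0.0, 1: 0.0}
    others = [j for j in range(3) if j != i]
    for xa, xb in product((0, 1), repeat=2):
        msg[xa ^ xb] += priors[others[0]][xa] * priors[others[1]][xb]
    return msg

def posterior(i):
    m = factor_to_var(i)
    unnorm = {b: priors[i][b] * m[b] for b in (0, 1)}
    z = unnorm[0] + unnorm[1]
    return {b: v / z for b, v in unnorm.items()}

print(round(posterior(0)[1], 4))   # 0.8761
```

With loops, the same message updates would be iterated until (approximate) convergence, which is the turbo setting described above.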

Traditional statistical modeling has devoted a great deal of effort to first identifying the model structure, including the number of parameters involved. This has led to the rich body of work around the Akaike Information Criterion (AIC) and its many variants (BIC, MDL, etc.).

Recent studies have led to novel alternative approaches in which sparsity in modeling is achieved without the need for a preliminary structure estimation procedure, which is always difficult to carry out.

This can be seen as a Bayesian or inverse-problem approach in which the classical maximum likelihood criterion is replaced by a compound criterion that combines the fit of the model to the observations with prior information or sparsity requirements.

An interesting approach in this philosophy consists in combining the l_{2} and l_{1} norms in the criterion, where the l_{2} part measures the fit of the model to the observations (e.g., the maximum likelihood criterion in the presence of Gaussian noise) and the l_{1} part ensures parsimony of the representation. In the case of linear parameterizations the criterion remains convex, and problems with a moderate to high number of unknowns are reliably solved with standard programs, such as linear or quadratic programming, from well-established scientific program libraries. While the algorithmic part is easy, the analysis is in general quite difficult, and only preliminary results in quite simple situations are at hand so far. Even in the most basic identification schemes where the traditional methods require preliminary structure estimation (model order detection or selection of regressors, for instance), comparisons of performance seem to be out of reach.
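The l_{2} + l_{1} criterion can also be minimized by simple iterative soft thresholding (ISTA). The following sketch uses invented dimensions and a synthetic sparse signal; it is an illustration of the criterion, not the team's code:

```python
import numpy as np

# Synthetic sparse recovery: b = A x_true with 3 non-zero entries.
rng = np.random.default_rng(0)
n, m = 20, 50
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[[3, 17, 31]] = [1.5, -2.0, 1.0]
b = A @ x_true

lam = 0.01                                # l1 weight in ||Ax-b||^2 + lam*||x||_1
Lip = 2 * np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(20000):
    grad = 2 * A.T @ (A @ x - b)          # gradient of the quadratic part
    z = x - grad / Lip
    x = np.sign(z) * np.maximum(np.abs(z) - lam / Lip, 0.0)  # soft threshold

print(np.flatnonzero(np.abs(x) > 0.5))    # large entries sit on the true support
```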

Typical questions amenable to a solution are of the following type: given a matrix A of dimension (n, m) with m > n and a vector b = Ax, find sufficient conditions for b to have a unique sparsest representation as a linear combination of columns of A. Answers to this question are known for arbitrary A with unit Euclidean norm column vectors, both for the true sparseness ||x||_{0} (where ||x||_{0} denotes the number of non-zero components of x) and for the so-called l_{1}-sparseness ||x||_{1} (where ||x||_{1} denotes the l_{1} norm of x). If b, or more precisely if b = Ax_{o} with x_{o}, satisfies this condition, then the solution of the linear program

    min ||x||_{1}  subject to  Ax = b

recovers x_{o}, and the solution of the quadratic program

    min ||Ax - b||_{2}^{2} + λ ||x||_{1}

approaches it as λ tends to 0.
``Synchronous Digital Hierarchy,'' a synchronous transport protocol for high-rate transmissions, generally on optical links; the European/ITU-T counterpart of SONET.

``Wavelength Division Multiplexing,'' the modulation of several lasers, with different wavelengths, onto the same fiber, in order to increase the transmission rate.

``(Generalized) Multi Protocol Label Switching,'' a technique to create dedicated connections over any type of network/protocol, either connection oriented (like SDH), or packet oriented (like IP), in order to guarantee a level of quality of service (QoS).

``Service Level Agreement/Specification,'' a contract defining the QoS that a service provider has to guarantee to a user. The tendency is to ``program'' networks at the ``business level,'' by specifying services/policies, and not technical parameters. The latter have to adjust automatically to provide the requested service levels.

A service/application provided on the web. The trend is to gather elementary services (e.g., weather forecast, plane and hotel reservation, banking, ...) into larger services (e.g., a travel agency).

Telecommunications have grown from a basic technology of networks and transport to a much more complex jungle of networks, services, and applications. This motivates a strong research effort towards ``autonomic communications'': one would like to program networks at the service level (sometimes called the business level, since it directly involves contracts with customers), and let the network adapt itself in order to ensure a given QoS, isolate and repair failures, etc. This tendency appears, for example, in policy-based management, or in the quest for ``self-XX'' functionalities (self-configuration, self-monitoring, self-healing, etc.). One of our objectives is to give some content to these hidden automatic tasks.

In the higher layers, at the level of services, the same search for flexibility motivates the development of tools to rapidly assemble web services into larger services. Here again, the problems relate to automatic (re)configuration capabilities, to the on-line monitoring of QoS, to the isolation of failures or QoS violations, etc.

These problems have several common features. First, they involve concurrent systems, *i.e.* systems where several things can happen at the same time. Second, such systems are built in a modular way, by combining elementary components into large connected structures. Third, these systems have a dynamic structure: reconfigurations and connections/disconnections of new components or clients are part of the normal activity, and should not require that monitoring algorithms be reset or modified each time the system changes. Fourth, one is partly able to position observation points in the system, and to decide which phenomena are relevant to observe. And finally, the size and heterogeneity of these systems forbid a centralized monitoring architecture. This motivates developing distributed and modular approaches.

As an example of distributed monitoring, our first application was related to diagnosis issues in transport networks, i.e., the low-level layers of networks (physical, transport, and network layers). We have focused on circuit-oriented networks, such as SDH/WDM or GMPLS protocols. These systems assemble hundreds of functions and components, and the failure of one of them generally induces side-effects in many others. This phenomenon is known as ``fault propagation;'' it results in hundreds of alarms produced by the various components and collected at different locations in the network. Identifying the origins of faults from these alarms has now reached a level of complexity that prevents the traditional human analysis of alarms. Due to the size of the systems, the automatic diagnosis task cannot be done in a centralized manner, and must be solved by a network of local supervisors that coordinate their work to provide a coherent view of what happened in the network.
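The fault-propagation idea can be illustrated by a toy sketch (component names and dependencies invented; the actual algorithms are distributed and model-based): a candidate explanation is any component whose transitive effect covers all observed alarms.

```python
# Dependency graph: component -> components it depends on (invented).
depends_on = {"mux": ["laser"], "xc": ["mux"], "client": ["xc"]}

def affected(root):
    """All components transitively depending on root (root included)."""
    out, stack = {root}, [root]
    while stack:
        c = stack.pop()
        for d, deps in depends_on.items():
            if c in deps and d not in out:
                out.add(d); stack.append(d)
    return out

alarms = {"mux", "xc", "client"}       # alarms collected by supervisors
candidates = [c for c in ["laser", "mux", "xc", "client"]
              if alarms <= affected(c)]
print(candidates)   # ['laser', 'mux']: a silent laser failure also explains all alarms
```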

Currently we are addressing issues concerning dynamic services in heterogeneous networks and web services. The emphasis is on guaranteeing desired levels of QoS in situations where SLAs have to be negotiated, instantiated, and monitored in a non-local fashion: the immediate peer-to-peer contact allowing for negotiation, monitoring, and penalties is between a client and a provider, or between two neighboring providers only.

The SOFAT toolbox is a scenario manipulation toolbox. Its aim is to implement all known formal manipulations on scenarios. The toolbox implements several formal models, such as partial orders, graph grammars, and graphs, together with algorithms dedicated to these models (Tarjan's algorithm, cycle detection for graphs, Caucal's normalization for graph grammars, etc.). SOFAT is permanently updated to integrate new algorithms. It is currently used for a research contract with France Telecom, and is freely available from INRIA's website. The last update of SOFAT includes the fibered product operation.

The emerging topic of self-modeling addresses the automatic construction of sophisticated behavioral models of a system from its known or discovered architecture.

In previous years we developed an approach and a tool to automatically construct a network of automata modeling a management system. To this end, we assume that the structure of the system can be discovered (by network discovery) and that the system is composed of a pre-defined set of prototypes whose behavioral models are known. The model so obtained was then used to automatically generate the associated diagnosis algorithm. This technique is currently being tested in a real setting under a contract with the Alcatel business division on Optical Communication Systems.

This year, we have investigated several approaches to infer scenarios from distributed executions.

In complement, we studied how to obtain HMSCs by abstraction of communicating automata. This idea of automatic abstraction, from a low-level model given in terms of a network of interacting automata to a high-level message sequence chart, allows the designer to switch coherently between the local and global views of a system, and opens new perspectives in reverse model engineering. Our technique is based on a partial order semantics of synchronous parallel automata and the construction of a finite complete prefix of an event structure coding all the behaviors. A small software prototype has been implemented.

The fundamental work on on-line distributed and asynchronous diagnosis has appeared. Also, the work of Stefan Haar on asynchronous diagnosability has been written and submitted to a journal. See Activity Report 2004 for a detailed description of these subjects. Besides this, several fundamental results have been obtained this year.

Stefan Haar has continued the research on DES diagnosis via event correlation. His joint work with Zoe Abrams, Tova Milo, and Serge Abiteboul on encoding unfolding-based asynchronous diagnosis in the abstract query language *datalog* has been presented.

In pursuing his quest for factorized data structures to perform distributed algorithms, Eric Fabre has introduced so-called *trellis processes*, replacing the branching processes used so far. The trellis of a net, or its ``time-unfolding,'' is obtained by unfolding time but not conflicts in safe Petri nets, thus providing a much more compact structure. By then unfolding conflicts in a trellis, one recovers the usual unfolding of a net. These two successive types of unfoldings (on time and on conflicts) define two functors, which are right adjoints of the canonical embeddings of the respective categories. These functors preserve limits of the respective categories. In particular, products and pullbacks are preserved; both constructs are ways to handle a distributed system in a modular manner. These results allow recovering modular distributed diagnosis in a compact and elegant categorical setting.

Trellis processes generalize to concurrent systems the usual notion of the trellis of an automaton. In addition, trellis processes can be defined in a flexible, parameterized way: some conflicts can still be unfolded if one wishes. This allows a fine tuning of the data structures used in diagnosis applications. These properties are exploited by Eric Fabre for the ongoing implementation of distributed diagnosis algorithms at Alcatel, performed by Franck Wielgus and Guy-Bertrand Kamga.
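The classical trellis of an automaton, which trellis processes generalize, can be sketched as follows (a toy two-state automaton with invented names): layer t holds the states reachable in t steps, so runs of equal length that share a state are merged, rather than kept apart as in a tree.

```python
# Toy automaton: transitions (state, letter) -> next state.
trans = {("s0", "a"): "s1", ("s0", "b"): "s0",
         ("s1", "a"): "s0", ("s1", "b"): "s1"}

def trellis(init, depth):
    """Build trellis layers and edges ((state, t), letter, (state, t+1))."""
    layers, edges = [{init}], []
    for t in range(depth):
        nxt = set()
        for s in layers[-1]:
            for (q, letter), q2 in trans.items():
                if q == s:
                    edges.append(((s, t), letter, (q2, t + 1)))
                    nxt.add(q2)
        layers.append(nxt)
    return layers, edges

layers, edges = trellis("s0", 3)
print([sorted(l) for l in layers])  # [['s0'], ['s0', 's1'], ['s0', 's1'], ['s0', 's1']]
```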

Also, Eric Fabre showed that the category *Nets* of safe Petri nets (with so-called Winskel morphisms) is *complete,* meaning that all limits exist in this category. Surprisingly, this result was not known in this simple form; it could, in principle, be derived from more general approaches, but only with great effort. Consequently, all pullbacks exist. Informally, pullbacks are a way to glue two nets together by superimposing a common subnet, serving as an interface.

We have pursued two directions toward enriching basic models of concurrent and distributed systems. First, we have further extended our previous work on probabilistic models. Second, we have pursued our work on enhancing unfoldings with variables and handling these in a symbolic way.

Our work on *true concurrency probabilistic models* is joint work with our former PhD student Samy Abbes. Samy held a post-doc position at the Systems Research Center of the University of Maryland, which funded his stay; he was hosted there by John Baras. The work of this year has consisted in improving the results of Samy's thesis and submitting papers. We review our progress below and refer the reader to the 2004 activity report for the motivations of this study.

In 2000, we launched a research programme on probabilistic models of concurrent systems. This is different from stochastic Petri nets in all their existing variants, since the latter are ultimately interpreted as Markov chains, a model in which both state and time are global. It is also different from probabilistic automata and process algebras, which are in fact tightly related to Markov Decision Processes. In our case, trajectories are partial orders of events with local states, and the sample space consists of the set of maximal configurations of the unfolding of the considered net. This problem was also recently and independently considered by Hagen Völzer, and by Daniele Varacca and Glynn Winskel. In his thesis, Samy Abbes has solved an impressive number of difficult questions:

When can the probability space be constructed as a projective limit of finite probabilistic concurrent models? This allows using the elegant Kolmogorov-Prokhorov extension theorem for such a construction.

Since our models involve concurrency, can we have concurrency tightly reflected by some kind of probabilistic independence? Samy Abbes has introduced the concept of a *distributed* probability, and he has shown how to construct such probabilities.

Since we do not want to regard our probabilistic models as Markov chains, what is the proper concept of state in our case? *Branching cells* play such a role: they carry the successive, and possibly concurrent, random choices along executions. The above three sets of results are collected in two papers.

The *Markov property* and the concept of *homogeneity* (as in Markov chains) can be generalized. Samy Abbes has developed a corresponding *renewal theory*, i.e., the probabilistic study of recurrence. The resulting model is called a *Markov net.* Samy Abbes has published this result.

Last but not least, Samy Abbes has proved very deep *limit theorems.* His *Law of Large Numbers* is a deep and nontrivial generalization of the usual one. What should replace the counter N in the empirical average, since we have no concept of global time? Counting how many branching cells are traversed turns out to be the right solution. Since concurrency is allowed, it is possible that the considered system is made of two non-interacting subsystems; these can obviously run at independent speeds, which raises a difficulty since our average uses a single counter. The law of large numbers has therefore been proved for Markov nets with so-called *integrable concurrent height,* a condition stating that the intervals between synchronizations of processes must be finite in average.

In 2005, the introduction of symbolic aspects in our diagnosis method has allowed us to deal with two novel aspects of distributed systems: dynamicity and real time. In each of these extensions, the main points are the definition of a symbolic unfolding for the model we consider, and the use of the observations to constrain the model so that the unfolding contains exactly those executions of the system that are valid explanations of the observations.

We have extended the notion of unfolding of high-level Petri nets to encompass dynamicity, and we have explained how to use unfoldings of dynamic nets for diagnosis. Our approach allows the diagnoser to infer from the observations the dynamic changes that have occurred in the system. This work is useful for the management of Web-based distributed applications.

Monitoring real-time concurrent systems is a challenging task. Using time Petri nets as a true-concurrency model with a strong time semantics, we have presented a new definition of the unfolding of time Petri nets with dense time. This is a first step toward extending our diagnosis method to real-time systems. To actually perform diagnosis, we consider a model (in the form of a time Petri net) which explicitly defines the causality and concurrency relations between the observable events, constrained by time aspects. We do not require that time be observable: our method of diagnosis, based on the on-the-fly construction of an unfolding guided by the observations, allows the supervisor to infer not only the partial ordering of the events, but also their possible firing dates.

Our intern Pierre Bourhis, from École Normale Supérieure de Cachan, has worked on the different assumptions that can be made on the way time appears in the observations. Several setups are reasonable, and they change the way the observations are used to constrain the model and the unfolding.

Composite Web services consist in combining successive or concurrent calls to Web services to define a new Web service. The service so obtained therefore relies and depends on the services delivered by the sub-services it calls.

Our team started last year to address the general topic of Web services. We have decided to focus on two complementary aspects:

Web services with a strong emphasis on data. This is the subject of a cooperation with the Gemo team of Serge Abiteboul, who developed the *Active XML* formalism (AXML), a formalism for peer-to-peer manipulation of semi-structured data.

Web services with a strong emphasis on control. We investigate fundamental issues raised by Web service *orchestrations* and *choreographies,* including semantic aspects and Quality of Service (QoS) aspects. The latter topic is developed in the framework of the RNRT-SWAN project.

The language *Active XML,* or *AXML,* is an extension of XML which makes it possible to enrich documents with *service calls,* or sc's for short. These sc's point to web services that, when triggered, access other documents; this materialization of sc's produces in turn AXML code that is included in the calling document. One therefore speaks of dynamic or intensional documents; note in particular that materialization can be *total* (inserting data in XML format) or *partial* (inserting AXML code containing further sc's). AXML has been developed by the GEMO team at INRIA Futurs, headed by Serge Abiteboul; it makes it possible to set up P2P systems around repositories of AXML documents (one repository per peer).

We are currently cooperating with the GEMO team (Serge Abiteboul) and the LIAFA laboratory in Paris (Anca Muscholl) to explore the behavioral semantics of AXML in the framework of the ASAX project, see below. Our objective is to be able to ensure confluence despite distribution and asynchrony, even for documents not belonging to the so-called ``positive'' class, where confluence is ensured thanks to the absence of revision of facts.

To gain familiarity with AXML, Sandeep Gupta, during his internship from May to July 2005, explored possibilities for optimizing data exchange in AXML systems through the choice of materialization strategies (i.e., lazy vs. eager evaluation). With a similar objective, Hélia Pouyllau has implemented a prototype negotiation module in AXML, see below.
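The difference between the two strategies can be sketched in a toy model of intensional documents (invented services and document structure; this is not the AXML system): embedded service calls are triggered either all at once (eager) or only when their node is actually read (lazy).

```python
calls_made = []                 # record which services were triggered

def weather():                  # hypothetical embedded web services
    calls_made.append("weather"); return "sunny"

def rates():
    calls_made.append("rates"); return "1.16"

# An intensional document: ("sc", f) marks an embedded service call.
doc = {"trip": {"forecast": ("sc", weather), "fx": ("sc", rates)}}

def materialize_eager(node):
    """Trigger every service call in the document at once."""
    if isinstance(node, dict):
        return {k: materialize_eager(v) for k, v in node.items()}
    if isinstance(node, tuple) and node[0] == "sc":
        return node[1]()
    return node

def read_lazy(doc, path):
    """Trigger a service call only when its node is actually read."""
    node = doc
    for key in path:
        node = node[key]
    return node[1]() if isinstance(node, tuple) and node[0] == "sc" else node

print(read_lazy(doc, ["trip", "forecast"]))  # sunny
print(calls_made)                            # ['weather']: rates never called
materialize_eager(doc)
print(sorted(set(calls_made)))               # ['rates', 'weather']
```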

Whereas standards exist or are under preparation that formalize some Service Level Agreement (SLA) parameters for Web service orchestrations and the protocols for negotiating them, little is available regarding so-called negotiation modules. Negotiation modules are algorithms that relate the different interacting contracts handled by a provider, or that help reach a fair balance between peers during the negotiation. Within the SWAN consortium, the following two topics have been investigated by our team this year:

*Cross-domain services in heterogeneous networks*. Consider the life-cycle of a cross-domain video-conference on demand. Suppose end-user A requests, through his host domain, a video-conference connection with user B, such that A and B do not have direct access to a common domain; the instances of the domains concerned (called service providers, or SPs for short) thus have to set up a chain of inter-domain connections until the SP giving access to B is reached. Clearly, local domain services will not suffice for negotiating and managing this chain. With the Madynes team at LORIA and Alcatel, we are working on algorithms for cross-domain QoS contract negotiation and monitoring; Hélia Pouyllau has implemented a prototype negotiation module using Web services for the peer-to-peer negotiation of a QoS budget along a fixed chain of service providers, by nested contractualization in which only neighboring SPs interact. The description of the algorithm has been submitted to a conference, and integration of the module into the management platform is in progress.
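The nested-contractualization idea can be illustrated with a deliberately simplified sketch (hypothetical SP names and a single additive QoS parameter; the actual module negotiates richer contracts over Web services):

```python
def negotiate_chain(budget_ms, providers):
    """Peer-to-peer negotiation of an end-to-end delay budget along a
    fixed chain of service providers (SPs).  Only neighbours interact:
    each SP commits to its own delay and forwards the remaining budget
    to the next SP in the chain.  Returns the list of per-SP contracts,
    or None if some SP cannot fit into what is left."""
    contracts, remaining = [], budget_ms
    for name, delay_ms in providers:
        if delay_ms > remaining:
            return None            # negotiation fails, the chain is rejected
        contracts.append((name, delay_ms))
        remaining -= delay_ms
    return contracts
```

For example, `negotiate_chain(100, [("SP-A", 30), ("SP-B", 40), ("SP-C", 20)])` succeeds, while shrinking the budget to 80 ms makes the last SP reject the chain.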

*Web Services Orchestrations.* Here the challenge is: 1/ to establish a relation between the QoS of the queried Web services and that of the orchestration; 2/ to negotiate and tune the QoS parameters of the orchestration in an efficient way; and 3/ to predict the breaching of a QoS contract.

Regarding Web services orchestrations and choreographies, several standardization efforts are underway. The most mature effort is around the Business Process Execution Language (BPEL). The WS-Choreography Definition Language (WS-CDL) complements BPEL by paying attention to so-called *choreographies*, i.e., interacting peers of business processes. As these formalisms result from standardization discussions, they are quite complex, offer a number of detail features, and address technical difficulties such as the so-called problem of ``correlations'' with lengthy and informal explanations, which makes their modeling a cumbersome task; see, however, the work on modeling BPEL by means of Petri net systems of workflow type. This is why we decided to base this study on a simpler and much cleaner formalism for WS orchestrations, namely the Orc formalism proposed by Jayadev Misra and co-workers. The current work of Sydney Rosario consists in translating Orc into the formalism of Petri net systems, *i.e.,* systems of equations involving Petri nets. A simulator based on this translation is currently being developed, and QoS aspects will be considered later. This is the subject of regular scientific exchanges with Jayadev Misra's group in Austin, Texas.

A covert channel is an unauthorized information flow between corrupted users. Such flows are usually created to bypass the security or billing mechanisms of a system, and they divert system resources from their original purpose. They are a threat to security, and several standardization documents recommend detecting these flows with reproducible and formal methods.
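As a minimal illustration (a toy sketch, not one of the protocols we analyze), two corrupted users can leak one bit per exchange through the ordering of otherwise legitimate messages, a flow invisible to mechanisms that only inspect message contents:

```python
def covert_send(bit, msg_a="req-1", msg_b="req-2"):
    """Corrupted sender: encode one covert bit in the ORDER of two
    legitimate requests; each request on its own is authorized."""
    return [msg_a, msg_b] if bit == 1 else [msg_b, msg_a]

def covert_recv(observed, msg_a="req-1"):
    """Corrupted receiver: decode the bit from the observed ordering."""
    return 1 if observed[0] == msg_a else 0
```

If the system may reorder messages, the channel becomes noisy, which is precisely why sub-optimal coding strategies remain of interest.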

Previous approaches have been proposed for Bell and LaPadula models. A more recent trend in security is to define covert channel presence as the violation of a non-interference property. A process P is said to interfere with a process Q in a system if the actions of P can influence what Q can do or observe. The first notion of non-interference is due to Goguen and Meseguer. We have proposed a method to detect certain kinds of covert channels in protocols from a scenario model: covert channels are modeled as a game between a pair of corrupted users and the system, and the existence of a strategy for this game implies the presence of a covert channel. Once a covert channel is discovered, an important task is computing its bandwidth, as proposed in earlier work.

More recently, we have studied the covert channel problem from another point of view during Aldric Degorre's master's thesis. The main principle was to analyze a distributed system, modeled as a network of communicating automata, with games and alternating logics, and to compare both approaches. The ``covert channel game'' can be modeled as a distributed game with incomplete information. In this kind of game, two players try to win a challenge without knowing precisely the state of the whole system (they can only infer it from their own observations). Computing a winning strategy in this framework is an undecidable problem that can be compared to the problem of controller synthesis in a distributed framework. Note however that the comparison between controller synthesis and covert channel detection stops here, as entering an undesirable state in the control setting is strictly forbidden, while losing a bit of information in a covert channel just means that the channel is lossy or noisy. Furthermore, the coding strategy in a covert channel does not have to be optimal.

Hence, exhibiting a covert channel with a decidable fragment of a theory produces a restricted kind of covert channel, but it still provides useful information. During Aldric Degorre's master's thesis, we have shown that the existence of a sub-optimal covert channel can be reduced to the satisfiability of a formula expressed in a decidable variant of the alternating logic ATL. When this formula is satisfied, there exists a winning strategy in the distributed covert channel game. This strategy can be effectively computed, and it allows for a reliable transmission of covert information.

Turbo codes are error-correcting codes introduced by Berrou, Glavieux and Thitimajshima, joining two convolutional codes by means of an interleaver. The name is due to the iterative decoding algorithm, which uses ``soft'' probabilistic information.

The algorithms we propose for joint processing arise from the turbo (or iterative) algorithms in digital communications. Our current research on this subject focuses on the analysis of such algorithms in order to propose new designs. In the following we describe our main lines of work.

In wireless communications, the transmission channel introduces time-varying multi-path fading to the transmitted signal; hence, an equalizer is needed to recover the transmitted data at the receiver. The optimal equalizer is based on maximum *a posteriori* (MAP) detection and depends on the transmission channel, which is a priori unknown. Therefore, the receiver contains a channel estimation algorithm to estimate a proper channel parameter set.

Moreover, efficient equalizers have been proposed that take into account that the data are coded: the turbo-equalizer. It contains a MAP equalizer fed with *a priori* information on the transmitted data, provided by another module in the receiver, for instance the decoder.
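The role of the a priori information can be made concrete with a brute-force sketch (a toy setup with BPSK symbols and a known 2-tap channel; a real MAP equalizer uses the BCJR algorithm on a trellis rather than enumeration):

```python
import numpy as np
from itertools import product

def map_equalize(y, channel, sigma2, prior_plus1):
    """Symbol-wise MAP detection: return P(s_k = +1 | y) for a BPSK burst
    sent through a known FIR `channel` and observed in AWGN of variance
    sigma2, given a-priori probabilities prior_plus1[k] = P(s_k = +1)."""
    n = len(prior_plus1)
    post = np.zeros((n, 2))              # accumulators for s_k = -1 and +1
    for bits in product([-1, 1], repeat=n):
        s = np.array(bits, dtype=float)
        z = np.convolve(s, channel)      # noiseless channel output (ISI)
        # likelihood of y times the a-priori probability of the sequence
        p = np.exp(-np.sum((y - z) ** 2) / (2 * sigma2))
        for k, b in enumerate(bits):
            p *= prior_plus1[k] if b == 1 else 1.0 - prior_plus1[k]
        for k, b in enumerate(bits):
            post[k, (b + 1) // 2] += p
    return post[:, 1] / post.sum(axis=1)  # normalized P(s_k = +1 | y)
```

Feeding sharper priors (for instance, extrinsic information from a decoder) pulls the posteriors toward the decoder's beliefs, which is exactly the coupling exploited by the turbo-equalizer.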

This motivated our study of the impact of channel estimation and *a priori* information on a maximum a posteriori (MAP) equalizer. More precisely, we first considered the case where the MAP equalizer is fed with a priori information on the transmitted data, and we studied analytically its impact on the MAP equalizer performance. Then we assumed that the channel is not perfectly estimated, and we showed that the use of both the a priori information and the channel estimate is equivalent to a shift in terms of signal-to-noise ratio (SNR), for which we provided an analytical expression.

We then studied analytically the distribution of the output of the MAP equalizer. The aim of this study is to enable, in the future, the analytical convergence analysis of turbo equalizers using MAP equalization, and to derive a statistical description of the estimates resulting from iterative code-aided estimation algorithms. This will be performed in collaboration with N. Noels, H. Steendam and M. Moeneclaey (University of Gent, Belgium) in the framework of the NewCom project.

This study of the robustness of the MAP equalizer naturally led us to the design of equalizers robust to channel estimation errors. Hence we initiated a collaboration with N. Sellami (ISECS, Sfax, Tunisia) and M. Siala (SupCom, Tunis, Tunisia) and tackled the problem of MAP equalization with non-ideal channel knowledge.

Finally, in collaboration with V. Ramon, C. Herzet and L. Vandendorpe (Université Catholique de Louvain-la-Neuve, UCL, Belgium), under the umbrella of the NewCom project, we extend the analysis to other equalizers such as the Minimum Mean Square Error / Interference Cancellation equalizer. The sensitivity of this equalizer to an imperfect knowledge of the channel taps and of the signal-to-noise ratio (or, equivalently, the noise variance) is analyzed. We plan to compare the robustness of all these equalizers to different parameter estimation errors.

The purpose is to extend previous work on sparse representations of signals in redundant bases, developed in the noise-free case, to the case of noisy observations.

We consider the case b = Ax_{o} + e, where x_{o} satisfies the sparsity conditions requested in the noise-free case and e is a vector of additive noise or modeling errors, and we seek conditions under which x_{o} can be recovered from b, in a sense to be defined. Since a probabilistic approach is difficult and leads to weak asymptotic results, we consider the case where the energy of the noise is ℓ_{2}-bounded and obtain conditions under which recovery or partial recovery is possible.

In an attempt to extend this type of result to different assumptions on the additive discrepancies, we have been investigating the recovery conditions when the noise is bounded in the ℓ_{1} or ℓ_{∞} norm. It appears that the resulting conditions are, so far, not in an easily usable form, and more work is required.

When applying sparse representation techniques to practical situations, such as linear inverse problems, identification problems, or joint detection-estimation problems, one often faces real-time constraints, hence the necessity to develop new algorithms that can handle larger problems in reasonable time.

In the standard basis pursuit, the problem generally amounts to seeking the solution of

min_{x} (1/2) ||Ax - b||_{2}^{2} + h ||x||_{1}

for a given value of the penalty parameter h.
While this criterion can be seen as a quadratic program, and thus solved using standard routines, there are far more efficient algorithms that heavily rely on its very specific nature, namely on the presence of the ℓ_{1} penalty term. We have developed such an algorithm, which has also been proposed in the statistics literature under the name Least Angle Regression (LARS). The idea is quite simple once one has carefully understood the optimality condition of the criterion. One can observe that, for h fixed, the optimum is attained at a sparse point x(h) having few non-zero components. Moreover, around the current h there is an interval, whose boundaries are easy to get, within which the number of non-zero components remains the same and the variation of x(h) is linear. To solve the criterion, one thus starts at h = ||A^{T}b||_{∞}, below which a unique component becomes non-zero in x(h), computes its value, gets the value of h for which a second component becomes non-zero, and proceeds in this way until one reaches the interval containing the desired h. We are currently applying this algorithm to the Space Time Adaptive Processing (STAP) domain, where it allows us to detect targets in clutter and noise. We identify both the clutter and the targets from the data. Due to the amount of data to be handled, standard algorithms cannot be applied. With the iterative (in the number of elements retained in the identification) algorithm, we manage to handle matrices of dimension 200×10000 on standard desktop computers in a few seconds of computation time.
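The homotopy can be sketched in a few dozen lines (a simplified illustration that ignores ties and degenerate steps; here h is the weight of the ℓ1 penalty in the criterion min_x 0.5||Ax-b||² + h||x||_1):

```python
import numpy as np

def lasso_homotopy(A, b, h_target, max_steps=200):
    """Follow the piecewise-linear path h -> x(h) of the criterion
    min_x 0.5*||A x - b||^2 + h*||x||_1, from h = ||A^T b||_inf (where
    x = 0) down to h_target, adding/dropping one component per breakpoint."""
    n = A.shape[1]
    x = np.zeros(n)
    c = A.T @ b                              # correlations A^T (b - A x)
    h = np.max(np.abs(c))
    active = [int(np.argmax(np.abs(c)))]
    for _ in range(max_steps):
        if h <= h_target + 1e-12:
            break
        Aa = A[:, active]
        s = np.sign(c[active])
        d = np.linalg.solve(Aa.T @ Aa, s)    # slope of x_active as h decreases
        v = A.T @ (Aa @ d)
        # next event: an inactive correlation reaches +/- h, an active
        # coefficient crosses zero, or we land on h_target
        h_next, event = h_target, ("stop", -1)
        for j in set(range(n)) - set(active):
            for hj in ((c[j] - h * v[j]) / (1.0 - v[j]) if abs(1.0 - v[j]) > 1e-12 else -np.inf,
                       (h * v[j] - c[j]) / (1.0 + v[j]) if abs(1.0 + v[j]) > 1e-12 else -np.inf):
                if h_next < hj < h - 1e-12:
                    h_next, event = hj, ("add", j)
        for i, j in enumerate(active):
            if d[i] != 0.0:
                hz = h + x[j] / d[i]
                if h_next < hz < h - 1e-12:
                    h_next, event = hz, ("drop", j)
        x[active] += (h - h_next) * d        # move along the linear segment
        c = A.T @ (b - A @ x)
        h = h_next
        if event[0] == "add":
            active.append(event[1])
        elif event[0] == "drop":
            active.remove(event[1])
            x[event[1]] = 0.0
    return x
```

On termination the optimality condition can be checked directly: all correlations satisfy |a_j^T(b - Ax)| ≤ h, with equality (and matching sign) on the support.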

A second algorithm solving the same problem has also been proposed and analyzed. It is comparable, from a computation-time point of view, to quadratic-programming-type algorithms, but its interest lies in its tracking capabilities: if, in the standard basis pursuit criterion, the observation vector b evolves slowly over time, the optimal sparse basis has to be adapted in real time, and this algorithm, which relies on a fixed-point approach, has this ability. The iteration is as follows:

x^{+} = X A^{T} (A X A^{T} + h I)^{-1} b

where X = diag(|x_{i}|). When initialized at any point x^{o} having no zero component, as for instance at the least-norm solution to Ax = b, we prove that it converges to the optimum. While we obtained this recursion using a fixed-point approach to the optimality condition of the criterion, similar recursions have been obtained by applying an Iteratively Reweighted Least Squares (IRLS) approach to similar criteria. Indeed, the corresponding algorithms can also be seen as Expectation-Maximization (EM) algorithms, through an adequate choice of the probability law and of the set of hidden data. It appears, however, that although convergence has been established under quite weak conditions under both the IRLS and the EM philosophies, these results do not encompass the non-smooth basis pursuit criterion.
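A direct transcription of this fixed-point recursion can be sketched as follows (an illustration, assuming the recursion x+ = X A^T (A X A^T + h I)^{-1} b with X = diag(|x_i|), initialized at the least-norm solution):

```python
import numpy as np

def lasso_fixed_point(A, b, h, n_iter=500):
    """Iterate x+ = X A^T (A X A^T + h I)^{-1} b with X = diag(|x_i|),
    starting from the least-norm solution of A x = b (which, generically,
    has no zero component).  Components outside the support of the
    basis-pursuit minimizer of 0.5*||Ax-b||^2 + h*||x||_1 decay
    geometrically to zero along the iterations."""
    m = A.shape[0]
    x = A.T @ np.linalg.solve(A @ A.T, b)     # least-norm initialization
    for _ in range(n_iter):
        X = np.diag(np.abs(x))
        x = X @ A.T @ np.linalg.solve(A @ X @ A.T + h * np.eye(m), b)
    return x
```

On an orthonormal A the limit is the familiar soft-thresholding of A^{T}b, which gives a quick sanity check of the recursion.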

Contract INRIA2 04 A 0082 MC 01 1 — December 2003/June 2006

The project *``SWAN: Self-Aware Management''* is funded by the French national network RNRT, Ministry of Research. It started in December 2003 and is scheduled to last 30 months. The DistribCom team cooperates in SWAN with

the *MADYNES* team of INRIA Lorraine and Paris-Nord University (the latter replaced by *LABRI* Bordeaux in 2005),

industrial partners Alcatel, France Telecom, and QoSmetrics.

SWAN aims at empowering local autonomous diagnosis and administration functions in networks and services. Compared to the preceding projects *MAGDA* and *MAGDA2*, where *asynchronicity* and *distribution* were already at the heart, the new additional challenge in SWAN is *dynamicity*, namely non-static topologies of interaction. Networks expand or shrink as peers and connections are added or withdrawn at runtime, with the necessary adaptations and negotiations managed locally in the domain directly concerned. Web services show this dynamic behavior by nature. Both applications thus present a fundamental challenge to all model-based approaches to diagnosis and, more generally, supervision: find models that allow for self-modification (compare the discussion in the section on models of concurrency). DistribCom is the leader of the SWAN project, and is in charge of providing mathematical models and algorithms for the two fields of application. Details on the activities of DistribCom are given in the corresponding sections of this report.

External research project with France Telecom

Software development often starts with requirement capture, i.e., collecting a set of representative behaviors of a system. The scenarios collected can be considered as partial views of a system, but they may contain some inconsistencies. The objective of CO2 is to provide formal definitions of scenario compositions, to define notions of coherence for a set of views defined as a collection of scenarios, and to provide decision algorithms indicating whether there exists an implementation realizing them.

RNRT November 2004 - November 2006

Very often, the functional and performance models of software and systems are developed by different kinds of specialists. The goal of Persiform is to provide a complete methodology and toolbox for performance evaluation from functional models. More precisely, the project should identify the minimal information needed in a functional model to be able to derive a performance model. The outputs of a simulation performed on this model will then provide relevant information such as the system's throughput, resource usage, etc. The partners of this project are: France Telecom R&D, Verimag, INRIA Rennes, INT, and a software company, Orpheus.

December 2004 - May 2006

The general objective of this contract is to perform exploratory developments with two Alcatel business divisions, namely: Optical Communications Systems and Mobile Radio Communications systems.

In general, telecommunication systems are composed of many interconnected functions, software components, protocol layers, etc. These elements are designed to monitor their internal state and their ability to fulfill the desired function. In case of abnormal behavior, malfunction or failure, they raise alarms that are collected by a supervisor. The interdependence of components, and the general failure-propagation phenomenon, introduce a strong correlation between alarms. One thus generally observes bursts of correlated information, which must be analyzed and interpreted to locate the possible origin(s) of the failures (up to now, this work has been performed by a human operator).

The objective of this contract is to develop diagnosis methods for such systems, made of many interconnected functions. We make use of models that capture the concurrency of behaviors, and we describe runs of such systems by event structures. Two application domains have been identified: Submarine Line Terminal Equipment, for high-rate optical transmissions, and the radio access layer, for GSM/GPRS networks.

ACI Sécurité — September 2004 - September 2007

The purpose of the Potestat ACI is to study security policies in networks, and to analyze the security of such networks with test techniques. The partners involved in this ACI are: LSR/IMAG - INPG (Vasco team), VERIMAG (DCS team), and INRIA Rennes (Vertecs, Lande, and DistribCom teams).

February 2005 - January 2007

ASAX ( http://gemowiki.futurs.inria.fr/twiki/bin/view/Gemo/AsaxWeb) is a cooperative research action headed by DistribCom, in cooperation with INRIA's GEMO team, LIAFA/Paris, and Tel-Aviv University. It started in January 2005 and is scheduled to last 24 months. ASAX's purpose is the analysis of Active XML systems (see the URL http://activexml.net on Active XML and Web services). Currently, only a fragment of AXML, called ``positive AXML'', i.e., systems having monotonic answers to queries, can be given a deterministic behavioral semantics; the goal of ASAX is to break this limitation, and to provide a formal semantics and analysis algorithms for AXML systems.

Contract CNRS — March 2004/September 2006.

The NEWCOM project (Network of Excellence in Wireless COMmunication) addresses the design of systems ``beyond 3G''. This requires successfully solving problems such as: inter-technology mobility management between 3G and ad-hoc wireless LANs; the coexistence of a variety of traffic/services with different and sometimes conflicting Quality of Service (QoS) requirements; new multiple-access techniques in a hostile environment such as a channel severely affected by frequency-selective fading; the quest for higher data rates in the overlay cellular system as well, scaling with those feasible in a wireless-LAN environment and permitting seamless hand-over with the same degree of service to the user; and the cross-layer optimization of physical coding/modulation schemes with the medium access control (MAC) protocols, to conform with fully packetised transmission as well as with the TCP/IP rules of the core network. NEWCOM has a duration of 21 months, and we proposed 6 man-months to be allocated to the DistribCom team.

Collaboration with Nele Noels, Heidi Steendam, Marc Moeneclaey (University of Gent, Belgium) on the topic: "Statistical description of estimates resulting from iterative code-aided estimation algorithms".

Collaboration with Valery Ramon, Cedric Herzet, Luc Vandendorpe (Université Catholique de Louvain la Neuve, UCL, Belgium) on the topic: "Performance degradation of iterative schemes due to channel and SNR estimation errors."

Contract INRIA January 2005/December 2005. Under revision for year 2006.

This is a collaboration with N. Sellami (ISECS, Sfax, Tunisia) and I. Fijalkow (ETIS, Cergy, France). The goal of the proposed project is the analysis of turbo-like receivers and, more particularly, of their robustness to channel estimation errors. This project has a duration of one year and can be extended by one more year. The grant supports travel and living expenses of investigators for short visits to partner institutions abroad.

A. Benveniste is associate editor at large (AEAL) for the journal *IEEE Trans. on Automatic Control* and a member of the editorial board of the journal *Proceedings of the IEEE*. In 2005 he was a member of the Program Committees of the following conferences: CONCUR, HSCC, EMSOFT. He is a member of the Strategic Advisory Council of the Institute for Systems Research, Univ. of Maryland, College Park, USA. He is in charge of managing the INRIA side of the Alcatel external Research Programme (ARP).

J.J. Fuchs is a member of the IEEE-SAM ``Sensor Array and Multichannel'' technical committee and of the Technical Committee of the fourth IEEE Sensor Array and Multichannel Signal Processing Workshop *(SAM06)*, to be held in Waltham (MA), July 12-14, 2006. He was a member of the Program Committee of the French 20th Symposium on Signal and Image Processing (GRETSI), held in Louvain-la-Neuve, September 6-9, 2005. He is also a member of the technical committee of the 14th European Signal Processing Conference (EUSIPCO), to be held in Florence in September 2006.

In 2005, C. Jard was a member of the Program Committees of the following conferences: MSR, FORTE, NOTERE, SAPIR, and he has been invited for 2006 to the Program Committees of SAPIR (as TPC chair), TESTCOM, AFADL, MOVEP and NOTERE. In 2005 he was also a member of the Advisory Council of the Information and Communication department of the CNRS, and served as an expert in several programs of the French Ministry of Research (particularly in the RNRT programme in telecommunications). He is a member of the editorial board of the *Annales des Télécommunications* and of the steering committee of the MSR series of conferences. C. Jard is vice-chairman of the board of directors of the ENS Cachan. He has been president of the Atlanstic research program (at Nantes) and has participated in the recruitment jury of young researchers at INRIA. In 2005, C. Jard was a member of the PhD committees of B. Gueraz and G. Feuillade at Rennes (as president), and of M. Ghazel (as rapporteur) in Lille. He also participated in the Habilitation committee of O. Roux at Nantes.

Stefan Haar is a member of the working group for the evaluation of international activities with the COST committee of INRIA, and of IFSIC's ``commission de spécialistes'', section 27. He has been appointed associate editor of *IEEE Transactions on Automatic Control* for 2006.

J.J. Fuchs is a full-time professor at the Université de Rennes 1 and teaches mainly in the Diplôme d'Ingénieur en Informatique et téléCommunications (DIIC). In the Master-Recherche-STI of this same university he gives a course on Optimization.

É. Fabre teaches information theory and communication theory at Ecole Normale Supérieure de Cachan, Ker Lann campus, in the computer science and telecommunications magistère program.

S. Haar teaches, with J. Deshayes, the mathematics course of the computer science and telecommunications magistère program at Ecole Normale Supérieure de Cachan, Ker Lann campus, and contributes to the *Master de recherche* course on supervision of large distributed systems.

L. Hélouët teaches the UML notation to the mastère classes at ENST Bretagne. He also participates in module MAS (with C. Jard and S. Haar) of master M2RI, which is dedicated to models and algorithms for the supervision of large systems.

C. Jard is a full-time professor at the ENS Cachan and teaches mainly at the Master level, in Computer Science and Telecom, and in Maths. He manages the Info-Telecom track of the Master-Recherche-STS of the Rennes 1 university. It is to be noted that one course in this track is on the research subject of DistribCom. He is also in charge of the competitive examination for the entry of new students in computer science in the French ENS schools.

A. Roumy teaches information theory and communication theory at Ecole Normale Supérieure de Cachan, Ker Lann campus, in the computer science and telecommunications magistère program. She also teaches «Modern coding theory» and «Multiuser detection» in the master SIC (Systèmes Intelligents et Communicants) at ENSEA, Université de Cergy-Pontoise.

E. Fabre visited Glynn Winskel (Cambridge Univ., UK) in March (7-11), for joint work on categorical aspects of distributed computations with event structures. E. Fabre was invited by C. Hadjicostis, CS dept., Univ. of Illinois at Urbana-Champaign, USA, in October (1-8). This collaboration is funded by an exchange program with UIUC, and is dedicated to modular/distributed methods for dealing with large discrete event systems. E. Fabre also gave a seminar on ``A Trellis Notion for Distributed Concurrent Systems'' at the University of Newcastle upon Tyne, UK, in November (3-4).

A. Benveniste and E. Fabre visited Microsoft Cambridge in March (7-11). They discussed in particular with Laurent Massoulié on intrusion detection using Page-Hinkley stopping rule, and Rebecca Isaacs regarding the Magpie project on performance debugging of Web based distributed
applications. A. Benveniste gave a presentation on
*Distributed and asynchronous diagnosis*.

S. Haar has presented ``Fault Diagnosis for Distributed Asynchronous Dynamically Reconfigured Discrete Event Systems'' at the IFAC World Congress, Prague, in July 2005.

A. Roumy has presented a joint work on the analysis of the MAP equalizer at the IEEE sixth international workshop on Signal Processing Advances in Wireless Communications (SPAWC) in June 2005. A. Roumy visited L. Vandendorpe's group (Université Catholique de Louvain la Neuve, UCL, Belgium) in April and September 2005. (NewCom project). She also visited N. Sellami (ISECS, Sfax Tunisia) in May 2005. (Inria-DGRSRT project).

C. Jard was invited in July 2005 to visit the computer science department of the University of Ottawa. A formal collaboration is being established in the domain of testing based on partial-order models. In 2005, C. Jard gave a series of conferences in Taipei, Lisbon and Grenoble.

L. Hélouët visited the National University of Singapore in April 2005 to initiate a cooperation between DistribCom and Professor Thiagarajan's research group. He also presented his work on games for security at a Procope meeting in Rennes in October 2005.

J.J. Fuchs was invited to present his work on sparse representations at the Ecole Polytechnique Fédérale (EPFL) in Lausanne on April 6, 2005.

Barbara König, of the University of Stuttgart, and Paolo Baldan, of the University Ca' Foscari of Venice, visited DistribCom from January 31 to February 4, 2005.

Professor Thiagarajan visited DistribCom in May 2005.

Erik Sudderth, PhD student at MIT, has visited Irisa in June (funded by the MIT-France program). He gave a seminar on particle methods for belief propagation on graphs.

Christoforos Hadjicostis (Univ. of Illinois, Urbana-Champaign) has visited DistribCom in July, in the framework of an exchange program with UIUC. A collaboration has started on the construction of modular diagnosers.