The main objectives of the project are the identification, design and selection of the most appropriate network architectures for a communication service, as well as the development of computing and mathematical tools to fulfill these tasks. These objectives lead to two complementary types of research: the qualitative aspects of systems (e.g. protocol testing and design) and the quantitative aspects, which are essential to the correct dimensioning of these architectures and the associated services (performance, dependability, Quality of Service, Quality of Experience and performability evaluation).
The DIONYSOS group works on different problems related to the design and the analysis of communication services. Such services require functionality specifications, decisions about where and how they must be deployed in a system, and the dimensioning of their different components. The interests of the project concern not only particular classes of systems but also methodological aspects.
Concerning the communication systems themselves, we focus mainly on IP networks, at different levels. Concerning the types of networks considered, we mainly work in the wireless area, in particular on sensor networks, on Content Delivery Networks for our work around measuring the perceived quality, or QoE, and on some aspects of optical networks. Our work on interoperability concerns IPv6 devices. This activity is essential to ensure that network components interact correctly before they are deployed in a real environment. As such, it is considered part of the standardization process. Our team contributes solutions (methods, algorithms and tools) that help in obtaining efficient interoperability test suites for new generation networks. From the application point of view, we also have activities in pricing methodologies, a critical multi-disciplinary area for telecommunications providers, with many challenging open problems for the near future.
Related to the previous elements, we address the quantitative aspects of most of these problems. To this end, we develop techniques for the evaluation of different aspects of the considered systems, both through models and through measurement techniques. The quantitative aspects we are interested in are QoE, performance, dependability, performability, QoS, vulnerability, etc. In particular, we develop techniques to automatically measure the quality of a video or audio communication as perceived by the final user. The methods we work with range from discrete event simulation and Monte Carlo procedures to analytical techniques, and include numerical algorithms as well. Our main mathematical tools are stochastic processes in general, and queuing models and Markov chains in particular, together with optimization techniques, graph theory, combinatorics, etc.
In 2008, we underline the following elements of our activity:
We hosted the international conference on performance and dependability modeling QEST'08 (see ). The General Chair was G. Rubino.
Associated with QEST'08, there will be a special issue of the international journal Performance Evaluation (to appear in 2009), edited by Susanna Donatelli (U. Torino, Italy), Prakash Panangaden (McGill U., Montréal, Canada) and Gerardo Rubino.
We hosted the international workshop RESIM'08 (see ). The General Chair was G. Rubino, and the Program Chair was B. Tuffin.
Associated with RESIM'08 there will be a special issue of the international journal ACM Transactions on Modelling and Computer Simulation (to appear in 2009), edited by G. Rubino and B. Tuffin.
César Viho was named an “IPv6 Forum Fellow” for his work in the world-wide IPv6 Ready Logo Certification Programme.
The demonstration of our first prototype of a QoE monitoring tool for video transmission networks received an award at Sigmetrics'08.
The scientific foundations of our work are those of network design and network analysis. Specifically, this concerns the principles of packet switching and in particular of IP networks (protocol design, protocol testing, routing, scheduling techniques), and the mathematical and algorithmic aspects of the problems on which our methods and tools are based.
These foundations are described in the following paragraphs. We begin with a subsection dedicated to Quality of Service, since it can be seen as a unifying concept in our activities. Then we briefly describe the specific sub-area of model evaluation and the particular multidisciplinary domain of pricing problems.
Since it is difficult to develop as many communication solutions as there are applications, the scientific and technological communities aim at providing general services that give each application or user a set of properties nowadays called “Quality of Service” (QoS), a terminology lacking a precise definition. This QoS concept takes different forms according to the type of communication service and the aspects which matter for a given application: for performance it comes through specific metrics (delays, jitter, throughput, ...), and for dependability it also comes through appropriate metrics: reliability, availability, or vulnerability, in the case for instance of WAN (Wide Area Network) topologies, etc.
QoS is at the heart of our research activities: we look for methods to obtain specific “levels” of QoS and for techniques to evaluate the associated metrics. Our ultimate goal is to provide tools (mathematical tools and/or algorithms, packaged in appropriate software “containers” or not) allowing users and/or applications to attain specific levels of QoS, or, for a particular system, to improve the QoS provided while making optimal use of the available resources. Obtaining a good QoS level is a very general objective. It leads to many different areas, depending on the systems, applications and specific goals being considered. Our team works on several of these areas. We also investigate the impact of network QoS on multimedia payloads, in order to reduce the impact of congestion.
Some important aspects of the behavior of modern communication systems have subjective components: the quality of a video stream or an audio signal, as perceived by the user, is related to some of the previously mentioned parameters (packet loss, delays, ...), but in an extremely complex way. We are interested in analyzing these types of flows from this user-oriented point of view. We focus on the user-perceived quality, the main component of what is nowadays called Quality of Experience (in short, QoE), to underline the fact that, in this case, we want to center the analysis on the final user. We work on the automatic analysis of the QoE, an activity that mainly concerns CDNs (Content Delivery Networks). In this context, we have a global project called PSQA, which stands for Pseudo-Subjective Quality Assessment, and which refers to a methodology for automatically measuring the QoE.
Another special case to which we devote research efforts in the team is the analysis of qualitative properties related to interoperability assessment. This refers to the act of determining whether the end-to-end functionality between at least two communicating systems is as required by the base standards for those systems. Conformance testing is the act of determining to what extent a single component conforms to the individual requirements of the standard it is based on. We consider that conformance tests are used to validate single network components for interoperability purposes. As a consequence, for the last few years our research activity has focused on interoperability testing, even though we still have to deal with some issues that also apply to conformance testing. No real formal framework exists in the interoperability testing area, contrary to conformance testing. Our purpose is to provide such a formal framework (methods, algorithms and tools) for interoperability assessment, in order to help in obtaining efficient interoperability test suites for new generation networks, mainly around IPv6-related protocols. Interoperability test suite generation is based on the specifications (standards and/or RFCs) of the network components and protocols to be tested. The model used is an automaton-like structure called an IOLTS (Input Output Labeled Transition System): an LTS which distinguishes inputs, outputs and internal actions.
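To make the IOLTS notion concrete, the following is a minimal sketch of such a structure in Python: transitions carry labels that are classified as inputs (prefixed `?`), outputs (prefixed `!`) or internal actions (`tau`). The class name, label conventions and toy specification are illustrative assumptions, not taken from any actual tool of the team.

```python
# A minimal sketch of an IOLTS (Input Output Labeled Transition System).
# Labels: "?msg" is an input, "!msg" is an output, "tau" is an internal action.
# All names here are illustrative.

class IOLTS:
    def __init__(self, states, init, transitions):
        self.states = set(states)
        self.init = init
        # transitions: dict mapping (state, label) -> next state
        self.transitions = dict(transitions)

    def kind(self, label):
        """Classify a label as input, output or internal action."""
        if label.startswith("?"):
            return "input"
        if label.startswith("!"):
            return "output"
        return "internal"

    def step(self, state, label):
        """Follow one transition; None if the label is not enabled."""
        return self.transitions.get((state, label))

# Toy specification: receive a request, process it internally, emit an ack.
spec = IOLTS(
    states={"s0", "s1", "s2", "s3"},
    init="s0",
    transitions={("s0", "?req"): "s1", ("s1", "tau"): "s2", ("s2", "!ack"): "s3"},
)

# Replay an observed trace against the specification.
state = spec.init
for label in ["?req", "tau", "!ack"]:
    state = spec.step(state, label)
```

A test case generator would explore such a structure (product of the implementations' models) rather than replay a fixed trace; the replay above only illustrates the semantics of the transitions.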
The scientific foundations of our modeling activities are stochastic processes theory and, in particular, Markov processes, queuing theory, stochastic graph theory, etc., used either for developing analytical models or for discrete event simulation and Monte Carlo (and Quasi-Monte Carlo) techniques. We are interested in model evaluation techniques for dependability and performability analysis, both in static (network reliability) and dynamic contexts (depending on whether or not time plays an explicit role in the analysis). We look at systems at the classical so-called call level, leading to standard models (for instance, queuing ones), and also at the burst level, leading to fluid models. We also work on the design of WAN topologies, which leads to optimization techniques, often for very large optimization problems usually formulated in terms of graphs.
Pricing is a good example of a multi-disciplinary research activity half-way between applied mathematics, economics and networking, centered on stochastic modeling issues. Indeed, the Internet is facing a tremendous increase of its traffic volume. As a consequence, real users complain that large data transfers take too long, without any possibility to improve this by themselves (by paying more, for instance). A possible solution to cope with congestion is to increase the link capacities; however, many authors consider that this is not a viable solution as the network must respond to an increasing demand (and experience has shown that the demand for bandwidth has always been ahead of supply), especially now that the Internet is becoming a commercial network. Furthermore, incentives for a fair utilization between customers are not included in the current Internet. For these reasons, it has been suggested that the current flat-rate fees, where customers pay a subscription and obtain unlimited usage, be replaced by usage-based fees. Besides, the future Internet will carry heterogeneous flows such as video, voice, email, web, file transfers and remote login, among others. Each of these applications requires a different QoS level: for example, video needs very small delays and packet losses, voice requires small delays but can afford some packet losses, email can afford delay (within a given bound), while file transfer needs a good average throughput and remote login requires small round-trip times. Some pricing incentives should exist so that each user does not always choose the best QoS for her application and so that the final result is a fair utilization of the bandwidth. On the other hand, we need to be aware of the trade-off between engineering efficiency and economic efficiency; for example, traffic measurements can help in improving the management of the network, but they are a costly option.
These are some of the various aspects often present in the pricing problems we address in our research effort.
Our main application domains are those related to network design, at both the transport infrastructure and the service levels. Our expertise currently focuses on IP technology in a variety of contexts (IP QoS, IP mobility, ...), and on analysis and dimensioning tools: telecommunications architecture configuration, bottleneck search, resource allocation policies comparison, etc.
We can start by pointing out the PSQA technology we have been developing over the last years (PSQA stands for Pseudo-Subjective Quality Assessment), which allows an automatic and quantitative evaluation of the quality delivered to the user by a network transporting audio or video content. PSQA is accurate (which means that it provides values close to those that would have been obtained using a panel of human observers) and efficient (which means that it can work, if useful or necessary, in real time). Its main application area is network monitoring: PSQA makes it possible to deploy an auditing system that can continuously analyze the perceived quality (the QoE) at specific points in the network. The other main application area of PSQA is network control, exploiting the fact that the quality assessment can be done in real time. The first applications of our technique currently being explored are in the monitoring and control of networks transporting video flows, with a focus on IPTV applications in the context of P2P infrastructures, on networks of mobile terminals, and on the properties of the SVC codec and their impact on the QoE.
In the field of traffic engineering and system dimensioning, technological evolution also raises a number of new performance evaluation problems. Besides these main application domains, other important subjects where quantitative analysis plays a central role are, for example, the analysis of control mechanisms, or the problems posed by pricing, which are of evident interest for operators. In the IP world, extensions such as mobile IP, or cellular IP, are also important application domains for our research work.
The first field in which the team's expertise is requested is the area of IP networks. The usual context is that of an industry member who wishes to develop new techniques, or that of a user who has to set up a new communications system or to upgrade (or more generally, modify) an existing one. This may involve a specific aspect of the system (e.g. the costs model which allows the development of a billing policy), or a particular kind of network (for instance, a home-network), or a family of services (for instance, a security policy).
We can also classify our main application domains by the type of services involved. The past and current expertise of the team's members mainly involves the transport of multimedia flows over IP, the various network QoS management aspects, and testing techniques (interoperability tests, implementation validation tests – especially for IPv6 – and test generation). In this context we find, for instance, problems related to the design of mechanisms well adapted to specific flow types and QoS goals, both at the network access level and at the intermediary node level.
With regard to analysis and dimensioning, we contribute to the different related methodologies (measurements, simulation, analytical techniques), and also to the development of new mathematical and software tools. We develop models for the collection of specific characteristics of the studied systems (e.g., those related to QoS). We also develop new simulation methodologies, in order to overcome certain limitations of the existing techniques. Finally, it should be noted that networks now offer services with a certain level of redundancy, which leads to problems of reliability. Our team has a long experience in the specific study of this aspect of systems and in related problems such as performability and vulnerability, a notion aiming at quantifying the robustness of a network architecture (topology) without taking into account the reliability of each component.
We have built a toolkit that eases the execution of conformance tests written in the standardised TTCN-3 test specification language. This toolkit is made of a C++ library and of a highly customisable CoDec generator. It allows the fast development of the external components necessary to execute a test suite (CoDec, System and Platform Adapters), by efficiently and reliably mixing parts that can be generated automatically from the TTCN-3 test suite with parts that are written manually in C++ and gathered in distinct entities named Codets. The toolkit fixes issues that are not yet covered by the ETSI standards while being fully compatible with the existing standard interfaces (TRI & TCI). It has been publicly released under the name T3DevKit and made available under the Apache License. All these tools, with associated test suites, are freely available at http://
We develop software tools for the evaluation of two classes of models: Markov models and reliability networks. The main objective is to quantify dependability aspects of the behaviors of the modeled systems, but other aspects of the systems can be handled (performance, performability, vulnerability). The tools are specialized libraries implementing numerical, Monte Carlo and Quasi-Monte Carlo algorithms.
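As a small illustration of the Monte Carlo side of these tools, the sketch below estimates the source-terminal reliability of a tiny network by crude sampling: each edge is up independently with probability p, and we count the fraction of samples in which source and terminal stay connected. The graph, function names and parameters are illustrative assumptions, not part of the actual libraries.

```python
import random

# Crude Monte Carlo estimate of source-terminal network reliability:
# each edge works independently with probability p; we estimate the
# probability that nodes s and t remain connected. Illustrative sketch.

def connected(n_nodes, up_edges, s, t):
    """Depth-first search over the surviving edges."""
    adj = {v: [] for v in range(n_nodes)}
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return t in seen

def crude_mc_reliability(n_nodes, edges, p, s, t, n_samples, rng):
    hits = 0
    for _ in range(n_samples):
        up = [e for e in edges if rng.random() < p]  # sample edge states
        hits += connected(n_nodes, up, s, t)
    return hits / n_samples

# A 4-node "bridge" network between nodes 0 and 3.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
rng = random.Random(42)
est = crude_mc_reliability(4, edges, p=0.9, s=0, t=3, n_samples=20000, rng=rng)
```

Crude sampling like this degrades badly when unreliability is rare, which is precisely what motivates the variance reduction techniques (importance sampling, splitting) discussed later in this report.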
One of these libraries has been developed for the Celar (DGA), and its goal is the evaluation of dependability and vulnerability metrics of wide area communication networks (WANs). The algorithms in this library can also evaluate the sensitivities of the implemented dependability measures with respect to the parameters characterizing the behavior of the components of the networks (nodes, lines).
We are also developing tools whose objective is to build Markovian models and to compute bounds of asymptotic metrics such as the asymptotic availability, and of standard metrics of models in equilibrium (loss probabilities, blocking probabilities, mean backlogs, ...). A set of functions designed for dependability analysis is being built under the name DependLib.
We have made several contributions to the QNAP language, which is currently a part of the MODLINE package distributed by SIMULOG. We also participate in the design and evolution of the SPNP (Stochastic Petri Net Package) tool, installed at more than 200 sites and mainly designed at Duke University. Our contributions concern Monte Carlo methods. We now plan to increase our participation in the development of this tool.
Pricing is probably one of the most efficient means to control congestion in a communication network. It is furthermore mandatory for service differentiation, and it is a way to provide incentives for participation in P2P or ad hoc networks. Our work in the area has focused on two aspects: first, the design and feasibility of pricing schemes, and more recently the behavior of those pricing schemes in the case of an oligopoly.
We have therefore first looked at different ways to design a pricing scheme. In , we have developed several schemes for a RED buffer, where the drop probability (or, more exactly, the slope of the drop curve of RED) depends on the willingness to pay of the users: the more you pay, the less likely one of your packets is to be dropped. Learning techniques are used to drive the system to an equilibrium. In , a pricing technique is developed specifically for uplink CDMA transmissions with randomly active users.
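The idea of a payment-dependent RED drop curve can be sketched as follows: a standard linear RED curve between two queue thresholds, with its slope scaled down as the user's payment grows. The specific scaling rule and all parameter names are illustrative assumptions, not the scheme of the cited paper.

```python
# Sketch of a RED-like drop probability whose slope depends on the user's
# willingness to pay: the more a user pays, the gentler the drop curve
# applied to its packets. The inverse-payment scaling is a modeling
# assumption chosen only for illustration.

def red_drop_probability(avg_queue, min_th, max_th, max_p, payment,
                         base_payment=1.0):
    """Linear RED curve between min_th and max_th, slope scaled by payment."""
    if avg_queue <= min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    slope = max_p / (max_th - min_th)
    # Doubling the payment halves the drop probability in the linear region.
    effective_slope = slope * (base_payment / payment)
    return min(1.0, effective_slope * (avg_queue - min_th))

# Same queue state, two different payments.
low = red_drop_probability(30, min_th=10, max_th=50, max_p=0.2, payment=2.0)
high = red_drop_probability(30, min_th=10, max_th=50, max_p=0.2, payment=1.0)
```

In the actual scheme, users adapt their payments over time via learning, which is what drives the system toward an equilibrium; the function above only shows the per-packet mechanism.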
More recently, our activity around pricing has mostly been redirected towards competition issues among providers: the impact of this competition has to be carefully analyzed. In , we have designed a slotted resource allocation game with several providers. Total user demand is then split among providers according to Wardrop's principle. Using the characterization of the resulting equilibrium, we prove, under mild conditions, the existence and uniqueness of a Nash equilibrium in the pricing game between providers. We also show that, remarkably, this equilibrium actually corresponds to the socially optimal situation obtained when both users and providers cooperate to maximize the sum of all utilities, even if providers have the opportunity to artificially reduce their capacity. Another issue is to develop regulation rules to control the behavior of competing providers. As an illustration, migration processes of customers between alternative providers are becoming more and more relevant. Providers competing for migrating customers may adopt a delaying strategy to retain customers who are willing to leave, at the risk of facing regulatory sanctions for this unfair behavior. A game-theoretic model is proposed in to describe the resulting competition and to propose relevant sanctions in case of large retention delays.
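Wardrop's principle says that at equilibrium, the perceived costs of all providers actually used are equal (no user can improve by unilaterally switching). A numeric sketch under simple assumptions: two providers with prices p1, p2 and M/M/1-like congestion delays, and a bisection search for the demand split equalizing costs. The cost model and all numbers are illustrative, not taken from the cited paper.

```python
# Numeric sketch of a Wardrop split of total demand D between two providers:
# at equilibrium, price + congestion delay is the same on both used providers.
# M/M/1-style delay 1/(capacity - load) is an illustrative assumption.

def perceived_cost(price, capacity, load):
    return price + 1.0 / (capacity - load)

def wardrop_split(D, p1, c1, p2, c2, iters=100):
    """Bisection on the demand x sent to provider 1 so the costs equalize.
    (Cost of provider 1 increases in x, cost of provider 2 decreases.)"""
    lo, hi = max(0.0, D - c2 + 1e-9), min(D, c1 - 1e-9)
    for _ in range(iters):
        x = (lo + hi) / 2
        if perceived_cost(p1, c1, x) < perceived_cost(p2, c2, D - x):
            lo = x  # provider 1 still cheaper: it attracts more traffic
        else:
            hi = x
    return (lo + hi) / 2

x = wardrop_split(D=3.0, p1=1.0, c1=3.0, p2=1.5, c2=3.0)
c1_cost = perceived_cost(1.0, 3.0, x)
c2_cost = perceived_cost(1.5, 3.0, 3.0 - x)
```

Given such an equilibrium characterization as a function of the prices, the providers' pricing game can then be analyzed on top of it, which is the structure of the cited analysis.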
We maintain a research activity in different areas related to the dependability, performability and vulnerability analysis of communication systems. In 2008 our effort has been on evaluation techniques using both the Monte Carlo and the Quasi-Monte Carlo approaches. Following the cooperative research action (ARC) RARE we were leading in 2006-2007, we have published a book on rare event simulation . Monte Carlo methods often represent the only tool able to solve complex problems. In the context of rare-event simulation, splitting and importance sampling (IS) are the primary approaches to make important rare events happen more frequently in a simulation and yet recover an unbiased estimator of the target performance measure, with much smaller variance than a straightforward Monte Carlo (MC) estimator , . In , we provide a guided tour of IS, while the same is done for splitting in , relating it to the work on Feynman-Kac formulae. In , we describe the robustness properties a rare event estimator should verify as the event becomes rarer, and investigate the reliability of the associated confidence interval; potential diagnostics on the validity of the estimator are also discussed. Specific contexts, such as highly reliable Markovian systems representing computer or telecommunication systems subject to failures and repairs, are studied in . When studying rare events, it is, as stated above, essential to design estimators that stay efficient as the probability of the event decreases to zero. While robustness properties generally look at the second moment only, we discuss in the importance of investigating higher-order moments, and define related properties. In , we apply the splitting technique, designed for dealing with rare events in a dynamic context, to a static one (a classic network reliability problem).
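The mechanics of importance sampling can be shown on a textbook example: estimating the rare probability P(X > t) for X ~ Exp(1) by sampling from a tilted exponential under which the event is frequent, and correcting each sample by the likelihood ratio. The choice of tilting parameter and sample sizes are illustrative.

```python
import math
import random

# Minimal importance sampling sketch: estimate P(X > t) for X ~ Exp(1),
# a rare event for large t. We sample from Exp(lam) with lam << 1 so the
# event happens often, and reweight by the likelihood ratio f(x)/g(x).

def is_estimate(t, lam, n, rng):
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)  # sample from the IS density g
        if x > t:
            # likelihood ratio: f(x)/g(x) = exp(-x) / (lam * exp(-lam * x))
            total += math.exp(-x) / (lam * math.exp(-lam * x))
    return total / n

rng = random.Random(1)
t = 20.0
exact = math.exp(-t)  # known closed form, about 2.06e-9
est = is_estimate(t, lam=1.0 / t, n=20000, rng=rng)
```

A crude MC estimator would need on the order of 10^9 samples just to see the event once; here a few thousand tilted samples already give a low-variance estimate, which is the whole point of IS.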
The “best” estimator is the one that always produces the exact result, the so-called zero-variance estimator. This estimator can be expressed explicitly for IS and control variates, but it cannot be implemented because it requires the knowledge of the very value we are looking for. Nonetheless, it provides valuable insights into the ideal form of an estimator. The whole game is then to adequately estimate its parameters, from heuristics or learning procedures, which should result in an efficient estimator. Such a general framework is described in . We have also designed specific approximation techniques in the case of IS in a reliability setting, with Markov models for steady-state , or even with static (time-independent) models thanks to a Markov-like construction procedure .
In Quasi-Monte Carlo (QMC), an integral is estimated using a deterministic sequence (instead of a random one), called a low discrepancy sequence, which has the property of spreading quickly over the integration domain. The estimation error is bounded by the product of a quantity depending on the discrepancy of the sequence and the variation of the integrand, but this bound proves useless in practice. By combining MC and QMC methods, we can benefit from the advantages of both: error estimation from MC and convergence speed from QMC. Randomized quasi-Monte Carlo (RQMC) is such a class of methods for reducing the noise of simulation estimators, by sampling more evenly than with standard MC. In , we show how the array-RQMC technique, a randomized QMC method we have previously designed, devoted to the simulation of Markov chains, can be used jointly with splitting and/or IS to construct better estimators than those obtained by either of these methods alone.
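A basic RQMC construction (much simpler than array-RQMC, shown here only to illustrate the randomization idea): a base-2 van der Corput low-discrepancy sequence, randomly shifted modulo 1 so the estimator is unbiased, with independent shifts providing an MC-style error estimate. The sequence, test integrand and sizes are illustrative.

```python
import math
import random

# Randomized QMC sketch: a base-2 van der Corput sequence, randomly shifted
# modulo 1, estimates an integral over [0,1]; averaging over independent
# shifts keeps the QMC accuracy while allowing an error estimate as in MC.

def van_der_corput(n, base=2):
    """First n points of the radical-inverse (van der Corput) sequence."""
    seq = []
    for i in range(n):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            k, r = divmod(k, base)
            x += r / denom
        seq.append(x)
    return seq

def rqmc_estimate(f, n_points, n_shifts, rng):
    pts = van_der_corput(n_points)
    estimates = []
    for _ in range(n_shifts):
        u = rng.random()  # one random shift per replication
        estimates.append(sum(f((p + u) % 1.0) for p in pts) / n_points)
    return sum(estimates) / n_shifts, estimates

rng = random.Random(7)
mean, reps = rqmc_estimate(math.sin, n_points=1024, n_shifts=10, rng=rng)
exact = 1.0 - math.cos(1.0)  # integral of sin over [0,1]
```

The empirical variance of the `reps` list plays the role of the MC error estimate, while the even coverage of the shifted points delivers the QMC convergence speed.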
In , we address the problem due to uncertainty on input values, such as the mean up time or the mean down time, when trying to evaluate the asymptotic availability of components built with new generation technology. In such situations, the asymptotic availability becomes a random variable. In the past, we exhibited the expectation of such a random variable when the input variables are uniformly distributed. In this new communication we compute the distribution of the asymptotic availability of components for uniform or triangular random variable distributions. This allows us to give results such as, for example, the probability that the asymptotic availability is below a given threshold. We also addressed the case of complex architectures, but at that level we use Monte Carlo simulation to estimate the consequences of uncertainty on input values. We have also considered redundant structures where the commutation is not instantaneous and where some elements can be in hot standby while others are in cold standby. Moreover, different repair policies can be assumed in order to decrease the repair frequency. In , the steady-state availability has been determined in those situations.
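The Monte Carlo side of this study can be sketched directly: with uncertain mean up time (MUT) and mean down time (MDT), the asymptotic availability A = MUT / (MUT + MDT) is itself random, and we can estimate the probability that it falls below a threshold. The uniform ranges and threshold below are illustrative numbers, not from the cited communication.

```python
import random

# Monte Carlo sketch of availability under input uncertainty: MUT and MDT
# are uniform random variables, so A = MUT / (MUT + MDT) is random; we
# estimate P(A < threshold). All numerical values are illustrative.

def availability_below(mut_range, mdt_range, threshold, n, rng):
    count = 0
    for _ in range(n):
        mut = rng.uniform(*mut_range)  # uncertain mean up time
        mdt = rng.uniform(*mdt_range)  # uncertain mean down time
        a = mut / (mut + mdt)          # asymptotic availability
        count += a < threshold
    return count / n

rng = random.Random(3)
p = availability_below((900.0, 1100.0), (5.0, 15.0),
                       threshold=0.99, n=50000, rng=rng)
```

For single components with uniform or triangular inputs the cited work derives this distribution analytically; the sampling approach above is what remains practical for complex architectures.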
In large-scale wireless networks, distributed self-organization is more convenient than centralized planning. Self-stabilization protocols are a useful technique to provide self-organization but their stabilizing time is related to the size of the network. A wide range of problems such as TDMA assignment or clustering may be solved thanks to local coloring on a graph model but with a tradeoff between the coloring time and the stabilization time of the protocol using the coloring. This stabilization time is related to the height of a directed acyclic graph induced by the colors, thus to the longest strictly ascending sequence of colors. In , we model this height by the longest increasing contiguous sequence of non-independent uniform random variables. Then using a Markov chain approach, we obtain a theoretical upper bound on the stabilization time. More precisely, our results show the scalability properties of such a protocol, but also that using a large number of colors does not impact its stabilization time.
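The quantity studied above can be simulated cheaply: draw a sequence of uniform random colors and measure its longest strictly increasing contiguous run, which bounds the height of the color-induced DAG and hence the stabilization time. The graph is simplified to a path of nodes and all sizes are illustrative.

```python
import random

# Sketch of the height bound above: the longest strictly increasing
# contiguous run in a sequence of uniform random colors (here along a
# simple path of nodes, an illustrative simplification).

def longest_increasing_run(colors):
    best = cur = 1
    for prev, nxt in zip(colors, colors[1:]):
        cur = cur + 1 if nxt > prev else 1  # extend or restart the run
        best = max(best, cur)
    return best

def average_height(n_nodes, n_colors, n_trials, rng):
    total = 0
    for _ in range(n_trials):
        colors = [rng.randrange(n_colors) for _ in range(n_nodes)]
        total += longest_increasing_run(colors)
    return total / n_trials

rng = random.Random(5)
few = average_height(n_nodes=1000, n_colors=4, n_trials=200, rng=rng)
many = average_height(n_nodes=1000, n_colors=64, n_trials=200, rng=rng)
```

Even multiplying the number of colors by 16, the average height stays small (it grows much slower than the color count), which is consistent with the scalability result stated above.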
This work is a collaboration with the Inria project-team Asap. We proposed in a fully decentralized algorithm providing each node with a value reflecting its connectivity quality. Comparing these values between nodes gives a local approximation of a global characteristic of the graph. Our algorithm relies on an anonymous probe visiting the network in an unbiased random fashion. Each node records the time elapsed between visits of the probe, called the return time of the random walk. Computing the standard deviation of these return times makes it possible to approximate the conductance of the graph. Typically, this information may be used by nodes to assess their position, and therefore how critical they are, in a graph exhibiting low conductance.
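The mechanism can be sketched with a single simulated probe: an unbiased random walk on a small graph, where each node records the times between successive visits and then computes the standard deviation of these return times. The graph (two triangles joined by a bridge, a low-conductance cut) and all sizes are illustrative.

```python
import random
import statistics

# Sketch of the decentralized estimation above: one probe walks the graph
# uniformly at random; each node records its return times, whose standard
# deviation reflects the graph's conductance. Illustrative, stdlib only.

def return_time_stats(adj, steps, rng, start=0):
    last_visit = {}
    returns = {v: [] for v in adj}
    node = start
    for t in range(steps):
        if node in last_visit:
            returns[node].append(t - last_visit[node])  # a return time
        last_visit[node] = t
        node = rng.choice(adj[node])  # unbiased next hop
    # Per-node standard deviation of the recorded return times.
    return {v: statistics.pstdev(r) for v, r in returns.items() if len(r) >= 2}

# Two triangles joined by a single bridge edge (a low-conductance cut).
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
rng = random.Random(11)
stats_by_node = return_time_stats(adj, steps=200000, rng=rng)
```

In the actual algorithm each node computes its statistic locally from its own visit log, so no global knowledge of the graph is needed; the centralized bookkeeping above only serves the simulation.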
In , we analyzed the sequence of the successive sojourn times spent by a fluid queue, driven by a homogeneous Markov chain, in the various levels of its state space. These fluid flow models are widely used in the performance analysis of telecommunication systems. The analysis is carried out in the Laplace-Stieltjes transform domain and we study the limiting behavior of this sequence of sojourn times.
A crucial property of second order fluid models is the behavior of the fluid level at the boundaries. In , two cases have been considered: the reflecting and the absorbing boundary. This paper presents an approach for the stationary analysis of second order fluid models with any combination of boundary behaviors. The proposed approach is based on the solution of a linear system whose coefficients are obtained from a matrix exponent. A practical example demonstrates the suitability of the technique in performance modeling.
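To give a feel for these fluid models, here is a crude simulation of a first-order Markov-modulated fluid queue with a reflecting boundary at level 0: a two-state environment drives the net fluid rate, and we estimate the stationary probability that the buffer is empty. This time-discretized sketch is only illustrative; the paper's matrix-based stationary analysis (and its second-order extension) is exact, this simulation is not.

```python
import random

# Simulation sketch of a Markov-modulated fluid queue, reflecting at 0:
# a two-state environment alternates between filling (+1) and draining (-1),
# and we estimate the stationary probability that the buffer is empty.
# Rates, transition intensities and the time step are illustrative.

def simulate_fluid(rates, q, steps, dt, rng):
    """rates[i]: net fluid rate in environment state i;
    q[i]: intensity of leaving state i (two-state Markov environment)."""
    level, state, empty_time = 0.0, 0, 0.0
    for _ in range(steps):
        if rng.random() < q[state] * dt:  # environment transition
            state = 1 - state
        level = max(0.0, level + rates[state] * dt)  # reflecting boundary
        if level == 0.0:
            empty_time += dt
    return empty_time / (steps * dt)

rng = random.Random(2)
# "On" 40% of the time (fill at rate 1), "off" 60% (drain at rate 1):
# negative mean drift, hence a stable queue.
p_empty = simulate_fluid(rates=[1.0, -1.0], q=[3.0, 2.0],
                         steps=400000, dt=0.001, rng=rng)
```

For this on-off example a flow-balance argument gives P(empty) = P(off) - P(on) = 0.2, which the simulation approximates; for general boundary behaviors, that is exactly the kind of quantity the stationary analysis above computes in closed form.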
In previous work, we have proposed formal definitions of interoperability that help with automatic test case generation. This year we have studied how to include quiescence management and causal dependencies between inputs and outputs when formalizing interoperability testing . We have also developed new algorithms using formal methods for automatic interoperability test generation .
This year we have also added a method and an associated library that allow the use of our TTCN-3 based T3DevKit tools on both Java and C++ platforms .
There is a growing demand for efficient multimedia streaming applications over the Internet and next generation mobile networks. Third generation (3G) mobile systems are designed to further enhance communication by providing high data rates, of the order of 2 Mbps. High Speed Downlink Packet Access (HSDPA) is an enhancement to 3G networks that supports data rates of several Mbit/s, making it suitable for applications like multimedia, in addition to traditional services like voice calls. Services like person-to-person two-way video calls or one-way video calls aim to improve person-to-person communication. Entertainment services like gaming and the video streaming of movies, movie trailers or video clips are also supported in 3G. Many more such services are possible thanks to the augmented data rates supported by 3G networks and to the support for Quality of Service (QoS) differentiation, which allows efficiently delivering the required quality for different types of services.
We studied the provisioning of QoS over High Speed Downlink Packet Access (HSDPA), making it suitable for multimedia applications. We used the salient feature of HSDPA, namely packet scheduling, to satisfy the QoS requirements of H.264 video streaming applications. In , we focused on the issue of subjective video quality in HSDPA, and designed a novel estimator of the user-perceived quality, or Quality of Experience (QoE), that operates in real time. We proposed to integrate such an estimator into User Equipments (UEs) so that it can provide regular feedback that can help the UMTS resource management procedures. In addition, we used the QoE module to study the impact of the different HSDPA schedulers on the perceived video quality.
We also studied congestion control schemes suitable for video flows. The problem is that packet losses, during bad radio conditions in 3G, not only degrade the multimedia quality, but also render the current congestion control algorithms inefficient. We proposed a solution that integrates the congestion control schemes with an adaptive retransmission scheme in order to selectively retransmit some lost multimedia packets. Moreover, we integrated a wireless loss estimation scheme to improve the efficiency of congestion control. Our results showed that the integrated scheme significantly improves the multimedia quality in wireless networks .
We continue the development of the PSQA technology (Pseudo-Subjective Quality Assessment) in the area of Quality of Experience (QoE). PSQA is a method to build a measuring module capable of quantifying the quality of a video or an audio sequence, as perceived by the user, when received through an IP network. PSQA provides an accurate and efficiently computed evaluation of the quality. Accuracy means that PSQA gives values close to those that can be obtained from a panel of human observers, under a controlled subjective testing experiment following an appropriate norm (which depends on the type of sequence or application). Efficiency means that our measuring tool can work in real time, if necessary. Observe that perceived quality is the main component of QoE.
In 2008, our work focused on the application of our technique to the design of a P2P network for distributing real-time video flows (TV over IP). We chose a structured system, that is, a P2P network with a central control manager. The main reason is that the PSQA method allows us to measure the perceived quality in real time, which suggests using its output as feedback information in order to control the network optimally. We also chose a multisource approach, to address the main drawback of this type of system: the dynamics of the peers that continuously enter and leave the system. The name multisource refers to the fact that, in our system, a node receives the TV stream from many different peers (sources, in this case). Our method allows us to split the flow in an arbitrary manner, possibly distributing the load in a way that depends on the types of the frames (with MPEG-2 or MPEG-4 coding), with an arbitrary amount of redundancy, and reducing the signaling to a negligible volume. For this purpose, our distribution system is based on the properties of pseudo-random generators, which constitutes an original application of these tools. In we show how to use the possibility of real-time QoE assessment to design such an infrastructure. We illustrate the approach by looking for optimal values of the main parameters of our solution. In we look at some optimization problems related to the design of the P2P architecture itself, that is, to the topology of the connections. We also work on the design of a general QoE monitoring system, based on PSQA. In we illustrate the possibilities of such a system. As a byproduct of our work applying PSQA to P2P networks, we designed a new BitTorrent-like solution for the distribution of real-time video flows (TV channels), which is now being tested by an operator in Uruguay. See
http://
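The pseudo-random assignment idea can be sketched as follows: every node, sharing only a common seed and the peer list, deterministically derives which peers serve which frame, so source selection requires no per-frame signaling. The hashing scheme and names below are our own illustrative choices, not the actual protocol.

```python
import hashlib

def sources_for_frame(seed, frame_index, peers, redundancy):
    """Rank the candidate peers by a pseudo-random score derived from
    the shared seed and the frame index, and keep the first
    `redundancy` of them as the sources for this frame."""
    ranked = sorted(
        peers,
        key=lambda p: hashlib.sha256(
            f"{seed}:{frame_index}:{p}".encode()).hexdigest())
    return ranked[:redundancy]
```

Since the function is deterministic, a sender can check locally whether it is responsible for a given frame, and a receiver knows from which peers to expect each piece.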
Typical optical-based backbones are in general underutilized. This is not due to a lack of transmission needs, but to other factors, among which we underline two: the bottleneck effects at access networks, and the somewhat rigid and inefficient way in which the optical infrastructure is used in current technology.
Optical technology has significantly increased the transmission capacity of today's transport networks, and it is playing an important role in supporting the rapidly increasing data traffic. Meanwhile, congestion issues are largely relieved in such core networks. Nonetheless, the rigid and large routing granularity (i.e., the wavelength) entailed by such an approach can lead to bandwidth waste . In this regard, increasing research interest is now focusing on the development of new concepts of traffic aggregation in optical networks. The main objective of our work is therefore to eliminate both the bandwidth underutilization and the scalability concerns that are typical of all-optical wavelength-routed networks.
Specifically, we propose using multiple-access lightpaths (i.e., optical circuits) instead of the traditional point-to-point lightpaths. By doing so, we aim at increasing lightpath utilization, since the capacity of each lightpath is shared by multiple connections instead of a single end-to-end connection. As a first main contribution of our work, we conceived new medium access and sharing mechanisms adapted to such very high-speed networks.
To assess the efficiency of these concepts, all underlying network costs have to be quantified. These costs include those of the transceivers required at the node level, as well as the number of wavelengths. In other words, the network dimensioning is achieved by evaluating the cross-connect and IP router dimensions as well as the number of wavelengths. To date, a real methodology for dimensioning such high-speed networks is lacking. Our second main objective is therefore to develop models to dimension optical networks. In this regard, we developed new models to dimension optical networks considering both ring and arbitrary meshed topologies . Our results show that our proposed aggregation technique can significantly improve the network throughput while reducing its cost.
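As a back-of-the-envelope illustration of why sharing lightpaths saves wavelengths, one can compare a dedicated point-to-point allocation with an idealised shared one. This simple counting bound ignores topology, routing and scheduling constraints, which the actual dimensioning models do take into account; the demand figures are arbitrary.

```python
import math

def wavelengths_dedicated(demands_gbps, wavelength_capacity_gbps):
    """Point-to-point lightpaths: each demand gets its own
    wavelength(s), so sub-wavelength demands waste capacity."""
    return sum(math.ceil(d / wavelength_capacity_gbps) for d in demands_gbps)

def wavelengths_shared(demands_gbps, wavelength_capacity_gbps):
    """Multiple-access lightpaths: demands are aggregated onto shared
    wavelengths (an idealised lower bound, ignoring topology)."""
    return math.ceil(sum(demands_gbps) / wavelength_capacity_gbps)
```

Four sub-wavelength demands of 1, 2, 0.5 and 3 Gb/s on 10 Gb/s wavelengths need four dedicated wavelengths but can in principle share a single one.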
Future wireless networks are expected to provide IP-based coverage and efficient mobility support with end-to-end Quality of Service (QoS) guarantees. In our work, we propose a new architecture that supports both mobility and QoS management in wireless MPLS networks. Specifically, we provide new techniques to reduce both the service disruption and the packet loss that occur during the handoff operation. Our proposal includes two protocol variants . In the first variant, called FH-Micro Mobile MPLS, we consider the fast handoff mechanism, which anticipates the LSP setup procedure with an adjacent neighbor subnet that a mobile node (MN) is likely to visit. This mechanism is proposed to reduce service disruption by using link-layer (L2) functionalities. In the second variant, called FC-Micro Mobile MPLS, the forwarding chain concept, which is a set of forwarding paths, is provided to efficiently track host mobility within a domain. This concept can significantly reduce the registration update cost and provide low handoff latency, as demonstrated in .
One of the major concerns in multi-hop wireless mesh networks (WMNs) is radio resource utilization efficiency, which can be enhanced by efficiently managing the mobility of users. Our main objective is therefore to efficiently route the traffic generated by mobile nodes, including the signaling messages, in order to optimize network radio resource utilization. In other words, we aim at minimizing the total signaling cost by controlling the number of registration updates with the root of the domain. To achieve this, we propose new micro-mobility management schemes based on clustering techniques to efficiently track the mobility of nodes within the network. These mechanisms are conceived to minimize the total signaling cost of the messages exchanged to manage node mobility, as well as to optimize the link usage cost of the data traffic generated by each mobile user .
As a second alternative to increase the capacity of wireless mesh networks, we propose using cognitive radio (CR) capabilities. The capacity of a WMN indeed depends on the spectrum resources it has, and on the efficiency with which it uses them.
Most current WMNs rely on existing technologies such as IEEE 802.11 and operate in unlicensed spectrum. This has contributed to the rapid growth of the technology, as WMNs have been deployed across campuses, rural regions, and even entire cities. However, the bandwidth-intensive nature of WMNs (a result of using multi-hop communication in a shared medium) creates difficulties for delivering satisfactory quality of service (QoS), particularly for networks sharing spectrum with other networks and technologies.
In our work, we considered the use of cognitive radio to improve this efficiency, by allowing networks belonging to different service providers to share both spectrum and infrastructure resources according to several different models. Using an ILP-based problem formulation, this approach is shown to yield significant benefits to the networks, by increasing QoS or allowing the networks to decrease their spectrum requirements. Moreover, the feasibility of CR-based virtual wireless networks and of service differentiation is demonstrated, further enhancing the efficiency of spectrum use .
The above-mentioned advances in wireless communication and embedded computing technologies have led to the emergence of wireless sensor node technology, which can be deployed in many domains including health, environment and battlefield monitoring. In such environments, it is difficult to act physically on the wireless sensor network (WSN) as often as required, since some interventions can damage the monitored area and thus disturb the accuracy of the observations. Due to the limited capacity of sensor node batteries, once a WSN is in place, its lifetime must last as long as possible on the initially provided amount of energy. The key challenge is then to maximize the WSN lifetime; reducing the energy consumption is indeed the main objective of our work . In order to minimize the energy consumption in WSNs, most previous works focused mainly on proposing energy-efficient MAC and routing protocols, without paying attention to the impact of the density of reporting nodes on WSN performance . In our work, we rather aim at avoiding the transmission of redundant information about a detected event; in doing so, we can achieve significant energy savings. To this end, we proposed limiting the reporting of a detected event to a small subset of reporting nodes, instead of using all the nodes in the event area . Specifically, we investigated the relationship between the sensor network lifetime and the number of active reporting nodes (i.e., the sensor node density), using both analytical and simulation approaches .
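A minimal energy model makes the density-lifetime relationship visible: if each detected event is reported by k nodes rather than by all the nodes in the event area, and the reporting role rotates evenly, the drain rate (and hence the lifetime) scales inversely with k. The model below is a deliberately crude sketch with made-up parameters, not the analytical model developed in our work.

```python
def network_lifetime(num_nodes, battery_j, report_cost_j,
                     event_rate_per_s, reporters_per_event):
    """Lifetime (seconds) of the event area under a toy model where
    each event drains `reporters_per_event * report_cost_j` joules,
    spread evenly over the `num_nodes` nodes via role rotation."""
    drain_per_s = event_rate_per_s * reporters_per_event * report_cost_j
    return num_nodes * battery_j / drain_per_s
```

Halving the number of reporters per event doubles the lifetime in this model, which is the intuition behind restricting the reporting task to a small subset of nodes.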
Moreover, previous works focused mainly on energy minimization problems, whereas minimizing the energy consumption must be achieved while respecting the specific QoS requirements of sensor applications, such as the maximum tolerable time to report an event and the required event reliability. To tackle these issues, we studied the energy-latency-reliability tradeoff; our first results are presented in .
To further reduce energy consumption in WSNs, we also propose balancing the energy consumption among nodes. In particular, we demonstrated that sending the traffic generated by each sensor node through multiple paths, instead of using a single path, allows significant energy savings. A new analytical model for load-balanced systems is complemented by simulations in order to quantitatively evaluate the benefits of the proposed load balancing technique. Specifically, we derived the set of paths to be used by each sensor node and the associated weights (i.e., proportions of utilization) that maximize the network lifetime. These original contributions are the object of reference and of the forthcoming publication .
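The weight-derivation idea can be illustrated on a tiny instance. In the toy model below (our own simplification, not the analytical model of the papers), each node on a path is drained in proportion to the traffic share it forwards, the network lifetime is the time until the first used node dies, and the best two-path split is found by brute force.

```python
def lifetime(weights, paths, energy):
    """Network lifetime = time until the first node on any used path
    dies, with per-node drain proportional to the traffic share it
    forwards (toy model: unit traffic rate and unit relay cost)."""
    drain = {n: 0.0 for n in energy}
    for w, path in zip(weights, paths):
        for node in path:
            drain[node] += w
    return min(energy[n] / drain[n] for n in energy if drain[n] > 0)

def best_split(paths, energy, steps=100):
    """Brute-force the traffic split between two paths that maximizes
    the lifetime, as a stand-in for the analytical weight derivation."""
    return max(((i / steps, 1 - i / steps) for i in range(steps + 1)),
               key=lambda w: lifetime(w, paths, energy))
```

With two disjoint paths of equal-energy nodes, the optimizer recovers the even split, whose lifetime is twice that of any single-path routing.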
Supporting real-time applications in IEEE 802.11 networks is an important challenge due to the probabilistic nature of the employed MAC protocol, the well-known Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). One solution is to restrict the accepted traffic through admission control algorithms, which make it possible to guarantee and maintain the service quality of the flows currently being served. If there are no restrictions limiting the volume of traffic introduced into the service set, performance degradation results from higher backoff times and collision rates. In , we presented a delay-based admission control algorithm for IEEE 802.11. We first presented an accurate delay estimation model that adjusts the contention window (CW) size on a real-time basis by considering key network factors, MAC queue dynamics, and application-level QoS requirements. We then used this delay-based CW size adaptation model to introduce a fully distributed admission control model that protects existing flows in terms of QoS guarantees.
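The structure of such an admission test can be sketched with a deliberately simplified delay predictor: an M/M/1 mean-delay formula stands in here for the much more detailed 802.11 delay estimation model of the paper, and all rates and bounds are illustrative assumptions.

```python
def admit(new_flow_rate, current_rates, service_rate, delay_bound_s):
    """Admit the new flow only if the predicted mean delay, with the
    new flow included, stays under the application's delay bound."""
    load = sum(current_rates) + new_flow_rate
    if load >= service_rate:
        return False          # system would be saturated
    predicted_delay = 1.0 / (service_rate - load)  # M/M/1 mean delay
    return predicted_delay <= delay_bound_s
```

The key point is that the decision protects the flows already admitted: a new flow is rejected as soon as its arrival would push the predicted delay past the QoS requirement.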
Since QoS parameters are not enough to capture user satisfaction, we used PSQA to control IEEE 802.11 networks and hence guarantee QoS. In this context, we proposed in a Mean Opinion Score (MOS) based admission control mechanism. Instead of relying on technical parameters such as bandwidth, loss, or latency, our solution is based on Quality of Experience (QoE). Indeed, by using PSQA, we can continually track user satisfaction (MOS), and thus accept a new flow only if the other users' MOS (with respect to a threshold) is not degraded.
Another aspect considered for QoS support is the network selection procedure when several 802.11 Access Points (APs) are available. A simple decision is usually based only on the signal strength measured at the receiver, so in general users choose the closest AP because it provides the strongest signal. This strategy can lead to excessive demand on one access point and the underutilization of others: the new user always selects the AP with the strongest signal, without knowing the actual load of the network or the actual quality experienced by ongoing users. If the chosen access point is already highly loaded, one more connection results in severe quality degradation for every user of this network. To prevent this situation, a better selection strategy is needed, one that provides pertinent information about the status of the network to help users make a good decision. For that, the IEEE 802.11 Task Group “k” is developing an extension to the IEEE 802.11 standard, referred to as 802.11k. This extension is a specification of radio resource measurement. It is intended to improve the provision of traffic in the physical and medium access layers by defining a series of measurement requests and reports that can be used to select the best available access point. Based on the concepts of 802.11k, we presented in a novel network selection scheme based on PSQA. In this scheme, users select the network that shows the best MOS value: they take the mean opinion score (MOS) of all current users into account while making the decision (a network-assisted approach). As a consequence, they connect to the network where they will be best served, and automatically avoid highly loaded networks because of their lower MOS. The scheme therefore helps prevent access networks from over- or under-utilization.
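The selection rule itself is simple once the measurement reports are available. The sketch below assumes 802.11k-style reports carrying, for each candidate AP, the MOS values of its current users; the convention that an idle AP offers top quality is our own simplification.

```python
def select_ap(ap_reports):
    """Pick the AP whose ongoing users report the best mean MOS.
    `ap_reports` maps an AP identifier to the list of MOS values of
    its current users; an AP with no users is scored 5.0 (best)."""
    def mean_mos(ap):
        scores = ap_reports[ap]
        return sum(scores) / len(scores) if scores else 5.0
    return max(ap_reports, key=mean_mos)
```

A newcomer thus steers away from a highly loaded AP (low MOS among its users) even when that AP has the strongest signal.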
Extending the coverage of 802.11 networks faces some challenges, as the wired links interconnecting the APs increase complexity and deployment cost in many situations. It is therefore desirable to connect the APs via wireless links as well, creating a WLAN mesh. In this context, the IEEE 802.11s group was created to establish a new extension of the IEEE 802.11 standard specifying the functionality of the extended service set (ESS). The aim is to apply multihop mesh techniques to specify the functionality of a wireless distribution system interconnecting 802.11 access points, which can be used to build a wireless infrastructure for small- to large-scale WLANs. Voice over IP (VoIP), meanwhile, has been a major concern over the last decade; before deploying such an application in an IEEE 802.11s-based Wireless Mesh Network (WMN), the capacity of WMNs to support quality of service (QoS) must be investigated. With VoIP, factors like delay, packet loss and overhead require special consideration. Motivated by the fact that path selection (routing) in a WMN has a strong impact on the capacity to support VoIP connections, we compare in VoIP performance over three routing protocols in IEEE 802.11s-based WMNs. The routing protocols considered in this comparison are Ad hoc On-demand Distance Vector routing (AODV), Dynamic Source Routing (DSR), and the Hybrid Wireless Mesh Protocol (HWMP), which is defined by the 802.11s group for the upcoming IEEE 802.11s standard.
Cifre contract (PhD thesis supervision) on pricing, revenue and resource management for an integrated telecommunication provider in the context of competition and convergence of services. PhD student: Hélène Le Cadre. Advisor: B. Tuffin.
Cifre contract (PhD thesis supervision) on data aggregation in passive optical networks (resource management and sharing). PhD student: Charlotte Roger. Advisor: N. Bouabdallah.
ARED contract (with Région Bretagne) for the PhD thesis of Sagga Samira on rare event simulation with applications in telecommunications. Advisor: B. Tuffin.
We are an active member of the IST-Go4IT project, co-financed by the European Commission under the 6th Framework Programme of the European Research Area. The project started in November 2005 and ended in April 2008. The consortium comprises 11 organizations from Europe, China and Brazil. The objectives of Go4IT were to:
Promote and foster a conformance-testing-oriented validation approach, as well as associated technologies such as TTCN-3.
Develop the user community of such an approach.
Supply a range of executable and freely accessible test services.
Supply the associated range of support services free of charge.
Supply complementary commercial services such as certification and consulting.
Set up the environment required to develop a low-cost, open and generic solution.
The Dionysos team was the main contributor to this project, which led to an open-source platform for IPv6 test suite development.
The Anemone project (Advanced Next gEneration Mobile Open NEtwork) is an IST-STREP (Specific Targeted Research Project) that started on June 1st, 2006, for a duration of 30 months. The objectives of ANEMONE are:
To gather and integrate in a single place all the components (i.e., the latest standards in wireless access technologies, communication protocols, and applications) necessary to conduct research and development and to study the feasibility of deploying IPv6 mobility technologies.
To share the partial experience the partners of the project have with IPv6 mobility deployment, in order to leverage the European IPv6 experience.
To provide a pan-European IPv6 mobility testbed, open to the research and developer communities, so that they can test and validate their new applications and services.
To define procedures and tools to conduct experiments, to gather results, to evaluate the performance and to validate the compliance with IETF standards.
To gather together a significant number of "real users" coming from different cultural and social populations (students, teachers, ISP customers…).
The Dionysos team is the leader of WP3: Testbed Integration and Validation. This testbed is used for several experiments and is ready for further use .
In the context of our work on interoperability assessment, we continue our cooperation with CELAR (Centre d'Électronique de l'Armement), a research laboratory of the French Army. Our task is to provide a new framework for interoperability testing, with application to routing protocols. The project started in December 2007 and runs for two years.
We work in the two-year (October 2008 – October 2010) DGE project P2Pim@ges, devoted to P2P architectures for video distribution. Our contribution mainly focuses on evaluating the QoE of P2P applications. We also develop performance evaluation analyses of P2P solutions in the context of a managed network (for instance, the network of a telecommunications operator). The first results obtained have been published in . The project is led by Thomson, with the participation of Orange, TMG, Devoteam, IPdiva, and the academic partners Telecom Bretagne, M@rsouin, the University of Rennes 1, and 4 INRIA teams, including Dionysos.
We work in the two-year (October 2007 – October 2009) DGE project QoSmobile, on the supervision (monitoring and control) of TV distribution systems over mobile terminals. Our contribution focuses on evaluating the QoE of such an infrastructure. QoSmobile is led by ENENSYS, and the partners are Expway, Alcatel-Lucent, Siradel and our team.
Bruno Sericola continues his collaboration with Fabrice Guillemin from France Telecom (Lannion) on the analysis of standard and fluid queues.
We are part of the ANR Télécommunications project WINEM: WiMAX Network Engineering and Multihoming.
An ANR project running 2007-2009, in cooperation with Motorola, the GET (INT and ENST Bretagne), the MAESTRO project-team at INRIA Sophia-Antipolis, the University of Avignon, the Eurécom institute and France Telecom R&D.
It is dedicated to the IEEE 802.16e standard for Broadband Wireless Metropolitan Access (see
http://
We coordinate the ANR Verso CAPTURES: Competition Among Providers for Telecommunication Users: Rivalry and Earning Stakes.
An ANR project running from December 2008 to November 2012, in cooperation with Telecom Bretagne and France Telecom R&D. The goal of this project is to deal with competition among telecommunications providers. We need to study the distribution of customers among providers as a first level of game, and then to focus on a second, higher level: the price and QoS war. See
http://
We participate in the (CNRS-funded) discussion group ROTJER (2007-2008), on Operations Research and Game Theory for communication networks.
We are part of the INRIA cooperative research action FRACAS (Fiabilité des Réseaux Autonomes de Capteurs et Applications à la Sécurité). The partners are the LRI Paris and the INRIA project-teams ARES, DIONYSOS, REGAL and Grand-Large. This project spans two years, ending in December 2008.
EuroFGI is the follow-up of the EuroNGI network of excellence, starting in December 2006 and ending in June 2008. Bruno Tuffin is the INRIA team leader in this project.
The group has contributed to the deliverables of the following work packages (Joint Research Activities):
WP.JRA.5.4: Network optimization and control;
WP.JRA.5.5: Numerical, simulation and analytic methodologies;
WP.JRA.6.1: Quality of service from the users' perspective and feedback mechanisms for quality control;
WP.JRA.6.2: Payment and cost models for NGI.
Euro-NF is a Network of Excellence on the Network of the Future, formed by 35 institutions (from academia and industry) from 16 countries. Its main target is to integrate the research efforts of the partners, to be a source of innovation and a think tank on possible scientific, technological and socio-economic trajectories towards the network of the future. It started in January 2008 and ends in December 2010 (see
http://
Bruno Tuffin is the INRIA team leader in this project. Gerardo Rubino is the responsible for relations with industry partners for the whole network.
The group is contributing to the following work packages (Joint Research Activities):
WP.JRA.2.2: Traffic Engineering, Mechanisms and Protocols for Controlled Bandwidth Sharing;
WP.JRA.2.4: Routing and Traffic Management in a Multi-Provider Context;
WP.JRA.2.5: Design of Optimal Highly Dependable Networks;
WP.JRA.3.2: SLAs, Pricing, Quality of Experience;
WP.JRA.3.3: Cost Models.
We are a member of the CAP project (Competition Among Providers in the access network) within the EuroFGI NoE, funded for a period of one year between 2007 and 2008, in collaboration with GET/ENST Bretagne, the University of Rome 2 and the University of Cantabria.
We are a member of the PRECO project (Pricing and Regulation in COmpetitive telecommunication networks) within the Euro-NF NoE, funded for one year, from September 2008 to September 2009, in collaboration with TELECOM Bretagne and the University of Rome 2.
Bruno Tuffin is the French national delegate and project coordinator for the EU COST Action IS0605 (ECONTEL). The goal of ECONTEL is to develop a strategic research and training network linking key individuals and organizations in order to enhance European competence in the field of telecommunications economics, to support related R&D initiatives, and to provide guidelines and recommendations to European players (end-users, enterprises, operators, regulators, policy makers, content providers) concerning the provision to citizens and enterprises of new converged broadband and wireless content delivery networks (see
http://
We are currently working with Marie-Ange Remiche and Guy Latouche from the University of Brussels (ULB) on the analysis of stationary fluid queues.
We also work on second-order fluid queues with Miklos Telek (Technical University of Budapest, Hungary), and with Marco Gribaudo and Daniele Manini (University of Torino, Italy) .
We work on new queuing models (B. Sericola, G. Rubino, with Miklos Telek), coming out of our paper .
N. Bouabdallah, A. Ksentini and B. Sericola are currently working with Hamid Nafaa from the University College Dublin (UCD) on the analysis of Video on Demand multi-source streaming architectures.
For other European cooperation, see our activities in the NoE Euro-FGI above.
The goal of this team is to develop efficient Monte Carlo methods to compute integrals and sums, or to solve equations and optimization problems. These methods are unavoidable tools in areas such as finance, electronics, seismology, computer science, engineering, physics, transport, biology, the social sciences, etc. Nonetheless, they have the reputation of being slow, i.e., of requiring a large computational time to reach a given precision. The goal of the project is to work on acceleration techniques, that is, on reaching the targeted precision faster. The typical framework is rare event simulation, for which obtaining even a single occurrence of the event can require a very long time. We work on the two main acceleration techniques: importance sampling and splitting. Combining them with the faster randomized quasi-Monte Carlo methods is another challenge we want to address.
(see
http://
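As a concrete micro-example of the importance sampling idea, consider estimating p = P(X > γ) for X ~ Exp(1) with γ = 20, so that p = e^(-20) ≈ 2·10⁻⁹: crude Monte Carlo would essentially never observe the event, while sampling from a tilted exponential and reweighting each hit by the likelihood ratio recovers p accurately. This is a textbook sketch, not one of the project's estimators; the choice θ ≈ 1/γ is a standard tilting heuristic.

```python
import math
import random

def is_estimate(gamma, theta, n, seed=0):
    """Importance-sampling estimate of p = P(X > gamma) for X ~ Exp(1),
    sampling from the tilted density Exp(theta) with theta < 1 and
    reweighting by the likelihood ratio f(x)/g(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(theta)          # draw from the IS density
        if x > gamma:
            # f(x)/g(x) = exp(-x) / (theta * exp(-theta * x))
            total += math.exp(-(1.0 - theta) * x) / theta
    return total / n
```

Under the tilted density the rare event becomes frequent, and the unbiased reweighted average reaches a few percent relative error with only 10⁵ samples, where crude simulation with the same budget would almost surely return zero.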
The goal of this collaboration between INRIA and Sup'Com (École supérieure des communications, Tunis) is to study QoS aspects in mobile WiMAX networks. Specifically, we aim at proposing a cross-layer solution that enables efficient management of handoff operations while allowing, at the same time, efficient resource allocation.
We work with Columbia University and Stanford University on rare event simulation. We received Peter W. Glynn for two weeks in September.
We work with Florida State University on quasi-Monte Carlo methods and their randomizations, with applications in finance and telecommunications.
We work with Stevens Institute of Technology on pricing issues in telecommunications.
We work with (and received for two months) Pieter de Boer, from the University of Twente, The Netherlands, on rare event simulation.
We work with Peter Reichl (FTW, Vienna, Austria) on pricing and security issues.
We work with Hector Cancela (Montevideo, Uruguay) on simulation issues.
We work with the University of Waterloo (Canada) on wireless mesh and sensor networks.
Peter W. Glynn (Stanford University, USA) visited Dionysos for two weeks in September 2008, to work on rare event simulation.
Peter Reichl (FTW, Vienna, Austria) was our guest for 6 weeks during summer, to work on socio-economic aspects of next generation networks.
Miklos Telek, from the Technical University of Budapest, visited us for a few days. We work together on fluid queues and on memory-constrained queues.
Hamid Nafaa and Cédric Lamorinière, from University College Dublin, visited us for a few days. We started a collaboration on the performance analysis of a large-scale video on demand architecture based on a P2P solution.
R. Marie and G. Rubino are members of the IFIP WG 7.3 (Working Group in Computer Performance Modeling and Analysis).
Bruno Tuffin was the local organization Chair, 5th International Conference on the Quantitative Evaluation of SysTems (QEST'08), Saint-Malo, September 2008.
Bruno Tuffin was Scientific Committee Chair, 7th International Workshop on Rare Event Simulation (RESIM'08), Rennes, September 2008.
Nizar Bouabdallah is serving as a TPC Co-Chair for the IEEE GLOBECOM 2009 Ad Hoc and Sensor Networks Symposium, Honolulu, Hawaii, USA.
Adlen Ksentini is serving as PhD Forum Chair for the ICST Mobile Lightweight Wireless Systems (MOBILIGHT) 2009 conference.
Nizar Bouabdallah served in the Program Committee of the following conferences:
IEEE ICC 2009, Ad-Hoc and Sensor Networking Symposium, June 2009, Dresden, Germany.
IEEE GLOBECOM 2008, Ad Hoc, Sensor and Mesh Networking Symposium, December 2008, New Orleans, LA, USA.
IWCMC 2008, Wireless LANs and Wireless PANs Symposium, August 2008, Chania, Crete Island, Greece.
IEEE PIMRC 2008, Mobile and Wireless Networks Track, September 2008, Cannes, France.
IEEE WCNC 2008, Networking Track, April 2008, Las Vegas, USA.
Radio Resource Management in Wireless Mesh Networks (RRMinMesh'08) Workshop held in conjunction with AICCSA'08 conference, April 2008, Doha, Qatar.
Gerardo Rubino was the General Chair of the following conferences:
5th International Conference on the Quantitative Evaluation of SysTems (QEST'08), Saint-Malo, September 2008.
7th International Workshop on Rare Event Simulation (RESIM'08), Rennes, September 2008.
Gerardo Rubino served in the Program Committee of the following conferences:
8th ICIL (International Conference on Industrial Logistics), March 2008, Neguev, Israel;
SMCtools 2008: International Workshop on Tools for solving Structured Markov Chains, Athens, Greece, October 2008;
8th ICOR (International Conference on Operations Research), February 2008, La Habana, Cuba;
4th Euro-NGI Conference (Next Generation Internet Networks: Design and Engineering for Heterogeneity), April 28–30, 2008, Kraków, Poland.
Bruno Sericola served in the Program Committee of the following conferences
ASMTA'08, 15th International Conference on Analytical and Stochastic Modelling Techniques and Applications, Nicosia, Cyprus, 4-6 June 2008.
MAM6, 6th International Conference on Matrix Analytic Methods in Stochastic Models, Beijing, P. R. China, June 11-14, 2008.
CFIP'08, Colloque francophone sur l'ingénierie des protocoles, Les Arcs, France, 25-28 mars 2008.
Bruno Tuffin served in the Program Committee of the following conferences
2nd IEEE International Workshop on Bandwidth on Demand (BoD 2008) April 11, 2008, Salvador da Bahia, Brazil.
4th EURO-NGI Conference on Next Generation Internet Networks Design and Engineering for Heterogeneity (NGI'08), April 28-30, Krakow, Poland.
9ème Atelier en Évaluation de Performances, 1-4 Juin 2008, Aussois, France.
8th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (MCQMC'08), July 7-11, 2008, Montréal, Canada.
17th International Conference on Computer Communications and Networks (ICCCN'08), August 4 - 7, 2008 St. Thomas U.S. Virgin Islands.
The 5th International Workshop on Grid Economics and Business Models (GECON 2008), August 25-26, 2008, Las Palmas, Gran Canaria, Spain.
3rd International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS'08), October 13-18, 2008, Athens, Greece.
“European Simulation and Modelling Conference” (ESM2008), October 27-28, 2008, Le Havre, France.
Co-organizer of the invited session “Game theory in communication networks” at the Roadef Conference, Clermont Ferrand, February 25-27, 2008.
Adlen Ksentini served in the Program Committee of the following conferences
The IEEE 22nd International Conference on Advanced Information Networking and Applications (AINA 2008), GinoWan, Okinawa, Japan, March 2008.
3rd ACM International Workshop on Performance Monitoring, Measurement, and Evaluation of Heterogeneous Wireless and Wired Networks (PM2HW2N 2008), Vancouver, British Columbia, Canada, October 2008.
3rd Workshop on multiMedia Applications over Wireless Networks (MediaWiN 2008), Marrakech, Morocco, July 6th, 2008.
Bruno Tuffin is a member of the “Comité des actions incitatives” for the “Conseil d'Orientation Scientifique et Technologique de l'INRIA” (COST).
The team's members have a variety of teaching responsibilities in the local environment (Ifsic, Telecom Bretagne, Rennes Mathematics Institute). At the Bac+5 level, N. Bouabdallah, R. Marie, G. Rubino, B. Sericola, A. Ksentini, B. Tuffin and C. Viho give different courses in two Masters programs (in Probability and in Computer Science), in the 3rd year of DIIC at the University of Rennes 1, and at Telecom Bretagne. The main subjects are networking, protocols, dimensioning problems, dependability analysis, etc. A. Ksentini is in charge of the 2nd year of the Master in Computer Science at the University of Rennes 1.
G. Rubino teaches Performance Evaluation of Computer Networks at the Lebanese University in Beirut.
Bruno Tuffin is associate Editor for INFORMS Journal on Computing.
Bruno Tuffin is associate Editor for Mathematical Methods of Operations Research.
Bruno Tuffin was Guest editor of a special issue of Performance Evaluation Journal (Volume 65, Issues 11+12, November 2008) for selected papers from Valuetools’07 .
Bruno Sericola is a member of the Editorial Advisory Board of The Open Operational Research Journal.
The Dionysos team dedicates a significant effort to standardization and certification in the telecommunications area. We participate in several working groups of the main telecommunication standardization institutes, such as the IETF (Internet Engineering Task Force) and ETSI (European Telecommunications Standards Institute). We are also active in the main mailing lists dealing with new generation networks and protocols. Several draft proposals and contributions to the definition of standards and RFCs (Requests For Comments) have been published. Our contributions today focus mainly on IPv6 and related protocols, such as IPv6 mobility.
The Dionysos team also has a major role in the worldwide certification process for IPv6 products launched by the IPv6 Forum, the “IPv6 Ready Logo Program”. For details, see
http://