Section: New Results

IP networks

Participants: Eitan Altman, Konstantin Avrachenkov.

Interdisciplinary study of the Internet access and of network neutrality

In our previous research we identified large inefficiencies that occur when one type of provider (e.g. an access provider) is allowed to impose costs on another type of provider (e.g. a content provider). This work, in which E. Altman collaborated with P. Bernhard (Inria project-team Biocore), S. Caron and G. Kesidis (both from Pennsylvania State Univ., USA), J. Rojas-Mora (Univ. of Barcelona, Spain), and S. Wong (Univ. of A Coruña, Spain), has now appeared in [96].

This investigation has been pursued in several directions. In [42], E. Altman, A. Legout (Inria project-team Planete) and Y. Xu (Univ. Avignon/LIA) study a hierarchical structure of ISPs, the economic impact of some caching placement policies, and more complex demand functions (the demands of users for content). In [33], E. Altman and the law specialist S. Wong (Univ. of A Coruña, Spain), in cooperation with the economist J. Rojas-Mora (Univ. of Barcelona, Spain), analyze the impact of network-neutrality legislation on the quality of service experienced by end users.

Adaptive monitoring system for IP networks

The remarkable growth of the Internet infrastructure and the increasing heterogeneity of applications and user behavior make ISP networks harder to manage and monitor, and raise the cost of any new deployment. The main consequence of this trend is a growing mismatch between existing monitoring solutions and the needs of management applications. In this context, in [62] K. Avrachenkov, I. Lassoued, A. Krifa and C. Barakat (all three from Inria project-team Planete) present the design of an adaptive centralized architecture that provides visibility over the entire network through a network-wide cognitive monitoring system. Practically, given a measurement task and a constraint on the volume of collected information, the proposed architecture drives the sampling rates on the interfaces of network routers to achieve the maximum possible accuracy, while adapting itself to changes in network traffic conditions. The authors tune the system parameters with the help of the FAST sensitivity test.
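
The core idea of budget-constrained sampling-rate assignment can be sketched as follows. This is a hypothetical illustration, not the architecture of [62]: it simply allocates a global collection budget across router interfaces in proportion to the square root of their traffic (a Neyman-style variance-reduction heuristic), whereas the actual system adapts rates from measured accuracy; all names and constants here are made up.

```python
# Sketch: given per-interface traffic volumes (packets/s) and a global
# budget on collected packets/s, assign each interface a sampling
# probability so the total collected volume stays within budget.
# Square-root-proportional allocation is an illustrative heuristic only.

def allocate_sampling_rates(traffic, budget):
    """traffic: packets/s per monitored interface; budget: packets/s to collect."""
    weights = [v ** 0.5 for v in traffic]
    total = sum(weights)
    rates = []
    for v, w in zip(traffic, weights):
        share = budget * w / total          # sampled packets/s granted here
        rates.append(min(1.0, share / v))   # sampling probability, capped at 1
    return rates

rates = allocate_sampling_rates([10000, 2000, 500], budget=1000)
```

Lightly loaded interfaces end up with higher sampling probabilities, which matches the intuition that rare traffic needs denser sampling for comparable accuracy.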

Size based scheduling

Size-based scheduling is a promising solution to improve the response time of small flows (mice) that have to share bandwidth with large flows (elephants). One important task here is to track the size of the ongoing flows at the router. However, most of the proposed size-based schedulers either track the size information of all flows in a brute-force way, or require changes at end-hosts; hence they are either not scalable or add complexity. In [55], E. Altman, D. Mon Divakaran (IIT Mandi, India) and P. Vicat-Blanc Primet (Lyatiss, France) propose a practical and scalable way of performing size-based scheduling, by identifying and 'de-prioritizing' elephants only at times of high load. They exploit TCP's behavior using a mechanism that detects a window of packets, called a spike, when the buffer length exceeds a certain threshold. This spike detection is used to identify elephant flows and thereafter de-prioritize them. Two-level processor-sharing (TLPS) scheduling is employed to serve flows from two queues, one holding the high-priority flows and the other the de-prioritized flows. They show that the proposed mechanism not only improves the response time of mice flows in a scalable way, but also gives better response times to the remaining flows, which are treated preferentially as long as they do not overload the high-priority queue.
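
The detect-and-demote logic can be illustrated with a toy two-queue scheduler. This is a simplified sketch under invented constants (buffer threshold, spike size), not the mechanism of [55]: a flow sending a run of back-to-back packets while the high-priority queue is above threshold is marked as an elephant and demoted.

```python
from collections import deque, defaultdict

BUFFER_THRESHOLD = 50   # illustrative: high-priority backlog that triggers detection
SPIKE_SIZE = 8          # illustrative: consecutive packets counted as a spike

class TwoLevelScheduler:
    def __init__(self):
        self.high = deque()                  # mice / not-yet-detected flows
        self.low = deque()                   # de-prioritized elephants
        self.run_length = defaultdict(int)   # consecutive-arrival counter per flow
        self.elephants = set()
        self.last_flow = None

    def enqueue(self, flow_id):
        # track runs of back-to-back packets from the same flow
        if flow_id == self.last_flow:
            self.run_length[flow_id] += 1
        else:
            self.run_length[flow_id] = 1
        self.last_flow = flow_id
        # spike detected only under high load: demote the flow
        if (len(self.high) > BUFFER_THRESHOLD
                and self.run_length[flow_id] >= SPIKE_SIZE):
            self.elephants.add(flow_id)
        (self.low if flow_id in self.elephants else self.high).append(flow_id)

    def dequeue(self):
        # strict priority: the high-priority queue is served first
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

Under light load the threshold is never crossed, so even elephants stay in the high-priority queue, which reflects the paper's point that de-prioritization happens only at times of high load.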

Accuracy of fluid models for bandwidth-sharing networks

Optimal control of stochastic bandwidth-sharing networks is typically difficult. To facilitate the analysis, deterministic analogues of stochastic bandwidth-sharing networks, the so-called fluid models, are often studied instead, as their optimal control can be found more easily. A tracking policy then translates the fluid optimal control back into a control policy for the stochastic model, so that fluid optimality is achieved asymptotically when the stochastic model is suitably scaled. In [20] K. Avrachenkov, A. Piunovsky and Y. Zhang (both from the University of Liverpool, UK) study the efficiency of the tracking policy, that is, how fast fluid optimality is approached in the stochastic model as a function of the scaling parameter. In particular, the result of [20] shows that, under certain conditions, the tracking policy can be as efficient as feedback policies.
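
The fluid-scaling idea behind this convergence can be illustrated on a single queue, which is far simpler than the bandwidth-sharing networks of [20]; the example below is only a sketch of the scaling, with arrival rate `lam`, service rate `mu` and scaling parameter `n` chosen arbitrarily. Arrivals occur at rate n*lam, services at rate n*mu, and the state is divided by n; as n grows, the scaled sample path approaches the fluid trajectory x(t) = max(x0 + (lam - mu)*t, 0).

```python
import random

def scaled_queue_path(n, lam, mu, x0, horizon, seed=0):
    """Simulate an M/M/1-type queue with rates scaled by n, state scaled by 1/n."""
    rng = random.Random(seed)
    q, t = int(x0 * n), 0.0
    while t < horizon:
        rate = n * lam + (n * mu if q > 0 else 0)   # total event rate
        t += rng.expovariate(rate)                   # time to next event
        if rng.random() < n * lam / rate:
            q += 1                                   # arrival
        elif q > 0:
            q -= 1                                   # service completion
    return q / n

# Deterministic fluid limit of the same queue.
fluid = lambda t, lam, mu, x0: max(x0 + (lam - mu) * t, 0.0)
```

For large n the gap between `scaled_queue_path` and `fluid` shrinks on the order of 1/sqrt(n); the question studied in [20] is precisely how fast such gaps vanish under a tracking policy.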

Bootstrap method for simulating bandwidth sharing

In [71], E. Altman, T. Jimenez and J. Rojas-Mora (both from Univ. Avignon/LIA) identify difficulties in evaluating through simulations the expected transfer time of a file when several TCP connections share a common bottleneck buffer. The main difficulty is that file sizes over the Internet have been reported to follow a Pareto distribution with shape parameter smaller than 1.5, which implies that the number of ongoing connections as well as the sojourn times have infinite variance. This has two consequences: one cannot estimate confidence intervals for the simulations using the Central Limit Theorem (CLT), and the simulation time needed to reach steady state is very long. The authors show how to solve both problems using the bootstrap approach.
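
The percentile-bootstrap construction of a confidence interval can be sketched as follows. This is a generic illustration of the bootstrap idea rather than the specific procedure of [71]: the observed transfer times are resampled with replacement, and the confidence interval is read off from empirical percentiles of the resampled means, avoiding the normal approximation that fails when the variance is infinite. The Pareto data below are synthetic, for illustration only.

```python
import random

def bootstrap_ci(samples, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `samples`."""
    rng = random.Random(seed)
    k = len(samples)
    means = sorted(
        sum(rng.choices(samples, k=k)) / k   # mean of one resample
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi

rng = random.Random(1)
data = [rng.paretovariate(1.4) for _ in range(500)]  # heavy tail: infinite variance
lo, hi = bootstrap_ci(data)
```

Because the interval is built from quantiles of the resampled means rather than from a variance estimate, it remains meaningful even when the CLT-based interval does not.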