COATI - 2013


Section: New Results

Network Design and Management

Participants : Julio Araújo, Jean-Claude Bermond, Luca Chiaraviglio, David Coudert, Frédéric Giroire, Alvinice Kodjo, Aurélien Lancin, Remigiusz Modrzejewski, Christelle Molle-Caillouet, Joanna Moulierac, Nicolas Nisse, Stéphane Pérennes, Truong Khoa Phan, Ronan Pardo Soares, Issam Tahiri.

Optimization in backbone networks

The notion of Shared Risk Link Groups (SRLG) has been introduced to capture survivability issues where some links of a network fail simultaneously. In this context, the diverse routing problem is to find a set of pairwise SRLG-disjoint paths between a given pair of end nodes of the network. This problem has been proved NP-complete in general and some polynomial instances have been characterized.

In [33], [32], we investigate the diverse routing problem in networks where the SRLGs are localized and satisfy the star property. This property states that a link may be subject to several SRLGs, but all links subject to a given SRLG are incident to a common node. We first provide counterexamples to the polynomial-time algorithm proposed in the literature for computing a pair of SRLG-disjoint paths in networks with SRLGs satisfying the star property, and then prove that this problem is in fact NP-complete. We have also characterized instances that can be solved in polynomial time or are fixed-parameter tractable, in particular when the number of SRLGs is constant, when the maximum degree of the vertices is at most 4, and when the network is a directed acyclic graph. Moreover, we have considered the problem of finding the maximum number of SRLG-disjoint paths in networks with SRLGs satisfying the star property. We have proved that this problem is NP-hard and hard to approximate. We have then provided exact and approximation algorithms for relevant subcases.
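The star property itself is easy to check mechanically. The following sketch (a hypothetical helper, not taken from [33], [32]) tests whether every link group shares a common endpoint:

```python
# Hypothetical helper (not from [33], [32]): check the star property, i.e.
# that all links of each SRLG are incident to a common node.
def satisfies_star_property(srlgs):
    """srlgs: dict mapping SRLG id -> non-empty list of links (u, v)."""
    for links in srlgs.values():
        common = set(links[0])          # candidate common endpoints
        for u, v in links[1:]:
            common &= {u, v}            # keep endpoints shared so far
        if not common:
            return False
    return True

star = {"g1": [(1, 2), (1, 3), (1, 4)]}   # all links touch node 1
not_star = {"g2": [(2, 3), (4, 5)]}       # no common endpoint
```

Each group is scanned once, so the check is linear in the total number of links.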

Wavelength assignment in WDM networks

Let $\mathcal{P}$ be a family of directed paths in a directed graph $G$. The load of an arc is the number of directed paths containing this arc. Let $\pi(G,\mathcal{P})$ be the maximum load over all arcs and let $w(G,\mathcal{P})$ be the minimum number of wavelengths (colours) needed to colour $\mathcal{P}$ in such a way that two directed paths with the same wavelength are arc-disjoint. These two parameters correspond respectively to the clique number and the chromatic number of the associated conflict graph, and $\pi(G,\mathcal{P})\le w(G,\mathcal{P})$. It was known that there exist directed acyclic graphs (DAGs) such that the ratio between $w(G,\mathcal{P})$ and $\pi(G,\mathcal{P})$ is arbitrarily large. In [18], solving a conjecture of an earlier article, we show that the same is true for a very restricted class of DAGs, the UPP-DAGs, those for which there is at most one directed path from one vertex to another. We also characterized the DAGs such that $\pi(G,\mathcal{P})=w(G,\mathcal{P})$ for all families of directed paths.
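The two parameters can be illustrated on a toy instance. The sketch below (illustrative only, not from [18]) computes the maximum arc load and a first-fit colouring of the conflict graph, which upper-bounds the wavelength number:

```python
# Toy illustration: pi(G, P) = maximum arc load, and a greedy colouring of
# the conflict graph as an upper bound on the wavelength number w(G, P).
def arc_load(paths):
    """Maximum number of paths sharing an arc; paths are vertex tuples."""
    load = {}
    for p in paths:
        for arc in zip(p, p[1:]):
            load[arc] = load.get(arc, 0) + 1
    return max(load.values())

def greedy_wavelengths(paths):
    """First-fit colouring: paths sharing an arc get different colours."""
    arcs = [set(zip(p, p[1:])) for p in paths]
    colour = []
    for i, a in enumerate(arcs):
        used = {colour[j] for j in range(i) if arcs[j] & a}
        c = 0
        while c in used:
            c += 1
        colour.append(c)
    return max(colour) + 1

paths = [(0, 1, 2), (1, 2, 3), (3, 1)]  # first two paths share arc (1, 2)
# Here both parameters equal 2, so the bound pi <= w is tight.
```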

Multi-operator microwave backhaul networks

In [35], we consider the problem of sharing the infrastructure of a backhaul network for routing. We investigate the revenue maximization problem for the physical network operator (PNO) subject to stochastic traffic requirements of multiple virtual network operators (VNO) and prescribed service level agreements (SLA). We use robust optimization to study the trade-off between revenue maximization and the allowed level of uncertainty in the traffic demands. This mixed integer linear programming model takes into account end-to-end traffic delays as an example of a quality-of-service requirement in an SLA. To show the effectiveness of our model, we present a study of the price of robustness, i.e., the additional price to pay in order to obtain a feasible solution for the robust scheme, on realistic scenarios.
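As a toy illustration of the price of robustness (with assumed numbers, not the MILP of [35]), consider the capacity to provision on a single link when at most `gamma` of its demands deviate to their peak simultaneously, in the spirit of Gamma-robust optimization:

```python
# Toy "price of robustness": capacity needed on one link when at most
# `gamma` demands deviate from nominal to their peak at the same time.
def robust_capacity(nominal, deviation, gamma):
    """Nominal sum plus the `gamma` largest possible deviations."""
    return sum(nominal) + sum(sorted(deviation, reverse=True)[:gamma])

nominal, deviation = [10, 8, 5], [4, 3, 2]
for gamma in range(4):
    print(gamma, robust_capacity(nominal, deviation, gamma))
# gamma = 0 gives the nominal capacity 23; gamma = 3 the worst case 32.
```

Increasing `gamma` buys protection against more simultaneous deviations at the cost of extra provisioned capacity, which is exactly the trade-off studied in the robust model.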

Energy efficiency

With one third of the world population online in 2013 and an international Internet bandwidth multiplied by more than eight since 2006, the ICT sector is a non-negligible contributor to worldwide greenhouse gas emissions and power consumption. Indeed, the power consumption of telecommunication networks has become a major concern for all the actors of the domain, and efforts are being made to reduce their share of the overall footprint of ICT and to support its foreseen growth in a sustainable way. In this context, the contributors of the European Network of Excellence TREND have developed innovative solutions to improve the energy efficiency of optical networks, summarized in [45].

Energy aware routing with redundancy elimination

Many studies have shown that energy-aware routing (EAR) can significantly reduce energy consumption of a backbone network. Redundancy Elimination (RE) techniques provide a complementary approach to reduce the amount of traffic in the network. In particular, the GreenRE model combines both techniques, offering potentially significant energy savings.

In [44] , we enhance the MIP formulation proposed in  [75] for the GreenRE model. We derive cutting planes, extending the well-known cutset inequalities, and report on preliminary computations.

In [37], we propose a concept for handling uncertain rates of redundant traffic within the GreenRE model, closing the gap between theoretical modeling and real-life data. To model redundancy rate uncertainty, the robust optimization approach of [73] is adapted and the problem is formally defined as a mixed integer linear program. An exemplary evaluation of this concept with real-life traffic traces and estimated fluctuations of data redundancy shows that this closer-to-reality model potentially offers significant energy savings in comparison with GreenRE and EAR.
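A minimal sketch of the underlying intuition, with assumed numbers rather than the paper's formulation: if redundancy elimination compresses a demand by an uncertain rate r in [r_min, r_max], a robust capacity check must assume the least favourable (smallest) rate:

```python
# Sketch with assumed numbers (not the MILP of [37]): residual load after
# redundancy elimination at an uncertain rate r; robustness means sizing
# the link for the least favourable rate r_min.
def compressed_load(demand, r):
    """Residual load after removing a fraction r of redundant traffic."""
    return demand * (1.0 - r)

demand, r_nominal, r_min = 100.0, 0.30, 0.10
nominal_load = compressed_load(demand, r_nominal)  # 70.0
robust_load = compressed_load(demand, r_min)       # 90.0 must be provisioned
```

The gap between the nominal and robust loads is the price paid for guaranteeing feasibility under redundancy-rate fluctuations.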

Energy Efficient Content Distribution

The basic protocols of the Internet are point-to-point in nature. However, the traffic is largely of a broadcast nature, with projections stating that as much as 80-90% of it will be video by 2016. This discrepancy leads to an inefficiency in which multiple copies of essentially the same messages travel in parallel through the same links. We have studied approaches to mitigate this inefficiency and reduce the energy consumption of future networks, in particular in [13].

In [29], we study the problem of reducing power consumption in an Internet Service Provider (ISP) network by designing the content distribution infrastructure managed by the operator. We propose an algorithm to optimally decide where to cache the content inside the ISP network. We evaluate our solution over two case studies driven by operators' feedback.

Recently, there has been a trend towards introducing content caches as an inherent capability of network equipment, with the objective of improving the efficiency of content distribution and reducing network congestion. In [57], [46], [29], we study the impact of in-network caches and content delivery network (CDN) cooperation on energy-efficient routing. Experimental results show that by placing a cache on each backbone router to store the most popular content, and by carefully choosing the best content-provider server for each demand to a CDN, we can save up to 23% of the power in the backbone.
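A back-of-envelope illustration of why caching saves transport energy, with assumed figures (not the measured 23%): only cache misses traverse the backbone, so transport energy scales with the miss ratio:

```python
# Assumed-figures sketch: energy to carry demands over `hops` backbone
# links; cache hits are served locally at negligible transport cost.
def transport_energy(demands, hit_ratio, hops):
    """Energy units consumed by the traffic that misses the cache."""
    return sum(d * (1.0 - hit_ratio) * hops for d in demands)

no_cache = transport_energy([10.0, 6.0], 0.0, 5)    # 80.0 energy units
with_cache = transport_energy([10.0, 6.0], 0.4, 5)  # only misses travel
savings = 1.0 - with_cache / no_cache               # fraction saved
```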

Distributed systems

Distributed Storage systems

In a P2P storage system using erasure codes, a data block is encoded into many redundant fragments. These fragments are then sent to distinct peers of the network. In [24], we study the impact of different placement policies for these fragments on the performance of storage systems.

In [39], we propose a new analytical framework that takes into account the correlation between data reconstructions when estimating the repair time and the probability of data loss. The models and schemes proposed are validated by mathematical analysis, an extensive set of simulations, and experimentation using the GRID5000 test-bed platform. This new model allows system designers to make a more accurate choice of system parameters as a function of their targeted data durability.
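For intuition, the classical independent-failure estimate, which the framework of [39] refines by modeling correlated reconstructions, is a simple binomial tail: with an (n, k) erasure code, a block is lost when fewer than k of its n fragments survive:

```python
from math import comb

# Back-of-envelope estimate under independent peer failures with
# probability p (the model of [39] refines this with correlations):
# a block is lost when more than n - k of its n fragments fail.
def loss_probability(n, k, p):
    """Binomial tail: probability that at least n - k + 1 fragments fail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

print(loss_probability(8, 4, 0.1))  # small but non-zero loss probability
```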

P2P Streaming systems

In [41], [68], we propose and analyze a simple localized algorithm to balance a tree. The motivation comes from live distributed streaming systems in which a source diffuses content to peers via a tree, each node forwarding the data to its children. Such systems are subject to high churn, with peers frequently joining and leaving the system. It is thus crucial to be able to repair the diffusion tree to allow efficient data distribution. In particular, due to bandwidth limitations, an efficient diffusion tree must ensure that node degrees are bounded. Moreover, to minimize the streaming delay, the depth of the diffusion tree must also be controlled. We propose here a simple distributed repair algorithm in which each node carries out local operations based on its degree and on the subtree sizes of its children.
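An illustrative local operation of this flavour (not the exact rule of [41], [68]): a node whose degree bound is exceeded delegates its smallest subtree to the child holding the largest one, using only locally available information:

```python
# Illustrative local operation: a node knows only its degree and the
# subtree sizes of its children; when the degree bound is exceeded, it
# delegates its smallest subtree to the child with the largest subtree.
def local_repair(children, max_degree):
    """children: dict child -> subtree size. Returns (new children, move),
    where move = (delegated child, new parent) or None if nothing to do."""
    children = dict(children)
    if len(children) <= max_degree:
        return children, None
    small = min(children, key=children.get)
    big = max((k for k in children if k != small), key=children.get)
    children[big] += children.pop(small)  # subtree of `small` moves under `big`
    return children, (small, big)

# A node with 3 children but degree bound 2 pushes subtree "c" under "a".
print(local_repair({"a": 5, "b": 3, "c": 1}, 2))
```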

Data gathering in radio networks

We study the problem of gathering information from the nodes of a radio network into a central node. We model the network of possible transmissions by a graph and consider a binary model of interference in which two transmissions interfere if the distance in the graph from the sender of one transmission to the receiver of the other is ${d}_{I}$ or less.

In [19], we give an algorithm to construct minimum-makespan transmission schedules for data gathering under the following hypotheses: the communication graph $G$ is a tree network, no buffering is allowed at intermediate nodes, and ${d}_{I}\ge 2$. In the interesting case in which all nodes in the network have to deliver an arbitrary positive number of packets, we provide a closed formula for the makespan of the optimal gathering schedule. Additionally, we consider the problem of determining the computational complexity of data gathering in general graphs and show that the problem is NP-complete. On the positive side, we design a simple ($1+2/{d}_{I}$)-factor approximation algorithm for general networks.

In [59], we focus on the gathering and personalized broadcasting problem in grids, still under the non-buffering model. In this setting, although the complexity of computing the optimal makespan in a grid remains open, we present algorithms, linear in the number of messages, that compute gathering schedules for ${d}_{I}=0,1,2$. In particular, we present an algorithm that achieves the optimal makespan up to a small additive constant. Note that the algorithms we present also provide a 2-approximation for gathering with buffering. All our results are proved in terms of personalized broadcasting.

In [20], we allow transmissions up to a distance ${d}_{T}$ and buffering at intermediate nodes. We focus on the specific case where the network is a path with the sink at an end vertex and where the traffic is unitary ($w\left(u\right)=1$ for all $u$); indeed, even this simple case turns out to be very difficult. We first give a new lower bound and a protocol whose gathering time differs from it only by a constant independent of the length of the path. We then present a method to construct incremental protocols which are optimal for many values of ${d}_{T}$ and ${d}_{I}$ (in particular when ${d}_{T}$ is prime).

In [50] , we focus on gathering uncertain traffic demands in mesh networks with multiple sources and sinks. The scheduling is relaxed into the round weighting problem in which a set of pairwise non-interfering links is called a round, and we seek to successively activate rounds in order to get enough capacity on links to route the demand from the set of sources to the set of sinks. We propose a new robust model considering traffic demand uncertainty, efficiently solved by column generation, and quantify the price of robustness, i.e., the additional cost to pay in order to obtain a feasible solution for the robust scheme.

Routing

Routing models evaluation

The Autonomous System (AS) level topology of the Internet, which currently comprises more than 40k ASes, is growing at a rate of about 10% per year. Under these conditions, the Border Gateway Protocol (BGP), the inter-domain routing protocol of the Internet, starts to show its limits, among others in terms of the number of routing table entries it can dynamically process and control. To overcome this challenging situation, the design but also the evaluation of alternative dynamic routing models, and their comparison with BGP, will be performed by means of simulation. However, existing routing model simulators such as DRMSim, the Dynamic Routing Model Simulator developed in COATI in collaboration with Alcatel-Lucent [72], are limited in terms of the number of routing table entries they can dynamically process and control on a single computer.

In [63], we have conducted a feasibility study of the extension of DRMSim to support the distributed parallel discrete-event paradigm. We have studied several distribution models and their associated communication overhead. In particular, we have evaluated the additional time required by a distributed simulation of BGP on topologies with 100k ASes compared to its sequential simulation. We show that such a distributed simulation of BGP is possible with a reasonable time overhead.

Complexity of Shortest Path Routing

In telecommunication networks, packets are carried from a source $s$ to a destination $t$ on a path determined by the underlying routing protocol. Most routing protocols belong to the class of shortest-path routing protocols. For better protection and efficiency, one wishes to use multiple (shortest) paths between two nodes. The routing protocol must therefore determine how the traffic from $s$ to $t$ is distributed among the shortest paths. In the protocol called OSPF-ECMP (Open Shortest Path First - Equal Cost Multi-Path), the traffic incoming at every node is uniformly balanced over all outgoing links that lie on shortest paths. In [43], [42], we show that the problem of maximizing even a single commodity flow for the OSPF-ECMP protocol cannot be approximated within any constant factor. Besides this main theorem, we derive some positive results, which include polynomial-time approximations and an exponential-time exact algorithm.
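The ECMP splitting rule is easy to state operationally: each node divides its incoming traffic evenly over its shortest-path next hops. A sketch on an assumed toy topology (not from [43], [42]):

```python
# Sketch on an assumed toy topology: OSPF-ECMP splits the traffic entering
# each node uniformly over its shortest-path next hops.
def ecmp_arc_flows(dag, order, source, demand):
    """dag: node -> next hops on shortest paths towards the destination;
    order: a topological order of the shortest-path DAG."""
    flow = {v: 0.0 for v in order}
    flow[source] = demand
    arc_flow = {}
    for v in order:
        succs = dag.get(v, [])
        if not succs or flow[v] == 0.0:
            continue
        share = flow[v] / len(succs)  # uniform split over next hops
        for u in succs:
            arc_flow[(v, u)] = arc_flow.get((v, u), 0.0) + share
            flow[u] += share
    return arc_flow

# s splits evenly towards a and b; both forward everything to t.
dag = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
flows = ecmp_arc_flows(dag, ["s", "a", "b", "t"], "s", 1.0)
```

The hardness result cited above comes from the fact that the operator controls only the link weights (hence the shortest-path DAG), not the split ratios themselves, which are forced to be uniform.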