Section: New Results

Network Design and Management

Participants : Jean-Claude Bermond, Christelle Caillouet, David Coudert, Frédéric Giroire, Nicolas Huin, Joanna Moulierac, Stéphane Pérennes.

Network design is a very broad subject that concerns all kinds of networks. In telecommunications, networks can be either physical (backbone, access, wireless, ...) or virtual (logical). The objective is to design a network able to route (given, estimated, or dynamic) traffic under some constraints (e.g., capacity) and with some quality-of-service (QoS) requirements. Usually, the traffic is expressed as a family of requests with parameters attached to them. To satisfy these requests, we need to find one (or several) paths between their end nodes. The set of paths is chosen according to the technology, the protocol, or the QoS constraints.

We mainly focus on three topics. First, fixed wireless backhaul networks, with the objective of achieving high network reliability. Second, software-defined networks (SDN), in which a centralized controller is in charge of the control plane and takes the routing decisions for the switches and routers based on the network conditions. This new technology brings new constraints and therefore new algorithmic problems, such as the limited space available in switches to store forwarding rules. Third, energy efficiency within backbone networks and for content distribution. We focus on redundancy elimination, and we use SDN as a tool to turn off links in real networks. We validated our algorithms on a real SDN platform: a testbed with SDN hardware, in particular an HP 5412 switch with 96 ports, hosted at the I3S laboratory, on which a complete fat-tree architecture with 16 servers can be built.

Fault tolerance

Survivability in networks with groups of correlated failures

The notion of Shared Risk Link Groups (SRLG) captures survivability issues when a set of links of a network may fail simultaneously. The theory of survivable network design relies on basic combinatorial objects that are rather easy to compute in the classical graph models: shortest paths, minimum cuts, or pairs of disjoint paths. In the SRLG context, the optimization criterion for these objects is no longer the number of edges they use, but the number of SRLGs involved. Unfortunately, computing these combinatorial objects is NP-hard and hard to approximate with this objective in general. Nevertheless, some objects can be computed in polynomial time when the SRLGs satisfy certain structural locality properties that correspond to practical situations, namely the star property (all links affected by a given SRLG are incident to a unique node) and the span 1 property (the links affected by a given SRLG form a connected component of the network). The star property is defined in a multi-colored model, where a link can be affected by several SRLGs, while the span property is defined only in a mono-colored model, where a link can be affected by at most one SRLG. In [23], we extended these notions to characterize new cases in which these optimization problems can be solved in polynomial time. We also investigated the computational impact of the transformation from the multi-colored model to the mono-colored one. Reported experimental results validate the proposed algorithms and principles.
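In this metric, the cost of a path is the number of distinct SRLGs it meets rather than its number of edges. A minimal sketch of evaluating this cost in the multi-colored model (the toy network, group identifiers, and function name are illustrative, not taken from [23]):

```python
def srlg_cost(path_edges, srlg_of):
    """Number of distinct SRLGs (colors) used by a path.

    path_edges -- list of edges (u, v)
    srlg_of    -- dict mapping an edge to the set of SRLGs affecting it
    """
    groups = set()
    for e in path_edges:
        groups |= srlg_of.get(e, set())
    return len(groups)

# Toy network: two routes from a to d. The 3-edge route crosses 3 risk
# groups; the longer route shares a single risk group on all its links.
srlg_of = {
    ("a", "b"): {1}, ("b", "c"): {2}, ("c", "d"): {3},   # short route
    ("a", "x"): {4}, ("x", "y"): {4}, ("y", "d"): {4},   # long route, one SRLG
}
```

Here the long route is preferable under the SRLG metric (cost 1 versus 3) although it uses the same number of edges, which is exactly why the classical shortest-path machinery no longer applies directly.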

Reliability of fixed wireless backhaul networks

The reliability of a fixed wireless backhaul network is the probability that the network can meet all the communication requirements considering the uncertainty (e.g., due to weather) in the maximum capacity of each link. In [48], we provide an algorithm to compute the exact reliability of a backhaul network, given a discrete probability distribution on the possible capacities available at each link. The algorithm computes a conditional probability tree, where each leaf in the tree requires a valid routing for the network. Any such tree provides upper and lower bounds on the reliability, and the algorithm improves these bounds by branching in the tree. We also consider the problem of determining the topology and configuration of a backhaul network that maximizes reliability subject to a limited budget. We provide an algorithm that exploits properties of the conditional probability tree used to calculate the reliability of a given network design. We perform a computational study demonstrating that the proposed methods can calculate the reliability of large backhaul networks, and can optimize the topology of modestly sized networks.
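The bounding principle of the conditional probability tree can be sketched in a deliberately simplified setting, where the feasibility check at a leaf reduces to meeting a demand across a single cut of parallel links (in [48] each leaf requires a valid routing for the whole network); the function and data layout are illustrative:

```python
def reliability_bounds(links, demand, max_depth):
    """Lower/upper bounds on reliability via a conditional probability tree.

    links     -- one dict {capacity: probability} per parallel link of a cut
    demand    -- capacity required across the cut
    max_depth -- branching budget; unresolved leaves only widen the bounds
    """
    lo = up = 0.0
    def branch(i, prob, cap):
        nonlocal lo, up
        if cap >= demand:                       # feasible in every extension
            lo += prob; up += prob
            return
        if cap + sum(max(d) for d in links[i:]) < demand:
            return                              # infeasible in every extension
        if i >= max_depth:                      # unresolved: upper bound only
            up += prob
            return
        for c, p in links[i].items():           # branch on link i's state
            branch(i + 1, prob * p, cap + c)
    branch(0, 1.0, 0.0)
    return lo, up
```

With two parallel links each offering capacity 2 with probability 0.9 (and 0 otherwise) and a demand of 2, full branching returns the exact reliability 0.99, while stopping after one level yields the looser interval [0.9, 1.0], illustrating how deeper branching tightens the bounds.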

Fault tolerance of linear access networks

In [52], we study the disconnection of a moving vehicle from a linear access network composed of cheap WiFi access points, in the context of telecommuting in massive transportation systems. Concretely, we analyze the probability for a user to experience a disconnection longer than a threshold t∗, leading to a disruption of all ongoing communications between the vehicle and the infrastructure network. We provide an approximation formula to estimate this probability for large networks. We then carry out a sensitivity analysis and provide a guide for operators when choosing the parameters of the networks. We focus on two scenarios: an intercity bus and an intercity train. Finally, we show that such systems are viable, as they attain a very low probability of long disconnections with a very low maintenance cost.
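The quantity analyzed can also be estimated by a toy Monte Carlo simulation: if access points fail independently, a disconnection longer than t∗ corresponds to a long-enough run of consecutive failed APs along the line (the paper derives an approximation formula instead of simulating; all parameters and names here are illustrative):

```python
import random

def long_disconnection_prob(n_aps, q, run_len, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that some run of at least
    `run_len` consecutive access points is failed.

    n_aps   -- number of APs along the line
    q       -- independent failure probability of each AP
    run_len -- number of consecutive failed APs equivalent to exceeding t*
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        run = best = 0
        for _ in range(n_aps):
            run = run + 1 if rng.random() < q else 0   # extend or reset run
            best = max(best, run)
        if best >= run_len:
            hits += 1
    return hits / trials
```

The run length plays the role of the threshold t∗ once rescaled by the vehicle speed and the AP spacing.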

Routing in Software Defined Networks (SDN)

Software-Defined Networking (SDN), in particular OpenFlow, is a new networking paradigm enabling innovation through network programmability. SDN is gaining momentum with the support of major manufacturers. Over the past few years, many applications have been built using SDN, such as server load balancing, virtual-machine migration, traffic engineering, and access control.

MINNIE: an SDN World with Few Compressed Forwarding Rules

While SDN brings flexibility in the management of flows within the data center fabric, this flexibility comes at the cost of smaller routing table capacities. Indeed, the Ternary Content Addressable Memory (TCAM) needed by SDN devices has smaller capacity than the CAMs used in legacy hardware. In [34], [54], we investigate compression techniques to maximize the utility of SDN switches' forwarding tables. We validate our algorithm, called MINNIE, with intensive simulations for well-known data center topologies, to study its efficiency and compression ratio for a large number of forwarding rules. Our results indicate that MINNIE scales well, being able to deal with around a million different flows with fewer than 1,000 forwarding entries per SDN switch, while requiring negligible computation time. To assess the operational viability of MINNIE in real networks, we deployed a testbed able to emulate a k=4 fat-tree data center topology. We demonstrate, on the one hand, that even with a small number of clients, the limit on the number of rules is reached if no compression is performed, increasing the delay of new incoming flows. MINNIE, on the other hand, drastically reduces the number of rules that need to be stored, with no packet losses nor detectable extra delays if routing lookups are done in ASICs. Hence, both simulations and experimental results suggest that MINNIE can be safely deployed in real networks, providing compression ratios between 70% and 99%.
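The compression idea can be sketched with a simplification of MINNIE's aggregation: install a default rule for the most common output port, one wildcard rule per destination whose dominant port differs from the default, and exact rules only for the remaining exceptions (the actual algorithm also aggregates by source and keeps the best of the three options; rule layout and names below are illustrative):

```python
from collections import Counter

def compress_table(rules):
    """Compress a forwarding table {(src, dst): port} into prioritized rules."""
    default_port = Counter(rules.values()).most_common(1)[0][0]
    by_dst = {}
    for (src, dst), port in rules.items():
        by_dst.setdefault(dst, Counter())[port] += 1
    # one wildcard (*, dst) rule when the dominant port differs from default
    wildcards = {d: c.most_common(1)[0][0] for d, c in by_dst.items()
                 if c.most_common(1)[0][0] != default_port}
    # exact rules only for flows covered by neither a wildcard nor the default
    exact = [(s, d, p) for (s, d), p in rules.items()
             if p != wildcards.get(d, default_port)]
    # first-match-wins priority: exact, then destination wildcards, then default
    return exact + [("*", d, p) for d, p in wildcards.items()] \
                 + [("*", "*", default_port)]

def lookup(table, src, dst):
    """First matching rule wins, as with rule priorities in a TCAM."""
    for s, d, p in table:
        if s in (src, "*") and d in (dst, "*"):
            return p

rules = {("s1", "d1"): 1, ("s2", "d1"): 1, ("s3", "d1"): 1,
         ("s1", "d2"): 2, ("s2", "d2"): 2, ("s3", "d2"): 1}
table = compress_table(rules)   # 3 rules instead of 6, same forwarding behavior
```

The priority ordering is essential: the semantics of the compressed table are preserved only because exact exceptions are matched before the wildcard and default rules.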

Energy-Aware Routing in Software-Defined Networks

In [51], we focus on using SDN for energy-aware routing (EAR). Since traffic load has a small influence on the power consumption of routers, EAR allows unused devices to be put into sleep mode to save energy. SDN can collect the traffic matrix and then compute routing solutions that satisfy QoS requirements while minimizing energy consumption. However, prior works on EAR have assumed that the forwarding table of an OpenFlow switch can hold an infinite number of rules. In practice, this assumption does not hold, since such flow tables are implemented in Ternary Content Addressable Memory (TCAM), which is expensive and power-hungry. We consider the use of wildcard rules to compress the forwarding tables. We propose optimization methods to minimize the energy consumption of a backbone network while respecting capacity constraints on links and rule-space constraints on routers. In detail, we present two exact formulations using Integer Linear Programming (ILP) and introduce efficient heuristic algorithms. Based on simulations on realistic network topologies, we show that, using this smart rule-space allocation, it is possible to save almost as much power as with the classical EAR approach.
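The core EAR mechanism can be illustrated with a deliberately simplified greedy heuristic: switch off lightly loaded links as long as the network remains connected (the formulations in [51] additionally enforce link capacities, QoS, and rule-space limits; this sketch and its names are illustrative, not the paper's algorithm):

```python
from collections import deque

def greedy_ear(nodes, links):
    """Greedily turn off links, lightest load first, keeping connectivity.

    nodes -- list of node names
    links -- dict (u, v) -> traffic load
    Returns the set of links left powered on.
    """
    active = set(links)

    def connected(edges):
        adj = {n: [] for n in nodes}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, queue = {nodes[0]}, deque([nodes[0]])
        while queue:                       # BFS from an arbitrary node
            for w in adj[queue.popleft()]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) == len(nodes)

    for e in sorted(links, key=links.get):     # lightest load first
        if connected(active - {e}):
            active.discard(e)
    return active
```

On any connected input this single pass ends with a spanning tree (every remaining link is a bridge), the extreme point of energy saving when capacities are ignored; real instances must keep extra links to carry the traffic.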

Reducing Networks' Energy Consumption

Due to the increasing impact of ICT (Information and Communication Technology) on power consumption and worldwide gas emissions, energy-efficient ways to design and operate backbone networks are becoming a new concern for network operators. Recently, energy-aware routing (EAR) has gained increasing popularity in the networking research community. The idea is that traffic demands are redirected over a subset of the network devices, allowing other devices to sleep to save energy. We studied variants of this problem.

Energy efficient Content Distribution

To optimize energy efficiency in networks, operators try to switch off as many network devices as possible. Recently, there has been a trend to introduce content caches as an inherent capability of network equipment, with the objective of improving the efficiency of content distribution and reducing network congestion. In [36], we study the impact of using in-network caches and CDN cooperation on energy-efficient routing. We formulate this problem as Energy Efficient Content Distribution. The objective is to find a feasible routing that minimizes the total energy consumption of the network, subject to satisfying all the demands and the link capacities. We exhibit the range of parameters (size of caches, popularity of content, demand intensity, etc.) for which caches are useful. Experimental results show that by placing a cache on each backbone router to store the most popular content, and by choosing the best content provider server for each demand to a CDN, we can save up to 23% of the power in the backbone, of which 16% is attributable solely to the caches.
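The cache-versus-CDN decision underlying the model can be sketched with a toy energy proxy, where consumption is proportional to the hops a demand traverses (illustrative only; the model in [36] accounts for actual link and cache power, and all names below are made up):

```python
def serve_demands(demands, cached, cdn_hops):
    """Total hop count when each demand is served by the local cache if it
    holds the content, and otherwise by the closest CDN server.

    demands  -- list of (node, content) pairs
    cached   -- dict node -> set of contents stored in its cache
    cdn_hops -- dict node -> {cdn_server: hop distance}
    """
    hops = 0
    for node, content in demands:
        if content in cached.get(node, set()):
            pass                                  # cache hit: zero extra hops
        else:
            hops += min(cdn_hops[node].values())  # best CDN server for demand
    return hops
```

Caching the most popular content on each router removes the longest-haul demands from the backbone, which is the mechanism behind the reported savings.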

Energy-Efficient Service Function Chain Provisioning

Network Function Virtualization (NFV) is a promising network architecture concept to reduce operational costs. In legacy networks, network functions, such as firewalls or TCP optimization, are performed by specific hardware. In networks enabling NFV coupled with the Software-Defined Networking (SDN) paradigm, network functions can be implemented dynamically on generic hardware. This is of primary interest for implementing energy-efficient solutions, which require dynamically adapting resource usage to the demands. In [53], [55], we study how to use NFV coupled with SDN to improve the energy efficiency of networks. We consider a setting in which a flow has to go through a Service Function Chain, that is, several network functions in a specific order. We propose a decomposition model that relies on lightpath configuration to solve the problem. We show that virtualization yields between 30% and 55% energy savings for networks of different sizes.
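The chaining constraint can be illustrated with a toy placement: the functions of a chain are placed greedily, in order, on the nodes of a flow's path, never moving backwards (node capacities stand in for generic hardware resources; the energy and lightpath dimensions of [53], [55] are omitted, and all names are illustrative):

```python
def place_chain(path, capacity, chain):
    """Place the functions of a service chain, in order, along a path.

    path     -- ordered list of nodes the flow traverses
    capacity -- dict node -> number of functions it can host
    chain    -- ordered list of function names
    Returns {function: node}, or None if the chain cannot be placed.
    """
    placement, i = {}, 0
    for f in chain:
        # advance past nodes that are already full (order is preserved
        # because i never decreases)
        while i < len(path) and \
              sum(1 for n in placement.values() if n == path[i]) \
              >= capacity.get(path[i], 0):
            i += 1
        if i == len(path):
            return None
        placement[f] = path[i]
    return placement
```

Because the index only advances, a firewall placed upstream can never end up after a function that must follow it, which is exactly the ordering constraint of a Service Function Chain.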

Other results

Well Balanced design for Data placement

In [17], we considered a problem motivated by data placement, in particular data replication in distributed storage and retrieval systems. We are given a set V of v servers along with b files (data, documents). Each file is replicated on exactly k servers. A placement consists in finding a family of b subsets of V (representing the files), called blocks, each of size k. Each server has some probability of failing, and we want to find a placement that minimizes the variance of the number of available files. It was conjectured that there always exists an optimal placement (with variance better than that of any other placement for any value of the failure probability). We show that the conjecture is true if there exists a well balanced design, that is, a family of blocks, each of size k, such that each j-element subset of V, 1 ≤ j ≤ k, belongs to the same or almost the same number of blocks (difference at most one). The existence of well balanced designs is a difficult problem, as it contains as a subproblem the existence of Steiner systems. We completely solve the case k=2 and give bounds and constructions for k=3 and some values of v and b.
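The optimization criterion can be evaluated in closed form: with independent server failures, the variance of the number of available files depends only on the block sizes and their pairwise unions. A minimal sketch, assuming a file is available iff at least one of its k servers survives (function name and toy placements are illustrative):

```python
from itertools import combinations

def variance_available(blocks, p):
    """Exact variance of the number of available files.

    blocks -- list of k-element server sets (one block per file)
    p      -- independent failure probability of each server
    A file is unavailable iff all k of its servers fail (probability p^k);
    the covariance of two files is p^|union| - p^(2k).
    """
    k = len(blocks[0])
    var = len(blocks) * (p ** k) * (1 - p ** k)        # Bernoulli variances
    for b1, b2 in combinations(blocks, 2):
        var += 2 * (p ** len(b1 | b2) - p ** (2 * k))  # pairwise covariances
    return var
```

For k=2, v=4, b=2, and p=0.5, the disjoint (well balanced) placement {1,2},{3,4} has variance 0.375, while stacking both files on the same pair of servers doubles it to 0.75, illustrating why balanced designs minimize the variance.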

Study of Repair Protocols for Live Video Streaming Distributed Systems

In [33], we study distributed systems for live video streaming. These systems can be of two types: structured and unstructured. In an unstructured system, the diffusion is done opportunistically. The advantage is that it smoothly handles churn, that is, the arrival and departure of users, which is very high in live streaming systems. In contrast, in a structured system, the diffusion of the video is done using explicit diffusion trees. The advantage is that the diffusion is very efficient, but the structure is broken by churn. In this paper, we propose simple distributed repair protocols to maintain, under churn, the diffusion tree of a structured streaming system. We study these protocols using formal analysis and simulation. In particular, we provide estimations of the system metrics: bandwidth usage, delay, and number of interruptions of the streaming. Our work shows that structured streaming systems can be efficient and resistant to churn.
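The flavor of such a repair can be sketched with a centralized toy version, in which the children of a departed node reattach to their grandparent (the protocols in [33] are distributed and must also respect the bandwidth available at each node; this sketch ignores both, and the names are illustrative):

```python
def repair(parent, node):
    """Repair a diffusion tree when `node` departs.

    parent -- dict child -> parent (the root appears only as a value)
    Reattaches every child of the departed node to the node's own parent,
    keeping the remaining peers connected to the stream.
    """
    grandparent = parent[node]
    for child in [c for c, p in parent.items() if p == node]:
        parent[child] = grandparent
    del parent[node]
    return parent
```

In practice reattaching all orphans to one node may exceed its upload bandwidth, which is precisely the trade-off (repair delay versus load) the studied protocols balance.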

Gathering in radio networks

In [16], we consider the problem of gathering information in a gateway in a radio mesh access network. Due to interferences, calls (transmissions) cannot be performed simultaneously. This leads us to define a round as a set of non-interfering calls. Following the work of Klasing, Morales and Pérennes, we model the problem as a Round Weighting Problem (RWP), in which the objective is to minimize the overall period of non-interfering call activations (total number of rounds) providing enough capacity to satisfy the throughput demand of the nodes. We develop tools to obtain lower and upper bounds for general graphs. Then, more precise results are obtained considering a symmetric interference model based on graph distance, called the distance-d interference model (the particular case d = 1 corresponds to the primary node model). We apply the presented tools to get lower bounds for grids with the gateway either in the middle or in the corner. We obtain upper bounds which in most cases match the lower bounds, using strategies that either route the demand of a single node or route simultaneously flow from several source nodes. We therefore obtain exact and constructive results for grids, in particular for the case of uniform demands, answering a problem posed by Klasing, Morales and Pérennes.
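Under the primary node model (d = 1), a round is simply a matching: no two calls of a round may share a node. A greedy round-packing sketch illustrates the notion (this is only an illustration of rounds, not the RWP bounding machinery of [16]; names are made up):

```python
def schedule_rounds(calls):
    """Greedily pack calls into rounds under the primary node model.

    calls -- list of edges (u, v); a round may not contain two calls
             sharing a node (each round is a matching).
    """
    rounds = []
    for u, v in calls:
        for r in rounds:
            # the call fits if neither endpoint appears in the round
            if all(u not in c and v not in c for c in r):
                r.append((u, v))
                break
        else:
            rounds.append([(u, v)])
    return rounds
```

On a path a-b-c-d with one call per edge, the two outer calls share no node and fit in one round, while the middle call needs a second round; RWP then asks how many such rounds (with multiplicities) are needed to carry a whole demand towards the gateway.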