

Section: New Results

Open Network Architecture

Participants : Bruno Astuto Arouche Nunes, Chadi Barakat, Daniel Camara, Walid Dabbous, Lucia Guevgeozian Odizzio, Young-Hwan Kim, Mohamed Amine Larabi, Arnaud Legout, Emilio Mancini, Xuan-Nam Nguyen, Thierry Parmentelat, Alina Quereilhac, Damien Saucez, Julien Tribino, Thierry Turletti, Frédéric Urbani.

  • Delay Tolerant Networks

     

    Delay Tolerant Networks (DTNs) are wireless networks in which disconnections may occur frequently. In order to achieve data delivery in such challenging environments, researchers have proposed the use of store-carry-and-forward protocols: a node may store a message in its buffer and carry it along for long periods of time, until an appropriate forwarding opportunity arises. Multiple message replicas are often propagated to increase the delivery probability. This combination of long-term storage and replication imposes a high storage and bandwidth overhead. Thus, efficient scheduling and drop policies are necessary to (i) decide the order in which messages should be replicated when contact durations are limited, and (ii) choose which messages should be discarded when nodes' buffers operate close to their capacity.

    We worked on a content-centric dissemination algorithm for delay-tolerant networks, called CEDO, that distributes content to multiple receivers over a DTN. CEDO assigns a utility to each content item published in the network; this value gauges the contribution of a single content replica to the network's overall delivery rate. CEDO performs buffer management by first calculating the delivery-rate utility of each cached content replica and then discarding the least useful item. When an application requests content, the node supporting the application looks for the content in its cache and immediately delivers it to the application if the content is stored in memory. If the request cannot be satisfied immediately, the node stores the pending request in a table. When the node meets another device, it sends the list of all pending requests to its peer; the peer tries to satisfy this list by sending the requester all the matching content stored in its own buffer. A meeting between a pair of devices might not last long enough for all requested content to be sent. We address this problem by sequencing data transmissions in order of decreasing delivery-rate utility: a content item with few replicas in the network has a high delivery-rate utility, so such items must be transmitted first to avoid degrading the overall content delivery rate. The node delivers the requested content to the application as soon as it receives it in its buffer. We implemented CEDO over the CCNx protocol, which provides the basic tools for requesting, storing, and forwarding content; a sketch of the buffer-management and scheduling logic is given below. Detailed information on CEDO and the implementation work can be found in [22] and at the following web page: http://planete.inria.fr/Software/CEDO/ .
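
    The two core operations of CEDO can be summarized by the following Python sketch. It is purely illustrative: the Content fields, the utility formula, and the helper names are our own simplifications, not the actual CEDO implementation (whose utility is derived analytically in [22] and which runs over CCNx).

        from dataclasses import dataclass

        @dataclass
        class Content:
            name: str
            replicas: int    # estimated number of replicas in the network
            requests: float  # estimated request rate for this content

        def utility(c: Content) -> float:
            # Illustrative delivery-rate utility: items in high demand but
            # with few replicas contribute most to the overall delivery
            # rate. CEDO's actual utility is derived analytically in [22];
            # this stand-in only preserves the ordering idea.
            return c.requests / max(c.replicas, 1)

        def drop_least_useful(buffer: list, capacity: int) -> None:
            # Buffer management: while over capacity, drop the cached
            # replica with the lowest delivery-rate utility.
            while len(buffer) > capacity:
                buffer.remove(min(buffer, key=utility))

        def transmission_order(buffer: list, pending: set) -> list:
            # Scheduling: during a contact, send matching items in
            # decreasing utility order so rare items go first and survive
            # short contacts.
            return sorted((c for c in buffer if c.name in pending),
                          key=utility, reverse=True)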

  • Predicting spatial node density in mobile ad-hoc networks

     

    User mobility is of critical importance when designing mobile networks. In particular, “waypoint” mobility has been widely used as a simple way to describe how humans move. This work introduces the first analytical framework for modeling waypoint-based mobility. The proposed framework is simple, yet general enough to model any waypoint-based mobility regime. It employs first-order ordinary differential equations to model the spatial density of participating nodes as a function of (1) the probability of moving between two locations within the geographic region under consideration, and (2) the rate at which nodes leave their current location. We validate our models against real user mobility recorded in GPS traces collected in three different scenarios. Moreover, we show that our modeling framework can be used to analyze the steady-state behavior of the spatial node density resulting from a number of synthetic waypoint-based mobility regimes, including the widely used Random Waypoint (RWP) model. Another contribution of the proposed framework is to show that using the well-known preferential attachment principle to model human mobility exhibits behavior similar to random mobility, where the original spatial node density distribution is not preserved. Finally, as an example application of our framework, we discuss using it to generate steady-state node density distributions to prime mobile network simulations. This work was done in collaboration with Dr. Katia Obraczka, from UC Santa Cruz, and was published in WINET [12] .
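
    To give a flavour of the balance equations involved (our notation, an illustrative sketch rather than the exact formulation of [12]), let $n_i(t)$ be the expected number of nodes at location $i$ at time $t$, $\lambda_i$ the rate at which nodes leave location $i$, and $p_{ji}$ the probability that a node leaving $j$ moves to $i$. A first-order ODE for the spatial node density then reads

        \frac{dn_i(t)}{dt} = -\lambda_i\, n_i(t) + \sum_{j \neq i} p_{ji}\, \lambda_j\, n_j(t),

    and its fixed point ($dn_i/dt = 0$) yields the steady-state density that can be used to prime simulations.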

  • Software Defined Networking in Heterogeneous Networked Environments

     

    We worked on exploring the software-defined networking paradigm to facilitate the implementation and large-scale deployment of new network protocols and services in heterogeneous networked environments. Our activities related to this research thrust are described hereafter. We wrote a survey of the emerging field of Software-Defined Networking (SDN), which is currently attracting significant attention from both academia and industry. The field is quite recent, yet growing at a very fast pace, and important research challenges remain to be addressed. We looked at the history of programmable networks, from early ideas to recent developments. In particular, we described the SDN architecture in detail as well as the OpenFlow standard. We presented current SDN implementations and testing platforms and examined network services and applications that have been developed based on the SDN paradigm. We concluded with a discussion of future directions enabled by SDN, ranging from support for heterogeneous networks to Information-Centric Networking (ICN). The survey will be published in 2014 in the IEEE Communications Surveys & Tutorials journal [32] .

    We have also specified a number of use cases motivating the need for extending the SDN model to heterogeneous networked environments. Such environments consist of infrastructure-based and infrastructure-less networks. These specifications and use cases were summarized in a recent publication [19] .

    We have also implemented a capacity sharing platform by leveraging SDN in hybrid networked environments, i.e., environments that consist of infrastructure-based as well as infrastructure-less networks. The proposed SDN-based framework provides flexible, efficient, and secure capacity sharing solutions in a variety of hybrid network scenarios. In the paper published at the Capacity Sharing Workshop (CSWS 2013) [40] , we identify the challenges raised by capacity sharing in hybrid networks, describe our framework in detail and how it addresses these challenges, and discuss implementation issues.

    The aforementioned capacity sharing work is just one application and a preliminary step in our longer-term effort. We have also started to specify H-SDN protocols based on the use cases mentioned above, including the capacity sharing use case. These efforts are part of a broader work in which we propose a framework to enable the implementation and deployment of more generic H-SDN networks and applications. This framework addresses important issues regarding H-SDN deployment, such as security, improved scalability and performance through the distribution of SDN control, and seamless handover of mobile stations, to name a few. We have targeted MobiSys 2014 as a venue for publishing our proposal and results on this topic [39] .

  • Rule Placement in Software-Defined Networking

     

    OpenFlow is a new communication standard that decouples the control and data planes to simplify traffic management. More precisely, OpenFlow switches populate their forwarding tables by opportunistically querying a centralized controller for flows whose rules (i.e., forwarding actions) are not yet installed. However, the flexibility offered by this new paradigm comes at the expense of extra signaling overhead as, in practice, switches might not be able to store all rules in their local forwarding tables. The question of which rules to install then becomes essential. In our research, we leverage the fact that some flows are more important to manage than others, and thus formulate an optimal rule placement problem for OpenFlow switches that ensures the most valuable traffic is matched by its appropriate rules while respecting switch and link capacity constraints. The rest of the traffic, with no installed rules, follows a default, yet less appropriate, path within the network. We have formulated and solved this optimization problem in the case of realistic operational needs, and proved that the optimal placement of rules is NP-hard. The intrinsic complexity of the problem led us to design a greedy heuristic that we evaluated with two representative use cases: BGP multihoming and Access Control Lists. On the one hand, the evaluation shows the versatility and generality of the optimization problem; on the other hand, it demonstrates that apparently simple heuristics are still efficient. A sketch of such a greedy placement is given below. We are now extending this work to support traffic dynamics and mobility. This work is currently under submission.
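
    As an illustration of the greedy approach (our own simplification, not the exact heuristic under submission): rank rules by the traffic value they match and install them along their paths while table space remains, letting everything else fall back to the default path.

        def greedy_rule_placement(rules, table_capacity):
            """Illustrative greedy placement. 'rules' is a list of
            (value, switches) pairs: 'value' is the traffic value the rule
            matches, 'switches' the switches on its path that must hold it.
            'table_capacity' maps each switch to its remaining table space.
            Rules that cannot be placed follow the default path."""
            placed, default = [], []
            # Consider the most valuable traffic first.
            for value, switches in sorted(rules, key=lambda r: -r[0]):
                if all(table_capacity[s] >= 1 for s in switches):
                    for s in switches:
                        table_capacity[s] -= 1  # one table entry per switch
                    placed.append((value, switches))
                else:
                    default.append((value, switches))
            return placed, default

    For instance, with rules [(10, ['s1', 's2']), (3, ['s1'])] and capacities {'s1': 1, 's2': 1}, the high-value rule is installed on both switches and the low-value one is left to the default path.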

  • Information-Centric Networking and economic aspects

     

    With the explosion of broadband Over-The-Top (OTT) services all around the world, the Internet is autonomously migrating toward overlay and incrementally deployable content distribution infrastructures. Information-Centric Networking (ICN) technologies are natural candidates to bind and distribute popular content to users more efficiently. However, the strategic incentives for both users and ISPs to exploit ICN are much less understood to date. In this work, we shed light on how OTT providers should shape prices and discounts to motivate ICN usage, depending on their awareness of content distribution costs. The Internet ecosystem is fast and dynamic, and new ideas can rapidly reach millions of users worldwide without having to rely on any special involvement of intermediate transit networks. In this context, Over-The-Top broadband content providers can leverage their customers' resources, on the one hand to improve access performance, and on the other hand to reduce the operational costs the OTT provider would incur by serving the customers directly. Information-Centric Networking appears to be an adequate offloading technique if incrementally deployed as an overlay network. This work analyses the incentive compatibility of adopting an ICN overlay for OTT services and is, to the best of our knowledge, the first to address the topic through non-cooperative game theory, a framework we believe adequate given the independence of the ICN stakeholders involved. Our analysis shows that the business model currently standing for legacy CDNs does not make strategic sense for ICN overlays and that, however, there exist incentives for OTT customers to get involved in the distribution process via an ICN overlay, thereby reducing server load. These unique specifications for the design of an ICN overlay for OTT content distribution also have relevant implications for ICN protocol design. The OTT provider would need a form of control over the ICN overlay operations. We identify the use of an OTT-set policy metric for ICN routing as the most appropriate way to ensure that ICN users follow the equilibrium strategy suggested by our incentive-compatibility framework. We moreover highlight the need for a scalable way of building and controlling ICN overlays over the legacy TCP/IP Internet to support the related signaling, forwarding-rule registration, and positive strategic behaviour.
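
    As a toy illustration of the incentive-compatibility reasoning (our own simplification, not the actual game analysed in this work), let $\delta$ be the discount the OTT provider grants a customer for caching and serving content, $c$ the customer's cost of doing so, and $s$ the distribution cost the provider saves per participating customer. The customer participates only if $\delta > c$, and the provider gains only if $s > \delta$, so such an overlay is incentive-compatible only when

        c < \delta < s,

    that is, when the provider's savings are large enough to compensate customers for their caching effort.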

  • Information-Centric Networking and rate control implications

     

    Information-centric networking (ICN) leverages the redundancy of content demand and proposes in-network caching to reduce network and server load and to improve quality of experience. We have studied the interaction between ICN in-network caching and Additive Increase Multiplicative Decrease (AIMD) end-to-end congestion control, with a focus on how bandwidth is shared as a function of content popularity and cache provisioning. As caching shortens the AIMD feedback loop, the download rate of AIMD is impacted. We had earlier shed light on the potential negative impact of in-network caching on instantaneous throughput fairness. The work accomplished in 2013 precisely quantifies the issue: thanks to an analytic model based on Discriminatory Processor Sharing and to real experiments, we observe that popular contents benefit from caching and achieve shorter download times at the expense of unpopular contents, which see their download times inflated by a factor bounded by 1/(1-ρ), where ρ is the network load. This bias can be removed by redefining congestion control to be delay-independent or by over-provisioning link capacity at the edge to compensate for the greediness of popular contents. The experimentation study was supported by the work of Ilaria Cianci during her internship on the CCN-Jocker emulator. This work is currently under submission.
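
    The origin of such a bound can be sketched with standard processor-sharing results (a simplified argument, not the Discriminatory Processor Sharing model used in the actual study): in an M/G/1 processor-sharing queue with load $\rho < 1$, the expected sojourn time of a job of size $x$ served at capacity $C$ is

        E[T(x)] = \frac{x}{C\,(1-\rho)},

    so a content that does not benefit from caching and must share the bottleneck with the full load sees its download time inflated by at most $1/(1-\rho)$ compared to an unloaded path ($\rho = 0$).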

  • Routing in Information-Centric Networks

     

    The idea behind Information-Centric Networking (ICN) is to omit the notions of host and location and to use content names as direct routing and forwarding primitives instead of IP addresses. This paradigm shift allows ICN to natively offer in-network caching, i.e., to cache content on the path from content providers to requesters. Our studies show a large spatial and temporal locality of contents amongst users in the same network, which indicates that in-network caching can achieve good overall performance. However, caching contents strictly on their paths is far from optimal when paths are not shared among content consumers, as contents may be replicated on routers, thus reducing the total volume of contents that can be cached. To overcome this limitation, we introduced the notion of off-path caching in [21] : we allocate content to well-defined off-path caches within the network and deflect traffic off the optimal path toward these caches, which are spread across the network. Off-path caching improves the global hit ratio by efficiently utilizing the network-wide available caching capacity and reduces the bandwidth usage of egress links. A minimal sketch of a hash-based allocation is given below.
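
    One simple way to realize such an allocation is to hash content names onto the set of in-network caches, so that every router in the domain deflects a request for a given name toward the same designated cache. This is a minimal sketch under our own assumptions; [21] describes the actual allocation and deflection scheme.

        import hashlib

        CACHES = ["cache-a", "cache-b", "cache-c"]  # hypothetical off-path caches

        def designated_cache(content_name: str) -> str:
            # Hash the content name to a cache so that all routers deflect
            # requests for the same content toward the same cache, avoiding
            # the replication that strict on-path caching would create.
            digest = hashlib.sha1(content_name.encode()).digest()
            return CACHES[int.from_bytes(digest[:4], "big") % len(CACHES)]

        def forward(request_name: str, on_path_next_hop: str) -> str:
            # Deflect toward the designated off-path cache; falling back to
            # the provider path on a cache miss is not modeled here.
            cache = designated_cache(request_name)
            return cache if cache != on_path_next_hop else on_path_next_hop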

  • Locator/Identifier Separation Protocol (LISP)

     

    The future Internet has been a hot topic during the past decade, and many approaches toward it, ranging from incremental evolution to complete clean-slate designs, have been proposed. One of these propositions, LISP, advocates the separation of the identifier and locator roles of IP addresses to reduce BGP churn and BGP table size. Up to now, however, most studies concerning LISP have been theoretical and, in fact, little is known about the performance of the actual LISP deployment. We filled this gap through measurement campaigns carried out on the LISP Beta Network. More precisely, we evaluated the performance of the two key components of the infrastructure: the control plane (i.e., the mapping system) and the interworking (i.e., communication between LISP and non-LISP sites). Our measurements highlight that the performance offered by the LISP interworking infrastructure strongly depends on BGP routing policies. If we exclude misconfigured nodes, the mapping system typically provides reliable performance and relatively low median mapping-resolution delays. Although the bias is not very pronounced, control-plane performance favours US sites, as a result of their larger LISP user base but also because the European infrastructure is less reliable. Finally, the LISP Map-Versioning RFC mentioned in last year's activity report was published this year [33] . All details are reported in [17] , [29] .

  • Running Live CCNx Experiments on Wireless and Wired Testbeds with NEPI

     

    CCNx has long left the early development stage, where simulation and emulation frameworks like ccnSim and mininet were enough to validate new approaches and improvements. It has now reached a level of maturity which calls for evaluation in more realistic environments. If it is to be deployed in the wild Internet or even in private network settings, a framework that provides proper validation in comparable environments is required. For this purpose we demonstrated the capabilities of the NEPI framework to run CCNx experiments in realistic environments. NEPI can run CCNx experiments directly on Internet settings as well as in wireless or wired private network environments. The framework automates host configuration, software installation, result collection, and the definition of the execution sequence between applications. Furthermore, it provides the ability to conduct interactive experiments where researchers are free to modify the experiment scenario on the fly. These capabilities were demonstrated at CCNxCon'2013 [38] .
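
    For illustration, a minimal NEPI experiment script might look as follows. This is a sketch based on the NEPI 3 Python API; resource type names have varied across NEPI versions, and the hostname, account, and command below are placeholders rather than a working configuration.

        from nepi.execution.ec import ExperimentController

        ec = ExperimentController(exp_id="ccnx-demo")

        # Describe a remote host that will run a CCNx daemon.
        node = ec.register_resource("linux::Node")
        ec.set(node, "hostname", "host1.example.org")  # placeholder host
        ec.set(node, "username", "experimenter")       # placeholder account

        # Describe the application to deploy on that host.
        app = ec.register_resource("linux::Application")
        ec.set(app, "command", "ccnd")                 # start the CCNx daemon
        ec.register_connection(app, node)

        # Deploy, wait for completion, collect output, release resources.
        ec.deploy()
        ec.wait_finished(app)
        print(ec.trace(app, "stdout"))
        ec.shutdown()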

  • Evaluating costs of CCN overlays

     

    We are currently involved in a collaboration with PARC (Palo Alto Research Center) on the evaluation of the CCN (Content-Centric Networking) technology. Early results of this work were presented in the poster session of the CCNxCon 2013 meeting. In this work we present a set of scenarios to evaluate the performance of CCN overlays on top of the Internet under worst-case conditions. We used the NEPI experiment API to construct different overlay topologies on PlanetLab, for which we varied the topology configuration (e.g., number and degree of nodes), the CCN parameters (e.g., pipeline, cache usage, prefix routes) and the traffic patterns (e.g., single stream, prefix-independent chunks). The objective of this study is to find correlations between these variables and both the time to deliver content and the overlay network utilization. Our contribution is twofold. On the one hand, we provide a benchmark which can be used as a reference for comparing new CCNx versions and other ICN solutions, and as input traces for CCN simulations. On the other hand, we provide results that can be used to improve the CCNx implementation and that can help Internet providers or end users better design CCN overlays to satisfy their needs. The work is still ongoing and will be submitted soon.

  • Enabling Iterative Development and Reproducible Evaluation of Network Protocols

     

    Over the last two decades several efforts have been made to provide adequate experimental environments, aiming to ease the development of new network protocols and applications. These environments range from network simulators providing highly controllable evaluation conditions, to live testbeds providing realistic evaluation environments. While these different approaches foster network development in different ways, there is no simple way to gradually transition from one to another, or to combine their strengths to suit particular evaluation needs. We believe that enabling a gradual transition from a purely simulated environment to a purely realistic one, where the researcher can decide which aspects of the environment are realistic and which are controllable, makes it possible to improve network solutions by simplifying problem analysis and resolution. We have designed a new network experimentation framework, called IDEV, where simulated and real components can be arbitrarily combined to build custom test environments, allowing new protocol and application implementations to be refined and improved by gradually increasing the level of realism of the evaluation environment. Moreover, we proposed a testbed architecture specifically adapted to support the proposed concept, and discussed the design choices we made based on our previous experience in the area of network testbeds. These choices address key issues in network testbed development, such as ease of experimentation, experiment reproducibility, and testbed federation, the latter enabling experiments to scale beyond what a single testbed would allow. This work is described in a paper that will be published in the Computer Networks journal in 2014, see [15] .

  • Direct Code Execution: Revisiting Library OS Architecture for Reproducible Network Experiments

     

    We proposed Direct Code Execution (DCE), a framework that dramatically increases the number and realism of the protocol models available for ns-3 simulations. DCE meets the goals recently proposed for fully reproducible networking research and runnable papers, with the added benefits of 1) completely deterministic reproducibility, 2) the scalability that simulation time dilation offers, 3) support for automated code-coverage analysis, and 4) improved debuggability via execution within a single address space. We reported on a packet-processing benchmark and showcased key features of the framework with different use cases. We then reproduced a previously published Multipath TCP (MPTCP) experiment and highlighted how code-coverage testing can be automated, showing results that achieve 55-86% coverage of the MPTCP implementation. We also demonstrated how network stack debugging can be easily performed and reproduced across a distributed system. Our first benchmarks are promising and we believe this framework can benefit the network community by enabling realistic, reproducible experiments and runnable papers. This work was published at the ACM CoNEXT 2013 conference in Santa Barbara, CA, USA [25] , and will be published in IEEE Communications Magazine in 2014 [14] . DCE was demonstrated at the ACM MSWiM conference in Barcelona, Spain in November 2013 [42] .

    In the same context, we designed DCE Cradle, a framework that makes it possible to use any feature of the Linux kernel network stack with existing ns-3 applications. DCE Cradle uses DCE to address the brittleness of the Network Simulation Cradle (NSC). We carefully designed DCE Cradle without breaking the existing functionality of DCE and the ns-3 socket architecture, by considering the gaps between the asynchronous ns-3 socket API and the general POSIX socket API. We validated the implementation of DCE Cradle against the behavior of the TCP implementation on congested links, and then studied its performance by focusing on simulation time and network scale. We showed that DCE Cradle is at most 1.3 times faster than NSC, while it is about 2.2 times slower than the ns-3 native stack. We then showcased an actual implementation of the DCCP transport protocol to verify how easy it is to simulate a real implementation using DCE Cradle. We believe that this tool can greatly benefit the network community by enabling more realistic evaluation of network protocols. This work was published at the ns-3 workshop in 2013 in Cannes and received the best paper award [26] .

  • The ns-3 Consortium

     

    In 2012 we founded a consortium between Inria and the University of Washington. The goals of this consortium are to (1) provide a point of contact between industrial members and the ns-3 project, enabling them to provide suggestions and feedback about technical aspects, (2) guarantee maintenance of ns-3's core and organize public events in relation to ns-3, such as users' days and workshops, and (3) provide a public face that is not directly a part of Inria or NSF by managing the http://www.nsnam.org web site. The consortium started its activities in March 2013. Two European institutions (Centre Tecnològic de Telecomunicacions de Catalunya - CTTC and INESC Porto) and two American universities (Georgia Tech and Bucknell) joined the consortium as Executive Members in 2013. For more details see the consortium web page https://www.nsnam.org/consortium/ .

  • Contiki over ns-3

     

    This year we worked on the adaptation of the Contiki OS to run over ns-3. Contiki is a popular, highly optimized operating system for sensor nodes. We developed a proof-of-concept adaptation layer that, even though simple and limited, showed that such an interaction is indeed possible. The adaptation layer was capable of transferring data between different sensors using ns-3 to interconnect them. Sensor nodes were controlled by the ns-3 scheduler, respecting the ns-3 clock and executing in simulated time. In fact, the sensors were not even aware that they were running over a simulated network.

  • Federation of experimental testbeds

     

    We are involved in the F-Lab (French ANR) and FED4FIRE (E.U. IP) projects, and we lead the "Control Plane Extensions" work package of the OpenLab (E.U. IP) project. Within these frameworks, and as part of the co-development agreement between the DIANA team and Princeton University, we kept contributing to one of the most visible and renowned implementations of the testbed-federation architecture known as SFA, for Slice-based Federation Architecture. As a sequel to former activities we also maintain, at low intensity, the PlanetLab software, which has been running in particular on the PlanetLab global testbed since 2004, with an ad-hoc federation model in place between PlanetLab Central (hosted by Princeton University) and PlanetLab Europe (hosted at Inria) since 2007.

    During 2013, as a step beyond our contribution to the specification of the Aggregate Manager (AM) API v3, which is the control-plane interface through which experimenters discover and reserve resources at testbeds, we first focused on a separate implementation of SFAWrap that supports AM API v3 and brings a more elaborate lifecycle for slice provisioning. Secondly, we implemented an AM API v2 to AM API v3 adapter, which acts as the glue between the existing AM API v2-compliant testbed drivers and the AM API v3-compliant interfaces of SFAWrap. The v2-to-v3 adapter provides AM API v3 compatibility to existing AM API v2-based testbed drivers until their authors find the time to adapt their drivers to support AM API v3 natively, should they want to take full advantage of the new lifecycle; a schematic illustration of the adapter idea is given below. Thirdly, within the context of the projects listed above, and as a consequence of the growing need for testbed federation, providers of testbeds such as BonFIRE and SmartSantander decided to adopt SFAWrap in order to join the global federation of testbeds by exposing their testbeds through SFA; we provided these partners with close support to achieve this goal. Finally, as for any software development project, and due to the growing usage of SFAWrap, we had to be active on both operational and maintenance tasks. See [37] and [41] for more details.

    We also contributed, in the context of the Fed4FIRE project, to the definition and early implementation of an architecture for the heterogeneous federation of future Internet experimental facilities. The results of this work were presented at the FutureNetworkSummit 2013 conference. In this work, requirements involving different aspects of the federation of heterogeneous facilities were collected and analysed, and a multi-layer architecture was proposed to address them. Our contribution mainly focuses on the experiment control plane of the federation architecture [28] . The experiment control plane covers the interface between the experimenter and the facilities, including tasks such as federated resource discovery, provisioning, reservation, configuration, and deployment. The proposed architecture combines the use of SFA (Slice-based Federation Architecture) and OMF (cOntrol and Management Framework) into a common middleware that allows resource control within an experiment to be federated across facilities.
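
    As a schematic illustration of the v2-to-v3 adapter idea (hypothetical class and method names, not the actual SFAWrap code), the adapter exposes AM API v3 calls and maps them onto the single-step v2 operations of an existing driver, emulating the v3 split between allocation and provisioning:

        class V2toV3Adapter:
            """Illustrative sketch: expose AM API v3 calls on top of an
            AM API v2 driver. Names are hypothetical; SFAWrap's real
            adapter differs."""

            def __init__(self, v2_driver):
                self.driver = v2_driver
                self.pending = {}  # slice URN -> RSpec held until provision()

            def allocate(self, slice_urn, rspec):
                # v3 Allocate only reserves: remember the request,
                # instantiate nothing yet.
                self.pending[slice_urn] = rspec
                return {"geni_slivers": [], "geni_rspec": rspec}

            def provision(self, slice_urn):
                # v3 Provision instantiates: map onto the one-shot v2
                # CreateSliver operation of the underlying driver.
                rspec = self.pending.pop(slice_urn)
                return self.driver.create_sliver(slice_urn, rspec)

            def delete(self, slice_urn):
                # v3 Delete maps directly onto the v2 DeleteSliver call.
                return self.driver.delete_sliver(slice_urn)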