Section: New Results
Applications in Telecommunications
Participants: Eitan Altman, Konstantin Avrachenkov, Giovanni Neglia.
Elastic cloud caching services
In [37], G. Neglia, together with D. Carra (Univ of Verona, Italy) and P. Michiardi (Eurecom), has considered in-memory key-value stores used as caches, and their elastic provisioning in the cloud. The cost associated with such caches includes not only the storage cost but also the cost due to misses: indeed, the cache miss ratio has a direct impact on the performance perceived by end users, which in turn affects the overall revenues for content providers. The goal of their work is to dynamically adapt the number of caches to the traffic pattern, so as to minimize the overall cost. They present a dynamic algorithm for TTL caches that aims for close-to-minimal costs and propose a practical implementation with limited computational complexity: their scheme requires constant overhead per request, independent of the cache size. Using real-world traces collected from the Akamai content delivery network, they show that their solution achieves significant cost savings, especially in the highly dynamic settings that are most likely to require elastic cloud services.
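To convey the cost trade-off concretely, here is a minimal Python sketch of a TTL cache whose TTL is periodically nudged up when miss costs dominate the bill and down when storage costs dominate. This is only an illustration of the trade-off, not the algorithm of [37]; the cost parameters and the multiplicative update rule are assumptions.

```python
import time

class ElasticTTLCache:
    """Toy TTL cache that adapts its TTL to balance storage and miss costs.
    Purging of expired entries is left lazy for brevity."""

    def __init__(self, ttl=60.0, miss_cost=1.0, storage_cost_per_item_sec=0.01):
        self.ttl = ttl
        self.miss_cost = miss_cost
        self.storage_cost = storage_cost_per_item_sec
        self.store = {}                             # key -> expiry timestamp

    def get(self, key):
        now = time.time()
        expiry = self.store.get(key)
        self.store[key] = now + self.ttl            # insert or refresh: O(1) work
        return expiry is not None and expiry > now  # True on a hit

    def adapt(self, misses, interval):
        # Hypothetical gradient-like step: grow the TTL when misses dominate
        # the cost rate, shrink it when storage dominates.
        miss_cost_rate = misses * self.miss_cost / interval
        storage_cost_rate = len(self.store) * self.storage_cost
        self.ttl *= 1.1 if miss_cost_rate > storage_cost_rate else 0.9
```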
Neural networks for caching
In [19], G. Neglia, together with V. Fedchenko (Univ Côte d'Azur) and B. Ribeiro (Purdue Univ, USA), has proposed a caching policy that uses a feedforward neural network (FNN) to predict content popularity. This scheme outperforms not only popular eviction policies like LRU or ARC, but also a new policy relying on more complex recurrent neural networks. At the same time, replacing the FNN predictor with a naive linear estimator does not degrade caching performance significantly, calling into question the role of neural networks for these applications.
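A toy version of this comparison can be set up with scikit-learn: predict each object's next-slot popularity from its recent request counts, then cache the top-k predicted objects. The feature construction, model sizes and synthetic demand below are stand-ins, not those of [19].

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_objects, window, cache_size = 1000, 5, 100

# Synthetic heterogeneous demand: per-object rates drawn from a gamma prior,
# observed request counts over `window` past slots, noisy next-slot target.
rates = rng.gamma(2.0, 1.0, n_objects)
X = rng.poisson(lam=rates[:, None], size=(n_objects, window))
y = X.mean(axis=1) + rng.normal(0.0, 0.1, n_objects)

fnn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
lin = LinearRegression().fit(X, y)

cache_fnn = np.argsort(fnn.predict(X))[-cache_size:]   # objects the FNN keeps
cache_lin = np.argsort(lin.predict(X))[-cache_size:]   # objects the linear model keeps
print("overlap between the two caches:", len(np.intersect1d(cache_fnn, cache_lin)))
```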
Similarity caching
In similarity caching systems, a user request for an object that is not in the cache can be (partially) satisfied by a similar stored object, at the cost of a loss of user utility. Similarity caching systems can be effectively employed in several application areas, like multimedia retrieval, recommender systems, genome study, and machine learning training/serving. However, despite their relevance, the behavior of such systems is far from being well understood. In [41], G. Neglia, together with M. Garetto (Univ of Turin, Italy) and E. Leonardi (Polytechnic of Turin, Italy), provides a first comprehensive analysis of similarity caching in the offline, adversarial, and stochastic settings. They show that similarity caching raises significant new challenges, for which they propose the first dynamic policies with some optimality guarantees. They evaluate the performance of the proposed schemes under both synthetic and real request traces.
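The setting can be illustrated with a few lines of Python: a request is (approximately) served by any cached item whose feature vector lies within a distance threshold, and misses are handled by an ordinary LRU rule. This sketches the general model only, not the dynamic policies of [41]; the linear scan and the threshold are illustrative choices.

```python
import numpy as np
from collections import OrderedDict

class SimilarityCache:
    def __init__(self, capacity, threshold):
        self.capacity, self.threshold = capacity, threshold
        self.items = OrderedDict()                    # object id -> feature vector

    def request(self, obj_id, features):
        # Linear scan for a stored item close enough to serve the request.
        for k, v in self.items.items():
            if np.linalg.norm(v - features) <= self.threshold:
                self.items.move_to_end(k)             # LRU refresh
                return "approximate hit"              # served at some utility loss
        if len(self.items) >= self.capacity:
            self.items.popitem(last=False)            # evict least recently used
        self.items[obj_id] = features
        return "miss"
```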
Performance evaluation and optimization of 5G wireless networks
In small cell networks, the high mobility of users results in frequent handoffs and thus severely restricts the data rate available to mobile users. To alleviate this problem, one idea is to use a heterogeneous, two-tier network structure in which static users are served by both macro and micro base stations, whereas mobile (i.e., moving) users are served only by macro base stations, whose larger cells prevent frequent data outages due to handoff. In [16], A. Chattopadhyay and B. Błaszczyszyn (Inria Dyogene team), in collaboration with E. Altman, use the classical two-tier Poisson network model with different transmit powers, assuming an independent Poisson process of static users and a doubly stochastic Poisson process of mobile users moving at a constant speed along infinite straight lines generated by a Poisson line process. Using stochastic geometry, they calculate the average downlink data rate of the typical static and mobile users, the latter accounting for handoff outage periods. They also consider the average throughput of these two types of users.
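While the paper derives its results analytically via stochastic geometry, the model lends itself to quick Monte Carlo checks. The sketch below estimates the downlink rate of a typical static user at the origin in a noise-limited two-tier Poisson network (strongest-power association, no interference, no mobility); all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def typical_static_user_rate(lam_macro=1e-6, lam_micro=1e-5, p_macro=10.0,
                             p_micro=1.0, alpha=4.0, noise=1e-13,
                             side=1e4, runs=500):
    """Average Shannon rate (bit/s/Hz) of a user at the origin, with macro and
    micro base stations drawn as independent Poisson processes in a square."""
    rates = []
    for _ in range(runs):
        best = 0.0
        for lam, p in ((lam_macro, p_macro), (lam_micro, p_micro)):
            n = rng.poisson(lam * side * side)
            if n:
                xy = rng.uniform(-side / 2, side / 2, (n, 2))
                r = np.hypot(xy[:, 0], xy[:, 1]).min()   # nearest BS of the tier
                best = max(best, p * r ** (-alpha))      # strongest received power
        rates.append(np.log2(1.0 + best / noise))
    return float(np.mean(rates))

print(typical_static_user_rate())
```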
In [15], the same authors consider location-dependent opportunistic bandwidth sharing between static and mobile downlink users in a cellular network. Each cell has a fixed number of static users, while mobile users enter the cell, move inside it for some time, and then leave. In order to provide a higher data rate to mobile users, the authors propose to allocate more bandwidth to the mobile users at favourable times and locations, and more bandwidth to the static users otherwise. They formulate the problem as a long-run average-reward Markov decision process (MDP), where the per-step reward is a linear combination of the instantaneous data volumes received by static and mobile users, and find the optimal policy. Since the transition structure of this MDP is not known in general, they propose a learning algorithm based on single-timescale stochastic approximation. Also, noting that the unconstrained MDP can be used to solve a constrained problem, they provide a learning algorithm based on multi-timescale stochastic approximation. The results are extended to address the issue of fair bandwidth sharing between the two classes of users. Numerical results demonstrate the performance improvement brought by their scheme, as well as the trade-off between performance gain and fairness.
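As a rough illustration of the learning approach (a generic average-reward Q-learning rule, not the authors' stochastic-approximation schemes), consider a toy problem with two channel states for the mobile user and two actions, favoring either the static or the mobile class; all rewards and transition probabilities are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(2)

P = np.array([[0.7, 0.3],        # mobile channel: bad  -> {bad, good}
              [0.4, 0.6]])       #                 good -> {bad, good}
reward = np.array([[1.0, 0.5],   # state bad:  r(favor static), r(favor mobile)
                   [0.2, 1.5]])  # state good: r(favor static), r(favor mobile)

Q = np.zeros((2, 2))
s, step, eps = 0, 0.1, 0.1
for t in range(20000):
    a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
    s2 = int(rng.choice(2, p=P[s]))
    # Relative-value Q-learning: subtract a reference value to keep Q bounded,
    # which targets the long-run average reward rather than a discounted sum.
    Q[s, a] += step * (reward[s, a] + Q[s2].max() - Q[0].max() - Q[s, a])
    s = s2
print("learned policy (state -> action):", Q.argmax(axis=1))  # expect [0 1]
```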
The age of information
Two decades after the seminal paper on software aging and rejuvenation appeared in 1995, a new concept and metric referred to as the age of information (AoI) has been gaining attention from practitioners and the research community. In the vision paper [46], D.S. Menasche (UFRJ, Brazil), K. Trivedi (Duke Univ, USA) and E. Altman show the similarities and differences between software aging and information aging. In particular, modeling frameworks that have been applied to software aging, such as the semi-Markov approach, can be immediately applied in the realm of age of information. Conversely, they indicate that questions pertaining to the sampling costs associated with the age of information can be useful to assess the optimal rejuvenation trigger interval for software systems.
The demand for Internet services that require frequent updates through small messages has grown tremendously in the past few years. Although the use of such applications by domestic users is usually free, their access from mobile devices is subject to fees and consumes energy from limited batteries. If a user activates his mobile device and is in the range of a publisher, an update is received at the expense of monetary and energy costs. Thus, users face a tradeoff between such costs and the aging of their messages. It is then natural to ask how to cope with this tradeoff by devising aging control policies. An aging control policy consists of deciding, based on the utility of the owned content, whether to activate the mobile device, and if so, which technology to use (WiFi or cellular). In [28], E. Altman, R. El-Azouzi (CERI/LIA, Univ Avignon), D.S. Menasche (UFRJ, Brazil) and Y. Xu (Fudan Univ, China) show the existence of an optimal strategy in the class of threshold strategies, wherein users activate their mobile devices if the age of their content surpasses a given threshold and remain inactive otherwise. The accuracy of their model is validated against traces from the UMass DieselNet bus network. The first version of this paper, among the first to introduce the age of information, appeared on arXiv already in 2010.
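A threshold policy of this kind is easy to simulate. In the toy model below, the content age grows by one per slot, activating the device costs a fixed amount, and an activation resets the age only if a publisher happens to be in range; the per-slot cost adds the age and the activation cost. All numbers are illustrative, not those of [28].

```python
import numpy as np

rng = np.random.default_rng(3)

def average_cost(threshold, p_contact=0.3, activation_cost=2.0, horizon=20000):
    age, total = 0, 0.0
    for _ in range(horizon):
        activate = age >= threshold            # the threshold rule
        if activate and rng.random() < p_contact:
            age = 0                            # update received: age resets
        total += age + (activation_cost if activate else 0.0)
        age += 1
    return total / horizon

best = min(range(1, 30), key=average_cost)
print("empirically best threshold:", best)
```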
Wireless transmission vehicle routing
The Wireless Transmission Vehicle Routing Problem (WT-VRP) consists of searching for a route for a vehicle responsible for collecting information from stations. The new feature w.r.t. classical vehicle routing is the possibility of picking up information via wireless transmission, without physically visiting the stations of the network. The WT-VRP has applications in underwater surveillance and environmental monitoring. In [53], L. Flores Luyo and E. Ocaña Anaya (IMCA, Peru), A. Agra (Univ Aveiro, Portugal), R. Figueiredo (CERI/LIA, Univ Avignon) and E. Altman study three criteria for measuring the efficiency of a solution and propose a mixed integer linear programming formulation to solve the problem. Computational experiments were carried out to assess the numerical complexity of the problem and to compare solutions under the three criteria proposed.
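To give the flavor of such a formulation, here is a much-simplified covering-tour MILP written with the PuLP library: each station must be either visited or within wireless range R of some visited node, and the tour length is minimized. The data, the radius and the single criterion are hypothetical; the formulation of [53] is richer.

```python
import math
import pulp

pts = [(0, 0), (2, 1), (4, 0), (4, 3), (1, 3), (3, 5)]   # node 0 is the depot
n, R = len(pts), 2.5
d = {(i, j): math.dist(pts[i], pts[j])
     for i in range(n) for j in range(n) if i != j}

prob = pulp.LpProblem("wt_vrp_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", d, cat="Binary")          # arc i -> j is used
y = pulp.LpVariable.dicts("y", range(n), cat="Binary")   # node i is visited
u = pulp.LpVariable.dicts("u", range(n), 1, n)           # MTZ ordering variables

prob += pulp.lpSum(d[a] * x[a] for a in d)               # minimize tour length
prob += y[0] == 1                                        # tour starts at the depot
for i in range(n):
    prob += pulp.lpSum(x[i, j] for j in range(n) if j != i) == y[i]
    prob += pulp.lpSum(x[j, i] for j in range(n) if j != i) == y[i]
    # station i is served: visited, or in wireless range of a visited node
    prob += y[i] + pulp.lpSum(y[j] for j in range(n)
                              if j != i and d[i, j] <= R) >= 1
for i, j in d:                                           # MTZ subtour elimination
    if i != 0 and j != 0:
        prob += u[i] - u[j] + n * x[i, j] <= n - 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("tour length:", pulp.value(prob.objective))
print("visited nodes:", [i for i in range(n) if y[i].value() > 0.5])
```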
Video streaming in 5G cellular networks
Dynamic Adaptive Streaming over HTTP (DASH) has become the standard choice for live events and on-demand video services. By performing bitrate adaptation at the client side, DASH delivers the highest possible Quality of Experience (QoE) under given network conditions. In cellular networks, in particular, video streaming services are affected by mobility and cell load variation; in this context, DASH video clients continually adapt the streaming quality to cope with channel variability. However, since they operate in a greedy manner, adaptive video clients can overload cellular network resources, degrading the QoE of other users, and can suffer persistent bitrate oscillations. In [40], R. El-Azouzi (CERI/LIA, Univ Avignon), A. Sunny (IIT Palakkad, India), L. Zhao (Huazhong Agricultural Univ, China), E. Altman, D. Tsilimantos (Huawei Technologies, France), F. De Pellegrini (CERI/LIA, Univ Avignon), and S. Valentin (Darmstadt Univ, Germany) tackle this problem with a new scheduler at base stations, named Shadow-Enforcer, which ensures a minimal number of quality switches as well as efficient and fair utilization of network resources.
While most modern-day video clients continually adapt the quality of the video stream, they coordinate neither with the network elements nor among each other. Consequently, a streaming client may quickly overload the cellular network, leading to poor Quality of Experience (QoE) for the users in the network. Motivated by this problem, A. Sunny (IIT Palakkad, India), R. El-Azouzi, A. Arfaoui (both from CERI/LIA, Univ Avignon), E. Altman, S. Poojary (BITS, India), D. Tsilimantos (Huawei Technologies, France) and S. Valentin (Darmstadt Univ, Germany) introduce in [24] D-VIEWS, a scheduling paradigm that assures the video bitrate stability of adaptive video streams while ensuring better system utilization. The performance of D-VIEWS is then evaluated through simulations.
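Neither paper's algorithm is reproduced here, but the common idea behind Shadow-Enforcer [40] and D-VIEWS [24], trading a little throughput greed for bitrate stability, can be conveyed by a toy allocation rule: among the feasible per-user bitrate vectors, prefer maximal total throughput and break ties by the fewest switches away from the currently assigned bitrates. Exhaustive search is used only because the example is tiny.

```python
import itertools

def stable_allocation(capacity, current, ladder):
    """Return per-user bitrates: maximize total throughput under the cell
    capacity, then minimize the number of bitrate switches (toy rule)."""
    best = None
    for alloc in itertools.product(ladder, repeat=len(current)):
        if sum(alloc) > capacity:
            continue                                   # infeasible allocation
        switches = sum(a != c for a, c in zip(alloc, current))
        key = (-sum(alloc), switches)                  # throughput, then stability
        if best is None or key < best[0]:
            best = (key, alloc)
    return best[1]

# Three clients currently streaming at 2 Mb/s each, 5 Mb/s of capacity:
print(stable_allocation(5, (2, 2, 2), (1, 2, 3)))      # e.g. (1, 2, 2)
```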
In [39], R. El-Azouzi, K.V. Acharya (ENS Lyon), M. Haddad (CERI/LIA, Univ Avignon), S. Poojary (BITS, India), A. Sunny (IIT Palakkad, India), D. Tsilimantos (Huawei Technologies, France), S. Valentin (Darmstadt Univ, Germany) and E. Altman develop an analytical framework to compute the Quality-of-Experience (QoE) metrics of video streaming in wireless networks. Their framework takes into account the system dynamics that arise due to the arrival and departure of flows, as well as the possibility of users abandoning the system on account of poor QoE. Considering the coexistence of multiple services, such as video streaming and elastic flows, they use a Markov-chain-based analysis to compute the user QoE metrics: probability of starvation, prefetching delay, average video quality and bitrate switching. Simulation results validate the accuracy of their model and describe the impact of the scheduler at the base station on the QoE metrics.
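Two of these metrics are easy to estimate by simulation in a stripped-down model: frames arrive as a Poisson process, playback starts once a prefetch threshold is buffered, and then one frame is consumed per slot. This Monte Carlo sketch is only a sanity-check companion to the Markov chain analysis of [39], with made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

def qoe_metrics(arrival_rate=1.1, prefetch=4, n_frames=300, runs=1000):
    """Estimate the starvation probability and the mean prefetching delay."""
    starved, delays = 0, []
    for _ in range(runs):
        t = sum(rng.exponential(1.0 / arrival_rate) for _ in range(prefetch))
        delays.append(t)                          # time to buffer `prefetch` frames
        buffered = prefetch
        for _ in range(n_frames):                 # playback: one frame per slot
            buffered += rng.poisson(arrival_rate) # frames arriving during the slot
            buffered -= 1                         # frame played out
            if buffered < 0:
                starved += 1                      # buffer ran dry: starvation
                break
    return starved / runs, float(np.mean(delays))

p_starve, mean_delay = qoe_metrics()
print(f"starvation prob = {p_starve:.3f}, mean prefetch delay = {mean_delay:.2f} slots")
```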
A learning algorithm for the Whittle index policy for scheduling web crawlers
In [31], K. Avrachenkov and V.S. Borkar (IIT Bombay, India) have revisited the Whittle index policy for scheduling web crawlers for ephemeral content and developed a reinforcement learning scheme for it based on LSPE(0). The scheme leverages the known structural properties of the Whittle index policy.
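Once index values are available, the policy itself is simple: at every decision epoch, crawl the m sites with the largest index. The sketch below uses a made-up index (content-creation rate times time since the last crawl) purely as a placeholder; in [31] the index is grounded in the model and learned with the LSPE(0)-based scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

n_sites, m = 10, 3
rates = rng.uniform(0.1, 2.0, n_sites)   # content-creation rate of each site
ages = np.zeros(n_sites)                 # time since each site was last crawled

for t in range(5):
    index = rates * ages                 # placeholder Whittle-style index
    chosen = np.argsort(index)[-m:]      # crawl the m largest-index sites
    ages += 1
    ages[chosen] = 0
    print(f"t={t}: crawled sites {sorted(chosen.tolist())}")
```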
Distributed cooperative caching for VoD with geographic constraints
Consider the caching of video streams in a cellular network in which each base station is equipped with a cache. Video streams are partitioned into multiple substreams, and the goal is to place substreams in caches so as to minimize the residual backhaul load. In [36], K. Avrachenkov, together with J. Goseling (UTwente, The Netherlands) and B. Serbetci (Eurecom), has studied two coding mechanisms for the substreams: layered coding (LC) and multiple description coding (MDC). They develop a distributed asynchronous algorithm for deciding which files to store in which cache so as to minimize the residual bandwidth, i.e., the cost of downloading the missing substreams of the user's requested video, at a certain video quality, from the gateway (i.e., the main server). They show that their algorithm converges rapidly. Finally, they show that MDC partitioning is better than the LC mechanism when the most popular content is stored in caches; however, their algorithm enables the use of the LC mechanism as well without any performance loss.
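The distributed flavor of such an algorithm can be suggested with a toy best-response loop (not the algorithm of [36]): caches update one at a time, each greedily re-optimizing its own content given what the others store, with an arbitrary discount standing in for the residual-bandwidth saving of nearby copies.

```python
import numpy as np

rng = np.random.default_rng(6)

C, F, k = 4, 20, 5                                  # caches, files, slots per cache
pop = rng.dirichlet(np.ones(F))                     # file popularities
placement = [set(rng.choice(F, k, replace=False)) for _ in range(C)]

for _ in range(50):                                 # asynchronous updates
    c = rng.integers(C)                             # a random cache wakes up
    others = set().union(*(placement[j] for j in range(C) if j != c))
    # Value of storing file f locally: full popularity if no other cache has
    # it, a (made-up) residual value of 10% otherwise.
    gain = sorted(((pop[f] if f not in others else 0.1 * pop[f]), f)
                  for f in range(F))
    placement[c] = {f for _, f in gain[-k:]}        # greedy best response

print([sorted(p) for p in placement])
```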
Further, in [35], K. Avrachenkov, together with J. Goseling (UTwente, The Netherlands) and B. Serbetci (Eurecom), has considered the same setting as above but maximized an expected utility, where the utility depends on the quality at which a user requests a file and on the chunks that are available. They impose alpha-fairness across files and qualities. Similarly to [36], they have developed a distributed asynchronous algorithm for deciding which chunks to store in which cache.
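For reference, alpha-fairness refers to the standard utility family U_α(x) = x^(1−α)/(1−α) for α ≠ 1 and U_1(x) = log x, which interpolates between throughput maximization (α = 0), proportional fairness (α = 1) and max-min fairness (α → ∞); the exact instantiation over files and qualities is the one defined in [35].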