The main objectives of the project are the identification, design and selection of the most appropriate network architectures for a communication service, as well as the development of computing and mathematical tools for these tasks. These objectives lead to two complementary types of research: the qualitative aspects of systems (e.g. protocol testing and design) and the quantitative aspects, which are essential to correctly dimension these architectures and the associated services (performance, dependability, Quality of Service (QoS), Quality of Experience (QoE) and performability); our activities lie essentially in the latter.
The Dionysos group works on different problems related to the design and the analysis of communication services. Such services require functionality specifications, decisions about where and how they must be deployed in a system, and the dimensioning of their different components. The interests of the project concern not only particular classes of systems but also methodological aspects.
Concerning the communication systems themselves, we focus on IP networks, at different levels. Concerning the types of networks considered, we mainly work in the wireless area, in particular on sensor networks, on Content Delivery Networks for our work around measuring the perceived quality, the main component of QoE, and on some aspects of optical networks. We also work on the assessment of interoperability between specific network components, which is essential to ensure that they interact correctly before they get deployed in a real environment. Our team contributes by providing solutions (methods, algorithms and tools) that help obtain efficient interoperability test suites for new-generation networks. From the application point of view, we also have activities in network economics methodologies, a critical multi-disciplinary area for telecommunications providers, with many challenging open problems for the near future.
For most of the previously mentioned problems, our work concerns their quantitative aspects. The quantitative aspects we are interested in are QoE, performance, dependability, performability, QoS, vulnerability, etc. We develop techniques for the evaluation of these different aspects of the considered systems through models and through measurement techniques. In particular, we develop techniques to measure in an automatic way the quality of a video or audio communication as perceived by the final user. The methods we work with range from discrete event simulation and Monte Carlo procedures to analytical techniques, and include numerical algorithms as well. Our main mathematical tools are stochastic processes in general and queuing models and Markov chains in particular, optimization techniques, graph theory, combinatorics, etc.
The scientific foundations of our work are those of network design and network analysis. Specifically, this concerns the principles of packet switching and in particular of IP networks (protocol design, protocol testing, routing, scheduling techniques), and the mathematical and algorithmic aspects of the associated problems, on which our methods and tools are based.
These foundations are described in the following paragraphs. We begin with a subsection dedicated to Quality of Service (QoS) and Quality of Experience (QoE), since they can be seen as unifying concepts in our activities. Then we briefly describe the specific sub-area of model evaluation and the particular multidisciplinary domain of network economics.
Since it is difficult to develop as many communication solutions as there are applications, the scientific and technological communities aim at providing general services that give each application or user a set of properties nowadays called “Quality of Service” (QoS), a terminology lacking a precise definition. This QoS concept takes different forms according to the type of communication service and the aspects that matter for a given application: for performance, it comes through specific metrics (delay, jitter, throughput, etc.); for dependability, it also comes through appropriate metrics: reliability, availability, or vulnerability, in the case, for instance, of WAN (Wide Area Network) topologies.
QoS is at the heart of our research activities: we look for methods to obtain specific “levels” of QoS and for techniques to evaluate the associated metrics. Our ultimate goal is to provide tools (mathematical tools and/or algorithms, under appropriate software “containers” or not) allowing users and/or applications to attain specific levels of QoS, or to improve the provided QoS, if we think of a particular system, with an optimal use of the available resources. Obtaining a good QoS level is a very general objective. It leads to many different areas, depending on the systems, applications and specific goals being considered. Our team works on several of these areas. We also investigate the impact of network QoS on multimedia payloads, with the goal of reducing the effects of congestion.
Some important aspects of the behavior of modern communication systems have subjective components: the quality of a video stream or an audio signal, as perceived by the user, is related to some of the previously mentioned parameters (packet loss, delays, ...) but in an extremely complex way. We are interested in analyzing these types of flows from this user-oriented point of view. We focus on the user perceived quality, in short, PQ, the main component of what is nowadays called Quality of Experience (in short, QoE), to underline the fact that, in this case, we want to center the analysis on the user. In this context, we have a global project called PSQA, which stands for Pseudo-Subjective Quality Assessment, and which refers to a technology we have developed that automatically measures this PQ.
Another special case to which we devote research efforts in the team is the analysis of qualitative properties related to interoperability assessment. This refers to the act of determining whether end-to-end functionality between at least two communicating systems is as required by the base standards for those systems. Conformance is the act of determining to what extent a single component conforms to the individual requirements of the standard it is based on. Our purpose is to provide a formal framework (methods, algorithms and tools) for interoperability assessment, in order to help obtain efficient interoperability test suites for new-generation networks, mainly around IPv6-related protocols. The generation of interoperability test suites is based on specifications (standards and/or RFCs) of the network components and protocols to be tested.
The scientific foundations of our modeling activities are composed of stochastic processes theory and, in particular, Markov processes, queuing theory, stochastic graph theory, etc. The objectives are to develop numerical solutions, analytical ones, or possibly discrete event simulation or Monte Carlo (and Quasi-Monte Carlo) techniques. We are always interested in model evaluation techniques for dependability and performability analysis, both in static (network reliability) and dynamic contexts (depending on whether or not time plays an explicit role in the analysis). We look at systems from the classical so-called call level, leading to standard models (for instance, queues or networks of queues), and also at the burst level, leading to fluid models.
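As a minimal illustration of the standard Markovian queuing models mentioned above, the following sketch computes the steady-state metrics of an M/M/1/K queue from its birth-death structure (a toy, hypothetical instance for illustration, not one of the research models of the team):

```python
# Steady-state analysis of an M/M/1/K queue (birth-death Markov chain).
# The stationary probabilities satisfy pi_n proportional to rho^n, n = 0..K.
def mm1k_metrics(lam, mu, K):
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    norm = sum(weights)
    pi = [w / norm for w in weights]
    blocking = pi[K]                         # probability an arrival is lost
    mean_customers = sum(n * p for n, p in enumerate(pi))
    throughput = lam * (1 - blocking)        # effective arrival rate
    mean_response = mean_customers / throughput  # Little's law
    return blocking, mean_customers, mean_response
```

For instance, with `lam = mu` the stationary distribution is uniform over the K + 1 states, so the blocking probability is exactly 1/(K + 1).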
In recent years, our work on the design of WAN topologies led us to explore optimization techniques, in particular for very large optimization problems, usually formulated in terms of graphs. The methods we are interested in include simulated annealing, genetic algorithms, Tabu search, etc. So far, we have obtained our best results with GRASP techniques.
Network pricing is a good example of a multi-disciplinary research activity half-way between applied mathematics, economics and networking, centered on stochastic modeling issues. Indeed, the Internet is facing a tremendous increase in its traffic volume. As a consequence, users complain that large data transfers take too long, without any possibility to improve this by themselves (by paying more, for instance). A possible solution to cope with congestion is to increase the link capacities; however, many authors consider that this is not a viable solution, as the network must respond to an increasing demand (and experience has shown that demand for bandwidth has always been ahead of supply), especially now that the Internet has become a commercial network. Furthermore, incentives for a fair utilization between customers are not included in the current Internet. For these reasons, it has been suggested that the current flat-rate fees, where customers pay a subscription and obtain unlimited usage, should be replaced by usage-based fees. Besides, the future Internet will carry heterogeneous flows such as video, voice, email, web, file transfers and remote login, among others. Each of these applications requires a different level of QoS: for example, video needs very small delays and packet losses, voice requires small delays but can afford some packet losses, email can afford delays (within a given bound), while file transfer needs a good average throughput and remote login requires small round-trip times. Pricing incentives should exist so that each user does not always choose the best QoS for her application, and so that the final result is a fair utilization of the bandwidth. On the other hand, we need to be aware of the trade-off between engineering efficiency and economic efficiency; for example, traffic measurements can help in improving the management of the network, but this is a costly option.
These are some of the various aspects often present in the pricing problems we address in our work. More recently, we have switched to the more general field of network economics, dealing with the economic behavior of users, service providers and content providers, as well as their relations.
Our global research effort concerns networking problems, both from the analysis point of view, and around network design issues. Specifically, this means the IP technology in general, with focus on specific types of networks seen at different levels: wireless systems, optical infrastructures, peer-to-peer architectures, Software Defined Networks, Content Delivery Networks, Content-Centric Networks, clouds.
A specific aspect of our work on network applications and/or services based on video or voice content is our PSQA technology, able to measure the Perceptual Quality automatically and in real time. PSQA provides a MOS value as close as possible to the value obtained from subjective testing sessions. The technology has been tested in many environments, including one-way communications as, for instance, in video streaming, and bi-directional communications as in IP telephony, UDP- or TCP-based systems, etc. It has already served as the measurement tool in many collaborative projects.
Many of the techniques developed at Dionysos are related to the analysis of complex systems in general, not only in telecommunications. For instance, our Monte Carlo methods for analyzing rare events have been used by different industrial partners, some of them in networking but recently also by companies building transportation systems. We develop methods in different areas: numerical analysis of stochastic models, bound computations in the same area, Discrete Event Simulation, or, as just mentioned, rare event analysis.
We organized the 13th International Conference on Monte Carlo & Quasi-Monte Carlo Methods in Scientific Computing, in Rennes, July 1-6, 2018. The MCQMC Conference is a biennial meeting on Monte Carlo and quasi-Monte Carlo methods and the premier event on the topic; it attracted 300 people from all over the world.
Pierre L'Ecuyer received the 2018 Outstanding Simulation Publication Award from INFORMS (recognizing outstanding contributions to the simulation literature) for his 2016 article in Operations Research on arrival rate modeling with application to call centers.
Functional Description: These test suites are developed using the TTCN-3 environment.
The packages contain the full Abstract Test Suites written in TTCN-3 and the source files for building the codecs and adapters with the help of T3DevKit.
Participants: Annie Floch, Anthony Baire, Ariel Sabiguero, Bruno Deniaud, César Viho and Frédéric Roudaut
Contact: César Viho
Participants: Anthony Baire and César Viho
Contact: Anthony Baire
Keywords: IPv6 - Conformance testing - TTCN-3
Scientific Description: We have built a toolkit to ease the execution of tests written in the standardized TTCN-3 test specification language. This toolkit is made of a C++ library together with a highly customizable CoDec generator that allows fast development of the external components required to execute a test suite, such as CoDecs (for message Coding/Decoding), System Adapters and Platform Adapters. It also provides a framework for representing and manipulating TTCN-3 events so as to ease the production of test reports. The toolkit addresses issues that are not yet covered by ETSI standards while being fully compatible with the existing standard interfaces, TRI (Test Runtime Interface) and TCI (Test Control Interface); it has been tested with four TTCN-3 environments (IBM, Elvior, Danet and Go4IT) and on three different platforms (Linux, Windows and Cygwin).
Functional Description: T3DevKit is a free open source toolkit to ease the development of test suites in the TTCN-3 environment. It provides:
a CoDec generator (t3cdgen) that automates the development of the CoDecs needed for coding TTCN-3 values into physically transmittable messages and decoding incoming messages
a library (t3devlib) that provides an object-oriented framework to manipulate TTCN-3 entities (values, ports, timers, external functions…)
an implementation of the TRI and TCI standard interfaces
default implementations for the system adapter (SA), platform adapter (PA), test management (TM), test logging (TL) and component handling (CH) modules
default codecs
build scripts for the generation of executable test suites; these are tool-independent and facilitate the distribution of test suite sources
Participants: Annie Floch, Anthony Baire, Ariel Sabiguero, César Viho and Frédéric Roudaut
Contact: Federico Sismondi
Testing Tool Prototype
Keywords: Interoperability - Conformance testing - TTCN-3
Functional Description: ttproto is an experimental tool for implementing testing tools for conformance and interoperability testing.
It was first implemented to explore new features and concepts for the TTCN-3 standard, but we also used it to implement a passive interoperability test suite we provided for the CoAP interoperability event held in Paris in March 2012.
This tool is implemented in python3 and its design was influenced mainly by TTCN-3 (abstract model, templates, snapshots, behaviour trees, communication ports, logging) and by Scapy (syntax, flexibility, customizability).
Its purpose is to facilitate rapid prototyping and experimentation rather than production use. We chose to maximize its modularity and readability rather than performance and real-time considerations.
Contact: Federico Sismondi
URL: https://
Keywords: Test - Interoperability - Conformance testing - Plugtests
Functional Description: The software helps developers of the CoAP protocol assess whether their implementations (either CoAP clients or CoAP servers) are conformant to the protocol specifications and interoperable with other implementations. It encompasses:
Coordination of CoAP interoperability tests
Analysis of CoAP traces & issuing verdicts
Automation of open source CoAP implementations for reference-based interop testing
Authors: Federico Sismondi and César Viho
Contact: Federico Sismondi
Interoperability testing
Keywords: Interoperability - Conformance testing - CoAP - 6LoWPAN - OneM2M
Functional Description: The software is a framework for developing interoperability tests. These tests help developers of network protocols assess whether their implementations are conformant to the protocol specifications and interoperable with other implementations.
The software already integrates interoperability tests for CoAP, OneM2M and 6LoWPAN. The framework provides the following features to the users:
Coordination of the interoperability tests (enabling remote testing)
VPN-like connectivity between users' implementations (enabling remote testing)
Analysis of exchanged network traces & issuing verdicts
Automation of open source implementations for reference-based interop testing
This framework is the evolution of the CoAP Testing Tool (https://
Contact: Federico Sismondi
URL: https://
Stream Processing Systems. Stream processing systems are today gaining momentum as tools to perform analytics on continuous data streams. Their ability to produce analysis results with sub-second latencies, coupled with their scalability, makes them the preferred choice for many big data companies.
A stream processing application is commonly modeled as a directed acyclic graph whose nodes represent data operators and whose directed edges (the arcs) are streams of tuples carrying the data to be analyzed. Scalability is usually attained at the deployment phase, where each data operator can be parallelized using multiple instances, each of which handles a subset of the tuples conveyed by the operator's ingoing stream. Balancing the load among the instances of a parallel operator is important, as it yields better resource utilization and thus higher throughput and reduced tuple processing latencies.
Shuffle grouping is a technique used by stream processing frameworks to share input load among parallel instances of stateless operators. With shuffle grouping, each tuple of a stream can be assigned to any available operator instance, independently of any previous assignment. A common approach to implement shuffle grouping is to adopt a Round-Robin policy, a simple solution that fares well as long as the tuple execution time is almost the same for all tuples. However, such an assumption rarely holds in real cases, where the execution time strongly depends on the tuple content. As a consequence, parallel stateless operators within stream processing applications may experience unpredictable imbalance which, in the end, causes an undesirable increase in tuple completion times. We proposed Online Shuffle Grouping (OSG), a novel approach to shuffle grouping aimed at reducing the overall tuple completion time. OSG estimates the execution time of each tuple, enabling proactive and online scheduling of the input load to the target operator instances. Sketches are used to efficiently store the otherwise large amount of information required to schedule the incoming load. We provide a probabilistic analysis and illustrate, through both simulations and a running prototype, its impact on stream processing applications.
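The scheduling intuition behind execution-time-aware shuffle grouping can be illustrated with a minimal sketch (not the actual OSG implementation, which estimates execution times online with sketches): we compare Round-Robin with a greedy policy that sends each tuple to the instance with the least estimated pending work, assuming per-tuple execution-time estimates are given.

```python
# Toy comparison of two shuffle-grouping policies on a skewed workload.
# The makespan (load of the most loaded instance) approximates the
# completion time of the batch.
def round_robin(exec_times, n_instances):
    loads = [0.0] * n_instances
    for i, t in enumerate(exec_times):
        loads[i % n_instances] += t        # blind cyclic assignment
    return max(loads)

def least_loaded(exec_times, n_instances):
    loads = [0.0] * n_instances
    for t in exec_times:
        k = loads.index(min(loads))        # pick least loaded instance
        loads[k] += t
    return max(loads)
```

On a skewed stream such as `[10, 1, 1, 1, 10, 1, 1, 1]` with two instances, Round-Robin happens to place both heavy tuples on the same instance, while the greedy policy balances them, which is precisely the unbalance phenomenon described above.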
We recently considered an application to continuous queries, which are processed by a stream processing engine (SPE) to generate timely results from ephemeral input data. Variations of the input data streams, in terms of both volume and distribution of values, have a large impact on computational resource requirements. Dynamic and Automatic Balanced Scaling for Storm (DABS-Storm) is an original solution for handling the dynamic adaptation of continuous query processing to the evolution of the input stream properties, while controlling the system stability. Both fluctuations in data volume and in the distribution of values within data streams are handled by DABS-Storm to adjust the resource usage that best meets the processing needs. To achieve this goal, the DABS-Storm holistic approach combines a proactive auto-parallelization algorithm with a latency-aware load balancing strategy.
Sampling is a classical technique for detection in large-scale data streams. We have proposed a new algorithm that detects on the fly the
Throughput Prediction in Cellular Networks. Downlink data rates can vary significantly in cellular networks, with a potentially non-negligible effect on the user experience. Content providers address this problem by using different representations (e.g., picture resolution, video resolution and rate) of the same content and switch among these based on measurements collected during the connection. Knowing the achievable data rate before the connection establishment would definitely help content providers choose the most appropriate representation from the very beginning. We have conducted several large measurement campaigns involving a panel of users connected to a production network in France, to determine whether it is possible to predict the achievable data rate using measurements collected, before establishing the connection to the content provider, on the operator's network and on the mobile node. We establish evidence that it is indeed possible to exploit these measurements to predict, with acceptable accuracy, the achievable data rate. We thus introduce cooperation strategies between the mobile user, the network operator and the content provider to implement such an anticipatory solution .
Call Centers. In emergency call centers (for police, firemen, ambulances), a single event can sometimes trigger many incoming calls in a short period of time. Several people may call to report the same fire or the same accident, for example. Such a sudden burst of incoming traffic can have a significant impact on the responsiveness of the call center for other events in the same period of time. In , we examine data from the SOS Alarm center in Sweden. We also build a stochastic model for the bursts. We show how to estimate the model parameters for each burst by maximum likelihood, how to model the multivariate distribution of those parameters using copulas, and how to simulate the burst process from this model. In our model, certain events trigger an arrival process of calls with a random time-varying rate over a finite period of time of random length.
Performance of CDNs. In order to track users who illegally re-stream live video streams, one solution is to embed identified watermark sequences in the video segments to distinguish the users. However, since all types of watermarked segments must be prepared, existing solutions require an extra bandwidth cost for delivery (at least doubling the required bandwidth). In , we study how to reduce the internal delivery (traffic) cost of a Content Delivery Network (CDN). We propose a mechanism that reduces the number of watermarked segments that need to be encoded and delivered. We calculate the best- and worst-case traffic for two different cases: multicast and unicast. The results illustrate that, even in the worst cases, the traffic with our approach is much lower than without our reduction mechanism. Moreover, the watermarked sequences still maintain uniqueness for each user. Experiments based on a real database are carried out and illustrate that our mechanism significantly reduces traffic with respect to current CDN practice.
Beyond CDNs, Network Operators (NOs) are developing caching capabilities within their own network infrastructure, in order to face the rise in data consumption and to avoid potential congestion at peering links. These factors explain the enthusiasm of industry and academia around the Information-Centric Networking (ICN) concept and its in-network caching feature. In recent years, many contributions have focused on improving the caching performance of ICN. In , we propose a very versatile model capable of capturing the most efficient caching strategies. We first represent a single generic cache node, and then extend our model to a network of caches. The obtained results are used to derive, in particular, the cache hit probability of a content in such caching systems. Using a discrete event simulator, we show the accuracy of the proposed model under different network configurations.
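As a simple simulation baseline for such caching systems, the hit probability of a single LRU cache fed with Zipf-distributed requests can be estimated as follows; this is a hedged sketch with hypothetical parameters, not the analytical model proposed in the paper:

```python
import random

def lru_hit_ratio(catalog_size, cache_size, n_requests, zipf_a=0.8, seed=1):
    """Estimate the hit probability of one LRU cache by simulation.
    Content popularity follows a Zipf law with exponent zipf_a."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** zipf_a for i in range(catalog_size)]
    cache = []                 # most recently used item kept at the end
    hits = 0
    for _ in range(n_requests):
        item = rng.choices(range(catalog_size), weights=weights)[0]
        if item in cache:
            hits += 1
            cache.remove(item)         # refresh recency on a hit
        elif len(cache) >= cache_size:
            cache.pop(0)               # evict the least recently used item
        cache.append(item)
    return hits / n_requests
```

Thanks to the inclusion property of LRU, a larger cache never yields a lower hit ratio on the same request stream, which such a simulation readily confirms.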
Probabilistic analysis of population protocols. In , we studied the well-known problem of the dissemination of information in large scale distributed networks through pairwise interactions. This problem, originally called rumor mongering, and then rumor spreading, has mainly been investigated in the synchronous model, which relies on the assumption that all the nodes of the network act in synchrony, that is, at each round of the protocol, each node is allowed to contact a random neighbor. In this paper, we drop this assumption, arguing that it is not realistic in large scale systems. We thus consider the asynchronous variant, where, at random times, nodes successively interact in pairs, exchanging their information on the rumor. In a previous paper, we performed a study of the total number of interactions needed for all the nodes of the network to discover the rumor. While most existing results involve huge constants that do not allow comparing different protocols, we provided a thorough analysis of the distribution of this total number of interactions together with its asymptotic behavior. In this paper, we extend this discrete-time analysis by solving a previously proposed conjecture, and we consider the continuous-time case, where a Poisson process is associated with each node to determine the instants at which interactions occur. The rumor spreading time is then more realistic, since it is the real time needed for all the nodes of the network to discover the rumor. Once again, as most existing results involve huge constants, we provide a tight bound and an equivalent of the complementary distribution of the rumor spreading time. We also give the exact asymptotic behavior of the complementary distribution of the rumor spreading time around its expected value when the number of nodes tends to infinity.
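The asynchronous interaction model studied above can be sketched with a toy simulation (hypothetical conventions, not the analysis of the paper): at each step a uniformly random pair of nodes interacts and exchanges what it knows, and we count the interactions until every node has discovered the rumor.

```python
import random

def rumor_spreading_interactions(n, seed=0):
    """Count pairwise interactions until all n nodes know the rumor,
    starting from a single informed node (node 0)."""
    rng = random.Random(seed)
    informed = {0}
    interactions = 0
    while len(informed) < n:
        a, b = rng.sample(range(n), 2)   # a random pair interacts
        interactions += 1
        if a in informed or b in informed:
            informed.update((a, b))      # the rumor is exchanged
    return interactions
```

Since each interaction informs at most one new node, at least n - 1 informative interactions are always needed; the distribution of the total count (including wasted interactions between two uninformed or two informed nodes) is precisely the quantity analyzed in the papers above.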
The context of is the two-choice paradigm, which is widely used in balanced online resource allocation, priority scheduling, load balancing and, more recently, in population protocols. The model governing the evolution of these systems consists of throwing balls one by one, independently of each other, into
The work described in focuses on pairwise interaction-based protocols, and proposes a universal mechanism that allows each agent to locally detect that the system has converged to the sought configuration with high probability. To illustrate our mechanism, we use it to detect the instant at which the proportion problem is solved. Specifically, let
All these works are part of the thesis .
Fluid Queues. Stochastic fluid flow models, and in particular those driven by Markov chains, have been intensively studied in the last two decades. Not only have they been proven to be efficient tools to mimic Internet traffic flow at a macroscopic level, but they are also useful in many other areas of application, such as manufacturing systems or actuarial sciences, to cite but a few. In a forthcoming book, entitled Advanced Trends in Queueing Theory, edited by V. Anisimov and N. Limnios in the Mathematics and Statistics Series, Sciences, Iste & J. Wiley, we propose a chapter which focuses on such a model in the context of the performance analysis of a potentially congested system. The latter is modeled by means of a finite-capacity system whose content is described by a Markov-driven stable fluid flow. We describe step by step a methodology to compute exactly the loss probability of the system. Our approach is based on the computation of hitting probabilities jointly with the peak level reached during a busy period, both in the infinite- and finite-buffer cases. Accordingly, we end up with differential Riccati equations that can be solved numerically. Moreover, we are able to characterize the complete distribution of both the duration of congestion and the total information lost during such a busy period.
Organizing both transactions and blocks in a distributed ledger. We propose in a new way to organize both transactions and blocks in a distributed ledger, to address the performance issues of permissionless ledgers. In contrast to most existing solutions, in which the ledger is a chain of blocks extracted from a tree or a graph of chains, we present a distributed ledger whose structure is a balanced directed acyclic graph of blocks. We call this specific graph a SYC-DAG. We show that a SYC-DAG allows us to keep all the remarkable properties of the Bitcoin blockchain in terms of security, immutability, and transparency, while enjoying higher throughput and self-adaptivity to transaction demand. To the best of our knowledge, such a design has never been proposed before.
Additional intermittent server.
We analyzed the performance of a system consisting of a queue with a single regular server supported by an additional intermittent server who, in order to decrease the mean response time, i) leaves the back office to join the first server when the number of customers reaches the threshold
Operational availability prediction. The evaluation of the operational availability of a fleet of systems on an operational site is far from trivial when the size of the state space of a faithful Markovian model makes a direct analysis unrealistic, as is the case for many large models. The main difficulty comes from the existence on the site of “line replaceable units” (lru) that may be unavailable from time to time when a breakdown occurs. To be more precise, the intrinsic availability is an upper limit of the operational availability, because the unavailability it considers corresponds only to the time necessary to exchange the defective element; this assumes that the repairer and the spare part are always immediately available on the operational site. To reduce the intervention time at an operational site, system repair consists of replacing the defective subset with an identical one in good condition. These exchangeable subsets are called lrus. Restoring a piece of a system by exchanging an lru makes it possible to obtain a rapid return to service of the system and does not require the presence on the operational site of several specialists. Thus, an aircraft engine can be changed in a few hours thanks to personnel who know the procedures for intervention on fasteners and connections but who are not required to know how to repair a broken engine. The operational availability (the one that is of interest to the user) will generally be lower than the intrinsic availability, because the former integrates the unavailability of the repairer or of an lru, if any. And while the waiting time for a repairer is generally measured in hours, the unavailability of an lru can be measured in days, weeks, even months! It is therefore this last point that a potential customer should worry about first. Moreover, the unavailability of the lru is more difficult to model and evaluate than that of the repairer.
In a first study, we considered a virtual system with only one type of lru in order to understand the influence of various parameters on the operational availability, both with a faithful Markovian model (for which we are able to get the exact answer) and with a proposed approximate method. By doing so, we were able to compute the relative errors induced when using the latter. The approximate method showed quite good accuracy .
The generalization to systems with multiple types of line replaceable units was conducted in a second study. The main idea is to consider a non-product-form queuing network and to aggregate subsets of it as if they were parts of a product-form queuing network. Note that if the spare lrus were always available on the operational site (which is never the case), we could fairly model the behavior of the support system on the site by means of a product-form network. In this second study, the potential lack of a repairer is also taken into account. The low relative complexity of the new recurrent approximate method allows it to be used for applications encountered in the field of maintenance. Although approximate, the method provides the result with a small relative error, ranging from
Modeling loss processes in voice traffic.
Markov models of loss incidents happening during packet voice communications are needed for many engineering tasks, namely network dimensioning and automatic quality assessment. Two very simple ones are the Bernoulli and the 2-state Markov models, but they carry limited information about the incurred loss incidents. On the other hand, a general Markov loss model with
In , we propose a comprehensive and detailed Markov loss model taking into account the distinct perceived effects caused by different loss incidents. Specifically, it explicitly differentiates between (1) isolated 20 msec loss incidents, which are inaudible to the human ear, (2) short loss incidents (20-80 msec) occurring at high or low frequency, which are perceived by humans as bubbles, and (3) long loss incidents (
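The simplest of these models is easy to experiment with. The sketch below simulates a 2-state Markov (Gilbert-style) loss process and measures its loss rate and mean burst length; the transition probabilities are arbitrary illustrative choices, and the stationary loss rate is p_gb/(p_gb+p_bg) while the mean burst length is 1/p_bg:

```python
import random

def simulate_gilbert(p_gb, p_bg, n, seed=1):
    """2-state Markov loss model: in the Good state the packet is received,
    in the Bad state it is lost.  p_gb = P(Good->Bad), p_bg = P(Bad->Good).
    Returns the empirical loss rate and the mean loss-burst length."""
    rng = random.Random(seed)
    state_bad = False
    losses, bursts, current = 0, [], 0
    for _ in range(n):
        if state_bad:
            losses += 1
            current += 1
            if rng.random() < p_bg:        # leave the Bad state
                state_bad = False
                bursts.append(current)
                current = 0
        else:
            if rng.random() < p_gb:        # enter the Bad state
                state_bad = True
    if current:                            # close a burst still open at the end
        bursts.append(current)
    mean_burst = sum(bursts) / len(bursts) if bursts else 0.0
    return losses / n, mean_burst
```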
Transient analysis of Markovian models. Continuing with a research line that we started years ago with colleagues in California, where we addressed the transient state distributions of Markovian queuing models using the concept of pseudo-dual proposed by Anderson, we discovered this year a new way to attack these problems, leading to an approach with a much wider application range. Moreover, this methodology clarifies some phenomena that appeared when Anderson's tools were used. The result is now two new concepts we propose, related in general to arbitrary square matrices (possibly infinite), the power-dual and the exponential-dual, and the way we can apply them to the analysis of linear systems of difference or differential equations. The first elements of this new theory were discussed in .
Distributed deep learning on edge-devices. A recently celebrated type of deep neural network is the Generative Adversarial Network (GAN). GANs are generators of samples from a distribution that has been learned; they are up to now centrally trained from local data on a single location. We question in and in the performance of training GANs using a spread dataset over a set of distributed machines, using a gossip approach shown to work on standard neural networks. This performance is compared to the federated learning distributed method, that has the drawback of sending model data to a server. We also propose a gossip variant, where GAN components are gossiped independently. Experiments are conducted with Tensorflow with up to 100 emulated machines, on the canonical MNIST dataset. The position of these papers is to provide a first evidence that gossip performances for GAN training are close to the ones of federated learning, while operating in a fully decentralized setup. Second, to highlight that for GANs, the distribution of data on machines is critical (i.e., i.i.d. or not). Third, to illustrate that the gossip variant, despite proposing data diversity to the learning phase, brings only marginal improvements over the classic gossip approach.
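The primitive underlying such gossip schemes is pairwise averaging of model parameters. The toy sketch below (scalar values standing in for full GAN weights, which is of course a drastic simplification) shows how repeated random pairwise averaging drives all nodes to the common mean without any central server:

```python
import random

def gossip_average(values, rounds, seed=0):
    """At each round two randomly chosen nodes exchange their current
    values and both keep the average -- the basic gossip primitive.
    The sum is preserved, so all nodes converge to the global mean."""
    rng = random.Random(seed)
    v = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(v)), 2)
        m = 0.5 * (v[i] + v[j])
        v[i] = v[j] = m
    return v

# four emulated machines starting from different local "models"
vals = gossip_average([0.0, 10.0, 20.0, 30.0], rounds=200)
```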
Machine learning acceleration. The number of connected devices is increasing with the emergence of new services and trends. This phenomenon is leading to traffic growth over both the control and the data planes of the mobile core network. Therefore, the 3GPP group has rethought the architecture of the New Generation Core (NGC) by defining its components as Virtualized Network Functions (VNFs). However, scalability techniques should be envisioned in order to answer the needs, in terms of resource provisioning, without degrading the Quality of Service (QoS) already offered by hardware-based core networks. Neural networks, and in particular deep learning, having shown their effectiveness in predicting time series , could be good candidates for predicting traffic evolution.
In , we proposed a novel solution to generalize neural networks while accelerating the learning process by using
Machine Learning in Quality of Experience assessment. In a series of presentations we have disseminated the main ideas behind a new generation of Quality of Experience assessment tools in preparation in the team. In the meetings and , and also in the plenary , we described some of the key features of the tools we used in our PSQA project, the Random Neural Network of Erol Gelenbe, and the ideas we are following for extending some of their capabilities. The goal is to allow the user to evaluate, with little additional cost, the sensitivities of the Quality of Experience with respect to specific metrics of interest, having in mind design applications, or the improvement of existing systems. Another example is to invert the PSQA function providing a measure of the Quality of Experience as a function of several QoS and channel-based metrics, in order to define subsets of their joint state space where quality has a given property of interest (for instance, being good enough). In the plenary talk , we described other properties of these tools, and other directions being explored, such as the replacement of the subjective testing sessions leading to fully automatic tools, as well as applications to big data problems. In the keynote talk we showed how to use our PSQA technology for classic performance evaluation work. The idea is that instead of targeting classic performance metrics such as a mean response time or a loss rate (or dependability metrics; the approach is the same), we can develop models that target the “ultimate goal”, the Quality of Experience itself. That is, instead of, say, providing a formula relating the loss rate of a system to the input data, we can obtain a (more complex) formula giving a numerical measure of the Quality of Experience as a function of the same data.
The general field of network economics, analyzing the relationships between all acts of the digital economy, has been an important subject for years in the team. The whole problem of network economics, from theory to practice, describing all issues and challenges, is described in our book .
Reliability/security. In an ad hoc network, accessing a point depends on the participation of other, intermediate, nodes. Since each node behaves selfishly, we end up with a non-cooperative game in which each node incurs a cost for providing a reliable connection, but where success depends not only on its own reliability investment but also on the investments of the nodes that can be on a path to the access point. Our purpose in is to formally define and analyze such a game: existence of an equilibrium, comparison with the optimal cooperative case, etc.
Roaming. In October 2015, the European parliament decided to forbid roaming charges among EU mobile phone users starting in June 2017, as a first step toward the unification of the European digital market. We discuss in the consequences of such a measure.
Community networks. Community networks have emerged as an alternative to licensed-band systems (WiMAX, 4G, etc.), providing access to the Internet with Wi-Fi technology while covering large areas. A community network is easy and cheap to deploy, as the network uses members' access points in order to cover the area. We study in the competition between a community operator and a traditional operator (using a licensed-band system) through a game-theoretic model, while considering the mobility of each user in the area.
Spectrum sharing & cognitive networks. Licensed shared access (LSA) is a new approach that allows Mobile Network Operators to use a portion of the spectrum initially licensed to another incumbent user, by obtaining a license from the regulator via an auction mechanism. In this context, different truthful auction mechanisms have been proposed, which differ in terms of allocation (who gets the spectrum) but also in terms of revenue. Since those mechanisms can generate an extremely low revenue, we extend them by introducing a reserve price per bidder, which represents the minimum amount that each winning bidder should pay. Since this may be at the expense of allocation fairness, for each mechanism we find in by simulation the reserve price that optimizes a trade-off between expected fairness and expected revenue. For each mechanism, we analytically express the expected revenue when the operators' valuations for the spectrum are independent and identically distributed from a uniform distribution. We also propose in PAM: Proportional Allocation Mechanism, a truthful auction mechanism offering a good compromise between fairness and efficiency, which can generate the highest revenue to the regulator compared to other truthful mechanisms proposed in the literature.
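The effect of a reserve price is easy to observe on the simplest truthful mechanism. The sketch below (a plain second-price auction, not one of the mechanisms studied in our work) estimates the expected revenue by Monte Carlo under iid Uniform(0,1) valuations:

```python
import random

def vickrey_revenue(n_bidders, reserve, n_runs=100000, seed=42):
    """Monte Carlo estimate of the expected revenue of a second-price
    auction with a reserve price, valuations iid Uniform(0,1).  Bidding
    truthfully is optimal; the winner pays max(second-highest bid,
    reserve); no sale happens if every bid is below the reserve."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        bids = sorted(rng.random() for _ in range(n_bidders))
        if bids[-1] >= reserve:                     # item is sold
            total += max(bids[-2], reserve)
    return total / n_runs
```

With two bidders, revenue without reserve is E[min of two uniforms] = 1/3, and a well-chosen reserve price strictly increases it, at the cost of sometimes not allocating the spectrum at all.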
Selfish primary user emulation (PUE) is a serious security problem in cognitive radio networks. By emitting emulated incumbent signals, a PUE attacker can selfishly occupy more channels. Consequently, a PUE attacker can prevent other secondary users from accessing radio resources and interfere with nearby primary users. To mitigate the selfish PUE, a surveillance process on occupied channels could be performed. Determining surveillance strategies, particularly in multi-channel context, is necessary for ensuring network operation fairness. Since a rational attacker can learn to adapt to the surveillance strategy, the question is how to formulate an appropriate modeling of the strategic interaction between a defender and an attacker. In , we study the commitment model in which the network manager takes the leadership role by committing to its surveillance strategy and forces the attacker to follow the committed strategy. The relevant strategy is analyzed through the Strong Stackelberg Equilibrium (SSE). Analytical and numerical results suggest that, by playing the SSE strategy, the network manager significantly improves its utility with respect to playing a Nash equilibrium (NE) strategy, hence obtains a better protection against selfish PUEs. Moreover, the computational effort to compute the SSE strategy is lower than to find a NE strategy.
Network neutrality.
Most of our activity has been devoted to the vivid network neutrality debate, going beyond the traditional for or against neutrality, and trying to tackle it from different angles.
We gave a tutorial on this topic , with a video available at https://
In , we place and discuss within a net neutrality context the conflict in early 2018 between Orange and the TV channel TF1, aiming to prevent some content from being distributed. The related issue of big CPs pushing ISPs to improve their (own) QoS is further analyzed in . Indeed, there is a trend for big content providers such as Netflix and YouTube to give grades to ISPs, to incentivize those ISPs to improve at least the quality offered to their service. We design a model analyzing ISPs' optimal allocation strategies in a competitive context and in front of quality-sensitive users. We show that the optimal strategy is non-neutral, that is, it does not allocate bandwidth proportionally to the traffic share of content providers. On the other hand, we show that non-neutrality does not benefit ISPs but is surprisingly favorable to users' perceived quality.
Another current important issue in the current net neutrality debate is that of sponsored data: With wireless sponsored data, a third party, content or service provider, can pay for some of your data traffic so that it is not counted in your plan's monthly cap. This type of behavior is currently under scrutiny, with telecommunication regulators wondering if it could be applied to prevent competitors from entering the market, and what the impact on all telecommunication actors can be. To answer those questions, we design and analyze in a model where a Content Provider (CP) can choose the proportion of data to sponsor and a level of advertisement to get a return on investment, and several Internet Service Providers (ISPs) in competition. We distinguish three scenarios: no sponsoring, the same sponsoring to all users, and a different sponsoring depending on the ISP you have subscribed to. This last possibility may particularly be considered an infringement of the network neutrality principle. We see that sponsoring can be beneficial to users and ISPs depending on the chosen advertisement level. We also discuss the impact of zero-rating where an ISP offers free data to a CP to attract more customers, and vertical integration where a CP and an ISP are the same company.
Search engines. Different search engines provide different outputs for the same keyword. This may be due to different definitions of relevance, and/or to different knowledge/anticipation of users' preferences, but rankings are also suspected to be biased towards own content, which may be prejudicial to other content providers. In , we make some initial steps toward a rigorous comparison and analysis of search engines, by proposing a definition for a consensual relevance of a page with respect to a keyword, from a set of search engines. More specifically, we look at the results of several search engines for a sample of keywords, and define for each keyword the visibility of a page based on its ranking over all search engines. This allows us to define a score of the search engine for a keyword, and then its average score over all keywords. Based on the pages' visibility, we can also define the consensus search engine as the one showing the most visible results for each keyword. We have implemented this model and present in an analysis of the results.
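A minimal sketch of this kind of scoring, using a made-up position weight of 1/(rank+1) (the actual definitions used in the paper may differ):

```python
def visibility(rankings, depth=3):
    """rankings: {engine: [page, page, ...]} for one keyword.
    A page's visibility is the sum over engines of a position weight,
    here 1/(rank+1) -- an assumed weighting for illustration."""
    vis = {}
    for ranked in rankings.values():
        for pos, page in enumerate(ranked[:depth]):
            vis[page] = vis.get(page, 0.0) + 1.0 / (pos + 1)
    return vis

def engine_score(rankings, depth=3):
    """Score of each engine for the keyword: visibility of the pages it
    shows, weighted by the position it gives them."""
    vis = visibility(rankings, depth)
    return {e: sum(vis[p] / (i + 1) for i, p in enumerate(r[:depth]))
            for e, r in rankings.items()}
```

A page ranked first by every engine gets the maximal visibility, and the consensus engine would simply rank pages by decreasing visibility.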
We maintain a research activity in different areas related to dependability, performability and vulnerability analysis of communication systems, using both the Monte Carlo and the Quasi-Monte Carlo approaches to evaluate the relevant metrics. Monte Carlo (and Quasi-Monte Carlo) methods often represent the only tool able to solve complex problems of these types.
MCMC. The current popular method for approximate simulation from the posterior distribution of the linear Bayesian LASSO is a Gibbs sampler. It is well known that the output analysis of an MCMC sampler is difficult due to the complex dependence among the states of the underlying Markov chain. Practitioners can usually only assess the convergence of MCMC samplers using heuristics. In we construct a method that yields independent and identically distributed (iid) draws from the LASSO posterior. The advantage of such exact sampling over MCMC sampling is that there are no difficulties with the output analysis of the exact sampler, because all the simulated states are independent. The proposed sampler works well when the dimension of the parameter space is not too large; when it is too large to permit exact sampling, the proposed method can still be used to construct an approximate MCMC sampler.
Rare event simulation. We develop in simulation estimators of measures associated with the tail distribution of the hitting time to a rarely visited set of states of a regenerative process. In various settings, the distribution of the hitting time divided by its expectation converges weakly to an exponential as the rare set becomes rarer. This motivates approximating the hitting-time distribution by an exponential whose mean is the expected hitting time. As the mean is unknown, we estimate it via simulation. We then obtain estimators of a quantile and conditional tail expectation of the hitting time by computing these values for the exponential approximation calibrated with the estimated mean. Similarly, the distribution of the sum of lengths of cycles before the one hitting the rare set is often well-approximated by an exponential, and we analogously exploit this to estimate tail measures of the hitting time. Numerical results demonstrate the effectiveness of our estimators.
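The core of this approximation is simple to sketch: estimate the mean hitting time by simulation, then read quantiles off an exponential distribution with that mean. The toy example below uses a reflected random walk hitting a rare level; the choice of process and parameters is ours, purely for illustration:

```python
import math
import random

def hitting_time_quantile(p_up, threshold, q, n_runs=5000, seed=7):
    """Simulate hitting times of `threshold` by a random walk on the
    nonnegative integers (up with prob p_up, down otherwise, reflected
    at 0), estimate the mean, and return (mean, q-quantile of the
    exponential distribution calibrated with that mean)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        x, t = 0, 0
        while x < threshold:
            t += 1
            x = x + 1 if rng.random() < p_up else max(0, x - 1)
        total += t
    mean = total / n_runs
    return mean, -mean * math.log(1.0 - q)   # exponential quantile
```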
In rare event simulation, the two main approaches are Splitting and Importance Sampling.
Concerning Splitting, we study in the behavior of a method for sampling from a given distribution conditional on the occurrence of a rare event. The method returns a random-sized sample of points such that unconditionally on the sample size, each point is distributed exactly according to the original distribution conditional on the rare event. For a cost function which is nonzero only when the rare event occurs, the method provides an unbiased estimator of the expected cost, but if we select at random one of the returned points, its distribution differs in general from the exact conditional distribution given the rare event. However, we prove that if we repeat the algorithm, the distribution of the selected point converges to the exact one in total variation.
Splitting and another technique based on a conditional Monte Carlo approach have been applied and compared in for the reliability estimation for networks whose links have random capacities and in which a certain target amount of flow must be carried from some source nodes to some destination nodes. Each destination node has a fixed demand that must be satisfied and each source node has a given supply. We want to estimate the unreliability of the network, defined as the probability that the network cannot carry the required amount of flow to meet the demand at all destination nodes. When this unreliability is very small, which is our main interest in this paper, standard Monte Carlo estimators become useless because failure to meet the demand is a rare event. We find that the conditional Monte Carlo technique is more effective when the network is highly reliable and not too large, whereas for a larger network and/or moderate reliability, the splitting approach is more effective. In we presented the main ideas behind another approach for the same kind of problem, where we generalize the Creation Process idea to a multi-level setting, on top of which we explore the behavior of the Splitting method, with very good results when the system is highly reliable.
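The principle of splitting is to replace one rare event by a product of conditional, much less rare events. For a one-dimensional random walk, passage through intermediate levels gives an exact decomposition of this kind, so the sketch below estimates a gambler's-ruin probability of roughly 3e-4 level by level (a textbook example, not the network-flow setting of the papers above):

```python
import random

def splitting_estimate(p_up, L, n_per_level=2000, seed=3):
    """Fixed-effort multilevel splitting for the probability that a
    random walk (up-step prob p_up) started at 1 reaches level L before
    0.  Since the walk must pass through every intermediate level, the
    target probability is the product over k of
    P(reach k before 0 | start at k-1)."""
    rng = random.Random(seed)

    def reach_next(start, target):
        x = start
        while 0 < x < target:
            x = x + 1 if rng.random() < p_up else x - 1
        return x == target

    estimate = 1.0
    for level in range(2, L + 1):
        hits = sum(reach_next(level - 1, level) for _ in range(n_per_level))
        if hits == 0:
            return 0.0
        estimate *= hits / n_per_level
    return estimate
```

Crude Monte Carlo with the same budget would observe almost no successes for an event of probability around 3e-4, which is exactly why splitting (or importance sampling) is needed.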
Importance sampling (IS) is the other main technique used, but it requires a fine tuning of parameters. It has been applied in to urban passenger rail systems, which are large-scale systems comprising highly reliable redundant structures and logistics (e.g., spares or repair personnel availability, inspection protocols, etc.). To meet strict contractual obligations, the steady-state unavailability of such systems needs to be accurately estimated as a measure of a solution's life cycle costs. We use Markovian Stochastic Petri Net models to conveniently represent the systems. We propose a multi-level Cross-Entropy optimization scheme, where we exploit the regenerative structure of the underlying continuous-time Markov chain to determine optimal IS rates in the case of rare events. The CE scheme is used in a pre-simulation and applied to the failure transitions of the Markovian SPN models only. The proposed method divides a rare problem into a series of less rare problems by considering increasingly rare component failures. In the first stage a standard regenerative simulation is used for non-rare system failures. At each subsequent stage, the rarity is progressively increased (by decreasing the failure rates of components) and the IS rates of transitions obtained from the previous problem are used at the current stage. The final pre-simulation stage provides a vector of IS rates that are optimized and are used in the main simulation. The experimental results showed a bounded relative error property as the rarity of the original problem increases, and as a consequence a considerable variance reduction and gain (in terms of work-normalized variance).
In and we introduced the idea that the problem with the standard estimator in the case of rare events is not the estimator itself but its usual implementation, and we describe an efficient way of implementing it in order to be able to perform estimations that, otherwise, are out of reach following the crude approach as it is usually coded. The idea is to reduce the time needed to sample the standard estimator and not the variance. The interest in taking this viewpoint is also discussed.
In we gave a tutorial on Monte Carlo techniques for rare event analysis, where the basic Splitting and Importance Sampling families of estimators are presented, together with the Zero Variance subfamily of the first class of techniques, plus other methods such as the Recursive Variance Reduction approach.
Taking dependence into account. The Marshall-Olkin copula model has emerged as the standard tool for capturing dependency between components in failure analysis in reliability. In this model shocks arise at exponential random times, that affect one or several components inducing a natural correlation in the failure process. However, the method presents the classic “curse of dimensionality” problem. Marshall-Olkin models are usually intended to be applied to design a network before its construction, therefore it is natural to assume that only partial information about failure behavior can be gathered, mostly from similar existing networks. To construct such a MO model, we propose in an optimization approach in order to define the shock’s parameters in the copula, with the goal of matching marginal failures probabilities and correlations between these failures. To deal with the exponential number of parameters of this problem, we use a column-generation technique. We also discuss additional criteria that can be incorporated to obtain a suitable model. Our computational experiments show that the resulting approach produces a close estimation of the network reliability, especially when the correlation between component failures is significant.
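The shock mechanism underlying a Marshall-Olkin model can be sampled directly. In the sketch below (with invented shock sets and rates), each shock arrives at an exponential time and kills every component in its subset, which is precisely what induces the correlation between component failures:

```python
import random

def marshall_olkin_sample(shock_rates, n_components, seed=5):
    """One draw of component failure times from a Marshall-Olkin shock
    model.  shock_rates maps frozensets of component indices to the
    exponential rate of the corresponding shock; a component fails at
    the earliest shock that hits it."""
    rng = random.Random(seed)
    times = [float("inf")] * n_components
    for subset, rate in shock_rates.items():
        t = rng.expovariate(rate)          # arrival time of this shock
        for c in subset:
            times[c] = min(times[c], t)    # earliest shock wins
    return times
```

With only a joint shock, the two components fail at exactly the same instant, the extreme case of positive dependence that independent exponential lifetimes can never produce.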
Random variable generation.
Random number generators were invented before there were symbols for writing numbers, and long before mechanical and electronic computers. All major civilizations through the ages have felt the urge to make random selections, for various reasons. Today, random number generators, particularly on computers, are an important (although often hidden) ingredient in human activity.
We study in the lattice structure of random number generators of the MIXMAX family, a class of matrix linear congruential generators that produce a vector of random numbers at each step. These generators were initially proposed and justified as close approximations to certain ergodic dynamical systems having the Kolmogorov
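The structure of a matrix linear congruential generator is easy to sketch: the state vector is multiplied by a fixed matrix modulo m at every step, producing a whole vector of outputs at once. The toy generator below uses a small invented matrix; actual MIXMAX generators use specific large matrices whose spectral and lattice properties are precisely what our study examines:

```python
def matrix_lcg(A, seed_vec, m, n_steps):
    """Toy matrix linear congruential generator: x_{t+1} = A x_t mod m,
    each state yielding a vector of outputs in [0,1).  Illustrative
    only -- not the MIXMAX matrix family."""
    x = list(seed_vec)
    N = len(x)
    out = []
    for _ in range(n_steps):
        x = [sum(A[i][j] * x[j] for j in range(N)) % m for i in range(N)]
        out.append([v / m for v in x])
    return out
```

As with scalar LCGs, successive output vectors lie on a lattice, and the quality of the generator depends on how well that lattice fills the unit hypercube.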
Congestion control. The explosive growth of connected objects is certainly one of the most important challenges facing operators' network infrastructures. Although it has been foreseen for a very long time, it is still not clear how to support such huge number of devices efficiently.
A smarter planning of dedicated access slots would certainly limit the burden at the access network, but remains insufficient since some equipment reacts to events which cannot be timed. Moreover, barring some Internet of Things (IoT) devices from accessing the network is very efficient; nevertheless, efficiency is generally linked to precise knowledge of the number of contending devices. However, before connection establishment, the terminals are invisible to access points and, therefore, it is very difficult to estimate their number. A lower bound on the number of backlogged devices can be determined, but underestimating this number may lead to a congestion collapse, whereas an overestimation implies underutilization of resources. In , we propose a lightweight change to the standard to accurately reveal the state of network congestion, by overloading connection requests with the number of access attempts (the number of times the device has been barred as well as the number of attempts). Using such information, we propose an accurate recursive estimator of the number of devices. The obtained results demonstrate that the proposed solution not only makes it possible to estimate the number of devices much better than existing techniques, but also allows determining precisely the number of blocked devices.
Even if the support of IoT objects represents a real challenge at the access, as mentioned above, it is nevertheless important to support them effectively in the core network; this is one of the requirements of 5G networks. While this represents an interesting opportunity for operators to grow their business, it will require new mechanisms to scale and manage the envisioned high number of devices and their generated traffic. In particular, the signaling traffic will overload the 5G core Network Function (NF) in charge of authentication and mobility, namely the Access and Mobility Management Function (AMF). The objective of is to provide an algorithm based on Control Theory allowing: (i) to equilibrate the load on the AMF instances in order to maintain an optimal response time with limited computing latency; (ii) to scale out or in the AMF instances (using NFV techniques) depending on the network load, to save energy and avoid wasting resources. Results obtained via computer simulation indicate the superiority of our algorithm in ensuring fair load balancing while scaling dynamically with the traffic load.
Energy efficiency. The use of Low Power Wide Area Networks (LPWANs) is growing due to their advantages in terms of low cost, energy efficiency and range. Although LPWANs attract the interest of industry and network operators, they face certain constraints related to energy consumption, network coverage and quality of service. We demonstrate in the possibility of optimizing the performance of the LoRaWAN (Long Range Wide Area Network) technology, one of the most widely used LPWAN technologies. We suggest that nodes use lightweight learning methods, namely multi-armed bandit algorithms, to select the communication parameters (spreading factor and emission power). Extensive simulations show that such learning methods manage the trade-off between energy consumption and packet loss much better than an Adaptive Data Rate (ADR) algorithm adapting spreading factors and transmission powers on the basis of Signal to Interference and Noise Ratio (SINR) values.
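A classic bandit algorithm of the kind suggested above is UCB1. The sketch below applies it to a set of transmission settings with assumed (invented) delivery probabilities; the reward is 1 when the simulated packet gets through, and the node quickly concentrates on the best setting:

```python
import math
import random

def ucb_select(success_probs, horizon=5000, seed=11):
    """UCB1 bandit over transmission settings (arms).  success_probs
    are assumed per-setting packet delivery probabilities; reward is 1
    on a successful (simulated) transmission.  Returns how often each
    setting was chosen."""
    rng = random.Random(seed)
    n = len(success_probs)
    counts = [0] * n
    rewards = [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1                        # play each arm once first
        else:
            arm = max(range(n),
                      key=lambda a: rewards[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = 1.0 if rng.random() < success_probs[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += r
    return counts
```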
Vehicular networks. According to recent forecasts, constant population growth and urbanization will bring an additional load of 2.9 billion vehicles to road networks by 2050. This will certainly lead to increased air pollution concerns and highly congested roads putting more strain on an already deteriorated infrastructure, and may increase the risk of accidents on the roads as well. Therefore, to face these issues we need not only to promote the usage of smarter and greener means of transportation, but also to design advanced solutions that leverage the capabilities of these means along with modern cities' road infrastructure to maximize its utility. To this end, we propose in an original Cognitive Radio inspired algorithm, named CRITIC, that aims to mimic on road networks the principle of the Cognitive Radio technology used in wireless networks. The key idea behind CRITIC is to temporarily grant regular vehicles access to priority (e.g., bus or carpool) lanes whenever these are underutilized, in order to reduce road traffic congestion. In , we explore novel ways of utilizing inter-vehicle and vehicle-to-infrastructure communication technology to achieve safe and efficient lane change manoeuvres for Connected and Autonomous Vehicles (CAVs). The need for such new protocols is due to the risk that every lane change manoeuvre brings to drivers' and passengers' lives, in addition to its negative impact on congestion level and resulting air pollution, if not performed at the right time and using the appropriate speed. To avoid this risk, we design two new protocols: one is built upon and extends an existing protocol, and aims to ensure a safe and efficient lane change manoeuvre, while the second is an original solution inspired by the mutual exclusion concept used in operating systems. The latter complements the former by exclusively granting lane change permission in a way that avoids any risk of collision.
Adaptive Allocation for Virtual Network Functions in Wireless Access Networks. Network Function Virtualization (NFV) is deemed a means to simplify the deployment and management of network and telecommunication services. With wireless access networks, NFV has to take into account the radio resources at wireless nodes in order to provide an end-to-end optimal virtual network function (VNF) allocation. This topic has been well studied in the existing literature; however, the effects of the variations of networks over time have not been addressed yet. In , we provide a model of the adaptive and dynamic VNF allocation problem considering VNF migration. Then, we formulate the optimisation problem as an Integer Linear Programming (ILP) problem and provide a heuristic algorithm for allocating multiple service function chains (SFCs). The proposed approach allows SFCs to be reallocated so as to obtain the optimal solution over time. The results confirm that the proposed algorithm is able to optimize the network utilization while limiting the reallocation of VNFs, which could interrupt services.
Service placement. The growing need for a simplified management of network infrastructures has recently led to the emergence of software-defined networking (SDN) and network function virtualization (NFV) paradigms. These concepts have, however, introduced new challenges and notably the service placement problem.
The problem of service placement, in its simplest version, consists in placing virtual machines in a network infrastructure. This placement sometimes also consists in placing flows, and therefore refers to a routing problem. In an even more elaborate version, this consists of combining the two approaches, which comes down to placing a service chain. In one of the most elaborated versions, it is necessary to add to the placement the dynamicity of the services to be deployed.
In we demonstrate the feasibility of an extended and flexible Software Defined Network (SDN) control plane that makes it possible to overcome the limitations of the OpenFlow protocol by achieving distributed and intelligent network services in SDN networks. This extended control plane is designed according to the following reference guidelines: 1) the concept of generic and programmable network nodes, usually known as “white boxes”; they integrate a generic engine to execute the service and a library of elementary components as basic building blocks of any service; 2) a fine-grained decomposition logic of network services into elementary components, which allows the services to be designed and customized on the fly using the building blocks available in libraries on each network node; 3) a mechanism for reconfiguring or redefining the network services on the fly on generic nodes, without service interruption; 4) smart elementary agents, called SDN controller elements, to provide and distribute the intelligence necessary to interact with the data plane at different levels of locality. This SDN control plane is illustrated in a proof of concept with the implementation of a distributed monitoring service use case. The monitoring service can act and evolve in a differentiated manner in the network depending on traffic requirements and monitoring usage.
In we set design principles of future distributed edge clouds in order to meet application requirements. We precisely introduce a costless distributed resource allocation algorithm, named CLOSE, which considers local information only. We compare via simulations the performance of CLOSE against those obtained by using mechanisms proposed in the literature, notably the Tricircle project within OpenStack. It turns out that the proposed distributed algorithm yields better performance while requiring less overhead.
As mentioned above, service placement is often closely linked to the routing problem. The latter is all the more complex when several metrics must be optimized at once. An intuitive method is to formulate the problem as an Integer Linear Program and solve it with an approximation algorithm. Such methods tend to have a specific design and usually suffer from unacceptable computational delays to provide a sub-optimal solution. Genetic algorithms (GAs) are considered a promising way to cope with highly complex optimization problems. However, their convergence speed and the quality of their solutions should be addressed in order to fit practical implementations. In , we propose a genetic algorithm-based mechanism to address the multi-constrained multi-objective routing problem. Thanks to a repairer that reduces the search space to feasible solutions, the results confirm that the proposed mechanism is able to find the Pareto-optimal solutions within a short run-time.
Recent studies confirm the ability of Deep Reinforcement Learning (DRL) to solve complex routing problems; however, its performance in networks with QoS-sensitive flows had not been addressed. In , we exploit a DRL agent with convolutional neural networks in the context of SDN networks in order to enhance the performance of QoS-aware routing. The obtained results demonstrate that the proposed approach significantly improves routing configurations, even in complex networks.
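As a much-simplified illustration of learning-based routing (the paper uses a DRL agent with convolutional neural networks; the sketch below substitutes tabular Q-learning on a hypothetical four-node topology with made-up delays), an agent can learn, per node, the next hop that minimizes end-to-end delay:

```python
import random

random.seed(0)

# Hypothetical topology: DELAY[u][v] is the delay of link u -> v.
DELAY = {
    0: {1: 4, 2: 1},
    1: {3: 1},
    2: {1: 1, 3: 6},
    3: {},
}
DEST = 3
ALPHA, GAMMA, EPS = 0.5, 1.0, 0.2

# Q[u][v] estimates delay(u, v) plus the remaining delay from v to DEST.
Q = {u: {v: 0.0 for v in nbrs} for u, nbrs in DELAY.items()}

def next_hop(u):
    nbrs = list(DELAY[u])
    if random.random() < EPS:                # epsilon-greedy exploration
        return random.choice(nbrs)
    return min(nbrs, key=lambda v: Q[u][v])

for _ in range(2000):                        # training episodes
    u = random.choice([0, 1, 2])
    while u != DEST:
        v = next_hop(u)
        remaining = 0.0 if v == DEST else min(Q[v].values())
        Q[u][v] += ALPHA * (DELAY[u][v] + GAMMA * remaining - Q[u][v])
        u = v

# Greedy route after training: follow the lowest estimated cost-to-go.
path, u = [0], 0
while u != DEST:
    u = min(DELAY[u], key=lambda v: Q[u][v])
    path.append(u)
print(path)
```

The learned route avoids the tempting direct links (0→1 and 2→3) because the Q-values account for the whole remaining path, which is exactly the property that makes learned policies attractive for QoS-aware routing.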
One big advantage of using Virtual Network Functions (VNFs) is the possibility of scaling dynamically with the traffic load (i.e., instantiating new resources for a VNF when the traffic load increases, and releasing resources when it decreases). In and , we propose a novel mechanism to scale 5G core network resources by anticipating traffic load changes through forecasting based on Machine Learning (ML) techniques. The traffic load forecast is obtained by training a neural network on a real dataset of traffic arrivals in a mobile network. Two techniques were used and compared: (i) a Recurrent Neural Network (RNN), more specifically with Long Short-Term Memory (LSTM) cells, and (ii) a Deep Neural Network (DNN). Simulation results showed that the forecast-based scaling mechanism outperforms threshold-based solutions in terms of the latency to react to traffic changes and the delay before new resources are ready for use by the VNF.
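The benefit of forecasting can be sketched with a toy proactive-scaling loop; the linear extrapolator below merely stands in for the trained LSTM/DNN, and the per-instance capacity, spin-up delay and load trace are all made up:

```python
# Illustrative constants: requests one VNF instance can absorb, and the
# number of time steps a new instance needs before it is ready.
CAP_PER_INSTANCE = 100
SPIN_UP = 2

def forecast(history, horizon=SPIN_UP):
    """Stand-in predictor: linear extrapolation of the last two samples.
    The actual mechanism trains an LSTM/DNN on real traffic traces."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + horizon * (history[-1] - history[-2])

def instances_needed(load):
    return -(-load // CAP_PER_INSTANCE)          # ceiling division

def proactive_plan(trace):
    """At each step, provision for the larger of the current load and the
    load forecast SPIN_UP steps ahead, so capacity is ready in time."""
    plan = []
    for t in range(len(trace)):
        predicted = forecast(trace[: t + 1])
        plan.append(instances_needed(max(trace[t], predicted)))
    return plan

trace = [80, 120, 180, 260, 250]                 # made-up load samples
plan = proactive_plan(trace)
print(plan)
```

A purely threshold-based policy would only request extra instances once the load had already crossed a capacity boundary, paying the SPIN_UP delay during overload; here, the scaling decision is taken before the load arrives.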
Content Centric Networking. During the last decade, Internet Service Providers' (ISPs) infrastructures have undergone a major metamorphosis, driven by new networking paradigms, namely SDN and NFV. The upcoming advent of 5G will certainly represent an important milestone in this evolution. In this context, the static (planning) or dynamic (on-demand) placement of caching resources remains an open issue. In , we propose a new technique to achieve the best trade-off between the centralization of resources and their distribution, through an efficient placement of caching resources. To do so, we model the cache resource allocation problem as a multi-objective optimization problem, which we solve using Greedy Randomized Adaptive Search Procedures (GRASP). The obtained results confirm the quality of the solutions found, compared to an exhaustive search method, and show how a cache allocation solution depends on the network's parameters and on the performance metrics to be optimized.
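A minimal GRASP sketch, on a hypothetical single-objective placement instance (the actual work is multi-objective and uses a realistic network model), shows the two phases the metaheuristic alternates: a greedy randomized construction driven by a restricted candidate list (RCL), followed by a swap-based local search:

```python
import random

random.seed(3)

# Toy instance: place K caches among N nodes of a line topology so that
# the total client-to-nearest-cache distance is minimized.
N, K = 12, 3
DIST = [[abs(i - j) for j in range(N)] for i in range(N)]

def cost(caches):
    return sum(min(DIST[i][c] for c in caches) for i in range(N))

def greedy_randomized(alpha=0.3):
    """Construction phase: pick each cache at random from the RCL, i.e.
    the candidates whose incremental cost is within alpha of the best."""
    caches = []
    while len(caches) < K:
        cand = [c for c in range(N) if c not in caches]
        incr = {c: cost(caches + [c]) for c in cand}
        lo, hi = min(incr.values()), max(incr.values())
        rcl = [c for c in cand if incr[c] <= lo + alpha * (hi - lo)]
        caches.append(random.choice(rcl))
    return caches

def local_search(caches):
    """Improvement phase: swap one cache location at a time while it helps."""
    improved = True
    while improved:
        improved = False
        for i in range(K):
            for alt in range(N):
                if alt in caches:
                    continue
                trial = caches[:i] + [alt] + caches[i + 1:]
                if cost(trial) < cost(caches):
                    caches, improved = trial, True
    return caches

best = min((local_search(greedy_randomized()) for _ in range(20)), key=cost)
print(sorted(best), cost(best))
```

Each GRASP restart explores a different region of the solution space thanks to the randomized construction, while the local search polishes every candidate; the best solution over all restarts is kept.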
Analysis of transmission schemes in networks of sensors. In Wireless Sensor Networks (WSNs), each node typically transmits several control and data packets to the sink in a contention-based fashion. Different adaptive schemes have been proposed in the literature for this purpose. Their common goal is to offer QoS guarantees in terms of system lifetime (related to energy consumption) and reporting delay (related to the cluster formation delay). In , we analyze and study three unscheduled transmission schemes for control packets in three cluster-based architectures: the Fixed Scheme (FS), the Adaptive by Estimation Scheme (AES) and the Adaptive by Gamma Scheme (AGS). Based on the numerical results, we show that, in the adaptive schemes (AES and AGS), the threshold values are just as important to the system design as the actual value of the transmission probability for achieving QoS guarantees.
P2P networks for Video on Demand (VoD) services. In , we describe a novel scheme that efficiently distributes the resources provided by seeds in a P2P network for Video on Demand (VoD) services. In the proposed scheme, which we have called Prioritized-Windows Distribution (PWD), the amount of seed resources assigned to a downloader depends on its current progress in downloading the video. We demonstrate, through a fluid-model analysis and Markov-chain numerical evaluations, that PWD improves the P2P network performance in terms of the level of cooperation required from the seeds to keep the system under abundance conditions. Additionally, we analyze the performance of the system as a function of the initial playback delay, a parameter that strongly influences the Quality of Service (QoS) perceived by the users, and our results show that PWD improves it as well.
This is a Cifre contract (2017-2019) including a PhD thesis supervision (PhD of Illyyne Saffar), carried out with Nokia, on the use of machine learning and data analytics to transform user and network data into actionable knowledge, which in turn can be automatically exploited by Autonomic Networking approaches for cognitive self-management of the 5G network.
This is a Cifre contract including a PhD thesis supervision (PhD of Corentin Hardy), carried out with Technicolor. The starting point of this thesis is the possibility of deploying machine-learning algorithms over many cores, but outside the datacenter, on the devices (home gateways) deployed by Technicolor in users' homes. In this device-assisted view, an initial processing step in the device may significantly reduce the burden on the datacenter back-end. The problems are numerous (power consumption, CPU power, network bandwidth and latency), but costs for the operator can be lowered, and scale may bring data processing to a new level.
This is a Cifre contract (2015-2018) including a PhD thesis supervision (PhD of Alassane Samba), carried out with Orange, on statistical approaches for predicting throughput without historical data. Throughput has a strong impact on user experience in cellular networks. The ability to predict the throughput of a connection before it starts brings new possibilities, particularly to Internet service providers: they could adapt content to the quality of service actually reachable by users, in order to enhance their experience.
This is a Cifre contract (2015-2018) including a PhD thesis supervision (PhD of Imad Alawe), carried out with TDF, on the proposition of scalable SDN-based mobile network architectures for the future 5G network.
Bruno Tuffin is the co-director of the ALSTOM-Inria common lab.
The group currently manages a project with ALSTOM on system availability simulation taking logistic constraints into account. Current ALSTOM Transport and Power contracts, especially service-level agreements, impose stringent system availability objectives. Non-adherence to the required performance levels often leads to penalties, so it is critical to assess the corresponding risks already at the tender stage. The challenge is to achieve accurate results in a reasonable amount of time. Monte Carlo simulation provides estimates of the quantities to be predicted (e.g., availability). Since we deal with rare events, a variance reduction technique, specifically Importance Sampling (IS), is used. The goal of the project is to establish the feasibility of IS for solving problems relevant to ALSTOM and to develop the corresponding mathematical tools.
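The principle behind IS can be sketched on a textbook rare-event example (a Gaussian tail probability standing in for a rare system-failure event; the shift parameter and sample sizes are purely illustrative): sample from a tilted distribution under which the rare event is frequent, and reweight each hit by the likelihood ratio.

```python
import math
import random

random.seed(7)

# Rare event: p = P(Z > 4) for Z ~ N(0, 1), about 3.2e-5.
THRESHOLD = 4.0

def crude_mc(n):
    """Standard estimator: almost never observes the event at this size."""
    return sum(random.gauss(0, 1) > THRESHOLD for _ in range(n)) / n

def importance_sampling(n, shift=THRESHOLD):
    """Sample from N(shift, 1), where the event is common, and reweight
    each hit by the likelihood ratio exp(-shift*y + shift**2 / 2)."""
    total = 0.0
    for _ in range(n):
        y = random.gauss(shift, 1)
        if y > THRESHOLD:
            total += math.exp(-shift * y + shift * shift / 2)
    return total / n

p_true = 0.5 * math.erfc(THRESHOLD / math.sqrt(2))   # exact value
est = importance_sampling(100_000)
print(f"crude MC : {crude_mc(10_000):.2e}")
print(f"IS       : {est:.2e}")
print(f"exact    : {p_true:.2e}")
```

With 10^4 samples the crude estimator typically sees no hit at all and returns 0, whereas the IS estimator already achieves a relative error of about 1% with 10^5 samples; availability models with logistic constraints call for the same reweighting idea, applied to the failure/repair dynamics instead of a Gaussian.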
We participated in the 3-year (January 2015 – June 2018) FUI project DVD2C, which aimed to virtualize CDNs through Cloud and Network Function Virtualization concepts. DVD2C was led by Orange Labs, and the partners were two SMEs (Viotech and Resonate) and two academic teams (our team and Télécom Paris Sud).
Gerardo Rubino is the coordinator of the research action “Analytics and machine learning”, with Nokia Bell Labs. The objective is to carry out common research on an integrated framework for 5G, programmable networks, IoT and clouds that aims at statically and dynamically managing and optimizing the 5G infrastructure using, in particular, machine learning techniques.
Yann Busnel is a member of the ONCOSHARe project (ONCOlogy bigdata SHARing for Research), funded by the Brittany and Pays de la Loire regions with 280 k€ over 24 months.
Bruno Sericola continues to work on the analysis of fluid queues with Fabrice Guillemin from Orange Labs in Lannion, France.
ANR
Yassine Hadjadj-Aoul devotes 20% of his time to the IRT BCOM, funded by the ANR.
Sofiène Jelassi devotes 20% of his time to the IRT BCOM, funded by the ANR.
Yann Busnel is a member of the three following projects: SocioPlug granted by the ANR (ANR-13-INFR-0003), INSHARE granted by the ANR (ANR-15-CE19-0024) and BigClin granted by the LabEx CominLabs (ANR-10-LABX-07-01).
IPL (Inria Project Lab) BetterNet
Yassine Hadjadj-Aoul, Gerardo Rubino and Bruno Tuffin are members of the IPL (Inria Project Lab) BetterNet: An Observatory to Measure and Improve Internet Service Access from User Experience, 2016-2020.
BetterNet aims at building and delivering a scientific and technical collaborative observatory to measure and improve Internet service access as perceived by users. In this Inria Project Lab, we will propose new, original user-centered measurement methods that draw on the social sciences to better understand Internet usage and the quality of services and networks. Our observatory can be defined as a vantage point where: 1) tools, models and algorithms/heuristics will be provided to collect data, 2) the acquired data will be analyzed and shared appropriately with scientists, stakeholders and civil society, and 3) new value-added services will be proposed to end-users.
Bruno Sericola continues to work on the analysis of fluid queues with Marie-Ange Remiche from the university of Namur in Belgium.
Program: H2020-ICT-12-2015
Project acronym: F-Interop
Project title: FIRE+ online interoperability and performance test tools to support emerging technologies from research to standardization and market launch
Duration: November 2015 – October 2018
Coordinator: UPMC-LIP6
Other partners: 9 partners including F. Sismondi and C. Viho (Dionysos), and T. Watteyne (Eva)
Abstract: The goal of F-Interop is to extend FIRE+ with online interoperability and performance test tools supporting emerging IoT-related technologies from research to standardization and to market launch for the benefit of researchers, product development by SME, and standardization processes.
We maintain a strong line of collaboration with the Technical University Federico Santa María (UTFSM), Valparaíso, Chile. Over the years, this has taken different forms (associated team Manap, Stic AmSud project “AMMA”, Stic AmSud project “DAT”). In 2018, we finished a joint PhD (co-tutelle PhD of Nicolás Jara), and a new joint PhD will start in 2019 (PhD of Jonathan Olavarría). The first was on optical network analysis and design; the second's topic is modeling and evaluation techniques, with a focus on Stochastic Activity Networks.
ECOS-Sud project MASC: Mathematical Algorithms for Semantic Cognition. MASC is a three-year project (code U17E03) with the Faculty of Sciences of the University of the Republic, Uruguay, on the application of mathematical modeling tools to a better understanding of a cognitive disease called semantic dementia. It involves Prof. Eduardo Mizraji and Jorge Graneri, a PhD student co-advised by Prof. Mizraji and G. Rubino from Dionysos, plus Pablo Rodríguez Bocca, from the Engineering Faculty of the same university. Our contribution to this project concerns the use of mathematical models of neural structures.
Pierre L'Ecuyer holds an Inria International Chair, Nov. 2013- Oct. 2018.
Three colleagues from the University of the Republic, Uruguay, visited us in 2018. First, Jorge Graneri, in the context of the starting ECOS project MASC (July–August), then Professor Gustavo Guerberoff, to work on mathematical problems related to reliability network models (October–November), and then Professor Pablo Rodríguez Bocca, to work also on the previously mentioned ECOS project.
Yassine Hadjadj-Aoul benefited from a one-month scientific stay at Manchester Metropolitan University (MMU) in June 2018. This stay was part of the “Research and Knowledge Exchange Funding” program run by the MMU.
We organized the 13th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (MCQMC 2018) in Rennes, July 1-6, 2018. The MCQMC conference is a biennial meeting on Monte Carlo and quasi-Monte Carlo methods and the premier event on the topic; it attracted 300 people from all over the world.
Bruno Tuffin was the General Chair of the 13th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, in Rennes, July 1-6, 2018.
Yassine Hadjadj-Aoul was the General Chair of the 5th International Symposium on Networks, Computers and Communications, co-sponsored by IEEE, in Rome, Italy, in May 2018.
Pierre L'Ecuyer is member of the Steering Committee of MCQMC and the president of the Steering Committee of MCM.
Gerardo Rubino and Bruno Tuffin are members of the Steering Committee of the International Workshop on Rare Event Simulation (RESIM).
Y. Hadjadj-Aoul has been co-chairing the Steering Committee of the International Conference on Information and Communication Technologies for Disaster Management (ICT-DM) since December 2016, and has been a member of this committee since 2015.
Yann Busnel has been member of the Organization Committee of the 17th International Conference on Ad Hoc Networks and Wireless (AdHoc-Now 2018), in Saint-Malo, France, in July 2018, acting as Publicity Chair.
Bruno Tuffin was Area Chair for the area “Performance Evaluation, Control and Optimization” of ITC, September 11-14, 2018, in Vienna, Austria.
Bruno Tuffin was Track Coordinator of the Networks and Communications mini-track for WSC'2018 (Winter Simulation Conference, Gothenburg, Sweden, December 9-12).
Yassine Hadjadj-Aoul was Track Coordinator of the “Satellites IoT and M2M Networks” track of the International Conference on Smart Communications and Networking (SmartNets), Hammamet, Tunisia, November 2018.
Pierre L'Ecuyer was a member of the program committee of the following events:
MCQMC’2018: Thirteenth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Rennes, France, July 2018.
SIMULTECH 2018: International Conference on Simulation and Modeling Methodologies, Technologies and Applications, Porto, Portugal, July 2018.
ICORES 2018: International Conference on Operations Research and Enterprise Systems, Funchal, Portugal, Feb. 2018
Patrick Maillé was a member of the program committee of the following events:
MobiHoc 2018,
WiOpt 2018,
IFIP Performance 2018,
NetEcon 2018,
AdHocNow 2018.
Bruno Tuffin was a member of the program committee of the following events:
Ninth International Workshop on Simulation, Barcelona, Spain, June 18-22, 2018
The 4th International Symposium on Ubiquitous Networking (UNET 2018), Hammamet, Tunisia, May 03-05, 2018.
IEEE International Conference on Communications, 20-24 May 2018, Kansas City, MO, USA
IEEE 1st 5G World Forum, Santa Clara, July 9-11, 2018
GECON 2018 - International Conference on Economics of Grids, Clouds, Software, and Services, Pisa, Italy, September 18-20, 2018.
NETGCOOP 2018 - International Conference on NETwork Games, COntrol and OPtimisation, New York, USA, November 14-16, 2018.
IFIP WG 7.3 Performance 2018, 36th International Symposium on Computer Performance, Modelling, Measurement and Evaluation, Toulouse, France, December 5-7, 2018
Yann Busnel was a member of the program committee of the following events:
NCA 2018: 17th IEEE International Symposium on Network Computing and Applications, Boston, USA, October 2018.
AdHoc Now: 17th International Conference on Ad Hoc Networks and Wireless (AdHoc-Now 2018), Saint-Malo, France, July 2018.
Gerardo Rubino was a member of the program committee of the following event:
MCQMC’2018: Thirteenth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Rennes, France, July 2018.
Yassine Hadjadj-Aoul was a member of the program committee of the following events:
IEEE Globecom 2018 - IEEE Global Communications Conference, Abu Dhabi, UAE, December 9-13, 2018.
IEEE ICC 2018 - IEEE International Conference on Communications, Kansas City, MO, USA, May 20-24, 2018.
IEEE WCNC 2018 - IEEE Wireless Communications and Networking Conference, Barcelona, Spain, April 15-18, 2018.
IEEE ISCC 2018 - IEEE Symposium on Computers and Communications, Natal, Brazil, June 25-28, 2018.
IEEE CCNC 2018 - IEEE Consumer Communications & Networking Conference, Las Vegas, USA, January 12-15, 2018.
Yann Busnel served as a reviewer for several major international journals and conferences, such as TPDS (IEEE Transactions on Parallel and Distributed Systems).
Bruno Sericola served as a reviewer for several major international conferences.
César Viho has reviewed papers for the journals IEEE Transactions on Wireless Communications, IEEE Transactions on Vehicular Technology and IEEE Communications Magazine, and for the following international conferences: IWCNC, Globecom and CCNC.
Yassine Hadjadj-Aoul has served as a reviewer for several major international conferences such as IEEE Globecom, IEEE ICC.
Gerardo Rubino served as a reviewer for several major international conferences.
Bruno Tuffin has been the Simulation Area Editor of the INFORMS Journal on Computing since January 2016.
Bruno Tuffin is an associate editor for the following journal:
ACM Transactions on Modeling and Computer Simulation, since November 2008.
Pierre L'Ecuyer is an associate editor for the following journals:
ACM Transactions on Mathematical Software, since August 2004.
Statistics and Computing (Springer-Verlag), since June 2003.
International Transactions in Operational Research, since May 2007.
Bruno Sericola is an associate editor for the following journals:
International Journal of Stochastic Analysis, since April 2010.
Performance Evaluation, since April 2015.
Bruno Sericola is Editor in Chief of the books series “Stochastic Models in Computer Science and Telecommunications Networks”, ISTE/WILEY, since March 2015.
Bruno Sericola, Bruno Tuffin and Gerardo Rubino served as reviewers for several major international journals.
Yassine Hadjadj-Aoul has reviewed papers for the journals Elsevier Computer Networks, Elsevier Computer Communications, IEEE Communications Letters, IEEE Transactions on Vehicular Technology (TVT), IEEE Communications Magazine, IEEE Journal on Selected Areas in Communications, IEEE Transactions on Emerging Topics in Computational Intelligence, Springer Journal of Network and Systems Management, and Springer Annals of Telecommunications.
B. Tuffin gave the tutorial “Recherche opérationnelle et théorie des jeux pour l'analyse de la neutralité du Net” (operations research and game theory for the analysis of net neutrality) at the 19th congress of the Société Française de Recherche Opérationnelle et d'Aide à la Décision (ROADEF), Lorient, France, February 21-23, 2018.
B. Tuffin gave the following seminar presentations:
B. Tuffin. Estimating the mean and tail-distribution measures of time to failure by simulation. COSMOS meeting (Contrôle et Optimisation Stochastique, MOdélisation et Simulation), GdR Recherche Opérationnelle, CNRS, June 18, 2018.
B. Tuffin. Mean Time To Failure estimation by simulation. RNA Kinetics Day, LRI, October 3, 2018.
Yann Busnel made several invited and keynote talks:
How to generate uniform Samples on large-scale data streams, Invited talk at SMURF Workshop: Survey Methods and their use in Related Fields, Institute of Statistics, University of Neuchâtel, August 2018.
Big Data Panorama & Large-scale Data Streaming, Keynote at CEP/FDP on Big Data Computing, IIT Patna, India, December 2018.
Gerardo Rubino made several invited and keynote talks:
Machine Learning Developments around the PSQA Project, plenary speaker at the invitation-only workshop Machine Learning in Communication Systems, organized by the German DFG Collaborative Research Center 1053 “MAKI”: Multi-Mechanism Adaptation for the Future Internet, Darmstadt, Germany, March 23, 2018.
Random Neural Networks and Extensions. Applications to Networking, invited plenary speaker at the TMA Experts Summit, part of the IEEE Network Traffic Measurement and Analysis Conference (TMA'18), Vienna, Austria, June 2018.
Duality concepts and applications in difference and differential linear systems, A. Krinik and G. Rubino, invited talk in The American Mathematical Society (AMS) Fall Western Sectional Meeting, San Francisco State University, San Francisco, CA, October 2018.
Performance evaluation targeting Quality of Experience, keynote speaker in The 15th European Performance Engineering Workshop (EPEW'18), Paris, France, October 2018.
Efficient Monte Carlo estimation of network reliability metrics with the standard estimator, invited talk in INFORMS Annual Meeting, Phoenix, USA, November 2018.
Yassine Hadjadj-Aoul gave the following tutorial:
“Analysis of caching strategies for content centric networks”, at the 7th IEEE International Conference on Smart Communications in Network Technologies (SaCoNet).
Gerardo Rubino gave the following tutorial:
Introduction to rare event analysis using Monte Carlo, G. Rubino, tutorial at the 13th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (MCQMC 2018), Rennes, France, July 2018.
Yassine Hadjadj-Aoul gave the following seminar presentation:
“Efficient support of massive IoT devices' access in future wireless networks”, at Manchester Metropolitan University (MMU).
Bruno Tuffin is the co-director of the common lab ALSTOM-Inria since 2014.
Bruno Tuffin is an elected member of the council of the INFORMS Simulation Society (I-SIM).
Yann Busnel is a member of the CSV (the technical committee) of the Images and Networks Cluster of Brittany, France.
Gerardo Rubino is a member of the CSV (the technical committee) of the Images and Networks Cluster of Brittany, France.
Yann Busnel is a member of the Steering Committee of the RESCOM research group at GDR CNRS RSD.
Yassine Hadjadj-Aoul is a founding member of Special Interests Group “IEEE Sig on Big Data with Computational Intelligence” under the IEEE COMSOC Big Data TC (Since June 2017).
Yassine Hadjadj-Aoul is a member of the scientific committee of GT ARC (Automatique et Réseaux de Communication) scientific committee (since Nov. 2017).
Gerardo Rubino is the coordinator of the research action “Analytics and machine learning”, in collaboration with Nokia Bell Labs.
César Viho has reviewed project proposals for the ANR and for CIFRE contracts for the ANRT.
Yassine Hadjadj-Aoul has reviewed project proposals for the Irish Research Council (COALESCE Research Fund 2019).
Yann Busnel is the head of the “Network Systems, Cybersecurity and Digital Law” research department at IMT Atlantique (since 2017).
Bruno Sericola is responsible for the Inria Rennes-Bretagne Atlantique budget.
Bruno Sericola is the leader of the research group MAPI (Math Appli Pour l'Info), whose goal is to improve collaboration between computer scientists and mathematicians.
Bruno Sericola is member of the research commission of the academic council of University of Rennes 1.
César Viho is director of the MathSTIC (Mathematics, Electronics and Computer Science) doctoral school, in charge of managing the recruitment of around 1100 PhD students and their activities during their doctorate, in all the relevant areas of the UBL (Université Bretagne Loire).
Master: Bruno Tuffin, MEPS (performance evaluation), 35h, M1, Univ Rennes, France
Master: Bruno Tuffin, GTA (Game Theory and Applications), 15h, M2, Univ Rennes, France
Master: Patrick Maillé, GTA (Game Theory and Applications), 15h, M2, Univ Rennes, France
Master: Patrick Maillé, Simulation and queuing theory, 25h, M2, IMT Atlantique, France
Licence: Patrick Maillé, Techniques and models in networks, 20h, L3, IMT Atlantique, France
Master: Patrick Maillé, Performance Evaluation, 30h, M1, IMT Atlantique, France
Licence: Yann Busnel, Introduction to Networks, 12h, 1st year, ENS Rennes, France.
Master: Yann Busnel, Big Data and Stream Processing, 9h, 3rd year, IMT Atlantique, France.
Licence: Bruno Sericola, Mathematics, 14h, L2, IUT/University of Rennes 1, France.
Master: Bruno Sericola, Mathematics, 12h, M2, Istic/University of Rennes 1, France.
Master: Bruno Sericola, Logistic and performance, 12h, M2, Faculté de sciences économiques, Univ of Rennes 1, France
Master: Bruno Sericola, MEPS (performance evaluation), 36h, M1, Univ Rennes, France
Master M1: César Viho, Networks: from Services to Protocols, 36 hours, Istic/University of Rennes 1, France
Master M2: César Viho, Algorithms on graphs, 40 hours, Istic/University of Rennes 1, France
Bachelor L2: César Viho, Network architecture and components, 16 hours, Istic/University of Rennes 1, France
Master, 2nd year: Yassine Hadjadj-Aoul, Scalable Network Infrastructure (SNI), 10 hours, The Research in Computer Science (SIF) master and EIT Digital Master/University of Rennes 1, France
Master, pro 2nd year: Yassine Hadjadj-Aoul, Multimedia streaming over IP (MMR), 48 hours, Esir/University of Rennes 1, France
Master, pro 2nd year: Yassine Hadjadj-Aoul, Multimedia services in IP networks (RSM), 29 hours, Esir/University of Rennes 1, France
Master, pro 2nd year: Yassine Hadjadj-Aoul, Software Defined Networks, 6 hours, Istic/University of Rennes 1, France
Master, 2nd year: Yassine Hadjadj-Aoul, Video streaming over IP, 8 hours, Istic/University of Rennes 1, France
Master: Yassine Hadjadj-Aoul, Introduction to networking (IR), 26 hours, Esir/University of Rennes 1, France
Master: Yassine Hadjadj-Aoul, Mobile and wireless networking (RMOB), 20 hours, Esir/University of Rennes 1, France
Master 2nd year: Yassine Hadjadj-Aoul, Overview of IoT technologies: focus on LPWAN, 2 hours, INSA, France
Master pro 2nd year: Sofiène Jelassi, Supervision of heterogeneous networks, 32 hours, Istic/University of Rennes 1, France
Master pro 2nd year: Sofiène Jelassi, Cloud & SDN virtualization, 32 hours, Istic/University of Rennes 1, France
Master pro 2nd year: Sofiène Jelassi, Multimedia networks, 32 hours, Istic/University of Rennes 1, France
Master, 2nd year: Gerardo Rubino, Scalable Network Infrastructure (SNI), 10 hours, The Research in Computer Science (SIF) master and EIT Digital Master/University of Rennes 1, France
Supelec Rennes 3rd year: Gerardo Rubino, Dependability Analysis, 15 hours.
Master 2nd year: Gerardo Rubino, Quality of Experience, 4 hours, for two different groups of students of the Esir/University of Rennes 1, France
UDELAR, Uruguay: Gerardo Rubino, post-graduate course on queuing models, 21 hours.
PhD: Ajit Rai, Availability prediction with logistics, ALSTOM/Université de Rennes 1, Defense in July 2018, supervised by Bruno Tuffin.
PhD: Imad Alawe, “Mobile SDN architecture”, Defense in November 2018; supervised by César Viho and Yassine Hadjadj-Aoul, University Rennes 1, Philippe Bertin, B-COM and Davy Darche, TDF.
PhD: Nicolás Jara, “Fault tolerant design of dynamic WDM optical networks”, Technical University Federico Santa María (UTFSM) and university of Rennes 1, France. Advisors: R. Vallejos (Chile) and G. Rubino (France). Defended in July 2018.
PhD in progress: Ximun Castoreo, Measurements to check network neutrality, started 10/2018, supervised by Bruno Tuffin.
PhD in progress: Ayman Chouayakh, Jeux d'acteurs et mécanismes d'enchères pour la gestion du spectre dans les réseaux 5G (actor interactions and auction mechanisms for spectrum management in 5G networks), Orange/IMT Atlantique, started 03/2017, supervised by Patrick Maillé.
PhD in progress: Hiba Dakdouk, Algorithmes asynchrones et économes pour l'optimisation des communications d'un réseau d'objets connectés (asynchronous and resource-efficient algorithms for optimizing the communications of a network of connected objects), Orange/IMT Atlantique, started 11/2018, supervised by Patrick Maillé.
PhD: Alassane Samba, Technologies Big Data et modèles prédictifs pour la QoS des réseaux (Big Data technologies and predictive models for network QoS), IMT Atlantique, defended in October 2018, supervised by Yann Busnel, Gwendal Simon and Philippe Dooze (Inria).
PhD in progress: Richard Westerlynck, Analyse répartie et extraction de tendances à grande échelle pour les données massives en santé (distributed analysis and large-scale trend extraction for massive health data), IMT Atlantique, defense in 2020, supervised by Yann Busnel and Marc Cuggia (PUPH, CHU Rennes).
PhD in progress: Vasile Cazacu, Calcul distribué pour la fouille de données cliniques (distributed computing for clinical data mining), IMT Atlantique, defense in 2020, supervised by Emmanuelle Anceaume (CNRS/IRISA Rennes), Yann Busnel and Marc Cuggia (PUPH, CHU Rennes).
PhD in progress: Corentin Hardy, Device-Assisted Distributed Machine-Learning on Many Cores, started in April 2016; supervisors: Bruno Sericola and Erwan Le Merrer from Technicolor, University Rennes 1.
PhD: Yves Mocquard, Analyse probabiliste de protocoles de population (probabilistic analysis of population protocols), defended in December 2018; supervisors: Bruno Sericola and Emmanuelle Anceaume from team Cidre, University Rennes 1.
PhD in progress: Ali Hodroj, Enhancing content delivery to multi-homed users in broadband mobile networks, started in November 2015; supervisors: Bruno Sericola, Marc Ibrahim and Yassine Hadjadj-Aoul, University Rennes 1 and Saint Joseph University of Beirut.
PhD in progress: Jean-Michel Sanner; Cifre Grant, Orange Labs, “SDN technologies for network services performances adaptation of carriers networks”; started on January 2013; Advisors: Y. Hadjadj-Aoul and G. Rubino; University Rennes 1.
PhD in progress: Hamza Ben Ammar, “Socially-aware network and cache resources optimization for efficient media content delivery in Content Centric Networks”, started in October 2015; advisors: Yassine Hadjadj-Aoul, Gerardo Rubino and Soraya Ait Chellouche, University Rennes 1.
PhD in progress: Imane Taibi, “Big data analysis for network monitoring and troubleshooting”; started on Nov. 2017; Advisors: G. Rubino, Inria, and Yassine Hadjadj-Aoul, University Rennes 1, and Chadi Ibrahim, Inria.
PhD in progress: Anouar Rkhami, “Data analytics for optimized resource management in future 5G networks”; started in October 2018; Advisors: Gerardo Rubino and Yassine Hadjadj-Aoul, University Rennes 1, and Abdelkader Outtagarts, Nokia Bell Labs.
PhD in progress: Mohamed Rahali, “Machine learning-based monitoring and management for hybrid SDN networks”; started in October 2017; Advisors: G. Rubino, Inria, and Sofiène Jelassi, University of Rennes 1.
PhD in progress: Laura Aspirot, “Fluid Approximations for Stochastic Telecommunication Models”, University of the Republic, Uruguay. Advisors: E. Mordecki (Uruguay) and G. Rubino (France). Defense in 2019.
PhD in progress: Jorge Graneri, “Mathematical Models for Semantic Memory”, University of the Republic, Uruguay. Advisors: E. Mizraji (Uruguay) and G. Rubino (France). Defense in 2019.
Bruno Tuffin was a member of the following PhD defense committees:
Graciela FERREIRA LEITES MUNDELL. Analysis and Optimization of Highly Reliable Systems. University of the Republic, Montevideo, Uruguay, 2018, rapporteur.
Robert Salomone, Advances in Monte Carlo Methodology, The University of Queensland, Australia, 2018, rapporteur.
Jean-Bernard Eytard, Une approche par la géométrie tropicale et la convexité discrète de la programmation bi-niveau : application à la tarification des données dans les réseaux mobiles de télécommunications (a tropical geometry and discrete convexity approach to bilevel programming: application to data pricing in mobile telecommunication networks), École Polytechnique, November 2018, president.
Patrick Maillé was a member of the following PhD defense committees:
Wilfried YORO. Modeling the correlation between the energy consumption and the end-to-end traffic of services in large telecommunication networks, Telecom SudParis, 2018, rapporteur.
Àngel SANCHIS CANO, Economic analysis of wireless sensor-based services in the framework of the Internet of Things, Polytechnic University of Valencia, Spain, 2018, rapporteur.
Kodjo Sena APEKE, Modélisation hybride et multiéchelle pour la prédiction de la réponse d'une tumeur sous traitement en radiothérapie (hybrid and multiscale modeling for predicting the response of a tumor under radiotherapy treatment), December 2018, president.
Yann Busnel was a member of the following PhD defense committee:
Mansour Khelghatdoust, University of Sydney, November 2018, rapporteur.
Bruno Sericola was a member of the final selection board for the recruitment of CNRS researchers in 2018.
César Viho was a member of juries for the recruitment of:
Young graduate scientists and senior researchers at Inria.
Associate Professors and senior Professors at ISTIC-Université
Gerardo Rubino was a member of the following PhD defense committees:
Imen TRIKI, Rôle du vidéo streaming mobile qui dépend du contexte dans l'amélioration de la qualité d'expérience (role of context-dependent mobile video streaming in improving the quality of experience), Université d'Avignon, June 5, 2018. Rapporteur.
Nadhir Ben Rached. Rare Events Simulations with Applications to the Performance Evaluation of Wireless Communication Systems. KAUST, Saudi Arabia, October 8, 2018. Rapporteur.
Yann Busnel is a member of Development Council of Computer Sciences Master at Université de Nantes (2017–*).
Bruno Tuffin was a member of Inria-MITACS selection committee
G. Rubino makes regular presentations to high school students about the research work in general, and specific technical topics in particular. Current talks:
Randomness as a tool
Internet as a research problem
Great challenges in maths: the Riemann Hypothesis
Great challenges in math/computer science: the “P versus NP” problem