A large number of real-world structures and phenomena can be described by networks: separable elements with connections between certain pairs of them. Among such networks, the best known and the most studied in computer science is the Internet. Moreover, the Internet (as the underlying physical network) itself gives rise to many new networks, like the networks of hyperlinks, Internet-based social networks, distributed databases, codes on graphs, and local interactions of wireless devices. These huge networks pose exciting challenges for mathematicians, and the mathematical theory of networks faces novel, unconventional problems. For example, very large networks cannot be completely known, and data about them can be collected only by indirect means like random local sampling or by monitoring the behavior of various aggregated quantities.

The scientific focus of DYOGENE is on geometric network dynamics arising in communications. By geometric networks we understand networks with a nontrivial, discrete or continuous, geometric definition of the existence of links between the nodes. In stochastic geometric networks, this definition leads to random graphs or stochastic geometric models. A first type of geometric network dynamics is the one where the nodes or the links change over time according to an exogenous dynamics (e.g. node motion and geometric definition of the links). We will refer to this as dynamics of geometric networks below. A second type is that where links and/or nodes are fixed but harbor local dynamical systems (in our case, stemming from e.g. information theory, queuing theory, social and economic sciences). This will be called dynamics on geometric networks. A third type is that where the dynamics of the network geometry and the local dynamics interplay. Our motivations for studying these systems stem from many fields of communications where they play a central role, and in particular: message passing algorithms; epidemic algorithms; wireless networks and information theory; device to device networking; distributed content delivery; social and economic networks.

Network calculus is a theory for obtaining deterministic upper bounds in networks that has been developed by R. Cruz , . From the modelling point of view, it is an algebra for computing and propagating constraints given in terms of envelopes. A flow is represented by its cumulative function, giving the amount of data sent by the flow up to any given time, and constraints on flows and servers are expressed as envelopes bounding these cumulative functions. The operations used for propagating these constraints are an adaptation of filtering theory to the (min,+) algebra, chiefly (min,+) convolution and deconvolution.

We investigate the complexity of computing exact worst-case performance bounds in network calculus and develop algorithms that present a good trade-off between algorithmic efficiency and accuracy of the bounds.
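For a single server, the envelope computations reduce to closed-form (min,+) deviations. The sketch below, assuming the standard token-bucket arrival curve sigma + rho·t and rate-latency service curve R·max(t − T, 0) (function names and numeric values are ours, for illustration), computes the classical worst-case delay and backlog bounds:

```python
# Minimal network-calculus bounds for one server, assuming a token-bucket
# arrival curve alpha(t) = sigma + rho*t and a rate-latency service curve
# beta(t) = R * max(t - T, 0). Illustrative sketch, not the project's tools.

def delay_bound(sigma, rho, R, T):
    """Worst-case delay: horizontal deviation between alpha and beta.
    Requires stability, i.e. rho <= R."""
    assert rho <= R, "unstable system: arrival rate exceeds service rate"
    return T + sigma / R

def backlog_bound(sigma, rho, R, T):
    """Worst-case backlog: vertical deviation between alpha and beta."""
    assert rho <= R
    return sigma + rho * T

# Example: 1 Mb burst, 5 Mb/s sustained rate, 10 Mb/s server, 2 ms latency.
print(delay_bound(sigma=1.0, rho=5.0, R=10.0, T=0.002))    # 0.102 s
print(backlog_bound(sigma=1.0, rho=5.0, R=10.0, T=0.002))  # 1.01 Mb
```

These closed forms are exact for a single node; the complexity questions studied here arise when such constraints must be propagated and composed across a whole network.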

Simulation approaches can be used to efficiently estimate the stationary behavior of Markov chains by providing independent samples distributed according to their stationary distribution, even when it is impossible to compute this distribution numerically.

The classical Markov Chain Monte Carlo simulation techniques suffer from two main problems: the samples are distributed according to the stationary distribution only asymptotically, and the burn-in time after which the bias becomes negligible is difficult to assess.

To overcome these issues, Propp and Wilson have introduced a perfect sampling algorithm (PSA) that has later been extended and applied in various contexts, including statistical physics, stochastic geometry, theoretical computer science, and communications networks (see also the bibliography at http://).

Perfect sampling uses coupling arguments to give an unbiased sample from the stationary distribution of an ergodic Markov chain on a finite state space.

The algorithm is based on a backward coupling scheme: it computes the trajectories from all possible initial states, started further and further in the past, until all of them coalesce by time 0.

Any ergodic Markov chain on a finite state space has a representation of type () that couples in finite time with probability 1, so Propp and Wilson's PSA gives a “perfect” algorithm in the sense that it provides an *unbiased* sample in *finite time*. Furthermore, the stopping criterion is given by the coupling from the past scheme itself, and explicit bounds on the coupling time are not needed for the validity of the algorithm.

However, from the computational side, PSA is efficient only under some monotonicity assumptions that allow reducing the trajectories considered in the coupling from the past procedure to the extremal initial conditions only. Our goal is to propose new algorithms solving this issue by exploiting semantic and geometric properties of the event space and the state space.
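A minimal sketch of the monotone case, for an illustrative reflected random walk (the chain, function name and parameters are ours, not one of the models studied in the project): because the update rule preserves the order of states, only the two extremal trajectories need to be simulated, and their coalescence certifies that every initial condition has coupled.

```python
import random

def cftp_monotone(n_states, p_up, seed=0):
    """Propp-Wilson coupling from the past for a toy monotone chain: a
    random walk on {0, ..., n_states-1}, moving up with probability p_up
    and down otherwise, reflected at the boundaries. Monotonicity lets us
    sandwich all trajectories between the two extremal ones."""
    rng = random.Random(seed)
    updates = []              # shared randomness; updates[i] drives time -(i+1)
    T = 1
    while True:
        while len(updates) < T:
            updates.append(rng.random())
        lo, hi = 0, n_states - 1
        # Run forward from time -T to 0, reusing the SAME update variables.
        for u in reversed(updates[:T]):
            step = 1 if u < p_up else -1
            lo = min(max(lo + step, 0), n_states - 1)
            hi = min(max(hi + step, 0), n_states - 1)
        if lo == hi:          # every initial condition has coalesced
            return lo         # an unbiased draw from the stationary law
        T *= 2                # look further into the past and retry

print(cftp_monotone(10, 0.4))
```

Note how the same random update variables are reused each time the window is extended; this reuse is what makes the returned state exactly stationary rather than approximately so.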

F. Baccelli, B. Blaszczyszyn in collaboration with M. Karray (Orange Labs) are preparing a new book focusing on the mathematical tools at the basis of stochastic geometry. The book will cover the main mathematical foundations of the field, namely the theory of point processes and random measures as well as the theory of random closed sets. The basis will be the graduate classes and the research courses taught by the authors at a variety of places worldwide.

The collaboration of F. Baccelli with V. Anantharam (UC Berkeley) continues in new directions on high dimensional stochastic geometry, primarily in relation with Information Theory, cf. Section .

The collaboration of B. Blaszczyszyn with D. Yogeshwaran (Indian Statistical Institute) and Y. Yukich (Lehigh University) led to the development of the limit theory for geometric statistics on general input processes, cf. Section .

Classical models of stochastic geometry (SG) are not sufficient for analyzing wireless networks as they ignore the specific nature of radio channels.

Consider a wireless communication network made of a collection of nodes which in turn can be transmitters or receivers. At a given time, some subset of this collection of nodes simultaneously transmit, each toward its own receiver. Each transmitter–receiver pair in this snapshot requires its own wireless link. For each such wireless link, the power of the signal received from the link transmitter is jammed by the powers of the signals received from the other transmitters. Even in the simplest model where the power radiated from a point decays in some isotropic way with Euclidean distance, the geometry of the location of nodes plays a key role within this setting since it determines the signal to interference and noise ratio (SINR) at the receiver of each such link and hence the possibility of establishing simultaneously this collection of links at a given bit rate, as shown by information theory (IT). In this definition, the interference seen by some receiver is the sum of the powers of the signals received from all transmitters except its own. The SINR field, which is of an essentially geometric nature, hence determines the connectivity and the capacity of the network in a broad sense. The essential point here is that the characteristics and even the feasibilities of the radio links that are simultaneously active are strongly interdependent and determined by the geometry. Our work is centered on the development of an IT-aware stochastic geometry addressing this interdependence. Dyogene members published in 2009 a two-volume book on Stochastic Geometry and Wireless Networks , which became a reference publication in this domain.
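To make the SINR definition concrete, here is a minimal computation for one fixed snapshot, assuming the simplest isotropic power-law path loss d^(-alpha) and equal transmit powers (both simplifying assumptions of ours, not the models of the book):

```python
import math

def sinr(rx, tx_own, interferers, power=1.0, alpha=4.0, noise=1e-9):
    """SINR at receiver rx for the link from tx_own, with every other
    simultaneous transmitter contributing interference. Assumes isotropic
    power-law path loss d**-alpha and equal transmit powers."""
    def received(tx):
        d = math.dist(rx, tx)
        return power * d ** (-alpha)
    interference = sum(received(t) for t in interferers)
    return received(tx_own) / (noise + interference)

# Two simultaneous links on a line: geometry alone decides feasibility.
s = sinr(rx=(0.0, 0.0), tx_own=(1.0, 0.0), interferers=[(3.0, 0.0)])
print(s)  # about 81: the interferer is strongly attenuated by distance
```

Moving the interferer closer, or adding more of them, lowers the SINR of this link while raising the interference seen by the others; this is the interdependence that the IT-aware stochastic geometry makes precise.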

In collaboration with Martin Haenggi (University of Notre Dame, Notre Dame, IN, USA), Paul Keeler (Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany) and Sayandev Mukherjee (DOCOMO Innovations, Inc., Palo Alto, CA, USA), B. Blaszczyszyn is currently working on a book project that is intended to bridge the gap between the academic and industrial approaches to the design of next-generation cellular networks. In fact, the simulation-only approach adopted by a majority of industry practitioners does not scale with increasing network complexity, while analytical treatment is not yet widely accepted in the various bodies working out future standards specifications. The monograph is intended to bridge that gap, and to make the methods, tools, approaches, and results of stochastic geometry available to a wide group of researchers (both in academia and in industry), systems engineers, and network designers. We expect that academic researchers and graduate students will appreciate that the book collects and organizes the most recent research results in a convenient way.

The cavity method combined with geometric network concepts has recently led to spectacular progress in digital communications through error-correcting codes. More than fifty years after Shannon's theorems, some coding schemes like turbo codes and low-density parity-check (LDPC) codes now approach the limits predicted by information theory. One of the main ingredients of these schemes is the message-passing decoding strategy originally conceived by Gallager, which can be seen as a direct application of the cavity method on a random bipartite graph (with two types of nodes representing information symbols and parity check symbols, see ).
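As an illustration of such graph-based decoding, the toy Gallager-style bit-flipping decoder below works directly on a parity-check matrix. It is a much simpler relative of full message passing, shown here only to convey the local, graph-structured nature of the computation:

```python
def bit_flip_decode(H, y, max_iter=50):
    """Gallager-style bit-flipping decoding: repeatedly flip the bit
    involved in the largest number of unsatisfied parity checks.
    H is a binary parity-check matrix, y the received word."""
    x = list(y)
    m, n = len(H), len(H[0])
    for _ in range(max_iter):
        unsat = [sum(H[i][j] * x[j] for j in range(n)) % 2 for i in range(m)]
        if not any(unsat):
            return x                       # all parity checks satisfied
        # Count unsatisfied checks touching each bit; flip the worst one.
        scores = [sum(unsat[i] for i in range(m) if H[i][j]) for j in range(n)]
        x[max(range(n), key=scores.__getitem__)] ^= 1
    return x

# (7,4) Hamming code parity-check matrix; flip one bit of a codeword.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 0, 0, 1, 0, 0]           # all-zero codeword, bit 4 flipped
print(bit_flip_decode(H, received))        # recovers the all-zero codeword
```

On the sparse random bipartite graphs of LDPC codes, the analogous (soft) message-passing updates are exactly the cavity equations mentioned above.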

Modern coding theory is only one example of application of the cavity method. The concepts and techniques developed for its understanding have applications in theoretical computer science and a rich class of *complex systems*, in the fields of networking, economics and social sciences. The cavity method can be used both for the analysis of randomized algorithms and for the study of random ensembles of computational problems representative of real-world situations. In order to analyze the performance of algorithms, one generally defines a family of instances and endows it with a probability measure, in the same way as one defines a family of samples in the case of spin glasses or LDPC codes. The discovery that the hardest-to-solve instances, with all existing algorithms, lie close to a *phase transition* boundary has spurred a lot of interest. Theoretical physicists suggest that the reason is a structural one, namely a change in the geometry of the set of solutions related to the *replica symmetry breaking* in the cavity method. Phase transitions, which lie at the core of statistical physics, also play a key role in computer science, signal processing and social sciences. Their analysis is a major challenge that may have a strong impact on the design of related algorithms.

We develop mathematical tools in the theory of discrete probabilities and theoretical computer science in order to contribute to a rigorous formalization of the cavity method, with applications to network algorithms, statistical inference, and at the interface between computer science and economics (EconCS).

Sparse graph structures are useful in a number of information processing tasks where the computational problem can be described as follows: infer the values of a large collection of random variables, given a set of constraints or observations that induce relations among them. Similar design ideas have been proposed in sensing and signal processing and have applications in coding , network measurements, group testing and multi-user detection. While the computational problem is generally hard, sparse graphical structures lead to low-complexity algorithms that are very effective in practice. We develop tools in order to contribute to a precise analysis of these algorithms and of their gap to optimal inference, which remains a largely open problem.

A second line of activities concerns the design of protocols and algorithms enabling transmitters to learn their environment (the statistical properties of the channel quality to the corresponding receiver, as well as their interfering neighbouring transmitters) so as to optimise their transmission strategies and to fairly and efficiently share radio resources. This second objective calls for the development and use of machine learning techniques (e.g. bandit optimisation).

Wireless networks can be efficiently modelled as dynamic stochastic geometric networks. Their analysis requires taking into account, in addition to their geometric structure, the specific nature of radio channels and their statistical properties which are often unknown a priori, as well as the interaction through interference of the various individual point-to-point links. Established results contribute in particular to the development of network dimensioning methods and some of them are currently used in Orange internal tools for network capacity calculations.

Critical real-time embedded systems (cars, aircraft, spacecraft) are nowadays made up of multiple computers communicating with each other. The real-time constraints typically associated with operating systems now extend to the networks of communication between sensors/actuators and computers, and between the computers themselves. Once a medium is shared, the time between sending and receiving a message depends not only on technological constraints, but also, and mainly, on the interactions between the different streams of data sharing the medium. It is therefore necessary to have techniques to guarantee maximum network delays, in addition to local scheduling constraints, to ensure correct global real-time behaviour of distributed applications/functions.

Moreover, pessimistic estimates may lead to an overdimensioning of the network, which involves extra weight and power consumption. In addition, these techniques must be scalable: in a modern aircraft, thousands of data streams share the network backbone, so algorithm complexity should be at most polynomial.

A content distribution network (CDN) is a globally distributed network of proxy servers deployed in multiple data centers. The goal of a CDN is to serve content to end-users with high availability and high performance. CDNs serve a large fraction of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social networks.

A. Bouillard and F. Baccelli started a collaboration with Virag Shah (Postdoc at the Inria-Microsoft Saclay center) on the analysis of delays in data clusters. Their focus is on how delays scale with the size of a request and how delays compare under different policies for coding, data dissemination, and delivery. A paper on the matter has been submitted.

Renewable energy sources such as wind and solar have a high degree of unpredictability and time variation, which makes balancing demand and supply challenging. There is an increased need for ancillary services to smooth the volatility of renewable power. In the absence of large, expensive batteries, we may have to increase our inventory of responsive fossil-fuel generators, negating the environmental benefits of renewable energy. The proposed approach addresses this challenge by harnessing the inherent flexibility in demand of many types of loads. The objective is to develop decentralized control for automated demand dispatch that can be used by grid operators as an ancillary service to regulate the demand-supply balance at low cost. Our goal is to create the necessary ancillary services for the grid that are environmentally friendly, have low cost, and do not impact the quality of service (QoS) for the consumers.

A challenge in residential communities is that many loads are either on or off. How can an on/off load track the continuously varying regulation signal broadcast by a grid operator? The answer proposed in our recent work is based on probabilistic algorithms: a single load cannot track a regulation signal such as the balancing reserves, but a collection of loads can, provided they are equipped with local control. The value of probabilistic algorithms is that a) they can be designed with minimal communication, b) they avoid synchronization of load responses, and c) as shown in our recent work, they can be designed to simplify control at the grid level (see the survey and , ).
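A caricature of such a probabilistic local rule (our illustration, not the controller from the cited papers): each load flips on or off with a small probability modulated by the broadcast signal r, so the fraction of loads that are on tracks a target without any load-to-load communication or synchronization.

```python
import random

def step_loads(states, r, p0=0.05, rng=random.Random(1)):
    """One local randomized update per load: a positive regulation signal r
    makes OFF loads slightly more likely to turn ON, and ON loads slightly
    less likely to turn OFF. Each load only hears the broadcast r.
    (Illustrative rule, not the design of the cited work.)"""
    new = []
    for s in states:
        if s == 0:
            p_on = min(1.0, max(0.0, p0 * (1 + r)))
            new.append(1 if rng.random() < p_on else 0)
        else:
            p_off = min(1.0, max(0.0, p0 * (1 - r)))
            new.append(0 if rng.random() < p_off else 1)
    return new

loads = [0] * 5000                 # 5000 loads, all initially off
for _ in range(200):               # drive with a constant positive signal
    loads = step_loads(loads, r=0.5)
frac = sum(loads) / len(loads)
print(frac)                        # settles near (1 + r) / 2 = 0.75
```

The equilibrium fraction (1 + r)/2 follows from balancing the on and off flip rates; individual loads keep cycling, which also spreads the duty cycle fairly across the population.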

This research is developed within the Inria Associate Team PARIS.

In the study of complex networks, a network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. Community structures are quite common in real networks. Social networks include community groups (the origin of the term, in fact) based on common location, interests, occupation, etc. Metabolic networks have communities based on functional groupings. Citation networks form communities by research topic. Being able to identify these sub-structures within a network can provide insight into how network function and topology affect each other. We propose several algorithms for this problem and extensions , , , .
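As a point of comparison, the classical label-propagation baseline (a standard textbook method, not one of the algorithms proposed in the cited works) already captures the key intuition that dense internal connectivity traps labels:

```python
import random

def label_propagation(adj, rounds=20, rng=random.Random(0)):
    """Label-propagation community detection: each node repeatedly adopts
    the most frequent label among its neighbours, so densely connected
    groups tend to converge to a shared label."""
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)
        for v in nodes:
            if adj[v]:
                counts = {}
                for u in adj[v]:
                    counts[labels[u]] = counts.get(labels[u], 0) + 1
                best = max(counts.values())
                labels[v] = rng.choice([l for l, c in counts.items() if c == best])
    return labels

# Two triangles joined by a single bridge edge: two communities expected.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = label_propagation(adj)
print(labels)
```

On such a toy graph the two triangles usually keep distinct labels; on large real networks the method is fast but unstable, which is part of what motivates more principled algorithms.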

The work with S. Rybko, S. Vladimorov (IPIT, Moscow) and S. Shlosman (CNRS Marseille), which started through some funding from CNRS and led to several visits of S. Rybko and S. Vladimorov to Paris, has given rise to a series of research projects on queuing theory. The first one, on mean-fields for networks with node motion, was published in 2016; cf. Section .

F. Baccelli received an Honorary Doctorate from Heriot-Watt University. The graduation ceremony took place on November 17, 2016, in Edinburgh, United Kingdom.

CLOsed queueing Networks Exact Sampling

Functional Description

Clones is a Matlab toolbox for exact sampling of closed queueing networks.

Participant: Christelle Rovetta

Contact: Christelle Rovetta

In , we consider exact sampling from the stationary distribution of a closed queueing network with finite capacities. In a recent work, a compact representation of sets of states was proposed that enables exact sampling from the stationary distribution without considering all initial conditions in the coupling from the past (CFTP) scheme. This representation reduces the complexity of the one-step transition in the CFTP algorithm to O(

In our previous research , it was argued that loads can provide most of the ancillary services required today and in the future. Through load-level and grid-level control design, high-quality ancillary service for the grid is obtained without impacting the quality of service delivered to the consumer. This approach to grid regulation is called demand dispatch: loads provide service continuously and automatically, without consumer interference. In , we investigate what intelligence is required at the grid level. In particular, does the grid operator require more than one-way communication to the loads? Our main conclusion: the risk is not great in lower frequency ranges, e.g., PJM's RegA or BPA's balancing reserves. In particular, ancillary services from refrigerators and pool pumps can be obtained successfully with only one-way communication. This requires intelligence at the loads, and much less intelligence at the grid level.

Nowadays, telecommunication infrastructures are composed of proprietary hardware operated by a single entity to offer communication services to their final users. While this architecture simplifies the design and optimization of the network equipment for specific tasks, its low degree of flexibility represents the main limitation for the evolution of the network infrastructure. For this reason, network operators and equipment manufacturers have started the standardization process of a plethora of virtualization solutions that have been individually developed in recent years for enabling the sharing of general-purpose resources and increasing the flexibility of their network architectures. Such a process has led to the specification of the Network Functions Virtualization (NFV) technology, which promises to bring about several benefits, such as reduced CAPEX and OPEX (CAPital and OPerational EXpenditure), low time-to-market for new network services, higher flexibility to scale up and down the services according to users' demand, and simple and cheap testing of new services. Nevertheless, the consolidation of the virtualization technology represents one of the main challenges for its success and widespread utilization in telecommunication infrastructures, which still consist of a huge set of proprietary hardware appliances and software systems. Indeed, the sharing of the physical infrastructure among multiple virtual operators as well as the simple configuration of network services require the design of complex management mechanisms for the orchestration of the network equipment, with the final goal of dynamically adapting the infrastructure to the resource utilization.

In particular, spatio-temporal correlation of traffic demands and computational loads can result in high congestion and low network performance for virtual operators, thus leading to service level agreement breaches. In , we propose novel orchestration mechanisms to optimally control and mitigate the resource congestion of a physical infrastructure based on the NFV paradigm. More specifically, we analyze the congestion resulting from the sharing of the physical infrastructure and propose innovative orchestration mechanisms based on both centralized and distributed approaches, aimed at unleashing the potential of the NFV technology. In particular, we first formulate the network functions composition problem as a non-linear optimization model to accurately capture the congestion of physical resources. To further simplify the network management, we also propose a dynamic pricing strategy of network resources, proving that the resulting system achieves a stable equilibrium in a completely distributed fashion, even when all virtual operators independently select their best network configuration. Numerical results show that the proposed approaches consistently reduce resource congestion. Furthermore, the distributed solution well approaches the performance that can be achieved using a centralized network orchestration system.

Content Delivery Networks (CDNs) have been identified as one of the relevant use cases where the emerging paradigm of Network Functions Virtualization (NFV) will likely be beneficial. In fact, virtualization fosters flexibility, since on-demand resource allocation of virtual CDN nodes can accommodate sudden traffic demand changes. However, there are cases where physical appliances should still be preferred; therefore we envision a mixed architecture in between these two solutions, capable of exploiting the advantages of both. Motivated by these reasons, in , we formulate a two-stage stochastic planning model that can be used by CDN operators to compute the optimal long-term network planning decision, deploying physical CDN appliances in the network and/or leasing resources for virtual CDN nodes in data centers. Key findings demonstrate that for a large range of pricing options and traffic profiles, NFV can significantly reduce the network costs spent by the operator to provide the content distribution service.

The radio frequency (RF) spectrum is a scarce resource that has recently become particularly critical with the increased wireless demand. For this reason, the Federal Communications Commission (FCC) has recently allowed for opportunistic access to the unused spectrum in the TV bands (also called “white space”). With opportunistic access, however, there is a need to deploy enhanced channel allocation and power control techniques to mitigate interference, including Adjacent-Channel Interference (ACI). TV White Space (TVWS) spectrum access is often investigated without taking into account ACI between the transmissions of TV Bands Devices (TVBDs) and licensed TV stations. Guard Bands (GBs) can be used to protect data transmissions and mitigate the ACI problem. Therefore, in , we consider a spectrum database that is administrated by a database operator, and an opportunistic secondary system in which every TVBD is equipped with a single antenna that can be tuned to a subset of licensed channels. This can be done, for example, through adaptive channel aggregation or bonding techniques.

We investigate the distributed spectrum management problem in opportunistic TVWS systems using a game theoretical approach that accounts for adjacent channel interference and spatial reuse. TVBDs compete to access idle TV channels and select channel “blocks” that optimize an objective function. This function provides a tradeoff between the achieved rate and a cost factor that depends on the interference between TVBDs. We consider practical cases where contiguous or non-contiguous channels can be accessed by TVBDs, imposing realistic constraints on the maximum frequency span between the aggregated/bonded channels. We show that under general conditions, the proposed TVWS management games admit a potential function. Accordingly, a “best response” strategy allows us to determine the spectrum assignment of all players. This algorithm is shown to converge in a few iterations to a Nash Equilibrium (NE). Furthermore, we propose an effective algorithm based on imitation dynamics, where a TVBD probabilistically imitates successful selection strategies of other TVBDs in order to improve its objective function. Numerical results show that our game theoretical framework provides a very effective tradeoff (close to that of optimal, centralized spectrum allocations) between efficient TV spectrum use and reduction of interference between TVBDs.
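The convergence argument can be illustrated on a stripped-down version of the game (our simplification: identical players, unit co-channel cost, a weighted adjacent-channel cost, and no spatial reuse). This symmetric pairwise cost admits an exact potential, so round-robin best responses terminate at a pure Nash equilibrium:

```python
def best_response_dynamics(n_players, n_channels, weight_adj=0.5):
    """Toy potential game for channel selection: each player's cost is the
    number of co-channel players plus a weighted count of adjacent-channel
    players. Round-robin best responses strictly decrease an exact
    potential, hence converge to a pure NE. (Simplified stand-in for the
    cited model, not its actual objective function.)"""
    choice = [0] * n_players          # everyone starts on channel 0

    def cost(i, c):
        co = sum(1 for j in range(n_players) if j != i and choice[j] == c)
        adj = sum(1 for j in range(n_players) if j != i and abs(choice[j] - c) == 1)
        return co + weight_adj * adj

    changed = True
    while changed:
        changed = False
        for i in range(n_players):
            best = min(range(n_channels), key=lambda c: cost(i, c))
            if cost(i, best) < cost(i, choice[i]):
                choice[i] = best
                changed = True
    return choice

eq = best_response_dynamics(n_players=8, n_channels=4)
print(eq)   # a pure Nash equilibrium: no player can lower its own cost
```

Termination is guaranteed because every improving move strictly decreases the potential (the sum of pairwise interference costs), which can take only finitely many values.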

5G is envisioned to support scalable networks and improved user experience with virtually zero latency and ultra broad-band service. Supporting unlimited seamless mobility is one of the key issues, both for user experience and for network resource utilization efficiency. In , we focus on mobility management and user equipment (UE) speed class estimation, also known as mobility state estimation (MSE). We propose a method for estimating UE mobility which is compliant with the UE history information specifications of 3GPP (3rd Generation Partnership Project). We also take into account the impact of the environment on the UE trajectory and speed when determining the UE mobility state. We evaluate the effectiveness of our algorithm using realistic mobility traces and the network topology of the city of Cologne in Germany, provided by the Kolntrace project. Results show that the speed classification of UEs can be achieved with much higher accuracy compared to existing legacy 3GPP LTE MSE procedures.

Estimating mobile user speed is a problematic issue which has significant impacts on radio resource management and also on the mobility management of Long Term Evolution (LTE) networks. In , we introduce two algorithms that can estimate the speed of mobile user equipments (UEs) with low computational requirements and without modification of either the current user equipment or the 3GPP standard protocol. The proposed methods rely on uplink (UL) sounding reference signal (SRS) power measurements performed at the eNodeB (eNB) and remain efficient with large sampling periods (e.g., 40 ms or beyond). We evaluate the effectiveness of our algorithms using realistic LTE system data provided by the eNB Layer1 team of Alcatel-Lucent. Results show that the classification of UE speed required by LTE can be achieved with high accuracy. In addition, the algorithms have minimal impact on the central processing unit (CPU) and the memory of the eNB modem. They are thus very practical for today's LTE networks and would allow continuous and real-time UE speed estimation.

In small cell networks, high mobility of users results in frequent handoff and thus severely restricts the data rate for mobile users. To alleviate this problem, in we propose to use a heterogeneous, two-tier network structure where static users are served by both macro and micro base stations, whereas the mobile (i.e., moving) users are served only by macro base stations having larger cells; the idea is to prevent frequent data outage for mobile users due to handoff. We use the classical two-tier Poisson network model with different transmit powers (cf ), assume an independent Poisson process of static users and a doubly stochastic Poisson process of mobile users moving at a constant speed along infinite straight lines generated by a Poisson line process. Using stochastic geometry, we calculate the average downlink data rate of the typical static and mobile (i.e., moving) users, the latter accounting for handoff outage periods. We also consider the average throughput of these two types of users, defined as their average data rates divided by the mean total number of users co-served by the same base station. We find that if the density of a homogeneous network and/or the speed of mobile users is high, it is advantageous to let the mobile users connect only to some optimal fraction of BSs to reduce the frequency of handoffs during which the connection is not assured. If a heterogeneous structure of the network is allowed, one can further jointly optimize the mean throughput of mobile and static users by appropriately tuning the powers of micro and macro base stations subject to some aggregate power constraint ensuring unchanged mean data rates of static users via the network equivalence property (see ).

This work contributes to the line of research on the development of analytic tools for the QoS evaluation and dimensioning of operator cellular networks, which is the subject of a long-term collaboration between TREC/DYOGENE and Orange Labs (cf Section ). Our focus in is to explicitly characterize the disparity of quality of service (QoS) metrics between base stations in large heterogeneous wireless cellular networks. The considered QoS metrics are cell load, number of users, and user throughput. The spatial disparity of these metrics is due to the irregularity of the cells' geometry. In order to consider these irregularities, we assume a Poisson point process of base station locations, random transmission powers, and log-normal shadowing. The interdependency between the performances of the base stations is characterized by a system of load equations. The typical cell simulation model consists in solving this system in order to find the loads and then deduce the remaining characteristics for each cell of the network. Using stochastic geometric and queueing theoretic techniques, we define the QoS averages, variances, and distributions. Inspired by the analysis of the typical cell model, several investigations lead us to propose a fully analytic approach, called the mean cell model, that approximates the averages, variances, and distributions of these QoS metrics. Numerical experiments show a good agreement between the proposed approximations, simulation results, and real-life network measurements.
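The structure of such a coupled system of load equations can be sketched as follows (an illustrative toy form with made-up coupling coefficients, not the actual equations of the cited work): each cell's load grows with the load-weighted interference from the other cells, and the resulting monotone map can be solved by plain fixed-point iteration.

```python
def solve_loads(traffic, coupling, max_iter=500, tol=1e-10):
    """Fixed-point iteration for a toy cell-load system:
        rho_i = min(1, traffic_i * (1 + sum_j coupling[i][j] * rho_j)).
    Starting from zero, the monotone increasing map yields an increasing,
    bounded sequence, so the iteration converges to the least fixed point."""
    n = len(traffic)
    rho = [0.0] * n
    for _ in range(max_iter):
        new = [min(1.0, traffic[i] * (1.0 + sum(coupling[i][j] * rho[j]
                                                for j in range(n) if j != i)))
               for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, rho)) < tol:
            return new
        rho = new
    return rho

# Three cells with hypothetical traffic intensities and mutual couplings.
loads = solve_loads(traffic=[0.3, 0.5, 0.2],
                    coupling=[[0, 0.4, 0.1],
                              [0.4, 0, 0.3],
                              [0.1, 0.3, 0]])
print(loads)   # per-cell loads, each capped at 1 (fully loaded)
```

The min(1, ·) cap is what makes overload states meaningful: a cell pinned at load 1 keeps interfering with its neighbours but cannot serve more traffic.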

This work contributes to the line of research on Poisson convergence in wireless networks with strong shadowing initiated in , . More recently, Keeler, Ross and Xia derived in approximation and convergence results, which imply that the point process formed from the signal strengths received by an observer in a wireless network under a general statistical propagation model can be modeled by an inhomogeneous Poisson point process on the positive real line. The basic requirement for the results to apply is that there must be a large number of transmitters with a small proportion having a strong signal. The aim of is to apply some of the main results of in a less general but more easily applicable form, to illustrate how the results can apply to functions of the point process of signal strengths, and to gain intuition on when the Poisson model for transmitter locations is appropriate. A new and useful observation is that it is the stronger signals that behave more like a Poisson process, which supports recent experimental work.

A number of real-world systems consisting of interacting agents can be usefully modelled by graphs, where the agents are represented by the vertices of the graph and the interactions by the edges. Such systems can be as diverse and complex as social networks (traditional or online), protein-protein interaction networks, the Internet, transport networks and inter-bank loan networks. One important question that arises in the study of these networks is: to what extent do the local statistics of a network determine its global topology? This problem can be approached by constructing a random graph constrained to have some of the same local statistics as those observed in the graph of interest. One such random graph model is the configuration model, which is constructed in such a way that a uniformly chosen vertex has a given degree distribution. This is the random graph which provides the underlying framework for the problems considered in the PhD thesis . As our first problem, we consider propagation of influence on the configuration model, where each vertex can be influenced by any of its neighbours but, in its turn, it can only influence a random subset of its neighbours. Our (enhanced) model is described by the total degree of the typical vertex and the number of neighbours it is able to influence. We give a tight condition, involving the joint distribution of these two degrees, which allows with high probability the influence to reach an essentially unique non-negligible set of the vertices, called a big influenced component, provided that the source vertex is chosen from a set of good pioneers. We explicitly evaluate the asymptotic relative size of the influenced component as well as of the set of good pioneers, provided it is non-negligible. Our proof uses the joint exploration of the configuration model and the propagation of the influence up to the time when a big influenced component is completed, a technique introduced in Janson and Luczak .
Our model can be seen as a generalization of classical bond and node percolation on the configuration model, with the difference stemming from the oriented conductivity of edges in our model. We illustrate these results with a few examples which are interesting from either a theoretical or a real-world perspective. The examples are, in particular, motivated by the viral marketing phenomenon in the context of social networks. Next, we consider the isolated vertices and the longest edge of the minimum spanning tree of a weighted configuration model. Using the Stein-Chen method, we compute the asymptotic distribution of the number of vertices which are separated from the rest of the graph by some critical distance, say alpha. This distribution gives the scaling, with the size of the graph, of the length of the longest edge of the nearest-neighbour graph. We then use the results of Fountoulakis on percolation to prove that after removing all edges of length greater than alpha, the subgraph obtained is connected except for the isolated vertices. This leads us to conclude that the longest edge of the minimal spanning tree and that of the nearest-neighbour graph coincide with high probability. Finally, we investigate a more general question: whether some ordering based on local statistics of the graph leads to an ordering of its global topological properties, so that bounds for more complex graphs could be obtained from their simplified versions. To this end, we introduce a convex order on random graphs and discuss some of its implications, in particular how it can lead to an ordering of percolation probabilities in certain situations.
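The basic objects above can be sketched in a few lines. This is an illustrative simplification, not the thesis' enhanced model: here every influenced vertex influences each neighbour independently with a fixed probability p, rather than a random subset drawn jointly with its degree.

```python
import random
from collections import deque

def configuration_model(degrees, rng):
    """Pair half-edges (stubs) uniformly at random; self-loops and
    multi-edges are kept, as in the standard configuration model."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = [[] for _ in degrees]
    for i in range(0, len(stubs) - 1, 2):
        u, v = stubs[i], stubs[i + 1]
        adj[u].append(v)
        adj[v].append(u)
    return adj

def influenced_component(adj, source, p, rng):
    """BFS propagation: an influenced vertex influences each neighbour
    independently with probability p (oriented conductivity of edges)."""
    influenced = {source}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in influenced and rng.random() < p:
                influenced.add(w)
                queue.append(w)
    return influenced

rng = random.Random(42)
n = 10000
degrees = [3] * n                      # a 3-regular configuration model
adj = configuration_model(degrees, rng)
comp = influenced_component(adj, 0, 0.9, rng)
frac = len(comp) / n                   # relative size of the influenced set
```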

The threshold model is widely used to study the propagation of opinions and technologies in social networks. In this model, individuals adopt the new behavior based on how many of their neighbors have already chosen it. In we study cascades under the threshold model on sparse random graphs with community structure, to see whether the existence of communities affects the number of individuals who finally adopt the new behavior. Specifically, we consider the permanent adoption model, where nodes that have adopted the new behavior cannot change their state. When seeding a small number of agents with the new behavior, the community structure has little effect on the final proportion of people that adopt it, i.e., the contagion threshold is the same as if there were just one community. On the other hand, seeding a fraction of the population with the new behavior has a significant impact on the cascade, with the optimal seeding strategy depending on how strongly the communities are connected. In particular, when the communities are strongly connected, seeding in one community outperforms the symmetric seeding strategy that seeds equally in all communities.
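A minimal sketch of the permanent-adoption threshold dynamics on a planted two-community random graph (the graph model and all parameters are illustrative assumptions, not those of the paper):

```python
import random

def two_community_graph(n, p_in, p_out, rng):
    """Sparse two-community random graph: vertices 0..n-1 form community A,
    n..2n-1 form community B; edge probability p_in within a community
    and p_out across communities."""
    adj = [[] for _ in range(2 * n)]
    for u in range(2 * n):
        for v in range(u + 1, 2 * n):
            same = (u < n) == (v < n)
            if rng.random() < (p_in if same else p_out):
                adj[u].append(v)
                adj[v].append(u)
    return adj

def threshold_cascade(adj, seeds, theta):
    """Permanent adoption: a node adopts once at least a fraction theta of
    its neighbours have adopted, and never reverts. Iterate to a fixed
    point and return the final adopter set."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in enumerate(adj):
            if v not in adopted and nbrs:
                if sum(w in adopted for w in nbrs) >= theta * len(nbrs):
                    adopted.add(v)
                    changed = True
    return adopted

rng = random.Random(7)
n = 200
adj = two_community_graph(n, p_in=5.0 / n, p_out=1.0 / n, rng=rng)
# seed a fraction of community A only, vs. seeding symmetrically in both
seeds_one = set(range(n // 5))
seeds_sym = set(range(n // 10)) | set(range(n, n + n // 10))
frac_one = len(threshold_cascade(adj, seeds_one, theta=0.4)) / (2 * n)
frac_sym = len(threshold_cascade(adj, seeds_sym, theta=0.4)) / (2 * n)
```

Comparing `frac_one` and `frac_sym` while varying `p_out` gives a feel for how the connectivity between communities shifts the optimal seeding strategy.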

Let P be a simple, stationary, clustering point process on the

This gives the limit theory for non-linear geometric statistics (such
as clique counts, the number of Morse critical points, intrinsic
volumes of the Boolean model, and total edge length of the k-nearest
neighbor graph) of determinantal point processes with fast decreasing
kernels, including the

The proof of the central limit theorem relies on a factorial moment
expansion originating in Blaszczyszyn to show clustering of
mixed moments of the score function. Clustering extends the cumulant
method to the setting of purely atomic random measures, yielding the
asymptotic normality of

A one-year CRE contract titled “Mise au point d’une méthode d’évaluation de la qualité de service pour le sens montant d’un réseau cellulaire LTE validée avec les mesures terrain” (development of a method, validated against field measurements, for evaluating the quality of service on the uplink of an LTE cellular network) between Inria and Orange Labs was signed at the end of 2015 and realized in 2016. It is a part of the long-term collaboration between TREC/DYOGENE and Orange Labs, represented by M. K. Karray, on the development of analytic tools for the QoS evaluation and dimensioning of operator cellular networks. Arpan Chattopadhyay was hired by Inria as a post-doctoral fellow thanks to this contract.

Arpan Mukhopadhyay was hired by Inria as a post-doctoral fellow within this lab, which is dedicated to research on the communication networks of the future;
https://

PhD: Dalia-Georgiana Herculea, co-advised by B. Blaszczyszyn, E. Altman and Ph. Jacquet

Dyogene participates in LINCS https://

Members of Dyogene participate in Research Group GeoSto
(Groupement de recherche, GdR 3477)
http://

Members of Dyogene participate in GdR-IM
(Informatique-Mathématiques), https://

Members of Dyogene participate in GdR-RO (Recherche Opérationelle;
GdR CNRS 3002), http://

Gaspard Monge Program for Optimization and Operations Research project Decentralized control for renewable integration in smart-grids (2015-17). PI: A. Busic.

Markovian Modeling Tools and Environments - coordinator: Alain Jean-Marie (Inria Maestro); local coordinator (for partner Inria Paris-Rocquencourt): A. Bušić; Started: January 2013; Duration: 48 months; partners: Inria Paris-Rocquencourt (EPI DYOGENE), Inria Sophia Antipolis Méditerranée (EPI MAESTRO), Inria Grenoble Rhône-Alpes (EPI MESCAL), Université de Versailles-St-Quentin, Telecom SudParis, Université Paris-Est Créteil, Université Pierre et Marie Curie.

The aim of the project is to build a modeling environment dedicated to Markov models. One part will develop perfect simulation techniques, which allow sampling from the stationary distribution of the process. A second will develop parallelization techniques for Monte Carlo simulation. A third will develop numerical computation techniques for a wide class of Markov models. All these developments will be integrated into a programming environment allowing the specification of models and of their solution strategies. Several applications will be studied in various scientific disciplines: physics, biology, economics, network engineering.
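The perfect simulation part can be illustrated by the classical Propp-Wilson coupling-from-the-past scheme, sketched here for a monotone bounded birth-death chain (the chain and all parameters are illustrative, not the project's models):

```python
import random

def update(x, u, k):
    """Monotone update rule for a bounded birth-death chain on {0,...,k}:
    birth with probability 0.4, death with probability 0.4, else hold."""
    if u < 0.4:
        return min(x + 1, k)
    if u < 0.8:
        return max(x - 1, 0)
    return x

def cftp(k=10, seed=3):
    """Propp-Wilson coupling from the past: run the extreme trajectories
    (started at 0 and at k) from ever-further in the past, reusing the
    same randomness, until they coalesce at time 0; the common value is
    an exact sample from the stationary distribution."""
    rng = random.Random(seed)
    randomness = []            # u at times -1, -2, ... (most recent first)
    t = 1
    while True:
        while len(randomness) < t:
            randomness.append(rng.random())
        low, high = 0, k
        # apply the updates from time -t up to time 0, oldest first
        for u in reversed(randomness[:t]):
            low = update(low, u, k)
            high = update(high, u, k)
        if low == high:
            return low
        t *= 2                 # go further back into the past and retry

sample = cftp()
```

The key design point is that the randomness for each past time step is drawn once and reused on every restart; redrawing it would bias the sample.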

Title: Probabilistic Algorithms for Renewable Integration in Smart Grid

International Partner (Institution - Laboratory - Researcher):

University of Florida (United States) — Sean Meyn

Start year: 2015

See also: http://

The importance of statistical modeling and probabilistic control techniques in the power systems area is now evident to practitioners in both the U.S. and Europe. Renewable generation has brought unforeseen volatility to the grid, which requires new techniques in distributed and probabilistic control. In a series of recent papers, the two PIs have brought together their complementary skills in optimization, Markov modeling, simulation, and stochastic networks, which may help to solve some pressing open problems in this area. This new research also opens many exciting scientific questions.

B. Blaszczyszyn is collaborating with T. Rolski and R. Szekli (University of Wroclaw), D. Yogeshwaran (Indian Statistical Institute) and J. E. Yukich (Lehigh University).

A. Busic is participating in the ARPA-E Powernet project led by Ram Rajagopal (Stanford); https://

Sean Meyn [Professor, University of Florida, Jun 2016]

Adithya Munegowda Devraj [PhD student, University of Florida, May – Jul 2016]

Sebastien Ziesche [PhD student, Karlsruhe Institute of Technology, March 2016]

Bartlomiej Blaszczyszyn: 5th Stochastic Models Conference in Będlewo, Poland; http://

Ana Busic: Workshop on the stochastic optimization and
games with applications to energy and networks;
http://

Bartlomiej Blaszczyszyn: WiOpt/Spaswin 2016.

Anne Bouillard: Valuetools 2016 conference.

Ana Busic: QEST 2016, IEEE SmartGridComm 2016.

Jocelyne Elias: GSNC 2016, Globecom 2016 SAC CN, IEEE WCNC 2016, WD 2016, IEEE ICCVE 2016.

Marc Lelarge: AISTATS 2017, NIPS 2016, Workshop on Algorithms and Models for the Web Graph 2016, ACM SIGMETRICS 2016 and 2017, ACM Mobihoc 2016 and 2017, IEEE INFOCOM 2016 and 2017, ICALP 2016.

Dalia-Georgiana Herculea: IEEE VTC, IEEE ICT.

Marc Lelarge: IEEE Transactions on Network Science and Engineering, Bernoulli Journal, Queueing Systems.

Bartlomiej Blaszczyszyn: Ann. Appl. Probab., IEEE TNSE, TWC, WCL.

Dalia-Georgiana Herculea: IEEE Access, Wiley Transactions on Emerging Telecommunications Technologies.

Bartlomiej Blaszczyszyn: Workshop on Continuum Percolation in
Lille http://

Anne Bouillard: ASMTA conference.

Ana Busic: Institute for Mathematics and its Applications, Univ. of Minnesota; Indo-UK workshop on Energy Management; ICMS Edinburgh; Workshop EDF Lab'; Simons Institute, Berkeley; Faculty of Electrical Engineering and Computing, University of Zagreb.

Dalia-Georgiana Herculea: Institute of Stochastics, University of Ulm, LINCS Internal Workshop.

Ana Busic is co-responsible for the research group COSMOS (Stochastic optimization and control, modeling and simulation) of the GdR-RO;
http://

Dalia-Georgiana Herculea: representative of PhD students on the board of the Laboratory of Information, Networking and Communication Sciences (LINCS).

Bartlomiej Blaszczyszyn: ANR France, ISF Israel, NSC Poland, ERC Europe.

A. Busic: Co-president of CES (Commission des Emplois Scientifiques) of Inria Paris, Member of the hiring committee for CR2 research positions at Inria Paris, Member of CDT (Commission de développement technologique) of Inria Paris.

Licence: Anne Bouillard (Cours) and Rémi Varloot (TD) **Structures et algorithmes aléatoires** 80 heqTD, L3, ENS, France.

Licence: Anne Bouillard (Cours) **Théorie de l'information et du codage** 24 heqTD, L3, ENS, France.

Licence: Anne Bouillard (Cours)
**Algorithmique et programmation** 21 heqTD, L3,
ENS, France.

Master: Bartlomiej Blaszczyszyn (Cours)
**Processus ponctuels, graphes aléatoires et
géométrie stochastique** 39 heqTD, M2
Probabilités et Modèles Aléatoires, UPMC, France.

Master: Anne Bouillard (Cours + TD)
**Fondements de la modélisation des réseaux**
18 heqTD, M2, MPRI, France.

Master: Ana Busic and Marc Lelarge (Cours) and Rémi
Varloot (TD) **Modèles et algorithmes de réseaux**,
50 heqTD, M1, ENS, Paris, France.

Master: Ana Busic (Cours) **Simulation**,
13.5 heqTD, M2 AMIS UVSQ, France.

Master: Marc Lelarge (TD) **Networks:
distributed control and emerging phenomena** (course given
by Laurent Massoulié), M2, Ecole
Polytechnique, France.

Master: Marc Lelarge (TD) **Aléatoire** (course given
by Sylvie Méléard), M2, Ecole Polytechnique, France.

PhD: Kumar Gaurav, On some diffusion and spanning problems in configuration model, defended in November 2016, supervised by B. Blaszczyszyn

PhD in progress: Léo Miolane, supervised by Marc Lelarge

PhD in progress: Dalia-Georgiana Herculea, co-advised by B. Blaszczyszyn, E. Altman and Ph. Jacquet

PhD in progress: Lennart Gulikers, supervised by Marc Lelarge with Laurent Massoulié

PhD in progress: Md Umar Hashmi, Decentralized control for renewable integration in smartgrids, from December 2015, co-advised by A. Busic and M. Lelarge

PhD in progress: Alexandre Hollocou, supervised by Marc Lelarge with Thomas Bonald

PhD in progress: Christelle Rovetta, Applications of perfect sampling to queuing networks and random generation of combinatorial objects, from December 2013, co-advised by Anne Bouillard and Ana Busic

PhD in progress: Sébastien Samain, Monte Carlo methods for performance evaluation and reinforcement learning, from November 2016, supervised by A. Busic

PhD in progress: Rémi Varloot, supervised by Marc Lelarge with Laurent Massoulié

PostDoc: Arpan Mukhopadhyay, supervised by Marc Lelarge with Nidhi Hegde

PostDoc: Virag Shah, supervised by Marc Lelarge with Laurent Massoulié and Milan Vojnovic

Anne Bouillard: reviewer of the PhD thesis of Mickael Back (University of Kaiserslautern).

Ana Busic: PhD jury of Alexandra Ugolnikova, LIPN (Université Paris Nord), Rim Kaddah (Télécom ParisTech).

Bartlomiej Blaszczyszyn: reviewer of the PhD thesis of Jihong Yu, LRI (Université Paris-Sud).

Marc Lelarge: reviewer for the PhD thesis of Kevin Scaman (ENS Cachan) and Van Hao Can (Université Aix-Marseille).

Anne Bouillard: invited speaker at the “Girls can code” 2016 session (training session for female junior and senior high school students).