A large number of real-world structures and phenomena can be described by networks: separable elements with connections between certain pairs of them. Among such networks, the best known and most studied in computer science is the Internet. Moreover, the Internet (as the underlying physical network) itself gives rise to many new networks: networks of hyperlinks, Internet-based social networks, distributed databases, codes on graphs, and local interactions of wireless devices. These huge networks pose exciting challenges for mathematicians, and the mathematical theory of networks faces novel, unconventional problems. For example, very large networks cannot be completely known, and data about them can be collected only by indirect means such as random local sampling or monitoring the behavior of various aggregated quantities.

The scientific focus of DYOGENE is on geometric network dynamics arising in communications. By geometric networks we understand networks with a nontrivial, discrete or continuous, geometric definition of the existence of links between the nodes. In stochastic geometric networks, this definition leads to random graphs or stochastic geometric models. A first type of geometric network dynamics is that where the nodes or the links change over time according to an exogenous dynamics (e.g. node motion and geometric definition of the links). We will refer to this as dynamics of geometric networks below. A second type is that where links and/or nodes are fixed but harbor local dynamical systems (in our case, stemming from e.g. information theory, queuing theory, or the social and economic sciences). This will be called dynamics on geometric networks. A third type is that where the dynamics of the network geometry and the local dynamics interplay. Our motivations for studying these systems stem from many fields of communications where they play a central role, in particular: message-passing algorithms; epidemic algorithms; wireless networks and information theory; device-to-device networking; distributed content delivery; social and economic networks.

Network calculus is a theory for obtaining deterministic upper bounds in networks that was developed by R. Cruz. From the modelling point of view, it is an algebra for computing and propagating constraints given in terms of envelopes. A flow is represented by its cumulative function, i.e., the amount of data it has sent up to any given time, and envelopes constrain the growth of this function. The operations used for this are an adaptation of filtering theory to the (min,+) algebra, where convolution and deconvolution combine arrival and service curves.

We investigate the complexity of computing exact worst-case performance bounds in network calculus and develop algorithms that present a good trade-off between algorithmic efficiency and accuracy of the bounds.
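For the simplest envelopes these bounds have a closed form. As an illustrative sketch (our own helpers, not the team's tools), the classical single-server bounds for a token-bucket arrival curve alpha(t) = sigma + rho*t served by a rate-latency curve beta(t) = R*(t - T)+ can be computed directly, assuming the stability condition rho <= R:

```python
def delay_bound(sigma, rho, R, T):
    """Worst-case delay: horizontal deviation between the arrival curve
    alpha(t) = sigma + rho*t and the service curve beta(t) = R*(t - T)+."""
    assert rho <= R, "stability requires rho <= R"
    return T + sigma / R

def backlog_bound(sigma, rho, R, T):
    """Worst-case backlog: vertical deviation, attained at t = T."""
    assert rho <= R, "stability requires rho <= R"
    return sigma + rho * T
```

For instance, a flow with burst sigma = 10 and rate rho = 1 crossing a server with rate R = 5 and latency T = 2 has worst-case delay T + sigma/R = 4 and worst-case backlog sigma + rho*T = 12. The difficulty addressed in our work is that propagating such bounds through a network of servers with cross-traffic quickly becomes much harder than this single-node case.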

Simulation approaches can be used to efficiently estimate the stationary behavior of Markov chains by providing independent samples distributed according to their stationary distribution, even when it is impossible to compute this distribution numerically.

The classical Markov chain Monte Carlo simulation techniques suffer from two main problems: the samples they produce follow a distribution that only approximates the stationary one, and deciding how long the chain must run before this approximation is acceptable (the burn-in time) is difficult in practice.

To overcome these issues, Propp and Wilson have introduced a perfect sampling algorithm (PSA) that has later been extended and applied in various contexts, including statistical physics, stochastic geometry, theoretical computer science, and communications networks (see also the annotated bibliography by Wilson).

Perfect sampling uses coupling arguments to give an unbiased sample from the stationary distribution of an ergodic Markov chain on a finite state space.

The algorithm is based on a backward coupling scheme: it computes the trajectories issued from all possible initial states, starting from increasingly remote instants in the past.

Any ergodic Markov chain on a finite state space has a representation of type () that couples in finite time with probability 1, so Propp and Wilson's PSA gives a “perfect” algorithm in the sense that it provides an *unbiased* sample in *finite time*. Furthermore, the stopping criterion is given by the coupling from the past scheme, and explicit bounds on the coupling time are not needed for the validity of the algorithm.

However, from the computational side, PSA is efficient only under some monotonicity assumptions that allow reducing the trajectories considered in the coupling from the past procedure to those issued from the extremal initial conditions. Our goal is to propose new algorithms overcoming this limitation by exploiting semantic and geometric properties of the event space and the state space.

Classical models of stochastic geometry (SG) are not sufficient for analyzing wireless networks as they ignore the specific nature of radio channels.

Consider a wireless communication network made of a collection of nodes which can be transmitters or receivers. At a given time, some subset of this collection of nodes simultaneously transmit, each toward its own receiver. Each transmitter–receiver pair in this snapshot requires its own wireless link. For each such wireless link, the power of the signal received from the link transmitter is jammed by the powers of the signals received from the other transmitters. Even in the simplest model where the power radiated from a point decays in some isotropic way with Euclidean distance, the geometry of the location of nodes plays a key role within this setting since it determines the signal to interference and noise ratio (SINR) at the receiver of each such link and hence the possibility of establishing simultaneously this collection of links at a given bit rate, as shown by information theory (IT). In this definition, the interference seen by some receiver is the sum of the powers of the signals received from all transmitters except its own. The SINR field, which is of an essentially geometric nature, hence determines the connectivity and the capacity of the network in a broad sense. The essential point here is that the characteristics and even the feasibilities of the radio links that are simultaneously active are strongly interdependent and determined by the geometry. Our work is centered on the development of an IT-aware stochastic geometry addressing this interdependence.
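As a toy illustration of this definition (our own sketch; names and default parameters are hypothetical), assuming isotropic power-law path loss r^(-beta), the SINR at a receiver reduces to a ratio of received powers:

```python
def sinr_at_origin(tx_dist, interferer_dists, path_loss_exp=4.0,
                   noise=1e-9, power=1.0):
    """SINR at a receiver placed at the origin: the useful signal comes
    from its transmitter at distance tx_dist; the interference is the
    sum of the powers received from all other active transmitters."""
    signal = power * tx_dist ** (-path_loss_exp)
    interference = sum(power * r ** (-path_loss_exp)
                       for r in interferer_dists)
    return signal / (noise + interference)
```

Even this toy version exhibits the geometric interdependence discussed above: moving a single interferer changes the SINR, and hence the feasible bit rate, of every link that hears it.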

The cavity method combined with geometric network concepts has recently led to spectacular progress in digital communications through error-correcting codes. More than fifty years after Shannon's theorems, some coding schemes like turbo codes and low-density parity-check codes (LDPC) now approach the limits predicted by information theory. One of the main ingredients of these schemes is message-passing decoding strategies originally conceived by Gallager, which can be seen as direct applications of the cavity method on a random bipartite graph (with two types of nodes representing information symbols and parity-check symbols).
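Gallager's hard-decision decoding can be sketched in a few lines. The bit-flipping variant below (a simplification of full message passing, written by us for illustration) repeatedly flips the bit involved in the largest number of unsatisfied parity checks:

```python
def bit_flip_decode(H, word, max_iters=20):
    """Gallager-style bit-flipping decoding for a binary code given by
    its parity-check matrix H (list of rows over GF(2))."""
    w = list(word)
    n = len(w)
    for _ in range(max_iters):
        # syndrome: which parity checks are currently violated
        unsat = [sum(H[c][j] * w[j] for j in range(n)) % 2
                 for c in range(len(H))]
        if not any(unsat):
            return w                      # all checks satisfied
        # count unsatisfied checks touching each bit, flip the worst bit
        score = [sum(H[c][j] for c in range(len(H)) if unsat[c])
                 for j in range(n)]
        w[max(range(n), key=lambda j: score[j])] ^= 1
    return w
```

For example, with the parity-check matrix of the (7,4) Hamming code, a codeword corrupted in a single position is corrected in one flip. Replacing these integer flip counts by probabilistic messages exchanged along the edges of the bipartite graph yields belief-propagation decoding, i.e., the cavity method proper.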

Modern coding theory is only one example of application of the cavity method. The concepts and techniques developed for its understanding have applications in theoretical computer science and in a rich class of *complex systems* arising in networking, economics and the social sciences.
The cavity method can be used both for the analysis of randomized
algorithms and for the study of random ensembles of computational
problems representative of real-world situations. In order to analyze the
performance of algorithms, one generally defines a family of instances
and endows it with a probability measure, in the same way as one
defines a family of samples in the case of spin glasses or LDPC
codes. The discovery that the hardest-to-solve instances, with all
existing algorithms, lie close to a *phase transition* boundary has spurred
a lot of interest. Theoretical physicists suggest that the reason is a structural one, namely a change in the geometry of the set of solutions related to the *replica symmetry breaking* in the cavity method.
Phase transitions, which lie at the core of statistical physics, also play a key role in computer science, signal processing and the social sciences.
Their analysis is a major challenge that may have a strong impact on the design of related algorithms.

We develop mathematical tools in the theory of discrete probabilities and theoretical computer science in order to contribute to a rigorous formalization of the cavity method, with applications to network algorithms, statistical inference, and at the interface between computer science and economics (EconCS).

Sparse graph structures are useful in a number of information processing tasks where the computational problem can be described as follows: infer the values of a large collection of random variables, given a set of constraints or observations that induce relations among them. Similar design ideas have been proposed in sensing and signal processing and have applications in coding, network measurements, group testing and multi-user detection. While the computational problem is generally hard, sparse graphical structures lead to low-complexity algorithms that are very effective in practice. We develop tools in order to contribute to a precise analysis of these algorithms and of their gap to optimal inference, which remains a largely open problem.

A second line of activities concerns the design of protocols and algorithms enabling a transmitter to learn its environment (the statistical properties of the channel to the corresponding receiver, as well as the interfering neighbouring transmitters) so as to optimise its transmission strategy and to fairly and efficiently share radio resources. This second objective calls for the development and use of machine learning techniques (e.g. bandit optimisation).

Wireless networks can be efficiently modelled as dynamic stochastic geometric networks. Their analysis requires taking into account, in addition to their geometric structure, the specific nature of radio channels and their statistical properties which are often unknown a priori, as well as the interaction through interference of the various individual point-to-point links. Established results contribute in particular to the development of network dimensioning methods and some of them are currently used in Orange internal tools for network capacity calculations.

Critical real-time embedded systems (cars, aircraft, spacecraft) are nowadays made up of multiple computers communicating with each other. The real-time constraints typically associated with operating systems now extend to the networks of communication between sensors/actuators and computers, and between the computers themselves. Once a medium is shared, the time between sending and receiving a message depends not only on technological constraints, but mainly on the interactions between the different streams of data sharing the medium. It is therefore necessary to have techniques that guarantee maximum network delays, in addition to local scheduling constraints, to ensure a correct global real-time behaviour of distributed applications/functions.

Moreover, pessimistic estimates may lead to an overdimensioning of the network, which involves extra weight and power consumption. In addition, these techniques must be scalable: in a modern aircraft, thousands of data streams share the network backbone, so the algorithmic complexity should be at most polynomial.

A content distribution network (CDN) is a globally distributed network of proxy servers deployed in multiple data centers. The goal of a CDN is to serve content to end-users with high availability and high performance. CDNs serve a large fraction of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social networks. In , we address the problem of content replication in large distributed content delivery networks.

This research is developed by the Associate Team PARIS;
http://

**Challenges to Renewable Integration.**
With greater penetration of renewables, there is a need for tremendous shock absorbers to smooth the volatility of renewable power. An example is the balancing reserves obtained today from fossil-fuel generators, that ramp up and down their power output in response to a command signal from a grid balancing authority - an example of an ancillary service. In the absence of large, expensive batteries, we may have to increase our inventory of responsive fossil-fuel generators, negating the environmental benefits of renewable energy.

The goal of our research is to demonstrate that we do not need to rely entirely on expensive batteries or fast-responding fossil fuel generators to track regulation signals or balancing reserves. There is enormous flexibility in the power consumption of the majority of electric loads. This flexibility can be exploited to create “virtual batteries”. The best example of this is the heating, ventilation, and air conditioning (HVAC) system of a building: There is no perceptible change to the indoor climate if the airflow rate is increased by 10% for 20 minutes, and decreased by 10% for the next 20 minutes. Power consumption deviations follow the airflow deviations closely, but indoor temperature will be essentially constant.

A starting point in our research is the fact that many of the ancillary services needed today are defined by a power deviation reference signal that has zero mean. Examples are PJM’s RegD signal, or BPA’s balancing reserves.

**Control Design with Local Intelligence at the Loads.**
An emphasis of our research is the creation of Smart Communities to complement a Smart Grid: intelligence is created at each load in the community. For example, a water heater may be equipped with a simple device that measures the grid frequency – a measure of power mismatch that is regulated to stabilize the power grid. Larger loads may receive a signal from a balancing authority.

A challenge in residential communities is that many loads are either on or off. How can an on/off load track the continuously varying regulation signal broadcast by a grid operator? The answer proposed in our recent work is based on probabilistic algorithms: a single load cannot track a regulation signal such as the balancing reserves, but a collection of loads can, provided they are equipped with local control. The value of probabilistic algorithms is that a) they can be designed with minimal communication, b) they avoid synchronization of load responses, and c) it is shown in our recent work that they can be designed to simplify control at the grid level (see the survey ). Other researchers have introduced randomization (see in particular the thesis of J. Mathieu), but without the use of “local intelligence” (distributed control).
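A deliberately simplified, memoryless sketch of this idea (our own toy model; the actual designs give each load richer local dynamics): each load independently switches on with a probability biased by the zero-mean regulation signal, so the aggregate consumption tracks the signal with no inter-load communication and no synchronization.

```python
import random

def simulate_loads(n, reference, p_base=0.5, gain=0.4, seed=0):
    """Each of n on/off loads independently turns on with probability
    p_base + gain * r_t, so the fraction of loads that are on tracks
    the zero-mean regulation signal (r_t) around the baseline p_base."""
    rng = random.Random(seed)
    frac_on = []
    for r in reference:
        p = min(1.0, max(0.0, p_base + gain * r))
        on = sum(rng.random() < p for _ in range(n))
        frac_on.append(on / n)
    return frac_on
```

With many loads the law of large numbers makes the tracking error small: for n = 10000 loads and reference values 0, 0.5, and -0.5, the on-fractions concentrate near 0.5, 0.7, and 0.3 respectively, while each individual load still behaves randomly.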

In the study of complex networks, a network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. Community structures are quite common in real networks. Social networks include community groups (the origin of the term, in fact) based on common location, interests, occupation, etc. Metabolic networks have communities based on functional groupings. Citation networks form communities by research topic. Being able to identify these sub-structures within a network can provide insight into how network function and topology affect each other. We propose several algorithms for this problem and its extensions.

**Stochastic networks and stochastic geometry conference dedicated to François Baccelli on his 60th birthday**

This three-day event http://

**Awards**

Ana Busic and Sean Meyn jointly received a Google Faculty Research Award for their research on Distributed Control for Renewable Integration in Smart Communities.

http://

The Applied Probability Society of INFORMS presents a 2015 Best Publication Award to Mohsen Bayati, Marc Lelarge and Andrea Montanari for their paper

CLOsed queueing Networks Exact Sampling

Functional Description

Clones is a Matlab toolbox for exact sampling of closed queueing networks.

Participant: Christelle Rovetta

Contact: Christelle Rovetta

Based on a stationary Poisson point process, a wireless network model with random propagation effects (shadowing and/or fading) is considered in in order to examine the process formed by the signal-to-interference-plus-noise ratio (SINR) values experienced by a typical user with respect to all base stations in the down-link channel. This SINR process is completely characterized by deriving its factorial moment measures, which involve numerically tractable, explicit integral expressions. This novel framework naturally leads to expressions for the k-coverage probability, including the case of random SINR threshold values considered in multi-tier network models. While the k-coverage probabilities correspond to the marginal distributions of the order statistics of the SINR process, a more general relation is presented connecting the factorial moment measures of the SINR process to the joint densities of these order statistics. This gives a way to calculate exact values of the coverage probabilities arising in a general scenario of signal combination and interference cancellation between base stations. The presented framework, consisting of mathematical representations of SINR characteristics in terms of the factorial moment measures, holds for the whole domain of SINR and is amenable to considerable model extension.

Geographic locations of cellular base stations can sometimes be well fitted with spatial homogeneous Poisson point processes. In we make a complementary observation: in the presence of log-normal shadowing of sufficiently high variance, the statistics of the propagation loss of a single user with respect to different network stations are invariant with respect to their geographic positioning, whether regular or not, for a wide class of empirically homogeneous networks. Even in the perfectly hexagonal case they appear as though they were realized in a Poisson network model, i.e., they form an inhomogeneous Poisson point process on the positive half-line with a power-law density characterized by the path-loss exponent. At the same time, the conditional distances to the corresponding base stations, given their observed propagation losses, become independent and log-normally distributed, which can be seen as a decoupling between the real and the model geometry. The result also applies to the Suzuki (Rayleigh-log-normal) propagation model. We use the Kolmogorov-Smirnov test to empirically study the quality of the Poisson approximation and use it to build a linear-regression method for the statistical estimation of the value of the path-loss exponent.

In a paper accepted for publication in the Journal of Applied Probability,
F. Baccelli and V. Anantharam consider
a family of Boolean models, indexed by integers

In an Infocom'15 paper, F. Baccelli and X. Zhang (Qualcomm) have introduced an analytically tractable stochastic geometry model for urban wireless networks, where the locations of the nodes and the shadowing are highly correlated and different path loss functions can be applied to line-of-sight (LOS) and non-line-of-sight (NLOS) links.

Using a distance-based LOS path loss model and a blockage (shadowing)-based NLOS path loss model, one can derive the distribution of the interference observed at a typical location and the joint distribution at different locations of the network. When applied to cellular networks, this model leads to tractable coverage probabilities (SINR distribution) expressions. This model captures important features of urban wireless networks, which were difficult to analyze using existing models.

This model was later extended in a joint work by the same authors and Robert Heath (UT Austin), in a paper presented at IEEE Globecom'15, where it received the best paper award.

In a paper to be published in IEEE Transactions on Information Theory, F. Baccelli, N. Lee and Robert Heath consider large random wireless networks where transmit-and-receive node pairs communicate within a certain range while sharing a common spectrum. By modeling the spatial locations of nodes as Poisson point processes, analytical expressions for the ergodic spectral efficiency of a typical node pair are derived as a function of the channel state information available at a receiver (CSIR) in terms of relevant system parameters: the density of communication links, the number of receive antennas, the path loss exponent, and the operating signal-to-noise ratio. One key finding is that when the receiver only exploits CSIR for the direct link, the sum spectral efficiency increases linearly with the density, provided the number of receive antennas increases as a certain super-linear function of the density. When each receiver exploits CSIR for a set of dominant interfering links in addition to that of the direct link, the sum spectral efficiency increases linearly with both the density and the path loss exponent if the number of antennas is a linear function of the density. This observation demonstrates that having CSIR for dominant interfering links provides an order gain in the scaling law. It is also shown that this linear scaling holds for direct CSIR when incorporating the effect of the receive antenna correlation, provided that the rank of the spatial correlation matrix scales super-linearly with the density. These scaling laws are derived from integral representations of the distribution of the Signal to Interference and Noise Ratio, which are of independent interest and which are in turn derived from stochastic geometry, more precisely from the theory of Shot Noise fields.

In a joint work with Mir-Omid Haji-Mirsadeghi, Sharif University,
Department of Mathematics, F. Baccelli
studied a class of non-measure-preserving dynamical
systems on counting measures, called point-maps.
This research introduced two objects associated with a point map

The

The

Two papers on the matter are available. The first one is under revision for the Annals of Probability.

In recent years, wearable devices and wireless body area networks have gained momentum as a means to monitor people’s behavior and simplify their interaction with the surrounding environment, thus representing a key element of the body-to-body networking (BBN) paradigm. Within this paradigm, several transmission technologies, such as 802.11 and 802.15.4, that share the same unlicensed band (namely, the industrial, scientific, and medical band) coexist, dramatically increasing the level of interference and, in turn, negatively affecting network performance. In this paper, we analyze the cross-technology interference (CTI) caused by the utilization of different transmission technologies that share the same radio spectrum. We formulate an optimization model that considers internal interference, as well as CTI to mitigate the overall level of interference within the system, explicitly taking into account node mobility. We further develop three heuristic approaches to efficiently solve the interference mitigation problem in large-scale network scenarios. Finally, we propose a protocol to compute the solution that minimizes CTI in a distributed fashion. Numerical results show that the proposed heuristics represent efficient and practical alternatives to the optimal solution for solving the CTI mitigation (CTIM) problem in large-scale BBN scenarios.

The ongoing evolution of wireless technologies has fostered the development of innovative network paradigms like the Internet of Things (IoT). Wireless Body Area Networks, and more specifically Body-to-Body Area Networks (BBNs), are emerging solutions for the monitoring of people's behavior and their interaction with the surrounding environment. These networks represent a key building block of the IoT paradigm. In BBNs, several transmission technologies like 802.11 and 802.15.4 that share the same unlicensed band (namely the industrial, scientific and medical (ISM) radio band) coexist, dramatically increasing the level of interference and, in turn, negatively affecting network performance. In , we investigate the Cross-Technology Interference Mitigation (CTIM) problem caused by the utilization of different transmission technologies that share the same radio spectrum, from both a centralized and a distributed point of view.

Computing deterministic performance guarantees is a defining issue for systems with hard real-time constraints, like reactive embedded systems. In , we use burst-rate constrained arrivals and rate-latency servers to deduce tight worst-case delay bounds in tandem networks under arbitrary multiplexing. We present a constructive method for computing the exact worst-case delay, which we prove to be a linear function of the burstiness and latencies; our bounds are hence symbolic in these parameters. Our algorithm runs in quadratic time in the number of servers. We also present an application of our algorithm to the case of stochastic arrivals and server capacities. For a generalization of the exponentially bounded burstiness (EBB) model, we deduce a polynomial-time algorithm for stochastic delay bounds that strictly improve the state-of-the-art separated flow analysis (SFA) type bounds.

Renewable energy sources such as wind and solar power have a high degree of unpredictability and time-variation, which makes balancing demand and supply challenging. One possible way to address this challenge is to harness the inherent flexibility in demand of many types of loads. A technique for decentralized control for automated demand response, which can be used by grid operators as an ancillary service for maintaining demand-supply balance, is introduced in . A randomized control architecture is proposed, motivated by the need for decentralized decision making and the need to avoid synchronization, which can lead to large and detrimental spikes in demand. An aggregate model for a large number of loads is then developed by examining the mean field limit. A key innovation is a linear time-invariant (LTI) system approximation of the aggregate nonlinear model, with a scalar signal as the input and a measure of the aggregate demand as the output. This makes the approximation particularly convenient for control design at the grid level.

The maximum independent set (MIS) problem is a well-studied combinatorial optimization problem that naturally arises in many applications, such as wireless communication, information theory and statistical mechanics. The MIS problem is NP-hard, so many results in the literature focus on fast generation of maximal independent sets of high cardinality. One possibility is to combine Gibbs sampling with coupling from the past arguments to detect convergence to the stationary regime. This results in a sampling procedure whose time complexity depends on the mixing time of the Glauber dynamics Markov chain. In , we propose an adaptive method for random event generation in the Glauber dynamics that considers only the events that are effective in the coupling from the past scheme, accelerating the convergence time of the Gibbs sampling algorithm.
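For reference, the underlying Glauber dynamics itself is easy to state. The sketch below (plain single-site Glauber dynamics for the hardcore model, written by us for illustration — not the adaptive method of the paper) keeps the state an independent set at every step:

```python
import random

def glauber_independent_set(adj, steps=10000, fugacity=1.0, seed=0):
    """Single-site Glauber dynamics for the hardcore model on a graph
    given by adjacency lists: pick a vertex uniformly at random and
    resample it, occupying it with probability fugacity/(1 + fugacity)
    only if none of its neighbors is occupied."""
    rng = random.Random(seed)
    n = len(adj)
    state = [0] * n                      # 0 = vacant, 1 = occupied
    p_occ = fugacity / (1.0 + fugacity)
    for _ in range(steps):
        v = rng.randrange(n)
        if rng.random() < p_occ and not any(state[u] for u in adj[v]):
            state[v] = 1
        else:
            state[v] = 0
    return state
```

The chain's stationary distribution weights each independent set proportionally to fugacity raised to its size; the cost of turning this into an exact sampler via coupling from the past is precisely what the adaptive event-generation method aims to reduce.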

In this paper, we revisit the problem of constructing a near-optimal rank

A non-backtracking walk on a graph is a directed path such that no edge is the inverse of its preceding edge. The non-backtracking matrix of a graph is indexed by its directed edges and can be used to count non-backtracking walks of a given length. It has been used recently in the context of community detection and has appeared previously in connection with the Ihara zeta function and in some generalizations of Ramanujan graphs. In , we study the largest eigenvalues of the non-backtracking matrix of the Erdos-Renyi random graph and of the Stochastic Block Model in the regime where the number of edges is proportional to the number of vertices. Our results confirm the "spectral redemption" conjecture that community detection can be made on the basis of the leading eigenvectors above the feasibility threshold.
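The definition can be made concrete in a few lines. The sketch below (our own, for illustration) builds the non-backtracking matrix B of a small undirected graph, indexed by directed edges, with B[(u,v),(x,w)] = 1 iff x = v and w != u:

```python
def non_backtracking_matrix(edges):
    """Non-backtracking matrix of an undirected graph given as a list
    of edges (u, v): rows/columns are indexed by the 2|E| directed
    edges ("darts"); an entry is 1 iff the second dart continues the
    first without reversing it."""
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {d: i for i, d in enumerate(darts)}
    m = len(darts)
    B = [[0] * m for _ in range(m)]
    for (u, v) in darts:
        for (x, w) in darts:
            if x == v and w != u:
                B[index[(u, v)]][index[(x, w)]] = 1
    return darts, B
```

On a triangle, every directed edge has exactly one non-backtracking continuation, so every row of B sums to 1; powers of B count non-backtracking walks of the corresponding length.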

In the ordinary stochastic block model, all nodes in a cluster have the same expected degree. The Degree-Corrected Stochastic Block Model (DC-SBM) is a generalization of the former in which the expected degrees of individual nodes follow a prescribed degree sequence. We consider community detection in the DC-SBM in a paper currently in preparation . We perform spectral clustering on a suitably normalized adjacency matrix. This leads to consistent recovery of the block membership of all but a vanishing fraction of nodes, in the regime where the lowest degree is of order log n.

A one-year CRE contract titled “Détermination de la distribution des conditions radio validée avec les données terrain pour les outils de dimensionnement” (Determining the distribution of the radio channel conditions, validated with field data, for network dimensioning tools) was signed in 2015 between Inria and Orange Labs. It is part of the long-term collaboration between TREC/DYOGENE and Orange Labs, represented by M. K. Karray, for the development of analytic tools for the QoS evaluation and dimensioning of operator cellular networks. Arpan Chattopadhyay was hired by Inria as a post-doctoral fellow thanks to this contract.

Social Information Networks and Privacy

Online social networks provide a new way of accessing and collectively treating information. Their efficiency is critically predicated on the quality of information provided, the ability of users to assess such quality, and to connect to like-minded users to exchange useful content.

To improve this efficiency, we develop mechanisms for assessing users’ expertise and recommending suitable content. We further develop algorithms for identifying latent user communities and recommending potential contacts to users.

Machine Learning and Big Data

Multi-Armed Bandit (MAB) problems constitute a generic benchmark model for learning to make sequential decisions under uncertainty. They capture the trade-off between exploring decisions to learn the statistical properties of the corresponding rewards, and exploiting decisions that have generated the highest rewards so far. In this project, we aim at investigating bandit problems with a large set of available decisions, with structured rewards. The project addresses bandit problems with known and unknown structure, and targets specific applications in online advertising, recommendation and ranking systems.
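As a baseline illustration of this exploration-exploitation trade-off (standard UCB1 on Bernoulli rewards, not the structured algorithms targeted by the project):

```python
import math
import random

def ucb1(arm_means, horizon=5000, seed=0):
    """UCB1 on Bernoulli arms: play each arm once, then always play the
    arm maximizing empirical mean + sqrt(2 ln t / n_pulls).  Returns the
    number of pulls of each arm, which concentrates on the best arm."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k       # pulls per arm
    sums = [0.0] * k       # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1      # initialization: play each arm once
        else:
            a = max(range(k),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[a] else 0.0
        counts[a] += 1
        sums[a] += reward
    return counts
```

The confidence bonus shrinks as an arm is sampled, so suboptimal arms are pulled only logarithmically often; exploiting known structure among the decisions, as in this project, is what allows going beyond this generic guarantee when the number of arms is large.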

Members of Dyogene participate in Research Group GeoSto
(Groupement de recherche, GdR 3477)
http://

Graphs, Algorithms and Probability - PI: Marc
Lelarge; started in Jan 2012 - 48 months. http://

Over the last few years, several research areas have witnessed important progress through the fruitful collaboration of mathematicians, theoretical physicists and computer scientists. One of them is the cavity method. Originating from the theory of mean field spin glasses, it is key to understanding the structure of Gibbs measures on diluted random graphs, which play a key role in many applications, ranging from statistical inference to optimization, coding and social sciences.

The objective of this project is to develop mathematical tools in order to contribute to a rigorous formalization of the cavity method:

From local to global, the cavity method on diluted graphs. We will study the extent to which the global properties of a random process defined on some graph are determined by the local properties of interactions on this graph. To this end, we will relate the cavity method to the analysis of the complex zeros of the partition function, an approach that also comes from statistical mechanics. This will allow us to apply new techniques to the study of random processes on large diluted graphs and associated random matrices.

Combinatorial optimization, network algorithms, statistical inference and social sciences. Motivated by combinatorial optimization problems, we will attack long-standing open questions in theoretical computer science with the new tools developed in the first project. We expect to design new distributed algorithms for communication networks and new algorithms for inference in graphical models. We will also analyze networks from an economic perspective by studying games on complex networks.

Markovian Modeling Tools and Environments - coordinator: Alain Jean-Marie (Inria Maestro); local coordinator (for partner Inria Paris-Rocquencourt): A. Bušić; started: January 2013; duration: 48 months; partners: Inria Paris-Rocquencourt (EPI DYOGENE), Inria Sophia Antipolis Méditerranée (EPI MAESTRO), Inria Grenoble Rhône-Alpes (EPI MESCAL), Université de Versailles-Saint-Quentin, Telecom SudParis, Université Paris-Est Créteil, Université Pierre et Marie Curie.

The aim of the project is to build a modeling environment dedicated to Markov models. One part will develop perfect simulation techniques, which make it possible to sample exactly from the stationary distribution of the process. A second part will develop parallelization techniques for Monte Carlo simulation. A third part will develop numerical computation techniques for a wide class of Markov models. All these developments will be integrated into a programming environment allowing the specification of models and of their solution strategies. Several applications will be studied in various scientific disciplines: physics, biology, economics and network engineering.
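As a hint of what perfect simulation involves, the following sketch implements Propp and Wilson's coupling-from-the-past for a toy monotone birth-death chain on {0, ..., 4}: when trajectories started from the minimal and maximal states at a time far enough in the past coalesce, the common value is an exact stationary sample. The chain, the constant `P_UP` and the function names are hypothetical illustrations, not part of the project's software:

```python
import random

N = 4        # state space {0, ..., N}
P_UP = 0.3   # hypothetical probability of attempting an upward move

def update(x, u):
    """Monotone random update driven by a uniform u in [0, 1)."""
    if u < P_UP:
        return min(x + 1, N)
    return max(x - 1, 0)

def cftp(seed=1):
    """Coupling from the past: returns an exact stationary sample."""
    rng = random.Random(seed)
    us = []   # randomness per past time step, reused as we look further back
    t = 1
    while True:
        while len(us) < t:          # extend the random inputs into the past
            us.append(rng.random())
        lo, hi = 0, N               # sandwiching trajectories from time -t
        for u in reversed(us[:t]):  # apply u_{-t}, ..., u_{-1} in order
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:                # coalescence => exact stationary sample
            return lo
        t *= 2                      # double the window, keep old randomness
```

The crucial point, easy to get wrong, is that the randomness attached to each past time step is fixed once drawn and reused when the window is extended; redrawing it would bias the sample.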

Title: Probabilistic Algorithms for Renewable Integration in Smart Grid

International Partner (Institution - Laboratory - Researcher):

University of Florida (United States) - Department of Electrical and Computer Engineering - Sean Meyn

Start year: 2015

See also: http://

The importance of statistical modeling and probabilistic control techniques in the power systems area is now evident to practitioners in both the U.S. and Europe. The increased introduction of renewable generation has brought unforeseen volatility to the grid, which requires new techniques in distributed and probabilistic control. This Associate Team brings together complementary skills in optimization, Markov modeling, simulation, and stochastic networks, with the aim of helping to solve some pressing open problems in this area. This collaboration also opens many exciting new scientific questions in the broad area of stochastic modeling and control.

Venkatachalam Anantharam [Professor, University of California, Jul 2015]

Bruce Hajek [Professor, CSL, from Feb 2015 until Mar 2015]

Holger Keeler [Post-Doctoral Fellow, Weierstrass Institute, Mar 2015]

Armand Makowski [Professor, University of Maryland, Jul 2015]

Peter Marbach [Professor, University of Toronto, from Jan until Jul 2015]

Piotr Markowski [PhD Student, University of Wroclaw, Jun 2015]

Sean Meyn [Professor, University of Florida, Feb 2015 and Jul 2015]

Bartek Blaszczyszyn visited the Mathematics Department of the University of Wroclaw for two weeks in April and October 2015, giving a series of lectures on stochastic geometry and the modeling of communication networks.

Bartek Blaszczyszyn and Marc Lelarge co-organized the conference Stochastic Networks and Stochastic Geometry, dedicated to François Baccelli on the occasion of his 60th birthday; http://

M. Lelarge: Co-organizer, Cargèse fall school on random graphs, with Dieter Mitsche and Pawel Pralat

M. Lelarge: Co-organizer, Workshop on Community Detection, with Laurent Massoulié

Ana Busic: Valuetools 2015 TPC co-chair.
http://

Bartek Blaszczyszyn: WiOpt/SpaSWiN 2015

Anne Bouillard was a member of the program committees of WiOpt 2015 and Valuetools 2015.

Ana Busic: ACM Sigmetrics, WiOpt, IEEE SmartGridComm.

Marc Lelarge: WEIS, WiOpt, WAW.

F. Baccelli serves on the editorial boards of Bernoulli, JAP, AAP and Questa.

M. Lelarge serves on the editorial boards of IEEE Transactions on Network Science and Engineering, Bernoulli and Queueing Systems.

F. Baccelli gave the following invited lectures: Keynote Lecture, *ISWCS'15*, Brussels, August 2015; invited lecture at the *Huawei Vision Forum, Paris*, on stochastic geometry for wireless networks, March 2015; invited lecture at the *EPFL Inria Joint Meeting, Lausanne*, on coverage in cellular networks, January 2015.

Bartek Blaszczyszyn: WiOpt/SpaSWiN 2015, Simons workshop on Stochastic Geometry and Networks, University of Texas at Austin.

A. Busic: keynote talk at CaFFEET (California France Forum on Energy Efficiency Technologies); http://

M. Lelarge: International Symposium on Mathematical Programming (ISMP), Pittsburgh (Jul.); CAp2015: Conférence sur l'apprentissage automatique, Lille (Jul.); Algotel, Beaune (Jun.); DALI 2015 - Workshop on Learning Theory, Spain (Apr.); Assemblée Générale du GdR Information Signal Image viSion (ISIS), Lyon (Apr.); Combinatorial and Algorithmic Aspects of Convexity, Paris (Jan.).

Licence: Anne Bouillard (lectures) and Ana Busic (exercise sessions), **Random Structures and Algorithms**, 80 heqTD, L3, ENS, France.

Licence: Anne Bouillard (lectures), **Information and Coding Theory**, 36 heqTD, L3, ENS, France.

Licence: Anne Bouillard (lectures), **Algorithms and Programming**, 21 heqTD, L3, ENS, France.

Licence: Anne Bouillard (exercise sessions), **Digital Systems**, 9 heqTD, L3, ENS, France.

Master: Bartek Blaszczyszyn (with Laurent Massoulié), Graduate Course on point processes, stochastic geometry and random graphs (program “Master de Sciences et Technologies”), 45h, UPMC, Paris 6, France.

Master: Bartek Blaszczyszyn (with Laurent Decreusefond), Graduate Course on Spatial Stochastic Modeling of Wireless Networks (master program “Advanced Communication Networks”), 45h, École polytechnique and Telecom ParisTech, Paris.

Master: Anne Bouillard (lectures + exercise sessions), **Foundations of Network Modeling**, 18 heqTD, M2, MPRI, France.

Master: Ana Busic and Marc Lelarge (lectures) and Rémi Varloot (exercise sessions), Network Models and Algorithms, 50 heqTD, M1, ENS, Paris, France.

Master: Ana Busic (lectures), Simulation, 9h, M2 AMIS, UVSQ, France.

Master: Marc Lelarge (Cours), Algorithms for Networked Information, 27heqTD, M2 ACN, Ecole polytechnique

Master: Marc Lelarge (TD), Networks: distributed control and emerging phenomena, 36 heqTD, M1, Ecole polytechnique.

HdR: Marc Lelarge, Topics in random graphs, combinatorial optimization, and statistical inference, ENS, February 23, 2015.

PhD: Miodrag Jovanovic, Evaluation and optimization of the quality perceived by mobile users for new services in cellular networks, started in January 2012, defended in 2015; advisor: B. Blaszczyszyn, co-advisor: M. Karray.

PhD in progress : Kumar Gaurav, Convex comparison of network architectures, started in October 2011, advisor B. Blaszczyszyn;

PhD in progress : Christelle Rovetta, Applications of perfect sampling to queuing networks and random generation of combinatorial objects, from December 2013, co-advised by Anne Bouillard and Ana Busic;

PhD in progress : Umar Hashmi, Decentralized control for renewable integration in smartgrids, from December 2015, advisors: A. Busic and M. Lelarge;

PhD in progress: Lennart Gulikers, Spectral clustering, since December 2014, advisors: Marc Lelarge and Laurent Massoulié.

PhD in progress: Rémi Varloot, Network Formation Dynamics, since February 2015, advisors: Marc Lelarge and Laurent Massoulié.

PhD in progress: Alexandre Hollocou, Local community detection, since December 2015, advisors: Thomas Bonald and Marc Lelarge.

A. Busic: Examiner for PhD: Pierre-Antoine BRAMERET (ENS Cachan) 2015.

A. Busic: member of the CR2 recruitment committee, Inria Saclay - Île-de-France.

A. Busic: member of the Scientific Employment Commission of the Inria Paris-Rocquencourt research center.

A. Busic: member of the Technological Development Commission (CDT) of Inria Paris-Rocquencourt.

M. Lelarge: External Examiner for PhD: Jean Barbier (ENS) and Hang Zhou (ENS)

M. Lelarge: member of the hiring committee for a “maître de conférences” position in Probability, Université Lyon 1.