A large number of real-world structures and phenomena can be described by networks: separable elements with connections between certain pairs of them. Among such networks, the best known and most studied in computer science is the Internet. Moreover, the Internet (as the underlying physical network) itself gives rise to many new networks: networks of hyperlinks, Internet-based social networks, distributed databases, codes on graphs, local interactions of wireless devices. These huge networks pose exciting challenges for mathematicians, and the mathematical theory of networks faces novel, unconventional problems. For example, very large networks cannot be completely known, and data about them can be collected only by indirect means such as random local sampling or monitoring the behavior of various aggregated quantities.
The scientific focus of DYOGENE is on geometric network dynamics arising in communications. By geometric networks we understand networks with a nontrivial, discrete or continuous, geometric definition of the existence of links between the nodes. In stochastic geometric networks, this definition leads to random graphs or stochastic geometric models. A first type of geometric network dynamics is that where the nodes or the links change over time according to an exogenous dynamics (e.g. node motion and geometric definition of the links). We will refer to this as dynamics of geometric networks below. A second type is that where links and/or nodes are fixed but harbor local dynamical systems (in our case, stemming from e.g. information theory, queuing theory, social and economic sciences). This will be called dynamics on geometric networks. A third type is that where the dynamics of the network geometry and the local dynamics interplay. Our motivations for studying these systems stem from many fields of communications where they play a central role, in particular: message passing algorithms; epidemic algorithms; wireless networks and information theory; device-to-device networking; distributed content delivery; social and economic networks.
Network calculus is a theory for obtaining deterministic upper bounds on delays and backlogs in networks, originally developed by R. Cruz. From the modelling point of view, it is an algebra for computing and propagating constraints given in terms of envelopes. A flow is represented by its cumulative function, i.e. the amount of data it has sent up to a given time. The operations used for this are an adaptation of filtering theory to the (min, +) semiring.
We investigate the complexity of computing exact worst-case performance bounds in network calculus and develop algorithms that offer a good trade-off between algorithmic efficiency and accuracy of the bounds.
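As an illustration, the basic (min, +) operations behind such envelope computations can be sketched in a few lines. The token-bucket envelope and rate-latency service curve below, and all their parameters, are illustrative choices, not taken from the project's work:

```python
def min_plus_conv(f, g, t):
    # (f ⊗ g)(t) = min over 0 <= s <= t of f(s) + g(t - s), on an integer grid
    return min(f(s) + g(t - s) for s in range(t + 1))

def alpha(t):
    # token-bucket arrival envelope with burst b = 5 and rate r = 2 (illustrative)
    return 5 + 2 * t

def beta(t):
    # rate-latency service curve with rate R = 4 and latency T = 3 (illustrative)
    return max(0, 4 * (t - 3))

# A deterministic backlog bound is the maximal vertical deviation between the
# arrival envelope and the service curve: sup_t (alpha(t) - beta(t)).
backlog_bound = max(alpha(t) - beta(t) for t in range(50))
print(backlog_bound)  # 11 = b + r*T, since r <= R
```

The min-plus convolution `min_plus_conv` is also how the output envelope of a flow after a network element is propagated.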
Simulation approaches can be used to efficiently estimate the stationary behavior of Markov chains by providing independent samples distributed according to their stationary distribution, even when it is impossible to compute this distribution numerically.
The classical Markov chain Monte Carlo simulation techniques suffer from two main problems: the samples remain biased by the initial condition, and there is no general stopping criterion guaranteeing that the chain has run long enough to be close to stationarity.
To overcome these issues, Propp and Wilson introduced a perfect sampling algorithm (PSA) that has later been extended and applied in various contexts, including statistical physics, stochastic geometry, theoretical computer science, and communications networks (see also the annotated bibliography by Wilson).
Perfect sampling uses coupling arguments to produce an unbiased sample from the stationary distribution of an ergodic Markov chain on a finite state space. The algorithm is based on a backward coupling scheme: it computes the trajectories issued from all possible states at some time in the past and checks whether they have coalesced into a single trajectory by time 0.
Any ergodic Markov chain on a finite state space admits a stochastic recursive representation of this kind that couples in finite time with probability 1, so Propp and Wilson's PSA is “perfect” in the sense that it provides an unbiased sample in finite time. Furthermore, the stopping criterion is given by the coupling-from-the-past scheme itself, and explicit bounds on the coupling time are not needed for the validity of the algorithm.
However, from the computational side, PSA is efficient only under some monotonicity assumptions that allow reducing the set of trajectories considered in the coupling-from-the-past procedure to those issued from the extremal initial conditions. Our goal is to propose new algorithms that remove this restriction by exploiting semantic and geometric properties of the event space and the state space.
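To make the monotone case concrete, here is a minimal sketch of coupling from the past on a toy monotone chain, a reflected random walk on {0,…,n}; the chain and its update rule are illustrative assumptions, not a model from the project:

```python
import random

def update(x, u, n):
    # Monotone update rule on {0,...,n}: step up if u >= 0.5, down otherwise,
    # reflecting at the boundaries. x <= y implies update(x,u) <= update(y,u).
    return min(n, x + 1) if u >= 0.5 else max(0, x - 1)

def cftp(n=4, seed=1):
    # Propp-Wilson: run the minimal and maximal chains from times -T, -2T, ...
    # with the SAME randomness until they coalesce at time 0.
    rng = random.Random(seed)
    us = []                     # us[k] holds the innovation U_{-(k+1)}
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, n           # extremal initial states at time -T
        for k in range(T - 1, -1, -1):      # apply U_{-T}, ..., U_{-1}
            lo = update(lo, us[k], n)
            hi = update(hi, us[k], n)
        if lo == hi:
            return lo           # exact sample from the stationary distribution
        T *= 2
```

Monotonicity is what allows tracking only the two extremal trajectories instead of all n + 1.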
Stochastic geometry is a rich branch of applied probability which allows one to quantify random phenomena on the plane or in higher dimension. It is intrinsically related to the theory of point processes. Initially its development was stimulated by applications to biology, astronomy and material sciences. Nowadays it is also widely used in image analysis. It provides a way of estimating and computing “spatial averages”. A typical example, with obvious communication implications, is the so called Boolean model, which is defined as the union of discs with random radii (communication ranges) centered at the points of a Poisson point process (user locations) of the Euclidean plane (e.g., a city). A first typical question is that of the prediction of the fraction of the plane which is covered by this union (statistics of coverage). A second one is whether this union has an infinite component or not (connectivity). Further classical models include shot noise processes and random tessellations. Our research consists of analyzing these models with the aim of better understanding wireless communication networks in order to predict and control various network performance metrics. The models require using techniques from stochastic geometry and related fields including point processes, spatial statistics, geometric probability, percolation theory.
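The coverage statistics of the Boolean model can be checked numerically. The following sketch compares a Monte Carlo estimate of the covered fraction with the closed-form value 1 − exp(−λπR²) for fixed-radius discs; all parameters are illustrative:

```python
import math
import random

def sample_poisson(mean, rng):
    # Poisson count via exponential inter-arrival times (avoids underflow
    # of the naive product-of-uniforms method for large means).
    n, t = 0, rng.expovariate(1.0)
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    return n

def coverage_fraction(lam=0.5, radius=0.6, window=30.0, probes=4000, seed=7):
    # Boolean model: discs of fixed radius centered at a Poisson point process;
    # the points are drawn on an enlarged window to avoid edge effects.
    rng = random.Random(seed)
    pad = radius
    side = window + 2 * pad
    n = sample_poisson(lam * side * side, rng)
    pts = [(rng.uniform(-pad, window + pad), rng.uniform(-pad, window + pad))
           for _ in range(n)]
    r2 = radius * radius
    covered = 0
    for _ in range(probes):
        x, y = rng.uniform(0, window), rng.uniform(0, window)
        if any((x - px) ** 2 + (y - py) ** 2 <= r2 for px, py in pts):
            covered += 1
    return covered / probes

est = coverage_fraction()
exact = 1 - math.exp(-0.5 * math.pi * 0.6 ** 2)   # 1 - exp(-lam * pi * R^2)
```

The agreement between `est` and `exact` illustrates the “spatial average” computations mentioned above.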
Classical models of stochastic geometry (SG) are not sufficient for analyzing wireless networks as they ignore the specific nature of radio channels.
Consider a wireless communication network made of a collection of nodes which in turn can be transmitters or receivers. At a given time, some subset of this collection of nodes simultaneously transmit, each toward its own receiver. Each transmitter–receiver pair in this snapshot requires its own wireless link. For each such wireless link, the power of the signal received from the link transmitter is jammed by the powers of the signals received from the other transmitters. Even in the simplest model where the power radiated from a point decays in some isotropic way with Euclidean distance, the geometry of the location of nodes plays a key role within this setting since it determines the signal to interference and noise ratio (SINR) at the receiver of each such link and hence the possibility of establishing simultaneously this collection of links at a given bit rate, as shown by information theory (IT). In this definition, the interference seen by some receiver is the sum of the powers of the signals received from all transmitters excepting its own. The SINR field, which is of an essentially geometric nature, hence determines the connectivity and the capacity of the network in a broad sense. The essential point here is that the characteristics and even the feasibilities of the radio links that are simultaneously active are strongly interdependent and determined by the geometry. Our work is centered on the development of an IT-aware stochastic geometry addressing this interdependence.
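A minimal sketch of the SINR computation described above, under an assumed isotropic power-law path loss with exponent β; the node placements and every parameter are illustrative assumptions:

```python
import random

def sinr_per_link(n_links=5, noise=1e-4, beta=4.0, seed=3):
    # n_links transmitter-receiver pairs dropped in the unit square; each
    # receiver sits near its transmitter. Received power decays as d^{-beta}.
    rng = random.Random(seed)
    tx = [(rng.random(), rng.random()) for _ in range(n_links)]
    rx = [(x + rng.uniform(-0.1, 0.1), y + rng.uniform(-0.1, 0.1))
          for x, y in tx]

    def power(a, b):
        d2 = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        return d2 ** (-beta / 2)          # unit transmit power, d^{-beta} loss

    sinrs = []
    for i, r in enumerate(rx):
        signal = power(tx[i], r)
        # interference = sum of powers from all other transmitters
        interference = sum(power(tx[j], r) for j in range(n_links) if j != i)
        sinrs.append(signal / (noise + interference))
    return sinrs
```

Note how each link's SINR depends on the positions of all transmitters at once, which is precisely the interdependence through geometry discussed above.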
The cavity method combined with geometric network concepts has recently led to spectacular progress in digital communications through error-correcting codes. More than fifty years after Shannon's theorems, some coding schemes like turbo codes and low-density parity-check codes (LDPC) now approach the limits predicted by information theory. One of the main ingredients of these schemes is the message-passing decoding strategies originally conceived by Gallager, which can be seen as direct applications of the cavity method on a random bipartite graph (with two types of nodes representing information symbols and parity-check symbols).
Modern coding theory is only one example of application of the cavity method. The concepts and techniques developed for its understanding have applications in theoretical computer science and in a rich class of complex systems, in the fields of networking, economics and social sciences. The cavity method can be used both for the analysis of randomized algorithms and for the study of random ensembles of computational problems representative of real-world situations. In order to analyze the performance of algorithms, one generally defines a family of instances and endows it with a probability measure, in the same way as one defines a family of samples in the case of spin glasses or LDPC codes. The discovery that the hardest-to-solve instances, with all existing algorithms, lie close to a phase transition boundary has spurred a lot of interest. Theoretical physicists suggest that the reason is a structural one, namely a change in the geometry of the set of solutions related to replica symmetry breaking in the cavity method. Phase transitions, which lie at the core of statistical physics, also play a key role in computer science, signal processing and social sciences. Their analysis is a major challenge that may have a strong impact on the design of related algorithms.
We develop mathematical tools in the theory of discrete probabilities and theoretical computer science in order to contribute to a rigorous formalization of the cavity method, with applications to network algorithms, statistical inference, and at the interface between computer science and economics (EconCS).
Sparse graph structures are useful in a number of information processing tasks where the computational problem can be described as follows: infer the values of a large collection of random variables, given a set of constraints or observations that induce relations among them. Similar design ideas have been proposed in sensing and signal processing and have applications in coding, network measurements, group testing and multi-user detection. While the computational problem is generally hard, sparse graphical structures lead to low-complexity algorithms that are very effective in practice. We develop tools in order to contribute to a precise analysis of these algorithms and of their gap to optimal inference, which remains a largely open problem.
A second line of activities concerns the design of protocols and algorithms enabling a transmitter to learn its environment (the statistical properties of the channel to its receiver, as well as the behaviour of interfering neighbouring transmitters) so as to optimise its transmission strategy and to share radio resources fairly and efficiently. This objective calls for the development and use of machine learning techniques (e.g. bandit optimisation).
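As a sketch of the bandit viewpoint, the following snippet runs the standard UCB1 index policy over a few hypothetical channels with unknown Bernoulli success rates; the channel model and all parameters are assumptions for illustration only:

```python
import math
import random

def ucb_channel_selection(means, horizon=5000, seed=0):
    # UCB1: pick the channel maximising empirical mean + exploration bonus.
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    rewards = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                       # initialisation: try each channel once
        else:
            arm = max(range(k),
                      key=lambda a: rewards[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        if rng.random() < means[arm]:         # Bernoulli "transmission success"
            rewards[arm] += 1.0
    return counts

counts = ucb_channel_selection([0.2, 0.5, 0.8])   # hypothetical success rates
```

Over time the policy concentrates its plays on the best channel while still occasionally probing the others.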
Critical real-time embedded systems (cars, aircraft, spacecraft) are nowadays made up of multiple computers communicating with each other. The real-time constraints typically associated with operating systems now extend to the networks of communication between sensors/actuators and computers, and between the computers themselves. Once a medium is shared, the time between sending and receiving a message depends not only on technological constraints, but mainly on the interactions between the different streams of data sharing the medium. It is therefore necessary to have techniques that guarantee maximum network delays, in addition to local scheduling constraints, to ensure a correct global real-time behaviour of distributed applications/functions.
Moreover, pessimistic estimates may lead to an overdimensioning of the network, which involves extra weight and power consumption. In addition, these techniques must be scalable: in a modern aircraft, thousands of data streams share the network backbone, so algorithm complexity should be at most polynomial.
Wireless networks can be efficiently modelled as dynamic stochastic geometric networks. Their analysis requires taking into account, in addition to their geometric structure, the specific nature of radio channels and their statistical properties which are often unknown a priori, as well as the interaction through interference of the various individual point-to-point links.
The amount of multimedia traffic accessed via the Internet is already of the order of exabytes.
Networks are ubiquitous, with different kinds of social, economic and information networks all around us. The Internet is one of the most prominent examples of a geometric network. We also examine geometric networks from the perspective of sociologists and economists. Network analysis is also attracting fundamental research by computer scientists. Diffusion of information, social influence, trust, communication and cooperation between agents are heavily researched topics in e-commerce and multi-agent systems. Our probabilistic techniques are very appropriate in this context and have been largely neglected so far. While the first works on geometric networks emanated from theoretical physicists, these works remain focused on static properties of such networks and do not consider game-theoretic or statistical learning (like community detection) aspects of such networks. This leaves open a range of new problems to which we will contribute.
Routing protocols enable the maintenance of paths for transmitting messages over a network. These protocols, such as OSPF, are based on the transmission of periodic messages between neighbors. Nowadays, faulty behaviors result in the raising of alarms, but are mostly detected only when a breakdown or a major misbehavior occurs. Indeed, alarms are so numerous that they cannot be analyzed efficiently. We aim at developing methods to detect misbehaviors of a router before a major fault occurs, and techniques to study the influence of the protocol parameters on the behavior of the network.
Clones is a Matlab toolbox for exact sampling of closed queueing networks.
Details can be found in the companion tool paper, which received the best tool-paper award at Valuetools 2014.
Available at: http://
F. Baccelli received the 2014 IEEE Communications Society Stephen O. Rice Prize in the Field of Communications Theory:
http://
F. Baccelli received the 2014 IEEE Communications Society Leonard G. Abraham Prize in the Field of Communications Systems:
http://
F. Baccelli received the 2014 ACM Sigmetrics Achievement Award.
F. Simatos received the 2014 ACM SIGMETRICS Rising Star Researcher Award.
P. Brémaud published the book "Fourier Analysis and Stochastic Processes", Universitext series, Springer, Sept. 2014, 385 pages.
PhD student C. Rovetta received the best tool-paper award at Valuetools 2014.
With I. Norros (VTT Finland) and F. Mathieu (Bell Labs France), F. Baccelli has continued the line of thought on the geometry of peer-to-peer systems that was initiated in their Infocom 13 paper. This type of dynamics leads to a class of spatial birth-and-death processes on the Euclidean space where the birth rate is constant and the death rate of a given point is the shot noise created at its location by the other points of the current configuration, for some response function.
With A. Giovanidis, IMT, F. Baccelli has studied a cooperation model where the positions of base stations follow a Poisson point process distribution and where Voronoi cells define the planar areas associated with them. For the service of each user, either one or two base stations are involved. If two, these cooperate by exchange of user data and reduced channel information (channel phase, second neighbour interference) with conferencing over some backhaul link. The total user transmission power is split between them and a common message is encoded, which is coherently transmitted by the stations. The decision for a user to choose service with or without cooperation is directed by a family of geometric policies. The suggested policies further control the shape of coverage contours in favor of cell-edge areas. Analytic expressions based on stochastic geometry are derived for the coverage probability in the network. Their numerical evaluation shows benefits from cooperation, which are enhanced when Dirty Paper Coding is applied to eliminate the second neighbour interference.
With C. Singh (IIT), F. Baccelli and B. Blaszczyszyn worked on combining adaptive protocol design, utility maximization and stochastic geometry. The focus was on a spatial adaptation of Aloha within the framework of ad hoc networks. Quasi-static networks are considered, in which mobiles learn the local topology and incorporate this information to adapt their medium access probability (MAP) selection to their local environment. The cases where nodes cooperate in a distributed way to maximize the global throughput or to achieve either proportional fair or max-min fair medium access were considered. The proportionally fair sharing case leads to closed-form performance expressions in two extreme cases: (1) the case without topology information, where the analysis boils down to a parametric optimization problem leveraging stochastic geometry; (2) the case with full network topology information, which was recently solved using shot-noise techniques. It was shown that there exists a continuum of adaptive controls between these two extremes, based on local stopping sets, which can also be analyzed in closed form. These control schemes are implementable, in contrast to the full information case which is not. As local information increases, the performance levels of these schemes are shown to get arbitrarily close to those of the full information scheme. The analytical results are combined with discrete event simulation to provide a detailed evaluation of the performance of this class of medium access controls.
We present a new stochastic service model with capacity sharing and interruptions, appropriate for the evaluation of the quality of real-time streaming (e.g. mobile TV) in wireless cellular networks. It takes into account a multi-class Markovian process of call arrivals (to capture different radio channel conditions, requested streaming bit-rates and call durations) and allows for a general resource allocation policy saying which users are temporarily denied the requested fixed streaming bit-rates (put in outage) due to resource constraints. We develop general expressions for the performance characteristics of this model, including the mean outage duration and the mean number of outage incidents for a typical user of a given class, involving only the steady state of the traffic demand. We propose also a natural class of least-effort-served-first resource allocation policies, which cope with optimality and fairness issues known in wireless networks, and whose performance metrics can be easily calculated using Fourier analysis of Poisson variables. We specify and use our model to analyze the quality of real-time streaming in 3GPP Long Term Evolution (LTE) cellular networks. Our results can be used for the dimensioning of these networks.
In , we propose a new comparison tool for spatial homogeneity of point processes, based on the joint examination of void probabilities and factorial moment measures. We prove that determinantal and permanental processes, as well as, more generally, negatively and positively associated point processes are comparable in this sense to the Poisson point process of the same mean measure. We provide some motivating results on percolation and coverage processes, and preview further ones on other stochastic geometric models, such as minimal spanning forests, Lilypond growth models, and random simplicial complexes, showing that the new tool is relevant for a systemic approach to the study of macroscopic properties of non-Poisson point processes. This new comparison is also implied by the directionally convex ordering of point processes, which has already been shown to be relevant to the comparison of the spatial homogeneity of point processes. For this latter ordering, using a notion of lattice perturbation, we provide a large monotone spectrum of comparable point processes, ranging from periodic grids to Cox processes, and encompassing Poisson point processes as well. They are intended to serve as a platform for further theoretical and numerical studies of clustering, as well as simple models of random point patterns to be used in applications where neither complete regularity nor the total independence property are realistic assumptions.
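The role of void probabilities as a comparison characteristic can be illustrated numerically. For a Poisson point process, the void probability of a set B is exactly exp(−λ|B|), which the following sketch checks by simulation; all parameters are illustrative:

```python
import math
import random

def empirical_void_probability(lam=1.0, side=1.0, trials=20000, seed=1):
    # The number of points of a Poisson process of intensity lam falling in a
    # square of side `side` is Poisson(lam * side^2). The square is empty
    # exactly when the first arrival of a rate-1 Poisson process on the line
    # occurs after lam * side^2, so one exponential draw per trial suffices.
    rng = random.Random(seed)
    mean = lam * side * side
    empty = sum(1 for _ in range(trials) if rng.expovariate(1.0) > mean)
    return empty / trials

est = empirical_void_probability()
exact = math.exp(-1.0)            # exp(-lam * |B|) for the Poisson process
```

For the sub- and super-Poisson processes discussed above, this void probability would be respectively larger or smaller than the Poisson value at equal mean measure.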
Stochastic geometry models of wireless networks based on Poisson point processes are increasingly being developed with a focus on studying various signal-to-interference-plus-noise ratio (SINR) values. In , we show that the SINR values experienced by a typical user with respect to different base stations of a Poissonian cellular network are related to a specific instance of the so-called two-parameter Poisson-Dirichlet process. This process has many interesting properties as well as applications in various fields. We give examples of several results proved for this process that are of immediate or potential interest in the development of analytic tools for cellular networks. Some of them simplify or are akin to certain results that are being developed in the network literature. By doing this we hope to motivate further research and use of Poisson-Dirichlet processes in this new setting.
In , we assume a space-time Poisson process of call arrivals on the infinite plane, independently marked by data volumes and served by a cellular network modeled by an infinite ergodic point process of base stations. Each point of this point process represents the location of a base station that applies a processor sharing policy to serve users arriving in its vicinity, modeled by the Voronoi cell, possibly perturbed by some random signal propagation effects. User service rates depend on their signal-to-interference-and-noise ratios with respect to the serving station. Little's law allows one to express the mean user throughput in any region of this network model as the ratio of the mean traffic demand to the steady-state mean number of users in this region. Using ergodic arguments and the Palm theoretic formalism, we define a global mean user throughput in the cellular network and prove that it is equal to the ratio of mean traffic demand to the mean number of users in the steady state of the “typical cell” of the network. Here, both means account for double averaging: over time and network geometry, and can be related to the per-surface traffic demand, base-station density and the spatial distribution of the signal-to-interference-and-noise ratio. This latter accounts for network irregularities, shadowing and cell dependence via some cell-load equations. Inspired by the analysis of the typical cell, we propose also a simpler, approximate, but fully analytic approach, called the mean cell approach. The key quantity explicitly calculated in this approach is the cell load. In analogy to the load factor of the (classical) M/G/1 processor sharing queue, it characterizes the stability condition, mean number of users and the mean user throughput. We validate our approach by comparing analytical and simulation results for the Poisson network model to real-network measurements.
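The Little's-law step can be illustrated on the simplest possible example, an M/M/1 queue simulated via the Lindley recursion; parameters are illustrative and the model is far simpler than the cellular setting above:

```python
import random

def mm1_mean_sojourn(lam=0.7, mu=1.0, n_jobs=200000, seed=4):
    # FIFO M/M/1 queue via the Lindley recursion for waiting times:
    # Wq_{n+1} = max(0, Wq_n + S_n - A_{n+1}). The mean sojourn time should
    # approach 1 / (mu - lam).
    rng = random.Random(seed)
    wq, total = 0.0, 0.0
    for _ in range(n_jobs):
        service = rng.expovariate(mu)
        total += wq + service                  # sojourn time of this job
        inter = rng.expovariate(lam)
        wq = max(0.0, wq + service - inter)    # waiting time of the next job
    return total / n_jobs

mean_sojourn = mm1_mean_sojourn()
mean_users = 0.7 * mean_sojourn   # Little's law: L = lambda * W
```

Here `mean_users` approaches ρ/(1 − ρ) = 7/3, the classical mean number in system, illustrating the throughput-to-occupancy ratio used in the cellular analysis.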
In , we present a diffusion model developed by enriching the generalized random graph (a.k.a. configuration model), motivated by the phenomenon of viral marketing in social networks. The main results on this model are rigorously proved in [3], and in this paper we focus on applications. Specifically, we consider random networks having Poisson and Power Law degree distributions where the nodes are assumed to have varying attitudes towards influence propagation, which we encode in the model by their transmitter degrees. We link a condition involving total degree and transmitter degree distributions to the effectiveness of a marketing campaign. This suggests a novel approach to decision-making by a firm in the context of viral marketing which does not depend on the detailed information of the network structure.
Mobile network operators observe a significant disparity of quality of service (QoS) and network performance metrics, such as the mean user throughput, the mean number of users and the cell load, over different network base stations. The principal reason is that real networks are never perfectly hexagonal, base stations are subject to different radio conditions, and may have different engineering parameters. In , we propose a model that takes into account these network irregularities in a probabilistic manner, in particular assuming Poisson spatial location of base stations, lognormal shadowing and random transmission powers. Performance of base stations is modeled by spatial processor sharing queues, which are made dependent on each other via a system of load equations. In order to validate our approach, we estimate all the model parameters from the data collected in a commercial network, solve it and compare the spatial variability of the QoS and performance metrics in the model to the real network performance metrics. Considering two scenarios, the downtown of a big city and a mid-size city, we show that our model predicts the network performance well.
In , we review some examples, methods, and recent results involving comparison of clustering properties of point processes. Our approach is founded on some basic observations allowing us to consider void probabilities and moment measures as two complementary tools for capturing clustering phenomena in point processes. As might be expected, smaller values of these characteristics indicate less clustering. Also, various global and local functionals of random geometric models driven by point processes admit more or less explicit bounds involving void probabilities and moment measures, thus aiding the study of impact of clustering of the underlying point process. When stronger tools are needed, directional convex ordering of point processes happens to be an appropriate choice, as well as the notion of (positive or negative) association, when comparison to the Poisson point process is considered. We explain the relations between these tools and provide examples of point processes admitting them. Furthermore, we sketch some recent results obtained using the aforementioned comparison tools, regarding percolation and coverage properties of the germ-grain model, the SINR model, subgraph counts in random geometric graphs, and more generally, U-statistics of point processes. We also mention some results on Betti numbers for Čech and Vietoris-Rips random complexes generated by stationary point processes. A general observation is that many of the results derived previously for the Poisson point process generalise to some “sub-Poisson” processes, defined as those clustering less than the Poisson process in the sense of void probabilities and moment measures, negative association or dcx-ordering.
Motivated by the analysis of social networks, we study a model of random networks that has both a given degree distribution and a tunable clustering coefficient. We consider two types of growth processes on these graphs: diffusion and symmetric threshold model. The diffusion process is inspired from epidemic models. It is characterized by an infection probability, each neighbor transmitting the epidemic independently. In the symmetric threshold process, the interactions are still local but the propagation rule is governed by a threshold (that might vary among the different nodes). An interesting example of symmetric threshold process is the contagion process, which is inspired by a simple coordination game played on the network. Both types of processes have been used to model spread of new ideas, technologies, viruses or worms and results have been obtained for random graphs with no clustering. In , we are able to analyze the impact of clustering on the growth processes. While clustering inhibits the diffusion process, its impact for the contagion process is more subtle and depends on the connectivity of the graph: in a low connectivity regime, clustering also inhibits the contagion, while in a high connectivity regime, clustering favors the appearance of global cascades but reduces their size. For both diffusion and symmetric threshold models, we characterize conditions under which global cascades are possible and compute their size explicitly, as a function of the degree distribution and the clustering coefficient. Our results are applied to regular or power-law graphs with exponential cutoff and shed new light on the impact of clustering.
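A minimal simulation of the diffusion process on a configuration-model graph; here we use a 3-regular degree sequence and a fixed transmission probability, illustrative choices without the tunable clustering of the model above:

```python
import random

def configuration_graph(degrees, rng):
    # Configuration model: pair half-edges ("stubs") uniformly at random.
    # Self-loops and multi-edges are possible but rare for bounded degrees.
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = {v: [] for v in range(len(degrees))}
    for i in range(0, len(stubs) - 1, 2):
        a, b = stubs[i], stubs[i + 1]
        adj[a].append(b)
        adj[b].append(a)
    return adj

def diffusion_size(adj, p, seed_node, rng):
    # SIR-type diffusion: each edge transmits independently with probability p.
    infected = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for v in frontier:
            for w in adj[v]:
                if w not in infected and rng.random() < p:
                    infected.add(w)
                    nxt.append(w)
        frontier = nxt
    return len(infected)

rng = random.Random(5)
n = 2000
adj = configuration_graph([3] * n, rng)       # 3-regular degree sequence
cascade = diffusion_size(adj, p=0.95, seed_node=0, rng=rng)
```

With p well above the percolation threshold 1/2 of the 3-regular graph, the cascade reaches a macroscopic fraction of the nodes, which is the global-cascade regime analysed above.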
The classical setting of community detection consists of networks exhibiting a clustered structure. To more accurately model real systems we consider a class of networks (i) whose edges may carry labels and (ii) which may lack a clustered structure. Specifically we assume that nodes possess latent attributes drawn from a general compact space and edges between two nodes are randomly generated and labeled according to some unknown distribution as a function of their latent attributes. Our goal is then to infer the edge label distributions from a partially observed network. In , we propose a computationally efficient spectral algorithm and show it allows for asymptotically correct inference when the average node degree could be as low as logarithmic in the total number of nodes. Conversely, if the average node degree is below a specific constant threshold, we show that no algorithm can achieve better inference than guessing without using the observations. As a byproduct of our analysis, we show that our model provides a general procedure to construct random graph models with a spectrum asymptotic to a pre-specified eigenvalue distribution such as a power-law distribution.
Balanced edge partition has emerged as a new approach to partitioning input graph data for the purpose of scaling out parallel computations, which is of interest for several modern data analytics computation platforms, including platforms for iterative computations, machine learning problems, and graph databases. This new approach stands in stark contrast to the traditional approach of balanced vertex partition, where, for a given number of partitions, the problem is to minimize the number of edges cut subject to balancing the vertex cardinality of the partitions.
In , we first characterize the expected costs of vertex and edge partitions with and without aggregation of messages, for the commonly deployed policy of placing a vertex or an edge uniformly at random into one of the partitions. We then obtain the first approximation algorithms for the balanced edge-partition problem, which for the case of no aggregation matches the best known approximation ratio for the balanced vertex-partition problem, and show that this remains true for the case with aggregation up to a factor equal to the maximum in-degree of a vertex. We report results of an extensive empirical evaluation on a set of real-world graphs, which quantifies the benefits of edge- vs. vertex-partition, and demonstrates the efficiency of natural greedy online assignments for the balanced edge-partition problem with and without aggregation.
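One simple greedy online rule for edge partitioning, in the spirit of, but not identical to, the heuristics evaluated in the paper; the placement rule and test graph below are illustrative assumptions:

```python
import random
from collections import defaultdict

def greedy_edge_partition(edges, k):
    # Online greedy: place each edge on the least-loaded partition among those
    # already holding a replica of one of its endpoints (any partition if
    # none), limiting vertex replication while balancing edge counts.
    loads = [0] * k
    replicas = defaultdict(set)          # vertex -> partitions holding a copy
    assignment = []
    for u, v in edges:
        candidates = replicas[u] | replicas[v]
        if not candidates:
            candidates = set(range(k))
        p = min(candidates, key=lambda i: loads[i])
        loads[p] += 1
        replicas[u].add(p)
        replicas[v].add(p)
        assignment.append(p)
    return assignment, loads, replicas

rng = random.Random(0)
edges = []
while len(edges) < 500:                  # random multigraph on 100 vertices
    u, v = rng.randrange(100), rng.randrange(100)
    if u != v:
        edges.append((u, v))
assignment, loads, replicas = greedy_edge_partition(edges, 4)
```

The quantity to monitor is the average replication factor (mean number of partitions per vertex), which plays the role that edge cut plays in vertex partitioning.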
In , we consider sparse networks consisting of a finite number of non-overlapping communities, i.e. disjoint clusters, so that there is higher density within clusters than across clusters. Both the intra- and inter-cluster edge densities vanish when the size of the graph grows large, making the cluster reconstruction problem noisier and hence difficult to solve. We are interested in scenarios where the network size is very large, so that the adjacency matrix of the graph is hard to manipulate and store. The data stream model, in which columns of the adjacency matrix are revealed sequentially, constitutes a natural framework in this setting. For this model, we develop two novel clustering algorithms that extract the clusters asymptotically accurately. The first algorithm is offline, as it needs to store and keep the assignments of nodes to clusters, and requires a memory that scales linearly with the network size. The second algorithm is online, as it may classify a node when the corresponding column is revealed and then discard this information. This algorithm requires a memory growing sub-linearly with the network size. To construct these efficient streaming memory-limited clustering algorithms, we first address the problem of clustering with partial information, where only a small proportion of the columns of the adjacency matrix is observed, and develop, for this setting, a new spectral algorithm which is of independent interest.
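As a toy version of spectral clustering from an adjacency matrix, the following pure-Python sketch recovers two planted communities via power iteration; it is a batch algorithm on a small graph, not the streaming memory-limited setting of the paper, and all parameters are illustrative:

```python
import random

def sbm_spectral_split(n=200, p_in=0.2, p_out=0.02, iters=150, seed=2):
    # Two equal planted communities; the sign pattern of the second
    # eigenvector of the adjacency matrix reveals the split.
    rng = random.Random(seed)
    truth = [0] * (n // 2) + [1] * (n // 2)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if truth[i] == truth[j] else p_out
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)

    def matvec(x):
        return [sum(x[j] for j in adj[i]) for i in range(n)]

    def normalize(x):
        s = sum(v * v for v in x) ** 0.5 or 1.0
        return [v / s for v in x]

    # leading eigenvector by power iteration
    v1 = normalize([rng.random() for _ in range(n)])
    for _ in range(iters):
        v1 = normalize(matvec(v1))
    # second eigenvector by power iteration with deflation (project out v1)
    v2 = normalize([rng.random() - 0.5 for _ in range(n)])
    for _ in range(iters):
        y = matvec(v2)
        dot = sum(a * b for a, b in zip(y, v1))
        v2 = normalize([a - dot * b for a, b in zip(y, v1)])

    guess = [1 if v > 0 else 0 for v in v2]
    acc = sum(g == t for g, t in zip(guess, truth)) / n
    return max(acc, 1 - acc)   # labels are recovered up to a global flip
```

The two expected adjacency eigenvalues here are roughly (n/2)(p_in + p_out) and (n/2)(p_in − p_out), well separated from the noise bulk, which is why this dense-enough regime is easy compared to the vanishing-density regime studied in the paper.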
We study a multistage epidemic model that generalizes the SIR model, in which infected individuals go through
In , we investigate coupling from the past (CFTP) algorithms for closed queueing networks. The stationary distribution has a product form only in a very limited number of particular cases when queue capacity is finite, and numerical algorithms are intractable due to the cardinality of the state space. Moreover, closed networks do not exhibit any monotonic property enabling efficient CFTP. We derive a bounding chain for the CFTP algorithm for closed queueing networks. This bounding chain is based on a compact representation of sets of states that enables exact sampling from the stationary distribution without considering all initial conditions in the CFTP. The coupling time of the bounding chain is almost surely finite, and numerical experiments show that it is close to the coupling time of the exact chain.
In , we present Clones, a Matlab toolbox for exact sampling from the stationary distribution of a closed queueing network with finite capacities. This toolbox is based on recent results using a compact representation of sets of states that enables exact sampling from the stationary distribution without considering all initial conditions in the coupling from the past (CFTP) scheme. This representation reduces the complexity of the one-step transition in the CFTP algorithm to O(KM^2), where K is the number of queues and M is the total number of customers, while the cardinality of the state space is exponential in the number of queues. In this paper, we focus on the algorithmic and implementation issues. We propose a new representation that reduces the one-step transition complexity of the CFTP algorithm to O(KM). We provide a detailed description of our matrix-based implementation. The toolbox can be downloaded at
http://
Flexibility of energy consumption can be harnessed for the purposes of ancillary services in a large power grid. In prior work by the authors, a randomized control architecture was introduced for individual loads for this purpose. Examples showed that the control architecture can be designed so that control of the loads is easy at the grid level: tracking of a balancing authority reference signal is possible, while ensuring that the quality of service (QoS) for each load is acceptable on average. The analysis was based on a mean-field limit (as the number of loads approaches infinity), combined with an LTI-system approximation of the aggregate nonlinear model. In , we examine in depth the issue of individual risk in these systems. The main contributions of the paper are of two kinds. Risk is modeled and quantified: (i) The average performance is not an adequate measure of success. It is found empirically that a histogram of QoS is approximately Gaussian, and consequently each load will eventually receive poor service. (ii) The variance can be estimated from a refinement of the LTI model that includes a white-noise disturbance; the variance is a function of the randomized policy, as well as of the power spectral density of the reference signal. Additional local control can eliminate risk: (iii) The histogram of QoS is truncated through this local control, so that strict bounds on service quality are guaranteed. (iv) This has insignificant impact on the grid-level performance, beyond a modest reduction in the capacity of ancillary service.
Mean-field models are a popular tool in a variety of fields. They provide an understanding of the impact of interactions among a large number of particles or people or other "self-interested agents", and are an increasingly popular tool in distributed control. In , we consider a particular randomized distributed control architecture introduced in our own recent work. In numerical results it was found that the associated mean-field model had attractive properties for purposes of control. In particular, when viewed as an input-output system, its linearization was found to be minimum phase. In this paper we take a closer look at the control model. The results are summarized as follows: (i) The Markov Decision Process framework of Todorov is extended to continuous time models, in which the "control cost" is based on relative entropy. This is the basis of the construction of a family of controlled Markovian generators. (ii) A decentralized control architecture is proposed in which each agent evolves as a controlled Markov process. A central authority broadcasts a common control signal to each agent. The central authority chooses this signal based on an aggregate scalar output of the Markovian agents. (iii) Provided the control-free system is a reversible Markov process, the following identity holds for the linearization,
where the right-hand side denotes the power spectral density of the output of any one of the individual (control-free) Markov processes.
The bipartite matching model was born in the work of Gale and Shapley, who proposed the stable marriage problem in the 1960s. In , we consider a dynamic setting, modeled as a multi-class queueing network or MDP model. The goal is to compute a policy for the matching model that is optimal in the average cost sense. Computation of an optimal policy is not possible in general, but we obtain insight by considering relaxations. The main technical result is a form of "heavy traffic" asymptotic optimality. For a parameterized family of models in which the network load approaches capacity, a variant of the MaxWeight policy is approximately optimal, with bounded regret, even though the average cost grows without bound. Numerical results demonstrate that the policies introduced in this paper typically have much lower cost than policies considered in prior work.
In , we investigate how to bound a discrete-time Markov chain (DTMC) by a stochastic matrix with a low-rank decomposition. We show how the complexity of the analysis of steady-state and transient distributions can be reduced by taking the decomposition into account. Finally, we show how to obtain a monotone stochastic upper bound with a low-rank decomposition.
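A minimal sketch of the computational payoff (our own illustration, with toy data): if the transition matrix factors as P = A·B with A of size n×r and B of size r×n, the stationary distribution can be obtained from the small r×r matrix B·A, since y = πA satisfies y = y(BA) and π = yB.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def vecmat(v, M):
    return [sum(v[k] * M[k][j] for k in range(len(M))) for j in range(len(M[0]))]

def stationary_low_rank(A, B, iters=200):
    """Stationary distribution of P = A @ B (n x n, rank r) using only
    r x r computations: if pi = pi P, then y = pi A is a left fixed point
    of the r x r matrix B A, and pi = y B up to normalization."""
    C = matmul(B, A)                       # r x r
    y = [1.0] * len(C)
    for _ in range(iters):                 # power iteration on the small matrix
        y = vecmat(y, C)
        s = sum(abs(v) for v in y)         # keep the iterate bounded
        y = [v / s for v in y]
    pi = vecmat(y, B)
    s = sum(pi)
    return [p / s for p in pi]
```

Each iteration costs O(r^2) instead of O(n^2), which is the kind of saving the decomposition buys for steady-state analysis.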
Sequences of maximum-weight walks of growing length in weighted digraphs have many applications in manufacturing and transportation systems, as they encode important performance parameters. It is well known that they eventually enter a periodic regime if the digraph is strongly connected. The length of their transient phase depends, in general, both on the size of the digraph and on the magnitude of the weights. In this paper, we show that certain bounds on the transients of unweighted digraphs, such as the bounds of Wielandt, Dulmage-Mendelsohn, Schwarz, Kim, and Gregory-Kirkland-Pullman, remain true for critical nodes in weighted digraphs.
This work was done by Thomas Nowak together with Glenn Merlet from Aix-Marseille Université, Hans Schneider from the University of Wisconsin-Madison, and Sergeĭ Sergeev from the University of Birmingham. It was presented at the 53rd IEEE Conference on Decision and Control and appeared in the journal Discrete Applied Mathematics.
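The eventual periodicity and the notion of transient can be checked directly on a toy max-plus example (our own illustration; lam is the maximum cycle mean of the digraph, and the check assumes period 1):

```python
def maxplus_mul(X, Y):
    """Matrix product in the (max, +) semiring; maximum-weight walks of
    length a+b are products of walks of lengths a and b. A missing edge
    would be encoded as float('-inf')."""
    n = len(X)
    return [[max(X[i][k] + Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transient(A, lam, max_k=50):
    """Smallest k such that A^(k+1) = lam + A^k entrywise, i.e. the power
    sequence has entered its periodic regime (period 1 assumed)."""
    Ak = A
    for k in range(1, max_k):
        Anext = maxplus_mul(Ak, A)
        if all(abs(Anext[i][j] - Ak[i][j] - lam) < 1e-9
               for i in range(len(A)) for j in range(len(A))):
            return k
        Ak = Anext
    return None
```

For A = [[1, 0], [0, 0]] the maximum cycle mean is 1 (the self-loop at the first node), and the powers stabilize as soon as every maximum-weight walk can route through that critical loop.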
In this paper, we investigate the approximate consensus problem in highly dynamic networks in which the topology may change continually and unpredictably. We prove that in both synchronous and partially synchronous systems, approximate consensus is solvable if and only if the communication graph in each round has a rooted spanning tree, i.e., there is a coordinator at each time. The striking point in this result is that the coordinator is not required to be unique and can change arbitrarily from round to round. Interestingly, the class of averaging algorithms, which are memoryless and require no process identities, entirely captures the solvability issue of approximate consensus, in that the problem is solvable if and only if it can be solved using any averaging algorithm. Concerning the time complexity of averaging algorithms, we show that approximate consensus can be achieved with precision of
This work was done by Thomas Nowak together with Bernadette Charron-Bost from the CNRS and Matthias Függer from Vienna University of Technology. It is currently under submission.
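A minimal sketch of an averaging algorithm of the kind captured by this result: each node repeatedly replaces its value by the equal-weight average of the values it hears. The graph sequence used in the usage example below (a star whose root rotates every round, so each round has a changing coordinator) and the stopping rule are our own illustrative choices, not from the paper.

```python
def averaging_round(values, in_neighbors):
    """One round: each node replaces its value by the average of the values
    it hears (its in-neighbors, itself included)."""
    return [sum(values[j] for j in nbrs) / len(nbrs)
            for nbrs in in_neighbors]

def approximate_consensus(values, graphs, eps):
    """Run equal-weight averaging over a (possibly changing) sequence of
    communication graphs until the spread of values drops below eps.
    graphs[t] lists, for each node, who it hears in round t."""
    rounds = 0
    for nbrs in graphs:
        if max(values) - min(values) <= eps:
            break
        values = averaging_round(values, nbrs)
        rounds += 1
    return values, rounds
```

Because every update is a convex combination, the values always stay inside the initial interval; the rooted-spanning-tree condition is what makes the spread actually contract.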
In contrast to analog models, binary circuit models are high-level abstractions that play an important role in assessing the correctness and performance characteristics of digital circuit designs: (i) modern circuit design relies on fast digital timing simulation tools and, hence, on binary-valued circuit models that faithfully model signal propagation, even throughout a complex design, and (ii) binary circuit models provide a level of abstraction that is amenable to formal correctness proofs. A mandatory feature of any such model is the ability to trace glitches and other short pulses precisely as they occur in physical circuits, as their presence may affect a circuit's correctness and its performance characteristics. Unfortunately, it was recently proved [Függer et al., ASYNC'13] that none of the existing binary-valued circuit models proposed so far, including the two most commonly used pure and inertial delay channels and any other bounded single-history channel, is realistic in the following sense: For the simple Short-Pulse Filtration (SPF) problem, which is related to a circuit's ability to suppress a single glitch, they showed that every bounded single-history channel either contradicts the unsolvability of SPF in bounded time or the solvability of SPF in unbounded time in physical circuits, i.e., no existing model correctly captures physical solvability with respect to glitch propagation. We propose a binary circuit model, based on so-called involution channels, which does not suffer from this deficiency. In sharp contrast to what is possible with all the existing models, involution channels make it possible to solve the SPF problem precisely when this is possible in physical circuits. To the best of our knowledge, our involution channel model is hence the very first binary circuit model that realistically models glitch propagation, which makes it a promising candidate for developing more accurate tools for simulation and formal verification of digital circuits.
This work was done by Thomas Nowak together with Matthias Függer, Robert Najvirt, and Ulrich Schmid from Vienna University of Technology. It will be presented at the conference DATE 2015.
This paper aims to unify and extend existing techniques for deriving upper bounds on the transient of max-plus matrix powers. To this aim, we introduce the concept of weak CSR expansions:
This work was done by Thomas Nowak together with Glenn Merlet from Aix-Marseille Université and Sergeĭ Sergeev from the University of Birmingham. It appeared in the journal Linear Algebra and its Applications.
This book chapter surveys and discusses upper bounds on the length of the transient phase of max-plus linear systems and sequences of max-plus matrix powers. In particular, it explains how to extend a result by Nachtigall to yield a new approach for proving such bounds and states an asymptotic tightness result by using an example given by Hartmann and Arguelles.
This work was done by Thomas Nowak together with Bernadette Charron-Bost from the CNRS. It appeared in the book “Tropical and Idempotent Mathematics and Applications” in the AMS's book series Contemporary Mathematics.
Social Information Networks and Privacy
Online social networks provide a new way of accessing and collectively processing information. Their efficiency is critically predicated on the quality of the information provided, the ability of users to assess this quality, and their ability to connect to like-minded users to exchange useful content.
To improve this efficiency, we develop mechanisms for assessing users’ expertise and recommending suitable content. We further develop algorithms for identifying latent user communities and recommending potential contacts to users.
Machine Learning and Big Data
Multi-Armed Bandit (MAB) problems constitute a generic benchmark model for learning to make sequential decisions under uncertainty. They capture the trade-off between exploring decisions to learn the statistical properties of the corresponding rewards, and exploiting decisions that have generated the highest rewards so far. In this project, we aim at investigating bandit problems with a large set of available decisions, with structured rewards. The project addresses bandit problems with known and unknown structure, and targets specific applications in online advertising, recommendation and ranking systems.
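As a baseline illustration of the exploration/exploitation trade-off described above, here is a sketch of the classical UCB1 index policy on Bernoulli arms. This is a standard unstructured algorithm, included only for intuition; the project targets structured variants of this problem.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """UCB1 on Bernoulli arms: play each arm once, then always pull the arm
    maximizing empirical mean + sqrt(2 ln t / n_pulls). The bonus term
    drives exploration of rarely pulled arms; the empirical mean drives
    exploitation of the best arm found so far. Returns pull counts."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    def pull(a):
        r = 1.0 if rng.random() < arm_means[a] else 0.0
        counts[a] += 1
        sums[a] += r
    for a in range(k):          # initialization: one pull per arm
        pull(a)
    for t in range(k + 1, horizon + 1):
        a = max(range(k), key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
        pull(a)
    return counts
```

Over a long horizon the suboptimal arm is pulled only O(log T) times, which is the regret behavior the structured bandit models in this project aim to improve on when the decision set is large.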
A CRE contract titled “Distribution of the SINR in real networks” between Inria and Orange Labs has been carried out. P. Keeler was hired by Inria as a research engineer within this contract. It is part of the long-term collaboration between TREC/DYOGENE and Orange Labs, represented by M. K. Karray, on the development of analytic tools for QoS evaluation and dimensioning of operator cellular networks.
Members of DYOGENE participate in the research group GeoSto (Groupement de recherche, GdR 3477): http://
Graphs, Algorithms and Probability - PI: Marc Lelarge; started in Jan 2012 - 48 months. http://
Over the last few years, several research areas have witnessed important progress through the fruitful collaboration of mathematicians, theoretical physicists and computer scientists. One of them is the cavity method. Originating from the theory of mean field spin glasses, it is key to understanding the structure of Gibbs measures on diluted random graphs, which play a key role in many applications, ranging from statistical inference to optimization, coding and social sciences.
The objective of this project is to develop mathematical tools in order to contribute to a rigorous formalization of the cavity method:
From local to global, the cavity method on diluted graphs. We will study the extent to which the global properties of a random process defined on some graph are determined by the local properties of interactions on this graph. To this end, we will relate the cavity method to the analysis of the complex zeros of the partition function, an approach that also comes from statistical mechanics. This will allow us to apply new techniques to the study of random processes on large diluted graphs and associated random matrices.
Combinatorial optimization, network algorithms, statistical inference and social sciences. Motivated by combinatorial optimization problems, we will attack long-standing open questions in theoretical computer science with the new tools developed in the first project. We expect to design new distributed algorithms for communication networks and new algorithms for inference in graphical models. We will also analyze networks from an economic perspective by studying games on complex networks.
Markovian Modeling Tools and Environments - coordinator: Alain Jean-Marie (Inria Maestro); local coordinator (for partner Inria Paris-Rocquencourt): A. Bušić; started: January 2013; duration: 48 months; partners: Inria Paris-Rocquencourt (EPI DYOGENE), Inria Sophia Antipolis Méditerranée (EPI MAESTRO), Inria Grenoble Rhône-Alpes (EPI MESCAL), Université de Versailles-Saint-Quentin, Telecom SudParis, Université Paris-Est Créteil, Université Pierre et Marie Curie.
The aim of the project is to build a modeling environment dedicated to Markov models. One part will develop perfect simulation techniques, which allow sampling from the stationary distribution of the process. A second will develop parallelization techniques for Monte Carlo simulation. A third will develop numerical computation techniques for a wide class of Markov models. All these developments will be integrated into a programming environment allowing the specification of models and their solution strategy. Several applications will be studied in various scientific disciplines: physics, biology, economics, network engineering.
A. Bušić was a participant (within partner LIP6) in the national project ANR MAGNUM (Méthodes Algorithmiques pour la Génération aléatoire Non Uniforme: Modèles et applications) (2010–2014); partners: LIP6, LIAFA, IGM. http://
IT-SG-WN is an Associate Team between the Inria project-team DYOGENE of Inria Paris-Rocquencourt and the EECS department of UC Berkeley in the USA, funded from 2011 to 2014. This Associate Team participates in the Inria@SiliconValley initiative. The last visit within this program was the one-month visit of Prof. Venkat Anantharam (EECS, UC Berkeley). The research work focused on network information theory, and more precisely on error exponents for Gaussian MAC channels, and recently led to an ISIT submission.
Prof. Pawel Lorek from Wroclaw University (Poland) visited DYOGENE for one week.
Prof. Venkat Anantharam (EECS, UC Berkeley) visited DYOGENE in June 2014, within IT-SG-WN Inria Associate Team.
Prof. A. Rybko and Prof. A. Vladimirov (IITP RAS) visited DYOGENE in June-July 2014.
Ana Bušić visited MIT (2 months) and University of Florida (4 months) from March to August 2014.
Marc Lelarge: ACM Sigmetrics TPC co-chair
Ana Bušić: IFIP Performance, Valuetools, IEEE SmartGridComm
Marc Lelarge: ESA, WiOpt, WEIS
The members of the team have served as reviewers for numerous international conferences.
Marc Lelarge: IEEE Transactions on Network Science and Engineering, Bernoulli Journal, Queueing Systems.
The members of the team reviewed numerous papers for numerous international journals.
Licence: Anne Bouillard (lectures) and Thomas Nowak (exercise sessions), Structures et algorithmes aléatoires, 80 heqTD, L3, ENS, France.
Licence: Marc Lelarge (lectures) and Anne Bouillard (exercise sessions), Théorie de l'information et du codage, 50 heqTD, L3, ENS, France.
Master: Bartek Błaszczyszyn (with Laurent Massoulié), Graduate Course on point processes, stochastic geometry and random graphs (program “Master de Sciences et Technologies”), 45h, UPMC, Paris 6, France.
Master: Bartek Błaszczyszyn (with Laurent Decreusefond), Graduate Course on Spatial Stochastic Modeling of Wireless Networks (master program “Advanced Communication Networks”), 45h, École Polytechnique and Telecom ParisTech, Paris, France.
Master: Anne Bouillard (lectures + exercise sessions), Graphes aléatoires, 18 heqTD, M1, ENS Cachan, France.
Master: Ana Bušić and Florian Simatos, Modèles et algorithmes de réseaux, 50 heqTD, M1, ENS, Paris, France.
Master: Ana Bušić, Foundations of Network Models, 18 heqTD, M2, MPRI, France.
Licence: Marc Lelarge, Théorie de l'information et codage, 40h, L3, ENS, Paris, France.
Master: Marc Lelarge, Algorithms for Networked Information, 40h, M2, École Polytechnique, France.
Doctorat: François Baccelli, Stochastic Geometry, Essén Lectures 2014 (May 12-16), Uppsala University, Sweden, http://
HdR: Anne Bouillard, Algorithms and Efficiency of Network Calculus, ENS (Paris), 8 April 2014.
PhD in progress:
Kumar Gaurav, Convex comparison of network architectures, started in October 2011, advisor B. Blaszczyszyn;
Miodrag Jovanović, Evaluation and optimization of the quality perceived by mobile users for new services in cellular networks, started in January 2012, advisor B. Blaszczyszyn, co-advisor M. Karray;
Christelle Rovetta, Applications of perfect sampling to queuing networks and random generation of combinatorial objects, from December 2013, co-advised by Anne Bouillard and Ana Bušić.
B. Blaszczyszyn was a reviewer of the PhD thesis of Andres Altieri [Supelec].
M. Lelarge: External Examiner for PhD: Jiaming Xu (UIUC) 2014.
M. Lelarge: member of the CR2 recruitment committee, Inria Paris-Rocquencourt.