TREC is a joint INRIA-ENS project-team. It is focused on the modeling and the control of communication networks. Its methodological activities are combined with projects defined with industrial partners, notably Thomson, Alcatel, France Télécom (since 2007 also known as Orange Labs) and Sprint. The main research directions are:
communication network control: admission control, flow regulation, congestion control, traffic analysis in controlled networks;
modeling and performance analysis of wireless networks (cellular, mesh, ad-hoc, sensor, etc.): coverage and load analysis, power control, evaluation and optimization of the transport capacity, self organization;
stochastic network dynamics, in particular by means of algebraic methods, with a main emphasis on rare events and large network asymptotics;
the development of mathematical tools based on stochastic geometry, random graphs and spatial point processes: Voronoi tessellations, coverage processes, random spatial trees, random fields.
combinatorial optimization and analysis of algorithms; a new domain opened in 2007.
Here is the scientific content of each of our main research directions.
Modeling and control of communication networks. By control we mean admission control, flow regulation and feedback control à la TCP, the understanding and improvement of which are major challenges within the context of large networks. Our aim is a mathematical representation of the dynamics of the most commonly used control protocols, from which one could predict and optimize the resulting end-user bandwidth sharing and QoS. We are currently applying this improved understanding of protocol dynamics to Split TCP, as used in wireless access networks and in peer-to-peer overlays.
Modeling and performance analysis of wireless networks. The main focus is on the following three classes of wireless networks: cellular networks, mobile ad hoc networks (MANETs) and WiFi mesh networks.
Concerning cellular networks, our mathematical representation of interference, based on shot-noise processes, has led to a variety of results on the coverage and capacity of large CDMA networks that take into account intercell interference and power control. Our general goal is to propose a strategy for the densification and parameterization of UMTS networks that is optimized for both voice and data traffic.
Using a similar approach, in particular additive and extremal shot-noise processes, we also investigate MAC-layer scheduling algorithms and power control protocols for MANETs. We concentrate on cross-layer optimizations that maximize the transport capacity of multihop MANETs. A recent example within this class of problems is the concept of opportunistic routing for MANETs, which we currently study with the Hipercom project team at Rocquencourt.
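As an illustration of the shot-noise approach, the following sketch (a toy Monte Carlo model with assumed density, path-loss exponent and SINR threshold, not the team's actual CDMA or MANET model) estimates a coverage probability by summing the interference created by a Poisson field of interferers:

```python
import math
import random

random.seed(1)

def sample_poisson(lam):
    """Poisson variate by inversion (fine for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

def coverage_probability(density=0.02, area=20.0, link_dist=1.0,
                         beta=4.0, noise=0.01, threshold=0.5, trials=2000):
    """Estimate P(SINR > threshold) at the origin.  The useful signal
    comes from distance link_dist; the interference is the additive
    shot-noise sum of l(r) = r^-beta over a Poisson field of
    interferers scattered on a square of side `area`."""
    covered = 0
    for _ in range(trials):
        interference = 0.0
        for _ in range(sample_poisson(density * area * area)):
            x = random.uniform(-area / 2, area / 2)
            y = random.uniform(-area / 2, area / 2)
            r = math.hypot(x, y)
            if r > 0.05:                      # tame the singularity at 0
                interference += r ** (-beta)
        if link_dist ** (-beta) / (noise + interference) > threshold:
            covered += 1
    return covered / trials
```

Raising the SINR threshold shrinks the estimated coverage, as expected from the monotonicity of the model.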
We also continue this line of thought on the self-organization of WiFi mesh networks. The general problem in this context is to find robust and fully distributed algorithms for the selection of channels by access points, for the association of users with access points, for the transmission power of access points, and for routing in mesh networks. We proposed and analyzed new classes of algorithms based on Gibbs' sampler.
Theory of network dynamics. The main directions investigated this year in queueing theory concern extensions of product form theory and of the theory of insensitivity within the context of the new bandwidth sharing paradigms that have been proposed in the literature lately.
TREC is also pursuing the elaboration of a stochastic network calculus that would allow the analysis of network dynamics by algebraic methods. The mathematical tools are those of discrete event dynamical systems: semi-rings (max, plus) and inf-convolutions, as well as their nonlinear extensions (topical and nonexpansive maps, the monotone separable framework); the main probabilistic tools within this framework are ergodic theory, asymptotic analysis, Lyapunov exponent analysis, perturbation analysis and large deviations. The main current contributions bear on the analysis of rare events within this framework.
Combinatorial optimization and analysis of algorithms. In this research direction, which we started in 2007, we intend to build upon our expertise on random trees/graphs and our new collaboration with D. Aldous in Berkeley. Sparse graph structures have proved useful in a number of information processing tasks (channel coding, source coding, signal processing, and recently similar design ideas have been proposed for code division multiple access communications). The computational problem underlying many of these developments can be described as follows: infer the values of a large collection of random variables, given a set of constraints, or observations, that induce relations among them. While such a task is generally computationally hard, sparse graphical structures allow for low-complexity distributed algorithms (for instance iterative message passing algorithms such as belief propagation) that have proved very effective in practice. A precise analysis of these algorithms, and of their loss of performance compared to optimal (computationally intractable) inference, remains however a largely open problem.
Depending on the classes of communication networks, we focus on different issues:
Concerning the Internet, we concentrate on Internet probing and on the design of Internet overlay networks;
Concerning operator networks, we work on the control and the optimization of both wireless cellular networks and wireline access networks;
Concerning self-organized networks, we focus on the design of MAC and routing protocols and on the evaluation of the capacity.
We interact on these questions with the following industrial partners: Thomson (self organized networks), Alcatel (wireline access), France Télécom (wireless cellular networks) and Sprint (Internet probing and wireless access).
A software tool called SERT (Spatial Erlang for Real Time services) was designed by M. Karray for the evaluation of various properties of large CDMA networks, in particular the probability that calls are blocked due to the infeasibility of the power control inherent to CDMA. This tool is based on the research conducted with FT R&D reported in the PhD thesis of M. K. Karray, defended in September 2007; in particular on the results of and on the pending patents , , .
This software is now part of the dimensioning tools used by Orange for its UMTS network.
Operators require a methodology for analyzing Internet traffic and control protocols in order to plan resources (buffer capacities and bandwidth) capable of handling any mix of traffic (voice, video and data) with predefined end-to-end QoS, as well as overlay networks gathering large collections of interacting users and flows. Several research directions are pursued, ranging from the analysis of transport protocols on a single link to that of Split TCP.
In , a general solution for the class of transport equations that arise within the context of both persistent and non-persistent TCP flows was derived. This class contains two cases of loss point process models: the rate-independent Poisson case, where the packet loss rate is independent of the throughput of the flow, and the rate-dependent case, where the point process of losses has an intensity that is a function of the instantaneous rate. We also gave a direct proof of the fact that there is a unique density solving the associated differential equation, and we provided a closed-form expression for this density and for its mean value. In a survey to appear , we showed that this class of PDEs can be used in a variety of contexts concerning the prediction of the rate of HTTP users.
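The rate-independent Poisson case can be illustrated numerically. The sketch below is an assumed AIMD caricature of TCP, not the transport equations of the paper: a rate grows linearly at speed alpha and is halved at the points of a Poisson loss process of intensity loss_rate, so that the stationary mean solves alpha = loss_rate * E[X]/2, i.e., E[X] = 2*alpha/loss_rate.

```python
import random

random.seed(42)

def aimd_mean_rate(alpha=1.0, loss_rate=0.5, horizon=20000.0):
    """Simulate an AIMD rate process: the rate grows linearly at speed
    alpha and is halved at the epochs of a Poisson loss process of
    intensity loss_rate (the rate-independent case).  Returns the
    time-average rate, which should approach 2 * alpha / loss_rate."""
    t, x, integral = 0.0, 0.0, 0.0
    while t < horizon:
        dt = random.expovariate(loss_rate)    # time to the next loss
        dt = min(dt, horizon - t)
        # exact integral of the linear ramp x + alpha*s over [0, dt]
        integral += x * dt + 0.5 * alpha * dt * dt
        x = (x + alpha * dt) / 2.0            # multiplicative decrease
        t += dt
    return integral / horizon
```

With the defaults the time average should be close to 2 * 1.0 / 0.5 = 4.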
TCP was mainly developed for reliable wired links, where packet losses occur mostly because of congestion. In wireless networks, performance problems arise as TCP often over-reacts to losses due to radio transmission errors. The split-connection approach is an overlay structure that has been adopted to cope with this problem. Initially proposed in the context of wireless networks, it has since been adopted in satellite networks and in overlay networks of various kinds: peer-to-peer systems and content delivery networks, to cite a few examples. The idea of Split TCP is to replace a multihop, end-to-end TCP connection by a cascade of shorter TCP connections using intermediate nodes as proxies, thus achieving higher throughput. In the model that we developed with S. Foss, we considered two long-lived TCP-Reno flows traversing, in cascade, two links with different medium characteristics. A buffer at the end of the first link prevents the loss of packets that cannot be immediately forwarded on the second link by storing them temporarily. The target of our study was the characterization of the TCP throughput on both links as well as of the buffer occupancy. We made the following analytical contributions: we established the equations for the throughput dynamics jointly with those of the buffer occupancy in the proxy; we then determined the stability conditions by exploiting some intrinsic monotonicity and continuity properties of the system; finally, we focused on the buffer occupancy in the proxy and on the end-to-end delays to derive tail asymptotics. The framework allowed us to consider both the case of an infinite buffer at the proxy and that of a limited buffer size, where a backpressure algorithm is needed to limit the sender rate and avoid losses at the proxy. The tail asymptotics surprisingly showed that buffer occupancy and delays in the stationary regime are heavy-tailed.
We also performed network simulations (using the ns-2 software) and developed an event-driven simulator in order to illustrate, as far as possible by means of simulations, the analytical results. In a second part of the work, we gave a representation of the system in terms of piecewise-deterministic Markov processes (PDPs), referring to the theory of PDPs developed by M.H.A. Davis in the 1980s. A PDP is a mixture of deterministic motion and random jumps associated with the various configurations of the system. This formulation turns out to provide a natural representation of the system behaviour via PDEs associated with the stationary regime, which we are currently investigating.
This axis concerns the analysis and design of wireless access communication networks, in particular cellular networks, wireless LANs, MANETs, sensor networks, etc. We are interested both in macroscopic models, which are particularly important for economic planning, and in models allowing the definition and optimization of protocols. Our approach combines several tools: queueing theory, point processes, stochastic geometry, random graphs and mean field techniques.
In a joint work with Mohamed Karray, we evaluated the performance of scalable congestion control policies built upon the load control schemes derived in . We considered the bit-rate configurations identified by these schemes as feasible sets for some classical, maximal fair resource allocation policies, and studied their performance in the long-term evolution of the system. Specifically, we assume Markovian arrivals, departures and mobility of customers, which transmit some given data volumes, as well as some temporal channel variability (fading), and we study the mean throughput, i.e., the mean bit-rates that the policies offer in different parts of a given cell. Explicit formulas are obtained in the case of proportional fair policies, which may or may not take advantage of the fading, for null or infinitely rapid customer mobility. This work complements , where the performance of the scalable admission control policies for fixed traffic was studied. The approach also applies to a channel shared by elastic and CBR traffic, regulated by their respective admission policies.
The global approach to wireless cellular networks was worked out in collaboration with M. K. Karray and is presented in his PhD thesis , supervised by B. Błaszczyszyn and E. Moulines of ENST, and defended in September 2007.
In , we considered the downlink of a cellular network supporting data traffic. In addition to the direct traffic from the base-station, each user is equipped with the same type of 802.11-like WLAN or WPAN interface, used to relay packets to further users and hence to improve the performance of the overall network. We worked on design guidelines for such networks and evaluated how much capacity improvement the additional relay layer can bring in comparison to cellular networks. We considered a realistic dynamic setting where users randomly initiate downloads and leave the system upon transfer completion. A first objective was to provide a scheduling/relay strategy that maximizes the network capacity, i.e., the traffic in bit/s/cell that the network can support. We found that, regardless of the spatial traffic distribution, when the cell approaches saturation (the number of active users is very large), the capacity-achieving strategy divides the cell into two areas: one closer to the base-station, where the relay layer is always saturated and some nodes receive traffic through both direct and relay links, and a farther one, where the relay is never saturated and there is no direct traffic. We further showed that it is approximately optimal to use fixed link lengths, and we derived this optimal length. We gave a simple algorithm to compute the cell capacity. The obtained capacity was shown to be independent of the cell size (unlike in traditional cellular networks), and it is 20%-60% higher than that of previously proposed relay architectures when the number of users is large. Finally, we provided guidelines for future protocol design.
The popularity of IEEE 802.11 WLANs has led to today's dense deployments in urban areas. Such high density leads to sub-optimal performance unless the interfering wireless devices in these networks learn how to optimally use and share the spectrum. We proposed a set of distributed algorithms that allow (i) multiple interfering 802.11 Access Points to select their operating frequency in order to minimize interference, (ii) Access Points to tune their transmission power, and (iii) users to choose the Access Point they attach to, in order to maximize the sum of the bit-rates obtained by the set of users throughout the network. Typical functions (choosing a channel to operate on, choosing an access point to associate with) were shown to be well addressed in a common optimization framework based on Gibbs' sampler, via the minimization of a potential energy function. This scheme does not require explicit coordination among the wireless devices. For a fixed traffic demand, limited by wireless access, it was shown to achieve a fairness criterion identified in the past as the minimal potential delay . We established the mathematical properties of the proposed algorithms and studied their performance using analytical methods and event-driven simulations. We discussed implementation requirements and showed that significant benefits can be gained even in incremental deployments and in the presence of non-cooperating wireless clients. We investigated several possibilities for evaluation under real conditions. Two papers, one on self-association and the other on power control in such networks, were presented at IEEE INFOCOM 2007. This approach was also used to design a new "MAC-aware" routing protocol for mesh networks, which was presented at ACM CoNEXT'07 .
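The Gibbs-sampler idea behind these algorithms can be sketched as follows; the interference matrix, temperature and energy function below are illustrative assumptions, not those of the INFOCOM papers. Each access point, in turn, resamples its channel from the Gibbs distribution associated with the interference it would suffer on each channel:

```python
import math
import random

random.seed(3)

def gibbs_channel_selection(interference, n_channels=3, sweeps=200, temp=0.5):
    """Distributed channel selection via a Gibbs sampler.  Access point
    i resamples its channel from the distribution proportional to
    exp(-E_i(c) / temp), where E_i(c) is the interference it would
    receive on channel c from the neighbours currently using c."""
    n = len(interference)
    channels = [random.randrange(n_channels) for _ in range(n)]
    for _ in range(sweeps):
        i = random.randrange(n)             # pick an AP to update
        energies = [sum(interference[i][j] for j in range(n)
                        if j != i and channels[j] == c)
                    for c in range(n_channels)]
        weights = [math.exp(-e / temp) for e in energies]
        total = sum(weights)
        r, acc = random.random() * total, 0.0
        for c, w in enumerate(weights):     # sample from the weights
            acc += w
            if r <= acc:
                channels[i] = c
                break
    return channels

# Two cliques of strongly interfering APs, {0,1} and {2,3}, with
# negligible cross-clique interference (hypothetical numbers).
W = [[0.0, 5.0, 0.1, 0.1],
     [5.0, 0.0, 0.1, 0.1],
     [0.1, 0.1, 0.0, 5.0],
     [0.1, 0.1, 5.0, 0.0]]
assignment = gibbs_channel_selection(W)
```

At low temperature the sampler concentrates on low-energy states, so each pair of strongly interfering access points ends up on distinct channels with overwhelming probability, without any explicit coordination.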
These papers are the outcome of a collaboration with researchers at INTEL and THOMSON.
In , we presented a stochastic geometry model for the performance analysis and planning of dense IEEE 802.11 networks. This model allows one to propose heuristic formulas for various properties of such networks, like the probability for users to be covered, the probability for access points to be granted access to the channel, or the average long-term throughput provided to end-users. The main merit of this model is to take the effects of interference and of CSMA into account within this dense network context. This analytic model, which is based on Matérn point processes, was partly validated against simulations. It was then used to assess various properties of such networks. We showed for instance how the long-term throughput obtained by end-users behaves when the access point density increases. We also briefly showed how to use this model for the planning of managed networks and for the economic modeling of unplanned networks.
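The Matérn hard-core construction underlying this kind of model can be sketched as follows (illustrative densities and sensing range; the actual model of the paper is more detailed). Each node of a Poisson pattern carries an i.i.d. uniform mark playing the role of a back-off timer, and transmits only if no node within carrier-sense range holds a smaller mark:

```python
import math
import random

random.seed(5)

def matern_hardcore(density=0.3, area=20.0, sense_range=1.0):
    """Matérn type-II thinning as a caricature of CSMA: a node is
    retained (allowed to transmit) only if no other node within
    sense_range has a smaller mark.  Returns (total nodes, retained)."""
    lam = density * area * area
    # normal approximation of the Poisson number of nodes (lam is large)
    n = max(0, int(random.gauss(lam, math.sqrt(lam)) + 0.5))
    pts = [(random.uniform(0, area), random.uniform(0, area),
            random.random()) for _ in range(n)]
    retained = 0
    for i, (x, y, m) in enumerate(pts):
        if all(not (math.hypot(x - u, y - v) < sense_range and t < m)
               for j, (u, v, t) in enumerate(pts) if j != i):
            retained += 1
    return n, retained
```

The thinning produces a hard-core pattern of simultaneous transmitters, which is what lets the model capture the joint effect of CSMA and interference.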
Using mean field techniques, we presented in a performance analysis of random back-off algorithms, such as the exponential back-off algorithm, in the case of a finite number of saturated sources. The analysis assumed that all links were interfering with each other. In , , we generalized the results to the case of networks with partial interaction, i.e., to the case where all links do not interfere with each other. To do so, we represented the system as a system of interacting particles with a rapidly varying environment, and developed the mean field analysis of such systems. The results allow us to derive explicit expressions of the throughput of the various links, and we are then able to exactly quantify the well-known problem of unfairness in case of hidden nodes.
In we considered Carrier Sense Multiple Access (CSMA) schedulers for wireless networks. For networks where all nodes are within transmission range of each other, it was shown that such schedulers achieve the network capacity in the limiting region of large networks with a small sensing delay. However the design and analysis of CSMA schedulers for general networks has been an open problem due to the complexity of the interaction among coupled interference constraints. For networks with primary interference constraints, we introduced a tractable analysis of such CSMA schedulers based on a fixed point approximation. We then used the approximation to characterize the achievable rate region of static CSMA schedulers. We showed that the approximation is asymptotically accurate for the limiting regime of large networks with a small sensing delay, and that in this case the achievable rate region of CSMA converges to the capacity region.
A mobile ad-hoc network (MANET) is made of mobile nodes which are at the same time terminals and routers, connected by wireless links, the union of which forms an arbitrary topology. The nodes are free to move randomly and organize themselves arbitrarily. Important issues in such a scenario are connectivity, medium access (MAC), routing and stability.
In an ongoing work with Paul Mühlethaler, we focus on the analysis of routing protocols in multi-hop mobile wireless networks. In particular, we investigate the potential gains of opportunistic routing strategies, which take advantage of both time and space diversity, compared to classical routing strategies, where packets are routed on a pre-defined route usually obtained by a shortest-path routing protocol. In the opportunistic routing scheme we consider, the relay is selected among the nodes having captured the packet transmission (if any) as the node which maximizes the progress of the packet towards the destination. In such a scheme, opportunism consists in taking advantage, at each hop, of the local pattern of transmissions, where locality is understood in both its time and space sense. In our study we use a spatial version of Aloha for the MAC layer, which has been shown to scale well in multi-hop networks, and a well-established definition of packet capture based on the signal-to-interference-and-noise ratio (SINR) model. Our simulation study shows that such an opportunistic scheme very significantly outperforms classical routing schemes. It also shows how to optimally tune the MAC parameters so as to minimize the average number of hops from origin to destination everywhere in the network. This optimization is shown by simulation to be independent of the network density, a property that we back by a mathematical proof based on a scale invariance argument. We submitted our results to a conference organized in 2008.
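The relay selection rule can be sketched as a toy simulation (assumed densities, Aloha probability and capture threshold; packet capture follows the SINR rule described above). A tagged transmitter at the origin sends towards a far-away destination on the positive x-axis, and the relay is the captured node with the most forward progress:

```python
import math
import random

random.seed(11)

def mean_opportunistic_progress(density=1.0, area=10.0, aloha_p=0.1,
                                beta=4.0, noise=0.01, threshold=1.0,
                                trials=400):
    """Average per-hop progress of opportunistic routing under spatial
    Aloha.  Among silent nodes that capture the packet (SINR above
    threshold under the Aloha interference), the relay is the one with
    the largest x-coordinate; progress is 0 when nobody captures."""
    lam = density * area * area
    total = 0.0
    for _ in range(trials):
        n = max(0, int(random.gauss(lam, math.sqrt(lam)) + 0.5))
        nodes = [(random.uniform(-area / 2, area / 2),
                  random.uniform(-area / 2, area / 2)) for _ in range(n)]
        txs = [p for p in nodes if random.random() < aloha_p]
        best = 0.0
        for (x, y) in nodes:
            if (x, y) in txs:
                continue                    # transmitters cannot receive
            d = math.hypot(x, y)
            if d < 0.05:
                continue
            interf = sum(math.hypot(x - u, y - v) ** (-beta)
                         for (u, v) in txs
                         if math.hypot(x - u, y - v) > 0.05)
            if d ** (-beta) / (noise + interf) > threshold and x > best:
                best = x
        total += best
    return total / trials
```

Varying aloha_p in this sketch reproduces the qualitative tradeoff discussed above: more simultaneous transmitters mean more relay candidates but also more interference.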
In , we proposed and analyzed a probabilistic model of packet reception in the steady-state regime of a non-slotted wireless communication channel, as used in certain classes of transmit-only radios. This can be viewed as an extension of the classical M/D/1/1 Erlang loss model where the interference created by different packet emissions is introduced by means of the shot-noise process. More precisely, we assume that a given packet is admitted by the receiver if the latter is idle at the packet arrival epoch, and successfully received if, in addition, its signal-to-interference-and-noise ratio averaged over the reception period is large enough. As the main result, we proved an analogue of the Erlang formula for the fraction of packets that are successfully received.
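The M/D/1/1 admission ingredient of this model admits a short numerical check: a receiver that locks onto each admitted packet for a fixed duration T admits a fraction 1/(1 + λT) of a Poisson packet stream of intensity λ (renewal reward over cycles of mean T + 1/λ). A minimal simulation, with parameters assumed for illustration:

```python
import random

random.seed(13)

def transmit_only_admission(lam=0.5, duration=1.0, horizon=20000.0):
    """Fraction of packets admitted by a receiver that is busy for
    `duration` after each admission and ignores arrivals while busy
    (the M/D/1/1 ingredient of the transmit-only model).  The exact
    value is 1 / (1 + lam * duration)."""
    t, busy_until = 0.0, -1.0
    arrivals = admitted = 0
    while t < horizon:
        t += random.expovariate(lam)        # next Poisson arrival
        arrivals += 1
        if t >= busy_until:                 # receiver idle: admit
            admitted += 1
            busy_until = t + duration
    return admitted / arrivals
```

The SINR condition of the actual model then thins the admitted packets further, which is where the shot-noise term enters.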
In , we considered a hybrid wireless sensor network with regular and transmit-only sensors. The transmit-only sensors do not have a receiver circuit, and are hence cheaper and less energy-consuming, but their transmissions cannot be coordinated. Regular sensors, also called cluster-heads, are responsible for receiving information from the transmit-only sensors and forwarding it to sinks. Using a mathematical model of random access networks developed in , we defined and evaluated packet admission policies at cluster-heads for different performance criteria. We showed that the proposed hybrid network architecture, using the optimal policies, can achieve substantial dollar-cost and power-consumption savings as compared to conventional architectures, while providing the same performance guarantees.
In a sensor network, the points in the operational area that are suitably sensed form a two-dimensional spatial coverage process. For randomly deployed sensor networks, it is typically the network coverage of two-dimensional areas that is analyzed. However, in many sensor network applications, e.g., tracking of moving objects, the sensing process on paths, rather than in areas, is of interest. With such an application in mind, in , we analyzed the coverage process induced on a one-dimensional path by a sensor network that is modeled as a two-dimensional Boolean model. In the analysis, the sensor locations form a spatial Poisson process of given density and the sensing regions are discs of i.i.d. random radii. We obtained a strong law for the fraction of a path that is k-sensed, i.e., sensed by at least k sensors. Asymptotic path-sensing results were obtained under the same limiting regimes as those required for asymptotic coverage by a two-dimensional Boolean model. Interestingly, the asymptotic fraction of the area that is 1-sensed is the same as the fraction of a path that is 1-sensed. For k = 1, we also obtained a central limit theorem characterizing the rate at which the asymptotics are approached. For finite networks, the expectation and variance of the fraction of the path that is k-sensed were obtained. The asymptotics and the finite network results were then used to obtain the critical sensor density required to k-sense a given fraction of an arbitrary path with very high probability. Through simulations, we then analyzed the robustness of the model when the sensor deployment is nonhomogeneous and when the paths are not rectilinear. Other path coverage measures like breach, support, "length to first sense" and sensing continuity measures like holes and clumps were also characterized. Finally, we discussed some generalizations of the results, such as the characterization of the coverage process of "straight line paths" by sensor networks.
The aim of the model reported in is to analyze the tracking of a target and the coverage in sensor networks. These have potential applications in intruder detection, surveillance of an area and various other military and medical applications. We model an unreliable sensor network by a Markov-Boolean model. The model is the Poisson-Boolean model but with the possibility that the nodes can be in two states — on or off. We characterize the ability of such a network to track linearly moving targets.
In and in , we compared the performance of three usual allocations, namely max-min fairness, proportional fairness and balanced fairness, in a communication network whose resources are shared by a random number of data flows. The model consists of a network of processor-sharing queues. The vector of service rates, which is constrained by some compact, convex capacity set representing the network resources, is a function of the number of customers in each queue. This function determines the way network resources are allocated. We showed that this model is representative of a rich class of wired and wireless networks. For this general framework, we gave the stability conditions of max-min fairness, proportional fairness and balanced fairness and compared their performance on a number of toy networks.
In , we investigated the stability of utility-maximizing allocations in networks with arbitrary rate regions, not necessarily convex, and in the case of two flow classes. We considered a dynamic setting where users randomly generate data flows according to some exogenous traffic processes. Network stability is then defined as the ergodicity of the process describing the number of active flows. When the rate region is convex, the stability region is known to coincide with the rate region, independently of the considered utility function. We showed that for non-convex rate regions, the choice of the utility function is crucial to ensure maximum stability. The results were illustrated on the simple case of a wireless network consisting of two interacting base stations.
In a joint work with colleagues at Princeton, we extended the results of the previous paper and characterized flow-level stochastic stability for networks with non-convex or time-varying rate regions, under resource allocation based on utility maximization and for an arbitrary number of flow classes. Similarly to prior works on flow-level stability, we considered exogenous data arrivals with finite durations. However, to model many realistic situations, the rate region constraining the feasibility of resource allocation may be either non-convex or time-varying. When the rate region is fixed but non-convex, we derived a sufficient and a necessary condition for stability, which coincide when the set of allocated rate vectors has continuous contours. When the rate region is time-varying according to some stationary, ergodic process, we characterized the precise stability region. In both cases, the size of the stability region depends on the resource allocation policy, in particular on the fairness parameter in α-fair utility maximization. This is in sharp contrast with the vast existing literature on stability under fixed and convex rate regions, where the stability region coincides with the rate region for many utility-based resource allocation schemes. We further investigated the tradeoff between fairness and stability when the rate region is non-convex or time-varying, and analytically characterized this tradeoff for two-class networks. Numerical examples on both wired and wireless networks were provided to illustrate the new stability regions and tradeoffs proved in the paper.
In , we considered heterogeneous elastic traffic sources that dynamically share a common link. We proved that balancing these traffic sources decreases the mean throughput and increases the blocking probability in the presence of admission control. This result generalizes that of Dartois, derived for telephone traffic.
In we introduced two throughput metrics we refer to as flow-sampled throughput and time-sampled throughput. The former gives the throughput statistics of an arbitrary flow while the latter gives the throughput statistics of a flow in proportion to its duration. We showed that the time-sampled throughput may also be interpreted as the instantaneous throughput weighted by the number of flows, which provides useful means to evaluate and measure it.
In , we investigated with Cathy Xia, Zhen Liu and Don Towsley how the throughput of a general fork-join queueing network with blocking behaves as the number of nodes increases to infinity while the processing speed and buffer space of each node stay unchanged. The problem is motivated by applications arising from distributed systems and computer networks. One example is large-scale distributed stream processing systems where TCP is used as the transport protocol for data transfer in between processing components. Other examples include reliable multicast in overlay networks, and reliable data transfer in ad hoc networks. Using an analytical approach, the paper establishes bounds on the asymptotic throughput of such a network. For a subclass of networks which are balanced, we obtained sufficient conditions under which the network stays scalable in the sense that the throughput is lower bounded by a positive constant as the network size increases. Necessary conditions of throughput scalability are derived for general networks. The special class of series-parallel networks was then studied in greater detail. The asymptotic behavior of the throughput was characterized.
In , we considered the following stochastic bin packing process: items arrive continuously over time at a server and are packed into bins of unit size according to an online algorithm; the unpacked items form a queue. The items have random sizes with symmetric distribution. Our first contribution identifies some monotonicity properties of the queueing system that allow one to derive bounds on the queue size for the First Fit and Best Fit algorithms. As a direct application, we showed how to compute the stability region under very general conditions on the input process. Our second contribution is a study of the queueing system under heavy load. We showed how the monotonicity properties allow one to derive bounds on the speed at which the stationary queue length tends to infinity as the load approaches one. In the case of Best Fit, these bounds are tight. Our analysis shows connections between our dynamic model, average-case results on the classical bin packing problem, and planar matching problems.
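The First Fit packing rule at the heart of this process can be sketched as follows (an offline illustration of the rule itself, not the online queueing model of the paper):

```python
import random

random.seed(19)

def first_fit(items):
    """Pack items (sizes in (0, 1]) into unit bins with the First Fit
    rule: each item goes into the lowest-indexed bin where it fits,
    opening a new bin if none does.  Returns the list of bin loads."""
    bins = []
    for size in items:
        for i, load in enumerate(bins):
            if load + size <= 1.0:
                bins[i] = load + size       # reuse the first bin that fits
                break
        else:
            bins.append(size)               # no bin fits: open a new one
    return bins

items = [round(random.uniform(0.05, 0.95), 3) for _ in range(200)]
bins = first_fit(items)
```

In the queueing model, the same rule is applied online to the items at the head of the queue, and the monotonicity properties mentioned above compare the resulting queue to simpler systems.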
A network belongs to the monotone separable class if its state variables are homogeneous and monotone functions of the epochs of the arrival process. This framework contains several classical queueing network models, including generalized Jackson networks, max-plus networks, polling systems, multiserver queues, and various classes of stochastic Petri nets. We used comparison relationships between networks of this class with i.i.d. driving sequences and the GI/GI/1/1 queue to obtain the tail asymptotics of the stationary maximal dater under light-tailed assumptions on service times . The exponential rate of decay is given as a function of a logarithmic moment generating function. We exemplified this through an explicit computation of this rate for the case of queues in tandem under various stochastic assumptions.
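The (max,plus)-linear evolution is easy to exhibit on queues in tandem: departure epochs obey a recursion involving only maxima and additions. A minimal sketch, where Poisson arrivals and exponential services are illustrative assumptions:

```python
import random

random.seed(23)

def tandem_departures(arrivals, services):
    """(max,plus)-linear evolution of a tandem of single-server FIFO
    queues: for each customer, the departure from station k is
    d[k] = max(d[k-1], previous customer's d[k]) + service time,
    i.e. a max-plus product of the state by a service matrix."""
    num_stations = len(services[0])
    d_prev = [0.0] * num_stations
    out = []
    for a, svc in zip(arrivals, services):
        d = [0.0] * num_stations
        upstream = a                        # arrival to the first station
        for k in range(num_stations):
            d[k] = max(upstream, d_prev[k]) + svc[k]
            upstream = d[k]
        d_prev = d
        out.append(d[-1])                   # departure from the last station
    return out

# Poisson arrivals (rate 1) through two exponential servers (rate 2),
# so the tandem is stable.
arr, t = [], 0.0
for _ in range(2000):
    t += random.expovariate(1.0)
    arr.append(t)
svcs = [[random.expovariate(2.0), random.expovariate(2.0)] for _ in arr]
departures = tandem_departures(arr, svcs)
```

The maximal dater of the monotone separable framework is, in this example, the time elapsed between the last arrival and the last departure; it is on the tail of such quantities that the light-tailed asymptotics above bear.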
In , we concentrated on the logarithmic tail asymptotics of the stationary response time for a class of networks that admit a representation as (max,plus)-linear systems in a random medium. We were able to derive analytic results when the distributions of the holding times are light-tailed. We showed that the lack of independence may lead, in dimension bigger than one, to non-trivial effects in the asymptotics of the sojourn time. We also studied in detail a simple queueing network with multipath routing.
In a paper with Ton Dieker, we extended previous results obtained with Serguei Foss in . We studied the stationary solution to a (max,plus)-linear recursion. Our results are valid for quite general networks. In , we illustrated this by studying the asymptotics of the resequencing delay and of the size of the resequencing buffer due to multi-path routing. Because of random delays over the different paths of a system, packets or updates may arrive at the receiver in an order different from their chronological order. In such a case, a resequencing buffer at the receiver has to store disordered packets temporarily. In , we analyzed both the waiting time of a packet in the resequencing buffer and the size of this resequencing queue. We derived the exact asymptotics for the large deviations of these quantities under heavy-tailed assumptions. In contrast with results obtained for light-tailed distributions, we showed that there exist several "typical paths" that lead to the large deviation. We derived these different "typical paths" explicitly and gave heuristic rules for optimal balancing.
Active probing began by measuring end-to-end path metrics, such as delay and loss, in a direct measurement process which did not require inference of internal network parameters. The field has since progressed to measuring network metrics, from link capacities to available bandwidth and cross traffic itself, which reach deeper and deeper into the network and require increasingly complex inversion methodologies. The CCR paper is an outcome of a collaboration with S. Machiraju, D. Veitch, and J. Bolot. In this paper, we proposed inversion formulas based on queueing theory allowing one to analyze the law of cross traffic in a router from the time series of the end-to-end delays experienced by probes. We also investigated the limitations of such inversion formulas. We used the resulting insight to design practical estimators for cross traffic, which we tested in simulation and validated by using router traces.
In active probing, PASTA is invoked to justify the sending of probe packets at Poisson times in a variety of contexts. However, due to the diversity of aims and analysis techniques used in active probing, the benefits of Poisson based measurement, and the utility and role of PASTA, are unclear. With colleagues from Sprint, we have shown that PASTA is of very limited use in active probing. In particular, Poisson probes are not unique in their ability to sample in an asymptotically consistent way. Furthermore, PASTA ignores the issue of estimation variance, and we showed that when the autocorrelation function of the observed system is convex, several classes of probe point processes outperform Poisson probes in terms of variance. These issues are addressed in , where we discuss suitable alternative probing processes.
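The variance point can be checked directly: for a stationary process with autocovariance γ, the variance of the sample-mean estimator over probe times t_1, …, t_m is (1/m²) Σ_{i,j} γ(|t_i − t_j|). The sketch below is an illustrative computation (the convex autocovariance γ(u) = 0.9^u and all parameters are assumptions, not the paper's setting) comparing equally-spaced probes with Poisson probes:

```python
import random

def estimator_variance(times, gamma):
    """Variance of the sample-mean estimator (1/m) * sum_i X(t_i) for a
    stationary process with autocovariance gamma, given probe times."""
    m = len(times)
    return sum(gamma(abs(s - t)) for s in times for t in times) / m**2

gamma = lambda u: 0.9 ** u          # assumed convex autocovariance
T, m = 100.0, 20                    # observation window, number of probes

# Equally-spaced (periodic) probes.
periodic = [T * (i + 0.5) / m for i in range(m)]
var_periodic = estimator_variance(periodic, gamma)

# Poisson probes: m i.i.d. uniform times on [0, T] are a Poisson process
# conditioned on its number of points; average over many probe patterns.
random.seed(0)
draws = 2000
var_poisson = 0.0
for _ in range(draws):
    times = sorted(random.uniform(0.0, T) for _ in range(m))
    var_poisson += estimator_variance(times, gamma)
var_poisson /= draws
```

For this convex γ, the periodic pattern gives a strictly smaller estimator variance than the Poisson one, in line with the result described above.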
Active probing presently suffers from the "bottleneck" limitation: all characteristics of the path beyond the bottleneck link are unreachable with current techniques, since the bottleneck link erases all later effects. In joint work with Darryl Veitch, we are currently investigating a new tomography technique, based on end-to-end delay measurements and maximum likelihood, which should give access to several hidden metrics such as the available bandwidth and the delay distribution of every link on the path.
TREC is actively working on a book project focused on the use of the stochastic geometry framework for the modeling of wireless communications.
Stochastic geometry is a rich branch of applied probability which allows one to study random phenomena on the plane or in higher dimensions. It is intrinsically related to the theory of point processes. Initially its development was stimulated by applications to biology, astronomy and material sciences. Nowadays, it is also used in image analysis. During the 03-07 period, we contributed to proving that it could also be of use in the context of wireless communication networks. The reason for this is that the geometry of the locations of mobiles and/or base stations plays a key role, since it determines the signal to interference ratio for each potential channel and hence the possibility of establishing simultaneously some set of communications at a given bit rate.
Stochastic geometry provides a natural way of defining (and computing) macroscopic properties of wireless networks, by averaging over all potential geometrical patterns of, e.g., the mobiles. Its role is hence similar to that played by the theory of point processes on the real line in classical queueing theory. The methodology was initiated in , and it was further developed through several papers including , , , , .
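A minimal illustration of this averaging, under assumed parameters: Campbell's formula gives the mean of the interference shot-noise I = Σ_i l(|X_i|) over a Poisson point process of intensity λ as λ ∫ l(|x|) dx, which a simulation reproduces. The bounded path-loss function l(r) = (1+r)^{-4} below is an arbitrary illustrative choice:

```python
import math, random

def poisson_rv(mu, rng):
    """Poisson(mu) via counting unit-rate exponential arrivals in [0, mu]."""
    n, s = 0, rng.expovariate(1.0)
    while s < mu:
        n += 1
        s += rng.expovariate(1.0)
    return n

def shot_noise_at_origin(lam, radius, rng, l):
    """One realization of I = sum_i l(|X_i|), where {X_i} is a Poisson
    process of intensity lam restricted to a disc of the given radius."""
    n = poisson_rv(lam * math.pi * radius**2, rng)
    total = 0.0
    for _ in range(n):
        r = radius * math.sqrt(rng.random())   # uniform radius in the disc
        total += l(r)
    return total

rng = random.Random(42)
l = lambda r: (1.0 + r) ** -4                  # assumed bounded path loss
lam, R, N = 1.0, 20.0, 1000
mean_I = sum(shot_noise_at_origin(lam, R, rng, l) for _ in range(N)) / N

# Campbell's formula: E[I] = lam * 2*pi * int_0^inf r * l(r) dr, and
# int_0^inf r (1+r)^-4 dr = 1/6, so E[I] = pi/3 for lam = 1.
campbell = 2 * math.pi * lam * (1.0 / 6.0)
```

The empirical mean agrees with π/3 up to Monte Carlo error and the (small) truncation of the far field at radius R.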
The book will survey these papers and more recent results obtained by this approach for analyzing key properties of wireless networks such as coverage or connectivity, and for evaluating the performance of a variety of protocols used in this context such as medium access control or routing.
Connectivity is probably the first issue that has to be addressed when considering large-scale MANETs. The mathematical analysis of this problem involves random graphs associated with various models typically driven by Poisson point processes on the plane. Percolation properties of these graphs (existence of a giant component) are interpreted as an indication that the connectivity of the ad hoc network scales well with the size. Probably the first percolation model explicitly proposed for wireless communication networks was studied by Gilbert as early as 1961 (see ). It is now considered as the classical continuum model in percolation theory and accepted in wireless communications as such, despite the fact that it ignores the interference effect that arises when many transmitters are active at the same time. Recently the use of shot-noise processes allowed us to study the impact of interference on the connectivity of large-scale ad-hoc networks using percolation theory (see , ). An important observation is that, contrary to Gilbert's model, connectivity is not always improved by densification.
The existing model assumes constant emitted powers and bi-directional connections. Current work in this domain (in TREC) concerns extensions to random (in particular controlled) powers and uni-directional communications.
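Gilbert's interference-free model itself is easy to experiment with. The sketch below (an illustrative toy with arbitrary parameters) draws a Poisson set of nodes in a square, connects pairs within a fixed radius, and measures the fraction of nodes in the largest component below and above the percolation threshold (critical mean degree ≈ 4.5):

```python
import random

def gilbert_giant_fraction(lam, r, side, rng):
    """Fraction of nodes in the largest component of the Gilbert disc graph:
    Poisson(lam) points in a side x side square, an edge whenever two
    points are within distance r (interference is ignored)."""
    # Poisson number of points via unit-rate exponential arrivals.
    mu, n, s = lam * side * side, 0, rng.expovariate(1.0)
    while s < mu:
        n += 1
        s += rng.expovariate(1.0)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    parent = list(range(n))
    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        xi, yi = pts[i]
        for j in range(i + 1, n):
            if (xi - pts[j][0]) ** 2 + (yi - pts[j][1]) ** 2 <= r * r:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

rng = random.Random(7)
sub = gilbert_giant_fraction(0.5, 1.0, 20.0, rng)   # mean degree ~1.6: subcritical
sup = gilbert_giant_fraction(3.0, 1.0, 20.0, rng)   # mean degree ~9.4: supercritical
```

Below the threshold the largest component is a negligible fraction of the nodes; above it, almost all nodes are connected.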
The theory of stochastic ordering provides very elegant tools for the comparison of various random quantities. We are interested in the comparison of shot-noise fields. More precisely, under the assumption that the intensity measures or the marks are idcx (increasing directionally convex) ordered, we show orderings on the respective shot-noise fields. Our goal is also to illustrate applications of these results in SINR (signal-to-interference-and-noise ratio) models. This is a work in progress and the subject of a paper in preparation.
Belief propagation is a non-rigorous decentralized and iterative algorithmic strategy for solving complex optimization problems on huge graphs by purely-local propagation of dynamic messages along their edges. Its remarkable performance in various domains of application, from statistical physics to image processing or error-correcting codes, has motivated much theoretical work on the crucial question of convergence of beliefs despite the cycles, and in particular on the way convergence evolves as the size of the underlying graph grows to infinity. However, a complete and rigorous understanding of those remarkable emergence phenomena (general conditions for convergence, asymptotic speed and influence of the initialization) is still missing.
A new idea consists in using the topological notion of local weak convergence of random geometric graphs to define a limiting local structure as the number of vertices grows to infinity, and then to replace the asymptotic study of the phenomenon by its direct analysis on the infinite graph. This method has already allowed us to establish asymptotic convergence at constant speed for the special case of the famous optimal assignment problem, resulting in a distributed algorithm with asymptotic complexity O(n²), compared to O(n³) for the best-known exact algorithm. We hope this method will extend easily to other optimization problems on graphs with cycles.
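A minimal sketch of max-product belief propagation for the assignment problem (an illustrative implementation of the standard min-sum/max-product message updates, not the algorithm studied above): with a unique optimum, the messages converge and the decoded matching is the maximum-weight one, which can be verified against brute force for small n:

```python
import itertools, random

def bp_assignment(w, iters=5000):
    """Max-product belief propagation for maximum-weight bipartite perfect
    matching on cost matrix w. Messages follow
        m_{a_i -> b_j} = w[i][j] - max_{k != j} m_{b_k -> a_i}
    (and symmetrically); row i is matched to argmax_j m_{b_j -> a_i}.
    Converges to the optimum when it is unique."""
    n = len(w)
    ab = [[0.0] * n for _ in range(n)]   # ab[i][j]: message a_i -> b_j
    ba = [[0.0] * n for _ in range(n)]   # ba[j][i]: message b_j -> a_i
    for _ in range(iters):
        new_ab = [[w[i][j] - max(ba[k][i] for k in range(n) if k != j)
                   for j in range(n)] for i in range(n)]
        new_ba = [[w[i][j] - max(ab[k][j] for k in range(n) if k != i)
                   for i in range(n)] for j in range(n)]
        ab, ba = new_ab, new_ba
    return [max(range(n), key=lambda j: ba[j][i]) for i in range(n)]

random.seed(3)
n = 4
w = [[random.random() for _ in range(n)] for _ in range(n)]
match = bp_assignment(w)

# Brute force over all n! assignments, for comparison.
best = max(itertools.permutations(range(n)),
           key=lambda p: sum(w[i][p[i]] for i in range(n)))
```

Each BP iteration costs O(n²) message updates with O(n) work per message, which is where the complexity advantage over exact O(n³) algorithms comes from on large sparse instances.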
Freshman calculus tells us how to find a minimum x* of a smooth function f(x): set the derivative f'(x*) = 0 and check f''(x*) > 0. The related series expansion tells us, for points x near to x*, how the distance δ = |x − x*| relates to the difference ε = f(x) − f(x*) in f-values: ε scales as δ². This scaling exponent 2 persists for functions of several variables: if x* is a local minimum and ε(δ) := min{f(x) − f(x*) : |x − x*| = δ}, then ε(δ) scales as δ² for a generic smooth function f.
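The exponent 2 is nothing more than the Taylor expansion at a nondegenerate minimum:

```latex
f(x) = f(x^*) + \tfrac{1}{2}\,(x-x^*)^{\top}\nabla^2 f(x^*)\,(x-x^*) + O(\delta^3),
\qquad \nabla f(x^*) = 0,
```

so that, minimizing over the sphere |x − x*| = δ, one gets ε(δ) = ½ λ_min δ² + O(δ³), where λ_min > 0 is the smallest eigenvalue of the Hessian ∇²f(x*).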
Combinatorial optimization, exemplified by the traveling salesman problem (TSP), is traditionally viewed as a quite distinct subject, with theoretical analysis focusing on the number of steps that algorithms require to find the optimal solution. To make a connection with calculus, compare an arbitrary tour t through n points with the optimal (minimum-length) tour t*, by considering the two quantities δ(t), the proportion of edges of t that are not edges of t*, and ε(t) = (len(t) − s(n))/s(n), where s(n) is the length of the minimum length tour. Now define ε_n(δ) to be the minimum value of ε(t) over all tours t for which δ(t) ≥ δ. Although the function ε_n(δ) will depend on n and the problem instance, we anticipate that for typical instances drawn from a suitable probability model it will converge in the n → ∞ limit to some deterministic function ε(δ). The universality paradigm from statistical physics suggests there might be a scaling exponent α, defined by ε(δ) ~ δ^α as δ → 0, and that the exponent should be robust under model details.
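For small instances the function ε_n(δ) can be computed exhaustively. The sketch below is an illustrative toy on n = 8 uniform random points (the precise conventions for δ and ε are one natural choice, not necessarily those used in the analysis): it enumerates all tours, finds the optimum, and records the minimal relative excess at each edge-difference proportion:

```python
import itertools, math, random

def tour_length(order, pts):
    """Length of the closed tour visiting pts in the given cyclic order."""
    return sum(math.dist(pts[order[i - 1]], pts[order[i]])
               for i in range(len(order)))

random.seed(5)
n = 8
pts = [(random.random(), random.random()) for _ in range(n)]

# All tours starting at point 0; each undirected tour appears twice.
tours = [(0,) + p for p in itertools.permutations(range(1, n))]
s_n, opt = min((tour_length(t, pts), t) for t in tours)
opt_edges = {frozenset((opt[i - 1], opt[i])) for i in range(n)}

# eps[delta] = minimal relative excess among tours whose proportion of
# non-optimal edges equals delta.
eps = {}
for t in tours:
    edges = {frozenset((t[i - 1], t[i])) for i in range(n)}
    delta = len(edges - opt_edges) / n
    excess = tour_length(t, pts) / s_n - 1.0
    eps[delta] = min(eps.get(delta, float('inf')), excess)
```

Plotting log ε_n(δ) against log δ for growing n is the brute-force analogue of the Monte Carlo estimates of the scaling exponent mentioned below.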
There is fairly strong evidence that for TSP the scaling exponent is 3. This is based on analytic methods in a mean-field model of interpoint distances (distances between pairs of points are random, independent for different pairs, thus ignoring geometric constraints) and on Monte Carlo simulations for random points in 2, 3 and 4 dimensional space. The analytic results build upon a recent probabilistic reinterpretation of the work of Krauth and Mézard establishing the average length of mean-field TSP tours. But neither part of these TSP assertions is rigorous, and indeed rigorous proofs in d dimensions seem far out of reach of current methodology.
In , with David Aldous and Charles Bordenave, we study the relation between the minimal spanning tree (MST) on many random points and the "near-minimal" tree which is optimal subject to the constraint that a proportion δ of its edges must be different from those of the MST. Heuristics suggest that, regardless of details of the probability model, the ratio of lengths should scale as 1 + Θ(δ²). We prove this scaling result in the model of the lattice with random edge-lengths and in the Euclidean model.
A very simple example of an algorithmic problem solvable by dynamic programming is to maximize, over subsets S of {1, …, n} containing no two consecutive integers, the objective function Σ_{i∈S} ξ_i for given ξ_i > 0. This problem, with random ξ_i, provides a test example for studying the relationship between optimal and near-optimal solutions of combinatorial optimization problems. In , we show that, amongst solutions differing from the optimal solution in a small proportion δ of places, we can find near-optimal solutions whose objective function value differs from the optimum by a factor of order δ² but not smaller order. We conjecture this relationship holds widely in the context of dynamic programming over random data, and Monte Carlo simulations for the Kauffman-Levin NK model are consistent with the conjecture. This work is a technical contribution to a broad program initiated in Aldous-Percus (2003) of relating such scaling exponents to the algorithmic difficulty of optimization problems.
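The dynamic program for this test problem is two lines. A sketch (illustrative, with 0-based indices and arbitrary random data) solving the no-two-consecutive maximization, checked against brute-force enumeration:

```python
import random

def dp_best(xi):
    """Maximize sum_{i in S} xi[i] over subsets S of {0, ..., n-1}
    containing no two consecutive indices, by dynamic programming:
    'take' is the best value with index i selected, 'skip' without it."""
    take = skip = 0.0
    for x in xi:
        take, skip = skip + x, max(take, skip)
    return max(take, skip)

random.seed(11)
xi = [random.random() for _ in range(12)]
best = dp_best(xi)
```

Perturbing the optimal subset in a proportion δ of places and re-optimizing over such perturbations is how the δ² behaviour can be explored numerically.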
The collaboration with the new Paris Lab of THOMSON has developed rapidly since its creation. The scientific ties with C. Diot, L. Massoulié and A. Chaintreau are quite strong and materialize in:
joint seminars and reading groups, notably the new Paris-Networking series (
http://
joint research actions, particularly on routing in ESS mesh WiFi networks and on CDMA networks; these actions were presented at the
INRIA / THOMSON Workshop organized at ENS Paris, December 17 (
http://
a grant from Thomson which allows us to invite well-known scientists in communications (e.g., V. Anantharam from Berkeley);
various ongoing projects of joint proposals in national and European agencies;
several joint papers published this year or to be published soon, including two Infocom'07 and one CoNext'07 papers;
a joint patent on routing in mesh networks.
The interaction with the research lab of Sprint (Sprint ATL, in Burlingame, California) is made possible through a research grant. This interaction has been focused on two main topics:
The design of active probing methods for the estimation of internal properties of core or access networks based on end-to-end measurements. In the paper we proposed inversion formulas allowing one to analyze the law of cross traffic in a router from the end-to-end delay of probes. We also investigated the limitations of such inversion formulas. There are several continuations of this line of thought currently under investigation.
The analysis of risks on the Internet. In , we considered the problem of whether buying insurance to protect the Internet and its users from security risks makes sense, and if so, of identifying specific benefits of insurance and designing appropriate insurance policies.
Other projects have been started on the architecture of heterogeneous wireless networks. This collaboration is quite fruitful. It led to three papers this year (including an IMC'07 paper and a CCR paper).
This six-year grant started in September 2006 and bears on the modeling of mobile ad hoc networks. It allowed us to hire in 2007 a PhD student, D. Yogeshwaran from IISc Bangalore. The work of D. Yogeshwaran bears on the stochastic comparison of the Poisson point process and other classes of point processes. A reading group on permanental and determinantal point processes, which respectively exhibit attraction and repulsion, was organized by Yogeshwaran Dhandapani and Frédéric Morlot.
We completed the first phase of a research project with the Network Strategy Group of Alcatel Antwerp (Danny de Vleeschauwer and Koen Laevens) and with N2NSoft (Laurent Fournié and Dohy Hong). This project was focused on the modeling of the interaction of a large collection of multimedia sources that join and leave and that share an access network. The main objective was the design of optimal choking policies for the transport of layer-encoded video in such access networks. The methodology that we used and implemented is based on Markov decision theory. The second phase of the project was recently approved and bears on the comparison with admission control.
This year, the collaboration with France Télécom is not part of any formal framework anymore and takes the form of "spontaneous collaborations" with two researchers:
Mohamed Karray, with whom we work on the coverage and capacity of CDMA/UMTS networks. This resulted in three patents filed by INRIA and FT. The pertinence of our approach has already been recognized by Orange. This operator uses some of our methods in the program SERT (see Section ) integrated into its dimensioning tools. This year the collaboration led to a new paper presented at the IEEE Infocom conference and to the defense of M. K. Karray's PhD thesis, co-supervised by B. Błaszczyszyn.
Frédéric Morlot, who started a PhD under the supervision of F. Baccelli. The work of Frédéric Morlot bears on the modeling of hot spots and on motion models compatible with such hot spots. A first Markovian model was studied by F. Morlot. A reading group on permanental and determinantal point processes, which respectively exhibit attraction and repulsion, was organized by Frédéric Morlot and Yogeshwaran Dhandapani.
The France-Stanford Center has accepted a joint project entitled "Analysis and Design of Next-Generation Wireless Networks". This project was funded in 2006-2007. A second joint paper on power control is under preparation.
TREC obtained in November 2006 the INRIA status of
Associated Lab (équipe associée) for the group of Prof. Darryl Veitch of the University of Melbourne (
http://
TREC was a partner in the
European Network of Excellence (NoE) called Euro-NGI (2004–2006) on the next generation Internet and is currently a partner of the continuation of this NoE called Euro-FGI (
http://
TREC is a partner in the new NoE (starting 2007) Euro-NF which gathers a smaller group of about 30 partners (
http://
TREC is a partner in ARC IFANY (
http://
In 2007 members of TREC participated in the evaluation committee of an ANR Telecom proposal.
the following scientists gave talks in 2007:
France
Gersende Fort from ENST, talking on “Fluid limits and stability of Markov chains, with applications to the study of MCMC samplers”, February 16,
Marc Lelarge from INRIA-ENS, talking on “Near-minimal spanning trees: a scaling exponent in probability models”, February 28,
Irina Ignatiouk-Robert from Université de Cergy-Pontoise, talking on “Martin boundary of random walks on a half-space”, March 8,
Defense of the PhD thesis of Mohamed Karray from Orange Labs/ENST, titled “Analytic evaluation of wireless cellular networks performance by a spatial Markov process accounting for their geometry, dynamics and control schemes”, September 10,
Defense of the PhD thesis of Minh Anh Tran from ENS/Ecole Polytechnique, titled “Insensitivity in queueing networks and applications to computer resource sharing”, October 29,
François Delarue from CNRS-Université Paris 7, talking on “Distributed algorithms in a Markovian environment”, November 21,
INRIA / THOMSON Workshop, December 17;
Paris-Networking event (
http://
Dan-Cristian Tomozei from THOMSON, talking on “Spectral clustering and Markov tree models for collaborative filtering”,
Justin Salez from ENS/INRIA, talking on “Belief propagation: an asymptotically optimal decentralized algorithm for the random assignment problem”,
Augustin Chaintreau from THOMSON, talking on “The diameter of opportunistic networks”,
François Baccelli from INRIA/ENS, talking on “Power Control and Routing in Wireless Mesh Networks”,
Theodoros Salonidis from THOMSON, talking on “High-performance protocol design for wireless mesh networks”,
Bartek Błaszczyszyn from INRIA/ENS, talking on “M/D/1/1 loss systems with interference and application to transmit-only sensor networks”.
Europe
Olivier Dousse from Deutsche Telekom Laboratories, talking on “On packet dynamics along CSMA/CA multi-hop routes”, March 20,
Takis Konstantopoulos from Heriot-Watt University, Edinburgh, talking on “Stochastic fluid queues driven by local time processes”, May 2 and May 14,
Serguei Foss from Heriot-Watt University, Edinburgh, talking on “Asymptotics of randomly stopped sums in the presence of heavy tails”, November 7,
Nikita Vvedenskaya from IITP, Moscow, talking on “Differential equations arising in network problems”, November 7,
Vsevolod Shneer from EURANDOM, talking on “Tail asymptotics for the busy period of an M/G/1 queue”, November 28,
America, Asia, Australia
Peter Marbach from University of Toronto (INRIA visiting professor), talking on “Interaction of rate and medium access control in wireless networks”, January 16;
Paris-Networking event (
http://
David McDonald from University of Ottawa, talking on “A particle system in interaction with a rapidly varying environment: Mean field limits and applications”, February 9,
Roland Malhamé from Ecole Polytechnique de Montréal, talking on “The Nash equilibrium presumption principle and decentralized control of large stochastic systems”, February 21,
Venkat Anantharam from University of California, Berkeley, talking on “An information-theoretic view of stochastic resonance”, March 29,
Srikanth K. Iyer from Indian Institute of Science, Bangalore, talking on “Largest nearest neighbour distances in random geometric graphs”, May 23,
Edmund Yeh from Yale University, talking on “Characterization of the critical density for percolation in random geometric graphs”, July 12,
Daryl Daley from Australian National University (Center for Mathematics and Applications), talking on “Renewal theory and heavy tails: a survey”, October 17,
TREC is a founding member of and participates to Paris-Networking (
http://
M. Lelarge runs the project-team seminar
http://
F. Morlot and D. Yogeshwaran run the reading group on point processes.
F. Baccelli organized a "séance publique sur les sciences de l'information" at the French Academy of Sciences in October 2007.
B. Błaszczyszyn is a member of the organizing committee of the Scientific Colloquium of INRIA Rocquencourt
Le modèle et l'algorithme (
http://
P. Brémaud is a member of the editorial board of the following journals: Journal of Applied Probability, Advances in Applied Probability, Journal of Applied Mathematics and Stochastic Analysis;
F. Baccelli is a member of the editorial board of the following journals: QUESTA, Journal of Discrete Event Dynamical Systems, Mathematical Methods of Operations Research, Advances in Applied Probability.
Graduate course (M2) on “Dynamics and Algorithmics of Communication Networks” by F. Baccelli and J. Mairesse (40h); program Mastère Parisien de Recherche en Informatique.
Graduate Course on point processes, stochastic geometry and random graphs (program “Master de Sciences et Technologies”), by F. Baccelli, B. Blaszczyszyn and L. Massoulié (45h).
Course on Information Theory: tutorial sessions (TD, 36h) by M. Lelarge,
Undergraduate course (master level, MMFAI) by F. Baccelli, A. Chaintreau and M. Lelarge on Communication Networks (48h).
Undergraduate course (master level) by F. Baccelli and P. Brémaud on applied probability (48h).
Member of the thesis committee of M. K. Karray (Orange Labs and ENST), Minh Anh Tran (ENS and Ecole Polytechnique).
Reviewer of the thesis of M. Durvy (EPFL) and Julien Michel (ENS Lyon - habilitation).
Organization of the workshop "Mathematical Modeling and Analysis of Computer Networks" with P. Marbach at ENS (
http://
Organization of a session on TCP modeling with D. McDonald at INFORMS Applied Probability 07, Eindhoven, July 07.
Member of the program committee of the following conferences: Infocom'07 (
http://
Courses
CIMPA Course on the mathematics of networks, La Pedrera, Uruguay, February 07 (
http://
"Cours de rentrée de la majeure de mathématiques de l'école polytechnique", Hyères, September 07.
Keynote lectures:
10-th anniversary of LIAMA, Beijing January 07;
Address at the French Academy of Sciences, June 07;
Maxwell Lecture, Maxwell Institute Colloquium, Edinburgh, UK, June 2007;
Colloquium of the Toulouse Mathematical Institute, June 07;
IFIP-Performance'07, Köln, October 07;
Colloquium LIP, ENS Lyon, December 07.
Presentation at the following conferences:
Workshop on Applied Probability and Advanced Communications Networks, Bedlewo, Poland, May 2007 (
http://
ICIAM'07, July 07, ETH Zürich;
Colloque STIC, November 07, Paris.
Presentation at the following seminars:
Tsing-Hua University EECS, January 07, Beijing;
ENS-EPFL joint conference, January 07, Paris;
Thomson wireless seminar, February 07, Paris;
Mascos seminar, November 07, Melbourne;
Nicta seminar, November 07, Melbourne.
Scientific adviser of the "Direction Scientifique" of INRIA for communications.
Edition of the ICCE (Information, Communication and Computation Everywhere) document for the preparation of the new strategic plan;
Partnership ALU-INRIA: organization of the workshop on self optimization of communication networks (in February 07) and shaping of the joint lab (signature in December 07) on this topic.
Partnership FT-INRIA: monitoring of CREs and CRCs.
Organization of the INRIA-Thomson meeting in December 07 (
http://
Organization of the Evaluation Seminar for Com B (
http://
Chairman of the think tank "Internet du Futur" commissioned by DGE.
Member of the thesis committee of M. Karray (Orange Labs and ENST),
Member of the program committee of Infocom 07 (
http://
Presentations at the following conferences:
Conference on Computer Communications (INFOCOM) Anchorage, Alaska 2007 (
http://
Workshop on Spatial Stochastic Modeling of Wireless Networks (SpaSWiN), Limassol, Cyprus, April 2007,
Workshop on Applied Probability and Advanced Communications Networks, Bedlewo, Poland, May 2007 (
http://
Stochastic Networks Workshop, Heriot-Watt University, Edinburgh, July 2007 (
http://
14th Applied Probability Society of INFORMS Conference, Eindhoven University, July 2007 (
http://
Presentation at the INRIA/THOMSON seminar, ENS Paris, December 2007.
Presentation at the Research Seminar on Stochastic Network Engineering, France Telecom R&D, Issy-les-Moulineaux, September 2007 (
http://
Presentation at the 14th Applied Probability Society of INFORMS Conference, Eindhoven University, July 2007 (
http://
Pre-defense of the PhD thesis at the Politecnico di Torino, Torino, December, 2007.
Presentation at the Conference on Computer Communications (INFOCOM) Anchorage, Alaska 2007 (
http://
Member of the program committee of Inter-Perf 2007 (
http://
Presentation at the following conferences:
Queueing theory without limits: transient and asymptotic analysis, Eurandom, October, 2007 (
http://
14th Applied Probability Society of INFORMS Conference, Eindhoven University, July 2007 (
http://
International Conference on Analysis of Algorithms (AofA'07), France, June 2007 (
http://
Workshop on Applied Probability and Advanced Communications Networks, Bedlewo, Poland, May 2007 (
http://
ALEA 2007, CIRM, March, 2007 (
http://
Participation at the conference Recent Developments in Random Walks, United Kingdom, July 2007 (
http://
Organizer of the Workshop on Mathematical Modeling and Analysis of Computer Networks at ENS, Paris, June 2007 (
http://
Presentation at the 46th IEEE Conference on Decision and Control (
http://
Presentation at the Research Seminar on Stochastic Network Engineering, France Telecom R&D, Issy-les-Moulineaux, September 2007 (
http://
Presentation at the INRIA/THOMSON seminar, ENS Paris, December 2007.
Participation at the 14th Applied Probability Society of INFORMS Conference, Eindhoven University, July 2007 (
http://
Presentations at the International Symposium on Computer Performance, Modeling, Measurements, and Evaluation (Performance 2007), Cologne, Germany, October 2007 (
http://
Presentation at the following conferences
Workshop on Stochastic Geometry and Spatial Statistics, Neudietendorf, Germany, September 2007 (Poster Presentation),
PCMI Summer School on Statistical Mechanics, Park City, Utah, USA. July 2007.