TREC is a joint INRIA-ENS project-team focused on the modeling, control and design of communication networks and protocols. Its methodological activities are combined with projects defined with industrial partners, notably Alcatel-Lucent, Technicolor, Sprint and Orange. The main research directions are:

modeling and performance analysis of wireless networks: network information theory, coverage and load analysis, power control, evaluation and optimization of the transport capacity, self-organization;

stochastic network dynamics: stability, worst-case performance analysis using the (max,plus) algebra, network calculus, perfect simulation, inverse problems, distributed consensus;

economics of networks: epidemic risk models, incentives, security, insurance, diffusion of innovations;

the development of mathematical tools based on stochastic geometry, random geometric graphs and spatial point processes: Voronoi tessellations, coverage processes, random spatial trees, random fields, percolation;

combinatorial optimization and analysis of algorithms: random graphs, belief propagation.

**Modeling and performance analysis of wireless networks.** Our main focus was on cellular networks, mobile ad hoc networks (MANETs) and their vehicular variants, called VANETs.

Our main advances about wireless networks have been based on the development of analytical tools for their performance analysis and on new results from network information theory.

Concerning cellular networks, the main questions bear on coverage and capacity in large CDMA networks when intercell interference and power control are taken into account. Our main focus has been on the design of: 1) a strategy for the densification and parameterization of UMTS and future OFDM networks that is optimized for both voice and data traffic; 2) new self-organization and self-optimization protocols for cellular networks, e.g. for power control, sub-carrier selection, load balancing, etc.

Concerning MANETs, we investigated MAC layer scheduling algorithms, routing algorithms and power control. The MAC protocols we considered are based on Aloha and CSMA as well as their cognitive radio extensions. We investigated opportunistic routing schemes for MANETs and VANETs. The focus was on cross layer optimizations allowing one to maximize the transport capacity of multihop networks.

**Theory of network dynamics.** TREC is pursuing the
analysis of network dynamics by algebraic methods. The
mathematical tools are those of discrete event dynamical
systems: semi-rings, and in particular network calculus,
ergodic theory, perfect simulation, stochastic
comparison, inverse problems, large deviations, etc.
Network calculus gives results on worst-case performance
evaluation; ergodic theory is used to assess the
stability of discrete event dynamical systems; inverse
problem methods are used to estimate some network
parameters from external observations and to design
network probing strategies.

TREC has also been studying gossip-based algorithms. These algorithms are a first step towards multi-agent coordination in smart networks. We looked at the convergence of algorithms along random routes to estimate the average of network data. We also developed voting algorithms which compute a majority in a distributed way, and our latest research was centered on computing their convergence time.
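To illustrate the averaging step, here is a minimal sketch of pairwise gossip averaging (a generic textbook scheme, not the team's specific algorithm): repeated pairwise averaging between random node pairs preserves the sum of the values, so every node converges to the network-wide mean.

```python
import random

def gossip_average(values, steps=20000, seed=0):
    """Pairwise gossip: at each step two random nodes replace their
    values by the pair's average.  The sum is conserved and the
    disagreement shrinks, so all values converge to the global mean."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

data = [1.0, 5.0, 3.0, 7.0, 4.0]
# global mean of data is 4.0; all nodes converge to it
assert all(abs(v - 4.0) < 1e-6 for v in gossip_average(data))
```

Voting (majority) variants replace the averaging update by a quantized one; their convergence-time analysis is the harder question mentioned above.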

**The development of stochastic geometry and random geometric graph tools.** Stochastic geometry is a rich
branch of applied probability which allows one to
quantify random phenomena on the plane or in higher
dimension. It is intrinsically related to the theory of
point processes and also to random geometric graphs. Our
research is centered on the development of a methodology
for the analysis, the synthesis, the optimization and the
comparison of architectures and protocols to be used in
wireless communication networks. The main strength of
this method is its capacity for taking into account the
specific properties of wireless links, as well as the
fundamental question of scalability.

**Combinatorial optimization and analysis of algorithms.** In this research direction, started
in 2007, we build upon our expertise on random
trees/graphs and our collaboration with D. Aldous in
Berkeley. Sparse graph structures have proved useful in a
number of applications from information processing tasks
to the modeling of social networks. We obtained new
results for stochastic processes taking place on such
graphs. Thereby, we were able to analyze an iterative
message passing algorithm for the random assignment
problem and to characterize its performance. Likewise, we
made a sensitivity analysis of such processes and
computed the corresponding scaling exponents (for a
dynamic programming optimization problem). We also
derived analytic formulas for the spectrum of the
adjacency matrix of diluted random graphs.

**Economics of networks.** The premise of this
relatively new direction of research, developed jointly
with Jean Bolot [SPRINT ATL] is that economic incentives
drive the development and deployment of technology. Such
incentives exist if there is a market where suppliers and
buyers can meet. In today's Internet, such a market is
missing. We started by looking at the general problem of
security on the Internet from an economic perspective and
derived a model showing that network externalities and
misaligned incentives are responsible for a low
investment in security measures. We then analyzed the
possible impact of insurance. A new research direction
started on the economic value of user localization in
wireless networks.

We have investigated various applications of our research results with the following industrial partners and user associations:

**Wireless Networks**

Alcatel-Lucent Bell Laboratories (L. Thomas and L. Roulet) on self optimization in cellular networks.

Sprint (J. Bolot and H. Zang) on user localization.

Orange (M. Karray) on cellular networks.

**Network Dynamics**

Thalès and Real-Time-at-Work on embedded networks.

Grenouille on probing in access networks.

**Network Economics**

Sprint (J. Bolot) on user localization.

SERT (Spatial Erlang for Real Time services) was a software tool designed by M. Karray [Orange Labs, Issy] for the evaluation of various properties of large CDMA networks, in particular the probability that calls are blocked due to the unfeasibility of the power control inherent to CDMA. This tool is based on the research conducted with Orange Labs and is now included in UTRANDIM, a current dimensioning tool of Orange Corporate for UMTS and LTE networks. The original approach of SERT is constantly developed and enriched in collaboration with Orange Labs. In particular, this year research has been undertaken under contract number CRE 46146063-A012 between INRIA and France Télécom (cf. Section and ) on the impact of shadowing on the quality of service in wireless cellular networks.

The work on the self optimization of cellular networks based on Gibbs' sampler (see Section ), carried out in the joint laboratory with Alcatel-Lucent, led to the development of a software prototype that was presented by C. S. Chen at the INRIA Alcatel-Lucent joint laboratory seminar in March 2010 and demonstrated at the Alcatel-Lucent Bell Labs Open Days in May 2010.

The envelope technique described in Section has been implemented in the software tool PSI2, in collaboration with the MESCAL team [INRIA Grenoble - Rhône-Alpes].

This axis bears on the analysis and the design of wireless access communication networks. Our contributions are organized in terms of network classes: cellular networks, wireless LANs, MANETs and VANETs. We also have a section on generic results that concern more general wireless networks. We are interested both in macroscopic models, which are particularly important for economic planning, and in models allowing the definition and the optimization of protocols. Our approach combines several tools: queueing theory, point processes, stochastic geometry, random graphs, distributed control algorithms and self-organization protocols.

The activity on cellular networks has several complementary facets ranging from performance evaluation to protocol design. The work is mainly based on strong collaborations with Alcatel-Lucent, Orange Labs and Sprint.

We studied the impact of shadowing, as well as of the path-loss exponent and the network architecture, on the quality of service (QoS) in wireless cellular networks. This impact is primarily seen in the choice of the mobile's serving base station (BS) as the one received with the strongest signal (and not necessarily the closest one) and in the mobile's *path-loss* with respect to this serving BS. Secondarily, the shadowing impacts the so-called mobile's *interference factor*, defined as the ratio of the sum of the path-gains from the interfering BSs to the path-gain from the serving BS. These are two key ingredients in the analysis of wireless cellular networks; in particular, they are explicitly present in the analysis of *blocking probabilities* of streaming users. The study of their mean values can explain the behavior of more involved QoS metrics.

Using appropriate stochastic models, we studied
numerically the impact of the path-loss exponent and the
variance of the shadowing on the blocking probability in
the case of hexagonal network architectures. We explained
the observed results by the further, numerical and
analytical study of the mean path-loss and interference
factor in hexagonal networks. We also compared them to
those obtained for irregular (Poisson) network
architectures. We observe, as commonly expected, that
*a strong variance of the shadowing increases the mean
path-loss with respect to the serving BS, which in
consequence increases the blocking probability*.

We also obtained a surprising result that in some
cases
*an increase of the variance of the shadowing can
significantly reduce the mean interference factor and, in
consequence, also the blocking probability*. We
confirmed our findings by a mathematical analysis of the
respective models. We also obtained
*fully explicit, analytical results for the mean
path-loss and interference factors in the case of the
infinite Poisson network with an arbitrary distribution
of the shadowing*.

This research has been undertaken under the 2010 contract CRE 46146063-A012 between INRIA and France Télécom (see Section ). Partial results were presented at IFIP WMNC'10. A more complete journal paper is in preparation.

Three patents were filed under the joint laboratory.

In cellular networks, the neighbor cell list (NCL) has an important impact on the number of dropped calls and is traditionally optimized manually with the help of planning tools. In , a method for automatically optimizing an NCL was presented, which consists of an initialization using a self-configuration phase, followed by a self-optimization phase that further refines the NCL based on measurements provided by mobile stations during network operation. The performance of the proposed methods was evaluated for different user speeds and different NCL sizes. In addition, the convergence speed of the proposed self-optimization method was evaluated. It was shown that when about 6000 measurements are reported by mobile stations, the proposed self-optimization method attains a stable maximum performance with a success rate of about 99%.

In a wireless network composed of randomly scattered nodes, the characterization of the distribution of the best signal quality received from a group of nodes is of primary importance for many network design problems. In , we developed a framework for analyzing this distribution using shot-noise models for the interference field. We first identified the joint distribution of the interference and the maximum signal strength. We then represented the best signal quality as a function of these two quantities. Particular practical scenarios were also analyzed in which explicit expressions were obtained.

Cellular networks are usually modeled by placing the base stations according to a regular geometry such as a grid, with the mobile users scattered around the network either as a Poisson point process (i.e. uniform distribution) or deterministically. These models have been used extensively for cellular design and analysis but suffer from being both highly idealized and not very tractable. Thus, complex simulations are used to evaluate key metrics such as coverage probability for a specified target rate (equivalently, the outage probability) or average/sum rate. More tractable models have long been desirable. In a joint work with J. Andrews and R. Ganti [UT Austin, USA] , we developed general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) based on homogeneous Poisson point processes and derived the coverage probability and rate. Under very general assumptions, the resulting expressions for the SINR cumulative distribution function involve quickly computable integrals, and in some important special cases of practical interest these integrals can be simplified to common integrals (e.g., the Q-function) or even to exact and quite simple closed-form expressions. We also derived the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compared the coverage predictions obtained by this approach to the standard grid model and an actual base station deployment. We observed that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in urban cellular networks with highly variable coverage radii.
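The tractability of the Poisson model can be illustrated with a short Monte Carlo sketch (our own simplified, interference-limited setup, not the paper's code). For path-loss exponent 4, Rayleigh fading and no noise, nearest-BS association yields the coverage probability 1 / (1 + sqrt(T) (pi/2 - arctan(1/sqrt(T)))), which a direct simulation reproduces:

```python
import math, random

def ppp_distances(lam, rmax, rng):
    """Distances from the origin to the points of a planar Poisson
    process: the squared distances form a rate lam*pi Poisson process."""
    d2, out = 0.0, []
    while True:
        d2 += rng.expovariate(lam * math.pi)
        if d2 > rmax * rmax:
            return out
        out.append(math.sqrt(d2))

def coverage_prob_mc(T=1.0, lam=1.0, alpha=4.0, rmax=10.0, runs=3000, seed=1):
    """P(SIR > T) for a user at the origin attached to its nearest BS,
    with Rayleigh fading (exponential power) and path loss r**-alpha."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(runs):
        r = ppp_distances(lam, rmax, rng)
        if not r:
            continue
        sig = rng.expovariate(1.0) * r[0] ** (-alpha)
        interf = sum(rng.expovariate(1.0) * ri ** (-alpha) for ri in r[1:])
        if sig > T * interf:
            covered += 1
    return covered / runs

# closed form for alpha = 4 in the interference-limited regime
T = 1.0
pc = 1.0 / (1.0 + math.sqrt(T) * (math.pi / 2 - math.atan(1.0 / math.sqrt(T))))
assert abs(coverage_prob_mc(T=T) - pc) < 0.05
```

The simulation uses the fact that, for a planar Poisson process, the squared distances to the origin form a one-dimensional Poisson process, so the base stations can be generated in increasing order of distance.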

A MANET is made of mobile nodes which are at the same time terminals and routers, connected by wireless links, the union of which forms an arbitrary topology. The nodes are free to move randomly and organize themselves arbitrarily. Important issues in such a scenario are connectivity, medium access (MAC), routing and stability. This year, in collaboration with Paul Mühlethaler [INRIA HIPERCOM], we mainly worked on the analysis of MAC and routing protocols in multi-hop MANETs.

Consider again a slotted version of Aloha for MANETs. As above, our model features transmitters randomly located in the Euclidean plane according to a Poisson point process, and a set of receivers representing the next hop from every transmitter. We concentrate on the so-called outage scenario, where a successful transmission requires a SINR larger than some threshold. In we analyzed the local delays in such a network, namely the number of time slots required for nodes to transmit a packet to their prescribed next-hop receivers. The analysis depends very much on the receiver scenario and on the variability of the fading. In most cases, each node has a finite-mean geometric random delay and thus a positive next-hop throughput. However, the spatial (or large-population) averaging of these individual finite mean delays leads to infinite values in several practical cases, including the Rayleigh fading and positive thermal noise case. In some cases it exhibits an interesting phase transition phenomenon where the spatial average is finite when certain model parameters (receiver distance, thermal noise, Aloha medium access probability) are below a threshold and infinite above. To the best of our knowledge, this phenomenon has not been discussed in the literature. We commented on the relationships between the above facts and the heavy tails found in the so-called “RESTART” algorithm. We showed that the spatial average of the mean local delays is infinite primarily because of the outage logic, where one transmits full packets at time slots when the receiver is covered at the required SINR and where one wastes all the other time slots. This results in the “RESTART” mechanism, which in turn explains why we have an infinite spatial average. Adaptive coding offers another nice way of breaking the outage/RESTART logic. We showed examples where the average delays are finite in the adaptive coding case, whereas they are infinite in the outage case.
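The coexistence of finite conditional mean delays with an infinite spatial average can be illustrated by a toy computation (our own simplified model, not the paper's): each node succeeds per slot with probability p = exp(-S), where S captures node-specific randomness, so its conditional mean delay exp(S) is finite, yet the average over nodes may or may not be.

```python
import math, random

def mean_local_delay(lam, runs=200000, seed=2):
    """Toy RESTART illustration: a node's per-slot success probability
    is p = exp(-S) with S ~ Exp(lam) (node-specific randomness such as
    link distance).  Its conditional mean delay is 1/p = exp(S); the
    spatial average E[exp(S)] equals lam/(lam - 1) for lam > 1 and is
    infinite for lam <= 1: a phase transition in a model parameter."""
    rng = random.Random(seed)
    return sum(math.exp(rng.expovariate(lam)) for _ in range(runs)) / runs

# below the threshold (lam = 3 > 1) the spatial average is finite: 3/2
assert abs(mean_local_delay(3.0) - 1.5) < 0.05
```

For lam <= 1 the same estimator never stabilizes as `runs` grows, mirroring the infinite spatial averages found in the Rayleigh-fading, positive-noise case.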

In classical routing strategies for wireless ad-hoc (mobile or mesh) networks, packets are transmitted on a pre-defined route that is usually obtained by a shortest-path routing protocol. In we reviewed some recent ideas concerning a new routing technique which is opportunistic in the sense that each packet, at each hop on its (specific) route from an origin to a destination, takes advantage of the actual pattern of nodes that captured its recent (re)transmission in order to choose the next relay. The paper focuses both on the distributed algorithms allowing such a routing technique to work and on the evaluation of the gain in performance it brings compared to classical mechanisms. On the algorithmic side, we showed that it is possible to implement this opportunistic technique in such a way that the current transmitter of a given packet does not need to know its next relay a priori; instead, the nodes that capture this transmission (if any) perform a self-selection procedure to choose the packet's relay node and acknowledge the transmitter. We also showed that this routing technique works well with various medium access protocols (such as Aloha, CSMA, TDMA). Finally, we showed that the above relay self-selection procedure can be optimized in the sense that it is the node that optimizes some given utility criterion (e.g. minimizing the remaining distance to the final destination) which is chosen as the relay. The performance evaluation part is based on stochastic geometry and combines simulation and analytical models. The main result is that such opportunistic schemes, when properly optimized, very significantly outperform classical routing schemes, provided at least a small number of nodes in the network know their geographical positions exactly.

Mathematical analysis of the asymptotic properties of opportunistic routing over large distances (when the Euclidean distance between the source and destination nodes tends to infinity) reveals the following surprising negative result: under a Poisson assumption for the distribution of nodes and some natural assumptions on the wireless channels, the mean delay per unit of distance is infinite. The main positive result states that when adding a periodic node infrastructure of arbitrarily small intensity to the Poisson point process, this “delay rate” becomes positive and finite (see Section for more details).

We worked with P. Bermolen, a PhD student jointly supervised with Télécom ParisTech, on the design and the quantitative evaluation of MAC mechanisms for wireless ad-hoc networks with performance guarantees. By this, we mean mechanisms where each accepted connection obtains a minimum rate, or equivalently a minimum SINR level — which is not guaranteed by CSMA/CA — and which are adapted to the wireless ad-hoc network framework, namely fully decentralized, power efficient and providing good spatial reuse. Two such access control algorithms were defined and compared in . Both take the interference level into account to decide on the set of connections which can access the shared channel at any given time. The main difference between the two is the possibility or not of adjusting the transmission power of the nodes. A comparison of the performance of these two mechanisms and CSMA/CA was performed, based on a mix of analytical models and simulation and on a comprehensive set of performance metrics which includes spatial reuse and power efficiency. Different network topologies, propagation environments and traffic scenarios were considered. The main aim of our study is to identify which of the proposed mechanisms best outperforms CSMA/CA, depending on the scenario.

P. Bermolen defended her thesis, jointly supervised with Télécom ParisTech, in February 2010. She now holds a position at the Universidad de la República, Montevideo, Uruguay.

In we assumed Aloha for both the primary and secondary radio networks, and used previously developed (cf. ) analytical models to study how the two radio networks can coexist within the same area. We showed how the primary network and the secondary network can adapt their transmission parameters simultaneously to achieve the following goal: the primary network maintains its performance with a maximum and fixed degradation, whereas the secondary network maximizes its transmission throughput. In practice this involves the primary network adapting its transmission power and the secondary network its transmission probability. We also studied the gain in performance when the secondary network nodes only transmit when their receivers are at a minimum distance from any transmitter node in the primary network (constrained distance deployment).

Vehicular Ad Hoc NETworks (VANETs) are special cases of MANETs where the network is formed between vehicles. VANETs are today the most promising civilian application for MANETs and they are likely to revolutionize our traveling habits by increasing safety on the road while providing value added services.

In we studied slotted and non-slotted Aloha medium access schemes in VANETs. To this end, we considered a one-dimensional, linear network, which is an appropriate assumption for VANETs and differs from the two-dimensional, planar models usually assumed for general MANETs. More precisely, we used a linear version of the Poisson bipolar network model proposed in , in which the locations of signal-emitting vehicles form a homogeneous Poisson point process on the line, and where the receivers are within a fixed distance from these emitters. We used the SINR capture model assuming power-law mean path-loss and independent Rayleigh fading. First, we considered a capture/outage scenario with fixed bit-rate coding, where the SINR must be above a given threshold for a successful packet reception. In this setting we obtained explicit formulas to calculate the probability of capture for both slotted and non-slotted Aloha. From these formulas other characteristics, such as the mean density of packet progress, were derived and optimized. We also considered adaptive coding, where the throughput depends on the SINR. In this scenario we quantified and optimized the mean density of information throughput. Our unified approach to slotted and non-slotted Aloha allows for an explicit comparison of both versions of this simple MAC. The obtained results differ quantitatively and even qualitatively from those obtained previously in the analogous analysis of planar MANETs, revealing some specificity of the optimal tuning of the MAC layer in the linear network topology.
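A numerical sketch of the one-dimensional setting (illustrative parameters, not those of the paper): for slotted Aloha with Rayleigh fading, the capture probability has an explicit Laplace-transform form that a direct simulation reproduces; for path-loss exponent 4, threshold T = 1 and unit transmitter-receiver distance it reduces to exp(-lam * pi / sqrt(2)).

```python
import math, random

def poisson_rv(mean, rng):
    """Poisson sample by Knuth's method (fine for moderate means)."""
    t, p, k = math.exp(-mean), 1.0, 0
    while p > t:
        p *= rng.random()
        k += 1
    return k - 1

def capture_prob_mc(lam=0.2, alpha=4.0, T=1.0, d=1.0, half_len=50.0,
                    runs=4000, seed=3):
    """Slotted Aloha on a line: receiver at the origin, its transmitter
    at distance d, interferers a Poisson process of intensity lam on
    [-half_len, half_len], independent Rayleigh fading on every link."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(runs):
        n = poisson_rv(lam * 2 * half_len, rng)
        interf = sum(rng.expovariate(1.0)
                     * abs(rng.uniform(-half_len, half_len)) ** (-alpha)
                     for _ in range(n))
        sig = rng.expovariate(1.0) * d ** (-alpha)
        if sig > T * interf:
            ok += 1
    return ok / runs

# closed form for alpha = 4, T = 1, d = 1: p_c = exp(-lam * pi / sqrt(2))
assert abs(capture_prob_mc() - math.exp(-0.2 * math.pi / math.sqrt(2))) < 0.05
```

The one-dimensional integral behind the closed form, the integral of 1/(1 + x^4) over the line, equals pi/sqrt(2), which is where the constant comes from.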

Conflict-avoiding codes are used in the multiple-access collision channel without feedback. The number of codewords in a conflict-avoiding code is the number of potential users that can be supported in the system. In , a new upper bound on the size of constant-weight conflict-avoiding codes was proved. This upper bound is general in the sense that it is applicable to all code lengths and all Hamming weights. Several existing constructions for conflict-avoiding codes, previously known to be optimal for Hamming weights four and five, were shown to be optimal for all Hamming weights in general.
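The defining combinatorial property can be checked in a few lines (hypothetical example codes, not those of the paper): the cyclic difference sets of distinct codeword supports must be pairwise disjoint.

```python
def diffs(support, n):
    """Set of cyclic differences of a codeword's support modulo n."""
    return {(a - b) % n for a in support for b in support if a != b}

def is_cac(code, n):
    """A conflict-avoiding code of length n requires the difference
    sets of distinct codewords to be pairwise disjoint."""
    ds = [diffs(c, n) for c in code]
    return all(ds[i].isdisjoint(ds[j])
               for i in range(len(ds)) for j in range(i + 1, len(ds)))

# three weight-2 codewords of length 7: differences {1,6}, {2,5}, {3,4}
assert is_cac([{0, 1}, {0, 2}, {0, 3}], 7)
# {0,1} and {0,6} share the difference pair {1,6}
assert not is_cac([{0, 1}, {0, 6}], 7)
```

The upper bounds mentioned above count how many pairwise disjoint difference sets of a given size can fit inside the n - 1 available nonzero residues.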

This traditional research topic of TREC has several new threads, such as perfect simulation, active probing and Markov decision processes.

Network calculus is a theory that aims at computing deterministic performance guarantees in communication networks. This theory is based on the (min,plus) algebra. Flows are modeled by an *arrival curve* that upper-bounds the amount of data that can arrive during any interval, and network elements are modeled by a *service curve* that gives a lower bound on the amount of service offered to the flows crossing that element. Worst-case performances are then derived by combining these curves.
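As a minimal worked example of combining such curves (a standard textbook case, not the algorithm of the papers below): for a token-bucket arrival curve and a rate-latency service curve, the worst-case delay and backlog are the maximal horizontal and vertical distances between the two curves.

```python
def delay_bound(b, r, R, T):
    """Max horizontal distance between the token-bucket arrival curve
    a(t) = b + r*t and the rate-latency service curve
    s(t) = R*max(t - T, 0); for r <= R this is T + b/R."""
    assert r <= R
    return T + b / R

def backlog_bound(b, r, R, T):
    """Max vertical distance between the two curves: b + r*T."""
    return b + r * T

b, r, R, T = 2.0, 1.0, 3.0, 0.5
# brute-force check on a time grid: s is invertible above 0, with
# s^{-1}(y) = T + y/R, so the delay at time t is s^{-1}(a(t)) - t
ts = [k * 0.001 for k in range(10000)]
d_num = max(T + (b + r * t) / R - t for t in ts)
v_num = max(b + r * t - R * max(t - T, 0.0) for t in ts)
assert abs(d_num - delay_bound(b, r, R, T)) < 1e-9
assert abs(v_num - backlog_bound(b, r, R, T)) < 1e-9
```

The hard algorithmic questions addressed below arise when many flows with such curves are multiplexed across a feed-forward network, not in this single-node case.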

In cooperation with Éric Thierry and Laurent Jouhet
[ENS Lyon], we described in
the first algorithm which
computes the maximum end-to-end delay for a given flow,
as well as the maximum backlog at a server, for
*any* feed-forward network under arbitrary
multiplexing, with concave arrival curves and convex
service curves. Its computational complexity may look
expensive (possibly super-exponential), but we showed
that the problem is intrinsically difficult (NP-hard). We
showed that, fortunately, in some cases, like tandem
networks with cross-traffic interfering along intervals
of servers, the complexity becomes polynomial.

In collaboration with Bruno Gaujal [INRIA Rhône-Alpes], we are working on a model of performance bound calculus on feed-forward networks where data packets are routed under a wormhole routing discipline. We are interested in determining maximum end-to-end delays and backlogs for packets going from a source node to a destination node through a given virtual path in the network. Our objective is to give a “network calculus” approach for calculating these performance bounds. To this end, we propose a new concept of curves that we call packet curves. These curves make it possible to model constraints on the packet lengths of data flows when packets may have different lengths. We used this new concept to propose an approach for calculating the residual service available to data flows served under non-preemptive service disciplines.

In envelope-based models for worst-case performance
evaluation like Network Calculus or Real-Time Calculus,
several types of service curves have been introduced to
quantify some deterministic service guarantees. In
cooperation with Éric Thierry and Laurent Jouhet [ENS
Lyon], we studied in
the expressiveness of these
different definitions of service curves. We revisited the hierarchy ranging from the most restrictive definition, linked to *variable capacity nodes*, to the most general definition of *simple service curves*, and stated the conditions under which the different definitions overlap and discussed the existence of canonical descriptions for systems specified through those definitions.

In collaboration with Xavier Olive [Thalès Alenia Space, Toulouse], we are working on an end-to-end delay calculus model for a SpaceWire-like switching router. We applied a new network calculus approach to determine the residual services guaranteed for packets passing through a SpaceWire-like router operating under a wormhole routing discipline. Our results on end-to-end delays were compared to the effective delays obtained by Thalès Alenia Space.

Active probing began by measuring end-to-end path metrics, such as delay and loss, in a direct measurement process which did not require inference of internal network parameters. The field has since progressed to measuring network metrics, from link capacities to available bandwidth and cross traffic itself, which reach deeper and deeper into the network and require increasingly complex inversion methodologies. In , we formulated this line of thought as a set of inverse problems in queueing theory. Queueing theory is typically concerned with the solution of direct problems, where the trajectory of the queueing system, and laws thereof, are derived from a complete specification of the system, its inputs and initial conditions. Inverse problems aim to deduce unknown parameters of the system from partially observed trajectories. We provided a general definition of the inverse problems in this class and mapped out the key variants: analytical methods, statistical methods and the design of experiments. We also showed how this inverse-problem viewpoint translates into the design of concrete Internet probing applications.

We also investigated inverse problems in the theory of bandwidth sharing networks. A bandwidth sharing network allocates bandwidth to each flow so as to maximize a given utility function (typically α-fairness), under the constraints given by the capacities of the different servers. In particular, it has been shown that the equilibrium distribution of the bandwidth allocated by TCP to many competing connections oscillates around an α-fair allocation. As such, the theory of bandwidth sharing networks is a high-level viewpoint of networks. We investigated the meaning of inverse problems in this theory and how they are related to the active probing paradigm. In two simple examples of networks, we showed that the capacities of the different servers and the flow population can be estimated, and we provided an algorithm to perform this estimation.
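To fix ideas, here is a sketch of the forward (direct) problem on a toy two-link topology, assuming α = 1 (proportional fairness); the inverse problems discussed above would start from observations of such equilibrium allocations. The dual-iteration scheme and parameters are our own illustration, not the paper's algorithm.

```python
def prop_fair_two_links(iters=20000, step=0.001):
    """Dual (congestion-price) iteration for proportional fairness on a
    2-link line: flow 0 crosses both unit-capacity links, flows 1 and 2
    cross one link each; each flow receives rate 1 / (sum of the prices
    on its path).  The optimum is x0 = 1/3, x1 = x2 = 2/3."""
    l1 = l2 = 1.0
    for _ in range(iters):
        x0 = 1.0 / (l1 + l2)
        x1, x2 = 1.0 / l1, 1.0 / l2
        # raise a link's price when it is overloaded, lower it otherwise
        l1 = max(1e-6, l1 + step * (x0 + x1 - 1.0))
        l2 = max(1e-6, l2 + step * (x0 + x2 - 1.0))
    return x0, x1, x2

x0, x1, x2 = prop_fair_two_links()
assert abs(x0 - 1.0 / 3.0) < 0.01 and abs(x1 - 2.0 / 3.0) < 0.01
```

An inverse problem in this setting would be, e.g., to recover the two link capacities and the flow population from the observed rates (1/3, 2/3, 2/3).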

Most active probing techniques suffer from the “bottleneck” limitation: all characteristics of the path after the bottleneck link are erased and unreachable. We are currently investigating a new tomography technique, based on the measurement of the fluctuations of point-to-point end-to-end delays, which allows one to gain insight into the residual available bandwidth along the whole path. For this, we combined classical queueing theory models with statistical analysis to obtain estimators of the residual bandwidth on all links of the path. These estimators were proved to be tractable, consistent and efficient. In we evaluated their performance with simulation and trace-based experiments.

Perfect simulation, introduced by Propp and Wilson in 1996, is a simulation algorithm that uses coupling arguments to give an unbiased sample from the stationary distribution of a Markov chain on a finite state space. In the general case, the algorithm starts trajectories from all states at some time in the past and runs them until time t = 0. If the final state is the same for all trajectories, then the chain has coupled and the final state has the stationary distribution of the Markov chain. Otherwise, the simulations are started further in the past. This technique is very efficient if all the events in the system have appropriate monotonicity properties. However, in the general (non-monotone) case, this technique requires that one consider the whole state space, which limits its application to chains with a state space of small cardinality.
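The monotone case can be sketched in a few lines (a generic illustration on a toy birth-death chain, not the PSI2 implementation): it suffices to track the trajectories started from the bottom and top states, doubling the horizon and reusing the same randomness until they coalesce at time 0.

```python
import random

def update(x, u, n):
    """Monotone random walk on {0, ..., n}: up if u < 0.4, down if u > 0.7."""
    if u < 0.4:
        return min(x + 1, n)
    if u > 0.7:
        return max(x - 1, 0)
    return x

def cftp(n, rng):
    """Coupling from the past with the monotone sandwich {0, n}:
    double the horizon, prepending fresh randomness in the past and
    reusing the old, until both trajectories coalesce at time 0."""
    us, horizon = [], 1
    while True:
        us = [rng.random() for _ in range(horizon - len(us))] + us
        lo, hi = 0, n
        for u in us:
            lo, hi = update(lo, u, n), update(hi, u, n)
        if lo == hi:
            return lo
        horizon *= 2

# the stationary law is pi(k) proportional to (0.4/0.3)**k; check k = 4
rng = random.Random(0)
samples = [cftp(4, rng) for _ in range(2000)]
ratio = 4.0 / 3.0
exact = ratio ** 4 / sum(ratio ** k for k in range(5))
assert abs(samples.count(4) / 2000 - exact) < 0.05
```

Reusing the already-drawn randomness when extending the horizon is essential; redrawing it would bias the sample.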

The envelope technique has been implemented in the software tool PSI2.

Solving Markov chains is in general difficult if the state space of the chain is very large (or infinite) and lacking a simple repeating structure. One alternative to solving such chains is to construct models that are simple to analyze and provide bounds for a reward function of interest.

In a joint work with I.M.H. Vliegen [Technische Universiteit Eindhoven, The Netherlands] and A. Scheller-Wolf [Carnegie Mellon University, USA], we presented a new bounding method for Markov chains inspired by Markov reward theory: our method constructs bounds by redirecting selected sets of transitions, facilitating an intuitive interpretation of the modifications of the original system. We showed that our method is compatible with strong aggregation of Markov chains; thus we can obtain bounds for an initial chain by analyzing a much smaller chain. We illustrated our method by using it to prove monotonicity results and bounds for assemble-to-order systems.

In an ongoing work, we apply these results in an optimization problem of base stock levels for service tools inventory.

Censored Markov chains (CMCs) allow one to represent the conditional behavior of a system within a subset of observed states. They provide a theoretical framework for studying the truncation of a discrete-time Markov chain when the generation of the state space is too hard or when the number of states is too large. Unfortunately, the stochastic matrix of a CMC may be difficult to obtain. Dayar et al. (2006) have proposed an algorithm, called DPY, that computes a stochastic bounding matrix for a CMC with smaller complexity and only partial knowledge of the chain. In , we proved that this algorithm is optimal given the information it takes into account. We also showed how some additional knowledge of the chain can improve the stochastic bounds for CMCs.

In ,
we proposed an iterative algorithm to compute component-wise bounds of the steady-state distribution of an irreducible and aperiodic Markov chain. These bounds are based on very simple properties of (max, +) and (min, +) sequences. We showed that, under some assumptions on the Markov chain, these bounds converge to the exact solution. In that case we have a clear tradeoff between computation and the tightness of the bounds. Furthermore, at every step we know that the exact solution lies within an interval, which provides a more effective convergence test than usual iterative methods.
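
The flavor of such interval bounds can be conveyed by a classical, much simpler device than the (max, +)/(min, +) algorithm of the paper: for a transition matrix with positive entries, the column-wise minima and maxima of P^n bracket the steady-state probabilities and squeeze together, giving exactly the kind of built-in convergence test described above. A minimal sketch with an illustrative chain:

```python
import numpy as np

# illustrative irreducible, aperiodic chain with positive entries
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])

Pn = P.copy()
for _ in range(60):
    lower = Pn.min(axis=0)     # component-wise lower bound on the steady state
    upper = Pn.max(axis=0)     # component-wise upper bound on the steady state
    if (upper - lower).max() < 1e-10:
        break                  # the interval width doubles as a stopping test
    Pn = Pn @ P

print("lower:", lower)
print("upper:", upper)
```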

In a joint work with V. Gupta [Carnegie Mellon University, USA] and J. Mairesse [LIAFA, CNRS and Université Paris 7], we considered the bipartite matching queueing model of customers and servers introduced by Caldentey, Kaplan, and Weiss (Adv. Appl. Probab., 2009). Customers and servers play symmetrical roles. There is a finite set C, resp. S, of customer, resp. server, classes. Time is discrete and at each time step, one customer and one server arrive in the system according to a joint probability measure on C × S, independently of the past. Also, at each time step, pairs of *matched* customers and servers, if they exist, depart from the system. Authorized *matchings* are given by a fixed bipartite graph. A *matching policy* is chosen, which decides how to match when there are several possibilities. Customers/servers that cannot be matched are stored in a buffer.

The evolution of the model can be described by a discrete-time Markov chain. We studied its stability under various admissible matching policies including: ML (Match the Longest), MS (Match the Shortest), FIFO (match the oldest), and priorities. There exist natural necessary conditions for stability (independent of the matching policy) defining the maximal possible stability region. For some bipartite graphs, we proved that the stability region is indeed maximal for any admissible matching policy. For the ML policy, we proved that the stability region is maximal for any bipartite graph. For the MS and priority policies, we exhibited a bipartite graph with a non-maximal stability region.
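
A small simulation conveys the model. The sketch below uses two customer and two server classes, an authorized-matching graph with one edge missing, an illustrative joint arrival measure, and the ML (Match the Longest) policy; all class names and parameters are made up for illustration:

```python
import random
from collections import defaultdict

random.seed(0)

# authorized matching graph: class c2 cannot be served by class s2
edges = [("c1", "s1"), ("c1", "s2"), ("c2", "s1")]
# joint arrival measure on C x S (one customer and one server per step)
arrival = [(("c1", "s1"), 0.3), (("c1", "s2"), 0.3),
           (("c2", "s1"), 0.3), (("c2", "s2"), 0.1)]

cust_q = defaultdict(int)    # buffered customers per class
serv_q = defaultdict(int)    # buffered servers per class

def draw():
    u, acc = random.random(), 0.0
    for pair, p in arrival:
        acc += p
        if u < acc:
            return pair
    return arrival[-1][0]

for _ in range(10000):
    c, s = draw()
    cust_q[c] += 1
    serv_q[s] += 1
    # ML policy: repeatedly match along the authorized edge whose two
    # queues are jointly the longest, until no match is possible
    while True:
        feasible = [(a, b) for a, b in edges
                    if cust_q[a] > 0 and serv_q[b] > 0]
        if not feasible:
            break
        a, b = max(feasible, key=lambda e: cust_q[e[0]] + serv_q[e[1]])
        cust_q[a] -= 1
        serv_q[b] -= 1

print(dict(cust_q), dict(serv_q))
```

Since exactly one customer and one server arrive per step and departures occur in matched pairs, the buffered customer and server totals always coincide.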

Gossip is a class of distributed linear algorithms which aim to reach a consensus on the average of the measurements of a wireless sensor network. The speed of convergence highly depends on the network topology, and unfortunately, in real-world topologies, these algorithms are slow. This year, we published a first paper which mathematically solves the problem: Path Averaging averages random routes and is of order n log n (optimal), but, as a counterpart, it is not very robust in dynamic networks. Indeed, to average a route, agents have to send data back and forth. We then developed an algorithm which converges correctly, which is fast in simulations, and which operates on random routes one way only. This algorithm thus recovers the robustness lost by Path Averaging. We called it Weighted Gossip. In parallel to our work on weighted gossip, we quantized regular gossip to obtain voting algorithms able to compute the majority among 2, 3 or 4 candidates. The algorithms are asynchronous and use deterministic automata of 2 bits for the binary voting problem, 4 bits for the ternary voting problem and 7 bits for the quaternary voting problem. We wrote a paper for a special issue of Signal Processing on gossip algorithms, which was accepted subject to minor modifications.
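
For reference, the baseline that Path Averaging and Weighted Gossip improve upon, standard pairwise randomized gossip, fits in a few lines; the ring topology and step count below are illustrative:

```python
import random

random.seed(1)

n = 20
x = [float(i) for i in range(n)]              # initial node measurements
target = sum(x) / n                           # consensus value: the average
edges = [(i, (i + 1) % n) for i in range(n)]  # ring topology

for _ in range(200000):
    i, j = random.choice(edges)
    # both endpoints of a randomly chosen edge move to their pairwise
    # average; the global sum (hence the mean) is conserved at every step
    avg = (x[i] + x[j]) / 2.0
    x[i] = x[j] = avg

print(max(abs(v - target) for v in x))        # disagreement shrinks toward 0
```

On the ring, the number of pairwise exchanges needed for a given accuracy grows quickly with n, which is exactly the topology sensitivity discussed above.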

The spread of new ideas, behaviors or technologies has been extensively studied using epidemic models. In , we considered a model of diffusion where the individuals' behavior is the result of a strategic choice. We studied a simple coordination game with binary choice and gave a condition for a new action to become widespread in a random network. We also analyzed the possible equilibria of this game and identified conditions for the coexistence of both strategies in large connected sets. Finally, we looked at how firms can use social networks to promote their goals with limited information.

Our results differ strongly from those derived with epidemic models. In particular, we showed that connectivity plays an ambiguous role: while it allows the diffusion to spread, when the network is highly connected, the diffusion is also limited by high-degree nodes, which are very stable. In the case of a sparse random network of interacting agents, we computed the contagion threshold for a general diffusion model and showed the existence of (continuous and discontinuous) phase transitions. We also computed the minimal size of a seed of new adopters needed to trigger a global cascade when these new adopters can only be sampled without any information on the graph. We showed that this minimal size has a non-trivial behavior as a function of the connectivity. Our analysis extends methods developed in the random graphs literature based on the properties of empirical distributions of independent random variables, and leads to simple proofs.
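
The threshold dynamics underlying such coordination games can be mimicked by a simple best-response cascade: a node adopts the new action once at least a fraction q of its neighbors has adopted. The sketch below runs it on an Erdős–Rényi graph; the graph model, threshold and seed size are illustrative, not those of the paper:

```python
import random

random.seed(2)

n, p, q = 300, 0.02, 0.25      # nodes, edge probability, adoption threshold

# Erdos-Renyi interaction graph
nbrs = [set() for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            nbrs[i].add(j)
            nbrs[j].add(i)

adopted = set(random.sample(range(n), 15))   # initial seed of adopters

# best-response dynamics: a node adopts once a fraction >= q of its
# neighbors has adopted; adoption is monotone, so iterate to a fixed point
changed = True
while changed:
    changed = False
    for v in range(n):
        if v in adopted or not nbrs[v]:
            continue
        if len(nbrs[v] & adopted) / len(nbrs[v]) >= q:
            adopted.add(v)
            changed = True

print(f"final adopters: {len(adopted)} / {n}")
```

Varying q, the seed size and the edge probability p shows the sharp, sometimes discontinuous, dependence of the final adoption on the parameters.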

The defining characteristic of wireless and mobile networking is user mobility, and related to it is the ability of the network to capture (at least partial) information on where users are located and how users change location over time. Information about location is becoming critical, and therefore valuable, for an increasingly large number of location-based or location-aware services. A key open question, however, is how valuable exactly this information is. Our goal in this work is to help understand and estimate the economics, i.e. the value, of location information.

Stochastic geometric models (in particular those of wireless networks) are in general investigated in a Poisson point process setting. Due to the difficulty (or even impossibility) of obtaining closed-form expressions for the characteristics of these models in non-Poisson settings, we attempt a qualitative study of these characteristics, in particular by comparison to the Poisson setting. *Directionally convex ordering* of point processes proved to be particularly pertinent in this regard, as shown in . This year we continued working with this order, in particular using it to compare the clustering and percolation properties of point processes; cf. Section . This whole research axis was developed in the PhD thesis of D. Yogeshwaran, defended in 2010.

Comparisons of Ripley's functions and pair correlation functions seem to indicate that point processes higher in the directionally convex (dcx) order cluster more. Simulations of various point processes comparable in this order, in particular in the class of the so-called *perturbed lattice point processes*, also confirm this observation. These simulations, as well as some heuristics, also indicate that clustering of a point process negatively impacts the percolation of the related continuum percolation model, also called the Boolean model. We moved toward a formal statement of this heuristic. Namely, we defined two critical radii for percolation of the Boolean model, called the lower and upper critical radii, as these sandwich the usual critical radius for percolation of the Boolean model. We showed that the dcx order preserves the upper critical radii and reverses the lower critical radii. Following this observation, we considered a class of point processes, which we call *sub-Poisson*; these are point processes that can be dominated in dcx by some Poisson point process. For this class, we extended the classical result on the existence of a phase transition in the percolation of Gilbert's graph (also called the Boolean model, generated in the historical result by a homogeneous Poisson point process). We also extended a recent result of the same nature for the SINR graph to sub-Poisson point processes. This work is part of the PhD thesis of D. Yogeshwaran, defended in 2010. Partial results were presented at the Allerton Conference in 2010; cf. , . A more complete journal paper is under preparation.

Random Geometric Graphs (RGG) have played an important role in providing a framework for modeling in wireless communication, starting with the pioneering work on connectivity by Gilbert (1961); . Vertices or points of the graphs represent communicating entities such as base stations. These vertices are assumed to be distributed in space randomly according to some point process, typically a Poisson point process. An edge between two points means that the corresponding communicating entities are able to communicate with each other. In the classical model, an edge exists between any pair of nodes if the distance between them is less than some critical threshold. A variant of this classical model that exhibits the union of the coverage regions of all nodes is also referred to in stochastic geometry as the Boolean model. In the following more fundamental works, we studied some variants and extensions of the classical models, more or less related to wireless communication networks.
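
In its simplest form, Gilbert's graph can be sampled directly: draw a Poisson number of uniform points in the unit square and link any two points closer than the threshold r. A minimal sketch (the intensity and radius are illustrative):

```python
import math
import random

random.seed(3)

lam, r = 200.0, 0.12       # Poisson intensity on [0, 1]^2 and connection radius

# sample a Poisson(lam) point count: number of rate-1 exponential
# interarrivals whose cumulative sum stays below lam
k, acc = 0, random.expovariate(1.0)
while acc < lam:
    k += 1
    acc += random.expovariate(1.0)
pts = [(random.random(), random.random()) for _ in range(k)]

# union-find over the Gilbert graph: edge iff Euclidean distance < r
parent = list(range(k))
def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

for i in range(k):
    for j in range(i + 1, k):
        if math.dist(pts[i], pts[j]) < r:
            parent[find(i)] = find(j)

components = len({find(i) for i in range(k)})
print(f"{k} points, {components} connected components")
```

Sweeping r reveals the percolation/connectivity transitions that the works below study in much greater generality.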

We investigated percolation in the AB Poisson Boolean model in d-dimensional Euclidean space, and asymptotic properties of AB random geometric graphs on Poisson points in [0, 1]^d. The AB random geometric graph we studied is a generalization to the continuum of a bipartite graph called the AB percolation model on discrete lattices. Such an extension is motivated by applications to secure communication networks and frequency division duplex networks. The AB Poisson Boolean model is defined as a bipartite graph on two independent Poisson point processes of given intensities in the d-dimensional Euclidean space, in the same manner as the usual Boolean model with a radius r. We showed the existence of AB percolation for all d ≥ 2, and derived bounds for a critical intensity. Further, in d = 2, we characterized a critical intensity. The set-up for AB random geometric graphs is to construct a bipartite graph on two independent Poisson point processes of intensities n and cn in the unit cube. We provided almost sure asymptotic bounds for the connectivity threshold for all c > 0 and a suitable choice of radius cut-off functions r_n(c). Further, for c < c_0, we derived a weak law result for the largest nearest-neighbor radius. This work is part of the PhD thesis of D. Yogeshwaran, defended in 2010. It has also been submitted for publication as a research article .

Delay Tolerant Networks, in the simplest terms, are networks that take into account the time delay at nodal points for the transmission of information. First passage percolation models have been found to be useful for the study of transmission of information along networks. We consider spatial growth models on stationary graphs constructed on point processes, similar in spirit to continuum first passage percolation models. The dynamics governing this network model is the delayed propagation of the information at the vertices of the graph. Depending on the manner of the time delay and the information dissemination time, one can obtain various models. We are analyzing this class of models with the hope of obtaining a shape theorem for the spread of information. These models combine the dynamics of both first passage percolation models and Richardson growth models.

In the context of the opportunistic routing described in Section , in a more fundamental paper we studied optimal paths in wireless networks in terms of first passage percolation on some space-time SINR graph. We established both “positive” and “negative” results on the associated percolation delay rate (delay per unit of Euclidean distance, called the time constant in the classical terminology). The latter determines the asymptotics of the minimum delay required by a packet to progress from a source node to a destination node when the Euclidean distance between the two tends to infinity. The main negative result states that the percolation delay rate is infinite on the random graph associated with a Poisson point process under natural assumptions on the wireless channels. The main positive result states that when adding a periodic node infrastructure of arbitrarily small intensity to the Poisson point process, the percolation delay rate becomes positive and finite.

Finding the optimal space-time paths studied above requires knowledge of the future, which is impossible in practice. So an interesting question consists in looking for local (both in time and space) algorithms. In , we proved the convergence of radial routing in the SINR setup. Another work in progress consists in analyzing the directional algorithm, in which a packet uses the best link in the desired direction to select the next node.

The times of occurrence of earthquakes in a given area of seismic activity form a simple point process N on the real line, where N((a, b]) is the number of shocks in the time interval (a, b]. The dynamics governing the process can be expressed by the stochastic intensity λ(t). In the stress release model, for t ≥ 0, the intensity is a function of the accumulated strain X(t) = X_0 + ct − Σ_{i=1}^{N(t)} Z_i, where c > 0 and (Z_i) is an i.i.d. sequence of non-negative random variables with finite expectation, whereas X_0 is some real random variable. The process is known to be ergodic.

Another model of interest in seismology is the Hawkes branching process, whose stochastic intensity is λ(t) = ν(t) + Σ_{T_n < t} h(t − T_n), where h is a non-negative function called the fertility rate and ν is a non-negative integrable function. Such a point process appears in the specialized literature under the name ETAS (Epidemic Type Aftershock Sequence) and is used to model aftershocks. It is well known that the corresponding process “dies out” in finite time under the condition .
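
A Hawkes process with an exponential fertility rate can be simulated by the classical thinning (Ogata-style) method, exploiting the fact that the intensity only decays between events. The parameters below are made up for illustration; α plays the role of the integral of h, so α < 1 corresponds to the classical subcritical, dying-out regime:

```python
import math
import random

random.seed(4)

# illustrative parameters: baseline rate mu, branching ratio alpha (< 1),
# exponential decay beta, and time horizon T
mu, alpha, beta, T = 1.0, 0.5, 2.0, 50.0

def intensity(t, events):
    # lambda(t) = mu + sum over past events s of alpha*beta*exp(-beta*(t - s))
    return mu + sum(alpha * beta * math.exp(-beta * (t - s)) for s in events)

events = []
t = 0.0
while t < T:
    lam_bar = intensity(t, events)     # valid bound: intensity only decays
    t += random.expovariate(lam_bar)   # propose the next candidate point
    if t < T and random.random() < intensity(t, events) / lam_bar:
        events.append(t)               # accept with the thinning ratio

print(f"{len(events)} events on [0, {T}]")
```

The thinning bound is valid here because, between events, the intensity is non-increasing (constant baseline plus decaying excitation).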

A model mixing stress release and Hawkes aftershocks combines the two intensities above. The positive constant c is the rate at which the strain builds up. If there is a shock at time t, then the strain is relieved by the quantity Z_{N(t)}. Each shock (primary or secondary) at time t generates aftershocks according to a Poisson process whose intensity is given by the fertility rate h shifted to t. In , we gave necessary and sufficient conditions of ergodicity for this model.

In , we investigated the rank of the adjacency matrix of large diluted random graphs: for a sequence of graphs converging locally to a Galton-Watson tree, we provided an explicit formula for the asymptotic multiplicity of the eigenvalue 0 in terms of the degree generating function. In the first part, we showed that the adjacency operator associated with a Galton-Watson tree is self-adjoint with probability one; we analyzed the associated spectral measure at the root and characterized the distribution of its atomic mass at 0. In the second part, we established a sufficient condition for the expectation of this atomic mass to be precisely the normalized limit of the dimension of the kernel of the adjacency matrices of the sequence of graphs. Our proofs borrow ideas from the analysis of algorithms, functional analysis, random matrix theory, and statistical physics. This work was presented at SODA .
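
A concrete finite-graph instance of this rank/kernel question: for any forest, the rank of the adjacency matrix equals twice the maximum matching size, so the multiplicity of the eigenvalue 0 is n − 2ν(G). The sketch below checks this identity on a random tree (greedy leaf matching is optimal on trees):

```python
import random
import numpy as np

random.seed(5)

n = 14
# random tree: attach each new node to a uniformly chosen earlier node
edges = [(i, random.randrange(i)) for i in range(1, n)]

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
rank = np.linalg.matrix_rank(A)

# maximum matching on a tree via greedy leaf matching (optimal on forests)
adj = {i: set() for i in range(n)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

matching = 0
leaves = [v for v in adj if len(adj[v]) == 1]
while leaves:
    leaf = leaves.pop()
    if len(adj[leaf]) != 1:
        continue                 # stale entry: no longer a leaf
    (parent,) = adj[leaf]
    matching += 1                # match the leaf with its only neighbor
    for w in (leaf, parent):     # delete both matched endpoints
        for x in list(adj[w]):
            adj[x].discard(w)
            if len(adj[x]) == 1:
                leaves.append(x)
        adj[w].clear()

print(rank, 2 * matching)        # equal for every forest
```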

The bootstrap percolation model has been used in several related applications. In , we considered bootstrap percolation in living neural networks. Recent experimental studies of living neural networks reveal that the global activation of neural networks induced by electrical stimulation can be explained using the concept of bootstrap percolation on a directed random network. The experiment consists in externally activating an initial random fraction of the neurons and observing the firing process until it reaches equilibrium. The final fraction of active neurons depends in a non-linear way on the initial fraction. Our main result in is a theorem which enables us to find the final proportion of fired neurons in the asymptotic case, using random directed graphs with given node degrees as the model of the interaction network.
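
A minimal simulation of this activation process (all parameters illustrative): seed a random fraction of neurons, then let any neuron fire once it has received input from at least k already-fired in-neighbors, until the process stabilizes:

```python
import random

random.seed(6)

n, d, k = 500, 5, 2          # neurons, out-degree, firing threshold
seed_frac = 0.15             # externally activated fraction

# directed random graph: each neuron sends d edges to random targets
out = [[random.randrange(n) for _ in range(d)] for _ in range(n)]

fired = [False] * n
inputs = [0] * n             # fired in-neighbors counted so far
frontier = [v for v in range(n) if random.random() < seed_frac]
for v in frontier:
    fired[v] = True

# propagate: a neuron fires when at least k of its in-neighbors have fired
while frontier:
    v = frontier.pop()
    for w in out[v]:
        inputs[w] += 1
        if not fired[w] and inputs[w] >= k:
            fired[w] = True
            frontier.append(w)

print(f"final active fraction: {sum(fired) / n:.2f}")
```

Sweeping `seed_frac` exhibits the non-linear dependence of the final active fraction on the initially stimulated fraction.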

These results allowed us to analyze an asynchronous randomized broadcast algorithm on random regular graphs. Our results show that the asynchronous version of the algorithm performs better than its synchronized version: in the large-size limit of the graph, it reaches the whole network faster even if the local dynamics are similar on average.

In , we adapted the model given in , which is defined on graphs, to an equivalent one on hypergraphs. For this, we generalized a result obtained by Darling and Norris in , which deals with the k-core of a random hypergraph. This allowed us to give an upper bound on the size of the giant component of random hypergraphs. We are now trying to adapt ideas from Janson and Luczak in order to prove a lower bound on this size as well. This would establish a phase transition for the size of the largest component in random hypergraphs.

We considered a natural family of Gibbs distributions over matchings on a finite graph, parameterized by a single positive number called the temperature. The correlation decay technique applies to matchings at positive temperature and allowed us to establish the weak convergence of the Gibbs marginal as the underlying graph converges locally. However, for the zero temperature problem (i.e. maximum matchings), we showed that there is no correlation decay, even in very simple cases. By using a complex temperature and a half-plane property due to Heilmann and Lieb, we were able to let the temperature tend to zero and obtain a limit theorem for the asymptotic size of a maximum matching in the graph sequence.

TREC is a partner of the 3-year ANR project called CMON,
jointly with Technicolor, LIP6, the INRIA project-team
Planète and the community
http://

The CIFRE grant of Mathieu starts in 2011, but the work began with an internship. The topic bears on information dissemination and recommendation in social networks. The distribution of multimedia content and the use of social networks like Facebook, Orkut, etc. are booming in today's networks. These social networks are also increasingly used for the dissemination and recommendation of content. Content distribution can then follow paths established by the network structure of social relations. The objective of the thesis is to develop an understanding of how information disseminates in social networks depending on the type of information, user tastes, and the topological structure of these networks. This study should result in methods for more effective dissemination of content.

In 2010, the interaction with the research lab of Sprint (Sprint ATL, in Burlingame, California) focused on two main topics:

Bayesian inference to locate mobiles in cellular networks .

The analysis of the economics of communication networks .

This collaboration resulted in several joint papers this year again.

A 6-year scientific partnership “Action de Partenariat Informatique Fondamentale” between ENS and EADS CC started in September 2006. This action allowed TREC to hire, in 2007, a PhD student, D. Yogeshwaran, from IISc Bangalore. His thesis bears on the stochastic comparison of random measures, point processes and shot-noise fields; cf. Section . It was successfully defended in November 2010; cf. .

In 2010, TREC carried out research contract number CRE 46146063-A012 between INRIA and France Télécom. The objective was to study the impact of shadowing on the quality of service perceived by users in wireless cellular networks. Partial results were presented at the IFIP WMNC'10 Conference . A more complete journal paper is under preparation. For more details see Section . F.-X. Klepper was hired by INRIA as a research engineer within this contract.

TREC is a partner of the 3-year ANR project called PEGASE, jointly with ENS Lyon, the INRIA project-team MESCAL, ONERA, Real-Time-at-Work (start-up) and Thalès. This project is focused on the analysis of critical embedded networks using algebraic tools. The aim is to apply these techniques to AFDX and Spacewire architectures. A post-doc (N. Farhi) has been hired through this grant in September 2010.

Ana Bušić participates (20%) in the ANR project MAGNUM
(Méthodes Algorithmiques pour la Génération aléatoire Non
Uniforme: Modèles et applications), November 2010 – 2014;
http://

TREC is a partner of the
*European Network of Excellence (NoE)* Euro-NF
http://

F. Baccelli was co-organizer of the 6-month programme entitled “Stochastic Processes and Communication Sciences” at the **Isaac Newton Institute for Mathematical Sciences**, Cambridge, UK. This programme aimed at the exposition of the latest developments in the mathematical sciences lying on the boundary between the disciplines of stochastics and communications. It brought together experts in the fields of probability and communications in order to review and further develop knowledge and trends. Probability theory and communications have developed hand in hand for about a century. The research challenges in the latter field (from telephone networks to wireless communications and the Internet) have spurred the development of the mathematical theory of stochastic processes, particularly the theory of Markov processes, point processes, stochastic networks, stochastic geometry, stochastic calculus, information theory, and ergodic theory, to name but a few. Conversely, a large number of applications in communications would not have been possible without the development of stochastics. The programme was attended by 87 long-term participants and 23 short-stay ones, several of whom were young researchers or graduate students. It also hosted several workshops and special events in the following areas: an inaugural workshop on the interface between Probability and Communications which explored probabilistic methods (e.g. information theory) for communication systems
http://

The following scientists gave talks in 2010:

France

Vlady Ravelomanana (LIAFA) talking on “Random Bipartiteness, 2-XOR-SAT, MAX-2-XOR-SAT and MAX-CUT”; December 2010,

D. Yogeshwaran (ENS/INRIA), PhD thesis defense talking on “Stochastic geometric networks: connectivity and comparison”; November 2010,

Amar Prakash Azad (INRIA Sophia Antipolis) talking on “Combined Optimal Control of Activation and Transmission in Delay Tolerant Networks”; November 2010,

Thach Nguyen (LIAFA) talking on “Deficiency zero Petri nets”; November 2010,

Emmanuel Hyon (Université Paris Ouest, France) talking on “Scheduling in a Queuing System with Impatience and Setup Costs”; July 2010,

Justin Salez (ENS, France) talking on “Matchings on diluted graphs: the cavity method at positive temperatures”; June 2010,

Mohamed Karray (Orange Labs, France) talking on “Fading Effect on the Dynamic Performance Evaluation of OFDMA Cellular Networks”; June 2010,

Djalil Chafai (Université Paris-Est Marne-la-Vallée, France) talking on “Vitesse de convergence de processus markoviens déterministes par morceaux”; April 2010,

Europe

Alexander Rybko (IITP, Moscow) talking on “Poisson Hypothesis for Infinite Generalized Jackson Networks”; November 2010,

Nikolaos Fountoulakis (MPII) talking on “The push algorithm for broadcasting and the geometry of graphs”; November 2010,

Daniel Gentner (Univ. of Karlsruhe) talking on “Inspecting partially stationary models in Stochastic Geometry: Palm Theory and Mass-Transport Principle”; October 2010,

Mathew Penrose (Univ. of Bath) talking on “Percolation and limit theory for the Poisson lilypond model”; September 2010,

Ruediger Urbanke (EPFL, Switzerland) talking on “Spatially Coupled Codes — A New Paradigm for Code Design”; July 2010,

Vijay G. Subramanian (Hamilton Institute, Ireland) talking on “Large Deviations of Max-Weight Scheduling Policies”; March 2010,

Asia, Australia, Canada, USA

George Stacey Staples (Southern Illinois University Edwardsville) talking on “Wireless Networks and Random Graphs: An Operator Calculus Approach”; November 2010,

Kenneth Shum (Chinese University of Hong Kong) talking on “The repair problem in distributed storage system”; September 2010,

Anant Sahai (U.C. Berkeley) talking on “Challenges for spectrum sharing by cognitive radios”; September 2010,

David Gamarnik (MIT, US) talking on “Statistical physics, interpolation method and scaling limits in sparse random graphs”; April 2010.

TREC is a founding member of and
participates in Paris-Networking (
http://

A. Bušić runs the project-team
seminar
http://

M. Lelarge runs the reading
group on mixing times;
http://

B. Błaszczyszyn is a member of
*Commission détachement, délégation et post-doc “sur
subvention”, Inria Rocquencourt*.

P. Brémaud is a member of the
editorial board of the following journals:
*Journal of Applied Probability, Advances in Applied
Probability, Journal of Applied Mathematics and
Stochastic Analysis*;

F. Baccelli is a member of the
editorial board of the following journals:
*QUESTA, Journal of Discrete Event Dynamical Systems,
Mathematical Methods of Operations Research, Advances
in Applied Probability*.

Graduate course on point processes, stochastic geometry and random graphs (program “Master de Sciences et Technologies”), B. Błaszczyszyn and L. Massoulié (45h).

Undergraduate course LI325 (Algorithms and applications) by Ana Bušić (30h).

“Graph Theory and Combinatorics” at Université Paris 6, J. Salez (January to June, 2010),

Undergraduate course (master level, MMFAI) by F. Baccelli, A. Bouillard and P. Brémaud, on Random Structures and Algorithms (35h + 28h of exercise session).

Undergraduate exercise session (master level, MMFAI) by A. Bouillard on formal languages, computability and complexity.

Course on Information Theory and Coding (master level, MMFAI) by M. Lelarge (26h) and exercise sessions (26h) by J. Salez.

Course on Communication Networks (master level, MMFAI) by F. Baccelli and A. Chaintreau (24h).

Stochastic Models in Communications and Computer Science (graduate course), F. Bénézit (70h).

Course on “Formal Calculus” in Lycée Henri IV, J. Salez (January to June, 2010),

“Stochastic geometry and wireless
networks” at the 3rd Euro-NF Summer School on
Opportunistic Networking, Valencia (Spain), June/July
2010,
http://

Visiting Max-Planck-Institut, Saarbrücken, Germany, September 2010,

Presentation in the following conferences or seminars:

Max-Planck-Institut,
Saarbrücken, Germany, September 2010;
http://

Bachelier congress, Toronto,
Canada, June 2010;
http://

Participation in the following conferences:

Spatial Network Models for
Wireless Communications, in the programme
“Stochastic Processes in Communication Sciences”,
Isaac Newton Institute for Mathematical Sciences,
Cambridge, UK, April 2010;
http://

Member of

the program committee of IEEE Infocom 2011, Spaswin 2010;

the NWO international committee for the evaluation of the Dutch Mathematical Clusters (Nov. 2010);

the scientific board of the Alcatel/Lucent–INRIA Joint Laboratory (since 2008);

the IFIP working group WG 7.3.

Honorary Professor at Heriot-Watt University.

Reviewer of the thesis of R. Lachieze-Rey (Université de Lille), S. Lasaulce (L2S, habilitation) and N. Schabanel (Liafa, habilitation).

Author of a survey article on the future of communication network for Annales des Mines .

Presentation in the following conferences or seminars:

NSF Workshop on the Frontiers
of Controls, Games and Network Science, February
19–20, 2010, UT Austin, Austin, Texas;
http://

Ecole Polytechnique, Febr. 2010 (CINE lecture);

Conseil scientifique de l'INRIA, Febr. 2010;

Stochastic Networks Workshop,
in the programme “Stochastic Processes in
Communication Sciences”, Isaac Newton Institute
for Mathematical Sciences, Cambridge, UK, March
2010;
http://

INSA Lyon, April 2010;

Technicolor seminar, April 2010;

Princeton University, May 2010;

Qualcomm Research, May 2010;

Bristol University, June 2010;

9th Workshop on stochastic
analysis and related fields, a conference in
honor of A.S. Üstünel for his 60th birthday,
Telecom ParisTech, Paris, Sept. 2010.
http://

2nd IFAC Workshop on
Distributed Estimation and Control in Networked
Systems, NECSYS'10, Annecy, Sept. 2010
http://

Presentation in the following conferences or seminars:

International Symposium on
Information Theory (ISIT), Austin, USA, June
2010;
http://

Presentation in the following conferences or seminars:

IEEE Infocom 2010, San Diego,
CA, USA, March 2010;
http://

Spatial Network Models for
Wireless Communications, in the programme
“Stochastic Processes in Communication Sciences”,
Isaac Newton Institute for Mathematical Sciences
, Cambridge, UK, April 2010;
http://

Workshop on Stochastic
Processes in Communication Networks for Young
Researchers, in the programme “Stochastic
Processes in Communication Sciences”, Isaac
Newton Institute for Mathematical Sciences,
Edinburgh, UK, June 2010;
http://

Tutorial at the 3rd Euro-NF
Summer School on Opportunistic Networking,
Valencia (Spain), June/July 2010;
http://

Third Joint IFIP Wireless and
Mobile Networking Conference (IFIP WMNC) October
2010, Budapest, Hungary;
http://

Participation in the following conferences:

Workshop on New Topics at the
Interface Between Probability and Communications,
in the programme “Stochastic Processes in
Communication Sciences”, Isaac Newton Institute
for Mathematical Sciences, Cambridge, GB, January
2010;
http://

Mathematical Challenges in
Stochastic Networks, Oberwolfach, October 2010;
http://

Presentation in the following conferences or seminars:

10th International Workshop on
Discrete Event System (WODES), Berlin,
August/September 2010;
http://

10th International Conference
on Embedded Software (EMSOFT), Scottsdale,
Arizona, October 2010;
http://

Presentation in the following conferences or seminars:

Poster presentation at
Stochastic Networks Workshop, in the programme
“Stochastic Processes in Communication Sciences”,
Isaac Newton Institute for Mathematical Sciences,
Cambridge, UK, March 2010;
http://

Poster presentation at
Simulation of Networks Workshop, in the programme
“Stochastic Processes in Communication Sciences”,
Isaac Newton Institute for Mathematical Sciences,
Cambridge, UK, June 2010;
http://

Workshop on Stochastic
Processes in Communication Networks for Young
Researchers, in the programme “Stochastic
Processes in Communication Sciences”, Isaac
Newton Institute for Mathematical Sciences,
Edinburgh, UK, June 2010;
http://

QEST 2010 and NSMC 2010,
Williamsburg, Virginia, USA, September, 2010;
http://

Participation in the following conferences:

Dynamics and Computation, 2nd
week of “Math-Info 2010 Towards new interactions
between mathematics and computer science”,
C.I.R.M, Marseille, February 2010;
http://

Spatial Network Models for
Wireless Communications, in the programme
“Stochastic Processes in Communication Sciences”,
Isaac Newton Institute for Mathematical Sciences,
Cambridge, UK, April 2010;
http://

MAMA 2010 (workshop of
SIGMETRICS), June 2010;
http://

JAC 2010, Turku, Finland,
December 2010;
http://

Member of the program committee of IEEE WCNC'10, CCNC'10.

Presentation in the following conferences or seminars:

INRIA Alcatel-Lucent Joint Lab seminar, Villarceaux, France, March 2010,

IEEE International Conference
on Communications (ICC'10), Cape Town, South
Africa, May 2010;
http://

Alcatel-Lucent Bell Labs Open Days 2010, Villarceaux, France, May 2010,

INRIA Alcatel-Lucent Selfnet seminar, Paris, France, November 2010.

Presentation in the following conferences or seminars:

Aléa, Marseille, March 2010;
http://

Seminar “Combinatoire
énumérative et analytique”, Paris (LIAFA),
October 2010;
http://

Presentation in the following conferences or seminars:

Poster presentation at
Statistics of Networks Workshop, in the programme
“Stochastic Processes in Communication Sciences”,
Isaac Newton Institute for Mathematical Sciences,
Cambridge, June 2010;
http://

Participation in the following conferences:

Simulation of Networks
Workshop, in the programme “Stochastic Processes
in Communication Sciences”, Isaac Newton
Institute for Mathematical Sciences, Cambridge,
June 2010;
http://

Organization with D. Denisov and
B. Zwart of Workshop on Stochastic Processes in
Communication Networks for Young Researchers,
Edinburgh, June 2010;

Presentation in the following conferences or seminars:

ACM-SIAM Symposium on Discrete Algorithms (SODA10), Austin, January 2010;

2010 Information Theory and
Applications Workshop (ITA 2010), San Diego,
February 2010;

Stochastic Networks Workshop, in the programme “Stochastic Processes in Communication Sciences”, Isaac Newton Institute for Mathematical Sciences, Cambridge, March 2010;

International Workshop in
Applied Probability (IWAP 2010), Madrid, July
2010;

Journées MAS, Bordeaux,
September 2010;

Mathematical Challenges in
Stochastic Networks, Oberwolfach, October 2010;

Fourth EPFL-UPEMLV Workshop on
Random Matrices, Information Theory and
Applications, Paris, December 2010;

Participation in the following conferences:

New Topics at the Interface Between Probability and Communications, in the programme “Stochastic Processes in Communication Sciences”, Isaac Newton Institute for Mathematical Sciences, Cambridge, January 2010;

Participation in the following conferences:

Stochastic Networks Workshop,
in the programme “Stochastic Processes in
Communication Sciences”, Isaac Newton Institute
for Mathematical Sciences, Cambridge, UK, March
2010;

Spatial Network Models for
Wireless Communications, in the programme
“Stochastic Processes in Communication Sciences”,
Isaac Newton Institute for Mathematical Sciences,
Cambridge, UK, April 2010;

Stochastic Processes in
Communication Networks for Young Researchers, in
the programme “Stochastic Processes in
Communication Sciences”, Isaac Newton Institute
for Mathematical Sciences, Edinburgh, UK, June
2010;

Simulation of Networks
Workshop, in the programme “Stochastic Processes
in Communication Sciences”, Isaac Newton
Institute for Mathematical Sciences, Cambridge,
UK, June 2010;

Statistics of Networks
Workshop, in the programme “Stochastic Processes
in Communication Sciences”, Isaac Newton
Institute for Mathematical Sciences, Cambridge,
UK, June 2010;

Presentation in the following conferences or seminars:

IEEE Infocom 2010, San Diego,
CA, USA, March 2010;

Presentation in the following conferences or seminars:

2010 IEEE Symposium on New Frontiers in Dynamic Spectrum (DYSPAN10), Singapore, April 2010;

Participation in the following conferences:

Stochastic Networks Workshop, in the programme “Stochastic Processes in Communication Sciences”, Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, March 2010;

Member of the program committee of the IEEE ICC Cognitive Radio and Networks Symposium 2010.

Presentation in the following conferences or seminars:

Sixth International Workshop
on Spatial Stochastic Models for Wireless
Networks (SpasWIN), Avignon, France, June 2010;

Bell Labs, Fraunhofer Heinrich Hertz Institute and Deutsche Telekom Labs' joint workshop on the *Future of Communications: Science, Technologies, and Services*, Berlin, June 2010;

IEEE 21st International
Symposium on Personal Indoor and Mobile Radio
Communications (PIMRC), Istanbul, Turkey, Sept
2010;

Presentation in the following conferences or seminars:

Workshop on Statistical Physics of Complexity, Optimization and Systems Biology, Ecole de Physique des Houches, March 2010;

8th French Combinatorial Conference, Orsay, June 2010;

Journées Modélisation
Aléatoire et Statistique, Bordeaux, September
2010;

Workshop on Mathematical
Challenges in Stochastic Networks, Oberwolfach,
October 2010;

Fourth EPFL-UPEMLV Workshop on
Random Matrices, Information Theory and
Applications, Paris, December 2010;

Member of the program committee of ACM Sigmetrics 2010.

Presentation in the following conferences or seminars:

OSDI, Vancouver, October 2010;

Participation in the following conferences:

ACM Internet Measurement
Conference, Melbourne, 2010;

Presentation in the following conferences or seminars:

Poster presentation at Spatial
Network Models for Wireless Communications, in
the programme “Stochastic Processes in
Communication Sciences”, Isaac Newton Institute
for Mathematical Sciences, Cambridge, UK, April
2010;

Workshop on Stochastic Processes in Communication Networks for Young Researchers, Edinburgh, UK, June 2010;

ICM Satellite Conference on Probability and Stochastic Processes, Bangalore, India, August 2010;

Department of Mathematics, Indian Institute of Technology-Madras, Chennai, India, August 2010;

The 48th Allerton Conference on Communication, Control and Computing, Urbana-Champaign, USA, September 2010;

Participation in the following conferences:

New Topics at the Interface Between Probability and Communications, in the programme “Stochastic Processes in Communication Sciences”, Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, January 2010;