TREC is a joint INRIA-ENS project-team. It is focused on the modeling, the control and the design of communication networks and protocols. Its methodological activities are combined with projects defined with industrial partners, notably Alcatel-Lucent, Technicolor, Qualcomm and Orange. The main research directions are:

modeling and performance analysis of wireless networks: network information theory, coverage and load analysis, power control, evaluation and optimization of the transport capacity, self organization;

stochastic network dynamics: stability, worst-case performance analysis using the (max,plus) algebra, network calculus, perfect simulation, inverse problems, distributed consensus;

economics of networks: epidemic risk model, incentives, security, insurance, diffusion of innovations;

the development of mathematical tools based on stochastic geometry, random geometric graphs and spatial point processes: Voronoi tessellations, coverage processes, random spatial trees, random fields, percolation;

combinatorial optimization and analysis of algorithms: random graphs, belief propagation.

TREC's web site at Inria

The collaboration of TREC and the Wireless Foundations Center of UC Berkeley became part of the inria@siliconvalley program.

The paper “Impact of Clustering on Diffusions and Contagions in Random Networks” received the best paper award at NetGCOOP 2011, the International Conference on Network Games, Control and Optimization.

**Modeling and performance analysis of wireless networks.** Our main focus was on cellular networks, mobile ad hoc networks (MANETs) and their vehicular variants called VANETs.

Our main advances about wireless networks have been based on the development of analytical tools for their performance analysis and on new results from network information theory.

Concerning cellular networks, the main questions bear on coverage and capacity in large CDMA networks when taking intercell interferences and power control into account. Our main focus has been on the design of: 1) a strategy for the densification and parameterization of UMTS and future OFDM networks that is optimized for both voice and data traffic; 2) new self organization and self optimization protocols for cellular networks e.g. for power control, sub-carrier selection, load balancing, etc.

Concerning MANETs, we investigated MAC layer scheduling algorithms, routing algorithms and power control. The MAC protocols we considered are based on Aloha and CSMA as well as their cognitive radio extensions. We investigated opportunistic routing schemes for MANETs and VANETs. The focus was on cross layer optimizations allowing one to maximize the transport capacity of multihop networks.

**Theory of network dynamics.** TREC is pursuing the analysis of network dynamics by algebraic methods. The mathematical tools are those of discrete event dynamical systems: semi-rings,
and in particular network calculus, ergodic theory, perfect simulation, stochastic comparison, inverse problems, large deviations, etc. Network calculus gives results on worst-case
performance evaluation; ergodic theory is used to assess the stability of discrete event dynamical systems; inverse problem methods are used to estimate some network parameters from
external observations and to design network probing strategies.

**The development of stochastic geometry and random geometric graph tools.** Stochastic geometry is a rich branch of applied probability which allows one to quantify random phenomena on
the plane or in higher dimension. It is intrinsically related to the theory of point processes and also to random geometric graphs. Our research is centered on the development of a
methodology for the analysis, the synthesis, the optimization and the comparison of architectures and protocols to be used in wireless communication networks. The main strength of this
method is its capacity for taking into account the specific properties of wireless links, as well as the fundamental question of scalability.

**Combinatorial optimization and analysis of algorithms.** In this research direction, started in 2007, we build upon our expertise on random trees and graphs and our collaboration with D. Aldous in Berkeley. Sparse graph structures have proved useful in a number of applications, from information processing tasks to the modeling of social networks. We obtained new results in this research direction: computation of the asymptotics of the rank of the adjacency matrix of random graphs, and computation of the matching number and the b-matching number of large graphs. We also applied our results to design bipartite graph structures for efficient balancing of heterogeneous loads and to analyze the flooding time in random graphs.

**Economics of networks.** The premise of this relatively new direction of research, developed jointly with Jean Bolot [SPRINT ATL and then TECHNICOLOR], is that economic incentives drive the development and deployment of technology. Such incentives exist if there is a market where suppliers and buyers can meet. In today's Internet, such a market is missing. We started by looking at the general problem of security on the Internet from an economic perspective. A new research direction started on the economic value of user localization in wireless networks. This led to an Infocom'11 paper. We also built on our expertise in random graphs to derive new insights concerning diffusion and cascading behavior in random (possibly clustered) networks.

We have investigated various applications of our research results with the following industrial partners and user associations:

**Wireless Networks**

Alcatel-Lucent Bell Laboratories (L. Thomas and L. Roulet) on self optimization in cellular networks.

Qualcomm (T. Richardson and his group) on improvements of CSMA CA.

Orange (M. Karray) on cellular networks.

**Network Dynamics**

Thalès and Real-Time-at-Work on embedded networks.

Grenouille on probing in access networks.

**Network Economics**

Technicolor (J. Bolot) on economic incentives and user localization.

The work on the self optimization of cellular networks based on Gibbs' sampler (see Section ) carried out in the joint laboratory with Alcatel-Lucent, led to the development of a software prototype that was presented by C. S. Chen at the INRIA Alcatel-Lucent joint laboratory seminar in March 2010 and demonstrated at the Alcatel-Lucent Bell Labs Open Days in May 2010. It was also demonstrated in the LINCS opening ceremony in April 2011.

The work on perfect sampling (see Section ) has been partially implemented in the software tool PSI2, in collaboration with the MESCAL team [INRIA Grenoble - Rhône-Alpes].

This axis bears on the analysis and the design of wireless access communication networks. Our contributions are organized in terms of network classes: cellular networks, wireless LANs and MANETs, VANETs. We also have a section on generic results that concern more general wireless networks. We are interested both in macroscopic models, which are particularly important for economic planning, and in models allowing the definition and the optimization of protocols. Our approach combines several tools: queueing theory, point processes, stochastic geometry, random graphs, distributed control algorithms and self organization protocols.

The activity on cellular networks has several complementary facets ranging from performance evaluation to protocol design. The work is mainly based on strong collaborations with Alcatel-Lucent and Orange Labs.

Our objective in is to analyze the impact of fading and opportunistic scheduling on the quality of service perceived by the users in an Orthogonal Frequency Division Multiple Access (OFDMA) cellular network. To this end, assuming Markovian arrivals and departures of customers that transmit some given data volumes, as well as some temporal channel variability (fading), we study the mean throughput that the network offers to users in the long run. Explicit formulas are obtained for allocation policies that may or may not take advantage of the fading, called respectively opportunistic and non-opportunistic. The main practical results are the following. Firstly, we evaluate for the non-opportunistic allocation the degradation due to fading compared to Additive White Gaussian Noise (AWGN) (that is, a decrease of at least 13% of the throughput). Secondly, we evaluate the gain induced by the opportunistic allocation. In particular, when the traffic demand per cell exceeds some value (about 2 Mbit/s in our numerical example), the gain induced by opportunism compensates for the degradation induced by fading compared to AWGN. Partial results were presented at ComNet in 2009 .

Shadowing is believed to degrade the quality of service (QoS) in wireless cellular networks. Assuming log-normal shadowing, in we studied the mobile's path loss with respect to the strongest (serving) base station (BS) and the corresponding interference factor (the ratio of the sum of the path gains from interfering BSs to the path gain from the serving BS), two key ingredients of the analysis and design of cellular networks, and discovered a more subtle reality. We observe, as commonly expected, that a strong variance of the shadowing increases the mean path loss with respect to the serving BS, which may in consequence compromise QoS. However, in some cases, an increase of the variance of the shadowing can significantly reduce the mean interference factor and, in consequence, improve some QoS metrics in interference-limited systems, provided the handover policy selects the strongest BS as the serving one. We exemplify this phenomenon, similar to stochastic resonance, by studying the blocking probability in regular hexagonal networks in a semi-analytic manner, using a spatial version of the Erlang loss formula combined with the Kaufman-Roberts algorithm. A more detailed probabilistic analysis explains that an increasing variance of the log-normal shadowing amplifies the ratio between the strongest signal and all other signals, thus reducing the interference. These observations might shed new light, in particular, on the design of indoor communication scenarios. Partial results were presented at IFIP WMNC'2010 .
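The Kaufman-Roberts recursion admits a compact implementation. The sketch below (function and parameter names are ours, and this is the plain multi-rate Erlang loss version, not the spatial semi-analytic procedure used in the study) computes the occupancy distribution of a link and the per-class blocking probabilities:

```python
def kaufman_roberts(capacity, demands, loads):
    """Kaufman-Roberts recursion for a multi-rate Erlang loss link:
    class k offers loads[k] Erlangs of traffic and each of its calls
    occupies demands[k] capacity units.  Returns the stationary
    occupancy distribution q and the per-class blocking probabilities."""
    q = [0.0] * (capacity + 1)
    q[0] = 1.0
    for n in range(1, capacity + 1):
        # each class contributes load * demand * q(n - demand)
        q[n] = sum(a * b * q[n - b]
                   for a, b in zip(loads, demands) if n >= b) / n
    total = sum(q)
    q = [x / total for x in q]
    # a class-k call is blocked when fewer than demands[k] units are free
    blocking = [sum(q[max(0, capacity - b + 1):]) for b in demands]
    return q, blocking
```

For a single class with unit demand this reduces to the classical Erlang B formula.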

Three patents were filed under the INRIA/Alcatel–Lucent joint laboratory.

In a wireless network composed of randomly scattered nodes, the characterization of the distribution of the best signal quality received from a group of nodes is of primary importance for many network design problems. The thesis of Van Minh Nguyen developed a framework for analyzing this distribution using shot noise models for the interference field. The joint distribution of the interference and the maximum signal strength was identified. The best signal quality can be represented as a function of these two quantities. Particular practical scenarios were also analyzed in which explicit expressions can be obtained.

The Foschini-Miljanic algorithm is used for power control in cellular networks when users require a fixed bit rate. It leads to an optimal choice of power by the users in a distributed way when such a solution exists. If the users are too greedy or too many, the network saturates, and it is not possible to provide the required bit rates. We have been working on the question of residual bandwidth estimation in . The residual bandwidth of a user is defined as the rate that this user should have to saturate the network when all other users stick to their initial rate requirement and all users use power control. The aim is to determine the residual bandwidth of a given user by local measurements. We showed that by slightly changing their SINR target and by listening to the evolution of interference, users can locally invert the Foschini-Miljanic algorithm and compute their residual bandwidth.
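The forward iteration that this inversion builds on can be sketched as follows: each user rescales its power by the ratio of its SINR target to its currently measured SINR, using only local measurements. This is a minimal sketch of the classical algorithm (names and the toy gain matrix are ours), not of the inversion procedure:

```python
import numpy as np

def foschini_miljanic(G, gamma, noise, iters=200):
    """Distributed Foschini-Miljanic power control.  G[i][j] is the path
    gain from transmitter j to receiver i, gamma[i] the SINR target of
    user i, noise[i] the receiver noise power.  Each user updates
    p_i <- (gamma_i / SINR_i) * p_i; when a feasible power vector exists,
    the iteration converges to the minimal one."""
    G = np.asarray(G, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    noise = np.asarray(noise, dtype=float)
    p = np.ones(len(gamma))
    for _ in range(iters):
        interference = G @ p - np.diag(G) * p + noise   # received power minus own signal
        sinr = np.diag(G) * p / interference
        p = (gamma / sinr) * p
    return p
```

On a symmetric two-user example with cross gains 0.1 and unit direct gains, both users settle at the minimal power 1/9 meeting a unit SINR target.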

Cellular networks are usually modeled by placing the base stations according to a regular geometry such as a grid, with the mobile users scattered around the network either as a Poisson point process (i.e. uniform distribution) or deterministically. These models have been used extensively for cellular design and analysis but suffer from being both highly idealized and not very tractable. Thus, complex simulations are used to evaluate key metrics such as coverage probability for a specified target rate (equivalently, the outage probability) or average/sum rate. More tractable models have long been desirable. In a joint work with J. Andrews and R. Ganti [UT Austin, USA] and , we developed general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) based on homogeneous Poisson point processes and derived the coverage probability and rate. Under very general assumptions, the resulting expressions for the SINR cumulative distribution function involve quickly computable integrals, and in some important special cases of practical interest these integrals can be simplified to common integrals (e.g., the Q-function) or even to exact and quite simple closed-form expressions. We also derived the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compared the coverage predictions obtained by this approach to the standard grid model and an actual base station deployment. We observed that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in urban cellular networks with highly variable coverage radii.
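The tractability claim is easy to check numerically. Under nearest-BS association, Rayleigh fading, path-loss exponent 4 and the interference-limited regime, the closed form for the coverage probability at SINR threshold 1 evaluates to 1/(1 + π/4) ≈ 0.56, independently of the BS density. The Monte Carlo sketch below (parameter choices and function name are ours) reproduces this value:

```python
import numpy as np

def coverage_probability(lam=1.0, alpha=4.0, theta=1.0, radius=10.0,
                         n_runs=4000, rng=None):
    """Monte Carlo estimate of P(SINR > theta) for a typical user at the
    origin of a homogeneous Poisson cellular network: BSs of intensity lam
    in a disk of given radius, nearest-BS association, i.i.d. exponential
    (Rayleigh) fading, path loss r**(-alpha), no thermal noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    covered = 0
    area = np.pi * radius ** 2
    for _ in range(n_runs):
        n = rng.poisson(lam * area)
        if n < 2:
            continue                                # negligible event for lam*area >> 1
        r = radius * np.sqrt(rng.random(n))         # radii of uniform points in the disk
        h = rng.exponential(1.0, n)                 # Rayleigh fading power gains
        power = h * r ** (-alpha)
        serving = np.argmin(r)                      # the nearest BS serves the user
        interference = power.sum() - power[serving]
        if power[serving] / interference > theta:
            covered += 1
    return covered / n_runs
```

The finite simulation window slightly truncates far interference, but for path-loss exponent 4 the resulting bias is small.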

Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In a collaboration with H. Dhillon, J. Andrews and R. Ganti [UT Austin, USA] , we extended the approach of to develop a model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming that a mobile user connects to the strongest candidate BS, that the resulting Signal-to-Interference-plus-Noise Ratio (SINR) is greater than 1 when in coverage, and Rayleigh fading, we derived an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access. One interesting observation for interference-limited open access networks is that at a given SINR, adding more tiers and/or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same SINR threshold.

A MANET is made of mobile nodes which are at the same time terminals and routers, connected by wireless links, the union of which forms an arbitrary topology. The nodes are free to move randomly and organize themselves arbitrarily. Important issues in such a scenario are connectivity, medium access (MAC), routing and stability. This year, we worked on the analysis of MAC and routing protocols in multi-hop MANETs in collaboration with Paul Mühlethaler [INRIA HIPERCOM], and on a game theoretic view of Spatial Aloha in collaboration with E. Altman and M.K. Hanawal [INRIA MAESTRO] .

The most popular medium access mechanism for such ad hoc networks is CSMA/CA with RTS/CTS. In CSMA-like mechanisms, spatial reuse is achieved by implementing energy based guard zones. In a new collaboration with Qualcomm ( and ), we considered the problem of simultaneously scheduling the maximum number of links that can achieve a given signal to interference ratio (SIR). Using tools from stochastic geometry, we studied and maximized the medium access probability of a typical link. Our contributions are two-fold: (i) We showed that a simple modification to the RTS/CTS mechanism, viz., changing the receiver yield decision from an energy-level guard zone to an SIR guard zone, leads to performance gains; and (ii) We showed that this combined with a simple modification to the transmit power level – setting it to be inversely proportional to the square root of the link gain – leads to significant improvements in network throughput. Further, this simple power-level choice is no worse than a factor of two away from optimal over the class of all "local" power level selection strategies for fading channels, and further is optimal in the non-fading case. The analysis relies on an extension of the Matérn hard core point process which allows us to quantify both these SIR guard zones and this power control mechanism.
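The base construction that the analysis extends can be sketched as follows: in a Matérn type-II hard-core process, each point of a Poisson parent process carries an independent uniform mark and is retained only if its mark is the smallest within its hard-core radius. This is a minimal illustration of the classical model (names and parameters are ours), not of the SIR-guard-zone extension developed with Qualcomm:

```python
import numpy as np

def matern_type_ii(lam, radius, window=10.0, rng=None):
    """Matérn type-II hard-core thinning on a square window: sample a
    Poisson parent process of intensity lam, give each parent an
    independent uniform mark, and keep a point only when no other parent
    within the hard-core radius has a smaller mark."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = rng.poisson(lam * window * window)
    pts = rng.random((n, 2)) * window
    marks = rng.random(n)
    keep = []
    for i in range(n):
        d2 = np.sum((pts - pts[i]) ** 2, axis=1)
        neighbours = (d2 < radius ** 2) & (np.arange(n) != i)
        if not np.any(marks[neighbours] < marks[i]):
            keep.append(i)
    return pts[keep]
```

By construction no two retained points lie closer than the hard-core radius, which is the property modeling carrier sensing in CSMA.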

In collaboration with Gustavo de Veciana and Yuchul Kim [UT Austin, ECE] we studied the benefits of channel-aware (opportunistic) scheduling of transmissions in ad-hoc networks using CSMA/CA . The key challenge in optimizing the performance of such systems is finding a good compromise among three interdependent quantities, the density and channel quality of the scheduled transmitters, and the resulting interference at receivers. We propose two new channel-aware slotted CSMA protocols: opportunistic CSMA (O-CSMA) and quantile-based CSMA (QT-CSMA) and develop stochastic geometric models allowing us to quantify their performance in terms of spatial reuse and spatial fairness. When properly optimized these protocols offer substantial improvements in terms of both of these metrics relative to CSMA - particularly when the density of nodes is moderate to high. Moreover, we show that a simple version of QT-CSMA can achieve robust performance gains without requiring careful parameter optimization. The paper supports the case that the benefits associated with channel-aware scheduling in ad hoc networks, as in centralized base station scenarios, might far outweigh the associated overhead, and this can be done robustly using a QT-CSMA like protocol.

This traditional research topic of TREC has several new threads, such as perfect simulation, active probing and Markov decision processes.

Network calculus is a theory that aims at computing deterministic performance guarantees in communication networks. This theory is based on the (min,plus) algebra. Flows are modeled by an *arrival curve* that upper-bounds the amount of data that can arrive during any interval, and network elements are modeled by a *service curve* that gives a lower bound on the amount of service offered to the flows crossing that element. Worst-case performances are then derived by combining these curves.
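For example, for a flow constrained by a token-bucket arrival curve α(t) = b + r·t crossing a rate-latency server β(t) = R·max(t − T, 0) with r ≤ R, the classical bounds are the maximal horizontal and vertical deviations between the two curves. A minimal sketch (function and parameter names are ours):

```python
def delay_backlog_bounds(b, r, R, T):
    """Worst-case network calculus bounds for a (b, r) token-bucket flow
    crossing a rate-latency server beta(t) = R * max(t - T, 0), with
    burst b, sustained rate r, service rate R and latency T (r <= R).
    Delay bound = max horizontal deviation between arrival and service
    curves; backlog bound = max vertical deviation."""
    if r > R:
        raise ValueError("flow rate exceeds service rate: unbounded backlog")
    delay = T + b / R        # burst drained at rate R after latency T
    backlog = b + r * T      # data accumulated until service starts
    return delay, backlog
```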

In cooperation with Aurore Junier [INRIA/IRISA], we present in algorithms to compute worst-case performance upper bounds when the service policy is static priorities, using linear programming. Linear programming does not lead to tight bounds, but by combining this method with (min,plus) methods, we obtain bounds that outperform the previously known bounds. We also prove that in tandem networks, the worst-case performance bound under arbitrary multiplexing can be obtained by a policy with static priorities, the “shortest-destination first” policy.

In collaboration with Bruno Gaujal [INRIA Rhone Alpes] and Nadir Farhi [IFSTTAR], we are working on a model of performance bound calculus on feed-forward networks where data packets are routed under the wormhole routing discipline. We are interested in determining maximum end-to-end delays and backlogs for packets going from a source node to a destination node through a given virtual path in the network. Our objective is to give a “network calculus” approach to calculate the performance bounds. For this, we propose a new concept of curves that we call *packet curves*. These curves make it possible to model constraints on the packet lengths of data flows whose packets may have different lengths. We used this new concept to propose an approach for calculating residual services for data flows served under non-preemptive service disciplines. This notion also enabled us to differentiate two classes of service policies: those based on a packet count (like round-robin and its generalized version), where the packet curve is useful to tighten the computed bounds, and those based on the amount of data served (FIFO, priorities), where it is not. These results can be found in and have been presented at ILAS 2011.

In envelope-based models for worst-case performance evaluation, like Network Calculus or Real-Time Calculus, several types of service curves have been introduced to quantify deterministic service guarantees. We compare these different classes of service curves with respect to composition (servers in tandem) and individual service curves (when several flows share a server, what service curve can be guaranteed to each flow?). In short, there are two main classes of service curves: simple and strict service curves. Individual service curves cannot always be computed when simple service curves are considered, and the class of strict service curves is not stable under the two operations described. We show that there can be no equivalence between the two main classes of service curves and that no in-between notion of service curve can be defined that behaves well under composition. We complete this study by examining other classes of service curves from this viewpoint. These results have been presented in .

With Éric Badouel, Philippe Darondeau [INRIA/IRISA] and Jan Komenda [Institute of Mathematics, Brno], we study in the decidability of the existence and the rationality of delay controllers for systems with time weights in the tropical and interval semirings. Depending on the (max,+)- or (min,+)-rationality of the series specifying the controlled system and the control objective, cases are identified where the controller series defined by residuation is rational, and where it is positive (i.e., where delay control is feasible). When the control objective is specified by a tolerance, i.e. by two bounding rational series, a nice case is identified in which the controller series is of the same rational type as the system specification series.

Active probing began by measuring end-to-end path metrics, such as delay and loss, in a direct measurement process which did not require inference of internal network parameters. The field has since progressed to measuring network metrics, from link capacities to available bandwidth and cross traffic itself, which reach deeper and deeper into the network and require increasingly complex inversion methodologies. The thesis of B. Kauffmann investigates this line of thought as a set of inverse problems in queueing theory. Queueing theory is typically concerned with the solution of direct problems, where the trajectory of the queueing system, and laws thereof, are derived based on a complete specification of the system, its inputs and initial conditions. Inverse problems aim to deduce unknown parameters of the system based on partially observed trajectories. A general definition of the inverse problems in this class was provided and the key variants were mapped out: the analytical methods, the statistical methods and the design of experiments. We also show how this inverse problem viewpoint translates to the design of concrete Internet probing applications.

Inverse problems in the theory of bandwidth sharing networks were also investigated. A bandwidth sharing network allocates bandwidth to each flow in order to maximize a given utility function (typically an α-fair utility).

Most active probing techniques suffer from the “bottleneck” limitation: all characteristics of the path after the bottleneck link are erased and unreachable. We are currently investigating a new tomography technique, based on the measurement of the fluctuations of point-to-point end-to-end delays, which allows one to gain insight into the residual available bandwidth along the whole path. For this, we combined classical queueing theory models with statistical analysis to obtain estimators of the residual bandwidth on all links of the path. These estimators were proved to be tractable, consistent and efficient. In we evaluated their performance with simulations and trace-based experiments.

Propp and Wilson introduced in 1996 a perfect sampling algorithm that uses coupling arguments to give an unbiased sample from the stationary distribution of a Markov chain on a finite state space.
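The Propp-Wilson scheme can be sketched on a toy monotone chain: a finite-capacity birth-death queue. Trajectories started from the bottom (empty) and top (full) states, driven by the same randomness from increasingly distant past times, sandwich every other trajectory; once they coalesce by time 0, the common state is an exact stationary sample. The chain and all names below are illustrative:

```python
import random

def cftp_mm1k(capacity, p_arrival, seed=0):
    """Coupling from the past for a monotone birth-death chain on
    {0, ..., capacity}: with probability p_arrival the state moves up,
    otherwise down, clamped at the boundaries.  Returns one exact sample
    from the stationary distribution."""
    rng = random.Random(seed)
    randomness = []          # randomness[i] drives time -len(randomness)+i
    horizon = 1
    while True:
        # extend the fixed random inputs further into the past
        randomness = [rng.random() for _ in range(horizon - len(randomness))] + randomness
        lo, hi = 0, capacity
        for u in randomness:                     # replay from -horizon to 0
            step = 1 if u < p_arrival else -1
            lo = min(max(lo + step, 0), capacity)
            hi = min(max(hi + step, 0), capacity)
        if lo == hi:                             # coalescence: exact sample
            return lo
        horizon *= 2
```

For this chain the stationary law is geometric with ratio ρ = p/(1 − p), which gives an exact benchmark for the sampler.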

In collaboration with Bruno Gaujal [INRIA Grenoble - Rhone-Alpes], we proposed in a new approach for the general case that only needs to consider two trajectories. Instead of the original chain, we used two bounding processes (envelopes) and we showed that, whenever they couple, one obtains a sample under the stationary distribution of the original chain. We showed that this new approach is particularly effective when the state space can be partitioned into pieces where envelopes can be easily computed. We further showed that most Markovian queueing networks have this property and we proposed efficient algorithms for some of them.

The envelope technique has been implemented in a software tool PSI2 (see Section ).

In collaboration with Bruno Gaujal [INRIA Grenoble - Rhone-Alpes], we proposed a new method to speed up perfect sampling of Markov chains by skipping passive events during the simulation . We showed that this can be done without altering the distribution of the samples. This technique is particularly efficient for the simulation of Markov chains with different time scales such as queueing networks where certain servers are much faster than others. In such cases, the coupling time of the Markov chain can be arbitrarily large while the runtime of the skipping algorithm remains bounded. This was further illustrated by several experiments that also show the role played by the entropy of the system in the performance of our algorithm.

When the cardinality of the state space is so huge that even storing the state of the Markov chain becomes challenging, we propose to combine the ideas of bounding processes and the aggregation of Markov chains . We illustrate the proposed approach of aggregated envelope bounding chains on queueing models with joint arrivals and joint services, often referred to in the literature as assemble-to-order systems. Due to the finite capacity and the coupling in arrivals and services, exact solving techniques are inefficient for larger problem instances. For instance, for the service tools model proposed by Vliegen and Van Houtum (2009), the aggregated envelope method exponentially reduces the dimension of the state space and allows effective perfect sampling algorithms. We also provide bounds for the coupling time under high service rate assumptions.

Solving Markov chains is in general difficult if the state space of the chain is very large (or infinite) and lacking a simple repeating structure. One alternative to solving such chains is to construct models that are simple to analyze and provide bounds for a reward function of interest. The bounds can be established by using different qualitative properties, such as stochastic monotonicity, convexity, submodularity, etc. In the case of Markov decision processes, similar properties can be used to show that the optimal policy has some desired structure (e.g. the critical level policies).

In collaboration with Jean-Michel Fourneau [PRiSM, Université de Versailles Saint-Quentin], we consider two different applications of stochastic monotonicity in the performance evaluation of networks . In the first one, we assume that a Markov chain of the model depends on a parameter that can be estimated only up to a certain level, so that we only have an interval containing the exact value of the parameter. Instead of taking an approximate value for the unknown parameter, we show how the monotonicity properties of the Markov chain can be used to take into account the error bound from the measurements. In the second application, we consider a well-known approximation method: decomposition into submodels. In such an approach, models of complex networks are decomposed into submodels whose results are then used as parameters for the next submodel in an iterative computation. One obtains a fixed point system which is solved numerically. In general, we have neither an existence proof of the solution of the fixed point system nor a convergence proof of the iterative algorithm. Here we show how stochastic monotonicity can be used to answer these questions. Furthermore, monotonicity properties can also help to derive more efficient algorithms for solving fixed point systems.

In collaboration with Jean-Michel Fourneau [PRiSM, Université de Versailles Saint-Quentin] we proposed an iterative algorithm to compute component-wise bounds of the steady-state
distribution of an irreducible and aperiodic Markov chain
. These bounds are based on very simple properties of

In a joint work with I.M. H. Vliegen [University of Twente, The Netherlands] and A. Scheller-Wolf [Carnegie Mellon University, USA] , we presented a new bounding method for Markov chains inspired by Markov reward theory: Our method constructs bounds by redirecting selected sets of transitions, facilitating an intuitive interpretation of the modifications of the original system. We show that our method is compatible with strong aggregation of Markov chains; thus we can obtain bounds for an initial chain by analyzing a much smaller chain. We illustrated our method by using it to prove monotonicity results and bounds for assemble-to-order systems.

In a joint work with Emmanuel Hyon [University of Paris Ouest Nanterre La Defense and LIP6] , we consider a single-item lost sales inventory model with different classes of customers. Each customer class may have different lost sale penalty costs. We assume that the demands follow a Poisson process and we consider a single replenishment hypoexponential server. We give a Markov decision process associated with this optimal control problem and prove some structural properties of its dynamic programming operator. This allows us to show that the optimal policy is a critical level policy. We also discuss some possible extensions to other replenishment distributions and give some numerical results for the hyperexponential server case.

Dynamic systems with local interactions can be used to model problems in distributed computing: gathering global information by exchanging only local information. The challenge is two-fold: first, it is impossible to centralize the information (cells are indistinguishable); second, the cells contain only limited information (represented by a finite alphabet). In the first case, time is discrete, all cells are updated simultaneously, and the model is known as a *Probabilistic Cellular Automaton (PCA)* (e.g. Dobrushin, R., Kryukov, V., Toom, A.: *Stochastic cellular systems: ergodicity, memory, morphogenesis*, 1990). In the second case, time is continuous, cells are updated at random instants, at most one cell is updated at any given time, and the model is known as a (finite range) *Interacting Particle System (IPS)* (e.g. Liggett, T.M.: *Interacting particle systems*, 2005).

In a joint work with N. Fatès [INRIA Nancy – Grand-Est], J. Mairesse and I. Marcovici [LIAFA, CNRS and Université Paris 7]
we consider an infinite graph with nodes initially labeled by
independent Bernoulli random variables of parameter

In a joint work with J. Mairesse and I. Marcovici [LIAFA, CNRS and Université Paris 7] , we considered ergodicity properties of probabilistic cellular automata (PCA). A classical cellular automaton (CA) is a particular case of PCA. For a 1-dimensional CA, we proved that ergodicity is equivalent to nilpotency, and is therefore undecidable. We then proposed an efficient perfect sampling algorithm for the invariant measure of an ergodic PCA. Our algorithm does not assume any monotonicity properties of the local rule. It is based on a bounding process which is shown to be also a PCA. We then focused on the PCA Majority, whose asymptotic behavior is unknown, and performed numerical experiments using the perfect sampling procedure.

The spread of new ideas, behaviors or technologies has been extensively studied using epidemic models. Here we considered a model of diffusion where the individuals' behavior is the result of a strategic choice. We studied a simple coordination game with binary choice and gave a condition for a new action to become widespread in a random network. We also analyzed the possible equilibria of this game and identified conditions for the coexistence of both strategies in large connected sets. Finally, we looked at how firms can use social networks to promote their goals with limited information.

Our results differ strongly from those derived with epidemic models. In particular, we showed that connectivity plays an ambiguous role: while it allows the diffusion to spread, in a highly connected network the diffusion is also limited by high-degree nodes, which are very stable. In the case of a sparse random network of interacting agents, we computed the contagion threshold for a general diffusion model and showed the existence of (continuous and discontinuous) phase transitions. We also computed the minimal size of a seed of new adopters needed to trigger a global cascade when these new adopters can only be sampled without any information on the graph, and showed that this minimal size behaves non-trivially as a function of the connectivity. Our analysis extends methods developed in the random graphs literature based on the properties of empirical distributions of independent random variables, and leads to simple proofs.
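The threshold-contagion dynamics described above can be sketched by simulation. The snippet below uses an Erdős-Rényi graph as a stand-in for the sparse random networks studied in the text, with illustrative parameter values; it iterates best responses of the coordination game to a fixed point.

```python
import random

def cascade_size(n, mean_degree, q, seed_frac, rng):
    """Best-response dynamics for a binary coordination game on an
    Erdos-Renyi graph: a node adopts the new action once the fraction of
    its neighbours that have adopted exceeds the relative threshold q.
    Returns the final fraction of adopters."""
    p = mean_degree / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):                    # sample the random graph
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    active = [rng.random() < seed_frac for _ in range(n)]   # random seed set
    changed = True
    while changed:                        # iterate best responses to a fixed point
        changed = False
        for i in range(n):
            if not active[i] and adj[i]:
                if sum(active[j] for j in adj[i]) / len(adj[i]) > q:
                    active[i] = True
                    changed = True
    return sum(active) / n

frac = cascade_size(n=400, mean_degree=4.0, q=0.25,
                    seed_frac=0.05, rng=random.Random(7))
```

Varying `mean_degree` for a fixed seed fraction exhibits the ambiguous role of connectivity discussed above: very sparse graphs cannot propagate the cascade, while very dense ones make individual nodes too stable to flip.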

The defining characteristic of wireless and mobile networking is user mobility, and related to it is the ability for the network to capture (at least partial) information on where users are located and on how their locations change over time. Location information is becoming critical, and therefore valuable, for an increasing number of location-based or location-aware services. A key open question, however, is exactly how valuable this information is. Our goal in this work is to help understand and estimate the economics, i.e. the value, of location information.

Heuristics indicate that point processes exhibiting clustering of points have larger critical radii for the percolation of their continuum percolation (Boolean) models. We studied this heuristic for point processes that can be compared to the Poisson point process in the *directionally convex* (dcx) order; processes that are dcx-smaller than Poisson are called sub-Poisson.

As explained above, one heuristically expects the critical radii for percolation of sub-Poisson point processes to be finite. We have shown that they are non-zero as well. In a more elaborate paper we presented a reasoning as to why this non-triviality is to be expected. Specifically, we defined two (nonstandard) critical radii for percolation of the Boolean model, called the lower and upper critical radii, related respectively to the finiteness of the expected number of void circuits around the origin and to the asymptotics of the expected number of long occupied paths from the origin, in suitable discrete approximations of the continuum model. These radii sandwich the usual critical radius; ordering of clustering orders the upper critical radii but *reverses* the ordering of the lower critical radii.

Many random models are parametrized by the size of the model, and their essential properties are the asymptotic ones as this size tends to infinity. In the master's thesis we showed that the theory of local weak convergence provides a natural setting in which to investigate stochastic (convex) ordering of such models. We considered both the geometric context above and the discrete one of Galton-Watson branching processes and the configuration model. In the latter case we defined and studied a convex order in the context of random trees and graphs that converge in the local weak sense. In particular, we were interested in the effect of this ordering on percolation. It turns out that while convex ordering of Galton-Watson trees leads to the ordering of their percolation probabilities, we cannot conclude this for the configuration model; in that case we could only obtain the ordering of percolation thresholds.

We also investigated percolation in the AB Poisson-Boolean model.

Random packing models (RPMs) are point processes (p.p.s) in which points that "contend" with each other cannot be simultaneously present. These p.p.s play an important role in many studies in physics, chemistry, material science, forestry and geology. For example, in microscopic physics, chemistry and material science, RPMs can be used to describe systems with hard-core interactions. Applications of this type range from reactions on polymer chains and chemisorption on a single-crystal surface to absorption in colloidal systems. In these models, each point (molecule, particle, etc.) excludes the presence of other contending points in its neighborhood.

In the simplest Matérn point processes, one retains certain points of a Poisson point process in such a way that no two retained points are at distance less than a threshold. This condition can be reinterpreted as a threshold condition on an extremal shot-noise field associated with the Poisson point process. In a joint work with P. Bermolen (Universidad de la República, Montevideo, Uruguay), we studied extensions of Matérn point processes where one retains points that satisfy a threshold condition based on an *additive* shot-noise field of the Poisson point process. We provided an analytical characterization of the intensity of this class of point processes and compared the packing obtained by the extremal and additive schemes and certain combinations thereof.
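The extremal-shot-noise reading is easiest to see on the classical Matérn type-II model, sketched below on a square window (the intensity, hard-core radius and window size are illustrative; edge effects are ignored).

```python
import math
import random

def poisson_variate(lam, rng):
    """Knuth's multiplication method for a Poisson random variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def matern_type_II(intensity, r, size, rng):
    """Sample a Poisson point process on [0, size]^2, attach an independent
    uniform mark to each point, and retain a point iff no other point within
    distance r carries a smaller mark -- a threshold condition on the
    extremal (min) shot-noise of marks over the point's r-neighbourhood."""
    n = poisson_variate(intensity * size * size, rng)
    pts = [(rng.uniform(0, size), rng.uniform(0, size), rng.random())
           for _ in range(n)]
    kept = []
    for x, y, m in pts:
        if all(m < m2 or (x - x2) ** 2 + (y - y2) ** 2 > r * r
               for (x2, y2, m2) in pts if (x2, y2, m2) != (x, y, m)):
            kept.append((x, y))
    return kept

kept = matern_type_II(intensity=50, r=0.1, size=1.0, rng=random.Random(5))
```

By construction no two retained points are within distance `r` of each other; the additive variant discussed above replaces the min over marks in the neighbourhood by a sum of response functions.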

In collaboration with F. Mathieu [INRIA GANG] and Ilkka Norros [VTT, Finland], we started studying a new spatial birth-and-death point process model where the death rate is a shot noise of the point configuration. We showed that the spatial point process describing the steady state exhibits repulsion. We studied two asymptotic regimes: the fluid regime and the hard-core regime. We derived closed-form expressions for the mean (and in some cases the law) of the latency of points as well as for the spatial density of points in the steady state of each regime.

We are currently investigating extensions of this approach to network information theoretic channels.

A new direction of research was initiated, aiming at defining a new class of measures on a point process that are invariant under the action of a navigation on this point process. This class of measures has properties similar to those of Palm measures of stationary point processes, but they cannot be defined in the classical framework of Palm measures.

We considered a natural family of Gibbs distributions over matchings on a finite graph, parameterized by a single positive number called the temperature. The correlation decay technique can be applied for the analysis of matchings at positive temperature and allowed us to establish the weak convergence of the Gibbs marginal as the underlying graph converges locally. However for the zero temperature problem (i.e. maximum matchings), we showed that there is no correlation decay even in very simple cases. By using a complex temperature and a half-plane property due to Heilmann and Lieb, we were able to let the temperature tend to zero and obtained a limit theorem for the asymptotic size of a maximum matching in the graph sequence.

With Laurent Massoulié [Technicolor], we extended the results obtained previously on the asymptotic size of maximum matchings in random graphs converging locally to Galton-Watson trees to so-called b-matchings (with non-unitary capacities at vertices as well as constraints on individual edges). Compared to the matching case, this involves studying the convergence of a message passing algorithm which transmits vectors instead of single real numbers. We also looked further into an application of these results to large-scale distributed content service platforms, such as peer-to-peer video-on-demand systems. In this context, the density of maximum b-matchings corresponds to the maximum fraction of simultaneously satisfiable requests when the service resources are limited and each server can only handle requests for a predetermined subset of the contents it has stored in memory. An important design aspect of such systems is the placement of contents onto the servers depending on the estimated content popularities; the results obtained allow us to characterize the efficiency of such placement strategies and to determine the optimal strategies in the limit of large storage capacity at the servers.
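The content-placement interpretation can be made concrete on a tiny toy instance: unit requests on one side, capacitated servers on the other, and the maximum number of simultaneously satisfiable requests computed here by augmenting paths (the instance is illustrative, and max-flow stands in for the message passing analysis of the text).

```python
from collections import deque

def max_b_matching(server_caps, compat, n_requests):
    """Maximum number of simultaneously satisfiable unit requests, where
    request i can only be served by the servers in compat[i] and server s
    can serve at most server_caps[s] requests: a bipartite b-matching,
    computed as an integral max flow via BFS augmenting paths."""
    n_servers = len(server_caps)
    # node ids: 0 = source, 1..n_requests = requests,
    # then the servers, and finally the sink
    S, T = 0, 1 + n_requests + n_servers
    cap = {}
    def add_edge(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)          # residual arc
    for i in range(n_requests):
        add_edge(S, 1 + i, 1)
        for s in compat[i]:
            add_edge(1 + i, 1 + n_requests + s, 1)
    for s, c in enumerate(server_caps):
        add_edge(1 + n_requests + s, T, c)
    flow = 0
    while True:
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:   # BFS for an augmenting path
            u = queue.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        if T not in parent:
            return flow                    # no augmenting path left
        v = T
        while v != S:                      # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# 4 requests, 2 servers: server 0 has capacity 2, server 1 has capacity 1
served = max_b_matching([2, 1], [[0], [0], [1], [0, 1]], 4)
```

Here total server capacity is 3 and a serving assignment of size 3 exists, so 3 of the 4 requests are simultaneously satisfiable; the density of such maximum b-matchings is the quantity whose large-graph limit the results above characterize.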

These results allowed us to analyze an asynchronous randomized broadcast algorithm on random regular graphs. They show that the asynchronous version of the algorithm performs better than its synchronized counterpart: in the large-size limit of the graph, it reaches the whole network faster even though the local dynamics are similar on average.
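A minimal simulation of asynchronous push broadcast conveys the continuous-time dynamics described above; for simplicity this sketch runs on a ring rather than a random regular graph, with an illustrative size.

```python
import heapq
import random

def async_broadcast_time(adj, rng):
    """Asynchronous push protocol: each informed node rings at the points of
    an independent rate-1 Poisson clock and, at each ring, pushes the rumour
    to a uniformly chosen neighbour. Returns the time at which all nodes
    become informed, starting from node 0."""
    n = len(adj)
    informed = [False] * n
    informed[0] = True
    count = 1
    events = [(rng.expovariate(1.0), 0)]    # (next clock ring, node)
    t = 0.0
    while count < n:
        t, u = heapq.heappop(events)
        v = rng.choice(adj[u])              # push to a random neighbour
        if not informed[v]:
            informed[v] = True
            count += 1
            heapq.heappush(events, (t + rng.expovariate(1.0), v))
        heapq.heappush(events, (t + rng.expovariate(1.0), u))
    return t

n = 64
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
finish = async_broadcast_time(ring, random.Random(11))
```

The synchronized variant would instead update all informed nodes in lockstep rounds; comparing the two on the same graph illustrates the speed advantage of asynchrony stated above.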

TREC is a partner of the 3-year ANR project called CMON, jointly with Technicolor, LIP6, the INRIA project-team Planète and the community
http://

TREC is a partner of the 3-year ANR project called PEGASE, jointly with ENS Lyon, the INRIA project-team MESCAL, ONERA, Real-Time-at-Work (start-up) and Thalès. This project is focused on the analysis of critical embedded networks using algebraic tools. The aim is to apply these techniques to AFDX and Spacewire architectures. Nadir Farhi was a post-doc hired through this grant until January 2011, and Abir Benabid was hired in March 2011.

Ana Bušić is participating (20%) in the 4-year ANR project MAGNUM (Méthodes Algorithmiques pour la Génération aléatoire Non Uniforme: Modèles et applications), 2010 – 2014;
http://

The CIFRE grant of Mathieu Leconte started in January 2011. The topic bears on information dissemination and recommendation in social networks. The distribution of multimedia content and the use of social networks such as Facebook, Orkut, etc. are booming in today's networks. These social networks are also increasingly used for the dissemination and recommendation of content. The objective of the thesis is to develop an understanding of how information disseminates in social networks depending on the type of information, user tastes, and the topological structure of these networks. This study will result in methods for more effective dissemination of content.

TREC participates in the Laboratory of Information, Networking and Communication Sciences (LINCS);
http://

Project
*Analyse et Conception de Réseaux Sans Fil Auto-Organisés* (ACRON) started in 2011. Coordinator: Supélec (Télécommunications). Partners: Inria HIPERCOM, Université Paris-Sud, IEF. TREC is an associated partner.

The objective of this project is to work on characterization of the fundamental performance limits of large self-organizing wireless networks and develop distributed and self-organizing communication techniques that will approach the theoretical limits.

Two-year Inria collaborative action
(*Action de recherche collaborative, ARC*) OCOQS, “Optimal threshold policies in COntrolled Queuing Systems”, started in 2011. Coordinator: Ana Bušić. Participants: Alain Jean-Marie (MAESTRO, INRIA Sophia-Antipolis), Emmanuel Hyon (University of Paris Ouest and LIP6), Ingrid Vliegen (University of Twente);
http://

TREC has participated in setting up the Research Group (Groupement de recherche, GdR) on Stochastic Geometry led by Pierre Calka (Université de Rouen). This GdR will provide a collaboration framework for all French research teams working in the domain of *spatial stochastic modeling*, both on theory development and on applications. This year the application was accepted by the National Committee of CNRS and the group will be officially created in 2012.

Exploratory research project (Projet Exploratoire Premier Soutien, PEPS) of INS2I CNRS titled “Simulation Temps Parallèle, Simulation Parfaite et Monotonie” (MonoSimPa): a one-year exploratory project on parallel and perfect simulation, joint with PRiSM, Versailles (UMR 8144) and LIG, Grenoble (UMR 5217).

European Network of Excellence (NoE),
http://

Project acronym: Euro-NF;

Duration: January 2008 - June 2012;

Coordinator: D. Kofman (Institut Télécom);

Partners: about 30 partners;

Abstract: This NoE is focused on the next generation Internet. Its main target is to integrate the research effort of the partners so as to be a source of innovation and a think tank on possible scientific, technological and socio-economic trajectories towards the network of the future. Euro-NF is supported by the theme "Information and Communication Technologies (ICT)" under the 7th Framework Programme of the European Community for RTD. Euro-NF is a continuation of Euro-NGI.

EIT ICT Labs Action Line: Internet Technologies and Architectures.

Project acronym: FUN

Project title: Fundamentals of Networking

Duration: January 2011 - December 2011

Coordinator: INRIA TREC

Partners: the partners are INRIA TREC and INRIA GANG (Fabien Mathieu) in France, VTT (Ilkka Norros, Samuli Aalto) and Aalto University (Pekka Orponen) in Finland, Eindhoven University (Sem Borst, Onno Boxma and Remco van der Hofstad) in the Netherlands.

Abstract: The aim of this project is to build a community of researchers focusing on fundamental theoretical issues of future networking. The topics of interest include: communication theory, network information theory, distributed algorithms, self-organization and game theory, modeling of large random and complex networks and structures. The proposal builds upon collaborations within the EURONF Network of Excellence, where the three institutions are partners and where the researchers have had fruitful scientific interactions.

Title: Information Theory, Stochastic Geometry, Wireless Networks

INRIA principal investigator: François Baccelli

International Partner:

Institution: University of California Berkeley (United States)

Laboratory: EECS Department

Researcher: Venkat Anantharam, Anant Sahai, David Tse.

International Partner:

Institution: Stanford University (United States)

Laboratory: EE

Researcher: Abbas El Gamal.

Duration: 2011 - 2013

See also:
http://

The activity of this proposal is centered on the interplay between stochastic geometry and network information theory, with a particular emphasis on wireless networks. In terms of research, three main lines of thought will be pursued:

1. error exponents and stochastic geometry;

2. stochastic geometry and network information theory;

3. cognitive radio and stochastic geometry.

Aleksander Wieczorek

Subject: Optimal control of an inventory system

Institution: Poznan University of Technology (Poland)

Mir Omid Haji Mirsadeghi (from Jan 2011 until Sep 2011)

Subject: Graph matching based on semi-definite positive relaxation

Institution: Sharif University of Technology (Iran, Islamic Republic of)

The following scientists gave talks on Trec's seminar in 2011 (see
http://

**Michel Mandjes**(University of Amsterdam) /Nov 30/ Talking on: "Birthday surprises",

**Prashant Mehta**(University of Illinois at Urbana-Champaign) /Oct 14/ Talking on: "Feedback Particle Filter: A New Formulation for Nonlinear Filtering based on Mean-field
Theory",

**Anastasios Giovanidis**(TREC) /Sep 23/ Talking on: "Measurement Based Self-Optimization in Random Access Communications",

**Anne Bouillard**(TREC) /Sep 22/ Talking on: "Residuation of tropical series: rationality issues",

**Naoto Miyoshi**(Tokyo Institute of Technology) /Sep 7/ Talking on: "Limiting size index distributions for ball-bin models with Zipf-type frequencies",

**Justin Salez**(TREC), PhD thesis defense /Jul 4/ Talking on: "Quelques conséquences de la convergence locale faible pour les graphes aléatoires",

**Hamed Amini**(TREC), PhD thesis defense /Jun 24/ Talking on: "Epidémies et Percolation dans les Graphes Aléatoires",

**Shenghao Yang**(INC, The Chinese University of Hong Kong) /Jun 23/ Talking on: "BATS Codes: when Network Coding Meets Digital Fountain",

**Van Minh Nguyen**(TREC), PhD thesis defense /Jun 20/ Talking on: "Wireless Link Quality Modelling and Mobility Management Optimisation for Cellular Networks",

**I-Hong Hou**(University of Illinois at Urbana-Champaign) /Jun 15/ Talking on: "Supporting Delay Guarantees over Unreliable Wireless Channels",

**Giovanni Luca Torrisi**(IAC Mauro Picone Italy) /May 26/ Talking on: "Density estimation of functionals of spatial point processes with application to wireless networks",

**Richard Emilion**(Université d'Orléans) /May 16/ Talking on: "Hierarchical Dirichlet Models",

**Kristen Woyach**(UC Berkeley) /Apr 29/ Talking on: "Crime and Punishment for Cognitive Radios",

**Martin Haenggi**(University of Notre Dame) /Apr 14/ Talking on: "A Geometric Approach to Security in Wireless Systems",

**Amin Coja-Oghlan**(University of Warwick) /Apr 8/ Talking on: "Phase transitions and computational complexity",

**Ilkka Norros**(VTT, Finland) /Mar 30/ Talking on: "Stability problems of two-chunk file-sharing systems",

**Michel Mandjes**(University of Amsterdam & EURANDOM & CWI) /Mar 25/ Talking on: "Resource dimensioning through buffer sampling",

**Bruno Kauffmann**(Orange Labs and INRIA/ENS), PhD thesis defense /Mar 24/ Talking on: "Inverse Problems in Networks",

**Darryl Veitch**(University of Melbourne) /Mar 23/ Talking on: "La synchronisation d'horloges à travers l'internet",

**Irene Marcovici**(LIAFA) /Feb 17/ Talking on: "Probabilistic cellular automata, invariant measures, and perfect sampling",

**Stefan Haar**(LSV) /Feb 10/ Talking on: "When you don't have a Free Choice",

**Victor Bapst**(Laboratoire de Physique Théorique, ENS) /Jan 26/ Talking on: "On the spectrum of random regular graphs with random edges weights",

**Armand Makowski**(University of Maryland) /Jan 24/ Talking on: "Recent results for random key graphs: Connectivity, triangles, etc.",

**Nicolas Broutin**(INRIA) /Jan 13/ Talking on: "La limite d'échelle des graphes aléatoires critiques",

**Nadir Farhi**(TREC, INRIA/ENS) /Jan 7/ Talking on: "Packetization and aggregate service in network calculus"

The reading group on Random Graphs is animated by A. Bouillard and A. Giovanidis; see
http://

Undergraduate course (master level, MMFAI) by M. Lelarge and J. Salez, on Information Theory and Coding (24h + 24h of exercise session).

Course on Communication Networks (master level, MMFAI) by F. Baccelli, A. Bouillard and A. Bušić (24h + 24h of exercise sessions).

Course on Network Modeling (master level, MPRI) by F. Baccelli and A. Bouillard (24h)

Undergraduate course (master level, MMFAI) by F. Baccelli, A. Bouillard and P. Brémaud, on Random Structures and Algorithms (35h + 28h of exercise session).

Undergraduate exercise session (master level, MMFAI) by A. Bouillard on formal languages, computability and complexity (28h).

Preparation to the oral exams of the agrégation of mathematics (computer science option) by A. Bouillard (12h).

Graduate Course on point processes, stochastic geometry and random graphs (program ”Master de Sciences et Technologies”), B. Błaszczyszyn and L. Massoulié (45h).

Undergraduate course on conception of algorithms and applications (Licence Informatique, 3rd year), A. Bušić (24h).

Preparation to the *certification C2i* by A. Benabid (36h).

Graduate course on simulation (M2 COSY), A. Bušić (6h).

Lectures on Clustering, percolation and convex ordering of point processes, during Summer Academy on “Stochastic Analysis, Modelling and Simulation of Complex
Structures”, Söllerhaus, Austria, September 2011;
http://

Ecole de Recherche ENS Lyon 10-14 janvier 2011. 25 hours of lectures of F. Baccelli on stochastic geometry.

Kyoto University, January 2011, Department of Systems Science, 3 lectures of F. Baccelli on “Stochastic Geometry and Wireless Networks”.

Chinese University of Hong Kong, July 2011, EECS Dept., 3 lectures of F. Baccelli on “Stochastic Geometry and Information Theory”.

Fields Mitacs Workshop “Probabilistic Methods in Wireless Networks”, August 2011,
http://

PhD:
**Bruno Kauffmann**, “Inverse Problems in Networks”,
**defended**on March 24, 2011; adviser F. Baccelli; see

PhD:
**Van Minh Nguyen**, “Modélisation des Liens de Communication Radio et Optimisation de la Gestion de Mobilité dans les Réseaux Cellulaires”,
**defended**on Jun 20, 2011; adviser F. Baccelli; see

PhD:
**Hamed Amini**, “Epidémies et Percolation dans les Graphes Aléatoires”,
**defended**on Jun 24, 2011; adviser F. Baccelli and M. Lelarge; see

PhD:
**Justin Salez**, “Quelques conséquences de la convergence locale faible pour les graphes aléatoires”,
**defended**on July 4, 2011; adviser F. Baccelli and M. Lelarge; see

PhD in progress:
**Emilie Coupechoux**, “Analysis of large random graphs”, started in September 2009, adviser M. Lelarge, F. Baccelli;

PhD in progress:
**Kumar Gaurav**, “ Convex comparison of network architectures” started in October 2011, adviser B. Błaszczyszyn;

PhD in progress:
**Mir Omid Haji Mirsadeghi**, “Routing on Point Processes”, started in 2009, adviser F. Baccelli;

PhD in progress:
**Mathieu Leconte**, “Propagation d'information et recommandations dans les réseaux sociaux”, started in January 2011, adviser M. Lelarge, F. Baccelli;

PhD in progress:
**Frédéric Morlot**, “Mobility Models for Communication Networks”, started in 2008, adviser F. Baccelli;

PhD in progress:
**Tien Viet Nguyen**, “Random Packing Models”, started in 2009, adviser F. Baccelli.

Keynote or Colloquium Lectures at the following events:

Queuing Theory Symposium of the Operations Research Society of Japan, January 2011. Lecture on “Spatial Queuing”.

Conférence SMAI 2011, Guidel, France, May 2011,
http://

Colloquium, Department of Mathematics, UT Austin, October 2011. Lecture on “Phase Transitions in Stochastic Network Dynamics”.

Markov Lecture 2011, Informs 2011 Conference, Charlotte, North Carolina, October 2011
http://

Presentations in the following conferences or seminars:

Séance publique de l'Académie des sciences “À l'heure des grands réseaux”, March 2011,
http://

Workshop on stochastic geometry - Université de Lille, April 2011. Lecture on “Information–Theoretic Capacity and Error Exponents of Stationary Point Processes under Random Additive Displacements”.

Wiopt–Spaswin Workshop, Princeton, May 2011. Lecture on “Optimizing CSMA for Wide Area Ad-hoc Networks”.

ISCIS Conference 2011, Royal Society, London, September 2011. Lecture on “Performance of P2P Networks with Spatial Interactions of Peers”.

UT Austin, ECE Department, October 2011. Lecture on “Performance Evaluation and Design of Communication Networks with Random Spatial Components”.

ICERM Workshop “Novel Applications of Kinetic Theory”, Brown University, USA, October 2011. Lecture on “Transport Equations for Internet Transmission Control”.

EIT ICT Labs Workshop “Fundamentals of Networking”, VTT, Helsinki, Finland, November 2011. Lecture on “Self Organization in Large CSMA Networks”.

Member of the TPC of IEEE Infocom 2011.

Participation in the following conferences:

WCTT 2011, Vienna, Austria, November 2011;
http://

Presentation in the following conferences or seminars:

Conference on “Stochastic Networks and Related Topics III”, Będlewo, Poland, May 2011;
http://

16th Workshop on Stochastic Geometry, Stereology and Image Analysis, SGSIA 2011 at Sandbjerg Estate, Sonderborg, Denmark, June 2011;
http://

Summer Academy on “Stochastic Analysis, Modelling and Simulation of Complex Structures”, Söllerhaus, Austria, September 2011;
http://

Journée scientifique "Graphes aléatoires", Institut Élie Cartan, Nancy, October 2011;
http://

NetGCooP, Paris, October 2011;
http://

Journée GdR ISIS “Modèles spatiaux stochastiques pour les réseaux”, Télécom ParisTech, December 2011; invited talk
http://

Member of the program committee of the WCTT workshop (worst-case traversal time) affiliated with the RTSS2011 conference

Presentation in the following conferences or seminars:

WCTT 2011, Vienna, Austria, November 2011;
http://

LSV seminar, ENS Cachan, March 2011;
http://

TREC seminar, INRIA Paris, September 2011;
http://

68NQRT seminar, INRIA/IRISA, September 2011;
http://

WEED annual meeting, INRIA Paris, November 2011;
http://

SDA2 annual meeting, Caen, June 2011;
https://

Participation in the following conferences:

Valuetools 2011, Cachan, May 2011;
http://

Presentation in the following conferences or seminars:

16th INFORMS Applied Probability Society Conference, Stockholm, Sweden, July 2011;
http://

49th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, September 2011;
http://

5th Young European Queueing Theorists (YEQT) workshop, October 2011 (invited);
http://

Probability Seminar, Department of Mathematics, University of Zagreb, Croatia, May 2011;
http://

LACL seminar, University Paris 12, France, January 2011;
http://

Participation in the following conferences:

Valuetools 2011, Cachan, France, May 2011;
http://

EPEW 2011, Borrowdale, The English Lake District, UK, October 2011;
http://

Member of the program committee of international conferences: IEEE ICC (2011), IEEE Globecom (2011), IEEE WCNC (2011), IEEE CCNC (2011), IEEE INFOCOM Workshop on Cognitive and Cooperative Networks (2011), IEEE ICCT (2011).

Editor of European Transactions on Telecommunications (Journal)

Member of the International Editorial Board of Internet of Things (Journal)

Presentation in the following conferences or seminars:

IEEE VTC international workshop on Self-Organizing Networks, Hungary, May 2011;
http://

INRIA-Alcatel-Lucent Joint Laboratory 3rd Anniversary Meeting (Seminar), Centre de Conference Paris Victoire, France, Jan 2011;
http://

Presentation in the following conferences or seminars:

APS Conference, Stockholm (Sweden), July 2011;
http://

NetGCooP Conference, Paris, October 2011;
http://

Séminaire de Probabilités, Nanterre, November 2011;
http://

Séminaire des Doctorants, Rocquencourt, December 2011;
https://

Presentation in the following conferences or seminars:

ILAS (International Linear Algebra Society) Conference, Braunschweig, Germany, August 2011;
http://

Presentation in the following seminars:

University of Rouen, Rouen, France. February 2011.

University of Bath, Bath, UK. April 2011.

University of Orleans, Orleans, France. May 2011.

Indian Statistical Institute, Bangalore, India. July 2011.

Indian Institute of Science, Bangalore, India. July 2011.

TIFR Centre for Applicable Mathematics, Bangalore, India. August 2011.

Indian Institute of Technology-Bombay, Mumbai, India. August 2011.

Participation in the following conferences:

Random Structures and Dynamics, Oxford, April 2011;
http://

On sabbatical in the Electrical Engineering Department of Stanford University from September to December 2011, working with Andrea Montanari on message passing algorithms for compressed sensing.

Presentation in the following conferences or seminars:

Journées de l'ANR A3, Nancy, February, 2011;
http://

Stochastic Activity Month EURANDOM, Eindhoven, April, 2011;
http://

WIDS MIT, Boston, June, 2011;
http://

SIGMETRICS workshop: MAMA, San Jose, 2011;
http://

Berkeley seminar, Berkeley, October, 2011;
http://

ISL colloquium, Stanford, November, 2011;

Participation in the following conferences:

Stochastic Geometry days, Lille, March 2011;
http://

Presentation in the following conferences or seminars:

III Edición Curso Ciencia de las Redes, Politécnica Madrid, November 2011;
http://

Participation in the following conferences:

NETSCI 2011, Budapest (Hungary), June 2011;
http://

Member of the organization committee of VnTelecom conference
http://

Member of the program committee of ICC 2011,

Presentation in the following conferences or seminars:

VnTelecom conference, Paris, November, 2011

ICC, Kyoto, June 2011;
http://

Presentation in the following conferences or seminars:

Valuetools 2011, Cachan, France, May 2011;
http://

Presentation in the following conferences or seminars:

Séminaire de Probabilités de l'Institut Élie Cartan, Nancy (France), January 2011;
http://

Séminaire de Combinatoire du LIAFA, Paris (France), March 2011;
http://

Workshop on Stochastic Networks and Related Topics, Będlewo (Poland), May 2011;
http://

Séminaire de l'équipe INRIA Réseaux, Algorithmes et Probabilités, Rocquencourt (France), June 2011;
https://

The 16th INFORMS Applied Probability Society Conference, Stockholm (Sweden), July 2011;
http://

UC Berkeley Probability Seminar, Berkeley (USA), September 2011;
http://

Presentation in the following conferences or seminars:

EPEW 2011, Borrowdale, The English Lake District, UK, October 2011;
http://