Our goal is to develop the field of graph algorithms for networks. Based on algorithmic graph theory and graph modeling, we want to understand what can and cannot be done in these large networks. Furthermore, we want to derive practical distributed algorithms from known strong theoretical results. Finally, we want to identify possibly new graph problems by focusing on particular applications.

The main goals to achieve in networks are efficient search for nodes or data, and efficient content transfer. We propose to implement strong theoretical results in that domain to make significant breakthroughs in large-network algorithms. These results concern small-world routing, low-stretch routing in doubling metrics, and bounded-width classes of graphs. They are detailed in the next section. This raises several challenges:

testing our target networks against general graph parameters known to bring theoretical tractability;

implementing strong theoretical results in the dynamic and distributed context of large networks.

A complementary approach consists in studying the combinatorial and graph structures that appear in our target networks. These structures may have inherent characteristics coming from the way the network is formed, or from the design goals of the target application.

Application domains include evaluating Internet performance, designing new peer-to-peer applications, enabling large-scale ad hoc networks, and mapping the web.

Measuring and modeling Internet metrics such as latency and bandwidth aims at providing tools for optimizing Internet applications. This especially concerns large-scale applications such as web site mirroring and peer-to-peer applications.

Peer-to-peer protocols are based on an all-equal paradigm that allows the design of highly reliable and scalable applications. Besides file sharing, peer-to-peer solutions could take over in web content dissemination resistant to high demand bursts, or in mobility management. Envisioned peer-to-peer applications include video on demand, streaming, and the exchange of classified ads.

Wifi networks have entered our everyday life. However, enabling them at large scale is still a challenge. An algorithmic breakthrough in large ad hoc networks would allow their use in the fast and economical deployment of new radio communication systems.

The main application of studying the web graph structure consists in ranking pages. Enabling site-level indexing and ranking is a possible application of such studies.

How well connected is the network? This is one of the most fundamental questions one would ask when facing the challenge of designing a communication network. Three major notions of connectivity have been considered in the literature, but in the context of traditional (single-layer) networks, they turn out to be equivalent. The paper introduces a model for studying the three notions of connectivity in multi-layer networks. Using this model, it is easy to demonstrate that in multi-layer networks the three notions may differ dramatically. Unfortunately, in contrast to the single-layer case, where the values of the three connectivity notions can be computed efficiently, it has recently been shown in the context of WDM networks (results that can be easily translated to our model) that the values of two of these notions of connectivity are hard to compute or even approximate in multi-layer networks. The current paper sheds some positive light on the multi-layer connectivity topic: we show that the value of the third connectivity notion can be computed in polynomial time, and we develop an approximation algorithm for the construction of well-connected overlay networks.
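For background, the single-layer equivalence rests on Menger's theorem, and each connectivity value can be computed via max-flow. The following minimal Python sketch (our own illustration, not the multi-layer algorithm of the paper) computes the edge connectivity of a single-layer graph by repeated unit-capacity max-flow from a fixed source:

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp max-flow on n vertices; cap maps arcs to capacities."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and cap.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # augment by one unit (all capacities are 1)
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] = cap.get((v, u), 0) + 1
            v = u
        flow += 1

def edge_connectivity(n, edges):
    """Menger: lambda(G) = min over t != 0 of maxflow(0, t)."""
    best = float('inf')
    for t in range(1, n):
        # each undirected edge becomes two opposite unit-capacity arcs
        cap = {}
        for u, v in edges:
            cap[(u, v)] = cap.get((u, v), 0) + 1
            cap[(v, u)] = cap.get((v, u), 0) + 1
        best = min(best, max_flow(n, cap, 0, t))
    return best

# A 4-cycle has edge connectivity 2
print(edge_connectivity(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```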

In the graph searching game, the opponents are a set of searchers and a fugitive in a graph. The searchers try to capture the fugitive by applying some sequence of moves that include placement, removal, or sliding of a searcher along an edge. The fugitive tries to avoid capture by moving along unguarded paths. The search number of a graph is the minimum number of searchers required to guarantee the capture of the fugitive. In , we initiate the study of this game under the natural restriction of connectivity, where we demand that in each step of the search the locations of the graph that are clean (i.e., non-accessible to the fugitive) remain connected. We give evidence that many of the standard mathematical tools used so far in classic graph searching fail under the connectivity requirement. We also settle the question of "the price of connectivity", that is, how many more searchers are required for searching a graph when the connectivity demand is imposed. We give estimates of the price of connectivity on general graphs and provide tight bounds for the case of trees. In particular, for an n-vertex graph the ratio between the connected search number and the non-connected one is , while for trees this ratio is always at most 2. We also conjecture that this constant-ratio upper bound for trees holds for all graphs. Our combinatorial results imply a complete characterization of connected graph searching on trees, based on a forbidden-graph characterization of the connected search number. We prove that the connected search game is monotone for trees, i.e., restricting search strategies to only those where the clean territories increase monotonically does not require more searchers. A consequence of our results is that the connected search number can be computed in polynomial time on trees; moreover, we show how to make this algorithm distributed.
Finally, we reveal connections of this parameter to other invariants on trees, such as the Horton–Strahler number.
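The Horton–Strahler number mentioned above admits a simple bottom-up computation: a leaf has order 1, and an internal node gets order k+1 when at least two of its children attain the maximum child order k, and order k otherwise. A small sketch (our own illustration, not code from the paper):

```python
def strahler(children, v=0):
    """Horton-Strahler number of the subtree rooted at v.
    children: dict mapping each node to its list of children."""
    kids = children.get(v, [])
    if not kids:
        return 1  # a leaf has Strahler number 1
    orders = sorted((strahler(children, c) for c in kids), reverse=True)
    if len(orders) >= 2 and orders[0] == orders[1]:
        return orders[0] + 1  # two maximal subtrees tie: the order increases
    return orders[0]

# Complete binary tree of height 2 has Strahler number 3
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(strahler(tree))  # 3
```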

From a high-level perspective, we illustrate how methods from distributed computing can be useful in generating lower bounds for cooperative biological ensembles. Indeed, if experiments that comply with our setting reveal that the ants' search is time-efficient, then our theoretical lower bounds can provide some insight into the memory they use for this task.

When playing the boolean game

We focus on the notion of a

Our first main result is the design of an

The situation is radically different when one considers variants of the model in which nodes are aware of the status of their neighbors, i.e., are aware of whether or not they have already received the rumor, at any point in time. Indeed, our second main result states that, unless P=NP, the worst-case behavior of the list-based model with the additional feature that every node is perpetually aware of which of its neighbors have already received the rumor cannot be approximated in polynomial time within a

Modularity (Newman-Girvan) has been introduced as a quality measure for graph partitioning. It has received considerable attention in several disciplines, especially complex systems. In order to better understand this measure from a graph-theoretic point of view, we study the modularity of a variety of graph classes. In , we first consider simple graph classes such as tori and hypercubes. We show that these regular graph families have asymptotic modularity 1 (that is, the maximum possible). We extend this result to trees with bounded degree, allowing us to give a lower bound of 2 over the average degree for graph classes with low maximum degree (including power-law graphs for a sufficiently large exponent).
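Recall that the Newman-Girvan modularity of a partition is Q = Σ_c (e_c/m − (d_c/2m)²), where e_c is the number of edges inside community c, d_c its total degree, and m the number of edges. A minimal sketch of this computation (illustrative only, not the code used in the study):

```python
def modularity(edges, communities):
    """Newman-Girvan modularity of a partition.
    edges: list of undirected edges; communities: list of node sets."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for c in communities:
        internal = sum(1 for u, v in edges if u in c and v in c)
        degsum = sum(deg.get(u, 0) for u in c)
        # fraction of intra-community edges minus expected fraction
        q += internal / m - (degsum / (2 * m)) ** 2
    return q

# Two triangles joined by one edge, partitioned into the two triangles
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # 5/14 ~ 0.357
```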

Social networks offer users new means of accessing information, essentially relying on “social filtering”, i.e. propagation and filtering of information by social contacts. The sheer amount of data flowing in these networks, combined with the limited budget of attention of each user, makes it difficult to ensure that social filtering brings relevant content to the interested users. Our motivation in is to measure to what extent self-organization of the social network results in efficient social filtering. To this end we introduce flow games, a simple abstraction that models network formation under selfish user dynamics, featuring user-specific interests and budget of attention. In the context of homogeneous user interests, we show that selfish dynamics converge to a stable network structure (namely a pure Nash equilibrium) with close-to-optimal information dissemination. We show in contrast, for the more realistic case of heterogeneous interests, that convergence, if it occurs, may lead to information dissemination that can be arbitrarily inefficient, as captured by an unbounded “price of anarchy”. Nevertheless the situation differs when users' interests exhibit a particular structure, captured by a metric space with low doubling dimension. In that case, natural autonomous dynamics converge to a stable configuration. Moreover, users obtain all the information of interest to them in the corresponding dissemination, provided their budget of attention is logarithmic in the size of their interest set.

For a given set

A homogeneous pair (also known as a 2-module) of a graph is a pair

2-joins are edge cutsets that naturally appear in the decomposition of several classes of graphs closed under taking induced subgraphs, such as balanced bipartite graphs, even-hole-free graphs, perfect graphs, and claw-free graphs. Their detection is needed in several algorithms, and is the slowest step for some of them. The classical method to detect a 2-join takes time. For *non-path* 2-joins (special kinds of 2-joins that are needed in all of the known algorithms that use 2-joins), the fastest known method takes time

The growth of User-Generated Content (UGC) traffic makes the understanding of its nature a priority for network operators, content providers, and equipment suppliers. In , we study a four-month dataset that logs all video requests to DailyMotion made by a fixed subset of users. We were able to infer user sessions from the raw data, to propose a Markovian model of these sessions, and to study video popularity and its evolution over time. The presented results are a first step toward synthesizing artificial (but realistic) traffic that could be used in simulations or experimental testbeds.
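To illustrate session inference, a common approach starts a new session whenever a user stays idle longer than a fixed gap; the 30-minute threshold and the rule below are our own assumptions, not necessarily those used in the study:

```python
def split_sessions(timestamps, gap=1800):
    """Group one user's request timestamps (in seconds) into sessions:
    a new session starts after an idle period longer than `gap`."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)  # continue the current session
        else:
            sessions.append([t])    # idle gap exceeded: new session
    return sessions

reqs = [0, 100, 400, 5000, 5100, 99999]
print(split_sessions(reqs))  # [[0, 100, 400], [5000, 5100], [99999]]
```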

Today, the Internet involves many actors who make revenue from it (operators, companies, service providers, ...). It is therefore important to be able to make fair decisions in this large-scale and highly competitive economic ecosystem. One of the main issues is to prevent actors from manipulating the natural outcome of the decision process. For that purpose, game theory is a natural framework. In that context, voting systems represent an interesting alternative that, to our knowledge, has not yet been considered. They allow competing entities to decide among different options. Strong theoretical results have shown that all voting systems are susceptible to manipulation by a single voter, except for some "degenerate" and non-acceptable cases. However, very little is known about how manipulable a voting system is in practical scenarios. In , we investigate empirically the use of voting systems for choosing end-to-end paths in multi-carrier networks, analyzing their manipulability and their economic efficiency. We show that one particular system, called Single Transferable Vote (STV), is far more resistant to manipulation than the natural system which tries to reach the economic optimum. Moreover, STV manages to select paths close to the economic optimum, whether the participants try to cheat or not.
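For intuition, single-winner STV (instant-runoff voting) can be sketched as follows; this illustrative version uses arbitrary tie-breaking and is not the simulation code of the study:

```python
def stv_winner(ballots):
    """Single-winner STV (instant-runoff): repeatedly eliminate the
    candidate with the fewest first-choice votes until one has a
    strict majority. ballots: preference lists, most preferred first."""
    candidates = {c for b in ballots for c in b}
    while len(candidates) > 1:
        tally = {c: 0 for c in candidates}
        for b in ballots:
            for c in b:
                if c in candidates:  # first remaining choice on the ballot
                    tally[c] += 1
                    break
        total = sum(tally.values())
        leader = max(tally, key=tally.get)
        if tally[leader] * 2 > total:
            return leader  # strict majority of active ballots
        candidates.remove(min(tally, key=tally.get))  # eliminate weakest
    return candidates.pop()

# 4 voters prefer A, but B wins once C is eliminated and C's voters
# transfer to B
ballots = [['A', 'B', 'C']] * 4 + [['B', 'C', 'A']] * 3 + [['C', 'B', 'A']] * 2
print(stv_winner(ballots))  # B
```

Resistance to manipulation comes from the transfer mechanism: a voter who misreports preferences can easily change which candidate is eliminated first, but rarely in a direction that helps them.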

In , , we consider the Byzantine agreement problem (BA) in synchronous systems with homonyms. In this model different processes may have the same authenticated identifier. In such a system of

Assuming that the processes know the distribution of identifiers we give a necessary and sufficient condition on the integer partition of

This bound is to be compared with the

The problem of estimating the proportion of satisfiable instances of a given CSP (constraint satisfaction problem) can be tackled through weighting. It consists in putting onto each solution a non-negative real value based on its neighborhood, in such a way that the total weight is at least 1 for each satisfiable instance. We define in , a general weighting scheme for the estimation of satisfiability of general CSPs. First we give some sufficient conditions for a weighting system to be correct. Then we show that this scheme allows for an improvement on the upper bound on the existence of non-trivial cores in 3-SAT obtained by Maneva and Sinclair (2008) to


For Toeplitz matrices associated with degree 3 and 4 uniform B-spline interpolation, the inverse may be analytically known , saving the standard inverse calculations. It generalizes to any degree as a row of the Eulerian numbers triangle.
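To make the Eulerian-numbers connection concrete (our own sketch, not code from the cited work): the nonzero row of the tridiagonal Toeplitz matrix for degree-3 uniform B-spline interpolation is (1, 4, 1)/6, and (1, 4, 1) is row 3 of the Eulerian numbers triangle, matching the generalization stated above. The triangle follows from the standard recurrence A(n,k) = (k+1)·A(n−1,k) + (n−k)·A(n−1,k−1):

```python
def eulerian_row(n):
    """Row n of the Eulerian numbers triangle, via the recurrence
    A(n, k) = (k+1)*A(n-1, k) + (n-k)*A(n-1, k-1)."""
    row = [1]
    for m in range(1, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0)
               for k in range(m)]
    return row

print(eulerian_row(3))  # [1, 4, 1]  -- the cubic B-spline stencil, times 1/6
print(eulerian_row(4))  # [1, 11, 11, 1]
```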

A contract has been signed between Inria, RadioCeros, and the ARITT Center. Gang is to provide a feasibility study on the use of peer-to-peer mechanisms for high-quality Internet radio.

Alcatel grants ADR LINCS to study the applicability of voting systems to loosely connected networks (peer-to-peer, social networks, ...).

ALCATEL is funding a CIFRE PhD for carrying PageRank techniques to Social Networks.

Managed by University Paris Diderot, this project is led by H. Fauconnier and funds J. Clément through a grant from Région Île-de-France.

The goal of the SONGS project is to extend the applicability of the SimGrid simulation framework from Grids and Peer-to-peer systems to Clouds and High Performance Computing systems. Each type of large-scale computing system will be addressed through a set of use cases and led by researchers recognized as experts in this area.

Managed by University Paris Diderot, P. Fraigniaud.

Online social networks are among the most popular sites on the Web and continue to grow rapidly. They provide mechanisms to establish identities, share content and information, and create relationships. With the emergence of a new generation of powerful mobile devices that enable wireless ad hoc communication, it is time to extend social networking to the mobile world. Such an ad hoc social networking environment is full of opportunities. As opposed to the use of personal computers, a mobile phone is a strictly personal device, always on, with several wireless interfaces that include short-range communication with nearby nodes. Applications such as notification of status updates, sharing of user-generated content, document tagging, rating/recommendation, and bookkeeping can be deployed “on the move” on top of contacts established through short-range communication. This requires deploying social networking applications in a delay-tolerant manner using opportunistic social contacts, as in a peer-to-peer network, as well as new advanced content recommendation engines.

The Prose project is a collective and multi-disciplinary effort to design opportunistic contact sharing schemes, and to characterize the environmental conditions, the usage constraints, as well as the algorithmic and architectural principles that let them operate. The partners of the Prose project will engage in this exploration through various areas of expertise: network measurement, traffic monitoring from a real application, system design, behavioral study, analysis of distributed algorithms, dynamic graph theory, network modeling, and performance evaluation. As part of this project, the partners will be involved in the analysis of the content received and accessed by users of a real commercial application (PlayAdz), and will participate in the design of a new promotional advertisement service.

SHAMAN (Self-organizing and Healing Architectures for Malicious and Adversarial Networks) is an ANR VERSO Project (2009-2012).

Managed by University Paris Diderot, H. Fauconnier leads this project that grants Ph. D. H. Tran-The.

SHAMAN focuses on the algorithmic foundations of resource-constrained autonomous large scale systems, dedicated to enabling the sustainability of network functions in spite of abrupt system evolutions, component failures, and attacks. We foresee original solutions in the general frameworks of self-stabilization, failure detection, and robust protocols. Our first objective is the design of obligate but realistic models encompassing anonymity, dynamism, and/or malicious behavior. Our second objective is to evaluate both the theoretical power, and the practical functionality, of these models, by confronting them to their ability of designing efficient algorithms and protocols for dynamic and malicious environments. This evaluation will be tackled in two complementary application domains: wireless sensor networks, and peer-to-peer systems. The primary outcome of SHAMAN should be the demonstration of reliable middleware bricks that could be integrated in real distributed platforms.

Managed by University Paris Diderot, C. Delporte and H. Fauconnier lead this project that grants 1 Ph. D.

Distributed computation keeps raising new questions concerning computability and complexity. For instance, as far as fault-tolerant distributed computing is concerned, impossibility results do not depend on the computational power of the processes, demonstrating a form of undecidability which is significantly different from the one encountered in sequential computing. In the same way, as far as network computing is concerned, the impossibility of solving certain tasks locally does not depend on the computational power of the individual processes.

The main goal of DISPLEXITY (for DIStributed computing: computability and ComPLEXITY) is to establish the scientific foundations for building up a consistent theory of computability and complexity for distributed computing.

One difficulty to be faced by DISPLEXITY is to reconcile the different sub-communities corresponding to a variety of classes of distributed computing models. The current distributed computing community may indeed be viewed as two not necessarily disjoint sub-communities, one focusing on the impact of temporal issues, the other on the impact of spatial issues. The different working frameworks tackled by these two communities induce different objectives: computability is the main concern of the former, while complexity is the main concern of the latter.

Within DISPLEXITY, the reconciliation between the two communities will be achieved by focusing on the same class of problems, those for which the distributed outputs are interpreted as a single binary output: yes or no. Those are known as the yes/no-problems. The strength of DISPLEXITY is to gather specialists of the two main streams of distributed computing. Hence, DISPLEXITY will take advantage of the experience gained over the last decade by both communities concerning the challenges to be faced when building up a complexity theory encompassing more than a fragment of the field.

In order to reach its objectives, DISPLEXITY aims at achieving the following tasks:

Formalizing yes/no-problems (decision problems) in the context of distributed computing. Such problems are expected to play an analogous role in the field of distributed computing as that played by decision problems in the context of sequential computing.


Revisiting the various explicit (e.g., failure detectors) or implicit (e.g., a priori information) notions of oracles used in the context of distributed computing, allowing us to express them in terms of decidability/complexity classes based on oracles.

Identifying the impact of non-determinism on complexity in distributed computing. In particular, DISPLEXITY aims at a better understanding of the apparent lack of impact of non-determinism in the context of fault-tolerant computing, to be contrasted with the apparent huge impact of non-determinism in the context of network computing. Also, it is foreseen that non-determinism will enable the comparison of complexity classes defined in the context of fault-tolerance with complexity classes defined in the context of network computing.

Last but not least, DISPLEXITY will focus on new computational paradigms and frameworks, including, but not limited to distributed quantum computing and algorithmic game theory (e.g., network formation games).

The project will have to face and solve a number of challenging problems. Hence, we have built the DISPLEXITY consortium so as to coordinate the efforts of those worldwide leaders in Distributed Computing who are working in our country. A successful execution of the project will result in a tremendous increase in the current knowledge and understanding of decentralized computing and place us in a unique position in the field.

Title: EULER (Experimental UpdateLess Evolutive Routing)

Type: COOPERATION (ICT)

Defi: Future Internet Experimental Facility and Experimentally-driven Research

Instrument: Specific Targeted Research Project (STREP)

Duration: October 2010 - September 2013

Coordinator: ALCATEL-LUCENT (Belgium)

Others partners:

Alcatel-Lucent Bell, Antwerpen, Belgium

3 projects from Inria: CEPAGE, GANG and MASCOTTE, France

Interdisciplinary Institute for Broadband Technology (IBBT), Belgium

Laboratoire d'Informatique de Paris 6 (LIP6), Université Pierre et Marie Curie (UPMC), France

Department of Mathematical Engineering (INMA) Université Catholique de Louvain, Belgium

RACTI, Research Academic Computer Technology Institute University of Patras, Greece

CAT, Catalan Consortium: Universitat Politècnica de Catalunya, Barcelona, and University of Girona, Spain

Abstract: The title of this study is "Dynamic Compact Routing Scheme". The aim of this project is to develop new routing schemes achieving better performance than the current BGP protocol. The problems faced by the inter-domain routing protocol of the Internet are numerous:

The underlying network is dynamic: many observations of bad configurations show the instability of BGP;

BGP does not scale well: the convergence time toward a legal configuration is too long, and the size of routing tables is proportional to the number of nodes of the network (the network size is multiplied by 1.25 each year);

The impact of policies is so important that many packets can oscillate between two Autonomous Systems.

Description: In this collaboration, we mainly investigate new routing paradigms so as to design, develop, and validate experimentally a distributed and dynamic routing scheme suitable for the future Internet and its evolution. The resulting routing scheme(s) is/are intended to address the fundamental limits of current stretch-1 shortest-path routing in terms of routing table scalability, but also topology and policy dynamics (performing efficiently under dynamic network conditions). Therefore, this project will investigate trade-offs between routing table size (to enhance scalability), routing scheme stretch (to ensure routing quality), and communication cost (to efficiently and timely react to various failures). The driving idea of this research project is to make use of the structural and statistical properties of the Internet topology (some of which are hidden) as well as the stability and convergence properties of the Internet policy in order to specialize the design of a distributed routing scheme known to perform efficiently under dynamic network and policy conditions when these properties are met. The project will develop new models and tools to exhaustively analyse the Internet topology, to accurately and reliably measure its properties, and to precisely characterize its evolution. These models, which will better reflect the network and its policy dynamics, will be used to derive useful properties and metrics for the routing schemes and to provide relevant experimental scenarios. The project will develop appropriate tools to evaluate the performance of the proposed routing schemes on large-scale topologies (on the order of 10k nodes). Prototypes of the routing protocols, as well as their functional validation and performance benchmarking on the iLAB experimental facility and/or virtual experimental facilities such as PlanetLab/OneLab, will allow validating the overall behaviour of the proposed routing schemes under realistic conditions.

The aim of this project is to build a community of researchers focusing on fundamental theoretical issues of future networking, including such topics as communication theory, network information theory, distributed algorithms, self-organization and game theory, and the modeling of large random and complex networks and structures. The partners Inria, VTT, Aalto University, and Eindhoven University are gathered under the EIT ICT Labs project Fundamentals of Networking (FUN).
http://

Master MPRI University of Paris Diderot:

M. Habib, graph algorithms, 12 hours;

P. Fraigniaud, “Algorithmique distribuée pour les réseaux”, 12 hours;

C. Delporte and H. Fauconnier, “Algorithmique distribuée avec mémoire partagée”; C. Delporte, 12 hours and H. Fauconnier, 12 hours;

L. Viennot, “Structures de données distribuées et routage”, 12 hours.

Master Professional University of Paris Diderot:

M. Habib, Search Engines, 50 hours;

M. Habib, Parallelism and mobility which includes peer-to-peer overlay networks, 50 hours;

C. Delporte, Distributed programming, 85 hours;

H. Fauconnier, Internet Protocols and Distributed algorithms, 85 hours.

Master: F. Mathieu, Peer-to-Peer Techniques, 30 hours, University of Paris 6;

D.U.T. : Y. Boufkhad, computer science and networks, 192 hours, University of Paris Diderot;

U.F.R.: F. de Montgolfier, foundations of computer science, algorithmics, and computer architecture, 192 hours, University of Paris Diderot;

Master: F. de Montgolfier, Peer-to-Peer theory and application, M2, University of Marne-la-Vallée.

PhD: Hervé Baumann, "Diffusion décentralisée d'information dans les systèmes distribués", University of Paris Diderot, September 24, 2012

Xavier Koegler, "Protocoles de population, jeux et grandes populations", University of Paris Diderot, September 13, 2012

PhD in progress: François Durand, "Manipulabilité des systèmes de vote et applications aux réseaux", since 2012, supervised by Fabien Mathieu and Ludovic Noirie

Jérémie Dusart, "Parcours de graphes de cocomparabilité", since 2011, supervised by Michel Habib

Antoine Mamcarz, "Algorithmes de décomposition de graphes", since 2010, supervised by Michel Habib

The-Dang Huynh, "Extensions de PageRank et Applications aux Réseaux Sociaux", since 2012, supervised by Fabien Mathieu and Dohy Hong

Hung Tran, Failure detection with Byzantine adversary, since 2010, supervised by Hugues Fauconnier and Carole Delporte

HdR review: L. Viennot reviewed Arnaud Legout's HdR thesis on "Efficacité et vie privée : de BitTorrent à Skype";

PhD review: L. Viennot reviewed Bio Mikaila Toko Worou's PhD thesis on "Outils algorithmiques de détection des communautés dans les réseaux";

PhD thesis: L. Viennot was an examiner for Quentin Godfroy's thesis "From spanners to multipath spanners";

F. Mathieu was an examiner for Claudio Testa's thesis "On the congestion control of Peer-to-peer applications: the LEDBAT case".