Our goal is to develop the field of graph algorithms for networks. Based on algorithmic graph theory and graph modeling, we want to understand what can and cannot be done in these large networks. Furthermore, we want to derive practical distributed algorithms from known strong theoretical results. Finally, we want to identify possibly new graph problems by focusing on particular applications.

The main goals to achieve in networks are efficient searching for nodes or data, and efficient content transfer. We propose to implement strong theoretical results in this domain to make a significant breakthrough in large-network algorithms. These results concern small-world routing, low-stretch routing in doubling metrics, and bounded-width classes of graphs. They are detailed in the next section. This implies several challenges:

testing our target networks against general graph parameters known to bring theoretical tractability,

implementing strong theoretical results in the dynamic and distributed context of large networks.

A complementary approach consists in studying the combinatorial and graph structures that appear in our target networks. These structures may have inherent characteristics coming from the way the network is formed, or from the design goals of the target application.

The paper received a best paper award at the *25th Int. Symp. on Distributed Computing (DISC 2011)*.

Application domains include evaluating Internet performance, designing new peer-to-peer applications, enabling large-scale ad hoc networks, and mapping the web.

The purpose of measuring and modeling Internet metrics such as latency and bandwidth is to provide tools for optimizing Internet applications. This especially concerns large-scale applications such as web-site mirroring and peer-to-peer applications.

Peer-to-peer protocols are based on an all-peers-equal paradigm that allows the design of highly reliable and scalable applications. Beyond file sharing, peer-to-peer solutions could take over web content dissemination resistant to high demand bursts, or mobility management. Envisioned peer-to-peer applications include video on demand, streaming, and the exchange of classified ads.

WiFi networks have entered our everyday life. However, enabling them at large scale remains a challenge. Algorithmic breakthroughs in large ad hoc networks would allow their use in the fast and economical deployment of new radio communication systems.

The main application of studying the web graph structure is page ranking. Enabling site-level indexing and ranking is another possible application of such studies.

Many fundamental local distributed algorithms are non-uniform, that is, they assume that all nodes know good estimations of one or more global parameters of the network, e.g., the maximum degree.

Modularity has been introduced as a quality measure for graph partitioning by Newman and Girvan. It has received considerable attention in several disciplines, especially complex systems. In order to better understand this measure from a graph theoretical point of view, we study in , the asymptotic modularity of a variety of graph classes.
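As a concrete sketch of the measure (a minimal Python illustration on a hypothetical toy graph, not code from the cited work): the Newman–Girvan modularity of a partition is the fraction of intra-community edges minus the fraction expected in a random degree-preserving rewiring.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman-Girvan modularity Q = sum over communities c of
    (e_c / m - (d_c / 2m)^2), where m is the number of edges,
    e_c the number of intra-community edges, d_c the degree sum of c.

    edges: list of undirected edges (u, v); community: dict node -> label.
    """
    m = 0
    intra = defaultdict(int)   # intra-community edge counts
    degree = defaultdict(int)  # degree sums per community
    for u, v in edges:
        m += 1
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# Toy example: two triangles joined by one edge, split into natural halves.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}
q = modularity(edges, part)  # 5/14, i.e. about 0.357
```

The natural split of the two triangles scores well above 0, as expected for a partition matching the graph's community structure.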

In , , we study the measurement of the Internet according to two graph parameters: treewidth and hyperbolicity.

Motivated by multipath routing, we introduce in , a multi-connected variant of spanners.

Perfect phylogeny consisting of determining the compatibility of a set of characters is known to be

Graph sandwich problems were introduced by Golumbic et al. (1994) in [12] for DNA physical mapping problems and can be described as follows. Given a property

An edge-Markovian process with birth-rate

In order to capture the core of the asynchronous distributed decision model, we address in the *wait-free* model with crash failures. The set of tasks whose input is a pair

In , , we discuss theoretical performance issues that arise from using “Live Seeding”, a technique that can be employed to leverage the capacity of P2P/hybrid live streaming systems by utilizing the capacities of idle peers.

In , we address the problem of verification by model-checking of the basic population protocol (PP) model of Angluin et al. . This problem has received special attention in the last two years, and new tools have been proposed to deal with it. We show that the problem can be solved using existing model-checking tools, e.g., Spin and Prism. In order to do so, we apply the counter abstraction to obtain an abstraction of the PP model that can be efficiently verified by existing model-checking tools. Moreover, this abstraction preserves the correct stabilization property of PP models. To deal with the fairness assumed by the PP models, we provide two new recipes. The first one gives sufficient conditions under which the PP model fairness can be replaced by the weak fairness implemented in Spin. We show that this recipe can be applied to several PP models. In the second recipe, we show how to use probabilistic model-checking and, in particular, Prism to fully take the fairness of the PP models into consideration. The correctness of this recipe is based on existing theorems involving finite discrete Markov chains. An abstract of this paper has also been published in .
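The counter abstraction can be sketched as follows (a minimal Python illustration on a hypothetical two-state leader-election protocol, not the actual Spin/Prism models of the paper): a configuration is abstracted into one counter per state, and each rule moves counters rather than individual agents, making the state space independent of agent identities.

```python
def reachable(start, rules):
    """Explore the counter-abstract state space of a population protocol.

    start: tuple of counters, one per state.
    rules: dict mapping a pair of states (q1, q2) to a pair (r1, r2),
           meaning two agents in states q1, q2 may move to r1, r2.
    """
    seen, stack = {start}, [start]
    while stack:
        c = stack.pop()
        for (q1, q2), (r1, r2) in rules.items():
            n = list(c)
            # A rule is enabled only if two distinct agents can play it.
            if q1 == q2:
                if n[q1] < 2:
                    continue
            elif n[q1] < 1 or n[q2] < 1:
                continue
            n[q1] -= 1; n[q2] -= 1; n[r1] += 1; n[r2] += 1
            n = tuple(n)
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return seen

# States: 0 = leader, 1 = follower; two leaders meeting merge: (L, L) -> (L, F).
rules = {(0, 0): (0, 1)}
configs = reachable((4, 0), rules)  # start with 4 leaders, 0 followers
# Safety of the abstraction: at least one leader in every reachable
# configuration, and the single-leader target configuration is reachable.
assert all(c[0] >= 1 for c in configs)
assert (1, 3) in configs
```

A model checker would explore exactly this counter state space, which is why the abstraction preserves stabilization properties while staying small.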

What does it mean to solve a distributed task? In Paxos, Lamport proposed a definition of solvability in which every process is split into a proposer that submits commands to be executed, an acceptor that takes care of the command execution order, and a learner that receives the outcomes of executed commands. The resulting perspective of computation, in which every proposed command can be executed, be its proposer correct or faulty, proved to be very useful when processes take steps on behalf of each other, i.e., in *simulations*.

Most interesting tasks cannot be solved asynchronously, and failure detectors were proposed to circumvent these impossibilities. Alas, when it comes to solving a task using a failure detector, we cannot leverage simulation-based techniques. A process cannot perform steps of failure detector-based computation on behalf of another process, since it cannot access the remote failure-detector module.

Shared objects like atomic registers, test-and-set, and compare-and-swap are classical hardware primitives that help develop fault-tolerant distributed applications. In order to compare shared objects, we consider in their implementations in message-passing models. With the minimal failure detector for each object, we obtain a new hierarchy that has only two levels. This paper summarizes recent works and results on this topic.
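For illustration, the sequential specifications of these objects can be sketched as follows (a minimal Python model in which a lock merely stands in for hardware atomicity; this is not the message-passing implementations studied in the paper):

```python
import threading

class AtomicRegister:
    """Read/write register: read returns the last value written."""
    def __init__(self, value=None):
        self._value, self._lock = value, threading.Lock()
    def read(self):
        with self._lock:
            return self._value
    def write(self, value):
        with self._lock:
            self._value = value

class TestAndSet:
    """Returns the old flag and atomically sets it to True."""
    def __init__(self):
        self._set, self._lock = False, threading.Lock()
    def test_and_set(self):
        with self._lock:
            old, self._set = self._set, True
            return old

class CompareAndSwap:
    """Atomically replaces the value only if it equals `expected`."""
    def __init__(self, value=None):
        self._value, self._lock = value, threading.Lock()
    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

# The first test_and_set call "wins" the object; later calls see it set.
ts = TestAndSet()
assert ts.test_and_set() is False   # winner
assert ts.test_and_set() is True    # loser
```

The classical hierarchy referred to above orders such objects by the agreement problems they can solve; the cited result collapses it to two levels once each object is paired with its minimal failure detector.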

In this paper we consider non-local tasks and determine the minimum information about failures that is necessary to solve such tasks in message-passing distributed systems. As part of this work, we also introduce *weak set agreement*, a natural weakening of *set agreement*, and show that, in some precise sense, it is the weakest non-local task in message-passing systems.

At the heart of distributed computing lies the fundamental result that the level of agreement that can be obtained in an asynchronous shared-memory model where t processes can crash is exactly t + 1. In other words, an adversary that can crash any subset of size at most t can prevent the processes from agreeing on t values. But what about all the other

So far, the distributed computing community has either assumed that all the processes of a distributed system have distinct identifiers or, more rarely, that the processes are anonymous and have no identifiers. These are two extremes of the same general model: namely,

We show that having *correct* processes can actually make it harder to reach agreement. The impossibility proofs use the fact that a Byzantine process can send multiple messages to the same recipient in a round. We show that removing this ability makes agreement easier: then,

Eigenvalues of tridiagonal Toeplitz matrices (including the main diagonal) are analytically known for some regular distances to the main diagonal. Any eigenvector may then be easily computed through a backward process; instead, in , we give an analytical form for each component through the reciprocation of the underlying trinomial. More generally, the connection to the Riordan group follows some bilinear iterative process.
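For the simplest case this classical analytic form reads as follows (a hedged illustration of the tridiagonal baseline, not the generalized Riordan setting of the paper): for the $n \times n$ Toeplitz matrix with diagonal entries $a$, sub-diagonal entries $b$ and super-diagonal entries $c$,

$$\lambda_k = a + 2\sqrt{bc}\,\cos\frac{k\pi}{n+1}, \qquad x_k(j) = \left(\frac{b}{c}\right)^{j/2}\sin\frac{jk\pi}{n+1}, \qquad j,k = 1,\dots,n,$$

so each eigenvector component has a closed form, which is the kind of expression the cited work generalizes.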

*virtual* in the sense that the role of cutting constraints is played by additional convex pieces in the objective function. We report some computational results that represent an improvement on previous linearization-based techniques.

It is well known that the maximization of any difference of convex functions can be turned into a convex maximization; in , we aim at a piecewise convex maximization problem instead. Although it may seem harder, sometimes the dimension may be reduced by 1 and the local search improved by using extreme points of the closure of the convex hull of better points. We show that this is always the case for both binary and permutation problems and give, as such instances, piecewise convex formulations for the maximum clique problem and the quadratic assignment problem.

Collaboration with Alcatel-Lucent Bell Labs France (ALBLF)

Within the Laboratory of Information, Networking and Communication Sciences (LINCS), collaborations have been set up with ALBLF. In 2011, this resulted in two internships funded by ALBLF and co-supervised by Fabien Mathieu (INRIA) and Ludovic Noirie (ALBLF). In 2012, both interns are expected to start a thesis in collaboration with ALBLF and INRIA (one CIFRE, one in the context of the joint lab).

Managed by University Paris Diderot, H. Fauconnier leads this project, which funds J. Clément through a grant from Région Ile de France.

Pierre Fraigniaud is leading an ANR project “blanc” (i.e. fundamental research) about the fundamental aspects of large interaction networks enabling massive distributed storage, efficient decentralized information retrieval, quick inter-user exchanges, and/or rapid information dissemination. The project is mostly oriented towards the design and analysis of algorithms for these (logical) networks, taking into account properties inherent to the underlying infrastructures upon which they are built. The infrastructures and/or overlays considered in this project are selected from different contexts, including communication networks (from the Internet to sensor networks) and societal networks (from the Web to P2P networks). Ending in November 2011, the project was extended until the end of 2012 for the LaBRI partner.

Managed by University Paris Diderot, P. Fraigniaud leads this project.

Managed by University Paris Diderot, H. Fauconnier leads this project that grants Ph. D. H. Tran-The.

Managed by University Paris Diderot, C. Delporte and H. Fauconnier lead this project that grants 1 Ph. D. and 2 internships per year.

Title: Experimental UpdateLess Evolutive Routing

Type: COOPERATION (ICT)

Defi: Future Internet Experimental Facility and Experimentally-driven Research

Instrument: Specific Targeted Research Project (STREP)

Duration: October 2010 - September 2013

Coordinator: ALCATEL-LUCENT (Belgium)

See also: http://

Abstract: EULER is a 3-year STREP Project targeting Challenge 1 "Technologies and systems architectures for the Future Internet" of the European Commission (EC) Seventh Framework Programme (FP7). The project scope and methodology position within the FIRE (Future Internet Research and Experimentation) Objective ICT-2009.1.6 Part b: Future Internet experimentally-driven research .

The main objective of the EULER exploratory research project is to investigate new routing paradigms so as to design, develop, and validate experimentally a distributed and dynamic routing scheme suitable for the future Internet and its evolution. The resulting routing scheme(s) is/are intended to address the fundamental limits of current stretch-1 shortest-path routing in terms of routing table scalability, but also topology and policy dynamics (performing efficiently under dynamic network conditions). Therefore, this project will investigate trade-offs between routing table size (to enhance scalability), routing scheme stretch (to ensure routing quality), and communication cost (to react efficiently and timely to various failures). The driving idea of this research project is to make use of the structural and statistical properties of the Internet topology (some of which are hidden) as well as the stability and convergence properties of the Internet policy, in order to specialize the design of a distributed routing scheme known to perform efficiently under dynamic network and policy conditions when these properties are met. The project will develop new models and tools to exhaustively analyse the Internet topology, to accurately and reliably measure its properties, and to precisely characterize its evolution. These models, which will better reflect the network and its policy dynamics, will be used to derive useful properties and metrics for the routing schemes and to provide relevant experimental scenarios. The project will develop appropriate tools to evaluate the performance of the proposed routing schemes on large-scale topologies (order of 10k nodes). Prototypes of the routing protocols, together with their functional validation and performance benchmarking on the iLAB experimental facility and/or virtual experimental facilities such as PlanetLab/OneLab, will allow validating the overall behaviour of the proposed routing schemes under realistic conditions.

Program: EIT ICT Labs

Project acronym: TREC-EIT-GA2011-HORS-5643

Project title:

Duration: 2011

Coordinator: Ilkka Norros

Other partners: KTH (Finland), Fraunhofer (Germany)

Abstract: The project addresses challenging issues in content distribution. Managed by TREC for France, the project allowed Pascal Felber to be invited by Fabien Mathieu for a postdoctoral position.

Michel Habib is in charge of a course entitled “graph algorithms”.

Pierre Fraigniaud (12 hours) is in charge of the course “Algorithmique distribuée pour les réseaux”;

Carole Delporte and Hugues Fauconnier are in charge of “Algorithmique distribuée avec mémoire partagée”;

Laurent Viennot (12 hours) is teaching “Structures de données distribuées et routage”

Yacine Boufkhad (192 hours) is teaching scientific computer science and networks.

Fabien de Montgolfier (192 hours) is teaching foundations of computer science, algorithmics, and computer architecture;

Fabien de Montgolfier is teaching P2P theory and application.

Michel Habib (192 hours) is in charge of two courses entitled: Search Engines; Parallelism and mobility, which includes peer-to-peer overlay networks;

Carole Delporte (192 hours) is teaching “Distributed programming”;

Hugues Fauconnier (192 hours) is in charge of both courses “Internet Protocols” and “Distributed Algorithms”.

Fabien Mathieu is teaching Peer-to-peer Networks (6 hours).

PhD: Mauricio Soto, ”Quelques propriétés topologiques des graphes et applications à Internet et aux réseaux” (“Some topological properties of graphs and applications to the Internet and to networks”), Paris Diderot University, 2 December 2011, supervisors: Fabien de Montgolfier and Laurent Viennot;

PhD : Thu-Hien To: ”On some graph problems in phylogenetics”, Paris Diderot University, 15 September 2011, supervisor: Michel Habib;

PhD in progress: Hung Tran-The, “Failure detection with Byzantine adversary”, since 2010, supervisors: Hugues Fauconnier and Carole Delporte.