Our goal is to develop the field of graph algorithms for networks. Building on algorithmic graph theory and graph modeling, we want to understand what can and cannot be done in large networks. Furthermore, we want to derive practical distributed algorithms from known strong theoretical results. Finally, we want to extract possibly new graph problems by focusing on particular applications.
The main goals to achieve in networks are efficient searching for nodes or data, and efficient content transfer. We propose to implement strong theoretical results in this domain to make a significant breakthrough in large-network algorithms. These results concern small-world routing, low-stretch routing in doubling metrics, and bounded-width classes of graphs. They are detailed in the next section. This implies several challenges:
testing our target networks against general graph parameters known to bring theoretical tractability,
implementing strong theoretical results in the dynamic and distributed context of large networks.
A complementary approach consists in studying the combinatorial and graph structures that appear in our target networks. These structures may have inherent characteristics coming from the way the network is formed, or from the design goals of the target application.
The paper received a best paper award at the 25th Int. Symp. on Distributed Computing (DISC 2011).
Application domains include evaluating Internet performance, designing new peer-to-peer applications, enabling large-scale ad hoc networks, and mapping the web.
Measuring and modeling Internet metrics such as latency and bandwidth provides tools for optimizing Internet applications. This especially concerns large-scale applications such as web-site mirroring and peer-to-peer applications.
Peer-to-peer protocols are based on an all-peers-equal paradigm that makes it possible to design highly reliable and scalable applications. Beyond file sharing, peer-to-peer solutions could take over web content dissemination resistant to high demand bursts, or mobility management. Envisioned peer-to-peer applications include video on demand, streaming, and the exchange of classified ads.
Wifi networks have entered our everyday life. However, enabling them at large scale is still a challenge. Algorithmic breakthroughs in large ad hoc networks would allow their use for fast and economical deployment of new radio communication systems.
The main application of studying the web graph structure is page ranking. Enabling site-level indexing and ranking is a possible application of such studies.
Many fundamental local distributed algorithms are non-uniform, that is, they assume that all nodes know good estimations of one or more global parameters of the network, e.g., the maximum degree.
Modularity has been introduced as a quality measure for graph partitioning by Newman and Girvan. It has received considerable attention in several disciplines, especially complex systems. In order to better understand this measure from a graph theoretical point of view, we study in , the asymptotic modularity of a variety of graph classes.
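As a reminder of the underlying definition (standard, and not specific to the asymptotic results of the paper), the Newman-Girvan modularity of a vertex partition is Q = Σ_c (e_c/m − (d_c/2m)²), where m is the number of edges, e_c the number of intra-community edges, and d_c the total degree of community c. A minimal sketch; the two-triangle graph is an illustrative toy example:

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman-Girvan modularity of a partition:
    Q = sum over communities c of (e_c / m - (d_c / 2m)^2),
    where m is the number of edges, e_c the number of edges with both
    endpoints in c, and d_c the total degree of the nodes of c."""
    m = len(edges)
    intra = defaultdict(int)   # e_c per community
    degree = defaultdict(int)  # d_c per community
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# Two triangles joined by a bridge: the natural 2-community partition.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
```

For this graph the natural partition yields Q = 5/14 ≈ 0.357, while the trivial one-community partition always yields Q = 0.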
In , , we study the measurement of the Internet according to two graph parameters: treewidth and hyperbolicity.
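For reference, Gromov hyperbolicity can be computed naively on small graphs via the standard four-point condition (a brute-force illustration, not the measurement methodology of the paper):

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (adjacency dict)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def hyperbolicity(adj):
    """Gromov hyperbolicity by the four-point condition: for each
    quadruple, sort the three pairwise distance sums; half the gap
    between the two largest bounds delta, and the max over all
    quadruples is delta.  O(n^4): only for small graphs."""
    D = {u: bfs_dist(adj, u) for u in adj}
    delta = 0.0
    for a, b, c, d in combinations(list(adj), 4):
        s = sorted([D[a][b] + D[c][d], D[a][c] + D[b][d], D[a][d] + D[b][c]])
        delta = max(delta, (s[2] - s[1]) / 2)
    return delta
```

Trees are 0-hyperbolic, while a 4-cycle already has hyperbolicity 1.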
Motivated by multipath routing, we introduce in , a multi-connected variant of spanners.
Perfect phylogeny, which consists in determining the compatibility of a set of characters, is known to be NP-hard. In , we answer in the affirmative the question proposed by Mike Steel as a $100 challenge.
Graph sandwich problems were introduced by Golumbic et al. (1994) in [12] for DNA physical mapping problems and can be described as follows. Given a property Π and two graphs G1 = (V, E1) and G2 = (V, E2) with E1 ⊆ E2, decide whether there exists a graph G = (V, E) with E1 ⊆ E ⊆ E2 that satisfies Π.
In , we propose a new algorithm for computing the diameter of undirected unweighted graphs. Even though, in the worst case, this algorithm has complexity
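The paper's algorithm is not reproduced here; as a minimal illustration of the BFS-based approach such methods refine, the classical double-sweep heuristic lower-bounds the diameter with just two BFS runs (and is exact on trees):

```python
from collections import deque

def bfs_farthest(adj, src):
    """BFS from src in an unweighted graph; return (a farthest node,
    its distance from src)."""
    dist = {src: 0}
    q = deque([src])
    far, fd = src, 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > fd:
                    far, fd = v, dist[v]
                q.append(v)
    return far, fd

def double_sweep(adj, start):
    """Two BFS sweeps: the eccentricity of the node found by the first
    sweep lower-bounds the diameter, and equals it on trees."""
    u, _ = bfs_farthest(adj, start)
    _, d = bfs_farthest(adj, u)
    return d
```

On a path the bound is tight; on a cycle it also happens to be exact, though in general it is only a lower bound.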
An edge-Markovian process with birth-rate
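For reference, the standard edge-Markovian evolving-graph model is parameterized by a birth-rate p (each absent edge appears independently with probability p at every step) and a death-rate q (each present edge disappears with probability q). A minimal simulation sketch of this generic model, not of the specific process studied in the paper:

```python
import random

def edge_markovian_step(n, edges, p, q, rng):
    """One step of an edge-Markovian evolving graph on n nodes:
    every absent edge appears with birth-rate p, every present
    edge disappears with death-rate q, independently."""
    nxt = set()
    for i in range(n):
        for j in range(i + 1, n):
            e = (i, j)
            if e in edges:
                if rng.random() >= q:   # edge survives
                    nxt.add(e)
            else:
                if rng.random() < p:    # edge is born
                    nxt.add(e)
    return nxt
```

In the long run the edge density converges to the stationary value p / (p + q).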
Inspired by sequential complexity theory, in  we focus on a complexity theory for distributed decision problems. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Our main result provides a sharp threshold for the impact of randomization on hereditary decision problems. In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that makes it possible to decide all languages in constant time. Finally, we introduce the notion of local reduction and establish some completeness results.
In order to capture the core of the asynchronous distributed decision model, we address in  the wait-free model with crash failures. The set of tasks whose input is a pair
In , we partially answer the question of the decidability of languages for mobile agents in a 2D environment such as telecom networks or robot networks. It is proven that, for every agent, verifying whether (i) it is alone or not and (ii) it is able to capture the environment reduces to deciding membership in an equivalence class of maps. A positive answer helps in the non-deterministic decision of any language for mobile agents.
In , we consider the setting of randomly weighted graphs. Under this setting, weighted graph properties typically become random variables, and we are interested in computing their statistical features. Unfortunately, this turns out to be computationally hard for some weighted graph properties, even though computing the properties themselves in the traditional setting of algorithmic graph theory is tractable. For example, there are well-known efficient algorithms that compute the diameter of a given weighted graph, yet computing the expected diameter of a given randomly weighted graph is ♯P-hard even if the edge weights are identically distributed. In this paper, we define a family of weighted graph properties and show that for each property in this family, the problem of computing the
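Since the exact expectation is ♯P-hard, a natural fallback (shown here purely as an illustration, not as the paper's method) is Monte Carlo estimation: sample the edge weights, compute the diameter exactly on each sample, and average:

```python
import random

def diameter(n, w):
    """Weighted diameter via Floyd-Warshall; w maps sorted vertex pairs
    to edge weights, and the graph is assumed connected."""
    INF = float('inf')
    d = [[0.0 if i == j else w.get((min(i, j), max(i, j)), INF)
          for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return max(max(row) for row in d)

def expected_diameter(n, edges, sample_weight, trials, seed=0):
    """Monte Carlo estimate of the expected diameter when every edge
    weight is drawn i.i.d. from sample_weight (the exact expectation
    being #P-hard to compute)."""
    rng = random.Random(seed)
    return sum(diameter(n, {e: sample_weight(rng) for e in edges})
               for _ in range(trials)) / trials
```

With degenerate (constant) weight distributions the estimate is exact, which gives easy sanity checks.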
In , we establish two new lower bounds on the message complexity of the controller problem. We first prove a simple lower bound stating that any
In , we consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. We are interested in the impact of such advice on the competitive ratio, and in particular in the relation between the size
In , we establish tight bounds for the minimum-weight spanning tree (MST) verification problem in the distributed setting. Specifically, we provide an MST verification algorithm that achieves simultaneously
In , we initiate a systematic study of distributed verification, and give almost-tight lower bounds on the running time of distributed verification algorithms for many fundamental problems such as connectivity, spanning connected subgraph, and
In , we present and discuss possible architectures for P2P systems to manage overlays that try to cope with the underlying network.
In , , we discuss theoretical performance issues that arise from using “Live Seeding”, a technique that can be employed to leverage the capacity of P2P/hybrid live streaming systems by utilizing the capacities of idle peers.
In , we propose a new paradigm for P2P networks, where the bandwidth bottleneck is not the access node anymore. This new model is versatile enough to be used in the context of classical networks with congestion control, wireless networks, or semantic networks.
In , we address the problem of verification by model-checking of the basic population protocol (PP) model of Angluin et al. . This problem has received special attention in the last two years, and new tools have been proposed to deal with it. We show that the problem can be solved using existing model-checking tools, e.g., Spin and Prism. To do so, we apply counter abstraction to obtain an abstraction of the PP model that can be efficiently verified by existing model-checking tools. Moreover, this abstraction preserves the correct stabilization property of PP models. To deal with the fairness assumed by PP models, we provide two new recipes. The first gives sufficient conditions under which PP-model fairness can be replaced by the weak fairness implemented in Spin; we show that this recipe applies to several PP models. In the second recipe, we show how to use probabilistic model-checking, and in particular Prism, to take the fairness of PP models fully into account. The correctness of this recipe is based on existing theorems on finite discrete Markov chains. An abstract of this paper has also been published in .
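Population protocols are straightforward to simulate directly, which is part of what the counter abstraction exploits; as a hedged illustration (not the Spin/Prism models of the paper), here is a random-scheduler run of the classical two-state OR protocol:

```python
import random

def run_or_protocol(states, steps, seed=0):
    """Random-scheduler simulation of the 2-state OR population
    protocol: when two agents meet, both adopt the max of their
    states.  Under the PP fairness assumption the population
    stabilizes to all-1 iff some agent started in state 1."""
    rng = random.Random(seed)
    s = list(states)
    for _ in range(steps):
        i, j = rng.sample(range(len(s)), 2)  # pick an interacting pair
        s[i] = s[j] = max(s[i], s[j])
    return s
```

Note the invariants a model-checker would verify: an all-0 population stays all-0, and a 1 can spread but never disappear.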
What does it mean to solve a distributed task? In Paxos, Lamport proposed a definition of solvability in which every process is split into a proposer that submits commands to be executed, an acceptor that takes care of the command execution order, and a learner that receives the outcomes of executed commands. The resulting perspective of computation in which every proposed command can be executed, be its proposer correct or faulty, proved to be very useful when processes take steps on behalf of each other, i.e., in simulations.
Most interesting tasks cannot be solved asynchronously, and failure detectors were proposed to circumvent these impossibilities. Alas, when it comes to solving a task using a failure detector, we cannot leverage simulation-based techniques. A process cannot perform steps of failure detector-based computation on behalf of another process, since it cannot access the remote failure-detector module.
In , we propose a new definition of solving a task with a failure detector in which computation processes, which propose inputs and provide outputs, are treated separately from synchronization processes, which coordinate using a failure detector. In the resulting framework, any failure detector is shown to be equivalent to the availability of some
Shared objects like atomic registers, test-and-set, and compare-and-swap are classical hardware primitives that help to develop fault-tolerant distributed applications. In order to compare shared objects, in , we consider their implementations in message-passing models. With the minimal failure detector for each object, we obtain a new hierarchy that has only two levels. This paper summarizes recent work and results on this topic.
In , we first define the basic notions of local and non-local tasks for distributed systems. Intuitively, a task is local if, in a system with no failures, each process can compute its output value locally by applying some local function to its own input value (so the output value of each process depends only on the process' own input value, not on the input values of the other processes); a task is non-local otherwise. All the interesting distributed tasks, including all those that have been investigated in the literature (e.g., consensus, set agreement, renaming, atomic commit, etc.), are non-local.
In this paper, we consider non-local tasks and determine the minimum information about failures that is necessary to solve such tasks in message-passing distributed systems. As part of this work, we also introduce weak set agreement (a natural weakening of set agreement) and show that, in some precise sense, it is the weakest non-local task in message-passing systems.
At the heart of distributed computing lies the fundamental result that the level of agreement that can be obtained in an asynchronous shared memory model where t processes can crash is exactly t + 1. In other words, an adversary that can crash any subset of size at most t can prevent the processes from agreeing on t values. But what about all the other
So far, the distributed computing community has either assumed that all the processes of a distributed system have distinct identifiers or, more rarely, that the processes are anonymous and have no identifiers. These are two extremes of the same general model: a system of n processes sharing ℓ distinct identifiers, with 1 ≤ ℓ ≤ n. We show that having
In , we address the impact of optimizing the memory size on the time complexity, and show that this carries at most a small cost in terms of time in the context of MST. Specifically, we present a self-stabilizing distributed verification algorithm whose time complexity is
In , we tackle the problem of estimating the proportion of satisfiable instances of a given CSP (constraint satisfaction problem) through weighting. Weighting consists in putting a non-negative real value onto each solution, based on its neighborhood, in such a way that the total weight is at least 1 for each satisfiable instance. We define in this paper a general weighting scheme for estimating the satisfiability of general CSPs. First, we give sufficient conditions for a weighting system to be correct. Then we show that this scheme improves the upper bound of Maneva and Sinclair (2008) on the existence of non-trivial cores in 3-SAT to 4.419. Another common way of estimating satisfiability is ordering: putting a total order on the domain, which induces an orientation between neighboring solutions that prevents circuits from appearing, and then counting only minimal elements. We compare ordering and weighting under various conditions.
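The weighting scheme itself is not reproduced here; as a baseline for the quantity being estimated, a naive Monte Carlo estimator of the proportion of satisfiable random 3-SAT instances (brute-force, toy sizes only) can be sketched as:

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause has 3 distinct variables, each
    negated with probability 1/2; a literal is (var, negated)."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def satisfiable(inst, n_vars):
    """Brute-force satisfiability check (small n_vars only).
    A literal (v, neg) is satisfied when assign[v] != neg."""
    return any(all(any(assign[v] != neg for v, neg in clause)
                   for clause in inst)
               for assign in product([False, True], repeat=n_vars))

def sat_proportion(n_vars, n_clauses, trials, seed=0):
    """Monte Carlo estimate of the proportion of satisfiable random
    3-SAT instances at the given size."""
    rng = random.Random(seed)
    return sum(satisfiable(random_3sat(n_vars, n_clauses, rng), n_vars)
               for _ in range(trials)) / trials
```

Weighting and ordering schemes aim precisely at bounding such proportions analytically, where brute force is hopeless.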
The eigenvalues of tridiagonal (including the main diagonal) Toeplitz matrices are analytically known under some regular distance to the main diagonal. Any eigenvector may then be easily computed through a backward process; instead, in , we give an analytical form for each component through the reciprocation of the underlying trinomial. More generally, the connection to the Riordan group follows from a bilinear iterative process.
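The classical closed form alluded to above can be checked directly; this sketch covers only the case b, c > 0 (not the general Riordan-group treatment of the paper):

```python
import math

def toeplitz_tridiag_eigs(n, a, b, c):
    """Classical analytic spectrum of the n x n tridiagonal Toeplitz
    matrix with a on the main diagonal, b below and c above (b, c > 0):
    lambda_k = a + 2*sqrt(b*c)*cos(k*pi/(n+1)),  k = 1..n."""
    return [a + 2 * math.sqrt(b * c) * math.cos(k * math.pi / (n + 1))
            for k in range(1, n + 1)]

def toeplitz_tridiag_eigvec(n, b, c, k):
    """The matching eigenvector, componentwise:
    v_j = (b/c)^(j/2) * sin(j*k*pi/(n+1)),  j = 1..n."""
    return [(b / c) ** (j / 2) * math.sin(j * k * math.pi / (n + 1))
            for j in range(1, n + 1)]
```

A direct check that A v = λ v for the symmetric matrix tridiag(1, 2, 1) confirms the pairing of eigenvalues and eigenvectors.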
In , we provide a global search algorithm for maximizing a piecewise convex function
It is well known that the maximization of any difference of convex functions can be turned into a convex maximization; in , we aim at a piecewise convex maximization problem instead. Although it may seem harder, the dimension can sometimes be reduced by 1, and the local search can be improved by using extreme points of the closure of the convex hull of better points. We show that this is always the case for both binary and permutation problems and give, as such instances, piecewise convex formulations of the maximum clique problem and the quadratic assignment problem.
In , we consider mathematical programming problems with so-called piecewise convex objective functions. A solution method for this interesting and important class of nonconvex problems is presented, based on Newton's law of universal gravitation, multicriteria optimization, and Helly's theorem on convex bodies. Numerical experiments using well-known classes of test problems on piecewise convex maximization, convex maximization, and the maximum clique problem show the efficiency of the approach.
Collaboration with Alcatel-Lucent Bell Labs France (ALBLF)
Within the Laboratory of Information, Networking and Communication Sciences (LINCS), collaborations have been established with ALBLF. In 2011, this resulted in two internships funded by ALBLF and co-supervised by Fabien Mathieu (INRIA) and Ludovic Noirie (ALBLF). In 2012, both interns are expected to start a thesis in collaboration with ALBLF and INRIA (one CIFRE, one in the context of the joint lab).
Managed by University Paris Diderot, this project is led by H. Fauconnier and funds J. Clément through a Région Île-de-France grant.
Pierre Fraigniaud is leading an ANR “blanc” project (i.e., fundamental research) on the fundamental aspects of large interaction networks enabling massive distributed storage, efficient decentralized information retrieval, quick inter-user exchanges, and/or rapid information dissemination. The project is mostly oriented towards the design and analysis of algorithms for these (logical) networks, taking into account properties inherent to the underlying infrastructures upon which they are built. The infrastructures and/or overlays considered in this project are selected from different contexts, including communication networks (from the Internet to sensor networks) and societal networks (from the Web to P2P networks). Although originally ending in November 2011, the project has been extended until the end of 2012 for the LaBRI partner.
Managed by University Paris Diderot, P. Fraigniaud leads this project.
Managed by University Paris Diderot, H. Fauconnier leads this project, which funds the Ph.D. of H. Tran-The.
Managed by University Paris Diderot, C. Delporte and H. Fauconnier lead this project, which funds one Ph.D. and two internships per year.
Title: Experimental UpdateLess Evolutive Routing
Type: COOPERATION (ICT)
Defi: Future Internet Experimental Facility and Experimentally-driven Research
Instrument: Specific Targeted Research Project (STREP)
Duration: October 2010 - September 2013
Coordinator: ALCATEL-LUCENT (Belgium)
See also:
http://
Abstract: EULER is a 3-year STREP Project targeting Challenge 1 "Technologies and systems architectures for the Future Internet" of the European Commission (EC) Seventh Framework Programme (FP7). The project scope and methodology are positioned within the FIRE (Future Internet Research and Experimentation) Objective ICT-2009.1.6 Part b: Future Internet experimentally-driven research.
The main objective of the EULER exploratory research project is to investigate new routing paradigms so as to design, develop, and validate experimentally a distributed and dynamic routing scheme suitable for the future Internet and its evolution. The resulting routing scheme(s) is/are intended to address the fundamental limits of current stretch-1 shortest-path routing in terms of routing table scalability, but also topology and policy dynamics (performing efficiently under dynamic network conditions). Therefore, this project will investigate trade-offs between routing table size (to enhance scalability), routing scheme stretch (to ensure routing quality), and communication cost (to react efficiently and timely to various failures). The driving idea of this research project is to make use of the structural and statistical properties of the Internet topology (some of which are hidden) as well as the stability and convergence properties of the Internet policy in order to specialize the design of a distributed routing scheme known to perform efficiently under dynamic network and policy conditions when these properties are met. The project will develop new models and tools to exhaustively analyse the Internet topology, to accurately and reliably measure its properties, and to precisely characterize its evolution. These models, which will better reflect the network and its policy dynamics, will be used to derive useful properties and metrics for the routing schemes and to provide relevant experimental scenarios. The project will develop appropriate tools to evaluate the performance of the proposed routing schemes on large-scale topologies (on the order of 10k nodes). Prototypes of the routing protocols, as well as their functional validation and performance benchmarking on the iLAB experimental facility and/or virtual experimental facilities such as PlanetLab/OneLab, will allow validating the overall behaviour of the proposed routing schemes under realistic conditions.
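To make the stretch notion concrete (an illustrative toy, unrelated to the EULER code base): routing a 6-cycle over a path spanning tree exhibits the worst-case multiplicative stretch n − 1 = 5 of tree routing on cycles:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def max_stretch(adj, sub_adj):
    """Worst multiplicative stretch of routing along a spanning
    subgraph: max over pairs of (subgraph distance / graph distance)."""
    Dg = {u: bfs_dist(adj, u) for u in adj}
    Ds = {u: bfs_dist(sub_adj, u) for u in adj}
    return max(Ds[u][v] / Dg[u][v] for u in adj for v in adj if u != v)

# Routing a 6-cycle over the path spanning tree 0-1-2-3-4-5.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
path6 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
```

Compact routing schemes trade such stretch against the size of the routing tables, which is exactly the trade-off the project investigates.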
Program: EIT ICT Labs
Project acronym: TREC-EIT-GA2011-HORS-5643
Project title:
Duration: 2011
Coordinator: Ilkka Norros
Other partners: KTH (Finland), Fraunhofer (Germany)
Abstract: The project addresses challenging issues in content distribution; managed by TREC for France, it allowed Pascal Felber to be invited by Fabien Mathieu for a postdoctoral position.
Michel Habib is in charge of a course entitled “graph algorithms”.
Pierre Fraigniaud (12 hours) is in charge of the course “Algorithmique distribuée pour les réseaux”;
Carole Delporte and Hugues Fauconnier are in charge of “Algorithmique distribuée avec mémoire partagée”;
Laurent Viennot (12 hours) is teaching “Structures de données distribuées et routage”;
Yacine Boufkhad (192 hours) is teaching scientific computer science and networks.
Fabien de Montgolfier (192 hours) is teaching foundations of computer science, algorithmics, and computer architecture;
Fabien de Montgolfier is teaching P2P theory and application.
Michel Habib (192 hours) is in charge of two courses: “Search Engines” and “Parallelism and Mobility”, the latter including peer-to-peer overlay networks;
Carole Delporte (192 hours) is teaching “Distributed programming”;
Hugues Fauconnier (192 hours) is in charge of both courses “Internet Protocols” and “Distributed Algorithms”.
Fabien Mathieu is teaching Peer-to-peer Networks (6 hours).
PhD: Mauricio Soto, “Quelques propriétés topologiques des graphes et applications à Internet et aux réseaux”, Paris Diderot University, 2 December 2011, supervisors: Fabien de Montgolfier and Laurent Viennot;
PhD: Thu-Hien To, “On some graph problems in phylogenetics”, Paris Diderot University, 15 September 2011, supervisor: Michel Habib;
PhD in progress: Hung Tran-The, “Failure detection with Byzantine adversary”, from 2010, supervisors: Hugues Fauconnier and Carole Delporte.