Section: New Results
Understanding graph representations
Notions of Connectivity in Overlay Networks
Participants : Yuval Emek, Pierre Fraigniaud, Amos Korman, Shay Kutten, David Peleg.
How well connected is the network? This is one of the most fundamental questions one asks when facing the challenge of designing a communication network. Three major notions of connectivity have been considered in the literature, but in the context of traditional (single-layer) networks they turn out to be equivalent. This paper introduces a model for studying the three notions of connectivity in multi-layer networks. Using this model, it is easy to demonstrate that in multi-layer networks the three notions may differ dramatically. Unfortunately, in contrast to the single-layer case, where the values of the three connectivity notions can be computed efficiently, it has recently been shown in the context of WDM networks (results that translate easily to our model) that the values of two of these notions of connectivity are hard to compute, or even to approximate, in multi-layer networks. The current paper sheds some positive light on the multi-layer connectivity topic: we show that the value of the third connectivity notion can be computed in polynomial time, and we develop an approximation algorithm for the construction of well-connected overlay networks.
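In the single-layer case, the equivalence behind this discussion rests on Menger's theorem: the minimum number of edges whose removal disconnects s from t equals the maximum number of pairwise edge-disjoint s–t paths, which is computable by max-flow. A minimal sketch of this single-layer computation (the helper names and the BFS augmenting-path approach are our illustration, not taken from the paper):

```python
from collections import deque

def max_flow_unit(n, edges, s, t):
    """Max number of edge-disjoint s-t paths: unit-capacity max-flow
    with BFS augmenting paths (each undirected edge -> one arc each way)."""
    cap = {}
    adj = [[] for _ in range(n)]
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            if (a, b) not in cap:
                cap[(a, b)] = 0
                adj[a].append(b)
            cap[(a, b)] += 1
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:          # augment along the path found
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def edge_connectivity(n, edges):
    # Menger: global edge connectivity = min over t of max-flow(0, t)
    return min(max_flow_unit(n, edges, 0, t) for t in range(1, n))
```

For instance, a 4-cycle has edge connectivity 2; it is precisely this kind of equivalence that breaks down in the multi-layer model.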
Connected graph searching
Participants : Lali Barrière, Paola Flocchini, Fedor V. Fomin, Pierre Fraigniaud, Nicolas Nisse, Nicola Santoro, Dimitrios M. Thilikos.
In the graph searching game the opponents are a set of searchers and a fugitive in a graph. The searchers try to capture the fugitive by applying some sequence of moves that include placement, removal, or sliding of a searcher along an edge. The fugitive tries to avoid capture by moving along unguarded paths. The search number of a graph is the minimum number of searchers required to guarantee the capture of the fugitive. In this work, we initiate the study of this game under the natural restriction of connectivity, where we demand that in each step of the search the locations of the graph that are clean (i.e., inaccessible to the fugitive) remain connected. We give evidence that many of the standard mathematical tools used so far in classic graph searching fail under the connectivity requirement. We also settle the question of “the price of connectivity”, that is, how many more searchers are required for searching a graph when the connectivity demand is imposed. We give estimates of the price of connectivity on general graphs and provide tight bounds for the case of trees: for trees, the ratio between the connected search number and the non-connected one is always at most 2. We also conjecture that this constant-ratio upper bound for trees holds for all graphs. Our combinatorial results imply a complete characterization of connected graph searching on trees, based on a forbidden-graph characterization of the connected search number. We prove that the connected search game is monotone for trees, i.e., restricting search strategies to those in which the clean territories increase monotonically does not require more searchers. A consequence of our results is that the connected search number can be computed in polynomial time on trees; moreover, we show how to make this algorithm distributed.
Finally, we reveal connections of this parameter to other invariants on trees such as the Horton–Strahler number.
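The search rules described above can be simulated mechanically. Below is a toy simulator for the edge-search variant, assuming the standard recontamination rule (a clean edge sharing an unguarded vertex with a contaminated edge becomes contaminated again); the function names and the move encoding are ours, not from the paper:

```python
def run_edge_search(edges, moves):
    """Simulate edge searching; return the final set of clean edges.
    Moves: ('place', v), ('remove', v), ('slide', u, v) with a searcher on u."""
    edges = {frozenset(e) for e in edges}
    searchers = []                     # multiset of occupied vertices
    clean = set()
    for move in moves:
        if move[0] == 'place':
            searchers.append(move[1])
        elif move[0] == 'remove':
            searchers.remove(move[1])
        elif move[0] == 'slide':
            _, u, v = move
            searchers.remove(u)
            searchers.append(v)
            if frozenset((u, v)) in edges:
                clean.add(frozenset((u, v)))   # sliding cleans the edge
        changed = True                 # recontamination until stable
        while changed:
            changed = False
            for e in list(clean):
                for f in edges - clean:
                    shared = e & f
                    if shared and not (shared & set(searchers)):
                        clean.discard(e)
                        changed = True
                        break
    return clean

def is_connected(edge_set):
    """Check the connectivity restriction on the clean territory."""
    if not edge_set:
        return True
    verts = set().union(*edge_set)
    seen = {next(iter(verts))}
    frontier = list(seen)
    while frontier:
        u = frontier.pop()
        for e in edge_set:
            if u in e:
                for v in e - {u}:
                    if v not in seen:
                        seen.add(v)
                        frontier.append(v)
    return seen == verts
```

Running `is_connected` on the clean set after every move checks whether a given strategy satisfies the connectivity restriction studied here.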
Computing with Large Populations Using Interactions
Participants : Olivier Bournez, Pierre Fraigniaud, Xavier Koegler.
In this work, we define a general model capturing the behavior of a population of anonymous agents that interact in pairs. This model captures some of the main features of opportunistic networks, in which nodes (such as the ones of a mobile ad hoc network) meet sporadically. Because it is reminiscent of Population Protocols, we call our model Large-Population Protocol, or LPP. We are interested in the design of LPPs enforcing, for every x ∈ [0,1], a proportion x of the agents to be in a specific subset of marked states when the size of the population grows to infinity; in that case, we say that the protocol computes x. We prove that, for every x ∈ [0,1], x is computable by an LPP if and only if x is algebraic. Our positive result is constructive: we show how to construct, for every algebraic number x ∈ [0,1], a protocol which computes x.
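As an illustration (our own toy example, not a protocol from the paper), here is an LPP-style pairwise-interaction dynamic whose proportion of agents in a marked state converges to the algebraic number 1/2:

```python
import random

def simulate_lpp(n_agents, n_interactions, seed=0):
    """Toy Large-Population-Protocol-style dynamics.
    Rule: (A, A) -> (A, B) and (B, B) -> (B, A); mixed pairs are unchanged.
    Flips A->B occur at rate a^2 and B->A at rate (1-a)^2, so the
    proportion a of A-agents is driven toward the fixed point 1/2."""
    rng = random.Random(seed)
    states = ['A'] * n_agents          # everyone starts in state A
    for _ in range(n_interactions):
        i, j = rng.sample(range(n_agents), 2)   # a random pairwise meeting
        if states[i] == states[j]:
            states[j] = 'B' if states[i] == 'A' else 'A'
    return states.count('A') / n_agents
```

With 1000 agents and 20000 interactions the empirical proportion settles near 0.5, with fluctuations of order 1/√n.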
Collaborative Search on the Plane without Communication
Participants : Ofer Feinerman, Zvi Lotker, Amos Korman, Jean-Sébastien Sereni.
In this work, we use distributed computing tools to provide a new perspective on the behavior of cooperative biological ensembles. We introduce the Ants Nearby Treasure Search (ANTS) problem, a generalization of the classical cow-path problem which is relevant for collective foraging in animal groups. In the ANTS problem, k identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both k and D, where D is the distance between the central location and the target. This is biologically motivated by cooperative, central place foraging, such as performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. We focus on trying to find what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps, agents must be highly dispersed, making communication difficult. Furthermore, if the agents do not commence the search in synchrony, then even initial communication is problematic. This holds, in particular, with respect to the question of whether the agents can communicate and conclude their total number, k. It turns out that the knowledge of k by the individual agents is crucial for performance. Indeed, it is a straightforward observation that the time required for finding the treasure is Ω(D + D²/k), and we show in this paper that this bound can be matched if the agents have knowledge of k up to some constant-factor approximation. We present a tight bound for the competitive penalty that must be paid, in the running time, if the agents have no information about k. Specifically, this bound is slightly more than logarithmic in the number of agents. In addition, we give a lower bound for the setting in which the agents are given some estimation of k.
Informally, our results imply that the agents can potentially perform well without any knowledge of their total number k; however, to further improve, they must use some information regarding k. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.
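The ANTS problem generalizes the cow-path problem; as a baseline illustration (ours, not the paper's algorithm), a single agent can locate a treasure at distance D by walking a square spiral around the nest, in O(D²) steps:

```python
def spiral_steps(treasure):
    """Walk an outward square spiral from (0, 0); return the number
    of steps taken before stepping onto the treasure cell."""
    if treasure == (0, 0):
        return 0
    x = y = steps = 0
    dx, dy = 1, 0                      # start heading east
    leg = 1                            # current leg length
    while True:
        for _ in range(2):             # two legs per leg length
            for _ in range(leg):
                x, y = x + dx, y + dy
                steps += 1
                if (x, y) == treasure:
                    return steps
            dx, dy = -dy, dx           # turn left 90 degrees
        leg += 1
```

The spiral visits every cell of the grid, so a treasure at distance D is reached within roughly (2D + 1)² steps; the collective question studied above is how k agents can share this workload without communicating.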
Memory Lower Bounds for Randomized Collaborative Search and Implications for Biology
Participants : Ofer Feinerman, Amos Korman.
Initial knowledge regarding group size can be crucial for collective performance. In this work, we study this relation in the context of the Ants Nearby Treasure Search (ANTS) problem, which models natural cooperative foraging behavior such as that performed by ants around their nest. In this problem, k (probabilistic) agents, initially placed at some central location, collectively search for a treasure on the two-dimensional grid. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both k and D, where D is the (unknown) distance between the central location and the target. It is easy to see that Ω(D + D²/k) time units are necessary for finding the treasure. Recently, it has been established that this time is also sufficient if the agents know their total number k (or a constant-factor approximation of it) and enough memory bits are available at their disposal. In this paper, we establish lower bounds on the agent memory size required for achieving certain running-time performances. To the best of our knowledge, these bounds are the first non-trivial lower bounds for the memory size of probabilistic searchers. For example, we show that terminating the search within near-optimal time already requires the agents to use a non-trivial number of memory bits.
From a high-level perspective, we illustrate how methods from distributed computing can be useful in generating lower bounds for cooperative biological ensembles. Indeed, if experiments that comply with our setting reveal that the ants' search is time efficient, then our theoretical lower bounds can provide some insight into the memory they use for this task.
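The Ω(D + D²/k) flavor of these bounds comes from a standard counting argument: the grid ball of radius D contains 2D² + 2D + 1 cells, each of the k agents uncovers at most one new cell per time unit, and some agent must travel distance D. A quick numeric check (helper names ours):

```python
def grid_ball_size(D):
    """Number of grid cells (x, y) with |x| + |y| <= D."""
    return sum(1 for x in range(-D, D + 1)
                 for y in range(-D, D + 1)
                 if abs(x) + abs(y) <= D)

def counting_lower_bound(D, k):
    """Counting argument: the search takes at least D steps (travel)
    and at least |ball(D)| / k steps (k new cells uncovered per step)."""
    return max(D, grid_ball_size(D) // k)
```

For D = 10 and k = 5 the cell-counting term dominates, matching the D²/k regime of the bound.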
What Can be Computed without Communications?
Participants : Heger Arfaoui, Pierre Fraigniaud.
When playing a boolean game (f, ⋄), two players, upon reception of respective inputs x and y, must respectively output a and b satisfying a ⋄ b = f(x, y), in the absence of any communication. It is known that, for ⋄ = ⊕ (as in the CHSH game), the ability for the players to use entangled quantum bits (qubits) helps. In this work, we show that, for every operator ⋄ different from the exclusive-or, quantum correlations do not help. This result is an invitation to revisit the theory of distributed checking, a.k.a. distributed verification, which is currently restricted to decision functions based on the and-operator, hence potentially preventing us from exploiting the potential benefit of quantum effects.
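For intuition, the XOR case above is the CHSH game: outputs must satisfy a ⊕ b = x ∧ y. Enumerating all deterministic classical strategies confirms the classical success bound of 3/4, which entangled players can famously beat (this exhaustive check is our illustration):

```python
from itertools import product

def best_classical_chsh():
    """Max success probability, over uniform inputs (x, y), of
    deterministic strategies a(x), b(y) for the game a XOR b == x AND y."""
    best = 0.0
    # a strategy is a pair of functions {0,1} -> {0,1}: 4 choices per player
    for a0, a1, b0, b1 in product((0, 1), repeat=4):
        wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
                   for x, y in product((0, 1), repeat=2))
        best = max(best, wins / 4)
    return best
```

Shared randomness cannot beat the best deterministic strategy here, so 3/4 is the classical optimum; quantum strategies reach cos²(π/8) ≈ 0.85.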
Decidability Classes for Mobile Agents Computing
Participants : Andrzej Pelc, Pierre Fraigniaud.
In this work, we establish a classification of decision problems that are to be solved by mobile agents operating in unlabeled graphs, using a deterministic protocol. The classification is with respect to the ability of a team of agents to solve the problem, possibly with the aid of additional information. In particular, our focus is on studying differences between the decidability of a decision problem by agents and its verifiability when a certificate for a positive answer is provided to the agents. Our main result shows that there exists a natural complete problem for mobile agent verification. We also show that, for a single agent, three natural oracles yield a strictly increasing chain of relative decidability classes.
Randomized Distributed Decision
Participants : Pierre Fraigniaud, Amos Korman, Merav Parter, David Peleg.
This paper tackles the power of randomization in the context of locality by analyzing the ability to “boost” the success probability of deciding a distributed language. The main outcome of this analysis is that the distributed computing setting contrasts significantly with the sequential one as far as randomization is concerned. Indeed, we prove that in some cases, the ability to increase the success probability for deciding distributed languages is rather limited.
We focus on the notion of a (p, q)-decider for a language L: a distributed randomized algorithm that accepts instances in L with probability at least p and rejects instances outside of L with probability at least q. It is known that every hereditary language that can be decided in t rounds by a (p, q)-decider with p² + q > 1 can be decided deterministically in O(t) rounds. One of our results gives evidence supporting the conjecture that the above statement holds for all distributed languages, and not only for hereditary ones, by proving the conjecture for the restricted case of path topologies. For the range below the aforementioned threshold, namely p² + q ≤ 1, we study the class B_k(t) (for every integer k ≥ 1) of all languages decidable in at most t rounds by a (p, q)-decider with p^{1+1/k} + q > 1. Since every language is decidable (in zero rounds) by a (p, q)-decider satisfying p + q = 1, the hierarchy B_k(t) provides a spectrum of complexity classes between determinism (k = 1, under the above conjecture) and complete randomization (k = ∞). We prove that all these classes are separated, in a strong sense: for every integer k, there exists a language L satisfying L ∈ B_k(0) but L ∉ B_{k+1}(t) for any t. In addition, we show that B_∞(t) does not contain all languages, for any t. In other words, we obtain the hierarchy B_1(t) ⊊ B_2(t) ⊊ ⋯ ⊊ B_∞(t) ⊊ All. Finally, we show that if the inputs can be restricted in certain ways, then the ability to boost the success probability becomes almost null, and in particular, derandomization is not possible even beyond the threshold p² + q > 1.
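For contrast, in the sequential setting success probability is boosted generically by independent repetition and majority vote; the point of the paper is that no analogous generic boosting exists for distributed deciders. An exact binomial computation of sequential boosting (names ours):

```python
from math import comb

def majority_success(p, r):
    """Probability that a strict majority of r independent runs,
    each correct with probability p, gives the correct answer."""
    need = r // 2 + 1
    return sum(comb(r, i) * p**i * (1 - p)**(r - i)
               for i in range(need, r + 1))
```

With p = 0.6, a single run succeeds with probability 0.6, while 51 independent runs with majority vote succeed with probability above 0.9, per the Chernoff-style amplification unavailable in the local distributed setting.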
The Worst Case Behavior of Randomized Gossip
Participants : Hervé Baumann, Pierre Fraigniaud, Hovhannes A. Harutyunyan, Rémi de Verclos.
In this work, we consider the quasi-random rumor spreading model introduced by Doerr, Friedrich, and Sauerwald [SODA 2008], hereafter referred to as the list-based model. Each node is provided with a cyclic list of all its neighbors, chooses a random position in its list, and from then on calls its neighbors in the order of the list. This model is known to perform asymptotically at least as well as the random phone-call model for many network classes. Motivated by potential applications of the list-based model to live streaming, we are interested in its worst-case behavior.
Our first main result is the design of a polynomial-time algorithm that, given any n-node m-edge network G and any source-target pair (s, t), computes the maximum number of rounds it may take for a rumor to be broadcast from s to t in G, in the list-based model. This algorithm yields a polynomial-time algorithm that, given any network G, computes the maximum number of rounds it may take for a rumor to be broadcast from any source to any target. Hence, the list-based model is computationally easy to tackle in its basic version.
The situation is radically different when one considers variants of the model in which nodes are aware of the status of their neighbors, i.e., aware of whether or not they have already received the rumor, at any point in time. Indeed, our second main result states that, unless P=NP, the worst-case behavior of the list-based model with the additional feature that every node is perpetually aware of which of its neighbors have already received the rumor cannot be approximated in polynomial time within some multiplicative factor. As a byproduct of this latter result, we show that, unless P=NP, there is no PTAS approximating the worst-case behavior of the list-based model whenever every node perpetually keeps track of the subset of its neighbors which have sent the rumor to it so far.
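On small graphs, the worst-case broadcast time of the basic list-based model can also be obtained by brute force over all starting positions in the cyclic lists. This naive baseline (ours, not the paper's polynomial-time algorithm) makes the model's semantics concrete:

```python
from itertools import product

def worst_case_rounds(adj, source, target):
    """adj: dict vertex -> cyclic list of neighbors.
    Each informed node, starting the round after it is informed, calls
    its neighbors in cyclic order from its chosen start offset.
    Returns the max, over all offset choices, of the round at which
    `target` becomes informed."""
    nodes = list(adj)
    worst = 0
    for offsets in product(*(range(len(adj[v])) for v in nodes)):
        off = dict(zip(nodes, offsets))
        informed = {source: 0}         # node -> round at which informed
        rounds = 0
        while target not in informed:
            rounds += 1
            for u, r0 in list(informed.items()):
                if r0 < rounds:        # u acts from the round after r0
                    k = rounds - r0 - 1        # calls u already made
                    callee = adj[u][(off[u] + k) % len(adj[u])]
                    informed.setdefault(callee, rounds)
            if rounds > 4 * len(nodes) ** 2:   # safety cap
                break
        worst = max(worst, rounds)
    return worst
```

On the path 0–1–2–3 with source 0 and target 3, the adversarial choice of list offsets delays the broadcast from 3 rounds to 5.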
Modularity
Participants : Fabien de Montgolfier, Mauricio Soto, Laurent Viennot.
Modularity (Newman-Girvan) has been introduced as a quality measure for graph partitioning. It has received considerable attention in several disciplines, especially the study of complex systems. In order to better understand this measure from a graph-theoretic point of view, we study the modularity of a variety of graph classes. In this work, we first consider simple graph classes such as tori and hypercubes, and show that these regular graph families have asymptotic modularity 1 (that is, the maximum possible). We extend this result to trees with bounded degree, which allows us to derive a lower bound of 2 over the average degree for graph classes with low maximum degree (including power-law graphs with a sufficiently large exponent).
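For reference, the Newman-Girvan modularity of a given partition can be computed directly from its definition: for each community c, the fraction of edges inside c minus the squared fraction of edge endpoints in c (a small sketch, names ours):

```python
def modularity(edges, communities):
    """Newman-Girvan modularity: sum over communities c of
    e_c / m - (deg_c / (2m))^2, where e_c is the number of
    intra-community edges and deg_c the total degree of c."""
    m = len(edges)
    comm_of = {v: i for i, c in enumerate(communities) for v in c}
    q = 0.0
    for i, c in enumerate(communities):
        e_c = sum(1 for u, v in edges
                  if comm_of[u] == i and comm_of[v] == i)
        deg_c = sum(1 for u, v in edges for w in (u, v)
                    if comm_of[w] == i)
        q += e_c / m - (deg_c / (2 * m)) ** 2
    return q
```

For instance, splitting a 6-cycle into two paths of three consecutive vertices gives modularity 1/6, while longer cycles split into longer arcs approach the asymptotic value 1 mentioned above.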
Modeling social networks
Participants : Nidhi Hegde, Laurent Massoulié, Laurent Viennot.
Social networks offer users new means of accessing information, essentially relying on “social filtering”, i.e., propagation and filtering of information by social contacts. The sheer amount of data flowing in these networks, combined with the limited budget of attention of each user, makes it difficult to ensure that social filtering brings relevant content to the interested users. Our motivation in this work is to measure to what extent self-organization of the social network results in efficient social filtering. To this end we introduce flow games, a simple abstraction that models network formation under selfish user dynamics, featuring user-specific interests and budget of attention. In the context of homogeneous user interests, we show that selfish dynamics converge to a stable network structure (namely a pure Nash equilibrium) with close-to-optimal information dissemination. In contrast, we show, for the more realistic case of heterogeneous interests, that convergence, if it occurs, may lead to information dissemination that can be arbitrarily inefficient, as captured by an unbounded “price of anarchy”. Nevertheless, the situation differs when users' interests exhibit a particular structure, captured by a metric space with low doubling dimension. In that case, natural autonomous dynamics converge to a stable configuration. Moreover, users obtain all the information of interest to them in the corresponding dissemination, provided their budget of attention is logarithmic in the size of their interest set.
Additive Spanners and Distance and Routing Labeling Schemes for Hyperbolic Graphs
Participants : Victor Chepoi, Feodor Dragan, Bertrand Estellon, Michel Habib, Yann Vaxès, Yang Xiang.
δ-Hyperbolic metric spaces were defined by M. Gromov in 1987 via a simple four-point condition: for any four points u, v, w, x, the two larger of the three distance sums d(u,v)+d(w,x), d(u,w)+d(v,x), d(u,x)+d(v,w) differ by at most 2δ. They play an important role in geometric group theory and in the geometry of negatively curved spaces, and have recently become of interest in several domains of computer science, including algorithms and networking. In this work, we study unweighted δ-hyperbolic graphs. Using the layering partition technique, we show that every n-vertex δ-hyperbolic graph with δ ≥ 1/2 has an additive O(δ log n)-spanner with O(δn) edges, and we provide a simpler, in our opinion, and faster construction of distance approximating trees of δ-hyperbolic graphs with an additive error O(δ log n). The construction of our tree takes only linear time in the size of the input graph. As a consequence, we show that the family of n-vertex δ-hyperbolic graphs with δ ≥ 1/2 admits a routing labeling scheme with polylogarithmic-size labels, additive stretch O(δ log n) and a constant-time routing protocol, and a distance labeling scheme with polylogarithmic-size labels, additive error O(δ log n) and a constant-time distance decoder.
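The four-point condition can be checked by brute force on small graphs, yielding the hyperbolicity δ as the largest half-difference between the two biggest distance sums over all quadruples (a sketch, names ours):

```python
from itertools import combinations

def all_pairs_dist(adj):
    """BFS from every vertex; adj: dict v -> iterable of neighbors."""
    dist = {}
    for s in adj:
        d = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        nxt.append(v)
            frontier = nxt
        dist[s] = d
    return dist

def hyperbolicity(adj):
    """Smallest delta such that, for every 4 vertices, the two largest
    of the three distance sums differ by at most 2*delta."""
    dist = all_pairs_dist(adj)
    delta = 0.0
    for u, v, x, y in combinations(adj, 4):
        s = sorted([dist[u][v] + dist[x][y],
                    dist[u][x] + dist[v][y],
                    dist[u][y] + dist[v][x]])
        delta = max(delta, (s[2] - s[1]) / 2)
    return delta
```

Trees have hyperbolicity 0 (they satisfy the condition exactly), while the 4-cycle already has δ = 1; this brute force is O(n⁴) and only meant to illustrate the definition.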
Constructing a Minimum phylogenetic Network from a Dense triplet Set
Participants : Michel Habib, Thu-Hien To.
For a given set L of species and a set T of triplets on L, we seek to construct a phylogenetic network which is consistent with T, i.e., which represents all triplets of T. The level of a network is defined as the maximum number of hybrid vertices in its biconnected components. When T is dense, there exist polynomial-time algorithms to construct networks of the lowest levels (Aho et al., 1981; Jansson, Nguyen and Sung, 2006; Jansson and Sung, 2006; Iersel et al., 2009). For higher levels, partial answers were obtained by Iersel and Kelk (2008), with a polynomial-time algorithm for simple networks. In this paper, we detail the first complete answer for the general case, solving a problem proposed in Jansson and Sung (2006) and Iersel et al. (2009): for any fixed k, it is possible to construct, in time polynomial in the size of T, a level-k network having the minimum number of hybrid vertices and consistent with T, if one exists.
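For level 0 (trees), the classical BUILD algorithm of Aho et al. cited above constructs a tree consistent with a triplet set, or reports that none exists, by recursing on the connected components of an auxiliary graph. A compact sketch (function names and the nested-tuple tree encoding are ours):

```python
def build(leaves, triplets):
    """Aho et al.'s BUILD: return a nested-tuple tree consistent with
    all rooted triplets (x, y, z), read as xy|z (x and y are closer to
    each other than to z), or None if no consistent tree exists."""
    if len(leaves) == 1:
        return next(iter(leaves))
    # Aho graph: connect x and y for each triplet xy|z inside `leaves`
    parent = {v: v for v in leaves}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for x, y, z in triplets:
        if x in leaves and y in leaves and z in leaves:
            parent[find(x)] = find(y)
    comps = {}
    for v in leaves:
        comps.setdefault(find(v), set()).add(v)
    if len(comps) == 1:
        return None                    # triplets are inconsistent
    children = []
    for comp in comps.values():
        sub = build(comp, triplets)
        if sub is None:
            return None
        children.append(sub)
    return tuple(children)
```

On L = {a, b, c, d} with triplets ab|c, ab|d and cd|a, BUILD returns the tree grouping {a, b} against {c, d}; the level-k construction above generalizes far beyond this tree case.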
Algorithms for Some Join Decompositions
Participants : Michel Habib, Antoine Mamcarz, Fabien de Montgolfier.
A homogeneous pair (also known as a 2-module) of a graph is a pair {M₁, M₂} of disjoint vertex subsets such that every vertex outside M₁ ∪ M₂ is adjacent either to all vertices of M₁ or to none of them, and likewise for M₂. First used in the context of perfect graphs [Chvátal and Sbihi 1987], it is a generalization of splits (a.k.a. 1-joins) and of modules. The algorithmics to compute them appears quite involved. In this work, we describe an algorithm computing a homogeneous pair (if any), which not only improves a previous bound of [Everett, Klein and Reed 1997], but also uses a nice structural property of homogeneous pairs. Our result can be extended to compute the whole homogeneous-pair decomposition tree within the same complexity. Using similar ideas, we present an algorithm to compute a join decomposition of a graph, improving a previous algorithm [Feder et al. 2005]. These two decompositions are special cases of H-joins [Bui-Xuan, Telle and Vatshelle 2010], to which our techniques apply.
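The defining all-or-none condition of a homogeneous pair is easy to verify for a given candidate pair (a naive checker, ours; the size side-conditions used in the literature are omitted here):

```python
def is_homogeneous_pair(adj, m1, m2):
    """Check the all-or-none condition: every vertex outside m1 | m2
    sees all of m1 or none of it, and likewise for m2.
    adj: dict vertex -> list of neighbors; m1, m2: disjoint vertex sets."""
    if m1 & m2:
        return False
    outside = set(adj) - m1 - m2
    for v in outside:
        for part in (m1, m2):
            hits = len(set(adj[v]) & part)
            if hits not in (0, len(part)):
                return False
    return True
```

Combined with enumeration of candidate pairs this gives a brute-force finder; the point of the work above is precisely to avoid such enumeration.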
Detecting 2-joins faster
Participants : Pierre Charbit, Michel Habib, Nicolas Trotignon, Kristina Vušković.
2-joins are edge cutsets that naturally appear in the decomposition of several classes of graphs closed under taking induced subgraphs, such as balanced bipartite graphs, even-hole-free graphs, perfect graphs and claw-free graphs. Their detection is needed in several algorithms, and is the slowest step for some of them. The classical method to detect a 2-join takes O(n³m) time, where n is the number of vertices of the input graph and m the number of its edges. To detect non-path 2-joins (special kinds of 2-joins that are needed in all of the known algorithms that use 2-joins), the fastest known method takes O(n⁴m) time. Here, we give an O(n²m)-time algorithm for both of these problems. A consequence is a speed-up of several known algorithms.
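A 2-join of a graph is a partition (X₁, X₂) of the vertices, with special sets A₁, B₁ ⊆ X₁ and A₂, B₂ ⊆ X₂, such that the edges crossing the partition are exactly those between A₁ and A₂ together with those between B₁ and B₂. A naive checker for this edge condition (ours; the non-degeneracy requirements from the literature are omitted):

```python
def is_2join(adj, x1, x2, a1, b1, a2, b2):
    """Check the edge condition of a 2-join (x1, x2) with special sets
    a1, b1 in x1 and a2, b2 in x2: crossing edges are exactly
    all pairs a1 x a2 together with all pairs b1 x b2.
    adj: dict vertex -> list of neighbors."""
    cross = {(u, v) for u in x1 for v in adj[u] if v in x2}
    wanted = ({(u, v) for u in a1 for v in a2}
              | {(u, v) for u in b1 for v in b2})
    return (cross == wanted
            and a1 <= x1 and b1 <= x1 and a2 <= x2 and b2 <= x2)
```

Testing all candidate partitions this way is exponential, which is exactly why dedicated detection algorithms such as the one above matter.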