Section: New Results
Participants: Julien Bensmail, Jean-Claude Bermond, David Coudert, Frédéric Giroire, Frédéric Havet, Emanuele Natale, Nicolas Nisse, Stéphane Pérennes, François Dross, Fionn Mc Inerney, Thibaud Trolliet.
Coati is interested in the algorithmic aspects of Graph Theory. In general, we try to design the most efficient algorithms for various problems in Graph Theory and telecommunication networks. We use Graph Theory to model such network problems, study their complexity, and then investigate the structural properties of graphs that make these problems hard or easy.
Complexity of graph problems
Fully Polynomial FPT Algorithms for Some Classes of Bounded Clique-width Graphs.
Recently, hardness results for problems in P were achieved using reasonable complexity-theoretic assumptions such as the Strong Exponential Time Hypothesis. According to these assumptions, many graph-theoretic problems do not admit truly subquadratic algorithms. A central technique used to tackle the difficulty of the above-mentioned problems is fixed-parameter algorithms with polynomial dependency on the fixed parameter (P-FPT). Applying this technique to clique-width, an important graph parameter, remained to be done. In , we study several graph-theoretic problems for which hardness results exist, such as cycle problems, distance problems and maximum matching. We give hardness results and P-FPT algorithms, using clique-width and some of its upper bounds as parameters. We believe that our most important result is an algorithm in -time for computing a maximum matching, where is either the modular-width of the graph or the -sparseness. The latter generalizes many algorithms that have been introduced so far for specific subclasses, such as cographs. Our algorithms are based on preprocessing methods using modular decomposition and split decomposition, and thus can also be generalized to some graph classes with unbounded clique-width.
Explicit Linear Kernels for Packing Problems
During the last few years, several algorithmic meta-theorems have appeared (Bodlaender et al. , Fomin et al. , Kim et al. ) guaranteeing the existence of linear kernels on sparse graphs for problems satisfying some generic conditions. The drawback of such general results is that it is usually not clear how to derive from them constructive kernels with reasonably low explicit constants. To fill this gap, we recently presented  a framework to obtain explicit linear kernels for some families of problems whose solutions can be certified by a subset of vertices. In , we enhance our framework to deal with packing problems, that is, problems whose solutions can be certified by collections of subgraphs of the input graph satisfying certain properties. -Packing is a typical example: for a family of connected graphs that we assume to contain at least one planar graph, the task is to decide whether a graph contains vertex-disjoint subgraphs such that each of them contains a graph in as a minor. We provide explicit linear kernels on sparse graphs for the following two orthogonal generalizations of -Packing: for an integer , one aims at finding either minor-models that are pairwise at distance at least in (--Packing), or such that each vertex in belongs to at most minor-models (-Packing with -Membership). Finally, we also provide linear kernels for the versions of these problems where one wants to pack subgraphs instead of minors.
Low Time Complexity Algorithms for Path Computation in Cayley Graphs.
We study the problem of path computation in Cayley Graphs (CG) from an approach of word processing in groups. This approach consists in encoding the topological structure of CG in an automaton called Diff, and then applying word-processing techniques to compute shortest paths. In , we present algorithms for computing the -shortest paths, the shortest disjoint paths and the shortest path avoiding a set of nodes and edges. For any CG with diameter , the time complexity of the proposed algorithms is , where denotes the size of Diff. We show that our proposal outperforms the state of the art of topology-agnostic algorithms for disjoint shortest paths and stays competitive with respect to proposals for specific families of CG. The proposed algorithms therefore provide a basis for the design of adaptive, low-complexity routing schemes for networks whose interconnections are defined by CG.
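To make the path-computation setting concrete, here is a minimal sketch, assuming nothing from the paper beyond the definition of a Cayley graph: a plain breadth-first search for shortest paths in the Cayley graph of the symmetric group S4 generated by adjacent transpositions. The Diff automaton and the word-processing machinery are not reproduced, and all function names are ours.

```python
from collections import deque

def cayley_shortest_path(start, target, generators, apply_gen):
    """Plain BFS for a shortest path between two group elements in the
    Cayley graph defined by `generators` and the action `apply_gen`."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        g = queue.popleft()
        if g == target:
            path = []                    # reconstruct the path backwards
            while g is not None:
                path.append(g)
                g = parent[g]
            return path[::-1]
        for s in generators:
            h = apply_gen(g, s)
            if h not in parent:
                parent[h] = g
                queue.append(h)
    return None                          # target not reachable

# Cayley graph of S4 with adjacent transpositions as generators.
def swap(perm, i):
    p = list(perm)
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

path = cayley_shortest_path((0, 1, 2, 3), (3, 2, 1, 0), [0, 1, 2], swap)
# The distance equals the number of inversions of the target, here 6.
print(len(path) - 1)  # 6
```

On this Cayley graph, the BFS distance from the identity to the reversed permutation is exactly its number of inversions, which gives a handy correctness check.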
Convex hull in graphs.
In , we prove that, given a closure function, the smallest preimage of a closed set can be calculated in polynomial time in the number of closed sets. This implies that there is a polynomial-time algorithm to compute the convex hull number of a graph when all its convex subgraphs are given as input. We then show that deciding whether the smallest preimage of a closed set is logarithmic in the size of the ground set is LOGSNP-hard if only the ground set is given. A special instance of this problem is to compute the dimension of a poset given its linear extension graph, a problem that is conjectured to be in P.
Our attempt to show that the latter problem is LOGSNP-complete leads to several interesting questions and to the definition of the isometric hull, i.e., a smallest isometric subgraph containing a given set of vertices . While for , an isometric hull is just a shortest path, we show that computing the isometric hull of a set of vertices is NP-complete even if . Finally, we consider the problem of computing the isometric hull number of a graph and show that computing it is complete.
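For intuition, the geodesic convex hull of a vertex set can be computed by the standard closure iteration: repeatedly add every vertex lying on a shortest path between two vertices of the current set until a fixed point is reached. This is a sketch with our own naming, not the paper's algorithm, and it is only practical on small graphs since it precomputes all-pairs distances.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Distances from `src` by breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def convex_hull(adj, seed):
    """Smallest geodesically convex set containing `seed`: iterate the
    interval operator (add every vertex on a shortest path between two
    vertices of the current set) until a fixed point is reached."""
    dist = {v: bfs_dist(adj, v) for v in adj}
    hull = set(seed)
    while True:
        grown = set(hull)
        for u, v in combinations(hull, 2):
            grown |= {w for w in adj if dist[u][w] + dist[w][v] == dist[u][v]}
        if grown == hull:
            return hull
        hull = grown

# In the 4-cycle, the hull of two opposite vertices is the whole cycle,
# while two adjacent vertices already form a convex set.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(sorted(convex_hull(C4, {0, 2})))  # [0, 1, 2, 3]
print(sorted(convex_hull(C4, {0, 1})))  # [0, 1]
```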
Combinatorial games in graphs
Graph searching and combinatorial games in graphs.
The Network Decontamination problem consists of coordinating a team of mobile agents in order to clean a contaminated network. The problem is actually equivalent to tracking and capturing an invisible and arbitrarily fast fugitive. This problem has natural applications in network security in computer science or in robotics for search or pursuit-evasion missions. Many different objectives have been studied: the main one being the minimization of the number of mobile agents necessary to clean a contaminated network.
Many environments (continuous or discrete) have also been considered. In the book chapter , we focus on networks modeled by graphs. In this context, the optimization problem that consists of minimizing the number of agents has a deep graph-theoretical interpretation. Network decontamination and, more precisely, graph searching models, provide nice algorithmic interpretations of fundamental concepts in the Graph Minors theory by Robertson and Seymour.
For all these reasons, graph searching variants have been widely studied since their introduction by Breisch (1967) and their mathematical formalizations by Parsons (1978) and Petrov (1982). The book chapter  consists of an overview of the algorithmic results on graph decontamination and graph searching. Moreover,  is the preface to the special issue of TCS on the 8th Workshop on GRAph Searching, Theory and Applications, Anogia, Crete, Greece, April 10 - April 13, 2017.
In , we focus on another game with mobile agents in a graph. Precisely, in the eternal domination game played on graphs, an attacker attacks a vertex at each turn and a team of guards must move a guard to the attacked vertex to defend it. The guards may only move to adjacent vertices on their turn. The goal is to determine the eternal domination number of a graph, which is the minimum number of guards required to defend against an infinite sequence of attacks.  continues the study of the eternal domination game on strong grids . Cartesian grids have been extensively studied, with tight bounds existing for small grids such as grids for . It was recently proven that where is the domination number of , which lower bounds the eternal domination number . We prove that, for all such that , (note that is the domination number of ). Our technique may be applied to other “grid-like” graphs.
In , we adapt the techniques of  to prove that the eternal domination number of strong grids is upper bounded by . While this does not improve upon a recently announced bound of  in the general case, we show that our bound is an improvement in the case where the smaller of the two dimensions is at most 6179.
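Since the eternal domination number is lower-bounded by the classical domination number, a brute-force computation of the latter on small strong grids is a useful sanity check. In this sketch (approach and names ours), the strong product of two paths is modelled by king-move adjacency, and the known closed form for king domination serves as the expected value.

```python
from itertools import combinations, product

def strong_grid_neighborhoods(n, m):
    """Closed neighbourhoods in the strong product of paths P_n and P_m,
    i.e. king-move adjacency on an n-by-m grid."""
    cells = list(product(range(n), range(m)))
    nbr = {}
    for (i, j) in cells:
        nbr[(i, j)] = {(i + di, j + dj)
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if 0 <= i + di < n and 0 <= j + dj < m}
    return cells, nbr

def domination_number(n, m):
    """Smallest set of guards whose closed neighbourhoods cover the whole
    grid.  Brute force, so only practical for small grids."""
    cells, nbr = strong_grid_neighborhoods(n, m)
    for k in range(1, len(cells) + 1):
        for guards in combinations(cells, k):
            if set().union(*(nbr[g] for g in guards)) == set(cells):
                return k

# For strong grids the domination number is ceil(n/3) * ceil(m/3).
print(domination_number(3, 3))  # 1  (one king in the centre)
print(domination_number(4, 4))  # 4
```

The 4x4 case also illustrates the lower-bound argument: the four disjoint 2x2 corner blocks each force at least one guard.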
The Orthogonal Colouring Game
In , we introduce the Orthogonal Colouring Game, in which two players alternately colour vertices (from a choice of colours) of a pair of isomorphic graphs while respecting the properness and the orthogonality of the colouring. Each player aims to maximize her score, which is the number of coloured vertices in the copy of the graph she owns. Our main result is that the second player has a strategy to force a draw in this game, for any , for graphs that admit a strictly matched involution. An involution of a graph is strictly matched if its fixed-point set induces a clique and any non-fixed point is connected with its image by an edge. We give a structural characterization of graphs admitting a strictly matched involution and bounds on the number of such graphs. Examples of such graphs are the graphs associated with Latin squares and Sudoku squares.
In , we prove that recognising graphs that admit a strictly matched involution is NP-complete.
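For intuition about the definition, a strictly matched involution can be searched for by brute force on very small graphs. This exponential-time sketch (names ours) only illustrates the definition, not the NP-completeness reduction: it enumerates involutive automorphisms and checks the clique and matching conditions.

```python
from itertools import permutations

def has_strictly_matched_involution(vertices, edges):
    """Brute-force test (exponential, small graphs only): look for a graph
    automorphism f with f(f(v)) = v whose fixed points induce a clique and
    where every non-fixed vertex v is adjacent to its image f(v)."""
    E = {frozenset(e) for e in edges}
    for perm in permutations(vertices):
        f = dict(zip(vertices, perm))
        if any(f[f[v]] != v for v in vertices):
            continue  # not an involution
        if any((frozenset((f[u], f[v])) in E) != (frozenset((u, v)) in E)
               for u in vertices for v in vertices if u != v):
            continue  # not an automorphism
        fixed = [v for v in vertices if f[v] == v]
        if any(frozenset((u, v)) not in E
               for u in fixed for v in fixed if u != v):
            continue  # fixed points must induce a clique
        if all(f[v] == v or frozenset((v, f[v])) in E for v in vertices):
            return True
    return False

# A single edge admits one (swap the endpoints, or the identity),
# but the path on three vertices does not.
print(has_strictly_matched_involution([0, 1], [(0, 1)]))            # True
print(has_strictly_matched_involution([0, 1, 2], [(0, 1), (1, 2)])) # False
```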
Complexity of Games Compendium
Since games and puzzles began to be studied under a computational lens, researchers have unearthed a rich landscape of complexity results showing deep connections between games and fundamental problems and models in computer science. Complexity of Games (CoG, https://steven3k.gitlab.io/isnphard-test/) is a compendium of complexity results on games and puzzles. It aims to serve as a reference guide for enthusiasts and researchers on the topic, and it is a collaborative, open-source project that welcomes contributions from the community.
Algorithms for social networks
KADABRA, an ADaptive Algorithm for Betweenness via Random Approximation
In , we present KADABRA, a new algorithm to approximate betweenness centrality in directed and undirected graphs, which significantly outperforms all previous approaches on real-world complex networks. The efficiency of the new algorithm relies on two new theoretical contributions, of independent interest. The first contribution focuses on sampling shortest paths, a subroutine used by most algorithms that approximate betweenness centrality. We show that, on realistic random graph models, we can perform this task in time with high probability, obtaining a significant speedup with respect to the worst-case performance. We experimentally show that this new technique achieves similar speedups on real-world complex networks, as well. The second contribution is a new rigorous application of the adaptive sampling technique. This approach decreases the total number of shortest paths that need to be sampled to compute all betweenness centralities with a given absolute error, and it also handles more general problems, such as computing the most central nodes. Furthermore, our analysis is general, and it might be extended to other settings.
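The shortest-path sampling subroutine can be sketched as follows (a simplified illustration with our own naming; KADABRA's balanced bidirectional BFS and adaptive stopping rule are not reproduced): a BFS computes distances and shortest-path counts, and a backward walk weighted by those counts selects one shortest path uniformly at random; averaging over sampled pairs estimates betweenness.

```python
import random
from collections import deque

def sample_shortest_path(adj, s, t):
    """BFS from s, then trace one shortest s-t path backwards, choosing
    each predecessor with probability proportional to its path count, so
    that each shortest path is returned with equal probability."""
    dist, sigma = {s: 0}, {s: 1}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                queue.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    if t not in dist:
        return None
    path, u = [t], t
    while u != s:
        preds = [v for v in adj[u] if dist.get(v, -2) == dist[u] - 1]
        u = random.choices(preds, weights=[sigma[p] for p in preds])[0]
        path.append(u)
    return path[::-1]

def approx_betweenness(adj, samples):
    """Estimate (normalised) betweenness by sampling random pairs and one
    random shortest path per pair."""
    score = {v: 0.0 for v in adj}
    nodes = list(adj)
    for _ in range(samples):
        s, t = random.sample(nodes, 2)
        path = sample_shortest_path(adj, s, t)
        if path:
            for v in path[1:-1]:          # interior vertices only
                score[v] += 1.0 / samples
    return score

random.seed(0)
# Star graph: the centre lies on every shortest path between two leaves,
# so its score estimates the fraction of sampled leaf-leaf pairs (0.6).
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
bc = approx_betweenness(star, samples=2000)
```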
Distributed Community Detection via Metastability of the 2-Choices Dynamics
In , we investigate the behavior of a simple majority dynamics on networks of agents whose interaction topology exhibits a community structure. By leveraging recent advancements in the analysis of dynamics, we prove that, when the states of the nodes are randomly initialized, the system rapidly and stably converges to a configuration in which the communities maintain internal consensus on different states. This is the first analytical result on the behavior of dynamics for non-consensus problems on non-complete topologies, based on the first symmetry-breaking analysis in such a setting. Our result has several implications in different contexts in which dynamics are adopted for computational and biological modeling purposes. In the context of Label Propagation Algorithms, a class of widely used heuristics for community detection, it represents the first theoretical result on the behavior of a distributed label propagation algorithm with quasi-linear message complexity. In the context of evolutionary biology, dynamics such as the Moran process have been used to model the spread of mutations in genetic populations [Lieberman, Hauert, and Nowak 2005]; our result shows that, when the probability of adoption of a given mutation by a node of the evolutionary graph depends super-linearly on the frequency of the mutation in the neighborhood of the node and the underlying evolutionary graph exhibits a community structure, there is a non-negligible probability for species differentiation to occur.
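A minimal simulation illustrates the metastability phenomenon. This is our own toy setup, not the paper's analysis: we plant the two communities on opposite opinions instead of using a random initialization, and generate the topology with a small stochastic block model.

```python
import random

def two_choices_round(adj, state):
    """One synchronous round of the 2-Choices dynamics: every node polls
    two random neighbours (with replacement) and adopts their common
    opinion if they agree, otherwise keeps its own."""
    new = {}
    for v, opinion in state.items():
        if not adj[v]:                    # isolated node keeps its opinion
            new[v] = opinion
            continue
        a, b = random.choice(adj[v]), random.choice(adj[v])
        new[v] = state[a] if state[a] == state[b] else opinion
    return new

def two_community_graph(n, p_in, p_out, rng):
    """Toy stochastic block model: communities {0..n-1} and {n..2n-1},
    dense inside each community, sparse across."""
    adj = {v: [] for v in range(2 * n)}
    for u in range(2 * n):
        for v in range(u + 1, 2 * n):
            p = p_in if (u < n) == (v < n) else p_out
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

random.seed(7)
n = 50
adj = two_community_graph(n, 0.5, 0.02, random)
state = {v: int(v >= n) for v in adj}     # communities start in disagreement
for _ in range(30):
    state = two_choices_round(adj, state)

# Metastability: each community keeps internal consensus on its own opinion.
in_A = sum(state[v] for v in range(n))          # near 0
in_B = sum(state[v] for v in range(n, 2 * n))   # near n
```

With only about 2% cross-community edges, a node almost never polls two agreeing cross-neighbours, so the disagreeing configuration persists for many rounds.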
On the Necessary Memory to Compute the Plurality in Multi-Agent Systems
Consensus and Broadcast are two fundamental problems in distributed computing, whose solutions have several applications. Intuitively, Consensus should be no harder than Broadcast, and this can be rigorously established in several models. Can Consensus be easier than Broadcast?
In models that allow noiseless communication, we prove in  a reduction of (a suitable variant of) Broadcast to binary Consensus, that preserves the communication model and all complexity parameters such as randomness, number of rounds, communication per round, etc., while there is a loss in the success probability of the protocol. Using this reduction, we get, among other applications, the first logarithmic lower bound on the number of rounds needed to achieve Consensus in the uniform GOSSIP model on the complete graph. The lower bound is tight and, in this model, Consensus and Broadcast are equivalent.
We then turn to distributed models with noisy communication channels that have been studied in the context of some bio-inspired systems. In such models, only one noisy bit is exchanged when a communication channel is established between two nodes, so one cannot easily simulate a noiseless protocol by using error-correcting codes. An lower bound on the number of rounds needed for Broadcast was proved by Boczkowski et al.  in one such model (noisy uniform PULL, where is a parameter that measures the amount of noise). In the same model, we prove a new bound for Broadcast and a bound for binary Consensus, thus establishing an exponential gap between the number of rounds necessary for Consensus versus Broadcast.
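For intuition about the PULL setting, here is a toy simulation of noiseless uniform PULL rumour spreading on the complete graph (our own sketch; the noisy-channel variants discussed above are not reproduced). Each round, every uninformed node contacts one node chosen uniformly at random and learns the rumour if that contact is informed, which takes a logarithmic number of rounds with high probability.

```python
import random

def pull_broadcast_rounds(n, seed=0):
    """Simulate noiseless uniform PULL rumour spreading on the complete
    graph with n nodes; return the number of rounds until every node is
    informed (Theta(log n) with high probability)."""
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True                    # node 0 starts with the rumour
    rounds = 0
    while not all(informed):
        contacts = [rng.randrange(n) for _ in range(n)]
        informed = [informed[v] or informed[contacts[v]] for v in range(n)]
        rounds += 1
    return rounds

rounds = pull_broadcast_rounds(1024)
print(rounds)
```

The informed set roughly doubles per round while small, then a short coupon-collector-like tail finishes the spreading.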
How long does it take for all users in a social network to choose their communities?
In , we consider a community formation problem in social networks, where the users are either friends or enemies. The users are partitioned into conflict-free groups (i.e., independent sets in the conflict graph that represents the enmities between users). The dynamics goes on as long as there exists any set of at most users, being any fixed parameter, that can change their current groups in the partition simultaneously, in such a way that they all strictly increase their utility (number of friends, i.e., the cardinality of their respective group minus one). Previously, the best-known upper bounds on the maximum time of convergence were for and for , with being the independence number of . Our first contribution consists in reinterpreting the initial problem as the study of a dominance ordering over the vectors of integer partitions. With this approach, we obtain for the tight upper bound and, when is the empty graph, the exact value of order . The time of convergence, for any fixed , was conjectured to be polynomial. We disprove this conjecture: specifically, we prove that for any , the maximum time of convergence is in .
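The single-user case of the dynamics can be sketched as a better-response loop. This is a minimal illustration under our own encoding of the conflict graph (it demonstrates the dynamics, not the convergence bounds): one user at a time moves to a conflict-free group whenever that strictly increases her utility, and we count moves until no user can improve.

```python
def better_response_rounds(conflict_edges, n):
    """Simulate the dynamics for moves of single users: a user switches
    to another conflict-free group whenever that strictly increases her
    utility (group size minus one).  Returns the number of moves until
    the partition is stable, together with the final groups."""
    groups = [{v} for v in range(n)]          # start from singleton groups
    conflict = {frozenset(e) for e in conflict_edges}
    moves = 0
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cur = next(g for g in groups if v in g)
            for g in groups:
                if g is cur or any(frozenset((v, u)) in conflict for u in g):
                    continue                  # same group, or a conflict
                if len(g) + 1 > len(cur):     # strictly more friends there
                    cur.discard(v)
                    g.add(v)
                    moves += 1
                    improved = True
                    break
        groups = [g for g in groups if g]     # drop emptied groups
    return moves, groups

# Without conflicts, everyone ends up in a single group.
moves, groups = better_response_rounds([], 4)
print(moves, [sorted(g) for g in groups])  # 3 [[0, 1, 2, 3]]

# A conflict between users 0 and 1 keeps them in different groups.
m2, g2 = better_response_rounds([(0, 1)], 3)
print(sorted(len(g) for g in g2))  # [1, 2]
```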
A Comparative Study of Neural Network Compression
There has recently been an increasing desire to evaluate neural networks locally on computationally-limited devices in order to exploit their recent effectiveness for several applications; this effectiveness has nevertheless come together with a considerable increase in the size of modern neural networks, which constitutes a major downside in several of the aforementioned computationally-limited settings. There has thus been a demand for compression techniques for neural networks. Several proposals in this direction have been made, which famously include hashing-based methods and pruning-based ones. However, the evaluation of the efficacy of these techniques has so far been heterogeneous, with no clear evidence in favor of any of them over the others. In , we address this latter issue by providing a comparative study. While most previous studies test the capability of a technique in reducing the number of parameters of state-of-the-art networks, we follow  in evaluating their performance on basic architectures on the MNIST dataset and variants of it, which allows for a clearer analysis of some aspects of their behavior. To the best of our knowledge, we are the first to directly compare famous approaches such as HashedNet, Optimal Brain Damage (OBD), and magnitude-based pruning with L1 and L2 regularization, both against each other and against equivalent-size feed-forward neural networks, for both simple (fully-connected) and structural (convolutional) architectures. Rather surprisingly, our experiments show that (iterative) pruning-based methods are substantially better than the HashedNet architecture, whose compression does not appear advantageous compared to a carefully chosen convolutional network. We also show that, when the compression level is high, the famous OBD pruning heuristic deteriorates to the point of being less efficient than simple magnitude-based techniques.
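Magnitude-based pruning, the simple baseline that turns out to be competitive, can be sketched in a few lines. This shows a single pruning step on a toy weight matrix (in iterative pruning, this step alternates with retraining; all names here are ours).

```python
def magnitude_prune(weights, sparsity):
    """Magnitude-based pruning: zero out the `sparsity` fraction of the
    weights with smallest absolute value, keeping the rest unchanged."""
    flat = [(abs(w), i, j)
            for i, row in enumerate(weights) for j, w in enumerate(row)]
    flat.sort()                                 # smallest magnitudes first
    pruned = [row[:] for row in weights]
    for _, i, j in flat[:int(len(flat) * sparsity)]:
        pruned[i][j] = 0.0
    return pruned

# Pruning half of a 2x3 weight matrix removes the three smallest entries.
W = [[0.9, -0.05, 0.3], [-0.01, 0.7, -0.2]]
print(magnitude_prune(W, 0.5))
# [[0.9, 0.0, 0.3], [0.0, 0.7, 0.0]]
```

Despite its simplicity, this is exactly the kind of baseline that, applied iteratively, outperforms more elaborate schemes at high compression levels in the study above.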