GANG focuses on algorithm design for large-scale networks using structural properties of these networks. Application domains include the development of optimized protocols for large dynamic networks such as mobile networks or overlay networks over the Internet. This includes, for instance, peer-to-peer applications, or the navigability of social networks. GANG tools come from recent advances in the field of graph algorithms, in both centralized and distributed settings. In particular, this includes graph decomposition and geometric properties (such as low doubling dimension, low-dimension embedding, etc.).

Today, the management of large networks, the Internet being the reference, is best effort. However, the demand for mobility (ad hoc networks, wireless connectivity, etc.) and for dynamicity (node churn, fault tolerance, etc.) is increasing. In this distributed setting, it becomes necessary to design a new generation of algorithms and protocols to face the challenge of large-scale mobility and dynamicity. In the meantime, recent and sophisticated theoretical results have emerged, offering interesting new tracks for managing large networks. These results concern centralized and decentralized algorithms for solving key problems in communication networks, including routing, but also information retrieval, localization, and load balancing. They are mainly based on structural properties observed in most real networks: approximate topology with low-dimension metric spaces, low treewidth, low doubling dimension, graph minor freeness, etc. In addition, graph decomposition techniques have recently progressed. The scientific community now has tools for optimizing network management. First striking results include designing overlay networks for peer-to-peer systems and understanding the navigability of large social networks.

We focus on two approaches for designing algorithms for large graphs: decomposing the graph and relying on simple graph traversals.

We study new decomposition schemes such as 2-join, skew partitions, and other partition problems. These graph decompositions appeared in structural graph theory and are the basis of well-known results such as the Strong Perfect Graph Theorem. Efficient algorithms for these decompositions are still lacking. We aim at designing algorithms working in

We study multi-sweep graph searches in more depth. In this setting, each graph search only yields a total ordering of the vertices, which can then be used by subsequent searches. This technique can be applied to huge graphs and does not need extra memory. We have already obtained preliminary results in this direction, and many well-known graph algorithms can be cast in this framework. The idea behind this approach is that each sweep discovers some structure of the graph. At the end of the process, either we have found the underlying structure (for example, an interval representation for an interval graph), or an approximation of it (for example, in hard discrete optimization problems). We envision applications to exact computation of centers in huge graphs, to underlying combinatorial optimization problems, but also to networks arising in biology.
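As a minimal illustration of the multi-sweep idea (a sketch, not our actual algorithms: the quadratic partition refinement below is written for clarity, and linear-time implementations exist), a LexBFS sweep can take the ordering produced by a previous sweep as its tie-breaking rule, the so-called LexBFS+ rule:

```python
def lex_bfs(adj, order=None):
    """Lexicographic BFS over adjacency sets. If `order` (a previous
    sweep's ordering) is given, ties are broken by preferring the vertex
    appearing latest in `order` (the LexBFS+ rule)."""
    vertices = list(adj)
    if order is None:
        order = vertices
    pos = {v: i for i, v in enumerate(order)}
    # One ordered partition class; highest tie-break priority first.
    classes = [sorted(vertices, key=lambda v: -pos[v])]
    result = []
    while classes:
        # Visit the first vertex of the first class.
        v = classes[0].pop(0)
        if not classes[0]:
            classes.pop(0)
        result.append(v)
        # Refine: within each class, neighbors of v move ahead of non-neighbors.
        refined = []
        for cls in classes:
            neighbors = [u for u in cls if u in adj[v]]
            non_neighbors = [u for u in cls if u not in adj[v]]
            if neighbors:
                refined.append(neighbors)
            if non_neighbors:
                refined.append(non_neighbors)
        classes = refined
    return result
```

A second sweep seeded with the first ordering starts from the last vertex visited, which is how multi-sweep recognition algorithms (e.g. for interval graphs) progressively uncover structure.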

In graph exploration, a mobile agent is expected to regularly visit all the nodes of an unknown network, discovering them as quickly as possible. Our research focuses on the design and analysis of agent-based algorithms for exploration-type problems which operate efficiently in a dynamic network environment and satisfy imposed constraints on local computational resources, performance, and resilience. Our recent contributions in this area concern the design of fast deterministic algorithms for teams of agents operating in parallel in a graph, with limited or no persistent state information available at nodes. We plan further studies to better understand the impact of memory constraints and of the availability of true randomness on the efficiency of the graph exploration process.

The distributed computing community can be viewed as the union of two sub-communities, and this is true even within our team. Even though they are not completely disjoint, they are disjoint enough not to leverage each other's results. At a high level, one is mostly interested in timing issues (clock drifts, link delays, crashes, etc.) while the other is mostly interested in spatial issues (network structure, memory requirements, etc.). Indeed, one sub-community mostly focuses on the combined impact of asynchronism and faults on distributed computation, while the other addresses the impact of network structural properties on distributed computation. Both communities address various forms of computational complexity through the analysis of different concepts: e.g., failure detectors and the wait-free hierarchy for the former, and compact labeling schemes and computing with advice for the latter.

We have the ambitious project of reconciling the two communities by focusing on the same class of problems, the yes/no-problems, and establishing the scientific foundations for building up a consistent theory of computability and complexity for distributed computing. The main question addressed is therefore: is the absence of globally coherent computational complexity theories covering more than fragments of distributed computing inherent to the field?

One issue is obviously the type of problems located at the core of distributed computing. Tasks like consensus, leader election, and broadcasting are of a very different nature: they are not *yes-no* problems, nor are they minimization problems. Coloring and Minimum Spanning Tree are optimization problems, but we are often more interested in constructing an optimal solution than in verifying the correctness of a given solution. Still, it makes full sense to analyze the *yes-no* problems corresponding to checking the validity of the output of tasks.

Another issue is the power of individual computation. The FLP impossibility result, as well as Linial's lower bound, holds independently of the individual computational power of the involved computing entities. For instance, the individual power of solving NP-hard problems in constant time would not help overcome these limits, which are inherent to the fact that computation is distributed.

A third issue is the abundance of models for distributed computing frameworks, from shared memory to message passing, spanning all kinds of specific network structures (complete graphs, unit-disk graphs, etc.) and timing constraints (from complete synchronism to full asynchronism). There are however models, typically the wait-free model and the LOCAL model, which, though they do not claim to reflect real distributed computing systems accurately, enable focusing on some core issues.

Our ongoing research program aims to carry many important notions of Distributed Computing into a *standard* computational complexity theory.

Based on our scientific foundations in both graph algorithms and distributed algorithms, we plan to analyze the behavior of various networks, such as the future Internet, social networks, and overlay networks resulting from distributed applications or online social networks.

One of the key aspects of networks resides in the dissemination of information among nodes. We aim at analyzing various procedures of information propagation, from dedicated algorithms to simple distributed schemes such as flooding. We also consider various models, for example where noise can alter information as it propagates, or where the memory of nodes is limited.

We explore new routing paradigms, such as greedy routing in social networks. We are also interested in content-centric networking, where routing is based on content names rather than content addresses. One of our targets is multiple-path routing: how can one design forwarding tables providing multiple disjoint paths to a destination?
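As a toy illustration of the disjoint-path question (a sketch under simplified assumptions, not a forwarding-table design), the maximum number of edge-disjoint paths between two nodes equals the unit-capacity max-flow value, by Menger's theorem, computable by repeated BFS augmentation:

```python
from collections import deque

def max_edge_disjoint_paths(adj, s, t):
    """Count edge-disjoint s-t paths in a directed graph given as
    adjacency sets: unit-capacity max-flow by repeated BFS augmentation."""
    residual = {v: set(adj[v]) for v in adj}
    count = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            v = queue.popleft()
            for u in residual[v]:
                if u not in parent:
                    parent[u] = v
                    queue.append(u)
        if t not in parent:
            return count
        # Flip the edges of the augmenting path (residual reversal).
        v = t
        while v != s:
            u = parent[v]
            residual[u].remove(v)
            residual[v].add(u)
            v = u
        count += 1
```

The count certifies how many fully disjoint routes a forwarding scheme could in principle offer between the two endpoints.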

Based on our past experience in peer-to-peer application design, we would like to broaden the spectrum of distributed applications where new efficient algorithms and analysis can be developed. We especially target online social networks, viewing them as collaborative tools for exchanging information. A basic question resides in making the right connections for gathering filtered and accurate information with sufficient coverage.

As the forwarding tables of networks grow and are sometimes manually modified, the problem of verifying forwarding information becomes critical and has recently gained interest. Some problems that arise in network verification, such as loop detection, may be naturally encoded as Boolean satisfiability problems. Besides the theoretical interest of this encoding in complexity proofs, it also has practical value for solving these problems by taking advantage of the many efficient satisfiability solvers. Indeed, SAT solvers have proved to be very efficient at solving problems coming from various areas (circuit verification, dependencies and conflicts in software distributions, ...) once encoded in Conjunctive Normal Form. To test an approach using SAT solvers in network verification, one needs to collect data sets from real networks and to develop good models for generating realistic networks. The encoding technique and the solvers themselves need to be adapted to this kind of problem. All this represents a rich experimental field for future research.
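For the special case of per-destination forwarding tables, where each router stores a single next hop, loop detection does not even need a SAT solver: the forwarding relation is a functional graph and cycles can be found by direct traversal (a minimal sketch with hypothetical names; SAT encodings become valuable for richer rule sets, e.g. wildcard rules):

```python
def has_forwarding_loop(next_hop):
    """Detect a loop in a per-destination forwarding table.
    `next_hop` maps each router to its next hop toward the destination
    (None once the destination is reached). Returns True iff some
    packet would cycle forever."""
    state = {}  # 0 = on the current walk, 1 = known loop-free
    for start in next_hop:
        path = []
        v = start
        while v is not None and v not in state:
            state[v] = 0
            path.append(v)
            v = next_hop.get(v)
        if v is not None and state[v] == 0:
            return True  # walked back into the current path: a cycle
        for u in path:  # everything on this walk drains safely
            state[u] = 1
    return False
```

Each router is visited once overall, so the check is linear in the table size.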

Finally, we are interested in analyzing the structural properties of practical networks. This can include diameter computation or the ranking of nodes. As we mostly consider large networks, we are often interested in efficient heuristics. Ideally, we target heuristics that give an exact answer, even if fast computation time is not guaranteed for all networks. We have already designed such heuristics for diameter computation; understanding the structural properties that enable fast computation time in practice is still an open question.
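One classical building block for such heuristics is the double sweep (a simplified sketch, not our exact-diameter heuristics, which iterate refinements of this idea): a first BFS finds a far vertex, and a second BFS from it returns its eccentricity, a lower bound on the diameter that is often exact in practice.

```python
from collections import deque

def bfs_far(adj, src):
    """BFS from src; return (a farthest vertex, eccentricity of src)."""
    dist = {src: 0}
    queue = deque([src])
    far, ecc = src, 0
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                if dist[u] > ecc:
                    far, ecc = u, dist[u]
                queue.append(u)
    return far, ecc

def double_sweep(adj, src):
    """Double-sweep heuristic: BFS from src finds a far vertex a;
    a second BFS from a returns ecc(a), a diameter lower bound."""
    a, _ = bfs_far(adj, src)
    _, lower_bound = bfs_far(adj, a)
    return lower_bound
```

Two BFS runs suffice, so the heuristic scales to graphs where computing all eccentricities is out of reach.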

Application domains include evaluating Internet performances, the design of new peer-to-peer applications, enabling large scale ad hoc networks and mapping the web.

Measuring and modeling Internet metrics such as latencies and bandwidth provides tools for optimizing Internet applications. This especially concerns large-scale applications such as web site mirroring and peer-to-peer applications.

Peer-to-peer protocols are based on an all-peers-equal paradigm that allows the design of highly reliable and scalable applications. Beyond file sharing, peer-to-peer solutions could take over web content dissemination resistant to high demand bursts, or mobility management. Envisioned peer-to-peer applications include video on demand, streaming, and the exchange of classified ads.

WiFi networks have entered our everyday life. However, enabling them at large scale is still a challenge. Algorithmic breakthroughs in large ad hoc networks would allow their use in the fast and economical deployment of new radio communication systems.

The main application of studying the web graph structure consists in ranking pages. Enabling site-level indexing and ranking is a possible application of such studies.

Pierre Fraigniaud has received the Prize for Innovation in Distributed Computing 2014.

In this work, we first considered the scenario when each mobile agent knows the
map of the network, as well as its own initial position. We established a
connection between the number of rounds required for collision-free exploration
and the degree of the minimum-degree spanning tree of the graph. We provided
tight (up to a constant factor) lower and upper bounds on the collision-free
exploration time in general graphs, and the exact value of this parameter for
trees. For our second scenario, in which the network is unknown to the agents,
we proposed collision-free exploration strategies running in

A rainbow matching for (not necessarily distinct) sets

Normal cone and subdifferential have been generalized through various continuous
functions; we focus on a non-separable

A non-blocking implementation of a concurrent object is an implementation that
does not prevent concurrent accesses to the internal representation of the
object, while guaranteeing the deadlock-freedom progress condition without using
locks. Considering a failure-free context, G. Taubenfeld introduced (DISC 2013)
a simple modular approach, captured under a new problem called the *fair synchronization* problem, to transform a non-blocking implementation into
a starvation-free implementation satisfying a strong fairness requirement.

This approach is illustrated with the implementation of a concurrent stack. The spirit of the paper is mainly pedagogical: its aim is not to introduce new concepts or algorithms, but to show that a powerful, simple, and modular transformation can provide concurrent objects with strong fairness properties.

Under the condition that *correct* synchronization processes take
sufficiently many steps, they provide the computation processes with enough
*advice* to solve the given task wait-free: every computation process
outputs in a finite number of its own steps, regardless of the behavior of other
computation processes.

Every task can thus be characterized by the *weakest* failure detector that
allows for solving it, and we show that every such failure detector captures a
form of set agreement. We then obtain a complete classification of tasks,
including ones that evaded comprehensible characterization so far, such as
renaming or weak symmetry breaking.

Considering the case of homonymous processes (some processes may share the same
identifier) on a ring, we give a necessary and sufficient condition on the
number of identifiers to enable leader election. We prove that if

In a series of papers, we analyze distributed decision in the context of various models for distributed computing.

We consider a general framework for voting systems with arbitrary types of ballots, such as orders of preference, grades, etc. We investigate their manipulability: in which states of the population may a coalition of electors, by casting insincere ballots, secure a result that is better from their point of view?

We show that, for a large class of voting systems, a simple modification allows
us to reduce manipulability. This modification is *Condorcification*: when
there is a Condorcet winner, designate her; otherwise, use the original rule.
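A minimal sketch of the Condorcification transform (the plurality fallback below is an illustrative stand-in, not one of the systems studied):

```python
def condorcet_winner(ballots):
    """Return the Condorcet winner of strict preference orders, or None.
    A ballot lists candidates from most to least preferred."""
    candidates = set(ballots[0])
    for cand in candidates:
        # cand wins iff it beats every other candidate in pairwise majority.
        if all(2 * sum(b.index(cand) < b.index(other) for b in ballots)
               > len(ballots) for other in candidates - {cand}):
            return cand
    return None

def condorcify(rule):
    """Condorcification: designate the Condorcet winner when one exists,
    otherwise apply the original voting rule unchanged."""
    def new_rule(ballots):
        winner = condorcet_winner(ballots)
        return winner if winner is not None else rule(ballots)
    return new_rule

def plurality(ballots):
    """Illustrative fallback rule: most first-place votes, ties by name."""
    firsts = [b[0] for b in ballots]
    return max(sorted(set(firsts)), key=firsts.count)
```

On profiles with a Condorcet winner, the transformed rule ignores the fallback entirely, which is where the manipulability reduction comes from.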

When electors are independent, for any non-ordinal voting system
(i.e. requiring information that is not included in the orders of preferences,
for example grades), we prove that there exists an ordinal voting system whose
manipulability rate is at most as high and which meets some other desirable
properties. Furthermore, this result is also true when voters are not
independent but the culture is *decomposable*, a weaker condition that we
define.

Combining both results, we conclude that when searching for a voting system whose manipulability is minimal (in a large class of systems), one can restrict to voting systems that are ordinal and meet the Condorcet criterion.

We consider the *rotor-router mechanism*, which provides a deterministic
alternative to the random walk in undirected graphs. In this model, a set of
walks traverses the graph in parallel, with each node propagating the incoming
walks along its incident edges in a fixed round-robin order. We analyze the
*cover time* of such a system, i.e., the number of steps after which each node
has been visited by at least one walk, regardless of the starting locations of
the walks. In the case of
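A single rotor-router walk can be simulated in a few lines (a sketch for one walk; the analysis above concerns multiple parallel walks): each node fixes a cyclic order on its incident edges and forwards each visit along the next edge in that order.

```python
def rotor_router_cover_time(adj, start):
    """Simulate one rotor-router walk until every node has been visited.
    `adj` maps each node to the list of its neighbors, whose order fixes
    the node's round-robin rotation. Returns the number of steps taken."""
    pointer = {v: 0 for v in adj}   # each node's current rotor position
    visited = {start}
    v, steps = start, 0
    while len(visited) < len(adj):
        nxt = adj[v][pointer[v]]            # leave along the current rotor edge
        pointer[v] = (pointer[v] + 1) % len(adj[v])  # advance the rotor
        v = nxt
        visited.add(v)
        steps += 1
    return steps
```

Unlike a random walk, the trajectory is fully determined by the initial rotor positions, which is what makes worst-case cover-time bounds meaningful.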

We investigate how to efficiently retrieve large portions of alive pages from an old crawl using orderings we call LiveRanks. Our work establishes the possibility of efficiently recovering a significant portion of the alive pages of an old snapshot, and advocates the use of an adaptive sample-based PageRank for obtaining an efficient LiveRank. Additionally, the field of application is not limited to Web graphs: the approach can be straightforwardly adapted to any online data with similar linkage enabling crawling, such as P2P networks or online social networks.

Today's Internet usage is mostly centered around location-independent services. Because the Internet architecture is host-centric, content or service requests still have to be translated into locations, i.e., the IP addresses of their hosts. This translation is realized through different technologies, e.g. DNS and HTTP redirection, which are currently implemented at the application layer. Information-Centric Networking (ICN) proposes to evolve the current Internet infrastructure by extending the networking layer with name-based primitives.

Gang has a strong collaboration with Alcatel-Lucent. We notably collaborate with Fabien Mathieu and Diego Perino, former members of Gang who have joined Alcatel-Lucent. A Cifre grant funds the PhD thesis of The-Dang Huynh on ranking techniques and their application to social networks. An ADR (joint research action) is dedicated to content-centric networks and forwarding information verification; the PhD thesis of Leonardo Linguaglossa is funded by this contract. We also collaborate with Ludovic Noirie on voting systems.

This collaboration is developed inside the Alcatel-Lucent and Inria joint research lab.

C. Delporte and H. Fauconnier lead this project, managed by University Paris Diderot, which funds one Ph.D.

Distributed computation keeps raising new questions concerning computability and complexity. For instance, as far as fault-tolerant distributed computing is concerned, impossibility results do not depend on the computational power of the processes, demonstrating a form of undecidability significantly different from the one encountered in sequential computing. In the same way, as far as network computing is concerned, the impossibility of solving certain tasks locally does not depend on the computational power of the individual processes.

The main goal of DISPLEXITY (for DIStributed computing: computability and ComPLEXITY) is to establish the scientific foundations for building up a consistent theory of computability and complexity for distributed computing.

One difficulty to be faced by DISPLEXITY is to reconcile the different sub-communities corresponding to a variety of classes of distributed computing models. The current distributed computing community may indeed be viewed as two not necessarily disjoint sub-communities, one focusing on the impact of temporal issues, the other on the impact of spatial issues. The different working frameworks tackled by these two communities induce different objectives: computability is the main concern of the former, while complexity is the main concern of the latter.

Within DISPLEXITY, the reconciliation between the two communities will be achieved by focusing on the same class of problems, those for which the distributed outputs are interpreted as a single binary output: yes or no. Those are known as the yes/no-problems. The strength of DISPLEXITY is to gather specialists of the two main streams of distributed computing. Hence, DISPLEXITY will take advantage of the experience gained over the last decade by both communities concerning the challenges to be faced when building up a complexity theory encompassing more than a fragment of the field.

In order to reach its objectives, DISPLEXITY aims at achieving the following tasks:

Formalizing yes/no-problems (decision problems) in the context of distributed computing. Such problems are expected to play an analogous role in the field of distributed computing as that played by decision problems in the context of sequential computing.

Revisiting the various explicit (e.g., failure detectors) or implicit (e.g., a priori information) notions of oracles used in the context of distributed computing, so as to express them in terms of decidability/complexity classes based on oracles.

Identifying the impact of non-determinism on complexity in distributed computing. In particular, DISPLEXITY aims at a better understanding of the apparent lack of impact of non-determinism in the context of fault-tolerant computing, to be contrasted with the apparent huge impact of non-determinism in the context of network computing. Also, it is foreseen that non-determinism will enable the comparison of complexity classes defined in the context of fault-tolerance with complexity classes defined in the context of network computing.

Last but not least, DISPLEXITY will focus on new computational paradigms and frameworks, including, but not limited to, distributed quantum computing and algorithmic game theory (e.g., network formation games).

The project will have to face and solve a number of challenging problems. Hence, we have built the DISPLEXITY consortium so as to coordinate the efforts of those worldwide leaders in Distributed Computing who are working in our country. A successful execution of the project will result in a tremendous increase in the current knowledge and understanding of decentralized computing and place us in a unique position in the field.

Gang participates in the LINCS, a research centre co-founded by Inria, Institut Mines-Télécom, UPMC and Alcatel-Lucent Bell Labs, dedicated to research and innovation in the domains of future information and communication networks, systems and services. Gang contributes work on online social networks, content-centric networking and forwarding information verification.

Carole Delporte and Hugues Fauconnier collaborate with Sam Toueg (Univ. of Toronto) and Rachid Guerraoui (EPFL) on distributed computing and synchronization.

Carole Delporte, Hugues Fauconnier and Pierre Fraigniaud collaborate on distributed computing with Eli Gafni (UCLA) and Sergio Rajsbaum (Univ. of Mexico).

Pierre Fraigniaud collaborates with Zvi Lotker (Ben-Gurion Univ.) on social networks.

Amos Korman collaborates with Ofer Feinerman (Weizmann Institute) on the application of distributed algorithm analysis to ant behaviors.

Eli Gafni, UCLA, June - July 2014

Sergio Rajsbaum, Univ. of Mexico, June - July 2014

Zvi Lotker, Ben-Gurion Univ., September 2014 - July 2015 (Junior chair of the FSMP)

Amos Korman has co-chaired and co-organized the BDA 2014 workshop
http://

Pierre Fraigniaud has chaired the C track of the 41st International Colloquium on Automata, Languages, and Programming (ICALP 2014).

Michel Habib was member of the program committee of the 39th International Symposium on Mathematical Foundations of Computer Science (MFCS 2014).

Pierre Fraigniaud was keynote speaker at the 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2014), Paderborn, Germany, Sep 28 - Oct 1, 2014.

Adrian Kosowski gave a tutorial at the 5th Polish Combinatorial Conference, Bedlewo, September 2014.

Adrian Kosowski was invited speaker at Algotel 2014 - 16èmes Rencontres Francophones pour les Aspects Algorithmiques des Télécommunications, Ile de Ré, June 2014.

Laurent Viennot was invited speaker at the SMBE Satellite Meeting on Reticulated Microbial Evolution, April 27-30, 2014, Kiel, Germany.

Michel Habib gave a tutorial at Ecole Rescom, France, May 2014.

Pierre Fraigniaud is member of the editorial boards of Distributed Computing (DC), Theory of Computing Systems (TOCS), Fundamenta Informaticae (FI) and Journal of Interconnection Networks (JOIN).

Master: Carole Delporte and Hugues Fauconnier, Algorithmique distribuée avec mémoire partagée, 6h, M2, Université Paris Diderot

Master: Hugues Fauconnier, Cours programmation répartie, 33h, M2, Univ. Paris Diderot

Master: Carole Delporte, Cours et TP Protocoles des services internet, 44h, M2, Univ. Paris Diderot

Master: Carole Delporte, Cours Algorithme réparti, 33h, M2, Univ. Paris Diderot

Master: Carole Delporte and Hugues Fauconnier, Protocoles Réseaux, 72h, M1, Université Paris Diderot

Licence: Hugues Fauconnier, Programmation objet et interfaces graphiques, 48h, L2-L3, EIDD

Licence: Carole Delporte et Hugues Fauconnier, Sécurité, 36h, L3, Univ. Paris Diderot

Licence : Boufkhad Yacine, Algorithmique et Informatique, 132h, L1, IUT de l'Université Paris Diderot

Licence : Boufkhad Yacine, Programmation Orientée Objet, 60h, L2, IUT de l'Université Paris Diderot

Master: Adrian Kosowski, Communication and Routing, 4.5h, M1, ENSEIRB-MATMECA

Master: Adrian Kosowski, Randomization in Computer Science: Games, Networks, Epidemic and Evolutionary Algorithms, 24h, M1, École Polytechnique

Master: Adrian Kosowski, Distributed Computing with Mobile Agents: Exploration, Rendezvous, and related problems, 12h, M2, IX Summer School on Discrete Mathematics in South America, Valparaiso, Chile

Licence : Fabien de Montgolfier, Introduction à la programmation, 26h, L1, Univ Paris Diderot

Licence : Fabien de Montgolfier, Programmation avancée (bio-informatique), 26h, L3, Univ. Paris Diderot

Master : Fabien de Montgolfier, Algorithmique avancée (bio-informatique), 26h, M1, Univ Paris Diderot

Licence : Fabien de Montgolfier, Systèmes et Réseaux, 52h, L3, Ecole d'Ingénieurs Denis Diderot

Master : Laurent Viennot, Système, réseau et Internet, 15h, M1, Univ. Paris Diderot

Licence : Michel Habib, Algorithmique, 45h, L, ENS Cachan

Master : Michel Habib, Algorithmique avancée, 24h, M1, Univ. Paris Diderot

Master : Michel Habib, Mobilité, 33h, M2, Univ. Paris Diderot

Master : Michel Habib, Méthodes et algorithmes pour l'accès à l'information numérique, 16h, M2, Univ. Paris Diderot

Master : Michel Habib, Algorithmique de graphes, 12h, M2, Univ. Paris Diderot

Licence : Pierre Charbit, Introduction a la Programmation, 30h, L1, Université Paris Diderot, France

Licence : Pierre Charbit, Automates finis, 52h, L2, Université Paris Diderot, France

Licence : Pierre Charbit, Types de Données et Objet, 52h, L1, Université Paris Diderot, France

Master : Pierre Charbit, Programmation, 60h, M2Pro PISE, Université Paris Diderot, France

Master : Pierre Charbit, Algorithmique de Graphes, 18h, M2 MPRI, Université Paris Diderot, France

PhD: Jérémie Dusart, Parcours de graphes, applications aux graphes de cocomparabilité, Université Paris Diderot, defended June 2014, supervised by Michel Habib.

PhD: Antoine Mamcarz, Décompositions de trigraphes et parcours de graphes, Université Paris Diderot, defended June 30, 2014, supervised by Michel Habib.

PhD: Dominik Pajak, Algorithms for Deterministic Parallel Graph Exploration, Université de Bordeaux, defended June 13, 2014, supervised by Adrian Kosowski and Ralf Klasing.

PhD: The-Dang Huynh, Extensions de PageRank et Applications aux Réseaux Sociaux, Université Pierre et Marie Curie, since 2012, supervised by Fabien Mathieu, Dohy Hong and Laurent Viennot.

PhD: Leonardo Linguaglossa, Design of algorithms and protocols to support ICN functionalities in high speed routers, Université Paris Diderot, since 2013, supervised by Fabien Mathieu, Diego Perino and Laurent Viennot.

PhD : Hugues Fauconnier was member of the jury of Peva Blanchard, Synchronization and Fault-tolerance in Distributed Algorithms, Univ. Paris XI, September 24th

PhD : Laurent Viennot was reviewer of the thesis of Pierre-Alain Jachiet, Étude de l'évolution combinatoire des gènes par l'analyse de réseaux de similarité de séquence, Université Pierre et Marie Curie, July 2014, supervised by Eric Bapteste and Philippe Lopez

PhD : Laurent Viennot was reviewer of the thesis of Arnaud Jégou, Harnessing the power of implicit and explicit social networks through decentralization, Université de Rennes 1, September 2014, supervised by Anne-Marie Kermarrec and Davide Frey

PhD : Laurent Viennot was reviewer of the thesis of Pierre Halftermeyer, Connexité dans les réseaux et schémas d'étiquetage compact d'urgence, Université de Bordeaux, September 2014, supervised by Bruno Courcelle and Cyril Gavoille

PhD : Laurent Viennot was reviewer of the thesis of Aurélien Lancin, Étude de réseaux complexes et de leurs propriétés pour l'optimisation de modèles de routage, Université de Nice – Sophia Antipolis, December 2014, supervised by David Coudert

PhD : Fabien de Montgolfier was member of the jury of Pierre Clairet, Approche algorithmique pour l'amélioration des performances du système de détection d'intrusions PIGA, Université d'Orléans, defended on June 24th, 2014, supervised by Pascal Berthomé.

PhD : Fabien de Montgolfier was member of the jury of Antoine Mamcarz, Décompositions de trigraphes et parcours de graphes, Université Paris Diderot, defended on June 30th, 2014, supervised by Michel Habib.

Laurent Viennot has written an article on the history of telecommunication networks, with a special focus on the phone network and the Internet.

Laurent Viennot ran a weekly computer-science-unplugged workshop at the Pouchet primary school in Paris at CM1 level (15h during 2014). The topics covered included the game of Nim, Euler paths in graphs, and error-correcting codes.