GANG focuses on algorithm design for large-scale networks using structural properties of these networks. Application domains include the development of optimized protocols for large dynamic networks such as mobile networks or overlay networks over the Internet. This includes, for instance, peer-to-peer applications, or the navigability of social networks. GANG tools come from recent advances in the field of graph algorithms, both in centralized and distributed settings. In particular, this includes graph decomposition and geometric properties (such as low doubling dimension, low-dimension embedding, etc.). Today, the management of large networks, the Internet being the reference, is best effort. However, the demand for mobility (ad hoc networks, wireless connectivity, etc.) and for dynamicity (node churn, fault tolerance, etc.) is increasing. In this distributed setting, it becomes necessary to design a new generation of algorithms and protocols to face the challenge of large-scale mobility and dynamicity. In the meantime, recent and sophisticated theoretical results have emerged, offering interesting new tracks for managing large networks. These results concern centralized and decentralized algorithms for solving key problems in communication networks, including routing, but also information retrieval, localization, and load balancing. They are mainly based on structural properties observed in most real networks: approximate topology with low-dimension metric spaces, low treewidth, low doubling dimension, graph minor freeness, etc. In addition, graph decomposition techniques have recently progressed. The scientific community now has tools for optimizing network management. First striking results include the design of overlay networks for peer-to-peer systems and the understanding of the navigability of large social networks.

We focus on two approaches for designing algorithms for large graphs: decomposing the graph and relying on simple graph traversals.

We study new decomposition schemes such as 2-join, skew partitions and other
partition problems. These graph decompositions arose in structural graph
theory and are the basis of well-known theorems such as the Perfect Graph
Theorem. Efficient algorithms are still lacking for these decompositions. We
aim at designing algorithms working in

We study multi-sweep graph searches in more depth. In this paradigm, a graph search yields only a total ordering of the vertices, which can be used by subsequent graph searches. This technique can be used on huge graphs and does not need extra memory. We have already obtained preliminary results in this direction, and many well-known graph algorithms can be cast in this framework. The idea behind this approach is that each sweep discovers some structure of the graph. At the end of the process, either we have found the underlying structure (for example, an interval representation for an interval graph) or an approximation of it (for example, in hard discrete optimization problems). We envision applications to exact computations of centers in huge graphs, to underlying combinatorial optimization problems, but also to networks arising in biology.
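As a concrete illustration of the framework, here is a minimal sketch of one multi-sweep building block, Lexicographic BFS (LBFS), where the ordering produced by a previous sweep breaks ties (the classical "+" rule). This quadratic-time version is for exposition only; all names are ours.

```python
def lbfs(adj, tie_break=None):
    """Lexicographic BFS: returns a total ordering of the vertices.

    adj: dict mapping each vertex to a list of neighbors.
    tie_break: ordering produced by a previous sweep; among vertices
    with equal labels, the one appearing last in tie_break wins
    (the "+" rule used by multi-sweep algorithms such as LBFS+).
    """
    n = len(adj)
    rank = {v: i for i, v in enumerate(tie_break)} if tie_break else {}
    label = {v: [] for v in adj}   # lexicographic label of each vertex
    order, visited = [], set()
    while len(order) < n:
        # pick an unvisited vertex with the lexicographically largest label
        best = max((v for v in adj if v not in visited),
                   key=lambda v: (label[v], rank.get(v, 0)))
        visited.add(best)
        order.append(best)
        for w in adj[best]:        # stamp the unvisited neighbors
            if w not in visited:
                label[w].append(n - len(order))  # decreasing stamps
    return order
```

A second sweep, `lbfs(adj, tie_break=first_sweep)`, reuses only the ordering produced by the first sweep, which is what makes the approach memory-light on huge graphs.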

In graph exploration, a mobile agent is expected to regularly visit all the nodes of an unknown network, discovering them as quickly as possible. Our research focuses on the design and analysis of agent-based algorithms for exploration-type problems which operate efficiently in a dynamic network environment and satisfy imposed constraints on local computational resources, performance, and resilience. Our recent contributions in this area concern the design of fast deterministic algorithms for teams of agents operating in parallel in a graph, with limited or no persistent state information available at nodes. We plan further studies to better understand the impact of memory constraints and of the availability of true randomness on the efficiency of the graph exploration process.

The distributed community can be viewed as the union of two
sub-communities. This is true even in our team. Even though they are not
completely disjoint, they are disjoint enough not to leverage each other's
results. At a high level, one is mostly interested in timing issues (clock
drifts, link delays, crashes, etc.) while the other one is mostly interested in
spatial issues (network structure, memory requirements, etc.). Indeed, one
sub-community is mostly focusing on the combined impact of asynchronism and
faults on distributed computation, while the other addresses the impact of
network structural properties on distributed computation. Both communities
address various forms of computational complexities, through the analysis of
different concepts. This includes, e.g., failure detectors and wait-free
hierarchy for the former community, and compact labeling schemes and computing
with advice for the latter community. We have the ambitious project to achieve
the reconciliation between the two communities by focusing on the same class of
problems, the yes/no-problems, and establishing the scientific foundations for
building up a consistent theory of computability and complexity for distributed
computing. The main question addressed is therefore: is the absence of globally
coherent computational complexity theories covering more than fragments of
distributed computing inherent to the field? One issue is obviously the types
of problems located at the core of distributed computing. Tasks like consensus,
leader election, and broadcasting are of a very different nature. They are not
yes/no-problems, nor are they minimization problems. Coloring and
Minimum Spanning Tree are optimization problems, but we are often more interested
in constructing an optimal solution than in verifying the correctness of a given
solution. Still, it makes full sense to analyze the yes/no-problems
corresponding to checking the validity of the output of tasks. Another issue is
the power of individual computation. The FLP impossibility result, as well as
Linial's lower bound, holds independently of the individual computational power
of the involved computing entities. For instance, the individual power of
solving NP-hard problems in constant time would not help overcoming these limits
which are inherent to the fact that computation is distributed. A third issue
is the abundance of models for distributed computing frameworks, from shared
memory to message passing, spanning all kinds of specific network structures
(complete graphs, unit-disk graphs, etc.) and/or timing constraints (from
complete synchronism to full asynchronism). There are however models, typically
the wait-free model and the LOCAL model, which, though they do not claim to
reflect accurately real distributed computing systems, enable focusing on some
core issues. Our ongoing research program aims to carry many important notions
of Distributed Computing into a *standard* computational complexity theory.

Building on our scientific foundations in both graph algorithms and distributed algorithms, we plan to analyze the behavior of various networks such as the future Internet, online social networks, and overlay networks resulting from distributed applications.

One of the key aspects of networks resides in the dissemination of information among the nodes. We aim at analyzing various procedures of information propagation, from dedicated algorithms to simple distributed schemes such as flooding. We also consider various models where, for example, noise can alter information as it propagates, or where the memory of nodes is limited.
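As an illustration, synchronous flooding, the simplest propagation scheme mentioned above, can be simulated in a few lines (a sketch under a synchronous-rounds assumption; names are ours):

```python
def flood(adj, source):
    """Synchronous flooding: each newly informed node forwards the
    information to all its neighbors in the next round.
    Returns {node: round at which it was informed}."""
    informed = {source: 0}
    frontier = [source]
    rnd = 0
    while frontier:
        rnd += 1
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in informed:
                    informed[v] = rnd
                    nxt.append(v)
        frontier = nxt
    return informed
```

With reliable links, the round at which a node is informed is exactly its distance to the source, so flooding completes in a number of rounds equal to the source's eccentricity.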

We explore new routing paradigms, such as greedy routing in social networks. We are also interested in content-centric networking, where routing is based on content names rather than content addresses. One of our targets is multiple-path routing: how to design forwarding tables providing multiple disjoint paths to a destination?

Based on our past experience of peer-to-peer application design, we would like to broaden the spectrum of distributed applications where new efficient algorithms and analyses can be developed. We especially target online social networks viewed as collaborative tools for exchanging information. A basic question is how to make the right connections for gathering filtered and accurate information with sufficient coverage.

As forwarding tables of networks grow and are sometimes manually modified, the problem of verifying forwarding information becomes critical and has recently gained interest. Some problems that arise in network verification, such as loop detection, may be naturally encoded as Boolean satisfiability problems. Besides the theoretical interest of this encoding in complexity proofs, it also has practical value for solving these problems by taking advantage of the many efficient satisfiability (SAT) solvers. Indeed, SAT solvers have proved very efficient in solving problems coming from various areas (circuit verification, dependencies and conflicts in software distributions, etc.) encoded in conjunctive normal form. To test an approach using SAT solvers in network verification, one needs to collect data sets from real networks and to develop good models for generating realistic networks. The encoding techniques and the solvers themselves need to be adapted to this kind of problem. All this represents a rich field for future experimental research.
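To make the encoding idea concrete, here is a minimal, hypothetical sketch: the question "does a packet injected at a node return to it within k hops of a deterministic next-hop table?" written as CNF clauses, with exhaustive enumeration standing in for a real SAT solver. For deterministic tables the question is easy to answer directly; the SAT view pays off with richer rule sets. All names are ours.

```python
from itertools import product

def loop_cnf(fwd, start, steps):
    """CNF asking: does a packet injected at `start` return to `start`
    within `steps` hops of the deterministic next-hop table `fwd`?
    Variable (v, t) means "the packet is at node v after t hops"."""
    nodes = list(fwd)
    var = {}
    for t in range(steps + 1):
        for v in nodes:
            var[v, t] = len(var) + 1
    clauses = [[var[start, 0]]]                 # initial position
    for t in range(steps + 1):                  # at most one position per step
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                clauses.append([-var[u, t], -var[v, t]])
    for t in range(steps):                      # follow the forwarding table
        for v in nodes:
            if fwd[v] is not None:              # None = packet delivered
                clauses.append([-var[v, t], var[fwd[v], t + 1]])
    for t in range(steps):                      # a position needs a predecessor
        for v in nodes:
            preds = [var[u, t] for u in nodes if fwd[u] == v]
            clauses.append([-var[v, t + 1]] + preds)
    clauses.append([var[start, t] for t in range(1, steps + 1)])  # revisit
    return var, clauses

def brute_sat(nvars, clauses):
    """Toy satisfiability check by exhaustion (a real SAT solver would
    be used in practice)."""
    for bits in product([False, True], repeat=nvars):
        val = (False,) + bits                   # literals are 1-indexed
        if all(any(val[l] if l > 0 else not val[-l] for l in cl)
               for cl in clauses):
            return True
    return False
```

A three-node forwarding ring yields a satisfiable formula (a loop exists), while a chain ending in delivery yields an unsatisfiable one.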

Finally, we are interested in analyzing the structural properties of practical networks. This can include diameter computation or the ranking of nodes. As we mostly consider large networks, we are often interested in efficient heuristics. Ideally, we target heuristics that give exact answers, although fast computation time is not guaranteed for all networks. We have already designed such heuristics for diameter computation; understanding the structural properties that enable fast computation time in practice is still an open question.
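In its simplest form, one such heuristic is the classical double-sweep lower bound on the diameter, which exact-diameter heuristics build upon. A minimal sketch (names ours), exact on trees and very often tight on practical graphs:

```python
from collections import deque

def bfs_far(adj, src):
    """BFS from src; returns (a farthest node, its distance)."""
    dist = {src: 0}
    queue = deque([src])
    far = src
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[far]:
                    far = v
                queue.append(v)
    return far, dist[far]

def double_sweep(adj, start):
    """Double-sweep heuristic: BFS to a farthest node u, then report
    the eccentricity of u, a lower bound on the diameter."""
    u, _ = bfs_far(adj, start)
    _, ecc = bfs_far(adj, u)
    return ecc
```

Two BFS traversals suffice, which is what makes the heuristic viable on very large graphs.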

Application domains include evaluating Internet performance, the design of new peer-to-peer applications, enabling large-scale networks, and developing tools for transportation networks.

Functional Description

Gang is developing software for big graph manipulation. A preliminary library offering diameter and skeleton computation is available online.

Contact: Laurent Viennot

URL: https://

A theoretical model to describe a series of successive graph searches is proposed in . We apply this model to deal with cocomparability graphs (i.e., complements of comparability graphs) in and in or . In this series of papers, we provide a general algorithmic framework for many optimization problems on cocomparability graphs, such as Minimum Path Cover, Maximum Independent Set, Maximal Interval Subgraph, etc.

We also provide a new very simple algorithm for the recognition of cocomparability graphs. This algorithm is also based on a series of successive graph searches in .

We mainly use the two well-known lexicographic graph searches, LBFS and LDFS, but not exclusively. In , we also introduced a new graph search, LocalMNS, which seems to behave nicely on cocomparability graphs.

From our recent research on diameter computations on graphs we also investigated some reductions between polynomial problems on graphs .

We also extend the well-known multi-sweep BFS to give a better polynomial-time approximation for the Maximum Eccentricity Shortest Path Problem, in relation to the k-Laminarity Problem .

A *clique-coloring* of a graph is a coloring of its vertices such that no maximal clique with at least two vertices is *monochromatic* (a vertex set is *monochromatic* if all vertices in the set receive the same color). The *clique-chromatic number* of a graph is the minimum number of colors in a clique-coloring.

A graph is *perfect* if, for all its induced subgraphs, the chromatic number equals the size of a largest clique.

Cographs are the graphs totally decomposable using series and parallel operations. In , we introduced an interesting generalization, namely the class of switch cographs. This is the class of graphs that are totally decomposable w.r.t. involution modular decomposition, a generalization of the modular decomposition of 2-structures, which has a unique linear-sized decomposition tree. We use this new decomposition tool to design three practical algorithms for the maximum cut, vertex cover and vertex separator problems. The complexity of these problems was previously unknown for this class of graphs.
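To illustrate the algorithmic use of such decomposition trees on the classical (non-switch) case, here is a sketch of a maximum independent set computation on a cotree; the tuple encoding of the tree is ours:

```python
def mis_cotree(node):
    """Maximum independent set size of a cograph given its cotree.
    node is ('leaf',) or ('union', children) or ('join', children).
    A disjoint union lets the children's solutions add up, while a
    join forces the independent set inside a single child."""
    if node[0] == 'leaf':
        return 1
    sizes = [mis_cotree(child) for child in node[1]]
    return sum(sizes) if node[0] == 'union' else max(sizes)
```

For the path on three vertices (the join of one vertex with two isolated vertices), this returns 2, the two endpoints.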

A fundamental question in the setting of anonymous graphs concerns the ability of nodes to spontaneously break symmetries, based on their local perception of the network. In contrast to previous work, which focuses on symmetry breaking under arbitrary port labelings, in we study the following design question: Given an anonymous graph

More formally, for an integer *truncated view*

We present such efficient port labelings for any graph

For any graph

For any graph

For any integers

The *rotor-router model*, also called the *Propp machine*, was first considered as
a deterministic alternative to the random walk.
The edges adjacent to each node are arranged in a fixed cyclic order, and a *port pointer* at each node indicates the edge along which the agent leaves the node on its next visit; the question of when the walk stabilizes into a repeated traversal of the graph is known as the *lock-in problem*.
In [Yanovski et al., Algorithmica 37(3), 165–186 (2003)], it was proved that,
independently of the initial
configuration of the rotor-router mechanism in
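The mechanism itself takes only a few lines to simulate; here is a single-agent sketch, with adjacency lists giving each node's fixed cyclic port order (names ours):

```python
def rotor_router_walk(adj, start, steps):
    """Single-agent rotor-router walk.  The agent always leaves a node
    through its current port; the port pointer then advances cyclically,
    so successive visits to a node use successive incident edges."""
    pointer = {v: 0 for v in adj}   # current port at each node
    pos, visited = start, {start}
    for _ in range(steps):
        neighbors = adj[pos]
        nxt = neighbors[pointer[pos]]
        pointer[pos] = (pointer[pos] + 1) % len(neighbors)
        pos = nxt
        visited.add(pos)
    return pos, visited
```

Unlike the random walk, the trajectory is fully determined by the initial pointers, which is precisely what makes lock-in and fault-robustness questions meaningful.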

We also investigate the robustness of rotor-router graph exploration
in the presence of faults in the pointers

We show that on the ring the rotor-router with

Finally, we study the limit behavior of the rotor-router system. We show that, once the rotor-router system has stabilized, all the nodes of the ring are always visited by some agent every

Locally finding a solution to symmetry-breaking tasks such as vertex-coloring, edge-coloring, maximal matching, maximal independent set, etc., is a long-standing challenge in distributed network computing. More recently, it has also become a challenge in the framework of centralized local computation. In , we introduce conflict coloring as a general symmetry-breaking task that includes all the aforementioned tasks as specific instantiations — conflict coloring includes all locally checkable labeling tasks from [Naor&Stockmeyer, STOC 1993]. Conflict coloring is characterized by two parameters

Adaptive renaming can be viewed as a coordination task involving a set of asynchronous agents, each aiming at grabbing a single resource out of a set of resources totally ordered by their desirability. Similarly, musical chairs is also defined as a coordination task involving a set of asynchronous agents, each aiming at picking one of a set of available resources, where every agent comes with an a priori preference for some resource. In , we foresee instances in which some combinations of resources are allowed, while others are disallowed. We model these constraints, i.e., the restrictions on the ability to use some combinations of resources, as an undirected graph whose nodes represent the resources, and where an edge between two resources indicates that these two resources cannot be used simultaneously. In other words, the sets of resources that are allowed are those which form independent sets in the graph. E.g., renaming and musical chairs are specific cases where the graph is stable (i.e., it is the empty graph, containing no edges). As for musical chairs, we assume that each agent comes with an a priori preference for some resource. If an agent's preference is not in conflict with the preferences of the other agents, then this preference can be grabbed by the agent. Otherwise, the agents must coordinate to resolve their conflicts, and potentially choose non-preferred resources. We investigate the following problem: given a graph, what is the maximum number of agents that can be accommodated subject to non-altruistic behaviors of early arriving agents? We entirely solve this problem under the restriction that agents which cannot grab their preferred resources must then choose a resource among the nodes of a predefined independent set. However, the general case, where agents which cannot grab their preferred resource are then free to choose any resource, is shown to be far more complex. In particular, just for cyclic constraints, the problem is surprisingly difficult.
Indeed, we show that, intriguingly, the natural algorithm inspired from optimal solutions to adaptive renaming or musical chairs is sub-optimal for cycles, but proven to be within 1 of the optimal. The main message of this paper is that finding optimal solutions to the coordination task with constraints and preferences requires designing "dynamic" algorithms, that is, algorithms of a completely different nature than the "static" algorithms used for, e.g., renaming.

When considering distributed computing, reliable message-passing synchronous systems on the one side, and asynchronous failure-prone shared-memory systems on the other side, remain two quite independently studied ends of the reliability/asynchrony spectrum. The concept of locality of a computation is central to the first one, while the concept of wait-freedom is central to the second one. This work proposes a new DECOUPLED model in an attempt to reconcile these two worlds. It consists of a synchronous and reliable communication graph of n nodes, and on top a set of asynchronous crash-prone processes, each attached to a communication node.
To illustrate the DECOUPLED model, the paper presents an asynchronous 3-coloring algorithm for the processes of a ring. From the processes' point of view, the algorithm is wait-free. From a locality point of view, each process uses information only from processes at distance

An immediate snapshot object is a high-level communication object, built on top of a read/write distributed system in which all processes except one may crash. It allows each process to write a value and to obtain a set of pairs (process id, value) such that, despite process crashes and asynchrony, the sets obtained by the processes satisfy noteworthy inclusion properties.
Considering an n-process model in which up to t processes are allowed to crash (the t-crash system model), the paper focuses on the construction of t-resilient immediate snapshot objects. In the t-crash system model, a process can obtain values from at least

A *failure detector* is a distributed oracle that provides the processes with information about failures. The *perfect* failure detector provides accurate and eventually complete information about process failures. In , we show that, in asynchronous failure-prone message-passing systems, perfect failure detection can be achieved by an oracle that outputs at most

Runtime Verification (RV) is a lightweight method for monitoring the formal specification of a system during its execution. It has recently been shown that a given state predicate can be monitored consistently by a set of crash-prone asynchronous *distributed* monitors, only if sufficiently many different verdicts can be emitted by each monitor. In , we revisit this impossibility result in the context of LTL semantics for RV. We show that employing the four-valued logic RVLTL will result in inconsistent distributed monitoring for some formulas. Our first main contribution is a family of logics, called LTL

Distributed snapshots, as introduced by Chandy and Lamport in the context of asynchronous failure-free message-passing distributed systems, are consistent global states through which the observed distributed application might have passed. It appears that two such distributed snapshots cannot necessarily be compared (in the sense of determining which one of them is the “first”). Differently, snapshots introduced in asynchronous crash-prone read/write distributed systems are totally ordered, which greatly simplifies their use by upper-layer applications. In order to benefit from shared-memory snapshot objects, it is possible to simulate a read/write shared memory on top of an asynchronous crash-prone message-passing system, and then build snapshot objects on top of it. This algorithm stacking is costly in both time and messages. To circumvent this drawback, the paper presents algorithms building snapshot objects directly on top of an asynchronous crash-prone message-passing system. “Directly” means here “without building an intermediate layer such as a read/write shared memory”. To the authors' knowledge, the proposed algorithms are the first providing such constructions. Interestingly enough, these algorithms are efficient and relatively simple.

A natural way to measure the power of a distributed-computing model is to characterize the set
of tasks that can be solved in it. In general, however, the question of whether a given task can
be solved in a given model is undecidable, even if we only consider the wait-free shared-memory model.
In , we address this question for restricted classes of models and tasks. We
show that the question of whether a collection

The notion of deciding a *distributed language* *non-deterministic* distributed decision each process *opinions* emitted by the processes, and one aims at minimizing the number of possible opinions emitted by each process. In , we study non-deterministic distributed decision in asynchronous systems where processes may crash. In this setting, it is known that the number of opinions needed to deterministically decide a language can grow with *distributed encoding of the integers*, that provides an explicit construction of a long *bad sequence* in the *well-quasi-ordering*

We investigate this problem for various classes of graphs.
We design optimal algorithms for line segments, which turn out to be surprisingly different from strategies for related patrolling problems proposed in the literature. We then use these results to provide algorithms for general graphs. For Eulerian graphs

Error-correcting codes are efficient methods for handling noisy communication channels in the context of technological networks. However, such elaborate methods differ a lot from the unsophisticated way biological entities are supposed to communicate. Yet, it has been recently shown by Feinerman, Haeupler, and Korman [PODC 2014] that complex coordination tasks such as rumor spreading and majority consensus can plausibly be achieved in biological systems subject to noisy communication channels, where every message transferred through a channel remains intact with small probability

The goal of a hub-based distance labeling scheme for a network

Continuing work in the previously discussed framework of hub-based distance labeling schemes, in , , we present a hub labeling which allows us to decode exact distances in sparse graphs using labels of size sublinear in the number of nodes. For graphs with at most

By using similar techniques, we then present a 2-additive labeling scheme for general graphs, i.e., one in which the decoder provides a 2-additive-approximation of the distance between any pair of nodes. We achieve almost the same label size-time tradeoff

We believe all of our techniques are of independent value and provide a desirable simplification of previous approaches.
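The decoding step common to all these hub-labeling schemes is a short loop over the two labels; here is a sketch with labels represented as hub-to-distance dictionaries (the representation is ours):

```python
def hub_distance(label_u, label_v):
    """Decode the (possibly 2-additive-approximate) distance between two
    nodes from their hub labels.  Exactness requires the cover property:
    some shortest u-v path passes through a hub common to both labels."""
    common = label_u.keys() & label_v.keys()
    if not common:
        return float('inf')
    return min(label_u[h] + label_v[h] for h in common)
```

Query time is linear in the label sizes, which is why the label size/query time tradeoff discussed above is the key measure of such schemes.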

This work fits into the framework of computationally-inspired analysis of biological systems. Any organism faces sensory and cognitive limitations which may result in maladaptive decisions. Such limitations are prominent in the context of groups where the relevant information at the individual level may not coincide with collective requirements. In , we study the navigational decisions exhibited by *Paratrechina longicornis* ants as they cooperatively transport a large food item. These decisions hinge on the perception of individuals which often restricts them from providing the group with reliable directional information. We find that, to achieve efficient navigation despite partial and even misleading information, these ants employ a locally-blazed trail. This trail significantly deviates from the classical notion of an ant trail: First, instead of systematically marking the full path, ants mark short segments originating at the load. Second, the carrying team constantly loses the guiding trail. We experimentally and theoretically show that the locally-blazed trail optimally and robustly exploits useful knowledge while avoiding the pitfalls of misleading information.

Randomized gossip is one of the most popular ways of disseminating information in large-scale networks. This method is appreciated for its simplicity, robustness, and efficiency. In the Push protocol, every informed node selects, at every time step (a.k.a. round), one of its neighboring nodes uniformly at random and forwards the information to this node. This protocol is known to complete information spreading in *static* networks. The Push protocol has also been empirically shown to perform well in practice and, specifically, to be robust against dynamic topological changes. In , we aim at analyzing the Push protocol in *dynamic* networks. We consider the *edge-Markovian* evolving graph model, which captures natural temporal dependencies between the structure of the network at time
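On a static graph the Push protocol is straightforward to simulate; a sketch follows (a simulation only, not the edge-Markovian model of the analysis; names are ours):

```python
import random

def push_rounds(adj, source, rng=random):
    """Push protocol: each round, every informed node forwards the rumor
    to one uniformly random neighbor.  Returns the number of rounds
    until all nodes are informed (the graph is assumed connected)."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        pushes = {rng.choice(adj[u]) for u in informed}
        informed |= pushes
        rounds += 1
    return rounds
```

Each informed node pushes exactly once per round, so the number of messages per round equals the number of informed nodes, one of the quantities the analysis tracks.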

The *core-periphery* network architecture proposed by Avin et al. [ICALP 2014] was shown to support fast computation for many distributed algorithms, while being much sparser than the *congested clique*. To be efficient, however, the core-periphery architecture must satisfy three axioms, among which is the capability of the core to emulate the clique, i.e., to implement the all-to-all communication pattern, in

Gang has a strong collaboration with Bell Labs (Nokia). We notably collaborate with Fabien Mathieu who is a former member of GANG and Nidhi Hegde. An ADR (joint research action) is dedicated to content centric networks and forwarding information verification. The PhD thesis of Leonardo Linguaglossa was funded by this contract.

This collaboration is developed inside the Alcatel-Lucent and Inria joint research lab.

Gang participates in the LINCS, a research centre co-founded by Inria, Institut Mines-Télécom, UPMC and Alcatel-Lucent Bell Labs, dedicated to research and innovation in the domains of future information and communication networks, systems and services. Gang contributes work on online social networks, content-centric networking and forwarding information verification.

C. Delporte and H. Fauconnier lead this project, managed by University Paris Diderot, which funds one Post-Doc position.

Distributed computation keeps raising new questions concerning computability and complexity. For instance, as far as fault-tolerant distributed computing is concerned, impossibility results do not depend on the computational power of the processes, demonstrating a form of undecidability significantly different from the one encountered in sequential computing. In the same way, as far as network computing is concerned, the impossibility of solving certain tasks locally does not depend on the computational power of the individual processes.

The main goal of DISPLEXITY (for DIStributed computing: computability and ComPLEXITY) is to establish the scientific foundations for building up a consistent theory of computability and complexity for distributed computing.

One difficulty to be faced by DISPLEXITY is to reconcile the different sub-communities corresponding to a variety of classes of distributed computing models. The current distributed computing community may indeed be viewed as two not necessarily disjoint sub-communities, one focusing on the impact of temporal issues, while the other focusing on the impact of spatial issues. The different working frameworks tackled by these two communities induce different objectives: computability is the main concern of the former, while complexity is the main concern of the latter.

Within DISPLEXITY, the reconciliation between the two communities will be achieved by focusing on the same class of problems, those for which the distributed outputs are interpreted as a single binary output: yes or no. Those are known as the yes/no-problems. The strength of DISPLEXITY is to gather specialists of the two main streams of distributed computing. Hence, DISPLEXITY will take advantage of the experience gained over the last decade by both communities concerning the challenges to be faced when building up a complexity theory encompassing more than a fragment of the field.

In order to reach its objectives, DISPLEXITY aims at achieving the following tasks:

Formalizing yes/no-problems (decision problems) in the context of distributed computing. Such problems are expected to play an analogous role in the field of distributed computing as that played by decision problems in the context of sequential computing.

Revisiting the various explicit (e.g., failure-detectors) or implicit (e.g., a priori information) notions of oracles used in the context of distributed computing allowing us to express them in terms of decidability/complexity classes based on oracles.

Identifying the impact of non-determinism on complexity in distributed computing. In particular, DISPLEXITY aims at a better understanding of the apparent lack of impact of non-determinism in the context of fault-tolerant computing, to be contrasted with the apparent huge impact of non-determinism in the context of network computing. Also, it is foreseen that non-determinism will enable the comparison of complexity classes defined in the context of fault-tolerance with complexity classes defined in the context of network computing.

Last but not least, DISPLEXITY will focus on new computational paradigms and frameworks, including, but not limited to distributed quantum computing and algorithmic game theory (e.g., network formation games).

The project will have to face and solve a number of challenging problems. Hence, we have built the DISPLEXITY consortium so as to coordinate the efforts of those worldwide leaders in Distributed Computing who are working in our country. A successful execution of the project will result in a tremendous increase in the current knowledge and understanding of decentralized computing and place us in a unique position in the field.

The project has been extended until June 2016.

Cyril Gavoille (U. Bordeaux) leads this project, which funds one Post-Doc position. H. Fauconnier is the local coordinator (the project began in October 2016).

Despite the practical interests of reusable frameworks for implementing specific distributed services, many of these frameworks still lack solid theoretical bases, and only provide partial solutions for a narrow range of services. We argue that this is mainly due to the lack of a generic framework that is able to unify the large body of fundamental knowledge on distributed computation that has been acquired over the last 40 years. The DESCARTES project aims at bridging this gap, by developing a systematic model of distributed computation that organizes the functionalities of a distributed computing system into reusable modular constructs assembled via well-defined mechanisms that maintain sound theoretical guarantees on the resulting system. DESCARTES arises from the strong belief that distributed computing is now mature enough to resolve the tension between the social needs for distributed computing systems, and the lack of a fundamentally sound and systematic way to realize these systems.

Amos Korman has an ERC Consolidator Grant entitled “Distributed Biological Algorithms (DBA)”, started in May 2015. This project proposes a new application for computational reasoning. More specifically, the purpose of this interdisciplinary project is to demonstrate the usefulness of an algorithmic perspective in studies of complex biological systems. We focus on the domain of collective behavior, and demonstrate the benefits of using techniques from the field of theoretical distributed computing in order to establish algorithmic insights regarding the behavior of biological ensembles. The project includes three related tasks, for which we have already obtained promising preliminary results. Each task contains a purely theoretical algorithmic component as well as one which integrates theoretical algorithmic studies with experiments. Most experiments are strategically designed by the PI based on computational insights, and are physically conducted by experimental biologists that have been carefully chosen by the PI. In turn, experimental outcomes will be theoretically analyzed via an algorithmic perspective. By this integration, we aim at deciphering how a biological individual (such as an ant) “thinks”, without having direct access to the neurological process within its brain, and how such limited individuals assemble into ensembles that appear to be far greater than the sum of their parts. The ultimate vision behind this project is to enable the formation of a new scientific field, called algorithmic biology, that bases biological studies on theoretical algorithmic insights.

Pierre Charbit is director of the LIA STRUCO, which is an Associated International Laboratory of CNRS between IÚUK, Prague, and IRIF, Paris. The director on the Czech side is Prof. Jaroslav Nešetřil. The primary theme of the laboratory is graph theory, more specifically: sparsity of graphs (nowhere dense classes of graphs, bounded expansion classes of graphs), extremal graph theory, graph coloring, Ramsey theory, universality and morphism duality, graph and matroid algorithms and model checking.

STRUCO focuses on high-level study of fundamental combinatorial objects, with a particular emphasis on comprehending and disseminating the state-of-the-art theories and techniques developed. The obtained insights shall be applied to obtain new results on existing problems as well as to identify directions and questions for future work.

One of the main goals of STRUCO is to provide a sustainable and reliable structure to help Czech and French researchers cooperate on long-term projects, disseminate the results to students of both countries and create links between these students more systematically. The chosen themes of the project indeed cover timely and difficult questions, for which a stable and significant cooperation structure is needed. By gathering a significant number of excellent researchers and students, the LIA will create the environment required for making advances, which shall be achieved not only by short-term exchanges of researchers, but also by a strong involvement of Ph.D. students in learning state-of-the-art techniques and in international collaborations.

STRUCO is a natural place to federate and organize the many isolated collaborations between the two countries. The project thus ensures long-term cooperation and allows young researchers (especially PhD students) to maintain fruitful exchanges between the two countries in the coming years, in a structured and federated way.

Ofer Feinerman (Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot, Israel) is a team member in Amos Korman's ERC project DBA. This collaboration has been formally established by a contract signed between the CNRS and the Weizmann Institute of Science, as part of the ERC project.

Rachid Guerraoui (School of Computer and Communication Sciences, EPFL, Switzerland) maintains an active research collaboration with Gang team members (Carole Delporte, Hugues Fauconnier).

Pierluigi Crescenzi (University of Florence, Italy) is a frequent visitor to the team and maintains an active research collaboration with Gang team members (Pierre Fraigniaud).

Sergio Rajsbaum (UNAM, Mexico) is a regular collaborator of the team, also involved formally in a joint French-Mexican research project (see next subsection).

Boaz Patt-Shamir (Tel Aviv University, Israel) is a regular collaborator of the team, also involved formally in a joint French-Israeli research project (see next subsection).

Involvement in the bilateral Franco-Mexican project ECOS NORD (2013-2016) on “Distributed Verification”. Pierre Fraigniaud was the project's co-coordinator for the French side. Partners: IRIF and LaBRI (France), UNAM (Mexico).

Eli Gafni (1 month – June 2016)

Zvi Lotker, guest of Amos Korman (2 months – May, June 2016)

Thomas Sauerwald, guest of Adrian Kosowski (1 month – November 2016)

C. Delporte and H. Fauconnier visited Sergio Rajsbaum's team (UNAM) for 10 days (March 2016)

Pierre Fraigniaud organized the 1st conference on Highlights of Algorithms (HALG 2016), Paris, June 2016. The conference gathered more than 200 participants, who attended talks presenting the most significant results on algorithms produced in the academic year 2015-2016. The 2nd edition of this conference, Highlights of Algorithms 2017, will take place in Berlin.

Carole Delporte was PC Co-chair of NETYS 2016 — the 4th International Conference on Networked Systems, Morocco, May 18-20, 2016.

Pierre Fraigniaud is PC chair of the Algorithms track of the 31st IEEE International Parallel & Distributed Processing Symposium (IPDPS), to be held in Orlando, Florida, USA, May 29 - June 2, 2017.

Hugues Fauconnier: NETYS 2016, PODC 2016.

Carole Delporte: ICDCN 2016, ICDCS 2016, DISC 2016, OPODIS 2016.

Adrian Kosowski: PODC 2016, ALGOSENSORS 2016, ICALP 2017.

Pierre Fraigniaud: WWW 2016, ESA 2016, IPDPS 2016, OPODIS 2016, SIROCCO 2016, ALGOSENSORS 2016, FUN 2016, WWW 2017, SPAA 2017, DISC 2017.

Amos Korman: ICALP 2017.

Pierre Fraigniaud is a member of the Editorial Board of Distributed Computing (DC).

Pierre Fraigniaud is a member of the Editorial Board of Theory of Computing Systems (TOCS).

Pierre Fraigniaud is a member of the Editorial Board of Fundamenta Informaticae (FI).

Adrian Kosowski: SIROCCO 2016

Pierre Fraigniaud is a member of the evaluation committee for the ERC Starting Grants (Panel 6).

Pierre Fraigniaud is director of the Institut de Recherche en Informatique Fondamentale (IRIF).

Master: Carole Delporte and Hugues Fauconnier, Algorithmique distribuée avec mémoire partagée, 6h, M2, Université Paris Diderot

Master: Hugues Fauconnier, Cours programmation répartie, 33h, M2, Univ. Paris Diderot

Master: Carole Delporte, Cours et TP Protocoles des services internet, 44h, M2, Univ. Paris Diderot

Master: Carole Delporte, Cours Algorithme réparti, 33h, M2, Univ. Paris Diderot

Master: Carole Delporte and Hugues Fauconnier, Protocoles Réseaux, 72h, M1, Université Paris Diderot

Licence: Carole Delporte and Hugues Fauconnier, Sécurité informatique, 36h, L3, Univ. Paris Diderot

Licence: Hugues Fauconnier, Programmation objet et interfaces graphiques, 48h, L2-L3, EIDD

Licence: Yacine Boufkhad, Algorithmique et Informatique, 132h, L1, IUT de l'Université Paris Diderot

Licence: Yacine Boufkhad, Programmation Orientée Objet, 60h, L2, IUT de l'Université Paris Diderot

Licence: Yacine Boufkhad, Traitement de données, 16h, L2, IUT de l'Université Paris Diderot

Master: Pierre Fraigniaud, Algorithmique avancée, 24h, Ecole Centrale Supélec Paris, M2

Master: Pierre Fraigniaud, Algorithmique parallèle et distribuée, 24h, Ecole Centrale Supélec Paris, M2

Master: Adrian Kosowski, Randomization in Computer Science: Games, Networks, Epidemic and Evolutionary Algorithms, 18h, M1, École Polytechnique

Licence: Adrian Kosowski, Design and Analysis of Algorithms, 32h, L3, École Polytechnique

Master: Pierre Fraigniaud and Adrian Kosowski, Algorithmique distribuée pour les réseaux, 24h, M2, Master Parisien de Recherche en Informatique (MPRI)

Master: Fabien de Montgolfier and Michel Habib, Grand Réseaux d'Interaction, 44h, M2, Univ Paris Diderot

Master: Fabien de Montgolfier, Protocoles Réseau (TP/TD), 24h, M1, Univ Paris Diderot

Licence: Fabien de Montgolfier, Programmation avancée (cours/TD/projet, bio-informatique), 52h, L3, Univ. Paris Diderot

Master: Fabien de Montgolfier, Algorithmique avancée (bio-informatique), 26h, M1, Univ Paris Diderot

Licence: Fabien de Montgolfier, Algorithmique (TD), 26h, L3, Ecole d'Ingénieurs Denis Diderot

Master: Laurent Viennot, Graph Mining, 3h, M2 MPRI, Univ. Paris Diderot

Licence: Michel Habib, Algorithmique, 45h, L, ENS Cachan

Master: Michel Habib, Algorithmique avancée, 24h, M1, Univ. Paris Diderot

Master: Michel Habib, Mobilité, 33h, M2, Univ. Paris Diderot

Master: Michel Habib, Méthodes et algorithmes pour l'accès à l'information numérique, 16h, M2, Univ. Paris Diderot

Master: Michel Habib, Algorithmique de graphes, 12h, M2, Univ. Paris Diderot

Licence: Pierre Charbit, Introduction à la Programmation, 30h, L1, Université Paris Diderot, France

Licence: Pierre Charbit, Automates finis, 52h, L2, Université Paris Diderot, France

Licence: Pierre Charbit, Types de Données et Objet, 52h, L1, Université Paris Diderot, France

Master: Pierre Charbit, Programmation, 60h, M2Pro PISE, Université Paris Diderot, France

Master: Pierre Charbit, Algorithmique de Graphes, 12h, M2 MPRI, Université Paris Diderot, France

PhD: Leonardo Linguaglossa (co-advised by Laurent Viennot, and by Fabien Mathieu and Diego Perino, both from Nokia Bell Labs) was hired as a PhD student by Inria through the ADR CCN contract. He obtained his PhD last September at Paris Diderot University.

PhD in progress: Simon Collet (co-advised by Amos Korman and Pierre Fraigniaud). Title of thesis is: "Algorithmic Game Theory Applied to Biology". Started September 2015.

PhD in progress: Lucas Boczkowski (co-advised by Amos Korman and Iordanis Kerenidis). Title of thesis is: "Computing with Limited Resources in Uncertain Environments". Started September 2015.

PhD in progress: Brieuc Guinard (advised by Amos Korman). Title of thesis is: "Algorithmic Aspects of Random Biological Processes". Started October 2016.

PhD in progress: Laurent Feuilloley (advised by Pierre Fraigniaud). Title of thesis is: "Synchronous Distributed Computing". Started September 2015.

PhD in progress: Mengchuan Zou (co-advised by Adrian Kosowski and Michel Habib). Title of thesis is: "Local and Adaptive Algorithms for Optimization Problems in Large Networks". Started October 2016.

PhD in progress: Finn Volkel (advised by Michel Habib). Title of thesis is: "Convexity in graphs". Started September 2016.

PhD in progress: Léo Planche (co-advised by Étienne Birmelé and Fabien de Montgolfier). Title of thesis is: "Classification de collections de graphes". Started October 2015.

PhD in progress: Alkida Balliu and Dennis Olivetti (PhD students from L'Aquila University and Gran Sasso Science Institute) are supervised by Pierre Fraigniaud.

PhD in progress: Lucas Hosseini (co-advised by Pierre Charbit, Patrice Ossona de Mendez, and Jaroslav Nešetřil since Sept. 2014). Title of thesis is: "Limits of Structures".

Laurent Viennot was president of the jury in the PhD defense of Claudio Imbrenda on “Analysing Traffic Cacheability in the Access Network at Line Rate” at Telecom ParisTech (November 2016).

Michel Habib was reviewer for the PhD thesis of Matteo Seminaroti, “Combinatorial Algorithms for the Seriation Problem”, Tilburg University, the Netherlands, December 2016.

Laurent Viennot was reviewer for the PhD thesis of Guillaume Ducoffe on “Propriétés métriques des grands graphes” at Côte d'Azur Univ. (December 2016).

Laurent Viennot was co-advisor for the PhD thesis of Leonardo Linguaglossa on “Two challenges of Software Networking: Name-based Forwarding and Table Verification” at Paris Diderot Univ. (September 2016).

Carole Delporte was reviewer for the PhD thesis of Claire Capdevielle on “Étude de la complexité des implémentations d'objets concurrents et sans attente, abandonnables ou solo-rapides” at Bordeaux Univ. (November 2016).

Laurent Viennot is “commissaire d'exposition” (curator) for the permanent exhibition on “Informatique et sciences du numérique” at the Palais de la découverte in Paris (opening in October 2017).

An article centered on the eLife 2016 paper “A locally-blazed ant trail achieves efficient collective navigation despite limited information”, whose co-authors include Amos Korman (co-corresponding author), Lucas Boczkowski, and Adrian Kosowski, appeared in the Israeli daily newspaper “Haaretz” in December 2016.

The team has contributed entries to the Encyclopedia of Algorithms.