GANG focuses on algorithm design for large-scale networks, exploiting the structural properties of these networks. Application domains include the development of optimized protocols for large dynamic networks such as mobile networks or overlay networks over the Internet. This includes, for instance, peer-to-peer applications and the navigability of social networks. GANG tools come from recent advances in the field of graph algorithms, in both centralized and distributed settings. In particular, this includes graph decomposition and geometric properties (such as low doubling dimension, low-dimension embedding, etc.).

Today, the management of large networks, the Internet being the reference, is best effort. However, the demand for mobility (ad hoc networks, wireless connectivity, etc.) and for dynamicity (node churn, fault tolerance, etc.) is increasing. In this distributed setting, it becomes necessary to design a new generation of algorithms and protocols to face the challenge of large-scale mobility and dynamicity. In the meantime, recent and sophisticated theoretical results have emerged, offering interesting new tracks for managing large networks. These results concern centralized and decentralized algorithms for solving key problems in communication networks, including routing, but also information retrieval, localization, and load balancing. They are mainly based on structural properties observed in most real networks: approximate topology with low-dimension metric spaces, low treewidth, low doubling dimension, graph-minor freeness, etc. In addition, graph decomposition techniques have recently progressed. The scientific community now has tools for optimizing network management. First striking results include designing overlay networks for peer-to-peer systems and understanding the navigability of large social networks.

We focus on two approaches for designing algorithms for large graphs: decomposing the graph and relying on simple graph traversals.

We study new decomposition schemes such as 2-join, skew partitions and other
partition problems. These graph decompositions appeared in structural graph
theory and are the basis of some well-known theorems such as the Perfect Graph
Theorem. For these decompositions, efficient algorithms are lacking. We
aim at designing algorithms working in

We study multi-sweep graph searches in more depth. In this approach, a graph search only yields a total ordering of the vertices, which can be used by the subsequent graph searches. This technique can be used on huge graphs and does not need extra memory. We have already obtained preliminary results in this direction, and many well-known graph algorithms can be cast in this framework. The idea behind this approach is that each sweep discovers some structure of the graph. At the end of the process, either we have found the underlying structure (for example, an interval representation for an interval graph) or an approximation of it (for example, in hard discrete optimization problems). We envision applications to exact computation of centers in huge graphs, to underlying combinatorial optimization problems, but also to networks arising in biology.
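As an illustration of the multi-sweep idea, the sketch below implements a plain LexBFS in which ties are broken by the ordering produced by a previous sweep (the classic LexBFS+ rule); this is a generic folklore sketch under our own data representation, not one of the team's specific algorithms.

```python
def lex_bfs(adj, order=None):
    """Lexicographic BFS. `adj` maps each vertex to its set of neighbours.
    Ties are broken by `order` (typically the ordering produced by a
    previous sweep, taking the latest vertex first), which is the key
    idea of multi-sweep (LexBFS+) algorithms.
    Simple O(n*m) version for clarity, not the partition-refinement one."""
    vertices = list(adj)
    if order is None:
        order = vertices
    priority = {v: i for i, v in enumerate(order)}
    labels = {v: [] for v in vertices}   # decreasing lists; compare lexicographically
    out = []
    remaining = set(vertices)
    while remaining:
        # pick an unvisited vertex with lexicographically largest label,
        # breaking ties by the previous ordering (latest first)
        v = max(remaining, key=lambda u: (labels[u], priority[u]))
        out.append(v)
        remaining.discard(v)
        for w in adj[v]:
            if w in remaining:
                # later visits append smaller numbers, keeping labels decreasing
                labels[w].append(len(vertices) - len(out))
    return out
```

On an interval graph, iterating such sweeps (each one feeding the next as `order`) eventually produces a characteristic ordering from which an interval representation can be read off.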

In the course of graph exploration, a mobile agent is expected to regularly visit all the nodes of an unknown network, trying to discover all of them as quickly as possible. Our research focuses on the design and analysis of agent-based algorithms for exploration-type problems, which operate efficiently in a dynamic network environment, and satisfy imposed constraints on local computational resources, performance, and resilience. Our recent contributions in this area concern the design of fast deterministic algorithms for teams of agents operating in parallel in a graph, with limited or no persistent state information available at nodes. We plan further studies to better understand the impact of memory constraints and of the availability of true randomness on the efficiency of the graph exploration process.

The distributed computing community can be viewed as a union of two
sub-communities, and this is also true in our team. Although they interact, they are disjoint enough not to leverage each
other's results. At a high level, one is mostly interested in timing issues (clock
drifts, link delays, crashes, etc.) while the other is mostly interested in
spatial issues (network structure, memory requirements, etc.). Indeed, one
sub-community is mostly focusing on the combined impact of asynchronism and
faults on distributed computation, while the other addresses the impact of
network structural properties on distributed computation. Both communities
address various forms of computational complexity, through the analysis of
different concepts. This includes, e.g., failure detectors and the wait-free
hierarchy for the former community, and compact labeling schemes and computing
with advice for the latter community. We have an ambitious project to achieve
the reconciliation between the two communities by focusing on the same class of
problems, the yes/no-problems, and establishing the scientific foundations for
building up a consistent theory of computability and complexity for distributed
computing. The main question addressed is therefore: is the absence of globally
coherent computational complexity theories covering more than fragments of
distributed computing, inherent to the field? One issue is obviously the types
of problems located at the core of distributed computing. Tasks like consensus,
leader election, and broadcasting are of a very different nature. They are neither
*yes-no* problems nor minimization problems. Coloring and
Minimum Spanning Tree are optimization problems, but we are often more interested
in constructing an optimal solution than in verifying the correctness of a given
solution. Still, it makes perfect sense to analyze the *yes-no* problems
corresponding to checking the validity of the output of tasks. Another issue is
the power of individual computation. The FLP impossibility result as well as
Linial's lower bound hold independently from the individual computational power
of the involved computing entities. For instance, the individual power of
solving NP-hard problems in constant time would not help overcoming these limits,
which are inherent to the fact that computation is distributed. A third issue
is the abundance of models for distributed computing frameworks, from shared
memory to message passing, spanning all kinds of specific network structures
(complete graphs, unit-disk graphs, etc.) and/or timing constraints (from
complete synchronism to full asynchronism). There are however models, typically
the wait-free model and the LOCAL model, which, though they do not claim to
reflect accurately real distributed computing systems, enable focusing on some
core issues. Our ongoing research program aims at carrying many important notions of
distributed computing into a *standard* computational complexity framework.

Based on our scientific foundations in both graph algorithms and distributed algorithms, we plan to analyze the behavior of various networks, such as the future Internet, social networks, and overlay networks resulting from distributed applications or online social networks.

One of the key aspects of networks resides in the dissemination of information among the nodes. We aim at analyzing various procedures of information propagation, from dedicated algorithms to simple distributed schemes such as flooding. We also consider various models, e.g., where noise can alter information as it propagates, or where the memory of nodes is limited.
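As a baseline example, synchronous flooding can be simulated in a few lines; the per-link loss probability below is our own illustrative stand-in for a noise model, not one of the models studied.

```python
import random

def flood(adj, source, p_keep=1.0, rng=random):
    """Synchronous flooding: every informed node forwards to all its
    neighbours in each round. With probability 1 - p_keep a transmission
    is lost (a crude illustrative noise model). Returns the number of
    rounds until all nodes are informed; with p_keep = 1 on a connected
    graph this equals the eccentricity of the source."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        new = set()
        for v in informed:
            for w in adj[v]:
                if w not in informed and rng.random() < p_keep:
                    new.add(w)
        if not new and p_keep >= 1.0:
            break  # lossless but stuck: the graph is disconnected
        informed |= new
        rounds += 1
    return rounds
```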

We explore new routing paradigms, such as greedy routing in social networks. We are also interested in content-centric networking, where routing is based on content name rather than content address. One of our targets is multiple-path routing: how to design forwarding tables providing multiple disjoint paths to the destination?

Based on our past experience in peer-to-peer application design, we would like to broaden the spectrum of distributed applications for which new efficient algorithms can be designed and analyzed. We especially target online social networks, as we see them as collaborative tools for exchanging information. A basic question resides in making the right connections for gathering filtered and accurate information with sufficient coverage.

As the forwarding tables of networks grow and are sometimes manually modified, the problem of verifying them becomes critical and has recently gained interest. Some problems that arise in network verification, such as loop detection, may be naturally encoded as Boolean satisfiability problems. Besides the theoretical interest of complexity proofs, this encoding allows one to solve these problems by taking advantage of the many efficient satisfiability (SAT) solvers. Indeed, SAT solvers have proved to be very efficient in solving problems coming from various areas (circuit verification, dependencies and conflicts in software distributions, etc.) encoded in conjunctive normal form. To test an approach using SAT solvers in network verification, one needs to collect data sets from a real network and to develop good models for generating realistic networks. The encoding technique and the solvers themselves need to be adapted to this kind of problem. All this represents a rich field for future experimental research.
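For concreteness, the core of loop detection can be stated on the forwarding graph itself, before any SAT encoding: for a fixed destination, each node has at most one next hop, and a loop is a cycle reached by following these pointers. The sketch below uses a hypothetical table format of our own (node → next hop), purely for illustration.

```python
def has_forwarding_loop(next_hop, dest):
    """`next_hop` maps each node to the neighbour to which it forwards
    packets bound for `dest` (None means the packet is dropped).
    Returns True iff some packet can cycle forever instead of reaching
    `dest`. Linear time: each node is classified once."""
    state = {}  # node -> 'in_progress' | 'ok' | 'loop'

    def walk(v):
        path = []
        while v is not None and v != dest and v not in state:
            state[v] = 'in_progress'
            path.append(v)
            v = next_hop.get(v)
        # we looped if we came back to a node on the current walk,
        # or reached a node already known to feed into a loop
        looped = (v is not None and
                  state.get(v) in ('in_progress', 'loop'))
        for u in path:
            state[u] = 'loop' if looped else 'ok'
        return looped

    return any(walk(v) for v in list(next_hop))
```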

Finally, we are interested in analyzing the structural properties of practical networks. This can include diameter computation or ranking of nodes. As we mostly consider large networks, we are often interested in efficient heuristics. Ideally, we target heuristics that give exact answers and are reasonably fast in practice although fast computation time is not guaranteed for all networks. We have already designed such heuristics for diameter computation; understanding the structural properties that enable fast computation time in practice is still an open question.
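One classic heuristic of this kind is the double-sweep lower bound for the diameter, sketched below; this folklore technique (exact on trees and empirically tight on many real-world graphs) illustrates the idea but is not the team's exact algorithm.

```python
from collections import deque

def bfs(adj, s):
    """Breadth-first search from s; returns the distance map."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def double_sweep(adj, start):
    """Double-sweep heuristic: BFS from `start`, then BFS again from a
    farthest vertex found. Returns a lower bound on the diameter using
    only two traversals."""
    d1 = bfs(adj, start)
    u = max(d1, key=d1.get)      # a vertex farthest from `start`
    d2 = bfs(adj, u)
    return max(d2.values())      # eccentricity of u: a diameter lower bound
```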

Application domains include evaluating Internet performance, designing new peer-to-peer applications, enabling large-scale networks, and developing tools for transportation networks.

**WENDY: Workshop on Emergent Algorithms and Network Dynamics**

GANG/Inria Paris was the institutional organizer of the WENDY workshop at
Institut Henri Poincaré, Paris, October 10-11, 2018,
https://

The goal of the project was to facilitate the exchange of ideas between researchers working on distributed computing theory, modeling random structures, and discrete dynamical systems.

The main theme of the workshop was programming local interaction dynamics on networks, so as to obtain the desired emergent effects on the system as a whole. Central topics included:

- Evolving graph models and dynamics on random graphs
- Bio-inspired computing and computing with biological agents
- Chemical reaction networks
- Markovian and non-Markovian processes on networks.

**BDA: Workshop on Biological Distributed Algorithms**

Amos Korman chaired the organizing committee and co-chaired the program committee of the 6th workshop on Biological Distributed Algorithms (BDA, http://

BDA focused on the relationships between distributed computing and distributed biological systems and, in particular, on analyses and case studies that combine the two. Such research can lead to a better understanding of the behavior of biological systems while at the same time developing novel algorithms that can be used to solve basic distributed computing problems.

The workshop featured 6 invited talks and over a dozen accepted contributed submissions, with generous financial support offered to participants by Amos Korman's ERC grant.

Keyword: Graph algorithmics

Functional Description: GANG is developing software for big graph manipulation. A preliminary library offers diameter and skeleton computation. This library was used to compute the diameters of the worldwide road network (200M edges) and of the largest strongly connected component of the Twitter follower-followee graph (23G edges).

Contact: Laurent Viennot

URL: https://

*The high performance graph library for Java*

Keywords: Graph - Graph algorithmics - Java

Functional Description: Grph is an open-source Java library for the manipulation of graphs. Its design objectives are to make it portable, simple to use and extend, computationally and memory efficient, and, according to its initial motivation, useful in the context of graph experimentation and network simulation. Grph also has the particularity of coming with tools like an evolutionary computation engine, a bridge to linear programming solvers, a framework for distributed computing, etc.

Grph offers a very general model of graphs. Unlike other graph libraries, which require the user to first decide whether to deal with directed, undirected, or hyper (or not) graphs, the model offered by Grph is unified in a general class that supports mixed graphs made of undirected and directed simple and hyper edges. Grph achieves great efficiency through the use of multiple code optimization techniques such as multi-core parallelism, caching, adequate data structures, use of primitive objects, exploitation of low-level processor caches, and on-the-fly compilation of specific C/C++ code. Grph attempts to access the Internet in order to check whether a new version is available and to report who is using it (login name and hostname). This has no impact whatsoever on performance and security.

Participants: Aurélien Lancin, David Coudert, Issam Tahiri, Luc Hogie and Nathann Cohen

Contact: Luc Hogie

In nature, search processes that use randomly oriented steps of different lengths have been observed at both the microscopic and the macroscopic scales.
Physicists have analyzed in depth two such processes on grid topologies:
*Intermittent Search*, which uses two step lengths, and *Lévy Walk*, which uses many.
Taking a computational perspective, we consider the number of distinct step lengths as a *complexity measure* of the considered process. Our goal is to understand
the optimal achievable time needed to cover the whole terrain, for any given value of

We say *$k$-intermittent search* on the one dimensional

In addition, inspired by the notion of intermittent search, we introduce the *Walk or Probe* problem, which can be defined with respect to arbitrary graphs. Here, it is assumed that querying (probing) a node takes significantly more time than moving to a random neighbor.
Hence, to efficiently probe all nodes, the goal is to balance the time spent walking randomly and the time spent probing. We provide preliminary results for connected graphs and regular graphs.
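The trade-off can be made concrete with a toy simulation; the model below (unit cost per random-walk step, a fixed cost per probe, probing every `probe_every` steps) is our own illustrative assumption, not the model analyzed in the paper.

```python
import random

def walk_or_probe(adj, start, probe_cost, probe_every, rng):
    """Toy Walk-or-Probe simulation: a walker moves to a uniformly
    random neighbour (cost 1 per step) and probes its current node
    every `probe_every` steps (cost `probe_cost` per probe).
    Returns the total time until every node has been probed."""
    nodes = set(adj)
    probed = set()
    v, time, steps = start, 0, 0
    while probed != nodes:
        if steps % probe_every == 0:
            probed.add(v)
            time += probe_cost
        v = rng.choice(sorted(adj[v]))  # sorted for reproducibility
        time += 1
        steps += 1
    return time
```

Varying `probe_every` against `probe_cost` on a given graph exhibits the balance discussed above: probing too often wastes time on repeated nodes, probing too rarely wastes walk steps.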

Let *noise parameter*

In addition, we also consider a *semi-adversarial* variant, in which faulty nodes are still chosen at random, but an adversary chooses (beforehand) the advice of such nodes. For this variant, the threshold for efficient moving algorithms happens when the noise parameter is roughly

We specifically study graph classes that have an ordering avoiding some ordered structures. More precisely, we consider what we call *patterns on three nodes*, and the recognition complexity of the associated classes. In this domain, there are two key previous works. Damaschke started the study of the classes defined by forbidden patterns, a set that contains interval, chordal and bipartite graphs, among others.
On the algorithmic side, Hell, Mohar and Rafiey proved that any class defined by a set of forbidden patterns can be recognized in polynomial time. We improve on these two works by systematically characterizing all the classes defined by sets of forbidden patterns (on three nodes), and proving that among the 23 different classes (up to complementation) that we find, 21 can actually be recognized in linear time.

Beyond this result, we consider that this type of characterization is very useful, leads to a rich structure of classes, and generates a lot of open questions worth investigating.

The fundamental distribution of a door describes the probability that it opens under the best of conditions (with respect to other doors being open or closed). We show that if in two configurations of

We then turn our attention to investigate precise bounds. Even for the case of two doors, identifying the optimal sequence is an intriguing combinatorial question.
Here, we study the case of two cascading memoryless doors. That is, the first door opens on each knock independently with probability

In an *intersection graph*, the vertices are geometric objects with an edge between any pair of intersecting objects.
Intersection graphs have been studied for many different families of objects due to their practical applications and their rich structural properties. Among the most studied ones are *disk graphs*, which are intersection graphs of closed disks in the plane, and their special case, *unit disk graphs*, where all the radii are equal.
Their applications range from sensor networks to map labeling, and many standard optimization problems have been studied on disk graphs.
Most of the hard optimization and decision problems remain NP-hard on disk graphs and even unit disk graphs. For instance, disk graphs contain planar graphs on which several of those problems are intractable.
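For reference, the definition of a disk graph translates directly into code; below is a minimal construction of the intersection graph from explicit (x, y, r) disk representations (the representation format is our choice for illustration).

```python
from itertools import combinations

def disk_graph(disks):
    """Build the intersection graph of closed disks given as (x, y, r)
    triples. Vertices are the indices into `disks`, with an edge when
    two disks intersect, i.e. the distance between their centres is at
    most the sum of their radii (compared squared to avoid sqrt)."""
    n = len(disks)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        (x1, y1, r1), (x2, y2, r2) = disks[i], disks[j]
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2:
            adj[i].add(j)
            adj[j].add(i)
    return adj
```

Setting all radii equal yields a unit disk graph as a special case.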

The complexity of Maximum Clique on general disk graphs is a notorious open question in computational geometry. On the one hand, no polynomial-time algorithm is known, even when the geometric representation is given. On the other hand, the NP-hardness of the problem has not been established, even when only the graph is given as input.

Recently, Bonnet *et al.* showed that the disjoint union of two odd cycles is not the complement of a disk graph. From this result, they obtained a subexponential algorithm running in time *et al.*, or branching on a low-degree vertex.

The second contribution is to show the same forbidden induced subgraph for unit ball graphs as the one obtained for disk graphs: their complement cannot have a disjoint union of two odd cycles as an induced subgraph.
The proofs are radically different and the classes are incomparable,
so the fact that the same obstruction applies to disk graphs and unit ball graphs might be somewhat accidental. Again, we therefore obtain a randomized EPTAS in time

Before that result, the best approximation factor was 2.553, due to Afshani and Chan. In particular, even getting a 2-approximation algorithm (as for disk graphs) was open.

Finally we show that such an approximation scheme, even in subexponential time, is unlikely for ball graphs (that is, 3-dimensional disk graphs with arbitrary radii), and unit 4-dimensional disk graphs.
Our lower bounds also imply NP-hardness.
To the best of our knowledge, the NP-hardness of Maximum Clique on unit

In an attempt to understand why graph searching on cocomparability graphs has been so successful, one quickly notices that the orderings produced by these traversals are precisely words of some antimatroids or convex geometries. The notions of antimatroid and convex geometry have appeared in the literature in various settings; in this work, we focus on the graph searching setting, where we discuss some known geometries on cocomparability graphs, and then present new structural properties of AT-free graphs, in the hope of exploring whether the algorithms on cocomparability graphs can be lifted to this larger graph class. A first version of this work, in collaboration with Feodor Dragan and Lalla Mouatadib, was presented at ICGT Lyon, July 2018.

Biological systems can share and collectively process information to yield emergent effects, despite inherent noise in communication. While man-made systems often employ intricate structural solutions to overcome noise, the structure of many biological systems is more amorphous. It is not well understood how communication noise may affect the computational repertoire of such groups. To approach this question, we consider the basic collective task of rumor spreading, in which information from a few knowledgeable sources must reliably flow into the rest of the population. We study the effect of communication noise on the ability of groups that lack stable structures to efficiently solve this task. We present an impossibility result which strongly restricts reliable rumor spreading in such groups. Namely, we prove that, in the presence of even moderate levels of noise that affect all facets of the communication, no scheme can significantly outperform the trivial one in which agents have to wait until directly interacting with the sources—a process which requires linear time in the population size. Our results imply that in order to achieve efficient rumor spreading, a system must exhibit either some degree of structural stability or, alternatively, some facet of the communication which is immune to noise. We then corroborate this claim by providing new analyses of experimental data regarding recruitment in Cataglyphis niger desert ants. Finally, in light of our theoretical results, we discuss strategies to overcome noise in other biological systems.

We first design a general compiler which can essentially transform any self-stabilizing algorithm with a certain property (called the *bitwise-independence property*) that uses *Clock Synchronization* protocol, in which agents synchronize their clocks modulo some given integer

Consider *sites*, where a site *value* *players* that compete over the rewards. They independently act in parallel, in a one-shot scenario, each specifying a single site to visit, without knowing which sites are explored by others. The group performance is evaluated by the expected *coverage*, defined as the sum of

The main takeaway message of this paper is that the optimal symmetric coverage is expected to emerge when collision costs are relatively high, so that the following “Judgment of Solomon” type of rule holds: If a single player explores a site *(Symmetric) Price of Anarchy* of precisely 1, whereas, in fact, any other congestion policy has a price strictly greater than 1.

Our model falls within the scope of mechanism design, and more precisely in the area of incentivizing exploration. It finds relevance in evolutionary ecology, and further connects to studies on Bayesian parallel search algorithms.

We focus on designing protocols which meet two natural conditions: (1) universality, i.e., independence of population size, and (2) rapid convergence to a correct global state after a reconfiguration, such as a change in the state of a source agent. Our main positive result is to show that both of these constraints can be met. For both the broadcasting problem and the source detection problem, we obtain solutions with a convergence time of

Our protocols exploit the properties of self-organizing oscillatory dynamics. On the hardness side, our main structural insight is to prove that any protocol which meets the constraints of universality and of rapid convergence after reconfiguration must display a form of non-stationary behavior (of which oscillatory dynamics are an example). We also observe that the periodicity of the oscillatory behavior of the protocol, when present, must necessarily depend on the number

We show that in the recurrent state of this dynamics (i.e., disregarding a polynomially long initialization phase of the system), the number of particles located on a given edge, averaged over an interval of time, is tightly concentrated around the average particle density in the system. Formally, for a system of

As a corollary, we also obtain bounds on the *idleness* of the studied dynamics, i.e., on the longest possible time between two consecutive appearances of a token on an edge, taken over all edges. Designing trajectories for

Rabani et al. (1998) present a general technique for the analysis of a wide class of discrete load balancing algorithms. Their approach is to characterize the deviation between the actual loads of a discrete balancing algorithm with the distribution generated by a related Markov chain. The Markov chain can also be regarded as the underlying model of a continuous diffusion algorithm. Rabani et al. showed that after time

In this work we identify natural additional conditions on deterministic balancing algorithms, resulting in a class of algorithms that reach a smaller discrepancy. This class contains well-known algorithms, e.g., the Rotor-Router.
Specifically, we introduce the notion of cumulatively fair load-balancing algorithms where in any interval of consecutive time steps, the total number of tokens sent out over an edge by a node is the same (up to constants) for all adjacent edges. We prove that algorithms which are cumulatively fair and where every node retains a sufficient part of its load in each step, achieve a discrepancy of
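To make the notion concrete, here is a minimal sketch of one synchronous Rotor-Router round. It is simplified for illustration: nodes forward their entire load, whereas the class analyzed above additionally requires each node to retain a sufficient part of its load in each step.

```python
def rotor_router_step(adj, rotor, load):
    """One synchronous Rotor-Router round. `adj` maps each node to the
    list of its neighbours (a fixed cyclic order), `rotor` to the index
    of the next outgoing edge, `load` to its current token count.
    Each node sends its tokens one by one along its edges in cyclic
    order, resuming where the rotor last stopped — so over any interval
    adjacent edges carry the same traffic up to a constant (cumulative
    fairness). Returns the new load vector; `rotor` is updated in place."""
    new_load = {v: 0 for v in adj}
    for v in adj:
        neighbours = adj[v]
        for _ in range(load[v]):
            w = neighbours[rotor[v]]
            new_load[w] += 1
            rotor[v] = (rotor[v] + 1) % len(neighbours)
    return new_load
```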

In the allocation problem, asynchronous processors must partition a set of items so that each processor leaves knowing all items exclusively allocated to it. We introduce a new variant of the allocation problem, called the assignment problem, in which processors may leave having only partial knowledge of their assigned items. The missing items in a processor's assignment must eventually be announced by other processors.

While allocation has consensus power 2, we show that the assignment problem is solvable read-write wait-free when

The assignment problem and its read-write solution may be of practical interest for implementing resource allocators and work queues, which are pervasive concurrent programming patterns, as well as stream-processing systems.

One of the central questions in distributed computability is characterizing the tasks that are solvable in a given system model. In the anonymous case, where processes have no identifiers and communicate through multi-writer/multi-reader registers, there is a recent topological characterization (Yanagisawa 2017) of the colorless tasks that are solvable when any number of asynchronous processes may crash. We consider the case where at most

In asynchronous crash-prone read/write shared-memory systems there is the notion of a snapshot object, which simulates the behavior of an array of single-writer/multi-reader (SWMR) shared registers that can be read atomically. Processes in the system can access the object by invoking (any number of times) two operations, denoted write() and snapshot(). A process invokes write() to update the value of its register in the array. When it invokes snapshot(), the process obtains the values of all registers, as if it read them simultaneously. It is known that a snapshot object can be implemented on top of SWMR registers, tolerating any number of process failures. Snapshot objects provide a level of abstraction higher than individual SWMR registers, and they simplify the design of applications.

Building a snapshot object on an asynchronous crash-prone message-passing system has similar benefits. The object can be implemented by using the known simulations of a SWMR shared memory on top of an asynchronous message-passing system (if less than half the processes can crash), and then building a snapshot object on top of the simulated SWMR memory. We present an algorithm that implements a snapshot object directly on top of the message-passing system, without building an intermediate layer of a SWMR shared memory. To the authors' knowledge, the proposed algorithm is the first providing such a direct construction. The algorithm is more efficient than the indirect solution, yet relatively simple.
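To illustrate the snapshot specification itself, the classic double-collect idea can be sketched as follows. This folklore construction is only illustrative (in a concurrent setting it is merely obstruction-free, since a perpetually interfering writer can starve the reader); it is not the direct message-passing algorithm described above.

```python
class SnapshotObject:
    """Array of single-writer registers with a double-collect snapshot:
    read all registers twice; if the two collects are identical, all
    values were simultaneously present at some instant between the two
    collects, so returning them is a valid atomic snapshot."""

    def __init__(self, n):
        self.regs = [(0, None)] * n  # (sequence number, value) per writer

    def write(self, i, value):
        seq, _ = self.regs[i]
        self.regs[i] = (seq + 1, value)  # bump seq so rereads detect change

    def snapshot(self):
        while True:
            first = list(self.regs)
            second = list(self.regs)
            if first == second:          # clean double collect
                return [v for _, v in second]
```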

We have carried out our study of distributed decision, either for its potential application to the design of fault-tolerant distributed algorithms, or for the purpose of designing a complexity/computability theory for distributed network computing.

In the framework of *distributed network computing*, it is known that not all Turing-decidable predicates on labeled networks can be decided *locally* whenever the computing entities are Turing machines (TM), and this holds even if nodes are running *non-deterministic* Turing machines (NTM). In contrast, we show in that every Turing-decidable predicate on labeled networks can be decided locally if nodes are running *alternating* Turing machines (ATM). More specifically, we show that, for every such predicate, there is a local algorithm for ATMs, with at most two alternations, that decides whether the actual labeled network satisfies that predicate. To this aim, we define a hierarchy of classes of decision tasks, where the lowest level contains tasks solvable with TMs, the first level those solvable with NTMs, and the level *local reduction*. We complete these results by a study of the local decision hierarchy when certificates are bounded to be of logarithmic size.

Distributed proofs are mechanisms enabling the nodes of a network to collectively and efficiently check the correctness of Boolean predicates on the structure of the network (e.g., having a specific diameter), or on data structures distributed over the nodes (e.g., a spanning tree). We consider well-known mechanisms consisting of two components: a *prover* that assigns a *certificate* to each node,
and a distributed algorithm called *verifier* that is in charge of verifying the distributed proof formed by the collection of all certificates. We show that many network predicates have distributed proofs offering a high level of redundancy, explicitly or implicitly. We use this remarkable property of distributed proofs to establish perfect tradeoffs between the *size of the certificate* stored at every node, and the *number of rounds* of the verification protocol.

The role of unique node identifiers in network computing is well understood as far as *symmetry breaking* is concerned. However, the unique identifiers also *leak information* about the computing environment—in particular, they provide some nodes with information related to the size of the network. It was recently proved that in the context of *local decision*, there are some decision problems that cannot be solved without unique identifiers, but unique identifiers leak a *sufficient* amount of information such that the problem becomes solvable (PODC 2013). We give a complete picture of the *minimal* amount of information that we need to leak from the environment to the nodes in order to solve local decision problems. Our key results are related to *scalar oracles*, and to the *weakest oracle* that leaks at least as much information as the unique identifiers. Our main result is the following dichotomy: we classify scalar oracles as *large* and *small*, depending on their asymptotic behaviour, and show that (1) any large oracle is at least as powerful as the unique identifiers in the context of local decision problems, while (2) for any small oracle there are local decision problems that still benefit from unique identifiers.

Two notable contributions to game theory applied to networks are worth mentioning.

GANG has a strong collaboration with Bell Labs (Nokia). We notably collaborate with Fabien Mathieu, a former member of GANG, and Élie de Panafieu. An ADR (joint research action) is dedicated to distributed learning.

This collaboration is developed inside the Alcatel-Lucent and Inria joint research lab.

GANG participates in the LINCS, a research centre co-founded by Inria, Institut Mines-Télécom, UPMC and Alcatel-Lucent Bell Labs, dedicated to research and innovation on future information and communication networks, systems and services. GANG contributes work on online social networks, content-centric networking and forwarding-information verification.

Cyril Gavoille (U. Bordeaux) leads this project, which funds one postdoc. H. Fauconnier is the local coordinator. The project began in October 2016.

Despite the practical interests of reusable frameworks for implementing specific distributed services, many of these frameworks still lack solid theoretical bases, and only provide partial solutions for a narrow range of services. We argue that this is mainly due to the lack of a generic framework that is able to unify the large body of fundamental knowledge on distributed computation that has been acquired over the last 40 years. The DESCARTES project aims at bridging this gap, by developing a systematic model of distributed computation that organizes the functionalities of a distributed computing system into reusable modular constructs assembled via well-defined mechanisms that maintain sound theoretical guarantees on the resulting system. DESCARTES arises from the strong belief that distributed computing is now mature enough to resolve the tension between the social needs for distributed computing systems, and the lack of a fundamentally sound and systematic way to realize these systems.

David Coudert (Sophia Antipolis) leads this project. L. Viennot coordinates locally. The project began in 2018.

The MultiMod project aims at enhancing the mobility of citizens in urban areas by providing them, through a single interface in which they can express their preferences, with the most convenient transportation means to reach their destinations. Indeed, the increasing involvement of actors and authorities in the deployment of more responsible and cost-effective logistics, together with progress in digital technology, has made it possible to create synergies in the design of innovative services for improving mobility in cities. However, users face a number of solutions that coexist at different scales and provide complementary mobility information, which makes it very complex to find the most convenient itinerary at a given time for a specific user. In this context, MultiMod aims at improving the mobility of citizens in urban areas by proposing contextualized services, linking users, to facilitate multimodal transport by flexibly combining all available modes (planned/dynamic carpooling, public transport (PT), car-sharing, bicycle, etc.).

We consider the use of carpooling in metropolitan areas, hence for short journeys. Such usage enables itineraries that are not possible with PT alone, opens up areas with low PT coverage by bringing users closer to PT (last miles), and allows for faster travel times when existing PT itineraries are too complex or too infrequent (e.g., one bus per hour). In this context, the application must help the driver and the passenger as much as possible. In particular, it must propose a meeting-point, indicate the detour duration to the driver, and show the passenger how to reach this meeting-point using PT. Here, the time taken by drivers and passengers to agree becomes a critical issue, so the application must provide all the information needed to reach a decision quickly (i.e., in one click).
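As a rough illustration of the meeting-point computation (this is our own toy sketch under simplifying assumptions, not Instant System's algorithm: it uses a single shared weighted graph instead of separate road and PT networks, and a hypothetical set of candidate meeting points), one can pick the candidate minimizing the later of the two arrival times:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest-path distances on a weighted digraph.

    `graph` maps each node to a list of (neighbour, edge_weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def best_meeting_point(graph, driver, passenger, candidates):
    """Return the candidate minimizing the later of the two arrival times."""
    d_driver = dijkstra(graph, driver)
    d_passenger = dijkstra(graph, passenger)
    return min(candidates,
               key=lambda m: max(d_driver.get(m, float("inf")),
                                 d_passenger.get(m, float("inf"))))
```

In the real setting, the driver's distances would be computed on the road network and the passenger's on the PT network, and the objective would also account for the driver's detour duration and for user preferences.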

In addition, the era of the Smart City gathers many emerging concepts, driven by innovative technological players, which enable the exploitation of real-time data (e.g., the delay of a bus, a traffic jam) made available by various actors (e.g., communities in the framework of Open Data projects, users via their mobile terminals, traffic supervision authorities). In the MultiMod project, we will use these rich data sources to propose itineraries that are feasible at query time. Our findings will enable the design of a mobility companion able not only to guide the user along her journey, including when and how to change transportation modes, but also to propose itinerary changes when the current one exceeds a threshold delay. The main originality of this project is thus to address the problem of computing itineraries in large-scale networks combining PT, carpooling and real-time data, while satisfying the preferences of users. We envision that the outcome of this project will significantly improve the daily life of citizens.

The targeted metropolitan area for validating our solutions is Ile-de-France. Indeed, Instant-System is currently developing the new application “Vianavigo lab”, which will replace the current “Vianavigo” application for the PT network of Ile-de-France. Our findings will therefore be tested at scale and eventually integrated and deployed in production servers and mobile applications. The smaller networks of Bordeaux and Nice will be used for preliminary evaluations, since Instant System already operates applications in these cities (Boogi Nice, Boogi Bordeaux). An important remark is that new features and algorithms can contractually be deployed in production every 4 months, enabling Instant System to measure and challenge the results of the MultiMod project continuously. This gives the project a real opportunity to maximize its impact.

Arnaud Sangnier (IRIF, Univ Paris Diderot) leads this project, which funds one PhD position. The project began in October 2017.

Distributed algorithms are nowadays omnipresent in most systems and applications. It is of utmost importance to develop algorithmic solutions that are both robust and flexible, to be used in large-scale applications. Currently, distributed algorithms are developed under precise assumptions on their execution context: synchronicity, bounds on the number of failures, etc. The robustness of distributed algorithms is a challenging problem that has received little attention so far, and there is no systematic way to guarantee or verify the behavior of an algorithm beyond the context for which it has been designed. We propose to develop automated formal-method techniques to verify the robustness of distributed algorithms and to support the development of robust applications. Our methods are of two kinds: static, through classical verification, and dynamic, by synthesizing distributed monitors that check either correctness or the validity of the context hypotheses at runtime.
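As a toy illustration of the dynamic, monitor-based approach (the class and the heartbeat-based synchrony hypothesis below are our own illustrative assumptions, not part of the project), a runtime monitor can record evidence that a context hypothesis is violated during execution:

```python
class SynchronyMonitor:
    """Illustrative runtime monitor: checks the context hypothesis that
    successive heartbeats of each process arrive within a claimed
    synchrony bound `delta`."""

    def __init__(self, delta):
        self.delta = delta
        self.last_seen = {}   # process -> timestamp of its last heartbeat
        self.violations = []  # (process, observed_gap) pairs

    def heartbeat(self, process, timestamp):
        last = self.last_seen.get(process)
        if last is not None and timestamp - last > self.delta:
            # the synchrony hypothesis does not hold for this process
            self.violations.append((process, timestamp - last))
        self.last_seen[process] = timestamp
```

A monitor of this kind checks the validity of a context hypothesis rather than full correctness; in a distributed deployment, such monitors would themselves be synthesized and distributed across the processes.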

Victor Chepoi (Univ. Marseille) leads this project. P. Charbit coordinates locally. The project began in early 2018.

The theme of the project is Metric Graph Theory, and we are concerned with both theoretical foundations and applications. Such applications can be found in real-world networks: for example, the hub labelling problem in road networks applies directly to car navigation. Understanding key structural properties of large-scale data networks is crucial for analyzing and optimizing their performance, as well as for improving their reliability and security. In prior empirical and theoretical studies, researchers have mainly focused on features such as the small-world phenomenon, power-law degree distributions, navigability, and high clustering coefficients. Although those features are interesting and important, the impact of intrinsic geometric and topological features of large-scale data networks on performance, reliability and security is of much greater importance. Recently, there has been a surge of empirical work measuring and analyzing geometric characteristics of real-world networks, namely the Gromov hyperbolicity (also called negative curvature) of the network. It has been shown that a number of data networks, including Internet application networks, web networks, collaboration networks, social networks, and others, have small hyperbolicity.
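For concreteness, the Gromov hyperbolicity of a finite graph can be computed exactly from its distance matrix via the four-point condition. The sketch below is illustrative (a naive O(n^4) scan over quadruples, assuming all pairwise shortest-path distances are given as a dict of dicts) and returns the smallest delta such that the metric is delta-hyperbolic:

```python
from itertools import combinations

def gromov_hyperbolicity(dist):
    """Exact hyperbolicity via the four-point condition (naive O(n^4)).

    `dist` is a dict of dicts of pairwise shortest-path distances."""
    delta = 0.0
    for x, y, u, v in combinations(list(dist), 4):
        sums = sorted([dist[x][y] + dist[u][v],
                       dist[x][u] + dist[y][v],
                       dist[x][v] + dist[y][u]])
        # half the difference between the two largest pairwise sums
        delta = max(delta, (sums[2] - sums[1]) / 2)
    return delta
```

Trees have hyperbolicity 0, so a small value indicates that the network metric is close to a tree metric.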

Metric graph theory has also been indispensable in solving open questions in concurrency theory and learning theory in computer science, and in geometric group theory in mathematics. Median graphs are exactly the 1-skeletons of CAT(0) cube complexes (which were characterized by Gromov in a local-to-global combinatorial way). They play a vital role in geometric group theory (for example, in the recent solution of the famous Virtual Haken Conjecture). Median graphs are also the domains of the event structures of Winskel, one of the basic abstract models of concurrency. This correspondence is very useful in dealing with questions on event structures.
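One way to see the metric flavour of median graphs: every triple of vertices has a unique median, i.e., a vertex lying simultaneously on shortest paths between each of the three pairs. A brute-force illustrative sketch (our own, assuming the pairwise distance matrix is given):

```python
def median_vertex(dist, x, y, z):
    """Return a vertex m lying on shortest paths between each pair of
    x, y, z, or None if no such vertex exists.

    `dist` is a dict of dicts of pairwise distances. In a median graph
    such a vertex exists and is unique for every triple."""
    for m in dist:
        if (dist[x][m] + dist[m][y] == dist[x][y] and
                dist[x][m] + dist[m][z] == dist[x][z] and
                dist[y][m] + dist[m][z] == dist[y][z]):
            return m
    return None
```

In a median graph the returned vertex is the unique median of the triple; in an arbitrary graph the function may return None, or one of several vertices with this property.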

Many classical algorithmic problems concern distances: shortest path, center and diameter, Voronoi diagrams, TSP, clustering, etc. Algorithmic and combinatorial problems related to distances also occur in data analysis. Low-distortion embeddings into l1-spaces (the theorem of Bourgain and its algorithmic use by Linial et al.) were the founding tools in metric methods. Recently, several approximation algorithms for NP-hard problems were designed using metric methods. Other important algorithmic graph problems related to distances concern the construction of sparse subgraphs approximating inter-node distances and, conversely, augmentation problems with distance constraints. Finally, in the distributed setting, an important problem is that of designing compact data structures allowing very fast computation of inter-node distances or routing along shortest or almost-shortest paths. Besides computer science and mathematics, applications of structures involving distances can be found in archeology, computational biology, statistics, data analysis, etc. The problem of characterizing isometric subgraphs of hypercubes has its origin in communication theory and linguistics. To take into account the recombination effect in genetic data, the mathematicians Bandelt and Dress developed in 1991 the theory of canonical decompositions of finite metric spaces. Together with geneticists, Bandelt successfully used it over the years to reconstruct phylogenies, in the evolutionary analysis of mtDNA data in human genetics. One important step in their method is to build a reduced median network that spans the data but still contains all most parsimonious trees. As mentioned above, the median graphs occurring there constitute a central notion in metric graph theory.
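As a concrete example of such compact distance structures, a 2-hop (hub) labelling assigns to each node a small set of hubs with precomputed distances, chosen so that every pair of nodes shares a hub lying on a shortest path between them (the cover property). A distance query then reduces to merging two labels. An illustrative sketch, assuming a correct labelling has already been precomputed:

```python
def hub_distance(label_u, label_v):
    """2-hop labelling query: min over common hubs h of d(u,h) + d(h,v).

    Each label maps hub -> precomputed distance. Correctness relies on
    the cover property of the labelling, which we assume here."""
    common = label_u.keys() & label_v.keys()
    if not common:
        return float("inf")  # cover property violated, or nodes disconnected
    return min(label_u[h] + label_v[h] for h in common)
```

This is the query side of the hub labelling approach to road-network distances mentioned above; the whole difficulty lies in computing small labels that satisfy the cover property.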

With this project, we aim to contribute to the development of this new domain of Metric Graph Theory, which requires expertise and knowledge in combinatorics (graphs, matroids), geometry, and algorithms. This expertise is distributed over the members of the consortium, and part of the success of our project will lie in sharing this knowledge among all its members. In this way we will create a strong group in France on graphs and metrics.

This project, starting in early 2018 and led by Reza Naserasr, explores the connection between minors and colorings, exploiting the notion of signed graphs. With the four colour theorem playing a central role in the development of graph theory, the notions of minor and coloring have become two of the most distinguished concepts in this field. The geometric notion of planarity has given birth to the theory of minors, among others, while coloring has proven to have an algebraic nature through its extension to the theory of graph homomorphisms.
A great many projects have been completed on both subjects, but the correlation between the two remains largely a mystery. The four colour theorem itself, in slightly stronger form, claims that if a complete graph on five vertices cannot be obtained from a given graph by minor operations, then the graph can be homomorphically mapped into the complete graph on four vertices (thus 4-colored). Commonly regarded as one of the most challenging conjectures in graph theory, the Hadwiger conjecture claims that five and four in this theorem can be replaced with t + 1 and t, respectively, for every integer t ≥ 1.
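In symbols, writing H ⪯ G for “H is a minor of G”, χ for the chromatic number, and G → K_t for the existence of a homomorphism (a t-coloring), the two statements above read as follows (these are the standard forms, added here for clarity):

```latex
% Four colour theorem, minor form:
K_5 \npreceq G \;\Longrightarrow\; \chi(G) \le 4 \quad (\text{equivalently } G \to K_4).
% Hadwiger's conjecture, for every integer t \ge 1:
K_{t+1} \npreceq G \;\Longrightarrow\; \chi(G) \le t \quad (\text{equivalently } G \to K_t).
```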

Amos Korman has an ERC Consolidator Grant entitled “Distributed Biological Algorithms (DBA)”, started in May 2015. This project proposes a new application for computational reasoning. More specifically, the purpose of this interdisciplinary project is to demonstrate the usefulness of an algorithmic perspective in studies of complex biological systems. We focus on the domain of collective behavior, and demonstrate the benefits of using techniques from the field of theoretical distributed computing in order to establish algorithmic insights regarding the behavior of biological ensembles. The project includes three related tasks, for which we have already obtained promising preliminary results. Each task contains a purely theoretical algorithmic component as well as one that integrates theoretical algorithmic studies with experiments. Most experiments are strategically designed by the PI based on computational insights, and are physically conducted by experimental biologists who have been carefully chosen by the PI. In turn, experimental outcomes are theoretically analyzed from an algorithmic perspective. Through this integration, we aim at deciphering how a biological individual (such as an ant) “thinks”, without having direct access to the neurological processes within its brain, and how such limited individuals assemble into ensembles that appear to be far greater than the sum of their parts. The ultimate vision behind this project is to enable the formation of a new scientific field, called algorithmic biology, that bases biological studies on theoretical algorithmic insights.

Pierre Charbit is director of the LIA STRUCO, an Associated International Laboratory of CNRS between IÚUK, Prague, and IRIF, Paris. The director on the Czech side is Prof. Jaroslav Nešetřil. The primary theme of the laboratory is graph theory, more specifically: sparsity of graphs (nowhere dense classes, bounded expansion classes), extremal graph theory, graph coloring, Ramsey theory, universality and morphism duality, and graph and matroid algorithms and model checking.

STRUCO focuses on the high-level study of fundamental combinatorial objects, with a particular emphasis on comprehending and disseminating the state-of-the-art theories and techniques developed. The insights obtained shall be applied to derive new results on existing problems as well as to identify directions and questions for future work.

One of the main goals of STRUCO is to provide a sustainable and reliable structure helping Czech and French researchers cooperate on long-term projects, disseminate the results to students of both countries, and create links between these students more systematically. The chosen themes of the project indeed cover timely and difficult questions, for which a stable and significant cooperation structure is needed. By gathering a substantial number of excellent researchers and students, the laboratory will create the environment required for making advances, achieved not only through short-term exchanges of researchers, but also through a strong involvement of PhD students in learning state-of-the-art techniques and in international collaborations.

STRUCO is a natural place to federate and organize the many isolated collaborations between our two countries. The project thus ensures long-term cooperation and allows young researchers (especially PhD students) to maintain fruitful exchanges between the two countries in the coming years, in a structured and federated way.

Carole Delporte-Gallet and Hugues Fauconnier are members of the Inria-MEXICO Equipe Associée LiDiCo (At the Limits of Distributed Computability, https://

Ofer Feinerman (Physics department of complex systems, Weizmann Institute of Science, Rehovot, Israel), is a team member in Amos Korman's ERC project DBA. This collaboration has been formally established by signing a contract between the CNRS and the Weizmann Institute of Science, as part of the ERC project.

Rachid Guerraoui (School of Computer and Communication Sciences, EPFL, Switzerland) maintains an active research collaboration with Gang team members (Carole Delporte, Hugues Fauconnier).

Sergio Rajsbaum (UNAM, Mexico) is a regular collaborator of the team, also involved formally in a joint French-Mexican research project (see next subsection).

Boaz Patt-Shamir (Tel Aviv University, Israel) is a regular collaborator of the team, also involved formally in a joint French-Israeli research project (see next subsection).

Lalla Moutadib, PhD student at the University of Toronto, supervised by Alan Borodin and Derek Corneil, and also informally by Michel Habib.
She made two visits to our group in 2018 and obtained her PhD in September 2018. See https://

Sergio Rajsbaum (UNAM, Mexico) - April 1 to June 30.

Giuliano Losa (UCLA, USA) - May 17 to May 30.

Carole Delporte and Hugues Fauconnier visited Sergio Rajsbaum at UNAM, Mexico - September 2 to September 14.

Amos Korman: BDA 2018, General Chair of the organizing committee.

Adrian Kosowski: chair of the WENDY workshop, Paris.

Organisation of Dagstuhl Seminar 18211 *Formal Methods and Fault-Tolerant Distributed Computing: Forging an Alliance*, by Javier Esparza (TUM, Munich, Germany), Pierre Fraigniaud (IRIF and Inria GANG, Paris, France), Anca Muscholl (LaBRI, Bordeaux, France), and Sergio Rajsbaum (UNAM, Mexico).

Amos Korman: BDA 2018, co-chair.

Amos Korman: ADGA 2018.

Pierre Fraigniaud: *Highlights of Algorithms* (HALG), since January 2015.

Adrian Kosowski: MFCS 2018, SIROCCO 2018.

Carole Delporte-Gallet: NETYS 2018.

Pierre Fraigniaud: SPAA 2018, DISC 2018, ICALP 2018, WWW 2018, IPDPS 2018, LATIN 2018, HiPC 2018, ICDCN 2018.

Michel Habib: WG 2018.

Pierre Fraigniaud is a member of the Editorial Board of Distributed Computing (DC).

Pierre Fraigniaud is a member of the Editorial Board of Theory of Computing Systems (TOCS).

Adrian Kosowski is a member of the Editorial Board of Mathematical Foundations of Computing (AIMS MFOC).

Carole Delporte is co-editor, with Parosh Abdulla, of the Special Issue on NETYS’2016 published in Computing.

Hugues Fauconnier gave a seminar at the Collège de France entitled "Failure Detectors", December 2018.

Adrian Kosowski was an expert panel member for grant panel PE6 of the National Science Center, Poland (Spring 2018).

Pierre Fraigniaud was member of the *shadow committee* of the ERC Starting Grants selection panel in 2018.

Pierre Fraigniaud was vice-president of the HCERES committee of the Laboratoire d'Informatique de l'École Polytechnique (LIX), November 2018.

Hugues Fauconnier is director of the UFR d'informatique of Université Paris Diderot.

Carole Delporte-Gallet is deputy director of the UFR d'informatique of Université Paris Diderot.

Laurent Viennot is leader of the “Algorithms and discrete structures” department of the Institut de Recherche en Informatique Fondamentale (IRIF).

Master: Carole Delporte and Hugues Fauconnier, Algorithmique distribuée avec mémoire partagée, 6h, M2, Université Paris Diderot

Master: Hugues Fauconnier, Cours programmation répartie, 33h, M2, Univ. Paris Diderot

Master: Carole Delporte, Cours et TP Protocoles des services internet, 44h, M2, Univ. Paris Diderot

Master: Carole Delporte, Cours Algorithmes répartis, 33h, M2, Univ. Paris Diderot

Master: Carole Delporte and Hugues Fauconnier, Théorie et pratique de la concurrence, 48h, M1, Université Paris Diderot

Licence: Carole Delporte and Hugues Fauconnier, Culture informatique, 16h, L2, Univ. Paris Diderot

Licence: Yacine Boufkhad, Algorithmique et Informatique, 132h, L1, IUT de l'Université Paris Diderot

Licence: Yacine Boufkhad, Programmation Orientée Objet, 60h, L2, IUT de l'Université Paris Diderot

Licence: Yacine Boufkhad, Traitement de données, 16h, L2, IUT de l'Université Paris Diderot

Master: Pierre Fraigniaud, Algorithmique parallèle et distribuée, 24h, Ecole Centrale Supélec Paris, M2

Master: Adrian Kosowski, Randomization in Computer Science: Games, Networks, Epidemic and Evolutionary Algorithms, 18h, M1, École Polytechnique

Licence: Adrian Kosowski, Design and Analysis of Algorithms, 32h, L3, École Polytechnique

Master: Pierre Fraigniaud and Adrian Kosowski, Algorithmique distribuée pour les réseaux, 24h, M2, Master Parisien de Recherche en Informatique (MPRI)

Master: Fabien de Montgolfier, Grands Réseaux d'Interaction, 44h, M2, Univ Paris Diderot

Master: Fabien de Montgolfier, Protocoles Réseau (TP/TD), 24h, M1, Univ Paris Diderot

Licence: Fabien de Montgolfier, Programmation avancée (cours/TD/projet, bio-informatique), 52h, L3, Univ. Paris Diderot

Master: Fabien de Montgolfier, Algorithmique avancée (bio-informatique), 26h, M1, Univ Paris Diderot

Licence: Fabien de Montgolfier, Algorithmique (TD), 26h, L3, Ecole d'Ingénieurs Denis Diderot

Master: Laurent Viennot, Graph Mining, 6h, M2 MPRI, Univ. Paris Diderot

Licence: Pierre Charbit, Elements d'Algorithmique, 24h, L2, Université Paris Diderot, France

Licence: Pierre Charbit, Automates finis, 36h, L2, Université Paris Diderot, France

Licence: Pierre Charbit, Internet et Outils, 52h, L1, Université Paris Diderot, France

Master: Pierre Charbit, Programmation Objet, 60h, M2Pro PISE, Université Paris Diderot, France

Master: Pierre Charbit, Algorithmique de Graphes, 12h, M2 MPRI, Université Paris Diderot, France

PhD defended: Lucas Boczkowski (co-advised by Amos Korman and Iordanis Kerenidis). Title of thesis: "Computing with Limited Resources in Uncertain Environments". Started September 2015, defended on November 30th, 2018.

PhD defended: Laurent Feuilloley (advised by Pierre Fraigniaud). Title of thesis: "Synchronous Distributed Computing". Started September 2015, defended on September 19th, 2018.

PhD defended: Léo Planche (co-advised by Étienne Birmelé and Fabien de Montgolfier). Title of thesis: "Graph Decomposition into Shortest Paths and Cycles of Small Eccentricity". Started October 2015, defended on November 23rd, 2018.

PhD defended: Vitaly Aksenov (co-advised by Petr Kuznetsov, Anatoly Shalyto and Carole Delporte). Title of thesis: "Synchronization Costs in Parallel Programs and Concurrent Data Structures". Started October 2015, defended on September 26, 2018.

PhD in progress: Simon Collet (co-advised by Amos Korman and Pierre Fraigniaud). Title of thesis is: "Algorithmic Game Theory Applied to Biology". Started September 2015.

PhD in progress: Brieuc Guinard (advised by Amos Korman). Title of thesis is: "Algorithmic Aspects of Random Biological Processes". Started October 2016.

PhD in progress: Mengchuan Zou (co-advised by Adrian Kosowski and Michel Habib). Title of thesis is: "Local and Adaptive Algorithms for Optimization Problems in Large Networks". Started October 2016.

PhD in progress: Alkida Balliu and Dennis Olivetti (PhD students from L'Aquila University and the Gran Sasso Science Institute) are supervised by Pierre Fraigniaud.

PhD in progress: Lucas Hosseini (co-advised by Pierre Charbit, Patrice Ossona de Mendez and Jaroslav Nešetřil since Sept. 2014). Title : Limits of Structures.

Master internship (MPRI): Duc-Minh Phan (advised by Laurent Viennot), March-August 2018. Title of report: “Public Transit Routing with Unrestricted Walking using Hub Labelling”.

Michel Habib was on the jury committee of the PhD thesis of Léo Planche: “Décomposition de graphes en plus courts chemins et en cycles de faible excentricité", Paris Descartes and Paris Diderot Universities, 23 November 2018.

Michel Habib was president of the jury committee of the PhD thesis of Julien Fradin: “Graphes complexes en biologie : problèmes, algorithmes et évaluations", Nantes University, 4 December 2018.

Michel Habib was on the jury committee of the PhD thesis of Mostafa Darwiche: “When operation research meets structural pattern recognition : on the solution of error-tolerant graph matching problems", Tours University, 5 December 2018.

Michel Habib was a member of the jury for the HDR thesis of Jean-Sébastien Sereni: “Sur des aspects algébriques de la coloration de graphes : coloration fractionnaire et nombre de colorations”, Université de Lorraine, 13 February 2018.

Laurent Viennot was referee and on the jury committee of the HDR thesis of Frédéric Giroire on “Optimisation des infrastructures réseaux. Un peu de vert dans les réseaux et autres problèmes de placement et de gestion de ressources” at the University of Nice-Sophia Antipolis, October 2018.

Laurent Viennot was president of the jury committee of the PhD thesis of Matthieu Boutier “Routage sensible à la source” at Paris Diderot University, September 2018.

Laurent Viennot was on the jury committee of the PhD thesis of Alexandre Hollocou on “Novel Approaches to the Clustering of Large Graphs” at PSL University, December 2018.

Hugues Fauconnier was president of the jury committee of the PhD thesis of Vitaly Aksenov “Synchronization Costs in Parallel Programs and Concurrent Data Structures” at Paris Diderot University, September 2018.

Carole Delporte was on the jury committee of the PhD thesis of Vitaly Aksenov “Synchronization Costs in Parallel Programs and Concurrent Data Structures” at Paris Diderot University, September 2018.

Carole Delporte was president of the jury committee of the PhD thesis of Laurent Feuilloley “Local certification in distributed computing: error-sensitivity, uniformity, redundancy, and interactivity” at Paris Diderot University, September 2018.

Carole Delporte was referee and on the jury committee of the PhD thesis of Denis Jeanneau "Failure Detectors in Dynamic Distributed Systems" at Sorbonne université, December 2018.

Carole Delporte was president of the jury committee of the PhD thesis of Thibault Rieutord "Combinatorial Characterization of Asynchronous Distributed Computability" at Université Paris Saclay, October 2018.

Pierre Fraigniaud was referee for the HDR thesis of Christine Tasson (IRIF, Paris Diderot) “Sémantiques vectorielles, probabilistes et distribuées”, 23 November 2018.

Pierre Fraigniaud was referee for the HDR thesis of Alessia Milani (LaBRI, Bordeaux) “Asynchronous Distributed Computing”, 12 November 2018.

Pierre Fraigniaud was a member of the jury for the HDR thesis of Jean-Sébastien Sereni: “Sur des aspects algébriques de la coloration de graphes : coloration fractionnaire et nombre de colorations”, Université de Lorraine, 13 February 2018.

Pierre Fraigniaud was a member of the jury for the PhD thesis of Lucas Boczkowski “Distributed Computing Applied to Biology”, 30 November 2018.

Laurent Viennot was “commissaire scientifique” for the permanent exhibition on “Informatique et sciences du numérique” at the Palais de la découverte in Paris (opened in March 2018).

Carole Delporte was president of a baccalauréat jury.