Section: New Results
Distributed Computing
Rendezvous
Rendezvous of Anonymous Agents in Trees
In [5], we study the so-called rendezvous
problem in the mobile agent setting in graph environments. In the studied
model, two identical (anonymous) mobile agents start from arbitrary nodes of an
unknown tree and have to meet at some node. Agents move in synchronous rounds:
in each round an agent can either stay at the current node or move to one of its
neighbors. We consider deterministic algorithms for this rendezvous task. The
main result of our research is a tight trade-off between the optimal time to complete rendezvous and the size of the agents' memory. For agents with
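To make the setting concrete, here is a minimal synchronous-round simulator for this model (a sketch illustrating the model only, not the algorithm or the trade-off of [5]; the strategy interface and the toy strategy below are our own assumptions):

def simulate_rendezvous(adj, starts, strategy, max_rounds=1000):
    """Synchronous-round simulator for the model above: two identical agents
    on a tree given by adjacency lists (adj[v] is the ordered list of v's
    neighbors, which defines local port numbers). In each round, each agent
    either stays put or crosses one port, as dictated by the same
    deterministic strategy. (Illustration of the model, not the algorithm
    of [5].)

    strategy(state, degree, entry_port) -> (new_state, port), where port is
    an outgoing port number or None to stay.
    """
    positions = list(starts)
    states = [None, None]          # the agents' (bounded) memory
    entry_ports = [None, None]     # port through which the current node was entered
    for round_no in range(max_rounds):
        if positions[0] == positions[1]:
            return round_no        # rendezvous: both agents at the same node
        for i in (0, 1):
            v = positions[i]
            states[i], port = strategy(states[i], len(adj[v]), entry_ports[i])
            if port is not None:
                w = adj[v][port]
                entry_ports[i] = adj[w].index(v)   # arrival port at w
                positions[i] = w
    return None                    # no rendezvous within the round limit

# Toy strategy (assumption, for illustration only): alternately move through
# port 0 and stay.
def toy_strategy(state, degree, entry_port):
    move_now = not state           # state None/False -> move, True -> stay
    return (move_now, 0 if move_now else None)

# A path 0 - 1 - 2 with the agents starting at the two endpoints.
tree = {0: [1], 1: [0, 2], 2: [1]}
print(simulate_rendezvous(tree, (0, 2), toy_strategy))   # meets at node 1 in round 1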
Rendezvous of Distance-Aware Mobile Agents in Unknown Graphs
In [17], we study the problem of rendezvous of two mobile
agents starting at distinct locations in an unknown graph. The agents have
distinct labels and walk in synchronous steps. However, the graph is unlabeled
and the agents have no means of marking the nodes of the graph and cannot
communicate with or see each other until they meet at a node. When the graph is
very large, we would like the time to rendezvous to be independent of the graph
size and to depend only on the initial distance between the agents and some
local parameters, such as the degree of the vertices and the sizes of the agents' labels. It is well known that even for simple graphs of degree
Rendezvous of Heterogeneous Mobile Agents in Edge-Weighted Networks
In [22], we study the deterministic rendezvous
problem in which a pair of heterogeneous agents, differing in the time required
to traverse particular edges of the graph, need to meet on an edge or node of
the graph. Each agent knows the complete topology of the undirected graph and the initial positions of both agents. It also knows its own traversal times for all edges of the graph, but is unaware of the corresponding traversal times of the other agent. In this scenario, we study
the time required by the agents to meet, compared to the time
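As an illustration of what each agent can compute in this setting (a sketch of the agent's own view, not the rendezvous algorithm of [22]): knowing the topology and its own traversal times, an agent can determine how quickly it can reach any node, e.g. with Dijkstra's algorithm.

import heapq

def travel_times(adj, source):
    """Single-agent view of the setting above: adj[v] is a list of
    (neighbor, traversal_time) pairs for this agent. Dijkstra's algorithm
    computes how soon the agent can reach every node when starting from
    `source` at time 0. (Illustration only, not the algorithm of [22].)"""
    best = {source: 0}
    heap = [(0, source)]
    while heap:
        t, v = heapq.heappop(heap)
        if t > best.get(v, float("inf")):
            continue                      # stale queue entry
        for w, cost in adj[v]:
            if t + cost < best.get(w, float("inf")):
                best[w] = t + cost
                heapq.heappush(heap, (t + cost, w))
    return best

# Toy example: a triangle where this agent is slow on the edge (0, 2).
graph = {0: [(1, 1), (2, 10)], 1: [(0, 1), (2, 2)], 2: [(0, 10), (1, 2)]}
print(travel_times(graph, 0))   # {0: 0, 1: 1, 2: 3}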
Rendezvous with Different Speeds
In [32], we introduce the study of the rendezvous problem in the context of agents having different speeds, and present tight and almost-tight bounds for this problem, restricted to a ring topology.
Fair Synchronization
A non-blocking implementation of a concurrent object is an implementation that does not prevent concurrent accesses to the internal representation of the object, while guaranteeing the deadlock-freedom progress condition without using locks. In a failure-free context, G. Taubenfeld introduced (DISC 2013) a simple modular approach, captured by a new problem called the fair synchronization problem, to transform a non-blocking implementation into a starvation-free implementation satisfying a strong fairness requirement.
This approach is illustrated in [19] with the implementation of a concurrent stack. The spirit of the paper is mainly pedagogical. Its aim is not to introduce new concepts or algorithms, but to show that a powerful, simple, and modular transformation can provide concurrent objects with strong fairness properties.
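For readers unfamiliar with the notion, here is a sketch of a classic non-blocking (Treiber-style) stack, the kind of object such a transformation starts from. This is an assumption-laden illustration, not necessarily the stack of [19], and Python has no hardware compare-and-swap, so a tiny lock stands in for an atomic CAS:

import threading

class _Node:
    __slots__ = ("value", "next")
    def __init__(self, value, nxt):
        self.value, self.next = value, nxt

class TreiberStack:
    """Sketch of a classic non-blocking (Treiber) stack: operations retry a
    compare-and-swap on the top pointer instead of holding a lock around the
    whole operation. Python lacks a hardware CAS on object fields, so _cas
    emulates one with a tiny lock; in C/C++/Java the retry loops below would
    use a real atomic CAS. Not necessarily the stack used in [19]."""
    def __init__(self):
        self._top = None
        self._cas_lock = threading.Lock()   # stand-in for an atomic CAS

    def _cas(self, expected, new):
        with self._cas_lock:
            if self._top is expected:
                self._top = new
                return True
            return False

    def push(self, value):
        while True:                          # retry loop: the essence of non-blocking
            old = self._top
            if self._cas(old, _Node(value, old)):
                return

    def pop(self):
        while True:
            old = self._top
            if old is None:
                return None                  # empty stack
            if self._cas(old, old.next):
                return old.value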
In [20], we extend this approach in several directions. We first generalize the fair synchronization problem to read/write asynchronous systems where any number of processes may crash. Then, we introduce a new failure detector and use it to solve the fair synchronization problem when processes may crash. This failure detector, denoted
Wait Free with Advice
In [7], we motivate and propose a new way of thinking about failure detectors which allows us to define, quite surprisingly, what it means to solve a distributed task wait-free using a failure detector. In our model, the system is composed of computation processes, which obtain inputs and are supposed to produce outputs, and of synchronization processes, which are subject to failures and can query a failure detector.
Under the condition that correct synchronization processes take sufficiently many steps, they provide the computation processes with enough advice to solve the given task wait-free: every computation process outputs in a finite number of its own steps, regardless of the behavior of other computation processes.
Every task can thus be characterized by the weakest failure detector that allows for solving it, and we show that every such failure detector captures a form of set agreement. We then obtain a complete classification of tasks, including ones that evaded comprehensible characterization so far, such as renaming or weak symmetry breaking.
Adaptive Register Allocation
In [18], we give an adaptive algorithm in which
processes use multi-writer multi-reader registers to acquire exclusive write
access to their own single-writer, multi-reader registers. It is the first such
algorithm that uses a number of registers linear in the number of participating
processes. Previous adaptive algorithms require at least
Leader Election
Considering the case of homonym processes (some processes may share the same identifier) on a ring [21], we give a necessary
and sufficient condition on the number of identifiers to enable leader
election. We prove that if
Concurrency and Fault-tolerance
In [15], we study the connections between
self-stabilization and proof-labeling schemes. It follows from the definition of
silent self-stabilization, and from the definition of
proof-labeling scheme, that if there exists a silent self-stabilizing
algorithm using
In [27], we study the connections between, on the
one hand, asynchrony and concurrency, and, on the other hand, the quality of the
expected solution of a distributed algorithm. The state machine approach is a
well-known technique for building distributed services requiring high
performance and high availability, by replicating servers, and by coordinating
client interactions with server replicas using consensus. Indulgent consensus
algorithms exist for realistic eventually partially synchronous models; they never violate safety and guarantee liveness once the system becomes synchronous. Unavoidably, these algorithms may never terminate, even when no
processor crashes, if the system never becomes synchronous. We propose a
mechanism similar to state machine replication, called RC-simulation,
that can always make progress, even if the system is never synchronous. Using
RC-simulation, the quality of the service will adjust to the current level of
asynchrony of the network — degrading when the system is very asynchronous,
and improving when the system becomes more synchronous. RC-simulation
generalizes the state machine approach in the following sense: when the system
is asynchronous, the system behaves as if
Quantum Computing
In [1], we provide illustrative examples of distributed
computing problems for which it is possible to establish tight lower bounds for quantum algorithms without having to manipulate concepts from quantum mechanics at all. As a case study, we address the following class of 2-player
problems. Alice (resp., Bob) receives a boolean
Distributed Decision and Verification
Randomization
In [12], we study the power of randomization in the context of locality by analyzing the ability to “boost” the success probability of deciding a distributed language. The main outcome of this analysis is that the distributed computing setting contrasts significantly with the sequential one as far as randomization is concerned. Indeed, we prove that in some cases, the ability to increase the success probability for deciding distributed languages is rather limited.
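For context, the success probability of a distributed decision algorithm is usually formalized along the following lines in this line of work (we recall the standard (p, q)-decider definition here; [12] may use a variant): an algorithm A is a (p, q)-decider for a distributed language \(\mathcal{L}\) if

\[
\begin{aligned}
&(G,x) \in \mathcal{L} \;\Rightarrow\; \Pr[\text{every node accepts}] \ge p,\\
&(G,x) \notin \mathcal{L} \;\Rightarrow\; \Pr[\text{at least one node rejects}] \ge q.
\end{aligned}
\]

Boosting then asks whether every (p, q)-decider can be turned into a (p', q')-decider with higher success probabilities p' > p and q' > q.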
Model Variants
In a series of papers [14], [28], we analyze distributed decision in the context of various models for distributed computing.
In [28], we carry on the effort of bridging runtime verification with distributed computability, studying necessary conditions for monitoring failure-prone asynchronous distributed systems. It has recently been proved that there are correctness properties whose monitoring requires a large number of opinions, an opinion being of the form true, false, perhaps, probably true, probably false, etc. The main outcome of this paper is to
show that this large number of opinions is not an artifact induced by the
existence of artificial constructions. Instead, monitoring an important class of
properties, requiring processes to produce at most
Finally, in [14], we tackle local
distributed testing of graph properties. This framework is well suited to
contexts in which data dispersed among the nodes of a network can be collected
by some central authority (as in, e.g., sensor networks). In local distributed testing, each node can provide the central authority with only a small amount of information about what it perceives of its neighboring environment, and, based on the collected information, the central authority aims at deciding whether or not the network satisfies some property. We analyze in depth the prominent
example of checking cycle-freeness, and establish tight bounds on the
amount of information to be transferred by each node to the central authority
for deciding cycle-freeness. In particular, we show that distributively testing
cycle-freeness requires at least
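To give intuition about the communication model (only the model; this is not a protocol or a bound from [14]): if the network is additionally known to be connected, a trivial scheme in which each node reports just its degree already lets the authority decide cycle-freeness, since a connected n-node graph is a tree if and only if it has exactly n - 1 edges.

def authority_decides_cycle_free(degree_reports, n):
    """Central-authority side of a toy local-testing scheme: under the extra
    assumption that the n-node network is connected, the graph is cycle-free
    iff it has exactly n - 1 edges, i.e. iff the reported degrees sum to
    2 * (n - 1). (Illustration of the model only, not a result of [14].)"""
    return sum(degree_reports) == 2 * (n - 1)

# Each node v simply reports deg(v).
print(authority_decides_cycle_free([2, 2, 2, 2], 4))   # False: a 4-cycle
print(authority_decides_cycle_free([1, 2, 2, 1], 4))   # True: a 4-node path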
Voting Systems
In [44], [38], we consider a general framework for voting systems with arbitrary types of ballots, such as orders of preference, grades, etc. We investigate their manipulability: in which states of the population may a coalition of electors, by casting insincere ballots, secure a result that is better from their point of view?
We show that, for a large class of voting systems, a simple modification reduces manipulability. This modification is Condorcification: when there is a Condorcet winner, designate her; otherwise, use the original rule.
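A minimal sketch of Condorcification, restricted for simplicity to complete strict-ranking ballots (the general framework of [44], [38] allows other ballot types; the plurality base rule and the helper names below are just for illustration):

from itertools import combinations

def condorcet_winner(rankings):
    """Return the Condorcet winner of a profile of strict rankings (each
    ranking lists the candidates, most preferred first), or None if no
    candidate beats every other candidate in pairwise majority contests."""
    candidates = set(rankings[0])          # assumes every ballot ranks all candidates
    def beats(a, b):
        prefer_a = sum(r.index(a) < r.index(b) for r in rankings)
        return prefer_a > len(rankings) / 2
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None

def condorcify(rule):
    """Condorcification of a voting rule: elect the Condorcet winner when one
    exists, otherwise fall back to the original rule."""
    def new_rule(rankings):
        w = condorcet_winner(rankings)
        return w if w is not None else rule(rankings)
    return new_rule

# Hypothetical usage with plurality as the base rule.
def plurality(rankings):
    tops = [r[0] for r in rankings]
    return max(set(tops), key=tops.count)

profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"],
           ["b", "a", "c"], ["b", "c", "a"]]
print(condorcify(plurality)(profile))   # "b" is the Condorcet winner here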
When electors are independent, for any non-ordinal voting system (i.e., one requiring information that is not included in the orders of preference, for example grades), we prove that there exists an ordinal voting system whose manipulability rate is at most as high and which meets some other desirable properties. Furthermore, this result also holds when voters are not independent but the culture is decomposable, a weaker condition that we define.
Combining both results, we conclude that when searching for a voting system whose manipulability is minimal (within a large class of systems), one can restrict attention to voting systems that are ordinal and meet the Condorcet criterion.
In [35], we examine the geometrical properties of the space of expected utilities over a finite set of options, which is commonly used to model the preferences of an agent. We focus on the case where options are assumed to be symmetrical a priori, which is a classical neutrality assumption when studying voting systems. Specifically, we prove that the only Riemannian metric that respects the geometrical properties and the natural symmetries of the utility space is the round metric. Whereas the Impartial Culture assumption is widely used in the Social Choice literature but is limited to ordinal preferences, our theoretical result allows us to extend it canonically to cardinal preferences.
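For intuition, here is the construction we assume underlies this statement (a sketch, not necessarily the exact formalization of [35]): an expected-utility vector u over m options is defined only up to a positive affine transformation,

\[
u \;\sim\; a\,u + b\,\mathbf{1}, \qquad a > 0,\; b \in \mathbb{R},
\]

so each non-indifferent equivalence class has a unique representative satisfying

\[
\sum_{i=1}^{m} u_i = 0 \qquad\text{and}\qquad \lVert u \rVert_2 = 1,
\]

which identifies the utility space with the sphere \(S^{m-2}\); the round metric is then the uniform metric on that sphere.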
In [25], we study the manipulability of voting systems in a real-life experiment: electing the best paper at the conference Algotel 2012. Based on real ballots, we provide a quantitative study of manipulability as a function of the voting system used. We show that, even in a situation where all voting systems give the same winner under sincere voting, choosing the voting system is critical, because it has a huge impact on manipulability. In particular, one voting system fares far better than the others: Instant-Runoff Voting.
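For reference, a minimal sketch of the standard Instant-Runoff rule on complete strict rankings (the ballot format and tie-breaking used in [25] may differ):

from collections import Counter

def instant_runoff_winner(rankings):
    """Instant-Runoff Voting on strict ranking ballots (most preferred first):
    repeatedly eliminate a candidate with the fewest top votes until some
    candidate holds a strict majority of the remaining top votes."""
    remaining = set(rankings[0])           # assumes every ballot ranks all candidates
    while True:
        tops = Counter(next(c for c in r if c in remaining) for r in rankings)
        leader, votes = tops.most_common(1)[0]
        if votes * 2 > len(rankings) or len(remaining) == 1:
            return leader
        # Eliminate a candidate with the fewest top votes (ties broken arbitrarily).
        loser = min(remaining, key=lambda c: tops.get(c, 0))
        remaining.remove(loser)

ballots = [["a", "b", "c"]] * 4 + [["b", "c", "a"]] * 3 + [["c", "b", "a"]] * 2
print(instant_runoff_winner(ballots))   # "c" is eliminated first, then "b" wins 5-4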