## Section: New Results

### Distributed Algorithms for Dynamic Networks and Fault Tolerance

Participants: Luciana Bezerra Arantes [correspondent], Sébastien Bouchard, Marjorie Bournat, João Paulo de Araujo, Swan Dubois, Laurent Feuilloley, Denis Jeanneau, Jonathan Lejeune, Franck Petit, Pierre Sens, Julien Sopena.

Nowadays, distributed systems are increasingly heterogeneous and versatile.
Computing units can join, leave, or move inside a global infrastructure.
These features require the implementation of *dynamic* systems, that is to say, systems that can cope autonomously with changes
in their structure, in terms of both physical facilities and software. It therefore becomes necessary to define, develop, and validate
distributed algorithms able to manage such dynamic and large-scale systems, for instance mobile *ad hoc*
networks, (mobile) sensor networks, P2P systems, Cloud environments, and robot networks, to name only a few.

The departure, arrival, or movement of computing units may or may not be intentional. In the latter case, the system may be subject to disruptions due to component faults, which can be permanent, transient, exogenous, malicious, etc. It is therefore crucial to come up with solutions tolerating some types of faults.

In 2019, we obtained the following results.

#### Failure detectors

Mutual exclusion is one of the fundamental problems in distributed computing, but existing mutual exclusion algorithms are ill-suited to the dynamics and lack of membership knowledge of current distributed systems (e.g., mobile ad hoc networks and peer-to-peer systems). Additionally, in order to circumvent the impossibility of solving mutual exclusion in asynchronous message-passing systems where processes can crash, some solutions rely on $(\mathcal{T} + \Sigma^{l})$, the weakest failure detector to solve mutual exclusion in known static distributed systems. In [28], we define a new failure detector $\mathcal{T}\Sigma^{lr}$ which is equivalent to $(\mathcal{T} + \Sigma^{l})$ in known static systems, and prove that $\mathcal{T}\Sigma^{lr}$ is the weakest failure detector to solve mutual exclusion in unknown dynamic systems with partial memory losses. We consider that crashed processes may recover.

Assuming a message-passing environment with a majority of correct processes, the necessary and sufficient information about failures for implementing a general state machine replication scheme ensuring consistency is captured by the $\Omega $ failure detector. We show in [19] that in such a message-passing environment, $\Omega $ is also the weakest failure detector to implement an eventually consistent replicated service, where replicas are expected to agree on the evolution of the service state only after some (a priori unknown) time.
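As a rough intuition for the $\Omega$ abstraction (and not the construction of [19]), the toy sketch below simulates eventual leader election in a crash-stop setting: once heartbeats from crashed processes stop arriving, every correct process trusts the same correct process. The heartbeat model and the smallest-id rule are illustrative assumptions.

```python
# Toy simulation of the Omega (eventual leader) failure detector in a
# crash-stop setting: after failures cease, all correct processes output
# the same correct leader. Illustrative only; not the scheme of [19].

def elect(ids, crashed, heard):
    # heard[p] is the set of processes p currently receives heartbeats from;
    # p trusts the smallest live id among them (itself included).
    return {p: min((heard[p] | {p}) - crashed) for p in ids if p not in crashed}

ids = {1, 2, 3, 4}
crashed = {1}                              # process 1 has crashed
heard = {p: ids - crashed for p in ids}    # eventually, only live heartbeats arrive
leaders = elect(ids, crashed, heard)
# all correct processes agree on the same correct leader
assert set(leaders.values()) == {2}
```

While crashes are still occurring, different processes may temporarily trust different leaders; $\Omega$ only guarantees eventual agreement on a single correct process, which is exactly what eventually consistent replication needs.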

#### Scheduler Tolerant to Temporal Failures in Clouds

Cloud platforms offer different types of virtual machines with different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in the Amazon EC2 cloud, the user pays per hour for on-demand instances, while spot instances are unused resources available at a lower price. Despite the monetary advantage, a spot instance can be terminated or hibernated by EC2 at any moment. Using both hibernation-prone spot instances (to reduce cost) and on-demand instances, we propose in [31] a static scheduling approach for applications composed of independent tasks (bag-of-tasks) with deadline constraints. However, if a spot instance hibernates and does not resume soon enough to meet the application's deadline, a temporal failure takes place. Our scheduling thus aims at minimizing the monetary cost of bag-of-tasks applications in the EC2 cloud while respecting their deadlines and avoiding temporal failures. Performance results with task execution traces, configurations of Amazon EC2 virtual machines, and EC2 market history confirm the effectiveness of our scheduling and its tolerance to temporal failures. In [30], we extend our approach to dynamic scheduling.
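The core trade-off can be sketched as follows. This is a deliberately simplified placement rule, not the scheduler of [31]: a task goes on a cheap, hibernation-prone spot VM only if it would still meet the deadline after a worst-case hibernation; otherwise it falls back to a pricier on-demand VM. All prices, durations, and the `max_hibernation` slack model are illustrative assumptions.

```python
# Toy cost-minimizing placement of independent (bag-of-tasks) tasks on
# spot vs. on-demand VMs under a deadline. Illustrative only.

def schedule(tasks, deadline, max_hibernation, spot_price, ondemand_price):
    placement, cost = {}, 0.0
    for name, duration in tasks.items():
        # Spot is safe only if the task meets the deadline even after a
        # worst-case hibernation; otherwise use on-demand to avoid a
        # temporal failure.
        if duration + max_hibernation <= deadline:
            placement[name] = "spot"
            cost += duration * spot_price
        else:
            placement[name] = "on-demand"
            cost += duration * ondemand_price
    return placement, cost

tasks = {"t1": 2.0, "t2": 5.0}   # durations in hours
placement, cost = schedule(tasks, deadline=6.0, max_hibernation=2.0,
                           spot_price=0.1, ondemand_price=0.3)
# t1 fits on spot even if hibernated; t2 must run on-demand
assert placement == {"t1": "spot", "t2": "on-demand"}
```

A real scheduler must additionally pack tasks onto shared instances and react to actual hibernation events (the dynamic setting addressed in [30]), but the safety condition above captures why hibernation bounds and deadlines interact.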

#### Gathering of Mobile Agents

Gathering a group of mobile agents is a fundamental task in the field of distributed and mobile systems. It consists of bringing agents that initially start from different positions to meet all together in finite time. In the case when there are only two agents, the gathering problem is often referred to as the rendezvous problem.

In [14], we show that rendezvous under the strong scenario is possible for agents with asynchrony restricted in the following way: agents have the same measure of time, but the adversary can impose, for each agent and each edge, the speed at which this agent traverses this edge. The speeds may differ between edges and between agents, but all traversals of a given edge by a given agent must be at the same imposed speed. We construct a deterministic rendezvous algorithm for such agents, working in time polynomial in the size of the graph, in the length of the shorter label, and in the largest edge traversal time.

#### Perpetual self-stabilizing exploration of dynamic environments

In [15], we deal with the classical problem of exploring a ring by a cohort of synchronous robots. We focus on the perpetual version of this problem, in which each node of the ring must be visited by a robot infinitely often. We assume that the robots evolve in ring-shaped time-varying graphs (TVGs), *i.e.*, the static graph made of the same set of nodes and of all edges that are present at least once over time forms a ring of arbitrary size. We also assume that each node is infinitely often reachable from any other node. In this context, we aim at providing a self-stabilizing algorithm to the robots (*i.e.*, the algorithm must guarantee an eventual correct behavior regardless of the initial state and positions of the robots). We show that this problem is deterministically solvable in this harsh environment by providing a self-stabilizing algorithm for three robots.

#### Torus exploration by oblivious robots

In [17], we deal with a team of autonomous robots endowed with motion
actuators and visibility sensors. These robots are weak and evolve in
a discrete environment. By weak, we mean that they are anonymous,
uniform, unable to communicate explicitly, and oblivious. We first
show that it is impossible to solve the terminating exploration of a
simple torus of arbitrary size with fewer than 4 such robots if the
algorithm is probabilistic, or fewer than 5 if it is deterministic.
Next, we propose, in the SSYNC model, a probabilistic
solution for the terminating exploration of torus-shaped networks of
size $\ell \times L$, where $7\le \ell \le L$, by a team of 4 such
weak robots. This algorithm is therefore optimal *w.r.t.* the number of
robots.

#### Explicit communication among stigmergic robots

In [18], we investigate avenues for the exchange of information (explicit communication) among deaf and mute mobile robots scattered in the plane. We introduce the use of movement signals (analogous to flight signals and the bees' waggle dance) as a means to transfer messages, enabling the use of distributed algorithms among robots. We propose one-to-one deterministic movement protocols that implement explicit communication among semi-synchronous robots. Our protocols enable the use of distributed algorithms based on message exchanges among swarms of stigmergic robots. They also allow robots equipped with communication devices to tolerate faults in those devices.

#### Gradual stabilization

In [13], we introduce the notion of *gradual stabilization under
$(\tau ,\rho )$-dynamics* (gradual stabilization, for short). A
gradually stabilizing algorithm is a self-stabilizing algorithm with
the following additional feature: after up to $\tau $ *dynamic
steps* of a given type $\rho $ occur starting from a legitimate
configuration, it first quickly recovers to a configuration from which
a specification offering a minimum quality of service is satisfied.

It then gradually converges to specifications offering stronger and
stronger safety guarantees until reaching a configuration (1) from
which its initial (strong) specification is satisfied again, and (2)
where it is ready to achieve gradual convergence again in case of up
to $\tau $ new dynamic steps of type $\rho $.
Since a gradually stabilizing algorithm is also
self-stabilizing, it still recovers within finite time (yet more
slowly) after any other finite number of transient faults, including
for example more than $\tau $ arbitrary dynamic steps or other
failure patterns such as memory corruptions.
We illustrate this new property by considering three variants of a
synchronization problem, respectively called *strong*, *weak*, and *partial* unison. We propose a self-stabilizing
unison algorithm which achieves gradual stabilization in the following sense:
after one dynamic step of a certain type *BULCC* (such a step
may include several topological changes) occurs starting from a
configuration that is legitimate for strong unison, the algorithm
keeps the clocks almost synchronized during the convergence back to
strong unison: it satisfies partial unison immediately after the
dynamic step, then converges in at most one round to weak unison,
and finally re-stabilizes to strong unison.
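For intuition about what self-stabilizing unison means, the toy below simulates the classic "minimum of the closed neighborhood plus one" rule for synchronous unison with unbounded clocks on a static graph. This is an illustration of the problem, not the gradually stabilizing BULCC algorithm of [13]: starting from arbitrarily corrupted clocks, all clocks converge to a common value and then tick in lockstep.

```python
# Toy self-stabilizing synchronous unison (NOT the algorithm of [13]):
# each node sets its clock to 1 plus the minimum clock in its closed
# neighborhood. From any initial configuration on a connected graph,
# all clocks become equal and then advance together.

def unison_step(clocks, neighbors):
    return {v: min(clocks[u] for u in neighbors[v] | {v}) + 1 for v in clocks}

# A small path graph with arbitrary (corrupted) initial clock values.
neighbors = {0: {1}, 1: {0, 2}, 2: {1}}
clocks = {0: 7, 1: 0, 2: 3}
for _ in range(10):                  # synchronous rounds
    clocks = unison_step(clocks, neighbors)
# all clocks agree and keep agreeing from now on
assert len(set(clocks.values())) == 1
```

Gradual stabilization strengthens this picture: after a single BULCC dynamic step from a legitimate configuration, the clocks never drift far apart even during reconvergence, whereas plain self-stabilization allows arbitrary intermediate behavior.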