## Section: New Results

### Models and Algorithms for Networks

#### Exploiting Hopsets: Improved Distance Oracles for Graphs of Constant Highway Dimension and Beyond

For fixed $h\ge 2$, we consider in [25] the task of adding to a graph $G$ a set of weighted shortcut edges on the same vertex set, such that the length of a shortest $h$-hop path between any pair of vertices in the augmented graph is exactly the same as the original distance between these vertices in $G$. A set of shortcut edges with this property is called an *exact $h$-hopset* and may be applied in processing distance queries on graph $G$. In particular, a 2-hopset directly corresponds to a distributed distance oracle known as a *hub labeling*. In this work, we explore centralized distance oracles based on 3-hopsets and demonstrate their advantages in several practical scenarios. Notably, for graphs of constant highway dimension, and more generally for graphs of constant skeleton dimension, we show that 3-hopsets require *exponentially* fewer shortcuts per node than any previously described distance oracle, and also offer a speedup in query time when compared to simple oracles based on a direct application of 2-hopsets. Finally, we consider the problem of computing a minimum-size $h$-hopset (for any $h\ge 2$) for a given graph $G$, showing a polylogarithmic-factor approximation for the case of unique shortest path graphs. For $h=3$, given a bound on the space used by the distance oracle, we provide a hopset construction achieving a polylogarithmic approximation in both space and query time compared to the optimal 3-hopset oracle within that space bound.
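As a concrete illustration of the hopset definition only (not of the oracle constructions from [25]), the following sketch checks whether a candidate shortcut set is an exact $h$-hopset by comparing hop-bounded distances in the augmented graph against true distances; the example graph, shortcut set, and all function names are ours.

```python
import heapq

def dijkstra(adj, s):
    # standard Dijkstra over a weighted adjacency dict {u: {v: w}}
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def hop_bounded_dist(adj, s, t, h):
    # Bellman-Ford relaxation restricted to paths of at most h edges
    dist = {s: 0}
    for _ in range(h):
        nxt = dict(dist)
        for u, d in dist.items():
            for v, w in adj[u].items():
                if d + w < nxt.get(v, float("inf")):
                    nxt[v] = d + w
        dist = nxt
    return dist.get(t, float("inf"))

def is_exact_hopset(adj, shortcuts, h):
    # augment the graph with the (undirected) shortcut edges
    aug = {u: dict(nbrs) for u, nbrs in adj.items()}
    for (u, v), w in shortcuts.items():
        aug[u][v] = min(aug[u].get(v, float("inf")), w)
        aug[v][u] = min(aug[v].get(u, float("inf")), w)
    # every h-hop-bounded distance in the augmented graph must
    # equal the true distance in the original graph
    for s in adj:
        true = dijkstra(adj, s)
        for t in adj:
            if hop_bounded_dist(aug, s, t, h) != true.get(t, float("inf")):
                return False
    return True

# unit-weight path 0-1-2-3
path = {0: {1: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {2: 1}}
# a single shortcut through vertex 1 makes every pair reachable in 2 hops
shortcuts = {(1, 3): 2}
print(is_exact_hopset(path, shortcuts, 2))  # True
```

Without the shortcut, the pair $(0,3)$ needs three hops, so the empty set is a valid 3-hopset for this path but not a 2-hopset.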

#### Hardness of Exact Distance Queries in Sparse Graphs through Hub Labeling

A *distance labeling scheme* is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. An important class of distance labeling schemes is that of *hub labelings*, where each node $v\in V$ stores its distances to a set of so-called hubs ${S}_{v}\subseteq V$, chosen so that for any $u,v\in V$ there is a $w\in {S}_{u}\cap {S}_{v}$ belonging to some shortest $u$-$v$ path. Notice that for most known graph classes, the best existing distance labeling constructions use a hub labeling scheme, at least as a key building block.
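The decoding step of a hub labeling can be sketched as follows; by the cover property, minimizing over the common hubs recovers the exact distance. The labels below, hand-built for a unit-weight path graph with hubs 1 and 2, and the function name are illustrative.

```python
def hub_query(labels, u, v):
    # labels[x] maps each hub w in S_x to dist(x, w); the cover property
    # guarantees some common hub lies on a shortest u-v path
    common = labels[u].keys() & labels[v].keys()
    return min(labels[u][w] + labels[v][w] for w in common)

# hub labels for the unit-weight path 0-1-2-3, using 1 and 2 as hubs
# (each node also stores itself when needed to cover trivial queries)
labels = {
    0: {0: 0, 1: 1, 2: 2},
    1: {1: 0, 2: 1},
    2: {1: 1, 2: 0},
    3: {1: 2, 2: 1, 3: 0},
}
print(hub_query(labels, 0, 3))  # 3
```

Query time is proportional to the label sizes, which is why bounding the average hub set size is the central question in [28].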

In [28], our interest lies in hub labelings of sparse graphs, i.e., those with $|E(G)| = O(n)$, for which we show a lower bound of $\frac{n}{2^{O(\sqrt{\log n})}}$ on the average size of the hub sets. Additionally, we show a hub labeling construction for sparse graphs of average size $O\left(\frac{n}{RS(n)^{c}}\right)$ for some $0<c<1$, where $RS(n)$ is the so-called Ruzsa–Szemerédi function, linked to the structure of induced matchings in dense graphs. This implies that further improving the lower bound on hub labeling size to $\frac{n}{2^{(\log n)^{o(1)}}}$ would require a breakthrough in the study of lower bounds on $RS(n)$, which have resisted substantial improvement for the last 70 years.

For general distance labelings of sparse graphs, we show a lower bound of $\frac{1}{2^{\Theta(\sqrt{\log n})}}\, \mathit{SumIndex}(n)$, where $\mathit{SumIndex}(n)$ is the communication complexity of the Sum-Index problem over $\mathbb{Z}_n$. Our results suggest that the best achievable hub label size and distance label size in sparse graphs may be $\Theta\left(\frac{n}{2^{(\log n)^{c}}}\right)$ for some $0<c<1$.

#### Fast Public Transit Routing with Unrestricted Walking through Hub Labeling

In [30], we propose a novel technique for answering routing queries in public transportation networks that allow unrestricted walking. We consider several types of queries: earliest arrival time; Pareto-optimal journeys with respect to arrival time, number of transfers, and walking time; and profile queries, i.e., finding all Pareto-optimal journeys with respect to travel time and arrival time in a given time interval. Our technique uses hub labeling to represent unlimited foot transfers and can be adapted to both of the classical algorithms RAPTOR and CSA. We obtain significant speedups compared to the state-of-the-art approach based on contraction hierarchies. A research report version is deposited on HAL under number hal-02161283.
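The Pareto-optimality criterion over journeys can be illustrated with a small dominance filter; the tuple layout and names below are ours, not the data structures of [30].

```python
def dominates(a, b):
    # a dominates b if a is no worse in every criterion and strictly
    # better in at least one (smaller is better for all criteria here)
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_filter(journeys):
    # keep only journeys not dominated by any other journey
    return [j for j in journeys
            if not any(dominates(k, j) for k in journeys)]

# (arrival time, number of transfers, walking time in minutes)
journeys = [(900, 2, 5), (905, 1, 5), (910, 1, 2), (915, 3, 10)]
print(pareto_filter(journeys))  # the last journey is dominated
```

A profile query returns such a Pareto set for every departure in the requested time interval.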

#### Independent Lazy Better-Response Dynamics on Network Games

In [29], we study *independent* lazy better-response dynamics on network games, in which the nodes (players) decide to revise their strategies independently with some probability. We are interested in the *convergence time* to an equilibrium as a function of this probability, the degree of the network, and the potential of the underlying games.
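As a hedged illustration of such dynamics (not the exact model analyzed in [29]), the sketch below simulates independent revisions in a max-cut potential game: each round, every player independently wakes up with probability $p$ and switches sides when that is a better response. The choice of game and all names are ours.

```python
import random

def improves(adj, side, v):
    # in the max-cut game a player benefits from switching when more
    # neighbors share its current side than oppose it
    same = sum(1 for u in adj[v] if side[u] == side[v])
    return same > len(adj[v]) - same

def lazy_better_response(adj, p, seed=0, max_rounds=10000):
    # each round, every player independently wakes up with probability p
    # and switches if that is a better response; the cut size acts as a
    # potential function, so for p < 1 the dynamics converge a.s.
    rng = random.Random(seed)
    side = {v: rng.randint(0, 1) for v in adj}
    for rounds in range(max_rounds):
        if not any(improves(adj, side, v) for v in adj):
            return rounds, side
        # simultaneous updates: movers are chosen w.r.t. the old state
        movers = [v for v in adj
                  if rng.random() < p and improves(adj, side, v)]
        for v in movers:
            side[v] = 1 - side[v]
    return max_rounds, side

# 6-cycle: run the dynamics until no player can improve
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
rounds, side = lazy_better_response(cycle, 0.5)
```

Note that with $p$ close to 1, simultaneous switches by neighboring players can undo each other, which is precisely why the revision probability enters the convergence-time bounds.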

#### A Comparative Study of Neural Network Compression

The recent effectiveness of neural networks in several applications has created a desire to evaluate them locally on computationally limited devices; this effectiveness has nevertheless come together with a considerable increase in the size of modern neural networks, a major downside in such computationally limited settings. There has thus been a demand for compression techniques for neural networks. Several proposals in this direction have been made, famously including hashing-based and pruning-based methods. However, the evaluation of the efficacy of these techniques has so far been heterogeneous, with no clear evidence in favor of any of them over the others.

In [36], we address this issue by providing a comparative study. While most previous studies test the capability of a technique in reducing the number of parameters of state-of-the-art networks, we follow [CWT+15] in evaluating their performance on basic architectures on the MNIST dataset and variants of it, which allows for a clearer analysis of some aspects of their behavior. To the best of our knowledge, we are the first to directly compare well-known approaches such as HashedNet, Optimal Brain Damage (OBD), and magnitude-based pruning with L1 and L2 regularization, both among themselves and against equivalent-size feed-forward neural networks, for simple (fully-connected) and structural (convolutional) architectures. Rather surprisingly, our experiments show that (iterative) pruning-based methods are substantially better than the HashedNet architecture, whose compression does not appear advantageous compared to a carefully chosen convolutional network. We also show that, at high compression levels, the well-known OBD pruning heuristic deteriorates to the point of being less efficient than simple magnitude-based techniques.
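The magnitude-based pruning baseline mentioned above can be sketched in a few lines (this is a minimal illustration, not the experimental code of [36]): zero out the smallest-magnitude weights, keeping only a target fraction. In the iterative variant, such pruning steps are interleaved with retraining, which is omitted here.

```python
def magnitude_prune(weights, keep_fraction):
    # zero out the smallest-magnitude weights, keeping roughly the
    # requested fraction (ties at the threshold are kept)
    k = int(len(weights) * keep_fraction)
    if k == 0:
        return [0.0 for _ in weights]
    threshold = sorted(abs(w) for w in weights)[-k]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.1, -0.5, 0.05, 2.0, -0.01, 0.3]
print(magnitude_prune(w, 0.5))  # keeps the 3 largest-magnitude weights
```

OBD instead ranks weights by a second-order estimate of their effect on the loss, which is what degrades at high compression levels in our experiments.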