Gyroweb is a joint team between INRIA, CNRS and Paris 7 University, through the ``laboratoire d'informatique algorithmique, fondements et applications'' (LIAFA, UMR 7089).

Our goal is to develop the field of graph algorithms for networks. Based on algorithmic graph theory and graph modeling, we want to understand what can and cannot be done in these large networks. Furthermore, we want to derive practical distributed algorithms from known strong theoretical results. Finally, we want to identify possibly new graph problems by focusing on particular applications.

The main goals to achieve in networks are efficient searching for nodes or data, and efficient content transfer. We propose to implement strong theoretical results in that domain to make significant breakthroughs in large network algorithms. These results concern small world routing, low stretch routing in doubling metrics, and bounded width classes of graphs. They are detailed in the next section. This implies several challenges:

testing our target networks against general graph parameters known to bring theoretical tractability,

implementing strong theoretical results in the dynamic and distributed context of large networks.

A complementary approach consists in studying the combinatorial and graph structures that appear in our target networks. These structures may have inherent characteristics coming from the way the network is formed, or from the design goals of the target application. Understanding and characterizing such structures is a key step toward designing efficient algorithms for them.

Recent years have brought tremendous progress along the peer-to-peer paradigm, allowing large scale decentralized applications to be practically deployed. The main achievement of this trend is certainly efficient content distribution through the BitTorrent protocol. The power of peer-to-peer content distribution is to rely on the upload capacity of the nodes interested in receiving the content. This allows scaling to a very large number of participants. The main breakthrough of BitTorrent resides in its ``tit for tat'' strategy, inspired by game theory, which gives an incentive to cooperate. For that purpose, a peer preferentially uploads to the peers offering the best reciprocity. This kind of preference induces an interesting graph structure with ordered neighborhoods. Understanding the dynamic behavior of such affinity graphs is important for stabilizing the performance of such protocols.
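As a minimal illustration of the reciprocity-based preference described above, the following sketch ranks neighbors by the download rate they provided and uploads only to the best reciprocators; the function name, the rate values and the parameter `k` are ours, not taken from the BitTorrent specification.

```python
def select_unchoked(received: dict, k: int = 4) -> list:
    """Return the k neighbors that offered the best reciprocity.

    received maps a neighbor id to the amount of data downloaded from
    that neighbor during the last evaluation period.
    """
    ranked = sorted(received, key=lambda peer: received[peer], reverse=True)
    return ranked[:k]

# Illustrative rates: peer "d" reciprocated best, then "b", then "e".
rates = {"a": 120, "b": 340, "c": 75, "d": 510, "e": 260}
print(select_unchoked(rates, k=3))  # → ['d', 'b', 'e']
```

Each peer ordering its neighborhood this way is what produces the affinity graphs with ordered neighborhoods mentioned above.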

A second major achievement of the peer-to-peer paradigm concerns indexing with distributed hash tables. The idea behind these proposals is to organize the peers into a structure close to well known graphs with low diameter, such as high-dimensional tori, hypercubes, or de Bruijn graphs. Efficient routing to the node storing a given key is then guaranteed. This academic work has led to practical basic indexing facilities by introducing redundancy in the structure. This is typically the kind of approach we want to promote: from known efficient theoretical solutions to practical working protocols. We have contributed to this trend by introducing de Bruijn based solutions, with redundancy in the contact graph to resist node churn.
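The low-diameter routing guarantee can be sketched for the binary de Bruijn graph, where each hop shifts the node label and injects one bit of the destination, reaching any key in at most n hops on n-bit labels; this is a generic textbook sketch, not the team's protocol.

```python
def debruijn_route(src: str, dst: str) -> list:
    """Shift-based routing in a binary de Bruijn graph on n-bit labels.

    Node u1...un has an edge to u2...un b for each bit b, so shifting
    in the bits of dst one by one reaches dst in at most n hops.
    """
    path = [src]
    cur = src
    for bit in dst:
        cur = cur[1:] + bit  # drop the oldest bit, append the next bit of dst
        path.append(cur)
    return path

print(debruijn_route("000", "101"))  # → ['000', '001', '010', '101']
```

The path length is bounded by the label length, which is the logarithm of the network size: this is the low diameter that the DHT proposals exploit.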

Popularized emerging properties include degree distributions observed to be power law in many networks, clustering coefficients observed to be high in social networks, and low average distances. This last point gave the denomination ``small worlds'' to this type of network. Some works try to give models that give rise to such statistical properties. In that line, numerous results try to derive efficient algorithms based only on these statistical properties. This particular approach tends to concentrate load on nodes with high degrees and may not be suited for applications where nodes have similar capacities. Other interesting works try to explain these statistical observations as the outcome of an inherent optimization problem operating when the network is constructed.

On the other hand, in his seminal paper, Kleinberg focuses on the algorithmic aspects of such social networks and shows how adding random links to a torus can produce efficient greedy routing. This result has been extended to more general classes of graphs, such as bounded growth metric graphs. However, this augmentation process is not always feasible. Such theoretical work is particularly interesting for overlay networks, where the augmenting process simply consists in opening additional connections.
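A one-dimensional toy version of this augmentation can be sketched as follows: a ring is augmented with one harmonic (probability proportional to 1/d) long-range link per node, and greedy routing always moves to the neighbor closest to the destination. This is a simplified illustration of Kleinberg's construction, not a reproduction of his exact model.

```python
import random

def greedy_route(n, links, src, dst):
    """Greedy routing on an n-node ring augmented with one long-range
    link per node (links[u] is u's extra contact). Each step moves to
    the neighbor, ring or long-range, closest to dst in ring distance."""
    def ring_dist(a, b):
        d = abs(a - b) % n
        return min(d, n - d)

    hops, cur = 0, src
    while cur != dst:
        neighbors = [(cur - 1) % n, (cur + 1) % n, links[cur]]
        cur = min(neighbors, key=lambda v: ring_dist(v, dst))
        hops += 1  # a ring neighbor always gets closer, so this terminates
    return hops

random.seed(0)
n = 1024
# Harmonic shortcut lengths: P(length d) proportional to 1/d.
lengths = range(1, n // 2)
links = [(u + random.choices(lengths, weights=[1 / d for d in lengths])[0]) % n
         for u in range(n)]
print(greedy_route(n, links, 0, n // 2))  # typically far fewer than n/2 hops
```

With harmonic shortcuts the expected hop count is polylogarithmic in n, whereas purely local routing would need up to n/2 hops.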

Bounded growth and doubling metrics generalize Euclidean metrics. A metric has bounded growth if the size of any ball increases by a factor not larger than 2^d when its radius is doubled. A metric is doubling if any set of diameter D can be covered with at most 2^d sets of diameter D/2. In both cases, the smallest acceptable value of d is called the dimension of the metric. The metric of any d-dimensional Euclidean space has bounded growth dimension O(d). Any bounded growth metric of dimension d has doubling dimension O(d). The doubling metric is the most general and has the additional property of being inherited by subspaces: the metric induced by a doubling metric on a subset of nodes is also doubling. For example, sampling nodes in a Euclidean space always results in a doubling metric.
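The bounded growth property can be checked empirically on a point set by measuring how ball sizes react to doubling the radius; the sketch below does this for evenly spaced points of a line (a 1-dimensional space), where the growth factor stays below 2^2, consistent with a dimension in O(1). The point set and radii are illustrative choices of ours.

```python
def growth_ratio(points, center, r):
    """|B(center, 2r)| / |B(center, r)| under the absolute-value metric."""
    small = sum(1 for p in points if abs(p - center) <= r)
    big = sum(1 for p in points if abs(p - center) <= 2 * r)
    return big / small

points = list(range(100))  # samples of a 1-dimensional Euclidean space
ratios = [growth_ratio(points, c, r)
          for c in points for r in (0.5, 1, 2, 5, 10)]
print(max(ratios))  # bounded by 2**2, i.e. growth dimension at most 2
```

Testing real network metrics (e.g. Internet latencies) against such a bound is exactly the kind of experiment mentioned below.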

As networks are embedded in our usual three dimensional space, it is legitimate to think that some network metrics may be modeled through doubling metrics. Recent results thus investigate network problems for the restricted classes of graphs with bounded growth or doubling metrics. However, the doubling nature of large networks such as the Internet has still not been tested.

Many graph parameters, such as treewidth, branchwidth, rankwidth, cutwidth, and cliquewidth, have been introduced in recent years to measure the structure of a given graph. These parameters are of course NP-hard to compute, but when it can be proved that the parameter is bounded by a constant for a given class of graphs, then it can be proved using graph grammars (see Courcelle's fundamental work) that most optimization problems on this class are polynomially tractable; sometimes a linear-time algorithm is even known to exist (but the hidden constant can be very high!).

The most famous parameter, namely the treewidth, captures the distance of the graph to a tree; therefore, when the treewidth is small, a dynamic programming approach can be used.
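The dynamic programming approach can be illustrated on the simplest case, a graph that is exactly a tree (treewidth 1): maximum weight independent set is solved by keeping, for each node, the best value of its subtree with the node taken or not taken. This is a standard textbook sketch, shown here only to illustrate the technique.

```python
import sys

def max_independent_set(tree, weights, root=0):
    """Maximum weight independent set on a tree by dynamic programming.

    tree maps each node to its adjacency list; weights[u] is u's weight.
    For each node we return (best with u taken, best with u skipped).
    """
    sys.setrecursionlimit(10000)

    def solve(u, parent):
        take, skip = weights[u], 0
        for v in tree[u]:
            if v == parent:
                continue
            t, s = solve(v, u)
            take += s          # u is taken: its children must be skipped
            skip += max(t, s)  # u is skipped: each child is free
        return take, skip

    return max(solve(root, -1))

# A path on 4 nodes with weights 3, 1, 1, 3: the two endpoints give 6.
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(max_independent_set(tree, [3, 1, 1, 3]))  # → 6
```

On a graph of treewidth k the same idea generalizes by keeping one table entry per subset of a bag, at cost exponential in k but linear in the graph size.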

Despite some promising results, applications of these notions to networks, from a practical perspective, have still to be carried out.

Application domains include evaluating Internet performance, designing new peer-to-peer applications, enabling large scale ad hoc networks, and mapping the web.

The purpose of measuring and modeling Internet metrics such as latencies and bandwidth is to provide tools for optimizing Internet applications. This especially concerns large scale applications such as web site mirroring and peer-to-peer applications.

Peer-to-peer protocols are based on an all-equal paradigm that allows the design of highly reliable and scalable applications. Besides file sharing, peer-to-peer solutions could take over in web content dissemination resistant to high demand bursts, or in mobility management. Envisioned peer-to-peer applications include video on demand, streaming, exchange of classified ads, and so on.

Wifi networks have entered our everyday life. However, enabling them at large scale is still a challenge. Algorithmic breakthroughs in large ad hoc networks would allow their use for fast and economic deployment of new radio communication systems.

The main application of the web graph structure consists in ranking pages. Enabling site level indexing and ranking is a possible application of such studies.

The peer-to-peer paradigm can be used to duplicate sensitive data. Free disk space can be exchanged to ensure cooperative reliable storage. Anh-Tuan Gai, Fabrice Le-Fessant and Laurent Viennot are developing a peer-to-peer client for personal file sharing and backup. Anh-Tuan Gai has obtained an industrial postdoctoral position in order to create a startup company based on this prototype. Research aspects are developed in tight collaboration with Anne-Marie Kermarrec and Fabrice Le-Fessant (Asap).

A first release of the software is planned for 2007 as an open-source project. Several improvements are forecast:

cooperative caching,

indexing facilities allowing peer search and distributed DNS,

improvement of the backup system for dedicated environments such as universities or companies (where many computers may share the same files).

For use in the Cplex optimization suite, Dominique Fortin has implemented, in Java 1.5, field and ring operations that support arbitrary precision (using the Apfloat machinery). It is meant for quaternion or Eisenstein-like cases, e.g. for a fixed n which is involved in the enumeration of the extreme points of a polyhedral cone.

Yacine Boufkhad, Mehdi Nafa and Fabien de Montgolfier have developed a BitTorrent protocol simulator.

In , we generalize the first graph model of the small world routing property, presented by J. Kleinberg in 2000 as a mesh augmented by random links, to a larger class of graphs: bounded growth graphs. At that point, it became a crucial question in the understanding of the inherent structural properties of small world networks to determine whether such an augmentation is possible over any substructure. In , we answer this question negatively by exhibiting graphs having too large a doubling dimension to be augmented into small worlds.

For 0-1 optimization problems

min { f(x) : x ∈ D, x ∈ {0, 1}^n }

where f(·) is a convex function and D is a convex body in R^n, the difficulty lies in the binary constraint x ∈ {0, 1}^n, a nonconvex and discrete domain. It can be written in three different ways as an equivalent but continuous formulation.

While the first is celebrated under the lift-and-project approach, we focused on the second, in the recent past, from the piecewise convex maximization point of view; as for the third, it suggests splitting the interval [0, 1] into [0, ε], [ε, 1-ε] and [1-ε, 1], and recording how many times a variable has been assigned to 0, the middle value, or 1 in the continuous relaxation. This gives rise to an adaptive branch-and-bound strategy, independent of the problem itself, that interestingly competes with dedicated heuristics.
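The bookkeeping behind the third reformulation can be sketched as follows: each relaxed variable value is classified into one of the three subintervals, and the middle bucket flags candidates for branching. The threshold value and names are ours, chosen only for illustration.

```python
def classify(x: float, eps: float = 0.25):
    """Classify a relaxed variable value into [0, eps], [eps, 1-eps]
    or [1-eps, 1]; eps is an illustrative threshold, not a value from
    the original work."""
    if x <= eps:
        return 0       # counted as assigned to 0
    if x >= 1 - eps:
        return 1       # counted as assigned to 1
    return "mid"       # middle value: a candidate for branching

relaxed = [0.05, 0.8, 0.5, 0.97, 0.3]
print([classify(x) for x in relaxed])  # → [0, 1, 'mid', 1, 'mid']
```

Accumulating these counts over successive relaxations is what makes the branching strategy adaptive while remaining independent of the particular problem.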

In and , we present a generalization of modular decomposition, one of the most used graph decompositions. We show that such a decomposition can indeed apply to combinatorial structures including graphs, namely homogeneous relations. Classical graph algorithms can often be adapted to such structures.

MARDI is a collaboration contract between Inria and France Telecom. It gathers Gyroweb and Spontex (FT) around the study of decentralized networks over the Internet. Spontex is a transversal project on cooperative networks. The main persons involved in MARDI are Fabien Mathieu (FT R&D), Gwendal Simon (FT R&D), Diego Perino (PhD student, CIFRE) and Dmitry Lebedev (Postdoc) on the FT side, and Fabien de Montgolfier and Laurent Viennot on the Gyroweb side. Diego Perino is funded through this collaboration and co-supervised by Fabien Mathieu and Laurent Viennot.

A first aspect of the project consists in studying Internet latencies in order to understand how logical overlays can be optimized with respect to delays. A possible track for gathering valuable large scale measurements is to use a peer-to-peer network for measuring latencies. Interestingly, it is possible to find shortcuts in the Internet, where the route through a relay can be faster than the direct route.

This item is connected to the affinity model, where peers tend to connect preferentially to some peers based on some measured or inferred criteria. Connecting peers according to delays is a special case of affinity where a peer connects preferentially to peers with low RTT. Additional properties can be proven for this case, establishing the convergence of a dynamic system following this low RTT strategy.

The third part of the project aims at designing efficient structuring algorithms for decentralized applications. It relies on the previous parts. Measuring and modeling Internet latencies can be used to obtain a first coarse solution for a fast overlay, and the affinity models can be used to tune the solution and to adapt it under node churn.

Laurent Viennot is the head of the PairAPair national project.

Peer-to-peer networks have become the heaviest source of traffic in the Internet through the use of file sharing applications (such as Gnutella or Kazaa, for example). However, the protocols behind these applications are still too greedy and waste a lot of Internet resources. On the other hand, theoretical solutions based on distributed hash tables exist but cannot be used practically. The PairAPair project aims at bridging efficient theoretical solutions to practical applications such as file sharing.

The main goal of the project concerns the conception of peer-to-peer protocols. A first approach consists in optimizing algorithms for existing protocols (without changing the communication rules). Another way consists in developing new protocols based on efficient theoretical solutions. Other important aspects of peer-to-peer networks concern ethics: how can one accept to share one's resources if they can be used for immoral purposes? Designing protocols allowing the respect of certain rules will be another goal of the PairAPair project. Finally, analyzing and optimizing protocols requires models. For that purpose, crawling of existing peer-to-peer networks is envisioned.

The PairAPair project gathers members of four teams: Gyroweb (inria–liafa), GraFComm (lri), Graphes (labri) and Hipercom (inria). More information is available at http://gyroweb.inria.fr/pairapair/.

Michel Habib, Fabien de Montgolfier and Vincent Limouzy are involved in an important new ANR ``blanc'' (i.e. fundamental research) project about graph decompositions. This project will start in 2007 and includes Professor Bruno Courcelle and his group from Bordeaux University (LABRI), as well as Christophe Paul and Professor Stéphan Thomassé from Montpellier University (LIRMM).

Laurent Viennot is a scientific editor of the Interstices popularization site (http://interstices.info/), initiated by Inria in collaboration with French universities and CNRS.

Michel Habib is a member of the steering committees of STACS (Symposium on Theoretical Aspects of Computer Science) and of WG (International Workshop on Graph-Theoretic Concepts in Computer Science).

Michel Habib is in charge of a course entitled ``graph algorithms''. Laurent Viennot teaches ad hoc and web graph algorithms in the ``networks dynamics'' course.

Laurent Viennot is teaching foundations of computer science, java and networks (90 hours).

Michel Habib is in charge of a course about "Algorithmic Complexity", which includes approximation algorithms and parameterized complexity.

Yacine Boufkhad is teaching scientific computer science and networks (192 hours).

Fabien de Montgolfier is teaching foundations of computer science, algorithmics, and computer architecture (192 hours).

Michel Habib is in charge of two courses, entitled ``Search Engines'' and ``Parallelism and Mobility''; the latter includes peer-to-peer overlay networks.

Anh-Tuan Gai,
*Structuration en graphe de de Bruijn ou par incitation dans les réseaux de pair à pair*. PhD thesis, Univ. Paris 6, 2006.

Vincent Limouzy
*Algorithmes de décomposition de graphes* (MENRT).

Diego Perino
*Mesures dans Internet par et pour les réseaux décentralisés* (CIFRE France Telecom).