The development of interconnection networks has led to the emergence of new types of computing platforms. These platforms are characterized by heterogeneity of both processing and communication resources, geographical dispersion, and instability in terms of the number and performance of participating resources. These characteristics restrict the nature of the applications that can perform well on these platforms. Due to middleware and application deployment times, applications must be long-running and involve large amounts of data; also, only loosely-coupled applications may currently be executed on unstable platforms.
The new algorithmic challenges associated with these platforms have been approached from two different directions. On the one hand, the parallel algorithms community has largely concentrated on the problems associated with heterogeneity and large amounts of data. On the other hand, the distributed systems community has focused on scalability and fault-tolerance issues. The success of file sharing applications demonstrates the capacity of the resulting algorithms to manage huge volumes of data and users on large unstable platforms. Algorithms developed within this context are completely distributed and based on peer-to-peer (P2P for short) communication.
The goal of our project is to establish a link between these two directions, by gathering researchers from the distributed algorithms, data structures, parallel algorithms, and randomized algorithms communities. Indeed, the change of scale in distributed applications raises several new questions.
The first set of questions is related to distributed computations, where the Internet is the underlying network. Since the topology of the underlying network is unknown, the use of logical networks (overlays) is required. In turn, the choice of the overlay has an impact on the complexity of the algorithms. In this context, only the performance of the whole chain is meaningful, which requires collecting raw data, and then proposing network models and algorithms based on these models, such that the resulting algorithms perform well on the raw data. This also requires studying the influence of the topology of the overlay network (the underlying graph) on the complexity of fundamental questions, such as graph exploration or black-hole search.
The second set of questions is related to distributed data structures. In general, the question is that of the trade-off between the size of the data structure stored on each node and the time needed to answer a request (estimating the bandwidth between two nodes, computing the closest common ancestor of two nodes in a tree) or to perform a task (routing a message in a network based on the information stored at the router nodes).
In order to study these questions, our research plan is based on the following goals. Firstly, we aim at building strong foundations both for distributed algorithms (graph exploration, black-hole search, ...) and for distributed data structures (routing, efficient query, compact labeling, ...), to understand how to explore large scale networks in the presence of failures and how to disseminate data so as to answer specific queries quickly. Secondly, we aim at building simple (based on local estimations, without centralized knowledge) and realistic models to accurately represent resource performance and to build a realistic view of the topology of the network (based on network coordinates, geometric spanners, ...).
We will concentrate on the design of new services for computationally intensive applications consisting of mostly independent tasks sharing data, with applications to distributed storage, molecular dynamics, and distributed continuous integration; these will be described in more detail in Section .
Most of the research (including ours) currently carried out on these topics relies on a centralized knowledge of the whole (topology and performances) execution platform, whereas recent evolutions in computer networks technology yield a tremendous change in the scale of these networks. The solutions designed for scheduling and managing compact data structures must be adapted to these systems, characterized by a high dynamism of their entities (participants can join and leave at will), a potential instability of the large scale networks (on which concurrent applications are running), and the increasing probability of failure.
P2P systems have achieved stability and fault-tolerance, as witnessed by their wide and intensive usage, by changing the view of the networks: all communication occurs on a logical network (fixed even though resources change over time), thus abstracting away the actual performance of the underlying physical network. Nevertheless, disconnecting the physical and logical networks leads to low performance and a waste of resources. Moreover, due to their original use (file exchange), those systems are particularly well suited to exact search using Distributed Hash Tables (DHTs) and are based on fixed regular virtual topologies (Hypercubes, De Bruijn graphs, ...). In the context of the applications we consider, more complex queries and services will be required (finding the set of edges used for content distribution, finding a set of replicas covering the whole database) and, in order to reach efficiency, unstructured virtual topologies must be considered.
In this context, the main scientific challenges of our project are:
Models:
At a low level, to understand the underlying physical topology and to obtain models that are both realistic and instantiable. This requires expertise in graph theory (all the members of the project) and platform modelling (Olivier Beaumont, Nicolas Bonichon, Lionel Eyraud-Dubois and Ralf Klasing). The obtained results will be used to guide the algorithms designed in Section and Section .
At a higher level, to derive models of the dynamism of targeted platforms, both in terms of participating resources and resource performances (Olivier Beaumont, Philippe Duchon). Our goal is to derive suitable tools to analyze and prove algorithm performances in dynamic conditions rather than to propose stochastic modeling of evolutions (see Section ).
Overlays and distributed algorithms:
To understand how to augment the logical topology in order to achieve the good properties of P2P systems. This requires knowledge in P2P systems and small-world networks (Olivier Beaumont, Nicolas Bonichon, Philippe Duchon, Nicolas Hanusse, Cyril Gavoille). The obtained results will be used for developing the algorithms designed in Sections and .
To build overlays dedicated to specific applications and services that achieve good performances (Olivier Beaumont, Nicolas Bonichon, Philippe Duchon, Lionel Eyraud-Dubois, Ralf Klasing, Adrian Kosowski). The set of applications and services we target will be described in more details in Section .
To understand how to dynamically adapt scheduling algorithms (in particular collective communication schemes) to changes in network performance and topology, using randomized algorithms (Olivier Beaumont, Nicolas Bonichon, Nicolas Hanusse, Philippe Duchon, Adrian Kosowski, Ralf Klasing) (see Section ).
To study the computational power of mobile agent systems under various assumptions, on a few classical distributed computing problems (exploration, the mapping problem, exploration of the network in spite of harmful hosts). The goal is to enlarge the knowledge of the foundations of mobile agent computing. This will be done by developing new efficient algorithms for mobile agent systems and by proving impossibility results. This will also allow us to compare the different models (David Ilcinkas, Ralf Klasing, Adrian Kosowski, Evangelos Bampas) (see Section ).
Compact and distributed data structures:
To understand how to dynamically adapt compact data structures to changes in network performance and topology (Nicolas Hanusse, Cyril Gavoille) (Section )
To design sophisticated labeling schemes in order to answer complex predicates using local labels only (Nicolas Hanusse, Cyril Gavoille) (Section )
We will detail in Section how the various areas of expertise in the team will be employed for the considered applications.
We therefore tackle several problems related to two priorities that INRIA identified in its strategic plan (2008-2012): "Modeling, Simulation and Optimization of Complex Dynamic Systems" and "Information, Computation and Communication Everywhere".
The recent evolutions in computer networks technology, as well as their diversification, yield a tremendous change in the use of these networks: applications and systems can now be designed at a much larger scale than before. This scaling evolution concerns the amount of data, the number of computers, the number of users, and the geographical diversity of these users. This race towards large scale computing has two major implications. First, new opportunities are offered to applications, in particular as far as scientific computing, databases, and file sharing are concerned. Second, a large number of parallel or distributed algorithms developed for average size systems cannot be run on large scale systems without a significant degradation of their performance. In fact, one must probably relax the constraints that the system should satisfy in order to run at a larger scale. In particular, the coherence protocols designed for distributed applications are too demanding in terms of both message and time complexity, and must therefore be adapted for running at a larger scale.
The aggregation of the information stored at each node can be seen as a distributed data structure. Whenever an application sends a request in the network, the result corresponds to a combination of individual queries performed by some nodes. It is important, in terms of resource consumption and query time, that as few nodes as possible be requested.
Moreover, most distributed systems deployed nowadays are characterized by a high dynamism of their entities (participants can join and leave at will), a potential instability of the large scale networks (on which concurrent applications are running), and an increasing individual probability of failure. Therefore, as the size of the system increases, it becomes necessary for it to adapt automatically to changes in its components, requiring self-organization of the system to deal with the arrival and departure of participants, data, or resources. As it is not reasonable for each node to maintain exact knowledge of the network, oracles are promising: local and lightweight data structures that summarize different metrics about the network (approximate distances, latencies, ...).
As a consequence, it becomes crucial to be able to understand and model the behavior of large scale systems, to efficiently exploit these infrastructures, in particular w.r.t. designing dedicated algorithms handling a large amount of users and/or data.
In the case of parallel computation solutions, some strategies have been developed in order to cope with the intrinsic difficulty induced by resource heterogeneity. It has been proved that changing the metric (from makespan minimization to throughput maximization) simplifies most scheduling problems, both for collective communications and parallel processing. This restricts the use of target platforms to simple and regular applications, but given the time needed to develop and deploy applications on large scale distributed platforms, the risk of failures, and the intrinsic dynamism of resources, it is in any case unrealistic to consider tightly coupled applications involving many tight synchronizations. Nevertheless, (1) it is unclear how the current models can be adapted to large scale systems, and (2) the current methodology requires the use of (at least partially) centralized subroutines that cannot be run on large scale systems. In particular, these subroutines assume the ability to gather all the information regarding the network at a single node (topology, resource performance, etc.). This assumption is unrealistic on a general purpose large size platform, in which the nodes are unstable and the resource characteristics can vary abruptly over time. Moreover, the solutions proposed for small to average size, stable, and dedicated environments do not satisfy the minimal requirements for self-organization and fault-tolerance, two properties that are unavoidable in a large scale context. Therefore, there is a strong need to design efficient and decentralized algorithms. This requires in particular defining new metrics adapted to large scale dynamic platforms in order to analyze the performance of the proposed algorithms.
As already noted, P2P file sharing applications have been successfully deployed on large scale dynamic platforms. Nevertheless, since our goal is the design of efficient algorithms in terms of actual performance and resource consumption, we need to concentrate on specific P2P environments. Indeed, P2P protocols are mostly designed for file sharing applications, and are not optimized for scientific applications, nor are they adapted to sophisticated database applications. This is mainly due to the primitive goal of designing file sharing applications, where anonymity is crucial, exact queries only are used, and all large file communications are made at the IP level.
Unfortunately, the context strongly differs for the applications we consider in our project, and some of the constraints appear to be in contradiction with performance and resource consumption optimization. For instance, in these systems, due to anonymity, the number of neighboring nodes in the overlay network (i.e. the number of IP addresses known to each peer) is kept relatively low, much lower than what the memory constraints on the nodes actually impose. Such a constraint induces longer routes between peers, and is therefore in contradiction with performance. In those systems, with the main exception of the LAND overlay, the overlay network (induced by the connections of each peer) is kept as far as possible separate from the underlying physical network. This property is essential in order to cope with malicious attacks, i.e. to ensure that even if a geographic site is attacked and disconnected from the rest of the network, the overall network will remain connected. Again, since actual communications occur between peers connected in the overlay network, communications between two close nodes (in the physical network) may well involve many wide area messages, and therefore such a constraint is in contradiction with performance optimization. Fortunately, in the case of file sharing applications, only queries are transmitted using the overlay network, and the communication of large files is made at IP level. On the other hand, in the case of more complex communication schemes, such as broadcast or multicast, the communication of large files is done using the overlay network, due to the lack of support, at IP level, for those complex operations. In this case, in order to achieve good results, it is crucial that virtual and physical topologies be as close as possible.
The members of CEPAGE have been involved in the following program committees: IPDPS 2011 (Vice-Chair, Algorithm Track), PODC 2011 (General Chair), SIROCCO 2011 (Co-chair), EuroPar 2011 (Local Chair, P2P Track), STACS 2011, ESA 2011, FOMC 2011, ADHOC-NOW 2011, IWOCA 2011, IC3 2011, RENPAR 2011, ISCIS 2011, DISC 2011.
Nicolas Hanusse has become Directeur de Recherche at CNRS. (Ralf Klasing became Directeur de Recherche in 2010 and Cyril Gavoille became a junior member of the Institut Universitaire de France in 2009).
Adrian Kosowski has become a member of the junior chapter of the Polish Academy of Sciences for the term of office 2012–2016.
Modeling the platform dynamics in a satisfying manner, in order to design and analyze efficient algorithms, is a major challenge. In distributed platforms, the performance of individual nodes (be they computing or communication resources) will fluctuate; in a fully dynamic platform, the set of available nodes will also change over time, and algorithms must take these changes into account if they are to be efficient.
There are basically two ways one can model such evolution: one can use a stochastic process, or some kind of adversary model.
In a stochastic model, the platform evolution is governed by some specific probability distribution. One obvious advantage of such a model is that it can be simulated and, in many well-studied cases, analyzed in detail. The two main disadvantages are that it can be hard to determine how much of the resulting algorithm performance comes from the specifics of the evolution process, and that it is difficult to estimate how realistic a given model is – none of the current project participants are metrology experts.
In an adversary model, it is assumed that these unpredictable changes are under the control of an adversary whose goal is to interfere with the algorithm's efficiency. Major assumptions on the system's behavior can be included in the form of restrictions on what this adversary can do (such as maintaining a given level of connectivity). Such models are typically more general than stochastic models, in that many stochastic models can be seen as a probabilistic specialization of a nondeterministic model (at least for bounded time intervals, and up to negligible probabilities of adopting "forbidden" behaviors).
Since we aim at proving guaranteed performance for our algorithms, we want to concentrate on suitably restricted adversary models. The main challenge in this direction is thus to describe sets of restricted behaviors that both capture realistic situations and make it possible to prove such guarantees.
On the other hand, in order to establish complexity and approximation results, we also need to rely on a precise theoretical model of the targeted platforms.
At a lower level, several models have been proposed to describe interference between several simultaneous communications. In the 1-port model, a node cannot simultaneously send to (and/or receive from) more than one node. Most of the “steady state” scheduling results have been obtained using this model. On the other hand, some authors propose to model incoming and outgoing communication from a node using fictitious incoming and outgoing links, whose bandwidths are fixed. The main advantage of this model, although it might be slightly less accurate, is that it does not require strong synchronization and that many scheduling problems can be expressed as multi-commodity flow problems, for which efficient decentralized algorithms are known. Another important issue is to model the bandwidth actually allocated to each communication when several communications compete for the same long-distance link.
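One common way to model the bandwidth actually allocated to competing communications on a shared long-distance link is max-min fair sharing. The sketch below (with invented flow names, rate caps, and link capacity; this is one candidate model among several, not the one the project commits to) freezes flows whose cap is below the current equal share and lets the remaining flows split the residual capacity equally:

```python
# Max-min fair sharing of one bottleneck link among flows with individual
# rate caps (all values hypothetical).  Flows whose cap is below the current
# equal share are frozen at their cap; the others split the rest.

def max_min_share(caps, capacity):
    alloc = {f: 0.0 for f in caps}
    active = set(caps)
    remaining = capacity
    while active:
        share = remaining / len(active)
        bounded = {f for f in active if caps[f] <= share}
        if not bounded:
            for f in active:      # every remaining flow can use the equal share
                alloc[f] = share
            break
        for f in bounded:         # capped flows are frozen at their cap
            alloc[f] = caps[f]
            remaining -= caps[f]
        active -= bounded
    return alloc

alloc = max_min_share({"f1": 2.0, "f2": 10.0, "f3": 10.0}, 10.0)
print(alloc)  # f1 capped at 2.0, f2 and f3 share the remaining 8.0 equally
```

The per-flow caps play the role of the fictitious incoming and outgoing link bandwidths mentioned above, which is what makes this kind of allocation expressible as a flow problem.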
At a higher level, proving good approximation ratios on general graphs may be too difficult, and it has been observed that actual platforms often exhibit a simple structure. For instance, many real life networks satisfy small-world properties, and it has been proved, for instance, that greedy routing protocols on small world networks achieve good performance. It is therefore of interest to prove that logical (given by the interactions between hosts) and physical platforms (given by the network links) exhibit some structure in order to derive efficient algorithms.
In the context of large scale dynamic platforms, it is unrealistic to determine precisely the actual topology and the contention of the underlying network at application level. Indeed, existing tools such as Alnem are very much based on quasi-exhaustive determination of interferences, and it takes several days to determine the actual topology of a platform made up of a few tens of nodes. Given the dynamism of the platforms we target, we need to rely on less sophisticated models, whose parameters can be evaluated at runtime.
Therefore, we propose to model each node using a small set of parameters. This is related to the theoretical notion of distance labeling, and corresponds to assigning labels to the nodes, so that a cheap operation on the labels of two nodes provides an estimation of the value of a given parameter (the latency or the bandwidth between the two nodes, for instance). Several solutions for performance estimation on the Internet are based on this notion, under the terminology of Network Coordinate Systems. Vivaldi, IDES and Sequoia are examples of such systems for latency estimation. In the case of bandwidth estimation, fewer solutions have been proposed. We have studied the last-mile model, in which we model each node by an incoming and an outgoing bandwidth and neglect interference that appears in the core of the network (the Internet), in order to concentrate on local constraints.
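A minimal sketch of the last-mile idea, with hypothetical node names and measurements, and a deliberately naive capacity-fitting rule (the actual instantiation heuristics minimize prediction error over noisy measurements):

```python
# Last-mile model sketch: each node i has an outgoing capacity b_out[i] and
# an incoming capacity b_in[i]; the bandwidth predicted for u -> v is
# min(b_out[u], b_in[v]).  Node names and measurements are made up.

def predict(b_out, b_in, u, v):
    """Predicted end-to-end bandwidth from u to v under the last-mile model."""
    return min(b_out[u], b_in[v])

def fit(measurements, nodes):
    """Deliberately naive instantiation: set each capacity to the largest
    measurement it must support (a stand-in for the error-minimising
    heuristics discussed in the text)."""
    b_out = {n: max((bw for (u, _), bw in measurements.items() if u == n),
                    default=1.0) for n in nodes}
    b_in = {n: max((bw for (_, v), bw in measurements.items() if v == n),
                   default=1.0) for n in nodes}
    return b_out, b_in

measurements = {("a", "b"): 5.0, ("a", "c"): 8.0, ("c", "b"): 5.0}
b_out, b_in = fit(measurements, ["a", "b", "c"])
print(predict(b_out, b_in, "a", "b"))  # min(b_out[a]=8.0, b_in[b]=5.0) = 5.0
```

Each node is thus summarized by only two numbers, which is what makes the model cheap enough to instantiate at runtime on a dynamic platform.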
Once low level modeling has been obtained, it is crucial to be able to test the proposed algorithms. To do this, we will first rely on simulation rather than direct experimentation. Indeed, in order to be able to compare heuristics, it is necessary to execute those heuristics on the same platform. In particular, all changes in the topology or in the resource performance should occur at the same time during the execution of the different heuristics. In order to be able to replicate the same scenario several times, we need to rely on simulations. Moreover, a metric for providing approximation results in the case of dynamic platforms necessarily requires computing the optimal solution at each time step, which can be done off-line if all traces for the different resources are stored. Using simulation rather than experiments can be justified if the simulator itself has been validated. Moreover, the modeling of communications, processing and their interactions may be much more complex in the simulator than in the model used to provide a theoretical approximation ratio, such as in SimGrid. In particular, sophisticated TCP models for bandwidth sharing have been implemented in SimGrid.
During the course of the USS-SimGrid ANR Arpege project, the SimGrid simulation framework has been adapted to large scale environments. Thanks to hierarchical platform descriptions, to simpler and more scalable network models, and to the possibility of distributing the simulation over several nodes, it is now possible to perform simulations of very large platforms (of the order of ...).
Finally, we propose several applications that will be described in detail in Section . These applications cover a large set of fields (molecular dynamics, continuous integration, ...). All these applications will be developed and tested with an academic or industrial partner. In all these collaborations, our goal is to prove that the services we propose in Section can be integrated as steering tools into already developed software. Our goal is to assess the practical interest of the services we develop and then to integrate and distribute them as a library for large scale computing.
At a lower level, in order to validate the models we propose, i.e. to make sure that the predictions given by the model are close enough to the actual values, we need realistic datasets of network performance on large scale distributed platforms. Latency measurements are the easiest to perform, and several datasets are available to researchers and serve as benchmarks for the community. Bandwidth datasets are more difficult to obtain, because of the measurement cost. As part of the bedibe software (see section ), we have implemented a script to perform such measurements on the Planet-Lab platform. We plan to make these datasets available to the community so that they can be used as benchmarks to compare the different solutions proposed.
The optimization schemes for content distribution processes or for handling standard queries require a good knowledge of the physical topology or performance (latencies, throughput, ...) of the network. Assuming that some rough estimate of the physical topology is given, the previous theoretical results described in Section show how to pre-process the network so that local computations are performed efficiently. Due to the dynamism of large distributed platforms, some requirements on the coding of local data structures and on the updating mechanism are needed. This last process relies on the maintenance of light virtual networks, so-called overlay networks (see Section ). In our approach, we focus on:
Compact Routing tables.
Routing queries and broadcasting information on large scale platforms are tasks involving many basic message communications. The objective of maximum performance imposes that basic messages be routed along paths of cost as low as possible. On the other hand, local routing decisions must be fast, and the algorithms and data structures involved must support a certain amount of dynamism in the platform. Since the size of the data structure negatively impacts both the query and the update time, the space used by the data structure must be limited.
Local computations.
Although the size of the data structures is less constrained than in P2P systems (due to security reasons), even in our collaborative framework it is unrealistic for each node to manage a complete view of the platform with full resource characteristics. Thus, a node has to manage data structures concerning only a fraction of the whole system. In fact, a partial view of the network is sufficient for many tasks: for instance, in order to compute the distance between two nodes, local and limited information available at the two nodes may suffice (distance labeling).
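To illustrate the distance labeling idea on a toy, non-compact scheme (with a made-up tree; the schemes studied in the project are far more space-efficient): label each node of a rooted tree with its root-to-node path, so that the distance between any two nodes can be computed from their two labels alone, with no access to the rest of the tree.

```python
# Illustrative (non-compact) distance labeling on a rooted tree: each node's
# label is its root-to-node path, and dist(u, v) = depth(u) + depth(v) minus
# twice the length of the labels' common prefix.  The tree is hypothetical.

def labels_from_parent(parent):
    """parent maps each node to its parent (the root maps to None)."""
    lab = {}
    def label(n):
        if n not in lab:
            lab[n] = [] if parent[n] is None else label(parent[n]) + [n]
        return lab[n]
    for n in parent:
        label(n)
    return lab

def dist(lu, lv):
    """Distance computed from the two labels only."""
    k = 0
    while k < len(lu) and k < len(lv) and lu[k] == lv[k]:
        k += 1  # length of the common prefix = depth of the common ancestor
    return len(lu) + len(lv) - 2 * k

parent = {"r": None, "a": "r", "b": "r", "c": "a", "d": "a"}
lab = labels_from_parent(parent)
print(dist(lab["c"], lab["d"]))  # siblings under "a": distance 2
print(dist(lab["c"], lab["b"]))  # c -> a -> r -> b: distance 3
```

Here each label has size proportional to the node's depth; the compact schemes referred to in the text achieve far smaller labels, which is precisely what makes the problem interesting.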
Overlay and small world networks.
The processes we consider can be highly dynamic. The preprocessing usually assumed takes polynomial time. Hence, when a new process arrives, it must be dealt with in an on-line fashion, i.e., we do not want to recompute everything, and the (partial) re-computation has to be simple.
In order to meet these requirements, overlay networks are normally implemented. These are light virtual networks, i.e., they are sparse, and a local change of the physical network will only lead to a small change of the corresponding virtual network. As a result, small address books are sufficient at each node.
A specific class of overlay networks are small-world networks. These are efficient overlay networks for (greedy) routing tasks, assuming that distance requests can be performed easily.
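Greedy routing on a small-world overlay can be sketched as follows (a toy Kleinberg-flavoured construction on a ring; the ring size and long-range contact choice are invented for illustration): each node knows its two ring neighbours plus one random long-range contact, and forwards each query to the known contact closest to the destination.

```python
import random

# Toy small-world overlay on a ring of n nodes: each node knows its two ring
# neighbours plus one uniformly random long-range contact (a simplification
# of Kleinberg-style constructions); greedy routing always forwards to the
# known contact closest to the destination in ring distance.

def ring_dist(n, a, b):
    d = abs(a - b)
    return min(d, n - d)

def build_contacts(n, rng):
    return {u: {(u - 1) % n, (u + 1) % n, rng.randrange(n)} - {u}
            for u in range(n)}

def greedy_route(contacts, n, src, dst):
    path = [src]
    while path[-1] != dst:
        cur = path[-1]
        nxt = min(contacts[cur], key=lambda v: ring_dist(n, v, dst))
        if ring_dist(n, nxt, dst) >= ring_dist(n, cur, dst):
            break  # no progress (cannot happen here: a ring neighbour always helps)
        path.append(nxt)
    return path

rng = random.Random(0)
n = 64
contacts = build_contacts(n, rng)
path = greedy_route(contacts, n, 0, 40)
print(path[-1], len(path) - 1)  # destination reached; hop count <= ring distance 24
```

The ring links guarantee that greedy routing always reaches the destination; the long-range contacts are what can shorten routes, which is the property exploited by small-world overlays.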
Mobile Agent Computing.
Mobile Agent Computing has been proposed as a powerful paradigm to study distributed systems. Our purpose is to study the computational power of the mobile agent systems under various assumptions. Indeed, many models exist but little is known about their computational power. One major parameter describing a mobile agent model is the ability of the agents to interact.
The most natural mobile agent computing problem is the exploration or mapping problem in which one or several mobile agents have to explore or map their environment. The rendezvous problem consists for two agents to meet at some unspecified node of the network. Two other fundamental problems deal with security, which is often the main concern of actual mobile agent systems. The first one consists in exploring the network in spite of harmful hosts that destroy incoming agents. An additional goal in this context is to locate the harmful host(s) to prevent further agent losses. We already mentioned the second problem related to security, which consists for the agents in capturing an intruder.
The goal is to enlarge the knowledge of the foundations of mobile agent computing. This will be done by developing new efficient algorithms for mobile agent systems and by proving impossibility results. This will also allow us to compare the different models.
Of course, the main difficulty is to adapt the maintenance of local data structures to the dynamism of the network.
As mentioned in Section , solutions provided by the parallel algorithms community are dedicated to stable platforms whose resource performances can be gathered at a single node that is responsible for computing the optimal solution. On the other hand, P2P systems are fully distributed, but the set of available queries in these systems is much too poor for computationally intensive applications. Therefore, actual solutions for large scale distributed platforms such as BOINC
Requests and Task scheduling on large scale platforms;
New services for processing on large scale platforms.
The localized data structures developed in the project, in particular the near-shortest path representation of a graph, can naturally be applied to routing in the Internet. More precisely, they can be applied to compact routing information while preserving routing along near-shortest routes. The current solution, based on BGP, uses routing tables of size linear in the number of T1 routers. The growth in the number of these routers, and thus in the size of the resulting routing tables, negatively impacts the time needed to forward each packet in the routers.
One of the goals of the project (see also Section ) is to provide routing tables for the Internet network with sublinear size while achieving near-shortest path routing.
Datacubes are an intuitive interface allowing users to mine their data by selecting subsets of dimensions, drilling down and rolling up through dimension hierarchies, and by constraining some dimension values. In order to optimize this navigation, the best solution would be to precompute all possible query results. However, this is unfeasible in practice. Hence, one has to select the “most beneficial” part to precompute and materialize. Traditionally, this problem has been modeled as query execution time minimization under a hard storage space constraint. We have proposed to revisit this problem by considering query performance as the hard constraint whilst minimizing the storage space. Due to the hardness of this problem, we proposed approximate solutions. For this purpose, we used, among others, the concept of border, which is encountered in several data mining applications. We developed a general parallel algorithm for computing such borders with performance guarantees, which is quite easily adaptable to the case where the data are distributed. This algorithm has been implemented, and extensive experiments confirmed the theoretical results.
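To make the notion of border concrete, the following brute-force sketch (on a tiny invented transaction database and support threshold) computes the positive border, i.e. the maximal frequent itemsets. The parallel algorithm mentioned above computes this object with performance guarantees, which this exhaustive sketch does not attempt:

```python
from itertools import combinations

# Brute-force positive border: enumerate all candidate itemsets, keep the
# frequent ones (support >= minsup), then retain only the maximal ones.
# The database and threshold are hypothetical.

def frequent_itemsets(transactions, minsup):
    items = sorted({i for t in transactions for i in t})
    freq = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            s = set(cand)
            if sum(1 for t in transactions if s <= t) >= minsup:
                freq.append(s)
    return freq

def maximal(freq):
    # An itemset is on the border if no frequent strict superset exists.
    return [s for s in freq if not any(s < t for t in freq)]

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
mfi = maximal(frequent_itemsets(db, minsup=2))
print(sorted(sorted(s) for s in mfi))  # [['a', 'b'], ['a', 'c'], ['b', 'c']]
```

The border is a compact summary: every frequent itemset is a subset of some border element, which is why materializing the border is enough to answer the navigation queries discussed above.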
Hubble is implemented in Scheme, using GNU Guile version 2. Details of the simulation, such as keeping track of processor occupation and network usage, are taken care of by SimGrid, a toolkit for the simulation of distributed applications in heterogeneous distributed environments.
The input to Hubble is an XML description of the DAG of build tasks. For each task, a build duration and the size in bytes of the build output are specified. For our evaluation purposes, we collected this data on a production system, the http://
The Nixpkgs DAG contains fixed-output nodes, i.e., nodes whose output is known in advance and does not require any computation. These nodes are typically downloads of source code from external web sites. The raw data collected on http://
See also the web page
http://
Software assessment : A-2, SO-3, SM-2, EM-1, SDL-2.
NamdP2P is a distributed implementation of the ABF method using NAMD. It is worth noting that NAMD is designed to run on high-end parallel platforms or clusters, not to run efficiently on unstable and distributed platforms.
Software assessment : A-1, SO-2, SM-1, EM-1, SDL-1.
This applet considers the scheduling of malleable tasks with a bounded amount of processing resources. The goal is to compute schedules that minimize the weighted completion time of tasks. The applet generates all possible greedy schedules for a given instance and displays only the best ones.
This applet illustrates the complexity of finding an optimal order of tasks.
See also the web page
http://
Software assessment : A-2, SO-1, SM-2, EM-1, SDL-2.
Bedibe (Benchmarking Distributed Bandwidth Estimation) is a software package for comparing different models for bandwidth estimation on the Internet, together with their associated instantiation algorithms. The goal is to ease the development of new models and algorithms, and the comparison with existing solutions. The development of this software is just starting.
See also the web page
http://
Software assessment: A-1-up2, SO-3, SM-1-up2, EM-2, SDL-1-up2.
This software extracts the Maximal Frequent Itemsets from a transaction database. It is written in C++ using the OpenMP library to take full advantage of multi-core, multi-CPU machines.
Software assessment: A-1, SO-4, SM-2, EM-2, SDL-1.
Several Network Coordinate Systems have been proposed to predict unknown network distances between a large number of Internet nodes using only a small number of measurements. These systems focus on predicting latency and are not adapted to the prediction of available bandwidth. Yet end-to-end available bandwidth is an important metric for performance optimisation in many high-throughput distributed applications, such as video streaming and file sharing networks. In this work, we propose to perform available bandwidth prediction with the last-mile model, in which each node is characterised by its incoming and outgoing capacities. This model has been used in several theoretical works on distributed applications. We design decentralised heuristics to compute the capacities of each node so as to minimise the prediction error. We show that our algorithms can achieve competitive accuracy even with asymmetric and erroneous end-to-end measurement datasets. A comparison with existing models (Vivaldi, Sequoia, PathGuru, DMF) is provided. Simulation results also show that our heuristics can provide good-quality predictions even when using a very small number of measurements.
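A toy version of the last-mile idea (our own illustration; the decentralised heuristics of the paper are more elaborate): each node carries an outgoing and an incoming capacity, a path's predicted bandwidth is the minimum of the sender's uplink and the receiver's downlink, and capacities are fitted iteratively to measurements.

```python
def predict(out_cap, in_cap, i, j):
    """Last-mile model: a path is limited by the sender's uplink
    and the receiver's downlink."""
    return min(out_cap[i], in_cap[j])

def fit_last_mile(measurements, n, rounds=300, step=0.05):
    """Toy iterative fit (not the paper's algorithm): for each measured pair,
    nudge the bottleneck capacity toward the observed bandwidth."""
    out_cap, in_cap = [1.0] * n, [1.0] * n
    for _ in range(rounds):
        for (i, j), bw in measurements.items():
            err = predict(out_cap, in_cap, i, j) - bw
            if out_cap[i] <= in_cap[j]:
                out_cap[i] -= step * err   # uplink is the bottleneck
            else:
                in_cap[j] -= step * err    # downlink is the bottleneck
    return out_cap, in_cap

# Two nodes with measured bandwidths 2.0 and 3.0 in the two directions:
meas = {(0, 1): 2.0, (1, 0): 3.0}
out_cap, in_cap = fit_last_mile(meas, 2)
```

After fitting, the predicted bandwidths reproduce the two measurements; with only 2n parameters for n nodes, the model needs far fewer measurements than a full n-by-n matrix.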
Malleable tasks are jobs that can be scheduled with preemptions on a varying number of resources. We focus on the special case of work-preserving malleable tasks, for which the area of the allocated resources does not depend on the allocation and is equal to the sequential processing time. Moreover, we assume that the number of resources allocated to each task at each time instant is bounded. We consider both the clairvoyant and non-clairvoyant cases, and we focus on minimizing the weighted sum of completion times. In the weighted non-clairvoyant case, we propose an approximation algorithm whose ratio (2) is the same as in the unweighted non-clairvoyant case. In the clairvoyant case, we provide a normal form for the schedule of such malleable tasks, and prove that any valid schedule can be turned into this normal form, based only on the completion times of the tasks. We show that in these normal-form schedules, the number of preemptions per task is bounded by 3 on average. We then analyze the performance of greedy schedules, and prove that optimal schedules are greedy for a special case of homogeneous instances. We conjecture that there exists an optimal greedy schedule for all instances, which would greatly simplify the study of this problem. Finally, we explore the complexity of the problem restricted to homogeneous instances, which is still open despite its very simple expression. (Joint work with Loris Marchal from ENS Lyon.)
In this work, we consider a generalization of a classical optimization problem related to server and replica location problems in networks. More precisely, we suppose that a set of users distributed over a network wish to access a particular service offered by a set of providers. The aim is then to identify a set of service providers able to offer a sufficient amount of resources to satisfy the requests of the clients. Moreover, a quality of service meeting some latency requirements is desirable. A careful distribution of the servers in the network may also ensure good fault-tolerance properties. We model this problem as a variant of Bin Packing, namely Bin Packing under Distance Constraint (BPDC), where the goal is to build a minimal number of bins (i.e., to choose a minimal number of servers) so that (i) each client is associated to exactly one server, (ii) the capacity of the server is large enough to satisfy the requests of its clients, and (iii) the distance between two clients associated to the same server is minimized. We prove that this problem is hard to approximate even when using resource augmentation techniques: we compare the number of obtained bins when using polynomial time algorithms allowed to build bins of diameter at most
In this work, we are interested in large-scale distributed platforms like BOINC, consisting of heterogeneous resources and using the Internet as the underlying communication network. In this context, we study a resource clustering problem, where the goal is to build clusters having at least a given capacity and such that any two participants in the same cluster are not too far from each other. Here, the distance between two participants corresponds to the latency of a communication between them. Our goal is to provide algorithms with provable approximation ratios. In such large-scale networks, it is not realistic to assume that the whole latency matrix (which gives the latency between any two participants) is known, and we need to rely on embedding tools such as Vivaldi or Sequoia. These tools make it possible to work with compact descriptions and well-described metric spaces in which the distance between two points can be obtained directly from a small amount of information available at each node. We present the Bin Covering under Distance Constraint problem (BCDC for short), and propose dedicated algorithms for each metric space induced by the embedding tools. Then, we compare these algorithms based on actual latency measures, which makes it possible to decide which algorithm/embedding-tool pair offers, in practice and on realistic datasets, the best balance between distance prediction and approximation ratio for the resource clustering problem.
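A naive greedy variant of this clustering idea can be sketched as follows (our own illustration with made-up parameter names; the paper's algorithms are tailored to the metric space of each embedding tool): grow a cluster around a seed, admitting only nodes within a distance bound of the seed, and close the cluster once the capacity target is reached.

```python
def greedy_bcdc(nodes, cap, target, dist, diameter):
    """Greedy bin covering under a distance constraint (illustrative sketch).
    Builds clusters whose total capacity reaches `target` and whose members
    all lie within `diameter` of the cluster's seed."""
    unassigned = list(nodes)
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        cluster, total, rest = [seed], cap[seed], []
        for v in unassigned:
            if total < target and dist(seed, v) <= diameter:
                cluster.append(v)
                total += cap[v]
            else:
                rest.append(v)
        unassigned = rest
        if total >= target:            # drop clusters that cannot be covered
            clusters.append(cluster)
    return clusters

# Four nodes on a line at positions 0, 1, 10, 11; unit capacities; target 2.
pos = [0, 1, 10, 11]
clusters = greedy_bcdc(range(4), [1, 1, 1, 1], 2,
                       lambda u, v: abs(pos[u] - pos[v]), diameter=2)
# clusters == [[0, 1], [2, 3]]
```

The two nearby pairs are grouped and the distant pairs are kept apart; the approximation guarantees in the paper come from exploiting the structure of the embedded metric rather than this naive scan.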
In this work, we consider the classical problem of broadcasting a large message at an optimal rate in a large-scale distributed network. The main novelty of our approach is that the set of participating nodes can be split into two parts: “green” nodes that stay in the open Internet and “red” nodes that lie behind firewalls or NATs. Two red nodes cannot communicate directly, but need to use a green node as a gateway for transmitting a message. In this context, we are interested in both maximizing the throughput (i.e., the rate at which nodes receive the message) and minimizing the degree of the participating nodes, i.e., the number of TCP connections they must handle simultaneously. We consider both cyclic and acyclic solutions for the flow graph. In the cyclic case, our main contributions are a closed-form formula for the optimal cyclic throughput and the proof that the optimal solution may require arbitrarily large degrees. In the acyclic case, we propose an algorithm to achieve the optimal throughput with low degree. Then, we prove a worst-case ratio between the optimal acyclic and cyclic throughputs and show through simulations that this ratio is on average very close to 1, which makes acyclic solutions efficient both in throughput and in number of connections.
In this work, we considered the problem of exploring an anonymous undirected graph using an oblivious robot. The studied exploration strategies are designed so that the next edge in the robot's walk is chosen using only local information, and so that some local equity (fairness) criterion is satisfied for the adjacent undirected edges. Such strategies can be seen as an attempt to derandomize random walks, and are natural counterparts for undirected graphs of the rotor-router model for symmetric directed graphs. The first of the studied strategies, known as Oldest-First (OF), always chooses the neighboring edge for which the most time has elapsed since its last traversal. Unlike in the case of symmetric directed graphs, we show that such a strategy in some cases leads to exponential cover time. We then consider another strategy called Least-Used-First (LUF), which always uses adjacent edges that have been traversed the smallest number of times. We show that any Least-Used-First exploration covers a graph
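The Least-Used-First rule is easy to simulate; this small sketch (our own, with arbitrary tie-breaking by neighbour label) walks an undirected graph until every edge has been traversed at least once:

```python
def luf_cover_steps(adj, start, max_steps=100000):
    """Least-Used-First walk: always take the incident edge traversed the
    fewest times (ties broken by smallest neighbour label).
    Returns the number of steps until all edges are covered."""
    use = {}
    untraversed = {frozenset((u, v)) for u in adj for v in adj[u]}
    pos, steps = start, 0
    while untraversed and steps < max_steps:
        nxt = min(adj[pos], key=lambda v: (use.get(frozenset((pos, v)), 0), v))
        edge = frozenset((pos, nxt))
        use[edge] = use.get(edge, 0) + 1
        untraversed.discard(edge)
        pos, steps = nxt, steps + 1
    return steps

# A 4-cycle is covered in exactly 4 steps from node 0.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
steps = luf_cover_steps(cycle4, 0)  # 4
```

On the cycle the walk never revisits an edge before covering all of them; the results above bound the cover time of such walks on general graphs.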
In this work, we considered a team of agents which has to explore a graph
In this work, we studied rendezvous of two anonymous agents, where each agent knows its own initial position in the environment. Their task is to meet each other as quickly as possible. The time of the rendezvous is measured by the number of synchronous rounds that the agents need in the worst case in order to meet. In each round, an agent may make a simple move or stay motionless. We consider two types of environments: finite or infinite graphs, and Euclidean spaces. A simple move traverses a single edge (in a graph) or at most a unit distance (in Euclidean space). The rendezvous consists in both agents visiting the same point of the environment simultaneously (in the same round). We propose several asymptotically optimal rendezvous algorithms. In particular, we show that on the line and in trees, as well as in multi-dimensional Euclidean spaces and grids, the agents can rendezvous in time
In the boundary patrolling problem, a set of
In the rendezvous problem in trees, two identical (anonymous) mobile agents start from arbitrary nodes of an unknown tree and have to meet at some node. Agents move in synchronous rounds: in each round, an agent can either stay at the current node or move to one of its neighbors. We consider deterministic algorithms for this rendezvous task. In this work, we presented a tight trade-off between the optimal time of completing rendezvous and the size of the memory of the agents. For agents with
In this work, we consider the problem of exploring an anonymous line by a team of
Two mobile agents (robots) have to meet in an a priori unknown bounded terrain modeled as a polygon, possibly with polygonal obstacles. Robots are modeled as points, and each of them is equipped with a compass. Compasses of robots may be incoherent. Robots construct their routes, but the actual walk of each robot is decided by the adversary, which may, e.g., speed up or slow down the robot. In this work, we consider several scenarios, depending on three factors: (1) obstacles in the terrain are present, or not; (2) compasses of both robots agree, or not; (3) robots have or do not have a map of the terrain with their positions marked. The cost of a rendezvous algorithm is the worst-case sum of the lengths of the robots' trajectories until they meet. For each scenario we design a deterministic rendezvous algorithm and analyze its cost. We also prove lower bounds on the cost of any deterministic rendezvous algorithm in each case. For all scenarios these bounds are tight.
We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely the periodically-varying graphs (PV-graphs, modeling public transportation systems, among others). These are defined by a set of carriers following their prescribed routes indefinitely along the stations of the network. Flocchini, Mans, and Santoro (ISAAC 2009) studied this problem in the case where the agent must always travel on the carriers and thus cannot wait at a station. They described the necessary and sufficient conditions for the problem to be solvable and proved that the optimal number of steps (and thus of moves) to explore a
In this work, we study the impact of the ability to wait at the stations. We exhibit the necessary and sufficient conditions for the problem to be solvable in this context, and we prove that waiting at the stations allows the agent to reduce the worst-case optimal number of moves by a multiplicative factor of at least
In this work, we deal with an error model in distributed networks. For a target
More precisely, we establish a relationship between the number of liars and the number of distance changes after one edge deletion. Whenever
In this work, we study the computational power of graph-based models of distributed computing in which each node additionally has access to a global whiteboard. A node can read the contents of the whiteboard and, when activated, can write one message of
In this work, we address the problem of verifying the accuracy of a map of a network by making as few measurements as possible on the nodes of the network. In the past, this task has been formalized as an optimization problem that, given a graph
Datacubes are data structures designed for query optimization in databases. In this work, we provide algorithmic solutions in a user-centric setting: the query response time is guaranteed while the amount of memory space is minimized.
Borders are fundamental building blocks in data mining. They are used to find frequent patterns, dependencies between attributes, etc. In this work, we provide an algorithm that computes borders with a speedup of
Motivated by multipath routing, we introduce a multi-connected variant of spanners. For that purpose, we introduce in this work the
Building upon recent results on fault-tolerant spanners, we show how to build
Additionally, we give an improved construction for the case
For every integral parameter
For any
Our second contribution is a compact routing scheme with poly-logarithmic addresses that provides affine stretch guarantees. With
Given a restriction of
Routing with multiplicative stretch 3 (which means that the path used by the routing scheme can be up to three times longer than a shortest path) can be done with routing tables of
In this work, we give reasons why routing in unweighted graphs with additive stretch is difficult, in the form of space lower bounds for general graphs and for planar graphs. We prove that any routing scheme using routing tables of size
On the positive side, we give an almost tight upper bound: we present the first non-trivial compact routing scheme with
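To make the stretch notion concrete, here is a toy computation (our own example, not one of the schemes above): routing every packet through a single fixed relay node already exhibits multiplicative stretch 3 on a star graph.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def via_relay_stretch(adj, relay):
    """Worst multiplicative stretch when every packet is routed
    source -> relay -> target (a crude scheme, for illustration only)."""
    d = {u: bfs_dist(adj, u) for u in adj}
    return max((d[s][relay] + d[relay][t]) / d[s][t]
               for s in adj for t in adj if s != t)

# Star with centre 0 and leaves 1, 2, 3; relaying everything via leaf 1:
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
worst = via_relay_stretch(star, 1)  # 3.0
```

The worst pair is a neighbour of the centre routed through the relay leaf (a 3-hop detour versus a 1-hop shortest path); compact routing schemes achieve the same worst-case stretch 3 while storing far less than the full distance information.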
We started an informal collaboration with Xavier Hanin (4SH), who has developed Xooctory and who initiated the project Ivy, which is now a project of the Apache Software Foundation. This collaboration is supported by INRIA, which delegated Ludovic Courtès (INRIA SED engineer) to work in Cepage for one year from July 2009 on a distributed version of Xooctory.
We have implemented and tested in Xooctory different scheduling algorithms that distribute the build process. This led in particular to the design of the Hubble software (see Section ).
In this collaboration, we mainly focus on the scalability properties that a new routing protocol should guarantee (see Section ). The main measures are the size of the local routing tables, and the time (or message complexity) to update or to generate such tables. The design of schemes achieving sub-linear space per router, say in .
We have started an informal collaboration with IBM Montpellier and IBM Haifa in the context of ANR “SONGS”. This collaboration aims to produce accurate and scalable simulations for Cloud Computing and HPC with SimGrid.
This project in coordination with the INRIA project RUNTIME aims at designing models for communication times on heterogeneous platforms of two types: high-scale platforms for volunteer computing, and high performance NUMA machines. The goal is to reach a compromise between precision and algorithmic tractability.
The scientific objectives of ALADDIN are to solve what are identified as the most challenging problems in the theory of interaction networks. The ALADDIN project is thus an opportunity to create a full continuum from fundamental research to applications in coordination with both INRIA projects CEPAGE and GANG.
The goal of this ANR project is the study of identifying codes in evolving graphs. Ralf Klasing is the overall leader of the project.
The objective of USS SimGrid is to create a simulation framework that answers (i) the need for simulation scalability arising in the HPC community and (ii) the need for simulation accuracy arising in distributed computing. The Cepage team will be involved in the development of tools to provide realistic model instantiations.
The project involves the following INRIA and CNRS teams: AlGorille, ASAP, Cepage, Graal, MESCAL, SysCom, CC IN2P3.
The main goal of DISPLEXITY (for DIStributed computing: computability and ComPLEXITY) is to establish the scientific foundations for building up a consistent theory of computability and complexity for distributed computing. The other partners are from IRISA (Rennes) and LIAFA (Paris).
SONGS (Simulation of Next Generation Systems) is a follow-up to the USS-SimGrid project. Its objective is to design a unified and open simulation framework for performance evaluation of next generation systems: Grids, Peer-to-Peer systems, Clouds and HPC systems. Cepage will be involved in the Peer-to-peer and Cloud use cases by designing and testing efficient allocation policies. Cepage will also take part in the design of efficient and realistic models and their validation.
The project involves the following INRIA and CNRS teams: AlGorille, ASCOLA, AVALON, CEPAGE, HiePACS, ICPS, MASCOTTE, MODALIS, MESCAL, RUNTIME, CC IN2P3.
Title: EULER (Experimental UpdateLess Evolutive Routing)
Type: COOPERATION (ICT)
Defi: Future Internet Experimental Facility and Experimentally-driven Research
Instrument: Specific Targeted Research Project (STREP)
Duration: October 2010 - September 2013
Coordinator: ALCATEL-LUCENT (Belgium)
Others partners:
Alcatel-Lucent Bell, Antwerpen, Belgium
3 projects from INRIA: CEPAGE, GANG and MASCOTTE, France
Interdisciplinary Institute for Broadband Technology (IBBT),Belgium
Laboratoire d'Informatique de Paris 6 (LIP6), Université Pierre Marie Curie (UPMC), France
Department of Mathematical Engineering (INMA) Université Catholique de Louvain, Belgium
RACTI, Research Academic Computer Technology Institute University of Patras, Greece
CAT, Catalan Consortium: Universitat Politècnica de Catalunya, Barcelona and University of
Girona, Spain
See also:
http://
Abstract: The title of this study is "Dynamic Compact Routing Scheme". The aim of this project is to develop new routing schemes achieving better performance than the current BGP protocol. The problems faced by the inter-domain routing protocol of the Internet are numerous:
The underlying network is dynamic: many observations of bad configurations show the instability of BGP;
BGP does not scale well: the convergence time toward a legal configuration is too long, and the size of routing tables is proportional to the number of nodes of the network (the network size is multiplied by 1.25 each year);
The impact of the policies is so important that many packets can oscillate between two Autonomous Systems.
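The scaling concern can be quantified directly from the growth figure above: if the network size is multiplied by 1.25 each year, any routing-table size proportional to the number of nodes doubles roughly every three years.

```python
import math

# Growth factor of 1.25 per year => doubling time of linear-size routing tables:
doubling_years = math.log(2) / math.log(1.25)  # ~3.1 years
```

This is why the project targets schemes with sub-linear space per router rather than incremental improvements to linear-size tables.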
In this collaboration, we mainly focus on the scalability properties that a new routing protocol should guarantee. The main measures are the size of the local routing tables, and the time (or message complexity) to update or to generate such tables. The design of schemes achieving sub-linear space per router, say in n where
International Joint Project, 2011-2013, entitled “SEarch, RENdezvous and Explore (SERENE)”, on foundations of mobile agent computing, in collaboration with the Department of Computer Science, University of Liverpool. Funded by the Royal Society, U.K. Principal investigator on the UK side: Leszek Gasieniec. Ralf Klasing is the principal investigator on the French side.
The goal of ComplexHPC is to coordinate European groups working on the use of heterogeneous and hierarchical systems for HPC as well as the development of collaborative activities among
the involved research groups, to tackle the problem at every level (from cores to large-scale environments) and to provide new integrated solutions for large-scale computing for future
platforms (see
http://
International Joint Project, 2011, on foundations of mobile agent computing, in collaboration with the Department of Computer Science, University of Perugia, Italy. Principal investigator on the Italian side: Alfredo Navarra. Ralf Klasing is the principal investigator on the French side.
Marcin Markiewicz, University of Gdansk (Poland)
Quantum distributed computing models and simulation of quantum correlations using classical information channels.
Ashley Deflumere and Alexey Lastovetsky, University College Dublin (Ireland)
Design of efficient distribution scheme for linear algebra kernels on modern heterogeneous architectures
Gabriele Di Stefano, University of L'Aquila (Italy)
Alfredo Navarra, University of Perugia (Italy)
Mobile agent coordination in distributed computing.
Miroslaw Korzeniowski, Technical University of Wroclaw (Poland)
Design of distributed and randomized algorithms for P2P networks.
Leszek Gasieniec, University of Liverpool (UK)
Design of distributed algorithms for mobile agents in exploration and patrolling tasks.
Guido Proietti, University of L'Aquila (Italy)
Davide Bilo, University of Sassari (Italy)
Network discovery and verification.
Tobias Mömke, KTH Royal Institute of Technology, Stockholm (Sweden)
Centralized approximation techniques for chosen task scheduling problems.
Thomas Sauerwald, Max-Planck-Institut für Informatik, Saarbrücken (Germany)
Propp machine, Multiple random walks.
Ashley Deflumere, University College Dublin, 04/12 - 17/12/2011
Alfredo Navarra, University of Perugia, Italy, 11/12-16/12/2011
Ljubomir Perkovic, DePaul University Chicago, (September 2011–)
Miroslaw Korzeniowski, Technical University of Wroclaw, (23/08- 28/08/2011)
Marcin Markiewicz, University of Gdansk, 10/05-17/05/2011
Miroslaw Korzeniowski, Technical University of Wroclaw, (06/07- 22/07/2011)
Tobias Mömke, KTH Royal Institute of Technology, Stockholm, Sweden, 17/07 - 31/07/2011
Leszek Gasieniec, University of Liverpool, UK, 10/09-17/09/2011
Alfredo Navarra, University of Perugia, Italy, 10/09-17/09/2011
Gabriele Di Stefano, University of L'Aquila, Italy, 10/09-17/09/2011
Davide Bilo, University of Sassari, Italy, 27/11-07/12/2011
Microsoft Research Mountain View, CA, invited research visit with I. Abraham (C. Gavoille, 10 days, April 2011)
Weizmann Institute, research visit with D. Peleg (Q. Godfroy, one week, November 2011)
Universidad Adolfo Ibanez, Chile, research visit as part of joint grant (A. Kosowski, 16/01-03/02/2011)
University of Gdansk, Poland, research visit (A. Kosowski, 07/02-14/02/2011)
University of Liverpool, UK, research visit as part of joint grant (A. Kosowski, 15/02-27/02/2011)
Carleton University, Canada, invited research visit (A. Kosowski, 08/11-20/11/2011)
Weizmann Institute (IL), David Peleg
MIT (USA), Christian Sommer
Microsoft Research, Mountain View (USA), Ittai Abraham
Foreign partner of the project entitled “Mathematical modeling for industrial and management science” funded by the Government of Chile through its CONICYT program (ANILLO for Science and Technology).
This grant involves research into mathematical programming models, network dynamics and graph models, stochastic models, as well as other interdisciplinary projects. The joint work performed during the research collaboration led to new results on the computational power of interconnection networks in distributed computing, and to new algorithms for compact routing in special graph classes.
Foreign partner of the project entitled “CLOUDS: Cloud Computing para Servicios Escalables, Confiables y Ubicuos” (2010-2013) funded by the Comunidad de Madrid.
Ralf Klasing is a member of the Editorial Board of Theoretical Computer Science, Discrete Applied Mathematics, Wireless Networks, Networks, Journal of Interconnection Networks, Parallel Processing Letters, Algorithmic Operations Research, Fundamenta Informaticae, and Computing and Informatics.
Adrian Kosowski is a guest editor for Theoretical Computer Science and Computing and Informatics.
Olivier Beaumont is Associate Editor for IEEE Transactions on Parallel and Distributed Systems (IEEE TPDS).
Olivier Beaumont was the Vice-Chair (Algorithms Track) of IPDPS 2011, the 25th IEEE International Parallel and Distributed Processing Symposium, May 16-20, 2011, Anchorage (Alaska), USA.
Nicolas Hanusse is the Conference Chair of the 14th French Conference on Communications in Networks (AlgoTel 2012).
Nicolas Hanusse is the Conference Chair of the International Conference on Fun with Algorithms (FUN 2012).
Ralf Klasing was the Conference Chair of the Bordeaux Workshop on Identifying Codes (BWIC 2011), Bordeaux, France, 21/11-25/11/2011.
Ralf Klasing is the Conference Chair of the 11th International Symposium on Experimental Algorithms (SEA 2012), Bordeaux, France, June 7-9, 2012.
Adrian Kosowski was the Conference Organization & Proceedings Co-Chair of the 18th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2011), Gdańsk, Poland, June 26-29, 2011.
Cyril Gavoille is a member of the Steering Committee of the ACM Symposium on Principles of Distributed Computing (PODC).
Ralf Klasing is a member of the Steering Committee of the International Colloquium on Structural Information and Communication Complexity (SIROCCO).
ESA 2011 (Sep. 5-9, Saarbrücken, Germany) Annual European Symposium on Algorithms (C. Gavoille)
DISC 2012 (Oct. 16-18, Salvador, Brazil), International Symposium on Distributed Computing (C. Gavoille)
ISAAC 2012 (Dec. 19-21, Taipei, Taiwan), International Symposium on Algorithms and Computation (C. Gavoille)
ICDCN 2012 (Jan. 3-6, Hong-Kong), International Conference on Distributed Computing and Networking (C. Gavoille)
SEA 2012, 11th International Symposium on Experimental Algorithms, Bordeaux, France, June 7-9, 2012 (R. Klasing, A. Kosowski)
IWOCA 2012, 23rd International Workshop on Combinatorial Algorithms, July 19-21, 2012, Kalasalingam University, Anand Nagar, Krishnankoil, Tamil Nadu, India (R. Klasing)
SSS 2012, 14th International Symposium on Stabilization, Safety, and Security of Distributed Systems, Toronto, September 24-27, 2012 (A. Kosowski).
ICDCN 2012, 13th International Conference on Distributed Computing and Networking, January 3-6, 2012, Hong Kong, China (A. Kosowski)
ICPP 2012, 41st International Conference on Parallel Processing, Pittsburgh, PA, USA, September 10-13, 2012 (O. Beaumont)
IPDPS 2012, 26th IEEE International Parallel & Distributed Processing Symposium, May 21-25, 2012, Shanghai, China (O. Beaumont)
HCW 2012, 20th International Heterogeneity in Computing Workshop, May 21-25, 2012, Shanghai, China (O. Beaumont)
HIPC 2011, 18th annual IEEE International Conference on High Performance Computing (HiPC 2011), Bengaluru (Bangalore), India, December 18-21, 2011. (O. Beaumont)
IEEE CloudCom 2011, 3rd IEEE International Conference on Cloud Computing Technology and Science (IEEE CloudCom 2011), Athens, Greece, Nov 29 - Dec 1, 2011 (O. Beaumont)
ISCIS 2011, 26th International Symposium on Computer and Information Sciences, 26-28 September 2011, London, UK (O. Beaumont)
EUROPAR 2011, local chair (P2P systems track), 17th International European Conference on Parallel and Distributed Computing Euro-Par 2011, Bordeaux, France, August 29-September 2, 2011 (O. Beaumont)
IC3, The Fourth International Conference on Contemporary Computing (IC3-2011), NOIDA, (outskirts of New Delhi), India, August 8-10, 2011 (O. Beaumont)
OPODIS 2011, 15th International Conference On Principles Of Distributed Systems, December 13-16, 2011, Toulouse, France (O. Beaumont)
IPDPS 2011, Vice-Chair (Algorithms) 25th IEEE International Parallel & Distributed Processing Symposium, May 16-20, 2011 Anchorage (Alaska) USA (O. Beaumont)
SIROCCO 2011, 18th International Colloquium on Structural Information and Communication Complexity, June 26-29, 2011, Gdansk, Poland (R. Klasing, A. Kosowski)
DISC 2011, 25th International Symposium on DIStributed Computing, September 20-22, 2011, Rome, Italy (D. Ilcinkas, A. Kosowski)
FOMC 2011, 7th ACM SIGACT/SIGMOBILE International Workshop on Foundations of Mobile Computing (formerly known as DIALM-POMC), June 9, 2011, San Jose, California, USA (R. Klasing)
ADHOC-NOW 2011, 10th International Conference on Ad Hoc Networks and Wireless, July 18-20, 2011, Paderborn, Germany (R. Klasing)
IWOCA 2011, 22nd International Workshop on Combinatorial Algorithms, June 20-22, 2011, University of Victoria, Canada (R. Klasing)
ALGOSENSORS - Track B 2011, 7th International Symposium on Algorithms for Sensor Systems, Wireless Ad Hoc Networks and Autonomous Mobile Entities (D. Ilcinkas)
AlgoTel 2011, 13èmes Rencontres Francophones sur les Aspects Algorithmiques des Télécommunications (D. Ilcinkas)
CSA 2011, 3rd FTRA International Conference on Computer Science and its Applications, December 12-15, 2011, Jeju, Korea (L. Eyraud-Dubois)
Cyril Gavoille was the General Chair of PODC 2011, the 30th Annual ACM Symposium on Principles of Distributed Computing, June 6-8, San Jose, California.
Adrian Kosowski was the Conference Organization Chair of the 18th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2011), Gdańsk, Poland, June 26-29, 2011.
David Ilcinkas was a member of the organization committee of the second TERANET workshop (Toward Evolutive Routing Algorithms for scale-free/internet-like NETworks), September 19, 2011.
Olivier Beaumont is the "Délégué Scientifique" and the head of the Project Committee at INRIA Bordeaux Sud-Ouest.
Ralf Klasing is the Head of the "Combinatorics and Algorithms" team of the LaBRI.
Ralf Klasing is a member of the Conseil Scientifique of the LaBRI.
Ralf Klasing is a member of the Commission Consultative of the LaBRI.
Ralf Klasing is responsible for the International Relations of the LaBRI.
Ralf Klasing is a member of the Evaluation Committee of the programme "ANR DEFIS 2009".
Olivier Beaumont was external reviewer of The-Thung Nguyen (LAAS Toulouse), November 2011
Olivier Beaumont was external reviewer of Mathieu Valero (LIP6, Paris), December 2011
Nicolas Hanusse was reviewer of Thomas Aynaud (Université Paris 6), November 2011
Nicolas Hanusse was reviewer of Mauricio Soto (Université Paris 7), December 2011
Ralf Klasing was reviewer and opponent in the Ph.D. defense of Ville Junnila (University of Turku, Finland), June 2011.
Cyril Gavoille was external reviewer of Mathieu Chapelle (Université d'Orléans, LIFO), December 2011.
Ralf Klasing was an Invited Speaker at the 2nd Nordic Workshop on System and Network Optimization for Wireless (SNOW 2011), Sälen, Sweden, 24 - 26 March 2011. (Title of the talk: Cost Minimization in Multi-Interface Networks.)
Cyril Gavoille was an Invited Speaker at the First International Workshop on Dynamic Systems (DYNAM), Toulouse, December 2011, joint with OPODIS. (Title: Dynamic algorithms via forbidden-set labeling).
Cyril Gavoille was an Invited Speaker at the "10 ans du séminaire MaMux - Mathématiques, musique et relations avec d'autres disciplines", Paris, May 2011. (Title: Oracles pour les arbres et les graphes).
Licence : The members of CEPAGE are heavily involved in teaching activities at undergraduate and graduate levels (Licence 1, 2 and 3, Master 1 and 2, engineering school ENSEIRB). The teaching is carried out by members of the University as part of their teaching duties, and by CNRS/INRIA members (at Master 2 level) as extra work. It represents more than 500 hours per year.
Master : Communication and Routing (last year of engineering school ENSEIRB, 2011) O. Beaumont, L. Eyraud, N. Hanusse, R. Klasing, A. Kosowski (16h)
Master : Communication Algorithms in Networks (2nd year MASTER "Algorithms and Formal Methods", University of Bordeaux, 2011) R. Klasing (24h)
Master : Structure of Web Search Engines, ENSEIRB, O. Beaumont (16h)
Master : Distributed Algorithms (2nd year MASTER Informatique, University of Bordeaux, 2011) Cyril Gavoille (32h)