The research of the Regal team addresses the theory and practice of Computer Systems, including multicore computers, clusters, networks, peer-to-peer systems, cloud computing systems, and other communicating entities such as swarms of robots. It addresses the challenges of communicating, sharing information, and computing correctly in such large-scale, highly dynamic computer systems. This includes addressing the core problems of communication, consensus and fault detection, scalability, replication and consistency of shared data, information sharing in collaborative groups, dynamic content distribution, and multi- and many-core concurrent algorithms.
Regal is a joint research team between LIP6 and Inria Paris-Rocquencourt. In 2014, four permanent members of Regal created the Whisper team, with a focus on infrastructure (system) software.
As society relies more and more on computers, responsiveness,
correctness and security are increasingly critical.
At the same time, systems are growing larger, more parallel, and more
unpredictable.
Our research agenda is to design Computer Systems that remain correct
and efficient despite this increased complexity and conflicting
requirements.
The term “Computer Systems” is interpreted broadly.
This holistic approach allows us to address related problems at different levels. It also permits us to efficiently share knowledge and expertise, and is a source of originality.
Computer Systems is a rapidly evolving domain, with strong interactions with industry. Two main evolutions in the Computer Systems area have strongly influenced our research activities:
Ensuring the persistence, availability and consistency of data in a distributed setting is a major requirement: the system must remain correct despite slow networks, disconnection, crashes, failures, churn, and attacks. Ease of use, performance and efficiency are equally important for systems to be accepted. These requirements are somewhat conflicting, and there are many algorithmic and engineering trade-offs, which often depend on specific workloads or usage scenarios.
Years of research in distributed systems are now coming to fruition, and are being used by millions of users of web systems, peer-to-peer systems, gaming and social applications, or cloud computing. These new usages bring new challenges of extreme scalability and adaptation to dynamically-changing conditions, where knowledge of system state can only be partial and incomplete. The challenges of distributed computing listed above are subject to new trade-offs.
Innovative environments that motivate our research include cloud computing, geo-replication, edge clouds, peer-to-peer (P2P) systems, dynamic networks, and manycore machines. The scientific challenges are scalability, fault tolerance, security, dynamicity and the virtualization of the physical infrastructure. Algorithms designed for classical distributed systems, such as resource allocation, data storage and placement, and concurrent and consistent access to shared data, need to be revisited to work properly under the constraints of these new environments.
Regal focuses in particular on two key challenges in these areas: the adaptation of algorithms to the new dynamics of distributed systems and data management on large configurations.
The fine-grained parallelism offered by multicore architectures has the potential to open highly parallel computing to new application areas. To make this a reality, however, many issues, including issues that have previously arisen in distributed systems, need to be addressed. Challenges include obtaining a consistent view of shared resources, such as memory, and optimally distributing computations among heterogeneous architectures, such as CPUs, GPUs, and other specialized processors. As compared to distributed systems, in the case of multicore architectures, these issues arise at a more fine-grained level, leading to the need for different solutions and different cost-benefit trade-offs.
Of particular interest to Regal are topics related to memory management in high-end multicore computers, such as garbage collection of very large memories and system support for massive databases of highly-structured data.
Garbage collection for big data on large-memory NUMA machines. We developed NumaGiC, a high-throughput garbage collector for big-data algorithms running on large-memory NUMA machines. This result, a collaboration with the Whisper team, has been presented at ASPLOS 2015 .
Explicit consistency. We propose an alternative approach to the strong-vs.-weak consistency conundrum, explicit consistency. This result has been presented at EuroSys 2015 . We have also developed a new sound logic for proving the correctness of a distributed database under concurrent updates. This result is published at POPL 2016 .
The weakest failure detector to implement eventual consistency. We identified the weakest failure detector for implementing an eventually consistent replicated service. This theoretical result has been presented at PODC 2015 .
Gauthier Voron obtained the best paper award in the system track of Compas'2015.
Functional Description
Antidote is the flexible cloud database platform currently under development in the SyncFree European project. Antidote aims to be both a research platform for studying replication and consistency at the large scale, and an instrument for exploiting research results. The platform supports replication of CRDTs, in and between sharded (partitioned) data centres (DCs). The current stable version supports strong transactional consistency inside a DC, and causal transactional consistency between DCs. Ongoing research includes support for explicit consistency , , for elastic version management, for adaptive replication, for partial replication, and for reconfigurable sharding.
Participants: Tyler Crain, Marc Shapiro, Serdar Tasiran and Alejandro Tomsic
Contact: Tyler Crain
Functional Description
A large family of distributed transactional protocols share a common structure, called Deferred Update Replication (DUR). DUR provides dependability by replicating data, and performance by not re-executing transactions but only applying their updates. Protocols of the DUR family differ only in the behavior of a few generic functions. Based on this insight, we offer a generic DUR middleware, called G-DUR, along with a library of finely-optimized plug-in implementations of the required behaviors.
Participants: Marc Shapiro, Alejandro Tomsic
Contact: Marc Shapiro
Functional Description
NumaGiC is a version of the HotSpot garbage collector (GC) adapted to many-core computers with very large main memories. In order to maximise GC throughput, it manages the trade-off between memory locality (local scans) and parallelism (work stealing) in a self-balancing manner. Furthermore, the collector features several memory placement heuristics that improve locality.
Participants: Lokesh Gidra, Marc Shapiro, Julien Sopena and Gaël Thomas
Contact: Marc Shapiro
Functional Description
Client-side (e.g., mobile or in-browser) apps need local access to shared cloud data, but current technologies either do not provide fault-tolerant consistency guarantees, or do not scale to high numbers of unreliable and resource-poor clients, or both. Addressing this issue, the SwiftCloud distributed object database supports high numbers of client-side partial replicas. SwiftCloud offers fast reads and writes from a causally-consistent client-side cache. It is scalable, thanks to small and bounded metadata, and available, tolerating faults and intermittent connectivity by switching between data centres. The price to pay is a modest amount of staleness. A recent Inria Research Report (submitted for publication) presents the SwiftCloud algorithms, design, and experimental evaluation, which shows that client-side apps enjoy the same guarantees as a cloud data store, at a small cost.
Participants: Marc Shapiro, Serdar Tasiran, Marek Zawirski and Mahsa Najafzadeh
Contact: Marc Shapiro
Functional Description
PUMA is a system based on a kernel-level remote caching mechanism that provides the ability to pool VM memory at the scale of a data center. An important property when lending memory to another VM is the ability to quickly retrieve it in case of need. Our approach lends memory only for clean cache pages: in case of need, the VM that lent the memory can retrieve it easily. We use the system page cache to store remote pages such that: (i) if local processes allocate memory, the borrowed memory can be retrieved immediately; and (ii) if they need cache, the remote pages have a lower priority than the local ones.
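The clean-page lending policy can be sketched as follows. This is a simplified, hypothetical illustration in Python (class and method names are invented for the sketch; the actual mechanism lives in the kernel page cache):

```python
# Hypothetical sketch of PUMA's clean-page lending policy.
# Names (LenderCache, store_remote, allocate_local) are invented
# for illustration; this is not the kernel implementation.

class LenderCache:
    """A VM's page cache that lends slots for *clean* remote pages only."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.local = {}    # page id -> data (local pages, may be dirty)
        self.remote = {}   # page id -> data (borrowed clean pages)

    def used(self):
        return len(self.local) + len(self.remote)

    def store_remote(self, page_id, data, dirty):
        # Only clean pages may be lent: a dirty page would have to be
        # written back before reclaiming, defeating fast retrieval.
        if dirty or self.used() >= self.capacity:
            return False
        self.remote[page_id] = data
        return True

    def allocate_local(self, page_id, data):
        # Local allocations have priority: evict borrowed pages first.
        # Dropping a clean remote page is free (no write-back needed).
        while self.used() >= self.capacity and self.remote:
            self.remote.popitem()
        if self.used() >= self.capacity:
            return False
        self.local[page_id] = data
        return True
```

The key design point is visible in `allocate_local`: because only clean pages are borrowed, reclaiming them is a simple drop, so lending never delays the lender's own allocations.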
Participants: Maxime Lorrillere, Sébastien Monnet, Pierre Sens, Julien Sopena
Contact: Maxime Lorrillere
Nowadays, distributed systems are increasingly heterogeneous and versatile. Computing units can join, leave, or move inside a global infrastructure. These features require the implementation of dynamic systems, that is, systems that can cope autonomously with changes in their structure, in terms of both physical facilities and software. It therefore becomes necessary to define, develop, and validate distributed algorithms able to manage such dynamic and large-scale systems, for instance mobile ad hoc networks, (mobile) sensor networks, P2P systems, cloud environments, and robot networks, to name only a few.
We have obtained results both on fundamental aspects of distributed algorithms and on specific emerging large-scale applications.
We study various key topics of distributed algorithms: agreement, failure detection, data dissemination and data finding in large scale systems, self-stabilization and self-* services.
Distributed systems should provide reliable and continuous services despite the failures of some of their components. A classical way for a distributed system to tolerate failures is to detect them and then to recover. It is now well recognized that the dominant factor in system unavailability lies in the failure detection phase. In 2015, we obtained the following results on failure detection:
Assuming a message-passing environment with a majority
of correct processes, the necessary and sufficient information
about failures for implementing a general state machine
replication scheme ensuring consistency is captured by the
We also study the k-set agreement problem, a generalization of the consensus problem in which processes can decide up to k different values. Very few papers have tackled this problem in dynamic networks. Exploiting the formalism of the Time-Varying Graph model, we propose in a new quorum-based failure detector for solving k-set agreement in dynamic networks with asynchronous communications. We present two algorithms that implement this new failure detector using graph connectivity and message pattern assumptions. We also provide an algorithm for solving k-set agreement using our new failure detector.
We propose several algorithms to implement efficient failure detection services. We introduce in the Two Windows Failure Detector (2W-FD), an algorithm that provides QoS and is able to react to sudden changes in network conditions, a property that existing algorithms do not satisfy. We ran tests on real traces and compared the 2W-FD to state-of-the-art algorithms. Our results show that our algorithm offers the best performance in terms of speed and accuracy in unstable scenarios. In , we propose a new approach to the implementation of failure detectors for large and dynamic networks: we study reputation systems as a means to detect failures. The reputation mechanism allows efficient node cooperation via the sharing of views about other nodes. Our experimental results show that a simple prototype of a reputation-based detection service performs better than other known adaptive failure detectors, with improved flexibility. It can thus be used in a dynamic environment with a large and variable number of nodes.
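To make the underlying idea concrete, the sketch below shows a generic adaptive heartbeat failure detector that re-estimates its timeout from a sliding window of recent arrivals. It is an illustrative textbook-style sketch, not the published 2W-FD algorithm (which maintains two windows and stronger QoS guarantees):

```python
# A minimal adaptive heartbeat failure detector: the timeout is derived
# from a sliding window of recent heartbeat arrival times. This is a
# generic sketch in the spirit of timeout-based detectors, not the 2W-FD.

class HeartbeatDetector:
    def __init__(self, window=10, safety_margin=0.5):
        self.window = window          # number of recent arrivals kept
        self.margin = safety_margin   # slack added to the estimate
        self.arrivals = []            # timestamps of recent heartbeats

    def heartbeat(self, now):
        """Record a heartbeat from the monitored process."""
        self.arrivals.append(now)
        self.arrivals = self.arrivals[-self.window:]

    def expected_next(self):
        # Estimate the next arrival from the mean inter-arrival time.
        if len(self.arrivals) < 2:
            return float("inf")
        gaps = [b - a for a, b in zip(self.arrivals, self.arrivals[1:])]
        mean_gap = sum(gaps) / len(gaps)
        return self.arrivals[-1] + mean_gap + self.margin

    def suspects(self, now):
        """Suspect the process once the estimated deadline has passed."""
        return now > self.expected_next()
```

Because the estimate is recomputed over a bounded window, the detector adapts when network delays drift, which is the property the text above discusses for unstable scenarios.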
We explore the node allocation challenges in providing probabilistic Byzantine fault tolerance in a hybrid cloud environment, consisting of nodes with varying reliability levels, compute power, and monetary cost. We consider hybrid computing architectures that combine edge nodes with cloud hosted computing. In such a system, a large fraction of the computation is performed by donated machines at the edge of the network, which significantly reduces the cost to the owner of the computation.
Considering “bag of tasks” (BoT) applications where a large computational problem is broken into a large number of independent tasks, the probabilistic Byzantine fault tolerance guarantee refers to the confidence level that the result of a given computation is correct despite potential Byzantine failures. In we explore probabilistic Byzantine tolerance, in which computation tasks are replicated on dynamic replication sets whose size is determined based on ensuring probabilistic thresholds of correctness.
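The sizing question behind probabilistic Byzantine tolerance can be illustrated with a small binomial computation: given an independent per-node Byzantine probability, find the smallest replication set whose majority is correct with the desired confidence. This is a hedged sketch of the general idea, not the exact model of the cited work:

```python
# Illustrative sketch: size a replication set so that a majority of
# replicas is correct with probability >= threshold, assuming each node
# is Byzantine independently with probability p. (Simplified model,
# not the paper's exact formulation.)

from math import comb

def majority_correct_prob(n, p):
    """P(at most floor(n/2) of n replicas are Byzantine)."""
    f = n // 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(f + 1))

def min_replicas(p, threshold):
    """Smallest odd n whose majority is correct with the given confidence."""
    n = 1
    while majority_correct_prob(n, p) < threshold:
        n += 2  # keep n odd so majority voting is well defined
    return n
```

For example, with a 10% per-node Byzantine probability, nine replicas are needed under this model to push the majority-correct probability above 99.9%, which shows why set sizes must grow as node reliability drops.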
We study covering problems (such as minimal dominating set or maximal matching) in the context of highly dynamic distributed systems. We first obtain some general results. In , we propose a new definition of this family of problems, since the classical ones are meaningless in such systems, and we generalize the classical definition of time complexity (for static systems) to our setting. We also provide in a generic tool to help with writing impossibility proofs in dynamic distributed systems. Then, we focus on the particular case of the minimal dominating set problem, and characterize the necessary and sufficient condition for deterministically constructing a minimal dominating set in a dynamic system according to our definition.
Self-stabilization is a generic paradigm to tolerate transient faults (i.e., faults of finite duration) in distributed systems. Results obtained in this area by Regal members in 2015 follow.
Spanning tree construction is a well-studied problem in distributed computing, owing to its numerous applications such as routing and broadcast. Properties of the obtained trees, efficiency of the construction, and fault-tolerance guarantees are naturally at the heart of much research. In this context, we propose in a new self-stabilizing algorithm for the minimum diameter spanning tree that achieves better time and space complexity than existing solutions. Moreover, our solution tolerates a fully asynchronous adversary.
A classical way to endow self-stabilization with (permanent) fault tolerance is confinement: the self-stabilizing system additionally ensures that the effect of permanent faults is limited to some topological areas of the system. In , we propose a characterization of optimal confinement areas for a large set of spanning tree metrics in the presence of Byzantine faults. In , we propose a stabilizing implementation of an atomic register in the presence of crash faults. By avoiding the propagation of fault effects beyond a given radius, confinement is clearly a spatial approach. Another approach, called temporal, consists in recovering as quickly as possible to a configuration from which some forms of safety are satisfied.
In , we introduce the notion of gradual stabilization and provide a gradually self-stabilizing algorithm that solves the unison problem, i.e., the problem that consists in synchronizing logical clocks locally maintained by the processes.
Swarms of autonomous mobile sensor devices (or robots) have recently emerged as an attractive topic in the study of dynamic distributed systems, as they permit assessing the intrinsic difficulties of many fundamental tasks, such as exploring or gathering in a discrete space. We consider autonomous robots that are endowed with visibility sensors (but are otherwise unable to communicate) and motion actuators. The robots we consider are weak, i.e., they are anonymous, uniform, unable to explicitly communicate, and oblivious (they do not remember any of their past actions). Despite their weakness, these robots must collaborate to solve collective tasks such as exploration, gathering, and flocking, to name only a few.
In , we first show that it is impossible to explore any simple torus of arbitrary size with (strictly) less than four robots, even if the algorithm is probabilistic. Next, we propose an optimal (w.r.t. the number of robots) solution for the terminating exploration of torus-shaped networks by a team of
In 2014, we had proposed SPLAD (for Scattering and PLAcing Data replicas to enhance long-term durability), a model that allows us to vary the data scattering degree by tuning a selection range width. We have enhanced our model and we have focused on the study of the policy used while choosing a storing node within the selection range. Some policies may lead to heavily unbalanced storage load distribution which can be harmful for the system. Simple policies to balance the load (e.g. storing new blocks on least loaded nodes) may induce network congestion and thus data losses. We have shown that the “power of two choices” policy (choosing the least loaded node among two random ones) brings good results both in terms of storage load distribution and fault tolerance.
Managing and processing Dynamic Big Data, where multiple sources produce new data continuously, is very complex. Static cluster- or grid-based solutions are prone to bottleneck problems, and are therefore ill-suited in this context. Our objective in this domain is to design and implement a Reliable Large Scale Distributed Framework for the Management and Processing of Dynamic Big Data. In 2015, we focused on spatio-temporal range queries over Big Location Data, which aim to extract and analyze relevant data items generated around a given location and time. Such queries require concurrent processing of massive and dynamic data flows. We proposed a scalable architecture for continuous spatio-temporal range queries built by coalescing multiple computing nodes on top of a Distributed Hash Table. The key component of our architecture is a distributed spatio-temporal indexing structure which exhibits low insertion and low index-maintenance costs. We assessed our solution with a public data set released by Yahoo! which comprises millions of geotagged multimedia files .
We have developed a new sound logic for proving the correctness of a distributed database under concurrent updates, showing whether the application maintains the database's integrity invariants. An operation of the application is specified as a preparator, which checks the operation's precondition at an origin replica and generates an effector. The effector abstracts the update to be applied to every replica. The application also specifies which operations are allowed to take place concurrently. In summary, the logic shows that the application maintains the invariant if the three following rules are satisfied:
Each operation individually maintains the invariant. It follows that operations' preconditions are sufficiently strong to ensure correctness in a sequential execution.
The effectors of any two operations that can execute concurrently commute. This implies that the database replicas all converge to the same state.
For any pair of operations
This result is published at POPL 2016 .
We have implemented a tool (based on the Z3 SMT solver) that implements these rules. A demo of the tool is available online . If the application passes the tool, it is correct. If not, the tool returns a counter-example, which the application developer can inspect to find the source of the error. Generally speaking, the developer can either weaken the invariants or the effects of operations, or strengthen consistency by disallowing concurrency. By choosing one or the other, the developer performs a co-design of the application with its consistency protocol, in order to have the highest possible concurrency that still ensures correctness.
For instance, consider a database of bank accounts, with the invariant
that an account's balance must be positive.
The banking application has operations
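Assuming the banking application has deposit and withdraw operations (the report's own operation list is elided above), the three rules can be illustrated as follows. This is a hand-written sketch of the checks, not the Z3-based tool:

```python
# Illustrative sketch of the bank-account example under the three rules
# above. Operation names `deposit`/`withdraw` are assumed; the invariant
# is balance >= 0. Each operation is a (precondition, effector) pair.

def deposit(amount):
    pre = lambda balance: amount >= 0
    eff = lambda balance: balance + amount   # effector applied at every replica
    return pre, eff

def withdraw(amount):
    pre = lambda balance: 0 <= amount <= balance  # checked at origin replica
    eff = lambda balance: balance - amount
    return pre, eff

def sequential_ok(op, state):
    """Rule 1: the operation individually maintains balance >= 0."""
    pre, eff = op
    return (not pre(state)) or eff(state) >= 0

def concurrent_ok(op1, op2, state):
    """Rules 2-3 for this example: from any state where both
    preconditions hold, applying both effectors must preserve
    the invariant."""
    (pre1, eff1), (pre2, eff2) = op1, op2
    if pre1(state) and pre2(state):
        return eff2(eff1(state)) >= 0
    return True
```

Running the concurrent check on two withdrawals from the same account exhibits the classic counter-example (two withdrawals of 40 from a balance of 50 each pass their precondition, yet together drive the balance negative), so the developer must forbid their concurrency or weaken the invariant.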
Some applications, like online sales servers, intensively use disk I/Os. Their performance is tightly coupled with I/O efficiency. To speed up I/Os, operating systems use free memory to offer caching mechanisms. Several I/O-intensive applications may require a large cache to perform well. However, nowadays resources are virtualized. In clouds, for instance, virtual machines (VMs) offer both isolation and flexibility. This is the foundation of cloud elasticity, but it induces fragmentation of the physical resources, including memory. This fragmentation reduces the amount of available memory a VM can use for caching I/Os. Previously, we proposed Puma (for Pooling Unused Memory in Virtual Machines), which allows I/O-intensive applications running on top of VMs to benefit from large caches. This was achieved by providing a remote caching mechanism that allows any VM to extend its cache using the memory of other VMs located either on the same or on a different host.
We have performed an extensive evaluation of Puma and enhanced our solution: Puma now automatically adapts the amount of memory that a VM offers to another VM. Furthermore, if the network becomes overloaded, Puma detects the performance degradation and stops using the remote cache.
Orange Lab, 30,000 euros for one PhD student (CIFRE), Ralucca Diaconu
Renault, 60,000 over 3 years (2013 - 2016) for a CIFRE. In the context of a CIFRE cooperation with Renault, we are supervising with Whisper the PhD of Antoine Blin on the topic of scheduling processes on a multicore machine for the automotive industry. The goal is to allow real-time and multimedia applications to cohabit on a single processor. The challenge here is to control the resource consumption of non-real-time processes so as to preserve the real-time behavior of critical ones. As part of this cooperation, we will use the Bossa DSL framework for implementing process schedulers that we have previously developed.
This year, we continued the joint CIFRE (industrial PhD) research of Tao Thanh Vinh, with the French start-up company Scality, as described above (under “Large-Scale File Systems”).
The objective of this research is to design new algorithms for file and block storage systems, considering both the issues of scaling the file naming tree to a very large size, and the issue of conflicting updates to files or to the name tree, in the case of high latency or disconnected work. Preliminary results were published at Systor 2015 .
Franck Petit and Swan Dubois participate in the creation of the EMR (Equipe Mixte de Recherche) CREDIT (Compréhension, Représentation et Exploitation Des Interactions Temporelles) between LIP6/UPMC and Thales.
Nowadays, networks are the locus of temporal interactions that occur in many settings, including security. The amount and speed of such interactions increase every day. Until recently, the dynamics of these objects was little studied, due to the lack of appropriate tools and methods. However, it is becoming crucial to understand the dynamics of these interactions: typically, how can we detect failures or attacks in network traffic, fraud in financial transactions, or bugs and attacks in traces of software execution? More generally, we seek to identify patterns in the dynamics of interactions. Recently, several approaches have been proposed to study such interactions. One approach merges all interactions taking place over a period (e.g., one day) into a graph that is then studied (evolving graphs). Another builds meta-objects by duplicating entities at each time unit of their activity, and connecting them together.
The goal of the EMR is to join both teams of LIP6 and Thales on these issues.
More specifically, we hope to make significant progress on security issues such as
anomaly detection.
This requires the use of a formalism sufficiently expressive to formulate complex temporal properties.
Recently, a vast collection of concepts, formalisms, and models
has been unified in a framework called Time-Varying Graphs. We intend to pursue this direction.
In the short run, the challenges facing us are:
Magency organizes large events during which participants can use mobile devices to access related data and interact together.
The thesis of Lyes Hamidouche concerns efficient data sharing among a large number of mobile devices. Magency brings traces captured during real events (data accesses and user mobility). We are jointly working on the design of algorithms allowing a large number of mobile devices to efficiently access remote data.
Magency also runs servers. A server is used before an event in order to be prepared and tested, and then, during the event, to serve the numerous mobile device accesses. Many servers are run on a single physical machine using containers. In this configuration, the memory is partitioned, leading to poor performance for applications that need a large amount of memory for caching purposes. In the context of Damien Carver's PhD thesis, we are designing kernel-level mechanisms that automatically give more memory to the most active containers, leveraging the expertise acquired during Maxime Lorrillere's PhD thesis.
ISIR (UPMC/CNRS), LIP6 (UPMC/CNRS), LIB (UPMC/INSERM), LJLL (UPMC/CNRS), LTCI (Institut Mines-Télécom/CNRS), CHArt-LUTIN (Univ. Paris 8/EPHE), L2E (UPMC), STMS (IRCAM/CNRS).
Sorbonne Universités, ANR.
The SMART Labex project aims to enhance the quality of life in our digital societies by building the foundational bases for facilitating the inclusion of intelligent artifacts into our daily life, for service and assistance. The project addresses, in a comprehensive way, the underlying scientific questions raised by the development of human-centered digital systems and artifacts. The research program is organized along five axes, and Regal is responsible for the axis “Autonomic Distributed Environments for Mobility.”
The project involves a PhD grant of 100,000 euros over 2.5 years.
LIP6 (Regal), Ecole des Mines de Nantes (Constraint), IRISA (Triskell), LaBRI (LSR).
ANR Infra.
The design of the Java Virtual Machine (JVM) was last revised in 1999, at a time when a single program running on a uniprocessor desktop machine was the norm. Today's computing environment, however, is radically different, being characterized by many different kinds of computing devices, which are often mobile and which need to interact within the context of a single application. Supporting such applications, involving multiple mutually untrusted devices, requires resource management and scheduling strategies that were not planned for in the 1999 JVM design. The goal of InfraJVM is to design strategies that meet the needs of such applications while providing the good performance required of a managed runtime environment (MRE).
The coordinator of InfraJVM is Gaël Thomas, who left the team in 2014. Infra-JVM brings a grant of 202 000 euros from the ANR to UPMC over three years.
Title: Large-scale computation without synchronisation
Program: FP7
Duration: October 2013 - September 2016
Coordinator: Inria
Partners:
Basho Technologies (United Kingdom)
Faculdade de Ciencias e Tecnologia da Universidade Nova de Lisboa (Portugal)
Koç University (Turkey)
Rovio Entertainment Oy (Finland)
Trifork As (Denmark)
Université Catholique de Louvain (Belgium)
Technische Universitaet Kaiserslautern (Germany)
Inria contact: Marc Shapiro
The goal of SyncFree is to enable large-scale distributed applications without global synchronisation, by exploiting the recent concept of Conflict-free Replicated Data Types (CRDTs). CRDTs allow unsynchronised concurrent updates, yet ensure data consistency. This revolutionary approach maximises responsiveness and availability; it enables locating data near its users, in decentralised clouds. Global-scale applications, such as virtual wallets, advertising platforms, social networks, online games, or collaboration networks, require consistency across distributed data items. As networked users, objects, devices, and sensors proliferate, the consistency issue is increasingly acute for the software industry. Current alternatives are both unsatisfactory: either to rely on synchronisation to ensure strong consistency, or to forfeit synchronisation and consistency altogether with ad-hoc eventual consistency. The former approach does not scale beyond a single data centre and is expensive. The latter is extremely difficult to understand, and remains error-prone, even for highly-skilled programmers. SyncFree avoids both global synchronisation and the complexities of ad-hoc eventual consistency by leveraging the formal properties of CRDTs. CRDTs are designed so that unsynchronised concurrent updates do not conflict and have well-defined semantics. By combining CRDT objects from a standard library of proven datatypes (counters, sets, graphs, sequences, etc.), large-scale distributed programming is simpler and less error-prone. CRDTs are a practical and cost-effective approach. The SyncFree project will develop both theoretical and practical understanding of large-scale synchronisation-free programming based on CRDTs. Project results will be new industrial applications, new application architectures, large-scale evaluation of both, programming models and algorithms for large-scale applications, and advanced scientific understanding.
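A minimal example of a CRDT from the standard library of datatypes mentioned above is the state-based grow-only counter (G-Counter). The sketch below is a textbook illustration, not the SyncFree library code:

```python
# A minimal state-based CRDT: the grow-only counter (G-Counter).
# Textbook sketch for illustration, not the SyncFree library code.

class GCounter:
    def __init__(self, replica_id, num_replicas):
        self.id = replica_id
        self.counts = [0] * num_replicas

    def increment(self, n=1):
        self.counts[self.id] += n    # each replica only writes its own slot

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so replicas converge regardless of merge order or duplication.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]
```

Two replicas can increment concurrently without synchronisation and still converge after exchanging state, which is exactly the "unsynchronised concurrent updates with well-defined semantics" property described above.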
Inria Chile
Associate Team involved in the International Lab:
Title: hARnessing MAssive DAta flows
International Partner (Institution - Laboratory - Researcher):
Universidad Tecnica Federico Santa Maria (Chile) - Department of Computer Science - Xavier Bonnaire
Start year: 2014
See also: http://
The ARMADA project aims at designing and implementing a reliable framework for the management and processing of massive dynamic dataflows. The project is two-pronged: fault-tolerant middleware support for processing massive continuous input, and a redundant storage service for mutable data on a massive scale.
Title: Application dependent intrusion (byzantine) detection in Dynamic cloud systems
International Partner (Institution - Laboratory - Researcher):
Technion Haifa - Prof. Roy Friedman
Duration: 2014–2015
The goal of this project is to study the ability to tolerate Byzantine failures in dynamic environments. The Byzantine model allows arbitrary behaviour of a certain fraction of nodes. Our goal is to provide both a theoretical framework and a performance evaluation for tolerating Byzantine behaviour in dynamic distributed environments. We consider "bag of tasks" (BoT) applications characterized by trivial parallelism, where a large computational problem is broken into a large number of independent tasks. These tasks can be spread over commodity hardware and operating systems. We target different execution environments: (1) Clouds: tasks are submitted to virtual machines hosted at cloud providers; (2) Desktop grid: tasks are submitted to a federated large pool of donated machines hosted at users' homes; (3) Hybrid cloud: combining both cloud and desktop nodes.
Title: Autonomic and Scalable Algorithms for Building Resilient Distributed Systems
International Partner (Institution - Laboratory - Researcher):
Universidade Federal do Paraná (UFPR), Brazil, Prof. Elias Duarte
Duration: 2015–2017
In the context of autonomic computing systems that detect and diagnose problems and adapt themselves, the VCube (Virtual Cube), proposed by Prof. Elias Duarte , is a distributed diagnosis algorithm that organizes the system nodes in a virtual hypercube topology. VCube has logarithmic properties: when all nodes are fault-free, processes are virtually connected to form a perfect hypercube; as soon as one or more failures are detected, links are automatically reconnected to remove the faulty nodes, and the resulting topology, connecting only fault-free nodes, keeps its logarithmic properties. The goal of this project is to exploit the autonomic and logarithmic properties of VCube by proposing self-adapting and self-configuring services.
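The reconnection idea can be sketched with a simple hypercube contact rule: node i's contact in dimension d is i XOR 2^d, and when that node is faulty the next fault-free node of the same subcube is used instead. This is a deliberate simplification of VCube's actual cluster ordering, shown only to convey the flavour:

```python
# Hedged sketch of hypercube-style contact selection, in the spirit of
# VCube. This simplifies VCube's cluster ordering: the direct neighbour
# in dimension d is i XOR 2^d; if it is faulty, we fall back to the
# next fault-free node of the subcube reached by flipping bit d.

def hypercube_neighbor(i, d):
    """Direct dimension-d neighbour of node i in a perfect hypercube."""
    return i ^ (1 << d)

def contact(i, d, faulty):
    """First fault-free candidate in the dimension-d subcube of node i."""
    base = hypercube_neighbor(i, d)
    # Candidates are the 2**d nodes of that subcube, tried in increasing
    # XOR distance from the direct neighbour.
    candidates = [base ^ offset for offset in range(1 << d)]
    for node in candidates:
        if node not in faulty:
            return node
    return None  # the whole cluster failed
```

When all nodes are fault-free, each node keeps exactly its log n hypercube neighbours; as faults appear, contacts are rerouted inside the affected subcube, preserving the logarithmic structure among fault-free nodes.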
Dastagiri Reddy MalikiReddy
Date: May–Aug. 2015
Institution: IITKGP (India)
Alvarez Colombo Santiago Javier
Date: Jul. 2015–Jan. 2016
Institution: Universidad de Buenos Aires (Argentina)
Luciana Arantes was scientific co-chair of the 16th Simpósio em Sistemas Computacionais de Alto Desempenho.
Pierre Sens was general chair of the EDCC 2015 conference.
Luciana Arantes was organisation chair of the EDCC 2015 conference.
Sébastien Monnet was finance chair of the EDCC 2015 conference.
Franck Petit is a member of the steering committee of the SSS (International Symposium on Stabilization, Safety, and Security of Distributed Systems) conference.
Pierre Sens is a member of the steering committee of the SBAC-PAD (International Symposium on Computer Architecture and High Performance Computing) conference.
Marc Shapiro is a member of the Steering Committee of the Workshop on Principles and Practice of Consistency for Distributed Data (PaPoC).
Marc Shapiro is a member of the Steering Committee of the Int. Conf. on the Principles of Distributed Systems (OPODIS).
Marc Shapiro chairs the 2016 Franco-American Doctoral Exchange Programme (FADEx).
Luciana Arantes was a PC member of the 15th IFIP Distributed Applications and Interoperable Systems Conference (DAIS 2015); the 26th IEEE International Symposium on Software Reliability Engineering (ISSRE 2015); the 17th IEEE International Conference on High Performance Computing and Communications (HPCC 2015); the 14th IEEE International Symposium on Network Computing and Applications (NCA 2015); the 27th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD 2015); and the Eighth International Conference on Dependability (DEPEND 2015).
Swan Dubois was a member of the program committee of the 17th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2015).
Sébastien Monnet is a PC member of the 30th and 31st ACM/SIGAPP Symposium on Applied Computing - Operating Systems track (SAC 2015 and 2016); the French conference on parallelism, architecture and systems (Compas 2015); the 14th Workshop on Network and Systems Support for Games (NetGames 2015); and the 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2016).
Pierre Sens was a member of the program committee of the 4th IEEE/SAE International Conference on Connected Vehicles and Expo (ICCVE 2015); Euro-Par 2015; and the 30th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2016).
Marc Shapiro was a member of the External Programme Committee for the Int. Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2016).
Marc Shapiro was a PC member of the Workshop on Planetary-Scale Distributed Systems (W-PSDS 2015) and of the Middleware 2015 conference.
Swan Dubois reviewed papers for the ACM Symposium on Principles of Distributed Computing (PODC 2015), the International Conference on Networked Systems (NETYS 2015), and the Rencontres Francophones sur les Aspects Algorithmiques de Télécommunications (AlgoTel 2015).
Sébastien Monnet was external reviewer for the 30th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2016).
Franck Petit was guest editor, with Michel Raynal, of a special issue on Distributed Computing of Theoretical Computer Science (TCS), Vol. 561. He also serves on the editorial boards of The Scientific World Journal and the Journal of Discrete Mathematics.
Pierre Sens is an associate editor of the International Journal of High Performance Computing and Networking (IJHPCN).
Luciana Arantes reviewed papers for the Journal of Parallel and Distributed Computing (JPDC).
Swan Dubois reviewed papers for the Journal of Parallel and Distributed Computing (JPDC), Computer Networks (COMNET), and Theoretical Computer Science (TCS).
Sébastien Monnet reviewed a paper for IEEE Transactions on Parallel and Distributed Systems (TPDS).
Pierre Sens gave the following invited talks:
Innovative Computing Laboratory (ICL), University of Tennessee, October 2015.
Maimonide seminar, Herzliya, Israel, November 2015.
Marc Shapiro gave the following invited talks:
Royal Holloway London University, May 2015.
Universidade Nova de Lisboa, May 2015.
Workshop on Chemistry of Concurrent and Distributed Programming II, Agadir, Morocco, May 2015.
CurryOn, Prague, July 2015.
Workshop on Large-Scale Distributed Systems (LADIS), Monterey CA USA, Oct. 2015.
CodeMesh, London, Nov. 2015.
Swan Dubois was a member of the scientific committee for the assistant professor position n°1642 at University Paris-Sud.
Pierre Sens served on the jury for the "Prix de thèse Gilles Kahn" (SIF - Académie des Sciences).
Pierre Sens also chaired the selection committee for an assistant professor position at Grenoble University.
Pierre Sens is a member of the scientific council of UPMC and an officer at the UPMC Vice-Presidency for Research and Innovation.
Julien Sopena is a member of the "Directoire des formations et de l'insertion professionnelle" of UPMC Sorbonne Universités, France.
Master: Julien Sopena is responsible for the Computer Science Master's degree in Distributed Systems and Applications (SAR, in French), UPMC Sorbonne Universités, France.
Master: Luciana Arantes, Swan Dubois, Olivier Marin, Sébastien Monnet, Franck Petit, Pierre Sens, Advanced distributed algorithms, M2, UPMC Sorbonne Universités, France
Master: Maxime Lorrillere, Julien Sopena, Linux Kernel Programming, M1, UPMC Sorbonne Universités, France
Master: Luciana Arantes, Sébastien Monnet, Pierre Sens, Julien Sopena, Operating systems kernel, M1, UPMC Sorbonne Universités, France
Master: Luciana Arantes, Swan Dubois, Distributed systems programming, M1, UPMC Sorbonne Universités, France
Master: Luciana Arantes, Swan Dubois, Franck Petit, Distributed Algorithms, M1, UPMC Sorbonne Universités, France
Master: Sébastien Monnet, Julien Sopena, Client-server distributed systems, M1, UPMC Sorbonne Universités, France
Licence: Pierre Sens, Luciana Arantes, Julien Sopena, Principles of operating systems, L3, UPMC Sorbonne Universités, France
Licence: Swan Dubois, Sébastien Monnet, Introduction to operating systems, L2, UPMC Sorbonne Universités, France
Licence: Mesaac Makpangou, C Programming Language, 27 h, L2, UPMC Sorbonne Universités, France
Engineering 4th year: Marc Shapiro, Introduction to operating systems, 22 h, M1, Polytech UPMC Sorbonne Universités, France.
PhD: Raluca Diaconu, “Passage à l'échelle pour les mondes virtuels,” UPMC, 01/23/2015, Joaquin Keller (Orange Labs), Sébastien Monnet, Pierre Sens.
PhD: Lokesh Gidra, “Ramasse-miettes pour les machines virtuelles sur les processeurs multicoeurs,” UPMC, 09/28/2015, Gaël Thomas, Marc Shapiro, Julien Sopena.
PhD: Karine Pires, “Diffusion et Transcodage à Grande Échelle de Flux Vidéo en Direct,” UPMC, 03/31/2015, Gwendal Simon, Sébastien Monnet, Pierre Sens.
PhD: Maxime Véron, “Arbitrage décentralisé pour les jeux massivement parallèles,” UPMC, 09/25/2015, Olivier Marin, Sébastien Monnet, Pierre Sens.
PhD: Marek Zawirski, “Cohérence à terme fiable avec des types de données répliquées,” UPMC, 01/14/2015, Marc Shapiro.
PhD in progress: João Paulo de Araujo, "L'exécution efficace d'algorithmes distribués dans les réseaux véhiculaires", funded by CNPq (Brazil), since Nov. 2015, Pierre Sens and Luciana Arantes.
PhD in progress: Antoine Blin, "Execution of real-time applications on a small multicore embedded system", since April 2012, Gilles Muller (Whisper) and Julien Sopena, CIFRE Renault
PhD in progress: Marjorie Bournat, “Gathering in robot networks”, UPMC, since Sep. 2014, Swan Dubois, Franck Petit, Yoann Dieudonné (University of Picardy Jules Verne)
PhD in progress: Damien Carver, “HACHE : HorizontAl Cache cHorEgraphy – Toward automatic resizing of shared I/O caches.”, UPMC, CIFRE, since Jan. 2015, Sébastien Monnet, Pierre Sens, Julien Sopena, Dimitri Refauvelet (Magency).
PhD in progress: Florent Coriat, "Géolocalisation et routage en situation de crise", since Sept. 2014, UPMC, Anne Fladenmuller (NPA-LIP6) and Luciana Arantes.
PhD in progress: Rudyar Cortes, “Un Environnement à grande échelle pour le traitement de flots massifs de données,” UPMC, funded by the Chilean government, since Sep. 2013, Olivier Marin, Luciana Arantes, Pierre Sens.
PhD in progress: Lyes Hamidouche, “Data replication and data sharing in mobile networks”, UPMC, CIFRE, since Nov. 2014, Sébastien Monnet, Pierre Sens, Dimitri Refauvelet (Magency).
PhD in progress: Denis Jeanneau, “Problèmes d'accord et détecteurs de défaillances dans les réseaux dynamiques,” UPMC, funded by Labex SMART, since Oct. 2015, Luciana Arantes, Pierre Sens.
PhD in progress: Mohamed Hamza Kaaouachi, “Autonomic Distributed Environments for Mobility”, UPMC/Chart-LUTIN (Labex SMART), Franck Petit, Swan Dubois, and François Jouen (Chart).
PhD in progress: Maxime Lorrillere, “A kernel cooperative cache for virtualized environments”, UPMC, Sébastien Monnet, Julien Sopena, Pierre Sens.
PhD in progress: Mahsa Najafzadeh, UPMC, funded by Inria competitive grant (Cordi-S), since Nov. 2012, Marc Shapiro.
PhD in progress: Yoann Péron, “Development of an adaptive recommendation system”, UPMC/Makazi, Franck Petit, Patrick Gallinari, Matthias Oehler (Makazi).
PhD in progress: Alejandro Z. Tomsic, UPMC, funded by SyncFree, since Feb. 2014, Marc Shapiro.
PhD in progress: Guillaume Turchini, “Scalable platform for massively multiplayer online games,” UPMC, since Sep. 2015, Sébastien Monnet.
PhD in progress: Tao Thanh Vinh, UPMC, CIFRE, since Feb. 2014, Marc Shapiro, Vianney Rancurel (Scality).
PhD in progress: Gauthier Voron, “Big-Os : un OS pour les grands volumes de données,”, UPMC, since Sep. 2014, Gaël Thomas, Pierre Sens.
Master 1: Mohamed Bekthaoui, “Cliques maximales dans les TVGs”, ENS Lyon, Swan Dubois.
Franck Petit was a reviewer for:
T. Langner, PhD ETHZ, Zurich, Switzerland. (Advisor: R. Wattenhofer)
M. Djibril Faye, PhD LIP, ENS Lyon. (Advisor: E. Caron)
M. Khaled, PhD MIS, Amiens. (Advisors: V. Villain and F. Levé)
Franck Petit was chair of:
D. Bonnin, PhD LaBRI, Bordeaux. (Advisors: C. Travers and Y. Métivier)
M. Véron, PhD LIP6, Paris. (Advisors: P. Sens, O. Marin, and S. Monnet)
Pierre Sens was a reviewer for:
G. Da Costa, HDR IRIT, Toulouse
V. Marangozova, HDR LIG, Grenoble
G. Fedak, HDR LIP, Lyon
P. Li, PhD Bordeaux (Advisors: R. Namyst, E. Brunnet)
R. Leroy, PhD LIFL - Maison de la Simulation, (Advisor: N. Melab)
T. Martsinkevich, PhD LRI (Advisor: F. Cappello)
M. Antoine, PhD Univ. Nice (Advisor: F. Baude)
C. Gómez-Calzado, PhD Univ. San Sebastian, Spain (Advisor: M. Larrea)
J. De la Houssaye, PhD Univ. Évry (Advisor: F. Pommereau)
F. Zanon Boito, PhD LIG (Advisors: Y. Denneulin, P. Navaux)
A. El Rheddane, PhD LIG (Advisor: N. de Palma)
Pierre Sens was chair of the defense committees of:
S. Monnet, HDR LIP6, Paris
D. Conan, HDR Télécom Sud Paris, Evry
R. Angarita, PhD Dauphine, Paris (Advisor: M. Rukoz)
X. Han, PhD Télécom Sud Paris, Evry (Advisor: N. Crespi)
R. Farahbakhsh, PhD Télécom Sud Paris, Evry (Advisor: N. Crespi)
D. Yang, PhD Télécom Sud Paris, Evry (Advisor: D. Zeghlache)
H. Xiong, PhD Télécom Sud Paris, Evry (Advisor: D. Zeghlache)
L. Guo, PhD LIP6, Paris, (Advisor: G. Muller)
Marc Shapiro was a reviewer on the defense committee of Mehdi Ahmed-Nacer (Nancy).
Sébastien Monnet was member of the defense committee of Amadou Diarra, Phd Grenoble university, Grenoble (Advisor: V. Quema).
Sébastien Monnet is responsible, on behalf of LIP6, for the Science Festival (Fête de la Science) at UPMC.
Sébastien Monnet and Julien Sopena ran an activity at the 2015 Science Festival.