The overall goal of the Sardes project-team is to develop software engineering and software infrastructure (operating system, virtual machine, middleware) foundations for the construction of provably dependable, self-manageable distributed systems.
To contribute to the above goal, the project-team has three major objectives:
To develop component-based software technology, that allows the construction of efficient, dynamically configurable systems, and that relies on a well-defined formal foundation.
To develop a “language-based” approach to the construction of configurable, provably dependable operating systems and distributed software infrastructures.
To develop algorithms and control techniques required to build scalable, self-manageable distributed systems.
In line with these objectives, the project-team organizes its research along four major areas:
Languages and foundations for component systems Work in this area focuses on language support and semantical foundations for distributed component-based systems, with two main goals: (1) the development of a new generation of reflective software component technology with a formal semantical basis, and extensive language support in the form of architecture description and programming languages for dynamic distributed software architectures; (2) the study of process calculus foundations and coinductive proof techniques for distributed component-based programs.
System support for multiscale systems Work in this area focuses on operating system and middleware services required for the construction of component-based systems at different scales (multicore systems on chip, and peer-to-peer systems), with two main goals: (1) to develop algorithms and operating system functions required for the support of efficient event-based concurrency and component reconfiguration in MPSoCs; (2) to develop algorithms and middleware functions required for the deployment, configuration and operation of applications in realistic peer-to-peer environments, typically exploiting an epidemic approach.
Control for adaptive and self-managed systems Work in this area focuses on the exploitation and development of discrete and continuous control techniques for the construction of adaptive component-based systems. Application domains considered for this theme are, respectively, embedded systems and performance management for application server clusters.
Virtual machine for component systems Work in this area focuses on the development of a component-based virtual machine for embedded systems, with two main goals: (1) to develop an extended instruction set for component support, including support for dynamic configuration, orthogonal component persistence, and isolation; (2) to develop a native implementation of the virtual machine, on resource-constrained hardware.
The primary foundations of the software component technology developed by Sardes lie in the fields of component-based software engineering and software architecture. Nowadays, it is generally recognized that component-based software engineering and software architecture approaches are crucial to the development, deployment, management and maintenance of large, dependable software systems. Several component models and associated architecture description languages have been devised over the past fifteen years, as documented by analyses of recent component models and surveys of architecture description languages.
To natively support configurability and adaptability in systems, Sardes component technology also draws from ideas in reflective languages and reflective middleware. Reflection can be used both to increase the separation of concerns in a system architecture, as pioneered by aspect-oriented programming, and to provide systematic means for modifying a system implementation.
The semantical foundations of component-based and reflective systems are not yet firmly established, however. Despite much work on formal foundations for component-based systems, several questions remain open. For instance, notions of program equivalence in the presence of dynamic configuration capabilities are far from being understood. To study the formal foundations of component-based technology, we try to model the relevant constructs and capabilities in a process calculus that is simple enough to formally analyze and reason about. This approach has been used successfully for the analysis of concurrency with the π-calculus.
Part of the language developments in Sardes concerns the challenge of providing programming support for computer systems with continuously running services and applications, that operate at multiple physical and logical locations, that are constantly introduced, deployed, and combined, and that interact, fail and evolve all the time. Programming such systems, called open programming by the designers of the Alice programming language, is challenging because it requires the combination of several features, notably: (i) modularity, i.e. the ability to build systems by combining and composing multiple elements; (ii) security, i.e. the ability to deal with unknown and untrusted system elements, and to enforce if necessary their isolation from the rest of the system; (iii) distribution, i.e. the ability to build systems out of multiple elements executing separately on multiple interconnected machines, which operate at different speeds and under different capacity constraints, and which may fail independently; (iv) concurrency, i.e. the ability to deal with multiple concurrent events and non-sequential tasks; and (v) dynamicity, i.e. the ability to introduce new systems, as well as to remove, update and modify existing ones, possibly during their execution.
The rigorous study of programming features relates to the study of programming language constructs and semantics in general. Each of the features mentioned above has been, and continues to be, the subject of active research on its own. Combining them into a practical programming language with a well-defined formal semantics, however, is still an open question. Recent languages that provide relevant background for Sardes' research are:
For their support of dynamic notions of modules and software components: Acute, Alice, ArchJava, Classages, Erlang, Oz, and Scala.
For their security and failure management features: Acute, E, Erlang and Oz.
For their support of concurrent and distributed execution: Acute, Alice, JoCaml, E, Erlang, Klaim, and Oz.
The Sardes approach to software infrastructure is both architecture-based and language-based: architecture-based because it relies on an explicit component structure for runtime reconfiguration, and language-based because it relies on a high-level, type-safe programming language as a basis for operating system and middleware construction. Exploiting high-level programming languages for operating system construction has a long history, with systems such as Oberon, SPIN or JX. More recent and relevant developments for Sardes are:
The developments around the Singularity project at Microsoft Research, which illustrate the use of language-based software isolation for building a secure operating system kernel.
The seL4 project, which produced a formal verification of a modern operating system microkernel using the Isabelle/HOL theorem prover.
The development of operating system kernels for multicore hardware architectures, such as Corey and Barrelfish.
The development of efficient run-time support for event-based programming on multicore systems, such as libasync.
Management (or Administration) is the function that aims at maintaining a system's ability to provide its specified services, with a prescribed quality of service. We approach management as a control activity, involving an event-reaction loop: the management system detects events that may alter the ability of the managed system to perform its function, and reacts to these events by trying to restore this ability. The operations performed under system and application administration include observation and monitoring, configuration and deployment, resource management, performance management, and fault management.
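The event-reaction loop just described can be sketched minimally as a mapping from detected events to corrective actions. The event and action names below are hypothetical, for illustration only:

```java
import java.util.ArrayList;
import java.util.List;

public class ManagementLoop {
    // One iteration of the event-reaction loop: the management system maps
    // each event that may alter the managed system's ability to provide its
    // service to an action that tries to restore this ability.
    public static List<String> react(List<String> events) {
        List<String> actions = new ArrayList<>();
        for (String e : events) {
            if (e.equals("OVERLOAD")) actions.add("ADD_REPLICA");           // resource management
            else if (e.equals("NODE_FAILURE")) actions.add("RESTART_NODE"); // fault management
            else actions.add("LOG_ONLY");                                   // monitoring only
        }
        return actions;
    }
}
```

In a real administration system, the loop would run continuously and the reactions would themselves be observed, closing the feedback loop.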
Up to now, administration tasks have mainly been performed in an ad-hoc fashion. A great deal of the knowledge needed for administration tasks is not formalized and is part of the administrators' know-how and experience. As the size and complexity of systems and applications increase, the costs related to administration take up a major part of total information processing budgets, and the difficulty of administration tasks tends to approach the limits of the administrators' skills. For example, an analysis of the causes of failures of Internet services shows that most of a service's downtime may be attributed to management errors (e.g. wrong configuration), with software failures coming second. In the same vein, unexpected variations of the load are difficult to manage, since they require short reaction times, which human administrators are not able to achieve.
The above motivates a new approach, in which a significant part of management-related functions is performed automatically, with minimal human intervention. This is the goal of the so-called autonomic computing movement. Several research projects are active in this area, and recent surveys cover the main research problems related to autonomic computing. Of particular importance for Sardes' work are the issues associated with configuration, deployment and reconfiguration, and techniques for constructing control algorithms in the decision stage of administration feedback loops, including both discrete and continuous control techniques.
Management and control functions built by Sardes also require the development of distributed algorithms at different scales, from algorithms for multiprocessor architectures to algorithms for cloud computing and for dynamic peer-to-peer computing systems. Of particular relevance in the latter contexts are epidemic protocols, such as gossip protocols, because of their natural resilience to node dynamicity (churn) and their inherent scalability.
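As an illustration of the epidemic approach, the following sketch simulates push-based gossip dissemination, where each node holding a message forwards it to a few randomly chosen peers in every round. This is a deliberately simplified model; real gossip protocols also handle membership management and anti-entropy:

```java
import java.util.Random;

public class Gossip {
    // Simulate push-based epidemic dissemination: each round, every node that
    // already holds the message pushes it to `fanout` peers chosen uniformly
    // at random. Returns the number of nodes reached after `rounds` rounds.
    public static int disseminate(int n, int fanout, int rounds, long seed) {
        Random rnd = new Random(seed);
        boolean[] has = new boolean[n];
        has[0] = true;                              // node 0 originates the message
        for (int r = 0; r < rounds; r++) {
            boolean[] next = has.clone();
            for (int i = 0; i < n; i++) {
                if (!has[i]) continue;
                for (int k = 0; k < fanout; k++)
                    next[rnd.nextInt(n)] = true;    // push to a random peer
            }
            has = next;
        }
        int count = 0;
        for (boolean b : has) if (b) count++;
        return count;
    }
}
```

Coverage grows roughly exponentially in the early rounds, which is why a message reaches almost all nodes in a logarithmic number of rounds despite node churn.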
AAC_tactics is a plugin for the Coq proof-assistant that implements new proof tactics for rewriting modulo associativity and commutativity.
It is available at http://
ACM: D.2.4 Software/Program Verification
Keywords: Rewriting, rewriting modulo AC, proof tactics, proof assistant
Software benefit: AAC_tactics provides novel efficient proof tactics for rewriting modulo associativity and commutativity.
License: LGPL
Type of human computer interaction: N/A
OS/Middleware: Windows, Linux, MacOS X
Programming language: Coq
ATBR (Algebraic Tools for Binary Relations) is a library for the Coq proof assistant that implements new proof tactics for reasoning with binary relations.
Its main tactic implements a decision procedure for inequalities in Kleene algebras. It is available at http://
ACM: D.2.4 Software/Program Verification
Keywords: Binary relations, Kleene algebras, proof tactics, proof assistant
Software benefit: ATBR provides new proof tactics for reasoning with binary relations.
License: LGPL
Type of human computer interaction: N/A
OS/Middleware: Windows, Linux, MacOS X
Programming language: Coq
MoKa is a software framework for the modeling and capacity planning of distributed systems.
It first provides a set of tools to build analytical models that describe the behavior of distributed computing systems in terms of performance, availability, and cost.
The framework makes it possible to include several modeling algorithms and to compare them with regard to their accuracy and efficiency.
Furthermore, MoKa provides a set of tools to build capacity planning methods.
A capacity planning method finds a distributed system configuration that guarantees given quality-of-service objectives.
MoKa is able to include different capacity planning algorithms and to compare them with regard to their efficiency and the optimality of their results.
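To illustrate what a model-based capacity planning method computes, the sketch below sizes a server cluster using an assumed M/M/1-per-server queueing model (an illustrative assumption, not MoKa's actual algorithms): it returns the smallest number of servers whose predicted response time meets the quality-of-service objective.

```java
public class CapacityPlanner {
    // Find the cheapest configuration (fewest servers) such that the
    // predicted response time meets the QoS objective, assuming the load
    // `lambda` (req/s) is spread evenly over `c` servers of capacity `mu`
    // (req/s) and each server behaves as an M/M/1 queue.
    public static int minServers(double lambda, double mu, double maxLatency) {
        for (int c = 1; c <= 1_000_000; c++) {
            double perServer = lambda / c;
            if (perServer >= mu) continue;            // server saturated: model invalid
            double latency = 1.0 / (mu - perServer);  // M/M/1 mean response time
            if (latency <= maxLatency) return c;      // cheapest valid configuration
        }
        return -1; // objective unreachable
    }
}
```

Comparing such algorithms on accuracy (model vs. measured latency) and on the optimality of the returned configuration is exactly the kind of comparison the framework supports.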
MoKa is available at: http://
ACM: C.2.4 Distributed Systems, C.4 Performance of Systems, D.2.9 Management
Keywords: System management, capacity planning, performance management
Software benefit: MoKa provides modeling, capacity planning and performance management facilities for application server clusters. Thanks to its model-based capacity planning, MoKa is able to enforce service level objectives while minimizing the service cost.
License: TBD
Type of human computer interaction: command-line interface
OS/Middleware: Windows, Linux, MacOS X
Programming language: Java
ConSer is a software framework for the modeling, concurrency control, and admission control of server systems.
It implements a fluid-based model that captures the dynamics and behavior of a server system in terms of service performance and availability.
ConSer implements various novel admission control laws for servers.
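A minimal fluid model of a controlled server can be sketched as follows. The model form and the proportional admission law below are illustrative assumptions, not ConSer's actual control laws:

```java
public class FluidAdmission {
    // Fluid model sketch: dN/dt = a*lambda - mu*min(N, c), where N is the
    // number of pending requests, lambda the offered load, mu the per-slot
    // service rate, c the server concurrency level, and a in [0,1] the
    // admission ratio. A simple proportional law lowers admission as the
    // queue N rises above a setpoint, bounding response time.
    public static double simulate(double lambda, double mu, int c,
                                  double setpoint, int steps, double dt) {
        double n = 0.0;
        for (int i = 0; i < steps; i++) {
            double a = Math.max(0.0, Math.min(1.0, 1.0 - (n - setpoint) / setpoint));
            n += dt * (a * lambda - mu * Math.min(n, c));  // Euler integration step
            n = Math.max(0.0, n);
        }
        return n; // queue length after `steps` steps
    }
}
```

With this law the queue settles at a finite equilibrium even under overload, instead of growing without bound as it would with full admission.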
ACM: C.4 Performance of Systems; D.2.9 Management
Keywords: Admission control, concurrency control, server systems, performance
Software benefit: ConSer provides fluid-model-based admission control and concurrency control for server systems, helping enforce service performance and availability objectives.
License: LGPL
Type of human computer interaction: web interface
OS/Middleware: Windows, Linux, MacOS X
Programming language: Java, AspectJ
e-Caching is a software framework for higher scalability of multi-tier Internet services through end-to-end caching of dynamic data. It provides a novel caching solution that makes it possible to cache different types of data (e.g. Web content, database query results, etc.) at different locations of a multi-tier Internet service. The framework makes it possible to combine different caches and, thus, to provide higher scalability of Internet services. e-Caching maintains the integrity of the cached data through novel distributed caching algorithms that guarantee the consistency of the underlying data.
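One way such end-to-end consistency can be maintained is dependency-based invalidation across tiers. The sketch below is an assumed design for illustration, not e-Caching's actual distributed protocol:

```java
import java.util.*;

public class EndToEndCache {
    // Each cached entry records the underlying data items it depends on
    // (e.g. a rendered page depends on a user record). A write to an item
    // invalidates every dependent entry in every tier, so that no tier of
    // the multi-tier service keeps serving stale data.
    private final List<Map<String, String>> tiers = new ArrayList<>();
    private final Map<String, Set<String>> deps = new HashMap<>(); // item -> cache keys

    public EndToEndCache(int nTiers) {
        for (int i = 0; i < nTiers; i++) tiers.add(new HashMap<>());
    }
    public void put(int tier, String key, String value, Set<String> items) {
        tiers.get(tier).put(key, value);
        for (String it : items)
            deps.computeIfAbsent(it, x -> new HashSet<>()).add(key);
    }
    public String get(int tier, String key) { return tiers.get(tier).get(key); }
    public void write(String item) {            // update to the underlying data
        for (String key : deps.getOrDefault(item, Set.of()))
            for (Map<String, String> t : tiers) t.remove(key); // invalidate everywhere
    }
}
```

In a distributed deployment, the invalidation step would itself be a distributed protocol, which is where the consistency guarantees become non-trivial.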
ACM: C.2.4 Distributed Systems, C.4 Performance of Systems
Keywords: Caching, multi-tier systems, consistency, performance
Software benefit: a novel end-to-end caching protocol for multi-tier services, consistency management, performance improvement.
License: TBD
Type of human computer interaction: command-line interface
OS/Middleware: Windows, Linux, MacOS X
Programming language: Java
MRBS is a software framework for benchmarking the performance and dependability of MapReduce distributed systems. It includes five benchmarks covering several application domains and a wide range of execution scenarios, such as data-intensive vs. compute-intensive applications, or batch vs. interactive applications. MRBS makes it possible to characterize application workload, faultload and dataload, and it produces extensive performance and dependability statistics.
ACM: C.2.4 Distributed Systems, C.4 Performance of Systems
Keywords: Benchmark, performance, dependability, MapReduce, Hadoop, Cloud Computing
Software benefit: the first performance and dependability benchmark suite for MapReduce systems.
License: TBD
Type of human computer interaction: GUI and command-line interface
OS/Middleware: Windows, Linux, MacOS X
Programming language: Java, Unix Shell scripts
BZR is a reactive language, belonging to the synchronous languages family, whose main feature is to include discrete controller synthesis within its compilation. It is equipped with a behavioral contract mechanism, in which assumptions can be described, as well as a property to be enforced: the semantics of the latter is that the property should be enforced by controlling the behaviour of the node equipped with the contract. This property is enforced by an automatically built controller, which acts on free controllable variables given by the programmer.
BZR is now further developed with the Pop-Art team, where G. Delaval got a position. It has been designed and developed in the Sardes team in relation with the research topic on Model-based Control of Adaptive and Reconfigurable Systems. It is currently applied in different directions: component-based design and the Fractal framework; real-time control systems and the Orccad design environment; operating systems and administration loops in virtual machines; hardware and reconfigurable architecture (FPGAs).
See also the web page http://
ACM: D.3.3 [Programming Languages]: Language Constructs and Features—Control structures; C.3 [Special-purpose and Application-based Systems]: Real-time and embedded systems; D.2.2 [Software Engineering]: Design Tools and Techniques—Computer-aided software engineering, State diagrams; D.2.4 [Software Engineering]: Software / Program Verification—Formal methods, Programming by contract
Keywords: Discrete controller synthesis, modularity, components, contracts, reactive systems, synchronous programming, adaptive and reconfigurable systems
Software benefit: the first integration of discrete control synthesis in a compiler, making it usable at the level of the programming language.
License: TBD
Type of human computer interaction: programming language and command-line interface
OS/Middleware: Linux
Programming language: Caml; generates C or Java or Caml executable code
The goal of this work is to study process algebraic foundations for component-based distributed programming. Most of this work takes place in the context of the ANR PiCoq and Rever projects.
To develop composable abstractions for programming dependable systems, we investigate concurrent reversible models
of computation, where arbitrary executions can be reversed, step by step, in a causally consistent way.
This year we have continued the study of primitives for controlling reversibility in a higher-order variant of the π-calculus.
We have also started a study on the cost of making a concurrent programming language reversible. More specifically, we have started from an abstract machine for a fragment of the Oz programming language and made it reversible. We have shown that the space overhead of the reversible machine with respect to the original one is at most linear in the number of execution steps, and that this bound is tight, since some programs cannot be made reversible without storing a commensurate amount of information. This work has been published.
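The linear space bound can be illustrated with a toy reversible machine (not the Oz abstract machine itself): destructive assignment erases information, so each forward step saves exactly one undo record, and the trail therefore grows at most linearly in the number of steps.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ReversibleMachine {
    // Toy machine with destructive assignment made reversible: each forward
    // step pushes the overwritten value onto a trail; undoing a step pops the
    // trail and restores that value. The trail holds one record per step,
    // i.e. space overhead linear in the number of execution steps.
    private final int[] mem;
    private final Deque<int[]> trail = new ArrayDeque<>(); // {address, old value}

    public ReversibleMachine(int size) { mem = new int[size]; }

    public void assign(int addr, int value) {   // forward step
        trail.push(new int[]{addr, mem[addr]}); // save erased information
        mem[addr] = value;
    }
    public void undo() {                        // backward step
        int[] rec = trail.pop();
        mem[rec[0]] = rec[1];
    }
    public int read(int addr) { return mem[addr]; }
    public int trailSize() { return trail.size(); }
}
```

The tightness of the bound corresponds to programs like repeated overwrites of the same cell, where every overwritten value must be stored to allow causally consistent undo.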
The goal of this work is to apply control techniques based on the behavioral model of reactive automata and the algorithmic techniques of discrete controller synthesis. We adopt the synchronous approach to reactive systems, and use an associated effective controller synthesis tool, Sigali, developed at Inria Rennes. We are exploring several target application domains, where we expect to find commonalities in the control problems, and variations in the definitions of configurations, and in the criteria motivating adaptation.
This year, we have started investigating the application of discrete controller synthesis to various problems in computer systems management and administration. The increasing complexity of computer systems has led to the automation of administration functions, in the form of autonomic managers. One important aspect requiring such management is the energy consumption of computing systems, in the perspective of green computing. As these managers each address a specific aspect, several managers are needed to cover all the domains of administration. However, coordinating them is necessary for proper and effective global administration. Such coordination is a problem of synchronization and logical control of the administration operations that can be applied by autonomous managers on the managed system at a given time, in response to events observed on the state of this system. We therefore propose to investigate the use of reactive models with events and states, and discrete control techniques, to solve this problem. We illustrate this approach by integrating a controller obtained by synchronous programming, based on discrete controller synthesis, in an autonomic system administration infrastructure. The role of this controller is to orchestrate the execution of the reconfiguration operations of all administration policies so as to satisfy properties of logical consistency. We have applied this approach to coordinate energy-aware managers for self-optimization, self-regulation of processor frequency and self-repair.
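The kind of logical-consistency property enforced by such a controller can be illustrated by a hand-written mutual-exclusion coordinator. Note the difference with the actual work: there, the controller is synthesized automatically by discrete controller synthesis from a declarative property, rather than programmed by hand as below.

```java
public class Coordinator {
    // Enforce the property that at most one autonomic manager applies a
    // reconfiguration operation on the managed system at a time: a manager
    // requests permission before reconfiguring; the coordinator grants it
    // only when no other manager is currently active, and inhibits otherwise.
    private String active = null; // name of the manager currently reconfiguring

    public synchronized boolean request(String manager) {
        if (active == null) { active = manager; return true; } // grant
        return false;                                          // inhibit
    }
    public synchronized void done(String manager) {
        if (manager.equals(active)) active = null;             // release
    }
}
```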
Multicore machines with Non-Uniform Memory Accesses (NUMA) are becoming commodity platforms. Efficiently exploiting their resources remains an open research problem. This line of work investigates system support to tackle various issues related to efficient resource management and programming support.
One of the key concerns in efficiently exploiting multicore NUMA architectures is to limit as much as possible the number of remote memory accesses (i.e., main memory accesses performed from a core to a memory bank that is not directly attached to it). However, in many cases, existing profilers do not provide enough information to help programmers achieve this goal. We have developed MemProf, the first profiler that allows programmers to choose and implement efficient application-level optimizations for NUMA systems. MemProf achieves this goal by allowing programmers to (i) precisely understand which memory objects are accessed remotely, and (ii) build temporal flows of interactions between threads and objects. We evaluated MemProf using four applications (FaceRec, Streamcluster, Psearchy, and Apache) on three different machines. In each case, we showed how MemProf helped us choose and implement efficient optimizations, unlike existing profilers. These optimizations provide significant performance gains on the studied applications (up to 161%), while requiring very lightweight modifications (10 lines of code or less).
State-machine replication is a well-known fault-tolerance technique. Unfortunately, existing state-machine replication schemes do not scale well on multicore machines. In collaboration with U. Texas at Austin (L. Alvisi), we have developed Eve, a new state-machine replication scheme that departs from the standard agree-execute architecture of existing schemes in favor of a more optimistic, and less deterministic, execute-verify replication scheme, which yields much better scalability. We have evaluated Eve's throughput gain compared with traditional sequential execution approaches, as well as Eve's overheads compared to unreplicated multithreaded execution and to alternative replication approaches.
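The execute-verify pattern can be sketched as follows, greatly simplified (a crash model with no agreement protocol, whereas Eve additionally handles batching, request mixing and Byzantine faults): replicas execute a batch optimistically, possibly in different orders; they then compare state digests; on divergence, they roll back to the last verified state and re-execute the batch sequentially, which is deterministic.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.IntUnaryOperator;

public class ExecuteVerify {
    // `batch` is a list of commands (here, functions on an integer state);
    // `replicaOrders` gives the order in which each replica happened to
    // execute them. The "digest" of a replica is simply its final state.
    public static int runBatch(int checkpoint, List<IntUnaryOperator> batch,
                               List<List<Integer>> replicaOrders) {
        Set<Integer> digests = new HashSet<>();
        for (List<Integer> order : replicaOrders) {   // optimistic parallel execution
            int s = checkpoint;
            for (int idx : order) s = batch.get(idx).applyAsInt(s);
            digests.add(s);
        }
        if (digests.size() == 1)                      // verify: all replicas agree
            return digests.iterator().next();         // commit the verified state
        int s = checkpoint;                           // divergence: roll back and
        for (IntUnaryOperator op : batch)             // re-execute sequentially
            s = op.applyAsInt(s);
        return s;
    }
}
```

When commands commute, all interleavings verify successfully and the parallelism is free; only genuinely conflicting batches pay the rollback cost, which is the source of the scalability gain.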
MapReduce is a popular programming model for distributed data processing. Extensive research has been conducted on the reliability of MapReduce, ranging from adaptive and on-demand fault-tolerance to new fault-tolerance models. However, realistic benchmarks are still missing to analyze and compare the effectiveness of these proposals. To date, most MapReduce fault-tolerance solutions have been evaluated using micro benchmarks in an ad-hoc and overly simplified setting, which may not be representative of real-world applications. To remedy this situation, we have developed MRBS, a comprehensive benchmark suite for evaluating the dependability of MapReduce systems. MRBS includes five benchmarks covering several application domains and a wide range of execution scenarios, such as data-intensive vs. compute-intensive applications, or batch vs. online interactive applications. MRBS makes it possible to inject various types of faults at different rates. It also considers different application workloads and data loads, and produces extensive reliability, availability and performance statistics. We have shown the use of MRBS with Hadoop clusters running on Amazon EC2 and on a private cloud.
PhD grant Quentin Sabah, funded by STMicroelectronics.
PhD grant Xavier Etchevers, funded by Orange Labs.
This project is led by Eric Rutten and funded by CNRS under the programme Projet Exploratoire-Premier(s) Soutien(s) PEPS Rupture de l'INS2I 2011. It concerns Control Techniques for Autonomic Computing, and intends to bring together researchers of different backgrounds (architectures and FPGAs, distributed systems and adaptive software, programming languages for reconfiguration, and control theory) to gather experiences and points of view on this multi-disciplinary topic.
The goal of SocEDA is to develop and validate an elastic and reliable federated SOA architecture for dynamic and complex event-driven interaction in large, highly distributed and heterogeneous service systems. Such an architecture will enable the exchange of contextual information between heterogeneous services, making it possible to optimize and personalize their execution according to social network information.
The main outcome of the SocEDA project will be a platform for event-driven interaction between services that scales at the Internet level, based on the proposed architecture, and that addresses Quality of Service (QoS) requirements.
The project partners are Inria (ADAM in Lille), EBM WebSourcing (FR), ActiveEon (FR), ARMINES (FR), France Telecom R&D (FR), CNRS (I3S and LIG), INSA Lyon, Thales Communications.
The project runs from October 2010 to September 2013.
The goal of the PiCoq project is to develop an environment for the formal verification of properties of distributed, component-based programs. The project's approach lies at the interface between two research areas: concurrency theory and proof assistants. Achieving this goal relies on three scientific advances, which the project intends to address:
Finding mathematical frameworks that ease modular reasoning about concurrent and distributed systems: due to their large size and complex interactions, distributed systems cannot be analysed in a global way. They have to be decomposed into modular components, whose individual behaviour can be understood.
Improving existing proof techniques for distributed/modular systems: while behavioural theories of first-order concurrent languages are well understood, this is not the case for higher-order ones. We also need to generalise well-known modular techniques that have been developed for first-order languages to facilitate formalisation in a proof assistant, where source code redundancies should be avoided.
Defining core calculi that both reflect concrete practice in distributed component programming and enjoy nice properties w.r.t. behavioural equivalences.
The project partners include Inria (Sardes), LIP (Plume team), and Université de Savoie. The project runs from November 2010 to October 2014.
The ANR PiCoq is in the programme ANR 2010 BLAN 0305 01:
http://
The objective of the MyCloud project is to define and implement a novel cloud model: SLAaaS (SLA aware Service). The SLAaaS model enriches the general paradigm of cloud computing, and enables systematic and transparent integration of service levels and SLAs into the cloud. SLAaaS is orthogonal to IaaS, PaaS and SaaS clouds and may apply to any of them. The MyCloud project takes into account both the cloud provider and cloud customer points of view. From the cloud provider's point of view, MyCloud proposes autonomic SLA management to handle performance, availability, energy and cost issues in the cloud. An innovative approach combines control theory techniques with distributed algorithms and language support in order to build autonomic elastic clouds. Novel models, control laws, distributed algorithms and languages will be proposed for the automated provisioning, configuration and deployment of cloud services to meet SLA requirements, while tackling scalability and dynamics issues. From the cloud customer's point of view, on the other hand, the MyCloud project provides SLA governance. It allows cloud customers to be part of the loop and to be automatically notified about the state of the cloud, such as SLA violations and cloud energy consumption. The former provides more transparency about SLA guarantees, and the latter aims to raise customers' awareness of the cloud's energy footprint.
The project partners are Inria (Sardes is the project coordinator), Grenoble; LIP6, Paris; EMN, Nantes; We Are Cloud, Montpellier; Elastic Grid LLC, USA.
The project runs from November 2010 to October 2013.
The FAMOUS project (FAst Modeling and Design FlOw for Dynamically ReconfigUrable Systems) intends to make reconfigurable hardware systems design easier and faster, by introducing a complete methodology that takes the reconfigurability of the hardware as an essential design concept and proposes the necessary mechanisms to fully exploit those capabilities at runtime. The tool under development in this project is expected to be used by both industrial designers and academic researchers, especially for modern application system specific design such as smart cameras, image and video processing, etc.
The project partners are Inria (Sardes in Grenoble and DaRT in Lille), Université de Bretagne Sud, Université de Bourgogne, Sodius.
The project runs from December 2009 to November 2013.
The REVER project aims to develop semantically well-founded and composable abstractions for dependable distributed computing on the basis of a reversible programming model, where reversibility means the ability to undo any program execution and to revert it to a state consistent with the past execution. The critical assumption behind REVER is that by combining reversibility with notions of compensation and modularity, one can develop systematic and composable abstractions for dependable programming.
The REVER workprogramme is articulated around three major objectives:
To investigate the semantics of reversible concurrent processes.
To study the combination of reversibility with notions of compensation, isolation and modularity in a concurrent and distributed setting.
To investigate how to support these features in a practical (typically, object-oriented and functional) programming language design.
The project partners are Inria (Sardes in Grenoble and Focus in Bologna), Université de Paris VII (PPS laboratory), and CEA (List laboratory).
The project runs from December 2011 to November 2015.
The goal of the CtrlGreen project is to develop the control techniques and software infrastructure required to build energy-efficient data centers. Because resource management must meet performance, dependability and scalability objectives as well as service level agreements, energy efficiency must be considered as a multi-criteria control problem. CtrlGreen aims to develop an autonomic system approach, where multiple control loops may coexist and coordinate. Specifically, the work will proceed along four directions:
The study of reactive control techniques, including synchronous languages and discrete controller synthesis, to program, verify and synthesize coordinating controllers.
The development of a controllable platform that can provide system-level support for the deployment and integration of the required controllers.
The study of several green data center scenarios that involve coordination between several controllers at different levels (hardware, operating system and middleware) and targeting different objectives (performance, availability, energy efficiency, etc.).
Experiments with an industrial data center to evaluate CtrlGreen techniques in a real world environment, with multiple running applications.
The project partners include Eolas, Inria Rennes, INPT/IRIT Toulouse, LIG (Sardes) and ScalAgent. The project runs from January 2012 to December 2014.
Title: Pushing dynamic and ubiquitous interaction between services Leveraged in the Future Internet by ApplYing complex event processing
Type: COOPERATION (ICT)
Defi: Internet of Services, Software & Virtualisation
Instrument: Specific Targeted Research Project (STREP)
Duration: October 2010 - September 2013
Coordinator: FZI (Germany)
Others partners: EBM WebSourcing (Fr), Inria (OASIS and SARDES) (Fr), France Telecom (Fr), ICCS (Gr), Ecole des Mines Albi (Fr), CIM (Serbia).
See also: http://
Abstract: The goal of PLAY is to develop and validate an elastic and reliable federated SOA architecture for dynamic and complex, event-driven interaction in large, highly distributed and heterogeneous service systems. Such an architecture should enable the exchange of contextual information between heterogeneous services, providing the possibility to optimize and personalize their execution, resulting in so-called situation-driven adaptivity.
The main outcome will be a FOT (federated open trusted) platform for event-driven interaction between services that scales at the Internet level, based on the proposed architecture, and that addresses Quality of Service (QoS) requirements. The platform will comprise in particular:
A federated middleware layer: a peer-to-peer overlay network combined with a publish/subscribe mechanism, whose task is to collect events coming from the heterogeneous and distributed services.
A distributed complex event processor: an elastic, distributed, cloud-based engine for complex processing and combination of events coming from different services, in order to detect interesting situations a service should react to.
Co-General Chair of the CFSE/Renpar/SympA conferences in 2013.
Member of the Programme Committee of ICAS 2012.
Chair of the GCM (Green and Cloud Management) 2012 workshop.
Organizer of the Winter School on Hot Topics in Distributed Systems (each year since 2008).
Member of the Programme Committee of APRES (Adaptive and Reconfigurable Embedded Systems workshop) 2012. Member of the IFAC Technical Committee 1.3 on Discrete Event and Hybrid Systems.
Chairman of IFIP WG6.1 on Architectures and Protocols for Distributed Systems. Chairman of the Steering Committee of the FMOODS/FORTE conference series. Member of the Programme Committee of ACM SAC 2012. Member of the editorial board of the journal Annals of Telecommunications. Member of the Technology Council of the company STMicroelectronics.
V. Quéma is a full-time professor at Grenoble INP.
Noël de Palma is a full-time professor at Grenoble University.
Olivier Gruber is a full-time professor at Grenoble University.
Olivier Gruber is the head of the Parallel, Distributed, and Embedded Systems track in the international Master of Science in Informatics at Grenoble (MOSIG).
Renaud Lachaize is a full-time assistant-professor at Grenoble University.
Sara Bouchenak is a full-time assistant-professor at Grenoble University.
Fabienne Boyer is a full-time assistant-professor at Grenoble University.
Fabienne Boyer is the head of the M2PGI (Master Professionnel Génie Informatique) Alternance at Grenoble.