Our research builds on two complementary fields: distributed systems and software engineering. We aim to introduce more automation in the adaptation processes of software systems, that is, to transition from the study of adaptive systems to self-adaptive systems. In particular, we pursue two directions: self-healing software systems with data mining solutions, and self-optimizing software systems with context monitoring. These two objectives are instantiated for two target environments: mobile computing and cloud computing.
Distributed software services and systems are central to many human activities, such as communication, commerce, education, and defense.
Distributed software services consist of an ever-growing number of often highly heterogeneous devices, ranging from cloud platforms and sensor networks to application servers, desktop machines, and mobile devices such as smartphones.
The future of this huge number of interconnected software services has been called the Internet of Services, a vision "where everything that is needed to use software applications is available as a service on the Internet, such as the software itself, the tools to develop the software, the platform servers, storage and communication to run the software."
This research project focuses on defining self-adaptive software services and middleware. From the perspective of the Internet of Services, this project fits in the vision sketched by, e.g., the FP8 Expert Group Services in the Future Internet , the NESSI Research Priorities for the next Framework Programme for Research and Technological Development FP8 , the Roadmap for Advanced Cloud Technologies under H2020 , and research roadmaps such as , , .
Our research program on self-adaptive software targets two key properties that are detailed in the remainder of this section: self-healing and self-optimization.
Software systems are under constant pressure to change throughout their lifecycle. Agile development blurs the frontier between design and execution and requires constant adaptation. The size of systems (millions of lines of code) multiplies the number of bugs by the same order of magnitude. More and more systems, such as sensor network devices, live in a "surviving" mode, in the sense that they can neither be rebooted nor upgraded.
Software bugs are hidden in source code and show up at development time, testing time or, worse, once deployed in production. Except for very specific application domains where formal proofs are achievable, bugs cannot be eradicated. As an order of magnitude, on 16 Dec 2011, the Eclipse bug repository contained 366,922 bug reports. Software engineers and developers work on bug fixing on a daily basis. Not all developers spend the same time on bug fixing: in large companies, managing bugs is sometimes a full-time role, often held by Quality Assurance (QA) software engineers. Also, not all bugs are equal: some are analyzed and fixed within minutes, while others may take months to be solved .
In terms of research, this means that: (i) one needs means to automatically adapt the design of a software system through automated refactoring and API extraction; (ii) one needs approaches to automate the process of adapting source code in order to fix certain bugs; (iii) one needs to revisit the notion of error handling so that, instead of crashing in the presence of errors, software adapts itself to continue its execution, e.g., in a degraded mode.
There is no one-size-fits-all solution for each of these points. However, we think that novel solutions can be found by using data mining and machine learning techniques tailored for software engineering . This body of research consists of mining some knowledge about a software system by analyzing the source code, the version control systems, the execution traces, documentation and all kinds of software development and execution artifacts in general. This knowledge is then used within recommendation systems for software development, auditing tools, runtime monitors, frameworks for resilient computing, etc.
The novelty of our approach consists in using and tailoring data mining techniques for analyzing software artifacts (source code, execution traces) in order to achieve the next level of automated adaptation (e.g., automated bug fixing). Technically, we plan to mix unsupervised statistical learning techniques (e.g., frequent itemset mining) and supervised ones (e.g., training classifiers such as decision trees). This research is currently not being performed by data mining research teams, since it requires a high level of domain expertise in software engineering, while software engineering researchers can use off-the-shelf data mining libraries, such as Weka .
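As a minimal sketch of the unsupervised side of this mix, the following pure-Python fragment mines frequent itemsets from hypothetical "transactions" (sets of API calls observed together in methods); the data, method names, and support threshold are all illustrative, not taken from an actual code base:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=2):
    """Naive frequent-itemset mining (Apriori-style exhaustive counting)."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {combo: c for combo, c in counts.items() if c >= min_support}

# Hypothetical transactions: sets of API calls seen together in methods.
methods = [
    {"open", "read", "close"},
    {"open", "write", "close"},
    {"open", "close"},
    {"lock", "unlock"},
]
patterns = frequent_itemsets(methods, min_support=3)
# ("close", "open") co-occur in 3 methods: a candidate API usage rule.
```

A frequent pair like this could feed an auditing tool that flags methods calling `open` without `close`.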
We now detail the two directions that we propose to follow to achieve this objective.
The first direction is about mining techniques for software repositories (e.g., CVS, SVN, Git). Best practices can be extracted by mining the source code and the version control history of existing software systems. The design and code of expert developers differ significantly from the artifacts of novice developers. We will learn to differentiate those design characteristics by comparing different code bases, and by observing the semantic refactoring actions in version control history. Those design rules can then feed the test-develop-refactor constant adaptation cycle of agile development.
Fault localization of bugs reported in bug repositories. We will build a solid foundation of empirical knowledge about bugs reported in bug repositories. We will perform an empirical study on a set of representative bug repositories to identify classes of bugs and patterns of bug data. For this, we will build a tool to browse and annotate bug reports. Browsing will be helped by two kinds of indexing: first, the tool will index all textual artifacts of each bug report; second, it will index semantic information that is not present by default in bug management software, e.g., "contains a stacktrace". Both indexes will be used to find particular subsets of bug reports, for instance "all bugs mentioning invariants and containing a stacktrace". Note that queries of this complexity and higher are mostly not possible with state-of-the-art bug management software. Then, analysts will use annotation features to annotate bug reports. The main outcome of the empirical study will be the identification of classes of bugs that are amenable to automated localization. Then, we will run machine learning algorithms to identify the latent links between bug report contents and source code features. Those algorithms will use as training data the existing traceability links between bug reports and source code modifications in version control systems. We will start with decision trees, since they produce a model that is explicit and understandable by expert developers. Depending on the results, other machine learning algorithms will be used. The resulting system will be able to locate elements in source code related to a given bug report with a certain confidence.
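A toy illustration of the two proposed indexes, with entirely hypothetical bug reports and a simplistic stacktrace detector (the real tool would index full repositories such as Eclipse's):

```python
import re

# Hypothetical bug reports: id -> free text.
reports = {
    1: "NPE when saving; invariants violated\n at Foo.save(Foo.java:42)",
    2: "Typo in a tooltip label",
    3: "Crash on startup, see stacktrace\n at Bar.init(Bar.java:7)",
}

# First index: textual (word -> set of report ids).
text_index = {}
for rid, text in reports.items():
    for word in re.findall(r"[a-z]+", text.lower()):
        text_index.setdefault(word, set()).add(rid)

# Second index: semantic flags absent from stock bug trackers.
def has_stacktrace(text):
    # Crude heuristic: a line shaped like a Java stack frame.
    return bool(re.search(r"^\s+at \w[\w.$]*\(", text, re.MULTILINE))

semantic_index = {
    "contains_stacktrace": {r for r, t in reports.items() if has_stacktrace(t)}
}

# Query: "all bugs mentioning invariants and containing a stacktrace".
result = text_index.get("invariants", set()) & semantic_index["contains_stacktrace"]
# result is {1}
```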
Automated bug fix generation with search-based techniques. Once a location in the code is identified as the cause of a bug, we can try to automatically find a potential fix. We envision different techniques: (1) infer fixes from existing contracts and specifications that are violated; (2) infer fixes from the software behavior specified as a test suite; (3) try different fix types one by one from a list of identified bug fix patterns; (4) search for fixes in a fix space that consists of combinations of atomic bug fixes. Techniques 1 and 2 are explored in and . We will focus on the latter techniques. To identify bug fix patterns and atomic bug fixes, we will perform a large-scale empirical study of software changes (also known as changesets when referring to changes across multiple files). We will develop tools to navigate, query and annotate changesets in a version control system. Then, a grounded theory will be built to characterize the nature of fixes. Eventually, we will decompose changesets into atomic actions using clustering on changeset actions. We will then use this body of empirical knowledge to feed search-based algorithms (e.g., genetic algorithms) that will look for meaningful fixes in a large fix space. To sum up, our research on automated bug fixing will try not only to point to the source code locations responsible for a bug, but also to search for code patterns and snippets that may constitute the skeleton of a valid patch. Ultimately, a blend of expert heuristics and learned rules will be able to produce valid source code that can be validated by developers and committed to the code base.
The second proposed research direction is about inventing a self-healing capability at run-time. It is complementary to the previous objective, which mainly deals with development-time issues. We will achieve this in two steps. First, we want to define frameworks for resilient software systems. Those frameworks will help maintain the execution even in the presence of bugs, i.e., let the system survive. As exposed below, this may mean, for example, switching to some degraded mode. Next, we want to go a step further and define solutions for automated runtime repair, that is, not simply compensating for the erroneous behavior, but also determining the correct repair actions and applying them at run-time.
Mining best-effort values. A well-known principle of software engineering is the "fail-fast" principle. In a nutshell, it states that as soon as something goes wrong, software should stop the execution before entering incorrect states. This is fine when a human user is in the loop, capable of understanding the error or at least rebooting the system. However, the notion of "failure-oblivious computing" shows that in certain domains, software should run in a resilient mode (i.e., capable of recovering from errors) and/or a best-effort mode, i.e., a slightly imprecise computation is better than stopping. Hence, we plan to investigate data mining techniques in order to learn best-effort values from past executions (i.e., somehow learning what is a correct state or, conversely, what is not a completely incorrect state). This knowledge will then be used to adapt the software state and flow in order to mitigate the consequences of errors, the exact opposite of fail-fast, for systems with long-running cycles.
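A minimal sketch of the idea, assuming a hypothetical sensor-reading routine whose past correct outputs are available as training data; here "learning" is simply taking the median, where real mining would be more elaborate:

```python
import statistics

# Hypothetical log of past correct outputs of a sensor-reading routine.
past_values = [20.1, 19.8, 20.4, 20.0, 19.9]
best_effort = statistics.median(past_values)  # learned plausible value

def read_sensor(raw):
    """Failure-oblivious variant: instead of failing fast on an
    out-of-range reading, substitute a learned best-effort value."""
    if 0.0 <= raw <= 50.0:
        return raw
    # Degraded mode: slightly imprecise, but the system keeps running.
    return best_effort
```

A long-running system using `read_sensor` survives a faulty probe rather than crashing on its first bogus reading.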
Embedding search-based algorithms at runtime. Harman recently described the field of search-based software engineering . We think that certain search-based approaches can be embedded at runtime with the goal of automatically finding solutions that avoid crashing. We will create software infrastructures that allow automatically detecting and repairing faults at run-time. The methodology for achieving this task is based on three points: (1) an empirical study of runtime faults; (2) learning approaches to characterize runtime faults; (3) learning algorithms to produce valid changes to the software runtime state. An empirical study will be performed to analyze those bug reports that are associated with runtime information (e.g., core dumps or stacktraces). After this empirical study, we will create a system that learns from previous repairs how to produce small changes that solve standard runtime bugs (e.g., adding an array bound check to throw a handled domain exception rather than a spurious language exception). To achieve this task, component models will be used to (1) encapsulate the monitoring and repair meta-programs in appropriate components and (2) support runtime code modification using scripting, reflective or bytecode generation techniques.
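The array-bound-check example can be sketched as follows; the micro-patch shown is hand-written here, standing in for one that a learning system would produce automatically:

```python
class DomainError(Exception):
    """Handled, domain-level exception replacing a spurious language one."""

def repaired_get(xs, i):
    # Learned micro-patch: a bound check inserted before the access,
    # turning a raw IndexError into a recoverable domain exception
    # that the surrounding application logic knows how to handle.
    if not 0 <= i < len(xs):
        raise DomainError("index %d out of bounds for size %d" % (i, len(xs)))
    return xs[i]
```

The caller can catch `DomainError` and continue in degraded mode, whereas the original `IndexError` would typically propagate and crash the component.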
Complex distributed systems have to seamlessly adapt to a wide variety of deployment targets, because developers cannot anticipate all the runtime conditions under which these systems are immersed. A major challenge for these software systems is to develop the capability to continuously reason about themselves and to take appropriate decisions and actions on the optimizations they can apply to improve themselves. This challenge encompasses research contributions in different areas, from environmental monitoring to real-time symptom diagnosis and automated decision making. The variety of distributed systems, the number of optimization parameters, and the complexity of decisions often resign practitioners to designing monolithic and static middleware solutions. However, it is now widely acknowledged that the development of dedicated building blocks does not contribute to the adoption of sustainable solutions. This is confirmed by the scale of actual distributed systems, which can, for example, connect several thousands of devices to a set of services hosted in the cloud. In such a context, the lack of support for smart behaviors at the different levels of a system can inevitably lead to its instability or unavailability. In June 2012, an outage of Amazon's Elastic Compute Cloud in North Virginia took down the Netflix, Pinterest, and Instagram services. For hours, all these services failed to satisfy their millions of customers due to the lack of a self-optimization mechanism extending beyond the boundaries of Amazon.
The research contributions we envision within this area will therefore be organized as a reference model for engineering self-optimized distributed systems autonomously driven by adaptive feedback control loops, which will automatically enlarge their scope to cope with the complexity of the decisions to be taken.
This solution introduces a multi-scale approach, which first privileges local and fast decisions to ensure the homeostasis of the system.
The novelty of this objective is to exploit the wisdom of crowds to define new middleware solutions that are able to continuously adapt software deployed in the wild. We intend to demonstrate the applicability of this approach to distributed systems that are deployed from mobile phones to cloud infrastructures. The key scientific challenges to address can be summarized as follows: How does software behave once deployed in the wild? Is it possible to automatically infer the quality of experience as it is perceived by users? Can runtime optimizations be shared across a wide variety of software? How can optimizations be safely operated on large populations of software instances?
The remainder of this section further elaborates on the opportunities that can be considered within the frame of this objective.
Once their software is deployed, developers are generally no longer aware of how it behaves. Even if they heavily use testbeds and benchmarks during the development phase, they mostly rely on the bugs explicitly reported by users to monitor the efficiency of their applications. However, it has been shown that contextual artifacts collected at runtime can help to understand performance leaks and optimize the resilience of software systems . Monitoring and understanding the context of software at runtime therefore represents the first building block of this research challenge. Practically, we intend to investigate crowd-sensing approaches to smartly collect and process runtime metrics (e.g., request throughput, energy consumption, user context). Crowd-sensing can be seen as a specific kind of crowdsourcing activity, which refers to the capability of relying on a (large) diffuse group of participants to whom the task of retrieving trustable data from the field is delegated. In particular, crowd-sensing covers not only participatory sensing, which involves the user in the sensing task (e.g., surveys), but also opportunistic sensing, which exploits mobile sensors carried by the user (e.g., smartphones).
While reported metrics generally enclose raw data, the monitoring layer intends to produce meaningful indicators, such as the Quality of Experience (QoE) perceived by users. This QoE reflects representative symptoms of the software that require triggering appropriate decisions in order to improve its efficiency. To diagnose these symptoms, the system has to process a huge variety of data, including runtime metrics, but also log histories, to explore the sources of the reported problems and identify opportunities for optimization. The techniques we envision at this level encompass machine learning, principal component analysis, and fuzzy logic to provide enriched information to the decision level.
Beyond the symptoms analysis, decisions should be taken in order to improve the Quality of Service (QoS). In our opinion, collaborative approaches represent a promising solution to effectively converge towards the most appropriate optimization to apply for a given symptom. In particular, we believe that exploiting the wisdom of the crowd can help the software to optimize itself by sharing its experience with other software instances exhibiting similar symptoms. The intuition here is that the body of knowledge that supports the optimization process cannot be specific to a single software instance as this would restrain the opportunities for improving the quality and the performance of applications. Rather, we think that any software instance can learn from the experience of others.
With regard to the state of the art, we believe that a multi-level decision infrastructure, inspired by distributed systems like Spotify , can be used to build a decentralized decision-making algorithm involving the surrounding peers before requesting a decision from a more central control entity. In the context of collaborative decision-making, peer-based approaches therefore consist in quickly reaching a consensus on the decision to be adopted by a majority of software instances. Software instances can share their knowledge through a micro-economic model , which would weight the recommendations of experienced instances, assuming their age reflects an optimal configuration.
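A minimal sketch of such age-weighted recommendations; the peers, optimization names, and the choice of age as the weight are all illustrative placeholders for an actual micro-economic model:

```python
# Hypothetical peers: (recommended optimization, instance age in days).
peers = [
    ("increase_cache", 400),
    ("increase_cache", 250),
    ("decrease_threads", 30),
]

def crowd_decision(peers):
    """Weight each recommendation by the recommending instance's age,
    assuming older instances reflect configurations that survived
    longer in production."""
    scores = {}
    for decision, age in peers:
        scores[decision] = scores.get(decision, 0) + age
    return max(scores, key=scores.get)

# The two long-lived peers outweigh the single young dissenter.
```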
Beyond the peer level, the adoption at an upper decision level of algorithms inspired by evolutionary computation, such as genetic programming, can offer an opportunity to test and compare several alternative decisions for a given symptom and to observe how the crowd of applications evolves. By introducing some diversity within this population of applications, some instances will not only provide a satisfying QoS, but will also become naturally resilient to unforeseen situations.
Any decision taken by the crowd must be propagated back to, and then operated by, the software instances. While the simplest decisions tend to impact software instances located on a single host (e.g., laptop, smartphone), this process can also exhibit more complex reconfiguration scenarios that require the orchestration of various actions that have to be safely coordinated across a large number of hosts. While it is generally acknowledged that centralized approaches raise scalability issues, we think that self-optimization should investigate different reconfiguration strategies to propagate and apply the appropriate actions. The investigation of such strategies can be addressed in two steps: the consideration of scalable data propagation protocols and the identification of smart reconfiguration mechanisms.
With regard to the challenge of scalable data propagation protocols, we think that research opportunities encompass not only the exploitation of gossip-based protocols , but also the adoption of publish/subscribe abstractions in order to decouple the decision process from the reconfiguration. The fundamental issue here is the definition of a communication substrate that can accommodate the propagation of decisions with relaxed properties, inspired by Delay Tolerant Networks (DTN), in order to reach weakly connected software instances. We believe that the adoption of asynchronous communication protocols can provide sustainable foundations for addressing various execution environments, including harsh environments, such as developing countries, which suffer from partial connectivity to the network. Additionally, we are interested in developing the principle of social networks of applications in order to seamlessly group and organize software instances according to their similarities and acquaintances. The underlying idea is that grouping application instances can contribute to the identification of optimization profiles that not only feed the monitoring layer, but also share an interest in similar reconfigurations. Social networks of applications can contribute to the anticipation of reconfigurations by exploiting the symptoms of similar applications to improve the performance of others before problems actually happen.
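The propagation dynamics of a gossip-based protocol can be sketched with a small simulation; the instance count, fanout, and seed below are arbitrary, and real protocols add failure handling and anti-entropy:

```python
import random

def gossip_rounds(n_instances, fanout=3, seed=1):
    """Simulate gossip dissemination: each informed instance forwards
    the decision to `fanout` random peers per round, until every
    instance has received it. Returns the number of rounds needed."""
    rng = random.Random(seed)
    informed = {0}  # instance 0 receives the decision first
    rounds = 0
    while len(informed) < n_instances:
        for _ in list(informed):
            informed.update(rng.randrange(n_instances)
                            for _ in range(fanout))
        rounds += 1
    return rounds

# Dissemination typically completes in O(log n) rounds,
# which is what makes gossip attractive at large scale.
```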
With regard to the challenge of smart reconfiguration mechanisms, we are interested in building on our established experience of adaptive middleware in order to investigate novel approaches to efficient application reconfiguration. In particular, we are interested in adopting seamless micro-update and micro-reboot techniques to provide in-situ reconfiguration of pieces of software. Additionally, the provision of safe and secure reconfiguration mechanisms is clearly a key issue that must be carefully addressed in order to avoid malicious exploitation of dynamic reconfiguration mechanisms against the software itself. In this area, although some reconfiguration mechanisms integrate transaction models , most of them are restricted to local reconfigurations, without providing any support for executing distributed reconfiguration transactions. Additionally, none of the approaches published in the literature include security mechanisms to protect from unauthorized or malicious reconfigurations.
Although our research is general enough to be applied to many application domains, we currently focus on applications and distributed services for the retail industry and for the digital home. These two application domains are supported by a strong expertise in mobile computing and in cloud computing that are the two main target environments on which our research prototypes are built, for which we are recognized, and for which we have already established strong collaborations with the industrial ecosystem.
This application domain is developed in relation with the PICOM (Pôle de compétitivité Industries du Commerce) cluster. We have established strong collaborations with local companies in the context of former funded projects, such as Cappucino and Macchiato, which focused on the development of a new generation of mobile computing platforms for e-commerce. We are also involved in the Datalyse and OCCIware funded projects that define cloud computing environments with applications for the retail industry. Finally, our activities in terms of crowd-sensing and data gathering on mobile devices with the APISENSE® platform also find applications in the retail industry.
We are developing new middleware solutions for the digital home, in particular through our long-standing collaboration with Orange Labs. We are especially interested in developing energy management and saving solutions with the PowerAPI software library for distributed environments such as the ones that equip digital homes. We are also working to bridge the gap between distributed services hosted on home gateways and distributed services hosted on the cloud, to be able to smoothly transition between both environments. This work is especially conducted with the Saloon platform.
María Gómez Lacruz defended her PhD thesis in 2016 in the Spirals project-team. She is now a post-doctoral researcher at Saarland University, Germany. During her thesis, she worked in the domain of crowd-sensed data. She proposed algorithms to mine traces of mobile applications in order to detect and reproduce application crashes. This research led to techniques that improve the robustness of mobile applications deployed in the wild. For these results, she was awarded an accessit (runner-up prize) for the thesis prize of the CNRS GDR Génie de la programmation et du logiciel (GPL). See http://
Benjamin Danglot and his colleagues Thomas Durieux, Martin Monperrus and Simon Urli, who contribute to the development of the Spoon software library, received the 2017 OW2 Community Award for Spoon's specific and dynamic community and for the use of Spoon in other projects. OW2 is an independent, global, open-source community that promotes the development of open-source middleware, generic business applications, and cloud computing platforms. Spoon is an OW2 project and a software building block that is used in many of our research activities and projects on self-adaptation. See https://
Philippe Merle, Christophe Gourdin, and Nathalie Mitton (Inria Fun) received a best paper award at the 2nd IEEE International Congress on Internet of Things (ICIOT 2017) for their work on OMCRI, an interface for mobile cloud robotics.
Keywords: Mobile sensing - Crowd-sensing - Mobile application - Crowd-sourcing - Android
Functional Description: APISENSE platform is a software solution to collect various contextual information from Android devices (client application) and automatically upload collected data to a server (deployed as a SaaS). APISENSE is based on a Cloud computing infrastructure to facilitate datasets collection from significant populations of mobile users for research purposes.
Participants: Antoine Veuiller, Christophe Ribeiro, Julien Duribreux, Nicolas Haderer and Romain Rouvoy
Partner: Université Lille 1
Contact: Romain Rouvoy
URL: http://
Keyword: Automatic software repair
Functional Description: Nopol is an automatic software repair tool for buggy conditional statements (i.e., if-then-else statements) in Java programs. Nopol takes a buggy program as well as a test suite as input and generates a patch with a conditional expression as output. The test suite is required to contain passing test cases to model the expected behavior of the program and at least one failing test case that reveals the bug to be repaired. The process of Nopol consists of three major phases. First, Nopol employs angelic fix localization to identify expected values of a condition during the test execution. Second, runtime trace collection is used to collect variables and their actual values, including primitive data types and object-oriented features (e.g., nullness checks), to serve as building blocks for patch generation. Third, Nopol encodes these collected data into an instance of a Satisfiability Modulo Theory (SMT) problem; a feasible solution to the SMT instance is then translated back into a code patch.
Contact: Martin Monperrus
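As a rough illustration of condition synthesis (not Nopol's actual SMT-based encoding, and in Python rather than Java), the following sketch enumerates candidate conditions built from runtime variables and keeps those that pass the whole test suite; note how a weak suite may validate several patches:

```python
# Toy buggy snippet: the guard of an if-then-else is wrong.
def make_program(condition):
    def absolute(x):
        if condition(x):   # the condition to synthesize
            return -x
        return x
    return absolute

# Passing and failing cases: (input, expected output).
tests = [(-3, 3), (0, 0), (4, 4)]

# Candidate conditions over variables collected at runtime.
candidates = {
    "x < 0":  lambda x: x < 0,
    "x <= 0": lambda x: x <= 0,
    "x > 0":  lambda x: x > 0,
}
patches = [src for src, cond in candidates.items()
           if all(make_program(cond)(i) == out for i, out in tests)]
# Both "x < 0" and "x <= 0" satisfy this (weak) suite; "x > 0" does not.
```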
Keywords: Energy efficiency - Energy management
Functional Description: PowerAPI is a library for monitoring the energy consumption of software systems.
PowerAPI differs from existing process-level energy monitoring tools in its software orientation: it is a fully customizable and modular solution that lets users precisely define what they want to monitor. PowerAPI is based on a modular and asynchronous event-driven architecture using the Akka library. PowerAPI offers an API which can be used to define requests about the energy spent by a process, according to its hardware resource utilization (in terms of CPU, memory, disk, network, etc.).
Participants: Adel Noureddine, Loïc Huertas, Maxime Colmant and Romain Rouvoy
Contact: Romain Rouvoy
URL: http://
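As a hint of what a process-level power model can look like (PowerAPI's actual models are considerably more elaborate; the TDP value and the linear formula below are purely illustrative):

```python
# Naive, hypothetical power model: the energy attributed to a process
# over an interval is its CPU share times the package's thermal
# design power (TDP), times the interval duration.
TDP_WATTS = 35.0  # illustrative value, not a measured figure

def process_energy(cpu_share, seconds):
    """Energy in joules attributed to a process, cpu_share in [0, 1]."""
    return TDP_WATTS * cpu_share * seconds

# A process using 25% of the CPU for 10 s is charged 87.5 J.
```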
Keywords: Feature Model - Software Product Line - Cloud computing - Model-driven engineering - Ontologies
Functional Description: Saloon is a framework for the selection and configuration of cloud providers according to application requirements. The framework enables the specification of such requirements by defining ontologies. Each ontology provides a unified vision of provider offers in terms of frameworks, databases, languages, application servers and computational resources (i.e., memory, storage and CPU frequency). Furthermore, each provider is related to a Feature Model (FM) with attributes and cardinalities, which captures its capabilities. By combining the ontology and the FMs, the framework is able to match application requirements with provider capabilities and select a suitable provider. Scripts specific to the selected provider are then generated in order to enable its configuration.
Participants: Clément Quinton, Daniel Romero Acero, Laurence Duchien, Lionel Seinturier and Romain Rouvoy
Partner: Université Lille 1
Contact: Lionel Seinturier
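The matching step described above can be sketched as a set-cover test over flattened capabilities; the provider names and capability sets below are invented, and a real feature model with attributes and cardinalities is much richer than plain sets:

```python
# Hypothetical providers: flattened view of ontology + feature models.
providers = {
    "cloudA": {"java", "postgresql", "tomcat"},
    "cloudB": {"python", "mysql"},
    "cloudC": {"java", "mysql", "tomcat"},
}

def select(requirements, providers):
    """Return the providers whose capabilities cover all requirements."""
    return sorted(name for name, caps in providers.items()
                  if requirements <= caps)

# An application needing Java and Tomcat matches cloudA and cloudC.
```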
Keywords: Java - Code analysis
Functional Description: Spoon is an open-source library for analyzing and transforming Java source code. Spoon provides a complete and fine-grained Java metamodel where any program element (classes, methods, fields, statements, expressions…) can be accessed both for reading and for modification. Spoon takes source code as input and produces transformed source code ready to be compiled.
Participants: Gérard Paligot, Lionel Seinturier, Martin Monperrus and Nicolas Petitprez
Contact: Martin Monperrus
In , together with Frederico Alvares (Inria Ascola) and Eric Rutten (Inria Ctrl-A), we have proposed Ctrl-F, a new domain-specific language for specifying reconfiguration policies in self-adaptable component-based software systems. Self-adaptive behaviors in the context of component-based architectures are generally designed based on past monitoring events, configurations (component assemblies), and behavioral programs defining the adaptation logic and invariant properties. The novelty of the proposed Ctrl-F language is to enable taking decisions based on predictions of the possible futures of the system, in order to avoid entering branches of the behavioral program that lead to bad configurations. Ctrl-F is formally defined by a translation into Finite State Automata models. We use Discrete Controller Synthesis to automatically generate a controller that enforces correct self-adaptive behaviors. Ctrl-F is integrated with our FraSCAti middleware platform for distributed service- and component-oriented systems.
In , together with Nathalie Mitton (Inria Fun), we have proposed OMCRI, a new interface for mobile cloud robotics. This interface makes it possible to abstract from the heterogeneity of robotic platforms and to bring resource management facilities to fleets of robots. This result is based on the expertise that we have developed in the management of resources for cloud computing environments, especially around the OCCI standard. To the best of our knowledge, OMCRI is the first interface that concretizes the vision of robotics as a service. This result obtained the best paper award at the 2nd IEEE International Congress on Internet of Things (ICIOT 2017).
A software exploitation license of the APISENSE® crowd-sensing platform has been sold to the ip-label company. They use this platform as a solution to monitor the quality of the GSM signal in the wild. The objective is to provide developers and stakeholders with a feedback on the quality of experience of GSM connection depending on their location.
This collaboration (2015–18) aims at proposing a framework to deal with elasticity in cloud computing environments. This framework must cover all kinds of resources (IaaS, PaaS, SaaS), must provide a solution for interoperability between different clouds and virtualization technologies, and must enable the specification and composition of reactive and predictive strategies.
This collaboration is conducted in the context of the ongoing PhD thesis of Yahya Al-Dhuraibi.
This collaboration (2017–20) aims at proposing new solutions for optimizing the energy footprint of ICT software infrastructures. We want to be able to measure and assess the energy footprint of ICT systems while preserving various quality of service parameters, such as performance and security. We aim at proposing a testbed for assessing the energy footprint of various programming languages. This testbed will also incorporate frameworks for web and mobile programming. Finally, we want to be able to issue recommendations to developers in order to assist them in improving the energy footprint of their programs. This collaboration will take advantage of the PowerAPI software library.
The PhD of Mohammed Chakib Belgaid will start in January 2018 in the context of this collaboration.
This collaboration (2017–18) aims at defining a computational model for software infrastructures layered on top of virtualized and interconnected cloud resources. This computational model will provide application programming and management facilities to distributed applications and services. It will define a pivot model that will enable the interoperability of various existing and future standards for cloud systems, such as OCCI and TOSCA. This pivot model will be defined with the Alloy specification language . This collaboration takes advantage of the expertise that we have been developing for several years on reconfigurable component-based software systems , on cloud systems , and on the Alloy specification language .
This collaboration with Orange Labs is a joint project with Jean-Bernard Stefani from the Spades Inria project-team.
This is a 3-year project (2015–17) in the context of the "Chercheur citoyen" program. The partners are LISIC/Université Côte d'Opale (leader), ATMO Nord-Pas-de-Calais, and Association Bâtisseurs d'Economie Solidaire. This project targets the distributed monitoring of air quality with crowd-sensing solutions based on sensors connected to smart devices. We aim at encouraging citizens to perform their own measurements and, thanks to GPS geolocation, at building a large-scale database and a dynamic fine-grained map of air quality. This project takes advantage of the APISENSE® crowd-sensing platform.
CIRRUS is a 3-year (2017–20) joint team with Scalair, a cloud operator and architect company, funded by the Hauts-de-France region. The CIRRUS joint team is developing novel solutions for the on-demand configuration of heterogeneous cloud resources, the management of cloud elasticity for all deployed services (SaaS, PaaS, IaaS) in order to guarantee quality of service and user quality of experience, and the taming of the financial costs of cloud infrastructures.
ADT LibRepair (2016–18) is a technology development initiative supported by the Inria Lille - Nord Europe Center that aims at supporting the development of an integrated library of automated software repair algorithms and techniques. This ADT builds on our results on the Astor, Nopol and NpeFix tools, obtained in the context of the defended PhD theses of Matias Martinez and Benoit Cornu .
North European Lab LLEX (2015–17) is an international initiative supported by the Inria Lille - Nord Europe Center that takes place in the context of a collaboration between Inria and University College London. LLEX deals with research on the automatic diagnosis and repair of software bugs. Automatic software repair is the process of fixing software bugs with no human intervention. Its goal is to save maintenance costs and to make systems more resilient to bugs and unexpected situations. This research may dramatically improve the quality of software systems. The objective of the partnership is to work on the automated diagnosis of exceptions, with a focus on null pointer exceptions.
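To illustrate the kind of transformation such repair tools perform (a hypothetical sketch, not code taken from the project; the class and method names are invented for illustration), a common strategy for fixing a null pointer exception is to synthesize a guard around the faulty dereference and provide an alternative execution:

```java
public class NullGuardSketch {

    // Original buggy method: throws NullPointerException when s is null.
    static int lengthBuggy(String s) {
        return s.length();
    }

    // Hypothetical repaired version: the repair tool inserts a null-check
    // that skips the dereference and returns a default value instead.
    static int lengthRepaired(String s) {
        if (s == null) {
            return 0; // alternative execution injected by the repair tool
        }
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(lengthRepaired(null));  // prints 0
        System.out.println(lengthRepaired("abc")); // prints 3
    }
}
```

The challenge addressed by automated diagnosis is to decide, without human intervention, where such a guard is needed and which alternative execution (default value, skip, early return) preserves the intended behavior.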
BottleNet is a 48-month project (2015–19) funded by ANR. The objective of BottleNet is to deliver methods, algorithms, and software systems to measure Internet Quality of Experience (QoE) and diagnose the root cause of poor Internet QoE. Our goal calls for tools that run directly on users' devices. We plan to collect network and application performance metrics directly at users' devices and correlate them with user perception to model Internet QoE, and to correlate measurements across users and devices to diagnose poor Internet QoE. This data-driven approach is essential to address the challenging problems of modeling user perception and of diagnosing sources of bottlenecks in complex Internet services. BottleNet will lead to new solutions to assist users, network and service operators, as well as regulators, in understanding Internet QoE and the sources of performance bottlenecks.
SATAS is a 48-month project (2015–19) funded by ANR. SATAS aims to advance the state of the art in massively parallel SAT solving with a particular eye to the applications driving progress in the field. The final goal of the project is to be able to provide a "pay as you go" interface to SAT solving services, with a particular focus on their power consumption. This project will extend the reach of SAT solving technologies, used daily in many critical and industrial applications, to new application areas that were previously considered too hard, and lower the cost of deploying massively parallel SAT solvers on the cloud.
Headwork is a 48-month project (2016–21) funded by ANR. The main objective of Headwork is to develop data-centric workflows for programming crowdsourcing systems in a flexible, declarative manner. Crowdsourcing systems aim at filling a database with knowledge gathered by thousands or more human participants. A particular focus is put on data uncertainty and on the representation of user expertise. This project is coordinated by D. Gross-Amblard from the Druid team (Rennes 1). Other partners include the Dahu team (Inria Saclay), Sumo (Inria Bretagne), and Links (Inria Lille) with J. Niehren and M. Sakho.
Delta is a 48-month project (2016–21) funded by ANR. The project focuses on the study of logic, transducers, and automata. In particular, it aims at extending the classical framework to handle input/output, quantities, and data. This project is coordinated by M. Zeitoun from LaBRI. Other partners include LIF (Marseille), IRIF (Paris-Diderot), and D. Gallois from the Inria Lille Links team.
StoreConnect is a 24-month project (2016–18) funded by FUI and labelled by the PICOM (Pôle des Industries du COMmerce) competitiveness cluster. The partners are Tevolys, Ubudu (leader), Smile, STIME, Leroy Merlin, Insiteo, Inria Spirals, Inria Fun, and Inria Stars. The goal of the project is to define a modular multi-sensor middleware platform for indoor geolocation.
OCCIware is a 36-month project (2014–17) of the Programme Investissement d'Avenir Cloud Computing and Big Data 4th call for projects. The partners are Smile (leader), ActiveEon SA, Scalair, Institut Mines-Télécom/Télécom SudParis, Inria, Linagora GSO, Obeo, OW2 Consortium, and Université Grenoble Alpes. The project aims at defining a formal framework for managing any digital resource in the cloud, based on the Open Cloud Computing Interface (OCCI) recommendations from the Open Grid Forum (OGF).
BetterNet (2016–19) aims at building and delivering a scientific and technical collaborative observatory to measure and improve Internet service access as perceived by users. In this Inria Project Lab, we will propose new, original, user-centered measurement methods, which will draw on social sciences to better understand Internet usage and the quality of services and networks. Our observatory can be defined as a vantage point, where: (1) tools, models, and algorithms/heuristics will be provided to collect data, (2) acquired data will be analyzed and shared appropriately with scientists, stakeholders, and civil society, and (3) new value-added services will be proposed to end-users. IPL BetterNet is led by Isabelle Chrisment (Inria Madynes), with the participation of the Diana, Dionysos, Inria Chile, Muse, and Spirals Inria project-teams, as well as the ARCEP French agency and the ip-label company.
DoMaSQ’Air is a 1-year project funded by the CNRS INS2I MASTODONS program on big data research. This project gathers a multidisciplinary team working on the measurement and continuous analysis of indoor and outdoor air quality. This project takes advantage of crowds of cheap, miniaturized sensors in connection with the Internet of Things and smart cities. In addition to the challenges raised by the massive amount of data generated by these cyber-physical systems, the project tackles questions related to the quality and privacy of data. DoMaSQ’Air is led by Romain Rouvoy with the participation of the PC2A laboratory on PhysicoChemistry of Combustion of the Atmosphere (CNRS/U. Lille) and the LISIC laboratory on Computer Science, Signal and Image (U. Côte d'Opale).
Program: H2020 ICT-10-2016.
Project acronym: STAMP.
Project title: Software Testing Amplification.
Duration: 36 months (2016–19).
Coordinator: Inria.
Other partners: ActiveEon (France), Atos (Spain), Engineering (Italy), OW2 (France), SINTEF (Norway), TellU (Norway), TU Delft (The Netherlands), XWiki (France).
Abstract: By leveraging advanced research in automatic test generation, STAMP aims at pushing automation in DevOps one step further through innovative methods of test amplification. It will reuse existing assets (test cases, API descriptions, dependency models) in order to generate more test cases and test configurations each time the application is updated. Acting at all steps of the development cycle, STAMP techniques aim at reducing the number and cost of regression bugs at the unit level, the configuration level, and the production stage.
Participants: Benjamin Danglot, Martin Monperrus [correspondant].
Program: H2020 JU Shift2Rail.
Project acronym: X2Rail-1.
Project title: Start-up activities for Advanced Signalling and Automation System.
Duration: 36 months (2016–19).
Coordinator: Siemens.
Other partners: 19 partners, among others Bombardier, Siemens, Thales, IRT Railenium.
Abstract: Our contribution to the project is focused on adaptive communication middleware for cyber-physical railway systems.
Participants: Lionel Seinturier [correspondant].
Program: EUREKA Celtic-Plus.
Project acronym: SENDATE.
Project title: SEcure Networking for a DATa Center Cloud in Europe.
Duration: 36 months (2016–19).
Coordinator: Nokia.
Other partners: 50+ partners in Finland, France, Germany, Norway, and Sweden. Selected partners involved: Nokia, Orange.
Abstract: The project addresses the convergence of telecommunication networks and IT in the context of distributed data centers. We are involved in the TANDEM subproject that targets the infrastructure of such a distributed system. More specifically, we are studying new approaches in terms of software engineering and component-based solutions for enabling this convergence of network and IT.
Participants: Lionel Seinturier [correspondant].
Title: Self-Optimization of Service Oriented Architectures for Mobile and Cloud Applications
International Partner (Institution - Laboratory - Researcher):
Université du Québec à Montréal (Canada) - LATECE - Naouel MOHA
Start year: 2017
See also: http://
The long-term goal of this research program is to propose a novel and innovative methodology, embodied in a software platform, to support the runtime detection and correction of anti-patterns in large-scale service-oriented distributed systems in order to continuously optimize their quality of service. One originality of this program lies in addressing the dynamic nature of service-oriented environments and in targeting emerging frameworks for embedded and distributed systems (e.g., Android/iOS for mobile devices, PaaS/SaaS for cloud environments), in particular mobile systems interacting with remote services hosted on the cloud.
RRI-MobDev (Responsible Research and Innovation for Mobile Application Development) is a 2-year (2017–18) bilateral collaboration with UCLan Cyprus, an overseas campus of the University of Central Lancashire. Mobile applications are part of a complex ecosystem involving various stakeholders (developers, users, app stores, etc.) exposed to various threats, including not only malware but also potential information leaks through continuous interactions with remote servers. This project aims to study and alleviate this problem by intervening with both the users and the developers of mobile apps, with the aim of enabling a cleaner, safer, and more responsible mobile app ecosystem.
Fernanda Madeiral Delfim, PhD Student from the Federal University of Uberlândia, Brazil, visited us from January to May 2017, and again in September 2017.
Mohammad Naseri, MSc. Student in Computer Science from Saarland University, Germany, is visiting us for 3 months, starting in November 2017.
Chaima Chakhava, MSc. Student in Computer Science from ESI Alger, Algeria, is visiting us for 6 months, starting in December 2017.
Thomas Durieux, PhD Student, spent 4 months from September to December 2017 in KTH, Sweden.
Laurence Duchien
Doctoral Symposium of European Conference on Software Architecture (ECSA)
Laurence Duchien
International Conference on Web Engineering (ICWE)
European Conference on Software Architecture (ECSA)
International Systems and Software Product Line Conference (SPLC)
BElgian-NEtherlands software eVOLution symposium (BENEVOL)
International Conference on Software Engineering education and Training (CSEE&T)
Joint 5th ICSE International Workshop on Software Engineering for Systems-of-Systems and 11th Workshop on Distributed Software Development, Software Ecosystems and Systems-of-Systems (SESoS/WDES)
Philippe Merle
International Conference on Cooperative Information Systems (CoopIS)
International Symposium on Theoretical Aspects of Software Engineering (TASE)
International Conference on Web Engineering (ICWE)
International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING)
International Conference on Advanced Service Computing (SERVICE COMPUTATION)
Symposium on Machine Learning and Metaheuristics Techniques and Applications for Dependable Distributed Systems (MADS)
International Symposium on Security in Computing and Communications (SSCC)
International Workshop on Adaptive and Reflective Middleware (ARM)
Workshop on CrossCloud Infrastructures & Platforms (CrossCloud)
Martin Monperrus
International Conference on Software Engineering (ICSE)
Genetic Improvement Workshop (GI)
Clément Quinton
International Workshop on Dynamic Software Product Lines (DSPL)
Romain Rouvoy
International Conference on Pervasive Computing and Communications (PerCom)
IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft)
International Symposium on Applied Computing (SAC), track on Dependable, Adaptive, and Trustworthy Distributed Systems (DADS)
International Conference on Service Oriented Computing (ICSOC)
International Conference on Principles of Distributed Systems (OPODIS)
International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)
International Conference on Ambient Systems, Networks and Technologies (ANT)
International Conference on Information Systems Development (ISD)
International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP)
International Symposium on Computers and Communication (ISCC)
International Workshop on Scalable Computing For Real-Time Big Data Applications (SCRAMBL)
Lionel Seinturier
IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft)
International Conference on Service Oriented Computing (ICSOC)
ACM Symposium on Applied Computing (SAC), tracks Software Architecture Theory Technology and Applications (SA-TTA), Operating Systems (OS)
Euromicro Conference on Software Engineering and Advanced Applications (SEAA), track Model-based development Components and Services (MOCS)
International Workshop on Models@run.time @ MODELS
International Workshop on Models@run.time @ ICAC
Workshop on Context-Oriented Programming (COP) @ ECOOP
Martin Monperrus is member of the editorial board of the international journal Springer Empirical Software Engineering (IF-2016: 3.275).
Romain Rouvoy is member of the editorial board of the journal Lavoisier Technique et Science Informatiques (TSI).
Lionel Seinturier is editor for software engineering of the ISTE-Wiley Computer Science and Information Technology book collection.
Laurence Duchien: IET Software, IEEE Software, Wiley Software: Practice and Experience (SPE).
Philippe Merle: ACM Transactions on Internet Technology (TOIT), Wiley Journal of Software Testing, Verification and Reliability (STVR), International Journal of Web Services Research (IJWSR), Springer Journal of Cloud Computing, IEEE Cloud Computing.
Martin Monperrus: IEEE Transactions on Software Engineering (TSE), ACM Transactions on Software Engineering and Methodology (TOSEM), Elsevier Journal of Systems and Software (JSS), IEEE Transactions on Reliability (TR).
Clément Quinton: Elsevier Journal of Systems and Software (JSS), IEEE Software, IET Software, IEEE Transactions on Services Computing (TSC).
Romain Rouvoy: Elsevier Sustainable Computing, Informatics and Systems (SUSCOM), IEEE Transactions on Software Engineering (TSE), Springer Journal of Internet Services and Applications (JISA), Wiley Journal of Software: Evolution and Process (JSEP).
Lionel Seinturier: Elsevier Journal of Systems and Software (JSS), ACM Transactions on Internet Technology (TOIT), Elsevier Computer Languages, Systems and Structures (COMLAN), Journal of Universal Computer Science (JUCS), Ingénierie des systèmes d’information (ISI), Elsevier Future Generation Computer Systems (FGCS).
Philippe Merle, Walid Gaaloul and Faiez Zalila were invited to give a tutorial on "Seamless Lifecycle Management of Cloud Resources Using OCCI" at Services Conference Federation (SCF) 2017.
Martin Monperrus was invited to give a talk on "When and how to automatically repair bugs?" at the 4th Workshop on Design Automation for Understanding Hardware Designs (DUHDe 2017).
Martin Monperrus is the co-head of the "Groupe de Travail Génie Logiciel Empirique" of the GDR GPL.
Romain Rouvoy is the co-head of the "Groupe de Travail Génie Logiciel pour les Systèmes Cyber-physiques" of the GDR GPL.
Laurence Duchien served as a scientific expert for the Direction des Relations Internationales of the French Ministry of Research.
Philippe Merle was member of the recruitment committee for assistant professors at INSA Toulouse. He is member of the Inria Scientific Board.
Lionel Seinturier was member of the recruitment committee for junior researchers at Inria Rennes. He was scientific expert for ANRT, ECOS Nord, COFECUB, and the Belgian FWO. He was member of the Agence Nationale de la Recherche (ANR) Scientific Evaluation Committee for Software and Network (CES25).
Laurence Duchien is member of the CNRS CoCNRS section 6 committee, and of the "bureau" of this committee. She was member of Hcéres committees for DIENS Laboratory (Paris) and LIS Laboratory (Marseille).
Philippe Merle is president of the CUMI (Comité des Utilisateurs des Moyens Informatiques), project manager for Irill at Lille, secretary of the CLHSCT (Comité Local d’Hygiène, de Sécurité et de Conditions de Travail), and member of the centre committee for the Inria Lille - Nord Europe research center. He is member of the steering committee of the Inria's continuous integration service. He is leading the CIRRUS research joint team between Scalair company and Spirals Inria project-team. He was the scientific and technical leader of the OCCIware PIA funded project. He is member of the steering committee of CIEL (Conférence en IngénieriE du Logiciel).
Lionel Seinturier is president of the CDT (Commission Développement Technologique), and member of the BCEP (Bureau du Comité des Équipes-Projets), for the Inria Lille - Nord Europe research center. He heads the committee (so-called "vivier 27 rang A") that selects members of recruitment committees in Computer Science at the University of Lille 1. He is Scientific Advisor for the evaluation of ICT research laboratories at the Hcéres.
Pierre Bourhis is, in addition to his tenured junior research position at CNRS, chargé d'enseignement (spécialité Sciences des données) at École Polytechnique, Palaiseau, France, in the Department of Computer Science (DIX).
Bases de données, 18h, Cycle Polytechnique Info553
Laurence Duchien teaches at the University of Lille 1 in the FST faculty. She heads the Carrières et Emplois service and is the referent for professional integration in the PhD program in Computer Science at ComUE University Lille Nord de France. She is Director of Doctoral Studies for Computer Science in the Doctoral School Engineering Science (SPI) - ComUE Lille Nord de France.
Software Project Management, 50h, Level M2, Master MIAGE
Design of distributed applications, 42h, Level M1, Master of Computer Science
Software Product Lines, 8h, Level M2, Master of Computer Science
Research and Innovation Initiation, 22h, Level M2 IAGL, Master of Computer Science
Tutoring Internship, 16h, Level M2, Master of Computer Science
Martin Monperrus teaches at the University of Lille 1 in the FST faculty. Until August 2017, he headed the IAGL specialty of the Master of Computer Science at the University of Lille 1.
Introduction to programming, 48h, Level L1, Licence of Computer Science
Object-oriented design, 39h, Level L3, Licence of Computer Science
Automated software engineering, 40h, Level M2 IAGL, Master of Computer Science
Clément Quinton teaches at the University of Lille 1 in the FST faculty.
Design of distributed applications, 42h, Level M1, Master of Computer Science
Software engineering, 42h, Level M1, Master of Computer Science
Advanced design of distributed applications, 37.5h, Level M2, Master MIAGE
Infrastructure and frameworks for the Internet, 33.75h, Level M2, Master of Computer Science
Software product lines, 7.5h, Level M2, Master of Computer Science
Suivi de stages et de projets, 30h, Licence and Master of Computer Science
Romain Rouvoy teaches at the University of Lille 1 in the FST faculty. He heads the Master of Computer Science program at the University of Lille 1.
Design of distributed applications, 12h, Level M1, Master of Computer Science
Object-oriented design, 4h, Level L3, Licence of Computer Science
Suivi de projets, 20h, Level M2, Master of Computer Science
Walter Rudametkin Ivey teaches at the University of Lille 1 in the Polytech engineering school.
GIS4 Programmation par Objets, 32h
GIS4 Architectures Logicielles, 26h
GIS2A3 (apprentissage) Projet programmation par Objet, 24h
IMA2A4 (apprentissage) Conception Modélisation Objet, 24h
IMA3 Programmation Avancée, 62h
GBIAAL4 Bases de données, 22h
GIS5 Suivi de projets, 42h
GIS2A (apprentissage) Suivi d'apprentis, 28h
Lionel Seinturier teaches at the University of Lille 1 in the FST faculty. He heads the Computer Science Department at the Faculty of Science and Technology of the University of Lille 1.
Conception d'Applications Réparties, 50h, Level M1, Master MIAGE
Infrastructures et Frameworks Internet, 70h, Level M2 E-Services IAGL TIIR, Master of Computer Science
PhD in progress: Zeinab Abou Khalil, November 2017, Laurence Duchien, co-supervision with Tom Mens (University of Mons, Belgium).
PhD in progress: Guillaume Fieni, GreenData : Vers un traitement efficient et éco-responsable des grandes masses de données numériques, October 2017, Romain Rouvoy & Lionel Seinturier.
PhD in progress: Benjamin Danglot, Software Testing Amplification, December 2016, Martin Monperrus & Lionel Seinturier.
PhD in progress: Lakhdar Meftah, Cartography of the Quality of Experience for Mobile Internet Access, November 2016, Romain Rouvoy, co-supervision with Isabelle Chrisment (Inria Madynes).
PhD in progress: Sarra Habchi, Une supervision de contexte sensible à la confidentialité pour les développements logiciels en crowdsource, October 2016, Romain Rouvoy.
PhD in progress: Antoine Vastel, Cartographie de la qualité d'expérience pour l'accès à l'internet mobile, October 2016, Romain Rouvoy & Walter Rudametkin.
PhD in progress: Yahya Al-Dhuraibi, Un cadre flexible pour l'élasticité dans les nuages, October 2015, Philippe Merle.
PhD in progress: Stéphanie Challita, Un cadre formel et outillé pour la gestion de toute ressource en nuage, October 2015, Philippe Merle.
PhD in progress: Thomas Durieux, Search-based Monitoring and Root Cause Diagnosis in Production, September 2015, Lionel Seinturier & Martin Monperrus.
PhD in progress: Gustavo Sousa, Towards Dynamic Software Product Lines to Optimize Management and Reconfiguration of Cloud Applications, October 2012, Laurence Duchien & Walter Rudametkin Ivey.
PhD in progress: Amal Tahri, Evolution logicielle multi-vues, des réseaux domestiques au Cloud, March 2013, Laurence Duchien.
Laurence Duchien
Hanae Rateau, Université Lille, chair
Sylvain Defourny, Université Lille, chair
Thi-Kim-Dung Pham, Conservatoire National des Arts et Métiers, examiner
Xhevahire Ternava, Université Côte d'Azur, chair
Julien Perolat, Université Lille, chair
Philippe Merle
Mohamed Boussaa, Université Rennes 1, reviewer
Jean-François Weber, Université de Franche-Comté, reviewer
Martin Monperrus
Bogdan Marculescu (Blekinge Institute of Technology, Sweden), member
Romain Rouvoy
Stéphane Delbruel (University of Rennes 1), reviewer
Yaroslav Hayduk (University of Neuchâtel, Switzerland), reviewer
Gustavo Sousa (University of Lille 1), president
Sabbir Hassan (University of Rennes 1), examiner
Yunbo Li (IMT Atlantique), examiner
Cyril Cecchinel (University Côte d'Azur), reviewer
Elyas Ben Hadj Yahia (University of Bordeaux), reviewer
Marcelino Rodriguez Cancio (University of Rennes 1), reviewer
Lionel Seinturier
HDR Abderrahmane Seriai (University of Montpellier), president
HDR Bilel Derbel (University of Lille 1), president
HDR Marc Frincu (University of Timisoara, Romania), examiner
Akram Kamoun (University of Sfax, Tunisia), reviewer
André Sales Fonteles (University of Grenoble), reviewer
Adja Ndeye Sylla (University of Grenoble), reviewer
Lionel Seinturier participated in October in the Chercheur itinérant event organized by the Inria Lille - Nord Europe research center in the context of La fête de la science. Two classrooms (2nde and terminale) were visited. The theme of the visit was crowd-sensing and data gathering from mobile devices.
Antoine Vastel participated in October in the Chercheur itinérant event organized by the Inria Lille - Nord Europe research center in the context of La fête de la science. Two classrooms (2nde and terminale) were visited. The theme of the visit was browser fingerprinting and user privacy tracking.