Our research is based on two complementary fields: distributed systems and software engineering. We aim to introduce more automation into the adaptation processes of software systems, that is, to transition from the study of adaptive systems to self-adaptive systems. In particular, we pursue two directions: self-healing software systems with data mining solutions, and self-optimizing software systems with context monitoring. These two objectives are applied to two target environments: mobile computing and cloud computing.
Distributed software services and systems are central to many human activities, such as communication, commerce, education, and defense.
Distributed software services consist of an ever-growing number of devices, often highly heterogeneous, ranging from cloud platforms and sensor networks to application servers, desktop machines, and mobile devices such as smartphones.
The future of this huge number of interconnected software services has been called the Internet of Services, a vision "where everything that is needed to use software applications is available as a service on the Internet, such as the software itself, the tools to develop the software, the platform servers, storage and communication to run the software."
This research project focuses on defining self-adaptive software services and middleware. From the perspective of the Internet of Services, this project fits the vision sketched by, e.g., the FP8 Expert Group on Services in the Future Internet, the NESSI Research Priorities for the next Framework Programme for Research and Technological Development (FP8), the Roadmap for Advanced Cloud Technologies under H2020, and several other research roadmaps.
Our research program on self-adaptive software targets two key properties that are detailed in the remainder of this section: self-healing and self-optimization.
Software systems are under pressure to change all along their lifecycle. Agile development blurs the frontier between design and execution and requires constant adaptation. The size of systems (millions of lines of code) multiplies the number of bugs by the same order of magnitude. More and more systems, such as sensor network devices, live in "surviving" mode: they can be neither rebooted nor upgraded.
Software bugs are hidden in source code and show up at development time, at testing time or, worse, once deployed in production. Except for very specific application domains where formal proofs are achievable, bugs cannot be eradicated. As an order of magnitude, on 16 Dec 2011, the Eclipse bug repository contained 366 922 bug reports. Software engineers and developers work on bug fixing on a daily basis, though not all of them spend the same time on it. In large companies, managing bugs is sometimes a full-time role, often held by Quality Assurance (QA) software engineers. Also, not all bugs are equal: some are analyzed and fixed within minutes, while others may take months to solve.
In terms of research, this means that: (i) we need means to automatically adapt the design of software systems through automated refactoring and API extraction; (ii) we need approaches to automate the process of adapting source code in order to fix certain bugs; (iii) we need to revisit the notion of error handling so that, instead of crashing in the presence of errors, software adapts itself to continue its execution, e.g., in a degraded mode.
There is no one-size-fits-all solution for each of these points. However, we think that novel solutions can be found by using data mining and machine learning techniques tailored for software engineering . This body of research consists of mining some knowledge about a software system by analyzing the source code, the version control systems, the execution traces, documentation and all kinds of software development and execution artifacts in general. This knowledge is then used within recommendation systems for software development, auditing tools, runtime monitors, frameworks for resilient computing, etc.
The novelty of our approach consists in using and tailoring data mining techniques for analyzing software artifacts (source code, execution traces) in order to achieve the next level of automated adaptation (e.g., automated bug fixing). Technically, we plan to mix unsupervised statistical learning techniques (e.g., frequent itemset mining) and supervised ones (e.g., training classifiers such as decision trees). This research is currently not being performed by data mining research teams, since it requires a high level of domain expertise in software engineering, while software engineering researchers can use off-the-shelf data mining libraries such as Weka.
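As an illustration, the unsupervised side of this mix can be sketched as a tiny frequent-itemset miner over version control changesets; the commits and file names below are hypothetical, and a real setting would use a full Apriori or FP-growth implementation.

```python
from itertools import combinations
from collections import Counter

# Hypothetical changesets: each set lists the files modified together in one commit.
changesets = [
    {"Parser.java", "Lexer.java", "ParserTest.java"},
    {"Parser.java", "ParserTest.java"},
    {"Lexer.java", "Parser.java", "ParserTest.java"},
    {"Cache.java"},
]

def frequent_pairs(transactions, min_support=2):
    """Return file pairs co-changed in at least `min_support` commits."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

pairs = frequent_pairs(changesets)
# Pairs such as (Parser.java, ParserTest.java) emerge as an implicit
# co-evolution rule that can feed a recommendation system.
```

Such mined co-change rules are one concrete form of the "knowledge about a software system" mentioned above.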
We now detail the two directions that we propose to follow to achieve this objective.
The first direction concerns mining techniques for software repositories (e.g., CVS, SVN, Git). Best practices can be extracted by mining the source code and version control history of existing software systems. The design and code of expert developers vary significantly from the artifacts of novice developers. We will learn to differentiate these design characteristics by comparing different code bases, and by observing semantic refactoring actions in version control history. These design rules can then feed the test-develop-refactor adaptation cycle of agile development.
Fault localization of bugs reported in bug repositories. We will build a solid foundation of empirical knowledge about bugs reported in bug repositories. We will perform an empirical study on a set of representative bug repositories to identify classes of bugs and patterns in bug data. For this, we will build a tool to browse and annotate bug reports. Browsing will be supported by two kinds of indexing: first, the tool will index all textual artifacts of each bug report; second, it will index the semantic information that is not present by default in bug management software (e.g., “contains a stacktrace”). Both indexes will be used to find particular subsets of bug reports, for instance “all bugs mentioning invariants and containing a stacktrace”. Note that queries of this complexity and higher are mostly not possible with state-of-the-art bug management software. Analysts will then use annotation features to annotate bug reports. The main outcome of the empirical study will be the identification of classes of bugs that are amenable to automated localization. We will then run machine learning algorithms to identify the latent links between bug report content and source code features. These algorithms will use as training data the existing traceability links between bug reports and source code modifications found in version control systems. We will start with decision trees, since they produce a model that is explicit and understandable by expert developers. Depending on the results, other machine learning algorithms will be used. The resulting system will be able to locate elements in source code related to a given bug report with a certain confidence.
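A minimal sketch of the localization idea, assuming a purely lexical link between bug report text and source code identifiers (the file contents below are invented for illustration; a real system would learn this link from traceability data):

```python
import math
import re
from collections import Counter

def tokens(text):
    # Split camelCase identifiers and punctuation into lowercase word tokens.
    return [w.lower() for w in re.findall(r"[A-Za-z][a-z]*", text)]

def cosine(a, b):
    # Cosine similarity between two token bags.
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def locate(bug_report, source_files):
    """Rank source files by lexical similarity to the bug report."""
    report_toks = tokens(bug_report)
    ranking = sorted(source_files.items(),
                     key=lambda kv: cosine(report_toks, tokens(kv[1])),
                     reverse=True)
    return [name for name, _ in ranking]

# Hypothetical code base with two files.
files = {
    "NullChecker.java": "class NullChecker { void checkNull(Object o) { } }",
    "HttpClient.java": "class HttpClient { void sendRequest(String url) { } }",
}
ranked = locate("NullPointerException when check null fails", files)
```

A learned model (e.g., a decision tree over such similarity features) would replace this raw ranking in practice.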
Automated bug fix generation with search-based techniques. Once a location in the code is identified as the cause of a bug, we can try to automatically find a potential fix. We envision different techniques: (1) infer fixes from existing contracts and specifications that are violated; (2) infer fixes from the software behavior specified as a test suite; (3) try different fix types one by one from a list of identified bug fix patterns; (4) search for fixes in a fix space that consists of combinations of atomic bug fixes. Techniques 1 and 2 have been explored in prior work; we will focus on the latter two techniques. To identify bug fix patterns and atomic bug fixes, we will perform a large-scale empirical study on software changes (also known as changesets when referring to changes across multiple files). We will develop tools to navigate, query and annotate changesets in a version control system. A grounded theory will then be built to master the nature of fixes. Eventually, we will decompose changesets into atomic actions using clustering on changeset actions. We will then use this body of empirical knowledge to feed search-based algorithms (e.g., genetic algorithms) that will look for meaningful fixes in a large fix space. To sum up, our research on automated bug fixing will try not only to point to the source code locations responsible for a bug, but also to search for code patterns and snippets that may constitute the skeleton of a valid patch. Ultimately, a blend of expert heuristics and learned rules will be able to produce valid source code that can be validated by developers and committed to the code base.
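The test-suite-as-oracle principle behind techniques (3) and (4) can be sketched in miniature: candidate atomic fixes are tried one by one until one passes the whole test suite. The buggy predicate and the fix space below are hypothetical.

```python
# Hypothetical buggy predicate with an off-by-one error (should be i < n).
def buggy_in_bounds(i, n):
    return 0 <= i <= n

# A tiny fix space of atomic candidate fixes: alternative boundary predicates.
candidates = [
    lambda i, n: 0 <= i <= n,
    lambda i, n: 0 <= i < n,
    lambda i, n: 0 < i < n,
]

def passes_tests(fn):
    # The test suite acts as the oracle for fix validity.
    tests = [((0, 3), True), ((2, 3), True), ((3, 3), False), ((-1, 3), False)]
    return all(fn(*args) == expected for args, expected in tests)

# Search: keep the first candidate that passes the whole suite.
fix = next(c for c in candidates if passes_tests(c))
```

A genetic algorithm generalizes this linear scan by recombining atomic fixes into larger patches.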
The second proposed research direction is about inventing a self-healing capability at run-time. It is complementary to the previous objective, which mainly deals with development-time issues. We will achieve this in two steps. First, we want to define frameworks for resilient software systems. Those frameworks will help maintain the execution even in the presence of bugs, i.e., let the system survive. As explained below, this may mean, for example, switching to some degraded mode. Next, we want to go a step further and define solutions for automated runtime repair, that is, not simply compensating for the erroneous behavior, but also determining the correct repair actions and applying them at run-time.
Mining best-effort values. A well-known principle of software engineering is the "fail-fast" principle. In a nutshell, it states that as soon as something goes wrong, software should stop its execution before entering an incorrect state. This is fine when a human user is in the loop, capable of understanding the error or at least rebooting the system. However, the notion of "failure-oblivious computing" shows that in certain domains software should run in a resilient mode (i.e., capable of recovering from errors) and/or a best-effort mode, where a slightly imprecise computation is better than stopping. Hence, we plan to investigate data mining techniques in order to learn best-effort values from past executions (i.e., learning what constitutes a correct state or, conversely, what is not a completely incorrect state). This knowledge will then be used to adapt the software state and flow in order to mitigate the consequences of errors, the exact opposite of fail-fast, for systems with long-running cycles.
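The contrast with fail-fast can be sketched as follows: instead of stopping on an invalid input, a component substitutes a value learned from past correct executions. The sensor scenario and the validity rule are hypothetical, and the median stands in for a mined best-effort value.

```python
import statistics

class BestEffortSensor:
    """Hypothetical sketch of best-effort computing for a long-running reader."""

    def __init__(self):
        self.history = []  # values observed in past correct states

    def read(self, raw):
        try:
            value = float(raw)
            if value < 0:  # assumed domain knowledge: negative readings are invalid
                raise ValueError(raw)
            self.history.append(value)
            return value
        except ValueError:
            # Best-effort mode: a slightly imprecise value beats stopping.
            return statistics.median(self.history) if self.history else 0.0

s = BestEffortSensor()
readings = [s.read(r) for r in ["20.0", "22.0", "21.0", "garbage", "-5"]]
# The two invalid inputs are replaced by the learned median instead of crashing.
```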
Embedding search-based algorithms at runtime. Harman recently described the field of search-based software engineering. We believe that certain search-based approaches can be embedded at runtime with the goal of automatically finding solutions that avoid crashing. We will create software infrastructures that allow automatically detecting and repairing faults at run-time. The methodology for achieving this task is based on three points: (1) an empirical study of runtime faults; (2) learning approaches to characterize runtime faults; (3) learning algorithms to produce valid changes to the software runtime state. An empirical study will be performed to analyze those bug reports that are associated with runtime information (e.g., core dumps or stacktraces). After this empirical study, we will create a system that learns from previous repairs how to produce small changes that solve standard runtime bugs (e.g., adding an array bound check to throw a handled domain exception rather than a spurious language exception). To achieve this task, component models will be used to (1) encapsulate the monitoring and repair meta-programs in appropriate components and (2) support runtime code modification using scripting, reflection or bytecode generation techniques.
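The array-bound-check example mentioned above can be sketched as a small repair meta-program that wraps a faulty accessor; the accessor, the domain exception, and the fallback value are all hypothetical.

```python
class RecordNotFound(Exception):
    """Handled domain-level exception, meaningful to the application."""

def with_bound_check(accessor, default=None):
    """Wrap an accessor so an out-of-bounds access yields a fallback value or a
    domain exception instead of a spurious language-level IndexError."""
    def repaired(seq, index):
        if not 0 <= index < len(seq):
            if default is not None:
                return default           # degraded mode: survive with a fallback
            raise RecordNotFound(index)  # or fail with a domain exception
        return accessor(seq, index)
    return repaired

# The repair wraps the raw (crash-prone) access without modifying its code.
get = with_bound_check(lambda seq, i: seq[i], default="<missing>")
values = ["a", "b", "c"]
```

In the envisioned infrastructure, such wrappers would be generated and injected at run-time through reflection or bytecode generation rather than written by hand.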
Complex distributed systems have to seamlessly adapt to a wide variety of deployment targets, because developers cannot anticipate all the runtime conditions under which these systems will operate. A major challenge for these software systems is to develop the capability to continuously reason about themselves and to take appropriate decisions and actions on the optimizations they can apply to improve themselves. This challenge encompasses research contributions in different areas, from environmental monitoring to real-time symptom diagnosis and automated decision making. The variety of distributed systems, the number of optimization parameters, and the complexity of decisions often lead practitioners to resign themselves to designing monolithic and static middleware solutions. However, it is now widely acknowledged that the development of dedicated building blocks does not contribute to the adoption of sustainable solutions. This is confirmed by the scale of actual distributed systems, which can, for example, connect several thousands of devices to a set of services hosted in the cloud. In such a context, the lack of support for smart behaviors at different levels of the system can inevitably lead to its instability or unavailability. In June 2012, an outage of Amazon's Elastic Compute Cloud in North Virginia took down the Netflix, Pinterest, and Instagram services. For hours, all these services failed to serve their millions of customers due to the lack of a self-optimization mechanism going beyond the boundaries of Amazon.
The research contributions we envision within this area will therefore be organized as a reference model for engineering self-optimized distributed systems autonomously driven by adaptive feedback control loops, which will automatically enlarge their scope to cope with the complexity of the decisions to be taken.
This solution introduces a multi-scale approach, which first privileges local and fast decisions to ensure the homeostasis of the system, before enlarging the scope of the control loop when more complex decisions have to be taken.
The novelty of this objective is to exploit the wisdom of crowds to define new middleware solutions that are able to continuously adapt software deployed in the wild. We intend to demonstrate the applicability of this approach to distributed systems that are deployed from mobile phones to cloud infrastructures. The key scientific challenges to address can be summarized as follows: How does software behave once deployed in the wild? Is it possible to automatically infer the quality of experience, as it is perceived by users? Can runtime optimizations be shared across a wide variety of software? How can optimizations be safely operated on large populations of software instances?
The remainder of this section further elaborates on the opportunities that can be considered within the frame of this objective.
Once their software is deployed, developers are generally no longer aware of how it behaves. Even if they heavily use testbeds and benchmarks during the development phase, they mostly rely on the bugs explicitly reported by users to monitor the efficiency of their applications. However, it has been shown that contextual artifacts collected at runtime can help to understand performance leaks and optimize the resilience of software systems. Monitoring and understanding the context of software at runtime therefore represents the first building block of this research challenge. In practice, we intend to investigate crowd-sensing approaches to smartly collect and process runtime metrics (e.g., request throughput, energy consumption, user context). Crowd-sensing can be seen as a specific kind of crowdsourcing activity, which refers to the capability of delegating to a (large) diffuse group of participants the task of retrieving trustable data from the field. In particular, crowd-sensing covers not only participatory sensing, which involves the user in the sensing task (e.g., surveys), but also opportunistic sensing, which exploits mobile sensors carried by the user (e.g., smartphones).
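A minimal sketch of the collection side of such a pipeline: devices opportunistically report raw runtime metrics, and a server-side component aggregates them per metric. The devices, metric names, and values below are invented for illustration.

```python
from statistics import mean

# Hypothetical raw reports, as uploaded opportunistically by mobile devices.
reports = [
    {"device": "phone-1", "metric": "request_throughput", "value": 120.0},
    {"device": "phone-2", "metric": "request_throughput", "value": 80.0},
    {"device": "phone-1", "metric": "battery_drain_pct_h", "value": 4.5},
    {"device": "phone-3", "metric": "battery_drain_pct_h", "value": 5.5},
]

def aggregate(reports):
    """Group raw samples per metric and reduce them to a crowd-level indicator."""
    by_metric = {}
    for r in reports:
        by_metric.setdefault(r["metric"], []).append(r["value"])
    return {m: mean(vs) for m, vs in by_metric.items()}

summary = aggregate(reports)
```

The monitoring layer described next would turn such aggregates into higher-level QoE indicators.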
While reported metrics generally consist of raw data, the monitoring layer aims to produce meaningful indicators, such as the Quality of Experience (QoE) perceived by users. QoE reflects representative symptoms of the software that require triggering appropriate decisions in order to improve its efficiency. To diagnose these symptoms, the system has to process a huge variety of data, including runtime metrics but also log histories, to explore the sources of reported problems and identify opportunities for optimization. The techniques we envision at this level encompass machine learning, principal component analysis, and fuzzy logic to provide enriched information to the decision level.
Beyond the symptoms analysis, decisions should be taken in order to improve the Quality of Service (QoS). In our opinion, collaborative approaches represent a promising solution to effectively converge towards the most appropriate optimization to apply for a given symptom. In particular, we believe that exploiting the wisdom of the crowd can help the software to optimize itself by sharing its experience with other software instances exhibiting similar symptoms. The intuition here is that the body of knowledge that supports the optimization process cannot be specific to a single software instance as this would restrain the opportunities for improving the quality and the performance of applications. Rather, we think that any software instance can learn from the experience of others.
With regard to the state of the art, we believe that a multi-level decision infrastructure, inspired by distributed systems like Spotify, can be used to build a decentralized decision-making algorithm that involves the surrounding peers before requesting a decision from a more central control entity. In the context of collaborative decision-making, peer-based approaches therefore consist in quickly reaching a consensus on the decision to be adopted by a majority of software instances. Software instances can share their knowledge through a micro-economic model, which would weight the recommendations of experienced instances, assuming that their age reflects an optimal configuration.
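The experience-weighted consensus idea can be sketched as follows; the peers, their ages, and the candidate optimizations are hypothetical, and age stands in for the micro-economic weight.

```python
from collections import defaultdict

# Hypothetical peers: each software instance votes for an optimization,
# and its vote is weighted by its age (a proxy for accumulated experience).
peers = [
    {"age_days": 300, "vote": "increase_cache"},
    {"age_days": 10,  "vote": "reduce_polling"},
    {"age_days": 200, "vote": "increase_cache"},
    {"age_days": 5,   "vote": "reduce_polling"},
]

def weighted_consensus(peers):
    scores = defaultdict(float)
    for p in peers:
        scores[p["vote"]] += p["age_days"]  # experienced instances weigh more
    return max(scores, key=scores.get)

decision = weighted_consensus(peers)
# The older instances outweigh the newer ones, so their optimization wins.
```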
Beyond the peer level, the adoption of algorithms inspired by evolutionary computation, such as genetic programming, at an upper decision level can offer an opportunity to test and compare several alternative decisions for a given symptom and to observe how the crowd of applications evolves. By introducing some diversity within this population of applications, some instances will not only provide a satisfying QoS, but will also become naturally resilient to unforeseen situations.
Any decision taken by the crowd must be propagated back to the software instances and then applied by them. While the simplest decisions affect software instances located on a single host (e.g., laptop, smartphone), this process can also involve more complex reconfiguration scenarios that require the orchestration of various actions, safely coordinated across a large number of hosts. Since centralized approaches are generally acknowledged to raise scalability issues, we think that self-optimization should investigate different reconfiguration strategies to propagate and apply the appropriate actions. The investigation of such strategies can be addressed in two steps: the consideration of scalable data propagation protocols and the identification of smart reconfiguration mechanisms.
With regard to the challenge of scalable data propagation protocols, we think that research opportunities encompass not only the exploitation of gossip-based protocols, but also the adoption of publish/subscribe abstractions in order to decouple the decision process from the reconfiguration. The fundamental issue here is the definition of a communication substrate that can accommodate the propagation of decisions with relaxed properties, inspired by Delay Tolerant Networks (DTN), in order to reach weakly connected software instances. We believe that the adoption of asynchronous communication protocols can provide sustainable foundations for addressing various execution environments, including harsh environments, such as developing countries, which suffer from partial connectivity to the network. Additionally, we are interested in developing the principle of social networks of applications in order to seamlessly group and organize software instances according to their similarities and acquaintances. The underlying idea is that grouping application instances can contribute to the identification of optimization profiles, not only feeding the monitoring layer, but also revealing instances interested in similar reconfigurations. Social networks of applications can contribute to the anticipation of reconfigurations by exploiting the symptoms of similar applications to improve the performance of others before problems actually happen.
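The round-based spread of a decision through a gossip-style protocol can be sketched on a toy topology; the instance graph below is hypothetical, and real gossip protocols pick random peers per round rather than flooding all neighbors.

```python
# Hypothetical (weakly connected) overlay of software instances.
neighbors = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d", "e"],
    "d": [],
    "e": ["f"],
    "f": [],
}

def gossip(neighbors, source):
    """Propagate a decision from `source` one hop per round until no new
    instance is reached; returns the informed set and the round count."""
    informed, frontier, rounds = {source}, {source}, 0
    while frontier:
        frontier = {n for node in frontier for n in neighbors[node]} - informed
        informed |= frontier
        rounds += 1 if frontier else 0
    return informed, rounds

informed, rounds = gossip(neighbors, "a")
# All six instances receive the decision in three rounds.
```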
With regard to the challenge of smart reconfiguration mechanisms, we are interested in building on our established experience with adaptive middleware in order to investigate novel approaches to efficient application reconfiguration. In particular, we are interested in adopting seamless micro-update and micro-reboot techniques to provide in-situ reconfiguration of pieces of software. Additionally, the provision of safe and secure reconfiguration mechanisms is clearly a key issue that must be carefully addressed in order to prevent malicious exploitation of dynamic reconfiguration mechanisms against the software itself. In this area, although some reconfiguration mechanisms integrate transaction models, most of them are restricted to local reconfigurations, without providing any support for executing distributed reconfiguration transactions. Additionally, none of the approaches published in the literature include security mechanisms to protect against unauthorized or malicious reconfigurations.
Although our research is general enough to be applied to many application domains, we currently focus on applications and distributed services for the retail industry and for the digital home. These two application domains are supported by a strong expertise in mobile computing and in cloud computing that are the two main target environments on which our research prototypes are built, for which we are recognized, and for which we have already established strong collaborations with the industrial ecosystem.
This application domain is developed in relation with the PICOM (Pôle de compétitivité Industries du Commerce) cluster. We have established strong collaborations with local companies in the context of past funded projects, such as Cappucino and Macchiato, which focused on the development of a new generation of mobile computing platforms for e-commerce. We are also involved in the Datalyse and OCCIware funded projects, which define cloud computing environments with applications for the retail industry. Finally, our activities in terms of crowd-sensing and data gathering on mobile devices with the APISENSE® platform also have applications for the retail industry.
We are developing new middleware solutions for the digital home, in particular through our long-standing collaboration with Orange Labs. We are especially interested in developing energy management and saving solutions with the PowerAPI software library for distributed environments such as the ones that equip digital homes. We are also working to bridge the gap between distributed services hosted on home gateways and distributed services hosted on the cloud, in order to smoothly transition between both environments. This work is especially conducted with the Saloon platform.
In 2019, Christophe Gourdin and Philippe Merle created the XScalibur company, a startup that sells an innovative software solution to design, deploy and monitor software systems in multi-cloud environments. The company is the result of a transfer activity initiated by Philippe Merle and Christophe Gourdin around the OCCIware Studio software tool suite. This model-driven solution for cloud management is the result of several years of research, and has notably been developed in the context of the OCCIware collaborative project from 2014 to 2017. In March 2019, XScalibur was selected by the Alliancy magazine in its top 13 of cloud computing startup companies to "follow closely".
In 2019, Laurence Duchien was general chair of the 13th edition of the European Conference on Software Architecture (ECSA), and program co-chair of the 23rd edition of the International Systems and Software Product Line Conference (SPLC). These two events were co-located in Paris from 9 to 13 September 2019. The fact that a member of the Spirals project-team was chosen by the software architecture research community to serve in these two major events testifies to the recognition and visibility of our research activities in this domain.
Walter Rudametkin and Pierre Laperdrix were awarded in January 2019 the Prix Inria CNIL protection de la vie privée. The award was announced during the 12th edition of the International Computers, Privacy and Data Protection (CPDP) conference, and rewards research undertaken with a view to creating a trustworthy digital society. The award was granted thanks to the work of Walter Rudametkin, Pierre Laperdrix, and Benoit Baudry on browser fingerprinting.
Thomas Durieux was awarded in June 2019 an honorable mention (accessit) at the Prix de thèse GDR GPL for his PhD work on automated software repair, defended in September 2018. The GDR GPL (Génie de la Programmation et du Logiciel) is the group that gathers the French research community on software engineering and programming languages. This is the third time that a PhD student from the Spirals project-team has won either this prize or an honorable mention (Clément Quinton won the prize in 2014, and Maria Gomez won an honorable mention in 2017).
Lakhdar Meftah and Romain Rouvoy won a Best Paper award at the 19th International Conference on Distributed Applications and Interoperable Systems (DAIS 2019)
Philippe Merle won a Best Demo Paper award at the 5th IEEE International Conference on Network Softwarization (NetSoft 2019)
Laurence Duchien was awarded the rank of Chevalière de l'Ordre National du Mérite (JO du 29 mai 2019).
Keywords: Mobile sensing - Crowd-sensing - Mobile application - Crowd-sourcing - Android
Functional Description: The APISENSE platform is a software solution to collect various contextual information from Android devices (client application) and automatically upload the collected data to a server (deployed as a SaaS). APISENSE is based on a cloud computing infrastructure to facilitate the collection of datasets from significant populations of mobile users for research purposes.
Participants: Antoine Veuiller, Christophe Ribeiro, Julien Duribreux, Nicolas Haderer, Romain Rouvoy, Romain Sommerard and Lakhdar Meftah
Partner: Université de Lille
Contact: Romain Rouvoy
URL: https://
Keywords: Energy efficiency - Energy management
Functional Description: PowerAPI is a library for monitoring the energy consumption of software systems.
PowerAPI differs from existing process-level energy monitoring tools in its software orientation, offering a fully customizable and modular solution that lets users precisely define what they want to monitor. PowerAPI is based on a modular and asynchronous event-driven architecture using the Akka library. PowerAPI offers an API that can be used to formulate requests about the energy spent by a process, based on its hardware resource utilization (in terms of CPU, memory, disk, network, etc.).
Participants: Adel Noureddine, Loïc Huertas, Maxime Colmant, Romain Rouvoy, Mohammed Chakib Belgaid and Arthur D'azemar
Contact: Romain Rouvoy
URL: http://
Keywords: Feature Model - Software Product Line - Cloud computing - Model-driven engineering - Ontologies
Functional Description: Saloon is a framework for the selection and configuration of cloud providers according to application requirements. The framework enables the specification of such requirements by defining ontologies. Each ontology provides a unified vision of provider offers in terms of frameworks, databases, languages, application servers and computational resources (i.e., memory, storage and CPU frequency). Furthermore, each provider is related to a Feature Model (FM) with attributes and cardinalities, which captures its capabilities. By combining the ontology and the FMs, the framework is able to match application requirements with provider capabilities and select a suitable provider. Scripts specific to the selected provider are then generated in order to enable its configuration.
Participants: Clément Quinton, Daniel Romero Acero, Laurence Duchien, Lionel Seinturier and Romain Rouvoy
Partner: Université de Lille
Contact: Clément Quinton
Keywords: Java - Code analysis
Functional Description: Spoon is an open-source library to analyze and transform Java source code. Spoon provides a complete and fine-grained Java metamodel where any program element (classes, methods, fields, statements, expressions…) can be accessed both for reading and for modification. Spoon takes source code as input and produces transformed source code ready to be compiled.
Participants: Gérard Paligot, Lionel Seinturier, Martin Monperrus, Nicolas Petitprez and Simon Urli
Contact: Martin Monperrus
We obtained new results on browser fingerprinting, a major Internet security technique that is widely used for many purposes, such as tracking activities, enhancing authentication, and detecting bots, to name a few. These results contribute to enhancing the security of distributed software systems.
Our contributions to browser fingerprinting include the following three elements. First, we collected 122K fingerprints from 2 346 browsers and studied their stability over more than 2 years. We showed that, despite frequent changes in the fingerprints, a significant fraction of browsers can be tracked over a long period of time. Second, we designed a test suite to evaluate fingerprinting countermeasures. We applied our test suite to 7 countermeasures, some of them claiming to generate consistent fingerprints, and showed that all of them can be identified, which can make their users more identifiable. Third, we explored the use of browser fingerprinting for crawler detection. We measured its use in the wild, as well as the main detection techniques. Since fingerprints are collected on the client side, we also evaluated their resilience against an adversarial crawler developer who tries to modify the crawler's fingerprints to bypass security checks.
These results have been obtained in the context of the PhD thesis of Antoine Vastel defended in October 2019.
With respect to self-healing, we proposed a new algorithm for test amplification. Test amplification consists of exploiting the knowledge embedded in test methods, in which developers encode input data and expected properties, in order to enhance these tests.
We proposed a new approach based on test input transformation and assertion generation to amplify test suites, and implemented this approach in the DSpot software tool that we created. By evaluating DSpot on open-source projects from GitHub, we showed that it improves the mutation score of test suites. These improvements were proposed to developers through pull requests: their feedback shows that they value the output of DSpot, as they accepted to integrate amplified test methods into their test suites. This proves that DSpot can improve the quality of the test suites of real projects. We also showed that DSpot can generate amplified test methods that specify behavioral changes, as well as amplified test methods that improve the ability to detect potential regressions.
These results have been obtained in the context of the STAMP H2020 project and in the context of the PhD thesis of Benjamin Danglot defended in November 2019.
With respect to self-healing, we obtained new results in the domain of code smells for mobile software systems. Code smells are well-known concepts in software engineering. They refer to bad design and development practices commonly observed in software systems.
We obtained three new results that contribute to a better understanding of mobile code smells. First, we studied the prevalence of code smells across different mobile platforms. Then, we conducted a large-scale study to analyze the change history of mobile apps and discern the factors that favor the introduction and survival of code smells. To consolidate these studies, we also performed a user study to investigate developers’ perception of code smells and the adequacy of static analyzers as a solution for coping with them. Finally, we performed a qualitative study to question the established foundation about the definition and detection of mobile code smells. The results of these studies revealed important research findings. Notably, we showed that pragmatism, prioritization, and individual attitudes are not relevant factors for the accrual of mobile code smells. The problem is rather caused by ignorance and oversight, which are prevalent among mobile developers. Furthermore, we highlighted several flaws in the code smell definitions that are currently adopted by the research community. These results allowed us to formulate recommendations for researchers and tool makers willing to design detection and refactoring tools for mobile code smells , . On top of that, our results opened perspectives for research works about the identification of mobile code smells and development practices in general.
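Static analyzers of the kind evaluated in the user study typically flag smells by thresholding structural metrics. A simplified sketch using the classic "long method" smell (mobile-specific smells such as those studied here require richer, platform-aware checks, and the threshold below is arbitrary):

```python
import ast

LONG_METHOD_THRESHOLD = 5  # illustrative; real detectors calibrate per smell

def long_methods(source: str):
    """Flag functions whose body exceeds a statement-count threshold,
    a simplified stand-in for a static code-smell detector."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Count statements inside the function (exclude the def itself).
            statements = sum(isinstance(n, ast.stmt) for n in ast.walk(node)) - 1
            if statements > LONG_METHOD_THRESHOLD:
                smells.append((node.name, statements))
    return smells

code = """
def short(x):
    return x + 1

def long(x):
    a = x + 1
    b = a * 2
    c = b - 3
    d = c // 4
    e = d ** 2
    return e
"""
assert long_methods(code) == [("long", 6)]
```

The qualitative study's point is precisely that such threshold-based definitions can be flawed: the metric, the threshold, and the mapping to an actual maintenance problem all need validation against developer practice.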
These results have been obtained in the context of the PhD thesis of Sarra Habchi defended in December 2019.
We obtained new results in the domain of data privacy for crowdsourced data.
We proposed an anonymous data collection library for mobile apps, a software library that improves the user's privacy without compromising the overall quality of the crowdsourced dataset. In particular, we proposed a decentralized approach, named FOUGERE, to convey data samples from user devices using peer-to-peer (P2P) communications to third-party servers, thus introducing an a priori data anonymization process that is resilient to location-based attacks. To validate the approach, we proposed a testing framework, named PeerFleet, to test this P2P communication library. Beyond the identification of P2P-related errors, PeerFleet also helps to tune the discovery protocol settings to optimize the deployment of P2P apps. We validated FOUGERE using 500 emulated devices that replay a mobility dataset and use the library to collect location data. We evaluated the overhead, the privacy, and the utility of FOUGERE. We showed that FOUGERE defeats state-of-the-art location-based privacy attacks with little impact on the quality of the collected data , .
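The a priori anonymization idea can be illustrated with a toy gossip simulation (not FOUGERE's actual protocol): devices swap samples with random peers before uploading, so the server receives the full dataset but cannot attribute a sample to the device that produced it.

```python
import random

random.seed(7)  # deterministic run for the example

# Each device produces one location sample tagged with its true origin.
devices = {f"dev{i}": [(f"dev{i}", (50.60 + i, 3.05 + i))] for i in range(4)}

def p2p_swap_round(devices):
    """One gossip round: random peer pairs exchange one sample each before
    uploading, so the server cannot tie a sample to its reporter."""
    ids = list(devices)
    random.shuffle(ids)
    for a, b in zip(ids[::2], ids[1::2]):
        if devices[a] and devices[b]:
            sa = devices[a].pop(random.randrange(len(devices[a])))
            sb = devices[b].pop(random.randrange(len(devices[b])))
            devices[a].append(sb)
            devices[b].append(sa)

before = sorted(s for buf in devices.values() for s in buf)
p2p_swap_round(devices)
after = sorted(s for buf in devices.values() for s in buf)

# Utility preserved: the server still receives every sample exactly once...
assert after == before
# ...but every sample is now uploaded by a device that did not produce it.
assert all(origin != dev for dev, buf in devices.items() for (origin, _) in buf)
```

The real system must additionally defend against colluding peers and location-inference attacks, which is what the evaluation against state-of-the-art attacks measures.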
These results have been obtained in the context of the PhD thesis of Lakhdar Meftah defended in December 2019.
A software exploitation license (2014–ongoing) of the APISENSE® crowd-sensing platform has been sold to the ip-label company. They use this platform as a solution to monitor the quality of the GSM signal in the wild. The objective is to provide developers and stakeholders with feedback on the quality of experience of GSM connections depending on their location.
This collaboration (2017–20) aims at proposing new solutions for optimizing the energy footprint of ICT software infrastructures. We want to be able to measure and assess the energy footprint of ICT systems while preserving various quality of service parameters, such as performance and security. We aim at proposing a testbed for assessing the energy footprint of various programming languages. This testbed will also incorporate frameworks for web and mobile programming. Finally, we want to be able to issue recommendations to developers in order to assist them in improving the energy footprint of their programs. This collaboration will take advantage of the PowerAPI software library.
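To give an intuition of software-defined power meters such as PowerAPI, here is a toy linear CPU power model with made-up coefficients (real power models are calibrated per machine and per hardware component):

```python
# Illustrative linear CPU power model; coefficients are invented, not
# PowerAPI's calibrated values.
P_IDLE, P_MAX = 10.0, 65.0  # watts for the whole CPU package

def package_power(utilization: float) -> float:
    """Whole-package power as a linear function of CPU utilization."""
    return P_IDLE + (P_MAX - P_IDLE) * utilization

def per_process_power(cpu_shares: dict, utilization: float) -> dict:
    """Attribute the dynamic part of package power to processes
    proportionally to their share of CPU time; idle power stays global."""
    dynamic = package_power(utilization) - P_IDLE
    total = sum(cpu_shares.values())
    return {pid: dynamic * share / total for pid, share in cpu_shares.items()}

shares = {"web-server": 0.30, "database": 0.50, "batch-job": 0.20}
estimate = per_process_power(shares, utilization=0.60)
assert round(sum(estimate.values()), 6) == round((P_MAX - P_IDLE) * 0.60, 6)
assert estimate["database"] > estimate["web-server"] > estimate["batch-job"]
```

Per-process attribution of this kind is what makes it possible to issue the developer-facing recommendations mentioned above: it points at the software component responsible for the energy draw rather than at the machine as a whole.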
The PhD of Mohammed Chakib Belgaid takes place in the context of this collaboration.
This collaboration (2017–19) aims at defining a computational model for software infrastructures layered on top of virtualized and interconnected cloud resources. This computational model provides application programming and management facilities to distributed applications and services , . It defines a pivot model that enables the interoperability of various existing and future standards for cloud systems, such as OCCI and TOSCA. This pivot model is defined with the Alloy specification language . This collaboration takes advantage of the expertise that we have been developing for several years on reconfigurable component-based software systems , on cloud systems , and on the Alloy specification language .
This collaboration with Orange Labs is a joint project with Jean-Bernard Stefani from the Spades Inria project-team.
This collaboration (2018–21) aims at proposing new solutions for modeling the energy efficiency of software systems and to design and implement new methods for measuring and reducing the energy consumption of software systems at development time. We especially target software systems deployed on cloud environments.
The CIFRE PhD of Zakaria Ournani takes place in the context of this collaboration.
This collaboration (2018–21) aims at proposing new solutions for automatically spotting and fixing recurrent user experience issues in web applications. We are interested in developing an autonomic framework that learns and classifies behaviors and infers causality links between data sources such as web GUI events, support tickets and user feedback, and source version management events (e.g., recent commits). The ultimate objective is to implement an AI-powered recommendation system to guide maintenance and even to automatically predict and solve user issues.
The CIFRE PhD of Sacha Brisset takes place in the context of this collaboration.
CIRRUS is a 3-year (2017–20) joint team with the Scalair cloud operator and architect company, funded by the Hauts-de-France region. The CIRRUS joint team is developing novel solutions in the domains of the on-demand configuration of heterogeneous cloud resources, the management of cloud elasticity for all deployed services (SaaS, PaaS, IaaS) in order to guarantee quality of service and user quality of experience, and the taming of the financial costs of cloud infrastructures.
Alloy@Scale is a 12-month (2018–19) project funded in the context of the CPER Data program. Alloy@Scale aims at overcoming the limits of the formal verification of large software systems specified with the Alloy formal specification language. For that, the project combines the Grid'5000 infrastructure and the Docker container technology.
This 24-month (2019–20) project is funded in the context of the STaRS program. It aims at the development of methods and tools for rigorous design of cloud computing platforms and applications, which can be proven to be correct by construction. First results have been published in , , .
Indoor Analytics is a 32-month (2019–21) project funded in the context of the CPER Data program. Indoor Analytics aims at collaborating with the Mapwize company on the development of novel analytics for indoor location systems. In particular, Mapwize and Spirals target the joint delivery of an open-source software solution devoted to the acquisition, storage and processing of location events at scale.
COMMODE (Knowledge COMpilation for feature MODEls) is a 24-month (2019–21) project funded in the context of the CPER Data program. COMMODE aims at using techniques from knowledge compilation, a subarea of artificial intelligence, for feature models, a representation of software products used in software engineering.
North European Lab LLEX (2017–19) is an international initiative supported by the Inria Lille - Nord Europe Center that takes place in the context of a collaboration between Inria and KTH. LLEX deals with research on the automated diagnosis and repair of software bugs. Automated software repair is the process of fixing software bugs automatically, with no human intervention. Its goal is to save maintenance costs and to make systems more resilient to bugs and unexpected situations. This research may dramatically improve the quality of software systems. This initiative led to several results that have been published , , , , and to the PhD thesis of Benjamin Danglot, defended in November 2019.
ADT FingerKit (2018–20) is a technology development initiative supported by the Inria Lille - Nord Europe Center that focuses on the design and development of a new and enhanced version of the AmIUnique platform. AmIUnique is a data collection and analysis platform to better understand, analyze and popularize the uses and threats of browser fingerprinting. This initiative, led by Inria, is a key asset to better understand novel techniques that threaten user privacy on the Internet. This ADT builds on our first results with the PhD thesis of Antoine Vastel .
ADT e-Lens (2018–20) is a technology development initiative supported by the Inria Lille - Nord Europe Center that aims at extending the PowerAPI energy monitoring library that we develop in the team since 2011. The extension deals with the integration of new power models (for GPU, disk, network interface), the implementation of a self-optimization algorithm, the port of the platform to embedded systems running with Raspberry Pi, ROS and Android, and the implementation of an active learning algorithm for power models. This ADT builds on our results with the defended PhD theses of Adel Noureddine and Maxime Colmant , and with the ongoing PhD thesis of Guillaume Fieni.
BottleNet is a 48-month project (2015–19) funded by ANR.
The objective of BottleNet is to deliver methods, algorithms, and software systems to measure Internet Quality of Experience (QoE) and diagnose the root cause of poor Internet QoE.
Our goal calls for tools that run directly at users’ devices.
We plan to collect network and application performance metrics directly at users’ devices and correlate them with user perception to model Internet QoE, and to correlate measurements across users and devices to diagnose poor Internet QoE.
This data-driven approach is essential to address the challenging problem of modeling user perception and of diagnosing sources of bottlenecks in complex Internet services.
BottleNet will lead to new solutions to assist users, network and service operators as well as regulators in understanding Internet QoE and the sources of performance bottleneck.
Several results and publications have been obtained in the context of this project , , .
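The correlation step at the heart of this data-driven approach can be illustrated as follows, with entirely hypothetical measurements pairing device-side page-load times with user opinion scores:

```python
import math

# Hypothetical sessions: page-load time (seconds) measured on the device,
# paired with the user's 1-5 opinion score for the same session.
load_time = [0.4, 0.9, 1.5, 2.8, 4.0, 6.5]
user_score = [5, 5, 4, 3, 2, 1]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Strong negative correlation: longer loads, lower perceived quality.
assert pearson(load_time, user_score) < -0.9
```

A real QoE model must of course go beyond a single linear correlation (user perception is non-linear in load time and varies across users), but cross-correlating such device-side metrics with perception is the starting point for diagnosing the root cause of poor QoE.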
SATAS is a 48-month project (2015–20) funded by ANR. SATAS aims to advance the state of the art in massively parallel SAT solving with a particular eye to the applications driving progress in the field. The final goal of the project is to be able to provide a "pay as you go" interface to SAT solving services, with a particular focus on their power consumption. This project will extend the reach of SAT solving technologies, daily used in many critical and industrial applications, to new application areas, which were previously considered too hard, and lower the cost of deploying massively parallel SAT solvers on the cloud. Our results from this project have been published in the following papers , .
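A common strategy in parallel SAT solving is the portfolio approach: several differently-configured solvers race on the same formula and the first answer wins. The sketch below pairs a minimal DPLL procedure with two branching heuristics; a real massively parallel solver would run them concurrently and share learned clauses:

```python
from collections import Counter

def simplify(clauses, lit):
    """Assign `lit` true: drop satisfied clauses, shrink falsified literals."""
    out = []
    for c in clauses:
        if lit in c:
            continue  # clause satisfied
        if -lit in c:
            c = [l for l in c if l != -lit]
            if not c:
                return None  # empty clause: conflict
        out.append(c)
    return out

def dpll(clauses, choose):
    """Minimal DPLL: unit propagation plus branching via `choose`."""
    if not clauses:
        return True
    for c in clauses:
        if len(c) == 1:  # unit propagation is forced
            reduced = simplify(clauses, c[0])
            return reduced is not None and dpll(reduced, choose)
    lit = choose(clauses)
    for branch in (lit, -lit):
        reduced = simplify(clauses, branch)
        if reduced is not None and dpll(reduced, choose):
            return True
    return False

# Two branching heuristics; a portfolio would race them in parallel.
def first_literal(clauses):
    return clauses[0][0]

def most_frequent(clauses):
    return Counter(l for c in clauses for l in c).most_common(1)[0][0]

sat = [[1, 2], [-1, 3], [-2, -3]]   # (x1 or x2)(not x1 or x3)(not x2 or not x3)
unsat = [[1], [-1]]                  # x1 and not x1

for heuristic in (first_literal, most_frequent):
    assert dpll(sat, heuristic) is True
    assert dpll(unsat, heuristic) is False
```

Different heuristics explore the search tree in different orders, so on hard instances one configuration may finish orders of magnitude earlier than another; the portfolio keeps whichever finishes first, at the cost of the energy spent on the losers, which is exactly the trade-off SATAS studies.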
Headwork is a 48-month project (2016–21) funded by ANR. The main objective of Headwork is to develop data-centric workflows for programming crowdsourcing systems in a flexible declarative manner. The challenge in crowdsourcing systems is to fill a database with knowledge gathered by thousands or more human participants. A particular focus is put on aspects of data uncertainty and on the representation of user expertise. This project is coordinated by D. Gross-Amblard from the Druid Team (Rennes 1). Other partners include the Dahu team (Inria Saclay), Sumo (Inria Bretagne), and Links (Inria Lille) with J. Niehren and M. Sakho. Our results from this project have been published in the following paper .
Delta is a 48-month project (2016–21) funded by ANR. The project focuses on the study of logic, transducers and automata. In particular, it aims at extending classical frameworks to handle input/output, quantities and data. This project is coordinated by M. Zeitoun from LaBRI. Other partners include LIF (Marseille), IRIF (Paris-Diderot), and D. Gallois from the Inria Lille Links team. Several results and publications have been obtained in the context of this project , , , .
CQFD is a 48-month project (2018–22) funded by ANR. The project focuses on complex ontological queries over federated heterogeneous data. It aims to set the foundations, provide efficient algorithms, and design query-rewriting-based evaluation mechanisms for ontology-mediated query answering over heterogeneous data models. This project is coordinated by Federico Ulliana from Inria Sophia Antipolis. Other partners include LaBRI, Inria Saclay, IRISA, LTCI, and LIG.
FP-Locker is a 42-month project (2019–23) funded by ANR in the context of the JCJC program. This project proposes to investigate advanced browser fingerprinting as a configurable authentication mechanism. We argue that it has the potential to be the only authentication mechanism when used in very low-security, public websites; it can be used to block bots and other fraudulent users from otherwise open websites. It also has the potential to be used as a second-factor authentication mechanism, or as an additional factor in Multi-Factor Authentication (MFA) schemes. Besides strengthening a session’s initial authentication, it can also be used for continuous session authentication to protect against session hijacking. In many contexts, fingerprinting is fully transparent to users, meaning that, contrary to authentication processes that rely on external verification cards, code-generating keys, special apps, or SMS verification codes, users do not have to do anything to improve their security. In more restricted contexts, administrators can enforce different policies, for example, enrolling fingerprints from devices that connect from trusted IP addresses (e.g., an internal network), and then verifying these fingerprints when the same users connect from untrusted IP addresses. Consequently, we plan to design and implement an architecture that can plug the browser fingerprinting authentication process into an existing authentication system.
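A minimal sketch of fingerprint verification as an additional authentication factor; the attribute names and the drift threshold are illustrative only, not the project's actual design:

```python
# Hypothetical second-factor check: tolerate small fingerprint drift
# (e.g. a browser update) but reject a substantially different device.
MAX_CHANGED_ATTRIBUTES = 1

def verify_fingerprint(enrolled: dict, presented: dict) -> bool:
    """Accept if the presented fingerprint differs from the enrolled one
    in at most MAX_CHANGED_ATTRIBUTES attributes."""
    keys = set(enrolled) | set(presented)
    changed = sum(enrolled.get(k) != presented.get(k) for k in keys)
    return changed <= MAX_CHANGED_ATTRIBUTES

enrolled = {"userAgent": "Firefox/70.0", "timezone": "UTC+1",
            "screen": "1920x1080", "webgl": "Mesa/Intel"}

# Same device after a browser update: still accepted.
assert verify_fingerprint(enrolled, dict(enrolled, userAgent="Firefox/71.0"))

# A different device presenting a stolen session cookie: rejected.
attacker = {"userAgent": "Chrome/78.0", "timezone": "UTC-5",
            "screen": "1366x768", "webgl": "ANGLE/NVIDIA"}
assert not verify_fingerprint(enrolled, attacker)
```

The transparency claimed in the text follows from the fact that this check needs no user action: the attributes are collected passively at each request, which also enables the continuous session authentication mentioned above.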
Koala is a 42-month project (2019–23) funded by ANR in the context of the JCJC program. The project aims to deliver a series of innovative tools, methods and software to deal with the complexity of fog computing environment configurations and adaptations. In particular, we take a step back from the current limitations of existing approaches (e.g., lack of expressiveness and scalability) and address them by placing knowledge as a first-class citizen. We plan to tackle configuration issues from a novel perspective in the field of variability management, using recent techniques from the area of knowledge compilation. Specifically, we will investigate the best-suited d-DNNF representation for each reasoning operation, and we plan to provide new variability modeling mechanisms (e.g., dimensions, priorities and scopes) required in a fog context. Regarding adaptation concerns, we want to leverage machine learning techniques to improve adaptation management and evolution under uncertainty, relying on a continuously enriched and reusable knowledge base. In particular, we plan to propose an approach for suggesting evolution scenarios in a predictive manner, relying on an evolution-aware knowledge base acquired at run-time through machine learning feedback.
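The configuration-reasoning problem can be illustrated on a toy feature model (all feature names invented): valid configurations are the boolean assignments satisfying the model's constraints, and naive enumeration is exponential in the number of features, which is precisely what compiling the constraints into d-DNNF avoids.

```python
from itertools import product

# Toy fog-deployment feature model: Storage is mandatory with an xor group
# of children Local/Remote; Cache is optional but requires Local.
FEATURES = ["Storage", "Local", "Remote", "Cache"]

def valid(cfg: dict) -> bool:
    return (cfg["Storage"]                           # mandatory feature
            and (cfg["Local"] != cfg["Remote"])      # xor group
            and (not cfg["Cache"] or cfg["Local"]))  # cross-tree constraint

configs = [dict(zip(FEATURES, bits))
           for bits in product([False, True], repeat=len(FEATURES))
           if valid(dict(zip(FEATURES, bits)))]

# Brute force explores 2^4 assignments; a d-DNNF compilation of the same
# constraints supports model counting in time polynomial in its size.
assert len(configs) == 3
```

On realistic fog models with hundreds of features, this enumeration is intractable, while operations such as counting, sampling, or checking a partial configuration become cheap once the model has been compiled, at the price of the one-off compilation step.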
StoreConnect is a 36-month project (2016–19) funded by FUI and labelled by the PICOM (Pôle des Industries du COMmerce) competitiveness cluster. The partners are Tevolys, Ubudu (leader), Smile, STIME, Leroy Merlin, Insiteo, Inria Spirals, Inria Fun, Inria Stars. The goal of the project is to define a modular multi-sensor middleware platform for indoor geolocation. Several results and publications have been obtained in the context of this project , , .
BetterNet (2016–19) aims at building and delivering a scientific and technical collaborative observatory to measure and improve the Internet service access as perceived by users. In this Inria Project Lab, we will propose new, original user-centered measurement methods that draw on social sciences to better understand Internet usage and the quality of services and networks. Our observatory can be defined as a vantage point, where: (1) tools, models and algorithms/heuristics will be provided to collect data, (2) acquired data will be analyzed, and shared appropriately with scientists, stakeholders and civil society, and (3) new value-added services will be proposed to end-users. IPL BetterNet is led by Isabelle Chrisment (Inria Madynes), with the participation of the Diana, Dionysos, Inria Chile, Muse, and Spirals Inria project-teams, as well as the ARCEP French agency and the ip-label company. Our results in the context of this project have been published in .
"Gérer vos données sans fuite d'information" ("Managing your data without information leaks") is a 3-year (2018–20) project granted in the context of the CNRS-Momentum call for projects. Data manipulated by modern applications are stored in large databases. To protect these pieces of data, security policies limit a user's access to what she is allowed to see. However, by using the semantics of the data, a user can deduce information that she was not supposed to have access to. The goal of this project is to establish methods and tools for understanding and detecting such data leaks. Several results and publications have been obtained in the context of this project , , .
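A classic instance of such a leak: two individually harmless views, an employee directory and a per-department salary average, combine to reveal a hidden individual salary. The table and views below are toy examples, not from the project:

```python
# Base table: salaries are hidden by the security policy.
employees = [
    {"name": "Alice", "dept": "R&D",   "salary": 65000},
    {"name": "Bob",   "dept": "R&D",   "salary": 55000},
    {"name": "Carol", "dept": "Legal", "salary": 80000},
]

# View 1 (allowed): which employee works in which department.
directory = [(e["name"], e["dept"]) for e in employees]

# View 2 (allowed): average salary per department, an aggregate.
def avg_salary(dept):
    rows = [e["salary"] for e in employees if e["dept"] == dept]
    return sum(rows) / len(rows)

# Deduction using the data semantics: Carol is the only Legal employee,
# so the "aggregate" average is exactly her individual salary.
legal_members = [n for (n, d) in directory if d == "Legal"]
assert legal_members == ["Carol"]
assert avg_salary("Legal") == 80000  # Carol's hidden salary, leaked
```

Detecting such leaks automatically requires reasoning over the combination of all published views against the policy, rather than checking each view in isolation, which is the kind of analysis the project targets.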
Program: H2020 ICT-10-2016.
Project acronym: STAMP.
Project title: Software Testing Amplification.
Duration: 36 months (2016–19).
Coordinator: Inria.
Other partners: ActiveEon (France), Atos (Spain), Engineering (Italy), OW2 (France), SINTEF (Norway), TellU (Norway), TU Delft (The Netherlands), XWiki (France).
Abstract: By leveraging advanced research in automatic test generation, STAMP aims at pushing automation in DevOps one step further through innovative methods of test amplification. It will reuse existing assets (test cases, API descriptions, dependency models) in order to generate more test cases and test configurations each time the application is updated. Acting at all steps of the development cycle, STAMP techniques aim at reducing the number and cost of regression bugs at the unit level, configuration level and production stage.
Participants: Benjamin Danglot, Martin Monperrus [contact person].
Program: H2020 JU Shift2Rail.
Project acronym: X2Rail-1.
Project title: Start-up activities for Advanced Signalling and Automation System.
Duration: 36 months (2016–19).
Coordinator: Siemens.
Other partners: 19 partners, among others Bombardier, Siemens, Thales, IRT Railenium.
Abstract: Our contribution to the project is focused on adaptive communication middleware for cyber-physical railway systems.
Participants: Lionel Seinturier [contact person].
Program: EUREKA Celtic-Plus.
Project acronym: SENDATE.
Project title: SEcure Networking for a DATa Center Cloud in Europe.
Duration: 36 months (2016–19).
Coordinator: Nokia.
Other partners: 50+ partners in Finland, France, Germany, Norway, and Sweden. Selected partners involved: Nokia, Orange.
Abstract: The project addresses the convergence of telecommunication networks and IT in the context of distributed data centers. We are involved in the TANDEM subproject that targets the infrastructure of such a distributed system. More specifically, we are studying new approaches in terms of software engineering and component-based solutions for enabling this convergence of network and IT.
Participants: Lionel Seinturier [contact person].
Title: Self-Optimization of Service Oriented Architectures for Mobile and Cloud Applications
International Partner (Institution - Laboratory - Researcher):
Université du Québec à Montréal (Canada) - LATECE - Naouel MOHA
Start year: 2017
See also: http://
The long-term goal of this research program is to propose a novel and innovative methodology embodied in a software platform, to support the runtime detection and correction of anti-patterns in large-scale service-oriented distributed systems in order to continuously optimize their quality of service. One originality of this program lies in the dynamic nature of the service-oriented environments and the application on emerging frameworks for embedded and distributed systems (e.g., Android/iOS for mobile devices, PaaS/SaaS for Cloud environments), and in particular mobile systems interacting with remote services hosted on the Cloud.
PACE is a 3-year (2019–21) project funded by the Research Council of Norway. The goal of the project is to establish a sustained education- and research-oriented collaboration in energy informatics and green computing between four partner universities, strengthening academic relations and mutually improving the quality of research and researcher training at both the PhD and master levels. The partner universities are: University of Oslo (Norway), University of Stavanger (Norway), TU Munich (Germany), and Université de Lille.
Jonatan Enes, PhD Student in Computer Science from University of A Coruña, visited us for 3 months from April to July.
Alejandro Grez, from Pontifical Catholic University of Chile, visited us for 1 month in April.
Simon Bliudze
co-chair International Workshop on Methods and Tools for Rigorous System Design (MeTRiD)
Laurence Duchien
general chair European Conference on Software Architecture (ECSA)
Walter Rudametkin Ivey
Atelier sur la Protection de la Vie Privée (APVP)
Pierre Bourhis
Atelier sur la Protection de la Vie Privée (APVP)
Knowledge Comparison Workshop (KOCOON)
Clément Quinton
publicity chair International Systems and Software Product Line Conference (SPLC)
proceedings chair European Conference on Software Architecture (ECSA)
Romain Rouvoy
poster & demonstration chair International Symposium on Reliable Distributed Systems (SRDS)
Laurence Duchien
co-chair International Systems and Software Product Line Conference (SPLC)
Simon Bliudze
International Conference on Coordination Models and Languages (Coordination)
International Conference on Formal Methods in Software Engineering (FormaliSE)
International Conference on Formal Aspects of Component Software (FACS)
International Conference on Verification and Evaluation of Computer and Communication Systems (VECoS)
International Symposium on Model-Based Safety and Assessment (IMBSA)
Workshop on Formal Approaches for Advanced Computing Systems (FAACS)
International Workshop on Foundations of Coordination Languages and Self-Adaptative Systems (FOCLASA)
Embedded Systems and the Internet of Things track at Euromicro Software Engineering and Advanced Applications conference (ES-IoT@SEAA)
Pierre Bourhis
International Conference on Database Theory (ICDT)
ACM Symposium on Principles of Database Systems (PODS)
International Joint Conferences on Artificial Intelligence (IJCAI)
Conférence sur la Gestion de Données – Principes, Technologies et Applications (BDA)
Laurence Duchien
International Conference on Software Architecture (ICSA)
International Workshop Series on Conducting Empirical Studies in Industry (CESI) in conjunction with the ICSE conference
International Workshop on Software Qualities And Their Dependencies (SQADE) in conjunction with the ESEC/FSE conference
Belgium-Netherlands Software Evolution Workshop
Pierre Laperdrix
USENIX Workshop on Offensive Technologies (WOOT)
Philippe Merle
International Conference on Cooperative Information Systems (CoopIS)
International Conference on Evolving Internet (INTERNET)
International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING)
International Conference on Advanced Service Computing (SERVICE COMPUTATION)
International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies (UBICOMM)
Workshop on CrossCloud Infrastructures & Platforms (CrossCloud)
International Workshop on Adaptive and Reflective Middleware (ARM)
Clément Quinton
International Workshop on Variability Modelling of Software-Intensive Systems (VaMoS)
ACM Symposium on Applied Computing (SAC) SA-TTA Track
International Systems and Software Product Line Conference (SPLC)
SPLC Demonstration and Tools Track
International Workshop on Software Product Line Teaching (SPLTea)
Women in Software Architecture (WSA ECSA)
Romain Rouvoy
International Conference on Service Oriented Computing (ICSOC)
ACM Symposium on Applied Computing (SAC) DADS Track
International Conference on Ambient Systems, Networks and Technologies (ANT)
ACM International Conference on Distributed and Event-based Systems (DEBS)
International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP)
International Workshop on Advances in Mobile App Analysis (A-Mobile)
Eurosys Doctoral Workshop
Lionel Seinturier
IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft)
ACM Symposium on Applied Computing (SAC) SA-TTA Track
International Conference on Software Technologies (ICSOFT)
International Conference on Service Oriented Computing (ICSOC)
ACM International Conference on Management of Emergent Digital EcoSystems (MEDES)
International Conference on Computational Collective Intelligence (ICCCI)
Walter Rudametkin Ivey
International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS) Track D: Security and Privacy
Lionel Seinturier is editor for software engineering of the ISTE-Wiley Computer Science and Information Technology book collection.
Simon Bliudze: Science of Computer Programming (SCICO), Journal of Logical and Algebraic Methods in Programming (JLAMP)
Laurence Duchien: Journal of Systems and Software (JSS).
Pierre Laperdrix: IEEE Access, Emerald Journal of Intellectual Capital.
Philippe Merle: ACM Computing Surveys (CSUR), Elsevier Future Generation Computer Systems (FGCS), IEEE Transactions on Services Computing (TSC), IEEE Access, Wiley Software Practice and Experience (SPE).
Clément Quinton: Elsevier Information and Software Technology (IST), Science of Computer Programming (SCICO).
Romain Rouvoy: Elsevier Information and Software Technology (IST), Elsevier Journal of Systems and Software (JSS), ACM Transactions on Internet of Things (TOIT), ACM Computing Surveys (CSUR), IEEE Transactions on Software Engineering (TSE).
Walter Rudametkin Ivey: ACM Transactions on the Web (TWEB), IEEE Transactions on Services Computing (TSC), IEEE Communications Magazine.
Lionel Seinturier: ACM Transactions on Software Engineering and Methodology (TOSEM), Springer Journal of Internet Services and Applications (JISA), Wiley Journal of Software: Evolution and Process (JSEP).
Simon Bliudze gave a tutorial on Rigorous Component-Based Design in BIP at the 6th International Symposium on Model-Based Safety and Assessment (IMBSA) and an invited lecture on Component-Based Design of Concurrent Software in BIP to a group of Master 2 students at the Aristotle University of Thessaloniki (Greece).
Pierre Bourhis gave an invited talk on Reasoning on Leak of Information for Database Views at the Emerging Challenges in Databases and AI Research Seminar in Santa Cruz, Chile.
Romain Rouvoy gave an invited talk on "Enabling In situ Location Privacy" during the IRIXYS workshop organized in Lyon in June 2019. He was also a keynote speaker during the national days of the GDR GPL in June 2019 with a talk entitled "Quels défis pour le développement durable des logiciels ?".
Romain Rouvoy is the co-head of the "Groupe de Travail Génie Logiciel pour les Systèmes Cyber-physiques" of the GDR GPL.
Laurence Duchien was member of the Hcéres committee for the UMR Laboratoire d'Informatique de Grenoble (LIG). She was an expert for Canada's NSERC and for the PHC program of the DGRI (Ministère de la Recherche). She was chair of the recruitment committee for Professor in Computer Science at ENS Lyon. She was member of the recruitment committee for two associate professor positions at the University of Luxembourg.
Romain Rouvoy was vice-chair of the recruitment committee for research scientists at Inria Lille - Nord Europe. He was member of the recruitment committee for associate researchers at University of Luxembourg. He was member of scientific committee 25 - Infrastructures (networks, high performance computing and storage), software sciences and technologies of the French Research Agency (ANR). He was scientific expert for DRRT IDF.
Lionel Seinturier was chair of the recruitment committee for Professor in Computer Science at CNAM Paris. He was member of the promotion committee to Tenured Associate Professor, Department of Electrical and Computer Engineering, University of Waterloo, Canada. He was member of the IEEE Technical Council on Software Engineering Distinguished Educational Award Committee. He was scientific expert for Chile CONICYT, Belgium Innoviris, DRRT IDF.
Simon Bliudze was member of steering committees of International Interaction and Concurrency Experience workshop (ICE) and the International Symposium on Formal Approaches to Parallel and Distributed Systems (4PAD). He is lead guest editor for a special issue on Verification and Evaluation of Computer and Communication Systems (VECoS) of Innovations in Systems and Software Engineering: A NASA Journal (ISSE, Springer) and guest editor for a special issue on Methods and Tools for Rigorous System Design (MeTRiD) of the International Journal on Software Tools for Technology Transfer (STTT, Springer).
Laurence Duchien is member of the CNRS CoCNRS section 6 committee, and of the "bureau" of this committee. She chairs the scientific and technical council of the PICOM business and research cluster for the retail industry. She is member of the scientific councils of IMT Atlantique (Nantes), Labex CIMI (Toulouse), IRT SystemX (Saclay). She is member of the steering committees of the European Conference on Software Architecture (ECSA) and of the Systems and Software Product Lines Conference (SPLC). She was guest editor for a special issue on Configurable Systems of the Springer Empirical Software Engineering journal. She was member of the committee for the Women in Science and Engineering Award from IEEE TCSE. She was member of a panel on soft skills for PhD students in humanities and social sciences for MESHS Lille. She is in charge of Career development & Intersectoral secondments in the PEARL Project ("Programme for EArly-stage Researchers in Lille") at I-SITE Université Lille Nord Europe.
Philippe Merle is elected member of the Inria scientific board (CS), elected member of the Inria technical committee (CTI), elected member of the Inria national committee on "hygiène, de sécurité et des conditions de travail" (CNHSCT), president of the CUMI (Comité des Utilisateurs des Moyens Informatiques), permanent secretary of the CLHSCT (Comité Local d’Hygiène, de Sécurité et de Conditions de Travail), and elected member of the centre committee for the Inria Lille - Nord Europe research center. He is member of the steering committee of the Inria’s continuous integration service. He is member of SPI doctoral school council of University of Lille.
Romain Rouvoy is an elected member of the "bureau" of the French chapter of the ACM Special Interest Group in Operating Systems (SIGOPS / ASF) and elected member of the administrative council of SpecifCampus. He is a member of the steering committee of DAIS (International Conference on Distributed Applications and Interoperable Systems).
Walter Rudametkin Ivey is member of the CDT (Comité de Développement Technologique) of the Inria Lille - Nord Europe research center.
Lionel Seinturier is member of the BSC (Bureau Scientifique du Centre) for the Inria Lille - Nord Europe research center. Until August 2019, he was Scientific Advisor for the evaluation of ICT research laboratories at the Hcéres. Since December 2019, he has been chair of CNU27, the Computer Science section of the "Conseil National des Universités".
Simon Bliudze, in addition to his tenured junior research position at Inria, is a part-time lecturer (chargé d'enseignement) at École Polytechnique, Palaiseau, France, in the Department of Computer Science (DIX).
INF411: Foundations of programming and algorithms, 40h, 2nd year of the Engineering cycle
INF442: Big data processing, 40h, 2nd year of the Engineering cycle
Pierre Bourhis, in addition to his tenured junior research position at CNRS, is a part-time lecturer (chargé d'enseignement) in the Data Science track at École Polytechnique, Palaiseau, France, in the Department of Computer Science (DIX).
Info553: Databases, 18h, Cycle Polytechnique
Modal: Giant graphs, 36h
Laurence Duchien teaches at the Université de Lille in the FST faculty. She is project leader for doctoral studies at Université de Lille.
Software engineering project, 60h, Level M2, Master MIAGE FI
Software engineering project, 50h, Level M2, Master MIAGE FC/FA
Research initiation, 20h, Level M2, Master of Computer Science
Clément Quinton teaches at the Université de Lille in the FST faculty.
Introduction to Computer Science, 46.5h, Level L1, Licence of Computer Science
Object-oriented programming, 36h, Level L2, Licence of Computer Science
Object-oriented design, 42h, Level L3, Licence of Computer Science
Design of distributed applications, 42h, Level M1, Master of Computer Science
Advanced design of distributed applications, 37.5h, Level M2, Master MIAGE
Infrastructure and frameworks for the Internet, 33.75h, Level M2, Master of Computer Science
Software product lines, 7.5h, Level M2, Master of Computer Science
Internship and project supervision, 30h, Licence and Master of Computer Science
Romain Rouvoy teaches at the Université de Lille in the FST faculty. He heads the Master of Computer Science program at the Université de Lille.
Design of distributed applications, 12h, Level M1, Master of Computer Science
Object-oriented design, 4h, Level L3, Licence of Computer Science
Project supervision, 20h, Level M2, Master of Computer Science
Walter Rudametkin Ivey teaches at the Université de Lille in the Polytech engineering school.
GIS4 Object-oriented programming, 32h
GIS4 Software architectures, 26h
GIS2A3 (apprenticeship) Object-oriented programming project, 24h
IMA2A4 (apprenticeship) Object-oriented design and modeling, 24h
IMA3 Advanced programming, 62h
GBIAAL4 Databases, 22h
GIS5 Project supervision, 42h
GIS2A (apprenticeship) Apprentice supervision, 28h
Lionel Seinturier teaches at the Université de Lille in the FST faculty. Until July 2019, he headed the Computer Science Department at the Faculty of Science and Technology of the Université de Lille.
Design of distributed applications, 50h, Level M1, Master MIAGE
Infrastructure and frameworks for the Internet, 70h, Level M2 E-Services IAGL TIIR, Master of Computer Science
PhD: Antoine Vastel, Cartography of the quality of experience for mobile Internet access, Université de Lille, October 2019, Romain Rouvoy & Walter Rudametkin.
PhD: Benjamin Danglot, Automatic Unit Test Amplification for DevOps, Université de Lille, November 2019, Martin Monperrus & Lionel Seinturier.
PhD: Lakhdar Meftah, Cartography of the Quality of Experience for Mobile Internet Access, Université de Lille, December 2019, Romain Rouvoy, co-supervision with Isabelle Chrisment (Inria Madynes).
PhD: Sarra Habchi, Privacy-sensitive context monitoring for crowdsourced software development, Université de Lille, December 2019, Romain Rouvoy.
PhD in progress: Mohammed Chakib Belgaid, Sustainable software development: towards end-to-end energy optimization of software systems, January 2018, Romain Rouvoy & Lionel Seinturier.
PhD in progress: Vikas Mishra, Collaborative Strategies to Protect Against Browser Fingerprinting, October 2018, Romain Rouvoy & Walter Rudametkin & Lionel Seinturier.
PhD in progress: Marion Tommasi, Collaborative Data-centric Workflows: Towards Knowledge Centric Workflows and Integrating Uncertain Data, October 2018, Pierre Bourhis & Lionel Seinturier.
PhD in progress: Antonin Durey, Leveraging Browser Fingerprinting to Fight Fraud on the Web, October 2018, Romain Rouvoy & Walter Rudametkin & Lionel Seinturier.
PhD in progress: Sacha Brisset, Automatic Spotting and Fixing of Recurrent User Experience Issues: Detecting and Fixing Anomalies by Applying Machine Learning on User Experience Data, November 2018, Lionel Seinturier & Romain Rouvoy & Renaud Pawlak & Yoann Couillec.
PhD in progress: Zakaria Ournani, Software eco-design: modeling the energy efficiency of software and designing tools to measure and reduce its energy consumption, November 2018, Romain Rouvoy.
PhD in progress: Zeinab Abou Khalil, November 2017, Laurence Duchien & Clément Quinton, co-supervision with Tom Mens (University of Mons, Belgium).
PhD in progress: Guillaume Fieni, GreenData: towards efficient and eco-responsible processing of large masses of digital data, October 2017, Romain Rouvoy & Lionel Seinturier.
PhD in progress: Trình Lê Khánh, Design of Correct-by-Construction Self-Adaptive Cloud Applications using Formal Methods, October 2019, Philippe Merle & Simon Bliudze.
Simon Bliudze
Wajeb Saab (École polytechnique fédérale de Lausanne, Switzerland), examiner
Rim El Ballouli (Université Grenoble Alpes), examiner
Laurence Duchien
Stepan Shevtsov (Linnaeus University, Sweden and KU Leuven, Belgium), reviewer
HDR, Marius Bilasco (Université de Lille), chair
Thibault Raffaillac (Université de Lille), chair
Romain Rouvoy
Hanyang Cao (Université de Bordeaux), reviewer
Mousa Hayam (INSA Lyon), reviewer
Stefan Contiu (Université de Bordeaux), reviewer
Adrien Luxey (Université de Rennes 1), reviewer
Lionel Seinturier
HDR, Thomas Polacsek (Université de Toulouse), reviewer
Xinxiu Tao (Université Grenoble Alpes), reviewer
Benjamin Benni (Université Nice Sophia Antipolis), reviewer
Pierre Laperdrix gave a talk in May at the RuhrSec IT security conference, organized by the Ruhr University Bochum. The conference aims to popularize security topics for a non-expert audience and keep them abreast of the latest trends in computer security.