The CIDRE team is concerned with security and privacy issues. Our long-term ambition is to contribute to the construction of widely used systems that are trustworthy and respectful of privacy, even when parts of these systems are targeted by attackers.
With this objective in mind, the CIDRE team focuses mainly on the three following topics:
In many aspects of our daily lives, we rely heavily on computer systems, many of which are based on massively interconnected devices that support a population of interacting and cooperating entities. As these systems become more open and complex, accidental and intentional failures become much more frequent and serious. We believe that the purpose of attacks against these systems is expressed at a high level (compromise of sensitive data, unavailability of services). However, these attacks are often carried out at a very low level (exploitation of vulnerabilities by malicious code, hardware attacks).
The CIDRE team is specialized in the defense of computer systems. We argue that to properly protect these systems we must have a complete understanding of the attacker's concrete capabilities. In other words, to defend properly we must understand the attack.
The CIDRE team therefore strives to have a global expertise in information systems: from hardware to distributed architectures. Our objective is to highlight security issues and propose preventive or reactive countermeasures in widely used and privacy-friendly systems.
The natural field of application of the CIDRE team is the security of computer systems. The algorithms and tools produced in the team are regularly transferred to industry through various collaborations such as Cifre contracts, start-ups, or Inria licenses.
This year, several works carried out in the team were published in the best conferences of our field. Among them, we would like to highlight the following publications:
We were very proud to receive an outstanding paper award at the 34th Euromicro Conference on Real-Time Systems (ECRTS 2022) for the article entitled "RT-DFI: Optimizing Data-Flow Integrity for Real-Time Systems". This work is the result of a fruitful collaboration between Nicolas Bellec and Isabelle Puaut from the INRIA PACAP research team, Simon Rokicki from the INRIA TARAN research team, and Guillaume Hiet and Frédéric Tronel from the CentraleSupélec/INRIA CIDRE research team.
To fully understand the various methodologies of cyber attacks, our study has a two-fold focus. On the one hand, we are interested in providing security analysts with tools for quickly capturing the scope of an attack in progress. On the other hand, we are interested in investigating new horizons of emerging threats.
Advanced Persistent Threat (APT) attacks are surgically targeted attacks led by advanced attackers, who constantly adapt their Tactics, Techniques, and Procedures (TTP). Identifying patterns in the modus operandi of attackers is an essential requirement in the study of Advanced Persistent Threats. Our community is hampered both by the lack of formalism to precisely describe these attacks and by the lack of accurate, relevant, and representative datasets of these current threats. In 2 we propose a formal model of an attacker’s tactical progression during the network propagation phase. This formalization allows us to describe the PWNJUTSU experiment unequivocally. In this experiment, 22 Red Teamers attacked a vulnerable infrastructure to compromise machines and steal secret flags. We had three distinct goals in this experiment: first, to observe how professional attackers progress in such an infrastructure; second, to build a dataset of various attacks targeting the same infrastructure; and last, to test the relevance of our model on a whole attack campaign. The resulting dataset is available online. It contains all participants’ event logs (system and network), including the reference environment’s event logs. A search engine is also provided to peek into the dataset.
Research on Android malware detection based on Machine learning has been prolific in recent years.
In 5, in collaboration with the TruX team at the University of Luxembourg, we detail a large-scale evaluation of four state-of-the-art approaches, showing that their achieved performance fluctuates when applied to different datasets.
Combining existing approaches appears as an appealing method to stabilise performance. We therefore proceed to empirically investigate the effect of such combinations on the overall detection performance.
In our study, we evaluated 22 methods to combine feature sets or predictions from the state-of-the-art approaches. Our results showed that no method has significantly enhanced the detection performance reported by the state-of-the-art malware detectors. Nevertheless, the performance achieved is on par with the best individual classifiers for all settings. Overall, we conduct extensive experiments on the opportunity to combine state-of-the-art detectors.
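As an illustration of one of the simplest ways to combine detector predictions, the following sketch implements majority voting over binary outputs. The detector outputs are toy values for illustration, not results from the study.

```python
# Minimal sketch: combining binary malware detectors by majority vote.
# Each inner list holds one detector's predictions (1 = malware).

def majority_vote(predictions):
    """Combine per-detector binary predictions by strict majority."""
    return [1 if sum(votes) * 2 > len(votes) else 0
            for votes in zip(*predictions)]

# Three toy detectors scoring four samples:
d1 = [1, 0, 1, 0]
d2 = [1, 1, 0, 0]
d3 = [0, 1, 1, 0]
print(majority_vote([d1, d2, d3]))  # [1, 1, 1, 0]
```

Other combination methods evaluated in such studies (e.g., merging feature sets before training) operate earlier in the pipeline, but prediction-level voting is the simplest baseline.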
Our main conclusion is that combining state-of-the-art malware detectors leads to a stabilization of the detection performance, and a research agenda on how they should be combined effectively is required to boost malware detection. All artifacts of this study form a dataset of
Android security has received a lot of attention over the last decade, especially when working on malware analysis. The CIDRE team have also a long past work on malware analysis. When presenting our contributions in conferences or discussing with other researchers of the community we had the feelings that the datasets that are used in the literature were of low quality: if the datasets are outdated or not representative of the studied population, the conclusions may be flawed. In 4, we have investigated the irregularities of datasets used in experiments, questioning the validity of the performances reported in the literature. We have developed a new method for debiasing datasets. With this method we have rebuilt new dataset, more up-to-date and with less bias than in the literature. This dataset has been released on the artifact repository of IEEE, as an independent contribution for ensuring the reproducibilty of our experiments and to serve as new start point for future works. In particular, this dataset can be used by other reasearchers contributing in Android malware detection or classification with machine learning algorithms.
Among the difficulties encountered in building datasets to evaluate intrusion detection tools, a tricky part is the process of labelling the events as malicious or benign. Labelling correctness is paramount for the quality of the evaluation of intrusion detection systems, but it is often taken as ground truth by practitioners and rarely verified. Another difficulty lies in the correct capture of the network packets: if the capture is flawed, the characteristics of the network flows generated from it can be altered and lead to false results.
In this paper, we present several flaws we identified in the labelling of the CICIDS2017 dataset and in its traffic capture, such as packet misordering, packet duplication, and attacks that were performed but not correctly labelled. We also assess the impact of these corrections on the evaluation of supervised intrusion detection approaches.
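The packet-level capture flaws mentioned above can be checked mechanically. The sketch below assumes a simplified packet representation (timestamp, flow identifier, payload hash), not the actual CICIDS2017 format, and flags duplicated packets and timestamp misordering.

```python
# Sketch: spotting two capture flaws discussed above -- duplicated
# packets and timestamp misorder -- in a simplified packet list.

def find_capture_flaws(packets):
    """packets: list of (timestamp, flow_id, payload_hash) tuples.
    Returns indices of duplicated and misordered packets."""
    seen = set()
    duplicates, misordered = [], []
    last_ts = float("-inf")
    for i, (ts, flow, digest) in enumerate(packets):
        if (ts, flow, digest) in seen:
            duplicates.append(i)
        seen.add((ts, flow, digest))
        if ts < last_ts:
            misordered.append(i)
        last_ts = max(last_ts, ts)
    return duplicates, misordered

pkts = [(1.0, "A", "h1"), (1.2, "A", "h2"), (1.1, "B", "h3"),
        (1.2, "A", "h2")]  # one out-of-order packet, one duplicate
print(find_capture_flaws(pkts))  # ([3], [2])
```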
Despite fruitful achievements made by unsupervised machine learning-based anomaly detection for network intrusion detection systems, these approaches are still prone to high false alarm rates, and it is still difficult to reach very high recall. In 2020, our team proposed Sec2graph (Leichtnam's PhD thesis), an unsupervised approach applied to security object graphs that exhibited interesting results on single-step attacks. The graph representation and the embedding allow for better detection since they create qualitative features.
In this paper, we present new experiments to assess the performance of this approach for detecting APT attacks. We achieve better detection performance than the original work's baseline detection methods on the DAPT2020 dataset.
Network Intrusion Detection Systems (NIDSes) evaluation requires background traffic. However, real background traffic is hard to collect. We hence rely on synthetic traffic generated especially for this task. The quality of the generated traffic has to be evaluated according to some clearly defined criteria.
In this paper, we show how to adapt quality assessment solutions proposed for other fields of data generation, such as image or text generation, to network traffic. We discuss the criteria that allow evaluating the quality of generated network traffic and propose functions to evaluate these criteria.
Alert correlation is a set of techniques used to process alerts raised by various intrusion detection systems in order to eliminate redundant alerts, reduce the number of false alerts, and reconstruct attack scenarios. In Industrial Control Systems, the presence of a physical process and the associated specific threats have led to heterogeneous alerts due to the development of multi-domain detection techniques. Some detection approaches rely solely on observations in the cyber domain while other approaches monitor the physical process. The two approaches are complementary, but the information carried by the two types of alerts is different. In 6, we combine the alerts from physical-domain intrusion detection with more classical cyber-domain intrusion detection alerts. We develop an alert correlation approach using an alert enrichment that allows mapping physical-domain alerts into the cyber domain. We also propose a specific alert selection for correlation that adapts to the state of the physical process by dynamically adjusting the size of the selected alert window. We publicly released all the datasets generated and used in our experiments.
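A minimal sketch of the dynamic alert-window idea follows, assuming a hypothetical mapping from physical-process states to window sizes; the actual approach adapts the window differently, so this is illustration only.

```python
# Sketch: dynamically sizing the alert-selection window according to
# the physical-process state. The state-to-size table is hypothetical.

WINDOW_SIZES = {"steady": 10, "transient": 30, "critical": 60}  # seconds

def select_alerts(alerts, now, process_state):
    """alerts: list of (timestamp, alert) pairs; keep those falling
    inside the state-dependent window ending at `now`."""
    window = WINDOW_SIZES.get(process_state, 10)
    return [a for ts, a in alerts if now - window <= ts <= now]

alerts = [(5, "cyber:scan"), (40, "phys:level_high"), (58, "cyber:write")]
print(select_alerts(alerts, 60, "steady"))    # ['cyber:write']
print(select_alerts(alerts, 60, "critical"))  # all three alerts
```

A larger window during critical process states lets the correlator relate slow physical-domain effects to earlier cyber-domain alerts.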
The fast improvement of Machine-Learning (ML) methods gives rise to new attacks on Information Systems (IS). Simultaneously, ML also creates new opportunities for network intrusion detection. Early network intrusion detection is a valuable asset for IS security, as it fosters early deployment of countermeasures and reduces the impact of attacks on system availability. In 8 we propose and study an anomaly-based Network Intrusion Detection System (NIDS) based on Tangled Program Graph (TPG) ML. Secure-GEGELATI learns how to detect intrusions from IS-produced traces and is optimised to fit the requirements of intrusion detection. The study evaluates the capacity of Secure-GEGELATI to act as a continuously learning, real-time, and low-energy NIDS when executed in an embedded network probe. We show that a TPG is capable of switching between training and inference phases, with new training phases enriching the probe's knowledge with limited degradation of its previous intrusion detection capabilities. The Secure-GEGELATI software reaches 8x the energy efficiency of an optimised Random Forest (RF)-based Intrusion Detection System (IDS) on the same platform. It is capable of processing 13.2k connections per second with a peak power of less than 3.3 W on an embedded platform, and processes the CIC-IDS 2017 dataset in real time while detecting
In the context of the European SPARTA project, we have proposed an anomaly-based intrusion detection tool where the normal behavior of an application is described using models based on automata and invariants. First, the solution was evaluated using a distributed application (XtreemFS, a distributed file system). In this case, the events observed on different machines form a partially ordered set (POSET), which is the ideal target of our solution. Then, the tool was adapted to analyze network traffic (both Pcap and CSV files). Here, observed events are totally ordered. The results of these evaluations are summarized in the published SPARTA deliverables.
Prior works in industrial intrusion detection follow the general approach of building AI-driven predictive models that learn from historical data. Such models characterize previous attack events and use this knowledge to predict future ones. These systems typically require collecting events from different industrial organizations and storing them in a centralized AI-as-a-Service (AIaaS) platform to train an AI-driven prediction model. However, these records often include privacy-sensitive metadata, such as machine IDs, event descriptions, timestamps, and system hardening actions taken. Moreover, disclosing them can reveal sensitive information about security policies and security postures. Confidentiality concerns of industrial customers hence make it unrealistic to deploy such a centralized learning process for AI-based security event prediction methods. In 15, we investigate the feasibility of using privacy-friendly collaborative learning to benefit from participating organizations’ knowledge without requiring data disclosure. In particular, we employ Federated Learning to train AI models collaboratively, by first training AI models locally in each participating organization and then aggregating the local model updates. This Federated AI-based intrusion detection method is tested using real-world network attack data stored in a distributed way by 20 different industrial participants. We analyze the detection performance of this Federated AI system in practice, including measuring the detection performance and the communication and computational cost of the Federated AI system. Furthermore, we discuss how the distribution drift of attack behaviors across different industrial users affects the accuracy of the Federated AI system.
We demonstrate how to measure the contribution of different participating organizations to the collectively trained intrusion detection model and evaluate the benefits each organization gains from joining the federation network. Finally, we unveil that distributed data poisoning attacks against the Federated AI system are effective at undermining its robustness and decreasing the attack detection precision by a significant amount. While state-of-the-art countermeasures are claimed to be effective, we demonstrate that carefully tuned data poisoning attacks can easily bypass the defense measures and inject decision bias into the Federated AI system without triggering any alert from the hygiene check. Besides, we show that data reconstruction attacks can be performed against the participating organizations of the Federated AI system by leaking the statistical profiles of their privately owned attack data used for AI model training. However efficient the Federated AI system is, our study thus also raises a severe concern over the potential risk of system integrity violations and data privacy leaks inside this Federated AI system.
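For reference, the aggregation step at the heart of such a Federated AI system can be sketched with Federated Averaging (FedAvg), where local model updates are averaged in proportion to each organization's data volume. The weights and sample counts below are toy values, and real systems aggregate neural-network tensors rather than flat lists.

```python
# Sketch: the Federated Averaging (FedAvg) aggregation step used to
# combine local model updates without sharing raw event data.

def fed_avg(local_weights, sample_counts):
    """Weighted average of per-organization model weights,
    proportional to each organization's local sample count."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
            for i in range(dim)]

# Two organizations, one with twice the data:
org_a = [1.0, 0.0]
org_b = [0.0, 3.0]
print(fed_avg([org_a, org_b], [200, 100]))  # ~ [0.667, 1.0]
```

The weighting by sample count is also what a data poisoning participant can abuse: inflating its local influence shifts the aggregated model without raw data ever being inspected.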
Machine Learning-as-a-Service (MLaaS) systems have been widely developed for cyber security-critical applications, such as detecting network intrusions and fake news campaigns. Despite their effectiveness, their robustness against adversarial attacks is one of the key trust concerns for MLaaS deployment. Our work 16 is thus motivated by the need to assess the adversarial robustness of the Machine Learning models residing at the core of these security-critical applications with categorical inputs. Previous research efforts on assessing model robustness against manipulation of categorical inputs are specific to use cases and heavily depend on domain knowledge, or require white-box access to the target ML model. Such limitations prevent the robustness assessment from being offered as a domain-agnostic service to various real-world applications. We propose a provably optimal yet computationally highly efficient adversarial robustness assessment protocol for a wide band of ML-driven cyber security-critical applications. We demonstrate the use of this domain-agnostic robustness assessment method with a substantial experimental study on fake news detection and intrusion detection problems.
Identity Management Systems (IMS) allow users to prove characteristics about themselves to multiple service providers. IMS evolved from impractical, site-by-site authentication to versatile, privacy-enhancing Self-Sovereign Identity (SSI) frameworks. SSI frameworks often use Anonymous Credential schemes to provide user privacy, and more precisely unlinkability between uses of these credentials. However, these schemes imply the disclosure of the identity of the Issuer of a given credential to any service provider, which can lead to information leaks. In 3, we have proposed a new Anonymous Credential scheme that allows a user to hide the Issuer of a credential, while being able to convince the service providers that they can trust the credential, in the absence of a trusted setup. We proved this new scheme secure under the Computational Diffie-Hellman and Decisional Diffie-Hellman assumptions, in the Random Oracle Model. We showed that this scheme is efficient enough to be used on laptops, and to be integrated into SSI frameworks or any other IMS.
Blockchain technology aims to replace traditional banking systems and manage the world’s economic data. However, the long-term feasibility of blockchain technology is hindered by the inability of existing blockchain protocols to prune the consensus data, leading to constantly growing storage and communication requirements. Kiayias et al. have proposed a blockchain protocol based on superblock Non-Interactive Proofs-of-Proof-of-Work (NIPoPoWs) as a mechanism to reduce the storage and communication complexity of blockchains to
We propose and analyze a new asynchronous rumor spreading protocol to deliver a rumor to all the nodes of a large-scale distributed network 7.
This spreading protocol relies on what we call a k-pull operation, with
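Under the assumption that a k-pull operation lets an uninformed node contact k nodes chosen uniformly at random and learn the rumor if at least one contact is informed, a toy simulation looks as follows. The synchronous round structure is a simplification for illustration; the actual protocol is asynchronous.

```python
import random

# Sketch: synchronous simulation of a pull-based rumor spreading
# process built on a hypothesised k-pull operation.

def spread_rounds(n, k, seed=0):
    """Return the number of rounds until all n nodes know the rumor."""
    rng = random.Random(seed)
    informed = {0}  # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        rounds += 1
        snapshot = frozenset(informed)  # state at the start of the round
        for node in range(n):
            if node not in snapshot:
                contacts = rng.sample(range(n), k)  # k random contacts
                if snapshot & set(contacts):
                    informed.add(node)
    return rounds

print(spread_rounds(n=100, k=2))  # rounds until everyone is informed
```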
The arrival of Bitcoin drove the shift to decentralized ecosystems through the exchange of transactions without intermediaries. However, the main challenges that permissionless blockchains need to face are scalability and security. Sycomore
Characterizing and assessing the adversarial vulnerability of classification models with categorical inputs is a practically important, yet rarely explored, research problem. Categorical data exist widely in cyber security applications. For example, dynamic and static features of malware samples contain many categorical run-time and static signature attributes, such as the types of API calls, the files and processes visited or created, and the generated network traffic. Adversarial attacks against ML-based security incident detection systems raise a severe concern over the trustworthiness of ML-based detection. Therefore, exploring the origin of adversarial risk over categorical data can deepen our understanding of the feasibility of adversarial threats against real-world cyber security applications.
Our work 10 addresses this challenge by first unveiling the impact factors of the adversarial vulnerability of classification models with categorical data, based on an information-theoretic adversarial risk analysis of the targeted classifier. Though certifying the robustness of such classification models is intrinsically an NP-hard combinatorial problem, our study shows that robustness certification can be solved via an efficient greedy exploration of the discrete attack space, for any measurable classifier with a mild smoothness constraint. Our proposed robustness certification framework is instantiated with deep neural network models applied to real-world safety-critical data sources. Our empirical observations confirm the impact of the key adversarial risk factors for categorical inputs.
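The greedy exploration idea can be sketched as follows: starting from an input classified as malicious, substitute one categorical feature value at a time with the choice that most decreases the classifier's score, until the decision flips or the perturbation budget is spent. The classifier, feature domains, and scoring function below are toy stand-ins, not the certification framework of the paper.

```python
# Sketch: greedy exploration of a categorical attack space.

def greedy_attack(x, domains, score, budget):
    """x: list of categorical values; domains[i]: allowed values for
    feature i; score: callable, > 0 means the 'malicious' class."""
    x = list(x)
    for _ in range(budget):
        base = score(x)
        if base <= 0:            # decision already flipped
            break
        best_drop, best_change = 0.0, None
        for i, domain in enumerate(domains):
            for v in domain:
                if v == x[i]:
                    continue
                x2 = x[:i] + [v] + x[i + 1:]
                drop = base - score(x2)
                if drop > best_drop:
                    best_drop, best_change = drop, (i, v)
        if best_change is None:  # no substitution helps at this point
            break
        i, v = best_change
        x[i] = v
    return x, score(x) <= 0      # perturbed input, attack-success flag

# Toy classifier: counts "suspicious" API-call feature values.
suspicious = {"CreateRemoteThread", "WriteProcessMemory"}
score = lambda x: sum(v in suspicious for v in x) - 1.5
x0 = ["CreateRemoteThread", "WriteProcessMemory", "ReadFile"]
domains = [["CreateRemoteThread", "LoadLibrary"],
           ["WriteProcessMemory", "VirtualAlloc"],
           ["ReadFile"]]
print(greedy_attack(x0, domains, score, budget=2))
```

If no single substitution decreases the score, the greedy search stops, which is the intuition behind using such an exploration to certify robustness around an input.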
In this work 17, we focus on exploring potential data privacy leaks when deploying graph-based machine learning over privately owned data, such as molecules invented to develop new medicines and social networks. Unveiling private information about the training data directly violates the intellectual property policy of any entity using the Machine Learning service.
Previous security research efforts around graphs have focused exclusively on either (de-)anonymizing graphs or understanding the security and privacy issues of graph neural networks. Little attention has been paid to understanding the privacy risks of integrating the output of graph embedding models (e.g., node embeddings) into complex downstream machine learning pipelines. In this paper, we fill this gap and propose a novel model-agnostic graph recovery attack that exploits the implicit graph structural information preserved in the embeddings of graph nodes. We show that an adversary can recover edges with decent accuracy by only gaining access to the node embedding matrix of the original graph, without interacting with the node embedding models. We demonstrate the effectiveness and applicability of our graph recovery attack through extensive experiments.
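In the same spirit, a simplistic edge-recovery heuristic can rank node pairs by embedding similarity and predict that the most similar pairs are connected. The actual attack is more sophisticated; cosine similarity and the toy embeddings below are illustrative stand-ins.

```python
import math

# Sketch: similarity-based edge recovery from a node embedding matrix.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recover_edges(embeddings, n_edges):
    """Predict the n_edges node pairs with the most similar embeddings."""
    pairs = [(cosine(embeddings[i], embeddings[j]), i, j)
             for i in range(len(embeddings))
             for j in range(i + 1, len(embeddings))]
    pairs.sort(reverse=True)
    return [(i, j) for _, i, j in pairs[:n_edges]]

# Toy embeddings: nodes 0 and 1 are near-parallel, node 2 is orthogonal.
emb = [[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]]
print(recover_edges(emb, 1))  # [(0, 1)]
```

The heuristic works because embedding models are trained to place connected nodes close together, which is exactly the structural information the attack exploits.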
The emergence of Real-Time Systems with increased connections to their environment has led to a greater demand for security in these systems. Memory corruption attacks, which modify memory to trigger unexpected executions, are a significant threat against applications written in low-level languages. Data-Flow Integrity (DFI) is a protection that verifies that only a trusted source has written any loaded data. The overhead of such a security mechanism remains a major issue that limits its adoption. In 11, we present RT-DFI, a new approach that optimizes Data-Flow Integrity to reduce its overhead on the Worst-Case Execution Time. We model the number and order of the checks and use an Integer Linear Programming solver to optimize the protection on the Worst-Case Execution Path. Our approach protects the program against many memory-corruption attacks, including Return-Oriented Programming and Data-Only attacks. Moreover, our experimental results show that our optimization reduces the overhead by 7% on average compared to a state-of-the-art implementation.
DGA (2021-2024)
Vincent Raulin’s PhD focuses on using Machine Learning approaches to improve malware detection and classification based on dynamic analysis traces, by extracting feature representations built with the knowledge of malware analysis experts. This representation aims at capturing the semantics of a program (i.e., what resources it accesses and what operations it performs on them) in a platform-independent fashion, by replacing implementation particularities (e.g., system call number 2) with higher-level operations (e.g., opening a file). This representation could notably provide semantic explanations of malware activity and enable explainable malware detection and malware family classification.
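The abstraction step described above can be illustrated with a small lookup table mapping platform-specific trace events to platform-independent operations. The table below is a hypothetical excerpt, not the representation developed in the thesis; the syscall numbers are the standard Linux x86-64 ones.

```python
# Sketch: abstracting platform-specific syscall numbers into
# higher-level, platform-independent operations.

SYSCALL_TO_OPERATION = {
    ("linux-x86_64", 2):  "open_file",    # open(2)
    ("linux-x86_64", 0):  "read_file",    # read(2)
    ("linux-x86_64", 41): "open_socket",  # socket(2)
}

def abstract_trace(platform, syscall_numbers):
    """Map a raw syscall trace to a list of high-level operations."""
    return [SYSCALL_TO_OPERATION.get((platform, n), "unknown")
            for n in syscall_numbers]

trace = [2, 0, 41, 99]  # raw dynamic-analysis trace (99 is unmapped)
print(abstract_trace("linux-x86_64", trace))
# ['open_file', 'read_file', 'open_socket', 'unknown']
```

Feeding such abstracted traces, rather than raw syscall numbers, to a classifier is what makes the learned features transferable across platforms and interpretable by analysts.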
Ministry of Defence: Characterization of an attacker
Aïmad Berady started his PhD thesis in November 2018 in the context of a contract between CentraleSupélec and the French Ministry of Defence. His work aims to highlight the characteristics of an attacker performing a targeted, long-term attack on an information system.
CEA:
Mohamed-Aimen Djari started his PhD thesis in October 2019 in the context of a contract between the CNRS and the CEA. His work consists of evaluating the security and scalability of permissionless crypto-currency blockchains. The main objective of this thesis is to implement a proof-of-stake permissionless blockchain with suitable incentive mechanisms and robust mechanisms to defend the system against Sybil attacks.
ANSSI:
Matthieu Baty started his PhD in October 2020 in the context of a collaboration between Inria and the ANSSI. In this project, we want to formally specify hardware-based security mechanisms of a RISC-V processor and prove that they satisfy a well-defined security policy. In particular, we would like to use the Coq proof assistant to formally specify and verify the processor. Our goal is also to extract an HDL description of the certified processor, which could be used to synthesize the processor on an FPGA board.
ANSSI:
Lucas Aubard started his PhD in October 2022 in the context of a collaboration between Inria and the ANSSI. The objective of this thesis is to improve the existing knowledge on reassembly policies, to design mechanisms to automate IDS configuration and to improve the application of these policies within IDS/IPS to increase their detection capabilities in specific contexts such as cloud computing.
DGA:
Pierre-Victor Besson has been financed by a DGA-PEC grant since October 2020. He works on the automatic generation of attack scenarios to design deceptive honeynets.
Malizen:
Romain Brisse's thesis has been financed since January 2021 by Malizen, an Inria start-up from the CIDRE team. His thesis focuses on recommendation systems for visual investigation software.
Hackuity:
Natan Talon started his PhD in October 2021 in the context of a collaboration with the company Hackuity. The main objective of this thesis is to be able to assess whether an information system is likely to be vulnerable to an attack. This attack may have been observed in the past or inferred automatically from other attacks.
Anurag Jain from the International Institute of Information Technology, Hyderabad, India visited the team from the 1st to the 31st of May. During this stay, together with Emmanuelle Anceaume, he worked on improving the Non-Interactive Proofs-of-Proof-of-Work (NIPoPoWs) mechanism to achieve security against a Byzantine adversary that may control up to half of the total computational power, matching the guarantees of non-NIPoPoW protocols.
Pierre-François Gimenez stayed for three months at CISPA Helmholtz Center for Information Security in Saarbrücken, Germany. During this stay, Pierre-François worked with Pr. Mario Fritz on malware analysis with machine learning. More specifically, this project is interested in the robustness of malware detectors based on machine learning against adversarial attacks. Mario Fritz brings his experience and knowledge of robust machine-learning-based detectors and certifiable machine learning. Pierre-François brings his expertise on malware analysis.
SPARTA project on cordis.europa.eu
PEPR DefMal is a collaborative ANR project involving CentraleSupélec, Rennes University, Lorraine University, Sorbonne Paris Nord University, CEA, CNRS, Inria and Eurecom. Malware is affecting government systems, critical infrastructures, businesses, and citizens alike, and regularly makes headlines in the press. Malware extorts money (ransomware), steals data (banking, medical), destroys information systems, or disrupts the operation of industrial systems. The fight against malware is a national and European security issue that requires scientific advances to design new responses and anticipate future attack methods. The aim of the project DefMal is to study malicious programs, whether they are malware, ransomware, botnet, etc. The first objective is to develop new approaches to analyze malicious programs. This objective covers the three aspects of the fight against malware: (i) Understanding (ii) Detection and (iii) Forensics. The second objective of the project is the global understanding of the malware ecosystem (modes of organization, diffusion, etc.) in an interdisciplinary approach involving all the actors concerned.
The security assessment of digital systems relies on compliance and vulnerability analyses to provide recognized cybersecurity assurances. The SECUREVAL project of PEPR Cybersecurity aims to design new tools around new digital technologies to verify the absence of hardware and software vulnerabilities and achieve the required compliance proofs. These developments are based on a double approach, first theoretical and founded on the French school of symbolic reasoning, then applied and anchored in the practice of tool development and security assessment techniques. In addition, by exploring new techniques for security assessments, this project will also allow France to remain among the world leaders in assessment capabilities by anticipating the evolution of international certification schemes. Within this project's framework, our contribution concerns tasks 4.4, Formal analysis and models at the software-hardware boundary (led by Guillaume Hiet), and 3.2, Vulnerability analysis tools for binary code (led by Frédéric Tronel). Two Ph.D. students and one postdoc funded by this project will start between 2023 and 2025.
PEPR SuperviZ is a collaborative ANR project involving CentraleSupélec, Eurecom, Institut Mines-Télécom, Institut Polytechnique de Grenoble, Rennes University, Lorraine University, CEA, CNRS and Inria. The digitalization of all infrastructures makes it almost impossible today to secure all systems, as it is too complex and too expensive. Supervision seeks to reinforce preventive security mechanisms and to compensate for their inadequacies. Supervision is fundamental in the general context of enterprise systems and networks, and is just as important for the security of cyber-physical systems. Indeed, with "objects" that should eventually be all, or almost all, connected, the attack surface increases significantly. This makes security even more difficult to implement. The increase in the number of components to be monitored, as well as the growing heterogeneity of these objects' communication, storage and computation capacities, makes security supervision more complex.
Byblos is a collaborative ANR project involving Rennes university and IRISA (CIDRE and WIDE research teams), Nantes university (GDD research team), and Insa Lyon, LIRIS (DRIM research team). This project aims at overcoming performance and scalability issues of blockchains, that are inherent to the total order that blockchain algorithms seek to achieve in their operations, which implies in turn a Byzantine-tolerant agreement. To overcome these limitations, this project aims at taking a step aside, and exploiting the fact that many applications – including cryptocurrencies – do not require full Byzantine agreement, and can be implemented with much lighter, and hence more scalable and efficient, guarantees. This project further argues that these novel Byzantine-tolerant applications have the potential to power large-scale multi-user online systems, and that in addition to Byzantine Fault Tolerance, these systems should also provide strong privacy protection mechanisms, that are designed from the ground up to exploit implicit synergies with Byzantine mechanisms.
BC4SSI is a JCJC ANR project led by Romaric Ludinard (SOTERN), involving the SOTERN and CIDRE research teams. Self-sovereign identities (SSI) are digital identities that are managed in a decentralized manner. This technology allows users to self-manage their digital identities without depending on third-party providers to store and centrally manage the data, including the creation of new identities. Implementing SSI requires great care since identities are more than simple identifiers: they need to be checked by the service provider via, for instance, verifiable claims. Such requirements make blockchain technology a prime candidate for deploying SSI and storing verifiable claims. BC4SSI aims at studying the weakest synchrony assumptions enabling SSI deployment in a public blockchain. Among the different existing challenges, BC4SSI will address the following scientific questions: alternatives to PoW security proofs, lightweight replication, scalability and energy consumption.
Priceless is a collaborative CominLabs project involving Rennes University with IRISA (CIDRE and WIDE research teams), and IODE (Institut de l'ouest: droit et Europe), and Nantes university (GDD research team). Promoters of blockchain-based systems such as cryptocurrencies have often advocated for the anonymity these provide as a pledge of privacy protection, and blockchains have consequently been envisioned as a way to safely and securely store data. Unfortunately, the decentralized, fully-replicated and unalterable nature of the blockchain clashes with both French and European legal requirements on the storage of personal data, on several aspects such as the right of rectification and the preservation of consent. This project aims to establish a cross-disciplinary partnership between Computer Science and Law researchers to understand and address the legal and technical challenges associated with data storage in a blockchain context.
In the ANR TrustGW project, we consider a system composed of IoT objects connected to a gateway. This gateway is, in turn, connected to one or more cloud servers. The architecture of the gateway, which is at the heart of the project, is heterogeneous (software-hardware), composed of a baseband processor, an application processor, and hardware accelerators implemented on an FPGA. A hypervisor makes it possible to share these resources and allocate them to different virtual machines. TrustGW is a collaborative project between the ARCAD team from Lab-STICC, the ASIC team from IETR, and the CIDRE team from IRISA. The project addresses three main challenges: (1) to define a heterogeneous, dynamically configurable and trusted gateway architecture, (2) to propose a trusted hypervisor able to deploy virtual machines on a heterogeneous software-hardware architecture with full virtualization of the resources, and (3) to secure the applications running on the gateway. Within this project's framework, the CIDRE team's contribution focuses mainly on the last challenge, particularly through the PhD of Lionel Hemmerlé (2022-2025). Guillaume Hiet directs this PhD, which is co-supervised by Frédéric Tronel, Pierre Wilke, and Jean-Christophe Prévotet. We will also explore hardware-assisted Dynamic Information Flow Tracking approaches for hybrid applications, which offload part of their computation to an FPGA.
The SECUTRACE research project aims to contribute to the dependability and cyber-security of systems by exploiting the trace generation mechanisms available in most consumer hardware platforms. These mechanisms are, for example, available in embedded systems using ARM processors (CoreSight technology) and on computers using Intel processors (Intel PT technology). SECUTRACE is a collaborative project between the CIDRE team at CentraleSupélec/Inria (France) and Volker Stolz’s team at Western Norway University of Applied Sciences (HVL). This work should ultimately reduce the defect rate in software, mitigate the effects of programming errors, and provide new ways to detect intrusions.
SCRATCHS is a collaboration between researchers in the fields of formal methods (EPICURE, Inria Rennes), security (CIDRE, CentraleSupélec Rennes), and hardware design (Lab-STICC). Our goal is to co-design a RISC-V processor and a compiler toolchain to ensure by construction that security-sensitive code is immune to timing side-channel attacks while running at maximal speed. We claim that co-design is essential for end-to-end security: cooperation between the compiler and the hardware is necessary to avoid timing leaks due to the micro-architecture with minimal overhead. In the context of this project, Guillaume Hiet directs the PhD of Jean-Loup Houdot, co-supervised by Pierre Wilke and Frédéric Besson, on security-enhancing compilation against side-channel attacks.
Jean-François Lalande was a reviewer for the PhD grants “RIN Doctorants” of Normandie University and for the IRGA 2022 call for projects of Grenoble Alpes University.
Ludovic Mé serves:
Guillaume Hiet served as a reviewer for an ANR project.
Since October 2022, Guillaume Hiet has been the co-chair of the Systems, Software and Network Security working group of the GDR Sécurité Informatique.
Ludovic Mé is deputy scientific director of Inria, in charge of the cyber security area.
The team participated in several recruitment committees:
Several team members are involved in initial and continuing education at CentraleSupélec, a French institute of research and higher education in engineering and science, and at ESIR (École Supérieure d'Ingénieurs de Rennes), the graduate engineering school of the University of Rennes 1.
In these institutions,
The teaching duties are summarized in Table 1.
Emmanuelle Anceaume co-supervises the following PhD students:
Guillaume Hiet co-supervises the following PhD students:
Jean-François Lalande co-supervises the following PhD students:
Ludovic Mé co-supervises the following PhD students:
Valérie Viet Triem Tong co-supervises the following PhD students:
Jean-François Lalande was a member of the PhD committee for the following PhD thesis:
Ludovic Mé was a member of the HDR committee (reviewer) for the following habilitation:
Ludovic Mé was a member of the PhD committee for the following PhD thesis:
Valérie Viet Triem Tong was a reviewer on the PhD committee for the following PhD thesis:
Valérie Viet Triem Tong was president of the jury for the following PhD thesis:
Emmanuelle Anceaume was a reviewer on the PhD committee for the following PhD theses:
Emmanuelle Anceaume was a member of the PhD committee for the following PhD theses: