2024 Activity Report
Project-Team KOPERNIC
RNSR: 201822841D
- Research center: Inria Paris Centre
- Team name: Keeping worst case reasoning for different criticalities
- Domain: Algorithmics, Programming, Software and Architecture
- Theme: Embedded and Real-time Systems
Keywords
Computer Science and Digital Science
- A1.1.1. Multicore, Manycore
- A1.1.2. Hardware accelerators (GPGPU, FPGA, etc.)
- A1.5. Complex systems
- A1.5.1. Systems of systems
- A1.5.2. Communicating systems
- A2.3. Embedded and cyber-physical systems
- A2.3.1. Embedded systems
- A2.3.2. Cyber-physical systems
- A2.3.3. Real-time systems
- A2.4.1. Analysis
- A2.4.3. Proofs
Other Research Topics and Application Domains
- B5.2. Design and manufacturing
- B5.2.1. Road vehicles
- B5.2.2. Railway
- B5.2.3. Aviation
- B5.2.4. Aerospace
- B6.6. Embedded systems
1 Team members, visitors, external collaborators
Research Scientist
- Liliana Cucu [Team leader, INRIA, Senior Researcher]
PhD Students
- Hadjer Bendellaa [INRIA, from Nov 2024]
- Ismail Hawila [StatInf, CIFRE]
- Marwan Wehaiba El Khazen [INRIA, from Sep 2024]
- Marwan Wehaiba El Khazen [StatInf, CIFRE, until Aug 2024]
Technical Staff
- Masum Bin Alam [INRIA, Engineer]
Interns and Apprentices
- Kyrylo Hrynevych [INRIA, Intern, from Jul 2024 until Aug 2024]
- Myriam Mabrouki [INRIA, Intern, from Jun 2024 until Aug 2024]
Administrative Assistants
- Christelle Guiziou [INRIA]
- Christelle Rosello [INRIA]
External Collaborators
- Slim Ben Amor [StatInf]
- Adriana Gogonel [StatInf]
- Kossivi Kougblenou [StatInf]
- Yves Sorel [Retired from INRIA]
2 Overall objectives
Kopernic members focus their research on the study of time for embedded communicating systems, also known as cyber-physical systems. More precisely, the team proposes a system-oriented solution to the problem of studying the time properties of the cyber components of a CPS. The solution is expected to be obtained by composing probabilistic and non-probabilistic approaches for CPSs. Moreover, statistical approaches are expected to validate existing hypotheses or propose new ones for the models considered by probabilistic analyses 3, 4.
The term cyber-physical systems refers to a new generation of systems with integrated computational and physical capabilities that can interact with humans through many new modalities 14. A defibrillator, a mobile phone, an autonomous car or an aircraft are all CPSs. Besides constraints like power consumption, security, size and weight, CPSs may have cyber components required to fulfill their functions within a limited time interval (a.k.a. time dependability), often imposed by the environment, e.g., a physical process controlled by some cyber components. The appearance of communication channels between cyber-physical components, easing the utilization of a CPS within larger systems, forces cyber components with high criticality to interact with lower-criticality cyber components. This interaction is compounded by external events from the environment that have a time impact on the CPS. Moreover, some programs of the cyber components may be executed on time-predictable processors and others on less time-predictable processors.
Different research communities separately study the three design phases of these systems: the modeling, the design and the analysis of CPSs 26. These phases are repeated iteratively until an appropriate solution is found. During the first phase, the behavior of a system is often described using model-based methods. Other methods exist, but model-driven approaches are widely used by both the research and industry communities. A solution described by a model is usually proved (functionally) correct by a formal verification method applied during the analysis phase (the third phase, described below).
During the second phase, the design, the physical components (e.g., sensors and actuators) and the cyber components (e.g., programs, messages and embedded processors) are chosen, often among those available on the market. However, due to the ever-increasing pressure of the smartphone market, the microprocessor industry provides general-purpose processors based on multicore and, in the near future, manycore processors. These processors have complex architectures that are not time predictable, due to features like multiple levels of caches and pipelines, speculative branching, and communication through shared memory and/or a network on chip, the internet, etc. Due to the time unpredictability of some processors, the CPS industry nowadays faces the great challenge of estimating worst-case execution times (WCETs) of programs executed on these processors. Indeed, the current complexity of both processors and programs does not allow proposing reasonable worst-case bounds. The design phase then ends with the implementation of the cyber components on such processors, where the models are transformed into programs (or messages for the communication channels) manually or by code-generation techniques 17.
During the third phase, the analysis, the correctness of the cyber components is verified at program level, where the functions of the cyber component are implemented. The execution times of programs are estimated either by static analysis, by measurements, or by a combination of both approaches 35.
These WCETs are then used as inputs to scheduling problems 28, the highest level of formalization for verifying the time properties of a CPS. Each program is given a start time within the schedule, together with an assignment of resources (processor, memory, communication, etc.). Verifying that a schedule and an associated assignment are a solution to a scheduling problem is known as a schedulability analysis.
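To make the notion concrete, the sketch below shows one of the simplest schedulability analyses, the classical utilization test for EDF on a single core with implicit deadlines; the task parameters are hypothetical and the example is purely illustrative, not one of the team's analyses.

```python
# Minimal illustrative sketch (not a Kopernic analysis): the classical EDF
# utilization test on a single core with implicit deadlines. Task parameters
# below are hypothetical (WCET, period) pairs in milliseconds.
def edf_schedulable(tasks):
    """tasks: list of (wcet, period) pairs; deadline equals period."""
    return sum(wcet / period for wcet, period in tasks) <= 1.0

# Three hypothetical programs: U = 0.2 + 0.2 + 0.167 <= 1, hence schedulable.
print(edf_schedulable([(2, 10), (3, 15), (5, 30)]))  # True
```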
The current CPS design flow, exploiting formal descriptions of the models and their transformation into the physical and cyber parts of the CPS, ensures that the functional behavior of the CPS is correct. Unfortunately, no formal description today guarantees that the execution times of the generated programs are smaller than a given bound. Clearly, all communities working on CPS design are aware that computing takes time 25, but there is no CPS solution guaranteeing the time predictability of these systems, as processors appear late within the design phase (see Figure 1). Indeed, the choice of the processor is made at the end of the CPS design process, after writing or generating the programs.

Figure 1: The CPS design compared to a solar system where the model is central.
Since the processor appears late within the CPS design process, the CPS designer in charge of estimating the worst-case execution time of a program, or of analyzing the schedulability of a set of programs, inherits a difficult problem. Kopernic's main purpose is to propose compositional rules with respect to the time behaviour of a CPS, allowing the CPS design to be restrained to analyzable instances of the WCET estimation problem and of the schedulability analysis problem.
With respect to the WCET estimation problem, we say that a rule
Before enumerating our scientific objectives, we introduce the concept of variability factors. More precisely, the time properties of a cyber component are subject to variability factors. We understand by variability the distance between the smallest and the largest value of a time property. With respect to the time properties of a CPS, these factors may be classified into three main classes:
- program structure: for instance, the execution time of a program that has two main branches is obtained, if appropriate composition principles apply, as the maximum of the largest execution times of the two branches (see the sketch after this list). In this case the branch is a variability factor on the execution time of the program;
- processor structure: for instance, the execution time of a program on a less predictable processor (e.g., one core, two levels of cache memory and one main memory) will have a larger variability than the execution time of the same program executed on a more predictable processor (e.g., one core, one main memory). In this case the cache memory is a variability factor on the execution time of the program;
- execution environment: for instance, the appearance of a pedestrian in front of a car triggers the execution of the program corresponding to the brakes in an autonomous car. In this case the pedestrian is a variability factor for the triggering of the program.
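A minimal sketch of the compositional rule mentioned in the first item, assuming the composition principles apply; the per-block bounds are hypothetical.

```python
# Illustrative sketch, assuming the composition principles mentioned above
# apply: blocks in sequence compose by addition, branches by the maximum over
# the alternatives. The per-block bounds (in cycles) are hypothetical.
def wcet_seq(*bounds):
    return sum(bounds)   # blocks executed one after the other

def wcet_branch(*bounds):
    return max(bounds)   # only one alternative is taken per execution

# if (cond) { A } else { B }; then C
print(wcet_seq(wcet_branch(120, 95), 40))  # 160: longer branch plus common suffix
```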
We identify three main scientific objectives to validate our research hypothesis. The three objectives are presented from program level, where we use statistical approaches, to the level of communicating programs, where we use probabilistic and non-probabilistic approaches.
The Kopernic scientific objectives are:
- [O1] worst-case execution time estimation of a program: modern processors induce an increased variability of the execution times of programs, making a complete static analysis to estimate such a worst case difficult (or even impossible). Our objective is to propose a solution composing probabilistic and non-probabilistic approaches, based on both static and statistical analyses, by answering the following scientific challenges:
- a classification of the variability factors of the execution times of a program with respect to the processor features. The difficulty of this challenge is related to the definition of an element belonging to the set of variability factors and its mapping to the execution time of the program;
- a compositional rule for the statistical models associated with each variability factor. The difficulty of this challenge comes from the fact that a global maximum on a multicore processor cannot be obtained by upper-bounding the local maxima on each core.
- [O2] deciding the schedulability of all programs running within the same cyber component, given an energy budget: in this case the programs may have different time criticalities, but they share the same processor, possibly a multicore. Our objective is to propose a solution composing probabilistic and non-probabilistic approaches, based on answers to the following scientific challenges:
- scheduling algorithms taking into account the interaction between different variability factors. The existence of time parameters described by probability distributions requires revisiting scheduling algorithms, which lose their optimality even in the case of a unicore processor 29. Moreover, the multicore partitioning problem is recognized as difficult even in the non-probabilistic case 33;
- schedulability analyses based on the algorithms proposed previously. In the case of predictable processors, schedulability analyses accounting for operating-system costs increase the time dependability of CPSs 31. Moreover, in the presence of variability factors, the composition property of non-probabilistic approaches is lost and new principles are required.
- [O3] deciding the schedulability of all programs communicating through predictable and non-predictable networks, given an energy budget: in this case the programs of the same cyber component execute on the same processor and may communicate with the programs of other cyber components through networks that may be predictable (network on chip) or non-predictable (internet, telecommunications). Our objective is to propose a solution to this challenge by analysing the schedulability of programs, for which (worst-case) probabilistic solutions exist 30, communicating through networks, for which probabilistic worst-case solutions 18 and average-case solutions 27 exist.
3 Research program
The research program for reaching these three objectives is organized along three main research axes:
- Worst case execution time estimation of a program, detailed in Section 3.1;
- Building measurement-based benchmarks, detailed in Section 3.2;
- Scheduling of graph tasks on different resources within an energy budget, detailed in Section 3.3.
3.1 Worst case execution time estimation of a program
The temporal study of real-time systems is based on the estimation of bounds for their temporal parameters, and more precisely the WCET of a program executed on a given processor. The main analyses for estimating WCETs are static analyses 35, dynamic analyses 19, also called measurement-based analyses, and finally hybrid analyses that combine the two previous ones 35.
The Kopernic approach for solving the WCET estimation problem is based on (i) the identification of the impact of variability factors on the execution of a program on a processor and (ii) the proposition of compositional rules allowing the impact of each factor to be integrated within a WCET estimation. Historically, the real-time community has depicted the distributions of execution times of programs as heavy-tailed, as large execution times are intuitively agreed to have a low probability of appearance. For instance, Tia et al. were the first to underline this intuition, within a paper introducing execution times described by probability distributions into a single-core schedulability analysis 34. Since 34, a low probability has been associated with large values of the execution times of a program executed on a single-core processor. It is, finally, in 2000 that the group of Alan Burns, within the thesis of Stewart Edgar 20, formalized this property as a conjecture indicating that a maximal bound on the execution times of a program may be estimated by the Extreme Value Theory 23. No mathematical definition of what this bound represents for the execution time of a program was proposed at that time. Two years later, a first attempt to define this bound was made by Bernat et al. 16, but the proposed definition extends the static WCET understanding as a combination of execution times of basic blocks of a program. Extremely pessimistic, the definition remains intuitive, without an associated mathematical description. After 2013, several publications from Liliana Cucu-Grosjean's group at Inria Nancy introduced a mathematical definition of a probabilistic worst-case execution time and, respectively, of a probabilistic worst-case response time, as an appropriate formalization for a correct application of the Extreme Value Theory to real-time problems 2, 1, 5.
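As an illustration of the statistical side of this axis, the following minimal sketch applies the block-maxima method with a GEV fit to synthetic execution times; it is a simplified example assuming scipy, not the team's EVT-Kopernic tool, and the block size and exceedance probability are placeholder choices.

```python
# A minimal sketch, not the team's EVT-Kopernic tool: estimating a
# probabilistic WCET bound from measured execution times with the block-maxima
# method and a GEV fit (scipy assumed). The measurements are synthetic
# placeholders; block size and exceedance probability are arbitrary choices.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
times = rng.gamma(shape=9.0, scale=100.0, size=5000)  # fake execution times

block = 50                                            # block size: a key EVT choice
maxima = times.reshape(-1, block).max(axis=1)         # one maximum per block

c, loc, scale = genextreme.fit(maxima)                # fit the GEV distribution
pwcet = genextreme.ppf(1 - 1e-9, c, loc, scale)       # exceeded w.p. 1e-9 per block
print(f"pWCET estimate: {pwcet:.0f} cycles")
```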
We identify the following open research problems related to the first research axis:
- the generalization of the modes analysis to the multi-dimensional case, each dimension representing a program, when several programs cooperate;
- the proposition of a set of rules for building programs that are time predictable for the internal architecture of a given single-core and, then, of a multicore processor;
- modeling the impact of processor features on the energy consumption, to better take into account both the worst-case execution time estimation and the schedulability analyses considered within the third research axis of this proposal.
3.2 Building measurement-based benchmarks
The real-time community faces a lack of benchmarks adapted to measurement-based analyses. Existing benchmarks for the estimation of WCETs 32, 24, 21 have been used mainly for static analyses. They contain very simple programs and are not accompanied by a measurement protocol. They do not take into account functional dependencies between programs, mainly due to shared global variables, which, of course, influence their execution times. Furthermore, current benchmarks do not take into account interferences due to the competition for resources, e.g., the memory shared by the different cores of a multicore. On the other hand, measurement-based analyses require execution times measured while executing programs on embedded processors similar to those used in the embedded-systems industry. For example, the mobile-phone industry uses multicores based on non-predictable cores with complex internal architectures, such as those of the ARM Cortex-A family. In the near future, these multicores will be found in critical embedded systems in application domains such as avionics, autonomous cars, railway, etc., in which the team is deeply involved. This dramatically increases the complexity of measurement-based analyses compared to analyses performed on general-purpose personal computers, as they are currently performed.
We understand by measurement-based benchmark a 3-tuple composed of a program, a processor and a measurement protocol. The associated measurement protocol should detail the variation of the input variables (associated to sensors) of these benchmarks and their impact on the output variables (associated to actuators), as well as the variation of the processor states.
Proposing the reproducibility and representativity properties that measurement-based benchmarks should follow is the strength of this research axis 7. We understand by reproducibility the property of a measurement protocol to provide the same ordered set of execution times for a fixed pair (program, processor). We understand by representativity the existence of a (sufficiently small) number of values for the input variables allowing a measurement protocol to provide an ordered set of execution times that ensures convergence of the Extreme Value Index estimators.
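As a hedged illustration of the representativity property, the sketch below checks whether one standard Extreme Value Index estimator, the Hill estimator (valid for heavy tails), stabilizes as more order statistics are used; the measurements are synthetic placeholders, not KDBench data.

```python
# Hedged illustration of representativity: checking whether an Extreme Value
# Index (EVI) estimator stabilizes as more order statistics are used. The Hill
# estimator below is one standard EVI estimator (valid for heavy tails); the
# measurements are synthetic placeholders, not KDBench data.
import numpy as np

def hill_estimator(sample, k):
    """Hill estimate of the EVI from the k largest observations."""
    x = np.sort(sample)
    return float(np.mean(np.log(x[-k:]) - np.log(x[-k - 1])))

rng = np.random.default_rng(1)
times = rng.pareto(a=4.0, size=10_000) + 1.0   # heavy tail with true EVI = 0.25

for k in (50, 100, 200, 400):                  # the estimates should stabilize
    print(k, round(hill_estimator(times, k), 3))
```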
Within this research axis we identify the following open problems:
- proving the reproducibility and representativity properties while extending current benchmarks from predictable unicore processors (e.g., ARM Cortex-M4) to non-predictable ones (e.g., ARM Cortex-A53 or Cortex-A7);
- proving the reproducibility and representativity properties while extending unicore benchmarks to multicore processors. In this context, we face the supplementary difficulty of defining the principles that an operating system should satisfy in order to ensure a real-time behaviour.
3.3 Scheduling of graph tasks on different resources within an energy budget
Following the model-driven approach, the functional description of the cyber part of the CPS is performed as a graph of dependent functions, e.g., a block diagram of functions in Simulink, the most widely used modeling/simulation tool in industry. Of course, a program is associated with every function. Since the graph of dependent programs becomes a set of dependent tasks when real-time constraints must be taken into account, we face the problem of verifying the schedulability of such dependent task sets when executed on a multicore processor.
Directed Acyclic Graphs (DAGs) are widely used to model different types of dependent task sets. The typical model consists of a set of independent tasks where every task is described by a DAG of dependent sub-tasks with the same period, inherited from the period of the task 15. In such a DAG, the sub-tasks are vertices and the edges are dependencies between sub-tasks. This model is well suited to represent, for example, the engine controller of a car described with Simulink. The multicore schedulability analysis may be of two types, global or partitioned. To reduce interference and interactions between sub-tasks, we focus on partitioned scheduling, where each sub-task is assigned to a given core 22, 6, 8.
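A minimal sketch of this DAG task model, under simplifying assumptions; the sub-task names, WCETs and the two-core (partitioned) assignment below are illustrative, not taken from an actual Kopernic case study.

```python
# A minimal sketch of the DAG task model under simplifying assumptions; the
# sub-task names, WCETs and the two-core (partitioned) assignment below are
# illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class SubTask:
    name: str
    wcet: int                                    # worst-case execution time
    core: int                                    # fixed core: partitioned scheduling
    preds: list = field(default_factory=list)    # precedence constraints (edges)

@dataclass
class DagTask:
    period: int                                  # inherited by all sub-tasks
    subtasks: list

# sense -> (filter, log) -> actuate, partitioned over two cores
sense = SubTask("sense", wcet=80, core=0)
filt = SubTask("filter", wcet=150, core=1, preds=[sense])
log = SubTask("log", wcet=40, core=0, preds=[sense])
act = SubTask("actuate", wcet=60, core=1, preds=[filt, log])
engine = DagTask(period=1000, subtasks=[sense, filt, log, act])
```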
In order to propose a general DAG task model, we identify the following open research problems:
- solving the schedulability problem where probabilistic DAG tasks are executed on predictable and non-predictable processors, such that some tasks communicate through predictable networks, e.g., inside a multicore or a manycore processor, and non-predictable networks, e.g., between these processors through the internet. Within this general schedulability problem, we consider five main classes of scheduling algorithms that we adapt to solve probabilistic DAG task scheduling problems (a basic building block is sketched after this list). We compare the new algorithms with respect to their energy consumption, in order to propose new versions with a decreased energy consumption by integrating frequency variation for processor features like the CPU or memory accesses;
- the validation of the proposed framework on our multicore drone case study. To address the challenging objective of proposing time-predictable platforms for drones, we are currently migrating the PX4-RT programs to heterogeneous architectures. This includes an implementation of the scheduling algorithms detailed within this research axis in the current operating system, NuttX.
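A hedged sketch of the building block mentioned above: under an independence assumption, the execution time of two sub-tasks in sequence is distributed as the convolution of their discrete distributions; the probability mass functions below are illustrative placeholders.

```python
# Hedged sketch of one building block of probabilistic schedulability analyses:
# assuming independence, the execution time of two sub-tasks in sequence is the
# convolution of their discrete distributions. The PMFs are placeholders.
def convolve_pmf(pmf_a, pmf_b):
    """pmf: dict mapping execution time -> probability; returns PMF of the sum."""
    out = {}
    for ta, pa in pmf_a.items():
        for tb, pb in pmf_b.items():
            out[ta + tb] = out.get(ta + tb, 0.0) + pa * pb
    return out

c1 = {2: 0.9, 5: 0.1}   # sub-task 1: usually 2 time units, rarely 5
c2 = {3: 0.8, 7: 0.2}   # sub-task 2
print(convolve_pmf(c1, c2))  # {5: 0.72, 9: 0.18, 8: 0.08, 12: 0.02}
```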
4 Application domains
4.1 Avionics
Time-critical solutions in this context are based on the temporal and spatial isolation of programs, and the understanding of multicore interferences is crucial. Our contributions belong mainly to the solution space of objective [O1] identified previously.
4.2 Railway
Time-critical solutions in this context concern both the proposition of an appropriate scheduler and the associated schedulability analyses. Our contributions belong to the solution space of the problems dealt with within objectives [O1] and [O2] identified previously.
4.3 Autonomous cars
Time-critical solutions in this context concern the interaction between programs executed on multicore processors and messages transmitted through wireless communication channels. Our contributions belong to the solution space of all three classes of problems dealt with within the three Kopernic objectives identified previously.
4.4 Drones
As in the case of autonomous cars, there is an interaction between programs and messages, suggesting that our contributions in this context belong to the solution space of all three classes of problems dealt with within the objectives identified previously.
5 Social and environmental responsibility
5.1 Impact of research results
The Kopernic members provide theoretical means to decrease both the processor utilization and the energy consumption. Such gain is estimated within
6 Highlights of the year
6.1 Institutional concerns
Note: Readers are advised that the Institute does not endorse the text in the "Highlights of the year" section, which is the sole responsibility of the team leader.
Kopernic members are concerned about major changes to INRIA's missions and organization introduced by the new Contract of Objectives, Means and Performance with the French government for the period 2024–2028, as well as about the lack of collective work on these new missions. Indeed, the multiplication of new missions and priorities may restrain the independence of scientists and teams, as well as their freedom to select research topics and collaborators. More precisely, our main concerns with this new contract lie with the following items:
- Placement of Inria in a “zone à régime restrictif” (ZRR)
- Restriction of international and industrial collaborations to partners chosen by the institute’s management, with no clear indication of the rules
- Individual financial incentives for researchers involved in strategic partnerships, whose topics are steered by the program agency
- Priority given to “dual” research with both military and civilian applications, materialized by tighter links with the Ministry of Defense.
6.2 Awards
Liliana Cucu-Grosjean and Adriana Gogonel have been named among the Top 100 Romanian Business Women by the newspaper Capital. They have also been awarded the 2024 Award of the Year by the Galati City Hall.
6.3 Keynotes
Liliana Cucu-Grosjean gave a keynote entitled "Proving probabilistic worst-case reasoning: when functional and non-functional properties must meet" at the South American joint conference between the XIV Symposium on Computing Systems Engineering (SBESC 2024) and the 13th Latin-American Symposium on Dependable and Secure Computing (LADC 2024).
7 New software, platforms, open data
7.1 New software
7.1.1 SynDEx
- Keywords: Distributed, Optimization, Real time, Embedded systems, Scheduling analyses
- Scientific Description:
SynDEx is a system-level CAD software implementing the AAA methodology for rapid prototyping and for optimizing distributed real-time embedded applications. It is developed in OCaml.
Architectures are represented as graphical block diagrams composed of programmable (processors) and non-programmable (ASIC, FPGA) computing components, interconnected by communication media (shared memories, links and buses for message passing). In order to deal with heterogeneous architectures, an architecture may feature several components of the same kind but with different characteristics.
Two types of non-functional properties can be specified for each task of the algorithm graph. First, a period that does not depend on the hardware architecture. Second, real-time features that depend on the different types of hardware components, such as execution and data-transfer times, memory, etc. Requirements are generally constraints on deadlines equal to periods, latency between any pair of tasks in the algorithm graph, dependence between tasks, etc.
Exploration of alternative allocations of the algorithm onto the architecture may be performed manually and/or automatically. The latter is achieved by performing real-time multiprocessor schedulability analyses and optimization heuristics based on the minimization of temporal or resource criteria. For example, while satisfying deadline and latency constraints, they can minimize the total execution time (makespan) of the application on the given architecture, as well as the amount of memory. The results of each exploration are visualized as timing diagrams simulating the distributed real-time implementation.
Finally, real-time distributed embedded code can be automatically generated for dedicated distributed real-time executives, possibly calling services of resident real-time operating systems such as Linux/RTAI or OSEK, for instance. These executives are deadlock-free and based on off-line scheduling policies. Dedicated executives induce minimal overhead and are built from processor-dependent executive kernels. To date, executive kernels are provided for: TMS320C40, PIC18F2680, i80386, MC68332, MPC555, i80C196 and Unix/Linux workstations. Executive kernels for other processors can be developed at reasonable cost following these examples as patterns.
- Functional Description:
Software for optimizing the implementation of embedded distributed real-time applications and for generating efficient and correct-by-construction code.
- URL:
- Contact: Yves Sorel
- Participant: Yves Sorel
7.1.2 EVT Kopernic
- Keywords: Embedded systems, Worst Case Execution Time, Real-time application, Statistics
- Scientific Description:
The EVT-Kopernic tool is an implementation of the Extreme Value Theory (EVT) for the problem of the statistical estimation of worst-case bounds on the execution time of a program on a processor. Our implementation uses the two versions of EVT (GEV and GPD) to propose two independent estimation methods. Their results are compared, and only results that are sufficiently close validate an estimation (a sketch of this cross-check idea appears after this description). Our tool is proved predictable by its unique choice of block (GEV) and threshold (GPD), while providing reproducible estimations.
- Functional Description:
EVT-Kopernic is a tool providing a statistical estimation of bounds on the worst-case execution time of a program on a processor. The estimator takes into account dependences between execution times by learning from the execution history, while also dealing with cases of small variability of the execution times.
- URL:
- Contact: Adriana Gogonel
- Participants: Adriana Gogonel, Liliana Cucu
7.2 Open data
Kopernic members contribute to the effort of reproducing research results and numerical evaluations by proposing dynamic benchmarks for the statistical analysis of measured execution times, memory accesses and other traces obtained during the Hardware-in-the-Loop execution of the open-source PX4-RT programs. Measurements obtained from the execution of the PX4-RT programs on real boards are shared with the community under the name KDBench, a.k.a. Kopernic Dynamic Benchmarks. The data related to KDBench are available and regularly updated at https://team.inria.fr/kopernic/kdbench/. The contact person is Liliana Cucu-Grosjean.
8 New results
During this year, the results of Kopernic members have covered all Kopernic research axes.
8.1 Worst case execution time estimation of a program
Participants: Liliana Cucu-Grosjean, Adriana Gogonel, Marwan Wehaiba El Khazen.
We consider both WCET and WCEC (worst-case energy consumption) statistical estimators based on the Extreme Value Theory 23. Compared to existing methods 35, our results require the execution of the program under study on the targeted processor, or at least on a cycle-accurate simulator of this processor. We concentrate on the correct application of the Extreme Value Theory, as it brings important support 9 to justifiable estimations.
This year we have concentrated our research effort on the robustness of statistical estimators, as well as on the heterogeneity of execution times, which requires the use of change-point detection and anomaly detection techniques 12, as they address two fundamental questions: (i) were there significant changes in our data during observation? and (ii) was there something new or unusual in our data with respect to an expected time behavior? These two questions lead to the formulation of two research problems:
- Change-point detection: identifying points in time where significant changes occur in a time series (in our case, a sequence of execution times of a program);
- Anomaly detection: identifying the set of samples corresponding to unusual phenomena in the time series.
While merging results for these two problems, our target is to provide explanations of outliers, those execution times at the crossroads between change-point detection and anomaly detection. The idea is to create a score for every observed execution time by letting all the other observations "judge" it: they behave like the initialization subset that produces the threshold, and the score of the judged observation increases if it is above the threshold. But since some of these observations may be outliers themselves, for each vote we drop a random subset of half the size of the initial set of observations, and we iterate this process 1000 times. Only half of the observations get to vote on each iteration, and the benefit is twofold: first, many subsets will naturally exclude outliers and will have a sound judgement; second, for the subsets that are still polluted by outliers, the threshold will probably be so high that the score will simply not increase. This latter behavior is not a concern, since we can always increase the number of iterations and define a threshold on the score for the final decision. Outliers are then excluded from the statistical estimation.
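The following sketch is a hedged reconstruction of this voting scheme; details such as the threshold rule (a high empirical quantile of the voting subset), the quantile level and the final score cutoff are assumptions, not the published values.

```python
# Hedged reconstruction of the voting scheme described above; the threshold
# rule (a high empirical quantile of the voting subset), the quantile level
# and the final score cutoff are assumptions, not the published values.
import numpy as np

def outlier_scores(times, iterations=1000, q=0.999, seed=0):
    rng = np.random.default_rng(seed)
    times = np.asarray(times, dtype=float)
    n = len(times)
    scores = np.zeros(n)
    for _ in range(iterations):
        voters = rng.choice(n, size=n // 2, replace=False)  # half the data votes
        threshold = np.quantile(times[voters], q)           # voters set the bar
        scores[times > threshold] += 1                      # judged values above it
    return scores / iterations

rng = np.random.default_rng(3)
sample = np.concatenate([rng.normal(1000, 20, 2000), [1400, 1500]])  # 2 planted outliers
scores = outlier_scores(sample)
print(np.where(scores > 0.5)[0])  # likely flags the two planted outliers
```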
This year we have also dedicated effort to a more philosophical question on the definition of (probabilistic or statistical) worst-case execution times. Indeed, independence hypotheses make it difficult today to calculate the probabilistic worst-case execution time of a program, and current approaches are often built on statistical estimators based on the use of Extreme Value Theory or concentration inequalities. Future probabilistic time analyses are expected to consider worst-case execution time estimates obtained by using statistical estimators on measured execution times, instead of probabilistic (worst-case) execution time estimations. Thus, we discuss the opportunity of differentiating probabilistic (worst-case) execution times from statistical (worst-case) execution times, and how dependences between execution times are better or more easily captured by each definition, while stochastic execution times could also be an appropriate alternative 10.
8.2 Building measurement-based benchmarks: KDBench
Participants: Slim Ben Amor, Masum Bin Alam, Liliana Cucu-Grosjean, Ismail Hawila, Yves Sorel, Marwan Wehaiba El Khazen.
KDBench, our measurement-based benchmark suite, is obtained by modifying open-source programs of the PX4 autopilot, designed to control a wide range of air, land, sea and underwater drones. More precisely, the studied programs are executed on a standard Pixhawk drone board based on a predictable single-core ARM Cortex-M4 processor; during several collaborative projects we have transformed this set of programs into a set of dependent tasks satisfying real-time constraints, leading to a new version of PX4, called PX4-RT. As usual, the set of dependent real-time tasks is defined by a data-dependency graph. An interested reader may refer to the KDBench web page of the Kopernic team.
During this year, we have concentrated on the migration of the KDBench programs to a new board, NAVIO2, while starting to publicize the existing benchmarks obtained for the first board, Pixhawk 11. Concerning the first item, one important difficulty identified during this year is the migration from the NuttX operating system (on Pixhawk) to Linux, the operating system of the Raspberry Pi (on NAVIO2). Indeed, the collection of data had previously been done based on NuttX internal scheduling mechanisms, and this year's effort has been dedicated to reproducing the measurements on one core of NAVIO2, as had been done for NuttX. Our target for 2025 is the introduction of the multicore feature of the Raspberry Pi, as well as the study of the impact of real-time features like the PREEMPT_RT patch.
8.3 Scheduling of graph tasks on different resources within an energy budget
Participants: Liliana Cucu-Grosjean, Myriam Mabrouki.
Within this research axis, and while targeting a reduced energy consumption for embedded real-time systems, different techniques exist and we are interested in Dynamic Voltage and Frequency Scaling (DVFS). Quan and Hu propose an algorithm ensuring an energy-optimal CPU frequency assignment for executing independent programs on a single-core processor under real-time constraints. Since its optimality is proved for variable CPU frequencies, we propose to add variable frequency assignment for memory accesses to the problem solved by Quan and Hu. Based on a numerical evaluation of the energy consumption of TACLeBench programs, we study the impact of both CPU and memory-access frequencies on the energy consumption, but no existing energy model considers these two factors together. Moving back to the existence of an optimal algorithm to assign both frequencies, one may propose to find the appropriate pair of CPU and memory-access frequencies, but an appropriate energy model is required to compare two different pairs. Inspired by the results proposed within the thesis of Marwan Wehaiba El Khazen, we plan to introduce a new statistical energy model within scheduling algorithms for real-time systems 13.
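As a toy illustration of why an energy model is needed to compare frequency pairs, the sketch below brute-forces CPU/memory frequency pairs under an assumed (not validated) energy model; all operating points and constants are placeholders, not measurements from our studies.

```python
# Toy sketch of the CPU/memory frequency-assignment question raised above,
# under an assumed (not validated) energy model: dynamic CPU power cubic in
# frequency, memory power affine in frequency. All constants are placeholders.
from itertools import product

CPU_FREQS = [0.6, 0.8, 1.0]   # GHz, hypothetical DVFS operating points
MEM_FREQS = [0.4, 0.8]        # GHz

def exec_time(cpu_work, mem_work, f_cpu, f_mem):
    return cpu_work / f_cpu + mem_work / f_mem            # seconds (toy model)

def energy(cpu_work, mem_work, f_cpu, f_mem):
    p_cpu = 2.0 * f_cpu ** 3                              # assumed dynamic power
    p_mem = 0.5 + 0.8 * f_mem                             # assumed memory power
    return p_cpu * (cpu_work / f_cpu) + p_mem * (mem_work / f_mem)

def best_assignment(cpu_work, mem_work, deadline):
    feasible = [fm for fm in product(CPU_FREQS, MEM_FREQS)
                if exec_time(cpu_work, mem_work, *fm) <= deadline]
    return min(feasible, key=lambda fm: energy(cpu_work, mem_work, *fm), default=None)

print(best_assignment(cpu_work=1.2, mem_work=0.3, deadline=2.5))  # (0.6, 0.8)
```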
9 Bilateral contracts and grants with industry
9.1 CIFRE Grant funded by StatInf
Participants: Liliana Cucu-Grosjean, Adriana Gogonel, Marwan Wehaiba El Khazen.
A CIFRE agreement between the Kopernic team and the start-up StatInf started on October 1st, 2020. Its funding covers the study of the evolution of WCET models to take the energy consumption into account, according to the Kopernic research objectives. The associated PhD thesis was defended on December 12th, 2024.
9.2 CIFRE Grant funded by StatInf
Participants: Liliana Cucu-Grosjean, Slim Ben Amor, Ismail Hawila, Yves Sorel.
A CIFRE agreement between the Kopernic team and the start-up StatInf started on October 1st, 2022. Its funding covers the study of the relation between control-theory robustness and the schedulability problem using probabilistic descriptions, according to the Kopernic research objectives. The defense of the associated thesis is expected before the end of 2025.
10 Partnerships and cooperations
10.1 International initiatives
10.1.1 Associate Teams in the framework of an Inria International Lab or in the framework of an Inria International Program
Participants: Slim Ben Amor, Liliana Cucu-Grosjean, Ismail Hawila, Marwan Wehaiba.
KEPLER
- Title: Probabilistic foundations for time, a key concept for the certification of cyber-physical systems
- Duration: 2020 -> 2024
- Coordinator: George Lima (gmlima@ufba.br)
- Partners: Universidade Federal da Bahia (Brazil)
- Inria contact: Liliana Cucu
- Summary:
Today the term cyber-physical systems (CPSs) refers to a new generation of systems integrating computational and physical capabilities to interact with humans. A defibrillator, a mobile phone, a car or an aircraft are all CPSs. Besides constraints like power consumption, security, size and weight, CPSs may have cyber components required to fulfill their functions within a limited time interval, a property a.k.a. safety. This expectation arrives simultaneously with the need to implement new classes of algorithms, e.g., deep learning techniques, requiring the utilization of important computing and memory resources. These resources may be found on multicore processors, known for increasing the execution-time variability of programs. Therefore, ensuring time predictability on multicore processors is our identified challenge. The Kepler project faces this challenge by developing new mechanisms and techniques for supporting CPS applications on multicore processors, focusing on scheduling and timing analysis, for which probabilistic guarantees should be provided.
10.2 International research visitors
10.2.1 Visits of international scientists
Other international visits to the team
Prof. Enrico Bini (University of Turin) has visited the team while preparing a longer-term collaboration between Kopernic and his research group on the design of real-time control systems.
11 Dissemination
Participants: Liliana Cucu-Grosjean, Yves Sorel.
11.1 Promoting scientific activities
11.1.1 Scientific events: selection
Chair of conference program committees
Liliana Cucu-Grosjean has been the Track Chair of the Embedded Systems track at the European conference DATE 2024.
Member of the conference program committees
Kopernic members are regular PC members for relevant conferences like IEEE RTSS, IEEE RTAS, DATE, ETFA, RTNS and RTCSA.
11.1.2 Journal
Member of the editorial boards
Liliana Cucu-Grosjean is an associate editor of the Journal of Systems Architecture.
Reviewer - reviewing activities
Kopernic members are regular reviewers for the main journals of our domain: Journal of Real-Time Systems, IEEE Transactions on Computers, Information Processing Letters, Journal of Heuristics, Journal of Systems Architecture, Journal of Signal Processing Systems, Leibniz Transactions on Embedded Systems, IEEE Transactions on Industrial Informatics, etc.
11.1.3 Invited talks
Liliana Cucu-Grosjean gave a keynote at the South American joint conference between the XIV Symposium on Computing Systems Engineering (SBESC 2024) and the 13th Latin-American Symposium on Dependable and Secure Computing (LADC 2024).
11.1.4 Leadership within the scientific community
Liliana Cucu-Grosjean is the current IEEE TC on Real-Time Systems vice-chair.
11.1.5 Scientific expertise
- Yves Sorel is a member of the Steering Committee of the Advanced Engineering and Computing Hub of the Systematic Paris-Region Cluster;
- Yves Sorel is a member of the Steering Committee on Orientations and Programs of the SystemX Institute for Technological Research (IRT);
- Liliana Cucu-Grosjean is a scientific expert within the French Society of Automotive Engineering.
11.1.6 Research administration
- Yves Sorel is a member of the CDT Paris center commission;
- Liliana Cucu-Grosjean is INRIA national referee on harassment at FS-CSA;
- Liliana Cucu-Grosjean is an elected member of INRIA Scientific Board.
11.2 Teaching - Supervision - Juries
11.2.1 Teaching
Liliana Cucu-Grosjean, Initiation to research, MSc degree in Embedded Systems at University Paris-Saclay
11.2.2 Supervision
- Hadjer Bendellaa, Dimensioning probabilistic embedded systems for efficient execution of artificial intelligence algorithms, Sorbonne University, started in November 2024, supervised by Liliana Cucu-Grosjean
- Ismail Hawila, Multicore scheduling of real-time control systems of probabilistic tasks with precedence constraints, Sorbonne University, started in October 2022, supervised by Liliana Cucu-Grosjean and Slim Ben Amor (StatInf)
- Marwan Wehaiba El Khazen, Statistical models for optimizing the energy consumption of cyber-physical systems, Sorbonne University, started in October 2020 and defended on December 12th, 2024, supervised by Liliana Cucu-Grosjean and Adriana Gogonel (StatInf)
11.2.3 Juries
Liliana Cucu-Grosjean has been
- reviewer of the PhD thesis entitled Implantation certifiable et efficace de réseaux de neurones sur des systèmes embarqués temps-réel critiques (certifiable and efficient implementation of neural networks on critical real-time embedded systems), defended by Iryna De Albuquerque Silva at ISAE, Toulouse;
- chair of the PhD jury for the thesis entitled Conception et configuration de réseaux TSN guidées par les modèles (model-driven design and configuration of TSN networks), defended by Maxine Samson at University of Lorraine, Nancy.
12 Scientific production
12.1 Major publications
- 1 Patent: Simulation Device. FR2016/050504, France, March 2016. URL: https://hal.science/hal-01666599
- 2 Measurement-Based Probabilistic Timing Analysis for Multi-path Programs. In: the 24th Euromicro Conference on Real-Time Systems (ECRTS), 2012, 91-101.
- 3 A Survey of Probabilistic Schedulability Analysis Techniques for Real-Time Systems. Leibniz Transactions on Embedded Systems 6(1), 2019, 53.
- 4 A Survey of Probabilistic Timing Analysis Techniques for Real-Time Systems. Leibniz Transactions on Embedded Systems 6(1), 2019, 60.
- 5 Patent: Dispositif de caractérisation et/ou de modélisation de temps d'exécution pire-cas (device for characterizing and/or modeling worst-case execution times). 1000408053, France, June 2017. URL: https://hal.science/hal-01666535
- 6 Latency analysis for data chains of real-time periodic tasks. In: the 23rd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), September 2018.
- 7 Reproducibility and representativity: mandatory properties for the compositionality of measurement-based WCET estimation approaches. SIGBED Review 14(3), 2017, 24-31.
- 8 Scheduling Real-time HiL Co-simulation of Cyber-Physical Systems on Multi-core Architectures. In: the 24th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), August 2018.
12.2 Publications of the year
International journals
- 9 On vulnerabilities in EVT-based timing analysis: an experimental investigation on a multi-core architecture. Design Automation for Embedded Systems, 2024.
Invited conferences
- 10 Invited Paper: Statistical, Stochastic or Probabilistic (Worst-Case Execution) Execution Time? What Impact on the Multicore Composability. In: the 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024), OASIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Lille, France, 2024.
International peer-reviewed conferences
- 11 Kopernic Dynamic Benchmarks (KDBench): open-source measurement-based benchmarks. In: RTSS@Work 2024, organized as part of the IEEE Real-Time Systems Symposium (RTSS) 2024, York, United Kingdom, 2024. Proceedings: https://2024.rtss.org/wp-content/uploads/2024/12/RTSS@Work2024-proceedings.pdf
Doctoral dissertations and habilitation theses
- 12 PhD thesis: Statistical models for the optimization of energy consumption in cyber-physical systems. Sorbonne Université, December 2024.
Other scientific publications
- 13 How to prove the existence of an optimal frequency assignment algorithm for CPU and memory accesses in absence of an appropriate energy model. In: JRWRTC-RTNS 2024, Junior Researcher Workshop on Real-Time Computing at the 32nd International Conference on Real-Time Networks and Systems, Porto, Portugal, November 2024.
12.3 Cited publications
- 14 Cyber-physical systems. IEEE, 2011.
- 15 A Generalized Parallel Task Model for Recurrent Real-time Processes. In: the 33rd IEEE Real-Time Systems Symposium (RTSS), 2012, 63-72.
- 16 WCET Analysis of Probabilistic Hard Real-Time System. In: the 23rd IEEE Real-Time Systems Symposium (RTSS'02), IEEE Computer Society, 2002, 279-288.
- 17 A Synchronous-Based Code Generator for Explicit Hybrid Systems Languages. In: Compiler Construction, 24th International Conference (CC), joint with ETAPS, 2015, 69-88.
- 18 Preliminary results for introducing dependent random variables in stochastic feasibility analysis on CAN. In: the WIP session of the 7th IEEE International Workshop on Factory Communication Systems (WFCS), 2008.
- 19 A Survey of Probabilistic Timing Analysis Techniques for Real-Time Systems. LITES 6(1), 2019, 03:1-03:60.
- 20 Statistical Analysis of WCET for Scheduling. In: the 22nd IEEE Real-Time Systems Symposium (RTSS), 2001, 215-225.
- 21 TACLeBench: A Benchmark Collection to Support Worst-Case Execution Time Research. In: the 16th International Workshop on Worst-Case Execution Time Analysis (WCET), OASIcs 55, 2016, 2:1-2:10.
- 22 Response time analysis of sporadic DAG tasks under partitioned scheduling. In: the 11th IEEE Symposium on Industrial Embedded Systems (SIES), May 2016, 1-10.
- 23 Open Challenges for Probabilistic Measurement-Based Worst-Case Execution Time. IEEE Embedded Systems Letters 9(3), 2017, 69-72.
- 24 The Mälardalen WCET Benchmarks: Past, Present And Future. In: the 10th International Workshop on Worst-Case Execution Time Analysis (WCET), OASIcs 15, 2010, 136-146.
- 25 Computing Needs Time. Communications of the ACM 52(5), 2009.
- 26 Introduction to embedded systems: a cyber-physical systems approach. MIT Press, 2017.
- 27 Real-Time Queueing Theory. In: the 10th IEEE Real-Time Systems Symposium (RTSS), 1996.
- 28 Multiprocessor Scheduling for Real-Time Systems. Springer, 2015.
- 29 Optimal Priority Assignment Algorithms for Probabilistic Real-Time Systems. In: the 19th International Conference on Real-Time and Network Systems (RTNS), 2011.
- 30 Response Time Analysis for Fixed-Priority Tasks with Multiple Probabilistic Parameters. In: the IEEE Real-Time Systems Symposium (RTSS), 2013.
- 31 Monoprocessor Real-Time Scheduling of Data Dependent Tasks with Exact Preemption Cost for Embedded Systems. In: the 16th IEEE International Conference on Computational Science and Engineering (CSE), 2013.
- 32 PapaBench: a Free Real-Time Benchmark. In: the 6th International Workshop on Worst-Case Execution Time (WCET) Analysis, OASIcs 4, 2006.
- 33 Automatic Parallelization of Multi-Rate FMI-based Co-Simulation On Multi-core. In: the Symposium on Theory of Modeling & Simulation: DEVS Integrative M&S Symposium, 2017.
- 34 Probabilistic Performance Guarantee for Real-Time Tasks with Varying Computation Times. In: the IEEE Real-Time and Embedded Technology and Applications Symposium, 1995.
- 35 The worst-case execution time problem: overview of methods and survey of tools. ACM Transactions on Embedded Computing Systems 7(3), 2008, 1-53.