2023 Activity Report: Project-Team KOPERNIC
RNSR: 201822841D. Research center: Inria Paris Centre
 Team name: Keeping worst case reasoning for different criticalities
 Domain: Algorithmics, Programming, Software and Architecture
 Theme: Embedded and Real-time Systems
Keywords
Computer Science and Digital Science
 A1.1.1. Multicore, Manycore
 A1.5. Complex systems
 A1.5.1. Systems of systems
 A1.5.2. Communicating systems
 A2.3. Embedded and cyber-physical systems
 A2.3.1. Embedded systems
 A2.3.2. Cyber-physical systems
 A2.3.3. Real-time systems
 A2.4.1. Analysis
Other Research Topics and Application Domains
 B5.2. Design and manufacturing
 B5.2.1. Road vehicles
 B5.2.2. Railway
 B5.2.3. Aviation
 B5.2.4. Aerospace
 B6.6. Embedded systems
1 Team members, visitors, external collaborators
Research Scientist
 Liliana Cucu [Team leader, INRIA, Senior Researcher, HDR]
Faculty Member
 Avner Bar-Hen [CNAM, Professor Delegation, until Mar 2023, HDR]
PhD Students
 Chiara Daini [INRIA, until Apr 2023]
 Ismail Hawila [StatInf, CIFRE]
 Marwan Wehaiba El Khazen [StatInf, CIFRE]
 Kevin Zagalo [INRIA, until Sep 2023]
Technical Staff
 Masum Bin Alam [INRIA, Engineer, from Nov 2023]
 Rihab Bennour [INRIA, Engineer, until May 2023]
Interns and Apprentices
 Marc-Antoine Auvray [INRIA, Apprentice, until Aug 2023]
 Myriam Mabrouki [INRIA, Intern, from Jul 2023 until Aug 2023]
Administrative Assistants
 Nelly Maloisel [INRIA]
 Christelle Rosello [INRIA, from Apr 2023]
External Collaborators
 Slim Ben Amor [StatInf]
 Adriana Gogonel [StatInf]
 Kossivi Kougblenou [StatInf]
 Yves Sorel [INRIA]
2 Overall objectives
The Kopernic members focus their research on studying time for embedded communicating systems, also known as cyber-physical systems (CPSs). More precisely, the team proposes a system-oriented solution to the problem of studying the time properties of the cyber components of a CPS. The solution is expected to be obtained by composing probabilistic and non-probabilistic approaches for CPSs. Moreover, statistical approaches are expected to validate existing hypotheses or propose new ones for the models considered by probabilistic analyses [3, 4].
The term cyber-physical systems refers to a new generation of systems with integrated computational and physical capabilities that can interact with humans through many new modalities [15]. A defibrillator, a mobile phone, an autonomous car or an aircraft are all CPSs. Besides constraints like power consumption, security, size and weight, CPSs may have cyber components required to fulfill their functions within a limited time interval (a.k.a. time dependability), often imposed by the environment, e.g., a physical process controlled by some cyber components. The appearance of communication channels between cyber-physical components, easing the utilization of CPSs within larger systems, forces cyber components with high criticality to interact with lower-criticality cyber components. This interaction is complemented by external events from the environment that have a time impact on the CPS. Moreover, some programs of the cyber components may be executed on time-predictable processors and other programs on less time-predictable processors.
Different research communities study separately the three design phases of these systems: the modeling, the design and the analysis of CPSs [27]. These phases are repeated iteratively until an appropriate solution is found. During the first phase, the behavior of a system is often described using model-based methods. Other methods exist, but model-driven approaches are widely used by both the research and the industry communities. A solution described by a model is usually proved (functionally) correct by a formal verification method applied during the analysis phase (the third phase described below).
During the second phase, the physical components (e.g., sensors and actuators) and the cyber components (e.g., programs, messages and embedded processors) are often chosen among those available on the market. However, due to the ever-increasing pressure of the smartphone market, the microprocessor industry provides general-purpose processors based on multicore and, in the near future, manycore processors. These processors have complex architectures that are not time predictable due to features like multiple levels of caches and pipelines, speculative branching, and communication through shared memory or/and through a network on chip, the internet, etc. Due to the time unpredictability of some processors, the CPS industry nowadays faces the great challenge of estimating the worst-case execution times (WCETs) of programs executed on these processors. Indeed, the current complexity of both processors and programs does not allow proposing reasonable worst-case bounds. The design phase then ends with the implementation of the cyber components on such processors, where the models are transformed into programs (or messages for the communication channels) manually or by code generation techniques [18].
During the third phase, the analysis, the correctness of the cyber components is verified at the program level, where the functions of the cyber component are implemented. The execution times of programs are estimated either by static analysis, by measurements, or by a combination of both approaches [36].
These WCETs are then used as inputs to scheduling problems [29], the highest level of formalization for verifying the time properties of a CPS. Each program is given a start time within the schedule together with an assignment of resources (processor, memory, communication, etc.). Verifying that a schedule and an associated assignment are a solution for a scheduling problem is known as a schedulability analysis.
The current CPS design, exploiting formal descriptions of the models and their transformation into the physical and cyber parts of the CPS, ensures that the functional behavior of the CPS is correct. Unfortunately, there is no formal description guaranteeing today that the execution times of the generated programs are smaller than a given bound. Clearly, all communities working on CPS design are aware that computing takes time [26], but there is no CPS solution guaranteeing the time predictability of these systems, as the processors appear late within the design phase (see Figure 1). Indeed, the choice of the processor is made at the end of the CPS design process, after writing or generating the programs.
Since the processor appears late within the CPS design process, the CPS designer in charge of estimating the worst-case execution time of a program or analyzing the schedulability of a set of programs inherits a difficult problem. The main purpose of Kopernic is to propose compositional rules with respect to the time behaviour of a CPS, allowing the CPS design to be restricted to analyzable instances of the WCET estimation problem and of the schedulability analysis problem.
With respect to the WCET estimation problem, we say that a rule $\circ$ is compositional for any two sets of measured execution times $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ of a program $A$, and a WCET statistical estimator $p$, if we obtain a safe WCET estimation for $A$ from $p(\mathcal{C}_{1}\circ \mathcal{C}_{2})$. For instance, $\mathcal{C}_{1}$ may be the set of measured execution times of the program $A$ while all processor features except the local cache L1 are deactivated, while $\mathcal{C}_{2}$ is obtained, similarly, with a shared L2 cache activated. We consider that the variation of all input variables of the program $A$ follows the same sequence of values when measuring the execution times of the program $A$. With respect to the schedulability analysis problem, we are interested in analyzing graphs of communicating programs. A program $A$ communicates with a program $B$ if input variables of the program $B$ are among the output variables of the program $A$. A graph of communicating programs is a directed acyclic graph with programs as vertices; an edge from a program $A$ to a program $B$ is defined if $A$ communicates with $B$. The end-to-end response time of such a graph is the longest path from any source vertex to any sink vertex of the graph, if there is at least one path between these two vertices. A rule $\bigodot$ is compositional for any set of measured response times $\mathcal{R}_{A}$ of program $A$, any set of measured response times $\mathcal{R}_{B}$ of program $B$, and a schedulability analysis $S$ if we obtain a safe schedulability analysis from $S(\mathcal{R}_{A}\bigodot \mathcal{R}_{B})$.
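As an illustration of the graph model above, the end-to-end response time can be computed as a longest path over a topological order of the graph. The sketch below is a minimal example with hypothetical program names, assuming a single worst-case response-time bound per program (in practice $\mathcal{R}_A$ is a whole set of measured response times):

```python
from collections import defaultdict

def end_to_end_response_time(edges, bounds):
    """Longest path in a directed acyclic graph of communicating programs.

    edges:  (A, B) pairs meaning program A communicates with program B.
    bounds: per-program worst-case response-time bound (e.g., the
            maximum of the measured response times of that program).
    """
    succ, indeg = defaultdict(list), defaultdict(int)
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    # Kahn's algorithm: visit vertices in topological order, keeping
    # for each vertex the longest accumulated response time so far.
    dist = {v: bounds[v] for v in bounds}
    queue = [v for v in bounds if indeg[v] == 0]
    while queue:
        v = queue.pop()
        for w in succ[v]:
            dist[w] = max(dist[w], dist[v] + bounds[w])
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return max(dist.values())

# Hypothetical 4-program graph: A feeds B and C, which both feed D.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
bounds = {"A": 2.0, "B": 3.0, "C": 1.0, "D": 2.0}
print(end_to_end_response_time(edges, bounds))  # longest path A-B-D: 7.0
```
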
Before enumerating our scientific objectives, we introduce the concept of variability factors. More precisely, the time properties of a cyber component are subject to variability factors. We understand by variability the distance between the smallest and the largest value of a time property. With respect to the time properties of a CPS, the factors may be classified into three main classes:
 program structure: for instance, the execution time of a program that has two main branches is obtained, if appropriate composition principles apply, as the maximum between the largest execution time of each branch. In this case the branch is a variability factor on the execution time of the program;
 processor structure: for instance, the execution time of a program on a less predictable processor (e.g., one core, two levels of cache memory and one main memory) will have a larger variability than the execution time of the same program executed on a more predictable processor (e.g., one core, one main memory). In this case the cache memory is a variability factor on the execution time of the program;
 execution environment: for instance, the appearance of a pedestrian in front of a car triggers the execution of the program corresponding to the brakes in an autonomous car. In this case the pedestrian is a variability factor for the triggering of the program.
We identify three main scientific objectives to validate our research hypothesis. The three objectives are presented from program level, where we use statistical approaches, to the level of communicating programs, where we use probabilistic and nonprobabilistic approaches.
The Kopernic scientific objectives are:

[O1] worst-case execution time estimation of a program: modern processors induce an increased variability of the execution times of programs, making a complete static analysis to estimate such a worst case difficult (or even impossible). Our objective is to propose a solution composing probabilistic and non-probabilistic approaches, based both on static and on statistical analyses, by answering the following scientific challenges:
 a classification of the variability factors of the execution times of a program with respect to the processor features. The difficulty of this challenge is related to the definition of an element belonging to the set of variability factors and its mapping to the execution time of the program.
 a compositional rule of statistical models associated with each variability factor. The difficulty of this challenge comes from the fact that a global maximum of a multicore processor cannot be obtained by upper-bounding the local maxima on each core.

[O2] deciding the schedulability of all programs running within the same cyber component, given an energy budget: in this case the programs may have different time criticalities, but they share the same processor, possibly multicore1. Our objective is to propose a solution composing probabilistic and non-probabilistic approaches based on answers to the following scientific challenges:
 scheduling algorithms taking into account the interaction between different variability factors. The existence of time parameters described by probability distributions requires revisiting scheduling algorithms that lose their optimality even in the case of a unicore processor [30]. Moreover, the multicore partitioning problem is recognized as difficult in the non-probabilistic case [34];
 schedulability analyses based on the algorithms proposed previously. In the case of predictable processors, schedulability analyses accounting for operating system costs increase the time dependability of CPSs [32]. Moreover, in the presence of variability factors, the composition property of non-probabilistic approaches is lost and new principles are required.
 [O3] deciding the schedulability of all programs communicating through predictable and non-predictable networks, given an energy budget: in this case the programs of the same cyber component execute on the same processor and may communicate with the programs of other cyber components through networks that may be predictable (network on chip) or non-predictable (internet, telecommunications). Our objective is to propose a solution to this challenge by analysing the schedulability of programs, for which (worst-case) probabilistic solutions exist [31], communicating through networks, for which probabilistic worst-case solutions [19] and average-case solutions [28] exist.
3 Research program
The research program for reaching these three objectives is organized according to three main research axes:
 Worst case execution time estimation of a program, detailed in Section 3.1;
 Building measurementbased benchmarks, detailed in Section 3.2;
 Scheduling of graph tasks on different resources within an energy budget, detailed in Section 3.3.
3.1 Worst case execution time estimation of a program
The temporal study of real-time systems is based on the estimation of bounds for their temporal parameters, and more precisely the WCET of a program executed on a given processor. The main analyses for estimating WCETs are static analyses [36], dynamic analyses [20], also called measurement-based analyses, and finally hybrid analyses that combine the two previous ones [36].
The Kopernic approach for solving the WCET estimation problem is based on (i) the identification of the impact of variability factors on the execution of a program on a processor and (ii) the proposition of compositional rules allowing the impact of each factor to be integrated within a WCET estimation. Historically, the real-time community has illustrated the distributions of execution times of programs as heavy-tailed ones, as large execution times of programs are intuitively agreed to have a low probability of appearance. For instance, Tia et al. were the first to underline this intuition, within a paper introducing execution times described by probability distributions into a single-core schedulability analysis [35]. Since [35], a low probability has been associated with large values of the execution times of a program executed on a single-core processor. It is, finally, in 2000 that the group of Alan Burns, within the thesis of Stewart Edgar [21], formalized this property as a conjecture indicating that a maximal bound on the execution times of a program may be estimated by the Extreme Value Theory [24]. No mathematical definition of what this bound represents for the execution time of a program was proposed at that time. Two years later, a first attempt to define this bound was made by Bernat et al. [17], but the proposed definition extends the static WCET understanding as a combination of execution times of basic blocks of a program. Extremely pessimistic, the definition remains intuitive, without an associated mathematical description. After 2013, several publications from Liliana Cucu-Grosjean's group at Inria Nancy introduced a mathematical definition of a probabilistic worst-case execution time and, respectively, a probabilistic worst-case response time, as an appropriate formalization for a correct application of the Extreme Value Theory to real-time problems [2, 1, 5].
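A minimal sketch of the block-maxima application of the Extreme Value Theory to execution-time traces follows. The trace is synthetic (a Gumbel sample standing in for measured execution times), and the block size and exceedance probability are illustrative choices only, not the calibrated settings of the team's estimators:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic stand-in for a trace of measured execution times (in cycles).
times = rng.gumbel(loc=1000.0, scale=25.0, size=5000)

# Block-maxima step: split the trace into blocks, keep each block maximum.
block = 50
maxima = times[: len(times) // block * block].reshape(-1, block).max(axis=1)

# Fit a GEV distribution to the block maxima.
shape, loc, scale = genextreme.fit(maxima)

# A pWCET estimate at an exceedance probability of 1e-9 per run is the
# corresponding quantile of the fitted GEV distribution.
pwcet = genextreme.ppf(1.0 - 1e-9, shape, loc=loc, scale=scale)
```

Selecting the block size (and, for the GPD variant, the threshold) is precisely the delicate step that the tooling described later in this report automates.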
We identify the following open research problems related to the first research axis:
 the generalization of the modes analysis to multidimensional settings, each dimension representing a program, when several programs cooperate;
 the proposition of a set of rules for building programs that are time predictable for the internal architecture of a given single-core and, then, of a multicore processor;
 modeling the impact of processor features on energy consumption, to better integrate both the worst-case execution time and the schedulability analyses considered within the third research axis of this proposal.
3.2 Building measurementbased benchmarks
The real-time community is facing a lack of benchmarks adapted to measurement-based analyses. Existing benchmarks for the estimation of WCETs [33, 25, 22] have been used mainly for static analyses. They contain very simple programs and are not accompanied by a measurement protocol. They do not take into account functional dependencies between programs, mainly due to shared global variables, which, of course, influence their execution times. Furthermore, current benchmarks do not take into account interferences due to the competition for resources, e.g., the memory shared by the different cores of a multicore. On the other hand, measurement-based analyses require execution times measured while executing programs on embedded processors similar to those used in the embedded systems industry. For example, the mobile phone industry uses multicores based on non-predictable cores with complex internal architectures, such as those of the ARM Cortex-A family. In the near future, these multicores will be found in critical embedded systems in application domains such as avionics, autonomous cars, railway, etc., in which the team is deeply involved. This dramatically increases the complexity of measurement-based analyses compared to analyses performed on general-purpose personal computers, as they are currently performed.
We understand by measurement-based benchmarks a 3-tuple composed of a program, a processor and a measurement protocol. The associated measurement protocols should detail the variation of the input variables (associated with sensors) of these benchmarks and their impact on the output variables (associated with actuators), as well as the variation of the processor states.
Proposing the reproducibility and representativity properties that measurement-based benchmarks should follow is the strength of this research axis [7]. We understand by reproducibility the property of a measurement protocol to provide the same ordered set of execution times for a fixed pair (program, processor). We understand by representativity the existence of a (sufficiently small) number of values for the input variables allowing a measurement protocol to provide an ordered set of execution times that ensures convergence of the Extreme Value Index estimators.
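For heavy-tailed measurements, the convergence required by the representativity property can be monitored with, for instance, a Hill-type estimator of the Extreme Value Index. The sketch below uses synthetic Pareto data with a known index of 0.5 (real inputs would come from the measurement protocol); stability of the estimate across several values of k is the behaviour one looks for:

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the extreme value index, computed from the
    k largest order statistics (valid for heavy-tailed samples)."""
    x = np.sort(np.asarray(sample, dtype=float))
    top = x[-(k + 1):]                    # the k+1 largest values
    return float(np.mean(np.log(top[1:]) - np.log(top[0])))

# Synthetic heavy-tailed data: exact Pareto with tail index 2,
# hence an extreme value index of 1/2.
rng = np.random.default_rng(1)
sample = rng.pareto(2.0, size=20000) + 1.0

# The estimate should stabilize around 0.5 as k grows.
estimates = [hill_estimator(sample, k) for k in (250, 500, 1000)]
```
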
Within this research axis we identify the following open problems:
 proving the reproducibility and representativity properties while extending current benchmarks from predictable unicore processors (e.g., ARM Cortex-M4) to non-predictable ones (e.g., ARM Cortex-A53 or Cortex-A7);
 proving reproducibility and representativity properties while extending unicore benchmarks to multicore processors. In this context, we face the supplementary difficulty of defining the principles that an operating system should satisfy in order to ensure a realtime behaviour.
3.3 Scheduling of graph tasks on different resources within an energy budget
Following the model-driven approach, the functional description of the cyber part of the CPS is performed as a graph of dependent functions, e.g., a block diagram of functions in Simulink, the most widely used modeling/simulation tool in industry. Of course, a program is associated with every function. Since the graph of dependent programs becomes a set of dependent tasks when real-time constraints must be taken into account, we face the problem of verifying the schedulability of such dependent task sets when executed on a multicore processor.
Directed Acyclic Graphs (DAGs) are widely used to model different types of dependent task sets. The typical model consists of a set of independent tasks, where every task is described by a DAG of dependent subtasks sharing the same period, inherited from the period of the task [16]. In such a DAG, the subtasks are vertices and the edges are dependencies between subtasks. This model is well suited to represent, for example, the engine controller of a car described with Simulink. The multicore schedulability analysis may be of two types, global or partitioned. To reduce interference and interactions between subtasks, we focus on partitioned scheduling, where each subtask is assigned to a given core [23, 6, 8].
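A minimal sketch of the partitioning step, under the simplifying assumptions of a single utilization value per subtask and a unit capacity per core; the subtask names and values are hypothetical, and this worst-fit decreasing heuristic is only one of many possible assignment policies, not one of the cited analyses:

```python
def worst_fit_partition(utilizations, n_cores):
    """Assign each subtask to the currently least-loaded core,
    taking subtasks in decreasing order of utilization and keeping
    every core's total utilization at most 1."""
    loads = [0.0] * n_cores
    assignment = {}
    for task, u in sorted(utilizations.items(), key=lambda kv: -kv[1]):
        core = min(range(n_cores), key=loads.__getitem__)
        # Worst-fit: if the least-loaded core cannot host the subtask,
        # no core can.
        if loads[core] + u > 1.0:
            raise ValueError(f"subtask {task!r} does not fit on any core")
        loads[core] += u
        assignment[task] = core
    return assignment, loads

# Hypothetical subtask utilizations of one DAG, partitioned on 2 cores.
assignment, loads = worst_fit_partition(
    {"s1": 0.6, "s2": 0.5, "s3": 0.4, "s4": 0.3}, n_cores=2)
```
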
In order to propose a general DAG task model, we identify the following open research problems:
 solving the schedulability problem where probabilistic DAG tasks are executed on predictable and non-predictable processors, and such that some tasks communicate through predictable networks, e.g., inside a multicore or a manycore processor, and non-predictable networks, e.g., between these processors through the internet. Within this general schedulability problem, we consider five main classes of scheduling algorithms that we adapt to solve probabilistic DAG task scheduling problems. We compare the new algorithms with respect to their energy consumption in order to propose new versions with a decreased energy consumption, by integrating frequency variation for processor features like the CPU or memory accesses.
 the validation of the proposed framework on our multicore drone case study. To answer the challenging objective of proposing time-predictable platforms for drones, we are currently migrating the PX4-RT programs to heterogeneous architectures. This includes an implementation of the scheduling algorithms detailed within this research axis in the current operating system, NuttX2.
4 Application domains
4.1 Avionics
Time critical solutions in this context are based on temporal and spatial isolation of the programs and the understanding of multicore interferences is crucial. Our contributions belong mainly to the solutions space for the objective [O1] identified previously.
4.2 Railway
Time critical solutions in this context concern both the proposition of an appropriate scheduler and the associated schedulability analyses. Our contributions belong to the solutions space of problems dealt with within objectives [O1] and [O2] identified previously.
4.3 Autonomous cars
Time critical solutions in this context concern the interaction between programs executed on multicore processors and messages transmitted through wireless communication channels. Our contributions belong to the solutions space of all three classes of problems dealt with within all three Kopernic objectives identified previously.
4.4 Drones
As in the case of autonomous cars, there is an interaction between programs and messages, suggesting that our contributions in this context belong to the solutions space of all three classes of problems dealt with within the objectives identified previously.
5 Social and environmental responsibility
5.1 Impact of research results
The Kopernic members provide theoretical means to decrease both the processor utilization and the energy consumption. Such gains are estimated at $30\%$ to $60\%$, either as utilization gains for existing architectures or as energy-consumption gains for new architectures, obtained by decreasing the number of necessary cores.
6 Highlights of the year
6.1 Awards
Liliana Cucu-Grosjean and Adriana Gogonel have been nominated among the 100 most innovative French inventors of the year 2023 by the newspaper La Tribune (France). Liliana Cucu-Grosjean has been nominated among the 100 most influential Romanians of 2023 by the newspaper Newsweek (Romania). StatInf, a spin-off of the Kopernic team, has been awarded within the TechForFuture contest in Paris in the category "Industrie du Futur".
6.2 Keynotes
Liliana Cucu-Grosjean gave a keynote entitled "Probabilities – a means to gain time and space when designing CPS" at the 2023 edition of the HiPEAC conference.
6.3 Nominations
Liliana Cucu-Grosjean was nominated to run in the election and was elected vice-chair of the IEEE Technical Committee on Real-Time Systems (TCRTS). She started her office on January 1st, 2024 as the first female vice-chair of this IEEE committee after more than 40 years of existence. According to the IEEE TCRTS rules, she becomes chair of this committee on January 1st, 2026 for a period of two years.
7 New software, platforms, open data
7.1 New software
7.1.1 SynDEx

Keywords:
Distributed, Optimization, Real time, Embedded systems, Scheduling analyses

Scientific Description:
SynDEx is a system-level CAD software implementing the AAA methodology for rapid prototyping and for optimizing distributed real-time embedded applications. It is developed in OCaml.
Architectures are represented as graphical block diagrams composed of programmable (processors) and non-programmable (ASIC, FPGA) computing components, interconnected by communication media (shared memories, links and buses for message passing). In order to deal with heterogeneous architectures, an architecture may feature several components of the same kind but with different characteristics.
Two types of non-functional properties can be specified for each task of the algorithm graph. First, a period that does not depend on the hardware architecture. Second, real-time features that depend on the different types of hardware components, such as execution and data transfer times, memory, etc. Requirements are generally constraints on deadlines equal to periods, latency between any pair of tasks in the algorithm graph, dependence between tasks, etc.
Exploration of alternative allocations of the algorithm onto the architecture may be performed manually and/or automatically. The latter is achieved by performing real-time multiprocessor schedulability analyses and optimization heuristics based on the minimization of temporal or resource criteria. For example, while satisfying deadline and latency constraints, they can minimize the total execution time (makespan) of the application onto the given architecture, as well as the amount of memory. The results of each exploration are visualized as timing diagrams simulating the distributed real-time implementation.
Finally, real-time distributed embedded code can be automatically generated for dedicated distributed real-time executives, possibly calling services of resident real-time operating systems such as Linux/RTAI or OSEK. These executives are deadlock-free and based on offline scheduling policies. Dedicated executives induce minimal overhead and are built from processor-dependent executive kernels. To date, executive kernels are provided for: TMS320C40, PIC18F2680, i80386, MC68332, MPC555, i80C196 and Unix/Linux workstations. Executive kernels for other processors can be developed at reasonable cost using these examples as patterns.

Functional Description:
Software for optimising the implementation of embedded distributed real-time applications and generating efficient and correct-by-construction code.
 URL:

Contact:
Yves Sorel

Participant:
Yves Sorel
7.1.2 EVT Kopernic

Keywords:
Embedded systems, Worst-Case Execution Time, Real-time application, Statistics

Scientific Description:
The EVT-Kopernic tool is an implementation of the Extreme Value Theory (EVT) for the problem of the statistical estimation of worst-case bounds for the execution time of a program on a processor. Our implementation uses the two versions of EVT, GEV and GPD, to propose two independent estimation methods. Their results are compared and only results that are sufficiently close validate an estimation. Our tool is proved predictable by its unique choice of block size (GEV) and threshold (GPD), while providing reproducible estimations.

Functional Description:
EVT-Kopernic is a tool proposing a statistical estimation of bounds on the worst-case execution time of a program on a processor. The estimator takes into account dependences between execution times by learning from the execution history, while also dealing with cases of small variability of the execution times.
 URL:

Contact:
Adriana Gogonel

Participants:
Adriana Gogonel, Liliana Cucu
7.2 Open data
The Kopernic members contribute to the effort of reproducing research results and numerical evaluations by proposing dynamic benchmarks for the statistical analysis of measured execution times, memory accesses and other traces obtained during the hardware-in-the-loop execution of the open-source PX4-RT programs. A set of measurements obtained from the execution of the PX4-RT programs on the Pixhawk board is shared with the community under the name of KDBench, a.k.a. Kopernic Dynamic Benchmarks. The data related to KDBench are available at https://team.inria.fr/kopernic/kdbench/ and the contact person is Liliana Cucu-Grosjean.
8 New results
During this year, the results of the Kopernic members have covered all three Kopernic research axes.
8.1 Worst case execution time estimation of a program
Participants: Slim Ben Amor, Rihab Bennour, Liliana Cucu-Grosjean, Adriana Gogonel, Kossivi Kougblenou, Marwan Wehaiba El Khazen.
We consider WCET statistical estimators that are based on the utilization of the Extreme Value Theory [24]. Compared to existing methods [36], our results require the execution of the program under study on the targeted processor, or at least on a cycle-accurate simulator of this processor. We concentrate on the separation of hardware and software impacts [9], where the correct application of the Extreme Value Theory brings an important support [10].
The originality of this year's results concerns the migration from WCET estimation to the estimation of the worst-case energy consumption [12]. More precisely, we propose a new statistical model introducing the impact of both software and hardware events on the estimation of the worst-case energy consumption of programs on embedded processors. To achieve this purpose, we build a framework to better understand the representativeness of measurements with respect to both software and hardware events. During this year, we tested this framework on both execution time and energy consumption data for existing benchmarks, as a first step towards a complete statistical worst-case energy consumption model. We denote by WCEC the worst-case energy consumption of a program.
Another original aspect of this work is the definition of critical paths in the context of worst-case statistical estimations. Indeed, while a static analysis calculates a worst-case value based on a critical path, for statistical estimators we define a similar concept of critical paths. For a program $A$ executed on a processor $\Pi$ and a measurement protocol $\mathcal{M}$, we obtain an ordered sequence of paths ${\left({P}_{j}\right)}_{j}$ and of execution times or energy consumption values, denoted by ${V}^{*}$, depending on the choice of target variable (WCET or WCEC).
The worst-case execution time of a program $A$ is defined as its largest execution time for any valid execution scenario $S$, while the probabilistic worst-case execution time (pWCET) $\mathcal{C}_{\mathcal{T}}$ of a program is an upper bound on all possible probabilistic execution times ${\mathcal{C}_{\mathcal{T}}}_{i}$ for all possible execution scenarios ${S}_{i},\forall i\ge 1$ (each scenario ${S}_{i}$ thus defines a probability distribution of execution times ${\mathcal{C}_{\mathcal{T}}}_{i}$). The relation $\succeq$ between the probabilistic execution times (pETs) of a program and its pWCET, $\mathcal{C}_{\mathcal{T}}\succeq {\mathcal{C}_{\mathcal{T}}}_{i}$, $\forall i$, is defined as follows. One writes $\mathcal{C}_{\mathcal{T}}\succeq {\mathcal{C}_{\mathcal{T}}}_{i}$, or $\mathcal{C}_{\mathcal{T}}$ is said to be worse than ${\mathcal{C}_{\mathcal{T}}}_{i}$, if its complementary cumulative distribution function (the survival function, 1-CDF) associates a higher or equal probability to each possible value, i.e., $P(\mathcal{C}_{\mathcal{T}}\ge c)\ge P({\mathcal{C}_{\mathcal{T}}}_{i}\ge c)$, $\forall c$ and $\forall i\ge 1$.
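On empirical data, this "worse than" relation between a candidate pWCET distribution and observed probabilistic execution times can be checked by comparing empirical survival functions; a minimal sketch with hypothetical samples:

```python
import numpy as np

def survival(sample, values):
    """Empirical complementary CDF: P(X >= v) for each v in values."""
    s = np.sort(np.asarray(sample, dtype=float))
    # searchsorted(..., side="left") counts the elements strictly below v.
    return 1.0 - np.searchsorted(s, values, side="left") / len(s)

def is_worse(candidate, observed):
    """True when the candidate distribution dominates the observed one,
    i.e., P(candidate >= c) >= P(observed >= c) at every observed value."""
    values = np.union1d(candidate, observed)
    return bool(np.all(survival(candidate, values)
                       >= survival(observed, values) - 1e-12))

# Hypothetical samples: shifting a distribution to the right makes it worse.
print(is_worse([2.0, 3.0, 4.0], [1.0, 2.0, 3.0]))  # True
print(is_worse([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # False
```
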
Based on the previous (p)WCET definitions, we may define the analogous concepts of the worst-case energy consumption (WCEC) and of the probabilistic WCEC (pWCEC) of a program as follows. The WCEC of a program is its largest energy consumption during the execution of that program over all valid execution scenarios, while the probabilistic worst-case energy consumption ${\mathcal{C}}_{\mathcal{E}}$ of a program is an upper bound on all possible probabilistic energy consumption profiles ${{\mathcal{C}}_{\mathcal{E}}}_{i}$ for all possible execution scenarios ${S}_{i},\forall i\ge 1$. The relation $\succeq$ describes, as with pETs, the relation between the probabilistic energy consumption (pEC) of a program and its pWCEC, such that ${\mathcal{C}}_{\mathcal{E}}\succeq {{\mathcal{C}}_{\mathcal{E}}}_{i}$, $\forall i$. We say that a path ${P}_{j}$ of a program $A$ is critical with respect to a pWCET estimation of $A$ if at least one measurement of the execution of that path appears within the set of measurements building the pWCET estimate of $A$. Similarly, a path ${P}_{j}$ of a program $A$ is critical with respect to a pWCEC estimation of $A$ if at least one measurement of the energy consumption associated to the execution of that path appears within the set of measurements building the pWCEC estimate of $A$. For instance, in Figure 2 we illustrate, for the program $Dec$ 25 executed on an ARM microcontroller, measured energy consumption values in two colors, where points with the same color identify paths that are equivalent with respect to the worst-case energy consumption estimation.
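The critical-path notion can be sketched as follows, under the assumption (illustrative only) that the pWCEC estimate is built from the measurements above a tail threshold, as in peaks-over-threshold fitting; the path names, energy values, and threshold are hypothetical:

```python
def critical_paths(measurements, threshold):
    """measurements: iterable of (path_id, energy) pairs.
    A path is critical if at least one of its measurements belongs to the
    set used to build the estimate (here: the values above the threshold)."""
    return {path for path, energy in measurements if energy >= threshold}

# Hypothetical energy measurements (mJ) for three paths of a program.
data = [("P1", 4.1), ("P1", 4.3), ("P2", 5.9), ("P2", 6.2), ("P3", 3.0)]
print(critical_paths(data, threshold=5.0))   # {'P2'}
```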
8.2 Building measurementbased benchmarks: KDBench
Participants: Slim Ben Amor, Rihab Bennour, Masum Bin Alam, Liliana CucuGrosjean, Ismail Hawila, Yves Sorel, Marwan Wehaiba El Khazen.
KDBench, our measurement-based benchmark, is obtained by modifying open-source programs of the PX4 autopilot, designed to control a wide range of air, land, sea and underwater drones. More precisely, the studied programs are executed on a standard Pixhawk drone board based on a predictable single-core ARM Cortex-M4 processor. During the CEOS project we transformed this set of programs into a set of dependent tasks satisfying real-time constraints, leading to a new version of PX4 called PX4-RT. As usual, the set of dependent real-time tasks is defined by a data-dependency graph. An interested reader may refer to the web page of the Kopernic team at The KDBench website.
During this year, the data associated with KDBench has been collected from 3 flights following different scenarios, where the periods of the programs vary from one scenario to another. We consider 4 different scenarios per flight and thus obtain 12 sets of collected data, 4 sets for each flight. Moreover, different data are extracted from the benchmark after each flight completes.
We collect data from three different flights generated using the QGroundControl software, which provides flight planning for MAVLink-enabled drones. The three flights are described as follows:
 during the first flight, short_flight, the drone follows a simple straight-line trajectory; the duration of the flight is 23 s,
 during the second flight, medium_flight, we choose a trajectory with a sudden change of the drone orientation, in addition to other waypoints where we change the altitude; the duration of this flight is around 43 s,
 the third flight, long_flight, has more waypoints than the second flight and its duration is 2 min 48 s.
More precisely, we have considered the execution of 9 real-time tasks (or programs) of PX4-RT. They are listed in the table below together with their activation periods. In the first line of this table, tasks are ordered by priority, from the highest (on the left) to the lowest (on the right). For instance, the task Sensors has the highest priority while Commander has the lowest. The last 4 lines of the table contain the activation periods of each task within the 4 considered measurement flight scenarios, where the periods ${T}_{i}$ are indexed by the scenario number $i$.
Tasks  Sensors  EKF2  Rate  Attitude  Position  Flight manager  Hover thrust  Nav  Cmd 
${T}_{1}$ (ms)  10  10  15  15  15  15  15  25  50 
${T}_{2}$ (ms)  4  4  4  4  4  4  4  4  4 
${T}_{3}$ (ms)  8  8  8  8  8  8  8  8  8 
${T}_{4}$ (ms)  12  12  12  12  12  12  12  12  12 
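The priority ordering and periods above can be fed to a classical fixed-priority response-time analysis. A minimal sketch follows, using the periods of scenario ${T}_{2}$ from the table; the WCET values are hypothetical placeholders, not measured KDBench data:

```python
import math

def response_time(C, T, i):
    """Worst-case response time of task i under preemptive fixed-priority
    scheduling (tasks 0..i-1 have higher priority), assuming synchronous
    releases and deadlines equal to periods (Joseph-Pandya recurrence)."""
    R = C[i]
    while True:
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R
        if R_next > T[i]:        # deadline (= period) exceeded
            return None
        R = R_next

T = [4] * 9                                           # scenario T_2 periods (ms)
C = [0.3, 0.5, 0.2, 0.2, 0.3, 0.2, 0.2, 0.4, 0.5]     # hypothetical WCETs (ms)
for i in range(len(T)):
    print(f"task {i}: R = {response_time(C, T, i)} ms")
```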
8.3 Scheduling of graph tasks on different resources within an energy budget
Participants: Slim Ben Amor, Liliana CucuGrosjean, Ismail Hawila, Kossivi Kougblenou, Yves Sorel, Kevin Zagalo.
Due to the widespread use of multicore processors in embedded and real-time systems, we concentrate our work on the schedulability of graph tasks on such processors. We consider preemptive (both global and partitioned) fixed-priority scheduling policies. We monitor, when possible, the energy consumption required to meet deadlines as a metric for comparing the efficiency of scheduling policies.
Given the difficulty of our scheduling problem, we first considered the single-processor case for independent tasks, for which a feasibility interval has been proved 11, 14. To this problem we have added precedence constraints, and we study the specific case of real-time control tasks as an important means to understand how probabilities may be associated with the inter-arrival times between consecutive instances of the same program or task, as well as to understand the motivation for precedence constraints between tasks and/or their instances.
With respect to the proposition of a new task model merging scheduling and WCET concerns for real-time control systems, we require this new task model to fulfill the following expectations:
 the data and/or precedence constraints between different control tasks as well as noncontrol tasks are considered;
 the impact of data constraints on the execution times of control tasks is considered;
 the impact of period variation between dependent control tasks is considered.
Such expectations, applied to the PX4-RT programs, require indicating the appropriate precedence constraints. In Figure 3, we represent by continuous-line edges the precedence constraints between tasks and by dotted-line edges the data constraints of the PX4-RT program 13.
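The task graph described above can be encoded as a DAG with two edge kinds, precedence (continuous lines in Figure 3) and data (dotted lines). A minimal sketch follows; the task names and edges are hypothetical, chosen only to mirror the table of Section 8.2:

```python
from collections import defaultdict

class TaskGraph:
    """DAG of tasks with 'precedence' and 'data' edges."""
    def __init__(self):
        self.succ = defaultdict(list)   # task -> [(successor, kind)]

    def add_edge(self, src, dst, kind):
        assert kind in ("precedence", "data")
        self.succ[src].append((dst, kind))

    def order_respected(self, schedule):
        """Check that a given execution order (list of task names)
        respects every precedence edge; data edges do not constrain order."""
        pos = {t: i for i, t in enumerate(schedule)}
        return all(pos[s] < pos[d]
                   for s, edges in self.succ.items()
                   for d, kind in edges if kind == "precedence")

g = TaskGraph()
g.add_edge("Sensors", "EKF2", "precedence")    # hypothetical edges
g.add_edge("EKF2", "Attitude", "precedence")
g.add_edge("Sensors", "Attitude", "data")
print(g.order_respected(["Sensors", "EKF2", "Attitude"]))   # True
```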
9 Bilateral contracts and grants with industry
9.1 CIFRE Grant funded by StatInf
Participants: Liliana CucuGrosjean, Adriana Gogonel, Marwan Wehaiba El Khazen.
A CIFRE agreement between the Kopernic team and the startup StatInf started on October 1st, 2020. Its funding supports the study of the evolution of WCET models to take energy consumption into account, in line with Kopernic research objectives.
9.2 CIFRE Grant funded by StatInf
Participants: Liliana CucuGrosjean, Slim Ben Amor, Ismail Hawila, Yves Sorel.
A CIFRE agreement between the Kopernic team and the startup StatInf started on October 1st, 2022. Its funding supports the study of the relation between control-theory robustness and the schedulability problem using probabilistic descriptions, in line with Kopernic research objectives.
10 Partnerships and cooperations
10.1 International initiatives
10.1.1 Associate Teams in the framework of an Inria International Lab or in the framework of an Inria International Program
Kepler
Participants: Slim Ben Amor, Liliana CucuGrosjean, Adriana Gogonel, Ismail Hawila, Kossivi Kougblenou, Yves Sorel, Marwan Wehaiba El Khazen.
 Title: Probabilistic foundations for time, a key concept for the certification of cyberphysical systems

Partner Institution(s):
 Universidade Federal da Bahia (Brazil)
 Duration: since 2020, 5 years
10.2 International research visitors
10.2.1 Visits to international teams
Research stays abroad
Marwan Wehaiba El Khazen visited the UFBA team in November 2023 while working on the results presented in 12.
10.2.2 Other European programs/initiatives
The Kopernic members participate in the EU COST action CERCIRAS.
10.3 National initiatives
10.3.1 PSPC
STARTREC
The STARTREC project has been funded by the PSPC call until September 2023. Its partners are Easymile, StatInf, Trustinsoft, Inria and CEA. Its objective is the proposition of ISO 26262-compliant arguments for autonomous driving. The results are described within Section 8.
11 Dissemination
Participants: Liliana CucuGrosjean, Adriana Gogonel, Yves Sorel.
11.1 Promoting scientific activities
11.1.1 Scientific events: selection
Chair of conference program committees
Liliana CucuGrosjean has been the Embedded Systems track chair at DATE 2023
Member of the conference program committees
All Kopernic members are regular PC members for relevant conferences like IEEE RTSS, RTAS, ECRTS, DATE, ETFA, WFCS and RTNS.
11.1.2 Journal
Member of the editorial boards
Liliana CucuGrosjean is associated editor at the Journal of Systems Architecture
Reviewer  reviewing activities
All Kopernic members regularly serve as reviewers for the main journals of our domain: the Real-Time Systems journal, IEEE Transactions on Computers, Information Processing Letters, Journal of Heuristics, Journal of Systems Architecture, Journal of Signal Processing Systems, Leibniz Transactions on Embedded Systems, IEEE Transactions on Industrial Informatics, etc.
11.1.3 Invited talks
Liliana CucuGrosjean has been keynote at the HIPEAC 2023 Conference in Toulouse
11.1.4 Leadership within the scientific community
Liliana CucuGrosjean was nominated to run for the election and has been elected vice-chair of the IEEE Technical Committee on Real-Time Systems (TCRTS). She started her term on January 1st, 2024 as the first female vice-chair of this IEEE committee in its more than 40 years of existence. According to the TCRTS rules, she will become chair of the committee on January 1st, 2026 for a period of two years.
11.1.5 Scientific expertise
 Yves Sorel is a member of the Steering Committee of Advanced engineering and computing Hub of Systematic ParisRegion Cluster
 Yves Sorel is a member of the Steering Committee Orientations and Programs of SystemX Institute for Technological Research (IRT).
11.1.6 Research administration
Yves Sorel is a member of the CDT Paris center commission
11.2 Teaching  Supervision  Juries
11.2.1 Teaching
Liliana CucuGrosjean, Initiation to research, MSc degree in Embedded Systems at University of Saclay
11.2.2 Supervision
 Ismail Hawila, Multicore scheduling of real-time control systems of probabilistic tasks with precedence constraints, Sorbonne University, started in October 2022, supervised by Liliana CucuGrosjean and Slim Ben Amor (StatInf)
 Marwan Wehaiba El Khazen, Statistical models for optimizing the energy consumption of cyber-physical systems, Sorbonne University, started in October 2020, supervised by Liliana CucuGrosjean and Adriana Gogonel (StatInf), with a defense expected in September 2024.
 Kevin Zagalo, Statistical predictability of cyber-physical systems, Sorbonne University, PhD thesis defended on September 23rd, 2023, supervised by Liliana Cucu and Prof. Avner BarHen (CNAM).
11.2.3 Juries
 Liliana CucuGrosjean has been chair of the PhD defense committee of:
 Hadjer Benmeziane, supervised by Smail Niar, Kaoutar El Maghraoui and Hamza Ouarnoughi at Université Polytechnique HautsdeFrance, defense on August 30th, 2023
 Nan Li, supervised by Laurent Pautet and Eric Goubault at Institut polytechnique de Paris, defense on March 24th, 2023
 Liliana CucuGrosjean has been a reviewer for the following theses of:
 Frédéric Ridouard, at Ecole nationale supérieure de mécanique et d'aérotechnique, HDR defense on November 27th, 2023
 Gabriella Bettonte, supervised by Stéphane Louise at University of Paris-Saclay, PhD defense on January 12th, 2023
 IllHam Atchadam, supervised by Frank Singhoff at University of Brest, PhD defense on March 23rd, 2023
 Liliana CucuGrosjean has been member of the PhD defense jury of:
 Matheus Ladeira Boechat Lemos supervised by Emmanuel Grolleau and Yassine Ouhammou at Ecole nationale supérieure de mécanique et d'aérotechnique, defense on November 11th, 2023
11.3 Popularization
11.3.1 Internal or external Inria responsibilities
Liliana CucuGrosjean is the Inria national harassment referee within the FSCSA
11.3.2 Articles and contents
 Liliana CucuGrosjean shares her vision on the future of embedded systems with HiPEAC Vision readers
 Liliana CucuGrosjean shares her experience as a woman in Computer Science with the EU project Admorph participants.
11.3.3 Interventions
 Adriana Gogonel has participated in the panel "Competences challenges" organized during the Fête des Startups at the Cybersecurity Campus
 Adriana Gogonel has participated in the expert panel "Skilled Labour Shortage" in Nuremberg at the Embedded World Conference
 Adriana Gogonel has participated in the panel "The place of women as deeptech entrepreneurs in France" organized by Starburst at the event Meet up Atechna, Paris
12 Scientific production
12.1 Major publications
 1 patentSimulation Device.FR2016/050504FranceMarch 2016, URL: https://hal.archivesouvertes.fr/hal01666599back to text
 2 inproceedingsMeasurementBased Probabilistic Timing Analysis for Multipath Programs.the 24th Euromicro Conference on RealTime Systems, ECRTS2012, 91101back to text
 3 articleA Survey of Probabilistic Schedulability Analysis Techniques for RealTime Systems.Leibniz Transactions on Embedded Systems612019, 53HALDOIback to text
 4 articleA Survey of Probabilistic Timing Analysis Techniques for RealTime Systems.Leibniz Transactions on Embedded Systems612019, 60HALDOIback to text
 5 patentDispositif de caractérisation et/ou de modélisation de temps d'exécution pirecas.1000408053FranceJune 2017, URL: https://hal.archivesouvertes.fr/hal01666535back to text
 6 inproceedingsLatency analysis for data chains of realtime periodic tasks. the 23rd IEEE International Conference on Emerging Technologies and Factory Automation, ETFA'18September 2018back to text
 7 articleReproducibility and representativity: mandatory properties for the compositionality of measurementbased WCET estimation approaches.SIGBED Review1432017, 2431back to text
 8 inproceedingsScheduling Realtime HiL Cosimulation of CyberPhysical Systems on Multicore Architectures. the 24th IEEE International Conference on Embedded and RealTime Computing Systems and ApplicationsAugust 2018back to text
12.2 Publications of the year
International journals
 9 articleOn the impact of hardwarerelated events on the execution of realtime programs.Design Automation for Embedded Systems274December 2023, 275302HALDOIback to text
 10 articleOn vulnerabilities in EVTbased timing analysis: an experimental investigation on a multicore architecture.Design Automation for Embedded SystemsOctober 2023HALDOIback to text
 11 articleResponse Time Stochastic Analysis for FixedPriority Stable RealTime Systems.IEEE Transactions on ComputersJanuary 2023, 112HALDOIback to text
International peerreviewed conferences
 12 inproceedingsWork in progress: Towards a statistical worstcase energy consumption model.2023 IEEE 29th RealTime and Embedded Technology and Applications Symposium (RTAS)San Antonio, FranceIEEEMay 2023, 333336HALDOIback to textback to text
Conferences without proceedings
 13 inproceedingsTowards a new task model for merging control theory and realtime scheduling problems.16th Junior Researcher Workshop on RealTime ComputingDortmund, GermanyJune 2023HALback to text
Doctoral dissertations and habilitation theses
 14 thesisStochastic analysis of stationary realtime systems.Sorbonne UniversitéSeptember 2023HALback to text
12.3 Cited publications
 15 bookCyberphysical systems.IEEE2011back to text
 16 inproceedingsA Generalized Parallel Task Model for Recurrent Realtime Processes.2012 IEEE 33rd RealTime Systems Symposium (RTSS)2012, 6372back to text
 17 inproceedingsWCET Analysis of Probabilistic Hard RealTime System.Proceedings of the 23rd IEEE RealTime Systems Symposium (RTSS'02)IEEE Computer Society2002, 279288back to text
 18 inproceedingsA SynchronousBased Code Generator for Explicit Hybrid Systems Languages.Compiler Construction  24th International Conference, CC, Joint with ETAPS2015, 6988back to text
 19 inproceedingsPreliminary results for introducing dependent random variables in stochastic feasiblity analysis on CAN.the WIP session of the 7th IEEE International Workshop on Factory Communication Systems (WFCS)2008back to text
 20 articleA Survey of Probabilistic Timing Analysis Techniques for RealTime Systems.LITES612019, 03:103:60back to text
 21 inproceedingsStatistical Analysis of WCET for Scheduling.the 22nd IEEE RealTime Systems Symposium (RTSS)2001, 215225back to text
 22 inproceedingsTACLeBench: A Benchmark Collection to Support WorstCase Execution Time Research.16th International Workshop on WorstCase Execution Time Analysis (WCET)55OASICS2016, 2:12:10back to text
 23 inproceedingsResponse time analysis of sporadic DAG tasks under partitioned scheduling.11th IEEE Symposium on Industrial Embedded Systems (SIES)05 2016, 110back to text
 24 articleOpen Challenges for Probabilistic MeasurementBased WorstCase Execution Time.Embedded Systems Letters932017, 6972back to textback to text
 25 inproceedingsThe Mälardalen WCET Benchmarks: Past, Present And Future.10th International Workshop on WorstCase Execution Time Analysis (WCET)15OASICS2010, 136146back to textback to text
 26 articleComputing Needs Time.Communications of ACM5252009back to text
 27 bookIntroduction to embedded systems  a cyberphysical systems approach.MIT Press2017back to text
 28 inproceedingsRealTime Queueing Theory.the 10th IEEE RealTime Systems Symposium (RTSS)1996back to text
 29 bookMultiprocessor Scheduling for RealTime Systems.Springer2015back to text
 30 inproceedingsOptimal Priority Assignment Algorithms for Probabilistic RealTime Systems.the 19th International Conference on RealTime and Network Systems (RTNS)2011back to text
 31 inproceedingsResponse Time Analysis for FixedPriority Tasks with Multiple Probabilistic Parameters.the IEEE RealTime Systems Symposium (RTSS)2013back to text
 32 inproceedingsMonoprocessor RealTime Scheduling of Data Dependent Tasks with Exact Preemption Cost for Embedded Systems.the 16th IEEE International Conference on Computational Science and Engieering (CSE)2013back to text
 33 inproceedingsPapaBench: a Free RealTime Benchmark.6th Intl. Workshop on WorstCase Execution Time (WCET) Analysis4OASICS2006back to text
 34 inproceedingsAutomatic Parallelization of MultiRate FMIbased CoSimulation On Multicore.the Symposium on Theory of Modeling & Simulation: DEVS Integrative M&S Symposium2017back to text
 35 inproceedingsProbabilistic Performance Guarantee for RealTime Tasks with Varying Computation Times.IEEE RealTime and Embedded Technology and Applications Symposium1995back to textback to text
 36 articleThe worstcase execution time problem: overview of methods and survey of tools.Trans. on Embedded Computing Systems732008, 153back to textback to textback to textback to text