The goal of the TRIO team is to provide a set of techniques and methods that can be applied to design, validate and dimension real time distributed applications. In order to tackle this problem as a whole, our work is structured along two complementary points of view:
specification of real-time on-line mechanisms (protocols, schedulers, middleware) offering services to the application with a quality of service that ensures the satisfaction of real time constraints; this includes fault detection, fault recovery and fault tolerance,
modeling, analysis and evaluation of real time distributed systems for the verification of temporal properties and the optimisation of distributed deployment.
Furthermore, we will continue to study the modeling process of real time distributed applications that allows the description of both functional and non-functional aspects of these applications and therefore a formal use of these models for quantitative evaluation and optimal scaling.
The problems to solve are mainly due to three particularities of the targeted applications:
They are discrete event systems with temporal characteristics (temporal performances of hardware support, temporal properties); this increases the complexity of their modeling and of their analysis. Hence a part of our research objectives is to master this complexity while achieving a satisfactory trade-off between the accuracy of a model and its ability to be analyzed.
A second aspect is the environment of these systems that can be the cause of perturbations. We need to take into account the impact of an uncertain environment (for example, the impact of electro-magnetic perturbations on a hardware support) on the required properties. Therefore we have to develop stochastic approaches.
Finally, a key characteristic of our work is that we take into account the performance of the hardware support. Consequently, the time we manipulate is physical (continuous) time, and the studied systems are event-driven timed systems.
The main directions mentioned above contribute to covering the full spectrum from formal modeling and evaluation of real-time distributed systems up to their use in industrial problems.
The release of the Open-PEOPLE platform.
The organization of the 20th International Conference on Real-Time and Network Systems (RTNS2012).
The acceptance of two TRIO papers to the two premier real-time conferences: the 33rd IEEE Real-time Systems Symposium (RTSS 2012) and the 24th Euromicro Conference on Real-Time Systems (ECRTS 2012).
Successful completion of the TIMMO-2-USE project in September 2012, in which TRIO led the work package on algorithms and tools.
In order to check the timing behavior and the reliability of distributed systems, the TRIO team developed several techniques based on deterministic approaches; in particular, we apply and extend the analytical evaluation of worst-case response times and, when necessary, e.g. for large-scale communication systems such as Internet-based applications, we use techniques based on network calculus.
When the environment might lead to hazards (e.g. electromagnetic interferences causing transmission errors and bit-flips in memory), or when some characteristics of the system are not perfectly known or foreseeable beforehand, we model and analyze the uncertainties using stochastic models, for instance, models of the frame transmission patterns or models of the transmission errors. In the context of real-time computing, we are in general much more interested in worst-case results over a given time window than in average and asymptotic results, and dedicated analyses in that area have been developed in our team over the last 10 years.
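As a small illustration of this worst-case-over-a-window view, the following sketch computes the probability that a given time window suffers strictly more than k transmission errors, under the purely illustrative assumption that errors follow a Poisson process (the rate, window length and threshold are hypothetical parameters, not values from our studies):

```python
import math

def prob_more_than_k_errors(rate_per_s, window_s, k):
    """Probability that a window of `window_s` seconds sees strictly
    more than k errors, assuming errors follow a Poisson process with
    mean rate `rate_per_s` (an illustrative model only)."""
    lam = rate_per_s * window_s
    # P(N <= k) via the Poisson pmf, then take the complement.
    p_le_k = sum(math.exp(-lam) * lam**i / math.factorial(i)
                 for i in range(k + 1))
    return 1.0 - p_le_k

# e.g. 0.01 errors/s on average, 10 s window: chance of more than 2 errors
p = prob_more_than_k_errors(0.01, 10.0, 2)
```

This kind of bound on the window, rather than the long-run average error rate, is what matters when arguing that a real-time property holds with sufficiently high probability.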
In the design of discrete event systems with hard real-time constraints, the scheduling of the system's activities is of crucial importance. This means that we have to devise scheduling policies that ensure online that time constraints are met and/or optimize the behavior of the system according to other application-dependent performance criteria.
In order to foster the best quality for programs, their understanding has to be automated, or at least made significantly easier. We thus focus on analyzing and modeling program code, program structure and program behavior, and on presenting these pieces of information to the user (in our case, program designers and developers). Modeling user interaction is to come as well.
In the design of embedded, autonomous systems, power and energy usage is of paramount importance. We thus strive to model energy usage, based on actual hardware, and derive context-aware optimizations to decrease peak power and overall energy usage.
Three main application domains can be underlined.
In-vehicle embedded systems. A lot of the work developed in TRIO is oriented towards transportation systems (cars, autonomous vehicles, etc.). This work mainly covers two points. The first is the specification of what must be modeled in such a system and how to reach a good model accuracy; this leads us to investigate topics like Architecture Description Languages and the automatic generation of models. The second point concerns the verification of the dependability and temporal properties required by these applications and, consequently, the development of new fault-tolerant online mechanisms to include in an application, or the automatic generation of a standard middleware.
Compilation, memory management and low-power issues for real-time embedded systems. It has become mandatory to design embedded systems that meet performance and reliability constraints while minimizing energy consumption. Hence, TRIO is involved, on the one hand, in the definition of ad-hoc memory management at compilation time and, on the other hand, in the joint study of memory management strategies and task scheduling for real-time critical systems.
Code analyses and software visualization for embedded systems. Despite important advances, it is still impossible to automatically develop and optimize all programs in all their variety, especially when deployment constraints are considered. Software design and implementation thus remain highly ad-hoc, poorly automated activities, with a human being in the loop. TRIO is thus involved in the design of better software engineering tools focused on helping the human developer understand and develop the system, thanks to powerful automated program analyses and advanced visualization techniques.
The aim of Open-PEOPLE is to provide a platform for estimating and optimizing the power and energy consumption of systems. The Open-PEOPLE project formally started in April 2009. Two system administrators and software developers were hired initially: Sophie Alexandre and Kévin Roussel. Another system administrator and software developer, Jonathan Ponroy, joined them in 2010 when he finished his work on the ANR MORE project. Sophie Alexandre's contract ended in February 2011.
Since the beginning of the Open-PEOPLE project, we have made significant progress in setting up the infrastructure for the software part of the platform, for which Inria Nancy Grand Est is responsible. We added new features to fully integrate and test software developed as Eclipse plugins, relying on the Buckminster tool. We also created a specific extension set for SVN and Hudson, called OPCIM (Open-PEOPLE Continuous Integration Mechanism). OPCIM was registered at APP on 13/04/2010 under number IDDN.FR.001.150008.000.S.P.2010.000.10000.
Concerning the Open-PEOPLE platform itself, we first tackled the high-level work, working with our partners on the definition of the requirements of the platform according to the needs of industry. We then carried out the specification work to define the global perimeter of our platform, according to these requirements. As part of this work, exchange formats between the various tools were also designed. At Inria Nancy Grand Est we also designed a Tools Integration Protocol, which specifies the requirements external tools must meet to be integrated into our platform. All this design work was materialized in several reports delivered to the ANR.
We also designed and developed an authentication component (Eclipse plugin) for the platform, so as to provide a single, secure access gate to all the tools that are or will be integrated into it.
We also started, and almost finished, developing an Internet portal giving access and control to the Open-PEOPLE Hardware Platform, located at our partner UBS in Lorient. The portal features user account management facilities on the admin side and, on the user side, the ability to create, save, edit, reuse and of course submit jobs, make reservations for the hardware platform resources, and retrieve test results.
Finally, we started working on two important parts of the software platform.
The first is a way to unify the user experience despite the fact that the platform federates several tools which were not developed to interact together. This implied an important and in-depth study of the desired ergonomics for the platform, taking into account both user needs and habits and the features of the available software tools.
The second, begun in 2011, is the design (then implementation) of the communications between the various tools of the platform. This skeleton is a key part of our platform, and the quality of its design has a tremendous impact on its maintainability and extensibility.
Note that the Open-PEOPLE project was successfully evaluated by the ANR on 14/09/2010. Developments done during the first two years of the project are detailed in the 2009 and 2010 activity reports. In 2011, these developments continued.
We continued the work to solidify the development platform supporting our work and that of our partners. We produced a finer-grained definition of the software platform functionalities and a more precise definition of the tools integration protocol. We worked on the corresponding implementation documents, adding two new deliverables about the architecture and the ergonomics of the software platform. For the latter, we extensively interviewed users about ergonomics and designed several GUI mockups. We progressed on the implementation of the software platform, especially with respect to the Internet portal used to remote-control the hardware platform. We participated in the definition of the hardware platform and its functionalities, and contributed actively to the Specification document for HW/SW interfacing. We provided the first concrete design and implementation of the HW/SW platform interfacing, with our implementation of the remote control portal for the HW platform. This remote control module was completed in Fall 2011.
We also participated in the work on basic component model homogenization, reviewing it in the context of the software platform architecture and implementation, which resulted in several incremental improvements of the underlying models. Finally, progressing towards the first release of the software part of the Open-PEOPLE platform, we carried out an ergonomic study of the consumption law editors, with mockups, user interviews and validation. We worked on the implementation of the editors for the consumption laws, which required learning new environments and development tools (related to the EMF framework and the AADL, QUDV and MathML models). As a consequence, we completed the implementation of the GUI and engine to create units and quantities. We also finalized the architecture needed to integrate external modules into the platform.
In 2012, this work went on. Basically, 2012 was the year of the concrete Open-PEOPLE platform, when all our efforts finally came to maturity. We thus completed the global GUI of the Open-PEOPLE Software Platform. We performed the integration of various external tools and modules. We provided several improvements to the Remote Control Module giving access to the Hardware Platform. We finalized the implementation of the consumption laws editors. We implemented export and import functionality for Open-PEOPLE models. We created a new community-based website to allow the sharing of Open-PEOPLE models.
Overall we progressed as forecast, following an iterative development and release schedule.
Version 1.0 (2012-04-06) was the first embodiment and public release of the Open-PEOPLE platform.
Version 1.1 (2012-06-12) added a default environment with pre-set Units, Quantities and AADL Property associations, asynchronous file uploads and downloads in the Remote Control Module, better handling of big files (the file size limit is now 4 GB), and several bug fixes.
Version 1.3 (2012-09-27) changed the internal QUDV serialization mechanism (quantities, units and property associations), added version information to the QUDV and Weaving meta-models, added internal builders to automatically generate QUDV and Weaving configuration files, added support for OSATE 2, improved UI reactivity (especially during file transfers), added progress bars for the remote control, and fixed several bugs.
Version 1.4 (2012-10-25) added the Adele Graphical Editor, new OSATE 2 Snapshot, and several bugfixes.
Version 2.0 (2012-12-13) added RDALTE, AADL2SystemC, a standard environment with models, model sharing (including a community sharing website), and a new snapshot of OSATE 2.
The aim of the VITRAIL operation is to provide tools for the advanced and immersive visualization of programs. It partners with the University of Montréal, the University of Montpellier and the Pareo team of Inria Nancy Grand Est.
In recent years, within VITRAIL, we developed software to instrument and trace Java programs at the bytecode level. We then developed an analysis tool able to exploit these traces to compute relevant software metrics. We hired Damien Bodenes as a software developer and began work on a prototype able to render a 3D world, symbolizing software, onto various visualization hardware, with the possibility to change the display metaphor. The main part of our development work in 2009 was the choice and validation of the technology, and a first architecture. In 2010, development went on at a good pace, building on the chosen technologies and architecture. This brought new experience, and with the first actual runs of our platform we realized that the Irrlicht engine we had chosen could run into unforeseen problems when scaling up. We thus decided, at the beginning of 2010, to switch to the Ogre3D engine. Our development then progressed steadily.
In 2010 we released a first prototype of our platform, with all the underlying architecture, able to provide navigation features and interaction capabilities limited to driving the navigation, as per our plans. This included dual-screen management.
Our first prototype, using two large 2D screens with a city metaphor, was demonstrated during the "Fête de la Science" in November 2010 and received a lot of attention and enthusiasm from the general public. About 55 people per day visited our booth and got demonstrations.
We also progressed significantly on our Java bytecode tracer, improving its granularity, the completeness of the traced information, and its performance as well. We have a unique tool able to trace both program classes and JDK classes at the basic block level. In addition, it does so with dynamic instrumentation of classes, which means there is no need to keep an instrumented version of the class files on disk. This is very convenient, especially when changing machine or JVM, or when upgrading either the JDK or the program itself. Moreover, performance is good enough that instrumented programs remain fully usable interactively, without bothering the user. To the best of our knowledge, this is the only Java bytecode tracer that offers these features today.
Our software development has led to several registrations with APP:
VITRAIL - Visualizer was first registered on 29/12/2009 under number IDDN.FR.001.530021.000.S.P.2009.000.10000.
VITRAIL - Tracer was registered at APP on 20/09/2010 under number IDDN.FR.001.380001.000.S.P.2010.000.10000.
In 2011, we acquired a workstation and three 30-inch computer screens to set up a "boxed 3D workstation" providing display in front of and on both sides of the operator. This constitutes the next step in our experiments, improving immersion with a larger field of vision (on the sides). The software developments to support this are ongoing. We also integrated a WiiMote interaction device into our system, but our experiments found that its spatial resolution was too poor for our needs.
We finally improved our VITRAIL prototype significantly in 2011, especially by designing and implementing a new representation for the relations between software (hence visual) elements, with limited clutter and the possibility to group links and see their direction.
In 2012, we continued working on the analysis of software, gathering statistics about polymorphism in Java programs, aiming at comparing various type analyses made statically (CHA, RTA, VTA) with the dynamic trace provided by (a) real execution(s). This work is ongoing and has not been published yet.
We also developed a public website for the VITRAIL project, which is going live these days.
Code analyses and advanced visualization of software in real-time
Participants: Pierre Caserta, Olivier Zendra
In recent years, substantial development of our instrumentation, tracer and analyzer was performed, allowing us to really enter the experimental phase and obtain first interesting results. A thorough state of the art was also written.
This state-of-the-art paper was finally published in TVCG, a leading journal in computer visualization. Thanks to the experimental setup efforts of previous years, we were able in 2011 to conduct good experiments. We designed and implemented a new way to visualize relations between software elements. These relations include static relations (is-a, direct heir, caller, callee, etc.) and dynamic ones (runtime caller, runtime callee). Our new relation visualization comprises a new way of placing waypoints so as to significantly decrease spatial and visual clutter when visualizing software systems with large numbers (thousands up to millions) of relations. This led to a publication in VISSOFT, one of the most recognized conferences in the software visualization domain, as well as a Best Poster award at ECOOP, one of the most recognized conferences in the object-oriented domain. The important design and implementation work we carried out on the tracing and analysis software also led to the publication of our method at ICOOOLPS 2011.
This year, in 2012, we published our instrumentation and tracing method in Elsevier's Science of Computer Programming journal.
Work has been going on to analyze polymorphism in Java programs, answering an apparently simple yet so far unanswered question: how much polymorphism is there actually in Java programs? This is of paramount importance: a lot of work occurs around polymorphism, which is an important concept, but no one is currently able to tell how much it impacts programs in real life. We have begun writing this paper in cooperation with the LIRMM lab in Montpellier. In addition, we are finishing work on analyzing program evolution, looking at differences between versions, and analyzing how dynamic and static metrics correlate with the evolution rate.
Work in this domain also led to the writing and successful defense of Pierre Caserta's PhD thesis, titled "Analyse statique et dynamique de code et visualisation des logiciels via la métaphore de la ville : contribution à l'aide à la compréhension des programmes", on 7 December 2012.
A web site was also designed to publicize our work on the VITRAIL project.
Open Power and Energy Optimization PLatform and Estimator
Participants: Fabrice Vergnaud, Jérôme Vatrinet, Kévin Roussel, Olivier Zendra.
Work in this domain was performed in the context of the ANR Open-PEOPLE (Open Power and Energy Optimization PLatform and Estimator) project, funded since the very end of 2008. Inria Nancy Grand Est is responsible for the software part of the platform and is involved in memory management for low-power issues. Work in this project began in April 2009 (kick-off meeting). We have finished setting up the very important infrastructure for the software part of the Open-PEOPLE platform. We have finished expressing the requirements for the platform, in order to start the actual developments and the actual integration of tools provided by the different partners. In 2011, we finished specifying the platform architecture and user interface (GUI). We also finished implementing the part of the software platform that remote-controls the hardware platform. We finally finished implementing the core of the software platform and canonical model handling. This work led to several technical reports and several presentations and posters at conferences.
This year was the result-harvesting year for our project in terms of development. We finished the design and implementation of the PCMD (Power Consumption Model Development) and PCAO (Power Consumption Analysis and Optimization) parts of the software platform, as well as the external tools integration work. We also designed and implemented the Open-PEOPLE model sharing website. Again, this work resulted in several demos and publications in conferences.
Operator calculus and design of algorithms for the optimisation of multi-constraint problems
Participants: Jamila Ben Slimane, Hugo Cruz-Sanchez, Bilel Nefzi, René Schott, Ye-Qiong Song
R. Schott and G. Stacey Staples proposed a solution based on operator calculus for graphs with multiple constraints. These constraints are not necessarily linear or positive. This approach was developed for realistic problems like:
configuration of satellites offering high-quality coverage;
optimal utilisation of resources in hospitals;
optimal management in sensor networks.
This work was the result of a collaboration of our team with the MADYNES team, the LPMA (Laboratoire de Probabilités et Modèles Aléatoires, Paris 6 and 7) and Southern Illinois University Edwardsville.
Scheduling of tasks in automotive multicore ECUs
Participants: Aurélien Monot, Nicolas Navet, Françoise Simonot-Lion.
As the demand for computing power quickly increases in the automotive domain, car manufacturers and tier-one suppliers are gradually introducing multicore ECUs in their electronic architectures. These multicore ECUs offer new features, such as higher levels of parallelism, which ease compliance with safety requirements such as ISO 26262 and the implementation of other automotive use cases. These new features also add complexity to the design, development and verification of software applications. Hence, car manufacturers and suppliers will require new tools and methodologies for deployment and validation. We address the problem of sequencing numerous elementary software components, called runnables, on a limited set of identical cores. We show how this problem can be addressed as two sub-problems, partitioning the set of runnables and building the sequencing of the runnables on each core, neither of which can be solved optimally due to their algorithmic complexity. We then present low-complexity heuristics to partition and build sequencer tasks that execute the runnable set on each core, and derive lower bounds on their efficiency (i.e., competitive ratio). Finally, we address the scheduling problem globally, at the ECU level, by discussing how to extend this approach to the case where other OS tasks are scheduled on the same cores as the sequencer tasks. An article summarizing this line of work has been published in IEEE TII.
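To make the partitioning sub-problem concrete, here is a minimal sketch of a low-complexity heuristic in that spirit: a worst-fit decreasing assignment of runnables to cores by utilization. This is an illustrative stand-in under hypothetical runnable parameters, not the exact algorithms of the published work:

```python
def partition_runnables(runnables, n_cores):
    """Assign runnables to cores with a worst-fit decreasing heuristic:
    sort by utilization, then always place the next runnable on the
    least loaded core. `runnables` maps name -> utilization (C_i/T_i)."""
    loads = [0.0] * n_cores
    assignment = {core: [] for core in range(n_cores)}
    for name, util in sorted(runnables.items(), key=lambda kv: -kv[1]):
        core = min(range(n_cores), key=lambda c: loads[c])  # least loaded
        assignment[core].append(name)
        loads[core] += util
    return assignment, loads

# hypothetical runnable set: name -> utilization
assignment, loads = partition_runnables(
    {"r1": 0.3, "r2": 0.2, "r3": 0.25, "r4": 0.15}, 2)
```

A second step (not shown) would then order the runnables assigned to each core into a sequencer task so that each runnable's period is respected.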
Probabilistically analysable real-time systems
Participants: Liliana Cucu-Grosjean, Adriana Gogonel, Codé Lo, Luca Santinelli, Dorin Maxim and Cristian Maxim.
The adoption of more complex hardware to respond to the increasing demand for computing power in next-generation systems exacerbates some of the limitations of static timing analysis for the estimation of the worst-case execution time (WCET). In particular, it takes considerable effort to acquire (1) detailed information on the hardware, needed to develop an accurate model of its execution latency, as well as (2) knowledge of the timing behaviour of the program in the presence of varying hardware conditions, such as those dependent on the history of previously executed instructions. These problems are known as the timing analysis walls. Probabilistic timing analysis, a novel approach to the analysis of the timing behaviour of next-generation real-time embedded systems, provides answers to these walls. We have shown how probabilistic timing analysis attacks the timing analysis walls. We have also presented experimental evidence showing how probabilistic timing analysis reduces the extent of knowledge about the execution platform required to produce probabilistically safe and tight WCET estimations.
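The flavour of measurement-based probabilistic WCET estimation can be illustrated with a toy empirical-quantile bound. Actual probabilistic timing analysis relies on extreme value theory to extrapolate beyond the observations; this hypothetical sketch only shows the underlying exceedance-probability idea:

```python
def pwcet_bound(samples, exceed_prob):
    """Illustrative probabilistic WCET bound: the smallest measured
    execution time t such that the empirical fraction of samples
    strictly exceeding t is at most `exceed_prob`. A toy empirical
    quantile, not the extreme-value-theory machinery of real
    probabilistic timing analysis."""
    s = sorted(samples)
    n = len(s)
    for i, t in enumerate(s):
        # (n - 1 - i) samples are strictly greater than s[i]
        if (n - 1 - i) / n <= exceed_prob:
            return t
    return s[-1]

# hypothetical measurements: 100 execution-time samples
bound = pwcet_bound(list(range(1, 101)), 0.05)
```

The point of the probabilistic approach is precisely that such a bound is attached to an exceedance probability, rather than claimed absolute.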
Based on existing estimations of the WCET or of the minimal inter-arrival time, we may propose different probabilistic schedulability analyses.
Statistical analysis of real-time systems
Participants: Liliana Cucu-Grosjean, Adriana Gogonel, Lu Yue, Thomas Nolte [Mälardalen University], Rob Davis, Ian Bate [University of York], Michael Houston, Guillem Bernat [Rapita].
The response time analysis of real-time systems usually requires knowledge of WCET estimations, and this knowledge is not always available, e.g., because of intellectual property issues. This problem may be avoided by statistically estimating either the WCET of a task, the inter-arrival time, or the response time of each task.
Probabilistic Component-based Approaches
Participants: Luca Santinelli, Patrick Meumeu Yomsi, Dorin Maxim, Liliana Cucu-Grosjean.
We have proposed a probabilistic component-based model which abstracts in the interfaces both the functional and non-functional requirements of such systems. This approach allows designers to unify in the same framework probabilistic scheduling techniques and compositional guarantees that go from soft to hard real-time. We have provided sufficient schedulability tests for task systems using such a framework when the scheduler is either preemptive fixed-priority or earliest deadline first. These results have been published.
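The convolution step underlying such probabilistic schedulability tests can be sketched as follows: a toy computation of the deadline-miss probability for independent discrete execution-time distributions (the distributions and deadline are hypothetical, and the published tests are considerably more involved):

```python
from collections import defaultdict

def convolve(d1, d2):
    """Distribution of the sum of two independent discrete
    execution-time distributions (dicts mapping value -> probability)."""
    out = defaultdict(float)
    for v1, p1 in d1.items():
        for v2, p2 in d2.items():
            out[v1 + v2] += p1 * p2
    return dict(out)

def deadline_miss_prob(exec_dists, deadline):
    """Probability that the total demand of independent jobs exceeds
    the deadline -- a toy version of the convolution step used in
    probabilistic schedulability analyses."""
    total = {0: 1.0}
    for d in exec_dists:
        total = convolve(total, d)
    return sum(p for v, p in total.items() if v > deadline)

# two jobs, each taking 2 time units w.p. 0.9 and 3 w.p. 0.1, deadline 5
miss = deadline_miss_prob([{2: 0.9, 3: 0.1}] * 2, 5)  # -> 0.01
```

A compositional analysis would carry such distributions through the component interfaces instead of recomputing them from scratch.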
Ph.D. thesis under CIFRE collaboration with PSA
Participants: Aurélien Monot, Nicolas Navet, Françoise Simonot-Lion
The complexity of electronic embedded systems in cars is continuously growing. Hence, mastering the temporal behavior of such systems is paramount in order to ensure the safety and comfort of the passengers. As a consequence, the verification of end-to-end real-time constraints is a major challenge during the design phase of a car. The AUTOSAR software architecture drives us to address the verification of end-to-end real-time constraints as two independent scheduling problems respectively for electronic control units and communication buses.
First, we introduce an approach that optimizes the utilization of controllers scheduling numerous software components and that is compatible with upcoming multicore architectures. We describe fast and efficient algorithms to balance the periodic load over time on multicore controllers, by adapting and improving an existing approach used for CAN networks. We provide theoretical results on the efficiency of the algorithms in some specific cases. Moreover, we describe how to use these algorithms in conjunction with other tasks scheduled on the controller.
The remaining part of this research work addresses the problem of obtaining the response time distributions of the messages sent on a CAN network. First, we present a simulation approach based on the modelling of clock drifts on the communicating nodes connected to the CAN network. We show that a single simulation using our approach yields results similar to the legacy approach consisting of numerous short simulation runs without clock drifts. Then, we present an analytical approach to compute the response time distributions of the CAN frames. We introduce several approximation parameters to cope with the very high computational complexity of this approach while limiting the loss of accuracy. Finally, we compare the simulation and analytical approaches experimentally, discussing the relative advantages of each.
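The simulation idea can be illustrated with a toy model, under hypothetical frame parameters: each node's clock drifts slightly (stretching its release period), frames are queued periodically, and the bus serves the highest-priority pending frame non-preemptively. A real CAN simulation models arbitration, stuff bits and error frames in far more detail:

```python
import random

def simulate_can(frames, duration, seed=0):
    """Toy non-preemptive fixed-priority bus simulation with clock
    drift. Each frame spec is (priority, period, tx_time, drift): a
    lower priority value wins arbitration, and a node's drift factor
    stretches its period. Illustrative sketch only."""
    rng = random.Random(seed)
    releases = []  # (release_time, priority, tx_time)
    for prio, period, tx, drift in frames:
        t = rng.uniform(0, period)  # random initial phase per node
        while t < duration:
            releases.append((t, prio, tx))
            t += period * drift
    pending = sorted(releases)  # sorted by release time
    response = {prio: [] for _, prio, _ in releases}
    busy_until = 0.0
    while pending:
        # Bus becomes free; serve the highest-priority ready frame.
        now = max(busy_until, pending[0][0])
        ready = [f for f in pending if f[0] <= now]
        frame = min(ready, key=lambda f: f[1])
        pending.remove(frame)
        busy_until = now + frame[2]
        response[frame[1]].append(busy_until - frame[0])
    return response

# two hypothetical frames: (priority, period, tx_time, drift factor)
resp = simulate_can([(1, 10.0, 1.0, 1.0), (2, 10.0, 1.0, 1.001)], 200.0)
```

Because of the drift, the relative phasing of the two frames slowly changes over a single long run, so the response-time histogram explores many phasings, which is the intuition behind replacing many short drift-free runs by one drifting run.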
Open-PEOPLE initially gathers 5 partners from academia and 2 from industry. This project aims at providing a federative and open platform for the estimation and optimization of power and energy consumption in computer systems. Platform users will be able to evaluate application consumption on a hardware architecture chosen among a set of provided typical, parametric architectures. In the considered system, the components will be picked from a library of hardware and software components, be they parametric or not. It will be possible to perform the estimation at various stages of the specification refinement, thanks to a methodology based on multi-level, interoperable and exchangeable consumption models allowing an easy exploration of the design space. Thus, estimation results may be used to check the energy behaviour of a system developed with simulation platforms. Feedback about the application's functional properties will allow further refining of the estimation results in Open-PEOPLE. A standardisation of consumption models will be proposed in order to allow interoperability and ease exchanges with other platforms. The Open-PEOPLE library of consumption models will be extensible: new component models will be added as user applicative requirements evolve and as implementation techniques progress. To do so, the software estimation platform, accessible via an Internet portal, shall be linked to a hardware platform made of an automated measurement testbench controllable from the software platform. A standalone version will also be provided to meet the confidentiality requirements of industry. A library of application benchmarks will be proposed to characterize new components and new architectures.
In addition to the research work required to build methods for multi-level estimation in heterogeneous complex systems, research work shall be carried on in order to offer new methods and techniques making it possible to optimize consumption thanks to the results provided by Open-PEOPLE. Open-PEOPLE is hence geared towards academia to support research work pertaining to consumption estimation and optimization methods, as well as towards industry to estimate or optimize the consumption of future products.
This project ended in late 2012, and we hope to continue work in this direction through other subsequent projects.
The DEPARTS project started on October 1st for the next five years. This project is funded by the national funding program BGLE. The TRIO team will propose solutions for probabilistic component-based models.
Title: PROARTIS
Type: COOPERATION (ICT)
Challenge: Embedded Systems Design
Instrument: Specific Targeted Research Project (STREP)
Duration: February 2010 - July 2013
Coordinator: Barcelona Supercomputing Center (Spain)
See also: http://www.proartis-project.eu/
Participants: Liliana Cucu-Grosjean, Adriana Gogonel, Luca Santinelli, Codé Lo, Dorin Maxim.
The TRIO team participates in PROARTIS, a STREP project within FP7 that started in February 2010. It has six partners, including the Barcelona Supercomputing Center, the University of York, the University of Padova, Inria and Airbus. The overarching objective of the PROARTIS project is to facilitate a probabilistic approach to timing analysis. The proposed approach concentrates on proving that pathological timing cases can only arise with negligible probability, instead of struggling to eradicate them, which is arguably not possible and could severely degrade performance. This is a major departure from previous approaches, which seek analyzability by trying to predict with cycle accuracy the state of the hardware and software through analysis.
The PROARTIS project will facilitate the production of analysable critical real-time embedded (CRTE) systems on advanced hardware platforms with features such as memory hierarchies and multi-core processors.
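To give the flavour of this probabilistic view of timing analysis, the following minimal sketch (purely illustrative, not PROARTIS code) estimates, from a set of measured execution times, the empirical probability that a task exceeds a candidate timing bound; the sample distribution and the 500 ns threshold are invented for the example.

```python
import random

def exceedance_probability(samples, threshold):
    """Empirical probability that an observed execution time exceeds `threshold`."""
    return sum(1 for t in samples if t > threshold) / len(samples)

# Simulated execution-time measurements (in nanoseconds). In a real
# measurement-based probabilistic timing analysis campaign, these would
# come from runs on suitably randomized hardware/software platforms.
random.seed(0)
samples = [100 + random.expovariate(0.05) for _ in range(10_000)]

# The pathological case (here: exceeding 500 ns) is only observed with
# negligible probability, rather than being ruled out analytically.
p = exceedance_probability(samples, 500)
assert p < 0.01
```

The point of the sketch is the shift in the question being asked: instead of computing a single cycle-accurate worst case, one bounds the probability mass above a candidate timing budget.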
Participants: Liliana Cucu-Grosjean, Aurélien Monot, Nicolas Navet, Françoise Simonot-Lion, Ammar Oulamara, Luca Santinelli, Dominique Bertrand, Cristian Maxim.
The TRIO team participated in TIMMO-2-USE (http://timmo-2-use.org/), an ITEA 2 European project. It started in November 2010 and ended in September 2012. TIMMO-2-USE addresses the specification, transition and exchange of different types of timing information throughout the different steps of the development process. The general goal is to evaluate and enhance standards for different applications in the development process through technical use cases covering multiple abstraction levels and tools. For this, TIMMO-2-USE brings the AUTOSAR standard, TADL and EAST-ADL2 into different applications, such as WCET analysis and in-the-loop scenarios. This will yield new algorithms and tools for the transition and conversion of timing information between different tools and abstraction levels, based on a new advanced methodology which, in turn, builds on a combination of the TIMMO and ATESST2 methodologies.
The TRIO team is involved in the HiPEAC (High Performance Embedded Architecture and Compilation) European Network of Excellence (NoE). In this context, Olivier Zendra was initiator and leader of a cluster of European researchers, “Architecture-aware compiler solutions for energy issues in embedded systems”, from mid-2007 to mid-2009. A STREP proposal tentatively titled "RuSH2LEAP: Runtime Software-Hardware interactions to Lower Energy And Power" is currently being written, mostly in the context of this network of excellence, for submission in Call ICT 2013.10, challenge 3.4 Advanced computing, embedded and control systems.
Partner 1: University of York (U.K.)
Subject 1: probabilistic and statistical analysis of real-time systems
Partner 2: Mälardalen University (Sweden)
Subject 2: statistical analysis of real-time systems
Partner 3: University of Edinburgh (U.K.)
Subject 3: energy modeling and optimisation of computing systems
Rob Davis, University of York
Marko Bertogna, University of Modena
Luca Santinelli visited the University of York and Rapita Systems, York, for one month in April 2012.
Liliana Cucu is the Delegate of International Relations for the Inria Nancy-Grand Est center.
Olivier Zendra is an elected member of the Research Center Committee of Inria Nancy Grand Est.
Nicolas Navet co-chairs, with Thomas Nolte (MRTC, Mälardalen), the Sub-Committee on Real-Time Fault Tolerant Systems of the IEEE Industrial Electronics Society (IES) - Technical Committee on Factory Automation (TCFA).
Nicolas Navet is member of the Editorial Board of the Journal of Embedded Computing (IOS Press).
Liliana Cucu is an elected member of Inria Evaluation Commission (CE).
Olivier Zendra is head of the Documentation Committee of Inria Nancy Grand Est, member of the Health, Safety and Work Environment Committees of Inria and of Inria Nancy Grand Est - LORIA, member of the Permanent Education Committee of Inria Nancy Grand Est - LORIA, member of the new Sustainable Development Local and National Committees, and member of the Inria Nancy Grand Est - LORIA Committee for the selection of hardware configurations.
Olivier Zendra is CIR expert for the Ministry of Research for the scientific evaluation of research in companies.
Liliana Cucu and Nicolas Navet are ANRT experts for evaluating CIFRE applications.
Olivier Zendra is founder and steering committee member of the ICOOOLPS (International Workshop on Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems) workshop.
Liliana Cucu and Nicolas Navet were general chairs of the 20th International Conference on Real-Time and Network Systems (RTNS 2012).
Liliana Cucu was co-chair of RTSOPS and of the WIP session of WFCS 2012.
Liliana Cucu was a program committee member of ECRTS, ETFA, SIES and ACM RACS.
Nicolas Navet was a program committee member of ACM RACS'2012, IEEE RSP'2012, the Track on Industrial Communication Systems and Real-Time and (Networked) Embedded Systems at IEEE ETFA 2012, IEEE ICESS-2012, IEEE SIES'2012, IEEE WFCS'2012, and WoNeCa-2012.
Nicolas Navet was co-animator of the High Performance Embedded Systems (HPES) cluster of GDR CNRS ASR, and co-animator of Actriss group (real-time services and infrastructure) of HPES cluster.
Olivier Zendra was co-chair of ICOOOLPS 2012, June 11th, Beijing, CN.
Olivier Zendra was program committee member for CIEL 2012, DASIP 2012, ICOOOLPS 2012, RESoLVE 2012.
The permanent members of the TRIO team are reviewers for numerous international conferences and workshops and, in particular, for the following journals: IEEE Transactions on Industrial Informatics, Real-Time Systems, IEEE Computer Communications, Journal of Discrete Event Systems, Journal of Systems Architecture, Journal of Embedded Computing, Journal of Scheduling, Theoretical Computer Science, ACM Surveys, ACM Transactions on Embedded Computing Systems, Information Processing Letters.
Licence: Olivier Zendra, Object-oriented languages / Java, 53h, level L2, University of Lorraine, France
Master: Liliana Cucu, Multiprocessor real-time systems, 30h, level M1, University of Lorraine, France
Master: Nicolas Navet, Real-time critical systems, 30h, level M1, University of Lorraine, France
PhD & HdR:
PhD: Aurélien Monot, Verification of end-to-end temporal constraints in the AUTOSAR context, University of Lorraine, November 29th, Françoise Simonot, Nicolas Navet and Bernard Bavoux
PhD: Pierre Caserta, Static and dynamic code analysis and software visualization via the city metaphor: a contribution to program comprehension, December 7th, Jean-Pierre Thomesse and Olivier Zendra
PhD in progress: Dorin Maxim, Probabilistic real-time systems, started October 1st 2010, Liliana Cucu and Françoise Simonot-Lion
Liliana Cucu was a member of the PhD thesis jury of Mikel Cordovilla, defended in April 2012 at ONERA, Toulouse.
Olivier Zendra was a member of the PhD thesis jury of Olivier Sallenave, defended in November 2012 at LIRMM / Université de Montpellier.