Chroma is a bi-located project-team of Inria Grenoble Rhône-Alpes, with members in Grenoble and Lyon.
The project was launched in March 2015 and became an Inria project-team on December 1st, 2017.
It brings together experts in perception and decision-making for mobile robotics, all of them sharing common approaches that mainly relate to the field of Artificial Intelligence.
It was originally founded by members of the Inria project-team eMotion, led by Christian Laugier (2002-2014), and faculty members from INSA Lyon.
The overall objective of Chroma is to address fundamental and open issues that lie at the intersection of the emerging research fields called “Human Centered Robotics”
More precisely, our goal is to design algorithms and develop models allowing mobile robots to navigate and cooperate in dynamic and human-populated environments. Chroma is involved in all decision aspects pertaining to single- and multi-robot navigation tasks, including perception and motion planning.
The general objective is to build robotic behaviors that allow one or several robots to operate safely among humans in partially known environments, where time, dynamics and interactions play a significant role. Recent advances in embedded computational power, sensor and communication technologies, and miniaturized mechatronic systems, make the required technological breakthroughs possible (including from the scalability point of view).
Chroma is clearly positioned in the "Artificial Intelligence and Autonomous systems" research theme of the Inria 2018-2022 Strategic Plan. More specifically we refer to the "Augmented Intelligence" challenge (connected autonomous vehicles) and to the "Human centred digital world" challenge (interactive adaptation).
To address the aforementioned challenges, we take advantage of recent advances in probabilistic methods, planning techniques, multi-agent decision making, and machine learning. We also draw inspiration from other disciplines, such as sociology, to take human models into account.
Two main research themes of mobile robotics are addressed: i) Perception and Situation Awareness, and ii) Navigation and Cooperation in Dynamic Environments. We elaborate on these themes below.
Perception and Situation Awareness. This theme aims at understanding complex dynamic scenes, involving mobile objects and human beings, by exploiting prior knowledge and streams of perceptual data coming from various sensors. To this end, we investigate three complementary research problems:
Bayesian Perception: How to take into account prior knowledge and uncertain sensory data in a dynamic context?
Situation awareness: How to interpret the perceived scene and predict the likely future motion of the observed agents (including near-future collision risk)?
Robust state estimation: How to acquire a deep understanding of several sensor fusion problems and investigate their observability properties in the case of unknown inputs?
Navigation and Cooperation in Dynamic Environments. This theme aims at designing models and algorithms allowing robots to move and coordinate efficiently in dynamic environments. We focus on two problems: navigation in human-populated environments (social navigation) and cooperation in large distributed fleets of robots (scalability and robustness issues).
Motion planning in human-populated environments. How to plan trajectories that take into account the uncertainty of human-populated environments and respect the social rules of human beings? Such a challenge requires models of human behavior to be learnt or designed, as well as dedicated learning or planning algorithms.
Multi-robot decision making in complex environments. How to design models and algorithms that achieve both scalability and performance guarantees in real-world robotic systems? Our methodology builds upon the advantages of two complementary approaches: Multi-Agent Sequential Decision Making (MA-SDM) and Swarm Intelligence (SI).
Chroma is also concerned with applications and transfer of the scientific results. Our main applications include autonomous and connected vehicles as well as service robotics. They are presented in Sections and , respectively. Chroma is currently involved in several projects in collaboration with automobile companies (Renault, Toyota and Volvo) and some startups.
The Chroma team aims to address different issues of autonomous mobile robotics: perception, decision-making and cooperation. Figure illustrates the different themes and sub-themes investigated by Chroma.
We present hereafter our approaches to these different research themes, and how they combine to contribute to the general problem of robot navigation.
Chroma pays particular attention to the problem of autonomous navigation in highly dynamic environments populated by humans and cooperation in multi-robot systems.
We share this goal with other major robotics laboratories and teams in the world, such as the Autonomous Systems Lab at ETH Zurich, the Robotic Embedded Systems Laboratory at USC, and KIT
Robust perception in open and dynamic environments populated by human beings is an open and challenging scientific problem. Traditional perception techniques do not provide an adequate solution for these problems, mainly because such environments are uncontrolled
Context.
Perception is known to be one of the main bottlenecks for robot motion autonomy, in particular when navigation in open and dynamic environments is subject to strong real-time and uncertainty constraints. In order to overcome this difficulty, we proposed, in the scope of the former e-Motion team, a new paradigm in robotics called “Bayesian Perception”. The foundation of this approach relies on the concept of the “Bayesian Occupancy Filter (BOF)”, initially proposed in the Ph.D. thesis of Christophe Coué and further developed in the team
In the scope of the Chroma team and of several academic and industrial projects (in particular the IRT “Security for Autonomous Vehicle” and Toyota projects), we continued to develop and extend our Bayesian Perception concept under strong embedded-implementation constraints. This work has already led to more powerful models and more efficient implementations, e.g. the CMCDOT (Conditional Monte Carlo Dense Occupancy Tracker) framework, which is still under development.
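The core idea of filtering occupancy probabilities cell by cell can be sketched with a minimal binary Bayes filter in log-odds form. This is a simplified, static-grid illustration only, far from the dynamic, GPU-parallel CMCDOT; the function names and probability values are ours:

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def prob(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update_cell(l, p_meas):
    """Binary Bayes filter update of one grid cell in log-odds form,
    given the inverse-sensor-model probability p_meas for this cell
    (assuming a uniform 0.5 prior, so the prior log-odds term vanishes)."""
    return l + logodds(p_meas)

# Start from an uninformed prior (p = 0.5, log-odds 0); two consistent
# "occupied" readings (p = 0.9) reinforce the occupancy belief.
l = 0.0
for _ in range(2):
    l = update_cell(l, 0.9)
```

After the two updates, `prob(l)` is about 0.988: repeated consistent evidence sharpens the belief, which is what makes the grid-level filtering robust to single-scan noise.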
This work is currently mainly performed in the scope of the “Security for Autonomous Vehicle (SAV)” project (IRT Nanoelec), and more recently in cooperation with industrial companies (see section New Results for more details on the non-confidential industrial cooperation projects).
Objectives. We aim at defining a complete framework extending the Bayesian Perception paradigm to the object level. The main objective is to be simultaneously more robust, more efficient for embedded implementations, and more informative for the subsequent scene interpretation step (illustrated in Figure .a). Another objective is to improve the efficiency of the approach (by exploiting its highly parallel character), while drastically reducing important factors such as the required memory size, the size of the hardware component, its price and the required energy consumption. This work is necessary for studying embedded solutions for the future generation of mobile robots and autonomous vehicles. We also aim at developing strong partnerships with non-academic partners in order to adapt the technology and move it closer to the market.
Context.
Testing and validating Cyber-Physical Systems designed to operate in various real-world conditions is both an open scientific question and a necessity for the future deployment of such systems. In particular, this is the case for embedded perception and decision-making systems designed for future ADAS
This work is performed in the scope of both the SAV
Objectives. We started to work on this new research topic in 2017. The first objective is to build a “simulated navigation framework” for: (1) constructing realistic testing environments (including the possibility of using records of real experiments), (2) developing for each vehicle a simulation model including various physical and dynamic characteristics (e.g. physics, sensors and motion control), and (3) evaluating the performance of a simulation run using appropriate statistical software tools.
The second objective is to develop models and tools for automating the simulation & validation process, by using a selection of relevant randomized parameters to generate large databases of tests and statistical results. Then, a metric based on carefully selected “Key Performance Indicators” (KPIs) has to be defined for performing a statistical evaluation of the results (e.g. by using the above-mentioned SMC approach).
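The randomized-test-plus-KPI loop described above can be sketched as follows. Here `run_scenario`, the parameter ranges, and the KPI (minimum distance to any pedestrian) are hypothetical stand-ins for a real simulation run:

```python
import random
import statistics

def run_scenario(pedestrian_speed, sensor_noise):
    """Hypothetical stand-in for one simulation run: returns a KPI,
    here the minimum distance (m) to any pedestrian during the run."""
    return max(0.0, 2.0 - 0.5 * pedestrian_speed - sensor_noise)

random.seed(0)
kpis = []
for _ in range(1000):
    # Randomized test generation: draw scenario parameters at random.
    speed = random.uniform(0.5, 2.0)   # pedestrian speed, m/s
    noise = random.gauss(0.0, 0.1)     # sensor noise level
    kpis.append(run_scenario(speed, noise))

# Statistical evaluation of the KPI over the test database.
mean = statistics.mean(kpis)
ci95 = 1.96 * statistics.stdev(kpis) / len(kpis) ** 0.5
```

A validation criterion can then be expressed on the KPI statistics (e.g. "the mean minimum pedestrian distance, with its 95% confidence interval, stays above a safety threshold").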
Context.
Predicting the evolution of the perceived moving agents in a dynamic and uncertain environment is mandatory for being able to safely navigate in such an environment. We have recently shown that an interesting property of the Bayesian Perception approach is to generate short-term conservative
Objectives.
The first objective is to develop an integrated approach for “Situation Awareness & Risk Assessment” in complex dynamic scenes involving multiple moving agents (e.g. vehicles, cyclists, pedestrians ...), whose behaviors are most of the time unknown but predictable. Our approach relies on combining machine learning, to build a model of the agents' behaviors, with generic motion prediction techniques (e.g. Kalman-based, GHMM, or Gaussian Processes). In the perspective of long-term prediction we will consider the semantic level
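As an illustration of the Kalman-based family of motion prediction techniques mentioned above, a constant-velocity prediction step can be sketched as follows; the 1-D state and noise values are illustrative only, not the team's actual models:

```python
def predict(x, v, P, dt, q=0.1):
    """One prediction step of a constant-velocity Kalman filter
    (state = [position, velocity], P = 2x2 covariance as nested lists).
    Uncertainty grows with the horizon, which is what keeps short-term
    predictions conservative."""
    x_new = x + v * dt
    # P <- F P F^T + Q  with  F = [[1, dt], [0, 1]]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q * dt
    return x_new, v, [[p00, p01], [p10, p11]]

# Predict a pedestrian walking at 1.2 m/s one second ahead, in 0.1 s steps.
x, v, P = 0.0, 1.2, [[0.1, 0.0], [0.0, 0.1]]
for _ in range(10):
    x, v, P = predict(x, v, P, 0.1)
```

The position covariance `P[0][0]` grows with each step, so a collision-risk estimate built on the predicted distribution naturally becomes more cautious at longer horizons.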
The second objective is to build a general framework for perception and decision-making in multi-robot/vehicle environments. The navigation will be performed under both dynamic and uncertainty constraints, with contextual information and a continuous analysis of the evolution of the probabilistic collision risk. Interesting published and patented results have already been obtained in cooperation with Renault and UC Berkeley, by using the “Intention / Expectation” paradigm and Dynamic Bayesian Networks. We are currently working on the generalization of this approach, in order to take into account the dynamics of the vehicles and multiple traffic participants. The objective is to design a new framework allowing us to overcome the shortcomings of rule-based reasoning approaches, which often show good results in low-complexity situations but lack scalability and long-term prediction capabilities.
Context. In order to safely and autonomously navigate in an unknown environment, a mobile robot is required to estimate in real time several physical quantities (e.g., position, orientation, speed). These physical quantities are often included in a common state vector and their simultaneous estimation is usually achieved by fusing the information coming from several sensors (e.g., camera, laser range finder, inertial sensors). The problem of fusing the information coming from different sensors is known as the Sensor Fusion problem and it is a fundamental problem which plays a major role in robotics.
Objective. A fundamental issue to be investigated in any sensor fusion problem is to understand whether the state is observable or not. Roughly speaking, we need to understand if the information contained in the measurements provided by all the sensors allows us to carry out the estimation of the state. If the state is not observable, we need to identify a new observable state. This is a fundamental step in order to properly define the state to be estimated. To achieve this goal, we apply standard analytic tools developed in control theory, together with some new theoretical concepts we introduced in (the concept of continuous symmetry). Additionally, we want to account for the presence of disturbances in the observability analysis.
Our approach is to introduce general analytic tools able to derive the observability properties in the nonlinear case when some of the system inputs are unknown (and act as disturbances). We recently obtained a simple analytic tool able to account for the presence of unknown inputs , which extends a heuristic solution derived by the team of Prof. Antonio Bicchi, with whom we collaborate (Centro Piaggio at the University of Pisa).
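For reference, the standard rank condition used in such observability analyses (stated here for the known-input case, which the tools above extend to unknown inputs) reads:

```latex
\dot{x} = f(x,u), \qquad y = h(x),
\qquad
\mathcal{O}(x) \;=\; \operatorname{span}\bigl\{\, dh,\; d\mathcal{L}_{f} h,\; d\mathcal{L}_{f}^{2} h,\; \dots \bigr\},
```

where \(\mathcal{L}_{f} h\) denotes the Lie derivative of the output function along the dynamics: the state is locally weakly observable at \(x\) when \(\operatorname{rank}\,\mathcal{O}(x) = \dim x\).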
Fusing visual and inertial data. Special attention is devoted to the fusion of inertial and monocular vision sensors (which have strong applications, for instance in UAV navigation). The problem of fusing visual and inertial data has been extensively investigated in the past. However, most of the proposed methods require a state initialization. Because of the system nonlinearities, the lack of a precise initialization can irreparably damage the entire estimation process. In the literature, this initialization is often guessed or assumed to be known , , . Recently, this sensor fusion problem has been successfully addressed by enforcing observability constraints , and by using optimization-based approaches , , , , . These optimization methods outperform filter-based algorithms in terms of accuracy, due to their capability of relinearizing past states. On the other hand, the optimization process can be affected by the presence of local minima. We are therefore interested in a deterministic solution that analytically expresses the state in terms of the measurements provided by the sensors during a short time interval.
For some years we have explored deterministic solutions, as presented in and . Our objective is to improve the approach by taking into account the biases that affect low-cost inertial sensors (both gyroscopes and accelerometers) and to exploit the power of this solution in real applications. This work is currently supported by the ANR project VIMAD
In his reference book Planning algorithms
In this context, we aim at scaling up decision-making in human-populated environments and in multi-robot systems, while dealing with the intrinsic limits of the robots (computation capacity, limited communication).
Context. Motion planning in dynamic and human-populated environments is a current challenge of robotics. Many research teams work on this topic; we can cite the Institute of Robotics in Barcelona , MIT , the Autonomous Intelligent Systems lab in Freiburg , and LAAS . In Chroma, we explore several issues: integrating risk (uncertainty) into planning processes, and modeling and taking into account human behaviors and flows.
Objective. We aim to give the robot socially compliant behaviors by anticipating the near future (trajectories of mobile obstacles in the robot's surroundings) and by integrating knowledge from psychology, sociology and urban planning. In this context, we will focus on the following three topics.
Risk-based planning.
Unlike static or controlled environments
We also investigate the problem of learning recurring human displacements - or flows of humans - from the robots' embedded sensors. It has been shown that such recurring behaviors can be mapped from spatio-temporal observations, as in . In this context, we explore counting-based mapping models to learn motion probabilities in the cells of a grid representing the environment. We can then revisit the cost functions of path-planning algorithms (e.g. A*) by integrating the risk of encountering humans moving in the opposite direction. We also aim at demonstrating the efficiency of the approach with real robots evolving in dense human-populated environments.
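The idea of augmenting a path planner's cost with learned motion probabilities can be illustrated with A* on a small grid; the grid, the weight `w`, and the risk values are illustrative assumptions, not the team's actual model:

```python
import heapq

def astar_risk(grid_risk, start, goal, w=5.0):
    """A* on a 4-connected grid whose step cost adds, on top of the unit
    move cost, w * risk(cell) -- e.g. the learned probability of meeting
    a human moving in the opposite direction in that cell."""
    rows, cols = len(grid_risk), len(grid_risk[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # admissible
    frontier = [(h(start), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                ng = g + 1.0 + w * grid_risk[nr][nc]
                heapq.heappush(frontier,
                               (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

# A corridor whose middle row carries a dense opposing human flow: the
# planner crosses it exactly once, where it must.
risk = [[0.0, 0.0, 0.0],
        [0.9, 0.9, 0.9],
        [0.0, 0.0, 0.0]]
path = astar_risk(risk, (0, 0), (2, 2))
```

Because the heuristic ignores the risk term, it remains admissible, so the returned path is optimal for the risk-augmented cost.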
Recently we investigated the automatic learning of robot navigation in complex environments, based on specific tasks and visual input. We address this problem by combining computer vision, machine learning (deep learning), and robotic path planning (see ).
Sharing the physical space with humans. Robots are expected to share their physical space with humans. Hence, robots need to take into account the presence of humans and to behave in a socially acceptable way. Their trajectories must be safe but also predictable, which is why they must follow social conventions: respecting proximity constraints, avoiding people who are interacting, or joining a group engaged in conversation without disturbing it. For this purpose, we proposed earlier to integrate knowledge from the psychology domain (i.e. proxemics theory), see Figure .b. We now aim to integrate semantic knowledge
Context. A central challenge in Chroma is to define decision-making algorithms that scale up to large multi-robot systems. This work takes place in the general framework of Multi-Agent Systems (MAS). The objective is to compute/define agent behaviors that provide cooperation and adaptation abilities. Solutions must also take into account the agent/robot computational limits.
We can abstract the challenge into three objectives:
i) mastering the complexity of large fleet of robots/vehicles (scalability),
ii) dealing with limited computational/memory capacity,
iii) building adaptive solutions (robustness).
Combining Decision-theoretic models and Swarm intelligence.
Over the past few years, advances in multi-robot decision-making
have mainly been due to Multi-Agent Sequential Decision Making (MA-SDM) and Swarm Intelligence (SI). MA-SDM builds upon well-known decision-theoretic models (e.g., Markov decision processes and games) and related algorithms that come with strong theoretical guarantees. In contrast, the expressiveness of MA-SDM models limits their scalability in the face of realistic multi-robot systems
First, we plan to investigate incremental expansion mechanisms in anytime decision-theoretic planning, starting from local rules (from SI) up to complex strategies with performance guarantees (from MA-SDM) . This methodology is grounded in our research on anytime algorithms, which are guaranteed to stop at any time while still providing a reliable solution to the original problem. It further relies on decision-theoretic models and tools including: Decentralized and Partially Observable Markov Decision Processes and Games, Dynamic Programming, Distributed Reinforcement Learning and Statistical Machine Learning.
Second, we plan to extend the SI approach by integrating optimization techniques at the local level. The purpose is to force the system to explore solutions around its current stabilized state – potentially a local optimum. We aim at keeping the scalability and self-organization properties by not compromising the decentralized nature of such systems. Introducing optimization in this way requires measuring performance locally, which is generally possible from the robots' local perception (or by using learning techniques). The main optimization methods we will consider are Local Search (Gradient Descent), the Distributed Stochastic Algorithm and Reinforcement Learning. We have shown the interest of such an approach for driverless-vehicle traffic optimization in .
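As an illustration of decentralized local optimization, here is a minimal sketch of the Distributed Stochastic Algorithm on a toy anti-coordination task; the ring topology, the task (four robots choosing between two patrol options) and the parameters are hypothetical. Note how such a run can stabilize on a local optimum, the situation the local-exploration mechanisms above are meant to escape:

```python
import random

def dsa_step(choices, neighbors, n_options, p=0.7):
    """One synchronous round of the Distributed Stochastic Algorithm:
    each agent looks only at its neighbors' current choices (local
    perception) and, with probability p, switches to a locally best
    option (fewest conflicts with neighbors)."""
    new = dict(choices)
    for agent, nbrs in neighbors.items():
        conflicts = lambda opt: sum(1 for n in nbrs if choices[n] == opt)
        best = min(range(n_options), key=conflicts)
        if conflicts(best) < conflicts(choices[agent]) and random.random() < p:
            new[agent] = best
    return new

random.seed(1)
# Hypothetical 4-robot ring; everyone starts on option 0 (all in conflict).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
choices = {a: 0 for a in neighbors}
for _ in range(20):
    choices = dsa_step(choices, neighbors, 2)

total_conflicts = sum(choices[a] == choices[n]
                      for a in neighbors for n in neighbors[a])
```

The switch probability `p < 1` breaks the symmetry of simultaneous moves; the run settles either on the global optimum (zero conflicts) or on a locally stable configuration with residual conflicts.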
Both approaches must enable us to master the complexity inherent to large and open multi-robot systems. Such systems are prone to combinatorial problems, in terms of state space and communication, when the number of robots grows. To cope with this complexity we explore several approaches:
Combining MA-SDM, machine learning and RO
Defining heuristics by decentralizing globally exact solutions. We explore this methodology to deal with dynamic problems such as the patrolling of moving persons (see ). We also address dynamic Multi-Robot Routing (MRR) problems in the context of the PhD of M. Popescu, see Section .
Online incremental refinement of the environment representation. This allows us to revisit mapping/coverage techniques and problems, see section .
Beyond this methodological work, we aim to evaluate our models on benchmarks from the literature, using simulation tools as a complement to robotic experiments. This will lead us to develop simulators allowing us to deploy thousands of humans and robots in constrained environments.
Towards adaptive connected robots.
Mobile robots and autonomous vehicles are becoming more connected to one another and to other devices in the environment (concept of cloud of robots
In Chroma, we address the problem of adaptation by considering machine learning techniques and local mechanisms as discussed above (SI models). More specifically, we investigate the problem of maintaining connectivity between robots that perform dynamic versions of tasks such as patrolling, exploration or transportation, i.e. where the setting of the problem is continuously changing and growing (see ).
In Lyon, the CITI Laboratory conducts research in many aspects of telecommunications, from signal theory to distributed computation. In this context, Chroma develops cooperations with the Inria team Agora (wireless communication protocols) and with the Dynamid team (middleware and cloud aspects), which we wish to reinforce in the coming years.
Applications in Chroma are organized in two main domains: i) future cars and transportation systems and ii) service robotics. These domains correspond to the experimental fields initiated in Grenoble (eMotion team) and in Lyon (CITI lab). However, the scientific objectives described in the previous sections are intended to apply equally to both applicative domains. Even though our work on Bayesian Perception is today applied to the intelligent-vehicle domain, we aim to generalize it to any mobile robot. The same remark applies to the work on multi-agent decision making: we aim to apply our algorithms to any fleet of mobile robots (service robots, connected vehicles, UAVs). This has been the philosophy of the team since its creation.
Thanks to the introduction of new sensor and ICT technologies in cars and in mass transportation systems, and also to the pressure of the economic and safety requirements of our modern society, this application domain is changing quickly. Various technologies are currently being developed by both research and industrial laboratories. These technologies are progressively reaching maturity, as witnessed by the results of large-scale experiments and challenges such as Google's car project and several product announcements made by the car industry. Moreover, legal issues are starting to be addressed in the USA (see for instance the recent laws in Nevada and California authorizing autonomous vehicles on roads) and in several other countries (including France).
In this context, we are interested in the development of ADAS
For about eight years, we have been collaborating with Toyota and with Renault-Nissan on these applications (bilateral contracts, PhD theses, shared patents), and more recently with the Volvo group (PhD thesis started in 2016). We have also been strongly involved (since 2012) in the innovation project Perfect, now Security for Autonomous Vehicle, of the IRT
In this context, Chroma has two experimental vehicles equipped with various sensors (a Toyota Lexus and a Renault Zoe, see Fig. and Fig. .b), which are maintained by Inria-SED
Service robotics is a quickly emerging application domain, and more and more industrial companies (e.g., IS-Robotics, Samsung, LG) are now commercializing service and intervention robotics products such as vacuum-cleaner robots, drones for civil or military applications, entertainment robots ...
One of the main challenges is to propose robots that are sufficiently robust and autonomous, easily usable by non-specialists, and marketed at a reasonable cost. We are involved in developing observation and surveillance systems using both ground and aerial robots, see Fig. . Since 2016, we have developed solutions for the 3D observation/exploration of complex scenes or environments with a fleet of UAVs (Inria ADT CORDES
A more recent challenge for the coming decade is to develop robotized systems for assisting elderly and/or disabled people.
In the continuity of our work in the IPL PAL
Best student paper, 15th International Conference on Control, Automation, Robotics and Vision, Nov 2018, Singapore, Singapore (ICARCV 2018), Pavan Vasishta, Dominique Vaufreydaz, Anne Spalanzani
Success of several project applications in the field of autonomous vehicles: ANR "Hianic", PIA Ademe "CAMPUS", FUI "STAR" and "TORNADO".
In 2018, Chroma published several papers in Artificial Intelligence A+ ranked conferences: CVPR , NIPS , ICML , AAMAS .
Strong involvement of Chroma in the IEEE/RSJ IROS 2018 conference (Madrid, October 2018, more than 4000 attendees): C. Laugier was Program co-chair and co-organized three interconnected events on Autonomous Vehicles, including a one-day workshop that attracted more than 360 people
First participation in the international RoboCup competition (Montréal, June 2018): we created the 'LyonTech' team to compete in the RoboCup@Home Pepper league and ranked 5th out of 21 participants. LyonTech is composed of members from Chroma (F. Jumel, L. Matignon, J. Saraydaryan, O. Simonin, C. Wolf) and two engineers from CPE Lyon (R. Leber) and the LIRIS lab/CNRS (E. Lombardi). In October 2018, we qualified for the next RoboCup final, to be organized in Sydney in July 2019.
Participation in several International Award Committees (C. Laugier): Several IEEE/RSJ IROS 2018 Award Committees (Best Paper Award, Fellow Award, Harashima Award, Distinguished Service Award, Young Professional Award), IEEE ICARCV 2018 Best Paper Award Committee, IEEE Chapter Award Committee 2018.
French Robotics GDR: co-animation of the new GT “Apprentissage et Robotique” by Christian Wolf (with David Filliat), started in November 2018; O. Simonin will chair, with F. Charpillet (Inria Larsen), the next National Conference on Robotics Research (JNRR), in October 2019.
Functional Description: Software computing decision support strategies and decision-making
Contact: Jilles Dibangoye
Functional Description: Experimenting with the closed-form solution for visual-inertial data fusion, against real and simulated data.
Authors: Agostino Martinelli and Jacques Kaiser
Contact: Agostino Martinelli
Keywords: Robotics - Environment perception
Functional Description: GEOG-Estimator is a system for the joint estimation of the shape of the ground, in the form of a Bayesian network of constrained elevation nodes, and the ground-obstacle classification of a point cloud. Starting from an unclassified 3D point cloud, it consists of a set of expectation-maximization methods computed in parallel on the network of elevation nodes, integrating the constraints of spatial continuity as well as the influence of 3D points classified as ground or obstacles. Once the ground model is generated, the system can then construct an occupancy grid, taking into account the classification of the 3D points and the actual height of these impacts. Mainly used with lidars (Velodyne64, Quanergy M8, IBEO Lux), the approach can be generalized to any type of sensor providing 3D point clouds. Moreover, in the case of lidars, the free-space information between the source and the 3D point can be integrated into the construction of the grid, as well as the height at which the laser passes through the area (taking into account the height of the laser in the sensor model). The areas of application of the system span all areas of mobile robotics; it is particularly suitable for unknown environments. GEOG-Estimator was originally developed to allow the optimal integration of 3D sensors in systems using 2D occupancy grids, taking into account the orientation of the sensors and arbitrary ground shapes. The generated ground model can be used directly, whether for mapping or as a pre-computation step for obstacle recognition or classification methods. Designed to be effective (real-time) in the context of embedded applications, the entire system is implemented on Nvidia graphics cards (in CUDA), and optimized for Tegra X2 embedded boards. To ease interconnections with the sensor outputs and other perception modules, the system is implemented using ROS (Robot Operating System), a set of open-source tools for robotics.
Authors: Amaury Nègre, Lukas Rummelhard, Jean-Alix David and Christian Laugier
Contact: Christian Laugier
Keywords: Robotics - Environment perception
Functional Description: CMCDOT is a Bayesian filtering system for dynamic occupancy grids, allowing the parallel estimation of occupation probabilities for each cell of a grid, the inference of velocities, the prediction of collision risks, and the association of cells belonging to the same dynamic object. The last generation of a suite of Bayesian filtering methods developed in the Inria eMotion team and then in the Inria Chroma team (BOF, HSBOF, ...), it integrates the management of hybrid sampling methods (classical occupancy grids for static parts, particle sets for dynamic parts) into a unified Bayesian programming formalism, while incorporating elements resembling Dempster-Shafer theory (an "unknown" state, allowing computing resources to be focused). It also offers a system for projecting the estimated scene into the near future, to assess potential collisions with the ego-vehicle or any other element of the environment, as well as a very low-cost pre-segmentation of coherent dynamic spaces (taking velocities into account). It takes as input instantaneous occupancy grids generated by sensor models for different sources. The system is composed of a ROS package, to manage the I/O connectivity, which encapsulates the core of the application, embedded and optimized on Nvidia GPUs (CUDA), allowing real-time analysis of the direct environment on embedded boards (Tegra X1, X2). ROS (Robot Operating System) is a set of open-source tools for developing robotics software. Developed in an automotive setting, these techniques can be exploited in all areas of mobile robotics, and are particularly suited to managing highly dynamic and uncertain environments (e.g. urban scenarios, with pedestrians, cyclists, cars, buses, etc.).
Authors: Amaury Nègre, Lukas Rummelhard, Jean-Alix David and Christian Laugier
Partners: CEA - CNRS
Contact: Christian Laugier
Keywords: Robotics - Environment perception
Functional Description: This module, directly implemented in ROS/CUDA, performs the merging of occupancy grids, defined in the format proposed by CMCDOT (probabilities integrating the "visibility" information of each cell, via the "unknown" coefficients), thanks to an original method allowing not only consistency with the rest of the system, but also a nuanced consideration of confidence criteria for the various sources of information.
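The general idea of confidence-weighted grid fusion can be illustrated with a simple linear opinion pool per cell. This is a generic stand-in for illustration only, not the module's original fusion rule:

```python
def fuse_cells(p_sources, confidences):
    """Fuse per-cell occupancy probabilities from several sources with a
    linear opinion pool weighted by a per-source confidence score.
    Generic illustration, not the CMCDOT fusion method itself."""
    total = sum(confidences)
    return sum(p * c for p, c in zip(p_sources, confidences)) / total

# A confident lidar reading (0.9 occupied, confidence 0.8) outweighs an
# uncertain camera reading (0.4 occupied, confidence 0.2).
p = fuse_cells([0.9, 0.4], [0.8, 0.2])
```

Weighting by per-source confidence is what lets a fusion stage take the reliability of each sensor into account instead of treating all grids equally.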
Authors: Lukas Rummelhard and Jean-Alix David
Contact: Lukas Rummelhard
Keywords: Robotics - Environment perception
Functional Description: This module generates occupancy grids from "almost planar" lidar data. The sensor model, as well as the outputs, have been modified in order to be fully consistent with the CMCDOT and grid-fusion module formats.
Authors: Amaury Nègre, Lukas Rummelhard and Jean-Alix David
Contact: Lukas Rummelhard
Keyword: Robotics
Functional Description: Tools for CMCDOT Software
Authors: Amaury Nègre, Lukas Rummelhard, Jean-Alix David, Mathias Perrollaz, Procopio Silveira-Stein, Jérôme Lussereau and Nicolas Vignard
Contact: Jean-Alix David
Dynamic Window Approach Planner based on occupancy grid
Keyword: Navigation
Functional Description: This program considers: a given target, an occupancy grid representing the environment, and the odometry of the vehicle. With these data, it computes the commands for a safe navigation towards the target.
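The description above can be sketched as a minimal Dynamic Window Approach loop; the grid, the command sampling, and the scoring (pure distance to target) are drastic simplifications of a real DWA planner:

```python
import math

def dwa_command(x, y, theta, grid, target, v_max=1.0, w_max=1.0, dt=1.0):
    """Minimal Dynamic Window Approach sketch: sample (v, w) commands,
    roll a unicycle model forward one step, discard commands landing in
    an occupied cell, and keep the one ending closest to the target."""
    best, best_score = (0.0, 0.0), float("inf")
    for i in range(5):                       # sampled linear velocities
        for j in range(-2, 3):               # sampled angular velocities
            v, w = v_max * i / 4, w_max * j / 2
            nth = theta + w * dt
            nx, ny = x + v * math.cos(nth) * dt, y + v * math.sin(nth) * dt
            cx, cy = int(round(nx)), int(round(ny))
            if not (0 <= cx < len(grid) and 0 <= cy < len(grid[0])):
                continue                     # leaves the known map
            if grid[cx][cy]:                 # occupied cell: unsafe command
                continue
            score = math.hypot(target[0] - nx, target[1] - ny)
            if score < best_score:
                best, best_score = (v, w), score
    return best

# 3x3 grid with an obstacle straight ahead: the planner steers around it.
grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]    # 1 = occupied, at cell (1, 1)
v, w = dwa_command(0.0, 0.0, 0.0, grid, (2.0, 2.0))
```

A real implementation would additionally restrict the samples to the vehicle's reachable dynamic window and blend several scoring terms (clearance, heading, velocity) rather than distance alone.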
Authors: Christian Laugier and Thomas Genevois
Partner: CEA
Contact: Christian Laugier
Simulation of Inria's Renault Zoe in Gazebo environment
Keyword: Simulation
Functional Description: This simulation represents the Renault Zoe vehicle, taking into account realistic physical phenomena (friction, sliding, inertia, ...). The simulated vehicle embeds sensors similar to the ones of the actual vehicle, which provide measurement data in the same format. Moreover, the software inputs/outputs are identical to the vehicle's. Therefore any program executed on the vehicle can be used with the simulation, and vice versa.
Authors: Christian Laugier, Nicolas Turro and Thomas Genevois
Contact: Christian Laugier
Functional Description: Simulation of moving people and mobile robots that can detect agents around them. Integration of ROS mobile robots with the PedSim simulator.
Contact: Jacques Saraydaryan
EKF-based localisation for vehicles
Keywords: Localization - Autonomous Cars
Functional Description: This software fuses IMU data with wheel-rotation or speed measurements inside an Extended Kalman Filter. It estimates the state: position, orientation, speed, angular speed and acceleration.
Authors: Thomas Genevois and Christian Laugier
Contact: Christian Laugier
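The fusion principle can be sketched in one dimension: a Kalman filter over the along-track state [position, speed, acceleration], updated with a wheel-speed measurement and an IMU acceleration measurement. This is a toy linear sketch with invented noise parameters; the actual software is a full-pose EKF.

```python
import numpy as np

def ekf_step(x, P, z_speed, z_acc, dt, q=0.1, r_speed=0.05, r_acc=0.2):
    """One predict/update cycle over the state x = [position, speed, accel]."""
    F = np.array([[1, dt, 0.5 * dt * dt],
                  [0, 1, dt],
                  [0, 0, 1]])
    x = F @ x                          # constant-acceleration prediction
    P = F @ P @ F.T + q * np.eye(3)
    H = np.array([[0, 1, 0],           # wheel odometry observes the speed
                  [0, 0, 1]])          # IMU observes the acceleration
    R = np.diag([r_speed, r_acc])
    z = np.array([z_speed, z_acc])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

Feeding a constant wheel speed of 1 m/s and zero acceleration makes the speed estimate converge to 1 within a few cycles.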
Simulation of a light vehicle in Gazebo environment
Keyword: Simulation
Functional Description: This simulation represents a light vehicle, taking into account realistic physical phenomena (friction, sliding, inertia, etc.). The simulated vehicle embeds sensors similar to those of the actual vehicle, which provide measurement data in the same format. Moreover, the software inputs/outputs are identical to the vehicle's. Therefore, any program executed on the vehicle can be used with the simulation, and reciprocally.
Authors: Thomas Genevois and Christian Laugier
Contact: Christian Laugier
Hybrid simulation for autonomous cars with high traffic
Keywords: Simulation - Autonomous Cars
Functional Description: Open-source tool for simulating autonomous vehicles in complex, high-traffic scenarios. The hybrid simulation fully integrates and synchronizes a microscopic, multi-modal traffic simulator and a complex 3D simulator.
Contact: Mario Garzon Oviedo
Simulation of UAV fleets with Gazebo/ROS
Keywords: Robotics - Simulation
Functional Description: The simulator includes the following functionality:
1) Simulation of the mechanical behaviour of an Unmanned Aerial Vehicle:
- modelling of the body's aerodynamics with lift, drag and moment,
- modelling of the rotors' aerodynamics using the force and moment expressions from Philippe Martin's and Erwan Salaün's 2010 IEEE International Conference on Robotics and Automation paper "The True Role of Accelerometer Feedback in Quadrotor Control".
2) Ground-truth information:
- positions in the East-North-Up reference frame,
- linear velocity in the East-North-Up and Front-Left-Up reference frames,
- linear acceleration in the East-North-Up and Front-Left-Up reference frames,
- orientation from the East-North-Up reference frame to the Front-Left-Up reference frame (quaternions),
- angular velocity of the Front-Left-Up reference frame expressed in the Front-Left-Up reference frame.
3) Simulation of the following sensors:
- Inertial Measurement Unit with 9 DoF (accelerometer + gyroscope + orientation),
- barometer using an ISA model for the troposphere (valid up to 11 km above Mean Sea Level),
- magnetometer with the Earth's magnetic field declination,
- GPS antenna with a geodesic map projection.
Release Functional Description: Initial version
Author: Vincent Le Doze
Partner: INSA Lyon
Contact: Vincent Le Doze
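The ISA troposphere model mentioned for the simulated barometer reduces to the standard barometric formula, which can be written directly:

```python
def isa_pressure(h):
    """Static pressure (Pa) at altitude h (m) in the ISA troposphere
    (valid below 11 km above Mean Sea Level):
    p = p0 * (1 - L*h/T0) ** (g*M / (R*L))."""
    p0, T0, L = 101325.0, 288.15, 0.0065    # sea-level pressure, temperature, lapse rate
    g, M, R = 9.80665, 0.0289644, 8.31447   # gravity, molar mass of air, gas constant
    return p0 * (1.0 - L * h / T0) ** (g * M / (R * L))
```

At sea level this gives 101325 Pa, and about 22.6 kPa at the 11 km validity limit.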
Recognized as one of the core technologies developed within the team over the years (see related sections in previous activity reports of Chroma, and previously e-Motion reports), the CMCDOT framework is a generic Bayesian Perception framework, designed to estimate a dense representation of dynamic environments and the associated risks of collision, by fusing and filtering multi-sensor data. This whole perception system has been developed, implemented and tested on embedded devices, incorporating new key modules over time. In 2018, this framework, and the corresponding software, continued to be the core of many important industrial partnerships and academic contributions, and the subject of important developments, both in terms of research and engineering. Some of these recent evolutions are detailed below.
CMCDOT evolutions: important developments in the CMCDOT, in terms of calculation methods and fundamental equations, were introduced and tested this year. These developments could lead, in the coming months, to the filing of a new patent, followed by academic publications. Among other evolutions, these changes introduced a much higher update frequency, greater flexibility in the management of transitions between states (and therefore better system reactivity), as well as the handling of highly variable sensor frequencies (for each sensor over time, and across the set of sensors). The technical documents describing these developments are currently being written, and will be described in the next annual report.
Multi-sensor integration in the Ground Estimator: the module for dynamic estimation of the shape of the ground and data segmentation, based solely on sensor point clouds (no prior map information), is the first step of data interpretation in the CMCDOT framework; it has been developed since 2016, patented, and published in 2017. Until this year, the corresponding software could not take into account more than one sensor: with multiple sensors, several separate modules had to be launched and their respective occupancy grids fused afterwards, which not only increased the overall computational load, but also prevented each sensor from benefiting from the ground models generated by the others. This was corrected this year by introducing the management of multiple input sensors, unifying the ground estimation in a single model, thus leading to improved performance, both in terms of computation and results.
Velocity display: in the CMCDOT framework, the velocity of every element of the scene is inferred at the cell level, without object segmentation. This low-level velocity estimation is one of the most original and important aspects of the method, and should be displayed accordingly. A velocity display module has been developed; for each occupied cell of the grid, it displays the average estimated velocity, generating colors depending on the intensity and the orientation, see Fig. .
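Such a per-cell display typically maps velocity to a colour wheel. A minimal sketch, assuming hue encodes the direction of motion and brightness the speed (an illustrative convention, not necessarily the module's exact one):

```python
import colorsys
import math

def velocity_to_rgb(vx, vy, v_max=2.0):
    """Map a per-cell velocity estimate to an RGB colour: hue from the
    motion direction, brightness from the speed (saturated at v_max)."""
    speed = math.hypot(vx, vy)
    hue = (math.atan2(vy, vx) % (2 * math.pi)) / (2 * math.pi)
    value = min(speed / v_max, 1.0)        # static cells stay black
    return colorsys.hsv_to_rgb(hue, 1.0, value)
```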
Software optimization: the whole CMCDOT framework has been developed on GPUs (implementations in C++/CUDA). An important focus of the engineering effort has always been, and continued to be in 2018, the optimization of the software and methods to run on low-energy-consumption embedded boards (currently the Nvidia Jetson TX2).
IROS 2018 Autonomous Driving event: https://hal.inria.fr/medihal-01963296v1 As already mentioned in the highlights of the year, the experimental Zoe platform, funded by IRT Nanoelec, participated at IROS 2018 in the Autonomous Vehicle Demonstrations, a full day of demonstrations of autonomous vehicle capabilities from various research centers. During this successful event, the effectiveness of the embedded CMCDOT framework, in connection with the newly developed control and decision-making systems, was presented and demonstrated in live conditions.
Since 2017, we have been working on the concept of simulation-based validation in the scope of the EU Enable-S3 project, with the objective of devising novel approaches, methods, tools and experimental methodology for validating BOF-based algorithms. For that purpose, we have collaborated with the Inria Tamis team (Rennes) and with Renault to develop the simulation platform used in the test platform. The simulation of both the sensors and the driving environment is based on the Gazebo simulator. A simulation of the prototype car and its sensors has also been realized, so that the same implementation of CMCDOT can handle both real and simulated data. The test management component that generates random simulated scenarios has also been developed. Outputs of CMCDOT computed from the simulated scenarios are recorded by ROS and analyzed through the Statistical Model Checker (SMC) developed by the Inria Tamis team. In , we presented the first results of this work, where a decision-making approach for intersection crossing (see Section ) was analyzed. In particular, new KPIs expressed as Bounded Linear Temporal Logic (BLTL) formulas have been defined. Temporal formulas allow a finer formulation of KPIs by taking into account the evolution of the metrics over time. Further work in this direction will be done in the coming months to provide new results on the validation of the perception algorithm, namely for the velocity estimation and collision risk assessment. For this part, we are also exploring the advantages and potential of a new open-source vehicle simulator (Carla), which would allow considering more realistic scenarios than Gazebo. This work on simulation-based validation will be continued in 2019.
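The core of statistical model checking over simulated scenarios can be sketched very simply: run many randomized traces and estimate the probability that a bounded-horizon property holds. The scenario and property below are invented stand-ins, not the Enable-S3 KPIs or the Tamis SMC tool.

```python
import random

def smc_estimate(simulate, holds, n=2000, seed=0):
    """Monte Carlo estimate of P(property holds on a random trace)."""
    rng = random.Random(seed)
    hits = sum(holds(simulate(rng)) for _ in range(n))
    return hits / n

# Hypothetical scenario: distance to a lead vehicle over 10 steps.
def simulate(rng):
    d, trace = 5.0, []
    for _ in range(10):
        d += rng.uniform(-1.0, 0.5)
        trace.append(d)
    return trace

# Bounded "always" property: the distance never drops below 1 m.
always_safe = lambda trace: all(d > 1.0 for d in trace)
```

A BLTL formula such as G[0,10] (distance > 1) corresponds to the `always_safe` predicate evaluated on each bounded trace.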
In 2017, CHROMA developed a model of the Renault Zoe demonstrator within the Gazebo simulation framework. In 2018, we improved it to keep it up to date after several evolutions of the actual demonstrator. Namely, the drivers of the simulated lidars and the control law have been updated, and the model now also provides the outputs corresponding to a simulated Inertial Measurement Unit.
In 2018, we updated the Renault Zoe demonstrator in collaboration with the LS2N (Laboratoire des Sciences du Numérique de Nantes). The control code has been transferred to the micro-controllers of the car for faster and more precise control. An electric signal has been added to identify when the driver acts on the manual controls of the car. Finally, the control law of the vehicle has been modified in order to accept a command in acceleration. These modifications allowed us to improve the software we use to control the vehicle. We improved our implementation of the DWA (Dynamic Window Approach) local planner in order to handle acceleration commands. This local planner has also been modified to take into account lateral-acceleration maxima and to integrate a path-following module in its cost function. Thanks to this, the new version of the program provides a smooth command combining path following and obstacle avoidance on the Renault Zoe demonstrator. This was shown at the Autonomous Vehicle Demonstration event at IROS 2018, Madrid, Figure .
We have also experimented with a driving assistant for autonomous obstacle avoidance. We showed that it is possible, on the Renault Zoe demonstrator, to let a driver drive the car manually and then, when a collision risk is identified, to take over control autonomously and perform an avoidance maneuver. A simple ADAS
Finally, a Dijkstra algorithm has been tested in simulation to compute a global navigation path, providing the waypoints given to the DWA planner for local navigation.
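The global layer amounts to a standard shortest-path search over a waypoint graph. A compact sketch (graph, weights and waypoint names are illustrative):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest waypoint sequence on a weighted directed graph; the
    resulting waypoints can then be fed one by one to a local planner."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):   # skip stale queue entries
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [], goal
    while u != start:                        # walk back through predecessors
        path.append(u)
        u = prev[u]
    path.append(start)
    return path[::-1], dist[goal]
```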
Robust perception plays a crucial role in the development of autonomous vehicles. While perception in normal and constant environmental conditions has reached a plateau, robustly perceiving changing and challenging environments has become an active research topic, particularly due to the safety concerns raised by the introduction of autonomous vehicles to public streets. Solving the robustness issue in road and urban perception applications is the first challenge. Then, it is also mandatory to develop an appropriate framework for extracting relevant semantic information. Our approach is to reason about vision-based data and the output of our grid-based multi-sensors perception approach (see previous section).
The work presented in this section was partly done in 2017 and completed in 2018, in the scope of our collaboration with Toyota Motor Europe (TME). The main objective was to develop a framework for integrating the outcomes of deep learning methods with a well-established approach: occupancy grids obtained with a Bayesian filtering method in the grid space.
In this work, we are interested in 2D egocentric representations. We propose a method which estimates an occupancy grid containing detailed semantic information. The semantic characteristics include classes such as road, car, pedestrian, sidewalk, building and vegetation. To this end, we leverage and fuse information from multiple sensors, including lidar, odometry and monocular RGB video. To benefit from the respective advantages of the two different methodologies, we propose a hybrid approach leveraging i) the high capacity of deep neural networks as well as ii) Bayesian filtering, which is able to model uncertainty in a unique way.
In the system depicted in Figure , Bayesian particle filtering processes the lidar data as well as odometry information from the vehicle's motion in order to robustly estimate an egocentric bird's-eye view in the form of an occupancy grid. This grid covers a 360° field of view around the vehicle.
Deep Learning is used for two different tasks in our work. Firstly, a deep network performs semantic segmentation of monocular RGB images. This network has been pre-trained on large scale datasets for image classification and fine-tuned on the vehicle datasets. Secondly, a deep network fuses the occupancy grid with the segmented image of the projective view in order to estimate the semantic grid. Since the occupancy grid is dense, the semantic grid is also expected to be dense. We pay particular attention to correctly model the transformation from the egocentric projective view of the RGB image to the bird's eye view of the occupancy grid as input to the neural network. This work was filed for a patent and published in , .
Novel approach: Semantic Grid Estimation with a Hybrid Bayesian and Deep Neural Network Approach.
Current and future work in the scope of our collaboration with TME aims at constructing semantic occupancy grids. We propose a hybrid approach which combines the advantages of Bayesian filtering and deep neural networks. Bayesian filtering provides robust temporal/geometrical filtering and integration, and allows for modelling of uncertainty. RGB information and deep neural networks provide knowledge about semantic class labels such as sidewalk vs. road. The fusion process is fully learned and, due to the dense structure of the occupancy grid, we can construct a dense semantic grid even from a sparse point cloud.
The objective is to develop human-like motion prediction and decision-making algorithms to enable automated driving in highways. This research work is done in the scope of the Inria-Toyota long-term cooperation on Autonomous Driving and of the PhD thesis work of David Sierra González.
Previous work from our team has shown the predictive potential of driver behavioral models learned from demonstrations using Inverse Reinforcement Learning (IRL) . Unfortunately, these models are hard to learn from real-world driving data due to the inability of traditional IRL algorithms to handle continuous state spaces and dynamic environments. To facilitate this task, we have proposed in 2018 an approximated IRL algorithm for driver behavior modeling that successfully scales to continuous spaces with moving obstacles, by leveraging a spatio-temporal trajectory planner . The proposed algorithm was validated using real-world data gathered with an instrumented vehicle. As an example, Figure shows the similarity between the trajectory obtained using a driver model learned with the proposed method and that of a real human driver in a highway overtake scenario. Current efforts are directed towards integrating the learned behavioral models and the predictive models developed in the scope of this project into a decision-making framework for highways. David Sierra González will defend his PhD thesis in March 2019.
Road intersections are probably the most complex segments in a road network. Most major accidents occur at intersections, mainly caused by human errors due to failures in fully understanding the encountered situation. Indeed, as drivers approach a road intersection, they must assess the situation and quickly adapt their behaviour accordingly. When this task is performed by a computer, the available information is partial and uncertain. Any decision requires the system to use this information while also taking into account the behaviour of other drivers to avoid collisions. However, metrics such as the collision rate can remain low in an interactive environment because of the other drivers' actions. Consequently, evaluation metrics must cover other driving aspects.
In this framework, we developed a decision-making mechanism and designed metrics to evaluate such a system at road intersection crossings . For the former, a Partially Observable Markov Decision Process (POMDP) is used to model the system with respect to uncertainties in the behaviour of other drivers. For the latter, different key performance indicators are defined to evaluate the resulting behaviour of the system in different configurations and scenarios. The approach has been demonstrated within an automotive-grade simulator.
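At the heart of any POMDP-based approach is the belief update over the hidden state (here, the other driver's intent): b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s). A generic sketch with toy hand-set intent and observation models (the states, action and probabilities below are invented for illustration):

```python
def belief_update(b, a, o, T, O, states):
    """One Bayes-filter step of a POMDP belief over hidden states."""
    b_new = {}
    for s2 in states:
        pred = sum(T[(s, a)].get(s2, 0.0) * b[s] for s in states)  # prediction
        b_new[s2] = O[(s2, a)].get(o, 0.0) * pred                  # correction
    z = sum(b_new.values())
    return {s: p / z for s, p in b_new.items()}                    # normalise

# Toy intersection model: is the other driver going to stop or go?
states = ("stop", "go")
T = {(s, "wait"): {s: 1.0} for s in states}          # intent persists
O = {("stop", "wait"): {"decel": 0.8, "accel": 0.2},
     ("go", "wait"): {"decel": 0.3, "accel": 0.7}}
```

Observing a deceleration shifts the belief towards the "stop" intent, which the policy can then exploit to decide whether to cross.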
Current work aims at increasing the complexity of the scenario, to include pedestrians and more vehicles, and improving the model used for the dynamics of the vehicle and the observation of the physical state to get closer to real world scenarios.
This work has been carried out in the framework of the PhD thesis of Mathieu Barbier, which will be defended in the first trimester of 2019.
This research is the follow-up of Agostino Martinelli's investigations carried out over the last five years in the framework of the visual-inertial sensor fusion problem and the unknown input observability problem.
During this year, we obtained the full analytic solution of the cooperative visual-inertial sensor fusion problem in the case of two agents, starting from the closed-form solution obtained in previous years (this latter solution will be published in the journal Autonomous Robots ). Additionally, we validated this solution with real experiments, and in particular we showed that the analytic solution significantly outperforms our previous closed-form solution in . The new analytic solution has been accepted for publication in the IEEE Robotics and Automation Letters .
Specifically, we obtained the analytic solution by first proving that this sensor fusion problem is equivalent to a simple system of polynomial equations, consisting of several linear equations and three polynomial equations of second degree. The analytic solution of this polynomial system was easily obtained using an algebraic method developed by Bernard Mourrain, the leader of AROMATH at Inria Sophia Antipolis. The power of the analytic solution is twofold. On one side, it allows us to determine the relative state between the agents (i.e., relative position, speed and orientation) without the need for an initialization. On the other side, it provides fundamental insights into all the theoretical aspects of the problem. During this year, we focused on the first issue. Our next objective is to exploit the analytic solution to obtain basic structural properties of the problem.
The Unknown Input Observability (UIO) problem in the nonlinear case had been an open problem since the 1960s, when it was solved only in the linear case. Over the last five years, I have obtained its general analytic solution. The mathematical apparatus necessary to obtain this solution is very sophisticated and is based on Ricci calculus, borrowed from theoretical physics. On the other hand, this mathematics can be avoided in the case of driftless systems characterized by a single unknown input.
All the results (i.e., in the general case that also accounts for a drift and more than one unknown input) are fully described in a book available on ArXiv (arXiv:1704.03252).
During this year, my effort was devoted to making the analytic derivation of the solution palatable for a large audience (in particular, one without knowledge of Ricci calculus). Hence, I focused on the simple case of a single unknown input and no drift. This solution has been published as a full paper in the IEEE Transactions on Automatic Control .
Regarding the general case available on ArXiv (arXiv:1704.03252), I was invited by SIAM to write a book, palatable for a large audience. The aim of this book is to present to the control theory and information theory communities a very powerful mathematical framework borrowed from theoretical physics. This could make it possible to revisit many aspects of control and information theory, bring new fundamental results, open new research domains, etc. In this sense, the book could be the kick-off of a new season of research in control and information theory. This will be the objective of the coming years.
We study new motion-planning algorithms to allow robots/vehicles to navigate in human-populated environments, and to predict human motions. Since 2016, we have investigated several directions exploiting vision sensors: prediction of pedestrian behaviors in urban environments (extended GHMM), mapping of human flows (statistical learning), and learning task-based motion planning (RL + deep learning) . These works are presented hereafter.
[Pervasive Interaction, Inria Grenoble]
The objective of modeling urban behavior is to predict the trajectories of pedestrians in towns, around cars or platoons (PhD work of P. Vasishta). In 2017, we proposed to model pedestrian behaviour in urban scenes by combining the principles of urban planning with the sociological concept of Natural Vision. This model assumes that the environment perceived by pedestrians is composed of multiple potential fields that influence their behaviour. These fields are derived from static scene elements such as sidewalks, crosswalks, buildings and shop entrances, and from dynamic obstacles such as cars and buses. This work was published in , . In 2018, an extension to the Growing Hidden Markov Model (GHMM) method was proposed to model the behaviour of pedestrians with little or no observed data. This is achieved by building on existing work using potential cost maps and the principle of Natural Vision. As a consequence, the proposed model is able to predict pedestrian positions more precisely and over a longer horizon than the state of the art. The method was tested over legal and illegal pedestrian behaviours, having trained the model with sparse observations and partial trajectories. The method, with no training data (see Fig. .a), was compared against a trained state-of-the-art model, and was observed to be robust even in new, previously unseen areas. This work was published in and won the best student paper award of the conference.
Our goal is the automatic learning of robot navigation in human-populated environments, based on specific tasks and on visual input. The robot automatically navigates in the environment in order to solve a specific problem, which can be posed explicitly and encoded in the algorithm (e.g. recognize the current activities of all the actors in this environment) or given in an encoded form as additional input. Addressing these problems requires competences in computer vision, machine learning, and robotics (navigation and path planning).
We started this work at the end of 2017, following the arrival of C. Wolf, through combinations of reinforcement learning and deep learning. The underlying scientific challenge is to automatically learn representations that allow the agent to solve the multiple sub-problems required for the task. In particular, the robot needs to learn a metric representation (a map) of its environment from a sequence of egocentric observations. Secondly, to solve the problem, it needs to create a representation encoding the history of egocentric observations relevant to the recognition problem. Both representations need to be connected in order for the robot to learn to navigate to solve the problem. Learning these representations from limited information is a challenging goal. This is the subject of the PhD thesis of Edward Beeching, who started in October 2018, see illustration in Fig. .b.
In order to deal with robot/humanoid navigation in complex and populated environments such as homes, we have investigated several research avenues over the past two years:
Mapping human flows. We defined a statistical learning approach (i.e. a counting-based grid model) exploiting only data from the robots' embedded sensors. See illustration in Fig. .a and publication .
Path planning in human flows. We revisited the A* path-planning cost function under the hypothesis of a known flow grid. See publication .
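A flow-aware A* revision can be sketched as follows: each grid cell stores a learned human-flow vector, and moving against the flow incurs an extra penalty in the edge cost. The penalty term and its weight `alpha` are hypothetical illustrations, not the published cost function.

```python
import heapq

def astar_flow(flow, start, goal, alpha=2.0):
    """A* on a 4-connected grid; flow[i][j] = (fx, fy) is a learned
    human-flow vector, and counter-flow motion is penalised."""
    rows, cols = len(flow), len(flow[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    g, prev = {start: 0.0}, {}
    pq = [(h(start), start)]
    while pq:
        _, u = heapq.heappop(pq)
        if u == goal:
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + dx, u[1] + dy)
            if not (0 <= v[0] < rows and 0 <= v[1] < cols):
                continue
            fx, fy = flow[v[0]][v[1]]
            against = max(0.0, -(fx * dx + fy * dy))          # counter-flow component
            ng = g[u] + 1.0 + alpha * against
            if ng < g.get(v, float("inf")):
                g[v], prev[v] = ng, u
                heapq.heappush(pq, (ng + h(v), v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]
```

On a small grid with a counter-flow cell in the middle, the planner routes around that cell while keeping the path length minimal.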
In 2018 we started to study NAMO problems (Navigation Among Movable Obstacles), considering populated environments and multi-robot cooperation. After his Master's thesis on this subject, Benoit Renault started a PhD in Chroma focusing on the extension of NAMO algorithms to such dynamic environments.
RoboCup competition. In the context of the RoboCup international competition, we created the 'LyonTech' team, gathering members from Chroma (INSA/CPE/UCBL). We investigated several humanoid tasks in home environments with our Pepper robot: social-aware architecture, decision making and navigation, deep-learning-based human and object detection (see Fig. .b), and human-robot interaction. In July 2018, we participated in the RoboCup for the first time, reaching the 5th rank of the SSPL league (Pepper@home). We also published our social-aware architecture at the RoboCup Conference . In October 2018, we qualified for the next final phase of the RoboCup SSPL (Pepper), to be organized in July 2019 in Sydney.
This work is part of the PhD thesis in progress of Guillaume Bono, with the VOLVO Group, in the context of the INSA-VOLVO Chair. The goal of this project is to plan and learn, at both the global and local levels, how to act when facing a vehicle routing problem (VRP). We started with a state-of-the-art survey of vehicle routing problems as they currently stand in the literature . We were surprised to notice that little attention has been devoted to deep reinforcement learning approaches to solving VRP instances. Hence, we investigated our own deep reinforcement learning approach, which can help one vehicle learn to generalize strategies from solved instances of travelling salesman problems (an instance of VRPs) to unsolved ones. The difficulty of this problem lies in the fact that its Markov decision process formulation is intractable, i.e., the number of states grows doubly exponentially with the number of cities to be visited by the salesman. To gain in scalability, we drew inspiration from a recent work by DeepMind, which suggests using pointer networks, a novel deep neural network architecture, to address learning problems in which both inputs and outputs are sequences (here, the cities to be visited and the order in which to visit them). Preliminary results are encouraging, and we are extending this work to the multi-agent setting.
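To make the sequence-output framing concrete, here is the kind of simple construction heuristic that a learned TSP policy is usually benchmarked against. This is an illustration only, not the pointer-network model itself.

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP construction: always visit the closest unvisited city.
    Returns the tour as a sequence of city indices starting at city 0."""
    unvisited = list(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    """Total length of the closed tour."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

A learned policy outputs the same kind of index sequence, but is trained to beat such greedy baselines on average.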
After considering multi-robot patrolling of known targets in 2016 , we generalized to MRR (multi-robot routing) and to DMRR (dynamic MRR) in the PhD work of M. Popescu. Target allocation problems have frequently been treated in contexts such as multi-robot rescue operations, exploration or patrolling, often being formalized as multi-robot routing problems. Few works address dynamic target allocation, such as the allocation of previously unknown targets. We recently developed solutions to several variants of this problem:
MRR: multi-robot routing has been the main testbed in the domain of multi-robot task allocation, where decentralized solutions consist in auction-based methods. Our work addresses the MRR problem and proposes MRR with saturation constraints (MRR-Sat), where the cost of each robot treating its allocated targets cannot exceed a bound (called saturation). We provided an NP-completeness proof for MRR-Sat. We then proposed a new auction-based algorithm for MRR-Sat and MRR, which combines ideas of parallel allocation with target-oriented heuristics. An empirical analysis of the experimental results shows that the proposed algorithm outperforms state-of-the-art methods, obtaining not only better team costs but also a much lower running time. Results have been submitted to the RSS 2019 conference.
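The auction-with-saturation idea can be sketched with a sequential single-item auction: each round, the cheapest feasible (robot, target) assignment wins, and a robot whose accumulated path cost would exceed the saturation bound cannot bid. This is a simplified illustration, not the team's MRR-Sat algorithm.

```python
import math

def auction_allocate(robots, targets, saturation):
    """Sequential single-item auction under a per-robot cost bound.
    robots: {name: (x, y)}; targets: list of (x, y).
    Returns the plan per robot and the targets left unassigned."""
    pos = dict(robots)                    # current robot positions
    cost = {r: 0.0 for r in robots}       # accumulated path cost per robot
    plan = {r: [] for r in robots}
    remaining = list(targets)
    while remaining:
        best = None
        for r in robots:
            for t in remaining:
                c = math.dist(pos[r], t)  # marginal cost of appending t
                if cost[r] + c <= saturation and (best is None or c < best[0]):
                    best = (c, r, t)
        if best is None:                  # no feasible bid: stop the auction
            break
        c, r, t = best
        cost[r] += c
        pos[r] = t
        plan[r].append(t)
        remaining.remove(t)
    return plan, remaining
```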
DMRR: we defined the dynamic MRR problem as the continuous adaptation of ongoing robot missions to new targets, and proposed a framework for dynamically adapting existing robot missions to newly discovered targets. Dynamic saturation-based auctioning (DSAT) is proposed for adapting the execution of the robots to the new targets. Comparisons were made with algorithms ranging from greedy methods to auction-based methods with provable sub-optimality. The results show that DSAT outperforms state-of-the-art methods such as standard SSI or SSI with regret clearing, especially in optimizing the target allocation with respect to target coverage in time and robot resource usage (e.g. minimizing the worst mission cost). First results have been published in .
Synchronization: when patrolling targets along bounded cycles, robots have to meet periodically to exchange information and data (e.g. the results of their tasks), which must eventually reach a delivery point (e.g. the base station). Hence, patrolling cycles sometimes have common points (rendezvous points) where information needs to be exchanged between different cycles (robots). We investigated this problem by defining the following first solutions: random wait, speed adaptation (first multiple), primality of periods, and greedy interval overlapping. We developed a simulator whose experiments show that the approaches differ in performance and robustness. This work will be submitted to the IROS 2019 conference.
PHC DRONEM
Multi-robot systems (MRS) require dedicated software tools and models to face the complexity of their design and deployment. In the context of the PhD work of Stefan Chitic, we addressed service self-discovery and property proofs in an ad-hoc network formed by a fleet of robots. This led us to propose a robotic middleware, SDfR, able to provide service discovery, see . In 2017, we defined a tool-chain based on timed automata, called ROSMDB, that offers a framework to formalize and implement multi-robot behaviors and to check some (temporal) properties, both offline and online. Stefan Chitic defended his PhD thesis in March 2018 .
Solving complex tasks with a fleet of robots requires developing generic strategies that can decide, in real time (or in bounded time), on efficient and cooperative actions. This is particularly challenging in complex real environments. To this end, we explore anytime algorithms and adaptive/learning techniques.
The "CROME" and "COMODYS"
To attack the problem, we proposed an original concentric navigation model that easily keeps each robot's camera oriented towards the scene (see Fig. .a). This model is combined with an incremental mapping of the environment and an exploration guided by meta-heuristics, in order to limit the complexity of the exploration state space. Results have been published at AAMAS 2018 . An extended version has been submitted to the JAAMAS journal.
For experiments with multi-robot systems, we defined a hybrid metric-topological mapping. Robots individually build a map that is updated cooperatively by exchanging only high-level data, thereby reducing the communication payload. We combined the on-line distributed multi-robot decision making with this hybrid mapping. These modules have been evaluated on our platform composed of several Turtlebot 2 robots, see Fig. .b. This robotic architecture has been presented in (ECMR). A demo has been given at the AAMAS 2018 international conference .
It has been widely demonstrated that the use of Unmanned Aerial Vehicles (UAVs) is an efficient and safe way to deploy visual sensor networks in complex environments. In this context, a widely studied problem is the cooperative coverage of a given environment. In a typical scenario, a team of UAVs is called to achieve the mission without perfect knowledge of the environment, and needs to generate the trajectories on-line, based only on the information acquired during the mission through noisy measurements. For this reason, guaranteeing a globally optimal solution of the problem is usually impossible. Furthermore, the presence of several constraints on the motion (collision avoidance, dynamics, etc.), as well as limited energy and computational capabilities, makes this problem particularly challenging.
Depending on the sensing capabilities of the team (number of UAVs, range of the on-board sensors, etc.) and the size of the environment to cover, different formulations of this problem can be considered.
We first approached the deployment problem, where the goal is to find the optimal static UAV configuration that maximizes the visibility of a given region. A suitable way to tackle this problem is to adopt derivative-free optimization methods based on numerical approximations of the objective function. In 2012, Renzaglia et al. proposed an approach based on a stochastic optimization algorithm to obtain a solution for arbitrary, initially unknown 3D terrains (see fig. .c). However, with this kind of approach, the final configuration can depend strongly on the initial positions, and the system can get stuck in local optima far from the global solution.
We identified that a way to overcome this problem is to initialize the optimization with a suitable starting configuration. A priori partial knowledge of the environment is a fundamental source of information to exploit for this purpose. The main contribution of our work is thus to add another layer to the optimization scheme in order to exploit this information. This step, based on the concept of Centroidal Voronoi Tessellation, then plays the role of initialization for the on-line, measurement-based local optimizer. The resulting method, taking advantage of the complementary properties of geometric and stochastic optimization, significantly improves on the previous approach and notably reduces the probability of ending in a far-from-optimal final configuration. Moreover, the number of iterations necessary for the convergence of the on-line algorithm is also reduced. This work led to a paper submitted to AAMAS 2019.
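The Centroidal Voronoi Tessellation initialization can be sketched with a Lloyd-style iteration over a sampled region; this is an illustrative sketch under our own assumptions (a point-sampled 2D area, Euclidean cells), not the published algorithm, which works on 3D terrains with visibility objectives.

```python
import numpy as np

def cvt_init(n_uavs, samples, iters=20, seed=0):
    """Lloyd-style CVT: `samples` is an (N, 2) array of points
    describing the area to cover; returns an (n_uavs, 2) array of
    generator positions usable as a starting configuration."""
    rng = np.random.default_rng(seed)
    gens = samples[rng.choice(len(samples), n_uavs, replace=False)]
    for _ in range(iters):
        # assign each sample point to its nearest generator (Voronoi cell)
        d = np.linalg.norm(samples[:, None, :] - gens[None, :, :], axis=2)
        cell = d.argmin(axis=1)
        # move each generator to the centroid of its cell
        for k in range(n_uavs):
            pts = samples[cell == k]
            if len(pts):
                gens[k] = pts.mean(axis=0)
    return gens
```

The returned generators spread over the sampled region, which is exactly the property one wants from an initial configuration handed to a measurement-based local optimizer.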
We are currently also investigating the dynamic version of this problem, where information is collected along the trajectories and the environment reconstruction is obtained by fusing all the visual data.
This research follows up on work led by Jilles S. Dibangoye over the last three years on the foundations of sequential decision-making by a group of cooperative or competitive robots or, more generally, artificial agents. To this end, we explore combinatorial optimization, convex optimization and reinforcement learning methods.
Our major findings this year include:
(Theoretical) – Extending our previous results in the cooperative case, we characterized the optimal solution of partially observable stochastic games.
(Theoretical) – We further exhibit new underlying structures of the optimal solution for both cooperative and non-cooperative settings.
(Algorithmic) – We extended a non-trivial procedure for computing such optimal solutions when only incomplete knowledge about the model is available.
This work proposes a novel theory and algorithms for optimally solving two-person zero-sum POSGs (zs-POSGs), that is, a general framework for modeling and solving two-person zero-sum games (zs-Games) with imperfect information. Our theory builds upon a proof that the original problem is reducible to a zs-Game, but now with perfect information. In this form, we show that dynamic programming theory applies. In particular, we extended Bellman equations to zs-POSGs, and coined them maximin (resp. minimax) equations. Even more importantly, we demonstrated that Von Neumann & Morgenstern's minimax theorem holds in zs-POSGs. We further proved that value functions, the solutions of maximin (resp. minimax) equations, exhibit special structures: the maximin value functions are convex whereas the minimax value functions are concave. Even more surprisingly, we proved that for a fixed strategy, the optimal value function is linear. Together these findings allow us to extend planning and learning techniques from simpler settings to zs-POSGs. To cope with high-dimensional settings, we also investigated low-dimensional (possibly non-convex) representations of approximations of the optimal value function. In that direction, we extended algorithms that apply to convex value functions to Lipschitz value functions.
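In generic notation (ours, not the paper's), the maximin and minimax recursions can be sketched as follows, where $o_t$ is the sufficient statistic of the perfect-information reformulation, $\sigma^1_t, \sigma^2_t$ are the players' one-stage strategies, and $r_t$ is the stage reward:

```latex
\underline{V}_t(o_t) \;=\; \max_{\sigma^1_t}\,\min_{\sigma^2_t}\;
  \mathbb{E}\!\left[\, r_t + \underline{V}_{t+1}(o_{t+1}) \;\middle|\; o_t,\sigma^1_t,\sigma^2_t \right],
\qquad
\overline{V}_t(o_t) \;=\; \min_{\sigma^2_t}\,\max_{\sigma^1_t}\;
  \mathbb{E}\!\left[\, r_t + \overline{V}_{t+1}(o_{t+1}) \;\middle|\; o_t,\sigma^1_t,\sigma^2_t \right].
```

The minimax theorem result quoted above means $\underline{V}_t = \overline{V}_t$ in zs-POSGs, and the structural results say that $\underline{V}_t$ is convex, $\overline{V}_t$ is concave, and the value is linear once one player's strategy is fixed.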
During the last year, we investigated deep and standard reinforcement learning for solving decentralized partially observable Markov decision processes. Our preliminary results include:
(Theoretical) Proofs that the optimal value function is linear in the occupancy-state space, the set of all possible distributions over hidden states and histories.
(Algorithmic) Value-based and policy-based (deep) reinforcement learning for common-payoff partially observable stochastic games.
This work addresses a long-standing open problem of Multi-Agent Reinforcement Learning (MARL) in decentralized stochastic control. MARL was previously applied to finite decentralized decision-making with a focus on team reinforcement learning methods, which at best lead to local optima. In this research, we build on our recent approach, which converts the original problem into a continuous-state Markov decision process, allowing knowledge transfer from one setting to the other. In particular, we introduce the first optimal reinforcement learning method for finite cooperative, decentralized stochastic control domains. We achieve significant scalability gains by allowing the latter to feed deep neural networks. Experiments show our approach can learn to act optimally in many finite decentralized stochastic control problems from the literature.
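The conversion hinges on the occupancy state: a distribution over hidden states and joint histories that evolves deterministically given the agents' behavior. A minimal sketch of its Bayes update, under assumed notation (`T[a][s, s']` transition model, `O[a][s', z]` joint-observation model; names are ours):

```python
import numpy as np

def occupancy_update(occ, a, z, T, O):
    """One-step update of an occupancy state. `occ` maps
    (state, history) -> probability; returns the new occupancy after
    joint action `a` and joint observation `z`, renormalized."""
    new_occ = {}
    for (s, h), p in occ.items():
        for s2 in range(T[a].shape[1]):
            q = p * T[a][s, s2] * O[a][s2, z]
            if q > 0:
                key = (s2, h + ((a, z),))   # extend the joint history
                new_occ[key] = new_occ.get(key, 0.0) + q
    total = sum(new_occ.values())
    return {k: v / total for k, v in new_occ.items()}
```

Because this statistic is a probability distribution, the converted problem has a continuous state space, which is where the linearity of the optimal value function over occupancy states (the theoretical result above) becomes exploitable.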
This work is part of the Ph.D. thesis in progress of Guillaume Bono, with VOLVO Group, in the context of the INSA-VOLVO Chair. The work aims at investigating an attractive family of reinforcement learning methods, namely policy-gradient and more generally actor-critic methods for solving decentralized partially observable Markov decision processes. Our preliminary results include:
(Theoretical) Proofs of the policy-gradient theorems for both total- and discounted-reward criteria in decentralized stochastic control.
(Algorithmic) (deep) actor-critic reinforcement learning methods for centralized and decentralized stochastic control.
Reinforcement Learning (RL) for decentralized partially observable Markov decision processes (Dec-POMDPs) is lagging behind the spectacular breakthroughs of single-agent RL. That is because assumptions that hold in single-agent settings are often obsolete in decentralized multi-agent systems. To tackle this issue, we investigate the foundations of policy gradient methods within the centralized training for decentralized control (CTDC) paradigm. In this paradigm, learning can be accomplished in a centralized manner while execution remains independent. Using this insight, we establish the policy gradient theorem and compatible function approximations for decentralized multi-agent systems. The resulting actor-critic methods preserve decentralized control at the execution phase, while estimating the policy gradient from collective experiences guided by a centralized critic at the training phase. Experiments demonstrate that our policy gradient methods compare favorably against standard RL techniques on benchmarks from the literature. Guillaume Bono also designed a simulator for urban logistics reinforcement learning, namely SULFR.
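The CTDC idea can be illustrated in miniature: two independent softmax policies (decentralized actors) trained with REINFORCE, using a shared running-mean baseline as a stand-in for the centralized critic, on a toy coordination game (reward 1 iff both agents pick the same action). This is a hedged sketch of the paradigm, not the paper's algorithm.

```python
import numpy as np

def train_ctdc(steps=3000, lr=0.2, seed=0):
    """Centralized training, decentralized control in miniature:
    each agent's policy depends only on its own parameters, but the
    advantage signal used at training time is shared."""
    rng = np.random.default_rng(seed)
    logits = [np.zeros(2), np.zeros(2)]   # one independent policy per agent
    baseline = 0.0                        # shared (centralized) baseline
    for _ in range(steps):
        probs = [np.exp(l) / np.exp(l).sum() for l in logits]
        acts = [rng.choice(2, p=p) for p in probs]
        r = 1.0 if acts[0] == acts[1] else 0.0   # coordination reward
        adv = r - baseline                # centralized signal, training only
        baseline += 0.05 * (r - baseline)
        for i in range(2):
            grad = -probs[i]
            grad[acts[i]] += 1.0          # d log pi_i(a_i) / d logits_i
            logits[i] += lr * adv * grad  # REINFORCE ascent
    return logits
```

At execution time each agent only needs its own `logits`; the shared baseline, like the centralized critic it stands in for, is discarded after training.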
During the last year, Mohamad Hobballah (post-doc INSA VOLVO Chair) investigated efficient meta-heuristics for solving two-echelon vehicle routing problems (2E-VRPs) along with realistic logistic constraints. Algorithms for this problem are of interest in many real-world applications. Our short-term application targets goods delivery by a fleet of autonomous vehicles from a depot to the clients through an urban consolidation center using bikers. Preliminary results include:
(Methodological) Design of a novel meta-heuristic based on a differential evolution algorithm and iterated local search. The former helps avoid being attracted by poor local optima, whereas the latter performs local solution improvement.
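The division of labor between the two components can be sketched on a continuous test function; the actual 2E-VRP operators are problem-specific and not reproduced here, so this is purely an illustration of the hybrid scheme.

```python
import numpy as np

def de_ils(f, dim=5, pop=20, gens=100, F=0.7, CR=0.9, seed=0):
    """Differential evolution for global exploration, plus a simple
    perturbation-based local search applied to the incumbent best
    (illustrative hybrid, minimizing f over [-5, 5]^dim)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = a + F * (b - c)                  # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, X[i])     # binomial crossover
            ft = f(trial)
            if ft < fit[i]:                           # greedy selection
                X[i], fit[i] = trial, ft
        # local search around the incumbent best solution
        best = fit.argmin()
        for _ in range(10):
            cand = X[best] + rng.normal(0, 0.1, dim)
            fc = f(cand)
            if fc < fit[best]:
                X[best], fit[best] = cand, fc
    return X[fit.argmin()], fit.min()
```

On a smooth objective such as the sphere function, the DE population does the global work and the local step polishes the best candidate, mirroring the intended roles of the two components in the routing setting.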
(Empirical) Empirical results on standard benchmarks available at http://
This collaboration has been built inside the INSA-VOLVO Chair, led by Prof. Didier Remond (INSA). In this context, the Chair funds the PhD thesis of Guillaume Bono (2016-19) in Chroma. The objective is to study how machine learning techniques can handle the optimization of goods distribution using a fleet of autonomous vehicles. Following the first results, VOLVO proposed to extend our collaboration by funding a post-doc position on goods distribution with platoons of autonomous vehicles. This is the post-doc of Mohamad Hobballah, started in February 2018.
The contract with Toyota Motors Europe is a joint collaboration involving Toyota Motors Europe, Inria and ProbaYes. It follows a first successful short term collaboration with Toyota in 2005. This contract aims at developing innovative technologies in the context of automotive safety. The idea is to improve road safety in driving situations by equipping vehicles with the technology to model on the fly the dynamic environment, to sense and identify potentially dangerous traffic participants or road obstacles, and to evaluate the collision risk. The sensing is performed using sensors commonly used in automotive applications such as cameras and lidar.
This collaboration was extended in 2018 for 4 years (period 2018-2021), and Toyota provides us with an experimental Lexus vehicle equipped with various sensing and control capabilities. Several additional connected technical contracts have also been signed, and an exploitation licence for the CMCDOT software was bought by Toyota in 2018.
This contract was linked to the PhD thesis of Mathieu Barbier (Cifre thesis). The objective is to develop technologies for collaborative driving as part of a Driving Assistance System for improving car safety at road intersections. Both vehicle perception and communications are considered in the scope of this study. Some additional short-term contracts (about 3 months) and an evaluation license for the team's CMCDOT software have also been signed during this period. We are in the process of signing a new PhD research agreement for the period 2019 – 2021, with the objective of addressing the open problem of emergency obstacle avoidance in complex traffic situations (for ADAS or AD applications).
Security of Autonomous Vehicles is a project supported by ANR in the scope of the PULSE program of IRT Nanoelec. The objective of this project is to integrate, develop and promote technological building blocks for context capture, for the safety of autonomous vehicles. Building on embedded Bayesian perception for dynamic environments, and on Bayesian data fusion and filtering technologies over sets of heterogeneous sensors, these building blocks make it possible to secure the movements of vehicles, while also providing them with an enriched representation useful to the autonomy functions themselves. In this context, various demonstrators embedding these technology bricks are developed in cooperation with industrial partners.
The project Tornado is coordinated by Renault. The academic partners of the project are Inria Grenoble-Rhône Alpes, UTC, Institut Pascal, University of Pau and IFSTTAR. The industrial and application partners are Renault, Easymile, Neavia, Exoskills, 4D-Virtualiz, MBPC and Rambouillet Territoires. The objective of the project is to demonstrate the feasibility of a mobility service system operating in the commercial zone of Rambouillet and on some public roads located in its vicinity, using several autonomous cars (autonomous Renault Zoes). The IRT Nanoelec is also involved in the project as a subcontractor, for testing the perception, decision-making, navigation and control components developed in the project.
The project STAR is coordinated by IVECO. The academic partners of the project are Inria Grenoble-Rhône Alpes, IFSTTAR and ISAE-Supaéro. The industrial and application partners are IVECO, Easymile, Transpolis, Transdev and Sector Groupe. The goal of the project is to build an autonomous bus that will operate on a lane protected from other vehicles but not from pedestrians. Inria is involved in the design of situation-awareness perception, especially in specific cases such as docking at the bus stop and handling the dynamics of any obstacle. The IRT Nanoelec is also involved in the project as a subcontractor, for testing the perception, decision-making, navigation and control components developed in the project.
Project of the Informatics Federation of Lyon (FIL) between two teams of two laboratories, CHROMA (CITI) and SMA (LIRIS), entitled "COoperative Multi-robot Observation of DYnamic human poSes", 2017-2019. Leaders: L. Matignon & O. Simonin.
This project funds equipment, travel and internships; its objective is the on-line adaptation of a team of robots that observe and must recognize human activities.
The project CORDES (Coordination d'une Flotte de Drones Connectés pour la Cartographie 3D d'édifices) is an Inria ADT coordinated by Olivier Simonin. It funds an Inria expert engineer position in Chroma (Vincent Le Doze, 10/17-11/19) focusing on UAV control and path-planning. The project aims to deploy a fleet of UAVs able to autonomously fly over an unknown infrastructure and to build a 3D map of it.
The ANR VALET project, led by A. Spalanzani, proposes a novel approach for solving the car-sharing vehicle redistribution problem using vehicle platoons guided by professional drivers. An optimal routing algorithm is in charge of defining the platoon drivers' routes to the parking areas, where the follower vehicles are parked in a fully automated mode. The consortium is made of two academic partners, Inria (RITS, Chroma, Prima) and IRCCyN (Ecole Centrale de Nantes), and the AKKA company. The PhD student recruited in this project (Pavan Vasishta) focuses on integrating models of human behavior to evaluate and communicate a risk to pedestrians who may cross the trajectory of the VALET vehicle. His PhD thesis started in February 2016 and is co-directed by D. Vaufreydaz (Inria/Pervasive Interaction).
The HIANIC project, led by A. Spalanzani, proposes to endow autonomous vehicles with smart behaviors (cooperation, negotiation, socially acceptable movements) that better suit complex shared-space situations. It will integrate models of human behaviors (pedestrians, crowds and passengers), social rules, as well as smart navigation strategies that manage the interdependent behaviors of road users and cybercars. The consortium is made of three academic partners: Inria (RITS, Chroma, Pervasive Interaction teams), the LIG laboratory (Magma team) and the LS2N laboratory (ARMEN and PACCE teams).
The CAMPUS project aims to identify, develop and deploy new functions for autonomous cars in urban environments. In this project, Chroma focuses on finding solutions for navigating in complex situations such as crowded environments or dense traffic. The consortium is made of one academic partner, Inria (RITS and Chroma teams), and three companies: Safran Electronics, Gemalto and Valeo.
Program: ECSEL
Project acronym: ENABLE-S3
Project title: European Initiative to Enable Validation for Highly Automated Safe and Secure Systems
Duration: June 2016 – May 2019
Coordinator: AVL List GesmbH
Other partners: major European organizations, including academic partners (such as Inria or KIT) and a large number of industrial partners from various application domains such as the automotive industry, aeronautics and the train industry.
Abstract: ENABLE-S3 is industry-driven and therefore aims to foster the leading role of the European industry. This is also reflected in its use case driven approach. The main technical objectives are extracted from the use cases defined by the industrial partners, in order to validate the success of the developed methods and tools.
The ENABLE-S3 project will provide European industry with leading-edge technologies that support the development of reliable, safe and secure functions for highly automated and/or autonomously operating systems by enabling the validation and verification at reduced time and costs.
ENABLE-S3 is a large European consortium, involving a French consortium led by Renault and Inria Grenoble Rhône-Alpes. The Inria Tamis team (Rennes) is also involved in the project.
Program: PHC franco-roumain "Brancusi"
Project acronym: DRONEM
Project title: Optimizing Data Delivery in Multi-robot Network Patrolling using Machine Learning
Duration: 01-2017 - 12-2018
Coordinator: O. Simonin, G. Czibula (University of Babes-Bolyai, Cluj-Napoca, Romania)
Abstract: The present research proposal is an interdisciplinary project that focuses on developing novel machine learning models and techniques for addressing the challenging problem of dynamic multi-robot network patrolling. This proposal brings together a team of researchers in the field of robotics (Chroma) with a team of researchers in the field of Machine Learning from Babeș-Bolyai University, Cluj-Napoca (the MLyRE team) and aims to combine their expertise in autonomous robotics and machine learning, as well as to exploit the complementarity between the two fields. Deploying fleets of mobile robots in real scenarios/environments raises several scientific challenges. One of them concerns the ability of the robots to adapt to the complexity of their environment, i.e. its dynamics and uncertainty.
Partner 1 : ETHZ, Zurich, Autonomous System laboratory, (Switzerland) and University of Zurich, Robotics and Perception Group (Switzerland)
Subject 1 : Vision and IMU data Fusion for 3D navigation in GPS denied environment.
Partner 2 : Karlsruhe Institut für Technologie (KIT, Germany)
Subject 2 : Autonomous Driving (student exchanges and common project).
Partner 3 : Vislab Parma (Italy)
Subject 3 : Embedded Perception & Autonomous Driving (visits, projects submissions, and book chapter in the new edition of the Handbook of Robotics).
UC Berkeley & Stanford University (CA, USA)
Subject: Autonomous Driving (postdoc in the scope of Inria@SV, common publications and patent, visits).
NUS Singapore & NTU Singapore.
Subject: Autonomous Driving (visits, common ICT Asia projects, common organization of workshops, review of PhD students).
Massachusetts Institute of Technology (MIT), Cambridge, MA (USA)
Subject: Decentralized Control of Markov Decision Processes.
Subject: Autonomous Driving (visits and common organization of a workshop).
Visit of 3 researchers (Maria-Iuliana Bocicor, Vlad-Sebastian Ionescu, Ioan-Gabriel Mircea) from University Babes-Bolyai, Cluj-Napoca (Romania). In the context of our PHC project "DRONEM" (2017-18) we worked with them, in Lyon (CITI lab), on Sept. 11-14 2018.
Jorge Villagra, Senior Scientist at the Center for Automation and Robotics (CSIC-UPM) in Madrid, visited us and gave a seminar in November 2018. He also co-organized with C. Laugier an Autonomous Vehicle Demonstration event at IEEE IROS 2018 in Madrid (October 2018).
O. Simonin and J. Dibangoye visited the team of Prof. G. Czibula, at University Babes-Bolyai, Cluj-Napoca (Romania), on April 16-19. The visit was organized in the context of the PHC project "DRONEM" (2017-18). O. Simonin gave a talk on Chroma's research.
C. Laugier was Program Co-Chair of IEEE/RSJ IROS 2018. He also co-organized, in the scope of IROS 2018, three interconnected events on autonomous vehicles, including a workshop that attracted more than 360 people
C. Laugier has been appointed as General Co-Chair of IEEE/RSJ IROS 2019 (Macau).
O. Simonin was general co-chair of PDIA 2018 "Perspectives et Défis de l'IA", with Y. Demazeau (CNRS), organized at Paris Descartes by AFIA, on October 11. The 2018 topic was "Véhicule Autonome et Intelligence Artificielle".
O. Simonin is Chair of JFSMA
O. Simonin is co-chair, with F. Charpillet (Inria Larsen), of the National Conference on Robotics (JNRR), to be organized at Vittel in October 2019.
C. Laugier co-organized with F. Nashashibi (Inria), Ph. Martinet (Inria) and D. Wang (NTU Singapore) two special sessions on Mobile Robotics and Deep Learning at IEEE ICARCV 2018 (Singapore).
C. Wolf co-organized the Workshop GDR ISIS & GDR IA on Machine Learning and Reasoning for Signal and Image Processing, October 4th (
F. Jumel is Member of Organizing Committees of the Robocup@Home league.
F. Jumel is elected member of Technical Committees of the Robocup@Home league.
O. Simonin is Chair of JFSMA 2019 to be organized at the PFIA national platform, in Toulouse, July 2019.
C. Laugier was Associate Editor for IEEE ICRA 2018 (Brisbane) and IEEE ICRA 2019 (Montreal). He was also a member of the Senior Program Committee of IEEE/RSJ IROS 2018 (Madrid).
A. Martinelli was Associate Editor for IEEE ICRA 2019.
Jilles S. Dibangoye served as a program committee member for the following conferences: AAAI, IJCAI.
O. Simonin served as a program committee member for the following conferences: AAMAS (Autonomous Agents and Multi-agent Systems International Conference), Robotics track; ICAPS (International Conference on Automated Planning and Scheduling), Robotics track.
O. Simonin has been a Program Committee member of the JFSMA conference (Journées Francophones sur les Systèmes Multi-Agents) since 2008.
C. Wolf served as a program committee member for the following conferences: IJCAI, BMVC, the CVPR 2018 Workshop on Human Pose, Motion, Activities and Shape in 3D, and the ECCV 2018 Workshop on Hands in Action.
Agostino Martinelli served as a reviewer for the following conferences: ICRA, IROS.
Jilles S. Dibangoye served as a reviewer for the following conferences: AAAI, IJCAI, ICRA.
Olivier Simonin served as a reviewer for the following conferences: IROS, ICAPS, ACC.
Anne Spalanzani served as a reviewer for the following conferences: IROS, RO-MAN.
C. Wolf served as a reviewer for the following conferences: CVPR, NIPS, ICLR, ICML, IJCAI, BMVC.
C. Laugier is a member of the Steering Committee of the journal IEEE Transactions on Intelligent Vehicles.
C. Laugier is member of the Editorial Board of the journal IEEE ROBOMECH.
O. Simonin is a member of the editorial board of RIA Revue d'Intelligence Artificielle.
C. Laugier is Senior Editor of the journal IEEE Transactions on Intelligent Vehicles.
A. Martinelli served as a reviewer for the following journals: IEEE Transactions on Robotics, IEEE Transactions on Automatic Control, IEEE Robotics and Automation Letters.
Jilles S. Dibangoye served as a reviewer for the following journals: Revue d'Intelligence Artificielle, and the Mathematics and Artificial Intelligence journal.
O. Simonin served as a reviewer for the following journals: Autonomous Robots (AURO) and RIA (Revue d'Intelligence Artificielle).
Anne Spalanzani served as a reviewer for the international journal Robotica.
O. Simonin was invited for two talks at "Institut d'Automne en IA (IA
O. Simonin gave an invited talk at the "Journée sur les Systèmes Hors Equilibres" ENS Lyon, November 27 : "Coordination d'essaims de robots, vers des systèmes auto-organisés".
C. Laugier gave an invited tutorial at “Institut d'Automne en IA (IA
C. Laugier gave an invited keynote talk at the ECCV 2018 Workshop on "Vision-based Navigation for Autonomous Vehicles", Munich, September 2018. Title: Dynamic Traffic Scene Understanding using Bayesian Sensor Fusion and Motion Prediction.
C. Laugier gave an invited talk at Vedecom SMIV
C. Laugier gave an invited plenary talk at RWIA
A. Spalanzani gave a talk at the "Journée Véhicule autonome et intelligence artificielle" organized by AFIA, Paris Descartes, October 11.
C. Wolf gave an invited tutorial talk at the national "CORESA" conference, Poitiers, France, on November 12th.
C. Wolf gave an invited talk at the Strasbourg doctoral school on mathematics, computer science and engineering, Strasbourg, France, on September 21st.
C. Wolf gave an invited talk at the CVPR 2018 Workshop on "HUMAN 3D: HUman pose, Motion, Activities aNd Shape in 3D", Salt Lake City, USA, June 18th.
C. Wolf gave an invited talk at the Heudiasyc Laboratory, Compiegne, France, June 5th.
C. Wolf gave an Invited talk at Inria Stars, Nice, France, May 30th.
C. Wolf gave an invited talk at the LABRI Laboratory, Bordeaux, France, March 5th.
C. Wolf gave an invited talk at Inria THOTH, Grenoble, France, February 23rd.
J.S. Dibangoye gave an invited talk at the NeurIPS-18 international workshop on Deep Reinforcement Learning in partially observable domains, in December, Montreal, Canada.
A. Spalanzani served, in quality of Vice President, for the 2018 ANR project selection in Interaction and Robotics.
C. Laugier is co-chair with Philippe Martinet and Christoph Stiller, of the IEEE RAS Technical Committee on “Autonomous Ground Vehicles and Intelligent Transportation Systems (AGVITS)’’.
C. Laugier is a member of the committee "Safety of Autonomous Vehicles" (committee led by ARDI in the scope of the Regional Innovation Strategy).
C. Laugier is member of the Scientific Committee of the French GDR Robotique.
C. Laugier is a member of several international award committees. In 2018, he was Chair of the Best Paper Award Committee of IEEE/RSJ IROS 2018 and a member of several IROS 2018 award committees (Fellow Award, Harashima Award for Innovative Technologies, Distinguished Service Award and Young Professional Award). In 2018, he was also a member of the Best Paper Award Committee of IEEE ICARCV 2018 and of the IEEE Chapter Award Committee.
O. Simonin is an elected member of the Board of AFIA, the French Association for Artificial Intelligence.
C. Wolf is part of the Comité de direction of the GDR ISIS "Information, Signal, Image et ViSion" and co-leads its theme "Machine Learning" (together with N. Thome, CNAM).
C. Wolf is part of the Comité scientifique of the GDR IA "Aspects Formels et Algorithmiques de l'Intelligence Artificielle".
C. Wolf co-leads the theme "Machine Learning and Robotics" of the GDR Robotique (together with D. Filiat, ENSTA).
C. Laugier is member of the Advisory Board of ISR University of Coimbra.
C. Laugier is Scientific Advisor for the ProbaYes SA and for Baidu.
O. Simonin served, in quality of reviewer, for ANR project submissions.
J. Dibangoye served, in quality of reviewer, for ANR project submissions.
C. Wolf was part of the ANR Committee CES 33 "Interaction and Robotics"
C. Laugier is a member of several Ministerial and Regional French Committees on Robotics and Autonomous Cars.
O. Simonin is an elected member of the AFIA Council (Association Française pour l'Intelligence Artificielle)
O. Simonin is member of the Auvergne-Rhone-Alpes Robotics cluster (Coboteam), for Inria and INSA de Lyon entities.
O. Simonin is member of the Scientific Council of the Digital League (Auvergne-Rhone-Alpes).
F. Jumel is member of the International RoboCup competition (as Chief Referee for the RoboCup@Home league)
F. Jumel is member of the board of IMAGINOVE cluster (digital content industry)
F. Jumel is member of the Rhone-Alpes Robotics cluster (Coboteam)
CPE Lyon 4-5th year : F. Jumel, resp. of the Robotics option, 400h M1/ M2, Dept. SN CPE Lyon France.
CPE Lyon 4-5th year : F. Jumel, 250h (robotic vision, cognitive science, robot-machine interfaces, deep learning, robotic frameworks, robotic platforms, Kalman filtering)
INSA Lyon 3rd year : Jilles S. Dibangoye, Algorithmics, 24h, L3, Dept. Telecom INSA de Lyon, France.
INSA Lyon 3rd year : Jilles S. Dibangoye, WEB, 42h, L3, Dept. Telecom INSA de Lyon, France.
INSA Lyon 3rd year : Jilles S. Dibangoye, Operating Systems, 56h, L3, Dept. Telecom INSA de Lyon, France.
INSA Lyon 4th year : Jilles S. Dibangoye, Operating Systems, 16h, Master, Dept. Telecom INSA de Lyon, France.
INSA Lyon 5th year : Jilles S. Dibangoye, the Robotics option : AI for Robotics, Robotics projects, 8h, M2, Dept. Telecom INSA de Lyon, France.
M2R MoSIG: A. Martinelli, Autonomous Robotics, 12h, ENSIMAG Grenoble.
INSA Lyon 5th year : O. Simonin, Resp. of the Robotics option (25 students): AI for Robotics, Software and Hardware for robotics, Robotics projects, 90h, M2, Telecom Dept., France.
INSA Lyon 3rd year : O. Simonin, Resp. of Introduction to Algorithmics, 32h (100 students), L3, Telecom Dept., France.
INSA Lyon 5th year : A. Spalanzani, Navigation en environnement humain, 2h, M2, INSA de Lyon, France.
Master : Laetitia Matignon, Multi-Agents and Self-* Systems, 10h TD, M2 Artificial Intelligence, Lyon 1 University, France.
Master : Laetitia Matignon, Multi-Robot Systems, 20h TD, 5th year of engineer, Polytech Lyon Informatics Department, France.
PhD in progress: David Sierra Gonzalez, Autonomous Driving (cooperation Toyota), 2014, C. Laugier, J. Dibangoye, E. Mazer (Inria Prima). Defense planned in March 2019.
PhD in progress: Mathieu Barbier, Decision making for Intelligent Vehicles (cooperation Renault), 2015, C. Laugier, O. Simonin and E. Mazer (Inria Pervasive Interaction). Defense planned in April 2019.
PhD in progress: Mihai Popescu, Robot fleet mobility under communication constraints, 2015, O. Simonin, A. Spalanzani, F. Valois (CITI/Inria Agora).
PhD in progress: Pavan Vasishta, Natural vision based perception and prediction, 2016, A. Spalanzani and D. Vaufreydaz (Inria Pervasive Interaction)
PhD in progress: Guillaume Bono, Global-local Optimization Under Uncertainty for Goods Distribution Using a Fleet of Autonomous Vehicles, 2016, O. Simonin, J. Dibangoye, L. Matignon.
PhD in progress: Remy Grunblatt, "Mobilité contrôlée dans les réseaux de drones autonomes", 2017, I. Guérin-Lassous (Inria Dante) and O. Simonin.
Starting PhD: Maria Kabtoul, Proactive Navigation in dense crowds, A. Spalanzani and P. Martinet (Inria Chorale).
Starting PhD: Manon Prédhumeau, Crowd simulation and autonomous vehicle, A. Spalanzani and J. Dugdale (LIG Hawai).
Starting PhD: Luiz Serafim-Guardini, Conduite Automobile Autonome : Utilisation de grilles d'occupation probabilistes dynamiques pour la planification contextualisée de trajectoire d'urgence à criticité minimale, A. Spalanzani, C. Laugier, P. Martinet (Inria Chorale).
Starting PhD: Benoit Renault, Navigation coopérative et sociale de robots mobiles en environnement modifiable, O. Simonin and J. Saraydaryan.
Starting PhD: Edward Beeching, Large-scale automatic learning of autonomous agent behavior with structured deep reinforcement learning, C. Wolf, O. Simonin and J. Dibangoye.
PhD thesis juries
C. Laugier was reviewer and president of the defense committee of the PhD thesis of Fernando Ireta Munoz, I3S Sophia-Antipolis, April 4th 2018.
C. Laugier was member of the defense committee of the PhD thesis of Tomasz Kucners, Örebro University (Sweden), September 20th 2018.
O. Simonin was reviewer and member of the defense committee of the PhD thesis of Nesrine Mahdoui Chedly, Compiegne (UTC), December 7th, 2018.
O. Simonin was reviewer and member of the defense committee of the PhD thesis of Thadeu Knychala Tucci, Université Franche Comté, Montbéliard, November 12th, 2018.
C. Wolf was reviewer and member of the defense committee of the PhD thesis of Michaël Blot, Sorbonne Université, Paris, November 11th.
C. Wolf was reviewer and member of the defense committee of the PhD thesis of Farhood Negin, Université de Nice, Inria Stars, October 15th.
C. Wolf was reviewer and member of the defense committee of the PhD thesis of Bruno Stuner, Université de Rouen, June 11th.
C. Wolf was member of the defense committee of the PhD thesis of Stéphane Lathuilière, Université de Grenoble, Inria Perception, May 22nd.
C. Wolf was reviewer and member of the defense committee of the PhD thesis of Nicolas Chesneau, Université de Grenoble, Inria Thoth, February 2nd.
O. Simonin participated in the writing of the "Livre blanc sur les Véhicules Autonomes et Connectés".
O. Simonin gave several interviews to popularize AI & Robotics and Autonomous Vehicles: JDN (Journal du Net), INSA newsletter, etc.
C. Laugier, J. Lussereau and J.A. David were interviewed in October 2018 by Radio France in order to popularize autonomous vehicles. The interview was broadcast by France Bleu Isère on October 11th 2018, at 6:40 a.m. and 8:15 a.m. respectively.
J. Dibangoye created a MOOC on deep reinforcement learning for drones, which will be launched in 2019 on OpenClassRooms.
C. Wolf gave a talk for developers at the journée ARAMIS on May 24th.
C. Wolf gave a talk for developers at the conference #MixIT, Lyon on April 20th.
Demo "Crazyflie micro-drones" at the Journées Recherche Inria & Industrie (O. Simonin and S. d'Alu), Paris, November 20th. Video link:
https://