Main challenge: The e-Motion project-team aims at developing models and algorithms that allow us to build "artificial systems" including advanced sensorimotor loops, exhibiting behaviors efficient and robust enough to operate in open and dynamic environments (i.e. in partially known environments, where time and dynamics play a major role), while supporting various types of interaction with humans. This challenge is part of a more general one that we call Robots in Human Environments. Recent technological progress in embedded computational power, sensor technologies, and miniaturised mechatronic systems makes the required technological breakthroughs potentially possible (including from the scalability point of view).
Approach and research themes: In order to reach this objective, we combine the respective advantages of computational geometry and of probability theory. We also work in cooperation with neurophysiologists on sensorimotor systems, in order to apply and experiment with some biological models. This approach leads us to study, from these different points of view, three strongly correlated fundamental research themes:
Perception and multimodal modelling of space and motion. The basic idea consists in continuously building (using prior knowledge and current perceptive data) several types of models with complementary functional specialisations (as suggested by neurophysiologists). This leads us to address the following questions: how to model the various aspects of the real world? How to consistently combine a priori knowledge and flows of perceptive data? How to predict the motions and behaviors of sensed objects?
Motion planning and autonomous navigation in the physical world. The main problem is to simultaneously take into account various constraints of the physical world, such as non-collision, environment dynamicity, or reaction time, while mastering the related algorithmic complexity. Our approach for solving this problem consists in addressing two main questions: how to incrementally construct efficient and reliable space-time representations for both motion planning and navigation? How to define an iterative motion planning paradigm taking into account kinematics, dynamics, time constraints, and safety issues? How to integrate human-robot interactions into the decisional processes?
Learning, decision, and probabilistic inference. The main problem to solve is to reason correctly about prior and learned knowledge, while explicitly taking into account the related uncertainty. Our approach for addressing this problem is to use and develop our Bayesian programming paradigm, while collaborating with neurophysiologists on particular topics such as the modeling of human navigation mechanisms or of biological sensorimotor loops. The main questions we are addressing are the following: how to model sensorimotor systems and related behaviors? How to take safe navigation decisions under uncertainty? What kinds of models and computational tools are required for implementing the related Bayesian inference paradigms?
C. Laugier was invited to be a keynote speaker at the FSR'09 conference (Cambridge).
Signature of an agreement with Probayes SAS for the exploitation of the already patented BOF technology.
T. Fraichard was nominated as an IEEE Senior Member.
Publication of special issues in the following journals: JFR 2008, IJRR 2009, IJVAS 2008, IEEE Trans. on ITS 2009.
A joint patent with Toyota on "Risk assessment for driving assistance" was registered in August 2009.
Alejandro-Dizan Vasquez has received the Euron Georges Giralt award 2009 for his PhD thesis.
The PhD of Trung-Dung Vu, "Détection d'obstacles par fusion de flux optique et stéréovision dans des grilles d'occupation" (Obstacle detection by fusion of optical flow and stereo vision in occupancy grids), has been defended on Oct. 15th 2009.
The PhD of Christopher Tay “Perception for Urban Driverless Vehicles: Design and Implementation” has been defended on September 4th 2009.
The PhD of Estelle Gilet, "Modélisation Bayésienne d'une boucle perception-action : application à la lecture et à l'écriture" (Bayesian modelling of a perception-action loop: application to reading and writing), has been defended on Oct. 2nd 2009.
The PhD of Chiara Fulgenzi, "Autonomous navigation in dynamic uncertain environment using probabilistic models of perception and collision risk prediction", has been defended on June 8th 2009.
The PhD of Manuel Yguel, "Variational Algorithms for Online Updating of Dense Maps in Sparse Multi-Scale Representations: Application to Robot Navigation in 3D Environments", has been defended on Dec. 11th 2009.
The main applications of our research are those aiming at introducing advanced and secure robotized systems into human environments (i.e. "Robots in human environments"). In this context, we are focusing on the following application domains: future cars and transportation systems, service and intervention robotics, and potential spin-offs in other application domains.
This application domain should change quickly under the combined effects of new technologies and the current economic and safety requirements of our modern society. Various technologies are currently being studied and developed by research laboratories and industry. Among these technologies, we are interested in Advanced Driver Assistance Systems (ADAS).
This application domain should expand rapidly as soon as robust industrial products that are easy for non-specialists to use, and of reasonable cost, appear on the market. One can mention in this field of application home robots, active surveillance systems (e.g. mobile surveillance robots, civilian or military safety, etc.), entertainment robots, and robotized systems for assisting elderly and/or disabled people. The technologies we are developing should obviously be of major interest for such types of applications.
The software technologies we are developing (for Bayesian programming) should also have a potential impact on a large spectrum of application domains, covering fields as varied as interaction with autonomous agents in virtual worlds (e.g. in video games), the modelling of biological sensorimotor systems to help neurophysiologists understand living systems, or applications in economic sectors far removed from robotics, such as finance or plant maintenance (applications currently covered by our start-up Probayes, which commercializes products based on Bayesian programming).
ProBT.
People involved : Juan-Manuel Ahuactzin, Kamel Mekhnacha, Pierre Bessière, Emmanuel Mazer, Manuel Yguel
ProBT is available both as a commercial product (ProBAYES.com) and as a free library for public research and academic purposes (http://).
Cycab Simulator and programming toolbox.
People involved : Amaury Nègre, participation of the SED team.
In order to perform pre-tests and to help Cycab developers, a robot simulator has been developed. This simulator is intended to simulate hardware and low-level drivers, so as to produce temporal behaviour (refresh frequency, scheduling, ...) similar to what is found on real robots with real sensors.
A middleware called Hugr has been developed to allow easy switching between the simulated and the real platform. Applications that use this middleware do not need to be recompiled when moving from the simulator to the real hardware. Moreover, Hugr makes it easy to design distributed applications: it uses the network to share data between applications that are not located on the same machine as easily as if they were.
Several sensors and robots have been simulated; among them, the most original are catadioptric and fisheye cameras. Realistic models have been developed for laser sensors, GPS, cameras, etc. All these models rely on state-of-the-art GPU techniques. Computing most of the simulated data on the GPU leaves the CPU free for applications, so there is practically no difference between simulated and real sensors or robots.
Applications written and tested on the simulated robot can then be transferred to the real one without any modification. Sensors and environment are also simulated, so that complete applications can be developed on this test bed.
The simulator project is available on the INRIA Forge (http://).
Bayesian Occupancy Filter (BOF) Toolbox.
People involved: Kamel Mekhnacha, Tay Meng Keat Christopher, C. Laugier, M. Yguel, Pierre Bessière, Thierry Fraichard.
The BOF toolbox is a C++ library that implements the Bayesian Occupancy Filter. It is often used for modelling dynamic environments, and contains the relevant functions for performing Bayesian filtering in grid spaces. The outputs of the BOF toolbox are the estimated probability distributions of each cell's occupancy and velocity. Some basic sensor models, such as a laser scanner model and a Gaussian sensor model for gridded spaces, are also included. The sensor models and the BOF mechanism provide the necessary tools for modelling dynamic environments in most robotic applications. This toolbox is protected by two patents: "Procédé d'assistance à la conduite d'un véhicule et dispositif associé" n. 0552735 (9 September 2005) and "Procédé d'assistance à la conduite d'un véhicule et dispositif associé amélioré" n. 0552736 (9 September 2005), and is commercialized by ProBayes.
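As a rough illustration of the kind of computation the toolbox performs (this is not the ProBayes API; function and parameter names are invented for the sketch), a per-cell Bayesian occupancy update over a small 1D grid can be written as:

```python
import numpy as np

def update_occupancy(prior, p_occ, p_free, hit):
    """One Bayesian update step for per-cell occupancy probabilities.

    prior  -- P(occupied) per cell before the measurement
    p_occ  -- P(cell reported as hit | cell occupied)
    p_free -- P(cell reported as hit | cell free)
    hit    -- boolean array: did the sensor report the cell as occupied?
    """
    like_occ = np.where(hit, p_occ, 1.0 - p_occ)    # likelihood if occupied
    like_free = np.where(hit, p_free, 1.0 - p_free)  # likelihood if free
    num = like_occ * prior
    return num / (num + like_free * (1.0 - prior))   # Bayes' rule per cell

prior = np.full(5, 0.5)                        # uninformed 1D grid
hit = np.array([True, True, False, False, True])
post = update_occupancy(prior, 0.9, 0.2, hit)  # hits move toward occupied
```

The real BOF additionally maintains a velocity distribution per cell; this sketch shows only the occupancy half of the update.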
ColDetect.
People involved : Christian Laugier, Kenneth Sundaraj.
This library has been implemented to provide robust and efficient collision detection, exact distance computation, and contact localisation for three-dimensional polygonal objects. It is registered under the French APP patent #IDDN.FR.001.280011.000.S.P.2004.000.10000. The library is still available on the web and used by several researchers in different countries.
Although related to fields of research close to those of the e-Motion project-team, the following software packages are no longer used by the researchers of our team.
Grid Occupancy Wavelets (GROW).
People involved : Manuel Yguel, Francis Colas, David Raulo.
These software components are C++ libraries for designing applications that build dense representations of the occupancy function of an environment, in 2D or 3D, from telemetric sensor measurements. They are available for Linux. The Grid Occupancy Wavelets components are registered under a French APP declaration and have been used in scientific experiments.
VisteoPhysic.
People involved : Cesar Mendoza, Kenneth Sundaraj, Christian Laugier.
This library provides efficient tools for deformable object simulation. It is patented under the French APP patent #IDDN.FR.001.210025.000.S.P.2004.000.10000.
Markov models toolbox.
People involved : Olivier Aycard.
This toolbox is a C++ library for prototyping applications that interpret temporal sequences of noisy data. It is available for Linux and Windows (Visual C++). The Markov models toolbox has two main components: (i) a model definition and parameter learning component, which allows the user to manually define the topology of a Markov model and automatically learn the parameters of the defined model; original learning algorithms have also been developed to automatically build the topology of the model and estimate its parameters. The result of this part is a set of Markov models, where each model is trained (i.e., estimated) to recognize a particular type of temporal sequence of noisy data. (ii) An interpretation component, whose goal is to interpret a temporal sequence of noisy data and determine the most probable corresponding Markov model. This toolbox is registered under the French APP patent #IDDN.FR.001.280011.000.S.P.2004.000.10000, and has been used to perform a preliminary study of car driver behaviour recognition in cooperation with TOYOTA, and to interpret sequences of noisy sensor data from mobile robots.
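The interpretation component's model-selection step can be sketched with a generic scaled forward algorithm (this is not the toolbox's actual interface; models and names are illustrative): each candidate HMM scores the observed sequence, and the most probable model is retained.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM.
    pi: initial state distribution, A: transition matrix, B[s, o]: emission probs."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        ll += np.log(alpha.sum())       # accumulate the scaling factors
        alpha = alpha / alpha.sum()
    return ll

# two toy models over a binary symbol alphabet
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B0 = np.array([[0.8, 0.2], [0.6, 0.4]])  # model 0 mostly emits symbol 0
B1 = np.array([[0.2, 0.8], [0.4, 0.6]])  # model 1 mostly emits symbol 1
obs = [0, 0, 1, 0, 0]
best = 0 if log_likelihood(obs, pi, A, B0) > log_likelihood(obs, pi, A, B1) else 1
```

A sequence dominated by symbol 0 is attributed to model 0, which is the essence of the "most probable corresponding Markov model" decision.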
In the context of visual navigation in open and dynamic environments, obstacle detection plays a major role. In this work, we are interested in characterizing obstacles by their Time To Collision (TTC). We showed that the TTC can be computed directly in an image by using the characteristic scale. We then developed a detector and a tracker that are invariant to changes of scale and well adapted to urban environments. This detector extracts regions of interest called "ridge segments", corresponding to contrasted and elongated shapes in the image. The tracking of such structures is based on a particle filter and allows the scale to be computed in order to estimate the TTC. Finally, we studied two visual navigation applications: automatic stopping of the vehicle and a Bayesian reactive obstacle avoidance system.
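The underlying relation, assuming constant relative speed along the optical axis, is TTC ≈ s / (ds/dt), where s is the characteristic scale of the tracked structure in the image. A minimal sketch (illustrative names and values, not the thesis code):

```python
import math

def time_to_collision(scale_prev, scale_curr, dt):
    """TTC from the growth of the characteristic scale s in the image:
    TTC ≈ s / (ds/dt), assuming constant relative speed along the optical axis."""
    ds_dt = (scale_curr - scale_prev) / dt
    if ds_dt <= 0.0:
        return math.inf          # the scale is not growing: not approaching
    return scale_curr / ds_dt

# scale grows from 10 px to 11 px in 0.1 s -> ds/dt = 10 px/s -> TTC = 1.1 s
ttc = time_to_collision(10.0, 11.0, 0.1)
```

Note that the TTC is obtained from image measurements alone, without knowing the metric distance to the obstacle.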
This work was presented in the PhD thesis of Amaury Nègre, defended in March 2009, and was done in collaboration with Guillem Alenya of the IRI laboratory (UPC, Barcelona). These results have been published at IROS09 .
Perceiving and understanding the environment surrounding a vehicle is a very important step for driving assistance systems and autonomous vehicles. The task involves both Simultaneous Localization And Mapping (SLAM) and Detection And Tracking of Moving Objects (DATMO). In this context, we have designed and developed a generic architecture to solve SLAM and DATMO in dynamic outdoor environments. For the SLAM problem, this architecture uses a maximum likelihood approach to build a consistent local occupancy grid map and to localize the ego-vehicle inside this map. Once a consistent local map has been constructed, moving objects can be detected from inconsistencies between observed free space and occupied space in the local grid map .
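The detection step described above can be sketched as a simple grid comparison (a toy version with invented names and thresholds, not the actual implementation): cells the current scan sees as occupied, but which the local map holds as free, are candidate moving objects.

```python
import numpy as np

def detect_moving_cells(map_occ, scan_occ, free_thresh=0.2, occ_thresh=0.8):
    """Flag cells occupied in the current scan but free in the static local map.
    map_occ, scan_occ -- per-cell occupancy probabilities."""
    return (scan_occ > occ_thresh) & (map_occ < free_thresh)

map_occ = np.array([0.05, 0.95, 0.50, 0.05])    # learned local map
scan_occ = np.array([0.95, 0.95, 0.95, 0.05])   # current scan
moving = detect_moving_cells(map_occ, scan_occ)  # only cell 0 is a new object
```

Cell 1 is occupied in both map and scan (a static obstacle) and cell 2 is too uncertain in the map to be classified, so only cell 0 is flagged.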
Regarding DATMO, we take a model based approach to overcome the classical problems of tracking using raw laser data. We formulate the detection and tracking problem as finding the most likely trajectories of moving objects given measurements over a sliding window of time (Fig. ). A trajectory (track) is regarded as a sequence of object shapes produced over time by an object satisfying the constraint of an underlying measurement model and the smoothness in motion from frame to frame . This work was presented in the PhD thesis of Trung-Dung Vu defended in October 2009 .
This architecture has been used in the European project PReVENT-ProFusion2 , . Currently, we are using it in the context of the INTERSAFE-2 project. This European project is concerned with safety at intersections. To achieve this goal, we are working on fusion between onboard sensor data from the ego-vehicle and sensor data from the infrastructure. First results on laser data processing are shown in the Figure . We are currently working on fusion between different onboard sensors, including laser, stereo vision and radar. Different strategies will be studied for fusing data from onboard sensors, and for fusing onboard and infrastructure sensor data.
Reliable perception of the surrounding physical world is a fundamental requirement for autonomous vehicle applications. The major requirement for such a system is a robust target detection and tracking algorithm. In earlier work, we proposed a grid-based representation of the environment, the Bayesian Occupancy Filter (BOF) , , and a Fast Clustering-Tracking Algorithm (FCTA) which works with the BOF to provide an object-level representation. The FCTA takes the occupancy/velocity grid of the BOF as input, extracts objects from the grid with a clustering module, and tracks the targets.
When we applied these algorithms to lidar-based autonomous vehicle applications, however, we found that in outdoor environments a large fraction of the lidar impacts come from stationary objects, e.g. buildings, trees and parked vehicles. Because the BOF works in a local frame rather than a global inertial frame, if we used the lidar data directly to build the sensor model for the BOF and FCTA algorithms, the stationary objects would be detected and tracked as well, which can be seen as false alarms. We solve this problem by introducing a local SLAM-based preprocessing module. This algorithm localizes the ego-vehicle and builds an occupancy grid map of the surrounding environment at the same time. Because we are not concerned with the global precision of the vehicle's state, instead of building a map of a large cyclic environment as in the full SLAM problem, we only need to build a map of a limited area that moves with the vehicle. The precision of this map is guaranteed by the relatively accurate lidar sensor, so it is feasible to apply a maximum likelihood algorithm: we estimate the maximum likelihood map and the maximum likelihood state at each time step by decomposing the SLAM problem into a mapping problem and a localization problem. We then classify the lidar beam clusters into a stationary set and a moving set according to the occupancy map.
Therefore, the local SLAM preprocessing module, the BOF algorithm and the FCTA algorithm form a complete framework for detecting and tracking moving objects in dynamic environments.
However, the proposed framework suffers from an expensive computational cost, which has so far prevented its use in applications with real-time demands. The running time of the BOF is a function of both the dimension of the BOF grid and the number of discretized velocities: if the grid contains N cells and the number of discretized velocities is M, the computational complexity is O(N·M). If K objects are being tracked, the computational complexity of the FCTA is a linear function of K, i.e. O(K). It has been shown that, compared with the expensive computational cost of the BOF, the cost of the FCTA can be neglected. The grid-based structure of the BOF algorithm lends itself well to parallelization, so a GPU-based implementation was carried out, taking full advantage of the parallel computing capability of the GPU. A comparison of the BOF on CPU and on GPU is shown in Fig. . Both versions are implemented in C++ without optimization; the NVidia CUDA library is used for the GPU implementation. Simulations were performed on a desktop with an Intel Core Duo 2.0 GHz and 2 GB of memory, with the number of discretized velocities of the BOF set to M = 65. The x-axis of the figure is the number of cells in the BOF output grid; the y-axis is the average processing time for a single frame. We compared the costs of running the BOF on CPU, on a GPU with 4 multiprocessors (NVidia Quadro FX 1700), and on a GPU with 30 multiprocessors (NVidia GeForce GTX 280). As shown in the figure, the BOF on GPU greatly reduces the processing time, which makes the proposed framework feasible for applications with real-time demands.
To test this framework, we applied it to a lidar dataset recorded from a vehicle moving in urban traffic. The GPU implementation of the framework is used, which allows the system to run at 20 Hz. The maximum detection range of the lidar is about 80 meters, the angular range is from 0 to 180 degrees, and the angular resolution is 0.5 degree. Odometry data is also provided. The camera images in this dataset are only used to illustrate the scenario. The grid resolution of the occupancy grid map is 0.2 m, and the grid resolution of the BOF is 0.3 m. The results are presented in Fig. . From top to bottom, the figures show the occupancy grid map view, the BOF output view and the camera view respectively. In the occupancy grid map, the rectangle represents the host vehicle, and the ellipses represent the mean positions and covariances of the moving objects. It can be seen that most of the moving objects, e.g. cars, buses and pedestrians, are well detected and tracked, while stationary objects, e.g. poles, parked vehicles and buildings on the roadside, are correctly represented by the occupancy grid map built on-line. This work has been submitted to ICRA10.
In many computer vision related applications it is necessary to distinguish between the background of an image and the objects that are contained in it. This is a difficult problem because of the double constraint on the available time and the computational cost of robust object extraction algorithms.
For the specific problem of finding moving objects from static cameras, the traditional segmentation approach is to separate pixels into two classes: background and foreground. This is called Background Subtraction and constitutes an active research domain (see ). Having the classified pixels, the next processing step consists of merging foreground pixels to form bigger groups corresponding to candidate objects, this process is known as object extraction.
In previous work, we focused on the background/foreground classifier, exploring the new capabilities of the extraction algorithm. We implemented a statistical classifier that models each background pixel as a Gaussian, and performed the foreground classification based on the Mahalanobis distance between the intensity of the input pixel and its corresponding background model. This gives us, for each pixel in a frame, a continuous output between 0 and 1 corresponding to our belief that the pixel belongs to the foreground.
In 2009, we adopted a Mixture of Gaussians (MoG) to model the background; but again, instead of the traditional binary foreground classification, we extract a continuous value, following the same idea discussed above.
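For the single-Gaussian case, the continuous foreground value can be sketched as follows (the particular mapping from Mahalanobis distance to [0, 1] used here is one illustrative choice, not necessarily the one used in the paper):

```python
import numpy as np

def foreground_belief(frame, mean, var):
    """Continuous foreground value in [0, 1] from the Mahalanobis distance
    between a pixel's intensity and its Gaussian background model."""
    d2 = (frame - mean) ** 2 / var   # squared Mahalanobis distance per pixel
    return 1.0 - np.exp(-0.5 * d2)   # 0 = background, 1 = foreground

mean = np.array([100.0, 100.0])    # per-pixel background means
var = np.array([25.0, 25.0])       # per-pixel background variances
frame = np.array([100.0, 160.0])   # second pixel is 12 sigma from its model
belief = foreground_belief(frame, mean, var)
```

The continuous output lets the subsequent clustering stage weight pixels by confidence instead of committing to a hard threshold.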
We compared the results on combining our clustering algorithm with 4 different background/foreground classification techniques:
(a) A binary bitmap obtained by thresholding the absolute difference between the intensity levels of the current and previous video frames;
(b) Traditional MoG classification, with 3 Gaussians (binary output);
(c) Background as a single Gaussian and continuous foreground representation;
(d) MoG with 3 Gaussians and continuous foreground representation.
To evaluate the performance we adopted the following parameters:
detection ratio (dt), matching ratio (match), false positive ratio (fp), and false negative ratio (fn).

input   dt     match   fp     fn
ideal   1      1       0      0
(a)     2.52   0.62    1.28   0.38
(b)     8.73   0.88    5.9    0.12
(c)     0.96   0.73    7.08   0.27
(d)     1.72   0.93    1.06   0.07
The results summarized in the table suggest that the last approach produces significantly better results than the other ones.
It is important to note that the traditional MoG with binary output can produce better results if combined with post-processing techniques that filter noisy detections. This post-processing proves unnecessary in our approach, however, since the noise tends to have lower significance in the foreground classification phase and the clustering algorithm naturally filters this input. We have also shown that, for a single-Gaussian background model, using the continuous input produces slightly better detections than the thresholded foreground.
With respect to processing time, under the described experimental conditions, detection can be performed in real time on a standard dual-core Pentium IV machine.
Future work will extend the detector to incorporate tracking capabilities, and possibly explore the use of our SON to perform data fusion in a multi-camera system.
This work is part of the BACS project.
Obstacle detection is a widely explored domain of mobile robotics. It is of particular interest to the intelligent vehicle community, as it is an essential building block for Advanced Driving Assistance Systems (ADAS). In the LOVe project, the e-Motion team proposes to perform obstacle detection within the occupancy grid framework. For this purpose, the Bayesian Occupancy Filter (BOF) was presented in previous work . Its performance and functionality were demonstrated in particular with laser scanner data.
To use other sensors in this framework, it is essential to develop an associated probabilistic sensor model that takes into account the uncertainty over measurements. In 2009, we proposed such a sensor model for stereo-vision . The originality of the approach lies in the decision to work in disparity space instead of the classical metric space. This choice yields two major improvements. First, the errors on measurements are modeled more accurately; in particular, the proposed Gaussian model accounts for the fact that measurement uncertainty depends on the range of the observed object. Second, the use of accumulation methods in disparity space, such as the u-v-disparity approach , helps computational efficiency.
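The range dependence of the uncertainty can be made explicit with first-order error propagation through the stereo equation z = b·f/d (a textbook sketch with invented parameter values, not the sensor model of the paper): a constant disparity noise translates into a depth uncertainty that grows quadratically with range.

```python
def depth_sigma(z, baseline, focal, sigma_d):
    """First-order propagation of disparity noise to depth.
    Stereo: z = baseline * focal / d, so |dz/dd| = z**2 / (baseline * focal)
    and sigma_z ≈ z**2 / (baseline * focal) * sigma_d."""
    return z ** 2 / (baseline * focal) * sigma_d

# illustrative rig: 0.3 m baseline, 800 px focal length, 0.5 px disparity noise
s10 = depth_sigma(10.0, 0.3, 800.0, 0.5)  # uncertainty at 10 m
s20 = depth_sigma(20.0, 0.3, 800.0, 0.5)  # uncertainty at 20 m: 4x larger
```

This is why a Gaussian model expressed directly in disparity space captures the measurement errors more faithfully than a fixed-variance model in metric space.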
In the LOVe project, our method has been applied to a real urban dataset for obstacle detection and tracking. The Figure gives some examples of the obtained results. Good detection rates were obtained, even in very dynamic and rich environments such as downtown driving. The study of the results also showed that our Gaussian sensor model makes it easier to track moving objects using the previously proposed Fast Clustering-Tracking Algorithm .
The sensor model for stereo-vision is also used for dynamic scene analysis in the Arosdyn project.
To navigate or plan motions for a robotic system placed in an environment with moving objects, reasoning about the future behaviour of the moving objects is required. In most cases, this future behaviour is unknown and one has to resort to predictions.
Most prediction techniques found in the literature are limited to short-term prediction only (a few seconds at best) which is not satisfactory especially from a motion planning point of view.
We first started to explore the problem of medium-term motion prediction for moving objects. As a result, we proposed a novel cluster-based technique that learns typical motion patterns using pairwise clustering and uses those patterns to predict future motion. We then developed a new learn-and-predict approach which addresses issues not solved by our first proposal: (a) prediction of unobserved patterns and (b) on-line/adaptive learning. The new approach represents motion using a proposed extension of the well-known Hidden Markov Model framework, which we have named Growing Hidden Markov Models (GHMM). Basically, the extension allows incremental, adaptive, real-time learning of the model's parameters and structure. By incorporating final positions (the objects' goals) into the GHMM state, several motion patterns can be represented with a single GHMM.
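A toy version of the learn-and-predict idea can be sketched with discrete grid states and a transition table updated online (the actual GHMM learns both structure and parameters, and handles continuous observations; everything below is illustrative):

```python
import numpy as np

class MotionPatternLearner:
    """Incremental transition statistics over discrete states: a toy
    stand-in for on-line/adaptive motion-pattern learning."""
    def __init__(self, n_states):
        self.counts = np.ones((n_states, n_states))  # Laplace prior

    def observe(self, trajectory):
        """Update transition counts from one observed trajectory."""
        for a, b in zip(trajectory, trajectory[1:]):
            self.counts[a, b] += 1

    def predict_next(self, state):
        """Most likely next state and the full predictive distribution."""
        p = self.counts[state] / self.counts[state].sum()
        return int(np.argmax(p)), p

learner = MotionPatternLearner(4)
for _ in range(10):
    learner.observe([0, 1, 2, 3])   # a repeatedly observed motion pattern
nxt, dist = learner.predict_next(1)  # learned continuation of the pattern
```

Because the statistics are updated after every trajectory, prediction quality improves as more motion is observed, which is the adaptive behaviour the GHMM extension provides in a much richer form.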
During 2009, we worked on incorporating behavioral information to improve predictions and to deal with interactions in scenarios with multiple moving objects. Objects' behaviours are modeled independently and this information is used to adjust the parameters of the GHMM. These works have been published in IJRR , .
This work is part of the BACS project.
Previously, work was done on motion prediction and its application to probabilistic risk-averse motion planning . This work has been extended to the more realistic problem of estimating collision risk under dynamic urban traffic conditions.
Current commercially available crash warning systems are usually equipped with radar-based sensors on the front, rear or sides to measure the velocity of and distance to obstacles. The algorithms for determining the risk of collision are based on variants of the time-to-collision (TTC). However, TTC can be misleading in situations where the roads are curved and the assumption of linear motion does not hold; in these situations, the risk tends to be underestimated. Furthermore, non-straight roads are common in urban environments, such as roundabouts or cross junctions.
Simply knowing that there is an object at a certain location at a specific instant in time does not provide sufficient information to assess its safety. A framework for understanding the behaviour of vehicle motion is indispensable. In addition, environmental constraints should be taken into account, especially in urban traffic environments.
We propose a complete probabilistic model for risk estimation. Motion at the trajectory level is modelled with a Gaussian Process (GP) , a generalization of the Gaussian distribution to function spaces, where a Gaussian distribution is placed on functions. Using a GP not only avoids discretization issues; it also provides a theoretically sound Bayesian framework in which one can obtain probability distributions of motion in space, as well as a proper way of performing motion prediction in a fully probabilistic manner. The GP motion model is adapted to urban road structures with varying degrees of curvature and turns at intersections using Least Squares Conformal Maps (LSCM) .
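A minimal GP regression sketch in one dimension shows how a distribution over trajectories yields both a predicted position and its uncertainty (squared-exponential kernel and hyperparameters are illustrative; the actual model operates on road-adapted coordinates via LSCM):

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, noise=1e-2):
    """GP regression with a squared-exponential kernel: posterior mean and
    variance of the trajectory at query times Xs given observations (X, y)."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(X, X) + noise * np.eye(len(X))     # covariance of the observations
    Ks = k(Xs, X)                            # cross-covariance query/observed
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                        # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(k(Xs, Xs)) - np.sum(v**2, axis=0)  # posterior variance
    return mean, var

# observe a straight trajectory y = x and query an in-between time
X = np.array([0.0, 1.0, 2.0, 3.0])
y = X.copy()
mean, var = gp_predict(X, y, np.array([1.5]))
```

The posterior variance is what makes the approach suitable for risk estimation: the further a query point lies from observed motion, the larger the predicted uncertainty.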
Driving behaviours are modelled with a variant of the Hidden Markov Model. The combination of these two models provides a complete probabilistic model of vehicle evolution in time. Additionally, a general method of probabilistically evaluating collision risk is presented, where different forms of risk values with different semantics can be obtained, depending on the application.
This work has been performed in collaboration with Toyota Motors Europe (TME) and ProBayes SAS. A European patent has also been awarded for this technique .
This activity is the follow-up of a collaboration with ETHZ in Zurich and the BlueBotics company in Lausanne in the framework of the European project BACS.
During 2009, the problem of sensor self-calibration in mobile robotics by only using a single feature (e.g. a vertical or a horizontal line) has been considered.
The main results achieved during this year have been published in , , and .
In particular, a first contribution of these works is the analytical derivation of the part of the system that is observable when a mobile robot performs circular trajectories. This computation consists in performing a local decomposition of the system, based on a new and very general theoretical concept, introduced here for the first time in the context of autonomous navigation: the concept of continuous symmetry. Detecting the continuous symmetries of a given system has great practical importance: it allows us to detect an observable state whose components are nonlinear functions of the original non-observable state.
Then, starting from this decomposition, a method to efficiently estimate the parameters describing the extrinsic exteroceptive sensor calibration and simultaneously the odometry calibration has been derived.
We have considered two kinds of exteroceptive sensors: a bearing sensor (camera) and a range sensor.
Simulations and experiments with the robot e-Puck equipped with encoder sensors and a camera validate the approach. Furthermore, we are currently performing experiments with several platforms equipped with a laser range finder available at the BlueBotics company.
Finally, the problem of feature extraction and data association has also been considered in collaboration with the autonomous system lab at ETHZ in Zurich. The results have been published in .
This activity has been carried out in the framework of the European project sFly.
During 2009 we introduced a new strategy to perform Cooperative Localization (CL), the problem of simultaneously localizing the robots of a team that are able to observe one another. This activity was carried out in collaboration with Prof. Stergios Roumeliotis from the University of Minnesota in Minneapolis, who is an invited partner in sFly.
Previous approaches to CL have several shortcomings. In most cases they are centralized approaches, which can be very sensitive to communication failures. Furthermore, as in many other estimation problems, they are sensitive to the system nonlinearities.
In order to overcome the drawbacks of the centralized approach, we developed a distributed algorithm for CL that is robust to single point failures and has reduced processing and memory requirements per robot.
The main contribution of our approach is to exploit the sparse structure of the CL problem in order to develop a distributed algorithm based on the maximum a posteriori (MAP) estimation framework, which reduces the computational complexity.
The salient features of our algorithm are as follows:
It is robust to single-point failures.
It distributes data and computations amongst the robots, hence improving the efficiency of CL.
It has adjustable processing and communication requirements depending on the resources available to the team.
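To illustrate the sparse MAP structure that such a distributed algorithm can exploit, the following sketch (purely hypothetical numbers, 1-D robots, not the actual algorithm) fuses two odometry priors with one relative measurement by solving the weighted normal equations; in a chain of robots the resulting information matrix is sparse, which is what distribution relies on:

```python
import numpy as np

# Hypothetical 1-D example: two robots with odometry priors and one
# relative measurement between them.
m = np.array([0.0, 5.0])      # odometry-based position estimates
sigma = np.array([0.5, 0.5])  # odometry standard deviations
z, r = 4.0, 0.1               # measured x2 - x1, measurement std

# Stack the problem as weighted linear least squares: two prior rows
# plus the relative-measurement row. The information matrix A.T W A is
# sparse (tridiagonal for a chain of robots).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 1.0]])
b = np.array([m[0], m[1], z])
W = np.diag(1.0 / np.array([sigma[0] ** 2, sigma[1] ** 2, r ** 2]))

x_map = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
print(x_map)  # ≈ [0.490, 4.510]: x2 - x1 is pulled towards the measurement z
```

The MAP estimate moves both robots symmetrically because their priors have equal confidence, while the precise relative measurement almost exactly enforces the measured separation.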
In previous years, we developed a point-based approach to map representation that uses Mahalanobis matrices at its core to represent the local shape of the mapped objects. In 2009, we developed and tested algorithms for the special case of Gaussian-based maps and published this work at the ISRR conference . The framework can represent and simplify Gaussian-based maps in 2D and 3D. We introduced a clustering energy to measure the goodness of fit of the coarse representation with respect to the fine one. We can therefore use gradient descent techniques to optimize the choice of the coarse elements so that the resulting map is locally optimal. This representation was also tested with colored points on a dataset provided by the IBEO company, containing more than 3 million colored points. We produced colored maps of this dataset at several resolutions (0.1 m, 1 m and 10 m), allowing multiple Gaussians per cell at coarse resolutions. This yields maps with 300,000 Gaussians at the fine resolution and fewer than 5,000 Gaussians at the coarsest one. The coarse representations are also designed to fit the Expectation Conditional Maximization Point Registration (ECMPR) framework developed in 2008 . The coarse map is therefore used to accelerate localization: accurate localization at coarse scales provides an initial guess for precise localization at the fine scale, yielding a multi-scale algorithm. This algorithm was used in a joint paper with Radu Horaud submitted to PAMI and accepted under minor revisions .
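For intuition, the basic operation when coarsening a Gaussian-based map can be sketched as the moment-matched merging of a cluster of fine Gaussians into one coarse Gaussian (illustrative only; the published method optimises a clustering energy by gradient descent rather than using this closed form):

```python
import numpy as np

def merge_gaussians(weights, means, covs):
    """Moment-matched coarse Gaussian for a cluster of fine Gaussians.

    The merged covariance accounts for both the fine covariances and the
    spread of the fine means around the merged mean.
    """
    w = np.asarray(weights, float)
    w = w / w.sum()
    means = np.asarray(means, float)
    mu = (w[:, None] * means).sum(axis=0)
    dim = means.shape[1]
    cov = np.zeros((dim, dim))
    for wi, mi, Si in zip(w, means, covs):
        d = mi - mu
        cov += wi * (np.asarray(Si) + np.outer(d, d))
    return mu, cov
```

For example, merging two unit-covariance Gaussians at (0, 0) and (2, 0) gives a coarse Gaussian centred at (1, 0) whose covariance is stretched along the axis joining them.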
The next steps of this work are to develop the theory of hybrid maps based on this framework, and to publish the multi-scale localization algorithm developed for this map representation for the N-scan problem, i.e. the bundle adjustment problem.
The problem addressed in this work is autonomous navigation in an uncertain and dynamic environment. The aim is to develop techniques allowing a robot to move autonomously and safely in an environment that is not perfectly known a priori and in which both static and moving obstacles are present. The task of the autonomous robot is to find and execute a continuous sequence of actions that brings it to a given position while avoiding collisions with the obstacles.
We address in particular the problem of connecting the decision and action process with a realistic perception input, assuming that the robot has little a priori knowledge about the static environment and about the surrounding moving obstacles. We investigate the problems and limits inherent to sensor perception and motion prediction, and focus on models of the environment that best express and update the information and uncertainty coming from perception. We propose decision processes exploiting the rich information of these models. For example, we aim to give a robot the possibility to exploit the fact that pedestrians and vehicles moving in a given environment usually do not move at random but often engage in typical behaviours or motion patterns. The robot may use this information to predict the behaviour of these moving obstacles and adapt its own behaviour accordingly.
We focus on the following considerations:
The uncertainty and incompleteness of the information perceived by the robot are not negligible, and some means of taking them into account in the decision process should be introduced;
The fact that the environment is dynamic cannot be ignored: the robot's performance is influenced by the obstacles moving in the environment, and the robot should be able to take safe and good decisions at any time and act promptly.
We integrated a Partial Motion Planning algorithm with probabilistic representation and prediction. PeP-RRT (Probabilistic environmental Partial Rapidly-exploring Random Tree) is the new approach we propose for planning and navigation, based on the following concepts:
We consider unknown dynamic environments. The probabilities of collision are computed on the basis of the probabilistic models which map the static and the dynamic obstacles: the static environment is mapped with an occupancy grid, which is updated on-line integrating the incoming information. The moving obstacles are also detected and tracked on-line.
We have models of the future for predicting the behavior of the moving obstacles.
We have a probabilistic risk function to guide the search for safe paths. The search method is based on the well-known RRT framework . The search algorithm is integrated in an anytime planning and replanning approach: the probabilities of collision and the decisions of the robot are updated on-line with the most recent observations.
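A minimal sketch of this idea (hypothetical risk field and parameters, not the PeP-RRT implementation): the tree is grown RRT-style, each node carries its accumulated probability of no collision, and branches whose collision risk exceeds a threshold are pruned:

```python
import math
import random

def prob_collision(p):
    # toy static risk field: high collision probability inside a disc
    # obstacle around (5, 5), small residual probability elsewhere
    return 0.9 if math.dist(p, (5.0, 5.0)) < 1.5 else 0.005

def rrt(start, goal, n_iter=3000, step=0.5, risk_max=0.4, seed=1):
    random.seed(seed)
    # each node: (position, parent index, probability of no collision so far)
    nodes = [(start, None, 1.0 - prob_collision(start))]
    for _ in range(n_iter):
        sample = goal if random.random() < 0.1 else (
            random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k][0], sample))
        p = nodes[i][0]
        d = math.dist(p, sample)
        if d < 1e-9:
            continue
        new = (p[0] + step * (sample[0] - p[0]) / d,
               p[1] + step * (sample[1] - p[1]) / d)
        p_safe = nodes[i][2] * (1.0 - prob_collision(new))
        if 1.0 - p_safe > risk_max:      # prune branches that are too risky
            continue
        nodes.append((new, i, p_safe))
        if math.dist(new, goal) < step:  # goal reached: backtrack the path
            path, j = [new], i
            while j is not None:
                path.append(nodes[j][0])
                j = nodes[j][1]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0))
```

In an on-line setting the risk field would be recomputed from the occupancy grid and the predicted obstacle positions at each node's time stamp, and planning would restart from the current state at every cycle.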
In 2008, we integrated our search algorithm with a representation of typical patterns based on Markov graphs and started to use another representation based on Gaussian Processes . In 2009, we presented more complete results, published at IROS 2009 , where a robot needs to reach goals in the hall of INRIA while avoiding pedestrians (see figure ). The INRIA hall was recreated in a simulated environment where moving obstacles and their typical behaviours can be generated as observed in the real environment. This allowed us to test our method under many conditions with a chosen number of obstacles. Figure (d) shows a snapshot of this environment where the robot (green rectangle) aims to reach a goal (green point) while avoiding pedestrians crossing the hall (red points).
This research has been carried out in the framework of the European project sFly. Recent years have increasingly revealed the importance of multi-robot systems for security applications that would be impossible for a single robot. In particular, by employing flying robots, many fundamental tasks become possible, such as surveillance of dangerous regions (areas of chemical, biological or nuclear contamination), environmental monitoring (e.g. air quality, forest fires), and aiding police during surveillance missions. For these purposes we aim to develop efficient, robust and versatile methodologies for the distributed control of multi-MAV systems under environmental and communication constraints such as global paths, obstacles, and loss of local communication.
The first problem we approached is the optimal coverage of an environment with unknown obstacles, for which we developed a new method based on artificial potential fields , . In these works we studied the problem of optimal placement for a team of mobile robots with surveillance tasks in a 2-D environment with unknown obstacles. The optimal solution for the obstacle-free case is already known in the literature and can be obtained analytically; it is based on the Voronoi partition generated by the positions of the robots and on the Lloyd algorithm . For the non-convex case, i.e. an environment with obstacles, a possible strategy is to use robot-robot and robot-obstacle repulsion to spread the robots, but this method has a high probability of reaching local minima far from the optimal solution. The proposed algorithm combines the repulsive potential field method with the Voronoi partition: the movement of each robot is generated by the repulsion of the other robots, the repulsion of the closest obstacle, and the attraction of the center of mass of its Voronoi region. This method overcomes the problem of local minima and spreads the robots better in the environment. In an environment without obstacles, our algorithm performs as well as the optimal solution.
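The obstacle-free baseline can be sketched with a discretised Lloyd iteration (illustrative numbers): each robot repeatedly moves to the centroid of its Voronoi cell, which monotonically decreases the coverage cost:

```python
import numpy as np

def coverage_cost(robots, pts):
    # mean squared distance from each point to its nearest robot
    d = np.linalg.norm(pts[:, None, :] - robots[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).mean()

def lloyd_step(robots, pts):
    d = np.linalg.norm(pts[:, None, :] - robots[None, :, :], axis=2)
    owner = d.argmin(axis=1)                      # Voronoi assignment
    return np.array([pts[owner == i].mean(axis=0) for i in range(len(robots))])

xs = np.linspace(0.0, 1.0, 50)
pts = np.array([(x, y) for x in xs for y in xs])  # discretised unit square
robots0 = np.array([[0.1, 0.1], [0.15, 0.1], [0.1, 0.15], [0.2, 0.2]])
robots = robots0.copy()
for _ in range(50):
    robots = lloyd_step(robots, pts)
# the robots, initially clustered in a corner, spread over the square
```

The proposed algorithm replaces the pure centroid attraction with a combination of centroid attraction and robot-robot / robot-obstacle repulsion so that it also behaves well in non-convex environments.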
In , we extend the problem to a heterogeneous team of mobile robots, where each robot has a different maximum speed. The proposed strategy for this case uses the same algorithm as the homogeneous case, but with a new definition of the Voronoi regions. The result is a different topology for the regions, which are no longer polygons: the new boundaries are circular arcs whose centers and radii depend on the speeds of the robots.
Furthermore, the coverage problem has also been approached, and extended to the 3-D case, using a stochastic optimization method. This work is in collaboration with Professor Elias Kosmatopoulos and his group at the Technical University of Crete, a partner in the sFly project.
Kosmatopoulos's group has proposed a new approach for developing efficient and scalable methodologies for a general class of multi-robot passive and active sensing applications , . This method employs an estimation scheme that switches among linear elements; as a result, its computational requirements are about the same as those of a linear estimator. The parameters of the switching estimator are calculated off-line using a convex optimization algorithm based on optimization and approximation with Sum-of-Squares (SoS) polynomials. Stable and convergent estimator performance is guaranteed, overcoming the shortcoming of many existing methodologies in which estimator error divergence is always possible.
Preliminary simulation results for its application to the 3-D multi-robot coverage problem exhibited excellent performance and a joint publication with the Technical University of Crete is under preparation.
Where to move next? is a key question for an autonomous robotic system. This fundamental issue has been largely addressed over the past forty years, and many motion determination strategies have been proposed. They can broadly be classified into deliberative versus reactive strategies: deliberative strategies aim at computing a complete motion all the way to the goal, whereas reactive strategies determine only the motion to execute during the next time step. Deliberative strategies have to solve a motion planning problem . They require a model of the environment that is as complete as possible, and their intrinsic complexity may preclude their application in dynamic environments. Reactive strategies, on the other hand, can operate on-line using local sensor information: they can be used in any kind of environment, whether unknown, changing or dynamic, but convergence towards the goal is difficult to guarantee.
To bridge the gap between deliberative and reactive approaches, a complementary approach has been proposed based upon motion deformation. The principle is simple: a complete motion to the goal is computed first using a priori information. It is then passed on to the robotic system for execution. During the course of the execution, the still-to-be-executed part of the motion is continuously deformed in response to sensor information acquired on-line, thus accounting for the incompleteness and inaccuracies of the a priori world model. Deformation usually results from the application of constraints both external (imposed by the obstacles) and internal (to maintain motion feasibility and connectivity). Provided that the motion connectivity can be maintained, convergence towards the goal is achieved.
The different motion deformation techniques that have been proposed , , , , all perform path deformation: what is deformed is a geometric curve, i.e. the sequence of positions that the robotic system is to take in order to reach its goal. The problem with path deformation techniques is that, by design, they cannot take into account the time dimension of a dynamic environment. In many cases, however, it would be more appropriate to anticipate the motion of dynamic obstacles by considering their current and future motion, with the aim of avoiding collision with them in the near future. To achieve this, it is necessary to depart from the path deformation paradigm and resort to trajectory deformation instead. A trajectory is essentially a geometric path parametrized by time: it tells us where the robotic system should be, but also when and with what velocity. By taking into account a forecast model of the future behaviour of obstacles (Fig. ), both spatial and temporal deformations may take place. They result from the application of external forces, i.e. repulsive forces exerted by the obstacles, and internal forces used to maintain the connectivity of the trajectory. Both the path and the velocity profile of the robotic system may thus be altered to adapt the behaviour of the robot to its environment.
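A minimal sketch of the deformation cycle (hypothetical obstacle forecast, gains, and distances, in an elastic-band style): each waypoint carries a time stamp, so the repulsive force is computed against the obstacle position predicted at that waypoint's time, which is precisely what path deformation cannot do:

```python
import math

def obstacle_at(t):
    # hypothetical constant-velocity forecast of one moving obstacle
    return (4.0 - t, 0.6)

def deform(traj, k_int=0.4, k_ext=0.8, d0=1.0, n_cycles=100):
    traj = [list(p) for p in traj]          # waypoints (x, y, t)
    for _ in range(n_cycles):
        for i in range(1, len(traj) - 1):   # endpoints stay fixed
            x, y, t = traj[i]
            # internal force: stay close to the neighbours' midpoint
            mx = 0.5 * (traj[i - 1][0] + traj[i + 1][0])
            my = 0.5 * (traj[i - 1][1] + traj[i + 1][1])
            fx, fy = k_int * (mx - x), k_int * (my - y)
            # external force: repulsion from the obstacle predicted at time t
            ox, oy = obstacle_at(t)
            d = math.dist((x, y), (ox, oy))
            if 0.0 < d < d0:
                fx += k_ext * (x - ox) * (d0 - d) / d
                fy += k_ext * (y - oy) * (d0 - d) / d
            traj[i][0] += fx
            traj[i][1] += fy
    return traj

# straight-line trajectory whose middle waypoint meets the obstacle forecast
traj = [(x, 0.5, x) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]
deformed = deform(traj)
```

After deformation, the waypoint that would have passed close to the forecast obstacle position is pushed away while the internal forces keep the trajectory connected; the real Teddy deformer additionally deforms the time/velocity profile.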
Our trajectory deformer, named Teddy, is designed to be one component of an otherwise complete autonomous navigation architecture. A motion planning module is required to provide Teddy with the nominal trajectory to be deformed. Teddy operates periodically with a given time period. At each cycle, Teddy outputs a deformed trajectory which is passed to a motion control module that determines the actual commands for the actuators of the robotic system.
Teddy was first designed to handle holonomic systems and was presented at the 2008 EUROS Conference . An extension of Teddy to nonholonomic systems yielded a paper at the 2008 IROS Conference . A journal paper on trajectory deformation was finally published in 2009 .
This research topic is related to our work on Trajectory Deformation (see Section ). The main difficulty of trajectory deformation lies in maintaining the connectivity of the trajectory for complex dynamic systems, since this requires the characterization of their set of reachable states. To solve this problem, we decided instead to base the computation of the internal forces upon a steering method computed by a new trajectory generator called Tiji.
Trajectory generation for a given robotic system is the problem of determining a feasible trajectory (one that respects the system's dynamics) between an initial and a final state. From the preliminary work of Dubins to the latest methods used by Carnegie Mellon University during the DARPA Urban Challenge , many trajectory generation methods have been proposed, such as primitive combinations , , , , two-point boundary value problems , , , or variational approaches , , , .
Among all these approaches, it is interesting to note that in no case have people tried to compute a trajectory reaching the goal state at a specific time instant. However, several problems like Trajectory Deformation require fixing the final time, or at least an interval of time during which a goal state must be reached. We therefore proposed a trajectory generation scheme called Tiji which integrates the final time constraint. Tiji is geared towards complex dynamic systems subject to differential constraints, such as car-like vehicles, and is efficient enough to be used in real time. The approach is similar in spirit to that of : a parametric trajectory representation is assumed in order to reduce the search space. An initial set of parameters is selected, yielding a trajectory that does not necessarily reach the goal state. The parameter space is then searched, and efficient numerical optimization is used to minimize a cost function involving the distance between the end of the computed trajectory and the (goal state, final time) pair. Should the goal state be unreachable (if the final range of time is ill-chosen), the method returns a trajectory that ends as close as possible to the (goal state, final time) pair. Tiji differs from previous works first because it takes into account the final time constraint, and also because the chosen control definition ensures that all trajectory constraints are met.
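The parametric-optimisation idea can be sketched on a 1-D double integrator (hypothetical numbers; plain finite-difference gradient descent stands in for the efficient optimiser used by Tiji): the acceleration profile is parametrised, simulated, and tuned so that the trajectory reaches the goal state exactly at the imposed final time:

```python
def simulate(params, T=2.0, dt=0.01):
    # integrate a 1-D double integrator under a(t) = a0 + a1*t up to time T
    a0, a1 = params
    x, v, t = 0.0, 0.0, 0.0
    for _ in range(int(round(T / dt))):
        a = a0 + a1 * t
        x += v * dt + 0.5 * a * dt * dt
        v += a * dt
        t += dt
    return x, v

def cost(params, goal=(3.0, 0.0)):
    # squared distance between the end state at the final time and the goal
    x, v = simulate(params)
    return (x - goal[0]) ** 2 + (v - goal[1]) ** 2

def optimise(params=(0.0, 0.0), lr=0.05, eps=1e-4, n=1000):
    params = list(params)
    for _ in range(n):
        for i in range(len(params)):     # finite-difference gradient step
            bumped = params[:]
            bumped[i] += eps
            g = (cost(bumped) - cost(params)) / eps
            params[i] -= lr * g
    return params

params = optimise()
x_end, v_end = simulate(params)  # ends close to x = 3 with v = 0 at t = T = 2
```

If the goal were unreachable within the imposed final time, the same search would simply return the parameters minimising the residual distance, mirroring Tiji's fall-back behaviour.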
Figures depict examples of trajectories generated by Tiji for a differential drive, a car-like and a spaceship system. This work was presented at the 2009 IEEE ICRA Conference , and recent results concerning different applications of Tiji have yielded a submission to the 2010 IEEE ICRA Conference .
Safe navigation is undoubtedly a critical issue that needs to be solved before autonomous mobile robots/vehicles can leave lab environments and be deployed in real-world applications. This problem has been studied extensively by the robotics community, and some results were illustrated rather brilliantly by the 2007 DARPA Urban Challenge.
To address this issue, we have explored the novel concept of Inevitable Collision States (ICS) since 2002. An ICS for a given robotic system is a state for which, no matter what trajectory the system follows in the future, a collision with an object eventually occurs. For obvious safety reasons, a robotic system should never end up in an ICS. ICS have already been used in a number of applications: (i) a mobile robot subject to sensing constraints, i.e. a limited field of view, moving in a partially known static environment , (ii) a car-like vehicle moving in a roadway-like environment , (iii) a spaceship moving in an asteroid field . In all cases, the future motion of the robotic system at hand is computed so as to keep the system away from ICS. To that end, an ICS-Checker is used: as the name suggests, it is an algorithm that determines whether a given state is an ICS or not. Just as a Collision-Checker plays a key role in path planning and navigation in static environments, it can be argued that an ICS-Checker is a fundamental tool for motion planning and navigation in dynamic environments. Like its static counterpart, an ICS-Checker must be computationally efficient so that it can meet the real-time constraint imposed by dynamic environments.
Since 2007, we have been working on a generic and efficient ICS-Checker for planar robotic systems with arbitrary dynamics moving in dynamic environments. Efficiency is obtained by applying three principles: (a) reasoning on 2D slices of the state space of the robotic system, (b) precomputing off-line as much as possible, and (c) exploiting graphics hardware performance. A preliminary version of the ICS-Checker was presented at the 2007 IEEE Int. Conf. on Robotics and Automation (ICRA) . A final version was presented in 2008 at the IEEE-RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , wherein the ICS-Checker was applied to two different robotic systems: a car-like vehicle and a spaceship.
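The underlying check can be sketched as follows (hypothetical scenario and manoeuvre set; the real checker instead reasons on precomputed 2-D slices of the state space): a state is declared an ICS if every evasive manoeuvre in a finite braking set collides with the forecast obstacle. Using only a subset of all possible manoeuvres makes the answer conservative: a state declared safe is provably not an ICS, while a state declared an ICS might in fact be escapable.

```python
import math

def moving_obstacle(t):
    # constant-velocity forecast: obstacle comes head-on along the x axis
    return (6.0 - 2.0 * t, 0.0)

def collides(state, omega, t_max=3.0, dt=0.05, brake=1.0, radius=0.5):
    # forward-simulate one evasive manoeuvre: brake while turning at rate omega
    x, y, th, v = state
    t = 0.0
    while t < t_max:
        v = max(0.0, v - brake * dt)
        th += omega * dt
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        t += dt
        if math.dist((x, y), moving_obstacle(t)) < radius:
            return True
    return False

def is_ics(state, omegas=(-0.8, 0.0, 0.8)):
    # ICS if every manoeuvre in the finite set ends in collision
    return all(collides(state, w) for w in omegas)

close_and_fast = (4.0, 0.0, 0.0, 2.0)  # too close: every manoeuvre collides
far_enough = (0.0, 0.0, 0.0, 2.0)      # turning while braking escapes
```

Running the checker on these two states labels the first an ICS and the second safe, illustrating why the future motion of the obstacles (not just their current positions) must enter the check.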
Next, we moved on to designing an ICS-based collision avoidance scheme, i.e. a decision-making module whose primary task is to keep the robotic system at hand safe from collisions. This collision avoidance scheme, henceforth called Ics-Avoid, guarantees motion safety with respect to the model of the future that is used. To demonstrate the efficiency of Ics-Avoid, it has been extensively compared with two state-of-the-art collision avoidance schemes, both of which were explicitly designed to handle dynamic environments. The first, proposed by , is a straightforward extension of the popular Dynamic Window approach ; the second builds upon the Velocity Obstacle concept .
If Ics-Avoid were provided with full knowledge about the future, it would guarantee motion safety no matter what. Given the elusive nature of the future, this assumption is unrealistic: in practice, knowledge about the future is limited. However, the results obtained show that, when provided with the same amount of information about the future evolution of the environment, Ics-Avoid performs significantly better than the other two schemes. The first reason has to do with the respective time horizons of the collision avoidance schemes, emphasizing that reasoning about the future is not enough in itself: it must be done with an appropriate time horizon. The second reason has to do with the decision part of each collision avoidance scheme. In all cases, their operating principle is to first characterize forbidden regions in a given control space and then select an admissible control. Motion safety therefore also depends on the ability of the collision avoidance scheme at hand to find such an admissible control. In the absence of a formal characterization of the forbidden regions, all schemes resort to sampling (with the inherent risk of missing the admissible regions). In contrast, Ics-Avoid, through the concept of the Safe Control Kernel, is the only one for which it is guaranteed that, if an admissible control exists, it will be part of the sampling set (thus guaranteeing safe transitions between non-ICS states). A paper detailing the principle of Ics-Avoid was presented at the 2009 IEEE Int. Conf. on Robotics and Automation (ICRA) , while a related second paper focusing on the benchmark of the method was presented at the 2009 ICRA Workshop on Safe Navigation in Open and Dynamic Environments.
Finally, we have addressed how the ICS concept can be extended to handle the uncertainty inherent to the future. So far, the characterization of ICS has been based upon deterministic models of the future: each moving object was assumed to follow a given nominal trajectory (known a priori or predicted). Such deterministic models provide a clear-cut answer to the motion safety issue: a given state either is or is not an ICS (a simple binary answer). However, such models are not well suited to capturing the uncertainty that prevails in real-world situations, in particular the uncertainty concerning the future behaviour of the moving objects. Our contribution is a probabilistic formulation of the ICS concept. Probabilistic ICS permit the characterization of the motion safety likelihood of a given state, a likelihood that can later be used to design safe navigation strategies in real-world situations. This is the first step towards the applicability of the ICS framework to real robots operating in uncertain dynamic environments. We have submitted to the 2010 IEEE Int. Conf. on Robotics and Automation a paper presenting two novel Probabilistic ICS-Checkers, i.e. algorithms that determine the motion safety likelihood of a given state.
The goal of the PhD of Estelle Gilet is to define a Bayesian model of the whole sensorimotor loop involved in handwriting, from visual sensors to the control of the effector . We aim to implement a simulation of handwriting based on three components: the perception, the representation and the production of letters.
In 2007, we studied the state of the art in the modelling of sensorimotor systems, focusing more precisely on handwriting movements. We reviewed work on the perception of hand trajectories and examined studies of the kinematic and dynamic aspects of human arm movements.
In 2008, we focused on how the central nervous system represents the sensorimotor plans associated with writing movements. In the motor theories of perception, perception and production share the same set of invariants and must be linked. The model is structured around an abstract internal representation of letters, which acts as a pivot between motor models and sensor models. We assume that a letter is internally represented by a sequence of via-points that are part of the whole X, Y trajectory of the letter. We restrict via-points to places in the trajectory where either the X derivative or the Y derivative, or both, are zero. The representation of letters is independent of the effector usually used to perform the movement. In September 2009, Estelle Gilet defended her thesis .
The sensor model (vision) concerns the extraction of via-points from trajectories, using their geometric properties. The motor model concerns general trajectory formation and is expressed in a Cartesian reference frame. An acceleration profile is chosen, which constrains the interpolation; in our case we used a bang-bang profile, where the arm first applies a maximum force, followed by a maximum negative force. The effector model is made of two parts, related to the geometry of the considered effector (kinematic model) and the control of this effector for general movement production (dynamic model).
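The via-point definition above can be sketched directly (illustrative code, not the thesis implementation): via-points are the samples where the X or Y derivative of the pen trajectory changes sign, i.e. crosses zero:

```python
import numpy as np

def via_points(x, y):
    # indices where the X or Y derivative of the trajectory crosses zero
    dx, dy = np.gradient(x), np.gradient(y)
    idx = set()
    for d in (dx, dy):
        s = np.sign(d)
        crossings = np.where(s[:-1] * s[1:] < 0)[0] + 1
        idx.update(int(c) for c in crossings)
    return sorted(idx)

# an ellipse-like stroke: one X-derivative zero and two Y-derivative zeros
# occur strictly inside this sampled interval
t = np.linspace(0.0, 2.0 * np.pi, 200)
x, y = np.cos(t), 0.6 * np.sin(t)
vps = via_points(x, y)
```

On this stroke the detected via-points sit near t = π/2, π and 3π/2, matching the extrema of the X and Y coordinates that the representation retains.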
Thanks to Bayesian inference, the joint probability distribution can be used to automatically solve cognitive tasks. We define a cognitive task by a probabilistic term to be computed, which we call a question. Our model makes it possible to solve a variety of tasks, such as reading letters, recognizing the writer, and writing letters (with different effectors).
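The question mechanism can be illustrated with a toy discrete model (hypothetical letters and probabilities): the joint distribution P(L) P(V | L) over letters and observed via-point sequences is specified once, and reading is simply the question P(L | V = v):

```python
# toy joint model: prior over letters and likelihood of a via-point
# sequence given the letter (all numbers purely illustrative)
p_letter = {"a": 0.5, "b": 0.5}
p_via_given_letter = {
    "a": {"v1": 0.8, "v2": 0.2},
    "b": {"v1": 0.1, "v2": 0.9},
}

def read(v):
    # answer the question P(Letter | Via = v) by Bayes' rule
    joint = {l: p_letter[l] * p_via_given_letter[l][v] for l in p_letter}
    z = sum(joint.values())
    return {l: p / z for l, p in joint.items()}

posterior = read("v2")  # strongly favours letter "b"
```

Other questions (recognizing the writer, generating a trajectory for a letter) are answered against the same joint distribution by conditioning on different variables.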
This work is done under the joint supervision of Pierre Bessière and Julien Diard of the LPNC laboratory (Laboratoire de Psychologie et NeuroCognition, CNRS, Grenoble). It is part of the BACS European project.
This work has been done in collaboration with our start-up Probayes. ProBT is a C++ library for developing efficient Bayesian software . This library has two main components: (i) a friendly Application Program Interface (API) for building Bayesian models and (ii) a high-performance Bayesian inference and learning engine allowing execution of the probability calculus in exact or approximate ways.
The aim of ProBT is to provide a programming tool that facilitates the creation of Bayesian models and their reusability.
For several years, the main development of ProBT has been carried out by Probayes, a spin-off born from the e-Motion project. Both e-Motion and Probayes are partners in the European project Bayesian Approach to Cognitive Systems (BACS), and the development of ProBT takes into account the goals of the BACS project partners.
Among the various possible criteria guiding eye movement selection, we investigate the role of position uncertainty in the peripheral visual field. In particular, we suggest that, in everyday life situations of object tracking, eye movement selection probably includes a principle of reduction of uncertainty.
To do so, we confront the movement predictions of computational models with human results from a psychophysical task. This task is a freely-moving-eye version of the Multiple Object Tracking task, in which eye movements may compensate for lower peripheral resolution.
We design several Bayesian models of increasing complexity, whose layered structures are inspired by the neurobiology of the brain areas involved in eye movement selection.
Finally, we compare the relative performances of these models with regard to the prediction of the recorded human movements, and show the advantage of taking explicitly into account uncertainty for the prediction of eye movements.
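A toy version of the uncertainty-reduction principle (all numbers hypothetical, much simpler than the layered models above): observation noise grows with eccentricity, and the gaze is chosen to minimise the summed posterior variance of the tracked targets:

```python
import numpy as np

def posterior_var(prior_var, gaze, target_mean, noise0=0.1, k=0.5):
    # Gaussian fusion: observation noise grows with eccentricity,
    # modelling the lower resolution of the peripheral visual field
    ecc = abs(target_mean - gaze)
    obs_var = noise0 + k * ecc ** 2
    return 1.0 / (1.0 / prior_var + 1.0 / obs_var)

targets = [-2.0, 0.0, 3.0]          # believed positions of tracked objects
prior_var = 1.0
candidates = np.linspace(-4.0, 4.0, 81)
# pick the gaze that most reduces the total position uncertainty
best = min(candidates,
           key=lambda g: sum(posterior_var(prior_var, g, m) for m in targets))
```

The selected fixation lands between the targets rather than on any single one, which is the qualitative signature of uncertainty-driven gaze selection in multiple object tracking.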
This work has been done in collaboration with the LPPA-Collège de France and with the Max Planck Institute in Tübingen. A joint publication in Biological Cybernetics describes this work in detail .
Biochemical Probabilistic Inference is a new area of research which started in 2008 in close collaboration with the LPPA-Collège de France and ProBayes.
Living organisms need to react quickly without waiting for a perfect evaluation of the consequences of their actions. For instance, we perceive objects from retinal stimulation without the need for a complete knowledge of the underlying light-matter interactions. To account for this ability to reason with incomplete knowledge, it has recently been proposed that the brain works as a probabilistic machine, evaluating probability distributions over cognitively relevant variables. A number of Bayesian models have been shown to efficiently account for perceptive and behavioural tasks. However, little is known about the way subjective probabilities are represented and processed in the brain.
Numerous biochemical cellular signalling pathways have now been unravelled. These mechanisms involve the strong coupling of macromolecular assemblies, membrane voltage and diffusible messengers, including intracellular Ca2+ and other chemical substrates like cyclic nucleotides. Since transitions between allosteric states and messenger diffusion are mainly powered by thermal agitation, descriptive models at the molecular level are also based on probabilistic relationships between biophysical and biochemical state variables .
Our proposal is based on the existence of a deep structural similarity between the probabilistic computation required at the macroscopic level to account for cognitive, perceptive and sensory motor abilities and the biochemical interactions of macromolecular assemblies and messengers involved in cellular signalling mechanisms.
Our working hypothesis is then that biochemical processes constitute the nanoscale components of cognitive Bayesian inferences.
To prove this hypothesis, we plan to develop: 1. A comprehensive and coherent formalism to handle both macroscopic and microscopic levels of description, 2. A software package to emulate complex biochemical interactions and to demonstrate the plausibility of our working hypothesis.
Finally, we wish to explore, through the search for new partners for future projects, the possibility of designing artificial systems mimicking these biochemical interactions and working on similar principles at a similar nanoscale space-time grain. In the future, this could open the way to the development of revolutionary probabilistic machines. During 2009, we made important progress on this matter: we can now propose “Bayesian gates”, an extension of logic gates performing probabilistic computation, together with a biochemical implementation of these Bayesian gates. We are writing papers on this subject which should be ready for submission in the coming months.
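As a purely illustrative sketch of what a gate lifted to probabilities can look like (toy numbers; this is not the Bayesian gate construction of our papers, and the biochemical implementation is not shown): a noisy AND gate whose inputs are probabilities of being true and whose output is flipped with probability eps:

```python
def bayes_and(pa, pb, eps=0.05):
    # inputs: P(A=1) and P(B=1), assumed independent;
    # output: P(out=1) for an AND gate whose output flips with prob. eps
    p_and = pa * pb
    return (1.0 - eps) * p_and + eps * (1.0 - p_and)

p = bayes_and(0.9, 0.8)  # 0.95 * 0.72 + 0.05 * 0.28 = 0.698
```

With eps = 0 the gate degenerates to ordinary Boolean AND on certain inputs, which is the sense in which such gates extend logic gates to probabilistic computation.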
Several works have dealt with the implementation of probabilistic Bayesian theories. In , Lebeltel proposed a definition of a generic system for robotic programming and its experimental application. This approach is illustrated by programming a surveillance task with a mobile robot, the Khepera. In , a new probabilistic formalism for modelling the interaction between a robot and its environment (the Bayesian map) is proposed; the formalism is illustrated by experiments implemented on a Koala mobile robot. In , the authors address the problem of controlling autonomous sensorimotor systems and propose cumulative hypotheses and simplifications defined within the Bayesian programming framework. The presented method was validated on a mobile robot.
The contribution of our work is to present a generic behaviour construction toolkit for small autonomous robots (figure ) based on probabilistic modelling techniques.
To do so, we propose a tight coupling between computer vision and the Bayesian theory of probabilities. Consequently, one needs to be able to estimate the current orientation of the robot and the desired orientation of the game space.
For localisation, we use two cameras. The first is fixed to the ceiling of the room or office; it gives global information about the robot and its workspace by tracking infrared LEDs (figure ).
For a good positioning of the robot in front of the game space, the second camera, fixed in the robot frame, is used for tracking coloured objects.
It should be mentioned that all visual primitives used in the localisation system are estimated in image space; reconstructed and calibrated Euclidean terms are not required. The method can therefore be easily implemented with off-the-shelf hardware and software.
Using this localisation system, the developed Bayesian behaviour consists of two distinct steps. First, the robot uses global information to reach a first desired position (a zone situated in front of the game space). In the second step, the robot converges to its workspace using a positioning method:
1- localisation of the workspace and tracking of two coloured objects using the camera fixed in the robot frame;
2- the robot moves in a straight line;
3- the robot changes direction (rotation and translation velocities) given the pan-tilt head of the second camera.
This second behaviour is inspired by boat navigation at night, where light signals of different colours convey different messages (e.g., the vessel is moored, or moving slowly).
The proposed Bayesian program uses infrared proximity sensors for obstacle avoidance, detecting obstacles in front of the robot within a range of 0.1 to 1 metre.
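To give an idea of the kind of inference such a Bayesian program performs, the following sketch recursively updates the probability that an obstacle is present from a stream of binary infrared readings. The sensor-model probabilities are illustrative assumptions, not the values used on the robot.

```python
# Illustrative sensor model: how often the IR sensor fires in each world state.
P_HIT_GIVEN_OBSTACLE = 0.9   # P(reading = 1 | obstacle present)
P_HIT_GIVEN_FREE = 0.1       # P(reading = 1 | no obstacle): false alarms

def update(prior, reading):
    """One Bayesian update of P(obstacle) from a binary IR reading."""
    if reading:
        num = P_HIT_GIVEN_OBSTACLE * prior
        den = num + P_HIT_GIVEN_FREE * (1.0 - prior)
    else:
        num = (1.0 - P_HIT_GIVEN_OBSTACLE) * prior
        den = num + (1.0 - P_HIT_GIVEN_FREE) * (1.0 - prior)
    return num / den

belief = 0.5                  # uninformative prior
for reading in (1, 1, 0, 1):  # a short stream of IR readings
    belief = update(belief, reading)
print(round(belief, 3))  # -> 0.988
```

The resulting belief can then be thresholded to trigger the avoidance behaviour; a single spurious miss in the reading stream lowers the belief without discarding the accumulated evidence.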
Finally, this work is supported by the Imaginove (game and entertainment) and Minalogic (intelligent miniaturized products) clusters. It is part of the GRAAL project, which is funded as a FUI (Fonds Unitaire Interministériel) project by the French Ministère de l'Industrie, the Rhône-Alpes region, and the Greater Lyon metropolitan area.
[Feb 2006 - Feb 2009] [Dec 2009 - Dec 2013]
The contract with Toyota Motors Europe is a joint collaboration involving Toyota Motors Europe, INRIA and ProBayes. It follows a first successful short-term collaboration with Toyota in 2005.
This contract aims at developing innovative technologies in the context of automotive safety. The idea is to improve road safety in driving situations by equipping vehicles with the technology to model on the fly the dynamic environment, to sense and identify potentially dangerous traffic participants or road obstacles, and to evaluate the collision danger. The sensing is performed using sensors commonly used in automotive applications such as cameras and lidar.
This collaboration has been extended for four years, and Toyota provides us with an experimental Lexus vehicle equipped with various sensing and control capabilities.
[Nov 2008 - Oct 2011]
The Technology Development Action (ADT) ArosDyn aims at the development of embedded software for robust analysis of dynamic scenes and assessment of risk during car driving. The system will be used in the scope of a Driver Assistance System. ADT ArosDyn is supported by the INRIA's Direction of Technological Development (DDT).
The principal participants in ArosDyn are the project-teams e-Motion, PERCEPTION, and SED of INRIA Grenoble Rhône-Alpes, and the project-team EVOLUTION of INRIA Sophia-Antipolis. The spin-off company Probayes and the project-team PRIMA of INRIA Grenoble Rhône-Alpes help develop the specialized modules of ArosDyn.
The robustness of the analysis methods is based on the Bayesian fusion of sensor data. The applied algorithms detect and track multiple moving objects in real time in various traffic scenarios. The perception of the traffic environment relies on the processing of range and visual information gathered by a laser scanner and a stereo vision camera; these two sensor types have complementary technical features and together ensure the detection of objects in various traffic scenarios. Proprioceptive perception makes use of inertial and odometry sensors.
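Under simplifying conditional-independence assumptions, the Bayesian fusion of the two sensors can be sketched, for a single occupancy-grid cell, as adding per-sensor log-odds contributions relative to the prior. The probabilities below are illustrative values, not taken from the actual system.

```python
import math

def log_odds(p):
    """Log-odds representation of a probability."""
    return math.log(p / (1.0 - p))

def fuse(prior, sensor_probs):
    """Fuse independent per-sensor occupancy estimates for one grid cell
    by summing their log-odds contributions relative to the prior."""
    l = log_odds(prior) + sum(log_odds(p) - log_odds(prior) for p in sensor_probs)
    return 1.0 / (1.0 + math.exp(-l))  # back to a probability

# A cell seen as likely occupied by the lidar (0.8) and the stereo
# camera (0.7), with a 0.5 prior: the fused estimate is more confident
# than either sensor alone.
p = fuse(0.5, [0.8, 0.7])
print(round(p, 3))  # -> 0.903
```

This additivity in log-odds is what makes it cheap to combine the laser scanner and the stereo camera cell by cell in real time; the complementary failure modes of the two sensors then reduce both missed detections and false alarms.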
[December 2005-December 2009]
National project, Predit Programme LOVe “Logiciel d'Observation des Vulnérables”. (
http://
[January 2009 - December 2011]
The Graal project aims to produce a generic behaviour construction toolkit for video games and small autonomous robots. It is based on probabilistic modelling techniques and will last two years, starting in January 2009. It involves four partners:
INRIA/e-Motion provides the core scientific basis for probabilist modelling and autonomous robot programming;
Probayes ("Born of INRIA" in 2003) builds upon its generic Bayesian inference engine ProBT, and its expertise of decision systems;
POB-Technology develops small robots for education and entertainment, sold in high schools and universities all over the world;
Ageod (in the project during its first year) developed simulation-like historic strategy games.
The goal of the project is the extension and application of Bayesian modelling techniques for industrial behaviour construction:
programming and maintaining complex behaviours for virtual entities;
teaching simple behaviours to small robots;
bringing behaviour modification into the hands of students and hobbyists;
integrating probabilistic reasoning into the tools of industrial behaviour programmers.
The Graal project is funded as a FUI (Fonds Unitaire Interministériel) project by the French Ministère de l'Industrie, the Rhône-Alpes region, and the Greater Lyon metropolitan area. It is labelled and supported by the Imaginove (game and entertainment) and Minalogic (intelligent miniaturized products) clusters.
[January 2010 - January 2014]
Interactive is the largest European FP7 project dedicated to Advanced Driver Assistance Systems (ADAS), with more than 30 partners and 20 M€ of funding. One of its main goals is to design and develop a generic architecture for perception solutions for ADAS. e-Motion will play a key role in this task, following up its cooperation with Daimler.
FP6-IST-027140 [January 2006 - February 2011]
Despite very extensive research efforts, contemporary robots and other cognitive artifacts are not yet ready to autonomously operate in complex real-world environments. One of the major reasons for this failure in creating cognitive situated systems is the difficulty of handling incomplete knowledge and uncertainty. In this project we investigate and apply Bayesian models and approaches in order to develop artificial cognitive systems that can carry out complex tasks in real-world environments. We take inspiration from the brains of mammals, including humans, and apply our findings to the development of cognitive systems. The research results in a consistent Bayesian framework offering enhanced tools for probabilistic reasoning in complex real-world situations. Its performance is demonstrated through applications to driver assistance systems and 3D mapping, both very complex real-world tasks. P. Bessière, C. Laugier and R. Siegwart edited a book titled “Probabilistic Reasoning and Decision Making in Sensory-Motor Systems”, which brings together 12 different PhD theses defended within the BIBA and BACS European projects. See: , , , , , , , , .
[September 2008 - September 2011]
The INTERSAFE-2 project aims to develop and demonstrate a Cooperative Intersection Safety System (CISS) that is able to significantly reduce injury and fatal accidents at intersections.
The novel CISS combines warning and intervention functions demonstrated on three vehicles: two passenger cars and one heavy goods vehicle. Furthermore, a simulator is used for additional R&D. These functions are based on novel cooperative scenario interpretation and risk assessment algorithms.
[January 2009 - December 2011]
sFly is a European research project involving 4 research laboratories and 2 industrial partners. The project focuses on micro helicopter design, visual 3D mapping and navigation, low-power communication including range estimation, and multi-robot control under environmental constraints. It shall lead to novel micro flying robots that are:
Inherently safe, due to very low weight (<500 g) and appropriate propeller design;
Capable of vision-based fully autonomous navigation and mapping;
Capable of coordinated flight in small swarms in constrained and dense environments.
The contribution of e-Motion to sFly focuses on autonomous cooperative localization and mapping in open and dynamic environments. It started on 01/01/09. For the moment, Alessandro Renzaglia (PhD student) and Agostino Martinelli work on this project. A new postdoc will also be recruited for the project soon.
[February 2008 - January 2011]
European project ICT-212154 HAVEit “Highly Automated Vehicles for Intelligent Transport”. (
http://
HAVEit aims at the realization of the long-term vision of highly automated driving for intelligent transport. The project will develop, validate and demonstrate important intermediate steps towards highly automated driving.
HAVEit will significantly contribute to higher traffic safety and efficiency for passenger cars, buses and trucks, thereby strongly promoting safe and intelligent mobility of both people and goods. The significant HAVEit safety, efficiency and comfort impact will be generated by three measures:
Design of the task repartition between the driver and the co-driving system (ADAS) in the joint system.
Failure tolerant safe vehicle architecture including advanced redundancy management.
Development and validation of the next generation of ADAS, directed towards a higher level of automation compared to the current state of the art.
The contribution of e-Motion to HAVEit focuses on safe driving.
[October 2005-December 2007] and [November 2008 - December 2011]
The Fact project is a joint research project in the scope of the ICT-Asia programme funded by the French Ministry of Foreign Affairs, the CNRS and INRIA. It aims at conducting common research activities in the area of Intelligent Transportation Systems (ITS). The main objective is to develop new technologies related to the concept of “Cybercar”. The project involves the following research teams: the e-Motion project at INRIA Rhône-Alpes (leader), the Imara project at INRIA Rocquencourt, the LASMEA Laboratory at Clermont-Ferrand, SungKyunKwan University (Korea), Shanghai Jiao Tong University (China), Nanyang Technological University (Singapore) and Tokyo University (Japan). The project has been extended by a new project (named “City Home”) co-led by Ph. Martinet from LASMEA and C. Laugier from e-Motion/INRIA. Several public demonstrations of the results have been planned in France (Clermont-Ferrand) and in China (Shanghai).
Subject 1: Coordination of Orofacial and Gestural Sensori-motor Maps Enabling the Emergence of Communication between Avatars and Humans. Common PhD thesis and common publication:
Subject 2: Ruhlen's “mother tongue” theory. Collaborative work and common publication:
Subject 3: Emergence of a language through deictic games within a society of sensori-motor agents in interaction. Common PhD thesis and common publications: ,
Subject 1: Bayesian models of the superior colliculus, see and common publications: .
Subject 2: Biochemical Bayesian computation, see and common publications:
e-Motion has collaborated with the Nanyang Technological University of Singapore (NTU) and the National University of Singapore (NUS) since 1998 (MOU INRIA/NTU, MOU INRIA/NUS, PICS CNRS including the LPPA (Collège de France, Alain Berthoz), ICT-Asia FACT project and ICT-Asia CITYHOME project) in the framework of scientific collaboration in the field of autonomous vehicles. This collaboration has brought: (a) a large number of reciprocal visits and stays (one week to several months) by researchers, (b) Singaporean students hosted at INRIA (undergraduate to graduate level), (c) the organization of workshops, and (d) postdocs and co-directed PhD students. Brice Rebsamen submitted his PhD thesis in Singapore in January 2009; Christopher Meng Tay defended his PhD in Grenoble in September 2009.
See the description of the ICT-Asia “Fact” and “City Home” projects above.
e-Motion collaborates with the Institut de Robòtica Industrial (UPC) in the field of dynamic obstacle detection. The team hosted Guillem Alenyà at the end of 2008, and two publications have been written in collaboration .
The thematic network “Image et Robotique” grew out of the French-Mexican symposium on Computer Science and Control (JFMIA'99), held in Mexico in March 1999.
The main goal of this network is to promote and increase French-Mexican cooperation in Image and Robotics in the scientific, academic and industrial fields. The network was effectively established in 2000. It supports a yearly school (SSIR
http://
Partner: University of Coimbra. Subject: Bayesian Models for Multimodal Perception of 3D Structure and Motion. Collaborative work and common publications: ,
Subject 1: Bayesian Robot programming Partner: University of Brasilia. Collaborative work and ongoing common publication.
Subject 2: Corruption detection. Partner: Catholic University of Brasilia. Collaborative work and staff exchange (Remis Balaniuk as a visiting professor at INRIA in 2009).
Subject 1: Bayesian model of mentalizing. Partner: Hôpitaux Universitaires de Genève. Collaborative work and common publication:
Subject 2: Bayesian robotics. Partner: ETH Zurich. Collaborative work and common publications on Bayesian robotics: .
Subject 3: European project sFly (see the description of the sFly project above). Partner: Autonomous Systems Lab at ETH Zurich.
Subject: Safe navigation in dynamic environments. Partner: Prof. Zvi Shiller, Ariel University Center. Collaborative work.
Subject: European project sFly (see the description of the sFly project above). Partner: University of Crete (TUC). Collaborative work and staff exchange (Alessandro Renzaglia spent two weeks in July 2009 at TUC).
Subject: Autonomous navigation in indoor environments. Partner: Università Politecnica delle Marche. Common publications
Some members of e-Motion participate in the organization of summer schools and conferences:
C. Laugier participates every year in the organization committees of the major international conferences on Robotics, in particular: the IEEE International Conference on Robotics and Automation (ICRA), the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), and the International Conference on Field and Service Robotics (FSR).
C. Laugier is Profile Editor since 2009 in the IEEE RAS Conference Editorial Board (for IEEE RAS ICRA conference).
C. Laugier was general chair of IEEE/RSJ IROS'97, Regional Program chair of IEEE/RSJ IROS'00, Program chair of IEEE/RSJ IROS'02, Regional Program Chair of IEEE IV'06, General Chair of the 6th International conference on Field and Service Robotics in 2007, and Program Chair of IEEE/RSJ IROS 2008. He will be program co-chair of IEEE/RSJ IROS 2010.
C. Laugier has co-organized several workshops on “Safe navigation in Dynamic environments” and on “Intelligent Transportation Systems” in the scope of some major conferences of the domain (IEEE ICRA'05, IEEE ICRA'07, IEEE/RSJ IROS'06, IEEE/RSJ IROS'07, IEEE/RSJ IROS'08, IEEE ICRA'09, IEEE/RSJ IROS'09) and in the scope of the ICT-Asia project FACT and the ICT-Asia project CityHome (Seoul 2005, Tokyo 2006, Shanghai 2007, Kobe 2009, Clermont-Ferrand 2009).
Book “Probabilistic Reasoning and Decision Making in Sensory-Motor Systems”, Edited by P. Bessière, C. Laugier and R. Siegwart. Springer Tracts in Advanced Robotics (STAR) volume 46, Springer-Verlag, May 2008.
Special issue of the Journal of Field Robotics (JFR) “Field and Service Robotics”, Guest Edited by C. Pradalier, A. Martinelli, C. Laugier, and R. Siegwart. Volume 25, Issue 6/7, July 2008.
Special issue of the International Journal of Vehicle Autonomous Systems (IJVAS) “Advances in Autonomous Vehicles Technologies for Urban Environment”, Guest Edited by D. Wang, S.Sam Ge, and C. Laugier. Volume 6, 2008.
Special issue of the International Journal of Robotics Research (IJRR) “Field and Service Robotics”, Guest Edited by C. Laugier, A. Martinelli, C. Pradalier, and R. Siegwart. Volume 28, Number 2, February 2009
Special issue of the IEEE Transactions on Intelligent Transportation Systems “Perception and Navigation for Autonomous Vehicles”, Guest Edited by C. Laugier, U. Nunes, and A. Broggi. September 2009.
Some members of e-Motion spent time in foreign laboratories.
A. Martinelli spent a couple of months at the University of Minnesota working with Prof. Stergios Roumeliotis on cooperative localization. During this stay, he also gave a seminar on his recent research activity.
Alessandro Renzaglia spent two weeks in July 2009 at TUC (University of Crete).
In addition to occasional academic lectures, the members of e-Motion have taught the following courses:
Lecture “Robotics Technologies & Applications” (every year since 2000): Europe-France Summer school on “Image and Robotics” (SSIR). Teacher: C. Laugier.
Lecture “Autonomous Robots”: International Master MOSIG (M2), INPG, Grenoble, (FR). Teachers: C. Laugier, O. Aycard, Th. Fraichard, A. Martinelli. Every year since 2008.
Lecture “Robotics and Computer Vision”: International Master MOSIG (M1), INPG, Grenoble, (FR). Teachers: C. Laugier, O. Aycard, E. Boyer, E. Arnaud. Every year since 2008.
Lecture “Basic tools and models for Robotics” (every year): Cnam Grenoble. Teachers: C. Laugier and J. Troccaz.
“Motion Planning course”: Summer School on Image and Robotics Puebla (MX), December 2009. Teacher: Th. Fraichard.
“Motion Planning course”: Master of Science in Informatics at Grenoble (FR), Fall 2009. Teacher: Th. Fraichard.
Lecture “Knowledge Modelling and Processing”: (every year): Master of Computer Science 2nd year, University of Grenoble, (FR). Teachers: MC. Rousset, J. Gensel, O. Aycard, E. Arnaud.
Lecture “Machine Learning”: (every year): Master of Computer Science 1st year, University of Grenoble, (FR). Teachers: E. Gaussier, O. Aycard.
Lecture “Machine Learning”: (every year): Master of Computer Science 2nd year, University of Grenoble, (FR). Teachers: G. Bisson, A. Douzal, A. Guerin, O. Aycard.
Lecture “Autonomous Robots”: (every year): International Master of Computer Science 2nd year, University of Grenoble, (FR). Teachers: C. Laugier, O. Aycard.
Lecture “Computer Vision and Autonomous Robots”: (every year): International Master of Computer Science 1st year, University of Grenoble, (FR). Teachers: C. Laugier, O. Aycard, E. Arnaud, E. Boyer.
Lecture “Knowledge Modelling and Processing”: Ecole Polytechnique de Grenoble, filière Traitement de l'Information pour la Santé, University of Grenoble, (FR). Teachers: D. Ziebelin, and O. Aycard.
Lecture “Bayesian techniques in vision and perception”: France-Mexico Summer school on “Image and Robotics” (every year). Teachers: O. Aycard, E. Sucar.
A. Martinelli gave several lectures within the course “Autonomous Robots” for master students at the ENSIMAG.
A. Martinelli gave several lectures within the course “Model Identification” for master students at the University of L'Aquila (Italy).
A. Martinelli gave several lectures within the course “Robotics and Computer Vision” for master students at the ENSIMAG.
“Bayesian models of sensory-motor systems” course, Bayesian Cognition winter school. Teacher: P. Bessière
Pierre Bessière works 20% of his time for the ProBAYES company (
http://
Christian Laugier is a scientific consultant of the Probayes company in the scope of an INRIA/Probayes agreement and with the authorization of the French Deontology committee (since Oct. 2008).
C. Laugier is a member of the steering-advisory committee of IEEE/RSJ IROS (Intelligent Robots and Systems) international conference since 1997. He is also a member of the advisory committee of the ICARCV International conference on Control, Automation, Robotics and Vision.
C. Laugier is co-chair (with U. Nunes and A. Broggi) of the IEEE Technical Committee on “Intelligent Transportation Systems and Autonomous Vehicles” (since 2005).
C. Laugier was the coordinator of the ICT-Asia Network on ITS named FACT including partners from France, Singapore, Japan, Korea, and China (2005-2008). He is now co-coordinator with Philippe Martinet of the new ICT-Asia project “City Home” (2008-2011).
C. Laugier is a member of the permanent organization committee of the Field and Service Robotics Conference (since 1997).
C. Laugier is a member of the editorial board of the journal “Intelligent Service Robotics” (since 2005). He is also a member of the editorial board of the national journal “Revue d'Intelligence Artificielle” (since 1987), and Associate Editor of the journal IEEE Transactions on Intelligent Transportation Systems (since 2008).
A. Martinelli is a member of the editorial board of the IEEE Transactions on Robotics as Associate Editor (since November 2007).
C. Laugier and A. Martinelli are guest editors of a special issue of the IJRR journal (published in 2009).
C. Laugier and A. Martinelli are guest editors of a special issue of the JFR journal (published in 2008).
Th. Fraichard is a regular member of the programme committees of the ICRA and IROS conferences. He is also Associate Editor for the ICRA 2010 edition. In 2009, he was also a member of the programme committee of the Eur. Conf. on Mobile Robots (ECMR), Dubrovnik (HR), Sep. 2009.
O. Aycard is a member of the programme committee of the ITSC'2008 conference.
A. Spalanzani is a member of the editorial committee of the In Cognito cognitive sciences journal.
P. Bessière is a member of the programme committees of the following conferences : Conference ESANN (European Symposium on Artificial Neural Networks), Conference RFIA (Reconnaissance des Formes et Intelligence Artificielle), Conference IEEE/ICRA (International Conference on Robotics and Automation), Conference IEEE/IROS (International Conference on Intelligent Robots and Systems), Conference EA (International Conference on Artificial Evolution)
P. Bessière reviews regularly in the IEEE Transactions on Evolutionary Computation and Autonomous Robots journals.
C. Laugier “Present and Future of Robotics”, ICARCV 2008, Hanoi, December 2008.
C. Laugier “Robot in Human Environments. A new challenge for Robotics”, Keynote talk, International Conference 2009 on Field and Service Robotics (FSR'09), MIT Cambridge, July 2009.
C. Laugier “ICT for next car generation”, ANR-JST workshop on ICT, Paris, November 2009.
C. Laugier “Key technologies for Intelligent Vehicles”, Keynote talk, Conference on Autonomous Mobile Systems 2009 (AMS'09), Karlsruhe, December 2009.
Th. Fraichard “Safe Autonomous Navigation in Open and Dynamic Environments”, University of Karlsruhe (DE), July 2009.
Th. Fraichard “Trajectory Generation for Trajectory Deformation”, University of Judea and Samaria, Ariel (IL), December 2008.