Section: New Results

Multi-sensory perception and control

Autonomous Parking Maneuvers

Participants: David Perez Morales (PhD, LS2N-ARMEN), Olivier Kermorgant (LS2N-ARMEN), Salvador Dominguez Quijada (LS2N-ARMEN), Philippe Martinet

Automated parking is now sold as a feature on many car models. However, these parking functions are generally not autonomous: they rely on motion planning alone and cannot cope with dynamic environments, remaining either automated maneuvers in static environments or mere assistants operating under the driver's supervision. The purpose of the PhD work of David Perez Morales was to investigate how sensor-based techniques can handle any kind of parking situation (parallel, perpendicular, diagonal) for both parking and unparking (backward and forward).

Two different frameworks have been developed. The first, using a Multi-Sensor-Based Control (MSBC) approach [47], [48], [46], [45], makes it possible to formalize different parking and unparking operations as a single maneuver with either backward or forward motions. Building upon it and using an MPC strategy [49], a Multi-Sensor-Based Predictive Control (MSBPC) framework has been developed, allowing the vehicle to park autonomously (with multiple maneuvers, if required) into perpendicular and diagonal parking spots with both forward and backward motions, into parallel ones with backward motions, and to unpark from parallel spots with forward motions. These frameworks have been tested extensively on a robotized Renault ZOE with positive outcomes, and they are now part of the autonomous driving architecture being developed at LS2N.

In 2019, the main focus was on MSBPC and on taking into account the dynamic aspects of the environment (mainly pedestrians). Pedestrian detection and tracking have been added to the perception layer, alongside the detection of empty parking spots. An additional term has been added as a constraint in the cost function to be minimized in order to account for these dynamic aspects, and a mechanism has been put in place to switch maneuvers automatically. In the presence of a pedestrian, an additional maneuver is engaged, which is what humans generally do when there is enough room to perform the maneuver safely. Comparisons with state-of-the-art motion planning approaches have been carried out in simulation: the proposed method demonstrated its efficiency, while the others failed with very long sequences of maneuvers. Real experiments have also been conducted in the presence of pedestrians.
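As an illustration of the idea, a minimal Python sketch of a predictive cost with a pedestrian proximity term and of the maneuver-switching test is given below; all weights, the safety radius, and the function names are hypothetical placeholders, not the actual MSBPC formulation:

```python
import numpy as np

def parking_cost(traj, goal, pedestrians, safety_radius=2.0, w_goal=1.0, w_ped=50.0):
    """Illustrative MSBPC-style cost over a predicted trajectory (list of
    2D positions): goal tracking plus a pedestrian proximity penalty."""
    cost = 0.0
    for p in traj:
        cost += w_goal * np.sum((p - goal) ** 2)      # tracking term
        for ped in pedestrians:                       # dynamic-obstacle term
            d = np.linalg.norm(p - ped)
            if d < safety_radius:                     # active only when close
                cost += w_ped * (safety_radius - d) ** 2
    return cost

def needs_extra_maneuver(traj, pedestrians, safety_radius=2.0):
    """Trigger an additional maneuver when any predicted state violates
    the pedestrian safety radius (the switching mechanism)."""
    return any(np.linalg.norm(p - ped) < safety_radius
               for p in traj for ped in pedestrians)
```

In the real system the predicted trajectory comes from the vehicle model over the MPC horizon; here it is simply a list of positions.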

Platoon control and observer

Participants: Ahmed Khalifa (Post-Doc, LS2N-ARMEN), Olivier Kermorgant (LS2N-ARMEN), Salvador Dominguez Quijada (LS2N-ARMEN), Philippe Martinet

In the framework of the ANR Valet project, we are interested in platooning control of cars for a VALET parking service, where it is necessary to join a platoon (after unparking), to evolve within the platoon, and to leave the platoon (for parking). We consider the cases where the leader is autonomous (following an already defined path) or manually driven by a human (the path must then be built online). The lateral path-following controller was designed earlier [23], and the localization technique has been extensively evaluated experimentally [33]. The main exteroceptive sensor is the Velodyne VLP16.

The first work [38], [15] concerned the design of a distributed longitudinal controller for platoons of car-like vehicles traveling in an urban environment. The proposed control strategy combines platoon maintaining, gap closure, and collision avoidance functionalities into a unified control law. A consensus-based controller designed in path coordinates is the basis of the proposed strategy; its role is to achieve position and velocity consensus among the platoon members while taking into consideration the nature of motion in an urban environment. For platoon creation, the gap-closure scenario is highly recommended to achieve fast convergence of the platoon; for that purpose, an algorithm is proposed to adjust the controller parameters online. Since a longitudinal collision between followers can occur in several circumstances, the proposed control strategy ensures collision avoidance by guaranteeing a minimum safe inter-vehicle distance. Convergence of the proposed algorithm is proved in the different modes of operation. Finally, studies are conducted to demonstrate and validate the efficiency of the proposed control strategy under different driving conditions. To better emulate a realistic setup, the controller is tested on a platoon of car-like vehicles implemented in ICARS, a vehicular mobility simulator that considers real vehicle dynamics and other platooning aspects in urban environments.
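The consensus part of such a longitudinal law can be sketched as follows; the gains, desired spacing, neighbor set, and toy rollout are illustrative assumptions, not the controller of [38], [15]:

```python
import numpy as np

def consensus_accel(i, s, v, d, kp=1.0, kv=2.0):
    """Consensus acceleration for follower i in path coordinates:
    drive spacing and velocity errors w.r.t. the predecessor (i-1)
    and the leader (0) to zero, with desired spacing d per gap."""
    u = 0.0
    for j in {i - 1, 0}:                      # neighbour set: predecessor + leader
        u += -kp * (s[i] - s[j] + d * (i - j)) - kv * (v[i] - v[j])
    return u

# toy rollout: leader at constant speed, two followers close the gap
dt, d = 0.05, 5.0
s = np.array([0.0, -20.0, -40.0])             # curvilinear abscissas
v = np.array([5.0, 0.0, 0.0])                 # leader cruises at 5 m/s
for _ in range(4000):
    a = np.array([0.0] + [consensus_accel(i, s, v, d) for i in (1, 2)])
    v = v + a * dt
    s = s + v * dt
```

With these gains the spacing error dynamics are critically damped, so the followers close the initial 20 m gaps and settle at the desired 5 m spacing without collision.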

The second work [14] addresses the problem of controlling the longitudinal motion of a platoon of car-like vehicles navigating in an urban environment so as to improve traffic flow with a minimum number of communication links. To achieve higher traffic flow, a constant-spacing policy between successive vehicles is commonly used, but at the cost of additional communication links, since the leader information must be broadcast to all followers. Therefore, we propose a distributed observer-based control law that relies on a hybrid source of neighbor information: a sensor-based link provides the predecessor's position, while the leader information is acquired through a communication link. An observer is then designed and integrated into the control law to estimate the predecessor's velocity. We start by presenting the platoon model defined in curvilinear coordinates, together with the required transformation between curvilinear and Cartesian coordinates, so that the control law can be designed directly in curvilinear coordinates. After that, internal and string stability analyses are conducted. Finally, we provide simulation results, obtained with the dynamic vehicular mobility simulator ICARS, to illustrate the feasibility of the proposed approach and corroborate our theoretical findings.
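A minimal sketch of the observer idea (recovering the predecessor's velocity from its sensed position alone, so no velocity needs to be communicated) is given below, with hypothetical Luenberger-style gains rather than the actual design of [14]:

```python
def velocity_observer(p_meas, dt, l1=2.0, l2=1.0):
    """Estimate a predecessor's velocity from sampled position
    measurements (the sensor-based link) with a Luenberger-style
    observer on a double-integrator model."""
    p_hat, v_hat = p_meas[0], 0.0
    estimates = []
    for p in p_meas:
        e = p - p_hat                 # position innovation
        p_hat += dt * (v_hat + l1 * e)
        v_hat += dt * (l2 * e)        # velocity estimate correction
        estimates.append(v_hat)
    return estimates
```

With l1 = 2 and l2 = 1 the continuous-time error dynamics have a double pole at -1, so the velocity estimate converges smoothly for a predecessor moving at constant speed.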

Both works have been tested on a real platoon of 3 to 4 cars.

High speed visual servoing

Participants: Franco Fusco (PhD, LS2N-ARMEN), Olivier Kermorgant (LS2N-ARMEN), Philippe Martinet

Controlling high-speed robots with visual feedback may require more complex models that include the dynamics of the robot and of the environment. Previous work in the field of dynamic visual feedback for parallel robots [42] demonstrated its efficiency compared to classical joint computed-torque control. It has also been shown that more complex interaction models can be developed [20].

In recent years, many efforts have been dedicated to extending sampling-based planning algorithms to problems involving constraints, such as geometric loop closure, which cause the valid configuration space to collapse to a lower-dimensional manifold. One proposed solution considers an approximation of the constrained configuration space obtained by relaxing the constraints up to a desired tolerance. The resulting set then has non-zero measure, which allows classical planning algorithms to search for a path connecting two given states. When the constraints involve kinematic loops in the system, relaxation generally gives rise to undesired contact forces, which need to be compensated during execution by a proper control action. We propose a new tool that exploits relaxation to plan in the presence of constraints [32]. Local motions inside the approximated manifold are found via an iterative scheme that uses Quadratic Optimization to proceed towards a new sample without leaving the relaxed region. By properly guiding the exploration, paths are found with smaller relaxation factors, reducing the need for a dedicated controller to compensate errors. We complete the analysis by showing the feasibility of the approach in experiments on a real manipulator platform.
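The iterative scheme can be illustrated on a toy loop-closure constraint (a unit circle relaxed to a thin annulus); the plain Newton-style gradient correction below is a cheap stand-in for the Quadratic Optimization step of [32], and all tolerances and step sizes are illustrative:

```python
import numpy as np

def step_towards(q, q_target, c, grad_c, tol=0.05, step=0.1, iters=50):
    """Move q toward q_target while keeping |c(q)| <= tol, i.e. staying
    inside the relaxed constraint region."""
    d = q_target - q
    q_new = q + step * d / max(np.linalg.norm(d), 1e-9)
    for _ in range(iters):                 # pull back inside the relaxed set
        v = c(q_new)
        if abs(v) <= tol:
            break
        g = grad_c(q_new)
        q_new = q_new - v * g / max(g @ g, 1e-9)   # Newton step on c
    return q_new

# unit-circle 'loop closure' constraint, relaxed to a thin annulus
c = lambda q: np.linalg.norm(q) - 1.0
grad_c = lambda q: q / max(np.linalg.norm(q), 1e-9)

q = np.array([1.0, 0.0])
for _ in range(40):                        # walk along the manifold to (0, 1)
    q = step_towards(q, np.array([0.0, 1.0]), c, grad_c)
```

The sample proceeds tangentially around the circle: each step toward the target that would leave the annulus is corrected back, exactly the behavior the relaxed planner relies on.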

The common approach in visual servoing is to use a model that expresses the rate of change of a set of features as a function of the sensor twist. Such schemes yield a velocity command, which then needs to be tracked by a low-level controller. Another approach goes one step further and considers an acceleration model for the features. This strategy also provides a natural and direct link with the dynamic model of the controlled system. The work done in [13] compares the use of velocity- and acceleration-based models in feedback linearization for visual servoing. We consider the case of a redundant manipulator and discuss the implications for both control techniques. By means of simulations, we show that controllers based on feature acceleration give better results than those based on velocity in the presence of noisy feedback signals.

We are working on new prediction models for Visual Predictive Control that can lead to both better motions in the feature space and shorter sensor trajectories in 3D. Contrary to existing local models based only on the interaction matrix, we propose to integrate the acceleration information provided by second-order models. This helps to better estimate the evolution of the image features, and consequently to compute control inputs that properly steer the system to a desired configuration. By means of simulations, the performance of these new predictors is shown and compared to that of a classical model. Real experiments confirm the validity of the approach despite the increased complexity of the models.
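The benefit of second-order information can be illustrated with a scalar feature following a curved reference; the predictors below are generic first- and second-order Taylor models, not the exact ones under study:

```python
import numpy as np

def predict_first_order(s, s_dot, delta):
    # classical local model: s(t + delta) ≈ s + (L v) delta
    return s + s_dot * delta

def predict_second_order(s, s_dot, s_ddot, delta):
    # second-order model: adds the feature-acceleration term
    return s + s_dot * delta + 0.5 * s_ddot * delta ** 2

# scalar feature following s(t) = sin(t): curvature makes the
# first-order extrapolation drift over the prediction interval
t, delta = 1.0, 0.2
s, s_dot, s_ddot = np.sin(t), np.cos(t), -np.sin(t)
s_next = np.sin(t + delta)

e1 = abs(predict_first_order(s, s_dot, delta) - s_next)
e2 = abs(predict_second_order(s, s_dot, s_ddot, delta) - s_next)
```

Over a curved feature trajectory the second-order predictor tracks the true evolution noticeably better, which is precisely what improves the predicted feature paths inside the VPC horizon.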

Proactive and social navigation

Participants: Maria Kabtoul (PhD), Wanting Jin (Master), Anne Spalanzani (CHROMA), Philippe Martinet, Paolo Salaris

Over the last decade, much work has been devoted to robot navigation among humans [34], [27] and to human-robot interaction [22], [31]. In very few cases, however, can a robot express its intention to move.

In this work, we would like robots to be able to express their need to share space with humans in order to perform their task (i.e., navigation in crowded environments). This requires being proactive and adapting to human behavior by exploiting the potentially cooperative characteristics of the robot's nearby environment.

In the framework of the ANR project HIANIC, Maria Kabtoul is doing her PhD on proactive social navigation for autonomous vehicles among crowds. We consider shared spaces where humans and cars evolve simultaneously. The first step in this direction is the introduction of a pedestrian-to-vehicle interaction behavioral model. The model estimates the pedestrian's cooperation with the vehicle in an interaction scenario by a quantitative time-varying function; the pedestrian's trajectory is then predicted based on this cooperative behavior. Both parts of the model are tested and validated using real-life recorded scenarios of pedestrian-vehicle interaction. The model is capable of describing and predicting agents' behaviors when interacting with a vehicle in both lateral and frontal crossing scenarios.

In the framework of the ANR project MOBI-DEEP, we have addressed the problem of navigating a robot in a constrained human-populated environment. We provide a method to generate a control strategy that enables the robot to move proactively in order to induce desired and socially acceptable cooperative behaviors in neighboring pedestrians. Contrary to other control strategies that simply aim to passively avoid neighboring pedestrians, this approach greatly simplifies the navigation task for both robots and humans, especially in crowded and constrained environments. To reach this objective, the co-navigation process between humans and robots is formalized as a multi-objective optimization problem, and a control strategy for the robot is obtained using the Model Predictive Control (MPC) approach. The Social Force Model (SFM) is used to predict human motion in cooperative situations. Different social behaviors of humans moving in a group are also taken into account to generate the proper robot motion. Moreover, a switching strategy between purely reactive (if cooperation is not possible) and proactive-cooperative planning, depending on the evaluation of human intentions, is also provided. Simulations under different navigation scenarios show how the proactive-cooperative planner enables the robot to generate more socially acceptable and understandable behaviors.
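The prediction part can be illustrated with a single Helbing-style SFM step in Python; all parameters (relaxation time, desired speed, repulsion gains) are illustrative values, not those used in the actual planner:

```python
import numpy as np

def sfm_step(p, v, goal, robot_p, dt=0.1, tau=0.5, v_des=1.3, A=2.0, B=0.5):
    """One Social Force Model step for a pedestrian: attraction toward
    the goal plus an exponential repulsion from the robot."""
    e = (goal - p) / max(np.linalg.norm(goal - p), 1e-9)
    f_goal = (v_des * e - v) / tau                 # relax toward desired velocity
    diff = p - robot_p
    d = max(np.linalg.norm(diff), 1e-9)
    f_robot = A * np.exp(-d / B) * diff / d        # pushed away from the robot
    v_new = v + dt * (f_goal + f_robot)
    return p + dt * v_new, v_new

# pedestrian walks toward (5, 0); a robot stands slightly above its path
p, v = np.array([0.0, 0.0]), np.array([0.0, 0.0])
ys = []
for _ in range(60):
    p, v = sfm_step(p, v, np.array([5.0, 0.0]), np.array([2.5, 0.2]))
    ys.append(p[1])
```

The predicted pedestrian bends away from the robot while still progressing toward its goal, which is the cooperative behavior the MPC exploits when planning the robot's own motion.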

This work has been done by Wanting Jin during her Master thesis [37].

Safe navigation

Participants: Luiz Guardini (PhD), Anne Spalanzani (CHROMA), Christian Laugier (CHROMA), Philippe Martinet, Anh-Lam Do (Renault), Thierry Hermitte (Renault)

Today, car manufacturers sell systems that brake in the presence of obstacles. These systems assume that the risk of collision is always detected and correctly evaluated, and their action is limited to braking, which in some cases is not sufficient to limit the risk. A global and safe system must be more efficient both in environment perception awareness and in the actions to be decided (braking, steering, accelerating). In such a case, finding the best solution is very complicated, since the candidate solutions must be evaluated over a near horizon in terms of collision risk and injury severity. Car manufacturers are interested in finding solutions, i.e., in evaluating trajectories (planning and action) in terms of risks and injuries.

Evaluating a scene in order to perform a collision avoidance maneuver is a hard task for both humans and (semi-)autonomous vehicles, and there are cases in which a collision is unavoidable. Interpreting the scene for possible collision avoidance is already a difficult task; choosing how to mitigate the damage is even harder, especially when humans have only a split second to decide how to proceed.

Intending to decrease the reaction time and increase safety in dangerous driving situations, one can rely on intelligent systems. Nevertheless, autonomous vehicle simulation and testing usually focus on risk assessment and path planning under regular driving conditions [40]. For instance, Waymo (Google) still does not have the full capability of avoiding collisions initiated by other vehicles [28].

Developing Advanced Driver Assistance Systems (ADAS) technologies is one alternative for these emergency scenarios. These include systems such as the Anti-lock Braking System (ABS), Forward Collision Warning (FCW), and Collision Avoidance (CA). The latter is one of the most complex systems developed to ensure safety: it comprises technologies such as Advanced Emergency Braking (AEB) and Autonomous Emergency Steering (AES) systems, which attempt to avoid the crash or at least reduce its severity. Developing a CA system starts by assessing the available information in the scene, establishing safe zones that the vehicle can access. The notions of safety and severity are usually addressed through the concept of risk: risk can be intuitively understood as the likelihood and severity of the damage that an object of interest may suffer or cause in the future. Threat Assessment (also referred to as Risk Assessment or Hazard Assessment) makes use of this concept.

The quality of the data evidenced in the scene plays a major role in risk assessment and mitigation. To date, objects in the scene are not contextualized: for instance, pedestrians are treated as forbidden zones, whereas collisions with cars are allowed when mitigation is necessary. This may be a correct assessment in some cases, but not always; the injury risk changes for each object according to aspects of the scene such as the impact velocity and the angle of collision.

This work focuses on the development of a probabilistic cost map expressing the Probability of Collision with Injury Risk (PCIR). On top of the information gathered by sensors, it includes the severity of injury in the event of a collision between the ego vehicle and the objects in the scene. This cost map provides enhanced information for vehicle motion planning along emergency trajectories where collision is impending.
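The principle of the map can be sketched as follows; the injury-risk table and its speed scaling are purely hypothetical placeholders for the severity models provided by the car manufacturer:

```python
import numpy as np

def injury_risk(label, speed):
    """Hypothetical injury-risk score per semantic class and impact speed
    (m/s): risk grows with speed and is far higher for vulnerable users."""
    base = {"pedestrian": 1.0, "car": 0.3, "wall": 0.15, "free": 0.0}[label]
    return base * min(speed / 15.0, 1.0)      # saturating speed scaling

def pcir_map(p_occ, labels, speed):
    """Probability of Collision with Injury Risk: cell-wise occupancy
    probability weighted by class- and speed-dependent injury risk."""
    risk = np.vectorize(lambda l: injury_risk(l, speed))(labels)
    return p_occ * risk
```

A planner minimizing cost over such a map would, when a collision is unavoidable, prefer impacting a low-risk cell (e.g., a wall) over a pedestrian, which is exactly the contextualization the PCIR map introduces.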

We represent the environment through probabilistic occupancy grids, which support agile and robust sensor interpretation mechanisms and incremental discovery procedures, and which handle uncertainty thanks to probabilistic reasoning [25].

We use the Conditional Monte Carlo Dense Occupancy Tracker (CMCDOT developed in CHROMA). It is a generic spatial occupancy tracker that infers dynamics of the scene through a hybrid representation of the environment. The latter consists of static and dynamic occupancy, empty spaces and unknown areas. This differentiation enables the use of state-specific models as well as relevant confidence estimation and management of dataless areas [51].

Although the CMCDOT occupancy grid provides a very reliable global occupancy of the environment, it works at a sub-object level, meaning that the grid by itself does not carry object classification information. To overcome this, Erkent et al. [26] propose a method that estimates an occupancy grid containing detailed semantic information. The semantic characteristics include classes such as road, car, pedestrian, sidewalk, building, and vegetation.

The proposed probabilistic risk map has been built and validated in simulation with Gazebo, using different scenarios identified by the car manufacturer.

3D Autonomous navigation using Model Predictive Path Integral approach

Participants: Ihab Mohamed (PhD), Guillaume Allibert, Philippe Martinet

Having a safe and reliable system for autonomous navigation of systems such as Unmanned Aerial Vehicles (UAVs) is a highly challenging and only partially solved problem for the robotics community, especially in cluttered and GPS-denied environments such as dense forests, crowded offices, corridors, and warehouses. This problem is very important for many complex applications, such as surveillance, search-and-rescue, and environmental mapping. To address it, UAVs should be able to navigate with complete autonomy while avoiding all kinds of obstacles in real time. To this end, they must be able to (i) perceive their environment, (ii) understand the situation they are in, and (iii) react appropriately.

To solve this problem, applications of path-integral control theory have recently become more prevalent. One of the most noteworthy works is Williams's iterative path-integral method, namely the Model Predictive Path Integral (MPPI) control framework of Williams et al. [53]. In this method, the control sequence is iteratively updated, based on importance sampling of trajectories, to obtain the optimal solution. In Williams et al. [54], the authors derived a different iterative method in which the control- and noise-affine dynamics constraints of the original MPPI framework are eliminated. This framework is mainly based on an information-theoretic interpretation of optimal control using KL-divergence and free energy, whereas it was previously based on the linearization of the Hamilton-Jacobi-Bellman (HJB) equation and the application of the Feynman-Kac lemma.

The attractive features of the MPPI controller over alternative methods can be summarized as follows: (i) it is a derivative-free optimization method, i.e., no derivative information is needed to find the optimal solution; (ii) there is no need to approximate the system dynamics and cost functions with linear and quadratic forms, i.e., non-linear and non-convex functions can be naturally employed, and the dynamics and cost models can even be represented by neural networks; (iii) planning and execution steps are combined into a single step, providing an elegant control framework for autonomous vehicles.
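The core importance-sampling update can be sketched on a toy 1D task; the horizon, noise level, and temperature below are arbitrary illustrative choices, not those of [53], [54]:

```python
import numpy as np

def mppi_step(x, u_nom, dynamics, cost, K=128, sigma=0.5, lam=1.0, rng=None):
    """One information-theoretic MPPI update: sample K perturbed control
    sequences, roll them out, and average the perturbations with weights
    proportional to exp(-cost / lambda)."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.normal(0.0, sigma, size=(K, len(u_nom)))
    costs = np.empty(K)
    for k in range(K):
        xk, ck = x, 0.0
        for n in range(len(u_nom)):
            xk = dynamics(xk, u_nom[n] + eps[k, n])
            ck += cost(xk)
        costs[k] = ck
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + w @ eps              # importance-weighted control update

# toy task: steer a 1D single integrator to x = 2 (receding horizon)
dt = 0.1
dyn = lambda x, u: x + dt * u
cst = lambda x: (x - 2.0) ** 2

rng = np.random.default_rng(1)
x, u = 0.0, np.zeros(15)
for _ in range(60):
    u = mppi_step(x, u, dyn, cst, rng=rng)
    x = dyn(x, u[0])                    # apply the first control only
    u = np.roll(u, -1); u[-1] = 0.0     # shift the nominal sequence
```

Note that no gradient of the dynamics or cost is ever computed: the update relies purely on sampled rollouts, which is what makes the method attractive for non-convex costs such as collision terms.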

In the context of autonomous navigation, the MPPI controller has mainly been applied to aggressive driving tasks and to UAV navigation in cluttered environments. For instance, to navigate in cluttered environments, the obstacle map is assumed to be known (either available a priori or built off-line), and only static 2D floor maps are used. In practice, however, real environments are often partially observable and contain dynamic obstacles. Moreover, only 2D navigation tasks have been performed, which limits the applicability of the control framework.

For this reason, our work focuses on MPPI for 2D and 3D navigation tasks in cluttered environments, which are inherently uncertain and partially observable. To the best of our knowledge, this point has not been reported in the literature, presenting a generic MPPI framework that opens up new directions for research.

We propose a generic Model Predictive Path Integral (MPPI) control framework that can be used for 2D or 3D autonomous navigation tasks in either fully or partially observable environments, which are the most prevalent in robotics applications. This framework directly exploits a 3D voxel grid, e.g., OctoMap [35], acquired from an on-board sensing system, to perform collision-free navigation. We test the framework, in realistic RotorS-based simulation, on goal-oriented quadrotor navigation tasks in 2D/3D cluttered environments, for both fully and partially observable scenarios. Preliminary results demonstrate that the proposed framework performs well under partial observability in 2D and 3D cluttered environments.

We demonstrate our proposed framework on a set of simulated quadrotor navigation tasks in 2D and 3D cluttered environments, assuming that: (i) there is a priori knowledge about the environment (the fully observable case); or (ii) there is no a priori information (the partially observable case); in the latter, the robot builds and updates the map representing the environment online as it goes.
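The map interaction inside a rollout can be sketched as a voxel-grid lookup that penalizes unsafe states; the grid, resolution, and crash weight below are illustrative stand-ins for the OctoMap queries used in the actual framework:

```python
import numpy as np

def collision_cost(traj, voxel_grid, origin, res, w_crash=1e3):
    """Penalize rollout states falling in occupied voxels of a 3D grid,
    treating states outside the known map as unsafe as well."""
    cost = 0.0
    for p in traj:
        idx = np.floor((np.asarray(p) - origin) / res).astype(int)
        if np.any(idx < 0) or np.any(idx >= voxel_grid.shape):
            cost += w_crash            # leaving the mapped volume is unsafe
        elif voxel_grid[tuple(idx)]:
            cost += w_crash            # occupied voxel
    return cost

# 10x10x10 grid at 0.5 m resolution with an occupied block in the middle
grid = np.zeros((10, 10, 10), dtype=bool)
grid[4:6, 4:6, :] = True
origin = np.zeros(3)

through = [(2.25, 2.25, 1.0), (2.25, 2.6, 1.0)]   # crosses the block
around  = [(0.75, 0.75, 1.0), (0.75, 3.8, 1.0)]   # stays in free space
```

Adding this term to the sampled rollout costs is enough for the MPPI weighting to discard trajectories that intersect obstacles, with no gradient information required from the map.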

Perception-aware trajectory generation for robotic systems

Participants: Paolo Salaris, Marco Cognetti (PostDoc, RAINBOW), Valerio Paduano (Master, RAINBOW), Paolo Robuffo Giordano (RAINBOW)

We now focus on our planned research activities on task-oriented perception and control of a robotic system engaged in executing a task. The main objective is to improve the execution of a given task by fruitfully coupling action and perception. We aim at finding the correct balance between efficient task execution and the quality of the acquired information, since the latter impacts the possibility of correctly executing the task. Indeed, a robot needs to solve an estimation problem in order to move safely in unstructured environments and accomplish a task: for instance, it has to self-calibrate and self-localize w.r.t. the environment while, at the same time, a map of the surroundings may be built. These capabilities are highly influenced by the quality and amount of sensor information (i.e., available measurements), especially in the case of limited sensing capabilities and/or low-cost (noisy) sensors.

For nonlinear systems (i.e., most of the robotic systems of interest to us), the amount and quality of the collected information depend on the robot trajectories. It is hence important to find, among all possible trajectories able to accomplish a task, the most informative ones. One crucial point in this context, also known as active sensing control, is the choice of an appropriate measure of information to be optimized. The Observability Gramian (OG) measures the level of observability of the initial state; its maximization (e.g., by maximizing its smallest eigenvalue) increases the amount of information about the initial state and hence improves the performance in estimating (observing) it. However, when the objective is to estimate the current/future state of the robot (which is implicitly the goal of most of the previous literature on this subject, and of our research too), the OG is not the right metric, even if it is often used in the literature for this purpose. Recently, in [12], we showed that the right metric is instead the Constructibility Gramian (CG), which quantifies the amount of information about the current/future state, i.e., the state of interest for motion control and task execution. We then proposed an online optimal sensing control problem whose objective is to determine at runtime, i.e., each time a new estimate is provided by the employed observer (an EKF in our case), the future trajectory that maximizes the smallest eigenvalue of the CG. We applied our machinery to two robotic platforms: a unicycle vehicle and a quadrotor UAV moving in a vertical plane, both measuring their distances to two markers located in the environment. Results show the effectiveness of our solution not only for pure robot state estimation, but also for instances of active self-calibration and map building.
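A simplified numerical illustration of why trajectory choice matters is given below for a planar robot measuring its distance to a single landmark. The state dynamics are taken as the identity (an odometry-driven position integrator), so the discrete Gramian reduces to a sum of measurement-Jacobian outer products and the OG and CG coincide; this is a toy example, not the systems of [12]:

```python
import numpy as np

def gramian_smallest_eig(traj, landmark):
    """Smallest eigenvalue of the discrete information Gramian for a
    planar robot measuring its distance to one landmark along traj."""
    G = np.zeros((2, 2))
    for p in traj:
        diff = np.asarray(p) - landmark
        H = diff / max(np.linalg.norm(diff), 1e-9)  # distance Jacobian
        G += np.outer(H, H)
    return np.linalg.eigvalsh(G)[0]                 # ascending order

landmark = np.array([0.0, 0.0])
# radial approach: all measurement directions are collinear
radial = [np.array([r, 0.0]) for r in np.linspace(3.0, 1.0, 20)]
# arc around the landmark: measurement directions span the plane
arc = [np.array([2.0 * np.cos(a), 2.0 * np.sin(a)])
       for a in np.linspace(0.0, np.pi / 2, 20)]
```

Moving straight toward the landmark leaves the Gramian singular (the tangential position component is never observed), while the arc makes it well conditioned; maximizing the smallest eigenvalue online therefore steers the robot toward such informative motions.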

The proposed solution cannot cope with process/actuation noise, as the CG does not measure its degrading effect on the collected information and, consequently, its negative effect on the estimation process. For the cases where an EKF is used as observer, we overcame this issue in [21] by minimizing the largest eigenvalue of the covariance matrix of the EKF, which is the solution of the Riccati differential equation.

We also extended the methodology to the problem of shared control by proposing a shared control active perception method aimed at fusing the high-level skills of a human operator in accomplishing complex tasks with the capabilities of a mobile robot in maximizing the acquired information from the onboard sensors for improving its state estimation (localization). In particular, a persistent autonomous behaviour, expressed in terms of a cyclic motion represented by a closed B-Spline, is executed by the robot. The human operator is in charge of modifying online some geometric properties of this path for executing a given task (e.g., exploration). The path is then autonomously processed by the robot, resulting in an actual path that tries to follow the human's commands while, at the same time, maximizing online the acquired information from the sensors. This work has been done by Valerio Paduano during his Master thesis [43] and submitted to ICRA 2020.

We are also currently working on extending the methodology to multi-robot systems (in particular, a group of quadrotor UAVs). In this context, the goal is to propose an optimal online trajectory planning framework addressing the localization problem for a group of robots without requiring the rigidity condition. In particular, leveraging our recent work on optimal online active estimation, we propose the use of the CG to quantify localization accuracy, and we develop an online decentralized optimal trajectory planner able to optimize the CG during the robot motion. We particularly focus on the online component, since the planned trajectories are continuously refined during the robot motion by exploiting the (continuously converging) decentralized estimation of the robot relative poses. To illustrate the approach, we consider the localization problem for a group of quadrotor UAVs measuring relative distances under maximum-range sensing constraints, with a decentralized Extended Kalman Filter [39] that estimates the relative configuration of each robot in the group w.r.t. a special one (randomly chosen in the group).