

Research Program

Lifelong Autonomy

Scientific Context

So far, only a few autonomous robots have been deployed for a long time (weeks, months, or years) outside of factories or laboratories. They are mostly mobile robots that simply “move around” (e.g., vacuum cleaners or museum “guides”) and data-collecting robots (e.g., boats or underwater “gliders” that collect data about ocean water).

A large part of the long-term autonomy community is focused on simultaneous localization and mapping (SLAM), with a recent emphasis on changing and outdoor environments [33], [48]. A more recent theme is life-long learning: during long-term deployment, we cannot hope to equip robots with everything they need to know, therefore some things will have to be learned along the way. Most of the work on this topic leverages machine learning and/or evolutionary algorithms to improve the ability of robots to react to unforeseen changes [33], [43].

Main Challenges

The first major challenge is to endow robots with stable situation awareness in open and dynamic environments. This covers both the state estimation of the robot itself and the perception/representation of the environment. Both problems have been claimed to be solved, but this is only the case for static environments [42].

In the Larsen team, we aim at deployment in environments shared with humans, which directly translates into dynamic objects that degrade both mapping and localization, especially in cluttered spaces. Moreover, when robots stay in the environment longer than the time needed to acquire a snapshot map, they have to face structural changes, such as the displacement of a piece of furniture or the opening or closing of a door. The current approach is to simply update an implicitly static map with all observations, with no attempt at distinguishing which changes should be incorporated. For localization in environments that are neither too cluttered nor too empty, this is generally sufficient, as a significant fraction of the environment should remain stable. But for life-long autonomy, and in particular navigation, the quality of the map, and especially the knowledge of its stable parts, is paramount.

A second major obstacle to moving robots outside of labs and factories is their fragility: current robots often break in a few hours, if not a few minutes. This fragility mainly stems from the overall complexity of robotic systems, which involve many actuators, many sensors, and complex decisions, and from the diversity of situations that robots can encounter. Low-cost robots exacerbate this issue because they can be broken in many ways (high-quality materials are expensive), because they have low self-sensing abilities (sensors are expensive and increase the overall complexity), and because they are typically targeted at non-controlled environments (e.g., houses, as opposed to factories, in which robots are protected from most unexpected events). More generally, this fragility is a symptom of the lack of adaptive abilities in current robots.

Angle of Attack

To solve the state estimation problem, our approach is to combine classical estimation filters (Extended Kalman Filters, Unscented Kalman Filters, or particle filters) with a Bayesian reasoning model in order to internally simulate various configurations of the robot in its environment. This should allow for adaptive estimation that can be used as one aspect of long-term adaptation. To handle dynamic and structural changes in an environment, we aim at assessing, for each observation, whether it is static or not.
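
As a minimal illustration of the filtering side of this approach, the sketch below shows a basic particle filter for a planar robot: particles are pushed through an internally simulated motion model, re-weighted against a range measurement, and resampled. The unicycle motion model, the landmark-based observation, and all noise values are illustrative assumptions, not the team's actual estimator.

```python
# A minimal particle-filter sketch for a planar robot with pose (x, y, theta).
# All models and noise parameters are illustrative placeholders.
import numpy as np

N = 500                                   # number of particles (hypothetical value)
particles = np.zeros((N, 3))              # each row: [x, y, theta]
weights = np.ones(N) / N

def predict(particles, v, w, dt, motion_noise=(0.05, 0.02)):
    """Sample each particle through a unicycle motion model (internal simulation step)."""
    noisy_v = v + np.random.randn(N) * motion_noise[0]
    noisy_w = w + np.random.randn(N) * motion_noise[1]
    particles[:, 0] += noisy_v * dt * np.cos(particles[:, 2])
    particles[:, 1] += noisy_v * dt * np.sin(particles[:, 2])
    particles[:, 2] += noisy_w * dt
    return particles

def update(particles, weights, z, landmark, sigma_r=0.1):
    """Re-weight particles by the likelihood of a range measurement z to a known landmark."""
    expected = np.hypot(particles[:, 0] - landmark[0], particles[:, 1] - landmark[1])
    likelihood = np.exp(-0.5 * ((z - expected) / sigma_r) ** 2)
    weights = weights * likelihood + 1e-300          # avoid numerical underflow
    return weights / weights.sum()

def resample(particles, weights):
    """Systematic draw of particles in proportion to their weights."""
    idx = np.random.choice(N, size=N, p=weights)
    return particles[idx], np.ones(N) / N
```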

We also plan to address active sensing to improve the situation awareness of robots. Active sensing means that an interacting agent – equipped with sensors and effectors – acts so as to control what it senses from its environment, typically in order to acquire information about this environment. A good example of such an agent is a mobile robot operating in an unknown or partially known dynamic environment in order to acquire information about some studied phenomena. Active sensing has applications to autonomous data collection, environment monitoring, sound source localization, and robotic exploration missions. A formalism for representing and solving active sensing problems has already been proposed by members of the team [32] and we will aim to use it to formalize decision-making problems aimed at improving situation awareness.
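
To make the kind of decision problem involved concrete, the toy sketch below selects the sensing action with the largest expected information gain (expected entropy reduction) over a discrete belief. The discrete state space, the per-action observation models, and the greedy one-step criterion are simplifying assumptions; the formalism in [32] is more general.

```python
# A schematic sketch of active sensing as greedy expected-information-gain
# action selection over a discrete belief. State space, actions and observation
# models are toy assumptions.
import numpy as np

def entropy(b):
    b = b[b > 0]
    return -np.sum(b * np.log(b))

def posterior(belief, likelihood_row):
    """Bayes update of a discrete belief for one observation likelihood P(z | s)."""
    post = belief * likelihood_row
    return post / post.sum()

def expected_info_gain(belief, obs_model):
    """obs_model[z, s] = P(z | s, a): expected entropy reduction for sensing action a."""
    h_prior = entropy(belief)
    gain = 0.0
    for z in range(obs_model.shape[0]):
        p_z = obs_model[z] @ belief                # marginal probability of observation z
        if p_z > 0:
            gain += p_z * (h_prior - entropy(posterior(belief, obs_model[z])))
    return gain

def best_sensing_action(belief, obs_models):
    """Pick the sensing action whose observation model reduces entropy the most in expectation."""
    gains = [expected_info_gain(belief, m) for m in obs_models]
    return int(np.argmax(gains))
```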

Situation awareness of robots can also be tackled through cooperation, whether between robots, between robots and sensors laid out in the environment (intelligent spaces), or between robots and humans. We envision here robots with symbiotic autonomy, i.e., robots that are aware of their limitations and proactively ask for help from humans, other robots, or sensors. This will be addressed and formalized in the framework of distributed sensing. Distributed sensing may include active sensing, but it differs in that a large number of sensors are spread in the environment. Thanks to recent advances in sensor networks and the rapid growth of the Internet of Things, it is now simple to deploy a distributed sensing system. This is why the combination of environmental sensors and robots is especially appealing for monitoring complex environments, cluttered with obstacles and populated by humans.

This breaks with classical robotics, in which robots are conceived as self-contained: they are composed of actuators, sensors, and computers, and are designed to carry out a multitude of tasks in full autonomy: localization, mapping, navigation, interaction, etc. But, in order to cope with environments as diverse as possible, these classical robots use precise, expensive, and specialized sensors, such as 3D laser range finders. The cost of these sensors prohibits their use in large-scale deployments for service or assistance applications. Furthermore, when all sensors are on the robot, they share the same point of view on the environment, with all the perception complexity this entails. Therefore, we propose to complement a cheaper robot with sensors distributed in the target environment, gathering the information flow into a representation usable by the robots and controlling active sensors such as robots and mobile sensors (e.g., a camera mounted on a pan-tilt unit). This is an emerging research direction that shares some of the challenges of multi-robot operation – such as synchronization, collaborative planning, or swarm intelligence – and of sensor networks – such as calibration. We are therefore collaborating with other teams at Inria that address the issue of communication and interoperability.
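
As a small illustration of the kind of fusion a distributed sensing system enables, the sketch below merges the robot's own (drifting) position estimate with a detection from a fixed sensor in the environment, assuming both are expressed in the map frame as Gaussian estimates. The Gaussian product rule and all covariance values are illustrative assumptions, not a specific system of the team.

```python
# A minimal sketch: fuse a detection from a fixed environment sensor (e.g., a
# ceiling camera reporting a position in the map frame) with the robot's own
# position estimate. Values are illustrative.
import numpy as np

def fuse_gaussian(mu_robot, cov_robot, mu_ext, cov_ext):
    """Combine two Gaussian position estimates (robot odometry vs. external detection)."""
    k = cov_robot @ np.linalg.inv(cov_robot + cov_ext)   # Kalman-style gain
    mu = mu_robot + k @ (mu_ext - mu_robot)
    cov = (np.eye(2) - k) @ cov_robot
    return mu, cov

# Example: drifting odometry corrected by a coarse but unbiased external detection.
mu, cov = fuse_gaussian(np.array([2.0, 1.0]), np.diag([0.5, 0.5]),
                        np.array([2.4, 0.8]), np.diag([0.2, 0.2]))
```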

To address the fragility problem, the traditional approach is to first diagnose the situation, then use a planning algorithm to create/select a contingency plan. The main challenge here is to take uncertainties into account both in the diagnosis and in the planning, a challenge naturally suited for Bayesian methods [45].
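
To illustrate the diagnosis step, the toy sketch below maintains a Bayesian posterior over a handful of fault hypotheses given one observed symptom; a planner would then select a contingency plan for the most probable fault. The fault list, prior, and likelihoods are invented for illustration only.

```python
# A toy sketch of Bayesian diagnosis: a posterior over fault hypotheses computed
# from an observed symptom. All numbers are made up for illustration.
import numpy as np

faults = ["no_fault", "broken_left_wheel", "stuck_lidar"]
prior = np.array([0.90, 0.05, 0.05])

# P(symptom | fault) for one symptom: "the robot drifts to the left"
likelihood = np.array([0.05, 0.80, 0.10])

posterior = prior * likelihood
posterior /= posterior.sum()

most_probable = faults[int(np.argmax(posterior))]   # would index a library of contingency plans
```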

An alternative approach is to skip the diagnosis and let the robot discover by trial and error a behavior that works in spite of the damage, that is, to use a reinforcement learning algorithm [54], [43]. This approach could be especially appropriate for low-cost autonomous robots, because diagnostic procedures require expensive proprioceptive sensors, and because the possible faults in a complex, autonomous robot that works in an open and dynamic environment are almost infinite in number. However, current reinforcement learning algorithms require hundreds of trials/episodes to learn a single, often simplified, task [43], which makes them impractical for real robots and more ambitious tasks. We therefore need to design new trial-and-error algorithms that will allow robots to learn with a much smaller number of trials (typically, a dozen). We think the key idea is to guide online learning on the physical robot with dynamic simulations. In particular, we will work on combining the exploration abilities of evolutionary algorithms [35] with the convergence speed of gradient-free, continuous, model-based optimization algorithms, like Bayesian optimization [47], [49]. In our recent work, we successfully mixed evolutionary search in simulation, physical tests on the robot, and machine learning to allow a robot to recover from physical damage [44], [2]. We will continue in this direction.
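
The sketch below gives a rough flavor of such simulation-guided trial-and-error, loosely inspired by the approach of [44], [2]: performance values predicted in simulation act as the prior mean of a Gaussian process, and an upper-confidence-bound rule picks the next behavior to test on the physical robot. The one-dimensional behavior space, the kernel, and all hyper-parameters are illustrative assumptions, not the published algorithm.

```python
# A schematic sketch of Bayesian optimization with a simulation-derived prior mean.
# Behavior space, kernel and hyper-parameters are illustrative.
import numpy as np

behaviors = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # discretized behavior descriptors
sim_prior = np.sin(3 * behaviors).ravel()                # performance predicted in simulation

def kernel(a, b, length=0.1, var=0.25):
    """Squared-exponential kernel between two column vectors of descriptors."""
    d = a - b.T
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_tried, y_tried, sigma_n=0.01):
    """GP posterior over all behaviors, using the simulation map as the prior mean."""
    K = kernel(x_tried, x_tried) + sigma_n * np.eye(len(x_tried))
    k_star = kernel(behaviors, x_tried)
    prior_tried = np.interp(x_tried.ravel(), behaviors.ravel(), sim_prior)
    alpha = np.linalg.solve(K, y_tried - prior_tried)
    mu = sim_prior + k_star @ alpha
    var = kernel(behaviors, behaviors).diagonal() - np.einsum(
        "ij,ji->i", k_star, np.linalg.solve(K, k_star.T))
    return mu, np.sqrt(np.maximum(var, 0.0))

def next_trial(x_tried, y_tried, kappa=2.0):
    """Upper-confidence-bound selection of the next behavior to test on the robot."""
    mu, std = gp_posterior(x_tried, y_tried)
    return behaviors[int(np.argmax(mu + kappa * std))]
```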

Another approach to addressing fragility is to deploy several robots or a swarm of robots, or to make robots evolve in an active environment. We will consider several paradigms, such as (1) those inspired by collective natural phenomena, in which the environment plays an active role in coordinating the activity of a huge number of biological entities such as ants; and (2) those based on online learning [41]. We envision transferring our knowledge of such phenomena to engineer new artificial devices such as an intelligent floor (in fact, a spatially distributed network in which each node can sense, compute, and communicate with contiguous nodes, and can interact with moving entities on top of it) in order to assist people and robots (see the principle in [52], [41], [18]).
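
As a toy illustration of the intelligent-floor principle, the sketch below models a grid of tiles that each only sense local presence and exchange values with their four neighbors; repeated local updates propagate a hop-distance field toward a detected entity, which a robot or a guided person could then follow. The grid size and the update rule are illustrative assumptions.

```python
# A toy sketch of a distributed floor: each tile senses presence locally and only
# exchanges values with its four neighbors. Local min-plus updates propagate a
# distance-to-target field across the grid. Sizes and rules are illustrative.
import numpy as np

GRID = (8, 8)
occupied = np.zeros(GRID, dtype=bool)
occupied[5, 2] = True                      # a tile detecting the person/robot to reach

INF = 1e9
field = np.full(GRID, INF)
field[occupied] = 0.0

for _ in range(GRID[0] * GRID[1]):         # enough local rounds to cover the grid
    padded = np.pad(field, 1, constant_values=INF)
    neighbors = np.minimum.reduce([padded[:-2, 1:-1], padded[2:, 1:-1],
                                   padded[1:-1, :-2], padded[1:-1, 2:]])
    field = np.minimum(field, neighbors + 1.0)   # each tile only looks at adjacent tiles

# field[x, y] now holds the hop distance from tile (x, y) to the detected entity.
```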