
2022 Activity Report: Project-Team LARSEN

RNSR: 201521241C
  • Research center: Inria Nancy - Grand Est Center
  • In partnership with: Université de Lorraine, CNRS
  • Team name: Lifelong Autonomy and interaction skills for Robots in a Sensing ENvironment
  • In collaboration with: Laboratoire lorrain de recherche en informatique et ses applications (LORIA)
  • Domain: Perception, Cognition and Interaction
  • Theme: Robotics and Smart environments

Keywords

Computer Science and Digital Science

  • A5. Interaction, multimedia and robotics
  • A5.1. Human-Computer Interaction
  • A5.10. Robotics
  • A5.10.1. Design
  • A5.10.2. Perception
  • A5.10.3. Planning
  • A5.10.4. Robot control
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.10.6. Swarm robotics
  • A5.10.7. Learning
  • A5.10.8. Cognitive robotics and systems
  • A5.11. Smart spaces
  • A5.11.1. Human activity analysis and recognition
  • A8.2. Optimization
  • A8.2.2. Evolutionary algorithms
  • A9.2. Machine learning
  • A9.5. Robotics
  • A9.7. AI algorithmics
  • A9.9. Distributed AI, Multi-agent

Other Research Topics and Application Domains

  • B2.1. Well being
  • B2.5.3. Assistance for elderly
  • B5.1. Factory of the future
  • B5.6. Robotic systems
  • B7.2.1. Smart vehicles
  • B9.6. Humanities
  • B9.6.1. Psychology

1 Team members, visitors, external collaborators

Research Scientists

  • Francis Colas [Team leader, INRIA, Researcher, HDR]
  • Olivier Buffet [INRIA, Researcher, HDR]
  • Serena Ivaldi [INRIA, Researcher, HDR]
  • Pauline Maurice [CNRS, Researcher]
  • Jean-Baptiste Mouret [INRIA, Senior Researcher, HDR]
  • Quentin Rouxel [INRIA, Starting Research Position, from Sep 2022]

Faculty Members

  • Amine Boumaza [UL, Associate Professor]
  • Mohamed Boutayeb [UL, Professor, from Sep 2022, HDR]
  • Jessica Colombel [UL, ATER, from Sep 2022]
  • Jérôme Dinet [UL, Professor, until Aug 2022, HDR]
  • Yassine El Khadiri [UL, ATER, until Aug 2022]
  • Alexis Scheuer [UL, Associate Professor]
  • Vincent Thomas [UL, Associate Professor]

PhD Students

  • Lina Achaji [Stellantis, CIFRE]
  • Timothee Anne [UL]
  • Abir Bouaouda [UL]
  • Raphael Bousigues [INRIA]
  • Jessica Colombel [INRIA, until Aug 2022]
  • Yoann Fleytoux [INRIA]
  • Nicolas Gauville [INRIA]
  • Salomé Lepers [UL, from Sep 2022]
  • Nima Mehdi [INRIA]
  • Alexandre Oliveira Souza [Safran, CIFRE]
  • Luigi Penco [INRIA + UL, until May 2022]
  • Vladislav Tempez [INRIA, until Jul 2022]
  • Julien Uzzan [Stellantis, CIFRE, until Mar 2022]
  • Lorenzo Vianello [UL]
  • Aya Yaacoub [CNRS]
  • Yang You [INRIA]
  • Eloïse Zehnder [UL]
  • Jacques Zhong [CEA]

Technical Staff

  • Ivan Bergonzani [INRIA, until May 2022, Engineer]
  • Eloïse Dalin [INRIA, until Aug 2022, Engineer]
  • Edoardo Ghini [INRIA, Engineer, until Mar 2022]
  • Raphaël Lartot [INRIA, until Jul 2022, Engineer]
  • Glenn Maguire [INRIA, Engineer, until Feb 2022]
  • Jean Michenaud [INRIA, until Jul 2022, Engineer]
  • Avotra Ny Aina Mampionontsoa Rakotondravony [UL, Engineer]
  • Lucien Renaud [INRIA, Engineer]
  • Nicolas Valle [INRIA, Engineer]

Interns and Apprentices

  • Thais Bernardi [INRIA, Intern, from May 2022 until Aug 2022]
  • Corentin Bunel [INRIA, Intern, from Feb 2022 until Aug 2022]
  • Anaïs Douet [INRIA, Intern, from May 2022 until Jun 2022]
  • Aurélien Herbin [UL, Intern, from Apr 2022 until Aug 2022]
  • Salome Lepers [UL, Intern, from Mar 2022 until Aug 2022]
  • Louy Masset [INRIA, Intern, from Mar 2022 until Aug 2022]
  • Yves Renaud [INRIA, Intern, from Feb 2022 until Jun 2022]
  • Manu Serpette [UL, until Aug 2022, Intern]
  • Etienne Vareille [INRIA, Intern, until Mar 2022]

Administrative Assistants

  • Véronique Constant [INRIA]
  • Antoinette Courrier [CNRS]

Visiting Scientist

  • Angela Mazzeo [SCUOLA SUPERIORE SANT ANNA, from Sep 2022]

External Collaborators

  • Alexis Aubry [UL, HDR]
  • François Charpillet [INRIA, HDR]
  • Dominique Martinez [CNRS, HDR]

2 Overall objectives

The goal of the Larsen team is to move robots beyond the research laboratories and manufacturing industries: current robots are far from being the fully autonomous, reliable, and interactive robots that could co-exist with us in our society and run for days, weeks, or months. While there is undoubtedly progress to be made on the hardware side, robotic platforms are quickly maturing and we believe the main challenges to achieve our goal are now on the software side. We want our software to be able to run on low-cost mobile robots that are therefore not equipped with high-performance sensors or actuators, so that our techniques can realistically be deployed and evaluated in real settings, such as in service and assistive robotic applications. We envision that these robots will be able to cooperate with each other but also with intelligent spaces or apartments which can also be seen as robots spread in the environment. Like robots, intelligent spaces are equipped with sensors that make them sensitive to human needs, habits, gestures, etc., and actuators to be adaptive and responsive to environment changes and human needs. These intelligent spaces can give robots improved skills, with less expensive sensors and actuators enlarging their field of view of human activities, making them able to behave more intelligently and with better awareness of people evolving in their environment. As robots and intelligent spaces share common characteristics, we will use, for the sake of simplicity, the term robot for both mobile robots and intelligent spaces.

Among the particular issues we want to address, we aim at designing robots able to:

  • handle dynamic environments and unforeseen situations;
  • cope with physical damage;
  • interact physically and socially with humans;
  • collaborate with each other;
  • exploit the multitude of sensor measurements from their surroundings;
  • enhance their acceptability and usability by end-users without robotics background.

All these abilities can be summarized by the following two objectives:

  • life-long autonomy: continuously perform tasks while adapting to sudden or gradual changes in both the environment and the morphology of the robot;
  • natural interaction with robotics systems: interact with both other robots and humans for long periods of time, taking into account that people and robots learn from each other when they live together.

3 Research program

3.1 Lifelong autonomy

Scientific context

So far, only a few autonomous robots have been deployed for a long time (weeks, months, or years) outside of factories and laboratories. They are mostly mobile robots that simply “move around” (e.g., vacuum cleaners or museum “guides”) and data collecting robots (e.g., boats or underwater “gliders” that collect data about the water of the ocean).

A large part of the long-term autonomy community is focused on simultaneous localization and mapping (SLAM), with a recent emphasis on changing and outdoor environments 47, 56. A more recent theme is life-long learning: during long-term deployment, we cannot hope to equip robots with everything they need to know, therefore some things will have to be learned along the way. Most of the work on this topic leverages machine learning and/or evolutionary algorithms to improve the ability of robots to react to unforeseen changes 47, 53.

Main challenges

The first major challenge is to endow robots with a stable situation awareness in open and dynamic environments. This covers both the state estimation of the robot by itself as well as the perception/representation of the environment. Both problems have been claimed to be solved but it is only the case for static environments 52.

In the Larsen team, we aim at deployment in environments shared with humans, which implies dynamic objects that degrade both the mapping and the localization of a robot, especially in cluttered spaces. Moreover, when robots stay longer in the environment than for the acquisition of a snapshot map, they have to face structural changes, such as the displacement of a piece of furniture or the opening or closing of a door. The current approach is to simply update an implicitly static map with all observations, with no attempt at distinguishing which changes should be kept. For localization in not-too-cluttered or not-too-empty environments, this is generally sufficient as a significant fraction of the environment should remain stable. But for life-long autonomy, and in particular navigation, the quality of the map, and especially the knowledge of its stable parts, is primordial.

A second major obstacle to moving robots outside of labs and factories is their fragility: Current robots often break in a few hours, if not a few minutes. This fragility mainly stems from the overall complexity of robotic systems, which involve many actuators, many sensors, and complex decisions, and from the diversity of situations that robots can encounter. Low-cost robots exacerbate this issue because they can be broken in many ways (high-quality material is expensive), because they have low self-sensing abilities (sensors are expensive and increase the overall complexity), and because they are typically targeted towards non-controlled environments (e.g., houses rather than factories, in which robots are protected from most unexpected events). More generally, this fragility is a symptom of the lack of adaptive abilities in current robots.

Angle of attack

To solve the state estimation problem, our approach is to combine classical estimation filters (Extended Kalman Filters, Unscented Kalman Filters, or particle filters) with a Bayesian reasoning model in order to internally simulate various configurations of the robot in its environment. This should allow for adaptive estimation that can be used as one aspect of long-term adaptation. To handle dynamic and structural changes in an environment, we aim at assessing, for each piece of observation, whether it is static or not.
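
As a concrete illustration of the estimation machinery this builds on, the sketch below shows a minimal bootstrap particle filter; the 2D motion and observation models are hypothetical placeholders, not the estimators actually deployed on our robots.

```python
# Minimal bootstrap particle filter for robot state estimation (illustrative
# sketch only: the motion and observation models below are hypothetical
# placeholders, not the estimators actually used by the team).
import numpy as np

rng = np.random.default_rng(0)

N = 500                                   # number of particles
particles = rng.normal(0.0, 1.0, (N, 2))  # 2D position hypotheses
weights = np.full(N, 1.0 / N)

def motion_model(p, u, noise=0.05):
    """Propagate particles with a commanded displacement u plus noise."""
    return p + u + rng.normal(0.0, noise, p.shape)

def observation_likelihood(p, z, sigma=0.2):
    """Likelihood of a (noisy) position measurement z for each particle."""
    d2 = np.sum((p - z) ** 2, axis=1)
    return np.exp(-0.5 * d2 / sigma ** 2)

def step(particles, weights, u, z):
    particles = motion_model(particles, u)
    weights = weights * observation_likelihood(particles, z)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

particles, weights = step(particles, weights, u=np.array([0.1, 0.0]),
                          z=np.array([0.12, 0.01]))
print("estimated position:", np.average(particles, axis=0, weights=weights))
```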

We also plan to address active sensing to improve the situation awareness of robots. Literally, active sensing is the ability of an interacting agent to act so as to control what it senses from its environment with the typical objective of acquiring information about this environment. A formalism for representing and solving active sensing problems has already been proposed by members of the team 46 and we aim to use it to formalize decision-making problems for improving situation awareness.

Situation awareness of robots can also be tackled by cooperation, whether it be between robots, between robots and sensors in the environment (i.e., intelligent spaces), or between robots and humans. This is in rupture with classical robotics, in which robots are conceived as self-contained. But, in order to cope with environments as diverse as possible, these classical robots use precise, expensive, and specialized sensors, whose cost prohibits their use in large-scale deployments for service or assistance applications. Furthermore, when all sensors are on the robot, they share the same point of view on the environment, which is a limit for perception. Therefore, we propose to complement a cheaper robot with sensors distributed in a target environment. This is an emerging research direction that shares some of the challenges of multi-robot operation, and we are therefore collaborating with other teams at Inria that address the issues of communication and interoperability.

To address the fragility problem, the traditional approach is to first diagnose the situation, then use a planning algorithm to create/select a contingency plan. But, again, this calls for both expensive sensors on the robot for the diagnosis and extensive work to predict and plan for all the possible faults that, in an open and dynamic environment, are almost infinite. An alternative approach is then to skip the diagnosis and let the robot discover by trial and error a behavior that works in spite of the damage with a reinforcement learning algorithm 62, 53. However, current reinforcement learning algorithms require hundreds of trials/episodes to learn a single, often simplified, task 53, which makes them impossible to use for real robots and more ambitious tasks. We therefore need to design new trial-and-error algorithms that will allow robots to learn with a much smaller number of trials (typically, a dozen). We think the key idea is to guide online learning on the physical robot with dynamic simulations. For instance, in our recent work, we successfully mixed evolutionary search in simulation, physical tests on the robot, and machine learning to allow a robot to recover from physical damage 54, 1.
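
The sketch below illustrates, in a schematic way, this idea of guiding the few physical trials with a simulation-based prior: a repertoire of behaviors scored in simulation is corrected by a Gaussian process trained on the residuals measured on a (placeholder) real robot. The repertoire, models, and parameters are made-up assumptions, not the actual damage-recovery implementation.

```python
# Schematic sketch of simulation-guided trial and error: a repertoire of
# behaviors scored offline by a simulator serves as a prior, and a Gaussian
# process corrects that prior from a handful of physical trials.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Hypothetical repertoire: 200 behaviors described by 3 parameters,
# each with a performance predicted offline by a simulator.
behaviors = rng.uniform(-1.0, 1.0, (200, 3))
sim_perf = 1.0 - np.linalg.norm(behaviors, axis=1)        # placeholder simulator

def real_trial(b):
    """Placeholder for an expensive trial on the damaged physical robot."""
    return 1.0 - np.linalg.norm(b - np.array([0.4, 0.0, 0.0])) + rng.normal(0, 0.01)

tried_idx, tried_b, residuals = [], [], []
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
kappa = 0.3

for trial in range(12):                                    # about a dozen trials
    if tried_b:
        gp.fit(np.array(tried_b), np.array(residuals))
        mu, std = gp.predict(behaviors, return_std=True)
    else:
        mu, std = np.zeros(len(behaviors)), np.ones(len(behaviors))
    # Acquisition: simulated prior + predicted reality gap + exploration bonus.
    score = sim_perf + mu + kappa * std
    score[tried_idx] = -np.inf                             # do not repeat a trial
    best = int(np.argmax(score))
    perf = real_trial(behaviors[best])
    tried_idx.append(best)
    tried_b.append(behaviors[best])
    residuals.append(perf - sim_perf[best])
    print(f"trial {trial}: behavior {best}, measured performance {perf:.3f}")
```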

A final approach to address fragility is to deploy several robots or a swarm of robots, or to make robots evolve in an active environment. We will consider several paradigms such as (1) those inspired from collective natural phenomena, in which the environment plays an active role in coordinating the activity of a huge number of biological entities such as ants, and (2) those based on online learning 51. We envision transferring our knowledge of such phenomena to engineer new artificial devices such as an intelligent floor (which is in fact a spatially distributed network in which each node can sense, compute, and communicate with contiguous nodes and can interact with moving entities on top of it) in order to assist people and robots (see the principle in 60, 51, 45).

3.2 Natural interaction with robotic systems

Scientific context

Interaction with the environment is a primordial requirement for an autonomous robot. When the environment is sensorized, the interaction can include localizing, tracking, and recognizing the behavior of robots and humans. One specific issue lies in the lack of predictive models for human behavior and a critical constraint arises from the incomplete knowledge of the environment and the other agents.

On the other hand, when working in the proximity of or directly with humans, robots must be capable of safely interacting with them, which calls upon a mixture of physical and social skills. Currently, robot operators are usually trained and specialized but potential end-users of robots for service or personal assistance are not skilled robotics experts, which means that the robot needs to be accepted as reliable, trustworthy and efficient 66. Most Human-Robot Interaction (HRI) studies focus on verbal communication 61 but applications such as assistance robotics require a deeper knowledge of the intertwined exchange of social and physical signals to provide suitable robot controllers.

Main challenges

We are here interested in building the bricks for a situated Human-Robot Interaction (HRI) addressing both the physical and social dimension of the close interaction, and the cognitive aspects related to the analysis and interpretation of human movement and activity.

The combination of physical and social signals into robot control is a crucial investigation for assistance robots 63 and robotic co-workers 58. A major obstacle is the control of physical interaction (precisely, the control of contact forces) between the robot and the human while both partners are moving. In mobile robots, this problem is usually addressed by planning the robot movement taking into account the human as an obstacle or as a target, then delegating the execution of this “high-level” motion to whole-body controllers, where a mixture of weighted tasks is used to account for the robot balance, constraints, and desired end-effector trajectories 44.

The first challenge is to make these controllers easier to deploy in real robotics systems, as currently they require a lot of tuning and can become very complex to handle the interaction with unknown dynamical systems such as humans. Here, the key is to combine machine learning techniques with such controllers.

The second challenge is to make the robot react and adapt online to the human feedback, exploiting the whole set of measurable verbal and non-verbal signals that humans naturally produce during a physical or social interaction. Technically, this means finding the optimal policy that adapts the robot controllers online, taking into account feedback from the human. Here, we need to carefully identify the significant feedback signals or some metrics of human feedback. In real-world conditions (i.e., outside the research laboratory environment) the set of signals is technologically limited by the robot's and environmental sensors and the onboard processing capabilities.

The third challenge is for a robot to identify and track people using its onboard sensors. The motivation is to estimate online the position, the posture, or even the moods and intentions of persons surrounding the robot. The main difficulty is to do so online, in real time, and in cluttered environments.

Angle of attack

Our key idea is to exploit the physical and social signals produced by the human during the interaction with the robot and the environment in controlled conditions, to learn simple models of human behavior and consequently to use these models to optimize the robot movements and actions. In a first phase, we will exploit human physical signals (e.g., posture and force measurements) to identify the elementary posture tasks during balance and physical interaction. The identified model will be used to optimize the robot whole-body control as prior knowledge to improve both the robot balance and the control of the interaction forces. Technically, we will combine weighted and prioritized controllers with stochastic optimization techniques. To adapt online the control of physical interaction and make it possible with human partners that are not robotics experts, we will exploit verbal and non-verbal signals (e.g., gaze, touch, prosody). The idea here is to estimate online from these signals the human intent along with some inter-individual factors that the robot can exploit to adapt its behavior, maximizing the engagement and acceptability during the interaction.

Another promising approach already investigated in the Larsen team is the capability for a robot and/or an intelligent space to localize humans in its surrounding environment and to understand their activities. This is an important issue to handle both for safe and efficient human-robot interaction.

Simultaneous Tracking and Activity Recognition (STAR) 65 is an approach we want to develop. The activity of a person is highly correlated with their position, and this approach aims at combining tracking and activity recognition so that each benefits from the other. By tracking the individual, the system may help infer their possible activity, while by estimating the activity of the individual, the system may make a better prediction of their possible future positions (especially in the case of occlusions). This direction has been tested in simulation with particle filters 50, and one promising direction would be to couple STAR with decision-making formalisms like partially observable Markov decision processes (POMDPs). This would allow us to formalize problems such as deciding which action to take given an estimate of the human location and activity. This could also formalize other problems linked to the active sensing direction of the team: how should the robotic system choose its actions in order to better estimate the human location and activity (for instance by moving in the environment or by changing the orientation of its cameras)?
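
The toy sketch below illustrates the coupling at the heart of STAR: each particle jointly carries a position and a discrete activity, the activity conditions the motion model, and both are updated from the same measurement. The activities, dynamics, and sensor model are made-up assumptions for illustration only.

```python
# Toy joint tracking and activity recognition with a particle filter.
import numpy as np

rng = np.random.default_rng(2)
ACTIVITIES = ["resting", "walking"]
SPEED = {"resting": 0.0, "walking": 1.0}          # mean displacement per step
ACT_TRANSITION = np.array([[0.95, 0.05],          # P(next activity | activity)
                           [0.10, 0.90]])

N = 300
pos = rng.normal(0.0, 0.5, N)                     # 1D positions
act = rng.integers(0, 2, N)                       # activity index per particle
w = np.full(N, 1.0 / N)

def star_step(pos, act, w, z, sigma=0.3):
    # 1) sample the next activity, then move according to that activity
    act = np.array([rng.choice(2, p=ACT_TRANSITION[a]) for a in act])
    pos = pos + np.array([SPEED[ACTIVITIES[a]] for a in act]) \
              + rng.normal(0.0, 0.1, len(pos))
    # 2) weight by how well the predicted position explains the measurement
    w = w * np.exp(-0.5 * (pos - z) ** 2 / sigma ** 2)
    w /= w.sum()
    idx = rng.choice(len(w), size=len(w), p=w)    # simple multinomial resampling
    return pos[idx], act[idx], np.full(len(w), 1.0 / len(w))

pos, act, w = star_step(pos, act, w, z=1.1)
print("P(walking) =", np.mean(act == 1), " position estimate =", np.mean(pos))
```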

Another issue we want to address is robotic human body pose estimation. Human body pose estimation consists of tracking body parts by analyzing a sequence of input images from single or multiple cameras.

Human posture analysis is of high value for human-robot interaction and activity recognition. However, even though the arrival of new sensors like RGB-D cameras has simplified the problem, it still poses a great challenge, especially if we want to do it online, on a robot, and in realistic conditions (cluttered environments). It is even more difficult for a robot, which has to bring together different capabilities at both the perception and navigation levels 49. This will be tackled through different techniques, ranging from Bayesian state estimation (particle filtering) to learning, active, and distributed sensing.

4 Application domains

4.1 Personal assistance

During the last fifty years, many medical advances as well as the improvement of the quality of life have resulted in a longer life expectancy in industrial societies. The increase in the number of elderly people is a matter of public health because, although elderly people can age in good health, old age also causes frailty, particularly physical frailty, which can result in a loss of autonomy. This leads us to re-think the current model regarding the care of elderly people.1 Capacity limits in specialized institutes, along with the preference of elderly people to stay at home as long as possible, explain a growing need for specific services at home.

Ambient intelligence technologies and robotics could contribute to this societal challenge. The spectrum of possible actions in the field of elderly assistance is very large, ranging from activity monitoring services and mobility or daily activity aids to medical rehabilitation and social interaction. This will be based on the experimental infrastructure we have built in Nancy (Smart apartment platform) as well as the deep collaboration we have with OHS 2 and the company Pharmagest and its subsidiary Diatelic, an SAS created in 2002 by a member of the team and others.

At the same time, these technologies can be beneficial to address the increasing prevalence of musculoskeletal disorders and diseases caused by the non-ergonomic postures of workers subjected to physically stressful tasks. Wearable technologies, sensors, and robotics can be used to monitor the worker's activity and its impact on their health, and to anticipate risky movements. Two application domains have been particularly addressed in the last years: industry, and more specifically manufacturing, and healthcare.

4.2 Civil robotics

Many applications for robotics technology exist within the services provided by national and local government. Typical applications include civil infrastructure services 3 such as: urban maintenance and cleaning; civil security services; emergency services involved in disaster management including search and rescue; environmental services such as surveillance of rivers, air quality, and pollution. These applications may be carried out by a wide variety of robot and operating modalities, ranging from single robots to small fleets of homogeneous or heterogeneous robots. Often robot teams will need to cooperate to span a large workspace, for example in urban rubbish collection, and operate in potentially hostile environments, for example in disaster management. These systems are also likely to have extensive interaction with people and their environments.

The skills required for civil robots match those developed in the Larsen project: operating for a long time in potentially hostile environments, potentially with small fleets of robots, and potentially in interaction with people.

5 Social and environmental responsibility

5.1 Impact of research results

Hospitals- The research in the ExoTurn project led to the deployment of a total of four exoskeletons (Laevo) in the Intensive Care Unit of the Hospital of Nancy (CHRU). They have been used by the medical staff since April 2020 to perform prone positioning on COVID-19 patients with severe ARDS. To the best of our knowledge, other hospitals (in France, Belgium, the Netherlands, Italy, and Switzerland) are following in our footsteps and have purchased Laevo exoskeletons for the same use. At the same time, the positive feedback from the CHRU of Nancy has motivated us to continue investigating whether exoskeletons could be beneficial for the medical staff involved in other types of healthcare activities. A new study on bed bathing of hospitalized patients started in February 2021 in the department of vascular surgery. For sanitary reasons, preliminary experiments investigating the use of the Laevo for assisting nurses were conducted in the team's laboratory premises in summer 2021. An article presenting the findings is in preparation.

Ageing and health- This research line aims to propose technological solutions to the difficulties of elderly people in an ageing population (due to the increase in life expectancy). The placement of older people in a nursing home (EHPAD) is often only a choice of reason and can be rather poorly experienced by people. One answer to this societal problem is the development of smart home technologies that assist the elderly to stay in their homes longer than they can today. With this objective, we have a long-term cooperation with Pharmagest, which has been supported in recent years through a PhD thesis (Cifre) between June 2017 and August 2021. The objective is to enhance the CareLib solution developed by Diatelic (a subsidiary of the Wellcoop-Pharmagest group) and the Larsen team through a previous collaboration (Satelor project). The CareLib offer is a solution consisting of (1) a connected box (with touch screen), (2) a 3D sensor that is able (i) to measure characteristics of the gait such as speed and step length, (ii) to identify activities of daily life, and (iii) to detect emergency situations such as a fall, and (3) universal sensors (motion, ...) installed in each part of the housing. A software licence has been granted by Inria to Pharmagest.

Environment- The new project TELEMOVTOP, in collaboration with the company Isotop, aims at automating the process of disposal of metal sheets contaminated with asbestos from roofs. This procedure has a high environmental impact and is also a risk for the health of the workers. Robotics can be a major technological innovation in this field. With this project, the team aims both at helping to reduce the workers' risk of exposure to asbestos and at accelerating the disposal process to reduce environmental pollution.

Firefighters- The project POMPEXO, in collaboration with the SDIS 54 (firefighters from Meurthe-et-Moselle) and two laboratories from the Université de Lorraine (DevAH: biomechanics, and Perseus: cognitive ergonomics), aims at investigating the possibility of assisting firefighters with an exoskeleton during the car-cutting maneuver. This frequent maneuver is physically very demanding, and due to a general trend of aging and decreasing physical condition among firefighter crews, fewer and fewer firefighters are able to perform it. Hence the SDIS 54 is looking for a solution to increase the strength and reduce the fatigue of firefighters during this maneuver. Occupational exoskeletons have the potential to alleviate the physical load on the workers, and hence may be a solution. However, the feasibility and benefits of exoskeletons are task-dependent. We are therefore currently analyzing the car-cutting maneuver based on data collected on-site with professional firefighters, to identify what kind of exoskeleton may be suitable.

6 Highlights of the year

In 2022, two events have gone against the objective of the COP “to build an efficient and serene organisation”, which is the responsibility of the executive committee of the institute:

  • the forced deployment of the new Eksae software suite, which was not ready and caused great disruptions, mitigated only by the strong involvement of the administrative staff; and
  • the distrust shown by the institute's upper management towards the Evaluation Committee, which caused an uneasy atmosphere.

Nevertheless, the Larsen team was successfully evaluated. Among other highlights, we can report:

  • The Horizon Europe project euRobin (European Network of Excellence in Robotics and AI) was funded, and started in July 2022 (PI: Serena Ivaldi). Our team is leading the WP2 "Personal robotics".
  • Serena Ivaldi was an international judge of the ANA Avatar XPRIZE competition.
  • Serena Ivaldi was named Vice-President of the IEEE RAS Members Activities Board (01/2022) and was named co-chair of the IEEE RAS ICRA Steering committee (06/2022).

6.1 Awards

Best Conference Paper Award Finalist – IEEE International Conference on Advanced Robotics and Mechatronics.

7 New software and platforms

7.1 New software

7.1.1 inria_wbc

  • Name:
    Inria whole body controller
  • Keyword:
    Robotics
  • Scientific Description:
    This software implements Task-Space Inverse Dynamics for the Talos robot, iCub Robot, Franka-Emika Panda robot, and Tiago Robot.
  • Functional Description:
    This controller exploits the TSID library (QP-based inverse dynamics) to implement a position controller for the Talos humanoid robot. It includes:
    • flexible configuration files,
    • links with the RobotDART library for easy simulations,
    • stabilizer and torque safety checks.
  • Release Contributions:
    First version
  • News of the Year:
     
    • static walking
    • better support of the iCub robot
    • new self-collision check
    • support of AVX/SSE
    • several bug fixes and improvements
  • URL:
  • Publication:
  • Contact:
    Jean-Baptiste Mouret
  • Participants:
    Jean-Baptiste Mouret, Eloise Dalin, Ivan Bergonzani, Olivier Rochel

7.1.2 libProMP

  • Name:
    Probabilistic Motion Primitives
  • Keywords:
    Machine learning, Robotics
  • Functional Description:
    This library implements Probabilistic Motion Primitives for motion prediction, motion recognition, and learning from demonstration in robotics. It leverages the Eigen3 library.
  • News of the Year:
    First public release
  • URL:
  • Publication:
  • Contact:
    Serena Ivaldi
  • Participants:
    Luigi Penco, Ivan Bergonzani, Waldez Azevedo Gomes Junior, Serena Ivaldi, Jean-Baptiste Mouret

8 New results

8.1 Lifelong autonomy

8.1.1 Planning and decision making

Heuristic search for (partially observable) stochastic games

Participants: Olivier Buffet, Aurélien Delage, Vincent Thomas.

Many robotic scenarios involve multiple interacting agents, robots or humans, e.g., security robots in public areas.

We first worked in the past on the collaborative setting, where all agents share one objective, in particular through solving Dec-POMDPs by (i) turning them into occupancy MDPs and (ii) using heuristic search techniques and value function approximation 2. A key idea is to take the point of view of a central planner and reason on a sufficient statistic called the occupancy state.

We are now also working on applying similar approaches in the important 2-player zero-sum setting, i.e., with two competing agents. We have proposed an algorithm for partially observable stochastic games (POSGs), turning the problem into an occupancy Markov game and deriving bounding approximators that build on two types of continuity properties: Lipschitz continuity, and convexity and concavity properties. On similar topics, we have proposed an approach for global max-min optimization of Lipschitz-continuous functions.

[This line of research is pursued through Jilles Dibangoye's ANR JCJC PLASMA.]

Publications: 28, 37.

Planning with uncertain action durations

Participants: Salomé Lepers, Olivier Buffet, Vincent Thomas.

Probabilistic planning often accounts for uncertainty about the outcome of actions, but little work has looked at the uncertainty about the duration of processes, which can be an action triggered by the agent or some externally triggered process. To handle such uncertain durations, we here assume that the progress of a given process is modeled by a hidden Markov chain, so that the planning problem is a partially observable Markov decision process (POMDP).

Based on this approach, we explored the impact of (1) using different canonical Markov chains to model the same uncertain durations (typically distinguishing whether the hidden state represents the elapsed time or the remaining time), and (2) compressing a large Markov chain into a smaller one (which takes time, but may not speed up the planning phase).
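
For illustration, the sketch below builds the two canonical encodings mentioned above for a small made-up duration distribution: a remaining-time chain, where the duration is drawn once and then counted down, and an elapsed-time chain, where termination follows the hazard rate. Both reproduce the same completion-time law; the numbers are assumptions for the example only.

```python
# Two canonical Markov-chain encodings of the same uncertain duration.
import numpy as np

p = np.array([0.0, 0.1, 0.3, 0.4, 0.2])        # P(duration = 0..4), sums to 1

# Remaining-time chain: the initial distribution over the hidden state is p
# itself (the remaining time), and transitions are a deterministic countdown,
# so the completion-time law is p by construction.

# Elapsed-time chain: states 0, 1, ..., plus an absorbing "done" state; at
# elapsed time t the process stops with the hazard rate
# h(t) = P(d = t) / P(d >= t).
def elapsed_time_chain(p):
    n = len(p)
    T = np.zeros((n + 1, n + 1))               # last state = absorbing "done"
    for t in range(n):
        tail = p[t:].sum()
        h = p[t] / tail if tail > 0 else 1.0
        T[t, n] = h                            # terminate now
        if t + 1 < n:
            T[t, t + 1] = 1.0 - h              # keep running one more step
    T[n, n] = 1.0
    return T

T = elapsed_time_chain(p)
state = np.zeros(len(p)); state[0] = 1.0       # distribution over elapsed time
completion = []
for t in range(len(p)):
    completion.append(float(state @ T[:len(p), -1]))   # finish exactly at step t
    state = state @ T[:len(p), :len(p)]                # propagate surviving mass
print(np.round(completion, 3))                 # matches p: [0. 0.1 0.3 0.4 0.2]
```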

Learning reflexes for a damaged humanoid robot (Talos robot)

Participants: Timothée Anne, Eloïse Dalin, Serena Ivaldi, Jean-Baptiste Mouret.

We showed that D-Reflex allows the Talos robot to avoid more than 75% of the avoidable falls, with experiments both in reality and in simulation.

Publication: 3; Press coverage by IEEE Spectrum: [link]

Video: Youtube video (11k views).

Learning of object-centric grasp preferences with 1 to 3 labels (Franka robot)

Participants: Yoann Fleytoux, Serena Ivaldi, Jean-Baptiste Mouret.

This work is part of the ANR/Chist-ERA project HEAP.

There are many objects for which it is not possible to choose a grasp by only looking at an RGB-D image, be it for physical reasons (e.g., a hammer with uneven mass distribution) or task constraints (e.g., food that should not be spoiled). In such situations, the preferences of experts need to be taken into account.

We introduced a data-efficient grasping pipeline (Latent Space GP Selector — LGPS) that learns grasp preferences with only a few labels per object (typically 1 to 4) and generalizes to new views of this object. Our pipeline is based on learning a latent space of grasps with a dataset generated with any state-of-the-art grasp generator (e.g., Dex-Net). This latent space is then used as a low-dimensional input for a Gaussian process classifier that selects the preferred grasp among those proposed by the generator. The results show that our method outperforms both GR-ConvNet and GG-CNN (two state-of-the-art methods that are also based on labeled grasps) on the Cornell dataset, especially when only a few labels are used: only 80 labels are enough to correctly choose 80% of the grasps (885 scenes, 244 objects).
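
The sketch below illustrates the selection step of this kind of pipeline: a Gaussian process classifier is trained on a handful of labeled grasps embedded in a low-dimensional latent space, then used to rank the candidates proposed by a grasp generator. The latent coordinates and labels are placeholders; the actual pipeline learns the latent space from a grasp dataset.

```python
# Minimal sketch of grasp preference selection from a few labels.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

# A few expert labels: latent coordinates of grasps, 1 = preferred, 0 = rejected.
labeled_latents = np.array([[0.1, 0.2], [0.8, 0.9], [0.7, 0.8], [0.2, 0.1]])
labels = np.array([1, 0, 0, 1])

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.5))
clf.fit(labeled_latents, labels)

# Candidate grasps for a new view of the object, already embedded in the same
# latent space by a (hypothetical) encoder.
candidate_latents = rng.uniform(0.0, 1.0, (20, 2))
preference = clf.predict_proba(candidate_latents)[:, 1]   # P(preferred)
best = int(np.argmax(preference))
print("selected candidate", best, "with preference", round(float(preference[best]), 2))
```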

Publication: 19

Video: Youtube video

VP-GO: A “Light” Action-Conditioned Visual Prediction Model for Grasping Objects

Participants: Yoann Fleytoux, Anji Ma, Serena Ivaldi, Jean-Baptiste Mouret.

This work is part of the ANR/Chist-ERA project HEAP.

Visual prediction models are promising solutions for vision-based robotic grasping of cluttered, unknown soft objects. Previous models from the literature are computationally demanding, which limits reproducibility; although some consider stochasticity in the prediction model, it is often too weak to capture the reality of robotics experiments involving grasping such objects. Furthermore, previous work focused on elementary movements, which do not lend themselves to reasoning in terms of more complex semantic actions. To address these limitations, we propose VP-GO, a “light” stochastic action-conditioned visual prediction model. We propose a hierarchical decomposition of semantic grasping and manipulation actions into elementary end-effector movements, to ensure compatibility with existing models and datasets for visual prediction of robotic actions, such as RoboNet. We also record and release a new open dataset for visual prediction of object grasping, called PandaGrasp. Our model can be pre-trained on RoboNet and fine-tuned on PandaGrasp, and performs similarly to more complex models in terms of signal prediction metrics. Qualitatively, it performs better when predicting the outcome of complex grasps performed by our robot.

Publication: 21

Multi-objective tuning of whole-body controllers (Talos Robot)

Participants: Evelyn D'Elia, Jean-Baptiste Mouret, Serena Ivaldi.

Collaboration with Jens Kober (TU Delft, Netherlands)

Hand-tuning whole-body controllers (like those used in our work with the Talos and iCub humanoid robots) is a time-consuming approach that yields a single controller which cannot generalize well to varied tasks. We extended our previous work on using black-box optimization to search for optimal weights and gains, so as to search for the Pareto-optimal set of trade-offs 55. The learned Pareto front is then used in a Bayesian optimization (BO) algorithm both as a search space and as a source of prior information in the initial mean estimate, a technique inspired by our previous work on damage recovery 48.

This combined learning approach, leveraging the two optimization methods together, finds a suitable parameter set for a new trajectory within 20 trials and outperforms both BO in the continuous parameter search space and random search along the precomputed Pareto front.

Publication: 15

Reinforcement learning for autonomous vehicles

Participants: Julien Uzzan, François Aioun (PSA), Thomas Hannagan (PSA), François Charpillet.

This work is funded by Stellantis through a Cifre PhD grant and pursues the objective of studying how reinforcement learning techniques can help design autonomous cars. Two application domains have been chosen: longitudinal control and "merge and split". The PhD was defended on 22 November 2022.

Publication: PhD thesis + patent FR3114896.

8.1.2 Multi-robot and swarm

Exploration of unknown environments with a fleet of autonomous robots

Participants: Nicolas Gauville, Dominique Malthese (Safran), Christophe Guettier (Safran), François Charpillet.

Publication: 5

8.1.3 Multirobot and swarm

Specialization and diversity in heterogeneous robot swarm

Participants: Amine Boumaza.

8.2 Natural interaction with robotics systems

8.2.1 Teleoperation and Human-Robot collaboration

Simultaneous Action Recognition and Human Whole-Body Motion and Dynamics Prediction from Wearable Sensors

Participants: Serena Ivaldi.

This is a work in collaboration with the AMI team of the Italian Institute of Technology. It was started during the European project AnDy, to develop a predictor of human or robot future motion given wearable sensor information. Its purpose is to be used to teleoperate robots in the presence of delays, but the technique can also be used to anticipate future non-ergonomic postures of humans monitored by wearable sensors.

Video: video of the paper

Publication: 16

Adaptive control of collaborative robots for preventing musculoskeletal disorders

Participants: Aya Yaacoub, Francis Colas, Vincent Thomas, Pauline Maurice.

This work is part of Pauline Maurice's ANR JCJC ROOIBOS project.

The use of collaborative robots in direct physical collaboration with humans constitutes a possible answer to musculoskeletal disorders: not only can they relieve the worker of heavy loads, but they could also guide them towards more ergonomic postures. In this context, one objective of the ROOIBOS project is to build adaptive robot strategies that are optimal regarding productivity but also the long-term health and comfort of the human worker, by adapting the robot behavior to the human's physiological state.

To do so, we are developing tools to compute a robot policy taking into account the long-term consequences of the biomechanical demands on the human worker's joints (joint loading) and to distribute the efforts among the different joints during the execution of a repetitive task. The proposed platform merges within the same framework several works conducted in the LARSEN team, namely virtual human modeling and simulation, fatigue estimate and decision making in the face of uncertainties.

This year, we have focused on developing a first version of the platform. In order to address the continuous state space, we used POMCP 59, an online probabilistic planning algorithm based on Monte Carlo Tree Search, to compute the collaborative robot strategy. First results in a simple context are being investigated to validate our approach. In the future, we plan to refine this simple context to investigate several scientific questions, such as fitting human reactions to experimental data, or considering more complex reward functions to represent compromises between the reduction of the worker's physical demand, the efficiency of the task execution, and the required cognitive load.
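
As a much-simplified stand-in for POMCP, the sketch below shows a flat Monte Carlo planner that scores each robot action by averaging rollout returns of a generative model from a particle belief; POMCP additionally grows a search tree with UCB action selection. The belief, dynamics, and reward used here are made-up placeholders, not the actual ROOIBOS model.

```python
# Flat Monte Carlo planning over a particle belief (simplified POMCP stand-in).
import numpy as np

rng = np.random.default_rng(6)
ACTIONS = [0.0, 0.2, 0.5]                     # hypothetical assistance levels

def generative_model(fatigue, action):
    """Made-up model: assistance lowers fatigue growth but costs productivity."""
    next_fatigue = fatigue + rng.normal(0.1 * (1.0 - action), 0.02)
    reward = 1.0 - next_fatigue - 0.3 * action
    return next_fatigue, reward

def rollout(fatigue, depth=10, gamma=0.95):
    ret = 0.0
    for d in range(depth):
        a = rng.choice(ACTIONS)               # random rollout policy
        fatigue, r = generative_model(fatigue, a)
        ret += (gamma ** d) * r
    return ret

def plan(belief_particles, n_sims=200):
    values = []
    for a in ACTIONS:
        total = 0.0
        for _ in range(n_sims):
            f = rng.choice(belief_particles)  # sample a state from the belief
            f, r = generative_model(f, a)
            total += r + 0.95 * rollout(f)
        values.append(total / n_sims)
    return ACTIONS[int(np.argmax(values))], values

belief = rng.normal(0.3, 0.05, 500)           # particle belief over fatigue
best_action, values = plan(belief)
print("estimated action values:", np.round(values, 3), "-> chosen:", best_action)
```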

The followed approach and preliminary results were presented during the JJCR (Journée des Jeunes Chercheurs en Robotique).

Publication: 42

Task-planning for human robot collaboration

Participants: Yang You, François Charpillet, Francis Colas, Olivier Buffet, Vincent Thomas.

Collaboration with Rachid Alami (CNRS Senior Scientist, LAAS Toulouse).

This work is part of the ANR project Flying Co-Worker (FCW) and focuses on high-level decision making for collaborative robotics. When a robot has to assist a human worker, it has no direct access to the worker's current intention or preferences, but has to adapt its behavior to help the human complete their task.

We previously looked at this collaboration problem as a multi-agent problem and formalized it in the decentralized-POMDP framework, where a common reward is shared among the agents. However, the cost of solving this multi-agent problem is prohibitive and, even if the optimal joint policy could be built, it may be too complex to be realistically executed by a human worker.

To address the collaboration issue, we thus proposed this year to consider a single-agent problem by taking the robot's perspective and assuming the human is part of the environment and follows a known policy. In this context, building the robot's behavior consists in computing its best response given the human policy, and can be formalized as a partially observable Markov decision process (POMDP). This makes the problem computationally simpler, but the difficulty lies in how to choose a relevant human policy for which the robot's best response could be built.

To synthesize the various behaviors the human may actually adopt, we assume that

  • he may follow one of several possible objectives;
  • he acts as if controlling the robot, so as to ensure that he accounts for the robot's ability to help him; and
  • he picks actions using a "soft-max", i.e., sampling more valuable actions with higher probability, which allows accounting not only for the multiplicity of optimal actions, but also for the possibly sub-optimal action choices.

Once such a human behavior has been generated (as a finite state controller (FSC)), the robot then has to solve a POMDP to find a "best response" behavior.
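
As a toy illustration of the soft-max assumption above, the sketch below samples human actions with probability proportional to exp(Q / tau), so that optimal actions are the most likely while sub-optimal choices remain possible. The action values and temperature are made-up numbers, not those of the actual FCW domain.

```python
# Boltzmann ("soft-max") action selection for a simulated human.
import numpy as np

rng = np.random.default_rng(5)

def softmax_policy(q_values, tau=0.5):
    """Distribution over actions for one state and one objective."""
    z = np.exp((q_values - q_values.max()) / tau)   # shift for numerical stability
    return z / z.sum()

q = np.array([1.0, 0.8, 0.1])        # e.g., "fetch part", "wait", "move away"
pi = softmax_policy(q)
print("action probabilities:", np.round(pi, 3))
action = rng.choice(len(q), p=pi)    # one sampled (possibly sub-optimal) action
print("sampled action:", action)
```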

Experiments on a toy problem allowed us to demonstrate that our approach is robust not only to a variety of synthetic human behaviors, but also to actual humans who interacted with our "robot" through a terminal game.

Publications: 29.

8.2.2 Human understanding

PAAVUP - models of visual attention

Participants: Olivier Buffet, Vincent Thomas.

Understanding human decisions and resulting behaviors is of paramount importance to design interactive and collaborative systems. This project focuses on visual attention, a central process of human-system interaction. The aim of the PAAVUP project is to develop and test a predictive model of visual attention that would be adapted to task-solving situations, and thus would depend on the task to solve and the knowledge of the participant, and not only on the visual characteristics of the environment as usually done.

This year we have defined an experiment, a simple task on a computer, wherein the human has to quickly locate a target image among other images on the screen. Before each such task, a hint is presented which, depending on the current scenario, may indicate with high (or low) probability whether or not the target will be in a given direction. Experimental data have been gathered with a cohort of participants.

We also took advantage of that project to organize a work seminar entitled Journée sur l'attention visuelle with other researchers working on similar subjects.

A next step will be to take inspiration from Wickens et al.'s theoretical model based on the expectancy and value of information 64 so as to propose biologically plausible computational models of visual attention (based on the POMDP formalism for sequential decision-making under partial observability) and compare their outcome on the same task with this data.

Sensor fusion for human pose estimation

Participants: Nima Mehdi, Francis Colas, Serena Ivaldi, Vincent Thomas.

This work is part of the ANR project Flying Co-Worker (FCW) and focuses on the perception of a human collaborator by a mobile robot for physical interaction. Physical interaction, for instance object hand-over, requires the precise estimation of the pose of the human, and in particular of their hands, with respect to the robot.

On the one hand, the human worker can wear a sensor suit with inertial sensors able to reconstruct the pose with good precision. However, such sensors cannot observe the absolute position and bearing of the human. All positions and orientations are therefore estimated by the suit with respect to an initial frame and this estimation is subject to drift (at a rate of a few cm per minute). On the other hand, the mobile robot can be equipped with cameras for which human pose estimation solutions are available such as OpenPose. The estimation is less precise but the error on the relative positioning is bounded.

This year we proposed a two-step particle filter to simultaneously estimate the pose and the posture of a human worker by leveraging the advantages of both sensors. This approach is based on four main ideas: (1) decompose this filter into two coupled filters: one dedicated to the posture estimation using information from the wearable sensor and the other to the pose estimation using the camera information; (2) use the result of the posture estimation as the proposal of the pose estimation filter; (3) for the posture estimation, separately process joint groups such as the arms, the legs or the trunk to drastically reduce the number of required particles; (4) and, for the pose estimation, estimate the drift of the wearable sensor rather than absolute pose to reduce uncertainties.
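
The sketch below illustrates idea (4) in a minimal form: a small particle filter that tracks the planar drift of the suit frame (translation and heading) rather than the absolute pose, using camera detections of a body keypoint as observations. The suit and camera data, noise levels, and frames are synthetic placeholders, not the actual two-step filter.

```python
# Minimal particle filter over the drift of a wearable-suit frame.
import numpy as np

rng = np.random.default_rng(7)
N = 400
# Each particle is a drift hypothesis: (dx, dy, dtheta) of the suit frame.
drift = np.zeros((N, 3))
w = np.full(N, 1.0 / N)

def apply_drift(point_suit, d):
    """Transform a point from the drifted suit frame into the camera frame."""
    c, s = np.cos(d[2]), np.sin(d[2])
    R = np.array([[c, -s], [s, c]])
    return R @ point_suit + d[:2]

def update(drift, w, point_suit, point_camera, sigma=0.05):
    # Slow random walk: the drift grows by a few centimetres per minute.
    drift = drift + rng.normal(0.0, [0.002, 0.002, 0.001], drift.shape)
    pred = np.array([apply_drift(point_suit, d) for d in drift])
    err2 = np.sum((pred - point_camera) ** 2, axis=1)
    w = w * np.exp(-0.5 * err2 / sigma ** 2)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    return drift[idx], np.full(N, 1.0 / N)

# One fused update: the suit says the hand is at (0.5, 0.2) in its own frame,
# while the camera sees it at (0.55, 0.18) in the robot frame.
drift, w = update(drift, w, np.array([0.5, 0.2]), np.array([0.55, 0.18]))
print("estimated drift:", np.round(np.average(drift, axis=0, weights=w), 3))
```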

Currently, we are investigating the use of Hidden Semi-Markov Models (HSMMs) to take into account activity durations for activity recognition and prediction. This work considers different categories of HSMMs (explicit-duration HMMs and variable-transition HMMs) and uses the AnDy dataset.

The aim would be to combine those two approaches to track activities of a human worker by using different sensors and probabilistic models of activity durations.

Publications: 41, 23

Human motion decomposition

Participants: Jessica Colombel, David Daney (Auctus Bordeaux), François Charpillet.

Inverse Optimal Control (IOC) is a popular method for human motion analysis. In the context of these methods, it is necessary to pay attention to the reliability of the results. This work proposes an approach based on the evaluation of the Karush-Kuhn-Tucker conditions, relying on a complete analysis with Singular Value Decomposition, and provides a detailed analysis of reliability. With respect to a ground truth, our simulations illustrate how the proposed method analyzes the reliability of the resolution. After introducing a clear methodology, the properties of the matrices are studied with different noise levels and different experimental models and conditions. We show how to implement the method, step by step, by explaining the numerical difficulties encountered during the resolution and thus how to make the results of the IOC problem reliable.
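
The sketch below gives a toy version of this reliability analysis: under the stationarity (KKT) conditions, the unknown cost weights lie approximately in the null space of a matrix built from the cost-feature gradients along the observed motion, and the singular-value spectrum of that matrix indicates how well-determined the recovered weights are. The features and "observed" motion are synthetic, not the human data used in 14.

```python
# Toy IOC weight recovery and reliability check via SVD of the KKT matrix.
import numpy as np

rng = np.random.default_rng(8)
true_w = np.array([0.7, 0.3])          # ground-truth weights of two cost features

# Gradients of each candidate cost feature at many points of the observed
# trajectory (constructed so that the weighted sum is approximately zero).
g1 = rng.normal(0.0, 1.0, (100, 3))
g2 = -(true_w[0] / true_w[1]) * g1 + rng.normal(0.0, 0.02, (100, 3))  # + noise

# Stack the stationarity residuals: each row of A multiplies the weights so
# that A @ w should vanish at a true optimum.
A = np.stack([g1.reshape(-1), g2.reshape(-1)], axis=1)   # shape (300, 2)

U, S, Vt = np.linalg.svd(A, full_matrices=False)
w_hat = Vt[-1]                          # right singular vector of smallest sigma
w_hat = w_hat / w_hat.sum()             # normalize (weights defined up to scale)

print("recovered weights:", np.round(w_hat, 3), " true:", true_w)
print("singular values:", np.round(S, 2),
      " (a clear gap indicates a reliable, well-conditioned recovery)")
```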

Related publication: 14

Pedestrian behavior prediction

Participants: Lina Achaji, François Aioun (Stellantis), Julien Moreau (Stellantis), François Charpillet.

This work is done as part of the Ph.D. thesis of Lina Achaji (started on the 1st of March 2020) in the context of the OpenLab collaboration between Inria Nancy and Stellantis. The PhD is related to the development of autonomous vehicles in urban areas. It addresses essential safety concerns for vulnerable road users (VRUs) such as pedestrians.

This year, we developed a framework based on multiple variations of Transformer models to reason attentively about the dynamic evolution of a pedestrian's past trajectory and predict their future trajectories.

Publications: 11, 12.

Social robots and loneliness: Acceptability and trust

Participants: Eloïse Zehnder, Jérôme Dinet (2LPN), François Charpillet.

This PhD work is done between the Larsen team and 2LPN, the psychology laboratory of the University of Lorraine. The main objective of the PhD program is to study how social robots or avatars can fight loneliness and how this relates to acceptability and trust.

Related publications: 20, 26, 25.

8.2.3 Exoskeleton and ergonomics

Simulating operators' morphological variability for ergonomics assessment

Participants: Jacques Zhong, Pauline Maurice, Francis Colas.

In collaboration with Vincent Weisstroffer and Claude Andriot from CEA-LIST (PhD of Jacques Zhong funded by CEA).

Digital human models (DHMs) are a powerful tool to assess the ergonomics of a workstation during the design process and to easily modify/optimize the workstation, without the need for a physical mock-up or lengthy human-subject measurements. However, the morphological variability of the workers is rarely taken into account. Generally, only height and volume are considered, for reachability and space questions. But morphology has other effects, such as changing the way in which a person can perform the task, changing the effort distribution in the body, etc.

In this work, we aim to leverage dynamic simulation with a digital human model (animated using a quadratic programming controller) to simulate virtual assembly tasks for workers of any morphology. The key challenge is to transfer the task execution from one morphology to another.

In a first step, we developed a digital human simulation based on a QP controller, in which a user can manipulate the posture of the digital human using virtual reality. Hence the user can easily puppeteer the human model (e.g., by moving its hands or feet), which then changes its posture while still being dynamically consistent. The user can then evaluate the ergonomics and the efforts of the task, through indicators computed in simulation 43.

In our ongoing work, we use multi-task quality-diversity to optimize an ergonomics map of a DHM across various morphologies in a specific environment to perform a given task. This allows us to assess the suitability of that workstation for extreme body shapes.

Influence of a passive back support exoskeleton on simulated patient bed bathing: Results of an exploratory study

Participants: Felix Cuny-Enault, Pauline Maurice, Serena Ivaldi.

This paper is a result of the collaboration with the CHRU Nancy, in particular with the Department of Vascular Surgery. The long-term aim is to find an exoskeleton suitable to assist the nurses and physicians in their daily activities.

Low-back pain is a major concern among healthcare workers. One cause is the frequent adoption of repetitive forward-bent postures in their daily activities. Occupational exoskeletons have the potential to assist workers in such situations. However, their efficacy is largely task-dependent, and their biomechanical benefit in the healthcare sector has rarely been evaluated. The present study investigates the effects of a passive back-support exoskeleton (Laevo v2.5) in a simulated patient bed bathing task. The task was simulated in the "smart apartment" of the Creativ'lab, on an experimental setup in which a patient simulator (medical manikin) lies on a hospital bed. Nine participants performed the task on the medical manikin, with and without the exoskeleton. Results show that working with the exoskeleton induced a significantly larger trunk forward flexion, by 13 degrees on average. Due to this postural change, using the exoskeleton did not substantially affect the muscular and cardiovascular demands, nor the perceived effort. These results illustrate that postural changes induced by exoskeleton use, whether voluntary or not, should be considered carefully since they may cancel out the biomechanical benefits expected from the assistance.

Publication: 7

Towards ergonomics optimization of human motion

Participants: Lorenzo Vianello, Waldez Gomes, Pauline Maurice, Serena Ivaldi.

Work-related musculoskeletal disorders are a major health issue often caused by awkward postures. Identifying and recommending more ergonomic body postures requires optimizing the worker’s motion with respect to ergonomics criteria based on the human kinematic/kinetic state.

This year we proposed tools to optimize a human whole-body motion to improve its ergonomics.

The first tool consists in a multi-objective optimization method. Many ergonomics scores assess different risks at different places of the human body; therefore, optimizing for only one score might lead to postures that are either inefficient or that transfer the risk to a different location. We verified, in two work activities, that optimizing for a single ergonomics score may lead to motions that degrade scores other than the optimized one. To address this problem, we propose a multi-objective optimization approach that can find better Pareto-optimal trade-off motions that simultaneously optimize multiple scores. Our simulation-based approach is also user-specific and can be used to recommend ergonomic postures to workers with different body morphologies. Additionally, it can be used to generate ergonomic reference trajectories for robot controllers in human-robot collaboration.

The second tool consists in visual feedback about ergonomics. We consider real-time ergonomic feedback a key technology to improve workstations and workers' gestures to reduce musculoskeletal disorders in the long term. To this end, we present supportive tools for the online evaluation and visualization of strenuous efforts and postures of a worker, including when they are physically interacting with a robot. A digital human model is used to estimate human kinematics and dynamics and to visualize non-ergonomic joint angles, based on online data acquired from a wearable motion-tracking device. We introduced a novel and potentially easy-to-use visualization tool for ergonomics assessment: Latent Ergonomics Maps (LEM). To construct the LEM, we project a given ergonomics score onto a 2D latent space that represents the human posture.
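
The sketch below illustrates the idea behind such a map: recorded postures (joint-angle vectors) are projected onto a 2D latent space and each point is colored by an ergonomics score, so that risky regions of the posture space become visible at a glance. The postures and the score are synthetic placeholders, and PCA stands in for whatever latent embedding is actually used.

```python
# Toy latent map of postures colored by an ergonomics score.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)

postures = rng.normal(0.0, 0.4, (500, 12))          # 12 joint angles (radians)
# Hypothetical score: penalize trunk flexion (joint 0) and arm elevation (joint 5).
score = np.abs(postures[:, 0]) * 2.0 + np.abs(postures[:, 5])

latent = PCA(n_components=2).fit_transform(postures)

plt.scatter(latent[:, 0], latent[:, 1], c=score, cmap="RdYlGn_r", s=10)
plt.colorbar(label="ergonomics score (higher = worse)")
plt.xlabel("latent dimension 1")
plt.ylabel("latent dimension 2")
plt.title("Latent map of postures colored by ergonomics score")
plt.show()
```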

Publications: 6, 9

9 Bilateral contracts and grants with industry

9.1 Bilateral grants with industry

Two PhD grants (Cifre) with Stellantis

Participants: François Charpillet, Julien Uzzan, Lina Achaji.

Stellantis and Inria announced on July 5th, 2018 the creation of an OpenLab dedicated to artificial intelligence. The studied areas include autonomous and intelligent vehicles, mobility services, manufacturing, design development tools, design itself, and digital marketing, as well as quality and finance. Two PhD programs have been launched with the Larsen team in this context: one with Lina Achaji about pedestrian trajectory prediction, and one with Julien Uzzan about reinforcement learning. Julien defended his PhD this year, on 22 November.

PhD grant with SAFRAN

Participants: François Charpillet, Nicolas Gauville, Christophe Guettier (Safran).

PhD work co-advised with CEA-LIST

Participants: Jacques Zhong, Francis Colas, Pauline Maurice.

Collaboration with Vincent Weistroffer (CEA-LIST) and Claude Andriot (CEA-LIST)

This PhD work started in October, 2020. The objective is to develop a software tool that allows taking into account the diversity of workers' morphology when designing an industrial workstation. The developed tool will enable us to test the feasibility and ergonomics of a task for any morphology of workers, based on a few demonstrations of the task in virtual reality by one single worker. The two underlying scientific questions are i) the automatic identification of the task features from a few VR demonstrations, and ii) the transfer of the identified task to digital human models of various morphologies.

10 Partnerships and cooperations

10.1 European initiatives

10.1.1 Horizon Europe

euROBIN

euROBIN project on cordis.europa.eu

  • Title:
    European ROBotics and AI Network
  • Duration:
    From July 1, 2022 to June 30, 2026
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • C.R.E.A.T.E. CONSORZIO DI RICERCA PER L'ENERGIA L AUTOMAZIONE E LE TECNOLOGIE DELL'ELETTROMAGNETISMO (C.R.E.A.T.E.), Italy
    • PAL ROBOTICS SL (PAL ROBOTICS), Spain
    • KUNGLIGA TEKNISKA HOEGSKOLAN (KTH), Sweden
    • INSTITUT JOZEF STEFAN (JSI), Slovenia
    • FRAUNHOFER GESELLSCHAFT ZUR FORDERUNG DER ANGEWANDTEN FORSCHUNG EV (FHG), Germany
    • FUNDACION TECNALIA RESEARCH & INNOVATION (TECNALIA), Spain
    • TECHNISCHE UNIVERSITAET MUENCHEN (TUM), Germany
    • DHL EXPRESS SPAIN SL, Spain
    • COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES (CEA), France
    • INTERUNIVERSITAIR MICRO-ELECTRONICA CENTRUM (IMEC), Belgium
    • TEKNOLOGISK INSTITUT (DANISH TECHNOLOGICAL INSTITUTE), Denmark
    • UNIVERSITEIT TWENTE (UNIVERSITEIT TWENTE), Netherlands
    • ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL), Switzerland
    • MATADOR INDUSTRIES AS (MATADOR Industries), Slovakia
    • ASTI MOBILE ROBOTICS SA (ASTI), Spain
    • DEUTSCHES ZENTRUM FUR LUFT - UND RAUMFAHRT EV (DLR), Germany
    • ASSOCIACAO DO INSTITUTO SUPERIOR TECNICO PARA A INVESTIGACAO E DESENVOLVIMENTO (IST ID), Portugal
    • UNIVERSITA DI PISA (UNIPI), Italy
    • FUNDINGBOX ACCELERATOR SP ZOO (FBA), Poland
    • UNIVERSITAET BREMEN (UBREMEN), Germany
    • FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA (IIT), Italy
    • KARLSRUHER INSTITUT FUER TECHNOLOGIE (KIT), Germany
    • EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH (ETH Zürich), Switzerland
    • CESKE VYSOKE UCENI TECHNICKE V PRAZE (CVUT), Czechia
    • OREBRO UNIVERSITY (ORU), Sweden
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
    • VOLKSWAGEN AKTIENGESELLSCHAFT (VW AG), Germany
    • SIEMENS AKTIENGESELLSCHAFT, Germany
    • SORBONNE UNIVERSITE, France
    • UNIVERSIDAD DE SEVILLA, Spain
  • Inria contact:
    Serena Ivaldi
  • Coordinator:
    Alin Albu-Schäffer (DLR)
  • Summary:

    As robots are entering unstructured environments with a large variety of tasks, they will need to quickly acquire new abilities to solve them. Humans do so very effectively through a variety of methods of knowledge transfer – demonstration, verbal explanation, writing, the Internet. In robotics, enabling the transfer of skills and software between robots, tasks, research groups, and application domains will be a game changer for scaling up robot abilities.

    euROBIN therefore proposes a threefold strategy. First, leading experts from the European robotics and AI research community will tackle the questions of transferability in four main scientific areas: 1) boosting physical interaction capabilities, to increase safety and reliability as well as energy efficiency; 2) using machine learning to acquire new behaviors and knowledge about the environment and the robot, and to adapt to novel situations; 3) enabling robots to represent, exchange, query, and reason about abstract knowledge; 4) ensuring a human-centric design paradigm that takes the needs and expectations of humans into account, making AI-enabled robots accessible, usable and trustworthy.

    Second, the relevance of the scientific outcomes will be demonstrated in three application domains that promise to have a substantial impact on industry, innovation, and civil society in Europe: 1) robotic manufacturing for a circular economy; 2) personal robots for enhanced quality of life; 3) outdoor robots for sustainable communities. Advances are made measurable by collaborative competitions.

    Finally, euROBIN will create a sustainable network of excellence to foster exchange and inclusion. Software, data and knowledge will be exchanged over the EuroCore repository, designed to become a central platform for robotics in Europe.

    The vision of euROBIN is a European ecosystem of robots that share their data and knowledge and exploit their diversity to jointly learn to perform the endless variety of tasks in human environments.

10.2 National initiatives

10.2.1 ANR : The Flying Co-Worker

Participants: François Charpillet, Olivier Buffet, Francis Colas, Serena Ivaldi, Vincent Thomas.

  • Program:
    ANR
  • Project title:
    Flying Co-Worker
  • Duration:
    October 2019 – October 2023
  • Coordinator:
    Daniel Sidobre (LAAS-CNRS, Toulouse)
  • Local coordinator:
    François Charpillet
  • Abstract:
    Bringing together recent progress in physical and decisional interaction between humans and robots with the control of aerial manipulators, this project addresses the flying co-worker: an aerial manipulator robot that acts as a teammate of a human worker to transport a long bar or to carry out complex tasks. Safety and human-aware robot abilities are at the core of the proposed research, to progressively build robots capable of cooperative handling and of assisting a worker, notably by delivering objects directly, in a safe, efficient, pertinent and acceptable manner. The methodologies developed for ground manipulators cannot be directly used for aerial manipulator systems because of the floating base, the limited payload, and strong actuation and energy constraints. From the perception and interpretation of the human activity, the objective of the project is to build an aerial manipulator capable of planning and controlling human-aware motions to achieve collaborative tasks.

10.2.2 ANR : PLASMA

Participant: Olivier Buffet.

  • Program:
    ANR
  • Project acronym:
    PLASMA
  • Project title:
    Planification et Apprentissage pour Agir dans des Systèmes Multi-Agents
  • Duration:
    February 2020 – October 2023
  • Coordinator:
    Jilles Dibangoye (INSA-Lyon)
  • Local coordinator:
    Olivier Buffet
  • Abstract:
    The main research goal is to develop a general theory and algorithms with provable guarantees to treat planning and (deep) Reinforcement Learning problems arising from the study of multi-agent sequential decision-making, which may be described as Partially Observable Stochastic Games (POSGs).
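For reference, a POSG is commonly formalized as the tuple below (standard textbook notation, not a formulation taken from the project's own documents):

```latex
% Partially observable stochastic game (POSG), standard notation.
\[
\mathcal{G} = \big\langle I,\; S,\; b^0,\; \{A_i\}_{i \in I},\; \{\Omega_i\}_{i \in I},\; P,\; O,\; \{R_i\}_{i \in I} \big\rangle
\]
```

where $I$ is the set of agents, $S$ the set of states, $b^0$ the initial state distribution, $A_i$ and $\Omega_i$ the action and observation sets of agent $i$, $P$ the state transition function, $O$ the joint observation function, and $R_i$ the reward function of agent $i$; Dec-POMDPs correspond to the special case where all agents share the same reward.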

10.2.3 ANR : Proxilearn

Participant: Jean-Baptiste Mouret.

  • Program:
    ANR-DGA ASTRID
  • Project acronym:
    Proxilearn
  • Project title:
    Learning for Proximity Flying
  • Duration:
    January 2020 – December 2023
  • Coordinator:
    Jean-Baptiste Mouret
  • Partner:
    • Institut des sciences du Mouvement, CNRS/Aix-Marseille Université
  • Summary:
    The Proxilearn project leverages artificial intelligence techniques to make it possible for a micro-UAV (10-20 cm / 50-80 g) to fly in very confined spaces (diameter between 40 cm and 1.5 m): air ducts, tunnels, natural caves, quarries, ... It focuses on two challenges: (1) stabilizing a UAV in spite of the turbulence created by the interaction between the rotors and the environment, and (2) autonomous flight with very little light.

10.2.4 ANR: ROOIBOS

Participants: Pauline Maurice, Francis Colas, Vincent Thomas.

  • Program:
    ANR JCJC
  • Project acronym:
    ROOIBOS
  • Project title:
    User-Specific Adaptation of Collaborative Robot Motion for Improved Ergonomics
  • Duration:
    March 2021 – February 2025
  • Coordinator:
    Pauline Maurice (CNRS)
  • Summary:
    Collaborative robots have the potential to reduce work-related musculoskeletal disorders not only by decreasing the workers' physical load, but also by modifying and improving their postures. Imposing a sudden modification of one's movement can however be detrimental to the acceptance and efficacy of the human-robot collaboration. In ROOIBOS, we will develop a framework to plan user-specific trajectories for collaborative robots, to gradually optimize the efficacy of the collaboration and the long-term occupational health of the user. We will use machine learning and probabilistic methods to perform user-specific prediction of whole-body movements. We will define dedicated metrics to evaluate the ergonomic performance and intuitiveness of movements. We will integrate those elements in a digital human simulation to plan a progressive adaptation of the robot motion accounting for the user's motor preferences. We will then use probabilistic decision-making to adapt the plan on-line to the user's motor adaptation capabilities. This will enable a smooth deployment of collaborative robots at work.

10.2.5 ANR-PIA Pherosensor: Early detection of pest insects using pheromone receptor-based olfactory sensors

Participant: Dominique Martinez.

  • Coordinator
    Philippe Lucas (Inrae Versailles)
  • Duration
    February 2021 – February 2026
  • Summary:
    Insects directly or indirectly destroy one third of the world's annual crop production. In a context of climate change and increasing world trade, early detection of native and invasive insect pests is an urgent challenge to enable optimal action before an infestation settles. Insects use species-specific pheromones to attract conspecifics of the same sex (sex pheromones, e.g. in moths) or of both sexes (aggregation pheromones, e.g. in weevils). Detecting insect pheromones is a promising alternative for insect surveillance, yet it is challenging because of the low amounts emitted. PheroSensor goes beyond current state-of-the-art solutions for artificial odor sensing by developing and evaluating innovative bio-inspired sensors to detect insect pests. We develop sensors engineered from insect pheromone receptors, coupled with artificial intelligence techniques, to detect and monitor three installed or potentially invasive insect pests in France: the fall armyworm and cotton leafworm moths, and the red palm weevil.

10.2.6 Projet CNES: PHOeBUS

Participants: Dominique Martinez, François Charpillet, Abir Bouaouda.

  • Project title:
    Flight Observation of Butterflies Under Space-like gravity
  • Duration:
    June 2021 – December 2022
  • Coordinator
    Mickaël Bourgoin (ENS Lyon)
  • Summary:
    PHOeBUS, standing for “Flight ObsErvation of Butterflies Under Space-like gravity”, is a project that aims at studying the flight adaptation of butterflies to micro- and macro-gravity during parabolic flights, compared to free flights recorded with the cable-driven robot at Loria. The goals are both biological (how does the butterfly adapt to a radical environmental change?) and aerodynamic (how do the flapping pattern and frequency evolve to adjust the lift force to new gravity conditions?). The outcomes of the project are therefore of interest to a broad community, including biology, aerodynamics and bio-inspired drone technology (the dimensionless flow numbers for the flight of the Ingenuity drone on Mars are equivalent to those of butterflies on Earth).

10.2.7 Inria-DGA Resilient Humanoid

Participant: Jean-Baptiste Mouret.

  • Program:
    Convention Inria-DGA about Artificial Intelligence
  • Project title:
    Resilient humanoid
  • Duration:
    October 2019 – October 2022
  • Coordinator:
    Jean-Baptiste Mouret
  • Summary:
    This project aims at designing adaptive controllers for humanoid robots: a tele-operated humanoid should be able to adapt to unforeseen conditions, including damage and obstacles. To do so, this project bridges the gap between task-space whole-body controllers based on quadratic programming and data-efficient reinforcement learning. [funding for 1 research engineer]

10.2.8 Inria-DGA U-Drone

Participant: Jean-Baptiste Mouret.

  • Program:
    Convention Inria-DGA about Artificial Intelligence
  • Project acronym:
    U-Drone
  • Project title:
    Underground Drone
  • Duration:
    October 2019 – October 2022
  • Coordinator:
    Jean-Baptiste Mouret
  • Summary:
    This project aims at making progress in the design of flying vehicles for highly confined spaces like pipes, caves, quarries, etc. It complements the Proxilearn ANR project. [funding for 1 research engineer]

10.2.9 DGA-RAPID ASMOA

Participants: François Charpillet, Serena Ivaldi, Pauline Maurice.

  • Program:
    DGA-RAPID
  • Project acronym:
    ASMOA
  • Project title:
    Prédiction des intentions d'un utilisateur d'un exosquelette
  • Duration:
    October 2019 – October 2022
  • Coordinator:
    Kompaï Robotics
  • Local coordinator:
    François Charpillet
  • Partners:
    • Inria
    • Safran Electronics & Defense
  • Summary:
    The ASMOA project is about intention prediction for exoskeleton control. The challenge is to detect the movement intention of a user of a load-handling exoskeleton, to optimize the exoskeleton control according to the user's intended movement, and to evaluate, through an experimental campaign, the effect of the intelligent controller on the user's movement and performance. Inria will develop the prediction and control method, based on machine learning techniques, for an upper-body active exoskeleton.

11 Dissemination

Participants: Amine Boumaza, Olivier Buffet, Francis Colas, Serena Ivaldi, Pauline Maurice, Jean-Baptiste Mouret, Alexis Scheuer, Vincent Thomas.

11.1 Promoting scientific activities

11.1.1 Scientific events: organisation

General chair, scientific chair
  • General Chair of IEEE/RAS HUMANOIDS 2024 [Serena Ivaldi]
  • Program Chair of IEEE ARSO 2023 [Serena Ivaldi]
  • Publicity Chair of IEEE/RAS HUMANOIDS 2022 [Serena Ivaldi]
  • Media Chair of IEEE ICDL 2022 [Serena Ivaldi]
Member of the organizing committees
  • Serena Ivaldi was co-organizer of the following international workshops:
    • ERF 2022: Workshop on Telerobotics
    • RSS 2022: Workshop on Scaling Robot Learning
    • ICRA 2022: Workshop on Shared Autonomy in Physical Human-Robot Interaction: Adaptability and Trust
    • ICRA 2022: 4th Workshop on Integrating Multidisciplinary Approaches to Advance Physical Human-Robot Interaction: Challenges of Interfacing Wearable Robots with the Human
    • ICRA 2022: Workshop on Scaling Robot Learning
  • Jean-Baptiste Mouret was co-organizer of the workshop “Benchmarking Quality Diversity Algorithms” at the conference GECCO 2022.

11.1.2 Scientific events: selection

Member of the conference program committees
  • Program Committee Member of the 31st International Joint Conference on Artificial Intelligence (IJCAI 2022) [Olivier Buffet, Francis Colas, Jean-Baptiste Mouret]
  • Program Committee Member of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022) [Olivier Buffet]
  • Program Committee Member of the 21st International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2022) [Olivier Buffet]
  • Program Committee Member of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022) [Olivier Buffet]
  • Program Committee Member of the Journées Francophones Planification, Décision et Apprentissage (JFPDA 2022) [Olivier Buffet, Vincent Thomas]
  • Associate Editor for the IEEE/RAS International Conference on Humanoid Robots 2022 (HUMANOIDS) [Serena Ivaldi]
  • Area Chair for the Conference on Robot Learning (CoRL) [Serena Ivaldi, Jean-Baptiste Mouret]
  • Program Committee Member of the ACM Genetic and Evolutionary Computation Conference (GECCO'22) [Amine Boumaza, Jean-Baptiste Mouret]
  • Program Committee Member of the IEEE Congress On Evolutionary Computation (CEC'22) [Amine Boumaza]
  • Program Committee Member of the Artificial Life Conference (ALIFE'22) [Amine Boumaza]
  • Program Committee Member of the Evolutionary Computation Conference (EA'22) [Amine Boumaza]
Reviewer
  • IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) [Olivier Buffet, Francis Colas]
  • IEEE International Conference on Robotics and Automation (ICRA 2023) [Francis Colas]
  • IEEE Conference on Robotics and its Social Impact (ARSO) [Francis Colas]
  • RSS Pioneers [Serena Ivaldi]

11.1.3 Journal

Editor in chief
  • Springer-Nature International Journal of Social Robotics [Serena Ivaldi] (Editor in Chief)
Member of the editorial boards / associate editor
  • ACM Transactions on Evolutionary Optimization and Learning [Jean-Baptiste Mouret] (Associate editor)
Reviewer - reviewing activities
  • Journal of Artificial Intelligence Research [Olivier Buffet]
  • IEEE Robotics and Automation Letters [Olivier Buffet, Francis Colas]
  • Autonomous Robots [Francis Colas]
  • IEEE Robotics and Automation Magazine [Serena Ivaldi]

11.1.4 Invited talks

  • Serena Ivaldi gave invited talks at the ICRA 2022 Workshop on Legged Robots; the European Robotics Forum 2022 workshop "AI & Robotics benchmarking: towards harmonized evaluation strategies"; and the Journées Scientifiques Inria on robotics.
  • Jean-Baptiste Mouret gave a keynote talk at the 16th European Workshop on Advanced Control and Diagnosis (ACD), Nancy, France; a remote invited talk for the University of Southern California (Distinguished Lectures Seminar Series of the Department of Computer Science); an invited talk at Huawei Research (Paris); a remote invited talk for Naver Labs (Grenoble); and a remote invited talk at the European Robotics Forum.

11.1.5 Leadership within the scientific community

Serena Ivaldi is:

  • Co-leader of the GT7 "Robotique Humanoide" of the GDR Robotique.
  • Vice-President of the IEEE-RAS Members Activities Board (MAB).
  • Co-chair of the Steering Committee of IEEE-RAS ICRA (International Conference on Robotics and Automation, the flagship conference of RAS).

11.1.6 Scientific expertise

  • Francis Colas was an expert for the SESAME call of the Région Île-de-France.
  • Serena Ivaldi was an expert for the selection of the QMUL UK Best PhD Thesis in Robotics.
  • Serena Ivaldi was reviewer of the first reporting period of the H2020 European Project FELICE (Grant Agreement number: 101017151 — FELICE — H2020-ICT-2018-20 / H2020-ICT-2020-2).
  • Serena Ivaldi was international judge of the ANA Avatar XPRIZE competition on robot teleoperation.
  • Jean-Baptiste Mouret was reviewer for an ERC StG proposal (remote referee).

11.1.7 Research administration

  • Francis Colas is member of the Evaluation Commission of Inria.
  • Francis Colas was member of the hiring committee for junior research scientists at Inria Sophia-Antipolis.
  • Serena Ivaldi was a member of the hiring committees for two maître de conférences positions at Université de Lorraine: 27-MCF-0177 and 61-MCF-0098.
  • Jean-Baptiste Mouret is the chair of Department 5 of the LORIA UMR (Inria/CNRS/Univ. Lorraine) [since March 2022] and a member of the scientific council of LORIA.
  • Jean-Baptiste Mouret was a member of the hiring committee for junior research scientists at Inria Paris.
  • Jean-Baptiste Mouret is the scientific leader of the Creativ'Lab platform of Loria/Inria Nancy Grand Est [since March 2022].

11.2 Teaching - Supervision - Juries

11.2.1 Teaching

  • Master: Amine Boumaza, “Méta-heuristiques et recherche locale stochastique”, 30h eq. TD, M1 informatique, Univ. Lorraine, France.
  • Master: Amine Boumaza, “Modélisation de Phénomènes Biologiques”, 12h eq. TD, M1 Sciences Cognitives, Univ. Lorraine, France.
  • Master: Francis Colas, “Planification de trajectoires”, 12h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Francis Colas, “Intégration méthodologique”, 36h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Francis Colas, “Autonomous Robotics”, 39.5h eq. TD, M1, CentraleSupélec, France.
  • Master: Serena Ivaldi, “Analyse Comportementale”, 16h CM/TD, M2 “Sciences Cognitives”, Univ. Lorraine, France.
  • Master: Pauline Maurice, “Robotique pour l’industrie du futur”, 3h CM, M2 “Control Engineering”, CentraleSupélec, France.
  • Master: Pauline Maurice, “Analyse Comportementale”, 10h CM/TP, M2 “Sciences Cognitives”, Univ. Lorraine, France.
  • Master: Jean-Baptiste Mouret, “Quality Diversity”, 3h, M2 Innovation, Mines ParisTech, France.
  • Tutorial: Jean-Baptiste Mouret, “Quality Diversity Optimization”, 3h, ACM GECCO 2020 (with A. Cully, Imperial College, and S. Doncieux, Sorbonne Université).
  • Master: Alexis Scheuer, “Introduction à la robotique autonome”, 30h eq. TD, M1 informatique, Univ. Lorraine, France.
  • Master: Alexis Scheuer, “Modélisation et commande en robotique”, 16h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Alexis Scheuer, “Éléments de robotique”, 4h eq. TD, Master MEEF 2d degré, INSPÉ, Univ. Lorraine, France.
  • Master: Vincent Thomas, “Apprentissage et raisonnement dans l'incertain”, 15h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Vincent Thomas, “Game Design”, 30h eq. TD, M1 Sciences Cognitives, Univ. Lorraine, France.
  • Master: Vincent Thomas, “Agent intelligent”, 30h eq. TD, M1 Sciences Cognitives, Univ. Lorraine, France.

11.2.2 Supervision

  • PhD: Vladislav Tempez, “Learning to fly micro-UAVs in highly confined environments”, 2022-06-27, Jean-Baptiste Mouret (advisor), Franck Ruffier (co-advisor, CNRS/Aix-Marseille Université) 35
  • PhD: Julien Uzzan, Cifre with PSA, “Navigation of an autonomous vehicle in a complex environment using unsupervised or semi-supervised learning”, 2022-11-22, François Charpillet (advisor), François Aioun (co-advisor, PSA) [REF HAL?]
  • PhD: Jessica Colombel, “Analysis of human movement for assistance”, 2022-12-05, François Charpillet (advisor), David Daney (advisor, Auctus team, Bordeaux)
  • PhD: Luigi Penco, “Intelligent whole-body tele-operation of humanoid robots”, 2022-06-07, Jean-Baptiste Mouret (advisor), Serena Ivaldi (co-advisor) 34
  • PhD: Nicolas Gauville, Cifre with SAFRAN, “Coordination of autonomous robots evolving in unstructured and unknown environment for Search and Rescue”, 2022-11-18, François Charpillet (advisor), Christophe Guettier (co-advisor, Safran).
  • PhD: Lorenzo Vianello, “Adaptation in human-robot collaboration”, 2022-11-12, Alexis Aubry (advisor, CRAN, Univ. Lorraine), Serena Ivaldi (co-advisor) [REF HAL?]
  • PhD in progress: Yoann Fleytoux, “Human-guided manipulation learning of irregular objects”, started in April 2019, Jean-Baptiste Mouret (advisor), Serena Ivaldi (co-advisor)
  • PhD in progress: Yang You, “Modèles probabilistes pour la collaboration humain-robot”, started in October 2019, Olivier Buffet (advisor), Vincent Thomas (co-advisor)
  • PhD in progress: Lina Achaji, Cifre with PSA, “Modélisation de systèmes dynamiques par des réseaux de neurones à mémoire courte et longue : application pour la prédiction de l'état d'environnement routier”, started in March 2020, François Charpillet (advisor), François Aioun (co-advisor, PSA).
  • PhD in progress: Nima Mehdi, “Perception et interprétation de l’activité humaine”, started in May 2020, Francis Colas (advisor).
  • PhD in progress: Timothée Anne, “Meta-learning for adaptive whole-body control”, started in September 2020, Jean-Baptiste Mouret (advisor).
  • PhD in progress: Jacques Zhong, “Prise en compte de la variabilité de morphologie de l’opérateur dans des tâches de montage simulées en réalité virtuelle”, started in October 2020, Francis Colas (advisor), Pauline Maurice (co-advisor), Vincent Weistroffer (co-advisor, CEA-LIST).
  • PhD in progress: Abir Bouaouda, “Apprentissage automatique pour le contrôle des systèmes complexes. Application aux robots à câbles”, started in October 2020, Mohamed Boutayeb (advisor, CRAN), Dominique Martinez (co-advisor)
  • PhD in progress: Raphaël Bousigues, “Collaborative robots as a tool for optimizing skill acquisition through the appropriate use of human motor variability”, started in December 2020, Pauline Maurice (co-advisor), Vincent Padois (advisor, INRIA Bordeaux), David Daney (co-advisor, INRIA Bordeaux).
  • PhD in progress: Aya Yaacoub, “Planification individu-spécifique du comportement d’un robot collaboratif pour la prévention des troubles musculo-squelettiques”, started in December 2021, Francis Colas (advisor), Pauline Maurice (co-advisor).
  • PhD in progress: Alexandre Oliveira Souza, “Intelligence Artificielle et contrôle de systèmes interactifs : application aux exosquelettes”, started in October 2022, François Charpillet (advisor), Pauline Maurice (co-advisor).
  • PhD in progress: Salomé Lepers, "Explicabilité et interprétabilité en planification probabiliste", started in October 2022, Olivier Buffet (advisor), Vincent Thomas (co-advisor).

11.2.3 Juries

  • Olivier Buffet was
    • reviewer for the PhD of Maxence Grand (Université Grenoble Alpes)
  • Francis Colas was
    • invited member of the PhD jury of Guilherme Alves (Université de Lorraine)
  • Serena Ivaldi was
    • reviewer of the PhD of Jason Chemin (Université de Toulouse / LAAS)
    • examiner of the PhD of Noelie Ramuzat (Université de Toulouse / INSA Lyon / LAAS)
    • examiner of the PhD of Sebastian Walkotter (University of Uppsala, Sweden)
    • examiner of the PhD of Vincent Fortineau (Université Paris-Saclay)
  • Jean-Baptiste Mouret was:
    • reviewer of the HDR of Clément Moulin-Frier (Université de Bordeaux / Inria Bordeaux)
    • reviewer of the PhD of Alexis Duburcq (Université Paris Sciences et Lettres / Dauphine)
    • reviewer of the PhD of Philipp Kratzer (University of Stuttgart)
    • reviewer of the PhD of Alexandre Letalenet (Sorbonne Université)
    • examiner and president of the jury for the PhD of Thibault Tricard (Université de Lorraine)

11.3 Popularization

11.3.1 Internal or external Inria responsibilities

  • Amine Boumaza was
    • a member of the editorial board of Interstices

11.3.2 Articles and contents

  • Francis Colas was interviewed for a column in Science et Vie
  • Serena Ivaldi was featured in a video report by Ludovic B: "Une journée avec #39 : une chercheuse en robotique" (YouTube video, 39k views)
  • Serena Ivaldi was interviewed by L'Usine Nouvelle, Industrie & Technologie, Mesures

11.3.3 Education

  • Vincent Thomas
    • proposed two workshops for future teachers during the Journée SNT-NSI 2022: “physics engine and game design” and “path planning in mazes”

12 Scientific production

12.1 Major publications

  • 1 (article) A. Cully, J. Clune, D. Tarapore and J.-B. Mouret. Robots that can adapt like animals. Nature, 521(7553), May 2015, pp. 503-507. [HAL, DOI]
  • 2 (article) J. S. Dibangoye, C. Amato, O. Buffet and F. Charpillet. Optimally Solving Dec-POMDPs as Continuous-State MDPs. Journal of Artificial Intelligence Research, 55, February 2016, pp. 443-497. [HAL, DOI]

12.2 Publications of the year

International journals

International peer-reviewed conferences

  • 11 (in proceedings) L. Achaji, T. Barry, T. Fouqueray, J. Moreau, F. Aioun and F. Charpillet. PreTR: Spatio-Temporal Non-Autoregressive Trajectory Prediction Transformer. 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, IEEE, 2022, pp. 2457-2464. [HAL, DOI]
  • 12 (in proceedings) L. Achaji, J. Moreau, T. Fouqueray, F. Aioun and F. Charpillet. Is attention to bounding boxes all you need for pedestrian action prediction? 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, IEEE, 2022, pp. 895-902. [HAL, DOI]
  • 13 (in proceedings) A. Boumaza. Seeking Specialization Through Novelty in Distributed Online Collective Robotics. EvoApplications 2022 - 25th International Conference on the Applications of Evolutionary Computation, Lecture Notes in Computer Science, vol. 13224, Madrid, Spain, April 2022, pp. 635-650. [HAL, DOI]
  • 14 (in proceedings) J. Colombel, D. Daney and F. Charpillet. On the Reliability of Inverse Optimal Control. ICRA 2022 - IEEE International Conference on Robotics and Automation, Philadelphia, United States, May 2022, pp. 8504-8510. [HAL]
  • 15 (in proceedings) E. D'Elia, J.-B. Mouret, J. Kober and S. Ivaldi. Automatic Tuning and Selection of Whole-Body Controllers. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, October 2022. [HAL]
  • 16 (in proceedings) K. Darvish, S. Ivaldi and D. Pucci. Simultaneous Action Recognition and Human Whole-Body Motion and Dynamics Prediction from Wearable Sensors. 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), Ginowan, Japan, IEEE, November 2022, pp. 488-495. [HAL, DOI]
  • 17 (in proceedings) J. Dinet. "Would you be friends with a robot?": The impact of perceived autonomy and perceived risk. AHFE 2022 - 13th International Conference on Applied Human Factors and Ergonomics, New York, United States, July 2022. [HAL]
  • 18 (in proceedings) J. Dinet and R. Nouchi. Promoting Physical Activity for Elderly People with Immersive Virtual Reality (IVR). AHFE 2022 - 13th International Conference on Applied Human Factors and Ergonomics, New York, United States, July 2022. [HAL]
  • 19 (in proceedings) Y. Fleytoux, A. Ma, S. Ivaldi and J.-B. Mouret. Data-efficient learning of object-centric grasp preferences. ICRA 2022 - IEEE International Conference on Robotics and Automation, Philadelphia, United States, May 2022. [HAL]
  • 20 (in proceedings) M. Jouaiti, E. Zehnder and F. Charpillet. The Sound of Actuators in Children with ASD, Beneficial or Disruptive? ICSR 2022 - 14th International Conference on Social Robotics, Lecture Notes in Computer Science, Florence, Italy, Springer, December 2022. [HAL]
  • 21 (in proceedings) A. Ma, Y. Fleytoux, J.-B. Mouret and S. Ivaldi. VP-GO: A 'Light' Action-Conditioned Visual Prediction Model for Grasping Objects. ICARM 2022 - IEEE International Conference on Advanced Robotics and Mechatronics, Guilin, China, July 2022. [HAL]
  • 22 (in proceedings) G. Maguire, N. Ketz, P. Pilly and J.-B. Mouret. A-EMS: An Adaptive Emergency Management System for Autonomous Agents in Unforeseen Situations. TAROS 2022 - Towards Autonomous Robotic Systems, Lecture Notes in Artificial Intelligence, vol. 13546, Abingdon, United Kingdom, Springer International Publishing, September 2022, pp. 266-281. [HAL, DOI]
  • 23 (in proceedings) N. Mehdi, V. Thomas, S. Ivaldi and F. Colas. Simultaneous Pose and Posture Estimation with a Two-stage Particle Filter for Visuo-inertial Fusion. ICARM 2022 - IEEE International Conference on Advanced Robotics and Mechatronics, Guilin, China, July 2022. [HAL]
  • 24 (in proceedings) J. Truc, P.-T. Singamaneni, D. Sidobre, S. Ivaldi and R. Alami. KHAOS: a Kinematic Human Aware Optimization-based System for Reactive Planning of Flying-Coworker. IEEE International Conference on Robotics and Automation (ICRA 2022), Philadelphia, United States, May 2022. [HAL, DOI]
  • 25 (in proceedings) E. Zehnder, J. Dinet and F. Charpillet. Perception of physical and virtual agents: exploration of factors influencing the acceptance of intrusive domestic agents. 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Napoli, Italy, IEEE, 2022, pp. 1050-1057. [HAL, DOI]
  • 26 (in proceedings) E. Zehnder, M. Jouaiti and F. Charpillet. Evaluating Robot Acceptance in Children with ASD and their Parents. 14th International Conference on Social Robotics, Lecture Notes in Computer Science, Florence, Italy, Springer, December 2022. [HAL]

National peer-reviewed Conferences

Conferences without proceedings

Doctoral dissertations and habilitation theses

  • 33 (thesis) S. Ivaldi. From humans to humanoids: learning and interaction for human-humanoid collaboration. Université de Lorraine (UL), November 2022. [HAL]
  • 34 (thesis) L. Penco. Whole-body Teleoperation of Humanoid Robots. Université de Lorraine, June 2022. [HAL]
  • 35 (thesis) V. Tempez. Learning of an optimal control law for a small quadrotor flying in a cylindric pipe. Université de Lorraine, June 2022. [HAL]

Reports & preprints

Other scientific publications

12.3 Cited publications

  • 44 (article) A. Del Prete, F. Nori, G. Metta and L. Natale. Prioritized Motion-Force Control of Constrained Fully-Actuated Robots: "Task Space Inverse Dynamics". Robotics and Autonomous Systems, 2014. URL: http://dx.doi.org/10.1016/j.robot.2014.08.016
  • 45 (in proceedings) M. Andries and F. Charpillet. Multi-robot taboo-list exploration of unknown structured environments. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, September 2015. [HAL]
  • 46 (in proceedings) M. Araya-López, O. Buffet, V. Thomas and F. Charpillet. A POMDP Extension with Belief-dependent Rewards. Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, MIT Press, December 2010. [HAL]
  • 47 (article) T. Barfoot, J. Kelly and G. Sibley. Special Issue on Long-Term Autonomy. The International Journal of Robotics Research, 32(14), 2013, pp. 1609-1610.
  • 48 (article) A. Cully, J. Clune, D. Tarapore and J.-B. Mouret. Robots that can adapt like animals. Nature, 521(7553), May 2015, pp. 503-507. [HAL, DOI]
  • 49 (in proceedings) A. Dib and F. Charpillet. Pose Estimation For A Partially Observable Human Body From RGB-D Cameras. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, September 2015, 8 p. [HAL]
  • 50 (in proceedings) A. Fansi Tchango, V. Thomas, O. Buffet, F. Flacher and A. Dutech. Simultaneous Tracking and Activity Recognition (STAR) using Advanced Agent-Based Behavioral Simulations. ECAI - Proceedings of the Twenty-first European Conference on Artificial Intelligence, Prague, Czech Republic, August 2014. [HAL]
  • 51 (in proceedings) I. Fernández Pérez, A. Boumaza and F. Charpillet. Comparison of Selection Methods in On-line Distributed Evolutionary Robotics. ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems (Artificial Life 14), New York, United States, July 2014. [HAL, DOI]
  • 52 (article) U. Frese. Interview: Is SLAM Solved? KI - Künstliche Intelligenz, 24(3), 2010, pp. 255-257. URL: http://dx.doi.org/10.1007/s13218-010-0047-x [DOI]
  • 53 (article) J. Kober, J. A. Bagnell and J. Peters. Reinforcement Learning in Robotics: A Survey. The International Journal of Robotics Research, August 2013.
  • 54 (article) S. Koos, A. Cully and J.-B. Mouret. Fast damage recovery in robotics with the t-resilience algorithm. The International Journal of Robotics Research, 32(14), 2013, pp. 1700-1723.
  • 55 (article) L. Penco, E. Mingo Hoffman, V. Modugno, W. Gomes, J.-B. Mouret and S. Ivaldi. Learning robust task priorities and gains for control of redundant robots. IEEE Robotics and Automation Letters, 5(2), 2020, pp. 2626-2633.
  • 56 (in proceedings) F. Pomerleau, P. Krüsi, F. Colas, P. Furgale and R. Siegwart. Long-term 3D map maintenance in dynamic environments. 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, pp. 3712-3719.
  • 57 (technical report) SPARC. Robotics 2020 Multi-Annual Roadmap. 2014. URL: http://www.eu-robotics.net/ppp/objectives-of-our-topic-groups/
  • 58 (in proceedings) J. Shah, J. Wiken, B. Williams and C. Breazeal. Improved human-robot team performance using Chaski, a human-inspired plan execution system. ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2011, pp. 29-36.
  • 59 (in proceedings) D. Silver and J. Veness. Monte-Carlo Planning in Large POMDPs. Advances in Neural Information Processing Systems 23, Curran Associates, Inc., 2010, pp. 2164-2172.
  • 60 (in proceedings) O. Simonin, T. Huraux and F. Charpillet. Interactive Surface for Bio-inspired Robotics, Re-examining Foraging Models. 23rd IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Boca Raton, United States, IEEE, November 2011. [HAL]
  • 61 (in proceedings) N. Stefanov, A. Peer and M. Buss. Role determination in human-human interaction. 3rd Joint EuroHaptics Conference and World Haptics, 2009, pp. 51-56.
  • 62 (book) R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
  • 63 (article) A. Tapus, M. J. Matarić and B. Scassellati. The grand challenges in Socially Assistive Robotics. IEEE Robotics and Automation Magazine - Special Issue on Grand Challenges in Robotics, 14(1), 2007, pp. 1-7.
  • 64 (article) C. D. Wickens, J. Goh, J. Helleberg, W. J. Horrey and D. A. Talleur. Attentional Models of Multitask Pilot Performance Using Advanced Display Technology. Human Factors, 45(3), 2003, pp. 360-380.
  • 65 (article) D. H. Wilson and C. Atkeson. Simultaneous Tracking and Activity Recognition (STAR) Using Many Anonymous, Binary Sensors. Lecture Notes in Computer Science, vol. 3468, 2005, pp. 62-79. URL: http://dx.doi.org/10.1007/11428572_5 [DOI]
  • 66 (article) G. Wolbring and S. Yumakulov. Social Robots: Views of Staff of a Disability Service Organization. International Journal of Social Robotics, 6(3), 2014, pp. 457-468.
  1. See the Robotics 2020 Multi-Annual Roadmap 57.
  2. OHS (Office d'Hygiène Sociale) is an association managing several rehabilitation or retirement home structures.
  3. See the Robotics 2020 Multi-Annual Roadmap 57, section 2.5.