2023 Activity Report
Project-Team LARSEN

RNSR: 201521241C
  • Research center: Inria Centre at Université de Lorraine
  • In partnership with: Université de Lorraine, CNRS
  • Team name: Lifelong Autonomy and interaction skills for Robots in a Sensing ENvironment
  • In collaboration with: Laboratoire lorrain de recherche en informatique et ses applications (LORIA)
  • Domain: Perception, Cognition and Interaction
  • Theme: Robotics and Smart environments


Computer Science and Digital Science

  • A5. Interaction, multimedia and robotics
  • A5.1. Human-Computer Interaction
  • A5.10. Robotics
  • A5.10.1. Design
  • A5.10.2. Perception
  • A5.10.3. Planning
  • A5.10.4. Robot control
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.10.6. Swarm robotics
  • A5.10.7. Learning
  • A5.10.8. Cognitive robotics and systems
  • A5.11. Smart spaces
  • A5.11.1. Human activity analysis and recognition
  • A8.2. Optimization
  • A8.2.2. Evolutionary algorithms
  • A9.2. Machine learning
  • A9.5. Robotics
  • A9.7. AI algorithmics
  • A9.9. Distributed AI, Multi-agent

Other Research Topics and Application Domains

  • B1.1.9. Biomechanics and anatomy
  • B2.1. Well being
  • B2.5.3. Assistance for elderly
  • B5.1. Factory of the future
  • B5.2.4. Aerospace
  • B5.6. Robotic systems
  • B7.2.1. Smart vehicles
  • B9.6. Humanities
  • B9.6.1. Psychology

1 Team members, visitors, external collaborators

Research Scientists

  • Francis Colas [Team leader, INRIA, Researcher, HDR]
  • Olivier Buffet [INRIA, Researcher, HDR]
  • Serena Ivaldi [INRIA, Senior Researcher, from Oct 2023, HDR]
  • Serena Ivaldi [INRIA, Researcher, until Sep 2023, HDR]
  • Pauline Maurice [CNRS, Researcher]
  • Enrico Mingo Hoffman [INRIA, ISFP, from Oct 2023]
  • Jean-Baptiste Mouret [INRIA, Senior Researcher, HDR]
  • Quentin Rouxel [INRIA, Starting Research Position]

Faculty Members

  • Amine Boumaza [UL, Associate Professor]
  • Mohamed Boutayeb [UL, Professor Delegation, HDR]
  • Sophie Lemonnier [UL, Associate Professor Delegation, from Sep 2023]
  • Alexis Scheuer [UL, Associate Professor]
  • Vincent Thomas [UL, Associate Professor]

Post-Doctoral Fellow

  • Jessica Colombel [CNRS, Post-Doctoral Fellow, from Sep 2023]

PhD Students

  • Lina Achaji [GROUPE PSA, until Jun 2023]
  • Timothee Anne [INRIA, until Aug 2023]
  • Timothee Anne [INRIA, from Dec 2023]
  • Timothee Anne [UL, until Nov 2023]
  • Abir Bouaouda [UL, ATER, from Oct 2023]
  • Abir Bouaouda [UL, until Sep 2023]
  • Raphael Bousigues [INRIA, until Nov 2023]
  • Jessica Colombel [UL, ATER, until Aug 2023]
  • Yoann Fleytoux [INRIA, until Jan 2023]
  • Salome Lepers [UL]
  • Nima Mehdi [UL, from Dec 2023]
  • Nima Mehdi [INRIA, until Nov 2023]
  • Alexandre Oliveira Souza [SAFRAN, CIFRE]
  • Dionis Totsila [INRIA, from Nov 2023]
  • Aya Yaacoub [CNRS]
  • Yang You [INRIA, until Feb 2023]
  • Jacques Zhong [CNRS, from Oct 2023]
  • Jacques Zhong [CEA, until Oct 2023]

Technical Staff

  • Alexis Biver [INRIA, Engineer]
  • Raphael Lartot [INRIA, Engineer, until Sep 2023]
  • Thomas Martin [UL, from May 2023]
  • Nicolas Valle [INRIA, Engineer, until Sep 2023]

Interns and Apprentices

  • Jules Brunet [UL, Intern, from Jun 2023 until Sep 2023]
  • Maxime Chalumeau [UL, Intern, from May 2023 until Aug 2023]
  • Lucie Jeannin [INRIA, Intern, from Apr 2023 until Jul 2023]
  • Louis Malterre [INRIA, Intern, from Jun 2023 until Aug 2023]
  • Chloe Matina [INRIA, Intern, from Jun 2023 until Nov 2023]

Administrative Assistants

  • Véronique Constant [INRIA]
  • Antoinette Courrier [CNRS]

Visiting Scientists

  • Dallin Cordon [Univ Brigham, from Jun 2023 until Jul 2023]
  • Marc Killpack [Univ Brigham, from Jun 2023 until Jul 2023]
  • Agisilaos Kounelis [University of Athens, Greece, until Jan 2023]
  • Shaden Moss [Univ Brigham, from Jun 2023 until Jul 2023]
  • Hortencia Ramirez Vazquez [ITESM, from Sep 2023]
  • John Salmon [Univ Brigham, from Jun 2023 until Jul 2023]

External Collaborators

  • François Charpillet [INRIA, HDR]
  • Lucien Renaud [PAL ROBOTICS, until Jun 2023]

2 Overall objectives

The goal of the Larsen team is to move robots beyond research laboratories and manufacturing industries: current robots are far from being the fully autonomous, reliable, and interactive robots that could co-exist with us in our society and run for days, weeks, or months. While there is undoubtedly progress to be made on the hardware side, robotic platforms are quickly maturing and we believe the main challenges to achieving our goal are now on the software side. We want our software to be able to run on low-cost mobile robots that are not equipped with high-performance sensors or actuators, so that our techniques can realistically be deployed and evaluated in real settings, such as service and assistive robotic applications. We envision that these robots will be able to cooperate with each other but also with intelligent spaces or apartments, which can themselves be seen as robots spread throughout the environment. Like robots, intelligent spaces are equipped with sensors that make them sensitive to human needs, habits, gestures, etc., and with actuators that let them adapt and respond to environment changes and human needs. These intelligent spaces can give robots improved skills: less expensive, distributed sensors and actuators enlarge their view of human activities, making them able to behave more intelligently and with better awareness of the people evolving in their environment. As robots and intelligent spaces share common characteristics, we will use, for the sake of simplicity, the term robot for both mobile robots and intelligent spaces.

Among the particular issues we want to address, we aim at designing robots that are able to:

  • handle dynamic environments and unforeseen situations;
  • cope with physical damage;
  • interact physically and socially with humans;
  • collaborate with each other;
  • exploit the multitude of sensor measurements from their surroundings;
  • enhance their acceptability and usability by end-users without robotics background.

All these abilities can be summarized by the following two major objectives:

  • life-long autonomy: continuously perform tasks while adapting to sudden or gradual changes in both the environment and the morphology of the robot;
  • natural interaction with robotics systems: interact with both other robots and humans for long periods of time, taking into account that people and robots learn from each other when they live together.

3 Research program

3.1 Lifelong autonomy

Scientific context

So far, only a few autonomous robots have been deployed for a long time (weeks, months, or years) outside of factories and laboratories. They are mostly mobile robots that simply “move around” (e.g., vacuum cleaners or museum “guides”) and data-collecting robots (e.g., boats or underwater “gliders” that collect oceanographic data).

A large part of the long-term autonomy community is focused on simultaneous localization and mapping (SLAM), with a recent emphasis on changing and outdoor environments 42, 52. A more recent theme is life-long learning: during long-term deployment, we cannot hope to equip robots with everything they need to know, therefore some things will have to be learned along the way. Most of the work on this topic leverages machine learning and/or evolutionary algorithms to improve the ability of robots to react to unforeseen changes 42, 48.

Main challenges

The first major challenge is to endow robots with a stable situation awareness in open and dynamic environments. This covers both the robot's estimation of its own state and the perception/representation of the environment. Both problems have been claimed to be solved, but only for static environments 47.

In the Larsen team, we aim at deployment in environments shared with humans, which implies dynamic objects that degrade both the mapping and the localization of a robot, especially in cluttered spaces. Moreover, when robots stay in the environment longer than the acquisition of a snapshot map, they have to face structural changes, such as the displacement of a piece of furniture or the opening or closing of a door. The current approach is to simply update an implicitly static map with all observations, without attempting to distinguish lasting changes from transient ones. For localization in not-too-cluttered or not-too-empty environments, this is generally sufficient, since a significant fraction of the environment should remain stable. But for life-long autonomy, and in particular for navigation, the quality of the map, and especially the knowledge of its stable parts, is paramount.

A second major obstacle to moving robots outside of labs and factories is their fragility: Current robots often break in a few hours, if not a few minutes. This fragility mainly stems from the overall complexity of robotic systems, which involve many actuators, many sensors, and complex decisions, and from the diversity of situations that robots can encounter. Low-cost robots exacerbate this issue because they can be broken in many ways (high-quality material is expensive), because they have low self-sensing abilities (sensors are expensive and increase the overall complexity), and because they are typically targeted towards non-controlled environments (e.g., houses rather than factories, in which robots are protected from most unexpected events). More generally, this fragility is a symptom of the lack of adaptive abilities in current robots.

Angle of attack

To solve the state estimation problem, our approach is to combine classical estimation filters (Extended Kalman Filters, Unscented Kalman Filters, or particle filters) with a Bayesian reasoning model in order to internally simulate various configurations of the robot in its environment. This should allow for adaptive estimation that can be used as one aspect of long-term adaptation. To handle dynamic and structural changes in an environment, we aim at assessing, for each piece of observation, whether it is static or not.
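As an illustration of the predict/update cycle underlying these estimation filters, here is a minimal particle-filter sketch for a 1-D robot; the motion model, noise levels, and scenario are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, command, measurement,
                         motion_noise=0.1, sensor_noise=0.2):
    """One predict/update cycle of a particle filter (1-D toy robot)."""
    # Predict: propagate each particle through a noisy motion model.
    particles = particles + command + rng.normal(0.0, motion_noise, particles.shape)
    # Update: reweight particles by the likelihood of the measurement.
    likelihood = np.exp(-0.5 * ((measurement - particles) / sensor_noise) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Track a robot commanded to move +1 m per step, from noisy positions.
n = 500
particles = rng.normal(0.0, 1.0, n)
weights = np.full(n, 1.0 / n)
true_pos = 0.0
for _ in range(20):
    true_pos += 1.0
    measurement = true_pos + rng.normal(0.0, 0.2)
    particles, weights = particle_filter_step(particles, weights, 1.0, measurement)

estimate = float(np.sum(particles * weights))
```

In the team's setting such a filter would be only one component among several (EKF/UKF variants, Bayesian reasoning about the robot's possible configurations), not the whole estimator.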

We also plan to address active sensing to improve the situation awareness of robots. Literally, active sensing is the ability of an interacting agent to act so as to control what it senses from its environment with the typical objective of acquiring information about this environment. A formalism for representing and solving active sensing problems has already been proposed by members of the team 41 and we aim to use it to formalize decision-making problems for improving situation awareness.

Situation awareness of robots can also be tackled by cooperation, whether between robots, between robots and sensors in the environment (i.e., intelligent spaces), or between robots and humans. This breaks with classical robotics, in which robots are conceived as self-contained. But, in order to cope with environments as diverse as possible, these classical robots use precise, expensive, and specialized sensors, whose cost prohibits their use in large-scale deployments for service or assistance applications. Furthermore, when all sensors are on the robot, they share the same point of view on the environment, which limits perception. We therefore propose to complement a cheaper robot with sensors distributed in a target environment. This is an emerging research direction that shares some of the challenges of multi-robot operation, and we are therefore collaborating with other teams at Inria that address the issues of communication and interoperability.

To address the fragility problem, the traditional approach is to first diagnose the situation, then use a planning algorithm to create/select a contingency plan. But, again, this calls for both expensive sensors on the robot for the diagnosis and extensive work to predict and plan for all the possible faults that, in an open and dynamic environment, are almost infinite. An alternative approach is then to skip the diagnosis and let the robot discover by trial and error a behavior that works in spite of the damage with a reinforcement learning algorithm 57, 48. However, current reinforcement learning algorithms require hundreds of trials/episodes to learn a single, often simplified, task 48, which makes them impossible to use for real robots and more ambitious tasks. We therefore need to design new trial-and-error algorithms that will allow robots to learn with a much smaller number of trials (typically, a dozen). We think the key idea is to guide online learning on the physical robot with dynamic simulations. For instance, in our recent work, we successfully mixed evolutionary search in simulation, physical tests on the robot, and machine learning to allow a robot to recover from physical damage 49, 1.
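The actual damage-recovery work combines evolutionary search in simulation with Gaussian-process-based Bayesian optimization on the robot; the following deliberately simplified sketch only conveys the core idea of using simulated scores (here for a hypothetical one-parameter gait repertoire) to order a small budget of physical trials:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gait repertoire: 50 parameter values scored in simulation.
params = np.linspace(0.0, 1.0, 50)
sim_score = np.sin(3.0 * params)  # stand-in for simulated walking speed

def physical_trial(p):
    """Stand-in for a costly trial on the damaged robot: the damage
    shifts the optimum away from the simulated one, plus sensor noise."""
    return np.sin(3.0 * (p - 0.35)) + rng.normal(0.0, 0.02)

# Trial-and-error guided by the simulated prior: try the behaviors the
# simulation ranks highest, within a small budget of physical trials.
budget = 8
best_p, best_s = None, -np.inf
for i in np.argsort(-sim_score)[:budget]:
    s = physical_trial(params[i])
    if s > best_s:
        best_p, best_s = params[i], s
```

A Gaussian process would additionally propagate each trial's outcome to neighboring behaviors, which is what brings the trial count down to about a dozen in practice.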

A final approach to address fragility is to deploy several robots or a swarm of robots, or to make robots evolve in an active environment. We will consider several paradigms, such as (1) those inspired by collective natural phenomena, in which the environment plays an active role in coordinating the activity of a huge number of biological entities such as ants, and (2) those based on online learning 45. We envision transferring our knowledge of such phenomena to engineer new artificial devices such as an intelligent floor (in fact a spatially distributed network in which each node can sense, compute, and communicate with contiguous nodes and can interact with moving entities on top of it) in order to assist people and robots (see the principle in 55, 45, 40).

3.2 Natural interaction with robotic systems

Scientific context

Interaction with the environment is a primordial requirement for an autonomous robot. When the environment is sensorized, the interaction can include localizing, tracking, and recognizing the behavior of robots and humans. One specific issue lies in the lack of predictive models for human behavior and a critical constraint arises from the incomplete knowledge of the environment and the other agents.

On the other hand, when working in the proximity of or directly with humans, robots must be capable of safely interacting with them, which calls upon a mixture of physical and social skills. Currently, robot operators are usually trained and specialized but potential end-users of robots for service or personal assistance are not skilled robotics experts, which means that the robot needs to be accepted as reliable, trustworthy and efficient 61. Most Human-Robot Interaction (HRI) studies focus on verbal communication 56 but applications such as assistance robotics require a deeper knowledge of the intertwined exchange of social and physical signals to provide suitable robot controllers.

Main challenges

We are interested here in building the blocks for situated HRI, addressing both the physical and social dimensions of close interaction, as well as the cognitive aspects related to the analysis and interpretation of human movement and activity.

The combination of physical and social signals into robot control is a crucial investigation for assistance robots 58 and robotic co-workers 54. A major obstacle is the control of physical interaction (precisely, the control of contact forces) between the robot and the human while both partners are moving. In mobile robots, this problem is usually addressed by planning the robot movement taking into account the human as an obstacle or as a target, then delegating the execution of this “high-level” motion to whole-body controllers, where a mixture of weighted tasks is used to account for the robot balance, constraints, and desired end-effector trajectories 39.

The first challenge is to make these controllers easier to deploy in real robotics systems, as currently they require a lot of tuning and can become very complex to handle the interaction with unknown dynamical systems such as humans. Here, the key is to combine machine learning techniques with such controllers.

The second challenge is to make the robot react and adapt online to the human feedback, exploiting the whole set of measurable verbal and non-verbal signals that humans naturally produce during a physical or social interaction. Technically, this means finding the optimal policy that adapts the robot controllers online, taking into account feedback from the human. Here, we need to carefully identify the significant feedback signals or some metrics of human feedback. In real-world conditions (i.e., outside the research laboratory environment) the set of signals is technologically limited by the robot's and environmental sensors and the onboard processing capabilities.

The third challenge is for a robot to identify and track people using its on-board sensors. The motivation is to estimate online the position, the posture, or even the moods and intentions of the persons surrounding the robot. The main challenge is to do so online, in real time, and in cluttered environments.

Angle of attack

Our key idea is to exploit the physical and social signals produced by the human during the interaction with the robot and the environment in controlled conditions, to learn simple models of human behavior and consequently to use these models to optimize the robot movements and actions. In a first phase, we will exploit human physical signals (e.g., posture and force measurements) to identify the elementary posture tasks during balance and physical interaction. The identified model will be used to optimize the robot whole-body control as prior knowledge to improve both the robot balance and the control of the interaction forces. Technically, we will combine weighted and prioritized controllers with stochastic optimization techniques. To adapt online the control of physical interaction and make it possible with human partners that are not robotics experts, we will exploit verbal and non-verbal signals (e.g., gaze, touch, prosody). The idea here is to estimate online from these signals the human intent along with some inter-individual factors that the robot can exploit to adapt its behavior, maximizing the engagement and acceptability during the interaction.
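A kinematic caricature of such weighted multi-task control can be written as one stacked least-squares problem (the real controllers are dynamic and QP-based with constraints; the Jacobians and weights below are purely illustrative):

```python
import numpy as np

def weighted_task_qdot(tasks):
    """Joint velocities minimizing sum_i w_i * ||J_i qdot - v_i||^2.

    tasks: list of (weight, J, v) soft tasks, stacked into a single
    weighted least-squares problem (kinematic analogue of a weighted QP).
    """
    A = np.vstack([np.sqrt(w) * J for w, J, _ in tasks])
    b = np.concatenate([np.sqrt(w) * v for w, _, v in tasks])
    qdot, *_ = np.linalg.lstsq(A, b, rcond=None)
    return qdot

# Illustrative 3-DoF arm: a high-weight end-effector velocity task and
# a low-weight posture regularization task (which also damps the solve).
J_ee = np.array([[1.0, 0.5, 0.2],
                 [0.0, 1.0, 0.4]])
v_ee = np.array([0.1, 0.0])
qdot = weighted_task_qdot([(10.0, J_ee, v_ee),
                           (0.1, np.eye(3), np.zeros(3))])
```

Tuning such weights by hand is precisely what makes these controllers hard to deploy, hence the proposal to learn or stochastically optimize them.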

Another promising approach already investigated in the Larsen team is the capability for a robot and/or an intelligent space to localize humans in its surrounding environment and to understand their activities. This is an important issue for both safe and efficient human-robot interaction.

Simultaneous Tracking and Activity Recognition (STAR) 60 is an approach we want to develop. The activity of a person is highly correlated with their position, and this approach aims at combining tracking and activity recognition so that each benefits from the other. By tracking the individual, the system may help infer their current activity, while by estimating the activity of the individual, the system may better predict their future positions (especially in the case of occlusions). This direction has been tested in simulation with particle filters 44, and one promising direction would be to couple STAR with decision-making formalisms such as partially observable Markov decision processes (POMDPs). This would allow us to formalize problems such as deciding which action to take given an estimate of the human's location and activity. This could also formalize other problems linked to the team's active sensing direction: how should the robotic system choose its actions in order to better estimate the human's location and activity (for instance, by moving in the environment or by changing the orientation of its cameras)?
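The mutual benefit at the heart of STAR can be shown with a toy joint Bayesian update over (activity, room); all probability tables here are invented for illustration:

```python
import numpy as np

ROOMS = ["kitchen", "living_room", "bedroom"]
ACTIVITIES = ["cooking", "watching_tv", "sleeping"]

# Invented P(room | activity): the activity constrains the location.
ROOM_GIVEN_ACT = np.array([[0.80, 0.15, 0.05],   # cooking
                           [0.10, 0.80, 0.10],   # watching_tv
                           [0.05, 0.10, 0.85]])  # sleeping

def star_update(belief, observed_room, p_correct=0.9):
    """One STAR-style update of the joint belief[activity, room]:
    a noisy room observation sharpens both marginals at once."""
    likelihood = np.full(len(ROOMS), (1.0 - p_correct) / (len(ROOMS) - 1))
    likelihood[ROOMS.index(observed_room)] = p_correct
    belief = belief * likelihood[None, :]
    return belief / belief.sum()

# Uniform prior over activities, rooms given by the activity model.
belief = ROOM_GIVEN_ACT / ROOM_GIVEN_ACT.sum()
for _ in range(3):  # three consecutive (noisy) kitchen detections
    belief = star_update(belief, "kitchen")

activity_marginal = belief.sum(axis=1)  # "cooking" now dominates
```

In the actual approach, the joint state also includes motion dynamics and is tracked with particle filters; a POMDP layer could then choose sensing or navigation actions on top of this belief.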

Another issue we want to address is robotic human body pose estimation. Human body pose estimation consists of tracking body parts by analyzing a sequence of input images from single or multiple cameras.

Human posture analysis is of high value for human-robot interaction and activity recognition. However, even though the arrival of new sensors like RGB-D cameras has simplified the problem, it still poses a great challenge, especially if we want to do it online, on a robot, and in realistic conditions (cluttered environments). It is even more difficult for a robot, which must bring together different capabilities at both the perception and navigation levels 43. This will be tackled through different techniques, ranging from Bayesian state estimation (particle filtering) to learning, and active and distributed sensing.

4 Application domains

4.1 Personal assistance

During the last fifty years, many medical advances as well as improvements in the quality of life have resulted in a longer life expectancy in industrial societies. The increase in the number of elderly people is a matter of public health because, although elderly people can age in good health, old age also brings frailty, in particular physical frailty, which can result in a loss of autonomy. This will lead us to rethink the current model of care for elderly people.1 Capacity limits in specialized institutes, along with the preference of elderly people to stay at home as long as possible, explain a growing need for specific services at home.

Ambient intelligence technologies and robotics could contribute to this societal challenge. The spectrum of possible actions in the field of elderly assistance is very large, ranging from activity-monitoring services and mobility or daily-activity aids to medical rehabilitation and social interaction. This work will build on the experimental infrastructure we have set up in Nancy (the smart-apartment platform) as well as on our deep collaboration with OHS 2 and the company Pharmagest and its subsidiary Diatelic, a company (SAS) created in 2002 by a member of the team, among others.

At the same time, these technologies can be beneficial in addressing the increasing prevalence of musculoskeletal disorders and diseases caused by the non-ergonomic postures of workers performing physically stressful tasks. Wearable technologies, sensors, and robotics can be used to monitor workers' activity and its impact on their health, and to anticipate risky movements. Two application domains have been particularly addressed in recent years: industry, and more specifically manufacturing, and healthcare.

4.2 Civil robotics

Many applications for robotics technology exist within the services provided by national and local government. Typical applications include civil infrastructure services 3 such as: urban maintenance and cleaning; civil security services; emergency services involved in disaster management including search and rescue; environmental services such as surveillance of rivers, air quality, and pollution. These applications may be carried out by a wide variety of robots and operating modalities, ranging from single robots to small fleets of homogeneous or heterogeneous robots. Often robot teams will need to cooperate to span a large workspace, for example in urban rubbish collection, and operate in potentially hostile environments, for example in disaster management. These systems are also likely to have extensive interaction with people and their environments.

The skills required for civil robots match those developed in the Larsen project: operating for a long time in potentially hostile environment, potentially with small fleets of robots, and potentially in interaction with people.

5 Social and environmental responsibility

The team is engaged in reducing its carbon footprint by taking actions to reduce the number of trips. Project meetings are carried out remotely when possible, and train is the preferred means of travel whenever feasible.

5.1 Impact of research results

Hospitals. The research in the ExoTurn project led to the deployment of a total of four exoskeletons (Laevo) in the Intensive Care Unit of the Hospital of Nancy (CHRU). They have been used by the medical staff since April 2020 to perform prone positioning on COVID-19 patients with severe ARDS. To the best of our knowledge, other hospitals (in France, Belgium, the Netherlands, Italy, and Switzerland) have followed in our footsteps and purchased Laevo exoskeletons for the same use. At the same time, the positive feedback from the CHRU of Nancy has motivated us to continue investigating whether exoskeletons could be beneficial for medical staff involved in other types of healthcare activities. A new study on bed bathing of hospitalized patients started in February 2021 in the department of vascular surgery. For sanitary reasons, preliminary experiments investigating the use of the Laevo for assisting nurses were conducted in the team's laboratory premises in summer 2021. An article presenting the findings is in preparation.

Ageing and health. This research line aims to propose technological solutions to the difficulties faced by elderly people in an ageing population (due to the increase in life expectancy). The placement of older people in a nursing home (EHPAD) is often a choice of reason rather than of preference, and can be poorly experienced. One answer to this societal problem is the development of smart-home technologies that help the elderly stay in their homes longer than is possible today. With this objective, we have a long-term cooperation with Pharmagest, supported in recent years through a PhD thesis (CIFRE) between June 2017 and August 2021. The objective is to enhance the CareLib solution developed by Diatelic (a subsidiary of the Wellcoop-Pharmagest group) and the Larsen team through a previous collaboration (the Satelor project). The CareLib solution consists of (1) a connected box (with touch screen), (2) a 3D sensor able (i) to measure gait characteristics such as speed and step length, (ii) to identify activities of daily living, and (iii) to detect emergency situations such as a fall, and (3) universal sensors (motion, ...) installed in each part of the housing. A software licence has been granted by Inria to Pharmagest.

Environment. The new TELEMOVTOP project, in collaboration with the company Isotop, aims at automating the removal of asbestos-contaminated metal sheets from roofs. This procedure has a high environmental impact and also poses a risk to workers' health. Robotics can be a major technological innovation in this field. With this project, the team aims both at helping to reduce workers' risk of exposure to asbestos and at accelerating disposal to reduce environmental pollution.

Firefighters. The POMPEXO project, in collaboration with the SDIS 54 (firefighters of Meurthe-et-Moselle) and two laboratories of the Université de Lorraine (DevAH: biomechanics, and Perseus: cognitive ergonomics), aims at investigating the possibility of assisting firefighters with an exoskeleton during the car-cutting maneuver. This frequent maneuver is physically very demanding and, due to a general trend of aging and decreasing physical condition among firefighter crews, fewer and fewer firefighters are able to perform it. Hence the SDIS 54 is looking for a solution to increase the strength and reduce the fatigue of firefighters during this maneuver. Occupational exoskeletons have the potential to alleviate the physical load on workers, and hence may be a solution. However, the feasibility and benefits of exoskeletons are task-dependent. We are therefore currently analyzing the car-cutting maneuver based on data collected on-site with professional firefighters, to identify what kind of exoskeleton may be suitable.

6 Highlights of the year

6.1 Awards

  • NORA Annual Conference Award 2023 for Publication of the decade 2012-2022 for the paper: Kai Olav Ellefsen, Jean-Baptiste Mouret, and Jeff Clune. "Neural modularity helps organisms evolve to learn new skills without forgetting old skills." PloS computational biology 11.4 (2015): e1004128.
  • Best conference paper award at IEEE ARSO 2023 for the paper 17:

    Oliveira Souza, A.; Grenier, J.; Charpillet, F.; Maurice, P.; Ivaldi, S. (2023) Towards data-driven predictive control of active upper-body exoskeletons for load carrying. IEEE ARSO 2023.

  • Lorenzo Vianello, a former PhD student of the team (thesis defended in December 2022), was a finalist for the best PhD thesis award of the GdR MACS.
  • Best paper award at IEEE ICTAI 2023 for the paper 11:

    Delage A., Buffet O., Dibangoye J. (2023) Global min-max Computation for α-Hölder Games. IEEE ICTAI 2023.

6.2 Misc

  • Serena Ivaldi and Jean-Baptiste Mouret were visiting experts in the Human-Robot Interaction Lab at ESA-ESTEC (Netherlands) in March-April 2023.
  • Serena Ivaldi was keynote speaker at IEEE IROS 2023.

7 New software, platforms, open data

7.1 New software

7.1.1 inria_wbc

  • Name:
    Inria whole body controller
  • Keyword:
  • Scientific Description:
    This software implements Task-Space Inverse Dynamics for the Talos, iCub, Franka Emika Panda, and Tiago robots.
  • Functional Description:
    This controller exploits the TSID library (QP-based inverse dynamics) to implement a position controller for the Talos humanoid robot. It includes:
    • flexible configuration files,
    • links with the RobotDART library for easy simulations,
    • stabilizer and torque safety checks.
  • Release Contributions:
    First version
  • URL:
  • Publication:
  • Contact:
    Jean-Baptiste Mouret
  • Participants:
    Jean-Baptiste Mouret, Eloise Dalin, Ivan Bergonzani, Olivier Rochel

7.1.2 libProMP

  • Name:
    Probabilistic Motion Primitives
  • Keywords:
    Machine learning, Robotics
  • Functional Description:
    This library implements Probabilistic Motion Primitives for motion prediction, motion recognition, and learning from demonstration in robotics. It leverages the Eigen3 library.
  • URL:
  • Publication:
  • Contact:
    Serena Ivaldi
  • Participants:
    Luigi Penco, Ivan Bergonzani, Waldez Azevedo Gomes Junior, Serena Ivaldi, Jean-Baptiste Mouret

8 New results

8.1 Lifelong autonomy

8.1.1 Planning and decision making

Heuristic search for (partially observable) stochastic games

Participants: Olivier Buffet, Aurélien Delage, Vincent Thomas.

Collaboration with Jilles Dibangoye (INSA-Lyon, INRIA team CHROMA) and Abdallah Saffidine (University of New South Wales (UNSW), Sydney, Australia). This line of research is pursued through Jilles Dibangoye's ANR JCJC PLASMA.

Many robotic scenarios involve multiple interacting agents, robots or humans, e.g., security robots in public areas.

After having addressed the collaborative setting, where all agents share one objective 2, we have applied a similar approach to the important 2-player zero-sum setting, i.e., with two competing agents: we proposed an algorithm for partially observable stochastic games (POSGs) that turns the problem into an occupancy Markov game and derives bounding approximators building on two types of continuity properties: Lipschitz continuity on the one hand, and convexity and concavity properties on the other. On a related topic, we have proposed an approach for the global max-min optimization of Lipschitz-continuous functions.

This year, this line of work led to several publications 4, 11, 23, 24.

Explicability and interpretability in probabilistic planning

Participants: Salomé Lepers, Olivier Buffet, Vincent Thomas.

In a human-agent collaboration scenario, some properties of the agent's behavior can be useful for the human and sometimes allow for a better collaboration. These properties include, for instance, legibility (legible behaviors convey intentions, i.e., the actual task at hand, via action choices), explicability (explicable behaviors conform to observers' expectations, i.e., they appear to have some purpose), and predictability (a behavior is usually considered predictable if it is easy to guess the end of an on-going trajectory). Recent theoretical frameworks allow formalizing such properties and proposing algorithms to enforce them.

This year, we looked in particular at predictability in the context of a stochastic environment, where the end of the current trajectory may depend heavily on the outcome of each action. We thus proposed that predictability should be about minimizing the number of errors when an observer is asked repeatedly to guess the next action or next state. This has been formalized as a variant of the observer-aware Markov decision process (OAMDP) formalism, which naturally comes with simple algorithms that efficiently find optimal solutions.
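A minimal sketch of this criterion, with a hypothetical three-state policy and an observer who always guesses the agent's most likely action:

```python
import numpy as np

# Hypothetical example: pi[s, a] is the probability that the agent takes
# action a in state s; the observer guesses the most likely action.
pi = np.array([[1.0, 0.0],    # fully predictable state: no guess error
               [0.5, 0.5],    # two equally likely actions: 50% guess error
               [0.8, 0.2]])
per_step_error = 1.0 - pi.max(axis=1)   # expected guess error in each state
# Expected number of errors along a trajectory visiting each state once:
print(per_step_error.sum())  # ≈ 0.7
```

Minimizing this expected error count over the agent's policy is what, in this simplified view, a predictable behavior optimizes.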

In silico experiments on simple grid-world problems allowed us to validate the approach and illustrate the types of resulting behaviors, which sometimes make surprising decisions early on so as to later encounter fewer states where action choices are hard to predict (because multiple actions are optimal). In vivo experiments, where actual humans observe the behavior of artificial agents, confirmed that our approach improves their predictions in most settings, but also showed that humans efficiently learn biases in an agent's behavior, which further helps them.

This year, this line of work led to a first publication 15.

Path planning with probabilistic motion models

Participants: Maxime Chalumeau, Alexis Scheuer, Francis Colas.

This work was done in collaboration with Olivier Rochel from SED.

Path planning is most often done with deterministic motion models. The standard approach to coping with the discrepancy between the model and reality is then to introduce significant safety margins. This can lead to the robot not moving at all because it considers there is not enough space to maneuver.

In order to design more precise models of mobile robot motion, we aim to collect data that includes in particular: the command sent to the platform, the measured odometry, and the ground-truth displacement. During the internship of Maxime Chalumeau, we started this work using the Turtlebot4. His first task was to set up the robot inside our existing robotic platform, using ROS2 instead of ROS1. Maxime also designed and implemented a whole data collection framework. The aim is to have fully autonomous control of a robot moving randomly in an enclosed environment while maximizing the diversity of the motions performed and of the transitions between motions.

Next steps are the actual data collection and the modeling using probabilistic models or artificial neural networks. Then, with these new models, we can evaluate whether planning that takes this uncertainty into account leads to better trajectories.

8.1.2 Learning and optimization

Learning height for top-down grasps with the DIGIT sensor

Participants: Thais Bernardi, Yoann Fleytoux, Serena Ivaldi, Jean-Baptiste Mouret.

This work is part of the ANR/Chist-ERA project HEAP.

Following our previous work on top-down grasping of unknown objects identified from top-down images with a parallel gripper 46, we worked on learning the height of the grasp (how far the gripper should be from the table). When no 3D object model is available, state-of-the-art grasp generators identify the best candidate locations for planar grasps using the RGBD image. However, while they generate the Cartesian location and orientation of the gripper, the height of the grasp center is often determined by heuristics based on the highest point in the depth map. This leads to unsuccessful grasps when the objects are not thick, or have transparencies or curved shapes. We proposed to learn a regressor that predicts the best grasp height based on the image. We train this regressor with a dataset that is automatically acquired thanks to the DIGIT optical tactile sensors, which are able to evaluate grasp success and stability. Using our predictor, the grasping success is improved by 6% for all objects, by 16% on average on difficult objects, and by 40% for objects that are notably very difficult to grasp (e.g., transparent, curved, thin).
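As a simplified sketch of the idea (the actual regressor is learned from images; here a linear model on hypothetical depth-map features stands in for it, with synthetic labels playing the role of tactile-validated grasp heights):

```python
import numpy as np

# Hypothetical features per grasp candidate: [max_depth, mean_depth] (in m)
# around the grasp center; labels are the grasp heights to regress.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 0.1, size=(200, 2))
y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 1e-3, 200)  # synthetic

# Fit a linear regressor (least squares with a bias term).
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the grasp height for a new candidate.
x_new = np.array([0.05, 0.03, 1.0])
print(x_new @ w)
```

The point is only the pipeline shape (features from the depth map in, a continuous grasp height out); the published system uses a learned image-based predictor.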

This year, this line of work led to a publication 8 as well as a Youtube video.

Quality diversity algorithms

Participants: Timothée Anne, Jean-Baptiste Mouret.

Quality diversity algorithms are black-box optimization algorithms that search for a large set of high-quality solutions that all behave differently 50. In 2023, we worked on two topics:

  • Fast centroid generation: in MAP-Elites, which is the main quality diversity algorithm, the behavioral space must be divided into “cells” of equal volume. The current approach is a Centroidal Voronoi Tessellation 59, but it is slow. We investigated other approaches to evenly spread centroid points in high-dimensional (2 to 100) spaces, like blue-noise generators and scrambled Sobol numbers. We concluded that the Centroidal Voronoi Tessellation gives the best quality, but that the faster approaches are similar in high-dimensional spaces.
  • Multi-task, Multi-behavior MAP-Elites: we often need to find the optimum for a family of cost functions, a problem we called Multi-task MAP-Elites. We extended our previous work on this topic 51 by combining it with MAP-Elites to find, for each of many tasks, many solutions that behave differently.
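As an example of the fast alternatives compared above, scrambled Sobol points can be generated with SciPy; this is a sketch of the general technique, not the code used in the study:

```python
import numpy as np
from scipy.stats import qmc

# Spread 1024 centroids in a 10-dimensional behavioral space with a
# scrambled Sobol sequence (a low-discrepancy, fast alternative to
# computing a Centroidal Voronoi Tessellation).
sampler = qmc.Sobol(d=10, scramble=True, seed=42)
centroids = sampler.random_base2(m=10)   # 2**10 = 1024 points in [0, 1]^10
print(centroids.shape)  # (1024, 10)
```

Each point then serves as the center of one MAP-Elites cell, solutions being assigned to their nearest centroid.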

This year, this line of work led to a publication 7.

Dynamic modeling and AI-based control of a cable-driven parallel robot

Participants: Abir Bouaouda, Mohamed Boutayeb, François Charpillet, Dominique Martinez.

This work is done in collaboration with Rémi Pannequin (from CRAN) and Dominique Martinez (ISM, Marseille).

Controlling over-constrained cable-driven parallel robots (CDPRs) is a challenging task due to the complex dynamics of the system. Classical controllers require force-distribution algorithms that involve a time-consuming optimization problem. This year, we proposed an AI-based approach that learns a controller from simulated trajectories. A dynamic model of the CDPR was first validated experimentally on a real robot. The controller was then trained on a CDPR simulator with randomly generated trajectories using a deep deterministic policy gradient (DDPG). Finally, the trained controller was tested on different trajectories. Validation results show that the proposed approach is able to track unknown trajectories with good accuracy.

This year, this line of work led to a publication 9.

Learning in multi-robot and swarm contexts

Participants: Amine Boumaza.

Designing behaviors in the multi-robot context is a challenging open problem, which we address using embodied evolutionary robotics (EER): algorithms in which optimization is carried out in a decentralized way, each mobile robot running an evolutionary algorithm on board and exchanging genetic material with other agents when they meet.

In this context, we address the problem of promoting behavioural diversity. We show that it is not easy to adapt existing diversity algorithms from traditional evolutionary robotics to this context, and describe a method in which selection is based on originality, allowing a swarm of heterogeneous agents to maintain a high degree of diversity in behavioral space. We also describe a behavioral distance measure that compares behaviors reliably in distributed contexts. We tested the selection scheme on an open-ended survival task and showed its effectiveness. Without any pressure besides that of the environment, the evolved strategies tend toward simplicity, exploiting the existing affordances; the additional external pressure from our selection measure enables the emergence of rich and diverse behaviors.

This year, this line of work led to a publication 35.

8.2 Natural interaction with robotics systems

8.2.1 Teleoperation and Human-Robot collaboration

Survey of humanoid teleoperation

Participants: Luigi Penco, Serena Ivaldi.

International collaboration with the Italian Institute of Technology, the University of Illinois at Urbana-Champaign, the CNRS-AIST Joint Robotics Laboratory (JRL), the Florida Institute for Human and Machine Cognition, and the Tokyo University of Science.

Teleoperation of humanoid robots enables the integration of the cognitive skills and domain expertise of humans with the physical capabilities of humanoid robots. The operational versatility of humanoid robots makes them the ideal platform for a wide range of applications when teleoperating in a remote environment. However, the complexity of humanoid robots imposes challenges for teleoperation, particularly in unstructured dynamic environments with limited communication. Many advancements have been achieved in the last decades in this area, but a comprehensive overview is still missing. This survey article gives an extensive overview of humanoid robot teleoperation, presenting the general architecture of a teleoperation system and analyzing the different components. We also discuss different aspects of the topic, including technological and methodological advances, as well as potential applications.

This year, this line of work led to a publication 3.

Teleoperation of humanoid robots with 2 s delays

Participants: Luigi Penco, Jean-Baptiste Mouret, Serena Ivaldi.

Humanoid robots could be versatile and intuitive human avatars that operate remotely in inaccessible places: the robot could reproduce in the remote location the movements of an operator equipped with a wearable motion capture device while sending visual feedback to the operator. While substantial progress has been made on transferring ("retargeting") human motions to humanoid robots, a major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irreversibly disturb the operator, let alone a few seconds. To overcome these delays, we introduce a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears to be synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands. In our experiments, an operator was able to successfully control a humanoid robot (32 degrees of freedom) with stochastic delays up to 2 seconds in several whole-body manipulation tasks, including reaching different targets, picking up a bottle, and placing a box at distinct locations.
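As an illustration of the delay-compensation principle only, a naive stand-in for the learned predictor is a simple extrapolation of the last received joint commands (the published system instead queries a machine learning model trained on past trajectories):

```python
import numpy as np

def predict_ahead(commands, dt, horizon):
    """Extrapolate the last received commands `horizon` seconds into the
    future, so the robot can execute them before they formally arrive.
    Constant-velocity stand-in for the learned predictive model."""
    velocity = (commands[-1] - commands[-2]) / dt   # finite-difference velocity
    return commands[-1] + velocity * horizon

# Two latest joint configurations, received at 100 Hz with a 2 s delay.
history = np.array([[0.100, 0.200],
                    [0.101, 0.201]])
print(predict_ahead(history, dt=0.01, horizon=2.0))
```

A constant-velocity model diverges quickly over seconds, which is exactly why the actual system conditions a learned model on the past trajectory instead.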

This year, this line of work led to a publication 18 as well as a Youtube video.

Adaptive control of collaborative robots for preventing musculoskeletal disorders

Participants: Aya Yaacoub, Francis Colas, Vincent Thomas, Pauline Maurice.

This work is part of Pauline Maurice's ANR JCJC ROOIBOS project.

The use of collaborative robots in direct physical collaboration with humans constitutes a possible answer to musculoskeletal disorders: not only can they relieve the worker from heavy loads, but they could also guide them towards more ergonomic postures. In this context, one objective of the ROOIBOS Project is to build adaptive robot strategies that are optimal regarding productivity but also the long-term health and comfort of the human worker, by adapting the robot behavior to the human's physiological state.

To do so, we are developing tools to compute a robot policy that takes into account the long-term consequences of the biomechanical demands on the human worker's joints (joint loading) and distributes the efforts among the different joints during the execution of a repetitive task. The proposed platform merges within the same framework several works conducted in the LARSEN team, namely virtual human modeling and simulation, fatigue estimation, and decision making under uncertainty.

One important challenge for computing a cobot policy lies in the continuous nature of the action and the state space. This year, we have thus focused on how to select a small discrete set of relevant actions from the continuous action space and, to do so, we leveraged dominance relationships between robot actions in the fatigue space.

This year, this line of work led to several publications 26, 5.

Task-planning for human robot collaboration

Participants: Yang You, François Charpillet, Francis Colas, Olivier Buffet, Vincent Thomas.

Collaboration with Rachid Alami (CNRS Senior Scientist, LAAS Toulouse).

This work is part of the ANR project Flying Co-Worker (FCW) and focuses on high-level decision making for collaborative robotics. When a robot has to assist a human worker, it has no direct access to their current intention or preferences, and has to adapt its behavior to help the human complete their task.

We previously looked at this collaboration problem as a multi-agent problem and formalized it in the decentralized-POMDP framework, where a common reward is shared among the agents. However, the cost of solving this multi-agent problem is prohibitive and, even if the optimal joint policy could be built, it might be too complex to be realistically executed by a human worker.

To address the collaboration issue, we thus proposed this year to consider a single-agent problem by taking the robot's perspective and assuming the human is part of the environment and follows a known policy. In this context, building the robot's policy consists in computing its best response given the human policy, which can be formalized as a POMDP. This makes the problem computationally simpler, but the difficulty lies in choosing a relevant human policy for which the robot's best response can be built.

To synthesize the various behaviors the human may actually adopt, we assume that

  • they may follow one of several possible objectives;
  • they act as if controlling the robot, so as to ensure that they account for the robot's ability to help them; and
  • they pick actions using a "soft-max", i.e., sampling more valuable actions with higher probability, which allows accounting not only for the multiplicity of optimal actions, but also for the possibly sub-optimal action choices.
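The "soft-max" action choice of the last assumption can be sketched as follows (the action values here are a hypothetical placeholder):

```python
import numpy as np

def softmax_policy(q_values, beta=5.0):
    """Sample actions with probability increasing in their value
    (Boltzmann/soft-max); beta controls how close to greedy the
    choice is. Shifting by the max keeps the exponentials stable."""
    p = np.exp(beta * (q_values - np.max(q_values)))
    return p / p.sum()

# Two equally optimal actions get equal (high) probability, while the
# sub-optimal third action keeps a small but non-zero probability.
print(softmax_policy(np.array([1.0, 1.0, 0.2])))
```

This captures both properties mentioned above: ties between optimal actions are split evenly, and sub-optimal actions remain possible.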

Having generated such a human behavior (as a finite state controller (FSC)), the robot then has to solve a POMDP to find a "best response" solution behavior.

Experiments on a toy problem demonstrated that our approach is robust not only to a variety of synthetic human behaviors, but also to actual humans who interacted with our "robot" through a terminal game.

This year, this line of work led to several publications 29, 20, 21.

8.2.2 Human understanding

Prediction of human activity

Participants: Nima Mehdi, Francis Colas, Serena Ivaldi, Vincent Thomas.

This work is part of the ANR project Flying Co-Worker (FCW) and focuses on the perception of a human collaborator by a mobile robot for physical interaction. Physical interaction, for instance object hand-over, requires the precise estimation of the pose of the human, and in particular of their hands, with respect to the robot.

Last year we proposed a two-step particle filter to simultaneously estimate the pose and the posture of a human worker by leveraging the advantages of both sensors.

This year, we investigated the efficiency of generative models to estimate and predict sequences of human activities by leveraging a model of their duration. Instead of relying only on Hidden Markov Models (HMMs), which are limited to implicitly geometric state durations, we studied several Hidden Semi-Markov Models (HSMMs), which explicitly model the duration of state occupancy. We focused on two models with different assumptions regarding probabilistic independence between activity transitions and durations, namely Explicit Duration Hidden Markov Models (EDHMMs) and Variable Transition Hidden Markov Models (VTHMMs).
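A minimal generative sketch of the explicit-duration idea (with a hypothetical Poisson duration model and a two-activity cycle; the models used in the study may differ):

```python
import numpy as np

def sample_edhmm(trans, dur_means, n_steps, rng):
    """Sample an activity sequence from an explicit-duration HMM:
    each visited state emits for a duration drawn from its own
    duration model (here 1 + Poisson), instead of the geometric
    durations implied by a standard HMM's self-transitions."""
    seq, state = [], 0
    while len(seq) < n_steps:
        d = 1 + rng.poisson(dur_means[state])   # explicit state duration
        seq.extend([state] * d)
        state = rng.choice(len(trans), p=trans[state])  # next activity
    return seq[:n_steps]

trans = np.array([[0.0, 1.0],    # hypothetical 2-activity cycle:
                  [1.0, 0.0]])   # activity 0 <-> activity 1
print(sample_edhmm(trans, dur_means=[3, 5], n_steps=12,
                   rng=np.random.default_rng(0)))
```

Because durations are modeled separately from transitions, predicting when the current activity ends becomes a question about the duration distribution, which is what gives EDHMMs and VTHMMs their edge in prediction.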

Results showed that HMMs, EDHMMs and VTHMMs have similar performance for activity recognition, due to the use of pre-trained observations with high discriminative power. However, the prediction strength (evaluated as predicting the future activity based on past observations) is greater for EDHMMs and VTHMMs than for HMMs.

The next step of this work would be to combine those two approaches to track activities of a human worker by using different sensors and probabilistic models of activity durations.

This year, this line of work led to a publication 31.

Inverse optimal control (IOC) for human motion analysis

Participants: Jessica Colombel, David Daney (Auctus Bordeaux), François Charpillet.

This work is a collaboration with David Daney from Inria team Auctus in Bordeaux.

Inverse optimal control (IOC) is a framework used in many fields, especially in robotics and human motion analysis. In this context, various resolution methods have been proposed in the literature. This year, we presented Projected Inverse Optimal Control (PIOC) at the ICRA 2023 conference, an approach that offers a simple and comprehensive view of IOC methods. In particular, we explain how uncertainties can be properly addressed in our view. The article highlights how classical methods can be understood as projections of trajectories in the solution space of the underlying Direct Optimal Control (DOC) problem. This perspective allows for an examination of projections other than the classical methods, which can be fruitful for researchers in the field. As an example, we propose a projection that allows us to choose the underlying cost functions of an IOC problem from a set. The IOC sub-problems are also addressed, such as modeling observed trajectories, measurement noise, and the reliability of solutions obtained by IOC. Our proposal is supported by a simple and canonical example throughout the paper.

This year, this line of work led to a publication 10.

Pedestrian and vehicle motion prediction

Participants: Lina Achaji, François Charpillet.

This is a collaboration with François Aioun and Julien Moreau from Stellantis.

This work is done as part of the PhD thesis of Lina Achaji (started on March 1st, 2020) in the context of the OpenLab collaboration between Inria Nancy and Stellantis. The PhD relates to the development of autonomous vehicles in urban areas and addresses essential safety concerns for vulnerable road users (VRUs) such as pedestrians.

This year, we presented a paper at the 3rd Workshop on the Prediction of Pedestrian Behaviors for Automated Driving, held at the 26th IEEE International Conference on Intelligent Transportation Systems (ITSC 2023). Anticipating human actions in front of autonomous vehicles is a challenging task. Several papers have recently proposed model architectures to address this problem by combining multiple input features to predict pedestrian crossing actions. Our paper focuses specifically on using images of the pedestrian's context as an input feature. We present several spatio-temporal model architectures that use standard CNN and Transformer modules as a backbone for pedestrian anticipation. The objective of this paper is not to surpass state-of-the-art benchmarks but rather to analyze the positive and negative predictions of these models. We therefore provide insights on the explainability of vision-based Transformer models in the context of pedestrian action prediction. We highlight cases where the model achieves correct quantitative results but falls short in providing human-like explanations qualitatively, emphasizing the importance of investing in explainability for pedestrian action anticipation problems.

Lina Achaji defended her PhD thesis on 5th July 2023 (the thesis manuscript is under embargo).

This year, this line of work led to a publication 22.

8.2.3 Exoskeleton and ergonomics

Simulating operators' morphological variability for ergonomics assessment

Participants: Jacques Zhong, Pauline Maurice, Francis Colas.

This work is a collaboration with Vincent Weistroffer and Claude Andriot from CEA-LIST (the PhD of Jacques Zhong is funded by CEA).

Digital human models (DHM) are a powerful tool to assess the ergonomics of a workstation during the design process and to easily modify/optimize the workstation, without the need for a physical mock-up or lengthy human subject measurements. However, the morphological variability of workers is rarely taken into account: generally, only height and volume are considered, for reachability and space questions. But morphology has other effects, such as changing the way a person can perform the task, or changing the effort distribution in the body.

In this work, we aim to leverage dynamic simulation with a digital human model (animated using a quadratic programming controller) to simulate virtual assembly tasks for workers of any morphology. The key challenge is to transfer the task execution from one morphology to another.

This year, we proposed to couple a multi-task quality-diversity algorithm with digital human simulation, in order to optimize an ergonomics map of a DHM across various morphologies performing a given task in a specific environment. Indeed, workers of different morphologies may perform a task in different ways (using different motor strategies), so it is necessary to find the most ergonomic motor strategy for each morphology to assess the suitability of a workstation for any morphology. Our approach optimizes the motor strategy for a large number of morphologies while remaining computationally efficient.

This year, this line of work led to several publications 6, 38.

Motion prediction for active exoskeleton control

Participants: Alexandre Oliveira Souza, Francois Charpillet, Pauline Maurice, Serena Ivaldi.

This work is a joint PhD thesis with Safran (supervisors: Jordane Grenier and Christophe Guettier).

Occupational exoskeletons are a promising solution to physically assist people in strenuous tasks, such as load carrying. Compared to passive exoskeletons, active exoskeletons are more powerful and more versatile, so they can offer better assistance for a wide variety of tasks. However, their interaction with the user currently remains a problem: there is usually a delay in the assistance, and the selection of the assistance often remains manual. Hence, motion prediction could be a promising way to improve exoskeleton control by anticipating the required assistance.

In this work, we leverage machine learning techniques to predict the arm motion of the user in order to adapt the exoskeleton assistance. We rely only on sensors embedded in the exoskeleton to facilitate use outside of the lab. However, learning human motion models requires training data, which are expensive to collect in the case of human-exoskeleton interaction. Hence, we proposed to leverage dynamic simulation to generate synthetic training data: we simulate a digital mock-up of an exoskeleton, using a quadratic programming controller to track human motions from motion datasets recorded without an exoskeleton (the exoskeleton is thus virtually worn by a user). Since many such datasets (human motion without an exoskeleton) exist, this makes it easy to generate a large dataset of exoskeleton motions and the associated virtual sensor measurements. We then feed the virtual sensor data to a Long Short-Term Memory (LSTM) network to learn either motion models or directly the exoskeleton torque assistance. We demonstrated that this method can accurately predict either the assistive torque or the motion. This work received a best paper award at the IEEE ARSO conference.

This year, this line of work led to a publication 17.

Influence of non-biological motion pattern on human-robot physical collaboration

Participants: Pauline Maurice.

This work is in collaboration with the Action Lab of Northeastern University, USA (PI: Dagmar Sternad, PhD student: Mahdi Edraki).

In a previous work, we showed that when physically collaborating with a robot, the interaction is easier for the human when the robot moves according to a human-like motion pattern (compared to a non-human-like pattern). However, the motion profile of a robot can be constrained by the task or the environment and therefore cannot necessarily follow a human-like pattern. In this work, we conducted an experimental user study to assess if and how humans can improve their performance when collaborating with a robot that moves according to a non-human-like pattern. 41 subjects practiced a collaborative task with a robot over 3 days, with and without augmented feedback, with various motion profiles. We showed that humans can improve even with a non-biological profile, but only when augmented feedback is provided. This has important applications for the training and deployment of collaborative robots. We are currently investigating the features of motion that could explain why non-biological profiles are hard to anticipate or generate.

This year, this line of work led to several publications 12, 36.

Exoskeletons for firefighters

Participants: Pauline Maurice.

This work is a joint work with Guillaume Mornieux (Devah lab) and Sophie Lemonnier (Perseus lab) from University of Lorraine.

This is the follow-up of a study conducted in 2022 with the firefighters of Nancy, in which 14 firefighters performed a car-cutting maneuver both with and without a passive exoskeleton for shoulder support. We measured and analyzed biomechanical, physiological and cognitive metrics. We showed that the selected exoskeleton does not provide any significant biomechanical benefit, probably because of the large diversity of postures adopted by the firefighters. Surprisingly, however, the firefighters reported a positive perceived effect of the exoskeleton. The acceptance study was also very positive, which opens the door to the use of another, better-suited (and possibly ad hoc) exoskeleton.

This year, this line of work led to several publications 14, 16.

9 Bilateral contracts and grants with industry

9.1 Bilateral grants with industry

Two PhD grants (Cifre) with Stellantis

Participants: François Charpillet, Julien Uzzan, Lina Achaji.

Stellantis and Inria announced on July 5th, 2018 the creation of an OpenLab dedicated to artificial intelligence. The studied areas include autonomous and intelligent vehicles, mobility services, manufacturing, design development tools, design itself and digital marketing, as well as quality and finance. Two PhD programs were launched with the Larsen team in this context: one with Lina Achaji about pedestrian trajectory prediction, and one with Julien Uzzan about reinforcement learning. Julien defended his PhD in 2022 and Lina on July 5th, 2023. Both theses are under embargo for publication.

PhD grant with SAFRAN

Participants: François Charpillet, Pauline Maurice, Serena Ivaldi, Alexandre Oliveira Souza.

Collaboration with Jordane Grenier (Safran) and Christophe Guettier (Safran).

The thesis is funded by Safran to develop the AI-based control of their hybrid exoskeleton, based on the one developed in the DGA-Rapid project ASMOA. It consists in developing methods to predict the amount of assistance needed by the human in tasks involving payload manipulation.

PhD work co-advised with CEA-LIST

Participants: Jacques Zhong, Francis Colas, Pauline Maurice.

Collaboration with Vincent Weistroffer (CEA-LIST) and Claude Andriot (CEA-LIST).

This PhD work started in October, 2020. The objective is to develop a software tool that allows taking into account the diversity of workers' morphology when designing an industrial workstation. The developed tool will enable us to test the feasibility and ergonomics of a task for any morphology of workers, based on a few demonstrations of the task in virtual reality by one single worker. The two underlying scientific questions are i) the automatic identification of the task features from a few virtual reality (VR) demonstrations, and ii) the transfer of the identified task to digital human models of various morphologies.

10 Partnerships and cooperations

10.1 International research visitors

10.1.1 Visits to international teams

Research stays abroad

Serena Ivaldi and Jean-Baptiste Mouret spent 2 months (March and April) at the European Space Agency (Noordwijk, Netherlands) to work on teleoperated robots for space applications. They were hosted by Thomas Kruger and Gianfranco Visentin from the Automation and Robotics section. They were introduced to the constraints of space projects (both technical and administrative) and to how the current teleoperated robots work at the European Space Agency.

A Memorandum of Understanding between the European Space Agency and Inria is being discussed by the legal teams to initiate a common space robotics project.

10.2 European initiatives

10.2.1 Horizon Europe


euROBIN project on cordis.europa.eu

  • Title:
    European ROBotics and AI Network
  • Duration:
    From July 1, 2022 to June 30, 2026
  • Partners:
  • Inria contact:
    Serena Ivaldi
  • Coordinator:
    Prof. Dr. Alin Albu-Schäffer (DLR)
  • Summary:

    As robots are entering unstructured environments with a large variety of tasks, they will need to quickly acquire new abilities to solve them. Humans do so very effectively through a variety of methods of knowledge transfer – demonstration, verbal explanation, writing, the Internet. In robotics, enabling the transfer of skills and software between robots, tasks, research groups, and application domains will be a game changer for scaling up the robot abilities.

    euROBIN therefore proposes a threefold strategy: First, leading experts from the European robotics and AI research community will tackle the questions of transferability in four main scientific areas: 1) boosting physical interaction capabilities, to increase safety and reliability, as well as energy efficiency 2) using machine learning to acquire new behaviors and knowledge about the environment and the robot and to adapt to novel situations 3) enabling robots to represent, exchange, query, and reason about abstract knowledge 4) ensuring a human-centric design paradigm, that takes the needs and expectations of humans into account, making AI-enabled robots accessible, usable and trustworthy.

    Second, the relevance of the scientific outcomes will be demonstrated in three application domains that promise to have substantial impact on industry, innovation, and civil society in Europe: 1) robotic manufacturing for a circular economy 2) personal robots for enhanced quality of life 3) outdoor robots for sustainable communities. Advances are made measurable by collaborative competitions.

    Finally, euROBIN will create a sustainable network of excellence to foster exchange and inclusion. Software, data and knowledge will be exchanged over the EuroCore repository, designed to become a central platform for robotics in Europe.

    The vision of euROBIN is a European ecosystem of robots that share their data and knowledge and exploit their diversity to jointly learn to perform the endless variety of tasks in human environments.

10.3 National initiatives

10.3.1 ANR : The Flying Co-Worker

Participants: François Charpillet, Olivier Buffet, Francis Colas, Serena Ivaldi, Vincent Thomas.

  • Program:
  • Project title:
    Flying Co-Worker
  • Duration:
    October 2019 – October 2023
  • Coordinator:
    Daniel Sidobre (LAAS-CNRS, Toulouse)
  • Local coordinator:
    François Charpillet
  • Abstract:
    Bringing together recent progress in physical and decisional interaction between humans and robots with the control of aerial manipulators, this project addresses the flying co-worker, an aerial manipulator robot that acts as a teammate of a human worker to transport a long bar or to realize complex tasks. Safety and human-aware robot abilities are at the core of the proposed research, to progressively build robots capable of cooperative handling and of assisting a worker, notably by delivering objects directly in a safe, efficient, pertinent and acceptable manner. The methodologies developed for ground manipulators cannot be directly used for aerial manipulator systems because of the floating base, the limited payload, and strong actuation and energy constraints. From the perception and interpretation of the human activity, the objective of the project is to build an aerial manipulator capable of planning and controlling human-aware motions to achieve collaborative tasks.

10.3.2 ANR : PLASMA

Participant: Olivier Buffet.

  • Program:
  • Project acronym:
  • Project title:
    Planification et Apprentissage pour Agir dans des Systèmes Multi-Agents
  • Duration:
    February 2020 – October 2023
  • Coordinator:
    Jilles Dibangoye (INSA-Lyon)
  • Local coordinator:
    Olivier Buffet
  • Abstract:
    The main research goal is to develop a general theory and algorithms with provable guarantees for planning and (deep) reinforcement learning problems arising from the study of multi-agent sequential decision-making, which may be described as Partially Observable Stochastic Games (POSGs).
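By way of background (this is an illustrative sketch, not project code), the core operation underlying planning in a POMDP, the single-agent special case of the POSGs studied here, is the Bayesian belief update. A minimal version, with hypothetical names and a tiny hand-made model, could look like:

```python
# Illustrative sketch of a discrete POMDP belief update (hypothetical names).
# T[s][a][s2] is the transition probability P(s2 | s, a);
# O[a][s2][o] is the observation probability P(o | a, s2).

def belief_update(belief, action, observation, T, O):
    """Return the posterior belief after taking `action` and observing `observation`."""
    n = len(belief)
    new_belief = [
        O[action][s2][observation]
        * sum(T[s1][action][s2] * belief[s1] for s1 in range(n))
        for s2 in range(n)
    ]
    norm = sum(new_belief)  # probability of the observation given the belief and action
    if norm == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return [b / norm for b in new_belief]

# Example: two states, one action, deterministic dynamics, noisy sensor.
T = [[[1.0, 0.0]], [[0.0, 1.0]]]   # T[s][a][s2]
O = [[[0.8, 0.2], [0.3, 0.7]]]     # O[a][s2][o]
posterior = belief_update([0.5, 0.5], 0, 0, T, O)  # shifts mass toward state 0
```

In a POSG, each agent maintains such statistics over states and the other agents' information, which is what makes the multi-agent case so much harder than this single-agent update.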

10.3.3 ANR : Proxilearn

Participant: Jean-Baptiste Mouret.

  • Program:
  • Project acronym:
  • Project title:
    Learning for Proximity Flying
  • Duration:
    January 2020 – December 2023
  • Coordinator:
    Jean-Baptiste Mouret
  • Partner:
    • Institut des sciences du Mouvement, CNRS/Aix-Marseille Université
  • Summary:
    The Proxilearn project leverages artificial intelligence techniques to make it possible for a micro-UAV (10-20 cm / 50-80 g) to fly in very confined spaces (diameter between 40 cm and 1.5 m): air ducts, tunnels, natural caves, quarries, etc. It focuses on two challenges: (1) stabilizing a UAV in spite of the turbulence created by the interaction between the rotors and the environment, and (2) autonomous flight with very little light.

10.3.4 ANR : ROOIBOS

Participants: Pauline Maurice, Francis Colas, Vincent Thomas.

  • Program:
  • Project acronym:
  • Project title:
    User-Specific Adaptation of Collaborative Robot Motion for Improved Ergonomics
  • Duration:
    March 2021 – February 2025
  • Coordinator:
    Pauline Maurice (CNRS)
  • Summary:
    Collaborative robots have the potential to reduce work-related musculoskeletal disorders not only by decreasing the workers' physical load, but also by modifying and improving their postures. Imposing a sudden modification of one's movement can however be detrimental to the acceptance and efficacy of the human-robot collaboration. In ROOIBOS, we will develop a framework to plan user-specific trajectories for collaborative robots, to gradually optimize the efficacy of the collaboration and the long-term occupational health of the user. We will use machine learning and probabilistic methods to perform user-specific prediction of whole-body movements. We will define dedicated metrics to evaluate the movement ergonomic performance and intuitiveness. We will integrate those elements in a digital human simulation to plan a progressive adaptation of the robot motion accounting for the user's motor preferences. We will then use probabilistic decision-making to adapt the plan on-line to the user's motor adaptation capabilities. This will enable a smooth deployment of collaborative robots at work.

10.3.5 ANR : EpiRL

Participant: Olivier Buffet.

  • Program:
  • Project title:
    Apprentissage par renforcement épistémique
  • Duration:
    February 2023 – February 2027
  • Coordinator:
    François Schwarzentruber (IRISA, École normale supérieure de Rennes)
  • Abstract:

    The EpiRL project aims to investigate the combination of epistemic planning and reinforcement learning (RL) by proposing new algorithms that are efficient, adaptive, and capable of computing decisions relying on the theory of knowledge and belief. We expect this approach to yield efficient generation of epistemic plans while making the decisions made by RL algorithms explainable. Moreover, the algorithms of EpiRL will be tested and evaluated within a real application involving autonomous agents.

    The project will address the weaknesses of both epistemic planning and RL: on the one hand, existing epistemic planning algorithms are costly, do not adapt to the environment, and rely on concepts that are hand-crafted rather than learned; on the other hand, in reinforcement learning, agents adapt to their environments but are unable to reason about the beliefs of other agents. The newly developed algorithms will combine the strengths of both fields.

    We propose four work packages:

    1. studying representations of states;
    2. developing RL algorithms;
    3. studying representations of policies;
    4. validating our algorithms with our industrial partner DAVI. In particular, we aim at developing a debunking chatbot whose use case will be raising awareness about environmental issues.

11 Dissemination

Participants: Amine Boumaza, Olivier Buffet, Francis Colas, Serena Ivaldi, Pauline Maurice, Enrico Mingo Hoffman, Jean-Baptiste Mouret, Alexis Scheuer, Vincent Thomas.

11.1 Promoting scientific activities

11.1.1 Scientific events: organisation

Member of the organizing committees
  • Amine Boumaza for "Drôles d'objets - Un nouvel art de faire 2023" in Nancy.
  • Jean-Baptiste Mouret co-organized the workshop on Quality Diversity Algorithm Benchmarks at GECCO 2023.
  • Serena Ivaldi co-organized two workshops at IEEE IROS 2023 (Workshop on Assistive robotics for citizens) and IEEE ICRA 2023 (Workshop Toward Robot Avatars). She was also the chair of the track "Handicap & Numérique" at the Journées Scientifiques Inria 2023.
  • Pauline Maurice was Publication Chair for the conference IEEE ARSO 2023.
  • Olivier Buffet co-organized the Journées INRAE - Inria 2023, 5-6 July 2023, Champenoux.

11.1.2 Scientific events: selection

Chair of conference program committees
  • Serena Ivaldi was Program Chair of IEEE ARSO 2023, Senior Editor of IEEE HUMANOIDS 2023, member of the HUMANOIDS 2023 Senior Program Committee, and co-chair of the Best Student Paper Award committee at IEEE ICRA 2023.
  • Jean-Baptiste Mouret was co-chair of the Evolutionary Machine Learning track of GECCO'2023.
Member of conference program committees
  • Program Committee Member of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023) [Olivier Buffet 4, Francis Colas]
  • Program Committee Member of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023) [Olivier Buffet]
  • Program Committee Member of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023) [Olivier Buffet]
  • Program Committee Member of the 26th European Conference on Artificial Intelligence (ECAI 2023) [Olivier Buffet, Francis Colas, Vincent Thomas]
  • Program Committee Member of the Journées d'Intelligence Artificielle Fondamentale et Journées Francophones Planification, Décision et Apprentissage (JIAF-JFPDA 2023) [Olivier Buffet, Vincent Thomas]
  • Program Committee Member of the ACM Genetic and Evolutionary Computation Conference (GECCO'23) [Amine Boumaza, Jean-Baptiste Mouret]
  • Program Committee Member of the IEEE Congress On Evolutionary Computation (CEC'23) [Amine Boumaza]
  • Associate Editor for the IEEE/RAS International Conference on Humanoid Robots 2023 (HUMANOIDS) [Pauline Maurice]
  • Associate Editor for the IEEE/RAS International Conference on Robotics and Automation 2023 (ICRA) [Pauline Maurice]
  • IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) [Olivier Buffet, Francis Colas]
  • IEEE Conference on Advanced Robotics and its Social Impact (ARSO 2023) [Olivier Buffet, Francis Colas, Serena Ivaldi]
  • International Conference on Advanced Robotics (ICAR 2023) [Francis Colas]
  • IEEE International Conference on Robotics and Automation (ICRA 2024) [Francis Colas, Pauline Maurice, Enrico Mingo Hoffman]
  • IEEE International Conference on Robotics and Biomimetics (ROBIO 2023) [Pauline Maurice]

11.1.3 Journal

Member of the editorial boards
  • Serena Ivaldi is Editor-in-Chief of the Springer International Journal of Social Robotics
  • Jean-Baptiste Mouret is associate editor for ACM Transactions on Evolutionary Learning and Optimization (TELO)
  • Pauline Maurice is associate editor for IEEE Transactions on Neural Systems and Rehabilitation Engineering (TNSRE)
  • Enrico Mingo Hoffman is associate editor for IEEE Robotics and Automation Letters (RA-L)
  • Enrico Mingo Hoffman is associate editor for the International Journal of Robotics Research (IJRR)
Reviewer - reviewing activities
  • Journal of Artificial Intelligence Research (JAIR) [Olivier Buffet]
  • IEEE Robotics and Automation Letters (RA-L) [Francis Colas, Pauline Maurice, Enrico Mingo Hoffman]
  • Transactions on Pattern Analysis and Machine Intelligence (TPAMI) [Olivier Buffet]
  • ACM Transactions on Evolutionary Learning and Optimization [Jean-Baptiste Mouret]
  • Nature [Jean-Baptiste Mouret]
  • Robotics and Autonomous Systems [Pauline Maurice]

11.1.4 Invited talks

  • Serena Ivaldi was a keynote speaker at IEEE IROS 2023 and gave invited talks at three IEEE IROS 2023 workshops. She was also an invited speaker at Wearacon Europe 2023.

11.1.5 Leadership within the scientific community

  • Serena Ivaldi was co-chair of the IEEE/RAS ICRA Steering Committee; Associate Vice-President of the IEEE/RAS Member Activities Board (MAB); member of the IEEE/RAS Education Committee; and co-leader of the Working Group on Humanoid Robots (GT7 Humanoïde) of the French Robotics Society (GDR Robotique) until summer 2023.

11.1.6 Scientific expertise

  • Olivier Buffet reviewed a project proposal for the ANR.
  • Pauline Maurice was a member of an ANR reviewing committee.
  • Serena Ivaldi was a reviewer of the European project FELICE (2nd project evaluation); she is on the advisory board of two European projects (Agimus and PILLAR-Robots); and she reviewed two ERC project submissions (ERC Advanced and ERC Synergy).

11.1.7 Research administration

  • Amine Boumaza and Francis Colas are members of the comité de centre of Inria Centre at Université de Lorraine.
  • Francis Colas was member of the Evaluation Commission of Inria (until Aug. 2023).
  • Francis Colas was member of the hiring committee for junior research scientists at Inria Centre at Rennes University.
  • Francis Colas was member of the hiring committee for junior research scientists at Inria Centre at the University Grenoble Alpes.
  • Francis Colas is member of the Comipers committee of Inria Centre at Université de Lorraine.
  • Serena Ivaldi is member of Bureau du Comité des Projets (BCP) of Inria Centre at Université de Lorraine.
  • Enrico Mingo Hoffman is corresponding chair of the IEEE-RAS Technical Committee on Whole-Body Control.
  • Jean-Baptiste Mouret is head of the Department 5 of the LORIA (UMR 7503).

11.2 Teaching - Supervision - Juries

11.2.1 Teaching

  • Master: Vincent Thomas is co-responsible for the parcours “Apprentissage, Vision, Robotique” of the Computer Science Master, Univ. Lorraine, and co-led the evolution of this parcours for the new training offer of Univ. Lorraine.
  • Master: Amine Boumaza, “Méta-heuristiques et recherche locale stochastique”, 30h eq. TD, M1 informatique, Univ. Lorraine, France.
  • Master: Amine Boumaza, “Modélisation de Phénomènes Biologiques”, 12h eq. TD, M1 Sciences Cognitives, Univ. Lorraine, France.
  • Tutorial: Olivier Buffet, “Planification dans l'incertain”, 2h CM, CNRS Formation Entreprise.
  • Master: Francis Colas, “Planification de trajectoires”, 15h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Francis Colas, “ROS”, 6h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Francis Colas, “Intégration méthodologique”, 36h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Serena Ivaldi, “Analyse Comportementale”, 16h CM/TD, M2 “Sciences Cognitives”, Univ. Lorraine, France.
  • Master: Pauline Maurice, “Analyse Comportementale”, 12h CM/TP, M2 “Sciences Cognitives”, Univ. Lorraine, France.
  • Master: Jean-Baptiste Mouret, “Quality Diversity”, 3h (M2 Innovation, Mines Paristech)
  • Tutorial: Jean-Baptiste Mouret, “Quality Diversity Optimization”, 3h, ACM GECCO 2020 (with A. Cully, Imperial College, and S. Doncieux, Sorbonne Université).
  • Master: Alexis Scheuer, “Introduction à la robotique autonome”, 30h eq. TD, M1 informatique, Univ. Lorraine, France.
  • Master: Alexis Scheuer, “Modélisation et commande en robotique”, 16h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Alexis Scheuer, “Éléments de robotique”, 4h eq. TD, Master MEEF 2d degré, INSPÉ, Univ. Lorraine, France.
  • Master: Vincent Thomas, “Apprentissage et raisonnement dans l'incertain”, 15h eq. TD, M2 “Apprentissage, Vision, Robotique”, Univ. Lorraine, France.
  • Master: Vincent Thomas, “Game Design”, 30h eq. TD, M1 Sciences Cognitives, Univ. Lorraine, France.
  • Master: Vincent Thomas, “Agent intelligent”, 15h eq. TD, M1 Sciences Cognitives, Univ. Lorraine, France.

11.2.2 Supervision

  • PhD: Yoann Fleytoux, “Human-guided manipulation learning of irregular objects”, 2023-12-01, Jean-Baptiste Mouret (advisor), Serena Ivaldi (co-advisor) 28
  • PhD: Yang You, “Probabilistic Decision-Making Models for Multi-Agent Systems and Human-Robot Collaboration”, 2023-02-22, Olivier Buffet (advisor), Vincent Thomas (co-advisor) 29
  • PhD: Lina Achaji, Cifre with PSA, “Modélisation de systèmes dynamiques par des réseaux de neurones à mémoire courte et longue : application pour la prédiction de l'état d'environnement routier”, 2023-07-05, François Charpillet (advisor), François Aioun (Co-advisor, PSA), confidential.
  • PhD in progress: Nima Mehdi, “Perception et interprétation de l'activité humaine”, started in May 2020, Francis Colas (advisor).
  • PhD in progress: Timothée Anne, “Meta-learning for adaptive whole-body control”, started in September 2020, Jean-Baptiste Mouret (advisor).
  • PhD in progress: Jacques Zhong, “Prise en compte de la variabilité de morphologie de l'opérateur dans des tâches de montage simulées en réalité virtuelle”, started in October 2020, Francis Colas (advisor), Pauline Maurice (co-advisor), Vincent Weistroffer (co-advisor, CEA-LIST).
  • PhD in progress: Abir Bouaouda, “Apprentissage automatique pour le contrôle des systèmes complexes. Application aux robots à câbles”, started in October 2020, Mohamed Boutayeb (advisor, CRAN), Dominique Martinez (co-advisor)
  • PhD in progress: Raphaël Bousigues, “Collaborative robots as a tool for optimizing skill acquisition through the appropriate use of human motor variability”, started in December 2020, Pauline Maurice (co-advisor), Vincent Padois (advisor, INRIA Bordeaux), David Daney (co-advisor, INRIA Bordeaux).
  • PhD in progress: Aya Yaacoub, “Planification individu-spécifique du comportement d’un robot collaboratif pour la prévention des troubles musculo-squelettiques”, started in December 2021, Francis Colas (advisor), Pauline Maurice (co-advisor).
  • PhD in progress: Alexandre Oliveira Souza, “Intelligence Artificielle et contrôle de systèmes interactifs : application aux exosquelettes”, started in October 2022, François Charpillet (advisor), Pauline Maurice (co-advisor).
  • PhD in progress: Salomé Lepers, “Explicabilité et interprétabilité en planification probabiliste”, started in October 2022, Olivier Buffet (advisor), Vincent Thomas (co-advisor).
  • PhD in progress: Dionis Totsila, EuROBIN project, “Learning Bimanual robot skills from human demonstrations and natural language”, started in November 2023, Serena Ivaldi (advisor) and Jean-Baptiste Mouret (co-advisor).

11.2.3 Juries

  • Olivier Buffet was
    • reviewer for the PhD of Yassine El Manyari (Université de Nantes)
  • Francis Colas was
    • examiner for the PhD of Jérôme Truc (Université de Toulouse)
  • Vincent Thomas was
    • examiner for the PhD of Giorgio Angelotti (Université de Toulouse Midi-Pyrénées)
    • examiner for the PhD of Junkang Li (Université de Caen Normandie)
  • Jean-Baptiste Mouret was:
    • reviewer for the PhD of Michel Aractingi (CNRS-LAAS & INSA Toulouse)
  • Serena Ivaldi was:
    • reviewer for the PhD theses of Angela Mazzeo (Scuola Superiore Sant'Anna, Italy) and Trifun Savic (Inria Paris & Sorbonne Université)
    • examiner for the PhD theses of Samuel Beaussant (Université Clermont Auvergne) and Nuno Duarte (IST, University of Lisbon, Portugal)

11.3 Popularization

11.3.1 Internal or external Inria responsibilities

  • Amine Boumaza is a member of the editorial board of Interstices

11.3.2 Articles and contents

  • Francis Colas, Serena Ivaldi and Jean-Baptiste Mouret were interviewed for l'Est Républicain
  • Jean-Baptiste Mouret was interviewed by France Info (France Info Junior)
  • Jean-Baptiste Mouret was interviewed by the German TV Channel Saarländischer Rundfunk
  • Jean-Baptiste Mouret was a guest on the Arte TV show “Scope” (2.5 hours)
  • Pauline Maurice gave an interview for the podcast "Plus tard je serai"
  • Serena Ivaldi was interviewed by el Diario (Spain)

11.3.3 Education

  • Vincent Thomas
    • proposed one workshop for future teachers during journée SNT-NSI 2023: “reinforcement learning”;
    • participated as a jury member of the scientific Game Jam organized by the “Ecole des Mines de Nancy” and Jean Lamour Institute;
    • participated at the “fête de la science” by co-animating an introductory robotics workshop for pupils.
  • Olivier Buffet participated at the “fête de la science” by co-animating an introductory robotics workshop for pupils.
  • Alexis Scheuer participated at the “fête de la science” by organizing and co-animating an introductory robotics workshop for pupils.

12 Scientific production

12.1 Major publications

  • 1. Antoine Cully, Jeff Clune, Danesh Tarapore and Jean-Baptiste Mouret. Robots that can adapt like animals. Nature, 521(7553), May 2015, 503-507.
  • 2. Jilles Steeve Dibangoye, Christopher Amato, Olivier Buffet and François Charpillet. Optimally Solving Dec-POMDPs as Continuous-State MDPs. Journal of Artificial Intelligence Research, 55, February 2016, 443-497.

12.2 Publications of the year

International journals

International peer-reviewed conferences

  • 7. Timothee Anne and Jean-Baptiste Mouret. Multi-Task Multi-Behavior MAP-Elites. GECCO '23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation, Lisbon, Portugal, 2023.
  • 8. Thais Bernardi, Yoann Fleytoux, Jean-Baptiste Mouret and Serena Ivaldi. Learning height for top-down grasps with the DIGIT sensor. IEEE RAS International Conference on Robotics and Automation (ICRA), London, United Kingdom, 2023.
  • 9. Abir Bouaouda, Rémi Pannequin, François Charpillet, Dominique Martinez and Mohamed Boutayeb. Dynamic modeling and AI-based control of a cable-driven parallel robot. 22nd IFAC World Congress (IFAC 2023), Yokohama, Japan, July 2023.
  • 10. Jessica Colombel, David Daney and François Charpillet. Holistic view of Inverse Optimal Control by introducing projections on singularity curves. 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom, IEEE, May 2023, 12240-12246.
  • 11. Aurélien Delage, Olivier Buffet and Jilles Dibangoye. Global min-max Computation for α-Hölder Games. Proceedings of the 35th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2023), Atlanta, Georgia, United States, IEEE Computer Society, November 2023, 518-525.
  • 12. Mahdiar Edraki, Pauline Maurice and Dagmar Sternad. Humans Need Augmented Feedback to Physically Track Non-Biological Robot Movements. 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom, IEEE, 2023.
  • 13. Serena Ivaldi and Edoardo Ghini. Teleoperating a robot for removing asbestos tiles on roofs: insights from a pilot study. 2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), Berlin, Germany, June 2023.
  • 14. Sophie Lemonnier, Laura Cavagnac, Nathan Kohili, Kévin Bouillet, Pauline Maurice and Guillaume Mornieux. Sapeurs-pompiers et désincarcération : effet du port d'un exosquelette sur la charge cognitive et l'acceptation. EPIQUE 2023 - 12ème colloque de Psychologie Ergonomique, Paris, France, 2023.
  • 15. Salomé Lepers, Vincent Thomas and Olivier Buffet. Comment rendre des comportements plus prédictibles. JIAF-JFPDA 2023 - Journées d'Intelligence Artificielle Fondamentale, Strasbourg, France, July 2023.
  • 16. Pauline Maurice, Sophie Lemonnier, Nathan Kohili, Laura Cavagnac and Guillaume Mornieux. Biomechanical effects of using a passive upper-limb exoskeleton to assist firefighters during vehicle extrication maneuver. 48ème congrès de la Société de Biomécanique, Grenoble, France, October 2023.
  • 17. Alexandre Oliveira Souza, Jordane Grenier, François Charpillet, Pauline Maurice and Serena Ivaldi. Towards data-driven predictive control of active upper-body exoskeletons for load carrying. 2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), Berlin, Germany, 2023.
  • 18. Luigi Penco, Jean-Baptiste Mouret and Serena Ivaldi. Prescient Teleoperation of Humanoid Robots. 2023 IEEE-RAS International Conference on Humanoid Robots, Austin, United States, IEEE, December 2023.
  • 19. Quentin Rouxel, Ruoshi Wen, Zhibin Li, Carlo Tiseo, Jean-Baptiste Mouret and Serena Ivaldi. Feasibility Retargeting for Multi-contact Teleoperation and Physical Interaction. 2nd Workshop Toward Robot Avatars, 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom, June 2023.
  • 20. Yang You, Vincent Thomas, Francis Colas, Rachid Alami and Olivier Buffet. Robust Robot Planning for Human-Robot Collaboration. 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom, IEEE, May 2023, 9793-9799.
  • 21. Yang You, Vincent Thomas, Francis Colas and Olivier Buffet. Monte-Carlo Search for an Equilibrium in Dec-POMDPs. 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023), Pittsburgh, PA, United States, May 2023.

Conferences without proceedings

  • 22. Lina Achaji, Julien Moreau, François Aioun and François Charpillet. Analysis over vision-based models for pedestrian action anticipation. 3rd Workshop on the Prediction of Pedestrian Behaviors for Automated Driving @ 26th IEEE International Conference on Intelligent Transportation Systems (ITSC 2023), Bilbao, Spain, May 2023.
  • 23. Aurélien Delage, Olivier Buffet and Jilles Dibangoye. Global Min-Max Computation for α-Hölder Zero-Sum Games. GAIW 2023 - 5th Games, Agents, and Incentives Workshop, London, United Kingdom, 2023, 1-9.
  • 24. Aurélien Delage, Olivier Buffet, Jilles Dibangoye and Abdallah Saffidine. Heuristic Search Value Iteration can solve zero-sum Partially Observable Stochastic Games. MSDM 2023 - 11th Multiagent Sequential Decision Making under Uncertainty Workshop, held as part of the workshops of AAMAS 2023 - 22nd International Conference on Autonomous Agents and Multiagent Systems, London, United Kingdom, May 2023.
  • 25. Jean-Baptiste Mouret. Fast generation of centroids for MAP-Elites. GECCO '23 Companion: Companion Conference on Genetic and Evolutionary Computation, Lisbon, Portugal, ACM, July 2023, 155-158.
  • 26. Aya Yaacoub, Vincent Thomas, Francis Colas and Pauline Maurice. Fatigue Mitigation with Cobots: Investigating the Cobot Workspace. Journée des Jeunes Chercheurs en Robotique, Moliets-et-Maa, France, October 2023.

Edition (books, proceedings, special issue of a journal)

  • 27. Maison Intelligente. Revue Ouverte d'Intelligence Artificielle, 4(1), May 2023, 1-156. URL: https://roia.centre-mersenne.org/volume/ROIA_2023__4_1/

Doctoral dissertations and habilitation theses

Reports & preprints

Other scientific publications

12.3 Cited publications

  • 39. Andrea Del Prete, Francesco Nori, Giorgio Metta and Lorenzo Natale. Prioritized Motion-Force Control of Constrained Fully-Actuated Robots: "Task Space Inverse Dynamics". Robotics and Autonomous Systems, 2014. URL: http://dx.doi.org/10.1016/j.robot.2014.08.016
  • 40. Mihai Andries and François Charpillet. Multi-robot taboo-list exploration of unknown structured environments. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, September 2015.
  • 41. Mauricio Araya-López, Olivier Buffet, Vincent Thomas and François Charpillet. A POMDP Extension with Belief-dependent Rewards. Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, MIT Press, December 2010.
  • 42. Tim Barfoot, Jonathan Kelly and Gabe Sibley. Special Issue on Long-Term Autonomy. The International Journal of Robotics Research, 32(14), 2013, 1609-1610.
  • 43. Abdallah Dib and François Charpillet. Pose Estimation For A Partially Observable Human Body From RGB-D Cameras. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, September 2015.
  • 44. Arsène Fansi Tchango, Vincent Thomas, Olivier Buffet, Fabien Flacher and Alain Dutech. Simultaneous Tracking and Activity Recognition (STAR) using Advanced Agent-Based Behavioral Simulations. Proceedings of the Twenty-first European Conference on Artificial Intelligence (ECAI), Prague, Czech Republic, August 2014.
  • 45. Iñaki Fernández Pérez, Amine Boumaza and François Charpillet. Comparison of Selection Methods in On-line Distributed Evolutionary Robotics. ALIFE 14: The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, New York, United States, July 2014.
  • 46. Yoann Fleytoux, Anji Ma, Serena Ivaldi and Jean-Baptiste Mouret. Data-efficient learning of object-centric grasp preferences. 2022 International Conference on Robotics and Automation (ICRA), IEEE, 2022, 6337-6343.
  • 47. Udo Frese. Interview: Is SLAM Solved? KI - Künstliche Intelligenz, 24(3), 2010, 255-257. URL: http://dx.doi.org/10.1007/s13218-010-0047-x
  • 48. J. Kober, J. A. Bagnell and J. Peters. Reinforcement Learning in Robotics: A Survey. The International Journal of Robotics Research, August 2013.
  • 49. Sylvain Koos, Antoine Cully and Jean-Baptiste Mouret. Fast damage recovery in robotics with the T-resilience algorithm. The International Journal of Robotics Research, 32(14), 2013, 1700-1723.
  • 50. Jean-Baptiste Mouret and Jeff Clune. Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909, 2015.
  • 51. Jean-Baptiste Mouret and Glenn Maguire. Quality diversity for multi-task optimization. Proceedings of the 2020 Genetic and Evolutionary Computation Conference (GECCO), 2020, 121-129.
  • 52. François Pomerleau, Philipp Krüsi, Francis Colas, Paul Furgale and Roland Siegwart. Long-term 3D map maintenance in dynamic environments. 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, 3712-3719.
  • 53. SPARC. Robotics 2020 Multi-Annual Roadmap. Technical report, 2014. URL: http://www.eu-robotics.net/ppp/objectives-of-our-topic-groups/
  • 54. J. Shah, J. Wiken, B. Williams and C. Breazeal. Improved human-robot team performance using Chaski, a human-inspired plan execution system. ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2011, 29-36.
  • 55. Olivier Simonin, Thomas Huraux and François Charpillet. Interactive Surface for Bio-inspired Robotics, Re-examining Foraging Models. 23rd IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Boca Raton, United States, IEEE, November 2011.
  • 56. N. Stefanov, A. Peer and M. Buss. Role determination in human-human interaction. 3rd Joint EuroHaptics Conference and World Haptics, 2009, 51-56.
  • 57. R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
  • 58. A. Tapus, M. J. Matarić and B. Scassellati. The grand challenges in Socially Assistive Robotics. IEEE Robotics and Automation Magazine - Special Issue on Grand Challenges in Robotics, 14(1), 2007, 1-7.
  • 59. Vassilis Vassiliades, Konstantinos Chatzilygeroudis and Jean-Baptiste Mouret. Scaling Up MAP-Elites Using Centroidal Voronoi Tessellations. arXiv:1610.05729, 2016.
  • 60. D. H. Wilson and C. Atkeson. Simultaneous Tracking and Activity Recognition (STAR) Using Many Anonymous, Binary Sensors. Lecture Notes in Computer Science, 3468, 2005, 62-79. URL: http://dx.doi.org/10.1007/11428572_5
  • 61. Gregor Wolbring and Sophya Yumakulov. Social Robots: Views of Staff of a Disability Service Organization. International Journal of Social Robotics, 6(3), 2014, 457-468.
  1. See the Robotics 2020 Multi-Annual Roadmap [53].
  2. OHS (Office d'Hygiène Sociale) is an association managing several rehabilitation or retirement home structures.
  3. See the Robotics 2020 Multi-Annual Roadmap [53], section 2.5.
  4. Distinguished PC member.