

Section: New Results

Natural Interaction with Robotics Systems

Control of Interaction

Towards Human-aware Whole-Body Controllers for Physical Human-Robot Interaction

Participants : Oriane Dermy, Serena Ivaldi.

The success of robots in real-world environments largely depends on their ability to interact with both humans and the environment itself. The FP7 EU project CoDyCo addressed the latter challenge by exploiting both rigid and compliant contact dynamics in the robot control problem. As for the former, properly managing interaction dynamics on the robot control side requires an estimate of the human's behaviours and intentions. We contributed to the building blocks of such a human-in-the-loop controller and validated them both in simulation and on the iCub humanoid robot, for the final demo of the CoDyCo project, in which a human assists the robot in standing up from a bench.

The controller is the basis for our current research in the AnDy project.

Publications: [20]

Generating Motions for a Humanoid Robot that Assists a Human in a Co-manipulation Task

Participants : Karim Bouyarmane, Kazuya Otani, Serena Ivaldi.

We proposed a method that allows a humanoid robot, in simulation, to adapt its motion so as to help a human collaborator realize a collaborative manipulation task with the robot, while the robot computes its own configuration in real time through symmetric retargeting.

Publications: [40]

Human-to-humanoid Motion Retargeting

Participants : Karim Bouyarmane, Kazuya Otani.

We continued developing our human-to-humanoid motion retargeting method, extending it to whole-body manipulation motions based on our previously proposed multi-robot QP paradigm. The retargeting system can now autonomously adapt the robot's motion to dynamics parameters of the manipulated object that differ substantially from those of the human demonstration.
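
As a rough illustration of the QP machinery involved (a sketch under simplifying assumptions, not our multi-robot QP formulation): a single unconstrained retargeting step can be written as a damped least-squares problem that tracks a desired task-space velocity with joint velocities. Real whole-body controllers add contact, joint-limit, and inter-robot constraints on top of this objective; the Jacobian, damping gain, and dimensions below are purely illustrative.

```python
import numpy as np

def retarget_step(J, xdot_des, lam=1e-2):
    """One damped least-squares step: the unconstrained core of a
    whole-body QP, minimizing ||J qdot - xdot_des||^2 + lam ||qdot||^2."""
    n = J.shape[1]
    H = J.T @ J + lam * np.eye(n)   # QP Hessian (damped for regularity)
    g = J.T @ xdot_des              # linear term
    return np.linalg.solve(H, g)    # optimal joint velocities

# Toy 2x3 task Jacobian: 2-D task, 3 joints (redundant arm)
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
qdot = retarget_step(J, np.array([0.2, -0.1]))
```

With a small damping term, the resulting joint velocities reproduce the desired task-space velocity almost exactly; constraints in a full QP would trade off this tracking against feasibility.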

Publications: [31]

Non-verbal Interaction

Multimodal Prediction of Intention via Probabilistic Movement Primitives (ProMP)

Participants : François Charpillet, Oriane Dermy, Serena Ivaldi.

We designed a method for predicting the intention of a user interacting (physically or not) with the humanoid robot iCub, and implemented the associated open-source software (cf. ProMP_iCub in the Software section). Our goal is to allow the robot to infer the intention of its human partner during collaboration by predicting the future intended trajectory: this capability is critical for designing the anticipatory behaviors needed in human–robot collaborative scenarios such as co-manipulation, cooperative assembly, or transportation.

Our approach endows the iCub with basic intention-recognition capabilities based on Probabilistic Movement Primitives (ProMPs), a versatile method for representing, generalizing, and reproducing complex motor skills. The robot learns a set of motion primitives from several demonstrations provided by the human via physical interaction. During training, we model the collaborative scenario using these human demonstrations. During the reproduction of the collaborative task, the acquired knowledge is used to recognize the intention of the human partner. From a few early observations of the state of the robot, we can not only infer the intention of the partner but also complete the movement, even if the user breaks the physical interaction with the robot.

We evaluated our approach both in simulation and on the real iCub robot. We also proposed a method to exploit referential gaze and combine it with physical interaction to improve the prediction of primitives. The software implementing our approach is open source and available on the GitHub platform, together with tutorials and videos.
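
The prediction step can be sketched numerically. The following is a minimal, illustrative ProMP sketch (not the ProMP_iCub implementation): each demonstrated trajectory is projected onto radial-basis features, a Gaussian over the feature weights is learned from the demonstrations, and conditioning that Gaussian on a few early observations completes the rest of the movement. The basis size, noise levels, and synthetic sine-wave demonstrations are assumptions for illustration.

```python
import numpy as np

def rbf_features(t, n_basis=8, width=0.02):
    # Normalized radial-basis features over the phase t in [0, 1]
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :])**2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

# --- Training: learn a Gaussian over basis weights from demonstrations ---
T = 100
t = np.linspace(0, 1, T)
Phi = rbf_features(t)                                   # (T, n_basis)
rng = np.random.default_rng(0)
demos = [np.sin(np.pi * t) + 0.05 * rng.standard_normal(T)
         for _ in range(20)]                            # synthetic demos
W = np.array([np.linalg.lstsq(Phi, y, rcond=None)[0] for y in demos])
mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T)             # weight distribution

# --- Prediction: condition on a few early observations, complete the rest ---
n_obs, sig_y = 15, 0.02
Phi_o, y_o = Phi[:n_obs], demos[0][:n_obs]              # early observations
K = Sigma_w @ Phi_o.T @ np.linalg.inv(
    Phi_o @ Sigma_w @ Phi_o.T + sig_y**2 * np.eye(n_obs))
mu_post = mu_w + K @ (y_o - Phi_o @ mu_w)               # Gaussian conditioning
pred = Phi @ mu_post                                    # completed trajectory
```

Recognizing which of several primitives the partner intends then amounts to comparing the likelihood of the early observations under each primitive's weight distribution.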

Publications: [15]

PsyPhINe: Cogito Ergo Es

Participant : Amine Boumaza.

PsyPhINe is an interdisciplinary and exploratory project (see 9.1.2) between philosophers, psychologists and computer scientists. The project deals with cognition and behavior; cognition is a set of processes that are difficult to unite in a general definition. The project explores the attribution of intelligence or intentionality, assuming that our intersubjectivity and our natural tendency to anthropomorphize play a central role: we project onto others parts of our own cognition. To test these hypotheses, we aim to design a “non-verbal” Turing test that satisfies the definitions of our various fields (psychology, philosophy, neuroscience and computer science), using a robotic prototype. Among the questions we aim to answer: is it possible to give the illusion of cognition and/or intelligence through such a technical device? How elaborate must the control algorithms, or “behaviors”, of such a device be to fool test subjects? How many degrees of freedom must it have?

This year an experimental campaign was organized in which around 40 test subjects were asked to solve a task in front of the moving robotic device. These interactions were recorded on video along with eye-tracking data. To analyze the data, a web application was created that crowd-sources video annotation to internet users. A preliminary analysis of the data was presented at the third edition of the PsyPhINe workshop organized by the group, which gathered top researchers from philosophy, anthropology, psychology and computer science to discuss and exchange on our methodology (see 10.1.1.1).

Active Audio Source Localization

Participants : François Charpillet, Francis Colas, Van Quan Nguyen.

We collaborate on this subject with Emmanuel Vincent from the Multispeech team (Inria Nancy - Grand Est).

We considered the task of audio source localization using a microphone array on a mobile robot. Active localization algorithms proposed in the literature can estimate the 3D position of a source by fusing the measurements taken at different poses of the robot. However, the robot movements are typically fixed or obey heuristic strategies, such as turning the head and moving towards the source, which may be suboptimal. This work proposes controlling the robot movements so as to locate the source as quickly as possible, using the Monte-Carlo Tree Search algorithm [30]. We represent the belief about the source with our mixture Kalman filter, which explicitly includes the discrete activity of the source in the estimated state vector, alongside continuous states such as the positions of the robot and the sound source.
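
To illustrate the idea of a belief mixing a discrete activity state with continuous ones, the sketch below maintains, in 1-D, a Gaussian over the source position together with a probability that the source is currently emitting. This is a simplified, moment-matched approximation for illustration only, not the filter of [30]; the measurement model, noise parameters, and clutter model are all assumptions.

```python
import numpy as np

def mixture_update(mu, var, p_active, z, robot_x,
                   sig_meas=0.1, sig_clutter=2.0):
    """Update a (Gaussian position, activity probability) belief with one
    measurement z, taken from a robot at known position robot_x."""
    # Mode 1: source active -> z measures (source - robot) plus noise
    innov = z - (mu - robot_x)
    s = var + sig_meas**2
    lik_active = np.exp(-0.5 * innov**2 / s) / np.sqrt(2 * np.pi * s)
    # Mode 2: source silent -> z is broadband clutter (broad likelihood)
    lik_silent = (np.exp(-0.5 * z**2 / sig_clutter**2)
                  / np.sqrt(2 * np.pi * sig_clutter**2))
    # Discrete posterior over the activity of the source
    w1 = p_active * lik_active
    w0 = (1 - p_active) * lik_silent
    p_post = w1 / (w1 + w0)
    # Continuous Kalman update, weighted by the active-mode posterior
    k = var / s
    mu_post = mu + p_post * k * innov
    var_post = var - p_post * k * var
    return mu_post, var_post, p_post

# Toy run: source near x = 2, robot at the origin, three measurements
mu, var, p = 0.0, 4.0, 0.5
for z in [1.9, 2.1, 2.0]:
    mu, var, p = mixture_update(mu, var, p, z, robot_x=0.0)
```

In an active scheme, a planner such as MCTS would simulate candidate robot movements against this belief and pick the one expected to shrink the position uncertainty fastest.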

This work was carried out within the PhD thesis of Van Quan Nguyen, under the supervision of Emmanuel Vincent and Francis Colas. The thesis was defended on 3 November 2017.

Publications: [30]