2021
Activity report
Project-Team
RAINBOW
RNSR: 201822637G
In partnership with:
CNRS, Université Rennes 1, Institut national des sciences appliquées de Rennes
Team name:
Sensor-based Robotics and Human Interaction
In collaboration with:
Institut de recherche en informatique et systèmes aléatoires (IRISA)
Domain
Perception, Cognition and Interaction
Theme
Robotics and Smart environments
Creation of the Project-Team: 2018 June 01

Keywords

Computer Science and Digital Science

  • A5.1.3. Haptic interfaces
  • A5.1.7. Multimodal interfaces
  • A5.4.4. 3D and spatio-temporal reconstruction
  • A5.4.6. Object localization
  • A5.4.7. Visual servoing
  • A5.5.4. Animation
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.2. Augmented reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.9.2. Estimation, modeling
  • A5.10.2. Perception
  • A5.10.3. Planning
  • A5.10.4. Robot control
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.10.6. Swarm robotics
  • A5.10.7. Learning
  • A6.4.1. Deterministic control
  • A6.4.3. Observability and Controllability
  • A6.4.4. Stability and Stabilization
  • A6.4.5. Control of distributed parameter systems
  • A6.4.6. Optimal control
  • A9.5. Robotics
  • A9.7. AI algorithmics
  • A9.9. Distributed AI, Multi-agent

Other Research Topics and Application Domains

  • B2.4.3. Surgery
  • B2.5. Handicap and personal assistances
  • B5.1. Factory of the future
  • B5.6. Robotic systems
  • B8.1.2. Sensor networks for smart buildings
  • B8.4. Security and personal assistance

1 Team members, visitors, external collaborators

Research Scientists

  • Paolo Robuffo Giordano [Team leader, CNRS, Senior Researcher, HDR]
  • François Chaumette [Inria, Senior Researcher, HDR]
  • Alexandre Krupa [Inria, Senior Researcher, HDR]
  • Claudio Pacchierotti [CNRS, Researcher]
  • Julien Pettré [Inria, Senior Researcher, HDR]

Faculty Members

  • Marie Babel [INSA Rennes, Associate Professor, HDR]
  • Quentin Delamare [École normale supérieure de Rennes]
  • Vincent Drevelle [Univ de Rennes I, Associate Professor]
  • Maud Marchal [INSA Rennes, Professor, HDR]
  • Eric Marchand [Univ de Rennes I, Professor, HDR]

Post-Doctoral Fellows

  • Khairidine Benali [Inria, from Sep 2021]
  • Elodie Bouzbib [Inria, from Nov 2021]
  • Thomas Howard [CNRS, until Sep 2021]
  • Pratik Mullick [Inria]
  • Gennaro Notomista [CNRS, until Sep 2021]

PhD Students

  • Vicenzo Abichequer Sangalli [Inria]
  • Julien Albrand [INSA Rennes, until Sep 2021]
  • Javad Amirian [Inria, until Sep 2021]
  • Maxime Bernard [CNRS, from Oct 2021]
  • Pascal Brault [Inria]
  • Pierre Antoine Cabaret [Inria, from Oct 2021]
  • Antoine Cellier [INSA Rennes, from Oct 2021]
  • Thomas Chatagnon [Inria]
  • Adele Colas [Inria]
  • Cedric De Almeida Braga [Inria, until Feb 2021]
  • Nicola De Carli [CNRS]
  • Mathieu Gonzalez [Institut de recherche technologique B-com]
  • Fabien Grzeskowiak [Inria, until Jun 2021]
  • Alberto Jovane [Inria]
  • Glenn Kerbiriou [Interdigital, from June 2021]
  • Lisheng Kuang [China Scholarship Council]
  • Ines Lacote [Inria]
  • Emilie Leblong [Pôle Saint-Hélier]
  • Fouad Makiyeh [Inria]
  • Thibault Noel [Inria, from Feb 2021]
  • Erwan Normand [Univ de Rennes I, from Oct 2021]
  • Alexander Oliva [Inria]
  • Maxime Robic [Inria]
  • Lev Smolentsev [Inria]
  • Gustavo Souza Vieira Dutra [Inria, from Dec 2021]
  • Ali Srour [CNRS, from Oct 2021]
  • John Thomas [Inria]
  • Guillaume Vailland [INSA Rennes, until Nov 2021]
  • Tairan Yin [Inria]

Technical Staff

  • Marco Aggravi [CNRS, Engineer, until Aug 2021]
  • Dieudonne Atrevi [Inria, Engineer, until Apr 2021]
  • Julien Bruneau [Inria, Engineer]
  • Louise Devigne [INSA Rennes, Engineer]
  • Julien Dufour [Inria, Engineer, from May 2021]
  • Solenne Fortun [Inria, Engineer]
  • Thierry Gaugry [INSA Rennes, Engineer, until Jun 2021]
  • Guillaume Gicquel [CNRS, Engineer]
  • Fabien Grzeskowiak [INSA Rennes, Engineer, from Jul 2021]
  • Thomas Howard [INSA Rennes, Engineer, from Oct 2021]
  • Joudy Nader [CNRS, Engineer]
  • Noura Neji [Inria, Engineer, until Apr 2021]
  • François Pasteau [INSA Rennes, Engineer]
  • Yuliya Patotskaya [Inria, Engineer]
  • Fabien Spindler [Inria, Engineer]
  • Wouter Van Toll [Inria, Engineer, until Oct 2021]

Interns and Apprentices

  • Merwane Bouri [INSA Rennes, from Aug 2021]
  • Pierre Antoine Cabaret [INSA Rennes, from Feb 2021 until Aug 2021]
  • Alex Coudray [Inria, from Feb 2021 until Jul 2021]
  • Arthur Furet [INSA Rennes, from Jun 2021 until Aug 2021]
  • Alexis Hobl [CNRS, from May 2021 until Jul 2021]
  • Emilie Hummel [CNRS, from Feb 2021 until Aug 2021]
  • Alexis Jensen [INSA Rennes, from Jul 2021 until Sep 2021]
  • Divyesh Kanagavel [CNRS, from Feb 2021 until Aug 2021]
  • Alex Keryhuel [Inria, from May 2021 until Aug 2021]
  • Hussein Lezzaik [CNRS, from Feb 2021 until Aug 2021]
  • Arthur Luciani [École normale supérieure de Rennes, from Feb 2021 until Jul 2021]
  • Thomas Mabit [École normale supérieure de Rennes, from Feb 2021 until Aug 2021]
  • Thibaut Rolland [Inria, from Feb 2021 until Jul 2021]
  • Octavie Somoza Salgado [CNRS, from Mar 2021 until Aug 2021]
  • Guillaume Sonnet [Inria, from May 2021 until Sep 2021]
  • Gustavo Souza Vieira Dutra [INSA Rennes, Intern, from Feb 2021 until Aug 2021]

Administrative Assistant

  • Hélène de La Ruée [Univ de Rennes I]

2 Overall objectives

The long-term vision of the Rainbow team is to develop the next generation of sensor-based robots able to navigate and/or interact in complex unstructured environments together with human users. Clearly, the word “together” can have very different meanings depending on the particular context: for example, it can refer to mere co-existence (robots and humans share some space while performing independent tasks), human-awareness (the robots need to be aware of the human state and intentions for properly adjusting their actions), or actual cooperation (robots and humans perform some shared task and need to coordinate their actions).

One could perhaps argue that robot autonomy and human intervention are somehow conflicting goals, since higher robot autonomy should imply lower (or no) human intervention. However, we believe that our general research direction is well motivated since: (i) despite the many advancements in robot autonomy, complex and high-level cognitive-based decisions are still out of reach. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will most probably remain so for the next decades. On the other hand, robots are extremely capable of autonomously executing specific and repetitive tasks, with great speed and precision, and of operating in dangerous/remote environments, while humans possess unmatched cognitive capabilities and world awareness which allow them to take complex and quick decisions; (ii) the cooperation between humans and robots is often an implicit constraint of the robotic task itself. Consider for instance the case of assistive robots supporting injured patients during their physical recovery, or human augmentation devices. It is then important to study proper ways of implementing this cooperation; (iii) finally, safety regulations can require the presence at all times of a person in charge of supervising and, if necessary, of taking direct control of the robotic workers. For example, this is a common requirement in all applications involving tasks in public spaces, like autonomous vehicles in crowded areas, or UAVs flying in civil airspace over urban or populated areas.

Within this general picture, the Rainbow activities will be particularly focused on the case of (shared) cooperation between robots and humans by pursuing the following vision: on the one hand, empower robots with a large degree of autonomy for allowing them to effectively operate in non-trivial environments (e.g., outside completely defined factory settings); on the other hand, include human users in the loop for having them in (partial and bilateral) control of some aspects of the overall robot behavior. We plan to address these challenges from the methodological, algorithmic and application-oriented perspectives. The Rainbow activities will be articulated along three supporting axes (Optimal and Uncertainty-Aware Sensing; Advanced Sensor-based Control; Haptics for Robotics Applications) that are meant to develop the methods, algorithms and technologies for realizing the central theme of Shared Control of Complex Robotic Systems.

3 Research program

3.1 Main Vision

The vision of Rainbow (and the foreseen applications) calls for several general scientific challenges: (i) a high level of autonomy for complex robots in complex (unstructured) environments, (ii) forward interfaces for letting an operator give high-level commands to the robot, (iii) backward interfaces for informing the operator about the robot status, (iv) user studies for assessing the best interfacing, which will clearly depend on the particular task/situation. Within Rainbow we plan to tackle these challenges at different levels of depth:

  • the methodological and algorithmic side of the sought human-robot interaction will be the main focus of Rainbow. Here, we will be interested in advancing the state of the art in sensor-based online planning, control and manipulation for mobile/fixed robots. For instance, while classically most control approaches (especially sensor-based ones) have been essentially reactive, we believe that less myopic strategies based on online/reactive trajectory optimization will be needed for the future Rainbow activities. The core ideas of Model Predictive Control approaches (also known as Receding Horizon) or, in general, numerical optimal control methods will play a role in the Rainbow activities, allowing the robots to reason/plan over some future time window and better cope with constraints. We will also consider extending classical sensor-based motion control/manipulation techniques to more realistic scenarios, such as deformable/flexible objects (“Advanced Sensor-based Control” axis). Finally, it will also be important to spend research efforts in the field of Optimal Sensing, in the sense of generating (again) trajectories that can optimize the state estimation problem in the presence of scarce sensory inputs and/or non-negligible measurement and process noises, which is especially relevant for mobile robots (“Optimal and Uncertainty-Aware Sensing” axis). We also aim at addressing the case of coordination between a single human user and multiple robots where, clearly, the autonomy part plays an even more crucial role (no human can control multiple robots at once, thus a high degree of autonomy will be required from the robot group for executing the human commands);
  • the interfacing side will also be a focus of the Rainbow activities. As explained above, we will be interested in both the forward (human → robot) and backward (robot → human) interfaces. The forward interface will be mainly addressed from the algorithmic point of view, i.e., how to map the few degrees of freedom available to a human operator (usually on the order of 3–4) into complex commands for the controlled robot(s). This mapping will typically be mediated by an “AutoPilot” onboard the robot(s) for autonomously assessing if the commands are feasible and, if not, how to modify them the least (“Advanced Sensor-based Control” axis).

    The backward interface will, instead, mainly consist of a visual/haptic feedback for the operator. Here, we aim at exploiting our expertise in using force cues for informing an operator about the status of the remote robot(s). However, the sole use of classical grounded force feedback devices (e.g., the typical force-feedback joysticks) will not be enough due to the different kinds of information that will have to be provided to the operator. In this context, the recent interest in the use of wearable haptic interfaces is very interesting and will be investigated in depth (these include, e.g., devices able to provide vibro-tactile information to the fingertips, wrist, or other parts of the body). The main challenges in these activities will be the mechanical conception (and construction) of suitable wearable interfaces for the tasks at hand, and in the generation of force cues for the operator: the force cues will be a (complex) function of the robot state, therefore motivating research in algorithms for mapping the robot state into a few variables (the force cues) (“Haptics for Robotics Applications” axis);

  • the evaluation side, which will assess the proposed interfaces with user studies, or acceptability studies with human subjects. Although this activity will not be a main focus of Rainbow (complex user studies are beyond the scope of our core expertise), we will nevertheless devote some effort to achieving a reasonable level of user evaluation by applying standard statistical analysis based on psychophysical procedures (e.g., randomized tests and ANOVA statistical analysis). This will be particularly true for the activities involving the use of smart wheelchairs, which are intended to be used by human users and to operate inside human crowds. Therefore, we will be interested in gaining some level of understanding of how semi-autonomous robots (a wheelchair in this example) can predict the human intention, and how humans can react to a semi-autonomous mobile robot.
Figure 1: An illustration of the prototypical activities foreseen in Rainbow in which a human operator is in partial (and high-level) control of single/multiple complex robots performing semi-autonomous tasks

Figure 1 depicts in an illustrative way the prototypical activities foreseen in Rainbow. On the right-hand side, complex robots (dual manipulators, humanoids, single/multiple mobile robots) need to perform some task with a high degree of autonomy. On the left-hand side, a human operator gives some high-level commands and receives a visual/haptic feedback aimed at best informing her/him of the robot status. Again, the main challenges that Rainbow will tackle to address these issues are (in order of relevance): (i) methods and algorithms, mostly based on first-principle modeling and, when possible, on numerical methods for online/reactive trajectory generation, for endowing the robots with high autonomy; (ii) design and implementation of visual/haptic cues for interfacing the human operator with the robots, with special attention to novel combinations of grounded/ungrounded (wearable) haptic devices; (iii) user and acceptability studies.

3.2 Main Components

Hereafter is a summary description of the four research axes of Rainbow.

3.2.1 Optimal and Uncertainty-Aware Sensing

Future robots will need a large degree of autonomy for, e.g., interpreting the sensory data for accurate estimation of the robot and world state (which can possibly include the human users), and for devising motion plans able to take into account many constraints (actuation, sensor limitations, environment), including also the state estimation accuracy (i.e., how well the robot/environment state can be reconstructed from the sensed data). In this context, we will be particularly interested in: (i) devising trajectory optimization strategies able to maximize some norm of the information gain gathered along the trajectory (with the available sensors). This can be seen as an instance of Active Sensing, with the main focus on online/reactive trajectory optimization strategies able to take into account several requirements/constraints (sensing/actuation limitations, noise characteristics). We will also be interested in the coupling between optimal sensing and the concurrent execution of additional tasks (e.g., navigation, manipulation). (ii) Formal methods for guaranteeing the accuracy of localization/state estimation in mobile robotics, mainly exploiting tools from interval analysis. The interest of these methods lies in their ability to provide possibly conservative but guaranteed bounds on the best accuracy one can obtain with the given robot/sensor pair; they can thus be used for planning purposes or for system design (choice of the best sensors for a given robot/task). (iii) Localization/tracking of objects with poor/unknown or deformable shapes, which will be of paramount importance for allowing robots to estimate the state of “complex objects” (e.g., human tissues in medical robotics, elastic materials in manipulation) for controlling their pose and the interaction with the objects of interest.

3.2.2 Advanced Sensor-based Control

One of the main competences of the previous Lagadic team has been, generally speaking, the topic of sensor-based control, i.e., how to exploit (typically onboard) sensors for controlling the motion of fixed/ground robots. The main emphasis has been on devising ways to directly couple the robot motion with the sensor outputs in order to invert this mapping for driving the robots towards a configuration specified as a desired sensor reading (thus, directly in sensor space). This general idea has been applied to very different contexts: mainly standard vision (hence the Visual Servoing keyword), but also audio, ultrasound imaging, and RGB-D.

The use of sensors for controlling the robot motion will also clearly be a central topic of the Rainbow team, since (especially onboard) sensing is a main characteristic of any future robotics application (which should typically operate in unstructured environments, and thus mainly rely on its own ability to sense the world). We then naturally aim at making the best out of the previous Lagadic experience in sensor-based control for proposing new advanced ways of exploiting sensed data for, roughly speaking, controlling the motion of a robot. In this respect, we plan to work on the following topics: (i) “direct/dense methods” which try to directly exploit the raw sensory data in computing the control law for positioning/navigation tasks. The advantage of these methods is the little need for data pre-processing, which can minimize feature extraction errors and, in general, improve the overall robustness/accuracy (since all the available data is used by the motion controller); (ii) sensor-based interaction with objects of unknown/deformable shape, for gaining the ability to manipulate, e.g., flexible objects from the acquired sensed data (e.g., controlling online a needle being inserted in a flexible tissue); (iii) sensor-based model predictive control, by developing online/reactive trajectory optimization methods able to plan feasible trajectories for robots subject to sensing/actuation constraints, while exploiting (onboard) sensing for continuously replanning (over some future time horizon) the optimal trajectory. These methods will play an important role when dealing with complex robots affected by complex sensing/actuation constraints, for which pure reactive strategies (as in most of the previous Lagadic works) are not effective. Furthermore, the coupling with the aforementioned optimal sensing will also be considered; (iv) multi-robot decentralized estimation and control, with the aim of devising again sensor-based strategies for groups of multiple robots needing to maintain a formation or perform navigation/manipulation tasks. Here, the challenges come from the need of devising “simple” decentralized and scalable control strategies under complex sensing constraints (e.g., when using onboard cameras: limited field of view, occlusions). The need of locally estimating global quantities (e.g., a common reference frame, or global properties of the formation such as connectivity or rigidity) will also be a line of active research.
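To fix ideas on point (iii), the generic receding-horizon problem that such sensor-based MPC schemes solve online at every control cycle can be sketched as follows (a schematic formulation in our notation, not a specific controller of the team):

    \[
    \min_{u(\cdot)} \int_{t}^{t+T} \left\| s(\tau) - s^*(\tau) \right\|_Q^2 + \left\| u(\tau) \right\|_R^2 \, \mathrm{d}\tau
    \quad \text{s.t.} \quad \dot{x} = f(x, u), \;\; s = h(x), \;\; u \in \mathcal{U}, \;\; x \in \mathcal{X},
    \]

where s = h(x) are the sensed features and the sets \mathcal{U} and \mathcal{X} encode the actuation and sensing constraints; only the first portion of the optimized input is applied before the horizon is shifted and the problem is solved again with the latest sensor data.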

3.2.3 Haptics for Robotics Applications

In the envisaged shared cooperation between human users and robots, the typical sensory channel (besides vision) exploited to inform the human users is most often the force/kinesthetic one (in general, the sense of touch and of forces applied to the human hand or limbs). Therefore, a part of our activities will be devoted to studying and advancing the use of haptic cueing algorithms and interfaces for providing feedback to the users during the execution of some shared task. We will consider: (i) multi-modal haptic cueing for general teleoperation applications, by studying how to convey information through the kinesthetic and cutaneous channels. Indeed, most haptic-enabled applications typically only involve kinesthetic cues, e.g., the forces/torques that can be felt by grasping a force-feedback joystick/device. These cues are very informative about, e.g., preferred/forbidden motion directions, but they are also inherently limited in their resolution since the kinesthetic channel can easily become overloaded (when too much information is compressed in a single cue). In recent years, the rise of novel cutaneous devices able to, e.g., provide vibro-tactile feedback on the fingertips or skin, has proven to be a viable solution for complementing the classical kinesthetic channel. We will then study how to combine these two sensory modalities for different prototypical application scenarios, e.g., 6-dof teleoperation of manipulator arms, virtual fixtures approaches, and remote manipulation of (possibly deformable) objects; (ii) in the particular context of medical robotics, we plan to address the problem of providing haptic cues for typical medical robotics tasks, such as semi-autonomous needle insertion and robotic surgery, by exploring the use of kinesthetic feedback for rendering the mechanical properties of the tissues, and vibrotactile feedback for providing guiding information about pre-planned paths (with the aim of increasing the usability/acceptability of this technology in the medical domain); (iii) finally, in the context of multi-robot control we would like to explore how to use the haptic channel for providing information about the status of multiple robots executing a navigation or manipulation task. In this case, the problem is (even more so) how to map (or compress) information about many robots into a few haptic cues. We plan to use specialized devices, such as actuated exoskeleton gloves able to provide cues to each fingertip of a human hand, or to resort to “compression” methods inspired by hand postural synergies for providing coordinated cues representative of a few (but complex) motions of the multi-robot group, e.g., coordinated motions (translations/expansions/rotations) or collective grasping/transporting.

3.2.4 Shared Control of Complex Robotic Systems

This final and main research axis will exploit the methods, algorithms and technologies developed in the previous axes for realizing applications involving complex semi-autonomous robots operating in complex environments together with human users. The leitmotiv is to realize advanced shared control paradigms, which essentially aim at blending robot autonomy and the user's intervention in an optimal way so as to exploit the best of both worlds (robot accuracy/sensing/mobility/strength and human cognitive capabilities). A common theme will be the issue of where to “draw the line” between robot autonomy and human intervention: obviously, there is no general answer, and any design choice will depend on the particular task at hand and/or on the technological/algorithmic possibilities of the robotic system under consideration.

A prototypical envisaged application, exploiting and combining the previous three research axes, is as follows: a complex robot (e.g., a two-arm system, a humanoid robot, a multi-UAV group) needs to operate in an environment exploiting its onboard sensors (in general, vision as the main exteroceptive one) and to deal with many constraints (limited actuation, limited sensing, complex kinematics/dynamics, obstacle avoidance, interaction with difficult-to-model entities such as surrounding people, and so on). The robot must then possess a quite large autonomy for interpreting and exploiting the sensed data in order to estimate its own state and that of the environment (“Optimal and Uncertainty-Aware Sensing” axis), and for planning its motion in order to fulfil the task (e.g., navigation, manipulation) while coping with all the robot/environment constraints. Therefore, advanced control methods able to exploit the sensory data to the fullest, and able to cope online with constraints in an optimal way (by, e.g., continuously replanning and predicting over a future time horizon) will be needed (“Advanced Sensor-based Control” axis), with a possible (and interesting) coupling with the sensing part for optimizing, at the same time, the state estimation process. Finally, a human operator will typically be in charge of providing high-level commands (e.g., where to go, what to look at, what to grasp and where) that will then be autonomously executed by the robot, with possible local modifications because of the various (local) constraints. At the same time, the operator will also receive online visual-force cues informative of, in general, how well her/his commands are executed and whether the robot would prefer or suggest other plans (because of the local constraints that are not of the operator's concern). This information will have to be visually and haptically rendered with an optimal combination of cues that will depend on the particular application (“Haptics for Robotics Applications” axis).

4 Application domains

The activities of Rainbow obviously fall within the scope of Robotics. Broadly speaking, our main interest is in devising novel/efficient algorithms (for estimation, planning, control, haptic cueing, human interfacing, etc.) that can be general and applicable to many different robotic systems of interest, depending on the particular application/case study. For instance, we plan to consider

  • applications involving remote telemanipulation with one or two robot arms, where the arm(s) will need to coordinate their motion for approaching/grasping objects of interest under the guidance of a human operator;
  • applications involving single and multiple mobile robots for spatial navigation tasks (e.g., exploration, surveillance, mapping). In the multi-robot case, the high redundancy of the multi-robot group will motivate research in autonomously exploiting this redundancy for facilitating the task (e.g., optimizing the self-localization or the environment mapping) while following the human commands, and, vice versa, for informing the operator about the status of the multi-robot group. In the single-robot case, the possible combination with some manipulation devices (e.g., arms on a wheeled robot) will motivate research into remote tele-navigation and tele-manipulation;
  • applications involving medical robotics, in which the “manipulators” are replaced by the typical tools used in medical applications (ultrasound probes, needles, cutting scalpels, and so on) for semi-autonomous probing and intervention;
  • applications involving a direct physical “coupling” between human users and robots (rather than a “remote” interfacing), such as the case of assistive devices used for easing the life of impaired people. Here, we will be primarily interested in, e.g., safety and usability issues, and also touch some aspects of user acceptability.

These directions are, in our opinion, very promising since current and future robotics applications are expected to address more and more complex tasks: for instance, it is becoming mandatory to empower robots with the ability to predict the future (to some extent) by also explicitly dealing with uncertainties from sensing or actuation; to safely and effectively interact with human supervisors (or collaborators) for accomplishing shared tasks; to learn or adapt to dynamic environments from little prior knowledge; to exploit the environment (e.g., obstacles) rather than avoid it (a typical example is a humanoid robot in a multi-contact scenario for facilitating walking on rough terrains); to optimize the onboard resources for large-scale monitoring tasks; to cooperate with other robots either by direct sensing/communication, or via some shared database (the “cloud”).

While no single lab can reasonably address all these theoretical/algorithmic/technological challenges, we believe that our research agenda can give some concrete contributions to the next generation of robotics applications.

5 Highlights of the year

5.1 Awards

  • Best Paper Award at the 2021 ICAT-EGVE conference (International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments) [52]
  • Best Demonstration Award at the 2021 IEEE WHC (IEEE World Haptics Conference) [61]

5.2 Highlights

  • P. Robuffo Giordano is part of the euRobotics “George Giralt” PhD Award panel for awarding the best PhD Thesis in robotics in Europe
  • P. Robuffo Giordano has been elected member of the Section 07 of the Comité National de la Recherche Scientifique
  • A. Krupa was promoted in 2021 to the grade of Inria Senior Research Scientist (Inria DR2)
  • C. Pacchierotti has been proposed for the 2022 CNRS Bronze medal by Section 07 of the CoNRS (the National Committee for Scientific Research).

6 New software and platforms

6.1 New software

6.1.1 HandiViz

  • Name:
    Driving assistance of a wheelchair
  • Keywords:
    Health, Persons attendant, Handicap
  • Functional Description:

    The HandiViz software provides a semi-autonomous navigation framework for a wheelchair, relying on visual servoing.

    It has been registered to the APP (“Agence de Protection des Programmes”) as an INSA software (IDDN.FR.001.440021.000.S.P.2013.000.10000) and is under GPL license.

  • Contact:
    Marie Babel
  • Participants:
    Francois Pasteau, Marie Babel
  • Partner:
    INSA Rennes

6.1.2 UsTK

  • Name:
    Ultrasound toolkit for medical robotics applications guided from ultrasound images
  • Keywords:
    Echographic imagery, Image reconstruction, Medical robotics, Visual tracking, Visual servoing (VS), Needle insertion
  • Functional Description:
    UsTK, standing for Ultrasound Toolkit, is a cross-platform extension of ViSP software dedicated to 2D and 3D ultrasound image processing and visual servoing based on ultrasound images. Written in C++, UsTK architecture provides a core module that implements all the data structures at the heart of UsTK, a grabber module that allows acquiring ultrasound images from an Ultrasonix or a Sonosite device, a GUI module to display data, an IO module for providing functionalities to read/write data from a storage device, and a set of image processing modules to compute the confidence map of ultrasound images, generate elastography images, track a flexible needle in sequences of 2D and 3D ultrasound images and track a target image template in sequences of 2D ultrasound images. All these modules were implemented on several robotic demonstrators to control the motion of an ultrasound probe or a flexible needle by ultrasound visual servoing.
  • URL:
  • Contact:
    Alexandre Krupa
  • Participants:
    Alexandre Krupa, Fabien Spindler
  • Partners:
    Inria, Université de Rennes 1

6.1.3 ViSP

  • Name:
    Visual servoing platform
  • Keywords:
    Augmented reality, Computer vision, Robotics, Visual servoing (VS), Visual tracking
  • Scientific Description:

    Since 2005, we have been developing and releasing ViSP [1], an open-source library available from https://visp.inria.fr. ViSP, standing for Visual Servoing Platform, allows prototyping and developing applications using the visual tracking and visual servoing techniques at the heart of the Rainbow research. ViSP was designed to be independent from the hardware, simple to use, expandable and cross-platform. ViSP allows designing vision-based tasks for eye-in-hand and eye-to-hand systems from the most classical visual features that are used in practice. It provides a large set of elementary positioning tasks with respect to various visual features (points, segments, straight lines, circles, spheres, cylinders, image moments, pose...) that can be combined together, and image processing algorithms that allow tracking of visual cues (dots, segments, ellipses...), 3D model-based tracking of known objects, or template tracking. Simulation capabilities are also available.

    [1] E. Marchand, F. Spindler, F. Chaumette. ViSP for visual servoing: a generic software platform with a wide class of robot control skills. IEEE Robotics and Automation Magazine, Special Issue on "Software Packages for Vision-Based Control of Motion", P. Oh, D. Burschka (Eds.), 12(4):40-52, December 2005.

  • Functional Description:
    ViSP provides simple ways to integrate and validate new algorithms with already existing tools. It follows a module-based software engineering design where data types, algorithms, sensors, viewers and user interaction are made available. Written in C++, ViSP is based on open-source cross-platform libraries (such as OpenCV) and builds with CMake. Several platforms are supported, including OSX, iOS, Windows and Linux. The ViSP online documentation eases learning. More than 300 fully documented classes, organized in 17 different modules, with more than 408 examples and 88 tutorials, are proposed to the user (a minimal usage sketch is given right after this entry). ViSP is released under a dual licensing model. It is open-source with a GNU GPLv2 or GPLv3 license. A professional edition license that replaces the GNU GPL is also available.
  • URL:
    https://visp.inria.fr
  • Contact:
    Fabien Spindler
  • Participants:
    Éric Marchand, Fabien Spindler, Francois Chaumette
  • Partners:
    Inria, Université de Rennes 1
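As an illustration of the intended use of the library, the following minimal sketch (inspired by the spirit of the ViSP tutorials; the feature values are placeholders) builds a classical image-based visual servoing task on four point features and computes the corresponding 6-dof camera velocity:

    #include <visp3/core/vpColVector.h>
    #include <visp3/visual_features/vpFeaturePoint.h>
    #include <visp3/vs/vpServo.h>

    int main()
    {
      vpServo task;
      task.setServo(vpServo::EYEINHAND_CAMERA);         // eye-in-hand, velocity expressed in the camera frame
      task.setInteractionMatrixType(vpServo::CURRENT);  // interaction matrix computed at the current features
      task.setLambda(0.5);                              // control gain

      vpFeaturePoint s[4], sd[4];
      for (int i = 0; i < 4; ++i) {
        // Placeholder values: normalized image coordinates (x, y) and depth Z
        s[i].buildFrom(0.1 * i, 0.2, 0.8);              // current feature
        sd[i].buildFrom(0.1 * i, 0.0, 1.0);             // desired feature
        task.addFeature(s[i], sd[i]);
      }

      // In a real application, s[i] would be updated from visual tracking at each iteration
      vpColVector v = task.computeControlLaw();         // 6-dof camera velocity to send to the robot
      return 0;
    }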

6.1.4 DIARBENN

  • Name:
    Obstacle avoidance through sensor-based servoing
  • Keywords:
    Servoing, Shared control, Navigation
  • Functional Description:
    DIARBENN's objective is to provide an obstacle avoidance solution adapted to a mobile robot such as a powered wheelchair. Through shared control, the system progressively corrects the trajectory, if necessary, when approaching an obstacle, while respecting the user's intention.
  • Contact:
    Marie Babel
  • Partner:
    INSA Rennes

6.2 New platforms

6.2.1 Robot Vision Platform

Participants: François Chaumette [contact], Alexandre Krupa [contact], Eric Marchand [contact], Fabien Spindler [contact].

We exploit two industrial robotic systems built by Afma Robots in the nineties to validate our research in visual servoing and active vision. The first one is a 6 DoF Gantry robot, the other one is a 4 DoF cylindrical robot (see Fig. 2). These robots are equipped with monocular RGB cameras. The Gantry robot also allows mounting grippers on its end-effector. This platform also includes a collection of various RGB and RGB-D cameras used to validate vision-based real-time tracking algorithms.

In 2021, this platform was used to validate experimental results in 2 accepted publications [40, 16].

Figure 2
Figure 2: Rainbow robotics platform for vision-based manipulation

6.2.2 Mobile Robots

Participants: Marie Babel [contact], Solenne Fortun [contact], François Pasteau [contact], Julien Pettré [contact], Quentin Delamare [contact], Fabien Spindler [contact].

To validate our research on the personally assisted living topic (see Section 7.4.4), we have three electric wheelchairs, one from Permobil, one from Sunrise and the last from YouQ (see Fig. 3.a). The control of the wheelchair is performed using a plug-and-play system between the joystick and the low-level control of the wheelchair. Such a system lets us acquire the user's intention through the joystick position and control the wheelchair by applying corrections to its motion. The wheelchairs have been fitted with cameras, ultrasound and time-of-flight sensors to perform the servoing required for assisting handicapped people. A wheelchair haptic simulator completes this platform to develop new human interaction strategies in a virtual reality environment (see Fig. 3.b).

Pepper, a human-shaped robot designed by SoftBank Robotics to be a genuine day-to-day companion (see Fig. 3.c), is also part of this platform. It has 17 DoF mounted on a wheeled holonomic base and a set of sensors (cameras, laser, ultrasound, inertial, microphone) that make this platform interesting for robot-human interactions during locomotion and visual exploration strategies (see Section 7.2.8).

Moreover, for fast prototyping of algorithms in perception, control and autonomous navigation, the team uses a Pioneer 3DX from Adept (see Fig. 3.d). This platform is equipped with various sensors needed for autonomous navigation and sensor-based control.

In 2021, these platforms were used to obtain experimental results presented in 3 papers [2, 7, 50].

Figure 3: Mobile Robot Platform. a) wheelchairs from Permobil, Sunrise and YouQ, b) Wheelchair haptic simulator, c) Pepper human-shaped robot, d) Pioneer P3-DX robot

6.2.3 Medical Robotic Platform

Participants: Alexandre Krupa [contact], Fabien Spindler [contact].

This platform is composed of two 6 DoF Adept Viper arms (see Figs. 4.a–b). Ultrasound probes connected either to a SonoSite 180 Plus or an Ultrasonix SonixTouch 2D and 3D imaging system can be mounted on a force/torque sensor attached to each robot end-effector. The Virtuose 6D or Omega 6 haptic devices (see Fig. 7.a) can also be used with this platform.

This platform was extended with an ATI Nano43 force/torque sensor attached to one of the Viper arms, which allows performing experiments for needle insertion applications.

This testbed is of primary interest for research and experiments on ultrasound visual servoing applied to probe positioning, soft tissue tracking, elastography or robotic needle insertion tasks (see Section 7.4.3). It can also be used to validate more classical tracking and visual servoing research.

In 2021, this platform was used to obtain experimental results presented in 2 papers [8, 48].

Figure 4: Rainbow medical robotic platforms. a) On the right, a Viper S850 robot arm equipped with a SonixTouch 3D ultrasound probe; on the left, a Viper S650 equipped with a tool changer that allows attaching a classical camera or biopsy needles. b) Robotic setup for autonomous needle insertion by visual servoing.

6.2.4 Advanced Manipulation Platform

Participants: Claudio Pacchierotti [contact], Paolo Robuffo Giordano [contact], Fabien Spindler [contact].

This platform is composed of two Panda lightweight arms from Franka Emika, equipped with torque sensors in all seven axes. An electric gripper, a camera, a soft hand from qbrobotics or a Reflex TakkTile 2 gripper from RightHand Labs (see Fig. 5.b) can be mounted on the robot end-effector (see Fig. 5.a). A force/torque sensor from Alberobotics is also attached to the end-effector of one of the robots to get more precision during torque control. This setup is used to validate our research in coupling force and vision for controlling robot manipulators (see Section 7.2.11) and in shared control for remote manipulation (see Section 7.4.1). Other haptic devices (see Section 6.2.6) can also be coupled to this platform.

2 new papers [19, 26] published this year include experimental results obtained with this platform.

Figure 5: Rainbow advanced manipulation platform. a) One of the two Panda lightweight arms from Franka Emika, with the Pisa SoftHand mounted; b) the Reflex TakkTile 2 gripper that can be mounted on the Panda robot end-effector.

6.2.5 Unmanned Aerial Vehicles (UAVs)

Participants: Joudy Nader [contact], Paolo Robuffo Giordano [contact], Claudio Pacchierotti [contact], Fabien Spindler [contact].

Rainbow is involved in several activities involving perception and control for single and multiple quadrotor UAVs. To this end, we exploit four quadrotors from Mikrokopter GmbH, Germany (see Fig. 6.a), and one quadrotor from 3DRobotics, USA (see Fig. 6.b). The Mikrokopter quadrotors have been heavily customized by: (i) reprogramming from scratch the low-level attitude controller onboard the microcontroller of the quadrotors, (ii) equipping each quadrotor with an NVIDIA Jetson TX2 board running Linux Ubuntu and the TeleKyb-3 software based on the genom3 framework developed at LAAS in Toulouse (the middleware used for managing the experiment flows and the communication among the UAVs and the base station), and (iii) purchasing new RealSense RGB-D cameras for visual odometry and visual servoing. The quadrotor group is used as a robotic platform for testing a number of single- and multiple-robot flight control schemes, with special attention to the use of onboard vision as the main sensory modality.

This year, 3 papers [9, 7, 1] contain experimental results obtained with this platform.

Figure 6: Unmanned Aerial Vehicles Platform. a) Quadrotor XL1 from Mikrokopter, b) Formation control with 3 XL1 from Mikrokopter in our flying arena equipped with the Vicon localization system.

6.2.6 Haptics and Shared Control Platform

Participants: Claudio Pacchierotti [contact], Paolo Robuffo Giordano [contact], Fabien Spindler [contact].

Various haptic devices are used to validate our research in shared control. We have a Virtuose 6D device from Haption (see Fig. 7.a). This device is used as the master device in many of our shared control activities (see, e.g., Section 7.4.1). It can also be coupled to the Haption haptic glove on loan from the University of Birmingham. An Omega 6 from Force Dimension (see Fig. 7.b) and devices on loan from Ultrahaptics complete this platform, which can be coupled to the other robotic platforms.

This platform was used to obtain experimental results presented in 6 papers [19, 8, 7, 42, 45] published this year.

Figure 7: Haptics and Shared Control Platform. a) Virtuose 6D, b) Omega 6 haptic devices, c) Ultraleap STRATOS.

6.2.7 Portable immersive room

Participants: François Pasteau [contact], Fabien Grzeskowiak [contact], Marie Babel [contact].

To validate our research on assistive robotics and its applications in virtual conditions, we very recently acquired a portable immersive room, designed to be easily deployed in different rehabilitation structures in order to conduct clinical trials. The system has been designed by the Trinoma company and has been funded by the Interreg ADAPT project.

Figure 8: Portable Immersive Room and the multisensory wheelchair driving simulator

7 New results

7.1 Optimal and Uncertainty-Aware Sensing

7.1.1 3D tracking of deformable objects from RGB-D data

Participants: Alexandre Krupa, Eric Marchand.

Within our research activities on deformable object tracking, this year we proposed a novel framework for tracking the deformation of soft objects using an RGB-D camera. It requires a coarse 3D mesh and a physical model of the object based on the FEM, whose parameters (Young's modulus, Poisson's ratio, etc.) do not need to be precise. The approach consists in minimizing both a geometric error and a direct photometric intensity error, while relying on the co-rotational Finite Element Method as the underlying deformation model [48]. The proposed method has been validated on both synthetic data with ground truth and real data.
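Schematically, each frame is processed by minimizing, over the FEM nodal displacements u, a combination of the two error terms (a simplified sketch in our notation; see [48] for the exact formulation):

    \[
    \min_{u} \; \sum_{k} \left\| p_k - v_k(u) \right\|^2
    \;+\; \mu \sum_{j} \left( I\big(\pi(v_j(u))\big) - I^*_j \right)^2 ,
    \]

where the v_k(u) are the mesh vertices predicted by the co-rotational FEM model, the p_k are the 3D points measured by the depth sensor, \pi is the camera projection, and the second term is the direct photometric error between the current image I and the reference intensities I^*.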

7.1.2 Trajectory Generation for Optimal State Estimation

Participants: Nicola De Carli, Gennaro Notomista, Claudio Pacchierotti, Paolo Robuffo Giordano.

This activity addresses the general problem of active sensing, where the goal is to analyze and synthesize optimal trajectories for a robotic system that can maximize the amount of information gathered by (few) noisy outputs (i.e., sensor readings) while at the same time reducing the negative effects of the process/actuation noise. We have recently developed a general framework for solving online the active sensing problem by continuously replanning an optimal trajectory that maximizes a suitable norm of the Constructibility Gramian (CG) [64].
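For reference, for a system linearized along the planned trajectory, with state transition matrix \Phi and output matrix C, the CG over a horizon [t_0, t_f] takes the standard form (our notation):

    \[
    \mathcal{G}_c(t_0, t_f) \;=\; \int_{t_0}^{t_f} \Phi^\top(\tau, t_f)\, C^\top(\tau)\, C(\tau)\, \Phi(\tau, t_f)\, \mathrm{d}\tau .
    \]

Maximizing a scalar measure of \mathcal{G}_c (e.g., its smallest eigenvalue) along the planned trajectory quantifies, and improves, how well the current state can be reconstructed from the measurements collected so far.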

In [36], we have extended this framework to the problem of localization for a group of multiple robots that can obtain distance measurements and communicate only with local neighbors. We showed that, thanks to a proper change of coordinates, the CG for the multi-robot group can be computed in a decentralized way with only minor approximations. This allowed us to formulate an online and decentralized trajectory generation problem for optimal localization. We considered as case study the localization of a quadrotor group with noisy distance measurements and sensing constraints, and showed the effectiveness of the approach via a Monte Carlo simulation. We are now considering the case of bearing measurements (obtained from onboard cameras) and the associated constraints of limited field of view and possible occlusions. We are also working towards an experimental validation of the approach.

In [25], we have instead considered a different active sensing problem that involves a single robot in an environmental monitoring task. The goal is to estimate some (possibly time-varying) parameters of a distributed scalar field representative of, e.g., a gas or other quantities in the atmosphere. The robot is equipped with a sensor able to locally measure the value of this field, and the estimation goal is to recover the (unknown) field parameters (e.g., location of the source, decay rate) by suitably planning an optimal trajectory. To this end, we have formulated a trajectory optimization problem that maximizes a norm of the CG and also takes into account the energy level of the robot, by modeling a battery whose discharge dynamics depends on the control effort. The results have been validated in simulation and are quite promising. We are now working on a multi-robot formulation of this problem.

7.1.3 Leveraging Multiple Environments for Learning and Decision Making

Participants: Maud Marchal, Thierry Gaugry, Antonin Bernardin.

Learning is usually performed by observing real robot executions. Physics-based simulators are a good alternative, providing highly valuable information while avoiding costly and potentially destructive robot executions. Within the Imagine project, we presented a novel approach for learning the probabilities of symbolic robot action outcomes. This is done by leveraging different environments, such as physics-based simulators, at execution time. To this end, we proposed MENID (Multiple Environment Noise Indeterministic Deictic) rules, a novel representation able to cope with the inherent uncertainties present in robotic tasks. MENID rules explicitly represent each possible outcome of an action, keep a memory of the source of the experience, and maintain the probability of success of each outcome. We also introduced an algorithm to distribute actions among environments, based on previous experiences and expected gain. Before using physics-based simulations, we proposed a methodology for evaluating different simulation settings and determining the least time-consuming model that can be used while still producing coherent results. We demonstrated the validity of the approach in a dismantling use case, using a simulation with reduced quality as the simulated system, and a simulation with full resolution, where we add noise to the trajectories and to some physical parameters, as a representation of the real system.
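To make the representation concrete, a MENID-style rule could be laid out as follows (an illustrative sketch of the data structure under our naming, not the project's actual implementation):

    #include <string>
    #include <vector>

    // One possible outcome of a symbolic action, kept explicitly with its estimated
    // probability and the provenance of the experiences that support it
    struct Outcome {
      std::string effect;                 // symbolic (deictic) description of the effect
      double probability;                 // estimated probability of this outcome
      std::vector<std::string> sources;   // environments that produced the supporting
                                          // experiences, e.g. "real", "sim-full", "sim-reduced"
    };

    // A MENID-style rule: one entry per (action, preconditions) pair, with all
    // observed outcomes listed side by side
    struct MenidRule {
      std::string action;                 // symbolic action name
      std::vector<std::string> preconditions;
      std::vector<Outcome> outcomes;      // probabilities over outcomes sum to at most 1
    };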

7.1.4 A Plane-based Approach for Indoor Point Clouds Registration

Participant: Eric Marchand.

Traditional 3D point cloud registration algorithms, based on the Iterative Closest Point (ICP), rely on point matching in large point clouds. In well-structured environments, such as buildings, planes can be segmented and used for registration, similarly to the classical point-based ICP approach. Using planes tremendously reduces the number of inputs. We proposed an efficient plane-based registration algorithm, in which the optimal transformation is estimated through a two-step approach, successively performing robust plane-to-plane minimization and non-linear robust point-to-plane registration [39, 57, 38]. This work was done in cooperation with the IETR lab.
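A minimal sketch of the plane-to-plane step, assuming plane correspondences are already known (illustrative Eigen-based code, not the published implementation): writing each plane as n·x = d, the rotation can be recovered from the matched unit normals by SVD and the translation from the plane offsets by linear least squares (at least three planes with linearly independent normals are needed):

    #include <Eigen/Dense>
    #include <vector>

    struct Plane { Eigen::Vector3d n; double d; };   // plane: n.dot(x) = d, with |n| = 1

    // Estimate (R, t) such that each source plane maps to its destination plane:
    // under x' = R x + t, a plane (n, d) becomes (R n, d + (R n).dot(t))
    void registerPlanes(const std::vector<Plane> &src, const std::vector<Plane> &dst,
                        Eigen::Matrix3d &R, Eigen::Vector3d &t)
    {
      // 1) Rotation from the normal correspondences (Kabsch-style SVD)
      Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
      for (std::size_t i = 0; i < src.size(); ++i)
        H += src[i].n * dst[i].n.transpose();
      Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
      R = svd.matrixV() * svd.matrixU().transpose();
      if (R.determinant() < 0) {                     // enforce a proper rotation (det = +1)
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
      }

      // 2) Translation from the offsets: d_dst - d_src = (R n_src).dot(t), in least squares
      Eigen::MatrixXd A(src.size(), 3);
      Eigen::VectorXd b(src.size());
      for (std::size_t i = 0; i < src.size(); ++i) {
        A.row(i) = (R * src[i].n).transpose();
        b(i) = dst[i].d - src[i].d;
      }
      t = A.colPivHouseholderQr().solve(b);
    }

The published method is more involved (robust estimation, plane matching, and a final non-linear point-to-plane refinement); this sketch only illustrates the closed-form plane-to-plane alignment.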

7.1.5 Visual SLAM

Participant: Eric Marchand.

We proposed a novel visual SLAM method with dense planar reconstruction using a monocular camera: TT-SLAM. The method exploits planar template-based trackers (TT) to compute camera poses and reconstructs a multi-planar scene representation. Multiple homographies are estimated simultaneously by clustering a set of template trackers supported by superpixelized regions. Compared to our previous work (a RANSAC-based multiple-homography method), data association and keyframe selection issues are handled by the continuous nature of the template trackers. A non-linear optimization process is applied to all the homographies to improve the precision of the pose estimation. This work [53] was done in cooperation with the MimeTIC team.
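The multi-planar machinery rests on the classical two-view relation for a plane: for a plane with unit normal n and distance d in the first camera frame, a relative camera motion (R, t) and a calibration matrix K, corresponding image points are related by the homography (standard result)

    \[
    H \;\simeq\; K \left( R + \frac{t\, n^\top}{d} \right) K^{-1} ,
    \]

so each planar template tracker yields one homography, from which the camera pose and the plane parameters can be recovered up to the usual scale ambiguity.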

We also proposed a novel binary graph descriptor to improve loop detection for visual SLAM systems. Our contribution is twofold: i) a graph embedding technique for generating binary descriptors which conserve both the spatial and the histogram information extracted from images; ii) a generic means of combining multiple layers of heterogeneous data into the proposed binary graph descriptor, coupled with a matching and geometric checking method. We also introduce an implementation of our descriptor into an incremental Bag-of-Words (iBoW) structure that improves efficiency and scalability, and propose a method to interpret Deep Neural Network (DNN) results. This work [31] was done in cooperation with the MimeTIC team.

7.1.6 Learning offsets for robust 6 DoF object pose estimation

Participants: Mathieu Gonzalez, Eric Marchand.

Estimating the 3D translation and orientation of an object is a challenging task that arises in augmented reality and robotic applications. In [16] we propose a novel approach to perform 6 DoF object pose estimation from a single RGB-D image in cluttered scenes. We adopt a hybrid pipeline in two stages: data-driven and geometric, respectively. The first, data-driven, step consists of a classification CNN that estimates the object 2D location in the image from local patches, followed by a regression CNN trained to predict the 3D location of a set of keypoints in the camera coordinate system. We robustly perform local voting to recover the location of each keypoint in the camera coordinate system. To extract the pose information, the geometric step consists in aligning the 3D points in the camera coordinate system with the corresponding 3D points in the world coordinate system by minimizing a registration error, thus computing the pose.
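The final geometric step is a standard 3D-3D alignment. As an illustrative sketch (not the exact solver of [16]), such a least-squares rigid registration between the voted keypoints and their model coordinates can be obtained in closed form, e.g. with Eigen's umeyama function:

    #include <Eigen/Geometry>
    #include <iostream>

    int main()
    {
      // Columns are matched 3D keypoints in the world (model) frame and in the
      // camera frame (placeholder values for illustration)
      Eigen::Matrix3Xd world(3, 4), camera(3, 4);
      world  << 0, 1, 0, 0,
                0, 0, 1, 0,
                0, 0, 0, 1;
      camera << 0.1, 1.1, 0.1, 0.1,
                0.2, 0.2, 1.2, 0.2,
                1.0, 1.0, 1.0, 2.0;

      // Closed-form least-squares rigid transform; 'false' disables scale estimation
      Eigen::Matrix4d T = Eigen::umeyama(world, camera, false);
      std::cout << "camera-from-world pose:\n" << T << std::endl;
      return 0;
    }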

7.2 Advanced Sensor-Based Control

7.2.1 Trajectory Generation for Minimum Closed-Loop State Sensitivity

Participants: Pascal Brault, Ali Srour, Quentin Delamare, Paolo Robuffo Giordano.

The goal of this research activity is to propose a new point of view in addressing the control of robots under parametric uncertainties: rather than striving to design a sophisticated controller with some robustness guarantees for a specific system, we propose to attain robustness (for any choice of the control action) by suitably shaping the reference motion trajectory so as to minimize the state sensitivity to parameter uncertainty of the resulting closed-loop system.

In [34] we have proposed to couple the previously introduced “state sensitivity” metric with an “input sensitivity” metric, which allows obtaining trajectories that, when perturbed, require minimal changes of the control inputs and yield a minimal change in the final tracking error. We applied this machinery to the case of a planar quadrotor. An off-the-shelf nonlinear optimization scheme was also employed, allowing us to take into account (nonlinear) input constraints. A large statistical analysis was performed in simulation, showing the effectiveness of the approach in producing intrinsically-robust motion plans. We are now working towards an implementation on a real quadrotor, considering offsets in the center of mass (CoM) as one of the main sources of uncertainty. We are also working on the combination of sensitivity and observability metrics for taking into account both robustness and optimal state estimation when producing motion plans. Finally, we are studying how to formulate an optimization problem that can optimize both the trajectory and the control gains of a family of controllers for further improving the robustness of the generated trajectory.
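As a brief reminder of the underlying quantity (condensed notation): writing the closed-loop dynamics as \dot{x} = f(x, p, r(t)), with p the uncertain parameters and r(t) the reference trajectory, the state sensitivity \Pi(t) = \partial x(t) / \partial p evaluated at the nominal parameters obeys the linear matrix differential equation

    \[
    \dot{\Pi} \;=\; \left.\frac{\partial f}{\partial x}\right|_{\mathrm{nom}} \Pi \;+\; \left.\frac{\partial f}{\partial p}\right|_{\mathrm{nom}} ,
    \qquad \Pi(t_0) = 0 ,
    \]

which can be integrated alongside the nominal dynamics; the reference r(t) is then optimized so as to minimize a suitable norm of \Pi (and of the analogous input sensitivity in [34]) over the task duration.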

7.2.2 Comfortable path generation for wheelchair navigation

Participants: Guillaume Vailland, Marie Babel.

In the case of non-holonomic robot navigation, path planning algorithms such as the Rapidly-exploring Random Tree (RRT) rarely provide feasible and smooth paths without the need for additional processing. Furthermore, in a transport context like power wheelchair navigation, user comfort should be a priority and influence the path planning strategy.

We then proposed a local path planner which guarantees a bounded curvature and continuous connections between cubic Bézier curve pieces. To simulate and test this cubic Bézier local path planner, we developed a new RRT version (CBB-RRT*) which generates on-the-fly comfortable paths adapted to non-holonomic constraints [51].
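For illustration, the curvature bound that such a planner must enforce can be evaluated in closed form along each cubic Bézier piece (a self-contained sketch, not the planner's actual code):

    #include <Eigen/Dense>
    #include <cmath>

    // A planar cubic Bezier curve and its first two derivatives at parameter u in [0, 1]
    struct Bezier3 {
      Eigen::Vector2d p0, p1, p2, p3;   // control points

      Eigen::Vector2d d1(double u) const {
        double v = 1.0 - u;
        return 3*v*v*(p1 - p0) + 6*v*u*(p2 - p1) + 3*u*u*(p3 - p2);
      }
      Eigen::Vector2d d2(double u) const {
        double v = 1.0 - u;
        return 6*v*(p2 - 2*p1 + p0) + 6*u*(p3 - 2*p2 + p1);
      }
      // Planar curvature kappa(u) = |x' y'' - y' x''| / |B'(u)|^3
      double curvature(double u) const {
        Eigen::Vector2d a = d1(u), b = d2(u);
        double cross = a.x() * b.y() - a.y() * b.x();
        return std::abs(cross) / std::pow(a.norm(), 3);
      }
    };

    // A candidate piece is comfortable if its curvature never exceeds kappa_max
    // (checked here by dense sampling)
    bool curvatureBounded(const Bezier3 &c, double kappa_max, int samples = 100) {
      for (int i = 0; i <= samples; ++i)
        if (c.curvature(static_cast<double>(i) / samples) > kappa_max) return false;
      return true;
    }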

7.2.3 UWB beacon navigation of assisted power wheelchair

Participants: Vincent Drevelle, Marie Babel, Eric Marchand, François Pasteau, Merwane Bouri.

Typical problems in robotics are those of perception of the environment and localization. Visual sensors are poorly adapted to the context of autonomous wheelchair navigation, both in terms of acceptability (intrusiveness) and in terms of adaptation to the wheelchair and of overall cost.

New sensors, based on Ultra Wide Band (UWB) radio technology, are emerging, in particular for indoor localization and object tracking applications. This low-cost technology allows measuring distances between fixed beacons and a mobile sensor, in order to obtain localization with decimeter-level accuracy in the best case. We seek to exploit these sensors for the navigation of a wheelchair, despite the low accuracy of the measurements they provide.

The problem here lies in the definition of an autonomous or shared sensor-based control solution which fully exploits the notion of measurement uncertainty related to the UWB beacons. By modeling the uncertain distance measurements as intervals, we will try to propagate these uncertainties to the computation of the velocities to be applied to the wheelchair. This will be done by using set inversion and constraint propagation methods, which lead to the characterization of solutions in the form of sets.
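A minimal illustration of the set-inversion idea (a self-contained sketch, not the wheelchair implementation): a candidate position box can be classified against the interval range measurements by bounding its distance to each beacon, in the spirit of the SIVIA algorithm:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Box { double xlo, xhi, ylo, yhi; };        // axis-aligned position box
    struct Beacon { double x, y, rlo, rhi; };         // beacon position + measured range interval

    // Interval [dmin, dmax] of distances between any point of the box and the beacon
    static void distBounds(const Box &b, const Beacon &bc, double &dmin, double &dmax) {
      double dx = std::max({b.xlo - bc.x, bc.x - b.xhi, 0.0});
      double dy = std::max({b.ylo - bc.y, bc.y - b.yhi, 0.0});
      dmin = std::hypot(dx, dy);
      double DX = std::max(std::abs(b.xlo - bc.x), std::abs(b.xhi - bc.x));
      double DY = std::max(std::abs(b.ylo - bc.y), std::abs(b.yhi - bc.y));
      dmax = std::hypot(DX, DY);
    }

    enum Status { INSIDE, OUTSIDE, UNCERTAIN };

    // Classify a box against all interval range constraints
    Status classify(const Box &b, const std::vector<Beacon> &beacons) {
      bool inside = true;
      for (const Beacon &bc : beacons) {
        double dmin, dmax;
        distBounds(b, bc, dmin, dmax);
        if (dmax < bc.rlo || dmin > bc.rhi) return OUTSIDE;      // no consistent point in the box
        if (!(dmin >= bc.rlo && dmax <= bc.rhi)) inside = false; // only partially consistent
      }
      return inside ? INSIDE : UNCERTAIN;   // UNCERTAIN boxes would be bisected and re-tested
    }

Bisecting the uncertain boxes and recursing yields guaranteed inner and outer approximations of the set of wheelchair positions consistent with all the range intervals, which can then feed the control computation.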

7.2.4 Visual Servoing for Cable-Driven Parallel Robots

Participant: François Chaumette.

This study was done in collaboration with IRT Jules Verne (Zane Zake, Nicolo Pedemonte) and LS2N (Stéphane Caro) in Nantes (see Section 8.2). It was devoted to the analysis of the robustness of visual servoing to modeling and calibration errors for cable-driven parallel robots. Zane Zake defended her PhD in February, and her previous works on pose estimation, control workspace, and tension management have been published this year [55, 54, 56].

7.2.5 Singularities in visual servoing

Participant: François Chaumette.

This study is done in the scope of the ANR Sesame project (see Section 9.3).

We have performed a complete theoretical study of the singularities of image-based visual servoing and pose estimation (the PnP problem) from the observation of four image points. Highly original results have been exhibited. In particular, it was shown that 2 to 6 camera positions correspond to singularities for a general configuration of 4 non-coplanar points, while it was wrongly believed before that no singularities occur for such a configuration [4].

7.2.6 Multi-sensor-based control for accurate and safe assembly

Participants: John Thomas, François Chaumette.

This study is done in the scope of the BPI Lichie project (see Section 9.3). Its goal is to design sensor-based control strategies coupling vision and proximetry data for ensuring precise positioning while avoiding obstacles in dense environments. The targeted application is the assembly of satellite parts.

7.2.7 Visual servoing of a satellite constellation

Participants: Maxime Robic, Eric Marchand, François Chaumette.

This study is also done in the scope of the BPI Lichie project (see Section 9.3). Its goal is to control the orientation of the satellites of a constellation, from a camera mounted on each of them, to track particular objects on the ground. We study new control laws compatible with the control of the satellites.

7.2.8 Visual Exploration of an Indoor Environment

Participants: Thibault Noël, Eric Marchand, François Chaumette.

This study is done in collaboration with the Creative company in Rennes (see Section 8.2). It is devoted to the exploration of indoor environments by a mobile robot, typically Pepper (see Section 6.2.2), for a complete and accurate reconstruction of the environment.

7.2.9 Model-Based Deformation Servoing of Soft Objects

Participants: Fouad Makiyeh, Alexandre Krupa, Maud Marchal, François Chaumette.

This study takes place in the context of the GentleMAN project (see Section 9.1.3). The objective is to elaborate a new visual servoing approach aiming to control the shape of an object towards a desired deformation. This year we developed a new control approach that relies on a coarse model of the soft object to be manipulated. This model is composed of a 3D mesh, and we chose to represent the mechanical behavior of the object with a mass-spring model because it provides real-time capability. We derived an analytical expression of a new controller that allows us to indirectly move any feature point of the soft object to a desired 3D position by acting with the end-effector of a robot on a distant manipulated point. This controller was implemented in an eye-to-hand visual servoing scheme using an RGB-D camera, and the approach was tested on several soft objects with different geometries and materials. The experimental results demonstrated that this approach can accurately position a feature point belonging to a soft object at a desired 3D location even though it is based on a model that only approximates the physical behavior of the real object. This work has been recently submitted to the special issue of IEEE RA-L devoted to robotic handling of deformable objects.
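The following Python sketch conveys the principle under simplifying assumptions: the paper derives the controller analytically, whereas here the Jacobian relating the manipulated point to the feature point through the coarse model is obtained by finite differences around a hypothetical simulate() function, assumed to relax the mass-spring mesh to static equilibrium.

```python
import numpy as np

def control_step(simulate, p_manip, p_feat_des, lam=0.5, eps=1e-3):
    """One iteration of indirect positioning of a feature point.

    simulate(p_manip) is assumed to relax the coarse mass-spring mesh
    to static equilibrium and return the 3D feature-point position.
    """
    f0 = simulate(p_manip)
    # Finite-difference Jacobian of the feature point w.r.t. the
    # manipulated point (the paper uses an analytical expression).
    J = np.zeros((3, 3))
    for k in range(3):
        dp = np.zeros(3)
        dp[k] = eps
        J[:, k] = (simulate(p_manip + dp) - f0) / eps
    # Kinematic law enforcing an exponential decrease of the error
    v = -lam * np.linalg.pinv(J) @ (f0 - p_feat_des)
    return v   # velocity applied by the robot at the manipulated point
```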

7.2.10 Manipulation of a deformable wire by two UAVs

Participants: Lev Smolentsev, Alexandre Krupa, François Chaumette.

This study takes place in the context of the CominLabs MAMBO project (see Section 9.4). Its main objective is the development of a visual-based control framework for performing autonomous manipulation of a deformable wire attached between two UAVs, using data provided by onboard RGB-D cameras. Toward this goal, we have developed a visual servoing approach that considers as visual features the coefficients of a parabolic curve representing the shape of the wire, and we analytically derived the interaction matrix that relates the variations of these features to the RGB-D camera displacement. Preliminary results obtained from experiments using an eye-to-hand RGB-D camera observing a wire with one extremity attached to a 6-DOF robotic arm validated the modelling and the design of a control law that automatically positions the wire at a desired shape configuration.
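A minimal sketch of the feature extraction and control law follows, assuming the wire points are already expressed in the wire's plane; the interaction matrix L stands in for the analytically derived one of this work, and the gain is illustrative.

```python
import numpy as np

def parabola_features(points):
    """Least-squares fit of y = a x^2 + b x + c to sampled wire points
    (Nx2 array in the wire plane); s = (a, b, c) are the features."""
    x, y = points[:, 0], points[:, 1]
    A = np.stack([x ** 2, x, np.ones_like(x)], axis=1)
    s, *_ = np.linalg.lstsq(A, y, rcond=None)
    return s

def camera_velocity(s, s_star, L, lam=0.8):
    """Classical feature-space law driving the coefficients to s_star;
    L relates ds/dt to the camera velocity (assumed known here)."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```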

7.2.11 Coupling Force and Vision for Controlling Robot Manipulators

Participants: Alexander Oliva, François Chaumette, Paolo Robuffo Giordano.

The goal of this activity is to couple visual and force information for advanced manipulation tasks. To this end, we are exploiting the recently acquired Panda robot (see Sect. 6.2.4), a state-of-the-art 7-dof manipulator arm with torque sensing in the joints and the possibility to command torques at the joints or forces at the end-effector. The use of vision in torque-controlled robots is limited by many issues, among which the difficulty of fusing low-rate images (about 30 Hz) with high-rate torque commands (about 1 kHz), the delays caused by image processing and tracking algorithms, and the unavoidable occlusions that arise when the end-effector approaches an object to be grasped.

In this context we have proposed a general framework for combining force and visual information directly in the visual feature space, by reformulating and unifying the classical admittance control law in the image space. The proposed visual/force control framework has been extensively evaluated via numerous experiments performed on the Panda robot in peg-in-hole tasks, where both the pose and the exchanged forces could be regulated with high accuracy and good stability 26. We have recently considered the case of visual/force control for moving targets by exploiting a Kalman filter that estimates the target state and provides this information to the control loop. In order to quickly prototype these developments, we have also developed a realistic dynamic simulator of the Franka robot, called “FrankaSim”, that has been publicly released.
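One plausible reading of such an image-space admittance law is sketched below: the measured force error deforms the desired feature trajectory through a virtual mass-spring-damper system, and a classical IBVS law then tracks the compliant reference. All matrices (M, D, K, the interaction matrix L and the feature-space force map) are illustrative placeholders, not the formulation of 26.

```python
import numpy as np

def admittance_step(s, s_des, f, f_des, L, M, D, K, state, dt, lam=1.0):
    """One step of an admittance law written on visual features.

    state = (ds, ds_dot): offset of the compliant feature reference
    and its rate.  The force error, mapped to feature space through L,
    drives a virtual mass-spring-damper (M, D, K) that deforms the
    reference, so contact forces are regulated in the image itself.
    """
    ds, ds_dot = state
    f_err = f - f_des
    ds_ddot = np.linalg.solve(M, L @ f_err - D @ ds_dot - K @ ds)
    ds_dot = ds_dot + ds_ddot * dt
    ds = ds + ds_dot * dt
    s_ref = s_des + ds                           # compliant reference
    v = -lam * np.linalg.pinv(L) @ (s - s_ref)   # IBVS tracking of s_ref
    return v, (ds, ds_dot)
```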

7.2.12 End-to-end deep visual servoing

Participants: Eric Marchand, Samuel Felton.

We proposed a deep architecture and the associated learning strategy for end-to-end direct visual servoing. The approach sequentially predicts, in se(3), the velocity of a camera mounted on the robot's end-effector for positioning tasks. Positioning is achieved with high precision despite large initial errors in both Cartesian and image spaces. Training is fully done in simulation, alleviating the burden of data collection. We demonstrated the efficiency of our method in experiments in both simulated and real-world environments, and showed that the proposed approach is able to handle multiple scenes. This work 40 is done in collaboration with the Lacodam team.

We also proposed a new framework to perform visual servoing (VS) in the latent space learned by a convolutional autoencoder. We showed that this latent space avoids explicit feature extraction and tracking issues and provides a good representation, smoothing the cost function of the VS process. Moreover, our experiments show that this unsupervised learning approach allows us to obtain, without labelling cost, an accurate end-positioning, often on par with the best DVS methods in terms of accuracy but with a larger convergence area. This work 15 is also done in collaboration with the Lacodam team.
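Schematically, the control loop reduces to a classical law applied to latent features, as in the hedged sketch below; encoder and the latent interaction matrix L_z are assumptions of this sketch (the latter could, e.g., be obtained by differentiating through the encoder and combining with an image Jacobian).

```python
import numpy as np

def latent_servo_step(encoder, image, z_star, L_z, lam=0.3):
    """Visual servoing in the latent space of a convolutional
    autoencoder: encoder(image) -> latent vector z, servoed toward the
    latent code z_star of the desired image, without any explicit
    feature extraction or tracking."""
    z = encoder(image)
    return -lam * np.linalg.pinv(L_z) @ (z - z_star)
```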

7.3 Haptic Cueing for Robotic Applications

7.3.1 Wearable Haptics Systems Design

Participants: Claudio Pacchierotti, Maud Marchal, Thomas Howard, Xavier de Tinguy.

We have been working on wearable haptics for several years now, both from the hardware (design of interfaces) and software (rendering and interaction techniques) points of view.

In 3, we present an approach for automatically adapting the hardware design of a wearable haptic interface to a given user. We analyze the performance of a 3-DoF fingertip cutaneous device as a function of its main geometrical dimensions. Then, starting from the user's fingertip characteristics, we define a numerical procedure that best adapts the dimensions of the device to (i) maximize the range of renderable haptic stimuli, (ii) avoid unwanted contacts between the device and the skin, (iii) avoid singular configurations, and (iv) minimize the device encumbrance and weight. Together with the mechanical analysis and evaluation of the adapted design, we present a MATLAB script that calculates the device dimensions customized for a target fingertip, as well as an online CAD utility for generating a ready-to-print STL file of the personalized design.

One of the main issues when designing haptic systems for the fingertip is their tracking, especially when interacting with tangible/real objects at the same time. In this respect, in 14, we combined tracking information from a tangible object instrumented with capacitive sensors and an optical tracking system, to improve contact rendering when interacting with tangibles in Virtual Reality. Combining capacitive sensing with optical tracking significantly improves the visuohaptic synchronization and immersion of the experience, which is promising for haptic-enabled interaction with tangible environments.

Finally, in the framework of the H2020 project TACTILITY, we are working on designing interaction and rendering techniques for wearable electrotactile interfaces. In this respect, we proposed the use of electrotactile feedback to render the interpenetration distance between the user's finger and the virtual content that is touched 52. The approach consists of modulating the perceived intensity (through frequency and pulse-width modulation) of the electrotactile stimuli according to the registered interpenetration distance. We assessed the performance of four different interpenetration feedback approaches: electrotactile-only, visual-only, electrotactile and visual, and no interpenetration feedback. Results suggest that electrotactile feedback could be an efficient replacement for visual feedback for enhancing contact information in virtual reality, avoiding the need for active visual focus and the rendering of additional visual artefacts.
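The modulation principle can be summarised by a simple mapping from interpenetration depth to stimulation parameters; the linear profile and the parameter ranges below are illustrative, not the calibrated values of 52.

```python
def electrotactile_params(depth, depth_max=0.02,
                          f_range=(30.0, 200.0),
                          pw_range=(50e-6, 500e-6)):
    """Map the finger/object interpenetration depth (metres) to the
    frequency (Hz) and pulse width (s) of the electrotactile stimulus.
    Deeper interpenetration -> stronger perceived intensity."""
    a = max(0.0, min(depth / depth_max, 1.0))   # normalised depth
    freq = f_range[0] + a * (f_range[1] - f_range[0])
    pulse_width = pw_range[0] + a * (pw_range[1] - pw_range[0])
    return freq, pulse_width
```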

7.3.2 Mid-Air Haptic Feedback

Participants: Claudio Pacchierotti, Thomas Howard, Guillaume Gicquel, Maud Marchal.

In the framework of the H2020 projects H-Reality and E-TEXTURE, we are working to develop novel mid-air haptic paradigms that can convey the rich spectrum of touch sensations of the real world, motivating the need to develop new, natural interaction techniques.

In 45, we developed DOLPHIN, an open-source framework to aid in designing mid-air stimuli. It allows the study of the impact of rendering parameters on perceived stimulus properties. This platform-agnostic framework standardizes stimulus descriptions as a step toward greater replicability and easier communication in the field. It enables the reproduction of stimuli between perceptual experiments and ensures that stimuli used in applications correspond to those evaluated in prior perceptual studies.

In 62, we continued studying the perceptual aspects of ultrasound haptic stimulation, investigating the influence of rendering sampling strategies on a user's ability to differentiate arc curvatures.

7.3.3 Encountered-Type Haptic Devices

Participants: Maud Marchal, Thomas Howard.

Encountered-Type Haptic Displays (ETHDs) provide haptic feedback by positioning a tangible surface for the user to encounter. This allows users to freely elicit haptic feedback with a surface during a virtual simulation. ETHDs differ from most current haptic devices, which rely on an actuator always in contact with the user. In 23, we describe and analyze the different research efforts carried out in this field. In addition, we analyze the ETHD literature concerning definitions, history, hardware, the haptic perception processes involved, interactions and applications. The paper proposes a formal definition of ETHDs, a taxonomy for classifying hardware types, and an analysis of the haptic feedback used in the literature. Taken together, this survey intends to encourage future work in the ETHD field.

In 22, we propose an example of ETHD aiming toward an infinite-surface haptic display. Our approach, named ENcountered-Type ROtating Prop Approach (ENTROPiA), is based on a cylindrical spinning prop attached to a robot's end-effector serving as an ETHD. This type of haptic display allows users unconstrained, free-hand contact with a surface positioned by a robotic device for them to encounter. The sensation of touching a virtual surface is given by an interaction technique that couples with the sliding movement of the prop under the users' finger, tracking their hand location and establishing a path to be explored. This approach enables large motions for rendering larger surfaces, permits multi-textured haptic feedback, and extends the ETHD paradigm with large motions and sliding/friction sensations. As part of our contribution, a proof of concept was designed to illustrate the approach. A user study assessing its perception showed a significant performance for rendering the sensation of touching a large flat surface. Our approach could be used to render large haptic surfaces in applications such as rapid prototyping for automobile design.

In 44, we propose a novel haptic paradigm for object manipulation in 3D immersive VR. It uses a robotic manipulator to move tangible objects in its workspace such that they match the pose of virtual objects to be interacted with. Users can then naturally touch, grasp and manipulate a virtual object while feeling congruent and realistic haptic feedback from the tangible proxy. The tangible proxies can detach from the robot, allowing natural and unconstrained manipulation in the 3D virtual environment. When a manipulated virtual object comes into contact with the virtual environment, the robotic manipulator acts as an encounter-type haptic display, positioning itself so as to render reaction forces of the environment onto the manipulated physical object.

7.3.4 Multimodal Cutaneous Haptics to Assist Navigation

Participants: Louise Devigne, Marco Aggravi, Inès Lacôte, Pierre-Antoine Cabaret, François Pasteau, Maud Marchal, Claudio Pacchierotti, Marie Babel.

Within the Inria Challenge DORNELL project, we investigated the use of cutaneous haptics for aiding the navigation of people with sensory disabilities. In particular, we investigated the ability of vibrotactile sensations and tap stimulations to convey haptic motion illusions 43 in a handle-like device. We also evaluated the capability of vibrotactile feedback to render spatialized impacts with external (virtual) objects.

7.4 Shared Control Architectures

7.4.1 Shared Control for Remote Manipulation

Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Rahaf Rahal, Raul Fernandez Fernandez.

As teleoperation systems become more sophisticated and flexible, the environments and applications where they can be employed become less structured and predictable. This desirable evolution toward more challenging robotic tasks requires an increasing degree of training, skill, and concentration from the human operator. In this respect, shared-control algorithms have been investigated as one of the main tools to design complex but intuitive robotic teleoperation systems, helping operators carry out increasingly difficult robotic applications such as assisted vehicle navigation, surgical robotics, brain-computer interface manipulation, and rehabilitation. Indeed, this approach makes it possible to share the available degrees of freedom of the robotic system between the operator and an autonomous controller.

Along this general line of research, during this year we made the following contributions:

  • in 24 we presented an adaptive impedance control architecture for the robotic teleoperation of contact tasks featuring continuous interaction with the environment. We used Learning from Demonstration (LfD) as a framework to learn variable-stiffness control policies. The learnt state-varying stiffness was then used to command the remote manipulator, so as to adapt its interaction with the environment based on the sensed forces. The proposed system only relies on the on-board torque sensors of a commercial robotic manipulator and does not require any additional hardware or user input for the estimation of the required stiffness. We also provide a passivity analysis of our system, where the concept of energy tanks is used to guarantee a stable behavior (a minimal sketch of this mechanism is given after this list). Finally, the system was evaluated in a representative teleoperated cutting application. Results showed that the proposed variable-stiffness approach outperforms two standard constant-stiffness approaches in terms of safety and robot tracking performance.
  • in 19 we focused on robotic manipulation of fragile, compliant objects, such as food items. In particular we developed a haptic-based, Learning from Demonstration (LfD) policy that enables pre-trained autonomous grasping of food items using an anthropomorphic robotic system. The policy combines data from teleoperation and direct human manipulation of objects, embodying human intent and interaction areas of significance. We evaluated the proposed solution against a recent state-of-the-art LfD policy as well as against two standard impedance controller techniques. The results showed that the proposed policy performs significantly better than the other considered techniques, leading to high grasping success rates while guaranteeing the integrity of the food at hand.
  • in 5 we proposed a shared-control method for robot manipulators transporting an object on a tray. Differently from many existing studies on remotely operated robots with firm grasping capabilities, we considered the case in which the object can, in principle, break its contact with the robot end-effector. The proposed shared-control approach automatically regulates the remote robot motion commanded by the user and the end-effector orientation so as to prevent the object from sliding over the tray. Furthermore, the human operator is provided with haptic cues informing about the discrepancy between the commanded and executed robot motion, assisting the operator throughout the task execution. We carried out several experiments and user studies employing a 7-DOF torque-controlled manipulator. In all experiments, the results clearly show that our control approach outperforms the other solutions in terms of sliding prevention, robustness, command tracking, and user preference.
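As announced in the first item above, the following minimal sketch illustrates the energy-tank bookkeeping that keeps a variable-stiffness controller passive; the thresholds and power terms are illustrative, not the exact formulation of 24.

```python
def energy_tank_step(E, P_diss, P_stiff, dt, E_min=0.1, E_max=10.0):
    """Energy-tank bookkeeping for passive variable-stiffness control.

    P_diss  : power dissipated by the controller (refills the tank)
    P_stiff : power injected by stiffness variations (drains the tank)
    Stiffness increases are enabled only while the tank holds energy,
    so the controller never generates energy overall.
    """
    E = min(E + (P_diss - P_stiff) * dt, E_max)
    return E, E > E_min   # new tank level, flag allowing stiffening
```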

7.4.2 Shared Control for Multiple Robots

Participants: Marco Aggravi, Paolo Robuffo Giordano, Claudio Pacchierotti.

Following our previous works on flexible formation control of multiple robots with global requirements, in particular connectivity maintenance, in 9, 7, 1 we presented a decentralized haptic-enabled connectivity-maintenance control framework for heterogeneous human-robot teams. The proposed framework controls the coordinated motion of a team consisting of mobile robots and one human for collaboratively achieving various exploration and search-and-rescue (SAR) tasks. The human user physically becomes part of the team, moving in the same environment as the robots, while receiving rich haptic feedback about the team connectivity and the direction toward a safe path. We carried out two human subject studies, both in simulated and real environments. The results showed that the proposed approach is effective and viable in a wide range of SAR scenarios. Moreover, providing haptic feedback led to increased performance with respect to providing visual information only. Finally, conveying distinct feedback regarding the team connectivity and the path to follow performed better than providing the same information combined together.
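The underlying principle can be illustrated as follows: the team monitors the algebraic connectivity (Fiedler value) of its proximity graph and moves along its gradient whenever connectivity degrades. The weight profile, sensing range and centralized numerical gradient in this sketch are illustrative simplifications of the decentralized estimation scheme of 9, 7, 1.

```python
import numpy as np

def connectivity_gradient(positions, R=5.0, eps=1e-4):
    """Fiedler value of the proximity graph and a numerical gradient
    each robot can follow to keep the team connected; edge weights
    decay smoothly to zero at the sensing range R."""
    def lambda2(P):
        n = len(P)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(P[i] - P[j])
                if d < R:
                    W[i, j] = W[j, i] = np.exp(-d**2 / (R**2 - d**2))
        Lap = np.diag(W.sum(axis=1)) - W
        return np.linalg.eigvalsh(Lap)[1]   # second-smallest eigenvalue

    lam = lambda2(positions)
    grad = np.zeros_like(positions)
    for i in range(len(positions)):
        for k in range(positions.shape[1]):
            Pp = positions.copy()
            Pp[i, k] += eps
            grad[i, k] = (lambda2(Pp) - lam) / eps
    return lam, grad   # robots ascend grad when lam approaches zero
```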

7.4.3 Shared Control of Flexible Needles

Participants: Marco Aggravi, Claudio Pacchierotti, Alexandre Krupa.

We proposed a shared-control strategy where the user is only in charge of teleoperating, directly and intuitively in the 3D ultrasound image, the desired needle tip position via a haptic interface. In this approach, an autonomous low-level controller based on visual servoing from 3D ultrasound images handles the complexity of the 6-DOF motion that needs to be applied to the needle base to reach the desired needle tip position. In this shared-control strategy we also proposed to assist the user's 3D navigation through kinesthetic stimulation, by increasing the stiffness of the haptic device in the direction orthogonal to the one pointing at the anatomical target, and to provide the user with feedback on the needle tip cutting force. This force is obtained by subtracting an estimate of the friction force acting along the needle shaft from the total force measured at the needle base by a force sensor 8. In order to obtain a real-time estimate, we proposed a method that relies on the deformation of the needle shaft, which is automatically tracked in 3D ultrasound. We then provided the user with both the stimulation for 3D navigation assistance toward the target and the cutting force feedback, using the grounded haptic interface and a wearable cutaneous interface on the user's forearm. We carried out a human subject study to validate the insertion system in a gelatine phantom and compared seven different feedback techniques. The best performance was registered when providing navigation cues through kinesthetic feedback and needle tip cutting force through cutaneous vibrotactile feedback.
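A minimal sketch of the anisotropic kinesthetic guidance follows: the rendered stiffness is high orthogonally to the direction pointing at the anatomical target and low along it; the gains are illustrative, not the values used in the study.

```python
import numpy as np

def guidance_force(x_dev, target_dir, k_perp=300.0, k_par=20.0):
    """Anisotropic spring for navigation assistance (gains in N/m are
    illustrative).  x_dev is the deviation of the haptic proxy from
    the reference; target_dir points toward the anatomical target."""
    u = target_dir / np.linalg.norm(target_dir)
    x_par = np.dot(x_dev, u) * u   # component along the target direction
    x_perp = x_dev - x_par         # component to be stiffly penalised
    return -(k_perp * x_perp + k_par * x_par)
```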

7.4.4 Shared Control of a Wheelchair for Navigation Assistance

Participants: Louise Devigne, François Pasteau, Marie Babel.

Power wheelchairs allow people with motor disabilities to have more mobility and independence. In order to improve access to mobility for people with disabilities, we previously designed a semi-autonomous assistive wheelchair system which progressively corrects the trajectory as the user manually drives the wheelchair, smoothly avoiding obstacles.

Despite the COVID situation, INSA and the rehabilitation center of Pôle Saint Hélier managed to co-organize clinical trials in July 2021 at INSA and in September 2021 at Pôle Saint Hélier. Based on the previous trial results 2, the objective was to evaluate the clinical benefit of a driving assistance for people with disabilities experiencing high difficulties while steering a wheelchair. 18 people participated in the trials. We clearly confirmed the excellent ability of the system to assist users and the relevance of such an assistive technology.

Figure 9: A participant drives our smart power wheelchair along the circuit during clinical tests (September 2021).

In addition, in collaboration with the MIS laboratory (Fabio Morbidi, Guillaume Caron), we evaluated the use of additional visual sensors such as spherical vision to enhance the navigation experience and situation awareness by providing adequate feedback to the user 37. The idea is to generate an augmented view of the surrounding environment, presented to the user on a display. We conducted user trials at INSA in July 2021 with able-bodied subjects and older adults with mobility impairments. Our field results indicate that SpheriCol is effective in improving safety and situational awareness, and in supporting a driver's decisions during challenging but prevalent maneuvers, such as reversing into an elevator or centering in a corridor.

Finally, driving such a vehicle safely is a daily challenge, particularly in urban environments when navigating on sidewalks, negotiating curbs or dealing with uneven grounds. Indeed, differences of elevation have been reported to be among the most challenging environmental barriers to negotiate, with tipping and falling being the most common accidents power wheelchair users encounter. To this aim, we proposed a shared-control algorithm which provides assistance while navigating with a wheelchair in an environment containing negative obstacles. We designed a dedicated sensor-based control law allowing trajectory correction while approaching negative obstacles (e.g., steps, curbs, descending slopes). This shared-control method takes the human-in-the-loop factor into account. We are currently preparing clinical trials and ethics committee (Comité de Protection des Personnes) procedures to evaluate its clinical benefit.

7.4.5 Multisensory power wheelchair simulator

Participants: Guillaume Vailland, Louise Devigne, François Pasteau, Marie Babel.

Power wheelchairs are one of the main solutions for people with reduced mobility to maintain or regain autonomy and a comfortable and fulfilling life. However, driving a power wheelchair in a safe way is a difficult task that often requires training methods based on real-life situations. Although these methods are widely used in occupational therapy, they are often too complex to implement and unsuitable for some people with major difficulties.

In this context, we collaborated with clinicians to develop a Virtual Reality based power wheelchair simulator. This simulator is an innovative training tool adapted to any type of situation and impairment 50. It relies on a modular and versatile workflow enabling easy interfacing not only with any virtual display, but also with any user interface such as wheelchair controllers or feedback devices. A clinical trial was conducted in May 2021 and October 2021 in which 26 regular power wheelchair users were asked to complete a clinically validated task designed by clinicians under four display conditions: using the HTC Vive Pro HMD, the Immersia immersive room, or a screen (with or without haptic and vestibular feedback). The objective of this study was to compare performances between the four conditions and to evaluate the Quality of Experience. First analyses clearly show that the immersive conditions allow high driving performances to be achieved.

Figure 10: Participant driving in a virtual environment with our simulator and driving in real conditions with a real power wheelchair.

7.4.6 Integrating social interaction in a VR powered wheelchair driving simulator

Participants: Emilie Leblong, Antoine Cellier, Marie Babel.

Navigating in the city while driving a powered wheelchair, in a complex and dynamic environment made of various interactions with other humans, can be challenging for a person with disabilities. Learning how to drive a powered wheelchair thus remains a major issue for the clinical teams prescribing these technical mobility aids. The work carried out as part of the Interreg ADAPT project has made it possible to design a powered wheelchair simulator in VR. This work was done in cooperation with Anne-Hélène Olivier (MimeTIC team) and Valérie Gouranton (Hybrid team).

To promote the transfer of skills from virtual to real conditions, the use of such a platform requires the deployment of ecologically valid, interactive, populated virtual environments. These are currently devoid of any pedestrians, even though the question of social interaction in the framework of inclusive urban mobility is fundamental.

The first objective is to better understand how pedestrians and powered wheelchair users interact. In particular, this study aims to characterize personal space from the perspectives of both the pedestrian and the powered wheelchair driver in a laboratory setting.

The second objective is to use these new interaction models to improve dynamic virtual environments by including virtual humans that faithfully reproduce the modeled behaviors in reaction to the simulator user in a disability situation.

Finally, the third objective is to evaluate the fidelity of this new generation of wheelchair simulators by comparing the resulting interactions with those previously observed in real conditions. In particular, we will consider the perception of the risk of collision as well as the benefit of learning to drive on the simulator via clinical studies.

7.5 Crowd Simulation for Robotics

7.5.1 Perceptual evaluation of crowd simulations

Participant: Julien Pettré.

This topic has been addressed in collaboration with the MimeTIC team, and more especially with Ludovic Hoyet and Anne-Hélène Olivier who brought expertise on the perceptual evaluation of graphics content.

Simulating crowds requires controlling a very large number of trajectories and is usually performed using crowd motion algorithms for which appropriate parameter values need to be found. The relation between the parameter values of simulation techniques and the quality of the resulting trajectories has been studied either through perceptual experiments or by comparison with real crowd trajectories. In the work presented in 13, we integrate both strategies. A quality metric, QF, is proposed to abstract from reference data while capturing the most salient features that affect the perception of trajectory realism. QF weights and combines cost functions based on several individual, local and global properties of trajectories. These trajectory features are selected from the literature and from interviews with experts. To validate the capacity of QF to capture perceived trajectory quality, we conducted an online experiment that demonstrates the high agreement between the automatic quality score and non-expert users. To further demonstrate the usefulness of QF, we used it in a data-free parameter tuning application able to tune any parametric microscopic crowd simulation model that outputs independent trajectories for characters. The learnt parameters for the tuned crowd motion model maintain the influence of the reference data which was used to weight the terms of QF.
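Schematically, QF evaluates a weighted combination of trajectory cost functions, as in the sketch below; the actual features and learned weights of 13 are not reproduced here.

```python
import numpy as np

def quality_score(trajectories, cost_functions, weights):
    """QF-style quality metric: weighted combination of per-trajectory
    cost functions (lower is better, i.e. perceived as more realistic).
    Each cost function maps one trajectory to a scalar."""
    costs = np.array([np.mean([f(traj) for traj in trajectories])
                      for f in cost_functions])
    return float(np.dot(weights, costs))

# Data-free tuning then amounts to minimising this score over the
# parameters of a crowd simulator, e.g. by grid or random search.
```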

Our perceptual evaluation concerned not only the global trajectories generated by simulators, but also the motion of characters resulting from an animation layer, as reported in 33. The more diverse the characters and their behaviors are, the more realistic the virtual crowd is expected to be perceived. Hence, creating virtual crowds is a trade-off between the cost associated with acquiring more diverse assets, namely more virtual characters with their animations, and achieving better realism. In this paper, our focus is on the perceived variety in virtual crowd character motions. We present an experiment exploring whether observers are able to identify virtual crowds including motion clones in the case of large-scale crowds (from 250 to 1000 characters). As it is not possible to acquire individual motions for such numbers of characters, we rely on a state-of-the-art motion variation approach to synthesize unique variations of existing examples for each character in the crowd. Participants then compared pairs of videos, where each character was animated either with a unique motion or using a subset of these motions. Our results show that virtual crowds with more than two motions (one per gender) were perceptually equivalent, regardless of their size. We believe these findings can help create efficient crowd applications, and are an additional step toward a broader understanding of the perception of motion variety.

As we showed that the motion of virtual characters can play a great role in immersive simulation, we started research on visually-driven animation techniques, the first results of which are presented in 47.

7.5.2 Influence of path curvature on collision avoidance behaviour between two walkers

Participant: Julien Pettré.

This topic has been addressed in collaboration with the MimeTIC team, and more especially with Richard Kulpa and Anne-Hélène Olivier who directed the thesis of S. Lynch, main contributor of this work.

Navigating crowded community spaces requires interactions with pedestrians that follow rectilinear and curvilinear trajectories. In the case of rectilinear trajectories, it has been shown that the perceived action opportunities of the walkers might be afforded based on a future distance of closest approach. However, little is known about collision avoidance behaviours when avoiding walkers that follow curvilinear trajectories. In this work, presented in 20, twenty-two participants were immersed in a virtual environment and avoided a virtual human (VH) that followed either a rectilinear path or a curvilinear path with a 5 m or 10 m radius curve at various distances of closest approach. Compared to a rectilinear path (control condition), the curvilinear path with a 5 m radius yielded more collisions when the VH approached from behind the participant and more inversions when the VH approached from in front. During each trial, the evolution of the future distance of closest approach showed similarities between rectilinear paths and curvilinear paths with a 10 m radius curve. Overall, with few collisions and few inversions of crossing order, we can conclude that participants were capable of predicting the future distance of closest approach of virtual walkers that followed curvilinear trajectories. The task was solved with similar avoidance adaptations to those observed for rectilinear interactions. These findings should inform future endeavors to further understand collision avoidance strategies and the role of, for example, non-constant velocities.

7.5.3 SPH crowds: Agent-based crowd simulation up to extreme densities using fluid dynamics

Participants: Julien Pettré, Wouter van Toll, Cedric Braga, Thomas Chatagnon.

In highly dense crowds of humans, collisions between people occur often. It is common to simulate such a crowd as one fluid-like entity (macroscopic), and not as a set of individuals (microscopic, agent-based). Agent-based simulations are preferred for lower densities because they preserve the properties of individual people. However, their collision handling is too simplistic for extreme-density crowds. Therefore, neither paradigm is ideal for all possible densities. In this paper 29, we combine agent-based crowd simulation with Smoothed Particle Hydrodynamics (SPH), a particle-based method that is popular for fluid simulation. We integrate SPH into the crowd simulation loop by treating each agent as a fluid particle. The forces of SPH (for pressure and viscosity) then augment the usual navigation behavior and contact forces per agent. We extend the standard SPH model with a dynamic rest density per particle, which intuitively controls the crowd density that an agent is willing to accept. We also present a simple way to let agents blend between individual navigation and fluid-like interactions depending on the SPH density. Experiments show that SPH improves agent-based simulation in several ways: better stability at high densities, more intuitive control over the crowd density, and easier replication of wave-propagation effects. Also, density-based blending between collision avoidance and SPH improves the simulation of mixed-density scenarios. Our implementation can simulate tens of thousands of agents in real-time. As such, this work successfully prepares the agent-based paradigm for crowd simulation at all densities.
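A minimal sketch of the core idea follows, with an individual rest density per agent; the smoothing kernel, state equation and gains are illustrative simplifications of the model in 29, and a real implementation would use spatial hashing rather than the O(n²) loops below.

```python
import numpy as np

def sph_forces(pos, vel, rho0, h=1.0, k=5.0, mu=0.5, m=1.0):
    """SPH pressure and viscosity forces per agent, with an individual
    rest density rho0[i] (the crowd density an agent is willing to
    accept).  pos and vel are (n, 2) arrays."""
    n = len(pos)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            r = np.linalg.norm(pos[i] - pos[j])
            if r < h:
                rho[i] += m * (1.0 - r / h) ** 2   # smoothing kernel
    P = k * (rho - rho0)                           # state equation
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            if 1e-6 < r < h:
                # Pressure term: the (negative) kernel gradient is
                # folded into the direction rij/r, so over-density
                # pushes agents apart.
                F[i] += m * (P[i] + P[j]) / (2 * rho[j]) \
                        * (1.0 - r / h) * (rij / r)
                # Viscosity term smooths relative velocities on contact
                F[i] += mu * m * (vel[j] - vel[i]) / rho[j] * (1.0 - r / h)
    return F   # added to each agent's navigation and contact forces
```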

This effort to design simulators better suited to dense crowds has been accompanied by an experimental effort to better understand how humans behave under crowd pressure. Our study protocol is presented in 12. We also took advantage of working on this topic to write a state-of-the-art report presenting the recent evolutions of crowd simulation 30.

7.5.4 Studying factors influencing users' perception and action for Virtual Steering Navigation

Participant: Maud Marchal.

Virtual steering techniques enable users to navigate in Virtual Environments (VEs) larger than the available physical workspace. Even though these techniques do not require physical movement of the users (e.g., using a joystick and head orientation to steer towards a virtual direction), recent work observed that users might unintentionally move in the physical workspace while navigating, resulting in Unintended Positional Drift (UPD). This phenomenon can be a safety issue, since users may unintentionally reach the physical boundaries of the workspace while using a steering technique. In this context, as a necessary first step towards navigation techniques minimizing UPD, we propose in 11 to analyse and model the UPD during a virtual navigation task. In particular, we characterize and analyze the UPD in a dataset containing the positions and orientations of eighteen users performing a virtual slalom task using virtual steering techniques. We analyzed the performed motions and proposed two UPD models: the first based on a linear regression analysis and the second based on a Gaussian Mixture Model (GMM) analysis. We then assessed both models through a simulation-based evaluation where we reproduced the same navigation task using virtual agents. Our results indicate the feasibility of using simulation-based evaluations to study UPD.
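As a sketch of the second model, a Gaussian mixture can be fitted to the recorded drift samples and then sampled to reproduce plausible drift with virtual agents; the two-dimensional feature choice and number of components are assumptions of this sketch, not the exact setup of 11.

```python
from sklearn.mixture import GaussianMixture

def fit_upd_model(drift_samples, n_components=3):
    """Fit a GMM to observed unintended positional drift samples
    (Nx2 array of x/y displacements per navigation segment)."""
    return GaussianMixture(n_components=n_components,
                           random_state=0).fit(drift_samples)

# Virtual agents replaying the task can then draw drift from the model:
# drift, _ = gmm.sample(1)
```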

Rotation gains in Virtual Reality (VR) enable the exploration of wider VEs compared to the workspace users have in VR setups. The perception of these gains has consequently been explored through multiple experimental conditions in order to improve redirected navigation techniques. While most studies consider rotations in which participants can rotate at the pace they desire but without translational motion, we have no information about the potential impact of translational and rotational motions on the perception of rotation gains. In 35, we estimated the influence of these motions and compared the perceptual thresholds of rotation gains through a user study (n = 14), in which participants had to perform virtual rotation tasks at a constant rotation speed. The main results are that rotation gains are less perceivable at lower rotation speeds and that translational motion makes detection more difficult at lower rotation speeds. Furthermore, the paper provides insights into the user's gaze and body motion behaviour when exposed to rotation gains. These results contribute to the understanding of the perception of rotation gains in VEs.

7.5.5 From HRI to CRI: Crowd Robot Interaction—Understanding the Effect of Robots on Crowd Motion

Participants: Julien Pettré, Javad Amirian, Fabien Grzeskowiak.

The results reported here have been obtained in close collaboration with partners of the CrowdBot project, including T. Carlson from UCL, R. Siegwart from ETHZ, and A. Billard from EPFL. Around this topic, we first empirically studied the navigation of robots among crowds, and reported on simulation tools to evaluate robots' crowd navigation capabilities. We also explored simulation environments for a broader range of interactions 46. Two theses resulted from the project: Javad Amirian's 58 and Fabien Grzeskowiak's 59.

How does the presence of a robot affect pedestrians and crowd dynamics, and does this influence vary across robot types? In this paper, we took a first step towards answering this question by performing the crowd-robot gate-crossing experiment presented in 32. The study involved 28 participants and two distinct robot representatives: a smart wheelchair and a Pepper humanoid robot. The collected data include video recordings, robot and participant trajectories, and participants' responses to post-interaction questionnaires. Quantitative analysis of the trajectories suggests that the robot affects crowd dynamics in terms of trajectory regularity and interaction complexity. Qualitative results indicate that pedestrians tend to be more conservative and follow “social rules” when passing a wheelchair compared to a humanoid robot. These insights can be used to design social navigation strategies that allow more natural interaction by considering the robot's effect on crowd dynamics.

To further evaluate robots' crowd navigation capabilities, we designed the simulation-based challenge presented in 41. The evaluation of robot capabilities to navigate human crowds is essential to conceive new robots intended to operate in public spaces. This paper initiates the development of a benchmark tool to evaluate such capabilities; our long-term vision is to provide the community with a simulation tool that generates virtual crowded environments to test robots, to establish standard scenarios and metrics to evaluate navigation techniques in terms of safety and efficiency, and thus to establish new methods for benchmarking robots' crowd navigation capabilities. This work explores the architecture of the simulation tool, introduces first scenarios and evaluation metrics, and presents early results demonstrating that our solution is relevant as a benchmark tool.

7.5.6 Tracking Pedestrian Heads in Dense Crowd

Participants: Julien Pettré, Eric Marchand, Ramana Sundararaman.

Tracking humans in crowded video sequences is an important constituent of visual scene understanding and can bring crucial information and datasets for the purpose of modelling and simulating crowds, as we report in 6. Increasing crowd density challenges the visibility of humans, limiting the scalability of existing pedestrian trackers to higher crowd densities. For this reason, we propose to revitalize head tracking with the Crowd of Heads Dataset (CroHD), consisting of 9 sequences of 11,463 frames with over 2,276,838 heads and 5,230 tracks annotated in diverse scenes. For evaluation, we proposed a new metric, IDEucl, to measure an algorithm's efficacy in preserving a unique identity for the longest stretch in image coordinate space, thus building a correspondence between pedestrian crowd motion and the performance of a tracking algorithm. Moreover, we also propose a new head detector, HeadHunter, designed for small head detection in crowded scenes. We extend HeadHunter with a particle filter and a color-histogram-based re-identification module for head tracking. To establish this as a strong baseline, we compared our tracker with existing state-of-the-art pedestrian trackers on CroHD and demonstrated superiority, especially in identity-preserving tracking metrics. With a lightweight head detector and a tracker efficient at identity preservation, we believe our contributions will prove useful in advancing pedestrian tracking in dense crowds. We make our dataset, code and models publicly available.

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

IRT JV Happy

Participant: François Chaumette.

No Inria Rennes 13521, duration: 36 months.

This project ended in June 2021. It was managed by IRT Jules Verne and achieved in collaboration with LS2N, the ACsSysteme company and Airbus. Its goal was to develop local sensor-based control methods for the assembly of large aircraft parts.

 

Airbus React

Participants: Julien Dufour, Fabien Spindler, François Chaumette.

No Inria Rennes 16165, duration: 12 months.

This project started in September 2021. It is carried out in collaboration with LAAS in Toulouse for Airbus. Its goal is to develop a vision-based localization system so that a robot arm is able to point accurately at an industrial part.

8.2 Bilateral grants with industry

Creative

Participants: Thibault Noël, François Chaumette, Eric Marchand.

No Inria Rennes 15737, duration: 8 months.

This project funded by Creative started in February 2021. It supported Thibault Noël's salary before the agreement of a Cifre grant for his Ph.D. about visual exploration (see Section 7.2.8).

  

IRT JV Perform

Participant: François Chaumette.

No Inria Rennes 14049, duration: 38 months.

This project funded by IRT Jules Verne in Nantes ended in February 2021. It was achieved in cooperation with Stéphane Caro from LS2N in Nantes to support Zane Zake's Ph.D. about visual servoing of cable-driven parallel robots (see Section 7.2.4).

  

MX

Participants: François Pasteau, Marie Babel.

INSA Rennes, duration: July 2020 - December 2021.

This contract with MX (Acigné) aims to define an online load estimation in order to servo truck loaders.

  

Sopra-Steria

Participants: François Pasteau, Marie Babel.

INSA Rennes, duration: 12 months. This project funded by Sopra Steria aimed to design a smart rollator equipped with haptic feedback.

  

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Associate Teams in the framework of an Inria International Lab or in the framework of an Inria International Program

ISI4NAVE

Participants: Marie Babel, Claudio Pacchierotti, Louise Devigne, François Pasteau.

  • Title:
    Innovative Sensors and adapted Interfaces for assistive NAVigation and pathology Evaluation
  • Duration:
    2016 -> 2023
  • Coordinator:
    Marie Babel (marie.babel@irisa.fr)
  • Partners:
    • University College London
  • Inria contact:
    Marie Babel
  • Summary:
    This team aims at developing adapted interfaces that should improve the understanding of people who suffer from cognitive and/or visual impairments. It focuses on two main complementary objectives: (i) compensating both sensorimotor disabilities and cognitive impairments by designing innovative and adapted interfaces; (ii) enhancing the driving experience and bringing a new tool for rehabilitation purposes by defining efficient physical human-robot interaction.

9.1.2 Inria associate team not involved in an IIL or an international program

FRANTIC

Participants: Claudio Pacchierotti, Paolo Robuffo Giordano, Nicola De Carli, Fabien Spindler.

  • Title:
    French-Russian Advanced and Novel TactIle Cyberworlds
  • Duration:
    2021 -> 2023
  • Coordinator:
    Claudio Pacchierotti (claudio.pacchierotti@irisa.fr)
  • Partners:
    • Skolkovo institute of science and technology (Skoltech)
  • Inria contact:
    Claudio Pacchierotti
  • Summary:
    Ubiquitous haptic interfaces enable interaction with a virtual or augmented reality system while freely exploring the environment, unimpaired and unconstrained. The objectives of the team are to study and develop an innovative set of perceptually-motivated haptic interfaces for immersive interaction in XR and telerobotics, collaborating with Skoltech Space in the fields of haptics and human-robot interaction.

9.1.3 Participation in other International Programs

GentleMAN

Participants: Fouad Makiyeh, Alexandre Krupa, François Chaumette, Fabien Spindler.

  • Title:
    Gentle and Advanced Robotic Manipulation of 3D Compliant Objects
  • Duration:
    August 2019 - December 2023
  • Coordinator:
    Sintef Ocean (Norway)
  • Partners:
    • Sintef Ocean (Norway)
    • NTNU (Norway)
    • NMBU (Norway)
    • MIT (USA)
    • QUT (Australia)
  • Inria contact:
    Alexandre Krupa
  • Summary:
    This project is granted by The Research Council of Norway. Its main objective is to develop a novel learning framework that uses visual, force and tactile sensing to build new multi-modal learning models enabling robots to learn new and advanced skills for the manipulation of 3D compliant objects. The Rainbow group is involved in the elaboration of new approaches for visual tracking of deformable objects, active vision perception and visual servoing for deforming soft objects into desired shapes (see Section 7.2.9).
BIFROST

Participants: Alexandre Krupa, François Chaumette, Fabien Spindler.

  • Title:
    A Visual-Tactile Perception and Control Framework for Advanced Manipulation of 3D Compliant Objects
  • Duration:
    July 2021 - December 2025
  • Coordinator:
    Sintef Ocean (Norway)
  • Partners:
    • Sintef Ocean (Norway)
    • MIT (USA)
  • Inria contact:
    Alexandre Krupa
  • Summary:
    This project is granted by The Research Council of Norway. Its main objective is to develop a visual-tactile perception and control framework for advanced manipulation of 3D compliant objects. The Rainbow group is in charge of elaborating novel visual servoing approaches fusing visual and tactile feedback for dexterous manipulation of soft objects.
Study on the distributed control of Heterogeneous Human-Robot Teams with tactile feedback for Collaborative Exploration and Patrolling

Participants: Claudio Pacchierotti, Paolo Robuffo Giordano.

  • Title:
    Study on the distributed control of Heterogeneous Human-Robot Teams with tactile feedback for Collaborative Exploration and Patrolling
  • Duration:
    January 2021 - December 2022
  • Coordinator:
    IRISA/Inria Rennes
  • Partners:
    • Skolkovo Institute of Science and Technology (Russia)
  • Inria contact:
    Claudio Pacchierotti
  • Summary:
    This project is jointly granted by the CNRS and the RFBR (Russian Foundation for Basic Research) as a CNRS International Emerging Action (IEA). It proposes a paradigm for the natural control of a heterogeneous team composed of humans and robots (grounded and aerial). One or more human users move in the same environment as the robotic team, directing their coordinated motion. Each unit in the team shares information with its neighbors, processes it in a distributed way, and carries out its given tasks (e.g., complex environment exploration). The project will advance research in the theory of distributed, shared, and multi-robot control, wearable haptics, and aerial field robotics.

9.2 European initiatives

9.2.1 FP7 & H2020 projects

PRESENT

Participants: Claudio Pacchierotti, Julien Pettré, Alberto Jovane, Adèle Colas.

  • Title:
    Photoreal REaltime Sentient ENTity
  • Duration:
    September 2019 - August 2022
  • Coordinator:
    UNIVERSIDAD POMPEU FABRA (Spain)
  • Partners:
    • BRAINSTORMMULTIMEDIA SL (Spain)
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (France)
    • CREATIVEWORKERS-CREATIEVE WERKERS VZW (Belgium)
    • ETUITUS SRL (Italy)
    • INFOCERT SPA (Italy)
    • THE FRAMESTORE LIMITED (U.K.)
    • UNIVERSIDAD POMPEU FABRA (Spain)
    • UNIVERSITAET AUGSBURG (Germany)
    • UNIVERSITE RENNES II (France)
  • Inria contact:
    Julien Pettré
  • Summary:
    Our relationship with virtual entities is deepening. Already, we are using technologies like Siri, Alexa and Google Assistant to aid in day-to-day tasks. The EU-funded PRESENT project will develop a virtual digital companion, which will not only sound human but also look natural, demonstrate emotional sensitivity, and establish meaningful dialogue. Advances in photorealistic computer-generated characters, combined with emotion recognition and behaviour, and natural language technologies, will allow these virtual agents to not only look realistic but respond like a human. The project will demonstrate a set of practical tools, a pipeline and an application programming interface.
CLIPE

Participants: Julien Pettré, Tairan Yin, Vicenzo Abichequer.

  • Title:
    Creating Lively Interactive Populated Environments
  • Duration:
    March 2019 - February 2023
  • Coordinator:
    UNIVERSITY OF CYPRUS (Cyprus)
  • Partners:
    • ECOLE POLYTECHNIQUE (France)
    • KUNGLIGA TEKNISKA HOEGSKOLAN (Sweden)
    • MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DERWISSENSCHAFTEN EV (Germany)
    • SILVERSKY3D VR TECHNOLOGIES LTD (Cyprus)
    • THE PROVOST, FELLOWS, FOUNDATION SCHOLARS & THE OTHER MEMBERS OF BOARD OF THE COLLEGE OF THE HOLY UNDIVIDED TRINITY OF QUEEN ELIZABETH NEAR DUBLIN (Ireland)
    • UNIVERSITAT POLITECNICA DE CATALUNYA (Spain)
    • UNIVERSITY OF CYPRUS (Cyprus)
  • Inria contact:
    Julien Pettré
  • Summary:
    The project addresses the core challenges of designing new techniques to create and control interactive virtual characters, benefiting from the opportunities opened by the wide availability of emergent technologies in the domains of human digitization and displays, as well as recent progress in artificial intelligence. CLIPE aspires to train the new generation of researchers in these techniques, looking at the area holistically. The training and research programme is based on a multi-disciplinary and cross-sectoral philosophy, bringing together industry and academia experts and focusing on both technical and transversal skills development.
CrowdDNA

Participants: Julien Pettré, Thomas Chatagnon.

  • Title:
    TECHNOLOGIES FOR COMPUTER-ASSISTED CROWD MANAGEMENT
  • Duration:
    November 2020 - March 2023
  • Coordinator:
    Inria (France)
  • Partners:
    • ECOLE NORMALE SUPERIEURE DE RENNES (France)
    • FORSCHUNGSZENTRUM JULICH GMBH (Germany)
    • ONHYS (France)
    • UNIVERSIDAD REY JUAN CARLOS (Spain)
    • UNIVERSITAET ULM (Germany)
    • UNIVERSITE RENNES II (France)
    • UNIVERSITY OF LEEDS (U.K.)
  • Inria contact:
    Julien Pettré
  • Summary:
    Crowd management is a difficult task. Large crowds gathering for an outdoor event and heavy pedestrian traffic are events of serious concern for officials tasked with managing public spaces. Existing methods rely on simulation technologies and require the measurement of simulation variables that are difficult to estimate. The EU-funded CrowdDNA project proposes a new technology based on innovative crowd simulation models. It facilitates predictions on the dynamics, behaviour and risk factors of high-density crowds, addressing the need for safe and comfortable mass events. The project suggests that the analysis of some specific macroscopic characteristics of a crowd such as its apparent motion can offer important information about its internal structure and allow the exact assessment of its state.
CrowdBot

Participants: Javad Amirian, Fabien Grzeskowiak, Solenne Fortun, Marie Babel, Julien Pettré, Fabien Spindler.

  • Title:
    Robot navigation in dense crowds
  • Duration:
    January 2018 - December 2021
  • Coordinator:
    Inria (France)
  • Partners:
    • UCL (UK)
    • SoftBank Robotics (France)
    • Univ. Aachen (Germany)
    • EPFL (Switzerland)
    • ETHZ (Switzerland)
    • Locomotec (Germany)
  • Inria contact:
    Julien Pettré
  • Summary:
    CROWDBOT will enable mobile robots to navigate autonomously and assist humans in crowded areas. Today's robots are programmed to stop when a human, or any obstacle, is too close, to avoid coming into contact while moving. This prevents robots from entering densely frequented areas and performing effectively in these highly dynamic environments. CROWDBOT aims to fill the gap in knowledge on close interactions between robots and humans during navigation tasks. The project considers three realistic scenarios: 1) a semi-autonomous wheelchair that must adapt its trajectory to unexpected movements of people in its vicinity to ensure neither its user nor the pedestrians around it are injured; 2) the commercially available Pepper robot that must navigate in a dense crowd while actively approaching people to assist them; 3) the robot cuyBot, under development, which will adapt to compact crowds, being touched and pushed by people. These scenarios generate numerous ethical and safety concerns which this project addresses through a dedicated Ethical and Safety Advisory Board that will design guidelines for robots engaging in interaction in crowded environments. CROWDBOT gathers the required expertise to develop new robot capabilities allowing robots to move in a safe and socially acceptable manner. This requires achieving step changes in a) sensing abilities to estimate the crowd motion around the robot, b) cognitive abilities for the robot to predict the short-term evolution of the crowd state and c) navigation abilities to perform safe motion at close range from people. Through demonstrators and open software components, CROWDBOT will show that safe navigation tasks can be achieved within crowds and will facilitate incorporating its results into mobile robots, with significant scientific and industrial impact. By extending the robot operation field toward crowded environments, we enable possibilities for new applications, such as robot-assisted crowd traffic management.

9.2.2 Other european programs/initiatives

Interreg VA France (Channel) England ADAPT

Participants: Marie Babel, François Pasteau, Fabien Grzeskowiak, Louise Devigne, Guillaume Vailland.

  • Title:
    Assistive Devices for empowering disAbled People through robotic Technologies
  • Duration:
    Jan 2017 - June 2022
  • Coordinators:
    ESIGELEC/IRSEEM Rouen
  • Partners:
    • INSA Rennes - IRISA, LGCGM, IETR (France)
    • Université de Picardie Jules Verne - MIS (France),
    • Pôle Saint Hélier (France), CHU Rouen (France),
    • Réseau Breizh PC (France),
    • Pôle TES (France),
    • University College of London - Aspire CREATE (UK),
    • University of Kent (UK),
    • East Kent Hospitals Univ NHS Found. Trust (UK),
    • Health and Europe Centre (UK),
    • Plymouth Hospitals NHS Trust (UK),
    • Canterbury Christ Church University (UK),
    • Kent Surrey Sussex Academic Health Science Network (UK),
    • Cornwall Mobility Center (UK)
  • Inria contact:
    Marie Babel
  • Summary:
    This project aims at developing innovative assistive technologies in order to support the autonomy and to enhance the mobility of power wheelchair users with severe physical/cognitive disabilities. In particular, the objective is to design and evaluate a power wheelchair simulator as well as to design a multi-layer driving assistance system.

 

9.3 National initiatives

Equipex Robotex

Participants: Fabien Spindler, François Chaumette.

no Inria Rennes 6388, duration: 10 years.

Rainbow was one of the 15 French academic partners involved in the Equipex Robotex network that started in February 2011. It was devoted to acquiring and managing major equipment in the main robotics labs in France. In the scope of this project, we obtained the humanoid robot Romeo in 2015.

 

ANR Marsurg

Participants: Eric Marchand, François Chaumette, Fabien Spindler.

no Inria 16162, duration: 48 months.

This project started in September 2021. It involves a consortium managed by ISIR (Paris) with Pixee Medical and the Rainbow group. It aims at researching markerless augmented reality solutions for orthopedic surgery.

 

ANR Sesame

Participant: François Chaumette.

no Inria 13722, duration: 48 months.

This project started in January 2019. It involves a consortium managed by LS2N (Nantes) with LIP6 (Paris) and the Rainbow group. It aims at analysing singularity and stability issues in visual servoing (see Section 7.2.5).

 

Inria Challenge DORNELL

Participants: Marie Babel, Maud Marchal, Claudio Pacchierotti, François Pasteau, Louise Devigne, Marco Aggravi, Inès Lacôte, Pierre-Antoine Cabaret, Lisheng Kuang.

  • Title:
    DORNELL: A multimodal, shapeable haptic handle for mobility assistance of people with disabilities
  • Duration:
    November 2020 - December 2024
  • Coordinators:
    Marie Babel, Claudio Pacchierotti
  • Partners:
    • Potioc Inria team
    • MFX Inria team
    • LGCGM (Rennes)
    • Centre de rééducation Pôle Saint Hélier (Rennes)
    • ISIR (Paris)
    • Institut des jeunes aveugles (Yzeure)
  • Inria contact:
    Marie Babel, Claudio Pacchierotti
  • Summary:
    While technology helps people to compensate for a broad set of mobility impairments, visual perception and/or cognitive deficiencies still significantly affect their ability to move safely and easily. We propose an innovative multisensory, multimodal, smart haptic handle that can be easily plugged onto a wide range of mobility aids, including white canes, precanes, walkers, and power wheelchairs. Specifically fabricated to fit the needs of a person, it provides a wide set of ungrounded tactile sensations (e.g., pressure, skin stretch, vibrations) in a portable and plug-and-play format, bringing haptics to assistive technologies all at once. The project will address important scientific and technological challenges, including the study of multisensory perception, the use of new materials for multimodal haptic feedback, and the development of a haptic rendering API to adapt the feedback to different assistive scenarios and users' wishes. We will co-design DORNELL with users and therapists, driving our development by their expectations and needs.
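
To give a concrete, purely hypothetical picture of what such a haptic rendering API could look like (every class and method name below is invented for illustration):

```python
from dataclasses import dataclass

# Invented API sketch: map navigation events to multimodal tactile cues,
# independently of the mobility aid the handle is plugged onto.
@dataclass
class TactileCue:
    modality: str     # "vibration", "pressure" or "skin_stretch"
    intensity: float  # normalized in [0, 1]
    duration: float   # seconds

class HapticHandle:
    def __init__(self, aid_type: str):
        self.aid_type = aid_type  # "white_cane", "walker", "wheelchair", ...

    def render(self, cue: TactileCue):
        # A real driver would forward the command to the handle's actuators.
        print(f"[{self.aid_type}] {cue.modality}: {cue.intensity:.1f} for {cue.duration:.1f}s")

def obstacle_warning(handle: HapticHandle, distance: float):
    """Closer obstacles yield stronger and longer vibration bursts."""
    intensity = max(0.0, min(1.0, 1.0 - distance / 2.0))
    handle.render(TactileCue("vibration", intensity, 0.2 + 0.3 * intensity))

obstacle_warning(HapticHandle("wheelchair"), distance=0.5)
```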

 

BPI Lichie

Participants: Maxime Robic, John Thomas, Julien Dufour, Eric Marchand, François Chaumette.

No. Inria 14876, duration: 45 months.

This project started in March 2020. It involves a consortium managed by Airbus (Toulouse) with many companies, Onera and Inria. It aims at designing a new constellation of satellites with on-board imaging facilities. Robotics for the assembly of the satellites is also studied. As for Rainbow, this project funds the PhDs of Maxime Robic and John Thomas (see Sections 7.2.6 and 7.2.7), as well as the software developments achieved for ViSP by Julien Dufour.

 

ANR CAMP

Participants: P. Robuffo Giordano, Q. Delamare, F. Spindler.

  • Title:
    Intrinsically-Robust and Control-Aware Motion Planning for Robots in Real-World Conditions
  • Duration:
    October 2020 - September 2024
  • Coordinator:
    P. Robuffo Giordano
  • Partners:
    • LAAS (Toulouse)
    • Univ. Twente (Netherlands)
  • Inria contact:
    P. Robuffo Giordano
  • Summary:
    An effective way of dealing with the complexity of robots operating in real (uncertain) environments is the paradigm of “feedforward/feedback” or “planning/control”: in a first step, a suitable nominal trajectory (feedforward) for the robot states/controls is planned by exploiting the available information (e.g., a model of the robot and of the environment); in a second step, a feedback controller compensates online for the unavoidable deviations from this nominal plan. While there has been an effort in proposing “robust planners” or more “global controllers” (e.g., Model Predictive Control (MPC)), a truly unified approach that fully exploits the techniques of the motion planning and control/estimation communities is still missing, and the existing state of the art has several important limitations, namely (1) lack of generality, (2) lack of computational efficiency, and (3) poor robustness. In this respect, the ambition of CAMP is to (1) develop a general and unified “intrinsically-robust and control-aware motion planning framework” able to address all the above-mentioned issues, and (2) demonstrate the applicability of this new framework to real robots in real-world challenging tasks. In particular, we envisage two robotics demonstrators for showing at best the effectiveness and generality of our methodology: (1) an indoor pick-and-place/assembly task involving a 7-dof torque-controlled arm, for a first validation in “controlled conditions”, and (2) an outdoor cooperative mobile manipulation task involving an aerial manipulator (a quadrotor UAV equipped with an onboard arm) and a skid-steering mobile robot with an onboard arm, for a final validation in much less favorable experimental conditions (see Sect. 7.2.1).
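
As a simplified stand-in for the "intrinsically-robust" planning idea (the project's actual formulation relies on closed-loop state sensitivity; here a naive Monte-Carlo illustration with assumed one-dimensional dynamics and gains):

```python
import numpy as np

# Naive illustration: score a candidate reference trajectory by simulating
# the closed loop under sampled model parameters and taking the worst-case
# tracking error. Dynamics, controller and parameter range are all assumed.
rng = np.random.default_rng(1)

def simulate(traj, mass, kp=4.0, kd=1.0, dt=0.05):
    """1D double integrator tracking traj with a PD controller."""
    x, v, err = traj[0], 0.0, 0.0
    for x_ref in traj:
        u = kp * (x_ref - x) - kd * v
        v += (u / mass) * dt          # mass is the uncertain parameter
        x += v * dt
        err = max(err, abs(x_ref - x))
    return err

def robustness_cost(traj, n_samples=50):
    """Worst-case tracking error over masses sampled in [0.8, 1.2] kg."""
    return max(simulate(traj, m) for m in rng.uniform(0.8, 1.2, n_samples))

gentle = np.linspace(0.0, 1.0, 200)      # slow reference trajectory
aggressive = np.linspace(0.0, 1.0, 40)   # same path, traversed much faster
print(robustness_cost(gentle), robustness_cost(aggressive))  # gentle scores lower
```
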
ANR MULTISHARED

Participants: P. Robuffo Giordano, C. Pacchierotti, V. Drevelle, J. Pettré, G. Notomista, J. Nader.

  • Title:
    Shared-Control Algorithms for Human/Multi-Robot Cooperation
  • Duration:
    September 2020 - August 2024
  • Coordinator:
    P. Robuffo Giordano
  • Inria contact:
    P. Robuffo Giordano
  • Summary:
    The goal of the Chaire AI MULTISHARED is to significantly advance the state of the art in multi-robot autonomy and human/multi-robot interaction, allowing a human operator to intuitively control the coordinated motion of a multi-UAV group navigating in remote environments. A strong emphasis is placed on the division of roles between the multi-robot autonomy (in controlling its motion/configuration and in online decision-making) and the human intervention/guidance, with the operator providing high-level commands to the group while remaining aware of the group status via VR and haptics technology (see Sect. 7.1.2 and Sect. 7.4.2).
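
To picture this division of roles, here is a toy sketch (not the project's controller) in which the operator steers the group centroid while each UAV autonomously maintains its formation offset; all positions, offsets and gains are assumed:

```python
import numpy as np

# Toy sketch: the human commands a group-level velocity; each UAV adds an
# autonomous formation-keeping term toward its offset from the centroid.
K_FORM = 1.5  # formation-keeping gain (assumed)

def uav_velocities(positions, offsets, human_cmd):
    centroid = positions.mean(axis=0)
    desired = centroid + offsets                 # where each UAV should sit
    return human_cmd + K_FORM * (desired - positions)

pos = np.array([[0.0, 0.0], [1.2, 0.1], [0.4, 1.1]])
off = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # triangle formation
print(uav_velocities(pos, off, human_cmd=np.array([0.5, 0.0])))
```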

9.4 Regional initiatives

CominLabs MAMBO

Participants: Lev Smolentsev, Alexandre Krupa, François Chaumette, Paolo Robuffo Giordano, Fabien Spindler.

  • Title:
    Manipulation of Soft Bodies with Multiple Drones
  • Duration:
    October 2020 - September 2024
  • Coordinator:
    LS2N (Nantes)
  • Partners:
    • LS2N (Nantes)
  • Inria contact:
    Alexandre Krupa
  • Summary:
    This project is funded by the Labex CominLabs. It is led by the ARMEN team at LS2N (Nantes) and involves the Rainbow project-team. Its objective is to propose a scientific framework allowing the manipulation of an object by the combined action of two drones equipped with onboard cameras and force sensors. The envisaged solution is to manipulate a deformable body (a slender beam) attached between the two drones in order to grasp an object on the floor and move it to another location. Within this project, the Rainbow group works on new approaches for controlling the two drones by visual servoing, using data provided by onboard RGB-D cameras (see Section 7.2.10).
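
A schematic sketch of the underlying idea (not the project's actual controller): regulate a simple shape feature of the suspended beam, here its mid-point sag as it could be measured by the onboard RGB-D cameras, by servoing the distance between the two drones. The sag model and gain are assumed for illustration.

```python
import numpy as np

# Schematic only: servo the sag of a slender beam hung between two drones
# toward a desired value by adjusting the inter-drone distance d.
def measured_sag(d, length=2.0):
    """Toy model: a shorter inter-drone distance lets the beam sag more."""
    return 0.5 * np.sqrt(max(length**2 - d**2, 0.0))

def servo_step(d, sag_desired, lam=0.8):
    error = measured_sag(d) - sag_desired
    return d + lam * error   # widen when sagging too much, narrow otherwise

d = 1.2
for _ in range(20):
    d = servo_step(d, sag_desired=0.3)
print(d, measured_sag(d))    # converges toward the desired sag
```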

 

Cartam

Participants: Julien Dufour, Fabien Spindler, François Chaumette.

No. Inria Rennes 14041 and 13954, duration: 36 months.

This project started in January 2019. It is supported by the Brittany region and the FEDER program. It is managed by Unilet with Copeeks, Neotec Vision, the Rainbow group, and our start-up Dilepix. It aims at designing a vision system able to detect and locate weeds in a field. We are in charge of tracking the weeds once they are detected and of building a geo-localized map of the field.
Silver Connect

Participant: Marie Babel.

  • Title:
    SilverConnect - Le digital au service des EHPAD
  • Duration:
    September 2018 - April 2021
  • Coordinator:
    Hoppen (Rennes)
  • Partners:
    • INSA Rennes
    • Hoppen (Rennes)
    • Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
    • Famileo (Rennes)
  • Inria contact:
    Marie Babel
  • Summary:
    This project started in November 2018 and is supported by Brittany region/BPI as well as FEDER. This project aims at designing a fall detection framework by means of vision-based algorithms coupled with deep learning solutions.
Ambrougerien

Participants: Marie Babel, Vincent Drevelle, François Pasteau, Merwane Bouri.

  • Title:
    Autonomie, MoBilité et fauteuil ROUlant robotisé : GEolocalisation indoor et Recharge IntelligENte
  • Duration:
    December 2020 - December 2024
  • Coordinator:
    DK Innovation (Plérin)
  • Partners:
    • INSA Rennes
    • Hoppen (Rennes)
    • Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
  • Inria contact:
    Marie Babel
  • Summary:
    This project started in December 2020 and is supported by the Brittany region and Rennes Métropole. AMBROUGERIEN aims at supporting the independence of power wheelchair users. A dedicated interface allows the wheelchair to move autonomously, to make transfers safer, and to return to a smart induction charging base. Information on the internal state of the wheelchairs facilitates fleet management.
Academic Chair IH2A

Participants: Marie Babel, Vincent Drevelle, Maud Marchal, Claudio Pacchierotti, François Pasteau, Louise Devigne, Fabien Grzeskowiak, Guillaume Vailland, Anne-Hélène Olivier (MimeTIC), Bruno Arnaldi (Hybrid), Valérie Gouranton (Hybrid), Florian Nouviale (Hybrid), Alexandre Audinot (Hybrid).

  • Title:
    Academic Chair on Innovations, Handicap, Autonomy and Accessibility (IH2A)
  • Duration:
    September 2020 -
  • Coordinator:
    Marie Babel
  • Partners:
    • IETR Rennes
    • LGCGM Rennes
    • Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
  • Inria contact:
    Marie Babel
  • Summary:
    This research chair (Innovations, Handicap, Autonomy and Accessibility - IH2A) is a continuation of the research work on assistive robotics developed at INSA Rennes within the Rainbow team. The idea is to propose the most suitable technological solutions to compensate for the sensory-motor handicaps that limit the mobility and autonomy of people in daily-life tasks and leisure activities. The Chair thus aims at perpetuating these activities, both from a societal point of view and from a scientific and clinical point of view, and is intended to be an effective and innovative tool for the deployment of large-scale research in this area. The creation of a new type of multidisciplinary, innovative and collaborative experimentation site will allow the clinical and scientific validation of the technical assistance offered, while ensuring the accessibility of the deployed solutions.

10 Dissemination

Participants: François Chaumette, Paolo Robuffo Giordano, Claudio Pacchierotti, Marie Babel, Maud Marchal, Alexandre Krupa, Julien Pettré, Eric Marchand, Vincent Drevelle, Fabien Spindler.

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

General chair, scientific chair
  • P. Robuffo Giordano was Program co-chair for the IEEE International Symposium On Multi-Robot And Multi-Agent Systems 2021 (IEEE MRS 2021)
  • M. Marchal was Program co-chair for the Journal Papers Track of IEEE Virtual Reality and 3D User Interfaces 2021 (IEEE VR 2021). She was also Program co-chair for Conference Papers Track of IEEE Symposium on Mixed and Augmented Reality 2021 (IEEE ISMAR 2021).

10.1.2 Scientific events: selection

Member of the conference program committees
  • P. Robuffo Giordano was Associate Editor of IEEE ICRA 2022
  • E. Marchand was Associate Editor of IEEE ICRA 2021.
  • M. Babel was Associate Editor of IEEE ICRA 2022.
  • C. Pacchierotti was Associate Editor of IEEE ICRA 2022, Editorial Committee Member of Eurohaptics 2022, and Chair of the Cross-Cutting Challenges for IEEE HAPTICS 2022.
  • M. Marchal was a Member of the Technical Papers Committee of Siggraph 2021, a Program Committee Member of the ACM/SIGGRAPH conference on Motion, Interaction and Games 2021 (MIG 2021), a Program Committee Member of ACM/Eurographics Symposium on Computer Animation 2021 (SCA 2021).
  • J. Pettré was member of program committee for CASA 2021 and ACM MIG 2021.
Reviewer
  • P. Robuffo Giordano: IEEE MRS (1), IEEE/RSJ IROS (1), IEEE ICRA (2)
  • F. Chaumette: IEEE ICRA (1)
  • A. Krupa: IEEE ICRA (1)
  • E. Marchand: IEEE ICRA (1), IEEE IROS (1), BMVC (3)
  • F. Spindler: RobAgri (1)
  • M. Babel: IEEE ICRA (1)
  • C. Pacchierotti: IEEE HAPTICS (1)
  • J. Pettré: ACM SIGGRAPH (1), ACM SIGGRAPH Asia (1), ACM SCA (1)

10.1.3 Journal

Member of the editorial boards
  • P. Robuffo Giordano and F. Chaumette are Editors for the IEEE Transactions on Robotics
  • A. Krupa is Associate Editor for the IEEE Transactions on Robotics
  • E. Marchand is Senior Editor for the IEEE Robotics and Automation Letters
  • C. Pacchierotti is Associate Editor for the IEEE Robotics and Automation Letters
  • M. Marchal is Associate Editor of IEEE Transactions on Visualization and Computer Graphics, IEEE Computer Graphics and Applications, Computers & Graphics and IEEE Transactions on Haptics.
  • J. Pettré is Associate Editor for Computer Graphics Forum and Computer Animation and Virtual Worlds.
Reviewer - reviewing activities
  • P. Robuffo Giordano: IEEE T-RO (1), IEEE RA-L (2), IEEE L-CSS (1)
  • F. Chaumette: IEEE RA-L (4)
  • A. Krupa: IEEE RA-L (1)
  • E. Marchand: IEEE RA-L (2), IEEE TVCG (1)
  • M. Babel: Springer International Journal of Social Robotics (1), IEEE T-RO (1), Disability and Rehabilitation: Assistive Technology (1)
  • C. Pacchierotti: IEEE TOH (7), IEEE T-RO (1), Journal of Intelligent & Robotic Systems (1), ACM Computing Survey (1), Behavior & Information Technology (2), IEEE Transactions on Medical Robotics and Bionics (1), IEEE Transactions on Visualization and Computer Graphics (1), Frontiers in Virtual Reality (1), IEEE Computer Magazine (1)
  • M. Marchal: ACM TOG (2), IEEE TOH (1)
  • J. Pettré: Scientific Reports (1), Physica A (1), Computer and Graphics (1)

10.1.4 Invited talks

  • P. Robuffo Giordano. “An Overview of Formation Control and Localization for Multiple Robot Systems”. ENS, Computer Science Department, Rennes, September 2021
  • P. Robuffo Giordano. “Recent Advances in Shared Control for Tele-manipulation and Tele-navigation”. DIAG, “La Sapienza” University of Rome, Italy, June 2021
  • P. Robuffo Giordano. “Recent Advances in Shared Control for Tele-manipulation and Tele-navigation”. Rencontres mécatroniques 2021, ENS Rennes, April 2021
  • M. Babel, “Robotique d'assistance et handicap : la mobilité pour tous”, 60ème Congrès annuel du Club EEA, June 2021
  • M. Babel, “ADAPT project: knowledge, technology”, Interreg EDUCAT project closing event, June 2021
  • C. Pacchierotti. “Less is more: the challenge of wearable haptics in the era of immersive technologies.” University of Aarhus, Herning (then held online), Denmark, 2021.
  • C. Pacchierotti. “Touching virtual reality: Extending immersive experiences through haptics.” MatchPoints 2021, Aarhus (then held online), Denmark, 2021.
  • M. Marchal. “Playing with tangibles in Virtual Reality”, Journées IHM IG RV, June 2021
  • M. Marchal. “Multisensory feedback for 3D Hand Interaction with virtual environments”, Univ. Grenoble Alpes, September 2021.
  • J. Pettré, “Virtual Population for Urban Digital Twins”, TICS4Ci (Latin-American network on ICT applications for smart cities).

10.1.5 Leadership within the scientific community

  • F. Chaumette is a member of the "Haut Comité des Grandes Infrastructures de Recherche" of the French Ministry of Research. He also serves as a member of the Scientific Council of the Mathematics and Computer Science Department of INRAe, and is a founding member of the Scientific Council of the GdR Robotique. Finally, he serves on the Advisory Board of the H2020 ERA Chair AIFORS project.
  • C. Pacchierotti is Senior Chair of the IEEE Technical Committee on Haptics and Secretary of the Eurohaptics Society.
  • M. Marchal is a Board Member of Institut Universitaire de France and a junior member of the institute. She is a board member of GDR Informatique Graphique-Réalité Virtuelle.

10.1.6 Scientific expertise

  • P. Robuffo Giordano has been elected member of Section 07 of the Comité National de la Recherche Scientifique. He also served as an expert/reviewer for the euRobotics “Georges Giralt” award for the best European PhD thesis in robotics, and for the evaluation of research projects submitted to the ANR, the SNSF (Switzerland), and the ERC (Advanced Grants). He was a reviewer for the H2020 projects AirBorne, HyFliers and PILOTING.
  • F. Chaumette served as a member of the IEEE RAS Fellow Evaluation Committee in 2021. He was also on the CNRS evaluation committee of the JRL lab (Tsukuba).
  • A. Krupa served as expert/reviewer for the Best French PhD Thesis in robotics awarded by the GdR Robotique in 2021.
  • Since 2017, Marie Babel has served as an expert for the International Mission of the French Research Ministry (MEIRIES) - Campus France. In 2021, she also served as an expert for the Haute Autorité de Santé and for the Ligue du Cancer.
  • M. Marchal served as a reviewer for Starting Grants from the ERC.

10.1.7 Research administration

  • F. Chaumette served as the president of the committee in charge of all temporary recruitments (“Commission Personnel”) at Inria Rennes-Bretagne Atlantique and IRISA, and was a member of the Head team of Inria Rennes-Bretagne Atlantique until August 2021. He serves on the Scientific Steering Committee (COSS) of IRISA. He is also a member of the Inria COERLE committee (in charge of the ethical aspects of all Inria research). This year, he evaluated the ethical part of the 30 projects selected by the Brittany Bienvenue Program.
  • A. Krupa is a member of the CUMIR (“Commission des Utilisateurs des Moyens Informatiques pour la Recherche”) of Inria Rennes-Bretagne Atlantique.
  • E. Marchand is the head of "Digital Signals and Images, Robotics" department at IRISA.
  • M. Marchal is a council member of the INSA component of IRISA.
  • J. Pettré serves as president of the CUMIR (“Commission des Utilisateurs des Moyens Informatiques pour la Recherche”) of Inria Rennes-Bretagne Atlantique.

10.2 Teaching - Supervision - Juries

10.2.1 Teaching

François Chaumette:

  • Master SISEA: “Robot Vision”, 12 hours, M2, Université de Rennes 1
  • Master ENS: “Visual servoing”, 6 hours, M1, École normale supérieure de Rennes;
  • Master ESIR3: “Visual servoing”, 8 hours, M2, Ecole supérieure d'ingénieurs de Rennes.

Alexandre Krupa:

  • Master FIP TIC-Santé: “Ultrasound visual servoing”, 6 hours, M2, Télécom Physique Strasbourg
  • Master ESIR3: “Ultrasound visual servoing”, 9 hours, M2, Esir Rennes
  • Master INSA1: “Computer programming”, 42 hours, L1, INSA Rennes

Eric Marchand:

  • Master Esir2: “Colorimetry”, 12 hours, M1, Esir Rennes
  • Master Esir2: “Computer vision: geometry”, 24 hours, M1, Esir Rennes
  • Master Esir3: “Robotics Vision 1”, 24 hours, M2, Esir Rennes
  • Master Esir3: “Robotics Vision 2”, 14 hours, M2, Esir Rennes
  • Master MRI: “Computer vision”, 12 hours, M2, Université de Rennes 1
  • Master ENS: “Computer vision”, 16 hours, M2, ENS Rennes
  • Master MIA: “Augmented reality”, 4 hours, M2, Université de Rennes 1
  • Licence ESIR 1: “System”, 16 hours, L3, Université de Rennes 1

Marie Babel:

  • Master INSA2: “Robotics”, 26 hours, M1, INSA Rennes
  • Master INSA1: “Concepts de la logique à la programmation”, 20 hours, L3, INSA Rennes
  • Master INSA1: “Langage C”, 12 hours, L3, INSA Rennes
  • Master INSA2: “Computer science project”, 30 hours, M1, INSA Rennes
  • Master INSA1: “Practical studies”, 16 hours, L3, INSA Rennes
  • Master INSA2: “Image analysis”, 26 hours, M1, INSA Rennes
  • Master INSA1: “Remedial math courses”, 50 hours, L3, INSA Rennes
  • Master INSA 1: “Probability”, 14 hours, L3, INSA Rennes
  • Master INSA: tutoring and support for students with disabilities, 30 hours, INSA Rennes

Claudio Pacchierotti:

  • Master “Artificial Intelligence & Advanced Visual Computing”: “INF644 – Virtual/Augmented Reality & 3D Interactions”, 6 hours, M2, École Polytechnique
  • Master SIF: “Virtual Reality and Multi-Sensory Interaction”, 4 hours, M2, IRISA.

M. Marchal:

  • Master SIF: “Computer Graphics”, 8 hours, M2, INSA/Univ. Rennes 1.
  • Master INSA1: “Computer Graphics”, 20 hours, M1, INSA Rennes.
  • Master INSA1: “Complexity and algorithms”, 26 hours, L3, INSA Rennes.

Vincent Drevelle:

  • Master 2 ILA/CCNA: “Transverse project”, 30 hours, M2, Université de Rennes 1
  • Master 1 Info: “Artificial intelligence”, 20 hours, M1, Université de Rennes 1
  • Licence Info: “Computer systems architecture”, 60 hours, L1, Université de Rennes 1
  • Portail Info-Elec: “Discovering programming and electronics”, 18 hours, L1, Université de Rennes 1
  • Licence 3 Miage: “Computer programming”, 78 hours, L3, Université de Rennes 1
  • Master 2 EEEA-SE: “Instrumentation, localization, GPS”, 4 hours, M2, Université de Rennes 1
  • Master 2 EEEA-SE: “Multisensor data fusion”, 20 hours, M2, Université de Rennes 1
  • Master 2 IL/CCN: “Mobile robotics”, 32 hours, M2, Université de Rennes 1

Julien Pettré:

  • Master 2 SIF: "Motion for Animation and Robotics", 6 hours, Université de Rennes 1
  • Master 2 Advanced 3D Graphics, "Crowd simulation", 3 hours, Ecole Polytechnique

Paolo Robuffo Giordano:

  • Parcours Intelligence artificielle confirmés, SAFRAN, “Advanced Robotics” module, 12 hours

10.2.2 Supervision

  • Ph.D. in progress: Lisheng Kuang, “Design and development of novel wearable haptic interfaces for teleoperation of robots”, started in March 2020, supervised by Claudio Pacchierotti and Paolo Robuffo Giordano.
  • Ph.D. in progress: Pascal Brault, “Planification et optimisation de trajectoires robustes aux incertitudes paramétriques pour des tâches robotiques fondées sur l'usage de capteurs”, started in September 2019, supervised by Paolo Robuffo Giordano and Quentin Delamare
  • Ph.D. in progress: Alexander Oliva, “Coupling Vision and Force for Robotic Manipulation”, started in October 2018, supervised by François Chaumette and Paolo Robuffo Giordano
  • Ph.D. in progress: Ali Srour, “Robust and Control-Aware Motion Planning”, started in October 2021, supervised by Q. Delamare and Paolo Robuffo Giordano
  • Ph.D. in progress: Maxime Bernard, “Shared Control for Multi-Robot Systems”, started in October 2021, supervised by Claudio Pacchierotti and Paolo Robuffo Giordano
  • Ph.D. in progress: Nicola De Carli, “Reactive Trajectory Planning Methods for Formation Control and Localization of Multi-Robot System”, started in January 2021, supervised by P. Salaris (Univ. Pisa, Italy) and Paolo Robuffo Giordano
  • Ph.D. in progress: Lev Smolentsev, “Manipulation of soft bodies with multiple drones”, started in November 2020, supervised by Alexandre Krupa, François Chaumette and Isabelle Fantoni (LS2N, Nantes)
  • Ph.D. in progress: Fouad Makiyeh, “Shape servoing based on visual information using RGB-D sensor for dexterous manipulation of 3D compliant objects”, started in September 2020, supervised by Alexandre Krupa, Maud Marchal and François Chaumette
  • Ph.D. in progress: Xi Wang, “Robustness of Visual SLAM techniques to light changing conditions”, started in September 2018, supervised by Eric Marchand and Marc Christie (MimeTIC group)
  • Ph.D. in progress: Samuel Felton, “Deep Learning for visual servoing”, started in October 2019, supervised by Eric Marchand and Elisa Fromont (Lacodam group)
  • Ph.D. in progress: Mathieu Gonzalez, “SLAM in time varying environment”, started in October 2019, supervised by Eric Marchand and Jérome Royan (IRT B<>COM)
  • Ph.D. in progress: Maxime Robic, “Visual servoing of a satellite constellation”, started in November 2020, supervised by Eric Marchand and François Chaumette.
  • Ph.D. in progress: John Thomas, “Visual servoing of a satellite constellation”, started in December 2020, supervised by François Chaumette.
  • Ph.D. in progress: Thibault Noël, “3D environment exploration”, started in October 2021, supervised by Eric Marchand and François Chaumette.
  • Ph.D. in progress: Erwan Normand, “3D environment exploration”, started in October 2021, supervised by Eric Marchand, Maud Marchal and Claudio Pacchierotti.
  • Ph.D. in progress: Emilie Leblong, “Taking into account social interactions in a virtual reality power wheelchair driving simulator: promoting learning for inclusive mobility”, started in October 2020, supervised by Marie Babel and Anne-Hélène Olivier (Mimetic team)
  • Ph.D. in progress: Inès Lacôte, “Investigate haptic and multisensory illusions to design an assistive navigation”, started in January 2021, supervised by Maud Marchal, Claudio Pacchierotti, David Gueorguiev (ISIR, Paris)
  • Ph.D. in progress: Pierre-Antoine Cabaret, “Design of navigation techniques for a multi-sensory handle for mobility assistance”, started in October 2021, supervised by Maud Marchal, Marie Babel and Claudio Pacchierotti
  • Ph.D. in progress: Antoine Cellier, “Inclusive navigation in a power wheelchair for people with neurological pathologies: from virtual to real”, started in October 2021, supervised by Marie Babel and Valérie Gouranton (Hybrid team)
  • Ph.D. in progress: Glenn Kerbiriou, “Semantic Modeling of the Face”, started in April 2021, supervised by Maud Marchal, Fabien Danieau and Quentin Avril (both from Interdigital)
  • Ph.D. in progress: Alberto Jovane, “Modélisation de mouvements réactifs et comportements non verbaux pour la création d'acteurs digitaux pour la réalité virtuelle”, started in September 2019, supervised by Julien Pettré, Marc Christie, Ludovic Hoyet and Claudio Pacchierotti
  • Ph.D. in progress: Thomas Chatagnon, “Micro-to-macro energy-based interaction models for dense crowds behavioral simulations”, started in November 2020, supervised by Julien Pettré, together with Charles Pontonnier, Anne-Hélène Olivier and Ludovic Hoyet (MimeTIC team)
  • Ph.D. in progress: Adèle Colas, “Modélisation de comportements collectifs réactifs et expressifs pour la réalité virtuelle”, started in December 2019, supervised by Julien Pettré and Claudio Pacchierotti, together with Anne-Hélène Olivier and Ludovic Hoyet (MimeTIC team)
  • Ph.D. in progress: Tairan Yin, “Création de scènes peuplées dynamiques pour la réalité virtuelle”, started in November 2020, supervised by Julien Pettré and Marie-Paule Cani (École Polytechnique), together with Marc Christie and Ludovic Hoyet (MimeTIC team)
  • Ph.D. in progress: Vicenzo Abichequer, “Humains virtuels expressifs et réactifs pour la réalité virtuelle”, started in November 2020, supervised by Julien Pettré and Carol O'Sullivan (Trinity College Dublin), together with Marc Christie and Ludovic Hoyet (MimeTIC team)
  • Ph.D. defended: Guillaume Vailland, “Power Wheelchair Navigation: From Virtual Reality Simulation Towards Real Smart Designs”, defended in December 2021, supervised by Marie Babel and Valérie Gouranton (Hybrid team)
  • Ph.D. defended: Ketty Favre, “Lidar-based localization”, defended in September 2021, supervised by Eric Marchand, Muriel Pressigout and Luce Morin (Vador group, IETR)
  • Ph.D. defended: Fabien Grzeskowiak, “Crowd simulation and experiments for the evaluation of robot navigation in crowds”, defended in June 2021, supervised by Julien Pettré and Marie Babel
  • Ph.D. defended: Javad Amirian, “Human motion trajectory prediction for robot navigation”, supervised by Julien Pettré and Jean-Bernard Hayet (CIMAT, Mexico), defended in July 2021

10.2.3 Juries

PhD and HDR juries
  • P. Robuffo Giordano: Julian Erskine (Ph.D., president), LS2N, Nantes; Esteban Restrepo (Ph.D., reviewer), L2S, Paris; José de Jesús Castillo Zamora (Ph.D., reviewer), L2S, Paris; Walid Amanhoud (Ph.D., reviewer), EPFL, Switzerland; Mahmoud Hamandi (Ph.D., reviewer), LAAS-CNRS; Paolo Ferrari (Ph.D., reviewer), University of Rome “La Sapienza”, Italy; Luca Bigazzi (Ph.D., reviewer), University of Florence, Italy; Riccardo Mengacci (Ph.D., member), University of Pisa, Italy; Chiara Gabellieri (Ph.D., reviewer), University of Pisa, Italy
  • F. Chaumette: Marianne Bakken (PhD, reviewer, NTNU, Norway), Ferran Argelaguet (HDR, president, Irisa)
  • A. Krupa: Florent Nageotte (HDR, member), ICube, Université de Strasbourg, France
  • E. Marchand: Kevin Chappellet (PhD, Lirmm), Tom François (PhD, Institut Pascal, president), Antoine André (PhD, Femto-ST, president), Simon Evain (PhD, Inria Rennes, president), Javad Amirian (PhD, Inria Rennes, president), Fabien Grzeskowiak (PhD, Irisa, president), Ific Goudé (PhD, Irisa Rennes, president), Florian Berton (PhD, Inria Rennes, president)
  • M. Babel: Franck Pouvrasseau (PhD, Université Paris Saclay, member), Hazem Khaled Mohamed Abdelkawy (PhD, Université Paris-Est Créteil, member), Tassut Tagnithammou (PhD, Université Paris-Saclay, member), Michael Gray (PhD, INSA Hauts de France, member), Yann Morere (HDR, Université de Lorraine, member), Viet Thuan Nguyen (Université Polytechnique Hauts de France, reviewer)
  • M. Marchal: Thomas Buffet (PhD, Ecole Polytechnique, reviewer), Jocelyn Monnoyer (PhD, Aix Marseille Université, reviewer), Thibaud Delrieu (PhD, Université de Poitiers, reviewer), Daniel Lobo (University of Rey Juan Carlos, Madrid, Spain, reviewer), Mickaël Ly (Université Grenoble Alpes, reviewer), Camille Brunel (Université de Bordeaux, reviewer), David Lopez (Université de Lorraine, reviewer), Hector Bareiro (University of Rey Juan Carlos, Madrid, Spain, reviewer and president), Elodie Bouzbib (Sorbonne Université, reviewer and president), Camille Apamon (INSA Rennes, member)
  • V. Drevelle: Krushna Shinde (PhD, member, Université de Technologie de Compiègne)
  • J. Pettré: Manon Predhumeau (PhD, member, Université Grenoble-Alpes)
Other juries
  • P. Robuffo Giordano: member of the selection committee for the recruitment of researchers (CRCN) at Inria Nancy-Grand Est; Member of the selection committee for a Professor position at the University of Montpellier
  • F. Chaumette: member of the selection committee for the recruitment of researchers (CRCN) at Inria Rennes Bretagne Atlantique; Member of the selection committee for a Professor position at the University of Toulouse
  • A. Krupa: member of the selection committee for the recruitment of an Assistant Professor at the Université de Strasbourg
  • E. Marchand: president of the selection committee for the recruitment of an Assistant Professor at the Université de Rennes 1
  • M. Babel: member of assistant professor selection committees at Centrale Nantes, Université Picardie Hauts-de-France, and ENS Rennes

10.3 Popularization

10.3.1 Internal or external Inria responsibilities

E. Marchand is an Editor for Interstices.

10.3.2 Articles and contents

10.3.3 Interventions

Due to the visibility of our experimental platforms, the team is often requested to present its research activities to students, researchers or industry. Our panel of demonstrations allows us to highlight recent results concerning vision-based shared control using our haptic device for object manipulation, the control of a fleet of quadrotors, vision-based detection and tracking for space navigation in a rendezvous context, the semi-autonomous navigation of a wheelchair, the power wheelchair simulator, and augmented reality applications. Some of these demonstrations are available as videos on the VispTeam YouTube channel.

  • Maud Marchal was invited by France Culture for the radio program "La méthode scientifique" about haptics and virtual reality (podcast).
  • Marie Babel was interviewed by Radio Laser in Nov. 2021 for the radio program "Voyages extraordinaires dans le monde des sciences" (podcast).
  • Marie Babel gave a talk at a conference organized at Inria Rennes about "J'peux pas, j'ai informatique" in Nov. 2021 for secondary-school teachers.
  • Claudio Pacchierotti and Maud Marchal were featured in a video and article entitled “Le sens du toucher fait son entrée dans la réalité virtuelle” (The sense of touch makes its entrance in VR) in the French newspaper Le Monde.
  • The CrowdDNA project has been the topic of several articles in press (Ouest France, Sciences Ouest, Actu IA), as well as a TV report in Télématin (France TV).

11 Scientific production

11.1 Major publications

  • 1. Marco Aggravi, Giuseppe Sirignano, Paolo Robuffo Giordano and Claudio Pacchierotti. Decentralized control of a heterogeneous human-robot team for exploration and patrolling. IEEE Transactions on Automation Science and Engineering, August 2021, 1-17.
  • 2. Emilie Leblong, Bastien Fraudet, Louise Devigne, Marie Babel, François Pasteau, Benoit Nicolas and Philippe Gallien. SWADAPT1: assessment of an electric wheelchair-driving robotic module in standardized circuits: a prospective, controlled repeated measure design pilot study. Journal of NeuroEngineering and Rehabilitation, 18(1), September 2021, 1-12.
  • 3. Monica Malvezzi, Francesco Chinello, Domenico Prattichizzo and Claudio Pacchierotti. Design of personalized wearable haptic interfaces to account for fingertip size and shape. IEEE Transactions on Haptics, 14(2), April 2021, 266-272.
  • 4. Beatriz Pascual-Escudero, Abhilash Nayak, Sébastien Briot, Olivier Kermorgant, Philippe Martinet, Mohab Safey El Din and François Chaumette. Complete Singularity Analysis for the Perspective-Four-Point Problem. International Journal of Computer Vision, 129(4), April 2021, 1217-1237.
  • 5. Mario Selvaggio, Jonathan Cacace, Claudio Pacchierotti, Fabio Ruggiero and Paolo Robuffo Giordano. A Shared-control Teleoperation Architecture for Nonprehensile Object Transportation. IEEE Transactions on Robotics, June 2021, 1-15.
  • 6. Ramana Sundararaman, Cédric De Almeida Braga, Eric Marchand and Julien Pettré. Tracking Pedestrian Heads in Dense Crowd. In CVPR 2021 - IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, United States, IEEE, June 2021, 1-11.

11.2 Publications of the year

International journals

  • 7. Marco Aggravi, Ahmed Alaaeldin Said Elsherif, Paolo Robuffo Giordano and Claudio Pacchierotti. Haptic-enabled decentralized control of a heterogeneous human-robot team for search and rescue in partially-known environments. IEEE Robotics and Automation Letters, 6(3), March 2021, 4843-4850.
  • 8. Marco Aggravi, Daniel A. L. Estima, Alexandre Krupa, Sarthak Misra and Claudio Pacchierotti. Haptic teleoperation of flexible needles combining 3D ultrasound guidance and needle tip force feedback. IEEE Robotics and Automation Letters, 6(3), March 2021, 4859-4866.
  • 9. Marco Aggravi, Claudio Pacchierotti and Paolo Robuffo Giordano. Connectivity-Maintenance Teleoperation of a UAV Fleet with Wearable Haptic Feedback. IEEE Transactions on Automation Science and Engineering, 18(3), June 2021, 1243-1262.
  • 10. Marco Aggravi, Giuseppe Sirignano, Paolo Robuffo Giordano and Claudio Pacchierotti. Decentralized control of a heterogeneous human-robot team for exploration and patrolling. IEEE Transactions on Automation Science and Engineering, August 2021, 1-17.
  • 11. Hugo Brument, Gerd Bruder, Maud Marchal, Anne-Hélène Olivier and Ferran Argelaguet Sanz. Understanding, Modeling and Simulating Unintended Positional Drift during Repetitive Steering Navigation Tasks in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics, 27(11), November 2021, 4300-4310.
  • 12. Thomas Chatagnon, Anne-Hélène Olivier, Ludovic Hoyet, Julien Pettré and Charles Pontonnier. Modeling physical interactions in human crowds: a pilot study of individual response to controlled external pushes. Computer Methods in Biomechanics and Biomedical Engineering, 2021, 1-2.
  • 13. Beatríz Cabrero Daniel, Ricardo Marques, Ludovic Hoyet, Julien Pettré and Josep Blat. A Perceptually-Validated Metric for Crowd Trajectory Quality Evaluation. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 4(3), September 2021, 1-18.
  • 14. Xavier De Tinguy, Claudio Pacchierotti, Anatole Lécuyer and Maud Marchal. Capacitive Sensing for Improving Contact Rendering with Tangible Objects in VR. IEEE Transactions on Visualization and Computer Graphics, 27(4), April 2021, 2481-2487.
  • 15. Samuel Felton, Pascal Brault, Elisa Fromont and Eric Marchand. Visual Servoing in Autoencoder Latent Space. IEEE Robotics and Automation Letters, 2022.
  • 16. Mathieu Gonzalez, Amine Kacete, Albert Murienne and Eric Marchand. L6DNet: Light 6 DoF Network for Robust and Precise Object Pose Estimation with Small Datasets. IEEE Robotics and Automation Letters, 6(2), April 2021, 2914-2921.
  • 17. Salma Jiddi, Philippe Robert and Eric Marchand. Detecting Specular Reflections and Cast Shadows to Estimate Reflectance and Illumination of Dynamic Indoor Scenes. IEEE Transactions on Visualization and Computer Graphics, 28(2), February 2022, 1249-1260.
  • 18. Emilie Leblong, Bastien Fraudet, Louise Devigne, Marie Babel, François Pasteau, Benoit Nicolas and Philippe Gallien. SWADAPT1: assessment of an electric wheelchair-driving robotic module in standardized circuits: a prospective, controlled repeated measure design pilot study. Journal of NeuroEngineering and Rehabilitation, 18(1), September 2021, 1-12.
  • 19. Aleksander Lillienskiold, Rahaf Rahal, Paolo Robuffo Giordano, Claudio Pacchierotti and Ekrem Misimi. Human-Inspired Haptic-Enabled Learning from Prehensile Move Demonstrations. IEEE Transactions on Systems, Man, and Cybernetics: Systems, January 2021, 1-12.
  • 20. Sean Lynch, Richard Kulpa, Laurentius Antonius Meerhoff, Anthony Sorel, Julien Pettré and Anne-Hélène Olivier. Influence of path curvature on collision avoidance behaviour between two walkers. Experimental Brain Research, 239(1), January 2021, 329-340.
  • 21. Monica Malvezzi, Francesco Chinello, Domenico Prattichizzo and Claudio Pacchierotti. Design of personalized wearable haptic interfaces to account for fingertip size and shape. IEEE Transactions on Haptics, 14(2), April 2021, 266-272.
  • 22. Victor Mercado, Maud Marchal and Anatole Lécuyer. ENTROPiA: Towards Infinite Surface Haptic Displays in Virtual Reality Using Encountered-Type Rotating Props. IEEE Transactions on Visualization and Computer Graphics, 27(3), March 2021, 2237-2243.
  • 23. Victor Rodrigo Mercado, Maud Marchal and Anatole Lécuyer. “Haptics On-Demand”: A Survey on Encountered-Type Haptic Displays. IEEE Transactions on Haptics, 14(3), July 2021, 449-464.
  • 24. Youssef Michel, Rahaf Rahal, Claudio Pacchierotti, Paolo Robuffo Giordano and Dongheui Lee. Bilateral teleoperation with adaptive impedance control for contact tasks. IEEE Robotics and Automation Letters, March 2021, 8 p.
  • 25. Gennaro Notomista, Claudio Pacchierotti and Paolo Robuffo Giordano. Online Robot Trajectory Optimization for Persistent Environmental Monitoring. IEEE Control Systems Letters, September 2021, 1-7.
  • 26. Alexander Oliva, Paolo Robuffo Giordano and François Chaumette. A General Visual-Impedance Framework for Effectively Combining Vision and Force Sensing in Feature Space. IEEE Robotics and Automation Letters, 6(3), July 2021, 4441-4448.
  • 27. Beatriz Pascual-Escudero, Abhilash Nayak, Sébastien Briot, Olivier Kermorgant, Philippe Martinet, Mohab Safey El Din and François Chaumette. Complete Singularity Analysis for the Perspective-Four-Point Problem. International Journal of Computer Vision, 129(4), April 2021, 1217-1237.
  • 28. Mario Selvaggio, Jonathan Cacace, Claudio Pacchierotti, Fabio Ruggiero and Paolo Robuffo Giordano. A Shared-control Teleoperation Architecture for Nonprehensile Object Transportation. IEEE Transactions on Robotics, June 2021, 1-15.
  • 29. Wouter Van Toll, Thomas Chatagnon, Cédric Braga, Barbara Solenthaler and Julien Pettré. SPH crowds: Agent-based crowd simulation up to extreme densities using fluid dynamics. Computers and Graphics, 98, June 2021, 306-321.
  • 30. Wouter Van Toll and Julien Pettré. Algorithms for Microscopic Crowd Simulation: Advancements in the 2010s. Computer Graphics Forum, 40(2), 2021.
  • 31. Xi Wang, Marc Christie and Eric Marchand. Binary Graph Descriptor for Robust Relocalization on Heterogeneous Data. IEEE Robotics and Automation Letters, 2022.
  • 32. Bingqing Zhang, Javad Amirian, Harry Eberle, Julien Pettré, Catherine Holloway and Tom Carlson. From HRI to CRI: Crowd Robot Interaction—Understanding the Effect of Robots on Crowd Motion. International Journal of Social Robotics, June 2021, 1-13.

International peer-reviewed conferences

  • 33. Robin Adili, Benjamin Niay, Katja Zibrek, Anne-Hélène Olivier, Julien Pettré and Ludovic Hoyet. Perception of Motion Variations in Large-Scale Virtual Human Crowds. In MIG 2021 - 14th Annual ACM SIGGRAPH Conference on Motion, Interaction and Games, Virtual Event, Switzerland, ACM, November 2021, 1-7.
  • 34. Pascal Brault, Quentin Delamare and Paolo Robuffo Giordano. Robust Trajectory Planning with Parametric Uncertainties. In ICRA 2021 - IEEE International Conference on Robotics and Automation, Xi'an, China, May 2021, 11095-11101.
  • 35. Hugo Brument, Maud Marchal, Anne-Hélène Olivier and Ferran Argelaguet Sanz. Studying the Influence of Translational and Rotational Motion on the Perception of Rotation Gains in Virtual Environments. In SUI 2021 - Symposium on Spatial User Interaction, Virtual Event, United States, November 2021, 1-12.
  • 36. Nicola De Carli, Paolo Salaris and Paolo Robuffo Giordano. Online Decentralized Perception-Aware Path Planning for Multi-Robot Systems. In MRS 2021 - 3rd IEEE International Symposium on Multi-Robot and Multi-Agent Systems, Cambridge, United Kingdom, IEEE, November 2021, 1-9.
  • 37. Sarah Delmas, Fabio Morbidi, Guillaume Caron, Julien Albrand, Meven Jeanne-Rose, Louise Devigne and Marie Babel. SpheriCol: A Driving Assistance System for Power Wheelchairs Based on Spherical Vision and Range Measurements. In SII 2021 - 13th IEEE/SICE International Symposium on System Integration, Iwaki, Japan, IEEE, January 2021, 505-510.
  • 38. Ketty Favre, Muriel Pressigout, Eric Marchand and Luce Morin. A Plane-based Approach for Indoor Point Clouds Registration. In ICPR 2020 - 25th International Conference on Pattern Recognition, Milan (Virtual), Italy, January 2021, 7072-7079.
  • 39. Ketty Favre, Muriel Pressigout, Eric Marchand and Luce Morin. Plane-based Accurate Registration of Real-world Point Clouds. In SMC 2021 - IEEE International Conference on Systems, Man, and Cybernetics, Melbourne / Virtual, Australia, IEEE, October 2021, 1-6.
  • 40. Samuel Felton, Elisa Fromont and Eric Marchand. Siame-se(3): regression in se(3) for end-to-end visual servoing. In ICRA 2021 - IEEE International Conference on Robotics and Automation, Xi'an, China, IEEE, May 2021, 14454-14460.
  • 41. Fabien Grzeskowiak, David Gonon, Daniel Dugas, Diego Paez-Granados, Jen Jen Chung, Juan Nieto, Roland Siegwart, Aude Billard, Marie Babel and Julien Pettré. Crowd against the machine: A simulation-based benchmark tool to evaluate and compare robot capabilities to navigate a human crowd. In ICRA 2021 - IEEE International Conference on Robotics and Automation, Xi'an, China, May 2021, 3879-3885.
  • 42. Thomas Howard, Guillaume Gicquel, Maud Marchal, Anatole Lécuyer and Claudio Pacchierotti. PUMAH: Pan-tilt Ultrasound Mid-Air Haptics. In WHC 2021 - IEEE World Haptics Conference, Montréal / Virtual, Canada, July 2021, 1.
  • 43. Inès Lacôte, Claudio Pacchierotti, Marie Babel, Maud Marchal and David Gueorguiev. Generating apparent haptic motion for assistive devices. In WHC 2021 - IEEE World Haptics Conference, Montréal / Virtual, Canada, IEEE, July 2021, 1.
  • 44. Victor Rodrigo Mercado, Thomas Howard, Hakim Si-Mohammed, Ferran Argelaguet Sanz and Anatole Lécuyer. Alfred: the Haptic Butler On-Demand Tangibles for Object Manipulation in Virtual Reality using an ETHD. In WHC 2021 - IEEE World Haptics Conference, Montréal / Virtual, Canada, IEEE, July 2021, 373-378.
  • 45. Lendy Mulot, Guillaume Gicquel, Quentin Zanini, William Frier, Maud Marchal, Claudio Pacchierotti and Thomas Howard. DOLPHIN: A Framework for the Design and Perceptual Evaluation of Ultrasound Mid-Air Haptic Stimuli. In SAP 2021 - ACM Symposium on Applied Perception, Rennes, France, September 2021, 1-10.
  • 46. Iana Podkosova, Katja Zibrek, Julien Pettré, Ludovic Hoyet and Anne-Hélène Olivier. Exploring behaviour towards avatars and agents in immersive virtual environments with mixed-agency interactions. In VR 2021 - 28th IEEE Conference on Virtual Reality and 3D User Interfaces, Lisbon, Portugal, IEEE, 2021.
  • 47. Pierre Raimbaud, Alberto Jovane, Katja Zibrek, Claudio Pacchierotti, Marc Christie, Ludovic Hoyet, Julien Pettré and Anne-Hélène Olivier. Reactive Virtual Agents: A Viewpoint-Driven Approach for Bodily Nonverbal Communication. In IVA 2021 - ACM International Conference on Intelligent Virtual Agents, Virtual Event, Japan, ACM, September 2021, 164-166.
  • 48. Agniva Sengupta, Alexandre Krupa and Eric Marchand. Visual Tracking of Deforming Objects Using Physics-based Models. In ICRA 2021 - IEEE International Conference on Robotics and Automation, Xi'an, China, IEEE, May 2021, 14178-14184.
  • 49. Ramana Sundararaman, Cédric De Almeida Braga, Eric Marchand and Julien Pettré. Tracking Pedestrian Heads in Dense Crowd. In CVPR 2021 - IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, United States, IEEE, June 2021, 1-11.
  • 50. Guillaume Vailland, Louise Devigne, François Pasteau, Florian Nouviale, Bastien Fraudet, Emilie Leblong, Marie Babel and Valérie Gouranton. VR based Power Wheelchair Simulator: Usability Evaluation through a Clinically Validated Task with Regular Users. In VR 2021 - IEEE Conference on Virtual Reality and 3D User Interfaces, Lisbon, Portugal, IEEE, March 2021, 1-8.
  • 51. Guillaume Vailland, Valérie Gouranton and Marie Babel. Cubic Bézier Local Path Planner for Non-holonomic Feasible and Comfortable Path Generation. In ICRA 2021 - IEEE International Conference on Robotics and Automation, Xi'an, China, IEEE, May 2021, 7894-7900.
  • 52. Sebastian Vizcay, Panagiotis Kourtesis, Ferran Argelaguet Sanz, Claudio Pacchierotti and Maud Marchal. Electrotactile Feedback For Enhancing Contact Information in Virtual Reality. In ICAT-EGVE 2021 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Sankt Augustin, Germany, September 2021.
  • 53. Xi Wang, Marc Christie and Eric Marchand. TT-SLAM: Dense Monocular SLAM for Planar Environments. In ICRA 2021 - IEEE International Conference on Robotics and Automation, Xi'an, China, May 2021, 11690-11696.
  • 54. Zane Zake, François Chaumette, Nicolò Pedemonte and Stéphane Caro. Control Stability Workspace for a Cable-Driven Parallel Robot Controlled by Visual Servoing. In Proceedings of the 5th International Conference on Cable-Driven Parallel Robots (CableCon 2021), virtual, France, June 2021, 284-296.
  • 55. Zane Zake, François Chaumette, Nicolò Pedemonte and Stéphane Caro. Moving-Platform Pose Estimation for Cable-Driven Parallel Robots. In IROS 2021 - IEEE/RSJ International Conference on Intelligent Robots and Systems, Prague, Czech Republic, June 2021, 1-8.
  • 56. Zane Zake, François Chaumette, Nicolò Pedemonte and Stéphane Caro. Visual Servoing of Cable-Driven Parallel Robots with Tension Management. In ICRA 2021 - IEEE International Conference on Robotics and Automation, Xi'an, China, IEEE, May 2021, 6861-6867.

National peer-reviewed conferences

Doctoral dissertations and habilitation theses

  • 58. Javad Amirian. Human motion trajectory prediction for robot navigation. PhD thesis, Université de Rennes 1, July 2021.
  • 59. Fabien Grzeskowiak. Crowd simulation and experiments for the evaluation of robot navigation in public places. PhD thesis, Université Rennes 1, June 2021.

Other scientific publications

  • 60. Guillaume Gravier, Elisa Fromont, Nicolas Courty, Teddy Furon, Christine Guillemot and Paolo Robuffo Giordano. Rennes - une IA souveraine au service de la vie publique. Bulletin de l'Association Française pour l'Intelligence Artificielle, 2021.
  • 61. Thomas Howard, Xavier De Tinguy, Guillaume Gicquel, Maud Marchal, Anatole Lécuyer and Claudio Pacchierotti. WeATaViX: Wearable Actuated Tangibles for Virtual Reality Experiences. In WHC 2021 - IEEE World Haptics Conference, Montréal / Virtual, Canada, July 2021, 1.
  • 62. Lendy Mulot, Guillaume Gicquel, William Frier, Maud Marchal, Claudio Pacchierotti and Thomas Howard. Curvature Discrimination for Dynamic Ultrasound Mid-Air Haptic Stimuli. In WHC 2021 - IEEE World Haptics Conference, Montréal / Virtual, Canada, July 2021, 1.

11.3 Other

Scientific popularization

11.4 Cited publications

  • 64. Paolo Salaris, Marco Cognetti, Riccardo Spica and Paolo Robuffo Giordano. Online Optimal Perception-Aware Trajectory Generation. IEEE Transactions on Robotics, 2019, 1-16.