2020
Activity report
Project-Team
RAINBOW
RNSR: 201822637G
In partnership with:
CNRS, Institut national des sciences appliquées de Rennes, Université Rennes 1
Team name:
Sensor-based Robotics and Human Interaction
In collaboration with:
Institut de recherche en informatique et systèmes aléatoires (IRISA)
Domain
Perception, Cognition and Interaction
Theme
Robotics and Smart environments
Creation of the Team: 2018 January 01, updated into Project-Team: 2018 June 01

Keywords

Computer Science and Digital Science

  • A5.1.3. Haptic interfaces
  • A5.1.7. Multimodal interfaces
  • A5.4.4. 3D and spatio-temporal reconstruction
  • A5.4.6. Object localization
  • A5.4.7. Visual servoing
  • A5.5.4. Animation
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.2. Augmented reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.9.2. Estimation, modeling
  • A5.10.2. Perception
  • A5.10.3. Planning
  • A5.10.4. Robot control
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.10.6. Swarm robotics
  • A5.10.7. Learning
  • A6.4.1. Deterministic control
  • A6.4.3. Observability and Controllability
  • A6.4.4. Stability and Stabilization
  • A6.4.5. Control of distributed parameter systems
  • A6.4.6. Optimal control
  • A9.5. Robotics
  • A9.7. AI algorithmics
  • A9.9. Distributed AI, Multi-agent

Other Research Topics and Application Domains

  • B2.4.3. Surgery
  • B2.5. Handicap and personal assistances
  • B5.1. Factory of the future
  • B5.6. Robotic systems
  • B8.1.2. Sensor networks for smart buildings
  • B8.4. Security and personal assistance

1 Team members, visitors, external collaborators

Research Scientists

  • Paolo Robuffo Giordano [Team leader, CNRS, Senior Researcher, HDR]
  • François Chaumette [Inria, Senior Researcher, HDR]
  • Alexandre Krupa [Inria, Researcher, HDR]
  • Claudio Pacchierotti [CNRS, Researcher]
  • Julien Pettré [Inria, Senior Researcher, HDR]

Faculty Members

  • Marie Babel [INSA Rennes, Associate Professor, HDR]
  • Quentin Delamare [École normale supérieure de Rennes, Associate Professor]
  • Vincent Drevelle [Univ de Rennes I, Associate Professor]
  • Maud Marchal [INSA Rennes, Professor, from Mar 2020, HDR]
  • Eric Marchand [Univ de Rennes I, Professor, HDR]

Post-Doctoral Fellows

  • Pratik Mullick [Inria, from Nov 2020]
  • Gennaro Notomista [CNRS, from Nov 2020]

PhD Students

  • Vicenzo Abichequer Sangalli [Inria, from Nov 2020]
  • Julien Albrand [INSA Rennes, from Oct 2020]
  • Javad Amirian [Inria]
  • Benoit Antoniotti [Creative Rennes, CIFRE, until Jan 2020]
  • Antonin Bernardin [INSA Rennes, from Jul 2020]
  • Florian Berton [Inria]
  • Pascal Brault [Inria]
  • Thomas Chatagnon [Inria, from Nov 2020]
  • Adele Colas [Inria]
  • Cedric De Almeida Braga [Inria]
  • Xavier De Tinguy De La Girouliere [INSA Rennes, from Sep 2020 until Oct 2020]
  • Mathieu Gonzalez [Institut de recherche technologique B-com]
  • Fabien Grzeskowiak [Inria]
  • Alberto Jovane [Inria]
  • Glenn Kerbiriou [INSA Rennes, from Dec 2020]
  • Lisheng Kuang [China Scholarship Council, from Mar 2020]
  • Romain Lagneau [INSA Rennes, from Jul 2020]
  • Emilie Leblong [Pôle Saint-Hélier, from Oct 2020]
  • Fouad Makiyeh [Inria, from Sep 2020]
  • Alexander Oliva [Inria]
  • Rahaf Rahal [Univ de Rennes I]
  • Maxime Robic [Inria, from Nov 2020]
  • Agniva Sengupta [Inria, until Jun 2020]
  • Lev Smolentsev [Inria, from Nov 2020]
  • John Thomas [Inria, from Dec 2020]
  • Guillaume Vailland [INSA Rennes]
  • Tairan Yin [Inria, from Nov 2020]

Technical Staff

  • Marco Aggravi [CNRS, Engineer]
  • Dieudonne Atrevi [Inria, Engineer]
  • Julien Bruneau [Inria, Engineer]
  • Louise Devigne [INSA Rennes, Engineer]
  • Solenne Fortun [Inria, Engineer]
  • Thierry Gaugry [INSA Rennes, Engineer, from Jul 2020]
  • Guillaume Gicquel [CNRS, Engineer]
  • Thomas Howard [CNRS, Engineer]
  • Joudy Nader [CNRS, Engineer]
  • Noura Neji [Inria, Engineer]
  • François Pasteau [INSA Rennes, Engineer]
  • Yuliya Patotskaya [Inria, Engineer, from Oct 2020]
  • Fabien Spindler [Inria, Engineer]
  • Ramana Sundararaman [Inria, Engineer, until Sep 2020]
  • Wouter Van Toll [Inria, Engineer]

Interns and Apprentices

  • Pierre Antoine Cabaret [INSA Rennes, from Jun 2020 until Sep 2020]
  • Johann Courty [Univ de Rennes I, from Mar 2020 until Aug 2020]
  • Juliette Grosset [Inria, from Feb 2020 until Jul 2020]
  • Albert Khim [Inria, from Feb 2020 until Aug 2020]
  • Muhammad Nazeer [Inria, from Feb 2020 until Jul 2020]
  • Adrien Vigne [École normale supérieure de Rennes, from May 2020 until Jul 2020]

Administrative Assistants

  • Hélène de La Ruée [Inria, until Feb 2020]
  • Hélène de La Ruée [Univ de Rennes I, from Mar 2020]

Visiting Scientists

  • Riccardo Arciulo [Lazio Region - Italy, until Apr 2020]
  • Beatriz Cabrero Daniel [Universitat Pompeu Fabra - Barcelona, until Feb 2020]
  • Raul Fernandez Fernandez [Universidad Carlos III de Madrid - Spain, from Sep 2020]
  • Jose Gallardo Monroy [National Council of Science and Technology - Mexico, from Mar 2020 until May 2020]

2 Overall objectives

The long-term vision of the Rainbow team is to develop the next generation of sensor-based robots able to navigate and/or interact in complex unstructured environments together with human users. Clearly, the word “together” can have very different meanings depending on the particular context: for example, it can refer to mere co-existence (robots and humans share some space while performing independent tasks), human-awareness (the robots need to be aware of the human state and intentions for properly adjusting their actions), or actual cooperation (robots and humans perform some shared task and need to coordinate their actions).

One could perhaps argue that these two goals (robot autonomy and human involvement) are somehow in conflict, since higher robot autonomy should imply less (or no) human intervention. However, we believe that our general research direction is well motivated since: (i) despite the many advancements in robot autonomy, complex and high-level cognitive-based decisions are still out of reach. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will most probably remain so for the next decades. On the other hand, robots are extremely capable at autonomously executing specific and repetitive tasks with great speed and precision, and at operating in dangerous/remote environments, while humans possess unmatched cognitive capabilities and world awareness that allow them to take complex decisions quickly; (ii) the cooperation between humans and robots is often an implicit constraint of the robotic task itself. Consider for instance the case of assistive robots supporting injured patients during their physical recovery, or human augmentation devices. It is then important to study proper ways of implementing this cooperation; (iii) finally, safety regulations can require the presence at all times of a person in charge of supervising and, if necessary, taking direct control of the robotic workers. For example, this is a common requirement in all applications involving tasks in public spaces, like autonomous vehicles in crowded spaces, or even UAVs flying in civil airspace such as over urban or populated areas.

Within this general picture, the Rainbow activities will be particularly focused on the case of (shared) cooperation between robots and humans by pursuing the following vision: on the one hand, empower robots with a large degree of autonomy, allowing them to effectively operate in non-trivial environments (e.g., outside completely defined factory settings); on the other hand, include human users in the loop so that they remain in (partial and bilateral) control of some aspects of the overall robot behavior. We plan to address these challenges from the methodological, algorithmic and application-oriented perspectives. The Rainbow activities will be articulated along three supporting axes (Optimal and Uncertainty-Aware Sensing; Advanced Sensor-based Control; Haptics for Robotics Applications) that are meant to develop the methods, algorithms and technologies needed to realize the central theme of Shared Control of Complex Robotic Systems.

3 Research program

3.1 Main Vision

The vision of Rainbow (and the foreseen applications) calls for several general scientific challenges: (i) a high level of autonomy for complex robots in complex (unstructured) environments, (ii) forward interfaces letting an operator give high-level commands to the robot, (iii) backward interfaces informing the operator about the robot “status”, (iv) user studies for assessing the best interfacing, which will clearly depend on the particular task/situation. Within Rainbow we plan to tackle these challenges at different levels of depth:

  • the methodological and algorithmic side of the sought human-robot interaction will be the main focus of Rainbow. Here, we will be interested in advancing the state of the art in sensor-based online planning, control and manipulation for mobile/fixed robots. For instance, while classically most control approaches (especially sensor-based ones) have been essentially reactive, we believe that less myopic strategies based on online/reactive trajectory optimization will be needed for the future Rainbow activities. The core ideas of Model-Predictive Control (also known as Receding Horizon) or, in general, numerical optimal control methods will play a role in the Rainbow activities, allowing the robots to reason/plan over some future time window and better cope with constraints. We will also consider extending classical sensor-based motion control/manipulation techniques to more realistic scenarios, such as deformable/flexible objects (“Advanced Sensor-based Control” axis). Finally, it will also be important to spend research effort on the field of Optimal Sensing, in the sense of generating (again) trajectories that can optimize the state estimation problem in the presence of scarce sensory inputs and/or non-negligible measurement and process noise, which is especially relevant for mobile robots (“Optimal and Uncertainty-Aware Sensing” axis). We also aim at addressing the case of coordination between a single human user and multiple robots where, clearly, the autonomy part plays an even more crucial role (no human can control multiple robots at once, thus a high degree of autonomy will be required from the robot group for executing the human commands);
  • the interfacing side will also be a focus of the Rainbow activities. As explained above, we will be interested in both the forward (human-to-robot) and backward (robot-to-human) interfaces. The forward interface will be mainly addressed from the algorithmic point of view, i.e., how to map the few degrees of freedom available to a human operator (usually on the order of 3–4) into complex commands for the controlled robot(s). This mapping will typically be mediated by an “AutoPilot” onboard the robot(s) that autonomously assesses whether the commands are feasible and, if not, how to modify them as little as possible (“Advanced Sensor-based Control” axis).

    The backward interface will, instead, mainly consist of a visual/haptic feedback for the operator. Here, we aim at exploiting our expertise in using force cues for informing an operator about the status of the remote robot(s). However, the sole use of classical grounded force feedback devices (e.g., the typical force-feedback joysticks) will not be enough due to the different kinds of information that will have to be provided to the operator. In this context, the recent interest in the use of wearable haptic interfaces is very interesting and will be investigated in depth (these include, e.g., devices able to provide vibro-tactile information to the fingertips, wrist, or other parts of the body). The main challenges in these activities will be the mechanical conception (and construction) of suitable wearable interfaces for the tasks at hand, and in the generation of force cues for the operator: the force cues will be a (complex) function of the robot state, therefore motivating research in algorithms for mapping the robot state into a few variables (the force cues) (“Haptics for Robotics Applications” axis);

  • the evaluation side, which will assess the proposed interfaces with user studies or acceptability studies with human subjects. Although this activity will not be a main focus of Rainbow (complex user studies are beyond the scope of our core expertise), we will nevertheless devote some effort to reaching a reasonable level of user evaluation by applying standard statistical analysis based on psychophysical procedures (e.g., randomized tests and ANOVA statistical analysis). This will be particularly true for the activities involving the use of smart wheelchairs, which are intended to be used by human users and operate inside human crowds. Therefore, we will be interested in gaining some understanding of how semi-autonomous robots (a wheelchair in this example) can predict the human intention, and how humans react to a semi-autonomous mobile robot.
Figure 1: An illustration of the prototypical activities foreseen in Rainbow in which a human operator is in partial (and high-level) control of single/multiple complex robots performing semi-autonomous tasks

Figure 1 depicts in an illustrative way the prototypical activities foreseen in Rainbow. On the right-hand side, complex robots (dual manipulators, humanoids, single/multiple mobile robots) need to perform some task with a high degree of autonomy. On the left-hand side, a human operator gives some high-level commands and receives visual/haptic feedback aimed at informing him/her as well as possible about the robot status. Again, the main challenges that Rainbow will tackle to address these issues are (in order of relevance): (i) methods and algorithms, mostly based on first-principles modeling and, when possible, on numerical methods for online/reactive trajectory generation, for endowing the robots with high autonomy; (ii) design and implementation of visual/haptic cues for interfacing the human operator with the robots, with special attention to novel combinations of grounded/ungrounded (wearable) haptic devices; (iii) user and acceptability studies.

3.2 Main Components

Hereafter, we give a summary description of the four research axes of Rainbow.

3.2.1 Optimal and Uncertainty-Aware Sensing

Future robots will need a large degree of autonomy for, e.g., interpreting the sensory data for accurate estimation of the robot and world state (which can possibly include the human users), and for devising motion plans able to take into account many constraints (actuation, sensor limitations, environment), including the state estimation accuracy (i.e., how well the robot/environment state can be reconstructed from the sensed data). In this context, we will be particularly interested in: (i) devising trajectory optimization strategies able to maximize some norm of the information gain gathered along the trajectory (with the available sensors). This can be seen as an instance of Active Sensing, with the main focus on online/reactive trajectory optimization strategies able to take into account several requirements/constraints (sensing/actuation limitations, noise characteristics). We will also be interested in the coupling between optimal sensing and concurrent execution of additional tasks (e.g., navigation, manipulation). (ii) Formal methods for guaranteeing the accuracy of localization/state estimation in mobile robotics, mainly exploiting tools from interval analysis. The interest of these methods is their ability to provide possibly conservative but guaranteed bounds on the best accuracy one can obtain with a given robot/sensor pair; they can thus be used for planning purposes or for system design (choice of the best sensor suite for a given robot/task). (iii) Localization/tracking of objects with poor/unknown or deformable shape, which will be of paramount importance for allowing robots to estimate the state of “complex objects” (e.g., human tissues in medical robotics, elastic materials in manipulation) for controlling their pose/interaction with the objects of interest.

3.2.2 Advanced Sensor-based Control

One of the main competences of the previous Lagadic team has been, generally speaking, the topic of sensor-based control, i.e., how to exploit (typically onboard) sensors for controlling the motion of fixed/ground robots. The main emphasis has been on devising ways to directly couple the robot motion with the sensor outputs, in order to invert this mapping for driving the robots towards a configuration specified as a desired sensor reading (thus, directly in sensor space). This general idea has been applied to very different contexts: mainly standard vision (whence the Visual Servoing keyword), but also audio, ultrasound imaging, and RGB-D.

The use of sensors for controlling the robot motion will clearly be a central topic of the Rainbow team too, since (especially onboard) sensing is a main characteristic of any future robotics application (which should typically operate in unstructured environments, and thus mainly rely on its own ability to sense the world). We then naturally aim at making the best out of the previous Lagadic experience in sensor-based control for proposing new advanced ways of exploiting sensed data for, roughly speaking, controlling the motion of a robot. In this respect, we plan to work on the following topics: (i) “direct/dense methods”, which try to directly exploit the raw sensory data when computing the control law for positioning/navigation tasks. The advantage of these methods is that they need little data pre-processing, which can minimize feature extraction errors and, in general, improve the overall robustness/accuracy (since all the available data is used by the motion controller); (ii) sensor-based interaction with objects of unknown/deformable shape, for gaining the ability to manipulate, e.g., flexible objects from the acquired sensed data (e.g., controlling online a needle being inserted in a flexible tissue); (iii) sensor-based model predictive control, by developing online/reactive trajectory optimization methods able to plan feasible trajectories for robots subject to sensing/actuation constraints, with the possibility of exploiting (onboard) sensing for continuously replanning (over some future time horizon) the optimal trajectory. These methods will play an important role when dealing with complex robots affected by complex sensing/actuation constraints, for which purely reactive strategies (as in most of the previous Lagadic works) are not effective. Furthermore, the coupling with the aforementioned optimal sensing will also be considered; (iv) multi-robot decentralized estimation and control, with the aim of devising, again, sensor-based strategies for groups of multiple robots needing to maintain a formation or perform navigation/manipulation tasks. Here, the challenges come from the need to devise “simple” decentralized and scalable control strategies in the presence of complex sensing constraints (e.g., when using onboard cameras: limited field of view, occlusions). The need to locally estimate global quantities (e.g., a common frame of reference, global properties of the formation such as connectivity or rigidity) will also be a line of active research.
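For illustration, the classical image-based visual servoing law that underlies much of this line of work computes a camera velocity v = -λ L⁺(s - s*) from the error between current and desired visual features. A minimal NumPy sketch for point features (the gain and the assumed known point depths below are illustrative) could look as follows:

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y)
    at depth Z, relating its image motion to the 6-DoF camera velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classical IBVS law v = -lambda * L^+ (s - s*): stack one
    interaction matrix per tracked point and invert the mapping."""
    L = np.vstack([interaction_matrix_point(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```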

3.2.3 Haptics for Robotics Applications

In the envisaged shared cooperation between human users and robots, the typical sensory channel (besides vision) exploited to inform the human users is most often the force/kinesthetic one (in general, the sense of touch and of forces applied to the human hand or limbs). Therefore, part of our activities will be devoted to studying and advancing the use of haptic cueing algorithms and interfaces for providing feedback to the users during the execution of some shared task. We will consider: (i) multi-modal haptic cueing for general teleoperation applications, by studying how to convey information through the kinesthetic and cutaneous channels. Indeed, most haptic-enabled applications typically only involve kinesthetic cues, e.g., the forces/torques that can be felt by grasping a force-feedback joystick/device. These cues are very informative about, e.g., preferred/forbidden motion directions, but are also inherently limited in their resolution since the kinesthetic channel can easily become overloaded (when too much information is compressed in a single cue). In recent years, the rise of novel cutaneous devices able to, e.g., provide vibro-tactile feedback on the fingertips or skin has proven to be a viable solution for complementing the classical kinesthetic channel. We will then study how to combine these two sensory modalities for different prototypical application scenarios, e.g., 6-DoF teleoperation of manipulator arms, virtual fixture approaches, and remote manipulation of (possibly deformable) objects; (ii) in the particular context of medical robotics, we plan to address the problem of providing haptic cues for typical medical robotics tasks, such as semi-autonomous needle insertion and robot surgery, by exploring the use of kinesthetic feedback for rendering the mechanical properties of the tissues, and vibrotactile feedback for providing guiding information about pre-planned paths (with the aim of increasing the usability/acceptability of this technology in the medical domain); (iii) finally, in the context of multi-robot control we would like to explore how to use the haptic channel for providing information about the status of multiple robots executing a navigation or manipulation task. In this case, the problem is (even more) how to map (or compress) information about many robots into a few haptic cues. We plan to use specialized devices, such as actuated exoskeleton gloves able to provide cues to each fingertip of a human hand, or to resort to “compression” methods inspired by hand postural synergies for providing coordinated cues representative of a few (but complex) motions of the multi-robot group, e.g., coordinated motions (translations/expansions/rotations) or collective grasping/transporting; a minimal sketch of this compression idea is given after this paragraph.
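As a purely illustrative sketch of the “compression” idea mentioned in point (iii) above (and not of the team's actual algorithms), one could extract a few coordinated “synergy” directions from a history of stacked multi-robot states via PCA and use the projections of the current state as scalar haptic cues:

```python
import numpy as np

def synergy_cues(state_history, n_cues=3):
    """Illustrative 'synergy' compression: extract the n_cues principal
    directions of a (T x N) history of stacked multi-robot states and
    project the latest state onto them, yielding a few scalar cues."""
    X = np.asarray(state_history, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return Vt[:n_cues] @ (X[-1] - mean)
```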

3.2.4 Shared Control of Complex Robotics Systems

This final and main research axis will exploit the methods, algorithms and technologies developed in the previous axes for realizing applications involving complex semi-autonomous robots operating in complex environments together with human users. The leitmotiv is to realize advanced shared control paradigms, which essentially aim at blending robot autonomy and user intervention in an optimal way, exploiting the best of both worlds (robot accuracy/sensing/mobility/strength and human cognitive capabilities). A common theme will be the issue of where to “draw the line” between robot autonomy and human intervention: obviously, there is no general answer, and any design choice will depend on the particular task at hand and/or on the technological/algorithmic possibilities of the robotic system under consideration.

A prototypical envisaged application, exploiting and combining the previous three research axes, is as follows: a complex robot (e.g., a two-arm system, a humanoid robot, a multi-UAV group) needs to operate in an environment exploiting its onboard sensors (in general, vision as the main exteroceptive one) and deal with many constraints (limited actuation, limited sensing, complex kinematics/dynamics, obstacle avoidance, interaction with difficult-to-model entities such as surrounding people, and so on). The robot must then possess quite a large autonomy for interpreting and exploiting the sensed data in order to estimate its own state and that of the environment (“Optimal and Uncertainty-Aware Sensing” axis), and for planning its motion in order to fulfil the task (e.g., navigation, manipulation) while coping with all the robot/environment constraints. Therefore, advanced control methods able to exploit the sensory data to the fullest, and able to cope online with constraints in an optimal way (by, e.g., continuously replanning and predicting over a future time horizon) will be needed (“Advanced Sensor-based Control” axis), with a possible (and interesting) coupling with the sensing part for optimizing, at the same time, the state estimation process. Finally, a human operator will typically be in charge of providing high-level commands (e.g., where to go, what to look at, what to grasp and where) that will then be autonomously executed by the robot, with possible local modifications because of the various (local) constraints. At the same time, the operator will also receive online visual-force cues informative of, in general, how well her/his commands are executed and whether the robot would prefer or suggest other plans (because of the local constraints that are not of the operator's concern). This information will have to be visually and haptically rendered with an optimal combination of cues that will depend on the particular application (“Haptics for Robotics Applications” axis).

4 Application domains

The activities of Rainbow obviously fall within the scope of Robotics. Broadly speaking, our main interest is in devising novel/efficient algorithms (for estimation, planning, control, haptic cueing, human interfacing, etc.) that can be general and applicable to many different robotic systems of interest, depending on the particular application/case study. For instance, we plan to consider

  • applications involving remote telemanipulation with one or two robot arms, where the arm(s) will need to coordinate their motion for approaching/grasping objects of interest under the guidance of a human operator;
  • applications involving single and multiple mobile robots for spatial navigation tasks (e.g., exploration, surveillance, mapping). In the multi-robot case, the high redundancy of the multi-robot group will motivate research in autonomously exploiting this redundancy for facilitating the task (e.g., optimizing the self-localization or the environment mapping) while following the human commands, and, vice-versa, for informing the operator about the status of the multi-robot group. In the single-robot case, the possible combination with some manipulation devices (e.g., arms on a wheeled robot) will motivate research into remote tele-navigation and tele-manipulation;
  • applications involving medical robotics, in which the “manipulators” are replaced by the typical tools used in medical applications (ultrasound probes, needles, cutting scalpels, and so on) for semi-autonomous probing and intervention;
  • applications involving a direct physical “coupling” between human users and robots (rather than a “remote” interfacing), such as the case of assistive devices used for easing the life of impaired people. Here, we will be primarily interested in, e.g., safety and usability issues, and also touch some aspects of user acceptability.

These directions are, in our opinion, very promising since current and future robotics applications are expected to address more and more complex tasks: for instance, it is becoming mandatory to empower robots with the ability to predict the future (to some extent) by also explicitly dealing with uncertainties from sensing or actuation; to safely and effectively interact with human supervisors (or collaborators) for accomplishing shared tasks; to learn or adapt to dynamic environments from little prior knowledge; to exploit the environment (e.g., obstacles) rather than avoiding it (a typical example is a humanoid robot in a multi-contact scenario for facilitating walking on rough terrains); to optimize the onboard resources for large-scale monitoring tasks; to cooperate with other robots either by direct sensing/communication, or via some shared database (the “cloud”).

While no single lab can reasonably address all these theoretical/algorithmic/technological challenges, we believe that our research agenda can give some concrete contributions to the next generation of robotics applications.

5 Highlights of the year

5.1 Awards

  • Best 2019 IEEE Robotics and Automation Magazine Paper Award, received at ICRA 2020 [2].
  • Best Demonstration Award, Eurohaptics 2020, Leiden, The Netherlands [61].
  • Best IEEE Trans. Haptics Short Paper - First Honorable Mention, IEEE Haptics Symposium (HAPTICS), Washington DC, USA (held online due to COVID-19) [26].
  • Best IEEE Trans. Haptics Short Paper - Second Honorable Mention, IEEE Haptics Symposium (HAPTICS), Washington DC, USA (held online due to COVID-19) [29].
  • Best Video Presentation - Honorable Mention, IEEE Haptics Symposium (HAPTICS), Washington DC, USA (held online due to COVID-19) [16].
  • Best Presentation Award, IEEE International Conference on Information and Computer Technologies (ICICT), San Jose, USA [45].
  • Best Paper Award, EuroVR International Conference, Valencia, Spain [35].

5.2 Highlights

  • Alexandre Krupa received at the IEEE ICRA 2020 conference the “Distinguished Service Award” for best Associate Editor of the IEEE Robotics and Automation Letters, for his services during the period 2015 to 2019.
  • P. Robuffo Giordano is the coordinator of the “MULTISHARED” project of the Chaire IA programme (PNIA 2019 - AAP Chaires de recherche et d'enseignement en Intelligence Artificielle).
  • J. Pettré is the coordinator of the H2020 FET Open project “CrowdDNA”, launched in November 2020.

6 New software and platforms

6.1 New software

6.1.1 HandiViz

  • Name: Driving assistance of a wheelchair
  • Keywords: Health, Persons attendant, Handicap
  • Functional Description:

    The HandiViz software provides a semi-autonomous navigation framework for a wheelchair relying on visual servoing.

    It has been registered at the APP (“Agence de Protection des Programmes”) as an INSA software (IDDN.FR.001.440021.000.S.P.2013.000.10000) and is under GPL license.

  • Contacts: François Pasteau, Marie Babel
  • Participants: François Pasteau, Marie Babel
  • Partner: INSA Rennes

6.1.2 UsTK

  • Name: Ultrasound toolkit for medical robotics applications guided from ultrasound images
  • Keywords: Echographic imagery, Image reconstruction, Medical robotics, Visual tracking, Visual servoing (VS), Needle insertion
  • Functional Description: UsTK, standing for Ultrasound Toolkit, is a cross-platform extension of ViSP software dedicated to 2D and 3D ultrasound image processing and visual servoing based on ultrasound images. Written in C++, UsTK architecture provides a core module that implements all the data structures at the heart of UsTK, a grabber module that allows acquiring ultrasound images from an Ultrasonix or a Sonosite device, a GUI module to display data, an IO module for providing functionalities to read/write data from a storage device, and a set of image processing modules to compute the confidence map of ultrasound images, generate elastography images, track a flexible needle in sequences of 2D and 3D ultrasound images and track a target image template in sequences of 2D ultrasound images. All these modules were implemented on several robotic demonstrators to control the motion of an ultrasound probe or a flexible needle by ultrasound visual servoing.
  • URL: https://ustk.inria.fr
  • Authors: Alexandre Krupa, Fabien Spindler, Marc Pouliquen, Pierre Chatelain, Jason Chevrie
  • Contacts: Alexandre Krupa, Fabien Spindler
  • Participants: Alexandre Krupa, Fabien Spindler
  • Partners: Inria, Université de Rennes 1

6.1.3 ViSP

  • Name: Visual servoing platform
  • Keywords: Augmented reality, Computer vision, Robotics, Visual servoing (VS), Visual tracking
  • Scientific Description:

    Since 2005, we have been developing and releasing ViSP [1], an open-source library available from https://visp.inria.fr. ViSP, standing for Visual Servoing Platform, allows prototyping and developing applications using the visual tracking and visual servoing techniques at the heart of the Rainbow research. ViSP was designed to be independent from the hardware, simple to use, expandable and cross-platform. ViSP allows designing vision-based tasks for eye-in-hand and eye-to-hand systems from the most classical visual features used in practice. It involves a large set of elementary positioning tasks with respect to various visual features (points, segments, straight lines, circles, spheres, cylinders, image moments, pose...) that can be combined together, and image processing algorithms that allow tracking of visual cues (dots, segments, ellipses...), 3D model-based tracking of known objects, or template tracking. Simulation capabilities are also available.

    [1] E. Marchand, F. Spindler, F. Chaumette. ViSP for visual servoing: a generic software platform with a wide class of robot control skills. IEEE Robotics and Automation Magazine, Special Issue on "Software Packages for Vision-Based Control of Motion", P. Oh, D. Burschka (Eds.), 12(4):40-52, December 2005.

  • Functional Description: ViSP provides simple ways to integrate and validate new algorithms with already existing tools. It follows a module-based software engineering design where data types, algorithms, sensors, viewers and user interaction are made available. Written in C++, ViSP is based on open-source cross-platform libraries (such as OpenCV) and builds with CMake. Several platforms are supported, including OSX, iOS, Windows and Linux. ViSP's online documentation eases learning. More than 300 fully documented classes organized in 17 different modules, with more than 408 examples and 88 tutorials, are proposed to the user. ViSP is released under a dual licensing model: it is open source with a GNU GPLv2 or GPLv3 license, and a professional edition license that replaces the GNU GPL is also available.
  • URL: http://visp.inria.fr
  • Authors: Fabien Spindler, François Chaumette, Aurélien Yol, Éric Marchand, Souriya Trinh
  • Contact: Fabien Spindler
  • Participants: Éric Marchand, Fabien Spindler, François Chaumette
  • Partners: Inria, Université de Rennes 1

6.2 New platforms

6.2.1 Robot Vision Platform

Participants: François Chaumette, Alexandre Krupa, Eric Marchand, Fabien Spindler.

We exploit two industrial robotic systems built by Afma Robots in the nineties to validate our research in visual servoing and active vision. The first one is a 6 DoF Gantry robot, the other one is a 4 DoF cylindrical robot (see Fig. 2). These robots are equipped with monocular RGB cameras. The Gantry robot also allows mounting grippers on its end-effector. Attached to this platform, we can also find a collection of various RGB and RGB-D cameras used to validate vision-based real-time tracking algorithms.

Rainbow robotics platform for vision-based manipulation
Figure 2: Rainbow robotics platform for vision-based manipulation

6.2.2 Mobile Robots

Participants: Marie Babel, Solenne Fortun, François Pasteau, Julien Pettré, Quentin Delamare, Fabien Spindler.

To validate our research on the personally assisted living topic (see Section 7.4.4), we have three electric wheelchairs, one from Permobil, one from Sunrise and the last from YouQ (see Fig. 3.a). The control of the wheelchair is performed using a plug-and-play system between the joystick and the low-level control of the wheelchair. Such a system lets us acquire the user intention through the joystick position and control the wheelchair by applying corrections to its motion. The wheelchairs have been fitted with cameras, ultrasound and time-of-flight sensors to perform the servoing required for assisting disabled people. A wheelchair haptic simulator completes this platform to develop new human interaction strategies in a virtual reality environment (see Fig. 3.b).

Pepper, a human-shaped robot designed by SoftBank Robotics to be a genuine day-to-day companion (see Fig. 3.c), is also part of this platform. It has 17 DoF mounted on a wheeled holonomic base and a set of sensors (cameras, laser, ultrasound, inertial, microphone) that make this platform interesting for robot-human interaction during locomotion and visual exploration strategies (Sect. 7.2.8).

Moreover, for fast prototyping of algorithms in perception, control and autonomous navigation, the team uses a Pioneer 3DX from Adept (see Fig. 3.d). This platform is equipped with various sensors needed for autonomous navigation and sensor-based control.

Note that 5 papers exploiting the mobile robots were published this year, among which [13, 38, 49, 54].

Figure 3: Mobile Robot Platform. a) Wheelchairs from Permobil, Sunrise and YouQ; b) wheelchair haptic simulator; c) Pepper human-shaped robot; d) Pioneer P3-DX robot.

6.2.3 Medical Robotic Platform

Participants: Alexandre Krupa, Fabien Spindler.

This platform is composed of two 6-DoF Adept Viper arms (see Figs. 4.a-b). Ultrasound probes connected either to a SonoSite 180 Plus or to an Ultrasonix SonixTouch 2D and 3D imaging system can be mounted on a force/torque sensor attached to each robot end-effector. The Virtuose 6D or Omega 6 haptic devices (see Fig. 7) can also be used with this platform.

This platform was extended with an ATI Nano43 force/torque sensor attached to one of the Viper arms, which allows performing experiments for needle insertion applications.

This testbed is of primary interest for research and experiments concerning ultrasound visual servoing applied to probe positioning, soft tissue tracking, elastography or robotic needle insertion tasks (see Sect. 7.4.3). It can also be used to validate more classical tracking and visual servoing research.

In 2020, this platform was used to obtain experimental results presented in 5 papers [43, 46, 40, 3, 21]. Moreover, 2 PhD theses [66, 68] exploiting this platform were published this year.

Figure 4: Rainbow medical robotic platforms. a) On the right, a Viper S850 robot arm equipped with a SonixTouch 3D ultrasound probe; on the left, a Viper S650 equipped with a tool changer that allows attaching a classical camera or biopsy needles. b) Robotic setup for autonomous needle insertion by visual servoing.

6.2.4 Advanced Manipulation Platform

Participants: Claudio Pacchierotti, Paolo Robuffo Giordano, Fabien Spindler.

This new platform is composed of two Panda lightweight arms from Franka Emika equipped with torque sensors in all seven axes. An electric gripper, a camera or a soft hand from qbrobotics can be mounted on the robot end-effector (see Fig. 5.a) to validate our research on coupling force and vision for controlling robot manipulators (see Section 7.2.13) and on shared control for remote manipulation (see Section 7.4.1). Other haptic devices (see Section 6.2.6) can also be coupled to this platform.

This platform was extended with a Reflex TakkTile 2 gripper from RightHand Labs (see Fig. 5.b). A force/torque sensor from Alberobotics was also mounted on the robot end-effector to get more precision during torque control.

4 new papers [26, 19, 56, 57] and 1 PhD thesis [67] published this year include experimental results obtained with this platform.

Figure 5: Rainbow advanced manipulation platform. a) One of the two Panda lightweight arms from Franka Emika, with the Pisa SoftHand mounted; b) the Reflex TakkTile 2 gripper that can be mounted on the Panda robot end-effector.

6.2.5 Unmanned Aerial Vehicles (UAVs)

Participants: Joudy Nader, Paolo Robuffo Giordano, Claudio Pacchierotti, Fabien Spindler.

Rainbow is involved in several activities concerning perception and control for single and multiple quadrotor UAVs. To this end, we purchased four quadrotors from Mikrokopter GmbH, Germany (see Fig. 6.a), and one quadrotor from 3DRobotics, USA (see Fig. 6.b). The Mikrokopter quadrotors have been heavily customized by: (i) reprogramming from scratch the low-level attitude controller onboard the microcontroller of the quadrotors, (ii) equipping each quadrotor with an NVIDIA Jetson TX2 board running Linux Ubuntu and the TeleKyb-3 software based on the genom3 framework developed at LAAS in Toulouse (the middleware used for managing the experiment flows and the communication among the UAVs and the base station), and (iii) purchasing Flea Color USB3 cameras together with the gimbals needed to mount them on the UAVs. The quadrotor group is used as a robotic platform for testing a number of single and multiple flight control schemes, with special attention to the use of onboard vision as the main sensory modality.

This year, 1 paper [6] contains experimental results obtained with this platform.

Figure 6: Unmanned Aerial Vehicles Platform. a) Quadrotor XL1 from Mikrokopter; b) formation control with 3 XL1 from Mikrokopter in our flying arena equipped with the Vicon localization system.

6.2.6 Haptics and Shared Control Platform

Participants: Claudio Pacchierotti, Paolo Robuffo Giordano, Fabien Spindler.

Various haptic devices are used to validate our research in shared control. We have a Virtuose 6D device from Haption (see Fig. 7.a). This device is used as the master device in many of our shared control activities (see, e.g., Section 7.4.1). It can also be coupled to the Haption haptic glove on loan from the University of Birmingham. An Omega 6 from Force Dimension (see Fig. 7.b) and devices on loan from Ultrahaptics complete this platform, which can be coupled to the other robotic platforms.

This platform was used to obtain experimental results presented in 9 papers [11, 26, 6, 19, 63, 56, 57, 55, 72] and 2 PhD theses [65, 67] published this year.

Figure 7: Haptics and Shared Control Platform. a) Virtuose 6D and b) Omega 6 haptic devices.

7 New results

7.1 Optimal and Uncertainty-Aware Sensing

7.1.1 Simultaneous Tracking and Elasticity Estimation of Deformable Objects

Participants: Agniva Sengupta, Romain Lagneau, Alexandre Krupa, Maud Marchal, Eric Marchand.

Within our research activities on deformable object tracking [68], this year we proposed a method to simultaneously track the deformation of soft objects and estimate their elasticity parameters [46]. The tracking of the deformable object is performed by combining the visual information captured by an RGB-D camera with interactive Finite Element Method (FEM) simulations of the object. The elasticity parameter estimation is performed in parallel and consists in minimizing the error between the tracked object and a simulated object deformed by the forces measured with a force sensor. Once the elasticity parameters are estimated, the deformation forces applied to an object can be estimated from the visual tracking of the object deformation alone, without the use of a force sensor.
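A toy version of the estimation step, assuming a scalar elasticity parameter and an arbitrary user-provided simulator function (the actual method uses full FEM simulations and richer deformation measurements), could be sketched as:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_elasticity(forces, tracked_disp, simulate):
    """Fit a scalar elasticity parameter k by minimizing the discrepancy
    between tracked displacements and simulator predictions under the
    measured forces (toy stand-in for the FEM-based estimation)."""
    def cost(k):
        return sum(np.linalg.norm(simulate(f, k) - d) ** 2
                   for f, d in zip(forces, tracked_disp))
    return minimize_scalar(cost, bounds=(1e-3, 1e6), method="bounded").x

# Toy linear-elastic "simulator": displacement proportional to force.
simulate = lambda f, k: f / k
k_hat = estimate_elasticity([1.0, 2.0, 3.0], [0.011, 0.019, 0.031], simulate)
```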

7.1.2 Trajectory Generation for Optimal State Estimation

Participants: Marco Aggravi, Claudio Pacchierotti, Paolo Robuffo Giordano.

This activity addresses the general problem of active sensing, where the goal is to analyze and synthesize optimal trajectories for a robotic system that maximize the amount of information gathered by the (few) noisy outputs (i.e., sensor readings) while at the same time reducing the negative effects of the process/actuation noise. We have recently developed a general framework for solving online the active sensing problem by continuously replanning an optimal trajectory that maximizes a suitable norm of the Constructibility Gramian (CG) [75]. In [11, 55] we extended this framework to consider a shared control task in which a human operator is in partial control of the motion of a mobile robot. The robot needs to estimate its state by measuring distances w.r.t. some landmarks, and it travels along a closed trajectory of fixed length. At the same time, an operator controls the location of the barycenter of the trajectory. This trajectory is autonomously deformed to maximize the amount of information collected by the onboard sensors, and the operator is provided with a force feedback informing about where the autonomy would like to move the barycenter; however, the final choice is left to the operator. The results of several user studies have shown the validity of the approach. We plan to extend this framework to the multi-robot case.
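As a simplified illustration of the underlying objective (a static-state version that ignores the robot dynamics and process noise handled by the actual CG formulation [75]), one can accumulate the information Gramian of range measurements along a candidate trajectory and score it by its smallest eigenvalue:

```python
import numpy as np

def range_jacobian(p, landmarks):
    """Jacobian of the distance measurements w.r.t. the 2D position p:
    one unit bearing vector per landmark."""
    diff = p - landmarks
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

def information_score(trajectory, landmarks):
    """Accumulate the Gramian G = sum_k H_k^T H_k of the range
    measurements along a candidate trajectory; its smallest eigenvalue
    quantifies the least-observed direction of the state."""
    G = np.zeros((2, 2))
    for p in trajectory:
        H = range_jacobian(np.asarray(p, float), np.asarray(landmarks, float))
        G += H.T @ H
    return np.linalg.eigvalsh(G)[0]   # to be maximized by the planner
```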

7.1.3 Leveraging Multiple Environments for Learning and Decision Making

Participants: Maud Marchal, Thierry Gaugry, Antonin Bernardin.

Learning is usually performed by observing real robot executions. Physics-based simulators are a good alternative, providing highly valuable information while avoiding costly and potentially destructive robot executions. Within the Imagine project, we presented a novel approach for learning the probabilities of symbolic robot action outcomes [47]. This is done by leveraging different environments, such as physics-based simulators, at execution time. To this end, we proposed MENID (Multiple Environment Noise Indeterministic Deictic) rules, a novel representation able to cope with the inherent uncertainties present in robotic tasks. MENID rules explicitly represent each possible outcome of an action, keep track of the source of the experience, and maintain the probability of success of each outcome. We also introduced an algorithm to distribute actions among environments, based on previous experiences and expected gain. Before using physics-based simulations, we proposed a methodology for evaluating different simulation settings and determining the least time-consuming model that can be used while still producing coherent results. We demonstrated the validity of the approach in a dismantling use case, using a simulation with reduced quality as the simulated system, and a simulation with full resolution, to which we add noise on the trajectories and some physical parameters, as a representation of the real system.
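The following hypothetical sketch (not the MENID formalism of [47], whose rule structure is richer) only illustrates the bookkeeping idea of outcome counts tagged by the environment they were observed in:

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeStats:
    """Hypothetical bookkeeping in the spirit of MENID rules: per-action
    outcome counts tagged by the environment (real robot or simulator)
    the experience came from, from which outcome probabilities follow."""
    action: str
    counts: dict = field(default_factory=dict)   # (outcome, env) -> count

    def update(self, outcome: str, env: str) -> None:
        self.counts[(outcome, env)] = self.counts.get((outcome, env), 0) + 1

    def probability(self, outcome: str) -> float:
        total = sum(self.counts.values())
        hits = sum(c for (o, _), c in self.counts.items() if o == outcome)
        return hits / total if total else 0.0
```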

7.1.4 A Plane-based Approach for Indoor Point Clouds Registration

Participant: Eric Marchand.

Iterative Closest Point (ICP) is one of the most widely used algorithms for 3D point cloud registration. This classical approach can be hampered by the large number of points contained in a point cloud. Planar structures, which are far less numerous than points, can be used instead in well-structured man-made environments. We thus propose a registration method inspired by the ICP algorithm in a plane-based registration approach for indoor environments. This method is based solely on data acquired with a LiDAR sensor. A new metric based on plane characteristics is introduced to find the best plane correspondences. The optimal transformation is estimated through a two-step minimization approach, successively performing a robust plane-to-plane minimization and a non-linear robust point-to-plane registration. This work was done in cooperation with the IETR lab and published at IAPR ICPR 2020 (held in January 2021).
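For intuition, a closed-form (non-robust) variant of plane-based alignment, which illustrates the underlying geometry but not the paper's robust two-step method, recovers the rotation from matched plane normals and the translation from the plane offsets:

```python
import numpy as np

def register_planes(normals_a, d_a, normals_b, d_b):
    """Closed-form registration from matched planes n.x = d: rotation
    from the normals (SVD/Kabsch), then translation from the offset
    residuals by linear least squares. For x_b = R x_a + t, a plane
    (n_a, d_a) maps to normal R n_a and offset d_a + (R n_a).t."""
    A = np.asarray(normals_a, float)      # (M, 3) normals in frame a
    B = np.asarray(normals_b, float)      # (M, 3) matched normals in frame b
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # n_b ~ R n_a, proper rotation
    t, *_ = np.linalg.lstsq(A @ R.T, np.asarray(d_b) - np.asarray(d_a),
                            rcond=None)
    return R, t
```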

7.1.5 Relative camera pose estimation and planar reconstruction from two images

Participant: Eric Marchand.

We propose a novel method to simultaneously perform relative camera pose estimation and planar reconstruction of a scene from two RGB images [52]. We start by extracting and matching superpixel information from both images and rely on a novel multi-model RANSAC approach to estimate multiple homographies from superpixels and identify matching planes. Ambiguity issues arising when performing homography decomposition are handled by a voting system that more reliably estimates the relative camera pose and plane parameters. A non-linear optimization process is also proposed to perform bundle adjustment; it exploits a joint representation of homographies and works both for image pairs and for whole sequences of images (vSLAM). As a result, the approach provides a means to perform a dense 3D plane reconstruction from two RGB images only, without relying on RGB-D inputs or strong priors such as Manhattan assumptions, and can be extended to handle sequences of images. This work was done in cooperation with the Mimetic team.
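For reference, the elementary building block evaluated inside each RANSAC hypothesis is the standard Direct Linear Transform for homography estimation (a textbook routine, not the paper's multi-model machinery):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate H with dst ~ H src (in
    homogeneous coordinates) from >= 4 point correspondences; the
    solution is the null vector of the stacked constraint matrix."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 3)
```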

7.1.6 Learn Offsets for robust 6DoF object pose estimation

Participants: Mathieu Gonzalez, Eric Marchand.

Estimating the 3D translation and orientation of an object is a challenging task arising in augmented reality and robotic applications. In [15] we propose a novel approach to perform 6-DoF object pose estimation from a single RGB-D image in cluttered scenes. We adopt a hybrid pipeline in two stages: data-driven and geometric, respectively. The first, data-driven step consists of a classification CNN that estimates the object 2D location in the image from local patches, followed by a regression CNN trained to predict the 3D location of a set of keypoints in the camera coordinate system. We robustly perform local voting to recover the location of each keypoint in the camera coordinate system. To extract the pose information, the geometric step consists in aligning the 3D points in the camera coordinate system with the corresponding 3D points in the world coordinate system by minimizing a registration error, thus computing the pose.
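The geometric step is a classical 3D-3D rigid registration; a minimal Kabsch-style sketch (assuming outlier-free keypoint correspondences, which the voting stage is meant to provide) is:

```python
import numpy as np

def rigid_align(p_world, p_cam):
    """Least-squares rigid registration (Kabsch/Umeyama without scale):
    find R, t minimizing ||p_cam - (R p_world + t)||, i.e. the object
    pose, from matched 3D keypoints."""
    Pw = np.asarray(p_world, float)
    Pc = np.asarray(p_cam, float)
    cw, cc = Pw.mean(axis=0), Pc.mean(axis=0)
    U, _, Vt = np.linalg.svd((Pw - cw).T @ (Pc - cc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # proper rotation (det = +1)
    return R, cc - R @ cw
```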

7.2 Advanced Sensor-Based Control

7.2.1 Trajectory Generation for Minimum Closed-Loop State Sensitivity

Participants: Pascal Brault, Quentin Delamare, Paolo Robuffo Giordano.

The goal of this research activity is to propose a new point of view in addressing the control of robots under parametric uncertainties: rather than striving to design a sophisticated controller with some robustness guarantees for a specific system, we propose to attain robustness (for any choice of the control action) by suitably shaping the reference motion trajectory so as to minimize the state sensitivity to parameter uncertainty of the resulting closed-loop system.

During this year we continued working on the idea of “input sensitivity”, which allows obtaining trajectories that, when perturbed, require minimal changes of the control inputs. In particular, we studied how to best combine the input and state sensitivities in a single metric, and applied the trajectory optimization machinery to the case of a planar quadrotor. An off-the-shelf nonlinear optimization scheme was also employed to take into account (nonlinear) input constraints. A large statistical analysis was performed in simulation, and the results were submitted to the ICRA 2021 conference. We are now working towards an implementation on a real quadrotor by considering offsets in the center of mass (CoM) as one of the main sources of uncertainty.
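Conceptually, the quantity being minimized can be illustrated with a finite-difference sketch (the actual approach integrates the sensitivity dynamics analytically along the trajectory; `rollout` here is any user-provided simulation of the closed-loop system under a candidate reference trajectory):

```python
import numpy as np

def closed_loop_sensitivity(rollout, x0, theta_nom, eps=1e-6):
    """Finite-difference sketch of the state sensitivity: how much the
    closed-loop final state moves per unit change of each physical
    parameter, for a given reference trajectory baked into rollout."""
    theta_nom = np.asarray(theta_nom, float)
    x_nom = rollout(x0, theta_nom)
    cols = []
    for i in range(theta_nom.size):
        th = theta_nom.copy()
        th[i] += eps
        cols.append((rollout(x0, th) - x_nom) / eps)
    return np.stack(cols, axis=1)   # d x_f / d theta, to be minimized
```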

7.2.2 Comfortable path generation for wheelchair navigation

Participants: Guillaume Vailland, Juliette Grosset, Marie Babel.

In the case of non-holonomic robot navigation, path planning algorithms such as the Rapidly-exploring Random Tree (RRT) rarely provide feasible and smooth paths without additional processing. Furthermore, in a transport context like power wheelchair navigation, passenger comfort should be a priority and influences the path planning strategy.

We then proposed a local path planner that guarantees a bounded curvature value and continuous connections between piecewise cubic Bézier curves. To simulate and test this cubic Bézier local path planner, we developed a new RRT version (CBB-RRT*) that generates on-the-fly comfortable paths adapted to non-holonomic constraints (a minimal curvature-check sketch is given below). This new framework has been submitted to the ICRA 2021 conference.
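As an illustration of the curvature bound being enforced, the following sketch evaluates the curvature of a cubic Bézier segment by sampling (the actual planner uses analytic guarantees rather than sampling; the comfort bound itself is application-specific):

```python
import numpy as np

def bezier_derivatives(cp, t):
    """First and second derivatives of a planar cubic Bezier curve with
    control points cp (4x2) at parameter t."""
    cp = np.asarray(cp, float)
    d1 = 3.0 * (cp[1:] - cp[:-1])        # quadratic Bezier: velocity
    d2 = 2.0 * (d1[1:] - d1[:-1])        # linear Bezier: acceleration
    v = (1 - t)**2 * d1[0] + 2 * (1 - t) * t * d1[1] + t**2 * d1[2]
    a = (1 - t) * d2[0] + t * d2[1]
    return v, a

def max_curvature(cp, samples=100):
    """Sampled check of kappa(t) = |x'y'' - y'x''| / |v|^3, usable to
    reject candidate tree connections exceeding a comfort bound."""
    ks = []
    for t in np.linspace(0.0, 1.0, samples):
        v, a = bezier_derivatives(cp, t)
        ks.append(abs(v[0] * a[1] - v[1] * a[0])
                  / max(np.linalg.norm(v)**3, 1e-9))
    return max(ks)
```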

In addition, autonomous navigation for a mobile robot requires knowledge of the world around it. Detecting and recognizing obstacles in cluttered environments remains challenging for ensuring safe navigation. To this aim, we started a collaboration with Fabio Morbidi (Université de Picardie Jules Verne, Amiens) in order to evaluate the ability of event-based camera data to improve obstacle recognition. Due to the COVID situation, data collection was difficult to organize. We therefore designed an event-based camera simulator, in order to train our machine learning framework with virtual images. The next steps will consist in implementing our solution on the smart wheelchair platform. This work was done in cooperation with Valérie Gouranton (Hybrid group) [48].

7.2.3 UWB beacon navigation of assisted power wheelchair

Participants: Julien Albrand, Pierre-Antoine Cabaret, François Pasteau, Vincent Drevelle, Eric Marchand, Marie Babel.

Typical problems in robotics are those of perception of the environment and localization. Visual sensors are poorly adapted to the context of autonomous wheelchair navigation, both in terms of acceptability (intrusiveness) and in terms of adaptation to the wheelchair and overall cost.

New sensors, based on Ultra Wide Band (UWB) radio technology, are emerging, in particular for indoor localization and object tracking applications. This low-cost technology allows measuring distances between fixed beacons and a mobile sensor, in order to obtain localization with decimeter-level accuracy in the best case. We seek to exploit these sensors for the navigation of a wheelchair, despite the low accuracy of the measurements they provide.

The problem here lies in the definition of an autonomous or shared sensor-based control solution that fully exploits the notion of measurement uncertainty related to the UWB beacons. By modeling the uncertain distance measurements as intervals, we will try to propagate these uncertainties to the computation of the velocities to be applied to the wheelchair. This will be done using set inversion and constraint propagation methods, which lead to the characterization of solutions in the form of sets; a minimal sketch of this set-membership idea is given below.
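The following simplified SIVIA-style (Set Inversion Via Interval Analysis) sketch shows how interval distance measurements translate into a set of boxes enclosing all feasible wheelchair positions; the actual work additionally propagates these sets through the control computation:

```python
import numpy as np

def sivia_localize(beacons, ranges, box, eps=0.05):
    """Keep the sub-boxes of the initial search box consistent with
    every interval distance measurement [r_lo, r_hi] to the fixed
    beacons; their union encloses all positions compatible with the data."""
    stack, kept = [box], []
    while stack:
        (xl, xu), (yl, yu) = stack.pop()
        feasible = True
        for (bx, by), (rlo, rhi) in zip(beacons, ranges):
            dx = max(xl - bx, 0.0, bx - xu)   # x-distance from beacon to box
            dy = max(yl - by, 0.0, by - yu)
            dlo = np.hypot(dx, dy)            # exact min distance over box
            dhi = max(np.hypot(cx - bx, cy - by)  # max attained at a corner
                      for cx in (xl, xu) for cy in (yl, yu))
            if dhi < rlo or dlo > rhi:        # box proved inconsistent
                feasible = False
                break
        if not feasible:
            continue
        if max(xu - xl, yu - yl) < eps:       # small enough: keep
            kept.append(((xl, xu), (yl, yu)))
        elif xu - xl > yu - yl:               # otherwise bisect widest side
            xm = 0.5 * (xl + xu)
            stack += [((xl, xm), (yl, yu)), ((xm, xu), (yl, yu))]
        else:
            ym = 0.5 * (yl + yu)
            stack += [((xl, xu), (yl, ym)), ((xl, xu), (ym, yu))]
    return kept
```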

7.2.4 Visual Servoing for Cable-Driven Parallel Robots

Participant: François Chaumette.

This study is done in collaboration with IRT Jules Verne (Zane Zake, Nicolo Pedemonte) and LS2N (Stéphane Caro) in Nantes (see Section 8.1). It is devoted to the analysis of the robustness of visual servoing to modeling and calibration errors for cable-driven parallel robots. This year, the coupling of path planning and visual servoing has been studied to improve the robustness during the whole transient phase of the servo [31].

7.2.5 Singularities in visual servoing

Participant: François Chaumette.

This study is done in the scope of the ANR Sesame project (see Section 9.3).

We have performed a complete theoretical study of the singularities of image-based visual servoing and pose estimation (PnP problem) from the observation of four image points. Highly original results have been exhibited. In particular, it was shown that 2 to 6 camera positions correspond to singularities for a general configuration of 4 non-coplanar points, while it was wrongly believed before that no singularities occur for such a configuration [25].

7.2.6 Multi-sensor-based control for accurate and safe assembly

Participants: John Thomas, François Chaumette.

This study is done in the scope of the BPI Lichie project (see Section 9.3). Its goal is to design sensor-based control strategies coupling vision and proximetry data for ensuring precise positioning while avoiding obstacles in dense environments. The targeted application is the assembly of satellite parts.

7.2.7 Visual servo of a satellite constellation

Participants: Maxime Robic, Eric Marchand, François Chaumette.

This study is also done in the scope of the BPI Lichie project (see Section 9.3). Its goal is to control the orientation of the satellites of a constellation from a camera mounted on each of them, in order to track particular objects on the ground.

7.2.8 Visual Exploration of an Indoor Environment

Participants: Benoît Antoniotti, Eric Marchand, François Chaumette.

This study was done in collaboration with the Creative company in Rennes through Benoît Antoniotti's PhD.

It was devoted to the exploration of indoor environments by a mobile robot, typically Pepper (see Section 6.2.2), for a complete and accurate reconstruction of the environment. Unfortunately, Benoît decided to abandon his thesis.

7.2.9 Model-Free Deformation Servoing of Soft Objects

Participants: Romain Lagneau, Alexandre Krupa, Maud Marchal.

Nowadays robots are mostly used to manipulate rigid objects. Manipulating deformable objects remains challenging due to the difficulty of accurately predicting the object deformations.

We proposed an approach able to deform a soft object towards a desired shape without requiring a priori knowledge of the object's mechanical parameters [66]. It is based on a model-free visual servoing method that estimates online, from past observations, the deformation Jacobian relating the motion of the robot end-effector to the deformation of the object. This estimation relies on a weighted least-squares minimization of a cost function defined on a temporal sliding window (a minimal sketch is given below). The estimated deformation Jacobian is then used in a control law that aims at exponentially decreasing a visual error describing the difference between the current and desired object deformations. To address the issue of a possible lack of observation consistency, we consider an eigenvalue-based confidence criterion during the Jacobian update to ensure robustness to observation noise. The approach was validated through comparisons with a model-based and a model-free state-of-the-art method, and the results showed that the proposed approach provides better robustness to external perturbations [40].
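The following sketch illustrates the two core ingredients, the sliding-window weighted least-squares Jacobian estimate and the resulting servoing law, while omitting the eigenvalue-based confidence test of the actual method:

```python
import numpy as np

def estimate_deformation_jacobian(dx_hist, ds_hist, weights):
    """Weighted least-squares estimate, over a sliding window, of the
    deformation Jacobian J such that ds ~ J dx, from past end-effector
    increments dx and observed deformation-feature increments ds."""
    X = np.asarray(dx_hist, float)    # (N, n) end-effector motions
    S = np.asarray(ds_hist, float)    # (N, m) feature changes
    W = np.diag(weights)
    Jt = np.linalg.solve(X.T @ W @ X, X.T @ W @ S)   # normal equations
    return Jt.T

def deformation_control(J, s, s_des, lam=0.3):
    """Model-free servoing law dx = -lambda J^+ (s - s_des), driving the
    visual deformation error towards zero."""
    return -lam * np.linalg.pinv(J) @ (np.asarray(s) - np.asarray(s_des))
```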

Building on the same principle, we proposed a method to automatically control the 3D shape of deformable wires manipulated by two robotic arms 3. To describe the deformation of the wire's 3D shape, we considered as visual features a set of equidistant 3D points belonging to the wire. A geometric 3D B-spline model was used to represent the shape of the wire, and an image processing algorithm relying on a Sequential Importance Resampling (SIR) particle filter was developed to track the wire deformation in real time from a sequence of point clouds provided by an RGB-D camera. This geometric B-spline model was only used for the visual tracking process; no mechanical model of the wire was considered in the visual control scheme, since we used our model-free deformation control approach. Several experiments on wires with different mechanical properties showed that our approach succeeded in controlling the 3D shape so as to reach a desired one.
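
As an illustration of the feature-extraction step only, the sketch below uses SciPy's generic spline routines (not the actual tracker of 3) to fit a 3D B-spline to points measured on the wire and to extract a fixed number of approximately equidistant points along it, which play the role of the visual features:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def equidistant_wire_features(points_3d, n_features=10):
    """Fit a cubic B-spline to (noisy) 3D points measured along the wire
    (array of shape (3, N)) and return n_features points approximately
    equidistant in arc length, used as the visual features."""
    tck, _ = splprep(list(points_3d), s=1e-4)      # geometric B-spline model
    u = np.linspace(0.0, 1.0, 500)                 # dense sampling of the curve
    curve = np.asarray(splev(u, tck))              # (3, 500)
    seg = np.linalg.norm(np.diff(curve, axis=1), axis=0)
    arc = np.concatenate(([0.0], np.cumsum(seg)))  # cumulative arc length
    targets = np.linspace(0.0, arc[-1], n_features)
    u_eq = np.interp(targets, arc, u)              # invert the arc-length map
    return np.asarray(splev(u_eq, tck)).T          # (n_features, 3)
```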

7.2.10 Model-Based Deformation Servoing of Soft Objects

Participants: Fouad Makiyeh, Alexandre Krupa, Maud Marchal, François Chaumette.

This study has just started and takes place in the context of the GentleMAN project (see Section 9.1.2). The objective is to elaborate a new visual servoing approach aiming at controlling the shape of an object towards a desired deformation. In contrast to the model-free deformation servoing approach presented in Section 7.2.9, we plan to consider a model of the mechanical behaviour of the object in order to analytically derive the deformation Jacobian used in the control law. Indeed, an analytical formulation could potentially enlarge the convergence domain, which is currently a limitation of model-free methods.

7.2.11 Manipulation of a deformable wire by two UAVs

Participants: Lev Smolentsev, Alexandre Krupa, François Chaumette.

This study has just started and takes place in the context of the CominLabs MAMBO project (see Section 9.4). It concerns the development of a visual-based control framework for performing autonomous manipulation of a deformable wire attached between two UAVs. The objective is to control the two UAVs from visual data provided by onboard RGB-D cameras in order to grasp an object on the ground with the wire and move it to another location.

7.2.12 Multi-Robot Formation Control

Participant: Paolo Robuffo Giordano.

Most multi-robot applications must rely on relative sensing among robot pairs (rather than absolute/external sensing such as, e.g., GPS). For these systems, the concept of rigidity provides the correct framework for defining an appropriate sensing and communication topology. In several previous works we have addressed the problem of coordinating a team of quadrotor UAVs equipped with onboard sensors (such as distance sensors or cameras) for cooperative localization and formation control under the rigidity framework. In 24, an analysis of rigidity for robotics applications is provided in the context of distance geometry, together with other examples from different domains. This analysis helps in bringing together different notions and techniques related to rigidity that are otherwise often considered far apart.

7.2.13 Coupling Force and Vision for Controlling Robot Manipulators

Participants: Alexander Oliva, François Chaumette, Paolo Robuffo Giordano.

The goal of this activity is to couple visual and force information for advanced manipulation tasks. To this end, we are exploiting the recently acquired Panda robot (see Sect. 6.2.4), a state-of-the-art 7-dof manipulator arm with torque sensing in the joints and the possibility to command torques at the joints or forces at the end-effector. The use of vision in torque-controlled robots is limited because of several issues, among which the difficulty of fusing low-rate images (about 30 Hz) with high-rate torque commands (about 1 kHz), the delays introduced by image processing and tracking algorithms, and the unavoidable occlusions that arise when the end-effector approaches an object to be grasped.

In this context, we recently proposed a general framework for combining force and visual information directly in the visual feature space, by reformulating and unifying the classical admittance control law in the image space. The proposed visual/force control framework has been extensively evaluated in numerous experiments performed on the Panda robot in peg-in-hole tasks, where both the pose and the exchanged forces could be regulated with high accuracy and good stability. These results are currently under review for RA-L/ICRA 2021.
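
A simplified sketch of the underlying idea follows (hypothetical names and a basic discretization; the complete formulation is in the paper under review): contact forces are mapped to the feature space through the interaction matrix and deform a compliant feature reference, which a classical IBVS law then tracks.

```python
import numpy as np

def image_admittance_step(s, s_ref, ds_ref, s_des, wrench, L, dt,
                          M=1.0, B=20.0, K=100.0, lam=1.0):
    """One step of an admittance law expressed in visual-feature space.
    s: measured features; (s_ref, ds_ref): compliant feature reference;
    s_des: desired features; wrench: measured 6D end-effector force/torque;
    L: interaction matrix (k x 6)."""
    # Map the Cartesian wrench to an equivalent feature-space force
    # (duality through the pseudo-inverse of the interaction matrix)
    f_s = np.linalg.pinv(L).T @ wrench
    # Mass-spring-damper dynamics on the feature-space error: contact forces
    # deform the reference trajectory instead of the Cartesian pose
    dds = (f_s - B * ds_ref - K * (s_ref - s_des)) / M
    ds_ref = ds_ref + dds * dt
    s_ref = s_ref + ds_ref * dt
    # A classical IBVS law then tracks the compliant reference
    v_cam = np.linalg.pinv(L) @ (ds_ref - lam * (s - s_ref))
    return s_ref, ds_ref, v_cam
```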

7.2.14 Dimensionality reduction for Direct Visual Servoing

Participant: Eric Marchand.

To date, most visual servoing approaches have relied on geometric features that have to be tracked and matched in the image. Recent works have highlighted the importance of taking into account the photometric information of the entire image, leading to direct visual servoing (DVS) approaches. The main disadvantage of DVS is its small convergence domain compared to conventional techniques, due to the high non-linearity of the cost function to be minimized. We proposed to project the image on orthogonal bases to build a new compact set of coordinates used as visual features. The idea is to consider the Discrete Cosine Transform (DCT), which represents the image in the frequency domain as a sum of cosine functions oscillating at various frequencies. This leads to a new set of coordinates in a precomputed orthogonal basis: the coefficients of the DCT. We proposed to use these coefficients as the visual features considered in a visual servoing control law, and we derived the analytical formulation of the interaction matrix related to these coefficients 21. This is also one of the first attempts to build a visual servoing control scheme directly in the frequency domain.
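
The feature-construction step can be sketched in a few lines (using SciPy's DCT implementation; the derivation of the corresponding interaction matrix is given in 21):

```python
import numpy as np
from scipy.fft import dctn

def dct_features(image, k=8):
    """Project a grayscale image on the 2D DCT basis and keep the k x k
    lowest-frequency coefficients as a compact visual feature vector."""
    coeffs = dctn(image.astype(float), norm='ortho')
    return coeffs[:k, :k].ravel()

# In the servo loop, the control law keeps its classical form,
#   v = -lam * pinv(L_dct) @ (dct_features(I) - dct_features(I_des)),
# where L_dct is the interaction matrix of the DCT coefficients derived in 21.
```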

7.3 Haptic Cueing for Robotic Applications

7.3.1 Wearable Haptics Systems Design

Participants: Claudio Pacchierotti, Marco Aggravi, Lisheng Kuang.

We have been working on wearable haptics for a few years now, both from the hardware (design of interfaces) and software (rendering and interaction techniques) points of view.

In 10, we presented a modular wearable interface for the fingers. It is composed of a 3-DoF fingertip cutaneous device and a 1-DoF finger kinesthetic exoskeleton, which can be used either together as a single device or separately as two different devices. The 3-DoF fingertip device is composed of a static body and a mobile platform. The mobile platform is capable of making and breaking contact with the finger pulp and of re-orienting itself to replicate contacts with arbitrarily oriented surfaces. The 1-DoF finger exoskeleton provides kinesthetic force to the proximal and distal interphalangeal finger articulations using one servo motor grounded on the proximal phalanx. We carried out three human-subject experiments: the first considered a curvature discrimination task, the second a robot-assisted palpation task, and the third an immersive experience in Virtual Reality. Results showed that providing cutaneous and kinesthetic feedback through our device significantly improved performance in all the considered tasks. Moreover, although cutaneous-only feedback showed promising performance, adding kinesthetic feedback improved most metrics. Finally, subjects ranked the device as highly wearable, comfortable, and effective.

In 30, we focused on the personalization of wearable haptic interfaces, aiming at devising haptic rendering techniques adapted to the specific characteristics of one's fingers. Indeed, fingertip size and shape vary significantly across humans, making it difficult to design fingertip interfaces and rendering techniques suitable for everyone. We started from an existing data-driven haptic rendering algorithm that ignores fingertip size, and we then developed two software-based approaches to personalize this algorithm for fingertips of different sizes, using either additional data or geometry. We evaluated our algorithms in the rendering of pre-recorded tactile sensations onto rubber casts of six different fingertips as well as onto the real fingertips of 13 human participants. Results on the casts show that both approaches significantly improve performance, reducing force error magnitudes by an average of 78% with respect to the standard non-personalized rendering technique. Congruent results were obtained for real fingertips, with subjects rating each of the two personalized rendering techniques significantly better than the standard non-personalized method.

In 45, we developed a low-cost immersive haptic, audio, and visual experience built from off-the-shelf components. It is composed of a vibrotactile glove, a Leap Motion sensor, and a head-mounted display, integrated together to provide compelling immersive sensations. To demonstrate its potential, we presented two human-subject studies in Virtual Reality. They evaluate the capability of the system to provide (i) guidance during simulated drone operations, and (ii) contact haptic feedback during interaction with virtual objects. Results show that the proposed haptic-enabled framework improves performance and the illusion of presence.

In 69, we introduced a wearable armband interface capable of tracking its orientation in space as well as providing vibrotactile sensations. It is composed of four vibrotactile motors able to provide contact sensations, an inertial measurement unit (IMU), and a battery-powered Arduino board. We use two of these armbands, worn on the forearm and the upper arm, to interact in an immersive Virtual Reality (VR) environment. The system renders in VR the movements of the user's arm as well as its interactions with virtual objects. Specifically, users are asked to catch a series of baseballs using their armband-equipped arm. Whenever a baseball hits the arm, the armband closest to the contact vibrates. The amplitude of the vibration is proportional to the distance between the impact point and the position of the activated armband.
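
The rendering rule can be stated as a short sketch (hypothetical names and gains): the armband closest to the impact is selected, and its amplitude follows the distance-proportional law described above.

```python
import numpy as np

def armband_vibration(impact, armband_positions, gain=3.0, max_amp=1.0):
    """Per-armband vibration amplitudes for a contact at `impact` (3D point).
    The armband closest to the impact is activated, with an amplitude
    proportional to its distance to the impact point (saturated at max_amp)."""
    d = np.linalg.norm(armband_positions - impact, axis=1)
    amps = np.zeros(len(d))
    i = np.argmin(d)
    amps[i] = min(max_amp, gain * d[i])
    return amps
```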

7.3.2 Mid-Air Haptic Feedback

Participants: Claudio Pacchierotti, Thomas Howard, Guillaume Gicquel, Maud Marchal.

GUIs have been the gold standard for more than 25 years. However, they only support interaction with digital information indirectly (typically using a mouse or pen), and input and output are always separated. Furthermore, GUIs do not leverage our innate human abilities to manipulate and reason with 3D objects. More recent 3D interfaces and VR headsets use physical objects as surrogates for tangible information, offering limited malleability and haptic feedback (e.g., rumble effects). In the framework of the H-Reality project (Sect. 9.2.1), we are working to develop a novel mid-air haptics paradigm that can convey the spectrum of touch sensations of the real world, motivating the need to develop new, natural interaction techniques.

Currently, focused ultrasound phased arrays are the most mature solution for providing mid-air haptic feedback. They modulate the phase of an array of ultrasound emitters so as to generate focused points of oscillating high pressure, eliciting vibrotactile sensations when encountering a user's skin. While these arrays feature a reasonably large vertical workspace, they are not capable of displaying stimuli far beyond their horizontal limits, which severely restricts their workspace in the lateral dimensions. For this reason, we proposed a solution for enlarging the workspace of focused ultrasound arrays. It features 2 degrees of freedom, rotating the array around the pan and tilt axes, thereby significantly increasing the usable workspace and enabling multi-directional feedback. Our hardware tests and a human-subject study in an ecological VR setting show a 14-fold increase in workspace volume, with focal-point repositioning speeds over 0.85 m/s, while delivering tactile feedback with positional accuracy below 18 mm 16, 72.
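
The added mechanism amounts to a simple aiming kinematics, sketched below under assumed frame conventions (the actual device and its control are described in 16, 72):

```python
import numpy as np

def pan_tilt_to_target(target):
    """Pan/tilt angles (rad) orienting the array axis (assumed initially along
    +x) toward a 3D focal point expressed in the pan-tilt base frame: pan is
    the azimuth about the vertical z axis, tilt the elevation."""
    x, y, z = target
    pan = np.arctan2(y, x)
    tilt = np.arctan2(z, np.hypot(x, y))
    return pan, tilt
```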

In addition to enlarging the workspace of ultrasound interfaces, we also worked to understand whether it was possible to render stiffness sensations through ultrasound stimulation 63. A user study enrolling 20 participants showed that it is indeed possible to render the sensation of interacting with virtual objects of different stiffness. We found just-noticeable-difference (JND) values of 17%, 31%, and 19% for the three reference stiffness values of 7358 Pa/m, 13242 Pa/m, and 19126 Pa/m (sound pressure over displacement), respectively.

7.3.3 Wearable Haptics for Interacting with Tangible Objects

Participants: Claudio Pacchierotti, Maud Marchal, Thomas Howard, Xavier de Tinguy.

We used haptic interfaces to augment the feeling of interacting with tangible objects in Virtual Reality, e.g., making a simple tangible object feel more or less stiff than it really is, or improving the sensation of making and breaking contact with virtual objects.

In 29, we studied the effect of combining simple passive tangible objects and wearable haptics for improving the display of varying stiffness, friction, and shape sensations in these environments. By providing timely cutaneous stimuli through a wearable finger device, we can make an object feel softer or more slippery than it really is, and we can also create the illusion of encountering virtual bumps and holes. We evaluated the proposed approach by carrying out three experiments with human subjects. Results confirm that we can increase the perceived compliance of a tangible object by varying the pressure applied through a wearable device. We are also able to simulate the presence of bumps and holes by providing timely pressure and skin stretch sensations. Altering the friction of a tangible surface showed recognition rates above the chance level, albeit lower than those registered in the other experiments. Finally, we showed the potential of our techniques in an immersive medical palpation use case in VR. These results pave the way for novel and promising haptic interactions in VR, better exploiting the multiple ways of providing simple, unobtrusive, and inexpensive haptic displays.

In 61, we designed a wearable encounter-type haptic display (ETHD). Encounter-type haptic displays solve the issue of rendering the sensations of making and breaking contact by bringing their end-effector in contact with the user only when collisions with virtual objects occur. We presented the design and evaluation of a wearable haptic interface for natural manipulation of tangible objects in Virtual Reality (VR), proposing an interaction concept in between encounter-type and tangible haptics. The actuated 1-degree-of-freedom interface brings a tangible object in and out of contact with the user's palm, rendering making and breaking of contact sensations and allowing grasping and manipulation of virtual objects. Device performance tests show that changes in contact state can be rendered with delays as low as 50 ms, with additional improvements to contact synchronicity obtained through our proposed interaction technique. An exploratory user study in VR showed that our device can render compelling grasp and release interactions with static and slowly moving virtual objects, contributing to user immersion.

This work won the Best Hands-on Demonstration award at Eurohaptics 2020, held in Leiden, The Netherlands.

Figure 8: The wearable encounter-type haptic interface presented in 61, which we called WeATaViX.

While working with wearable haptics and tangible objects, we realized that one of the main issues is the tracking of the human fingertip. Indeed, one important aspect to achieve is the synchronization of motion and sensory feedback between the human users and their virtual avatars, i.e., whenever a user moves a limb, the same motion should be replicated by the avatar; similarly, whenever the avatar touches a virtual object, the user should feel the same haptic experience. In 12, we combined tracking information from a tangible object instrumented with capacitive sensors and an optical tracking system to improve contact rendering when interacting with tangibles in VR. A human-subject study showed that combining capacitive sensing with optical tracking significantly improves the visuohaptic synchronization and immersion of the VR experience.
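
A minimal sketch of such a fusion logic (our assumption of how the two signals can be combined, not the exact algorithm of 12): the optical tracker provides the continuous fingertip pose, while the capacitive signal, which fires exactly at contact, is used to latch the contact state and remove the residual penetration error along the surface normal.

```python
import numpy as np

def fuse_contact(optical_tip, capacitive_touching, surface_point, surface_normal):
    """Combine the optically tracked fingertip position with a binary
    capacitive contact signal from the tangible object. On capacitive contact,
    the rendered fingertip is snapped onto the object surface so that visual
    and haptic contact stay synchronized despite optical tracking error."""
    n = surface_normal / np.linalg.norm(surface_normal)
    depth = np.dot(optical_tip - surface_point, n)   # signed normal offset
    if capacitive_touching:
        return optical_tip - depth * n, True         # project onto the surface
    # No capacitive contact: trust the optical pose; declare contact only if
    # the tip is optically below the surface by a small margin
    return optical_tip, depth < -0.002
```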

7.3.4 Wearable Haptics for Interacting with Virtual Crowds

Participants: Claudio Pacchierotti, Julien Pettré, Florian Berton, Fabien Grzeskowiak, Marco Aggravi, Alberto Jovane.

We have been using wearable vibrotactile armbands to render the collisions between a human user and a virtual crowd. In 9, we focused on the behavioural changes occurring with or without haptic rendering during a navigation task in a dense crowd, as well as on potential after-effects introduced by the use of haptic rendering. Our objective is to provide recommendations for designing VR setups to study crowd navigation behaviour. To this end, we designed an experiment (N=23) where participants navigated in a crowded virtual train station without, then with, and then again without haptic feedback of their collisions with virtual characters. Results show that providing haptic feedback improved the overall realism of the interaction, as participants more actively avoided collisions. We also noticed a significant after-effect in the users' behaviour when haptic rendering was once again disabled in the third part of the experiment. Nonetheless, haptic feedback did not have any significant impact on the users' sense of presence and embodiment.

7.3.5 Wearable Haptics for an Augmented Wheelchair Driving Experience

Participants: Louise Devigne, Jeanne Hécquard, Zoé Levrel, François Pasteau, Marco Aggravi, Maud Marchal, Claudio Pacchierotti, Marie Babel.

Smart powered wheelchairs can increase mobility and independence for people with disability by providing navigation support. For rehabilitation or learning purposes, it would be of great benefit for wheelchair users to have a better understanding of the surrounding environment while driving. Therefore, a way of providing navigation support is to communicate information through a dedicated and adapted feedback interface.

In 13, we envisaged the use of wearable vibrotactile haptics, i.e., two haptic armbands, each composed of four evenly-spaced vibrotactile actuators. With respect to other available solutions, our approach provides rich navigation information while always leaving the patient in control of the wheelchair motion. We then conducted experiments with volunteers who experienced wheelchair driving in conjunction with the use of the armbands, providing drivers with information either on a trajectory to follow or on the presence of obstacles. Results show that providing information on the closest obstacle position significantly improved the safety of the driving task (fewest collisions). This work is jointly conducted in the context of the ADAPT project (Sect. 9.2.2) and the ISI4NAVE associate team (Sect. 9.1.1).
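
The obstacle-to-vibration mapping can be sketched as follows (hypothetical actuator layout and gains): the actuator whose bearing best matches the direction of the closest obstacle is driven, with an intensity that grows as the obstacle gets closer.

```python
import numpy as np

ACTUATOR_BEARINGS = np.deg2rad([0, 90, 180, 270])  # evenly spaced around the arm

def obstacle_cue(obstacle_bearing, obstacle_distance, d_max=1.5):
    """Vibration intensities (0..1) of the four actuators of an armband for
    an obstacle at `obstacle_bearing` (rad, wheelchair frame) and
    `obstacle_distance` (m). The closest-matching actuator is driven with an
    intensity that increases as the obstacle gets closer."""
    # Wrapped angular differences between actuator bearings and the obstacle
    diff = np.abs(np.angle(np.exp(1j * (ACTUATOR_BEARINGS - obstacle_bearing))))
    intensities = np.zeros(4)
    intensities[np.argmin(diff)] = np.clip(1.0 - obstacle_distance / d_max, 0.0, 1.0)
    return intensities
```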

To extend this work, new interfaces, such as mid-air haptic interfaces, have also been explored in order to design inclusive solutions for museum and cultural heritage exploration for wheelchair users.

Figure 9: A participant drives our power wheelchair along the circuit while being equipped with vibrotactile armbands. The wheelchair is commanded using a standard 2D joystick placed in the right armrest, and it is instrumented with 12 ultrasonic sensors to detect obstacles. The armbands provide information either on a trajectory to follow or the presence of obstacles, depending on the condition being tested.

7.4 Shared Control Architectures

7.4.1 Shared Control for Remote Manipulation

Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Rahaf Rahal, Raul Fernandez Fernandez.

As teleoperation systems become more sophisticated and flexible, the environments and applications where they can be employed become less structured and predictable. This desirable evolution toward more challenging robotic tasks requires an increasing degree of training, skill, and concentration from the human operator. In this respect, shared control algorithms have been investigated as one of the main tools to design complex but intuitive robotic teleoperation systems, helping operators carry out increasingly difficult robotic applications, such as assisted vehicle navigation, surgical robotics, brain-computer interface manipulation, and rehabilitation. Indeed, this approach makes it possible to share the available degrees of freedom of the robotic system between the operator and an autonomous controller.

Along this general line of research, during this year we gave the following contributions:

  • in 26, 57 we studied how to enhance the user's comfort during a telemanipulation task. Using an inverse kinematic model of the human arm and the Rapid Upper Limb Assessment (RULA) metric, the proposed approach estimates the current user's comfort online. From this measure and an a priori knowledge of the task, we then generate dynamic active constraints guiding the users towards a successful completion of the task, along directions that improve their posture and increase their comfort. Studies with human subjects have shown the effectiveness of the proposed approach, yielding a 30% perceived reduction of the workload with respect to standard human-in-the-loop teleoperation.
  • in 23 we presented an adaptive impedance control architecture for robotic teleoperation of contact tasks featuring continuous interaction with the environment. We used Learning from Demonstration (LfD) as a framework to learn variable-stiffness control policies. The learnt state-varying stiffness was then used to command the remote manipulator, so as to adapt its interaction with the environment based on the sensed forces. The proposed system only relies on the on-board torque sensors of a commercial robotic manipulator and does not require any additional hardware or user input for the estimation of the required stiffness. We also provide a passivity analysis of our system, where the concept of energy tanks is used to guarantee a stable behavior (a minimal sketch of this mechanism is given after this list). Finally, the system was evaluated in a representative teleoperated cutting application. Results showed that the proposed variable-stiffness approach outperforms two standard constant-stiffness approaches in terms of safety and robot tracking performance.
  • in 19 we focused on robotic manipulation of fragile, compliant objects, such as food items. In particular, we developed a haptic-based Learning from Demonstration (LfD) policy that enables pre-trained autonomous grasping of food items using an anthropomorphic robotic system. The policy combines data from teleoperation and direct human manipulation of objects, embodying human intent and interaction areas of significance. We evaluated the proposed solution against a recent state-of-the-art LfD policy as well as against two standard impedance control techniques. The results showed that the proposed policy performs significantly better than the other considered techniques, leading to high grasping success rates while guaranteeing the integrity of the food at hand.
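
As mentioned in the second item above, a minimal one-degree-of-freedom sketch of the energy-tank mechanism follows (simplified; see 23 for the actual formulation): the tank stores the energy dissipated by the controller, and a stiffness increase is allowed only if the tank holds enough energy to pay for it, which preserves passivity.

```python
def tank_step(E, x_err, dx, k_cur, k_des, dt, E_min=0.1, damping=5.0):
    """One step of an energy-tank passivity layer for variable-stiffness
    control (1-DoF sketch). E: tank energy; x_err, dx: tracking error and
    velocity; k_cur, k_des: current and requested stiffness."""
    E += damping * dx * dx * dt            # dissipated energy refills the tank
    dE_spring = 0.5 * (k_des - k_cur) * x_err ** 2
    if dE_spring <= 0.0 or E - dE_spring > E_min:
        # The stiffness change is 'paid for' by the tank, preserving passivity
        E -= max(dE_spring, 0.0)
        k_cur = k_des
    # Otherwise the stiffness is held until enough energy is available
    return E, k_cur
```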

7.4.2 Shared Control for Multiple Robots

Participants: Marco Aggravi, Paolo Robuffo Giordano, Claudio Pacchierotti, Muhammad Nazeer.

Following our previous works on flexible formation control of multiple robots with global requirements, in particular connectivity maintenance, we proposed in 1 a comprehensive study of a decentralized connectivity-maintenance algorithm for the teleoperation of a team of multiple UAVs, together with an extensive human-subject evaluation in virtual and real environments. The proposed connectivity-maintenance algorithm enhances our previous works by including: (i) an airflow-avoidance behavior that avoids the downwash effects arising when rotor-based aerial robots fly above one another; (ii) a consensus-based action enabling fast displacements with minimal topology changes by having all follower robots move at the leader's velocity; (iii) an automatic decrease of the minimum degree of connectivity, enabling an intuitive and dynamic expansion/compression of the formation; and (iv) an automatic detection and resolution of deadlock configurations, i.e., when the robot leader cannot move due to counterbalancing connectivity and external inputs. We also devised and evaluated different interfaces for teleoperating the team, as well as different ways of receiving information about the connectivity force acting on the leader. The results of two human-subject experiments showed that the proposed algorithm is effective in various situations. Moreover, using haptic feedback to provide information about the team connectivity outperforms providing both no feedback at all and sensory substitution via visual feedback.
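
The backbone of such connectivity-maintenance schemes can be sketched in a few lines (centralized and with numerical differentiation for readability; the algorithm of 1 computes the same quantities in a fully decentralized way): the algebraic connectivity of the weighted inter-robot graph is kept above a threshold by a gradient-like action on each robot.

```python
import numpy as np

def lambda2(P, comm_range=5.0):
    """Algebraic connectivity (second-smallest Laplacian eigenvalue) of the
    weighted inter-robot graph induced by positions P (n x dim)."""
    d = np.linalg.norm(P[:, None] - P[None, :], axis=2)
    W = np.exp(-d ** 2) * (d < comm_range)      # smooth distance-based weights
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    return np.sort(np.linalg.eigvalsh(L))[1]

def connectivity_action(P, lam_min=0.2, eps=1e-4):
    """Gradient-like velocity commands keeping lambda2 above lam_min
    (numerical gradient, centralized for readability)."""
    lam = lambda2(P)
    grad = np.zeros_like(P)
    for i in range(P.shape[0]):
        for k in range(P.shape[1]):
            Q = P.copy()
            Q[i, k] += eps
            grad[i, k] = (lambda2(Q) - lam) / eps
    gain = max(0.0, lam_min - lam + 0.05)       # active only near the threshold
    return gain * grad
```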

In 4, we instead presented a decentralized haptic-enabled connectivity-maintenance control framework for heterogeneous human-robot teams. The proposed framework controls the coordinated motion of a team consisting of mobile robots and one human for collaboratively achieving various exploration and search-and-rescue (SAR) tasks. The human user physically becomes part of the team, moving in the same environment as the robots while receiving rich haptic feedback about the team connectivity and the direction toward a safe path. We carried out two human-subject studies, in both simulated and real environments. The results showed that the proposed approach is effective and viable in a wide range of SAR scenarios. Moreover, providing haptic feedback increased performance with respect to providing visual information only. Finally, conveying distinct feedback regarding the team connectivity and the path to follow performed better than providing the same information combined together.

7.4.3 Robotic teleoperation with haptic feedback for ultrasound elastography imaging

Participant: Alexandre Krupa.

Ultrasound elastography is an imaging modality that unveils the elastic parameters of a tissue, which are commonly related to certain pathologies. It is performed by applying a continuous stress variation on the tissue in order to estimate a strain map (elastogram) from successive ultrasound images. Usually, this stress variation is applied manually by the user through the manipulation of an ultrasound probe, and it therefore results in a user-dependent quality of the strain map. To improve ultrasound elastography imaging and provide quantitative measurements, we developed a shared control architecture that allows the user to remotely operate a robotized ultrasound probe via a haptic device, while the robot autonomously applies the required continuous stress variation on the tissue 43. Force feedback computed from the resulting live elastogram stream is then rendered to the user through the haptic device, so that they can feel the stiffness of the explored tissue.

7.4.4 Shared Control of a Wheelchair for Navigation Assistance

Participants: Louise Devigne, François Pasteau, Marie Babel.

Power wheelchairs allow people with motor disabilities to have more mobility and independence. In order to improve the access to mobility for people with disabilities, we previously designed a semi-autonomous assistive wheelchair system which progressively corrects the trajectory as the user manually drives the wheelchair and smoothly avoids obstacles.

Despite the COVID situation, INSA and the rehabilitation center of Pôle Saint Hélier managed to co-organize clinical trials in July 2020 at INSA 74. The objective was to evaluate the clinical benefit of a driving assistance for people with disabilities experiencing great difficulty steering a wheelchair. Ten people participated in the trial. The results clearly demonstrate the excellent ability of the system to assist users and the relevance of such an assistive technology.

Figure 10: A participant drives our smart power wheelchair along the circuit during clinical tests (July 2020).

In addition, safely driving such a vehicle is a daily challenge, particularly in urban environments, when navigating on sidewalks, negotiating curbs, or dealing with uneven grounds. Indeed, differences of elevation have been reported to be among the most challenging environmental barriers to negotiate, with tipping and falling being the most common accidents power wheelchair users encounter. It is thus our challenge to design assistive solutions for power wheelchair navigation in order to improve safety while navigating in such environments. To this aim, we proposed a shared-control algorithm which provides assistance while navigating with a wheelchair in an environment containing negative obstacles. We designed a dedicated sensor-based control law allowing trajectory correction while approaching negative obstacles, e.g., steps, curbs, or descending slopes. This shared control method takes the human-in-the-loop factor into account. We are currently preparing clinical trials to evaluate the clinical benefit of such an assistive tool.

7.4.5 Multisensory power wheelchair simulator

Participants: Guillaume Vailland, Louise Devigne, François Pasteau, Marie Babel.

Power wheelchairs are one of the main solutions for people with reduced mobility to maintain or regain autonomy and a comfortable and fulfilling life. However, driving a power wheelchair in a safe way is a difficult task that often requires training methods based on real-life situations. Although these methods are widely used in occupational therapy, they are often too complex to implement and unsuitable for some people with major difficulties.

In this context, we collaborated with clinicians to develop a Virtual Reality based power wheelchair simulator. This simulator is an innovative training tool adaptable to any type of situation and impairment. It relies on a modular and versatile workflow enabling easy interfacing not only with any virtual display but also with any user interface, such as wheelchair controllers or feedback devices. A clinical trial was conducted in January 2020, in which 29 regular power wheelchair users were asked to complete a clinically validated task designed by clinicians under two conditions: driving in a virtual environment with our simulator and driving in real conditions with a real power wheelchair. The objective of this study was to compare performance between the two conditions and to evaluate the Quality of Experience provided by our simulator in terms of Sense of Presence and Cybersickness.

Results have shown that participants complete the tasks in a similar amount of time in both the real and virtual conditions, using respectively a real power wheelchair and our simulator, while the simulator provides a high level of Sense of Presence and a valuable Quality of Experience 49, 54.

Figure 11: Participant driving in a virtual environment with our simulator and driving in real conditions with a real power wheelchair

7.4.6 Integrating social interaction in a VR powered wheelchair driving simulator

Participants: Emilie Leblong, Marie Babel.

Navigating in the city while driving a powered wheelchair, in a complex and dynamic environment made of various interactions with other humans, can be challenging for a person with disabilities. Learning how to drive a powered wheelchair thus remains a major issue for the clinical teams prescribing these technical mobility aids. The work carried out as part of the Interreg ADAPT project has made it possible to design a powered wheelchair simulator in VR (see Sect. 9.2.2). This work was done in cooperation with Anne-Hélène Olivier (MimeTIC team).

To promote the transfer of skills from virtual to real conditions, the use of such a platform requires the deployment of ecologically valid, interactive, populated virtual environments. These are currently devoid of any pedestrians, even though the question of social interaction in the framework of an inclusive urban mobility is fundamental.

The first objective of the PhD work of Emilie Leblong (who is also a medical doctor) is to better understand how pedestrians and powered wheelchair users interact. In particular, this study aims to characterize the personal space from the perspectives of both the pedestrian and the powered wheelchair driver in a laboratory setting.

The second objective is to use these new models of interaction to improve dynamic virtual environments by populating them with virtual humans that faithfully reproduce the modeled behaviors in reaction to the simulator user in a disability situation.

Finally, the third objective is to evaluate the fidelity of this new generation of wheelchair simulator by comparing the resulting interactions with those previously observed in real conditions. In particular, we will consider the perception of the risk of collision as well as the benefit of learning to drive on the simulator via clinical studies.

7.5 Crowd Simulation for Robotics

7.5.1 High-density crowd simulation

Participants: Julien Pettré, Wouter van Toll, Cédric Braga.

In highly dense crowds of humans, collisions between people occur often. It is common to simulate such a crowd as one fluid-like entity (macroscopic), and not as a set of individuals (microscopic, agent-based). Agent-based simulations are preferred for lower densities because they preserve the properties of individual people. However, their collision handling is too simplistic for extreme-density crowds. Therefore, neither paradigm is ideal for all possible densities. In this work 50, we combine agent-based crowd simulation with the concept of Smoothed Particle Hydrodynamics (SPH), a particle-based method that is popular for fluid simulation. Our combination augments the usual agent-collision handling with fluid dynamics when the crowd density is sufficiently high. A novel component of our method is a dynamic rest density per agent, which intuitively controls the crowd density that an agent is willing to accept. Experiments show that SPH improves agent-based simulation in several ways: better stability at high densities, more intuitive control over the crowd density, and easier replication of wave-propagation effects. Our implementation can simulate tens of thousands of agents in real-time. As such, this work successfully prepares the agent-based paradigm for crowd simulation at all densities.
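
The SPH ingredient of the method can be illustrated by the following 2D sketch (standard kernel, unit masses; the full method, including the dynamic per-agent rest density, is described in 50): each agent's local density is computed from its neighbors, and a pressure force pushes agents apart only where the density exceeds the agent's rest density.

```python
import numpy as np

def sph_pressure_forces(pos, rest_density, h=1.0, stiffness=5.0):
    """2D SPH-style pressure forces for crowd agents (unit masses).
    pos: (n, 2) agent positions; rest_density: (n,) per-agent rest densities
    (dynamic in the full method). Forces activate only where the local
    density exceeds an agent's rest density."""
    n = len(pos)
    diff = pos[:, None] - pos[None, :]                    # (n, n, 2)
    r = np.linalg.norm(diff, axis=2)
    W = np.where(r < h, (1 - (r / h) ** 2) ** 3, 0.0)     # poly6-like kernel
    rho = W.sum(axis=1)                                   # local densities
    p = stiffness * np.maximum(rho - rest_density, 0.0)   # non-negative pressure
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j and 0.0 < r[i, j] < h:
                # Gradient of the kernel with respect to pos[i]
                gradW = -6 * (1 - (r[i, j] / h) ** 2) ** 2 * diff[i, j] / h ** 2
                # Symmetrized pressure term, repulsive between crowded agents
                forces[i] -= (p[i] + p[j]) / (2 * max(rho[j], 1e-9)) * gradW
    return forces
```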

7.5.2 Generalized Microscopic Crowd Simulation using Costs in Velocity Space

Participants: Julien Pettré, Wouter van Toll, Fabien Grzeskowiak, Javad Amirian.

To simulate the low-level (‘microscopic’) behavior of human crowds, a local navigation algorithm computes how a single person (‘agent’) should move based on its surroundings. Many algorithms for this purpose have been proposed, each using different principles and implementation details that are difficult to compare.

These works 28, 51 introduce a novel framework that describes local agent navigation generically as optimizing a cost function in a velocity space. We show that many state-of-the-art algorithms can be translated to this framework, by combining a particular cost function with a particular optimization method. As such, we can reproduce many types of local algorithms using a single general principle.
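
A tiny instance of the framework is sketched below (with one hypothetical cost function; UMANS implements many published ones): candidate velocities are sampled, each is scored by a cost combining deviation from the preferred velocity and predicted time-to-collision with neighbors, and the minimizer is taken as the next velocity.

```python
import numpy as np

def time_to_collision(p, v, q, u, radius=0.5):
    """Time until two discs (positions p, q; velocities v, u) collide, or inf."""
    dp, dv = p - q, v - u
    c = dp @ dp - (2 * radius) ** 2
    if c < 0:
        return 0.0                      # already overlapping
    a, b = dv @ dv, 2 * (dp @ dv)
    disc = b * b - 4 * a * c
    if a < 1e-9 or disc < 0:
        return np.inf                   # never collide
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t >= 0 else np.inf

def best_velocity(p, v_pref, neighbors, n_samples=200, v_max=1.8):
    """Sample candidate velocities and minimize a (hypothetical) cost in
    velocity space: deviation from the preferred velocity plus a penalty
    growing as the predicted time-to-collision with any neighbor shrinks."""
    rng = np.random.default_rng(0)
    ang = rng.uniform(0, 2 * np.pi, n_samples)
    mag = rng.uniform(0, v_max, n_samples)
    cands = np.stack([mag * np.cos(ang), mag * np.sin(ang)], axis=1)
    costs = [np.linalg.norm(c - v_pref)
             + 2.0 / max(min((time_to_collision(p, c, q, u)
                              for q, u in neighbors), default=np.inf), 0.25)
             for c in cands]
    return cands[int(np.argmin(costs))]
```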

Our implementation of this framework, named UMANS (Unified Microscopic Agent Navigation Simulator), is freely available online. This software enables easy experimentation with different algorithms and parameters. We expect that our work will help understand the true differences between navigation methods, enable honest comparisons between them, simplify the development of new local algorithms, make techniques available to other communities, and stimulate further research on crowd simulation.

7.5.3 Synchronizing navigation algorithms for crowd simulation via topological strategies

Participants: Julien Pettré, Wouter van Toll.

We present a novel topology-driven method for enhancing the navigation behavior of agents in virtual environments and crowds. In agent-based crowd simulations, each agent combines multiple navigation algorithms for path planning, collision avoidance, and more. This may lead to undesired motion whenever the algorithms disagree on how an agent should pass an obstacle or another agent.

In this paper 27, we argue that all navigation algorithms yield a strategy: a set of decisions to pass obstacles and agents along the left or right. We show how to extract such a strategy from a (global) path and from a (local) velocity. Next, we propose a general way for an agent to resolve conflicts between the strategies of its algorithms. For example, an agent may re-plan its global path when collision avoidance suggests a detour. As such, we bridge conceptual gaps between algorithms, and we synchronize their results in a fundamentally new way. Experiments with an example implementation show that our strategy concept can improve the behavior of agents while preserving real-time performance. It can be applied to many agent-based simulations, regardless of their specific navigation algorithms. The concept is also suitable for explicitly sending agents in particular directions, e.g. to simulate signage.

7.5.4 Influence of path curvature on collision avoidance behaviour between two walkers

Participant: Julien Pettré.

Navigating crowded community spaces requires interactions with pedestrians that follow rectilinear and curvilinear trajectories. In the case of rectilinear trajectories, it has been shown that the perceived action opportunities of the walkers might be afforded based on a future distance of closest approach. However, little is known about collision avoidance behaviours when avoiding walkers that follow curvilinear trajectories. In this paper 20, twenty-two participants were immersed in a virtual environment and avoided a virtual human (VH) that followed either a rectilinear path or a curvilinear path with a 5 m or 10 m radius of curvature, at various distances of closest approach. Compared to a rectilinear path (control condition), the curvilinear path with a 5 m radius yielded more collisions when the VH approached from behind the participant and more inversions of crossing order when the VH approached from in front. During each trial, the evolution of the future distance of closest approach showed similarities between rectilinear paths and curvilinear paths with a 10 m radius of curvature. Overall, with few collisions and few inversions of crossing order, we can conclude that participants were capable of predicting the future distance of closest approach of virtual walkers that followed curvilinear trajectories. The task was solved with avoidance adaptations similar to those observed for rectilinear interactions. These findings should inform future endeavors to further understand collision avoidance strategies and the role of, for example, non-constant velocities.
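
Under a constant-velocity extrapolation, the future distance of closest approach used in such analyses has a simple closed form; for curvilinear paths, the same quantity is simply re-evaluated as the trajectories unfold. A minimal sketch:

```python
import numpy as np

def distance_of_closest_approach(p1, v1, p2, v2):
    """Future minimum distance between two walkers, assuming both keep their
    current velocities, and the (non-negative) time at which it occurs."""
    dp, dv = p1 - p2, v1 - v2
    denom = dv @ dv
    t_star = 0.0 if denom < 1e-9 else max(0.0, -(dp @ dv) / denom)
    return np.linalg.norm(dp + t_star * dv), t_star
```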

7.5.5 Characterization and Evaluation of Control Laws for Virtual Steering Navigation

Participant: Maud Marchal.

During navigation in virtual environments, the control law plays a key role. In 36, we investigated the influence of the control law in virtual steering techniques, and in particular of the speed update, on users' behaviour while navigating in virtual environments. To this end, we first proposed a characterization of existing control laws. Then, we designed a user study to evaluate the impact of the control law on users' behaviour and performance in a navigation task. Participants had to perform a virtual slalom while wearing a head-mounted display. They followed three different sinusoidal-like trajectories (with low, medium, and high curvature) using a torso-steering navigation technique with three different control laws (constant, linear, and adaptive). The adaptive control law, based on the biomechanics of human walking, takes into account the relation between speed and curvature. We proposed a spatial and temporal analysis of the trajectories performed both in the virtual and the real environment. The results showed that users' trajectories and behaviors were significantly affected by the shape of the trajectory but also by the control law. In particular, users' angular velocity was higher with the constant and linear laws than with the adaptive law. The analysis of subjective feedback suggested that these differences might result in a lower perceived physical demand and effort for the adaptive control law.
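
As an illustration, a minimal sketch of an adaptive speed update of this kind, assuming the classical one-third power law relating speed and curvature in human locomotion (the exact law used in 36 may differ):

```python
def adaptive_speed(curvature, v_max=1.4, k=0.35):
    """Forward speed adapted to path curvature, following the classical
    one-third power law of human locomotion, v ~ kappa**(-1/3), saturated
    at v_max on (nearly) straight segments where curvature tends to zero."""
    if curvature < 1e-6:
        return v_max
    return min(v_max, k * curvature ** (-1.0 / 3.0))
```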

7.5.6 Development and evaluation of crowd simulation platform

Participants: Julien Pettré, Adèle Colas, Alberto Jovane, Tairan Yin, Vicenzo Abichequer Sangalli.

The purpose of crowd simulators is to reproduce or predict the motion of real crowds. This is very useful to robotics for studying Human-Robot Interaction (HRI) in the broad sense, as well as for performing studies on safety issues. One specificity of our simulation techniques is the possibility to immerse people or robots into the simulation. Thus, we can create virtual spaces that real humans and real robots share with simulated alter egos. It is then important to consider the validity of using Virtual Reality to study HRI, and to figure out whether data or observations coming from such systems can be considered equivalent to real data. To this end, in collaboration with the MimeTIC team, we are conducting a number of studies to evaluate, for instance, how simulated human motion is perceived 32, 53, 42, 37. We also conduct studies to evaluate behavioral biases, such as how human gaze behaves in Virtual Reality compared to real conditions 34.

8 Bilateral contracts and grants with industry

8.1 Bilateral grants with industry

Creative

Participants: Benoît Antoniotti, François Chaumette, Eric Marchand.

No Inria Rennes 13996, duration: 36 months.

This project funded by Creative started in March 2019. It supported Benoît Antoniotti's Ph.D. about visual exploration (see Section 7.2.8). Unfortunately, Benoît dropped his thesis in January 2020.

IRT JV Perform

Participant: François Chaumette.

No Inria Rennes 14049, duration: 38 months.

This project funded by IRT Jules Verne in Nantes started in January 2018. It is achieved in cooperation with Stéphane Caro from LS2N in Nantes to support Zane Zake's Ph.D. about visual servoing of cable-driven parallel robots (see Section 7.2.4).

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Inria associate team not involved in an IIL

ISI4NAVE

Participants: Marie Babel, Claudio Pacchierotti, François Pasteau, Louise Devigne, Marco Aggravi.

  • Title: Innovative Sensors and adapted Interfaces for assistive NAVigation and pathology Evaluation
  • Duration: 2016 - 2022
  • Coordinator: Marie Babel
  • Partners:
    • Aspire CREATe, University College London (United Kingdom)
    • Pôle Saint Hélier (Rennes)
  • Inria contact: Marie Babel
  • Summary:

    Using a wheelchair allows people with disabilities to compensate for a loss of mobility. However, only 5 to 15% of the 70 million people worldwide who require a wheelchair have access to this type of technical aid. In particular, visual, visuo-spatial and/or cognitive impairments can alter the ability of an individual to independently and safely operate a wheelchair.

    This project focuses then on two main complementary objectives: (1) to compensate both sensorimotor disabilities and cognitive impairments by designing adapted interfaces, (2) to enhance the driving experience and to bring a new tool for rehabilitation purposes by defining efficient physical Human-Robot Interaction.

    In order to ensure a widespread use of robotic systems, innovative interfaces enabling relevant (medically validated) feedback constitute a major challenge. Trajectory corrections, obtained thanks to an assistance module, will have to be perceived by the user by means of sensory (visual, tactile…) feedback that can easily be adapted to the pathology. Conversely, user interaction with the robotic system can be interpreted to control the wheelchair. Designing such systems requires a multidisciplinary study, including medical data collection and analysis.

    The scope of the ISI4NAVE Associate Team is thus to provide advanced and innovative solutions for controlling a wheelchair, as well as appropriate and relevant feedback to users. See also: https://team.inria.fr/isi4nave/

9.1.2 Participation in other international programs

GentleMAN

Participants: Fouad Makiyeh, Alexandre Krupa, François Chaumette.

  • Title: Gentle and Advanced Robotic Manipulation of 3D Compliant Objects
  • Duration: August 2019 - December 2023
  • Coordinator: Sintef Ocean (Norway)
  • Partners:
    • Sintef Ocean (Norway)
    • NTNU (Norway)
    • NMBU (Norway)
    • MIT (USA)
    • QUT (Australia)
  • Inria contact: Alexandre Krupa
  • Summary: This project is funded by the Norwegian Government. Its main objective is to develop a novel learning framework that uses visual, force, and tactile sensing to build new multi-modal learning models, interfaced with the underlying robot control, to enable robots to learn new and advanced skills for the manipulation of 3D compliant objects. In the scope of this project, the Rainbow group is involved in the elaboration of new approaches for visual tracking of deformable objects, active vision perception, and visual servoing frameworks for deforming soft objects into a desired shape (see Section 7.2.10). Alexandre Krupa made a one-week visit to Sintef in Trondheim in February 2020.

9.2 European initiatives

9.2.1 FP7 & H2020 Projects

PRESENT
  • Title: Photoreal REaltime Sentient ENTity
  • Duration: September 2019 - August 2022
  • Coordinator: UNIVERSIDAD POMPEU FABRA (Spain)
  • Partners:
    • BRAINSTORM MULTIMEDIA SL (Spain)
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (France)
    • CREATIVE WORKERS-CREATIEVE WERKERS VZW (Belgium)
    • ETUITUS SRL (Italy)
    • INFOCERT SPA (Italy)
    • THE FRAMESTORE LIMITED (United Kingdom)
    • UNIVERSIDAD POMPEU FABRA (Spain)
    • UNIVERSITAET AUGSBURG (Germany)
    • UNIVERSITE RENNES II (France)
  • Inria contact: Julien Pettré
  • Summary: Our relationship with virtual entities is deepening. Already, we are using technologies like Siri, Alexa and Google Assistant to aid in day-to-day tasks. The EU-funded PRESENT project will develop a virtual digital companion, which will not only sound human but also look natural, demonstrate emotional sensitivity, and establish meaningful dialogue. Advances in photorealistic computer-generated characters, combined with emotion recognition and behaviour, and natural language technologies, will allow these virtual agents to not only look realistic but respond like a human. The project will demonstrate a set of practical tools, a pipeline and an application programming interface.
CLIPE
  • Title: Creating Lively Interactive Populated Environments
  • Duration: March 2019 - February 2023
  • Coordinator: UNIVERSITY OF CYPRUS (Cyprus)
  • Partners:
    • ECOLE POLYTECHNIQUE (France)
    • KUNGLIGA TEKNISKA HOEGSKOLAN (Sweden)
    • MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV (Germany)
    • SILVERSKY3D VR TECHNOLOGIES LTD (Cyprus)
    • THE PROVOST, FELLOWS, FOUNDATION SCHOLARS & THE OTHER MEMBERS OF BOARD OF THE COLLEGE OF THE HOLY & UNDIVIDED TRINITY OF QUEEN ELIZABETH NEAR DUBLIN (Ireland)
    • UNIVERSITAT POLITECNICA DE CATALUNYA (Spain)
    • UNIVERSITY OF CYPRUS (Cyprus)
  • Inria contact: Julien Pettré
  • Summary: The project addresses the core challenges of designing new techniques to create and control interactive virtual characters, benefiting from the opportunities opened by the wide availability of emergent technologies in the domains of human digitization and displays, as well as recent progress in artificial intelligence. CLIPE aspires to train the new generation of researchers in these techniques, looking at the area holistically. The training and research programme is based on a multi-disciplinary and cross-sectoral philosophy, bringing together industry and academia experts and focusing on the development of both technical and transversal skills.
CrowdDNA
  • Title: TECHNOLOGIES FOR COMPUTER-ASSISTED CROWD MANAGEMENT
  • Duration: November 2020 - March 2023
  • Coordinator: Inria
  • Partners:
    • ECOLE NORMALE SUPERIEURE DE RENNES (France)
    • FORSCHUNGSZENTRUM JULICH GMBH (Germany)
    • ONHYS (France)
    • UNIVERSIDAD REY JUAN CARLOS (Spain)
    • UNIVERSITAET ULM (Germany)
    • UNIVERSITE RENNES II (France)
    • UNIVERSITY OF LEEDS (United Kingdom)
  • Inria contact: Julien Pettré
  • Summary: Crowd management is a difficult job. Large crowds gathering for an outdoor event and heavy pedestrian traffic are events of serious concern for officials tasked with managing public spaces. Existing methods rely on simulation technologies and require the measurement of simulation variables that are difficult to estimate. The EU-funded CrowdDNA project proposes a new technology based on innovative crowd simulation models. It facilitates predictions of the dynamics, behaviour, and risk factors of high-density crowds, addressing the need for safe and comfortable mass events. The project suggests that the analysis of some specific macroscopic characteristics of a crowd, such as its apparent motion, can offer important information about its internal structure and allow an accurate assessment of its state.
H-REALITY
  • Title: Mixed Haptic Feedback for Mid-Air Interactions in Virtual and Augmented Realities
  • Duration: Oct 2018 - Mar 2022
  • Coordinator: The University of Birmingham
  • Partners:
    • Ultrahaptics Ltd. (United Kingdom)
    • Actronika SAS (France)
    • TU Delft (The Netherlands)
  • Inria contact: Claudio Pacchierotti
  • Summary: The ambition of H-Reality will be achieved by integrating the commercial pioneers of ultrasonic “non-contact” haptics, state-of-the-art vibrotactile actuators, novel mathematical and tribological modelling of the skin and mechanics of touch, and experts in the psychophysical rendering of sensation. The result will be a sensory experience where digital 3D shapes and textures are made manifest in real space via modulated, focused, ultrasound, ready for the untethered hand to feel, where next-generation wearable haptic rings provide directional vibrotactile stimulation, informing users of an object's dynamics, and where computational renderings of specific materials can be distinguished via their surface properties. The implications of this technology will be far-reaching. The computer touch-screen will be brought into the third dimension so that swipe gestures will be augmented with instinctive rotational gestures, allowing intuitive manipulation of 3D data sets and strolling about the desktop as a virtual landscape of icons, apps and files. H-Reality will transform online interactions; dangerous machinery will be operated virtually from the safety of the home, and surgeons will hone their skills on thin air.
IMAGINE
  • Title: Robots Understanding Their Actions by Imagining Their Effects
  • Duration: Feb 2017 - Feb 2021
  • Coordinator: Universität Innsbruck (Austria)
  • Partners:
    • Institut National des Sciences Appliquées de Rennes (France)
    • Georg-August Universität Göttingen (Germany)
    • Karlsruhe Institute of Technology (Germany)
    • Institut de Robòtica i Informàtica Industrial (Spain)
    • Boğaziçi Üniversitesi (Turkey)
    • Electrocycling GmbH (Germany)
  • Inria contact: Maud Marchal
  • Summary: Today's robots are good at executing programmed motions, but they do not understand their actions in the sense that they could automatically generalize them to novel situations or recover from failures. IMAGINE seeks to enable robots to understand the structure of their environment and how it is affected by their actions. “Understanding” here means the ability of the robot (a) to determine the applicability of an action along with parameters to achieve the desired effect, and (b) to discern to what extent an action succeeded, and to infer possible causes of failure and generate recovery actions. The core functional element is a generative model based on an association engine and a physics simulator. “Understanding” is given by the robot's ability to predict the effects of its actions, before and during their execution. This allows the robot to choose actions and parameters based on their simulated performance, and to monitor their progress by comparing observed to simulated behavior. This scientific objective is pursued in the context of recycling of electromechanical appliances. Current recycling practices do not automate disassembly, which exposes humans to hazardous materials, encourages illegal disposal, and creates significant threats to environment and health, often in third countries. IMAGINE will develop a TRL-5 prototype that can autonomously disassemble prototypical classes of devices, generate and execute disassembly actions for unseen instances of similar devices, and recover from certain failures. For robotic disassembly, IMAGINE will develop a multi-functional gripper capable of multiple types of manipulation without tool changes.
TACTILITY
  • Title: TACTIle feedback enriched virtual interaction through virtual realITY and beyond
  • Duration: Jul 2019 - Jun 2022
  • Coordinator: Fundacion TECNALIA Research & Innovation
  • Partners:
    • Aalborg Universitet (Denmark)
    • Università degli Studi Di Genova (Italy)
    • Institut National de Recherche en Informatique et en Automatique (France)
    • Universitat De Valencia (Spain)
    • Tecnalia Serbia Doo Beograd (Serbia)
    • Manus Machinae Bv (The Netherlands)
    • Smartex S.r.l. (Italy)
    • Immersion (France)
  • Inria contact: Ferran Argelaguet (HYBRID team)
  • Summary: TACTILITY is a multidisciplinary innovation and research action with the overall aim of incorporating rich and meaningful tactile information into novel interaction systems, through technology for closed-loop tactile interaction with virtual environments. By mimicking the characteristics of natural tactile feedback, it will substantially increase the quality of the immersive VR experience, used locally or remotely (tele-manipulation). The approach is based on transcutaneous electro-tactile stimulation delivered through electrical pulses with a high-resolution spatio-temporal distribution. To achieve this, we are working on technologies for transcutaneous stimulation, textile-based multi-pad electrodes and tactile-sensation electronic skin, coupled with ground-breaking research on the perception of elicited tactile sensations in VR. The Rainbow staff involved in this project are C. Pacchierotti and M. Marchal.

9.2.2 Collaborations in European programs, except FP7 and H2020

ADAPT
  • Title: Assistive Devices for empowering disAbled People through robotic Technologies
  • Program: Interreg VA France (Channel) England
  • Duration: Jan 2017 - Jun 2022
  • Coordinator: ESIGELEC/IRSEEM Rouen
  • Partners:
    • INSA Rennes - IRISA, LGCGM, IETR (France),
    • Université de Picardie Jules Verne - MIS (France),
    • Pôle Saint Hélier (France), CHU Rouen (France),
    • Réseau Breizh PC (France),
    • Pôle TES (France),
    • University College of London - Aspire CREATE (UK),
    • University of Kent (UK),
    • East Kent Hospitals Univ NHS Found. Trust (UK),
    • Health and Europe Centre (UK),
    • Plymouth Hospitals NHS Trust (UK),
    • Canterbury Christ Church University (UK),
    • Kent Surrey Sussex Academic Health Science Network (UK),
    • Cornwall Mobility Center (UK).
  • Inria contact: Marie Babel
  • Summary: This project aims to develop innovative assistive technologies in order to support the autonomy and to enhance the mobility of power wheelchair users with severe physical/cognitive disabilities. In particular, the objective is to design and evaluate a power wheelchair simulator as well as to design a multi-layer driving assistance system.

9.3 National initiatives

ANR Sesame

Participant: François Chaumette.

no Inria 13722, duration: 48 months.

This project started in January 2019. It involves a consortium managed by LS2N (Nantes) with LIP6 (Paris) and the Rainbow group. It aims at analysing singularity and stability issues in visual servoing (see Section 7.2.5).

Equipex Robotex

Participants: Fabien Spindler, François Chaumette.

No. Inria Rennes 6388, duration: 9 years.

Rainbow is one of the 15 French academic partners involved in the Equipex Robotex network, which started in February 2011. It is devoted to acquiring and managing major equipment in the main robotics labs in France. In the scope of this project, we obtained the humanoid robot Romeo in 2015.

Inria Challenge DORNELL

Participants: Marie Babel, Maud Marchal, Claudio Pacchierotti, François Pasteau, Louise Devigne, Marco Aggravi.

  • Title: DORNELL: A multimodal, shapeable haptic handle for mobility assistance of people with disabilities
  • Duration: November 2020 - December 2024
  • Coordinator: Marie Babel, Claudio Pacchierotti
  • Partners:
    • Potioc Inria team
    • MFX Inria team
    • Inria SED Paris
    • LGCGM (Rennes)
    • Centre de rééducation Pôle Saint Hélier (Rennes)
    • ISIR (Paris)
    • Institut des jeunes aveugles (Yzeure)
  • Inria contact: Marie Babel, Claudio Pacchierotti
  • Summary:

    While technology helps people to compensate for a broad set of mobility impairments, visual perception and/or cognitive deficiencies still significantly affect their ability to move safely and easily.

    We propose an innovative multisensory, multimodal, smart haptic handle that can be easily plugged onto a wide range of mobility aids, including white canes, precanes, walkers, and power wheelchairs. Specifically fabricated to fit the needs of a person, it provides a wide set of ungrounded tactile sensations (e.g., pressure, skin stretch, vibrations) in a portable and plug-and-play format – bringing haptics in assistive technologies all at once.

    The project will address important scientific and technological challenges, including the study of multisensory perception, the use of new materials for multimodal haptic feedback, and the development of a haptic rendering API to adapt the feedback to different assistive scenarios and users' wishes. We will co-design DORNELL with users and therapists, driving our development by their expectations and needs; a toy sketch of such a rendering API is given below.
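
To make the rendering-API idea concrete, here is a hedged Python sketch under our own assumptions (the HapticCue fields and the navigation_cue mapping are hypothetical, not the DORNELL API): a navigation event is translated into the handle's tactile channels.

    # Illustrative sketch of a haptic rendering API for a multimodal handle.
    # The class and mapping are hypothetical assumptions, not DORNELL code.
    from dataclasses import dataclass

    @dataclass
    class HapticCue:
        pressure: float = 0.0      # normalized 0..1
        skin_stretch: float = 0.0  # normalized -1..1 (negative = leftward)
        vibration: float = 0.0     # normalized 0..1

    def navigation_cue(obstacle_side, obstacle_distance, warn_distance=1.0):
        """Render an obstacle warning as a multimodal cue on the handle."""
        urgency = max(0.0, 1.0 - obstacle_distance / warn_distance)
        stretch = urgency if obstacle_side == "left" else -urgency  # steer away
        return HapticCue(pressure=urgency,
                         skin_stretch=stretch,
                         vibration=urgency if urgency > 0.8 else 0.0)

    print(navigation_cue("left", 0.3))  # strong cue pushing away from the obstacle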

BPI Lichie

Participants: Maxime Robic, John Thomas, Eric Marchand, François Chaumette.

No. Inria 14876, duration: 45 months.

This project started in March 2020. It involves a consortium managed by Airbus (Toulouse) with many companies, Onera and Inria. It aims at designing a new constellation of satellites with on-board imaging facilities. Robotics for the assembly of the satellites is also studied. Within Rainbow, this project funds the PhDs of Maxime Robic and John Thomas (see Sections 7.2.6 and 7.2.7).

ANR CAMP

Participants: P. Robuffo Giordano, Q. Delamare, F. Spindler.

  • Title: Intrinsically-Robust and Control-Aware Motion Planning for Robots in Real-World Conditions
  • Duration: October 2020 - September 2024
  • Coordinator: P. Robuffo Giordano
  • Partners:
    • LAAS (Toulouse)
    • Univ. Twente (Netherlands)
  • Inria contact: P. Robuffo Giordano
  • Summary: An effective way of dealing with the complexity of robots operating in real (uncertain) environments is the paradigm of “feedforward/feedback” or “planning/control”: in a first step, a suitable nominal trajectory (feedforward) for the robot states/controls is planned by exploiting the available information (e.g., a model of the robot and of the environment); in a second step, a feedback controller compensates online for deviations from this nominal plan. While there have been efforts toward “robust planners” or more “global controllers” (e.g., Model Predictive Control (MPC)), a truly unified approach that fully exploits the techniques of the motion planning and control/estimation communities is still missing, and the existing state of the art has several important limitations, namely (1) lack of generality, (2) lack of computational efficiency, and (3) poor robustness. All these shortcomings are a major limiting factor for the autonomy and decision-making capabilities of robots operating in complex scenarios (real-world conditions, non-negligible uncertainties, fast dynamics), which are the typical conditions in which future robots are expected to operate (see, for instance, the 2015 DARPA challenge, whose mixed results clearly showed that robustness to unmodeled effects, including perception errors, is still one of the main bottlenecks for advancing robot autonomy). In this respect, the ambition of CAMP is to (1) develop a general and unified “intrinsically-robust and control-aware motion planning framework” able to address all the above-mentioned issues, and (2) demonstrate the applicability of this new framework to real robots in challenging real-world tasks. In particular, we envisage two robotic demonstrators to show the effectiveness and generality of our methodology: (1) an indoor pick-and-place/assembly task involving a 7-dof torque-controlled arm, for a first validation in “controlled conditions”, and (2) an outdoor cooperative mobile manipulation task involving an aerial manipulator (a quadrotor UAV equipped with an onboard arm) and a skid-steering mobile robot with an onboard arm, for a final validation in much less favorable experimental conditions. The sketch below recalls this basic feedforward/feedback structure.
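
The sketch referenced in the summary: a generic, textbook feedforward/feedback structure for a 1-D toy plant (our own illustrative assumptions, not the CAMP framework).

    # Generic feedforward/feedback ("planning/control") scheme, 1-D toy example.
    # Textbook structure for illustration only, not the CAMP framework itself.
    def plan_nominal(x0, xg, T, dt):
        """Offline step: a straight-line nominal state trajectory (feedforward)."""
        n = int(T / dt)
        return [x0 + (xg - x0) * k / n for k in range(n + 1)]

    def track(x, x_ref, u_ff=0.0, kp=2.0):
        """Online step: feedback correction around the nominal plan."""
        return u_ff + kp * (x_ref - x)

    x, dt = 0.0, 0.05
    for x_ref in plan_nominal(0.0, 1.0, T=2.0, dt=dt):
        x += track(x, x_ref) * dt     # simple first-order plant for the sketch
    for _ in range(40):               # hold the goal reference to settle
        x += track(x, 1.0) * dt
    print(round(x, 3))                # close to the goal despite the imperfect model
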
ANR MULTISHARED
  • Title: Shared-Control Algorithms for Human/Multi-Robot Cooperation
  • Duration: September 2020 - August 2024
  • Coordinator: P. Robuffo Giordano
  • Inria contact: P. Robuffo Giordano
  • Summary:

    The goal of the Chaire AI MULTISHARED is to significantly advance the state of the art in multi-robot autonomy and human/multi-robot interaction, allowing a human operator to intuitively control the coordinated motion of a multi-UAV group navigating in remote environments. A strong emphasis is placed on the division of roles between the multi-robot autonomy (which controls the group's motion/configuration and makes online decisions) and the human intervention/guidance, which provides high-level commands to the group while being kept aware of the group status via VR and haptics technology.

    One goal of this project is to study how to apply modern reactive trajectory planning approaches to the problems of decentralized formation control and localization for a multi-robot group, with the long-term aim of increasing the group autonomy and decision-making possibilities (e.g., better handling of environmental constraints such as obstacles or multi-robot collisions, limited sensing/communication, limited energy/actuation, and optimality w.r.t. any criterion of interest); a minimal formation-control sketch follows this paragraph. Another goal is to investigate the problem of intuitively teleoperating multiple robots of different natures. We will start by applying innovative supervised learning approaches, building on our long experience in robotics, aiming at maximizing the similarity of action between the human body and the robotic team with respect to the chosen task. A further objective is to convey multiple pieces of feedback information in a comfortable and unobtrusive way, by advancing the state of the art of multi-type and multi-point haptic rendering techniques through machine learning. We will first study the perceptual effect of providing distributed haptic stimuli (e.g., skin stretch, vibrotactile, pressure) to learn the best way of providing the target sensations. Then, we will investigate supervised learning approaches able to map the many different pieces of information registered by the robot team to the human user, trying to match the actions of the remote robots and of the human hand with respect to the environment.
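
The formation-control sketch mentioned above: a generic consensus-style law in which each UAV combines neighbour-relative corrections with a shared human velocity command (our own minimal illustration, not the MULTISHARED algorithms).

    # Decentralized formation control with a shared human input (toy sketch).
    def formation_step(positions, neighbours, offsets, v_human, k=1.0, dt=0.1):
        """One step for all robots; each uses only neighbour-relative information."""
        new_positions = []
        for i, xi in enumerate(positions):
            # Formation term: drive each neighbour-relative position to its offset.
            ux = sum(k * (positions[j][0] - xi[0] - offsets[(i, j)][0])
                     for j in neighbours[i])
            uy = sum(k * (positions[j][1] - xi[1] - offsets[(i, j)][1])
                     for j in neighbours[i])
            # Shared-control term: every robot also follows the human command.
            new_positions.append((xi[0] + dt * (ux + v_human[0]),
                                  xi[1] + dt * (uy + v_human[1])))
        return new_positions

    # Example: three UAVs, a line formation spaced 1 m apart, operator pushes right.
    positions = [(0.0, 0.0), (0.9, 0.1), (2.2, -0.1)]
    neighbours = {0: [1], 1: [0, 2], 2: [1]}
    offsets = {(0, 1): (1.0, 0.0), (1, 0): (-1.0, 0.0),
               (1, 2): (1.0, 0.0), (2, 1): (-1.0, 0.0)}
    for _ in range(50):
        positions = formation_step(positions, neighbours, offsets, v_human=(0.2, 0.0))
    print([(round(px, 2), round(py, 2)) for px, py in positions])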

9.4 Regional initiatives

IRT Jules Verne Happy

Participant: François Chaumette.

No. Inria Rennes 13521, duration: 36 months.

This project started in June 2018. It is managed by IRT Jules Verne and carried out in collaboration with LS2N, the Acsystème company and Airbus. Its goal is to develop local sensor-based control methods for the assembly of large aircraft parts.

CominLabs MAMBO

Participants: Lev Smolentsev, Alexandre Krupa, François Chaumette, Paolo Robuffo Giordano.

  • Title: Manipulation with Multiple Drones for Soft Bodies
  • Duration: November 2020 - September 2024
  • Coordinator: LS2N (Nantes)
  • Partners:
    • LS2N (Nantes)
  • Inria contact: Alexandre Krupa
  • Summary: This project started in November 2020 and is funded by the Labex CominLabs. It is led by the ARMEN team at LS2N (Nantes) and involves the collaboration of the Rainbow project-team. Its objective is to propose a scientific framework allowing the manipulation of an object by the combined action of two drones equipped with onboard cameras and force sensors. The envisaged solution is to manipulate a deformable body (a slender beam) attached between the two drones in order to grasp an object on the floor and move it to another location. In the scope of this project, the Rainbow group works on new approaches for controlling the two drones by visual servoing using data provided by onboard RGB-D cameras (see Section 7.2.11); a generic image-based control step is sketched below.
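
The image-based control step mentioned above, in its classical generic form (standard IBVS from the visual servoing literature, not the specific MAMBO controllers): the camera velocity is computed from the feature error through the pseudo-inverse of an estimated interaction matrix, with the depth Z here assumed to come from the RGB-D sensor.

    # Classical IBVS step: v = -lambda * pinv(L_hat) * (s - s*).
    # Generic textbook law for illustration, not the MAMBO controllers.
    import numpy as np

    def ibvs_velocity(s, s_star, L_hat, lam=0.5):
        """Camera velocity from the visual feature error."""
        return -lam * np.linalg.pinv(L_hat) @ (s - s_star)

    # One image point (x, y) in normalized coordinates at depth Z (from RGB-D);
    # the 2x6 matrix below is the standard point-feature interaction matrix.
    x, y, Z = 0.2, -0.1, 1.5
    L = np.array([[-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
                  [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x]])
    v = ibvs_velocity(np.array([x, y]), np.zeros(2), L)
    print(np.round(v, 3))  # 6-DoF camera velocity driving the point to the center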
Silver Connect

Participant: Marie Babel.

  • Title: SilverConnect - Le digital au service des EHPAD
  • Duration: September 2018 - April 2021
  • Coordinator: Guillaume Roul (Hoppen)
  • Partners:
    • IETR (Rennes)
    • Hoppen (Rennes)
    • Centre de rééducation Pôle Saint Hélier (Rennes)
    • Famileo (Rennes)
  • Inria contact: Marie Babel
  • Summary: This project started in November 2018 and is supported by the Brittany region/BPI as well as FEDER. It aims at designing a fall detection framework combining vision-based algorithms with deep learning solutions.
AMBROUGERIEN

Participants: Marie Babel, Vincent Drevelle, François Pasteau, Julien Albrand.

  • Title: Autonomie, MoBilité et fauteuil ROUlant robotisé : GEolocalisation indoor et Recharge IntelligENte
  • Duration: December 2020 - December 2024
  • Coordinator: Arnaud Dekytspotter (DK Innovation)
  • Partners:
    • IETR (Rennes)
    • Hoppen (Rennes)
    • DK Innovation (Plerin)
    • LGCGM (Rennes)
  • Inria contact: Marie Babel
  • Summary: This project started in December 2020 and is supported by the Brittany region and Rennes Métropole. AMBROUGERIEN aims to support the independence of people using electric wheelchairs. A dedicated interface allows the wheelchair to move autonomously, making transfers safer, and to return to an intelligent induction recharging base. Information on the internal state of the wheelchairs facilitates fleet management.
Academic Chair IH2A

Participants: Marie Babel, Maud Marchal, Vincent Drevelle, Claudio Pacchierotti, François Pasteau, Louise Devigne, Marco Aggravi, Anne-Hélène Olivier (MimeTIC), Valérie Gouranton (HYBRID), Bruno Arnaldi (HYBRID), Florian Nouviale (HYBRID), Alexandre Audinot (HYBRID).

  • Title: Academic Chair on Innovations, Handicap, Autonomy and Accessibility (IH2A)
  • Duration: September 2020 - ...
  • Coordinator: Marie Babel
  • Partners:
    • IETR (Rennes)
    • LGCGM (Rennes)
    • Centre de rééducation Pôle Saint Hélier (Rennes)
    • M2S (Rennes)
  • Inria contact: Marie Babel
  • Summary:

    This research chair continues the research work developed at INSA Rennes on assistive robotics. The idea is to propose the most suitable technological solutions to compensate for the sensory-motor impairments that limit people's mobility and autonomy in daily-life tasks and leisure activities. The Chair thus aims to perpetuate these activities, from both a societal and a scientific/clinical point of view, and is intended to be an effective and innovative tool for the deployment of large-scale research in this area. The creation of a new type of multidisciplinary, innovative collaborative experimentation site will allow the clinical and scientific validation of the technical assistance offered, while ensuring the accessibility of the solutions deployed.

    The Chair is thus defined as a federating tool, associated with a unique experimental space, that brings together users from scientific and clinical circles for the development of adapted technologies.

10 Dissemination

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

General chair, scientific chair
  • Marie Babel was the Scientific Chair and General Co-chair of the workshop “Innovation robotique et santé - Rééducation et distanciation : peut-on rester connectés ?” organized in Rennes on December 10th, 2020.
  • François Chaumette was co-program chair of ICRA 2020 that was held in Paris in June 2020.
Member of the organizing committees
  • Claudio Pacchierotti has been Publicity Co-Chair of AsiaHaptics 2020 (Beijing, China) and Demonstrations Co-Chair of the IEEE Haptics Symposium (HAPTICS) 2020 (Washington DC, USA).

10.1.2 Scientific events: selection

Chair of conference program committees
  • Maud Marchal was Journal Program Co-Chair of the IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), Atlanta, USA, 2020.
Member of the conference program committees
  • Claudio Pacchierotti has been Associate Editor of IEEE ICRA 2020
  • Julien Pettré has been Associate Editor of IROS 2020 and of the IEEE VR 2020 conference track
  • P. Robuffo Giordano has been Associate Editor of IEEE ICRA 2020 and IEEE ICRA 2021
  • P. Robuffo Giordano co-organized the Workshop on “Power On And Go Robots” at the RSS 2020 Conference
  • Eric Marchand has been Associate Editor of IEEE ICRA 2020 and IEEE ICRA 2021.
  • Maud Marchal has been a Program Committee member of the 2020 ACM SIGGRAPH Conference on Motion, Interaction and Games; the ACM SIGGRAPH/Eurographics Symposium on Computer Animation 2020; and XR SIGGRAPH Asia 2020.
Reviewer
  • Eric Marchand: IEEE ICRA (4), IEEE IROS (2)
  • Alexandre Krupa: ICRA 2020 (1), BioRob 2020 (1)
  • Maud Marchal: ACM SIGGRAPH Asia (1), Eurohaptics (1)
  • Marie Babel: IEEE ICRA (3)
  • Vincent Drevelle: IEEE IROS (2), IEEE CDC (1)
  • François Chaumette: IEEE ICRA 2021 (1)
  • Paolo Robuffo Giordano: IEEE ICRA 2021 (2), IEEE/RSJ IROS (2)
  • Julien Pettré: ISMAR 2020 (1)

10.1.3 Journal

Member of the editorial boards
  • Eric Marchand is a Senior Editor of IEEE Robotics and Automation Letters (RA-L).
  • Alexandre Krupa is Associate Editor for the IEEE Transactions on Robotics
  • Maud Marchal is Associate Editor of IEEE Transactions on Visualization and Computer Graphics, IEEE Computer Graphics and Applications, and Computers & Graphics. She is a Review Editor of Frontiers, section on Haptics.
  • François Chaumette: Editor of the IEEE Transactions on Robotics, Editorial Board of the Int. Journal of Robotics Research, Board Member of the Springer Encyclopedia of Robotics.
  • P. Robuffo Giordano is Editor for the IEEE Transactions on Robotics
  • P. Robuffo Giordano is Guest Editor for the special issue on “Power-On-and-Go Autonomy: Right, Out of the Box” in Autonomous Robots, 2020
  • Julien Pettré is Associate Editor of Wiley's Computer Animation and Virtual Worlds (CAVW) and Wiley's Computer Graphics Forum
Reviewer - reviewing activities
  • Alexandre Krupa: IEEE RA-L (3)
  • Eric Marchand: IJRR (1)
  • Maud Marchal: Eurohaptics (1), ACM SIGGRAPH Asia (1), JFIG (3)
  • P. Robuffo Giordano: IJRR (1), IEEE RA-L (3), IEEE T-CST (1)
  • Marie Babel: IEEE TRO (1)
  • François Chaumette: IEEE TRO (1), IEEE RA-L (2), IEEE Trans. on Industrial Electronics (1), IEEE Trans. on Mechatronics (1)
  • Claudio Pacchierotti: IEEE TOH (14), Intl. J. Human Computer Studies (1), IEEE TMECH (2), IEEE RA-L (1), ACM Computing Surveys (2), J. Micro-bio robotics (3), Intl. J. Humanoid Robotics (1)
  • Julien Pettré: C&G (2), Robotics and Autonomous Systems (1), Computing Surveys (1), Wiley's CGF (2), IEEE T-RO

10.1.4 Invited talks

  • Marie Babel, “Robotique d'assistance et aide à la mobilité : compenser, rééduquer, interagir”, Journée IA & Robotique CEA/Inria, 15th December, 2020
  • Marie Babel, “Assistive robotics and co-creation with clinicians: user-centered design of mobility aids”, University College of London IOMS Research Webinar Series, 27th August, 2020
  • P. Robuffo Giordano, “Human-Assisted Robotics”, 13th International Workshop on Human-Friendly Robotics (HFR 2020), October 2020
  • P. Robuffo Giordano, “Formation Control and Localization with Onboard Sensing and Local Communication”. Frontiers in Mathematical Computing, CIMAT, October 2020
  • P. Robuffo Giordano, “An Introduction to Formation Control and Localization of Multi-Robot Systems”. 2020 IEEE RAS Summer School on Multi-Robot Systems, September 2020
  • P. Robuffo Giordano. “Human-assisted robotics”. ICRA 2020 Workshop on Shared Autonomy: Learning and Control, June 2020

10.1.5 Leadership within the scientific community

  • Claudio Pacchierotti is Chair of the IEEE Technical Committee on Haptics and Secretary of the Eurohaptics Society.
  • Maud Marchal is a Steering Committee Member of the French Chapter of Eurographics. She is a committee member of the Best PhD Thesis Award Committee for GdR IG-RV.
  • François Chaumette is a member of the Scientific Council of the Mathematics and Computer Science Department of Inrae, RTE, Vedecom, and the ImVIA lab in Dijon and Le Creusot. He is also a founding member of the Scientific Council of the “GdR Robotique”.

10.1.6 Scientific expertise

  • Marie Babel serves as an expert for the International Mission of the French Research Ministry (MEIRIES) - Campus France.
  • Maud Marchal serves as a reviewer of ANRT.
  • François Chaumette served as a jury member for the Best French PhD Thesis award of the GdR Robotique. He also serves as a member of the “Haut Comité des Très Grandes Infrastructures de Recherche” of the French Ministry of Research.
  • P. Robuffo Giordano served as an expert/reviewer for the ICRA Milestone Paper Award, for the Best PhD Thesis Award Committee for French theses in robotics (GdR Robotique 2019), for the evaluation of EU projects in the H2020-ICT-46 and 47 calls, for FISR projects of the Italian Ministry for University and Research, and for research projects from the ANR.

10.1.7 Research administration

  • Eric Marchand is the head of "Digital Signals and Images, Robotics" department at IRISA.
  • Maud Marchal is a member of the “Bureau” of Institut Universitaire de France (IUF).
  • François Chaumette serves as the president of the committee in charge of all the temporary recruitments (“Commission Personnel”) at Inria Rennes-Bretagne Atlantique and IRISA. He is also a member of the Head team of Inria Rennes-Bretagne Atlantique, and of the Scientific Steering Committee (COSS) of IRISA. Finally, he is a member of the Inria COERLE committee (in charge of the ethical aspects of all Inria research).
  • Alexandre Krupa is a member of the CUMIR (“Commission des Utilisateurs des Moyens Informatiques pour la Recherche”) of Inria Rennes-Bretagne Atlantique.
  • Alexandre Krupa serves as Inria representative (correspondent) at the IRT Jules Verne.

10.2 Teaching - Supervision - Juries

Teaching

Marie Babel:

  • Master INSA2: “Robotics”, 26 hours, M1, INSA Rennes
  • Master INSA1: “Concepts de la logique à la programmation”, 20 hours, L3, INSA Rennes
  • Master INSA1: “Probability”, 15 hours, L3, INSA Rennes
  • Master INSA1: “Langage C”, 12 hours, L3, INSA Rennes
  • Master INSA2: “Computer science project”, 30 hours, M1, INSA Rennes
  • Master INSA1: “Practical studies”, 16 hours, L3, INSA Rennes
  • Master INSA2: “Image analysis”, 26 hours, M1, INSA Rennes
  • Master INSA1: “Remedial math courses”, 50 hours, L3, INSA Rennes

François Chaumette:

  • Master SISEA: “Robot Vision”, 12 hours, M2, Université de Rennes 1
  • Master ENS: “Visual servoing”, 6 hours, M2, Ecole Nationale Supérieure de Rennes
  • Master ESIR3: “Visual servoing”, 8 hours, M2, Ecole supérieure d'ingénieurs de Rennes

Vincent Drevelle:

  • Master ILA: “Interactive transverse project”, 30 hours, M2, Université de Rennes 1
  • Master Info: “Artificial intelligence”, 20 hours, M1, Université de Rennes 1
  • Licence Info: “Computer systems architecture”, 42 hours, L1, Université de Rennes 1
  • Portail Info-Elec: “Discovering programming and electronics”, 22 hours, L1, Université de Rennes 1
  • Licence Miage: “Computer programming”, 76 hours, L3, Université de Rennes 1
  • Master Elec: “Instrumentation, localization, GPS”, 4 hours, M2, Université de Rennes 1
  • Master Elec: “Multisensor data fusion”, 20 hours, M2, Université de Rennes 1
  • Master IL: “Mobile robotics”, 32 hours, M2, Université de Rennes 1

Alexandre Krupa:

  • Master FIP TIC-Santé: “Ultrasound visual servoing”, 6 hours, M2, Télécom Physique Strasbourg
  • Master ESIR3: “Ultrasound visual servoing”, 9 hours, M2, Esir Rennes

Maud Marchal:

  • Master INSA: “Computer Graphics”, 26 hours, M1, INSA Rennes.
  • Licence INSA: “Algorithms and Complexity”, 26 hours, L3, INSA Rennes.
  • Master SIF, “Virtual Reality and Interactions”, 4 hours, M2, Université Rennes 1.
  • Master SIF, “Computer Graphics”, 8 hours, M2, Université Rennes 1.
  • Master Artificial Intelligence and Advanced Visual Computing, “Immersion and interaction with virtual worlds”, M2, 8 hours, Ecole Polytechnique.

Eric Marchand:

  • Master Esir2: “Colorimetry”, 24 hours, M1, Esir Rennes
  • Master Esir2: “Computer vision: geometry”, 24 hours, M1, Esir Rennes
  • Master Esir3: “Special effects”, 24 hours, M2, Esir Rennes
  • Master Esir3: “Computer vision: tracking and recognition”, 24 hours, M2, Esir Rennes
  • Master MRI: “Computer vision”, 24 hours, M2, Université de Rennes 1
  • Master ENS: “Vision for robots”, 16 hours, M2, ENS Rennes
  • Master MIA: “Augmented reality”, 4 hours, M2, Université de Rennes 1

Julien Pettré:

  • Master SIF: "Motion for Animation and Robotics", 6 hours, Université de Rennes 1
  • Master Artificial Intelligence and Advanced Visual Computing: "Advanced 3D graphics", 3 hours, Ecole Polytechnique

Claudio Pacchierotti:

  • Master SIF: "Virtual Reality and Multi-Sensory Interaction", 4 hours, INSA Rennes
  • Master 2 Robotics: “Medical robotics”, 2 hours, Univ. Roma “La Sapienza”
Supervision
  • Ph.D. in progress: John Thomas, “Sensor-based control for assembly in congested area”, started in December 2020, supervised by François Chaumette.
  • Ph.D. in progress: Maxime Robic, “Visual servoing of a satellite constellation”, started in November 2020, supervised by Eric Marchand and François Chaumette.
  • Ph.D. in progress: Lev Smolentsev, “Manipulation of soft bodies with multiple drones”, started in November 2020, supervised by Alexandre Krupa, François Chaumette and Isabelle Fantoni (LS2N, Nantes).
  • Ph.D. in progress: Emilie Leblong, “Prise en compte des interactions sociales dans un simulateur de conduite de fauteuil roulant électrique en réalité virtuelle : favoriser l’apprentissage pour une mobilité inclusive”, started in October 2020, supervised by Marie Babel, Anne-Hélène Olivier (MimeTIC group).
  • Ph.D. in progress: Julien Albrand, “Conduite assistée d’un fauteuil roulant : navigation par intervalles à l'aide de beacons Ultra Large Bande”, started in October 2020, supervised by Marie Babel, Vincent Drevelle, Eric Marchand.
  • Ph.D. in progress: Fouad Makiyeh, “Shape servoing based on visual information using RGB-D sensor for dexterous manipulation of 3D compliant objects”, started in September 2020, supervised by Alexandre Krupa, Maud Marchal and François Chaumette.
  • Ph.D. in progress: Lisheng Kuang, “Design and development of novel wearable haptic interfaces for teleoperation of robots”, started in March 2020, supervised by Claudio Pacchierotti and Paolo Robuffo Giordano.
  • Ph.D. in progress: Adèle Colas, “Modélisation de comportements collectifs réactifs et expressifs pour la réalité virtuelle”, started in November 2019, supervised by Claudio Pacchierotti, Anne-Hélène Olivier, Ludovic Hoyet (MimeTIC group) and Julien Pettré
  • Ph.D. in progress: Sebastian Vizcay, “Design of interaction techniques for electrotactile haptic interface”, started in November 2019, supervised by Maud Marchal, Claudio Pacchierotti and Ferran Argelaguet (Hybrid group)
  • Ph.D. in progress: Samuel Felton “Deep Learning for visual servoing”, started in October 2019, supervised by Eric Marchand and Elisa Fromont (Lacodam group)
  • Ph.D. in progress: Mathieu Gonzalez “SLAM in time varying environment”, started in October 2019, supervised by Eric Marchand and Jérome Royan (IRT B<>COM)
  • Ph.D. in progress: Pascal Brault, “Planification et optimisation de trajectoires robustes aux incertitudes paramétriques pour des taches robotiques fondées sur l'usage de capteurs”, started in September 2019, supervised by Paolo Robuffo Giordano and Quentin Delamare
  • Ph.D. in progress: Alberto Jovane, “Modélisation de mouvements réactifs et comportements non verbaux pour la création d’acteurs digitaux pour la réalité virtuelle”, started in September 2019, supervised by Marc Christie (MimeTIC group), Claudio Pacchierotti, Ludovic Hoyet (MimeTIC group) and Julien Pettré
  • Ph.D. in progress: Guillaume Vailland, “Outdoor wheelchair assisted navigation: reality versus virtuality”, started in November 2018, supervised by Marie Babel and Valérie Gouranton (Hybrid group)
  • Ph.D. in progress: Hugo Brument, “Design of interaction techniques based on human locomotion for navigation in virtual reality”, started in October 2018, supervised by Maud Marchal, Anne-Hélène Olivier(MimeTIC group) and Ferran Argelaguet (Hybrid group)
  • Ph.D. in progress: Alexander Oliva, “Coupling Vision and Force for Robotic Manipulation”, started in October 2018, supervised by François Chaumette and Paolo Robuffo Giordano
  • Ph.D. in progress: Ketty Favre “Lidar-based localization”, started in October 2018, supervised by Eric Marchand, Muriel Pressigout and Luce Morin (IMAGE group)
  • Ph.D. in progress: Fabien Grzeskowiak, “Crowd simulation for testing robot navigation in dense crowds”, started in October 2018, supervised by Marie Babel and Julien Pettré
  • Ph.D. in progress: Xi Wang “Robustness of Visual SLAM techniques to light changing conditions”, started in September 2018, supervised by Eric Marchand and Marc Christie (MimeTIC group)
  • Ph.D. in progress: Zane Zake (IRT Jules Verne), “Visual servoing for cable-driven parallel robots”, started in January 2018, supervised by Stéphane Caro (LS2N) and François Chaumette
  • Ph.D. in progress: Javad Amirian, “Crowd motion prediction for robot navigation in dense crowds”, started in January 2018, supervised by Jean-Bernard Hayet (CIMAT, Guanajuato) and Julien Pettré
  • Ph.D. in progress: Florian Berton, “Gaze analysis for crowd behaviours study”, started in October 2017, supervised by Anne-Hélène Olivier (MimeTIC group), Ludovic Hoyet (MimeTIC group) and Julien Pettré
  • Ph.D. in progress: Antonin Bernardin, “Interactive physics-based simulation of the suction phenomenon”, started in September 2017, supervised by Maud Marchal and Christian Duriez (Defrost group)
  • Ph.D. in progress: Tairan Yin, “Authoring Crowded Dynamic Scenes for Virtual Reality”, started in November 2020, supervised by J. Pettré and M.-P. Cani (Ecole Polytechnique)
  • Ph.D. in progress: Cedric De Almeida Braga, “Dense crowd motion analysis”, started in January 2020, supervised by J. Pettré
  • Ph.D. in progress: Thomas Chatagnon, “Dense crowd modeling and simulation”, started in November 2020, supervised by J. Pettré and C. Pontonnier (ENS Rennes)
  • Ph.D. in progress: Vicenzo Abichequer Sangalli, “Reactive and Expressive virtual characters for Virtual Reality”, supervised by J. Pettré and C. O'Sullivan (Trinity College Dublin)
  • Ph.D. defended: Xavier De Tinguy de la Girouliére, “Conception de techniques d'interaction multisensorielles pour la manipulation dextre d'objets en réalité virtuelle”, defended in December 2020, supervised by Maud Marchal and Claudio Pacchierotti
  • Ph.D. defended: Rahaf Rahal, “Shared Control and Authority Distribution for Robotic Teleoperation”, defended in December 2020, supervised by Paolo Robuffo Giordano and Claudio Pacchierotti 67
  • Ph.D. defended: Romain Lagneau, “Shape control of deformable objects by adaptive visual servoing”, defended in December 2020, supervised by Alexandre Krupa and Maud Marchal 66
  • Ph.D. defended: Agniva Sengupta, “Visual Tracking of Deformable Objects with RGB-D Camera”, defended in June 2020, supervised by Alexandre Krupa and Eric Marchand 68
  • Ph.D. defended: Hadrien Gurnel, “Needle comanipulation with haptic guidance for percutaneous interventions”, defended in January 2020, supervised by Alexandre Krupa, Maud Marchal and Laurent Launay (IRT b<>com) 65
Internships
  • Muhammad Nazeer, École Centrale de Nantes (EMARO), “AI-enabled control system for the intuitive and effective control of a team of drones”, supervised by C. Pacchierotti.
  • Riccardo Arciulo, Univ. of Rome “La Sapienza”, Italy, “Novel Solution For Human-Robot physical interaction in collaborative tasks”, supervised by C. Pacchierotti and P. Robuffo Giordano
  • Samuel Felton, INSA Rennes - Univ. Rennes 1, “Deep Learning for Visual Servoing”, supervised by E. Marchand
  • Zoé Levrel, INSA Rennes, “Design of interaction techniques using mid-air haptic interface for navigating with a wheelchair simulator”, supervised by M. Marchal and M. Babel
  • Jeanne Hecquard, INSA Rennes, “Design of virtual environments for navigation in virtual museums with a wheelchair”, supervised by M. Marchal and M. Babel
  • Juliette Grosset, ENIB Brest, “Object recognition with event-based camera for wheelchair navigation”, supervised by M. Babel and F. Morbidi (UPJV)
  • Alexandre Duvivier, INSA Rennes, “Smart walker and multisensory feedback”, supervised by M. Babel and F. Pasteau
  • Pierre-Antoine Cabaret, INSA Rennes, “UWB data analysis for wheelchair orientation estimation”, supervised by M. Babel and F. Pasteau
  • Albert Khim, European Master Emaro, Ecole Centrale de Nantes, "Combining Gaussian filters and photometric moments in visual servoing", supervised by François Chaumette.
Juries
  • Alexandre Krupa: Mohammad Alkhatib (Ph.D., reviewer, PRISME, Univ. Orléans)
  • Paolo Robuffo Giordano: External Reviewer for a professorship opening at the University of Klagenfurt, Austria
  • Julien Pettré: Jennifer Vandoni (Ph.D., member, Paris Saclay) and Maxime Garcia (Ph.D., reviewer, Grenoble Alpes)
  • Eric Marchand: Ioannis Avrithis (HDR president, univ de Rennes 1), Xuchong Qiu (Ph.D. reviewer, Ecole Nat. des Ponts et Chaussées), Giorgia Pitteri (Ph.D. reviewer, univ. de Bordeaux), Florian Berton (Ph.D., univ de Rennes 1, 2020), Amaury Louarn (Ph.D. president, univ de Rennes 1), Antoine Legouhy (Ph.D. president, univ de Rennes 1)
  • Marie Babel: Marie-Pierre Pacaux-Lemoine (HdR, member, LAMIH, Université Polytechnique Hauts de France, Valenciennes)
  • François Chaumette: Jihong Zhu (Ph.D, reviewer, Lirmm, Montpellier), Romain Lagneau (Ph.D, president, Irisa)
  • Maud Marchal: Florence Zara (HdR, member, Univ. Lyon 1), Charles Barnouin (Ph.D, reviewer, Univ. Lyon 1), Thibault Louis (Ph.D, reviewer, Univ. Grenoble Alpes)

10.3 Popularization

10.3.1 Internal or external Inria responsibilities

  • Eric Marchand is an Editor for Interstices

10.3.2 Articles and contents

  • Le Magazine de la Santé, broadcast on France 5 on February 10, 2020, featured a report on Rainbow's power wheelchair driving assistance. It highlighted user feedback and experience, the clinical trials with the Pôle Saint Hélier rehabilitation center, and the benefits for people with disabilities.
  • Eric Marchand wrote the entry devoted to visual tracking for the Springer Encyclopedia of Robotics 64.
  • François Chaumette wrote the entry on visual servoing for the Springer Encyclopedia of Robotics 59. He also revised his entries for the Encyclopedia of Computer Vision 60 and for the Encyclopedia of Systems and Control 58.

10.3.3 Interventions

Due to the visibility of our experimental platforms, the team is often requested to present its research activities to students, researchers or industry. Our panel of demonstrations allows us to highlight recent results concerning the positioning of an ultrasound probe by visual servoing, grasping and dual arm manipulation by Romeo, vision-based shared control using our haptic device for object manipulation, the control of a fleet of quadrotors, vision-based detection and tracking for space navigation in a rendezvous context, the semi-autonomous navigation of a wheelchair, the power wheelchair simulator and augmented reality applications. Some of these demonstrations are available as videos on VispTeam YouTube channel (https://www.youtube.com/user/VispTeam/videos).

In particular, the French Secretary of State for People with Disabilities, Sophie Cluzel, visited Pôle Saint Hélier in December 2020 to see how the Rainbow smart wheelchair technology and the virtual reality wheelchair driving simulator benefit patients with mobility difficulties.

11 Scientific production

11.1 Major publications

  • 1 article Marco Aggravi, Claudio Pacchierotti and Paolo Robuffo Giordano. Connectivity-Maintenance Teleoperation of a UAV Fleet with Wearable Haptic Feedback. IEEE Transactions on Automation Science and Engineering, June 2020, 1-20.
  • 2 article Abderrahmane Kheddar, Stéphane Caron, Pierre Gergondet, Andrew Comport, Arnaud Tanguy, Christian Ott, Bernd Henze, George Mesesan, Johannes Englsberger, Máximo A. Roa, Pierre-Brice Wieber, François Chaumette, Fabien Spindler, Giuseppe Oriolo, Leonardo Lanari, Adrien Escande, Kévin Chappellet, Fumio Kanehiro and Patrice Rabate. Humanoid robots in aircraft manufacturing. IEEE Robotics and Automation Magazine, 26(4), December 2019, 30-45.
  • 3 article Romain Lagneau, Alexandre Krupa and Maud Marchal. Automatic Shape Control of Deformable Wires based on Model-Free Visual Servoing. IEEE Robotics and Automation Letters, 5(4), 2020, 5252-5259 (also presented at IROS 2020).

11.2 Publications of the year

International journals

  • 4 article Marco Aggravi, Ahmed Alaaeldin Said Elsherif, Paolo Robuffo Giordano and Claudio Pacchierotti. Haptic-enabled decentralized control of a heterogeneous human-robot team for search and rescue in partially-known environments. IEEE Robotics and Automation Letters, March 2021.
  • 5 article Marco Aggravi, Daniel A. L. Estima, Alexandre Krupa, Sarthak Misra and Claudio Pacchierotti. Haptic teleoperation of flexible needles combining 3D ultrasound guidance and needle tip force feedback. IEEE Robotics and Automation Letters, March 2021.
  • 6 article Marco Aggravi, Claudio Pacchierotti and Paolo Robuffo Giordano. Connectivity-Maintenance Teleoperation of a UAV Fleet with Wearable Haptic Feedback. IEEE Transactions on Automation Science and Engineering, June 2020, 1-20.
  • 7 article Guglielmo S. Aglietti, Ben Taylor, Simon Fellowes, Sean Ainley, Dan Tye, Christopher Cox, Ali Zarkesh, Andrea Mafficini, N. Vinkoff, Katie Bashford, Thierry Salmon, Ingo Retat, Christopher Burgess, Alexander Hall, Thomas Chabot, Keyvan Kanani, Aurélien Pisseloup, Cesar Bernal, François Chaumette, Alexandre Pollini and Willem Steyn. RemoveDEBRIS: An in-orbit demonstration of technologies for the removal of space debris. Aeronautical Journal - New Series, 2020, 1-23.
  • 8 article Guglielmo Aglietti, Ben Taylor, Simon Fellowes, Thierry Salmon, Ingo Retat, Alexander Hall, Thomas Chabot, Aurélien Pisseloup, Christopher M. Cox, Ali Zarkesh, Andrea Mafficini, N. Vinkoff, Katie Bashford, Cesar Bernal, François Chaumette, Alexandre Pollini and Willem Steyn. The active space debris removal mission RemoveDebris. Part 2: in orbit operations. Acta Astronautica, 168, March 2020, 310-322.
  • 9 article Florian Berton, Fabien Grzeskowiak, Alexandre Bonneau, Alberto Jovane, Marco Aggravi, Ludovic Hoyet, Anne-Hélène Olivier, Claudio Pacchierotti and Julien Pettré. Crowd Navigation in VR: exploring haptic rendering of collisions. IEEE Transactions on Visualization and Computer Graphics, 2020, 12 p.
  • 10 article Francesco Chinello, Monica Malvezzi, Domenico Prattichizzo and Claudio Pacchierotti. A modular wearable finger interface for cutaneous and kinesthetic interaction: control and evaluation. IEEE Transactions on Industrial Electronics, 67(1), January 2020, 706-716.
  • 11 article Marco Cognetti, Marco Aggravi, Claudio Pacchierotti, Paolo Salaris and Paolo Robuffo Giordano. Perception-Aware Human-Assisted Navigation of Mobile Robots on Persistent Trajectories. IEEE Robotics and Automation Letters, 5(3), July 2020, 4711-4718.
  • 12 article Xavier De Tinguy, Claudio Pacchierotti, Anatole Lécuyer and Maud Marchal. Capacitive Sensing for Improving Contact Rendering with Tangible Objects in VR. IEEE Transactions on Visualization and Computer Graphics, January 2021.
  • 13 article Louise Devigne, Marco Aggravi, Morgane Bivaud, Nathan Balix, Stefan Teodorescu, Tom Carlson, Tom Spreters, Claudio Pacchierotti and Marie Babel. Power wheelchair navigation assistance using wearable vibrotactile haptics. IEEE Transactions on Haptics, January 2020, 1-6.
  • 14 article Jason Forshaw, Guglielmo Aglietti, Simon Fellowes, Thierry Salmon, Ingo Retat, Alexander Hall, Thomas Chabot, Aurélien Pisseloup, Daniel Tye, Cesar Bernal, François Chaumette, Alexandre Pollini and Willem Steyn. The active space debris removal mission RemoveDebris. Part 1: from concept to launch. Acta Astronautica, 168, March 2020, 293-309.
  • 15 article Mathieu Gonzalez, Amine Kacete, Albert Murienne and Eric Marchand. L6DNet: Light 6 DoF Network for Robust and Precise Object Pose Estimation with Small Datasets. IEEE Robotics and Automation Letters, February 2021.
  • 16 article Thomas Howard, Maud Marchal, Anatole Lécuyer and Claudio Pacchierotti. PUMAH: Pan-tilt Ultrasound Mid-Air Haptics for larger interaction workspace in virtual reality. IEEE Transactions on Haptics, January 2020, 1-6.
  • 17 article Salma Jiddi, Philippe Robert and Eric Marchand. Detecting Specular Reflections and Cast Shadows to Estimate Reflectance and Illumination of Dynamic Indoor Scenes. IEEE Transactions on Visualization and Computer Graphics, 2020, 1-12.
  • 18 article Romain Lagneau, Alexandre Krupa and Maud Marchal. Automatic Shape Control of Deformable Wires based on Model-Free Visual Servoing. IEEE Robotics and Automation Letters, 5(4), 2020, 5252-5259.
  • 19 article Aleksander Lillienskiold, Rahaf Rahal, Paolo Robuffo Giordano, Claudio Pacchierotti and Ekrem Misimi. Human-Inspired Haptic-Enabled Learning from Prehensile Move Demonstrations. IEEE Transactions on Systems, Man, and Cybernetics: Systems, January 2021.
  • 20 article Sean Lynch, Richard Kulpa, Laurentius Antonius Meerhoff, Anthony Sorel, Julien Pettré and Anne-Hélène Olivier. Influence of path curvature on collision avoidance behaviour between two walkers. Experimental Brain Research, 2020.
  • 21 article Eric Marchand. Direct visual servoing in the frequency domain. IEEE Robotics and Automation Letters, 5(2), April 2020, 620-627.
  • 22 article Victor Mercado, Maud Marchal and Anatole Lécuyer. ENTROPiA: Towards Infinite Surface Haptic Displays in Virtual Reality Using Encountered-Type Rotating Props. IEEE Transactions on Visualization and Computer Graphics, 27(3), March 2021, 2237-2243.
  • 23 article Youssef Michel, Rahaf Rahal, Claudio Pacchierotti, Paolo Robuffo Giordano and Dongheui Lee. Bilateral teleoperation with adaptive impedance control for contact tasks. IEEE Robotics and Automation Letters, March 2021.
  • 24 article Antonio Mucherino, Jérémy Omer, Ludovic Hoyet, Paolo Robuffo Giordano and Franck Multon. An application-based characterization of dynamical distance geometry problems. Optimization Letters, 14(2), 2020, 493-507.
  • 25 article Beatriz Pascual-Escudero, Abhilash Nayak, Sébastien Briot, Olivier Kermorgant, Philippe Martinet, Mohab Safey El Din and François Chaumette. Complete Singularity Analysis for the Perspective-Four-Point Problem. International Journal of Computer Vision, 2020.
  • 26 article Rahaf Rahal, Giulia Matarese, Marco Gabiccini, Alessio Artoni, Domenico Prattichizzo, Paolo Robuffo Giordano and Claudio Pacchierotti. Caring about the human operator: haptic shared control for enhanced user comfort in robotic telemanipulation. IEEE Transactions on Haptics, 13(1), January 2020, 197-203.
  • 27 article Wouter Van Toll and Julien Pettré. Synchronizing navigation algorithms for crowd simulation via topological strategies. Computers and Graphics, April 2020.
  • 28 article Wouter Van Toll, Roy Triesscheijn, Marcelo Kallmann, Ramon Oliva, Nuria Pelechano, Julien Pettré and Roland Geraerts. Comparing navigation meshes: Theoretical analysis and practical metrics. Computers and Graphics, July 2020.
  • 29 article Steeven Villa Salazar, Claudio Pacchierotti, Xavier De Tinguy, Anderson Maciel and Maud Marchal. Altering the Stiffness, Friction, and Shape Perception of Tangible Objects in Virtual Reality Using Wearable Haptics. IEEE Transactions on Haptics, January 2020.
  • 30 article Eric M. Young, David Gueorguiev, Katherine J. Kuchenbecker and Claudio Pacchierotti. Compensating for Fingertip Size to Render Tactile Cues More Accurately. IEEE Transactions on Haptics, January 2020.
  • 31 article Zane Zake, François Chaumette, Nicolò Pedemonte and Stéphane Caro. Robust 2 1/2D Visual Servoing of a Cable-Driven Parallel Robot Thanks to Trajectory Tracking. IEEE Robotics and Automation Letters, 5(2), January 2020, 660-667.
  • 32 article Katja Zibrek, Benjamin Niay, Anne-Hélène Olivier, Ludovic Hoyet, Julien Pettré and Rachel McDonnell. The effect of gender and attractiveness of motion on proximity in virtual reality. ACM Transactions on Applied Perception, 17(4), November 2020, 1-15.

International peer-reviewed conferences

  • 33 inproceedings Javad Amirian, Bingqing Zhang, Francisco Valente Castro, Juan José Baldelomar, Jean-Bernard Hayet and Julien Pettré. OpenTraj: Assessing Prediction Complexity in Human Trajectories Datasets. ACCV 2020 - 15th Asian Conference on Computer Vision, Kyoto / Virtual, Japan (https://accv2020.github.io/), November 2020.
  • 34 inproceedings Florian Berton, Ludovic Hoyet, Anne-Hélène Olivier, Julien Bruneau, Olivier Le Meur and Julien Pettré. Eye-Gaze Activity in Crowds: Impact of Virtual Reality and Density. VR 2020 - 27th IEEE Conference on Virtual Reality and 3D User Interfaces, Atlanta, United States, March 2020, 1-10.
  • 35 inproceedings Hugo Brument, Maud Marchal, Anne-Hélène Olivier and Ferran Argelaguet Sanz. Influence of Dynamic Field of View Restrictions on Rotation Gain Perception in Virtual Environments. EuroVR 2020 - 17th EuroVR International Conference, Valencia, Spain, October 2020, 20-40.
  • 36 inproceedings Hugo Brument, Anne-Hélène Olivier, Maud Marchal and Ferran Argelaguet. Does the Control Law Matter? Characterization and Evaluation of Control Laws for Virtual Steering Navigation. ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, Florida, United States, December 2020, 1-10.
  • 37 inproceedings T. Duverné, T. Rougnant, François Le Yondre, F. Berton, J. Bruneau, Julien Pettré, Ludovic Hoyet and Anne-Hélène Olivier. Effect of Social Settings on Proxemics During Social Interactions in Real and Virtual Conditions. 17th International Conference on Virtual Reality and Augmented Reality, EuroVR 2020, Lecture Notes in Computer Science, vol. 12499, Valencia, Spain, 2020, 3-19.
  • 38 inproceedings Fabien Grzeskowiak, Marie Babel, Julien Bruneau and Julien Pettré. Toward Virtual Reality-based Evaluation of Robot Navigation among People. VR 2020 - 27th IEEE Conference on Virtual Reality and 3D User Interfaces, Atlanta, United States, 2020.
  • 39 inproceedings Alberto Jovane, Amaury Louarn and Marc Christie. Topology-aware Camera Control for Real-time Applications. MIG 2020 - Motion, Interaction and Games, N. Charleston, United States, October 2020, 1-9.
  • 40 inproceedings Romain Lagneau, Alexandre Krupa and Maud Marchal. Active Deformation through Visual Servoing of Soft Objects. ICRA 2020 - IEEE International Conference on Robotics and Automation, Paris, France, May 2020, 1-7.
  • 41 inproceedings Victor Mercado, Maud Marchal and Anatole Lécuyer. Design and Evaluation of Interaction Techniques Dedicated to Integrate Encountered-Type Haptic Displays in Virtual Environments. IEEE Conference on Virtual Reality and 3D User Interfaces, Atlanta / Virtual, United States, March 2020, 230-238.
  • 42 inproceedings Benjamin Niay, Anne-Hélène Olivier, Katja Zibrek, Julien Pettré and Ludovic Hoyet. Walk Ratio: Perception of an Invariant Parameter of Human Walk on Virtual Characters. SAP 2020 - ACM Symposium on Applied Perception, Virtual Event, United States, September 2020, 1-9.
  • 43 inproceedings Pedro A. Patlan-Rosales and Alexandre Krupa. Robotic assistance for ultrasound elastography providing autonomous palpation with teleoperation and haptic feedback capabilities. BioRob 2020 - 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics, New York, United States, November 2020, 1-6.
  • 44 inproceedings Ole-Magnus Pedersen, Ekrem Misimi and François Chaumette. Grasping Unknown Objects by Coupling Deep Reinforcement Learning, Generative Adversarial Networks, and Visual Servoing. ICRA 2020 - IEEE International Conference on Robotics and Automation, Paris, France, May 2020, 1-8.
  • 45 inproceedings Filippo Sanfilippo and Claudio Pacchierotti. A Low-Cost Multi-Modal Auditory-Visual-Tactile Framework for Remote Touch. ICICT 2020 - 3rd International Conference on Information and Computer Technologies, San Jose, United States, March 2020, 1-6.
  • 46 inproceedings Agniva Sengupta, Romain Lagneau, Alexandre Krupa, Eric Marchand and Maud Marchal. Simultaneous Tracking and Elasticity Parameter Estimation of Deformable Objects. ICRA 2020 - IEEE International Conference on Robotics and Automation, Paris, France, May 2020, 1-7.
  • 47 inproceedings Alejandro Suárez-Hernández, Thierry Gaugry, Javier Segovia-Aguas, Antonin Bernardin, Carme Torras, Maud Marchal and Guillem Alenyà. Leveraging Multiple Environments for Learning and Decision Making: a Dismantling Use Case. IROS 2020 - IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas / Virtual, United States, October 2020, 6902-6908.
  • 48 inproceedings Guillaume Vailland, Louise Devigne, François Pasteau, Florian Nouviale, Bastien Fraudet, Emilie Leblong, Marie Babel and Valérie Gouranton. VR based Power Wheelchair Simulator: Usability Evaluation through a Clinically Validated Task with Regular Users. IEEE Conference on Virtual Reality and 3D User Interfaces, IEEE VR 2021, Lisbon, Portugal, March 2021.
  • 49 inproceedings Guillaume Vailland, Yoren Gaffary, Louise Devigne, Valérie Gouranton, Bruno Arnaldi and Marie Babel. Vestibular Feedback on a Virtual Reality Wheelchair Driving Simulator: A Pilot Study. HRI 2020 - ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, United Kingdom, 2020.
  • 50 inproceedings Wouter Van Toll, Cédric Braga, Barbara Solenthaler and Julien Pettré. Extreme-Density Crowd Simulation: Combining Agents with Smoothed Particle Hydrodynamics. MIG 2020 - 13th ACM SIGGRAPH Conference on Motion, Interaction and Games, Virtual Event, United States, October 2020, 1-10.
  • 51 inproceedings Wouter Van Toll, Fabien Grzeskowiak, Axel López, Javad Amirian, Florian Berton, Julien Bruneau, Beatriz Cabrero Daniel, Alberto Jovane and Julien Pettré. Generalized Microscopic Crowd Simulation using Costs in Velocity Space. i3D 2020 - ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, San Francisco, United States, September 2020, 1-9.
  • 52 inproceedings Xi Wang, Marc Christie and Eric Marchand. Relative Pose Estimation and Planar Reconstruction via Superpixel-Driven Multiple Homographies. IROS 2020 - IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, United States, October 2020, 1-8.
  • 53 inproceedings Katja Zibrek, Benjamin Niay, Anne-Hélène Olivier, Ludovic Hoyet, Julien Pettré and Rachel McDonnell. Walk this way: Evaluating the effect of perceived gender and attractiveness of motion on proximity in virtual reality. VR 2020 - 27th IEEE Conference on Virtual Reality and 3D User Interfaces, Virtual, United States, 2020, 169-170.

National peer-reviewed Conferences

  • 54 inproceedings Guillaume Vailland, Yoren Gaffary, Louise Devigne, Valérie Gouranton, Bruno Arnaldi and Marie Babel. Simulateur de Conduite de Fauteuil Roulant avec Retours Vestibulaires : Une Etude Pilote. Handicap 2020 - 11ème Conférence sur les Aides Techniques pour les Personnes en Situation de Handicap, Paris, France, November 2020, 1-8.

Conferences without proceedings

  • 55 inproceedings Marco Cognetti, Marco Aggravi, Claudio Pacchierotti, Paolo Salaris and Paolo Robuffo Giordano. Shared Control Active Perception for Human-Assisted Navigation. 2nd Italian Conference on Robotics and Intelligent Machines (I-RIM), Online, Italy, December 2020.
  • 56 inproceedings Rahaf Rahal, Firas Abi-Farraj, Paolo Robuffo Giordano and Claudio Pacchierotti. Haptic Shared-Control Methods for Robotic Cutting. 2nd Italian Conference on Robotics and Intelligent Machines, Online, Italy, December 2020.
  • 57 inproceedings Rahaf Rahal, Giulia Matarese, Marco Gabiccini, Alessio Artoni, Domenico Prattichizzo, Paolo Robuffo Giordano and Claudio Pacchierotti. Haptic shared control for enhanced user comfort in robotic telemanipulation. 2020 IEEE ICRA Workshop on Shared Autonomy: Learning and Control, Online, France, 2020.

Scientific book chapters

  • 58 inbook François Chaumette. Robot Visual Control. Encyclopedia of Systems and Control, 2nd edition, December 2020, 1-18.
  • 59 inbook François Chaumette. Visual Servoing. Encyclopedia of Robotics, July 2020, 1-9.
  • 60 inbook François Chaumette. Visual Servoing. Computer Vision, October 2020.
  • 61 inbook Xavier De Tinguy, Thomas M. Howard, Claudio Pacchierotti, Maud Marchal and Anatole Lécuyer. WeATaViX: WEarable Actuated TAngibles for VIrtual reality eXperiences. Haptics: Science, Technology, Applications - 12th International Conference, EuroHaptics 2020, Leiden, The Netherlands, September 6-9, 2020, Proceedings, vol. 12272, September 2020, 262-270.
  • 62 inbook Ide-Flore Kenmogne, Vincent Drevelle and Eric Marchand. Using Constraint Propagation for Cooperative UAV Localization from Vision and Ranging. Decision Making under Constraints, Studies in Systems, Decision and Control, vol. 276, March 2020, 133-138.
  • 63 inbook Maud Marchal, Gerard Gallagher, Anatole Lécuyer and Claudio Pacchierotti. Can Stiffness Sensations be Rendered in Virtual Reality Using Mid-air Ultrasound Haptic Technologies? Haptics: Science, Technology, Applications - 12th International Conference, EuroHaptics 2020, Leiden, The Netherlands, September 6-9, 2020, vol. 12272, September 2020, 297-306.
  • 64 inbook Eric Marchand. Visual Tracking. Encyclopedia of Robotics, 2020, 1-16.

Doctoral dissertations and habilitation theses

  • 65 thesis Hadrien Gurnel. Needle comanipulation with haptic guidance for percutaneous interventions. Defended in January 2020.
  • 66 thesis Romain Lagneau. Shape control of deformable objects by adaptive visual servoing. Defended in December 2020.
  • 67 thesis Rahaf Rahal. Shared Control and Authority Distribution for Robotic Teleoperation. Defended in December 2020.
  • 68 thesis Agniva Sengupta. Visual Tracking of Deformable Objects with RGB-D Camera. Defended in June 2020.

Other scientific publications

  • 69 misc Marco Aggravi, Tommaso Lisini Baldi, Claudio Pacchierotti and Domenico Prattichizzo. Combined tracking and vibrotactile rendering with a wearable armband. Washington, United States, March 2020.
  • 70 misc Nora Ayanian, Paolo Robuffo Giordano, Robert Fitch, Antonio Franchi and Lorenzo Sabattini. Guest editorial: special issue on multi-robot and multi-agent systems. March 2020.
  • 71 misc Adèle Colas, Wouter Van Toll, Ludovic Hoyet, Claudio Pacchierotti, Marc Christie, Katja Zibrek, Anne-Hélène Olivier and Julien Pettré. Interaction Fields: Sketching Collective Behaviours. N. Charleston, United States, October 2020.
  • 72 misc Thomas M. Howard, Guillaume Gicquel, Maud Marchal, Anatole Lécuyer and Claudio Pacchierotti. PUMAH: Pan-tilt Ultrasound Mid-Air Haptics. Washington, United States, March 2020.
  • 73 misc Emilie Leblong, Bastien Fraudet, Nicolas Benoit, Marie Dandois, Marie Babel, Estelle Ceze, Mohammed Sakel, Matthew Pepper, Guillaume Caron, Nicolas Ragot and Philippe Gallien. Training and Provision Concerning Power Wheelchair Driving: a Survey Comparing Practices in France and the United Kingdom. Lyon, France, October 2020.
  • 74 misc Emilie Leblong, Bastien Fraudet, Louise Devigne, Marie Babel, François Pasteau, Nicolas Benoit and Philippe Gallien. SWADAPT1: Evaluation on standardised circuits of the interest of a robotic module for assisting the driver of an electric wheelchair: pilot, prospective, controlled, randomised study. Orlando, United States, March 2020.

11.3 Cited publications

  • 75 article Paolo Salaris, Marco Cognetti, Riccardo Spica and Paolo Robuffo Giordano. Online Optimal Perception-Aware Trajectory Generation. IEEE Transactions on Robotics, 2019, 1-16.