2023 Activity Report
Project-Team RAINBOW

RNSR: 201822637G
  • Research center: Inria Centre at Rennes University
  • In partnership with: CNRS, Institut national des sciences appliquées de Rennes, Université de Rennes
  • Team name: Sensor-based Robotics and Human Interaction
  • In collaboration with: Institut de recherche en informatique et systèmes aléatoires (IRISA)
  • Domain: Perception, Cognition and Interaction
  • Theme: Robotics and Smart environments

Keywords

Computer Science and Digital Science

  • A5.1.2. Evaluation of interactive systems
  • A5.1.3. Haptic interfaces
  • A5.1.7. Multimodal interfaces
  • A5.1.9. User and perceptual studies
  • A5.4.4. 3D and spatio-temporal reconstruction
  • A5.4.6. Object localization
  • A5.4.7. Visual servoing
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.2. Augmented reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.9.2. Estimation, modeling
  • A5.10.2. Perception
  • A5.10.3. Planning
  • A5.10.4. Robot control
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.10.6. Swarm robotics
  • A5.10.7. Learning
  • A6.4.1. Deterministic control
  • A6.4.3. Observability and Controllability
  • A6.4.4. Stability and Stabilization
  • A6.4.5. Control of distributed parameter systems
  • A6.4.6. Optimal control
  • A9.5. Robotics
  • A9.7. AI algorithmics
  • A9.9. Distributed AI, Multi-agent

Other Research Topics and Application Domains

  • B2.4.3. Surgery
  • B2.5. Handicap and personal assistances
  • B2.5.1. Sensorimotor disabilities
  • B2.5.2. Cognitive disabilities
  • B2.5.3. Assistance for elderly
  • B5.1. Factory of the future
  • B5.6. Robotic systems
  • B8.1.2. Sensor networks for smart buildings
  • B8.4. Security and personal assistance

1 Team members, visitors, external collaborators

Research Scientists

  • Paolo Robuffo Giordano [Team leader, CNRS, Senior Researcher, HDR]
  • François Chaumette [INRIA, Senior Researcher, HDR]
  • Alexandre Krupa [INRIA, Senior Researcher, HDR]
  • Claudio Pacchierotti [CNRS, Researcher, HDR]
  • Marco Tognon [INRIA, ISFP]

Faculty Members

  • Marie Babel [INSA RENNES, Associate Professor, HDR]
  • Vincent Drevelle [UNIV RENNES, Associate Professor]
  • Sylvain Guegan [INSA RENNES, Associate Professor Delegation, until Aug 2023]
  • Maud Marchal [INSA RENNES, Professor, HDR]
  • Éric Marchand [UNIV RENNES, Professor, HDR]

Post-Doctoral Fellow

  • Elodie Bouzbib [INRIA, Post-Doctoral Fellow, until Nov 2023]

PhD Students

  • Jose Eduardo Aguilar Segovia [INRIA]
  • Lorenzo Balandi [INRIA, from Oct 2023]
  • Maxime Bernard [CNRS]
  • Antoine Bout [INSA RENNES, from Oct 2023]
  • Pascal Brault [ENS RENNES, ATER, until Aug 2023]
  • Pierre-Antoine Cabaret [INRIA]
  • Nicola De Carli [CNRS]
  • Glenn Kerbiriou [INTERDIGITAL]
  • Lisheng Kuang [CSC Scholarship, until Aug 2023]
  • Ines Lacote [INRIA]
  • Theo Le Terrier [INSA RENNES, from Oct 2023]
  • Emilie Leblong [POLE ST HELIER]
  • Fouad Makiyeh [INRIA]
  • Maxime Manzano [INSA RENNES]
  • Antonio Marino [UNIV RENNES]
  • Lendy Mulot [INSA RENNES]
  • Thibault Noel [CREATIVE]
  • Erwan Normand [UNIV RENNES]
  • Mandela Ouafo Fonkoua [INRIA]
  • Mattia Piras [INRIA, from Dec 2023]
  • Leon Raphalen [CNRS, from Sep 2023]
  • Maxime Robic [INRIA]
  • Lev Smolentsev [INRIA]
  • Ali Srour [CNRS]
  • John Thomas [INRIA]
  • Oluwapelumi Williams [INSA RENNES, until Aug 2023]

Technical Staff

  • Gianluca Corsini [CNRS, Engineer, from Dec 2023]
  • Louise Devigne [INRIA, Engineer]
  • Samuel Felton [INRIA, Engineer]
  • Marco Ferro [CNRS, Engineer]
  • Guillaume Gicquel [INSA RENNES, Engineer]
  • Fabien Grzeskowiak [INSA RENNES, Engineer]
  • Thomas Howard [INSA RENNES, Engineer, until Oct 2023]
  • Romain Lagneau [INRIA, Engineer]
  • Paul Mefflet [CNRS, Engineer, from Nov 2023]
  • François Pasteau [INSA RENNES, Engineer]
  • Pierre Perraud [INRIA, Engineer, until Nov 2023]
  • Esteban Restrepo [CNRS, Engineer, from Feb 2023]
  • Olivier Roussel [INRIA, Engineer, from Apr 2023]
  • Fabien Spindler [INRIA, Engineer]
  • Sebastien Thomas [INRIA, Engineer]
  • Thomas Voisin [INRIA]

Interns and Apprentices

  • Lorenzo Balandi [INRIA, Intern, until Feb 2023]
  • Riccardo Belletti [INRIA, Intern, from Sep 2023]
  • Martin Bichon Reynaud [ENS RENNES, Intern, from Sep 2023]
  • Sarah Emery [ENS RENNES, Intern, until Jun 2023]
  • Gauthier Gendreau [INRIA, Intern, from Apr 2023 until Aug 2023]
  • Theo Goupil [CNRS, Intern, from Jun 2023 until Aug 2023]
  • Noe Guillaumin [INRIA, Intern, from Apr 2023 until Aug 2023]
  • Evelyne Le Bezvoet [ENS PARIS-SACLAY, Intern, from Jun 2023 until Jul 2023]
  • Nicolas Martinet [CNRS, Intern, from Dec 2023]
  • Loane Vadaine [INRIA, Intern, from May 2023 until Jul 2023]
  • Quentin Zanini [CNRS, from Feb 2023 until Jul 2023]

Administrative Assistant

  • Hélène de La Ruée [UNIV RENNES]

Visiting Scientists

  • Tommaso Belvedere [SAPIENZA ROME, from Mar 2023 until Sep 2023]
  • Massimiliano Bertoni [UNIV PADUA, from Oct 2023]
  • Salvatore Marcellini [UNIV NAPLES, from Feb 2023 until Jul 2023]
  • Francesca Pagano [UNIV NAPLES, from Sep 2023]
  • Danilo Troisi [UNIV PISA, from Nov 2023]

2 Overall objectives

The long-term vision of the Rainbow team is to develop the next generation of sensor-based robots able to navigate and/or interact in complex unstructured environments together with human users. Clearly, the word “together” can have very different meanings depending on the particular context: for example, it can refer to mere co-existence (robots and humans share some space while performing independent tasks), human-awareness (the robots need to be aware of the human state and intentions for properly adjusting their actions), or actual cooperation (robots and humans perform some shared task and need to coordinate their actions).

One could perhaps argue that these two goals (robot autonomy and human involvement) are somehow in conflict, since higher robot autonomy should imply lower (or no) human intervention. However, we believe that our general research direction is well motivated since: (i) despite the many advancements in robot autonomy, complex and high-level cognitive-based decisions are still out of reach. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will most probably remain so for the next decades. On the other hand, robots are extremely capable of autonomously executing specific and repetitive tasks with great speed and precision, and of operating in dangerous/remote environments, while humans possess unmatched cognitive capabilities and world awareness which allow them to take complex and quick decisions; (ii) the cooperation between humans and robots is often an implicit constraint of the robotic task itself. Consider for instance the case of assistive robots supporting injured patients during their physical recovery, or human augmentation devices. It is then important to study proper ways of implementing this cooperation; (iii) finally, safety regulations can require the presence at all times of a person in charge of supervising and, if necessary, of taking direct control of the robotic workers. For example, this is a common requirement in all applications involving tasks in public spaces, like autonomous vehicles in crowded spaces, or even UAVs flying in civil airspace such as over urban or populated areas.

Within this general picture, the Rainbow activities will be particularly focused on the case of (shared) cooperation between robots and humans by pursuing the following vision: on the one hand, empower robots with a large degree of autonomy for allowing them to effectively operate in non-trivial environments (e.g., outside completely defined factory settings). On the other hand, include human users in the loop for having them in (partial and bilateral) control of some aspects of the overall robot behavior. We plan to address these challenges from the methodological, algorithmic and application-oriented perspectives. The main research axes along which the Rainbow activities will be articulated are: three supporting axes (Optimal and Uncertainty-Aware Sensing; Advanced Sensor-based Control; Haptics for Robotics Applications) that are meant to develop methods, algorithms and technologies for realizing the central theme of Shared Control of Complex Robotic Systems.

3 Research program

3.1 Main Vision

The vision of Rainbow (and foreseen applications) calls for several general scientific challenges: (i) a high level of autonomy for complex robots in complex (unstructured) environments, (ii) forward interfaces for letting an operator give high-level commands to the robot, (iii) backward interfaces for informing the operator about the robot “status”, (iv) user studies for assessing the best interfacing, which will clearly depend on the particular task/situation. Within Rainbow we plan to tackle these challenges at different levels of depth:

  • the methodological and algorithmic side of the sought human-robot interaction will be the main focus of Rainbow. Here, we will be interested in advancing the state of the art in sensor-based online planning, control and manipulation for mobile/fixed robots. For instance, while classically most control approaches (especially sensor-based ones) have been essentially reactive, we believe that less myopic strategies based on online/reactive trajectory optimization will be needed for the future Rainbow activities. The core ideas of Model-Predictive Control approaches (also known as Receding Horizon) or, in general, numerical optimal control methods will play a role in the Rainbow activities, allowing the robots to reason/plan over some future time window and better cope with constraints. We will also consider extending classical sensor-based motion control/manipulation techniques to more realistic scenarios, such as deformable/flexible objects (“Advanced Sensor-based Control” axis). Finally, it will also be important to devote research efforts to the field of Optimal Sensing, in the sense of generating (again) trajectories that can optimize the state estimation problem in the presence of scarce sensory inputs and/or non-negligible measurement and process noises, which is especially true for the case of mobile robots (“Optimal and Uncertainty-Aware Sensing” axis). We also aim at addressing the case of coordination between a single human user and multiple robots where, clearly, as explained above, the autonomy part plays an even more crucial role (no human can control multiple robots at once, thus a high degree of autonomy will be required of the robot group for executing the human commands);
  • the interfacing side will also be a focus of the Rainbow activities. As explained above, we will be interested in both the forward (human → robot) and backward (robot → human) interfaces. The forward interface will be mainly addressed from the algorithmic point of view, i.e., how to map the few degrees of freedom available to a human operator (usually in the order of 3–4) into complex commands for the controlled robot(s). This mapping will typically be mediated by an “AutoPilot” onboard the robot(s) for autonomously assessing if the commands are feasible and, if not, how to least modify them (“Advanced Sensor-based Control” axis).

    The backward interface will, instead, mainly consist of visual/haptic feedback for the operator. Here, we aim at exploiting our expertise in using force cues for informing an operator about the status of the remote robot(s). However, the sole use of classical grounded force-feedback devices (e.g., the typical force-feedback joysticks) will not be enough, due to the different kinds of information that will have to be provided to the operator. In this context, the recent interest in the use of wearable haptic interfaces is very relevant and will be investigated in depth (these include, e.g., devices able to provide vibro-tactile information to the fingertips, wrist, or other parts of the body). The main challenges in these activities will be the mechanical conception (and construction) of suitable wearable interfaces for the tasks at hand, and the generation of force cues for the operator: the force cues will be a (complex) function of the robot state, therefore motivating research in algorithms for mapping the robot state into a few variables (the force cues) (“Haptics for Robotics Applications” axis);

  • the evaluation side will assess the proposed interfaces with user studies or acceptability studies with human subjects. Although this activity will not be a main focus of Rainbow (complex user studies are beyond the scope of our core expertise), we will nevertheless devote some effort to obtaining a reasonable level of user evaluation by applying standard statistical analysis based on psychophysical procedures (e.g., randomized tests and ANOVA statistical analysis). This will be particularly true for the activities involving the use of smart wheelchairs, which are intended to be used by human users and operate inside human crowds. Therefore, we will be interested in gaining some level of understanding of how semi-autonomous robots (a wheelchair in this example) can predict the human intention, and how humans react to a semi-autonomous mobile robot.
Figure 1: An illustration of the prototypical activities foreseen in Rainbow, in which a human operator is in partial (and high-level) control of single/multiple complex robots performing semi-autonomous tasks

Figure 1 depicts in an illustrative way the prototypical activities foreseen in Rainbow. On the right-hand side, complex robots (dual manipulators, humanoids, single/multiple mobile robots) need to perform some task with a high degree of autonomy. On the left-hand side, a human operator gives some high-level commands and receives visual/haptic feedback aimed at informing her/him as well as possible of the robot status. Again, the main challenges that Rainbow will tackle to address these issues are (in order of relevance): (i) methods and algorithms, mostly based on first-principle modeling and, when possible, on numerical methods for online/reactive trajectory generation, for endowing the robots with high autonomy; (ii) design and implementation of visual/haptic cues for interfacing the human operator with the robots, with special attention to novel combinations of grounded/ungrounded (wearable) haptic devices; (iii) user and acceptability studies.

3.2 Main Components

Hereafter, we give a summary description of the four research axes of Rainbow.

3.2.1 Optimal and Uncertainty-Aware Sensing

Future robots will need a large degree of autonomy for, e.g., interpreting the sensory data for accurate estimation of the robot and world state (which can possibly include the human users), and for devising motion plans able to take into account many constraints (actuation, sensor limitations, environment), including also the state estimation accuracy (i.e., how well the robot/environment state can be reconstructed from the sensed data). In this context, we will be particularly interested in (i) devising trajectory optimization strategies able to maximize some norm of the information gain gathered along the trajectory (with the available sensors). This can be seen as an instance of Active Sensing, with the main focus on online/reactive trajectory optimization strategies able to take into account several requirements/constraints (sensing/actuation limitations, noise characteristics). We will also be interested in the coupling between optimal sensing and the concurrent execution of additional tasks (e.g., navigation, manipulation); (ii) formal methods for guaranteeing the accuracy of localization/state estimation in mobile robotics, mainly exploiting tools from interval analysis. The interest of these methods lies in their ability to provide possibly conservative but guaranteed bounds on the best accuracy one can obtain with the given robot/sensor pair; they can thus be used for planning purposes or for system design (choice of the best sensors for a given robot/task); (iii) localization/tracking of objects with poor/unknown or deformable shape, which will be of paramount importance for allowing robots to estimate the state of “complex objects” (e.g., human tissues in medical robotics, elastic materials in manipulation) for controlling their pose/interaction with the objects of interest.
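
To make point (i) concrete, here is a minimal, hypothetical sketch (Python, with illustrative names and a deliberately simplified setting): a planar robot picks measurement waypoints that maximize the smallest eigenvalue of the accumulated Fisher information about a landmark observed through noisy range measurements, a simple stand-in for the Gramian-norm criteria discussed above.

```python
# Hedged sketch: choose waypoints maximizing the smallest eigenvalue of the
# accumulated Fisher information (E-optimality) for range-only sensing.
# All names/values are illustrative, not taken from the cited works.
import numpy as np
from scipy.optimize import minimize

landmark = np.array([2.0, 1.0])  # assumed known here, for illustration only
sigma = 0.1                      # std of the range measurement noise

def info_matrix(waypoints):
    """Sum of rank-one information terms J^T J / sigma^2, one per waypoint."""
    info = np.zeros((2, 2))
    for p in waypoints.reshape(-1, 2):
        d = landmark - p
        J = (d / np.linalg.norm(d)).reshape(1, 2)  # Jacobian of the range
        info += J.T @ J / sigma**2
    return info

def cost(x):
    # E-optimality: maximize the smallest eigenvalue -> minimize its negative
    return -np.linalg.eigvalsh(info_matrix(x))[0]

rng = np.random.default_rng(0)
x0 = rng.normal(scale=0.5, size=8)  # four 2D waypoints, stacked
res = minimize(cost, x0, method="Nelder-Mead")
print("optimized waypoints:\n", res.x.reshape(-1, 2))
print("smallest eigenvalue of the information matrix:", -res.fun)
```

For range-only sensing the per-measurement information depends only on the bearing to the landmark, so the optimization naturally spreads the waypoints around it; in the actual works the criterion is evaluated along full dynamically-feasible trajectories and optimized online.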

3.2.2 Advanced Sensor-based Control

One of the main competences of the previous Lagadic team has been, generally speaking, the topic of sensor-based control, i.e., how to exploit (typically onboard) sensors for controlling the motion of fixed/ground robots. The main emphasis has been on devising ways to directly couple the robot motion with the sensor outputs in order to invert this mapping for driving the robots towards a configuration specified as a desired sensor reading (thus, directly in sensor space). This general idea has been applied to very different contexts: mainly standard vision (hence the Visual Servoing keyword), but also audio, ultrasound imaging, and RGB-D.

Use of sensors for controlling the robot motion will clearly be a central topic of the Rainbow team too, since the use of (especially onboard) sensing is a main characteristic of any future robotics application (which should typically operate in unstructured environments, and thus mainly rely on its own ability to sense the world). We then naturally aim at making the best out of the previous Lagadic experience in sensor-based control to propose new advanced ways of exploiting sensed data for, roughly speaking, controlling the motion of a robot. In this respect, we plan to work on the following topics: (i) “direct/dense methods” which try to directly exploit the raw sensory data in computing the control law for positioning/navigation tasks. The advantage of these methods is the little need for data pre-processing, which can minimize feature extraction errors and, in general, improve the overall robustness/accuracy (since all the available data is used by the motion controller); (ii) sensor-based interaction with objects of unknown/deformable shape, for gaining the ability to manipulate, e.g., flexible objects from the acquired sensed data (e.g., controlling online a needle being inserted in a flexible tissue); (iii) sensor-based model predictive control, by developing online/reactive trajectory optimization methods able to plan feasible trajectories for robots subject to sensing/actuation constraints, with the possibility of exploiting (onboard) sensing for continuously replanning (over some future time horizon) the optimal trajectory. These methods will play an important role when dealing with complex robots affected by complex sensing/actuation constraints, for which pure reactive strategies (as in most of the previous Lagadic works) are not effective. Furthermore, the coupling with the aforementioned optimal sensing will also be considered; (iv) multi-robot decentralised estimation and control, with the aim of devising again sensor-based strategies for groups of multiple robots needing to maintain a formation or perform navigation/manipulation tasks. Here, the challenges come from the need of devising “simple” decentralized and scalable control strategies under the presence of complex sensing constraints (e.g., when using onboard cameras, limited field of view, occlusions). The need of locally estimating global quantities (e.g., a common frame of reference, global properties of the formation such as connectivity or rigidity) will also be a line of active research.
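
As a minimal illustration of the classical visual servoing law that these extensions build upon (a hedged Python sketch with illustrative values, not the team's code), the camera twist is computed from the image-feature error through the pseudo-inverse of the interaction matrix, v = -λ L⁺ (s - s*):

```python
# Hedged sketch of one image-based visual servoing (IBVS) step for point
# features, using the standard interaction matrix of a normalized image
# point (x, y) at depth Z. Purely illustrative.
import numpy as np

def point_interaction_matrix(x, y, Z):
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) reducing the feature error."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

v = ibvs_velocity(points=[(0.1, 0.05), (-0.1, 0.05), (0.0, -0.1)],
                  desired=[(0.05, 0.0), (-0.05, 0.0), (0.0, -0.05)],
                  depths=[1.0, 1.0, 1.0])
print("camera twist:", np.round(v, 4))
```

The “direct/dense” methods of point (i) replace the few geometric features above with the raw sensor signal (e.g., all image intensities), while the MPC methods of point (iii) replace the one-step law with an optimization over a future horizon.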

3.2.3 Haptics for Robotics Applications

In the envisaged shared cooperation between human users and robots, the typical sensory channel (besides vision) exploited to inform the human users is most often the force/kinesthetic one (in general, the sense of touch and of forces applied to the human hand or limbs). Therefore, a part of our activities will be devoted to studying and advancing the use of haptic cueing algorithms and interfaces for providing feedback to the users during the execution of some shared task. We will consider: (i) multi-modal haptic cueing for general teleoperation applications, by studying how to convey information through the kinesthetic and cutaneous channels. Indeed, most haptic-enabled applications typically only involve kinesthetic cues, e.g., the forces/torques that can be felt by grasping a force-feedback joystick/device. These cues are very informative about, e.g., preferred/forbidden motion directions, but are also inherently limited in their resolution since the kinesthetic channel can easily become overloaded (when too much information is compressed in a single cue). In recent years, the rise of novel cutaneous devices able to, e.g., provide vibro-tactile feedback on the fingertips or skin, has proven to be a viable solution to complement the classical kinesthetic channel. We will then study how to combine these two sensory modalities for different prototypical application scenarios, e.g., 6-dof teleoperation of manipulator arms, virtual fixture approaches, and remote manipulation of (possibly deformable) objects; (ii) in the particular context of medical robotics, we plan to address the problem of providing haptic cues for typical medical robotics tasks, such as semi-autonomous needle insertion and robot surgery, by exploring the use of kinesthetic feedback for rendering the mechanical properties of the tissues, and vibrotactile feedback for providing guiding information about pre-planned paths (with the aim of increasing the usability/acceptability of this technology in the medical domain); (iii) finally, in the context of multi-robot control we would like to explore how to use the haptic channel for providing information about the status of multiple robots executing a navigation or manipulation task. In this case, the problem is (even more) how to map (or compress) information about many robots into a few haptic cues. We plan to use specialized devices, such as actuated exoskeleton gloves able to provide cues to each fingertip of a human hand, or to resort to “compression” methods inspired by hand postural synergies for providing coordinated cues representative of a few (but complex) motions of the multi-robot group, e.g., coordinated motions (translations/expansions/rotations) or collective grasping/transporting.
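
As a toy illustration of splitting information between the kinesthetic and cutaneous channels (hypothetical Python sketch; names, gains and shapes are ours, not from the cited works): a virtual fixture produces a repulsive kinesthetic force when the tool enters a forbidden region, while a vibrotactile amplitude ramps up as the boundary is approached.

```python
# Hedged sketch: kinesthetic force cue (virtual fixture) + cutaneous
# vibration cue, both derived from the same distance-to-constraint signal.
import numpy as np

def haptic_cues(pos, center, radius, k=20.0):
    d = np.asarray(pos, dtype=float) - np.asarray(center, dtype=float)
    dist = max(np.linalg.norm(d), 1e-9)   # avoid division by zero
    gap = dist - radius                   # signed distance to the boundary
    # Kinesthetic channel: spring-like repulsion once inside the region
    force = k * (-gap) * d / dist if gap < 0.0 else np.zeros_like(d)
    # Cutaneous channel: vibration amplitude in [0, 1] near the boundary
    vibration = float(np.clip(1.0 - gap / radius, 0.0, 1.0))
    return force, vibration

force, vib = haptic_cues(pos=[0.9, 0.0], center=[0.0, 0.0], radius=1.0)
print("kinesthetic force:", force, "vibration amplitude:", vib)
```

The point of such a split is that the coarse, continuous repulsion loads the kinesthetic channel, while the graded proximity warning is offloaded to the cutaneous one.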

3.2.4 Shared Control of Complex Robotic Systems

This final and main research axis will exploit the methods, algorithms and technologies developed in the previous axes for realizing applications involving complex semi-autonomous robots operating in complex environments together with human users. The leitmotiv is to realize advanced shared control paradigms, which essentially aim at blending robot autonomy and user's intervention in an optimal way for exploiting the best of both worlds (robot accuracy/sensing/mobility/strength and human's cognitive capabilities). A common theme will be the issue of where to “draw the line” between robot autonomy and human intervention: obviously, there is no general answer, and any design choice will depend on the particular task at hand and/or on the technological/algorithmic possibilities of the robotic system under consideration.

A prototypical envisaged application, exploiting and combining the previous three research axes, is as follows: a complex robot (e.g., a two-arm system, a humanoid robot, a multi-UAV group) needs to operate in an environment exploiting its onboard sensors (in general, vision as the main exteroceptive one) and deal with many constraints (limited actuation, limited sensing, complex kinematics/dynamics, obstacle avoidance, interaction with difficult-to-model entities such as surrounding people, and so on). The robot must then possess quite a large autonomy for interpreting and exploiting the sensed data in order to estimate its own state and the environmental one (“Optimal and Uncertainty-Aware Sensing” axis), and for planning its motion in order to fulfil the task (e.g., navigation, manipulation) while coping with all the robot/environment constraints. Therefore, advanced control methods able to exploit the sensory data to the fullest, and able to cope online with constraints in an optimal way (by, e.g., continuously replanning and predicting over a future time horizon) will be needed (“Advanced Sensor-based Control” axis), with a possible (and interesting) coupling with the sensing part for optimizing, at the same time, the state estimation process. Finally, a human operator will typically be in charge of providing high-level commands (e.g., where to go, what to look at, what to grasp and where) that will then be autonomously executed by the robot, with possible local modifications because of the various (local) constraints. At the same time, the operator will also receive online visual-force cues informative of, in general, how well her/his commands are executed and whether the robot would prefer or suggest other plans (because of the local constraints that are not of the operator's concern). This information will have to be visually and haptically rendered with an optimal combination of cues that will depend on the particular application (“Haptics for Robotics Applications” axis).
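
A toy sketch of the blending logic at the core of such shared-control schemes (hypothetical Python; the authority-sharing rule and all names are illustrative): the executed command interpolates between the operator's input and the autonomous controller's suggestion, with the autonomy's share growing near constraints; the mismatch between the two commands is a natural candidate for a haptic cue back to the operator.

```python
# Hedged sketch of shared-control blending with a simple authority factor.
import numpy as np

def shared_control(u_user, u_auto, dist_to_obstacle, d_safe=1.0, k_cue=5.0):
    # Authority alpha -> 1 (full autonomy) as an obstacle is approached
    alpha = float(np.clip(1.0 - dist_to_obstacle / d_safe, 0.0, 1.0))
    u = (1.0 - alpha) * np.asarray(u_user) + alpha * np.asarray(u_auto)
    # Haptic cue proportional to how much the robot deviates from the user
    cue = k_cue * (u - np.asarray(u_user))
    return u, cue, alpha

u, cue, alpha = shared_control(u_user=[0.5, 0.0], u_auto=[0.2, 0.3],
                               dist_to_obstacle=0.4)
print(f"authority={alpha:.2f}, command={u}, force cue={cue}")
```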

4 Application domains

The activities of Rainbow fall obviously within the scope of Robotics. Broadly speaking, our main interest is in devising novel/efficient algorithms (for estimation, planning, control, haptic cueing, human interfacing, etc.) that can be general and applicable to many different robotic systems of interest, depending on the particular application/case study. For instance, we plan to consider

  • applications involving remote telemanipulation with one or two robot arms, where the arm(s) will need to coordinate their motion for approaching/grasping objects of interest under the guidance of a human operator;
  • applications involving single and multiple mobile robots for spatial navigation tasks (e.g., exploration, surveillance, mapping). In the multi-robot case, the high redundancy of the multi-robot group will motivate research in autonomously exploiting this redundancy for facilitating the task (e.g., optimizing the self-localization or the environment mapping) while following the human commands, and vice-versa for informing the operator about the status of the multi-robot group. In the single robot case, the possible combination with some manipulation devices (e.g., arms on a wheeled robot) will motivate research into remote tele-navigation and tele-manipulation;
  • applications involving medical robotics, in which the “manipulators” are replaced by the typical tools used in medical applications (ultrasound probes, needles, cutting scalpels, and so on) for semi-autonomous probing and intervention;
  • applications involving a direct physical “coupling” between human users and robots (rather than a “remote” interfacing), such as the case of assistive devices used for easing the life of impaired people. Here, we will be primarily interested in, e.g., safety and usability issues, and also touch some aspects of user acceptability.

These directions are, in our opinion, very promising since current and future robotics applications are expected to address more and more complex tasks: for instance, it is becoming mandatory to empower robots with the ability to predict the future (to some extent) by also explicitly dealing with uncertainties from sensing or actuation; to safely and effectively interact with human supervisors (or collaborators) for accomplishing shared tasks; to learn or adapt to dynamic environments from little prior knowledge; to exploit the environment (e.g., obstacles) rather than avoiding it (a typical example is a humanoid robot in a multi-contact scenario for facilitating walking on rough terrains); to optimize the onboard resources for large-scale monitoring tasks; to cooperate with other robots either by direct sensing/communication, or via some shared database (the “cloud”).

While no single lab can reasonably address all these theoretical/algorithmic/technological challenges, we believe that our research agenda can give some concrete contributions to the next generation of robotics applications.

5 Highlights of the year

5.1 Awards

  • Maud Marchal has been appointed as a Junior Member of the Institut Universitaire de France (2023-2028).
  • The article 47 coauthored by F. Makiyeh, F. Chaumette, M. Marchal and A. Krupa was among the 10 finalists for the best paper award at the IEEE/RSJ IROS conference held in Detroit in October 2023.
  • Marie Babel started a term as deputy director for the GDR Robotique.
  • Claudio Pacchierotti has been appointed Co-Chair of the IEEE Technical Committee on Telerobotics.

5.2 Highlights

  • Maud Marchal is the Principal Investigator of an ERC Consolidator Grant (2023-2028) for her project ADVHANDTURE.

6 New software, platforms, open data

6.1 New software

6.1.1 HandiViz

  • Name:
    Driving assistance of a wheelchair
  • Keywords:
    Health, Persons attendant, Handicap
  • Functional Description:

    The HandiViz software provides a semi-autonomous navigation framework for a wheelchair, relying on visual servoing.

    It has been registered to the APP (“Agence de Protection des Programmes”) as an INSA software (IDDN.FR.001.440021.000.S.P.2013.000.10000) and is under GPL license.

  • Contact:
    Marie Babel
  • Participants:
    François Pasteau, Marie Babel
  • Partner:
    INSA Rennes

6.1.2 UsTk

  • Name:
    Ultrasound toolkit for medical robotics applications guided from ultrasound images
  • Keywords:
    Echographic imagery, Image reconstruction, Medical robotics, Visual tracking, Visual servoing (VS), Needle insertion
  • Functional Description:
    UsTK, standing for Ultrasound Toolkit, is a cross-platform extension of ViSP software dedicated to 2D and 3D ultrasound image processing and visual servoing based on ultrasound images. Written in C++, UsTK architecture provides a core module that implements all the data structures at the heart of UsTK, a grabber module that allows acquiring ultrasound images from an Ultrasonix or a Sonosite device, a GUI module to display data, an IO module for providing functionalities to read/write data from a storage device, and a set of image processing modules to compute the confidence map of ultrasound images, generate elastography images, track a flexible needle in sequences of 2D and 3D ultrasound images and track a target image template in sequences of 2D ultrasound images. All these modules were implemented on several robotic demonstrators to control the motion of an ultrasound probe or a flexible needle by ultrasound visual servoing.
  • URL:
  • Contact:
    Alexandre Krupa
  • Participants:
    Alexandre Krupa, Fabien Spindler
  • Partners:
    Inria, Université de Rennes 1

6.1.3 ViSP

  • Name:
    Visual servoing platform
  • Keywords:
    Computer vision, Robotics, Visual servoing (VS), Visual tracking
  • Scientific Description:

    Since 2005, we have been developing and releasing ViSP [1], an open source library available from https://visp.inria.fr. ViSP, standing for Visual Servoing Platform, allows prototyping and developing applications using visual tracking and visual servoing techniques at the heart of the Rainbow research. ViSP was designed to be independent from the hardware, to be simple to use, expandable and cross-platform. ViSP allows designing vision-based tasks for eye-in-hand and eye-to-hand systems from the most classical visual features that are used in practice. It involves a large set of elementary positioning tasks with respect to various visual features (points, segments, straight lines, circles, spheres, cylinders, image moments, pose...) that can be combined together, and image processing algorithms that allow tracking of visual cues (dots, segments, ellipses...), 3D model-based tracking of known objects, or template tracking. Simulation capabilities are also available.

    We have extended ViSP with a new open-source dynamical simulator named FrankaSim based on CoppeliaSim and ROS for the popular Franka Emika Robot [2]. The simulator fully integrated in the ViSP ecosystem features a dynamic model that has been accurately identified from a real robot, leading to more realistic simulations. Conceived as a multipurpose research simulation platform, it is well suited for visual servoing applications as well as, in general, for any pedagogical purpose in robotics. All the software, models and CoppeliaSim scenes presented in this work are publicly available under free GPL-2.0 license.

    We have also recently introduced a module dedicated to deep neural networks (DNN) to facilitate image classification and object detection. This module can run inference with the convolutional networks Faster-RCNN, SSD-MobileNet, ResNet 10, Yolo v3, Yolo v4, Yolo v5, Yolo v7 and Yolo v8, which simultaneously predict object boundaries and prediction scores at each position.

    [1] E. Marchand, F. Spindler, F. Chaumette. ViSP for visual servoing: a generic software platform with a wide class of robot control skills. IEEE Robotics and Automation Magazine, Special Issue on "Software Packages for Vision-Based Control of Motion", P. Oh, D. Burschka (Eds.), 12(4):40-52, December 2005. URL: https://hal.inria.fr/inria-00351899v1

    [2] A. A. Oliva, F. Spindler, P. Robuffo Giordano and F. Chaumette. ‘FrankaSim: A Dynamic Simulator for the Franka Emika Robot with Visual-Servoing Enabled Capabilities’. In: ICARCV 2022 - 17th International Conference on Control, Automation, Robotics and Vision. Singapore, Singapore, 11th Dec. 2022, pp. 1–7. URL: https://hal.inria.fr/hal-03794415.

  • Functional Description:
    ViSP provides simple ways to integrate and validate new algorithms with already existing tools. It follows a module-based software engineering design where data types, algorithms, sensors, viewers and user interaction are made available. Written in C++, ViSP is based on open-source cross-platform libraries (such as OpenCV) and builds with CMake. Several platforms are supported, including OSX, iOS, Windows and Linux. The ViSP online documentation eases learning. More than 307 fully documented classes organized in 18 different modules are proposed to the user, together with more than 475 examples and 114 tutorials. ViSP is released under a dual licensing model. It is open-source with a GNU GPLv2 or GPLv3 license. A professional edition license that replaces the GNU GPL is also available.
  • URL:
  • Contact:
    Fabien Spindler
  • Participants:
    Éric Marchand, Fabien Spindler, François Chaumette, Olivier Roussel

6.1.4 DIARBENN

  • Name:
    Obstacle avoidance through sensor-based servoing
  • Keywords:
    Servoing, Shared control, Navigation
  • Functional Description:
    DIARBENN's objective is to define an obstacle avoidance solution adapted to a mobile robot such as a powered wheelchair. Through a shared control system, the system corrects progressively and if necessary the trajectory when approaching an obstacle while respecting the user's intention.
  • Contact:
    Marie Babel
  • Participants:
    Marie Babel, François Pasteau, Sylvain Guegan
  • Partner:
    INSA Rennes

6.2 New platforms

The platforms described in the next sections are labeled by the University of Rennes 1 and by ROBOTEX 2.0.

6.2.1 Robot Vision Platform

Participants: François Chaumette, Eric Marchand, Fabien Spindler [contact].

We are using an industrial robot built by Afma Robots in the nineties to validate our research in visual servoing and active vision. This robot is a 6 DoF gantry robot on whose end-effector it is possible to mount a gripper and an RGB-D camera (see Fig. 2). This equipment is mainly used to validate visual servoing and real-time tracking algorithms. It should be noted that the 4 DoF cylindrical robot acquired in 1995 has been withdrawn from the platform.

In 2023, this platform has been used to validate experimental results in 1 accepted publication 38 and in 1 PhD thesis 66.

Figure 2: Rainbow robotics platform for vision-based manipulation (the gantry robot)

6.2.2 Mobile Robots

Participants: Marie Babel, François Pasteau, Fabien Spindler [contact].

To validate our research on the personally assisted living topic (see Sect. 7.4.2), we have three electric wheelchairs, one from Permobil, one from Sunrise and the last from YouQ (see Fig. 3.a). The control of the wheelchair is performed using a plug-and-play system between the joystick and the low-level control of the wheelchair. Such a system lets us acquire the user intention through the joystick position and control the wheelchair by applying corrections to its motion. The wheelchairs have been fitted with cameras, ultrasound and time-of-flight sensors to perform the required servoing for assisting handicapped people. A wheelchair haptic simulator completes this platform to develop new human interaction strategies in a virtual reality environment (see Fig. 3.b).

Moreover, for fast prototyping of algorithms in perception, control and autonomous navigation, the team uses a Pioneer 3DX from Adept (see Fig. 3.c). This platform is equipped with various sensors needed for autonomous navigation and sensor-based control.

Note that the Pepper robot, a human-shaped robot designed by SoftBank Robotics, has been removed from the platform.

In 2023, these platforms were used to obtain experimental results presented in 5 papers 23, 9, 13, 20, 46.

Figure 3: Mobile Robot Platform. a) Wheelchairs from Permobil, Sunrise and YouQ, b) wheelchair haptic simulator, c) Pioneer P3-DX robot equipped with a camera mounted on a pan-tilt head

6.2.3 Advanced Manipulation Platform

Participants: Alexandre Krupa, Claudio Pacchierotti, Paolo Robuffo Giordano, François Chaumette, Fabien Spindler [contact].

This platform consists of two Panda lightweight 7 DoF arms from Franka Emika equipped with torque sensors in all seven axes. An electric gripper, a camera, a soft hand from qbrobotics or a Reflex TakkTile 2 gripper from RightHand Labs (see Fig. 4.b) can be mounted on the robot end-effector (see Fig. 4.a). A force/torque sensor from Alberobotics is also attached to one of the robots' end-effectors to provide greater accuracy in torque control.

Two Adept 6 DoF arms (one Viper 650 robot and one Viper 850 robot) and a 6 DoF Universal Robots UR5, which can also be fitted with a force sensor and a camera, complete the platform.

This setup is mainly used to manipulate deformable objects and to validate our activities in coupling force and vision for controlling robot manipulators (see Section 7.4.1) and in controlling the deformation of soft objects (Sect. 7.2.8). Other haptic devices (see Section 7.3) can also be coupled to this platform.

In 2023, 5 new papers 47, 56, 27, 31, 58 and 1 PhD thesis 65 were published including experimental results obtained with this platform.

Figure 4: Rainbow advanced manipulation platform. a) One of the two Panda lightweight arms from Franka Emika, with the Pisa SoftHand mounted, b) the Reflex TakkTile 2 gripper that can be mounted on the Panda robot end-effector, c) the five arms composing the platform, with a Franka in the foreground, our two Viper robots in the middle ground, and a UR5 arm and our second Franka robot in the background

6.2.4 Unmanned Aerial Vehicles (UAVs)

Participants: Gianluca Corsini, Paolo Robuffo Giordano, Marco Tognon, Claudio Pacchierotti, Pierre Perraud, Fabien Spindler [contact].

Rainbow is involved in several activities concerning perception and control for single and multiple quadrotor UAVs. To this end, we can exploit two indoor flying arenas. The first is relatively small (3m x 5m x H1.8m) and is equipped with 11 Vicon cameras for motion capture. The second (9m x 9m x H2.5m) is equipped with 8 Qualisys cameras and is available from January to August. It allows us to fly our drones in a much larger volume than the first arena.

In these arenas, we operate four quadrotors from Mikrokopter GmbH, Germany (see Fig. 5.a), which have been heavily customised by: (i) reprogramming from scratch the low-level attitude controller onboard the microcontroller of the quadrotors, (ii) equipping each quadrotor with an NVIDIA Jetson TX2 board running Linux Ubuntu and the TeleKyb-3 software based on the genom3 framework developed at LAAS in Toulouse (the middleware used for managing the experiment flows and the communication among the UAVs and the base station), and (iii) purchasing new Realsense RGB-D cameras for visual odometry and visual servoing.

To upgrade the platform's hardware, we first set up a new drone architecture based on a DJI F450 equipped with a Pixhawk capable of communicating with a Jetson TX2, itself connected to an Intel Realsense RGB-D camera. With the DJI F450s coming to the end of their life, we continued to renew the hardware by developing a new mechanical structure using 3D printing to house a Pixhawk and a Jetson Nano. This new generation of more modular drones has been called Acanthis.

The quadrotor group is used as a robotic platform for testing a number of single and multiple flight control schemes, with special attention on the use of onboard vision as the main sensory modality. The first experiments allowed us to carry out visual servoing using ViSP to position the drone w.r.t. a target.

In 2023, experimental results obtained with this platform were included in 2 PhD theses 63, 66.

Figure 5: Unmanned Aerial Vehicles Platform. a) Quadrotor XL1 from Mikrokopter, b) formation control with 3 XL1 from Mikrokopter in our flying arena equipped with the Vicon localization system, c) DJI F450 equipped with a Pixhawk and a Jetson TX2 flying in our new arena equipped with a Qualisys Motion Capture system

6.2.5 Interactive interfaces and systems

Participants: Claudio Pacchierotti, Paolo Robuffo Giordano, Maud Marchal, Marie Babel, Fabien Spindler [contact].

Interactive technologies enable the communication between artificial systems and human users. Examples of such technologies are haptic interfaces and virtual reality headsets.

Various haptic devices are used to validate our research in, e.g., shared control and extended reality. We design wearable haptic devices to give feedback to the user, and we also use some off-the-shelf devices. We have a Virtuose 6D device from Haption (see Fig. 6.a). This device is used as the master device in many of our shared control activities. An Omega 6 (see Fig. 6.b) from Force Dimension and devices from Ultrahaptics (see Fig. 6.c) complete this platform, which can be coupled to the other robotic platforms.

Similarly, in order to augment the immersiveness of virtual scenarios, we make use of virtual and augmented reality headsets. We have HTC Vive headsets for VR and Microsoft Hololens for AR interactions (see Fig. 6.d).

In 2023, this platform was used to obtain experimental results presented in 13 papers 6, 7, 8, 15, 17, 18, 21, 22, 24, 25, 27, 45, 49 and 1 PhD thesis 64.

Figure 6: Haptics and Shared Control Platform. a) Virtuose 6D haptic device, b) Omega 6 haptic device, c) Ultraleap STRATOS, d) Microsoft Hololens 2 AR headset

6.2.6 Portable immersive room

Participants: François Pasteau, Fabien Grzeskowiak, Marie Babel [contact].

To validate our research on assistive robotics and its applications in virtual conditions, we recently acquired a portable immersive room that is planned to be easily deployed in different rehabilitation structures in order to conduct clinical trials. The system has been designed by the Trinoma company and has been funded by the Interreg ADAPT project.

In 2023, this platform was used to obtain experimental results presented in 1 paper 42.

Figure 7: Portable immersive room with the multisensory wheelchair driving simulator (a person sitting on the wheelchair simulator inside the immersive room)

7 New results

7.1 Optimal and Uncertainty-Aware Sensing

7.1.1 Trajectory Generation for Optimal State Estimation

Participants: Nicola De Carli, Lorenzo Balandi, Paolo Robuffo Giordano.

This activity addresses the general problem of active sensing, where the goal is to characterize and generate optimal trajectories for single or multiple robotic systems that can maximize the amount of information gathered by (few) noisy outputs (i.e., sensor readings) while at the same time reducing the negative effects of the process/actuation noise. We have previously developed a general framework for solving online the active sensing problem by continuously replanning an optimal trajectory that maximizes a suitable norm of the Constructibility Gramian (CG) 67. The following works have extended or leveraged this formulation in several directions. Part of these activities is in the scope of the ANR project MULTISHARED (Sect. 9.3.10).
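
For reference, a minimal sketch of the criterion (the standard Constructibility Gramian of a linearized system; notation is ours and simplified):

\[
\mathcal{G}_c(t_0, t_f) = \int_{t_0}^{t_f} \Phi(\tau, t_f)^\top\, C(\tau)^\top\, W(\tau)\, C(\tau)\, \Phi(\tau, t_f)\, \mathrm{d}\tau,
\]

where $\Phi$ is the state-transition matrix of the linearized dynamics, $C$ the measurement Jacobian, and $W$ a measurement-noise weighting; active sensing then shapes the trajectory so as to maximize a suitable norm of $\mathcal{G}_c$ (e.g., its smallest eigenvalue).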

In 36 we have considered the active sensing problem for formations of drones measuring relative bearings. To be able to localize their relative positions from bearing measurements, the drone formation must satisfy specific Persistency of Excitation (PE) conditions. In this work we have proposed a solution that can meet these PE conditions by maximizing the information collected from onboard cameras via a distributed gradient-based algorithm. Additionally, we have also shown how to consider the presence of an additional task of interest (e.g., formation control) besides the active sensing task, thanks to the use of Quadratic Program-based control with Control Lyapunov Functions (CLFs).

In 4 we have instead considered the problem of persistently monitoring a set of moving targets using a team of aerial vehicles equipped with a camera with limited range and Field of View (FoV) providing bearing measurements. The aerial vehicles implement an Information Consensus Filter (ICF) to estimate the state of the target(s), and the ICF is proven to be uniformly globally exponentially stable under a Persistency of Excitation (PE) condition. The goal of the work is to then propose a distributed control scheme that allows maintaining a prescribed minimum PE level so as to ensure filter convergence. At the same time, the agents in the group are also allowed to perform additional tasks of interest while maintaining a collective observability of the target(s).

7.1.2 Visual SLAM

Participants: Mathieu Gonzalez, Eric Marchand.

Most classical SLAM systems rely on the static scene assumption, which limits their applicability in real-world scenarios. Recent SLAM frameworks have been proposed to simultaneously track the camera and moving objects. However, they are often unable to estimate the canonical pose of the objects and exhibit a low object tracking accuracy. To solve this problem, we proposed this year TwistSLAM++, a semantic, dynamic SLAM system that fuses stereo images and LiDAR information. Using semantic information, we track potentially moving objects and associate them to 3D object detections in LiDAR scans to obtain their pose and size. Then, we perform registration on consecutive object scans to refine the object pose estimation. Finally, object scans are used to estimate the shape of the object and constrain map points to lie on the estimated surface within the bundle adjustment. We show on classical benchmarks that this fusion approach based on multimodal information improves the accuracy of object tracking. This work 41 was done in cooperation with the IRT B-Com.

7.2 Advanced Sensor-Based Control

7.2.1 Trajectory Generation for Minimum Closed-Loop State Sensitivity

Participants: Pascal Brault, Ali Srour, Paolo Robuffo Giordano.

The goal of this research activity is to propose a new point of view in addressing the control of robots under parametric uncertainties: rather than striving to design a sophisticated controller with some robustness guarantees for a specific system, we propose to attain robustness (for any choice of the control action) by suitably shaping the reference motion trajectory so as to minimize the state sensitivity to parameter uncertainty of the resulting closed-loop system. This activity is in the scope of the ANR project CAMP (Sect. 9.3.9).

Over the past years, we have developed the notion of closed-loop “state sensitivity” and “input sensitivity” metrics. These allow formulating a trajectory optimization problem whose solution is a reference trajectory that, when perturbed, requires minimal change of the control inputs and has minimal deviations in the final tracking error (the underlying sensitivity dynamics are sketched after the list below). During the year we have also worked on several additional extensions that resulted in the following contributions:

  • together with Ali Srour, we have extended in 57 the trajectory optimization problem to also consider the control gains as possible optimization variables. We could then generate minimally-sensitive trajectories that further reduce the state sensitivity by a proper choice of the control gains. The idea has been tested on three case studies involving a 3D quadrotor with a Lee (geometric) controller, a near-hovering (NH) controller and a sliding mode controller. The results of a large-scale simulative campaign showed an interesting pattern in the approach to the final hovering state, as well as unexpected trends in the optimized gains (e.g., from a statistical point of view, the “derivative” gains of the position loop were often reduced w.r.t. their initial values whereas the proportional gains were increased on average). These findings motivate us to keep working on the idea of jointly optimizing the motion and control gains for attaining robustness;
  • together with Andrea Pupa and Cristian Secchi (Univ. Modena and Reggio Emilia, Italy), we have proposed in 51 a trajectory optimization formulation for determining the optimal energy tank initialization that allows implementing passive actions while being robust to model uncertainties. Indeed, energy tanks have gained popularity in the robotics and control communities over the last years since they represent a powerful tool to enforce passivity (and, thus, input/output stability) of a controlled robot, possibly interacting with uncertain environments. However, an open issue when using energy tanks is the choice of the initial tank energy (a free parameter). In this work, we show how this initial energy can be optimally chosen for guaranteeing execution of a task also in the presence of model uncertainties, thanks to a clever use of the state sensitivity (and derived quantities) in the optimization problem;
  • together with Simon Wasiela (LAAS-CNRS), we have proposed in 59 a global control-aware motion planner for optimizing the state sensitivity metric and producing collision-free reference motions that are robust against parametric uncertainties for a large class of complex dynamical systems. In particular, we have proposed a RRT*-based planner called SAMP that uses an appropriate steering method to first compute a (near) time-optimal and kinodynamically feasible trajectory that is then locally deformed to improve robustness and decrease its sensitivity to uncertainties. The evaluation performed on planar/full-3D quadrotor UAV models shows that the SAMP method produces low sensitivity robust solutions with a much higher performance than a planner directly optimizing the sensitivity.
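
As a minimal sketch of the quantity underlying these works (the standard first-order sensitivity dynamics; notation is ours and simplified): for closed-loop dynamics $\dot{q} = f(q, p, t)$ with uncertain parameters $p$, the state sensitivity

\[
\Pi(t) = \frac{\partial q(t)}{\partial p}, \qquad
\dot{\Pi} = \frac{\partial f}{\partial q}\,\Pi + \frac{\partial f}{\partial p}, \qquad \Pi(t_0) = 0,
\]

can be integrated alongside the system, and the reference trajectory (and possibly the control gains) is then optimized so as to minimize a suitable norm of $\Pi$ along the motion.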

7.2.2 UWB beacon navigation of assisted power wheelchair

Participants: Vincent Drevelle, Marie Babel, François Pasteau, Theo Le Terrier.

New sensors, based on ultra-wideband (UWB) radio technology, are emerging for indoor localization and object tracking applications. Contrary to vision, these sensors are low-cost, non-intrusive and easy to fit on the wheelchair. They enable measuring distances between fixed beacons and mobile sensors. We seek to exploit these UWB sensors for the navigation of a wheelchair, despite the low accuracy of the measurements they provide. The problem here lies in defining an autonomous or shared sensor-based control solution which takes into account the measurement uncertainty related to the UWB beacons.

Because of multipath or non-line-of-sight propagation, erroneous measurements can perturb radio beacon ranging in cluttered indoor environments. This happens when people, objects or even the wheelchair and its driver mask the direct line of sight between the two beacons.

We designed a robust wheelchair positioning method, based on an extended Kalman filter with outlier identification and rejection. The method fuses UWB range measurements with a low-cost gyro and the motor commands to estimate the orientation and position of the wheelchair. A dataset with various densities of people around the wheelchair driver has been recorded. Preliminary results show decimeter-level planar accuracy in crowded environments.
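
A minimal sketch of the outlier identification and rejection step in such a filter (hypothetical Python; the state layout, noise values and threshold are illustrative): each UWB range innovation is checked against a chi-square (Mahalanobis) gate before being allowed to update the state.

```python
# Hedged sketch: EKF range update with chi-square innovation gating, as
# commonly used to reject multipath/NLOS UWB measurements.
import numpy as np
from scipy.stats import chi2

GATE = chi2.ppf(0.99, df=1)  # gate for a scalar range measurement

def ekf_range_update(x, P, beacon, z, R=0.05**2):
    """x = [px, py, ...], P its covariance, beacon the fixed anchor
    position, z the measured range, R the measurement noise variance."""
    d = x[:2] - np.asarray(beacon)
    r = np.linalg.norm(d)
    H = np.zeros((1, x.size))
    H[0, :2] = d / r                 # Jacobian of the range w.r.t. the state
    y = z - r                        # innovation
    S = (H @ P @ H.T)[0, 0] + R      # innovation variance
    if y * y / S > GATE:             # NIS test: identify and reject outlier
        return x, P, False
    K = (P @ H.T / S).ravel()        # Kalman gain
    x_new = x + K * y
    P_new = (np.eye(P.shape[0]) - np.outer(K, H)) @ P
    return x_new, P_new, True
```

In the actual system, accepted updates of this kind are fused with the prediction step driven by the gyro and motor commands.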

Based on this pose estimator, a demonstration of autonomous navigation between defined poses has been shown to practitioners and power wheelchair users during the IH2A boot camp. This will provide the basis for coarse-grained navigation, while the use of sensor-based control and complementary sensors is investigated for precision tasks.

7.2.3 Imitation Learning for Visual Servoing

Participant: Paolo Robuffo Giordano.

One active research direction in robotics is to make robots adaptive and easy to use also by unskilled operators. In this context, the framework of dynamical system-based imitation learning plays an important role. In fact, it allows realizing stable and complex robotic tasks without explicitly coding them, thus facilitating the use of the robot. However, the adaptation capabilities of dynamical systems have not been fully exploited due to the lack of closed-loop implementations making use of visual feedback. In this regard, the integration of visual information allows higher flexibility to cope with environmental changes. To this end, in 50 we have presented a dynamical system-based imitation learning scheme for visual servoing based on the large projection task priority formulation. The proposed scheme enables complex and stable visual tasks, as demonstrated by a simulation analysis and experiments with a robotic manipulator.

7.2.4 Visual servo of the orientation of an Earth observation satellite

Participants: Maxime Robic, Eric Marchand, François Chaumette.

This study is done in the scope of the BPI Lichie project (see 9.3.8). Its goal is to control the orientation of a satellite to track particular objects on the Earth. In a first part, we designed an image-based control scheme able to compensate for the satellite, Earth, and potential object motions. We are currently considering how to avoid motion-blur effects in the images acquired by the camera.

7.2.5 Multi-sensor-based control for accurate and safe assembly

Participants: John Thomas, François Pasteau, François Chaumette.

This study is also done in the scope of the BPI Lichie project (see 9.3.8). Its goal is to design sensor-based control strategies coupling vision and proximetry data for ensuring precise positioning while avoiding obstacles in dense environments. The targeted application is the assembly of satellite parts.

In a first part of this study, we designed a general ring of proximetry sensors and modeled the system so that tasks such as plane-to-plane positioning can be achieved. The stability analysis of this system has been done in 31, while its expression using the screw formalism has been presented in 58.

7.2.6 Sensor-based Control for Cable-Driven Parallel Robots

Participant: François Chaumette.

This study is done through the Ph.D. of Thomas Rousseau at IRT Jules Verne in collaboration with Stéphane Caro (LS2N) (see 8.2.2). We use a combination of proximetry sensors and an eye-to-hand vision sensor so that the platform of a cable-driven parallel robot is able to inspect large parts 53. The stability analysis of this system has been presented in 54.

7.2.7 Visual Exploration of an Indoor Environment

Participants: Thibault Noël, Eric Marchand, François Chaumette.

This study is done in collaboration with the Creative company in Rennes (see Section 8.2.1). It is devoted to the exploration of indoor environments by a mobile robot for a complete and accurate reconstruction of the environment.

In this context, we proposed a novel roadmap construction method for unknown environments, which relies on the extraction of the Hamilton-Jacobi skeleton of the free space. This skeleton is used to construct a graph of free-space bubbles, effectively compressing the skeleton information into a sparse data structure while retaining its topology. The bubbles also enforce safety directly in the roadmap structure. We first demonstrate the relevance of this approach for standard path-planning tasks. We also propose a frontier-based exploration strategy able to autonomously and safely build a complete 2D map of the environment 23.
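The bubble idea can be illustrated with a minimal sketch on an occupancy grid, using an off-the-shelf morphological skeleton rather than the Hamilton-Jacobi skeleton of the paper (r_min and the greedy sampling below are illustrative choices):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def bubble_roadmap(free_mask, r_min=3.0):
    """Skeletonize the free space, then greedily keep skeleton points whose
    clearance radius (distance to the nearest obstacle) is large enough;
    each (center, radius) pair is a free-space 'bubble', collision-free by
    construction. Edges can then link pairs of overlapping bubbles."""
    clearance = distance_transform_edt(free_mask)   # distance to obstacles
    skel = skeletonize(free_mask)
    bubbles = []
    for p in np.argwhere(skel):
        r = clearance[tuple(p)]
        if r < r_min:
            continue
        # keep p only if it is not already covered by an accepted bubble
        if all(np.linalg.norm(p - c) > rc for c, rc in bubbles):
            bubbles.append((p, r))
    return bubbles
```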

7.2.8 Shape servoing of soft objects using Fourier descriptors

Participants: Fouad Makiyeh, Alexandre Krupa, Maud Marchal, François Chaumette.

This study takes place in the context of the GentleMAN project (see Section 9.1.1). In 47, we proposed a shape visual servoing approach to deform a soft object toward a desired 3D shape using a limited number of handling points. For this purpose, the shape of the deformable object is represented using Fourier descriptors. We derived the analytical relation that gives the variation of the Fourier coefficients as a function of the motions of the handling points, by considering a mass-spring model (MSM); a control law was then designed from this relation. Since the MSM only approximates the object behavior, which in practice can lead to a drift between the object and its model, an online realignment of the model with the real object is performed by tracking its surface using data provided by a remote RGB-D camera. Simulation results validated the approach in the case where multiple points act on a 2D soft object, while experimental results obtained with two robotic arms demonstrated the autonomous shaping of a 3D soft object.
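For illustration, Fourier descriptors of a closed 2D contour can be computed in a few lines (a generic sketch; the paper's exact parametrization and truncation may differ):

```python
import numpy as np

def fourier_descriptors(contour_xy, n_harmonics=10):
    """Fourier descriptors of a closed 2-D contour: sample the contour as
    complex numbers z = x + iy and take the FFT; the low-order coefficients
    compactly encode the global shape, which is the quantity the servoing
    scheme regulates toward its desired value."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    # keep the lowest harmonics (coeffs[0] only encodes the centroid position)
    return np.concatenate([coeffs[1:n_harmonics + 1], coeffs[-n_harmonics:]])
```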

7.2.9 Constraint-based simulation of passive suction cups

Participant: Maud Marchal.

In 5, we proposed a physics-based model of the suction phenomenon to simulate deformable objects like suction cups. Our model uses a constraint-based formulation to simulate the variations of pressure inside suction cups. The internal pressures are represented as pressure constraints, coupled with anti-interpenetration and friction constraints. Furthermore, our method is able to detect multiple air cavities using information from collision detection. We solve the pressure constraints based on the ideal gas law while considering several cavity states. We test our model on a number of scenarios reflecting a variety of uses, for instance a spring-loaded jumping toy, a manipulator performing a pick-and-place task, and an octopus tentacle grasping a soda can. We also evaluate the ability of our model to reproduce the physics of suction cups of varying shapes, lifting objects of different masses and sliding on a slippery surface. The results show promise for various applications, such as simulation in soft robotics and computer animation.
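The pressure constraint rests on the ideal gas law; under an isothermal assumption it reduces to Boyle's law, as in this minimal sketch of a single sealed cavity (illustrative names and values):

```python
def cavity_pressure(p0, v0, v, p_atm=101.325e3):
    """Isothermal ideal-gas law for a sealed suction-cup cavity:
    p * V = p0 * V0, so compressing the cavity raises its pressure while
    stretching it below ambient pressure creates a suction force
    F = (p_atm - p) * A on the contact area A."""
    p = p0 * v0 / v          # Boyle's law at constant temperature
    return p, p_atm - p      # cavity pressure and suction pressure deficit
```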

7.2.10 Deep visual servoing

Participants: Eric Marchand, Samuel Felton.

We proposed a new visual servoing method that controls the robot motion in a latent space. We aim to combine the best properties of two previously proposed families of methods: the accuracy of photometric methods such as Direct Visual Servoing (DVS), and the behavior and convergence of pose-based visual servoing (PBVS). Photometric methods suffer from a limited convergence area due to a highly non-linear cost function, while PBVS requires estimating the pose of the camera, which may introduce noise and incur a loss of accuracy. Our approach relies on shaping (with metric learning) a latent space in which the representations of camera poses and the embeddings of their respective images are tied together. By leveraging the multimodal aspect of this shared space, our control law minimizes the difference between latent image representations, using information obtained from a set of pose embeddings 38.
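A minimal sketch of one latent-space servoing step, assuming a pre-trained image encoder and a Jacobian of the embedding with respect to camera motion (encode_image and latent_jacobian are hypothetical placeholders, not the actual architecture of 38):

```python
import numpy as np

def latent_servo_step(encode_image, latent_jacobian, image, z_star, lam=0.5):
    """One servoing step in a learned latent space: the error is the
    difference between the embedding of the current image and the target
    embedding; the velocity is obtained through a (learned or numerically
    estimated) Jacobian of the embedding w.r.t. camera motion."""
    z = encode_image(image)                # current latent representation
    e = z - z_star                         # latent error
    Jp = np.linalg.pinv(latent_jacobian)   # maps latent error to velocity
    return -lam * Jp @ e                   # 6-DoF camera velocity command
```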

7.2.11 Multi-Robot Formation Control and Localization

Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Maxime Bernard, Nicola De Carli, Lorenzo Balandi, Esteban Restrepo, Antonio Marino, Francesca Pagano.

Systems composed of multiple robots are useful in many applications where complex tasks need to be performed. Examples range from target tracking to search-and-rescue operations and load transportation. We have been very active over the last years on the topics of coordination, estimation, localization, and control of multiple robots under the possible guidance of a human operator (see, e.g., Sect. 9.3.10). During this year we have produced the contributions discussed in Sect. 7.1.1 (which consider “active sensing” tasks or constraints) as well as the following ones, focused on other aspects of multi-robotics:

  • in 34 we have proposed a decentralized connectivity maintenance algorithm for controlling a group of quadrotor UAVs with limited field of view (FOV) and no common reference frame for collectively expressing measurements and commands. This is in contrast with the vast majority of previous works on this topic, which instead make the (simplifying) assumptions of omnidirectional sensing and of a common shared frame. The goal is achieved by designing a gradient-based connectivity-maintenance controller able to take into account the presence of a limited FOV. The controller requires the UAVs' relative orientations, which are not assumed to be directly measurable; we therefore also proposed a novel decentralized estimator of the relative orientations for correctly implementing the connectivity-maintenance action. The framework is validated in realistic simulations involving a human operator in control of 6 UAVs (a minimal sketch of the connectivity measure underlying such controllers is given after this list).
  • in 28 we consider the problem of increasing the robustness of a multi-agent network to node failure or removal. This is obtained by proposing a control approach able to maintain the biconnectivity of an initially biconnected graph. Remarkably, if the graph is not initially biconnected, or if the property is lost after a node removal or failure, our approach is also able to render the graph biconnected after a time instant that can be tuned by the user. The proposed control algorithm is completely distributed, using only locally available information, and requires the estimation of a single global parameter, akin to existing connectivity maintenance algorithms in the literature. Numerical simulations illustrate the effectiveness of the approach.
  • in 62 we have addressed the problem of designing a control algorithm for multiple nonholonomic vehicles, to make them adopt a formation around a non-prespecified point on the plane, with a common but non-preimposed orientation. We consider the rendezvous problem for second-order (force-controlled) nonholonomic systems with time-varying measurement delays, for which velocity measurements are not available. The novelty and most appealing feature of the approach is that it is physics-based: it relies on the design of distributed dynamic controllers that may be assimilated to second-order mechanical systems. The consensus task is achieved by making the controllers, not the vehicles themselves, achieve consensus. The vehicles are then steered into a formation by virtue of fictitious mechanical couplings with their respective controllers. We cover different settings of increasing technical difficulty, from consensus formation control of second-order integrators to second-order nonholonomic vehicles, in scenarios including both state- and output-feedback control. The approach is tested in a realistic case study in which the vehicles communicate over a common WiFi network that introduces time-varying delays.
  • in 29 we have addressed the problem of simultaneous topology identification and synchronization of a complex dynamical network with directed interconnections. In many complex dynamical networks the topology of the interconnections between the nodes (agents) is unknown, so the network is not directly accessible to control or analysis. We studied the graph identification problem using an edge-agreement framework that allows us to write the system in a form usually found in the adaptive control literature. Based on this representation we proposed a new algorithm where the dynamics of an auxiliary network are designed to be δ-persistently exciting, allowing us to identify the unknown topology while guaranteeing that the synchronization behavior of the system is preserved. This result distinguishes itself from previous works in that we were able to provide strong stability results for the identification errors. Such a characterization of the error trajectories is not only stronger than the convergence results generally proposed in the literature, but is also an important property that may allow these results to be applied to more complex systems and applications. The effectiveness of this approach was also demonstrated with a numerical example.
  • We are also considering approaches from a different point of view, involving the use of machine learning to ease the computational and communication burden of (more classical) model-based formation control and estimation laws. In this direction, we have submitted to the IEEE Transactions on Control of Network Systems a work where we study the conditions for input-to-state stability (ISS) and incremental input-to-state stability (δISS) of Gated Graph Neural Networks (GGNNs). We show that this recurrent version of Graph Neural Networks (GNNs) can be expressed as a dynamical distributed system and, as a consequence, can be analysed using model-based techniques to assess its stability and robustness properties. The stability criteria found can then be exploited as constraints during the training process to enforce the internal stability of the neural network. We illustrate these findings in two distributed control examples, flocking and multi-robot motion control, showing that the use of these conditions increases the performance and robustness of the gated GNNs.
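The connectivity maintenance discussed in the first item rests on the algebraic connectivity λ2 of the interaction graph, which a gradient-based controller keeps strictly positive. Below is a minimal, centralized sketch with omnidirectional distance-based weights (the cited works handle limited FOVs and decentralized estimation of these quantities):

```python
import numpy as np

def algebraic_connectivity(pos, sigma=1.0, d_max=3.0):
    """Compute lambda_2, the second-smallest eigenvalue of the weighted
    graph Laplacian: inter-robot weights decay with distance and vanish
    at the sensing limit d_max. A gradient controller then moves each
    robot along d(lambda_2)/d(p_i), computed from the Fiedler vector."""
    n = len(pos)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j])
            if d < d_max:
                W[i, j] = W[j, i] = np.exp(-d**2 / sigma)
    L = np.diag(W.sum(axis=1)) - W
    evals, evecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    return evals[1], evecs[:, 1]         # lambda_2 and its Fiedler vector
```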

7.2.12 Warping Character Animations using Visual Motion Features

Participant: Claudio Pacchierotti.

We addressed the challenge of efficiently and automatically adapting existing character animations to new contexts. While some methods exist to edit animations, few consider the impact of camera angles on visual features and how they affect character motion. The proposed solution introduces viewpoint-dependent motion warping units that make subtle adjustments to animations based on specified visual motion features like visibility and spatial extent. This technique is framed as a visual servoing problem, regulating animation warping by controlling specific visual features. The results demonstrate the effectiveness of this approach in various motion editing tasks and suggest its potential in enhancing virtual characters' communication abilities by incorporating attention-awareness 16.

7.3 Haptic Cueing for Robotic Applications and Virtual Reality (VR)

7.3.1 Wearable haptics for human-centered robotics and Virtual Reality (VR)

Participants: Claudio Pacchierotti, Maud Marchal, Lisheng Kuang.

We have been working on wearable haptics for several years now, both from the hardware (design of interfaces) and the software (rendering and interaction techniques) points of view. This line of research has continued this year.

In 17, we present a versatile 4-DoF hand wearable haptic device tailored for VR. Its adaptable design accommodates various end-effectors, facilitating a wide spectrum of tactile experiences. Comprising a fixed upper body attached to the hand's back and interchangeable end-effectors on the palm, the device employs articulated arms actuated by four servo motors. The work outlines its design, kinematics, and a positional control strategy enabling diverse end-effector functionality. Through three distinct end-effector demonstrations mimicking interactions with rigid, curved, and soft surfaces, we showcase its capabilities. Human trials in immersive VR confirm its efficacy in delivering immersive interactions with varied virtual objects, prompting discussions on additional end-effector designs.

In 12, we conducted experiments to assess various haptic feedback types during needle insertion by a remote robot into soft tissues. Our study involved human subjects and analyzed grounded kinesthetic, wearable cutaneous vibrotactile, and wearable cutaneous pressure feedback. Results highlighted that employing different channels—kinesthetic and cutaneous—enhanced performance in identifying tissue layers compared to utilizing the same commercial kinesthetic interface for both feedback types. Additionally, cutaneous pressure feedback was found more effective in representing the elastic component than vibrotactile sensations. Interestingly, our findings suggest that delocalized wearable cutaneous sensations effectively render the elastic component, emphasizing the importance of multi-channel feedback in simulating needle-tissue interactions.

In 32, we investigated the potential of wearable electrotactile feedback for improving contact information in virtual reality (VR) interactions. Introducing a novel electrotactile rendering method, we modulated stimulus intensity based on the user's finger-to-virtual-surface interpenetration distance. In an initial study (N=21), compared to visual or no feedback, interpenetration feedback notably enhanced contact precision and accuracy. Calibration precision for electrotactile stimuli emerged as crucial, prompting a second study (N=16) comparing non-VR keyboard versus VR direct-interaction calibration techniques. Despite similar usability and accuracy, the VR method notably excelled in speed. These findings advocate wearable electrotactile feedback as a potent substitute for visual feedback, promising improved contact perception in VR interactions.
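A minimal sketch of the interpenetration-based modulation idea (the linear mapping, saturation depth, and parameter names are illustrative, not the calibrated values of the study):

```python
def electrotactile_intensity(penetration_mm, i_det, i_max, d_sat=10.0):
    """Stimulus intensity grows from the user's detection threshold i_det
    to a comfort maximum i_max as the finger penetrates the virtual
    surface, saturating at depth d_sat."""
    a = min(max(penetration_mm / d_sat, 0.0), 1.0)  # normalized penetration
    return i_det + a * (i_max - i_det)
```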

In 60, we have introduced a prototyping toolkit designed to facilitate wearable haptic interaction experiences, addressing challenges in integrating various smart materials and devices into coherent experiences due to disparate programming interfaces. The toolkit integrates actuators and sensors into the design environment, accommodating humans, robots, and the interactive space. It offers an easy-to-use graphical interface for non-programmers, seamless device extensibility via the Robot Operating System (ROS), and the ability to develop complex actuation sequences. It enables manual or adaptive actuation of sensations based on sensor readings, providing a modular and versatile system for diverse haptic interactions.

7.3.2 Affective and persuasive haptics for Virtual Reality (VR)

Participant: Claudio Pacchierotti.

Affective and persuasive haptics in Virtual Reality (VR) constitute an evolving frontier that explores the integration of tactile feedback to evoke emotional responses and influence user behavior within virtual environments. By leveraging haptic technologies, these systems aim to create immersive experiences that go beyond visual and auditory stimuli, introducing touch as a compelling tool for emotional engagement and persuasion.

In 44, we investigated promoting positive social interactions in VR by enhancing empathy among users through affective wearable haptic feedback. Our study utilized a virtual meeting scenario where a human user interacts with virtual agents, experiencing the stress level changes of a presenting virtual agent. Employing compression belts and vibrators mimicking the presenter's breathing and heart rate, we conducted a user study comparing “sympathetic” haptic feedback to an “indifferent” feedback approach. While results varied among users, generally, sympathetic wearable haptic rendering was preferred and enhanced empathy and perceived connection to the presenter. These findings advocate for integrating affective haptics in social VR applications, particularly in fostering positive relationships between users.

In 55, we investigated the impact of synchronized tactile feedback with speech on social interactions within Virtual Reality (VR). Participants engaged in immersive VR scenarios involving verbal communication with virtual agents across two experiments. In the first, we augmented speech with vibrotactile feedback for one virtual agent, resulting in enhanced co-presence, persuasiveness, and leadership perception. In the second experiment, participants' own speech augmented with vibrotactile feedback led to heightened co-presence and self-perceived persuasiveness. These findings underscore the significant potential of haptic feedback in VR to augment social interactions, indicating broader applications in contexts where verbal communication is pivotal.

In 52, we investigated gaze behavior's impact on social interactions in virtual reality. Analyzing a within-subject experiment involving 21 users navigating a virtual street with idle or moving crowds of agents, we manipulated agents' gaze (averted, directed, shifting). Results revealed the preserved stare-in-the-crowd effect in dynamic scenarios, influencing users' gaze and social anxiety. Directed gazes affected gaze interaction and proximity behaviors, while locomotion remained unchanged. These findings underscore the significance of virtual agents' gaze in VR environments and emphasize the need for further exploration to better comprehend factors influencing gaze and locomotion interactions in such settings.

7.3.3 Mid-Air Haptic Feedback

Participants: Claudio Pacchierotti, Maud Marchal, Thomas Howard, Guillaume Gicquel, Lendy Mulot.

In the framework of H2020 projects H-Reality and E-TEXTURE, we have been working to develop novel mid-air haptics paradigms that can convey the information spectrum of touch sensations in the real world, motivating the need to develop new, natural interaction techniques. Both projects ended in 2022, but we have continued to work on this exciting subject.

In 15, we explore merging acoustically transparent tangible objects (ATTs) with ultrasound mid-air haptic (UMH) feedback for seamless haptic interactions with digital content. Balancing unique strengths and limitations, both methods offer unobtrusive user experiences. The work outlines the haptic design space and technical prerequisites for this fusion. Addressing potential sound interference by ATTs affecting UMH delivery, we examine single ATT surfaces' impact on ultrasound focal points. Human trials assessing detection thresholds, motion discrimination, and UMH localization reveal that ATT surfaces can maintain UMH fidelity. Findings indicate the feasibility of combining ATTs and UMH, fostering potential applications in haptic technology without compromising UMH effectiveness.

In 49, we explored mid-air shape rendering methodologies. Dynamic Tactile Pointers (DTP) have emerged as a superior method for conveying shape information by guiding an amplitude-modulated focal point along contours. By contrast, the conventional approach, Spatio-Temporal Modulation (STM), produces blurry shapes affecting identification accuracy. Although DTP yields clearer shapes, they are perceived as weaker compared to STM. To address this, we propose investigating Spatio-temporally-modulated Tactile Pointers (STP), aiming to enhance tactile intensity using spatio-temporal modulation instead of amplitude modulation. This exploration seeks to improve perceived intensity while maintaining shape clarity in ultrasound-based haptic feedback systems.

In 21, we explored enhancing ultrasound mid-air haptic (UMH) shape rendering methods. Spatio-temporal modulation (STM) offers design flexibility but produces blurry shapes, while Dynamic Tactile Pointers (DTP) enhance clarity but with reduced stimulus intensity. Introducing Spatio-temporally-modulated Tactile Pointers (STP), we aimed to improve shape clarity while maintaining strong vibrotactile sensations. Our experiments revealed that STP shapes were perceived as notably stronger than DTP shapes, with improved shape identification akin to DTP and surpassing STM. This work provided insights for effective UMH shape rendering, shedding light on vibrotactile shape perception and potentially guiding future psychophysical investigations in UMH technology.

In 22, we introduced and assessed mid-air ultrasound haptic strategies for 2-degree-of-freedom position and orientation guidance in Virtual Reality. Four strategies for position guidance and two for orientation guidance were devised and evaluated in a human subject study. Findings revealed significant enhancements in positioning performance within static scenarios compared to visual feedback alone. Conversely, orientation guidance significantly improved performance in dynamic scenarios but not in static settings, highlighting the effectiveness of these strategies based on environmental context in VR.

7.3.4 Encounter-Type Haptic Devices

Participants: Claudio Pacchierotti, Lisheng Kuang, Elodie Bouzbib.

Encounter-Type Haptic Displays (ETHDs) provide haptic feedback by positioning a tangible surface for the user to encounter. This allows users to freely elicit haptic feedback from a surface during a virtual simulation. ETHDs differ from most current haptic devices, which rely on an actuator always in contact with the user.

In 40, we introduced a device featuring an origami-based structure for haptic stimuli on the palm in virtual reality tasks. The device consisted of two platforms connected by eight curved links, allowing the origami-like structure to adapt into various shapes. With a static platform housing eight motors and a mobile platform maneuvering a 7-DoF structure, the device offered diverse surface configurations. Utilizing a parallel configuration, it achieved seven degrees of freedom, enabling reconfiguration to mimic flat or angular surfaces. Despite being over-actuated, positioning all actuators on the dorsal hand platform provided enhanced control for effective haptic rendering.

In 45, we introduced a grounded haptic device featuring interchangeable end-effectors. Its design comprises a self-constrained mechanism with three legs connected between a static lower platform and a moving upper platform, maneuvering over a sphere surface. Utilizing spiral cam-driven joints, it enables various end-effector activations. We showcase soft and origami-inspired rigid end-effectors, adaptable to different shapes and curvatures. Additionally, we conducted a perceptual experiment with 12 participants, assessing the soft end-effector's capability to simulate curvatures. Demonstrating its functionality, we applied the device as an encounter-type haptic interface for interacting with virtual objects, emphasizing its versatility across different interactions and end-effector types.

In 7, we introduced PalmEx, aiming to enhance haptic exoskeleton gloves in VR by incorporating palmar force-feedback, a crucial but often lacking aspect. Our approach, demonstrated through a self-contained hardware system, integrates a palmar contact interface into hand exoskeletons, enhancing grasping sensations and manual haptic interactions in VR. By extending existing taxonomies, we evaluated PalmEx's capabilities for virtual object exploration and manipulation. Technical assessments optimized virtual-physical interaction delays, followed by a user study (N=12) examining PalmEx's design space. Findings highlighted PalmEx's superior rendering capabilities for realistic grasping in VR, emphasizing the significance of palmar stimulation. This innovation offers an affordable solution to augment high-end consumer hand exoskeletons, addressing the deficiency in in-hand haptic sensations.

7.3.5 Pseudo-haptics effects

Participants: Claudio Pacchierotti, Elodie Bouzbib.

Pseudo-haptics is emerging as a compelling technology in the realm of human-computer interaction, offering an intriguing approach that simulates haptic sensations without direct physical contact. Unlike traditional haptics, which relies on physical feedback mechanisms, pseudo-haptics manipulates visual or auditory cues to create the illusion of tactile perception. While pseudo-haptics offers exciting possibilities, its effectiveness relies on finely tuned sensory manipulation to convincingly simulate haptic sensations, requiring further exploration and refinement for optimal integration across interactive applications.

In 6, we explored perceptual thresholds for pseudo-stiffness in a virtual reality grasping task, leveraging pseudo-haptic techniques. Fifteen participants engaged in a study aimed at inducing compliance on a non-compressible tangible object. Results demonstrated that compliance can be induced in a rigid tangible object and that pseudo-haptics can simulate stiffness beyond 24 N/cm. Additionally, the efficiency of pseudo-stiffness was influenced by object scale and strongly correlated with the user's input force. These findings both simplify future haptic interface design and extend the potential haptic properties of passive props within VR.
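The pseudo-stiffness mechanism itself is compact: the visual compression displayed to the user is derived from the measured input force and the virtual stiffness to be conveyed (a deliberately simplified sketch; names are illustrative):

```python
def pseudo_stiffness_displacement(force_n, k_virtual):
    """The tangible object does not actually compress, but the virtual
    hand/object is displayed compressed by F / k, so the visual cue alone
    induces a perceived stiffness of k (N per unit displacement)."""
    return force_n / k_virtual
```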

7.3.6 Multimodal Cutaneous Haptics to Assist Navigation and Interaction in VR

Participants: Louise Devigne, Marco Aggravi, Inès Lacôte, Pierre-Antoine Cabaret, François Pasteau, Maud Marchal, Claudio Pacchierotti, Marie Babel.

Within the Inria Challenge project (see Sect. 9.3.7), we investigated the use of cutaneous haptics for aiding the navigation of people with sensory disabilities. In particular, we studied the ability of vibrotactile sensations and tap stimulations to convey haptic motion and sensory illusions.

In 18, we examined the perception of 2D directional cues conveyed through a handheld tangible interface resembling a cylindrical handle, featuring five custom electromagnetic actuators. Twenty-four participants engaged in an experiment evaluating cue recognition using actuators for vibrating or tapping sequences across the palm. Findings highlighted the impact of handle positioning, stimulation mode, and directional cues on recognition rates. Participants exhibited higher confidence in recognizing vibration patterns. Results consistently supported the haptic handle's potential for accurate guidance, achieving recognition rates surpassing 70% across all conditions and exceeding 75% in precane and power wheelchair configurations.

In 8, we focused on advancing Virtual Reality (VR) manipulation by exploring enhanced haptic feedback. While tangible objects offer realistic haptic sensations, their static properties limit adaptability to virtual interactions. Contrastingly, vibrotactile feedback presents dynamic cues, such as impacts or textures, yet current VR controllers offer limited vibration patterns. This study investigated spatializing vibrotactile cues within tangible objects to broaden the range of sensations and interactions in VR. Through perception studies, we assessed the feasibility and advantages of leveraging multiple actuators for rendering schemes. Results indicated discernible vibrotactile cues from localized actuators and reveal their benefits for specific rendering methods, underscoring the potential for enriched VR experiences.

Along a similar line, we also addressed the development of the HaptiComm 10, a specialized device designed to emulate the tactile sensations experienced during fingerspelling communication, to enhance communication with deaf/blind people. Comprising twenty-four strategically positioned electrodynamic actuators, the HaptiComm generates distinct tactile feedback corresponding to the fingerspelling alphabet. Initial experimental assessments have shown promising results, indicating the need for further investigation of variables such as stimulus timing and pace to enhance the device's efficacy and usability in deaf/blind communication.

7.3.7 Passivity control for haptic robotic teleoperation

Participant: Claudio Pacchierotti.

Passivity control represents a fundamental approach within haptic robotic teleoperation, playing a crucial role in refining human-machine interaction dynamics. In this context, where a human operator guides a remote robotic system, stability and safety are primary concerns. Passivity control strategies prioritize managing the system's energy behavior to uphold stability and transparency between the human operator and the robotic platform. By regulating energy flow, this control method addresses issues such as instability and delays in haptic feedback, ensuring a smoother connection between the operator's inputs and the robotic actions. Its focus on managing energy dynamics helps overcome challenges associated with teleoperation, like delays and uncertainties. Consequently, passivity control enables more reliable and responsive teleoperation experiences across diverse sectors, including healthcare, manufacturing, and space exploration.

In 35, we introduced a time-domain passivity controller for multi-DoF haptic-enabled teleoperation systems, aiming to enhance task-specific transparency while ensuring system stability. Employing online convex optimization, this method prioritizes transparency along key directions in the environment space. We experimentally assessed it with twenty participants, comparing its performance against a standard energy-bounding time-domain algorithm while exploring a virtual sphere. Findings revealed that, as the communication delay between local and remote agents increased, the proposed technique better preserved transparency along crucial task-oriented directions, showcasing its effectiveness in mitigating delays for essential task aspects. This work was a finalist for the Best Paper Award at IEEE World Haptics 2023.
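For context, the sketch below shows the standard scalar time-domain passivity observer/controller mechanism on which such schemes build; the contribution of 35 lies in optimizing, over the multi-DoF space, which directions are damped (names and conventions here are illustrative):

```python
def tdpa_step(f, v, dt, state):
    """One step of a standard time-domain passivity controller: a passivity
    observer integrates the energy exchanged at the haptic port; if it goes
    negative (active behavior, e.g. caused by delay), a passivity controller
    injects just enough damping to dissipate the excess."""
    state['E'] = state.get('E', 0.0) + f * v * dt   # passivity observer
    if state['E'] < 0.0 and abs(v) > 1e-6:
        alpha = -state['E'] / (v * v * dt)          # damping to cancel deficit
        f_damp = alpha * v
        state['E'] += f_damp * v * dt               # account for dissipation
        return f + f_damp, state
    return f, state
```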

7.3.8 Digital Twins for robotics and industrial training

Participant: Claudio Pacchierotti.

Among the most recent enabling technologies, Digital Twins (DTs) emerge as data-intensive network-based computing solutions in multiple domains, from Industry 4.0 to Connected Health. A DT works as a virtual system for replicating, monitoring, predicting, and improving the processes and the features of a physical system, the Physical Twin (PT), connected in real time with its DT. Such a technology, based on advances in fields like the Internet of Things (IoT) and machine learning, proposes novel ways to address the issues of complex systems, such as those found in Human-Robot Interaction (HRI).

In 61, we explore the multifaceted landscape of Digital Twins. Their diverse representations in literature challenge a singular definition, showcasing their adaptability and potential convergence with phygital systems. This adaptability underscores their capacity to address heterogeneous real-world problems and inspire novel strategies for managing uncertainty in complex systems. Notably, in healthcare, Digital Twins hold promise in domains like Digital Health and mHealth, contributing to advancements in biomedical sciences, translational research, and precision medicine. Emphasizing the potential for human-centric resource optimization in healthcare settings, this work highlights the need to explore ethical considerations and opportunities in leveraging Digital Twins. Specifically, it discusses their potential in forecasting future states of individuals, populations, and medical devices, particularly relevant in the context of events like the COVID-19 pandemic.

In 26, we investigated the correlation between fine motor skill training in VR, haptic feedback, and physiological arousal. We designed a buzzwire task with a custom vibrotactile attachment for the Geomagic Touch and conducted a controlled experiment with 73 participants across three feedback conditions: visual/kinesthetic, visual/vibrotactile, and visual-only. Results showed performance improvement across all conditions post-training, with no reported changes in self-efficacy or perceived presence and task load. Interestingly, arousal levels remained consistent across feedback conditions, yet positive performance changes correlated with higher arousal levels. These findings suggest haptic feedback's potential to influence arousal, prompting further exploration to enhance VR-based motor skill training.

7.4 Shared Control Architectures

7.4.1 Shared Control for Remote Manipulation

Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Marco Ferro, Leon Raphalen, Paul Mefflet.

As teleoperation systems become more sophisticated and flexible, the environments and applications where they can be employed become less structured and predictable. This desirable evolution toward more challenging robotic tasks requires an increasing degree of training, skill, and concentration from the human operator. In this respect, shared control algorithms have been investigated as one of the main tools for designing complex but intuitive robotic teleoperation systems, helping operators carry out increasingly difficult robotic applications such as assisted vehicle navigation, surgical robotics, brain-computer interface manipulation, and rehabilitation. Indeed, this approach makes it possible to share the available degrees of freedom of the robotic system between the operator and an autonomous controller.

Along this general line of research, during this year we have presented in 27 a novel haptic shared control method for minimising the manipulator torque effort during remote manipulation. In the proposed scheme, the operator is assisted in selecting a suitable grasping pose for then displacing an object along a desired trajectory. The role of the autonomy is to cue the operator towards grasping poses that minimise the torque effort along the future trajectory of the manipulated object. Minimising torque is important because it reduces the system operating cost and extends the range of objects that can be manipulated. The approach is demonstrated in a series of representative real-world pick-and-place experiments as well as in a human subjects study. The reported results prove the effectiveness of our shared control compared to a standard teleoperation approach. We also find that haptic-only guidance performs better than visual-only guidance, although combining them leads to the best overall results. This work has been done in collaboration with Prof. A. Ghalamzan at the University of Lincoln, UK.
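The haptic cueing principle can be sketched as an attractive, saturated spring pulling the operator's device toward the suggested minimum-effort grasp pose (gains, saturation, and the pose-selection logic below are illustrative, not the paper's):

```python
import numpy as np

def haptic_guidance_force(x_op, x_best, k=30.0, f_max=3.0):
    """Attractive spring toward the grasp pose x_best that minimizes the
    predicted torque effort, saturated at f_max so that the operator can
    always override the suggestion."""
    f = k * (x_best - x_op)
    n = np.linalg.norm(f)
    return f if n <= f_max else f * (f_max / n)
```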

Finally, in the context of the H2020 Rego project (Sect. 9.2.2), we are starting to investigate how to employ shared control strategies for allowing a human operator to control a group of micro-robots for drug delivery in the human body.

7.4.2 Shared Control of a Wheelchair for Navigation Assistance

Participants: Louise Devigne, François Pasteau, Marie Babel.

Power wheelchairs allow people with motor disabilities to have more mobility and independence. In order to improve access to mobility for people with disabilities, we previously designed a semi-autonomous assistive wheelchair system that progressively corrects the trajectory as the user manually drives the wheelchair, smoothly avoiding obstacles.

In 13, we presented the results of the clinical trials that took place in July 2021 at INSA. The objective was to evaluate the clinical benefit of a driving assistance for people with disabilities experiencing high difficulties while steering a wheelchair. 18 people participated in the trials. The results clearly confirmed the excellent ability of the system to assist users and the relevance of such an assistive technology.

Thanks to the long-term partnership between INSA and the rehabilitation center of Pôle Saint Hélier, we co-organized clinical trials in July 2022 at INSA. The goal was to evaluate the benefit of a driving-assistance module for power wheelchairs taking into account negative obstacles such as curbs or potholes. 18 wheelchair users experiencing high driving difficulties participated in the trials. We confirmed the ability of the system to efficiently assist users and the relevance of such an assistive technology in a controlled environment.

As a final result related to the ADAPT project, 20 describes the robotic assistive technologies developed for users of electrically-powered wheelchairs. We detailed our work and our various collaborations with our scientific and medical partners, as well as the validation of co-developed systems through extensive experimental campaigns and large-scale clinical trials.

In 9, in cooperation with UPJV, we also introduced the extension of these works with the Sphericol system, which improves situational awareness by overlaying color-coded range measurements from the ring of ToF sensors on a stream of 360° images of the surrounding environment.

Figure 8: A participant drives our smart power wheelchair on a curb during clinical trials.

7.4.3 Style Transfer for Robotics

Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Raul Fernandez Fernandez.

Neural Style Transfer (NST) was originally proposed to define the Content and the Style of an image using neural networks. For example, the Content can be defined as the elements represented in a painting (e.g., people, animals, houses), and the Style as the low-level features specific to the artist (e.g., vivid colors, long brushstrokes). NST defines a high-level layer of abstraction that can extend this definition of Content and Style to a larger range of applications, including robotic motion. Here the Content can be defined as the movements performed by the robot (e.g., moving forward, walking in circles, moving sideways) and the Style as the emotion/style conveyed by these movements.

In 11, we proposed Neural Policy Style Transfer TD3 (NPST3) for transferring human motion styles to robot motions, filling the gap left by the absence of pre-trained classification architectures for robot motions. This framework enables various human-centered styles (e.g., "angry", "happy") to be executed by robots. Utilizing Twin Delayed Deep Deterministic Policy Gradient (TD3) networks for control policies and an autoencoder network for style extraction, NPST3 facilitates both offline and online style transfer. Testing on robotic platforms, including a manipulator for telemanipulation and a humanoid robot for social interaction, involved 147 human-subject questionnaires. Results evaluated the recognition of human-style robot motions, demonstrating the efficacy of our approach across diverse robot platforms.

7.4.4 Multisensory power wheelchair simulator

Participants: Sylvain Guegan, Louise Devigne, François Pasteau, Marie Babel.

Power wheelchairs are one of the main solutions for people with reduced mobility to maintain or regain autonomy and a comfortable and fulfilling life. However, driving a power wheelchair in a safe way is a difficult task that often requires training methods based on real-life situations. Although these methods are widely used in occupational therapy, they are often too complex to implement and unsuitable for some people with major difficulties.

In this context, we collaborated with clinicians to develop a Virtual Reality based power wheelchair simulator. This simulator is an innovative training tool adapted to any type of situation and impairment. It relies on a modular and versatile workflow enabling easy interfacing not only with any virtual display, but also with any user interface such as wheelchair controllers or feedback devices. A clinical trial was conducted in May 2021 and October 2021, in which 26 regular power wheelchair users were asked to complete a clinically validated task designed by clinicians under four display conditions: using the HTC Vive Pro HMD, the Immersia immersive room, or a screen (with or without haptic and vestibular feedback). The objective of this study was to compare performance between the four conditions and to evaluate the Quality of Experience. First analyses clearly show that immersive conditions allow high driving performance to be achieved 20.

To improve motion perception inside the powered wheelchair simulator, we proposed in 42 a generic power wheelchair lumped model, needed to provide a better kinematic and dynamic estimation of the wheelchair motion.

We acquired a mini immersive room in order to perform trials in clinical structures. We are currently preparing clinical trials and ethics committee (Comité de Protection des Personnes) procedures to evaluate the clinical benefit of the long-term use of this simulator.

Figure 9: Participant driving in a virtual environment with our simulator.

7.4.5 Integrating social interaction in a VR powered wheelchair driving simulator

Participants: Emilie Leblong, Fabien Grzeskowiak, Sebastien Thomas, François Pasteau, Marie Babel.

Navigating the city while driving a powered wheelchair, in a complex and dynamic environment made of various interactions with other humans, can be challenging for a person with disabilities. Learning how to drive a powered wheelchair thus remains a major issue for the clinical teams prescribing these technical mobility aids. The work carried out as part of the Interreg ADAPT project has made it possible to design a powered wheelchair simulator in VR. This work is done in cooperation with Anne-Hélène Olivier (MimeTIC team).

To promote the transfer of skills from the virtual to the real world, the use of such a platform requires the deployment of ecologically valid, interactive, populated virtual environments. These are currently empty of any pedestrians, even though the question of social interaction in the framework of inclusive urban mobility is fundamental.

The objective is to better understand how pedestrians and powered wheelchair users interact. New interaction models will be used to improve dynamic virtual environments by including virtual humans that faithfully reproduce the modeled behaviors in response to the simulator user's reactions in a handicap situation. User trials have been conducted, including 24 participants, to evaluate the interaction strategies between pedestrians and wheelchair users, in real and virtual conditions.

7.4.6 Upper-limb exoskeleton for reach-to-grasp assistance for power wheelchair users

Participants: Marie Babel, Maxime Manzano, Sylvain Guégan.

Wearable Upper-Limb (UL) assistive robots are designed to increase autonomy and social participation for people with UL impairments as they assist tasks involved in Activities of Daily Living (ADLs). When an active device is coupled with a power wheelchair, it is usually controlled through push-buttons located near the wheelchair joystick, thus preventing bimanual tasks and requiring a large mental load to perform complex UL trajectories. Therefore, there is a need for strategies to detect user intent and get rid of manual control, allowing a distinctive, intuitive control of devices with multiple active degrees of freedom.

However, commercially available active devices are controlled with push buttons, which adds cognitive load and discomfort. To alleviate this issue, we propose a new assistive control framework, adapted to an exoskeleton prototype and following the recommendations of physical medicine therapists. We designed a first prototype and performed two pilot studies. The first aimed to evaluate four strategies based on a biomechanical model of the upper limb, tuned using anthropometric measurements 48. The second involved one participant with no impairment performing pick-and-place tasks (paper in submission).

7.5 Aerial Physical Interaction

7.5.1 Manipulation of a deformable wire by two UAVs

Participants: Lev Smolentsev, Alexandre Krupa, François Chaumette.

This study takes place in the context of the CominLabs MAMBO project (see Section 9.4.1). Its main objective is the development of a vision-based control framework for performing autonomous manipulation of a deformable wire attached between two UAVs, using data provided by onboard RGB-D cameras. Toward this goal, we developed a visual servoing approach that controls the deformation of a suspended tether cable subject to gravity. The cable shape is modelled as a parabolic curve, together with the orientation of the plane containing the tether. The visual features considered are the parabolic coefficients and the yaw angle of that plane. We derived the analytical expression of the interaction matrix that relates the variation of these visual features to the velocities of the cable extremities. Simulations and experimental results obtained with a robotic arm manipulating one extremity of the cable in an eye-to-hand configuration demonstrated the efficiency of this visual servoing approach in deforming the tether cable toward a desired shape 56. Recently, we adapted this visual servoing approach to aerial manipulation and validated it experimentally on a scenario involving grasping and transporting an object with a cable whose extremities are manipulated by two quadrotor UAVs.
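A minimal sketch of this feature extraction, assuming the tether points have already been segmented from the RGB-D data (the plane-fitting and parametrization choices below are illustrative):

```python
import numpy as np

def tether_features(points_3d):
    """Project the sampled tether points onto their best-fit vertical plane,
    fit the parabola z = a*u^2 + b*u + c in that plane, and return
    (a, b, c, yaw): the parabolic coefficients plus the plane's yaw angle,
    used as visual features."""
    xy = points_3d[:, :2] - points_3d[:, :2].mean(axis=0)
    u_dir = np.linalg.svd(xy)[2][0]        # principal horizontal direction
    yaw = np.arctan2(u_dir[1], u_dir[0])   # yaw of the cable plane
    u = xy @ u_dir                         # abscissa along the plane
    a, b, c = np.polyfit(u, points_3d[:, 2], 2)
    return np.array([a, b, c, yaw])
```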

7.5.2 Estimation of Interaction forces

Participants: Marco Tognon, Massimiliano Bertoni.

To improve the accuracy and robustness of interactive aerial robots, knowledge of the forces acting on the platform is of utmost importance. The robot should distinguish interaction forces from external disturbances, in order to comply with the former and reject the latter. This is challenging since disturbances may be of different natures (physical contact, aerodynamics, modeling errors) and be applied at different points of the robot. The work in 19 presents a new extended Kalman filter (EKF) based estimator of both external disturbances and interaction forces. The estimator fuses information coming from the system's dynamic model and its state with wrench measurements coming from a force-torque sensor. This allows robust interaction control at the tool tip even in the presence of external disturbance wrenches acting on the platform. We employ the filter estimates in a novel hybrid force/motion controller to perform force tracking not only along the tool direction, but from any platform orientation, without losing the stability of the pose controller. The proposed framework is extensively tested on an omnidirectional aerial manipulator (AM) performing push-and-slide operations and transitioning between different interaction surfaces, while subject to external disturbances. The experiments are performed with the AM equipped with two different tools, a rigid interaction stick and an actuated delta manipulator, showing the generality of the approach. Moreover, the estimation results are compared to a state-of-the-art momentum-based estimator, clearly showing the superiority of the EKF approach.
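For reference, the momentum-based estimator used as the comparison baseline can be sketched in its simplest single-rigid-body, first-order form (gains and signal names are illustrative):

```python
import numpy as np

def momentum_observer(p_meas, tau_applied, w_hat, integral, K, dt):
    """Standard momentum-based external wrench estimator: integrate the
    momentum predicted from the applied wrench plus the current estimate,
    and feed the residual with the measured momentum back through gain K;
    w_hat converges to the true external wrench with first-order dynamics."""
    integral = integral + (tau_applied + w_hat) * dt  # predicted momentum
    w_hat = K @ (p_meas - integral)                   # residual -> estimate
    return w_hat, integral
```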

The EKF method above strongly relies on a force-torque sensor, which is in general heavy and very expensive. To alleviate this problem, together with Massimiliano Bertoni, we are investigating how to instead use a standard camera and a deformable end-effector, estimating interaction forces from the observation of deformations.

7.5.3 Manipulation of Articulated Objects

Participants: Mattia Piras, Jose-Luis Sanchez-Lopez, Marco Tognon.

The field of aerial manipulation has seen rapid advances, transitioning from push-and-slide tasks to interaction with articulated objects. The motion trajectory of these complex actions is usually hand-crafted or the result of online optimization methods like Model Predictive Control (MPC) or Model Predictive Path Integral (MPPI) control. However, these methods rely on heuristics or model simplifications to run efficiently on onboard hardware, limiting their robustness and making them sensitive to disturbances and to differences between the real environment and its model. In 37, we propose a Reinforcement Learning (RL) approach to learn reactive motion behaviors for a manipulation task while producing policies that are robust to disturbances and modeling errors. Specifically, we train a policy to perform a door-opening task with an omnidirectional micro aerial vehicle (OMAV). The policy is trained in a physics simulator and demonstrated in the real world, where it is able to generalize to door-closing tasks never seen in training. We also compare our method against a state-of-the-art MPPI solution in simulation, showing a considerable increase in robustness and speed.

Mattia Piras will continue this work aiming at manipulating articulated objects using onboard sensors only.

In order to exploit those results in real applications in the construction industry, we are collaborating with Jose-Luis Sanchez-Lopez to set up an international project that will hopefully fund this investigation.

7.5.4 Mixed Reality-based Human-Robot Interface for Teleoperation of Aerial Vehicles

Participant: Marco Tognon.

Omnidirectional aerial vehicles are an attractive solution for visual inspection tasks that require observations from different viewpoints. However, the decisional autonomy of modern robots is limited, so human input is often necessary to safely explore complex industrial environments. Existing teleoperation tools rely on onboard camera views or 3D renderings of the environment to improve situational awareness. Mixed Reality (MR) offers an exciting alternative, allowing the user to perceive and control the robot's motion in the physical world. Furthermore, since MR technology is not limited by the hardware constraints of standard teleoperation interfaces, like haptic devices or joysticks, it allows us to explore new reference generation and user feedback methodologies. In 33, we investigate the potential of MR for teleoperating 6-DoF aerial robots by designing a holographic user interface to control their translational velocity and orientation. A user study with 13 participants was performed to assess the proposed approach. The evaluation confirms the effectiveness and intuitiveness of our methodology, independent of prior user experience with aerial vehicles or MR; however, prior familiarity with MR improves task completion time. The results also highlight the limitation to line-of-sight operation, at distances where relevant details in the physical environment can still be visually distinguished.

7.5.5 Physical Aerial Human-Robot Interaction

Participant: Marco Tognon.

The area of Aerial Physical Interaction has seen significant advancements, creating the opportunity for aerial robots to physically interact with humans. Our previous works established a framework for safe, human-aware path guidance via a tether physically connecting a human to an aerial vehicle. However, the previous controller is purely reactive and does not leverage modern path-following methods; moreover, its design does not properly account for the non-holonomic nature of the tethered human-robot system. In 43 we improved performance by addressing both problems. First, we incorporated modern path-following methods into our guidance framework to account for path geometry and the current system velocity. Second, we proposed a polar parametrization of the guidance law to achieve faster convergence of the guidance force to the desired value. Finally, the performance and human comfort of the different extensions were evaluated in simulation. The final method is shown to increase guidance accuracy and comfort, thereby increasing the usefulness of guidance via aerial robot interaction.

7.5.6 Cooperative Multi-Aerial Robot Manipulation

Participant: Marco Tognon.

The work in 39, 14 studies how parametric uncertainties and force measurement/estimation biases affect the cooperative manipulation of a cable-suspended beam-shaped load by two aerial robots not explicitly communicating with each other. In particular, these articles shed light on the impact of the uncertain knowledge of the model parameters available to an established communicationless force-based controller. First, we find the closed-loop equilibrium configurations in the presence of the aforementioned uncertainties and then study their stability. We show the fundamental role played, in the robustness of the load attitude control, by the internal force induced in the manipulated object by nonvertical cables. Furthermore, we formally study the sensitivity of the attitude error to such parametric variations, and we provide a method to act on the load position error in the presence of uncertainties. Finally, we validate the results through an extensive set of numerical tests in a realistic simulation environment, including underactuated aerial vehicles and sagging-prone cables, and through hardware experiments.

On the other side, considering communication-based collaboration methods, in 30 we propose an inverse-kinematics controller for a class of multi-robot systems in the scenario of sampled communication. The goal is to make a group of robots perform trajectory tracking in a coordinated way when the sampling time of communications is much larger than the sampling time of the low-level controllers, which disrupts the theoretical convergence guarantees of standard control designs in continuous time. Given a desired trajectory in configuration space, pre-computed offline, the proposed controller receives configuration measurements, possibly over wireless, and re-computes velocity references for the robots, which are then tracked by a low-level controller. We propose the joint design of a sampled proportional feedback plus a novel continuous-time feedforward that linearizes the dynamics around the reference trajectory; this method is amenable to a distributed communication implementation where only one broadcast transmission is needed per sample. We also provide closed-form expressions for the instability and stability regions and the convergence rate in terms of the proportional gain k and the sampling period T. We test the proposed control strategy via numerical simulations in the scenario of cooperative aerial manipulation of a cable-suspended load using a realistic simulator (Fly-Crane). Finally, we compare our controller with centralized approaches that adapt the feedback gain online through smart heuristics, and show that it achieves comparable performance.
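The proposed law can be sketched in its simplest form: a continuous feedforward along the offline trajectory plus a proportional correction recomputed at each communication sample (single-integrator robots here; the closed-form stability analysis in terms of k and T is the paper's contribution and is not reproduced in this fragment):

```python
import numpy as np

def velocity_reference(q_meas, q_des, dq_des, k=1.0):
    """Every T seconds a configuration measurement arrives (e.g. over WiFi)
    and the reference velocity for the low-level controllers is recomputed
    as feedforward along the desired trajectory plus a sampled proportional
    correction; stability then hinges on the product k*T."""
    return dq_des + k * (q_des - q_meas)   # feedforward + sampled feedback
```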

For the related problem of collaborative manipulation of cable-suspended loads, Riccardo Belletti is now investigating distributed MPC control methods for optimal and aggressive maneuvers.

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

8.1.1 IRT JV Happy2

Participant: François Chaumette.

No Inria Rennes 13521, duration: 72 months.

F. Chaumette has been on secondment (at 20%) at IRT Jules Verne in Nantes since 2018. This year, he was involved in the Happy 2 project with Airbus & Naval Group, providing his expertise in visual servoing and developing basic software with ViSP.


8.1.2 Airbus React

Participants: Romain Lagneau, Fabien Spindler, François Chaumette.

No Inria Rennes 16165, duration: 36 months.

This project started in September 2021. It is in collaboration with Laas in Toulouse for Airbus in Saint Nazaire and Toulouse. Its goal is to develop a vision-based localization system so that a robot arm is able to point accurately on an aircraft panel.


8.1.3 Trasys/NRB

Participants: Romain Lagneau, Fabien Spindler, François Chaumette.

No Inria Rennes 2023000390, duration: 19 months.

This project started in May 2023. It is in collaboration with the Trasys/NRB company in Belgium. Its goal is to develop an embedded vision-based localization system with respect to satellite parts.


8.1.4 Sopra-Steria

Participants: François Pasteau, Marie Babel, Sylvain Guegan, Fabien Grzeskowiak.

INSA Rennes, duration: 12 months.

This project, funded by Sopra Steria, aimed to design a smart rollator equipped with haptic and auditory feedback, coupled with indoor localization.

8.2 Bilateral grants with industry

8.2.1 Creative

Participants: Thibault Noël, François Chaumette, Eric Marchand.

No Inria Rennes 2022000032, duration: 36 months.

This project, funded by Creative, started in October 2021. It supports Thibault Noël's PhD, which benefits from a CIFRE grant (see Section 7.2.7).


8.2.2 IRT JV Perform

Participant: François Chaumette.

No Inria Rennes 16107, duration: 36 months.

This project, funded by IRT Jules Verne, started in November 2021. It is carried out in cooperation with Stéphane Caro from LS2N in Nantes to support Thomas Rousseau's PhD at IRT Jules Verne on visual servoing of cable-driven parallel robots.

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Participation in other International Programs

GentleMAN

Participants: Fouad Makiyeh, Alexandre Krupa, François Chaumette, Fabien Spindler.

  • Title:
    Gentle and Advanced Robotic Manipulation of 3D Compliant Objects
  • Duration:
    August 2019 - December 2023
  • Coordinator:
    Sintef Ocean (Norway)
  • Partners:
    • Sintef Ocean (Norway)
    • NTNU (Norway)
    • NMBU (Norway)
    • MIT (USA)
    • QUT (Australia)
  • Inria contact:
    Alexandre Krupa
  • Summary:
    This project is funded by the Research Council of Norway. Its main objective is to develop a novel learning framework that uses visual, force and tactile sensing to build new multi-modal learning models, enabling robots to learn new and advanced skills for the manipulation of 3D compliant objects. The Rainbow group is involved in the elaboration of new approaches for visual tracking of deformable objects, active vision perception, and visual servoing for deforming soft objects into a desired shape (see Section 7.2.8).

BIFROST

Participants: Mandela Ouafo Fonkoua, Fouad Makiyeh, Alexandre Krupa, François Chaumette, Fabien Spindler.

  • Title:
    A Visual-Tactile Perception and Control Framework for Advanced Manipulation of 3D Compliant Objects
  • Duration:
    July 2021 - December 2025
  • Coordinator:
    Sintef Ocean (Norway)
  • Partners:
    • Sintef Ocean (Norway)
    • MIT (USA)
  • Inria contact:
    Alexandre Krupa
  • Summary:
    This project is funded by the Research Council of Norway. Its main objective is to develop a visual-tactile perception and control framework for advanced manipulation of 3D compliant objects. The Rainbow group is in charge of elaborating novel visual servoing approaches that fuse visual and tactile feedback for dexterous manipulation of soft objects.

9.2 European initiatives

9.2.1 Horizon Europe

euROBIN

euROBIN project on cordis.europa.eu

  • Title:
    European ROBotics and AI Network
  • Duration:
    From July 1, 2022 to June 30, 2026
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • C.R.E.A.T.E. CONSORZIO DI RICERCA PER L'ENERGIA L AUTOMAZIONE E LE TECNOLOGIE DELL'ELETTROMAGNETISMO (C.R.E.A.T.E.), Italy
    • PAL ROBOTICS SL (PAL ROBOTICS), Spain
    • KUNGLIGA TEKNISKA HOEGSKOLAN (KTH), Sweden
    • INSTITUT JOZEF STEFAN (JSI), Slovenia
    • FRAUNHOFER GESELLSCHAFT ZUR FORDERUNG DER ANGEWANDTEN FORSCHUNG EV (Fraunhofer), Germany
    • FUNDACION TECNALIA RESEARCH & INNOVATION (TECNALIA), Spain
    • TECHNISCHE UNIVERSITAET MUENCHEN (TUM), Germany
    • DHL EXPRESS SPAIN SL, Spain
    • COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES (CEA), France
    • INTERUNIVERSITAIR MICRO-ELECTRONICA CENTRUM (IMEC), Belgium
    • TEKNOLOGISK INSTITUT (DANISH TECHNOLOGICAL INSTITUTE), Denmark
    • UNIVERSITEIT TWENTE (UNIVERSITEIT TWENTE), Netherlands
    • ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL), Switzerland
    • MATADOR INDUSTRIES AS, Slovakia
    • ASTI MOBILE ROBOTICS SA (ASTI), Spain
    • DEUTSCHES ZENTRUM FUR LUFT - UND RAUMFAHRT EV (DLR), Germany
    • IST-ID ASSOCIACAO DO INSTITUTO SUPERIOR TECNICO PARA A INVESTIGACAO E O DESENVOLVIMENTO (IST ID), Portugal
    • UNIVERSITA DI PISA (UNIPI), Italy
    • FUNDINGBOX ACCELERATOR SP ZOO (FBA), Poland
    • UNIVERSITAET BREMEN (UBREMEN), Germany
    • FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA (IIT), Italy
    • KARLSRUHER INSTITUT FUER TECHNOLOGIE (KIT), Germany
    • EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH (ETH Zürich), Switzerland
    • CESKE VYSOKE UCENI TECHNICKE V PRAZE (CVUT), Czechia
    • OREBRO UNIVERSITY (ORU), Sweden
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
    • VOLKSWAGEN AKTIENGESELLSCHAFT (VW AG), Germany
    • SIEMENS AKTIENGESELLSCHAFT, Germany
    • SORBONNE UNIVERSITE, France
    • UNIVERSIDAD DE SEVILLA, Spain
  • Inria contact:
    Serena Ivaldi
  • Coordinator:
    Alin Albu Schaeffer (DLR)
  • Summary:

    As robots are entering unstructured environments with a large variety of tasks, they will need to quickly acquire new abilities to solve them. Humans do so very effectively through a variety of methods of knowledge transfer – demonstration, verbal explanation, writing, the Internet. In robotics, enabling the transfer of skills and software between robots, tasks, research groups, and application domains will be a game changer for scaling up robot abilities.

    euROBIN therefore proposes a threefold strategy. First, leading experts from the European robotics and AI research community will tackle the questions of transferability in four main scientific areas: 1) boosting physical interaction capabilities, to increase safety and reliability as well as energy efficiency; 2) using machine learning to acquire new behaviors and knowledge about the environment and the robot, and to adapt to novel situations; 3) enabling robots to represent, exchange, query, and reason about abstract knowledge; 4) ensuring a human-centric design paradigm that takes the needs and expectations of humans into account, making AI-enabled robots accessible, usable and trustworthy.

    Second, the relevance of the scientific outcomes will be demonstrated in three application domains that promise to have substantial impact on industry, innovation, and civil society in Europe: 1) robotic manufacturing for a circular economy; 2) personal robots for enhanced quality of life; 3) outdoor robots for sustainable communities. Advances are made measurable by collaborative competitions.

    Finally, euROBIN will create a sustainable network of excellence to foster exchange and inclusion. Software, data and knowledge will be exchanged over the EuroCore repository, designed to become a central platform for robotics in Europe.

    The vision of euROBIN is a European ecosystem of robots that share their data and knowledge and exploit their diversity to jointly learn to perform the endless variety of tasks in human environments.

9.2.2 H2020 projects

GuestXR

GuestXR project on cordis.europa.eu

  • Title:
    GuestXR: A Machine Learning Agent for Social Harmony in eXtended Reality
  • Duration:
    From January 1, 2022 to December 31, 2025
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • UNIWERSYTET WARSZAWSKI (UNIWARSAW), Poland
    • VIRTUAL BODYWORKS SL (Virtual Bodyworks S.L.), Spain
    • UNIVERSITEIT MAASTRICHT, Netherlands
    • UNIVERSITAT DE BARCELONA (UB), Spain
    • FUNDACIO EURECAT (EURECAT), Spain
    • REICHMAN UNIVERSITY (REICHMAN UNIVERSITY), Israel
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
    • G.TEC MEDICAL ENGINEERING GMBH (G.TEC MEDICAL ENGINEERING GMBH), Austria
  • Inria contact:
    Anatole LECUYER
  • Coordinator:
    EURECAT
  • Summary:

    Immersive online social spaces will soon become ubiquitous. However, there is also a warning that we need to heed from social media.

    User content is the ‘lifeblood of social media’. However, it often stimulates antisocial interaction and abuse, ultimately posing a danger to vulnerable adults, teenagers, and children.

    In the VR space this is backed up by the experience of current virtual shared spaces. While they have many positive aspects, they have also become a space full of abuse.

    Our vision is to develop GuestXR, a socially interactive multisensory platform system that uses eXtended Reality (virtual and augmented reality) as the medium to bring people together for immersive, synchronous face-to-face interaction with positive social outcomes.

    The critical innovation is the intervention of artificial agents that learn over time to help the virtual social gathering realise its aims. This is an agent that we refer to as “The Guest” that exploits Machine Learning to learn how to facilitate the meeting towards specific outcomes.

    Underpinning this is neuroscience and social psychology research on group behaviour, which will deliver rules to Agent Based Models (ABM).

    The combination of AI with immersive systems (including haptics and immersive audio), virtual and augmented reality will be a hugely challenging research task, given the vagaries of social meetings and individual behaviour. Several proof-of-concept applications will be developed during the project, including a conflict resolution application in collaboration with the UN. A strong User Group made up of a diverse range of stakeholders from industry, academia, government and broader society will provide continuous feedback. An Open Call will be held to bring in artistic support and additional use cases from wider society. Significant work is dedicated to ethics “by design”, to identify problems and eventually look towards an appropriate regulatory framework for such socially interactive systems.

REGO

REGO project on cordis.europa.eu

  • Title:
    Cognitive robotic tools for human-centered small-scale multi-robot operations
  • Duration:
    From October 1, 2022 to September 30, 2026
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • CENTRE HOSPITALIER UNIVERSITAIRE DE RENNES (CHU RENNES), France
    • UNIVERSITEIT TWENTE (UNIVERSITEIT TWENTE), Netherlands
    • SCUOLA SUPERIORE DI STUDI UNIVERSITARI E DI PERFEZIONAMENTO S ANNA (SSSA), Italy
    • FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA (IIT), Italy
    • HAPTION SA (HAPTION), France
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
    • HELMHOLTZ-ZENTRUM DRESDEN-ROSSENDORF EV (HZDR), Germany
  • Inria contact:
    Claudio Pacchierotti
  • Coordinator:
  • Summary:
    Robots are still often regarded as large machines with links, gears, and electric motors, autonomously interacting with the surrounding environment. Despite the great research efforts in robotics and human-robot interaction (HRI), the way we design, use, and control robots has not fundamentally changed in the past 20 years. We see in small-scale wireless multi-robot systems and cognitive HRI a revolutionary answer to the limitations of today's robots. Instead of large, tethered machines that are difficult for the human user to control, REGO proposes an innovative set of AI-powered, modular, micro-sized swarms of robots. They are wirelessly steered by electromagnetic fields, able to react to other external stimuli, and naturally controlled by humans through intuitive dexterous interfaces and interaction techniques. Taking advantage of AI multi-robot control strategies, these robots can team up and collaborate to fulfill complex tasks in a robust and unprecedentedly flexible way. By exploiting multisensory interaction techniques and cognitive shared control, the operator will achieve an unparalleled level of seamless interaction and continuous collaboration with the robotic team. According to the application at hand, the robotic team will feature different task-specific characteristics (e.g., biocompatibility for medical procedures, biodenitrification for cleaning water, ability to carry drugs to fight infections) and be dispatched through various delivery systems, including a stimuli-responsive milli-scale wireless robotic carrier developed within the project. To achieve this revolution, REGO will develop magnetic multi-robot motion control systems, autonomous swarm control techniques for micro-sized robots, human-robot haptic-centered interfaces, and cognitive shared-control techniques. REGO enables the next generation of AI-powered interactive small-scale multi-robot systems, with increased capabilities to work with each other and with their human operators.

9.3 National initiatives

9.3.1 Equipex+ Tirrex

Participants: Fabien Spindler, François Chaumette.

no Inria Rennes OIP 03-22-01, duration: 8 years.

This large project devoted to open robotics platforms started in December 2021. Rainbow is responsible for the manipulation axis, for which a new M4 platform (Multi-arm Multi-sensor Mobile Manipulator) will be designed and installed in our lab. This year, we concluded the negotiations with potential suppliers and selected PAL Robotics as the provider. The M4 platform should be delivered in 2024.

9.3.2 PEPR O2R

Participants: Maud Marchal, Marie Babel, Claudio Pacchierotti, Marco Tognon.

duration: 8 years.

The Organic Robotics program proposes to implement responsible and socially acceptable robotics. This PEPR will intensify the multidisciplinary approach of the community (digital sciences, life sciences, engineering, environmental and social sciences) in a strategy that radically differs from the current vision of robotics and its limitations. The Organic Robotics program therefore aims to initiate a shift in robotics, creating a new generation of robots capable of interacting and working in symbiosis with humans. We propose to consider the robot no longer as an automation machine, but as a tool, in line with those that humans have created, used and optimized to explore and act on their environment. More efficient, modular, reconfigurable and adaptive, organic robots will become an extension of humans. The PEPR O2R will start in September 2023. M. Marchal has contributed to the proposal writing and is a member of the executive committee.


9.3.3 PEPR O2R - AS2 structuring action

Participants: Alexandre Krupa, Fabien Spindler.

duration: 8 years.

In the context of the national PEPR O2R robotic exploration program, Alexandre Krupa contributed to setting up the AS2 structuring action. This structuring action, entitled "Robot motion with physical interactions and social adaptation", aims to rethink the problem of motion generation in robotic systems, taking a global approach and redefining research objectives in conjunction with the Human and Social Sciences. Within this AS2 action, the Rainbow group will be involved in the development of multi-sensor control strategies for the control of physically interacting robotic systems. The AS2 structuring action of the PEPR O2R will start in January 2024 and cover a period of 8 years.


9.3.4 PEPR eNSEMBLE

Participants: Maud Marchal.

duration: 8 years.

The purpose of eNSEMBLE (Future of Digital Collaboration) is to fundamentally redefine digital tools for collaboration. To address this challenge, a paradigm shift in the design of collaborative systems is needed, comparable to the one that saw the advent of personal computing. To collaborate in a fluid and natural way while taking advantage of computer capabilities, collaboration and sharing must become native features of computer systems, in the same way that files or applications are today. To achieve this goal, we need to invent mixed (i.e. physical and digital) collaboration spaces that do not simply replicate the physical world in virtual environments, enabling co-located and/or geographically distributed teams to work together smoothly and efficiently. The PEPR eNSEMBLE will start in September 2023. M. Marchal has contributed to the writing of the proposal.


9.3.5 ANR Marsurg

Participants: Eric Marchand, François Chaumette, Fabien Spindler.

no Inria 16162, duration: 48 months.

This project started in September 2021. It involves a consortium managed by ISIR (Paris), with Pixee Medical and the Rainbow group. It aims at researching a markerless augmented reality solution for orthopedic surgery.


9.3.6 ANR Sesame

Participant: François Chaumette.

no Inria 13722, duration: 48 months.

This project started in January 2019. It involves a consortium managed by LS2N (Nantes), with LIP6 (Paris) and the Rainbow group. It aims at analysing singularity and stability issues in visual servoing.


9.3.7 Inria Challenge DORNELL

Participants: Marie Babel, Claudio Pacchierotti, Maud Marchal, François Pasteau, Sylvain Guegan, Louise Devigne, Marco Aggravi, Inès Lacôte, Pierre-Antoine Cabaret, Lisheng Kuang.

  • Title:
    DORNELL: A multimodal, shapeable haptic handle for mobility assistance of people with disabilities
  • Duration:
    November 2020 - December 2024
  • Coordinators:
    Marie Babel, Claudio Pacchierotti
  • Partners:
    • Potioc Inria team
    • MFX Inria team
    • LGCGM (Rennes)
    • Centre de rééducation Pôle Saint Hélier (Rennes)
    • ISIR (Paris)
    • Institut des jeunes aveugles (Yzeure)
  • Inria contact:
    Marie Babel, Claudio Pacchierotti
  • Summary:
    While technology helps people to compensate for a broad set of mobility impairments, visual perception and/or cognitive deficiencies still significantly affect their ability to move safely and easily. We propose an innovative multisensory, multimodal, smart haptic handle that can be easily plugged onto a wide range of mobility aids, including white canes, precanes, walkers, and power wheelchairs. Specifically fabricated to fit the needs of a person, it provides a wide set of ungrounded tactile sensations (e.g., pressure, skin stretch, vibrations) in a portable and plug-and-play format – bringing haptics to assistive technologies all at once. The project will address important scientific and technological challenges, including the study of multisensory perception, the use of new materials for multimodal haptic feedback, and the development of a haptic rendering API to adapt the feedback to different assistive scenarios and users' wishes. We will co-design DORNELL with users and therapists, driving our development by their expectations and needs.


9.3.8 BPI Lichie

Participants: Maxime Robic, John Thomas, Samuel Felton, Pierre Perraud, Eric Marchand, François Chaumette.

no Inria 14876, duration: 45 months.

This project started in March 2020. It involves a consortium managed by Airbus Defence and Space (Toulouse) with many companies, Onera and Inria. It aims at designing a new constellation of satellites with on-board imaging facilities. Robotics for the assembly of the satellites is also studied. As for Rainbow, this project funds Maxime Robic's and John Thomas's PhDs (see Sections 7.2.5 and 7.2.4), as well as the developments achieved by Samuel Felton and Pierre Perraud.


9.3.9 ANR CAMP

Participants: Paolo Robuffo Giordano, Fabien Spindler, Ali Srour, Tommaso Belvedere, Salvatore Marcellini.

  • Title:
    Intrinsically-Robust and Control-Aware Motion Planning for Robots in Real-World Conditions
  • Duration:
    October 2020 - March 2025
  • Coordinator:
    P. Robuffo Giordano
  • Partners:
    • LAAS (Toulouse)
    • Univ. Twente (Netherlands)
  • Inria contact:
    P. Robuffo Giordano
  • Summary:
    An effective way of dealing with the complexity of robots operating in real (uncertain) environments is the paradigm of “feedforward/feedback” or “planning/control”: in a first step, a suitable nominal trajectory (feedforward) for the robot states/controls is planned by exploiting the available information (e.g., a model of the robot and of the environment); in a second step, a feedback controller compensates online for deviations from this nominal plan. While there have been efforts to propose “robust planners” or more “global controllers” (e.g., Model Predictive Control (MPC)), a truly unified approach that fully exploits the techniques of the motion planning and control/estimation communities is still missing, and the existing state of the art has several important limitations, namely (1) lack of generality, (2) lack of computational efficiency, and (3) poor robustness. In this respect, the ambition of CAMP is to (1) develop a general and unified “intrinsically-robust and control-aware motion planning framework” able to address all the above-mentioned issues, and to (2) demonstrate the applicability of this new framework to real robots in real-world challenging tasks. In particular, we envisage two robotics demonstrators for showing at best the effectiveness and generality of our methodology: (1) an indoor pick-and-place/assembly task involving a 7-dof torque-controlled arm for a first validation in “controlled conditions”, and (2) an outdoor cooperative mobile manipulation task involving an aerial manipulator (a quadrotor UAV equipped with an onboard arm) and a skid-steering mobile robot with an onboard arm for a final validation in much less favorable experimental conditions (see Sect. 7.2.1). A toy illustration of the sensitivity-based reasoning behind this framework is sketched below.
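
As a toy illustration of the "control-aware" ingredient (our own sketch under simple assumptions, not the CAMP framework itself), one can score candidate reference trajectories by the sensitivity of the closed-loop state to an uncertain model parameter, estimated here by central finite differences; a sensitivity-aware planner would then prefer candidates with a smaller sensitivity norm.

```python
# Score candidate trajectories by closed-loop sensitivity to an
# uncertain parameter `a` (true dynamics: x_dot = -a*x + u), with the
# controller designed on the nominal value a_nom. Illustrative only.
import numpy as np

def closed_loop(x_d, a_true, a_nom=1.0, k=4.0, dt=0.01):
    xdot_d = np.gradient(x_d, dt)
    x = np.zeros_like(x_d)
    for i in range(len(x_d) - 1):
        # feedforward based on the nominal model + proportional feedback
        u = xdot_d[i] + a_nom * x_d[i] - k * (x[i] - x_d[i])
        x[i + 1] = x[i] + dt * (-a_true * x[i] + u)
    return x

def sensitivity_norm(x_d, a_nom=1.0, delta=0.05, dt=0.01):
    # central finite difference of the closed-loop state w.r.t. `a`
    x_hi = closed_loop(x_d, a_nom + delta, a_nom, dt=dt)
    x_lo = closed_loop(x_d, a_nom - delta, a_nom, dt=dt)
    return np.linalg.norm((x_hi - x_lo) / (2.0 * delta)) * np.sqrt(dt)

t = np.arange(0.0, 5.0, 0.01)
gentle = 0.5 * np.sin(t)          # small, slow candidate reference
aggressive = 1.5 * np.sin(3 * t)  # large, fast candidate reference
print("gentle    :", sensitivity_norm(gentle))
print("aggressive:", sensitivity_norm(aggressive))
# The candidate that excites the uncertain dynamics less has the
# smaller sensitivity norm; a robustness-aware planner would pick it.
```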

9.3.10 ANR MULTISHARED

Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Vincent Drevelle, Nicola De Carli, Maxime Bernard, Esteban Restrepo.

  • Title:
    Shared-Control Algorithms for Human/Multi-Robot Cooperation
  • Duration:
    September 2020 - October 2025
  • Coordinator:
    P. Robuffo Giordano
  • Inria contact:
    P. Robuffo Giordano
  • Summary:
    The goal of the Chaire AI MULTISHARED is to significantly advance the state of the art in multi-robot autonomy and human/multi-robot interaction, allowing a human operator to intuitively control the coordinated motion of a multi-UAV group navigating in remote environments. A strong emphasis is placed on the division of roles between the multi-robot autonomy (in controlling its motion/configuration and in online decision-making) and the human intervention/guidance, which provides high-level commands to the group while keeping the operator aware of the group status via VR and haptics technology (see Sect. 7.1.1 and Sect. 7.2.11).

9.3.11 ANR JCJC AirHandyBot

Participants: Marco Tognon, Paolo Robuffo Giordano, Lorenzo Balandi, Mattia Piras, Gianluca Corsini, Fabien Spindler.

  • Title:
    Aerial Robots for True Manipulation of Dynamic and Uncertain Environments
  • Duration:
    November 2023 - October 2026
  • Coordinator:
    M. Tognon
  • Inria contact:
    M. Tognon
  • Summary:
    One of the main goals of robotics is to realize autonomous systems that can help human operators in tasks that are hard and dangerous (e.g., in elevated areas). It is therefore important to conceive robots that can perform physical work, executing complex tasks that require interaction with the environment and the manipulation of objects. In particular, aerial robots able to interact with the environment would open the door to new applications in dangerous and hardly accessible areas, like manipulation of objects, contact-based inspection, and construction. Aiming to show the feasibility of Aerial Physical Interaction (APhI), previous works focused on the design and control of aerial manipulators. However, current investigations and applications are still limited to simple interaction tasks, involving limited contact behaviors with static and rigid surfaces, and are moreover performed in known and structured environments. To deploy aerial manipulators in real scenarios, they must be able to perform more complex manipulation tasks in less structured situations. Because of the applications, scientific interest, and possible future impact of APhI, AirHandyBot aims to enhance the APhI capabilities of highly dynamic aerial manipulators by considering: manipulation tasks involving movable and articulated objects, relying on onboard sensors only; and real scenarios characterized by disturbances and uncertainties due to system modeling errors, noisy and imprecise measurements from lightweight onboard sensors, imprecise actuation models due to complex aerodynamic effects, and partially unknown environments. The investigation, including fundamental theoretical results, real experiments and practical demonstrations, will focus on the design of new conception, modeling and control methods to make aerial robots much more precise, robust and safe while performing physical interaction tasks in real environments. This will allow aerial robots to become, in the future, valid companions of human operators.

9.3.12 AeX AEROTouch

Participants: Marco Tognon, Paolo Robuffo Giordano, Lorenzo Balandi, Mattia Piras, Gianluca Corsini, Fabien Spindler.

  • Title:
    Aerial Robots with the Sense of Touch
  • Duration:
    November 2023 - October 2026
  • Coordinator:
    M. Tognon
  • Inria contact:
    M. Tognon
  • Summary:
    Researchers are trying to make aerial robots perform physical work. Current methodologies show promising results, but they fail in real scenarios, mostly because of inaccurate visual perception. Inspired by nature, this project investigates how to also provide aerial robots with the sense of touch and how to use it for improving their manipulation capabilities.

9.4 Regional initiatives

9.4.1 CominLabs MAMBO

Participants: Lev Smolentsev, Alexandre Krupa, François Chaumette, Paolo Robuffo Giordano, Fabien Spindler.

  • Title:
    Manipulation of Soft Bodies with Multiple Drones
  • Duration:
    October 2020 - September 2024
  • Coordinator:
    LS2N (Nantes)
  • Inria contact:
    Alexandre Krupa
  • Summary:
    This project is funded by the Labex CominLabs. It is led by the ARMEN team at LS2N (Nantes) and involves the collaboration of the Rainbow project-team. Its objective is to propose a scientific framework allowing the manipulation of an object by the combined action of two drones equipped with onboard cameras and force sensors. The envisaged solution is to manipulate a deformable body (a slender beam) attached between the two drones in order to grasp an object on the floor and move it to another location. In the scope of this project, the Rainbow group is involved in the elaboration of new approaches for controlling the two drones by visual servoing, using data provided by onboard RGB-D cameras (see Section 7.5.1).

9.4.2 CominLabs EM-ART

Participants: Marco Ferro, Claudio Pacchierotti, Paolo Robuffo Giordano.

  • Title:
    Electromagnetic artificial human: paradigm shift in dosimetry for 5G and beyond
  • Duration:
    June 2022 - December 2024
  • Coordinator:
    IETR (Rennes)
  • Inria contact:
    Claudio Pacchierotti
  • Summary:
    The growth of mobile data traffic driven by wireless user terminals and data-intensive applications has led to a surge in demand for ultra-low latency and ultra-high data rates. This has prompted the wireless industry to explore underused spectrum above 6 GHz for the development of 5G/6G mobile communications. However, the shift to higher microwave frequencies poses challenges for exposure assessment due to limitations in conventional dosimetry techniques. EM-ART aims to address this by proposing a novel approach for accurate, realistic, and high-sensitivity dosimetry measurements at frequencies relevant to 5G/6G. The goal is to address public concerns about environmental safety and facilitate the certification of emerging millimeter-wave technologies in 5G devices.

9.4.3 Ambrougerien

Participants: Marie Babel, François Pasteau, Vincent Drevelle, Théo Le Terrier.

  • Title:
    Autonomie, MoBilité et fauteuil ROUlant robotisé : GEolocalisation indoor et Recharge IntelligENte
  • Duration:
    December 2020 - December 2024
  • Coordinator:
    DK innovation (Plérin)
  • Partners:
    INSA Rennes, Hoppen (Rennes), Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
  • Inria contact:
    Marie Babel
  • Summary:
    This project started in December 2020 and is supported by the Brittany region and Rennes Métropole. AMBROUGERIEN aims at supporting the independence of people using electric wheelchairs. A dedicated interface allows the wheelchair to move autonomously, both to secure transfers and to return to an intelligent induction recharging base. Information on the internal state of the wheelchairs facilitates fleet management.

9.4.4 Academic Chair IH2A

  • Title:
    Academic Chair on Innovations, Handicap, Autonomy and Accessibility (IH2A)
  • Duration:
    September 2020 -
  • Coordinator:
    Marie Babel
  • Partners:
    LGCGM Rennes, IETR Rennes, M2S, CHU Pontchaillou Rennes, Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
  • Inria contact:
    Marie Babel
  • Summary:
    This research chair (Innovations, Handicap, Autonomy and Accessibility - IH2A) is a continuation of the research work developed by the INSA Rennes/Rainbow team on assistive robotics. The idea is to propose the most suitable technological solutions to compensate for sensory-motor handicaps limiting the mobility and autonomy of people in daily life tasks and leisure activities. The Chair thus aims at perpetuating these activities, both from a societal point of view and from a scientific and clinical point of view, and is intended to be an effective and innovative tool for the deployment of large-scale research in this area. The creation of a new type of multidisciplinary and innovative collaborative experimentation site will allow the clinical and scientific validation of the technical assistance offered, while ensuring the accessibility of the solutions deployed.

9.4.5 Hubert

Participants: Marie Babel, François Pasteau, Vincent Drevelle, Fabien Grzeskowiak, Louise Devigne.

  • Title:
    Hubert
  • Duration:
    January 2023 - December 2025
  • Coordinator:
    BA HEALTHCARE (Pacé)
  • Partners:
    INSA Rennes, CIMTECH (Pacé), Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
  • Inria contact:
    Marie Babel
  • Summary:
    The aim of this project is to create a range of robotized walkers for geriatric use in health and social care establishments, giving residents or patients greater independence. Inspired by existing prototypes at BA HEALTHCARE, this new range will target two types of users: those who retain some, albeit reduced, mobility, and those with little or no mobility. The aim is to offer users an aid both to mobility and to the transition from sitting to standing.

10 Dissemination

Participants: Paolo Robuffo Giordano, François Chaumette, Alexandre Krupa, Claudio Pacchierotti, Marco Tognon, Marie Babel, Vincent Drevelle, Maud Marchal, Eric Marchand.

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

Member of the organizing committees
  • P. Robuffo Giordano and M. Babel were Associate Editors for IEEE ICRA 2024
  • M. Tognon was Area Chair for RSS 2023 and RSS 2024
  • C. Pacchierotti was guest editor of the IEEE Transactions on Haptics Special Issue on “Haptics in the metaverse: Haptic feedback for Virtual, Augmented, Mixed, and eXtended Realities” and Work-In-Progress Co-Chair of IEEE World Haptics 2023.

10.1.2 Scientific events: selection

Chair of conference program committees
  • C. Pacchierotti is Program Co-Chair for the 2024 Eurohaptics Conference.
Member of the conference program committees
  • M. Marchal was Program Chair of the Journal Track of IEEE ISMAR 2023. She was a program committee member of IEEE VR 2023 and of WIGRAPH's Rising Stars 2023.
Reviewer
  • F. Chaumette: ISER 2023 (1), Humanoid 2023 (1), ICRA 2024 (1)
  • P. Robuffo Giordano: ERF 2024 (4), IEEE MSC 2024 (1)
  • C. Pacchierotti: ICRA 2024 (2), HAPTICS (1), ARSO (1), IROS (1)

10.1.3 Journal

Member of the editorial boards
  • F. Chaumette was Editor of the IEEE Transactions on Robotics until June 2023.
  • E. Marchand was Senior Editor of the IEEE Robotics and Automation Letters until April 2023.
  • P. Robuffo Giordano is Editor of the IEEE Transactions on Robotics
  • M. Tognon is Associate Editor of the IEEE Transactions on Robotics
  • M. Marchal is Associate Editor of IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Haptics, ACM Transactions on Applied Perception, Computers & Graphics, IEEE Computers Graphics and Applications.
  • M. Babel is Associate Editor of Springer Social Robotics and IEEE Robotics and Automation Letters
  • C. Pacchierotti is Associate Editor of the IEEE Robotics and Automation Letters and of the International Journal of Robotics Research.
Reviewer - reviewing activities
  • F. Chaumette: Autonomous Robots (1)
  • A. Krupa: IEEE RA-L (1), IEEE T-MRB (1)
  • E. Marchand: IEEE RA-L (1)
  • M. Marchal: ACM TAP (1), ACM Siggraph (2), IEEE TVCG (1)
  • M. Babel: Social Robotics (1)
  • M. Tognon: IEEE RA-L (3), IEEE TRO (5), IJRR (2)
  • C. Pacchierotti: IEEE Trans. Haptics (10), IEEE TVCG (1), Intl. J. Human-Computer Interaction (1)

10.1.4 Invited talks

  • C. Pacchierotti: “Cutaneous haptic feedback for telemanipulation and Virtual Reality.” I-RIM 3D: Conferenza Italiana di Robotica e Macchine Intelligenti, Rome, Italy, 2023.
  • C. Pacchierotti: “Technologies for touching virtual content.” High school ITIS Delpozzo in Cuneo (Italy), online, 2023.
  • C. Pacchierotti: “Interaction haptique homme-machine avec la réalité physique et virtuelle.” Conseil Scientifique de l’Institut CNRS-INS2I, Paris, France, 2023.
  • F. Chaumette: "Visual servoing", 8th Int. Workshop on Advanced Cooperative Systems, Zagreb, Croatia, September 2023
  • P. Robuffo Giordano: “An Overview of Activities at the Rainbow Team of IRISA/Inria Rennes”, LAAS-CNRS, France, November 2023
  • P. Robuffo Giordano: “Shared Control for Tele-Navigation of Multi-Robot Systems”, IROS 2023 Workshop on Human Multi-Robot Interaction, October 2023
  • P. Robuffo Giordano: “Shared Control for Tele-manipulation and Tele-navigation”, AIRS in the AIR Academic Series, Shenzhen Institute of Artificial Intelligence and Robotics, China, May 2023
  • P. Robuffo Giordano: “Human-Assisted Robotics”, International Conference on IT-Bio Convergence, South Korea, February 2023
  • M. Marchal: "3D Interaction with Haptic interfaces", CominLabs Days, Rennes, France, September 2023.
  • M. Marchal: "Playing with Tangibles in Virtual Reality", Journées Nationales de Recherche en Robotique, Moliets, France, October 2023.
  • L. Devigne: "Robotique d’assistance et handicap : la mobilité pour tous", Journées Scientifiques Inria, Bordeaux, France, August 2023.
  • M. Babel: "DORNELL, quand un projet de recherche s'intéresse au handicap", Conférence Agir pour la mise en accessibilité numérique, tous concernés !, Rennes, October 2023.
  • M. Tognon: "Aerial Physical Interaction? YES WE CAN!! Past and recent results toward new directions and opportunities", Technical University of Denmark, Copenhagen, October 2023.
  • M. Tognon: "Aerial Physical Interaction from contactless ... to contact-based", Rutgers University, NJ, April 2023.

10.1.5 Leadership within the scientific community

  • C. Pacchierotti is Senior Chair of the IEEE Technical Committee on Haptics, Co-Chair of the IEEE Technical Committee on Telerobotics, and Secretary of the Eurohaptics Society.
  • F. Chaumette serves as a member of the Scientific Council of the Mathematics and Computer Science Department of INRAE. He is also a founding member of the Scientific Council of the GdR Robotique.
  • M. Babel serves as the Vice-Director of the GdR Robotique.
  • M. Marchal is co-head of the GdR IG-RV (Informatique Géométrique et Graphique, Réalité Virtuelle et Visualisation), a member of the Steering Committee of ISMAR (since November), and a member of the Eurographics Executive Committee.

10.1.6 Scientific expertise

  • P. Robuffo Giordano is an elected member of Section 07 of the Comité National de la Recherche Scientifique. He also served as an expert/reviewer for the euRobotics “Georges Giralt” award for the best European PhD thesis in robotics, and for evaluating research projects for the ANR, the SNSF (Switzerland), and the NWO (Netherlands). He was a reviewer for the H2020 projects ACROBA, PILOTING, Aerial-Core and HyFliers.
  • F. Chaumette served as a jury member evaluating the ANR "Défi Transfert Robotique" proposals.
  • F. Spindler was a member of the HCERES expert committee evaluating the activities of the CIAD research unit.
  • M. Marchal was the Chair of the IEEE VGTC Best VR Dissertation Award committee. She was also a member of the CORE committee and of the SIF Gilles Kahn PhD award committee.
  • M. Babel is the vice-president of the ANR Comité d'évaluation CE33 (Interaction, robotique). She has served since 2017 as an expert for the International Mission of the French Research Ministry (MEIRIES). She also serves as a member of the Selection and Validation Committee of the Pôle Images et Réseaux.

10.1.7 Research administration

  • C. Pacchierotti is a permanent member of the Comité de centre (Inria Rennes).
  • F. Chaumette is a member of the Inria COERLE committee (in charge of the ethical aspects of all Inria research)
  • E. Marchand is the head of the Matisse Doctoral School (ED 601).

10.2 Teaching - Supervision - Juries

10.2.1 Teaching

François Chaumette:

  • Doctoral school I2S: “Visual Servoing”, 3 hours, Montpellier
  • Master SIVOS: “Visual Servoing”, 9 hours, M2, Université de Rennes 1
  • Master ENS: “Visual servoing”, 6 hours, M1, Ecole Nationale Supérieure de Rennes.
  • Master ESIR3: “Visual servoing”, 9 hours, M2, Ecole supérieure d'ingénieurs de Rennes.

Alexandre Krupa:

  • Master ESIR3: “Ultrasound visual servoing”, 9 hours, M2, Esir Rennes
  • Master INSA1: “Computer programming”, 42 hours, L1, INSA Rennes

Eric Marchand:

  • Master Esir2: “BINP”, 9 hours, M1, Esir Rennes
  • Master Esir2: “Computer vision: geometry”, 24 hours, M1, Esir Rennes
  • Master Esir3: “Robotics Vision 1”, 12 hours, M2, Esir Rennes
  • Master Esir3: “Robotics Vision 2”, 7 hours, M2, Esir Rennes
  • Master SIVOS: “Geometric Computer Vision”, 8 hours, M2, Université de Rennes 1
  • Master ENS: “Computer vision”, 6 hours, M2, ENS Rennes
  • Master MIA: “Augmented reality”, 4 hours, M2, Université de Rennes 1

Marie Babel:

  • Master INSA2: “Robotics”, 26 hours, M1, INSA Rennes
  • Master INSA1: “Concepts de la logique à la programmation”, 20 hours, L3, INSA Rennes
  • Master INSA1: “Langage C”, 12 hours, L3, INSA Rennes
  • Master INSA2: “Computer science project”, 30 hours, M1, INSA Rennes
  • Master INSA1: “Practical studies”, 16 hours, L3, INSA Rennes
  • Master INSA2: “Image analysis”, 26 hours, M1, INSA Rennes
  • Master INSA1: “Remedial math courses”, 50 hours, L3, INSA Rennes
  • Master INSA 1: “Probability”, 14 hours, L3, INSA Rennes
  • Master INSA: tutoring and support for students with disabilities, 30 hours, INSA Rennes
  • Master SIVOS: “Mechatronics for healthcare”, 12 hours, M2, ENS Rennes

Claudio Pacchierotti:

  • Master “Artificial Intelligence & Advanced Visual Computing”: “INF644 – Virtual/Augmented Reality & 3D Interactions”, 6 hours, M2, École Polytechnique
  • Master SIF: “Virtual Reality and Multi-Sensory Interaction”, 4 hours, M2, IRISA.
  • Doctoral School: “Computer and Control Engineering”, 5 hours, PhD, Politecnico di Torino.

Maud Marchal:

  • Master INSA1: “Computer Graphics”, 20 hours, M1, INSA Rennes.
  • Master INSA1: “Complexity and algorithms”, 26 hours, L3, INSA Rennes.
  • Master INSA2: “Human-Computer Interaction”, 15 hours, M2, INSA Rennes.
  • Master INSA1: “Software design for medical applications”, 6 hours, M1, INSA Rennes.
  • Master SIF: “Computer Graphics”, 20 hours, M2, Univ. Rennes.

Vincent Drevelle:

  • Master 2 ILA/CCNA: “Transverse project”, 28 hours, M2, Université de Rennes 1
  • Master 1 Info: “Artificial intelligence”, 20 hours, M1, Université de Rennes 1
  • Licence Info: “Computer systems architecture”, 52 hours, L1, Université de Rennes 1
  • Portail Info-Elec: “Discovering programming and electronics”, 11 hours, L1, Université de Rennes 1
  • Licence 3 Miage: “Computer programming”, 78 hours, L3, Université de Rennes 1
  • Master 2 EEEA-SE: “Instrumentation, localization, GPS”, 4 hours, M2, Université de Rennes 1
  • Master 2 EEEA-SE: “Multisensor data fusion”, 20 hours, M2, Université de Rennes 1
  • Master 2 IL/CCN: “Mobile robotics”, 32 hours, M2, Université de Rennes 1

Fabien Spindler:

  • Master SIVOS: “Geometric Computer Vision and Visual Servoing”, 12 hours, M2, Université de Rennes 1
  • Master ENS: “Geometric Computer Vision and Visual servoing”, 12 hours, M1, Ecole Nationale Supérieure de Rennes

Paolo Robuffo Giordano:

  • Master 2 SPIA HCR: “Energy-Based Modeling and Control for Robotics”, 12 hours, M2, Ecole Nationale Supérieure de Rennes

Marco Tognon:

  • Master 2 SPIA HCR: “Energy-Based Modeling and Control for Robotics”, 6 hours, M2, Ecole Nationale Supérieure de Rennes

10.2.2 Supervision

  • Ph.D. completed: Maxime Robic, "Visual Servoing of the Orientation of an Earth Observation Satellite", defended in December 2023, supervised by François Chaumette and Eric Marchand
  • Ph.D. completed: Fouad Makiyeh, "Vision-based Shape Servoing of Soft Objects using a Mass-Spring Model", defended in December 2023, supervised by Alexandre Krupa, François Chaumette and Maud Marchal
  • Ph.D. completed: Pascal Brault, "Robust trajectory planning algorithms for robots with parametric uncertainties", defended in April 2023, supervised by Paolo Robuffo Giordano
  • Ph.D. completed: Lisheng Kuang, "Ungrounded haptic interfaces for guidance and interaction rendering", defended in June 2023, supervised by Claudio Pacchierotti and Paolo Robuffo Giordano
  • Ph.D. in progress: Lev Smolentsev, “Shape visual servoing of a tether cable”, started in November 2020, supervised by Alexandre Krupa and François Chaumette
  • Ph.D. in progress: John Thomas, "Assembly Task in Congested Area using Sensor-based Control", started in December 2020, supervised by François Chaumette
  • Ph.D. in progress: Thibault Noël, "Exploration of indoor environments”, started in October 2021, supervised by Eric Marchand and François Chaumette
  • Ph.D. in progress: Thomas Rousseau, "Sensor-based control for cable-driven parallel robots", started in November 2021, supervised by Stéphane Caro (LS2N), François Chaumette, and Nicolo Pedemonte (IRT Jules Verne)
  • Ph.D. in progress: Mandela Ouafo Fonkoua, “Visual perception and visual servoing for dexterous robotic manipulation of compliant objects”, started in October 2022, supervised by Alexandre Krupa and François Chaumette
  • Ph.D. in progress: Erwan Normand, “Augmenting the interaction with everyday objects with wearable haptics and Augmented Reality,” started in October 2021, supervised by Eric Marchand, Maud Marchal, and Claudio Pacchierotti.
  • Ph.D. in progress: Ali Srour, “Robust and Control-Aware Motion Planning”, started in October 2021 supervised by Paolo Robuffo Giordano, M. Cognetti (LAAS-CNRS) and A. Franchi (Univ Twente, Netherlands)
  • Ph.D. in progress: Nicola De Carli, “Reactive Trajectory Planning Methods for Formation Control and Localization of Multi-Robot System”, started in January 2021, supervised by P. Salaris (Univ. Pisa, Italy) and Paolo Robuffo Giordano
  • Ph.D. in progress: Antonio Marino, “Machine learning techniques for the control of multi-robot systems”, started in September 2022, supervised by Paolo Robuffo Giordano and Claudio Pacchierotti
  • Ph.D. in progress: Maxime Bernard, “Shared control for multi-robot systems”, started in October 2021, supervised by Paolo Robuffo Giordano and Claudio Pacchierotti
  • Ph.D. in progress: Lorenzo Balandi, “Full-Body Design and Control of an Aerial Manipulator for Advance Physical Interaction”, started in October 2023, supervised by Marco Tognon and Paolo Robuffo Giordano
  • Ph.D. in progress: Mattia Piras, “Sensors-based Control of an Aerial Manipulator for Complex Manipulation of Articulated Objects”, started in October 2023, supervised by Marco Tognon and Paolo Robuffo Giordano
  • Ph.D. in progress: Yash Vyas, “Design and Control of Dynamically Balanced Aerial Manipulators for Physical Interaction”, at University of Padova, started in September 2022, supervised by Marco Tognon and Silvio Cocuzza
  • Ph.D. in progress: Lendy Mulot, “Design of coupling schemes for vibrotactile rendering in virtual reality”, started in October 2022, supervised by Maud Marchal and Claudio Pacchierotti.
  • Ph.D. in progress: Maxime Manzano, “Reach-to-grasp in activities of daily living: shared control of a multisensory assistive device to compensate for upper extremity neuromuscular disorders.”, started in September 2022, supervised by Marie Babel and Sylvain Guégan
  • Ph.D. in progress: Jose Eduardo Aguilar Segovia, “Design of haptic feedback using innovative shapeable materials”, started in October 2022, supervised by Marie Babel, Sylvain Lefebvre (MFX team, Nancy) and Sylvain Guégan
  • Ph.D. in progress: Pierre-Antoine Cabaret, “Design of navigation techniques for a multisensory handle for mobility assistance”, started in October 2021, supervised by Maud Marchal, Marie Babel, and Claudio Pacchierotti
  • Ph.D. in progress: Emilie Leblong, “Taking into account social interactions in a virtual reality power wheelchair driving simulator: promoting learning for inclusive mobility”, started in October 2020, supervised by Marie Babel and Anne-Hélène Olivier (Mimetic team)
  • Ph.D. in progress: Inès Lacôte, “Study of haptic and multisensory illusions to design a navigation assistance handle”, started in January 2021, supervised by Maud Marchal, David Gueorguiev, and Claudio Pacchierotti.
  • Ph.D. in progress: Théo Le Terrier, “Contrôle basé capteurs d’un fauteuil roulant électrique à l’aide de méthodes ensemblistes par intervalles”, started in October 2023, supervised by Marie Babel and Vincent Drevelle
  • Ph.D. in progress: Léon Raphalen, “Human-centered shared control of multi-robot systems at the microscale”, started in September 2023, supervised by Claudio Pacchierotti.

10.2.3 Juries

HdR and PhD juries
  • E. Marchand: Yuming Du (PhD, jury member, ENPC), Yann Labbé (PhD, president, ENS Paris)
  • F. Chaumette: Pascal Brault (PhD, president, IRISA), Jorge Garcia Fontan (PhD, LIP6), Miguel Arpa Perozo (PhD, president, iCube), Alessandro Colotti (PhD, president, LS2N)
  • A. Krupa: Qian Li (PhD, president), IRISA, Université de Rennes; Célia Saghour (PhD, reviewer), LIRMM, Université de Montpellier.
  • P. Robuffo Giordano: Frederik Baberg (PhD, reviewer), KTH, Sweden; Thibault Poignonec (PhD, reviewer), iCube, Université de Strasbourg; Gianluca Corsini (PhD, reviewer), LAAS-CNRS; Tariq Zioud (PhD, member), ENSIL-ENSCI, University of Limoges; Maxime Robic (PhD, president), IRISA
  • M. Marchal: Maxime Calka (PhD, reviewer, Univ. Grenoble Alpes, France), Camille Butin (PhD, committee member, Univ. Nantes, France), Jeremie Dequidt (HdR, reviewer, Univ. Lille, France), Yann Glémarec (PhD, reviewer and president, ENIB, France), Maud Lastic (PhD, president, Ecole Polytechnique, France), Lucas Mourot (PhD, president, Univ. Rennes, France), Azim Niyazov (PhD, president, Univ. Toulouse, France), Alban Odot (PhD, reviewer, Univ. Strasbourg, France), Yuxuan Yang (PhD, committee member, Univ. Orebro, Sweden)
  • M. Babel: Jhedmar Callupe Luna (PhD, reviewer and president, Université Paris Saclay), Tristan Venot (PhD, reviewer, Sorbonne Université), Clément Trotobas (PhD, reviewer, Université de Montpellier), Lisheng Kuang (PhD, president, Université de Rennes), Pauline Morin (PhD, president, ENS Rennes), Charles-Olivier Artizzu (PhD, president, Université Côte d’Azur)
  • C. Pacchierotti: Davide Calandra (reviewer, Politecnico di Torino, Italy)

10.3 Popularization

10.3.1 Internal or external Inria responsibilities

  • A. Krupa has been the president of the CUMIR (“Commission des Utilisateurs des Moyens Informatiques pour la Recherche”) of the Centre Inria de l'Université de Rennes since February 2023.

10.3.2 Articles and contents

  • P. Robuffo Giordano “Les robots face à l'incertitude”, Focus Science, CNRS le journal
  • M. Marchal, C. Pacchierotti “Odorat, Toucher, la Réalité Virtuelle s'en empare”, Science & Vie, April 2023
  • C. Pacchierotti “Réalité virtuelle: des stimulations tactiles influencent nos interactions sociales”, Le Monde, April 2023.

10.3.3 Education

  • M. Babel, F. Pasteau, L. Devigne, T. Voisin, S. Thomas, F. Grzeskowiak, M. Manzano, S. Guégan, P.A. Cabaret, J.E. Aguilar Segovia: during the Fête de la Science, within the Village des Sciences at INSA Rennes, they hosted a booth for school students featuring demonstrations of mobility assistance devices for people with disabilities, in October 2023

11 Scientific production

11.1 Major publications

  • 1 (article) Chiara Gabellieri, Marco Tognon, Dario Sanalitro and Antonio Franchi. Equilibria, Stability, and Sensitivity for the Aerial Suspended Beam Robotic System Subject to Parameter Uncertainty. IEEE Transactions on Robotics, 39(5), October 2023, pp. 3977-3993. HAL, DOI.
  • 2 (in proceedings) Fouad Makiyeh, François Chaumette, Maud Marchal and Alexandre Krupa. Shape Servoing of a Soft Object Using Fourier Series and a Physics-based Model. IROS 2023 - IEEE/RSJ International Conference on Intelligent Robots and Systems, Detroit (MI), United States, IEEE, 2023, pp. 1-8. HAL.
  • 3 (article) Rahaf Rahal, Amir M. Ghalamzan, Firas Abi-Farraj, Claudio Pacchierotti and Paolo Robuffo Giordano. Haptic-guided Grasping to Minimise Torque Effort during Robotic Telemanipulation. Autonomous Robots, 47, February 2023, pp. 405-423. HAL, DOI.

11.2 Publications of the year

International journals

International peer-reviewed conferences

  • 33 inproceedingsM.Mike Allenspach, T.Till Kötter, R.Rik Bähnemann, M.Marco Tognon and R.Roland Siegwart. Design and Evaluation of a Mixed Reality-based Human-Robot Interface for Teleoperation of Omnidirectional Aerial Vehicles.ICUAS 2023 - International Conference on Unmanned Aircraft SystemsWarsaw, PolandIEEESeptember 2023, 1168-1174HALDOIback to text
  • 34 inproceedingsM.Maxime Bernard, C.Claudio Pacchierotti and P.Paolo Robuffo Giordano. Decentralized Connectivity Maintenance for Quadrotor UAVs with Field of View Constraints.IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS'23IROS 2023 - IEEE/RSJ International Conference on Intelligent Robots and SystemsDetroit, United StatesIEEEOctober 2023, 11111-11118HALDOIback to text
  • 35 inproceedingsG.Gianni Bianchini, D.Davide Barcelli, D.Domenico Prattichizzo and C.Claudio Pacchierotti. Optimized time-domain control of passive haptic teleoperation systems for multi-DoF interaction.2023 - IEEE World Haptics ConferenceDelft, NetherlandsIEEEJuly 2023, 1-7HALback to text
  • 36 inproceedingsN.Nicola de Carli, P.Paolo Salaris and P.Paolo Robuffo Giordano. Multi-Robot Active Sensing for Bearing Formations.IEEE International Symposium on Multi-Robot and Multi-Agent Systems (IEEE MRS 2023)MSR 2023 - IEEE International Symposium on Multi-Robot & Multi-Agent SystemsBoston (MA), United StatesIEEE2023, 1-7HALback to text
  • 37 inproceedingsE.Eugenio Cuniato, I.Ismail Geles, W.Weixuan Zhang, O.Olov Andersson, M.Marco Tognon and R.Roland Siegwart. Learning to Open Doors with an Aerial Manipulator.IROS 2023 - IEEE/RSJ International Conference on Intelligent Robots and SystemsMichigan, United StatesIEEEJuly 2023, 1-7HALback to text
  • 38 inproceedingsS.Samuel Felton, É.Élisa Fromont and E.Eric Marchand. Deep metric learning for visual servoing: when pose and image meet in latent space.ICRA 2023 - IEEE International Conference on Robotics and AutomationLondon, United KingdomIEEEMay 2023, 741-747HALDOIback to textback to text
  • 39 inproceedingsC.Chiara Gabellieri, M.Marco Tognon, D.Dario Sanalitro and A.Antonio Franchi. Force-based Pose Regulation of a Cable-Suspended Load Using UAVs with Force Bias.IROS 2023 - IEEE/RSJ International Conference on Intelligent Robots and SystemsMichigan, United StatesIEEEOctober 2023, 6920-6926HALDOIback to text
  • 40 inproceedingsP.Pedro Gomez Hernandez, L.Lisheng Kuang, M.Monica Malvezzi, D.Domenico Prattichizzo, P.Paolo Robuffo Giordano, C.Claudio Pacchierotti and F.Francesco Chinello. Encounter-Type Haptic Device Enabling Edges, Vertexes and Plane Palm Interaction: the Haptic Origami Device.IEEE World Haptics (Work-in-Progress paper)Delft, NetherlandsJuly 2023HALback to text
  • 41 inproceedingsM.Mathieu Gonzalez, E.Eric Marchand, A.Amine Kacete and J.Jérome Royan. TwistSLAM++: Fusing multiple modalities for accurate dynamic semantic SLAM.IROS 2023 - IEEE/RSJ International Conference on Intelligent Robots and SystemsDetroit, MI, United StatesIEEEOctober 2023, 1-7HALback to text
  • 42 inproceedingsF.Fabien Grzeskowiak, R.Ronan Le Breton, L.Louise Devigne, F.François Pasteau, M.Marie Babel and S.Sylvain Guégan. A generic power wheelchair lumped model in the sagittal plane: towards realistic self-motion perception in a virtual reality simulator.ICRA 2023 - IEEE International Conference on Robotics and AutomationLondres, United KingdomIEEE2023, 1-6HALback to textback to text
  • 43 inproceedingsB.Ben Hallworth, M.Mike Allenspach, R.Roland Siegwart and M.Marco Tognon. State-Aware Path-Following with Humans Through Force-based Communication via Tethered Physical Aerial Human-Robot Interaction.2023 International Conference on Unmanned Aircraft Systems (ICUAS)ICUAS 2023 - International Conference on Unmanned Aircraft SystemsWarsaw, PolandIEEEJune 2023, 183-190HALDOIback to text
  • 44 inproceedingsJ.Jeanne Hecquard, J.Justine Saint-Aubert, F.Ferran Argelaguet, C.Claudio Pacchierotti, A.Anatole Lécuyer and M.-M. J.Marc J.-M. Macé. Fostering empathy in social Virtual Reality through physiologically based affective haptic feedback.WHC 2023 - IEEE World Haptics ConferenceDelft, NetherlandsJuly 2023HALback to text
  • 45 inproceedingsL.Lisheng Kuang, F.Francesco Chinello, P.Paolo Robuffo Giordano, M.Maud Marchal and C.Claudio Pacchierotti. Haptic Mushroom: a 3-DoF shape-changing encounter-type haptic device with interchangeable end-effectors.WHC 2023 - IEEE World Haptics ConferenceDelft, NetherlandsIEEEJuly 2023HALDOIback to textback to text
  • 46 inproceedingsE.Emilie Leblong, B.Bastien Fraudet, L.Louise Devigne, F.François Pasteau, S.Sylvain Guegan and M.Marie Babel. Robotics at the Service of Wheelchair Mobility for People with Disabilities: Story of a Clinical-Scientific Partnership.AAATE 2023 - 17h International Conference of the Association for the Advancement of Assistive Technology in EuropeStudies in Health Technology and InformaticsParis, FranceIOS PressAugust 2023HALDOIback to text
  • 47 inproceedingsF.Fouad Makiyeh, F.François Chaumette, M.Maud Marchal and A.Alexandre Krupa. Shape Servoing of a Soft Object Using Fourier Series and a Physics-based Model.IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS'23IROS 2023 - IEEE/RSJ International Conference on Intelligent Robots and SystemsDetroit (MI), United StatesIEEE2023, 1-8HALback to textback to textback to text
  • 48 inproceedingsM.Maxime Manzano, S.Sylvain Guégan, R.Ronan Le Breton, L.Louise Devigne and M.Marie Babel. Model-based upper-limb gravity compensation strategies for active dynamic arm supports.ICORR 2023 - IEEE International Conference on Rehabilitation RoboticsSingapore, SingaporeIEEE2023, 1-6HALback to text
  • 49 inproceedings L.Lendy Mulot, T.Thomas Howard, C.Claudio Pacchierotti and M.Maud Marchal. Can We Increase the Perceived Intensity of Mid-Air Haptic Shapes Rendered With Dynamic Tactile Pointers? IEEE World Haptics conference (Work-in-Progress paper) Delft, Netherlands July 2023 HAL back to text back to text
  • 50 inproceedingsA.Antonio Paolillo, P.Paolo Robuffo Giordano and M.Matteo Saveriano. Dynamical System-based Imitation Learning for Visual Servoing using the Large Projection Formulation.ICRA 2023 - IEEE International Conference on Robotics and AutomationLondon, United KingdomIEEEMay 2023, 1-7HALback to text
  • 51 inproceedingsA.Andrea Pupa, P.Paolo Robuffo Giordano and C.Cristian Secchi. Optimal Energy Tank Initialization for Minimum Sensitivity to Model Uncertainties.IROS 2023 - IEEE/RSJ International Conference on Intelligent Robots and SystemsDetroit, MI, United StatesIEEE2023, 1-8HALback to text
  • 52 inproceedingsP.Pierre Raimbaud, A.Alberto Jovane, K.Katja Zibrek, C.Claudio Pacchierotti, M.Marc Christie, L.Ludovic Hoyet, J.Julien Pettré and A.-H.Anne-Hélène Olivier. The Stare-in-the-Crowd Effect When Navigating a Crowd in Virtual Reality.SAP 2023 - ACM Symposium on Applied PerceptionLos Angeles, United StatesACM2023, 1-10HALback to text
  • 53 inproceedingsT.Thomas Rousseau, N.Nicolò Pedemonte, S.Stéphane Caro and F.François Chaumette. Constant Distance and Orientation Following of an Unknown Surface with a Cable-Driven Parallel Robot.ICRA 2023 - IEEE International Conference on Robotics and AutomationLondon, United KingdomMarch 2023, 1-7HALback to text
  • 54 inproceedingsT.Thomas Rousseau, N.Nicolò Pedemonte, S.Stéphane Caro and F.François Chaumette. Stability Analysis of Profile Following by a CDPR using Distance and Vision Sensors.Proceedings of the the Sixth International Conference on Cable-Driven Parallel Robots (CableCon 2023)CableCon 2023 - The Sixth International Conference on Cable-Driven Parallel RobotsNantes, FranceJune 2023, 1-12HALback to text
  • 55 Justine Saint-Aubert, Ferran Argelaguet, Marc J.-M. Macé, Claudio Pacchierotti, Amir Amedi and Anatole Lécuyer. Persuasive Vibrations: Effects of Speech-Based Vibrations on Persuasion, Leadership, and Co-Presence During Verbal Communication in VR. VR 2023 - IEEE Conference on Virtual Reality and 3D User Interfaces, Shanghai, China, IEEE, March 2023, pp. 1-9. HAL
  • 56 Lev Smolentsev, Alexandre Krupa and François Chaumette. Shape visual servoing of a tether cable from parabolic features. ICRA 2023 - IEEE International Conference on Robotics and Automation, London, United Kingdom, IEEE, May 2023, pp. 1-7. HAL
  • 57 Ali Srour, Antonio Franchi and Paolo Robuffo Giordano. Controller and Trajectory Optimization for a Quadrotor UAV with Parametric Uncertainty. IROS 2023 - IEEE/RSJ International Conference on Intelligent Robots and Systems, Detroit (MI), United States, IEEE, 2023, pp. 1-7. HAL
  • 58 John Thomas and François Chaumette. Use of Screw Theory in Proximity-based Control. ICRA 2023 workshop "Geometric Representations: The Roles of Modern Screw Theory, Lie algebra, and Geometric Algebra in Robotics", London, United Kingdom, May 2023, pp. 1-5. HAL
  • 59 Simon Wasiela, Paolo Robuffo Giordano, Juan Cortés and Thierry Simeon. A Sensitivity-Aware Motion Planner (SAMP) to Generate Intrinsically-Robust Trajectories. ICRA 2023 - IEEE International Conference on Robotics and Automation, London, United Kingdom, May 2023. HAL
  • 60 Feng Zhou, Dominic Price, Ayse Kucukyilmaz and Claudio Pacchierotti. Somabotics Toolkit for Rapid Prototyping Human-Robot Interaction Experiences using Wearable Haptics. WHC 2023 - IEEE World Haptics Conference (Work-in-Progress paper), Delft, Netherlands, July 2023. HAL

Scientific book chapters

  • 61 Giacinto Barresi, Andrea Gaggioli, Federico Sternini, Alice Ravizza, Claudio Pacchierotti and Lorenzo de Michieli. Digital Twins and Healthcare: Quick Overview and Human-Centric Perspectives. In: mHealth and Human-Centered Design Towards Enhanced Health, Care, and Well-being, Studies in Big Data, vol. 120, Springer Nature, July 2023, pp. 57-78. HAL DOI
  • 62 Antonio Loria, Emmanuel Nuño, Elena Panteley and Esteban Restrepo. Physics-based output-feedback consensus-formation control of networked autonomous vehicles. In: Hybrid and Networked Dynamical Systems, Lecture Notes in Control and Information Sciences, Springer Verlag, 2024. HAL

Doctoral dissertations and habilitation theses

  • 63 Pascal Brault. Robust trajectory planning algorithms for robots with parametric uncertainties. PhD thesis, Université de Rennes, April 2023. HAL
  • 64 Lisheng Kuang. Ungrounded haptic interfaces for guidance and interaction rendering. PhD thesis, Université de Rennes, June 2023. HAL
  • 65 Fouad Makiyeh. Vision-based shape servoing of soft objects using a mass-spring model. PhD thesis, Université de Rennes, December 2023. HAL
  • 66 Maxime Robic. Visual Servoing of the Orientation of an Earth Observation Satellite. PhD thesis, Université de Rennes, December 2023. HAL

11.3 Cited publications

  • 67 Paolo Salaris, Marco Cognetti, Riccardo Spica and Paolo Robuffo Giordano. Online Optimal Perception-Aware Trajectory Generation. IEEE Transactions on Robotics, 2019, pp. 1-16. HAL DOI